Latest Posts (20 found)

Premium: The Hater's Guide To The SaaSpocalypse

Soundtrack: The Dillinger Escape Plan — Black Bubblegum

To understand the AI bubble, you need to understand the context in which it sits, and that larger context is the end of the hyper-growth era in software that I call the Rot-Com Bubble. Generative AI, at first, appeared to be the panacea — a way to create new products for software companies to sell (by connecting their software to model APIs), a way to sell the infrastructure to run it, and a way to create a new crop of startups that could be bought or sold or taken public.

Venture capital hit a wall in 2018 — vintages after that year are, for the most part, stuck at a TVPI (total value to paid-in, basically the money you make for each dollar you invested) of 0.8x to 1.2x, meaning that you're making somewhere between 80 cents and $1.20 for every dollar. Before 2018, Software As A Service (SaaS) companies had had an incredible run of growth, and it appeared that basically any industry could have a massive hypergrowth SaaS company, at least in theory. As a result, venture capital and private equity have spent years piling into SaaS companies, because they all had very straightforward growth stories and replicable, reliable, and recurring revenue streams.

Between 2018 and 2022, 30% to 40% of private equity deals (as I'll talk about later) were in software companies, with firms taking on debt to buy them and then lending them money in the hopes that they'd all become the next Salesforce, even though none of them would. Even VC remains SaaS-obsessed — for example, about 33% of venture funding went into SaaS in Q3 2025, per Carta. The Zero Interest Rate Policy (ZIRP) era drove private equity into fits of SaaS madness, with SaaS PE acquisitions hitting $250bn in 2021. Too much easy access to debt and too many Business Idiots believing that every single software company would grow in perpetuity led to the accumulation of some of the most overvalued software companies in history.

As the years went by, things slowed down, and now private equity is stuck with tens of billions of dollars of zombie SaaS companies that it can't take public or sell to anybody else, their values decaying to far below what the firms paid, which is a very big problem when most of these deals were financed with debt. To make matters worse, 9fin estimates that IT and communications sector companies (mostly software) accounted for 20% to 25% of private credit deals tracked, with 20% of loans issued by public BDCs (like Blue Owl) going to software firms. Things look grim.

Per Bain, the software industry's growth has been on the decline for years, with both growth and Net Revenue Retention (which is how much you're making from customers expanding their spend, minus what you're losing from customers leaving or cutting spend) in decline:

It's easy to try and blame any of this on AI, because doing so is a far more comfortable story. If you can say "AI is causing the SaaSpocalypse," you can keep pretending that the software industry's growth isn't slowing. That isn't what's happening. No, AI is not replacing all software. That is not what is happening. Anybody telling you this is either ignorant or actively incentivized to lie to you.
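Since Net Revenue Retention does a lot of work in that argument, here is the standard definition for reference (the generic formula, not necessarily Bain's exact methodology):

$$\mathrm{NRR} = \frac{\text{starting ARR} + \text{expansion} - \text{contraction} - \text{churn}}{\text{starting ARR}}$$

Above 100%, the existing customer base grows revenue on its own; below 100%, revenue shrinks before a single new customer is counted.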
The lie starts simple: that the barrier to developing software is "lower" now, either "because anybody can write code" or "anybody can write code faster." As I covered a few weeks ago … From what I can gather, the other idea is that AI can "simply automate" the functions of a traditional software company, and that "agents" can replace the entire user experience, with users simply saying "go and do this" and something happening.

Neither of these things is true, of course — nobody bothers to check, and nobody writing about this stuff gives enough of a fuck to talk to anybody other than venture capitalists or CEOs of software companies that are desperate to appeal to investors. To be more specific, the CEOs that you hear desperately saying that they're "modernizing their software stack for AI" are doing so because investors, who also do not know what they are talking about, are freaking out that they'll get "left behind" because, as I've discussed many times, we're ruled by Business Idiots that don't use software or do any real work.

There are also no real signs that this is actually happening. While I'll get to the decline of the SaaS industry's growth cycle, if AI were actually replacing software we'd see direct proof — massive contracts being canceled, giant declines in revenue, and, in the case of any public SaaS company, 8-K filings saying that major customers had shifted business away from traditional software. Midwits with rebar chunks in their gray matter might say that "it's too early to tell and the contract cycle has yet to shift," but, again, we'd already have signs, and you'd know this if you knew anything about software. Go back to drinking Sherwin Williams and leave the analysis to the people who actually know stuff!

We do have one sign though: nobody appears to be able to make much money selling AI, other than Anthropic (which made $5 billion in its entire existence through March 2026, on $60 billion of funding) and OpenAI (which I believe made far less than $13 billion, based on my own reporting). In fact, it's time to round up the latest and greatest in AI revenues. Hold onto your hats folks!

Riddle me this, Batman: if AI were so disruptive to all of these software companies, would it not be helping them disrupt themselves? If it were possible to simply magic up your own software replacement with a few prompts to Claude, why aren't we seeing any of these companies do so? In fact, why do none of them seem to be able to do very much with generative AI at all?

The point I'm making is fairly simple: the whole "AI SaaSpocalypse" story is a cover-up for a much, much larger problem. Reporters and investors who do not seem to be able to read or use software are conflating the slowing growth of SaaS companies with the growth of AI tools, when what they're actually seeing is the collapse of the tech industry's favourite business model, one that's become the favourite chew-toy of the venture capital, private equity and private credit industries. You see, there are tens of thousands of SaaS companies in everything from car washes to vets to law firms to gyms to gardening companies to architectural firms. Per my Hater's Guide To Private Equity: You'd eventually either take that company public or, in reality, sell it to a private equity firm.
Per Jason Lemkin of SaaStr: The problem is that SaaS valuations were always made with the implicit belief that growth was eternal, just like the rest of the Rot Economy, except SaaS, at least for a while, had mechanisms to juice revenues, and easy access to debt. After all, annual recurring revenues are stable and reliable, and these companies were never gonna stop growing, leading to the creation of recurring revenue lending: To be clear, this isn't just for leveraged buyout situations, but I'll get into that later.

The point I'm making is that the setup is simple: you see, nobody wants to talk about the actual SaaSpocalypse — the one that's caused by the misplaced belief that any software company will grow forever. Generative AI isn't destroying SaaS. Hubris is.

Alright, let's do this one more time. SaaS — Software As A Service — is both the driving force and seedy underbelly of the tech industry. It's a business model that sells itself on a seemingly good deal. Instead of paying upfront for an expensive software license and then again when future updates happen, you pay a "low" monthly fee that allows you to get (in theory) the most up-to-date (in theory) and well-maintained (in theory) version of whatever it is you're using. It also (in theory) means that companies need to stay competitive to keep your business, because you're committing a much smaller amount of money than a company might make from a single license. Over here in the real world, we know the opposite is true. Per The Other Bubble, a piece I wrote in September 2024:

It's hard to say exactly how large SaaS has become, because SaaS is in basically everything, from whatever repugnant productivity software your boss has insisted you need, to every consumer app now having some sort of "Plus" package that paywalls features that used to be free. Nevertheless, "SaaS" in most cases refers to business software, with the occasional conflation with the nebulous form of "the enterprise," which really means "any company larger than 500 people." McKinsey says it was worth "$3 trillion" in 2022 "after a decade of rapid growth," while Jason Lemkin and IT planning software company Vena say it has revenues somewhere between $300 billion and $400 billion a year. Grand View Research has the global business software and services market at around $584 billion, and the reason I bring that up is that basically all business software is now SaaS, and these companies make an absolute shit ton charging service fees.

"Perpetual licenses" — as in something you pay for once, and use forever — are effectively dead, with a few exceptions such as Microsoft Windows, Microsoft Office, and some of its server and database systems. Adobe killed them in 2014 (and a few more in 2022), Oracle killed them in 2020, and Broadcom killed them in 2023, the same year that Citrix stopped supporting those unfortunate enough to have bought them before perpetual licenses went the way of the dodo in 2019.

To quote myself again, in 2011, Marc Andreessen said that "software is eating the world." And he was right, but not in a good way. Andreessen's argument was that software would eat every business model: Every single company you work with that has any kind of software now demands you subscribe to it, and the ramifications of them doing so are more significant than you've ever considered. That's because SaaS is — or, at least, was — a far more stable business model than selling people something once. Customers are so annoying.
When they buy something, they tend to use it until it stops working, and if you made the product well, that might mean they only pay you once. SaaS fixes this problem by giving them only one option — to pay you a nasty little toll every single month, or ideally once a year, on a contractual basis, in a way that's difficult to cancel. Sadly, the success of the business software industry turned everything into SaaS.

Recently, I tried to cancel my membership to Canva, a design platform that sort of works well when you want it to but sometimes makes your browser crash. Doing so required me to go through no fewer than four different screens, all of which required me to click "cancel" — offers to give me a discount, repeated requests to email support, then a final screen where the cancel button moved to a different place. This is nakedly evil. If you are somebody high up at Canva, I cannot tell you to go fuck yourself hard enough! This is a scummy way to do business and I would rather carve a meme on my ass than pay you another dollar! It's also, sadly, one of the tech industry's most common (and evil!) tricks.

Everybody got into SaaS because, for a while, SaaS was synonymous with growth. Venture capitalists invested in businesses with software subscriptions because it was an easy way to say "we're gonna grow so much," with massive sales teams that existed to badger potential customers, or "customer success managers" that operate as internal sales teams to try and get you to start paying for extra features, some of which might actually be useful rather than just helping somebody hit their sales targets. The other problem is how software is sold. In the excellent Brainwash An Executive Today, Nik Suresh broke down the truth behind a lot of SaaS sales — that the target customer is the purchaser at a company, who is often not the end user, meaning that software is often sold in a way that's entirely divorced from its functionality. This means that growth, especially as things have gotten desperate, has come from a place of conning somebody with money out of it rather than studiously winning a customer's heart. And, as I've hinted at previously, the only thing that grows forever is cancer.

In today's newsletter I am going to walk you through the contraction — and in many cases collapse — of tech's favourite business model, caused not by any threat from Large Language Models but by the brutality of reality, gravity and entropy. Despite the world being anything but predictable or reliable, the entire SaaS industry has been built on the idea that the good times would never, ever stop rolling. I guess you're probably wondering why that's a problem! Well, it's quite simple (emphasis mine): That's right folks, 40% of PE deals between 2018 and 2022 were for software companies, at the very same time venture capital fund returns got worse. Venture and private equity have piled into an industry they believed was taking off just as it started to slow down. The AI bubble is just part of the wider collapse of the software industry's growth cycle. This is The Hater's Guide To The SaaSpocalypse, or "Software As An Albatross."

In its Q4 2025 earnings, IBM said its total "generative AI book of business since 2023" hit $12.5 billion — of which 80% came from its consultancy services, which consist mostly of selling other people's AI models to other businesses. It then promptly said it would no longer report this as a separate metric going forward.
To be clear, this company made $67.5 billion in 2025, $62.8 billion in 2024, $61.9 billion in 2023 and $60.5 billion in 2022. Based on those numbers, it's hard to argue that AI is having much of an impact at all, and if it were, it would remain broken out.

Scummy consumer-abuser Adobe tries to scam investors and the media alike by referring to "AI-influenced" revenue — meaning literally any product with some kind of AI plugin you can pay for (or have to pay for as part of a subscription) — and "AI-first" revenue, which refers to actual AI products like Adobe Firefly. It's unclear how much these things actually make. According to Adobe's Q3 FY2025 earnings, "AI-influenced" ARR was "surpassing" $5 billion (so $1.248 billion in a quarter, though Adobe does not actually break this out in its earnings report), and "AI-first" ARR was "already exceeding [its] $250 million year-end target," which is a really nice way of saying "we maybe made about $60 million a quarter for a product that we won't shut the fuck up about." For some context, Adobe made $5.99 billion in that quarter, which makes this (assuming AI-first revenue was consistent) roughly 1% of its revenue.

Adobe then didn't report its AI-first revenue again until Q1 FY2026, when it revealed it had "more than tripled year over year" without disclosing the actual amount, likely because a year ago its AI-first revenue was $125 million ARR, though that number also included "add-on innovations." In any case, $375 million ARR works out to $31.25 million a month, or (even though it wasn't necessarily this high for the entire quarter) $93.75 million a quarter, or roughly 1.465% of its $6.40 billion in quarterly revenue in Q1 FY2026.

Bulbous Software-As-An-Encumbrance Juggernaut Salesforce revealed in its latest earnings that its Agentforce and Data 360 (which is not an AI product, just the data resources required to use its services) platforms "exceeded" $2.9 billion in ARR… but $1.1 billion of that came from its acquisition of Informatica Cloud (which is not a fucking AI product, by the way!). Agentforce ARR ended up being a measly $800 million, or $66 million a month, for a company that makes $11.2 billion a quarter. It isn't clear what period of time this ARR refers to.

Microsoft, Google and Amazon do not break out their AI revenues. Box — whose CEO Aaron Levie appears to spend most of his life tweeting vague things about AI agents — does not break out AI revenue. Shopify, the company that mandates you prove that AI can't do a job before asking for resources, does not break out AI revenue.

ServiceNow, whose CEO back in 2022 told his executives that "everything they do [was now] AI, AI, AI, AI, AI," said in its Q4 2025 earnings that net new Annual Contract Value from its AI-powered "Now Assist" had doubled year-over-year, but declined to say how much that was after saying in mid-2025 that it wanted a billion dollars in revenue from AI in 2026. Apparently it told analysts in March that it had hit $600 million in ACV (per The Information)... in the fourth quarter of 2025, which suggests that this is not actually $600 million of revenue quite yet, nor do we know what that revenue costs. What we do know is that ServiceNow had $3.46 billion in revenue in Q4 2025, and its net income has been effectively flat for multiple quarters, and basically identical since 2023.
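Since these ARR-to-actual-revenue conversions recur throughout this piece, here is the arithmetic spelled out once, using the Adobe figures above (ARR is an annualized run rate, so divide by 12 for a month and by 4 for a quarter):

$$\frac{\$375\text{M}}{12} = \$31.25\text{M/month}, \qquad \frac{\$375\text{M}}{4} = \$93.75\text{M/quarter}, \qquad \frac{\$93.75\text{M}}{\$6{,}400\text{M}} \approx 1.465\%$$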
Intuit, a company that vibrates with evil, had the temerity to show pride that it had generated "almost $90 million in AI efficiencies in the first half of 2025," a weird thing to say considering this was a statement from March 2026. Anyway, back in November 2025 it agreed to pay over $100 million for model access to integrate ChatGPT. Great stuff everyone.

Workday, a company that makes about $2.5 billion a quarter in revenue, said it "generated over $100 million in new ACV from emerging AI products, [and that] overall ARR from these solutions was over $400 million." $400 million in ARR is about $33 million a month.

Atlassian, which just laid off 10% of its workforce to "self-fund further investment in AI," does not break out its AI revenues.

Tens of thousands of SaaS companies were created in the last 20 years. These companies, for a while, had what seemed to be near-perpetual growth. This led to many, many private equity buyouts of SaaS companies, pumping them full of debt based on their existing recurring revenue and the assumption that they would never, ever stop growing. I will get into this later. It's very bad.

When growth slowed, the reaction was for these companies to raise venture debt — loans based on their revenue — and per Founderpath, 14 of the largest Business Development Companies loaned $18 billion across 1,000 companies in 2024 alone, with an average loan size of $13 million. This includes name-brand companies like Cornerstone OnDemand and Dropbox, the latter of which took on a $34.4 million debt facility with an 11% interest rate. One has to wonder why a company that had $643 million in revenue in Q4 2024 needed that debt.


Restoring an Xserve G5: When Apple built real servers

Recently I came into possession of a few Apple Xserves. The one in question today is an Xserve G5, RackMac3,1, which was built when Apple was at the top—and bottom—of its PowerPC era. This wasn't the first Xserve—that honor belongs to the G4 [1]. And it wasn't the last—there were a few generations of Intel Xeon-powered RackMacs that followed. But in my opinion, it was the most interesting. Unfortunately, being manufactured in 2004, this Mac's Delta power supply suffers from the Capacitor Plague. The PSU tends to run hot, and some of the capacitors weren't even 105°C-rated, so they tend to wear out, especially if the Xserve was running high-end workloads.

iDiallo Today

It's Work that taught me how to think

On the first day of my college CS class, the professor walked in holding a Texas Instruments calculator above his head like Steve Jobs unveiling the first iPhone. The students sighed. They had expected computer science to involve little math. The professor told us he had helped build that calculator in the eighties, then spent a few minutes talking about his career and the process behind it. Then he plugged the device into his computer, opened a terminal on the projector, and pushed some code onto it. A couple of minutes later, he unplugged the cable, powered on the calculator, and sure enough, Snake was running on it.

A student raised his hand. The professor leaned forward, eager for the first question of the semester. "Um... is this going to be on the test?"

While the professor was showing us what it actually means to build something, to push code onto hardware and watch it come alive, his students were already thinking about the grade. About the exit. The experience meant nothing unless it converted into points. That was college for me. Everyone was chasing a passing grade to get to the next class. Learning was mostly incidental. The professors tried, but our incentives were completely misaligned. Talk of higher education becoming obsolete was already in the air, especially in CS. As enthusiastic as I had been when I started, that enthusiasm got chipped away one class at a time until the whole thing felt mechanical. Something I just had to get through. I dropped out shortly after the C++ class, which had taught me almost nothing about programming anyway. I was broke and could only pay for so many courses out of pocket. So I took my skills, such as they were, to a furniture store warehouse. My day job.

When customers bought furniture, we pulled their merchandise from the back and loaded it into their trucks. They signed a receipt, we kept a copy, and those copies went into boxes labeled by month and date. At the end of the year, the boxes went onto a pallet, the pallet got shrink-wrapped, and a forklift tucked it away in a high storage compartment. Whenever an accountant called requesting a signed copy, usually because a customer was disputing a charge, the whole process ran in reverse. Someone licensed on the forklift had to retrieve the pallet, we cut the shrink-wrap, found the right box, and sifted through hundreds of receipts until we found the one we needed. The process took hours.

One day I decided enough was enough. After my shift, I grabbed the day's signed receipts and fed them into a scanner. For each one, I created two images: a full copy and a cropped version showing just the top of the receipt where the order number was printed. I found a pirated OCR application, then used VBScript and a lot of Googling to write a script that read the order number and renamed each image file to match it. I also wrote my first Excel macros, likewise in VBScript. When everything was wired together, I had a working system. Each evening, I would enter the day's order numbers, scan the receipts, and let the script match them up with a preview attached. When the OCR failed to read a number, the file was renamed "unknown" with an incrementing number so I could verify those manually. From then on, when an accountant called, I could find and email them the receipt in under a minute, without ever leaving my desk.

When I left that warehouse, I was ready to call myself a programmer. That one month building that system taught me more than two years of school ever had. But the education didn't stop there.
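For the curious, here is a rough reconstruction of the core renaming logic described above. This is my own illustrative sketch in modern C++ rather than the original VBScript, with the OCR step stubbed out (the original relied on an off-the-shelf OCR application):

```cpp
#include <filesystem>
#include <iostream>
#include <optional>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Stub standing in for the OCR application: given the cropped image of a
// receipt's top edge, return the order number printed there, if readable.
std::optional<std::string> ocr_order_number(const fs::path& cropped_image) {
    (void)cropped_image;
    return std::nullopt;  // always "fails" here, to exercise the fallback path
}

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: rename_receipts <scan-dir>\n";
        return 1;
    }
    // Collect paths first so renaming doesn't disturb directory iteration.
    std::vector<fs::path> scans;
    for (const auto& entry : fs::directory_iterator(argv[1]))
        if (entry.path().extension() == ".png") scans.push_back(entry.path());

    int unknown = 0;  // incrementing counter for unreadable receipts
    for (const auto& scan : scans) {
        auto order = ocr_order_number(scan);
        // Name the file after its order number, or "unknown-N" so it can
        // be verified manually later, as in the warehouse system.
        const std::string name =
            order ? *order : "unknown-" + std::to_string(++unknown);
        fs::rename(scan, scan.parent_path() / (name + ".png"));
    }
    return 0;
}
```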
Years later, now considering myself an experienced developer, a manager handed me what looked like a giant power strip. It had a dozen outlets, and was built for stress-testing set-top boxes in a datacenter. "Can you set this up?" he asked.

A few years earlier, I would have panicked. I would have gone looking for someone who already knew the answer, or waited until the problem solved itself. But something had changed in me since the warehouse. Unfamiliar problems no longer felt like walls. They felt like the first receipt I ever fed into a scanner: just something to pull apart until it made sense. I had never worked with hardware. I had no idea where to start. But I didn't need to know where to start. I just needed to start.

I brought the device to my desk and inspected every inch of it. I wasn't looking for the answer exactly. Instead, I was looking for the first question. And I found one: an RJ45 port on one end. Not exactly the programming interface you'd expect, but it was there for a reason. I looked up the model number of the device, downloaded the manual, and before long I was connected via Telnet, sending commands and reading output in the terminal. Problem solved. Not because I knew anything about hardware going in, but because I had learned to spend time with unfamiliar problems.

None of this was in the syllabus. Nobody graded me on it. There was no partial credit for getting halfway there. That's the difference between school and work. School optimizes for the test, like that student who couldn't look past the grade to see what was actually being shown to him. School teaches you the shape of a problem and gives you a method to solve it. Work, on the other hand, doesn't care about the test. Work hands you something broken, or inefficient, or completely unfamiliar, and simply waits. Often, there are no right answers at work. You just have to build your own solution that satisfies the requirement. You figure things out, not because you memorized the right answer, but because you thought your way through it. Then something changes in how you approach every problem after that. You don't flinch at the next problem. You understand that facing unfamiliar problems is the job.


Tomorrow Never Came

Here is a movie I made with my friend, Tom, back when I was 15 or so. We were sure we'd be the next George Lucas or Steven Spielberg. Little did I know that a few years later I'd be working at Lucasfilm and Steven Spielberg would call me up for hints on Monkey Island. He couldn't use the 1-900 number like everyone else. The movie had sound, but it was lost when it was transferred to VHS and this goofy music was added. In case the Smithsonian wants to preserve the movie as historically important, here is the link.


chicken nuget

Background: nuget.org is a Microsoft-owned and -run service that allows users to package software and upload it to nuget so that other users can download it. It is targeted at .NET developers, but there is really no filter on what you can offer through the service.

Three years ago I reported on how nuget was hosting and providing ancient, outdated and insecure curl packages. Random people download a curl tarball, build curl and then upload it to nuget, and nuget then offers those curl builds to the world – forever. To properly celebrate the three year anniversary of that blog post, I went back to nuget.org, entered curl into the search bar and took a look at the results. I immediately found at least seven different packages where people were providing severely outdated curl versions. The most popular of those, rmt_curl, reports that it has been downloaded almost 100,000 times over the years and is still downloaded almost 1,000 times/week as of the last few weeks. It is still happening. The packages I reported three years ago are gone, but now there is a new set of equally bad ones. No lessons learned.

rmt_curl claims to provide curl 7.51.0, a version we shipped in November 2016. Right now it has 64 known vulnerabilities and we have done more than 9,000 documented bugfixes since then. No one in their right mind should ever download or use this version.

Conclusion: the state of nuget is just as sad now as it was three years ago, and this triggered another someone is wrong on the internet moment for me. I felt I should do my duty and tell them. Again. Surely they will act this time! Surely they think of the security of their users?

The entire nuget concept is set up and destined to end up like this: random users on the internet put something together, upload it to nuget and then the rest of the world downloads and uses those things – trusting that whatever the description says is accurate and well-meaning. Maybe there are some additional security scans done in the background, but I don't see how anyone can know that the packages don't contain any backdoors, trojans or other nasty deliberate attacks. And whatever has been uploaded once seems to then be offered in perpetuity.

Like three years ago, I listed a bunch of severely outdated curl packages in my report. nuget says I can email them a report, but trying that just sent me a bounce back saying they don't accept email reports anymore. (Sigh, and yes, I reported that as a separate issue.) I was instead pointed over to the generic Microsoft security reporting page, where there is not even any drop-down selection for "nuget", so I picked ".NET" instead when I submitted my report. Almost identically to three years ago, my report was closed within less than 48 hours. It's not a nuget problem, they say.

Thank you again for submitting this report to the Microsoft Security Response Center (MSRC). After careful investigation, this case has been assessed as not a vulnerability and does not meet Microsoft's bar for immediate servicing. None of these packages are Microsoft owned, you will need to reach out directly to the owners to get patched versions published. Developers are responsible for removing their own packages or updating the dependencies.

In other words: they don't think it's nuget's responsibility to keep the packages they host secure and safe for their users. I should instead report these things individually to every outdated package provider, who, if they cared, would have removed or updated these packages many years ago already.
Also, that would imply a never-ending game of whack-a-mole for me, since people obviously keep doing this. I think I have better things to do with my life.

In the cases I reported, the packages seem to be of the kind that once had the attention and energy of someone who kept them up-to-date with curl releases for a while; then they stopped, and since then the packages on nuget have just collected dust and gone stale. Still, apparently users keep finding and downloading them, even if maybe not at terribly high numbers. Thousands of fooled users per week is thousands too many.

The uploading users are perfectly allowed to do this, legally, and nuget is perfectly allowed to host these packages as per the curl license. I don't have a definite answer to what exactly nuget should do to address this problem once and for all, but as long as they allow packages uploaded nine years ago to still get downloaded today, it seems they are asking for this. They contribute to and aid in users getting tricked into downloading and using insecure software, and they are indifferent to it. A rare few applications that were uploaded nine years ago might actually still be okay, but those are extremely rare exceptions.

The last time I reported this nuget problem, nothing happened on the issue until I tweeted about it. This time around, a well-known Microsoft developer (who shall remain nameless here) saw my Mastodon post about this topic when mirrored over to Bluesky and pushed for the case internally – but not even that helped. The nuget management thinks this is okay. If I were into puns I would probably call them chicken nuget for their unwillingness to fix this. Maybe just closing our eyes and pretending it doesn't exist will make it go away?

Absolutely no one should use nuget.


The Playtank Blog Guide

This blog started as a place to gather tabletop role-playing thoughts. Over time, it transformed into an outlet for professional musings. When my focus shifted professionally to systemic design, the blog shifted along with it. I'm quite proud of this collection of tips, tricks, and practices. It's come to the point where there's a consistent monthly readership. But it's also quite meandering and weird, not exactly accessible for a newcomer. So here's a brand spanking new Playtank Blog Guide to light your way, Monsieur Newcomer (or returning blog peruser)! As always, you can contact me at [email protected] or make a comment if you feel a sudden need to tell me something.

Key posts for understanding what this blog is about:
Systemic Building Blocks: examples showcasing what a system can be in a video game.
The Systemic Master Scale: investigates the design dichotomy between authorship and emergence.
Your Next Systemic Game: a practical process for making systemic games.
Designing a Systemic Game: an overview of what goes into designing a systemic game.
Simulated Immersion: a series in three parts that starts by talking about the immersive sim legacy, discusses their game design, and then immersive sims as products.
First-Person 3Cs: another series in three parts that deals with camera, controls, and character for when you are making first-person games.

There's just one of these yet, but here's a spot specifically for posts written by someone other than yours truly:
Game Economy Design: the fantastic system designer Keelan Bowker-O'Brien teaches you how to design economies, providing some reference spreadsheets for your practical use.

As with all things game design, much is just opinion. These are mine:
The Interaction Frontier: a treatment on why interaction matters more than you may think.
Definitions in Game Design: an argument against the never-ending attempts at defining things using words no one ever agreed on.
Challenges to Systemic Design: ten specific challenges facing systemic design and how you may approach them.
My Game Engine Journey: my own personal journey learning to work with different game engines.
A Love Letter to Cyberpunk 2077: written right after finishing the amazing Cyberpunk 2077, back in 2021.
Ways to Not Have Cooldowns: written because I was annoyed with over-reliance on cooldowns.
It's (Not) an Iterative Process: an attempt to conceptualise how "it's an iterative process" is actually a problematic adage often used to hide bad processes.
Speak to Me!: some musings on why game dialogue hasn't really improved in the past four decades.
Boom, Headshot!: an attempt at a constructive treatment of violence in video games.

These posts are practical and game design-related, with a broad segment of topics:
Books for Game Designers: by far the most referenced post on the blog. Some recommendations for good game design books.
Game Balancing Guide: a guide for anyone about to go knee-deep into game balancing.
Eras of Game Design: a very broad walkthrough of different "eras" of game design and the many lessons that risk being lost to time if we're not curious enough.
Designing Good Rules: dedicated to the designing of rules, the glue that makes systems work.
Combat Design Philosophy: a multi-part series that goes through Melee, Gunplay, Sport, and Drama in the context of combat.
Stages of a Game's Design: one of the first posts where I started exploring how to be more specific with the work of a game designer.
Future Game Story: thoughts on the unique elements of video game storytelling and the modes of discourse they create. Originally written in 2014.
Tabletop Roleplaying as a Game Design Tool: one of my personal favorites, talking about tabletop roleplaying as a practical design tool.
Gamification: dips your toes into the Origin and Implementation of gamification systems, as well as the subject of Loot.
Game Design Philosophy: my first attempt at concretising what's important to me in game design.

Systemic design is nothing without its practical dimension. These posts are not nearly as technical as they should be, but keep the code pseudo:
Building a Systemic Gun: the very first pseudocode post, and still probably the best one.
State-Space Prototyping: a general discussion on prototyping, but also my favorite method for prototyping systemic games.
An Object-Rich World: pivotal for my own personal understanding of object-object relationships, and still mostly holds up.
A State-Rich Simulation: an expansion of the object-rich world with the meaning and implementation of states and contexts.
What Systems Do: a slightly too general treatment of what systems may do when objects interact.
Maximum Iteration: the five broad things you must facilitate if you want to maximise your iteration.

Ideas that aren't really design- or systemic design-related, but more about the games industry or game development practices in general:
The Systemic Pitch: how to pitch, and how to pitch a systemic game specifically, based on lessons learned.
Custom Tools and Work Debt: the cost of pushing work forward onto "someone" before there's a definition of the work or the tools it may require.
The Content Treadmill: one of the biggest problems facing game development today, and how you can look at the content you create from a different perspective.
Making Money Making Games: a post on budgeting and some of the many unintuitive ways that game developers make money.

Where it all began. A mix of musings and one-shots played during the Covid pandemic:
When in Doubt, Improvise: goes into my favorite way of playing tabletop role-playing games.
Player vs. Player in TTRPGs: my other favorite way of playing tabletop role-playing games: having the players in the room play against each other.
Courtroom Intrigue: a mini-campaign where players play the leftover nobles who are forced to step up to a challenge bigger than they are.
Investigate Your Own Murder: a scenario where you play a ghastly supernatural murder both as the people being murdered and as the agents investigating the crime scene.
Tigers, Horses, and Weird Danish Rock Songs: over-the-top violence, full of angry man-eating horses and divas.
Cyberpunk + Heist = Grand Slam: a cyberpunk scenario that the game group asked for specifically and that turned into a mini-campaign.

Jeff Geerling Yesterday

Can the MacBook Neo replace my M4 Air?

Many of us wonder if the MacBook Neo is 'the one'. Because I have a faster desktop (currently an M4 Max Mac Studio), I've always used a lower-end Mac laptop, like the iBook or MacBook Air, for travel. I've used MacBook Pros in the past, but I like the portability of smaller, cheaper models. In fact, my favorite Mac laptop ever was the 11" Air.

ava's blog Yesterday

how i stay up-to-date on data protection & privacy law

Data protection, privacy and tech is a very dynamic field; every day, there are new court decisions, actions by big tech companies, and resulting questions, so I thought I could share the resources that keep me informed. Unless marked with a German flag 🇩🇪, these are in English. Not everyone has an RSS feed, or their newsletter has additional info, so sometimes I settle for the newsletter instead.

Interface-eu.org
🇩🇪 Zentrum für Digitalrechte und Demokratie
🇩🇪 Stiftung Datenschutz
🇩🇪 Netzpolitik.org
European Law Blog
Epicenter.works (🇩🇪 by default, but lets you select an English version)
Electronic Frontier Foundation
TheCitizenLab
🇩🇪 Datenschutzkonferenz
🇩🇪 TÜV SÜD Datenschutz Blog

These are less interesting/applicable to you as a reader, but are still helpful for me:
Meetings with the data protection officer at my workplace.
Following specific, notable people in the space - like via the RSS feed of their BlueSky or Mastodon.
Magazine subscriptions like the Datenschutzberater.
My volunteer work at noyb.eu, translating and summarizing court cases, and learning about new events and projects in their Country Reporter meetings.
Attending conferences, like the Beschäftigtendatenschutztag in Munich (2025) and Computers, Privacy and Data Protection (CPDP) in Brussels (2026, upcoming).


Binary Compatible Critical Section Delegation

Binary Compatible Critical Section Delegation
Junyao Zhang, Zhuo Wang, and Zhe Zhou
PPoPP'26

The futex design works great when contention is low but leaves much to be desired when contention is high. I generally think that algorithms should be crafted to avoid high lock contention, but this paper offers a contrarian approach that improves performance without code changes.

Acquiring a futex involves atomic operations on the cache lines that contain the futex state. In the case of high contention, these cache lines violently bounce between cores. Also, user space code will eventually give up trying to acquire a lock the easy way and will call into the kernel, which has its own synchronization to protect the shared data structures that manage the queue of threads waiting to acquire the lock. The problems don't end when a lock is finally acquired. A typical futex guards some specific application data. The cache lines containing that data will also uncomfortably bounce between cores.

The idea behind delegation is to replace the queue of pending threads with a queue of pending operations. An operation comprises the code that will be executed under the lock, and the associated data. The C++ sketch at the end of this section shows how I think of delegation. In the uncontended case, an operation executes directly. In the contended case, it is placed into a queue to be executed later. After any thread finishes executing its own operation under the lock, that thread will check the queue. If the queue is not empty, that thread will go ahead and execute all of the functions contained in the queue. The payoff is locality: if a particular thread calls 10 functions from the queue, the data guarded by the lock remains local to the core that thread is running on, and the system avoids moving it between cores 10 times.

The magic of this paper is that it shows how to change the OS kernel to automatically implement delegation for any application that uses futexes. When the futex code gives up trying to acquire a futex in user space, it calls the OS to wait on the futex. The implementation of this system call is changed to implement automatic delegation. Automatic delegation can fail (as illustrated by Fig. 2), in which case the traditional futex waiting algorithm is used. Source: https://dl.acm.org/doi/10.1145/3774934.3786439

This paper makes heavy use of the Userspace Bypass library (a.k.a. UB; paper here). This library allows the kernel to safely execute user-mode code. It was originally designed to optimize syscall-heavy applications, by allowing the kernel to execute the small tidbits of user space code in between system calls. UB uses binary translation to translate instructions that were meant to run in user space into instructions that can securely be executed by the kernel.

Binary compatible critical section delegation uses UB to translate the code inside of the critical section (i.e., the code between the futex lock and unlock calls) into code that can be safely executed by the kernel. A pointer to this translated code is placed into a queue of delegated calls (the vw queue). The set of threads which are trying to acquire a lock cooperatively execute the functions in the vw queue. At any one time, at most one thread is elected to be the delegate thread. It drains the vw queue by executing (in kernel space) all the delegated functions in the queue.
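Here is the sketch promised above: a minimal user-space rendering of the delegation pattern. This is my own illustrative code, not the paper's (the paper's mechanism lives in the kernel and works on unmodified binaries), and a production version would need to close the enqueue/drain race noted in the comments:

```cpp
#include <functional>
#include <mutex>
#include <queue>

// Delegation in a nutshell: queue *operations*, not waiting threads.
class DelegatingLock {
public:
    // Run `op` under the lock: directly when uncontended, or by handing
    // it to whichever thread currently holds the lock when contended.
    void run(std::function<void()> op) {
        if (mtx_.try_lock()) {
            op();      // uncontended: execute directly
            drain();   // then execute operations delegated by other threads
            mtx_.unlock();
        } else {
            // Contended: enqueue the operation instead of blocking.
            // NOTE: a real implementation must handle the race where the
            // holder finishes draining just before this push, and must
            // decide whether callers may return before their op has run.
            std::lock_guard<std::mutex> g(queue_mtx_);
            pending_.push(std::move(op));
        }
    }

private:
    void drain() {
        for (;;) {
            std::function<void()> op;
            {
                std::lock_guard<std::mutex> g(queue_mtx_);
                if (pending_.empty()) return;
                op = std::move(pending_.front());
                pending_.pop();
            }
            // The guarded data stays hot in this core's cache while we
            // execute operations on other threads' behalf.
            op();
        }
    }

    std::mutex mtx_;        // stands in for the futex guarding shared data
    std::mutex queue_mtx_;  // guards the queue of delegated operations
    std::queue<std::function<void()>> pending_;
};
```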
This works great in cases where the code inside of the critical section accesses a lot of shared state, because that shared state can happily reside in the cache of the core that is running the delegate thread, rather than bouncing between cores.

The paper has impressive results from microbenchmarks, but I think real applications are more relevant. Table 2 shows performance results for a few applications and a few locking strategies. BCD is the work in this paper. TCS and TCB are prior work which have the drawback of not being compatible with existing binaries. Source: https://dl.acm.org/doi/10.1145/3774934.3786439

Dangling Pointers

There is a hint here at another advantage of pipeline parallelism over data parallelism: allowing persistent data structures to remain local to a core.

Robin Moffatt Yesterday

How I do, and don't, use AI on this blog

I use AI heavily on this blog. I don't use AI to write any content. As any followers of my blog will have seen recently, I am a big fan of the productivity—and enjoyment—that AI can bring to one's work. (In fact, I firmly believe that to opt out of using AI is a somewhat negative step to take in terms of one's career.) Here's how I don't use AI, and never will: I don't use AI to write any content.

Stratechery Yesterday

An Interview with Robert Fishman About the Current State of Hollywood

An interview with MoffettNathanson's Robert Fishman about the current state of Hollywood, including Netflix, Paramount, YouTube, Disney, and Amazon.


Introduction to SQLAlchemy 2 In Practice

In 2023 I wrote "SQLAlchemy 2 In Practice", a book in which I offer an in-depth look at SQLAlchemy version 2, still the current version today. SQLAlchemy is, for those who don't know, the most popular database library and Object-Relational Mapper (ORM) for Python. I have a tradition of publishing my books on this blog to read for free, but this is one that I never managed to bring here, and starting today I'm going to work on correcting that. This article includes the Preface of the book. If you are interested, keep an eye on this blog over the next few weeks, as I will be publishing the eight chapters of the book in order. If you can't wait for the installments, you can buy the book in electronic or paper format today, and I will be eternally thankful, as you will be directly supporting my work.


Just admit you’re playing the game

It's fine. Many people do it, and you decided to do the same. That's ok. But don't attempt to use some wishy-washy argument to justify your actions. You either believe in something and you're willing to power through, or you don't, and you do what everybody else is doing. It's fine to pick option B, but at least have the courage to admit it and don't use some bullshit argument to justify your actions.

マリウス Yesterday

GL.iNet Slate 7

If you happened to have stumbled upon my write-up from almost four years ago about running an open source home area network, you might know that I'm enjoying a fairly elaborate and mostly FOSS-based infrastructure that is as lightweight and travel-friendly as possible. Although many things have changed since then and an update to the original post is well overdue, the fundamentals remain the same: my personal infrastructure has to be as flexible and portable as possible, to fit my ever-changing life.

One key component of my setup has been the Linksys WRT3200 ACM router running OpenWrt, an embedded Linux distribution designed primarily for network devices. The Linksys has been a reliable piece of equipment for me for now well over five years, and besides its dated and somewhat flaky Wi-Fi I have few complaints about the device's functionality whatsoever. Whenever I needed to move to a new location or travel for prolonged periods of time, however, the approximately 246×194×52mm device (without its four antennas) wasn't exactly the most travel-friendly at 798g/1.76lb. In addition, the Linksys is powered via its barrel connector and requires a dedicated, external PSU, which in turn usually requires bringing either multiple power socket adapters or, given the limited number of power outlets in hotel rooms, a single adapter and a Type A/B power strip to connect all my electronic devices. This, in turn, brings the total weight up to well over 1kg/2.2lb.

Short story long, I have been on the lookout for a replacement for the aging Linksys for a while now and have ultimately decided to give the GL.iNet Slate 7 a try, considering that it's at least based on OpenWrt.

At the hardware level, the Slate 7 is an interesting take on the travel router concept, featuring dual-band Wi-Fi 7 (802.11be) with external foldable antennas, dual 2.5 GbE Ethernet ports, a touchscreen, and, probably the most important feature to me, USB-C PD input. All in a compact 130×91×34mm package that weighs only 295g/0.65lb. Spec-wise the Slate 7 is above most consumer travel routers, but below full-featured routers with tri-band Wi-Fi 7 and multi-gigabit backbones. The router doesn't support the 6 GHz band, and hence only does Multi-Link Operation (MLO) over the 2.4 GHz and 5 GHz bands. The exact specifications of the hardware are listed at the end of this post.

GL.iNet's documentation of the MLO feature sadly is misleading/wrong, depicting the 6 GHz band in the screenshot and saying, quote:

Note: This Wi-Fi band is only available on Flint 3e (GL-BE6500), Flint 3 (GL-BE9300), and Slate 7 (GL-BE3600). MLO (Multi-Link Operation) is one of the core features of Wi-Fi 7 (802.11be), designed to improve network performance, significantly reduce latency, and enhance connection stability by utilizing multiple frequency bands simultaneously such as 2.4 GHz, 5 GHz, and 6 GHz.

For wireless networking aficionados, the aforementioned lack of the 6 GHz band on the Slate 7 might hence be a deal breaker.

The Slate 7 runs OpenWrt 23.05-SNAPSHOT (Kernel 5.4.213) as of the time of writing, with GL.iNet's firmware layer on top. This layer includes Qualcomm's SDK and binary blobs, which is sadly a proprietary mess, as it is with so many devices (e.g. smartphones) running presumed open-source software these days. That said, the device offers full root access via SSH, and it's possible to install OpenWrt's LuCI UI if necessary. Even without that, however, it's possible to configure everything using the uci command and the configuration files.
This makes it at least slightly more trustworthy than your average ASUS router. If you buy this device for its open-source flexibility, however, be aware that you're effectively in a GL.iNet-flavored OpenWrt sandbox with proprietary Qualcomm components.

Like every other OpenWrt device, the Slate 7 also comes with a package manager that allows you to install additional components from the package repository. One thing that I like is the fact that it comes with Multi-WAN, WireGuard and DNScrypt-proxy2 pre-installed and offers a user-friendly UI to configure these things, which on my OpenWrt Linksys took a bit of wrangling to get right, especially the Mwan3 part.

The Slate 7 tries something unusual for a router in this class, namely a built-in touchscreen for controls, which is another reason I opted for this device. The touchscreen can display a variety of different things, from your Wi-Fi details with a QR code for quick joining, through VPN status info with an on/off toggle, all the way to real-time connection and hardware stats. During firmware upgrades, the touchscreen will display a progress bar for the upgrade process, which is definitely a plus over the Linksys. In its current version, the firmware also implements a lockscreen that protects the display with a 4-digit PIN, in case you don't want others to access your Wi-Fi details or turn off your VPN.

Speaking of which, the router supports WireGuard with ~500 Mbps throughput, as well as OpenVPN with ~380 Mbps peak, and integrates with a handful of paid VPN offerings for easy configuration. It's nevertheless possible to simply import your own configuration. The Slate 7 comes with beta support for Tailscale, which I have briefly tested and which appears to be working without any issues. However, it is not possible to configure advanced Tailscale flags via the web interface. If you need a more sophisticated Tailscale setup, you will likely have to hack it yourself.

Yes, the Slate 7 can run a Tor node to allow you to browse onion sites from within your network. The feature is marked beta but appears to work fairly well. However, the moment Tor is enabled, VPNs, DNS, AdGuard Home and IPv6 will not work properly anymore. Note: these "will not work properly" limitations are 100% a GL.iNet issue and not caused by OpenWrt. The reason I know this is that I had all these services working simultaneously on the Linksys, and I had them interconnected in a way that would allow me to, for example, perform DNS lookups via Tor through DNScrypt-proxy2. Clearly it's possible to neatly integrate all these services, but I guess that the GL.iNet admin UI simply isn't there yet, as these integrations require more complex configurations in the background.

AdGuard Home is part of the default installation of the Slate 7. I haven't tested it so far, mainly because my DNS setup already filters most things out, but judging by the web interface and the manual it seems like GL.iNet's implementation is pretty much plug-'n-play.

The router has a Network Acceleration feature that can be enabled to use hardware acceleration for networking, which reduces CPU load and device heat. However, when enabled, features like Client Speed and Traffic Statistics, Client Speed Limit, Parental Control, and VPN with IPv6 will not work properly, at least with the current firmware version.

I've had the Slate 7 for about two months at this point and so far I'm relatively satisfied with how it has been performing.
Every now and then I have experienced Wi-Fi reconnects, specifically on my phone; however, I was unable to reproduce this behavior on any other device. It appears that these reconnects have something to do with the GrapheneOS Pixel 8 rather than the Slate 7. On the upside, the Slate 7 supports tethering via its USB-A port, so I can directly attach my Netgear Nighthawk M2 LTE router and use its mobile connectivity as WAN. Unlike with Mwan3 on vanilla OpenWrt, configuring USB port tethering on the Slate 7 is a matter of a few clicks.

Comparing the Slate 7 to the full-size WRT3200 is a bit of an odd thing to do, given that the devices serve different purposes, despite me misusing the Linksys as a travel router. However, for the sake of comparing a pure OpenWrt device with the Slate 7, the experience I've had with the Linksys serves as a good basis.

As mentioned before, the Slate 7 is a modern Wi-Fi 7 dual-band travel router, integrating dual 2.5 GbE ports, a touchscreen, and USB-C PD input in a compact form factor. It assumes you're optimizing for portability. In contrast, the WRT3200 ACM is a larger SOHO-class router from several generations earlier, built around Wi-Fi 5 with Tri-Stream 160 and MU-MIMO. Its hardware (Marvell Armada ARM SoC, 512 MB RAM, 256 MB flash) was high-end in its day and remains capable for routing/firewall throughput on OpenWrt, but it lacks the ability to run many modern features sufficiently well, e.g. a WireGuard VPN client at full speed.

However, perhaps the largest point of divergence is software openness and the Wi-Fi driver stack. The WRT3200 ACM enjoys true upstream OpenWrt support with builds maintained in the official images, albeit with quirks in its wireless drivers (mwlwifi) and some limited features, giving you an experience close to vanilla OpenWrt with full package control, firewall, and kernel update paths. However, the price for this openness sadly is Wi-Fi instability and the lack of more up-to-date features. By contrast, the Slate 7 runs a Qualcomm SDK-based OpenWrt fork with proprietary driver blobs for its Wi-Fi 7 PHY, which enables the vendor firmware to provide the advertised Wi-Fi features (e.g., 160 MHz channels). True vanilla OpenWrt, however, isn't easily available, and upstream OpenWrt builds won't natively support the Qualcomm wireless stack. This means you may be stuck on GL.iNet's cadence for Wi-Fi driver updates unless the community or Qualcomm upstreams that support. We can safely assume, though, that this is unlikely to happen. The Slate 7 is hence OpenWrt only in spirit.

For raw routing, VLANs, firewall, VPN, and routing policies, both are capable platforms with SSH/LuCI and full package ecosystems. The Slate 7's hardware advantages, like better multi-gig throughput, a lower power envelope, USB-C PD, and a next-gen Wi-Fi PHY, skew it towards users who want high-speed WAN ingress/egress, travel/office portability, and modern client support. Meanwhile, the WRT3200 ACM shines as a classic OpenWrt playground with strong software freedom and mature community tooling for advanced network setups (e.g., VLAN trunking, policy routing), but doesn't offer the multi-gigabit wired backbone or next-gen wireless speed of the Slate 7. While its four 1 GbE LAN ports (+ 1 GbE WAN) still serve well for home and small office LANs, the Linksys is clearly outclassed in wired throughput and spectrum efficiency compared to the Slate 7.

While the Linksys WRT3200 ACM's OpenWrt support has at times lagged
(e.g. builds stuck on an older release for a while), its position in the official OpenWrt target tree gives it clear upstream maintenance prospects for years to come. The Slate 7, on the other hand, may never get full upstream driver support for its Qualcomm hardware, leaving its long-term wireless stack reliant on the cooperation between GL.iNet and Qualcomm, which presents an uncertain future for the device.

If your priority is pure open-source flexibility with a mature community and a rail-to-rail OpenWrt experience, the WRT3200 ACM still holds value for many people. However, if you prioritize or need faster throughput, better efficiency, and a travel-ready appliance that still lets you have it your way (at least to some extent) via OpenWrt, the Slate 7 seems like a decent choice, albeit with some proprietary caveats around wireless drivers.

The Slate 7 is a compelling travel router design that bridges modern Wi-Fi tech and OpenWrt customization into an ultra-portable package, but it carries the classic open-source hardware caveat, where the software ecosystem matters as much as the silicon, and only time will tell how that ecosystem is going to develop. If you don't require portability and prefer a native OpenWrt experience, then the OpenWrt One, or, if you can wait, the OpenWrt Two, which is going to be produced by GL.iNet, might be a better fit for you. If, however, you're looking for modern hardware that includes proprietary features at the cost of openness, yet still offers a solid OpenWrt basis, the Slate 7 (or its newer, more powerful, tri-band-capable upgrade, the Slate 7 Pro) might be for you. I will likely stick with the Slate 7 for travel, as the reduced size and weight of the device, and the ability to power it via USB-C PD, make up for its shortcomings.

Exact hardware specifications:
CPU: Qualcomm quad-core ≈1.1 GHz
Memory: 1 GB DDR4 RAM
Storage: 512 MB flash
Ethernet: 1× 2.5 GbE WAN, 1× 2.5 GbE LAN
Wireless: IEEE 802.11be (Wi-Fi 7), dual-band only (2.4 GHz and 5 GHz, no 6 GHz); 160 MHz channels on 5 GHz; maximum theoretical PHY rates of 688 Mbps (2.4 GHz) and 2882 Mbps (5 GHz)
Antennas: 2× foldable external
USB: 1× USB-A 3.0 for tethering/modem or storage
Power: USB-C PD compatible (~5–12 V), usable with powerbanks
Size: 130×91×34mm
Weight: 295g/0.65lb

Ruslan Osipov Yesterday

Writing code by hand is dead

The landscape of software engineering is changing. Rapidly. As my colleague Ben likes to say, we will probably stop writing code by hand within the next year. This comes with a move toward orchestration, and a fundamental change in how we engage with our craft.

Many of us became coders first, software engineers second. There's a lot more to software engineering than coding, but coding is our first love. Coding is comfortable, coding is fun, coding is safe. For many of us, the actual writing of syntax was never the bottleneck anyway. But now you can command swarms of agents to do your bidding (until the compute budget runs out, at least, and we collectively decide that maybe junior engineers aren't a terrible investment after all).

The day-to-day reality of the job is shifting. Instead of writing greenfield code or getting into the flow state to debug a complex problem, you're now multitasking. You're switching between multiple long-running tasks, directing AI agents, and explaining to these eager little toddlers that their assumptions are wrong, their contexts are overflowing, or that they need to pivot and do X, Y, and Z. And that requires endless context switching. Humans cannot truly multitask; our brains just rapidly jump context across multiple threads. Inevitably, some of that context gets lost. It's cognitively exhausting, but it feels hyper-productive because instead of doing one thing, you're doing three—even if the organizational overhead means it actually takes four times as long to get them all over the finish line.

This is, historically, what staff software engineers do. They don't particularly write much code. They juggle organizational bits and pieces, align architecture, and have engineers orbiting around them executing on the vision. It's a fine job, and highly impactful, but it's a fundamentally different job. It requires a different set of skills, and it yields a different type of enjoyment. It's like people management, but without the fun part: the people.

As an industry, we're trading these intimate puzzles for large-scale system architecture. An individual developer can now build at the scale of a whole product team. But scaling up our levels of abstraction always leaves something visceral behind. It was Ben who first pointed out to me that many of us will grieve writing code by hand, and he's absolutely right. We will miss the quiet satisfaction of solving an isolated problem ourselves, rather than herding fleets of stochastic machines.

We'll adjust, of course. The field will evolve, the friction will decrease, and the sheer scale of what we can create will ultimately make the trade-off worth it. But the shape of our daily work has permanently changed, and it's okay to grieve the loss of our first love. Consider this post your permission to do so.

HeyDingus Yesterday

No Face ID nor iPad apps wrenches my iPhone Duo(?) purchase plans

The Verge's headline sums up Mark Gurman's latest report on Apple's folding phone quite succinctly: 'iPhone Fold rumor: iPad-like multitasking, but no iPad apps and no Face ID'

Though the updated layout could make multitasking easier, Gurman reports that the folding iPhone won't run existing iPad apps. Still, Apple is reportedly trying to take advantage of the phone's larger screen real estate by updating its "core" apps with a sidebar on the left side of the screen. It will also give developers the ability to make the iPhone versions of their apps more iPad-like, according to Gurman.

Hmph. There's more.

Instead of using Face ID, Apple's foldable could integrate Touch ID into the device's side button, as the "front panel is too thin to accommodate the Face ID sensor array," Gurman reports. That means in place of the pill-shaped housing for the front-facing camera and Face ID, Apple will reportedly add a small hole-punch camera instead. Gurman has previously reported that the foldable could look like two iPhone Airs stuck together.

A few things are running through my mind reading this report.

First, I'm putting my money behind it being called 'iPhone Duo'. It would really tickle me for Apple to put out a 'Duo' and a 'Neo' — two Surface product names that Microsoft used, and which flopped and was never released, respectively.

Second, this lack of Face ID business really puts a wrench in my plans. I've been pretty psyched about replacing my iPhone and my iPad mini with an iPhone Duo. As much as I love my 17 Pro, it's too big, and I think the double-duty device would really work for me. But I don't think I want to go without Face ID. My iPad mini only has Touch ID in the power button, and I've never enjoyed that unlocking method. Honestly, it was better in the Home Button.

Third, I haven't really kept up with the folding iPhone's rumored specs. I presume each half is going to be thinner than both the iPhone Air and the iPad Pro (Apple's record-holding thinnest device), since both of those feature Face ID.

Fourth, leave it to Apple to not do the obvious thing and just let the thing run iPad apps. Why make developers go through designing another layout for their iOS apps if the iPadOS versions are right there?

We'll see how the software situation shakes out. I'll be pretty disappointed if this thing doesn't come with Face ID. It's probably a deal-breaker, even though I'd want to purchase it to show Apple the foldable is a form factor worth pursuing. There's always the chance they'll cancel the whole thing if the first one doesn't sell well. On the other hand, they did just fix the iPhone 16e's most glaring omission — MagSafe — year-over-year with the 17e. There's hope.

HeyDingus is a blog by Jarrod Blundy about technology, the great outdoors, and other musings. If you like what you see — the blog posts, shortcuts, wallpapers, scripts, or anything — please consider leaving a tip, checking out my store, or just sharing my work. Your support is much appreciated! I'm always happy to hear from you on social, or by good ol' email.


Flying solo

One of the things I like most when I fly solo is having all the space to myself. The back seats are usually filled with my backpack and jacket, while the right seat holds my iPad, navigation charts, kneeboard, a bottle of water, and occasionally some snacks for longer (and sometimes boring?) flights. Not much different from how I travel by car. After handling take-off procedures and settling into cruise, I love relaxing and watching the landscape in comfortable silence, with just the engine's hum and the soft background chatter of the radio.

Evan Hahn Yesterday

How I use generative AI on this blog

Inspired by others, I'm publishing how I use generative AI to write this little blog.

Generative AI, like any technology, has tradeoffs. I think the cons far outweigh the pros. In other words, the world would be better off without generative AI. Despite this belief, I use it. I'm effectively forced to at work, but I also use LLMs to help write this personal blog. I think they can produce better writing if used correctly.

Also: I want to be critical of this technology. Specifically, I want to change the minds of "AI maxxers", not preach to those who already hate it. If I never used this stuff, AI lovers wouldn't listen to me. These people are more likely to respect criticism from a daily user who's sympathetic to the benefits. I think there's space for critique from a user of a technology they wish didn't exist. I feel discomfort and tension about this, which I hope comes through.

With that, let's get to the specifics. My main rule of thumb: the final product must be word-for-word what I would've written without AI, given enough time. I use it in two main ways:

- Like a thesaurus. For example, I recently asked, "What's another way to say that a book was overly positive, not critical of its subject matter?" I used one of its suggestions, "flattering", in my final draft.
- Quick brainstorming for specifics. For example, I was listing types of software error in a recent post and asked it for more examples. I plucked one of its many answers—null pointer exceptions—and discarded the rest.

I prefer local models that run on my phone and laptop. I'll keep this post updated.

underlap Yesterday

Moving on to Servo

It's finally time to move on from implementing my channel-multiplexing crate to using it in Servo. But how?

Since my last post, I've been using Claude Code extensively to help me complete the crate:

- Added non-blocking receive methods.
- Revised the documentation to avoid describing internals.
- Added a Migration Guide on migrating from ipc-channel to the crate.
- Added a couple of methods identified as missing by the Migration Guide.
- Improved test speed and reduced its variability.

I got into a pattern of having Claude write a spec (in the form of commented-out API with docs), write tests for the spec, and then implement the spec. Sometimes Claude failed to do the simplest things. I asked it to extract some common code into a function, but it got part way through and then started reverting what it had done by inlining the function. Pointing this out didn't help. It's as if the lack of a measurable goal [1] made it lose "direction". It was faster and safer to do this kind of change by hand.

I published v0.0.7 of the crate. This is functionally complete and ready for consumption by Servo. The crate is perhaps an ideal project for using Claude Code, with its:

- Relatively simple and well-documented API,
- Unit and integration tests (all of which run in under five seconds),
- Benchmarks,
- Strong typing (a given for Rust),
- Linting (including pedantic lints),
- Standard formatting,
- the ipc-channel API for comparison.

By comparison, Servo is challenging for Claude (and human developers), having:

- A complex API,
- Extremely slow tests,
- An enormous codebase.

And that's not to mention Servo's (anti-) AI Contribution policy.

I asked Claude to plan the migration of Servo from ipc-channel to my crate, using the Migration Guide it had written previously. It came up with a credible plan, including validating by running various tests. However, the plan didn't include running the tests before making changes, to be sure they already passed and the environment was set up correctly.

The first hurdle in getting the tests to pass was that Servo doesn't build on Arch Linux. This is a known problem, and a workaround was to use a devcontainer in vscode, a development environment running in a Linux container. A prerequisite was to install Docker, which gave me flashbacks to the latter part of my software development career (when I worked on container runtimes, Docker/OCI image management, and Kubernetes). These aspects of my career were part of my decision to retire when I did; I had little interest in these topics beyond the conceptual side.

After a bit of fiddling to install Docker and get it running, I tried to open the devcontainer in vscodium. The first issue with this was that some 48 GB of "build context" needed transferring to the Docker daemon. This was later reduced to 5 GB. The second issue was that vscodium was missing some APIs that were needed to make the devcontainer usable. So I uninstalled vscodium and installed vscode.

I was then able to ask Claude to proceed to check that the validation tests ran correctly. The test program failed to run [2], so Claude used Cargo to run various tests. After implementing the first step of the plan, Claude mentioned that there was still one compilation error, but that this didn't matter because it had been present at the start. This was a mistake along the lines of "AI doesn't care". Any developer worth their salt would have dug into the error before proceeding to implement a new feature.

Anyway, I got Claude to commit its changes in the devcontainer. I then found, when trying to squash two Claude commits outside the container, that some of the files had been given root as an owner, because the devcontainer was running under the container's root user. I tried modifying the devcontainer to use a non-root user, but then the build couldn't update a directory still owned by root. I considered investigating this further to enable a non-root user to build Servo inside a devcontainer [3], but at this point I started to feel like I was in a hole and should stop digging:

- Do I really want to continue using Claude Code?
- Would AI-generated code be acceptable to Servo developers? [4]
- Do I actually want to get back into wrestling with a mammoth project now I'm retired?

So I took a step back and decided to discuss the way forward with the Servo developers: Applying IPC channel multiplexing to Servo.

[1] Code complexity metrics seem to have fallen out of favour, but maybe some such metrics would help Claude to keep going in the right direction when refactoring. ↩︎
[2] I later got it running by installing a missing dependency, but not all the tests ran successfully (four hung). ↩︎
[3] The unit tests passed in the devcontainer after applying an ownership change (to the relevant non-root user and group) to the affected directories. ↩︎
[4] Not according to Servo's (anti-) AI Contribution policy. ↩︎
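For the record, the fix in footnote 3 boils down to a recursive ownership change, roughly like the sketch below. The actual paths and user were lost in extraction, so every name here is a placeholder of mine:

  # Inside the devcontainer, as root: hand the build directories back to
  # the non-root user ("builder" stands in for the actual user and group).
  chown -R builder:builder /workspaces/servo/target
  chown -R builder:builder /home/builder/.cargo /home/builder/.rustup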

Susam Pal Yesterday

Git Checkout, Reset and Restore

I have always used the git checkout and git reset commands to reset my working tree or index, but since Git 2.23 there has been a git restore command available for these purposes. In this post, I record how some of the 'older' commands I use map to the new ones. Well, the new commands aren't exactly new, since Git 2.23 was released in 2019, so this post is perhaps six years too late. Even so, I want to write this down for future reference.

It is worth noting that the old and new commands are not always equivalent. I'll talk more about this briefly as we discuss the commands. However, they can be used to perform similar tasks. Some of these tasks are discussed below.

To experiment quickly, we first create an example Git repository. Now we make changes to the files and stage some of the changes. We then add more unstaged changes to one of the staged files. At this point, the working tree and index look like this:

- one file has staged changes,
- a second file has both staged and unstaged changes,
- a third file has only unstaged changes,
- a fourth file is a new staged file.

In each experiment below, we will work with this setup. All results discussed in this post were obtained using Git 2.47.3 on Debian 13.2 (Trixie). As a reminder, we will always restore this experimental setup between experiments.

To discard the changes in the working tree and reset the files in the working tree from the index, I typically run:

  git checkout .

However, the modern way to do this is to use the following command:

  git restore .

Both commands leave the working tree and the index in the same state: the unstaged modifications are gone, while the staged changes remain. Both commands operate only on the working tree. They do not alter the index. Therefore the staged changes remain intact in the index.

Another common situation is when we have staged some changes but want to unstage them. First, we restore the experimental setup. I normally run the following command to do so:

  git reset

The modern way to do this is:

  git restore --staged .

Both commands leave the working tree and the index in the same state: nothing is staged any more, and all the changes appear as unstaged modifications in the working tree. The --staged (-S) option tells git restore to operate on the index (not the working tree) and reset the index entries for the specified files to match the version in HEAD. The unstaged changes remain intact as modified files in the working tree. With the --staged option, no changes are made to the working tree.

From the arguments we can see that the old and new commands are not exactly equivalent. Without any arguments, the git reset command resets the entire index to HEAD, so all staged changes become unstaged. Similarly, when we run git restore without specifying a commit, branch or tag using the --source (-s) option, it defaults to resetting the index from HEAD. The . at the end ensures that all paths under the current directory are affected. When we run the command at the top-level directory of the repository, all paths are affected and the entire index gets reset. As a result, both the old and the new commands accomplish the same result.

Once again, we restore the experimental setup. This time we not only want to unstage the changes but also discard the changes in the working tree. In other words, we want to reset both the working tree and the index from HEAD. This is a dangerous operation, because any uncommitted changes discarded in this manner cannot be restored using Git. The modern way to do this is:

  git restore --staged --worktree .

The working tree is now clean. The --worktree (-W) option makes the command operate on the working tree. The --staged (-S) option resets the index as described in the previous section. As a result, this command unstages any changes and discards any modifications in the working tree.
Note that when neither of these options is specified, --worktree is implied by default. That's why the bare git restore . command in the first experiment discards the changes in the working tree.

The following table summarises how the commands discussed above affect the working tree and the index, assuming the commands are run at the top-level directory of a repository:

  Command                              Working tree   Index
  git checkout . / git restore .       reset          unchanged
  git reset / git restore --staged .   unchanged      reset
  git restore --staged --worktree .    reset          reset

The git restore command is meant to provide a clearer interface for resetting the working tree and the index. I still use the older commands out of habit. Perhaps I will adopt the new ones in another six years, but at least I have the mapping written down now.
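For completeness, here is a minimal sketch of how one could recreate an experimental setup like the one described above. The file names are my own placeholders, since the post's originals did not survive extraction:

  git init demo && cd demo
  printf 'one\n' > alpha.txt
  printf 'two\n' > beta.txt
  printf 'three\n' > gamma.txt
  git add . && git commit -m 'Initial commit'
  echo staged >> alpha.txt && git add alpha.txt   # staged change only
  echo staged >> beta.txt && git add beta.txt
  echo unstaged >> beta.txt                       # staged and unstaged changes
  echo unstaged >> gamma.txt                      # unstaged change only
  echo new > delta.txt && git add delta.txt       # new staged file
  git status --short

The final status output shows one file with only staged changes (M ), one with both (MM), one with only unstaged changes ( M), and one new staged file (A ), matching the setup the experiments assume.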
