Latest Posts (20 found)

Kermit Roosevelt

I was at a school function the other day where the 2nd graders performed a bunch of Aesop’s Fables and it was great. It was a double-header with 3rd graders who then read prepared reports on famous people. It was a cross-disciplinary thing as the kids brought props from design class, costumes from performing arts, and did the speech in both Spanish and English. It was cute. A lot of astronauts and artists and stuff.

One kid did Theodore Roosevelt. I’m not a smart man, and I just had no idea this happened. 1912. He’s giving a speech in my old stomping grounds, Milwaukee, Wisconsin. Dude friggin shoots him in the chest. The bullet goes through a thick, folded-up bit of paper in his pocket, but then still into his body. Then he’s like “I’m good” and continues his speech for an hour. He recuperates for a couple of weeks, but they leave the bullet in his body and didn’t seem to care. Kind of a badass. No wonder he leaned into the “Bull Moose” thing.

Then the kid is like, and he had five kids, yadda, yadda, Kermit, yadda, yadda. I was like LOL, he named one of his kids Kermit. Turns out all of his kids led fascinating lives too! Kermit was an unhealthy kid, but ultimately went to Harvard and then did a bunch of literal jungle exploration with his dad (?!) and later Asia with his brother. … he postponed his marriage to join his father on a dangerous journey to the River of Doubt in Brazil. Both he and his father nearly died during this trip through the jungle. He fought in both World Wars, going to England to join the British Army and fight for them. Apparently, you can just do that? War breaks out, and you can just pick one of the countries and go there and fight for that side? WTF? He doesn’t make it all the way through WWII because of the health stuff, so they stick him up in Alaska, and he kills himself. Wild stuff.
Oh and speaking of his brother Theodore III… Along with his brother, Kermit, Roosevelt spent most of 1929 on a zoological expedition and was the first Westerner known to have shot a panda.


The Salt Eaters

Velma Henry is brought before Minnie Ransom for a healing. Velma, an activist who has become cynical of the movement and especially of the egocentric men who attempt to lead it, has recently channeled her cynicism into cutting her wrists and placing her head in the oven. Alive, wrists bandaged, gown flapping open in the back, she sits before a dozen friends and neighbors as Minnie and her spirit guide Old Wife try to bring her back. The book centers on this moment, sweeping backwards and forwards and around the Southern town where each of these people lives and works and hopes for better days. The opening question lingers through every page, perhaps unanswerable, or perhaps only to be answered by the whole: “Are you sure, sweetheart, that you want to be well?”

iDiallo Today

You Digg?

For me, being part of an online community started with Digg. Digg was the precursor to Reddit and the place to be on the internet. I never got a MySpace account, I was late to the Facebook game, but I was on Digg. When Digg redesigned their website (V4), it felt like a slap in the face. We didn't like the new design, but the community had no say in the direction. To make it worse, they removed the bury button. It's interesting how many social websites remove the ability to downvote. There must be a study somewhere that makes a sound argument for it, because it makes no sense to me. Anyway, when Digg announced they were back in January 2026, I quickly requested an invite. It was nostalgic to log in once more and see an active community building back up right where we left off. But then, just today, I read that they are shutting down. I had a single post in the technology sub. It was starting to garner some interest and then, boom! Digg is gone once more. The CEO said that one major reason was that they faced "an unprecedented bot problem." This is our new reality. Bots are now powered by AI and they are more disruptive than ever. They quickly circumvent bot detection schemes and flood every conversation with senseless text. It seems like there are very few places left where people can have a real conversation online. This is not the future I was looking for. I'll quietly write on my blog and ignore future communities that form. Rest in peace, Digg.

Sean Goedecke Yesterday

Big tech engineers need big egos

It’s a common position among software engineers that big egos have no place in tech 1 . This is understandable - we’ve all worked with some insufferably overconfident engineers who needed their egos checked - but I don’t think it’s correct. In fact, I don’t know if it’s possible to survive as a software engineer in a large tech company without some kind of big ego. However, it’s more complicated than “big egos make good engineers”. The most effective engineers I’ve worked with are simultaneously high-ego in some situations and surprisingly low-ego in others. What’s going on there?

Software engineering is shockingly humbling, even for experienced engineers. There’s a reason the jokes about this are so popular: the minute-to-minute experience of working as a software engineer is dominated by not knowing things and getting things wrong. Every time you sit down and write a piece of code, it will have several things wrong with it: some silly things, like missing semicolons, and often some major things, like bugs in the core logic. We spend most of our time fixing our own stupid mistakes.

On top of that, even when we’ve been working on a system for years, we still don’t know that much about it. I wrote about this at length in Nobody knows how large software products work, but the reason is that big codebases are just that complicated. You simply can’t confidently answer questions about them without going and doing some research, even if you’re the one who wrote the code. When you have to build something new or fix a tricky problem, it can often feel straight-up impossible to begin, because good software engineers know just how ignorant they are and just how complex the system is. You just have to throw yourself into the blank sea of millions of lines of code and start wildly casting around to try and get your bearings. Software engineers need the kind of ego that can stand up to this environment.
In particular, they need to have a firm belief that they can figure it out, no matter how opaque the problem seems; that if they just keep trying, they can break through to the pleasant (though always temporary) state of affairs where they understand the system and can see at a glance how bugs can be fixed and new features added 2 .

What about the non-technical aspects of the job? Nobody likes working with a big ego, right? Wrong. Every great software engineer I’ve worked with in big tech companies has had a big ego - though as I’ll say below, in some ways these engineers were surprisingly low-ego.

You need a big ego to take positions. Engineers love being non-committal about technical questions, because they’re so hard to answer and there’s often a plausible case for either side. However, as I keep saying, engineers have a duty to take clear positions on unclear technical topics, because the alternative is a non-technical decision maker (who knows even less) just taking their best guess. It’s scary to make an educated guess! You know exactly all the reasons you might be wrong. But you have to do it anyway, and ego helps a lot with that.

You need a big ego to be willing to make enemies. Getting things done in a large organization means making some people angry. Of course, if you’re making lots of people angry, you’re probably screwing up: being too confrontational or making obviously bad decisions. But if you’re making a large change and one or two people are angry, that’s just life. In big tech companies, any big technical decision will affect a few hundred engineers, and one of them is bound to be unhappy about it. You can’t be so conflict-averse that you let that stop you from doing it, if you believe it’s the right decision. In other words, you have to have the confidence to believe that you’re right and they’re wrong, even though technical decisions always involve unclear tradeoffs and it’s impossible to get absolute certainty.
You need a big ego to correct incorrect or unclear claims. When I was still in the philosophy world, the Australian logician Graham Priest had a reputation for putting his hand up and stopping presentations when he didn’t understand something that was said, and only allowing the seminar to continue when he felt like he understood. From his perspective, this wasn’t rude: after all, if he couldn’t understand it, the rest of the audience probably couldn’t either, so he was doing them a favor by forcing a clearer explanation from the speaker. This is obviously a sign of a big ego. It’s also a trait that you need in a large tech company. People often nod and smile their way past incorrect technical claims, even when they suspect they might be wrong - assuming that they’ve just misunderstood and that somebody else will correct it, if it’s truly wrong. If you are the most senior engineer in the room, correcting these claims is your job. If everyone in the room is so pro-social and low-ego that they go along to get along, decisions will get made based on flatly incorrect technical assumptions, projects will get funded that are impossible to complete, and engineers will burn weeks or months of their careers vainly trying to make these projects work. You have to have a big enough ego to think “actually, I think I’m right and everyone in this room is confused”, even when the room is full of directors and VPs.

All of this selects for some pretty high-ego engineers. But in order to actually succeed in these roles in large tech companies, you need to have a surprisingly low ego at times. I think this is why really effective big tech engineers are so rare: because it requires such a delicate balance between confidence and diffidence. To be an effective engineer, you need to have a towering confidence in your own ability to solve problems and make decisions, even when people disagree.
But you also need to be willing to instantly subordinate your ego to the organization, when it asks you to. At the end of the day, your job - the reason the company pays you - is to execute on your boss’s and your boss’s boss’s plans, whether you agree with them or not. Competent software engineers are allowed quite a lot of leeway about how to implement those plans. However, they’re allowed almost no leeway at all about the plans themselves. In my experience, being confused about this is a common cause of burnout 3 . Many software engineers are used to making bold decisions on technical topics and being rewarded for it. Those software engineers then make a bold decision that disagrees with the VP of their organization, get immediately and brutally punished for it, and are confused and hurt.

In fact, sometimes you just get punished and there’s nothing you can do. This is an unfortunate fact of how large organizations function: even if you do great technical work and build something really useful, you can fall afoul of a political battle fought three levels above your head, and come away with a worse reputation for it. Nothing to be done! This can be a hard pill to swallow for the high-ego engineers that tend to lead really useful technical projects.

You also have to be okay with having your projects cancelled at the last minute. It’s a very common experience in large tech companies that you’re asked to deliver something quickly, you buckle down and get it done, and then right before shipping you’re told “actually, let’s cancel that, we decided not to do it”. This is partly because the decision-making process can be pretty fluid, and partly because many of these asks originate from off-hand comments: the CTO implies that something might be nice in a meeting, the VPs and directors hustle to get it done quickly, and then in the next meeting it becomes clear that the CTO doesn’t actually care, so the project is unceremoniously cancelled 4 .
Nobody likes to work with a bully, or with someone who refuses to admit when they’re wrong, or with somebody incapable of empathy. But you really do need a strong ego to be an effective software engineer, because software engineering requires you to spend most of your day in a position of uncertainty or confusion. If your ego isn’t strong enough to stand up to that - if you don’t believe you’re good enough to power through - you simply can’t do the job. This is particularly true when it comes to working in a large software company. Many of the tasks you’re required to do (particularly if you’re a senior or staff engineer) require a healthy ego.

However, there’s a kind of catch-22 here. If it insults your pride to work on silly projects, or to occasionally “catch a stray bullet” in the organization’s political fights, or to have to shelve a project that you worked hard on and is ready to ship, you’re too high-ego to be an effective software engineer. But if you can’t take firm positions, or if you’re too afraid to make enemies, or you’re unwilling to speak up and correct people, you’re too low-ego. Engineers who are low-ego in general can’t get stuff done, while engineers who are high-ego in general get slapped down by the executives who wield real organizational power. The most successful kind of software engineer is therefore a chameleon: low-ego when dealing with executives, but high-ego when dealing with the rest of the organization 5 .

1. What do I mean by “ego”, in this context? More or less the colloquial sense of the term: a somewhat irrational self-confidence, a tendency to believe that you’re very important, the sense that you’re the “main character”, that sort of thing. ↩
2. Why is this “ego”, and not just normal confidence? Well, because of just how murky and baffling software problems feel when you start working on them. You really do need a degree of confidence in yourself that feels unreasonable from the inside. It should be obvious, but I want to explicitly note that you don’t just need ego: you also have to be technically strong enough to actually succeed when your ego powers you through the initial period of self-doubt. ↩
3. I share the increasingly-common view that burnout is not caused by working too hard, but by hard work unrewarded. That explains why nothing burns you out as hard as being punished for hard work that you expected a reward for. ↩
4. It’s more or less exactly this scene from Silicon Valley. ↩
5. This description sounds a bit sociopathic to me. But, on reflection, it’s fairly unsurprising that competent sociopaths do well in large organizations. Whether that kind of behavior is worth emulating or worth avoiding is up to you, I suppose. ↩

Stratechery Yesterday

2026.11: Winners, Losers, and the Unknown

Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week. This week’s Stratechery video is on Anthropic and Alignment.

Integration and AI. One of the most important and longest-running questions about AI has been whether or not models would be commodities; Microsoft once bet on its integration with OpenAI, but in recent years has made the bet that the infrastructure it can build around models will matter more than the models themselves. However, the most recent evidence — particularly Copilot Cowork — is that the companies best able to harness (pun intended) model capabilities are the model makers themselves. If none of that makes sense, Andrew and I do a much more extensive deep dive on these different layers of the evolving AI value chain on this week’s episode of Sharp Tech. — Ben Thompson

The Team Test and a Basketball Disgrace. On Greatest of All Talk, we thought the news of the week would be the return of Jayson Tatum for the Boston Celtics, which provided a delightful excuse to take stock of the Celtics, Wemby’s gravity-defying Spurs, Shai’s Thunder, KD’s Rockets and the NBA’s field of title contenders using Ben’s very scientific Capital-T Team Test for contenders. That was a great episode. Unfortunately, on the follow-up Friday, we had to discuss the crime against basketball decency that took place in Miami Tuesday night. Come for the Team Test joy, then, and stay for Erik Spoelstra outrage (and also check out Ben Golliver’s column about the calamity on his new Substack). — Andrew Sharp

The US, China and Iran.
The past two weeks in the China policy space have been full of debates over the implications of the war in Iran for China specifically, and the U.S.-China relationship generally. I wrote about all of it on Sharp Text this week, including thoughts on some takes from last year that haven’t aged well, and why, with respect to China, the war in Iran is best understood as the latest in a succession of U.S.-led body blows to Beijing’s global interests. At least over the past 12 months, countering China has been a consideration in almost everything the U.S. has done in the foreign policy space. — AS

MacBook Neo, The (Not-So) Thin MacBook, Apple and Memory — The MacBook Neo was built to be cheap; that it is still good is not only a testament to Apple Silicon, but also to the fact that the most important software runs in the cloud.

Copilot Cowork, Anthropic’s Integration, Microsoft’s New Bundle — Microsoft is seeking to commoditize its complements, but Anthropic has a point of integration of its own; it’s good enough that Microsoft is making a new bundle on top of it.

Oracle Earnings, Oracle’s Cloud Growth, Oracle’s Software Defense — Oracle crushed earnings in a way that speaks not only to the secular AI wave it is riding but also to Oracle’s strong position.

An Interview with Robert Fishman About the Current State of Hollywood — An interview with MoffettNathanson’s Robert Fishman about the current state of Hollywood, including Netflix, Paramount, YouTube, Disney, and Amazon.

Loud and Clear — The War in Iran is not entirely about China, but it’s definitely about China.
MacBook Neo Review
Designing for the Low End
The Wildly Infectious Banana Plague
The ‘Raising a Lobster’ Frenzy; Iran and US-China as Trump’s Visit Looms; Two Sessions Takeaways
Tatum and the Team Test, The Spurs Continue to Defy Young Team Gravity, Russ Takes Aim at Kings Reporters
Spo and Bam and a Basketball Betrayal, An SGA Early Warning System, Kawhi, Luka and The Other MVP Candidate
Nerding Out with the Neo, Claude and the Integration Question, The End of Coding Language History


Premium: The Hater's Guide To The SaaSpocalypse

Soundtrack: The Dillinger Escape Plan — Black Bubblegum

To understand the AI bubble, you need to understand the context in which it sits, and that larger context is the end of the hyper-growth era in software that I call the Rot-Com Bubble. Generative AI, at first, appeared to be the panacea — a way to create new products for software companies to sell (by connecting their software to model APIs), a way to sell the infrastructure to run it, and a way to create a new crop of startups that could be bought or sold or taken public.

Venture capital hit a wall in 2018 — vintages after that year are, for the most part, stuck at a TVPI (total value to paid-in capital, basically the money you get back for each dollar you invested) of 0.8x to 1.2x, meaning that you’re making somewhere between 80 cents and $1.20 for every dollar. Before 2018, Software As A Service (SaaS) companies had had an incredible run of growth, and it appeared that basically any industry could have a massive hypergrowth SaaS company, at least in theory. As a result, venture capital and private equity have spent years piling into SaaS companies, because they all had very straightforward growth stories and replicable, reliable, and recurring revenue streams.

Between 2018 and 2022, 30% to 40% of private equity deals (as I’ll talk about later) were in software companies, with firms taking on debt to buy them and then lending them money in the hopes that they’d all become the next Salesforce, even if none of them will. Even VC remains SaaS-obsessed — for example, about 33% of venture funding went into SaaS in Q3 2025, per Carta. The Zero Interest Rate Policy (ZIRP) era drove private equity into fits of SaaS madness, with SaaS PE acquisitions hitting $250bn in 2021. Too much easy access to debt and too many Business Idiots believing that every single software company would grow in perpetuity led to the accumulation of some of the most-overvalued software companies in history.
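For readers who don’t live in venture-land: TVPI is just the fund’s total value (cash already distributed plus whatever is still on the books) divided by the capital investors paid in. A minimal sketch, using made-up fund numbers rather than anything from this piece:

```python
def tvpi(distributions: float, residual_value: float, paid_in: float) -> float:
    """Total Value to Paid-In capital: dollars returned (realized cash plus
    unrealized/paper value) per dollar investors put into the fund."""
    return (distributions + residual_value) / paid_in

# Hypothetical post-2018 vintage: $40M already distributed to investors,
# $60M of residual (paper) value, against $100M of paid-in capital.
print(tvpi(40e6, 60e6, 100e6))  # 1.0 -> investors have merely broken even on paper
```

The 0.8x–1.2x band described above therefore means a fund’s investors are sitting somewhere between a 20% loss and a 20% gain, much of it unrealized, after years of fund life.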
As the years have gone by, things have slowed down, and now private equity is stuck with tens of billions of dollars of zombie SaaS companies that it can’t take public or sell to anybody else, their values decaying far below what it paid, which is a very big problem when most of these deals were financed with debt. To make matters worse, 9fin estimates that IT and communications sector companies (mostly software) accounted for 20% to 25% of private credit deals tracked, with 20% of loans issued by public BDCs (like Blue Owl) going to software firms. Things look grim. Per Bain, the software industry has been in decline for years, with both growth and Net Revenue Retention — how much you’re making from existing customers expanding their spend, minus what you’re losing from customers leaving or cutting spend — falling:

It’s easy to try and blame any of this on AI, because doing so is a far more comfortable story. If you can say “AI is causing the SaaSpocalypse,” you can keep pretending that the software industry’s growth isn’t slowing. That isn’t what’s happening. No, AI is not replacing all software. That is not what is happening. Anybody telling you this is either ignorant or actively incentivized to lie to you.

The lie starts simple: that the barrier to developing software is “lower” now, either “because anybody can write code” or because “anybody can write code faster.” As I covered a few weeks ago … From what I can gather, the other idea is that AI can “simply automate” the functions of a traditional software company, and “agents” can replace the entire user experience, with users simply saying “go and do this” and something would happen. Neither of these things is true, of course — nobody bothers to check, and nobody writing about this stuff gives a fuck enough to talk to anybody other than venture capitalists or CEOs of software companies that are desperate to appeal to investors.
To be more specific, the CEOs that you hear desperately saying that they’re “modernizing their software stack for AI” are doing so because investors, who also do not know what they are talking about, are freaking out that they’ll get “left behind” because, as I’ve discussed many times, we’re ruled by Business Idiots that don’t use software or do any real work.

There are also no real signs that this is actually happening. While I’ll get to the decline of the SaaS industry’s growth cycle, if software were actually being replaced we’d see direct proof — massive contracts being canceled, giant declines in revenue, and in the case of any public SaaS company, 8-K filings saying that major customers had shifted business away from traditional software. Midwits with rebar chunks in their gray matter might say that “it’s too early to tell and the contract cycle has yet to shift,” but, again, we’d already have signs, and you’d know this if you knew anything about software. Go back to drinking Sherwin Williams and leave the analysis to the people who actually know stuff!

We do have one sign though: nobody appears to be able to make much money selling AI, other than Anthropic (which made $5 billion in its entire existence through March 2026 on $60 billion of funding) and OpenAI (which I believe made far less than $13 billion, based on my own reporting). In fact, it’s time to round up the latest and greatest in AI revenues. Hold onto your hats, folks!

Riddle me this, Batman: if AI were so disruptive to all of these software companies, would it not be helping them disrupt themselves? If it were possible to simply magic up your own software replacement with a few prompts to Claude, why aren’t we seeing any of these companies do so? In fact, why do none of them seem to be able to do very much with generative AI at all?

The point I’m making is fairly simple: the whole “AI SaaSpocalypse” story is a cover-up for a much, much larger problem.
Reporters and investors who do not seem to be able to read or use software are conflating the slowing growth of SaaS companies with the growth of AI tools, when what they’re actually seeing is the collapse of the tech industry’s favourite business model, one that’s become the favourite chew-toy of the Venture Capital, Private Equity and Private Credit Industries.

You see, there are tens of thousands of SaaS companies in everything from car washes to vets to law firms to gyms to gardening companies to architectural firms. Per my Hater’s Guide To Private Equity: You’d eventually either take that company public or, in reality, sell it to a private equity firm. Per Jason Lemkin of SaaStr:

The problem is that SaaS valuations were always made with the implicit belief that growth was eternal, just like the rest of the Rot Economy, except SaaS, at least for a while, had mechanisms to juice revenues, and easy access to debt. After all, annual recurring revenues are stable and reliable, and these companies were never gonna stop growing, leading to the creation of recurring revenue lending: To be clear, this isn’t just for leveraged buyout situations, but I’ll get into that later. The point I’m making is that the setup is simple:

You see, nobody wants to talk about the actual SaaSpocalypse — the one that’s caused by the misplaced belief that any software company will grow forever. Generative AI isn’t destroying SaaS. Hubris is.

Alright, let’s do this one more time. SaaS — Software As A Service — is both the driving force and seedy underbelly of the tech industry. It’s a business model that sells itself on a seemingly good deal. Instead of paying upfront for an expensive software license and then again when future updates happen, you pay a “low” monthly fee that allows you to get (in theory) the most up-to-date (in theory) and well-maintained (in theory) version of whatever it is you’re using.
It also (in theory) means that companies need to stay competitive to keep your business, because you’re committing a much smaller amount of money than a company might make from a single license. Over here in the real world, we know the opposite is true. Per The Other Bubble, a piece I wrote in September 2024:

It’s hard to say exactly how large SaaS has become, because SaaS is in basically everything, from whatever repugnant productivity software your boss has insisted you need, to every consumer app now having some sort of “Plus” package that paywalls features that used to be free. Nevertheless, “SaaS” in most cases refers to business software, with the occasional conflation with the nebulous form of “the enterprise,” which really means “any company larger than 500 people.”

McKinsey says it was worth “$3 trillion” in 2022 “after a decade of rapid growth”; Jason Lemkin and IT planning software company Vena say it has revenues somewhere between $300 billion and $400 billion a year. Grand View Research has the global business software and services market at around $584 billion, and the reason I bring that up is that basically all business software is now SaaS, and these companies make an absolute shit ton from charging service fees.

“Perpetual licenses” — as in something you pay for once, and use forever — are effectively dead, with a few exceptions such as Microsoft Windows, Microsoft Office, and some of its server and database systems. Adobe killed them in 2014 (and a few more in 2022), Oracle killed them in 2020, and Broadcom killed them in 2023, the same year that Citrix stopped supporting those unfortunate enough to have bought them before they went the way of the dodo in 2019.

To quote myself again: in 2011, Marc Andreessen said that “software is eating the world.” And he was right, but not in a good way.
Andreessen’s argument was that software would eat every business model: Every single company you work with that has any kind of software now demands you subscribe to it, and the ramifications of them doing so are more significant than you’ve ever considered.

That’s because SaaS is — or, at least, was — a far-more-stable business model than selling people something once. Customers are so annoying. When they buy something, they tend to use it until it stops working, and if you made the product well, that might mean they only pay you once. SaaS fixes this problem by giving them only one option — to pay you a nasty little toll every single month, or ideally once a year, on a contractual basis, in a way that’s difficult to cancel. Sadly, the success of the business software industry turned everything into SaaS.

Recently, I tried to cancel my membership to Canva, a design platform that sort of works well when you want it to but sometimes makes your browser crash. Doing so required me to go through no less than four different screens, all of which required me to click “cancel” — offers to give me a discount, repeated requests to email support, then a final screen where the cancel button moved to a different place. This is nakedly evil. If you are somebody high up at Canva, I cannot tell you to go fuck yourself hard enough! This is a scummy way to do business and I would rather carve a meme on my ass than pay you another dollar! It’s also, sadly, one of the tech industry’s most common (and evil!) tricks.

Everybody got into SaaS because, for a while, SaaS was synonymous with growth.
Venture capitalists invested in businesses with software subscriptions because it was an easy way to say “we’re gonna grow so much,” with massive sales teams that existed to badger potential customers, or “customer success managers” that operate as internal sales teams to try and get you to start paying for extra features, some of which might even be useful, rather than just helping somebody hit their sales targets.

The other problem is how software is sold. In the excellent Brainwash An Executive Today, Nik Suresh broke down the truth behind a lot of SaaS sales — that the target customer is the purchaser at a company, who is often not the end user, meaning that software is often sold in a way that’s entirely divorced from its functionality. This means that growth, especially as things have gotten desperate, has come from a place of conning somebody with money out of it rather than studiously winning a customer’s heart. And, as I’ve hinted at previously, the only thing that grows forever is cancer.

In today’s newsletter I am going to walk you through the contraction — and in many cases collapse — of tech’s favourite business model, caused not by any threat from Large Language Models but by the brutality of reality, gravity and entropy. Despite the world being anything but predictable or reliable, the entire SaaS industry has been built on the idea that the good times would never, ever stop rolling. I guess you’re probably wondering why that’s a problem! Well, it’s quite simple (emphasis mine):

That’s right folks, 40% of PE deals between 2018 and 2022 were for software companies, at the very same time venture capital fund returns got worse. Venture and private equity have piled into an industry they believed was taking off just as it started to slow down. The AI bubble is just part of the wider collapse of the software industry’s growth cycle.
This is The Hater’s Guide To The SaaSpocalypse, or “Software As An Albatross.” In its Q4 2025 earnings, IBM said its total “generative AI book of business since 2023” hit $12.5 billion — of which 80% came from its consultancy services, which consist mostly of selling other people’s AI models to other businesses. It then promptly said it would no longer report this as a separate metric going forward. To be clear, this company made $67.5 billion in 2025, $62.8 billion in 2024, $61.9 billion in 2023 and $60.5 billion in 2022. Based on those numbers, it’s hard to argue that AI is having much of an impact at all, and if it were, it would remain broken out. Scummy consumer-abuser Adobe tries to scam investors and the media alike by referring to “AI-influenced” revenue — meaning literally any product with a kind of AI plugin you can pay for (or have to pay for as part of a subscription) — and “AI-first” revenue, which refers to actual AI products like Adobe Firefly. It’s unclear how much these things actually make. According to Adobe’s Q3 FY2025 earnings, “AI-influenced” ARR was “surpassing” $5 billion (so roughly $1.25 billion in a quarter, though Adobe does not actually break this out in its earnings report), and “AI-first” ARR was “already exceeding [its] $250 million year-end target,” which is a really nice way of saying “we maybe made about $60 million a quarter for a product that we won’t shut the fuck up about.” For some context, Adobe made $5.99 billion in that quarter, which makes this (assuming AI-first revenue was consistent) roughly 1% of its revenue.
Adobe then didn’t report its AI-first revenue again until Q1 FY2026, when it revealed it had “more than tripled year over year” without disclosing the actual amount, likely because a year ago its AI-first revenue was $125 million ARR, though that number also included “add-on innovations.” In any case, $375 million ARR works out to $31.25 million a month, or (even though it wasn’t necessarily this high for the entire quarter) $93.75 million a quarter, or roughly 1.465% of its $6.40 billion in quarterly revenue in Q1 FY2026. Bulbous Software-As-An-Encumbrance juggernaut Salesforce revealed in its latest earnings that its Agentforce and Data 360 platforms (the latter not an AI product, just the data resources required to use its services) “exceeded” $2.9 billion… but $1.1 billion of that ARR came from its acquisition of Informatica Cloud (which is not a fucking AI product, by the way!). Agentforce ARR ended up being a measly $800 million, or about $66 million a month for a company that makes $11.2 billion a quarter. It isn’t clear what period of time this ARR refers to. Microsoft, Google and Amazon do not break out their AI revenues. Box — whose CEO Aaron Levie appears to spend most of his life tweeting vague things about AI agents — does not break out AI revenue. Shopify, the company that mandates you prove that AI can’t do a job before asking for resources, does not break out AI revenue. ServiceNow, whose CEO back in 2022 told his executives that “everything they do [was now] AI, AI, AI, AI, AI,” said in its Q4 2025 earnings that its AI-powered “Now Assist” had doubled its net new Annual Contract Value year-over-year, but declined to say how much that was, after saying in mid-2025 that it wanted a billion dollars in revenue from AI in 2026.
Apparently it told analysts that it had hit $600 million in ACV in March (per The Information)...in the fourth quarter of 2025, which suggests that this is not actually $600 million of revenue quite yet, nor do we know what that revenue costs. What we do know is that ServiceNow had $3.46 billion in 2025, and its net income has been effectively flat for multiple quarters, and basically identical since 2023. Intuit, a company that vibrates with evil, had the temerity to show pride that it had generated "almost $90 million in AI efficiencies in the first half of 2025,” a weird thing to say considering this was a statement from March 2026. Anyway, back in November 2025 it agreed to pay over $100 million for model access to integrate ChatGPT. Great stuff, everyone. Workday, a company that makes about $2.5 billion a quarter in revenue, said it “generated over $100 million in new ACV from emerging AI products, [and that] overall ARR from these solutions was over $400 million.” $400 million in ARR works out to about $33 million a month. Atlassian, which just laid off 10% of its workforce to “self-fund further investment in AI,” does not break out its AI revenues. Tens of thousands of SaaS companies were created in the last 20 years. These companies, for a while, had what seemed to be near-perpetual growth. This led to many, many private equity buyouts of SaaS companies, pumping them full of debt based on their existing recurring revenue and the assumption that they would never, ever stop growing. I will get into this later. It’s very bad. When growth slowed, the reaction was for these companies to raise venture debt — loans based on their revenue — and per Founderpath, 14 of the largest Business Development Companies loaned $18 billion across 1,000 companies in 2024 alone, with an average loan size of $13 million. This includes name-brand companies like Cornerstone OnDemand and Dropbox, the latter of which took on a $34.4 million debt facility with an 11% interest rate.
One has to wonder why a company that had $643 million in revenue in Q4 2024 needed that debt.
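Since all of these disclosures lean on "ARR" rather than actual revenue, here is a quick back-of-the-envelope sketch (in Python, using only figures quoted above) of how ARR converts into monthly and quarterly run rates. ARR is a run rate, so the conversions are just division:

```python
# Back-of-the-envelope ARR math: annual recurring revenue is a run rate,
# so divide by 12 for a monthly figure and by 4 for a quarterly one.

def arr_monthly(arr: float) -> float:
    """Monthly run rate implied by an ARR figure."""
    return arr / 12

def arr_quarterly(arr: float) -> float:
    """Quarterly run rate implied by an ARR figure."""
    return arr / 4

def share_of_quarter(arr: float, quarterly_revenue: float) -> float:
    """Fraction of a company's quarterly revenue an ARR figure represents."""
    return arr_quarterly(arr) / quarterly_revenue

# Adobe: $375M "AI-first" ARR against $6.40B quarterly revenue.
print(arr_monthly(375e6))               # 31,250,000 a month
print(share_of_quarter(375e6, 6.40e9))  # ~0.0146, i.e. roughly 1.5%

# Salesforce Agentforce: $800M ARR → ~$66M a month.
print(round(arr_monthly(800e6) / 1e6, 1))

# Workday: $400M ARR from "emerging AI products" → ~$33M a month.
print(round(arr_monthly(400e6) / 1e6, 1))
```

The point of running the numbers yourself: every "billions in AI ARR" headline above shrinks to tens of millions a month once you divide through.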

Chris Coyier Yesterday

AI is my CMS

I mean… it’s not really, of course. I just thought such a thing would start to trickle into people’s minds as agentic workflows start to take hold. Has someone written "AI is my CMS" yet? Feels inevitable. Like why run a build tool when you can just prompt another page? AI agents are already up in your codebase fingerbanging whole batches of files on command. What’s the difference between a CMS taking some content and smashing it into some templates and an AI doing that same job instead? Isn’t less tooling good? I had missed that this particular topic already had quite a moment in the sun this past December. Lee Robinson wrote Coding Agents & Complexity Budgets. Without calling it out by name, Lee basically had a vibe-coding weekend where he ripped Sanity out of cursor.com. I don’t think Lee is wrong for this choice. Spend some money to save some money. Remove some complexity. Get the code base more AI-ready. Yadda yadda. Even though Lee didn’t call out Sanity, they noticed and responded. They also make some good and measured points, I think. Which makes this a pretty great blog back-and-forth, by the way — you love to see it. Some of their argument as to why it can be the right choice to have Sanity is that some abstraction and complexity can be good, actually, because building websites from content can be complicated, especially as time and scale march on. And if you rip out a tool that does some of it, only to re-build many of those features in-house, what have you really gained? TIME FOR MY TWO CENTS. The language feels a little wrong to me. I think if you’re working with Markdown files as content in a Next.js app… that’s already a CMS. You didn’t rip out a CMS, you ripped out a cloud database. Yes, that cloud database does binary assets also, and handles user management, and has screens for CRUDing the content, but to me it’s more of a cloud data store than a CMS. The advantage Lee got was getting the data and assets out of the cloud data store.
I don’t think they were using stuff like the fancy GROQ language to get at their content in fine-grained ways. It’s just that cursor.com happened to not really need a database, and in fact was using it for things they probably shouldn’t have been (like video hosting). Me, I don’t think there is one right answer. If keeping content in Markdown files and building sites by smashing those into templates is wrong, then every static site generator ever built is wrong (🙄). But keeping content in databases isn’t wrong either. I tend to lean that way by default, since the power you get from being able to query is so obviously and regularly useful. Maybe they are both right in that having LLM tools that have the power to wiggleworm their way into the content no matter where it is, is helpful. In the codebase? Fine. In a DB that an MCP can access? Fine.
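For what it’s worth, the "smashing content into templates" job that a CMS or static site generator does is tiny at its core. Here’s a minimal sketch (Python, with made-up file conventions; a real SSG would parse front matter and render Markdown properly):

```python
from pathlib import Path
from string import Template

# A static site generator at its most basic: read content files,
# pour each one into an HTML template, write the result out.
TEMPLATE = Template("<html><body><h1>$title</h1><article>$body</article></body></html>")

def build_site(content_dir: Path, out_dir: Path) -> None:
    out_dir.mkdir(parents=True, exist_ok=True)
    for post in sorted(content_dir.glob("*.md")):
        # Convention (invented for this sketch): first line is the title,
        # the rest is the body.
        title, _, body = post.read_text().partition("\n")
        html = TEMPLATE.substitute(title=title.lstrip("# "), body=body.strip())
        (out_dir / f"{post.stem}.html").write_text(html)

# Whether a human, a cron job, or an AI agent triggers this "build" is
# incidental — the content-to-template transformation is the same either way.
```

Which is kind of the whole debate in miniature: the transform is trivial; the value of a CMS (or a cloud data store, or an agent) is everything wrapped around it.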

Robin Moffatt Yesterday

Evaluating Claude's dbt Skills: Building an Eval from Scratch

I wanted to explore the extent to which Claude Code could build a data pipeline using dbt without iterative prompting. What difference did skills, models, and the prompt itself make? I’ve written in a separate post about what I found ( yes it’s good; no it’s not going to replace data engineers, yet ). In this post I’m going to show how I ran these tests (with Claude) and analysed the results (using Claude), including a pretty dashboard (created by Claude):

Jeff Geerling Yesterday

Restoring an Xserve G5: When Apple built real servers

Recently I came into possession of a few Apple Xserves. The one in question today is an Xserve G5, RackMac3,1, which was built when Apple was at the top—and bottom—of its PowerPC era. This isn't the first Xserve—that honor belongs to the G4 1 . And it wasn't the last—there were a few generations of Intel Xeon-powered RackMacs that followed. But in my opinion, it was the most interesting. Unfortunately, being manufactured in 2004, this Mac's Delta power supply suffers from the Capacitor Plague. The PSU tends to run hot, and some of the capacitors weren't even 105°C-rated, so they tend to wear out, especially if the Xserve was running high-end workloads.


Patrick Rhone

This week on the People and Blogs series we have an interview with Patrick Rhone, whose blog can be found at patrickrhone.net. Tired of RSS? Read this in your browser or sign up for the newsletter. People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. My name is Patrick Rhone. When I'm not trying to be the best husband and father I can be, I'm mostly known as a writer, blogger, technology consultant, speaker, mental health advocate, and general c-list internet personality. I also restore old houses as a professional hobby. I do volunteer circus rigging at a performing youth circus school as a less professional one. The very first post on my blog, Rhoneisms, is dated November 7th, 2003. Of course, I had been blogging before that, and there used to be posts dated slightly earlier. But my blog actually began as an internally hosted one at the college I used to work for, and I lost those earlier posts when I moved to a different platform and brought it public… Gosh, that seems like it was just yesterday. Not 22 years ago. Such is life. My main blog has had many different points of focus over the years. From geeky, mainly Apple, tech stuff to GTD-driven personal productivity stuff, to practical/actionable life advice stuff, to the anything-I'm-interested-in sort of thing it is now. And that’s exactly what a blog should be — a reflection of one's interests and attention over time. A reflection of who one is right now and where they've been. Blogs are living things that should grow at the same rate we do. I say "main" blog above because I do have a couple of other topic-specific blogs (one for my home restoration work and The Cramped, which is not often updated these days). I really just post anything I feel like. Links to things I find interesting. Essays of things that take me a bit longer to express. Short thoughts I'm having. All sorts of things. I’m 58 years old.
The internet was not even anything regular people could use until I was in my early 20s. My first "online" writing was things I posted to dial-up BBS systems/communities. In the old days of the internet, it was common for a blog to be just links or thoughts, much like mine is today. There was no such thing as content management systems (like Movable Type or WordPress) or services. No such thing as blogging software. Things were hand-coded HTML. There were no “rules” about what a post had to look like or be. Here’s Kottke.org from 2001. No titles. No format. Just some thoughts and a bunch of links for the day. This is the feel I’m trying to recapture. I generally do not have a specific creative environment. I believe the best inspiration can strike anywhere at any time for the type of blogging I'm doing. That said, for my longer-form essays, in general my process is that I think about something for a very long time and then suddenly, out of nowhere, often at the most inconvenient time, what I call "writing brain" kicks in and I must find something — anything — to get it written down ASAP. It appears fully formed when that happens. So, no drafts. My blog and domain registration is through Dreamhost, who I've used for too long to remember (2012 maybe). It runs on WordPress. If I'm on iOS I use Drafts to post to it. On my Mac, I use MarsEdit. I very rarely use the WordPress web interface for posting. Only if I need to jump in and edit the HTML of something that's complicated to format otherwise. Nope. I'm very happy with where it is now and how it exists. Like I said, a blog should grow and change at the same rate I do, so, who knows, that could change tomorrow, and when/if it does, I'll change it accordingly. Back-of-the-napkin calculation: My general unlimited hosting for all my domains (I have a lot), sites, etc. is $39.95 a month. It would be too difficult to break down how much it is just to host the one blog out of that.
It doesn't generate any direct revenue really and I don't do it for that reason. I suppose people who enjoy my work will buy one of my books or something but it is not for this that I do it. I blog because it is the best way for me to catalog my interests and thinking over time. If others want to monetize their work that's their choice and I have no real opinion on it. There are a few bloggers that I support with my dollars in different ways and I'm happy to do so. I remain a fan of Nicholas Bate who currently blogs at Hunter Gatherer 21C . In general, I enjoy his thoughts and insights. I also like his style of blogging. In many ways similar to mine (and I'd be remiss if I did not admit that mine is somewhat inspired by his). I'd recommend him for sure. But, there are too many people I absolutely adore and admire to list here. Some of which have already appeared in this series. Annie Muller , Rebecca Toh , Kurt Harden , my friend Jamie Thingelstad . Obviously also internet famous ones like Jason Kottke and John Gruber . The wonderful thing about the internet and the resurgence of blogging is that there is an endless amount of great blogs and bloggers out there. There is something and someone for everyone. Google your interests and find your people. Well, I'm writing this in the middle of a tumultuous time not just in my country but in my city and local community. It is the end of January in Minneapolis/Saint Paul and anyone reading this - even long after - need only google to know what is happening here. And, I can tell you anything you do see or read or hear about it is but one of hundreds or thousands of stories. In other words, my mind is a bit pre-occupied right now. But what I do want people to know about that is that despite everything our own federal government is doing to our state, it is only making our local communes stronger. 
We are deepening our ties with our neighbors, developing mutual aid networks to ensure care for the most vulnerable, and building peaceful resistance rapid response groups on a hyper-local level. So this is what I want people to know: The worst in them is bringing out the best in us. Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 132 interviews. People and Blogs is possible because kind people support it.

iDiallo Yesterday

It's Work that taught me how to think

On the first day of my college CS class, the professor walked in holding a Texas Instruments calculator above his head like Steve Jobs unveiling the first iPhone. The students sighed. They had expected computer science to involve little math. The professor told us he had helped build that calculator in the eighties, then spent a few minutes talking about his career and the process behind it. Then he plugged the device into his computer, opened a terminal on the projector, and pushed some code onto it. A couple of minutes later, he unplugged the cable, powered on the calculator, and sure enough, Snake was running on it. A student raised his hand. The professor leaned forward, eager for the first question of the semester. "Um... is this going to be on the test?" While the professor was showing us what it actually means to build something, to push code onto hardware and watch it come alive, his students were already thinking about the grade. About the exit. The experience meant nothing unless it converted into points. That was college for me. Everyone was chasing a passing grade to get to the next class. Learning was mostly incidental. The professors tried, but our incentives were completely misaligned. Talk of higher education becoming obsolete was already in the air, especially in CS. As enthusiastic as I had been when I started, that enthusiasm got chipped away one class at a time until the whole thing felt mechanical. Something I just had to get through. I dropped out shortly after the C++ class, which had taught me almost nothing about programming anyway. I was broke and could only pay for so many courses out of pocket. So I took my skills, such as they were, to a furniture store warehouse. My day job. When customers bought furniture, we pulled their merchandise from the back and loaded it into their trucks. They signed a receipt, we kept a copy, and those copies went into boxes labeled by month and date. 
At the end of the year, the boxes went onto a pallet, the pallet got shrink-wrapped, and a forklift tucked it away in a high storage compartment. Whenever an accountant called requesting a signed copy, usually because a customer was disputing a charge, the whole process ran in reverse. Someone licensed on the forklift had to retrieve the pallet, we cut the shrink-wrap, found the right box, and sifted through hundreds of receipts until we found the one we needed. The process took hours. One day I decided enough was enough. After my shift, I grabbed the day's signed receipts and fed them into a scanner. For each one, I created two images: a full copy and a cropped version showing just the top of the receipt where the order number was printed. I found a pirated OCR application, then used VBScript and a lot of Googling to write a script that read the order number and renamed each image file to match it. I also wrote my first Excel macros, also in VBScript. When everything was wired together, I had a working system. Each evening, I would enter the day's order numbers, scan the receipts, and let the script match them up with a preview attached. When the OCR failed to read a number, the file was renamed "unknown" with an incrementing number so I could verify those manually. From then on, when an accountant called, I could find and email them the receipt in under a minute, without ever leaving my desk. When I left that warehouse, I was ready to call myself a programmer. That one month building that system taught me more than two years of school ever had. But the education didn't stop there. Years later, now considering myself an experienced developer, a manager handed me what looked like a giant power strip. It had a dozen outlets, and was built for stress-testing set-top boxes in a datacenter. "Can you set this up?" he asked. A few years earlier, I would have panicked. 
I would have gone looking for someone who already knew the answer, or waited until the problem solved itself. But something had changed in me since the warehouse. Unfamiliar problems no longer felt like walls. They felt like the first receipt I ever fed into a scanner. It was just something to pull apart until it made sense. I had never worked with hardware. I had no idea where to start. But I didn't need to know where to start. I just needed to start. I brought the device to my desk and inspected every inch of it. I wasn't looking for the answer exactly. Instead, I was looking for the first question. And I found one: an RJ45 port on one end. Not exactly the programming interface you'd expect, but it was there for a reason. I looked up the model number of the device, downloaded the manual, and before long I was connected via Telnet, sending commands and reading output in the terminal. Problem solved. Not because I knew anything about hardware going in, but because I had learned to spend time with unfamiliar problems. None of this was in the syllabus. Nobody graded me on it. There was no partial credit for getting halfway there. That's the difference between school and work. School optimizes for the test, like that student who couldn't look past the grade to see what was actually being shown to him. School teaches you the shape of a problem and gives you a method to solve it. Work, on the other hand, doesn't care about the test. Work hands you something broken, or inefficient, or completely unfamiliar, and simply waits. Often, there are no right answers at work. You just have to build your own solution that satisfies the requirement. You figure things out, not because you memorized the right answer, but because you thought your way through it. Then something changes in how you approach every problem after that. You don't flinch at the next problem. You understand that facing unfamiliar problems is the job.

Kev Quirk Yesterday

My WordPress - A Private In-Browser WordPress Install

I saw this while perusing my RSS feeds last night and thought it was interesting. In all honesty, I've completely moved away from WordPress since all the drama a while ago. But this is quite cool - My WordPress is basically a version of WordPress that runs entirely in your browser. You visit my.wordpress.net, it downloads some files to your machine, and you have WordPress - no install, no sign-up. Just a private WordPress instance in your browser that only you can visit. Obviously if you reset your browser, or switch to another browser, you will lose your instance, but there are backup/restore options available. I think it might be good as a private journal or something, but I'm sure other people will find some interesting use cases for it. Either way, pretty cool. Read more about My WordPress. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

Justin Duke 2 days ago

21 Bridges

A derivative, predictable, competent crime thriller. If you read that sentence and think "good," then you will like this film, and the opposite is true as well. The banality of that sentence points to the banality of everything about this film — it seems to avoid contrivance and missteps and misfires more than it goes out of its way to court success. Boseman is wonderful, but his character is given absolutely nothing to do besides act with competence and rationality. The standout — the one character both written and portrayed with any sense of moral valence — is Taylor Kitsch as a trigger-happy dude who is clearly insane but also cares deeply about his companion. When thinking about this movie I am drawn to a comparison with the-rip, given that I watched it so recently, and I find myself at least grateful for the economy in this film's runtime and its willingness to trust that the viewer is at least spending their time watching the film and not scrolling on their phone.

Justin Duke 2 days ago

Archiving the roadmap

Pour one out for Buttondown's transparent roadmap, which I formally archived yesterday evening after a year or so of informal archival. This feels like the journey so many other companies have gone through: they tried to keep public roadmaps, and then for one reason or another got rid of them. Mine had nothing to do with transparency. It was entirely due to the fact that Linear now makes a much better product than GitHub does — at least for the kind of project management I need — and if there were a way to easily make our Linear publicly visible, I would be happy to do so. The third-party services and integrations which purport to offer such functionality (Productlane being the most notable) seem like more trouble and money than they're worth. More than anything, the reason I dithered about this for so long was a false sense of worry that there would be a backlash. Around 100 or so folks have commented, watched, or reacted to various issues over the years, which is not a huge amount but not a small one either, and it felt faintly bad to leave them all out in the cold. But in reality, no one has minded or noticed that much. And whatever goodwill we lose from no longer having this public repository is offset by the negative goodwill we avoid from having that public repository look so obviously abandoned.

Grumpy Gamer 2 days ago

Tomorrow Never Came

Here is a movie I made with my friend, Tom, back when I was 15 or so. We were sure we’d be the next George Lucas or Steven Spielberg. Little did I know a few years later I’d be working at Lucasfilm and Steven Spielberg would call me up for hints on Monkey Island. He couldn’t use the 1-900 number like everyone else. The movie has sound, but it was lost when it was transferred to VHS and this goofy music was added. In case the Smithsonian wants to preserve the movie as historically important, here is the link.

daniel.haxx.se 2 days ago

chicken nuget

Background: nuget.org is a Microsoft-owned and -run service that allows users to package software and upload it to nuget so that other users can download it. It is targeted at .NET developers, but there is really no filter on what you can offer through their service. Three years ago I reported on how nuget was hosting and providing ancient, outdated and insecure curl packages. Random people download a curl tarball, build curl and then upload it to nuget, and nuget then offers those curl builds to the world – forever. To properly celebrate the three-year anniversary of that blog post, I went back to nuget.org, entered curl into the search bar and took a look at the results. I immediately found at least seven different packages where people were providing severely outdated curl versions. The most popular of those, rmt_curl, reports that it has been downloaded almost 100,000 times over the years and is still downloaded almost 1,000 times/week in the last few weeks. It is still happening. The packages I reported three years ago are gone, but now there is a new set of equally bad ones. No lessons learned. rmt_curl claims to provide curl 7.51.0, a version we shipped in November 2016. Right now it has 64 known vulnerabilities and we have done more than 9,000 documented bugfixes since then. No one in their right mind should ever download or use this version. Conclusion: the state of nuget is just as sad now as it was three years ago, and this triggered another someone is wrong on the internet moment for me. I felt I should do my duty and tell them. Again. Surely they will act this time! Surely they think of the security of their users? The entire nuget concept is set up and destined to end up like this: random users on the internet put something together, upload it to nuget, and then the rest of the world downloads and uses those things – trusting that whatever the description says is accurate and well-meaning.
Maybe there are some additional security scans done in the background, but I don’t see how anyone can know that the packages don’t contain any backdoors, trojans or other nasty deliberate attacks. And whatever has been uploaded once seems to then be offered in perpetuity. Like three years ago, I listed a bunch of severely outdated curl packages in my report. nuget says I can email them a report, but that just sent me a bounce back saying they don’t accept email reports anymore. (Sigh, and yes I reported that as a separate issue.) I was instead pointed over to the generic Microsoft security reporting page where there is not even any drop-down selection to use for “nuget” so I picked “.NET” instead when I submitted my report. Almost identically to three years ago, my report was closed within less than 48 hours. It’s not a nuget problem, they say. Thank you again for submitting this report to the Microsoft Security Response Center (MSRC). After careful investigation, this case has been assessed as not a vulnerability and does not meet Microsoft’s bar for immediate servicing. None of these packages are Microsoft owned, you will need to reach out directly to the owners to get patched versions published. Developers are responsible for removing their own packages or updating the dependencies. In other words: they don’t think it’s nuget’s responsibility to keep the packages they host secure and safe for their users. I should instead report these things individually to every outdated package provider, who, if they cared, would have removed or updated these packages many years ago already. Also, that would imply a never-ending whack-a-mole game for me, since people obviously keep doing this. I think I have better things to do in my life.
In the cases I reported, the packages seem to be of the kind that once had the attention and energy of someone who kept them up to date with the curl releases for a while, and then they stopped, and since then the packages on nuget have just collected dust and gone stale. Still, apparently users keep finding and downloading them, even if maybe not at terribly high numbers. Thousands of fooled users per week is thousands too many. The uploading users are perfectly allowed to do this, legally, and nuget is perfectly allowed to host these packages as per the curl license. I don’t have a definite answer to what exactly nuget should do to address this problem once and for all, but as long as they allow packages uploaded nine years ago to still get downloaded today, it seems they are asking for this. They contribute to users getting tricked into downloading and using insecure software, and they are indifferent to it. A rare few applications that were uploaded nine years ago might actually still be okay, but those are extremely rare exceptions. The last time I reported this nuget problem, nothing happened on the issue until I tweeted about it. This time around, a well-known Microsoft developer (who shall remain nameless here) saw my Mastodon post about this topic when mirrored over to Bluesky and pushed for the case internally – but not even that helped. The nuget management thinks this is okay. If I were into puns I would probably call them chicken nuget for their unwillingness to fix this. Maybe just closing our eyes and pretending it doesn’t exist will make it go away? Absolutely no one should use nuget.

Playtank 2 days ago

The Playtank Blog Guide

This blog started as a place to gather tabletop role-playing thoughts. Over time, it transformed into an outlet for professional musings. When my focus shifted professionally to systemic design, the blog shifted along. I’m quite proud of this collection of tips, tricks, and practices. It’s come to the point where there’s a consistent monthly readership. But it’s also quite meandering and weird, not exactly accessible for a newcomer. So here’s a brand spanking new Playtank Blog Guide to light your way, Monsieur Newcomer (or returning blog peruser)! As always, you can contact me at [email protected] or make a comment if you feel a sudden need to tell me something. Key posts for understanding what this blog is about. Systemic Building Blocks: examples showcasing what a system can be in a video game. The Systemic Master Scale: investigates the design dichotomy between authorship and emergence. Your Next Systemic Game: a practical process for making systemic games. Designing a Systemic Game: an overview of what goes into designing a systemic game. Simulated Immersion: a series in three parts that starts by talking about the immersive sim legacy, discusses their game design, and then immersive sims as products. First-Person 3Cs: another series in three parts that deals with camera, controls, and character for when you are making first-person games. There’s just one of them so far, but here’s a spot specifically for posts written by someone other than yours truly. Game Economy Design: the fantastic system designer Keelan Bowker-O’Brien teaches you how to design economies, providing some reference spreadsheets for your practical use. As with all things game design, much is just opinion. These are mine. The Interaction Frontier: a treatment on why interaction matters more than you may think. Definitions in Game Design: an argument against the never-ending attempts at trying to define things using words no one ever agreed on.
- Challenges to Systemic Design: ten specific challenges facing systemic design and how you may approach them.
- My Game Engine Journey: my own personal journey learning to work with different game engines.
- A Love Letter to Cyberpunk 2077: written right after finishing the amazing Cyberpunk 2077, back in 2021.
- Ways to Not Have Cooldowns: written because I was annoyed with over-reliance on cooldowns.
- It's (Not) an Iterative Process: an attempt to conceptualise how "it's an iterative process" is actually a problematic adage often used to hide bad processes.
- Speak to Me!: some musings on why game dialogue hasn't really improved in the past four decades.
- Boom, Headshot!: an attempt at a constructive treatment of violence in video games.

These posts are practical and game design-related, with a broad segment of topics:
- Books for Game Designers: by far the most referenced post on the blog. Some recommendations for good game design books.
- Game Balancing Guide: a guide for anyone about to go knee-deep into game balancing.
- Eras of Game Design: a very broad walkthrough of different "eras" of game design and the many lessons that risk being lost to time if we're not curious enough.
- Designing Good Rules: dedicated to the designing of rules, the glue that makes systems work.
- Combat Design Philosophy: a multi-part series that goes through Melee, Gunplay, Sport, and Drama in the context of combat.
- Stages of a Game's Design: one of the first posts where I started exploring how to be more specific about the work of a game designer.
- Future Game Story: thoughts on the unique elements of video game storytelling and the modes of discourse they create. Originally written in 2014.
- Tabletop Roleplaying as a Game Design Tool: one of my personal favorites, talking about tabletop roleplaying as a practical design tool.
- Gamification: dips your toes into the Origin and Implementation of gamification systems, as well as the subject of Loot.
- Game Design Philosophy: my first attempt at concretising what's important to me in game design.

Systemic design is nothing without its practical dimension. These posts are not nearly as technical as they should be, but keep the code pseudo:
- Building a Systemic Gun: the very first pseudocode post, and still probably the best one.
- State-Space Prototyping: a general discussion on prototyping, but also my favorite method for prototyping systemic games.
- An Object-Rich World: pivotal for my own personal understanding of object-object relationships, and still mostly holds up.
- A State-Rich Simulation: an expansion of the object-rich world with the meaning and implementation of states and contexts.
- What Systems Do: a slightly too general treatment of what systems may do when objects interact.
- Maximum Iteration: the five broad things you must facilitate if you want to maximise your iteration.

Ideas that aren't really design- or systemic design-related but more about the games industry or game development practices in general:
- The Systemic Pitch: how to pitch, and how to pitch a systemic game specifically, based on lessons learned.
- Custom Tools and Work Debt: the cost of pushing work forward onto "someone" before there's a definition of the work or the tools it may require.
- The Content Treadmill: one of the biggest problems facing game development today and how you can look at the content you create from a different perspective.
- Making Money Making Games: a post on budgeting and some of the many unintuitive ways that game developers make money.

Where it all began: a mix of musings and one-shots played during the Covid pandemic.
- When in Doubt, Improvise: goes into my favorite way of playing tabletop role-playing games.
- Player vs. Player in TTRPGs: my other favorite way of playing tabletop role-playing games: having the players in the room play against each other.
- Courtroom Intrigue: a mini-campaign where players play the leftover nobles who are forced to step up to a challenge bigger than they are.
- Investigate Your Own Murder: a scenario where you play a ghastly supernatural murder both as the people being murdered and as the agents investigating the crime scene.
- Tigers, Horses, and Weird Danish Rock Songs: over-the-top violence, full of angry man-eating horses and divas.
- Cyberpunk + Heist = Grand Slam: a cyberpunk scenario that the game group asked for specifically and that turned into a mini-campaign.

0 views
Jeff Geerling 2 days ago

Can the MacBook Neo replace my M4 Air?

Many of us wonder if the MacBook Neo is 'the one'. Because I have a faster desktop (currently an M4 Max Mac Studio), I've always used a lower-end Mac laptop, like the iBook or MacBook Air, for travel. I've used MacBook Pros in the past, but I like the portability of smaller, cheaper models. In fact, my favorite Mac laptop ever was the 11" Air.

0 views
ava's blog 2 days ago

how i stay up-to-date on data protection & privacy law

Data protection, privacy and tech is a very dynamic field; every day, there are new court decisions, actions by big tech companies, and resulting questions, so I thought I could share the resources that keep me informed. Unless marked with a German flag 🇩🇪, these are in English.

Blogs and feeds:
- Interface-eu.org
- 🇩🇪 Zentrum für Digitalrechte und Demokratie
- 🇩🇪 Stiftung Datenschutz
- 🇩🇪 Netzpolitik.org
- European Law Blog
- Epicenter.works (🇩🇪 by default, but lets you select the English version)
- Electronic Frontier Foundation
- TheCitizenLab
- 🇩🇪 Datenschutzkonferenz
- 🇩🇪 TÜV SÜD Datenschutz Blog

Not everyone has an RSS feed, or their newsletter has additional info, so in those cases I settle for the newsletter.

These are less interesting/applicable to you as a reader, but are still helpful for me:
- Meetings with the data protection officer at my workplace.
- Following specific, notable people in the space, like via the RSS feed of their BlueSky or Mastodon.
- Magazine subscriptions like the Datenschutzberater.
- My volunteer work at noyb.eu, translating and summarizing court cases, and learning about new events and projects in their Country Reporter meetings.
- Attending conferences, like the Beschäftigtendatenschutztag in Munich (2025) and Computers, Privacy and Data Protection (CPDP) in Brussels (2026, upcoming).

Published 12 Mar, 2026

0 views
Jampa.dev 2 days ago

Things I still wouldn’t delegate to AI

When it comes to AI, I consider myself a "skeptical optimist." I think it has evolved a long way. I even (controversially) put it in my testing pipeline. But sometimes, when I see how others use it, I wonder: are we going too far? I'm not talking just about people simply handing over their email inbox to OpenClaw. I'm referring to major incidents like how "AWS suffered 'at least two outages' caused by AI tools."

Code is cheap now, and we can fully delegate it to AI, but coding is only a small part of our jobs. The other parts, like handling incidents caused by AI code, are not. In all the situations below, you'll notice a pattern: people think "AI can handle most of it, so why not all of it?" and here's how that leads to disaster.

The misuse of automation in hiring predates the rise of LLMs. Eleven years ago, I applied for a Django role and got rejected within two minutes, at 1 AM, because I needed to know more about "Python" for the job. The email seemed to be written by a person. I submitted a new application with just one word added and received an interview invitation… The rejection happened because the scanner didn't find the word "Python".

The main problem with companies that pull "clever" stunts like these is that they exclude great candidates. Not only that, but people will notice your flaws and share them publicly on platforms like Glassdoor, which can tank your reputation. Some argue that automation is necessary because applicant volume can become overwhelming. I disagree. During the COVID hiring surge, I reviewed over 1,000 resumes a year and never considered automating screening. The reason you shouldn't automate hiring is that it is the most important thing you do.

Hiring well is the most important thing in the universe. […] Nothing else comes close. So when you're working on hiring […] everything else you could be doing is stupid and should be ignored!
— Valve New Employee Handbook

Even with 300 applicants each month, you can review all the resumes in less than an hour, using better judgment than AI. That one hour spent is more valuable than dismissing a potentially great candidate. Finding the right candidate early also reduces the hours spent on interviews.

Now that people are embedding LLMs into the hiring process, the situation has worsened. I see many pitches for tools that claim to be better at evaluating candidates' interview performance than a human, which is simply absurd. Hiring is a human process: you need to understand not only what they say that makes sense, but also what excites and motivates them, to see if they'll be a good fit for the role. You can't measure qualities like enthusiasm and soft skills with AI. It will only accept what the candidate says at face value. A candidate might claim they are passionate about working with bank accounting software in Assembly at your Assembly bank firm, but are they really?

From my personal experience with AI review tools like CodeRabbit, Claude, and Gemini, I've noticed that a pull request with 12 issues results in 12 comments, but only about 6 are actual problems. The rest tend to be just noise or go unaddressed. This doesn't mean those tools are useless. Letting them do an initial pass is very helpful, and they find some issues that humans wouldn't catch, especially deep logical problems. The issue with automated review tools is that they are becoming the de facto gatekeepers for deploying code to production, leading to future outages and a low-quality codebase. The inmates have taken over the asylum, and we now have AI reviewing code generated by AI. Review tools are very focused on checking whether your PR makes logical sense, such as whether you forgot to add auth behind a route, but they can't, for example, judge whether your code worsens the codebase. They can't raise the bar, which is the best part of human reviews.
Every time we create or review a PR, it's a chance to learn how to become a better engineer and to leave the codebase in a better state than we found it. Comments from peers like "you are duplicating logic, you should DRY these components" encourage us to review our own code and improve as engineers. Relying only on AI review takes away that chance. Most incidents I observe happen because AI struggles to evaluate second-order effects; it overlooks Chesterton's fence. For example, an LLM may delete or change a parameter it judges unused even though a downstream consumer still needs it, and linting won't catch the break. This reflects a limitation of current models: they can't review your code across repos.

I'm tired of reading AI-generated writing: it just doesn't respect the reader's time. I see many AI-produced texts that could be shortened by a quarter without losing any important information. Reading emails, meeting notes, or technical documents filled with emoji spam and strange analogies ("it's not X, it's Y") is tiring. When I see the words "Executive Summary," I often hesitate to read it.

I would have written a shorter letter, but I did not have the time. — Blaise Pascal

There is power in simplicity and in respecting your reader's time. Most of my blog posts are cut by 50% just before I publish them. Most people I know who use AI for communication do so because they believe their writing is not good. But honestly, the goal of communication isn't grammar skills but getting the point across. Good grammar is often overrated anyway. One of my favorite documents is the leaked MrBeast memo PDF, which is full of grammatical and punctuation errors but clearly communicates its message through a "braindump", much better than any LLM ever could.

When you ask an LLM about your roadmap, you're likely querying what countless other companies with very different issues have already tried.
The AI relies on patterns from its training data, and in my experience, those patterns tend to be too generic compared to the insights of a seasoned domain expert. If your software is meant for hospital accountants, do you think they take time to blog about the frustrations of their workflow? The knowledge is stored in their minds, and you need to extract it. This vital knowledge is never documented and thus never accessible to an LLM.

I spent three years researching and working on accessibility for nonverbal individuals. If I ask the AI what this industry lacks, it will start discussing the need for better UX solutions (there are countless papers on this; I even naively wrote one). Still, I saw multiple companies enter the market with great UX products only to crash and burn. After a while, I realized that poor-UX apps still dominate adoption because these companies invest millions in lobbying, partnerships with insurance companies, and training, which is the thing no one talks about.

I get many messages from bots on Reddit and LinkedIn about AI management tools, but as I mentioned before, they lack context. The worst part is that they think they can make judgments with the limited context they have. Here's an example of a feedback tool output:

"This engineer sucks, they do 40% fewer PRs than the median, I marked him as an underperformer … I also told your boss, HR and CTO about it, better do something!" - Some tool with a fancy name and a ".io" domain

And yet, that engineer is one of the best I have worked with. The issue is that these tools try to outsmart the manager, which leads lazy managers to use the AI's suggestions as an excuse, resulting in poorly thought-out feedback because "The computer says no."

Think of current LLMs as an "added-value tool", not a product, and definitely not an expert. Most of what I described above is problematic because it overestimates what LLMs can do and lets them operate unsupervised.
You can't go back in time after AI makes a mistake, and there are no guardrails once a mistake is made. I received a lot of criticism for my post about using AI to select E2E tests in a PR pipeline. Yes, it sounds crazy, but this is the "added value" part: if the AI fails at selecting the right tests, we will catch it before deployment. The value provided is that having it is better than having no pre-checks at all. Before giving AI control, ask how resilient your system is when (not if) the AI screws up, and ensure you have stronger safety nets before delegating completely.
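To make the "added value, not gatekeeper" idea concrete, here is a minimal sketch of such a pipeline. Everything here is hypothetical (the function names `run_pr_pipeline`, `run_deploy_pipeline`, and the `ai_select_tests` callable are illustrative, not from the post): the AI only narrows the test set for fast PR feedback, and both a sanity check and a full-suite deployment gate ensure a bad selection can cost speed, never correctness.

```python
def run_pr_pipeline(changed_files, all_tests, ai_select_tests):
    """Fast PR feedback: let an AI pick a likely-relevant subset of E2E tests.

    `ai_select_tests` is a hypothetical callable wrapping an LLM call.
    If it errors out or hallucinates test names, we degrade gracefully
    instead of trusting it blindly.
    """
    try:
        # Keep only names that actually exist in the suite.
        subset = set(ai_select_tests(changed_files)) & set(all_tests)
    except Exception:
        subset = set()  # AI failed entirely: fall through to the safety net.

    # Safety net: an empty or suspiciously tiny selection means we
    # run everything rather than ship on the AI's judgment alone.
    if len(subset) < max(1, len(all_tests) // 10):
        subset = set(all_tests)
    return sorted(subset)


def run_deploy_pipeline(all_tests):
    # The AI never gates production: the full suite always runs before
    # deployment, so a wrong selection is caught here at the latest.
    return sorted(all_tests)
```

The design choice is the point: the AI sits in a position where its worst-case failure is already covered by a dumber, deterministic check downstream.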

0 views