Latest Posts (20 found)
Jim Nielsen · 27 days ago

You Might Debate It — If You Could See It

Imagine I’m the design leader at your org and I present the following guidelines I want us to adopt as a team for doing design work:

Typography: Use expressive, purposeful fonts and avoid default stacks (Inter, Roboto, Arial, system).
Motion: Use a few meaningful animations (page-load, staggered reveals) instead of generic micro-motions.
Background: Don't rely on flat, single-color backgrounds; use gradients, shapes, or subtle patterns to build atmosphere.
Overall: Avoid boilerplate layouts and interchangeable UI patterns. Vary themes, type families, and visual languages.

How do you think that conversation would go? I can easily imagine a spirited debate where some folks disagree with any or all of my points, arguing that they should be struck as guidelines from our collective ethos of craft. Perhaps some are boring, or too opinionated, or too reliant on trends. There are lots of valid, defensible reasons. I can easily see this discussion being an exercise in frustration, where we debate for hours and get nowhere — “I suppose we can all agree to disagree”. And yet — thanks to a link to Codex’s front-end tool guidelines in Simon Willison’s article about how coding agents work — I see that these are exactly the kind of guidelines that are tucked away inside an LLM that’s generating output for many teams. It’s like a Trojan Horse of craft: guidelines you might never agree to explicitly are guiding LLM outputs, which means you are agreeing to them implicitly. It’s a good reminder about the opacity of the instructions baked into generative tools. We would debate an open set of guidelines for hours, but if they’re opaquely baked into a tool without our knowledge, does anybody even care? When you offload your thinking, you might be on-loading someone else’s you’d never agree to — personally or collectively. Reply via: Email · Mastodon · Bluesky


Another ANOTHER New Lick of Paint

So it turns out I didn't like the mustard yellow and steel blue design that I created a couple of weeks ago. It just didn't sit well with me, and if I look back over my design history, the designs that have stuck over the years are invariably grey with a splash of colour. Problem was, I didn't really know how I was going to redesign the site. Then, one day, I was talking with Sven via email and I visited his blog (also running Pure Blog for the record 🎉), and I immediately knew that was the kind of design I was looking for. Its simplicity is just lovely, and so easy to read. So I set about making my own version of Sven's lovely design. I didn't want it to be exactly the same as his, but I also didn't think my design would turn out quite as close to his as it did - I suppose that goes to show how much I like his site. :-) I've spoken to Sven and he's good with me effectively copying his design. For posterity (as I'm likely to change it again in the future) here's what the design currently looks like: I'm still not 100% sold on the font (but it is growing on me), and I'm not sure about the yellow in the , but blue everywhere else. So I may change a couple of things subtly. Having said all that, overall I'm the happiest with the design I've been since moving to Pure Blog. Finally, I'd like to thank Sven for allowing me to steal his wonderful design. What do you guys think? Leave a comment below, or reply by email. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.


2026.12: Please Listen to My Podcast

Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week. This week’s Sharp Tech video is on Questions about Anthropic vs. the U.S. Government.

Everything I Didn’t Write. This was one of those weeks where far more happened than I could write about — and that’s partly my fault for taking a stand on bubbles! To that end, I highly suggest this week’s episode of Sharp Tech, where we cover:

OpenAI’s pivot to enterprise, and why AI might look like the PC in the 1980s
Why I think that agents are not only real, but also the reason we are not in a bubble
OpenClaw as evidence that my thesis that OpenAI and Anthropic are sustainably differentiated through their integration of harness and model is wrong
Nvidia’s inference pivot, and why Nvidia is particularly concerned about a world dominated by OpenAI and Anthropic (and why Microsoft might be in trouble)
And, for good measure, why I don’t mind Wisconsin winters

I think that each of these points could be another Update, but also, I’m taking a few days off for vacation, so I hope you’ll listen to this episode in particular. — Ben Thompson

What Jensen Huang Has In Common with Steve Jobs. I really enjoyed this week’s Dithering covering Nvidia’s announcements at GTC Monday, including a near-perfect inversion of what Jensen Huang was telling the world about Nvidia’s approach to inference workloads just one year ago.
In their trademark 15-minute format, Ben explains how and why Nvidia’s inference messaging is now different (see also: this week’s Stratechery Interview), while Gruber draws on decades of Apple experience to note the similarities between Huang and Steve Jobs. It’s a great listen that renders legible an easily missed strategic inflection point at the most valuable company in the world. — Andrew Sharp

Trump’s Trip to Beijing, Delayed Indefinitely. As the war in Iran continues, this week’s Sharp China covered the news that President Trump will delay a trip to Beijing that had been scheduled to begin March 31st. Come to hear why both sides are likely relieved by the delay, and stay to hear about softened Taiwan threat assessments from the U.S. intelligence community and a succession of PLA military scientists who are being purged for reasons that aren’t entirely clear. — AS

Agents Over Bubbles — Agents are fundamentally changing the shape of demand for compute, both in terms of how they work and in terms of who will use them. They’re so compelling that I no longer believe we’re in a bubble.

An Interview with Nvidia CEO Jensen Huang About Accelerated Computing — An interview with Nvidia CEO Jensen Huang about his GTC 2026 keynote, navigating China and DC, and remembering Nvidia’s true nature.

Jensen Huang and Andy Grove, Groq LPUs and Vera CPUs, Hotel California — GTC 2026 marked an important inflection point for Nvidia, as the company is selling multiple architectures instead of focusing on just one GPU. The motivation is to serve all needs and keep all customers.

What the NBA Could Be Getting from College Basketball — College basketball is fantastic, and the NBA should take advantage of its success by raising the age limit for the NBA Draft.
LLM Paradigm Changes
Jensen Huang’s Jobsian Keynote
From Fiber to AI: A Laser Giant’s Rebirth
Mexico City’s Sinking Lands
The War in Iran and the Visit to Beijing; New DNI Assessments on Taiwan; Military Scientists Disappearing From Public View
How to Miss a Free Throw, The Biggest Top 100 Disappointments, Expansion is Afoot (Again)
How NOT to Miss a Free Throw, Generic Houston Rockets Slander, The Top 100 Pleasant Surprises
OpenAI’s Enterprise Pivot, The Rise of Agents and Bubble Counterpoints, Nvidia Changes Its Inference Story


Premium: The Hater's Guide To Adobe

I hear from a lot of people that are filled with bilious fury about the tech industry, but few companies have pissed off the world more than Adobe. As the foremost monopolist in software, web and graphic design, Adobe has created one of the single most abusive, usurious freakshows in capitalist history, trapping users in endless, punishing subscriptions to software they need that only ever seems to get worse. In the Department of Justice’s recently-settled case against Adobe, it was revealed that early termination fees for its annual subscriptions amounted to 50% of the remaining balance on the customer’s subscription, with one unnamed Adobe executive referring to these fees as “a bit like heroin for Adobe,” adding that there [was] “...absolutely no way to kill off ETF or talk about it more obviously [without] taking a big business hit.” Let me explain how loathsome Adobe’s business model truly is. The below is a screenshot from Adobe’s website from Wednesday, March 18, 2026. One might read this and think “wow, $34.99 a month, what a deal!” and immediately sign up without clicking on “view terms,” which reveals that after three months the subscription cost becomes $69.99 a month, and that this “monthly” subscription is a year-long contract. Adobe deliberately hid (and I’d argue still hides!) its early termination fees behind “inconspicuous hyperlinks and fine print.” Want to cancel? Adobe charges you 50% of the remaining balance on your contract — so, in this case, over $300 — and it justifies this by saying (and I quote) “...your purchase of a yearly subscription comes with a significant discount. Therefore, a cancellation fee applies if you cancel before the year ends.” The DOJ did a great job in its complaint explaining how much Adobe sucks, just before doing nothing to impede them: an exhibit from the DOJ’s lawsuit shows the M.C. Escher painting of canceling an Adobe subscription and the six different screens it takes to do so.
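To make the arithmetic concrete, here is a minimal sketch of that fee structure. The $69.99/month rate, 12-month term, and 50% fee come from the terms described above; cancelling after month three is my own assumed example point, not a figure from the complaint.

```python
def early_termination_fee(monthly_rate, total_months, months_elapsed, fee_rate=0.50):
    """Fee charged as a percentage of the balance remaining on the contract."""
    remaining_balance = (total_months - months_elapsed) * monthly_rate
    return fee_rate * remaining_balance

# Cancel a 12-month contract at $69.99/month after three months:
fee = early_termination_fee(69.99, 12, 3)
print(f"${fee:.2f}")  # just under $315 — the "over $300" figure above
```

Nine months remain on the contract, so the fee is half of 9 × $69.99, which is how a "monthly" plan produces a three-figure exit charge.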
The DOJ also added that Adobe’s subscription revenue had nearly doubled between 2019 ($7.71 billion) and 2023 ($14.22 billion), and since then, Adobe’s subscription revenue hit $20.5 billion in 2024 and $22.9 billion in 2025. To be clear, Adobe is utilizing many very, very common tricks that the software industry has used to keep people from quitting, and basically every software service I use makes you jump through three to five different screens (fuck you, Canva!) to cancel. These tricks are commonly referred to as “dark patterns.” Adobe’s Early Termination Fees are, however, uniquely awful, both in that they employ the evil sorcery of enterprise software contracts and deploy it against creatives that are, in many cases, barely keeping their heads above water in an era defined by people trying to destroy them. I will say, however, that I’ve never seen anyone else bill monthly for an annual contract outside of the grotesque SaaS monstrosities I wrote about last week. These are egregious, deceptive and manipulative techniques that shouldn’t be deployed against anyone, let alone creatives and consumers. And because this is the tech industry under a regulatory environment that fails to hold it accountable, the $150 million settlement with the DOJ doesn’t appear to have changed a damn thing about how this company does business, other than offering “$75 million worth of services for free to customers that qualify.” The judgment does not appear to require any changes to how Adobe does business, and $150 million amounts to roughly 0.345% of the $43.4 billion that Adobe made in 2024 and 2025. Adobe is a business that runs on rent-seeking, deception, and a monopoly over modern design software mostly built by people that no longer work there, such as John and Thomas Knoll, who won an Oscar in 2019 for scientific and engineering achievements for creating Photoshop along with Mark Hamburg, who left Adobe the same year.
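The settlement-to-revenue comparison above is easy to verify; all figures are from the paragraph itself ($150 million settlement against $20.5 billion plus $22.9 billion in 2024–2025 subscription revenue).

```python
settlement = 150e6                    # DOJ settlement
revenue = 20.5e9 + 22.9e9             # $43.4 billion across 2024 and 2025
share = settlement / revenue * 100    # settlement as a percentage of revenue
print(f"{share:.3f}%")                # about 0.346% — the "roughly 0.345%" quoted above
```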
Adobe does not create things but extracts from those that do, exhibiting the most egregious and horrifying elements of the Rot Economy’s growth-at-all-costs avarice. While you may or may not like Photoshop, or Lightroom, or any other Adobe property, that’s mostly irrelevant to the glorified holding corporation that shoves different bits around every few months in the hopes that it can scrape another dollar from its captured audience. Much of this comes from Adobe’s abominable subscription products, most notably (and I’ll get into it in more detail after the premium break) its Creative Cloud subscription, a rat king of apps like Photoshop and InDesign and services like “Adobe Creative Community” and “generative credits” for AI services that are used to justify constant price increases and confusing product suite tweaks, all in the service of revenue growth. All the while, Adobe’s net income has, for the most part, flattened out for the best part of two years at a seasonal range of $1.5 billion to $1.8 billion a quarter, all as the company debases its products, customers and brand in the filth of generative AI features that range from kind of useful to actively harmful to the creative process, and that have generated, at best, a couple hundred million dollars of revenue in the last two years. I should also be clear that Adobe has an indeterminately large enterprise division that includes marketing automation software like Marketo, which it acquired in 2018 for $4.75 billion, along with Magento, a different company that develops a software platform to run corporate eCommerce pages, all so it can do battle with Salesforce. CNBC’s Jim Cramer once called Salesforce and Adobe’s competition “one of the great rivalries in tech,” and he’s correct, in the sense that both companies love to buy other companies to prop up their revenues. Adobe has bought 61 of them since the ’90s, but Salesforce has it beat at 75.
They’re also both devious, underhanded SaaSholes that make their money through rent-seeking and micro-monopolies. The business known as “Adobe” is a design platform, a photo editor, a PDF creation platform, an eCommerce platform, a marketing automation platform, a content management system, a marketing project management system, an analytics platform, and a content collaboration platform. You do business with Adobe not because you want to, but because doing business at some point requires you to do so. Use PDFs regularly? You’re gonna use Acrobat. Need to edit an image? Photoshop. Run a design studio? You’re gonna pay for Creative Suite, and you’re gonna get a price increase at some point because you don’t really have any other options. Doing a lot of email marketing campaigns? You’re gonna use Marketo, whether you like it or not. Adobe’s “Digital Experience” vertical is effectively a holding corporation for Adobe’s acquisitions to help boost revenue, an ungainly enterprise limb that grabs companies and puts them in a big bag that says “money me money now” every year or two. Put another way, one does not do business with Adobe. It has business done to it. There’s also the “publishing and advertising” division that has made somewhere between $146 million and $300 million a year since 2019, most of which comes from abandoned products and, ironically, the product that originally made Adobe famous — PostScript, the language that underpins most of modern printing, whether directly or by inspiring the various alternatives that emerged in the following decades. Adobe is a company that bathes in the scent of mediocrity, constantly doing an impression of an ever-growing business through a combination of acquisitions and price increases that are only possible in a global regulatory torpor and a market that doesn’t know when it’s being conned.
It’s also emblematic of how the modern software company grows — not through an honest exchange of value built on a bedrock of innovation and customer happiness, but through the eternal death march of enshittification of its products and monopolization of whatever fields it can barge its way into. In many ways, Adobe is one of the greater tragedies of the Rot Economy. Beneath the endless layers of subscriptions and weird upsells and horrible Business Idiots lie beloved products like Photoshop, Illustrator and InDesign that are slowly decaying as Adobe searches for ways to boost engagement and revenue. A great example is a story from Digital Camera World from 2025, where writer Adam Juniper talked about features he loved that were disappearing for no reason: Juniper found that Adobe had intentionally moved the speech bubble to an optional “legacy shapes and more” feature, all with the intent of pushing users to pay for (per Juniper) Adobe’s add-on Stocks subscription. In fact, a simple web search brings up user after user after user after user after user after user after user saying the same thing: that Adobe only ever seems to make its products worse, with the solution often being “find a way to revert to how things were done before the update” or “find another company to work with,” except Adobe’s scale and market presence make it near-impossible to compete. Adobe even has the temerity to bug you with ads within its own products, nagging you with annoying pop-ups about new features or attempting to con you into a two-month-long trial of another piece of software using “in-product messaging” that’s turned on by default. These are all the actions of a desperate, greedy company run by people that don’t give a shit about their customers or the things they sell.
A few weeks ago, CEO Shantanu Narayen said that he was stepping down after 18 years in which he took Adobe, a company that built things that people loved, and turned it into a sleazy sales operation built on rent-seeking and other people’s innovation. Those who don’t bother to read or know anything about software will tell you that the “threat of AI” or “the SaaSpocalypse” is killing Adobe — a convenient (and incorrect!) way to ignore that Adobe is only able to grow through acquisitions or price hikes. The sickly irony is that acquisitions were always in Adobe’s blood from the very early days of Photoshop. It just used to be run by people who gave a fuck about whether software was good and customers were happy. In fact, I’m going to have a little rant about this. I’m sick and tired of journalists from reputable outlets talking about “the threat of AI” to software companies without ever explaining what they mean or any of the economic effects involved. Adobe isn’t being killed by “AI.” We’re at the end of the hypergrowth era of software, and the only thing that grows forever is cancer. It also gives executives like Narayen cover for running operations built on deceit, exploitation, extraction and capital deployment. Years of evaluating these companies entirely based on their revenues and imagined things like “the threat of AI,” without any connection to actual fucking software, makes the majority of the analysis of software entirely useless. Nothing even really has to change about reporting. Just use the product! Use it and tell me how you feel. Talk to some customers. Spend more than 20 minutes on Facebook. Use Photoshop and tell me how many popups you get, or whether it inexplicably slows down or starts eating up RAM. You’ll quickly see that we’re in a crisis that’s less about AI and more about a tech industry powered by shipping mediocre software and putting far more effort into making a business impossible to avoid.
Decades of this pseudo-journalism mean that a great many business reporters are simply unprepared to discuss what’s actually happening, evaluating software companies based on 10-Ks and shadows on the wall of a fucking cave. The tech industry has done a great job of scaring reporters into thinking that having a negative opinion is somehow “not supporting innovation,” and I want to be clear that refusing to criticize the tech industry is what’s actually stopping innovation. Letting these companies get away with ruining either the products they build or the products they buy is creating a climate in which the most successful companies are the ones that crowd out the competition and raise prices. Adobe’s growth has come from being a fucking asshole. Its decline has come from the limits of buying other companies and claiming their revenues as your own while constantly increasing the price of your services. If there were a “threat from AI,” you’d actually be able to name it and point to it rather than referring to it like the Baba Fucking Yaga. I’m going to put it very, very bluntly: the last 15 years or so of tech earnings have been earned predominantly by fucking over the customer, through either reducing the value of the product or increasing its price. The tech and business media’s lack of attention to the actual state of technology is partially to blame, because Number Has Always Gone Up, and thus the assumption was that the underlying product quality was raising that number rather than screwing over the customer. Wake up! Look at every tech product you’ve used and tell me if it’s improved in the last decade!
Facebook’s worse, email’s worse, browsers are either the same or worse, Google Search is worse, Adobe Creative Suite is worse; iPhones might seem better, but the software is bloated with endless options and dropdowns and ads and nags. Pretty much the only thing that’s improved is physical hardware, because shipping bullshit, useless hardware is much, much harder. This total lack of awareness of the actual state of the world is why these companies have gotten away with so much shit over the years, and why so many of you are incapable of actually capturing this moment. You are not actually looking at what’s happening, just for what might comfortably fit your analysis of the world. Vaguely blaming things on “the threat of AI” allows you to continue pretending everything will grow forever, and to rationalize bad behavior by framing every problem through the lens of disruption and innovation. Saying a company in decline is “being disrupted by AI” allows you to believe that another company will grow and take its place. Saying that a company is growing revenue “because their AI bets are paying off” allows you to ignore price increases and deteriorating software, and to think the world is a better place, even if you can only do so by living in a fantasy. Gun to your head: what is the threat to software from AI? How is it manifesting, and who is the threat? Is it OpenAI? Anthropic? Are their products actually replacing anything? Can you prove that, or is this just something you heard enough people say that you’re now comfortable believing it? The actual threat to software companies is their hatred of innovation and their customers, and what's happening to Adobe will eventually happen to them all. Products that provide value are enshittified, and the products they acquire have been (or came pre-) enshittified. The prices have gone up. The nags to consumers have increased.
Revenues have gone up because these companies have been allowed to buy effectively anyone they want — though Adobe was, thankfully, stopped from acquiring Figma — and increase prices whenever they want, and when it’s come time to evaluate the health or strength or actual value of these companies, all that anybody ever looks at is revenues. Perhaps your argument might be that the markets don’t care about how good something is, except the markets are influenced by journalism and financial analysts. The markets celebrate dogshit companies like Meta that make broken, harmful products because their disgusting monopolies allow them to brutalize businesses and consumers alike. What we’re seeing in the software industry are the limits of how much one can abuse a customer, a business model that SaaS enabled and that both the tech media and analysts celebrated because it worked, in the sense that it worked at making the software companies rich. And because the people at the top have chased out anybody who knows what “good” looks like and empowered vacuous growth-perverts at every level, these companies have no idea what to do to stop the tide from coming in. Your argument might be that these companies couldn’t grow so fast without fucking customers over or making their products worse — and at that point you should ask yourself what you want the world to look like, and how willingly you’ve participated in making it look how it does today. The decline has yet to fully begin, but a CEO doesn’t suddenly decide to quit their company after 18 years during record results because the future looks bright. The real SaaSpocalypse is the comeuppance for decades of focusing businesses on growth by any means possible, and the hysterical non-analysis of blaming it on AI is a sign that those responsible can’t be bothered to live in anything other than the dreamworld of venture capital and Ivy League business schools.
Adobe’s story is a tragedy — the tale of the great things that can be done with software for the betterment of humanity, and how usurious Business Idiots can hijack it as a means of expressing eternal growth to the markets. This is The Hater’s Guide To Adobe, or The Adobe Enshittification Suite.


A Satisfied Customer Review Of The Yogurtia

And now for something completely different. For years, we’ve been happy users of the Yogurtia, a Japanese “fermented food maker”. That alone should sound enticing enough to warrant this small review! What’s a fermented food maker? I’m glad you ask. It’s a maker for food to ferment. Next question. In case that wasn’t crystal clear, here’s a common way we employ our Yogurtia: to make yoghurt. Shocking, given the name, right? There are plenty of mundane-looking kitchen appliances out there that can “make yoghurt”, so why should you import a Japanese device instead? While researching yoghurt making machines, we often encountered contraptions you can put multiple small containers in that will be heated to 40 degrees Celsius for eight to twelve hours. Once it’s done, you pull out the containers and voilà: your very own yoghurt pots. The Yogurtia doesn’t do this. Instead, there’s one giant container where you pour in milk and remnants of your previous yoghurt. That means you can make much more in one go—but that also means you can more easily put in other stuff. The biggest reason for buying the Yogurtia is the capability to precisely configure the temperature and time it needs to ferment. Most basic yoghurt makers just come with an on/off switch. We can set it to 60 degrees instead of the usual 40 if we want to more easily ferment other stuff. Preparing breakfast with a freshly made yoghurt container thanks to the Yogurtia maker. Perhaps I should elaborate on the “other stuff”. While the Yogurtia obviously markets itself in the west towards yoghurt lovers, the real purpose of this neat little contraption is to make amazake and nattō. I’ve had great success with the former. To make amazake, you’ll need to first grow a specific mold called koji on rice. Activating that koji is done at 60 degrees, which is too hot for most small fermentation chambers/yoghurt makers. I produce koji-fied rice in my fridge-hacked inoculation room.
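The configurability point above can be sketched as a tiny preset table. This is purely illustrative — the temperatures are the review’s own figures (40 °C for yoghurt, 60 °C for koji/amazake); the names are hypothetical, and the Yogurtia itself is of course configured with physical controls, not code.

```python
# Illustrative fermentation presets, using temperatures from the review above.
PRESETS = {
    "yoghurt": 40,  # what basic single-temperature yoghurt makers are built for
    "amazake": 60,  # koji activation temperature; too hot for most yoghurt makers
}

def machine_can_ferment(preset, machine_max_temp_c):
    """Can a machine with the given maximum temperature run this preset?"""
    return PRESETS[preset] <= machine_max_temp_c

print(machine_can_ferment("yoghurt", 40))  # True
print(machine_can_ferment("amazake", 40))  # False — why an on/off yoghurt maker won't do
```

A fixed 40-degree machine handles yoghurt fine but can never reach amazake territory, which is the whole argument for a device with an adjustable temperature dial.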
A rice cooker that can be properly configured might be another option, but cheaper machines often have trouble maintaining the temperature, requiring you to add some cold water. If the temperature is too high, the koji will be killed off, resulting in a less sweet beverage, as the mold is responsible for breaking down the carbs of the rice into simple sugars. At a previous employer’s canteen, I was known as the amazake guy. I brought the smelly stuff to work for interested colleagues to try out and to enthuse them to get started on fermenting stuff themselves. The result was met with mixed success: most people said yuck!, I got the label “the amazake guy”, and one time I forgot to take the canister out of the fridge at work. Or maybe the order is reversed here; that would certainly make more sense. I tried once more by spamming everyone to go out and buy Sandor Katz’s The Art of Fermentation bible. Then I tried bringing pickled stuff to work. More yuck! and what strange colour does that radish have? The one thing I didn’t try, which I’m making up for by writing this satisfied customer review, is convincing them to buy a Yogurtia. Maybe I should have done that instead. In Belgium, yoghurt is one of the few “fresh” fermented products almost everyone eats regularly (we’ll ignore cheese; sausages; wine; olives; and yes, even chocolate; … for now). Did you know you can use a spoonful of sourdough starter to jump-start the yoghurt making process? Did you know you can jump-start the bread rising process by using a spoonful of yoghurt? Food for thoug—no, a new blog post. A+++. Would buy again. (And did buy again. Never connect a Japanese electronic device that assumes 100 V directly to the European power grid of 230 V. Ouch. That plastic did melt good.) Related topics: fermentation. By Wouter Groeneveld on 20 March 2026. Reply via email.

iDiallo Today

Why Is Everyone Supposed to Die If Machines Can Think?

If you only listen to spokespersons for AI companies, you'll have a skewed view of how AI is actually being integrated into the workplace. You probably don't need to convince a developer to include it in their workflow, but you also can't dictate how they do so. Whenever I sit next to another developer during pair programming, I can't help but feel frustrated by their setup. But I don't complain, because they'd be just as annoyed with mine. The beauty of dev work is that all that matters is the output. If you use a boilerplate generator like , few will complain. If you use AI to generate the same code, as long as it works, no one will complain either. If the code is crafted with your own wetware, no one will be the wiser. Developers will use any tool at their disposal to increase their own productivity. But what happens when that thousand-dollar-per-developer-per-month subscription starts to feel expensive? What happens when managers expect a tenfold return on investment, yet sprint velocity doesn't budge? On one hand, new metrics are created to track developers' use of the tool, metrics which, in my experience, are highly inaccurate and vary wildly. On the other, companies are using AI as justification for laying off workers. So which metric is to be trusted? AI isn't simply a solution in search of a problem. It's quite useful. One person will tell you it's great for writing tests, another will praise it for writing utility functions, and another will use it to better understand a requirement. Each is a valid use case. But the question managers keep asking is: "Can we use AI instead of hiring another dev?" I'm not sure what is supposed to happen if we achieve so-called AGI. Does it mean I no longer have to do code reviews? Is it AGI when the AI stops hallucinating? My shower-thought answer: AGI is an AI that can say "I don't know" when it doesn't know the answer. But I don't think Sam Altman sees that as a selling point.
Why are we supposed to die if a machine can think? Every time someone raises this argument, I think of Thanos. In the Avengers saga, he kills half of all living beings in the universe. It's an act so total and irreversible that the writers had to bend time itself to undo it. And still, fifteen movies later, the franchise keeps going. Each new antagonist has to threaten something, but nothing lands the same way. You already saw the worst. The scale is broken. The villain is a terrorist from an unnamed country? Gimme a break. That's what the AI extinction narrative has done to the conversation about AI. By opening with the end of the world, it made every practical concern feel small by comparison. Who wants to talk about sprint velocity and hallucinated function calls when we're supposedly staring down an existential threat? So we don't. We argue about the apocalypse instead. Meanwhile, I am debugging a production incident at 2am, in a codebase that has never once tried to kill me, but has absolutely tried to ruin my weekend. The reality is quite different from the drama that unfolds online. The longer this AI craze continues, the less I believe we're headed for a dramatic bubble pop. Instead, I think the major players will try to bully their way out of one. And that bullying is already happening on at least three fronts: language, narrative, and money. Microsoft is leading the language crackdown. They are rounding up critics in their own Copilot Discord servers, banning users who use the now-deemed-derogatory term "Microslop." Nvidia is publicly asking people to stop using the phrase "AI slop." These aren't isolated incidents of corporate thin skin. They are coordinated attempts to police the vocabulary we use to criticize the technology. Control the language, and you go a long way toward controlling the conversation. When you can't call a thing what it is, it becomes harder to argue that the thing exists at all.
On the narrative front, we are told every day that AI is good, innovative, and inevitable. Then we're told it's going to take our jobs. And at the same time, we're told it's an existential threat that could wipe us off the planet. It is simultaneously the best thing that could ever happen to humanity and the worst. I'm reminded of George Orwell: "War is peace, freedom is slavery, ignorance is strength." It's a cognitive trap. When a technology is framed as both savior and apocalypse, the questions regular people ask are seen as mundane. We can't ask: "Does it work? Is it worth the cost? Are we actually benefiting from this?" Instead, we spend our energy arguing about the end of the world, and the companies keep burning through cash while the narrative burns through our attention. On the money front, we all witnessed it firsthand with the fiasco involving Anthropic, OpenAI, and the Department of Defense. People were quick to sort the players into the good guys, the bad guys, and the ugly. But to me, it looked like a dispute designed to obscure the problem that has plagued AI companies from the very beginning: they need to make money. It doesn't matter if a company generates $20 billion a year when its operating costs double annually. They're still in the red. Anthropic was making a grand stand, positioning itself as the principled actor fighting against the US war machine. At the same time, they had no issue working with Palantir, a company that makes no secret of its commitment to mass surveillance and its role in powering the machinery of war. Meanwhile, OpenAI is struggling with its own financial stability. They've just launched ads on their platform, something Sam Altman once described as a last resort. When you're in the red and a customer is willing to pay, principles become a luxury you can do without.
Given their history of bending copyright law and converting to a for-profit entity, it's naive to assume there aren't other principles they would bend as well. They quickly jumped into the DoD deal, scooping up a $200 million contract to replenish their coffers. There was one detail in Anthropic's statement that deserved more attention than it got: We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. In other words: surveilling citizens is immoral. If you're a non-citizen or a foreigner, you're on your own. So right now, AI companies are hemorrhaging money, policing the words we use to criticize them, manufacturing existential dread to crowd out any skepticism, and taking defense contracts while performing ethical restraint. And somewhere in the middle of this, we're supposed to believe that only they can save us. When you're losing money but need to maintain the illusion of infinite growth, you don't wait for the market to correct you. You make the bubble burst feel not just unlikely, but unthinkable. You bully the language, inflate the stakes, and monetize the fear. As individuals, what are we supposed to do with the useful part of the technology? It helps me write tests. It helps my colleagues parse requirements. Used without hype and within realistic expectations, it is actually a good tool. But "a good tool" doesn't justify the valuations, the layoffs-as-euphemism, the defense contracts, or the Discord bans. It doesn't sustain the mythology that has been built around it. That gap between the tool that exists and the revolution that was promised is precisely what the bullying is designed to keep you from looking at too closely. I still struggle to answer managers who ask me to justify the team's use of the tool. I never had to justify my IDE or my secret love affair with tmux before.
For now, all I can tell them is: "It's useful, within limits, and that should be enough." It won't be what they want to hear. But it's more than the industry has managed to say about itself.


Melanie Richards

This week on the People and Blogs series we have an interview with Melanie Richards, whose blog can be found at melanie-richards.com/blog . Tired of RSS? Read this in your browser or sign up for the newsletter . People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. I’m a Group Product Manager co-leading the core product at Webflow, i.e. helping teams visually design and build websites. My personal mission is to empower people to make inspiring, impactful, and inclusive things on the web. That’s been the through line of my career so far: I started out as a designer at a full-service agency called Fuzzco, moved to the web platform at Microsoft Edge, continued building for developers at Netlify, and am now aiming to make web creation even more democratic with the Webflow platform. I transitioned from design to product management while at Microsoft Edge. I wanted to take part in steering the future of the web platform, instead of remaining downstream of those decisions. I feel so lucky to have worked on new features in HTML, ARIA, CSS, and JavaScript with other PMs and developers in the W3C and WHATWG. I’m a builder at heart, so I love to work on webby side projects as well as a whole bevy of analog hobbies: knitting, sewing, weaving, sketchbooking, and journaling. I have a couple of primary blogs right now: melanie-richards.com/blog, simply the blog that lives at my main website. I post here about the web, design, development, accessibility, product management, etc. One practice I’ve been keeping for a few years now is my monthly Learning Log. These posts are a compendium of what I’ve been shipping or making, what I’ve been learning, side quests, neat links around the internet, and articles I’ve been reading. When I’m in a particularly busy period (as was the case in 2025; my first child was born in September), this series is my most consistent blogging practice. making.melanie-richards.com: this is the blog where I post about my aforementioned analog projects. Quite a lot of sewing over the past year! From 2013–2016 I also had a blog and directory called Badass Lady Creatives (wish I had spent more than five minutes on the name, haha). This featured women who were doing cool things in various “creative” industries. At the time it seemed like every panel, conference lineup, and group project featured all or mostly dudes. The blog was a way to push back on that a little bit and highlight people who were potentially overlooked. Since then gender representation (for one) seems to have gotten a bit better in these industries.
But the work and joy of celebrating diverse, inspiring talent is never done! Big “yeet to production” vibes for me! I use Obsidian to scribble down my thoughts and write an initial draft. Obsidian creates Markdown files, so I copy and paste those into Visual Studio Code (my code editor), add some images, make some tweaks, and then push to production. I really try not to overthink it too much. However, I will admit that I have a ton of drafts in Obsidian that never see the light of day. It can be cathartic enough just to scribble it down, even if I never publish the thought. For my Learning Log posts, I use a Readwise => Obsidian workflow I describe in this blog post . Reader by Readwise is the app where I store and read all my RSS feeds and newsletter forwards. “Parallel play” is the biggest, most joyful boon to my creativity. I love to be in the company of others as we independently work on our own projects side by side. There’s a delicate balance when it comes to working on creative projects socially. For example, my mom, my aunt, and I often have Sew Day over FaceTime on Sundays. Everyone’s pretty committed to what they’re working on, so it’s easy to sew and talk and sing (badly 😂) at the same time. I also used to go to a local craft night that very sadly disbanded when the host shop changed hands. Writing or coding takes a bit more mental focus for me. I started a Discord server with a few friends, which is dedicated to working on blog posts and side projects. We meet up once a month to talk about our projects (and shoot the breeze, usually about web accessibility and/or the goodness of dogs). Then we all log off the voice channel to go do the thing! Both of these blogs use Eleventy and plain ol’ Markdown, and are hosted on Netlify. Some of my other side projects use a content management system (CMS) like Webflow’s CMS, or Contentful + Eleventy. Again, Webflow is my current employer.
I use a Netlify form for comments on my “Making” blog, and Webmentions for my main blog. I will probably pull Webmentions out of that code base: conceptually they’ve never really “landed” for me, and it would be nice to delete a ton of code. I generally like my setup, though sometimes I think about migrating my “Making” blog onto a CMS. As far as CMSes go, I quite like Webflow’s: it’s straightforward and has that Goldilocks level of functionality for me. Some other CMSes I’ve tried have felt bloated yet seemed to miss obvious functionality out of the box. I have a Bookshop.org affiliate link and it took me several years to meet the $20 minimum payout so…yeah, I’ve never truly monetized my blogging! I find there’s freedom in giving away your thoughts for free. As far as costs go, I have pretty low overhead: just paying for the domain name. I’m fine with other folks monetizing personal blogs, though of course there’s a classy and a not-classy way to do so. If monetizing is what keeps bloggers’ work on the open web, on sites they own and control, I prefer that over monetizing through walled gardens. Related: Substack makes it easy to monetize, but there are some very compelling reasons to consider alternatives. This is highly topical: I’m currently scheming about a directory site listing “maker” blogs! So many communities in the visual arts and crafts are stuck on social media platforms they don’t even enjoy, beholden to the whims of an algorithm. I’d like to connect makers in a more organic way. If you’re a crafter who would like to be part of this, feel free to fill out this Google form ! Now that you're done reading the interview, go check the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 133 interviews . People and Blogs is possible because kind people support it.
Mandy Brown , Oliver Burkeman (technically a newsletter with a “view on web” equivalent), and Ethan Marcotte ’s writing have been helping to fill my spiritual cup over the last couple of years. Anh and Katherine Yang are doing neat things on their sites. What Claudia Wore is a nostalgic pick; I’d love to recreate some of these outfits sometime. Thank you Kim for keeping the blog up! Sarah Higley would be a great next interview. She blogs less frequently, but always with great depth and thoughtfulness on web accessibility. Web developers can learn quite a lot about more involved controls and interactions from Sarah.


Three Men Tried To Steal My Motorbike While I Was On It!

Spring is in the air here in North Wales, so I decided to take one of my motorbikes to the office yesterday. On the way home, not too far from where I live, I was sat at traffic lights when all of a sudden three men on off-road bikes surrounded me. One left, one right, one right up to my back wheel. And they were really close, like, inches from me kinda close. I immediately felt uneasy, like something was about to happen. I think we as a species have a sense for this kinda thing. Anyway, seconds later the guy on my left reached over to, I assume, grab the keys for the bike, but I was on the BMW, which has a keyless ignition, luckily. I clocked what the guy was trying to do, panicked, and kicked the side of his bike as hard as I could. Which, thankfully, was enough to put him off balance, causing him to topple over. Then I clobbered the guy to the right around the head - he was wearing a helmet so it wouldn't have hurt him, but I suppose I figured it would be enough to shock him and buy me a couple of seconds. I dunno, I was basically shitting my pants at this point. As soon as I'd hit the guy to my right, I took off like the absolute clappers, running a red light in the process (thank goodness nothing was coming the other way). My BMW is a fast bike, at 1000cc and over 170BHP. They were on dirt bikes, which are nowhere near as quick as mine. I also had knowledge of the local roads, which I hoped they didn't. As I flew off, they gave chase but quickly dropped back. A brief glance at my speedo showed I was doing over 120MPH, but it was working. In my panic I didn't know what to do - should I go home? What if they see me pull in and find out where I live? Should I go somewhere else? But it's rush hour, and if I get caught in traffic they could catch up to me again - my bike is a lot quicker on the open road, but in traffic they would have the advantage. I decided to floor it and get home as quick as possible.
There's a straight road that leads to my village, so I figured if I couldn't see them behind me, I'd quickly swing the bike in and hide behind the garage (which can't be seen from the road). If I could see them, I'd just carry on and continue trying to lose them. I'm nearing my drive now, so I glance in the mirrors and see nothing; I decide to risk it and swing in, going up our gravel drive as quickly as I dare, while simultaneously hoping the kids aren't playing in the drive. They aren't. I dive in behind the garage and wait...5 seconds...10 seconds...I hear bikes getting closer. They fly right past my drive, going way too fast for our single-track village road. My wife later asked the owner of the village pub if he caught anything on his CCTV. Here's what he found for us: It looks like only one of them had a number plate, and it's pretty much parallel with the road, so impossible to identify from the video. We've passed it on to the police, and we're waiting to hear back from their forensics dept. to see if they can pick up any prints from my bike. I don't remember if they had gloves on, though, and I'm not very confident it will come to anything. I'm fine now, but it shook me up. I just hope they were opportunist idiots, rather than something more sinister. I've already bought myself a camera for the garage. Stay safe out there, folks. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email , or leave a comment .


Moving (for now?) from HomeAssistant in Python venvs to HomeAssistantOS

I have used HomeAssistant for years . So many years, that I do not remember how many. Nothing I do with it is particularly fancy, but things like having my office lights turn on when I open the door if the light is below a certain luminosity, or turning off my Brompton bike charger once it has finished charging, are fun and convenient. We also have solar panels and a battery now, so I will be interested to see if I use HomeAssistant more for that. But anyway. I have been using HomeAssistant, on a Raspberry Pi 4, using Python venvs for years. It has worked absolutely fine for me, and I have (or, at least, had) no compelling reason to change. For me, this was the ideal setup, in that I could set the Pi up how I wanted, in terms of security and monitoring, and just run HomeAssistant on it. Updating HomeAssistant was as easy as running a simple bash script. I liked it. But… that approach is no longer supported, and, where possible, I prefer to use supported means of running software. That means either running HomeAssistantOS, or else using a containerised instance of HomeAssistant. While I could probably find my way through setting up a HomeAssistant container via podman, it would not be my preference, so I decided to give HomeAssistantOS a go, albeit with some trepidation. As expected, it was easy to install HAOS: write the image to a microSD card, and pop it into the Pi. I already had the switch port set up to the right VLAN, so I plugged in the Pi and waited a few minutes. I had anticipated that it would offer https, via a self-signed certificate, so I was a bit baffled to get a TLS error when I connected to it. “Never mind”, I thought. “I’ll just ssh into it and sort it out.” But no, no ssh either. Fortunately, I discovered quite quickly that, out of the box, it does not offer TLS, and I was able to access the web interface. I had taken a backup from my existing HomeAssistant installation, and I used the web interface on the new installation to restore it. 
It took a few minutes, but restored absolutely everything. I was impressed. I was anticipating - indeed, hoping - to set up TLS and reverse proxying using certbot and nginx. But that is not possible. Instead, I achieved it (reasonably easily, but not as easily as using a command line) via Add-ons from within the HomeAssistant UI. I'd have preferred to do it the normal way, via ssh, but oh well. Annoyingly, I'd also like to have configured a firewall on the machine, but that is not an option either. I've yet to determine if that is going to be a dealbreaker for me, or whether relying on the network-level firewall, controlling access to and from that VLAN, and that machine, will be sufficient. I have also not been able to set up a separate ssh account for my greenbone scanning software, or to configure Wazuh to get the machine talking to my SIEM. Again, I will need to consider the impact of this, but intuitively it does not sit comfortably with me. Nor can I find a way to use restic to back up the configuration and other bits, incrementally and automatically, onto another machine, like I am used to doing. I will have a poke around with the backup tooling offered but again, this does not enthral me. I want to know that, if there's a problem, I have a backup on my restic server. Since I have used HomeAssistant for so long, and since I just restored a backup, the most I can say really is that it is all still working. It doesn't seem faster or slower. The limitations of the appliance-based approach are annoying me, and may be sufficient to drive me towards a container-based approach instead (although that does not appeal to me either). Ultimately, I accept that I am but one user, and perhaps many users do not want the things that I want. Importantly, I am not the developer, and so what I want may simply not be things that they wish to provide. And that is their choice. I guess - personal opinion - that I would prefer a computer and not an appliance .

Kaushik Gopal Yesterday

Podsync - I finally built my podcast track syncer

I host and edit a podcast 1 . When recording remotely, we each record our own audio locally (I on my end, my co-host on his). The service we use (Adobe Podcast, Zoom, Skype-RIP) captures everyone together as a master track. But the quality doesn’t match what each person records locally with their own microphone. So we use that master as a reference point and stitch the individual local tracks together. This is what the industry calls a “ double-ender ”. Add a guest and it becomes a “triple-ender”. But this gets hairy during editing. Each person starts their recording at a slightly different moment — everyone hits record at a different time. Before I can edit, I need to line everything up. Drop all the tracks into a DAW, play the master alongside each individual track, nudge by ear until the speech aligns. Add a guest and it gets tedious fast. 10–15 minutes of fiddly, ear-straining alignment before I’ve even started editing. There’s also drift. Each machine’s audio clock runs at a slightly different rate, so two tracks that are perfectly aligned at minute one might be 200ms apart by minute sixty. So I built PodSync 2 . I first heard of a similar technique from Marco Arment — back in ATP episode 25 . He had a new app for aligning double-ender tracks and was already thinking about whether something so niche was even worth releasing publicly. I don’t think he ever released it. Being a Kotlin developer at the time, I figured I’d build my own. Java was mature. Surely there were audio processing libraries that could handle this. There weren’t 😅. At least not in any clean, usable form. Getting the right signal processing pieces together in JVM-land was awkward enough that my interest fizzled, so I kept doing it by hand. When I revamped Fragmented , I finally came back to this. I used Claude to help me build it — in Rust, no less. 3 But before you chalk this up to another vibecoded project, hear me out. The interesting part here wasn’t just that AI made it easier. 
It was thinking through the actual algorithm: voice activity detection (VAD) to find speech regions. MFCC features to fingerprint the audio. Cross-correlation to find where the tracks match. Some real signal processing techniques, not just prompt engineering. Now, could I have prompted my way to a solution? Probably. But I like to think years of manually aligning tracks — and some sound engineering intuition — helped me steer AI towards a better solution. Working on this felt refreshing. In an era where half the conversation is about AI replacing engineering work, here’s a problem where the hard part is still the problem itself — understanding the domain, picking the right approach, knowing what “correct” sounds like. It gives me confidence that solving real problems well still has its place. I like how Dax (thdxr on Twitter) put it: “I really don’t care about using AI to ship more stuff. It’s really hard to come up with stuff worth shipping.” The core idea: take a chunk of speech from a participant track, compare it against the master recording, find where they match best. That position is the time offset. The trick is picking which chunk of speech to use. Rather than betting on a single region, Podsync finds a few strong candidates per track (longer contiguous speech blocks preferred) and tries each one against the master. For long candidates, it samples from the start, middle, and end. The highest-confidence match wins; if a second independent region agrees on the same offset, that corroboration factors in as a tie-breaker. After finding the offset, Podsync pads or trims each track to align with the master and match its length (and outputs some info on the offset). Drop the output into my DAW at 0:00. Done. I even wrote an agent skill you can just point your agent harness at, and it will take care of all the steps for you. What used to be 10–15 minutes of alignment per episode is now a single command.
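The matching step described above can be sketched in miniature. This toy version is my own simplification, not Podsync's code: it slides a chunk across raw samples and scores each offset with normalized cross-correlation, whereas the real tool works on MFCC fingerprints of VAD-selected speech regions. The argmax-over-offsets structure is the same.

```python
def best_offset(master, chunk):
    """Slide `chunk` along `master` and return the offset (in samples)
    where the normalized cross-correlation score is highest."""
    best, best_score = 0, float("-inf")
    for off in range(len(master) - len(chunk) + 1):
        window = master[off:off + len(chunk)]
        dot = sum(a * b for a, b in zip(window, chunk))
        # Normalize so score is 1.0 for a perfect (scaled) match.
        norm = (sum(a * a for a in window) * sum(b * b for b in chunk)) ** 0.5
        score = dot / norm if norm else 0.0
        if score > best_score:
            best, best_score = off, score
    return best

# Toy signals: the participant's speech burst appears 5 samples into the master.
master = [0.0] * 5 + [0.3, -0.7, 0.9, -0.2, 0.6] + [0.0] * 5
chunk = [0.3, -0.7, 0.9, -0.2, 0.6]
print(best_offset(master, chunk))  # → 5
```

With the offset in hand, padding or trimming the track to start at that position is all that's left before dropping it into the DAW at 0:00.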
Marco, if you ever read this: I'd still love to see your implementation! His solution (as I understand it) is aimed more at correcting drift vs getting the offset right. In practice, I haven’t found drift to be much of a problem. It exists but stays minor, and I’m typically editing every second of the podcast anyway, so it’s easy enough to handle by hand. I even had a branch that corrected drift by splicing at silence points, but it complicated things more than it helped. It’s a podcast on AI development but we strive to make it high signal. None of that masturbatory AI discourse .  ↩︎ See also Phone-sync .  ↩︎ I chose Rust (it’s what interests me these days ) and a CLI tool with no runtime dependency is more pleasant to distribute.  ↩︎

Marc Brooker Yesterday

My heuristics are wrong. What now?

More words. More meaning? Some people who ask me for advice at work get a lot of words in reply. Sometimes, those responses aren’t specific to my particular workplace, and so I share them here. In the past, I’ve written about echo chambers , writing , writing for an audience , time management , and getting big things done . Do you remember Cool Runnings ? In the movie, John Candy is a retired bobsled champion, who uses his experience, connections, and lovable curmudgeon character to turn a rag-tag group of sprinters into an Olympic bobsled team. A lot of principal engineer types think of themselves this way: they used to bobsled, they don’t bobsled, but they still know the skills and the people and the equipment. And that worked well enough, while we were still bobsledding. But we’re not bobsledding anymore. Many of the heuristics that we’ve developed over our careers as software engineers are no longer correct. Not all of them. But many. What it means for a system to be maintainable. How much it costs to write code versus integrate libraries versus take service dependencies. What it means for an API to be well designed, or ergonomic, or usable. What it means to understand code. Where service boundaries should be. Where security and data integrity should be enforced. What’s easy. What’s hard. We’ve seen this play out in small ways before. Over the last decade, I’ve frequently been frustrated by experienced folks who didn’t update their system design heuristics to match the cloud, to match SSDs, to match 100Gb/s networks, and so on. But this is the biggest change I’ve seen in my career by far. An extinction-level event for rules of thumb. But you’re a tech leader, and you need to lead, and leading is heavily based on using your experience to help people and teams be more effective. What now? The victorious man in the day of crisis is the man who has the serenity to accept what he cannot help and the courage to change what must be altered.
1 Let me assume that you want to continue to be a valuable tech leader. You want your teams and organizations to succeed. That you’re willing to sound less smart and less sure, in the interests of being right and helpful. In that case, and I hope that is the case, your job has changed. Your job, for the foreseeable future, is to have the humility to accept that many of your heuristics are wrong, the courage to believe some are still right, and the curiosity to actively learn the difference. You can’t throw out everything you know. Your taste, your high standards, your understanding of your business and customers and the deep technical trade-offs in your area are more valuable than ever before. This is like that fantasy people have of going back to middle school knowing all the things they know now 2 . You’re ahead of the pack in many ways. But you also need to really deeply question the things you know, and the things you assume. Before you share one of your rules of thumb, you need to deeply examine whether it’s still right. And the way you’re going to know that, right now, is by getting back on the ice. Build. Own. Get your hands dirty and use the tools. Build something real. Build a prototype. Build a thousand little experiments in an afternoon. Challenge yourself to try to do something you previously would have assumed is impossible, or infeasible, or unaffordable. Find one of the ways that you’re worried that the new tools are going to lead to trouble, and actively fix it. Then examine the things you’re learning. Update your constants. Over the next couple of years, the most valuable people to have on a software team are going to be experienced folks who’re actively working to keep their heuristics fresh. Who can combine curiosity with experience. Among the least valuable people to have on a software team are experienced folks who aren’t willing to change their thinking. Beyond that, it’s hard to see. This is going to be hard for some folks.
It’s hard to admit where you’re wrong. It’s hard to go back to being a beginner. It’s easy to stick your fingers in your ears and say “No, it’s the children who are wrong”. My advice is to not be that guy. The good news? It’s as fun as hell. Get building, get learning, make something exist that you couldn’t imagine before. 1. Winnifred Crane Wygal, paraphrasing Reinhold Niebuhr. 2. A fantasy I have never understood. Being 13 once was enough for a lifetime, thank you very much.


SQLAlchemy 2 In Practice - Chapter 1 - Database Setup

Welcome! This is the start of a journey which I hope will provide you with many new tricks to improve how you work with relational databases in your Python applications. Given that this is a hands-on book, this first chapter is dedicated to helping you set up your system with a database, so that you can run all the examples and exercises. This is the first chapter of my SQLAlchemy 2 in Practice book. If you'd like to support my work, I encourage you to buy this book, either directly from my store or on Amazon . Thank you!


Thoughts on OpenAI acquiring Astral and uv/ruff/ty

The big news this morning: Astral to join OpenAI (on the Astral blog) and OpenAI to acquire Astral (the OpenAI announcement). Astral are the company behind uv , ruff , and ty - three increasingly load-bearing open source projects in the Python ecosystem. I have thoughts! The Astral team will become part of the Codex team at OpenAI. Charlie Marsh has this to say: Open source is at the heart of that impact and the heart of that story; it sits at the center of everything we do. In line with our philosophy and OpenAI's own announcement , OpenAI will continue supporting our open source tools after the deal closes. We'll keep building in the open, alongside our community -- and for the broader Python ecosystem -- just as we have from the start. [...] After joining the Codex team, we'll continue building our open source tools, explore ways they can work more seamlessly with Codex, and expand our reach to think more broadly about the future of software development. OpenAI's message has a slightly different focus (highlights mine): As part of our developer-first philosophy, after closing OpenAI plans to support Astral’s open source products. By bringing Astral’s tooling and engineering expertise to OpenAI, we will accelerate our work on Codex and expand what AI can do across the software development lifecycle. This is a slightly confusing message. The Codex CLI is a Rust application, and Astral have some of the best Rust engineers in the industry - BurntSushi alone ( Rust regex , ripgrep , jiff ) may be worth the price of acquisition! So is this about the talent or about the product? I expect both, but I know from past experience that a product+talent acquisition can turn into a talent-only acquisition later on. Of Astral's projects the most impactful by far is uv . If you're not familiar with it, uv is by far the most convincing solution to Python's environment management problems, best illustrated by this classic XKCD: Switch to uv and most of these problems go away.
I've been using it extensively for the past couple of years and it's become an essential part of my workflow. I'm not alone in this. According to PyPI Stats, uv was downloaded more than 126 million times last month! Since its release in February 2024 - just two years ago - it's become one of the most popular tools for running Python code. Astral's two other big projects are ruff - a Python linter and formatter - and ty - a fast Python type checker. These are popular tools that provide a great developer experience, but they aren't load-bearing in the same way that uv is. They do however resonate well with coding agent tools like Codex - giving an agent access to fast linting and type checking tools can help improve the quality of the code they generate. I'm not convinced that integrating them into the coding agent itself, as opposed to telling it when to run them, will make a meaningful difference, but I may just not be imaginative enough here. Ever since uv started to gain traction, the Python community has been worrying about the strategic risk of a single VC-backed company owning a key piece of Python infrastructure. I wrote about one of those conversations in detail back in September 2024. The conversation back then focused on what Astral's business plan could be, which started to take form in August 2025 when they announced pyx , their private PyPI-style package registry for organizations. I'm less convinced that pyx makes sense within OpenAI, and it's notably absent from both the Astral and OpenAI announcement posts. An interesting aspect of this deal is how it might impact the competition between Anthropic and OpenAI. Both companies spent most of 2025 focused on improving the coding ability of their models, resulting in the November 2025 inflection point when coding agents went from often-useful to almost-indispensable tools for software development. The competition between Anthropic's Claude Code and OpenAI's Codex is fierce .
Those $200/month subscriptions add up to billions of dollars a year in revenue, for companies that very much need that money. Anthropic acquired the Bun JavaScript runtime in December 2025, an acquisition that looks somewhat similar in shape to Astral. Bun was already a core component of Claude Code and that acquisition looked to mainly be about ensuring that a crucial dependency stayed actively maintained. Claude Code's performance has increased significantly since then thanks to the efforts of Bun's Jarred Sumner. One bad version of this deal would be if OpenAI start using their ownership of uv as leverage in their competition with Anthropic. One detail that caught my eye from Astral's announcement, in the section thanking the team, investors, and community: Second, to our investors, especially Casey Aylward from Accel, who led our Seed and Series A, and Jennifer Li from Andreessen Horowitz, who led our Series B. As a first-time, technical, solo founder, you showed far more belief in me than I ever showed in myself, and I will never forget that. As far as I can tell neither the Series A nor the Series B were previously announced - I've only been able to find coverage of the original seed round from April 2023. Those investors presumably now get to exchange their stake in Astral for a piece of OpenAI. I wonder how much influence they had on Astral's decision to sell. Armin Ronacher built Rye, which was later taken over by Astral and effectively merged with uv. In August 2024 he wrote about the risk involved in a VC-backed company owning a key piece of open source infrastructure and said the following (highlight mine): However having seen the code and what uv is doing, even in the worst possible future this is a very forkable and maintainable thing. I believe that even in case Astral shuts down or were to do something incredibly dodgy licensing wise, the community would be better off than before uv existed.
Astral's own Douglas Creager emphasized this angle on Hacker News today: All I can say is that right now, we're committed to maintaining our open-source tools with the same level of effort, care, and attention to detail as before. That does not change with this acquisition. No one can guarantee how motives, incentives, and decisions might change years down the line. But that's why we bake optionality into it with the tools being permissively licensed. That makes the worst-case scenarios have the shape of "fork and move on", and not "software disappears forever". I like and trust the Astral team and I'm optimistic that their projects will be well-maintained in their new home. OpenAI don't yet have much of a track record with respect to acquiring and maintaining open source projects. They've been on a bit of an acquisition spree over the past three months though, snapping up Promptfoo and OpenClaw (sort-of, they hired creator Peter Steinberger and are spinning OpenClaw off to a foundation), plus closed source LaTeX platform Crixet (now Prism). If things do go south for uv and the other Astral projects we'll get to see how credible the forking exit strategy turns out to be.

Martin Fowler Yesterday

Fragments: March 19

David Poll points out the flawed premise of the argument that code review is a bottleneck: To be fair, finding defects has always been listed as a goal of code review – Wikipedia will tell you as much. And sure, reviewers do catch bugs. But I think that framing dramatically overstates the bug-catching role and understates everything else code review does. If your review process is primarily a bug-finding mechanism, you’re leaving most of the value on the table. Code review answers: “Should this be part of my product?” That’s close to how I think about it. I think of code review as primarily about keeping the code base healthy. And although many people think of code review as pre-integration review done on pull requests, I look at code review as a broader activity both done earlier (Pair Programming) and later (Refinement Code Review). At Firebase, I spent 5.5 years running an API council… The most valuable feedback from that council was never “you have a bug in this spec.” It was “this API implies a mental model that contradicts what you shipped last quarter” or “this deprecation strategy will cost more trust than the improvement is worth” or simply “a developer encountering this for the first time won’t understand what it does.” Those are judgment calls about whether something should be part of the product – the same fundamental question that code review answers at a different altitude. No amount of production observability surfaces them, because the system can work perfectly and still be the wrong thing to have built. His overall point is that code review is all about applying judgment, steering the code in a good direction. AI raises the level of that judgment, focusing review on more important things. I agree that we shouldn’t be thinking of review as a bug-catching mechanism, and that it’s about steering the code base. In addition, I’d add that it’s about communication between people, enabling multiple perspectives on the development of the product.
This is true both for code review, and for pair programming. ❄                ❄                ❄                ❄                ❄ Charity Majors is unhappy with me and the rest of the folks that attended the Thoughtworks Future of Software Development Retreat. But the longer I sit with this recap, the more troubled I am by what it doesn’t say. I worry that the most respected minds in software are unintentionally replicating a serious blind spot that has haunted software engineering for decades: relegating production to the realm of bugs and incidents. There are lots of things we didn’t discuss in that day-and-a-half, and it’s understandable that a topic that matters so deeply to her is visible by its absence. I’m certainly not speaking for anyone else who was there, but I’ll take the opportunity to share some of my thoughts on this. I consider observability to be a key tool in working with our AI future. As she points out, observability isn’t really about finding bugs - although I’ve long been a supporter of the notion of QA in Production. Observability is about revealing what the system actually does, when in the hands of its actual users. Test cases help you deal with the known paths, but reality has a habit of taking you into the unknowns, not just the unknowns of the software’s behavior in unforeseen places, but also the unknowns of how the software affects the broader human and organizational systems it’s embedded into. By watching how software is used, we can learn about what users really want to achieve; these observed requirements are often things that never popped up in interviews and focus groups. If these unknown territories are true in systems written line-by-line in deterministic code, it’s even more true when code is written in a world of supervisory engineering where humans are no longer able to look over every semi-colon.
Certainly harness engineering and humans in the loop help, and I’m as much a fan as ever about the importance of tests as a way to both explain and evaluate the code. But these unknowns will inevitably raise the importance of observability and its role to understand what the system thinks it does. I think it’s likely we’ll see a future where much of a developer’s effort is figuring out what a system is doing and why it’s behaving that way, where observability tools are the IDE. In this I ponder the lesson of AI playing Go. AlphaGo defeated the best humans a decade ago, and since then humans study AI to become better players and maybe discover some broader principles. I’m intrigued by how humans can learn from AI systems to improve in other fields, where success is less deterministically defined. ❄                ❄                ❄                ❄                ❄ Tim Requarth questions the portrayal of AI as an amplifier for human cognition. He considers the different way we navigate with GPS compared to maps. If you unfold a paper map, you study the streets, trace a route, convert the bird’s-eye abstraction into the first-person POV of actually walking—and by the time you arrived, you’d have a nascent mental model of how the city fits together. Or you could fire up Google Maps: A blue dot, an optimal line from A to B, a reassuring robotic voice telling you when to turn. You follow, you arrive, you have no idea, really, where you are. A paper map demands something from you, and that demand leaves you with knowledge. GPS requires nothing, and leaves you with nothing. A paper map and GPS are tools with the same purpose, but opposite cognitive consequences. He introduces some attractive metaphors here. Steve Jobs called computers “bicycles for the mind”, Satya Nadella said with the launch of ChatGPT that “we went from the bicycle to the steam engine”. Like another 19th-century invention, the steam locomotive, the bicycle was a technological revolution.
But a train traveler sat back and enjoyed the ride, while a cyclist still had to put in effort. With a bicycle, “you are traveling,” wrote a cycling enthusiast in 1878, “not being traveled.” In both examples, there’s a difference between tools that extend capability and tools that replace it. The question is: what do we lose when we are passive in the journey? He argues that Silicon Valley executives are too focused on the goal, and ignoring what happens to the humans being traveled. Much of this depends, I think, on whether we care about what we are losing. I struggle with mental arithmetic, so I value calculators, whether on my phone or elsewhere. I don’t think I lose anything when I let the machine handle the toil of calculation. I share missing the sense of place when using a GPS over a map, but am happy that I can now drive through Lynn without getting lost. And when it comes to writing, I have no desire to let an LLM write this page.


Social media reimagined

We’re all familiar with social media: the Facebooks, the Twitters, the TikToks of this silly digital world. They have invaded our lives and taken over our time and attention. We have spent the past decade posting, snapping, tweeting, reeling (?), tiktoking (??). We fall asleep youtubing, only to wake up with our “for you” page completely fucked up because the algorithm lives a life of its own and has decided to profile us as someone who loves sheep herding and carpet cleaning (and, you know, maybe it's right). But imagine for a second if someone managed to reinvent social media. Imagine if there was a new product out there on the internet. A product so revolutionary, so original, so refreshingly different, that it will completely transform the way you feel and interact with other people online. Can you feel the excitement building? Well, I’m sorry—not sorry—to disappoint you because that product is not here. What is here, though (blame Kevin), is a silly little experiment: the Dealgorithm IRC server. I was thinking about setting an IRC server up just for fun, and he took the idea, ran with it, and the server is now live. Now, contrary to the fools at Digg, I know how the web works, and there’s no chance in hell I’d leave this server open to the internet, so that every weirdo out there could join. Which is why, if you’re interested in joining, you need to apply by filling out this form. I’m not going to request a copy of your ID…for now. The server is currently set up to retain up to 2000 messages per channel for up to 48 hours. We might play with these settings, but I don’t want this to be a place for content to stick around. The idea is to have a space where a bunch of people can hang out in a very casual way and talk about anything they find interesting. We may or may not permanently ban you if you profess your love for AI. You’ve been warned. Thank you for keeping RSS alive. You're awesome.
Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs


Nexus Machine: An Energy-Efficient Active Message Inspired Reconfigurable Architecture

Nexus Machine: An Energy-Efficient Active Message Inspired Reconfigurable Architecture Rohan Juneja, Pranav Dangi, Thilini Kaushalya Bandara, Tulika Mitra, and Li-Shiuan Peh MICRO'25 This paper presents an implementation of the Active Message (AM) architecture, as an alternative to FPGA/CGRA architectures. AM architectures have been studied for a while; this was my first exposure. An accelerator implemented on an FPGA or CGRA typically uses a spatial computing paradigm. Each “instruction” in the algorithm is pinned to a physical location on the chip, and data flows between the instructions. I prefer to think of the data in motion as the local variables associated with threads that also move (using a specialized memory consistency model). The active message architecture flips that script around. Data structures are pinned, while instructions move to the relevant data. Fig. 5 shows two processing elements (PEs), each of which contains two active messages (AMs). An active message looks a lot like an instruction: it contains an opcode, source operands, and a result operand. Throughout the computation, AMs move between PEs. PEs have a local ALU and local memory. Source: https://dl.acm.org/doi/10.1145/3725843.3756091 The AM at the top of the figure has a load opcode and two source operands; one of the operands is simply being carried around for future use. An AM with a load opcode will make its way through the chip until it arrives at the PE which contains the data to be loaded. At this point, the load operation will execute, and a new AM will be created. In the figure above, the new AM is the one at the bottom of PE0. Op1 is forwarded unchanged from the predecessor AM, while the other operand holds the value of the data loaded from memory. The new opcode was obtained from the config memory, which contains a description of the program that is being executed. The next step to be performed is to multiply the two operands.
One might expect PE0 to perform the multiplication, but in the figure above the AM is routed to PE1, which performs the multiplication. A reason why you would want to do this is in a situation where there are many AMs queued to access the data memory associated with PE0, but few AMs queued to access the data memory associated with PE1. In this situation, it is better to let PE0 perform loads for other AMs (because PE0 is the only PE that can fulfill that task) and find a PE that is currently idle to perform the multiplication (any PE can perform the multiplication). Now the question you should be asking is: what real-world applications exhibit load imbalances between PEs like this? If a data structure were split between all PEs evenly, you would think that load would be spread nicely across the PEs. The answer is: irregular workloads like sparse matrix-vector multiplication. Fig. 6 shows how a source matrix, source vector, and result vector could be partitioned across 4 PEs. You can imagine how the sparsity of the tensors being operated on would cause load imbalance between the PEs. Source: https://dl.acm.org/doi/10.1145/3725843.3756091 Fig. 11 compares the Nexus Machine against other architectures (each design has the same number of ALUs). Fig. 12 shows performance-per-watt. Source: https://dl.acm.org/doi/10.1145/3725843.3756091 Dangling Pointers I imagine that AM architectures work best for algorithms that are insensitive to the order in which AMs are executed. That would be the case for matrix/vector multiplication (assuming addition is associative). It seems like there is a large design space here related to PE capabilities. Data structures could be replicated across PEs to enable memory access AMs to be serviced by multiple PEs, or the ALUs inside of each PE could be heterogeneous (e.g., some PEs can do division, others cannot).
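To make the execution model concrete, here is a toy sketch in Python. This is my own illustration, not the paper's implementation: a PE is just pinned data plus a work queue, load AMs are routed to the PE that owns the address, and compute AMs go to whichever PE currently has the shortest queue (the load-balancing idea described above).

```python
# Toy model of Active Message (AM) routing between processing elements (PEs).
from dataclasses import dataclass, field

@dataclass
class PE:
    pe_id: int
    memory: dict                      # data pinned to this PE: address -> value
    queue: list = field(default_factory=list)

@dataclass
class AM:
    opcode: str                       # "load" or "mul" in this sketch
    operands: list                    # an address plus carried values

def route(am, pes):
    """Loads must go to the PE owning the address; compute AMs can go
    anywhere, so pick the PE with the shortest queue."""
    if am.opcode == "load":
        target = next(pe for pe in pes if am.operands[0] in pe.memory)
    else:
        target = min(pes, key=lambda pe: len(pe.queue))
    target.queue.append(am)
    return target

def step(pe, pes):
    """Execute one AM. A completed load spawns a successor AM whose next
    opcode would come from config memory (hard-coded to "mul" here)."""
    am = pe.queue.pop(0)
    if am.opcode == "load":
        loaded = pe.memory[am.operands[0]]
        route(AM("mul", [am.operands[1], loaded]), pes)
        return None
    return am.operands[0] * am.operands[1]

pes = [PE(0, {"x": 6}), PE(1, {"y": 7})]
first = route(AM("load", ["x", 7]), pes)  # load x, carrying 7 for the multiply
step(first, pes)                          # executes the load, spawns the mul AM
busy = next(pe for pe in pes if pe.queue)
print(step(busy, pes))                    # -> 42
```

Note how the mul AM is not forced onto the PE that did the load; it lands wherever the queues are shortest, which is exactly the flexibility the paper exploits for irregular workloads.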

Neil Madden Yesterday

Maybe version ranges are a good idea after all?

One of the most important lessons I’ve learned in security is that it’s always better to push security problems back to the source as much as possible. For example, a small number of experts (hopefully) make cryptography libraries, so it’s generally better if they put in checks to prevent things like invalid curve attacks rather than leaving that up to applications, so that we don’t get the same vulnerabilities cropping up again and again. It’s much more efficient to fix the problem at source rather than having everyone re-implement the same redundant checks everywhere. Now consider how we currently manage security vulnerabilities in third-party software dependencies. Current accepted wisdom is to lock dependencies to a single specific version, often with a cryptographic hash to ensure you get exactly that version. This is great for reproducibility, and everyone loves reproducibility. However, when there’s a security vulnerability in that dependency, every single consumer of that library has to manually update to the next version, and then their consumers have to update, and so on. The fix is done at source, but the responsibility for updating cascades through the entire ecosystem. This is not efficient. Two years after log4shell, around 25% of vulnerable consumers had apparently still not updated. To solve this problem we have created an industry of automated nagging software: SCA tools that alert you to all the “risk” you are carrying, and the ever-watchful Dependabot, which will automatically upgrade everything for you. Combine this with CVSS severity inflation (CVSS 4 is not helping in this regard) and the acceleration in production of CVEs, and it’s not surprising that many developers find the whole situation demoralising and stressful. It’s an almost constant churn of new must-fix CVEs to address, especially when only about 1% of CVEs will ever go on to be exploited (rising to about 4.25% for critical CVEs).
This is not a sustainable or efficient situation. There’s clearly a problem, but what would a solution look like? I have some ideas, but this is a complex problem where it is easy to introduce unintended side-effects. So take these suggestions as just that: suggestions. To provoke discussion, not as a perfect fully-baked solution. There are lots of competing factors to balance here, and I’m not going to claim that I’ve considered them all. Also, many of the suggestions I make below are not currently actionable. It is an idea for what the future might look like, not something you can implement right now. Ultimately, I think that locking to specific versions is a mistake. And by locking, I mean not just explicit lockfiles, but also things like Maven where dependency versions are (usually) uniquely determined by the POM. This feels like such heresy to utter in 2026, and I’m sure there will be lots of angry reactions to this post. But in my opinion, it would be much healthier in general if software builds always pulled in the latest patch version of a dependency (and transitive dependencies), and specified only a particular major version and minimum minor version. (Although even that can be problematic). “But, but, but, …”, I hear you scream. What about supply chain attacks? What about deterministic builds and reproducibility? What about unintended breakages in patch versions? Locking to a particular version lets you be more controlled in applying updates: Dependabot automatically upgrades, yes, but it raises a PR and lets you run your test suite first. This is surely better than just automatically pulling in the latest thing every time. What if someone publishes a malicious version of the package? I don’t want to just pull that in straightaway! These are all completely valid concerns, but I believe they can be addressed by changes to dependency resolution: (You could implement some of these things right now by having your build scripts run e.g. “uv lock --upgrade” or “mvn versions:use-latest-versions” before each build, but again this is shifting the responsibility onto consumers to implement). How would this be better? It means that the default shifts from pulling in fixed insecure versions to always pulling in newer, more secure versions. It’s based on an assumption that the overwhelming majority of software patches are good. It also shifts work away from downstream developers: for the most part, updates will happen automatically and without any manual intervention. And it happens for everyone, not just the projects mature enough to be running Dependabot. And it happens on every active release branch, not just on main. A further advantage of this approach is that most low and medium severity issues (and probably a fair number of “high” ones too) could be fixed without a CVE being issued at all. The whole CVE process exists largely so that vendors can scaremonger and sell tooling, and security researchers can make a name for themselves. I frankly find it one of the most embarrassing and immature aspects of software security. Many smaller projects don’t have the time or inclination to issue CVEs, so just silently fix any security bugs in the next release. And frankly that should be the norm. The only reason it isn’t the norm is that we’ve got ourselves into a situation where CVEs have to be published because nobody updates without them. The default is to stick with the older insecure versions, so you have to scream loudly to overcome that inertia. Because updating is work and not updating is free. Switch the default and perhaps we can all start to calm down a bit. Maybe. Firstly, just as you should implement a time delay for Dependabot to give some leeway for supply chain attacks to be discovered, the same should happen here: dependency resolution should have a built-in time delay, so that new versions are not resolved until they are at least N days old.
(I believe most repos already track version publication time). This can be controlled by setting a policy, so that e.g. you can have a canary CI pipeline that always builds with the latest to flag any incompatibilities early. It should be possible to shun versions that are known to cause test failures or other incompatibilities. Ideally such shunning information would also feed back to the central repository so that frequently shunned versions can be investigated. A sudden version update breaks your PR for reasons unrelated to your changes? Shun it! We change the default from opting-in to security updates to opting-out. Building from source should always produce a detailed SBOM that lists exactly which versions of which libraries went into that build. It should then be possible to specify the SBOM when (re-)building to have it resolve exactly those versions, giving us back reproducibility. Essentially, this is producing the same information as a lockfile, but at build-time rather than commit-time. This allows retrospective rather than proactive reproducibility. (If you want to be a bit more deterministic around releases then it seems reasonable to me to switch to SBOM-locked builds at code-freeze).
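The delay-and-shun policy above can be sketched in a few lines. This is my own toy illustration, not any real package manager's resolver; the version data and parameter names are invented for the example:

```python
# Toy resolver: newest patch release of the wanted major.minor line,
# but only if it is at least N days old and not "shunned".
from datetime import date, timedelta

def resolve(versions, want_major, min_minor, today, min_age_days=7, shunned=()):
    """versions maps (major, minor, patch) tuples to publication dates."""
    cutoff = today - timedelta(days=min_age_days)
    candidates = [
        v for v, published in versions.items()
        if v[0] == want_major and v[1] >= min_minor
        and published <= cutoff          # built-in time delay
        and v not in shunned             # known-bad versions excluded
    ]
    return max(candidates) if candidates else None

versions = {
    (1, 4, 0): date(2026, 1, 1),
    (1, 4, 1): date(2026, 2, 1),   # security fix
    (1, 4, 2): date(2026, 3, 18),  # too new: still inside the delay window
    (2, 0, 0): date(2026, 2, 15),  # different major line, never chosen
}
print(resolve(versions, want_major=1, min_minor=4, today=date(2026, 3, 20)))
# -> (1, 4, 1): the newest patch that is old enough
```

The key property is that the security fix (1, 4, 1) is picked up automatically with no consumer action, while a freshly published (and possibly malicious) (1, 4, 2) sits out its quarantine period.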

Oya Studio Yesterday

When MultiChildLayoutDelegate is not enough

MultiChildLayoutDelegate is great for custom layouts, but it can't size the parent from its children. SlottedMultiChildRenderObjectWidget gives you that power — here's how to use it.

Stratechery Yesterday

Spring Break

Stratechery is on a bit of a disjointed Spring Break, as my usual week off will be spread out: I will return to my usual posting schedule on Tuesday, March 31. All other Stratechery Plus content, including my podcasts, will stay on schedule.
There will be no Update on Thursday, March 19
There will be no Update on Monday and Tuesday, March 23–24; there will be an Update and Interview on Wednesday and Thursday, March 25–26
There will be no Update on Monday, March 30

Herman's blog Yesterday

On becoming a day person

I was recently asked on a podcast what my biggest game-changer was, whether it be a habit, way of thinking, purchase, or change of context. I didn't need to fish around for an answer, since I already know my biggest game-changer : becoming a day person. By this I mean I operate within daylight hours, getting up early, making good coffee and watching the sunrise with Emma. There’s something grounding about witnessing both the start and the end of the light; it makes me feel in tune with this natural cycle 1 . I used to be someone who stayed up late and slept through most of the morning. It's only been the last 5 years that I've consistently gotten out of bed early. I wake up naturally around 6am, hand grind some coffee while I'm still a bit muzzy and then, once the pour-over is blooming, wake Emma up to watch the sun rise over Cape Town while the air is still crisp and cool, and cars haven't ruined the soundscape and air quality. We sit and enjoy the coffee and view, generally in silence at first then check in with each other, ask about the day, and just enjoy the quality time together. Having the mornings available is delightful since most people aren't awake yet, which makes it feel like a secret, special pocket in which to operate. I like to take my time getting into the day. I don't need to rush and instead have a gentle start, which puts me in a good mood. I think rushing in the morning is one of the more stressful things that I'm happy to leave behind. It takes me about an hour from waking up to leaving for the gym or a trail run—living in Cape Town comes with mountain perks you see. I like to exercise in the morning because there are fewer commitments and plans that can derail me. The morning belongs to me, and I can do with it as I please. After exercise I shower, make a tasty breakfast, clean the kitchen, then get into work for the morning. 
I tend to not open emails until after lunch so that my morning can be used for focussed work, one task at a time, no distractions. After lunch (and usually a nap) I dig into emails, admin, and other tasks that need tending to. This causes the rest of the day to get quite messy and unfocussed, but that's okay because if my morning goes right (and it usually does) then all the important things are already done. I usually close my laptop around 3 or 4 and enjoy the rest of the afternoon in whichever way I see fit. Conveniently, around 8:30 or 9 I start getting tired since I've been awake for 15 hours already. I don't have any bright overhead lights on in the evenings, and the apartment has a nice warm glow which signals to my body that it's time to start winding down. And because I keep "regular business hours" my mind isn't overactive in the evening (it helps that I'm not on my phone ). We're generally in bed by 9:15 and after about half an hour of reading (currently Monstrous Regiment by Terry Pratchett ) I'm fast asleep. This sounds early to some, but the tradeoff is worth it. Generally the activities past 10pm involve watching series or going to a bar, neither of which I'm particularly attached to. I know Europeans like to eat dinner late at night, but luckily that's not the culture here, with South Africans having the earliest bedtimes in the world 2 . That isn't to say that I don't stay up late on occasion. I like to socialise over late dinners, go to music festivals, the cinema, and also get dragged to the theatre on occasion. It's just that these are exceptions, with the downside being that even when I'm out until 1am I still wake up naturally at 6. This is what naps are made for. I'm not suggesting everyone make the switch to being daytime people (I like having them to myself, thank you very much). Experiment and do what feels best for you. 
This is just something that had an outsized positive impact on me, and I suspect there are many other people who would enjoy mornings if they gave them a proper chance. Opinion: Research about "morning larks and night owls" tends to be a bit muddy and suggests that people can't make the switch due to genetics. In a research setting I'm sure it's pretty difficult to make the switch in X number of weeks, but the research tends to ignore that people make the switch all the time. It also ignores that historically humans have by-and-large been day-time creatures, since artificial lighting (including fire) is a fairly recent invention in evolutionary time, and we have pretty terrible night vision. All of the great apes being diurnal suggests that we are too. ↩ Here's a neat ranking of sleep and wake times globally ↩
