Latest Posts (20 found)
Jim Nielsen · 27 days ago

You Might Debate It — If You Could See It

Imagine I’m the design leader at your org and I present the following guidelines I want us to adopt as a team for doing design work:

- Typography: Use expressive, purposeful fonts and avoid default stacks (Inter, Roboto, Arial, system).
- Motion: Use a few meaningful animations (page-load, staggered reveals) instead of generic micro-motions.
- Background: Don't rely on flat, single-color backgrounds; use gradients, shapes, or subtle patterns to build atmosphere.
- Overall: Avoid boilerplate layouts and interchangeable UI patterns. Vary themes, type families, and visual languages.

How do you think that conversation would go? I can easily imagine a spirited debate where some folks disagree with any or all of my points, arguing that they should be struck as guidelines from our collective ethos of craft. Perhaps some are boring, or too opinionated, or too reliant on trends. There are lots of valid, defensible reasons. I can easily see this discussion being an exercise in frustration, where we debate for hours and get nowhere — “I suppose we can all agree to disagree”.

And yet — thanks to a link to Codex’s front-end tool guidelines in Simon Willison’s article about how coding agents work — I see that these are exactly the kind of guidelines that are tucked away inside an LLM that’s generating output for many teams. It’s like a Trojan Horse of craft: guidelines you might never agree to explicitly are guiding LLM outputs, which means you are agreeing to them implicitly.

It’s a good reminder about the opacity of the instructions baked into generative tools. We would debate an open set of guidelines for hours, but if they’re opaquely baked into a tool without our knowledge, does anybody even care? When you offload your thinking, you might be on-loading someone else’s you’d never agree to — personally or collectively.


Solod: Go can be a better C

I'm working on a new programming language named Solod (So). It's a strict subset of Go that translates to C, without hidden memory allocations and with source-level interop.

Highlights: So supports structs, methods, interfaces, slices, multiple returns, and defer. To keep things simple, there are no channels, goroutines, closures, or generics. So is for systems programming in C, but with Go's syntax, type safety, and tooling.

Hello world • Language tour • Compatibility • Design decisions • FAQ • Final thoughts

This Go code in a file: Translates to a header file: Plus an implementation file:

In terms of features, So is an intersection between Go and C, making it one of the simplest C-like languages out there — on par with Hare. And since So is a strict subset of Go, you already know it if you know Go. It's pretty handy if you don't want to learn another syntax. Let's briefly go over the language features and see how they translate to C.

Variables • Strings • Arrays • Slices • Maps • If/else and for • Functions • Multiple returns • Structs • Methods • Interfaces • Enums • Errors • Defer • C interop • Packages

So supports basic Go types and variable declarations: is translated to ( ), to ( ), and to ( ). is not treated as an interface. Instead, it's translated to . This makes handling pointers much easier and removes the need for . is translated to (for pointer types).

Strings are represented as type in C: All standard string operations are supported, including indexing, slicing, and iterating with a for-range loop. Converting a string to a byte slice and back is a zero-copy operation: Converting a string to a rune slice and back allocates on the stack with : There's a stdlib package for heap-allocated strings and various string operations.

Arrays are represented as plain C arrays ( ): on arrays is emitted as a compile-time constant. Slicing an array produces a .
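Since So is a strict subset of Go, So programs are just ordinary Go programs. Here is a small illustrative sketch of the array, string, and slicing features described above; it is my own example, not taken from So's documentation, and the C-emission details in the comments are only as the post describes them:

```go
package main

// Illustrative sketch only: plain Go, restricted to features the post
// says So supports. Per the post, the array below would emit as a plain
// C array, and len on it folds to a compile-time constant.

// sumSlice iterates with for-range, one of the supported slice operations.
func sumSlice(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

func main() {
	nums := [4]int{10, 20, 30, 40}
	println(len(nums)) // 4: a compile-time constant in the C output

	s := "hello"
	b := []byte(s)  // described in the post as a zero-copy conversion
	println(len(b)) // 5

	println(sumSlice(nums[1:3])) // slicing an array yields a slice; prints 50
}
```

Everything here compiles as regular Go, which is what makes So's "you already know it" claim work.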
Slices are represented as type in C: All standard slice operations are supported, including indexing, slicing, and iterating with a for-range loop. As in Go, a slice is a value type. Unlike in Go, a nil slice and an empty slice are the same thing: allocates a fixed amount of memory on the stack ( ). only works up to the initial capacity and panics if it's exceeded. There's no automatic reallocation; use the stdlib package for heap allocation and dynamic arrays.

Maps are fixed-size and stack-allocated, backed by parallel key/value arrays with linear search. They are pointer-based reference types, represented as in C. No delete, no resize. Only use maps when you have a small, fixed number of key-value pairs. For anything else, use heap-allocated maps from the package (planned). Most of the standard map operations are supported, including getting/setting values and iterating with a for-range loop: As in Go, a map is a pointer type. A map emits as in C.

If-else and for come in all shapes and sizes, just like in Go. Standard if-else with chaining: Init statement (scoped to the if block): Traditional for loop: While-style loop: Range over an integer:

Regular functions translate to C naturally: Named function types become typedefs: Exported functions (capitalized) become public C symbols prefixed with the package name ( ). Unexported functions are . Variadic functions use the standard syntax and translate to passing a slice: Function literals (anonymous functions and closures) are not supported.

So supports two-value multiple returns in two patterns: and . Both cases translate to C type: Named return values are not supported.

Structs translate to C naturally: works with types and values:

Methods are defined on struct types with pointer or value receivers: Pointer receivers pass in C and cast to the struct pointer.
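The slice and two-value-return patterns above can be sketched in plain Go. One behavioral difference the post calls out: in So, the make call below would stack-allocate the backing buffer and append past its capacity would panic, whereas ordinary Go (shown here) reallocates instead. The example is mine, not from So's docs:

```go
package main

// Illustrative Go sketch of So's slice and multiple-return patterns.
// In So, make with an explicit capacity stack-allocates the buffer and
// append past capacity 8 panics; in plain Go, append would reallocate.

type Point struct {
	X, Y int
}

// find uses the (T, bool) return pattern, which the post says translates
// to a C struct return type. A lowercase name like find would stay
// unexported (static) in the emitted C, per the post's visibility rules.
func find(ps []Point, x int) (Point, bool) {
	for _, p := range ps {
		if p.X == x {
			return p, true
		}
	}
	return Point{}, false
}

func main() {
	ps := make([]Point, 0, 8) // fixed capacity: stack-allocated in So
	ps = append(ps, Point{X: 1, Y: 2}, Point{X: 3, Y: 4})

	if p, ok := find(ps, 3); ok {
		println(p.Y) // 4
	}
}
```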
Value receivers pass the struct by value, so modifications operate on a copy: Calling methods on values and pointers emits pointers or values as necessary: Methods on named primitive types are also supported.

Interfaces in So are like Go interfaces, but they don't include runtime type information. Interface declarations list the required methods: In C, an interface is a struct with a pointer and function pointers for each method (less efficient than using a static method table, but simpler; this might change in the future): Just as in Go, a concrete type implements an interface by providing the necessary methods: Passing a concrete type to functions that accept interfaces: Type assertion works for concrete types ( ), but not for interfaces ( ). Type switch is not supported. Empty interfaces ( and ) are translated to .

So supports typed constant groups as enums: Each constant is emitted as a C : is supported for integer-typed constants: Iota values are evaluated at compile time and translated to integer literals:

Errors use the type (a pointer): So only supports sentinel errors, which are defined at the package level using (implemented as a compiler built-in): Errors are compared using . This is an O(1) operation (compares pointers, not strings): Dynamic errors ( ), local error variables ( inside functions), and error wrapping are not supported.

defer schedules a function or method call to run at the end of the enclosing scope. The scope can be either a function (as in Go): Or a bare block (unlike Go): Deferred calls are emitted inline (before returns, panics, and scope end) in LIFO order: Defer is not supported inside other scopes like or .

Include a C header file with : Declare an external C type (excluded from emission) with : Declare an external C function (no body or ): When calling extern functions, and arguments are automatically decayed to their C equivalents: string literals become raw C strings ( ), string values become , and slices become raw pointers.
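The method, interface, and defer behavior described above can also be sketched as plain Go. Again, this is my illustrative example rather than So's documentation; the comments about the C output reflect only what the post describes:

```go
package main

// Illustrative Go sketch of methods, interfaces, and defer as the post
// describes them. In So's C output, the Shape interface would become a
// struct holding a data pointer plus one function pointer per method.

type Shape interface {
	Area() int
}

type Rect struct {
	W, H int
}

// Value receiver: the struct is passed by value, so the method sees a copy.
func (r Rect) Area() int {
	return r.W * r.H
}

// describe accepts any Shape; per the post, So dispatches through the
// interface's function pointers rather than runtime type information.
func describe(s Shape) int {
	return s.Area()
}

func main() {
	defer println("cleanup runs at scope end, LIFO order")
	r := Rect{W: 3, H: 5}
	println(describe(r)) // 15
}
```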
This makes interop cleaner: The decay behavior can be turned off with the flag: The package includes helpers for converting C pointers back to So string and slice types. The package is also available and is implemented as compiler built-ins.

Each Go package is translated into a single .h + .c pair, regardless of how many files it contains. Multiple files in the same package are merged into one file, separated by comments. Exported symbols (capitalized names) are prefixed with the package name: Unexported symbols (lowercase names) keep their original names and are marked : Exported symbols are declared in the .h file (with for variables). Unexported symbols only appear in the .c file. Importing a So package translates to a C : Calling imported symbols uses the package prefix: That's it for the language tour!

So generates C11 code that relies on several GCC/Clang extensions: You can use GCC, Clang, or to compile the transpiled C code. MSVC is not supported. Supported operating systems: Linux, macOS, and Windows (partial support).

So is highly opinionated. Simplicity is key. Fewer features are always better. Every new feature is strongly discouraged by default and should be added only if there are very convincing real-world use cases to support it. This applies to the standard library too — So tries to export as little of Go's stdlib API as possible while still remaining highly useful for real-world use cases.

No heap allocations are allowed in language built-ins (like maps, slices, new, or append). Heap allocations are allowed in the standard library, but they must clearly state when an allocation happens and who owns the allocated data.

Fast and easy C interop. Even though So uses Go syntax, it's basically C with its own standard library. Calling C from So, and So from C, should always be simple to write and run efficiently. The So standard library (translated to C) should be easy to add to any C project.

Readability.
There are several languages that claim they can transpile to readable C code. Unfortunately, the C code they generate is usually unreadable or barely readable at best. So isn't perfect in this area either (though it's arguably better than others), but it aims to produce C code that's as readable as possible.

Go compatibility. So code is valid Go code. No exceptions.

Raw performance. You can definitely write C code by hand that runs faster than code produced by So. Also, some features in So, like interfaces, are currently implemented in a way that's not very efficient, mainly to keep things simple.

Hiding C entirely. So is a cleaner way to write C, not a replacement for it. You should know C to use So effectively.

Go feature parity. Less is more. Iterators aren't coming, and neither are generic methods.

I have heard these questions several times, so it's worth answering them.

Why not Rust/Zig/Odin/other language? Because I like C and Go.

Why not TinyGo? TinyGo is lightweight, but it still has a garbage collector, a runtime, and aims to support all Go features. What I'm after is something even simpler, with no runtime at all, source-level C interop, and eventually, Go's standard library ported to plain C so it can be used in regular C projects.

How does So handle memory? Everything is stack-allocated by default. There's no garbage collector or reference counting. The standard library provides explicit heap allocation in the package when you need it.

Is it safe? So itself has few safeguards other than the default Go type checking. It will panic on out-of-bounds array access, but it won't stop you from returning a dangling pointer or forgetting to free allocated memory. Most memory-related problems can be caught with AddressSanitizer in modern compilers, so I recommend enabling it during development by adding to your .

Can I use So code from C (and vice versa)? Yes. So compiles to plain C, therefore calling So from C is just calling C from C.
Calling C from So is equally straightforward.

Can I compile existing Go packages with So? Not really. Go uses automatic memory management, while So uses manual memory management. So also supports far fewer features than Go. Neither Go's standard library nor third-party packages will work with So without changes.

How stable is this? Not ready for production at the moment.

Where's the standard library? There is a growing set of high-level packages ( , , , ...). There are also low-level packages that wrap the libc API ( , , , ...). Check the links below for more details.

Even though So isn't ready for production yet, I encourage you to try it out on a hobby project or just keep an eye on it if you like the concept.

To recap:

- Go in, C out. You write regular Go code and get readable C11 as output.
- Zero runtime. No garbage collection, no reference counting, no hidden allocations. Everything is stack-allocated by default. Heap is opt-in through the standard library.
- Native C interop. Call C from So and So from C — no CGO, no overhead.
- Go tooling works out of the box — syntax highlighting, LSP, linting and "go test".

The generated code relies on these GCC/Clang extensions:

- Binary literals ( ) in generated code.
- Statement expressions ( ) in macros.
- for package-level initialization.
- for local type inference in generated code.
- for type inference in generic macros.
- for and other dynamic stack allocations.

Further reading: Installation and usage • So by example • Language description • Stdlib description • Source code


Re: People Are Not Friction

Dave Rupert puts words to the feeling in the air: the unspoken promise of AI is that you can automate away all the tasks and people who stand in your way.

Sometimes I feel like there’s a palpable tension in the air, as if we’re waiting to see whether AI will replace designers or engineers first. Designers empowered by AI might feel those pesky nay-saying, opinionated engineers aren’t needed anymore. Engineers empowered with AI might feel like AI creates designs that are good enough for most situations. Backend engineers feel like frontend engineering is a solved problem. Frontend engineers know scaffolding a CRUD app or an entire backend API is simple fodder for the agent. Meanwhile, management cackles in their leather chairs saying “Let them fight…”

It reminds me of something Paul Ford said: The most brutal fact of life is that the discipline you love and care for is utterly irrelevant without the other disciplines that you tend to despise.

Ah yes, that age-old mindset where you believe your discipline is the only one that really matters. Paradoxically, the promise of AI to every discipline is that it will help bypass the tedious-but-barely-necessary tasks (and people) of the other pesky disciplines. AI whispers in our ears: “everyone else’s job is easy except yours”.

But people matter. They always have. Interacting with each other is the whole point! I look forward to a future where, hopefully, decision makers realize: “Shit! The best products come from teams of people across various disciplines who know how to work with each other, instead of trying to obviate each other.”

Kev Quirk · Yesterday

Another ANOTHER New Lick of Paint

So it turns out I didn't like the mustard yellow and steel blue design that I created a couple weeks ago. It just didn't sit well with me, and if I look back over my design history, the designs that have stuck over the years are invariably grey with a splash of colour.

Problem was, I didn't really know how I was going to redesign the site. Then, one day, I was talking with Sven via email and I visited his blog (also running Pure Blog for the record 🎉), and I immediately knew that was the kind of design I was looking for. Its simplicity is just lovely, and so easy to read.

So I set about making my own version of Sven's lovely design. I didn't want it to be exactly the same as his, but I also didn't think my design would turn out quite as close to his as it did - I suppose that goes to show how much I like his site. :-) I've spoken to Sven and he's good with me effectively copying his design. For posterity (as I'm likely to change it again in the future) here's what the design currently looks like:

I'm still not 100% sold on the font (but it is growing on me), and I'm not sure about the yellow in the , but blue everywhere else. So I may change a couple of things subtly. Having said all that, overall I'm the happiest with the design I've been since moving to Pure Blog. Finally, I'd like to thank Sven for allowing me to steal his wonderful design.

What do you guys think? Leave a comment below, or reply by email.

Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

Stratechery · Yesterday

2026.12: Please Listen to My Podcast

Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone . Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings . On that note, here were a few of our favorites this week. This week’s Sharp Tech video is on Questions about Anthropic vs. the U.S. Government. Everything I Didn’t Write . This was one of those weeks where far more happened than I could write about — and that’s partly my fault for taking a stand on bubbles ! To that end, I highly suggest this week’s episode of Sharp Tech , where we cover: OpenAI’s pivot to enterprise, and why AI might look like the PC in the 1980s Why I think that agents are not only real, but also the reason we are not in a bubble OpenClaw as evidence that my thesis that OpenAI and Anthropic are sustainably differentiated through their integration of harness and model is wrong Nvidia’s inference pivot, and why Nvidia is particularly concerned about a world dominated by OpenAI and Anthropic (and why Microsoft might be in trouble) And, for good measure, why I don’t mind Wisconsin winters I think that each of these points could be another Update, but also, I’m taking a few days off for vacation, so I hope you’ll listen to this episode in particular. — Ben Thompson What Jensen Huang Has In Common with Steve Jobs. I really enjoyed this week’s Dithering covering Nvidia’s announcements at GTC Monday , including a near-perfect inversion of what Jensen Huang was telling the world about Nvidia’s approach to inference workloads just one year ago. 
In their trademark 15-minute format, Ben explains how and why Nvidia’s inference messaging is now different (see also: this week’s Stratechery Interview), while Gruber draws on decades of Apple experience to note the similarities between Huang and Steve Jobs. It’s a great listen that renders legible an easily missed strategic inflection point at the most valuable company in the world. — Andrew Sharp

Trump’s Trip to Beijing, Delayed Indefinitely. As the war in Iran continues, this week’s Sharp China covered the news that President Trump will delay a trip to Beijing that had been scheduled to begin March 31st. Come to hear why both sides are likely relieved by the delay, and stay to hear about softened Taiwan threat assessments from the U.S. intelligence community and a succession of PLA military scientists who are being purged for reasons that aren’t entirely clear. — AS

Agents Over Bubbles — Agents are fundamentally changing the shape of demand for compute, both in terms of how they work and in terms of who will use them. They’re so compelling that I no longer believe we’re in a bubble.

An Interview with Nvidia CEO Jensen Huang About Accelerated Computing — An interview with Nvidia CEO Jensen Huang about his GTC 2026 keynote, navigating China and DC, and remembering Nvidia’s true nature.

Jensen Huang and Andy Grove, Groq LPUs and Vera CPUs, Hotel California — GTC 2026 marked an important inflection point for Nvidia, as the company is selling multiple architectures instead of focusing on just one GPU. The motivation is to serve all needs and keep all customers.

What the NBA Could Be Getting from College Basketball — College basketball is fantastic, and the NBA should take advantage of its success by raising the age limit for the NBA Draft.
LLM Paradigm Changes
Jensen Huang’s Jobsian Keynote
From Fiber to AI: A Laser Giant’s Rebirth
Mexico City’s Sinking Lands
The War in Iran and the Visit to Beijing; New DNI Assessments on Taiwan; Military Scientists Disappearing From Public View
How to Miss a Free Throw, The Biggest Top 100 Disappointments, Expansion is Afoot (Again)
How NOT to Miss a Free Throw, Generic Houston Rockets Slander, The Top 100 Pleasant Surprises
OpenAI’s Enterprise Pivot, The Rise of Agents and Bubble Counterpoints, Nvidia Changes Its Inference Story


Premium: The Hater's Guide To Adobe

I hear from a lot of people that are filled with bilious fury about the tech industry, but few companies have pissed off the world more than Adobe. As the foremost monopolist in software, web and graphic design, Adobe has created one of the single most abusive, usurious freakshows in capitalist history, trapping users in endless, punishing subscriptions to software they need that only ever seems to get worse.

In the Department of Justice’s recently-settled case against Adobe, it was revealed that early termination fees for its annual subscriptions amounted to 50% of the remaining balance on the customer’s subscription, with one unnamed Adobe executive referring to these fees as “a bit like heroin for Adobe,” adding that there [was] “...absolutely no way to kill off ETF or talk about it more obviously [without] taking a big business hit.”

Let me explain how loathsome Adobe’s business model truly is. The below is a screenshot from Adobe’s website from Wednesday, March 18, 2026. One might read this and think “wow, $34.99 a month, what a deal!” and immediately sign up without clicking on “view terms,” which reveals that after three months the subscription cost becomes $69.99 a month, and that this “monthly” subscription is a year-long contract.

Adobe deliberately hid (and I’d argue still hides!) its early termination fees behind “inconspicuous hyperlinks and fine print.” Want to cancel? Adobe charges you 50% of the remaining balance on your contract — so, in this case, over $300, and it justifies this by saying (and I quote) “...your purchase of a yearly subscription comes with a significant discount. Therefore, a cancellation fee applies if you cancel before the year ends.”

The DOJ did a great job in its complaint explaining how much Adobe sucks, just before doing nothing to impede them doing so: An exhibit from the DOJ’s lawsuit shows the MC Escher painting of canceling an Adobe subscription and the six different screens that it takes to do so.
The DOJ also added that Adobe’s subscription revenue had nearly doubled between 2019 ($7.71 billion) and 2023 ($14.22 billion), and since then, Adobe’s subscription revenue hit $20.5 billion in 2024 and $22.9 billion in 2025 .  To be clear, Adobe is utilizing many very, very common tricks that the software industry has used to keep people from quitting, and basically every software service I use makes you jump through three to five different screens (fuck you, Canva!) to cancel. These tricks are commonly referred to as “dark patterns.”  Adobe’s Early Termination Fees are, however, uniquely awful, both in that they employ the evil sorcery of enterprise software contracts and deploy them against creatives that are, in many cases, barely keeping their heads above water in an era defined by people trying to destroy them.  I will say, however, that I’ve never seen anyone else bill monthly for an annual contract outside of the grotesque SaaS monstrosities I wrote about last week . These are egregious, deceptive and manipulative techniques that shouldn’t be deployed against anyone , let alone creatives and consumers.  And because this is the tech industry under a regulatory environment that fails to hold them accountable, the $150 million settlement with the DOJ doesn’t appear to have changed a damn thing about how this company does business, other than offering “$75 million worth of services for free to customers that qualify.” The judgment does not appear to require any changes to how Adobe does business, and $150 million amounts to roughly 0.345% of the $43.4 billion that Adobe made in 2024 and 2025. Adobe is a business that runs on rent-seeking, deception, and a monopoly over modern design software mostly built by people that no longer work there, such as John and Thomas Knoll, who won an Oscar in 2019 for scientific and engineering achievements for creating Photoshop along with Mark Hamburg, who left Adobe the same year .  
Adobe does not create things but extracts from those that do, exhibiting the most egregious and horrifying elements of the Rot Economy’s growth-at-all-costs avarice. While you may or may not like Photoshop, or Lightroom, or any other Adobe property, that’s mostly irrelevant to the glorified holding corporation that shoves different bits around every few months in the hopes that they can scrape another dollar from their captured audience.

Much of this comes from Adobe’s abominable subscription products, most notably (and I’ll get into it in more detail after the premium break) its Creative Cloud subscription, a rat king of different apps like Photoshop and InDesign and services like “Adobe Creative Community” and “generative credits” for AI services that are used to justify constant price increases and confusing product suite tweaks, all in the service of revenue growth.

All the while, Adobe’s net income has, for the most part, flattened out for the best part of two years at a seasonal range from $1.5 billion to $1.8 billion a quarter, all as the company debases its products, customers and brand in the filth of generative AI features that range from kind of useful to actively harmful to the creative process and have generated, at best, a couple hundred million dollars of revenue in the last two years.

I should also be clear that Adobe has an indeterminately-large enterprise division that includes marketing automation software like Marketo, which it acquired in 2018 for $4.75 billion, along with Magento, a different company that develops a software platform to run corporate eCommerce pages, all so it can do battle with Salesforce. CNBC’s Jim Cramer once called Salesforce and Adobe’s competition “one of the great rivalries in tech,” and he’s correct, in the sense that both companies love to buy other companies to prop up their revenues. Adobe has bought 61 of them since the 90s, but Salesforce has it beat at 75.
They’re also both devious, underhanded SaaSholes that make their money through rent-seeking and micro-monopolies. The business known as “Adobe” is a design platform, a photo editor, a PDF creation platform, an eCommerce platform, a marketing automation platform, a content management system, a marketing project management system, an analytics platform, and a content collaboration platform.

You do business with Adobe not because you want to, but because doing business at some point requires you to do so. Use PDFs regularly? You’re gonna use Acrobat. Need to edit an image? Photoshop. Run a design studio? You’re gonna pay for Creative Suite, and you’re gonna get a price increase at some point because you don’t really have any other options. Doing a lot of email marketing campaigns? You’re gonna use Marketo, whether you like it or not. Adobe’s “Digital Experience” vertical is effectively a holding corporation for Adobe’s acquisitions to help boost revenue, an ungainly enterprise limb that grabs companies and puts them in a big bag that says “money me money now” every year or two.

Put another way, one does not do business with Adobe. It has business done to it.

There’s also the “publishing and advertising” division that has made somewhere between $146 million and $300 million a year since 2019, most of which comes from abandoned products and, ironically, the product that originally made Adobe famous — PostScript, the language that underpins most of modern printing, whether directly or by inspiring the various other alternatives that emerged in the following decades. Adobe is a company that bathes in the scent of mediocrity, constantly doing an impression of an ever-growing business through a combination of acquisitions and price increases that are only possible in a global regulatory torpor and a market that doesn’t know when it’s being conned.
It’s also emblematic of how the modern software company grows — not through an honest exchange of value built on a bedrock of innovation and customer happiness, but through the eternal death march of enshittification of its products and monopolization of whatever fields it can barge its way into.

In many ways, Adobe is one of the greater tragedies of the Rot Economy. Beneath the endless layers of subscriptions and weird upsells and horrible Business Idiots lie beloved products like Photoshop, Illustrator and InDesign that are slowly decaying as Adobe seeks to boost engagement and revenue.

A great example is a story from Digital Camera World from 2025, where writer Adam Juniper talked about features he loved that were disappearing for no reason: Juniper found that Adobe had intentionally moved the speech bubble to an optional “legacy shapes and more” feature, all with the intent of pushing users to pay for (per Juniper) Adobe’s add-on Stocks subscription. In fact, a simple web search brings up user after user after user after user after user after user after user saying the same thing: that Adobe only ever seems to make its products worse, with the solution often being “find a way to revert to how things were done before the update” or “find another company to work with,” except Adobe’s scale and market presence make it near-impossible to compete.

Adobe even has the temerity to bug you with ads within its own products, nagging you with annoying pop-ups about new features or attempting to con you into a two-month-long trial of another piece of software using “in-product messaging” that’s turned on by default. These are all the actions of a desperate, greedy company run by people that don’t give a shit about their customers or the things they sell.
A few weeks ago, CEO Shantanu Narayen said that he was stepping down after 18 years in which he took Adobe, a company that built things that people loved, and turned it into a sleazy sales operation built on rent-seeking and other people’s innovation.

Those who don’t bother to read or know anything about software will tell you that the “threat of AI” or “the SaaSpocalypse” is killing Adobe — a convenient (and incorrect!) way to ignore that Adobe is only able to grow through acquisitions or price-hikes. The sickly irony is that acquisitions were always in Adobe’s blood from the very early days of Photoshop. It just used to be run by people who gave a fuck about whether software was good and customers were happy. In fact, I’m going to have a little rant about this.

I’m sick and tired of journalists from reputable outlets talking about “the threat of AI” to software companies without ever explaining what they mean or any of the economic effects involved. Adobe isn’t being killed by “AI.” We’re at the end of the hypergrowth era of software, and the only thing that grows forever is cancer. Vague talk of AI also gives executives like Narayen cover for running operations built on deceit, exploitation, extraction and capital deployment. Years of evaluating these companies entirely based on their revenues and imagined things like “the threat of AI” without any connection to actual fucking software makes the majority of the analysis of software entirely useless.

Nothing even really has to change about reporting. Just use the product! Use it and tell me how you feel. Talk to some customers. Spend more than 20 minutes on Facebook. Use Photoshop and tell me how many popups you get, or whether it inexplicably slows down or starts eating up RAM. You’ll quickly see that we’re in a crisis that’s less about AI and more about creating a tech industry powered by creating mediocre software and putting far more effort into making a business impossible to avoid.
Decades of this pseudo-journalism mean that a great many business reporters are simply unprepared to discuss what’s actually happening, evaluating software companies based on 10-Ks and shadows on the wall of a fucking cave. The tech industry has done a great job of scaring reporters into thinking that having a negative opinion is somehow “not supporting innovation,” and I want to be clear that refusing to criticize the tech industry is what’s actually stopping innovation. Letting these companies get away with ruining either the products they build or the products they buy is creating a climate in which the most successful companies are the ones that crowd out the competition and raise prices. Adobe’s growth has come from being a fucking asshole. Its decline has come from the limits of its ability to buy other companies and claim their revenues as its own while constantly increasing the price of its services. If there were a “threat from AI,” you’d actually be able to name it and point to it rather than referring to it like the Baba Fucking Yaga. I’m going to put it very, very bluntly: the last 15 years or so of tech earnings have been earned predominantly by fucking over the customer through either reducing the value of the product or increasing its price. The tech and business media’s lack of attention to the actual state of technology is partially to blame, because Number Has Always Gone Up, and thus the assumption was that the underlying product quality was raising that number rather than the company screwing over the customer. Wake up! Look at every tech product you’ve used and tell me if it’s improved in the last decade!
Facebook’s worse, email’s worse, browsers are either the same or worse, Google Search is worse, Adobe Creative Suite is worse, iPhones might seem better but the software is bloated with endless options and dropdowns and ads and nags, pretty much the only thing that’s improved is physical hardware because shipping bullshit, useless hardware is much, much harder. This total lack of awareness of the actual state of the world is why these companies have gotten away with so much shit over the years, and why so many of you are incapable of actually capturing this moment. You are not actually looking for what’s happening, just for what might comfortably fit your analysis of the world.  Vaguely blaming things on “the threat of AI” allows you to continue pretending everything will grow forever, and rationalize bad behavior by framing every problem through the lens of disruption and innovation. A company that’s on the decline “being disrupted by AI” allows you to believe that another company will grow and take its place . Saying that a company is growing revenue “because their AI bets are paying off” allows you to ignore price increases and deteriorating software, and think the world is a better place, even if you can only do so by living in a fantasy.  Gun to your head, what is the threat to software from AI? How is it manifesting, and who is the threat? Is it OpenAI? Anthropic? Are their products actually replacing anything? Can you prove that, or is this just something you heard enough people say that you’re now comfortable believing it?  The actual threat to software companies is their hatred of innovation and their customers, and what's happening to Adobe will eventually happen to them all.  Products that provide value are enshittified , and the products they acquire have been (or came pre-) enshittified. The prices have gone up. The nags to consumers have increased. 
Revenues have gone up because these companies have been allowed to buy effectively anyone they want — though Adobe was, thankfully, stopped from acquiring Figma — and increase prices whenever they want, and when it’s come time to evaluate the health or strength or actual value of these companies, all that anybody ever looks at is revenues. Perhaps your argument might be that the markets don’t care about how good something is, except the markets are influenced by journalism and financial analysts. The markets celebrate dogshit companies like Meta that make broken, harmful products because their disgusting monopolies allow them to brutalize businesses and consumers alike. What we’re seeing in the software industry are the limits of how much one can abuse a customer, a business model that SaaS enabled and both the tech media and analysts celebrated because it worked, in the sense that it worked at making the software companies rich. And because the people at the top have chased out anybody who knows what “good” looks like and empowered vacuous growth-perverts at every level, these companies have no idea what to do to stop the tide from coming in. Your argument might be that these companies couldn’t grow so fast without fucking customers over or making their products worse — and at that point you should ask yourself what you want the world to look like, and how willingly you’ve participated in making it look how it does today. The decline has yet to fully begin, but a CEO doesn’t suddenly decide to quit their company after 18 years during record results because the future looks bright. The real SaaSpocalypse is the comeuppance for decades of focusing businesses on growth by any means possible, and the hysterical non-analysis of blaming it on AI is a sign that those responsible can’t be bothered to live in anything other than the dreamworld of venture capital and Ivy League business schools.
Adobe’s story is a tragedy — the tale of the great things that can be done with software for the betterment of humanity, and how usurious Business Idiots can hijack it as a means of expressing eternal growth to the markets. This is The Hater’s Guide To Adobe, or The Adobe Enshittification Suite.

David Bushell Yesterday

404 Deno CEO not found

I visited deno.com yesterday. I wanted to know if the hundreds of hours I’d spent mastering Deno were a sunk cost. Do I continue building for the runtime, or go back to Node? Well, I guess that pretty much sums up why a good chunk of Deno employees left the company over the last week. Layoffs are what American corpo culture calls firing half the staff. Totally normal practice for a sustainable business. Mass layoffs are deemed better for the morale of those who remain than a weekly culling before Friday beers. The Romans loved a good decimation. † If I were a purveyor of slop and tortured metaphors, I’d have adorned this post with a deepfake of Ryan Dahl fiddling as Deno burned. But I’m not, so the solemn screenshot will suffice. † I read Rome, Inc. recently. Not a great book, I’m just explaining the reference. A year ago I wrote about Deno’s decline. The facts, undeterred by my subjective scorn, painted a harsh picture; Deno Land Inc. was failing. Deno incorporated with $4.9M of seed capital five years ago. They raised a further $21M series A a year later. Napkin math suggests a five year runway for an unprofitable company (I have no idea, I just made that up.) Coincidentally, after my blog post topped Hacker News — always a pleasure for my inbox — Ryan Dahl (Deno CEO) clapped back on the official Deno blog: There’s been some criticism lately about Deno - about Deploy, KV, Fresh, and our momentum in general. You may have seen some of the criticism online; it’s made the rounds in the usual places, and attracted a fair amount of attention. Some of that criticism is valid. In fact, I think it’s fair to say we’ve had a hand in causing some amount of fear and uncertainty by being too quiet about what we’re working on, and the future direction of our company and products. That’s on us. Reports of Deno’s Demise Have Been Greatly Exaggerated - Ryan Dahl Dahl mentioned that adoption had doubled following Deno 2.0.
Since the release of Deno 2 last October - barely over six months ago! - Deno adoption has more than doubled according to our monthly active user metrics. User base doubling sounds like a flex for a lemonade stand unless you give numbers. I imagine Sequoia Capital expected faster growth regardless. The harsh truth is that Deno’s offerings have failed to capture developers’ attention. I can’t pretend to know why — I was a fanboy myself — but far too few devs care about Deno. On the rare occasions Deno gets attention on the orange site, the comments page reads like in memoriam. I don’t even think the problem was that Deno Deploy, the main source of revenue, sucked. Deploy was plagued by highly inconsistent isolate start times. Solicited feedback was ignored. Few cared. It took an issue from Wes Bos, one of the most followed devs in the game, for anyone at Deno to wake up. Was Deploy simply a ghost town? Deno rushed the Deploy relaunch for the end of 2025 and it became “generally available” last month. Anyone using it? Anyone care? The Deno layoffs this week suggest only a miracle would have saved jobs. The writing was on the wall. Speaking of ghost towns, the JSR YouTube channel is so lonely I feel bad for linking it. I only do because it shows just how little interest some Deno-led projects mustered. JSR floundered partly because Deno was unwilling (or couldn’t afford) to invest in better infrastructure. But like everything else in the Deno ecosystem, users just weren’t interested. What makes a comparable project like NPMX flourish so quickly? Evidently, developers don’t want to replace Node and NPM. They just want what they already have but better; a drop-in improvement without friction. To Deno and Dahl’s credit, they recognised this with the U-turn on HTTP imports. But the resulting packaging mess made things worse. JSR should have been NPMX. Deno should have gone all-in on but instead we got mixed messaging and confused docs.
I could continue but it would just be cruel to dissect further. I’ve been heavily critical of Deno in the past but I really wanted it to succeed. There were genuinely good people working at Deno who lost their jobs, and that sucks. I hope the Deno runtime survives. It’s a breath of fresh air. B*n has far more bugs and compatibility issues than anyone will admit. Node still has too much friction around TypeScript and ECMAScript modules. So where does Deno go from here? Over to you, Ryan. Where is Deno CEO Ryan Dahl? Tradition dictates an official PR statement following layoffs. Seems weird not to have one prepared in advance. That said, today is Friday, the day to bury bad news. I may be publishing this mere hours before we hear what happens next… Given Dahl’s recent tweets and blog post, a pivot to AI might be Deno’s gamble. By the way, it’s rather telling that all the ex-employees posted their departures on Bluesky. What that tells you depends on whether you enjoy your social media alongside Grok undressing women upon request. I digress. Idle speculation has led to baseless rumours of an OpenAI acquisition. I’m not convinced that makes sense but neither does the entire AI industry. I’m not trying to hate on Dahl but c’mon bro, you’re the CEO. What’s next for Deno? Give me, users, anyone a reason to care. Although if you’re planning a 10× resurgence with automated Mac Minis, I regret asking. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

Brain Baking Yesterday

A Satisfied Customer Review Of The Yogurtia

And now for something completely different. For years, we’ve been happy users of the Yogurtia, a Japanese “fermented food maker”. That alone should sound enticing enough to warrant this small review! What’s a fermented food maker? I’m glad you ask. It’s a maker for food to ferment. Next question. In case that wasn’t crystal clear, here’s a common way we employ our Yogurtia: to make yoghurt. Shocking, given the name, right? There are plenty of mundane-looking kitchen appliances out there that can “make yoghurt” so why should you import a Japanese device instead? While researching yoghurt making machines, we often encountered contraptions you can put multiple small containers in that will be heated to 40 degrees Celsius for eight to twelve hours. Once it’s done, you pull out the containers and voilà: your very own yoghurt pots. The Yogurtia doesn’t do this. Instead, there’s one giant container where you pour in milk and remnants of your previous yoghurt. That means you can make much more in one go—but that also means you can more easily put in other stuff. The biggest reason for buying the Yogurtia is the capability to precisely configure the temperature and time it needs to ferment. Most basic yoghurt makers just come with an on/off switch. We can set it to 60 degrees instead of the usual 40 if we want to more easily ferment other stuff. Preparing breakfast with a freshly made yoghurt container thanks to the Yogurtia maker. Perhaps I should elaborate on the “other stuff”. While the Yogurtia obviously markets itself in the west towards yoghurt lovers, the real purpose of this neat little contraption is to make amazake and nattō. I’ve had great success with the former. To make amazake, you first need to grow a specific mold called koji on rice. Activating that koji is done at 60 degrees, which is too hot for most small fermentation chambers/yoghurt makers. I produce koji-fied rice in my fridge-hacked inoculation room.
A rice cooker that can be properly configured might be another option, but cheaper machines often have trouble maintaining the temperature, requiring you to add some cold water. If the temperature is too high, the koji will be killed off, resulting in a less sweet beverage, as the mold is responsible for breaking down the carbs of the rice into simple sugars. In a previous employer’s canteen, I was known as the amazake guy. I brought the smelly stuff to work for interested colleagues to try out, hoping to enthuse them to get started on fermenting stuff themselves. The result was met with mixed success: most people said yuck!, I got the label “the amazake guy”, and one time I forgot to take the canister out of the fridge at work. Or maybe the order is reversed here, that would certainly make more sense. I tried once more, spamming everyone to go out and buy Sandor Katz’ The Art of Fermentation bible. Then I tried bringing pickled stuff to work. More yuck! and what strange colour does that radish have? The one thing I didn’t try, which I’m making up for by writing this satisfied customer review, is convincing them to buy a Yogurtia. Maybe I should have done that instead. In Belgium, yoghurt is one of the few “fresh” fermented products almost everyone eats regularly (we’ll ignore cheese; sausages; wine; olives; and yes, even chocolate; … for now). Did you know you can use a spoonful of sourdough starter to jump-start the yoghurt making process? Did you know you can jump-start the bread rising process by using a spoonful of yoghurt? Food for thoug—no, a new blog post. A+++. Would buy again. (And did buy again. Never connect a Japanese electronic device that assumes 100 V directly to the European power grid of 230 V. Ouch. That plastic did melt good.) Related topics: / fermentation / By Wouter Groeneveld on 20 March 2026. Reply via email.

Jeff Geerling Yesterday

The best laptop Apple ever made

Today I posted a video titled The best laptop Apple ever made , and tl;dw 1 it's the 11" MacBook Air. I acknowledge in the video my pick is slightly subjective, and I also asked a number of other YouTubers which Mac laptop they consider the best (or at least most influential). If you don't want to watch the video, I'll summarize their choices here:

iDiallo Yesterday

Why Is Everyone Supposed to Die If Machines Can Think?

If you only listen to spokespersons for AI companies, you'll have a skewed view of how AI is actually being integrated into the workplace. You probably don't need to convince a developer to include it in their workflow, but you also can't dictate how they do so. Whenever I sit next to another developer during pair programming, I can't help but feel frustrated by their setup. But I don't complain, because they'd be just as annoyed with mine. The beauty of dev work is that all that matters is the output. If you use a boilerplate generator like , few will complain. If you use AI to generate the same code, as long as it works, no one will complain either. If the code is crafted with your own wetware, no one will be the wiser. Developers will use any tool at their disposal to increase their own productivity. But what happens when that thousand-dollar-per-developer-per-month subscription starts to feel expensive? What happens when managers expect a tenfold return on investment, yet sprint velocity doesn't budge? On one hand, new metrics are created to track developers' use of the tool, metrics which, in my experience, are highly inaccurate and vary wildly. On the other hand, companies are using AI as justification for laying off workers. So which metric is to be trusted? AI isn't simply a solution in search of a problem. It's quite useful. One person will tell you it's great for writing tests, another will praise it for writing utility functions, and another will use it to better understand a requirement. Each is a valid use case. But the question managers keep asking is: "Can we use AI instead of hiring another dev?" I'm not sure what is supposed to happen if we achieve so-called AGI. Does it mean I no longer have to do code reviews? Is it AGI when the AI stops hallucinating? My shower-thought answer: AGI is an AI that can say "I don't know" when it doesn't know the answer. But I don't think Sam Altman sees that as a selling point.
Why are we supposed to die if a machine can think? Every time someone raises this argument, I think of Thanos. In the Avengers saga, he kills half of all living beings in the universe. It's an act so total and irreversible that the writers had to bend time itself to undo it. And still, fifteen movies later, the franchise keeps going. Each new antagonist has to threaten something, but nothing lands the same way. You already saw the worst. The scale is broken. The villain is a terrorist from an unnamed country? Gimme a break. That's what the AI extinction narrative has done to the conversation about AI. By opening with the end of the world, it made every practical concern feel small by comparison. Who wants to talk about sprint velocity and hallucinated function calls when we're supposedly staring down an existential threat? So we don't. We argue about the apocalypse instead. Meanwhile, I am debugging a production incident at 2am, in a codebase that has never once tried to kill me, but has absolutely tried to ruin my weekend. The reality is quite different from the drama that unfolds online. The longer this AI craze continues, the less I believe we're headed for a dramatic bubble pop. Instead, I think the major players will try to bully their way out of one. And that bullying is already happening on at least three fronts: language, narrative, and money. Microsoft is leading the language crackdown. They are rounding up critics in their own Copilot Discord servers, banning users who use the now-deemed-derogatory term "Microslop." Nvidia is publicly asking people to stop using the phrase "AI slop." These aren't isolated incidents of corporate thin skin. They are coordinated attempts to police the vocabulary we use to criticize the technology. Control the language, and you go a long way toward controlling the conversation. When you can't call a thing what it is, it becomes harder to argue that the thing exists at all.
On the narrative front, we are told every day that AI is good, innovative, and inevitable. Then we're told it's going to take our jobs. And at the same time, we're told it's an existential threat that could wipe us off the planet. It is simultaneously the best thing that could ever happen to humanity and the worst. I'm reminded of "War is peace, freedom is slavery, ignorance is strength," as George Orwell put it. It's a cognitive trap. When a technology is framed as both savior and apocalypse, the questions regular people ask are seen as mundane. We can't ask: "Does it work? Is it worth the cost? Are we actually benefiting from this?" Instead, we spend our energy arguing about the end of the world, and the companies keep burning through cash while the narrative burns through our attention. On the money front, we all witnessed it firsthand with the fiasco involving Anthropic, OpenAI, and the Department of Defense. People were quick to sort the players into the good guys, the bad guys, and the ugly. But to me, it looked like a dispute designed to obscure the problem that has plagued AI companies from the very beginning: they need to make money. It doesn't matter if a company generates $20 billion a year when its operating costs double annually. They're still in the red. Anthropic was making a grand stand, positioning itself as the principled actor fighting against the US war machine. At the same time, they had no issue working with Palantir, a company that makes no secret of its commitment to mass surveillance and its role in powering the machinery of war. Meanwhile, OpenAI is struggling with its own financial stability. They've just launched ads on their platform, something Sam Altman once described as a last resort. When you're in the red and a customer is willing to pay, principles become a luxury you can do without.
Given their history of bending copyright law and converting to a for-profit entity, it's naive to assume there aren't other principles they would bend as well. They quickly jumped into the DoD deal, scooping up a $200 million contract to replenish their coffers. There was one detail in Anthropic's statement that deserved more attention than it got: We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. In other words: surveilling citizens is immoral. If you're a non-citizen or a foreigner, you're on your own. So right now, AI companies are hemorrhaging money, policing the words we use to criticize them, manufacturing existential dread to crowd out any skepticism, and taking defense contracts while performing ethical restraint. And somewhere in the middle of this, we're supposed to believe that only they can save us. When you're losing money but need to maintain the illusion of infinite growth, you don't wait for the market to correct you. You make the bubble burst feel not just unlikely, but unthinkable. You bully the language, inflate the stakes, and monetize the fear. As individuals, what are we supposed to do with the useful part of the technology? It helps me write tests. It helps my colleagues parse requirements. Used without hype and within realistic expectations, it is actually a good tool. But "a good tool" doesn't justify the valuations, the layoffs-as-euphemism, the defense contracts, or the Discord bans. It doesn't sustain the mythology that has been built around it. That gap between the tool that exists and the revolution that was promised is precisely what the bullying is designed to keep you from looking at too closely. I still struggle to answer managers who ask me to justify the team's use of the tool. I never had to justify my IDE, or my secret love affair with tmux before.
For now all I can tell them is: "it's useful, within limits, and that should be enough." It won't be what they want to hear. But it's more than the industry has managed to say about itself.


Melanie Richards

This week on the People and Blogs series we have an interview with Melanie Richards, whose blog can be found at melanie-richards.com/blog . Tired of RSS? Read this in your browser or sign up for the newsletter . People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. I’m a Group Product Manager co-leading the core product at Webflow, i.e. helping teams visually design and build websites. My personal mission is to empower people to make inspiring, impactful, and inclusive things on the web. That’s been the through line of my career so far: I started out as a designer at a full-service agency called Fuzzco, moved to the web platform at Microsoft Edge, continued building for developers at Netlify, and am now aiming to make web creation even more democratic with the Webflow platform. I transitioned from design to product management while at Microsoft Edge. I wanted to take part in steering the future of the web platform, instead of remaining downstream of those decisions. I feel so lucky to have worked on new features in HTML, ARIA, CSS, and JavaScript with other PMs and developers in the W3C and WHATWG. I’m a builder at heart, so I love to work on webby side projects as well as a whole bevy of analog hobbies: knitting, sewing, weaving, sketchbooking, and journaling. I have a couple primary blogs right now: From 2013–2016 I also had a blog and directory called Badass Lady Creatives (wish I had spent more than five minutes on the name, haha). This featured women who were doing cool things in various “creative” industries. At the time it seemed like every panel, conference lineup, and group project featured all or mostly dudes. The blog was a way to push back on that a little bit and highlight people who were potentially overlooked. Since then gender representation (for one) seems to have gotten a bit better in these industries. 
But the work and joy of celebrating diverse, inspiring talent is never done! Big “yeet to production” vibes for me! I use Obsidian to scribble down my thoughts and write an initial draft. Obsidian creates Markdown files, so I copy and paste those into Visual Studio Code (my code editor), add some images and make some tweaks, and then push to production. I really try not to overthink it too much. However, I will admit that I have a ton of drafts in Obsidian that never see the light of day. It can be cathartic enough just to scribble it down, even if I never publish the thought. For my Learning Log posts, I use a Readwise => Obsidian workflow I describe in this blog post. Reader by Readwise is the app where I store and read all my RSS feeds and newsletter forwards. “Parallel play” is the biggest, most joyful boon to my creativity. I love to be in the company of others as we independently work on our own projects side by side. There’s a delicate balance when it comes to working on creative projects socially. For example, my mom, my aunt, and I often have Sew Day over FaceTime on Sundays. Everyone’s pretty committed to what they’re working on, so it’s easy to sew and talk and sing (badly 😂) at the same time. I also used to go to a local craft night that very sadly disbanded when the host shop changed hands. For writing or coding, that takes a bit more mental focus for me. I started a Discord server with a few friends, which is dedicated to working on blog posts and side projects. We meet up once a month to talk about our projects (and shoot the breeze, usually about web accessibility and/or the goodness of dogs). Then we all log off the voice channel to go do the thing! Both of these blogs use Eleventy and plain ol’ Markdown, and are hosted on Netlify. Some of my other side projects use a content management system (CMS) like Webflow’s CMS, or Contentful + Eleventy. Again, Webflow is my current employer.
I use a Netlify form for comments on my “Making” blog, and Webmentions for my main blog. I will probably pull out Webmentions from that code base: conceptually they’ve never really “landed” for me, and it would be nice to delete a ton of code. I generally like my setup, though sometimes I think about migrating my “Making” blog onto a CMS. As far as CMSes go, I quite like Webflow’s: it’s straightforward and has that Goldilocks level of functionality for me. Some other CMSes I’ve tried have felt bloated yet seemed to miss obvious functionality out of the box. I have a Bookshop.org affiliate link and it took me several years to meet the $20 minimum payout so…yeah I’ve never truly monetized my blogging! I find there’s freedom in giving away your thoughts for free. As far as costs go, I have pretty low overhead: just paying for the domain name. I’m fine with other folks monetizing personal blogs, though of course there’s a classy and not-classy way to do so. If monetizing is what keeps bloggers’ work on the open web, on sites they own and control, I prefer that over monetizing through walled gardens. Related: Substack makes it easy to monetize but there are some very compelling reasons to consider alternatives. This is highly topical: I’m currently scheming about a directory site listing “maker” blogs! So many communities in the visual arts and crafts are stuck on social media platforms they don’t even enjoy, beholden to the whims of an algorithm. I’d like to connect makers in a more organic way. If you’re a crafter who would like to be part of this, feel free to fill out this Google form ! Now that you're done reading the interview, go check the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 133 interviews . People and Blogs is possible because kind people support it. melanie-richards.com/blog, simply the blog that lives at my main website. 
I post here about the web, design, development, accessibility, product management, etc. One practice I’ve been keeping for a few years now is my monthly Learning Log. These posts are a compendium of what I’ve been shipping or making, what I’ve been learning, side quests, neat links around the internet, and articles I’ve been reading. When I’m in a particularly busy period (as was the case in 2025; my first child was born in September), this series is my most consistent blogging practice. making.melanie-richards.com: this is the blog where I post about my aforementioned analog projects. Quite a lot of sewing over the past year! Mandy Brown, Oliver Burkeman (technically a newsletter with a “view on web” equivalent), and Ethan Marcotte’s writing have been helping to fill my spiritual cup over the last couple years. Anh and Katherine Yang are doing neat things on their sites. What Claudia Wore is a nostalgic pick; I’d love to recreate some of these outfits sometime. Thank you Kim for keeping the blog up! Sarah Higley would be a great next interview. She blogs less frequently, but always at great depth and thoughtfulness on web accessibility. Web developers can learn quite a lot about more involved controls and interactions from Sarah.

Kev Quirk Yesterday

Three Men Tried To Steal My Motorbike While I Was On It!

Spring is in the air here in North Wales, so I decided to take one of my motorbikes to the office yesterday. On the way home, not too far from where I live, I was sat at traffic lights when all of a sudden three men on off-road bikes surrounded me. One left, one right, one right up to my back wheel. And they were really close, like, inches from me kinda close. I immediately felt uneasy, like something was about to happen. I think we as a species have a sense for this kinda thing. Anyway, seconds later the guy on my left reached over to, I assume, grab the keys for the bike, but I was on the BMW, which has a keyless ignition, luckily. I clocked what the guy was trying to do, panicked, and kicked the side of his bike as hard as I could. Which, thankfully, was enough to put him off balance, causing him to topple over. Then I clobbered the guy to the right around the head - he was wearing a helmet so it wouldn't have hurt him, but I suppose I figured it would be enough to shock him and buy me a couple of seconds. I dunno, I was basically shitting my pants at this point. As soon as I'd hit the guy to my right, I took off like the absolute clappers, running a red light in the process (thank goodness nothing was coming the other way). My BMW is a fast bike, at 1000cc and over 170BHP. They were on dirt bikes, which are nowhere near as quick as mine. I also had knowledge of the local roads, which I hoped they didn't. As I flew off, they gave chase but quickly dropped back. A brief glance at my speedo showed I was doing over 120MPH, but it was working. In my panic I didn't know what to do - shall I go home? What if they see me pull in and find out where I live? Should I go somewhere else? But it's rush hour and if I get caught in traffic they could catch up to me again - my bike is a lot quicker on the open road, but in traffic, they would have the advantage. I decided to floor it and get home as quick as possible.
There's a straight road that leads to my village, so I figured if I can't see them behind I'll quickly swing the bike in and hide behind the garage (which can't be seen from the road). If I could, I'd just carry on and continue trying to lose them. I'm nearing my drive now, so I glance in the mirrors and see nothing; I decide to risk it and swing in, going up our gravel drive as quickly as I dare, while simultaneously hoping the kids aren't playing in the drive. They aren't. I dive in behind the garage and wait...5 seconds...10 seconds...I hear bikes getting closer. They fly right past my drive, going way too fast for our single track village road. My wife later asked the owner of the village pub if he caught anything on his CCTV. Here's what he found for us: It looks like only 1 of them had a number plate, and it's pretty much parallel with the road, so impossible to identify from the video. We've passed it on to the police and we're waiting to hear back from their forensics dept. to see if they can pick up any prints from my bike. I don't remember if they had gloves on though, and I'm not very confident it will come to anything. I'm fine now, but it shook me up. I just hope they were opportunist idiots, rather than something more sinister. I've already bought myself a camera for the garage. Stay safe out there, folks. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

neilzone Yesterday

Moving (for now?) from HomeAssistant in Python venvs to HomeAssistantOS

I have used HomeAssistant for years . So many years, that I do not remember how many. Nothing I do with it is particularly fancy, but things like having my office lights turn on when I open the door if the light is below a certain luminosity, or turning off my Brompton bike charger once it has finished charging, are fun and convenient. We also have solar panels and a battery now, so I will be interested to see if I use HomeAssistant more for that. But anyway. I have been using HomeAssistant, on a Raspberry Pi 4, using Python venvs for years. It has worked absolutely fine for me, and I have (or, at least, had) no compelling reason to change. For me, this was the ideal setup, in that I could set the Pi up how I wanted, in terms of security and monitoring, and just run HomeAssistant on it. Updating HomeAssistant was as easy as running a simple bash script. I liked it. But… that approach is no longer supported, and, where possible, I prefer to use supported means of running software. That means either running HomeAssistantOS, or else using a containerised instance of HomeAssistant. While I could probably find my way through setting up a HomeAssistant container via podman, it would not be my preference, so I decided to give HomeAssistantOS a go, albeit with some trepidation. As expected, it was easy to install HAOS: write the image to a microSD card, and pop it into the Pi. I already had the switch port set up to the right VLAN, so I plugged in the Pi and waited a few minutes. I had anticipated that it would offer https, via a self-signed certificate, so I was a bit baffled to get a TLS error when I connected to it. “Never mind”, I thought. “I’ll just ssh into it and sort it out.” But no, no ssh either. Fortunately, I discovered quite quickly that, out of the box, it does not offer TLS, and I was able to access the web interface. I had taken a backup from my existing HomeAssistant installation, and I used the web interface on the new installation to restore it. 
It took a few minutes, but restored absolutely everything. I was impressed. I was anticipating - indeed, hoping - to set up TLS and reverse proxying using certbot and nginx. But that is not possible. Instead, I achieved it (reasonably easily, but not as easily as using a command line) via Add-ons from within the HomeAssistant UI. I’d have preferred to do it the normal way, via ssh, but oh well. Annoyingly, I’d also like to have configured a firewall on the machine, but that is not an option either. I’ve yet to determine if that is going to be a dealbreaker for me, or whether relying on the network-level firewall, controlling access to and from that VLAN, and that machine, will be sufficient. I have also not been able to set up a separate ssh account for my greenbone scanning software, or to configure Wazuh to get the machine talking to my SIEM. Again, I will need to consider the impact of this, but intuitively it does not sit comfortably with me. Nor can I find a way to use restic to back up the configuration and other bits, incrementally and automatically, onto another machine, like I am used to doing. I will have a poke around with the backup tooling offered but again, this does not enthral me. I want to know that, if there’s a problem, I have a backup on my restic server. Since I have used HomeAssistant for so long, and since I just restored a backup, the most I can say really is that it is all still working. It doesn’t seem faster or slower. The limitations of the appliance-based approach are annoying me, and may be sufficient to drive me towards a container-based approach instead (although that does not appeal to me either). Ultimately, I accept that I am but one user, and perhaps many users do not want the things that I want. Importantly, I am not the developer, and so what I want may simply not be things that they wish to provide. And that is their choice. I guess - personal opinion - that I would prefer a computer and not an appliance.


Feds Disrupt IoT Botnets Behind Huge DDoS Attacks

The U.S. Justice Department joined authorities in Canada and Germany in dismantling the online infrastructure behind four highly disruptive botnets that compromised more than three million Internet of Things (IoT) devices, such as routers and web cameras. The feds say the four botnets — named Aisuru, Kimwolf, JackSkid and Mossad — are responsible for a series of recent record-smashing distributed denial-of-service (DDoS) attacks capable of knocking nearly any target offline. Image: Shutterstock, @Elzicon. The Justice Department said the Department of Defense Office of Inspector General’s (DoDIG) Defense Criminal Investigative Service (DCIS) executed seizure warrants targeting multiple U.S.-registered domains, virtual servers, and other infrastructure involved in DDoS attacks against Internet addresses owned by the DoD. The government alleges the unnamed people in control of the four botnets used their crime machines to launch hundreds of thousands of DDoS attacks, often demanding extortion payments from victims. Some victims reported tens of thousands of dollars in losses and remediation expenses. The oldest of the botnets — Aisuru — issued more than 200,000 attack commands, while JackSkid hurled at least 90,000 attacks. Kimwolf issued more than 25,000 attack commands, the government said, while Mossad was blamed for roughly 1,000 digital sieges. The DOJ said the law enforcement action was designed to prevent further infection of victim devices and to limit or eliminate the ability of the botnets to launch future attacks. The case is being investigated by the DCIS with help from the FBI’s field office in Anchorage, Alaska, and the DOJ’s statement credits nearly two dozen technology companies with assisting in the operation.
“By working closely with DCIS and our international law enforcement partners, we collectively identified and disrupted criminal infrastructure used to carry out large-scale DDoS attacks,” said Special Agent in Charge Rebecca Day of the FBI Anchorage Field Office. Aisuru emerged in late 2024, and by mid-2025 it was launching record-breaking DDoS attacks as it rapidly infected new IoT devices. In October 2025, Aisuru was used to seed Kimwolf, an Aisuru variant which introduced a novel spreading mechanism that allowed the botnet to infect devices hidden behind the protection of the user’s internal network. On January 2, 2026, the security firm Synthient publicly disclosed the vulnerability Kimwolf was using to propagate so quickly. That disclosure helped curtail Kimwolf’s spread somewhat, but since then several other IoT botnets have emerged that effectively copy Kimwolf’s spreading methods while competing for the same pool of vulnerable devices. According to the DOJ, the JackSkid botnet also sought out systems on internal networks just like Kimwolf. The DOJ said its disruption of the four botnets coincided with “law enforcement actions” conducted in Canada and Germany targeting individuals who allegedly operated those botnets, although no further details were available on the suspected operators. In late February, KrebsOnSecurity identified a 22-year-old Canadian man as a core operator of the Kimwolf botnet. Multiple sources familiar with the investigation told KrebsOnSecurity the other prime suspect is a 15-year-old living in Germany.


Some Things Just Take Time

Trees take quite a while to grow. If someone 50 years ago planted a row of oaks or a chestnut tree on your plot of land, you have something that no amount of money or effort can replicate. The only way is to wait. Tree-lined roads, old gardens, houses sheltered by decades of canopy: if you want to start fresh on an empty plot, you will not be able to get that. Because some things just take time. We know this intuitively. We pay premiums for Swiss watches, Hermès bags and old properties precisely because of the time embedded in them. Either because of the time it took to build them or because of their age. We require age minimums for driving, voting, and drinking because we believe maturity only comes through lived experience. Yet right now we also live in a time of instant gratification, and it’s creeping into how we build software and companies. As much as we can speed up code generation, the real defining element of a successful company or an Open Source project will continue to be tenacity. The ability of leadership or the maintainers to stick to a problem for years, to build relationships, to work through challenges fundamentally defined by human lifetimes. The current generation of startup founders and programmers is obsessed with speed. Fast iteration, rapid deployment, doing everything as quickly as possible. For many things, that’s fine. You can go fast, leave some quality on the table, and learn something along the way. But there are things where speed is actively harmful, where the friction exists for a reason. Compliance is one of those cases. There’s a strong desire to eliminate everything that processes like SOC2 require, and an entire industry of turnkey solutions has sprung up to help — Delve being just one example; there are more. There’s a feeling that all the things that create friction in your life should be automated away. That human involvement should be replaced by AI-based decision-making.
Because it is the friction of the process that is the problem. When in fact many times the friction, or that things just take time, is precisely the point. There’s a reason we have cooling-off periods for some important decisions in one’s life. We recognize that people need time to think about what they’re doing, and that doing something right once doesn’t mean much because you need to be able to do it over a longer period of time. AI writes code fast, which isn’t news anymore. What’s interesting is that we’re pushing this force downstream: we seemingly have this desire to ship faster than ever, to run more experiments, and that creates a new desire, one to remove all the remaining friction of reviews, designing and configuring infrastructure, anything that slows the pipeline. If the machines are so great, why do we even need checklists or permission systems? Express desire, enjoy result. Because we now believe it is important for us to just do everything faster. But increasingly, I also feel like this means that the shelf life of much of the software being created today — software that people and businesses should depend on — can be measured only in months rather than decades, and the relationships alongside. In one of last year’s earlier YC batches, there was already a handful that just disappeared without even saying what they learned or saying goodbye to their customers. They just shut down their public presence and moved on to other things. And to me, that is not a sign of healthy iteration. That is a sign of breaking the basic trust you need to build a relationship with customers. A proper shutdown takes time and effort, and our current environment treats that as time not wisely spent. Better to just move on to the next thing. This is extending to Open Source projects as well. All of a sudden, everything is an Open Source project, but many of them only have commits for a week or so, and then they go away because the motivation of the creator has already waned.
And in the name of experimentation, that is all good and well, but what makes a good Open Source project is that you think and truly believe that the person who created it is either going to stick with it for a very long period of time, or they are able to set up a strategy for succession, or they have created enough of a community that these projects will stand the test of time in one form or another. Relatedly, I’m also increasingly skeptical of anyone who sells me something that supposedly saves my time. When all that I see is that everybody who is like me, fully onboarded into AI and agentic tools, seemingly has less and less time available because we fall into a trap where we’re immediately filling it with more things. We all sell each other the idea that we’re going to save time, but that is not what’s happening. Any time saved gets immediately captured by competition. Someone who actually takes a breath is outmaneuvered by someone who fills every freed-up hour with new output. There is no easy way to bank the time and it just disappears. I feel this acutely. I’m very close to the red-hot center of where economic activity around AI is taking place, and more than anything, I have less and less time, even when I try to purposefully scale back and create the space. For me this is a problem. It’s a problem because even with the best intentions, I actually find it very hard to create quality when we are quickly commoditizing software, and the machines make it so appealing. I keep coming back to the trees. I’ve been maintaining Open Source projects for close to two decades now. The last startup I worked on, I spent 10 years at. That’s not because I’m particularly disciplined or virtuous. It’s because I, or someone else, planted something, and then I kept showing up, and eventually the thing had roots that went deeper than my enthusiasm on any given day. That’s what time does!
It turns some idea or plan into a commitment and a commitment into something that can shelter and grow other people. Nobody is going to mass-produce a 50-year-old oak. And nobody is going to conjure trust, or quality, or community out of a weekend sprint. The things I value most — the projects, the relationships, the communities — are all things that took years to become what they are. No tool, no matter how fast, was going to get them there sooner. We recently planted a new tree with Colin. I want it to grow into a large one. I know that’s going to take time, and I’m not in a rush.

Ash's Blog Yesterday

NumKong: 2'000 Mixed Precision Kernels For All 🦍

Around 2'000 SIMD kernels for mixed-precision BLAS-like numerics — dot products, batched GEMMs, distances, geospatial, ColBERT MaxSim, and mesh alignment — from Float6 to Float118, leveraging RISC-V, Intel AMX, Arm SME, and WebAssembly Relaxed SIMD, in 7 languages and 5 MB.

Kaushik Gopal Yesterday

Podsync - I finally built my podcast track syncer

I host and edit a podcast 1 . When recording remotely, we each record our own audio locally (I on my end, my co-host on his). The service we use (Adobe Podcast, Zoom, Skype-RIP) captures everyone together as a master track. But the quality doesn’t match what each person records locally with their own microphone. So we use that master as a reference point and stitch the individual local tracks together. This is what the industry calls a “ double-ender ”. Add a guest and it becomes a “triple-ender”. But this gets hairy during editing. Each person starts their recording at a slightly different moment — everyone hits record at a different time. Before I can edit, I need to line everything up. Drop all the tracks into a DAW, play the master alongside each individual track, nudge by ear until the speech aligns. Add a guest and it gets tedious fast. 10–15 minutes of fiddly, ear-straining alignment before I’ve even started editing. There’s also drift. Each machine’s audio clock runs at a slightly different rate, so two tracks that are perfectly aligned at minute one might be 200ms apart by minute sixty. So I built PodSync 2 . I first heard of a similar technique from Marco Arment — back in ATP episode 25 . He had a new app for aligning double-ender tracks and was already thinking about whether something so niche was even worth releasing publicly. I don’t think he ever released it. Being a Kotlin developer at the time, I figured I’d build my own. Java was mature. Surely there were audio processing libraries that could handle this. There weren’t 😅. At least not in any clean, usable form. Getting the right signal processing pieces together in JVM-land was awkward enough that my interest fizzled, so I kept doing it by hand. When I revamped Fragmented , I finally came back to this. I used Claude to help me build it — in Rust, no less. 3 But before you chalk this up to another vibecoded project, hear me out. The interesting part here wasn’t just that AI made it easier. 
It was thinking through the actual algorithm: Voice activity detection (VAD) to find speech regions. MFCC features to fingerprint the audio. Cross-correlation to find where the tracks match. Some real signal processing techniques, not just prompt engineering. Now, could I have prompted my way to a solution? Probably. But I like to think years of manually aligning tracks — and some sound engineering intuition — helped me steer AI towards a better solution. Working on this felt refreshing. In an era where half the conversation is about AI replacing engineering work, here’s a problem where the hard part is still the problem itself — understanding the domain, picking the right approach, knowing what “correct” sounds like. It gives me confidence that solving real problems well still has its place. I like how Dax (thdxr on Twitter) put it: “I really don’t care about using AI to ship more stuff. It’s really hard to come up with stuff worth shipping.” The core idea: take a chunk of speech from a participant track, compare it against the master recording, find where they match best. That position is the time offset. The trick is picking which chunk of speech to use. Rather than betting on a single region, Podsync finds a few strong candidates per track (longer contiguous speech blocks preferred) and tries each one against the master. For long candidates, it samples from the start, middle, and end. The highest-confidence match wins; if a second independent region agrees on the same offset, that corroboration factors in as a tie-breaker. After finding the offset, Podsync pads or trims each track to align with the master and match its length (and outputs some info on the offset). Drop the output into my DAW at 0:00. Done. I even wrote an agent skill you can just point your agent harness to and it will take care of all the steps for you. What used to be 10–15 minutes of alignment per episode is now a single command.
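The lag-finding step at the heart of this can be sketched in a few lines. This is a toy illustration only, not Podsync’s actual code: the real tool is written in Rust and correlates MFCC fingerprints of VAD-detected speech regions rather than raw samples, and the function name here is made up for the example.

```python
import numpy as np

def find_offset_seconds(master: np.ndarray, track: np.ndarray, sample_rate: int) -> float:
    """Estimate how many seconds into `master` the `track` recording begins.

    Toy version: correlates raw samples. A robust aligner would correlate
    MFCC fingerprints of detected speech regions instead, which tolerates
    different microphones and room acoustics far better.
    """
    # Full cross-correlation of the two signals; the peak marks the lag
    # at which the track lines up best with the master.
    corr = np.correlate(master, track, mode="full")
    lag = int(np.argmax(corr)) - (len(track) - 1)
    return lag / sample_rate

# Synthetic check: a "participant" track that starts 1.5 s into the master.
sr = 1000
master = np.random.default_rng(0).standard_normal(5 * sr)
track = master[int(1.5 * sr):].copy()
offset = find_offset_seconds(master, track, sr)
```

A positive result means the participant hit record after the master started, so the track gets that much silence padded onto its front; a negative result means it gets trimmed.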
Marco, if you ever read this — I’d still love to see your implementation! His solution (as I understand) is aimed more at correcting the drift vs getting the offset right. In practice, I haven’t found drift to be much of a problem. It exists but stays minor, and I’m typically editing every second of the podcast anyway so it’s easy enough to handle by hand. I even had a branch that corrected drift by splicing at silence points, but it complicated things more than it helped. It’s a podcast on AI development but we strive to make it high signal. None of that masturbatory AI discourse.  ↩︎ See also Phone-sync.  ↩︎ I chose Rust (it’s what interests me these days) and a CLI tool with no runtime dependency is more pleasant to distribute.  ↩︎

Marc Brooker Yesterday

My heuristics are wrong. What now?

More words. More meaning? Some people who ask me for advice at work get a lot of words in reply. Sometimes, those responses aren’t specific to my particular workplace, and so I share them here. In the past, I’ve written about echo chambers, writing, writing for an audience, time management, and getting big things done. Do you remember Cool Runnings? In the movie, John Candy is a retired bobsled champion, who uses his experience, connections, and lovable curmudgeon character to turn a rag-tag group of sprinters into an Olympic bobsled team. A lot of principal engineer types think of themselves this way: they used to bobsled, they don’t bobsled, but they still know the skills and the people and the equipment. And that worked well enough, while we were still bobsledding. But we’re not bobsledding anymore. Many of the heuristics that we’ve developed over our careers as software engineers are no longer correct. Not all of them. But many. What it means for a system to be maintainable. How much it costs to write code versus integrate libraries versus take service dependencies. What it means for an API to be well designed, or ergonomic, or usable. What it means to understand code. Where service boundaries should be. Where security and data integrity should be enforced. What’s easy. What’s hard. We’ve seen this play out in small ways before. Over the last decade, I’ve frequently been frustrated by experienced folks who didn’t update their system design heuristics to match the cloud, to match SSDs, to match 100Gb/s networks, and so on. But this is the biggest change I’ve seen in my career by far. An extinction-level event for rules of thumb. But you’re a tech leader, and you need to lead, and leading is heavily based on using your experience to help people and teams be more effective. What now? The victorious man in the day of crisis is the man who has the serenity to accept what he cannot help and the courage to change what must be altered.
1 Let me assume that you want to continue to be a valuable tech leader. You want your teams and organizations to succeed. That you’re willing to sound less smart and less sure, in the interests of being right and helpful. In that case, and I hope that is the case, your job has changed. Your job, for the foreseeable future, is to have the humility to accept that many of your heuristics are wrong, the courage to believe some are still right, and the curiosity to actively learn the difference. You can’t throw out everything you know. Your taste, your high standards, your understanding of your business and customers and the deep technical trade-offs in your area are more valuable than ever before. This is like that fantasy that people have of going back to middle school knowing all the things they know now 2 . You’re ahead of the pack in many ways. But you also need to really deeply question the things you know, and the things you assume. Before you share one of your rules of thumb, you need to deeply examine whether it’s still right. And the way you’re going to know that, right now, is by getting back on the ice. Build. Own. Get your hands dirty and use the tools. Build something real. Build a prototype. Build a thousand little experiments in an afternoon. Challenge yourself to try to do something you previously would have assumed is impossible, or infeasible, or unaffordable. Find one of the ways that you’re worried that the new tools are going to lead to trouble, and actively fix it. Then examine the things you’re learning. Update your constants. Over the next couple of years, the most valuable people to have on a software team are going to be experienced folks who’re actively working to keep their heuristics fresh. Who can combine curiosity with experience. Among the least valuable people to have on a software team are experienced folks who aren’t willing to change their thinking. Beyond that, it’s hard to see. This is going to be hard for some folks.
It’s hard to admit where you’re wrong. It’s hard to go back to being a beginner. It’s easy to stick your fingers in your ears and say “No, it’s the children who are wrong”. My advice is to not be that guy. The good news? It’s as fun as hell. Get building, get learning, make something exist that you couldn’t imagine before. Winnifred Crane Wygal paraphrasing Reinhold Niebuhr A fantasy I have never understood. Being 13 once was enough for a lifetime, thank you very much.


SQLAlchemy 2 In Practice - Chapter 1 - Database Setup

Welcome! This is the start of a journey which I hope will provide you with many new tricks to improve how you work with relational databases in your Python applications. Given that this is a hands-on book, this first chapter is dedicated to helping you set up your system with a database, so that you can run all the examples and exercises. This is the first chapter of my SQLAlchemy 2 in Practice book. If you'd like to support my work, I encourage you to buy this book, either directly from my store or on Amazon. Thank you!

Simon Willison 2 days ago

Thoughts on OpenAI acquiring Astral and uv/ruff/ty

The big news this morning: Astral to join OpenAI (on the Astral blog) and OpenAI to acquire Astral (the OpenAI announcement). Astral are the company behind uv, ruff, and ty - three increasingly load-bearing open source projects in the Python ecosystem. I have thoughts! The Astral team will become part of the Codex team at OpenAI. Charlie Marsh has this to say: Open source is at the heart of that impact and the heart of that story; it sits at the center of everything we do. In line with our philosophy and OpenAI's own announcement, OpenAI will continue supporting our open source tools after the deal closes. We'll keep building in the open, alongside our community -- and for the broader Python ecosystem -- just as we have from the start. [...] After joining the Codex team, we'll continue building our open source tools, explore ways they can work more seamlessly with Codex, and expand our reach to think more broadly about the future of software development. OpenAI's message has a slightly different focus (highlights mine): As part of our developer-first philosophy, after closing OpenAI plans to support Astral’s open source products. By bringing Astral’s tooling and engineering expertise to OpenAI, we will accelerate our work on Codex and expand what AI can do across the software development lifecycle. This is a slightly confusing message. The Codex CLI is a Rust application, and Astral have some of the best Rust engineers in the industry - BurntSushi alone (Rust regex, ripgrep, jiff) may be worth the price of acquisition! So is this about the talent or about the product? I expect both, but I know from past experience that a product+talent acquisition can turn into a talent-only acquisition later on. Of Astral's projects the most impactful by far is uv. If you're not familiar with it, uv is by far the most convincing solution to Python's environment management problems, best illustrated by this classic XKCD. Switch from pip to uv and most of these problems go away.
I've been using it extensively for the past couple of years and it's become an essential part of my workflow. I'm not alone in this. According to PyPI Stats uv was downloaded more than 126 million times last month! Since its release in February 2024 - just two years ago - it's become one of the most popular tools for running Python code. Astral's two other big projects are ruff - a Python linter and formatter - and ty - a fast Python type checker. These are popular tools that provide a great developer experience but they aren't load-bearing in the same way that uv is. They do however resonate well with coding agent tools like Codex - giving an agent access to fast linting and type checking tools can help improve the quality of the code they generate. I'm not convinced that integrating them into the coding agent itself as opposed to telling it when to run them will make a meaningful difference, but I may just not be imaginative enough here. Ever since uv started to gain traction the Python community has been worrying about the strategic risk of a single VC-backed company owning a key piece of Python infrastructure. I wrote about one of those conversations in detail back in September 2024. The conversation back then focused on what Astral's business plan could be, which started to take form in August 2025 when they announced pyx, their private PyPI-style package registry for organizations. I'm less convinced that pyx makes sense within OpenAI, and it's notably absent from both the Astral and OpenAI announcement posts. An interesting aspect of this deal is how it might impact the competition between Anthropic and OpenAI. Both companies spent most of 2025 focused on improving the coding ability of their models, resulting in the November 2025 inflection point when coding agents went from often-useful to almost-indispensable tools for software development. The competition between Anthropic's Claude Code and OpenAI's Codex is fierce.
Those $200/month subscriptions add up to billions of dollars a year in revenue, for companies that very much need that money. Anthropic acquired the Bun JavaScript runtime in December 2025, an acquisition that looks somewhat similar in shape to Astral. Bun was already a core component of Claude Code and that acquisition looked to mainly be about ensuring that a crucial dependency stayed actively maintained. Claude Code's performance has increased significantly since then thanks to the efforts of Bun's Jarred Sumner. One bad version of this deal would be if OpenAI start using their ownership of uv as leverage in their competition with Anthropic. One detail that caught my eye from Astral's announcement, in the section thanking the team, investors, and community: Second, to our investors, especially Casey Aylward from Accel, who led our Seed and Series A, and Jennifer Li from Andreessen Horowitz, who led our Series B. As a first-time, technical, solo founder, you showed far more belief in me than I ever showed in myself, and I will never forget that. As far as I can tell neither the Series A nor the Series B were previously announced - I've only been able to find coverage of the original seed round from April 2023. Those investors presumably now get to exchange their stake in Astral for a piece of OpenAI. I wonder how much influence they had on Astral's decision to sell. Armin Ronacher built Rye, which was later taken over by Astral and effectively merged with uv. In August 2024 he wrote about the risk involved in a VC-backed company owning a key piece of open source infrastructure and said the following (highlight mine): However having seen the code and what uv is doing, even in the worst possible future this is a very forkable and maintainable thing. I believe that even in case Astral shuts down or were to do something incredibly dodgy licensing wise, the community would be better off than before uv existed.
Astral's own Douglas Creager emphasized this angle on Hacker News today: All I can say is that right now, we're committed to maintaining our open-source tools with the same level of effort, care, and attention to detail as before. That does not change with this acquisition. No one can guarantee how motives, incentives, and decisions might change years down the line. But that's why we bake optionality into it with the tools being permissively licensed. That makes the worst-case scenarios have the shape of "fork and move on", and not "software disappears forever". I like and trust the Astral team and I'm optimistic that their projects will be well-maintained in their new home. OpenAI don't yet have much of a track record with respect to acquiring and maintaining open source projects. They've been on a bit of an acquisition spree over the past three months though, snapping up Promptfoo and OpenClaw (sort-of, they hired creator Peter Steinberger and are spinning OpenClaw off to a foundation), plus closed source LaTeX platform Crixet (now Prism). If things do go south for uv and the other Astral projects we'll get to see how credible the forking exit strategy turns out to be. You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
