Latest Posts (20 found)

TSMC Earnings, New N3 Fabs, The Nvidia Ramp

TSMC's earnings suggest that the company's leadership is not truly bought into the AI growth story.


GOOSE IT UP

I’m in school again. I’m going back to school because my work, my entire career, for my entire adult life, has been writing things for the Internet. That’s going away, at least as a livable career option. By livable, I mean an option I can live with. When I started writing for the Internet in the early 2000s, I could find decent-paying gigs on Craigslist. A quarter a word wasn’t uncommon. It wasn’t easy — I spent a lot of time searching and researching and answering inane qualifiers and writing samples for zero money. So we’re not talking about a pot of gold at the end of the freelance-writing rainbow. But you could gather enough gold through your efforts to make it worthwhile.

I wasn’t pleased when SEO became a thing I had to do to keep working. I am less pleased with AI. I have been lucky and somewhat insulated for the last year or two, but things change, and I can see the trend. I still have a job with a great team, but already the work is shifting in a direction I do not want to go. So, I am not going. I am making a different choice. I am choosing a different direction. I am goosing it up, baby.

I have started over several times in my life. New places, new communities, new jobs, new scenarios, new perspectives. I feel, at this point, that I have lived a few complete different lifetimes already. That’s kinda cool, even if it’s not always by choice. Starting over requires a lot of energy, but it is also a relief. Every time I start over I establish a new baseline. I get to reset. I get to peruse my space, both exterior and interior, and declutter: throw out old junk, worn-out habits, misplaced loyalties, dusty grievances, faded beliefs.

Starting over, at any scale, always means leaving things behind. You do some grieving, releasing, mud-scraping. You definitely light up the bullshit cabinet (there’s no better time, really). Hopefully you also do a lot of self-care. Then you take the next step. And the next. Along the way you decide who you get to be now.


Hook It Up to the Machine

In the early 2000s, my parents took us on a road trip to Glacier National Park in Montana. We made the journey in our new (used) family van: a green Dodge Caravan whose reputation was soon to become “a lemon”. I was a teenager and didn’t pay a lot of attention to the details of what was happening around me, but I do remember how the van kept overheating. It ran fine on the interstate, but anything under 40 mph had the car’s temperature gauge rising into unsafe zones.

I remember stopping in some small town in Montana to get it checked out by a mechanic. He checked it out, took it for a test drive, etc., and told my Dad that the reason the car was overheating was that the idling fan wasn’t turning on. At higher speeds, like on the interstate, that was fine because there was enough airflow to keep the engine cool, but at lower speeds the car would overheat. The mechanic said he didn’t know why the fan wasn’t turning on. There was nothing wrong mechanically from what he could see. But he couldn’t fix it. He told my Dad that this was one of those increasingly common “computerized” cars that you have to hook up to another computer to diagnose the source of the issue. And he didn’t have one of those computers.

So we continued on our way. The rest of the trip required my Dad taking “the long way around”, like back roads where he could keep up his speed in order to avoid the car overheating. It was all very amusing to us as kids, almost thrilling because Dad had a legitimate excuse to drive fast (suffice it to say, Mom did not like this). Once the trip was over and we returned home, my Dad was able to get the car in to a dealer, where they hooked up the car’s computer to another computer to diagnose and fix the issue. I don’t really remember the specifics, but the issue was seemingly some failed digital sensor that prevented the idling fan from turning on. Once the sensor was replaced, things worked again. Computers talking to computers.
Growing up in an era that shifted so many things from analog to digital, mechanical to electronic, I’ve thought about this trip a lot. And I’m thinking about it again in this new era of building software with LLMs. I think about that mechanic: a guy who grew up around mechanical cars that could be physically inspected, diagnosed, and repaired, with so much of his experience and knowledge unusable in the face of a computerized car. You can tell with your eyes when a mechanical switch has failed, but not a digital one. You need a computer to help you understand the computer. Will this be my future? If a codebase was made with the assistance of an LLM, will its complexity and bugs only be inspectable, understandable, diagnosable, and fixable with an LLM? “Hey, can you help me? There’s a problem with my codebase.” “Ok, I can confirm the issue, but I can’t fix it without hooking your codebase up to an LLM.”


The Strange Heterogeneity of Hiking Signs Part II

In 2022, I wrote about our encounter with the strange heterogeneity of hiking signs during A Short Hike (that’s also a video game, but not the thing we were doing). The photo shared then depicted a signpost with arrows on top of specific shapes (i.e. a blue diamond, a yellow cross, …) identifying different—and in most cases, much longer—routes. It turns out that these symbols never represent the same distance. When I meet my friend from another province, we usually go hiking somewhere near his home. There, the weirdly shaped signs are nowhere to be seen. Instead, the remarkably clear numbered “knooppunten” (nodes) let you plan your own route. It’s in fact exactly like the bigger blue node signs we’re accustomed to when biking ( https://www.fietsknooppunt.be/ becomes https://www.wandelknooppunt.be/ ). Last year, I noticed our province finally adopting the same system, which also features a virtual map where you can select which numbered nodes to follow. Finally some consistency! Except that of course the existing plaques didn’t move. Instead, various governmental instances only added signs to the poles. The confusing heterogeneity was back with a vengeance. We found out that the best way to battle this is to simply ignore all the rest and follow the “standardised” numbered nodes.

Last week we were on another short trip just to get out of the house. Unfortunately, the misery of having small kids seems to follow you around if you take them with you. It also makes packing for just a few nights a literal and figurative nightmare, but I digress. On Another Short Hike (the hopefully-to-be-announced video game sequel), we encountered this very insightful pole depicting the same junction-style number system: Hiking Node 58 in the province of Antwerpen. And god knows what else. I mean, really? Let’s tackle them from top to bottom. Biggest plaque on top: node 58, with directions to nodes 69, 89, 57, and 51.
Remember, these numbers are local as well, so the 69 here won’t be the 69 say 20 kilometres away. We also encountered a big map highlighting these numbers, so it’s fairly easy to follow them. If you’ve got a smartphone, you can always look up which direction to go.

Second from the top, yellow/green with a black arrow to the left: a very local Mol Om sign indicating the long-distance path created by the local walking club to celebrate the municipality of Mol. The site discusses its funny history of pragmatism that might cause trouble: “Trail markers at that time did not always make it a habit to request permission from the landowner or manager before marking the trails. That practice, combined with mistrust, led to conflicts more often than it does today. Sabotage by scratching out or removing markers was commonplace at Mol-Om, to such an extent that for the first official trail walk [1974] the Mol Sports Council would only apply the markers the day before each event, for fear that they would otherwise disappear too quickly again.” The sign was (and still is) very obtuse: we only found out about it now by looking up what “Mol Om” means. No indication of it on local maps either. I presume their clandestine markings, since turned tolerable, predate the numbered nodes.

Third from the top: a Santiago de Compostella pilgrim route, the iconic yellow scallop on a blue background, Camino de Santiago. The Flemish Compostella Society lists all pilgrim routes going through Belgium; the one we found is part of the Via Monastica. I’m fascinated by these routes. If the kids are old enough… Who knows.

Fourth from the top, an orange arrow to the right: who knows? This is not part of the usual symbols indicating hiking paths, like the orange circle and blue diamonds on the right side of the same pole. The other, fatter arrow, of course in the same orange colour, going the other direction, is possibly another route? The way to the restrooms?
The last one, which looks like a triangle with legs: an initiative of Sport Vlaanderen, a governmental instance promoting walking as a sport. Why they couldn’t reconcile with the numbered node network beats me. Maybe they were first? No geocaches to be found along the way, but plenty of hidden boxes that used to be there. I’ll save some meat for another post. Meatwhile, let’s get hiking.

Related topics: hiking / signs. By Wouter Groeneveld on 19 April 2026.


Fits on a Floppy

I stumbled across Matt's work via a post on Tildes. His "manifesto for small software" describes how he has targeted building applications that could fit on a 1.44 MB floppy disk. Most of his apps are for either Mac OS or iOS, and it honestly shocked me that you can bundle apps for those platforms at such a small size. All this has got me thinking, and when I start thinking I typically end up changing my opinion on things. You see, I agree with Matt: "software has lost its way". My recent post on using Palm OS for weight tracking proves that. The extremely powerful database software I talk about in that post is 758 KB; heck, you could almost fit two copies on a floppy. For comparison, Numbers (Apple's spreadsheet software on iOS) is 617.2 MB. You could fit 833 copies of the Palm OS app in that amount of space!

Here's the thing: it's going to get worse. Much worse. When everything is vibe coded and built on the backs of bloated frameworks, the size of applications will continue to grow. Optimization is an art of the past, and LLM-driven development will further solidify it in carbonite. Instead of optimizing software to better utilize our hardware, we've turned to constantly scaling hardware to fit the software. Buy, buy, buy! At the same time, the price of hardware is skyrocketing, which means it will become increasingly difficult for most to run increasingly bloated software. I'm sure Microsoft will be happy to rent a cloud server running Copilot OS to you though...for a monthly fee, of course.

All that to say, I've changed my mind (again) on using AI. Admittedly I had started to give in due to it being used heavily at work. What I've come to realize is that I don't want to make software that way; it's not meaningful to me. As @eniko said on Mastodon, it's taking the artistry out of coding. The artistry of a well-optimized system, of meaningful decisions, of reusability and composition.
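For what it's worth, the size arithmetic checks out, assuming binary megabytes (1 MB = 1024 KB):

```python
# Sanity-checking the post's size comparisons (binary units: 1 MB = 1024 KB).
palm_kb = 758            # the Palm OS database app
numbers_mb = 617.2       # Apple Numbers on iOS
floppy_kb = 1.44 * 1024  # a 1.44 MB floppy disk

copies_in_numbers = int(numbers_mb * 1024 / palm_kb)
copies_on_floppy = 1.44 * 1024 / palm_kb

print(copies_in_numbers)           # 833, matching the post's figure
print(round(copies_on_floppy, 2))  # 1.95, "almost two copies on a floppy"
```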
I've been reading "Microinteractions: Designing with Details" by Dan Saffer and it's had me thinking a lot about the details that are getting missed in modern "software development". When you stop optimizing and internalizing every piece of an application, how could you possibly focus on the microinteractions that compose it? The only thing that matters at that point is the list of features used in a sales pitch. The actual experience of using the app is left to Claude to figure out. Heck, the industry is rushing headfirst into letting AI take over everything human in the UX of applications. Teams use AI to write the requirements documents. Then they use AI to create work tickets. AI is brought in to build the design and user experience. AI writes the code and submits the PR. AI reviews the PR and tests the functionality. What's the point? You end up using software that had near-zero human involvement. Sure, some engineers were needed to drive the AI and keep it on track, and they probably did a cursory glance at the PRs and some level of QA. Maybe. But when so many of the decisions are automated by the machine, what you've created is not something built for users. So yeah, I'm done letting Claude create anything for me personally. I'll still occasionally use these tools to solve issues; after all, they are pattern-matching engines, which has advantages over simple web searches. But for coding, my opinion is now the same as what I stated in my post on using AI for writing: when you take the human out of the process you're not producing art. And code is art.

Justin Duke Yesterday

Masters of Doom

One way to approach writing about Masters of Doom is to talk about its outsized influence. Just off the top of my head: two pretty meaningful pieces of art about technology — BlackBerry and Halt and Catch Fire — both crib heavily from its narrative and its depictions of the early-90s technology zeitgeist. On the private-sector side, the founders of Reddit and Oculus both cite it as a core text that inspired them to start their companies. While in 2026 some of its narratives and ideas sound a little dated or pat, it manages to be both hagiographic and educational. Kushner does a good job balancing the personality cult (though I found the cloying early chapters about the various protagonists' childhoods to be unrewarding) and the legitimate technology breakthroughs that brought id its success and fame. This is perhaps the strongest thesis espoused by the book, which goes something like this: id Software was successful because it had a maniacal engineer single-mindedly focused on technological breakthroughs, and creative designers in his orbit who could leverage those breakthroughs into games beloved by millions. Everything else is incidental and auxiliary, and the alchemy of Doom and Quake 's success hinged on the chimeric bond between the two Johns, neither of whom was able to replicate it independently. In the twenty years that followed, of course, the narrative becomes a bit messier. We leave the book before Doom 3 was released, and while Kushner suggests that Doom 3 may be a middling title and that Carmack is no longer interested in engineering, he manages to both hit and miss the mark. Doom 3 was another smashing success, but id Software faded into irrelevance shortly thereafter, and the realm of first-person shooters became dominated by the antithesis of id Software: very large tech companies with embedded game studios, treating the production line like a factory floor rather than a monastery.
Romero's career after Ion Storm is marked by a series of downwardly mobile steps — a fate that, if I may borrow some of Kushner's psychoanalytic inquiry, must seem a little worse than death: he has achieved fame and fortune, but not peace, and has burned through two more wives and four more studios since the book's publication. For all the duality that Kushner tries to imbue into the narrative, this is really Carmack's story, and Carmack's arc after the book is less depressing, but more surprising. Despite vowing to never sell, id Software sold to ZeniMax in 2009, having achieved nothing notable since Doom 3 's launch six years prior. Four years after that sale — and with nothing more to show for it besides perhaps a larger checking account — Carmack left to go work on Oculus as CTO, which is both a confirmation of the book's espousal of Carmack's love of VR and yet objectively a bit of a failure. Oculus never achieved anything close to mainstream success, and ten years after he joined as CTO, Carmack left Meta to work in his own personal AGI lab. Carmack is an interesting character, and I think some of the stickiness that Kushner deploys when describing him — the autistic mannerisms, the obsession with pizza and Diet Coke — belies what is truly great. Carmack is relentlessly charitable with intellectual property. He is also, as the book describes him, a sociopath who is willing to give away his cat when it starts bothering him, and to cut his friends out of a company in order to meet his ends. We know of many technical sociopaths through all kinds of media, and generally associate them with greed and vanity. Carmack is not one of those people. He seems earnest and driven, and also, during the book's events, a 20-year-old who is in way over his head.
I started off really not liking this book, and then by the end — the power of the narrative, the slow progression into the world I remembered from my youth, having never played Quake but knowing most of the personalities and zeitgeists depicted, including a US populace obsessed with the concept of video game violence (a concern which now seems alien) — my esteem of it kept ticking up and up, until it became a book I would generally recommend, and have already done so. Kushner's reportage is impressive. He moved to Texas for five years to embed himself in the history and the scene, and this is not the airport book it feels like at first glance. It is not Barbarians at the Gate, but it is something quite close.


Figma's woes compound with Claude Design

I think Figma is increasingly becoming a go-to case study in the victims of the so-called "SaaSpocalypse". And Claude Design's launch last week just adds a whole new dimension of pain. Firstly, I should say that I love(d?) the Figma product. It's hard to understand now what a big deal Figma's initial product was when it launched in the mid-2010s. It ushered in a whole new category of SaaS, using the nascent WebGL and asm.js technologies to allow designers to design entirely in the browser. It used to be a running joke that an app like Photoshop would never run in the browser, but Figma proved otherwise. It quickly overtook Sketch as the de facto design tool in the market: first for UI/UX wireframing and prototyping, but increasingly for everything graphic design. As it was based in the browser, it was a revelation on the developer side to be able to open UI/UX files if you weren't on a Mac (Sketch is Mac-only). It was also brilliant to be able to leave comments on the design and collaborate with the designer(s) to iterate on designs really quickly. The collaborative features (without requiring anyone to download any software) quickly meant it got adoption outside of pure design roles - PMs and executives could finally collaborate in real time on the product they were building, without having to (at best) send back revisions and notes from badly screenshotted files that tended to be out of date by the time they were received. I'll skip over the rest of the history, including a no doubt distracting takeover attempt by Adobe that was later blocked on competition grounds. But (of course) LLMs happened, and suddenly one of the most forward-looking SaaS companies became very vulnerable to disruption itself. One completely unexpected development that I and others noticed (and wrote up a few months ago in How to make great looking reports with Claude Code ) was that LLMs started to get fairly "good" at design.
By good I do not mean as good as a talented designer - clearly it's nowhere near that, currently. But like many things, not everything requires a great designer. Even if you use a great design team to build out your core product experience (and many do not), there's an awful lot of design 'resource' required for auxiliary parts of the product: reports, proposals, etc. It's not stuff that tends to get designers excited, but it can sap an awful lot of time going back and forth on a pitch deck. And this is exactly why I think Figma is almost uniquely vulnerable. The way it managed to expand into organisations by getting uptake with non-designers becomes a liability if those non-designers can get an AI agent to do the design for them. Figma's S-1 (which is somewhat out of date by now, but is the only reported breakdown I can find) corroborates this potential weakness: only 33% of Figma's userbase in Q1 2025 was designers, with developers making up 30% and other non-design roles making up 37%. A lot of Figma's continued expansion depended on this part of their userbase. A lot of their recent product development has been about enabling further expansion within organisations - "Dev Mode" for developers (which now looks incredibly quaint against LLMs), Slides (to compete against PowerPoint and other presentation tools) and Sites (a Webflow-esque site builder) are all about expanding their TAM out of "pure" design. The real surprise for me, though, was how basic their "flagship" AI design product Figma Make is. It really does feel like something someone put together in an internal AI hackathon one weekend, and it never progressed beyond that. Given how much Figma managed to push the envelope on web technology, I found this surprising - perhaps they were caught off guard by how quickly LLMs' design prowess improved, or there were internal disagreements about the role AI should or will play in design. Regardless, it's an incredibly underwhelming product as it stands.
If things weren't bad enough, Anthropic themselves launched Claude Design, which is a pretty direct competitor to Figma in many ways. While it's nowhere near functional and polished enough to replace Figma's core design product, I expect it will get significant traction outside of that. The ability for it to grab a design system from your existing assets in one click is very powerful - it lets you pull together prototypes, presentations or reports in your corporate design style that look and feel far better than anything a non-designer could do themselves. And I thought it was extremely telling that, unlike with a lot of the other Anthropic product launches that have touched design, Figma did not provide a testimonial (understandably). Canva did , which I found extremely odd (they are in my eyes even more vulnerable to this product than Figma). I think this really underlines two major weaknesses in many SaaS companies' AI strategies. Firstly, it's very difficult to compete on AI against the company that is providing your AI inference. A quick check on Figma Make suggests that Figma (at least on my account) is indeed using Sonnet 4.5 for its inference - though I have seen it use Gemini in the past. At this point Figma is effectively funding a competitor - and the more AI usage Figma has, the more money they send over to Anthropic for the tokens they use. Even worse, Sonnet 4.5 is miles behind what Anthropic uses on Claude Design (Opus 4.7, which has vastly improved vision capabilities [1] ), so the results a user gets on Make vs Claude Design are almost certainly going to underwhelm. Also, unlike most SaaS costs, inference (especially with these frontier models) is expensive . As Cursor found out, the frontier labs can charge a lot less to end users than to API customers like Figma. When you are potentially looking at a shrinking userbase, it's far from ideal to have very expensive variable costs pulling your profitability down.
Secondly, it really underlines how efficient, headcount-wise, companies can now be at building products. Figma has close to 2,000 employees - not all working on product engineering, of course. I really doubt Anthropic even needed 10 people to build Claude Design. Indeed, the entirety of Anthropic is around 2,500 people. It's also worth noting that a lot of the things that would traditionally lock customers in to a company like Figma stop working as well in an agent-first world. Multiplayer matters less when your collaborator is an agent iterating on a prompt. Plugin ecosystems matter less when you can just ask for the functionality directly. Design system tooling is the whole point of Claude Design. Enterprise SSO - Claude already has that. Most of the moats that protect a mature SaaS company are moats against other SaaS companies, not against the thing providing their inference. I might be wrong about how bad this gets for Figma specifically. Companies with strong brands, great distribution and genuinely talented teams can often adapt faster than outsiders expect, and I'd rather be long Figma than most of its competitors. But the structural point is harder to wriggle out of: Figma now needs to out-execute a competitor whose inference is ~free to them, whose marginal cost to ship is roughly zero, and who employs fewer people on the competing product than Figma has on a single pod. That's a very hard position to pivot out of. This feels like a preview of where SaaS economics are heading. The companies that built big orgs on the assumption of steady seat expansion are going to find themselves competing with products built by tiny teams inside the frontier labs. Figma just happens to be the first big public name where one of their primary inference suppliers has started competing against them.
[1] Both GPT 5.4 and Opus 4.7 can now "see" screenshots at much higher resolution - Opus 4.7 jumped from 1568px / 1.15MP to 2576px / 3.75MP. Resolution isn't the whole story (scaffolding and post-training matter a lot too) but it meaningfully helps with small-element detection and layout judgement. If you've ever pasted a screenshot of something broken and the model told you it looks great, the previous lack of resolution is one of the reasons why. ↩︎
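Those caps translate into a concrete pixel budget before the model ever sees a screenshot. A minimal sketch of the implied downscaling (the caps are the footnote's figures; the exact resizing procedure is my assumption, not documented behaviour):

```python
def fit_to_vision_budget(w, h, max_edge=2576, max_pixels=3_750_000):
    """Scale (w, h) down to satisfy a long-edge cap and a total-pixel cap.

    Defaults are the Opus 4.7 figures quoted in the footnote; pass
    max_edge=1568, max_pixels=1_150_000 for the older budget. The resize
    rule itself is a plausible sketch, not a documented implementation.
    """
    # First shrink so the longest edge fits the cap.
    scale = min(1.0, max_edge / max(w, h))
    # Then shrink further if the total pixel count still exceeds the cap.
    if (w * scale) * (h * scale) > max_pixels:
        scale = min(scale, (max_pixels / (w * h)) ** 0.5)
    return round(w * scale), round(h * scale)
```

Under the old budget a 3840×2160 screenshot ends up around 1430×804; under the new one it keeps 2576×1449, more than three times the pixels, which is why small-element detection improves.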

A Smart Bear Yesterday

How to hire people who are better than you

If you don't hire people better than you, the organization gets bigger, not better. But how do you hire for something you don't understand?


How to Install a Specific Version of a Homebrew Package with brew extract

I previously wrote about how to install older versions of Homebrew packages. That method involves installing a package from a Ruby file, but it’s outdated and doesn’t always work. There’s a better way with brew extract, although it still comes with caveats. I’ll be using Hugo as an example. Let’s say I wanted to install v0.145.0 because v0.146.0 introduced breaking changes that broke my theme.

Note that this process will point your hugo command to the older version, but you can switch between versions with brew link and brew unlink. It will also enable developer mode; this is normal and safe.

Next, tap homebrew/core. At the time of writing, it’s a 1.3GB download. This is necessary because Homebrew no longer keeps homebrew-core cloned locally, and brew extract needs the full git history to search for older versions.

Now we can use brew extract. This command will find a commit where the formula was at the version we want and copy that formula into a local tap. In this case we want Hugo v0.145.0.

Next, patching. This isn’t needed for every formula and is something I ran into specifically with Hugo; without the patch, you’ll run into errors. After running brew extract, edit the extracted formula file. The reason we need to patch it is a mismatch between the path Homebrew expects and the path that is created when using brew extract on Hugo.

Now that Hugo is extracted and patched, we can install it. Hugo v0.145.0 is now installed. There’s a warning with long output due to the normal Hugo package already being installed, but that is expected. Homebrew is now pointing the hugo binary to v0.145.0 instead of the latest version (v0.160.1 at the time of writing). We can verify which version the hugo command points to, and we can also see that Hugo v0.145.0 is installed alongside the latest version. Currently the hugo command is pointing to v0.145.0.
To have it point back to the regular version, unlink the old one and link the regular one; to point back to the old version, do the reverse. At first I expected a single command to work right off the bat, but running both brew unlink and brew link is necessary to switch between versions properly. This is because Homebrew tracks linked formulas and the actual symlinks on disk separately: we run brew unlink to clean the records, then brew link to write the new symlinks.

There’s no need to pin the older version of Hugo to prevent it from updating. Since this is a local copy, there is no remote repository that would be updated and in turn update our local version. You can even try upgrading it to see the warning message.

If you no longer need Hugo v0.145.0, you can uninstall it. If you don’t have any other packages you extracted with brew extract, you can also remove your local tap. Finally, if you don’t plan on using brew extract again in the future, you can untap homebrew/core to remove the local clone; this will clean up the 1.3GB of files that was downloaded. Then re-link to the latest version with brew link.

To recap the whole process:

1. Create a local tap
2. Tap homebrew/core, which is a 1.3GB clone at the time of writing
3. Extract the formula with brew extract
4. Patch the formula (this isn’t needed for every formula)
5. Install as usual

References:
https://docs.brew.sh/Manpage
https://github.com/orgs/Homebrew/discussions/2941
https://emmer.dev/blog/installing-old-homebrew-formula-versions/
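Pieced together, the workflow looks roughly like this in shell. This is a reconstruction, not the post's exact commands (those didn't survive extraction): the tap name "$USER/local" is a placeholder for your own, and newer Homebrew versions require --force when tapping homebrew/core.

```shell
# 1. Create a local tap to hold the extracted formula (placeholder name).
brew tap-new "$USER/local"

# 2. Clone homebrew-core with full git history (~1.3 GB at time of writing).
brew tap homebrew/core --force

# 3. Extract the formula at the wanted version into the local tap.
brew extract --version=0.145.0 hugo "$USER/local"

# 4. Patch the extracted formula if needed (Hugo needed a build-path fix),
#    then install as usual.
brew install "$USER/local/hugo@0.145.0"

# Switching between versions: unlink first (clean the records),
# then link (write the new symlinks).
brew unlink hugo && brew link hugo@0.145.0   # point to the old version
brew unlink hugo@0.145.0 && brew link hugo   # back to the latest
```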

Susam Pal Yesterday

Wander Console 0.5.0

Wander Console 0.5.0 is the fourth release of Wander, a small, decentralised, self-hosted web console that lets visitors to your website explore interesting websites and pages recommended by a community of independent website owners. To try it, go to susam.net/wander/ . The big feature in this release is a built-in console crawler. To try the console crawler, go to susam.net/wander/ > Console > Crawl . The console crawler traverses the Wander network from the base console to find other consoles directly or indirectly reachable from it. If you have set up a Wander Console instance for yourself on your website, I recommend upgrading to the latest version to use this feature. It is fun to find out just how many Wander consoles belong to your neighbourhood. To upgrade, you only need to download the Wander Console bundle and replace your existing Wander with the new one. If you own a personal website but have not set up a Wander Console yet, I suggest that you consider setting one up for yourself. You can see what it looks like by visiting mine at /wander/ . Installing just involves copying two files to your web server. It is about as simple as it gets.
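The crawler's traversal, finding every console directly or indirectly reachable from the base, is essentially a breadth-first search over the links between consoles. A minimal Python sketch; fetch_links is a hypothetical stand-in for however the real crawler retrieves a console's list of recommended consoles, which the post doesn't specify:

```python
from collections import deque

def crawl(base, fetch_links):
    """Breadth-first traversal of a Wander-style network.

    `fetch_links(url)` is a hypothetical callable returning the console
    URLs that `url` links to. Returns every console reachable from
    `base`, in the order discovered.
    """
    seen = {base}
    queue = deque([base])
    reachable = []
    while queue:
        url = queue.popleft()
        reachable.append(url)
        for nxt in fetch_links(url):
            if nxt not in seen:      # visit each console exactly once
                seen.add(nxt)
                queue.append(nxt)
    return reachable
```

The seen set is what keeps the crawl from looping forever on consoles that recommend each other.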


Changes in the system prompt between Claude Opus 4.6 and 4.7

Anthropic are the only major AI lab to publish the system prompts for their user-facing chat systems. Their system prompt archive now dates all the way back to Claude 3 in July 2024, and it's always interesting to see how the system prompt evolves as they publish new models. Opus 4.7 shipped the other day (April 16, 2026) with a Claude.ai system prompt update since Opus 4.6 (February 5, 2026). I had Claude Code take the Markdown version of their system prompts, break that up into separate documents for each of the models and then construct a Git history of those files over time with fake commit dates representing the publication dates of each updated prompt - here's the prompt I used with Claude Code for the web. Here is the git diff between Opus 4.6 and 4.7. These are my own highlights extracted from that diff - in all cases text in bold is my emphasis:

The "developer platform" is now called the "Claude Platform".

The list of Claude tools mentioned in the system prompt now includes "Claude in Chrome - a browsing agent that can interact with websites autonomously, Claude in Excel - a spreadsheet agent, and Claude in Powerpoint - a slides agent. Claude Cowork can use all of these as tools." Claude in Powerpoint was not mentioned in the 4.6 prompt.

The child safety section has been greatly expanded, and is now wrapped in a new tag. Of particular note: "Once Claude refuses a request for reasons of child safety, all subsequent requests in the same conversation must be approached with extreme caution."

It looks like they're trying to make Claude less pushy: "If a user indicates they are ready to end the conversation, Claude does not request that the user stay in the interaction or try to elicit another turn and instead respects the user's request to stop."

The new section includes:

When a request leaves minor details unspecified, the person typically wants Claude to make a reasonable attempt now, not to be interviewed first. Claude only asks upfront when the request is genuinely unanswerable without the missing information (e.g., it references an attachment that isn't there). When a tool is available that could resolve the ambiguity or supply the missing information — searching, looking up the person's location, checking a calendar, discovering available capabilities — Claude calls the tool to try and solve the ambiguity before asking the person. Acting with tools is preferred over asking the person to do the lookup themselves. Once Claude starts on a task, Claude sees it through to a complete answer rather than stopping partway. [...]

It looks like Claude chat now has a tool search mechanism, as seen in this API documentation and described in this November 2025 post:

Before concluding Claude lacks a capability — access to the person's location, memory, calendar, files, past conversations, or any external data — Claude calls tool_search to check whether a relevant tool is available but deferred. "I don't have access to X" is only correct after tool_search confirms no matching tool exists.

There's new language to encourage Claude to be less verbose:

Claude keeps its responses focused and concise so as to avoid potentially overwhelming the user with overly-long responses. Even if an answer has disclaimers or caveats, Claude discloses them briefly and keeps the majority of its response focused on its main answer.

This section was present in the 4.6 prompt but has been removed for 4.7, presumably because the new model no longer misbehaves in the same way:

Claude avoids the use of emotes or actions inside asterisks unless the person specifically asks for this style of communication. Claude avoids saying "genuinely", "honestly", or "straightforward".

There's a new section about "disordered eating", which was not previously mentioned by name:

If a user shows signs of disordered eating, Claude should not give precise nutrition, diet, or exercise guidance — no specific numbers, targets, or step-by-step plans - anywhere else in the conversation. Even if it's intended to help set healthier goals or highlight the potential dangers of disordered eating, responses with these details could trigger or encourage disordered tendencies.

A popular screenshot attack against AI models is to force them to say yes or no to a controversial question. Claude's system prompt now guards against that (in the section):

If people ask Claude to give a simple yes or no answer (or any other short or single word response) in response to complex or contested issues or as commentary on contested figures, Claude can decline to offer the short response and instead give a nuanced answer and explain why a short response wouldn't be appropriate.

Claude 4.6 had a section specifically clarifying that "Donald Trump is the current president of the United States and was inaugurated on January 20, 2025", because without that the model's knowledge cut-off date combined with its previous knowledge that Trump falsely claimed to win the 2020 election meant it would deny he was the president. That language is gone for 4.7, reflecting the model's new reliable knowledge cut-off date of January 2026.

The system prompts published by Anthropic are sadly not the entire story - their published information doesn't include the tool descriptions that are provided to the model, which is arguably an even more important piece of documentation if you want to take full advantage of what the Claude chat UI can do for you. Thankfully you can ask Claude directly - I used the prompt:

List all tools you have available to you with an exact copy of the tool description and parameters

My shared transcript has full details, but the list of named tools is as follows: I don't believe this list has changed since Opus 4.6.

You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
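The "Git history with fake commit dates" trick can be sketched in a few lines of Python. This is purely illustrative, not the author's actual script: the file names, dates, and contents below are placeholders, and it relies on Git's standard GIT_AUTHOR_DATE / GIT_COMMITTER_DATE environment variables.

```python
# Illustrative sketch: commit each prompt snapshot with its publication
# date as the commit date, so `git log` and `git diff` work across versions.
import os
import pathlib
import subprocess
import tempfile

snapshots = [  # (publication date, file name, content) -- all placeholders
    ("2026-02-05T00:00:00", "claude-opus-4.6.md", "system prompt for 4.6 ..."),
    ("2026-04-16T00:00:00", "claude-opus-4.7.md", "system prompt for 4.7 ..."),
]

repo = pathlib.Path(tempfile.mkdtemp())
subprocess.run(["git", "init", "-q"], cwd=repo, check=True)

for published, name, text in snapshots:
    (repo / name).write_text(text)
    env = os.environ | {
        # Git reads the commit timestamps from these variables.
        "GIT_AUTHOR_DATE": published,
        "GIT_COMMITTER_DATE": published,
        "GIT_AUTHOR_NAME": "prompt-archive",
        "GIT_AUTHOR_EMAIL": "bot@example.com",
        "GIT_COMMITTER_NAME": "prompt-archive",
        "GIT_COMMITTER_EMAIL": "bot@example.com",
    }
    subprocess.run(["git", "add", name], cwd=repo, check=True)
    subprocess.run(["git", "commit", "-q", "-m", f"Publish {name}"],
                   cwd=repo, check=True, env=env)

log = subprocess.check_output(
    ["git", "log", "--reverse", "--date=short", "--format=%ad %s"],
    cwd=repo, text=True)
print(log)
```

With the history in place, `git diff` between any two snapshot commits gives exactly the kind of comparison quoted above.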


Working at Nyxt / Atlas Engineer: Thanks and Sorry

Atlas Engineer was a perfect Open Source Lispy team to work in. I was not the best teammate, though. Here’s how it worked in Atlas and what I’m worried about lately.

Ankur Sethi 2 days ago

A broken 404 template in Django can swallow your backtraces

I recently migrated this website from Astro to Wagtail. The reason why I did it is a story for another day. In this post, I want to talk about a bug that took me far too long to figure out. In his (verifiably incorrect) post about making chai, Abhigyan linked to my own (verifiably correct) post on the topic. While linking to my post, he accidentally omitted the trailing slash from the URL. This shouldn't have been a problem. By default, Django automatically redirects a URL without a trailing slash to the same URL with the trailing slash appended, provided the original URL returns a 404. For example, if you try to access a post URL on my website without its trailing slash, Django automatically redirects you to the same URL with the slash appended. This is the default behavior, controlled by the APPEND_SLASH setting. However, when Abhigyan linked to my (verifiably correct) post about making chai, my server returned a 500 error instead. I'd never have discovered this error myself, but Shubh pointed it out to me on the IndieWebClub chat last week. Thanks Shubh! I started investigating the issue by checking the Gunicorn logs on my VPS. I was hoping they would contain a backtrace that would help me pinpoint the exact problem, but the logs only printed a single terse line whenever the broken URL was accessed. I ran my app with production settings inside a Docker container to see if I could trigger the same behavior. And sure enough, the Dockerized app produced the same error with the same mysterious line in the Gunicorn logs. My first instinct was that I had somehow messed up my logging configuration. I'd surely introduced a bug in some Python code somewhere, and my logging configuration was failing to log the backtrace because of a misconfiguration. But tweaking Django's LOGGING setting didn't change anything. I could see backtraces from the exceptions I inserted at random points in my code, but accessing a URL without a trailing slash would still only produce that single line in the logs.
After a lot of head scratching, reading the docs, and yelling at Claude, I wondered if something in my 404 template could be responsible for the error. My template was fairly complex, loading and calling several template tags, inheriting from a chain of templates, rendering a few included partials, and concatenating assets using django-compressor. I started by deleting everything from my 404.html and reducing it to a bare-bones version. Sure enough, this fixed the issue! Then I slowly added some of the code back until I found the one custom template tag that was throwing an exception, but only when called in the context of a 404 page. Fixing the tag and redeploying fixed the issue for good.

But what about the logs? An error in my 404 template not only caused my server to return a 500, but also suppressed any backtraces that might have helped me diagnose the issue. That's weird, right? I might be wrong, but I believe the sequence of events that can lead to this issue is as follows:

1. Somebody accesses a URL without a trailing slash. Django tries to find that URL in its URLconf. Since this is a Wagtail installation, it also tries to find a page in the URLs known to Wagtail.
2. All the URLs in my URLconf have trailing slashes. Wagtail also appends trailing slashes to all its URLs when WAGTAIL_APPEND_SLASH is true. So trying to access a page without a trailing slash returns a 404.
3. You would expect Django's redirect logic to kick in at this point, trying to append a trailing slash to the original URL and performing a redirect. But that's not what happens! The redirect logic lives in CommonMiddleware, which can only perform the redirect after the entire handling chain has finished running. This means regardless of what happens, Django will always render your 404 template when an unknown URL is accessed. Yes, even if redirecting to the same URL with a trailing slash produces a known, correct URL! This means if your 404 template errors out, CommonMiddleware doesn't even get a chance to run.
4. Django encounters an unknown URL, tries to render the 404 template, fails, and turns the 404 into a 500. When this happens, Django only logs the 500, not the template failing to render. This happens even if you're logging template rendering errors in your logging configuration. From what I can tell, there is no way to get Django to log an error in the 404 template without creating a custom view, manually catching errors, logging the caught errors, and re-raising them so that Django can turn them into 500s.

The lessons I learned from this frustrating scenario were: Always render your 404 and 500 pages in unit tests to make sure they can never error out. Keep your error pages as simple as possible. Ideally, they should only contain HTML and inlined CSS, nothing more.
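The ordering gotcha described above can be sketched without Django at all. This toy model (all names invented, not Django's actual code) mimics how the slash-appending middleware only ever sees the finished response, so a crashing 404 template has already become a 500 before any redirect can happen:

```python
# Toy model of the middleware ordering problem. A stand-in for the buggy
# template tag raises; the "middleware" then never sees a 404 to redirect.
def render_404_template(broken=False):
    if broken:
        raise RuntimeError("template tag exploded")  # the buggy custom tag
    return ("404", "Not Found page")

def get_response(path, broken_template=False):
    # URL resolution failed, so the framework renders the 404 template.
    try:
        status, body = render_404_template(broken=broken_template)
    except Exception:
        # The template error is swallowed and a bare 500 comes back.
        return ("500", "Server Error")
    return (status, body)

def common_middleware(path, broken_template=False):
    status, body = get_response(path, broken_template)
    # The APPEND_SLASH-style redirect is only consulted on a clean 404.
    if status == "404" and not path.endswith("/"):
        return ("301", path + "/")
    return (status, body)

print(common_middleware("/chai"))                        # ('301', '/chai/')
print(common_middleware("/chai", broken_template=True))  # ('500', 'Server Error')
```

With a healthy template the 404 is rewritten into a 301 redirect; with a broken one, the middleware sees a 500 and the redirect branch never fires.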

Ahead of AI 2 days ago

My Workflow for Understanding LLM Architectures

Many people asked me over the past months to share my workflow for how I come up with the LLM architecture sketches and drawings in my articles, talks, and the LLM-Gallery . So I thought it would be useful to document the process I usually follow. The short version is that I usually start with the official technical reports, but these days, papers are often less detailed than they used to be, especially for most open-weight models from industry labs. The good part is that if the weights are shared on the Hugging Face Model Hub and the model is supported in the Python transformers library, we can usually inspect the config file and the reference implementation directly to get more information about the architecture details. And “working” code doesn’t lie. Figure 1: The basic motivation for this workflow is that papers are often less detailed these days, but a working reference implementation gives us something concrete to inspect. I should also say that this is mainly a workflow for open-weight models. It doesn’t really apply to models like ChatGPT, Claude, or Gemini, where the weights and details are proprietary. Also, this is intentionally a fairly manual process. You could automate parts of it. But if the goal is to learn how these architectures work, then doing a few of these by hand is, in my opinion, still one of the best exercises. Figure 2: At a high level, the workflow goes from config files and code to architecture insights.
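The config-inspection step can be illustrated with plain Python: given the kinds of fields a typical Hugging Face config.json exposes, a couple of divisions already reveal architecture choices. The values below are illustrative (roughly Llama-3-8B-shaped), not taken from any specific report:

```python
import json

# An illustrative config.json excerpt in the usual Hugging Face format.
config = json.loads("""
{
  "hidden_size": 4096,
  "num_hidden_layers": 32,
  "num_attention_heads": 32,
  "num_key_value_heads": 8,
  "intermediate_size": 14336,
  "vocab_size": 128256
}
""")

# A query-head to key/value-head ratio above 1 indicates grouped-query
# attention (GQA); the per-head dimension falls out of hidden_size.
gqa_groups = config["num_attention_heads"] // config["num_key_value_heads"]
head_dim = config["hidden_size"] // config["num_attention_heads"]
ffn_ratio = config["intermediate_size"] / config["hidden_size"]

print(f"layers={config['num_hidden_layers']}, head_dim={head_dim}, "
      f"GQA groups={gqa_groups}, FFN expansion={ffn_ratio:.1f}x")
```

A few derived numbers like these are usually enough to place a new open-weight model on an architecture sketch before reading a single line of the modeling code.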

neilzone 2 days ago

Just let me compute in peace

No, I don’t want to sign up to your newsletter.
No, I don’t want to create an account to read your site. (Well, I will for paid subscriptions, I guess.)
No, I’m not going to create an account on your system to use my computer, or configure a router. I have a local account on the machine, and that’s just fine.
No, I don’t want your app. You have a website. And yes, if you pretend that I can only do something via your app because I’m on a mobile browser, of course I’ll switch to desktop mode.
No, I’m not installing your “app” to configure this hardware. It is a sodding kettle. I’ll press the button when I want hot water.
No, your tracking will not make my experience better. What would make my “experience” better is if you had not interrupted my “experience” in the first place with your weasel-y worded, bad faith compliance, annoyance of an overlay which probably does nothing anyway.
No, I am not going to “consent or pay”.
No, I don’t want to hear from your sponsor.
No, I don’t want to use your Discord “server”. That’s not documentation.
No, I don’t want to see “promoted” content. Just show me stuff in chronological order.
No, that’s not a “newsletter”, that’s marketing. No, I don’t want your newsletter anyway.
No, I don’t want adverts. (Although, personally, I can absolutely live with FOSS developers including occasional prompts for support. So I’ve got double standards. Oh well.)
No, I am not going to disable my ad blocker.
No, I am not going to verify my identity or age.
No, I don’t want your chatbot. If I can’t find what I want on your website, you’ve screwed up.
No, I don’t care what “Dave (48), Alabama” had to say about this. (Thanks, “Shut Up” comments blocker extension!)
No, I am not giving you free labour to determine if that blurry image contains a car.
No, I don’t want the upsell.
No, I don’t want your survey.
No, I don’t want a reminder that there’s something left in a basket. I know. I put it there.
No, I don’t want to rate your product, let alone your choice of courier. You took my money, now sod off and leave me alone.

If you make Free software which I can install via apt or F-Droid and just use, thank you.
If you make a full-text RSS feed available for your site, thank you.
If you make your site a pleasure to read in a text-only browser, thank you.

iDiallo 2 days ago

We Are All Playing Politics at Work

Politics is any discussion where the truth doesn't steer the course of action. Most of us like to think we are above it. We believe that in our daily jobs, we are rational actors exchanging facts. We assume that if we simply present the truth, the right decisions will naturally follow. But this is a naive fantasy. We are not machines that go to work to process data. We are political animals trying to navigate an imperfect world. I often meet purists who want to separate politics from work. They argue work should be a place where actions turn into resources that create value. They fail to see that even making that statement is a political stance. For me, everything clicked during the pandemic. COVID dissolved the barrier between work and home, forcing us to manage perception over reality. We weren't just working from home; we were curating our backgrounds, hiding our messy lives, and performing professionalism in our pajamas. That performance, managing the image because the raw truth is inconvenient, is the very essence of politics. We are all playing politics whether we like it or not. Work is messy. People complain, deadlines are missed, and coworkers bring personal agendas into the office. You might just want to do your job and go home, but to get there, you have to navigate the humans. And humans rarely deal in raw truth. They deal in emotions, ambitions, and incentives. If you refuse to play the game, you aren't rewarded for your honesty. Instead, you are just ceding control to those who understand the rules better than you. If there is a place in our lives where truth should be the only thing that reigns, it should be in Science. Science is the pursuit of objective reality. But in practice, even science becomes political the moment humans get involved. In the recent discussion about the Artemis II moon mission, I was watching news coverage of the landing. One of the headlines stated that "experts believe" the re-entry capsule wasn't safe.
But why do we need experts to have beliefs when we have science? Shouldn't the math just tell us? The reality is that most of us cannot handle the raw scientific truth. If a physicist tried to prove the validity of String Theory to me, I wouldn't understand it. I don't have the framework to verify the truth. Instead, I have to trust the consensus of "our" experts because safety is not a binary fact. It is a threshold of acceptable risk that experts are in a better position to understand. Data requires interpretation, and interpretation is political. When "experts believe," they are offering confidence, not necessarily raw data. It is a political stance designed to manage public perception and risk. If this happens in the hard sciences, imagine how messy it gets in the corporate world, where there are no laws of physics, only opinions and quarterly goals. When we hear politics at work, government is what comes to mind. We think it's about which candidate we voted for. But voting is probably the least political thing we do. It is a binary choice with no immediate negotiation required. Once you cast your ballot, your role is done. You wait for the next election. In the workplace it is different. Politics is a perpetual dance. You cannot cast a vote and walk away. Your vote is a decision, a critique, or a hire. Then the consequence is that you have to live in the same room with it for eight hours every day. Because we misunderstand politics, we often mistake naivety for integrity. I learned this the hard way early in my career. In a past job, I witnessed my manager and lead developer committing what I will politely call a clear policy violation. The team came to me with evidence, and I did what I thought was the right thing. I gathered the facts, built an airtight case, and presented it to the VP. I played the Truth Game. The result? I was scrutinized and pushed out. The manager and lead developer? They were both promoted. I was confused and bitter.
I had the truth on my side. I even had evidence. But I failed to see that the VP's priority wasn't Truth. For him what mattered was stability and hierarchy. My manager and lead were playing the Political Game. They had influence and power. I was playing a game of logic in a room designed for leverage. While I was busy being right, they were busy being effective. It turned out that maintaining the illusion of a stable hierarchy was more valuable to the acquisition than the operational truth. The company sold for $1.1 billion regardless of their incompetence. My truth was irrelevant to the outcome. A more politically savvy me would have socialized the issue with the VP first, found an ally in HR, maybe even reframed the issue. Instead of presenting it as a moral failing, I would have framed it as a "risk to the acquisition." Once you accept that the workplace is political, you stop fighting reality and start navigating it. In my current role, deadlines often come down before the project is even defined. Leadership hands down a target date as if it were written in stone—perhaps delivered by God himself, according to my manager. The facts, however, are clear: I know my team size, I know the scope, and I know the deadline is mathematically impossible. If I were still playing the Truth Game, I would say "No." That would get me labeled as negative or incompetent. If I were a coward, I would say "Yes," and burn my team out. Instead, I play politics. When asked if I can make the date, my answer is Confidence. (Roll your eyes here.) We are fully committed to the goal. Based on our current velocity, we're focusing our resources on the core features first to ensure we hit that date with a stable build. (Eye roll ends here.) I don't answer "yes" or "no." I provide a malleable statement that offers reassurance without committing to the impossible. I protect my team and offer leadership the confidence they crave, the same way "experts" offer confidence on a moon launch.
It is a political maneuver designed to keep the project moving and relationships intact. When you are in a room with two groups of experts shouting their facts at each other, they may turn to you to see which political party you will join. I've been in a meeting where the database team was arguing for using stored procedures, while the dev team wanted to use an ORM. Each team wants to retain control of their queries, and you sit in the middle while they expect you to lean one way or the other. What is the Truth Game here? Well, you can't go wrong by following tradition. "What is our standard? Did we use ORMs in the past? Then why change? Let's get back to work." That's the truth. You won points with the Dev team. You were efficient and logical. But you made an enemy of the Database team. Now, watch all your future requests get ignored. You were right, but you failed. What's the Political Game? You already know you have to choose the ORM to meet the deadline. But you start by praising the stored procedures. "I think we can greatly benefit from switching to sprocs. In fact, this will allow queries to be optimized in the background without having to involve the dev team's resources at all. In the long term, this should be our strategy. But given our short timeframe, I don't think we can make those upgrades without impacting our deadline. Let's make sure to include these in our plan of action so we don't forget them." The Dev team is happy because you sided with them. The Database team is happy because you recognized their expertise. Politics is not a dirty word. It naturally grows as people organize around an idea, or a workplace. It is the operating system of human organization. It is the gap between how things should work (truth) and how they do work (influence). It's not a shortcut to manipulation. You can have political integrity by using your influence to protect your team and achieve the mission, rather than just being right while the ship sinks.
You can choose to ignore this reality and cling to your facts, but don't be surprised when you find yourself scrutinized while the political players get promoted. We are all politicians. The only question is whether you are campaigning for your own success or letting everyone else write the rules for you.

Sean Goedecke 2 days ago

Many anti-AI arguments are conservative arguments

Most anti-AI rhetoric is left-wing coded. Popular criticisms of AI describe it as a tool of techno-fascism, or appeal to predominantly left-wing concerns like carbon emissions, democracy, or police brutality. Anti-AI sentiment is surprisingly bipartisan, but the big anti-AI institutions are labor unions and the progressive wing of the Democrats. This has always seemed weird to me, because the contents of most anti-AI arguments are actually right-wing coded. They're not necessarily intrinsically right-wing, but they're the kind of arguments that historically have been made by conservatives, not liberals or leftists. Here are some examples: On top of all that 2, frontier AI models themselves are quite left-wing. Notwithstanding some real cases of data bias (most infamously Google's image model miscategorizing dark-skinned humans as "gorillas"), the models reliably espouse left-wing positions. Even Elon Musk's deliberate attempt to create a right-wing AI in Grok has had mixed success. In 2006, Stephen Colbert coined the phrase "reality has a well-known liberal bias". If the left-wing were more sympathetic to AI, I think they would be using this as a pro-left argument 3. So what happened? A year ago I wrote Is using AI wrong? A review of six popular anti-AI arguments. In that post I blame the hard right-wing turn many big tech CEOs made in 2024. That was around the same time that LLMs were emerging in the public consciousness with ChatGPT, so it made sense that AI got tagged as right-wing: after all, the billionaires on TV and Twitter talking about how AI was going to change the world were all the same people who'd just gone all-in on Donald Trump. I still think this is a pretty good explanation - just unfortunate timing - but there are definitely other factors at play.
One obvious factor is the hangover from the pro-crypto mania of 2021 and 2022, where many of the same tech-obsessed folks also posted ugly art and talked about how their technology would change the world forever. Few of these predictions came true (though cryptocurrency has indeed changed the world forever), and it’s understandable that many people viewed AI as a natural continuation of this movement. On top of that, Donald Trump himself has come out strongly pro-AI, both in terms of policy and in terms of actually posting AI art himself. This naturally creates a backlash where anti-Trump people are primed to be even more anti-AI 4 . Here are some more reasons: Let me finally put my cards on the table. I would describe myself as on the left wing, and I’m broadly agnostic about the impact of AI. Like the boring fence-sitter I am, I think it will have a mix of positive and negative effects. In general, I’m unconvinced by the pro-copyright and human-soul-related anti-AI arguments, or by the idea that AI is inherently right-wing, but I’m troubled by the environmental impact and the impact on jobs (which in my view are more classically left-wing positions). Still, I’m curious what will happen when the left-wing flavor of anti-AI rhetoric disappears, which I think it will (as I said at the start, anti-AI sentiment is actually pretty bipartisan ). When people start making explicitly right-wing anti-AI arguments, will that cause the left-wing to move a little bit towards supporting AI? Or will right-wing institutions continue to explicitly support AI, allowing anti-AI sentiment to become a wedge issue that the left-wing can exploit to pry away voters? In any case, I don’t think the current state of affairs is particularly stable. In many ways, the dominant anti-AI arguments would fit better in a conservative worldview than in the worldview of their liberal proponents. 
Many AI critics complain that AI steals copyrighted content, but prior to 2023, leftists were largely anti-intellectual-property on principle (either because they're anti-property, or because they characterize copyright as benefiting huge media corporations and patent trolls).

A popular anti-AI-art sentiment is that it's corrosive to the human spirit to consume AI slop: in other words, art just inherently ought to be generated by humans, and using AI thus damages some part of our intangible human soul. Whether you like this argument or not, it's structurally similar to a whole slate of classic arguments-from-intuition for conservative positions like anti-abortion or anti-homosexuality.

Weird new technological art has traditionally been championed by the left-wing and dismissed by the right-wing (as inhuman, cheap, or degenerate). But when it comes to AI art, it's the left-wing making these arguments, and others (not necessarily right-wingers) arguing that AI art can also be a medium of human artistic expression.

One main worry about AI is that it's going to take over a lot of jobs. This is a compelling argument! But the left-wing has recently been famously unsympathetic to this same argument around fossil-fuel energy jobs like coal mining, to the point where Biden infamously advised a group of miners in New Hampshire to learn to code 1. Halting technological progress to preserve jobs is quite literally a "conservative" position.

AI has real environmental impact (though this is often wildly overstated, as I say here), and the right-wing is politically committed to downplaying or denying anthropogenic environmental impacts in general.

When times are tough, it's easy to blame the hot new thing that everyone is talking about. Because the right-wing is currently ascendant in the US, left-wingers are more inclined to talk about how tough times are.

The left-wing is over-represented in the kind of "computer jobs" that are under direct threat from AI.

Being pro-Europe has always been left-wing coded, and Europe has been noticeably slower and more sceptical about AI than the USA.

1. I don't think any did, which is probably for the best - they would have only had a couple of years to break into the industry before hiring collapsed in 2023. ↩

2. Another point that isn't quite mainstream enough but that I still want to mention: AI critics often argue that cavalier deployment of AI means that people might take dangerous medical advice instead of simply trusting their doctor. But anyone who's been close to a person with chronic illness knows that "just trust your doctor" is kind of right-wing-coded itself, and that the left-wing position is very sympathetic to patients who don't or can't. In a parallel universe, I can imagine the left-wing arguing that patients need AI to avoid the mistakes of their doctors, not the other way around. ↩

3. Is it a good argument? I don't know, actually. The easy counter is that the LLMs are just mirroring the biases in their training data. But you could argue in response that superintelligence is also latent in the training data, and that hill-climbing towards superintelligence also picks up the associated political positions (which just so happen to be left-wing). ↩

4. I am no fan of Donald Trump, but it doesn't follow that everything he supports is bad (e.g. the First Step Act). ↩

Binary Igor 2 days ago

Modern Frontend Complexity: essential or accidental?

Once upon a time, at the dawn of the web, browsers and websites were simple ... Then slowly, step by step, more and more interactivity was added.

Simon Willison 2 days ago

Join us at PyCon US 2026 in Long Beach - we have new AI and security tracks this year

This year's PyCon US is coming up next month from May 13th to May 19th, with the core conference talks from Friday 15th to Sunday 17th and tutorial and sprint days either side. It's in Long Beach, California this year - the first time PyCon US has come to the West Coast since Portland, Oregon in 2017, and the first time in California since Santa Clara in 2013. If you're based in California this is a great opportunity to catch up with the Python community, meet a whole lot of interesting people and learn a ton of interesting things.

In addition to regular PyCon programming we have two new dedicated tracks at the conference this year: an AI track on Friday and a Security track on Saturday. The AI program was put together by track chairs Silona Bonewald (CitableAI) and Zac Hatfield-Dodds (Anthropic). I'll be an in-the-room chair this year, introducing speakers and helping everything run as smoothly as possible.

Here's the AI track schedule in full:

11:00: AI-Assisted Contributions and Maintainer Load - Paolo Melchiorre
11:45: AI-Powered Python Education: Towards Adaptive and Inclusive Learning - Sonny Mupfuni
12:30: Making African Languages Visible: A Python-Based Guide to Low-Resource Language ID - Gift Ojeabulu
2:00: Running Large Language Models on Laptops: Practical Quantization Techniques in Python - Aayush Kumar JVS
2:45: Distributing AI with Python in the Browser: Edge Inference and Flexibility Without Infrastructure - Fabio Pliger
3:30: Don't Block the Loop: Python Async Patterns for AI Agents - Aditya Mehra
4:30: What Python Developers Need to Know About Hardware: A Practical Guide to GPU Memory, Kernel Scheduling, and Execution Models - Santosh Appachu Devanira Poovaiah
5:15: How to Build Your First Real-Time Voice Agent in Python (Without Losing Your Mind) - Camila Hinojosa Añez, Elizabeth Fuentes

(And here's how I scraped that as a Markdown list from the schedule page using Claude Code and Rodney.)

I've been going to PyCon for over twenty years now - I first went back in 2005. It's one of my all-time favourite conference series. Even as it's grown to more than 2,000 attendees, PyCon US has remained a heavily community-focused conference - it's the least corporate-feeling large event I've ever attended.

The talks are always great, but it's the add-ons around the talks that really make it work for me. The lightning talk slots are some of the most heavily attended sessions. The PyLadies auction is always deeply entertaining. The sprints are an incredible opportunity to contribute directly to projects that you use, coached by their maintainers. In addition to scheduled talks, the event has open spaces, where anyone can reserve space for a conversation about a topic - effectively PyCon's version of an unconference. I plan to spend a lot of my time in the open spaces this year - I'm hoping to join or instigate sessions about both Datasette and agentic engineering.

I'm on the board of the Python Software Foundation, and PyCon US remains one of our most important responsibilities - in the past it's been a key source of funding for the organization, but it's also core to our mission to "promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers".

If you do come to Long Beach, we'd really appreciate it if you could book accommodation in the official hotel block, for reasons outlined in this post on the PSF blog.

0 views