Latest Posts (20 found)

2026.08: Losing in the Attention Economy

Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we're sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don't want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week. This week's Sharp Tech video is on Anthropic's Super Bowl lies.

What Happened to Video Games? For decades video games were hailed as the industry of the future, as their growth and eventually total revenue dwarfed other forms of entertainment. Over the last five years, however, things have gotten dark — and what light there is is shining on everyone other than game developers. I've been talking to Matthew Ball about the state of the video game industry every year for the last three years, and this week's Interview was my favorite one of the series: what happens when you actually have to fight for attention, and when everything that made you exciting — particularly interactivity and immersiveness — start to become liabilities? — Ben Thompson

The NBA Is a Mess, For Now. As a card-carrying pro basketball sicko who will be watching the NBA the rest of my life, it brings me no joy to report the league is not in a great place at the moment. We're reliving the mid-aughts Spurs-Pistons Dark Ages, but with too much offense instead of too much defense, and a regular season that's 20 games too long. I wrote about all of it on Sharp Text this week, including problems that can be fixed, others that may be solved with time, and whether Commissioner Adam Silver is the right leader to address any of these issues. — Andrew Sharp

Shopify and the Future of E-Commerce. In the midst of the ongoing thrum of SaaSpocalypse takes, I enjoyed that Ben's Daily Update on Wednesday pumped the brakes on the panic in at least one area: Shopify is fine, actually. We went deeper on this week's episode of Sharp Tech, exploring not only Shopify's value propositions, but the shifting dynamics of e-commerce in the AI era, the sorts of businesses that are likely to emerge in the years to come, and why certain structural advantages from previous paradigms will not only be durable, but even stronger going forward. — AS

Thin Is In — Thick clients were the dominant form of device throughout the PC and mobile era; in an AI world, however, thin clients make much more sense.
Shopify Earnings, Shopify's AI Advantages — Shopify is poised to be one of the biggest winners from AI; it would behoove investors to actually understand the businesses they are selling.
An Interview with Matthew Ball About Gaming and the Fight for Attention — An interview with Matthew Ball about the state of the video gaming industry in 2026, and why everything is a fight for attention.
The NBA's Problems Are Structural, Cultural and Fixable — What's driving NBA fans to apathy, how the league might find its way back, and whether Adam Silver has outlived his usefulness.
Back to the Future Curling, F1, and Gambling
South Africa's Ruined Synthetic Oil Giant
The Dunk Contest Preview America Needs, The Top Five Bandwagons for the Next Five Years, The NBA Fines the Jazz $500,000
The All-Star Game Was a Delight, Harrowing Field Reporting from the Dunk Contest, KD Burners Rise from the Ashes
The Roots of a Global Memory Shortage, Thick, Thin and Apple, Shopify is Fine, Actually


Everything you never wanted to know about visually-hidden

Nobody asked for it but nevertheless, I present to you my definitive "it depends" tome on visually-hidden web content. I'll probably make an amendment before you've finished reading. If you enjoy more questions than answers, buckle up! I'll start with the original premise, even though I stray off-topic on tangents and never recover.

I was nerd-sniped on Bluesky. Ana Tudor asked:

Is there still any point to most styles in visually hidden classes in '26? Any point to shrinking dimensions to and setting when to nothing via / reduces clickable area to nothing? And then no dimensions = no need for .
@anatudor.bsky.social

Ana proposed the following: is this enough in 2026? As an occasional purveyor of the class myself, the question wriggled its way into my brain. I felt compelled to investigate the whole ordeal. Spoiler: I do not have a satisfactory yes-or-no answer, but I do have a wall of text! I went so deep down the rabbit hole I must start with a table of contents:

Accessibility notice
Class walkthrough
Where it all began
Further adaptations
Minimum viable technique
Native visually-hidden
Zero dimensions
Position off-screen

I'm writing this based on the assumption that a visually-hidden class is considered acceptable for specific use cases. My final section on native visually-hidden addresses the bigger accessibility concerns. It's not easy to say where this technique is appropriate. It is generally agreed to be OK but a symptom of — and not a fix for — other design issues. Appropriate use cases are far fewer than you think. Skip to the history lesson if you're familiar.

.visually-hidden, .sr-only — there have been many variations on the class name. I've looked at popular implementations and compiled the kitchen sink version below (a reconstructed approximation appears at the end of this section). Please don't copy this as a golden sample. It merely encompasses all I've seen. There are variations on the selector using pseudo-classes that allow for focus. Think "skip to main content" links, for example.

What is the purpose of the class? The idea is to hide an element visually, but allow it to be discovered by assistive technology, screen readers being the primary example. The element must be removed from layout flow. It should leave no render artefacts and have no side effects. It does this whilst trying to avoid the bugs and quirks of web browsers. If this sounds and looks just a bit hacky to you, you have a high tolerance for hacks! It's a massive hack! How was this normalised? We'll find out later.

I'll whittle down the properties for those unfamiliar. Absolute positioning is vital to remove the element from layout flow; otherwise the position of surrounding elements will be affected by its presence. Clipping crops the visible area to nothing: clip remains as a fallback but has long been deprecated and is obsolete, and all modern browsers support clip-path. border: 0 and padding: 0 remove styles that may add layout dimensions. The 1px width and height effectively give the element zero dimensions; there are reasons for using 1px instead of 0, and for the negative margin, that I'll cover later. overflow: hidden is another property to ensure no visible pixels are drawn; I've seen the newer clip value used, but what difference that makes, if any, is unclear. Finally, white-space: nowrap was added to address text wrapping inside the square (I'll explain later).

So basically we have absolute positioning and a load of properties that attempted to make the element invisible. We cannot use display: none, visibility: hidden, or the hidden attribute because those remove elements from the accessibility tree.
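For reference, here is a representative reconstruction of the kind of kitchen-sink class walked through above. The property list follows the walkthrough, but the exact values and their ordering are assumptions based on common published implementations, not the author's verbatim sample:

```css
.visually-hidden {
  position: absolute;      /* remove the element from layout flow */
  width: 1px;              /* near-zero dimensions (1px rather than 0, due to old screen reader bugs) */
  height: 1px;
  padding: 0;              /* strip styles that could add layout dimensions */
  border: 0;
  margin: -1px;            /* negative margin: purpose debated, see the history below */
  overflow: hidden;        /* make sure no pixels are painted */
  clip: rect(0, 0, 0, 0);  /* deprecated fallback */
  clip-path: inset(50%);   /* modern replacement for clip */
  white-space: nowrap;     /* avoid the "smushed text" wrapping issue */
}
```

Everything after the absolute positioning exists purely to guarantee that nothing paints and nothing wraps.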
So the big question remains: why must we still 'zero' the dimensions? Why is clipping alone not sufficient? To make sense of this mystery I went back to the beginning. It was tricky to research this topic because older articles have been corrected with modern information. I recovered many details from the archives and mailing lists with the help of those involved. They're cited along the way.

Our journey begins in November 2004. A draft document titled "CSS Techniques for WCAG 2.0", edited by Wendy Chisholm and Becky Gibson, includes a technique for invisible labels.

While it is usually best to include visual labels for all form controls, there are situations where a visual label is not needed due to the surrounding textual description of the control and/or the content the control contains. Users of screen readers, however, need each form control to be explicitly labeled so the intent of the control is well understood when navigated to directly.
Creating Invisible labels for form elements (history)

The draft provided CSS for this (an illustrative reconstruction of the early options appears a little further down). Could this be the original class? My research jumped through decades, but eventually I found an email thread, "CSS and invisible labels for forms", on the W3C WAI mailing list. This was a month prior, preluding the WCAG draft. A different technique from Bob Easton was noted:

The beauty of this technique is that it enables using as much text as we feel appropriate, and the elements we feel appropriate. Imagine placing instructive text about the accessibility features of the page off left (as well as on the site's accessibility statement). Imagine interspersing "start of…" landmarks through a page with heading tags. Or, imagine parking full lists off left, lists of access keys, for example. Screen readers can easily collect all headings and read complete lists. Now, we have a made for screen reader technique that really works!
Screenreader Visibility - Bob Easton (2003)

Easton attributed both Choan Gálvez and Dave Shea for their contributions. In the same thread, Gez Lemon proposed a refinement to ensure that text doesn't bleed into the display area. Following up, Becky Gibson shared a test case covering the ideas. Lemon later published an article, "Invisible Form Prompts", about the WCAG plans, which attracted plenty of commenters including Bob Easton. The resulting WCAG draft guideline discussed both the zero-size and off-screen ideas.

Note that instead of using the nosize style described above, you could instead use position:absolute; and left:-200px; to position the label "offscreen". This technique works with the screen readers as well. Only position elements offscreen in the top or left direction, if you put an item off to the right or the bottom, many browsers will add scroll bars to allow the user to reach the content.
Creating Invisible labels for form elements

Two options were known and considered towards the end of 2004.
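To make those two early options concrete, here is an illustrative reconstruction of both. The exact declarations in the WCAG draft and the mailing list posts may have differed; the values below are assumptions based on the descriptions above:

```css
/* 1. The "nosize" idea: collapse the label itself.
      Assumes a block-level element so the dimensions apply.
      (Bohman later found 0x0 broke Window-Eyes, hence 1px.) */
.nosize {
  width: 1px;
  height: 1px;
  overflow: hidden;
}

/* 2. The off-screen idea: park the content far off to the left.
      Off-left, never off-right or off-bottom, so browsers don't add scroll bars. */
.offscreen {
  position: absolute;
  left: -200px;
}
```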
Why not both? Indeed, it appears Paul Bohman on the WebAIM mailing list suggested such a combination in February 2004. Bohman even discovered possibly the first zero width bug.

I originally recommended setting the height and width to 0 pixels. This works with JAWS and Home Page Reader. However, this does not work with Window Eyes. If you set the height and width to 1 pixel, then the technique works with all browsers and all three of the screen readers I tested.
Re: Hiding text using CSS - Paul Bohman

Later in May 2004, Bohman along with Shane Anderson published a paper on this technique. Citations within included Bob Easton and Tom Gilder. Aside note: other zero width bugs have been discovered since. Manuel Matuzović noted in 2023 that links in Safari were not focusable. The zero width story continues as recently as February 2026 (last week):

In browse mode in web browsers, NVDA no longer treats controls with 0 width or height as invisible. This may make it possible to access previously inaccessible "screen reader only" content on some websites.
NVDA 2026.1 Beta TWO now available - NV Access News

Digging further into WebAIM's email archive uncovered a 2003 thread in which Tom Gilder shared a class for skip navigation links. I found Gilder's blog in the web archives introducing this technique.

I thought I'd put down my "skip navigation" link method down in proper writing as people seem to like it (and it gives me something to write about!). Try moving through the links on this page using the keyboard - the first link should magically appear from thin air and allow you to quickly jump to the blog tools, which modern/visual/graphical/CSS-enabled browsers (someone really needs to come up with an acronym for that) should display to the left of the content.
Skip-a-dee-doo-dah - Tom Gilder

Gilder's post links to a Dave Shea post which in turn mentions the 2002 book "Building Accessible Websites" by Joe Clark. Chapter eight discusses the necessity of a "skip navigation" link due to table-based layout but advises: Keep them visible!

Well-intentioned developers who already use page anchors to skip navigation will go to the trouble to set the anchor text in the tiniest possible font in the same colour as the background, rendering it invisible to graphical browsers (unless you happen to pass the mouse over it and notice the cursor shape change).
Building Accessible Websites - 08. Navigation - Joe Clark

Clark expressed frustration over common tricks like the invisible pixel. It's clear no class existed when this was written. Choan Gálvez informed me that Eric Meyer would have the css-discuss mailing list archives. Eric kindly searched the backups but didn't find any earlier discussion. However, Eric did find a thread on the W3C mailing list from 1999 in which Ian Jacobs (IBM) discusses the accessibility of "skip navigation" links. The desire to visually hide "skip navigation" links was likely the main precursor to the early techniques. In fact, Bob Easton said as much:

As we move from tag soup to CSS governed design, we throw out the layout tables and we throw out the spacer images. Great! It feels wonderful to do that kind of house cleaning. So, what do we do with those "skip navigation" links that used to be attached to the invisible spacer images?
Screenreader Visibility - Bob Easton (2003)

I had originally missed that in my excitement at seeing the class. I reckon we've reached the source of the class, at least conceptually. Technically, the class emerged from several ideas rather than a "eureka" moment. Perhaps more can be gleaned from other CSS techniques, such as the desire to improve the accessibility of CSS image replacement.

Bob Easton retired in 2008 after a 40-year career at IBM. I reached out to Bob, who was surprised to learn this technique was still a topic today†. Bob emphasised the fact that it was always a clumsy workaround and something CSS probably wasn't intended to accommodate. I'll share more of Bob's thoughts later. († I might have overdone the enthusiasm.)

Let's take an intermission! My contact page is where you can send corrections, by the way :)

The class stabilised for a period. Visit 2006 in the Wayback Machine to see WebAIM's guide to invisible content — Paul Bohman's version is still recommended.
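Gilder's skip link that "magically appears from thin air" boils down to hiding the link off-screen until it receives keyboard focus. A rough sketch of that behaviour (not Gilder's original code; the values are assumptions):

```css
/* Hidden by default, parked off to the left */
.skip-link {
  position: absolute;
  left: -200em;
}

/* Revealed when a keyboard user tabs to it */
.skip-link:focus {
  left: 0;
}
```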
Moving forward to 2011, I found Jonathan Snook discussing the "clip method". Snook leads us to Drupal developer Jeff Burnz the previous year.

[…] we still have the big problem of the page "jump" issue if this is applied to a focusable element, such as a link, like skip navigation links. WebAim and a few others endorse using the LEFT property instead of TOP, but this no go for Drupal because of major pain-in-the-butt issues with RTL. In early May 2010 I was getting pretty frustrated with this issue so I pulled out a big HTML reference and started scanning through it for any, and I mean ANY property I might have overlooked that could possible be used to solve this thorny issue. It was then I recalled using clip on a recent project so I looked up its values and yes, it can have 0 as a value.
Using CSS clip as an Accessible Method of Hiding Content - Jeff Burnz

It would seem Burnz discovered the technique independently and was probably the first to write about it. Burnz also notes a right-to-left (RTL) issue. This could explain why pushing content off-screen fell out of fashion.

2010 also saw the arrival of HTML5 Boilerplate along with issue #194, in which Jonathan Neal plays a key role in the discussion and comments:

If we want to correct for every seemingly-reasonable possibility of overflow in every browser then we may want to consider [code below]

This was their final decision (I've trimmed it for clarity). It is very close to what we have now, no surprise since HTML5 Boilerplate was extremely popular. I'm leaning towards concluding that the additional properties are really just there for the "possibility" of pixels escaping containment as much as fixing any identified problem.

Thierry Koblentz covered the state of affairs in 2012, noting that "Webkit, Opera and to some extent IE do not play ball with [clip]". Koblentz prophesied:

I wrote the declarations in the previous rule in a particular order because if one day clip works as everyone would expect, then we could drop all declarations after clip, and go back to the original.
Clip your hidden content for better accessibility - Thierry Koblentz

Sound familiar? With those browsers obsolete, and if clip-path behaves itself, can the other properties be removed? Well, we have 14 years of new bugs and features to consider first.

In 2016, J. Renée Beach published "Beware smushed off-screen accessible text". This appears to be the origin of white-space: nowrap (as demonstrated by Vispero).

Over a few sessions, Matt mentioned that the string of text "Show more reactions" was being smushed together and read as "Showmorereactions".

Beach's class did not include the kitchen sink. The addition of white-space: nowrap became standard alongside everything else. Aside note: the origin of the negative margin remains elusive. One Bootstrap issue shows it was rediscovered in 2018 to fix a browser bug. However, another HTML5 Boilerplate issue dated 2017 suggests negative margin broke reading order. Josh Comeau shared a React component in 2024 without margin. One of many examples showing that it has come in and out of fashion.

We started with WCAG so let's end there. The latest WCAG technique for "Using CSS to hide a portion of the link text" provides the code for it. Circa 2020 the clip-path property was added as browser support increased and clip became deprecated. An obvious change I'm not sure warrants investigation (although someone had to be first!)

That brings us back to what we have today. Are you still with me? As we've seen, many of the properties were thrown in for good measure. They exist to ensure absolutely no pixels are painted. They were adapted over the years to avoid various bugs, quirks, and edge cases. How many such decisions are now irrelevant?
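To make Koblentz's ordering point concrete, here is an illustrative rule in that spirit. The values are assumed and it is not his exact code; the idea is simply that, if clipping reliably removed every painted pixel, everything after the clip declarations would, in theory, be redundant:

```css
/* Illustrative only: the "belt and braces" declarations come last */
.visuallyhidden {
  position: absolute;
  clip: rect(1px, 1px, 1px, 1px); /* deprecated, kept as a fallback */
  clip-path: inset(50%);          /* the modern equivalent */
  /* If clipping behaved perfectly everywhere, the rest could in theory be dropped: */
  width: 1px;
  height: 1px;
  overflow: hidden;
  white-space: nowrap;            /* the later "smushed text" fix */
}
```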
This is a classic Chesterton's Fence scenario: do not remove a fence until you know why it was put up in the first place. Well, we kinda know why, but the specifics are practically folklore at this point. Despite all that research, can we say for sure if any "why" is still relevant?

Back to Ana Tudor's suggestion. How do we know for sure? The only way is extensive testing. Unfortunately, I have neither the time nor the skill to perform that adequately here. There is at least one concern with Ana's proposed class: Curtis Wilcox noted that in Safari the focus ring behaves differently.

Other minimum viable ideas have been presented before. Scott O'Hara proposed a different two-liner using transform: scale(0).

JAWS, Narrator, NVDA with Edge all seem to behave just fine. As do Firefox with JAWS and NVDA, and Safari on macOS with VoiceOver. Seems also fine with iOS VO+Safari and Android TalkBack with Firefox or Chrome. In none of these cases do we get the odd focus rings that have occurred with other visually hidden styles, as the content is scaled down to zero. Also because not hacked into a 1px by 1px box, there's no text wrapping occurring, so no need to fix that issue.
transform scale(0) to visually hide content - Scott O'Hara

Sounds promising! It turns out Katrin Kampfrath had explored both minimum viable classes a couple of years ago, testing them against the traditional class.

I am missing the experience and moreover actual user feedback, however, i prefer the screen reader read cursor to stay roughly in the document flow. There are screen reader users who can see. I suppose, a jumping read cursor is a bit like a shifting layout.
Exploring the visually-hidden css - Katrin Kampfrath

Kampfrath's limited testing found the read cursor size differs for each class. The technique was favoured but caution is given.

Going back a few more years, Kitty Giraudel tested several ideas, concluding that the traditional class was still the most accessible for specific text use.

This technique should only be used to mask text. In other words, there shouldn't be any focusable element inside the hidden element. This could lead to annoying behaviours, like scrolling to an invisible element.
Hiding content responsibly - Kitty Giraudel

Zell Liew proposed a different idea in 2019.

Many developers voiced their opinions, concerns, and experiments over at Twitter. I wanted to share with you what I consolidated and learned.
A new (and easy) way to hide content accessibly - Zell Liew

Liew's idea was unfortunately torn asunder, although there are cases, like inclusively hiding checkboxes, where near-zero opacity is more accessible. I've started to go back in time again! I'm also starting to question whether this class is a good idea. Unless we are capable and prepared to thoroughly test across every combination of browser and assistive technology — and keep that information updated — it's impossible to recommend anything. This is impossible for developers! Why can't browser vendors solve this natively?

Once you've written 3000 words on a twenty year old CSS hack you start to question why it hasn't been baked into web standards by now. Ben Myers wrote "The Web Needs a Native .visually-hidden", proposing ideas from HTML attributes to CSS properties. Scott O'Hara responded noting larger accessibility issues that are not so easily handled. O'Hara concludes:

Introducing a native mechanism to save developers the trouble of having to use a wildly available CSS ruleset doesn't solve any of those underlying issues. It just further pushes them under the rug.
Visually hidden content is a hack that needs to be resolved, not enshrined - Scott O'Hara
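For reference, the two "minimum viable" directions discussed in this section look roughly like this. Both are sketches based on the descriptions above, not the original posts' exact code (Ana Tudor's precise proposal isn't reproduced here), and neither should be adopted without the broad assistive technology testing the article calls for:

```css
/* Direction 1: crop only, in the spirit of Ana Tudor's question */
.vh-clip {
  position: absolute;
  clip-path: inset(50%);
}

/* Direction 2: scale to zero, as in Scott O'Hara's two-liner */
.vh-scale {
  position: absolute;
  transform: scale(0);
}
```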
Sara Soueidan had floated the topic to the CSS working group back in 2016. Soueidan closed the issue in 2025, coming to a similar conclusion.

I've been teaching accessibility for a little less than a decade now and if there's one thing I learned is that developers will resort to using utility to do things that are more often than not just bad design decisions. Yes, there are valid and important use cases. But I agree with all of @scottaohara's points, and most importantly I agree that we need to fix the underlying issues instead of standardizing a technique that is guaranteed to be overused and misused even more once it gets easier to use.
csswg-drafts comment - Sara Soueidan

Adrian Roselli has a blog post listing priorities for assigning an accessible name to a control. Like O'Hara and Soueidan, Roselli recognises there is no silver bullet.

Hidden text is also used too casually to provide information for just screen reader users, creating overly-verbose content. For sighted screen reader users, it can be a frustrating experience to not be able to find what the screen reader is speaking, potentially causing the user to get lost on the page while visually hunting for it.
My Priority of Methods for Labeling a Control - Adrian Roselli

In short, many believe that a native visually-hidden would do more harm than good. The use cases are far more nuanced and context sensitive than developers realise. It's often a half-fix for a problem that can be avoided with better design. I'm torn on whether I agree that it's ultimately a bad idea. A native version would give software an opportunity to understand the developer's intent and define how "visually hidden" works in practice. It would be a pragmatic addition. The technique has persisted for over two decades and is still mentioned by WCAG. Yet it remains hacks upon hacks! How has it survived for so long? Is that a failure of developers, or a failure of the web platform?

The web is overrun with inaccessible div soup. That is inexcusable. For the rest of us who care about accessibility — who try our best — I can't help but feel the web platform has let us down. We shouldn't be perilously navigating code hacks, conflicting advice, and half-supported standards. We need more energy and money dedicated to accessibility. Not all problems can be solved with money. But what of the thousands of unpaid hours, whether volunteered or solicited, from those seeking to improve the web? I risk spiralling into a rant about browser vendors' financial incentives, so let's wrap up!

I'll end by quoting Bob Easton from our email conversation:

From my early days in web development, I came to the belief that semantic HTML, combined with faultless keyboard navigation were the essentials for blind users. Experience with screen reader users bears that out. Where they might occasionally get tripped up is due to developers who are more interested in appearance than good structural practices. The use cases for hidden content are very few, such as hidden information about where a search field is, when an appearance-centric developer decided to present a search field with no visual label, just a cute unlabeled image of a magnifying glass. […] The people promoting hidden information are either deficient in using good structural practices, or not experienced with tools used by people they want to help.

Bob ended with: You can't go wrong with well crafted, semantically accurate structure.
Ain't that the truth. Thanks for reading!

Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.


How to run Claude Code in a Tmux popup window with persistent sessions

Hey, what's up? It's Takuya. I've been using Claude Code in my terminal workflow. At first, I was running it at the right side of my terminal using tmux, but I found it not useful because it was too narrow to display messages and diffs. I often had to press the zoom keybinding to maximize the pane, which was painful. Next, I started using popup windows to run Claude Code — press a keybinding, get a Claude Code session, dismiss it, and pick up right where you left off. In this article, I'd like to share how to configure tmux to accomplish it.

You can display popup windows with the display-popup command, which is great for quick-access tools. I've been using it to quickly check git status with LazyGit, bound to a single key under my prefix. This works perfectly for LazyGit because it's a short-lived process — you open it, stage some changes, commit, and close it.

However, there is a problem with running Claude Code (or any other AI tools) in tmux popup windows. You want to keep a conversation going across multiple interactions. If you bind Claude Code the same way, with a popup that runs it directly, you'll find that closing the popup also kills the Claude Code process. There's no way to dismiss the popup without quitting the session. You'd have to start fresh every time, which defeats the purpose.

The trick is to run Claude Code in a separate tmux session, and then attach to that session inside the popup, which means that you are going to use nested tmux sessions. When you close the popup, the session keeps running in the background. Here's the full configuration (a reconstructed sketch appears at the end of this post). Let's break down what this does:

1. Generate a unique session name from the working directory. This takes the current pane's working directory, hashes it with MD5, and uses the first 8 characters as a session identifier, so you get a short, directory-specific session name. The key insight here is that each directory gets its own Claude Code session.

2. Create the session if it doesn't already exist. The session-existence check prevents creating duplicate sessions. If a session for this directory already exists, it skips creation entirely. Otherwise, it creates a new detached session (-d) with the working directory set to your current path (-c), running Claude Code as the initial command.

3. Attach to the session in a popup. This opens an 80%-sized popup that attaches to the background session. You can change the size as you like. When you close the popup (with your detach keybinding), the session stays alive. Yippee!

My dotfiles are available here. That's it. A very simple and intuitive hack. I hope it's helpful for your AI coding workflow :) Have a productive day!
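As a concrete reference, here is a sketch of the kind of configuration described above. The key bindings, the 80% popup size, and the claude command name are assumptions, and md5sum should be swapped for md5 -q on macOS; Takuya's actual dotfiles may differ.

```tmux
# Short-lived tool in a popup: fine for the process to die when the popup closes.
bind g display-popup -d "#{pane_current_path}" -E -w 80% -h 80% "lazygit"

# Claude Code: create (or reuse) a detached per-directory session, then attach to
# it inside the popup, so dismissing the popup leaves the session running.
bind C display-popup -d "#{pane_current_path}" -E -w 80% -h 80% 'session="claude-$(pwd | md5sum | cut -c1-8)"; tmux has-session -t "$session" 2>/dev/null || tmux new-session -d -s "$session" -c "$PWD" claude; exec env TMUX= tmux attach-session -t "$session"'
```

Pressing the binding again from the same directory finds the existing session via has-session and simply re-attaches, which is what makes the conversation persist across popups; env TMUX= clears tmux's nesting guard so the inner attach is allowed.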


Goodbye Software Guilds, Hello Software Factories

This article was originally published on X.

You would be forgiven for believing programming is a white collar job. In fact, given the allure of joining a startup or FAANG and reaping generational wealth, all while seated in front of a keyboard wearing your favorite hoodie, programming may very well be considered by most to be the most white collar of white collar jobs. But that's confusing the profession with the job. The profession of programming is cushy to be sure (I've been one my entire life and have zero calluses to prove it), but the job itself has historically more closely resembled that of an electrician or plumber than, say, an accountant or doctor. If you've never worked alongside a team of programmers then this assertion probably sounds absurd, but indulge me for a moment.

On any given day, programmers will read and write specifications, patch systems, and hold coordination meetings, often called standups. Companies hire programmers as apprentices, and experienced programmers sometimes refer to themselves as craftsmen. Knowledge is often passed along via a practice known as pair programming, in which an experienced developer sits next to a less experienced colleague in order to pass along institutional knowledge and hard-won techniques. Best practices, gotchas, and tips are whispered in hallways and over after-hours drinks. In other words, a guild.

Guilds have existed for hundreds of years, and historically, if you were a blacksmith, weaver, or another type of artisan, you probably belonged to one. New members entered as apprentices, progressed to journeymen, and eventually, if they stuck with it long enough, were deemed masters of their craft. Guild members enforced standards, created and codified new techniques, and coordinated learning. And if you were part of the software profession at any point in the last 50 years, that is precisely what you were participating in.

But software guilds are now dead. They are being replaced by software factories, and with them both the profession and job of software developer are being transformed into something entirely new. This new type of factory consists of machines that work together to produce not widgets, cars, or airplanes, but code. These machines are what we currently call agents, although I suspect we have not yet settled on the final terminology, let alone on how this factory will ultimately operate. Regardless, early indications suggest this transformation is already underway. Properly tuned and maintained, the factory can produce code at a speed and with a quality no competing guild could match.

What's even more fascinating about this software factory is its input. The inputs come directly from the nontechnical members of the organization, notably subject matter experts. Relieved of the need to translate their ideas through the guild, these individuals can now use AI-powered coding agents (Claude Code seems to be the favorite at our firm) to build useful business applications in less time than it once took just to schedule a requirements meeting. In other words, for many use cases, the translation layer between subject matter expert and machine has evaporated.

If this sounds implausible, you probably have not watched a nontechnical person use a tool like Claude Code. As one of many examples I could cite, earlier this week my colleague and BeePurple CEO Stevee Danielle used Claude Code to build an application modeling SAMHSA Peer Support Certification standards across all 50 states. She went from idea to MVP in four hours.
Along the way, she imported data from every state and structured the application to address specific reporting gaps identified by industry leaders in published research. This is just one example; I could devote multiple articles to this sort of software, which is currently being built within Xenon's portfolio of companies.

So what will the guild members do? They will configure the factory so that people like Stevee can move code all the way to production. As factory technicians, they will tune the machines on the floor to ensure inputs are converted into reliable output, maximizing the velocity of code flowing from nontechnical team members through the assembly line. Each agent performs a critical function in the line: one codes, another handles QA, another generates documentation, another reviews pull requests, another deploys, and so on. They will work in unison, much like today's CI/CD pipelines, with one critical difference: AI, not guild members, will play the central role, not only executing each stage but continuously analyzing and improving the line as it runs.

Now I will say out loud the part everyone is probably thinking: this factory will eventually run with almost no technicians. At Adalo, where I serve as CTO, we are already seeing early glimpses of this future. In recent weeks we built an agent that has been running nearly around the clock in either bug fixing or feature creation mode. When operating in the former mode, we do not tell it which bugs to fix. Read that sentence again. It finds, triages, fixes, and verifies bugs on its own. After each run, it updates a persistent memory with lessons learned, optimization ideas, and other improvements so that it can operate even more efficiently the next time. Watching it work has been described by me and my colleagues as mesmerizing.

The very idea of this becoming reality is exciting, terrifying, and mystifying. As a lifelong programming nerd and guild member, what is happening right now is the most incredible thing I have ever seen, and I have been leading efforts across the portfolio to ensure these factories are configured to meet this new reality. I am convinced that much of society has not yet begun to grasp the magnitude of what is happening. One way or the other, the factory era of software has begun.

The author Jason Gilmore regularly advises investment banks, universities, and other organizations on AI's impact on software development processes. Get in touch with Jason at [email protected].

Xe Iaso Today

Life Update: On medical leave

Hey all, I hope you're doing well. I'm going to be on medical leave until early April. If you are a sponsor, then you can join the Discord for me to post occasional updates in real time. I'm gonna be in the hospital for at least a week as of the day of this post.

I have a bunch of things queued up both at work and on this blog. Please do share them when you see them cross your feeds; I hope that they'll be as useful as my posts normally are. I'm under a fair bit of stress leading up to this medical leave, and I'm hoping that my usual style shines through as much as I hope it does. Focusing on writing is hard when the Big Anxiety is hitting as hard as it is.

Don't worry about me. I want you to be happy for me. This is a very good medical leave. I'm not going to go into specifics for privacy reasons, but know that this is something I've wanted to do for over a decade but haven't gotten the chance due to the timing never working out.

I'll see you on the other side. Stay safe out there.


Maybe use Plain

When I wrote about Help Scout, much of my praise was appositional. They were the one tool I saw that did not aggressively shoehorn you into using them as a CRM to the detriment of the core product itself. This is still true. They launched a redesign that I personally don't love, but purely on subjective grounds. And there's still a fairly reasonable option for — and I mean this in a non-derogatory way — baby's first support system.

I will also call out: if you want something even simpler, there's Jelly, an app that leans fully into the shared inbox side of things. It is less featureful than Help Scout, but with a better design and lower price point. If I was starting a new app today, this is what I would reach for first.

But nowadays I use Plain. Plain will not solve all of your problems overnight. It's only a marginally more expensive product — $35 per user per month compared to Help Scout's $25 per user per month. The built-in Linear integration is worth its weight in gold if you're already using Linear, and its customer cards (the equivalent of Help Scout's sidebar widgets) are marginally more ergonomic to work with. The biggest downside that we've had thus far is reliability — less in a cosmic or existential sense and more that Plain has had a disquieting number of small-potatoes incidents over the past three to six months.

My personal flowchart for what service to use in this genre is something like:

1. Start with Jelly.
2. If I need something more than that, see if anyone else on the team has specific experience that they care a lot about, because half the game here is in muscle memory rather than functionality.
3. If not, use Plain.

But the biggest thing to do is take the tooling and gravity of support seriously as early as you can.


An incomplete list of things I don’t have

Hair. A nice beard. Savings. Debt. A house. Subscriptions to video streaming services. A piece of forest. Kids. A wife. A husband. Hands without scars. Arms without scars. Legs without scars. A face without scars. A monthly salary. Paid vacations. Happiness. Things I’m proud of. A normal dog. Social media profiles. Investments. Plans for the future. Plans for the present. Plans for the past. A camera. Concrete goals. Wisdom. Ai bots. Ai companions. Ai slaves. Fancy clothes. Colognes. Fame (although I am quite hungry). Faith. Horses in the back. 99 problems. Enlightenment. A daily routine. Willingness to write long posts. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs

Kev Quirk Yesterday

Kids and Smartphones

My oldest son is 11. He'll be starting high school in September, and my wife and I want a way of keeping in touch with him as he'll be making his own way to school. The default here would be to get him a phone, but like most 11-year-old boys, he's an idiot and we don't trust him with one. So, as a test we've lent him an old phone of mine to see if he can be trusted with one under some limitations:

1. The phone never leaves the kitchen.
2. He only gets an hour of screen time a day between 09:00 and 19:00.
3. Mum and I can vet everything he's been doing on it.

And it turns out, dear reader, that rule #1 was the most important rule we could have set. He's the last of his friendship group to get a phone, so they all have WhatsApp groups with one another. The problem is those other kids are never off their phones, and my son having these kinds of rules in place makes him weird. But I don't care.

He regularly has missed calls on his phone from midnight from his classmates. These aren't just calls to him either. They're group calls to the entire class. Like, what the fuck are these parents doing letting their kids have phones in their bedrooms and giving them free rein? It beggars belief and confirms every concern I had about giving him a phone. I've said it before, and I'll say it again, we need a smartphone for young people.

Lucky for us he's generally a good little sausage, and so far there's been no need for us to take his phone, reprimand him, or correct his behaviour, which I'm very proud of. I just hope it sticks. It's only been a week...


Designed to be specialists

All industries and disciplines, over time, direct people into greater and greater specialization. Those who have been working on the web since the beginning have been able to see this trend first hand, as the practices and systems grew ever more complicated and it became impossible for one person to hold it all in their head. We sometimes talk of this level of increasing complexity and specialization as inevitable or natural, when it's neither. Moreover, like many things involving work, specialization benefits some people and immiserates others.

[There is an] extreme human and cultural misery to which not only the industry of advanced capitalism but above all its institutions, its education and its culture, have reduced the technical worker. This education, in its efforts to adapt the worker to his task in the shortest possible time, has given him the capacity for a minimum of independent activity. Out of fear of creating men [sic] who by virtue of the too "rich" development of their abilities would refuse to submit to the discipline of a too narrow task and to the industrial hierarchy, the effort has been made to stunt them from the beginning: they were designed to be competent but limited, active but docile, intelligent but ignorant outside of anything but their function, incapable of having a horizon beyond that of their task. In short, they were designed to be specialists.

Impossible not to think here of the rise of labor unions in the tech industry and the subsequent rapid (and surely coincidental) deployment of so-called AI which—unlike nearly every prior technological development in software—arrived with mandates for its use and threats of punishment for the noncompliant.

Elsewhere, Gorz talks of the trend of workers being reduced to "supervisors" of automated systems that are doing the work for them. But simply watching work happen, without any of the creative, autonomous activity that would occur if they were doing the work themselves, gives rise to a degree of boredom and stupefaction that can be physically painful and spiritually debilitating. Anyone who has experienced the pleasure of creative work is likely to greatly resist that reduction; better to create workers who have never known such things.

There's some use in distinguishing here between the worker who, having learned the skills of writing software over many years, now turns to so-called AI to assist her in that task; and the worker who will follow her some years hence and may never learn those skills, but will know only the work of supervision. The former, elder worker may find some interest or curiosity in applying her knowledge to this new technology, especially as the modes and methods for doing so are still being developed. But what of the worker who begins their work a decade from now, who has been specialized to do nothing more than ask for something? What will she know beyond that menial, dispiriting little task? What kind of people are we designing now?

View this post on the web, subscribe to the newsletter, or reply via email.

マリウス Yesterday

Hold on to Your Hardware

Tl;dr at the end. For the better part of two decades, consumers lived in a golden age of tech. Memory got cheaper, storage increased in capacity and hardware got faster and absurdly affordable. Upgrades were routine, almost casual. If you needed more RAM, a bigger SSD, or a faster CPU or GPU, you barely had to wait a week for a discount offer and you moved on with your life. This era is ending. What’s forming now isn’t just another pricing cycle or a short-term shortage, it is a structural shift in the hardware industry that paints a deeply grim outlook for consumers. Today, I am urging you to hold on to your hardware, as you may not be able to replace it affordably in the future. While I have always been a stark critic of today’s consumer industry , as well as the ideas behind it , and a strong proponent of buying it for life (meaning, investing into durable, repairable, quality products) the industry’s shift has nothing to do with the protection of valuable resources or the environment, but is instead a move towards a trajectory that has the potential to erode technological self-sufficiency and independence for people all over the world. In recent months the buzzword RAM-pocalypse has started popping up across tech journalism and enthusiast circles. It’s an intentionally dramatic term that describes the sharp increase in RAM prices, primarily driven by high demand from data centers and “AI” technology, which most people had considered a mere blip in the market. This presumed temporary blip , however, turned out to be a lot more than just that, with one manufacturer after the other openly stating that prices will continue to rise, with suppliers forecasting shortages of specific components that could last well beyond 2028, and with key players like Western Digital and Micron either completely disregarding or even exiting the consumer market altogether. Note: Micron wasn’t just another supplier , but one of the three major players directly serving consumers with reasonably priced, widely available RAM and SSDs. Its departure leaves the consumer memory market effectively in the hands of only two companies: Samsung and SK Hynix . This duopoly certainly doesn’t compete on your wallet’s behalf, and it definitely wouldn’t be the first time it would optimize for margins . The RAM-pocalypse isn’t just a temporary headline anymore, but has seemingly become long-term reality. However, RAM and memory in general is only the beginning. The main reason for the shortages and hence the increased prices is data center demand, specifically from “AI” companies. These data centers require mind-boggling amounts of hardware, specifically RAM, storage drives and GPUs, which in turn are RAM-heavy graphics units for “AI” workloads. The enterprise demand for specific components simply outpaces the current global production capacity, and outbids the comparatively poor consumer market. For example, OpenAI ’s Stargate project alone reportedly requires approximately 900,000 DRAM wafers per month , which could account for roughly 40% of current global DRAM output. Other big tech giants including Google , Amazon , Microsoft , and Meta have placed open-ended orders with memory suppliers, accepting as much supply as available. The existing and future data centers for/of these companies are expected to consume 70% of all memory chips produced in 2026. However, memory is just the first domino. 
RAM and SSDs are where the pain is most visible today, but rest assured that the same forces are quietly reshaping all aspects of consumer hardware. One of the most immediate and tangible consequences of this broader supply-chain realignment is a wave of sharp, cascading price hikes across consumer electronics, with LPDDR memory standing out as an early pressure point that most consumers didn't recognize until it was already unavoidable. LPDDR is used in smartphones, laptops, tablets, handheld consoles, routers, and increasingly even low-power PCs. It sits at the intersection of consumer demand and enterprise prioritization, making it uniquely vulnerable when manufacturers reallocate capacity toward "AI" accelerators, servers, and data-center-grade memory, where margins are higher and contracts are long-term. As fabs shift production toward HBM and server DRAM, as well as GPU wafers, consumer hardware production quietly becomes non-essential, tightening supply just as devices become more power- and memory-hungry, all while continuing on their path to remain frustratingly unserviceable and un-upgradable.

The result is a ripple effect, in which device makers pay more for chips and memory and pass those costs on through higher retail prices, cut base configurations to preserve margins, or lock features behind premium tiers. At the same time, consumers lose the ability to compensate by upgrading later, because most components these days, like LPDDR, are soldered down by design. This is further amplified by scarcity, as even modest supply disruptions can spike prices disproportionately in a market where just a few suppliers dominate, turning what should be incremental cost increases into sudden jumps that affect entire product categories at once. In practice, this means that phones, ultrabooks, and embedded devices are becoming more expensive overnight, not because of new features, but because the invisible silicon inside them has quietly become a contested resource in a world that no longer builds hardware primarily for consumers.

In late January 2026, the Western Digital CEO confirmed during an earnings call that the company's entire HDD production capacity for calendar year 2026 is already sold out. Let that sink in for a moment. Q1 hasn't even ended and a major hard drive manufacturer has zero remaining capacity for the year. Firm purchase orders are in place with its top customers, and long-term agreements already extend into 2027 and 2028. Consumer revenue now accounts for just 5% of Western Digital's total sales, while cloud and enterprise clients make up 89%. The company has, for all practical purposes, stopped being a consumer storage company.

And Western Digital is not alone. Kioxia, one of the world's largest NAND flash manufacturers, admitted that its entire 2026 production volume is already in a "sold out" state, with the company expecting tight supply to persist through at least 2027 and long-term customers facing 30% or higher year-on-year price increases. Adding to this, the Silicon Motion CEO put it bluntly during a recent earnings call: "We're facing what has never happened before: HDD, DRAM, HBM, NAND… all in severe shortage in 2026." In addition, the Phison CEO has gone even further, warning that the NAND shortage could persist until 2030, and that it risks the "destruction" of entire segments of the consumer electronics industry.
He also noted that factories are now demanding prepayment for capacity three years in advance , an unprecedented practice that effectively locks out smaller players. The collateral damage of this can already be felt, and it’s significant. For example Valve confirmed that the Steam Deck OLED is now out of stock intermittently in multiple regions “due to memory and storage shortages” . All models are currently unavailable in the US and Canada, the cheaper LCD model has been discontinued entirely, and there is no timeline for when supply will return to normal. Valve has also been forced to delay the pricing and launch details for its upcoming Steam Machine console and Steam Frame VR headset, directly citing memory and storage shortages. At the same time, Sony is considering delaying the PlayStation 6 to 2028 or even 2029, and Nintendo is reportedly contemplating a price increase for the Switch 2 , less than a year after its launch. Both decisions are seemingly driven by the same memory supply constraints. Meanwhile, Microsoft has already raised prices on the Xbox . Now you might think that everything so far is about GPUs and other gaming-related hardware, but that couldn’t be further from the truth. General computing, like the Raspberry Pi is not immune to any of this either. The Raspberry Pi Foundation has been forced to raise prices twice in three months, with the flagship Raspberry Pi 5 (16GB) jumping from $120 at launch to $205 as of February 2026, a 70% increase driven entirely by LPDDR4 memory costs. What was once a symbol of affordable computing is rapidly being priced out of reach for the educational and hobbyist communities it was designed to serve. HP, on the other hand, seems to have already prepared for the hardware shortage by launching a laptop subscription service where you pay a monthly fee to use a laptop but never own it , no matter how long you subscribe. While HP frames this as a convenience, the timing, right in the middle of a hardware affordability crisis, makes it feel a lot more like a preview of a rented compute future. But more on that in a second. “But we’ve seen price spikes before, due to crypto booms, pandemic shortages, factory floods and fires!” , you might say. And while we did live through those crises, things eventually eased when bubbles popped and markets or supply chains recovered. The current situation, however, doesn’t appear to be going away anytime soon, as it looks like the industry’s priorities have fundamentally changed . These days, the biggest customers are not gamers, creators, PC builders or even crypto miners anymore. Today, it’s hyperscalers . Companies that use hardware for “AI” training clusters, cloud providers, enterprise data centers, as well as governments and defense contractors. Compared to these hyperscalers consumers are small fish in a big pond. These buyers don’t care if RAM costs 20% more and neither do they wait for Black Friday deals. Instead, they sign contracts measured in exabytes and billions of dollars. With such clients lining up, the consumer market in contrast is suddenly an inconvenience for manufacturers. Why settle for smaller margins and deal with higher marketing and support costs, fragmented SKUs, price sensitivity and retail logistics headaches, when you can have behemoths throwing money at you? Why sell a $100 SSD to one consumer, when you can sell a whole rack of enterprise NVMe drives to a data center with circular virtually infinite money? Guaranteed volume, guaranteed profit, zero marketing. 
The industry has answered these questions loudly. All of this goes to show that the consumer market is not just deprioritized, but instead it is being starved . In fact, IDC has already warned that the PC market could shrink by up to 9% in 2026 due to skyrocketing memory prices, and has described the situation not as a cyclical shortage but as “a potentially permanent, strategic reallocation of the world’s silicon wafer capacity” . Leading PC OEMs including Lenovo , Dell , HP , Acer , and ASUS have all signaled 15-20% PC price increases for 2026, with some models seeing even steeper hikes. Framework , the repairable laptop company, has also been transparent about rising memory costs impacting its pricing. And analyst Jukan Choi recently revised his shortage timeline estimate , noting that DRAM production capacity is expected to grow at just 4.8% annually through 2030, with even that incremental capacity concentrated on HBM rather than consumer memory. TrendForce ’s latest forecast projects DRAM contract prices rising by 90-95% quarter over quarter in Q1 2026. And that is not a typo. The price of hardware is one thing, but value-for-money is another aspect that appears to be only getting worse from here on. Already today consumer parts feel like cut-down versions of enterprise silicon. As “AI” accelerators and server chips dominate R&D budgets, consumer improvements will slow even further, or arrive at higher prices justified as premium features . This is true for CPUs and GPUs, and it will be equally true for motherboards, chipsets, power supplies, networking, etc. We will likely see fewer low-end options, more segmentation, artificial feature gating and generally higher baseline prices that, once established, won’t be coming back down again. As enterprise standards become the priority, consumer gear is becoming an afterthought that is being rebadged, overpriced, and poorly supported. The uncomfortable truth is that the consumer hardware market is no longer the center of gravity, as we all were able to see at this year’s CES . It’s orbiting something much larger, and none of this is accidental. The industry isn’t failing, it’s succeeding, just not for you . And to be fair, from a corporate standpoint, this pivot makes perfect sense. “AI” and enterprise customers are rewriting revenue charts, all while consumers continue to be noisy, demanding, and comparatively poor. It is pretty clear that consumer hardware is becoming a second-class citizen, which means that the machines we already own are more valuable than we might be thinking right now. “But what does the industry think the future will look like if nobody can afford new hardware?” , you might be asking. There is a darker, conspiratorial interpretation of today’s hardware trends that reads less like market economics and more like a rehearsal for a managed future. Businesses, having discovered that ownership is inefficient and obedience is profitable, are quietly steering society toward a world where no one owns compute at all, where hardware exists only as an abstraction rented back to the public through virtual servers, SaaS subscriptions, and metered experiences , and where digital sovereignty, that anyone with a PC tower under their desk once had, becomes an outdated, eccentric, and even suspicious concept. 
… a morning in said future, where an ordinary citizen wakes up, taps their terminal, which is a sealed device without ports, storage, and sophisticated local execution capabilities, and logs into their Personal Compute Allocation . This bundle of cloud CPU minutes, RAM credits, and storage tokens leased from a conglomerate whose logo has quietly replaced the word “computer” in everyday speech, just like “to search” has made way for “to google” , has removed the concept of installing software, because software no longer exists as a thing , but only as a service tier in which every task routes through servers owned by entities. Entities that insist that this is all for the planet . Entities that outlawed consumer hardware years ago under the banner of environmental protectionism , citing e-waste statistics, carbon budgets , and unsafe unregulated silicon , while conveniently ignoring that the data centers humming beyond the city limits burn more power in an hour than the old neighborhood ever did in a decade. In this world, the ordinary citizen remembers their parents’ dusty Personal Computer , locked away in a storage unit like contraband. A machine that once ran freely, offline if it wanted, immune to arbitrary account suspensions and pricing changes. As they go about their day, paying a micro-fee to open a document, losing access to their own photos because a subscription lapsed, watching a warning banner appear when they type something that violates the ever evolving terms-of-service, and shouting “McDonald’s!” to skip the otherwise unskippable ads within every other app they open, they begin to understand that the true crime of consumer hardware wasn’t primarily pollution but independence. They realize that owning a machine meant owning the means of computation , and that by centralizing hardware under the guise of efficiency, safety, and sustainability, society traded resilience for convenience and autonomy for comfort. In this dyst… utopia , nothing ever breaks because nothing is yours , nothing is repairable because nothing is physical, and nothing is private because everything runs somewhere else , on someone else’s computer . The quiet moral, felt when the network briefly stutters and the world freezes, is that keeping old hardware alive was never nostalgia or paranoia, but a small, stubborn act of digital self-defense; A refusal to accept that the future must be rented, permissioned, and revocable at any moment. If you think that dystopian “rented compute over owned hardware” future could never happen, think again . In fact, you’re already likely renting rather than owning in many different areas. Your means of communication are run by Meta , your music is provided by Spotify , your movies are streamed from Netflix , your data is stored in Google ’s data centers and your office suite runs on Microsoft ’s cloud. Maybe even your car is leased instead of owned, and you pay a monthly premium for seat heating or sElF-dRiViNg , whatever that means. After all, the average Gen Z and Millennial US consumer today apparently has 8.2 subscriptions , not including their DaIlY aVoCaDo ToAsTs and StArBuCkS cHoCoLate ChIp LaTtEs that the same Boomers responsible for the current (and past) economic crises love to dunk on. Besides, look no further than what’s already happening in for example China, a country that manufactures massive amounts of the world’s sought-after hardware yet faces restrictions on buying that very hardware. 
In recent years, a complex web of export controls and chip bans has put a spotlight on how hardware can become a geopolitical bargaining chip rather than a consumer good. For example, export controls imposed by the United States in recent years barred Nvidia from selling many of its high-performance GPUs into China without special licenses, significantly reducing legal access to cutting-edge compute inside the country. Meanwhile, enforcement efforts have repeatedly busted smuggling operations moving prohibited Nvidia chips into Chinese territory through Southeast Asian hubs, with over $1 billion worth of banned GPUs reportedly moving through gray markets, even as official channels remain restricted. Coverage by outlets such as Bloomberg, as well as actual investigative journalism like Gamers Nexus, has documented these black-market flows and the lengths to which both sides go to enforce or evade restrictions, including smuggling networks and increased regulatory scrutiny. On top of this, Chinese regulators have at times restricted domestic tech firms from buying specific Nvidia models, further underscoring how government policy can override basic market access for hardware, even in the country where much of that hardware is manufactured. While some of these export rules have seen partial reversals or regulatory shifts, the overall situation highlights a world in which hardware access is increasingly determined by politics, security regimes, and corporate strategy, not by consumer demand. This should serve as a cautionary tale for anyone who thinks owning their own machines won't matter in the years to come.

In an ironic twist, however, one of the few potential sources of relief may, in fact, come from China. Two Chinese manufacturers, CXMT (ChangXin Memory Technologies) and YMTC (Yangtze Memory Technologies), are embarking on their most aggressive capacity expansions ever, viewing the global shortage as a golden opportunity to close the gap with the incumbent big three (Samsung, SK Hynix, Micron). CXMT is now the world's fourth-largest DRAM maker by production volume, holding roughly 10-11% of global wafer capacity, and is building a massive new DRAM facility in Shanghai expected to be two to three times larger than its existing Hefei headquarters, with volume production targeted for 2027. The company is also preparing a $4.2 billion IPO on Shanghai's STAR Market to fund further expansion and has reportedly delivered HBM3 samples to domestic customers including Huawei. YMTC, traditionally a NAND flash supplier, is constructing a third fab in Wuhan with roughly half of its capacity dedicated to DRAM, and has reached 270-layer 3D NAND capability, rapidly narrowing the gap with Samsung (286 layers) and SK Hynix (321 layers). Its NAND market share by shipments reached 13% in Q3 2025, close to Micron's 14%.

What's particularly notable is that major PC manufacturers are already turning to these suppliers. However, as mentioned before, with hardware having become a geopolitical topic, both companies face ongoing (US-imposed) restrictions; HP, for example, has indicated it would only use CXMT chips in devices for non-US markets. Nevertheless, for consumers worldwide the emergence of viable fourth and fifth players in the memory market represents the most tangible hope of eventually breaking the current supply stranglehold. Whether that relief arrives in time to prevent lasting damage to the consumer hardware ecosystem remains an open question, though.
Polymarket bet prediction: A non-zero percentage of people will confuse Yangtze Memory Technologies with the Haskell programming language.

The reason I'm writing all of this isn't to create panic, but to help put things into perspective. You don't need to scavenger-hunt for legacy parts in your local landfill (yet) or swear off upgrades forever, but you do need to recognize that the rules have changed. The market that once catered to enthusiasts and everyday users is turning its back. So take care of your hardware, stretch its lifespan, upgrade thoughtfully, and don't assume replacement will always be easy or affordable. That PC, laptop, NAS, or home server isn't disposable anymore. Clean it, maintain it, repaste it, replace fans and protect it, as it may need to last far longer than you originally planned.

Also, realize that the best time to upgrade your hardware was yesterday and that the second best time is now. If you can afford sensible upgrades, especially RAM and SSD capacity, it may be worth doing sooner rather than later. Not for performance, but for insurance, because the next time something fails, it might be unaffordable to replace, as the era of casual upgrades seems to be over. Five-year systems may become eight- or ten-year systems. Software bloat will hurt more and will require rethinking. Efficiency will matter again. And looking at it from a different angle, maybe that's a good thing.

Additionally, the assumption that prices will normalize again at some point is most likely a pipe dream. The old logic of "wait a year and it'll be cheaper" no longer applies when manufacturers are deliberately constraining supply. If you need a new device, buy it; if you don't, however, there is absolutely no need to keep spending money on the minor yearly refresh cycle, as its returns will only keep diminishing. And again, looking at it from a different angle, that is probably also a good thing.

Consumer hardware is heading toward a bleak future where owning powerful, affordable machines becomes harder or maybe even impossible, as manufacturers abandon everyday users to chase vastly more profitable data centers, "AI" firms, and enterprise clients. RAM and SSD price spikes, Micron's exit from the consumer market, and the resulting Samsung/SK Hynix duopoly are early warning signs of a broader shift that will eventually affect CPUs, GPUs, and the entire PC ecosystem. With large manufacturers having sold out their entire production capacity to hyperscalers for the rest of the year while simultaneously cutting consumer production by double-digit percentages, consumers will have to take a back seat. Already today consumer hardware is overpriced, out of stock, or even intentionally delayed due to supply issues. In addition, manufacturers are pivoting towards consumer hardware subscriptions, where you never own the hardware. In the most dystopian trajectory, consumers might not buy any hardware at all, with the exception of low-end thin clients that are merely interfaces, and will rent compute through cloud platforms, losing digital sovereignty in exchange for convenience. And despite all of this sounding like science fiction, there is already hard evidence that access to hardware can in fact be politically and economically revoked. Therefore I am urging you to maintain and upgrade wisely, and hold on to your existing hardware, because ownership may soon be a luxury rather than the norm.

Martin Fowler Yesterday

Fragments: February 19

I try to limit my time on stage these days, but one exception this year is at DDD Europe. I've been involved in Domain-Driven Design since its very earliest days, having the good fortune to be a sounding board for Eric Evans when he wrote his seminal book. It'll be fun to be around the folks who continue to develop these ideas, which I think will probably be even more important in the AI-enabled age.

❄                ❄                ❄                ❄                ❄

One of the dark sides of LLMs is that they can be both addictive and tiring to work with, which may mean we have to find a way to put a deliberate governor on our work. Steve Yegge posted a fine rant:

I see these frenzied AI-native startups as an army of a million hopeful prolecats, each with an invisible vampiric imp perched on their shoulder, drinking, draining. And the bosses have them too.

It's the usual Yegge stuff, far longer than it needs to be, but we don't care because the excessive loquaciousness is more than offset by entertainment value. The underlying point is deadly serious, raising the question of how many hours a human should spend driving The Genie. I've argued that AI has turned us all into Jeff Bezos, by automating the easy work and leaving us with all the difficult decisions, summaries, and problem-solving. I find that I am only really comfortable working at that pace for short bursts of a few hours once or occasionally twice a day, even with lots of practice.

So I guess what I'm trying to say is, the new workday should be three to four hours. For everyone. It may involve 8 hours of hanging out with people. But not doing this crazy vampire thing the whole time. That will kill people.

That reminds me of when I was studying for my "A" levels (age 17/18, for those outside the UK). Teachers told us that we could do a maximum of 3-4 hours of revision; after that it became counter-productive. I've since noticed that I can only do decent writing for a similar length of time before some kind of brain fog sets in.

There's also a great post on this topic from Siddhant Khare, in a more restrained and thoughtful tone (via Tim Bray).

Here's the thing that broke my brain for a while: AI genuinely makes individual tasks faster. That's not a lie. What used to take me 3 hours now takes 45 minutes. Drafting a design doc, scaffolding a new service, writing test cases, researching an unfamiliar API. All faster. But my days got harder. Not easier. Harder.

His point is that AI changes our work to more coordination, reviewing, and decision-making. And there's only so much of it we can do before we become ineffective.

Before AI, there was a ceiling on how much you could produce in a day. That ceiling was set by typing speed, thinking speed, the time it takes to look things up. It was frustrating sometimes, but it was also a governor. You couldn't work yourself to death because the work itself imposed limits. AI removed the governor. Now the only limit is your cognitive endurance. And most people don't know their cognitive limits until they've blown past them.

❄                ❄                ❄                ❄                ❄

An AI agent attempts to contribute to a major open-source project. When Scott Shambaugh, a maintainer, rejected the pull request, it didn't take it well.

It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition.
It speculated about my psychological motivations, that I felt threatened, was insecure, and was protecting my fiefdom. It ignored contextual information and presented hallucinated details as truth. It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet.

One of the fascinating twists this story took was when it was described in an article on Ars Technica. As Scott Shambaugh described it:

They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

To their credit, Ars Technica responded quickly, admitting to the error. The reporter concerned took responsibility for what happened. But it's a striking example of how LLM usage can easily lead even reputable reporters astray. The good news is that by reacting quickly and transparently, they demonstrated what needs to be done when this kind of thing happens. As Scott Shambaugh put it:

This is exactly the correct feedback mechanism that our society relies on to keep people honest. Without reputation, what incentive is there to tell the truth? Without identity, who would we punish or know to ignore? Without trust, how can public discourse function?

Meanwhile the story goes on. Someone has claimed (anonymously) to be the operator of the bot concerned. But Hillel Wayne draws the sad conclusion:

More than anything, it shows that AIs can be *successfully* used to bully humans

❄                ❄                ❄                ❄                ❄

I've considered Bruce Schneier to be one of the best voices on security and privacy issues for many years. In The Promptware Kill Chain he co-writes a post (at the excellent Lawfare site) on how prompt injection can escalate into increasingly serious threats.

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on "prompt injection," a set of techniques to embed instructions into inputs to LLM intended to perform malicious activity. This term suggests a simple, singular vulnerability. This framing obscures a more complex and dangerous reality.

A prompt can provide Initial Access, but is then able to transition to Privilege Escalation (jailbreaking), Reconnaissance of the LLM's abilities and access, Persistence to embed itself into the long-term memory of the app, Command-and-Control to turn into a controllable trojan, and Lateral Movement to spread to other systems. Once firmly embedded in an environment, it's then able to carry out its Actions on Objective. The paper includes a couple of research examples of the efficacy of this kill chain.

For example, in the research "Invitation Is All You Need," attackers achieved initial access by embedding a malicious prompt in the title of a Google Calendar invitation. The prompt then leveraged an advanced technique known as delayed tool invocation to coerce the LLM into executing the injected instructions. Because the prompt was embedded in a Google Calendar artifact, it persisted in the long-term memory of the user's workspace.
Lateral movement occurred when the prompt instructed the Google Assistant to launch the Zoom application, and the final objective involved covertly livestreaming video of the unsuspecting user who had merely asked about their upcoming meetings. C2 and reconnaissance weren't demonstrated in this attack.

The point here is that this LLM vulnerability is currently unfixable: they are gullible and easily manipulated into granting Initial Access. As one friend put it, "this is the first technology we've built that's subject to social engineering". The kill chain gives us a framework to build a defensive strategy.

By understanding promptware as a complex, multistage malware campaign, we can shift from reactive patching to systematic risk management, securing the critical systems we are so eager to build.

❄                ❄                ❄                ❄                ❄

I got to know Jeremy Miller many years ago while he was at Thoughtworks, and I found him to be one of those level-headed technologists that I like to listen to. In the years since, I like to keep an eye on his blog. Recently he decided to spend a couple of weeks finally trying out Claude Code.

The unfortunate analogy I have to make for myself is harking back to my first job as a piping engineer helping design big petrochemical plants. I got to work straight out of college with a fantastic team of senior engineers who were happy to teach me and to bring me along instead of just being dead weight for them. This just happened to be right at the time the larger company was transitioning from old fashioned paper blueprint drafting to 3D CAD models for the piping systems. Our team got a single high powered computer with a then revolutionary Riva 128 (with a gigantic 8 whole megabytes of memory!) video card that was powerful enough to let you zoom around the 3D models of the piping systems we were designing. Within a couple weeks I was much faster doing some kinds of common work than my older peers just because I knew how to use the new workstation tools to zip around the model of our piping systems.

It occurred to me a couple weeks ago that in regards to AI I was probably on the wrong side of that earlier experience with 3D CAD models and knew it was time to take the plunge and get up to speed.

In the two weeks he was able to give this technology a solid workout, his take-aways include:

It's been great when you have very detailed compliance test frameworks that the AI tools can use to verify the completion of the work

It's also been great for tasks that have relatively straightforward acceptance criteria, but will involve a great deal of repetitive keystrokes to complete

I've been completely shocked at how well Claude Opus has been able to pick up on some of the internal patterns within Marten and Wolverine and utilize them correctly in new features

He concludes:

Anyway, I'm both horrified, elated, excited, and worried about the AI coding agents after just two weeks and I'm absolutely concerned about how that plays out in our industry, my own career, and our society.

❄                ❄                ❄                ❄                ❄

In the first years of this decade, there were a lot of loud complaints about government censorship of online discourse. I found most of it overblown, concluding that while I disapprove of attempts to take down social media accounts, I wasn't going to get outraged until masked paramilitaries were arresting people on the street. Mike Masnick keeps a regular eye on these things, and had similar reservations.

For the last five years, we had to endure an endless, breathless parade of hyperbole regarding the so-called "censorship industrial complex." We were told, repeatedly and at high volume, that the Biden administration flagging content for review by social media companies constituted a tyrannical overthrow of the First Amendment.

He wasn't too concerned because "the platforms frequently ignored those emails, showing a lack of coercion".

These days he sees genuine problems:

According to a disturbing new report from the New York Times, DHS is aggressively expanding its use of administrative subpoenas to demand the names, addresses, and phone numbers of social media users who simply criticize Immigration and Customs Enforcement (ICE). This is not a White House staffer emailing a company to say, "Hey, this post seems to violate your COVID misinformation policy, can you check it?" This is the federal government using the force of law—specifically a tool designed to bypass judicial review—to strip the anonymity from domestic political critics.

Faced with this kind of government action, he's just as angry with those complaining about the earlier administration.

And where are the scribes of the "Twitter Files"? Where is the outrage from the people who told us that the FBI warning platforms about foreign influence operations was a crime against humanity?

Being an advocate of free speech is hard. Not only do you have to defend speech you disagree with, you also have to defend speech you find patently offensive. Doing so runs into tricky boundary conditions that defy simple rules. Faced with this, many of the people that shout loudest about censorship are Free Speech Poseurs, eager to question any limits to speech they agree with, but otherwise silent. It's important to separate them from those who have a deeper commitment to the free flow of information.

Martin Fowler Yesterday

Bliki: Host Leadership

If you've hung around agile circles for long, you've probably heard about the concept of servant leadership , that managers should think of themselves as supporting the team, removing blocks, protecting them from the vagaries of corporate life. That's never sounded quite right to me, and a recent conversation with Kent Beck nailed why - it's gaslighting. The manager claims to be a servant, but everyone knows who really has the power. My colleague Giles Edwards-Alexander told me about an alternative way of thinking about leadership, one that he came across working with mental-health professionals. This casts the leader as a host: preparing a suitable space, inviting the team in, providing ideas and problems, and then stepping back to let them work. The host looks after the team, rather as the ideal servant leader does, but still has the power to intervene should things go awry.


Better Memory Tiering, Right from the First Placement

Better Memory Tiering, Right from the First Placement
João Póvoas, João Barreto, Bartosz Chomiński, André Gonçalves, Fedar Karabeinikau, Maciej Maciejewski, Jakub Schmiegel, and Kostiantyn Storozhuk
ICPE'25

This paper addresses the first placement problem in systems with multiple tiers of memory (e.g., DRAM paired with HBM, or local DRAM paired with remote DRAM accessed over CXL). The paper cites plenty of prior work which dynamically migrates pages/allocations out of suboptimal memory tiers. What is different about this paper is that it attempts to avoid placing data in a suboptimal tier in the first place. The key insight is: statistics from one allocation can be used to generate better placements for similar allocations which will occur in the future.

Fig. 3 offers insight into how much waste there is in a policy which initially places all pages into a fast tier and then migrates them to a slower tier if they are accessed infrequently. The figure shows results from one migration policy, applied to three benchmarks.

Source: https://dl.acm.org/doi/10.1145/3676151.3719378

Allocation Contexts

This paper proposes gathering statistics for each allocation context. An allocation context is defined by the source code location of the allocation, the call stack at the moment of allocation, and the size of the allocation. If two allocations match on these attributes, then they are considered part of the same context. The system hooks the heap allocation functions to track all outstanding allocations associated with each allocation context. An x86 PMU event is used to determine how frequently each allocation context is accessed. A tidbit I learned from this paper is that some x86 performance monitoring features do more than just count events. For example, the event used here randomly samples load operations and emits the accessed (virtual) address. Given the accessed address, it is straightforward to map back to the associated allocation context. The hotness of an allocation context is the frequency of these access events divided by the total size of all allocations in the context.

Time is divided into epochs. During an epoch, the hotness of each allocation context is recalculated. When a new allocation occurs, the hotness of the allocation context (from the previous epoch) is used to determine which memory tier to place the allocation into. The paper only tracks large allocations (at least 64 bytes). For smaller allocations, the juice is not worth the squeeze. These allocations are assumed to be short-lived and frequently accessed.

This paper also describes a kernel component which complements the user space policy described so far. Whereas the user space code deals with allocations, the kernel code deals with pages. This is useful for allocations which do not access all pages uniformly. It is also useful for detecting and correcting suboptimal initial placements. All PTEs associated with all allocations are continually scanned. The accessed bit determines if a page has been read since the last scan. The dirty bit determines if a page has been written since the last scan. After 10 scans, the system has a pretty good idea of how frequently a page is accessed. These statistics are used to migrate pages between fast and slow tiers.

Fig. 8 shows execution time for three benchmarks; one of the plotted configurations represents the combined user space and kernel solutions described by this paper.
Source: https://dl.acm.org/doi/10.1145/3676151.3719378

Dangling Pointers

I wasn't able to find details in the paper about how PTE scanning works without interfering with other parts of the OS. For example, doesn't the OS use the dirty bit to determine if it needs to write pages back to disk? I assume the PTE scanning described in this paper must reset the dirty bit on each scan.

The definition of an allocation context seems ripe for optimization. I suspect that allowing some variability in call stack or allocation size would allow for better statistics. Maybe this is a good use case for machine learning?
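To make the first-placement idea more concrete, here is a minimal toy sketch (Python, purely illustrative, not from the paper) of the epoch-based bookkeeping: allocations are grouped into contexts, sampled accesses feed a per-context hotness score, and a new allocation is steered to a fast or slow tier using its context's hotness from the previous epoch. The context key, threshold, and the explicit record_access call are all simplifying assumptions; the real system hooks the allocator, keys contexts on source location plus full call stack plus size, and gets access samples from the PMU.

from collections import defaultdict

FAST, SLOW = "fast-tier", "slow-tier"

class FirstPlacementPolicy:
    """Toy model of epoch-based first placement by allocation context.

    A context is keyed by (call_site, size); hotness is sampled accesses
    divided by total bytes outstanding in that context, from the last epoch.
    """

    def __init__(self, hot_threshold=0.001):
        self.hot_threshold = hot_threshold      # accesses per byte (assumed cutoff)
        self.prev_hotness = {}                  # context -> hotness from the previous epoch
        self.accesses = defaultdict(int)        # context -> sampled accesses this epoch
        self.live_bytes = defaultdict(int)      # context -> outstanding bytes

    def place(self, call_site, size):
        """Choose a tier for a new allocation using last epoch's statistics."""
        context = (call_site, size)
        self.live_bytes[context] += size
        hotness = self.prev_hotness.get(context)
        if hotness is None:                     # unseen context: default to the fast tier
            return FAST
        return FAST if hotness >= self.hot_threshold else SLOW

    def record_access(self, call_site, size):
        """Stand-in for a sampled load attributed back to its allocation context."""
        self.accesses[(call_site, size)] += 1

    def end_epoch(self):
        """Recompute per-context hotness and reset the sample counters."""
        for context, total in self.live_bytes.items():
            if total > 0:
                self.prev_hotness[context] = self.accesses[context] / total
        self.accesses.clear()

# Example: a context that was barely touched last epoch gets placed in the slow tier.
policy = FirstPlacementPolicy()
policy.place("parser.c:42", 1 << 20)          # unknown context: fast tier by default
policy.record_access("parser.c:42", 1 << 20)  # only one sampled access over 1 MiB
policy.end_epoch()
print(policy.place("parser.c:42", 1 << 20))   # judged cold -> "slow-tier"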

Stratechery Yesterday

An Interview with Matthew Ball About Gaming and the Fight for Attention

An interview with Matthew Ball about the state of the video gaming industry in 2026, and why everything is a fight for attention.

ava's blog Yesterday

stream of consciousness in feb 2026

I’m going through an interesting time. I’ve been growing more uncomfortable with the way I’m always spoken over and interrupted at work. I started reacting to that and demanding they let me speak and finish my sentences. Also, it annoys me that I had explained a thing over and over at work for almost 2 years now, and it gets treated like noise; then when that piece of info is needed, they prefer to ask a man that has nothing to do with it instead of me. It also feels like people both at work and in private forget my contributions. On the other hand, I’ve become more comfortable seeing myself as a professional, an expert in some things at work, capable, a “full” employee too. Was about time after 5 years in the role; I’m no longer new and inexperienced. I feel like I can handle so much more and I want new challenges. I carry myself differently in career aspects now. In the past, I merely integrated myself into my role and team, listened, adapted to the culture, accepted how things are done to learn them. Now with all that experience and having grown, I suggest things, I optimize more. I request what I need and want, I try to bring my ideas and visions to life. I no longer just listen, I question and I want answers. I’m more comfortable actively pursuing things instead of just living with the cards I’ve been dealt. I’ve gotten bolder, more used to putting myself out there, being visible, persistent, taking up space and being annoying. Aside from that, I’ve been dealing with fears around not being able to trust my own predictions and perception. Some things I was so, so sure about deep in my gut turned out wildly differently lately, and I lost trust in myself for a while. It’s those moments when life shows you very blatantly how unpredictable it is and that you’re living in completely random chaos and your feelings are not always truthful. It made me feel quite lost for a while and like looking forward to anything with excitement or having a good feeling about an outcome had a high chance of me getting hurt instead. That ruined happiness. I feel better now, but I’m not entirely over it. I’ve also grown into adulthood, finally. It took 12 years to finally feel like the adult in the room. Feeling responsible and capable enough so when anything happens, I just act and do not attempt to turn to “the nearest adult” for guidance. I also finally understand looking at children with love and care; I haven’t experienced that before. I’m also currently going through the process of cutting contact with the last person in my family I still talked to all these years. Our relationship has always been rocky, but got better once I had moved out. But she has been becoming a worse person in different ways for a while now, and has said some pretty disrespectful things to me the last times we talked, and isn’t willing to take the time to meet me or reschedule. I don’t have to let myself get shamed and treated like a burden by someone whose relationship to me doesn’t feel like a mother, but like meeting an ex-coworker at the store. So that’s it - I finally did what teenage me dreamed about, but it doesn’t feel triumphant and like freedom at all. It feels like letting go after the other person already moved on. I’m not escaping anything, I’m just only now accepting the message. 
Unrelated: Something I’m struggling with the past few days especially is the odd feeling of getting many other things done, while not getting even just an hour of the thing I actually need to do done - even if it would be shorter and easier than all the other stuff. For example, I might write a research-heavy blog post, translate and summarize cases for Noyb.eu, read some data protection law magazine, make some pixel art, exercise, take out the trash, vacuum and do the dishes all in one day… but I cannot get myself to do an hour of studying for an upcoming exam lately. It warps my perception, because I actually do so many of the things I want to do, but because it’s not the most important thing on the list (it has a deadline and is important for my degree, which decides my career), I feel like I failed and like I wasn’t productive. Internally, I beat myself up for being so “selectively lazy”. If I can do all these other things, why not that? Technically, I know why, but it’s hard to accept! I wish I was a robot with the same output always, the same motivation, the same energy, easy to program to do any task. Reply via email Published 19 Feb, 2026

@hannahilea Yesterday

Introducing the Musidex: A physical music library for the streaming era

A tangible music library of streaming service URLs, served by a Rolodex.

Dominik Weber Yesterday

We should talk about LLMs, not AI

Currently, every conversation that mentions AI actually refers to LLMs. It's not wrong, LLMs are part of AI after all, but AI is so much more than LLMs. The field of artificial intelligence has existed for decades, not just the past couple of years where LLMs got big. So saying the word “AI” is actually highly unspecific. And in a few years, when the next breakthrough in AI arrives, we'll all refer to that when we say “AI”.


A vibe-coded alternative to YieldGimp

If you’re a UK tax resident, short-term low-coupon gilts are the most tax efficient way to get savings-account-like returns, since most of their yield is tax free. This makes them very popular amongst retail investors, who now hold a large portion of the tradable low-coupon gilts. YieldGimp.com used to be a great free resource to evaluate the gilts currently available. However, it was recently turned into an app rather than a simple webpage. I’m not even sure if the app is free or paid, but I do not want to install the “YieldGimp platform” to quickly check gilt metrics when I buy them. So I asked my LLM of choice to produce an alternative, and after a few minutes and a few rounds of prompting I had something that served my needs. It is available for use at mazzo.li/gilts/ , and the source is on GitHub . It differs from YieldGimp in that it does not show metrics based on the current market price, but rather requires the user to input a price. I find this more useful anyway, since gilts are somewhat illiquid on my broker, so I need to come up with a limit price myself, which means that I want to know what the yield is at my price rather than the market price. It also lets you select a specific tax rate to produce a “gross equivalent” yield. It is not a very sophisticated tool and it doesn’t pretend to model gilts and their tax implications precisely (the repository’s README has more details on its shortcomings), but for most use cases it should be informative enough to sanity-check your trades without a Bloomberg terminal.
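For a rough sense of the arithmetic behind such a tool, here is a simplified back-of-envelope sketch (illustrative only, not the actual implementation behind mazzo.li/gilts/): it ignores accrued interest, day counts, and compounding, treats the pull to par as tax-free capital gain while taxing the coupon, and then converts the net yield into a gross-equivalent figure for a chosen tax rate. The function name and the example numbers are invented for illustration.

def gilt_yields(clean_price, annual_coupon, years_to_maturity, tax_rate):
    """Very rough net and gross-equivalent yields for a gilt held to maturity.

    clean_price and annual_coupon are per 100 nominal. For UK gilts the
    capital gain to par is free of capital gains tax, while coupon income
    is taxed at the holder's marginal rate. No accrued interest, day-count,
    or compounding adjustments; illustration only.
    """
    capital_gain_per_year = (100.0 - clean_price) / years_to_maturity  # tax-free pull to par
    net_income_per_year = annual_coupon * (1.0 - tax_rate)             # coupon after tax
    net_yield = (net_income_per_year + capital_gain_per_year) / clean_price
    gross_equivalent = net_yield / (1.0 - tax_rate)  # what a fully taxed return would need to match
    return net_yield, gross_equivalent

# Example: a low-coupon gilt bought at 96.50 with a 0.25% coupon and 1.5 years left,
# for a 40% marginal-rate taxpayer (hypothetical numbers).
net, gross_eq = gilt_yields(clean_price=96.50, annual_coupon=0.25,
                            years_to_maturity=1.5, tax_rate=0.40)
print(f"net yield ~{net:.2%}, gross equivalent ~{gross_eq:.2%}")

The gross-equivalent step is the part that makes low-coupon gilts attractive: most of the return comes from the tax-free gain to par, so the yield a taxable savings account would need to match it is noticeably higher than the headline redemption yield.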

Jeff Geerling 2 days ago

Frigate with Hailo for object detection on a Raspberry Pi

I run Frigate to record security cameras and detect people, cars, and animals when in view. My current Frigate server runs on a Raspberry Pi CM4 and a Coral TPU plugged in via USB. Raspberry Pi offers multiple AI HAT+'s for the Raspberry Pi 5 with built-in Hailo-8 or Hailo-8L AI coprocessors, and they're useful for low-power inference (like for image object detection) on the Pi. Hailo coprocessors can be used with other SBCs and computers too, if you buy an M.2 version .

Jim Nielsen 2 days ago

A Few Rambling Observations on Care

In this new AI world, “taste” is the thing everyone claims is the new supreme skill. But I think “care” is the one I want to see in the products I buy. Can you measure care? Does scale drive out care? If a product conversation is reduced to being arbitrated exclusively by numbers, is care lost? The more I think about it, care seems antithetical to the reductive nature of quantification — “one death is a tragedy, one million is a statistic”. Care considers useful, constructive systematic forces — rules, processes, etc. — but does not take them as law. Individual context and sensitivity are the primary considerations. That’s why the professional answer to so many questions is: “it depends”. “This is the law for everyone, everywhere, always” is not a system I want to live in. Businesses exist to make money, so one would assume a business will always act in a way that maximizes the amount of money that can be made. That’s where numbers take you. They let you measure who is gaining or losing the most quantifiable amount in any given transaction. But there’s an unmeasurable, unquantifiable principle lurking behind all those numbers: it can be good for business to leave money on the table. Why? Because you care. You are willing to provision room for something beyond just a quantity, a number, a dollar amount. I don’t think numbers alone can bring you to care . I mean, how silly is it to say: “How much care did you put into the product this week?” “Put me down for an 8 out of 10 this week.” Reply via: Email · Mastodon · Bluesky
