Latest Posts (20 found)

Updated thoughts on People and Blogs

This is a follow-up to my previous post. After talking to a few friends and getting feedback from the kind people who decided to email me and share their thoughts, I decided that I will stop once interview number 150 is out, on July 10th. 150 is a neat number because it means I can match each interview to a first-gen Pokémon. I am a 90s kid after all. That said, my stopping on the 10th of July doesn’t mean the series also has to stop. If anyone out there is interested in picking it up and carrying it forward, I’ll be more than happy to give the series away. If that's you, send me an email. I’m also happy to part ways with the domain name if it can be of any help. Whether someone picks up the torch or not, the first 150 interviews will be archived here on my blog for as long as I have a presence on the web. 20 interviews left: 6 drafts are ready to go, a few more people have the questions, and I’m waiting to get their answers (which may or may not arrive before July 10th). It’s going to be fun to see who ends up being the final guest. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for $1/month :: See my generous supporters :: Subscribe to People and Blogs


Miscalibrated

I’ve been gaining weight again. More than twenty pounds in the last ~4 months. I’ve been hitting the gym hard and getting measurably stronger, so: Food! See, your boy can eat. The amount I can eat before I feel full would astound most of you out there. Whatever you think of as a complete hearty meal, sure as you’re born, ain’t gonna get me there. Being fat comes with one (1) society-regimented bucket of shame. People look away. It’s a thing. I had gone off my last round of GLP-1 drugs because I was doing OK, and it had lost its effectiveness. I’m not sure if it’s everyone’s experience, but it’s mine, and it’s happened a couple of times now. Honestly, I think my I CAN EAT THROUGH OZEMPIC line of XXXL T-shirts has a chance. These drugs work very well for a bit. I like them because they give me a glimpse of what it’s like to be a regular person who eats a regular amount of food and feels a regular amount of full. You settle into that for a while with these drugs. But, in time, effectiveness wanes. And the pharmacies have an answer: higher doses! All these GLP-1 drugs, and I’m pretty sure it is all of them, have dosage tiers. The three I’ve tried have three tiers. Ozempic rolls like this: Wegovy is getting in on the action: Mounjaro has even more layers: Again, they do this because it loses effectiveness. I don’t think people quite realize this??? Even though it’s not hidden in any way. I think these drugs are pretty amazing, and I’m proud of science for starting to figure all this out, but I’m also a little sick of hearing about how airlines are going to spend less money on fuel now. I’ve been reading this story for many years. It’s laughable when we literally know they don’t work permanently. Look at those graphics above. This isn’t a forever solution yet. They are literally showing and telling us that. There is no answer once they lose effectiveness. Perhaps controversial, but I think overeating, in the form I experience it, is an addiction, and addictions come back. Is it possible to beat it? Absolutely. Is it likely? No. I hope you don’t know firsthand, but I bet you already know that cocaine doesn’t maintain effectiveness, either. You need a second line for the same thrill before long. It doesn’t end well. Anyway, I’m back on GLP-1s. At least they work for a while, and that while feels pretty good. It was a rough start, though. My doctor agreed it’s good for me and that we should kick up the dosage based on the waned effectiveness. Wegovy this time. It was this past Tuesday that I picked up the meds. It’s down to $350 now! It used to be like $1,200 without insurance. I jabbed myself Tuesday night at about 8pm. I was hugging the toilet hard by midnight. That was a first. See, there was a lot of food in my body. I remember lunch that day, where I made a sandwich, and my rational brain saw it and thought that’s 2-3 sandwiches. But of course I ate all of it. And one of those salad bags that make a Caesar salad for a family of four. And a pint of cottage cheese. And a bag of Doritos. I was full after that, but the trick is just to switch to sugar, and I can keep going. It wasn’t quite noon, and I had a decent breakfast in me already. I ate dinner that night as well. So when the Wegovy started to hit, which tells your body you’re full when you eat a celery stick, it told my body that it was about to pop. I puked in four sessions over 24 hours. Now it’s Friday, and I’ve barely eaten since. I’ve eaten a little. Like, I’m fine. It’s just weird. I’m miscalibrated.
On my own, nature, nurture, whatever you think, my current body is miscalibrated. It doesn’t do food correctly. On GLP-1 drugs, I’m also miscalibrated. My body doesn’t do food correctly. It heavily overcorrects. That can feel good for a while. I don’t wanna be skinny, I just wanna be normal. I want to eat, and stop eating, like a calibrated person.


Adding TILs, releases, museums, tools and research to my blog

I've been wanting to add indications of my various other online activities to my blog for a while now. I just turned on a new feature I'm calling "beats" (after story beats, naming this was hard!) which adds five new types of content to my site, all corresponding to activity elsewhere. Here's what beats look like: the three examples shown are from the 30th December 2025 archive page. Beats are little inline links with badges that fit into different content timeline views around my site, including the homepage, search and archive pages. There are currently five types of beats. Releases are GitHub releases of my many different open source projects, imported from this JSON file that was constructed by GitHub Actions. TILs are the posts from my TIL blog, imported using a SQL query over JSON and HTTP against the Datasette instance powering that site. Museums are new posts on my niche-museums.com blog, imported from this custom JSON feed. Tools are HTML and JavaScript tools I've vibe-coded on my tools.simonwillison.net site, as described in Useful patterns for building HTML tools. Research is for AI-generated research projects, hosted in my simonw/research repo and described in Code research projects with async coding agents like Claude Code and Codex. That's five different custom integrations to pull in all of that data. The good news is that this kind of integration project is the kind of thing that coding agents really excel at. I knocked most of the feature out in a single morning while working in parallel on various other things. I didn't have a useful structured feed of my Research projects, and it didn't matter because I gave Claude Code a link to the raw Markdown README that lists them all and it spun up a regex parser. Since I'm responsible for both the source and the destination, I'm fine with a brittle solution that would be too risky against a source that I don't control myself. Claude also handled all of the potentially tedious UI integration work with my site, making sure the new content worked on all of my different page types and was handled correctly by my faceted search engine. I actually prototyped the initial concept for beats in regular Claude - not Claude Code - taking advantage of the fact that it can clone public repos from GitHub these days. I started with an initial prompt and refined the idea later in the brainstorming session. After some iteration we got to this artifact mockup, which was enough to convince me that the concept had legs and was worth handing over to full Claude Code for web to implement. If you want to see how the rest of the build played out, the most interesting PRs are Beats #592, which implemented the core feature, and Add Museums Beat importer #595, which added the Museums content type. You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.
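To make the shape of one of these importers concrete, here is a minimal sketch of what a "releases" beat importer could look like. The feed URL, field names, and the Beat structure below are my own assumptions for illustration; they are not the actual implementation described in the post.

```python
# Hypothetical sketch of a "releases" beat importer. The feed URL, field names,
# and the Beat shape are assumptions for illustration, not the real implementation.
import json
from dataclasses import dataclass
from datetime import datetime
from urllib.request import urlopen


@dataclass
class Beat:
    beat_type: str   # "release", "til", "museum", "tool" or "research"
    title: str
    url: str
    created: datetime


def fetch_json(url: str):
    """Download and decode a JSON document."""
    with urlopen(url) as response:
        return json.load(response)


def import_releases(feed_url: str) -> list[Beat]:
    """Turn entries from a JSON feed of GitHub releases into Beat records."""
    beats = []
    for item in fetch_json(feed_url):
        beats.append(Beat(
            beat_type="release",
            title=item["name"],
            url=item["html_url"],
            # GitHub-style timestamps end in "Z"; normalise for fromisoformat()
            created=datetime.fromisoformat(item["published_at"].replace("Z", "+00:00")),
        ))
    return beats


if __name__ == "__main__":
    # Placeholder URL: the real feed lives wherever the GitHub Actions job writes it.
    for beat in import_releases("https://example.com/releases.json"):
        print(beat.beat_type, beat.created.date(), beat.title)
```

Each of the other four importers would follow the same pattern: fetch a feed, map its fields onto the shared Beat shape, and hand the records to the site's timeline views.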


‘Starkiller’ Phishing Service Proxies Real Login Pages, MFA

Most phishing websites are little more than static copies of login pages for popular online destinations, and they are often quickly taken down by anti-abuse activists and security firms. But a stealthy new phishing-as-a-service offering lets customers sidestep both of these pitfalls: It uses cleverly disguised links to load the target brand’s real website, and then acts as a relay between the target and the legitimate site — forwarding the victim’s username, password and multi-factor authentication (MFA) code to the legitimate site and returning its responses. There are countless phishing kits that would-be scammers can use to get started, but successfully wielding them requires some modicum of skill in configuring servers, domain names, certificates, proxy services, and other repetitive tech drudgery. Enter Starkiller, a new phishing service that dynamically loads a live copy of the real login page and records everything the user types, proxying the data from the legitimate site back to the victim. According to an analysis of Starkiller by the security firm Abnormal AI, the service lets customers select a brand to impersonate (e.g., Apple, Facebook, Google, Microsoft et al.) and generates a deceptive URL that visually mimics the legitimate domain while routing traffic through the attacker’s infrastructure. For example, a phishing link targeting Microsoft customers appears as “login.microsoft.com@[malicious/shortened URL here].” The “@” sign in the link trick is an oldie but goodie, because everything before the “@” in a URL is considered username data, and the real landing page is what comes after the “@” sign. Here’s what it looks like in the target’s browser: Image: Abnormal AI. The actual malicious landing page is blurred out in this picture, but we can see it ends in .ru. The service also offers the ability to insert links from different URL-shortening services. Once Starkiller customers select the URL to be phished, the service spins up a Docker container running a headless Chrome browser instance that loads the real login page, Abnormal found. “The container then acts as a man-in-the-middle reverse proxy, forwarding the end user’s inputs to the legitimate site and returning the site’s responses,” Abnormal researchers Callie Baron and Piotr Wojtyla wrote in a blog post on Thursday. “Every keystroke, form submission, and session token passes through attacker-controlled infrastructure and is logged along the way.” Starkiller in effect offers cybercriminals real-time session monitoring, allowing them to live-stream the target’s screen as they interact with the phishing page, the researchers said. “The platform also includes keylogger capture for every keystroke, cookie and session token theft for direct account takeover, geo-tracking of targets, and automated Telegram alerts when new credentials come in,” they wrote. “Campaign analytics round out the operator experience with visit counts, conversion rates, and performance graphs—the same kind of metrics dashboard a legitimate SaaS [software-as-a-service] platform would offer.” Abnormal said the service also deftly intercepts and relays the victim’s MFA credentials, since the recipient who clicks the link is actually authenticating with the real site through a proxy, and any authentication tokens submitted are then forwarded to the legitimate service in real time. “The attacker captures the resulting session cookies and tokens, giving them authenticated access to the account,” the researchers wrote.
“When attackers relay the entire authentication flow in real time, MFA protections can be effectively neutralized despite functioning exactly as designed.” The “URL Masker” feature of the Starkiller phishing service features options for configuring the malicious link. Image: Abnormal. Starkiller is just one of several cybercrime services offered by a threat group calling itself Jinkusu , which maintains an active user forum where customers can discuss techniques, request features and troubleshoot deployments. One a-la-carte feature will harvest email addresses and contact information from compromised sessions, and advises the data can be used to build target lists for follow-on phishing campaigns. This service strikes me as a remarkable evolution in phishing, and its apparent success is likely to be copied by other enterprising cybercriminals (assuming the service performs as well as it claims). After all, phishing users this way avoids the upfront costs and constant hassles associated with juggling multiple phishing domains, and it throws a wrench in traditional phishing detection methods like domain blocklisting and static page analysis. It also massively lowers the barrier to entry for novice cybercriminals, Abnormal researchers observed. “Starkiller represents a significant escalation in phishing infrastructure, reflecting a broader trend toward commoditized, enterprise-style cybercrime tooling,” their report concludes. “Combined with URL masking, session hijacking, and MFA bypass, it gives low-skill cybercriminals access to attack capabilities that were previously out of reach.”
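To make the “@” trick described above concrete: URL parsers treat everything between the scheme and the “@” as userinfo (a username), so the decoy brand name is ignored and the browser navigates to whatever host follows. Here is a quick sketch using Python's standard library; the phishing hostname is a made-up placeholder, not a real Starkiller domain.

```python
from urllib.parse import urlsplit

# A link crafted to look like a Microsoft login page. Everything before the "@"
# is parsed as userinfo, not as the destination host. The hostname below is a
# made-up placeholder for illustration.
link = "https://login.microsoft.com@phish-landing.example.ru/session"

parts = urlsplit(link)
print(parts.username)  # login.microsoft.com       <- the decoy the victim sees
print(parts.hostname)  # phish-landing.example.ru  <- where the browser actually goes
```

Some browsers warn about or strip credentials embedded in links, which may be one reason the service also offers URL-shortener wrapping for the malicious part of the address.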

Heather Burns Yesterday

The Prince, The Paedo, The Palace, and the “Safety Tech” app

Shame must change sides. And this week, that means certain corners of the "children's online safety" crusade.


Premium: The Hater's Guide to Anthropic

In May 2021, Dario Amodei and a crew of other former OpenAI researchers formed Anthropic and dedicated themselves to building the single-most-annoying Large Language Model company of all time.  Pardon me, sorry, I mean safest , because that’s the reason that Amodei and his crew claimed was why they left OpenAI : I’m also being a little sarcastic. Anthropic, a “public benefit corporation” (a company that is quasi-legally required to sometimes sort of focus on goals that aren’t profit driven, and in this case, one that chose to incorporate in Delaware as opposed to California, where it would have actual obligations), is the only meaningful competitor to OpenAI, one that went from (allegedly) making about $116 million in March 2025 to making $1.16 billion in February 2026 , in the very same month it raised $30 billion from thirty-seven different investors, including a “partial” investment from NVIDIA and Microsoft announced in November 2025 that was meant to be “up to” $15 billion.  Anthropic’s models regularly dominate the various LLM model leaderboards , and its Claude Code command-line interface tool (IE: a terminal you type stuff into) has become quite popular with developers who either claim it writes every single line of their code, or that it’s vaguely useful in some situations.  CEO Dario Amodei predicted last March that in six months AI would be writing 90% of code, and when that didn’t happen, he simply made the same prediction again in January , because, and I do not say this lightly, Dario Amodei is full of shit. You see, Anthropic has, for the best part of five years, been framing itself as the trustworthy , safe alternative to OpenAI, focusing more on its paid offerings and selling to businesses (realizing that the software sales cycle usually focuses on dimwitted c-suite executives rather than those who actually use the products ), as opposed to building a giant, expensive free product that lots of people use but almost nobody pays for.  Anthropic, separately, has avoided following OpenAI in making gimmicky (and horrendously expensive) image and video generation tools, which I assume is partly due to the cost, but also because neither of those things are likely something that an enterprise actually cares about.  Anthropic also caught on early to the idea that coding was the one use case that Large Language Models fit naturally: Anthropic has held the lead in coding LLMs since the launch of June 2024’s Claude Sonnet 3.5 , and as a story from The Information from December 2024 explained, this terrified OpenAI : Cursor would, of course, eventually go on to become its own business, raising $3.2 billion in 2025 to compete with Claude Code, a product made by Anthropic, which Cursor pays to offer its models through its AI coding product. Cursor is Anthropic’s largest customer, with the second being Microsoft’s Github Copilot . I have heard from multiple sources that Cursor is spending more than 100% of its revenue on API calls, with the majority going to Anthropic and OpenAI, both of whom now compete with Cursor. 
Anthropic sold itself as the stable, thoughtful, safety-oriented AI lab, with Amodei himself saying in an August 2023 interview that he purposefully avoided the limelight: A couple of months later in October 2023, Amodei joined The Logan Bartlett show , saying that he “didn’t like the term AGI” because, and I shit you not, “...because we’re closer to the kinds of things that AGI is pointing at,” making it “no longer a useful term.” He said that there was a “future point” where a model could “build dyson spheres around the sun and calculate the meaning of life,” before rambling incoherently and suggesting that these things were both very close and far away at the same time. He also predicted that “no sooner than 2025, maybe 2026” that AI would “really invent new science.” This was all part of Anthropic’s use of well-meaning language to tell a story that said “you should be scared” and “only Anthropic will save you.” In July 2023 , Amodei spoke before a senate committee about AI oversight and regulation, starting sensible (IE: if AI does become powerful, we should have regulations to mitigate those problems) and eventually veering aggressively into marketing slop: This is Amodei’s favourite marketing trick — using a vague timeline (2-3 years) to suggest that something vaguely bad that’s also good for Anthropic is just around the corner, but managed correctly, could also be good for society (a revolution in technology and science! But also, havoc! ). Only Dario has  the answers (regulations that start with “securing the AI supply chain” meaning “please stop China from competing”).  In retrospect, this was the most honest that he’d ever be. In 2024, Amodei would quickly learn that he loved personalizing companies, and that destroying his soul fucking rocked.  In October 2024, Amodei put out a 15,000-word-long blog — ugh, AI is coming for my job! — where he’d say that Anthropic needed to “avoid the perception of propaganda” while also saying that “as early as 2026 (but there are also ways it could take much longer),” AI would be smarter than a Nobel Prize winner, autonomously able to complete weeks-long tasks, and be the equivalent of a “country of geniuses in a datacenter.”  This piece, like all of his proclamations, had two goals: generating media coverage and investment. Amodei is a deeply dishonest man, couching “predictions” based on nothing in terms like “maybe,” “possibly,” or “as early as,” knowing that the media will simply ignore those words and report what he says as a wise, evidence-based fact.  Amodei (and by extension Anthropic) nakedly manipulates the media by having them repeat these things without analysis or counterpoints — such as that “AI could surpass almost all humans at almost everything shortly after 2027 (which I’ll get back to in a bit).” He knows that these things aren’t true. He knows he doesn’t have any proof. And he knows that nobody will ask, and that his bullshit will make for a sexy traffic-grabbing headline. To be clear, that statement was made three months after Amodei’s essay said that AI labs needed to avoid “the perception of propaganda.” Amodei is a con artist that knows he can’t sell Anthropic’s products by explaining what they actually do, and everybody is falling for it. And, almost always, these predictions match up with Anthropic’s endless fundraising. 
On September 23, 2024, The Information reported that Anthropic was raising a round at a $30-$40 billion valuation, and on October 12 2024, Amodei pooped out Machines of Loving Grace with the express position that he and Anthropic “had not talked that much about powerful AI’s upsides.”  A month later on November 22, 2024 , Anthropic would raise another $4 billion from Amazon, a couple of weeks after doing a five-hour-long interview with Lex Fridman in which he’d say that “someday AI would be better at everything.”  On November 27, 2024 , Amodei would do a fireside chat at Eric Newcomer’s Cerebral Valley AI Summit where he’d say that in 2025, 2026, or 2027 (yes, he was that vague), AI could be as “good as a Nobel Prize winner, polymathic across many fields,” and have “agency [to] act on its own for hours or days,” the latter of which deliberately laid foundation for one of Anthropic’s greatest lies: that AI can “work uninterrupted” for periods of time, leaving the reader or listener to fill in the (unsaid) gap of “...and actually create useful stuff.” Amodei crested 2024 with an interview with the Financial Times , and let slip what I believe will eventually become Anthropic’s version of WeWork’s Community-Adjusted EBITDA , by which I mean “a way to lie and suggest profitability when a company isn’t profitable”: Yeah man, if a company made $300 million in revenue and spent $1 billion. No amount of DarioMath about how a model “costs this much and makes this much revenue” changes the fact that profitability is when a company makes more money than it spends.  On January 5, 2025, Forbes would report that Anthropic was working on a $60 billion round that would make Amodei, his sister Daniela, and five other cofounders billionaires . Anyway, as I said at Davos on January 21, 2025, Amodei said that he was “more confident than ever” that we’re “very close” to “powerful capabilities,” defined as “systems that are better than almost all humans at almost all terms,” citing his long, boring essay. A day later, Anthropic would raise another $1 billion from Google . On January 27, 2025, he’d tell Economist editor-in-chief Zanny Minton Beddoes that AI would get “as good and eventually better” at thinking as human beings, and that the ceiling of what models could do was “well above humans.”  On February 18, 2025, he’d tell Beddoes that we’d get a model “...that can do everything a human can do at the level of a Nobel laureate across many fields” by 2026 or 2027, and that we’re “on the eve of something that has great challenges” that would “upend the balance of power” because we’d have “10 million people smarter than any human alive…” oh god, I’m not fucking writing it out. I’m sorry. It’s always the same shit. The models are people, we’re so scared.  On February 28, 2025, Amodei would join the New York Times’ Hard Fork , saying that he wanted to “slow down authoritarians,” and that “public officials and leaders at companies” would “look back at this period [where humanity would become a “post-powerful AI society that co-exists with powerful intelligences]” and “feel like a fool,” and that that was the number one goal of these people. Amodei would also add that he had been in the field for 10 years — something he loves to say! — and that there was a 70-80% chance that we will “get a very large number of AI systems that are much smarter than humans at almost everything” before the end of the decade. Three days later, Anthropic would raise $3.5 billion at a $61.5 billion valuation . 
Beneath the hype, Anthropic is, like OpenAI, a company making LLMs that can generate code and text, and that can interpret data from images and videos, all while burning billions of dollars and having no path to profitability. Per The Information , Anthropic made $4.5 billion in revenue and lost $5.2 billion generating it, and based on my own reporting from last year , costs appear to scale linearly above revenue. Some will argue that the majority of Anthropic’s losses ($4.1 billion) were from training, and I think it’s time we had a chat about what “training” means, especially as Anthropic plans to spend $100 billion on it in the next four years . Per my piece from last week: In an interview on the Dwarkesh Podcast , Amodei even admitted that if you “never train another model” you “don’t have any demand because you’ll fall behind.” Training is opex, and should be part of gross margins. It’s time we had an honest conversation about Anthropic.  Despite its positioning as the trustworthy, “nice” AI lab, Anthropic is as big, ugly and wasteful as OpenAI, and Dario Amodei is an even bigger bullshit artist than Sam Altman. It burns just as much of its revenue on inference (59%, or $2.79 billion on $4.5 billion of revenue , versus OpenAI’s 62%, or $2.5 billion  on $4.3 billion of revenue in the first half of 2025 , if you use The Information’s numbers), and shows no sign of any “efficiency” or “cost-cutting.” Worse still, Anthropic continually abuses its users through varying rate limits to juice revenues and user numbers — along with Amodei’s gas-leak-esque proclamations — to mislead the media, the general public, and investors about the financial condition of the company.  Based on an analysis of many users’ actual token burn on Claude Code, I believe Anthropic is burning anywhere from $3 to $20 to make $1, and that the product that users are using (and the media is raving about) is not one that Anthropic can actually support long-term.  I also see signs that Amodei himself is playing fast and loose with financial metrics in a way that will blow up in his face if Anthropic ever files its paperwork to go public . In simpler terms, Anthropic’s alleged “ 38% gross margins ” are, if we are to believe Amodei’s own words, not the result of “revenue minus COGS” but “how much a model costs and how much revenue it’s generated.” Anthropic is also making promises it can’t keep. It’s promising to spend $30 billion on Microsoft Azure (and an additional "up to one gigawatt”), “tens of billions” on Google Cloud , $21 billion on Google TPUs with Broadcom , “$50 billion on American infrastructure,” as much as $3 billion on Hut8’s data center in Louisiana , and an unknowable (yet likely in the billions) amount of money with Amazon Web Services. Not to worry, Dario also adds that if you’re off by a couple of years on your projections of revenue and ability to pay for compute, it’ll be “ruinous.” I think that he’s right. Anthropic cannot afford to pay its bills, as the ruinous costs of training — which will never, ever stop — and inference will always outpace whatever spikes of revenue it can garner through media campaigns built on deception, fear-mongering, and an exploitation of reporters unwilling to ask or think about the hard questions.  I see no difference between OpenAI’s endless bullshit non-existent deal announcements and what Anthropic has done in the last few months. 
Anthropic is as craven and deceptive as OpenAI, and Dario Amodei is as willing a con artist as Altman, and I believe he is desperately jealous of his success. And after hours and hours of listening to Amodei talk, I think he is one of the most annoying, vacuous, bloviating fuckwits in tech history. He rambles endlessly, stutters more based on how big a lie he’s telling, and will say anything and everything to get on TV and say noxious, fantastical, intentionally-manipulative bullshit to people who should know better but never seem to learn. He stammers, he blithers, he rambles, he continually veers between “this is about to happen” and “actually it’s far away” so that nobody can say he’s a liar, but that’s exactly what I call a person who intentionally deceives people, even if they couch their lies in “maybes” and “possiblies.” Dario Amodei fucking sucks, and it’s time to stop pretending otherwise. Anthropic has no more soul or ethics than OpenAI — it’s just done a far better job of conning people into believing otherwise. This is the Hater’s Guide To Anthropic, or “DarioWare: Get It Together.” Thanks to sites like Stack Overflow and Github, as well as the trillions of lines of open source code in circulation, there’s an absolute fuckton of material to train the model on. Software engineers are data perverts (I mean this affectionately), and will try basically anything to speed up, automate or “add efficiency” to their work. Software engineering is a job that most members of the media don’t understand. Software engineers never shut the fuck up when they’ve found something new that feels good. Software engineers will spend hours defending the honour of any corporation that courts them. Software engineers will at times overestimate their capabilities, as demonstrated by the METR study that found that developers believed they were 24% faster when using LLMs, when in fact coding models made them 19% slower. This, naturally, makes them quite defensive about the products they use, and about whether or not they’re actually seeing improvements.

Stratechery Yesterday

2026.08: Losing in the Attention Economy

Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week. This week’s Sharp Tech video is on Anthropic’s Super Bowl lies. What Happened to Video Games? For decades video games were hailed as the industry of the future, as their growth and eventually total revenue dwarfed other forms of entertainment. Over the last five years, however, things have gotten dark — and what light there is is shining on everyone other than game developers. I’ve been talking to Matthew Ball about the state of the video game industry every year for the last three years, and this week’s Interview was my favorite one of the series: what happens when you actually have to fight for attention, and when everything that made you exciting — particularly interactivity and immersiveness — start to become liabilities? — Ben Thompson The NBA Is a Mess, For Now.  As a card-carrying pro basketball sicko who will be watching the NBA the rest of my life, it brings me no joy to report the league is not in a great place at the moment. We’re reliving the mid-aughts Spurs-Pistons Dark Ages, but with too much offense instead of too much defense, and a regular season that’s 20 games too long. I wrote about all of it on Sharp Text this week, including problems that can be fixed, others that may be solved with time, and whether Commissioner Adam Silver is the right leader to address any of these issues.  — Andrew Sharp Shopify and the Future of E-Commerce.  In the midst of the ongoing thrum of SaaSpocalypse takes, I enjoyed that Ben’s Daily Update on Wednesday pumped the brakes on the panic in at least one area: Shopify is fine, actually. We went deeper on this week’s episode of Sharp Tech, exploring not only Shopify’s value propositions, but the shifting dynamics of e-commerce in the AI era, the sorts of businesses that are likely to emerge in the years to come, and why certain structural advantages from previous paradigms will not only be durable, but even stronger going forward.  — AS Thin Is In — Thick clients were the dominant form of device throughout the PC and mobile era; in an AI world, however, thin clients make much more sense. Shopify Earnings, Shopify’s AI Advantages — Shopify is poised to be one of the biggest winners from AI; it would behoove investors to actually understand the businesses they are selling. An Interview with Matthew Ball About Gaming and the Fight for Attention — An interview with Matthew Ball about the state of the video gaming industry in 2026, and why everything is a fight for attention. The NBA’s Problems Are Structural, Cultural and Fixable — What’s driving NBA fans to apathy, how the league might find its way back, and whether Adam Silver has outlived his usefulness. Back to the Future Curling, F1, and Gambling South Africa’s Ruined Synthetic Oil Giant The Dunk Contest Preview America Needs, The Top Five Bandwagons for the Next Five Years, The NBA Fines the Jazz $500,000 The All-Star Game Was a Delight, Harrowing Field Reporting from the Dunk Contest, KD Burners Rise from the Ashes The Roots of a Global Memory Shortage, Thick, Thin and Apple, Shopify is Fine, Actually

David Bushell Yesterday

Everything you never wanted to know about visually-hidden

Nobody asked for it but nevertheless, I present to you my definitive “it depends” tome on visually-hidden web content. I’ll probably make an amendment before you’ve finished reading. If you enjoy more questions than answers, buckle up! I’ll start with the original premise, even though I stray off-topic on tangents and never recover. I was nerd-sniped on Bluesky. Ana Tudor asked : Is there still any point to most styles in visually hidden classes in ’26? Any point to shrinking dimensions to and setting when to nothing via / reduces clickable area to nothing? And then no dimensions = no need for . @anatudor.bsky.social Ana proposed the following: Is this enough in 2026? As an occasional purveyor of the class myself, the question wriggled its way into my brain. I felt compelled to investigate the whole ordeal. Spoiler: I do not have a satisfactory yes-or-no answer, but I do have a wall of text! I went so deep down the rabbit hole I must start with a table of contents: I’m writing this based on the assumption that a class is considered acceptable for specific use cases . My final section on native visually-hidden addresses the bigger accessibility concerns. It’s not easy to say where this technique is appropriate. It is generally agreed to be OK but a symptom of — and not a fix for — other design issues. Appropriate use cases for are far fewer than you think. Skip to the history lesson if you’re familiar. , — there have been many variations on the class name. I’ve looked at popular implementations and compiled the kitchen sink version below. Please don’t copy this as a golden sample. It merely encompasses all I’ve seen. There are variations on the selector using pseudo-classes that allow for focus. Think “skip to main content” links, for example. What is the purpose of the class? The idea is to hide an element visually, but allow it to be discovered by assistive technology. Screen readers being the primary example. The element must be removed from layout flow. It should leave no render artefacts and have no side effects. It does this whilst trying to avoid the bugs and quirks of web browsers. If this sounds and looks just a bit hacky to you, you have a high tolerance for hacks! It’s a massive hack! How was this normalised? We’ll find out later. I’ll whittle down the properties for those unfamiliar. Absolute positioning is vital to remove the element from layout flow. Otherwise the position of surrounding elements will be affected by its presence. This crops the visible area to nothing. remains as a fallback but has long been deprecated and is obsolete. All modern browsers support . These two properties remove styles that may add layout dimensions. This group effectively gives the element zero dimensions. There are reasons for instead of and negative margin that I’ll cover later. Another property to ensure no visible pixels are drawn. I’ve seen the newer value used but what difference that makes if any is unclear. This was added to address text wrapping inside the square (I’ll explain later). So basically we have and a load of properties that attempted to make the element invisible. We cannot use or or because those remove elements from the accessibility tree. So the big question remains: why must we still ‘zero’ the dimensions? Why is not sufficient? To make sense of this mystery I went back to the beginning. It was tricky to research this topic because older articles have been corrected with modern information. 
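For reference, the kind of "kitchen sink" class walked through above usually looks something like the following. This is a sketch assembled from widely circulated variants and the properties described in the post, not necessarily the exact compilation the author reviewed.

```css
/* A commonly circulated "kitchen sink" visually-hidden class.
   Sketch only: assembled from widely shared variants, not the author's exact version. */
.visually-hidden {
  position: absolute;       /* remove the element from layout flow */
  width: 1px;               /* near-zero box; true zero has triggered screen reader bugs */
  height: 1px;
  padding: 0;
  margin: -1px;             /* pull the 1px box out of the visible area */
  border: 0;
  overflow: hidden;         /* make sure nothing inside paints */
  clip: rect(0, 0, 0, 0);   /* legacy crop-to-nothing; long deprecated */
  clip-path: inset(50%);    /* the modern replacement for clip */
  white-space: nowrap;      /* avoid text being read "smushed" after wrapping in a 1px box */
}
```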
I recovered many details from the archives and mailing lists with the help of those involved. They’re cited along the way. Our journey begins in November 2004. A draft document titled “CSS Techniques for WCAG 2.0” edited by Wendy Chisholm and Becky Gibson includes a technique for invisible labels. While it is usually best to include visual labels for all form controls, there are situations where a visual label is not needed due to the surrounding textual description of the control and/or the content the control contains. Users of screen readers, however, need each form control to be explicitly labeled so the intent of the control is well understood when navigated to directly. Creating Invisible labels for form elements ( history ) The following CSS was provided: Could this be the original class? My research jumped through decades but eventually I found an email thread “CSS and invisible labels for forms” on the W3C WAI mailing list. This was a month prior, preluding the WCAG draft. A different technique from Bob Easton was noted: The beauty of this technique is that it enables using as much text as we feel appropriate, and the elements we feel appropriate. Imagine placing instructive text about the accessibility features of the page off left (as well as on the site’s accessibility statement). Imagine interspersing “start of…” landmarks through a page with heading tags. Or, imagine parking full lists off left, lists of access keys, for example. Screen readers can easily collect all headings and read complete lists. Now, we have a made for screen reader technique that really works! Screenreader Visibility - Bob Easton (2003) Easton credited both Choan Gálvez and Dave Shea for their contributions. In the same thread, Gez Lemon proposed to ensure that text doesn’t bleed into the display area. Following up, Becky Gibson shared a test case covering the ideas. Lemon later published an article “Invisible Form Prompts” about the WCAG plans, which attracted plenty of commenters including Bob Easton. The resulting WCAG draft guideline discussed both ideas. Note that instead of using the nosize style described above, you could instead use postion:absolute; and left:-200px; to position the label “offscreen”. This technique works with the screen readers as well. Only position elements offscreen in the top or left direction, if you put an item off to the right or the bottom, many browsers will add scroll bars to allow the user to reach the content. Creating Invisible labels for form elements Two options were known and considered towards the end of 2004. Why not both? Indeed, it appears Paul Bohman on the WebAIM mailing list suggested such a combination in February 2004. Bohman even discovered possibly the first zero width bug. I originally recommended setting the height and width to 0 pixels. This works with JAWS and Home Page Reader. However, this does not work with Window Eyes. If you set the height and width to 1 pixel, then the technique works with all browsers and all three of the screen readers I tested. Re: Hiding text using CSS - Paul Bohman Later in May 2004, Bohman along with Shane Anderson published a paper on this technique. Citations within included Bob Easton and Tom Gilder. Aside note: other zero width bugs have been discovered since. Manuel Matuzović noted in 2023 that links in Safari were not focusable. The zero width story continues as recently as February 2026 (last week). In browse mode in web browsers, NVDA no longer treats controls with 0 width or height as invisible.
This may make it possible to access previously inaccessible “screen reader only” content on some websites. NVDA 2026.1 Beta TWO now available - NV Access News Digging further into WebAIM’s email archive uncovered a 2003 thread in which Tom Gilder shared a class for skip navigation links. I found Gilder’s blog in the web archives introducing this technique. I thought I’d put down my “skip navigation” link method down in proper writing as people seem to like it (and it gives me something to write about!). Try moving through the links on this page using the keyboard - the first link should magically appear from thin air and allow you to quickly jump to the blog tools, which modern/visual/graphical/CSS-enabled browsers (someone really needs to come up with an acronym for that) should display to the left of the content. Skip-a-dee-doo-dah - Tom Gilder Gilder’s post links to a Dave Shea post which in turn mentions the 2002 book “Building Accessible Websites” by Joe Clark. Chapter eight discusses the necessity of a “skip navigation” link due to table-based layout but advises: Keep them visible! Well-intentioned developers who already use page anchors to skip navigation will go to the trouble to set the anchor text in the tiniest possible font in the same colour as the background, rendering it invisible to graphical browsers (unless you happen to pass the mouse over it and notice the cursor shape change). Building Accessible Websites - 08. Navigation - Joe Clark Clark expressed frustration over common tricks like the invisible pixel. It’s clear no class existed when this was written. Choan Gálvez informed me that Eric Meyer would have the css-discuss mailing list. Eric kindly searched the backups but didn’t find any earlier discussion. However, Eric did find a thread on the W3C mailing list from 1999 in which Ian Jacobs (IBM) discusses the accessibility of “skip navigation” links. The desire to visually hide “skip navigation” links was likely the main precursor to the early techniques. In fact, Bob Easton said as much: As we move from tag soup to CSS governed design, we throw out the layout tables and we throw out the spacer images. Great! It feels wonderful to do that kind of house cleaning. So, what do we do with those “skip navigation” links that used to be attached to the invisible spacer images? Screenreader Visibility - Bob Easton (2003) I had originally missed that in my excitement at seeing the class. I reckon we’ve reached the source of the class. At least conceptually. Technically, the class emerged from several ideas, rather than a “eureka” moment. Perhaps more can be gleaned from other CSS techniques, such as the desire to improve accessibility of CSS image replacement. Bob Easton retired in 2008 after a 40-year career at IBM. I reached out to Bob, who was surprised to learn this technique was still a topic today †. Bob emphasised the fact that it was always a clumsy workaround and something CSS probably wasn’t intended to accommodate. I’ll share more of Bob’s thoughts later. † I might have overdone the enthusiasm Let’s take an intermission! My contact page is where you can send corrections by the way :) The class stabilised for a period. Visit 2006 in the Wayback Machine to see WebAIM’s guide to invisible content — Paul Bohman’s version is still recommended. Moving forward to 2011, I found Jonathan Snook discussing the “clip method”. Snook leads us to Drupal developer Jeff Burnz the previous year.
[…] we still have the big problem of the page “jump” issue if this is applied to a focusable element, such as a link, like skip navigation links. WebAim and a few others endorse using the LEFT property instead of TOP, but this no go for Drupal because of major pain-in-the-butt issues with RTL. In early May 2010 I was getting pretty frustrated with this issue so I pulled out a big HTML reference and started scanning through it for any, and I mean ANY property I might have overlooked that could possible be used to solve this thorny issue. It was then I recalled using clip on a recent project so I looked up its values and yes, it can have 0 as a value. Using CSS clip as an Accessible Method of Hiding Content - Jeff Burnz It would seem Burnz discovered the technique independently and was probably the first to write about it. Burnz also notes a right-to-left (RTL) issue. This could explain why pushing content off-screen fell out of fashion. 2010 also saw the arrival of HTML5 Boilerplate along with issue #194 in which Jonathan Neal plays a key role in the discussion and comments: If we want to correct for every seemingly-reasonable possibility of overflow in every browser then we may want to consider [code below] This was their final decision. I’ve removed for clarity. This is very close to what we have now, no surprise since HTML5 Boilerplate was extremely popular. I’m leaning towards concluding that the additional properties are really just there for the “possibility” of pixels escaping containment as much as fixing any identified problem. Thierry Koblentz covered the state of affairs in 2012, noting that: Webkit, Opera and to some extent IE do not play ball with [clip]. Koblentz prophesies: I wrote the declarations in the previous rule in a particular order because if one day clip works as everyone would expect, then we could drop all declarations after clip, and go back to the original Clip your hidden content for better accessibility - Thierry Koblentz Sound familiar? With those browsers obsolete, and if behaves itself, can the other properties be removed? Well, we have 14 years of new bugs features to consider first. In 2016, J. Renée Beach published: Beware smushed off-screen accessible text. This appears to be the origin of (as demonstrated by Vispero). Over a few sessions, Matt mentioned that the string of text “Show more reactions” was being smushed together and read as “Showmorereactions”. Beach’s class did not include the kitchen sink. The addition of became standard alongside everything else. Aside note: the origin of remains elusive. One Bootstrap issue shows it was rediscovered in 2018 to fix a browser bug. However, another HTML5 Boilerplate issue dated 2017 suggests negative margin broke reading order. Josh Comeau shared a React component in 2024 without margin. One of many examples showing that it has come in and out of fashion. We started with WCAG so let’s end there. The latest WCAG technique for “Using CSS to hide a portion of the link text” provides the following code. Circa 2020 the property was added as browser support increased and became deprecated. An obvious change I’m not sure warrants investigation (although someone had to be first!) That brings us back to what we have today. Are you still with me? As we’ve seen, many of the properties were thrown in for good measure. They exist to ensure absolutely no pixels are painted. They were adapted over the years to avoid various bugs, quirks, and edge cases. How many such decisions are now irrelevant?
This is a classic Chesterton’s Fence scenario. Do not remove a fence until you know why it was put up in the first place. Well we kinda know why but the specifics are practically folklore at this point. Despite all that research, can we say for sure if any “why” is still relevant? Back to Ana Tudor’s suggestion. How do we know for sure? The only way is extensive testing. Unfortunately, I have neither the time nor skill to perform that adequately here. There is at least one concern with the code above, Curtis Wilcox noted that in Safari the focus ring behaves differently. Other minimum viable ideas have been presented before. Scott O’Hara proposed a different two-liner using . JAWS, Narrator, NVDA with Edge all seem to behave just fine. As do Firefox with JAWS and NVDA, and Safari on macOS with VoiceOver. Seems also fine with iOS VO+Safari and Android TalkBack with Firefox or Chrome. In none of these cases do we get the odd focus rings that have occurred with other visually hidden styles, as the content is scaled down to zero. Also because not hacked into a 1px by 1px box, there’s no text wrapping occurring, so no need to fix that issue. transform scale(0) to visually hide content - Scott O’Hara Sounds promising! It turns out Katrin Kampfrath had explored both minimum viable classes a couple of years ago, testing them against the traditional class. I am missing the experience and moreover actual user feedback, however, i prefer the screen reader read cursor to stay roughly in the document flow. There are screen reader users who can see. I suppose, a jumping read cursor is a bit like a shifting layout. Exploring the visually-hidden css - Katrin Kampfrath Kampfrath’s limited testing found the read cursor size differs for each class. The technique was favoured but caution is given. A few more years ago, Kitty Giraudel tested several ideas concluding that was still the most accessible for specific text use. This technique should only be used to mask text. In other words, there shouldn’t be any focusable element inside the hidden element. This could lead to annoying behaviours, like scrolling to an invisible element. Hiding content responsibly - Kitty Giraudel Zell Liew proposed a different idea in 2019. Many developers voiced their opinions, concerns, and experiments over at Twitter. I wanted to share with you what I consolidated and learned. A new (and easy) way to hide content accessibly - Zell Liew Liew’s idea was unfortunately torn asunder. Although there are cases like inclusively hiding checkboxes where near-zero opacity is more accessible. I’ve started to go back in time again! I’m also starting to question whether this class is a good idea. Unless we are capable and prepared to thoroughly test across every combination of browser and assistive technology — and keep that information updated — it’s impossible to recommend anything. This is impossible for developers! Why can’t browser vendors solve this natively? Once you’ve written 3000 words on a twenty year old CSS hack you start to question why it hasn’t been baked into web standards by now. Ben Myers wrote “The Web Needs a Native .visually-hidden” proposing ideas from HTML attributes to CSS properties. Scott O’Hara responded noting larger accessibility issues that are not so easily handled. O’Hara concludes: Introducing a native mechanism to save developers the trouble of having to use a wildly available CSS ruleset doesn’t solve any of those underlying issues. It just further pushes them under the rug. 
Visually hidden content is a hack that needs to be resolved, not enshrined - Scott O’Hara Sara Soueidan had floated the topic to the CSS working group back in 2016. Soueidan closed the issue in 2025, coming to a similar conclusion. I’ve been teaching accessibility for a little less than a decade now and if there’s one thing I learned is that developers will resort to using utility to do things that are more often than not just bad design decisions. Yes, there are valid and important use cases. But I agree with all of @scottaohara’s points, and most importantly I agree that we need to fix the underlying issues instead of standardizing a technique that is guaranteed to be overused and misused even more once it gets easier to use. csswg-drafts comment - Sara Soueidan Adrian Roselli has a blog post listing priorities for assigning an accessible name to a control. Like O’Hara and Soueidan, Roselli recognises there is no silver bullet. Hidden text is also used too casually to provide information for just screen reader users, creating overly-verbose content . For sighted screen reader users , it can be a frustrating experience to not be able to find what the screen reader is speaking, potentially causing the user to get lost on the page while visually hunting for it. My Priority of Methods for Labeling a Control - Adrian Roselli In short, many believe that a native visually-hidden would do more harm than good. The use-cases are far more nuanced and context sensitive than developers realise. It’s often a half-fix for a problem that can be avoided with better design. I’m torn on whether I agree that it’s ultimately a bad idea. A native version would give software an opportunity to understand the developer’s intent and define how “visually hidden” works in practice. It would be a pragmatic addition. The technique has persisted for over two decades and is still mentioned by WCAG. Yet it remains hacks upon hacks! How has it survived for so long? Is that a failure of developers, or a failure of the web platform? The web is overrun with inaccessible div soup . That is inexcusable. For the rest of us who care about accessibility — who try our best — I can’t help but feel the web platform has let us down. We shouldn’t be perilously navigating code hacks, conflicting advice, and half-supported standards. We need more energy money dedicated to accessibility. Not all problems can be solved with money. But what of the thousands of unpaid hours, whether volunteered or solicited, from those seeking to improve the web? I risk spiralling into a rant about browser vendors’ financial incentives, so let’s wrap up! I’ll end by quoting Bob Easton from our email conversation: From my early days in web development, I came to the belief that semantic HTML, combined with faultless keyboard navigation were the essentials for blind users. Experience with screen reader users bears that out. Where they might occasionally get tripped up is due to developers who are more interested in appearance than good structural practices. The use cases for hidden content are very few, such as hidden information about where a search field is, when an appearance-centric developer decided to present a search field with no visual label, just a cute unlabeled image of a magnifying glass. […] The people promoting hidden information are either deficient in using good structural practices, or not experienced with tools used by people they want to help. Bob ended with: You can’t go wrong with well crafted, semantically accurate structure. 
Ain’t that the truth. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

neilzone Yesterday

Updating my TicWatch to AsteroidOS 2.0

I have a TicWatch Pro 2020, running AsteroidOS. I’ve been using it for about three months now, and I’ve been very pleased with it. Sure, it would be great if the battery life was longer than a day-and-a-bit, but this just means that I need to charge it each night, which is not a major hardship. It does everything I want from a smartwatch, and not really anything more. AsteroidOS launched AsteroidOS v2.0 a few days ago, and I was keen to give it a try. I installed it by following the instructions for the TicWatch (i.e. a new installation, rather than an “update”), and this worked fine. I had to re-pair the watch to GadgetBridge, and then I rebooted it. When it came up, it connected to my phone, and set the time correctly. I have a feeling that the update has removed the watch face that I was using, and re-installing it would be a faff, so I just picked one of the default faces. Since I don’t have “Always on” enabled (I see the TicWatch’s secondary LCD most of the time), this is not a big deal for me. I turned off tilt-to-wake (in Display settings), because I don’t want that; I imagine that it would wake the watch up too often, increasing power consumption. The “compass” app is quite cool, giving me easy direction finding on my wrist, but I’m not sure I’ll have much use for it. The heart rate sensor works, showing that I do indeed have a pulse, but again, I don’t really need this day to day. Perhaps because of my incredibly basic use, most of the user-facing changes are not particularly relevant to me. I’ll be interested to see if the battery life improvements apply to my watch though. A simple, successful update, and one which, thankfully, does not get in the way of me using the watch.


Stefano Verna

This week on the People and Blogs series we have an interview with Stefano Verna, whose blog can be found at squeaki.sh . Tired of RSS? Read this in your browser or sign up for the newsletter . The People and Blogs series is supported by x-way and the other 116 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. I’m Stefano, I’m 40 years old, I live in Italy. I have three sons (the oldest turned 18 last week — happy birthday Ale!). I try to be a present and attentive father, and I believe I am, despite the compromises that come with divorce. I discovered programming at 12 with a little book I found at the library featuring games in QuickBASIC… and I never stopped from there. Creating digital things has always been my greatest passion. In my first year of university, I released one of the very first Firefox extensions , which was an immediate huge success: in no time, 2M daily users… and thousands were donating on PayPal! A huge thing for a 19-year-old. From that experience on, I kept recreating that recipe: building my own software on the web. After many years in the web agency world, one of the many ideas I threw together in my spare time for fun, DatoCMS , was once again very successful. 10 years after the first line of code I wrote, the product continues to exist, grow, and be used all over the world. Today we’re about 15 people working on it. For me, it’s a true dream come true. Apart from programming, which continues to be a fundamental part of my life in terms of fulfillment and satisfaction (perhaps too much so), I’m an idealist, a man of the left, and a great enthusiast of meditation, psychology, and personal growth work in general. I’ve had various blogs in my life. The first one was as a teenager, in the full Blogger era (2004), to communicate and find friends. I even found my future wife and mother of my children there. The second was to find work and make myself known professionally (the articles are still on Github ). My current blog, squeakish , was born after a month-long vacation I took a couple of years ago in Brazil: disconnecting (for the first time in my life, actually!) from responsibilities for an extended period gave me the chance to think about many things differently. It inspired me and made me want to study and write again. It’s called squeakish because I’m (proudly?) the exact opposite of a solid and confident person. I’m full of internal creaks, and my blog contains posts that represent “yieldings,” vulnerabilities that I feel like exploring and sharing. Inspiration always comes from personal reflections that I feel the need to communicate. Often these are difficult things that I struggle to put out into the world. Of these reflections, only a small portion ends up on the blog. Most of them I feel are too personal in their details to be of value to someone else. This is perhaps the biggest block at the moment: understanding the threshold for when something should move from my personal journal to being shared on the blog. I should probably worry less about it? My posts are always written in a single session — I want them to remain as authentic as possible to the moment they were conceived. I wait a few hours before publishing them, to be able to reread them and see if something can be improved, and then they’re online. My creative process needs to be facilitated, first of all by taking dedicated time. This is the fundamental thing. 
Normally I’ve always written from home, in my usual “nest,” but lately (and even right now) I’m trying to change locations (bars, cafés). Surrounding yourself with different things helps you see things differently. I also try to avoid any kind of “aesthetic” distraction — I write in a notepad without any formatting (Paper), and only at the very end do I copy it into the CMS and format it.

The site is in Astro and the code is available on Github: there’s a README that explains the details. I had fun learning and implementing webmentions, microformats, and backfeeding from Mastodon, and I wrote a brief guide about it. The content is on, well, DatoCMS. I didn’t want to invent anything new — it’s what I know like the back of my hand, and I know it already gives me everything I need and like, including easy image and video management. The site is deployed on Cloudflare Pages, the domain is on Spaceship. I tried to keep the layout as simple as possible, and even copied the Hey World layout. No distractions! The first version of the site was in Svelte. Working in the headless CMS world, over ten years I’ve worked with all the available platforms, static site generators, and frameworks, and I’ve come to the conclusion that today Astro is the most suitable and versatile tool for producing content-driven websites. YMMV.

The name “Squeakish” still appeals to me — it has something playful about it and doesn’t take itself too seriously — but I’ve never been a fanatic about finding perfect names. So yeah, right now I’m good with what I have!

The only cost… is for the domain ($30/year)? Cloudflare Pages is free, the DatoCMS project is on a free plan. Personally, I have no need to monetize my blog. With monetization automatically comes a sense of responsibility, and this is exactly the opposite of what I’m looking for. I have no negative opinion about those who do it. The important thing is to avoid the enshittification that money normally brings. Personal blogs, as you well know, are the soul of the Internet, and we must try to preserve them free and sincere.

God, there are so many! My feed reader is actually publicly visible at /news and at the bottom there’s the list of people I follow. Personally, I’d go with David Celis and/or Chris! Having your own simple feed reader publicly available inside your own website is something I haven’t seen anywhere else, but it’s simple to build and I feel it gives a nice high-level view of what one person is currently feeding themselves with. I’ve actually written a bit about this.

I just watched a wonderful film, so I feel the need to share it: O Filho de mil Homens. Finally, I’d like to use this space to offer my experience (personal? professional?) to anyone who might need it: if you’d like to have a chat, and you think I might be able to help you with something, reach out via PM on Mastodon and I’ll try to do my best!

Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 129 interviews. Make sure to also say thank you to Brennan Kenneth Brown and the other 116 supporters for making this series possible.

0 views
iDiallo Yesterday

Teleoperation is Always the Butt of the Joke

A few years back, the term "AI" took an unexpected turn when it was redefined as "Actual Indian". As in, a person in India operating the machine remotely. I first heard the term when Amazon was boasting about their cashierless grocery stores. There was a big sign in the store that said "Just Walk Out," meaning you grab your items, walk out, and get charged the correct amount automatically. How did they do it? According to Amazon, they used AI. What kind of AI exactly, nobody was quite sure. But customers started reporting something odd. They weren't charged immediately after leaving the store. Some said it took several days for a charge to appear on their account. It eventually came out that the technology was sophisticated tracking performed by Amazon's team in India. Workers would manually review footage of each customer's visit and charge them accordingly. What's fascinating is that this operation was impressive. Coordinating thousands of store visits, matching items to customers across multiple camera angles, and doing it accurately enough that most people never noticed the delay. But because it was buried under the "AI" label, the moment the truth came out, the whole thing became a punchline.

In 2024, Tesla held their "We, Robot" event, where Optimus robots operated a bar. They were serving drinks, dancing, and mingling with guests. It was a pretty impressive display. The robots moved fluidly, held conversations, and handed off drinks without fumbling. Elon Musk claimed they were AI-driven, fully autonomous. People were genuinely impressed by the interactions, and for good reason. Fluid, bipedal locomotion in a crowded social environment is an extraordinarily hard robotics problem. The moment it came out that the robots were teleoperated, the sentiment flipped entirely. It didn't matter how dexterous or natural the movement was. It felt like a magic trick exposed. But think about what was actually being demonstrated. Humanoid robots walking through a crowd, responding in real time to a human operator's inputs, without tripping over guests or spilling drinks. That's not nothing. Slapping "AI" on it turned an engineering achievement into a scandal.

More recently, the company 1X unveiled a friendly humanoid robot available for purchase at $20,000. The demo looks genuinely impressive. The robot can perform domestic tasks like doing laundry, folding clothes, and navigating a home environment. And if it doesn't know how to do something, it can be taught. You can authorize a remote worker to take control, demonstrate the task, and the robot learns from that demonstration, adding it to its growing repertoire. That's a legitimately interesting approach to machine learning through human guidance. What got glossed over is how much of the current capability relies on that remote worker. Right after the unveiling, the Wall Street Journal was invited to test the robots. In their video, the robot is being operated entirely by a person sitting in the next room. To be fair, the smoothness of that teleoperation is itself a technical achievement. Real-time control of a bipedal robot performing fine motor tasks, like folding a shirt, requires low-latency communication, precise motor control, and a well-designed interface for the operator. That's years of engineering work. But because teleoperation isn't the product being sold (AI is), that achievement gets treated as evidence of fraud rather than progress.
We've built an environment where "teleoperated" has become a slur, and anything short of full autonomy is seen as cheating. Even Waymo, whose self-driving cars have logged millions of autonomous miles, feels compelled to publicly defend itself against accusations of secretly using remote operators. As if any human involvement would invalidate everything it's built. I think teleoperation is pretty impressive. It's a valuable technology in its own right. Surgeons use it to operate across continents. Industrial operators use it to work in places no human could safely go. In all of these cases, having a human-in-the-loop is the point. Every "AI" product that turns out to have a person behind the curtain makes the public more skeptical. In a parallel universe, there is a version of the tech industry that celebrates teleoperation as a stepping stone. In it, we are building tools to make collaboration easier through teleoperation, and it's not viewed as an embarrassing secret.

0 views
Neil Madden Yesterday

Looking for vulnerabilities is the last thing I do

There’s a common misconception among developers that my job, as an (application) Security Engineer, is to just search for security bugs in their code. They may well have seen junior security engineers doing this kind of thing. But, although this can be useful (and is part of the job), it’s not what I focus on and it can be counterproductive. Let me explain.

If I’m coming into a company as the sole or lead application security engineer (common), especially if they haven’t had someone doing that role for a while, my first task is always to see how mature their existing processes and tooling are. If we find a vulnerability, how quickly are they likely to be able to fix it and get a patch out? The fixing-the-bug part of this is the easy part. Developers usually have established procedures in place for fixing bugs. Often, organisations that don’t have established processes for security get bogged down in the communication-to-customers phase: nobody knows who can sign off a security advisory, so things tend to escalate. It’s not unusual to find people insisting that everything needs to be run past the CEO and Legal. All this is to say that for companies with low security maturity, finding security bugs comes with an outsized overhead in terms of tying up resources. If your security team is one or two people, then this makes it harder to get out of this rut and into a better place.

So my primary job is to improve the processes and documentation so that these incidents become a well-oiled machine, and don’t tie up resources any more than necessary. I generally use OWASP SAMM as a framework to measure what needs to be done (sticking largely to the Design, Implementation & Verification functions), but it boils down to a number of phases to raise the bar. In both SAMM and the phases I describe below, looking for bugs is way down the list. There will be bugs. There will be lots of bugs, and some of them will be really serious. If you go looking for them, you will find them, and that will feel good and earn some kudos. And it will make the product a little bit more secure. But if you instead wait and do the boring grunt work first to improve the security posture of the organisation, then when you do find the security bugs you will be in a better place to fix them systematically and prevent them coming back. Otherwise you risk perpetually fighting just to keep your head above water, fixing one ad-hoc issue after another, which is a way to burn out while leaving the org no better off than when you joined.

Firstly, stopping the rot. If there has not been a culture of security previously, then developers may still be implementing features in a way that introduces new security issues in future. There are few techniques as effective as having your developers know and care about security. Specific tasks here include revamping the secure development training (almost always crap; I tend to develop something in-house, tailored to the org), introducing threat modelling, and adding code review checklists/guidelines.

Next, developing internal standards for at least the following (and then communicating them to developers!): secure coding and code review; use of cryptography; and vulnerability management (detection, tracking, prioritisation, remediation, and communication).

Then, identifying a “security champion” in each team and teaching them how to triage and score vulnerabilities with CVSS, so this doesn’t become another bottleneck on the appsec team/individual.
This also helps foster the idea that security is developers’ responsibility, not something to off-load to a separate security person. Securing build pipelines, and adding standard tooling: SCA first, then secret scans, and then SAST. Report-only to begin with, with regular meetings to review any High/Critical issues and identify false positives. Only start failing the build once confidence in the tool has been earned. Finally, after all this is in place, then I will start actively looking for security bugs: via more aggressive SAST, DAST (e.g. OWASP ZAP), internal testing/code review, and competent external pen tests. (Often orgs have existing tick-box external pen testing for compliance, so this is about finding pentesters who actually know how to find bugs).
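To make the “report-only to begin with” rollout described above concrete, here is a minimal sketch of what an early, non-blocking scan stage might look like. The script name, flag, and report path are illustrative assumptions on my part, not a recommendation of any particular scanner:

```sh
#!/bin/sh
# Hypothetical report-only scan stage for a CI pipeline.
# run-sast-scan.sh stands in for whichever SCA/secret-scanning/SAST tool is in use.
set -eu

# Run the scanner and archive its findings, but don't let them fail the build
# yet; High/Critical results get reviewed in the regular triage meeting instead.
./run-sast-scan.sh --output sast-report.json || echo "scanner reported findings (report-only mode)"

# Once confidence in the tool has been earned, enforcement is a one-line change:
# drop the fallback above so that findings start failing the build.
```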

0 views

How to run Claude Code in a Tmux popup window with persistent sessions

Hey, what's up? It's Takuya. I've been using Claude Code in my terminal workflow. At first, I was running it at the right side of my terminal using tmux, but I found it wasn't very useful because the pane was too narrow to display messages and diffs. I often had to hit a keybinding to maximize the pane, which was painful. Next, I started using popup windows to run Claude Code — press a keybinding, get a Claude Code session, dismiss it, and pick up right where you left off. In this article, I'd like to share how to configure tmux to accomplish it.

You can display popup windows with tmux's display-popup command, which is great for quick-access tools. I've been using it for quickly checking git status with LazyGit, bound to a key under my prefix. This works perfectly for LazyGit because it's a short-lived process — you open it, stage some changes, commit, and close it. However, there is a problem with running Claude Code (or any other AI tools) in tmux popup windows. You want to keep a conversation going across multiple interactions. If you bind Claude Code the same way, you'll find that closing the popup also kills the Claude Code process. There's no way to dismiss the popup without quitting the session. You'd have to start fresh every time, which defeats the purpose.

The trick is to run Claude Code in a separate tmux session, and then attach to that session inside the popup, which means that you are going to use nested tmux sessions. When you close the popup, the session keeps running in the background. The full configuration boils down to three steps (a reconstructed sketch follows at the end of this post): generate a unique session name from the working directory, create the session if it doesn't already exist, and attach to the session in a popup.

Let's break down what this does. It takes the current pane's working directory, hashes it with MD5, and uses the first 8 characters as a session identifier, so you get a short, stable session name per directory. The key insight here is that each directory gets its own Claude Code session. The existence check prevents creating duplicate sessions. If a session for this directory already exists, it skips creation entirely. Otherwise, it creates a new detached session with the working directory set to your current path, running Claude Code as the initial command. Finally, it opens an 80%-sized popup that attaches to the background session. You can change the size as you like. When you close the popup (with your detach keybinding), the session stays alive. Yippee!

My dotfiles are available here. That's it. A very simple and intuitive hack. I hope it's helpful for your AI coding workflow :) Have a productive day!
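The configuration snippets themselves didn't survive in this feed, so here is a minimal sketch of the approach described above. The key binding, session-name prefix, script location, and popup size are my own assumptions rather than Takuya's exact dotfiles; the logic simply follows the three steps he outlines. First, a small helper script somewhere on your PATH:

```sh
#!/bin/sh
# claude-popup: one persistent Claude Code session per project directory.
# Illustrative reconstruction of the setup described above; adjust names and paths to taste.

unset TMUX  # allow attaching to another session from inside the popup

dir="$(pwd)"                                    # the popup is started in the pane's directory
id="$(printf '%s' "$dir" | md5sum | cut -c1-8)" # first 8 chars of the MD5 (use `md5 -q` on macOS)
session="claude-$id"                            # e.g. claude-3f2a9c1b

# Create a detached session running Claude Code, unless one already exists for this directory
if ! tmux has-session -t "$session" 2>/dev/null; then
  tmux new-session -d -s "$session" -c "$dir" claude
fi

# Attach inside the popup; closing or detaching the popup leaves the session running
exec tmux attach-session -t "$session"
```

And then a binding in ~/.tmux.conf that runs it inside a popup sized at 80% of the window, started in the current pane's directory (the -E flag closes the popup when the inner command exits):

```sh
bind-key C display-popup -E -w 80% -h 80% -d "#{pane_current_path}" claude-popup
```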

0 views

Goodbye Software Guilds, Hello Software Factories

This article was originally published on X. You would be forgiven for believing programming is a white collar job. In fact, given the allure of joining a startup or FAANG and reaping generational wealth, all while seated in front of a keyboard wearing your favorite hoodie, programming may very well be considered by most to be the most white collar of white collar jobs. But that’s confusing the profession with the job. The profession of programming is cushy to be sure (I’ve been one my entire life and have zero calluses to prove it), but the job itself has historically resembled that of an electrician or plumber more than, say, an accountant or doctor. If you’ve never worked alongside a team of programmers then this assertion probably sounds absurd, but indulge me for a moment.

On any given day, programmers will read and write specifications, patch systems, and hold coordination meetings, often called standups. Companies hire programmers as apprentices, and experienced programmers sometimes refer to themselves as craftsmen. Knowledge is often passed along via a practice known as pair programming in which an experienced developer sits next to a less experienced colleague in order to pass along institutional knowledge and hard-won techniques. Best practices, gotchas, and tips are whispered in hallways and over after-hours drinks. In other words, a guild.

Guilds have existed for hundreds of years, and historically, if you were a blacksmith, weaver, or another type of artisan, you probably belonged to one. New members entered as apprentices, progressed to journeymen, and eventually, if they stuck with it long enough, were deemed masters of their craft. Guild members enforced standards, created and codified new techniques, and coordinated learning. And if you were part of the software profession at any point in the last 50 years, that is precisely what you were participating in.

But software guilds are now dead. They are being replaced by software factories, and with them both the profession and job of software developer are being transformed into something entirely new. This new type of factory consists of machines that work together to produce not widgets, cars, or airplanes, but code. These machines are what we currently call agents, although I suspect we have not yet settled on the final terminology, let alone on how this factory will ultimately operate. Regardless, early indications suggest this transformation is already underway. Properly tuned and maintained, the factory can produce code at a speed and with a quality no competing guild could match.

What’s even more fascinating about this software factory is its input. The inputs come directly from the nontechnical members of the organization, notably subject matter experts. Relieved of the need to translate their ideas through the guild, these individuals can now use AI-powered coding agents (Claude Code seems to be the favorite at our firm) to build useful business applications in less time than it once took just to schedule a requirements meeting. In other words, for many use cases, the translation layer between subject matter expert and machine has evaporated. If this sounds implausible, you probably have not watched a nontechnical person use a tool like Claude Code. As one of many examples I could cite, earlier this week my colleague and BeePurple CEO Stevee Danielle used Claude Code to build an application modeling SAMHSA Peer Support Certification standards across all 50 states. She went from idea to MVP in four hours.
Along the way, she imported data from every state and structured the application to address specific reporting gaps identified by industry leaders in published research. This is just one example; I could devote multiple articles to this sort of software which is currently being built within Xenon's portfolio of companies.

So what will the guild members do? They will configure the factory so that people like Stevee can move code all the way to production. As factory technicians, they will tune the machines on the floor to ensure inputs are converted into reliable output, maximizing the velocity of code flowing from nontechnical team members through the assembly line. Each agent performs a critical function in the line: one codes, another handles QA, another generates documentation, another reviews pull requests, another deploys, and so on. They will work in unison, much like today’s CI/CD pipelines, with one critical difference: AI, not guild members, will play the central role, not only executing each stage but continuously analyzing and improving the line as it runs.

Now I will say out loud the part everyone is probably thinking: this factory will eventually run with almost no technicians. At Adalo, where I serve as CTO, we are already seeing early glimpses of this future. In recent weeks we built an agent that has been running nearly around the clock in either bug-fixing or feature-creation mode. When operating in the former mode, we do not tell it which bugs to fix. Read that sentence again. It finds, triages, fixes, and verifies bugs on its own. After each run, it updates a persistent memory with lessons learned, optimization ideas, and other improvements so that it can operate even more efficiently the next time. My colleagues and I have described watching it work as mesmerizing.

The very idea of this becoming reality is exciting, terrifying, and mystifying. As a lifelong programming nerd and guild member, what is happening right now is the most incredible thing I have ever seen, and I have been leading efforts across the portfolio to ensure these factories are configured to meet this new reality. I am convinced that much of society has not yet begun to grasp the magnitude of what is happening. One way or the other, the factory era of software has begun. The author Jason Gilmore regularly advises investment banks, universities, and other organizations on AI's impact on software development processes. Get in touch with Jason at [email protected].

0 views
Xe Iaso Yesterday

Life Update: On medical leave

Hey all, I hope you're doing well. I'm going to be on medical leave until early April. If you are a sponsor , then you can join the Discord for me to post occasional updates in real time. I'm gonna be in the hospital for at least a week as of the day of this post. I have a bunch of things queued up both at work and on this blog. Please do share them when you see them cross your feeds, I hope that they'll be as useful as my posts normally are. I'm under a fair bit of stress leading up to this medical leave and I'm hoping that my usual style shines through as much as I hope it is. Focusing on writing is hard when the Big Anxiety is hitting as hard as it is. Don't worry about me. I want you to be happy for me. This is very good medical leave. I'm not going to go into specifics for privacy reasons, but know that this is something I've wanted to do for over a decade but haven't gotten the chance due to the timing never working out. I'll see you on the other side. Stay safe out there.

0 views
Justin Duke Yesterday

Maybe use Plain

When I wrote about Help Scout, much of my praise was appositional. They were the one tool I saw that did not aggressively shoehorn you into using them as a CRM to the detriment of the core product itself. This is still true. They launched a redesign that I personally don't love, but purely on subjective grounds. And they're still a fairly reasonable option for — and I mean this in a non-derogatory way — baby's first support system. I will also call out: if you want something even simpler, there's Jelly, an app that leans fully into the shared inbox side of things. It is less featureful than Help Scout, but with a better design and lower price point. If I was starting a new app today, this is what I would reach for first.

But nowadays I use Plain. Plain will not solve all of your problems overnight. It's only a marginally more expensive product — $35 per user per month compared to Help Scout's $25 per user per month. The built-in Linear integration is worth its weight in gold if you're already using Linear, and its customer cards (the equivalent of Help Scout's sidebar widgets) are marginally more ergonomic to work with. The biggest downside that we've had thus far is reliability — less in a cosmic or existential sense and more that Plain has had a disquieting number of small-potatoes incidents over the past three to six months.

My personal flowchart for what service to use in this genre is something like: start with Jelly; if I need something more than that, see if anyone else on the team has specific experience that they care a lot about, because half the game here is in muscle memory rather than functionality; if not, use Plain. But the biggest thing to do is take the tooling and gravity of support seriously as early as you can.

0 views
Manuel Moreale 2 days ago

An incomplete list of things I don’t have

Hair. A nice beard. Savings. Debt. A house. Subscriptions to video streaming services. A piece of forest. Kids. A wife. A husband. Hands without scars. Arms without scars. Legs without scars. A face without scars. A monthly salary. Paid vacations. Happiness. Things I’m proud of. A normal dog. Social media profiles. Investments. Plans for the future. Plans for the present. Plans for the past. A camera. Concrete goals. Wisdom. Ai bots. Ai companions. Ai slaves. Fancy clothes. Colognes. Fame (although I am quite hungry). Faith. Horses in the back. 99 problems. Enlightenment. A daily routine. Willingness to write long posts. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs

0 views
Kev Quirk 2 days ago

Kids and Smartphones

My oldest son is 11. He'll be starting high school in September, and my wife and I want a way of keeping in touch with him as he'll be making his own way to school. The default here would be to get him a phone, but like most 11-year-old boys, he's an idiot and we don't trust him with one. So, as a test we've lent him an old phone of mine to see if he can be trusted with one under some limitations: the phone never leaves the kitchen; he only gets an hour of screen time a day between 09:00 and 19:00; and Mum and I can vet everything he's been doing on it.

And it turns out, dear reader, that rule #1 was the most important rule we could have set. He's the last of his friendship group to get a phone, so they all have WhatsApp groups with one another. The problem is those other kids are never off their phones, and my son having these kinds of rules in place makes him weird. But I don't care. He regularly has missed calls on his phone from midnight from his classmates. These aren't just calls to him either. They're group calls to the entire class. Like, what the fuck are these parents doing letting their kids have phones in their bedrooms and giving them free rein? It beggars belief and confirms every concern I had about giving him a phone. I've said it before, and I'll say it again, we need a smartphone for young people.

Lucky for us he's generally a good little sausage, and so far there's been no need for us to take his phone, reprimand him, or correct his behaviour, which I'm very proud of. I just hope it sticks. It's only been a week...

0 views

Designed to be specialists

All industries and disciplines, over time, direct people into greater and greater specialization. Those who have been working on the web since the beginning have been able to see this trend first hand, as the practices and systems grew ever more complicated and it became impossible for one person to hold it all in their head. We sometimes talk of this level of increasing complexity and specialization as inevitable or natural, when it’s neither. Moreover, like many things involving work, specialization benefits some people and immiserates others. [There is an] extreme human and cultural misery to which not only the industry of advanced capitalism but above all its institutions, its education and its culture, have reduced the technical worker. This education, in its efforts to adapt the worker to his task in the shortest possible time, has given him the capacity for a minimum of independent activity. Out of fear of creating men [ sic ] who by virtue of the too “rich” development of their abilities would refuse to submit to the discipline of a too narrow task and to the industrial hierarchy, the effort has been made to stunt them from the beginning: they were designed to be competent but limited, active but docile, intelligent but ignorant outside of anything but their function, incapable of having a horizon beyond that of their task. In short, they were designed to be specialists. Impossible not to think here of the rise of labor unions in the tech industry and the subsequent rapid (and surely coincidental) deployment of so-called AI which—unlike nearly every prior technological development in software—arrived with mandates for its use and threats of punishment for the noncompliant. Elsewhere, Gorz talks of the trend of workers being reduced to “supervisors” of automated systems that are doing the work for them. But simply watching work happen, without any of the creative, autonomous activity that would occur if they were doing the work themselves, gives rise to a degree of boredom and stupefaction that can be physically painful and spiritually debilitating. Anyone who has experienced the pleasure of creative work is likely to greatly resist that reduction; better to create workers who have never known such things. There’s some use in distinguishing here between the worker who, having learned the skills of writing software over many years, now turns to so-called AI to assist her in that task; and the worker who will follow her some years hence and may never learn those skills, but will know only the work of supervision. The former, elder worker may find some interest or curiosity in applying her knowledge to this new technology, especially as the modes and methods for doing so are still being developed. But what of the worker who begins their work a decade from now, who has been specialized to do nothing more than ask for something? What will she know beyond that menial, dispiriting little task? What kind of people are we designing now? View this post on the web , subscribe to the newsletter , or reply via email .

0 views
マリウス 2 days ago

Hold on to Your Hardware

Tl;dr at the end. For the better part of two decades, consumers lived in a golden age of tech. Memory got cheaper, storage increased in capacity and hardware got faster and absurdly affordable. Upgrades were routine, almost casual. If you needed more RAM, a bigger SSD, or a faster CPU or GPU, you barely had to wait a week for a discount offer and you moved on with your life. This era is ending. What’s forming now isn’t just another pricing cycle or a short-term shortage, it is a structural shift in the hardware industry that paints a deeply grim outlook for consumers. Today, I am urging you to hold on to your hardware, as you may not be able to replace it affordably in the future. While I have always been a stark critic of today’s consumer industry , as well as the ideas behind it , and a strong proponent of buying it for life (meaning, investing into durable, repairable, quality products) the industry’s shift has nothing to do with the protection of valuable resources or the environment, but is instead a move towards a trajectory that has the potential to erode technological self-sufficiency and independence for people all over the world. In recent months the buzzword RAM-pocalypse has started popping up across tech journalism and enthusiast circles. It’s an intentionally dramatic term that describes the sharp increase in RAM prices, primarily driven by high demand from data centers and “AI” technology, which most people had considered a mere blip in the market. This presumed temporary blip , however, turned out to be a lot more than just that, with one manufacturer after the other openly stating that prices will continue to rise, with suppliers forecasting shortages of specific components that could last well beyond 2028, and with key players like Western Digital and Micron either completely disregarding or even exiting the consumer market altogether. Note: Micron wasn’t just another supplier , but one of the three major players directly serving consumers with reasonably priced, widely available RAM and SSDs. Its departure leaves the consumer memory market effectively in the hands of only two companies: Samsung and SK Hynix . This duopoly certainly doesn’t compete on your wallet’s behalf, and it definitely wouldn’t be the first time it would optimize for margins . The RAM-pocalypse isn’t just a temporary headline anymore, but has seemingly become long-term reality. However, RAM and memory in general is only the beginning. The main reason for the shortages and hence the increased prices is data center demand, specifically from “AI” companies. These data centers require mind-boggling amounts of hardware, specifically RAM, storage drives and GPUs, which in turn are RAM-heavy graphics units for “AI” workloads. The enterprise demand for specific components simply outpaces the current global production capacity, and outbids the comparatively poor consumer market. For example, OpenAI ’s Stargate project alone reportedly requires approximately 900,000 DRAM wafers per month , which could account for roughly 40% of current global DRAM output. Other big tech giants including Google , Amazon , Microsoft , and Meta have placed open-ended orders with memory suppliers, accepting as much supply as available. The existing and future data centers for/of these companies are expected to consume 70% of all memory chips produced in 2026. However, memory is just the first domino. 
RAM and SSDs are where the pain is most visible today, but rest assured that the same forces are quietly reshaping all aspects of consumer hardware. Among the most immediate and tangible consequences of this broader supply-chain realignment are sharp, cascading price hikes across consumer electronics, with LPDDR memory standing out as an early pressure point that most consumers didn’t recognize until it was already unavoidable. LPDDR is used in smartphones, laptops, tablets, handheld consoles, routers, and increasingly even low-power PCs. It sits at the intersection of consumer demand and enterprise prioritization, making it uniquely vulnerable when manufacturers reallocate capacity toward “AI” accelerators, servers, and data-center-grade memory, where margins are higher and contracts are long-term. As fabs shift production toward HBM and server DRAM, as well as GPU wafers, consumer hardware production quietly becomes non-essential, tightening supply just as devices become more power- and memory-hungry, all while continuing on their path to remain frustratingly unserviceable and un-upgradable. The result is a ripple effect, in which device makers pay more for chips and memory and pass those costs on through higher retail prices, cut base configurations to preserve margins, or lock features behind premium tiers. At the same time, consumers lose the ability to compensate by upgrading later, because most components these days, like LPDDR, are soldered down by design. This is further amplified by scarcity, as even modest supply disruptions can spike prices disproportionately in a market where just a few suppliers dominate, turning what should be incremental cost increases into sudden jumps that affect entire product categories at once. In practice, this means that phones, ultrabooks, and embedded devices are becoming more expensive overnight, not because of new features, but because the invisible silicon inside them has quietly become a contested resource in a world that no longer builds hardware primarily for consumers.

In late January 2026, the Western Digital CEO confirmed during an earnings call that the company’s entire HDD production capacity for calendar year 2026 is already sold out. Let that sink in for a moment. Q1 hasn’t even ended and a major hard drive manufacturer has zero remaining capacity for the year. Firm purchase orders are in place with its top customers, and long-term agreements already extend into 2027 and 2028. Consumer revenue now accounts for just 5% of Western Digital’s total sales, while cloud and enterprise clients make up 89%. The company has, for all practical purposes, stopped being a consumer storage company. And Western Digital is not alone. Kioxia, one of the world’s largest NAND flash manufacturers, admitted that its entire 2026 production volume is already in a “sold out” state, with the company expecting tight supply to persist through at least 2027 and long-term customers facing 30% or higher year-on-year price increases. Adding to this, the Silicon Motion CEO put it bluntly during a recent earnings call: “We’re facing what has never happened before: HDD, DRAM, HBM, NAND… all in severe shortage in 2026.” In addition, the Phison CEO has gone even further, warning that the NAND shortage could persist until 2030, and that it risks the “destruction” of entire segments of the consumer electronics industry.
He also noted that factories are now demanding prepayment for capacity three years in advance, an unprecedented practice that effectively locks out smaller players. The collateral damage of this can already be felt, and it’s significant. For example, Valve confirmed that the Steam Deck OLED is now out of stock intermittently in multiple regions “due to memory and storage shortages”. All models are currently unavailable in the US and Canada, the cheaper LCD model has been discontinued entirely, and there is no timeline for when supply will return to normal. Valve has also been forced to delay the pricing and launch details for its upcoming Steam Machine console and Steam Frame VR headset, directly citing memory and storage shortages. At the same time, Sony is considering delaying the PlayStation 6 to 2028 or even 2029, and Nintendo is reportedly contemplating a price increase for the Switch 2, less than a year after its launch. Both decisions are seemingly driven by the same memory supply constraints. Meanwhile, Microsoft has already raised prices on the Xbox.

Now you might think that everything so far is about GPUs and other gaming-related hardware, but that couldn’t be further from the truth. General computing, like the Raspberry Pi, is not immune to any of this either. The Raspberry Pi Foundation has been forced to raise prices twice in three months, with the flagship Raspberry Pi 5 (16GB) jumping from $120 at launch to $205 as of February 2026, a 70% increase driven entirely by LPDDR4 memory costs. What was once a symbol of affordable computing is rapidly being priced out of reach for the educational and hobbyist communities it was designed to serve. HP, on the other hand, seems to have already prepared for the hardware shortage by launching a laptop subscription service where you pay a monthly fee to use a laptop but never own it, no matter how long you subscribe. While HP frames this as a convenience, the timing, right in the middle of a hardware affordability crisis, makes it feel a lot more like a preview of a rented compute future. But more on that in a second.

“But we’ve seen price spikes before, due to crypto booms, pandemic shortages, factory floods and fires!”, you might say. And while we did live through those crises, things eventually eased when bubbles popped and markets or supply chains recovered. The current situation, however, doesn’t appear to be going away anytime soon, as it looks like the industry’s priorities have fundamentally changed. These days, the biggest customers are not gamers, creators, PC builders or even crypto miners anymore. Today, it’s hyperscalers. Companies that use hardware for “AI” training clusters, cloud providers, enterprise data centers, as well as governments and defense contractors. Compared to these hyperscalers, consumers are small fish in a big pond. These buyers don’t care if RAM costs 20% more and neither do they wait for Black Friday deals. Instead, they sign contracts measured in exabytes and billions of dollars. With such clients lining up, the consumer market in contrast is suddenly an inconvenience for manufacturers. Why settle for smaller margins and deal with higher marketing and support costs, fragmented SKUs, price sensitivity and retail logistics headaches, when you can have behemoths throwing money at you? Why sell a $100 SSD to one consumer, when you can sell a whole rack of enterprise NVMe drives to a data center with circular, virtually infinite money? Guaranteed volume, guaranteed profit, zero marketing.
The industry has answered these questions loudly. All of this goes to show that the consumer market is not just deprioritized, but instead it is being starved . In fact, IDC has already warned that the PC market could shrink by up to 9% in 2026 due to skyrocketing memory prices, and has described the situation not as a cyclical shortage but as “a potentially permanent, strategic reallocation of the world’s silicon wafer capacity” . Leading PC OEMs including Lenovo , Dell , HP , Acer , and ASUS have all signaled 15-20% PC price increases for 2026, with some models seeing even steeper hikes. Framework , the repairable laptop company, has also been transparent about rising memory costs impacting its pricing. And analyst Jukan Choi recently revised his shortage timeline estimate , noting that DRAM production capacity is expected to grow at just 4.8% annually through 2030, with even that incremental capacity concentrated on HBM rather than consumer memory. TrendForce ’s latest forecast projects DRAM contract prices rising by 90-95% quarter over quarter in Q1 2026. And that is not a typo. The price of hardware is one thing, but value-for-money is another aspect that appears to be only getting worse from here on. Already today consumer parts feel like cut-down versions of enterprise silicon. As “AI” accelerators and server chips dominate R&D budgets, consumer improvements will slow even further, or arrive at higher prices justified as premium features . This is true for CPUs and GPUs, and it will be equally true for motherboards, chipsets, power supplies, networking, etc. We will likely see fewer low-end options, more segmentation, artificial feature gating and generally higher baseline prices that, once established, won’t be coming back down again. As enterprise standards become the priority, consumer gear is becoming an afterthought that is being rebadged, overpriced, and poorly supported. The uncomfortable truth is that the consumer hardware market is no longer the center of gravity, as we all were able to see at this year’s CES . It’s orbiting something much larger, and none of this is accidental. The industry isn’t failing, it’s succeeding, just not for you . And to be fair, from a corporate standpoint, this pivot makes perfect sense. “AI” and enterprise customers are rewriting revenue charts, all while consumers continue to be noisy, demanding, and comparatively poor. It is pretty clear that consumer hardware is becoming a second-class citizen, which means that the machines we already own are more valuable than we might be thinking right now. “But what does the industry think the future will look like if nobody can afford new hardware?” , you might be asking. There is a darker, conspiratorial interpretation of today’s hardware trends that reads less like market economics and more like a rehearsal for a managed future. Businesses, having discovered that ownership is inefficient and obedience is profitable, are quietly steering society toward a world where no one owns compute at all, where hardware exists only as an abstraction rented back to the public through virtual servers, SaaS subscriptions, and metered experiences , and where digital sovereignty, that anyone with a PC tower under their desk once had, becomes an outdated, eccentric, and even suspicious concept. 
… a morning in said future, where an ordinary citizen wakes up, taps their terminal, which is a sealed device without ports, storage, and sophisticated local execution capabilities, and logs into their Personal Compute Allocation . This bundle of cloud CPU minutes, RAM credits, and storage tokens leased from a conglomerate whose logo has quietly replaced the word “computer” in everyday speech, just like “to search” has made way for “to google” , has removed the concept of installing software, because software no longer exists as a thing , but only as a service tier in which every task routes through servers owned by entities. Entities that insist that this is all for the planet . Entities that outlawed consumer hardware years ago under the banner of environmental protectionism , citing e-waste statistics, carbon budgets , and unsafe unregulated silicon , while conveniently ignoring that the data centers humming beyond the city limits burn more power in an hour than the old neighborhood ever did in a decade. In this world, the ordinary citizen remembers their parents’ dusty Personal Computer , locked away in a storage unit like contraband. A machine that once ran freely, offline if it wanted, immune to arbitrary account suspensions and pricing changes. As they go about their day, paying a micro-fee to open a document, losing access to their own photos because a subscription lapsed, watching a warning banner appear when they type something that violates the ever evolving terms-of-service, and shouting “McDonald’s!” to skip the otherwise unskippable ads within every other app they open, they begin to understand that the true crime of consumer hardware wasn’t primarily pollution but independence. They realize that owning a machine meant owning the means of computation , and that by centralizing hardware under the guise of efficiency, safety, and sustainability, society traded resilience for convenience and autonomy for comfort. In this dyst… utopia , nothing ever breaks because nothing is yours , nothing is repairable because nothing is physical, and nothing is private because everything runs somewhere else , on someone else’s computer . The quiet moral, felt when the network briefly stutters and the world freezes, is that keeping old hardware alive was never nostalgia or paranoia, but a small, stubborn act of digital self-defense; A refusal to accept that the future must be rented, permissioned, and revocable at any moment. If you think that dystopian “rented compute over owned hardware” future could never happen, think again . In fact, you’re already likely renting rather than owning in many different areas. Your means of communication are run by Meta , your music is provided by Spotify , your movies are streamed from Netflix , your data is stored in Google ’s data centers and your office suite runs on Microsoft ’s cloud. Maybe even your car is leased instead of owned, and you pay a monthly premium for seat heating or sElF-dRiViNg , whatever that means. After all, the average Gen Z and Millennial US consumer today apparently has 8.2 subscriptions , not including their DaIlY aVoCaDo ToAsTs and StArBuCkS cHoCoLate ChIp LaTtEs that the same Boomers responsible for the current (and past) economic crises love to dunk on. Besides, look no further than what’s already happening in for example China, a country that manufactures massive amounts of the world’s sought-after hardware yet faces restrictions on buying that very hardware. 
In recent years, a complex web of export controls and chip bans has put a spotlight on how hardware can become a geopolitical bargaining chip rather than a consumer good. For example, export controls imposed by the United States in recent years barred Nvidia from selling many of its high-performance GPUs into China without special licenses, significantly reducing legal access to cutting-edge compute inside the country. Meanwhile, enforcement efforts have repeatedly busted smuggling operations moving prohibited Nvidia chips into Chinese territory through Southeast Asian hubs, with over $1 billion worth of banned GPUs reportedly moving through gray markets, even as official channels remain restricted. Coverage by outlets such as Bloomberg , as well as actual investigative journalism like Gamer’s Nexus has documented these black-market flows and the lengths to which both sides go to enforce or evade restrictions, including smuggling networks and increased regulatory scrutiny. On top of this, Chinese regulators have at times restricted domestic tech firms from buying specific Nvidia models, further underscoring how government policy can override basic market access for hardware, even in the country where much of that hardware is manufactured. While some of these export rules have seen partial reversals or regulatory shifts, the overall situation highlights a world in which hardware access is increasingly determined by politics, security regimes, and corporate strategy, and not by consumer demand . This should serve as a cautionary tale for anyone who thinks owning their own machines won’t matter in the years to come. In an ironic twist, however, one of the few potential sources of relief may, in fact, come from China. Two Chinese manufacturers, CXMT ( ChangXin Memory Technologies ) and YMTC ( Yangtze Memory Technologies ), are embarking on their most aggressive capacity expansions ever , viewing the global shortage as a golden opportunity to close the gap with the incumbent big three ( Samsung , SK Hynix , Micron ). CXMT is now the world’s fourth-largest DRAM maker by production volume, holding roughly 10-11% of global wafer capacity, and is building a massive new DRAM facility in Shanghai expected to be two to three times larger than its existing Hefei headquarters, with volume production targeted for 2027. The company is also preparing a $4.2 billion IPO on Shanghai’s STAR Market to fund further expansion and has reportedly delivered HBM3 samples to domestic customers including Huawei . YMTC , traditionally a NAND flash supplier, is constructing a third fab in Wuhan with roughly half of its capacity dedicated to DRAM, and has reached 270-layer 3D NAND capability, rapidly narrowing the gap with Samsung (286 layers) and SK Hynix (321 layers). Its NAND market share by shipments reached 13% in Q3 2025, close to Micron ’s 14%. What’s particularly notable is that major PC manufacturers are already turning to these suppliers . However, as mentioned before, with hardware having become a geopolitical topic, both companies face ongoing (US-imposed) restrictions. Hence, for example HP has indicated it would only use CXMT chips in devices for non-US markets. Nevertheless, for consumers worldwide the emergence of viable fourth and fifth players in the memory market represents the most tangible hope of eventually breaking the current supply stranglehold. Whether that relief arrives in time to prevent lasting damage to the consumer hardware ecosystem remains an open question, though. 
Polymarket bet prediction: A non-zero percentage of people will confuse Yangtze Memory Technologies with the Haskell programming language.

The reason I’m writing all of this isn’t to create panic, but to help put things into perspective. You don’t need to scavenger-hunt for legacy parts in your local landfill (yet) or swear off upgrades forever, but you do need to recognize that the rules have changed. The market that once catered to enthusiasts and everyday users is turning its back. So take care of your hardware, stretch its lifespan, upgrade thoughtfully, and don’t assume replacement will always be easy or affordable. That PC, laptop, NAS, or home server isn’t disposable anymore. Clean it, maintain it, repaste it, replace fans and protect it, as it may need to last far longer than you originally planned.

Also, realize that the best time to upgrade your hardware was yesterday and that the second best time is now. If you can afford sensible upgrades, especially RAM and SSD capacity, it may be worth doing sooner rather than later. Not for performance, but for insurance, because the next time something fails, it might be unaffordable to replace, as the era of casual upgrades seems to be over. Five-year systems may become eight- or ten-year systems. Software bloat will hurt more and will require re-thinking. Efficiency will matter again. And looking at it from a different angle, maybe that’s a good thing. Additionally, the assumption that prices will normalize again at some point is most likely a pipe dream. The old logic of “wait a year and it’ll be cheaper” no longer applies when manufacturers are deliberately constraining supply. If you need a new device, buy it; if you don’t, however, there is absolutely no need to spend money on the minor yearly refresh cycle any longer, as the returns will only keep diminishing. And again, looking at it from a different angle, probably that is also a good thing.

Tl;dr: Consumer hardware is heading toward a bleak future where owning powerful, affordable machines becomes harder or maybe even impossible, as manufacturers abandon everyday users to chase vastly more profitable data centers, “AI” firms, and enterprise clients. RAM and SSD price spikes, Micron’s exit from the consumer market, and the resulting Samsung/SK Hynix duopoly are early warning signs of a broader shift that will eventually affect CPUs, GPUs, and the entire PC ecosystem. With large manufacturers having sold out their entire production capacity to hyperscalers for the rest of the year while simultaneously cutting consumer production by double-digit percentages, consumers will have to take a back seat. Already today consumer hardware is overpriced, out of stock, or even intentionally delayed due to supply issues. In addition, manufacturers are pivoting towards consumer hardware subscriptions, where you never own the hardware; in the most dystopian trajectory, consumers might not buy any hardware at all, with the exception of low-end thin clients that are merely interfaces, and will instead rent compute through cloud platforms, losing digital sovereignty in exchange for convenience. And despite all of this sounding like science fiction, there is already hard evidence proving that access to hardware can in fact be politically and economically revoked. Therefore I am urging you to maintain and upgrade wisely, and hold on to your existing hardware, because ownership may soon be a luxury rather than the norm.

0 views