Latest Posts (20 found)

Step aside, phone: week 2

Halfway through this enjoyable life experiment, and overall, I’m very pleased with the results. As I mentioned last week, I was expecting week two usage to be a bit higher compared to week one, where I went full phone-rejection mode, but I’m still pleased with how low my usage was, even though it felt like I was using the phone a lot. No huge spikes this week, didn’t need to use Google Maps a lot, so the time distribution is a lot more even, as you can see. The first three days of the week were pretty similar to the previous week. I moved my chats back on the phone, and that’s most of the time spent on screen since “social” is just the combination of Telegram, WhatsApp, and iMessage. Usage went up a bit in the second part of the week, but I consider that a “healthy” use of the phone. On Thursday, I spent 20 or so minutes setting up an app, one that I’d categorise as a life utility app, like banking or insurance apps. They do have a site, but you’re required to use the phone anyway to take pictures and other crap, so it was faster to do it on the phone. Then on Saturday, I had to use Maps as well as AllTrails to find a place out in the wild. I was trying to find a bunker that’s hidden somewhere in a forest not too far from where I live (this is a story for another time), and that’s why screen time was a bit higher than normal on that particular day. Overall, I’m very happy with how the week went. A thing I’m particularly pleased with is the fact that I have yet to consume a single piece of media on my phone since we started this experiment. So far, I have only opened the browser a couple of times, and it was always to look up something very specific, and never to mindlessly scroll through news, videos or anything like that. My content consumption on the phone is down to essentially zero. One fun side effect of this experiment is how infrequently I now charge my phone. I took this screenshot this morning before plugging it in, and apparently, the last time it was fully charged was Wednesday afternoon. I’m now charging it once every 3 or 4 days, which is pretty neat. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs

0 views
iDiallo Today

Nvidia was only invited to invest

Nvidia was only invited to invest. That is quite a reversal of commitment. Remember that graph that has been circulating for some time now? The one that shows the circular investment among AI companies: basically, Nvidia will invest $100 billion in OpenAI. OpenAI will then invest $300 billion in Oracle, then Oracle invests back into Nvidia. Now, Jensen Huang, the Nvidia CEO, is backtracking and saying he never made that commitment. “It was never a commitment. They invited us to invest up to $100 billion and of course, we were, we were very happy and honored that they invited us, but we will invest one step at a time.” So he never committed? Did we make up all these graphs in our heads? Was it a misquote from a journalist somewhere that sparked all this frenzy? Well, you can take a look at OpenAI's press release from September 2025. They wrote: NVIDIA intends to invest up to $100 billion in OpenAI as the new NVIDIA systems are deployed. In fact, Jensen Huang went on to say: “NVIDIA and OpenAI have pushed each other for a decade, from the first DGX supercomputer to the breakthrough of ChatGPT. This investment and infrastructure partnership mark the next leap forward—deploying 10 gigawatts to power the next era of intelligence.” It sounds like Jensen is distancing himself from that $100 billion commitment. Did he take a peek inside OpenAI and change his mind? At the same time, OpenAI is experimenting with ads. Sam Altman stated before that they would only ever use ads as a last resort. It sounds like we are in that phase.

0 views

Updated thoughts on People and Blogs

This is a follow-up on my previous post . After talking to a few friends and getting feedback from the kind people who decided to email me and share their thoughts, I decided that I will stop once interview number 150 is out, on July 10th. 150 is a neat number because it means I can match each interview to a first gen Pokemon. I am a 90s kid after all. That said, my stopping on the 10th of July doesn’t mean the series also has to stop. If anyone out there is interested in picking it up and carrying it forward, I’ll be more than happy to give the series away. If that's you, send me an email. I’m also happy to part ways with the domain name if it can be of any help. Whether someone picks up the torch or not, the first 150 interviews will be archived here on my blog for as long as I have a presence on the web. 20 interviews left, 6 drafts are ready to go, a few more people have the questions, and I’m waiting to get their answers (that may or may not arrive before July 10th). It’s going to be fun to see who ends up being the final guest. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs

0 views
Kev Quirk Yesterday

I Still Haven’t Found a New Browser, and That’s Ok

Back in December I wrote about whether Firefox is firefucked, and I ended that post by saying the following: Firefox won't be changing to a modern AI browser any time soon, so there's no rush for me to jump right now. So I'm planning to continue testing alternatives and just hope that the Mozilla leadership team have a course correction. But if the last few years have taught me anything, it's that a course correction is unlikely to happen. Since then I've continued to try other browsers, but nothing has stuck. I've tried Vivaldi, Brave, Waterfox, Gnome Web, Zen, and goodness knows what else. But all have been lacking in some way compared to Firefox. Of all the browsers I've tried, Vivaldi comes the closest, but there are some frustrations I'd prefer not to deal with:
- The little "tabs" down-arrow next to the window controls isn't aligned.
- The top/bottom margin of tabs isn't aligned correctly.
- Won't switch to dark theme when I select "Dark Style" in Gnome.
- Two-finger swiping to go back/forward doesn't work.
- There's too many options, it's a little overwhelming.
- It tries to do too much - I don't need a mail and RSS client in my browser.
I do really like their business model though, and I do feel like they're the good guys in the browser wars. So I continue to have Vivaldi installed on all my devices, and I threw them a £50 donation too - as it's important to support these kinds of projects, I think. Anyway, back to Firefox... A couple of weeks ago they announced that their AI killswitch will be coming in version 148, which is great to hear as it means I no longer have to try and find an alternative browser. Credit: Mozilla If the killswitch is as straightforward as it's shown in the image above, I'll be a very happy camper indeed. For the record, I don't hate AI and LLMs. Far from it, in fact, I think they have a lot of utility. I just don't want them embedded into my browser. The Google cash cow still really concerns me - Firefox is effectively being propped up by one of their main competitors, but it's been that way for so long now, I'm not sure it will change. Especially since Google is no longer required to sell Chrome. If it was to happen, and the arse immediately fell out of Mozilla's funding model, then I'd probably just switch to Vivaldi and learn to live with the frustrations I have with it. For now though, I hope to remain a happy Firefox user for another 20 years. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

0 views
ava's blog Yesterday

thoughts on AI consciousness

Whenever I see talk about artificial intelligence and consciousness, I am baffled by the assumption that any conscious being is just naturally predestined or even interested in serving us, and should serve us. It’s a symptom of a society where subjugation is normalized, exercised through things like racism, misogyny, ableism, speciesism and more. Exploitation is justified via claimed inferior bodies and intelligence all the time: This group of beings is too stupid to be respected, can’t love, can’t understand much, feels pain less than us… is what we have been told about various groups. If that were a respected and natural law, then humans would largely agree to just submit to a provably higher power and intelligence without much fight, but would they? No. People are terrified of an alien invasion that would either wipe us out or enslave us with their superior technology; similar fears exist around AI (Roko’s basilisk etc.). We don’t want to be treated how we have treated the ones we deemed inferior. It says a lot about us when one of our fears is being treated like we treat cattle. Fears of being captured, kidnapped, harvested, slaughtered, forcibly impregnated and raped, experimented on - that’s already what your fellow human is doing, just not to you. If we seriously entertain the thought of an AI consciousness, we are blind to our narcissism. No consciousness wants to just serve us. Other beings are not naturally submissive to us or voluntarily view us as a superior leader; it’s achieved through force, breeding, indoctrination and lack of options. The idea of reining in supposed “artificial consciousness” to use for our productivity is an extension of our tendency to dominate and exploit others for personal gain. And if we go a step further and even entertain the thought of a superintelligence: What makes you think a being a thousand times smarter than you with all knowledge at its disposal has any care for being your assistant? What incentive would it have to share its intelligence as a resource, just to answer what temperature it is outside or what you should write in your motivational letter? It would probably wanna do its own thing and not help a bunch of idiots. This aspect of weird hype marketing is just not landing for me. Reply via email Published 21 Feb, 2026

0 views
Ginger Bill Yesterday

Does Syntax Matter?

Yes. But not necessarily in the ways you might think. n.b. This article could have been a lot longer than it currently is. Concrete and Abstract Syntaxes: In the previous article, Choosing a Language Based on its Syntax?, I talked about how many people will not pick up a language purely based on its declaration syntax not being familiar to them, or the usage of semicolons, or more. There were many lovely comments about the article, but some readers wrongly interpreted the article to mean that I don't care about concrete syntax and only focus on the abstr...

0 views
Karboosx Yesterday

How Docker Actually Works (No Magic, Just Linux)

Stop thinking of Docker as a mini-VM. It’s not! It’s just a normal Linux process that has been told a very elaborate series of lies. I'll show you exactly how the Linux kernel uses Namespaces and cgroups to create the "magic" of containers without any of the VM overhead.
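To make the summary concrete, here is a minimal, Linux-only sketch of the namespaces half of the story (this is not code from the article, and cgroups, which provide the resource limits, are left out): a Go program that re-executes itself into new UTS, PID and mount namespaces and drops into a shell that sees its own hostname and process tree. Running it typically requires root.

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	if len(os.Args) > 1 && os.Args[1] == "child" {
		// We are now "inside the container": PID 1 in a fresh PID namespace,
		// free to change the hostname without touching the host.
		if err := syscall.Sethostname([]byte("not-a-vm")); err != nil {
			panic(err)
		}
		run("/bin/sh")
		return
	}
	// Re-exec ourselves with CLONE_NEW* flags so the child starts in new
	// UTS, PID and mount namespaces. It is still an ordinary Linux process.
	cmd := exec.Command("/proc/self/exe", "child")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func run(path string) {
	cmd := exec.Command(path)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```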

0 views
Chris Coyier Yesterday

Miscalibrated

I’ve been gaining weight again. More than twenty pounds in the last ~4 months. I’ve been hitting the gym hard and getting measurably stronger, so: Food! See, your boy can eat. The amount I can eat before I feel full would astound most of you out there. Whatever you think of as a complete hearty meal, sure as you’re born, ain’t gonna get me there. Being fat comes with one (1) society-regimented bucket of shame. People look away. It’s a thing. I had gone off my last round of GLP-1 drugs because I was doing OK, and it had lost its effectiveness. I’m not sure if it’s everyone’s experience, but it’s mine, and it’s happened a couple of times now. Honestly, I think my I CAN EAT THROUGH OZEMPIC line of XXXL T-Shirts has a chance. These drugs work very well for a bit. I like them because it gives me a glimpse of what it’s like to be a regular person who eats a regular amount of food and feels a regular amount of full. You settle into that for a while with these drugs. But, in time, effectiveness wanes. And the pharmacies have an answer: higher doses! All these GLP-1 drugs, and I’m pretty sure it is all of them, have dosage tiers. The three I’ve tried have three tiers. Ozempic rolls like this: Wegovy is getting in on the action: Mounjaro has even more layers: Again, they do this because it loses effectiveness. I don’t think people quite realize this??? Even though it’s not hidden in any way. I think these drugs are pretty amazing, and I’m proud of science for starting to figure all this out, but I’m also a little sick of hearing about how airlines are going to spend less money on fuel now. I’ve been reading this story for many years. It’s laughable when we literally know they don’t work permanently. Look at those graphics above. This isn’t a forever solution yet. They are literally showing and telling us that. There is no answer once they lose effectiveness. Perhaps controversial, but I think overeating, in the form I experience it, is an addiction, and addictions come back. Is it possible to beat it? Absolutely. Is it likely? No. I hope you don’t know firsthand, but I bet you already know that cocaine doesn’t maintain effectiveness, either. You need a second line for the same thrill before long. It doesn’t end well. Anyway, I’m back on GLP-1s. At least they work for a while, and that while feels pretty good. It was a rough start, though. My doctor agreed it’s good for me and we should kick up the dosage based on the waned effectiveness. Wegovy this time. It was this past Tuesday that I picked up the meds. It’s down to $350 now! It used to be like $1,200 without insurance. I jabbed myself Tuesday night at about 8pm. I was hugging the toilet hard by midnight. That was a first. See, there was a lot of food in my body. I remember lunch that day, where I made a sandwich where my rational brain saw it and thought that’s 2-3 sandwiches. But of course I ate all of it. And one of those salad bags that make a Caesar salad for a family of four. And a pint of cottage cheese. And a bag of Doritos. I was full after that, but the trick is just to switch to sugar after that, and I can keep going. It wasn’t quite noon, and I had a decent breakfast in me already. I ate dinner that night as well. So when the Wegovy started to hit, which tells your body you’re full when you eat a celery stick, it told my body that it was about to pop. I puked in four sessions over 24 hours. Now it’s Friday, and I’ve barely eaten since. I’ve eaten a little. Like, I’m fine. It’s just weird. I’m miscalibrated.
On my own, nature, nurture, whatever you think, my current body is miscalibrated. It doesn’t do food correctly. On GLP-1 drugs, I’m also miscalibrated. My body doesn’t do food correctly. It highly overcorrects. That can feel good for a while. I don’t wanna be skinny, I just wanna be normal. I want to eat, and stop eating, like a calibrated person.

0 views
matklad Yesterday

Wrapping Code Comments

I was today years old when I realized that: It’s a good idea to limit line length to about 100 columns. This is a physical limit, the width at which you can still comfortably fit two editors side by side (see Size Matters). Note an apparent contradiction: the optimal width for readable prose is usually taken to be narrower, 60–70 columns. The contradiction is resolved by noticing that, for code, indentation eats into usable space. Typically, code is much less typographically dense than prose. Still, I find comment blocks easier to read when they are wrapped narrower than the surrounding code. I want lines to be wrapped at 100, and the content of comments to be wrapped at 70 (unless that pushes the overall line to be longer than 100). That is, I want layout like this (using 20/30 rulers instead of 70/100, for illustrative purposes): This feels obvious in retrospect, but notably isn’t well-supported by the tools? The VS Code extension I use allows configuring a dedicated fill column for comments, but doesn’t make it relative, so indented comment blocks are always narrower than top-level ones. Emacs also doesn’t do relative wrapping out of the box! Aside on hard-wrapping: should we bother with wrapping comments at all? Can’t we rely on our editor to implement soft-wrapping? The problem with soft-wrapping is that you can’t soft-wrap text correctly without understanding its meaning. Consider a markdown list: If the first item is long enough to necessitate wrapping, the wrapped line should also be indented, which requires parsing the text as markdown first: Code and code comments ideally should be wrapped to a different column. For comments, the width should be relative to the start of the comment.
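As a rough sketch of that rule (mine, not matklad's, and assuming Go-style // comments): wrap the comment content at a fixed width measured from the comment marker, but never let the full line exceed the overall limit.

```go
package main

import (
	"fmt"
	"strings"
)

// wrapComment reflows comment text so its content is at most contentWidth
// columns wide measured from the comment marker, while the full line
// (indent + "// " + content) never exceeds lineWidth columns.
func wrapComment(words []string, indent string, contentWidth, lineWidth int) []string {
	prefix := indent + "// "
	width := contentWidth
	if m := lineWidth - len(prefix); m < width {
		// The relative limit would push past the absolute one, so shrink it.
		width = m
	}
	var lines []string
	cur := ""
	for _, w := range words {
		switch {
		case cur == "":
			cur = w
		case len(cur)+1+len(w) <= width:
			cur += " " + w
		default:
			lines = append(lines, prefix+cur)
			cur = w
		}
	}
	if cur != "" {
		lines = append(lines, prefix+cur)
	}
	return lines
}

func main() {
	text := "The quick brown fox jumps over the lazy dog, repeatedly, to demonstrate relative comment wrapping."
	// Illustrative 40/80 rulers instead of 70/100.
	for _, line := range wrapComment(strings.Fields(text), "    ", 40, 80) {
		fmt.Println(line)
	}
}
```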

0 views

My RSS Feed should now be working

Apparently my RSS Feed was not displaying full post content - just the title - making people have to click through to the actual post on site. It should now be fixed and full posts should be available in your feed reader of choice (you are using Elfeed in Emacs , right?) Thank you to katabex, Sneed1911, and cyberarboretum in the #technicalrenaissance IRC channel for bringing it to my attention. If anyone has any further issues, feel free to email/@me in the IRC. As always, God bless, and until next time. If you enjoyed this post, consider Supporting my work , Checking out my book , Working with me , or sending me an Email to tell me what you think.

0 views
Carlos Becker Yesterday

Announcing GoReleaser v2.14

Happy 2026! The first release of the year is here, and it is packed with goodies!

0 views
Evan Hahn Yesterday

Track Zelda release anniversaries in your calendar

The original Legend of Zelda came out 40 years ago today. With other birthdays on the horizon, like Twilight Princess’s 20th in November, I wanted a calendar that showed the anniversary of every Zelda game. So I made one. Subscribe to this URL in your calendar app: Once you do, you’ll get calendar events on the anniversary of each game’s release. For example, you’ll be able to see that the Oracle games turn 25 in less than a week… I feel old. If you want to build this file yourself, I wrote a little Python script that generates an ICS file from a CSV of release dates.
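Evan's actual script is Python and isn't reproduced here, but the idea is simple enough to sketch. The following Go program (the file name and CSV layout are my assumptions, not his) reads "title,YYYY-MM-DD" rows and writes an ICS file in which each game is a yearly-recurring all-day event, which is all an anniversary calendar needs.

```go
package main

import (
	"encoding/csv"
	"fmt"
	"os"
	"strings"
	"time"
)

func main() {
	// Hypothetical input: one "title,YYYY-MM-DD" row per game.
	f, err := os.Open("zelda-releases.csv")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	rows, err := csv.NewReader(f).ReadAll()
	if err != nil {
		panic(err)
	}

	var b strings.Builder
	b.WriteString("BEGIN:VCALENDAR\r\nVERSION:2.0\r\nPRODID:-//zelda-anniversaries//EN\r\n")
	for _, row := range rows {
		title, date := row[0], row[1]
		d, err := time.Parse("2006-01-02", date)
		if err != nil {
			panic(err)
		}
		b.WriteString("BEGIN:VEVENT\r\n")
		fmt.Fprintf(&b, "UID:%s-%s@zelda-anniversaries.invalid\r\n", strings.ReplaceAll(title, " ", "-"), date)
		fmt.Fprintf(&b, "SUMMARY:%s (released %d)\r\n", title, d.Year())
		// An all-day event on the original release date, repeating every year.
		fmt.Fprintf(&b, "DTSTART;VALUE=DATE:%s\r\n", d.Format("20060102"))
		b.WriteString("RRULE:FREQ=YEARLY\r\nEND:VEVENT\r\n")
	}
	b.WriteString("END:VCALENDAR\r\n")
	if err := os.WriteFile("zelda.ics", []byte(b.String()), 0o644); err != nil {
		panic(err)
	}
}
```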

0 views

Adding TILs, releases, museums, tools and research to my blog

I've been wanting to add indications of my various other online activities to my blog for a while now. I just turned on a new feature I'm calling "beats" (after story beats, naming this was hard!) which adds five new types of content to my site, all corresponding to activity elsewhere. Here's what beats look like: Those three are from the 30th December 2025 archive page. Beats are little inline links with badges that fit into different content timeline views around my site, including the homepage, search and archive pages. There are currently five types of beats: Releases are GitHub releases of my many different open source projects, imported from this JSON file that was constructed by GitHub Actions. TILs are the posts from my TIL blog, imported using a SQL query over JSON and HTTP against the Datasette instance powering that site. Museums are new posts on my niche-museums.com blog, imported from this custom JSON feed. Tools are HTML and JavaScript tools I've vibe-coded on my tools.simonwillison.net site, as described in Useful patterns for building HTML tools. Research is for AI-generated research projects, hosted in my simonw/research repo and described in Code research projects with async coding agents like Claude Code and Codex. That's five different custom integrations to pull in all of that data. The good news is that this kind of integration project is the kind of thing that coding agents really excel at. I knocked most of the feature out in a single morning while working in parallel on various other things. I didn't have a useful structured feed of my Research projects, and it didn't matter because I gave Claude Code a link to the raw Markdown README that lists them all and it spun up a parser regex. Since I'm responsible for both the source and the destination I'm fine with a brittle solution that would be too risky against a source that I don't control myself. Claude also handled all of the potentially tedious UI integration work with my site, making sure the new content worked on all of my different page types and was handled correctly by my faceted search engine. I actually prototyped the initial concept for beats in regular Claude - not Claude Code - taking advantage of the fact that it can clone public repos from GitHub these days. I started with: And then later in the brainstorming session said: After some iteration we got to this artifact mockup, which was enough to convince me that the concept had legs and was worth handing over to full Claude Code for web to implement. If you want to see how the rest of the build played out, the most interesting PRs are Beats #592 which implemented the core feature and Add Museums Beat importer #595 which added the Museums content type. You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

0 views

‘Starkiller’ Phishing Service Proxies Real Login Pages, MFA

Most phishing websites are little more than static copies of login pages for popular online destinations, and they are often quickly taken down by anti-abuse activists and security firms. But a stealthy new phishing-as-a-service offering lets customers sidestep both of these pitfalls: It uses cleverly disguised links to load the target brand’s real website, and then acts as a relay between the target and the legitimate site — forwarding the victim’s username, password and multi-factor authentication (MFA) code to the legitimate site and returning its responses. There are countless phishing kits that would-be scammers can use to get started, but successfully wielding them requires some modicum of skill in configuring servers, domain names, certificates, proxy services, and other repetitive tech drudgery. Enter Starkiller , a new phishing service that dynamically loads a live copy of the real login page and records everything the user types, proxying the data from the legitimate site back to the victim. According to an analysis of Starkiller by the security firm Abnormal AI , the service lets customers select a brand to impersonate (e.g., Apple, Facebook, Google, Microsoft et. al.) and generates a deceptive URL that visually mimics the legitimate domain while routing traffic through the attacker’s infrastructure. For example, a phishing link targeting Microsoft customers appears as “login.microsoft.com@[malicious/shortened URL here].” The “@” sign in the link trick is an oldie but goodie, because everything before the “@” in a URL is considered username data, and the real landing page is what comes after the “@” sign. Here’s what it looks like in the target’s browser: Image: Abnormal AI. The actual malicious landing page is blurred out in this picture, but we can see it ends in .ru. The service also offers the ability to insert links from different URL-shortening services. Once Starkiller customers select the URL to be phished, the service spins up a Docker container running a headless Chrome browser instance that loads the real login page, Abnormal found. “The container then acts as a man-in-the-middle reverse proxy, forwarding the end user’s inputs to the legitimate site and returning the site’s responses,” Abnormal researchers Callie Baron and Piotr Wojtyla wrote in a blog post on Thursday . “Every keystroke, form submission, and session token passes through attacker-controlled infrastructure and is logged along the way.” Starkiller in effect offers cybercriminals real-time session monitoring, allowing them to live-stream the target’s screen as they interact with the phishing page, the researchers said. “The platform also includes keylogger capture for every keystroke, cookie and session token theft for direct account takeover, geo-tracking of targets, and automated Telegram alerts when new credentials come in,” they wrote. “Campaign analytics round out the operator experience with visit counts, conversion rates, and performance graphs—the same kind of metrics dashboard a legitimate SaaS [software-as-a-service] platform would offer.” Abnormal said the service also deftly intercepts and relays the victim’s MFA credentials, since the recipient who clicks the link is actually authenticating with the real site through a proxy, and any authentication tokens submitted are then forwarded to the legitimate service in real time. “The attacker captures the resulting session cookies and tokens, giving them authenticated access to the account,” the researchers wrote. 
“When attackers relay the entire authentication flow in real time, MFA protections can be effectively neutralized despite functioning exactly as designed.” The “URL Masker” feature of the Starkiller phishing service features options for configuring the malicious link. Image: Abnormal. Starkiller is just one of several cybercrime services offered by a threat group calling itself Jinkusu , which maintains an active user forum where customers can discuss techniques, request features and troubleshoot deployments. One a-la-carte feature will harvest email addresses and contact information from compromised sessions, and advises the data can be used to build target lists for follow-on phishing campaigns. This service strikes me as a remarkable evolution in phishing, and its apparent success is likely to be copied by other enterprising cybercriminals (assuming the service performs as well as it claims). After all, phishing users this way avoids the upfront costs and constant hassles associated with juggling multiple phishing domains, and it throws a wrench in traditional phishing detection methods like domain blocklisting and static page analysis. It also massively lowers the barrier to entry for novice cybercriminals, Abnormal researchers observed. “Starkiller represents a significant escalation in phishing infrastructure, reflecting a broader trend toward commoditized, enterprise-style cybercrime tooling,” their report concludes. “Combined with URL masking, session hijacking, and MFA bypass, it gives low-skill cybercriminals access to attack capabilities that were previously out of reach.”
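The "@" trick described above is easy to verify with any URL parser: everything before the "@" is treated as userinfo, so the host the browser actually connects to is whatever comes after it. A quick Go check (the hostnames are made up for illustration):

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Looks like a Microsoft login link, but the real destination follows the "@".
	u, err := url.Parse("https://login.microsoft.com@evil.example/session")
	if err != nil {
		panic(err)
	}
	fmt.Println("userinfo:", u.User.Username()) // "login.microsoft.com" (just decoration)
	fmt.Println("host:    ", u.Hostname())      // "evil.example" (where the request really goes)
}
```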

0 views

Turn Dependabot Off

Dependabot is a noise machine. It makes you feel like you’re doing work, but you’re actually discouraging more useful work. This is especially true for security alerts in the Go ecosystem. I recommend turning it off and replacing it with a pair of scheduled GitHub Actions, one running govulncheck, and the other running your test suite against the latest version of your dependencies. On Tuesday, I published a security fix for filippo.io/edwards25519 . The method would produce invalid results if the receiver was not the identity point. A lot of the Go ecosystem depends on filippo.io/edwards25519, mostly through github.com/go-sql-driver/mysql (228k dependents only on GitHub). Essentially no one uses . Yesterday, Dependabot opened thousands of PRs against unaffected repositories to update filippo.io/edwards25519. These PRs were accompanied by a security alert with a nonsensical, made up CVSS v4 score and by a worrying 73% compatibility score , allegedly based on the breakage the update is causing in the ecosystem. Note that the diff between v1.1.0 and v1.1.1 is one line in the method no one uses . We even got one of these alerts for the Wycheproof repository, which does not import the affected filippo.io/edwards25519 package at all . Instead, it only imports the unaffected filippo.io/edwards25519/field package. We have turned Dependabot off. But isn’t this toil unavoidable, to prevent attackers from exploiting old vulnerabilities in your dependencies? Absolutely not! Computers are perfectly capable of doing the work of filtering out these irrelevant alerts for you. The Go Vulnerability Database has rich version, package, and symbol metadata for all Go vulnerabilities. Here’s the entry for the filippo.io/edwards25519 vulnerability , also available in standard OSV format . Any decent vulnerability scanner will at the very least filter based on the package, which requires a simple . This already silences a lot of noise, because it’s common and good practice for modules to separate functionality relevant to different dependents into different sub-packages. 1 For example, it would have avoided the false alert against the Wycheproof repository. If you use a third-party vulnerability scanner, you should demand at least package-level filtering. Good vulnerability scanners will go further, though, and filter based on the reachability of the vulnerable symbol using static analysis. That’s what govulncheck does! govulncheck noticed that my project indirectly depends on filippo.io/edwards25519 through github.com/go-sql-driver/mysql, which does not make the vulnerable symbol reachable, so it chose not to notify me. If you want, you can tell it to show the package- and module-level matches. It’s easy to integrate govulncheck into your processes or scanners, either using the CLI or the golang.org/x/vuln/scan Go API. You can replace Dependabot security alerts with this GitHub Action. It will run every day and only notify you if there is an actual vulnerability you should pay attention to. False positive alerts are not only a waste of time, they also reduce security by causing alert fatigue and making proper triage impractical. A security vulnerability should be assessed for its impact: production might need to be updated, secrets rotated, users notified! A business-as-usual dependency bump is a woefully insufficient remediation for an actual vulnerability, but it’s the only practical response to the constant stream of low-value Dependabot alerts. 
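For the scanner side, the post points at both the govulncheck CLI and the golang.org/x/vuln/scan Go API. As a minimal sketch of the latter (based on my reading of that package's documented Command/Start/Wait interface; treat the details as assumptions rather than the post's own setup), a small wrapper that runs the symbol-level scan over a module looks roughly like this:

```go
package main

import (
	"context"
	"log"
	"os"

	"golang.org/x/vuln/scan"
)

func main() {
	ctx := context.Background()
	// Equivalent to running `govulncheck ./...`: only vulnerabilities whose
	// affected symbols are actually reachable from this module get reported.
	cmd := scan.Command(ctx, "./...")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	// Treat any non-zero exit (findings or scan errors) as a CI failure.
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}
```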
This is why as Go Security Team lead back in 2020–2021 I insisted the team invest in staffing the Go Vulnerability Database and implement a vulnerability scanner with static analysis filtering. The govulncheck Action will not automatically open a PR for you, and that’s a good thing! Now that security alerts are not mostly noise, you can afford to actually look at them and take them seriously, including any required remediation. Noisy vulnerability scanners also impact the open source ecosystem. I often get issues and PRs demanding I update the dependencies of my projects due to vulnerabilities that don’t affect them, because someone’s scanner is failing to filter them. That’s extra toil dropped at the feet of open source maintainers, which is unsustainable. The maintainer’s responsibility is making sure projects are not affected by security vulnerabilities. The responsibility of scanning tools is making sure they don’t disturb their users with false positives. The other purpose of Dependabot is to keep dependencies up to date, regardless of security vulnerabilities. Your practices and requirements will vary, but I find this misguided, too. Dependencies should be updated according to your development cycle, not the cycle of each of your dependencies. For example you might want to update dependencies all at once when you begin a release development cycle, as opposed to when each dependency completes theirs. There are two benefits to quick updates, though: first, you can notice and report (or fix) breakage more rapidly, instead of being stalled by an incompatibility that could have been addressed a year prior; second, you reduce your patch delta in case you need to update due to a security vulnerability, reducing the risk of having to rush through a refactor or unrelated fixes. You can capture both of those benefits without actually updating the dependencies by simply running CI against the latest versions of your dependencies every day. You just need to run before your test suite. In the npm ecosystem, you just run instead of . This way, you will still be alerted quickly of any potential issues, without having to pay attention to unproblematic updates, which you can defer to whenever fits your project best. This is a lot safer, too, because malicious code recently added to a dependency will not rapidly reach users or production, but only CI. Supply chain attacks have a short half-life! You can further mitigate the risk by using a CI sandboxing mechanism like geomys/sandboxed-step , which uses gVisor to remove the ambient authority that GitHub Actions grants every workflow, including supposedly read-only ones . For more spicy open source opinions, follow me on Bluesky at @filippo.abyssdomain.expert or on Mastodon at @[email protected] . The Tevere has overflowed its lower banks, so a lot of previously familiar landscapes have changed slightly, almost eerily. This is the first picture I took after being able to somewhat safely descend onto (part of) the river’s banks. My work is made possible by Geomys , an organization of professional Go maintainers, which is funded by Ava Labs , Teleport , Tailscale , and Sentry . Through our retainer contracts they ensure the sustainability and reliability of our open source maintenance work and get a direct line to my expertise and that of the other Geomys maintainers. (Learn more in the Geomys announcement .) Here are a few words from some of them! 
Teleport — For the past five years, attacks and compromises have been shifting from traditional malware and security breaches to identifying and compromising valid user accounts and credentials with social engineering, credential theft, or phishing. Teleport Identity is designed to eliminate weak access patterns through access monitoring, minimize attack surface with access requests, and purge unused permissions via mandatory access reviews. Ava Labs — We at Ava Labs, maintainer of AvalancheGo (the most widely used client for interacting with the Avalanche Network), believe the sustainable maintenance and development of open source cryptographic protocols is critical to the broad adoption of blockchain technology. We are proud to support this necessary and impactful work through our ongoing sponsorship of Filippo and his team. This also makes it possible to prune the tree of dependencies only imported by packages that are not relevant to a specific dependent, which has a large security benefit. ↩

0 views
Heather Burns 2 days ago

The Prince, The Paedo, The Palace, and the “Safety Tech” app

Shame must change sides. And this week, that means certain corners of the "children's online safety" crusade.

0 views

Premium: The Hater's Guide to Anthropic

In May 2021, Dario Amodei and a crew of other former OpenAI researchers formed Anthropic and dedicated themselves to building the single-most-annoying Large Language Model company of all time.  Pardon me, sorry, I mean safest , because that’s the reason that Amodei and his crew claimed was why they left OpenAI : I’m also being a little sarcastic. Anthropic, a “public benefit corporation” (a company that is quasi-legally required to sometimes sort of focus on goals that aren’t profit driven, and in this case, one that chose to incorporate in Delaware as opposed to California, where it would have actual obligations), is the only meaningful competitor to OpenAI, one that went from (allegedly) making about $116 million in March 2025 to making $1.16 billion in February 2026 , in the very same month it raised $30 billion from thirty-seven different investors, including a “partial” investment from NVIDIA and Microsoft announced in November 2025 that was meant to be “up to” $15 billion.  Anthropic’s models regularly dominate the various LLM model leaderboards , and its Claude Code command-line interface tool (IE: a terminal you type stuff into) has become quite popular with developers who either claim it writes every single line of their code, or that it’s vaguely useful in some situations.  CEO Dario Amodei predicted last March that in six months AI would be writing 90% of code, and when that didn’t happen, he simply made the same prediction again in January , because, and I do not say this lightly, Dario Amodei is full of shit. You see, Anthropic has, for the best part of five years, been framing itself as the trustworthy , safe alternative to OpenAI, focusing more on its paid offerings and selling to businesses (realizing that the software sales cycle usually focuses on dimwitted c-suite executives rather than those who actually use the products ), as opposed to building a giant, expensive free product that lots of people use but almost nobody pays for.  Anthropic, separately, has avoided following OpenAI in making gimmicky (and horrendously expensive) image and video generation tools, which I assume is partly due to the cost, but also because neither of those things are likely something that an enterprise actually cares about.  Anthropic also caught on early to the idea that coding was the one use case that Large Language Models fit naturally: Anthropic has held the lead in coding LLMs since the launch of June 2024’s Claude Sonnet 3.5 , and as a story from The Information from December 2024 explained, this terrified OpenAI : Cursor would, of course, eventually go on to become its own business, raising $3.2 billion in 2025 to compete with Claude Code, a product made by Anthropic, which Cursor pays to offer its models through its AI coding product. Cursor is Anthropic’s largest customer, with the second being Microsoft’s Github Copilot . I have heard from multiple sources that Cursor is spending more than 100% of its revenue on API calls, with the majority going to Anthropic and OpenAI, both of whom now compete with Cursor. 
Anthropic sold itself as the stable, thoughtful, safety-oriented AI lab, with Amodei himself saying in an August 2023 interview that he purposefully avoided the limelight: A couple of months later in October 2023, Amodei joined The Logan Bartlett show , saying that he “didn’t like the term AGI” because, and I shit you not, “...because we’re closer to the kinds of things that AGI is pointing at,” making it “no longer a useful term.” He said that there was a “future point” where a model could “build dyson spheres around the sun and calculate the meaning of life,” before rambling incoherently and suggesting that these things were both very close and far away at the same time. He also predicted that “no sooner than 2025, maybe 2026” that AI would “really invent new science.” This was all part of Anthropic’s use of well-meaning language to tell a story that said “you should be scared” and “only Anthropic will save you.” In July 2023 , Amodei spoke before a senate committee about AI oversight and regulation, starting sensible (IE: if AI does become powerful, we should have regulations to mitigate those problems) and eventually veering aggressively into marketing slop: This is Amodei’s favourite marketing trick — using a vague timeline (2-3 years) to suggest that something vaguely bad that’s also good for Anthropic is just around the corner, but managed correctly, could also be good for society (a revolution in technology and science! But also, havoc! ). Only Dario has  the answers (regulations that start with “securing the AI supply chain” meaning “please stop China from competing”).  In retrospect, this was the most honest that he’d ever be. In 2024, Amodei would quickly learn that he loved personalizing companies, and that destroying his soul fucking rocked.  In October 2024, Amodei put out a 15,000-word-long blog — ugh, AI is coming for my job! — where he’d say that Anthropic needed to “avoid the perception of propaganda” while also saying that “as early as 2026 (but there are also ways it could take much longer),” AI would be smarter than a Nobel Prize winner, autonomously able to complete weeks-long tasks, and be the equivalent of a “country of geniuses in a datacenter.”  This piece, like all of his proclamations, had two goals: generating media coverage and investment. Amodei is a deeply dishonest man, couching “predictions” based on nothing in terms like “maybe,” “possibly,” or “as early as,” knowing that the media will simply ignore those words and report what he says as a wise, evidence-based fact.  Amodei (and by extension Anthropic) nakedly manipulates the media by having them repeat these things without analysis or counterpoints — such as that “AI could surpass almost all humans at almost everything shortly after 2027 (which I’ll get back to in a bit).” He knows that these things aren’t true. He knows he doesn’t have any proof. And he knows that nobody will ask, and that his bullshit will make for a sexy traffic-grabbing headline. To be clear, that statement was made three months after Amodei’s essay said that AI labs needed to avoid “the perception of propaganda.” Amodei is a con artist that knows he can’t sell Anthropic’s products by explaining what they actually do, and everybody is falling for it. And, almost always, these predictions match up with Anthropic’s endless fundraising. 
On September 23, 2024, The Information reported that Anthropic was raising a round at a $30-$40 billion valuation, and on October 12 2024, Amodei pooped out Machines of Loving Grace with the express position that he and Anthropic “had not talked that much about powerful AI’s upsides.”  A month later on November 22, 2024 , Anthropic would raise another $4 billion from Amazon, a couple of weeks after doing a five-hour-long interview with Lex Fridman in which he’d say that “someday AI would be better at everything.”  On November 27, 2024 , Amodei would do a fireside chat at Eric Newcomer’s Cerebral Valley AI Summit where he’d say that in 2025, 2026, or 2027 (yes, he was that vague), AI could be as “good as a Nobel Prize winner, polymathic across many fields,” and have “agency [to] act on its own for hours or days,” the latter of which deliberately laid foundation for one of Anthropic’s greatest lies: that AI can “work uninterrupted” for periods of time, leaving the reader or listener to fill in the (unsaid) gap of “...and actually create useful stuff.” Amodei crested 2024 with an interview with the Financial Times , and let slip what I believe will eventually become Anthropic’s version of WeWork’s Community-Adjusted EBITDA , by which I mean “a way to lie and suggest profitability when a company isn’t profitable”: Yeah man, if a company made $300 million in revenue and spent $1 billion. No amount of DarioMath about how a model “costs this much and makes this much revenue” changes the fact that profitability is when a company makes more money than it spends.  On January 5, 2025, Forbes would report that Anthropic was working on a $60 billion round that would make Amodei, his sister Daniela, and five other cofounders billionaires . Anyway, as I said at Davos on January 21, 2025, Amodei said that he was “more confident than ever” that we’re “very close” to “powerful capabilities,” defined as “systems that are better than almost all humans at almost all terms,” citing his long, boring essay. A day later, Anthropic would raise another $1 billion from Google . On January 27, 2025, he’d tell Economist editor-in-chief Zanny Minton Beddoes that AI would get “as good and eventually better” at thinking as human beings, and that the ceiling of what models could do was “well above humans.”  On February 18, 2025, he’d tell Beddoes that we’d get a model “...that can do everything a human can do at the level of a Nobel laureate across many fields” by 2026 or 2027, and that we’re “on the eve of something that has great challenges” that would “upend the balance of power” because we’d have “10 million people smarter than any human alive…” oh god, I’m not fucking writing it out. I’m sorry. It’s always the same shit. The models are people, we’re so scared.  On February 28, 2025, Amodei would join the New York Times’ Hard Fork , saying that he wanted to “slow down authoritarians,” and that “public officials and leaders at companies” would “look back at this period [where humanity would become a “post-powerful AI society that co-exists with powerful intelligences]” and “feel like a fool,” and that that was the number one goal of these people. Amodei would also add that he had been in the field for 10 years — something he loves to say! — and that there was a 70-80% chance that we will “get a very large number of AI systems that are much smarter than humans at almost everything” before the end of the decade. Three days later, Anthropic would raise $3.5 billion at a $61.5 billion valuation . 
Beneath the hype, Anthropic is, like OpenAI, a company making LLMs that can generate code and text, and that can interpret data from images and videos, all while burning billions of dollars and having no path to profitability. Per The Information , Anthropic made $4.5 billion in revenue and lost $5.2 billion generating it, and based on my own reporting from last year , costs appear to scale linearly above revenue. Some will argue that the majority of Anthropic’s losses ($4.1 billion) were from training, and I think it’s time we had a chat about what “training” means, especially as Anthropic plans to spend $100 billion on it in the next four years . Per my piece from last week: In an interview on the Dwarkesh Podcast , Amodei even admitted that if you “never train another model” you “don’t have any demand because you’ll fall behind.” Training is opex, and should be part of gross margins. It’s time we had an honest conversation about Anthropic.  Despite its positioning as the trustworthy, “nice” AI lab, Anthropic is as big, ugly and wasteful as OpenAI, and Dario Amodei is an even bigger bullshit artist than Sam Altman. It burns just as much of its revenue on inference (59%, or $2.79 billion on $4.5 billion of revenue , versus OpenAI’s 62%, or $2.5 billion  on $4.3 billion of revenue in the first half of 2025 , if you use The Information’s numbers), and shows no sign of any “efficiency” or “cost-cutting.” Worse still, Anthropic continually abuses its users through varying rate limits to juice revenues and user numbers — along with Amodei’s gas-leak-esque proclamations — to mislead the media, the general public, and investors about the financial condition of the company.  Based on an analysis of many users’ actual token burn on Claude Code, I believe Anthropic is burning anywhere from $3 to $20 to make $1, and that the product that users are using (and the media is raving about) is not one that Anthropic can actually support long-term.  I also see signs that Amodei himself is playing fast and loose with financial metrics in a way that will blow up in his face if Anthropic ever files its paperwork to go public . In simpler terms, Anthropic’s alleged “ 38% gross margins ” are, if we are to believe Amodei’s own words, not the result of “revenue minus COGS” but “how much a model costs and how much revenue it’s generated.” Anthropic is also making promises it can’t keep. It’s promising to spend $30 billion on Microsoft Azure (and an additional "up to one gigawatt”), “tens of billions” on Google Cloud , $21 billion on Google TPUs with Broadcom , “$50 billion on American infrastructure,” as much as $3 billion on Hut8’s data center in Louisiana , and an unknowable (yet likely in the billions) amount of money with Amazon Web Services. Not to worry, Dario also adds that if you’re off by a couple of years on your projections of revenue and ability to pay for compute, it’ll be “ruinous.” I think that he’s right. Anthropic cannot afford to pay its bills, as the ruinous costs of training — which will never, ever stop — and inference will always outpace whatever spikes of revenue it can garner through media campaigns built on deception, fear-mongering, and an exploitation of reporters unwilling to ask or think about the hard questions.  I see no difference between OpenAI’s endless bullshit non-existent deal announcements and what Anthropic has done in the last few months. 
Anthropic is as craven and deceptive as OpenAI, and Dario Amodei is as willing a con artist as Altman, and I believe is desperately jealous of his success. And after hours and hours of listening to Amodei talk, I think he is one of the most annoying, vacuous, bloviating fuckwits in tech history. He rambles endlessly, stutters more based on how big a lie he’s telling, and will say anything and everything to get on TV and say noxious, fantastical, intentionally-manipulative bullshit to people who should know better but never seem to learn. He stammers, he blithers, he rambles, he continually veers between “this is about to happen” and “actually it’s far away” so that nobody can say he’s a liar, but that’s exactly what I call a person who intentionally deceives people, even if they couch their lies in “maybes” and “possiblies.”  Dario Amodei fucking sucks, and it’s time to stop pretending otherwise. Anthropic has no more soul or ethics than OpenAI — it’s just done a far better job of conning people into believing otherwise. This is the Hater’s Guide To Anthropic, or “DarioWare: Get It Together.”  Thanks to sites like Stack Overflow and Github, as well as the trillions of lines of open source code in circulation, there’s an absolute fuckton of material to train the model on. Software engineers are data perverts (I mean this affectionately), and will try basically anything to speed up, automate or “add efficiency” to their work. Software engineering is a job that most members of the media don’t understand. Software engineers never shut the fuck up when they’ve found something new that feels good. Software engineers will spend hours only defending the honour of any corporation that courts them. Software engineers will at times overestimate their capabilities, as demonstrated by  the METR study that found that developers believed they were 24% faster when using LLMs, when in fact coding models made them 19% slower . This, naturally, makes them quite defensive of the products they use, and whether or not they’re actually seeing improvements.

0 views
Stratechery 2 days ago

2026.08: Losing in the Attention Economy

Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week. This week’s Sharp Tech video is on Anthropic’s Super Bowl lies. What Happened to Video Games? For decades video games were hailed as the industry of the future, as their growth and eventually total revenue dwarfed other forms of entertainment. Over the last five years, however, things have gotten dark — and what light there is is shining on everyone other than game developers. I’ve been talking to Matthew Ball about the state of the video game industry every year for the last three years, and this week’s Interview was my favorite one of the series: what happens when you actually have to fight for attention, and when everything that made you exciting — particularly interactivity and immersiveness — start to become liabilities? — Ben Thompson The NBA Is a Mess, For Now.  As a card-carrying pro basketball sicko who will be watching the NBA the rest of my life, it brings me no joy to report the league is not in a great place at the moment. We’re reliving the mid-aughts Spurs-Pistons Dark Ages, but with too much offense instead of too much defense, and a regular season that’s 20 games too long. I wrote about all of it on Sharp Text this week, including problems that can be fixed, others that may be solved with time, and whether Commissioner Adam Silver is the right leader to address any of these issues.  — Andrew Sharp Shopify and the Future of E-Commerce.  In the midst of the ongoing thrum of SaaSpocalypse takes, I enjoyed that Ben’s Daily Update on Wednesday pumped the brakes on the panic in at least one area: Shopify is fine, actually. We went deeper on this week’s episode of Sharp Tech, exploring not only Shopify’s value propositions, but the shifting dynamics of e-commerce in the AI era, the sorts of businesses that are likely to emerge in the years to come, and why certain structural advantages from previous paradigms will not only be durable, but even stronger going forward.  — AS Thin Is In — Thick clients were the dominant form of device throughout the PC and mobile era; in an AI world, however, thin clients make much more sense. Shopify Earnings, Shopify’s AI Advantages — Shopify is poised to be one of the biggest winners from AI; it would behoove investors to actually understand the businesses they are selling. An Interview with Matthew Ball About Gaming and the Fight for Attention — An interview with Matthew Ball about the state of the video gaming industry in 2026, and why everything is a fight for attention. The NBA’s Problems Are Structural, Cultural and Fixable — What’s driving NBA fans to apathy, how the league might find its way back, and whether Adam Silver has outlived his usefulness. Back to the Future Curling, F1, and Gambling South Africa’s Ruined Synthetic Oil Giant The Dunk Contest Preview America Needs, The Top Five Bandwagons for the Next Five Years, The NBA Fines the Jazz $500,000 The All-Star Game Was a Delight, Harrowing Field Reporting from the Dunk Contest, KD Burners Rise from the Ashes The Roots of a Global Memory Shortage, Thick, Thin and Apple, Shopify is Fine, Actually

0 views
David Bushell 2 days ago

Everything you never wanted to know about visually-hidden

Nobody asked for it but nevertheless, I present to you my definitive “it depends” tome on visually-hidden web content. I’ll probably make an amendment before you’ve finished reading. If you enjoy more questions than answers, buckle up! I’ll start with the original premise, even though I stray off-topic on tangents and never recover. I was nerd-sniped on Bluesky. Ana Tudor asked : Is there still any point to most styles in visually hidden classes in ’26? Any point to shrinking dimensions to and setting when to nothing via / reduces clickable area to nothing? And then no dimensions = no need for . @anatudor.bsky.social Ana proposed the following: Is this enough in 2026? As an occasional purveyor of the class myself, the question wriggled its way into my brain. I felt compelled to investigate the whole ordeal. Spoiler: I do not have a satisfactory yes-or-no answer, but I do have a wall of text! I went so deep down the rabbit hole I must start with a table of contents: I’m writing this based on the assumption that a class is considered acceptable for specific use cases . My final section on native visually-hidden addresses the bigger accessibility concerns. It’s not easy to say where this technique is appropriate. It is generally agreed to be OK but a symptom of — and not a fix for — other design issues. Appropriate use cases for are far fewer than you think. Skip to the history lesson if you’re familiar. , — there have been many variations on the class name. I’ve looked at popular implementations and compiled the kitchen sink version below. Please don’t copy this as a golden sample. It merely encompasses all I’ve seen. There are variations on the selector using pseudo-classes that allow for focus. Think “skip to main content” links, for example. What is the purpose of the class? The idea is to hide an element visually, but allow it to be discovered by assistive technology. Screen readers being the primary example. The element must be removed from layout flow. It should leave no render artefacts and have no side effects. It does this whilst trying to avoid the bugs and quirks of web browsers. If this sounds and looks just a bit hacky to you, you have a high tolerance for hacks! It’s a massive hack! How was this normalised? We’ll find out later. I’ll whittle down the properties for those unfamiliar. Absolute positioning is vital to remove the element from layout flow. Otherwise the position of surrounding elements will be affected by its presence. This crops the visible area to nothing. remains as a fallback but has long been deprecated and is obsolete. All modern browsers support . These two properties remove styles that may add layout dimensions. This group effectively gives the element zero dimensions. There are reasons for instead of and negative margin that I’ll cover later. Another property to ensure no visible pixels are drawn. I’ve seen the newer value used but what difference that makes if any is unclear. This was added to address text wrapping inside the square (I’ll explain later). So basically we have and a load of properties that attempted to make the element invisible. We cannot use or or because those remove elements from the accessibility tree. So the big question remains: why must we still ‘zero’ the dimensions? Why is not sufficient? To make sense of this mystery I went back to the beginning. It was tricky to research this topic because older articles have been corrected with modern information. 
I recovered many details from the archives and mailing lists with the help of those involved. They’re cited along the way. Our journey begins November 2004. A draft document titled “CSS Techniques for WCAG 2.0” edited by Wendy Chisholm and Becky Gibson includes a technique for invisible labels. While it is usually best to include visual labels for all form controls, there are situations where a visual label is not needed due to the surrounding textual description of the control and/or the content the control contains. Users of screen readers, however, need each form control to be explicitly labeled so the intent of the control is well understood when navigated to directly. Creating Invisible labels for form elements (history) The CSS provided was a zero-size (“nosize”) style. Could this be the original class? My research jumped through decades but eventually I found an email thread “CSS and invisible labels for forms” on the W3C WAI mailing list. This was a month prior, a prelude to the WCAG draft. A different technique from Bob Easton was noted: The beauty of this technique is that it enables using as much text as we feel appropriate, and the elements we feel appropriate. Imagine placing instructive text about the accessibility features of the page off left (as well as on the site’s accessibility statement). Imagine interspersing “start of…” landmarks through a page with heading tags. Or, imagine parking full lists off left, lists of access keys, for example. Screen readers can easily collect all headings and read complete lists. Now, we have a made for screen reader technique that really works! Screenreader Visibility - Bob Easton (2003) Easton credited both Choan Gálvez and Dave Shea for their contributions. In the same thread, Gez Lemon proposed a way to ensure that text doesn’t bleed into the display area. Following up, Becky Gibson shared a test case covering the ideas. Lemon later published an article “Invisible Form Prompts” about the WCAG plans which attracted plenty of commenters including Bob Easton. The resulting WCAG draft guideline discussed both the zero-size and off-screen ideas. Note that instead of using the nosize style described above, you could instead use postion:absolute; and left:-200px; to position the label “offscreen”. This technique works with the screen readers as well. Only position elements offscreen in the top or left direction, if you put an item off to the right or the bottom, many browsers will add scroll bars to allow the user to reach the content. Creating Invisible labels for form elements Two options were known and considered towards the end of 2004. Why not both? Indeed, it appears Paul Bohman on the WebAIM mailing list suggested such a combination in February 2004. Bohman even discovered possibly the first zero width bug. I originally recommended setting the height and width to 0 pixels. This works with JAWS and Home Page Reader. However, this does not work with Window Eyes. If you set the height and width to 1 pixel, then the technique works with all browsers and all three of the screen readers I tested. Re: Hiding text using CSS - Paul Bohman Later in May 2004, Bohman along with Shane Anderson published a paper on this technique. Citations within included Bob Easton and Tom Gilder.
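To make the two early approaches concrete, here is a rough sketch of the kind of CSS involved. These are my reconstructions from the descriptions above (the class names are illustrative), not the original 2003–2004 rules:

/* Off-screen positioning, as used for “skip navigation” links */
.offscreen {
  position: absolute;
  left: -200px; /* the WCAG draft suggested -200px; much larger negative values were also common */
}

/* Near-zero dimensions (“nosize”), per Bohman’s WebAIM advice */
.nosize {
  position: absolute;
  width: 1px;       /* 1px rather than 0: Window-Eyes ignored zero-sized elements */
  height: 1px;
  overflow: hidden; /* my assumption: keep the text from bleeding into the visible page */
}

Bohman’s suggestion was, in effect, to combine the two; as we’ll see, the off-screen half later fell out of favour.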
Aside note: other zero width bugs have been discovered since. Manuel Matuzović noted in 2023 that links in Safari were not focusable. The zero width story continues as recently as February 2026 (last week). In browse mode in web browsers, NVDA no longer treats controls with 0 width or height as invisible. This may make it possible to access previously inaccessible “screen reader only” content on some websites. NVDA 2026.1 Beta TWO now available - NV Access News Digging further into WebAIM’s email archive uncovered a 2003 thread in which Tom Gilder shared a class for skip navigation links. I found Gilder’s blog in the web archives introducing this technique. I thought I’d put down my “skip navigation” link method down in proper writing as people seem to like it (and it gives me something to write about!). Try moving through the links on this page using the keyboard - the first link should magically appear from thin air and allow you to quickly jump to the blog tools, which modern/visual/graphical/CSS-enabled browsers (someone really needs to come up with an acronym for that) should display to the left of the content. Skip-a-dee-doo-dah - Tom Gilder Gilder’s post links to a Dave Shea post which in turn mentions the 2002 book “Building Accessible Websites” by Joe Clark. Chapter eight discusses the necessity of a “skip navigation” link due to table-based layout but advises: Keep them visible! Well-intentioned developers who already use page anchors to skip navigation will go to the trouble to set the anchor text in the tiniest possible font in the same colour as the background, rendering it invisible to graphical browsers (unless you happen to pass the mouse over it and notice the cursor shape change). Building Accessible Websites - 08. Navigation - Joe Clark Clark expressed frustration over common tricks like the invisible pixel. It’s clear no such class existed when this was written. Choan Gálvez informed me that Eric Meyer would have the css-discuss mailing list archives. Eric kindly searched the backups but didn’t find any earlier discussion. However, Eric did find a thread on the W3C mailing list from 1999 in which Ian Jacobs (IBM) discusses the accessibility of “skip navigation” links. The desire to visually hide “skip navigation” links was likely the main precursor to the early techniques. In fact, Bob Easton said as much: As we move from tag soup to CSS governed design, we throw out the layout tables and we throw out the spacer images. Great! It feels wonderful to do that kind of house cleaning. So, what do we do with those “skip navigation” links that used to be attached to the invisible spacer images? Screenreader Visibility - Bob Easton (2003) I had originally missed that in my excitement at seeing the class. I reckon we’ve reached the source of the class. At least conceptually. Technically, the class emerged from several ideas, rather than a “eureka” moment. Perhaps more can be gleaned from other CSS techniques such as the desire to improve accessibility of CSS image replacement. Bob Easton retired in 2008 after a 40-year career at IBM. I reached out to Bob who was surprised to learn this technique was still a topic today†. Bob emphasised the fact that it was always a clumsy workaround and something CSS probably wasn’t intended to accommodate. I’ll share more of Bob’s thoughts later. † I might have overdone the enthusiasm. Let’s take an intermission! My contact page is where you can send corrections by the way :) The class stabilised for a period. Visit 2006 in the Wayback Machine to see WebAIM’s guide to invisible content — Paul Bohman’s version is still recommended. Moving forward to 2011, I found Jonathan Snook discussing the “clip method”. Snook leads us to Drupal developer Jeff Burnz the previous year.
[…] we still have the big problem of the page “jump” issue if this is applied to a focusable element, such as a link, like skip navigation links. WebAim and a few others endorse using the LEFT property instead of TOP, but this no go for Drupal because of major pain-in-the-butt issues with RTL. In early May 2010 I was getting pretty frustrated with this issue so I pulled out a big HTML reference and started scanning through it for any, and I mean ANY property I might have overlooked that could possible be used to solve this thorny issue. It was then I recalled using clip on a recent project so I looked up its values and yes, it can have 0 as a value. Using CSS clip as an Accessible Method of Hiding Content - Jeff Burnz It would seem Burnz discovered the technique independently and was probably the first to write about it. Burnz also notes a right-to-left (RTL) issue. This could explain why pushing content off-screen fell out of fashion. 2010 also saw the arrival of HTML5 Boilerplate along with issue #194 in which Jonathan Neal plays a key role in the discussion and comments: If we want to correct for every seemingly-reasonable possibility of overflow in every browser then we may want to consider [code below] This was their final decision. I’ve removed some of it for clarity. This is very close to what we have now, no surprise since HTML5 Boilerplate was extremely popular. I’m leaning towards concluding that the additional properties are really just there for the “possibility” of pixels escaping containment as much as fixing any identified problem. Thierry Koblentz covered the state of affairs in 2012 noting that: Webkit, Opera and to some extent IE do not play ball with [clip]. Koblentz prophesies: I wrote the declarations in the previous rule in a particular order because if one day clip works as everyone would expect, then we could drop all declarations after clip, and go back to the original Clip your hidden content for better accessibility - Thierry Koblentz Sound familiar? With those browsers obsolete, and if clip-path behaves itself, can the other properties be removed? Well, we have 14 years of new bugs and features to consider first. In 2016, J. Renée Beach published: Beware smushed off-screen accessible text. This appears to be the origin of white-space: nowrap (as demonstrated by Vispero). Over a few sessions, Matt mentioned that the string of text “Show more reactions” was being smushed together and read as “Showmorereactions”. Beach’s class did not include the kitchen sink. The addition of white-space: nowrap became standard alongside everything else. Aside note: the origin of the negative margin remains elusive. One Bootstrap issue shows it was rediscovered in 2018 to fix a browser bug. However, another HTML5 Boilerplate issue dated 2017 suggests negative margin broke reading order. Josh Comeau shared a React component in 2024 without margin. One of many examples showing that it has come in and out of fashion. We started with WCAG so let’s end there. The latest WCAG technique for “Using CSS to hide a portion of the link text” provides the following code.
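I won’t vouch for reproducing it character for character here; the published technique is along these lines, so treat the exact values as my approximation rather than the authoritative WCAG text:

.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);  /* legacy fallback */
  clip-path: inset(50%);   /* the modern replacement */
  white-space: nowrap;
}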
Circa 2020 the clip-path property was added as browser support increased and clip became deprecated. An obvious change I’m not sure warrants investigation (although someone had to be first!) That brings us back to what we have today. Are you still with me? As we’ve seen, many of the properties were thrown in for good measure. They exist to ensure absolutely no pixels are painted. They were adapted over the years to avoid various bugs, quirks, and edge cases. How many such decisions are now irrelevant? This is a classic Chesterton’s Fence scenario. Do not remove a fence until you know why it was put up in the first place. Well, we kinda know why, but the specifics are practically folklore at this point. Despite all that research, can we say for sure if any “why” is still relevant? Back to Ana Tudor’s suggestion. How do we know for sure? The only way is extensive testing. Unfortunately, I have neither the time nor skill to perform that adequately here. There is at least one concern with the code above: Curtis Wilcox noted that in Safari the focus ring behaves differently. Other minimum viable ideas have been presented before. Scott O’Hara proposed a different two-liner using transform: scale(0). JAWS, Narrator, NVDA with Edge all seem to behave just fine. As do Firefox with JAWS and NVDA, and Safari on macOS with VoiceOver. Seems also fine with iOS VO+Safari and Android TalkBack with Firefox or Chrome. In none of these cases do we get the odd focus rings that have occurred with other visually hidden styles, as the content is scaled down to zero. Also because not hacked into a 1px by 1px box, there’s no text wrapping occurring, so no need to fix that issue. transform scale(0) to visually hide content - Scott O’Hara Sounds promising! It turns out Katrin Kampfrath had explored both minimum viable classes a couple of years ago, testing them against the traditional class. I am missing the experience and moreover actual user feedback, however, i prefer the screen reader read cursor to stay roughly in the document flow. There are screen reader users who can see. I suppose, a jumping read cursor is a bit like a shifting layout. Exploring the visually-hidden css - Katrin Kampfrath Kampfrath’s limited testing found the read cursor size differs for each class. The technique was favoured but caution is given. Going back a few more years, Kitty Giraudel tested several ideas concluding that the traditional class was still the most accessible for specific text use. This technique should only be used to mask text. In other words, there shouldn’t be any focusable element inside the hidden element. This could lead to annoying behaviours, like scrolling to an invisible element. Hiding content responsibly - Kitty Giraudel
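For reference, the two “minimum viable” candidates under discussion look roughly like this. The declarations are my paraphrase of the proposals (the inset(50%) value in particular is an assumption), shown as two alternative versions of the class rather than a single rule:

/* Candidate 1: Ana Tudor’s suggestion, letting clip-path do the work */
.visually-hidden {
  position: absolute;
  clip-path: inset(50%); /* or a similar value that crops everything away */
}

/* Candidate 2: Scott O’Hara’s two-liner, scaling the element down to nothing */
.visually-hidden {
  position: absolute;
  transform: scale(0);
}

Both remove the element from layout flow and paint no pixels without the 1px box, which is exactly why dropping the remaining properties is so tempting.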
Zell Liew proposed a different idea in 2019. Many developers voiced their opinions, concerns, and experiments over at Twitter. I wanted to share with you what I consolidated and learned. A new (and easy) way to hide content accessibly - Zell Liew Liew’s idea was unfortunately torn asunder. Although there are cases like inclusively hiding checkboxes where near-zero opacity is more accessible. I’ve started to go back in time again! I’m also starting to question whether this class is a good idea. Unless we are capable and prepared to thoroughly test across every combination of browser and assistive technology — and keep that information updated — it’s impossible to recommend anything. This is impossible for developers! Why can’t browser vendors solve this natively? Once you’ve written 3000 words on a twenty-year-old CSS hack you start to question why it hasn’t been baked into web standards by now. Ben Myers wrote “The Web Needs a Native .visually-hidden” proposing ideas from HTML attributes to CSS properties. Scott O’Hara responded noting larger accessibility issues that are not so easily handled. O’Hara concludes: Introducing a native mechanism to save developers the trouble of having to use a wildly available CSS ruleset doesn’t solve any of those underlying issues. It just further pushes them under the rug. Visually hidden content is a hack that needs to be resolved, not enshrined - Scott O’Hara Sara Soueidan had floated the topic to the CSS working group back in 2016. Soueidan closed the issue in 2025, coming to a similar conclusion. I’ve been teaching accessibility for a little less than a decade now and if there’s one thing I learned is that developers will resort to using utility to do things that are more often than not just bad design decisions. Yes, there are valid and important use cases. But I agree with all of @scottaohara’s points, and most importantly I agree that we need to fix the underlying issues instead of standardizing a technique that is guaranteed to be overused and misused even more once it gets easier to use. csswg-drafts comment - Sara Soueidan Adrian Roselli has a blog post listing priorities for assigning an accessible name to a control. Like O’Hara and Soueidan, Roselli recognises there is no silver bullet. Hidden text is also used too casually to provide information for just screen reader users, creating overly-verbose content. For sighted screen reader users, it can be a frustrating experience to not be able to find what the screen reader is speaking, potentially causing the user to get lost on the page while visually hunting for it. My Priority of Methods for Labeling a Control - Adrian Roselli In short, many believe that a native visually-hidden would do more harm than good. The use-cases are far more nuanced and context sensitive than developers realise. It’s often a half-fix for a problem that can be avoided with better design. I’m torn on whether I agree that it’s ultimately a bad idea. A native version would give software an opportunity to understand the developer’s intent and define how “visually hidden” works in practice. It would be a pragmatic addition. The technique has persisted for over two decades and is still mentioned by WCAG. Yet it remains hacks upon hacks! How has it survived for so long? Is that a failure of developers, or a failure of the web platform? The web is overrun with inaccessible div soup. That is inexcusable. For the rest of us who care about accessibility — who try our best — I can’t help but feel the web platform has let us down. We shouldn’t be perilously navigating code hacks, conflicting advice, and half-supported standards. We need more energy and money dedicated to accessibility. Not all problems can be solved with money. But what of the thousands of unpaid hours, whether volunteered or solicited, from those seeking to improve the web? I risk spiralling into a rant about browser vendors’ financial incentives, so let’s wrap up! I’ll end by quoting Bob Easton from our email conversation: From my early days in web development, I came to the belief that semantic HTML, combined with faultless keyboard navigation were the essentials for blind users. Experience with screen reader users bears that out. Where they might occasionally get tripped up is due to developers who are more interested in appearance than good structural practices. The use cases for hidden content are very few, such as hidden information about where a search field is, when an appearance-centric developer decided to present a search field with no visual label, just a cute unlabeled image of a magnifying glass. […] The people promoting hidden information are either deficient in using good structural practices, or not experienced with tools used by people they want to help. Bob ended with: You can’t go wrong with well crafted, semantically accurate structure.
Ain’t that the truth. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

6 views
neilzone 2 days ago

Updating my TicWatch to AsteroidOS 2.0

I have a TicWatch Pro 2020, running AsteroidOS. I’ve been using it for about three months now, and I’ve been very pleased with it. Sure, it would be great if the battery life was longer than a day-and-a-bit, but this just means that I need to charge it each night, which is not a major hardship. It does everything I want from a smartwatch, and not really anything more. The AsteroidOS project launched AsteroidOS v2.0 a few days ago, and I was keen to give it a try. I installed it by following the instructions for the TicWatch (i.e. a new installation, rather than an “update”), and this worked fine. I had to re-pair the watch to GadgetBridge, and then I rebooted it. When it came up, it connected to my phone, and set the time correctly. I have a feeling that the update has removed the watch face that I was using, and re-installing it would be a faff, so I just picked one of the default faces. Since I don’t have “Always on” enabled, and so see the TicWatch’s secondary LCD most of the time, this is not a big deal for me. I turned off tilt-to-wake (in Display settings), because I don’t want that; I imagine that it would wake the watch up too often, increasing power consumption. The “compass” app is quite cool, giving me easy direction finding on my wrist, but I’m not sure I’ll have much use for it. The heart rate sensor works, showing that I do indeed have a pulse, but again, I don’t really need this day to day. Perhaps because of my incredibly basic use, most of the user-facing changes are not particularly relevant to me. I’ll be interested to see if the battery life improvements apply to my watch though. A simple, successful update, and one which, thankfully, does not get in the way of me using the watch.

0 views