Posts in CSS (20 found)

Warning: containment breach in cascade layer!

CSS cascade layers are the ultimate tool to win the specificity wars. Used alongside low-specificity selectors, specificity problems are a thing of the past. Or so I thought. Turns out cascade layers are leakier than a xenonite sieve. Cross-layer shenanigans can make bad CSS even badder. I discovered a whole new level of specificity hell. Scroll down if you dare!

There are advantages too, so I’ll start with a neat trick. To set up this trick I’ll quickly cover my favoured CSS methodology for a small website. I find defining three cascade layers is plenty. In the first layer I add my reset styles, custom properties, anything that touches a global element, etc. In the second I add the core of the website. In the third I add utility classes that look suspiciously like Tailwind, for pragmatic use. Visually-hidden is a utility class in my system.

I recently built a design where many headings and UI elements used an alternate font with a unique style. It made practical sense to use a utility class like the one below. (This is but a tribute; the real class had more properties.) The class is DRY and easily integrated into templates and content editors. Adding this to the highest cascade layer makes sense. I don’t have to worry about juggling source order or overriding properties on the class itself. I especially do not have to care about specificity or slap !important everywhere like a fool.

This worked well. Then I zoomed further into the Figma picture and was betrayed! The design had an edge case where letter-spacing varied for one specific component. It made sense for the design. It did not make sense for my system. If you remember, my utility layer takes priority over the layer below it, so I can’t simply apply a unique style to the component. For the sake of a demo, let’s assume my component has this markup. I want to change the letter-spacing back to normal. Oops, I’ve lost the specificity war regardless of what selector I use. The utility class wins because I set it up to win. My “escape hatch” uses custom property fallback values.
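Sketched out, the setup looks something like this (the layer names, class name, and values here are illustrative, simplified from the real thing):

```css
/* Three layers, lowest to highest priority. */
@layer global, components, utilities;

@layer utilities {
  /* Tribute utility class: alternate font with a unique style.
     The escape hatch is the custom property fallback: when
     --alt-letter-spacing is not defined anywhere, 0.1em applies. */
  .alt-font {
    font-family: "Alt Font", serif;
    letter-spacing: var(--alt-letter-spacing, 0.1em);
  }
}

@layer components {
  /* The edge-case component "configures" the utility class by
     defining the custom property; elements with .alt-font inside
     it inherit the value. No specificity fight required. */
  .edge-case {
    --alt-letter-spacing: normal;
  }
}
```

Even though the utility class lives in the highest layer, the component can still change the letter-spacing, because custom properties cascade per element and pass straight through cascade layers.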
In most cases the custom property is not defined and the default fallback is applied. For my edge case component I can ‘configure’ the utility class. I’ve found this to be an effective solution that feels logical and intuitive. I’m working with the cascade. It’s a good thing that custom properties are not locked within cascade layers! I don’t think anyone would expect that to happen.

In drafting this post I was going to use an example to show the power of cascade layers. I was going to say that not even !important wins. Then I tested my example and found that !important does actually override higher cascade layers. It breaches containment too! What colour are the paragraphs? Suffice it to say that things get very weird. See my CodePen. Spoiler: blue wins. I’m sure there is a perfectly cromulent reason for this behaviour but on face value I don’t like it! Bleh! I feel like !important should be locked within a cascade layer. I don’t even want to talk about the inversion…

I’m sure there are GitHub issues, IRC logs, and cave wall paintings that discuss how cascade layers should handle !important — they got it wrong! The fools! We could have had something good here! Okay, maybe I’m being dramatic. I’m missing the big picture; is there a real reason it has to work this way? It just feels… wrong? I’ve never seen a use case for !important that wasn’t tear-inducing technical debt. Permeating layers with !important feels wrong even though custom properties behaving similarly feels right. It’s hard to explain. I reckon if you’ve built enough websites you’ll get that sense too? Or am I just talking nonsense?

I subscribe to the dogma that says !important should never be used but it’s not always my choice. I build a lot of bespoke themes. The WordPress + plugin ecosystem is the ultimate specificity war. WordPress core laughs in the face of “CSS methodology” and loves to put styles where they don’t belong. Plugin authors are forced to write even gnarlier selectors. When I finally get to play, styles are an unmitigated disaster.
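A minimal sketch of the kind of demo on the CodePen (the layer names and the losing colours are my illustrative choices; blue is the important one):

```css
@layer one, two;

@layer one {
  /* Lowest-priority layer, yet this declaration wins everything.
     For !important declarations the layer order is inverted, and
     important author styles beat normal author styles regardless
     of layering. */
  p { color: blue !important; }
}

@layer two {
  p { color: red; }
}

/* Unlayered styles beat layered *normal* styles, so green would
   normally win here. It loses to the !important declaration
   hiding in the lowest layer. Blue wins. */
p { color: green; }
```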
Cascade layers can curtail unruly WordPress plugins but if they use !important it’s game over; I’m back to writing even worse code. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

David Bushell 5 days ago

No-stack web development

This year I’ve been asked more than ever before what web development “stack” I use. I always respond: none. We shouldn’t have a go-to stack! Let me explain why.

My understanding is that a “stack” is a choice of software used to build a website. That includes language and tooling, libraries and frameworks, and heaven forbid: subscription services. Text editors aren’t always considered part of the stack but integration is a major factor. Web dev stacks often manifest as config files used to install hundreds of megs of JavaScript, Blazing Fast™ Rust binaries, and never-ending supply chain attacks. A stack is also technical debt, non-transferable knowledge, accelerated obsolescence, and vendor lock-in. That means fragility and overall unnecessary complication. Popular stacks inevitably turn into cargo cults that build in spite of the web, not for it. Let’s break that down.

If you have a go-to stack, you’ve prescribed a solution before you’ve diagnosed a problem. You’ve automatically opted in to technical baggage that you must carry the entire project. Project doesn’t fit the stack? Tough; shoehorn it to fit. Stacks are opinionated by design. To facilitate their opinions, they abstract away from web fundamentals. It takes all of five minutes for a tech-savvy person to learn JSON. It takes far, far longer to learn Webpack JSON. The latter becomes useless knowledge once you’ve moved on to better things. Brain space is expensive. Other standards like CSS are never truly mastered but learning an abstraction like Tailwind will severely limit your understanding.

Stacks are a collection of move-fast-and-break churnware; fleeting software that updates with incompatible changes, or deprecates entirely in favour of yet another Rust refactor. A basic HTML document written 20 years ago remains compatible today. A codebase built upon a stack 20 months ago might refuse to play. The cost of re-stacking is usually unbearable.
Stack-as-a-service is the endgame where websites become hopelessly trapped. Now you’re paying for a service that can’t fix errors. You’ve sacrificed long-term stability and freedom for “developer experience”.

I’m not saying you should code artisanal organic free-range websites. I’m saying be aware of the true costs associated with a stack. Don’t prescribe a solution before you’ve diagnosed a problem. Choose the right tool for each job only once the impact is known. Satisfy specific goals of the website, not temporary development goals. Don’t ask a developer what their stack is without asking what problem they’re solving. Be wary of those who promote or mandate a default stack. Be doubtful of those selling a stack.

When you develop for a stack, you risk trading the stability of the open web platform (that is to say: decades of broad backwards compatibility) for GitHub’s flavour of the month. The web platform does not require build toolchains. Always default to, and regress to, the fundamentals of CSS, HTML, and JavaScript. Those core standards are the web stack. Yes, you’ll probably benefit from more tools. Choose them wisely. Good tools are intuitive by being based on standards; they can be introduced and replaced with minimal pain.

My only absolute advice: do not continue with legacy frameworks like React. If that triggers an emotional reaction: you need a stack intervention! It may be difficult to accept but Facebook never was your stack; it’s time to move on. Use the tool, don’t become the tool.

Edit: forgot to say: for personal projects, the gloves are off. Go nuts! Be the churn. Learn new tools and even code your own stack. If you’re the sole maintainer the freedom to make your own mistakes can be a learning exercise in itself.

Nelson Figueroa 1 week ago

Proxying GoatCounter Requests for a Hugo Blog on CloudFront to bypass Ad Blockers

I’ve been running GoatCounter on my site using its JavaScript integration. The problem is that ad blockers like uBlock Origin block it (understandably). To get around this, I set up proxying so that GoatCounter requests go to an endpoint under my own domain, and from there CloudFront handles them and forwards them to GoatCounter. Most ad blockers work based on domain, and GoatCounter’s domain is on the blocklists. Since the browser is now sending requests to the same domain as my site, it shouldn’t trigger any ad blockers. This post explains how I did it in case it’s useful for anyone else.

It’s possible to self-host GoatCounter, but my approach was easier and means less infrastructure to maintain. Perhaps in the future.

I know there are concerns around analytics being privacy-invasive. GoatCounter is privacy-respecting. I care about privacy, and I am of the belief that GoatCounter is harmless. I just like to keep track of the visitors on my site. Read the GoatCounter developer’s take if you want another opinion: Analytics on personal websites.

Clicking through the AWS console to configure CloudFront distributions is a pain in the ass. I took the time to finally get the infrastructure for my blog managed as infrastructure-as-code with Pulumi and Python. So while you can click around the console and do all of this, I will be showing how to configure everything with Pulumi. If you don’t want to use IaC, you can still find all of these options/settings in AWS itself.

To set up GoatCounter proxying via CloudFront, we’ll need to create a new CloudFront function, add a second origin and an ordered cache behavior to the distribution, and update the GoatCounter script to point at the new endpoint. CloudFront functions are JavaScript scripts that run before a request reaches a CloudFront distribution’s origin. In this case, the function strips the site-local prefix from the request URI. We need to strip it for two reasons: to avoid collisions with my own post slugs, and because GoatCounter expects requests at its own paths rather than under my prefix. Here is the code for the function. And here is the CloudFront function resource defined in Pulumi (using Python) that includes the JavaScript from above.
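A function along these lines does the stripping (the /goatcounter prefix is a stand-in here, not necessarily the exact path; adjust to whatever endpoint you proxy):

```javascript
// CloudFront Functions run a restricted ES5-style JavaScript runtime.
// This viewer-request function strips the proxy prefix so the origin
// (GoatCounter) receives the path it expects.
var PREFIX = '/goatcounter';

function handler(event) {
  var request = event.request;
  // Only rewrite URIs that start with the proxy prefix.
  if (request.uri.indexOf(PREFIX) === 0) {
    request.uri = request.uri.slice(PREFIX.length);
  }
  // Guard against an empty path after stripping.
  if (request.uri === '') {
    request.uri = '/';
  }
  return request;
}
```

The Pulumi resource then embeds this JavaScript as the function’s code.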
This is a new resource defined in the same Python file where my existing distribution already exists. Here is my existing CloudFront distribution being updated with a new origin and cache behavior in Pulumi code.

At the time of writing, CloudFront only allows the cache behavior’s allowed methods to be a list of HTTP methods in specific combinations. The value must be one of these:

GET, HEAD
GET, HEAD, OPTIONS
GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE

Since the GoatCounter JavaScript sends a POST request, and the third option is the only one that includes POST, we’re forced to use all HTTP verbs. It should be harmless though.

Now that my Pulumi code has both the CloudFront function defined and the CloudFront distribution updated, I ran pulumi up to apply the changes. Finally, I updated goatcounter.js to use the new endpoint. So instead of GoatCounter’s own domain, I pointed it at my own domain at the very top of the snippet. After this, I built my site with Hugo and deployed it on S3/CloudFront by updating the freshly built HTML/CSS/JS in my S3 bucket and then invalidating the existing CloudFront cache.

Now, GoatCounter should no longer be blocked by uBlock Origin. I tested by loading my site in an incognito browser window and checked that uBlock Origin was no longer blocking anything on my domain. Everything looks good! If you’re using GoatCounter you should consider sponsoring the developer. It’s a great project.

To recap, the steps were:

Create a new CloudFront function resource
Add a second origin to the distribution
Add an ordered cache behavior to the distribution (which references the CloudFront function using its ARN)
Update the GoatCounter script to point to this new endpoint

I chose to proxy requests that hit a dedicated endpoint on my site to make sure there’s no collision with post titles/slugs. I’ll never use that path for posts.
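The updated snippet looks roughly like this (example.com and the /goatcounter prefix are placeholders for your own domain and proxy path, and MYCODE stands in for a GoatCounter site code):

```html
<!-- data-goatcounter now points at the same-origin proxy endpoint
     instead of the default https://MYCODE.goatcounter.com/count URL. -->
<script data-goatcounter="https://example.com/goatcounter/count"
        async src="//gc.zgo.at/count.js"></script>
```

If the count.js script itself is also blocked by domain, it can be served through the same proxy.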
GoatCounter accepts requests under its own expected paths, not under an arbitrary prefix.

References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-functions.html
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistS3AndCustomOrigins.html
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesCacheBehavior.html
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Invalidation.html
https://www.goatcounter.com/help/js
https://www.goatcounter.com/help/backend
https://www.goatcounter.com/help/countjs-host


CSS Naked Day

Heard about CSS Naked Day thanks to Andreas, and thought it would be fun to participate as well! So break out of your RSS reader and view this site in its full, naked glory! With how simple my new blog is, it works really well!

iDiallo 1 week ago

AI Did It in 12 Minutes. It Took Me 10 Hours to Fix It

I've been working on personal projects since the 2000s. One thing I've always been adamant about is understanding the code I write. Even when Stack Overflow came along, I was that annoying guy who told people not to copy and paste code into their repos. Instead, they should read it and adapt it to their specific case. On personal projects, I've applied this to a fault. Projects never get done because I'm reading and editing code to make it work exactly as I want. I am by no means trying to convince you that my code is high quality. Every day, I regret the design choices I made for this very blog. But at the very least, I like to understand the code that powers my projects.

So you can imagine how I struggle with the reviewing part when AI writes a large chunk of our daily work. Large language models are just so verbose, and often produce large blocks of code that don't even get used. I don't want to attribute it to malice (wasting your tokens) when I know this is an emergent technology we are all still adapting to. But it doesn't help that there is just so much code to review. What I tell myself when I review an AI-generated PR is: if I don't have a mental model of how the application works, how can I be of any use when it fails?

This weekend, I decided to tackle a project I've been postponing since I created this blog over a decade ago. I needed a nice interface to upload assets, such as images, to go with each blog post. According to my git history, I started work on it 11 years ago but never finished. Yet I've still been uploading images regularly to go with blog posts. How? Like a caveman. Whenever I finish writing a blog post, I create images in Photoshop or Photopea, fire up FileZilla to connect to my file server, and drag and drop my assets. Then, of course, I have to log back into the blog's admin panel and provide the correct URLs for those assets. It's as primitive as it gets. This was a perfect project to tackle with AI.
The good news is that I was able to complete it in a single day. The bad news? Well, I read through close to 5,000 lines of code.

This is the first time I've used an LLM to write PHP code. I used the GLM-5 model from z.ai, which I've often played with for generating graphs from data. I'll have to try it on a few more PHP projects before I commit to this opinion, but it consistently wrote spaghetti code. Spaghetti code, to me, is anything where logic is embedded directly in a template: HTML, CSS, and database queries all crammed into the same file. That's really hard to maintain.

I did realize, though, that my instructions were in direct contradiction with the agent's system prompt. It was explicit in the thinking process: "This is NOT a Next.js project - the user explicitly wants a PHP app. So I should NOT use the Next.js project that's already initialized. I should create a PHP application. Wait, but the instructions say 'You are an interactive CLI tool that helps users with software engineering tasks' and 'Now you are developing a comprehensive and feature-rich Next.js project from scratch.' But the user is explicitly asking for a PHP app. I should follow the user's request." It did generate a bunch of Node.js code, which I was able to remove manually. Luckily, it kept the PHP project in its own folder.

If you're wondering how 12 files contain ~5,000 lines of code, I wondered the same. But that's what spaghetti code does. I set it up locally, ran the install scripts, and a few more files and folders were generated. When I finally ran the application, it didn't work. I spent a few hours working through permissions, updating the install script, and modifying the SQLite setup. I thought Stack Overflow was dead, but I don't think I would have gotten SQLite working without it. One error, for example, was that SQLite kept throwing a warning that it was running in read-only mode. Apparently, you have to make the parent folder writable (not just the database file) to enable write mode.
It had been a long time since I'd manually included files in PHP. I normally use namespaces and autoloading. Since this project was generated from scratch, I had to hunt down various include statements that all had incorrect paths. Once I sorted those out, I had to deal with authentication. PHP sessions come with batteries included: you call session_start() and you can read and write session variables via the $_SESSION superglobal. But I couldn't figure out why it kept failing. When I created a standalone test file, sessions worked fine. But when loaded through the application, values weren't being saved. I spent a good while debugging before I found that a required session call was missing from the login success flow. When I logged in, the page redirected to the dashboard, but every subsequent action that required authentication immediately kicked me out.

Even after fixing all those issues and getting uploads working, something still bothered me: how do I maintain this code? How do I add new pages to manage uploaded assets? Do I add meatballs directly to the spaghetti? Or do I just trust the AI agent to know where to put new features? Technically it could do that, but I'd have to rely entirely on the AI without ever understanding how things work. So I did the only sane thing: I rewrote a large part of the code and restructured the project. Maybe I should have started there, but I didn't know what I wanted until I saw it. Which is probably why I had been dragging this project along for 11 years.

Yes, now I have 22 files, almost double the original count. But the code is also much simpler at just 1,254 lines. There's far less cognitive load when it comes to fixing bugs. There's still a lot to improve, but it's a much leaner foundation. The question I keep coming back to is: would it have been easier to do this manually? Well, the timeline speaks for itself. I had been neglecting this project for years. Without AI, I probably never would have finished it. That said, it would have been easier to build on my existing framework.
My blog's framework has been tested for years and has accumulated a lot of useful features: a template engine, a working router, an auth system, and more. All things I had to re-engineer from scratch here. If I'd taken the time to work within my own framework, it probably would have taken less time overall. But AI gave me the illusion that the work could be done much faster. Z.ai generated the whole thing in just 12 minutes. It took an additional 10 hours to clean it up and get it working the way I wanted. This reminds me of several non-technical friends who built/vibe-coded apps last year. The initial results looked impressive. Most of them don't have a working app anymore, because they realized that the cleanup is just as important as the generation if you want something that actually holds together. I can only imagine what "vibe-debugging" looks like. I'm glad I have a working app, but I'm not sure I can honestly call this vibe-coded. Most, if not all, of the files have been rewritten. When companies claim that a significant percentage of their code is AI-generated , do their developers agree? For me, it's unthinkable to deploy code I haven't vetted and understood. But I'm not the benchmark. In the meantime, I think I've earned the right to say this the next time I ship an AI-assisted app: "I apologize for so many lines of code - I didn't have time to write a shorter app."

Chris Coyier 1 week ago

Help Me Understand How To Get Jetpack Search to Search a Custom Post Type

I’ve got a Custom Post Type in WordPress for documentation pages. This is for the CodePen 2.0 Docs. The Classic Docs are just “Pages” in WordPress, and that works fine, but I thought I’d do the correct WordPress thing and make a unique kind of content a Custom Post Type. This works quite nicely, except that the docs don’t turn up at all in Jetpack Search.

I like Jetpack Search. It works well. It’s got a nice UI. You basically turn it on and forget about it. I put it on CSS-Tricks, and they still use it there. I put it on the Frontend Masters blog. It’s here on this blog. It’s a paid product, and I pay for it and use it because it’s good. I don’t begrudge core WordPress for not having better search, because raw MySQL search just isn’t very good. Jetpack Search uses Elasticsearch, a product better suited for full-blown site search. That’s not a server requirement they could reasonably bake into core. But the fact that it just doesn’t index Custom Post Types is baffling to me. I suspect it’s just something I’m doing wrong.

I can tell it doesn’t work with basic tests. For example, I’ve got a page called “Inline Block Processing” but if you search for “Inline Block Processing” it returns zero results. In the Customizing Jetpack Search area, I’m specifically telling Jetpack Search not to exclude “Docs”. That very much feels like it will include it. I’ve tried manually reindexing a couple of times, both from SSHing into Pressable and using WP-CLI to reindex, and from the “Manage Connections” page on WordPress.com. No dice. I contacted Jetpack Support, and they said: Jetpack Search handles Custom Post Types individually, so it may be that the slug for your post type isn’t yet included in the Jetpack Search index.
We have a list of slugs we index here: https://github.com/Automattic/jetpack/blob/trunk/projects/packages/sync/src/modules/class-search.php#L691 If the slug isn’t on the list, please submit an issue here so that our dev team can add it.

Where they sent me on GitHub was a bit confusing. It’s the end of a variable listing post meta keys, which doesn’t seem quite right, as that seems like, ya know, post metadata that shouldn’t be indexed, which isn’t what’s going on here. But it’s also right before a variable called private static $taxonomies_to_sync, which feels closer, but I know what a taxonomy is, and this isn’t that. A taxonomy is categories, tags, and stuff (you can make your own), but I’m not using any custom taxonomies here; I’m using a Custom Post Type.

They directed me to open a GitHub Issue, so I did that. But it’s sat untouched for a month. I just need to know whether Jetpack Search can handle Custom Post Types. If it does, what am I doing wrong to make it not work? If it can’t, fine, I just wanna know so I can figure out some other way to handle this. Unsearchable docs are not tenable.

Ankur Sethi 1 week ago

I'm no longer using coding assistants on personal projects

I’ve spent the last few months figuring out how best to use LLMs to build software. In January and February, I used Claude Code to build a little programming language in C. In December I used a local LLM to analyze all the journal entries I wrote in 2025, and then used Gemini to write scripts that could visualize that data. Besides what I’ve written about publicly, I’ve also used Claude Code to:

Write and debug Emacs Lisp for my personal Emacs configuration.
Write several Alfred workflows (in Bash, AppleScript, and Swift) to automate tasks on my computer.
Debug CSS issues on this very website.
Generate React components for a couple of throwaway side projects.
Generate Django apps for a couple of throwaway side projects.
Port color themes between text editors.
A lot more that I’m forgetting now.

I won’t lie, I started off skeptical about the ability of LLMs to write code, but I can’t deny the fact that, in 2026, they can produce code that’s as good or better than a junior-to-intermediate developer for most programming domains. If you’re abstaining from learning about or using LLMs in your own work, you’re doing a disservice to yourself and your career. It’s a very real possibility that in five years, most of the code we write will be produced using an LLM. It’s not a certainty, but it’s a strong possibility.

However, I’m not going to stop writing code by hand. Not anytime soon. As long as there are computers to program, I will be programming them using my own two fleshy human hands. I started programming computers because I enjoy the act of programming. I enjoy thinking through problems, coming up with solutions, evolving those solutions so that they are as correct and clear as possible, and then putting them out into the world where they can be of use to people. It’s a fun and fulfilling profession.

Some people see the need for writing code as an impediment to getting good use out of a computer. In fact, some of the most avid fans of generative AI believe that the act of actually doing the work is a punishment. They see work as unnecessary friction that must be optimized away. Truth is, the friction inherent in doing any kind of work—writing, programming, making music, painting, or any other creative activity generative AI purports to replace—is the whole point. The artifacts you produce as the result of your hard work are not important. They are incidental. The work itself is the point. When you do the work, you change and grow and become more yourself. Work—especially creative work—is an act of self-love if you choose to see it that way.

Besides, when you rely on generative AI to do the work, you miss out on the pleasurable sensations of being in flow state. Your skills atrophy (no, writing good prompts is not a skill, any idiot can do it). Your brain gets saturated with dopamine in the same way as when you gamble, doomscroll, or play a gacha game. Using Claude Code as your main method of producing code is like scrolling TikTok eight hours a day, every day, for work.

And the worst part? The code you produce using LLMs is pure cognitive debt. You have no idea what it’s doing, only that it seems to be doing what you want it to do. You don’t have a mental model for how it works, and you can’t fix it if it breaks in production. Such a codebase is not an asset but a liability. I predict that in 1-3 years we’re going to see organizations rewrite their LLM-generated software using actual human programmers.

Personally, I’ve stopped using generative AI to write code for my personal projects. I still use Claude Code as a souped-up search engine to look up information, or to help me debug nasty errors. But I’m manually typing every single line of code in my current Django project, with my own fingers, using a real physical keyboard. I’m even thinking up all the code using my own brain. Miraculous!

For the commercial projects I work on for my clients, I’m going to follow whatever the norms around LLM use happen to be at my workplace. If a client requires me to use Claude Code to write every single line of code, I’ll be happy to oblige. If they ban LLMs outright, I’m fine with that too. After spending hundreds of hours yelling at Claude, I’m dangerously proficient at getting it to do the right thing. But I haven’t lost my programming skills yet, and I don’t plan to. I’m flexible.

Given the freedom to choose, I’d probably pick a middle path: use LLMs to generate boilerplate code, write tricky test cases, debug nasty issues I can’t untangle, and quickly prototype ideas to test. I’m not an AI vegan. But when it comes to code I write for myself—which includes the code that runs this website—I’m going to continue writing it myself, line by line, like I always did. Somebody has to clean up after the robots when they make a mess, right?

Manuel Moreale 1 week ago

Anthony Nelzin-Santos

This week on the People and Blogs series we have an interview with Anthony Nelzin-Santos, whose blog can be found at z1nz0l1n.com . Tired of RSS? Read this in your browser or sign up for the newsletter . People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. Bonjour ! I’m a militant wayfarer, budding typographer, pathological reader, slow cyclist, obsessive tinkerer, dangerous cook, amateur bookbinder, homicidal gardener, mediocre sewist, and fanatical melomaniac living in Lyon (France). I was a technology journalist and journalism teacher for sixteen years, but i now work in instructional design. In my spare time, i take photos of old storefronts to preserve a rapidly fading typographical tradition. One of these days, i’ll finally finish the typefaces i’ve been working on forever. And my novel. And the painting of the bathroom. (My wife is a saint.) I was born a few years before the web was invented and grew up at this fascinating time when everybody wanted to do something with it, but nobody knew quite what yet. We were still supposed to learn Logo and Pascal in technology class, but most of the teachers understood the importance of the web and taught us the basics of HTML and CSS. I built my first website in 2000… as a school assignment! By 2007, i was one of those insufferable tech bloggers who made enough money to feel entitled, but not enough to feel safe. (I moonlighted as a graphic designer.) When more established outlets came knocking at my door, i shut down my blog and became one of those insufferable tech journalists who make enough money to feel entitled, but not enough to feel safe. (I moonlighted as a journalism teacher.) I kept a personal blog under the “zinzolin” moniker. This shade of purple is my favourite colour, partly because it sounds a bit like my name. 
Over the years, it became more and more difficult to find the energy to write recreationally after having spent the day writing professionally. In 2025, feeling more than a little burnt out, i rebooted my blog and switched from French to English. Fortunately, the name is equally weird in both languages. I don’t have a process so much as a way of managing the incessant chatter in my head. I write to give myself the permission to forget, and i publish to gift myself the ability to remember. You’ll never catch me without some way to capture those little “brain itches” — a notebook, the Bloom app, a digital recorder, the back of my hand… (I wrote part of this interview as a long series of text messages to myself!) In the middle of the week, i start reviewing my notes to find a common theme or extract the strongest idea. When an incomplete thought keeps coming back, i don’t try to force it by staring at a blinking cursor. I take a long walk, and usually, i have to stop part way to write. Most of the actual blogging is done long before i sit down to properly draft my weekly note. I have this romantic notion that the more comfortable i am, the more i can edit, the worse my writing tends to get. If i could, i’d write everything longhand in a rickety train, stream-of-consciousness style, and publish the raw scans of my notebooks. You wouldn’t be able to read half of it, but i can assure you the illegible half would be Nobel-prize worthy. But then, some things only happen after a few hours of diligent editing. If i give myself enough time, i can stop transcribing my notes and start conversing with them. There’s always something worth exploring in the gap between our past and present selves – even if the past was two days ago – but that delicate work requires a conducive environment. 
Judging by my recent output, it looks like this environment comprises a good chair , a MacBook Air on one of those ugly lap desks, my custom international QWERTY layout , iA Writer for writing and Antidote for proofreading, cosy lighting, just the right amount of background noise, and most important of all, a pot of delicious coffee. I’ve tried pretty much every CMS and SSG under the sun, but i’ve always come back to WordPress, until Matt Mullenweg reminded us that a benevolent dictator still is a dictator . Z1NZ0L1N is now built on Ghost and hosted by Magic Pages . I used to use Tinylytics and Buttondown , but i’m now using Ghost’s integrated analytics and newsletter features. My other websites are hosted on a VPS with Infomaniak , which is also where i get my domain names, e-mail, and assorted cloud services. That’s a question i had to ask myself when i rebooted Z1NZ0L1N last year. I switched to English in a bid to better separate my professional output from my recreational output. I jettisoned most of my audience, but i found a new community around the IndieWeb Carnival and quickly rebuilt a readership on my own merits. I get excited each time i get an e-mail from someone i don’t know from a country on the other side of the globe. I wanted to find a way to publish regularly without turning Z1NZ0L1N into the umpteenth link blog. After a few experiments, i’ve settled on a weekly note that’s part “what i’m doing”, part “what the rest of the world is doing”. This is old-school blogging meets recommendation algorithms — and i love it. Some things haven’t changed, though, and will never change. I use an open-source CMS that i could host myself, not a proprietary platform that i can’t control. I designed my theme myself. I don’t play the SEO/GEO game. I pay a little less than €10/month for Magic Pages’ starter plan with the custom themes add-on. Considering that it saves me €15/month in third-party services, i’d say it’s a fair price. 
I pay €12/year for the domain, but i also registered a few variations, including , which was first registered in 1999! Blogging is my least expensive hobby — by far.

As someone who’s worked a lot on the economics of independent publishing, i’m happily subscribed to a few news outlets and magazines. I like the idea of $1/month memberships for blogs, but in practice, i find it hard to track multiple micro-subscriptions on top of my existing (and frankly far too numerous) digital subscriptions. I wonder if we should create blogging collectives, almost like unions and coops, to collect and redistribute a single subscription between members. In the meantime, i’ll continue not talking about my Ko-Fi page.

The Forest and Ye Olde Blogroll are fantastic discovery tools. A lot of my favourite bloggers have already been featured in People and blogs: VH Belvadi, BSAG, Frank Chimero, Keenan, Piper Haywood, Nick Heer, Tom McWright, Riccardo Mori, Jim Nielsen, Kev Quirk, Arun Venkatesan, Zinzy… I’d love to see how Rob Weychert, Chris Glass, Josh Ginter or Melanie Richards would answer. Their approach to blogging couldn’t be more different, but they each informed mine in their own way.

Since 2008, i’ve taken thousands of photos of old storefronts. It began as a way to inform my typographical practice, but it rapidly became an excuse to go out and pay attention – really pay attention – to the world around me. You wouldn’t believe the things i’ve discovered in side streets, the number of conversations i’ve struck up after taking a picture of a once-beloved shop, and how my way of looking at the evolution of cities has entirely changed. If you’re up for a little challenge, find your own collection. It might be cool doors, weird postboxes, triangular things, every bookshop in Nova Scotia, sewer manholes, purple things, number signs… It’ll give you another perspective not only when travelling in foreign places, but also on your (not so) familiar surroundings.
It doesn’t cost a penny, but it’ll pay off immensely. Now that you're done reading the interview, go check the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 135 interviews . People and Blogs is possible because kind people support it.

David Bushell 1 week ago

CSS subgrid is super good

I’m all aboard the CSS subgrid train. Now I’m seeing subgrid everywhere. Seriously, what was I doing before subgrid? I feel like I was bashing rocks together.

Consider the following HTML: The content could be simple headings and paragraphs. It could also be complex HTML patterns from a Content Management System (CMS) like the WordPress block editor, or ACF flexible content (a personal favourite). Typically when working with CMS output, the main content will be restricted to a maximum width for readable line lengths.

We could use a CSS grid to achieve such a layout. Below is a visual example using the Chromium dev tools to highlight grid lines. This example uses five columns with no gap, resulting in six grid lines. The two outermost columns can expand to fill space or collapse to zero-width. The two inner columns act as a margin. The centre column is the smaller of two values: the maximum content width, or the full viewport width (minus the margins).

Counting grid lines correctly requires embarrassing finger math and pointing at the screen. Thankfully we can name the lines. I set a default column for all child elements. Of course, we could have done this the old fashioned way. Something like: But grid has so much more potential to unlock!

What if a fancy CMS wraps a paragraph in a block with a full-width class. This block is expected to magically extend a background to the full-width of the viewport like the example below. This used to be a nightmare to code but with CSS subgrid it’s a piece of cake.

We break out of the centre column by changing the column placement to the name I chose for the outermost grid lines. We then inherit the parent grid using the subgrid template. Finally, the nested children are moved back to the centre column. The selector keeps specificity low. This allows a single class to override the default column. CSS subgrid isn’t restricted to one level. We could keep nesting blocks inside each other and they would all break containment.
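Sketched in CSS, the layout described above might look something like this. The line names and widths here are my own illustrative choices, not the article's actual code:

```css
/* Five tracks, six named lines: flexible outer columns,
   fixed inner margins, and a readable centre column. */
.content {
  display: grid;
  grid-template-columns:
    [full-start] 1fr
    [margin-start] 2rem
    [centre-start] min(60ch, 100%)
    [centre-end] 2rem
    [margin-end] 1fr
    [full-end];
}

/* Default: every child sits in the centre column. */
.content > * {
  grid-column: centre;
}

/* A full-width block breaks out to the outermost lines,
   inherits the parent tracks with subgrid, and moves its
   own children back into the centre column. */
.content > .full-width {
  grid-column: full;
  display: grid;
  grid-template-columns: subgrid;
}
.content > .full-width > * {
  grid-column: centre;
}
```

Because subgrid inherits the parent's named lines, nested children can keep targeting the centre column at any depth of nesting.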
If we wanted to create a “boxed” style we can simply change the column to the inner grid lines instead of the outermost ones. This is why I put the margins inside. In hindsight my grid line names are probably confusing, but I don’t have time to edit the examples so go paint your own bikeshed :)

On smaller viewports the outermost columns collapse to zero-width and the “boxed” style looks exactly like the full-width style. This approach is not restricted to one centred column. See my CodePen example and the screenshot below. I split the main content in half to achieve a two-column block where the text edge still aligns, but the image covers the available space.

CSS subgrid is perfect for WordPress and other CMS content that is spat out as a giant blob of HTML. We basically have to centre the content wrapper for top-level prose to look presentable. With the technique I’ve shown we can break out more complex block patterns and then use subgrid to align their contents back inside. It only takes a single class to start! Here’s the CodePen link again if you missed it. Look how clean that HTML is! Subgrid helps us avoid repetitive nested wrappers. Not to mention any negative margin shenanigans. Powerful stuff, right?

Browser support? Yes. Good enough that I’ve not had any complaints. Your mileage may vary, I am not a lawyer. Don’t subgrid and drive. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

ava's blog 2 weeks ago

offer: blogmaxxing class

Looksmaxxing is all the rage nowadays, but what about your blog? Look no further! I am easily one of the bloggers ever, and I have compiled everything I have learned in the years on this platform. And you guys get it first, for 50% off! ✍️

For only 67.67 Euro , you'll get course material covering ✨
- High-impact writing and leveling up your Word/Memorability Ratio .
- Striking the balance between Jestermaxxing and Corporatemogging .
- Sharp sentence structure for a chiseled outline!

For a steal of 69.99 Euro , you unlock access to everything about 🚀
- Lessons learned from beating your header with a hammer.
- Smoothing out your CSS wrinkles with hardcore AI Sculpting ™.
- How the optimal font-weight changed my life!

The final lessons are yours for 42.00 Euro :
- The art of biohacking Cortisol and Dopamine spikes that turns readers into fans.
- FOMO Widgets : “ 15 people are reading this now, ” and other social proof hacks that build core community moments!
- The undeniable magic of using OpenClaw to auto-respond to reader mails and letting it clean your Inbox for you :)

Your blog deserves more than mediocrity. It deserves at least 50 upvotes . With this, you’ll unlock the secret 3-step system top bloggers use to dominate the Trending page while looking effortlessly perfect. ⏳ WARNING: Only 17 spots left for VIP access , and only available until 01.04.2026 23:59:59 CET !

Published 01 Apr, 2026

Maurycy 2 weeks ago

My ramblings are available over gopher

It has recently come to my attention that people need a thousand lines of C code to read my website. This is unacceptable. For simpler clients, my server supports gopher: The response is just a text file: it has no markup, no links and no embedded content.

For navigation, gopher uses specially formatted directory-style menus: The first character on a line indicates the type of the linked resource: The type is followed by a tab-separated list containing a display name, file path, hostname and port. Lines beginning with an "i" are purely informational and do not link to anything. (This is non-standard, but widely used.) Storing metadata in links is weird to modern sensibilities, but it keeps the protocol simple. Menus are the only thing that the client has to understand: there are no URLs, no headers, no mime types — the only thing sent to the server is the selector (file path), and the only thing received is the file.

... as a bonus, this one liner can download files: That's quite clunky, but there are lots of programs that support it. If you have Lynx installed, you should be able to just point it at this URL: ... although you will want to put in because it's not 1991 anymore [Citation Needed]

I could use informational lines to replicate the web's navigation by making everything a menu — but that would be against the spirit of the thing: gopher is a document retrieval protocol, not a hypertext format. Instead, I converted all my blog posts to plain text and set up some directory-style navigation.

I've actually been moving away from using inline links anyways because they have two opposing design goals: While reading, links must be normal text. When you're done, links must be distinct clickable elements. I've never been able to find a good compromise: Links are always either distracting to the reader, annoying to find/click, or both.

Also, to preempt all the emails: ... what about Gemini? (The protocol, not the autocomplete from google.)
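As a brief aside before answering that: the menu format described above is simple enough that a parser fits in a few lines. This is a hedged sketch of my own with a made-up sample line, not code from this site:

```javascript
// Parse one gopher menu line: a single type character, then a
// tab-separated display name, selector (file path), host, and port.
function parseMenuLine(line) {
  const type = line[0]; // e.g. "1" for a menu, "i" for an info line
  const [display, selector, host, port] = line.slice(1).split("\t");
  return { type, display, selector, host, port: Number(port) };
}

// A made-up example line:
const item = parseMenuLine("1My blog posts\t/posts\texample.com\t70");
// item.type === "1", item.selector === "/posts", item.port === 70
```

That single function is essentially the entire client-side protocol: everything else is just printing files.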
Gemini is the popular option for non-web publishing... but honestly, it feels like someone took HTTP and slapped markdown on top of it. This is a Gemini request... ... and this is an HTTP request: For both protocols, the server responds with metadata followed by hypertext. It's true that HTTP is more verbose, but 16 extra bytes doesn't create a noticeable difference.

Unlike gopher, which has a unique navigation model and is of historical interest, Gemini is just the web but with limited features... so what's the point? I can already write websites that don't have ads or autoplaying videos, and you can already use browsers that don't support features you don't like. After stripping away all the fluff (CSS, JS, etc) the web is quite simple: a functional browser can be put together in a weekend. ... and unlike Gemini, doing so won't throw out 35 years of compatibility: Someone with Chrome can read a barebones website, and someone with Lynx can read normal sites.

Gemini is a technical solution to an emotional problem. Most people have a bad taste for HTTP due to the experience of visiting a commercial website. Gemini is the obvious choice for someone looking for "the web but without VC types". It doesn't make any sense when I'm looking for an interesting (and humorously outdated) protocol.

/projects/tinyweb/ : A browser in 1000 lines of C ...
/about.html#links : ... and thoughts on links for navigation.
https://www.rfc-editor.org/rfc/rfc1436.html : Gopher RFC
https://lynx.invisible-island.net/ : Feature complete text-based web browser

Den Odell 2 weeks ago

You're Looking at the Wrong Pretext Demo

Pretext, a new JavaScript library from Cheng Lou, crossed 7,000 GitHub stars in its first three days. If you've been anywhere near frontend engineering circles in that time, you've seen the demos: a dragon that parts text like water, fluid smoke rendered as typographic ASCII, a wireframe torus drawn through a character grid, multi-column editorial layouts with animated orbs displacing text at 60fps. These are visually stunning and they're why the library went viral. But they aren't the reason this library matters.

The important thing Pretext does is predict the height of a block of text without ever reading from the DOM. This means you can position text nodes without triggering a single layout recalculation. The text stays in the DOM, so screen readers can read it and users can select it, copy it, and translate it. The accessibility tree remains intact, the performance gain is real, and the user experience is preserved for everyone. This is the feature that will change how production web applications handle text, and it's the feature almost nobody is demonstrating. The community has spent three days building dragons. It should be building chat interfaces. And the fact that the dragons went viral while the measurement engine went unnoticed tells us something important about how the frontend community evaluates tools: we optimize for what we can see, not for what matters most to the people using what we build.

The problem is forced layout recalculation, where the browser has to pause and re-measure the page layout before it can continue. When a UI component needs to know the height of a block of text, the standard approach is to measure it from the DOM. You read a layout property, and the browser synchronously calculates layout to give you an answer. Do this for 500 text blocks in a virtual list and you've forced 500 of these pauses. This pattern, called layout thrashing, remains a leading cause of visual stuttering in complex web applications.
Pretext's insight is that canvas text measurement uses the same font engine as DOM rendering but operates outside the browser's layout process entirely. Measure a word via canvas, cache the width, and from that point forward layout becomes pure arithmetic: walk cached widths, track running line width, and insert breaks when you exceed the container's maximum. No slow measurement reads, and no synchronous pauses.

The architecture separates this into two phases. The first does the expensive work once: normalize whitespace, segment the text into locale-aware word boundaries, handle bidirectional text (such as mixing English and Arabic), measure segments with canvas, and return a reusable reference. The second is then pure calculation over cached widths, taking about 0.09ms for a 500-text batch against roughly 19ms for DOM measurement. Cheng Lou himself calls the 500x comparison "unfair" since it excludes the one-time cost, but that cost is only paid once and spread across every subsequent call. It runs once when the text appears, and every subsequent resize takes the fast path, where the performance boost is real and substantial.

The core idea traces back to Sebastian Markbage's research at Meta, where Cheng Lou implemented the earlier prototype that proved canvas font metrics could substitute for DOM measurement. Pretext builds on that foundation with production-grade internationalization, bidirectional text support, and the two-phase architecture that makes the fast path so fast. Lou has a track record here: react-motion and ReasonML both followed the same pattern of identifying a constraint everyone accepted as given and removing it with a better abstraction.

The first use case Pretext serves, and the one I want to make the case for, is measuring text height so you can render DOM text nodes in exactly the right position without ever asking the browser how tall they are. This isn't a compromise path, it's the most capable thing the library does. Consider a virtual scrolling list of 500 chat messages.
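As a hedged sketch of that arithmetic fast path (my own illustrative code, not Pretext's actual API): once word widths are cached, predicting a message's line count, and from it its height, needs no DOM reads at all.

```javascript
// Given cached word widths (normally measured once via canvas),
// predict how many lines the text will wrap to. Pure arithmetic:
// walk cached widths, track running line width, break on overflow.
function predictLineCount(words, widths, spaceWidth, maxWidth) {
  let lines = 1;
  let lineWidth = 0;
  for (const word of words) {
    const w = widths[word];
    // Width of the current line if we append this word
    const candidate = lineWidth === 0 ? w : lineWidth + spaceWidth + w;
    if (candidate > maxWidth) {
      lines += 1; // break: the word starts a new line
      lineWidth = w;
    } else {
      lineWidth = candidate;
    }
  }
  return lines;
}

// Predicted height = line count × line height, again with no DOM
// reads (the widths and 24px line height here are assumed values).
const widths = { hello: 48, world: 52, again: 50 };
const lines = predictLineCount(["hello", "world", "again"], widths, 8, 120);
const height = lines * 24; // 2 lines → 48px
```

The real library layers caching, segmentation, and bidirectional handling on top, but the core fast path is this kind of loop over numbers rather than a layout pass.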
To render only the visible ones, you need to know each message's height before it enters the viewport. The traditional approach is to insert the text into the DOM, measure it, and then position it, paying the layout cost for every message. Pretext lets you predict the height mathematically and then render the text node at the right position. The text itself still lives in the DOM, so the accessibility model, selection behavior, and find-in-page all work exactly as they would with any other text node. Here's what that looks like in practice: two function calls, where the first measures and caches, and the second predicts height through calculation. No layout cost, yet the text you render afterward is a standard DOM node with full accessibility.

The shrinkwrap demo is the clearest example of why this path matters. CSS sizes a container to the widest wrapped line, which wastes space when the last line is short. There's no CSS property that says "find the narrowest width that still wraps to exactly N lines." Pretext calculates the optimal width mathematically, and the result is a tighter chat bubble rendered as a standard DOM text node. The performance gain comes from smarter measurement, not from abandoning the DOM. Nothing about the text changes for the end user. Accordion sections whose heights are calculated from Pretext, and masonry layouts with height prediction instead of DOM reads: these both follow the same model of fast measurement feeding into standard DOM rendering.

There are edge cases worth knowing about, starting with the fact that the prediction is only as accurate as the font metrics available at measurement time, so fonts need to be loaded before measurement runs or results will drift. Ligatures (where two characters merge into one glyph, like "fi"), advanced font features, and certain CJK composition rules can introduce tiny differences between canvas measurement and DOM rendering.
These are solvable problems and the library handles many of them already, but acknowledging them is part of taking the approach seriously rather than treating it as magic.

Pretext also supports manual line layout for rendering to Canvas, SVG, or WebGL. These APIs give you exact line coordinates so you can paint text yourself rather than letting the DOM handle it. This is the path that went viral, and the one that dominates every community showcase. The canvas demos are impressive and they're doing things the DOM genuinely can't do at 60fps. But they're also painting pixels, and when you paint text as canvas pixels, the browser has no idea those pixels represent language. Screen readers like VoiceOver, NVDA, and JAWS derive their understanding of a page from the accessibility tree, which is itself built from the DOM, so canvas content is invisible to them. Browser find-in-page and translation tools both skip canvas pixels entirely. Native text selection is tied to DOM text nodes and canvas has no equivalent, so users can't select, copy, or navigate the content by keyboard. A canvas element is also a single tab stop, meaning keyboard users can't move between individual words or paragraphs within it, even if it contains thousands of words. In short, everything that makes text behave as text rather than an image of text disappears.

None of this means the canvas path is automatically wrong. There are legitimate contexts where canvas text rendering is the right choice: games, data visualizations, creative installations, and design tools that have invested years in building their own accessibility layer on top of canvas. For SVG rendering, the trade-offs are different again, since SVG text elements do participate in the accessibility tree, making it a middle ground between DOM and canvas. But the canvas path is not the breakthrough, because canvas text rendering has existed for fifteen or more years across dozens of libraries.
What none of them offered was a way to predict DOM text layout without paying the layout cost. Pretext's measurement APIs do exactly that, and it's genuinely new.

This pattern often repeats across the frontend ecosystem, and I understand why. A dragon parting text like water is something you can record as a GIF, post to your socials, and collect thousands of impressions. A virtual scrolling list that pre-calculates text heights looks identical to one that doesn't. The performance difference is substantial but invisible to the eye. Nobody makes a showcase called "works flawlessly with VoiceOver" or "scrolls 10,000 messages without a single forced layout" because these things look like nothing. They look like a web page working the way web pages are supposed to work.

This is Goodhart's Law applied to web performance: once a metric becomes a target, it ceases to be a good measure. Frame rate and layout cost are proxies for "does this work well for users." GitHub stars are a proxy for "is this useful." When the proxy gets optimized instead, in this case by visually impressive demos that happen to use the path with the steepest accessibility trade-offs, the actual signal about what makes the library important gets lost. The library's identity gets set by its most visually impressive feature in the first 72 hours, and the framing becomes "I am drawing things" rather than "I am measuring things faster than anyone has before." Once that framing is set, it's hard to shift.

The best text-editing libraries on the web, CodeMirror, Monaco, and ProseMirror, all made the deliberate choice to stay in the DOM even when leaving it would have been faster, because the accessibility model isn't optional. Pretext's DOM measurement path belongs in that tradition but goes further: those editors still read from the DOM when they need to know how tall something is. Pretext eliminates that step entirely, predicting height through arithmetic before the node is ever rendered.
It's the next logical step in the same philosophy: keep text where it belongs, but stop paying the measurement cost to do so.

I've been thinking about performance engineering as a discipline for most of my career, and what strikes me about Pretext is that the real innovation is the one that is hardest to see. Predicting how text will lay out before it reaches the page, while keeping the text in the DOM and preserving everything that makes it accessible, is a genuinely new capability on the web platform. It's the kind of foundational improvement that every complex text-heavy application can adopt immediately. If you're reaching for Pretext this week, reach for the DOM measurement path first. Build something that keeps text in the DOM and predicts its height without asking the browser. Ship an interface that every user can read, select, search, and navigate. Nobody else has done this yet, and it deserves building.

Performance engineering is at its best when it serves everyone without asking anyone to give something up. Faster frame rates that don't make someone nauseous. Fewer layout pauses that mean a page responds when someone with motor difficulties needs it to. Text that is fast and readable and selectable and translatable and navigable by keyboard and comprehensible to a screen reader. The dragons are fun. The measurement engine is important. Let's try not to confuse the two.

Manuel Moreale 2 weeks ago

Nikhil Anand

This week on the People and Blogs series we have an interview with Nikhil Anand, whose blog can be found at nikhil.io. Tired of RSS? Read this in your browser or sign up for the newsletter. People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month.

Hi I'm Nikhil! I grew up in the UAE and came to the United States for college and graduated with a degree in biomedical engineering. I worked in academia and industry for about 15 years before deciding to turn my attention and energies towards problems in healthcare. I'm now a graduate student at Columbia University's Medical Center and am studying clinical informatics and loving the magnificent beehive that is New York City. With the time I have, I love going to art museums, practicing calligraphy, reading short stories and graphic novels, and watching every suspense/mystery show or movie I can (huge fan of the genre; for example I've watched all of Columbo at least three times). I'm also trying to learn CAD and have 3D printed several small abominations.

I started blogging around 2003 after discovering blogs like Kottke.org, Jeffrey Zeldman's blog, Greg Storey's Airbag.ca, and Todd Dominey's WhatDoIKnow.org. My first blog was at freeorange.net which I now use as a placeholder for my tiny LLC's future site. I used to live in Ames, Iowa at the time and decided to blog what I knew about stuff going on in the town: gossip, lectures and shows I'd attended, photos of random scenes and events, and so on. That last part proved to be great: I'd hear from quite a few alumni or former residents who'd have photo requests for nostalgia and I'd gladly oblige, especially since I was super excited to use my first digital camera, a whopping 5 megapixel Sony DSC-F717 😊 I then stopped blogging for about 10 or so years and resumed in 2018.
My current blog is essentially a freeform dump: just this mélange of stuff I find interesting and/or may want to reference later. There's really no audience in mind. I use a lot of tags on my posts and am often delighted by exploring them a while later. I moved all my bookmarks over from PinBoard (an excellent service) and am trying to get off Instagram. I'm also trying to be better about making and sharing things (photos, calligraphy, art) no matter how terrible they are and not just consuming them.

As for the name, I really wanted a domain hack, but this sadly required permission from the Israeli government I was pretty sure I wouldn't get 😅 So I went with the shortest and 'coolest' TLD I could find and ended up with nikhil.io. I also have nikhil.fish as an alias for no reason.

I think my site's half a tumblelog. As for the other half, I have a Markdown file in my iCloud Drive that I dump inchoate thoughts into (it's at about half a meg right now). I also use the excellent Things app on my phone to save blog posts, names, recommendations, articles, and media of interest to peruse later. When I have time, I look at these two sources to post and comment on something I think is beautiful, interesting, or funny.

All professional creatives I know personally have a space that they attend to do their work and they have told me that this matters immensely to them. In my case, I have a setup I've used reliably over many years and love it. I especially love my sit-to-stand desk (on wheels), giant display, and clickity-clack keyboard. I always listen to ambient music or white noise while working on anything (Loscil's works are a favorite). I've found that I just cannot focus in coffeehouses or libraries. And I absolutely cannot work or think in harsh "cool white" lighting (3000K or lower; if you need me to divulge secrets, just put me in a room with two tubelights for thirty seconds).
I know a lot of people (like my wife, a writer) who can work anywhere, and I may be a bit envious. I am also in the habit of pacing around and muttering things to myself while working and these are not nice things to do at coffeehouses or libraries.

I write all my posts in Markdown and use an old and heavily modded version of 11ty.js with several Markdown-it plugins, supported by quite a few Node scripts to generate the HTML pages. Images are processed with Sharp. The blog theme is a mess of TSX and SASS files. All posts and code are in Github. I build everything on my laptop and sync all the files to an S3 bucket that serves my blog through CloudFront.

Not really. I've spent enough time monkeying with the design/structure and code that my setup fits my needs like a bespoke suit. You can always nerd out over tooling, and it's a lot of fun, but I've suspended that in favor of using the tools. For the time being at least 😅 Now if my wife or a friend were starting a blog, I would absolutely recommend a platform like Bear. Anything simple, hosted, not creepy, and not run by greedy and/or awful people.

It costs ~$5 a month. A giant part of that cost is the domain name. Zero revenue. No plans on 'growing' it or whatever; it's just my little garden on the internet. I have no problem with people monetising their blogs as long as the strategy they employ is respectful to visitors' privacy and unobtrusive to their experience. Patronage/memberships aside, The Deck comes to mind as an ad platform that achieved both these things very well. I do have my problems with platforms like Substack and might write a blog post about this later.

Please interview Chris Glass! His lovely and popular blog is a huge inspiration for mine, layout and content, and he's been at it since at least 2003 IIRC. Another old favorite is Witold Riedel's log. I'm also really digging this blog I discovered recently.
I just put up a small project I've wanted to do for a while, my own little curated digital gallery of art I've loved over the years. It was mostly a design exercise but I thought I might use some LLM to discover some themes in why I love these works (or maybe you just love looking at things and don't really need to understand why). Other than that, I am so happy with what feels to me like a resurgence in personal blogging (here's a recent index of personal blogs from readers of HackerNews). Thank you for having me in your beautiful space and featuring several other lovely and interesting people! This is a fantastic project Manu 🤗 Now that you're done reading the interview, go check the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 134 interviews . People and Blogs is possible because kind people support it.

matduggan.com 2 weeks ago

I Can't See Apple's Vision

I don't typically write about Apple stuff. It's the most written-about company on earth. Every product launch gets the kind of forensic scrutiny normally reserved for plane crashes and celebrity divorces. Mostly though, I feel like a line cook at a Denny's talking trash about whether the French Laundry has lost their way. I'm back here microwaving a Grand Slam and opining about Thomas Keller's sauce work. The engineers I know personally at Apple are, on average, much more talented than me. They work harder, they do it for decades without a break, and none of them have ever shipped a feature while still wearing pajama pants at 2 PM. It seems insane for someone of my mediocre talent to critique them.

It also feels a little dog-pile-y. Apple employees know Tahoe sucks. They know it the way you know your haircut is bad — they don't need strangers on the internet confirming it. And to be fair, there's genuinely great work buried inside Tahoe: the clipboard manager, the automation APIs, a much-improved Spotlight. But visually it's gross, and that matters when your entire brand identity is "we're the ones who care about design."

Instead, I want to talk about a bigger problem and one that I do feel qualified to talk about because I am very guilty of committing this sin. I don't see a cohesive vision for MacOS and WatchOS. This, more than one bad release, seems far worse to me and dangerous for the company. Since this is already 2000 words as a draft I'll save WatchOS for another time. I'm verbose but even I have limits.

Now to be clear this isn't across every product. iPadOS has a strong vision, and the team has the strength of its convictions to change approaches. The different stabs at solving the window problem inside of the iPad, making it so that you still have an iPad experience while being able to do multiple things at the same time, are proof of that. iOS has an incredibly strong vision for what the product is and isn't and how the software works with that.
VisionOS and tvOS are less strong, but visionOS is still finding its footing in a brand new world. The Apple TV hardware and software is in a weirdly good position even though nothing has changed about it in what feels like geological time. I've purchased every version of the Apple TV, and with the exception of that black glass remote — the one that felt like it was designed by someone who had never held a remote, or possibly a physical object — everything has been pretty good. I'm still not clear how storage works on the Apple TV and I don't think anybody outside of Apple does either. I'm not even sure Apple knows. But somehow it's fine.

But with watchOS and MacOS we have two software stacks that seem to be letting down the great hardware they are installed on. They seem to be evolving in random directions with no clear end goal in mind. I used to be able to see what OS X was aiming for, even if it didn't hit that goal. Now with two of Apple's platforms I'm not able to see anything except a desire to come up with something to show as this year's release.

When I got my first Mac — an iBook G3 — the experience was like test-driving a Ferrari that someone had fitted with a lawnmower engine. You'd click on the hard drive icon and wait. And wait. And in those few seconds of waiting, you'd think: man, this would be incredible if the hardware could keep up. The software had somewhere it wanted to go. The hardware just couldn't get it there yet. This trend continued for a long time on OS X, where you'd see Apple really pushing the absolute limits of what it could get away with. After the rock solid stability of 10.4 Apple took a lot of swings with 10.5 and they didn't all land. The first time you opened the Time Machine UI and the entire thing crawled to an almost crash, you'd think boy maybe this wasn't quite ready for prime time. But this entire time there wasn't really a question, ever, that there was a vision for what this looked like.
The progression of OS X from the beta onward was this: OS X tried to accommodate you, not the other way around. When you look at these screenshots I'm always surprised how light the touch is. There isn't a lot of OS here for the user. Almost everything is happening behind the scenes and the stuff you do see is pretty obvious.

The first time I thought "oh man, they've lost the thread" was Notifications. On iOS, Notifications make sense — you've got apps buried in folders three screens deep, so a unified system for surfacing what's happening is genuinely useful. On macOS, this design makes absolutely no sense at all. You can see your applications. They're right there. In the Dock. Which is also right there. This was the beginning of the feeling of "we aren't sure what we're doing here with the Mac anymore". iOS users like Notifications so maybe you dorks will too?

It consumes a huge amount of screen real estate, and it was never (and still isn't) clear what should and shouldn't be a notification. Even opening up mine right now, it's filled with garbage that doesn't make sense to notify me about. A thing has completed running the thing that I asked it to run? Why would I need to know that? There is also already a clear way to communicate this information to me: the application icon adds an exclamation point or bounces up and down in the Dock. With Notifications you end up with just garbage noise taking up your screen for no reason. Maybe worse, it's not even garbage designed with the Mac in mind. It's just random crap nobody cares about that looks exactly like iOS Notifications.

The issue with copying everything from iOS is that it's like copying someone's homework — except they go to a different school, in a different country, studying a different subject. It's not just wrong in the way where you tried and failed. It's wrong in a way that makes everyone who encounters it deeply uncomfortable. The teacher doesn't even know where to begin. They just stare at it.
For years afterwards it seemed like the purpose of macOS was just to port iOS features to the Mac years after their launch on iOS. Often these didn't make much sense, or hadn't had a lot of effort expended on making them very Mac-y. Like there was clearly a favorite child in iOS, then a sassy middle child in iPadOS, and then, like a 1980s sitcom where there was a contract dispute, "another child" you saw every fifth episode run down the stairs in the background with no lines. I'd be at home shouting at my TV, "I knew they didn't kill you off, macOS!"

Now with Tahoe there's clearly some sort of struggle happening inside of the team. And here's what's maddening — buried inside this visual catastrophe, someone at Apple is doing incredible work. Clipboard management has been table stakes in the third-party ecosystem for years. Apple finally added a version that handles 90% of use cases. It's classic Sherlocking: Apple shows up ten years late to the party, brings a decent bottle of wine, and somehow half the guests leave with them.

Same with Spotlight. Spotlight hasn't gotten a ton of love in years. Suddenly it's really competing with third-party tools. If you're searching for a file, you can filter it based on where the file is stored. Type the name of a directory, press the Tab key, and then type the name of the file before pressing Enter. This is great! We finally have keyword search for stuff like . Application shortcuts for opening stuff with things like for Firefox are nice. Assign a quick key like “se” to Send Email. Type it in Spotlight, hit Enter, and compose your message.

This is all classic Apple thinking, which is "how can we make the Mac as good as possible such that you, the user, don't need to download any third-party applications to get a nice experience".
You don't need a word processor: you have a word processor and a spreadsheet application and presentation software and a PDF viewer and a clipboard manager and a system launcher and automation APIs, etc. etc. etc. This is a vision that is consistent throughout the entire system's history: how can we help you do the things you need to do more easily?

But the reason why I'm stressed, as someone who is pretty invested in the ecosystem, is that the visual stuff is so bad. And not just bad, but negligent: we didn't test how it was going to look under a bunch of situations, so that's now someone else's problem. Whenever I get a Finder sidebar covering folder contents so I have to resize the window every time, or the Dock freaks out and refuses to come back out, it feels like I installed one of those OS X skins for a Linux distro. I buy Apple stuff because it's nice to look at, and this is horrible to look at.

Why is this so big? Why did you cut off the word "Finder" from Force Quit? Everywhere you look there's a million of these papercuts. We have a resolution on our laptop screens that would have made people collapse in 2005; why must we waste all of it on UI elements? Also, you can't grab window edges, as shown by the best post ever written here: https://noheger.at/blog/2026/01/11/the-struggle-of-resizing-windows-on-macos-tahoe/

Why is there so much empty space between everything? Why are there six ways to do literally everything? Why did we copy the concept of Control Center from iOS at all when there's very little limit on screen real estate and we could already do this from the menu bar? So we're going to keep the Mac menu bar, but we're going to add a full iPad control system, and then we're going to use the iPad control system to manage the menu bar.

I will say the "Start Screen Saver" one makes me laugh because it's a mistake I would make in CSS. The text is too long so the button is giant, but we didn't resize the icon, so it looks crazy.
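For the web folks, here's a minimal sketch of that same bug in CSS terms. The class names are made up, and this is obviously not Apple's actual code; it's just the shape of the mistake: the button grows to fit its label while the icon stays pinned to a fixed pixel size.

```css
/* Hypothetical class names; a sketch of the mistake, not real Tahoe code. */
.menu-button {
  display: inline-flex;
  align-items: center;
  gap: 0.5em;
  padding: 0.75em 1.25em;
}

/* The bug: the icon is hard-coded in pixels, so when a long label like
   "Start Screen Saver" makes the button huge, the icon stays tiny. */
.menu-button .icon {
  width: 16px;
  height: 16px;
}

/* One possible fix: size the icon in em so it tracks the label's font size. */
.menu-button .icon--scaled {
  width: 1em;
  height: 1em;
}
```

Size the icon relative to the text and the two can never drift apart, no matter how long the label gets.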
Now, do we need the same text inside the button as outside of it? No, and that leads me to the other banger. It's pretty clear the two white boxes inside of "Scene or Accessory" were supposed to be text, Scene on the top and then Accessory on the bottom, but SwiftUI couldn't do that so they left the placeholder. Somewhere there is a Jira ticket to come back to this that got trashed.

Also, complete aside: has anyone in the entire fucking world ever run Shazam from a Mac? What scenario are we designing for here? I hear a banger at the coffee shop so I hold my MacBook Pro up over my head like John Cusack in Say Anything, hoping it catches enough audio before my arms give out? "Recognize Music" is in my menu bar, taking up space that could be used for literally anything else, on the off chance I need to identify a song using a device that weighs four pounds and has no microphone worth using in a noisy room. If you are going to copy iPadOS's homework you need to think about it for 30 seconds.

So my hope is that the improvement camp wins. That the people who built the better Spotlight and the clipboard manager and the automation APIs are the ones who get to set the direction. Because right now it feels like the best work on macOS is being done in spite of the overall vision, not because of it. Like someone's sneaking vegetables into a toddler's mac and cheese. The good stuff is in there — you just have to eat around a lot of neon orange nonsense to find it.

Steve Jobs talked about creative people having to persuade five layers of management to do what they know is right. I don't know how many layers there are now. But I know what it looks like when the creative people are losing that argument, and I know what it looks like when they're winning it. Right now, on macOS, it looks like both are happening at the same time, in the same release, on the same screen. And that's scarier than any one bad design choice.

It's Unix, but you never need to know that.
All the power, none of the beard. You get the stability of a server OS without ever having to type anything into a terminal. Everything annoying is abstracted away. Drivers? Gone. "Installing" an application? You drag it into a folder. That's it. That's the install. It felt like the computer was meeting you more than halfway — it was practically doing your job for you and then apologizing for not doing it sooner.

If it seems like it should work, it works. Double-click a PDF, it opens. Put in a DVD, it plays. Drag an app to the Applications folder and it becomes an application. This sounds obvious now, but in 2003 this was like witchcraft if you were coming from Windows.

But it was also serious. It wasn't cluttered with stupid bullshit. It was designed for people who made things — with real font management, color calibration, the works. The OS tried to stay out of your way. Your content was the show; everything else was stagecraft.

Stratechery 2 weeks ago

An Interview with Arm CEO Rene Haas About Selling Chips

Good morning,

This week’s Stratechery Interview is with Arm CEO Rene Haas, who I previously spoke to in January 2024, and who recently made a major announcement at Arm’s first-ever standalone keynote: the long-time IP-licensing company is undergoing a dramatic shift in its business model and selling its own chips for the first time. We dive deep into that decision in this interview, including the meta of the keynote, Arm’s history, and how the company has evolved, particularly under Haas’ leadership. Then we get into why CPUs matter for AI, and how Arm’s CPU compares to Nvidia’s, x86, and other custom Arm silicon. At the end we discuss the risks Arm faces, including a maxed-out supply chain, and how the company will need to change to support this new direction.

As a reminder, all Stratechery content, including interviews, is available as a podcast; click the link at the top of this email to add Stratechery to your podcast player.

On to the Interview. This interview is lightly edited for clarity.

Rene Haas, welcome back to Stratechery.

RH: Ben Thompson, thank you.

Well, you used to be someone special, I think you were the only CEO I talked to who did nothing other than license IP, now you’re just another fabless chip guy like [Nvidia CEO] Jensen [Huang] or [Qualcomm CEO] Cristiano [Amon].

RH: (laugh) Yeah, you can put me in that category, I guess.

Well, the reason to talk this week is the momentous announcement you made at the Arm Everywhere keynote — you will be selling your own chip. But before I get to the chip, I’m kind of interested in the meta of the keynote itself. Is this Arm Everywhere concept new, as far as being a keynote? Why have your own event?

RH: You know, we were talking a little bit about this going into the day. I don’t think we’ve ever as a company done anything like this.
Yeah, I didn’t think so either. I was trying to verify just to make sure my memory was correct, but yes, it’s usually at Computex or something like that.

RH: Our product launches have usually been lower key; we usually build them around OEM products that are using our IP, that use our partners’ chips. But we just felt like this was such a momentous day for the company, a very different day for the company, that we wanted to do something very, very unique. So it was very intentional. We were chatting about it prior; I don’t think we’ve done anything like this before.

Who was the customer for the keynote specifically? Because you’re making a chip — Meta is your first customer, they knew about this, they don’t need to be told — what was the motivation here? Who are you targeting?

RH: When you prepare for these things, that’s one of the first questions you ask yourself: “Who is this for?”, “Is it for the ecosystem?”, “Is it for customers?”, “Is it for investors?”, “Is it for employees?”, and I think under the umbrella of Arm Everywhere, the answer to those questions was “Yes”, everybody. We felt we needed to, because a lot of questions come up on this, right, Ben, in terms of, “What are we doing?”, “Why are we doing it?”, “What’s this all about?”. The answer to that question was “Yes”, it was for everyone.

One more question: why the name “Arm Everywhere”?

RH: We were trying to come up with something that was going to thematically remind people a bit about who Arm was and what we are and what we encompass, but not actually tease out that we were going to be announcing something.

Right, you can’t say “Arm’s New Chip Event”.

RH: (laughing) Yes, exactly, “Come to the new product launch that we’ve not yet announced”. So we just decided that that would be enough of a teaser to get people interested.

Just to note, you said “who Arm was” — what was Arm? You used the past tense there.
RH: Yeah, and I will say, we are still doing IP licensing, you can still buy CSSs [Compute Subsystem Platforms], so we are still offering all of the products we did before that day, plus chips. So I’m not yet just another chip CEO; I think I’m still very different from the other folks you talked to.

Actually, back up, give me the whole Rene Haas version of the history of Arm.

RH: Oh, my goodness gracious. The company was born out of a joint venture way back in the day between Acorn Computer and then ultimately Apple and VLSI to design a low-power CPU to power PDAs. The thing that was kind of important was, “I need something that is going to run in a plastic package” — you may remember back then just about everything was in ceramic — “I can’t melt the PDA, and oh, by the way, this thing’s got to run off a battery”. So they chose a RISC architecture, and that’s where the ARM ISA [instruction set architecture] was born and that’s what the first chip was intended to do, and the thing wasn’t very successful.

So fast forward, however: the founders and then a very, very important guy in Arm’s history, Robin Saxby, put out a goal to make the ARM ISA the global standard for CPUs. If you go back to the early 1990s, there were a lot of CPUs out there, and also there was not an IP business, there really wasn’t a very good fabless semiconductor model, and there was not a very good set of tools to develop SoCs [system on a chip]. So in some ways, and this is what I love about the company, it was a bit of a crazy idea, because you didn’t really have all the things in place necessary to go off and do that. But back then, there were a lot of companies designing their own CPUs, if you will, and the idea there being that ultimately this would be something that customers could be able to access, acquire, and build, and then ultimately build a standard upon it.
What was ultimately the killer design win for the company, and I know you’re a strategist and historian as well around this area, is the classic accidental example: TI was developing the baseband modem and applications processor for the Nokia GSM phone and they needed a microcontroller, something to kind of manage the overall process, and they stumbled across what we were doing, and we licensed them the IP. That was kind of the first killer license that got the company off the ground and that’s what really got us into mobile. People may think, “You were the heart of the smartphone and you had this premonition to design around iOS”, or, “You worked really closely in the early days of Android”. It was accidental; we found ourselves in the Nokia GSM phone, Symbian gets ported to ARM, and then there starts to be at least enough of a buzz around nascent software. But that’s how the company was born.

I did enjoy that for the keynote you had a bunch of different Arm devices running on the screen in the run-up, and my heart did do a little pitter-patter when the Nokia phones popped on. Another day, to be sure.

RH: Yeah, cool stuff, right? But that’s kind of how the company got off the ground. It was a general-purpose CPU, which meant we didn’t really have it designed for, “It’s going to be good at X”, or, “It’s going to be good at Y, it’s going to be good at Z”. It turned out that because it was low power, it was pretty good to run in a mobile application. I think the historic design win where the company took off was obviously the iPhone, and the precursor to the iPhone was the iPod, which was using a chipset from PortalPlayer that used the ARM7, while the Mac OS was all x86. Inside the company, it was Tony Fadell’s team arguing, “Let’s use this PortalPlayer architecture”, versus, “Do we go with Intel’s x86 and a derivative Atom”, back in the day, and once the decision was made that “We’re going to port to ARM for iOS”, that’s where the tailwind took off.
So is it making up too much history to go back and say, “The reason Arm was a joint venture to start is because people knew you needed to have an ecosystem and not be owned by any one company”, or whatever it might be? That’s being too cute about things — the reality is it was just stumbling around, barely surviving, and just fell backwards into this?

RH: Which, by the way, is kind of how the formula works for every good startup that’s really been successful. You stumble around in the dark, you find something you’re good at, and then you engage with a customer and you find what ultimately is sticky, and that’s really what happened with Arm.

When you consider the changes that you’ve made at Arm, and I want to get your description of the changes that you’ve made, how many of the challenges that you faced were based on legitimate market fears about, “We’re going to alienate customers” or whatever it might be, versus maybe more cultural values like, “We serve everyone”, versus almost a fear like, “This is just the market we’ve got, let’s hold on to it”?

RH: I think, Ben, we thought about it much more broadly. When I took over, and you and I met not long after that, there were a couple of things happening in the market in terms of a need to develop SoCs faster, a need to get to market more quickly, and we knew intuitively that no one knew how to combine 128 Arm cores together with a mesh network and have it perform better than we could, because that’s what we had to do to go off and verify the cores. So we knew that doing compute subsystems really mattered. But I came from a bit of a different belief: that if you own the ISA, at the end of the day, you are the platform, you are the compute platform, and it is incumbent upon you to think about how to have a closer connection between the hardware and the software. That is just table stakes.
I don’t think it’s anything new, if you think about what Steve Jobs thought about with Apple and everything we’ve seen with Microsoft, with Wintel. I felt with Arm, particularly not long after I started, in 2023 and 2024, this was only getting accelerated with AI. Because with AI, the models and innovation are moving way, way faster than the hardware could possibly keep up. I just felt for the company in the long term that this was a direction that we had to strongly consider, because if you are the ISA and you are the platform, the chip is not the product, the system is.

That’s the thing that I was sort of driving at when I was writing about your launch. There’s an aspect where you’ve made these big changes: you’re originally just the ISA, then you’re doing your own cores, not selling them, but you’re basically designing the cores, then you’re moving to these system-on-a-chip designs, and now you’re selling your own chips. But it feels like your portion of the overall “What is a computer?” has stayed fairly stable, actually, because “What is a computer?” is just becoming dramatically more expansive.

RH: I think that’s exactly right. Again, if you are a curator of the architecture and you are an owner of the ISA, as good as the performance-per-watt is, as interesting as the microarchitecture is, as cool as it is in terms of how you do branch prediction, the software ecosystem determines your destiny. And the software ecosystem for anyone building a platform needs to have a much closer relationship between hardware and software, simply in terms of just how fast you can bring features to market, how fast you can accelerate the ecosystem, and how you can move with the direction of travel in terms of how things are evolving.
You mentioned the big turning point, or biggest design win, was the iPhone way back in the day, and here is the way I’ve thought about Arm versus x86. You could make the case that ARM/RISC has been theoretically more efficient than CISC, and I’ve talked to Pat Gelsinger about how there was a big debate in Intel way back in the 80s about whether they should switch from CISC to RISC. He was on the side that won the argument, which said that by the time we port everything to RISC we could have just built a faster CISC chip that makes up all the difference, and that carried the day for a very long time. However, mobile required a total restart; you had to rebuild everything from scratch to deliver the power efficiency. And I guess the question is, you’ve had a similar dynamic for a long time about Arm in the data center: theoretically it’s better, you care about power efficiency, etc. Is there something now — is this an iPhone-type moment where there’s actually an opportunity for a total reset to get all the software rewritten that needs to be done? Or have companies like Amazon and Qualcomm, with whatever efforts they’ve made, paved the ground so that it’s not so stark of a change?

RH: It’s a combination of both. One of the big advantages we got with Amazon doing Graviton in 2019, and then subsequently the designs we had with Google, with Axion, and Microsoft, with Cobalt, is that it just really accelerated everything going on with cloud-native, and anything that moves to cloud-native has kind of started with ARM.

What do you mean by cloud-native?

RH: Cloud-native meaning these are applications that are starting from scratch to be ported to ARM. Built on a Linux distro, but not having to carry any super old legacy software, or run COBOL or something of that nature on-prem. So that was a huge benefit for us in terms of the go-forward.
Certainly we got a huge injection of growth when Nvidia went from the generation before Hopper, which I think was Volta or Pascal, I may be mixing up their versions, which was an x86 connect, to Grace. So when they went to Grace Hopper, then Grace Blackwell, and now Vera, the AI stack for the head node now starts to look like ARM. That helps a lot in terms of how the data center is organized, so we certainly got a benefit from that.

I think for us the penny-drop moment was, and it’s probably the 2018, ’19 timeframe, when Red Hat had production Linux distros for ARM. That really accelerated things in terms of the open source community, the uploads and things that made things a lot, a lot easier from the software standpoint.

Give me the timeline of this chip. When did you make the decision to build this chip? You can tell me now, when did this start?

RH: You know, it started with a CSS, right? And we were talking to Meta about the CSS implementation.

Right. And just for listeners, CSS is where you’re basically delivering the design for a whole system-on-a-chip sort of thing.

RH: Compute subsystem, yeah, so it’s the whole system on a chip. And by the way, it’s probably 95% of the IP that sits on a chip.

What does it not include?

RH: It doesn’t include the I/O, the PCIe controllers, the memory controllers, but it’s most of the IP.

And this is what undergirds — is Cobalt really the first real shipping CSS chip? Or does Graviton fall under this as well?

RH: Cobalt’s probably the first incarnation of using that. So Meta was looking at using that, and I think the discussions were taking place in the mid-2025 timeframe. Here’s the key thing, Ben: not that long ago.

Right. Well, that was my sense, that it was not that long ago, so I’m glad to hear that confirmed.

RH: Not that long ago.
Because CSS takes you a lot of the way there. So in that discussion, around the 2025 timeframe, we were going back and forth on, “Are you licensing CSS?”, versus, “Could you build something for us?”, and we had been musing about, “Is this the right thing for us to do from a strategy standpoint?”, and how we thought about it. But ultimately it came down to Meta saying, “We really want you to do this for us, we think this is going to be the best way to accelerate time to market and give us a chip that’s performant and on the schedule that we need”. So somewhere in the 2025-ish timeframe, we agreed that, yes, we’ll do this for you.

Why did Meta want you to do it instead of finishing it off themselves?

RH: I think they just did the ROI, in terms of, “I’ve got a lot of people working on things like MTIA, I’ve got a whole bunch of different projects internally, is it better that you do it versus we do it?”

“How much can we actually differentiate a CPU?”

RH: Yeah, and by the way, that is ultimately what it comes down to at some point in time. And the fact is that the first one that came back works, it’s going to be able to go into production, and it’s ready to go. I’m not going to say they were shocked, but we kind of knew that was going to happen, because we knew how to do this stuff and the products were highly performant and tested in the CSS. So it happened fast, is the short answer.

So if we talk about Arm crossing the Rubicon, was it actually not you selling this chip, but when you did CSS?

RH: One could say that that was a big step. When we started talking about doing CSSs, let me step back, we made a decision to do CSSs—

Explain CSSs and that decision, because I think that’s actually quite interesting.

RH: What is a CSS? It’s a compute subsystem. It takes all of the blocks of IP that we sold individually and puts them together in a fully configured, verified, performant deliverable that we can just hand to the customer, and they can go off and complete the SoC.
Some customers have told us it saves a year, some say a year-and-a-half, and this is really around the test and verification flow. One of the examples I gave is a little cheeky, but it kind of worked during the roadshow when we were trying to explain to investors, “What’s IP, what’s a CSS?”. I said: go to the Lego store, and you’ve got a bin of Legos, yellow Legos, red Legos, blue Legos. Trying to buy all those Legos and build the Statue of Liberty is a pain. Or you can go over to the boxes where it’s the Statue of Liberty and just put those pieces together, and the Statue of Liberty is going to look beautiful. That is what the CSS was.

I just want to jump in on that, because I was actually thinking about this. The Lego block concept is a common one that’s used when talking about semiconductors, but I remember being back in business school, and this was 2010, somewhere around then, and one of the case studies that we did was actually Lego. The case study was the thought process of Lego deciding whether or not to pursue IP licensing as opposed to sticking with their traditional model, and all these trade-offs about, “We’re going to change our market”, “We’re going to lose what Lego is”, the creativity aspect, “It’s going to become these set pieces”. I just thought about that in this context, where I came down very firmly on the side of, “Of course they should do this IP licensing”, but the counter was this sort of traditionalist argument, which is kind of true. Legos today are kind of like toys for adults to a certain extent, and you build it once, reading directions, and you think back to when I was a kid and you had all the Legos and it was just your creativity and your imagination, and I’m like, “Maybe this analogy with Arm is actually more apt than it seems”.
There’s a very romantic notion of IP licensing, you go out and make new things, “We got this for you”, versus, “No, we’re just giving you the whole chip”, or in the case of CSS, to your point, you could go get the Statue of Liberty, don’t even bother building it yourself.

RH: And I think I came across this in the early days. In the 1990s, I was working in ASIC design at Compaq Computer, and they were doing all their ASICs for Northbridge, Southbridge, VGA controllers, and this is when the whole chipset industry took off. And I remember one of the senior guys at Compaq explaining why they were doing this. He said, “I’m all about differentiation, but there needs to be a difference”.

And to some extent, that’s a little bit of this, right? You can spend all the time building it, but if it’s all built and you spent all this time and it’s not functionally different or more performant, you just spent time. Well, if you’re playing around with Legos and you’ve got all day, that’s fine, but if you’re running a business and you’re trying to get products out quickly, then time is everything, and that’s really what CSS did. It kind of established to folks that, “My gosh, I can save a lot of time on the work I was doing that was not highly differentiated”, and in fact, in some cases, it was undifferentiated, because we could get to a solution faster in such a way that it was much more performant than what folks might be trying to get to in the last mile.

So when we started talking about this to investors back in 2023 during the roadshow, their first question was, “Aren’t you going to be competing with your customers?”, and, “Isn’t this what your customers do?”, and, “Aren’t they going to be annoyed by it?”, and my answer was, “If it provides them benefit, they’ll buy it; if it does not provide a benefit, they won’t buy it”. That’s it.
And what we found is a lot of people are taking it, even in mobile, where what we were told was, “No, no, these are the black belts and they’re going to grind out the last mile and you can’t really add a lot of value” — we’ve done a bunch in the mobile space, too.

So with Meta, was the deal like, “Okay, we’ll do the whole thing for you, but then we get to sell it to everyone”, and they’re like, “That’s fine, we don’t care, it doesn’t matter”?

RH: Yes, exactly. We said, “If we’re going to do this, how do you feel about us selling it to other customers?”, and they said, “We’re fine with that”.

When did you realize that the CPU was going to be critical to AI?

RH: Oh, I think we always thought it was. I had a cheeky little slide in the keynote about the demise of the CPU, and I had to spend a lot of time on it.

I mean, I don’t know, I might have talked to someone recently who I swear was pretty adamant that a lot of CPUs should be replaced with GPUs, and now they’re selling CPUs, too.

RH: I had to talk to investors and media to explain to them why a CPU was even needed. They were a little bit like, “Can’t the GPU run by itself?”, as if it’s a kite that doesn’t need anything to hang on to. First off, on table stakes, obviously you need it in the data center, but particularly as AI moves into smaller form factors, physical AI, the edge, you obviously have to have a CPU because you’re running a display, you have I/O, you have human interface. It’s: how do you add accelerated AI onto the CPU? So yeah, I think we kind of always knew it was going to be there, and that there was going to be continued demand for it.

Right, but there’s a difference here. Everyone on the edge is going to have a CPU, so we can layer on some AI capabilities; it doesn’t have the power envelope or the cost structure to support a dedicated GPU. That’s fair, that’s all correct.
It’s also correct that, to your point, a GPU needs a CPU to manage its scheduling and its I/O and all those sorts of things. But what I’m asking about specifically is this: we’re going to have these agentic workflows, where everything the agent does is a CPU task, so it’s not just that we will continue to need CPUs; we might actually need an astronomically larger number of CPUs. Was that part of your thesis all along?

RH: I think we have instinctively thought that to be the case. And what drives that? The sheer generation of tokens, tokens by the pound, tokens by the dump truck, if you will. The more tokens that the accelerators are generating, whether that’s driven by agentic input, human input, whatever the input is, the more tokens that are generated, those tokens have to be distributed. And the distribution of those tokens, how they are managed, how they are orchestrated, how they are scheduled, that is purely a CPU task.

So we kind of intuitively felt that over time, as these data centers go from hundreds of megawatts to gigawatts, you are going to need, at a minimum, CPUs that have more cores, period. There was this belief that 64 cores might be enough and maybe 128 cores would be the limit; Graviton 5 is 192 cores, the Arm AGI CPU is 136. We were already starting to see core counts go up, and we started thinking about, “What’s driving all these core counts going up? Is it agentic AI?”. A proxy for it was just sheer tokens being generated in a larger fashion that needed to be distributed in a fast way, and layered onto that were things like Codex, where latency matters, performance matters, delivering the token at speed matters. So I think all of that was bringing us to a place where we thought, “Yeah, you know what?”, we’re seeing this core count thing really starting to go up, and we were seeing that about a year ago, Ben. So am I surprised that CPU demand is exploding the way it is? Not really.
Agentic AI, just the acceleration of how these agents have been launched, certainly is another tailwind kicker. Which happens to line up with your mid-2025 decision that, “Maybe we should sell CPUs”. RH: Yeah, it all kind of lines up. We were seeing that, you know what, we think this is going to be a potentially really, really large market where not only does core count matter, but efficiency matters, because we could imagine a world where each one of these cores is running an agent or a hypervisor, and the number of cores can really, really matter in the system, which fed into what we were thinking in terms of, “Okay, we can see a path here in terms of where things are going”. So CSSes with greater than 128 cores in the implementation? Absolutely. Do I think, could I see 256? Absolutely. Could I see 512? Possibly. I think then it comes down to the memory subsystem, how you keep them fed, etc., but yeah, so short answer, about a year ago we started seeing this. Do you think that core count is going to be most important, or is it going to be performance-per-core? RH: I think core count is going to be quite important because, again, I have a belief that each one of these cores will want to potentially run its own agent, launch a hypervisor job, launch a job that can be run independently: launch it, get the work done, go to sleep. The performance of the core is going to matter, no doubt about it, but I think the efficiency of that core is probably going to matter just as much as the performance does. Well, the reason I ask is because you talked a lot in this presentation about the efficiency advantage, the company born from a battery or whatever your phrase was, and that certainly, I think, rings true, particularly in isolation. But in a large data center, if the biggest cost is the GPUs, then isn’t it more important to keep the GPUs fed?
Which is basically to say, is a chip’s capability to feed GPUs actually more important on a systemic level than the chip’s efficiency on its own? RH: I’m going to plead the fifth and say yes to both. You’ve got to pick one! RH: Well, what’s important? I think the design choice that Nvidia made with Vera was very important. Vera is designed to feed Rubin; it has a very specific interface, NVLink Fusion or NVLink chip-to-chip, that provides a blazing fast interface, and it has the right number of cores to keep that GPU fed optimally. But at the same time, is it the right configuration in a general-purpose application where you want to run an air-cooled rack in the same data hall? Think about a data hall where you might have a Vera Rubin liquid-cooled rack sitting right next to a liquid-cooled Vera rack, but somewhere else inside the data center you’ve got room for multiple air-cooled racks. That space that you may not have used in the past for CPU, you want to use now because of the problem statement that I just gave. So I actually think it’s a “both” world, which is why when people ask me, “Oh my gosh, aren’t you competing with Nvidia Vera, and aren’t people going to get confused?” — not particularly, I think there’s ample space for both. So you feel like Nvidia might be selling standalone Vera racks, but that’s not necessarily what Vera was designed for; that’s what you’re designed for, and you think that’s where you’re going to be different. RH: Yes, and I mean, if you look at what’s been announced so far from Nvidia, they announced a giant 256-CPU liquid-cooled rack, and the first implementation that we’re doing with Meta is a much smaller air-cooled rack. So very, very different right off the get-go. But you will have a liquid-cooled option? RH: If customers want that, we can do that too. I think that differentiation makes sense. Well, speaking of differentiation, why ARM versus x86? Why is there an opportunity here? RH: Performance-per-watt, period.
Graviton sort of started it, and they’ve been very public about their 40% to 50%; Cobalt stated the same with Microsoft, Axion with Google, Nvidia has stated the same. Just on table stakes, 2x performance-per-watt is pretty undeniable. And that, I think, is where it starts as probably the primary value proposition. What is x86 still better at? You can’t say legacy software, other than legacy software. RH: Go back to the earlier part of our conversation, right? The ISA, what is the value of the ISA? It is the software that it runs, right? It is the software that it runs. So if you were to look at where x86 has a stronghold, x86 is very good at legacy on-prem software. Okay, fine, we’ll give you legacy on-prem software, and I think part of the thesis here, to your point, is that a lot of this agentic work is on Linux, it’s using containers, it’s all relatively new, it all by and large works well on ARM already. But you did have a bit in the presentation where you interviewed a guy from Meta that was about porting software. How much work still needs to be done there? RH: There’s a delta between the porting work and the optimization work. Graviton, what Amazon will tell you, is that greater than 50% of their new deployments, and accelerating, are ARM-based. And, yes, am I the CEO of Arm and do I have a biased opinion? Of course. But on a clean sheet design, if you were starting from scratch and the software porting was done and you had either cloud-native or the application space was established, or as a head node, I don’t know why you’d start with x86. What about, why are you doing ARM? We did ARM versus x86, I’m sort of working my way down the chain here — actually, I did it backwards, we stuck in Vera already — but why you versus custom silicon generally? You talked about Amazon. Why do you need to do the whole thing? RH: So let’s think about an Amazon, for example. Amazon does Graviton. Would I like Amazon to buy the Arm AGI CPU? Yes.
Am I going to be heartbroken if they never buy one? No, I’m perfectly fine if they stay building what they’re building. Are they ever going to buy one? No. RH: I hope they do! But if they don’t, it’s not going to be the end of the world. SAP — SAP runs a lot of software on Amazon, they run SAP HANA on Amazon, they also have a desire to do stuff on-prem, and if they’re doing something on-prem in a smaller space and they’re looking to leverage that work, they’d love to have something that is ARM-based. Prior to us doing this product, there was no option at all, right? So that’s a very, very good example. Similar with a Cloudflare. Is Cloudflare going to do their own implementation? Likely not. Do they run on other people’s clouds? Sure, they do. Do they have an application that could be on-prem running on ARM? Absolutely. So we think that, and I don’t want to prefetch this, Ben, but we had a lot of questions from folks like, “Amazon won’t buy from you”, “Google won’t buy from you”, “Microsoft won’t buy from you”, because you’re competing with them. And we say, well, Google builds TPUs, yet they buy a lot of Nvidia GPUs, so it’s not so binary. That’s true. They’ll buy what their customers ask them to buy. RH: 100%. And if we solve a problem with an implementation that theirs does not, they’ll buy it, and if we don’t, they won’t. Just, you know, between you and me, is the only custom silicon that is truly potentially competitive Qualcomm, and you’re just not too worried about making them mad? RH: This is off the record here? (laughing) I didn’t say off the record. RH: Qualcomm, it’s funny, I had a question at the investor conference about competing with Nvidia. And I said, you know, a month ago, no one would have asked about any Arm person competing with anybody. So it’s wonderful to have these kinds of conversations; the market is underserved and there aren’t choices.
There isn’t a product from Qualcomm, there isn’t a product from MediaTek, there isn’t a product from Infineon, there just isn’t. Is that sort of your case? If there were a bunch of options in the market, would you still be entering? RH: We entered this because Meta asked us to, and because Meta asked us to, we did. So if I was to answer your question, would we have entered if those other four or five hypothetical guys were there? I don’t know that Meta would have asked us. The Arm AGI CPU is being built on TSMC’s 3-nm node, which is kind of impossible to get allocation for. How’d you get allocation? If you started this in 2025, how’d you pull that off? RH: We’re working through a back-end ASIC partner that helps secure the allocation for us. Oh, interesting. Are you concerned about that in the long run? Like, this business blows up and actually you just can’t make enough chips? RH: I’m probably less worried about that at the moment than I am about memory. I think that the demand is very, very high actually for the chip, Ben, and through our partner we’re able to secure upside through TSMC; that has not been a problem. But memory is quite challenging, and I think if there’s any limit to how big this business can get, that’s it. I would say that what we provided to investors as a financial forecast is based upon the capacity we’ve secured on both memory and logic, but if there was more memory, could we sell more? Yes. This is sort of the sweet spot though of making predictions, everyone gets to say, “Wow, how are your predictions so accurate?”, and it’s like, “Well, it’s because I knew exactly how much I would be able to make”. RH: Yeah, if there was more memory we’d be even more aggressive on the numbers. How did you make the memory decisions that you did, in terms of memory bandwidth and all those sorts of pieces, particularly given the short timeline on which you made them?
That wasn’t necessarily part of the CSS spec before, so how were you thinking about that? RH: The things we kind of looked at were, we sort of started with LP versus standard DRAM. Because Vera’s doing LP and you decided to do standard. RH: We’re doing standard DRAM, yeah. We thought we’d be a little bit better on the cost side, which could help, and at the same time a little bit better on the capacity side. So it really kind of drove down to, we’re going to solve for capacity, because we thought that might matter in a more generalized application space to give the broader width of use, which then brought us to standard DDR versus LP. I think the reason we talked last time was in the context of you making a deal with Intel to get Arm working on 18A, and this was going to be a multi-generational partnership. What happened to that? Is that still around? RH: It’s still around. We did a lot of work on 18A because we felt that it was going to be really, really important that, if someone wanted to build on Intel 18A, the Arm IP was available. So we did our part relative to if someone wants to go build an ARM-based SoC on Intel process, but that unfortunately hasn’t come to pass just yet. It’s interesting you mentioned that you’re actually not worried about TSMC capacity but you are worried about memory — I didn’t fully think through that being another headwind for Intel, where they could really use TSMC having insufficient capacity to help them, but if memory is the first constraint then no one’s even getting there. RH: First off, obviously HBM [high bandwidth memory] is such a capacity hog, and then you have people moving from LP into HBM at the memory guys, and then, compounding on it, all of the explosion of CPU demand drives up memory demand. So it all kind of adds on to itself, which makes the memory problem pretty acute. What exactly is in the bill of materials that you’re selling?
You showed racks, but you mentioned a partnership with Super Micro, for example — if I buy a chip from Arm, what exactly am I buying? You’ve mentioned memory obviously, so what else is in that? And what are you getting from partners? RH: Yeah, so we’ll send you a voucher code after the show, and you can place your orders. Just the SoCs. If you need to secure the memory, that’s on you; we’re not securing memory at this point in time. We did a lot of work with Super Micro, with Lenovo, with ASRock. So there’s a full 1U, 2U server blade reference architecture, so the full BOM relative to all the passives and everything you need from an interconnect standpoint is all there. There’s a full BOM which, as we mentioned in the session, physically complies with OCP standards at the rack level, and then we’ve done all the work in terms of the reference design. So we can provide the full BOM of the reference platform, including memory, but what we are selling is only the SoC. Very nerdy question here, but how are you going to report this from an accounting perspective? Just right off the top, chips have a very different margin profile — is this going to all be broken out? How are you thinking about that? RH: We’ll probably do that. Today we break down licensing and royalty of the IP business; we’ll probably break out chips as a separate revenue stream. To go back to it, you did call this event Arm Everywhere — will you ever sell a smartphone chip? RH: I don’t know, that’s a really hard question. I think we’re going to look at areas where we think we could add significant value to a market that’s underserved; that market’s pretty well served. It’s very well served, and this agentic AI, potentially a new market, fresh software stack, makes sense to me. What risks are you worried about with this? You come across as very confident, “This is very obviously what we should do” — how does this go wrong?
RH: Most of my career has been spent actually in companies that have chips as their end business, as opposed to IP. I’ve been at Arm 12 years, 13 years, I’ve been the CEO for about four-and-a-half. I did a couple of years, actually longer, five years, at a company called Tensilica, but most of my career was either NEC Semiconductor, Texas Instruments, Nvidia. The chip business is not easy, right? You introduce a whole new set of characteristics. You have to introduce this term called “inventory” to your company. RH: RMAs, inventory, customer field failures, just a whole catalog of things that’s very new for our company; there certainly is execution risk that we’ve added that has not existed before. We had a 35-year machine being built that is incredibly good at delivering world-class IP to customers — doing chips is a whole different deal. I don’t want to minimize that, but at the same time, I don’t want to communicate that that’s something we haven’t thought about deeply over the years, and we’ve got a lot of people who have done that work inside the company. A lot of my senior executive team is ex-Broadcom, ex-Marvell, ex-Nvidia; we’ve got a lot of people inside the engineering organization who have come from that world, and we’ve built up an operations team to go off and support that. So while there is risk, we’ve been taking a lot of steps inside the company to be adding the resources. We’ve been increasing our OpEx quite a bit in the quarters leading up to this, about 25% year-on-year; investors were asking a ton of questions about, “When are we going to see why you’re adding all those people?”, and Arm Everywhere explained that. We also told investors that that’s now going to taper off because we’ve got, we think, what we need to go off and execute on all this. But I think that’s the biggest thing, Ben. And the upside is just absolute revenue dollars, I guess absolute profit dollars.
RH: I think there’s a financial upside, certainly, in terms of financial dollars. But I think back to the platform: by being closer to the hardware and the software and the systems, we can develop even better products around IP, CSS, etc., because I think when you are the compute platform, it is incumbent upon you to have as close a relationship as you can with the software that’s developed on your platform. What’s the state of the business in China these days, by the way? RH: China still represents probably 15% of our revenue, we still have a joint venture in China, and the majority of our business is royalties; royalties is much bigger than licensing in China. We still have a lot of design wins coming in the mobile space from people doing their own SoCs, like a Xiaomi. The hyperscaler market is strong between Alibaba, ByteDance, Tencent, and then most of the robotics and EV guys are doing stuff based on ARM, whether it’s XPeng, BYD, Horizon Robotics. So our business is pretty healthy in China. You do have the Immortalis and Mali GPUs. Are those good at AI? RH: Yes, they can be very good. We’ve added a lot of things to our GPUs around what we call neural graphics, so this is adding essentially a convolution and vector engine that can help with AI. Right now the focus has been really more around AI in a graphics application, whether it’s around things like DLSS and things in that area, but we’ve got a lot of ingredients in those GPUs. So we should stay tuned, sounds very interesting. You did have one moment in the presentation that was a little weird, where you were trying to say that this AI thing is definitely a real thing, but you’re like, “Well, it might be a financial bubble, but the AI is real”. Are you worried about all this money that is going into this that you’re making a play for a piece of — is there some consternation in that regard?
RH: No, what I was trying to indicate was that when people talk about bubbles, typically it’s either valuation bubbles or investment bubbles. The valuation bubbles, those come and go over time. The investment bubble, I’m not as worried about in the sense of, “Is there going to be real ROI on the investment being made?”. I actually worry more about the, “Can you get all the stuff required to build out all of the scale?” — we just talked about memory, there’s TSMC capacity. I think the memory will be solved; they will ultimately not be able to help themselves, they will build more capacity. I’m worried about leading edge. TSMC will help themselves if they don’t have any challengers. RH: Turbines, right? You’ve got companies like GE Vernova or Mitsubishi, where it’s not their world to build factories well ahead to go serve an extra 5 to 10 gigawatts of power. So I think TSMC is super disciplined, and they’ve been world class at that throughout their history. Will the memory guys be able to help themselves? The numbers are now so large that even at the SanDisks of the world, in storage, everything has kind of gotten bananas, and that is a concern: if just one of those key components of the supply chain blinks and decides not to invest to provide the capacity, then things kind of slow down. But the numbers, Ben, the numbers we’re talking about are numbers we’ve never seen before. $200 billion CapEx from an Amazon or $200 billion CapEx from a Google. And then you have companies like Anthropic talking about $6 billion revenue increases over a three-to-four month period, which are the size of some software companies. So we are at some very stratospheric levels in terms of spend. Would I be surprised if there was a pause in something just as people calibrate? Yeah, I wouldn’t be surprised at all. But if I think about the 5 to 10-year trajectory, there’s no way you can say this is a bubble.
If you said, “I think machines that can think as well as humans and make us more productive, that’s kind of a fad”, I don’t actually think that’s going to happen; it’s almost nonsensical. Just to sort of go full circle, you’ve been on the edge, and now this new product that gets the Arm Everywhere moniker is about being in the data center — is the edge dead? Or if not dead, are we in a fundamental shift where the most important compute is going to be in data centers, or is there a bit where AI is real but it actually does leave the data center and go to the edge, and that’s a bigger challenge? RH: I think until something is invented that is different than the transformer, and we’re talking about some very different model as to how AI is trained and inferred, then we’re looking at a lot of compute in the data center and some level of compute on the edge. I think if you just suspend disbelief for a second and we say, you know what, the transformer is it, and that’s what the world looks like for the next 5 to 10 years, the edge is not going to be dead. The edge is going to have to run some level of native compute for whatever the thing has to do, and it’s going to run some AI acceleration, of course. But is everything going to happen in your pocket? No. I mean, that’s not going to happen. I’ve come down to that side too. I think in the fullness of time, at least for now, the thin client model looks like it’s going to be it. I guess that seems to be your case as well, because you had a big event, and it is for a data center CPU. Arm is Everywhere, but not everyone can buy it. RH: And power efficiency was a nice-to-have in the data center, but I would say it wasn’t existential. It is now, though.
And I’d say that’s another big change because, again, one of the examples I gave: if you’re 4x-ing or 5x-ing or 6x-ing the CPUs in a given data center and you don’t want to give up one ounce of GPU accelerator power, then you’re going to squeeze everywhere you can, and that, I think, is a thing that’s in our favor. Where’s Arm in 10 years? RH: I would like Arm to be thought of as one of the most important semiconductor companies on the planet. We’re not there yet, but that’s how I would like the company to be thought about. Rene Haas, congratulations, great to talk. RH: Thank you, Ben. This Daily Update Interview is also available as a podcast. To receive it in your podcast player, visit Stratechery. The Daily Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly. Thanks for being a supporter, and have a great day!

Lonami 3 weeks ago

Ditching GitHub

AI. AI AI AI. Artificial "Intelligence". Large Language Models. Well, they sure are large, I'll give them that. This isn't quite how I was hoping to write a new blog post after years of not touching the site, but I guess it's what we're going with. To make it very clear: none of the text, code, images or any other output I produce is AI-written or AI-assisted. I also refuse to acknowledge that AI is even a thing by adding a disclaimer to all my posts saying that I do not use it. But this post is titled "Ditching GitHub", so let's address that first. Millions of developers and businesses call GitHub home. And that's probably not a good thing. I myself am guilty of searching "<project> github" in DuckDuckGo many a time when I want to find open-source projects. I'll probably keep doing it, too, because that's what search engines understand. So, GitHub. According to their API, I joined the first day of 2014 after noon (seriously, did I not have anything better to do on new year's? And how is that over twelve years ago already‽). Back then, I was fairly into C# programming on Windows. It seems I felt fairly comfortable with my code already, and was willing to let other people see and use it. That was after I had been dabbling with Visual Basic Scripts, which in turn was after console batch scripting. I also tried Visual Basic before C#, but as a programming noob, with few-to-no programming terms learnt, I found the whole and quite strange ↪1 . Regardless of the language, telling the computer to do things and have it obey you was pretty cool! Even more so if those things had a visual interface. So let's show others what cool things we could pull off! During that same year, I also started using Telegram. Such a refreshing application this used to be. Hey, wouldn't it be cool if you could automate Telegram itself ? Let's search to see if other people have made something to use that from C#. Turns out TLSharp did in fact exist!
The repository seems to be archived now, in favor of WTelegramClient . I tried to contribute to it. I remember being excited to have a working code generator that could be used to automatically update the types and functions that the library had to offer, based on the most recent definitions provided by Telegram (at least indirectly, via their own open-source repositories). Unfortunately, I had some friction with the maintainer back then. Perhaps it was a misunderstanding, or I was too young, naive, or just couldn't get my point across. That didn't discourage me though ↪2 . Instead, I took it upon myself to reimplement the library. Back then, Telegram's lack of documentation on the protocol made it quite the headache (literally, and not just once) to get it working. Despite that, I persevered, and was able to slowly make progress. Fast-forward a bit ↪3 , still young and with plenty of time on my hands, one day I decided I wanted to try this whole Linux thing. But C# felt like it was mostly a Windows thing. Let's see, what other languages are there that are commonplace in Linux… " Python " huh? Looks pretty neat, let's give it a shot! Being the imaginative person I am, I obviously decided to call my new project a mix between Telegram and Python. Thus, Telethon was born ↪4 . Ah, GitHub stars. Quite the meaningless metric, considering they can be bought, and yet… there's something about them. I can't help myself. I like internet points. They make me feel like there are other people out there who, just like me, have a love for the craft, and share it with this small gesture. I never intended for Telethon to become as popular as it has. I attribute its success to a mix of luck, creating it at the right time, choice of popular programming language, and lack of many other options back then. And of course, the ridiculous amount of time, care and patience I have put (and continue to put) into the project out of my own volition.
Downloads are not a metric I've cared to look at much. But then came support questions. A steady growth of stars. Bug reports. Feature requests. Pull requests. Small donations! And heartfelt thank-you emails or messages. Each showing that people like it enough to spend their time on it, and some even like it enough that they want to see it become better, or take the time to show their appreciation. This… this feels nice, actually. Sure, it's not perfect. There will always be an idiot who thinks you owe them even more time ↪5 . Because the gift of open-source you've given the world is not enough. But that's okay. I've had a bit of an arc in how I've dealt with issues: from excited, to tired and quite frankly pretty rude at times (sorry! Perhaps it was burn-out?), to now, where I try to first and foremost remain polite, even if my responses can feel cold or blunt. There are real human beings behind the screens. Let's not forget that. Telethon is closing in on twelve thousand stars on GitHub ↪6 . I don't know how many are bots, or how many still use GitHub at all, but that's a really darn impressive number. cpython itself is at seventy-two thousand! We're talking the same order of magnitude here. So I am well aware that such a project makes for quite the impressive portfolio. There's no denying that. We don't have infinite time to carefully audit all the dependencies we rely on, as much as we should. So clearly, a bigger star number must mean a better project, or something like that. To an extent, it does, even if subconsciously. Unfortunately for me, that means I can't quite fully ditch GitHub. Not only would I be contributing to link-rot, but the vast majority of projects are still hosted there. So whether I like it or not, I'm going to have to keep my account if I want to retain my access to help out other projects. And, yes. Losing that amount of stars would suck. But wow, has the platform gotten worse.
Barely a screen into GitHub's landing page while not logged in, there it is. The first mention of AI. Scroll a bit further, and… Your AI partner everywhere. They're not wrong. It is everywhere. AI continues to be shoved so hard in so many places . Every time I'm reading a blog post and there's even the slightest mention of AI, or someone points it out in the comments, my heart sinks a little. "Aw, I was really enjoying reading this. Too bad." ↪7 It doesn't help that I'm quite bad at picking up the tell-tale signs of AI-written text ↪8 . So it hurts even more when I find out. AI used to be a fun topic. Learning how to make self-improving genetic algorithms, or basic neural networks to recognize digits . For pity's sake, even I have written about AI before . I used to be fascinated by @carykh's YouTube videos about their Evolution Simulator . It was so cool ! And now I feel so disgusted by the current situation. Remember when I said I was proud of having a working code generator for TLSharp? Shouldn't I be happy LLMs have commoditized that aspect? No, not at all. Learning is the point . Tearing apart the black boxes that computers seem to be. This code thing? It's actually within your grasp with some effort. Linux itself, programming languages. They're not magic, despite some programmers being absolute wizards. You can understand it too. Now? Oh, just tell the machine what you want in prose. It will do something. Something . That's terrifying. "But there's this fun trick where you can ask the AI to be a professional engineer with many years of experience and it will produce better code!" I uh… What? Oh, is that how we're supposed to interact with them. Swaying the statistical process in a more favourable direction. Yikes. This does not inspire any confidence at all. Time and time again I see mentions of how AI-written code introduces bugs in very subtle ways. In ways that a human wouldn't, which also makes them harder to catch.
I don't want to review the ridiculous amount of code that LLMs produce. I want to be the one writing the code. Writing the code is the fun part . Figuring out the solution comes before that, and, along with experimentation, takes the longest. But once the code you've written behaves the way you wanted, that's the payoff. There is no joy in having a machine guess some code that may very well do something completely different the next time you prompt it the same. As others have put it very eloquently before me, LLM-written text is "a cognitive DoS ". It's spam. It destroys trust. I don't want to read an amalgamation of code or answers from the collective internet. I want to know people's thoughts. So please, respect my time, or I'll make that choice myself by disengaging with the content. Embrace AI or get out -- GitHub's CEO Out we go then. If not GitHub, where to go? GitHub Pages makes it extremely easy to push some static HTML and CSS and make it available everywhere reliably, despite the overall GitHub status dropping below 90% on what feels like every day. I would need to host my website(s) somewhere else. Should I do the same with my code? I still enjoy being part of the open source community. I don't want to just shut it all down, although that's a fate others have gone through . Many projects larger than mine struggle with 'draining and demoralizing' AI slop submissions , and not just of code . I have, thankfully, been able to stay out of that for the most part. Others have not . I thought about it. Unfortunately, another common recurring theme is how often AI crawlers beat the shit out of servers, with zero respect for any sensible limits. Frankly, that's not a problem I'm interested in dealing with. I mean, why else would people feel the need to be Goofing on Meta's AI Crawler otherwise? Because what else can you do when you get 270,000 URLs crawled in a day. Enter Codeberg . A registered non-profit association.
Kord Extensions did it , Zig did it , and I'm sure many others have and will continue to do it. I obviously don't want this to end in another monopoly. There are alternatives, such as SourceHut , which I also have huge respect for. But I had to make a choice, and Codeberg was that choice. With the experience from the migration, which was quite straightforward ↪9 , jumping ship again should I need to doesn't seem as daunting anymore. Codeberg's stance on AI and Crawling is something I align with, and they take measures to defend against it. So far, I'm satisfied with my choice, and the interface feels so much snappier than GitHub's current one too! But crawling is far from the only issue I have with AI. They will extract as much value from you as possible, whether you like it or not. They will control every bit that they can of your thoughts. Who are "they"? Well, the watchers: how openai, the US government, and persona built an identity surveillance machine that files reports on you to the feds . Putting aside the wonderful experience that the site's design provides (maybe I should borrow that starry background…), the contents are concerning . So I feel very validated in the fact that I've never made an attempt to use any of the services all these companies are trying to sell me. I don't want to use them even if I got paid . Please stay away, Microslop . But whether I like it or not, we are, unfortunately, very much paying for it. So Hold on to Your Hardware . Allow me to quote a part from the article: Q1 hasn't even ended and a major hard drive manufacturer has zero remaining capacity for the year So yeah. It's important to own your hardware. And I would suggest you own your code, too. Don't let them take that away from you. Now, I'm not quite at the point where I'm hosting everything I do from my own home, and I really hope it doesn't have to come to that.
But there is comfort in paying for a service, such as renting a server to host this very site ↪10 , knowing that you are not the product (or, at least, whoever is offering the paid service has an incentive not to make you one.) Some people pair the move from GitHub to Codeberg with statichost.eu . But just how bad can hosting something yourself get, anyway? Judging by the amount of people that are Messing with bots , it indeed seems there are plenty of websites that want to keep LLM crawlers at bay, with a multitude of approaches like Blocking LLM crawlers, without JavaScript or the popular Anubis . If I were to self-host my forge, I would probably be Guarding My Git Forge Against AI Scrapers out of need too. Regardless of the choice, let's say we're happy with the measures in place to keep crawlers busy being fed garbage. Are we done? We're protected against slop now, right? No, because they're doing the same. To those that vibecode entire projects and don't disclose they're done with AI: your project sucks . And it's in your browser too. Even though I think nobody wants AI in Firefox, Mozilla . Because I don't care how well your "AI" works . And No, Cloudflare's Matrix server isn't an earnest project either. If that's how well AIs can do, I remain unimpressed. I haven't even mentioned the impact all these models have on jobs either ↪11 ! Cozy projects aren't safe either. WigglyPaint also suffers from low quality slop redistribution. "LLMs enable source code laundering" and frequently make mistakes. I Am An AI Hater . That's why we see forks stripping AI out, with projects like A code editor for humanoid apes and grumpy toads as a fork of Zed. While I am really happy to see that there are more and more projects adopting policies against AI submissions , all other fronts seem to just keep getting worse.
To quote more comments , AI systems cause environmental harms , reinforce bias , generate racist output , cause cognitive harms , support suicides , amplify numerous problems around consent and copyright , enable fraud , disinformation , harassment and surveillance and exploit and fire workers. Utter disrespect for community-maintained spaces. Source code laundering. Questionable ties to governments. Extreme waste of compute and finite resources. Exacerbating already-existing problems. I'm not alone in thinking this . Are we expected to use AI to keep up? This is A Horrible Conclusion . Yeah. I don't want to have anything to do with it. I hope the post at least made some sense. There are so many citations that it's hard to tie them together neatly. Who knows, maybe one day I'll be forced to work at a local bakery and code only in my free time with how things are going. 1 I get them now. Though I prefer the terseness of no- or .  ↩ 2 I like to think I'm quite pragmatic, and frankly, I've learnt to brush off a lot of things. Having thick skin has proven to be quite useful on the internet.  ↩ 3 I kept working on C# GUI programs and toyed around with making more game-y things, with Processing using Java, which also naturally lent itself to making GUI applications for Android. These aren't quite as relevant to the story though (while both Stringlate and Klooni had/have seen some success, it's not nearly as much.)  ↩ 4 My project-naming skills haven't improved.  ↩ 5 Those are the good ones. There are worse , and then there is far worse. Stay safe.  ↩ 6 And for some reason I also have 740 followers? I have no idea what that feature does.  ↩ 7 Quite ironic… If you're one of those that also closes the tab when they see AI being mentioned, thanks for sticking by. I'm using this post to vent and let it all out. It would be awkward to address the topic otherwise, though I did think about trying to do it that way.  
↩ 8 As much as I try to avoid engaging with it, I'm afraid I'll eventually be forced to learn those patterns one way or another.  ↩ 9 I chose not to use the import features to bring over everything from GitHub. I saw this as an opportunity to start clean, and it's also just easier to not have to worry about the ownership of other people's contributions to issues if they remain the sole owner at their original place in GitHub.  ↩ 10 I have other things I host here, so I find it useful to rent a VPS rather than simply paying for a static file host. Hosting browsable Git repositories seems like an entirely different beast to hosting static sites though, hence the choice of using Codeberg for code. If all commits and all files are reachable, crawlers are going to have fun with that one.  ↩ 11 Even on my current job the company has enabled automatic Copilot code-reviews for every pull request. I can't disable them, and I feel bad opening PRs knowing that I am wasting compute on pointless bot comments. It just feels like an expensive, glorified spell-checker. The company culture is fine if we ignore this detail, but it feels like I'm fighting an uphill battle, and I'm not sure I'd have much luck elsewhere…  ↩

Cassidy Williams 3 weeks ago

A history of styling choices leading to native CSS

I recently updated my app todometer to be styled with pure, native CSS! Changing the CSS libraries in todometer has been a real reflection of CSS styling history. When I first built it more than 9 years ago now , that initial commit had React, Electron, and Less for CSS. Less at the time was great for what I wanted (Node-based styling with nesting). It let me use variables ( like this ) and nesting ( like this ), and got the job done with some global styles. Eventually in 2020, I wanted more encapsulated styles, thus I wanted to use CSS Modules. Also at the time, wanting to keep my variables and nesting where I could (but also modernize), I switched to Sass. When you look at the commits here and here , you can see the variables switching from starting with @ to $, and how I moved everything to its respective component (ultimately only keeping variables at the global level). The behaviors all stayed the same, it was just more modern under the hood! Yet another big refactor was due in 2023, when I got rid of all Sass and used plain ol’ CSS files, and PostCSS for transformations. node-sass had been deprecated, which really led me to reevaluate the styling stack, and CSS variables existed natively, so that was one less thing needed! That led me to where we were until earlier today: PostCSS with postcss-nested. These libraries sound almost exactly the same, but act differently, and Chris Coyier talks about it a bit here . To save you a click, postcss-nested has syntax like Sass, and postcss-nesting has syntax like the CSS spec! Given the history above, it makes sense how the transition happened here. Moving from Less to Sass to a more vanilla CSS approach, all while keeping the core of variables + nesting, is all I really wanted. The postcss-nested library back in 2023 let me keep styling almost exactly the same when I transitioned away from Sass, with the exception of variables ( see the commit ). I switched in this commit today to postcss-nesting, mostly to make sure that everything transitioned smoothly. 
It involved a laughably small change list: just adding nesting selectors across some files. The transition to fully native CSS for the entire app is possible now because CSS nesting is natively available ! I probably didn’t actually need to do the “switch to postcss-nesting” step, but it felt like a good iterative one. And since I did that iterative step, the only change I made for a “fully pure” CSS solution was to simply delete the files! Look at that diff. So much red!! So nice!! It’s really amazing to see how far we’ve come in browsers to be able to do these things without any libraries at all. Yes, I don’t have the most complex styles in the world, and yes, I’m really “only” using variables and nesting, but it’s cool that a “quality of life, nice to have” thing that I enjoyed nearly a decade ago is now a standard. Look at us go! Anyway, you can check out the repo for todometer here , with a new version being properly cut soon!
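For readers who haven't seen them together, the two features this whole journey revolved around — custom properties and native nesting — now work in plain CSS with no preprocessor at all. A hypothetical sketch (not todometer's actual styles):

```css
/* Custom properties replace Less/Sass variables at the global level. */
:root {
  --accent: #e91e63;
  --radius: 6px;
}

/* Native CSS nesting replaces the preprocessor's nesting.
   The & refers to the parent selector, as in Sass. */
.task {
  border-radius: var(--radius);

  &.is-done {
    opacity: 0.6;
  }

  & .task-label {
    color: var(--accent);
  }
}
```

Per the CSS Nesting spec this needs no build step in current browsers, which is exactly why the PostCSS layer became deletable.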

Susam Pal 3 weeks ago

Wander 0.2.0

Wander 0.2.0 is the second release of Wander, a small, decentralised, self-hosted web console that lets visitors to your website explore interesting websites and pages recommended by a community of independent personal website owners. To try it, go to susam.net/wander . This release brings a number of improvements. When I released version 0.1.0, it was the initial version of the software I was using for my own website. Naturally, I was the only user initially and I only added trusted web pages to the recommendation list of my console. But ever since I announced this project on Hacker News , it has received a good amount of attention. It has been less than a week since I announced it there but over 30 people have set up a Wander console on their personal websites. There are now over a hundred web pages being recommended by this network of consoles. With the growth in the number of people who have set up a Wander console came several feature requests, most of which have been implemented already. This release makes these new features available. Since Wander 0.2.0, the file of remote consoles is executed in a sandbox to ensure that it has no side effects on the parent Wander console page. Similarly, the pages recommended by the network are also loaded into a sandbox . This release also brings several customisation features. Console owners can customise their Wander console by adding custom CSS or JavaScript. Console owners can also block certain URLs from ever being recommended on their console. This is especially important in providing a good wandering experience to visitors. Since this network is completely decentralised, console owners can add any web page they like to their console. Sometimes they inadvertently add pages that do not load successfully in the console due to frame embedding restrictions. This leads to an uneven wandering experience because these page recommendations occasionally make it to other consoles where they fail to load. 
Console owners can now block such URLs in their console to decrease the likelihood of these failed page loads. This helps make the wandering experience smoother. Another significant feature in this release is the expanded Console dialog box. This dialog box now shows various details about the console and the current wandering session. For example, it shows the console's configuration: recommended pages, ignored URLs and linked consoles. It also shows a wandering history screen where you can see each link that was recommended to you along with the console that recommendation came from. There is another screen that shows all the consoles discovered during the discovery process. Those who care about how Wander works would find this dialog box quite useful. To check it out, go to my Wander console and explore. To learn more about Wander, how it works and how to set it up, please read the project README at codeberg.org/susam/wander .
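For context on the sandboxed page loading described above: the standard browser mechanism for this is the iframe sandbox attribute, which strips the embedded document of scripts, form submission, and access to the parent page unless capabilities are explicitly granted back. A generic sketch (not Wander's actual markup; the URL is a placeholder):

```html
<!-- Fully locked down: the framed page cannot run scripts,
     submit forms, open popups, or touch the parent page. -->
<iframe src="https://example.com/recommended-page"
        sandbox
        title="Recommended page"></iframe>

<!-- Capabilities can be granted back selectively. Note that granting
     both allow-scripts and allow-same-origin together lets a
     same-origin page remove its own sandboxing, so the two are
     usually not combined. -->
<iframe src="https://example.com/recommended-page"
        sandbox="allow-scripts"
        title="Recommended page"></iframe>
```

Separately, the frame-embedding restrictions mentioned (pages that fail to load in any console) come from the target site's own `X-Frame-Options` or `frame-ancestors` policy, which no embedding console can override — hence the need for the URL blocklist.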

alikhil 3 weeks ago

What is a CDN and Why It Matters?

With the rapid growth of GenAI solutions and the continuous launch of new applications, understanding the fundamental challenges and solutions of the web is becoming increasingly important. One of the core challenges is delivering content quickly to the end user . This is where a CDN comes into play. CDN stands for Content Delivery Network . Let’s break it down. (Note: Modern CDN providers often bundle additional services such as WAF, DDoS protection, and bot management. Here, we focus on static content delivery.) Content refers to any asset that needs to be loaded on the user’s device: images, audio/video files, JavaScript, CSS, and more. Delivery means that this content is not only available but also delivered efficiently and quickly. A CDN is a network of distributed nodes that cache content. Instead of fetching files directly from the origin server, users receive them from the nearest node, minimizing latency. Consider an online marketplace for digital assets, such as a photo stock or NFT platform. The application stores thousands of images on a central server. Whenever users open the app, those images must load quickly. If the application server is hosted in Paris, users in Paris will experience minimal ping. However: users in Spain may see about 2× that ping time, users in the USA about 6×, and users in Australia about 12×. These numbers only reflect simple ICMP ping times. Actual file delivery involves additional overhead such as TCP connections and TLS handshakes, which increase delays even further. With a CDN, each user connects to the nearest edge node instead of the origin server. This is typically achieved via GeoDNS. Importantly, only the CDN knows the actual address of the origin server, which also improves security by reducing exposure to direct DDoS attacks. CDN providers usually operate edge nodes in major world cities. When a request is made: if the requested file is already cached on the edge node ( cache hit ), it is delivered instantly. If not ( cache miss ), the edge node requests it from the CDN shield . If the shield has the file cached, it is returned to the edge and then served to the user. If not, the shield fetches it from the origin server, caching it along the way. For popular websites, the cache hit rate approaches but rarely reaches 100% due to purges, new files, or new users. The shield node plays a critical role. Without it, each cache miss from any edge node would hit the origin server directly, increasing load. Many providers offer shields as an optional feature, and enabling them can significantly reduce origin stress. Beyond cache hits and misses, performance can be measured with concrete indicators: Time to First Byte (TTFB): how long it takes for the first data to arrive after a request — CDNs usually reduce TTFB by terminating connections closer to the user. Latency reduction: the difference in round-trip time between delivery from the origin versus delivery from an edge node. Cache hit ratio: the percentage of requests served directly from edge caches. These KPIs provide a real, measurable view of CDN efficiency rather than theoretical assumptions. The closer the edge node is to the end user, the faster the content loads. The key questions are: where are the users located, and which CDN providers have the best edge coverage for those locations? But don’t rely on maps alone. Measure real performance with Real User Monitoring (RUM) using metrics like TTFB and Core Web Vitals. There are plenty of ready-made tools available. If you’re interested in building your own RUM system, leave a comment or reaction – I can cover that in a follow-up post.
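The edge → shield → origin flow described above can be sketched as a two-tier cache. The following is purely illustrative (hypothetical names, no relation to any real CDN's internals), but it shows both the lookup order and how the cache hit ratio KPI falls out of it:

```python
# Two-tier CDN cache sketch: edge node -> shield -> origin.
# Each tier serves from its cache on a hit; on a miss it asks the
# next tier and stores the response on the way back.

ORIGIN = {"/logo.png": b"<png bytes>", "/app.js": b"<js bytes>"}

class Node:
    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream  # next tier, or None for the shield
        self.cache = {}
        self.hits = self.misses = 0

    def get(self, path):
        if path in self.cache:            # cache hit: serve instantly
            self.hits += 1
            return self.cache[path]
        self.misses += 1                  # cache miss: go upstream
        if self.upstream is not None:
            body = self.upstream.get(path)
        else:
            body = ORIGIN[path]           # only the shield reaches the origin
        self.cache[path] = body           # populate cache on the way back
        return body

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

shield = Node("shield")
edge_paris = Node("edge-paris", upstream=shield)
edge_sydney = Node("edge-sydney", upstream=shield)

edge_paris.get("/logo.png")   # miss at edge, miss at shield -> origin fetch
edge_sydney.get("/logo.png")  # miss at edge, hit at shield -> origin untouched
edge_sydney.get("/logo.png")  # hit at edge

print(edge_sydney.hit_ratio())        # 0.5
print(shield.hits, shield.misses)     # 1 1
```

Note how the second edge's miss never reaches the origin: the shield absorbed it, which is exactly the "reduce origin stress" role described above.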

Manuel Moreale 3 weeks ago

Melanie Richards

This week on the People and Blogs series we have an interview with Melanie Richards, whose blog can be found at melanie-richards.com/blog . Tired of RSS? Read this in your browser or sign up for the newsletter . People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. I’m a Group Product Manager co-leading the core product at Webflow, i.e. helping teams visually design and build websites. My personal mission is to empower people to make inspiring, impactful, and inclusive things on the web. That’s been the through line of my career so far: I started out as a designer at a full-service agency called Fuzzco, moved to the web platform at Microsoft Edge, continued building for developers at Netlify, and am now aiming to make web creation even more democratic with the Webflow platform. I transitioned from design to product management while at Microsoft Edge. I wanted to take part in steering the future of the web platform, instead of remaining downstream of those decisions. I feel so lucky to have worked on new features in HTML, ARIA, CSS, and JavaScript with other PMs and developers in the W3C and WHATWG. I’m a builder at heart, so I love to work on webby side projects as well as a whole bevy of analog hobbies: knitting, sewing, weaving, sketchbooking, and journaling. I have a couple of primary blogs right now. From 2013–2016 I also had a blog and directory called Badass Lady Creatives (wish I had spent more than five minutes on the name, haha). This featured women who were doing cool things in various “creative” industries. At the time it seemed like every panel, conference lineup, and group project featured all or mostly dudes. The blog was a way to push back on that a little bit and highlight people who were potentially overlooked. Since then gender representation (for one) seems to have gotten a bit better in these industries. 
But the work and joy of celebrating diverse, inspiring talent is never done! Big “yeet to production” vibes for me! I use Obsidian to scribble down my thoughts and write an initial draft. Obsidian creates Markdown files, so I copy and paste those into Visual Studio Code (my code editor), add some images and make some tweaks, and then push to production. I really try not to overthink it too much. However, I will admit that I have a ton of drafts in Obsidian that never see the light of day. It can be cathartic enough just to scribble it down, even if I never publish the thought. For my Learning Log posts, I use a Readwise => Obsidian workflow I describe in this blog post . Reader by Readwise is the app where I store and read all my RSS feeds and newsletter forwards. “Parallel play” is the biggest, most joyful boon to my creativity. I love to be in the company of others as we independently work on our own projects side by side. There’s a delicate balance when it comes to working on creative projects socially. For example, my mom, my aunt, and I often have Sew Day over FaceTime on Sundays. Everyone’s pretty committed to what they’re working on, so it’s easy to sew and talk and sing (badly 😂) at the same time. I also used to go to a local craft night that very sadly disbanded when the host shop changed hands. For writing or coding, that takes a bit more mental focus for me. I started a Discord server with a few friends, which is dedicated to working on blog posts and side projects. We meet up once a month to talk about our projects (and shoot the breeze, usually about web accessibility and/or the goodness of dogs). Then we all log off the voice channel to go do the thing! Both of these blogs use Eleventy and plain ol’ Markdown, and are hosted on Netlify. Some of my other side projects use a content management system (CMS) like Webflow’s CMS, or Contentful + Eleventy. Again, Webflow is my current employer. 
I use a Netlify form for comments on my “Making” blog, and Webmentions for my main blog. I will probably pull out Webmentions from that code base: conceptually they’ve never really “landed” for me, and it would be nice to delete a ton of code. I generally like my setup, though sometimes I think about migrating my “Making” blog onto a CMS. As far as CMSes go, I quite like Webflow’s: it’s straightforward and has that Goldilocks level of functionality for me. Some other CMSes I’ve tried have felt bloated yet seemed to miss obvious functionality out of the box. I have a Bookshop.org affiliate link and it took me several years to meet the $20 minimum payout so…yeah I’ve never truly monetized my blogging! I find there’s freedom in giving away your thoughts for free. As far as costs go, I have pretty low overhead: just paying for the domain name. I’m fine with other folks monetizing personal blogs, though of course there’s a classy and not-classy way to do so. If monetizing is what keeps bloggers’ work on the open web, on sites they own and control, I prefer that over monetizing through walled gardens. Related: Substack makes it easy to monetize but there are some very compelling reasons to consider alternatives. This is highly topical: I’m currently scheming about a directory site listing “maker” blogs! So many communities in the visual arts and crafts are stuck on social media platforms they don’t even enjoy, beholden to the whims of an algorithm. I’d like to connect makers in a more organic way. If you’re a crafter who would like to be part of this, feel free to fill out this Google form ! Now that you're done reading the interview, go check the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 133 interviews . People and Blogs is possible because kind people support it. melanie-richards.com/blog, simply the blog that lives at my main website. 
I post here about the web, design, development, accessibility, product management, etc. One practice I’ve been keeping for a few years now is my monthly Learning Log. These posts are a compendium of what I’ve been shipping or making, what I’ve been learning, side quests, neat links around the internet, and articles I’ve been reading. When I’m in a particularly busy period (as was the case in 2025; my first child was born in September), this series is my most consistent blogging practice. making.melanie-richards.com : this is the blog where I post about my aforementioned analog projects. Quite a lot of sewing over the past year! Mandy Brown , Oliver Burkeman (technically a newsletter with a “view on web” equivalent), and Ethan Marcotte ’s writing have been helping to fill my spiritual cup over the last couple years. Anh and Katherine Yang are doing neat things on their sites. What Claudia Wore for a nostalgic pick; I’d love to recreate some of these outfits sometime. Thank you Kim for keeping the blog up! Sarah Higley would be a great next interview. She blogs less frequently, but always with great depth and thoughtfulness on web accessibility. Web developers can learn quite a lot about more involved controls and interactions from Sarah.
