Latest Posts (20 found)
Jim Nielsen 21 days ago

You Might Debate It — If You Could See It

Imagine I’m the design leader at your org and I present the following guidelines I want us to adopt as a team for doing design work:

- Typography: Use expressive, purposeful fonts and avoid default stacks (Inter, Roboto, Arial, system).
- Motion: Use a few meaningful animations (page-load, staggered reveals) instead of generic micro-motions.
- Background: Don't rely on flat, single-color backgrounds; use gradients, shapes, or subtle patterns to build atmosphere.
- Overall: Avoid boilerplate layouts and interchangeable UI patterns. Vary themes, type families, and visual languages.

How do you think that conversation would go? I can easily imagine a spirited debate where some folks disagree with any or all of my points, arguing that they should be struck as guidelines from our collective ethos of craft. Perhaps some are boring, or too opinionated, or too reliant on trends. There are lots of valid, defensible reasons. I can easily see this discussion being an exercise in frustration, where we debate for hours and get nowhere — “I suppose we can all agree to disagree”.

And yet — thanks to a link to Codex’s front-end tool guidelines in Simon Willison’s article about how coding agents work — I see that these are exactly the kind of guidelines that are tucked away inside an LLM that’s generating output for many teams. It’s like a Trojan Horse of craft: guidelines you might never agree to explicitly are guiding LLM outputs, which means you are agreeing to them implicitly.

It’s a good reminder about the opacity of the instructions baked into generative tools. We would debate an open set of guidelines for hours, but if they’re opaquely baked into a tool without our knowledge, does anybody even care? When you offload your thinking, you might be on-loading someone else’s you’d never agree to — personally or collectively.


Bring back MiniDV with this Raspberry Pi FireWire HAT

In my last post, I showed you how to use FireWire on a Raspberry Pi with a PCI Express IEEE 1394 adapter. Now I'll show you how I'm using a new FireWire HAT and a PiSugar3 Plus battery to make a portable MRU, or 'Memory Recording Unit', to replace tape in older FireWire/i.Link/DV cameras. The alternative is an old used MRU like Sony's HVR-MRC1, which runs around $300 on eBay.
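For a rough idea of what an MRU like this does in software, here is a minimal sketch, assuming a Linux system with the dvgrab utility installed and a DV camera attached over FireWire; the filename prefix is just an example, not anything from the actual build:

```python
# Minimal sketch: record DV from a FireWire camera to disk, tape-free.
# Assumes Linux with dvgrab installed and a camera on the FireWire bus.
import subprocess

def capture_dv(prefix: str = "capture-") -> None:
    """Run dvgrab until interrupted, writing timestamped DV files."""
    subprocess.run(
        [
            "dvgrab",
            "--autosplit",      # start a new file at each scene break
            "--timestamp",      # put the recording date/time in filenames
            "--size", "0",      # no file-size limit
            "--format", "dv2",  # type-2 DV AVI files
            prefix,
        ],
        check=True,
    )

if __name__ == "__main__":
    capture_dv()
```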

iDiallo Today

Sharing a Name

My bank card never arrived. I called the bank and, after being redirected through several departments, was assured that it had been mailed. Then we argued a bit about what "7 to 10 business days" meant; we were already on day 14. We ended the call by agreeing to disagree. Eventually, I did get my card. But it wasn't the mailman who delivered it. Instead, it was my neighbor from two streets down. On the envelope, my address had been crossed out, and the word "incorrect" was handwritten beside it. Why? Because the mailman had done it himself. You see, I had just moved into the apartment complex, and my name looked familiar to him. Of course he knew who Ibrahima Diallo was; he had been delivering his mail for years. So he corrected it. In the US, both my first and last name are uncommon (or so I thought). They're often a source of confusion when my Starbucks order gets called out. As it turns out, one of my neighbors shares the exact same name. And on top of that, he uses the same West African spelling: Ibrahima. The mailman, trying to be helpful, had redirected my mail to what he thought was the right address. My neighbor and I laughed about it. Then I immediately cancelled the card and requested a new one...

Some years ago, I dated a woman from Bulgaria. She grew up in a small city where everyone knew each other. In their town, there was a single Black family. You probably know where this is going, but pretend you don't and follow along. It was so unusual to have an outsider in this town that the man and his family became local fixtures. Wherever they went, people stopped to take pictures with them. They were like minor celebrities. So naturally, when she pulled out a photo from her childhood, there he was, posing cheerfully with the neighbors. She turned the photo over to read the names written on the back. She stopped. She burst out laughing. I looked at the name. I can't read Cyrillic, but I know exactly how to spell my name in Bulgarian. His name read: Ibrahima Diallo.

When I was hired at AT&T many years ago, there was a week of confusion at first. I didn't receive my welcome kit. My manager swore that he had carefully selected my name, and sent it to my Texas address... As you may have guessed, I do not have a Texas address. I lived in Los Angeles, and the company where we worked in person was in Los Angeles. Somewhere in Texas, a long-time employee must have been confused by this new welcome kit showing up in the mail.

Back when I was featured on the BBC, a wave of people reached out. Even though my picture was prominently displayed in the article, several people emailed me as if they already knew me, picking up conversations we had apparently started at work, signing off with "see you tomorrow." According to my inbox, I had met quite a few people in London. The only problem was, well, I've never been to London. As it turned out, my neighbor's uncle had called him to say that some journalists were trying to reach his nephew through him. You'll never guess the uncle's name. Yes, it's Ibrahima Diallo. I eventually met this uncle. We had a long conversation and discovered that he knew my father from back home. In fact, he had gone to school with one of my uncles and spoke fondly of him, saying he was a brilliant student. What's my uncle's name, you ask? Of course it's Ibrahima Diallo.

Growing up, I assumed my name was uniquely mine. But as I've made my way through the world, I've found that I share it with a surprisingly large number of people. I already snagged ibrahimdiallo.com.
I'm keeping an eye on ibrahimadiallo.com, hoping it expires this June so I can claim that one too. If it does become available, I'll gather an army of Ibrahimas, and we will... Well, I'm not entirely sure what we'll do yet. But it will definitely be fun. Anyway, that's a story about my name.

A postscript worth mentioning: Both of my older brothers share the same first and last name as each other. You can imagine the fun they have. This is what happens in West African families when you name your children after their grandparents, and the grandparents happen to share the same name. One brother does have a middle name, intended as a differentiator. But middle names are rarely included in US mailing addresses, so that doesn't help much either.


RIP Mac Pro

The Mac Pro is no longer a product in Apple’s lineup. For a computer that has caused so much consternation over the years, its story can be told very succinctly. Stephen Hackett captured it all in six sentences:

The Mac Pro was introduced way back in 2006 as a replacement for the outgoing Power Mac G5. It had a good few years, then languished until the 2013 model was announced. That machine was a dud, and it languished until the 2019 model was announced. It came out in December 2019, which was less than a year before Apple silicon was announced and the M1 shipped. The Mac Pro got one last update in June 2023, when Apple dropped the Intel version for one with an M2 Ultra inside. It’s been languishing again ever since.

(Or, for the long version, read this retrospective by Joe Rossignol on MacRumors.)

Definitely sad to see the Mac Pro, and its amazingly-still-modern-looking-even-seven-years-later chassis, head to the farm upstate. I’d held out hope for a new screamer of a machine with an ‘Extreme’ M-series chip, but alas. It seems that Apple was waiting for permission from John Siracusa, the world’s preeminent Mac Pro believer, to kill the product. Here he is in the latest episode of the Accidental Tech Podcast, recorded just last night:

@marcoarment @siracusa if you sell it, I will buy it and wear it to WWDC

@marcoarment @siracusa The Mac Pro dies twice: first, when Apple discontinues it, second, when its name is spoken by John for the last time.

Exciting that both “Believe” shirts were resolved this month.

✅ Upgrade AirPods Max Believe
☠️ ATP Mac Pro Believe

There’s something poetic about the Mac Pro being discontinued as the MacBook Neo takes off like a rocket.

iDiallo Today

How we get radicalized in America

Be healthy, be young, fall ill. You have a great job, of course; you have insurance. It would be okay if the worst thing about health insurance in America were that it is hard to navigate. No! The actual problem is that your insurance is incentivized not to cover you at your most vulnerable moment. You pay them every month. That's money that goes from your paycheck into their pockets. Now if they cover you, that's money that leaves their pocket and goes into your treatment. There are two ways they can make money:

1. You continue paying every month, and never fall ill.
2. You fall ill, and they deny you care.

Only the second option is an active option. Health insurance is a scam that we have normalized in the United States. It helps no one, it makes healthcare unaffordable, and you have to fight tooth and nail to get any sort of care. When Luigi was in the headlines, and news anchors were asking how such a young man could get radicalized, I shook my head. In America, it is our tradition to work two jobs. It is our tradition to live paycheck to paycheck. And it is our tradition to get radicalized the moment we get sick. When you get sick, the healthcare industry tries to charge as much as it can get away with, and the insurance industry tries to deny as much as it can.

Martin Fowler Yesterday

Fragments: March 26

Anthropic carried out a study, done by getting its model to interview some 80,000 users to understand their opinions about AI, what they hope from it, and what they fear. Two things stood out to me.

It’s easy to assume there are AI optimists and AI pessimists, divided into separate camps. But what we actually found were people organized around what they value—financial security, learning, human connection— watching advancing AI capabilities while managing both hope and fear at once.

That makes sense; if asked whether I’m an AI booster or an AI doomer, I answer “yes”. I am fascinated by its impact on my profession, expectant of the benefits it will bring to our world, and worried by the harms that will come from it. Powerful technologies rarely yield simple consequences. The other thing that struck me was that, despite most people mixing the two, there was an overall variance in optimism versus pessimism about AI by geography. In general, the less developed the country, the more optimism about AI.

❄                ❄                ❄                ❄                ❄

Julias Shaw describes how to fix a gap in many people’s use of specs to drive LLMs:

Here’s what I keep seeing: the specification-driven development (SDD) conversation has exploded. The internet is overflowing with people saying you should write a spec before prompting. Describe the behavior you want. Define the constraints. Give the agent guardrails. Good advice. I often follow it myself. But almost nobody takes the next step. Encoding those specifications into automated tests that actually enforce the contract. And the strange part is, most developers outside the extreme programming crowd don’t realize they need to. They genuinely believe the spec document is the safety net. It isn’t. The spec document is the blueprint. The safety net is the test suite that catches the moment your code drifts away from it.

As well as explaining why it’s important to have such a test suite, he provides an astute five-step checklist to turn spec documents into executable tests. (A small sketch of what that step can look like follows at the end of these fragments.)

❄                ❄                ❄                ❄                ❄

Lawfare has a long article on potential problems countering covert action by Iran; I confess I only skip-read it. It begins by outlining a bunch of plots hatched in the last few years. Then it says:

If these examples seem repetitive, it’s because they are. Iran has proved itself relentless in its efforts to carry out attacks on U.S. soil—and the U.S., for its part, has demonstrated that it is capable of countering those efforts. The above examples show how robustly the U.S. national security apparatus was able to respond, largely through the FBI and the Justice Department…. That is, potentially, until now. The current administration has decimated the national security elements of both agencies through firings and forced resignations. People with decades of experience in building interagency and critical source relationships around the world, handling high-pressure, complicated investigations straddling classified and unclassified spaces, and acting in time to prevent violence and preserve evidence have been pushed out the door. Those who remain not only have to stretch to make up for the personnel deficit but also are being pulled away by White House priorities not tied to the increasing threat of an Iranian response.

The article goes into detail about these cuts, and the threats that may exploit the resulting gaps.
It’s the nature of national security people to highlight potential threats and call for more resources and power. But it’s also the nature of enemies to find weak spots and look to cause havoc. I wonder what we’ll think should we read this article again in a few years’ time.
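To make the Shaw fragment concrete, here is a minimal sketch of encoding a spec constraint as an executable test, in Python with pytest. The spec line, the apply_discount function, and its rule are invented for illustration; they are not from Shaw’s article.

```python
# Hypothetical spec line: "A discount must be between 0 and 50 percent,
# inclusive; anything else is rejected." The test suite, not the spec
# document, is what catches generated code drifting from this contract.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, enforcing the spec's 0-50% bound."""
    if not 0 <= percent <= 50:
        raise ValueError("discount out of range")
    return round(price * (1 - percent / 100), 2)

def test_maximum_allowed_discount():
    assert apply_discount(100.0, 50) == 50.0

def test_discount_beyond_spec_bound_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 51)
```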

matduggan.com Yesterday

I Can't See Apple's Vision

I don't typically write about Apple stuff. It's the most written-about company on earth. Every product launch gets the kind of forensic scrutiny normally reserved for plane crashes and celebrity divorces. Mostly though, I feel like a line cook at a Denny's talking trash about whether the French Laundry has lost their way. I'm back here microwaving a Grand Slam and opining about Thomas Keller's sauce work. The engineers I know personally at Apple are, on average, much more talented than me. They work harder, they do it for decades without a break, and none of them have ever shipped a feature while still wearing pajama pants at 2 PM. It seems insane for someone of my mediocre talent to critique them. It also feels a little dog-pile-y. Apple employees know Tahoe sucks. They know it the way you know your haircut is bad — they don't need strangers on the internet confirming it. And to be fair, there's genuinely great work buried inside Tahoe: the clipboard manager, the automation APIs, a much-improved Spotlight. But visually it's gross, and that matters when your entire brand identity is "we're the ones who care about design."

Instead, I want to talk about a bigger problem, one that I do feel qualified to talk about because I am very guilty of committing this sin. I don't see a cohesive vision for macOS and watchOS. This, more than one bad release, seems far worse to me and dangerous for the company. Since this is already 2000 words as a draft I'll save watchOS for another time. I'm verbose but even I have limits.

Now, to be clear, this isn't across every product. iPadOS has a strong vision and the strength of its convictions to change approaches. The different stabs at solving the window problem on the iPad, making it so that you still have an iPad experience while being able to do multiple things at the same time, are proof of that. iOS has an incredibly strong vision for what the product is and isn't and how the software works with that. visionOS and tvOS are less strong, but visionOS is still finding its footing in a brand new world. The Apple TV hardware and software is in a weirdly good position even though nothing has changed about it in what feels like geological time. I've purchased every version of the Apple TV, and with the exception of that black glass remote — the one that felt like it was designed by someone who had never held a remote, or possibly a physical object — everything has been pretty good. I'm still not clear how storage works on the Apple TV and I don't think anybody outside of Apple does either. I'm not even sure Apple knows. But somehow it's fine.

But with watchOS and macOS we have two software stacks that seem to be letting down the great hardware they are installed in. They seem to be evolving in random directions with no clear end goal in mind. I used to be able to see what OS X was aiming for, even if it didn't hit that goal. Now, with two of Apple's platforms, I'm not able to see anything except a desire to come up with something to show as this year's release.

When I got my first Mac — an iBook G3 — the experience was like test-driving a Ferrari that someone had fitted with a lawnmower engine. You'd click on the hard drive icon and wait. And wait. And in those few seconds of waiting, you'd think: man, this would be incredible if the hardware could keep up. The software had somewhere it wanted to go. The hardware just couldn't get it there yet.
This trend continued for a long time on OS X, where you'd see Apple really pushing the absolute limits of what it could get away with. After the rock-solid stability of 10.4, Apple took a lot of swings with 10.5, and they didn't all land. The first time you opened the Time Machine UI and the entire thing crawled to an almost-crash, you'd think boy, maybe this wasn't quite ready for prime time. But this entire time there wasn't really a question, ever, that there was a vision for what this looked like. The progression of OS X from the beta onward was this:

It's Unix, but you never need to know that. All the power, none of the beard. You get the stability of a server OS without ever having to type into anything. Everything annoying is abstracted away. Drivers? Gone. "Installing" an application? You drag it into a folder. That's it. That's the install. It felt like the computer was meeting you more than halfway — it was practically doing your job for you and then apologizing for not doing it sooner.

If it seems like it should work, it works. Double-click a PDF, it opens. Put in a DVD, it plays. Drag an app to the Applications folder and it becomes an application. This sounds obvious now, but in 2003 this was like witchcraft if you were coming from Windows.

But it was also serious. It wasn't cluttered with stupid bullshit. It was designed for people who made things — with real font management, color calibration, the works. The OS tried to stay out of your way. Your content was the show; everything else was stagecraft.

OS X tried to accommodate you, not the other way around. When you look at these screenshots I'm always surprised how light the touch is. There isn't a lot of OS here to the user. Almost everything is happening behind the scenes and the stuff you do see is pretty obvious.

The first time I thought "oh man, they've lost the thread" was Notifications. On iOS, Notifications make sense — you've got apps buried in folders three screens deep, so a unified system for surfacing what's happening is genuinely useful. On macOS, this design makes absolutely no sense at all. You can see your applications. They're right there. In the Dock. Which is also right there. This is the beginning of the feeling of "we aren't sure what we're doing here with the Mac anymore". iOS users like Notifications so maybe you dorks will too? It consumes a huge amount of screen real estate, and it was never (and still isn't) clear what should and shouldn't be a notification. Even opening up mine right now, it's filled with garbage that doesn't make sense to notify me about. A thing has completed running the thing that I asked it to run? Why would I need to know that? There is already a clear way to communicate this information to me: the application icon adds an exclamation point or bounces up and down in the Dock. With Notifications you end up with just garbage noise taking up your screen for no reason. Maybe worse, it's not even garbage designed with the Mac in mind. It's just random crap nobody cares about that looks exactly like iOS Notifications. The issue with copying everything from iOS is that it's like copying someone's homework — except they go to a different school, in a different country, studying a different subject. It's not just wrong in the way where you tried and failed. It's wrong in a way that makes everyone who encounters it deeply uncomfortable. The teacher doesn't even know where to begin. They just stare at it.

For years afterwards it seemed like the purpose of macOS was just to port iOS features to the Mac years after their launch on iOS. Often these didn't make much sense or hadn't had a lot of effort expended in making them very Mac-y. It was like there was clearly a favorite child in iOS, then a sassy middle child in iPadOS, and then, like a 1980s sitcom where there was a contract dispute, "another child" you saw every 5th episode run down the stairs in the background with no lines. Me at home, shouting at my TV: "I knew they didn't kill you off, macOS!"

Now with Tahoe there's clearly some sort of struggle happening inside of the team. And here's what's maddening — buried inside this visual catastrophe, someone at Apple is doing incredible work. Clipboard management has been table stakes in the third-party ecosystem for years. Apple finally added a version that handles 90% of use cases. It's classic Sherlocking: Apple shows up ten years late to the party, brings a decent bottle of wine, and somehow half the guests leave with them.
Same with Spotlight. Spotlight hasn't gotten a ton of love in years. Suddenly it's really competing with third-party tools. If you're searching for a file, you can filter it based on where the file is stored. Type the name of a directory, press the Tab key, and then type the name of the file before pressing Enter. This is great! We finally have keyword search for this kind of thing. Application shortcuts for opening apps, like one for Firefox, are nice. Assign a quick key like "se" to Send Email, type it in Spotlight, hit Enter, and compose your message. This is all classic Apple thinking: "how can we make the Mac as good as possible such that you, the user, don't need to download any third-party applications to get a nice experience". You don't need a word processor: you have a word processor, a spreadsheet application, presentation software, a PDF viewer, a clipboard manager, a system launcher, automation APIs, etc. This is a vision that is consistent throughout the entire system's history: how can we help you do the things you need to do more easily?

But the reason I'm stressed, as someone who is pretty invested in the ecosystem, is that the visual stuff is so bad. Not just bad, but negligent: nobody seems to have tested how it was going to look in a bunch of situations, so that's now someone else's problem. Whenever I get a Finder sidebar covering folder contents so I have to resize the window every time, or the Dock freaks out and refuses to come back out, it feels like I installed one of those OS X skins for a Linux distro. I buy Apple stuff because it's nice to look at, and this is horrible to look at. Why is this so big? Why did you cut off the word "Finder" from Force Quit? Everywhere you look there are a million of these papercuts. We have a resolution on our laptop screens that would have made people collapse in 2005; why must we waste all of it on UI elements? Also, you can't grab window edges, as shown by the best post ever written on the subject: https://noheger.at/blog/2026/01/11/the-struggle-of-resizing-windows-on-macos-tahoe/

Why is there so much empty space between everything? Why are there six ways to do literally everything? Why did we copy the concept of Control Center from iOS at all if there's very little limit on screen real estate and we could already do this from the menu bar? So we're going to keep the Mac menu bar, but we're going to add a full iPad control system, and then we're going to use the iPad control system to manage the menu bar. I will say the "Start Screen Saver" button makes me laugh, because it's a mistake I would make in CSS: the text is too long so the button is giant, but we didn't resize the icon, so it looks crazy. And do we need the same text inside the button as outside of it? No, and that leads me to the other banger. It's pretty clear the two white boxes inside of "Scene or Accessory" were supposed to be text, Scene on the top and then Accessory on the bottom, but SwiftUI couldn't do that so they left the placeholder. Somewhere there is a Jira ticket to come back to this that got trashed.

Also, complete aside: has anyone in the entire fucking world ever run Shazam from a Mac? What scenario are we designing for here? I hear a banger at the coffee shop so I hold my MacBook Pro up over my head like John Cusack in Say Anything, hoping it catches enough audio before my arms give out?
"Recognize Music" is in my menu bar, taking up space that could be used for literally anything else, on the off chance I need to identify a song using a device that weighs four pounds and has no microphone worth using in a noisy room. If you are going to copy ipadOS's homework you need to think about it for 30 seconds . So my hope is that the improvement camp wins. That the people who built the better Spotlight and the clipboard manager and the automation APIs are the ones who get to set the direction. Because right now it feels like the best work on macOS is being done in spite of the overall vision, not because of it. Like someone's sneaking vegetables into a toddler's mac and cheese. The good stuff is in there — you just have to eat around a lot of neon orange nonsense to find it. Steve Jobs talked about creative people having to persuade five layers of management to do what they know is right. I don't know how many layers there are now. But I know what it looks like when the creative people are losing that argument, and I know what it looks like when they're winning it. Right now, on macOS, it looks like both are happening at the same time, in the same release, on the same screen. And that's scarier than any one bad design choice. It's Unix, but you never need to know that. All the power, none of the beard. You get the stability of a server OS without ever having to type into anything. Everything annoying is abstracted away. Drivers? Gone. "Installing" an application? You drag it into a folder. That's it. That's the install. It felt like the computer was meeting you more than halfway — it was practically doing your job for you and then apologizing for not doing it sooner. If it seems like it should work, it works. Double-click a PDF, it opens. Put in a DVD, it plays. Drag an app to the Applications folder and it becomes an application. This sounds obvious now, but in 2003 this was like witchcraft if you were coming from Windows. But it was also serious. It wasn't cluttered with stupid bullshit. It was designed for people who made things — with real font management, color calibration, the works. The OS tried to stay out of your way. Your content was the show; everything else was stagecraft.


Don’t trust, verify

Software and digital security should rely on verification, rather than trust. I want to strongly encourage more users and consumers of software to verify curl. And ideally require that you could do at least this level of verification of other software components in your dependency chains.

With every source code commit and every release of software, there are risks. Also entirely independent of those. Some of the things a widely used project can become the victim of include:

- Jia Tan is a skilled and friendly member of the project team but is deliberately merging malicious content disguised as something else.
- An established committer might have been breached unknowingly and now their commits or releases contain tainted bits.
- A rando convinced us to merge what looks like a bugfix but is a small step in a long chain of tiny pieces building up a planted vulnerability or even backdoor.
- Someone blackmails or extorts an existing curl team member into performing changes not otherwise accepted in the project.
- A change by an established and well-meaning project member that adds a feature or fixes a bug mistakenly creates a security vulnerability.
- The website on which tarballs are normally distributed gets hacked and now evil alternative versions of the latest release are provided, spreading malware.
- Credentials of a known curl project member are breached and misinformation gets distributed appearing to be from a known and trusted source. Via email, social media or websites. Could even be this blog!
- Something in this list is backed up by an online deep-fake video where a known project member seemingly repeats something incorrect to aid a malicious actor.
- A tool used in CI, hosted by a cloud provider, is hacked and runs something malicious.
- While the primary curl git repository has a downtime, someone online (impersonating a curl team member?) offers a temporary "curl mirror" that contains tainted code.

In the event any of these would happen, they could of course also happen in combinations and in a rapid sequence.

curl, mostly in the shape of libcurl, runs in tens of billions of devices. Clearly one of the most widely used software components in the world. People ask me how I sleep at night given the vast amount of nasty things that could occur virtually at any point. There is only one way to combat this kind of insomnia: do everything possible and do it openly and transparently. Make it a little better this week than it was last week. Do software engineering right. Provide means for everyone to verify what we do and what we ship. Iterate, iterate, iterate.

If even just a few users verify that they got a curl release signed by the curl release manager, and they verify that the release contents are untainted and only contain bits that originate from the git repository, then we are in a pretty good state. We need enough independent outside users to do this, so that one of them can blow the whistle if anything at any point would look wrong. I can't tell you who these users are, or in fact if they actually exist, as they are and must be completely independent from me and from the curl project. We do however provide all the means and we make it easy for such users to do this verification. (A minimal sketch of what that can look like follows at the end of this post.)

The few outsiders who verify that nothing was tampered with in the releases can only validate that the releases are made from what exists in git. It is our own job to make sure that what exists in git is the real thing. The secure and safe curl. We must do a lot to make sure that whatever we land in git is okay. Here's a list of activities we do:

- we have a consistent code style (invalid style causes errors). This reduces the risk for mistakes and makes it easier to debug existing code.
- we ban and avoid a number of "sensitive" and "hard-to-use" C functions (use of such functions causes errors)
- we have a ceiling for complexity in functions to keep them easy to follow, read and understand (failing to do so causes errors)
- we review all pull requests before merging, both with humans and with bots. We link back commits to their origin pull requests in commit messages.
- we ban use of "binary blobs" in git to not provide means for malicious actors to bundle encrypted payloads (trying to include a blob causes errors)
- we actively avoid base64 encoded chunks as they too could function as ways to obfuscate malicious contents
- we ban most uses of Unicode in code and documentation to avoid easily mixed-up characters that look like other characters (adding Unicode characters causes errors)
- we document everything to make it clear how things are supposed to work. No surprises. Lots of documentation is tested and verified in addition to spellchecks and consistent wording.
- we have thousands of tests and we add test cases for (ideally) every functionality. Finding "white spots" and adding coverage is a top priority. curl runs on countless operating systems and CPU architectures, and you can build curl in billions of different configuration setups: not every combination is practically possible to test.
- we build curl and run tests in over two hundred CI jobs that are run for every commit and every PR. We do not merge commits that have unexplained test failures.
- we build curl in CI with the most picky compiler options enabled and we never allow compiler warnings to linger. We always use options that convert warnings to errors and fail the builds.
- we run all tests using valgrind and several combinations of sanitizers to find and reduce the risk for memory problems, undefined behavior and similar
- we run all tests as "torture tests", where each test case is rerun to have every invoked fallible function call fail once each, to make sure curl never leaks memory or crashes due to this
- we run fuzzing on curl: non-stop as part of Google's OSS-Fuzz project, but also briefly as part of the CI setup for every commit and PR
- we make sure that the CI jobs we have for curl never "write back" to curl. They access the source repository read-only, and even if they would be breached, they cannot infect or taint source code.
- we run code analyzer tools on the CI job config scripts to reduce the risk of us running or using insecure CI jobs
- we are committed to always fix reported vulnerabilities in the following release. Security problems never linger around once they have been reported.
- we document everything and every detail about all curl vulnerabilities ever reported
- our commitment to never breaking ABI or API allows all users to easily upgrade to new releases. This enables users to run recent security-fixed versions instead of legacy insecure versions.
- our code has been audited several times by external security experts, and the few issues that have been detected in those were immediately addressed
- two-factor authentication on GitHub is mandatory for all committers

All this is done in the open with full transparency and full accountability. Anyone can follow along and verify that we follow this. Require this for all your dependencies.

We plan for the event when someone actually wants and tries to hurt us and our users really bad. Or when that happens by mistake. A successful attack on curl can in theory reach widely. This is not paranoia.

This setup allows us to sleep well at night. This is why users still rely on curl after thirty years in the making.

I recently added a verify page to the curl website explaining some of what I write about in this post.
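For a concrete picture of the signature-verification step described above, here is a minimal sketch; it is not the curl project's own tooling. It checks a downloaded release tarball against its detached GPG signature by shelling out to gpg, assuming gpg is installed and the release manager's public key has already been imported. The filenames are hypothetical examples.

```python
# Minimal sketch of verifying a release tarball's detached signature.
# Assumes gpg is installed and the signer's public key is already in
# the local keyring; filenames below are hypothetical examples.
import subprocess
import sys

def verify_release(tarball: str, signature: str) -> bool:
    """Return True if `signature` is a valid signature for `tarball`."""
    result = subprocess.run(
        ["gpg", "--verify", signature, tarball],
        capture_output=True,
        text=True,
    )
    # gpg prints its verdict on stderr and exits non-zero on failure.
    print(result.stderr, file=sys.stderr)
    return result.returncode == 0

if __name__ == "__main__":
    ok = verify_release("curl-8.11.0.tar.xz", "curl-8.11.0.tar.xz.asc")
    print("signature OK" if ok else "VERIFICATION FAILED")
```

The stronger second step the post describes, confirming that the release contents only contain bits that originate from the git repository, is what the project's verify page documents.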

Stratechery Yesterday

An Interview with Arm CEO Rene Haas About Selling Chips

Good morning,

This week’s Stratechery Interview is with Arm CEO Rene Haas, who I previously spoke to in January 2024, and who recently made a major announcement at Arm’s first-ever standalone keynote: the long-time IP-licensing company is undergoing a dramatic shift in its business model and selling its own chips for the first time. We dive deep into that decision in this interview, including the meta of the keynote, Arm’s history, and how the company has evolved, particularly under Haas’ leadership. Then we get into why CPUs matter for AI, and how Arm’s CPU compares to Nvidia’s, x86, and other custom Arm silicon. At the end we discuss the risks Arm faces, including a maxed-out supply chain, and how the company will need to change to support this new direction. As a reminder, all Stratechery content, including interviews, is available as a podcast; click the link at the top of this email to add Stratechery to your podcast player. On to the Interview:

This interview is lightly edited for clarity.

Rene Haas, welcome back to Stratechery.

RH: Ben Thompson, thank you.

Well, you used to be someone special. I think you were the only CEO I talked to who did nothing other than license IP; now you’re just another fabless chip guy like [Nvidia CEO] Jensen [Huang] or [Qualcomm CEO] Cristiano [Amon].

RH: (laugh) Yeah, you can put me in that category, I guess.

Well, the reason to talk this week is about the momentous announcements you made at the Arm Everywhere keynote — you will be selling your own chip. But before I get to the chip, I’m kind of interested in the meta of the keynote itself. Is this Arm Everywhere concept new, as far as being a keynote? Why have your own event?

RH: You know, we were talking a little bit about this going into the day. I don’t think we’ve ever as a company done anything like this.

Yeah, I didn’t think so either. I was trying to verify just to make sure my memory was correct, but yes, it’s usually at Computex or something like that.

RH: Our product launches have usually been lower key; we try to time them around OEM products that are using our IP, that use our partners’ chips. But we just felt like this was such a momentous day for the company, a very different day for the company, that we wanted to do something very, very unique. So it was very intentional. We were chatting about it prior; I don’t think we’ve done anything like this before.

Who was the customer for the keynote specifically? Because you’re making a chip — Meta is your first customer, they knew about this, they don’t need to be told — what was the motivation here? Who are you targeting?

RH: When you prepare for these things, that’s one of the first questions you ask yourself: “Who is this for?”, “Is it for the ecosystem?”, “Is it for customers?”, “Is it for investors?”, “Is it for employees?”, and I think under the umbrella of Arm Everywhere, the answer to those questions was “Yes”, everybody. We felt we needed to, because a lot of questions come up on this, right, Ben, in terms of “What are we doing?”, “Why are we doing it?”, “What’s this all about?” The answer to that question was “Yes”, it was for everyone.

One more question: Why the name “Arm Everywhere”?

RH: We were trying to come up with something that was going to thematically remind people a bit about who Arm was and what we are and what we encompass, but not actually tease out that we were going to be announcing something.

Right, you can’t say “Arm’s New Chip Event”.
RH: (laughing) Yes, exactly, “Come to the new product launch that we’ve not yet announced”. So we just decided that that would be enough of a teaser to get people interested.

Just to note, you said “what Arm was”. What was Arm? You used the past tense there.

RH: Yeah, and I will say, we are still doing IP licensing, you can still buy CSSs [Compute Subsystem Platforms], so we are still offering all of the products we did before that day, plus chips. So I’m not yet just another chip CEO; I think I’m still very different than the other folks you talked to.

Actually, back up, give me the whole Rene Haas version of the history of Arm.

RH: Oh, my goodness gracious. The company was born out of a joint venture way back in the day between Acorn Computer and then ultimately Apple and VLSI to design a low-power CPU to power PDAs. The thing that was kind of important was, “I need something that is going to run in a plastic package” — you may remember back then just about everything was in ceramic — “I can’t melt the PDA, and oh, by the way, this thing’s got to run off a battery”. So they chose a RISC architecture, and that’s where the ARM ISA [instruction set architecture] was born and that’s what the first chip was intended to do, and the thing wasn’t very successful. Fast forward, however: the founders and then a very, very important guy in Arm’s history, Robin Saxby, put out a goal to make the ARM ISA the global standard for CPUs. And if you go back to the early 1990s, there were a lot of CPUs out there, and also there was not an IP business, there really wasn’t a very good fabless semiconductor model, and there was not a very good set of tools to develop SoCs [systems on a chip]. So in some ways, and this is what I love about the company, it was a bit of a crazy idea because you didn’t really have all the things in place necessary to go off and do that. But back then, there were a lot of companies designing their own CPUs, if you will, and the idea there being that ultimately this would be something that customers could be able to access, acquire, and build, and then ultimately build a standard upon it. The ultimate killer design win for the company, and I know you’re a strategist and historian as well around this area, is the classic accidental example: TI was developing the baseband modem and an applications processor for the Nokia GSM phone, and they needed a microcontroller, something to kind of manage the overall process, and they stumbled across what we were doing, and we licensed them the IP. That was kind of the first killer license that got the company off the ground and that’s what really got us into mobile. People may think, “You were the heart of the smartphone and you had this premonition to design around iOS”, or, “You worked really closely in the early days of Android”, but it was accidental: we found ourselves in the Nokia GSM phone, Symbian gets ported to ARM, and then there starts to be at least enough of a buzz around nascent software. But that’s how the company was born.

I did enjoy that for the keynote, you had a bunch of different Arm devices in the run-up running on the screen, and my heart did do a little pitter-patter when the Nokia phones popped on. Another day, to be sure.

RH: Yeah, cool stuff, right?
RH: But that’s kind of how the company got off the ground. And as it was a general-purpose CPU, which meant we didn’t really have it designed for “It’s going to be good at X”, or “It’s going to be good at Y, it’s going to be good at Z”, it turned out that because it was low power, it was pretty good to run in a mobile application. I think the historic design win where the company took off was obviously the iPhone, and the precursor to the iPhone was the iPod, which was using a chipset from PortalPlayer that used the ARM7, while the Mac OS was all x86. Inside the company, it was Tony Fadell’s team arguing, “Let’s use this PortalPlayer architecture”, versus, “Do we go with Intel’s x86 and a derivative Atom”, back in the day, and once a decision was made that “We’re going to port to ARM for iOS”, that’s where the tailwind took off.

So is it making up too much history to go back and say, “The reason Arm was a joint venture to start is because people knew you needed to have an ecosystem and not be owned by any one company”, or whatever it might be? Is that being too cute about things? The reality is it was just stumbling around, barely surviving, and it just fell backwards into this?

RH: Which, by the way, every good startup that’s really been successful, that’s kind of how the formula works. You stumble around in the dark, you find something you’re good at, and then you engage with a customer and you find what ultimately is sticky, and that’s really what happened with Arm.

When you consider the changes that you’ve made at Arm, and I want to get your description of the changes that you’ve made, how many of the challenges that you faced were based on legitimate market fears about “We’re going to alienate customers” or whatever it might be, versus maybe more cultural values like “We serve everyone”, versus almost a fear like “This is just the market we’ve got, let’s hold on to it”?

RH: I think, Ben, we thought about it much more broadly. When I took over, and you and I met not long after that, there were a couple of things happening in the market in terms of a need to develop SoCs faster, a need to get to market more quickly, and we knew intuitively that no one knew how to combine 128 Arm cores together with a mesh network and have it perform better than we could, because that’s what we had to do to go off and verify the cores. So we knew that doing compute subsystems really mattered. But I came from a bit of a different belief: if you own the ISA, at the end of the day you are the platform, you are the compute platform, and it is incumbent upon you to think about how to have a closer connection between the hardware and the software. That is just table stakes. I don’t think it’s anything new, if you think about what Steve Jobs thought about with Apple and everything we’ve seen with Microsoft, with Wintel. I felt with Arm, particularly not long after I started, in 2023 and 2024, this was only getting accelerated with AI, because with AI, the models and innovation are moving way, way faster than the hardware can possibly keep up. I just felt for the company in the long term that this was a direction we had to strongly consider, because if you are the ISA and you are the platform, the chip is not the product, the system is.

That’s the thing that I was sort of driving at when I was writing about your launch.
There’s an aspect where you’ve made these big changes: you’re originally just the ISA, then you’re doing your own cores, not selling them but basically designing the cores, then you’re moving to these system-on-a-chip designs, and now you’re selling your own chips. But it feels like your portion of the overall “What is a computer?” has stayed fairly stable, actually, because “What is a computer?” is just becoming dramatically more expansive.

RH: I think that’s exactly right. Again, if you are a curator of the architecture and you are an owner of the ISA, as good as the performance-per-watt is, as interesting as the microarchitecture is, as cool as it is in terms of how you do branch prediction, the software ecosystem determines your destiny. And the software ecosystem for anyone building a platform needs to have a much closer relationship between hardware and software, simply in terms of just how fast you can bring features to market, how fast you can accelerate the ecosystem, and how you can move with the direction of travel in terms of how things are evolving.

You mentioned the big turning point or biggest design win was the iPhone way back in the day, and the way I’ve thought about Arm versus x86: you could make the case that ARM/RISC has been theoretically more efficient than CISC, and I’ve talked to Pat Gelsinger about how there was a big debate in Intel way back in the 80s about whether to switch from CISC to RISC, and he was on the side of, and won, the argument that by the time everything was ported to RISC they could have just built a faster CISC chip that made up all the difference, and that carried the day for a very long time. However, mobile required a total restart; you had to rebuild everything from scratch to deliver the power efficiency. And I guess the question is, you’ve had a similar dynamic for a long time about Arm in the data center: it’s theoretically better, you care about power efficiency, etc. Is there something now — is this an iPhone-type moment where there’s actually an opportunity for a total reset to get all the software rewritten that needs to be done? Or have companies like Amazon and Qualcomm, with whatever efforts they’ve done, paved the ground so that it’s not so stark of a change?

RH: It’s a combination of both. One of the big advantages we got with Amazon doing Graviton in 2019, and then subsequently the designs we had with Google, with Axion, and Microsoft with Cobalt, is it just really accelerated everything going on with cloud-native, and anything that moves to cloud-native has kind of started with ARM.

What do you mean by cloud-native?

RH: Cloud-native meaning these are applications that are starting from scratch to be ported to ARM. Built on a Linux distro, but not having to carry anything about running super old legacy software, or running COBOL or something of that nature on-prem, so that was a huge benefit for us in terms of the go-forward. Certainly we got a huge injection of growth when Nvidia went from the generation before Hopper (which I think was Volta or Pascal, I may be mixing up their versions), which was an x86 connect, to Grace. So when they went to Grace Hopper, then Grace Blackwell, and now Vera, the AI stack for the head node now starts to look like ARM. That helps a lot in terms of how the data center is organized, so we certainly got a benefit with that.
RH: I think for us the penny-drop moment was, and it’s probably the 2018-19 timeframe, when Red Hat had production Linux distros for ARM, and that really also accelerated things in terms of the open source community, the uploads and things that made things a lot, a lot easier from the software standpoint.

Give me the timeline of this chip. When did you make the decision to build this chip? You can tell me now, when did this start?

RH: You know, it started with a CSS, right? And we were talking to Meta about the CSS implementation.

Right. And just for listeners, CSS is where you’re basically delivering the design for a whole system-on-a-chip sort of thing.

RH: Compute subsystem, yeah, so it’s the whole system on a chip. And by the way, it’s probably 95% of the IP that sits on a chip.

What doesn’t it include?

RH: It doesn’t include the I/O, the PCIe controllers, the memory controllers, but it’s most of the IP.

And this is what undergirds — is Cobalt really the first real shipping CSS chip? Or does Graviton fall under this as well?

RH: Cobalt’s probably the first incarnation of using that. So Meta was looking at using that, and I think the discussions were taking place in the 2025 timeframe, mid-2025 timeframe. Here’s the key thing, Ben: not that long ago.

Right. Well, that was my sense, that it was not that long ago, so I’m glad to hear that confirmed.

RH: Not that long ago. Because CSS takes you a lot of the way there, so that discussion in around the 2025 timeframe was us going back and forth on “Are you licensing CSS?” versus “Could you build something for us?”, and we had been musing about “Was this the right thing for us to do from a strategy standpoint?” and how we thought about it. But ultimately it came down to Meta saying, “We really want you to do this for us, we think this is going to be the best way to accelerate time to market and give us a chip that’s performant and on the schedule that we need”, so somewhere in the 2025-ish timeframe, we agreed that, yes, we’ll do this for you.

Why did Meta want you to do it instead of them finishing it off themselves?

RH: I think they just did the ROI, in terms of, “I’ve got a lot of people working on things like MTIA, I’ve got a whole bunch of different projects internally, is it better that you do it versus we do it?”

“How much can we actually differentiate a CPU?”

RH: Yeah, and by the way, that is ultimately what it comes down to at some point in time. And the fact is that the first one that came back works, it’s going to be able to go into production, and it’s ready to go. I’m not going to say they were shocked, but we kind of knew that was going to happen, because we knew how to do this stuff and the products were highly performant and tested in the CSS. So it happened fast, is the short answer.

So if we talk about Arm crossing the Rubicon, was it actually not you selling this chip, it was when you did CSS?

RH: One could say that that was a big step. When we started talking about doing CSSs, let me step back, we made a decision to do CSSs—

Explain CSSs and that decision, because I think that’s actually quite interesting.

RH: What is a CSS? It’s a compute subsystem. It takes all of the blocks of IP that we sold individually and puts them together in a fully configured, verified, performant deliverable that we can just hand to the customer, and they can go off and complete the SoC. Some customers have told us it saves a year, some say a year-and-a-half, and this is really around the test and verification in terms of the flow.
RH: One of the examples I gave, it’s a little cheeky, but it kind of worked during the road show when we were trying to explain to investors, “What’s IP, what’s a CSS?” I said: go to the Lego store, and you’ve got a bin of Legos, yellow Legos, red Legos, blue Legos. Trying to buy all those Legos and building the Statue of Liberty is a pain. Or you can go over to the boxes where it’s the Statue of Liberty and just put those pieces together, and the Statue of Liberty is going to look beautiful. This is what the CSS was.

I just want to jump in on that, because I was actually thinking about this. The Lego block concept is a common one that’s used when talking about semiconductors, but I remember being back in business school, and this was 2010, somewhere around then, and one of the case studies that we did was actually Lego, and the case study was the thought process of Lego deciding whether or not to pursue IP licensing as opposed to sticking with their traditional model, and all these trade-offs about “We’re going to change our market”, “We’re going to lose what Lego is”, the creativity aspect, “It’s going to become these set pieces”. I just thought about that in this context, where I came down very firmly on the side of “Of course they should do this IP licensing”, but the counter was almost this sort of traditionalist argument, which is kind of true: Legos today are kind of like toys for adults to a certain extent, and you build it once, reading directions, and you think back to when you were a kid and you had all the Legos and it was just your creativity and your imagination. I’m like, “Maybe this analogy with Arm is actually more apt than it seems”. There’s a very romantic notion of IP licensing, you go out and make new things, “We got this for you”, versus, “No, we’re just giving you the whole chip”, or in the case of CSS, to your point, you could go get the Statue of Liberty, don’t even bother building it yourself.

RH: And I think I came across this in the early days. In the 1990s, I was working with ASIC design at Compaq Computer, and they were doing all their ASICs for Northbridge, Southbridge, VGA controllers, and this is when the whole chipset industry took off. And I remember one of the senior guys at Compaq explaining why you’re doing this. He said, “I’m all about differentiation, but there needs to be a difference”. And to some extent, that’s a little bit of this, right? You can spend all the time building it, but if it’s all built and you spent all this time and it’s not functionally different nor performance-different — well, if you’re playing around with Legos and you’ve got all day, that’s fine — but if you’re running a business and you’re trying to get products out quickly, then time is everything, and that’s really what CSS did. It kind of established to folks that, “My gosh, I can save a lot of time on the work I was doing that was not highly differentiated”, and in fact, in some cases, it was undifferentiated, because we could get to a solution faster in such a way that it was much more performant than what folks might be trying to get to over the last mile. So when we started talking about this to investors back in 2023 during the roadshow, their first question was, “Aren’t you going to be competing with your customers?”, and, “Isn’t this what your customers do?”, and, “Aren’t they going to be annoyed by it?”, and my answer was, “If it provides them benefit, they’ll buy it; if it does not present a benefit, they won’t buy it”, that’s it.
RH: And what we found is a lot of people are taking it, even in mobile, where what we were told was, “No, no, these are the black belts and they’re going to grind out the last mile and you can’t really add a lot of value” — we’ve done a bunch in the mobile space, too.

So with Meta, was the deal like, “Okay, we’ll do the whole thing for you, but then we get to sell to everyone”, and they’re like, “That’s fine, we don’t care, it doesn’t matter”?

RH: Yes, exactly. We said, “If we’re going to do this, how do you feel about us selling it to other customers?”, and they said, “We’re fine with that”.

When did you realize that the CPU was going to be critical to AI?

RH: Oh, I think we always thought it was. I had a cheeky little slide in the keynote about the demise of the CPU, and I had to spend a lot of time.

I mean, I don’t know, I might have talked to someone recently who I swear was pretty adamant that a lot of CPUs should be replaced with GPUs, and now they’re selling CPUs, too.

RH: I had to talk to investors and media to explain to them why a CPU was even needed. They were a little bit like, “Can’t the GPU run by itself?”, like it’s a kite that doesn’t need anything to hang on to. First off, on table stakes, obviously you need it in the data center, but particularly as AI moves into smaller form factors, physical AI, edge, you obviously have to have a CPU because you’re running a display, you have I/O, you have a human interface. It’s: how do you add accelerated AI onto the CPU? So yeah, I think we kind of always knew it was going to be there, and there was going to be continued demand for it.

Right, but there’s a difference between “everyone on the edge is going to have a CPU so we can layer on some AI capabilities”, because the edge doesn’t have the power envelope or the cost structure to support a dedicated GPU; that’s fair, that’s all correct. It’s also correct that, to your point, a GPU needs a CPU to manage its scheduling and its I/O and all those sorts of things. But what I’m asking about specifically is this: we’re going to have these agentic workflows, where everything the agent does is CPU tasks, and so it’s not just that we will continue to need CPUs, we might actually need astronomically more CPUs. Was that part of your thesis all along?

RH: I think we have instinctively thought that to be the case. And what drives that? The sheer generation of tokens: tokens by the pound, tokens by the dump truck, if you will. The more tokens the accelerators are generating, whether that’s driven by agentic input, human input, whatever the input is, the more tokens are generated, and those tokens have to be distributed. And the distribution of those tokens, how they are managed, how they are orchestrated, how they are scheduled, that is purely a CPU task. So we kind of intuitively felt that over time, as these data centers go from hundreds of megawatts to gigawatts, you are going to need, at a minimum, CPUs that have more cores, period. There was this belief that 64 cores might be enough and maybe 128 cores would be the limit; Graviton 5 is 192 cores, the Arm AGI CPU is 136. We were already starting to see core counts go up, and we started thinking about, “What’s driving all these core counts going up, is it agentic AI?” A proxy for it was just sheer tokens being generated in a larger fashion that needed to be distributed in a fast way, and layered onto that were things like Codex, where latency matters, performance matters, delivering the token at speed matters.
So I think all of that was bringing us to a place where we thought, “Yeah, you know what?”, we’re seeing this core count thing really starting to go up — we were seeing that about a year ago, Ben. So am I surprised that the CPU demand is exploding the way it is? Not really. Agentic AI, just the acceleration of how these agents have been launched, certainly is another tailwind kicker.

Which happens to line up with your mid-2025 decision that, “Maybe we should sell CPUs”.

RH: Yeah, it all kind of lines up. We were seeing that, you know what, we think that this is going to be a potentially really, really large market where not only does core count matter, but efficiency matters, because we could imagine a world where each one of these cores is running an agent or a hypervisor, and the number of cores can really, really matter in the system, which played into what we were thinking about in terms of, “Okay, we can see a path here in terms of where things are going”. So CSSs with greater than 128 cores in the implementation? Absolutely. Do I think, could I see 256? Absolutely. Could I see 512? Possibly. I think then it comes down to the memory subsystem, how you keep them fed, etc., but yeah, so short answer, about a year ago we started seeing this.

Do you think that core count is going to be most important or is it going to be performance-per-core?

RH: I think core count is going to be quite important because, again, I have a belief that each one of these cores will want to potentially run its own agent, launch a hypervisor job, launch a job that can be run independently, launch it, get the work done, go to sleep. The performance of the core is going to matter, no doubt about it, but I think the efficiency of that core is probably going to matter just as much as the performance does.

Well, the reason I ask is because you talked a lot in this presentation about the efficiency advantage, where the company was born from a battery or whatever your phrase was, and that certainly, I think, rings true, particularly in isolation. But in a large data center, if the biggest cost is the GPUs, then isn’t it more important to keep the GPUs fed? Which is basically to say, is a chip’s capability to feed GPUs actually more important on a systemic level than the chip’s efficiency on its own?

RH: I’m going to plead the fifth and say yes to both.

You’ve got to pick one!

RH: Well, what’s important? I think the design choice that Nvidia made with Vera was very important. Vera is designed to feed Rubin, it has a very specific interface, NVLink Fusion or NVLink chip-to-chip, provides a blazing fast interface, and has the right number of cores to keep that GPU fed optimally. But at the same time, is it the right configuration in a general-purpose application where you want to run an air-cooled rack in the same data hall? Think about a data hall where you might have a liquid-cooled Vera Rubin rack, but somewhere else inside the data center, you’ve got room for multiple air-cooled racks. That space that you may not have used in the past for CPU, you want to use now because of the problem statement that I just gave. So I actually think it’s a “both” world, which is why when people ask me, “Oh my gosh, aren’t you competing with Nvidia Vera, and aren’t people going to get confused?” — not particularly, I think there’s ample space for both.
So you feel like Nvidia might be selling standalone Vera racks, but that’s not necessarily what Vera was designed for, that’s what you’re designed for, and you think that’s where you’re going to be different.

RH: Yes, and I mean, if you look at what’s been announced so far from Nvidia, they announced a giant 256-CPU liquid-cooled rack, and the first implementation that we’re doing with Meta is a much smaller air-cooled rack. So very, very different right off the get-go.

But you will have a liquid-cooled option?

RH: If customers want that, we can do that too.

I think that differentiation makes sense. Well, speaking of differentiation, why ARM versus x86? Why is there an opportunity here?

RH: Performance-per-watt, period. Graviton sort of started it, and they’ve been very public about their 40% to 50%, Microsoft stated the same with Cobalt, Google stated the same with Axion, Nvidia has stated the same. Just on table stakes, 2x performance-per-watt is pretty undeniable. And that, I think, is where it starts as probably the primary value proposition.

What is x86 still better at? You can’t say legacy software; other than legacy software.

RH: Go back to the earlier part of our conversation, right? The ISA — what is the value of the ISA? It is the software that it runs, right? It is the software that it runs. So if you were to look at where x86 has a stronghold, x86 is very good at legacy on-prem software.

Ok, fine, we’ll give you legacy on-prem software. And I think part of the thesis here, to your point, is that a lot of this agentic work, it’s on Linux, it’s using containers, it’s all relatively new, it all by and large works well on ARM already. But you did have a bit in the presentation where you interviewed a guy from Meta that was about porting software. How much work still needs to be done there?

RH: There’s a delta between the porting work and the optimization work. Graviton, what Amazon will tell you, is that greater than 50% of their new deployments are ARM-based, and accelerating. And, yes, am I the CEO of Arm and do I have a biased opinion? Of course. But on a clean sheet design, if you were starting from scratch and the software porting was done and you had either cloud-native or the application space was established, or as a head node, I don’t know why you’d start with x86.

What about, why are you doing this yourselves? We did ARM versus x86, I’m sort of working my way down the chain here — actually, I did it backwards, we stuck in Vera already — but why you versus custom silicon generally? You talked about Amazon. Why do you need to do the whole thing?

RH: So let’s think about an Amazon, for example. Amazon does Graviton. Would I like Amazon to buy the Arm AGI CPU? Yes. Am I going to be heartbroken if they never buy one? No, I’m perfectly fine if they stay building what they’re building.

Are they ever going to buy one?

RH: I hope they do! But if they don’t, it’s not going to be the end of the world. SAP — SAP runs a lot of software on Amazon, they run SAP HANA on Amazon, they also have a desire to do stuff on-prem, and if they’re doing something on-prem in a smaller space and they’re looking to leverage that work, they’d love to have something that is ARM-based. Prior to us doing this product, there was no option at all, right? So that’s a very, very good example. Similar with a Cloudflare. Is Cloudflare going to do their own implementation? Likely not. Do they run on other people’s clouds? Sure, they do. Do they have an application that could be on-prem running on ARM? Absolutely.
So we think that, and I don’t want to prefetch this, Ben, but we had a lot of questions from folks like, “Amazon won’t buy from you”, “Google won’t buy from you”, “Microsoft won’t buy from you”, because you’re competing with them. And we say, well, Google builds TPUs, yet they buy a lot of Nvidia GPUs, so it’s not so binary.

That’s true. They’ll buy what their customers ask them to buy.

RH: 100%. And if we solve a problem with an implementation that theirs does not, they’ll buy it, and if we don’t, they won’t.

Just, you know, between you and me, is the only custom silicon that is truly potentially competitive Qualcomm’s, and you’re just not too worried about making them mad?

RH: This is off the record here? (laughing)

I didn’t say off the record.

RH: Qualcomm, it’s funny, I had a question at the investor conference about competing with Nvidia. And I said, you know, a month ago, no one would have asked about any Arm person competing with anybody. So it’s wonderful to have these kinds of conversations. The market is underserved and there aren’t choices. There isn’t a product from Qualcomm, there isn’t a product from MediaTek, there isn’t a product from Infineon, there just isn’t.

Is that sort of your case? If there were a bunch of options in the market, would you still be entering?

RH: We entered this because Meta asked us to, and because Meta asked us to, we did. So if I was to answer your question, would we have entered if those other four or five hypothetical guys were there? I don’t know that Meta would have asked us.

The Arm AGI CPU is being built on TSMC’s 3-nm node, which is kind of impossible to get allocation for. How’d you get allocation? If you started this in 2025, how’d you pull that off?

RH: We’re working through a back-end ASIC partner that helps secure the allocation for us.

Oh, interesting. Are you concerned about that in the long run? Like, this business blows up and actually you just can’t make enough chips?

RH: I’m probably less worried about that at the moment than I am about memory. I think that the demand for the chip is actually very, very high, Ben, and through our partner, we’re able to secure upside through TSMC, so that has not been a problem. But memory is quite challenging, and I think if there’s any limit to how big this business can get, I would say that what we provided to investors as a financial forecast is based upon the capacity we’ve secured on both memory and logic, but if there was more memory, could we sell more? Yes.

This is sort of the sweet spot though of making predictions. Everyone gets to say, “Wow, how are your predictions so accurate?”, and it’s like, “Well, it’s because I knew exactly how much I would be able to make”.

RH: Yeah, if there was more memory we’d be even more aggressive on the numbers.

How did you make the memory decisions that you did, in terms of memory bandwidth and all those sorts of pieces, particularly given the short timeline in which you made this? That wasn’t necessarily part of the CSS spec before, so how were you thinking about that?

RH: The things we kind of looked at were — we sort of started with LP versus standard DRAM.

Because Vera’s doing LP and you decided to do standard.

RH: We’re doing standard DRAM, yeah. We thought we’d be a little bit better on the cost side, which could help, and at the same time, a little bit better on the capacity side.
So it really kind of drove down to, we’re going to solve for capacity, because we thought that might matter in a more generalized application space, to give the broader width of use, which then brought us to standard DDR versus LP.

I think the reason we talked last time was in the context of you making a deal with Intel to get Arm working on 18A, and this was going to be a multi-generational partnership. What happened to that? Is that still around?

RH: It’s still around. We did a lot of work on 18A because we felt it was going to be really, really important, if someone wanted to build on Intel 18A, that the Arm IP was available. So we did our part relative to someone wanting to go build an ARM-based SoC on Intel’s process, but that unfortunately hasn’t come to pass just yet.

It’s interesting you mentioned that you’re actually not worried about TSMC capacity but you are worried about memory — I hadn’t fully thought through that being another headwind for Intel, where they could really use TSMC having insufficient capacity to help them, but if memory is the first constraint, then no one’s even getting there.

RH: First off, obviously HBM [ high bandwidth memory ] is such a capacity hog, and then people are moving from LP into HBM at the memory guys, and then compounding on it, all of the explosion of the CPU demand drives up memory demand. So it all kind of adds on to itself, which makes the memory problem pretty acute.

What exactly is in the bill of materials that you’re selling? You showed racks, but you mentioned a partnership with Super Micro, for example — if I buy a chip from Arm, what exactly am I buying? You’ve mentioned memory obviously, so what else is in that? And what are you getting from partners?

RH: Yeah, so we’ll send you a voucher code after the show, and you can place your orders. Just the SoCs. If you need to secure the memory, that’s on you; we’re not securing memory at this point in time. We did a lot of work with Super Micro, with Lenovo, with ASRock. So there’s a full 1U, 2U server blade reference architecture, so the full BOM relative to all the passives and everything you need from an interconnect standpoint is all there. There’s a full BOM which, as we mentioned in the session, physically complies with OCP standards at the rack level, and then we’ve done all the work in terms of the reference design. So we can provide the full BOM of the reference platform, memory included, but what we are selling is only the SoC.

Very nerdy question here, but how are you going to report this from an accounting perspective? Just right off the top, chips have a very different margin profile, so is this all going to be broken out? How are you thinking about that?

RH: We’ll probably do that. Today we break down licensing and royalty of the IP business; we’ll probably break out chips as a separate revenue stream.

To go back to, you did call this event Arm Everywhere, will you ever sell a smartphone chip?

RH: I don’t know, that’s a really hard question. I think we’re going to look at areas where we think we could add significant value to a market that’s underserved; that market’s pretty well served.

It’s very well served, and this agentic AI, potentially a new market, fresh software stack, makes sense to me. What risks are you worried about with this? You come across as very confident, “This is very obviously what we should do” — how does this go wrong?

RH: Most of my career has been spent actually in companies that have chips as their end business, as opposed to IP.
I’ve been at Arm 12, 13 years, and I’ve been the CEO for about four-and-a-half. I did a couple of years, actually longer, five years, at a company called Tensilica, but most of my career was either NEC Semiconductor, Texas Instruments, Nvidia. The chip business is not easy, right?

You introduce a whole different new set of characteristics. You have to introduce this term called “inventory” to your company.

RH: RMAs, inventory, customer field failures, just a whole cadre of things that’s very new for our company. There certainly is execution risk that we’ve added that has not existed before. We had a 35-year machine being built that is incredibly good at delivering world-class IP to customers — doing chips is a whole different deal. I don’t want to minimize that, but at the same time, I don’t want to communicate that that’s something we haven’t thought about deeply over the years, and we’ve got a lot of people who have done that work inside the company. A lot of my senior executive team is ex-Broadcom, ex-Marvell, ex-Nvidia, we’ve got a lot of people inside the engineering organization who have come from that world, and we’ve built up an operations team to go off and support that. So while there is risk, we’ve been taking a lot of steps inside the company to be adding the resources. We’ve been increasing our OpEx quite a bit in the quarters leading up to this, about 25% year-on-year. Investors were asking a ton of questions about, “When are we going to see why you’re adding all those people?”, and Arm Everywhere explained that. We also told investors that that’s now going to taper off because we’ve got, we think, what we need to go off and execute on all this. But I think that’s the biggest thing, Ben.

And the upside is just absolute revenue dollars, I guess absolute profit dollars.

RH: I think there’s a financial upside, certainly, in terms of financial dollars. But back to the platform, I think by being closer to the hardware and the software and the systems, we can develop even better products around IP, CSS, etc., because when you are the compute platform, it is incumbent upon you to have as close a relationship as you can with the software that’s developed on your platform.

What’s the state of the business in China these days, by the way?

RH: China still represents probably 15% of our revenue, we still have a joint venture in China, and the majority of our business is royalties — royalties is much bigger than licensing in China. We still have a lot of design wins coming in the mobile space for people doing their own SoCs, like a Xiaomi. The hyperscaler market is strong between Alibaba, ByteDance, Tencent, and then most of the robotics and EV guys are doing stuff based on ARM, whether it’s XPeng, BYD, Horizon Robotics. So our business is pretty healthy in China.

You do have the Immortalis and Mali GPUs. Are those good at AI?

RH: Yes, they can be very good. We’ve added a lot of things to our GPUs around what we call neural graphics, so this is adding essentially a convolution and vector engine that can help with AI. Right now the focus has been really more around AI in a graphics application, whether it’s around things like DLSS and things in that area, but we’ve got a lot of ingredients in those GPUs.

So we should stay tuned, sounds very interesting.
You did have one moment in the presentation that was a little weird. You were trying to say that this AI thing is definitely a real thing, but you’re like, “Well, it might be a financial bubble, but the AI is real”. Are you worried about all this money that is going into this, that you’re making a play for a piece of? Is there some consternation in that regard?

RH: No, what I was trying to indicate was, when people talk about bubbles, typically it’s either valuation bubbles or investment bubbles. The valuation bubbles, those come and go over time. The investment bubble, I’m not as worried about in the sense of, “Is there going to be real ROI on the investment being made?”. I actually worry more about the, “Can you get all the stuff required to build out all of the scale?” — we just talked about memory, there’s TSMC capacity. I think the memory will be solved, they will ultimately not be able to help themselves, they will build more capacity. I’m worried about leading edge.

TSMC will help themselves if they don’t have any challengers.

RH: Turbines, right? You’ve got companies like GE Vernova or Mitsubishi, and this is not their world of building factories well ahead to go serve an extra 5 to 10 gigawatts of power. So I think TSMC is super disciplined, and they’ve been world class at that throughout their history. Will the memory guys be able to help themselves? The numbers are now so large that even for the SanDisks of the world and storage, everything has kind of gotten bananas, and that is a concern in terms of, if just one of those key components of the supply chain blinks and decides not to invest to provide the capacity, then things kind of slow down. But the numbers, Ben, the numbers we’re talking about are numbers we’ve never seen before. $200 billion CapEx from an Amazon or $200 billion CapEx from a Google. And then you have companies like Anthropic talking about $6 billion revenue increases over a three-to-four month period, which are the size of some software companies. So we are at some very stratospheric levels in terms of spend. Would I be surprised if there was a pause in something, just as people calibrate? Yeah, I wouldn’t be surprised at all. But if I think about the 5 to 10-year trajectory, there’s no way you can say this is a bubble. If you said, “I think machines that can think as well as humans and make us more productive, that’s kind of a fad”, I don’t actually think that’s going to happen, it’s almost nonsensical.

Just to sort of go full circle: you’ve been on the edge, and now this new product gets the Arm Everywhere moniker, but it’s about being in the data center — is the edge dead? Or if not dead, are we in a fundamental shift where the most important compute is going to be in data centers? Or is there a bit where AI is real, but it actually does leave the data center and go to the edge, and that’s a bigger challenge?

RH: I think until something is invented that is different than the transformer, and we talk about some very different model as to how AI is trained and inferred, then we’re looking at a lot of compute in the data center and some level of compute on the edge. I think if you just suspend animation for a second and we say, you know what, the transformer is it, and that’s what the world looks like for the next 5 to 10 years, the edge is not going to be dead. The edge is going to have to run some level of native compute for whatever the thing has to do, and it’s going to run some AI acceleration, of course.
But is everything going to happen in your pocket? No. I mean, that’s not going to happen.

I’ve come down to that side too. I think in the fullness of time, at least for now, the thin client model looks like it’s going to be it. I guess that seems to be your case as well, because you had a big event, and it is for a data center CPU. Arm is Everywhere, but not everyone can buy it.

RH: And power efficiency was a nice-to-have in the data center, but I would say it wasn’t existential. It is now, though. And I’d say that’s another big change because, again, one of the examples I gave, if you’re 4x-ing or 5x-ing or 6x-ing the CPUs in a given data center and you don’t want to give up one ounce of GPU accelerator power, then you’re going to squeeze everywhere you can, and that, I think, is a thing that’s in our favor.

Where’s Arm in 10 years?

RH: I would like it to be thought of as one of the most important semiconductor companies on the planet. We’re not there yet, but that’s how I would like the company to be thought about.

Rene Haas, congratulations, great to talk.

RH: Thank you, Ben.

This Daily Update Interview is also available as a podcast. To receive it in your podcast player, visit Stratechery. The Daily Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly. Thanks for being a supporter, and have a great day!

0 views
Lonami Yesterday

Ditching GitHub

AI. AI AI AI. Artificial "Intelligence". Large Language Models. Well, they sure are large, I'll give them that. This isn't quite how I was hoping to write a new blog post after years of not touching the site, but I guess it's what we're going with. To make it very clear: none of the text, code, images or any other output I produce is AI-written or AI-assisted. I also refuse to acknowledge that AI is even a thing by adding a disclaimer to all my posts saying that I do not use it. But this post is titled "Ditching GitHub", so let's address that first. Millions of developers and businesses call GitHub home And that's probably not a good thing. I myself am guilty of often searching "<project> github" in DuckDuckGo many a time when I want to find open-source projects. I'll probably keep doing it, too, because that's what search engines understand. So, GitHub. According to their API, I joined the first day of 2014 after noon (seriously, did I not have anything better to do on new year's? And how is that over twelve years ago already‽). Back then, I was fairly into C# programming on Windows. It seems I felt fairly comfortable with my code already, and was willing to let other people see and use it. That was after I had been dabbling with Visual Basic Scripts, which in turn was after console batch scripting. I also tried Visual Basics before C#, but as a programming noob, with few-to-none programming terms learnt, I found the whole and quite strange ↪1 . Regardless of the language, telling the computer to do things and have it obey you was pretty cool! Even more so if those things had a visual interface. So let's show others what cool things we could pull off! During that same year, I also started using Telegram. Such a refreshing application this used to be. Hey, wouldn't it be cool if you could automate Telegram itself ? Let's search to see if other people have made something to use that from C#. Turns out TLSharp did in fact exist! The repository seems to be archived now, in favor of WTelegramClient . I tried to contribute to it. I remember being excited to have a working code generator that could be used to automatically update the types and functions that the library had to offer, based on the most recent definitions provided by Telegram (at least indirectly, via their own open-source repositories.) Unfortunately, I had some friction with the maintainer back then. Perhaps it was a misunderstanding, or I was too young, naive, or just couldn't get my point across. That didn't discourage me though ↪2 . Instead, I took it upon myself to reimplement the library. Back then, Telegram's lack of documentation on the protocol made it quite the headache (literally, and not just once) to get it working. Despite that, I persevered, and was able to slowly make progress. Fast-forward a bit ↪3 , still young and with plenty of time on my hands, one day I decided I wanted to try this whole Linux thing. But C# felt like it was mostly a Windows thing. Let's see, what other languages are there that are commonplace in Linux… " Python " huh? Looks pretty neat, let's give it a shot! Being the imaginative person I am, I obviously decided to call my new project a mix between Tele gram and Python . Thus, Telethon was born ↪4 . Ah, GitHub stars. Quite the meaningless metric, considering they can be bought, and yet… there's something about them. I can't help myself. I like internet points. 
They make me feel like there are other people out there who, just like me, have a love for the craft, and share it with this small gesture. I never intended for Telethon to become as popular as it has. I attribute its success to a mix of luck, creating it at the right time, choice of popular programming language, and lack of many other options back then. And of course, the ridiculous amount of time, care and patience I have put (and continue to put) into the project out of my own volition. Downloads are not a metric I've cared to look at much. But then came support questions. A steady growth of stars. Bug reports. Feature requests. Pull requests. Small donations! And heart-felt thank-you emails or messages. Each showing that people like it enough to spend their time on it, and some even like it enough that they want to see it become better, or take the time to show their appreciation. This… this feels nice, actually. Sure, it's not perfect. There will always be an idiot who thinks you owe them even more time ↪5 . Because the gift of open-source you've given the world is not enough. But that's okay. I've had a bit of an arc in how I've dealt with issues, from excited, to tired and quite frankly pretty rude at times (sorry! Perhaps it was burn-out?), to now where I try to first and foremost remain polite, even if my responses can feel cold or blunt. There are real human beings behind the screens. Let's not forget that. Telethon is closing-in on twelve thousand stars on GitHub ↪6 . I don't know how many are bots, or how many still use GitHub at all, but that's a really darn impressive number. cpython itself is at seventy-two thousand! We're talking the same order of magnitude here. So I am well aware that such a project makes for quite the impressive portfolio. There's no denying that. We don't have infinite time to carefully audit all dependencies we rely on, as much as we should. So clearly, bigger star number must mean better project, or something like that. To an extent, it does, even if subconsciously. Unfortunately for me, that means I can't quite fully ditch GitHub. Not only would I be contributing to link-rot, but the vast majority of projects are still hosted there. So whether I like it or not, I'm going to have to keep my account if I want to retain my access to help out other projects. And, yes. Losing that amount of stars would suck. But wow has the platform gotten worse. Barely a screen into GitHub's landing page while not logged in, there it is. The first mention of AI. Scroll a bit further, and… Your AI partner everywhere. They're not wrong. It is everywhere. AI continues to be shoved so hard in so many places . Every time I'm reading a blog post and there's even the slightest mention of AI, or someone points it out in the comments, my heart sinks a little. "Aw, I was really enjoying reading this. Too bad." ↪7 It doesn't help that I'm quite bad at picking up the tell-tale signs of AI-written text ↪8 . So it hurts even more when I find out. AI used to be a fun topic. Learning how to make self-improving genetic algorithms, or basic neural networks to recognize digits . For pity's sake, even I have written about AI before . I used to be fascinated by @carykh's YouTube videos about their Evolution Simulator . It was so cool ! And now I feel so disgusted by the current situation. Remember when I said I was proud of having a working code generator for TLSharp? Shouldn't I be happy LLMs have commoditized that aspect? No, not at all. Learning is the point . 
Tearing apart the black boxes that computers seem to be. This code thing. It's actually within your grasp with some effort. Linux itself, programming languages. They're not magic, despite some programmers being absolute wizards. You can understand it too. Now? Oh, just tell the machine what you want in prose. It will do something. Something . That's terrifying. "But there's this fun trick where you can ask the AI to be a professional engineer with many years of experience and it will produce better code!" I uh… What? Oh, is that how we're supposed to interact with them. Swaying the statistical process in a more favourable direction. Yikes. This does not inspire any confidence at all. Time and time again I see mentions of how AI-written code introduces bugs in very subtle ways. In ways that a human wouldn't, which also makes them harder to catch. I don't want to review the ridiculous amount of code that LLMs produce. I want to be the one writing the code. Writing the code is the fun part . Figuring out the solution comes before that, and along with experimentation, takes the longest. But once the code you've written behaves the way you wanted, that's the payoff. There is no joy in having a machine guess some code that may very well do something completely different the next time you prompt it the same. As others have put it very eloquently before me, LLM-written text is "a cognitive DoS". It's spam. It destroys trust. I don't want to read an amalgamation of code or answers from the collective internet. I want to know people's thoughts. So please, respect my time, or I'll make that choice myself by disengaging with the content. Embrace AI or get out -- GitHub's CEO Out we go then. If not GitHub, where to go? GitHub Pages makes it extremely easy to push some static HTML and CSS and make it available everywhere reliably, despite the overall GitHub status dropping below 90% for what feels like every day. I would need to host my website(s) somewhere else. Should I do the same with my code? I still enjoy being part of the open source community. I don't want to just shut it all down, although that's a fate others have gone through . Many projects larger than mine struggle with 'draining and demoralizing' AI slop submissions , and not just of code . I have, thankfully, been able to stay out of that for the most part. Others have not . I thought about it. Unfortunately, another common recurring theme is how often AI crawlers beat the shit out of servers, with zero respect for any sensible limits. Frankly, that's not a problem I'm interested in dealing with. I mean, why else would people feel the need to be Goofing on Meta's AI Crawler otherwise? Because what else can you do when you get 270,000 URLs being crawled in a day. Enter Codeberg . A registered non-profit association. Kord Extensions did it , Zig did it , and I'm sure many others have and will continue to do it. I obviously don't want this to end in another monopoly. There are alternatives, such as SourceHut , which I also have huge respect for. But I had to make a choice, and Codeberg was that choice. With the experience from the migration, which was quite straightforward ↪9 , jumping ship again should I need to doesn't seem as daunting anymore. Codeberg's stance on AI and Crawling is something I align with, and they take measures to defend against it. So far, I'm satisfied with my choice, and the interface feels so much snappier than GitHub's current one too! But crawling is far from the only issue I have with AI.
They will extract as much value from you as possible, whether you like it or not. They will control every bit that they can of your thoughts. Who are they? Well, the watchers: how openai, the US government, and persona built an identity surveillance machine that files reports on you to the feds . Putting aside the wonderful experience that the site's design provides (maybe I should borrow that starry background…), the contents are concerning . So I feel very validated in the fact that I've never made an attempt to use any of the services all these companies are trying to sell me. I don't want to use them even if I got paid . Please stay away, Microslop . But whether I like it or not, we are, unfortunately, very much paying for it. So Hold on to Your Hardware . Allow me to quote a part from the article: Q1 hasn't even ended and a major hard drive manufacturer has zero remaining capacity for the year So yeah. It's important to own your hardware. And I would suggest you own your code, too. Don't let them take that away from you. Now, I'm not quite at the point where I'm hosting everything I do from my own home, and I really hope it doesn't have to come to that. But there is comfort in paying for a service, such as renting a server to host this very site ↪10 , knowing that you are not the product (or, at least, whoever is offering the paid service has an incentive not to make you one.) Some people pair the move from GitHub to Codeberg with statichost.eu . But just how bad can hosting something yourself get, anyway? Judging by the amount of people that are Messing with bots , it indeed seems there are plenty of websites that want to keep LLM crawlers at bay, with a multitude of approaches like Blocking LLM crawlers, without JavaScript or the popular Anubis . If I were to self-host my forge, I would probably be Guarding My Git Forge Against AI Scrapers out of need too. Regardless of the choice, let's say we're happy with the measures in place to keep crawlers busy being fed garbage. Are we done? We're protected against slop now, right? No, because they're doing the same. To those that vibecode entire projects and don't disclaim they're done with AI: your project sucks . And it's in your browser too. Even though I think nobody wants AI in Firefox, Mozilla . Because I don't care how well your "AI" works . And No, Cloudflare's Matrix server isn't an earnest project either. If that's how well AIs can do, I remain unimpressed. I haven't even mentioned the impact all these models have on jobs either ↪11 ! Cozy projects aren't safe either. WigglyPaint also suffers from low quality slop redistribution. "LLMs enable source code laundering" and frequently make mistakes. I Am An AI Hater . That's why we see forks stripping AI out, with projects like A code editor for humanoid apes and grumpy toads as a fork of Zed. While I am really happy to see that there are more and more projects adopting policies against AI submissions , all other fronts seem to just keep getting worse. To quote more comments , AIs cause environmental harms , reinforce bias , generate racist output , cause cognitive harms , support suicides , amplify numerous problems around consent and copyright , enable fraud , disinformation , harassment and surveillance , and exploit and fire workers. Utter disrespect for community-maintained spaces. Source code laundering. Questionable ties to governments. Extreme waste of compute and finite resources. Exacerbating already-existing problems. I'm not alone thinking this .
Are we expected to use AI to keep up? This is A Horrible Conclusion . Yeah. I don't want to have to do anything with it. I hope the post at least made some sense. There are way too many citations that it's hard to tie them neatly. Who knows, maybe one day I'll be forced to work at a local bakery and code only on my free time with how things are going. 1 I get them now. Though I prefer the terseness of no- or .  ↩ 2 I like to think I'm quite pragmatic, and frankly, I've learnt to brush off a lot of things. Having thick skin has proven to be quite useful on the internet.  ↩ 3 I kept working on C# GUI programs and toyed around with making more game-y things, with Processing using Java, which also naturally lent itself to making GUI applications for Android. These aren't quite as relevant to the story though (while both Stringlate and Klooni had/have seen some success, it's not nearly as much.)  ↩ 4 My project-naming skills haven't improved.  ↩ 5 Those are the good ones. There are worse , and then there is far worse. Stay safe.  ↩ 6 And for some reason I also have 740 followers? I have no idea what that feature does.  ↩ 7 Quite ironic… If you're one of those that also closes the tab when they see AI being mentioned, thanks for sticking by. I'm using this post to vent and let it all out. It would be awkward to address the topic otherwise, though I did think about trying to do it that way.  ↩ 8 As much as I try to avoid engaging with it, I'm afraid I'll eventually be forced to learn those patterns one way or another.  ↩ 9 I chose not to use the import features to bring over everything from GitHub. I saw this as an opportunity to start clean, and it's also just easier to not have to worry about the ownership of other people's contributions to issues if they remain the sole owner at their original place in GitHub.  ↩ 10 I have other things I host here, so I find it useful to rent a VPS rather than simply paying for a static file host. Hosting browsable Git repositories seems like an entirely different beast to hosting static sites though, hence the choice of using Codeberg for code. If all commits and all files are reachable, crawlers are going to have fun with that one.  ↩ 11 Even on my current job the company has enabled automatic Copilot code-reviews for every pull request. I can't disable them, and I feel bad opening PRs knowing that I am wasting compute on pointless bot comments. It just feels like an expensive, glorified spell-checker. The company culture is fine if we ignore this detail, but it feels like I'm fighting an uphill battle, and I'm not sure I'd have much luck elsewhere…  ↩

0 views

Uptime of GitHub Pages alternatives

Many software developers feel we are at a source control inflection point. GitHub has reigned for over fifteen years, and we may be in the early days of an exodus. Developers have become increasingly disappointed with GitHub's service, features, and overall direction. Blame is often directed at the buy-out by Microsoft, the migration to Azure, Azure itself, or the intense focus on AI. Whatever the underlying reason, people are thinking about switching. This post measures the static website hosting uptime of various alternatives.

First, a little background. Many source control hosting providers support website hosting. The core concept is that you can deploy a static website with little more than a git push. The ease of use is central to this product: developers just want a static website with minimal fuss. Since they already track their source code in a git repo, it's easiest to launch a website from the same provider. Ease of use was central to my decision to host this blog on GitHub Pages. This post explores: GitHub Pages, Bitbucket Cloud, Codeberg Pages, and GitLab Pages.

All of these services provide a static website hosting free tier. I wanted to understand the reliability of these services before I migrated my content, so I created a simple test. I signed up for accounts on each service and deployed a test web page on each platform. The web pages are completely static, so they can be served from disk as-is or from a CDN cache. Finally, I created uptime monitors on UptimeRobot to detect downtime of these test pages. It's been running for almost two years. The monitoring status page is public , so you can track how well these platforms perform over time, including the status for each platform over the last ninety days.

Some quick notes about the monitoring. Checks are performed at five minute intervals, so an outage that is shorter than that duration would either not be detected or would be reported as a five minute outage.

The response timing for my test webpage on GitHub Pages was the best, with an average response time over 100ms faster than all the others. The minimum response time was 6ms, which suggests that UptimeRobot is in the same data center as GitHub Pages. My monitor detected three outages over the last 23 months. Two were 404 Not Found errors, both happening on November 27th, 2024 and lasting ten minutes each. There was also a five minute DNS-related outage. GitHub was not to blame in this instance, as I use a custom domain name and a third-party DNS provider. Focusing on the last full year, 2025, there were zero outages I could attribute to GitHub Pages. So my assessment of GitHub Pages test webpage uptime in 2025 is 100% . I was kind of surprised that GitHub Pages did so well here. Microsoft's own status report shows occasional issues with GitHub Pages. My custom monitor did not detect these. One explanation for the disagreement between these measurements is the presence of a third-party CDN. GitHub serves static assets for GitHub Pages through the Fastly CDN . I never change the test web pages, so I'm not testing the reliability of deployments. So in this instance, my custom monitor is really measuring Fastly, not any Microsoft-operated systems.

GitLab Pages was the slowest platform I tested, with average response times over 300ms slower than GitHub Pages. GitLab had one large outage of twenty-five minutes and a short five minute outage. GitLab Pages appeared to have 99.994% uptime in 2025. This "four-nines" availability is excellent and is suitable for most websites.

Bitbucket Cloud response times were middle-of-the-road.
UptimeRobot detected twenty-eight periods of downtime for the Bitbucket Cloud test webpage. Nineteen of these were connection timeouts; the rest were 500-series HTTP status codes. Over 2025, the Bitbucket Cloud test webpage availability was measured as 99.936% uptime . This "three-nines" availability is excellent and is suitable for most websites.

The Codeberg Pages test webpage had the second fastest response times, but also the worst availability, with 489 periods of downtime, the longest of them nearly reaching seventeen hours. Over 2025, the Codeberg Pages test webpage availability was measured as 98.358% . This "one-nine" uptime is below the availability targets of many websites.

GitHub Pages took the top spot in this analysis, which wasn't what I expected. Depending on your sensitivity to slow response times and availability, you may rank GitLab Pages or Bitbucket Cloud as the best alternative. It seems reasonable to measure GitLab Pages latency from other locations, as the slow response times could be an artifact of the network path between GitLab and UptimeRobot. Codeberg Pages had the worst availability and appears unsuitable for all but the most outage-tolerant of websites. If you need to use it, you could add a CDN of your own on top. Many CDNs are able to serve your websites even when the origin is down, thus hiding availability problems. This adds additional complexity, can impact privacy, and may carry extra costs.
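As a sanity check on those percentages: availability is just one minus downtime over total time. Here is a quick sketch of the arithmetic, using GitLab Pages' two detected outages (twenty-five minutes plus five minutes) over a 365-day year; the numbers come from the measurements above.

```c
#include <stdio.h>

int main(void) {
    double downtime_min = 25.0 + 5.0;       // the two detected outages
    double year_min = 365.0 * 24.0 * 60.0;  // minutes in 2025
    double availability = 100.0 * (1.0 - downtime_min / year_min);
    printf("%.3f%%\n", availability);       // prints 99.994%
    return 0;
}
```

Run the same formula backwards and Bitbucket Cloud's 99.936% corresponds to roughly five and a half hours of downtime, while Codeberg Pages' 98.358% corresponds to roughly six days.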

0 views
Kev Quirk Yesterday

A Year With The Framework 13

It's been a little over a year since I bought my Framework 13 laptop and shared my initial thoughts , so I thought it would be a good time to provide you guys with an update on what I like, and dislike, about this plucky little laptop.

Battery life and performance feel like a good place to start, since my previous laptop was an M1 MacBook Air, which I loved . But between the paltry 256GB of storage and the fact that it would inevitably be killed off artificially by Apple, I decided to jump ship, and the Framework 13 beat the competition for me . One of the things I loved about the M1 was the incredible battery life. I could work all day and still have a good chunk of battery left, and while the Framework 13 isn't quite as good as the M1, it's still excellent . I can work on this thing all day and still have around 30% of the battery left.

Similarly, the performance while using the Framework is great too. With a Ryzen 7 7840 (8 cores, 16 threads @ 5.1GHz), a whopping 64GB RAM, 2TB NVMe, and a Radeon 780 integrated GPU, it's all the computer I'll need for the foreseeable future, since I mostly browse the web, send emails, and write code . I also occasionally play Minecraft with the kids, so this is more than enough for me.

I'm currently running Ubuntu 24.04 LTS, after Fedora quickly started to frustrate me. Previously I commented on how I wasn't sure if I'd even stick with Linux, saying: "I'm not sure I have the energy to go down the Linux route again. I don't care what anyone says, Linux is not as simple as other operating systems." Since moving over to Ubuntu and getting things set up in a way that I like, I'm now very happy being back in Linux land. Ubuntu has been totally stable, and performance has been fantastic. Everything is just as snappy as it was on the M1.

The Framework 13 is not a Mac, and never will be. I'm yet to see any manufacturer hold a candle to the quality of Apple's hardware. However, it's still very good - the aluminium case is robust and showing very few signs of wear, even after a year of hard labour.

Still looking great even after a year of hard work

I opted for the orange bezel, but didn't like it originally. I've since grown to love it and no longer find it distracting. People just don't have orange laptops, so it's a little different and fun.

The orange bezel has grown on me

I also complained about the poor accuracy of the fingerprint reader, but the issue seemed to be Fedora, rather than the hardware. Since switching to Ubuntu it's been fine. The hardware switches for the mic and webcam are great. I continue to use them most days, and it's nice to know that my own laptop isn't listening, or watching, when I don't want it to.

All in all the Framework cost me just shy of £1,500 at this spec, and considering the runway I have with the hardware specs, and the incredible ability to repair this thing, I think overall it's better value than spending the same money on a MacBook Pro M4 at the time. I'm currently using around 256GB of the available storage on my NVMe. The MacBook Pro at the price I wanted came with 512GB, so I'd have been around halfway through my storage already. Considering lack of storage was a big driver for me replacing the M1 in the first place, I'm glad I went this route.

Whilst the build quality isn't as good as a Mac's, it's still excellent. I have no concerns about the longevity of this cool little laptop, and I trust it will continue to serve me for many years to come. My wife still runs a Gen 2 Lenovo X1 Carbon from 2014 (that's 12 years old) and it's still going strong.
If I can get the same kind of longevity out of this, even if I do need to (easily) replace a battery or 2 along the way, I'll be very, very happy. If you're thinking about jumping in and getting a Framework laptop, my advice would be: do it . I have zero regrets. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email , or leave a comment .

0 views
DHH 2 days ago

Basecamp becomes agent accessible

In the past 18 months, we've experimented with a ton of AI-infused features at 37signals. Fizzy had all sorts of attempts. As did Basecamp. But as Microsoft and many others have realized, it's not that easy to make something that's actually good and would be welcomed by users. So we didn't ship. In the meantime, agents have emerged as the killer app for AI. Not only are LLMs much smarter when they can check their thinking using tools, but the file system also gives them the memory implant they needed to learn between prompts. And now they can actually do stuff! So while we keep cooking on actually-useful native AI features in Basecamp, we're launching a fully agent-accessible version today. We've revamped our API, created a brand-new CLI, and wrapped it all in a skill to teach agents how best to use it all. It works remarkably well, and it's really fast too. Not only can you have your agent look through everything in Basecamp and summarize whatever you need, but it can also set up to-do lists, post message updates, chat with humans and clankers alike, upload reference files, and arrange a project schedule. Anything you can do in Basecamp, agents can now do too. This becomes extra powerful when you combine Basecamp with all the other tools you might be using that are also agent accessible. For software development, you can use the MCP from Sentry to trawl through major sources of bugs, then have the agent summarize that in a message for Basecamp. Or you can have it download, analyze, and highlight key customer complaints by giving it access to your help desk system. All this was possible in the past with APIs, hand-written integrations, and human data scientists. But it was cumbersome, slow, and expensive, so most people just didn't. A vanishingly small portion of Basecamp customers have ever directly interacted with our API. But agents? I think adoption is going to be swift. Not because everyone is going to run OpenCode, Claude Code, or Gemini CLI, but because agents are going to be incorporated into ChatGPT, Gemini, Grok, and all the other mainstream interfaces that were collectively embarrassed by OpenClaw's meteoric ascent  and popularity very quickly. There's a huge demand out there for a personal agent that can act as your private executive assistant. This is where the puck is going, and we're skating to meet it with agent accessibility across the board. Basecamp is first, Fizzy is next, and we'll hit HEY before long too. Revamped APIs, comprehensive CLIs, and the skills to use them, whatever your harness or claws look like.

0 views
Anton Zhiyanov 2 days ago

Porting Go's io package to C

Creating a subset of Go that translates to C was never my end goal. I liked writing C code with Go, but without the standard library it felt pretty limited. So, the next logical step was to port Go's stdlib to C. Of course, this isn't something I could do all at once. So I started with the standard library packages that had the fewest dependencies, and one of them was the io package. This post is about how that went.

io package • Slices • Multiple returns • Errors • Interfaces • Type assertion • Specialized readers • Copy • Wrapping up

io is one of the core Go packages. It introduces the concepts of readers and writers , which are also common in other programming languages. In Go, a reader is anything that can read some raw data (bytes) from a source into a slice, and a writer is anything that can take some raw data from a slice and write it to a destination. The package defines many other interfaces, as well as combinations of them. It also provides several functions, the most well-known being Copy, which copies all data from a source (represented by a reader) to a destination (represented by a writer). C, of course, doesn't have interfaces. But before I get into that, I had to make several other design decisions.

In general, a slice is a linear container that holds N elements of type T. Typically, a slice is a view of some underlying data. In Go, a slice consists of a pointer to a block of allocated memory, a length (the number of elements in the slice), and a capacity (the total number of elements that can fit in the backing memory before the runtime needs to re-allocate). Interfaces in the io package work with fixed-length slices (readers and writers should never append to a slice), and they only use byte slices. So, the simplest way to represent this in C could have been a bare pointer plus a length. But since I needed a general-purpose slice type, I decided to do it the Go way instead, with a bounds-checking helper to access slice elements (see the first sketch below). So far, so good.

Let's look at the reader's Read method again: it returns two values, an int and an error. C functions can only return one value, so I needed to figure out how to handle this. The classic approach would be to pass output parameters by pointer. But that doesn't compose well and looks nothing like Go. Instead, I went with a result struct: a union that can store any primitive type, as well as strings, slices, and pointers, paired with an error (the second sketch below). The Read method (let's assume it's just a regular function for now) then translates to a C function returning such a result, and the caller unpacks the value and the error from it.

For the error type itself, I went with a simple pointer to an immutable string, plus a constructor macro. I wanted to avoid heap allocations as much as possible, so I decided not to support dynamic errors. Only sentinel errors are used, and they're defined at the file level. Errors are compared by pointer identity, not by string content — just like sentinel errors in Go. An error is just a pointer, which keeps error handling cheap and straightforward.

Interfaces were the big one. In Go, an interface is a type that specifies a set of methods. Any concrete type that implements those methods satisfies the interface — no explicit declaration needed. In C, there's no such mechanism. For interfaces, I decided to use "fat" structs with function pointers. That way, Go's Reader becomes a struct in C where a pointer holds the concrete value, and each method becomes a function pointer that takes that pointer as its first argument (the third sketch below).
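Here is a minimal sketch of what that slice type and its bounds-checked accessor can look like. The names (slice_t, slice_at) are illustrative, not necessarily the ones the actual port uses:

```c
#include <stddef.h>
#include <stdint.h>
#include <assert.h>

// A Go-style byte slice: a pointer to the backing memory, a length,
// and a capacity. Type and field names are illustrative.
typedef struct {
    uint8_t *data;
    size_t len;
    size_t cap;
} slice_t;

// Bounds-checked element access, mirroring Go's runtime check.
static inline uint8_t slice_at(slice_t s, size_t i) {
    assert(i < s.len);
    return s.data[i];
}
```

Any out-of-bounds index trips the assertion instead of silently reading past the buffer, which is the behavior Go gives you for free.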
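A sketch of the error and result machinery described above, again with illustrative names; for brevity, the sentinel is defined directly rather than through a constructor macro:

```c
// An error is a pointer to an immutable message string.
typedef struct { const char *msg; } error_t;
typedef const error_t *err_t;

// Sentinel errors live at file level and are compared by pointer
// identity (err == ErrEOF), never by string content.
static const error_t eof_val = { "EOF" };
static const err_t ErrEOF = &eof_val;

// A value that can hold a primitive; the real union would carry
// more members (strings, slices, pointers).
typedef union {
    int i;
    double f;
    void *ptr;
} val_t;

// A result couples a value with an error, standing in for Go's
// multiple return values, like (n int, err error).
typedef struct {
    val_t val;
    err_t err;
} result_t;
```

A caller checks res.err before touching res.val.i — morally the same as Go's n, err := r.Read(buf).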
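And a sketch of the fat-struct reader itself:

```c
// A "fat" interface struct: the concrete value plus one function
// pointer per method. Go's Reader has a single method, Read.
typedef struct {
    void *self;                                // concrete value
    result_t (*read)(void *self, slice_t buf); // Read(p []byte) (int, error)
} reader_t;

// Functions can now accept any reader without knowing its type;
// calls dispatch dynamically through the function pointer.
static result_t read_some(reader_t r, slice_t buf) {
    return r.read(r.self, buf);
}
```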
This is less efficient than using a static method table, especially if the interface has a lot of methods, but it's simpler. So I decided it was good enough for the first version. Now functions can work with interfaces without knowing the specific implementation: calling a method on the interface just goes through the function pointer.

Go's interface is more than just a value wrapper with a method table. It also stores type information about the value it holds. Since the runtime knows the exact type inside the interface, it can try to "upgrade" the interface (for example, a plain reader) to another, richer interface using a type assertion . The last thing I wanted to do was reinvent Go's dynamic type system in C, so dropping this feature was an easy decision. There's another kind of type assertion, though — when we unwrap the interface to get the value of a specific type. And this kind of assertion is quite possible in C: all we have to do is compare function pointers. If two different types happened to share the same method implementation, this would break. In practice, each concrete type has its own methods, so the function pointer serves as a reliable type tag.

After I decided on the interface approach, porting the actual types was pretty easy. For example, LimitReader wraps a reader and stops with EOF after reading N bytes. The logic is straightforward: if there are no bytes left, return EOF. Otherwise, if the buffer is bigger than the remaining size, shorten it. Then, call the underlying reader, and decrease the remaining size. The ported C code (sketched at the end of this post) is a bit more verbose, but nothing special: the multiple return values, the interface call through the function pointer, and the slice handling are all implemented as described in previous sections.

Copy is where everything comes together. In Go, Copy allocates its buffer on the heap with make. I could take a similar approach in C — have Copy take an allocator and use it to create the buffer. But since this is just a temporary buffer that only exists during the function call, I decided stack allocation was a better choice, using a bounds-checking macro that wraps C's alloca. It moves the stack pointer and gives you a chunk of memory that's automatically freed when the function returns. People often avoid using alloca because it can cause a stack overflow, but using a bounds-checking wrapper fixes this issue. Another common concern with alloca is that it's not block-scoped — the memory stays allocated until the function exits. However, since we only allocate once, this isn't a problem. In the simplified C version of Copy (also sketched below), you can see all the parts from this post working together: a function accepting interfaces, slices passed to interface methods, a result type wrapping multiple return values, error sentinels compared by identity, and a stack-allocated buffer used for the copy.

Porting Go's io package to C meant solving a few problems: representing slices, handling multiple return values, modeling errors, and implementing interfaces using function pointers. None of this needed anything fancy — just structs, unions, functions, and some macros. The resulting C code is more verbose than Go, but it's structurally similar, easy enough to read, and this approach should work well for other Go packages too. The io package isn't very useful on its own — it mainly defines interfaces and doesn't provide concrete implementations. So, the next two packages to port followed naturally — I'll talk about those in the next post.
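Here is what a sketch of that limit reader can look like, built on the illustrative types from the earlier sketches:

```c
// A limit reader: wraps another reader and stops with EOF after
// `remaining` bytes have been read. Names are illustrative.
typedef struct {
    reader_t inner;   // the wrapped reader
    size_t remaining; // bytes left before EOF
} limit_reader_t;

static result_t limit_read(void *self, slice_t buf) {
    limit_reader_t *lr = self;
    // No bytes left: report EOF.
    if (lr->remaining == 0) {
        return (result_t){ .val = { .i = 0 }, .err = ErrEOF };
    }
    // Shorten the buffer if it's bigger than what remains.
    if (buf.len > lr->remaining) {
        buf.len = lr->remaining;
    }
    // Call the underlying reader and decrease the remaining size.
    result_t res = lr->inner.read(lr->inner.self, buf);
    if (res.err == NULL) {
        lr->remaining -= (size_t)res.val.i;
    }
    return res;
}

// Wrap the concrete value back into the reader interface.
static reader_t limit_reader(limit_reader_t *lr) {
    return (reader_t){ .self = lr, .read = limit_read };
}
```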
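And a simplified sketch of Copy, using plain alloca where the actual port uses its bounds-checking wrapper macro:

```c
#include <alloca.h>

// A writer interface, mirroring reader_t.
typedef struct {
    void *self;
    result_t (*write)(void *self, slice_t buf);
} writer_t;

// Copy drains src into dst through a temporary stack-allocated
// buffer, returning the byte count or the first error encountered.
static result_t io_copy(writer_t dst, reader_t src) {
    enum { BUF_SIZE = 4096 };
    slice_t buf = { alloca(BUF_SIZE), BUF_SIZE, BUF_SIZE };
    int written = 0;
    for (;;) {
        result_t r = src.read(src.self, buf);
        if (r.err == ErrEOF) break;   // source exhausted: success
        if (r.err != NULL) return r;  // propagate read errors
        // Write only the bytes actually read.
        slice_t chunk = { buf.data, (size_t)r.val.i, buf.cap };
        result_t w = dst.write(dst.self, chunk);
        if (w.err != NULL) return w;  // propagate write errors
        written += r.val.i;
    }
    return (result_t){ .val = { .i = written }, .err = NULL };
}
```

Like the simplified version described above, this treats EOF from the reader as a clean stop and propagates any other error from either side.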
In the meantime, if you'd like to write Go that translates to C — with no runtime and manual memory management — I invite you to try Solod. The io package is included, of course.

0 views
HeyDingus 2 days ago

ADK Climb Club is now web-friendly!

Just finished up a project that I’ve been meaning to get to for a year: bringing ADK Climb Club to the open web. We’ve had a landing page for a while, but all the info about our meetups was going out via Instagram and WhatsApp. But not everyone wants to use those apps, and I heard from them! So, I buckled down, imported all the old posts, and hooked up my auto-crossposter. Now, everything that we post to Instagram shows up on our website as a native, web-friendly blog post. And I enabled (free) email subscriptions (thanks Micro.blog!), so folks can get an email each time we share information about a meetup. Although Instagram is still our “primary” platform — that’s where our biggest audience is and where we pick up new members — I feel much better about the club being more accessible on the open web, and about people being able to stay in the loop with posts pushed out to them without having to sign up for a Meta app. If you’re a climber (or are climbing curious) and near Lake Placid, NY on a Wednesday night, you should come check us out! HeyDingus is a blog by Jarrod Blundy about technology, the great outdoors, and other musings. If you like what you see — the blog posts, shortcuts, wallpapers, scripts, or anything — please consider leaving a tip, checking out my store, or just sharing my work. Your support is much appreciated! I’m always happy to hear from you on social, or by good ol' email.

0 views
neilzone 2 days ago

Initial thoughts on the tiny XTEINK X4 ereader

What fits nicely in my hand and gives me hours of pleasure? A tiny ereader! I - like, it seems, quite a lot of people - bought an XTEINK X4 ereader. I bought an X4 because I love reading, and I was drawn to the idea of having a tiny ereader in my pocket. Instead of reaching for my phone, I hope that I will instead reach for the ereader, and enjoy some more reading. I am in the very privileged position of having the X4 as an extra / secondary ereader, which perhaps colours my view of the device, in the sense of being willing to put up with more of its quirks than if it were my only ereader. (Since someone asked me about it, perhaps because of some of the marketing photos: this is a standalone ereader. Yes, one needs to transfer books to it (see below), but it is not tied to a phone / does not require a phone to function. One can attach it, magnetically, to the back of a phone, for reasons which are not entirely obvious to me.) I had no plans to use the stock firmware, and used it only so far as to change the language to English before flashing the Free software alternative firmware, CrossPoint. (There are other firmwares for the device; I chose CrossPoint.) I did, however, note that the stock firmware does not require a user account / registration or anything like that, which I appreciated. I flashed CrossPoint using the tool at [https://xteink.dve.al]. When I tried to back up the existing firmware, I got a permissions error, so I ran a command to give my user the right permissions. With that done, I could dump the existing flash (which did indeed take about 25 minutes). I had the same error when flashing the CrossPoint firmware, so I ran the same command again, and it worked. Once I had reset the device - hold the small button at the bottom on the right edge of the X4 for a second, then press-and-hold-for-a-few-seconds the power button at the top on the right edge of the X4 - it booted into CrossPoint very quickly. The device comes with a screen protector. This is an excellent idea. It would have been even better if this had been installed in the factory, but never mind. I bought a cheap (£4) clear plastic shell to protect the back of it. It adds a bit of bulk to the device, but I’d like to protect it. I replaced the included 16GB (the manual says that it comes with a 32GB card…) XTEINK-branded microSD card as soon as I received the device, with a 128GB SanDisk card. This was mostly down to force of habit, as it would not be a particular problem for me if the microSD card in the device died. Annoying, for sure, but I could just pop in a new card and reload all my books from Calibre. The card slot is recessed, so pressing the card to remove it, and to get it back in place, was quite tricky with short fingernails. Loading books, it turns out, is a bit of a pain. I use Calibre for managing my ebook library. For my other ereaders, I load books via a cable. Somewhat annoyingly, the X4 and its microSD card do not mount as a USB-writable device. The options are Wi-Fi-based transfer, or else removing the microSD card. I have gone with the microSD card approach, despite it being a bit of a pain. In Calibre, I used the “Save to disk” / “Save only the EPUB format to disk in a single folder” option. This did - as expected - dump 500+ ebooks into a single directory, which is not ideal on the X4 with CrossPoint, given that they appear as a list, with no way to search. Press-and-hold on the side buttons does jump between full screens though (a bit like Page Up / Page Down), so it is not terrible.
Perhaps I need to treat the X4 less like a portable library, and just move onto it a small number of books that I want to have so readily available. CrossPoint seems to struggle with books with a special character (e.g. “$”) in the title; I have yet to dig into this though. I have not tried to connect it to Wi-Fi; I have no need for this. I have not found a way to turn off Wi-Fi, which is a bit annoying, as I don’t need it to be on all the time, both in terms of battery life and privacy. The reading experience is… good. Neither terrible nor amazing. What makes it good is that it is pocketable and there when I want it. The 4.3” screen is, apparently, 220 PPI. It is not as crisp/sharp as the screen on my Kobo or Tolino. A backlight would be wonderful, but I knew that it did not have one when I bought it. CrossPoint does not (currently, anyway) support dark mode - light text on a black background. I prefer dark mode when reading, but I can easily live without it on this device. There is a pull request to add dark mode to CrossPoint, but I note its disclosure: “Did you use AI tools to help write this code? YES”. The X4 can fit a surprisingly large amount of text on the small screen. But, nevertheless, it means pressing the “next page” button a lot. The buttons on the front are a bit “clicky”, but fortunately the buttons on the side are much quieter / softer. I imagine that, if I was using the front buttons to turn the page, and I was sitting next to my wife at the time, she would find it very annoying. I would. Note that the two buttons on the front are, in fact, four buttons; each button is a bit like a rocker switch, I guess, with different actions for the left and right sides. I should have worked that out sooner (or read the manual)… I am quite content with the lack of a touch screen; I much prefer pressing a button to turn a page than mimicking a “swipe” action, as I don’t have to move my hand or hold the device awkwardly. It has 128 megabytes of RAM, which feels like both loads and not much at all, at the same time. Books load more than fast enough, and page turns are rapid. It has a 650mAh battery, and although my initial experience has been fine, I wonder just how long this is going to last with Wi-Fi on the whole time (needlessly). But the X4 charges via USB-C, which is excellent, as it means that I don’t need to carry yet another cable.

0 views
Robin Moffatt 2 days ago

Interesting links - March 2026

I’ve had a huge amount of fun this month exploring quite what AI (in the form of Claude Code) can do for a data engineer. Rather than just hack around at a prompt, I took a bit more of a considered approach to it, building a harness to test out different prompts and skills. You can read my write-up here, the headline of which is, literally, that Claude Code isn’t going to replace data engineers (yet). I’ve also written up an AI Disclosure for my blog, which I’ll keep up to date as my use of AI evolves, along with a sweary rant about why you basically have to get on board with AI if you value your career.

0 views
Andre Garzia 2 days ago

Apple Just Lost Me

Apple has just lost me as a user. It will take me a while before I can fully migrate away from their devices, and I suspect I might need to keep a mac around for my work, but I will move all my personal computing to Linux and Android again. I've been an Apple user since MacOS 8. I had both a Newton MessagePad 2000 and an eMate 300. I got the original blue toilet-seat iBook G3. I was there for the developer road show introducing MacOS X. I have paid for my developer account ever since. Recently, I had a MacBook Air, iPhone 17, and iPad Mini. I'm gonna throw all of them away — not literally ofc — because of the recent slop this company has been shipping. It is death through a thousand papercuts. To summarise for yous: there are three main issues for me, and the last one happened today and is what pushed me over the threshold.

### Gatekeeper

I absolutely hate Apple's quarantine and gatekeeping of software. As a developer, I should just be able to ship software to those interested in my apps. Be aware that I don't give a flying fuck about mobile development, I'm talking about desktop apps here. I gave in to the Apple racketeering scheme and got myself a developer account from the very start. I had to *fax my card details to them*, that is how long I've had my account. Even though my software is packaged and notarised as per their requirements, they still show my users a dialog box confirming they want to run my app, something they do not do for apps installed through their walled garden. This is just friction to punish developers outside their store. I am very tired of it.

### macOS 26

That has been an absolute fiasco. Liquid Glass is completely broken from a design point of view. I have no idea how that got out of the door, and now, multiple updates in, it is still just as bad. Not only does it look ugly (and that is subjective, of course), but it is visually broken. Interfaces built with AppKit or SwiftUI that rendered perfectly are now overlapping controls and clipping stuff. They have no consistency at all in terms of icons, placement, corners... I am not a designer, I don't even care about design much, but when a bad design spreads like ink in a glass of water, poisoning my workflows, that is when I notice it.

### Age verification

My iPhone updated last night and, per UK laws, it introduced age verification. The way Apple decided to implement this is through credit card checking. First it attempted to check my Apple Wallet; it failed even though I have five cards in it and am able to use the App Store fine. Then it moved on to wanting me to manually add a card to verify myself. It failed with all five of my cards. Four were debit cards, and one was a credit card from another country, cause you know, I am an immigrant who still has accounts in my original birth place. So it failed age verification and locked me out of many features. Bear in mind, I am 45 years old. I have had an Apple account for 25 years; the age of my personal account alone should already verify my age. Credit cards are not documents. Many people don't have them. Apple don't provide any other way to verify your age because they are a stupid American company with American values in which you're just as human as your credit score. Age verification is a scam, but checking it with a credit card is even worse.

## Next steps for me

I was already done with Apple for some months now, but after that happening today, I am angry af and will speed up my plans.
I'm tired of devices that are not actually mine, of workflows that won't work without a blessing from a higher corporate authority. I'm gonna move back to Linux and Android.

> Yeah, I know Google's gonna fuck Android the same way soon, but at least with Android you tend to have more options.

For my computing needs, I purchased a [MNT Pocket Reform](https://www.crowdsupply.com/mnt/pocket-reform). It will take them a while to assemble and send it to me, but once I have it, my MacBook will become a work laptop only. All software I make already ships for Linux. I am considering getting a [Fairphone Gen 6](https://www.fairphone.com/the-fairphone-gen-6). Not sure if I will go with stock Android or their Murena /e/OS version. It depends on how the degoogled version handles my banking apps. I might need to go with stock Android. After those two, I plan to assemble a little *homelab* using a TinyMiniMicro form factor PC running Linux and, if I have the budget, a Ugreen NAS. On those machines, I want to have something to handle my photo backup and shared drive. I will probably use either Tailscale or some Cloudflare bullshit to connect them to each other. This is it, moving back towards taking control of my computing again.

0 views
Stratechery 2 days ago

Arm Launches Own CPU, Arm’s Motivation, Constraints and Systems

Arm is selling its own chips, not just licensing IP. It's a big change compared to Arm's history, but not surprising given how computing is evolving.

0 views