Latest Posts (20 found)
Unsung Today

“The cheatsheet you won’t need.”

A fun bit of storytelling on the website for the git client Retcon: I don’t have personal experience with Retcon. I definitely struggled a lot with git’s syntax over the years, and have my own cheatsheet that looks similar to this. But what I really liked about this page was the elevation of undo to be the North Star. I think it’s very, very well deserved.

To the best of my knowledge, undo in its modern form arrived in 1983 with the Apple Lisa – Byte magazine called it a “tremendous security blanket” – and then over the next decade or so blossomed into its current state: an infinite, multi-level, lightning-fast safety hatch that works pretty much everywhere, always there in the bottom-left corner of your keyboard, so second-nature you might not even realize you’re invoking it.

In early apps, before undo arrived, you had to be very careful about what you did and when you saved your work. Later on, undo worked on just one level, so you had to think a lot about how to spend it before things became irreversible. Today, undo just works. It truly became Back Space: The Next Generation.

But any user-facing “just works” hand wave means a lot of hard, invisible work by a lot of people behind the scenes. So if you’re reading this, and at some point in your career you worked on making undo better, my tip of the hat to you (and send me a message!). #errors #interface design #maintenance

Unsung Today

“That’s how floating point errors and triangle numbers solved a mystery.”

Minecraft is so complex that it’s sometimes hard to know what is a bug and what is not. Here’s the logic of the game:

1. If you fall from a height, you receive fall damage.
2. If you fall from a height but you’re in a boat, there’s no fall damage.
3. If you fall from a height and you’re in a boat, but you fall from a distance of 12, 13, 49, 51, 111, 114, 198, 202, 310 or 315 blocks, there is fall damage and you die.

The first is common in games. The second is – I believe! – a former bug that was grandfathered in as a design decision: people got used to it, started relying on it, and it became “too big to fix.” The retroactive explanation became that the boat is your shield and takes all the fall damage, which is a very Hollywood action movie way of looking at the world. So, only the third one is a bug… obviously.

But why those specific numbers? Here’s a 16-minute video by Matt Parker at Stand-up Maths that tries to answer it. It’s an interesting video because it’s lighter on discussion of bug causes, but heavier on math – and the moment you realize those numbers above are not random at all and coalesce into a nice formula is genuinely a pretty fun one.

I thought this was interesting, and a little contribution to a larger debate about how hard it is to even agree on what a bug really is (which I previously briefly talked about). #bugs #games #youtube
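The key ingredient – tiny floating point rounding errors piling up as a game loop adds small per-tick distances – is easy to reproduce outside the game. A toy sketch in Python (not Minecraft’s actual code; the per-tick step and tick count are made-up values purely for illustration):

```python
import numpy as np

# Hypothetical per-tick fall distance, in blocks, and a long fall.
tick_step = 0.05
ticks = 10_000

# Accumulate in 32-bit floats, the way a game engine might.
acc32 = np.float32(0.0)
step32 = np.float32(tick_step)
for _ in range(ticks):
    acc32 += step32

# Compare with the running total computed in double precision.
exact = tick_step * ticks
drift = abs(float(acc32) - exact)
print(f"float32 total: {float(acc32):.6f}  exact: {exact:.2f}  drift: {drift:.6f}")
```

Each addition rounds to the nearest representable 32-bit value, so the accumulated total quietly walks away from the true distance – which is exactly the kind of drift that can push a computed fall height past a damage threshold at a few specific starting heights.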

iDiallo Today

The Satisfaction of a ChatGPT Plan

#NoFollowUpNecessary

A couple of weeks back, I was arguing that when people come up with ideas, the satisfaction is in the telling, not in the building. And I was making this statement about idea sharing in general. But then, I also mentioned that people share their "ChatGPT plan" with me now. Rather than sharing the idea, they share the business plan for achieving the idea, entirely generated by AI. This resonated with several people who emailed me saying they have experienced the same thing.

So: someone they know has an idea, and rather than risk the potential humiliation of being told their idea is bad, they share it with their favorite AI. Sycophancy being the default behavior, their idea is always validated. Even if it isn't, the AI might suggest slight adjustments to the idea to make it viable. And at the end of a conversation with ChatGPT, the LLM, trying to be helpful, will always ask something along the lines of: "Do you want me to create a step-by-step plan to achieve this?"

The answer is always yes. Please tell me how I can make millions off my unique idea, and give me the details... make no mistakes. The plan that comes back is elaborate. You can even ask ChatGPT to expand on specific sections. Now, this plan is what ends up being shared.

Every single time I receive those plans and read them, I notice something funny. When I ask a question about a section, my friends have no answers. They have to go back to the AI to get one. Why don't they have an answer? Because they are reading the plan for the first time, with me. Basically, because the plan is long and elaborate, who has time to read it?

The satisfaction is in the format and complexity, not in the execution of the plan. They had an idea, ChatGPT improved it, then it built a plan and a solution for the problem. So their idea now has a solution, and the solution must be correct because AI came up with it. The problem is solved; we can file it in a cabinet. Executing it was never the issue.

I'm sure there will eventually be a psychological term for this: getting a psychological reward from watching AI come up with a plan of execution for your ideas. This isn't specific to OpenAI's ChatGPT; it's a catch-all for all generative AI in the current market. Even when I'm doing research for a blog post, I'm often caught in the "Would you like me to expand further on this?" questions that can easily lure you into a rabbit hole.

I guess AI providers are learning from social media. In social media, the goal isn't to socialize with friends and family anymore. Instead, they are trying to keep you engaged for as long as possible, to expose you to the maximum number of advertisements. With AI, the goal isn't to impart knowledge. Instead, it's to give you the satisfaction of appearing knowledgeable by keeping you engaged with an AI while they expose you to ads and spend your tokens.


AI & Alignment

Raw coding speed isn’t the bottleneck. Alignment is the bottleneck. That seems to be a zeitgeist-y theme lately. If you’re using AI to code, maybe you’re feeling it. You can code more and faster. And clearly a boatload of other developers are doing that too. But software doesn’t seem to be exploding in quantity or quality broadly. Maybe it’s a little? But if AI is 10✕ing our coding, we’re certainly not seeing software get 10✕ better. Which is maybe why Andrew Murphy is saying:

If you thought the speed of writing code was your problem – you have bigger problems. Your developers are producing PRs faster than ever. Great. Wonderful. Gold star. Someone get the confetti cannon. Now those PRs hit the review queue, and your reviewers haven’t tripled. Nobody tripled the reviewers. Nobody even thought about the reviewers, because the reviewers weren’t in the vendor’s slide deck.

Or maybe you don’t even get to the “too many PRs” problem, because nobody even knows quite what to build. Because you need team alignment to figure that out. You need research. You need stakeholder buy-in. You need a damn plan. And AI isn’t, for the most part, helping with those things. And those things are hard.

Or maybe you are just ripping PRs and your code is evolving rapidly. AI doesn’t help you know… is this the right thing to do? Is it working? Does anybody care? That probably should have been part of the plan, and again, that’s the hard part.

Maybe this is an industry-wide topic right now not just because it’s hitting the community feeling frequency just right, but because there is academic research supporting it. I can’t pretend to understand all of it, but I appreciate that it’s being looked at with mathematical rigor.

We’re also seeing tooling react to this situation. I think it’s fair to say that AI is increasing the productivity of individuals. But Maggie Appleton pulls out the classic saying: 9 women can’t have a baby in 1 month.

Faster individuals don’t make a fast company, unless they are perfectly aligned. Maggie shows off new GitHub software that is designed to acknowledge and help with alignment issues. I tend to agree that software itself can evolve to help. Just the fact that AI in “planning mode” isn’t sharing that plan with a team is weird, and an easy target to make better.

I also think getting a bunch of humans in alignment is just a thing that takes time. It should be a bottleneck. I’ll forever think of Dave’s “Slow, like brisket.” Some things become good because they are done slowly, and it’s OK if software is one of them.

Kev Quirk Yesterday

ThinkPad T480 Initial Thoughts

Since my Framework had a coffee bath, I've been using a ThinkPad T480 that I picked up from eBay for £285 ($385). This has been my main laptop for a few days now, and I have some thoughts, so I thought I'd share them, since I've read mixed reviews on these plucky little laptops - everything from:

They're the best laptops in the world, EVARRRRR!

They're overrated and overpriced - stop buying them!

My opinion is that the T480 is somewhere in the middle of these two opinions. Let's jump in...

Like I said, I paid £285 for this laptop, which was listed as "very good condition - refurbished". And I agree - the condition of the laptop is very good, especially considering it's been a corporate laptop and is 8 years old at this point. It came with a 14" 1080p screen, 16GB RAM, a Core i5-8250U CPU (4 cores, 8 threads @ 3.4GHz), a 256GB NVMe drive, and Windows 11 (which was promptly removed). I had a 1TB NVMe lying around, so I upgraded that first, and I've also bought a 32GB RAM upgrade costing an additional £70 ($95). The RAM upgrade hasn't been delivered yet, so these thoughts are based on 16GB RAM.

My T480 (yes, those stickers need to go)

This laptop has bezels for days compared to my Framework, but that's to be expected. It's an old, utilitarian laptop - that didn't stop me getting a bit of a shock when I first cracked it open though. Now I've been using it a few days, the bezels don't bother me.

I've always liked ThinkPad keyboards, and this one is no exception. It works great, and has lots of travel on the keys, which I always appreciate. The keyboard is backlit too, which I appreciate. It's not as nice as the keyboard on my Framework, but I think that's the best keyboard I've ever used, MacBook included.

I'm not a fan of the textured finish that's all over this laptop though. It's on the case, on the keyboard, the trackpad, everywhere. It's like a slightly rubberised, gritty finish. It doesn't impact the functionality of the laptop, I'm just not a big fan of it.
Honestly, I was expecting the battery to be crap on the T480, being second-hand. But I was so wrong! It came with an extended battery fitted, and on checking it over, it's only had 2 charge cycles, so it's practically brand new. The battery will last all day, no problem at all. The other day I ran it for an entire working day, and at 15:00 it still had 61% charge left, with Ubuntu reporting another 6.5 hours of use remaining. That's incredible, in my opinion.

Ubuntu runs perfectly on this - all drivers were discovered fine, and I managed to get the fingerprint reader working with just a little bit of DuckDuckGo-fu. Performance is good too. Everything feels snappy with no lag. Obviously it's not instant like on my Framework, but that thing is a powerhouse. Having said that, I could see myself using the T480 long-term without issue. I'm currently running Firefox, Spotify, Obsidian, VSCodium, and a few other bits. Here's how the Ubuntu System Monitor looks:

So I'm using about half my RAM, and between 20-40% of the CPU. I don't need to upgrade the RAM, but it's nice to have the extra overhead in case I ever do need it. I'm not much of a gamer, but the T480 will consistently run Minecraft at 40-ish FPS, which is fine, and honestly better than I expected.

Overall I think the T480 was good value for money. It's in really good condition, performs well, and is almost as repairable as my Framework. I think this laptop still has years of life left in it, so will it sit in a drawer once the Framework is repaired? No, that would be a waste of both money and a perfectly good laptop. My wife is currently using a 2014 X1 Carbon that I used for many years before switching to the MacBook M1 Air. The X1 is still going strong, but it's starting to struggle in its old age. Not to mention that my wife is still running Windows 10 on it!
So once the Framework is repaired, I'll be giving this laptop to my wife, where it should continue to provide solid service for years to come, all while being a nice upgrade for her. The X1 will get the latest version of Ubuntu installed on it, and will be put out to pasture as the spare laptop for the household.

If you're on the fence about picking up a T480, I'd say go for it. While they're no powerhouses, and won't win any beauty awards, they're solid workhorses that still have many years of service left in them. I'm very happy with my purchase.

Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

ava's blog Yesterday

why i don't write the usual privacy stuff

When you search for privacy/data protection stuff, what you will usually come across are things like privacy guides, the privacy subreddit, and tech-y privacy blogs and YouTube channels. They give you great advice and overviews of different kinds of alternative services or additional software you can use to protect yourself, and they rank them, rate them, give additional context, and keep up with them in case anything changes. It's this stuff that initially got me interested in privacy, and I wouldn't know a lot of services if it weren't for their work. I love that I can just refer people to those if they have any questions about specific alternatives, and they deserve their space in the privacy sphere.

Anyway, this type of privacy material tends to do well online: it's easy to read, it gives you actionable steps to take, and it immediately presents a solution. It says: You're still using Google services? Switch to the Proton Suite. You hate ads? Here are ad-blockers that also block trackers and popups and more. You "just" need to switch, or install more, and you're good. Crisis averted, you're safe/r. Meanwhile, more dry, theoretical, law-based stuff is harder to engage with and harder to write.

The reason why I am not really interested in writing about privacy or data protection in the product-focused way isn't only that I am a law student and therefore more interested in law; it's that I prefer to talk more about why something is a problem (or a bad service). I want to give people the tools to spot it, and a legal justification for the bad gut feeling they have, and I don't want to end up just advertising products. The usual type of privacy content isn't always great at educating people on what the problem even is. "This service is bad, this service is good (or at least better)" is easy to believe at face value, especially when one is a big company and the other is smaller - but why is this bad, and why is this good?
Okay, so one does more tracking and one does less tracking, but why is tracking bad? What stops this other service from also becoming "bad"? Nothing is really safe from enshittification, or bankruptcy, or losing its maintainer, or being steered by investors and existing under capitalism for profit. I'd feel bad having the majority of my posts in my area of interest do the work of the sales department for these services, just for them to become another thing to move away from in a couple of years.

That is the downside of this sort of approach: you can install and switch all you want, but in the end, it puts a lot of responsibility onto the consumer and involves them in the never-ending arms race of avoiding something - whether that is not supporting an unethical company, avoiding AI implementation, avoiding ads, avoiding trackers, avoiding becoming training data, etc. - as both sides seek new loopholes and ways to get you to either comply and be subject to it anyway, or continue to be able to avoid it via another service or software. It's an unfair fight, where one side heavily depends on smaller companies or FOSS maintainers, and the other side is billion-dollar companies that hold monopolies on many things and have a huge influence on the most powerful government(s) of the world.

Consumer choices are good, and you should use yours to no longer support what doesn't align with your values, but they aren't everything, especially as the companies make it harder and harder for consumers to have this choice, or for that choice to even make a dent in their finances. That's where we need laws and consumer protections to hold them accountable and grant users who rely on these services better rights - even rights making migrating off of them easier, like the data portability aspect mandated by the GDPR.
Indulging in the above sort of privacy content a lot can make you feel like you're outsmarting the Big Guys and have it all under control, while only the "normies" who are just "too lazy to switch!11!" struggle - but to me, that is a flimsy house of cards that can easily collapse. I say that while I too use these things - I am a Linux user, I have several browser extensions to reduce tracking and ads, I use forks like LibreWolf, I am a Proton user, I use a VPN, Signal, Matrix, etc. - but I just want to be realistic about it and recognize that it takes just a little here and there for my products and services to vanish or get significantly worse, and I don't want to foster a false sense of security. If you're like me and a millennial or older, you probably still remember all the past mass migrations between services.

I also recognize how many people are left behind by this approach, or how it at least makes them rely on people around them who are knowledgeable in this stuff. In private, you have a choice, but you might be limited by your knowledge/awareness of alternatives, your understanding of tech, the complexity of the task, the network effect, or how willing the people around you or online are to help. Switching can be hard; transitioning cloud contents or mail providers, and remembering to change your email address everywhere or at least implement a forwarding rule on the old one(s), can be a task that spans days or weeks next to all the other responsibilities you have. Then every now and then, you might wanna check in to see if your solution is still "good" or whether something changed. That's a lot more labor than just staying where you're at and where the majority is. Maybe you are the one to install a Linux distro for your grandparent, or an adblocker for your parents, and then you're on the hook when things break and have to take the time to sort it out, and they rely on your skills and time until their device is functional again.
LibreWolf, for example, has broken many payment transactions for me in the past. At work, or in school or university, you probably don't have a choice at all. They force you into Microsoft and Google products, or at least don't present alternative solutions in their setup guides. My work, for example, provides an MFA setup guide that only mentions Google Authenticator, even though any authenticator app would work. All of that is not ideal.

Putting too much emphasis on switching one product out for another can sometimes produce this vibe of "If you're still using that proven-to-be-awful service, you consent to being exploited and tracked, and it's your fault for staying" among privacy-interested people. But we can't let that run unchecked until it basically means that you can't expect better from platforms and that users deserve whatever is coming their way. Unless the laws make distinctions between company sizes, they apply to your sacred privacy-conscious competitor as well and might help prevent it from turning out "bad". I also think you'd want your friend, who cannot bring themselves to switch or delete a service, to still have at least some protections here and there, instead of pointing and laughing from your moral high ground.

Your child deserves protections when they have to use Microsoft products on their school tablet or when they install TikTok to engage with their friends. They deserve to migrate as easily as possible. They deserve permanent deletion of their content. They deserve to not have their likeness uploaded to the platform and used for advertising and AI deepfakes without their consent. They deserve to not be targeted by advertisers and political groups via the algorithm that attempts to radicalize them. They deserve not to have all their private data, and especially location data, leaked or sold, or their DMs and art used as training data without consent, and so on. Even if they could switch/abstain and just don't.
Switching from one service to another when both have the same profit goal and exist under the same system feels like, and often is, a temporary bandaid. I don't wanna be a bandaid seller. I don't care about product names; I care about mechanisms, cash flow, dark patterns, and settings options. I talk more about why things happen the way they do, and make people aware that yes, this thing bothering you is very much illegal or should be handled differently. I write about what the root cause is (usually: the attention economy, the data brokerage business model, etc.), and discuss (potential or actual) laws and other ways in which the root cause is contained, redirected or partially mitigated. We are also constantly hit with attempts by the US government to weaken and dissolve our EU consumer protections, and that deserves more attention. I find that more productive and fitting to me/my style than being another "50 privacy-focused services to consider" in a thousand, forced to make clickbait like "Is this service still safe in 2026???".

Reply via email

Published 25 Apr, 2026

iDiallo Yesterday

What Do You Charge For?

I've written before about my journey to learn how to charge a fair price for building a website. But even after landing on a strategy, there is still a question that remains unanswered: what should I charge for? Are you charging the price of the product itself? As in, the bare cost of building a website? Or are you charging enough to make a living? This question applies to any field, whether you are a consultant, a mechanic, or a private chauffeur.

I once worked with a company that built websites for non-profits. Their price tag? $35,000 for a standard WordPress site. Lucky for me, I got a first-hand view of their price breakdown, because they were trying to expand their reach to smaller customers. They needed to figure out how to lower the price, so they invited me to the meeting.

Every single person in the room was involved in building the website. The standard time frame to complete one was 6 weeks. So the manager named each person, their title, and how much time they spent on the project. There were the designers, the copywriters, and the consultants who gathered the information. There were the salespeople who started the process, and the two developers, myself included. Everyone at the table was indispensable. Then he gave a ballpark estimate of salaries using Glassdoor standards, and the price jumped to $35k. It was completely fair.

"What if we have Ibrahim as the sole developer on this tier?" the director asked. "And we use only one designer, and we can reuse copy." The manager crunched the numbers, and we were still going to charge $25k. "What if I don't get involved at all in this tier?" The manager removed the director's name from the list. He contributed only a couple of hours of work, yet the number went down to $22k.

I originally thought $35k was an astronomical amount for a website, but their breakdown showed it didn't even include profit. The salary costs alone ate up the budget.
The actual profit for the company came later, from managing the marketing campaign. This is cost-plus pricing: you add up what it costs to make the thing, and that becomes the price. It feels logical, but it relies entirely on your costs, not the value you provide.

But then, there is another way: market-based pricing. Take a car, for example. A vehicle costs $35k because that is what the market is willing to pay for that specific make and model. The materials and labor to build the car might be significantly cheaper, or on occasion even more expensive (Rivian), than the sticker price. The price is dictated by the buyer's perceived value, not just the manufacturer's receipt. This method became clearer to me after I started consulting. When I would get a new client, I initially tried to price based on the old model of calculating what I thought my time was worth from a salaried perspective. I later found that the recruiting company I worked with was charging clients $78 per hour for my services, while paying me $40. The market (or the recruiter's markup) was valuing my time at nearly double what I was charging myself.

You know the mechanic is gonna charge you extra for that flat

Then, there is the wild-card method. I've been the unlucky guy who finds himself out of town with a flat tire. I stop at the first tire shop I can find, and the worker doesn't size up my car; he sizes me up. He decides how much to charge based on how desperate I look. In those misadventures, the price has ranged anywhere from $20 to $150. I'm usually in no position to argue when I'm stranded on the side of the road. But how do they decide on those numbers? Are they making a profit? Or are they just charging whatever they think fills their quota for the day? This is opportunistic pricing: highly effective for a quick buck, but I don't think you can build trust like that.

All these methods for charging have their pros and cons.
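The cost-plus arithmetic from that agency meeting is simple enough to sketch. A minimal Python illustration - every role, hourly rate, and hour count below is a made-up stand-in, not one of the agency's real figures. Deleting a person from the dict is exactly the "what if I'm not involved in this tier?" exercise from the meeting:

```python
# Hedged sketch of cost-plus pricing: price = sum of labor costs (+ margin).
# All roles, rates, and hours are invented illustration values.
team = {
    "designer":   {"rate": 55, "hours": 80},
    "copywriter": {"rate": 45, "hours": 40},
    "consultant": {"rate": 70, "hours": 60},
    "developer":  {"rate": 65, "hours": 160},
}

def cost_plus_price(team, margin=0.0):
    """Total labor cost across every role, plus an optional profit margin."""
    labor = sum(member["rate"] * member["hours"] for member in team.values())
    return labor * (1 + margin)

print(cost_plus_price(team))              # labor cost only, no profit
print(cost_plus_price(team, margin=0.2))  # same costs with a 20% margin on top
```

Market-based and opportunistic pricing don't reduce to a formula like this - the price comes from the buyer, not the spreadsheet - which is rather the point of the contrast.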
My goal isn't to tell you which number to pick, but to encourage you to decide how you pick that number. My advice, in the simplest terms, is this: be consistent. Once you choose a method, it becomes your standard. Do not deviate. If you charge based on value today, but switch to charging based on your mood tomorrow, your clients will never trust your pricing. They will always wonder if they are getting the "real" price or just the price you felt like charging that morning. They will start looking for other consultants. Pick the method that works for you, stick to it, and let your clients know exactly where they stand.

Personally, I apply value-based pricing with my clients, where the cost is tied to their specific needs and the time required to meet them. It's a method that requires trust and communication, but it can be the most fair and profitable for both parties when applied consistently. When they end up with an obscene bill, at the very least they are prepared.


Sidelining Safari

It was bound to happen. For months, I’ve done my best to prevent this, but eventually, my patience and tolerance weren’t enough. Here I am, writing a post about how I finally decided to ditch Safari as my main browser and replace it with third-party options.

This change was a slow process somehow — spanning a couple of weeks or so — but the gravitational pull of better options was very difficult to escape once I upgraded my work computer to Tahoe, and got to witness Liquid Glass, the mess of it all, and how right most critics were. Safari on Tahoe works fine, I guess, but so many little things feel wrong (it’s a theme with Tahoe and Liquid Glass). For example, I can’t tell at first glance which tab is active, despite the enormous amount of screen real estate occupied by the address, tab, and bookmark bars. Meanwhile, the Safari extension situation is as frustrating as always, and, in 2026, it is still impossible to use the search engine of your choice without requiring an extension that simply redirects search queries.

For years, since the first version of Safari for Windows, I have been a loyal, if intermittent, user of Safari. Even today, in a work environment made of Google Workspace, Google Meet, Slack and others, I’ve resisted using the other usual suspects, the Blink-based browsers like Chrome, Brave, Vivaldi, Edge, &c. I’ve dipped my toes in the water a few times, yes, but Safari remained my first choice. Habits, a soft spot, call it whatever you want, but to me Safari was always the obvious, the default Mac browser, despite its flaws.

Earlier this year, I gave Helium Browser a try: a newish, smartly named, Chromium-based browser, aimed at being light, fast, and stripped of all Google surveillance technologies. The trial was a success, and, after switching back to Safari for a fair fight, I realised that Helium was the most efficient browser to use for work.
But the more I used Helium, the more I realised how much better it was than Safari, even the superior Sonoma version that runs on my personal computer. Helium is well-designed, its set of features is exactly right for me, and, being a Chromium-based browser, it works with my web-related BBEdit scripts. *1 It was just a matter of time before I admitted that sticking with Safari was no longer the best option, even for my personal use.

My current JavaScript-off-by-default approach to web browsing surely didn’t help Safari’s case. Indeed, I was starting to get tired of opening private windows to reload tabs with JavaScript “turned back on” for sites requiring it. *2 It was fine until I realised how much more convenient the same JS-off system was to implement on Helium using uBlock Origin (an extension that comes with the browser). On Helium, this is how it works: JavaScript is turned off by default via uBlock Origin. When a site requires JS, I activate it temporarily for that site via uBlock Origin, and JS stays on, only in that tab, until I close it. For sites where I want JS on all the time, I can “lock” that setting, and I don’t have to think about it again, or go into the browser’s settings, navigate to the list of sites where the extension is allowed or not, and so on. Quicker and easier than my Safari system.

Another perk of not using Safari on my Mac — and therefore not being able to sync my favourites, history, and open tabs with my phone any more — is that I don’t have to stick to Safari on the iPhone either. I can now finally use the great Quiche Browser without feeling like I am missing out on the cross-device comfort I experienced with both instances of Safari. And you know what is great about Quiche Browser? You guessed it, I can add a handy JS on/off toggle to the toolbar.
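For the curious, uBlock Origin stores its per-site JavaScript switch as plain-text rules, visible in the dashboard’s “My rules” pane. A sketch of what the setup described above might look like — the hostnames are hypothetical, and I’m going from uBlock’s documented `no-scripting` switch syntax, so treat the exact lines as an assumption rather than a copy of anyone’s config:

```
no-scripting: example.com false
no-scripting: docs.example.org false
```

Here `false` means the no-scripting switch is off for that site — i.e. a per-site exception to the global “Disable JavaScript” checkbox in uBlock’s settings, which is what keeps JS off everywhere else.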
With Safari, and the way it makes extensions like StopTheScript work on iOS, the private-window or quick-access-to-settings workarounds I had on the Mac weren’t manageable, making it pretty much impossible to browse the web with JavaScript turned off by default on the iPhone. *3

So what’s the catch with Helium? I am surprised to say that performance doesn’t seem to be an issue on my early 2020 MacBook Air, at least for now. It may run a little warmer than usual, yes, but I was expecting to hear the fan way more often than I do. Video streaming doesn’t appear to be easy on the CPU and/or memory, but it wasn’t great on Safari either. In fact, Kagi’s Orion — a WebKit-based browser — is seemingly worse than Helium on my computer when it comes to the vacuum cleaner sound effect.

The main and only catch I can see so far is everything password-related. I use Apple Passwords, and I could solve 95% of my problems with the iCloud Passwords extension, but I want to use Helium with the services disabled, which prevents it from installing extensions. The Apple Passwords shortcut that lives in the Mac menu bar helps, but is not ideal.

When I look at modern browsers like Helium or Orion on the Mac, and Quiche Browser on the iPhone, I can see a widening gap between those and Safari. These browsers — made by very small teams — are surprisingly good. Not sure I can say that about Safari any more. Using these apps, you can tell the developers behind them care about the product. How many people work on Safari at Apple? Are some members of the Safari team looking at this new generation of browsers? I hope they do, I hope they care. I hope one day they will give me good reasons to switch back to Safari. This is one thing I expect from Apple at WWDC. In the meantime, I’ll let you know how my honeymoon with Helium goes, or if I get sentimental and reunite with Safari sooner than expected.

*1 I wish Firefox and other Firefox-based web browsers would work with AppleScript.
^ I wish Firefox and other Firefox-based web browsers would work with AppleScript.
^ The extension StopTheScript is disabled by default on private windows, which is the quickest way to recreate a JS on/off toggle of sorts.
^ How frustrating is it on the iPhone to access Safari extension settings? Go to the Settings app, scroll all the way down to Apps, scroll all the way down again to Safari, scroll until you find Extensions, click on the extension, and then you have the per-site settings. Madness.

0 views

New Thinkpad Means Back to Mac OS

On Wednesday I picked up a new (to me) Thinkpad P14s Gen 4. I was excited to finally get off my System76 Pang12, a computer that works, but has a long list of hardware and reliability issues. Thinkpad in hand, I installed Ubuntu 25.10 and immediately put it to work with a night of trimming down my client request backlog. The computer was incredible! Amazing keyboard, vastly better trackpad, perfect 14” form factor and everything worked out of the box on Ubuntu. Heck, it even had a usable webcam! But like a majority of things in my life, something always goes wrong. I knew it was too perfect, and wondered what I was going to find that ruined the joy. How about complete system crashes when you plug/unplug the system? Yep, that’ll do it. I spent all of yesterday and this morning debugging. Multiple distros, a long list of kernel params, different chargers and tweaking bios settings. Nada. About 50% of the time when you unplug, Gnome will slowly start to lock up, then the system restarts. Looking at logs it’s caused by a . At first I thought it might be related to the WiFi chips (based on pre-crash logs). Disabled via bios and still crashes. I’ve tested RAM, SSD and battery, all good. I have a new battery coming Monday just in case, but fully expect it won’t help. I’m out $500 USD, and honestly, I’m done with Linux for now. I love Gnome and Fedora+Ubuntu, but it’ll be a few years before I buy a new laptop after throwing away money on the Thinkpad (and the Pang12 2 years ago). Back to Mac OS Tahoe it is. Liquid ass and all. I’m hopeful that the Thinkpad problems are just on Linux. My wife has been wanting a laptop and she’s not ready to jump off Windows, making it the perfect computer for her.

0 views

s/sed/ed

Read on the website: ed is a stupid simple text editor. sed is a nice streaming text processing tool. Why would one even want to use ed for anything, let alone for text processing if there's sed?
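To make the contrast concrete, here is a rough sketch of the same substitution done both ways (notes.txt and the teh/the typo are hypothetical examples, not from the original post):

```shell
# sed is a stream editor: it reads the file, writes the edited text to
# stdout, and leaves the original file untouched.
sed 's/teh/the/g' notes.txt > notes.fixed

# ed edits the buffer and can write the file back in place itself;
# -s suppresses its diagnostics, 'w' writes, 'q' quits.
printf '%s\n' 'g/teh/s//the/g' 'w' 'q' | ed -s notes.txt
```

The ed version needs no temp file, which is one pragmatic answer to "why ed when there's sed."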

0 views
Stratechery Yesterday

2026.17: He Came, He Saw, He Cooked

Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone . Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings . On that note, here were a few of our favorites this week. This week’s Stratechery video is on Mythos, Muse, and the Opportunity Cost of Compute .

The End of the Tim Cook Era. My son, who is old enough to be on a multi-day school trip to Washington D.C., messaged me in shock that Tim Cook would be stepping down as CEO of Apple this September: that, more than anything, made me realize just how long we have been in the Tim Cook era. He was Apple’s CEO longer than my son has been alive, and a year longer than Steve Jobs. That, needless to say, is worth reflection. — Ben Thompson

On Stratechery, I wrote about Cook’s Impeccable Timing and, in an Update , why John Ternus makes sense as the next CEO
On Sharp Text, Andrew wrote a fantastic reflection on how Cook’s competence was both correct and boring, and representative of the overall maturation of the tech industry.
On Dithering, John and I published our instant reactions on Tuesday , and additional reflections on Friday .

Can Cursor and SpaceX Join the Model Wars? When I first heard the news that SpaceX was partnering with Cursor (with an option to buy Cursor outright for $60 billion), my first reaction was to throw up my hands at the logic and broader plan. Forget it Jake, it’s Elontown , etc. That noted, I loved it when Ben’s Daily Update on Wednesday explained why, in theory, there is an obvious synergy between Cursor and SpaceX . Furthermore, I’m reminded that more AI competition would be a good thing, and for that reason alone I’m rooting for a deal like this to work.
We went deeper on the topic during the second segment of Friday’s Sharp Tech, including bear and bull cases, and an attempt to nail down SpaceX’s core business as the company prepares to IPO and seeks a $1.75 trillion valuation. — Andrew Sharp

The Various Fronts of Cold War 2.0. Most of our shows cover lots of ground, but this week’s episode of Sharp China was especially dense with updates and takes . The big news is that Xi is now publicly calling for the re-opening of the Strait of Hormuz, while several reports indicate China may be providing weapons to the IRGC in the interim. Elsewhere, Beijing passed new laws to crack down on decoupling (Bill says these laws have interested parties “freaked out”), while the U.S. is considering legislation that would close global loopholes on the sale of advanced semiconductor manufacturing equipment to China. My favorite part, though, was a segment on a cake controversy, a physical altercation between Pinduoduo staff and Shanghai regulators, and Xinhua reporting that provides a fascinating look at how the Chinese economy works in 2026. — AS

TSMC Earnings, New N3 Fabs, The Nvidia Ramp — TSMC’s earnings suggest that the company’s leadership is not truly bought into the AI growth story.
Tim Cook’s Impeccable Timing — Tim Cook had an extraordinary run — and impeccable timing, both in terms of when he became CEO, and when he is stepping down.
John Ternus and Apple’s Hardware-Defined Future, SpaceXAI and Cursor — The elevation of John Ternus suggests that Apple’s future is about hardware differentiation; then, the SpaceX-Cursor deal makes a lot of sense.
An Interview with Google Cloud CEO Thomas Kurian About the Agentic Moment — An interview with Google Cloud CEO Thomas Kurian about Google’s cloud priorities, enterprise agent platform, and Google’s integration advantage.
Tim Cook Personified Big Tech’s Maturity — For better and worse, Tim Cook’s Apple epitomized an era in which big tech companies grew up, took fewer risks, and took over the world.
Tim Cook Steps Down
How Tim Cook Changed Apple
Itanium: Intel’s Great Successor
South Korea Defied the Gods to Build its Steel Colossus
Xi Wants the Strait of Hormuz Re-Opened; Cakes and An E-Commerce Crackdown; The Next Stage of Decoupling; The MATCH Act in Congress
Play-In Chaos and Knueppel Slippage, Anyone But the Thunder, Title Picks and Awards
Resolution Panic Rankings: Pistons Picking Up the Pieces, Rockets on the Ropes, Blazers Pinching Pennies, and More from the NBA Playoffs
Tim Cook’s Exit and What Comes Next, A SpaceX Deal with Cursor, Q&A on Vibe Coding, TSMC, WhatsApp

0 views

Premium: How OpenAI Kills Oracle

Soundtrack — Brass Against — Karma Police   It was January 21, 2025. Per The Information , Larry Ellison, CTO of Oracle, had just flown to Washington DC from Florida, and had to borrow a coat “...so he wouldn’t freeze during an interview he did on the White House lawn, according to two people who were involved in the event.” He was there to announce a very big — some might even say huge — new project standing next to SoftBank CEO Masayoshi Son and OpenAI CEO Sam Altman. “Together, these world-leading technology giants are announcing the formation of Stargate, so put that name down in your books, because I think you’re gonna hear a lot about it in the future. A new American company that will invest $500 billion at least in AI infrastructure in the United States and very, very quickly, moving very rapidly, creating over 100,000 American jobs almost immediately,” said President Donald Trump . After he was done, Ellison stepped to the podium. “The data centers are actually under construction, the first of them are under construction in Texas. Each building’s a half a million square feet, there are ten buildings currently being built, but that will expand to 20.” Following Ellison, SoftBank’s Masayoshi Son added that Stargate would “...immediately start deploying $100 billion dollars, with the goal of making $500 billion dollars within [the] next four years, within your town!” turning to Donald Trump with his hands extended. It was unclear what town he was referring to. Altman added that it would be “an exciting project” and that “...we’ll be able to do all the wonderful things that these guys talked about, but the fact that we get to do this in the United States is I think wonderful,” though it’s unclear what “the wonderful things” or “this” refers to. It’s been 15 months, and Stargate LLC has never been formed. 
SoftBank and OpenAI have contributed no capital to the project, other than SoftBank’s own acquisition of a former electric vehicle manufacturing plant in Lordstown, Ohio that it intends to turn into a data center parts manufacturing plant with Foxconn, which is best known for effectively abandoning a $10 billion factory in Wisconsin back in 2021 . Oh, and Project Freebird, a SoftBank-built project that exists to funnel money to its subsidiary SB Energy , though I can’t imagine how SoftBank actually funds it. No government money was ever involved, no funding ever left anyone’s bank account, no "initiative" ever existed, and OpenAI, Oracle and SoftBank have, in my opinion, conspired to mislead the general public about the existence and validity of a project for marketing purposes.  The “data centers actually under construction” referred to a 1.2GW project in Abilene Texas that had been under construction since the middle of 2024 , and had originally been earmarked by Elon Musk and xAI, except Musk pulled out because he felt that Oracle was moving too slow . While Ellison said that there were ten buildings under construction with plans to expand to twenty, only eight were actually being built ( each holding around 50,000 GB200 GPUs across NVL72 racks ), with the extension up in the air until March 2026, when Microsoft agreed to lease 700MW — so another seven buildings — that were meant to go to OpenAI. These buildings will not make Oracle any money, as Oracle is, despite spending so much money, leasing whatever land it uses from Crusoe. As far as those eight buildings go, only two are actually online and generating revenue, though sources with direct knowledge of Oracle’s infrastructure have informed me that work is still being done on both buildings despite CNBC reporting that they were “ operational ” in September 2025.  Let’s break this down. 
Based on a presentation by landowner Lancium from May 2025 , the Stargate Abilene campus was meant to have 1.2GW of AI data centers online by year-end 2025. Based on reporting from DatacenterDynamics, the first 200MW of power was meant to be energized “ in 2025 .” As time dragged on, occupancy was meant to begin in the first half of 2025 , had “ potential to reach 1GW by 2025 ,” complete all 1.2GW of capacity by mid-2026 , be energized by mid-2026 , have 64,000 GPUs by the end of 2026 , as of September 30, 2025 had “ two buildings live ,” and as of December 12, 2025, Oracle co-CEO Clay Magouyurk said that Abilene was “on track” with “more than 96,000 NVIDIA Grace Blackwell GB200 delivered,” otherwise known as two buildings’ worth of GPUs.  Four months later on April 22, 2026, Oracle tweeted that “...in Abilene, 200MW is already operational, and delivery of the eight-building campus remains on schedule.” It is unclear if that’s 200MW of critical IT capacity or the total available power at the Abilene campus, and in any case, this is only enough power for two buildings, which means that Oracle is most decidedly not “on schedule.”  Sources familiar with Oracle infrastructure have confirmed that while construction has finished on building three, barely any actual tech has been installed. It also appears that while construction has begun on a power plant of some sort, it’s unclear whether it’s the 360.5MW gas power plant or 1GW substation. In any case, Abilene needs both to turn on the GPUs, if they ever get installed. Abilene is, for the most part, the only part of the Stargate project that’s anywhere near complete. I say that because the other data centers — Shackelford, Texas, Port Washington, Wisconsin, Doña Ana County, New Mexico, Saline, Michigan, and Milam County, Texas — are patches of land with a few steel beams, if that . To be explicit, every single Stargate data center is funded by Oracle and its respective financial backers. 
Oracle is taking on a massive amount of debt to build these data centers, working with a labyrinthine network of financiers and construction partners to pull together the capacity necessary to get paid for its five-year-long $300 billion compute deal with OpenAI .  Oracle has also, per Bloomberg , deliberately raised money using “ project financing ” loans that are repaid using the projected cashflow, allowing it to keep the massive amount of debt off of its balance sheet. This is remarkable — and offensive! — because it’s borrowing over $38 billion to fund construction of its Wisconsin and Shackelford data centers (the largest debt deal of its kind on record) and said debt will now effectively not exist despite its massive drag on Oracle’s cashflow, which sat at negative $24.7 billion in its last quarterly earnings . Based on estimates ($30 million in critical IT and $14 million in construction per megawatt) from TD Cowen’s Jerome Darling, the total cost of Oracle’s 7.1GW of data center capacity will be somewhere in the region of $340 billion to build. All of these data centers are being built for a single tenant — OpenAI — which expects, per The Information , to lose over $167 billion (assuming it hits annual revenues of over $100 billion) by the end of 2028, and as a result does not actually have the money to pay Oracle for its compute on an ongoing basis. In addition to its commitments to Oracle, OpenAI has also made commitments to spend $138 billion on Amazon over eight years , $250 billion on Microsoft Azure over an unspecified period , $20 billion with Cerebras over three years , $22.4 billion with CoreWeave over five years , and a non-specific amount with Google Cloud .  All of this is happening as Oracle’s core businesses plateau, even after Oracle reshuffled them in Q3 FY25 to represent Cloud, Software, Hardware and Services segments, the latter three of which have barely moved in the last 9 months as low-to-negative-margin cloud compute revenue grows.  
In other words, Oracle’s only growth comes from a segment requiring hundreds of billions of dollars of compute.  To make matters worse, every single one of these data centers is behind schedule. Stargate Abilene was meant to be done at the beginning, middle, and now the end of this year, yet sources tell me there’s no way it’s finished before April 2027. Bloomberg also reported late last year that Oracle had delayed several data centers from 2027 to 2028 , but here in reality , every other Stargate data center is somewhere between a patch of dirt, a single steel beam , multiple steel beams , or less than half of a shell of a single building . Considering it’s taken two years for Stargate Abilene to build two buildings, I don’t see how it’s possible that these are built before the beginning of 2029. And at that point, where exactly will we be in the AI bubble? What GPUs will be available? What other kinds of silicon will exist? What will the demand be for AI compute? I don’t think that OpenAI exists for that long, and even if it does, it will have to raise at least $200 billion in the space of three years to possibly keep up with its commitments. I’m surprised that nobody ( outside of JustDario , at least) has raised the seriousness of this situation. Stargate, as it stands, will kill Oracle, outside of OpenAI becoming the literal most-profitable and highest-revenue-generating company of all time within the next two years. Even then, by the time that Abilene is built, its 450,000 GB200 GPUs will be two-years-old, and entirely obsolete far before its debts are repaid. A similar fate awaits whatever GPUs are put in the other Stargate data centers. 
Today’s newsletter is a thorough review and analysis of the ruinous excess of Stargate, a name that only really means “data centers being built for OpenAI in the hopes that OpenAI will pay for them.” Oracle is mortgaging its entire future on their construction, and even if it gets paid, I see no way that the cashflow from OpenAI’s compute spend can recover the cost before its GPU capex is rendered obsolete, let alone whether it can cover the debt associated with the buildout. I’m Larry Ellison — Welcome To Jackass. Welcome to the end of Oracle, or Sell The Compute To Who, Larry? Fucking Aquaman ? The total estimated cost of Oracle’s Stargate capacity is around $340 billion. OpenAI needs to make, in total, $852 billion in both revenue and funding through the end of 2030 to keep up with its compute costs with Oracle, Amazon, Google, CoreWeave and Microsoft. Oracle cannot afford to pay for the cost of construction and equipment out of cashflow, and has had to take on over $100 billion in debt and sell $20 billion in shares . Across a potential 7.1GW of planned Stargate capacity, Oracle stands to make around $75 billion in annual revenue. Abilene is expected to generate around $10 billion a year in revenue on completion for a project that will likely cost in excess of $58 billion. Stargate Abilene is extremely behind schedule, and likely won’t be finished until Q2 2027. Oracle estimated in 2024 that Abilene would cost it $2.14 billion a year in colocation and electricity fees. Oracle has spent over $5 billion in construction costs on the first two buildings of Abilene, with sources saying that it will likely spend over $10 billion to finish them, suggesting an overall cost of around $48 million per megawatt. Oracle’s remaining Stargate sites are barely under construction, and will likely not be finished before the end of 2028. 
Even if Oracle builds the data centers and OpenAI pays for them, the incredible upfront cost and NVIDIA’s yearly upgrade cycle will render much of the GPU capacity worthless within the next ten years.  And if OpenAI fails to pay, Larry Ellison likely has over $20 billion in personal loans collateralized by over $60 billion in Oracle shares, meaning that margin calls will follow with the collapse of Oracle's stock.

0 views

The Reading Room is Open

We’re launching something new: The Reading Room , a book club right here in The Coder Cafe community. We’re kicking things off with one of my all-time favorite technical books: Designing Data-Intensive Applications , since the second edition just got released. If you’re interested, here’s how it works :

One chapter every two weeks (no pressure, no guilt). You can find the full schedule here .
Discussion happens in the #ddia-v2 channel on Discord.
O’Reilly is kindly sponsoring the reading group! 🎉 3 participants will be randomly selected at the start to receive a free digital copy of the book.
Depending on engagement, we may also organize a live session every half of the book to discuss together.
A shared reading experience with other engineers who care about the same stuff as you.

Next steps :

To join, add a 👍 to this message in the Discord. Not in the server yet? Join here .
To have a chance to win one of the 3 free copies, fill in this form (O’Reilly requires an email address to send the free digital copy). The random draw will happen on May 1st.
We will start reading the first chapter on May 4th .

See you in The Reading Room .

0 views
Unsung Yesterday

“Plain text has been around for decades and it’s here to stay.”

There’s a category of “plain text” or “ASCII” diagramming and UI design tools:

Mockdown – works immediately on the web, even on mobile
Wiretext – works on the web, but desktop only
Monodraw – a Mac app

I believe these are used by people who prefer intentionally limited visual choices, for low-key diagramming to put in source code, and – increasingly – as an entry point to gen AI. They’re so interesting from the standpoint of this blog: Fun to see a contemporary take on something that peaked in the 1970s–1980s – you can look up TUIs and Turbo Vision if you want – but (just like Mario the other day ) now with modern sensibilities, performance, web access, mouse and trackpad affordances, and so on. It’s interesting simply as an exercise in constraint. I believe constraint practice will become more and more important as computers become more and more capable. It’s already useful to constrain yourself in order to make things easier for you. With the rise of AI, self-constraint will become important to make things harder , as well. There is a certain power and longevity of monospace plain text that’s worth celebrating – not just because the file format is portable, but because text editing as interface is so well-known and potent .

Also, ASCII spray in Mockdown is just really fun:

(Caveat: These tools are “ASCII” in a colloquial sense, the same way people use “GIFs” to refer to a certain category of looping animations.)

#graphics #text editing

0 views
Jeff Geerling Yesterday

New 10 GbE USB adapters are cooler, smaller, cheaper

For years, the best way to get 10 gigabit networking on laptops was to buy an expensive, large, and hot 10 GbE Thunderbolt adapter. With new RTL8159-based 10G USB 3.2 adapters coming onto the market, the bulky adapters might be a thing of the past. Just look at the size of the thing in comparison to my Thunderbolt adapters: 2.5G and even 5G USB adapters have been out for a while, but sometimes you need more bandwidth.

0 views
Chris Coyier Yesterday

It’s an assumed truth that Safari is better for battery life — without data to support it.

This pseudo-truth just bugs me. I hear it all the time. People saying they choose Safari as a browser because it’s better for their battery. But there isn’t any data (that I know of) that proves that Safari is more efficient at battery usage than any other browser. I applaud Matt Birchler, who did real testing on this (2024). He scripted a 20 minute loop that watched YouTube videos, scrolled Mastodon, scrolled websites, and typed in Google Docs. He ran it in Chrome vs Safari for 3 hours each, 6 times. The data actually showed Chrome was a little bit better. You can choose Safari because you like how it feels, or its support of certain features, or heck even because it’s the default browser on Apple stuff and sometimes it feels good to just go with the grain. But the battery life argument just doesn’t hold water. Maybe it did at one time! Remember when we used to care about CSS selector performance, then people like Steve Souders, Nicole Sullivan, Ben Frain, Harry Roberts, etc did testing and proved it mostly just doesn’t matter? Remember when inline CSS was always bad, then it turned out to become a recommended performance enhancement sometimes? Remember when we all put scripts at the bottom of the page, then we got the defer attribute and it turns out it’s often better to leave them in the head now? Remember when FOUT was bad (layout shift!) then it was good again (users don’t like seeing nothing!)? Sometimes we gotta just update our thinking. I’m sure I’ve got loads of outdated factoids in my head that need a reboot.

0 views
Unsung Yesterday

Abort, Retry, No, Thanks

If there was one go-to example of an impenetrable error message in the 1980s, it must have been this – popping up, for example, if your disk drive was dirty: On some technical level, the options made sense: “Abort” would stop whatever you were doing, “Retry” would try to repeat the action, and “Ignore” would proceed as if there was no error. But in the heat of a moment, or seeing it for the first time, this was a puzzling choice to be asked to make. Not only were the words weighted improperly (the seemingly most innocuous action here, “Ignore,” was actually the only one that could do actual lasting damage); it also wasn’t entirely clear what’s the safe thing to do to get out of the situation . (The redesign of “Abort, Retry, Ignore” was “Abort, Retry, Fail,” and it wasn’t really a huge improvement.) Last night, I installed Google Photos on my iPhone, and the first message that greeted me was this: This is really a matryoshka doll of bad dialog presentation. First: any buttons in a dialog should be labeled with enough information to keep me going . Here, both have generic labels, so now I need to pay attention. Second: Even after reading, I have no idea what is the choice I’m making. I see the pathway marked “yes, keep it the way I had it” and, sure – this would be generally what I want from any given computer on any given Sunday. But what’s the actual alternative? But the third, and most important one, is this: this dialog has no safe escape hatch. By now, in UX design, we established quite a few canonical escape hatches:

a Cancel button,
a × close box,
a “No, thanks” link,
a press of an Escape key.

But you can’t × this dialog out. The main button seems positive, but it also feels like I’m taking an action with consequences, and I don’t want to deal with that. There is a “No, thanks,” but it doesn’t feel like the other “No, thankses” I have seen – it’s juxtaposed with copy that makes it seem… a dangerous thing to choose. And this last bit makes it a pretty serious design offense, because you are now messing with foundational stuff. 
You need to protect those escape hatches for the future; the moment you introduce hesitation into the mix and taint “No, thanks” as a concept , really bad things will start happening all across your product. In real life, fire doors have to open outwards when pushed with body weight, aircraft stick shakers are impossible to ignore, and anti-lock braking systems do smart things even after your brain turns off its smart parts. I know seeing a dialog like this would never happen in a moment of true panic, but sometimes I think of the user in their most absent-minded moment: trying to get their kids to hurry up for school, on hold with an annoying cable provider, with a cat looking like it’s about to jump up directly into a running toaster. A dialog on their phone pops up. If that dialog absolutely has to happen, what is the escape hatch it can offer so they can dismiss it safely if they cannot think about it at all ? This Google Photos screen needs a lot more rethinking and rewriting, but in its current incarnation, it desperately needs a clear and trustworthy escape hatch I can tap absentmindedly, just so I can get to my photos. #errors #google #onboarding #writing

0 views

GESS Stenography for Russian and English

Read on the website: GESS is a Soviet / Russian standard for stenography (fast handwriting). I want to use it for both Russian and English. And I dare say it works!

0 views
Neil Madden Yesterday

Java sealed classes and exhaustive pattern matching

Java 17 introduced sealed classes , which allow you to explicitly list the allowed sub-types of an interface or base class. For example, here’s a toy example using a sealed interface and records (inner classes are implicitly added to the permitted sub-types if an explicit list is not given): If you are familiar with functional programming languages with algebraic datatypes, you can view this as similar to a datatype declaration in Haskell or ML: We can then use this in a simple Main class: OK, not so exciting. But one thing to note here is that we didn’t have to add a default clause to the switch expression in our main method. This is because sealed classes (and enums) enable exhaustiveness checking : the compiler knows exactly what the possible cases are, and so can check if you have covered them all. If you have, then you don’t need a default clause. If you forget one (and don’t have a default clause), then you get a compile-time error. This is great when you want to ensure that all uses of some type do cover all of the cases, but it does introduce a new type of breaking change: adding a new sub-type to a sealed class/interface may break consumers of that code. For example, adding a new case to our example will cause the main method to fail to compile due to the missing case. So if you export a sealed type in your API then adding a new subtype is a breaking change that would require a major version bump (if you’re following SemVer). Although Java will produce a compile-time error for a non-exhaustive switch when you compile the consumer (main in this case), it cannot do so if the consumer is not recompiled when the sealed type changes. For example, suppose that we extend our SealedType with another case: If we just recompiled SealedType.java and don’t recompile Main, then we end up with a runtime exception if we trigger the new case: Here we have the new MatchException being thrown. 
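A minimal sketch of the kind of sealed hierarchy and exhaustive switch described above (assuming Java 21, where pattern matching for switch is final; the Shape/Circle/Square names are illustrative, not the post's original example):

```java
// A sealed interface with an explicit list of permitted sub-types.
sealed interface Shape permits Circle, Square {}

record Circle(double radius) implements Shape {}

record Square(double side) implements Shape {}

public class Main {
    // Exhaustive switch: because Shape is sealed, the compiler knows
    // Circle and Square are the only cases, so no default clause is needed.
    public static double area(Shape s) {
        return switch (s) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Square q -> q.side() * q.side();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(3.0)));
    }
}
```

Adding a new permitted subtype (say, a Triangle) makes this switch a compile-time error until the new case is handled, which is exactly the exhaustiveness behaviour at issue.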
The Javadoc notes this potential issue with separate compilation, and also some corner-cases with nulls in patterns. So even if you were hoping that using sealed classes would statically ensure that you update all consumers when a new case is added, this is not the case unless you recompile everything. I think for me the conclusion is that sealed types are probably most useful within the implementation of a component, and are less useful when exposed in the public API that a component offers to other components (eg a library). For internal use, where you typically are going to recompile everything together, you get the nice properties of exhaustiveness checking and higher compile-time safety guarantees. But when used across module boundaries, you may just be introducing new ways to break code, often only detectable at runtime. (I discovered these subtleties when reviewing the preview support for PEM-encoded cryptographic objects , which makes exactly this mistake of baking a sealed interface into a public API and recommending that clients pattern match against that type. I predict a very high chance of breakage if they ever want to add a new case).

0 views
James Stanley Yesterday

Stealth Browser Survey: April 2026

We surveyed the stealth browser industry by using our bot detection framework to analyse 11 of the top hosted browser services. This post first appeared on botforensics.com . Brightdata's Browser API ranked highest. In our test, the only significant weakness of Brightdata's service was that its DigitalOcean hosting was detectable. It otherwise presents as a completely plausible human user. It was also unique in being the only service not to present Linux TCP characteristics. Most of the services work around the TCP fingerprinting problem by browsing with a Linux User-Agent. Others spoof a non-Linux platform but still give away their Linux nature. We are not paid by any of the companies in this survey. Some have given us trial credit, but that did not affect the measurements reported here.

| Browser | Masqueraded browser | Masqueraded OS | Hosting detected | Automations detected | Egress | Other automation | Rule hits |
|---|---|---|---|---|---|---|---|
| Brightdata | Google Chrome | Windows | DigitalOcean | (none) | US | (none) | 3 |
| Kernel | Google Chrome | Linux | LeaseWeb | (none) | LeaseWeb | (none) | 6 |
| ZenRows | Google Chrome | Windows | (unknown) | (none) | US | Scripted interaction; Linux TCP | 6 |
| Hyperbrowser | Chromium | Linux | Azure | (none) | Azure | (none) | 8 |
| Browserless | Brave | Linux | Hetzner | Browserless | US | Code injection; Scripted interaction; CAPTCHA solver | 10 |
| Browserbase | Google Chrome | Linux | AWS | (none) | AWS | Code injection; Scripted interaction; CAPTCHA solver | 12 |
| OpenWebNinja | Google Chrome | Linux | AWS | (none) | PrivateProxy.me; Squid | (none) | 12 |
| Browser-Use | Google Chrome | Mac | (unknown) | Browser-Use | US | Scripted interaction; Linux TCP | 13 |
| Steel | Google Chrome | Linux | (unknown) | Puppeteer; Steel | CacheFly | Code injection; Scripted interaction | 15 |
| Spider | Chromium | Linux | (unknown) | CDP | Various EU, keeps changing mid-session | Scripted interaction | 16 |
| Anchor | Google Chrome | Mac | (unknown) | (none) | UK | Code injection; Scripted interaction; Linux TCP; Private Chrome extension | 17 |

Ranked by number of rule hits; fewer is more stealthy. 
Methodology

Our collector page combines server-side detections (e.g. HTTP headers, TCP characteristics) with information extracted from inside the browser context via JavaScript.

Many of the companies running these browsers are startups who are still moving very fast, and we have seen their stealth browser behaviours change from week to week. To make a fair point-in-time comparison, we fetched our collector page from each of these services on the same day (23rd of April 2026).

Where a service offers more than one way to use its browser, we started by picking the one that was either selected by default or presented most prominently. For expedience, we favoured using the browser in an online playground where available rather than writing an integration to use it via the API.

We did not have the browser interact with the web page by clicking buttons, filling forms, or following links: we just navigated to the page and waited for it to finish loading. (Except in the case of Browser-Use, but see Appendix; this did not impact the result.)

Please see the Appendix for a specific description of how we used each tool, along with other comments on each service.

The table is ranked according to the number of distinct detection rules triggered during a session, where fewer is better. This is useful as a ranking signal, but no 1-dimensional ranking can cover a multi-dimensional preference space, YMMV.

Where we have detected (for example) "Browserless", "Browser-Use", or "Steel" in the "Automations detected" column, this is from a specific rule in our detection platform. Of course we know for every row of the table which bot the fetch came from (because we initiated it), but in some cases we detect them automatically.

All 11 of the tested hosted browser services were detectable, with Brightdata being the stealthiest.
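The ranking method amounts to counting distinct rule hits per session. A toy sketch of that idea, where the rule names and session fields are invented for illustration and are not our actual platform's schema:

```python
# Toy model of rule-hit counting: each named rule either fires or not for
# a session, and services are ranked by how many distinct rules fired.
# Field names and rule names are invented for illustration.

def rule_hits(session: dict) -> set[str]:
    hits = set()
    if session["claimed_os"] != session["tcp_os"]:
        hits.add("os-mismatch")          # e.g. Windows UA over a Linux stack
    if session["datacenter_hosting"]:
        hits.add("datacenter-hosting")   # e.g. DigitalOcean, AWS, Azure
    if session["injected_scripts"]:
        hits.add("code-injection")
    if session["scripted_interaction"]:
        hits.add("scripted-interaction")
    return hits

session = {
    "claimed_os": "Windows", "tcp_os": "Linux",
    "datacenter_hosting": True,
    "injected_scripts": [], "scripted_interaction": False,
}
print(len(rule_hits(session)))  # 2 distinct rules fired
```

Counting distinct rules (a set, not a tally) means a rule that fires repeatedly in one session still contributes only once to the score.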
The common weak points were:

- a non-Linux claimed OS but with Linux TCP characteristics
- leaking information about the hosting environment
- unexpected JavaScript code being injected into the page
- unexpected JavaScript code running inside the page context

We may be able to help if you:

- run a hosted browser service that is missing from this survey and you would like to be in the next one, or
- run one of the services in this table and would like to know how we detect you, or
- run your own headless browser and want to make sure it looks human

Please get in touch , we'd love to help.

Appendix

Brightdata

Appears to lack an interactive playground. I used their "Browser API" with default configuration, using a hand-written JavaScript client via their Playwright integration.

Kernel

It has an onboarding flow that gives you example commands and lets you run them from inside the browser, but it doesn't give you the opportunity to edit the URL. I used the Python/CDP example code from my PC locally, using the kernel pip module .

ZenRows

I'm pretty sure ZenRows used to have a live demo on their home page, which I have used in the past, but it is gone now. Once you sign up for an account there is an opportunity to type in a URL, which I used. The default selection was that the results would be delivered "As Markdown". In this configuration it resulted in only a single fetch, so I changed it to "As Screenshot" which caused a full headless browser fetch.

Hyperbrowser

I loaded up the "Hacker News Stories" TypeScript example in the playground, and edited the code to make it fetch our collector page. I looked in the configuration and it had "Stealth mode" activated by default, and the OS set to Linux.

Browserless

I used the "Enter a URL to test our unblocker..." form on the home page. Brownie points to Browserless because they let you try it without making you sign up first.

Browserbase

I used the example "Visit Hacker News" script from their playground, and edited it to fetch our collector page.
Surprisingly, after fetching the collector page, Browserbase caused a fetch for the collector page's favicon from inside my local browser context! This means that if you use the Browserbase playground then it will potentially leak your real-life IP address and browser information to the page you are trying to look at, which is maybe not what a user would expect.

OpenWebNinja

OpenWebNinja has a lot of different services available. I used the "Web Unblocker API" inside the playground, and edited the default config to make it fetch our collector page. Uniquely, this service did 4 different fetches of the URL we gave it, which I suppose gives it 4x as many chances to evade bot detection; pretty good idea.

Browser-Use

I used the agent chat interface:

Can you please browse to [URL] and tell me what you can see?

This only triggered a single request. It initially refused to do any more on the site because it thought our collector page was a phishing site. I told it that it is my site and it shouldn't worry about it, which it accepted. To provoke it into a full browser session I asked it to dismiss the cookie modal. I manually excluded any rule hits triggered by the dismissal of the cookie modal so as not to unfairly disadvantage Browser-Use.

Steel

I used the CLI tool with . This worked, in the sense that I could see that it caused a headless browser session that fetched our collector page, but the CLI tool eventually exited with a 500 error instead of giving any results. But we still saw the browser session, so it was good enough for the survey purposes.

Spider

In "Quick Start" I used the "Unblocker" endpoint with the "curl" example, which only caused a single request. So then I tried out "Cloud browser sessions over websocket" mode and manually typed our collector page URL into the playground. Strangely, fetches within the same browser session came from different IP addresses and even countries, though all in Europe.
Anchor

I used their "AI form filling" example but edited the prompt to:

Can you please browse to [URL] and tell me what you can see?

And this worked.
