Latest Posts (20 found)

Commenting Guidelines

When commenting on this website, please keep the following points in mind: You may include HTML or Markdown in your comment. Comments are converted to HTML and sanitised before they are published on this website. All submitted comments are held for review. Whether a comment is published or not is at the discretion of the author of this website. Typically, only the following types of comments are published:

Comments that add new information or insight to the topic discussed in an article.
Comments that provide a neutral, supporting or opposing viewpoint.
Comments that report typos, errors or bugs on the website.
Comments that contain good humour.
Comments that express appreciation.

Generally, rants are not published, even when the post you are commenting on is itself a rant. This website is the author's place to rant. It is not your place to rant. If you really need to rant, please do so on your own website. This guideline exists to maintain a high signal-to-noise ratio in the comments section. All comments deemed suitable for this website by its author become publicly available on this website at two places: on the comment page for the article you commented on (example) and on the overall comment index page at comments. Do not submit sensitive personal data in your comments. #meta

0 views

What Time Is It?

On Palm OS, the interface for picking the start and end time of an event is presented as two columns: hours and minutes. The hours list either runs from 8AM to 7PM (covering a full business day), or starts at the next hour (if you're creating an event for today). Minutes are listed in 5-minute intervals, allowing every option to be shown at once. This interface is simple and requires an extremely low cognitive load to use. It's scannable and adaptive to the current situation (today vs another day). It limits options (i.e. you can't set a time of 12:33) to drive simplicity. If we compare to the time picker on Android, we can see it's significantly more complex. One must first tap the hour, then tap AM/PM, then tap the minutes section and tap the minute they need. While 5-minute intervals are shown on the screen, the user is able to select specific minutes, if they know how (one must drag the circle to get a specific minute). The interface involves many more taps, more states and a higher cognitive load. How about iOS? Like Palm OS, iOS limits you to 5-minute intervals. Similar to Android though, an additional interaction is needed to pick AM/PM. Picking hours and minutes is more involved as well: you must scroll the picker to the desired value. The Palm OS UI might not be the prettiest, but it's the fastest for most use-cases. The most common options (business hours and 5-minute intervals) are presented without the need for multiple states or scrolling. Setting the time is 2 taps away!
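The adaptive column logic is tiny. Here's a sketch in TypeScript of the behaviour described above (the function names and the twelve-row assumption are mine, not Palm's):

```typescript
// Sketch of Palm OS-style adaptive time columns, as described above.
// Assumption (mine): twelve visible hour rows, so the default view is
// 8AM-7PM, and the "today" view is the next twelve hours.

function hourColumn(eventIsToday: boolean, now: Date = new Date()): string[] {
  const startHour = eventIsToday ? (now.getHours() + 1) % 24 : 8; // next hour vs 8AM
  const hours: string[] = [];
  for (let i = 0; i < 12; i++) {
    const h = (startHour + i) % 24;
    const suffix = h < 12 ? "AM" : "PM";
    const display = h % 12 === 0 ? 12 : h % 12;
    hours.push(`${display}${suffix}`);
  }
  return hours;
}

function minuteColumn(): string[] {
  // Every 5-minute option fits on screen at once: 00, 05, ..., 55.
  return Array.from({ length: 12 }, (_, i) => String(i * 5).padStart(2, "0"));
}

console.log(hourColumn(false)); // ["8AM", "9AM", ..., "7PM"]
console.log(minuteColumn());    // ["00", "05", ..., "55"]
```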

2 views
Unsung Today

“This is where your mouse becomes a cryptographic instrument.”

A fascinating 9-minute video from PawelCodeStuff about randomness in the context of computing. It explains those weird moments where sometimes the computer asks you to wiggle your mouse – to generate unpredictable numbers – although the specifics of what exactly was random in my wiggling were a surprise to me. There is something poetic about computers yearning for that one thing they can never get – complete unpredictability – and collecting it in a little pool like you would something very precious. Also fascinating that in modern CPUs, there now exist hardware components that gather truly random data from the real world. While I have never needed true randomness in my design career, knowing how to control pseudorandomness (specifically, how to replay it) has been helpful. Here’s an example. In my essay about Gorton, there is this interactive bit where you can drag a slider for “messiness.” With regular pseudorandomness, the experience is wiggly and gross. But when you always restart the PRNG from the same seed (“the Groundhog Day maneuver”), it feels much better. #details #motion design #security #youtube
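To make the Groundhog Day maneuver concrete, here is a minimal sketch of the idea (my reconstruction, not the essay's actual code; mulberry32 is just one common seedable PRNG):

```typescript
// A tiny seedable PRNG (mulberry32). Math.random() can't be seeded,
// so replayable randomness on the web needs something like this.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), a | 1);
    t = (t + Math.imul(t ^ (t >>> 7), t | 61)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// The Groundhog Day maneuver: every time the "messiness" slider moves,
// restart from the same seed, so only the amplitude changes, not the wiggle.
function render(messiness: number): number[] {
  const rand = mulberry32(12345); // same seed on every call
  return Array.from({ length: 5 }, () => (rand() - 0.5) * messiness);
}

console.log(render(1));  // same shape of noise...
console.log(render(10)); // ...just scaled up, never re-rolled
```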

0 views

new challenges at work

In the past, I have complained about some aspects of my work here and there. As I continue to grow, get more qualifications, visit conferences, and apply to interesting positions, I've put more effort into transforming the place where I'm at, to the best of my abilities. I've repeatedly asked for more work, I've asked for different tasks, and I helped create a new role. Not replacing my current role and work, but something on top/on the side next to my core tasks. I needed change and something worth logging in or coming into the office for, and of course I wanted to pivot more into my desired field. That brings some new challenges, which are desired, but can be uncomfortable at first. Years of doing the same tasks with comparatively little cooperation and following repetitive processes never forced me to put a lot of thought into what I put out, so to speak. That can be very nice, and in the beginning, it was hard enough learning everything and doing everything correctly. With my core work, no one asks me to create anything from scratch, make any decisions, or organize anything independently; it's all set in stone. If I wanted to, I could just spend years doing the work-equivalent of "minding my own business" and keeping my head down, in which I work off what came in that day based on our rigid standards and use fixed email templates (not even having to formulate my own sentences), nothing more, nothing less, unbothered. That's what I did for years as I got used to everything, and as I was very sick. But now, when I want to do more challenging work, I notice that years of working like this have made me very comfortable. Not lazy, but it feels unusual and slightly scary to suddenly have a more "active" part of work where I actually have to plan meetings, host and lead them, prepare slides, and even approach people first about needing to find time to discuss something together. Completely normal office tasks for others, of course, and it's what I wanted, to not stagnate further in something that bores me, but my brain still perceives it as a threat. Due to internal restructuring and moving of employees, we lost our sub-department's IT coordinator 1 (each sub-department has to assign someone). I asked my boss if I could be the new one, and she agreed. Unfortunately, at least in our department, this title is more decorative than anything else, as the IT coordinators don't even have any meetings to discuss anything at all. This has generally worked fine enough, as in "we are surviving", but now with different AI model rollouts and other software changes, I notice employees becoming more and more confused and helpless, and a more proactive approach would be nicer. When I asked my boss for permission to be one, I said I would like to organize a meeting of all coordinators to discuss some challenges and more, and both she and the department head thought this was a good idea and asked me to schedule one soon. I didn't expect how much this task would make me freeze up; I didn't wanna be the newcomer in a group who piles more work and yet another meeting onto the other people as a first move. So I obsessed over a good way to introduce this, and how to make the first meeting worth it. I didn't want everyone to show up, discover we have nothing to discuss, and leave after 5 minutes. The invitation mail should stress that this is just a first, casual meeting in which we will talk about topics x, y and z, and then determine whether this should happen again and at what frequency.
I also kept pondering whether I should already prepare a topic/mini-presentation, so as not to show up empty-handed myself as the organizer, and what that could be, putting a lot of pressure on finding something good enough. The final hurdle was that no one in my department apparently even had a full list of who the other coordinators are; I had to research that myself somehow and ask around. All that made me put off scheduling anything for a good 3 weeks. Yesterday, I finally dealt with this mess, as the task became more and more pressing and uncomfortable to think about, threatening to become this huge anxiety beast strangling me. Detangled my feelings, set realistic expectations, and scheduled it for mid-June to have a bit of time. At the same time, I am finally officially the data protection coordinator of my department. My workplace never had one before, and no other department has one either. This is just my department wanting to lead by example, and admittedly, also accommodating me and my ambitions, as I have asked for this for months. Leadership up top has repeatedly thwarted my attempts to move into the data protection team, or officially implement coordinators house-wide, and refused to even discuss it or process it in the idea management system, so this is my little rebellion, you could say. Doing things from the bottom up. I have already prepared the slides they will use to announce it in the next department meeting and the meeting of all department heads. I will also have to prepare a short presentation about data protection challenges in our department, scheduled around Q3 or Q4 of 2026 as I need time to get an overview of everything. I'll have to meet up and interview a lot of people about their team's data workflows to see what needs to be adjusted, write some analyses, write deletion concepts, create awareness, ensure compliance, and more. I'll also be the person to go to before the data protection officer gets involved. It's what I wanted, but internally it also makes me very nervous. I finally get to create things, and success will be about the quality, not just that something was done; but it opens the door for thoughts about whether I am good enough or not. Merely following process steps as described makes it easy to just be a bot that gets things done; creating things yourself, sharing your own ideas and opinions exposes you as a person, makes you vulnerable. There are people working there that will finally see that there is a person with a brain underneath the years of automatically generated emails they received in my name. There is no one else to watch and learn from, as I am the only one, and I get to make things up as I go for this new role. I will be the blueprint, for now. There are horror scenarios in my head of not knowing something in a meeting and everyone thinking I am an impostor who doesn't really know anything. That's not how real life goes, of course, and everyone is usually understanding when you say "Sorry, I will have to look that up and get back to you about that.", but you know how brains are. I'll have to learn from every meeting. I am scared of not doing a good job and doing it all a disservice. The culture is an aspect of it too, because unfortunately, my place has a reputation of not being kind to ambitious people, and many people being rather hostile if anything is asked of them - time, expertise, feedback, a change in routine, a little bit of grace; anything.
There are also a few coworkers that have proven again and again that they are unable to view younger people or people lacking this or that university degree as worth taking seriously. That's what I will be up against, along with the harsh standards I have for myself. I'm trying to reassure myself that I have time to figure things out and that I need to make mistakes to improve. Reply via email Published 13 May, 2026 The IT coordinators' role at my workplace is to share IT knowledge around in all kinds of teams so it isn't just concentrated in specific areas, and to ensure everyone is up-to-date on internal policies, new software options and more. They're also a sort of first responder to task-specific tech problems in that specific team before annoying our general helpdesk. The communication of our IT department can be lacking, and not everyone has the time to keep on top of new things (like the sudden rollout of Copilot recently, new options available in Teams, etc.), so having these people "posted" in each sub-department to share news and developments was supposed to help that. ↩

1 views
Unsung Today

Mailbag: Photoshop’s focus post

The post about some of Photoshop’s new dialogs traveled through some of the internet’s pipes and alleyways. Michael Tsai has a nice roundup of reactions; let me pick a few things that caught my attention. 1. Nick Heer at Pixel Envy made a discovery that Photoshop’s new windows are… websites: Maybe it really is possible to build a web app that feels platform native. But I have never used one — not once — and for this mess to be increasingly used in the industry-standard professional suite of creative tools is maddening. I think it is possible – especially in the realm of classic form fields – but you really have to care and step up and test and replicate some of the stuff that the operating system controls give you for free. (As an example, if the web platform/Electron don’t give you access to the “keyboard navigation” OS accessibility setting, you’ll need to build a bridge from the OS to pass it through. This is how Figma’s Electron app got haptics, for example.) It is true that we don’t see that level of effort often. But there are also bad native interfaces, and there might be more; Roger Wong recently made an interesting observation that stuck with me. Emphasis mine: The mechanism differs but the outcome is the same: the platform stops being a place a designer can rely on. […] [Text user interfaces] are back because the platforms quit , and the curriculum can’t fix that. I think I agree with this; I’ve felt there haven’t been a lot of improvements in native desktop interfaces recently. In the mid-1990s, Apple was losing to Windows 95/98, and after years of falling by the wayside, the team eventually got their priorities in order, and rebooted classic Mac OS into a (I believe generally successful) Aqua. And in later years, Apple as a whole has often been good about creating extra distance from the peloton even if there was no immediate danger of being overtaken. But not here. Windows lost its way, and perhaps even the memories of the darkness of the 1990s and the revival of the 2000s are now forgotten. Even if Liquid Glass were executed extremely well, macOS would still feel bereft of true evolution and care. I know there have been some slight improvements to window tiling and more recently Spotlight, but little of this betrays urgency or suggests a vision. Finder feels like it’s been abandoned for over a decade. AirDrop UI is worse in use than many of the file sharing interfaces that came before it. This common UI is stuck in a state of the art of display colour science from the previous century. Just on the topic that is fresh on my mind: Why does Shortcuts feel like a toy in all the moments it shouldn’t, but few of the moments it should? Why does the keyboard customization situation feel so messy? Or, why are both macOS and iPadOS still stuck in the ancient way of thinking that menu bars contain all the app’s commands, when the modern approach is: it’s command bars that do, with menus containing only a subset? An innovative modern operating system would offer a universal API for command bars that any app that wants one could use – instead, apps invent their own with varying levels of success and UI quality, and automation tools cannot do much since nothing’s compatible. (This in particular is an example of an area where web apps started leading the way.)
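As a pure thought experiment, since nothing like this exists as an OS API and every name below is invented, a minimal universal command-bar registration could look like this:

```typescript
// Hypothetical sketch of what a universal, OS-level command-bar API
// might look like. None of this exists; all names are illustrative.

interface Command {
  id: string;          // stable identifier, usable by automation tools
  title: string;       // what the user sees and fuzzy-searches
  keywords?: string[]; // extra matching terms
  run: () => void;
}

class CommandBar {
  private commands = new Map<string, Command>();

  register(cmd: Command): void {
    this.commands.set(cmd.id, cmd);
  }

  // The OS would own the UI; apps only contribute commands.
  query(text: string): Command[] {
    const q = text.toLowerCase();
    return [...this.commands.values()].filter(
      (c) =>
        c.title.toLowerCase().includes(q) ||
        (c.keywords ?? []).some((k) => k.toLowerCase().includes(q))
    );
  }
}

// An app registers its commands once; automation tools and the
// system-wide bar would both see the same catalog.
const bar = new CommandBar();
bar.register({ id: "file.export", title: "Export As…", keywords: ["save"], run: () => {} });
console.log(bar.query("export").map((c) => c.id)); // ["file.export"]
```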
These are just some examples that come to mind. It’s true I have admired and been inspired by some work done on Apple TV and the Vision Pro, but we also have to acknowledge that designing for net-new platforms is in many ways easier than for legacy ones. 2. Back to Photoshop. In the Hacker News thread, at least one person from Adobe dropped in to comment, and one paragraph caught my attention: These changes were part of the Beta program. As far as I am aware the response there was not on the same level as this blog post. It’s not my intention to pick on this Adobe employee, and I am not aware of the specifics of their beta program (although I have used Photoshop in beta for a few years). But from my experience, this is why beta testing fails in this regard:

People in beta programs might be more lenient and excited to experiment. For obviously broken small UI things, people will be more inclined to think “oh, they will surely take care of that in the polish phase.”
In general, reports of smaller UI things are less likely than bigger functional bugs like “this is not working” or “this is really slow now.” You really have to encourage and reward and incentivize people to do that, and usually identify the right people first, too.
Please excuse my directness, but Photoshop’s user interface has felt low-quality for at least a decade now. There are a lot more examples. It’s hard to expect people in the beta to flag small UI stuff – including literal broken windows – when the evidence all around them is that the company doesn’t care.
Just because we all encounter interfaces doesn’t mean everyone knows how to identify the things and say the words and connect the dots, especially when it comes to generally undefinable and unmeasurable craft. Good UI is deep expertise. Just like you cannot research or data science your way out of a fundamentally bad product decision-making process, you also cannot add craft through relying on your users to tell you. You need to foster this on the inside.

3. Oh, and when I say “broken windows,” I’m not just being cute. Here’s an example of Photoshop’s “explore” halo that occasionally appears on top of another app just because I have Photoshop open underneath. And, there is nothing I can do in Photoshop to get rid of it. I think there is something fundamentally very broken with Photoshop’s (custom?) window management, seeing how PS windows jump in front of other applications, or how PS breaks other apps’ mouse pointers. But that’s a story for a different post. #adobe #apple #bugs #interface design #nick heer #process

1 views
Stratechery Yesterday

The Deployment Company, Back to the 70s, Apple and Intel

Good morning, President Trump is on the way to China, and Sharp China is your go-to podcast for understanding what happens next. Add it to your podcast player now in anticipation of the next few episodes breaking down the trip. On to the Update: From Reuters: OpenAI said on Monday it is setting up a new company with more than $4 billion in initial investment to help organizations build and deploy artificial intelligence systems, and will acquire an AI consulting firm, Tomoro, to quickly scale up the unit. After its early models saw strong resonance with consumers, OpenAI has been working aggressively to sign corporate contracts and establish a large presence in the business world where its AI will see large-scale deployment. The venture, which will be majority owned and controlled by OpenAI, also comes as rival Anthropic enjoys strong success in its enterprise AI push with its Claude family of models seeing rapid adoption among businesses. The new firm, called OpenAI Deployment Company, will help the ChatGPT maker embed engineers specializing in frontier AI deployment into organizations that will then work closely with various teams to identify where AI can make the biggest impact, OpenAI said. Its acquisition of Tomoro, a consulting firm that helps enterprises deploy AI, will bring around 150 experienced AI engineers and “deployment specialists” to the new unit from day one. Tomoro was formed in 2023 in alliance with OpenAI, and counts companies such as Mattel, Red Bull, Tesco and Virgin Atlantic as its clients, according to its website. That was on Monday; on Tuesday, from The Information: Google plans to hire hundreds of engineers to help customers start using its business-focused AI products, according to a person familiar with the situation. Google’s new “forward deployed engineers” will form a new team within Google Cloud, the unit’s chief, Thomas Kurian, said on LinkedIn on Tuesday, without disclosing the size of the effort. Matt Renner, Google Cloud’s chief revenue officer, said in a separate post that the move would help Google “show up for our customers with more technical resources (vs just an ocean of salespeople).” The announcement is one of several in the industry in recent weeks as tech companies are deploying armies of humans—often described as “forward deployed engineers”—and partnerships with consulting companies to get customers using AI-driven technology intended to automate work. On Monday, OpenAI launched the “OpenAI Deployment Company” in partnership with consulting and investment firms. Last week, Anthropic announced the creation of a joint venture with private equity firms to sell its AI to the PE firms’ customers. It is, needless to say, tempting to drop some snark about AGI apparently not being good enough to deploy AI, but instead I’m going to go with “as predicted”. In 2024’s Enterprise Philosophy and the First Wave of AI, I made the case that the proper analogy for AI in the enterprise was not SaaS, but rather the first wave of computing in the 1970s. Agents aren’t copilots; they are replacements. They do work in place of humans — think call centers and the like, to start — and they have all of the advantages of software: always available, and scalable up-and-down with demand…Benioff isn’t talking about making employees more productive, but rather companies; the verb that applies to employees is “augmented”, which sounds much nicer than “replaced”; the ultimate goal is stated as well: business results.
That right there is tech’s third philosophy: improving the bottom line for large enterprises. Notice how well this framing applies to the mainframe wave of computing: accounting and ERP software made companies more productive and drove positive business results; the employees that were “augmented” were managers who got far more accurate reports much more quickly, while the employees who used to do that work were replaced. Critically, the decision about whether or not to make this change did not depend on rank-and-file employees changing how they worked, but on executives deciding to take the plunge. Specifically, I don’t think that the Deployment Company is going in to help employees use chatbots; that’s even more clearly the case with the PE firms that both OpenAI and Anthropic are doing deals with. I expect there to be an ever-increasing number of deals where PE buys software firms with reliable cash flows and conducts significant layoffs, forcing AI to pick up the slack, solving stock-based compensation issues in the process. I don’t know if the mandate for the Deployment Company is going to be quite so harsh, but I assume this is a company that is hired by the executive suite to fundamentally rethink business processes in a way that hasn’t been done since the mainframe: Most historically-driven AI analogies usually come from the Internet, and understandably so: that was both an epochal change and also much fresher in our collective memories. My core contention here, however, is that AI truly is a new way of computing, and that means the better analogies are to computing itself. Transformers are the transistor, and mainframes are today’s models. The GUI is, arguably, still TBD. To the extent that is right, then, the biggest opportunity is in top-down enterprise implementations. The enterprise philosophy is older than the two consumer philosophies I wrote about previously: its motivation is not the user, but the buyer, who wants to increase revenue and cut costs, and will be brutally rational about how to achieve that (including running expected value calculations on agents making mistakes). That will be the only way to justify the compute necessary to scale out agentic capabilities, and to do the years of work necessary to get data in a state where humans can be replaced. The bottom line benefits — the essence of enterprise philosophy — will compel just that. What I wonder is how much of the work ends up reworking data; that, as I noted in that article, is why I was bullish on Palantir: That leaves the data piece, and while Benioff bragged about all of the data that Salesforce had, it doesn’t have everything, and what it does have is scattered across the phalanx of applications and storage layers that make up the Salesforce Platform. Indeed, Microsoft faces the same problem: while their Copilot vision includes APIs for 3rd-party “agents” — in this case, data from other companies — the reality is that an effective Agent — i.e. a worker replacement — needs access to everything in a way that it can reason over. The ability of large language models to handle unstructured data is revolutionary, but the fact remains that better data still results in better output; explicit step-by-step reasoning data, for example, is a big part of how o1 works.
To that end, the company I am most intrigued by, for what I think will be the first wave of AI, is Palantir… That integration looks like this illustration from the company’s webpage for Foundry, what they call “The Ontology-Powered Operating System for the Modern Enterprise”: What is notable about this illustration is just how deeply Palantir needs to get into an enterprise’s operations to achieve its goals. This isn’t a consumery-SaaS application that your team leader puts on their credit card; it is SOFTWARE of the sort that Salesforce sought to move beyond. Google’s Kurian, by the way, did dismiss any sort of Palantir comparison in a Stratechery Interview last month: This all makes perfect sense, particularly this bit about the Knowledge Catalog definitely fits how I’ve been thinking. I wrote about this a few years ago about this importance of this whole layer and understanding it, it’s a bit of a big lift to get this in place. You have some sort of analog, say, with like a Palantir that’s putting in like their ontology thing. They have FDEs out on the site, multi-month projects doing this. You have OpenAI talking about Frontier, their agent layer, and they’re partnering with all the tech consultancies to build this out. Is this going to entail a lot of boots on the ground to get this graph working and functional in a way that your agents can operate effectively across it? TK: We’re not competing with Palantir, we’re not building a semantic dictionary or an ontology. What we’re doing is, today I’ll give you the closest analogy. TK: Today when you use a model, let’s say you use Gemini, and you ask a question, Gemini goes through reasoning, and then it shows you a citation. A citation is, “How did I answer the question and what’s the source I derived from?” Now imagine that citation was a query that needed to go to a folder in, for example, a storage system because there’s some documents there and a database because, for example, in a part number, just think about there’s a part number document that lists all the part numbers and sits in a drive and then that part number you need to fetch out to say it’s the modem that the guy is coming to repair, and that’s mapped to a table in a database. So what the graph does, we use Gemini, so we don’t need humans, we use Gemini to say, “Hey, go and read all these documents in these drives and extract the information from it and then match that to the database table that has the reference to the part number”, and so then when Gemini turns around and says, “I got this query about how much inventory of modems they are”, the first thing it does is it says, “Okay, go to the Knowledge Catalog and it says modem is part number one, two, three, four, five”, and then it says, “By the way the table in the database that has the inventory information about this part number is this table, here’s a SQL”, it then makes the quality of what we generate higher and then when it answers the question it shows back — back to your, “Trust my data”, it shows a grounding citation saying, “That’s where we got it from.” Well, so much for not needing humans! I joke, mostly — Kurian was referring to not needing a Palantir-like ontology, not necessarily dismissing the need for FDEs — but it sure is interesting how AI is creating the need for new kinds of jobs. It’s almost as if the world is more dynamic, and pure intelligence, unadulterated by what already exists and the burden of reflexivity, is more static, than the most pessimistic prognosticators may have anticipated. 
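(An aside of mine: the flow Kurian describes is easy to sketch. The following toy version is just my reading of the quote; all names are invented, and none of this is Google's actual API.)

```typescript
// Toy version of the grounding flow described above: a catalog maps
// entities extracted from documents to the tables that hold the live
// data, so a model can generate a grounded SQL query plus a citation.
// Everything here is invented for illustration.

interface CatalogEntry {
  entity: string;     // e.g. "modem"
  partNumber: string; // extracted from documents in a drive
  table: string;      // the database table holding inventory for it
  source: string;     // where the mapping came from (the citation)
}

const knowledgeCatalog: CatalogEntry[] = [
  { entity: "modem", partNumber: "12345", table: "inventory_parts",
    source: "drive://parts/part-numbers.pdf" },
];

function answerInventoryQuestion(entity: string) {
  const hit = knowledgeCatalog.find((e) => e.entity === entity);
  if (!hit) return { answer: "unknown", citation: null };
  const sql = `SELECT qty FROM ${hit.table} WHERE part_number = '${hit.partNumber}'`;
  return { sql, citation: hit.source }; // grounded answer: query + provenance
}

console.log(answerInventoryQuestion("modem"));
```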
More prosaically, OpenAI and Anthropic need the revenue, enterprises need the imagination, and Google needs to stay in the game. From the Wall Street Journal : Apple and Intel have reached a preliminary agreement for Intel to manufacture some of the chips that power Apple devices, according to people familiar with the matter. Intensive talks between the two companies have been ongoing for more than a year, and they hammered out a formal deal in recent months, these people said. Bloomberg News previously reported the talks. It’s still unclear which Apple products Intel would make chips for, these people said. Apple ships more than 200 million iPhones a year as well as millions of iPads and Mac computers. Ming-Chi Kuo reported on X late last year that Intel would make Apple’s most basic M processor on its 18A process; he didn’t specify which generation. Regardless, while the Wall Street Journal cites Trump administration pressure, and an earlier Bloomberg article Apple’s concentration risk on TSMC and Taiwan, the most obvious reason for a deal — assuming it exists — is economic. Specifically, Apple has for two quarters running said it can’t satisfy demand because it can’t get enough capacity at TSMC. CEO Tim Cook referenced this point multiple times on the last earnings call , but I think this was the most important articulation: The constraint in the March quarter and the June quarter, the primary constraint is the availability of the advanced nodes our SoCs are produced on, not memory. And so I don’t want to predict for supply and demand to match because if I look at it realistically, I think on the Mac mini and the Mac Studio, I believe it will take several months to reach supply-demand balance. And so we’re not at the point where we’re saying this is going to end anytime soon. And it’s not because of a problem per se other than we just undercalled the demand. And there are lead times to this, as you well understand, and it takes a while to correct that. And the primary constraint from a product point of view, or the majority of it for this quarter, for the June quarter will be on the Mac. And it’s Mac mini, Mac Studio and the MacBook Neo. It’s all of those. Cook talked about lead times last quarter as well, and the important thing to note is that while it does take five months or so to make new chips, assuming Apple realized it needed more iPhone 17 Pro chips right away, those new A19 Pro lines only started producing chips partway through last quarter (which is why iPhone 17 Pro sales weren’t as high as they could be). Critically, however, what seems likely is that Apple took capacity away from the Mac to make more iPhone chips, and now doesn’t have enough chips for the Mini and Studio either. The long-and-short of it is this: Apple doesn’t have flexible access to TSMC capacity anymore, because so much of that capacity is going to AI in particular, and it’s costing Apple meaningful money across multiple product lines. This was always the thing that would bring companies to Intel; I wrote in TSMC Risk : Becoming a meaningful customer of Samsung or Intel is very risky: it takes years to get a chip working on a new process, which hardly seems worth it if that process might not be as good, and if the company offering the process definitely isn’t as customer service-centric as TSMC. I understand why everyone sticks with TSMC. 
The reality that hyperscalers and fabless chip companies need to wake up to, however, is that avoiding the risk of working with someone other than TSMC incurs new risks that are both harder to see and also much more substantial. Except again, we can see the harms already: foregone revenue today as demand outstrips supply. Today’s shortages, however, may prove to be peanuts: if AI has the potential these companies claim it does, future foregone revenue at the end of the decade is going to cost exponentially more — surely a lot more than whatever expense is necessary to make Samsung and/or Intel into viable competitors for TSMC. This, incidentally, is how the geographic risk issue will be fixed, if it ever is. It’s hard to get companies to pay for insurance for geopolitical risks that may never materialize. What is much more likely is that TSMC’s customers realize that their biggest risk isn’t that TSMC gets blown up by China, but that TSMC’s monopoly and reasonable reluctance to risk a rate of investment that matches the rest of the industry means that the rest of the industry fails to fully capture the value of AI. We’re already here (reportedly). TSMC’s failure to invest aggressively enough over the last several years will, in the end, give Intel the single most important thing it needs to become a viable competitor: the customer who did more than any other to make TSMC into the leader in the first place. This Update will be available as a podcast later today. To receive it in your podcast player, visit Stratechery . The Stratechery Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly. Thanks for being a subscriber, and have a great day!

0 views
Zak Knill Yesterday

LLMs are breaking 20 year old system design

The ‘cloud-native’ architecture of the last decade is built on a 20-year-old assumption: that state lives in the database, and compute is stateless. If you want to scale, you scale the database vertically (get a larger machine) [1] and you scale your application servers horizontally (add more boxes). Any request can hit any server, the load balancer doesn’t care, and the database is the single source of truth. [1] Or you design the database schema around partitioning the data.
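A minimal sketch of that assumption (illustrative only; an in-memory map stands in for the database):

```typescript
// The 20-year-old pattern: handlers hold no state between requests,
// so any replica behind the load balancer can serve any request.
// A Map stands in here for the real database.
import * as http from "node:http";

const db = new Map<string, number>(); // pretend this is Postgres

const server = http.createServer((req, res) => {
  // No instance-local state is consulted: read, mutate, write back.
  const userId =
    new URL(req.url ?? "/", "http://localhost").searchParams.get("user") ?? "anon";
  const count = (db.get(userId) ?? 0) + 1;
  db.set(userId, count);
  // Kill this process and start another: nothing is lost, because the
  // "truth" lives in the database, not in the server.
  res.end(`visit #${count} for ${userId}\n`);
});

server.listen(8080);
```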

0 views
Ginger Bill Yesterday

The Aesthetic Problem of Namespacing

Karl Zylinski's Tom's Namespaces: An Odin Fanfic is an excellent exploration of the namespacing problem in imperative programming languages such as Odin. I highly recommend reading that article before reading this one! After sharing it on many comment forum sites, I've concluded there's no real "solution." Association: The article uses a simple example. The aesthetic argument against this form of namespacing is that it looks fine with short 1/2-word-length type names, but becomes unwieldy with longer ones. The proposed fix ...

0 views
Evan Hahn Yesterday

Open Link in Unloaded Tab, a little Firefox extension

In short: I just published Open Link in Unloaded Tab , a little Firefox extension that adds “Open Link in Unloaded Tab” to the right-click context menu. In Firefox, you can unload tabs to save system resources. But there’s no way to open a new tab in the unloaded state…until now! I built a very simple extension that adds a new option to do this. (It even has a cute icon which I paid ~$15 for.) I’ve built one-off extensions before, but this is the first one I’ve submitted to the Firefox Add-ons directory. Download the extension here or check out the source code .
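I haven't read the extension's source, but the core of something like this is small. Here's a sketch of how it might work with Firefox's WebExtensions APIs (the menu id, title fallback, and tab-placement choices are my guesses):

```typescript
// background script sketch. Firefox's tabs.create() accepts
// discarded: true, which opens a tab without loading it; a title may
// only be set on discarded tabs.
declare const browser: any; // provided by Firefox at runtime

browser.contextMenus.create({
  id: "open-link-in-unloaded-tab",
  title: "Open Link in Unloaded Tab",
  contexts: ["link"],
});

browser.contextMenus.onClicked.addListener((info: any, tab: any) => {
  if (info.menuItemId !== "open-link-in-unloaded-tab") return;
  browser.tabs.create({
    url: info.linkUrl,
    active: false,                          // don't focus it...
    discarded: true,                        // ...and don't load it
    title: info.linkText || info.linkUrl,   // shown until first load
    index: tab ? tab.index + 1 : undefined, // next to the current tab
  });
});
```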

0 views
DYNOMIGHT Yesterday

What’s with all the slide decks?

News from the world of real jobs: Apparently, sometime between 10 and 20 years ago, it became standard for people to communicate by sending slide decks around. These slides are never presented. They aren’t intended to be presented. They’re born, they’re sent around, and they die. What? I stress, the question is not why (or if) people give bad presentations . The mystery is why everyone is using presentation software for communication that is not a presentation. Is it because we’re all dummies? I’m putting this theory first because I suspect that you, beloved readers, will favor it. True, if you ask people why they make slides instead of writing, they’ll usually say, “because nobody wants to read”. So there’s that. But I don’t consider this much of an explanation. Dummies though we may be, we’ve been like that a long time. If we entered the Slideocene 15 years ago, why then? Why not before? Did we get worse at reading? The Discourse seems to have decided this is true, but is it true, or just moral panic? Since 1971, the US has tested 13-year-olds to measure long-term trends in reading ability. This shows a slow improvement until 2012, then a slow decline, and finally a post-COVID drop. The declines seem too small and too late to explain our mystery. Since 2000, PISA has tested reading performance in 15-year-olds around the world. This shows a decline on average, but it’s smaller in rich countries and nonexistent in the United States. (It’s the same story for science and a bit more negative for math .) Among adults, data is scarce. Basic literacy is generally improving , and American time use data shows a decline in reading for pleasure from around 23 minutes per day in 2003 to around 16 minutes per day in 2023. But this seems to miss time people spend reading on their phones. So it’s unclear if people got worse at reading. It feels plausible that people now spend less of their adulthood grappling with complex written arguments, and so got worse at that. But there’s little firm evidence. Another obvious theory is that we now have computers and software and the internet. Without these things, it would be impossible to email slides to each other. This seems relevant! Yes, but we had those things for a while before slide culture really took hold. And think about the situation before computers. Photocopiers were ubiquitous in corporate offices by the mid-1980s, and mimeographs were around decades before that. If slides were really that great, people could have made them by hand. But no one did. Of course, making slides by hand is inferior. But it’s not that inferior. So slides can’t be that big of a win. And… that’s pretty much the end of the obvious theories. None of them are very satisfying. So let’s take a step back. Historically, how did the slide-as-document displace the memo? As best I can tell, this was driven by management consultancies. If you go back to 1960, they delivered detailed written memos. The memo was the product. They’d likely give a presentation as well, but that was a separate ancillary thing, likely done using flipcharts or chalkboards. In the 1970s, the memo was still the product, but consultancies started to enforce a top-down logical structure (the Pyramid principle ). Presentations shifted to acetate transparencies. Both memos and presentations often included hand-drawn graphics like the nine-box or growth-share matrices. In the 1980s, the memo was still the product, but presentations became increasingly lengthy and polished. 
Expensive computers like the Genigraphics started to be used to generate charts. The 1990s were when things started to shift. By then, PowerPoint was everywhere, and junior analysts were expected to create presentations themselves. Consultancies gradually started to notice that (1) clients didn’t always read the memos; (2) clients loved slides and passed them around long after the presentation was over; and (3) creating a memo and a polished presentation was a lot of work. They put more and more effort into the slides. McKinsey especially evolved towards treating slides as the primary product, and mostly stopped writing long memos. Other consultancies followed. During the 2000s, slides became even more ornate. Consultancies evolved their formatting rules, and created fancy data-dense charts. They learned that a 200 slide deck made clients feel like they got a lot for their money. Gradually, they oriented their entire business around slides. Projects would start with managers creating a template presentation with “ghost slides” and assigning different parts to junior analysts. Soon, this spread outwards, both from people who interacted with consultants and from the ex-consultant diaspora. People everywhere started thinking and communicating in slides, and now everything is slides, yay! That story makes slides-as-documents sound inevitable: People liked them, so they became popular. But there’s an alternative timeline in which we resisted the slide into slide maximalism. That timeline is Amazon.com, Inc. In 2004, Jeff Bezos famously instituted a no-presentations policy at Amazon. His logic was that slides hide poor reasoning and are a tool to persuade rather than inform. Instead, everyone involved with strategic decisions at Amazon needs to learn to write a six-page memo. Meetings begin with everyone sitting and silently reading one of these memos. Presentation software is not banned at Amazon. The ban is only for using it for internal meetings and decision-making. They use slides for external communication. There is no policy that prohibits someone from making slides and emailing them around. And yet, people don’t make slides and email them around, because it’s not part of Amazon’s culture. In effect, Amazon is a counter-movement. Most of the world decided that slides are good, because slides are easy. Bezos decided that writing is good because writing is hard. There are millions of articles explaining why Bezos’ policy is pure genius. They claim that constructing a narrative requires deeper analytical thinking and exposes flaws in logic. I want to believe those theories. I now realize they’re very similar to some of my arguments for why writing with too much formatting is bad. I’m not sure if writing is the secret to Amazon’s success. But Amazon is successful. This demonstrates that slide life is a choice, not technological destiny—institutions can choose writing over slides and flourish anyway. Warning: If you like your theories simple and mono-causal, you aren’t going to like this. Slides are a win, but a small one. The shift to slides wasn’t a “mistake”, it happened because people like it. But if sharing slides outside of presentations became illegal, this wouldn’t cause per-capita GDP to crash. That’s why people didn’t scratch slides into mimeograph stencils back in the 1950s. It wasn’t worth the modest effort. When computers and software showed up, it became easier to share slides. 
But people didn’t immediately shift to slides-as-documents because the win isn’t that big, because culture changes slowly, and because everyone had pre-existing skills for reading and writing documents. Consultancies happened to be in the economic niche with the strongest selection pressure to evolve towards slides-as-documents. So when making slides became cheaper, they shifted. Slowly, that norm spread outwards, people got used to communicating in slides, and here we are. Institutions can resist that norm and still be successful. If you take modern people and force them to read and write, they do just fine. Humans evolved to learn and communicate in a fragmented, interactive, and visual style. It’s hard to argue that any shift in that direction is a catastrophe. Except blogs. The decline of the blog must be arrested.

0 views
Sean Goedecke Yesterday

AI datacenters in space do not have a cooling problem

This year Elon Musk has started banging the drum about building AI datacenters in space. As the only person who owns a successful space company and a (moderately) successful AI company, this is a sensible way to boost his profile and net worth. Is it a sensible way to build datacenters? The first comment underneath most discussions of this always goes along these lines: “you obviously can’t build AI datacenters in space, because heat dissipation is really hard in space, and AI datacenters generate a lot of heat”. In general I am distrustful of snappy answers like these. It reminds me of the “AI datacenters obviously don’t use a lot of water, because cooling fluid circulates in a closed-loop system” argument: if it were true, there wouldn’t be a debate at all, just one side who understand the obvious point and another side who are stupid. Some arguments are like this! However, more often there’s a complicating factor that makes the snappy answer incorrect. In the water-use case, it’s that the closed-loop system has to itself be cooled by an open-loop evaporative chiller. What about the space datacenter case? First, let’s give the argument a fair shake. Although space is itself very cold, cooling is tricky because everything you’d want to cool is surrounded by vacuum. Heat transfer works in three ways:

Hot (i.e. fast-moving) atoms bump into other atoms, making them move and thus heating them up
Hot atoms physically move from one location to another (e.g. in a fluid or gas), staying hot and thus making their new location hotter
Hot objects emit photons (electromagnetic radiation), cooling themselves down and heating up other objects those photons collide with

Vacuum is an excellent insulator because it defeats the first two methods of heat transfer. If there are no (or very few) atoms surrounding an object, those atoms can’t move around or collide. That’s why vacuum is used as an insulator in thermoses, travel mugs, and so on. So how can space datacenters get rid of their heat? By doubling down on the third method of heat transfer. Although it’s much harder to do heat transfer via moving atoms around in space, it’s actually easier to do heat transfer via emitting radiation. Any good emitter is also a good absorber. A perfectly black object is the most efficient emitter, but it’s also the most efficient way to absorb photons from external sources, which is why black objects get hotter in the sun 1 . In space, the sun’s light is much easier to avoid, because there aren’t objects everywhere for it to bounce off. A shaded radiator can dump quite a lot of heat. It would still require putting more radiators in space than we’ve ever done before. There are plenty of writeups out there if you want to read through the numbers. This is a recent one that estimates ~2500 square metres of radiation area would be needed to serve 1MW of datacenter energy (much less than what it’d need in solar panels) 2 . A serious AI datacenter is around 100MW 3 , so we’d need 250,000 square metres of radiation area. The largest current radiator in space is probably the ISS, at around a thousand square metres. Is scaling that up by 250x a lot? Yes, but it’s not necessarily ridiculous . We currently have zero industrial operations happening in space, so there’s been no need to push the boundaries here. In the grand scheme of things, 250,000 square metres is not that big. By my very rough estimates, that’s between 100-500 Starship launches: a couple of years at SpaceX’s current launch cadence, or a few months at their (very optimistic) estimate of future launch cadence. Of course, you don’t just need radiators to put a datacenter in space. You need a similar quantity of solar panels, the GPUs themselves, and all kinds of other supporting equipment.
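As a rough sanity check on that ~2500 square metres per megawatt figure (my own back-of-envelope numbers under assumed conditions, not the linked writeup's): a radiator's output is governed by the Stefan-Boltzmann law.

```typescript
// Back-of-envelope Stefan-Boltzmann check of the ~2500 m²/MW figure.
// Assumptions (mine): one-sided radiator, emissivity 0.9, running at
// 300 K, and negligible heat absorbed back from the environment.
const SIGMA = 5.67e-8; // Stefan-Boltzmann constant, W/(m²·K⁴)
const emissivity = 0.9;
const tempK = 300; // roughly room-temperature coolant

const wattsPerSquareMetre = emissivity * SIGMA * tempK ** 4; // ≈ 413 W/m²
const areaForOneMegawatt = 1e6 / wattsPerSquareMetre;        // ≈ 2420 m²

console.log(wattsPerSquareMetre.toFixed(0)); // ~413
console.log(areaForOneMegawatt.toFixed(0));  // ~2420, same ballpark as 2500
```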
If a GPU dies in an Earth datacenter, you can go in and swap it out; if it dies in space, you just have to leave it dead and keep going with less capacity. It’s still wildly impractical to build AI datacenters in space. But it’s not impossible , and it’s certainly not impossible because of the cooling, which is a relatively minor component of the total mass that would have to be launched into space. In theory, black clothing would keep you slightly colder at night. ↩ Nobody ever talks about how impossible it would be to power space datacenters, despite the fact that you’d need to launch over triple the solar panel area into space than radiation area. I guess because people know solar panels exist and that the sun shines in space. ↩ The first gigawatt AI data centers are coming online this year, but 100MW is a fair estimate for a current pretty-large-but-not-enormous AI datacenter. ↩

0 views

Patch Tuesday, May 2026 Edition

Artificial intelligence platforms may be just as susceptible to social engineering as human beings, but they are proving remarkably good at finding security vulnerabilities in human-made computer code. That reality is on full display this month with some of the more widely-used software makers — including Apple , Google , Microsoft , Mozilla and Oracle — fixing near-record volumes of security bugs, and/or quickening the tempo of their patch releases. As it does on the second Tuesday of every month, Microsoft today released software updates to address at least 118 security vulnerabilities in its various Windows operating systems and other products. Remarkably, this is the first Patch Tuesday in nearly two years in which Microsoft is not shipping any fixes for emergency zero-day flaws that are already being exploited. Nor have any of the flaws fixed today been previously disclosed (previously disclosed flaws potentially give attackers a heads-up on how to exploit the weakness). Sixteen of the vulnerabilities earned Microsoft’s most-dire “critical” label, meaning malware or miscreants could abuse these bugs to seize remote control over a vulnerable Windows device with little or no help from the user. Rapid7 has done much of the heavy lifting in identifying some of the more concerning critical weaknesses this month, including:

CVE-2026-41089 : A critical stack-based buffer overflow in Windows Netlogon that offers an attacker SYSTEM privileges on the domain controller. No privileges or user interaction are required, and attack complexity is low. Patches are available for all versions of Windows Server from 2012 onwards.
CVE-2026-41096 : A critical RCE in the Windows DNS client implementation worthy of attention despite Microsoft assessing exploitation as less likely.
CVE-2026-41103 : A critical elevation of privilege vulnerability that allows an unauthorized attacker to impersonate an existing user by presenting forged credentials, thus bypassing Entra ID. Microsoft expects that exploitation is more likely.

May’s Patch Tuesday is a welcome respite from April, which saw Microsoft fix a near-record 167 security flaws . Microsoft was among a few dozen tech giants given access to “ Project Glasswing ,” a much-hyped AI capability developed by Anthropic that appears quite effective at unearthing security vulnerabilities in code. Apple, another early participant in Project Glasswing, typically fixes an average of 20 vulnerabilities each time it ships a security update for iOS devices, said Chris Goettl , vice president of product management at Ivanti . On May 11, Apple shipped an iOS update that addressed at least 52 vulnerabilities and backported the changes all the way to iPhone 6s and iOS 15. Last month, Mozilla released Firefox 150 , which resolved a whopping 271 vulnerabilities that were reportedly discovered during the Glasswing evaluation. “Since Firefox 150.0.0 released, they have been on a more aggressive weekly cadence for security updates including the release of Firefox 150.0.3 on May Patch Tuesday resolving between three to five CVEs in each release,” Goettl said. The software giant Oracle likewise recently increased its patch pace in response to its work with Glasswing. In its most recent quarterly patch update, Oracle addressed at least 450 flaws, including more than 300 fixes for remotely exploitable, unauthenticated flaws . But at the end of April, Oracle announced it was switching to a monthly update cycle for critical security issues. On May 8, Google started rolling out updates to its Chrome browser that fixed an astonishing 127 security flaws (up from just 30 the previous month). Chrome automagically downloads available security updates, but installing them requires fully restarting the browser. If you encounter any weirdness applying the updates from Microsoft or any other vendor mentioned here, feel free to sound off in the comments below. Meantime, if you haven’t backed up your data and/or drive lately, doing that before updating is generally sound advice. For a more granular look at the Microsoft updates released today, check out this inventory by the SANS Internet Storm Center .

0 views
ava's blog Yesterday

the flaws of digital consent management

Following up on my agentic consent piece , a reader (Shugo Nozaki) shared with me some interesting perspective around human consent (both by email and post ) that I felt was worth exploring and discussing! He pointed out that while our current model of consent still relies on direct human perception and some understanding of what is being agreed to, it is already quite fragile in most areas. He rightfully points out that the idea of human consent currently resting on real understanding is a polite fiction, as most consent flows are not really designed to be read. They are designed to get users past the gate. So he asks: How much of the current standard are humans actually meeting today? The reality is definitely that companies have squandered our trust and curiosity with the way consent mechanisms and Privacy Policies, Terms of Service etc. have been designed. Cookie banners keep popping up and sometimes don't seem to work correctly, other consent forms make it as annoying as possible to opt out, and any lengthier text is full of dry legalese. It has caused quite the consent fatigue, and for what? Unfortunately, the wrong things seem to be incentivized: Agreement to data processing is strategically beneficial to companies, so optimizing an easy workflow to not consent is not in their best interest. And the following is just a hypothesis of mine, but I firmly believe that companies have used the little leeway they were given to implement privacy law requirements to make things an absolute hassle in the hopes that they'd be seen as a failed experiment and users would complain until things were abolished again. When we actually read the laws, recitals, recommendations by organizations etc., we quickly see that we do not have to live with these unpleasant implementations; yet, companies get to point at laws for a job done badly and go "They made us do it!". Multi-layered approaches 1 have been acknowledged and recommended for a while now, but implementation of them is still rare. For many companies, these texts seem to be a one-time thing that is invested in once and never again, instead of being the living document they should be. "If it works and we fulfill the requirement, why change it?" At the same time, laypeople unfamiliar with law have to work with heuristics and take characteristics as signs of quality when it comes to legal texts like PP's, ToS and more: If it's very long, it must be complete and enough effort must have been invested, and if it has a lot of complicated jargon, it must be professional and correct. So that is what companies want to see, and weirdly, what some very invested users are reassured by. A short, casual version might seem incomplete, as if the company doesn't take privacy seriously. The money side is the same: What do law firms and their clients feel more comfortable charging and paying a lot of money for - the short, casual-toned text that people will understand, or the huge, dry and difficult-to-read one that comes across better? How might companies feel if the person they hired to write these produces a sloppier-sounding one than their competitor has? Will it just come off as unprofessional to customers? In the case of small businesses needing to save money, they're usually confronted with the question: Why hire anyone to make it more understandable and engaging to read while fulfilling the law, if you can just copy a trusted template online and fill in the blanks?
No one wants to risk legal problems with a text that does not cover enough, so understandably, they resort to the most intense-sounding texts, and they let consent and cookies be implemented and handled by big consent management companies because otherwise, it can be really difficult; but those companies sell the promise of increased agreement numbers, so the service is designed around that metric. These wrong incentives and constraints have caused lost trust that is hard or, in some cases, impossible to get back. Most people weren’t born yesterday, and they have lived through years of shitty implementations. What would convince them that the text is worth reading or that the next one will be better?

Consent management in general offloads a lot of data management onto individuals who are seldom correctly informed. On one hand, choice is what we want; on the other hand, it is also willfully ignorant of the collective issues. For now, the move is: Giving the user the option to read and agree or disagree is enough. We cannot force anyone to do anything, and if they choose to forego information or agree to make the process go faster because they are tired, then we have to accept that. Having a choice is also about the option to make a bad decision, or one you regret later, or one you wouldn’t have made if you were your best self.

Fittingly, Shugo Nozaki also poses the following idea in email: “If the user policy is explicit enough, an agent may apply it with a kind of rule-following integrity that tired or distracted humans often fail to maintain. [...] How [can] we represent a user’s intent, boundaries, and escalation rules clearly enough for an agent to act on them?” He brings up the option of a machine-readable user policy: a set of constraints that defines what an agent may accept, must reject, or should bring back to the user. We’ll likely have to move in that direction, but it still brings legal challenges, as broad consent isn’t valid; consent needs to be granular and specific. A user could set an agent to always agree to cookies via personalization/custom instructions set as userpolicy.md, for example, but as they did not get to consent to each specific situation (website, their partners, their terms), its validity is questionable, and also difficult for companies to prove in court. Ideally, an agent would have to ask the user on first “visit” of a website how to proceed for current and further uses of that website. So the more the agent gets around, the less this needs to happen.

From a design perspective, even just asking the user to set up a policy themselves can be time-consuming, and I assume not quite feasible for people who are not embedded in the legal context or very passionate about privacy. There is too little context and information about why a decision matters, what could happen, what is tracked, and what different kinds of situations could come up. I consider it unrealistic that the average user would introduce different categories of consent based on, for example, whether it is a blog or a shopping website, whether the banner says 4 partners or 1500, and other distinctions that would enable more granular consent. All upfront. An agent could, on first setup, lead a user through it, but that could also be seen as annoying and skipped. Issues around the modalities of being asked and informed still remain: Do we trust the agent to relay information accurately? Will there be hidden instructions to influence the bot in what it tells the user?
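To make that idea concrete, here is a rough sketch of what such a machine-readable policy could encode. This is purely hypothetical: the type names and fields are invented for illustration, no such standard exists, and (as discussed above) none of this resolves the granularity problem on its own.

```typescript
// Hypothetical sketch of a machine-readable consent policy for an agent.
// All names and fields are invented for illustration; this is not a standard.
type ConsentDecision = "accept" | "reject" | "ask_user";

interface ConsentRule {
  description: string;
  purposes: string[];        // e.g. "necessary", "analytics", "advertising"
  maxThirdParties?: number;  // escalate instead of deciding past this count
  decision: ConsentDecision;
}

interface UserConsentPolicy {
  fallback: ConsentDecision; // what the agent does when no rule matches
  rules: ConsentRule[];      // evaluated top to bottom, first match wins
}

const policy: UserConsentPolicy = {
  fallback: "ask_user", // be conservative: unknown situations go back to the user
  rules: [
    { description: "Strictly necessary cookies", purposes: ["necessary"], decision: "accept" },
    { description: "Any advertising or tracking", purposes: ["advertising"], decision: "reject" },
    { description: "Analytics with few partners", purposes: ["analytics"], maxThirdParties: 5, decision: "accept" },
    { description: "Analytics with many partners", purposes: ["analytics"], decision: "ask_user" },
  ],
};
```

Even with something like this, the agent would still have to map each banner’s actual purposes and partner counts onto these categories, which is exactly where the open questions come in.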
And would this approach really be less annoying than the existing method, when basically everything needs to be brought back for the user to decide at first? How will we reliably handle agents informing the user of changes in policies and the like? Yes, ideally, agents and other means could handle consent management better than a fatigued and annoyed human. But what counts primarily for the laws around data processing consent is that, without a middleman, there is no doubt that a user directly had the chance to take notice and inform themselves, even if they chose not to; it often matters little whether they actually read and understood, as that cannot be proven or checked (and again, freedom to make bad choices). The waters get muddier when there is a translator in the middle that can sway you or skip it entirely, and no directly presented option. I’m sure on the other side, companies will also be interested in not having their consent workflow and information maligned by a bot either. It will also be interesting to see how worthwhile browsing data will still be if the metrics track the behavior of bots, not humans.

I’ll hopefully have more to say about this soon, as I will be at a conference which has some sessions about consent management in the age of AI that I will attend! :)

Reply via email Published 12 May, 2026

In which the first text version a user sees is in easy language, casual and short, and if they want to see it, they can have a lengthier, more detailed version, down to another layer, where the heavy legalese ones we are used to pop up. ↩

1 view
Unsung Yesterday

Rug pulled

The best thing the crypto industry coined might have been the expression “rug pull,” but I’m not happy about that. To me, it perfectly describes how it feels when an app or a website changes your scroll position with no rhyme or reason. You’ve seen it so many times before:

you start reading a webpage, but it throws you back to the top when JavaScript finishes loading
you start reading a webpage, and ads or other stuff appear and shove you around up and down
you press a back button and that goes to the previous page… but to its top, rather than where you actually were
you zoom in or out, the position isn’t recalculated properly, and suddenly you see a different part of the page and lose your orientation

To me, the scroll position is as sacred as the mouse pointer position, given the two are related whether Scroll Lock is around or not: one is you, the other is the world around you. But there are moments when software scrolling with the user or even for the user is appropriate, and here’s one example: When you switch tabs, the content below should always scroll to the top, but it doesn’t here. Here’s an even worse example, also from Settings: Why should the content scroll to the top here? Because in these situations, the fact that the content container gets reused is just a technical quirk of the implementation. From the user’s perspective, this is all new content, and new content should always start at the top. Otherwise, things will get confusing really fast; imagine it especially in the default configuration without scrollbars, where you might assume result number 6 is the first result, or completely miss the most important, topmost options. (Before you ask: Yes, I also see this in Tahoe.)

#interface design #mouse
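In code, the rule is almost embarrassingly small. A minimal sketch, assuming a hypothetical container element and render function rather than any real settings app:

```typescript
// Hypothetical stand-ins: a reused scroll container and a render function.
declare function renderTabContent(el: HTMLElement, tabId: string): void;

const container = document.querySelector<HTMLElement>("#tab-content");

function switchTab(tabId: string): void {
  if (!container) return;
  renderTabContent(container, tabId);
  // The reused container is an implementation detail. From the user's
  // perspective this is new content, so it starts at the top.
  container.scrollTop = 0;
}

// The inverse rule matters just as much: while the user is reading,
// late-loading scripts and ads must preserve the scroll position,
// never reset or shift it.
```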

0 views
Jim Nielsen Yesterday

Building Software Requires Digestion

Here’s Scott Jenson in his insightful piece “The Ma of a New Machine”: the chatbot interface [makes us] feel like deep cognitive work is happening. But the interface is fundamentally reactive. It spits complex text at you, you skim it quickly, and you immediately type a reaction to keep the momentum going. My hypothesis is that the very structure of the chatbot interface (type, read, type again) actively discourages reflection. When you are moving too fast, you get stuck in a groove. You literally need to take a break, step back, and basically step out of this groove so you can view the problem from a new angle.

We’ve all walked away from a tough problem only to have the solution arrive unbidden in our thoughts later in the day. In my decades-plus experience designing and developing software, I can’t count the number of times I’ve stepped away from a problem at the computer only to return and find the problem magically resolved in my brain. But the human-computer interaction of prompting doesn’t encourage the use of that skill in our subconscious. In fact, I think it actively discourages it (our tools shape us).

Scott talks about a Japanese concept called “Ma,” which is about deliberately creating pauses between things. He quotes Studio Ghibli director Hayao Miyazaki, who says “if you just have non-stop action with no breathing space at all, it’s just busyness.” Here’s Scott (emphasis mine): Ma provides a framework for understanding that a pause is not a lack of work. As humans we need pauses. We need space to breathe. We need time to digest. Pausing, breathing, synthesizing, digesting — these are all necessary work.

“Digestion” is an interesting word here. Putting food in your body is merely the beginning of feeding yourself. Our bodies must digest that food, break it down, absorb it, and get rid of the waste. But that’s all happening mostly without our attentive oversight, so I guess it’s not “real” work — right? Building good, healthy software requires digestion.

Reply via: Email · Mastodon · Bluesky

0 views
Kev Quirk Yesterday

Upgrading My Home Internet to Full Fibre

As many regular readers know, we live in the North Wales countryside, which means it can take time to get the latest and greatest when it comes to technology. As a result, we were previously “limited” to FTTC (fibre to the cabinet), which had a max speed of 70Mbps, so we got okay internet speeds. But then I saw the ISP vans in the village, and I asked them what they were doing - “oh, we’re upgrading the village to full fibre” she said. I had to have it! As soon as FTTP (fibre to the premises) was available, I placed the order with my ISP (who offered me a great deal that’s only £5 per month more), and the speeds are now much higher on paper.

In all honesty, though, I haven’t noticed the difference. We didn’t have any buffering issues when watching things like Netflix or Apple TV, so I’m not really sure why I upgraded, in hindsight. I thought it would be this incredible difference where my internet would then be rapid, but the truth is, it’s completely imperceptible. I remember when I upgraded from a 56k modem to ~2Mbps broadband and it blew my mind. I was thinking this would be the same, but no. I do think the increased upload speed is going to come in handy when it comes to things like syncing my private git repos back to my Synology, but aside from that, there’s not much in it. Had I paid full price (~£20 more per month) I don’t think I’d have been too happy, but since I got a good deal, I’m not too bothered.

Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

1 view
Playtank Yesterday

Complexity and Player Agency

There’s sometimes an idea that a systemic game is inherently complicated to play. If you simulate a world, you must simulate everything that can potentially exist in that world. If you don’t want to constrain the player, and you want to lean strongly into player agency, this seems to lead naturally to a high degree of complexity. In some ways, yes. You need to plan for more redundancy than a feature-rich game may need. But complexity itself is not a goal, and I’d even argue that some measure of complexity is a strength for this type of design. How to make games that are high in player agency without making them incredibly complicated, however — that’s the topic for this post.

Let’s set some terminology before moving ahead. Complicated things are difficult, intricate, or require specialised knowledge to understand, but are predictable and can be solved. A complicated game is hard to interact with and may require that you internalise deep mechanical interactions before you can understand what kinds of decisions you need to make. This is not always intentional, but can be the product of a badly designed user interface. Complex things, on the other hand, are composed of interconnected parts where the whole is greater than the sum of those parts, often unpredictably. As you can tell, complex is extremely close to emergent. The difference between complicated and complex is that you don’t need to understand the whole of a complex system: it’s enough to internalise the system’s consistent elements. This last point is what we’ll circle back to later.

A system’s state-space complexity is measured by counting the total number of states that are reachable from an initial configuration. A linear game will have a small initial state-space complexity, because there isn’t enough branching to introduce it. Picture a game where you make setup choices at the start, like the excellent Caves of Qud. You pick a character, difficulty, starting location, and you have some other choices you can make as well. Some of these choices are unknown to you the first time you play, meaning that you may pick what sounds cool or what you were told was the optimal choice in a wiki guide. Each such choice affects the resulting state-space complexity, particularly if there are synergies between them. A complicated game with high state-space complexity will take time to understand fully, but may instead lead to boredom once all permutations have been understood and the layers of uncertainty overcome. A complex game with high state-space complexity, on the other hand, will rarely be fully explorable even after years of play. There can always be emergent effects left to discover.

The mental resources required to solve a particular problem are what lead to potential analytic complexity. When there are so many different elements to keep track of that the outcome might as well be random, it can be argued that there is too much analytic complexity. But this is of course a matter of taste more than anything. Some players love this type of complexity and actively seek it. For a complicated game, analytic complexity can be caused by the intricacies of the interface or having to understand every dynamic before you make decisions. For a complex game, high analytic complexity will make things feel random and unpredictable, because it’s too hard to internalise the hows and whys.
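As a side note, the state-space measure described above is easy to make concrete: enumerate the states reachable from the initial configuration with a breadth-first search. A toy sketch, where the entire “game” is invented for illustration:

```typescript
// Toy state-space complexity: count states reachable from an initial
// configuration via breadth-first search. The "game" here is invented:
// a state is (hp, keys); you can take a hit or grab a key.
interface State { hp: number; keys: number; }

const encode = (s: State): string => `${s.hp},${s.keys}`;

function successors(s: State): State[] {
  const next: State[] = [];
  if (s.hp > 1) next.push({ hp: s.hp - 1, keys: s.keys });   // take a hit
  if (s.keys < 3) next.push({ hp: s.hp, keys: s.keys + 1 }); // grab a key
  return next;
}

function countReachableStates(initial: State): number {
  const seen = new Set<string>([encode(initial)]);
  const queue: State[] = [initial];
  while (queue.length > 0) {
    const current = queue.shift()!;
    for (const n of successors(current)) {
      const key = encode(n);
      if (!seen.has(key)) {
        seen.add(key);
        queue.push(n);
      }
    }
  }
  return seen.size;
}

// 5 hp levels x 4 key counts = 20 reachable states.
console.log(countReachableStates({ hp: 5, keys: 0 }));
```

Every extra setup choice or synergy multiplies that count, which is the point the Caves of Qud example makes.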
No amount of analytic thrill can happen if you don’t understand what is going on or if there is simply so much information that it becomes overwhelming. Whenever someone derogatorily refers to playing a certain game as “playing an Excel sheet,” for example, what they’re probably complaining about is the game’s cognitive complexity. There are too many pieces of information, they are hidden too deep in the game’s menus, or they are otherwise difficult to access at the right time. Complicated games get here by having menus with submenus, strange keywords you must learn, and other forms of cognitive boundaries. Complex games will arrive at cognitive complexity by hiding effect from cause, so that it’s hard to figure out where emergent effects are coming from.

Combining gaming with adulting isn’t always easy, making your wetware memory a crucial component. If you played a great game two months ago, and now you must recall what it was about, you can easily forget important parts of the narrative, which button to press to do the special attack, etc. Both complex and complicated memory problems are often of a related variety, where you need to remember the meaning of A before you can understand B. When you come back to a long-running TV show and you see a familiar face you can’t quite place, this is the same thing. Once your memory “clicks,” you’ll understand what the recurrence of this character means, but before then you’re probably stumped. The same is true for narrative elements and for synergies.

Another instance of wetware weakness is muscle memory. While playing through a game with many interconnected mechanics and button combinations, you may learn it, even internalise it, to the point that you can play it almost intuitively. Practice makes perfect, as the saying goes. But some players have a harder time with this than others, and where you may be one of the players who will instinctively say “git gud,” others will simply quit the game and never touch it again, because button gymnastics is not why they play games. Complicated skill thresholds will gate elements of the game behind your performance, perhaps making it so you can’t proceed in the game until you’ve killed the next boss or used the new mechanic. Complex skill thresholds will generally exist because you haven’t realised how the game’s rules can be applied, and you need to learn it. Some players get “hardcoded” into what they believe is the right way to play, and will frustratingly lock themselves into trying it again and again rather than attempting something new. This is a good example of a complex skill threshold.

The systemic focus on this blog began in earnest with my three-part treatment of “immersive sims,” back in 2022. But the immersive sim is a very interesting case study, since its vagueness has led to countless labelling discussions. This brings us to a kind of complexity that is maybe more disruptive than we usually admit: conventions. For most console gamers, using dual analogue sticks is second nature. But that doesn’t acknowledge the many hours of training you need to actually master it. Genre conventions are similar, in that they take a lot of time to internalise, and for people who haven’t internalised them they become tough obstacles to beat. Conventions can be game-, genre-, or even developer-specific, and sometimes as game designers we’re not aware of them to the extent that we should be.

“Agency: The player can control their decisions and those decisions have consequences in the game world.
(And hopefully, there’s enough information in the world to make said decisions.)” Chris Siegel

The vague, almost nonsensical definition of player agency that I personally prefer is that a game respects player intent. I like it because it has fairly deep-rooted implications. It means the player must be able to form intent — they must know what’s possible and what’s not. It also means that the player is the driver of what happens, not the game.

Choices and consequences (C&C from now on) were touted as a major game feature for a while, during BioWare’s Mass Effect heyday. C&Cs are the clearest, perhaps most fundamental, expression of player agency. Bob Case, MrBtongue on Youtube, talked about two key types of C&Cs that directly map to the authorship-emergence scale: C&Cs for replayability or simulation.

“Complete Mordin’s loyalty mission and he’ll live through the game’s ending. Ignore it and he may die in a certain cutscene. This is intended to increase the game’s replayability, as in play the game again, make different choices, see a different ending. […] Why would Mordin’s hypothetical loyalty level make any difference in whether or not he happens to take a stray bullet?”

In today’s gamedev vernacular, replayability is often equated with seeing more content. Content is certainly the hallmark of replayability C&C, but let’s look at the contact point with player agency.

“You’re using game mechanics to attempt to make the game world behave as though it was a hypothetical real world. […] The first two Fallout games became classics largely because of the quality and reactivity of their settings. The world of Fallout seemed like a real place, because it reacted to the actions of the player in a realistic way.”

One of Bob Case’s examples of C&C in the first Fallout is that, if you talk about where you’re from, this can end up having your whole vault looted and destroyed. A consequence that makes intuitive sense in a cutthroat environment, but is never advertised as a choice. It just happens. This is the kind of systemic player agency I personally like, and here are some of the things that can make it sing.

Many games offer high degrees of player agency using smoke and mirrors. Harvey Smith talked about Dishonored and the assassination of High Overseer Campbell, going into some detail on how all of it was constructed through level design and scripting. You have many options and you may feel that you’re outsmarting the game’s developers, but all of those options were carefully staged. Effectively, Dishonored is far to the left of the Content – Experience scale. This is great! That assassination is one of the most iconic moments in the Dishonored franchise. You don’t have to make everything actually systemic to create a systemic experience. The only thing that’s unfortunate with a content-driven setup is that it’s directly content-limited — you will never offer more variety than you have the content production capability to produce.

To contextualise our gaze into the player agency abyss, we’ll look at the three main “loops” most games can be said to have: Micro, Macro, Meta. These represent the second-to-second, minute-to-minute, and hour-to-hour interactions you have with the game. Agency means very different things in different loops, and as you can hopefully see, its implications vary. Most video games have micro agency: choose which action to use, where to go, how quickly to proceed, and which events to prioritise.
Consequences for failure are usually transactional in nature: you lose a few points of health or need to spend in-game coin. Games that have low micro agency will stop until you perform the right action. Games with high micro agency will allow you to progress at your own pace.

Macro agency is about being able to choose where to go. Consequences for failure are almost exclusively a cost in time. Watch the loading screen, walk back to your dead body, climb up from your missed jumping challenge, or try the dialogue tree a second time. Games with low macro agency will force checkpoint reloads if you make the wrong choice. Games with high macro agency will let you choose which missions to pick and which ones to abandon.

Meta agency is less common, since it’s the layer where most modern games will lean into linearity, and where production is most expensive. It’s about allowing the player to alter a story’s outcome, to turn into a villain, or to explore character builds that were never quite intended to be played in the ways you play them. Consequences for failure at this layer may mean the end of the game, or that you need to start from scratch with a new character. Games with low meta agency will force you into a linear story or community-enforced meta. Games with high meta agency will let you kill the princess and save the dragon (one of my favorite examples), or discover new metas even deep into the game’s life cycle.

“I wanted the player to play the game enough so that they could intuit the health of their ecosystem and understand how to approach it moving forward. But players generally don’t do that. That’s not how they start. They start by using their own intuition not the game’s intuition.” Andy Schatz

Think back to the point on internalising consistent elements. There are some things that almost always hold true in games, that most of us have to learn the hard way:

Players don’t read text.
Players don’t play the way we think.

What this effectively means is that Andy Schatz’s point on intuition, above, and the idea that consistent elements can be internalised are directly related. To put it another way, we can’t know with certainty what the player’s intent will be. Because of this, we can only provide more player agency by covering enough ground with our simulation.

Simulation is a scary word. Many times when I’ve lectured on systemic design, the interpretation has been that everything must be possible, or the idea of emergence isn’t real. This is what makes it too complicated for realistic implementation, seems to be the implication. But this is not true. A high degree of emergence leads to a complex game, yes. But though emergence is complexity, it doesn’t have to be complicated. As has been covered in previous posts, all you really need is a strong mental model and a good understanding of the state-space needed to represent it. No one thinks L.A. Noire lacks agency because you can’t use your gun at any time. No one thinks Thief: The Dark Project lacks agency because you can’t cook an omelette between thefts. Remember: permissions, restrictions, and conditions. If we can find rules that fit the mental model, we can never go wrong.

“If people compare our combat to Half-Life, we’re dead; if they compare us to Thief’s stealth, we’re dead; if they compare our RPG elements to BioWare’s latest, we’re dead. But if they get that they can decide how to play, to do any of those they want, we might rule the world.” Warren Spector

It’s fine if you don’t simulate everything to be able to create your systemic game. You’re not expected to.
What’s important is that you understand what choices the player will want to make and that you nudge them in that direction. Just don’t require them to listen to your nudges if they don’t want to.

Unique rewards. A common form of replayability C&C is to provide you with gameplay rewards that you can’t get any other way. A special weapon, a unique armor, or even just a chunk of virtual gold. For some achievers, this is enough to warrant a replay.

Outcome response. You get some kind of feedback from the game based on the choice you make, and if you want to see another outcome you need to replay the level or even the whole game.

Downstream effects. It’s quite common to tally the result of choices made during the game and make a difference at the end rather than throughout play. For example, the Mass Effect 2 ending and how it’s affected by party loyalty, or a high/low chaos ending in Dishonored.

Branching impact. If you decide to destroy the town of Megaton in Fallout 3, it won’t exist anymore. If you decide to kill a character, it stays dead and won’t show up in later scenes. Replayability C&C will usually be limited to few but very obvious choices like this.

Irrevocability. If you make the choice, you suffer the consequences. There are no reversals or revivals to be had: the result is irrevocable.

Defiance. Allowing players to disregard instructions and then providing consequences for it. If you wait too long or if you make enemies with the wrong person. Can be good, can be bad, but it should make intuitive sense.

Vandalism. Consciously breaking the game is the hardcore version of defiance. When you take a “what if?” to the edge and beyond.

Optional. Choosing to exclude yourself from things you don’t care much about, say stealth, while focusing on things you do like, say combat. This is something that truly distinguishes games with high player agency from other games.

Versatile. Often goes hand in hand with optionality, since versatility needs it. Versatility is about offering more than one solution to any given problem. Chris Crawford found it so essential to game design that he didn’t consider puzzles to be games.

0 views
neilzone Yesterday

Fixing a proxying problem with my HomeAssistantOS installation by replacing nginx proxy manager

tl;dr: I removed the “nginx proxy manager” add-on, and replaced it with the Let’s Encrypt add-on followed by the nginx add-on.

A couple of months ago, I moved my HomeAssistant installation to HAos. I think that it is fair to say that I was not overly pleased with this. Honestly, I preferred the “Core” python-venv approach, but I also wanted a “supported” installation, and so I switched to HAos. I got it up and running okay, and I thought that I had got proxying working too, using an add-on called “nginx proxy manager”. This is not something that I had used before; I’d rather just configure nginx myself.

Well, either I got something wrong, or it just does not work very well, as I kept having problems using HomeAssistant, with it stuck on a “loading data” screen, or simply not responding. This bugged me for quite a while. Annoyingly, the logs available to me within HAos were unhelpful. I couldn’t spot anything indicating a problem. Using the console in my web browser, I noted that some files were not loading correctly, but why that was the case, I wasn’t sure. I thought that I’d had a similar issue with my “Core” installation years ago, which I got down to a particular setting in a particular file, but that looked correct here (which I was able to check using the SSH add-on). I tried various parameters in the nginx proxy manager add-on, but to no avail.

In the end, I tried removing the nginx proxy manager add-on, and replacing it with the Let’s Encrypt add-on (which I installed, configured, and ran first), and then the nginx add-on. And it immediately started working correctly. So I don’t know exactly why my original set-up was not working, but at least it is working better now.

0 views

Where Are All The Data Centers?

If you liked this piece, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. My Hater's Guides To Private Credit and Private Equity are essential to understanding our current financial system, and my guide to how OpenAI Kills Oracle pairs nicely with my Hater's Guide To Oracle. My last piece was a detailed commentary on the circular nature of the AI economy — and how the illusion of AI demand is just that, an illusion. Subscribing to premium is both great value and makes it possible to write these large, deeply-researched free pieces every week.

During every bubble there’s one very obvious thing that keeps happening: things are said, these things are repeated, and are then considered fact. Sam Bankman-Fried was the smiling, friendly, “self-made billionaire” face of the crypto industry. NFTs were the future of art, and would change the way people think about the ownership of digital media. The actual evidence, of course, never lined up. NFT trading was dominated by wash trading — market manipulation through two parties deliberately buying and selling an asset to raise the price. Cryptocurrency never took off as anything other than a speculative asset, and altcoins are effectively dead. Sam Bankman-Fried was only a billionaire if you counted his billions of illiquid FTX tokens, but that didn’t stop people from saying he wanted to save the world weeks after the collapse of Terra Luna, a stablecoin that he himself had bet against and may have helped collapse. Three months before his arrest, a CNBC reporter would fly to the Bahamas to hear SBF tell the story of how he “survived the market wreckage and still expanded his empire,” with the answer being that he had “stashed away ample cash, kept overhead low, and avoided lending,” as opposed to the truth, which was “crime.”

The point is that before every scandal is somebody emphatically telling you that everything’s fine. Everything seems real because there’s enough proof, with “enough proof” being a convincing-enough person saying that “most of FTX’s volume comes from customers trading at least $100,000 per day,” when the actual volume was manipulated by FTX itself, and the “$100,000 a day in customer funds” were being used by FTX to prop up its flailing token. In the end, the “proof” that SBF was rich and that FTX was solvent was that nobody had run out of money and that nothing bad had happened to anybody. SBF was a billionaire sixteen times over because enough people had said that it was true.

Anyway, one of the most commonly-held parts of the AI bubble is that massive amounts — gigawatts’ worth — of data centers have both already been and continue to be built… …but then you look a little closer, and things start getting a little more vague. While Wood Mackenzie’s report said that there was “25GW of data center capacity added to the funnel” in Q4 2025, it does not say how much came online. CBRE said back in February that “net absorption of 2497MW” happened in primary markets in 2025, with other reports saying that somewhere between 700MW and 2GW of capacity was absorbed every quarter of 2025. At the time, I reached out for any clarity about the methodology in question and received no response. Okay, so, I know data centers are getting built and that they exist. I believe some capacity is coming online.
But gigawatts? Or even hundreds of megawatts? How much data center capacity is actually coming online? Why did Anthropic get so desperate it took on a years-old data center, xAI’s Colossus-1, full of even older chips from a competitor — one whose CEO described the company as “evil,” and that’s currently facing a lawsuit from the NAACP over allegations the facility’s gas turbines are polluting black neighborhoods? Remember, Colossus-1 is an odd data center, with around 200,000 H100 and H200 GPUs and an indeterminate number of Blackwell GB200s, weighing in at around 300MW of total capacity… which isn’t really that much if we’re talking about gigawatts being built every quarter, is it?

So, I have two very simple questions to ask: how long does it take to build a data center, and how much data center capacity is actually coming online? These simple questions are surprisingly difficult to answer. There exists very little reliable information about in-progress data centers, and what information exists is continually muddied by terrible reporting — claiming that incomplete projects are “operational” because some parts of them have turned on, for example — and a lack of any investor demand for the truth. Hyperscalers do not disclose how many data centers they’ve built, nor do they disclose how much capacity they have available. I find this utterly inexcusable, given the fact that Amazon, Google, Meta and Microsoft have sunk over $800 billion in capex (and more if you count investments into Anthropic and OpenAI) in the last three years. So I went and looked, and what I found was confusing.

So, you’re going to hear people say “well Ed, data centers are being built,” and what I’m talking about is data centers that have been fully constructed and then turned on. It’s really, really easy to find data centers that are under construction, but as I’ve discussed in the past, that can mean everything from a pile of scaffolding to a near-complete data center. Yet finding the latter is very, very difficult. I’ve spent the last week searching for data centers that broke ground in 2023 or 2024 that have actually been finished, and come up surprisingly empty-handed. Some projects are stuck in construction hell, eternally dueling with planning departments over permitting; some are chugging along with no real substantive updates; some, as is the case with Nscale’s Loughton, England data center, have done effectively nothing for the best part of a year; some are perennially adding more capacity to the order as a means of continuing to rake in construction bills; and some are claiming their data centers are “operational” when only a single phase has turned on.

You should also know that even once construction has finished, the buildings themselves must be fully filled with the necessary cooling, power and compute hardware, at which point they can be configured to meet a client’s specifications (which can take months), at which point the unfortunate soul building the facility can actually start making money. I think it’s also worth revisiting how difficult data center construction is, and how large these new projects are. This starts with a very simple statement: nobody has actually built a 1GW data center (to be clear, it’s usually a campus of multiple buildings networked together) yet.
There are campuses — such as Stargate Abilene — which promise to reach 1.2GW, but nearly two years in sit at two buildings at around 103MW of critical IT load each, with, based on discussions with sources with direct knowledge of Abilene’s infrastructure, a third building sitting fully-constructed but with barely any gear inside it. It’s fundamentally insane how many different companies are trying to build these things considering how difficult even the simplest data center is to build. Take, for example, American Tower Corporation’s edge data center in Raleigh, North Carolina, which I’ll mention a little later. This is a 1MW facility — or one-thousandth the size of a gigawatt facility — occupying 4000 sq ft of real estate at first and expanding to 16,000 if ATC actually gets it up to 4MW. That initial footprint is about two-and-a-bit times larger than the typical American home. And, from ground-breaking to ribbon-cutting, it took eleven months to complete. And that’s not including all the other necessary time-consuming bits, like finding land, securing permits, and so on. That’s a simple one. People want to build data center campuses a thousand times larger than that. Look at how difficult it is.

In fact, it’s so difficult that the companies can’t build all of it at once. Larger data center campuses are almost always divided into “phases,” in part because that’s the smartest way to build them, and in part with the express intention of convincing you that they’re “fully operational.” For example, CNBC’s MacKenzie Sigalos reported in October 2025 that Amazon’s Indiana-based (allegedly) 2.2GW Project Rainier data center was “operational,” when only seven out of a planned 30 buildings were actually operational; her note that “two more campuses [of indeterminate capacity]” were underway was buried two videos and 600 words into a piece that declared the data center “now operational,” with the express intent of making you think the whole thing was operational. To give her credit, at least she didn’t copy-paste the outright lie from Amazon, which claimed that Rainier was “fully operational” in a press release the same day. You’ll also note that Amazon never provides any clarity about the actual capacity of Rainier. Sigalos did exactly the same thing when the first (of eight) buildings of Stargate Abilene opened, declaring that “OpenAI’s first data center in $500 billion Stargate project is open in Texas,” burying the note that only one building was operational (with another nearly complete) several hundred words in.

These are intentional attempts to obfuscate the actual progress of the data center buildout, and if I’m honest, I’ve spent months trying to work out why big companies that were supposedly building large swaths of data centers would want to obfuscate that progress. Unless, of course, things weren’t going to plan. In its last (Q3 FY26) quarterly earnings call, Microsoft CEO Satya Nadella claimed that “[Microsoft] added another gigawatt of capacity this quarter, and [remained] on track to double [its] overall footprint in two years.” A quarter earlier, he claimed to have added “nearly one gigawatt of total capacity,” with Karl Keirstead of UBS saying that he “...thought the one gigawatt added in the December quarter was extraordinary and hints that the capacity adds are accelerating.” As I’ll discuss below, I can find no evidence of anything more than a few hundred megawatts of Microsoft’s data center capacity coming online.
While I’ll humour the idea that it doesn’t announce every new data center, and that there may be colocation and neocloud counterparties (67% of CoreWeave’s revenue comes from Microsoft, for example) that make up the capacity, as I’ll also discuss, I don’t know where the hell that might be. So, to be aggressively fair, I asked Microsoft a set of simple questions on May 4, 2026. A Microsoft representative from WE Communications promised to "circle back" by 5PM ET on Monday May 4th, but did not return further requests for comment via text and email, which is incredibly strange considering the simple and straightforward nature of my questions. That’s probably because the vast majority of its publicly-announced or documented data center capacity doesn’t appear to be getting finished.

In September 2025, CEO Satya Nadella claimed that Microsoft had added 2GW of capacity “in the last year,” and acted as if Fairwater, a project with two actively-constructed data centers — one in Wisconsin that broke ground in September 2023 and another in Atlanta that broke ground in July 2024 — was something to be “announced” rather than “a very expensive project that has taken forever.” Nadella also claimed that there are “multiple identical Fairwater datacenters under construction,“ though he neglected to name them. To be clear, “Fairwater” refers to a project where multiple data centers are linked with high-speed networking to make one larger cluster, a project that sounds ambitious because it is, and also unlikely because it has yet to be built.

Fairwater Atlanta — the latter of the Fairwaters — was “launched” in November 2025 and it’s unclear how much capacity it has. Cleanview claims it’s at 350MW of capacity, and Microsoft’s own community outreach page claims construction would be completed by the beginning of October 2025, but, as I’ll get to, it’s unclear whether this is just one phase, given that reporting shows multiple other buildings still under construction. I have serious doubts that Microsoft stood up a 350MW data center in less than a year, given everything else I’m about to explain.

Fairwater Wisconsin is also a data center of indeterminate size, but Cleanview claims Phase 1 is 400MW, quoting a story from FOX6 News Milwaukee from September 2025 that said that Microsoft was “investing an additional $4 billion to expand the campus” on top of the original $3.3 billion, featuring a video of a very much in-construction data center. So, $3.3 billion — at a rate of around $14 million per megawatt per analyst Jerome Darling of TD Cowen — is about 235MW of capacity, which is a lot lower than 400MW.

Seven months later, Satya Nadella said that the Fairwater datacenter in Wisconsin was “going live, ahead of schedule,” a sentence written in the present tense, but also said that it “will bring together hundreds of thousands of GB200s in a single seamless cluster,” which is in the future tense. It’s a great time to remind you that Microsoft claims that it brought online roughly eight times that capacity (around 2GW) in the past six months. To make matters worse, it doesn’t appear that Fairwater Wisconsin is actually operational.
Ricardo Torres of the Milwaukee Journal-Sentinel reports that Microsoft has said it isn’t actually online, and that while there “...is equipment inside the data center conducting start-up opportunities…the company anticipates [they] will continue to happen for the next several weeks.” Epoch AI’s satellite footage of Fairwater Wisconsin — which mentions a completely wrong capacity because it’s uniquely terrible at calculating it (it claimed Colossus-1 has 425MW capacity, for example) — notes that as of April 2026, one building appeared to be operational, with a second under construction. So, that’s one building in Wisconsin that might be complete, and based on the permitting application from August 2023 dug up by Epoch, the project is designed to have 117MW of capacity, which is a lot lower than 235MW. While Epoch didn’t have permitting for building two, it did for three and four, which are designed to have around 719MW of capacity, and as of April 2026 still appear to be slabs of concrete. In simpler terms, there’s at most around 117MW of capacity running at Fairwater Wisconsin.

The Fairwater data centers are Microsoft’s most-publicized data centers, yet they’re shrouded in secrecy, with the Atlanta Journal-Constitution having to file an open records request to find the site being developed by QTS, a data center developer owned by Blackstone. Videos of Fairwater Atlanta from last November show a giant campus with two large buildings and a patch of yet-to-be-developed dirt. DataCenterMap refers to it as “under construction.” Epoch AI’s satellite footage notes that as of February 2026, building four’s roof was complete and “all mechanical equipment appears to be installed,” but “there is still a lot of construction activity around the building.” Based on air permits filed as part of the project (that Epoch found), it appears that each building is powered by a number of Caterpillar 3516C Generator Sets at around 2.5MW each, with building one having 47 (117.5MW), building two having 13 (32.5MW), building three having 30 (75MW), and building four having 35 (87.5MW). If we’re very generous and assume that three buildings are complete, that means that Fairwater Atlanta is at around 225MW of capacity (not IT load!).

So, that’s about 342MW of data center capacity being built by one of the largest companies in the world, in its most-publicized and written-about data centers. Put another way, for Microsoft to come remotely close to its so-called 2GW of capacity in the last six months, it will have had to bring online a little under six times that capacity. I’m calling bullshit. I really did want Microsoft to give me some answers, but I’m very confused as to how it can remotely claim it brought even a gigawatt of capacity online in the last year. I also question whether Microsoft is actually building multiple other “identical” Fairwater data centers, as I can’t find any announcements or pronouncements or mentions or hints as to where they might be. In fact, I’m having a little trouble finding where else Microsoft has been building data centers, and those I can find are extremely suspicious.

In Microsoft’s announcement of its Wisconsin data center, it mentioned two other projects — one in Narvik, Norway that had already been announced months beforehand by OpenAI, and another with Nscale in Loughton, England that was also announced by OpenAI that very same day as part of the entirely fictional Stargate project.
If you’re wondering how those are going, Microsoft had to take over the entire Narvik project (which does not appear to have started construction) from OpenAI, and the Loughton data center (which OpenAI also backed out of) is currently a pile of scaffolding. For two straight quarters, Microsoft has said it’s brought on an entire gigawatt of capacity, and I have to ask: where? Because when you actually look at the projects it’s announced, very little appears to have been built, and that which has is nowhere near its theoretical capacity. To be specific about what Microsoft is claiming, it’s saying it’s brought around 4GW of capacity online in the space of two years, and at a 1.35 PUE, that’s about 2.96GW of critical IT load, which works out to the power equivalent of around 284,600 H100 GPUs, which may be possible — after all, Microsoft apparently bought 450,000 H100 GPUs in 2024 — but I can’t find much evidence of data centers that could house that many GPUs, nor any that might be in construction. Let’s dig in.

Microsoft broke ground on three data centers in Catawba County, North Carolina in 2024 — one in Hickory, another in Lyle Creek, and another in Boyd Farms — and none of them appears to have been completed. Alright, maybe I’m being unfair! Maybe it’s just a North Carolina problem. There must be another that broke ground and got built…right? Microsoft also broke ground on a data center in Quebec City, Canada in September 2024, and as of April 2026, “generator testing has been completed,” and “civil works will continue until Autumn 2026.” Okay, well, maybe it’s a Canada problem. What about Microsoft’s New Albany, Ohio data center that broke ground in October 2024? Well, as of March 2026, “spring activity would resume,” and “beginning soon, soil will be delivered to the site via a designated truck route.” I’ll note that Microsoft specifically says that Ames Construction is currently leading it, and that it will “resume the lead role in project communications” once the final phase of construction is done at some unknown time.

Alright, well, how about the August 2025 ground breaking in Cheyenne, Wyoming that was allegedly “due to launch in 2026”? Well, Microsoft hasn’t updated its community page since it said there’d be a community meeting planned for November 2025 and that “neighbors within the vicinity will be notified ahead of construction,” which sounds like construction is yet to commence. Not to worry though, it announced on April 14, 2026 that it planned to expand it to “accelerate innovation and economic growth.” How about that 2023-announced Southwest Hortolândia, Brazil data center? That’s right, the last update was in September 2025, and the update was “construction activities continue to progress in alignment with local regulations.” A piece from Folha de S.Paulo from March 2026 mentioned that Microsoft “had begun operating its first artificial intelligence data centers in Brazil,” but satellite footage shows that it’s barely finished. What about the Newport, Wales data center it announced in 2022? Well, as of November 2025, a politician was standing on a concrete slab saying how many jobs it’ll theoretically bring in, which it won’t. What about Microsoft’s four data centers in Irving, Texas, announced December 2024? The best I’ve got for you is a news report about a data center in Irving, Texas breaking ground in January 2025. Its San Antonio data center, announced in July 2024?
Well, construction was underway as of December 2025, and it appears that construction will begin in the summer of 2026 on another one in the area. How about the two data centers outside of Cologne, Germany, announced in November 2024? Well, as of September 2025, Microsoft has… plans to build one of them? …what about the 900 acres of land it bought in June 2024 in Granger, Indiana? Great news! According to 16NewsNow, Microsoft officials “could break ground on a proposed data center…in late April or early May [2026].” How about Project Ginger West, a data center planned in Des Moines, Iowa since March 2021? Hope you like waiting, because Microsoft itself says that it’s estimated to finish construction in Summer 2028. Ginger East, announced a few months later? Mid-2028. Project Ruthenium (announced 2023)? I don’t have shit for you I’m afraid. Rutheniumkanda Forever!

This company claims it’s built four fucking gigawatts of capacity, but when I go and look to see what it’s actually built, I’ve failed to find a single announced data center from the last three years that got turned on outside of its Fairwater Atlanta and Wisconsin sites. To be clear, all of these sites are somewhere in the 200MW to 300MW range. For Microsoft to have brought online 4000MW of data center capacity in the last two years would require it to have completed thirteen or more of these projects, all while choosing not to promote them, with every project operating in such a veil of secrecy that no local or national news outlet reported a single one of them. I truly cannot work out how Microsoft has brought on any more than 500MW of capacity in the last year based on my research, and I think Microsoft is deliberately obfuscating whether said capacity was contracted rather than actively in use, much like CoreWeave refers to itself as having 3.1GW of “total contracted power” but only added 260MW of active power capacity in a single quarter at the end of 2025.

However, the exact verbiage used in Microsoft’s earnings transcripts is that it “added another gigawatt of capacity,” which sounds far more like it’s saying it brought them online… …but it didn’t, right? It obviously hasn’t. Where are all the data centers, Satya? Where are they? Why are your PR people too scared to tell me? No, really, where are they?

So, to be fair, analyst Ben Bajarin, one of the more friendly pro-AI posters, argues that actually all of that capacity is secretly behind-the-scenes, something I’d humour if there was any kind of paper trail to a bunch of Microsoft data centers that were secretly being built. I’d also be more willing to humour it if any of the data centers that have been publicized as “breaking ground” had actually been finished, or if both Fairwater Atlanta and Wisconsin weren’t so deceptively marketed. My only devil’s-advocate case is that Microsoft could, in theory, be working with colocation partners to stand up several gigawatts of capacity through shell corporations and SPVs, but even then, not a single one has any sort of trail to Microsoft? All of that capacity? It’s really, really weird, and the only answers I get are smug statements about how “Fairwater is ahead of schedule.” But if I’m honest, I’m having trouble even making these numbers add up. Considering how loud, offensive and conspicuous the AI bubble has become, it feels like we should have a far, far better understanding of how much actual capacity has been built. I also think it’s time to start being realistic about how long these things are taking to build.
For example, I was only able to find a few data centers that for sure, categorically, definitively opened, and for the most part, it appears that a data center takes around 18 months to go from groundbreaking to opening. And these, I add, are all facilities that are relatively modest — at least, when compared to the kinds of gigawatt-scale campuses that are reportedly in active development. Digging deeper, I found a lot of projects stuck in development Hell. While there are absolutely data centers under construction, and some, somewhere, are actually being completed, the vast majority of projects I’ve found are either in a mysterious limbo state or, in most cases, under construction years after breaking ground. Across the board, the message seems to be fairly simple: it takes about 18 to 24 months to build any kind of data center, and the bigger they are, the less likely they are to get completed on schedule. Those that actually “come online” aren’t actually fully constructed, but have brought on a single phase — something I wouldn’t begrudge them if they were anything close to honest about it. In reality, data center companies actively deceive the media and customers about the actual status of projects, most likely because it’s really, really difficult to build a data center. In any case, what I’ve found amounts to a total mismatch between the so-called “rapid buildout” of AI data centers and reality.

It also doesn’t make much sense when you factor in how many GPUs NVIDIA sold. In October last year, NVIDIA CEO Jensen Huang told reporters that it had shipped six million Blackwell GPUs in the last four quarters, though it eventually came out that he was counting two cores for every GPU, making the real number three million. I disagree with the framing, I think it’s incoherent and dishonest, but I’ve confirmed this is what NVIDIA meant. In any case, if we assume two cores per GPU, a B200 GPU has a power draw of around 1200W, for around 3.6GW of IT load for 3 million of them. I realize that NVIDIA also sells B100 and B300 GPUs (similar power draw) and NVL72 racks of 72 GB200 GPUs and 36 CPUs, but bear with me. Blackwell GPUs only started shipping with any real seriousness in the first quarter of 2025, which means that a good chunk of these data centers were built with H100 and H200 GPUs in mind. Nevertheless, I can find no compelling evidence that significant amounts — anything over 500,000 GPUs — of Blackwell-based data centers have been successfully brought online. When I say I struggled to find data centers that had been both announced and brought online, I mean that I spent hours looking, hours and hours and hours, and came up empty-handed.

I want to be clear that I know that there is Blackwell capacity actually being built, and believe that the majority of that capacity is retrofits of previous data centers, such as Microsoft’s extension to its Goodyear, Arizona campus, which it began building in 2018 and which likely houses Blackwell GPUs. But I no longer believe that the majority of Blackwell GPUs are doing anything other than collecting dust in a warehouse. Blackwell GPUs require distinct cooling, a great deal more power than an H100, and cost an absolute shit-ton of money, making it unlikely that a 2023 or early-2024 era data center could handle them without significant modifications. I fundamentally do not believe more than a million — if that! — Blackwell GPUs are actually in service.
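For the record, here’s the back-of-the-envelope math behind those figures, using only numbers quoted in this piece (the per-GPU draw and the PUE are the stated assumptions, not independent facts):

```typescript
// Back-of-the-envelope check using only figures quoted above.
const B200_WATTS = 1200;           // stated power draw of a B200 GPU
const BLACKWELL_GPUS = 3_000_000;  // the "real number" of Blackwell units shipped
const PUE = 1.35;                  // stated power usage effectiveness

// 3 million B200s at 1.2kW each: about 3.6GW of IT load.
const blackwellItLoadGW = (BLACKWELL_GPUS * B200_WATTS) / 1e9;
console.log(`Blackwell IT load: ~${blackwellItLoadGW.toFixed(1)}GW`);

// Microsoft's claimed 4GW of facility capacity at 1.35 PUE:
// roughly 2.96GW of critical IT load.
const claimedFacilityGW = 4;
console.log(`Critical IT load: ~${(claimedFacilityGW / PUE).toFixed(2)}GW`);
```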
If that’s the case (that most Blackwell GPUs are sitting idle), NVIDIA is likely pre-selling GPUs years in advance — experimenting with the dark arts of “bill-and-hold” — and helping certain partners like Microsoft install the latest generation to create the illusion of utility, availability and viability that does not actually exist. If I’m honest, I also have serious questions about the current status of many H100 and H200 GPUs. Based on what I’ve found, I’d be surprised if more than 3GW of actual capacity was turned on in the last two years, which means that NVIDIA has sold anywhere from double to triple the amount of GPUs that the world can hold. While the Anthropic-Musk compute deal is an obvious sign of xAI’s lack of demand for compute, it’s also, as I mentioned earlier, a clear sign that AI data centers are mostly not getting finished, and those that do get finished are taking two or three years even for smaller builds. While it sounds a little wild, I think in reality only a few hundred megawatts — if that — of actual, usable AI compute capacity is being spun up every quarter. If I were wrong, there’d be significantly more progress on, well, anything I could find.

Why can’t Microsoft offer up a data center that isn’t called Fairwater, and why are its Fairwater data centers taking so long? How much actual capacity has Microsoft brought online? Because it certainly isn’t fucking 2GW in six months. I’m willing to believe that Microsoft has a number of colocation agreements with parties that don’t disclose their involvement. I’m also willing to believe that Microsoft doesn’t publicize every single data center it’s building or has built. But 2GW of capacity is a lot. It’s nearly ten times the (likely) existing capacity of Fairwater Atlanta. If Microsoft is bringing so much capacity online, why can’t we find it, and why won’t they tell us? And no, this isn’t some super secret squirrel “they’re building secret data centers for the government” thing; it’s very clearly a case where “capacity” refers to something other than data centers that actually got brought online.

Despite their ubiquity in the media, AI data centers are relatively new concepts that are barely five years old. They are significantly more power-intensive than a regular data center, requiring massive amounts of cooling and access to water, to the point that the surrounding infrastructure of said data center is often a massive construction project unto itself. For example, OpenAI and Oracle’s Stargate Abilene data center is (in theory) made up of two massive electrical substations, a giant gas power plant and eight distinct data center buildings, each with around 50,000 GB200 GPUs, at least in theory. Every data center requires that power exists — as in, it’s being generated in both the manner and capacity necessary to turn the facility on, either through on-site or grid-based power — and is accessible at the data center site. This means that every single data center, no matter how big, is its own construction nightmare. You’ve got the power, the labor, the permits, the planning, the construction firm, the power company, the specialist gear, the temporary power (because on-site power is slow), the backup power (because you can’t just rely on the grid for something you’re charging millions for!), the cooling, the uninterruptible power supplies — endless lists of shit that needs to go very well or else the bloody thing won’t work. These are very difficult and large projects to complete.
Edged Energy’s (theoretically) 96MW data center in Illinois is 200,000 square feet, in effectively two large squares. For comparison, every single inch of gambling space in Caesars Palace in Las Vegas comes to around 130,000 square feet. These things are fucking huge, fucking difficult, and fucking expensive, and all signs point to capacity not coming online.

Let’s go back to Anthropic mopping up Musk’s fallow data center capacity, which stinks of desperation for both companies. If there were modern data centers full of GB200s being turned on and available anywhere in the next month or two, wouldn’t it be more financially prudent to wait for them, even if just on an efficiency level? A franken-center made up of H100s and H200s with some GB200s stapled onto the side feels like a stopgap solution. I have similar questions about the results of adding this capacity — that “...Anthropic plans to use [it] to directly improve capacity for Claude Pro and Claude Max subscribers,” “doubling” (whatever that means) the 5-hour rate limit and removing the recently-added peak rate limits.

What’s the plan here, exactly? Less than a month ago, Anthropic’s Head of Growth, Amol Avasare, said that Anthropic was “looking at different options to keep delivering a great experience for users” because Max accounts were created before the era of Claude Code and Cowork. How does adding 300MW of capacity magically resolve that problem? Was that always the plan? Or was this a knee-jerk reaction to the surging popularity of OpenAI’s Codex? Because the original justification for peak hours was that Anthropic needed to manage “growing demand for Claude,” demand that I bet Anthropic claims hasn’t gone anywhere. It’s also important to remember that last year, OpenAI’s margins (which are already non-GAAP) were, per The Information, worse than expected because (and I quote) it had “...to buy more expensive compute at the last minute in response to higher than expected demand for its chatbots and models.”

In other words, Anthropic has deliberately tanked its already-negative 2026 gross margins by desperately buying the fallow compute of a company whose CEO threw up the Nazi salute, called the company “misanthropic and evil,” and has the “right to reclaim the compute” if Anthropic “engages in actions that harm humanity.” Surely you’d wait a few months for some new, less tainted source of compute, right? And surely it wouldn’t be such a big deal, because new data centers get switched on every day, right?

So, let’s get down to brass tacks. Anthropic and OpenAI have now committed to spending $748 billion across Amazon Web Services, Google Cloud, and Microsoft Azure, accounting for more than 50% of those companies’ remaining performance obligations. The very future of hyperscaler revenue depends both on Anthropic and OpenAI’s continued ability to pay and on both of them having something to actually pay for.

I also think it’s fair to ask why Microsoft’s theoretical gigawatts of new compute aren’t producing tens of billions of dollars of new revenue. Microsoft’s $37 billion in annualized AI run rate (sigh) is mostly taken up by OpenAI’s voracious demand for compute, and only ever seems to expand based on OpenAI’s compute demands and the now 20 million lost souls paying for Microsoft 365 Copilot.
There’s supposedly incredible, unstoppable demand for AI compute, and Microsoft is apparently sitting on gigawatts’ worth, but somehow those gigawatts don’t seem to be translating into gigabillions, likely because they don’t fucking exist.

All of this makes me wonder what Google infrastructure head Amin Vahdat meant last November when he said that Google needed to double its capacity every six months to meet demand. Many took this to mean “Google is doubling its capacity every six months,” but I think it’s far more likely that Google is taking on capacity requests from Anthropic that make those capacity demands necessary. Similarly, I think CEO Sundar Pichai’s comment that Google would have made more money had it had more capacity to sell was a manifestation of a distinct lack of new capacity, rather than a result of bringing on swaths of new data centers that immediately got filled.

I also need to be blunt about two things. First: look, I know it sounds crazy, but I’m telling you, I don’t think very many data centers are coming online! While I keep wanting to hedge my bets and say “I bet a few gigawatts came online,” I cannot actually find any compelling literature that backs up that statement. I’ve spent hours and hours looking, and I’ve come up with a few hundred megawatts delivered in the past two years. Every major project is stuck in the mud, a phase or two in, or facing mounting opposition from locals who don’t want a Godzilla-sized cube making a constant screaming sound 24/7 so that somebody can generate increasingly-bustier Garfields. Second: I’m not even being a hater! It’s just genuinely difficult to find actual data centers that have been announced and have also been fully turned on.

So, humour me for a second: if hyperscalers are bringing on hundreds of megawatts of capacity a year, then the ever-growing quarterly chunks of depreciation ripped out of their net income are just a taste of what’s to come. Last quarter, Google’s depreciation jumped $400 million to $6.482 billion, Microsoft’s jumped nearly a billion dollars from $9.198 billion to $10.167 billion, and Meta’s went from $5.41 billion to $5.99 billion. While Amazon’s technically dropped quarter-over-quarter, it still sat at an astonishing $18.94 billion.

Remember: depreciation only increases when an item is actually put into service. If Microsoft, Google, Amazon and Meta are sitting on tens of billions of dollars of yet-to-be-installed GPUs, and said GPUs are only being installed at a snail’s pace every quarter, these depreciation figures are set to grow dramatically. In fact, year-over-year, Google’s depreciation has jumped 30.7%, Amazon’s 24.7%, Microsoft’s 23.9%, and Meta’s an astonishing 34.9%. And that’s with an extremely slow pace of deployment.

I do kind of see why the hyperscalers are sinking capex into these big AI infrastructure gigaprojects now, though. Shareholders are currently tolerating the capex because they think stuff is coming online, and that’s where the “incredible value” is. When a $20 billion or $30 billion a quarter depreciation bill first rears its head — as I said, Amazon is close, reporting $18.94 billion in depreciation and amortization expenses in the most recent quarter — it’ll become obvious that the only people seeing value from AI are Jensen Huang and the handful of massive construction firms slowly building these projects.
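To see where those depreciation lines are headed, here’s a minimal sketch. The quarterly figures and year-over-year growth rates are the ones quoted above; the assumption that growth simply holds flat for another year is mine.

    # If the stated year-over-year depreciation growth simply held for
    # another year, where would last quarter's figures land? Quarterly
    # depreciation is in USD billions; both sets of numbers are from the
    # figures quoted above.
    latest_quarter = {"Google": 6.482, "Microsoft": 10.167, "Meta": 5.99, "Amazon": 18.94}
    yoy_growth = {"Google": 0.307, "Microsoft": 0.239, "Meta": 0.349, "Amazon": 0.247}

    for company, dep in latest_quarter.items():
        projected = dep * (1 + yoy_growth[company])
        print(f"{company}: ${dep:.2f}B/quarter now -> ~${projected:.1f}B/quarter in a year")

On those assumptions, Amazon alone lands at roughly $23.6 billion a quarter, squarely in the “$20 billion or $30 billion a quarter” territory described above.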
Actually, it’s probably important to state that I don’t think the majority of these projects are doing anything untoward; I just don’t think any of them realized how difficult it is to build a data center, and unlike basically any other problem the tech industry has ever faced, simply throwing as much money as possible at it doesn’t really change the limits of physical construction. I think every one of these data center projects is its own individual construction nightmare, and thanks to the general market psychosis around the AI bubble, nobody has thought to question the core assumption that these things are actually getting built.

With all that being said, I’m not sure that anyone building these things is moving with much urgency, either. Perhaps they don’t need to — perhaps hyperscalers are happy, because they can continually string out the AI narrative and put off those massive blobs of depreciation. But we really do need to reckon with the fact that nearly two years in, Stargate Abilene has only two buildings’ worth of actual, operational, revenue-generating capacity, and nobody has given me an answer as to why it doesn’t have even a quarter of the 1.7GW of power it’ll need to turn everything on, if it ever gets fully built. Maybe they can really pick up the pace, but as of early April, barely any actual gear was in the third building.

And then we get to the other problem: Oracle. As I’ve discussed before, Oracle is building 7.1GW of total capacity for OpenAI, and keeps — laughably! — saying 2027 or 2028, when at this rate, Stargate Abilene won’t be done until mid-2027, and the rest will either never get finished or be done in 2030 or later. This is setting up a horrifying situation where Oracle desperately needs OpenAI to pay it for capacity that doesn’t exist, and if that capacity ever gets built, it’s likely to be years after OpenAI has run out of money. It’s the same problem that Microsoft, Google, and Amazon have with their $748 billion of deals with Anthropic and OpenAI, though thanks to the $340 billion or more necessary to build the Stargate data centers, Oracle’s problems are far more existential. I’ve repeatedly — and correctly! — said that the problem is that these companies don’t have the money to pay for their capacity, but Oracle lacks Microsoft’s or Google’s existing profitable businesses to fall back on if these data centers are delayed, with its existing business lines plateauing and its only real growth coming from theoretical deals with OpenAI and GPU compute with negative 100% margins.

Anthropic’s desperation for new sources of compute also suggests that it’s bonking its head against the limits of its capacity, and will continue to do so as long as it continues to subsidize its users. I also think that the slow pace of construction will eventually lead to OpenAI facing similar problems. These companies need to continue growing to continue raising the hundreds of billions of dollars in funding necessary to pay Oracle, Google, Microsoft, and Amazon their respective pounds of flesh.

It’s now very clear that the whole “inference is profitable” and “most compute is being used for training” myths are dead, because if they weren’t, Anthropic would either need way more compute or way higher-quality compute. Colossus-1 was specifically built as a training cluster, yet its current use is “loosen the rate limits on our subsidized AI subscriptions,” which is most decidedly inference, provided by three-year-old hardware.
Despite writing over 9,000 words and driving myself slightly insane trying to find out, I still haven’t got an answer as to how much actual data center capacity has come online. Hyperscalers have clearly been retrofitting old data centers to fit their new chips, and based on my research, I can find no compelling evidence that they’ve added more than a few hundred megawatts apiece since 2023.

What I do know is that, across the board, a data center of anything above 50MW (or lower, in some cases) takes anywhere from 18 to 36 months to complete, and nobody has actually built a gigawatt data center, despite how many people discuss them. For example, Kevin O’Leary — known as “Mr. Dogshit” to his friends — is allegedly building a 9GW data center in Utah, but he may as well say that he’s building a unicorn that shits Toyota Tacomas, as doing so is far more realistic than a project that will likely cost $396 billion (that’s $44 billion per gigawatt), assuming that locals and bankers don’t drag him to The Other Side like Dr. Facilier. Nobody has built a 1GW data center, so I severely doubt Mr. Dogshit will be able to do anything other than create another scandal and lose a bunch of people’s money.

In other words, any time you hear about a “new data center project,” add a year or two to whatever projection they give. If it’s 2027, assume 2029, or that it never gets built. Anything being discussed as “finished in 2030” may as well not exist. In any case, what I’m suggesting is that very, very few data centers are actually getting finished, and if that’s true, NVIDIA has sold years’ worth of chips that are yet to be digested. And if that’s true, somebody is sitting on piles of them.

I’m trying to be fair, so I’ll assume that an unknown number of data centers got retrofitted to fit Blackwell GPUs. But I also refuse to believe that even half of the three million Blackwell GPUs that got shipped have actually been installed. Where would they go? You can’t use the same racks for them that you would with an H100 or H200, because Blackwell requires so much goddamn cooling. Another sign that these things aren’t actually getting installed is Supermicro’s $1.4 billion or so of B200 GPUs left in inventory from a canceled order from Oracle. Why not? Isn’t this meant to be a chip that’s extremely valuable? Isn’t there infinite demand? Is there not a place to put them? Apparently Oracle wanted to use the faster GB200 GPUs from Dell, but why aren’t there other customers lining up to buy these things?

Also… how was Oracle able to cancel an order of over a billion dollars’ worth of GPUs? Can anybody do that? Because if they can, one has to wonder whether this starts happening more broadly as people realize these data centers aren’t getting built. Pick a data center. It’s probably barely under construction, or if it’s “finished,” it’s actually “partly done,” with no real guide as to when the rest will finish.

Remember that $17 billion deal Microsoft and Nebius signed? The one that’s a key reason why Nebius’ stock is on a tear? Well, its existence is based on the continued construction of a data center out in Vineland, New Jersey that’s facing massive local opposition, and multiple sources now confirm that construction has been halted due to local planning issues. The data center is horribly behind schedule already, and Microsoft has the option to cancel its entire contract if Nebius fails to meet milestones. That data center is a major reason people value Nebius’ stock! Nebius cannot make a dollar of revenue from the deal without it!
Nebius has the funds and blessing of Redmond’s finest — the Mandate of Heaven! — and it still can’t get things done! This is bad, and indicative of a larger problem in the industry: it’s really difficult to build data centers, and for the most part, they’re not being fully built!

You’ve heard plenty about data centers getting opposed and canceled — how about ones that fully opened? No, really: if you’ve heard about them, please get in touch, because it’s genuinely difficult to find them. Why don’t we know? This is apparently the single most important technology movement since whatever the last justification somebody made up was; shouldn’t we have a tangible grasp of it? Because the way I see it, if these things aren’t coming online at the rate that people think, we have to start asking NVIDIA for fundamental clarity about where the GPUs are, and when they’re coming online.

NVIDIA’s continually-growing valuation is based on the conceit that there is always more demand for GPUs, and perhaps that’s true — but this demand appears to be based on functionally selling chips two years in advance, which makes NVIDIA’s yearly upgrade cadence utterly deranged. Buy today’s GPUs! They’re the best, for now, at least. By the time you plug them in, they’re gonna be old and nasty. But don’t worry, it’ll take two years for you to install the next ones too!

To be clear, Blackwell GPUs are absolutely being installed! But three million of them? People love to use “enough to power two cities” to illustrate these points, but I actually think it’s better to illustrate in real data center terms. Stargate Abilene has taken two years to build two buildings of around 103MW of critical IT load each. Three million B200 GPUs works out to about 3.6GW of IT load. Do you really think that nearly thirty-five Stargate Abilene-scale buildings (3.6GW divided by 103MW) were built in 2025? If so, where are they, exactly? You may argue that other data centers are smaller, and thus easier to build. So why can’t I find any examples of where they’ve done so?

By all means, prove me wrong! It’s so easy! Just show me a data center that was announced or broke ground in 2023 and find obvious proof that it turned on. I’ll even give you credit if it’s partially open! The problem is that I keep finding examples of “partially complete,” and those are the only examples of “finished” data centers. Isn’t this a little insane? This is all we’ve heard about for years; everybody is ACTING like these things exist at a scale that I’m not sure is actually true!

I expect a fair amount of huffing and “well of course they’re coming online” from the peanut gallery, but come on, guys, isn’t this all kind of weird? Even if you want to marry SanDisk and name your children “Western” and “Digital,” why can’t you name, with your whole chest, several data centers that got finished? We have macro-level “proof,” but when you try and look at even a shred of the micro, you find a bunch of guys with their hands on their hips saying “sorry mate, that’ll be another $4 million.”

Something doesn’t line up, and it’s exactly the kind of misalignment that happens in a bubble — when infrastructural reality disconnects from the financials. NVIDIA is making hundreds of billions of dollars, and it’s unclear how much of it comes from GPUs installed in operational data centers. It feels like Jensen Huang might have run the largest preorder campaign of all time. This has massive downstream consequences.
SanDisk, Samsung, SK Hynix, Broadcom, AMD, Microsoft, Google, Oracle, and Amazon’s remaining performance obligations total [find] and are dependent on being *able* to sell gigawatts’ worth of computing gear or compute access. If data centers are not getting built on anything approaching a reasonable timeline, that makes the future of these companies only as viable as the construction projects themselves. Even if you truly believe Anthropic will be a $2 trillion company and a $200 billion customer of Google, the compute capacity has to exist to be bought, and it does not appear to be built, or, in many cases, anywhere further along than the earliest stages of construction.

If these things don’t get built in the next few years, there’s no space for that solid-state storage or those Instinct GPUs. There’s no reason for NVIDIA to have reserved most of TSMC’s capacity, either. There’s also no reason to get excited about Bloom Energy, as it won’t see real revenue until Oracle finishes its data centers sometime between the next two years and never. And if they don’t get built, hundreds of billions of dollars will have been wasted, with large swaths of those billions funded by private credit, which in turn is funded by pensions, retirements and insurance funds. I’ve got a bad feeling about this.

Microsoft claims to have brought around 4GW of data center capacity online in the last two years, but it’s unclear how much actually got built. In my analysis of all announced groundbreakings and land acquisitions, it appears that Microsoft has only finished the first phase of its Atlanta and Wisconsin data centers. It is unclear where this capacity could be. When Mr. Nadella said on his most recent earnings call that Microsoft had (and I quote) “added another gigawatt of capacity this quarter,” did he mean active, revenue-generating capacity? If he did not, what did he mean? How much active, revenue-generating capacity has Microsoft brought online in FY2026 so far? Outside of Fairwater Wisconsin and Atlanta, where has that capacity been built?

Microsoft’s latest update on the Hickory/Stover site is that it “will” begin “initial site setup and earthwork activities” as of February 2026, and it appears the contractor has changed from Ames Construction to Clayco. The latest Microsoft update on the Boyd Farms site is that it started construction on April 1, 2024; a February 2026 piece from the Charlotte Observer claimed it had started construction again after a 10-month (!) delay. The latest Microsoft update on the Lyle Creek site — which, it adds, began construction in March 2024 — is that its contractor, Whiting-Turner, “will begin initial site preparation once weather conditions allow” as of February 2026. A press release from a Canadian satellite firm from February 2026 said that it had “identified renewed construction activity at all three of Microsoft’s permitted data center campuses in Catawba County, North Carolina.”

Beyond Microsoft, here’s what my search actually turned up, completed builds and stalled ones alike (I tally the finished ones in a quick sketch at the end):

- Novva’s 60MW data center in Reno, Nevada: announced in May 2023, operational as of July 2025, or around 26 months.
- Edged Energy’s 36MW Phoenix, Arizona data center: broke ground in August 2024 and opened in April 2026, or around 20 months.
- Duos Edge AI’s 450KW (lol) data center in Corpus Christi, Texas: announced in July 2025 and opened in May 2026, or around 10 months.
- Edged Energy’s 24MW Columbus, Ohio data center: broke ground in August 2024 and opened in September 2025, or around 13 months.
- American Tower’s 1MW (scalable to 4MW!) Raleigh, North Carolina data center: broke ground in June 2024 and came online in May 2025, or around 11 months.
- EdgeCore’s 36MW Santa Clara, California campus: broke ground in January 2023, said it would be “energized in Q1 2024,” and opened in September 2025, or around 32 months.
- Edged Energy’s “180MW” Atlanta data center: broke ground in July 2023, and around 33 months later, in April 2026, it managed to top off a single 42MW building.
- EdgeCore’s two-building, 216MW campus: broke ground in August 2023 with plans to complete “as early as late 2025”; as of March 2026, it is still under construction.
- Edged Energy’s 100MW Aurora, Illinois data center: broke ground in May 2023 and has, as of February 2025, successfully opened (per DataCenterDynamics) “phase 1” — 24MW of capacity — but in its own press release from the same day referred to it as 96MW, choosing not to refer to any phases or separate buildings, something it has done since before the 24MW phase was complete.
- CyrusOne’s 40MW Aurora, Illinois data center: broke ground in October 2024, which was apparently so significant that CyrusOne announced it had broken ground a second time on January 28, 2025. Confusingly, CyrusOne has another campus on Diehl Road it’s linking to the Bilter Road one, which may or may not be the same one, and as of May 2026 it is still very much under construction. As of March 2026, locals were still opposing the data centers, slowing down the process further.
- Vantage’s “192MW” OH1 data center in New Albany, Ohio: broke ground in October 2024, with its first phase due to go live sometime in 2025. As of August 2025, Vantage had topped off the second building, and per its own website about OH1, the first building was meant to be operational in December 2025, but it’s unclear whether it actually opened.
- PowerHouse’s 65MW data center campus in Reno, Nevada: broke ground in October 2024, and its website states that “delivery” will happen in April 2026, with “construction/delivery” due “Q3 2024 to Q2 2026.”
- Oppidan’s Carol Stream, Illinois data center: broke ground in November 2024, with the “first phase” due live in 2026. Per Cleanview, it is still “planned.”
- Databank’s 20MW Ashburn, Virginia “IAD4” data center: broke ground in July 2024, was “set to go live in Q1 2026,” and as of May 2026 is still referred to in the future tense on Databank’s website.
- Aligned’s 96MW “NEO-01” Ohio data center: broke ground in May 2024 and was still “scheduled to be opened by end of this year” as of March 2026.
- Aligned’s 72MW Hillsboro, Oregon campus: broke ground in October 2023, topped off the first building in July 2024 (Aligned also plans a separate building, too!), and as of May 2026, Cleanview still marks the first one as “planned.”
- Flexennial’s 22.5MW Denver data center: broke ground in October 2024, and as of April 8, 2026, a local Facebook group said it will be operational by January 2027. Flexennial itself, meanwhile, has been referring to it as “the new build” — in terms that make it sound like it was already built — as far back as February 2025.

If hyperscalers are truly not bringing on that much capacity, they cannot make those hundreds of billions of dollars from Anthropic and OpenAI. The current “AI compute demand is insatiable” narrative is utterly false, and a direct result of a lack of capacity coming online.
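For what it’s worth, here’s a quick tally of the builds above that actually opened; a minimal sketch, with the month counts taken straight from the list.

    from statistics import mean, median

    # Groundbreaking/announcement-to-opening durations, in months, for the
    # completed projects cataloged above. Every one of these is a small- to
    # mid-sized build; none is anywhere near gigawatt scale.
    durations_months = {
        "Novva Reno (60MW)": 26,
        "Edged Phoenix (36MW)": 20,
        "Duos Corpus Christi (450KW)": 10,
        "Edged Columbus (24MW)": 13,
        "American Tower Raleigh (1MW)": 11,
        "EdgeCore Santa Clara (36MW)": 32,
        "Edged Atlanta (one 42MW building)": 33,
    }

    print(f"Median: {median(durations_months.values()):.0f} months")  # -> 20
    print(f"Mean: {mean(durations_months.values()):.1f} months")      # -> 20.7

Even among the projects that did finish, the typical build ran well over a year and a half, and those are projects measured in tens of megawatts, not gigawatts.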

0 views
Unsung Yesterday

Save For Web claws

Randomly found this 2014 Dribbble from Jamie Nicoll and it made me smile:

[image: https://unsung.aresluna.org/_media/save-for-web-claws/1.1600w.avif]

For context, Save For Web was a popular export function in Photoshop at the peak of its use for web design, but assigned a rather unpleasant ⌘⌥⇧S shortcut. Using it often turned your hand into a… claw of sorts. There was a Tumblr cataloging real and humorous photos of people pressing Save For Web. You can still find parts of it on Internet Archive, and here are some choice photos:

[images 2–7: https://unsung.aresluna.org/_media/save-for-web-claws/]

This is funny, but I actually found it enlightening – and lightly frightening – to ask coworkers how exactly they press common shortcuts like ⌘Z, ⌘C, ⌘V, and so on. There was a lot more variety than I expected. (My basic heuristics say: three-modifier-key shortcuts should not be assigned to anything used often.) #humor #keyboard

0 views