Posts in Business (20 found)
iDiallo Yesterday

Beyond Enshittification: Hostile

The computer is not just working less well. Instead, it is actively trying to undermine you. And there is nothing you can do about it. When Windows wants to update, you don't get to say "no." You get "Update now" or "Remind me later." When Twitter shows you notifications from people you don't follow, you can't dismiss them, only "see less often." When LinkedIn changes your email preferences, you'll reset them, only to find they've reverted a few months later. These aren't bugs. They aren't oversights. They're deliberate design choices that remove your ability to say no. It's not dark patterns anymore. It's not even enshittification. It's pure hostility.

As developers, there are two types of users we find extremely annoying. The first is the user who refuses to get on the latest version of the app. They're not taking advantage of the latest bug fixes we've developed. We're forced to maintain the old API because this user doesn't want to update. They're stubborn, they're stuck in their ways, and they're holding everyone back. The second type of user is the one who's clueless about updates. It's not that they don't want to update; they don't even know there is such a thing as an update. They can be annoying because they'll eventually start complaining that the app doesn't work. But they'll do everything short of actually updating it.

Well, I fall into the first category. I understand it's annoying, but I also know that developers will often change the app in ways that don't suit me. I download an app when it's brand new and has no ads, when the developer is still passionate about the project, pouring their heart and soul into it, making sure the user experience is a priority. That's the version I like. Because shortly after, as the metrics settle in and they want to monetize, the focus switches from being user-centric to business-centric. In Cory Doctorow's words, this is where "enshittification" starts.
Now, I'm not against a developer trying to make a buck, or millions for that matter. But I am against degrading the user experience to maximize profit. Companies have figured out how to eliminate the first type of user entirely. They've weaponized updates to force compliance. Apps that won't launch without updating. Operating systems that update despite your settings. Games that require an online connection to play single-player campaigns. Software that stops working if you don't agree to new terms of service. The philosophy of "if it ain't broke, don't fix it" is dead. They killed it.

And they can get away with it because of the network effect. We are trapped in it. You use Windows because your workplace uses Windows. You use Excel because your colleagues use Excel. You use Slack because your team uses Slack. You use WhatsApp because your family uses WhatsApp. When Windows suddenly requires you to have a Microsoft account (an online account) just to log into your local computer, what are your options? Switch to Apple? After twenty years of Windows shortcuts, file systems, and muscle memory? Switch to Linux? When you need to share files with colleagues who use proprietary Microsoft formats? You can't. And they know you can't. They're not competing on quality anymore. They're leveraging your professional dependency, your colleagues' software choices, your decade of learned workflows. You're not a customer who might leave if the product gets worse. You're a captive audience. This is why the hostility is possible. This is why they can get away with it.

Enshittification, as Doctorow describes it, is a process of degradation. First, platforms are good to users to build market share. Then they abuse users to favor business customers. Finally, they abuse those business customers to claw back all the value for themselves. But what we're seeing now is different. This isn't neglect or the natural decay of a profit-maximizing business.
This is the deliberate, systematic removal of user agency. You are presented with the illusion of choice. You can update now or update later, but you cannot choose to never update. You can see less often, but you cannot choose to never see it. You can accept all cookies instantly, or you can navigate through a deliberately complex maze of toggles and submenus to reject them one by one.

They borrow ransomware patterns. Notifications you can't dismiss, only snooze. Warnings that your system is "at risk" if you don't update immediately. Except once you update, the computer is restarted and you are presented with new terms you have to agree to in order to access your computer. Every Windows update that turns Bing back on and forces all links to open with Edge. Every app update that re-enables notifications you turned off. Every platform that opts you back into marketing emails and makes you opt out again. Updates are now scary because they can take you from a version that serves your interests to a version that serves the company's. The update that adds telemetry. The update that removes features you relied on. The update that makes the app slower, more bloated, more aggressive about upselling you. These aren't accidents. They're not the result of developers who don't care or designers who don't know better. They're the result of product meetings where someone said "users are rejecting this, how do we force them to accept it?" and someone else said "remove the 'no' button."

As a developer, and someone who has been using computers since I was 5 years old, I don't really care about the operating system. I can use them interchangeably. In fact, I don't care about Twitter, or any of these platforms. When I log into my computer, it's to write a document. When I use my mobile device, it's to talk to my friends or family. When I access my dev machine, it's to do my job. The operating systems or the platforms are secondary to the task at hand.
The software is supposed to be the tool, not the obstacle. But now the tool demands tribute. It demands your data, your attention, your compliance with whatever new terms it has decided to impose. You can't switch because switching costs everything: your time, your muscle memory, your compatibility with everyone else who's also trapped. The network effect isn't just about other people using the same platform. It's about your own accumulated investment in learning, customization, and integration.

So when they add hostile features, when they remove your ability to say no, when they force you to have an online account for offline work, when they interrupt you with notifications you can't dismiss, when they change interfaces you've spent years mastering, you can only accept it. Not because you want to. Not because it's better. Because you have no choice. And that's not enshittification. That's hostility.

Jason Fried Yesterday

The next product

New products don’t need to be revolutionary, life-changing, or disruptive breakthroughs to succeed. Entire categories can roll downhill, gathering complexity as they go. Each product one-upping the next until more becomes too much. The cycle feeds itself, never satiated. Competitors locked in a loop of mutual destruction through perpetual over-improvement. When that happens, the door cracks open for something new. The newcomer doesn’t have to meet the others where they are. It just has to feel right — like someone opened the curtains and let the sun back in. The type of product that lets people exhale and say, “finally!” Not groundbreaking. Just grounded. Standing where everyone else forgot to. -Jason

Sean Goedecke 3 days ago

How I provide technical clarity to non-technical leaders

My mission as a staff engineer is to provide technical clarity to the organization. Of course, I do other stuff too. I run projects, I ship code, I review PRs, and so on. But the most important thing I do - what I’m for - is to provide technical clarity.

In an organization, technical clarity is when non-technical decision makers have a good-enough practical understanding of what changes they can make to their software systems. The people in charge of your software organization[1] have to make a lot of decisions about software. Even if they’re not setting the overall strategy, they’re still probably deciding which kinds of users get which features, which updates are most important to roll out, whether projects should be delayed or rushed, and so on. These people may have been technical once. They may even have fine technical minds now. But they’re still “non-technical” in the sense I mean, because they simply don’t have the time or the context to build an accurate mental model of the system. Instead, they rely on a vague mental model, supplemented by advice from engineers they trust. To the extent that their vague mental model is accurate and the advice they get is good - in other words, to the extent that they have technical clarity - they’ll make sensible decisions.

The stakes are therefore very high. Technical clarity in an organization can be the difference between a functional engineering group and a completely dysfunctional one. The default quantity of technical clarity in an organization is very low. In other words, decision-makers at tech companies are often hopelessly confused about the technology in question. This is not a statement about their competence. Software is really complicated, and even the engineers on the relevant team spend much of their time hopelessly confused about the systems they own. In my experience, this is surprising to non-engineers. But it’s true!
For large established codebases, it’s completely normal for very senior engineers to be unable to definitively answer even very basic questions about how their own system works, like “can a user of type X do operation Y”, or “if we perform operation Z, what will it look like for users of type W?” Engineers often[2] answer these questions with “I’ll have to go and check”.

Suppose a VP at a tech company wants to offer an existing paid feature to a subset of free-tier users. Of course, most of the technical questions involved in this project are irrelevant to the VP. But there is a set of technical questions that they will need to know the answers to:

- Can the paid feature be safely delivered to free users in its current state?
- Can the feature be rolled out gradually?
- If something goes wrong, can the feature be reverted without breaking user accounts?
- Can a subset of users be granted early access for testing (and other) purposes?
- Can paid users be prioritized in case of capacity problems?

Finding out the answers to these questions is a complex technical process. It takes a deep understanding of the entire system, and usually requires you to also carefully re-read the relevant code. You can’t simply try the change out in a developer environment or on a test account, because you’re likely to miss edge cases. Maybe it works for your test account, but it doesn’t work for users who are part of an “organization”, or who are on a trial plan, and so on. Sometimes these questions can only be answered by actually performing the task.

I wrote about why this happens in Wicked features: as software systems grow, they build marginal-but-profitable features that interact with each other in surprising ways, until the system becomes almost - but not quite - impossible to understand. Good software design can tame this complexity, but never eliminate it. Experienced software engineers are thus always suspicious that they’re missing some interaction that will turn into a problem in production.

For a VP or product leader, it’s an enormous relief to work with an engineer who can be relied on to help them navigate the complexities of the software system. In my experience, this “technical advisor” role is usually filled by staff engineers, or by senior engineers who are rapidly on the path to a staff role. Senior engineers who are good at providing technical clarity sometimes get promoted to staff without even trying, in order to make them a more useful tool for the non-technical leaders whom they’re used to helping.

Of course, you can be an impactful engineer without doing the work of providing technical clarity to the organization. Many engineers - even staff engineers - deliver most of their value by shipping projects, identifying tricky bugs, doing good systems design, and so on. But those engineers will rarely be as valued as the ones providing technical clarity. That’s partly because senior leadership at the company will remember who was helping them, and partly because technical clarity is just much higher-leverage than almost any single project. Non-technical leaders need to make decisions, whether they’re clear or not. They are thus highly motivated to maintain a mental list of the engineers who can help them make those decisions, and to position those engineers in the most important teams and projects.

From the perspective of non-technical leaders, those engineers are an abstraction around technical complexity. In the same way that engineers use garbage-collected languages so they don’t have to care about memory management, VPs use engineers so they don’t have to care about the details of software. But what does it feel like inside the abstraction? Internally, engineers do have to worry about all the awkward technical details, even if their non-technical leaders don’t have to. If I say “no problem, we’ll be able to roll back safely”, I’m not as confident as I appear. When I’m giving my opinion on a technical topic, I top out at 95% confidence - there’s always a 5% chance that I missed something important - and am usually lower than that. I’m always at least a little bit worried. Why am I worried if I’m 95% sure I’m right? Because I’m worrying about the things I don’t know to look for.

When I’ve been spectacularly wrong in my career, it’s usually not about risks that I anticipated. Instead, it’s about the “unknown unknowns”: risks that I didn’t even contemplate, because my understanding of the overall system was missing a piece. That’s why I say that shipping a project takes your full attention. When I lead technical projects, I spend a lot of time sitting and wondering about what I haven’t thought of yet. In other words, even when I’m quite confident in my understanding of the system, I still have a background level of internal paranoia. To provide technical clarity to the organization, I have to keep that paranoia to myself. There’s a careful balance to be struck between verbalizing all my worries - more on that later - and being so overconfident that I fail to surface risks that I ought to have mentioned.

Like good engineers, good VPs understand that all abstractions are sometimes leaky. They don’t blame their engineers for the occasional technical mistake, so long as those engineers are doing their duty as a useful abstraction the rest of the time[3]. What they won’t tolerate in a technical advisor is the lack of a clear opinion at all. An engineer who answers most questions with “well, I can’t be sure, it’s really hard to say” is useless as an advisor. They may still be able to write code and deliver projects, but they will not increase the amount of technical clarity in the organization.

When I’ve written about communicating confidently in the past, some readers think I’m advising engineers to act unethically. They think that careful, technically-sound engineers should communicate the exact truth, in all its detail, and that appearing more confident than you are is a con man’s trick: of course if you pretend to be certain, leadership will think you’re a better engineer than the engineer who honestly says they’re not sure. Once one engineer starts keeping their worries to themselves, other engineers have to follow or be sidelined, and pretty soon all the fast-talking blowhards are in positions of influence while the honest engineers are relegated to just working on projects. In other words, when I say “no problem, we’ll be able to roll back”, even though I might have missed something, isn’t that just lying? Shouldn’t I just communicate my level of confidence accurately? For instance, could I instead say “I think we’ll be able to roll back safely, though I can’t be sure, since my understanding of the system isn’t perfect - there could be all kinds of potential bugs”?

I don’t think so. Saying that engineers should strive for maximum technical accuracy betrays a misunderstanding of what clarity is. At the top of this article, I said that clarity is when non-technical decision makers have a good-enough working understanding of the system. That necessarily means a simplified understanding. When engineers are communicating to non-technical leadership, they must therefore simplify their communication (in other words, allow some degree of inaccuracy in the service of being understood). Most of my worries are not relevant information to non-technical decision makers. When I’m asked “can we deliver this today”, or “is it safe to roll this feature out”, the person asking is looking for a “yes” or “no”. If I also give them a stream of vague technical caveats, they will have to consciously filter that out in order to figure out if I mean “yes” or “no”. Why would they care about any of the details? They know that I’m better positioned to evaluate the technical risk than them - that’s why they’re asking me in the first place!

I want to be really clear that I’m not advising engineers to always say “yes” even to bad or unacceptably risky decisions. Sometimes you need to say “we won’t be able to roll back safely, so we’d better be sure about the change”, or “no, we can’t ship the feature to this class of users yet”. My point is that when you’re talking to the company’s decision-makers, you should commit to a recommendation one way or the other, and only give caveats when the potential risk is extreme or the chances are genuinely high. At the end of the day, a VP only has so many mental bits to spare on understanding the technical details. If you’re a senior engineer communicating with a VP, you should make sure you fill those bits with the most important pieces: what’s possible, what’s impossible, and what’s risky. Don’t make them parse those pieces out of a long stream of irrelevant (to them) technical information.

The highest-leverage work I do is to provide technical clarity to the organization: communicating up to non-technical decision makers to give them context about the software system. This is hard for two reasons. First, even competent engineers find it difficult to answer simple questions definitively about large codebases. Second, non-technical decision makers cannot absorb the same level of technical nuance as a competent engineer, so communicating to them requires simplification. Effectively simplifying complex technical topics requires three things:

- Good taste - knowing which risks or context to mention and which to omit[4].
- A deep technical understanding of the system. In order to communicate effectively, I need to also be shipping code and delivering projects. If I lose direct contact with the codebase, I will eventually lose my ability to communicate about it (as the codebase changes and my memory of the concrete details fades).
- The confidence to present a simplified picture to upper management. Many engineers either feel that it’s dishonest, or lack the courage to commit to claims where they’re only 80% or 90% confident. In my view, these engineers are abdicating their responsibility to help the organization make good technical decisions. I write about this a lot more in Engineers who won’t commit.

[1] In a large tech company, this is usually a director or VP. However, depending on the scope we’re talking about, this could even be a manager or product manager - the same principles apply.
[2] Sometimes you know the answer off the top of your head, but usually that’s when you’ve been recently working on the relevant part of the codebase (and even then you may want to go and make sure you’re right).
[3] You do still have to be right a lot. I wrote about this in Good engineers are right, a lot.
[4] Despite this being very important, I don’t have a lot to say about it. You just have to feel it out based on your relationship with the decision-maker in question.


Good Morning Oct 10

Good morning everyone 😪 it's 10:00 here. Going to be taking my guinea pig Pina to the vet today; I think she has a UTI... She stays cute though.


The AI Bubble's Impossible Promises

Readers: I’ve done a very generous “free” portion of this newsletter, but I do recommend paying for premium to get the in-depth analysis underpinning the intro. That being said, I want as many people as possible to get the general feel for this piece. Things are insane, and it’s time to be realistic about what the future actually looks like.

We’re in a bubble. Everybody says we’re in a bubble. You can’t say we’re not in a bubble anymore without sounding insane, because everybody is now talking about how OpenAI has promised everybody $1 trillion — something you could have read about two weeks ago on my premium newsletter. Yet we live in a chaotic, insane world, where we can watch the news and hear hand-wringing over the fact that we’re in a bubble, read article after CEO after article after CEO after analyst after investor saying we’re in a bubble, yet the market continues to rip ever-upward on increasingly more-insane ideas, in part thanks to analysts that continue to ignore the very signs that they’re relied upon to read.

AMD and OpenAI signed a very strange deal where AMD will give OpenAI the chance to buy 160 million shares at a cent apiece, in tranches of indeterminate size, for every gigawatt of data centers OpenAI builds using AMD’s chips, adding that OpenAI has agreed to buy “six gigawatts of GPUs.” This is a peculiar way to measure GPUs, which are traditionally measured in the price of each GPU, but nevertheless, these chips are going to be a mixture of AMD’s MI450 Instinct GPUs — which we don’t know the specs of! — and its current-generation MI350 GPUs, making the actual scale of these purchases a little difficult to grasp, though the Wall Street Journal says it would “result in tens of billions of dollars in new revenue” for AMD. This AMD deal is weird, but one that’s rigged in favour of Lisa Su and AMD.
OpenAI doesn’t get a dollar at any point - it has to work out how to buy those GPUs and figure out how to build six further gigawatts of data centers on top of the 10GW of data centers it promised to build for NVIDIA and the seven-to-ten gigawatts that are allegedly being built for Stargate, bringing it to a total of somewhere between 23 and 26 gigawatts of data center capacity.

Hell, while we’re on the subject, has anyone thought about how difficult and expensive it is to build a data center? Everybody is very casual with how they talk about Sam Altman’s theoretical promises of trillions of dollars of data center infrastructure, and I’m not sure anybody realizes how difficult even the very basics of this plan will be. Nevertheless, everybody is happily publishing stories about how Stargate Abilene Texas — OpenAI’s massive data center with Oracle — is “open,” by which they mean two buildings, and I’m not even confident both of them are providing compute to OpenAI yet. There are six more of them that need to get built for this thing to start rocking at 1.2GW — even though it’s only 1.1GW according to my sources in Abilene.

But, hey, sorry — one minute — while we’re on that subject, did anybody visiting Abilene in the last week or so ever ask whether they’ll have enough power there? Don’t worry, you don’t need to look. I’m sure you were just about to, but I did the hard work for you and read up on it, and it turns out that Stargate Abilene only has 200MW of power — a 200MW substation that, according to my sources, has only been built within the last couple of months, with 350MW of gas turbine generators that connect to a natural gas power plant that might get built by the end of the year.
Said turbines are extremely expensive, featuring volatile pricing (for context, natural gas price volatility fell in Q2 2025…to 69% annualized) and even more volatile environmental consequences, and are, while permitted for it (this will download the PDF of the permit), impractical and expensive to use long-term. Analyst James van Geelen, founder of Citrini Research, recently said on Bloomberg’s Odd Lots podcast that these are “not the really good natural gas turbines,” because the really good ones would take seven years to deliver due to a natural gas turbine shortage. But they’re going to have to do. According to sources in Abilene, developer Lancium has only recently broken ground on the 1GW substation and five transformers OpenAI’s going to need to build out there, and based on my conversations with numerous analysts and researchers, it does not appear that Stargate Abilene will have sufficient power before the year 2027.

Then there’s the question of whether 1GW of power actually gets you 1GW of compute. This is something you never see addressed in the coverage of OpenAI’s various construction commitments, but it’s an important point to make. Analyst Daniel Bizo, Research Director at the Uptime Institute, explained that 1 gigawatt of power is only sufficient to power (roughly) 700 megawatts of data center capacity. We’ll get into the finer details of that later in this newsletter, but if we assume that ratio is accurate, we’re left with a troubling problem. That figure represents a 1.43 PUE — Power Usage Effectiveness — and if we apply that to Stargate Abilene, we see that it needs at least 1.7GW of power, and currently only has 200MW. Stargate Abilene does not have sufficient power to run at even half of its supposed IT load of 1.2GW, and at its present capacity — assuming that the gas turbines function at full power — can only hope to run 370MW to 460MW of IT load.
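The arithmetic behind those figures is easy to check yourself; here's a minimal sketch, using only the numbers reported above (the ~1.43 PUE implied by the Uptime Institute's 1GW-to-700MW ratio, the 200MW substation, and the 350MW of gas turbines):

```python
# Power Usage Effectiveness (PUE) = total facility power / IT load.
# Per the Uptime Institute figure quoted above, ~1GW of facility power
# supports roughly 700MW of IT load, i.e. a PUE of about 1.43.
PUE = 1000 / 700

def it_load(facility_mw: float) -> float:
    """IT load (in MW) supportable by a given facility power draw."""
    return facility_mw / PUE

# Stargate Abilene's stated IT load is 1.2GW, so the site needs:
required_mw = 1200 * PUE           # ≈ 1714MW, i.e. ~1.7GW of facility power

# Power on hand per the article: 200MW substation + 350MW of turbines.
available_it_mw = it_load(200 + 350)   # ≈ 385MW of runnable IT load

print(round(required_mw), round(available_it_mw))  # 1714 385
```

That ~385MW result sits inside the 370MW-to-460MW range given above; the spread comes from uncertainty about the site's real-world PUE, which varies with cooling design and climate.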
I’ve seen article after article about the gas turbines and their use of fracked gas — a disgusting and wasteful act typical of OpenAI — but nobody appears to have asked “how much power does a 1.2GW data center require?” and then chased it with “how much power does Stargate Abilene have?” The answer is not enough, and the significance of said “not enough” is remarkable.

Today, I’m going to tell you, at length, how impossible the future of generative AI is. Gigawatt data centers are a ridiculous pipe dream, one that runs face-first into the walls of reality. The world’s governments and media have been far too cavalier with the term “gigawatt,” casually breezing by the fact that Altman’s plans require 17 or more nuclear reactors’ worth of power, as if building power is quick and easy and cheap and just happens. I believe that many of you think that this is an issue of permitting — of simply throwing enough money at the problem — when we are in the midst of a shortage in the electrical-grade steel and transformers required to expand America’s (and the world’s) power grid. I realize it’s easy to get blinded by the constant drumbeat of “gargoyle-like tycoon cabal builds 1GW data center” and feel that they will simply overwhelm the problem with money, but no, I’m afraid that isn’t the case at all, and all of this is so silly, so ridiculous, so cartoonishly bad that it threatens even the seemingly-infinite wealth of Elon Musk, with xAI burning over a billion dollars a month and planning to spend tens of billions of dollars building the Colossus 2 data center, dragging two billion dollars from SpaceX in his desperate quest to burn as much money as possible for no reason.

This is the age of hubris — a time in which we are going to watch stupid, powerful and rich men fuck up their legacies by finding a technology so vulgar in its costs and mythical outcomes that it drives the avaricious insane and makes fools of them.
Or perhaps this is what happens when somebody believes they’ve found the ultimate con — the ability to become both the customer and the business, which is exactly what NVIDIA is doing to fund the chips behind Colossus 2. According to Bloomberg, NVIDIA is creating a company — a “special purpose vehicle” — that it will invest $2 billion in, along with several other backers. Once that’s done, the special purpose vehicle will then use that equity to raise debt from banks, buy GPUs from NVIDIA, and then rent those GPUs to Elon Musk for five years. Hell, why make it so complex? NVIDIA invested money in a company specifically built to buy chips from it, which then promptly handed the money back to NVIDIA along with a bunch of other money, and then whatever happens next is somebody else’s problem.

Actually, wait — how long do GPUs last, exactly? Four years for training? Three years? The A100 GPU started shipping in May 2020, and the H100 (and the Hopper GPU generation) entered full production in September 2022, meaning that we’re hurtling at speed toward the time in which we’re going to start seeing a remarkable number of chips wear down, which should be a concern for companies like Microsoft, which bought 150,000 Hopper GPUs in 2023 and 485,000 of them in 2024.

Alright, let me just be blunt: the entire economy of debt around GPUs is insane. Assuming these things don’t die within five years (their warranties generally end in three), their value absolutely will, as NVIDIA has committed to releasing a new AI chip every single year, likely with significant increases in performance and power efficiency. At the end of the five-year period, the Special Purpose Vehicle will be the proud owner of five-year-old chips that nobody is going to want to rent at the price that Elon Musk has been paying for the last five years. Don’t believe me?
Take a look at the rental prices for H100 GPUs, which went from $8-an-hour in 2023 to $2-an-hour in 2024, or the Silicon Data Indexes (aggregated realtime indexes of hourly prices) that show H100 rentals at around $2.14-an-hour and A100 rentals at a dollar-an-hour, with Vast.AI offering them for as little as $0.67 an hour. This is, by the way, a problem that faces literally every data center being built in the world, and I feel insane talking about it. It feels like nobody is talking about how impossible and ridiculous all of this is. It’s one thing that OpenAI has promised one trillion dollars to people — it’s another that large swaths of that will be spent on hardware that will, by the end of these agreements, be half-obsolete and generating less revenue than ever.

Think about it. Let’s assume we live in a fantasy land where OpenAI is somehow able to pay Oracle $300 billion over 5 years — which, although the costs will almost certainly grow over time, and some of the payments are front-loaded, averages out to $5bn each month, which is a truly insane number that’s in excess of what Netflix makes in revenue. Said money is paying for access to Blackwell GPUs, which will, by then, be at least two generations behind, with NVIDIA’s Vera Rubin GPUs due next year. What happens to that GPU infrastructure? Why would OpenAI continue to pay the same rental rate for five-year-old Blackwell GPUs? All of these ludicrous investments are going into building data centers full of what will, at that point, be old tech.

Let me put it in simple terms: imagine you, for some reason, rented an M1 Mac when it was released in 2020, and your rental was done in 2025, when we’re onto the M4 series. Would you expect somebody to rent it at the same price? Or would they say “hey, wait a minute, for that price I could rent one of the newer generation ones”? And you’d be bloody right!
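The $5bn-a-month figure above is straight division over the reported deal size and term; a quick sketch (this deliberately ignores the front-loading and cost growth the piece mentions):

```python
# Reported Oracle deal: $300 billion over five years.
oracle_deal_usd = 300e9
months = 5 * 12

monthly_usd = oracle_deal_usd / months  # $5bn per month on average

print(f"${monthly_usd / 1e9:.1f}bn per month")  # $5.0bn per month
```

For scale on the Netflix comparison: Netflix’s full-year 2024 revenue was roughly $39bn, or a little over $3bn a month, which is what makes a $5bn monthly compute bill so striking.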
Now, I realize that $70,000 data center GPUs are a little different to laptops, but that only makes their decline in value more profound, especially considering the billions of dollars of infrastructure built around them. And that’s the problem. Private equity firms are sinking $50 billion or more a quarter into theoretical data center projects full of what will be years-old GPU technology, despite the fact that there’s no real demand for generative AI compute, and that’s before you get to the grimmest fact of all: that even if you can build these data centers, it will take years and billions of dollars to deliver the power, if it’s even possible to do so. Harvard economist Jason Furman estimates that data centers and software accounted for 92% of GDP growth in the first half of this year, in line with my conversation with economist Paul Kedrosky from a few months ago. All of this money is being sunk into infrastructure for an “AI revolution” that doesn’t exist, as every single AI company is unprofitable, with pathetic revenues ($61 billion or so if you include CoreWeave and Lambda, both of which are being handed money by NVIDIA), impossible-to-control costs that have only ever increased, and no ability to replace labor at scale (and especially not software engineers). OpenAI needs more than a trillion dollars to pay its massive cloud compute bills and build 27 gigawatts of data centers, and to get there, it needs to start making incredible amounts of money, a job that’s been mostly handed to Fidji Simo, OpenAI’s new CEO of Applications, who is solely responsible for turning a company that loses billions of dollars into one that makes $200 billion in 2030 with $38 billion in profit. She’s been set up to fail, and I’m going to explain why. In fact, today I’m going to explain to you how impossible all of this is — not just expensive, not just silly, but actively impossible within any of the timelines set.
Stargate will not have the power it needs before the middle of 2026 — the beginning of Oracle’s fiscal year 2027, when OpenAI has to pay it $30 billion for compute — or, according to The Information, OpenAI can choose to walk away if the capacity isn’t complete. And based on my research, analysis and discussions with power and data center analysts, gigawatt data centers are, by and large, a pipe dream, with their associated power infrastructure taking two to four years to build, and that’s if everything goes smoothly. OpenAI cannot build a gigawatt of data centers for AMD by the “second half of 2026.” It hasn’t even announced the financing, let alone where the data center might be, and until it does that, it’s impossible to plan the power, which in and of itself takes months before you even start building. Every promise you’re reading in the news is impossible. Nobody has even built a gigawatt data center, and more than likely nobody ever will. Stargate Abilene isn’t going to be ready in 2026, won’t have sufficient power until 2027 at best, and based on the conversations I’ve had, it’s very unlikely it will build that gigawatt substation before the year 2028. In fact, let me put it a little simpler: all of those data center deals you’ve seen announced are basically bullshit. Even if they get the permits and the money, there are massive physical challenges that cannot be resolved by simply throwing money at them. Today I’m going to tell you a story of chaos, hubris and fantastical thinking. I want you to come away from this with a full picture of how ridiculous the promises are, and that’s before you get to the cold hard reality that AI fucking sucks.

iDiallo 5 days ago

Designing Behavior with Music

A few years back, I had a ritual. I'd walk to the nearest Starbucks, get a coffee, and bury myself in work. I came so often that I knew all the baristas and their schedules. I also started noticing the music. There were songs I loved but never managed to catch the name of, always playing at the most inconvenient times for me to Shazam them. It felt random, but I began to wonder: was this playlist really on shuffle? Or was there a method to the music? I never got a definitive answer from the baristas, but I started to observe a pattern. During the morning rush, around 8:30 AM when I'd desperately need to take a call, the music was always higher-tempo and noticeably louder. The kind of volume that made phone conversations nearly impossible. By mid-day, the vibe shifted to something more relaxed, almost lofi. The perfect backdrop for a deep, focused coding session when the cafe had thinned out and I could actually hear myself think. Then, after 5 PM, the "social hour" began. The music became familiar pop, at a volume that allowed for easy conversation, making the buzz of surrounding tables feel part of the atmosphere rather than a distraction. The songs changed daily, but the strategy was consistent. The music was subtly, or not so subtly, encouraging different behaviors at different times of day. It wasn't just background noise; it was a tool. And as it turns out, my coffee-fueled hypothesis was correct. This isn't just a Starbucks quirk; it's a science-backed strategy used across the hospitality industry. The music isn't random. It's designed to influence you. Research shows that we can broadly group cafe patrons into three archetypes, each responding differently to the sonic environment. Let's break them down, starting with the solo worker: this is you and me, with a laptop, hoping to grind through a few hours of work. Our goal is focus, and the cafe's goal is often to prevent us from camping out all day on a single coffee.
What the Research Says: A recent field experiment confirmed that fast-tempo music leads to patrons leaving more quickly. Those exposed to fast-tempo tracks spent significantly less time in the establishment than those who heard slow-tempo music or no music at all. For the solo worker, loud or complex music creates a higher "cognitive load," making sustained concentration difficult. That upbeat, intrusive morning music isn't an accident; it's a gentle nudge to keep the line moving. When you're trying to write code or draft an email and the music suddenly shifts to something with a driving beat and prominent vocals, your brain has to work harder to filter it out. Every decision, from what to name a variable to which sentence structure to use, becomes just a little more taxing. I'm trying to write a function and a song is stuck in my head. "I just wanna use your love tonight!" After an hour or two of this cognitive friction, packing up and heading somewhere quieter starts to feel like a relief rather than an inconvenience. The second archetype is the pair, there for conversation. You meet up with a friend you haven't seen in some time. You want to catch up, and the music acts as a double-edged sword. What the Research Says: The key here is volume. Very loud music can shorten a visit because it makes conversing difficult. You have to lean in, raise your voice, and constantly ask "What?" Research on acoustic comfort in cafes highlights another side: music at a moderate level acts as a "sonic privacy blanket." It masks the pair's conversation from neighboring tables better than silence, making them feel more comfortable and less self-conscious. I've experienced this myself. When catching up with a friend over coffee, there's an awkward awareness in a silent cafe that everyone can hear your conversation. Are you talking too loud about that work drama? Can the person at the next table hear you discussing your dating life?
But add a layer of moderate background music, and suddenly you feel like you're in your own bubble. You can speak freely without constantly monitoring your volume or censoring yourself. The relaxed, mid-day tempo isn't just for solo workers. It's also giving pairs the acoustic privacy to linger over a second latte, perhaps order a pastry, and feel comfortable enough to stay for another thirty minutes. The group of three or more is there for the vibe. Their primary goal is to connect with each other, and the music is part of the experience. What the Research Says: Studies on background music and consumer behavior show that for social groups, louder, more upbeat music increases physiological arousal, which translates into a sense of excitement and fun. This positive state is directly linked to impulse purchases and a longer stay. "Let's get another round!" The music effectively masks the group's own noise, allowing them to be loud without feeling disruptive. The familiar pop tunes of the evening are an invitation to relax, stay, and spend. That energy translates into staying longer, ordering another drink, maybe splitting some appetizers. The music gives permission for the group to match its volume and enthusiasm. If the cafe is already vibrating with sound, your group's laughter doesn't feel excessive; it feels appropriate. The music is not random; it's calculated. I have a private office in a coworking space. What I find interesting is that whenever I go to the common area, where most people work, there's always music blasting. Not just playing. Blasting. You couldn't possibly get on a meeting call in the common area, even though this is basically a place of work. For that, there are private rooms that you can rent by the minute. Let that sink in for a moment. In a place of work, it's hard to justify music playing in the background loud enough to disrupt actual work. Unless it serves a very specific purpose: getting you to rent a private room.
The economics make sense. I did a quick count on my floor. The common area has thirty desks but only eight private rooms. If everyone could take calls at their desks, those private rooms would sit empty. But crank up the music to 75 decibels, throw in some upbeat electronic tracks with prominent basslines, and suddenly those private rooms are booked solid at $5 per 15 minutes. That's $20 per hour, per room, eight rooms, potentially running 10 hours a day. The music isn't there to help people focus. It's a $1,600 daily revenue stream disguised as ambiance. And the best, or worst, part is that nobody complains. Because nobody wants to be the person who admits they need silence to think. We've all internalized the idea that professionals should be able to work anywhere, under any conditions. So we grimace, throw on noise-canceling headphones, and when we inevitably need to take a Zoom call, we sheepishly book a room and swipe our credit card. Until now, this process has been relatively manual. A manager chooses a playlist or subscribes to a service (like Spotify's "Coffee House" or "Lofi Beats") and hopes it has the desired effect. It's a best guess based on time of day and general principles. But what if a cafe could move from curating playlists to engineering soundscapes in real time? This is where generative AI will play a part. Imagine a system where simple sensors count the number of customers in the establishment and feed real-time information to an AI. Point-of-sale data shows the average ticket per customer and table turnover rates. The AI receives a constant stream: "It's 2:30 PM. The cafe is 40% full, primarily with solo workers on laptops. Table turnover is slowing down; average stay time is now 97 minutes, up from the target of 75 minutes." An AI composer, trained on psychoacoustic principles and the cafe's own historical data, generates a unique, endless piece of music. It doesn't select from a library. It is created in real time.
The manager has set a goal: "Gently increase turnover without driving people away." The AI responds by subtly shifting the generated music to a slightly faster BPM, maybe from 98 to 112 beats per minute. It introduces more repetitive, less engrossing melodies. Nothing jarring, nothing that would make someone consciously think "this music is annoying," but enough to make that coding session feel just a little more effortful. The feedback loop measures the result. Did the solo workers start packing up 15 minutes sooner on average? Did they look annoyed when they left, or did the departure seem natural? Did anyone complain to staff? The AI learns and refines its model for next time, adjusting its parameters. Maybe 112 BPM was too aggressive; next time it tries 106 BPM with slightly less complex instrumentation. This isn't science fiction; the technology exists today. Any day now, you'll see a startup providing this service, where the ambiance of a space is not just curated but designed. A cafe could have a "High Turnover Morning" mode, a "Linger-Friendly Afternoon" mode, and a "High-Spend Social Evening" mode, with the AI seamlessly transitioning between them by generating the perfect, adaptive soundtrack. One thing I find frustrating about AI is that once we switch to these types of systems, you'll never know. The music would always feel appropriate, never obviously manipulative. It would be perfectly calibrated to nudge you in the desired direction while remaining just below the threshold of conscious awareness. A sonic environment optimized not for your experience, but for the business's metrics. When does ambiance become manipulation? There's a difference between playing pleasant background music and deploying an AI system that continuously analyzes your behavior and adjusts the environment to influence your decisions. One is hospitality; the other is something closer to behavioral engineering.
And unlike targeted ads online, which we're at least somewhat aware of and can block, this kind of environmental manipulation is invisible, unavoidable, and operates on a subconscious level. You can't install an ad blocker for the physical world. I don't have answers here, only questions. Should businesses be required to disclose when they're using AI to manipulate ambiance? Is there a meaningful difference between a human selecting a playlist to achieve certain outcomes and an AI doing the same thing more effectively? Does it matter if the result is that you leave a cafe five minutes sooner than you otherwise would have? These are conversations we need to have as consumers, as business owners, and as a society. Now we know that the quiet background music in your local cafe has never been just music. It's a powerful, invisible architect of behavior. And it's about to get a whole lot smarter. The building blocks already exist: generative AI that can create music in any style (MusicLM, MusicGen); computer vision that can anonymously track occupancy and behavior; point-of-sale systems that track every metric in real time; and machine learning systems that can optimize for complex, multi-variable outcomes.
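The measure-nudge-observe-refine loop described above can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration, not any real product: the function name, the 75-minute target, the BPM step size, and the clamping range are all invented, echoing the numbers used in the scenario:

```python
# Hypothetical sketch of the adaptive-ambiance loop described above.
# All names and numbers are invented for illustration.

def adjust_tempo(bpm: float, avg_stay_min: float, target_min: float = 75,
                 step: float = 6.0, lo: float = 90.0, hi: float = 120.0) -> float:
    """Nudge the generated music's tempo based on how long patrons linger."""
    if avg_stay_min > target_min:
        bpm += step                   # lingering past target: speed up
    elif avg_stay_min < target_min * 0.8:
        bpm -= step                   # emptying too fast: ease off
    return max(lo, min(hi, bpm))      # keep tempo in a plausible range

# Simulated afternoon: average stay creeps past the 75-minute target,
# and the system drifts the tempo upward in response.
bpm = 98.0
for avg_stay in [72, 80, 91, 97, 97]:
    bpm = adjust_tempo(bpm, avg_stay)
    print(f"avg stay {avg_stay} min -> play at {bpm:.0f} BPM")
```

A real system would go further and learn the step size from observed outcomes (the 112-versus-106 BPM refinement in the scenario) rather than hard-coding it.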

Anton Sten 5 days ago

Henry Ford's horse problem wasn't about imagination

> "If I had asked people what they wanted, they would have said faster horses." - Henry Ford (allegedly)

This quote gets thrown around constantly—usually by someone who wants to justify ignoring user research entirely. The logic goes: users don't know what they want, so why bother asking them? The problem isn't the sentiment. It's that people use the quote to defend lazy research, as an excuse to avoid talking to users altogether, when the real lesson is about asking better questions. Here's the thing: Henry Ford's mistake wasn't talking to users. It was asking the wrong question.

## The real problem with "faster horses"

Let's assume Ford actually said this (there's no evidence he did, but let's run with it). The issue isn't that people asked for faster horses. It's that "What do you want?" is a terrible research question. Of course they said faster horses. That's the only frame of reference they had. But if Ford had dug deeper—if he'd asked about their actual problems instead of solutions—he would've heard something very different. Imagine if he'd asked:

- What's frustrating about traveling with your horse?
- Tell me about the last time you needed to go somewhere far away.
- What stops you from traveling more often?
- How does weather affect your trips?

Suddenly you're not hearing "faster horses." You're hearing:

- "I can't take my whole family without a carriage"
- "Long rides leave me sore for days"
- "I get soaked when it rains"
- "My horse gets tired and needs rest"
- "Feeding and caring for a horse is expensive"

None of these answers mention cars. But every single one of them points directly to what a car solves.

## Good research doesn't ask for solutions

The mistake most people make—and the one this quote reinforces—is thinking user research means asking users what to build. It doesn't. Good research uncovers problems. It reveals pain points. It helps you understand what people are actually struggling with in their daily lives. What they're working around. What they've given up on entirely.
Users aren't supposed to design your product. That's your job. But they're the only ones who can tell you what's actually broken in their world. When you focus on understanding problems instead of collecting feature requests, you stop getting "faster horses" and start hearing real needs.

## Why this matters more now than ever

Here's the irony: the same people who quote Henry Ford to avoid user research are now using AI to build products faster than ever. Which means they're building the wrong things faster than ever. The market is flooded with functional products that [solve problems nobody has](https://www.antonsten.com/books/products-people-actually-want/). Henry Ford couldn't build a car in a weekend. You can build a working app in hours with AI. The barrier to building dropped to zero. The barrier to understanding what people actually need? That stayed exactly the same. Which makes user research the only competitive advantage that matters.

## How to actually understand your users

I've written before about [stakeholder interviews](https://www.antonsten.com/articles/stakeholder/) and the same principles apply to user research:

**Ask about the past, not the future.** "Tell me about the last time you struggled with X" beats "What features would you want?" every time.

**Focus on behavior, not opinions.** What people actually do matters more than what they say they'd do. Watch for workarounds—they reveal unmet needs.

**Dig into the why.** When someone mentions a problem, ask why it matters. Then ask why again. The first answer is usually surface-level. The third or fourth answer is where the real insight lives.

**Listen for emotion.** When someone's voice changes—frustration, relief, resignation—you've hit something that actually matters to them.

None of this requires a PhD. It just requires showing up with curiosity instead of assumptions.

## The bottom line

The Henry Ford quote isn't wrong because users can't imagine solutions.
It's wrong because it defends lazy research. Great products don't come from avoiding users—they come from understanding them deeply. Not asking what they want, but understanding what they struggle with. What they're working around. What they've accepted as "just the way it is." That's how you build something people actually need instead of just another faster horse.


Co-Pilots, Not Competitors: PM/EM Alignment Done Right

In commercial aviation, most people know there's a "Pilot" and a "Co-Pilot" up front (also known as the "Captain" and "First Officer"). The ranks of Captain and First Officer denote seniority, but both pilots, in practice, take turns filling one of two roles: "pilot flying" and "pilot monitoring". The pilot flying is at the controls actually operating the plane. The pilot monitoring is usually the one talking to Air Traffic Control, running checklists, and watching over the various aircraft systems to make sure they're healthy throughout the flight. Obviously, both pilots share the goal of getting the passengers to the destination safely and on time while managing the costs of fuel etc. secondarily. They have the same engines, fuel tank and aircraft systems, and no way to alter that once airborne. They succeed or fail together. It would be a hilariously bad idea to give one pilot the goal of getting to the destination as quickly as possible while giving the other pilot the goal of limiting fuel use as much as possible and making sure the wings stay on. Or, to take an even stupider example, give one pilot the goal of landing in Los Angeles and the other the goal of landing in San Francisco. Obviously that wouldn't work at all, because those goals are in opposition to each other. You can get there fast if you don't care about fuel or the long-term health of the aircraft, or you can optimize for fuel use and aircraft stress if you don't need to worry about the travel time as much. It sounds ridiculous! And yet, this is how a lot of organizations structure EM and PM goals. An EM and PM have to use the same pool of available capacity to achieve their objectives. They are assigned a team of developers, and what that team of developers is able to do in a given period is a zero-sum game.
In a modern structure, following the kinds of practices that lead to good DORA metrics, their ownership is probably going to be a mix of building new features and care and feeding of what they've shipped before. Like a plane's finite fuel tank, the capacity of the team is fixed and exhaustible. Boiled down to its essence, the job of leaders is managing that capacity to produce the best outcomes for the business. Problems arise, however, in determining what those outcomes are, because they're going to be filtered through the lens of each individual's incentive structure. Once fuel is spent, it's spent. Add more destinations without adjusting the plan, and you're guaranteeing failure. It would not be controversial to say that a typical Product Manager is accountable and incentivized to define and deliver new features that increase user adoption and drive revenue growth. Nor would it be controversial to say that a typical Engineering Manager is accountable and incentivized to make sure that the code that is shipped is relatively error-free, doesn't crash, treats customer data appropriately, scales to meet demand, and so on. You know the drill. I think this is fine, and it reflects the reality that EMs and PMs aren't like pilots in that their roles are not fully interchangeable and there is some specialization in those areas they're accountable for. But if you just stop there, you have a problem. You've created the scenario where one pilot is trying to get to the destination as quickly as possible and one is trying to save fuel and keep the airplane from falling apart. You need to go a step further and coalesce all of those things into shared goals for the team. An interview question I like to ask both prospective EM and PM candidates is "How do you balance the need to ship new stuff with the need to take care of the stuff you've already shipped?"
The most common answer is something like "We reserve X% of our story points for 'Engineering Work' and the rest is for 'Product Work'." This way of doing things is a coping strategy disguised as a solution. It's the wrong framing entirely because everything is product work. Performance is a feature, reliability is a feature, scalability is a feature. A product that doesn't work isn't a product, and so a product manager needs to care about that sort of thing too. Pretending there's a clean line between "Product Work" and "Engineering Work" is how teams can quietly drift off-course. On the flip side, I'm not letting engineering managers off the hook either. New features and product improvements are usually key to maintaining business growth and keeping existing customers happy. All the scalability and performance in the world don't matter if you don't have users. Over-rotating on trying to get to "five nines" of uptime when you're not making any revenue is a waste of your precious capacity that could be spent on things that grow the business, and that growth will bring more opportunities for solving interesting technical challenges and more personal growth opportunities for everyone who is there. EMs shouldn't ignore the health of the business any more than PMs should ignore the health of the systems their teams own. Different hats are fine; different destinations are not. If you use OKRs or some other cascading system of goals where the C-Suite sets some broad goals for the company and those cascade into ever more specific versions as you walk down the org chart hierarchy, what happens when you get to the team level? Do the EM and PM have separate OKRs they're trying to achieve? Put them side by side and ask yourself the following questions:

- If we had infinite capacity, are these all achievable, or are some mutually exclusive? (Example: launch a new feature with expensive storage requirements (PM) vs. cut infra spend in half (EM).)
- Does achieving any of these objectives make another one harder? (Example: increase the number of shipped experiments and MVPs (PM) vs. cut the inbound rate of customer support tickets escalated to the team (EM).)
- Do we actually have the capacity to do all of this? (Example: ship a major feature in time for the annual customer conference (PM) vs. upgrade our key framework, which is two major versions behind and going out of support (EM).)

If you answer 'no' to any of those, congratulations, you've just planned your team's failure to achieve their goals. To put it another way, your planning is not finished yet! You need to keep going and narrow down the objectives to something that both leaders can get behind and do their best to commit to. You've already got your goals side by side. After you've identified the ones that are in conflict, have a conversation about business value. Don't bring in the whole team yet; they're engineers, and the PM might feel outnumbered. Just have the conversation between the two of you and see if you can come to some kind of consensus about which of the items in conflict are more valuable. Prioritize that one and deprioritize the one it's in opposition to. Use "what's best for the business" as your tie-breaker. That's not always going to be perfectly quantifiable, so there's a lot of subjectivity and bias that's going to enter into it. This is where alignment up the org chart is very beneficial. These concepts don't just apply at the team level. If you find you can't agree and aren't making progress, try to bring in a neutral third party to facilitate the discussion. If you have scrum-masters, agile coaches, program managers or the like, they can often fill this role nicely. You could also consult a partner from Customer Support, who will have a strong incentive to advocate for the customer above all, to add more context to the discussion. Don't outsource the decision, though; both the EM and PM need to ultimately agree (even reluctantly) on what's right for the team. If your org, like many, has parallel product and engineering hierarchies, alignment around goals at EACH level is critical. The VP Eng and VP Product should aim to have shared goals for the entire team. Same thing at the Director level, and every other level. That way, if each team succeeds, each leader up the org chart succeeds, and ultimately the customer and the business are the beneficiaries. If you don't have that, doing it at the team level alone is going to approach impossible, and you should send this post to your bosses. 😉

But in seriousness, you should spend some energy trying to get alignment up and down the org chart. If your VP Eng and VP Product don't agree on direction, your team is flying two different flight plans. No amount of team-level cleverness can fully fix that. Your job is to surface it, not absorb it. Things you can do:

- If your original set of goals conflicted, the goals at the next level up probably do too. Call that out and ask for it to be reconciled.
- Escalate when structural problems get in the way of the team achieving their goals. Only the most dysfunctional organizations would create systems that guarantee failure on purpose.

You likely set goals on a schedule. Every n months you're expected to produce a new set and report on the outcomes from the last set. That's all well and good, but I think most people who've done it a few times recognize that the act of doing it is more valuable than the artifacts the exercise produces. (Often described as "Plans are useless, planning is critical.") The world changes constantly, and the knowledge you and your team have about what you're doing increases constantly. All of this can impact your EM/PM alignment, so it's also important to stay close, keep the lines of communication very open, and make adjustments whenever needed. The greatest predictor of a team's success, health, and happiness is the quality of the relationship between the EM and PM. Keeping that relationship healthy, and fixing it if it starts to break down, will keep the team on the right flight path with plenty of spare fuel for surprise diversions if they're needed. Regardless of what framework you're using, or even if you're not using one at all, the team should have one set of goals they're working toward, and the EM and PM should succeed or fail together on achieving those goals. Shared goals don't guarantee a smooth landing. Misaligned ones guarantee a crash.

"A380 Cockpit" by Naddsy is licensed under CC BY 2.0.

Michael Lynch 1 week ago

Refactoring English: Month 10

Hi, I’m Michael. I’m a software developer and founder of small, indie tech businesses. I’m currently working on a book called Refactoring English: Effective Writing for Software Developers. Every month, I publish a retrospective like this one to share how things are going with my book and my professional life overall. At the start of each month, I declare what I’d like to accomplish. Here’s how I did against those goals: I did complete this successfully, but I spent too long on the post and felt somewhat underwhelmed with my final result. I wrote a first draft of a new chapter but didn’t publish it. I ended up spending more time than I planned on “The Software Essays that Shaped Me” and freelance editing clients. I was going to write this off and say that I’m not learning anything new anymore by reaching out to customers. Then, a few days ago, I heard back from a reader I’d reached out to who said he used what he learned from my book to get an article on the front page of Hacker News for the first time. So, that was pretty indisputably valuable and tells me I should be doing more of this. I brainstorm more about this below. September had a nice bump in website visitors and pre-orders. I’d like to get to the point where there’s a virtuous cycle of readers referring other readers, but I don’t think I’m there yet. Still, it’s nice to make almost $1k for the month. In baseball, a bunt is when you hold the bat in the ball’s path rather than swinging the bat. The upside is that you’re less likely to miss, but the downside is that you won’t hit the ball very far. The best you can hope for with a bunt is making it to first base; a bunt is almost never going to be a home run. Most of my blog posts are “swing for the fences” posts. I put in a lot of effort because I want to reach #1 on Hacker News, reddit, or search results.
The problem is that my “swing for the fences” posts take me about a month to write, so if I’m publishing blog posts as I write my book, I’d have to put my book on hold for a month every time I write a blog post. I’ve been thinking about whether I could do some “bunt” posts instead. That way, I can only put my book on hold for a week rather than the whole month. I don’t want to take a topic that deserves a lot of care and just do a lazy version of it. Rather, I want to take a topic that’s easy to cover and just see how it does. My first bunt was, “I Once Appeared in The Old New Thing.” It was about an experience I had at 22 at my first real job. I didn’t have a lot of insightful things to say about it, but I thought it was an interesting story. I was able to write it in about four hours, and it felt complete for what it was. My next bunt was, “The Software Essays that Shaped Me.” I’ve seen other people share lists of their favorite software blog posts, and I thought it would be an easy, fun thing to do. Best of all, the people who appreciate good software writing might also find my book interesting. As I started to write “The Software Essays that Shaped Me,” it turned into more than just a bunt. I ended up spending almost all of September on it. I originally thought I’d list my favorite blog posts and call it a day, but that felt too boring. So, I tried to include short commentary about each post. Then, I got carried away and ended up writing commentary that was longer than the originals themselves. It took me several drafts to figure out what commentary felt interesting, and I still don’t feel like I quite succeeded. I ended up spending 17 hours on “The Software Essays that Shaped Me” and never stopped to evaluate whether it was still worth writing if it was going to be all that work. I think the post is interesting to people who read my blog. If someone I knew published a list of articles that influenced them, I’d find that interesting. 
But in comment threads about the post, people shared their own lists, and I found strangers’ lists totally uninteresting. Maybe I counteracted that some by investing a lot in my commentary, but I just don’t think a list of good blog posts can be all that interesting. Both posts did well. They both reached the front page of Hacker News, though they did it through the second chance pool, which feels a little like winning through TKO rather than a real knockout. It’s interesting that the results scaled almost linearly with the effort I invested, which I typically don’t find to be the case. Previously, when one of my Refactoring English posts did well on Hacker News, there was a noticeable uptick in readers purchasing the book. This time, “The Software Essays that Shaped Me” reached #2 and stayed on the front page for 11 hours, but only one person purchased. Maybe everyone seeing my post on Hacker News has already seen that I’m writing a book, so everyone who’s interested has already bought? I woke up the morning after my article had already fallen off the front page of Hacker News and suddenly realized: I never included the ad for the book! All the sample chapters on the book’s website include a little self-ad to tell the reader I’m writing a book on this topic, and they can buy early access. All the pages on the Refactoring English website are supposed to have a little self-ad on them for the book. I forgot to include the self-ad for the blog post, so the first 14k readers saw my post and had no idea I’m writing a book. D’oh! I’ve updated my blog template so that I can’t possibly forget to include the self-ad in the future. A few months ago, I decided to offer freelance editing services to help other developers improve writing on their blogs. My idea was that it’s an opportunity to make sure the way I explain concepts in my book makes sense to real people. The downside is that there’s a high cost to the editing.
Each job takes me between four and seven hours, and it eats up my “hard thinking” for the day, so it’s tough to do my own writing on the same day. I also feel pressure to offer quick turnaround, even though nobody has asked me to hurry. But just knowing my own writing process, it sucks to be stuck for days waiting on feedback. At the beginning, freelance editing worked as I planned: it gave me good ideas for my book. As I do more jobs, I’m getting fewer ideas for my book. Now, most of the feedback I write is basically a personalized version of something I’ve already written for my book. I want to keep doing the editing, but only for authors who have read my book. I doubled my rates, so now my price for editing a blog post is $400. But I’m going to offer a 90% discount to readers who have read my book. At a 90% discount, it’s almost not worth charging at all, but I want clients to pay some amount so that they feel like they have skin in the game, too. I’ll continue to take on clients who haven’t read the book, but I want to charge enough that I feel like it’s worth the tradeoff of taking time from my book. $400 might still be too low, but we’ll see. I’m trying to figure out why I keep missing my goal of reader outreach. On its face, it doesn’t seem that hard, but it never seems like the most important thing, so I keep deferring it. There are other tasks I procrastinate on because I don’t enjoy doing them, but I actually enjoy reaching out to readers. It’s fun to see what different readers are up to and how they might apply my techniques. Part of the issue is that emailing readers requires activation energy because of all the steps involved. It might help if I first gather a list of customers to email and their websites. That way, when I’m in the mood to reach out, I’m not starting from scratch every time. A few Refactoring English customers have emailed me confused because they paid but never got an email with a link to the book.
I collect payment through Stripe, and Stripe redirects customers to the book’s URL after they complete payment. If the customer doesn’t notice the redirect or forgets to bookmark the page, they lose access to the book. Whenever customers tell me they can’t find the link to the book, I dig around in Stripe to look for a setting to customize post-purchase emails, give up after a few minutes, and then email the correct link to the customer. Last month, I finally sat down and searched through Stripe’s documentation and forum posts, and I can’t find any way to customize the email Stripe sends after a customer completes a one-time payment. As far as I can tell, the only option is to spin up your own web server to listen for Stripe webhooks, then send your own emails from your own email provider. All because Stripe can’t be bothered to let merchants customize any text in the payment completion emails… Setting up a web server to respond to webhooks shouldn’t be that hard for me, but it means writing code to glue together Stripe, Buttondown, and Netlify functions, and they all have their little gotchas and bugs. Especially Stripe. I’ve spent about 10 hours so far just trying to get emails to send after a customer makes a purchase, and I’m still not sure it’s working correctly, thanks to a string of gotchas along the way. Separately, I’m still tinkering with Hacker News Observer, a product that I still haven’t released and don’t know what to do with. For now, I’m just gathering data and using it to satisfy some curiosities about success on Hacker News.
One curiosity I’ve had for a long time is whether there are times of day when it’s easier for a post to reach the front page of Hacker News, so I aggregated what percentage of posts reach the front page over the course of a day. (I created a view in Hacker News Observer to show front-page stats by hour.) I initially thought I had a bug that overcounted the success rate, as the percentage of Hacker News submissions that reach the front page feels lower than 12% in my experience. Then, I looked at some random slices from the last few days, and it seems to match up. If I browse recent submissions, there will typically be 2-5 stories that reached the front page. I found a 30-minute slice from a few days ago where 27% of submissions reached the front page, which is surprising. I thought that success rate would be significantly higher on the weekends, when there are fewer submissions. Weekend posts are more likely to reach the front page, but the effect is much smaller than I thought. I thought it was going to be like 5% on weekdays vs. 20% on weekends. It makes submitting on the weekend less attractive because your chances of hitting the front page are only slightly better, but if you succeed, there are substantially fewer readers. I’d like to try limiting the data to personal blogs like I do on HN Popularity Contest, as I’m curious to see if personal blogs have better chances at certain times. I’m experimenting with low-investment, low-payoff-style blog posts. I’m adjusting my strategy for freelance editing to work specifically with people who have read my book. My intuition was way off about the odds of reaching the front page of Hacker News.
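The by-hour aggregation itself is just a bucketed counting pass. Here is a minimal Go sketch of the idea; the `Submission` struct and its field names are hypothetical, since the post doesn't show Hacker News Observer's actual data model:

```go
package main

import "fmt"

// Submission is a hypothetical, minimal record shape for an HN story.
type Submission struct {
	Hour      int  // hour of day it was submitted, 0-23
	FrontPage bool // whether it ever reached the front page
}

// frontPageRateByHour returns, for each hour of the day, the percentage
// of submissions made in that hour that reached the front page.
func frontPageRateByHour(subs []Submission) [24]float64 {
	var total, hits [24]int
	for _, s := range subs {
		total[s.Hour]++
		if s.FrontPage {
			hits[s.Hour]++
		}
	}
	var rates [24]float64
	for h := 0; h < 24; h++ {
		if total[h] > 0 {
			rates[h] = 100 * float64(hits[h]) / float64(total[h])
		}
	}
	return rates
}

func main() {
	subs := []Submission{
		{Hour: 9, FrontPage: true},
		{Hour: 9, FrontPage: false},
		{Hour: 9, FrontPage: false},
		{Hour: 14, FrontPage: true},
	}
	rates := frontPageRateByHour(subs)
	fmt.Printf("09:00 rate: %.1f%%\n", rates[9])  // prints 33.3%
	fmt.Printf("14:00 rate: %.1f%%\n", rates[14]) // prints 100.0%
}
```

Counting success per submission hour (rather than, say, averaging rank) matches the stat described here: the percentage of each hour's submissions that ever reach the front page.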
Result: Published “The Software Essays that Shaped Me”, which attracted 16k readers in the first three days
Result: Didn’t publish anything new
Result: Emailed two new readers

To email a reader, I have to:
- Go to my list of pre-paid readers
- Look for ones that have a website (so I can say something personalized)
- Read through their website to learn more about them
- Write an email and word it carefully to avoid sounding AI-generated

The Stripe webhook gotchas I’ve hit so far:
- Stripe’s Go client library is compatible with exactly one version of the Stripe webhook API. No, the documentation doesn’t say which one. Run it and find out from the webhook failures!
- If you update your Stripe account to use the latest webhook API version and then resend a webhook for a previous event, Stripe still uses the old API version even though it claims to use the new version.
- Netlify silently converts HTTP header names to lowercase, so if you’re looking for a mixed-case header, you have to look for the all-lowercase version of its name.
- Instead of a normal v2 Go module, Stripe for some reason decided to make every package upgrade a source change as well, so when I upgrade from v83 to v84, I have to replace the versioned import path in every file that imports the Stripe package. Normally, you’d upgrade the version in one place without affecting imports.
- The Stripe webhook signing secret is different from your Stripe API key.

Front-page odds by day of week:
- Weekdays: 12.1% of submissions reach the front page.
- Weekends: 13.2% of submissions reach the front page.

This month, I:
- Published “The Software Essays that Shaped Me”
- Published “I Once Appeared in The Old New Thing”
- Published “Get xkcd Cartoons at 2x Resolution”
- Worked with two freelance clients for Refactoring English
- Set up a webhook handler to send post-purchase emails to Refactoring English customers
- Added a “success by hour of day” feature to Hacker News Observer
- Started contributing to the Jellyfin Roku client code
- Had a call with AirGradient to discuss improving relations between the company and community members

Lesson: Consider bailing if a low-investment post turns out to be high-investment.
- Stripe does not allow you to customize post-purchase emails. You have to do a bunch of other stuff to send your customers an email.
- Set up editing discounts for readers who have read the book.
- Create a list of early access customers to reach out to.
- Publish a new chapter of the book.

iDiallo 1 week ago

Why You Can't Be an Asshole in the Middle

On my first day on the job, the manager introduced me to the team, made a couple of jokes, then threatened to fire someone. At first, I thought it was just his sense of humor, that it was something I would understand once I worked long enough on the team. But no one else laughed. The air in the meeting room became stiff as he rambled about issues we had. The next Monday morning, he did it again. Now I was confused. Was I being hazed? No. Because he did it again the following Monday. He was an asshole. But he wasn't just any asshole. He thought he was Steve Jobs. Steve Jobs was a difficult person to work with. He was brutally honest, he could bend wills, shatter egos. Yet, he was also enshrined as one of the greatest business leaders of our time. He was the visionary who resurrected Apple and gave us the iPhone. My manager wasn't alone in his delusion. Many professionals who find themselves in a people manager's position look at Jobs and think that being a brilliant jerk is a viable path to success. "The results speak for themselves. Maybe I need to be tougher, more demanding, less concerned with feelings." What they fail to see is that they are not Steve Jobs. And unless you're the CEO at the helm, acting like him is not a superpower. When you're a mid-level manager, you're not the Captain. You're a member of the crew. The difference between being an asshole at the top versus being an asshole in the middle comes down to authority, autonomy, and consequences. Jobs was the Captain. As the founder and CEO, he was the ultimate source of authority and vision. His difficult personality was inseparable from the company's mission. People tolerated his behavior because they bought into his vision of the future. He had the final say on hiring, firing, and strategy. His presence was the gravitational force around which the entire company orbited. When the captain is an asshole, the crew might stay for the voyage.
When a fellow crewmate is an asshole, they get thrown overboard. A mid-level manager is a key member of the crew, but you are not the ultimate authority. Your colleagues in engineering, marketing, and sales don't report to you out of reverence for your world-changing vision; they collaborate with you to achieve shared company goals. Your power is not absolute; it's influence-based. And that changes everything. For Steve Jobs, it's not that being an asshole was his secret sauce. It's that his unique position allowed him to survive the downsides of his personality. He was building his vision of the future. For every person he drove away, another was drawn to the mission. It was impossible to fire him (a second time). He could fire people, and he could make them millionaires with stock options. The potential upside made the toxicity tolerable. The part of the story that often gets omitted is that Jobs had a cleanup crew. Behind his grandiose ideas and abrasive personality, there were people who handled the operations and relationship-focused work he didn't have time for. That's what Tim Cook was for. Tim Cook smoothed over the conflicts, built the partnerships, and kept the machine running while Jobs played visionary. As a mid-level manager, you don't have a Tim Cook, do you? As a mid-level manager, your "because I said so" doesn't have the same weight. Anyone one level above your position can contradict you. When the CEO is harsh and demanding, it gets labeled as visionary leadership. The same behavior from a mid-level manager is seen for what it is: poor communication and a lack of respect. Your influence is much smaller than that of the person at the helm. You need favors from other departments, buy-in from your peers, and discretionary effort from your team. Being difficult burns bridges, creates resentment, and ensures that when you need help, no one will be in a hurry to give it.
Your "brilliant" idea dies in a meeting room because you've alienated the very people needed to execute it. Your tools are limited. You can't promise life-changing wealth, and while you can influence promotions or terminations, the process is often layered with HR policies and approvals. Using fear as your primary tool without having ultimate control just creates a culture of anxiety and quiet quitting, not breakthrough innovation. Collaboration is your strength, and you're actively undermining it. When we had layoffs at my company, my manager was first on the list to get the boot. I can't say that his "assholery" was what put him on the list, but it certainly didn't help. No one went to bat for him. No one argued that he was indispensable. The bridges he'd burned came back to haunt him. Your success as a mid-level manager depends on your ability to influence, inspire, and collaborate. You can't demand greatness; you have to cultivate it. And you can't do that from behind a wall of arrogance and fear. In the real world, building bridges will always get you further than burning them. At work, be the leader people actually want to follow .

Sean Goedecke 1 week ago

How I influence tech company politics as a staff software engineer

Many software engineers are fatalistic about company politics. They believe that it’s pointless to get involved, because 1: The general idea here is that software engineers are simply not equipped to play the game at the same level as real political operators. This is true! It would be a terrible mistake for a software engineer to think that you ought to start scheming and plotting like you’re in Game of Thrones. Your schemes will be immediately uncovered and repurposed to your disadvantage and other people’s gain. Scheming takes practice and power, and neither of those things is available to software engineers. It is simply a fact that software engineers are tools in the political game being played at large companies, not players in their own right. However, there are many ways to get involved in politics without scheming. The easiest way is to actively work to make a high-profile project successful. This is more or less what you ought to be doing anyway, just as part of your ordinary job. If your company is heavily investing in some new project - these days, likely an AI project - using your engineering skill to make it successful 2 is a politically advantageous move for whatever VP or executive is spearheading that project. In return, you’ll get the rewards that executives can give at tech companies: bonuses, help with promotions, and positions on future high-profile projects. I wrote about this almost a year ago in Ratchet effects determine engineer reputation at large companies. A slightly harder way (but one that gives you more control) is to make your pet idea available for an existing political campaign. Suppose you’ve wanted for a while to pull out some existing functionality into its own service. There are two ways to make that happen. The hard way is to expend your own political capital: drum up support, let your manager know how important it is to you, and slowly wear doubters down until you can get the project formally approved.
The easy way is to allow some executive to spend their (much greater) political capital on your project. You wait until there’s a company-wide mandate for some goal that aligns with your project (say, a push for reliability, which often happens in the wake of a high-profile incident). Then you suggest to your manager that your project might be a good fit for this. If you’ve gauged it correctly, your org will get behind your project. Not only that, but it’ll increase your political capital instead of you having to spend it. Organizational interest comes in waves. When it’s reliability time, VPs are desperate to be doing something. They want to come up with plausible-sounding reliability projects that they can fund, because they need to go to their bosses and point at what they’re doing for reliability, but they don’t have the skillset to do it on their own. They’re typically happy to fund anything that the engineering team suggests. On the other hand, when the organization’s attention is focused somewhere else - say, on a big new product ship - the last thing they want is for engineers to spend their time on an internal reliability-focused refactor that’s invisible to customers. So if you want to get something technical done in a tech company, you ought to wait for the appropriate wave. It’s a good idea to prepare multiple technical programs of work, all along different lines. Strong engineers will do some of this kind of thing as an automatic process, simply by noticing things in the normal line of work. For instance, you might have rough plans ready. When executives are concerned about billing, you can offer the billing refactor as a reliability improvement. When they’re concerned about developer experience, you can suggest replacing the build pipeline. When customers are complaining about performance, you can point to the Golang rewrite as a good option.
When the CEO checks the state of the public documentation and is embarrassed, you can make the case for rebuilding it as a static site. The important thing is to have a detailed, effective program of work ready to go for whatever the flavor of the month is. Some program of work will be funded whether you do this or not. However, if you don’t do this, you have no control over what that program is. In my experience, this is where companies make their worst technical decisions: when the political need to do something collides with a lack of any good ideas. When there are no good ideas, a bad idea will do, in a pinch. But nobody prefers this outcome. It’s bad for the executives, who then have to sell a disappointing technical outcome as if it were a success 4, and it’s bad for the engineers, who have to spend their time and effort building the wrong idea. If you’re a very senior engineer, the VPs (or whoever) will quietly blame you for this. They’ll be right to! Having the right idea handy at the right time is your responsibility. You can view all this in two different ways. Cynically, you can read this as a suggestion to make yourself a convenient tool for the sociopaths who run your company to use in their endless internecine power struggles. Optimistically, you can read this as a suggestion to let executives set the overall priorities for the company - that’s their job, after all - and to tailor your own technical plans to fit 3. Either way, you’ll achieve more of your technical goals if you push the right plan at the right time. edit: this post got some attention on Hacker News. The comments were much more positive than on my other posts about politics, for reasons I don’t quite understand. This comment is an excellent statement of what I write about here (but targeted at more junior engineers).
This comment (echoed here) references a Milton Friedman quote that applies the idea in this post to political policy in general, which I’d never thought of but sounds correct: Only a crisis—actual or perceived—produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes politically inevitable. There are a few comments calling this approach overly game-playing and self-serving. I think this depends on the goal you’re aiming at. The ones I referenced above seem pretty beneficial to me! Finally, this comment is a good summary of what I was trying to say: Instead of waiting to be told what to do and being cynical about bad ideas coming up when there’s a vacuum and not doing what he wants to do, the author keeps a backlog of good and important ideas that he waits to bring up for when someone important says something is priority. He gets what he wants done, compromising on timing. I was prompted to write this after reading Terrible Software’s article Don’t avoid workplace politics and its comments on Hacker News. Disclaimer: I am talking here about broadly functional tech companies (i.e. ones that are making money). If you’re working somewhere that’s completely dysfunctional, I have no idea whether this advice would apply at all. What it takes to make a project successful is itself a complex political question that every senior+ engineer is eventually forced to grapple with (or to deliberately avoid, with consequences for their career). For more on that, see How I ship projects at large tech companies. For more along these lines, see Is it cynical to do what your manager wants? Just because they can do this doesn’t mean they want to.
The beliefs behind that fatalism:
- Technical decisions are often made for completely selfish reasons that cannot be influenced by a well-meaning engineer
- Powerful stakeholders are typically so stupid and dysfunctional that it’s effectively impossible for you to identify their needs and deliver solutions to them
- The political game being played depends on private information that software engineers do not have, so any attempt to get involved will result in just blundering around
- Managers and executives spend most of their time playing politics, while engineers spend most of their time doing engineering, so engineers are at a serious political disadvantage before they even start

Example rough plans:
- to migrate the billing code to stored-data-updated-by-webhooks instead of cached API calls
- to rip out the ancient hand-rolled build pipeline and replace it with Vite
- to rewrite a crufty high-volume Python service in Golang
- to replace the slow CMS frontend that backs your public documentation with a fast static site


OpenAI Is Just Another Boring, Desperate AI Startup

What is OpenAI? I realize you might say "a foundation model lab" or "the company that runs ChatGPT," but that doesn't really give the full picture of everything it’s promised, or claimed, or leaked that it was or would be. No, really, if you believe its leaks to the press... To be clear, many of these are ideas that OpenAI has leaked specifically so the media can continue to pump up its valuation and continue to raise the money it needs — at least $1 trillion over the next four or five years, and I don't believe the theoretical (or actual) costs of many of the things I've listed are included. OpenAI wants you to believe it is everything, because in reality it’s a company bereft of strategy, focus or vision. The GPT-5 upgrade for ChatGPT was a dud — an industry-wide embarrassment for arguably the most-hyped product in AI history, one that (as I revealed a few months ago) costs more to operate than its predecessor, not because of any inherent capability upgrade, but because of how it actually processes the prompts its user provides — and now it's unclear what it is that this company does. Does it make hardware? Software? Ads? Is it going to lease you GPUs to use for your own AI projects? Is it going to certify you as an AI expert? Notice how I've listed a whole bunch of stuff that isn't ChatGPT, which will, if you look at The Information's reporting of its projections, remain the vast majority of its revenue until 2027, at which point "agents" and "new products including free user monetization" will magically kick in. In reality, OpenAI is an extremely boring (and bad!) software business. It makes the majority of its revenue selling subscriptions to ChatGPT, and apparently had 20 million paid subscribers (as of April) and 5 million business subscribers (as of August, though 500,000 of them are Cal State University seats paid at $2.50 a month). It also loses incredibly large amounts of money.
Yes, I realize that OpenAI also sells access to its API, but as you can see from the chart above, it is making a teeny tiny sliver of revenue from it in 2025, though I will also add that this chart has a little bit of green for "agent" revenue, which means it's very likely bullshit. Operator, OpenAI's so-called agent, is barely functional, and I have no idea how anyone would even begin to charge money for it outside of "please try my broken product." In any case, API sales appear to be a very, very small part of OpenAI's revenue stream, and that heavily suggests a lack of interest in integrating its models at scale. Worse still, this effectively turns OpenAI into an AI startup. Think about it: if OpenAI can't make the majority of its money through "innovating" in the development of large language models (LLMs), then it’s just another company plugging LLMs into its software. While ChatGPT may be a very popular product, it is, by definition (and in its name!) a GPT wrapper, with the few differences being that OpenAI pays its own immediate costs, has the people necessary to continue improving its own models, and also continually makes promises to convince people it’s anything other than just another AI startup. In fact, the only real difference is the amount of money backing it. Otherwise, OpenAI could be literally any foundation model company, and with a lack of real innovation within those models, it’s just another startup trying to find ways to monetize generative AI, an industry that only ever seems to lose money. As a result, we should start evaluating OpenAI as just another AI startup, as its promises do not appear to mesh with any coherent strategy, other than "we need $1 trillion dollars."
There does not seem to be much of a plan on a day-to-day basis, nor does there seem to be one about what OpenAI should be, other than that OpenAI will be a consumer hardware, consumer software, enterprise SaaS and data center operator, as well as running a social network. As I've discussed many times, LLMs are inherently flawed due to their probabilistic nature. "Hallucinations" — when a model authoritatively states something is true when it isn't (or takes an action that seems the most likely course of action, even if it isn't the right one) — are, according to OpenAI's own research, a "mathematically inevitable" feature of the technology, meaning that there is no fixing their most glaring, obvious problem, even with "perfect data." I'd wager the reason OpenAI is so eager to build out so much capacity while leaking so many diverse business lines is an attempt to get away from a dark truth: that when you peel away the hype, ChatGPT is a wrapper, every product it makes is a wrapper, and OpenAI is pretty fucking terrible at making products. Today I'm going to walk you through a fairly unique position: that OpenAI is just another boring AI startup lacking any meaningful product roadmap or strategy, using the press as a tool to pump its bags while very rarely delivering on what it’s promised. It is a company with massive amounts of cash, industrial backing, and brand recognition, and otherwise is, much like its customers, desperately trying to work out how to make money selling products built on top of Large Language Models. OpenAI lives and dies on its mythology as the center of innovation in the world of AI, yet reality is so much more mediocre. Its revenue growth is slowing, its products are commoditized, its models are hardly state-of-the-art, the overall generative AI industry has lost its sheen, and its killer app is a mythology that has converted a handful of very rich people and very few others.
OpenAI spent, according to The Information, 150% ($6.7 billion in costs) of its H1 2025 revenue ($4.3 billion) on research and development, producing the deeply-underwhelming GPT-5 and Sora 2, an app that I estimate costs it upwards of $5 for each video generation, based on Azure's published rates for the first Sora model, though it's my belief that these rates are unprofitable, all so that it can gain a few more users. To be clear, R&D is good, and useful, and in my experience, the companies that spend deeply on this tend to be the ones that do well. The reason why Huawei has managed to outpace its American rivals in several key areas — like automotive technology and telecommunications — is because it spends around a quarter of its revenue on developing new technologies and entering new markets, rather than stock buybacks and dividends. The difference is that said R&D spending is both sustainable and useful, and has led to Huawei becoming a much stronger business, even as it languishes on a Treasury Department entity list that effectively cuts it off from US-made or US-origin parts or IP. Considering that OpenAI’s R&D spending was 38.28% of its cash-on-hand by the end of the period (totalling $17.5bn, which we’ll get to later), and what we’ve seen as a result, it’s hard to describe it as either sustainable or useful. OpenAI isn't innovative, it’s exploitative, a giant multi-billion dollar grift attempting to hide how deeply unexciting it is, and how nonsensical it is to continue backing it. Sam Altman is an excellent operator, capable of spreading his mediocre, half-baked mantras about how 2025 was the year AI got smarter than us, or how we'll be building 1GW data centers each week (something that, by my estimations, takes 2.5 years), taking advantage of how many people in the media, markets and global governments don't know a fucking thing about anything. OpenAI is also getting desperate.
Beneath the surface of the media hype and trillion-dollar promises is a company struggling to maintain relevance, its entire existence built on hype and mythology. And at this rate, I believe it's going to miss its 2025 revenue projections, all while burning billions more than anyone has anticipated. OpenAI is a social media company, this week launching Sora 2, a social feed entirely made up of generative video. OpenAI is a workplace productivity company, allegedly working on its own productivity suite to compete with Microsoft. OpenAI is a jobs portal, announcing in September it was "developing an AI-powered hiring platform," which it will launch "by mid-2026." OpenAI is an ads company, and is apparently trying to hire an ads chief, with the (alleged) intent to start showing ads in ChatGPT "by 2026." OpenAI is a company that would sell AI compute like Microsoft Azure or Amazon Web Services, or at least is considering becoming one, with CFO Sarah Friar telling Bloomberg in August that it is not "actively looking" at such an effort today but will "think about it as a business down the line, for sure." OpenAI is a fabless semiconductor design company, launching its own AI chips in, again, 2026 with Broadcom, but only for internal use. OpenAI is a consumer hardware company, preparing to launch a device by the end of 2026 or early 2027 and hiring a bunch of Apple people to work on it, as well as considering — again, it's just leaking random stuff at this point to pump up its value — a smart speaker, a voice recorder and AR glasses. OpenAI is also working on its own browser, I guess.

0 views
Herman's blog 1 week ago

PIRACYKILLS

Most people who read my blog and know me for the development of Bear Blog are surprised to learn that I have another software project in the art and design space. It's called JustSketchMe and is a 3D modelling tool for artists to conceptualise their artwork before putting pencil to paper. It's a very niche tool (and requires some serious explanation to some non-illustrators, involving a wooden mannequin and me doing some dramatic poses), however when provided as a freemium tool to the global population of artists, it's quite well used. Similar to Bear, I make it free to everyone, with the development being funded through a "pro" tier. However, since it is a standalone app it has a bit of a weakness, which is what this post is about. I noticed, back in 2021, that when Googling "justsketchme" the top 3 autocompletes were "justsketchme crack", "justsketchme pro cracked", and "justsketchme apk". On writing this post, I checked that this still holds true, and it's fairly similar 4 years later. The meaning of this is obvious: a lot of people are trying to pirate JustSketchMe. However, instead of feeling frustrated (okay, I did feel a bit frustrated at first) I had a bright idea to turn this apparent negative into a positive. I created two pages with the following titles and the appropriate subtitles to get indexed as a pirate-able version of JustSketchMe:

JustSketchMe Crack Full 2021 22.0.1.73
JustSketchMe APK Mirror FULL 2.2.2021

These pages rank as the first result on Google for the relevant search terms. Then on the page itself I tongue-in-cheek call out the potential pirate. I then acknowledge that we're in financially trying times and give them a discount code. And you know what? That discount code is the most used discount code on JustSketchMe! By far! No YouTube sponsor, nor Black Friday special even comes close. In some ways this is taking advantage of a good search term. In others it's showing empathy and adding delight, creating a positive incentive to purchase in someone who otherwise wouldn't have. The discount code is PIRACYKILLS.
I'll leave it active for a while. 👮🏻‍♂️

2 views
iDiallo 1 week ago

Can You Build a TikTok Alternative?

Whenever a major platform announces changes, the internet's response is predictable: "Let's just build our own." I remember the uproar when Facebook introduced Timeline. Users threatened boycotts and vowed to create alternatives. The same pattern emerged with Stack Overflow. There were countless weekend-clone attempts that promised to be "better." Back then, building an alternative felt possible, even if most attempts fizzled out. Now, with TikTok's American operations being sold to Oracle and inevitable changes on the horizon, I find myself asking one question. Is it actually possible to build a TikTok alternative today? The answer depends entirely on who's asking. A well-resourced tech company? Absolutely. We've already seen Facebook, YouTube, and others roll out their own short-form video features in months. But a scrappy startup or weekend project? That's a different story entirely. As someone who doesn't even use TikTok, I'm exploring this purely for the technical and strategic challenge. So let's approach this like a mid-level manager tasked with researching what it would actually take. It's interesting to think about cost or technology stack, but I think the most critical part of TikTok isn't its code at all. On the surface, TikTok does two things: it lets you record a video, then shares it with other users. That's it. You could argue that Facebook, YouTube, and Instagram do the same thing. And you'd be right. This surface-level replication is exactly why every major platform (Reels, Shorts, etc.) launched their own versions within months of TikTok's explosion. Creating a platform that records and shares videos is straightforward for a large company. The technical pattern is well-established. But that surface simplicity is deceiving. Because video, at scale, is one of the hardest technical problems in consumer tech. Let me put video complexity in perspective. All the text content on my blog compiled over 12 years totals about 10 MB. 
That's the size of a single photo from my smartphone. A single TikTok video, depending on length and resolution, easily exceeds that. Now multiply that by millions of uploads per day. Building an app with TikTok's core features requires significant upfront investment in development, a full team, and a long list of mandatory features. These aren't optional. The format is established, the bar is set. You can't launch a "minimum viable" short-form video app in 2025. Users expect the full feature set from day one. Video processing is not as simple as it seems. You could build wrappers around FFmpeg, but building fast and reliable encoding, streaming and formatting demands more than just a wrapper. In my previous exploration of building a YouTube alternative, I concluded it was essentially impossible for two reasons: the cost of hosting video at scale, and the even greater cost of dealing with copyright. TikTok operates at a smaller scale than YouTube, but those fundamental challenges remain. You need serious capital to even start. You can build the platform, but you can't build the phenomenon. TikTok's true competitive advantage has nothing to do with its codebase. It's technically a Snapchat clone. What makes TikTok impossible to displace is its cultural gravity. TikTok isn't just a video app. It's the most powerful music discovery platform. It turned Lil Nas X's "Old Town Road" into a global phenomenon and resurrected Fleetwood Mac's "Dreams" 43 years after release. Artists now strategically release "sped-up" versions specifically formatted for TikTok trends. Record labels monitor the platform more closely than radio. Your alternative app might have better video processing, but it won't make hits. For younger users, TikTok has replaced Google for everything from recipe searches to news discovery. But it's more radical than that. Google evolved from a search engine to an answer engine, attempting to provide direct answers rather than just links. TikTok takes this evolution further by becoming a serve engine. You don't find content, content finds you. You open the app and scroll.
No search queries, no browsing, no active seeking. The algorithm serves you exactly what it thinks you want to see, refining its understanding with every swipe. Users aren't searching for vibes and aesthetics; they're being served them in an endless, personalized stream. Your alternative can't replicate this with a better algorithm alone. You need millions of users generating behavioral data to train on. On TikTok, "microtrends" emerge, peak, and die within weeks, fueling entire industries. Restaurant chains now add viral menu items to permanent offerings. Fast fashion brands monitor TikTok trends in real time. Your alternative might have a great feed algorithm, but it won't move markets. On TikTok, you can watch three seconds of a video and instantly identify it as TikTok content before seeing any logo. The vertical format, the quick cuts, the trending sounds, the text overlays. It's a distinct design language that users have internalized. I'm not interested in creating TikTok content, but the more important truth is that TikTok isn't interested in the content I would create. The platform has defined what it is, and users know exactly what they're getting. Any alternative must either copy this completely (making it pointless) or define something new (requiring the same years-long cultural adoption TikTok achieved). Technical replication of TikTok is expensive but achievable for a well-resourced company. But the insurmountable barrier isn't the code; it's the immense cultural inertia. To compete, you wouldn't just be building a video app. You'd need to simultaneously displace TikTok as a music discovery platform, a search engine, a trendsetter, and a community hub, all at once. You're not building a better mousetrap. You're trying to convince an entire ecosystem to migrate to an empty platform with no culture, no creators, and no communities. For a genuine alternative to emerge, the strategy can't be "TikTok but slightly different." It must be "TikTok completely neglected this specific use case, and we're going to own it entirely."
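To make the "wrappers around FFmpeg" point concrete, here's a minimal sketch of the very first rung of a video pipeline: building per-resolution ffmpeg transcode commands. The ladder, bitrates, and file naming are illustrative assumptions, not anything TikTok actually uses, and everything hard (upload validation, queueing, retries, GPU scheduling, HLS/DASH packaging, CDN distribution) starts only after this step.

```python
# Sketch of the "just wrap FFmpeg" starting point: one transcode
# command per rung of an assumed resolution ladder (H.264 + AAC).
LADDER = [          # (output height, video bitrate in kbps) - illustrative
    (1080, 4500),
    (720, 2500),
    (480, 1200),
]

def transcode_commands(src: str, out_prefix: str) -> list[list[str]]:
    """Return one ffmpeg argv per rung of the ladder."""
    cmds = []
    for height, kbps in LADDER:
        cmds.append([
            "ffmpeg", "-y", "-i", src,
            "-vf", f"scale=-2:{height}",   # keep aspect ratio, even width
            "-c:v", "libx264", "-b:v", f"{kbps}k",
            "-c:a", "aac", "-b:a", "128k",
            "-movflags", "+faststart",     # moov atom up front for streaming
            f"{out_prefix}_{height}p.mp4",
        ])
    return cmds
```

Each argv would then be handed to a worker (e.g. `subprocess.run`), and that's where the real engineering lives: doing this for millions of uploads a day, within seconds, across a zoo of input codecs.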
Alternatively, people may react negatively to the acquisition by Oracle. As a developer, I can't say any Oracle software inspires me. I hope this will serve as inspiration to build a better alternative, not just an expensive ghost town with excellent video processing.

The upfront investment:
- Development costs: Vibe coding won't cut it. You need to hire people.
- Team requirements: You'll need experienced teams that can build and optimize for each app ecosystem: frontend and backend developers, UI/UX designers, QA engineers.
- Mandatory features: Video recording/editing with effects and filters, an AI-powered recommendation engine, live streaming, duets/stitches, social graph and sharing, content moderation systems.

The two reasons a YouTube alternative was essentially impossible:
- It's expensive to host videos at scale.
- It's even more expensive to deal with copyright issues.

What you'd have to displace TikTok as:
- A music discovery platform
- A search engine for Gen Z
- A trendsetter driving consumer behavior
- A community hub with established creator ecosystems
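The "expensive to host videos at scale" point is easy to sanity-check with back-of-envelope arithmetic. The upload volume and rendition count below are assumptions for illustration; the ~10 MB per video comes from the blog-vs-photo comparison earlier in the post.

```python
# Rough storage math for a hypothetical TikTok competitor.
# All inputs are illustrative assumptions, not real platform numbers.
uploads_per_day = 1_000_000   # assumed daily uploads for a modest rival
avg_video_mb = 10             # the single-photo-sized figure from above
renditions = 3                # assumed transcoded copies kept per video

mb_per_tb = 1_000_000         # decimal units, as cloud vendors bill
daily_tb = uploads_per_day * avg_video_mb * renditions / mb_per_tb
yearly_tb = daily_tb * 365

print(f"{daily_tb:.0f} TB ingested per day, {yearly_tb:,.0f} TB per year")
```

Even at these modest assumptions that's on the order of 30 TB a day and ~11 PB a year of storage alone, and storage is the cheap part: delivery bandwidth for an autoplaying feed usually dwarfs it.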

0 views
ava's blog 1 week ago

[rant] i hate email forwarding notes

You erroneously receive an email meant for someone else or another team. You forward it with the note: “We received this but I think this is meant for you. Kind regards, Name” or a variation thereof. Sometimes, mails bounce around internally before responsibilities are clear, and it finally arrives in your mailbox - with 4-6 of these useless notes. “Hey Katja, look at this and maybe forward to xyz.” “Hello Ben, not for us. Maybe send to abc.” “Dear Mr. Schmidt, we received this but it’s not in our jurisdiction. Please forward to the relevant parties.” “Dear all, we just received this, please respond to the initial query.” And I fucking hate this! Can’t we all agree to let this fossilized shit go? It makes me scroll endlessly to get to the meat of it, passing by complete corporate diarrhea in the process. I love forwarding without any of that, and you better believe there’s pearl clutching about that. I’m here to bust these bullshit excuses right now. Put your thinking cap on, read the email and deduce from the keywords why you got it. If it is about ABC and you’re the ABC team, is it not clear what you are supposed to do? That you are supposed to treat it as if you had gotten it directly? If you had received it directly, it would also not include a handbook about your next steps! Also, if you are not in CC but are directly addressed in the mail body, it already shows you are expected to react to it and handle it, and aren’t just being notified. If I were just notifying you, I would have written an FYI note - but I did not. This weaponized incompetence is just an excuse to delay working on it. This is the worst lie of them all. Did you receive that email from the nether, anonymously? There is an email address attached. You know who it was! You can respond to that address, and in my specific case, we even have a team phone number you know you can call, and we are at best 4 people, less if on vacation or sick.
Getting in contact with someone on the team or the exact person who sent it isn’t hard. Also: if there is any additional context needed or extra information I can give you, I would of course write the damn note. But I did not! Take the hint. What questions would you even ask me if this is clearly an errant email meant for you and your work, which I have zero knowledge about? Am I supposed to do your work for you or what? It’s not nice to scroll through 5 sections of this goddamn drivel to find out what the fuss is about, and I feel stupid writing a note that just reaffirms what is already painfully obvious: that you are getting it because it’s your goddamn job. We are wasting each other’s time. If I read XYZ in the title and the body of the email, I don’t need you to explain to me “I am sending this to you because we are ABC team and you are responsible for XYZ!” Gee thanks, I almost forgot! Same in the other direction. We receive this comment from this working group, and now I am supposed to forward it saying “here’s the comment by the working group for you!”, just echoing the original email content I am forwarding. It adds absolutely nothing new. At least, if you really have to do your own little forwarding note dance for your peace of mind, delete the previous notes. In the specific case where the path an email took needs preserving, so we all know which teams already received it and passed it on and don’t send it to them again, include a quick summary of that instead - at least that would make the note useful. Delete all that and just go: “Forwarding this to you because I think you’re responsible for this; teams A, B, and C previously received it but weren’t the correct recipients.” There you go! I am so tired of the email etiquette of yesteryear. Rules like these are seemingly set in stone for no good reason by the same generation who will unironically write “……….” after every sentence.
Get a grip and do your job and stop trying to nitpick to waste time and pretend you got nothing to work with or are absolutely clueless about what “receiving an email” means. Reply via email Published 02 Oct, 2025

4 views
Justin Duke 1 week ago

September, 2025

The last of summer's grip finally loosened its hold this September, and Richmond began its annual transformation into something gentler and more contemplative. This morning's walk with Telly required a dusting-off of the closet-buried Patagonia puffer jacket; it's perfect for walks with Lucy, who has graduated into the Big Kid stroller, making it easier than ever for her to point at every dog ("dah!"), every bird (also "dah!"), every passing leaf that dared to flutter in her line of sight. As you will read below, the big corporate milestone for me this month was sponsoring DjangoCon and having our first offsite, both over the course of a single week. Sadly, our Seattle trip was once again canceled. Haley and Lucy both got a little sick, and we had to abandon course. It's weird to think this will be the first year since 2011 that we have not stepped foot in the Pacific Northwest. More than anything, though, I learned this month for the first time how impossibly difficult it is to be away from your daughter for six days. It is something I hope I don't have to go through again for a very long time.

1 view

Am I solving the problem or just coping with it?

Solving problems and putting in processes that eliminate them is a core part of the job of a manager. Knowing ahead of time whether or not your solution is going to work can be tricky, and time pressures and the urgency that pervades startups can make quick solutions seem really attractive. However, those are often a trap that papers over the real issue enough to look like it’s solved, only to have it come roaring back worse later. If you’ve been around the world of incident response and the subsequent activity of retrospectives/post-mortems, you’ve probably heard of the “5 whys” method of root-cause analysis. In the world of complex systems, attributing failures to a single root cause has fallen out of favour, and 5 whys has gone with it. Nevertheless, there is value in digging deeper than what initially presents itself as the challenge to uncover deeper, more systemic issues that might be contributing factors. Rather than “why,” I like to ask myself if I’m really solving this dysfunction or just coping with it. There’s a lot of temptation to cope. Coping strategies have some features that make them attractive. Imagine your problem is that your team is “always behind schedule.” That’s a problem I’m sure most people are familiar with. There are plenty of easy-to-grab band-aid strategies for that pattern - work the weekend, push for more “velocity” - that are very attractive. All of these can make you feel like you’re solving the problem, but they come with a cost, the situation returns as soon as you stop, and they only treat the symptoms. This dichotomy is probably more familiar and easy to recognize in the technical realm. If your code has a memory leak you can either proactively restart it every now and then (coping) or you can find the leak and fix it (actual solution). The reasons you should prefer the actual solution are the same in both scenarios. The “strategy” of restarting the service will work for a while, but you know in the back of your mind that eventually it’s going to manage to crash itself between restarts. Same goes for your people processes.
At one of my jobs we had a fairly chaotic release process. This was a while ago and “Continuous Delivery” was still a fairly newfangled idea that wasn’t widely adopted, so we did what most companies did and released on a schedule. It was a SaaS product, so releasing was really just pushing a batch of changes to production. The changes had ostensibly been tested (we still had human QA people then) and everything looked good, but nevertheless nearly every release had some catastrophic issue that needed immediate hotfixes. We didn’t have set working hours, except on release day. We expected all of the developers to be in the office when the code went out in case they were needed for fixes. We released around 9am and firefighting often continued throughout the morning and into the afternoon. When that happened, the office manager would usually order some pizzas so people could have something to eat while fixing prod. Eventually this happened so often that it just became routine. Release day meant lunch was proactively ordered in for the dev team, then chaos ensued and was fixed. Of course, we did eventually tackle the real pain points by increasing the frequency of releases (when something hurts, do it more; it’ll give you incentives to fix the real problems), adding more automated testing, and generally just getting better at it all. The lunches continued well past the point where the majority of the people on the team remembered or ever knew why they were there. Here are some other practical questions you can ask yourself to try to recognize stopgap strategies that aren’t addressing the real problems. As mentioned before, anything that stops working when you stop doing it is likely not a good solution. If you’ve got a system where someone has to push the “don’t crash production” button every day or else production crashes, eventually someone’s going to fail to push the button and production is going to crash. The button is not a solution; it’s a coping strategy.
If you’re having a lot of off-hours incidents and your incident response team is complaining about the burden, you could train more people in incident response to reduce their burden (and maybe you should), but that’s not solving the problem, which is really that you’re having too many incidents. A true solution would be understanding why and addressing that. Code not sufficiently tested? Production infra not sized appropriately? Who knows, but the answer is not simply throwing more bodies at it. That will reduce the acute problem of burnout in your responders (symptom) but not the chronic issues that are causing the incidents in the first place. If you’re not changing the underlying conditions you’re probably not fixing the problem. If all your meetings are running over time you could appoint someone to act as the timing police and give warnings and scold people who talk too long, but that’s just adding work for someone and not touching the real reasons, which could be unclear agendas, poor facilitation, or the fact that the VP can’t manage to stay on topic for any amount of time. The solution is to fix your meeting culture. Require agendas, limit the number of topics, train people on facilitation, give some candid feedback to the VP. These solutions actually remove the failure mode that causes meetings to run long. The timekeeper “strategy” doesn’t do anything about that. This is similar to the first question, but it’s something you can think of like a metric, even though it’s probably more a heuristic than something directly measurable. Treating your releases like pre-scheduled incidents like we did would be a good warning sign that you’re coping, but if each one is less dramatic than the last and wraps up sooner, those are signs that you’re on the right track. Getting to the point where releases are unremarkable and you don’t need the whole team on standby would be a good indicator that you’ve got solid solutions in place. 
When production is down, anything you can do to get it back up again is the right move. If you’re bleeding and you have to make a tourniquet with your own shirt, you do it. There are lots of scenarios where a short-term fix is the best thing for the situation. But keep in mind, this is effectively debt. Like technical debt, it should be repaid — ideally sooner rather than later. A good incident process will recover production by any means necessary. Once, during an incident related to a global DNS provider’s outage, we were literally uploading /etc/hosts files to our infrastructure to keep the part of our product that relied on some 3rd parties working. Needless to say, once the DNS incident was resolved, we went back and cleaned those up. You can do the same with processes. When there’s immediate pain that can be relieved with a quick patch-up, you should do it, and use the fact that the pain has abated to fix the problem permanently. You also can’t fix them all. You might not have enough influence or political capital to make the kinds of changes that real solutions require. In that case, your job is to advocate for the right things to happen, point out the ways that coping is hurting the organization, and exert influence over the parts that are in your control. I’ve tried to give you some strategies for recognizing when you’re just coping with a situation rather than fixing it. The differences can be subtle, but once you start to spot them it gets easier and you’ll find things start to get better in more permanent, sustainable ways.

Why coping strategies are attractive:
- They can be contained within your team; you don’t need to influence people outside your sphere. (We’ll just work through the weekend and get the project back on track.)
- They’re highly visible. (Look at Jason’s team putting in the extra effort! What a good manager!)
- They can move vanity metrics in the short term, which is often something rewarded. (We increased our velocity and did 20% more story points!)

Why they aren’t solutions:
- They come with a cost. (The team will burn out and quit if they have to work through too many weekends.)
- If you stop doing them, the situation comes right back. (The next project wasn’t different and we’re back behind schedule again.)
- They only address the symptoms, not the causes. (Improving your culture around estimations and deadlines would be a better fix.)

"Temporary Fix" by reader of the pack is licensed under CC BY-ND 2.0. Like this? Please feel free to share it on your favourite social media or link site! Share it with friends! Hit subscribe to get new posts delivered to your inbox automatically. Feedback? Get in touch!

0 views
W. Jason Gilmore 2 weeks ago

10,000 Pushups And Other Silly Exercise Quests That Changed My Life

Headed into 2025 I was fat, out of shape, and lazy. My three young children were running circles around me, and I was increasingly concerned not only about my health in general but about the kind of example I was setting for them. My (very) sedentary job in front of a laptop serving as the CTO of Adalo wasn't helping, nor was the fact that my favorite hobby in the world outside of work is, well, sitting in front of the laptop building SaaS companies like SecurityBot.dev and 6DollarCRM. Adding to the general anxiety was the fact that I had spent the last two years watching my parents struggle with devastating health issues. My parents had me in their early 20s, so all told they really weren't that much older than I am. My thoughts regularly turned into worry that I'd eventually wind up with my own serious health problems if I didn't get my act together. I wanted to do something about it, but what? Past attempts to go to a gym weren't successful, and I really did not want to drive any more than I already do serving alongside my wife as a kid taxi. Also, having made half-hearted attempts in the past to get into shape (Orange Theory, P90X, etc.) and winding up spending less time exercising than researching the minutiae of VO2 max, bicycle construction, and fasting benefits, I knew I had to keep things simple. While on a post-Christmas family vacation down in Florida I concluded it made sense to set a goal that could help me get into better shape but which also could be completed in small chunks over a long period of time. It was also important that I could do the workout at any point in the day, and even in my office if necessary. And thus began the quest to complete 10,000 pushups in one year. Almost 10 months later, this harebrained goal and the many positive effects that came from it have changed my life in ways I never imagined. While still in Florida I fired up a Google Sheet and added two columns to it: Date and Pushups.
And on January 1, 2025 I dropped down and knocked out 30. Well, not 30 in a row, mind you. I never could have done that on day 1. It was more like 10, 10, 5, 5 or something like that. Then I wrote it down. On January 2 I upped my game a bit, doing 35, and immediately logged it in the sheet. In the days that followed, the reward very much became the opportunity to open that sheet. Can't write the pushup number down if I didn't do the pushups, right? I didn't want to break the chain (although you'll later see I did in fact break the chain plenty of times in the months ahead), and so in the first 31 days I did pushups on 24 of 31 days, logging 1,018 in total and averaging 32.84 per day. I even worked up the motivation to run on a treadmill one day in January, logging 2.17 miles in 30 minutes on January 14, 2025. Other than that run and pushups, according to my spreadsheet I did no other notable exercise that month. It was also in January that I stopped eating fast food of any type, and as of the day of this writing I've not reversed course on this decision. Long story short, we were driving back from https://codemash.org and pulled through a McDonald's. I had at that point been eating McDonald's all of my life; nothing over the top, mind you, but probably twice a month at least for as long as I can remember. Anyway, the food that day was rancid. Legitimately nauseating. I have no idea why it was that way, but I was so turned off that right there and then I swore I'd never touch it again. Coincidentally, over this past weekend I was on a walk reflecting on some of what I'd been writing in this blog post, and my thoughts turned towards diet. When was the last time you heard somebody (including yourself) say they feel better after eating fast food? We all know the answer to this question: never. This stuff is not food, and I feel so much better staying away from this poison.
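Incidentally, the per-month tallies I keep quoting from the sheet (total, days logged, average on logged days) are a few lines of code over a date-to-count log. The dates and numbers below are made up for illustration, not my real spreadsheet.

```python
# Summarize a pushup log (date -> reps) into per-month stats, the same
# numbers a spreadsheet gives you. Sample data is invented.
from collections import defaultdict
from datetime import date

log = {
    date(2025, 1, 1): 30,
    date(2025, 1, 2): 35,
    date(2025, 2, 9): 117,
}

def monthly_summary(log):
    by_month = defaultdict(list)
    for day, reps in log.items():
        by_month[(day.year, day.month)].append(reps)
    return {
        month: {
            "total": sum(reps),
            "days": len(reps),
            "avg": round(sum(reps) / len(reps), 2),
        }
        for month, reps in by_month.items()
    }

summary = monthly_summary(log)
print(summary[(2025, 1)])  # {'total': 65, 'days': 2, 'avg': 32.5}
```

The same dict-of-months shape makes it trivial to add a year-to-date running total later, which is exactly the number I ended up caring about most.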
Whether it was due to the winter blues or that shiny New Year's resolution already starting to fade, I only logged 848 pushups on 21 of 28 days in February. But I definitely seemed to be getting stronger, averaging 44.63 pushups on those days, and managed to log a daily high of 117 pushups on February 9, 2025. By the end of February I had logged 1,876 pushups. According to my spreadsheet I also managed to lift weights on February 1, 4, and 10. I have a pretty basic weight set in the basement and although I can't recall the specifics, I was probably standing around listening to CNBC on my phone most of the time. I'm not going to sugarcoat it; March was bad, real bad. I only logged 206 pushups on 9 days, averaging 22.9 pushups on those days. It's unclear to me why I'd tailed off so much, other than to imagine old man winter was really starting to weigh on me by that point. Even so, those 206 pushups took me to a total of 2,082 pushups for the year. In April my pace picked back up along with the improving weather and increasing sunlight. I completed 375 pushups on 13 days, averaging 28.84 pushups on those days. However, I also managed to lift weights on six days in April, went on a run on April 14, and even gave fasting a go for a 28-hour period between April 2 and April 3 (not sure I'll do that again). Another lifestyle change unexpectedly happened in April: I basically quit drinking alcohol, wine in particular. This decision was a pretty simple one because, as I've gotten older, the hangovers have gotten worse and my sleep quality has gotten much worse anytime I drank more than 1-2 drinks. As of this writing (September 28, 2025) I've had maybe 2-3 glasses of wine in almost 5 months. My new alcoholic drink of choice when I feel like having something? Miller Lite. It has low calories, low alcohol content, and you can buy a 12-pack for as much as one bottle of wine. Adding 375 pushups to the pile took me to a total of 2,457 pushups for 2025.
Likely due to fear I was going to enter yet another summer rocking the "dad bod", my exercise intensity soared in May. I completed 1,281 pushups over 25 days, averaging 51.24 pushups on those days. On five of those days I completed more than 100 pushups, and on May 18 completed a YTD single-day high of 150. I also became mildly obsessed with the idea of doing a split. While browsing Libby as I love to do at night, I found the book Even the Stiffest People Can Do the Splits. The cover showed the author smiling and doing a full split, and I thought, well, if Eiko says even stiff people can do it then maybe I can too. Over the course of May I did the splits workout 15 times, and undoubtedly became far more flexible, although I never did quite reach a complete split. This continued into June and early July; however, for reasons I'll explain in a moment, I stopped doing the regimen out of fear I'd get hurt. Still, to this day I stretch daily, and of all the different exercise routines I've tried this year I think aggressive stretching has perhaps had the best ROI of them all. On May 15 I ran a 5K with my daughter (well, she sped ahead of me after mile 1), completing it in 32:50. Not too bad considering that, according to my log, I had run exactly four times in 2025. Headed into June I had completed a grand total of 3,738 pushups. June is where things really started to get exciting. Every year Xenon Partners runs a friendly intercontinental pushup contest. "Friendly" is a relative term considering I work with numerous combat veterans, retired members of the United States and Australian military services, and a former Mr. Australia contestant.
I also spent some time in France with the family, attending the 24 Hours of Le Mans race (amazing btw) and sightseeing around the country, meaning I had to fit pushups in whenever possible, including at Versailles:

In June my output soared to 2,014 pushups, and despite all of the traveling I managed to do pushups on 24 of 30 days, averaging 91.55 pushups per day. I also set multiple PRs in June, doing 205 pushups on June 1, 222 on June 15, and then 300 on June 27. As of June 30 I had completed a total of 5,752 pushups.

Upon returning from Europe I got the bright idea to organize a race called the 5/15/500 Challenge. This involved running 5 miles, biking 15 miles, and then completing 500 body weight exercises. Never mind that I'd run maybe four times in 2025 and hadn't been on my bike once. Many of my neighbors joined the fun, and we even had t-shirts printed for the occasion. Of course, I also created a website. I did this because I figured having an artificially imposed deadline was going to force me to exercise more often. Mission accomplished.

In July I completed 2,002 pushups, ran 48.88 miles, and biked 28.99 miles (this includes the race day numbers). The heat throughout the month was often unbearable, but I pushed through all the same knowing July 26 (race day) was coming up quick. During this period I also really began to dial in my diet, eating little more than fruit, eggs (lots of eggs), chicken, rice, and salad (lots of salad). It was during this period and August that my body began to change. I became noticeably larger and more muscular, and incredibly my abs began to show.

In this photo I'm completing race pushup #500. Don't judge the form; it was almost 90 degrees and the exhaustion was real from having already completed the run and bike segments. That said, if you squint in the right light you can see I actually have muscles due to all the pushups and running!
Due to all of the July training and the 5/15/500 Challenge, my YTD pushup output soared to 7,754.

It was around this time that I went down a major rabbit hole regarding microplastics. A successful techie named Nat Friedman funded a study that looked into the prevalence of microplastics in food, vitamins, and other products, and published the results here. I'm not going to call out any products by name here (although I should, because they are poisoning us), but take a moment to open this site in a new tab and search for "protein" for a glimpse into how you are being poisoned every time you take a bite of so-called health food. After spending a few weeks researching this topic I radically changed my diet and eliminated all of this nonsense. If you really want to go down a rabbit hole, look into the relationship between chocolate-infused health products and heavy metals.

In August I did exactly 1,000 pushups, and threw in 190 body weight squats just for fun. 525 of these pushups were completed in a single day (August 16) thanks to my neighbor, friend, and fellow 5/15/500 contestant Charlie having the bright idea that we should knock out what was originally supposed to be 400 pushups during our sons' soccer game. Of course, our competitive spirit got the best of us; I quit at 525 while Charlie pushed on to 600. I'll get him the next time!

The running sessions continued throughout August, with 37.48 miles completed. I started taking running much more seriously at this point because I signed up for the October 19 Columbus 1/2 Marathon. I've run 1/2 marathons before (poorly - my last finish time was 3:05) so I know what I'm getting into here, but this time around I want to actually finish at what I deem to be a respectable time, which is around 2:20 (10:40/mile pace). Of course, in order to train for this I needed to know what pace I was running in the first place, and so I bought a Garmin Forerunner 55 watch with GPS.
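As an aside, the 10:40/mile figure is nothing fancy, just the goal time divided by the distance (strictly speaking it comes out a hair over 10:40). A quick sketch of the arithmetic:

```python
# Half-marathon pace math: goal time divided by distance.
goal_minutes = 2 * 60 + 20   # 2:20 goal = 140 minutes
distance_miles = 13.1

pace = goal_minutes / distance_miles           # minutes per mile
mins = int(pace)
secs = round((pace - mins) * 60)
print(f"Required pace: {mins}:{secs:02d}/mile")  # works out to about 10:41/mile
```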
As mentioned before, my proclivity for going down research rabbit holes hasn't really helped my previous attempts to get into shape, so I chose this watch because compared to other watches it is relatively spartan in terms of features. Above all else I wanted a watch that can accurately track my running distance, pace, and route, and so far I am so, so happy with this purchase. It is perfect, and the battery life is amazing.

On August 2 I received the watch and later that day took my son and his friend up to a local (Alum Creek) mountain bike park, and while they were riding I decided to run the trails. I wound up running 4.69 miles on very hilly and bumpy trails, and paid for it dearly over the next week due to terrible foot and knee pain.

On August 21 I ran my first training 10K, completing it in 1:12:28. According to my fancy watch I completed the first 5K in 39:11 but then sped up and completed the second 5K in 33:11. On August 25 I repeated the route, this time completing the 10K in 1:05:41. On August 28 I did it a third time, completing it in 1:02:47. Progress!

I brought some help to the August 25 and 28 10K training runs: GU packs. In July I read the book Swim, Bike, Bonk: Confessions of a Reluctant Triathlete, by Will McGough. In this hilarious recounting of training for and competing in an Ironman triathlon, the author mentions using these mysterious "gel" packs, the most popular of which is known as a "GU pack". I subsequently picked up a few at the local Walmart and can confirm they unquestionably gave me a boost on these long runs. Now anytime I plan on running a 10K or longer I put one in my running pouch and open it 5K into the route.

With another 1,000 pushups in the books, my YTD output sat at 8,754 on August 31. Much better endurance aside, the most obvious visible outcome of the last few months is my clothes no longer fit. My polo shirts are so baggy they look like tents, and my t-shirts are too small because I'm so much more... muscular?
What in the hell is going on? This seems to be working! With 8,754 pushups complete, I only had 1,246 to go and concluded I'd meet the milestone in September. With the 1/2 marathon around the corner my running workouts picked up and I set multiple PRs, including a 29:51 5K PR on September 8, followed by another 28:10 5K PR on September 11.

On September 17 I got one of the biggest motivational boosts possible. I was in Chicago for a quarterly meeting, and one of the fellow board members who I've seen in person once every 3 months (but not 3 months ago because we were on the France trip) walked up to me and introduced himself. I stared back at him completely puzzled, and watched him walk away to greet the person next to me. He suddenly wheeled around with a look of shock on his face and said something to the effect of "Holy shit! I didn't even recognize you! You look amazing!"

On September 21 I completed the 10,000th pushup in unceremonious fashion on my living room floor:

On September 24 I gobbled up a GU pack and headed outside feeling like I could tear a phone book in half. My goal was to shatter the previous 28:10 5K record, and I was on track to do exactly that, running the first 2.1 miles in 18 minutes flat. Then out of nowhere I felt a terrible pain in my left calf and came to an immediate stop. It wasn't until September 29 that I could comfortably run again, and even then I only ran 1 mile because I'm terrified of a nagging injury setting me back for the October 19 1/2 marathon.

In September I added 1,501 pushups to the pile, bringing the YTD total to 10,245. Today is October 1, 2025 and the pushups continue. The aforementioned 1/2 marathon is on October 19, and my neighbor Charlie and I have already agreed to walk/run a full marathon (around our neighborhood) on November 29.
Although it's almost 80 degrees today, in past years we've seen snow by the end of the month, so I'm thinking about getting one of those fancy stationary bikes or maybe even a treadmill so I can keep this party going over the winter.

In recent months I have started to look so different that friends have asked me for some diet details. As mentioned, I no longer eat fast food or overconsume alcohol. But I've also almost completely cut out processed foods, eating them only very sparingly. A few months ago I went down the microplastics and heavy metals rabbit hole, and now spend some time researching anything that I plan on eating on a regular basis. Believe me, a lot of the food you think is healthy is pure garbage.

Every morning I eat one of two things: either a gigantic fruit smoothie or four scrambled eggs and a salad. I do not deviate from this, only very occasionally eating some protein-powder pancakes made by my wife. My smoothie consists of milk, greek yogurt, 1.5 scoops of Optimum Nutrition protein powder, a huge scoop (probably two cups) of frozen organic berries, and an entire banana:

Here is the typical scrambled eggs and salad breakfast:

For lunch I eat some combination of chicken, rice, tuna, and salad. I almost never deviate from this. For dinner I eat whatever my wife decides to make, which is always healthy. Obviously we occasionally go out and I'll eat some garbage like wings or pizza, but this is pretty rare compared to the past. I also take a few vitamins and creatine daily.

Earlier in this post I mentioned researching the prevalence of microplastics, heavy metals, and other poison in food. Ironically, this is particularly problematic in protein powders, protein bars, protein shakes, etc. I settled on Optimum Nutrition because it is one of the few powders on the market that has been tested by numerous third parties, including the Clean Label Project.
It's pretty expensive compared to other products, but I'm happy to pay in order to avoid ingesting that garbage.

Despite getting myself into incredibly good shape relative to the past, this wasn't really that hard. On 105 of 274 days (38.3%) I did no pushups at all. On 142 of 274 days (51.8%) I did between 1 and 100 pushups. On just 26 of 274 days (9.4%) did I do more than 100 pushups, and on only 8 of 274 days (2.9%) did I do 200 or more. Interestingly, although I have no hard data to back this up, I feel like my strength soared in the 67 days following the 5/15/500 race (July 26). Following that date I did more than 100 pushups on 11 days (16.4% of the days), and became noticeably more muscular. Here's a chart showing the pushup volume throughout the year:

Headed into October, I feel like a million dollars and plan on continuing these off-the-wall exercise quests for the rest of my (hopefully long) life. I obviously have no idea what I'm doing, but am happy to answer any questions and help motivate you to get in the best shape of your life. Send me an email at [email protected] or DM me on Twitter/X at @wjgilmore!
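P.S. If you keep a similar daily log and want to compute those day-bucket percentages for yourself, here's a minimal sketch (the sample log is made up for illustration, not my real spreadsheet data):

```python
# Bucket a daily pushup log into the categories used above:
# zero days, 1-100 days, and 100+ days.
from collections import Counter

daily_log = [0, 45, 117, 0, 30, 150, 205, 0, 60]  # illustrative sample data

def bucket(n):
    if n == 0:
        return "none"
    return "1-100" if n <= 100 else "100+"

counts = Counter(bucket(n) for n in daily_log)
days = len(daily_log)
for name in ("none", "1-100", "100+"):
    print(f"{name}: {counts[name]}/{days} days ({100 * counts[name] / days:.1f}%)")
```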


Goodbye Disqus - Your injected ads are horrible

This will be a short and sweet post. I'm not big on goodbyes. Disqus started showing ads for their "free" tier comments system a few years back. At the time, the communication they sent out via email seemed quite laid-back and had the tone of "don't worry about it, it's not a big thing," which in part led me to almost forget it happened. At the time, the Disqus comments system looked quite smart and sleek.

Jeff Geerling 2 weeks ago

The AI Emperor Has No Clothes

If the size of the current AI bubble can be estimated by how many I receive per day... then I'd say we're nearing the pop. There is no way the trillions of dollars of valuation placed on AI companies can be backed by any amount of future profit.
