
Launch Now

Inside us are two wolves. One wolf wants to craft, polish and refine – make things of exceptional quality. The other wolf wants to move fast and get feedback now. The two wolves don’t always get along. For years, I’ve balanced this by working toward exceptional products but constantly collecting private feedback along the way. Then, once we’ve built something excellent, something worthy of attention, we launch it to the world with appropriate fanfare. Videos, marketing campaigns, polished onboarding, and so on. “Here’s something worth trying, we think you’ll really like it.”

This totally works. At least, it works as a path to eventually ship high-quality software. Polished, usable, even delightful software. But when it comes to building something people will pay for, it’s neither reliable nor fast.

Our first product at Forestwalk was a developer tool – a platform for building and running evaluations of LLM-powered apps. We learned a ton building it, but after a few months – as we approached our first pilot projects – feedback from demos and potential first customers convinced us that this was the wrong path. It was more likely to lead us into a lifestyle business than something big. So we pivoted.

We spent a few weeks building a prototype a week, showing demos, doing customer research, and found a second promising product path. Our second product was a productivity tool – a work assistant that could capture, organize, and rationalize teams’ tasks. We learned a ton building it, but after a few months – as we approached a public beta – feedback from private testers and our investors convinced us that this was the wrong path. It was more likely to lead us into a lifestyle business than something big. So we pivoted.

The third time purports to be the charm. But at the same time, doing the same thing over and over typically gets the same results. We need to build something profoundly useful, something people really want.
We can’t keep hiding away, sending out private demos and prototypes, not fully shipping anything! So, we decided to push harder into the discomfort of showing our work early. Just before Christmas, we decided to commit to something and work towards getting it shipped.

This third product is codenamed Cedarloop.[1] It’s a realtime meeting agent. Unlike AIs that passively listen in to meetings and just write up notes after the fact, Cedar joins calls and uses “voice in, visuals out” to screen-share useful observations and perform routine tasks live during a Google Meet or Zoom meeting. The vision is to build a kind of agentic PM assistant. It can respond within a second of you talking,[2] which – when it works – feels like magic. We’ve been learning a lot building it.

Recently, we started working with an excellent designer here in Vancouver who was keen to get going. I’d like to do some user testing. What do people say when you let them try it? Well, obviously it’s so early right now. They won’t like it. The inference and onboarding need more work. But we’ve been doing research about problems, needs, willingness to pay, and things like that. Sure… but we should also let people try it. What if we launched now?

Well, obviously we can’t launch now. I mean… obviously. Launching now would be embarrassing. It’s not my brand to launch something publicly that’s not ready.

On the other hand… I keep a printed copy of Y Combinator’s list of essential startup advice on my desk. And if you know YC, you’ll know that the first point of advice is “Launch now”. Only last month I was interviewing Brett Huneycutt, Wealthsimple’s co-founder. He had a lot of great stories, but one that sticks out is that even as a $10B company, they prioritize launching “now”, for as close as they can get to that definition. It’s not just about speed: a rapid feedback loop is a core ingredient in getting to quality.

So we launched now.
As of today, people can check out our research-preview realtime meeting agent at Cedarloop.ai. With luck, they’ll report issues, inform what we should prioritize next, and tell us what problems they’d love to have automated away.

We’re only a few hours in, and yep – people are reporting issues. Linear integration had an OAuth issue. Login didn’t work in social-media webviews. We’ve been so focused on the desktop experience that we’ve let the mobile layout get janky. This is embarrassing! But also, there’s signal. People are trying the Linear integration. Our desktop-focused app is being discovered on mobile. Folks care enough to click at all. And in a week or so, we’ll have a smoother onboarding flow than we would have gotten to with weeks of private user tests. So it’s worth the pain.

We’re going to take the feedback, follow the signal, learn and re-learn, and do better. We’ll use it to forge the best damn live agent ever – or, if the feedback peters out, we’ll know we’re on the wrong path, and find the right one. In the meantime, there’s a lot to do.[3] Back to work!

[1] This is not a good name yet. For example, sometimes iOS mishears “Hey Cedar” as “Hey Siri”. But part of our move-fast strategy is to worry more about names once we’ve proven something has traction. At that point, we’ll put in the work to give it the right name – and eventually rename the company after it.

[2] It’s fascinating how much you can do to get LLM response times down. Our first prototype often took over 8000ms to respond, which doesn’t feel live at all. Once we got it under ~1200ms, voice-in-vision-out suddenly felt alive – a step change. We have a lot of work planned to get Cedarloop even faster and much more reliable, which I’m keen to write about when I can.

[3] Speaking of having a lot to do: if you’re an experienced product-minded developer in Vancouver who would be excited to iterate and build out realtime agents using LLMs and TypeScript, we’re hiring a Founding Engineer. Just sayin’.

Ruslan Osipov Yesterday

Are AI productivity gains fueled by delivery pressure?

A Multitudes study that followed 500 developers found an interesting soundbite: “Engineers merged 27% more PRs with AI - but did 20% more out-of-hours commits”. While I won’t comment on the situation at Google, there are many anecdotes from folks online who raise concerns about increased work pressure. When the response to “I’m overloaded” becomes “use AI” - we’re heading for unsustainable workloads.

The problem is compounded by the fact that AI tools excel at prototyping - the type of work which makes other work happen. Now, your product manager can prototype an idea in a couple of hours, fill it with real (but often incorrect) data, sell the idea to stakeholders, and set goals to productionize it a week later. “Look - the prototype works, and it even uses real data. If I could do this in a couple of hours, how hard could this be for an experienced engineer?” While I haven’t heard these exact words, the sentiment is widespread (again, online).

In a world where AI provides a surface-level ability to contribute across almost any role, the path to avoiding global burnout is to focus on building empathy. Just because an LLM can churn out a document doesn’t mean it’s actually good writing, and we’re certainly not at the point where a handful of agents can replace a seasoned PM. However, because the output looks polished - especially to those without deep domain knowledge - it’s easy to fall into the trap of thinking you’ve done someone else’s job for them. That gap between “looking done” and “being right” is exactly where the extra professional pressure begins to mount.

This is really caused by the way we still measure knowledge worker productivity: by the sheer number of artifacts they produce, rather than the outcomes of the work. The right way to leverage AI in the workplace is as a license to work better and focus on the right things, not as a mandate to produce more things faster.


There should be a citizen-led path to amend the Constitution

There are many issues with widespread bipartisan agreement that never go anywhere, for example limiting corporate money in campaigns and making gerrymandering illegal. As a surprise to no one, Congress is the bottleneck on these issues and many, many more. If the people on both sides generally agree for years and years, and Congress still doesn’t do anything about it, then that’s a structural governance problem. For stuck issues like these, we should have a citizen-led path to amend the Constitution as a solution to this governance problem.

Now I fully agree that it should be super hard to amend the Constitution, and so we could make this path just as hard as the Congress-led one.[1] The current Congress-led path is that two-thirds of both houses need to propose an amendment and then three-fourths of state legislatures need to ratify it.[2] A citizen-led path could similarly require two-thirds of states to propose an amendment (via something like signature campaigns within those states) and then a nationwide three-fourths vote to ratify it.[3] In other words, the thresholds could be similar, just put directly in the hands of the people. You would still need most red and blue states working together to pass an amendment, such that nothing could pass without super-majority bipartisan support. That is, a super high bar.

There are a lot of interesting structural reform proposals out there, but most require new Constitutional amendments to have lasting staying power.[4] To really unlock the possibilities for those, we need a citizen-led amendment path in place first. Of course, we would need an amendment using the old way to get this new way in place.[5] Congress won’t reform itself, but a citizen-led amendment path could.
[1] I’d also argue the current path is too hard, but I’m ignoring that for now since it is independent of this argument.

[2] There is technically also a path for state legislatures to call a constitutional convention, but it’s never been used and it still routes through legislatures and representatives rather than citizens directly.

[3] Signature campaigns could mimic current processes of how citizen-led ballot initiatives work, but there are other options (including non-signature-based options) as well.

[4] A few examples of interesting structural reforms likely requiring an amendment to be long-term effective are increasing the number of representatives, term limits, and ranked-choice voting.

[5] This type of citizen-led amendment path has been proposed as a Constitutional amendment in the past and hasn’t gone anywhere either, but at some point we may reach a breaking point where something like this could take hold.


RE Backseat Software

Reading through Mike Swanson's article "Backseat Software" made me realize why I tend to gravitate to older platforms and software. Software used to be sold with the expectation that it would accomplish the goal you purchased it for. Now, software is all about keeping you engaged, on platform, etc., so that you keep renewing your subscription (or even better, part with more money and data).

Mike writes "Great tools get out of the way so the user can accomplish their goal". I've been in enough companies where the goal is the opposite. You can't let the user just hop on, finish their task and hop off; think of the metrics! If a user's task is accomplished, they won't realize the value and might not renew!

Mike also writes "I don’t want to go back to floppy disks. I like fast updates. I like security patches. I like sync. I like crash reports when they help fix real issues", and to be honest, I disagree with this to a point. I'd love to go back to boxed software on a disc. If a company had to manufacture and distribute discs, it typically made sure the software was well tested to prevent the cost of reprinting them. These days, it's a "ship first, fix later" mentality. Speed is all that matters to a modern software company. This mindset is even growing with the VCDLC (Vibe Code Development Life Cycle).

Just this morning I found my childhood copy of KidPix Deluxe on CD. I know that, if I had a computer from the era, inserting that disc would result in a full, functional experience. No failed license checks due to offline servers, no gigs of updates and no online account. Instead, KidPix would load and be fun just like it was when I played it. I don't need new features. Software should be sold as is. While new features might come, what you purchased still accomplishes the goal you bought it for. When I run software on my Palm Pilot, it does exactly what it should. No tracking, no announcements, no updates.
If a Palm Pilot app is buggy or lacking, you use an app from a different vendor. Quality was necessary to make sales. When you buy a hammer, you expect to be able to hit nails. You don't need a manual, just a good nail to hit. Years later the manufacturer might introduce a new carbon fiber hammer with a larger head that hits nails with 30% more accuracy. Your old hammer won't get these features, but it continues to hit nails just fine. And sure, maybe the new hammer fixed a design flaw with the grip occasionally shifting. But again, you've learned to live with it and it hits nails. The hammer doesn't define your life or act as a status symbol. It's not engaging or addictive. It's a tool, and it hits nails. Software should be like a hammer.

Stratechery 2 days ago

2026.09: This Was an Xbox

Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week. This week’s Stratechery video is on Thin Is In.

From Owning the Living Room to Ceding the Hardware Market? After Phil Spencer’s exit at Microsoft, Wednesday’s Daily Update provided an entertaining tour of Xbox history, including strategy that has been misaligned for at least 15 years, and why some of those red flags were ignored at the time (spoiler: “[Microsoft] held onto Xbox as the sole piece of evidence that the company could be cool and interesting to consumers”). Today, though, there are new pivots to discuss. So what’s next? Ben builds on the fraught history to explain why, given the lack of growth in the gaming market and competitive pressures on the rest of Microsoft’s business, the days of 1st party Xbox hardware may be over. — Andrew Sharp

From MJ to Wemby and Everything in Between. With Andrew on vacation, Greatest of All Talk was lucky to have the illustrious Rachel Nichols on as a guest. From sharing stories from her early days as an intern covering Michael Jordan to reflecting on the end of the Washington Post Sports section and the changing media landscape, Rachel’s unique experience provided a compelling through line across eras of sports and media. Come for the discussion of whether Wemby and the Spurs can win it all, stay for the greatest moose-related headline of all time. — Ben Thompson

It’s Time to Build… In Space?
In 2016 Jeff Bezos said, “We can build gigantic chip factories in space.” 10 years later, with chip constraints as urgent as ever, a number of companies are already exploring manufacturing in space (data centers, pharmaceuticals), so why not chips too? This week’s Asianometry video answers that question comprehensively, noting that LEO chip fabbing would impose incredible logistics challenges (cooling, cleaning, managing radiation, constant maintenance in space), and would probably require reimagining the entire chip stack (how do you handle packaging in space?). It’s a great, itemized breakdown of the obstacles — available as a podcast or transcript for Stratechery Plus subscribers — that also underscores how many incredible challenges we’ve already solved on earth. — AS

Another Viral AI Doomer Article, The Fundamental Error, DoorDash’s AI Advantages — Another AI doomer article has gone viral, and like many in the genre, it lacks an appreciation for dynamism and markets. Then, why DoorDash is going to be fine.

Xbox Replaces Head of Gaming, Xbox History, Whither Xbox — Xbox has a new head, who isn’t a gamer; I suspect Microsoft is doing what it should have done a decade ago: get out of the console business.

An Interview with Bill Gurley About Runnin’ Down a Dream — An interview with long-time (retired) VC Bill Gurley about his new book about building a career you love, Uber, and the modern state of VC.

iDiallo 2 days ago

We Need Process, But Process Gets in the Way

How do you manage a company with 50,000 employees? You need processes that give you visibility and control across every function: technology, logistics, operations, and more. But the moment you try to create a single process to govern everyone, it stops working for anyone. One system can't cater to every team, every workflow, every context. When it's implemented, you start seeing in-fighting, projects missing deadlines, and people quitting. Compromises get made, and in my experience, it almost always becomes overwhelming.

The first time I was part of a merger, I was naïve about how it would go. The narrative we were sold was reassuring. The larger company was acquiring us because we were successful. The last thing they'd want to do was get in the way of that success. But that's not how it went. It doesn't matter what made you successful before you join a larger organization. The principles and processes of the acquiring company are what will dominate. Your past success is acknowledged, maybe even celebrated, but it doesn't protect you from assimilation.

One of the first things we had to adopt was Scrum. It may be standard practice now, but at the time it was still making its way through the industry. Our team, developers and product managers, already had a process that worked. We knew how to communicate, how to prioritize, how to ship. Adopting this new set of ceremonies felt counterproductive. It didn't make us faster. It didn't improve communication. What it did do was increase administrative overhead. Standups, sprints, retrospectives, layer after layer of structure added on top of work that was already getting done. But there was no going back. We were never going to return to being that nimble, ad hoc team that could resolve issues quickly and move on. We had to adopt methods that got in the way. Eventually, we adapted. We adopted the process. And in doing so, we became less efficient at the local level.
A lot of people, frustrated by the slowdown, left for other opportunities. But as far as the larger company was concerned, that was acceptable. Our product was just one of many in their portfolio. Slowing down one team to get everyone aligned was a price they were willing to pay. It wasn't efficient, but it was manageable from their perspective. The math made sense at the organizational level, even if it felt like a loss from where we were standing. I understand that logic. I just don't think it's the best way forward.

Think about how a computer works. A CPU doesn't concern itself with how a hard drive retrieves data. Whether it's spinning magnetic disks or a solid state drive, the internal mechanics are irrelevant to the CPU. All it knows is that it can make a request, and the response will come back in the expected format. If the CPU had to get involved in the actual process of fetching data, it would waste enormous processing power on something that isn't its concern.

Organizations can work the same way. Rather than imposing a single process across every team, a company can treat its departments as independent components. You make a request, and the department delivers an output. How they produce that output (what tools they use, how they run their meetings, how they structure their work) shouldn't be a concern, as long as the result meets the requirement.

There are places where unified processes make sense. Legal and compliance, for example, probably need to be consistent across the whole organization. But for how individual teams operate day to day, autonomy is often the better choice. Will every team's process be perfectly aligned with every other team's? No. But they'll actually work. And the people doing the work will be far less likely to walk out the door. Sometimes in large organizations, it's important to identify which process works, and which team is better left alone.
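The component analogy above maps directly onto interface-based design. A minimal sketch in TypeScript (all names here, like `Department` and `deliver`, are hypothetical, invented for this illustration):

```typescript
// A department is a black box: callers see only the request/response
// contract, never the internal process.
interface Department {
  handle(request: string): string;
}

// Two teams with completely different internal processes...
class ScrumTeam implements Department {
  handle(request: string): string {
    // internally: sprints, standups, retrospectives
    return `shipped: ${request}`;
  }
}

class AdHocTeam implements Department {
  handle(request: string): string {
    // internally: hallway conversations and quick fixes
    return `shipped: ${request}`;
  }
}

// The "CPU" (leadership) cares only that the contract is met,
// not how each team produced the result.
function deliver(dept: Department, request: string): string {
  return dept.handle(request);
}
```

Either team can be swapped in without the caller changing, which is the point: leadership depends on the output contract, not on the internal process behind it.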


On NVIDIA and Analyslop

Hey all! I’m going to start hammering out free pieces again after a brief hiatus, mostly because I found myself trying to boil the ocean with each one, fearing that if I regularly emailed you, you’d unsubscribe. I eventually realized how silly that was, so I’m back, and will be back more regularly. I’ll treat it like a column, which will be both easier to write and a lot more fun.

As ever, if you like this piece and want to support my work, please subscribe to my premium newsletter. It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5000 to 18,000 words, including vast, extremely detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I am regularly several steps ahead in my coverage, and you get an absolute ton of value. In the bottom right hand corner of your screen you’ll see a red circle — click that and select either monthly or annual. Next year I expect to expand to other areas too. It’ll be great. You’re gonna love it.

Before we go any further, I want to remind everybody I’m not a stock analyst, nor do I give investment advice. I do, however, want to say a few things about NVIDIA and its annual earnings report, which it published on Wednesday, February 25: NVIDIA’s entire future is built on the idea that hyperscalers will buy GPUs at increasingly-higher prices and at increasingly-higher rates every single year. It is completely reliant on maybe four or five companies being willing to shove tens of billions of dollars a quarter directly into Jensen Huang’s wallet. If anything changes here — such as difficulty acquiring debt or investor pressure cutting capex — NVIDIA is in real trouble, as it’s made over $95 billion in commitments to build out for the AI bubble.

Yet the real gem was this part: Hell yeah dude!
After misleading everybody that it intended to invest $100 billion in OpenAI last year (as I warned everybody about months ago, the deal never existed and is now effectively dead), NVIDIA was allegedly “close” to investing $30 billion. One would think that NVIDIA would, after Huang awkwardly tried to claim that the $100 billion was “never a commitment,” say with its full chest how badly it wanted to support OpenAI and how intentionally it would do so. Especially when you have this note in your 10-K:

What a peculiar world we live in. Apparently NVIDIA is “so close” to a “partnership agreement” too, though it’s important to remember that Altman, Brockman, and Huang went on CNBC to talk about the last deal, and that never came together. All of this adds a little more anxiety to OpenAI's alleged $100 billion funding round which, as The Information reports, Amazon's alleged $50 billion investment will actually be $15 billion, with the next $35 billion contingent on AGI or an IPO:

And that $30 billion from NVIDIA is shaping up to be a Klarna-esque three-installment payment plan:

A few thoughts: Anyway, on to the main event.

New term: analyslop, when somebody writes a long, specious piece of writing with few facts or actual statements with the intention of it being read as thorough analysis. This week, alleged financial analyst Citrini Research (not to be confused with Andrew Left’s Citron Research) put out a truly awful piece called the “2028 Global Intelligence Crisis,” slop-filled scare-fiction written and framed with the authority of deeply-founded analysis, so much so that it caused a global selloff in stocks. This piece — if you haven’t read it, please do so using my annotated version — spends 7000 or more words telling the dire tale of what would happen if AI made an indeterminately-large number of white collar workers redundant.
It isn’t clear what exactly the AI does, who makes the AI, or how the AI works, just that it replaces people, and then bad stuff happens. Citrini insists that this “isn’t bear porn or AI-doomer fan-fiction,” but that’s exactly what it is — mediocre analyslop framed in the trappings of analysis, sold on a Substack with “research” in the title, specifically written to spook and ingratiate anyone involved in the financial markets. Its goal is to convince you that AI (non-specifically) is scary, that your current stocks are bad, and that AI stocks (unclear which ones those are, by the way) are the future. Also, find out more for $999 a year.

Let me give you an example: The goal of a paragraph like this is for you to say “wow, that’s what GPUs are doing now!” It isn’t, of course. The majority of CEOs report little or no return on investment from AI, with a study of 6000 CEOs across the US, UK, Germany and Australia finding that “more than 80% [detected] no discernable impact from AI on either employment or productivity.” Nevertheless, you read “GPU” and “North Dakota” and you think “wow! That’s a place I know, and I know that GPUs power AI!” I know a GPU cluster in North Dakota — CoreWeave’s one with Applied Digital, which has debt so severe that it loses both companies money even if they have the capacity rented out 24/7. But let’s not let facts get in the way of a poorly-written story.

I don’t need to go line-by-line — mostly because I’ll end up writing a legally-actionable threat — but I need you to know that most of this piece’s arguments come down to magical thinking and utterly empty prose. For example, how does AI take over the entire economy? That’s right, they just get better. No need to discuss anything happening today. Even AI 2027 had the balls to start making stuff up about “OpenBrain” or whatever. This piece literally just says stuff, including one particularly-egregious lie: This is a complete and utter lie. A bald-faced lie.
This is not something that Claude Code can do. The fact that we have major media outlets quoting this piece suggests that those responsible for explaining how things work don’t actually bother to do any of the work to find out, and it’s both a disgrace and an embarrassment for the tech and business media that these lies continue to be peddled.

I’m now going to quote part of my upcoming premium piece (the Hater’s Guide To Private Equity, out Friday), because I think it’s time we talked about what Claude Code actually does. I’ve worked in or around SaaS since 2012, and I know the industry well. I may not be able to code, but I take the time to speak with software engineers so that I understand what things actually do and how “impressive” they are. Similarly, I make the effort to understand the underlying business models in a way that I’m not sure everybody else is trying to, and if I’m wrong, please show me an analysis of the financial condition of OpenAI or Anthropic from a booster. You won’t find one, because they’re not interested in interacting with reality.

So, despite all of this being very obvious, it’s clear that the markets and an alarming number of people in the media simply do not know what they are talking about or are intentionally avoiding thinking about it. The “AI replaces software” story is literally “Anthropic has released a product and now the resulting industry is selling off,” such as when it launched a cybersecurity tool that could check for vulnerabilities (a product that has existed in some form for nearly a decade), causing a sell-off in cybersecurity stocks like CrowdStrike — you know, the one that had a faulty bit of code cause a global cybersecurity incident that lost the Fortune 500 billions, and resulted in Delta Air Lines having to cancel over 1,200 flights over a period of several days.
There is no rational basis for anything about this sell-off other than that our financial media and markets do not appear to understand the very basic things about the stuff they invest in. Software may seem complex, but (especially in these cases) it’s really quite simple: investors are conflating “an AI model can spit out code” with “an AI model can create the entire experience of what we know as ‘software,’ or is close enough that we have to start freaking out.” This is thanks to the intentionally-deceptive marketing peddled by Anthropic and validated by the media.

In a piece from September 2025, Bloomberg reported that Claude Sonnet 4.5 could “code on its own for up to 30 hours straight,” a statement directly from Anthropic repeated by other outlets that added that it did so “on complex, multi-step tasks,” none of which were explained. The Verge, however, added that apparently Anthropic “coded a chat app akin to Slack or Teams,” and no, you can’t see it, or know anything about how much it costs or its functionality. Does it run? Is it useful? Does it work in any way? What does it look like? We have absolutely no proof this happened other than Anthropic saying it, but because the media repeated it, it’s now a fact.

As I discussed last week, Anthropic’s primary business model is deception, muddying the waters of what’s possible today and what might be possible tomorrow through a mixture of flimsy marketing statements and chief executive Dario Amodei’s doomerist lies about all white collar labor disappearing. Anthropic tells lies of obfuscation and omission. Anthropic exploits bad journalism, ignorance and a lack of critical thinking. As I said earlier, the “wow, Claude Code!” articles are mostly from captured boosters and people that do not actually build software being amazed that it can burp up its training data and make an impression of software engineering.
And even if we believe the idea that Spotify’s best engineers are not writing any code, I have to ask: to what end? Is Spotify shipping more software? Is the software better? Are there more features? Are there fewer bugs? What are the engineers doing with the time they’re saving? A study from last year from METR found that, despite thinking they were 24% faster, engineers using LLM coding tools were actually 19% slower.

I also think we need to really think deeply about how, for the second time in a month, the markets and the media have had a miniature shitfit based on blogs that tell lies using fan fiction. As I covered in my annotations of Matt Shumer’s “Something Big Is Happening,” the people that are meant to tell the general public what’s happening in the world appear to be falling for ghost stories that confirm their biases or investment strategies, even if said stories are full of half-truths and outright lies. I am despairing a little. When I see Matt Shumer on CNN or hear from the head of a PE firm about Citrini Research, I begin to wonder whether everybody got where they were not through any actual work but by making the right noises. This is the grifter economy, and the people that should be stopping them are asleep at the wheel.

NVIDIA beat estimates and raised expectations, as it has quarter after quarter. People were initially excited, then started reading the 10-K and seeing weird little things that stood out. $68.1 billion in revenue is a lot of money! That’s what you should expect from a company that is the single vendor in the only thing anybody talks about. Hyperscaler revenue accounted for slightly more than 50% of NVIDIA’s data center revenue. As I wrote about last year, NVIDIA’s diversified revenue — that’s the revenue that comes from companies that aren’t in the magnificent 7 — continues to collapse.
While data center revenue was $62.3 billion, 50% of it ($31.15 billion) came from hyperscalers… and because we don’t get a 10-Q for the fourth quarter, we don’t get a breakdown of how many individual customers made up that quarter’s revenue. Boo!

It is both peculiar and worrying that 36% (around $77.7 billion) of NVIDIA’s $215.938 billion in FY2026 revenue came from two customers. If I had to guess, they’re likely Foxconn and Quanta Computer, two large Taiwanese ODMs (Original Design Manufacturers) that build the servers for most hyperscalers. If you want to know more, I wrote a long premium piece that goes into it (among the ways in which AI is worse than the dot-com bubble). In simple terms, when a hyperscaler buys GPUs, they go straight to one of these ODMs to be put into servers. This isn’t out of the ordinary, but I keep an eye on the ODM revenues (published every month) to see if anything shifts, as I think they’ll be one of the first signs that things are collapsing.

NVIDIA’s inventories continue to grow, sitting at over $21 billion (up from around $19 billion last quarter). Could be normal! Could mean stuff isn’t shipping.

NVIDIA has now agreed to $27 billion in multi-year cloud service agreements — literally renting its GPUs back from the people it sells them to — with $7 billion of that expected in its FY2027 (Q1 FY2027 will report in May 2026). For some context, CoreWeave (which reports FY2025 earnings today, February 26) gave guidance last November that it expected its entire annual revenue to be between $5 billion and $5.15 billion. CoreWeave is arguably the largest AI compute vendor outside of the hyperscalers. If there was significant demand, none of this would be necessary.
NVIDIA “invested” $17.5bn in AI model makers and other early-stage AI startups, and made a further $3.5bn in land, power, and shell guarantees to “support the build-out of complex datacenter infrastructures.” In total, it spent $21bn propping up the ecosystem that, in turn, feeds billions of dollars into its coffers.

NVIDIA’s long-term supply and capacity obligations soared from $30.8bn to $95.2bn, largely because NVIDIA’s latest chips are extremely complex and require TSMC to make significant investments in hardware and facilities, and TSMC is unwilling to do that without guarantees that it’ll make its money back. NVIDIA expects these obligations to grow.

NVIDIA’s accounts receivable (goods that have been shipped but not yet paid for) now sits at $38.4 billion, of which 56% ($21.5 billion) is owed by three customers.

This is turning into a very involved and convoluted process! It turns out that it's pretty difficult to actually raise $100 billion. This is a big problem, because OpenAI needs $655 billion in the next five years to pay all its bills, and loses billions of dollars a year. If OpenAI is struggling to raise $100 billion today, I don't see how it's possible it survives. If you're to believe reports, OpenAI made $13.1 billion in revenue in 2025 on $8 billion of losses, but remember, my own reporting from last year said that OpenAI only made around $4.329 billion through September 2025, with $8.67 billion of inference costs alone. It is kind of weird that nobody seems to acknowledge my reporting on this subject. I do not see how OpenAI survives.

It coded for 30 hours [from which you are meant to intimate that the code was useful or good and that these hours were productive]. It made a Microsoft Teams competitor [which you are meant to assume was full-featured and functional like Teams or Slack, or… functional? And they didn't even have to prove it by showing it to you]. It was able to write uninterruptedly [which you assume was because it was doing good work that didn't need interruption].
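The customer-concentration figures quoted above do hang together arithmetically; a quick sanity-check sketch in Python (all amounts in billions of US dollars, taken from the article's text):

```python
# Sanity-check the customer-concentration figures quoted above.
# All amounts are in billions of US dollars, as stated in the article.

fy2026_revenue = 215.938                          # NVIDIA FY2026 total revenue
two_customer_revenue = fy2026_revenue * 0.36      # 36% from two customers

data_center_revenue = 62.3                        # quarterly data center revenue
hyperscaler_revenue = data_center_revenue * 0.50  # hyperscaler share

accounts_receivable = 38.4                        # shipped but not yet paid for
three_customer_ar = accounts_receivable * 0.56    # owed by three customers

print(round(two_customer_revenue, 1))   # 77.7  (the ~$77.7bn quoted)
print(round(hyperscaler_revenue, 2))    # 31.15 (the $31.15bn quoted)
print(round(three_customer_ar, 1))      # 21.5  (the $21.5bn quoted)
```

Each rounded result matches the dollar figure the article pairs with its percentage.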

Stratechery 3 days ago

An Interview with Bill Gurley About Runnin’ Down a Dream

An interview with long-time (retired) VC Bill Gurley about his new book about building a career you love, Uber, and the modern state of VC.

Stratechery 4 days ago

Xbox Replaces Head of Gaming, Xbox History, Whither Xbox

Xbox has a new head, who isn't a gamer; I suspect Microsoft is doing what it should have done a decade ago: get out of the console business.

Phil Eaton 4 days ago

I started a software research company

I quit my job at EnterpriseDB hacking on PostgreSQL products last month to start a company researching and writing about software infrastructure. I believe there is space for analysis that is more focused on code than TechCrunch or The Register, more open to covering corporate software development than LWN.net, and (as much as I love some of these folks) less biased than VCs writing about their own investments. I believe that more than ever there is a need for authentic and trustworthy analysis and coverage of the software we depend on.

This company, The Consensus, will talk about databases and programming languages and web servers and everything else that is important for experienced developers to understand and think about. It is independent of any software vendor and independent of any particular technology. Some people were surprised (in a positive way) to see me cover MySQL already, for example. But that is exactly the point. I don't want The Consensus to be just "Phil's thoughts". I have already started working with a number of experienced developers who will be writing, and be paid to write, for The Consensus. I also hope that this is another way, beyond the many communities I already run, to give back to the community, such as by highlighting the work of open-source developers (the first interview with a DataFusion developer is coming soon) and highlighting compelling events and jobs in the software infrastructure world.

The Consensus is entirely bootstrapped and will depend on the support of subscribers and, potentially, sponsors. The first few subscribers signed up just this past week. You can read more about the background and goals here, you can read about how contributors will work with The Consensus here, and you can get a sense for where this is going by browsing the homepage of The Consensus. Thank you for your support in advance! Thank you to the folks who have subscribed already despite very little fanfare. Feedback is very welcome.
I'm very excited and having quite a bit of fun already. We're all going to learn a lot.

Hugo 4 days ago

The B2BigB Syndrome: How Large Corporations Quietly Kill Startups

In the late 2000s, I worked at a software publisher, and one of my colleagues started a company. It was a kind of corporate Second Life, where an avatar could move around and trigger discussions with other people. I don't remember the details anymore, but with hindsight and probably lots of exaggeration, I'd say it was like Gather but 15 years ahead of its time. The application seemed to work well, and the company was lining up meetings with major corporations that seemed super interested in rolling it out across their enterprise. We're talking about big banks, major energy suppliers, really serious companies. Except it dragged on. A month. A quarter. A year. Then two. And eventually the company died waiting for an actual signature and, incidentally, some cash. My friend unfortunately ran into the infamous B2BigB syndrome, the curse (a French one?) that tends to kill a lot of companies every year. So if you're starting a company today or thinking about it, I invite you to think twice before prioritizing this segment, and that's what we're going to talk about today.

First, I need to define this acronym. In the business world, we tend to segment companies based on the customers they target. For example, Netflix is B2C and Jira is B2B. Among all this you have plenty of nuances: Microsoft sells both B2C and B2B, for example, and there are C2C platforms (exchanges between individuals). But let's keep it simple and just talk about B2C and B2B. Except "B" is broad. Between a 5-person company and a 40,000-person conglomerate, the way you sell to the two is very different. And within this category there's a deadly sub-category: large corporations. It's hard to say exactly where a large corporation begins, but you recognize them easily: a large corporation starts when a decision requires a ton of meetings, a quarter of time, a steering committee, board approval, or a purchasing department sign-off.
In practice, you can even have 500-person companies that behave this way, even if it's more common starting at 1,000. In any case, it gets worse with size: a quarter can become a year, or even 2, or even 5 (and I swear I've seen sales cycles that long). Anyway, that's what I call the BigB (the big B's).

The big advantage of BigB's is, in theory, their ability to pay big: we're talking about deployment across an entire large corporation, so volumes that make most startups' eyes light up. Except that it's often a mirage the moment you start looking at costs and margins, not to mention all the associated risks. Working with a large corporation is often synonymous with complexity, and that complexity is financed by specialists. You have to get through costly processes (a 200-page security questionnaire, legal questionnaires, framework contracts, ISO certification this and that) that often require a lot of specialists (lawyers, security experts, finance people, etc.). And that's just to get through the first step of the sales cycle. To sell to a large corporation, you need to be prepared to spend a fortune. By the way, it's worth noting that this doesn't prevent these large corporations from regularly appearing on the monthly data breach list, because no, churning out Excel questionnaires is not synonymous with security quality.

After that, you're quickly going to fall into the spiral of quarterly meetings with a bunch of people you'll only see once in your life, some of whom will take advantage of their temporary power to take out their frustrations and pet peeves on you. And since you'll be in a weak position, well... This is time not spent on the product. Of course it's normal to spend time on sales, but we're talking about quarterly meetings to prepare, with McKinsey-style PowerPoints (you sometimes even see scale-ups calling in consulting firms to fill out these documents) that require weeks of preparation.
Again, to sell to a large corporation, you need to be prepared to spend a fortune and wait ages. But let's imagine you've finally got the green light to deploy in a large corporation. The contract is signed. Now it's up to you to figure out adoption. Actually, this is the beginning of a second nightmare. A year has passed since the beginning of the sales cycle. All your previous contacts are gone. They might have been contractors who left the company, or executives who got transferred to other branches of the group. And now you have to find the people capable of helping you deploy your software, because without a doubt your revenue depends on how much the software is actually used. No deployment, no money. So you're going to need a dedicated team of salespeople capable of navigating complex bureaucracy to find the right contacts, and maybe even a dedicated implementation team. Your costs are going to explode and you still won't have earned anything at this stage.

With a bit of luck, and because you were smart enough to get a payment at signature, you'll eventually issue your first invoice. It will be paid 8 months later, end of month, the first 3 months having caused countless incidents because a purchase order needed to be signed and you had to go through 3 different departments for that. Bad luck: your cash flow is starting to choke. You reach the end of the first year, and then the purchasing department comes to see you to renegotiate the contract, knowing full well that they're your biggest client, so naturally you should do them a favor. In short, 2 years later, you've spent a fortune, your cash flow is negative, and your margin has melted like snow during a World Cup ski race in Saudi Arabia. OK, let's say I'm exaggerating and that, despite everything, this contract allowed you to cross a threshold, to have an impressive signature to put forward, and life continues for your startup/scaleup.
Actually, you don't know it yet, but you've invited a Trojan horse into your company. Working with a large corporation means accepting the complexity inherent to that business. If it took you 2 years to sign a contract with them, imagine that everything else takes the same time. Your product has to evolve to fit their way of working. You'll be asked for 12-level approval workflows, integrations with ERPs, broken enterprise SSO, integrations with legacy systems from the 90s. And every company has its own internal jargon that you'll be asked to force into your software. You'll invoice in units of work and have a "purchasing" role in your RBAC schemas (authorization systems). In short, in reality, you're going to develop an extension of your first client's IT infrastructure, with all its constraints, its complexity, its slow onboarding, and its costs. And when you have a client representing 80% of your revenue (and even from 20% onwards it really starts to matter), you can hardly say no. So your roadmap is regularly hijacked by salespeople dedicated to this client, and overall the product drifts away from the mass market. And that's normal; I'm not throwing stones at that team. If you've dedicated people to a client, it's normal that they try to influence how you build the product, even when the requests are absurd, because that team doesn't have the perspective needed to judge. And when the roadmap is regularly sidetracked, you also accumulate a huge amount of customization debt that will end up slowing the entire product. This big client may have allowed you to double your headcount, but 3/4 of the company will end up working for them and will develop its own software culture: less UX sense, less sensitivity to product performance (no point working on acquisition or conversion, for example).
All enterprise software has terrible UX because, first, that's not what drives sales, and second, after burning money on the sales process, certification, and onboarding, you have to make savings somewhere, often on the product, which is no longer really central to the relationship with this client. They'll try to reassure you by saying no, it's important, but actually, the product at that point has become a cost center that needs to be optimized to avoid losing more margin. Margin eaten by the consulting firm that helped you determine your deployment strategy and pricing... But even when you "improve" your product for this client, you're going to continuously degrade it for all the others you thought you'd attract next by showcasing this win on your beautiful landing page. Because again, you're going to impose their complexity on all the other companies that could have been interested in your services.

I'm obviously painting a dark picture. And there are companies that specialized FROM DAY 1 in large corporations, that tailored their commercial offering taking into account all the associated costs. Deployments are priced at 100k, contracts impose minimum usage, everything was framed from the start because the strategy was always to expand exclusively here. But for all the companies that think they'll "just" do a BigB deal to get a validation badge while actually targeting the entire SMB market and looking for volume: it's rarely a good plan.

At the beginning I said: "this curse (a French one?)". Why do I call it a great French curse? It's probably a magnifying-glass effect, and I'd certainly see the same thing in every country. But every year, I see companies die after quarters of waiting for that famous contract with a large corporation (just yesterday I was talking to someone who told me the exact same story). So I think there's something a bit different about us. We like to be different.
Partly, I get the sense it's related to the size of our SMB market, which is smaller than Germany's (the German Mittelstand seems bigger), so we move faster from SMBs to large corporations. Obviously, in terms of credibility, it's easier to sell a product once you have the logo of a large corporation than a bunch of logos of unknown companies. What's certain is that culturally, there's the CAC 40 and everything else. The CAC 40 has been basically the same companies for 30 or 40 years. By contrast, look at the S&P 500: in 1990 it was Exxon, GE, Philip Morris, IBM. They've all given way to Apple, Nvidia, Amazon, Google. In France, the large corporations in the CAC are structurally stable and dominant, which makes them all the more attractive as clients for startups. They have budgets, longevity, legitimacy. But these same large corporations aren't springboards to a global market; they're markets closed in on themselves.

Conversely, the SMB market can work. If I look at Pennylane, Qonto, Indy, Payfit, Spendesk, Livestorm, it's precisely by targeting this market that they've managed to go far. By contrast, I have real questions about the strategy of a company like Mistral, which seems to position itself only on large corporations (on-premise deployment, Azure partnerships, etc.) and seems to be neglecting the mass market. I hope it won't be the future DailyMotion, which favored big media and telecom operators while missing the opportunity to become the B2C media platform that YouTube managed to become.

You'll have gathered: if you're starting a company today, I'd advise you not to see "B2B" as a single big playground. I'd tend to tell you to avoid B2BigB, which is often destructive for startups and often leads to a dead end. It's still possible, but you need to be armed for it. And if that's your choice, I'll only say one thing: good luck :) Targeting large corporations (and the public sector) obviously gives you access to larger markets.
But I'd tend to recommend tackling that step later, when the company is already solid. When DJI (Chinese drones) attacked the professional market, it already had a huge foothold in the B2C market. It came with expertise and know-how that allowed it to stay sovereign over its decisions. Now, if you're tempted anyway, the recipe for having a chance is above all a question of seniority of leadership: you need to know how to say no firmly, and you need to stop chasing every rabbit that passes by when you see a so-called "low hanging fruit" (the expression that has replaced "quick win" as one of my most hated expressions). There's no such thing as effortless gain. Everything has a cost, even when it's hidden. And you need a good financial and reputational foundation to impose these conditions, hence the advice to build a good base in the other segments first. It's easier to say no when a client represents 2% of your revenue than when they represent 20%.

One strategy I've seen work several times is to create software with great UX, get adopted by the teams, then go see the purchasing departments of the companies in question and put the usage figures under their nose: "See, you already have 300 people using it, wouldn't you like to set up a framework contract and better understand usage at your company?" That's interesting because the product's adoption came from the teams, you didn't modify your roadmap, and you're in a strong position with procurement to improve your presence without being pressured on everything else. In short: make a good product, track usage, wait until you have enough footprint, and then go negotiate. Anthropic (with Claude Code), by first targeting individual developers (indie hackers, side projects) and small teams, was pushed to constantly improve its product, which became number one in its category (at the time of writing; this passage might age poorly :)). Today, they're selling enterprise licenses.
Good companies are able to do volume and then move up the chain: small companies, then large companies. I've rarely (never?) seen the reverse. Once you've built for large corporations, you don't know how to come back to the rest of the segments.

B2C (Business to Consumer): selling to the general public. B2B (Business to Business): selling to companies.

Stratechery 5 days ago

Another Viral AI Doomer Article, The Fundamental Error, DoorDash’s AI Advantages

Another AI doomer article has gone viral, and like many in the genre, it lacks an appreciation for dynamism and markets. Then, why DoorDash is going to be fine.

Jim Nielsen 1 weeks ago

How AI Labs Proliferate

SITUATION: there are 14 competing AI labs. “We can’t trust any of these people with super-intelligence. We need to build it ourselves to ensure it’s done right!” SOON: there are 15 competing AI labs. (See: xkcd on standards.) The irony: “we’re the responsible ones” is each lab’s founding mythology as they spin out of each other.

A Smart Bear 1 weeks ago

Strategic choices: When both options are good

Real strategy means choosing between two good options and accepting all the consequences--even the painful ones you don't like.

iDiallo 1 weeks ago

Nvidia was only invited to invest

Nvidia was only invited to invest. That is quite a reversal of commitment. Remember that graph that has been circulating for some time now? The one that shows the circular investment from AI companies: basically, Nvidia will invest $100 billion in OpenAI, OpenAI will then invest $300 billion in Oracle, then Oracle invests back into Nvidia. Now Jensen Huang, the Nvidia CEO, is backtracking and saying he never made that commitment: “It was never a commitment. They invited us to invest up to $100 billion and of course, we were, we were very happy and honored that they invited us, but we will invest one step at a time.” So he never committed? Did we make up all these graphs in our heads? Was it a misquote from a journalist somewhere that sparked all this frenzy? Well, you can take a look at OpenAI's press release from September 2025. They wrote: NVIDIA intends to invest up to $100 billion in OpenAI as the new NVIDIA systems are deployed. In fact, Jensen Huang went on to say: “NVIDIA and OpenAI have pushed each other for a decade, from the first DGX supercomputer to the breakthrough of ChatGPT. This investment and infrastructure partnership mark the next leap forward—deploying 10 gigawatts to power the next era of intelligence.” It sounds like Jensen is distancing himself from that $100 billion commitment. Did he take a peek inside OpenAI and change his mind? At the same time, OpenAI is experimenting with ads. Sam Altman stated before that they would only ever use ads as a last resort. It sounds like we are in that phase.

Stratechery 1 weeks ago

2026.08: Losing in the Attention Economy

Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week. This week’s Sharp Tech video is on Anthropic’s Super Bowl lies.

What Happened to Video Games? For decades video games were hailed as the industry of the future, as their growth and eventually total revenue dwarfed other forms of entertainment. Over the last five years, however, things have gotten dark — and what light there is is shining on everyone other than game developers. I’ve been talking to Matthew Ball about the state of the video game industry every year for the last three years, and this week’s Interview was my favorite one of the series: what happens when you actually have to fight for attention, and when everything that made you exciting — particularly interactivity and immersiveness — starts to become a liability? — Ben Thompson

The NBA Is a Mess, For Now. As a card-carrying pro basketball sicko who will be watching the NBA the rest of my life, it brings me no joy to report the league is not in a great place at the moment. We’re reliving the mid-aughts Spurs-Pistons Dark Ages, but with too much offense instead of too much defense, and a regular season that’s 20 games too long. I wrote about all of it on Sharp Text this week, including problems that can be fixed, others that may be solved with time, and whether Commissioner Adam Silver is the right leader to address any of these issues. — Andrew Sharp

Shopify and the Future of E-Commerce.
In the midst of the ongoing thrum of SaaSpocalypse takes, I enjoyed that Ben’s Daily Update on Wednesday pumped the brakes on the panic in at least one area: Shopify is fine, actually. We went deeper on this week’s episode of Sharp Tech, exploring not only Shopify’s value propositions, but the shifting dynamics of e-commerce in the AI era, the sorts of businesses that are likely to emerge in the years to come, and why certain structural advantages from previous paradigms will not only be durable, but even stronger going forward. — AS

Thin Is In — Thick clients were the dominant form of device throughout the PC and mobile era; in an AI world, however, thin clients make much more sense.
Shopify Earnings, Shopify’s AI Advantages — Shopify is poised to be one of the biggest winners from AI; it would behoove investors to actually understand the businesses they are selling.
An Interview with Matthew Ball About Gaming and the Fight for Attention — An interview with Matthew Ball about the state of the video gaming industry in 2026, and why everything is a fight for attention.
The NBA’s Problems Are Structural, Cultural and Fixable — What’s driving NBA fans to apathy, how the league might find its way back, and whether Adam Silver has outlived his usefulness.
Back to the Future
Curling, F1, and Gambling
South Africa’s Ruined Synthetic Oil Giant
The Dunk Contest Preview America Needs, The Top Five Bandwagons for the Next Five Years, The NBA Fines the Jazz $500,000
The All-Star Game Was a Delight, Harrowing Field Reporting from the Dunk Contest, KD Burners Rise from the Ashes
The Roots of a Global Memory Shortage, Thick, Thin and Apple, Shopify is Fine, Actually

iDiallo 1 weeks ago

Teleoperation is Always the Butt of the Joke

A few years back, the term "AI" took an unexpected turn when it was redefined as "Actual Indian". As in, a person in India operating the machine remotely. I first heard the term when Amazon was boasting about their cashierless grocery stores. There was a big sign in the store that said "Just Walk Out," meaning you grab your items, walk out, and get charged the correct amount automatically. How did they do it? According to Amazon, they used AI. What kind of AI exactly, nobody was quite sure. But customers started reporting something odd. They weren't charged immediately after leaving the store. Some said it took several days for a charge to appear on their account. It eventually came out that the technology was sophisticated tracking performed by Amazon's team in India. Workers would manually review footage of each customer's visit and charge them accordingly. What's fascinating is that this operation was impressive: coordinating thousands of store visits, matching items to customers across multiple camera angles, and doing it accurately enough that most people never noticed the delay. But because it was buried under the "AI" label, the moment the truth came out, the whole thing became a punchline.

In 2024, Tesla held their "We, Robot" event, where Optimus robots operated a bar. They were serving drinks, dancing, and mingling with guests. It was a pretty impressive display. The robots moved fluidly, held conversations, and handed off drinks without fumbling. Elon Musk claimed they were AI-driven, fully autonomous. People were genuinely impressed by the interactions, and for good reason. Fluid, bipedal locomotion in a crowded social environment is an extraordinarily hard robotics problem. The moment it came out that the robots were teleoperated, the sentiment flipped entirely. It didn't matter how dexterous or natural the movement was. It felt like a magic trick exposed. But think about what was actually being demonstrated.
Humanoid robots walking through a crowd, responding in real time to a human operator's inputs, without tripping over guests or spilling drinks. That's not nothing. Slapping "AI" on it turned an engineering achievement into a scandal. More recently, the company 1X unveiled a friendly humanoid robot available for purchase at $20,000. The demo looks genuinely impressive. The robot can perform domestic tasks like doing laundry, folding clothes, and navigating a home environment. And if it doesn't know how to do something, it can be taught. You can authorize a remote worker to take control, demonstrate the task, and the robot learns from that demonstration, adding it to its growing repertoire. That's a legitimately interesting approach to machine learning through human guidance. What got glossed over is how much of the current capability relies on that remote worker. Right after the unveiling, the Wall Street Journal was invited to test the robots. In their video, the robot is being operated entirely by a person sitting in the next room. To be fair, the smoothness of that teleoperation is itself a technical achievement. Real-time control of a bipedal robot performing fine motor tasks, like folding a shirt, requires low-latency communication, precise motor control, and a well-designed interface for the operator. That's years of engineering work. But because teleoperation isn't the product being sold (AI is), that achievement gets treated as evidence of fraud rather than progress. We've built an environment where "teleoperated" has become a slur, and anything short of full autonomy is seen as cheating. Even Waymo, whose self-driving cars have logged millions of autonomous miles, feels compelled to publicly defend itself against accusations of secretly using remote operators, as if any human involvement would invalidate everything it has built. I think teleoperation is pretty impressive. It's a valuable technology in its own right.
Surgeons use it to operate across continents. Industrial operators use it to work in places no human could safely go. In all of these cases, having a human-in-the-loop is the point. Every "AI" product that turns out to have a person behind the curtain makes the public more skeptical. In a parallel universe, there is a version of the tech industry that celebrates teleoperation as a stepping stone. Where we are building tools to make collaboration easier through teleoperation, and it's not viewed as an embarrassing secret.

Justin Duke 1 weeks ago

Maybe use Plain

When I wrote about Help Scout, much of my praise was appositional. They were the one tool I saw that did not aggressively shoehorn you into using them as a CRM to the detriment of the core product itself. This is still true. They launched a redesign that I personally don't love, but purely on subjective grounds. And there's still a fairly reasonable option for — and I mean this in a non-derogatory way — baby's first support system. I will also call out: if you want something even simpler, Jelly, an app that leans fully into the shared-inbox side of things. It is less featureful than Help Scout, but with a better design and lower price point. If I was starting a new app today, this is what I would reach for first. But nowadays I use Plain. Plain will not solve all of your problems overnight. It's only a marginally more expensive product — $35 per user per month compared to Help Scout's $25 per user per month. The built-in Linear integration is worth its weight in gold if you're already using Linear, and its customer cards (the equivalent of Help Scout's sidebar widgets) are marginally more ergonomic to work with. The biggest downside we've had thus far is reliability — less in a cosmic or existential sense and more that Plain has had a disquieting number of small-potatoes incidents over the past three to six months. My personal flowchart for what service to use in this genre is something like: start with Jelly; if I need something more than that, see if anyone else on the team has specific experience that they care a lot about, because half the game here is in muscle memory rather than functionality; if not, use Plain. But the biggest thing to do is take the tooling and gravity of support seriously as early as you can.

マリウス 1 week ago

Hold on to Your Hardware

Tl;dr at the end. For the better part of two decades, consumers lived in a golden age of tech. Memory got cheaper, storage increased in capacity, and hardware got faster and absurdly affordable. Upgrades were routine, almost casual: if you needed more RAM, a bigger SSD, or a faster CPU or GPU, you barely had to wait a week for a discount offer, and you moved on with your life. This era is ending. What’s forming now isn’t just another pricing cycle or a short-term shortage; it is a structural shift in the hardware industry that paints a deeply grim outlook for consumers. Today, I am urging you to hold on to your hardware, as you may not be able to replace it affordably in the future. While I have always been a staunch critic of today’s consumer industry , as well as the ideas behind it , and a strong proponent of buying it for life (meaning, investing in durable, repairable, quality products), the industry’s shift has nothing to do with the protection of valuable resources or the environment. Instead, it is a move toward a trajectory that has the potential to erode technological self-sufficiency and independence for people all over the world. In recent months the buzzword RAM-pocalypse has started popping up across tech journalism and enthusiast circles. It’s an intentionally dramatic term describing the sharp increase in RAM prices, primarily driven by high demand from data centers and “AI” technology, which most people had considered a mere blip in the market. This presumed temporary blip , however, turned out to be a lot more than just that, with one manufacturer after the other openly stating that prices will continue to rise, with suppliers forecasting shortages of specific components that could last well beyond 2028, and with key players like Western Digital and Micron either largely disregarding or outright exiting the consumer market altogether.
Note: Micron wasn’t just another supplier , but one of the three major players directly serving consumers with reasonably priced, widely available RAM and SSDs. Its departure leaves the consumer memory market effectively in the hands of only two companies: Samsung and SK Hynix . This duopoly certainly doesn’t compete on your wallet’s behalf, and it wouldn’t be the first time it optimized for margins . The RAM-pocalypse isn’t just a temporary headline anymore; it has seemingly become a long-term reality. However, RAM, and memory in general, is only the beginning. The main reason for the shortages, and hence the increased prices, is data center demand, specifically from “AI” companies. These data centers require mind-boggling amounts of hardware, specifically RAM, storage drives, and GPUs, which are themselves memory-heavy when built for “AI” workloads. The enterprise demand for specific components simply outpaces current global production capacity and outbids the comparatively poor consumer market. For example, OpenAI ’s Stargate project alone reportedly requires approximately 900,000 DRAM wafers per month , which could account for roughly 40% of current global DRAM output. Other big tech giants, including Google , Amazon , Microsoft , and Meta , have placed open-ended orders with memory suppliers, accepting as much supply as available. The existing and future data centers of these companies are expected to consume 70% of all memory chips produced in 2026. However, memory is just the first domino. RAM and SSDs are where the pain is most visible today, but rest assured that the same forces are quietly reshaping all aspects of consumer hardware. One of the most immediate and tangible consequences of this broader supply-chain realignment is sharp, cascading price hikes across consumer electronics, with LPDDR memory standing out as an early pressure point that most consumers didn’t recognize until it was already unavoidable.
LPDDR is used in smartphones, laptops, tablets, handheld consoles, routers, and increasingly even low-power PCs. It sits at the intersection of consumer demand and enterprise prioritization, making it uniquely vulnerable when manufacturers reallocate capacity toward “AI” accelerators, servers, and data-center-grade memory, where margins are higher and contracts are long-term. As fabs shift production toward HBM and server DRAM , as well as GPU wafers, consumer hardware production quietly becomes non-essential , tightening supply just as devices become more power- and memory-hungry, all while those devices continue on their path to remain frustratingly unserviceable and un-upgradable. The result is a ripple effect in which device makers pay more for chips and memory and pass those costs on through higher retail prices, cut base configurations to preserve margins, or lock features behind premium tiers. At the same time, consumers lose the ability to compensate by upgrading later, because most components these days, like LPDDR , are soldered down by design. This is further amplified by scarcity: even modest supply disruptions can spike prices disproportionately in a market dominated by just a few suppliers, turning what should be incremental cost increases into sudden jumps that affect entire product categories at once. In practice, this means that phones, ultrabooks, and embedded devices are becoming more expensive overnight, not because of new features, but because the invisible silicon inside them has quietly become a contested resource in a world that no longer builds hardware primarily for consumers. In late January 2026, the Western Digital CEO confirmed during an earnings call that the company’s entire HDD production capacity for calendar year 2026 is already sold out. Let that sink in for a moment: Q1 hasn’t even ended, and a major hard drive manufacturer has zero remaining capacity for the year.
Firm purchase orders are in place with its top customers, and long-term agreements already extend into 2027 and 2028. Consumer revenue now accounts for just 5% of Western Digital ’s total sales, while cloud and enterprise clients make up 89%. The company has, for all practical purposes, stopped being a consumer storage company. And Western Digital is not alone. Kioxia , one of the world’s largest NAND flash manufacturers, admitted that its entire 2026 production volume is already in a “sold out” state , with the company expecting tight supply to persist through at least 2027 and long-term customers facing 30% or higher year-on-year price increases. Adding to this, the Silicon Motion CEO put it bluntly during a recent earnings call : “We’re facing what has never happened before: HDD, DRAM, HBM, NAND… all in severe shortage in 2026.” The Phison CEO has gone even further, warning that the NAND shortage could persist until 2030 and that it risks the “destruction” of entire segments of the consumer electronics industry. He also noted that factories are now demanding prepayment for capacity three years in advance , an unprecedented practice that effectively locks out smaller players. The collateral damage of this can already be felt, and it’s significant. For example, Valve confirmed that the Steam Deck OLED is now intermittently out of stock in multiple regions “due to memory and storage shortages” . All models are currently unavailable in the US and Canada, the cheaper LCD model has been discontinued entirely, and there is no timeline for when supply will return to normal. Valve has also been forced to delay the pricing and launch details for its upcoming Steam Machine console and Steam Frame VR headset, directly citing memory and storage shortages. At the same time, Sony is considering delaying the PlayStation 6 to 2028 or even 2029, and Nintendo is reportedly contemplating a price increase for the Switch 2 , less than a year after its launch.
Both decisions are seemingly driven by the same memory supply constraints. Meanwhile, Microsoft has already raised prices on the Xbox . Now you might think that everything so far is about GPUs and other gaming-related hardware, but that couldn’t be further from the truth. General-purpose computing, like the Raspberry Pi , is not immune to any of this either. The Raspberry Pi Foundation has been forced to raise prices twice in three months, with the flagship Raspberry Pi 5 (16GB) jumping from $120 at launch to $205 as of February 2026, a 70% increase driven entirely by LPDDR4 memory costs. What was once a symbol of affordable computing is rapidly being priced out of reach for the educational and hobbyist communities it was designed to serve. HP, on the other hand, seems to have already prepared for the hardware shortage by launching a laptop subscription service where you pay a monthly fee to use a laptop but never own it , no matter how long you subscribe. While HP frames this as a convenience, the timing, right in the middle of a hardware affordability crisis, makes it feel a lot more like a preview of a rented compute future. But more on that in a second. “But we’ve seen price spikes before, due to crypto booms, pandemic shortages, factory floods and fires!” , you might say. And while we did live through those crises, things eventually eased when bubbles popped and markets or supply chains recovered. The current situation, however, doesn’t appear to be going away anytime soon, as it looks like the industry’s priorities have fundamentally changed . These days, the biggest customers are not gamers, creators, PC builders, or even crypto miners anymore. Today, it’s hyperscalers : companies that use hardware for “AI” training clusters, cloud providers, enterprise data centers, as well as governments and defense contractors. Compared to these hyperscalers, consumers are small fish in a big pond.
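As a quick sanity check on the Raspberry Pi figure above, the percentage increase follows directly from the two prices cited in the post (the helper function below is purely illustrative, not from any of the sources mentioned):

```python
def pct_increase(old: float, new: float) -> float:
    """Percentage increase when a price moves from `old` to `new`."""
    return (new - old) / old * 100

# Raspberry Pi 5 (16GB): $120 at launch -> $205 as of February 2026
print(round(pct_increase(120, 205)))  # -> 71, i.e. roughly the "70% increase" cited
```

The same arithmetic applied to the OEM guidance mentioned later (15-20% PC price hikes) or TrendForce's 90-95% quarter-over-quarter DRAM contract forecast shows just how far outside normal pricing cycles these numbers sit.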
These buyers don’t care if RAM costs 20% more, and neither do they wait for Black Friday deals. Instead, they sign contracts measured in exabytes and billions of dollars. With such clients lining up, the consumer market is suddenly an inconvenience for manufacturers. Why settle for smaller margins and deal with higher marketing and support costs, fragmented SKUs, price sensitivity, and retail logistics headaches, when you can have behemoths throwing money at you? Why sell a $100 SSD to one consumer when you can sell a whole rack of enterprise NVMe drives to a data center backed by circular, virtually infinite money? Guaranteed volume, guaranteed profit, zero marketing. The industry has answered these questions loudly. All of this goes to show that the consumer market is not just deprioritized; it is being starved . In fact, IDC has already warned that the PC market could shrink by up to 9% in 2026 due to skyrocketing memory prices, and has described the situation not as a cyclical shortage but as “a potentially permanent, strategic reallocation of the world’s silicon wafer capacity” . Leading PC OEMs, including Lenovo , Dell , HP , Acer , and ASUS , have all signaled 15-20% PC price increases for 2026, with some models seeing even steeper hikes. Framework , the repairable laptop company, has also been transparent about rising memory costs impacting its pricing. And analyst Jukan Choi recently revised his shortage timeline estimate , noting that DRAM production capacity is expected to grow at just 4.8% annually through 2030, with even that incremental capacity concentrated on HBM rather than consumer memory. TrendForce ’s latest forecast projects DRAM contract prices rising by 90-95% quarter over quarter in Q1 2026. And that is not a typo. The price of hardware is one thing, but value for money is another aspect that appears to be only getting worse from here on. Already today, consumer parts feel like cut-down versions of enterprise silicon.
As “AI” accelerators and server chips dominate R&D budgets, consumer improvements will slow even further, or arrive at higher prices justified as premium features . This is true for CPUs and GPUs, and it will be equally true for motherboards, chipsets, power supplies, networking, and so on. We will likely see fewer low-end options, more segmentation, artificial feature gating, and generally higher baseline prices that, once established, won’t be coming back down again. As enterprise standards become the priority, consumer gear is becoming an afterthought that is rebadged, overpriced, and poorly supported. The uncomfortable truth is that the consumer hardware market is no longer the center of gravity, as we all were able to see at this year’s CES . It’s orbiting something much larger, and none of this is accidental. The industry isn’t failing; it’s succeeding, just not for you . And to be fair, from a corporate standpoint, this pivot makes perfect sense. “AI” and enterprise customers are rewriting revenue charts, all while consumers remain noisy, demanding, and comparatively poor. It is pretty clear that consumer hardware is becoming a second-class citizen, which means that the machines we already own are more valuable than we might think right now. “But what does the industry think the future will look like if nobody can afford new hardware?” , you might be asking. There is a darker, conspiratorial interpretation of today’s hardware trends that reads less like market economics and more like a rehearsal for a managed future.
Businesses, having discovered that ownership is inefficient and obedience is profitable, are quietly steering society toward a world where no one owns compute at all, where hardware exists only as an abstraction rented back to the public through virtual servers, SaaS subscriptions, and metered experiences , and where digital sovereignty, which anyone with a PC tower under their desk once had, becomes an outdated, eccentric, and even suspicious concept. Picture a morning in that future: an ordinary citizen wakes up and taps their terminal, a sealed device without ports, storage, or meaningful local execution capabilities, and logs into their Personal Compute Allocation , a bundle of cloud CPU minutes, RAM credits, and storage tokens leased from a conglomerate whose logo has quietly replaced the word “computer” in everyday speech, just as “to search” has made way for “to google” . Installing software is no longer a concept, because software no longer exists as a thing , only as a service tier in which every task routes through servers owned by entities. Entities that insist this is all for the planet . Entities that outlawed consumer hardware years ago under the banner of environmental protectionism , citing e-waste statistics, carbon budgets , and unsafe unregulated silicon , while conveniently ignoring that the data centers humming beyond the city limits burn more power in an hour than the old neighborhood ever did in a decade. In this world, the ordinary citizen remembers their parents’ dusty Personal Computer , locked away in a storage unit like contraband: a machine that once ran freely, offline if it wanted, immune to arbitrary account suspensions and pricing changes.
As they go about their day, paying a micro-fee to open a document, losing access to their own photos because a subscription lapsed, watching a warning banner appear when they type something that violates the ever-evolving terms of service, and shouting “McDonald’s!” to skip the otherwise unskippable ads within every other app they open, they begin to understand that the true crime of consumer hardware wasn’t primarily pollution but independence. They realize that owning a machine meant owning the means of computation , and that by centralizing hardware under the guise of efficiency, safety, and sustainability, society traded resilience for convenience and autonomy for comfort. In this dyst… utopia , nothing ever breaks because nothing is yours , nothing is repairable because nothing is physical, and nothing is private because everything runs somewhere else , on someone else’s computer . The quiet moral, felt when the network briefly stutters and the world freezes, is that keeping old hardware alive was never nostalgia or paranoia, but a small, stubborn act of digital self-defense; a refusal to accept that the future must be rented, permissioned, and revocable at any moment. If you think that dystopian “rented compute over owned hardware” future could never happen, think again . In fact, you’re already likely renting rather than owning in many different areas. Your means of communication are run by Meta , your music is provided by Spotify , your movies are streamed from Netflix , your data is stored in Google ’s data centers, and your office suite runs on Microsoft ’s cloud. Maybe even your car is leased instead of owned, and you pay a monthly premium for seat heating or sElF-dRiViNg , whatever that means.
After all, the average Gen Z and Millennial US consumer today apparently has 8.2 subscriptions , not including the DaIlY aVoCaDo ToAsTs and StArBuCkS cHoCoLate ChIp LaTtEs that the same Boomers responsible for the current (and past) economic crises love to dunk on. Besides, look no further than what’s already happening in, for example, China, a country that manufactures massive amounts of the world’s sought-after hardware yet faces restrictions on buying that very hardware. In recent years, a complex web of export controls and chip bans has put a spotlight on how hardware can become a geopolitical bargaining chip rather than a consumer good. For example, export controls imposed by the United States in recent years barred Nvidia from selling many of its high-performance GPUs into China without special licenses, significantly reducing legal access to cutting-edge compute inside the country. Meanwhile, enforcement efforts have repeatedly busted smuggling operations moving prohibited Nvidia chips into Chinese territory through Southeast Asian hubs, with over $1 billion worth of banned GPUs reportedly moving through gray markets, even as official channels remain restricted. Coverage by outlets such as Bloomberg , as well as actual investigative journalism like Gamers Nexus , has documented these black-market flows and the lengths to which both sides go to enforce or evade restrictions, including smuggling networks and increased regulatory scrutiny. On top of this, Chinese regulators have at times restricted domestic tech firms from buying specific Nvidia models, further underscoring how government policy can override basic market access for hardware, even in the country where much of that hardware is manufactured. While some of these export rules have seen partial reversals or regulatory shifts, the overall situation highlights a world in which hardware access is increasingly determined by politics, security regimes, and corporate strategy, not by consumer demand .
This should serve as a cautionary tale for anyone who thinks owning their own machines won’t matter in the years to come. In an ironic twist, however, one of the few potential sources of relief may, in fact, come from China. Two Chinese manufacturers, CXMT ( ChangXin Memory Technologies ) and YMTC ( Yangtze Memory Technologies ), are embarking on their most aggressive capacity expansions ever , viewing the global shortage as a golden opportunity to close the gap with the incumbent big three ( Samsung , SK Hynix , Micron ). CXMT is now the world’s fourth-largest DRAM maker by production volume, holding roughly 10-11% of global wafer capacity, and is building a massive new DRAM facility in Shanghai expected to be two to three times larger than its existing Hefei headquarters, with volume production targeted for 2027. The company is also preparing a $4.2 billion IPO on Shanghai’s STAR Market to fund further expansion and has reportedly delivered HBM3 samples to domestic customers including Huawei . YMTC , traditionally a NAND flash supplier, is constructing a third fab in Wuhan with roughly half of its capacity dedicated to DRAM, and has reached 270-layer 3D NAND capability, rapidly narrowing the gap with Samsung (286 layers) and SK Hynix (321 layers). Its NAND market share by shipments reached 13% in Q3 2025, close to Micron ’s 14%. What’s particularly notable is that major PC manufacturers are already turning to these suppliers . However, as mentioned before, with hardware having become a geopolitical topic, both companies face ongoing (US-imposed) restrictions. HP , for example, has indicated it would only use CXMT chips in devices for non-US markets. Nevertheless, for consumers worldwide, the emergence of viable fourth and fifth players in the memory market represents the most tangible hope of eventually breaking the current supply stranglehold.
Whether that relief arrives in time to prevent lasting damage to the consumer hardware ecosystem remains an open question, though. Polymarket bet prediction : A non-zero percentage of people will confuse Yangtze Memory Technologies with the Haskell programming language . The reason I’m writing all of this isn’t to create panic, but to help put things into perspective. You don’t need to scavenger-hunt for legacy parts in your local landfill (yet) or swear off upgrades forever, but you do need to recognize that the rules have changed . The market that once catered to enthusiasts and everyday users is turning its back. So take care of your hardware, stretch its lifespan, upgrade thoughtfully, and don’t assume replacement will always be easy or affordable. That PC, laptop, NAS, or home server isn’t disposable anymore. Clean it, maintain it, repaste it, replace fans, and protect it, as it may need to last far longer than you originally planned. Also, realize that the best time to upgrade your hardware was yesterday and that the second-best time is now . If you can afford sensible upgrades, especially RAM and SSD capacity, it may be worth doing sooner rather than later. Not for performance, but for insurance, because the next time something fails, it might be unaffordable to replace, as the era of casual upgrades seems to be over. Five-year systems may become eight- or ten-year systems. Software bloat will hurt more and will require re-thinking . Efficiency will matter again . And looking at it from a different angle, maybe that’s a good thing. Additionally, the assumption that prices will normalize again at some point is most likely a pipe dream. The old logic of wait a year and it’ll be cheaper no longer applies when manufacturers are deliberately constraining supply. If you need a new device, buy it; if you don’t, however, there is absolutely no need to spend money on the minor yearly refresh cycle any longer, as the returns are increasingly diminishing.
And again, looking at it from a different angle, probably that is also a good thing. Consumer hardware is heading toward a bleak future in which owning powerful, affordable machines becomes harder or maybe even impossible, as manufacturers abandon everyday users to chase vastly more profitable data centers, “AI” firms, and enterprise clients. RAM and SSD price spikes, Micron ’s exit from the consumer market, and the resulting Samsung / SK Hynix duopoly are early warning signs of a broader shift that will eventually affect CPUs, GPUs, and the entire PC ecosystem. With large manufacturers having sold out their entire production capacity to hyperscalers for the rest of the year while simultaneously cutting consumer production by double-digit percentages, consumers will have to take a back seat. Already today, consumer hardware is overpriced, out of stock, or even intentionally delayed due to supply issues. In addition, manufacturers are pivoting toward consumer hardware subscriptions, where you never own the hardware. In the most dystopian trajectory, consumers might not buy any hardware at all, with the exception of low-end thin clients that are merely interfaces , and will instead rent compute through cloud platforms, losing digital sovereignty in exchange for convenience. And despite all of this sounding like science fiction, there is already hard evidence proving that access to hardware can in fact be politically and economically revoked. Therefore, I am urging you to maintain and upgrade wisely, and hold on to your existing hardware , because ownership may soon be a luxury rather than the norm.

Martin Fowler 1 week ago

Bliki: Host Leadership

If you've hung around agile circles for long, you've probably heard about the concept of servant leadership , that managers should think of themselves as supporting the team, removing blocks, protecting them from the vagaries of corporate life. That's never sounded quite right to me, and a recent conversation with Kent Beck nailed why - it's gaslighting. The manager claims to be a servant, but everyone knows who really has the power. My colleague Giles Edwards-Alexander told me about an alternative way of thinking about leadership, one that he came across working with mental-health professionals. This casts the leader as a host: preparing a suitable space, inviting the team in, providing ideas and problems, and then stepping back to let them work. The host looks after the team, rather as the ideal servant leader does, but still has the power to intervene should things go awry.
