
An Interview with F1 Driver and Venture Capitalist Nico Rosberg About the Drive to Win

Good morning,

This week’s Stratechery Interview is with F1 driver-turned-venture capitalist Nico Rosberg. Rosberg started his F1 career in 2006, and retired after winning the world championship in 2016; he spent his last four years at Mercedes as the teammate of his childhood friend Lewis Hamilton, in one of the most intense teammate rivalries in F1 history. Over the last several years, however, Rosberg has reinvented himself as a venture capitalist, founding Rosberg Ventures, with a specific focus on leveraging his F1 background to build connections between European money and Silicon Valley startups in one direction, and startup products and German businesses in the other. In this interview we cover all aspects of Rosberg’s journey, from having a steering wheel in his crib, to pioneering the use of sports psychology in F1, to his decision to retire on top of the world. Then, we discuss how F1 builds connections, the similarities between founders and drivers, and how he realized he could leverage that in a new competition: winning as an investor. What I found particularly interesting is how Rosberg’s background and history seem so varied and unconnected on the surface, yet are clearly linked by a consistent ethos of maximizing opportunity in the service of winning. As a reminder, all Stratechery content, including interviews, is available as a podcast; click the link at the top of this email to add Stratechery to your podcast player. On to the Interview:

This interview is lightly edited for clarity.

Nico Rosberg, welcome to Stratechery.

NR: Thank you very much, Ben, it’s really an honor to be on the show. I hear so much about your show, especially when I’m in the Bay Area.

Well, I don’t normally interview venture capitalists on Stratechery, but you are no normal venture capitalist, which you use to your advantage.
I want to ask you about that, but needless to say, that made this an easy exception to make, particularly since I’m a big Formula 1 fan. To that end, I always start my interviews talking about the subject’s background; we may spend a bit more time on yours if that’s okay with you, it’s pretty fascinating.

NR: I understand. With pleasure.

Okay, good. Well, you were born in 1985 in West Germany to a German mother and a Finnish father. Your father Keke was the 1982 Formula 1 world champion. Was there a steering wheel in your crib when you came home from the hospital?

NR: There was actually, yes.

(Laughing) Oh, that’s funny.

NR: On my Facebook page you would see photos of me in a go-kart when I’m like three years old with a helmet on and everything, so yeah, it was an early discovery of that passion.

I’m interested in that because obviously your father was tremendously successful. Was he immediately all in on, “You have to do what I did”, or was there ultimately a bit of humoring you, “You can come along and try this but I’m not sure you could ever measure up to what I did”?

NR: There was a go-kart track near our house and he was going there with his friends even before I was born, and then when I was six, seven years old, we just gave it a go, I enjoyed it, and I looked pretty fast also. So then he was like, “Maybe this can become a father-son hobby”, it just went from there, and then you start doing a race here, a race there. I started winning the races kind of immediately, and even that hooked me more; when you win, of course, it’s amazing, it’s an amazing motivation. So that’s how we just kind of got going and it became an amazing father-son hobby to share. We spent a lot of time with each other, we traveled in a motorhome to the races, so it was really lovely.

There definitely is an inherited bit to driving a car very fast.
On one hand, of course, you started early, and you see the history of Formula 1 drivers, they start early, but you took to it right away. It’s definitely like father, like son in that regard.

NR: Indeed. I think as in every sport — you also see it with golf or tennis — you have to start pretty early now, it just gives you a head start in practicing those skills. And I think, yeah, I guess I inherited some of those genes from my father, because we need to be very good at hand-eye coordination, that’s super important. We need to be also very good at processing things very quickly, because we have things coming at us at 220 miles an hour, our eyes are flickering left and right all the time, just taking in all the inputs that we’re seeing and also feeling, so I think that also probably has to be a strength of ours.

There’s a lot of stuff in your background about your parents really pushing you in terms of academics, learning lots of languages, all that sort of thing. Was that unique to you, or, to your point, it always strikes me that Formula 1 drivers all come across as very intelligent, and there’s such a high degree of information processing happening; is that the norm, generally speaking?

NR: I think you probably need to be a bit street smarter, at least, to be a successful F1 driver than maybe in some other sports, because we depend so much on this high technology car, and if we’re not able to understand the car, set it up properly, be at least street smart about all these things, then it doesn’t matter how talented you are, you’ll never be able to go fast. So probably I would say that in our sport, yeah, that comes a little bit more to the fore than maybe in other sports. But in my case, actually, my parents pushing me at school was the contrary: my mom and my dad would usually come in late at night and say, “Okay, stop now”, because I was always very hard working at school.
Somehow we had a group of friends, everybody wanted to achieve, and I wanted to achieve as well, and I had to catch up because I was missing half the week every other week because I was racing. So my parents were actually more often telling me to stop, because I was trying to make too much of an effort to catch up.

Interesting, because a bit I want to get to here is you’ve had such a widely varying career, even since you finished racing; you finished relatively young, and so that has been a theme for you all along: you were born with the steering wheel in your crib, but you’re interested in more than that.

NR: Yeah, I really always enjoyed the academic side. In fact, if I wasn’t going to make it as a driver, I already had a place reserved for me at Imperial College in London to study aeronautics. That was my plan B of how to get into F1, which would have been as an aerodynamicist.

Right, design the car instead of driving it.

NR: I don’t know if I would have gotten there in the end, but I think I had a good shot, so my plan B was already set.

You’re most famous for your rivalry with Lewis Hamilton, but as I understand it you actually met him quite young; you were teammates in karting as well?

NR: It’s a pretty crazy story, because the McLaren Formula One team wanted to set up a little go-kart team at the time, and the two rising star drivers at the time were Lewis Hamilton from Great Britain and myself down south, and so they actually funded our two go-karting seasons. And so it was just the two of us driving for the McLaren Mercedes go-karting team and we were winning all the races and championships. Unfortunately for me, more often than not, it was Lewis winning and I was second, but there we go.
So it’s incredible because we were best friends at the time, we were 13 years old, and we were on holiday together all the time and dreaming, “Imagine what it would be like in 15 years to be in the F1 team together, winning races and championships?”, and it seemed impossible to achieve that dream, it just seemed so far away. And yet really 15 years later, we’re in the Mercedes F1 team as teammates fighting for races and championships, so it’s a pretty incredible story.

I mean, why did it seem that impossible, though? Your dad was an F1 driver, you’d been racing in karts. What makes F1 feel so far away?

NR: Well, come on, you can imagine if you’re a 13 year old and you’re playing in your regional tennis camp in the middle of nowhere, and you look at the television and you see [Jannik] Sinner and [Carlos] Alcaraz fighting for the Monte-Carlo Masters, that’s going to look extremely impossible and far away.

Right, but wasn’t there a bit of total self-belief that, “I’m going to be there, there’s no question”?

NR: Well, maybe Lewis is a little bit more like that. I’m more sensitive, more insecure, less self-belief, so I never actually really believed of myself that I could get there and be good enough, which has pros and cons, because it also is an incredibly strong motivator. When you don’t have that self-confidence, you just fight so hard to prepare to the best of your abilities all the time. So it has pros and cons, and it was nice to see that someone like me, who did not believe until the very last corner, was still able to actually win in the end, so that was reassuring.

I’m curious about this mindset bit, because this has been an area that you’ve talked a lot about. In 2007, you stopped working with your father as closely as you had been, and went to work with a sports psychologist. At what point was it clear to you that this mental aspect was going to be super important to your success?
NR: That became clear to me in my first year of F1, because it was mentally just an enormous struggle. We had a bad car, so we were either breaking down or finishing well out of the points all the time, and it was a really rough start to my career.

And this is with Williams at the time?

NR: Yeah, with Williams. At times it was almost as if, “Oof, I might not get taken on for the second year”, because it was such a rough start. So mentally, it was incredibly hard because my dream was at stake, my dream to be an F1 driver, to win races, so that was difficult. So I decided, “I’m spending four hours a day on training my body, why am I not training my brain? There must be solutions out there to improve my mental state”. So I sought out help, and I found a psychologist/philosopher, and this was incredible for my life, for my performance; I worked 10 years with him. In the winter, two hours every two days, so it was an incredible effort; the mental training was actually harder than the physical training. It was a combination of learning to meditate, learning to visualize, learning the power of repetition, and also learning to understand myself better. “Why am I scared?”, “Why am I anxious, jealous?”, because you cannot switch those emotions off very easily, or almost not at all. But when you understand why they’re there, you can really adapt your reaction, and when you react in a much better and more appropriate way, it has an enormous snowball effect on your life, so it’s these kinds of learnings that really helped me so much.

Was this pretty novel for an F1 driver, to seek this out and do this sort of training at the time?

NR: Yeah, it’s a bit like in the startup world.
Founders are not really allowed to admit that they’re scared of failing or that they’re working with a brain doctor, as some liked to call it at the time in F1, so it was not something that I could really tell anybody about, because it would look weak in a way, but actually it became my superpower to go through that process. And there’s a little bit more acceptance now, there have been a couple of other drivers talking about it. I think even Lando Norris, the world champion last year, sought help in the middle of last year as he was struggling mentally, clearly, and his championship was slipping away from him, and he went out and sought help and made enormous progress, and that’s what got him the world championship in the end, so that was great to see.

Lando’s always interesting because he seems to wear his insecurities on his sleeve, they just come through so tangibly. Did you feel a lot of sympathy for his struggles and working through that?

NR: Yeah, totally. That’s a state of mind that I can very much relate with, and that’s what people love also, because he’s very authentic, so that’s really appreciated. At the time I wrote Lando a direct message on Instagram and he never replied, but at least I wanted to see if maybe he would read it, because I’ve been through what he’s been through, and one of the obvious things that I would change if I was Lando, and he did change it a little bit, is to not always talk about the glass half empty. Even when he was on pole position he almost only spoke about that one corner where he messed up, rather than, “Hey, that was almost the best lap of my life”. I mean, both are right. “Hey, that was almost the best lap of my entire life”, that would be correct, or, “Ah, damn, I messed that last corner up so bad”, that would also be correct. You know?
And he just says, “I messed that last corner up”, and, “I need to get my stuff together”, and that’s just unnecessary, because it’s repetition, and it really ingrains itself in your mind: if you say, “I always make mistakes”, you’re really going to believe that you always make mistakes. So that’s something that he could quite easily just adapt; even if he keeps on thinking it, don’t say it, and don’t say it out to the whole world, because that’s a whole tsunami that you’re setting off there repeatedly, which is not going to be beneficial to your performance.

You’ve talked about founders and not being able to show weaknesses. Have there been any examples in the time that you’ve been an investor and talking to different companies, where you’ve identified someone and been like, “Look, you’re kind of a Lando Norris here” — maybe those weren’t the words that you used — but, “Let me talk to you about your mindset and how you can shift that”; has that come in handy yet?

NR: I really enjoy that, because founders are really very similar to high performance athletes. They’re extremely competitive, their drive is unbelievable, they’re very courageous also, because you have to be so damn brave to bet the company over and over as you’re innovating and pivoting, so there are great similarities, and that’s why I really enjoy speaking to founders. Just now in the Bay Area, that’s very often the topic that I speak to founders about, and they enjoy that as well, to discuss how they approach those kinds of things mentally, and so that’s really enjoyable. I think I can really add value from my own experience, as I learned a lot for myself also.
The more founders that you talk to, is there a bit where — if you go back to F1, it’s very visible who’s the best, it’s very measurable in a certain sense, but it’s interesting in F1 because sometimes you could have a great driver who doesn’t have a great car, and yet people will still say, “That person is excellent, they’re just limited by their circumstances”. Do you get a similar sense in tech, dealing with founders: being able to separate the circumstances from the person and saying, “There’s something there even if the circumstances aren’t allowing it to show”?

NR: That’s one very, very important ingredient for a successful founder, because actually it will often be many, many years until there’s any validation of what he or she is building, and the best founders have to be extremely resilient and not feel the need to bow to the consensus thinking of the people around them, or of their board, or whatever. They are the visionary and they have to believe with such high conviction in their idea, in what they’re building, and see it through. Because if it was obvious, then everybody would be building it, and most of the time, they’re creating something that’s just not obvious to anybody except for themselves in the early stages, so that’s absolutely a very important trait. However, it has to be in combination with an extreme curiosity and desire to learn and remain open to new ideas and everything, so it’s a balance that has to be found. And that’s pretty rare, to find both attributes within a founder, but with the best ones, usually that’s the case.

Is that tension between insecurity and confidence, uncertainty and curiosity, what you’re zoomed in on, what you’re looking for?

NR: Yeah, totally. Because sometimes they oppose each other.

Right, it’s a paradox.
NR: Someone who’s very self-confident in their idea will be completely arrogant and just so sure that their way is the right way, and then they will not be very curious, so that’s why you don’t find it in every person, and it’s important. I think these two character traits are very, very important.

Continuing with the background, you have a YouTube channel that has 1.46 million subscribers. You haven’t posted on it for a while, but there used to be a whole host of videos, and I went back, scrolled all the way to the bottom, and the original upload was in 2011. A lot of people didn’t know what YouTube was at that point, or barely did; how did you find YouTube and why did you start posting videos?

NR: As an athlete, there was an opportunity that suddenly came up in those years, which was to connect more closely with those out there who were supporting me.

Were you the first one to really do that?

NR: No, not the first, but I joined some of the early movers, and it was amazing to see how you could directly connect with your fanbase. There was also the belief that, of course, Formula 1 is also about marketing, and that can give you an edge over some other drivers. If you build a big following, a big brand for yourself, and you become highly relevant to brands for sponsorship, etc., then a team might choose you over someone who just drives fast. So there’s also that element: to be a successful F1 driver, it usually helps to really try and excel in every single domain that may be relevant, and that domain plays a role, as does working well with the media, because the media is so powerful and that’s a game you also need to try and nail.

I’m curious about the sponsorship angle. F1 obviously has huge amounts of sponsorships, it’s an amazing sport where people will willingly wear gear with a bunch of sponsorships on it — I guess all racing is sort of like this.
But right now, now that tech is huge and F1 is huge, there’s a lot of tech sponsorship of F1 and I’m just sort of curious: I’m in tech, but generally a lot of these companies are enterprise companies, a lot of B2B things, and this whole world of sponsorships and what goes on around that is somewhat foreign to me. I’m just a blogger here in Wisconsin, before in Taiwan; what is in that game and how involved are the drivers? Is that a huge thing? Do you have to go out and actually help win these sponsorships too? Or show up to a bunch of events? I’m just curious, how does that world work?

NR: So a few things here. First of all, because of Netflix, the sponsorship fees that the teams are now requesting are like 2-3x what they were just six, seven years ago.

Is that just because it’s more popular, or because their logos also show up on Netflix?

NR: Because it’s so much more popular and because it’s now become relevant in the US. So the whole tech industry has become interested and you’ll see most companies are now also sponsoring. I mean, look at the Mercedes team, of course, but look at the Audi team also. They have Revolut, the bank that’s come out of the startup ecosystem, ElevenLabs, the voice AI global leader, all of these companies. In fact, because I’m so deeply connected now with Silicon Valley, I am more and more also kind of casually supporting some of these tech companies with sponsorships in F1. I’m just presenting one dev tools company, multi-billion dollar, with an opportunity to sponsor a team this week, I’m just sending that through. Because the sponsorship fees have increased so much, a team like Mercedes has $400 million in annual sponsorship revenue. $400 million! That’s so crazy.
And then you add their share of TV revenues on top, so they get to beyond like $600 million in annual revenue, and because they introduced budget caps in F1, they don’t spend more than $300 million, even including driver salaries and everything. So these F1 teams are so hugely profitable, especially the successful ones, and that’s why the CrowdStrike founder, George Kurtz, just bought 5% of the Mercedes F1 team. And that stake, I mean, the Mercedes F1 team was valued at $6 billion, unbelievable. So he paid $300 million for a 5% share.

Do you feel like you were 10 years too early?

NR: I missed that train, because I think with a bit of effort probably at some point I could have had a nice little share in an F1 team somewhere, but I completely missed the train. It’s incredible how this sport has really become a business case now, and these F1 teams have become investable assets, which never used to be the case, so it’s quite phenomenal. So these sponsors, we drivers spend a lot of time with these companies. They invite all of their customers, I do dinner with them even during a race weekend, or the next morning for breakfast. At the Monaco Grand Prix, I’m at the Hôtel de Paris having breakfast with one of the sponsors, so the drivers do spend a lot of time with those sponsors. And apart from that, the sponsors want visibility, because visibility for their logo is just an amazing credibility stamp, and also they want to bring and host people at the races, so that’s what it’s about and I think it works amazingly well.

I was talking to Mike Cannon-Brookes (Atlassian is now sponsoring Williams) about this idea that you actually have 24, or this year 22, pre-planned races around the world, clear places to meet customers and bring them there. He’s like, “It makes scheduling very easy, it’s very straightforward”.
NR: And for someone like Atlassian, the customers are there anyway in the paddock, because the C-levels of all the big companies are always there. To make deals in the paddock is an incredible opportunity. Even I myself, I do work for Mercedes F1 and they don’t actually pay me in euros, they actually pay me most of the time with tickets for the F1 races, because I too love to host the VC community at the races. It’s such a great way to get to know people, build friendships, and of course it’s very important for me to really build relationships in this ecosystem.

That’s super interesting. Speaking of Mercedes, when Mercedes rejoined F1 and acquired Brawn, you were the first driver alongside Michael Schumacher, who was then replaced by Lewis Hamilton — two pretty impressive names to have as teammates, to say the least. The rivalry between teammates is the stuff of lore in Formula 1, but is it actually underrated how intense that is?

NR: So the norm in F1 is always that a team has a number one driver and a number two driver, and that’s clearly kind of set in stone, and that’s the way you go racing. It’s very unusual that a team has two number one drivers; the most legendary such pairing was Ayrton Senna and Alain Prost at McLaren, and that ended in total disaster after only two years. They were crashing, then one guy quit, and it was just a total mess. It’s okay and not too bad as long as you’re racing for fifth and sixth and seventh place — but as soon as you have the best car and you as teammates are fighting for every single race win, it just becomes so hard, because you’re always going to push the boundaries and go into those gray areas, because there’s a championship at stake and that’s your childhood dream, and that’s what then happened between Lewis and me also.
It kind of just spiraled from one going a little bit too far, then the other one paying it back, and then back again, and then crashing, and it just became very, very tense and difficult to manage. It was a very uncomfortable environment to be in, because not only are you kind of enemies within the team, but also the whole team as such cannot really take a side anymore and they need to stay neutral, so they can’t really support you either anymore, so it’s a complicated dynamic.

Well, you lasted longer than Prost and Senna, because I think you made it three years with Lewis Hamilton. Is that right?

NR: Four, actually. We would have kept going, I had another contract for a few more years, so it was kind of borderline manageable, but only after Toto Wolff made us sign a contract whereby it didn’t matter who was at fault: if ever we crashed together, then we would have to split the repair bill 50-50, and my most expensive one was $360,000. After that, I made sure to leave extra space when Lewis was anywhere close.

(laughing) That’s amazing. Why did you decide to retire? I mean, you finally win, you overcome Lewis, and then you’re done at 31.

NR: I gave it a thousand percent, really, much more than I thought I could give. Total life commitment, insane intensity, the whole thing, mentally, physically, and I achieved my dream in the best possible way: I beat the greatest of all time, I won that Formula 1 World Championship with Mercedes, the legendary car brand. It’s not possible for me to do better. I had a young family at home, a baby at home, so it just felt like the right moment for the most beautiful exit possible for me, one that would carry me for the rest of my life. So it was a bit of a rational decision in that way, and I just felt that was what I wanted to try and do. Of course, it was scary, because when you make such a decision, you don’t really know how it’s going to go and how you’re going to feel.
But now in hindsight, for me personally, it was really the best thing I could do and a great decision, and I’m very lucky to have been able to exit in that way.

And a lot of founders listening, because I know you’re very popular with founders, also your podcast, will be able to relate; it’s kind of the $10 billion or $50 billion exit.

NR: Once you put your life into it and you’ve created an enormous success and changed people’s lives and then you go out on a high; I think that was my dream, to do it that way.

You made a lot of changes before that last year, too. There are all these stories from that last year where you won the title, focusing on things like jet lag or your nutrition and all those bits and pieces. Was that just, “I have to figure out something else to finally get over the hump”?

NR: I tried to perfect every single marginal gain possible, that was really what I was about, and it went from working with a Professor of Sleep at Harvard, who has now created a startup called Timeshifter based on what we were working on together at the time, which is a nice anecdote. And so there, for example, the secret was eliminating jet lag for the whole year, because jet lag is a disaster. As an athlete, the difference between 99% focus and 100% focus is the difference between coming first and second, and jet lag just destroys you, and we’re traveling from continent to continent all the time. I managed to do a whole season with absolutely 0.0 jet lag, and it’s pretty simple. Of course, it takes a lot of discipline, but it’s pretty simple. The secret was a maximum of one-and-a-half hours of time shift per day, and then blackout glasses in the evening, two hours before needing to go to sleep, and then immediately upon waking up, 10,000 lux, a light which you’re staring into, which you also see with Bryan Johnson, he does that, and as long as I followed that, it was incredible.
So I eliminated jet lag from my whole life for that year, and I worked on every detail in that way, really everything. You see my helmet was black, bare carbon, because I realized that the paint was 80 grams and every gram counts in our sport, so I took the paint off my helmet. Just every single detail; I really tried to work on every single marginal gain possible.

This sounds absolutely hellish with family and little kids at home. I can see why, once you accomplished it, you were done.

NR: Yeah, of course. I mean, with a little baby at home it required a great commitment and great support from my wife Vivian at the time, and she did that awesomely, so I’m very, very grateful for that.

Now you’re sitting here as an investor, but we’re a decade on from when you retired. What was the path to get to where you are now and to realize, “This is what I want to do with the rest of my life”?

NR: The seven years after retiring were, first of all, just trying everything and nothing, trying to figure out what could be next in my life. And it’s hard, because as an athlete, you are like a CEO, you’re at the top of the company, and you feel like a king, and then after your sports career you drop to zero. There’s nothing there and you cannot use the skill that you learned for something new, it’s just gone. And it’s very hard to accept that you really start from zero and you don’t even know if you’re going to have success in something new or not. So I tried a lot of things, and now finally I’ve landed on what I really enjoy doing, which is being fully into the venture capital ecosystem, building my own VC firm, Rosberg Ventures, out of Europe, investing a lot in the USA, or even primarily in the USA. So it’s super exciting, and I hit the ground running and was able to win also pretty quickly, which is really motivating.

What made you realize there was this opportunity?
If you sort of zoom out, there’s this idea that there’s money in Europe, there’s opportunity in the U.S., and someone needs to connect those two things together. But was there a specific conversation or something that came along where you thought, “Oh, I could actually do this and be good at it”?

NR: Well, more than money in Europe, it was money in my bank account, which was just sitting there.

That makes sense.

NR: And I was like, “What am I going to do with that?”, because it’s really, really hard to invest capital across generations in a smart way. It’s super, super difficult, as most people will know. The way led to the Yale Endowment — everybody who’s interested in finance has at some point looked at the Yale Endowment, because David Swensen is the gold standard for investing capital across generations. And my light bulb moment was seeing that David Swensen had by then put 20% of the Yale endowment into venture capital. 20%, that’s $8 billion, and it was by far his best performing asset class, with 21% yearly performance, 21% IRR. So that was my light bulb moment, because I said, “Wow, I love startups anyway, but I didn’t know you could make an asset class out of this. Let me try and replicate what David Swensen did”, and I believed that, because I have my unique angles, including F1, with time I could also build the right access, by adding value into the ecosystem and everything, to kind of replicate the approach that David Swensen took to the asset class. And that’s where we are now; we actually made it work.

What are those unique angles? I think that sort of ties this together. You have the F1 background, you’re European.

NR: So the unique angle, of course, is that I have the F1 platform, which is a really unique advantage to be able to meet people from the VC ecosystem, make friendships, get insights.

Appear on this podcast.

NR: (laughing) I’m very, very lucky in that sense.

But that’s something you seem to think about very strategically.
Like, “This is an advantage that I have, I’m going to exploit this and push this”. Was this part of the thesis up front, particularly once you started? NR: Well, first of all, I really enjoy welcoming this incredible community to my sport, it’s amazing for me to be able to showcase my sport in a way. So this is where you benefited from Drive to Survive in the end, because even if you sort of missed that era, now suddenly everyone’s interested in F1. NR: Oh yeah, definitely, I would not be here today if it wasn’t for Drive to Survive, because that’s what has really engaged the whole tech community in my sport. It’s lovely to be able to invite people, bring them up close, show them what my sport is about, see how excited everybody is, and share that with them, so I enjoy that. And it’s a great opportunity to, as I said, build friendships and get insights, but then also to add value. How does that start? First of all, of course, by curating the group that I invite. I invite a founder and then I invite the CIO of a big company, and they then actually have a very valuable exchange. The CIO happens to be looking for the product that the founder is building, the founder obviously needs to go to market, so there’s a great way for me to build connections, and that’s how you start adding value. And beyond that, what we do is also bring U.S. innovation to the German large corporates, we help with that. So Germany is your specific focus in particular in Europe. NR: Because I’m German, and because of my history and everything, I’m very well connected in Germany to all the C-levels in the large corporates. Does this go back to not just growing up in Germany, but also working for Mercedes, being the driver who’s interacting with all of this? NR: Yeah, of course. All these large caps have been sponsors in F1, they’re all in the paddock, so I know them very well, and they’re all in desperate need of transformation now.
Of course, there’s AI, there’s sustainability, there’s all these points, and they’re not exactly the fastest, the German companies. Many of them are real legacy businesses, who are not necessarily known for being the bravest when it comes to adopting new innovation and things like that. And are these generally just regular companies, manufacturing companies, things like that? NR: It goes all the way to the car manufacturers, whether it’s BMW or Mercedes, and we have found a unique positioning where we’re able to support, just selectively, by bringing their attention to a couple of products that are being built in the U.S. startup ecosystem, whether it’s vibe coding or even legal tech, all these different things, and we can bring their attention to some of these innovations and really add value by creating these connections. So this is one of the secret sauces of Rosberg Ventures and of adding value, which works very well, and we’re hosting dinners with some of the C-levels and inviting some of the startups, etc., and it works very well. So you recently announced a new fund, with $200 million in assets under management. How did you grow your network on the asset side? Is that mostly German money that’s coming back to the U.S., completing the cycle there? NR: Mainly German, so it’s German capital, because the Europeans really lack connectivity. I realized that the Europeans lack access to U.S. venture capital, and they know of the importance and the value that’s being created there, but they don’t have the access and they really kind of miss the boat on that, so it’s not too hard to convince them, “Hey, let’s join forces and partner up here, and let’s invest in the best opportunities in the U.S.”.
So that’s been working very well, and my way to raise, or to convince these families, is really going via the principal, who I may know from F1 or whatever, and then I say — I don’t even say too much about what I’m building, because you don’t want to sell straight away — it’s more like, “Hey, can you introduce me to your family office? I would love to just have a conversation with them”, and then there’s the introduction, and I speak to them, I explain what we’re doing, and it’s just an obvious one. We’re kind of indexing the top 10 VC funds in the U.S., and also the top 10 growth stage startups in the U.S., and it’s kind of a no-brainer then; that’s how we’ve been able to raise capital very, very quickly. That makes sense. So everyone sees the opportunity, it’s not clear how to get the capital in, you go in first sort of as a seed investor with your own money, and that starts the virtuous cycle, and then they get access to the German market in the long run. You’re bringing a unique angle and it’s all about deal flow, I think it’s pretty compelling. Why is it so hard to do business in Europe? Has everyone just given up on having a big startup ecosystem there and said, “Let’s just get our money into the U.S.”? NR: So you mean the startup ecosystem in Europe? Yes. NR: There are flashes of real hope at the moment. Vibe coding was pioneered in Europe, vibe coding for prosumers, that’s Lovable out of Sweden, and there are many other examples. I mean, ElevenLabs, the global leader in voice AI, is European, and there are many, many more examples. So there are flashes of real hope. But of course, we lack breadth in the whole ecosystem, and that’s the result of a few things. It’s a bit of a chicken-and-egg.
One, of course, it’s much harder to scale in Europe because of the geographical limitations, it’s so hard to go from Germany to France, different language, different regulatory framework, there’s just huge friction in the go-to-market, so that’s one challenge. And then historically, there’s also been quite a lag in the distributions and liquidity in that asset class in Europe, and so therefore funding is not as ample as in the U.S. So it’s kind of a chicken-and-egg there also. But I think Europe is really working on trying to introduce one regulatory framework for startups across all of Europe, across all countries, so that’s in the plan, so a lot is happening, and let’s see if Europe can develop more and more such promising companies. How have you managed this shift? You started out with sort of a fund of funds model, then you mentioned you’re doing more direct investing. Is that just a natural evolution of getting more access and having more assets under management? Or was that an explicit goal and strategy that you were seeking to pursue? NR: Well, I think the holy grail in venture capital is to invest directly in the startups, and the fund of funds was the natural starting point from an asset class point of view, also from copying and being inspired by what Yale did, and then from there the fund of funds is like a Trojan horse, because it gets you positioned well in the market where you see everything, and then it really helps to identify which are the breakout startups, which are the most promising with the generational founders. So it really helps to create a short list and also to create those connections and build those opportunities to actually invest directly in the startups. We met in San Francisco a couple of months ago, you had just met with Dreamer, I actually met with them the next day, they launched and were immediately acquired by Meta, was that your first exit of a direct investment?
NR: So this is an important point, that I don’t just try and support the companies that I’ve backed. In this case, this was the ex-CTO of Stripe, my friend David Singleton, who built this together with Hugo [Barra], who used to have a senior role at Facebook. Yep, I knew him when he was at Xiaomi, he was at Google, he was at Meta, he’s been all over the place. NR: Everywhere. It’s an incredibly promising founding team, and so I was just trying to support them. And they happened to say that they were the biggest fans in the world of you and Stratechery, so I was like, “Okay, well, that’s easy, I just met Ben yesterday, so I can make the connection there”. Yeah, it’s a pity how that went — I mean, a pity because, also from our point of view, I was so excited about that product, it was vibe coding AI agents. Yep, it’s very compelling. I was looking forward to writing about it, they got snapped up before I could even get there. NR: I was looking forward to really using it at scale, but, yeah, now it’s bought by Meta, and let’s see what Meta does with it, but I’m sure what they build with that will be very promising. As you’ve made this transition and levered up into tech, going from fund of funds to direct investment, it’s a time of great upheaval in tech, given AI. Theoretically, this should mean more startup opportunities. On the other hand, the frontier lab models might just eat everything. How are you thinking about that as an investor? Is it like, “I’m finally getting to the stage where I can get into startups, and now I’m not sure that I want to”? Or are you optimistic? NR: I’m very optimistic. I’m very optimistic because the value creation within this wave of AI is going to be something like we’ve never seen before, and I do think there are a lot of opportunities beyond just the frontier labs to capture market share and create new markets.
But at the same time, you do need to be careful, because we see it in legal tech. Legal tech is a really big new market that’s being created, with two leaders in Harvey and Legora, and now Anthropic has come out with a product which starts to threaten their position a little bit. And it feels like Anthropic has been doing that for almost every sector, so that is a little bit of a concern. It does feel like the safer place at the moment is to be invested in the frontier labs and neo labs. But nevertheless, there’s, for example, ElevenLabs, voice AI, and it’s very defensible what they’re building. They are a frontier lab themselves, by the way, because they build their own models. But still, the research side of voice is probably going to commoditize, as in many cases, and then it’s going to be about the platform, distribution, and products. And there, ElevenLabs is doing an excellent job. So it does look at the moment like they’re going to be able to really win and sustain any potential threat from the frontier labs, so there are many, many examples beyond the frontier labs where there can be success stories, so it’s an exciting time. You mentioned platform and distribution, and this seems to be a theme: you’ve thought about the F1 reputation and background, “I can leverage that, I know these sorts of companies, I can leverage that”, you saw YouTube early on, you were on that, you’re here on this interview. Is that why you still do Sky Sports? Everyone’s favorite commentator, is it that you love to commentate, or does that keep Nico Rosberg front and center? NR: You’re right. I do enjoy staying connected with the sport, but the second reason is that it’s really helpful for me to stay relevant, and it does help me with relevance even in the tech ecosystem.
Because, of course, if some people enjoy watching me and things like that, it’s easier to connect with them in the future, even in the tech ecosystem. So it is twofold. We talked before about how you were born with a steering wheel in your crib, in some respects an advantageous background. But what I see as an overall theme is you pretty consistently identifying and leveraging your advantages, and what we just articulated is a good example. So now you’re in the investing world, totally separate, but figuring out what you have, how to work with it, and building towards that. Is that the overarching theme that you see in your life? What still drives you, is it that bit about being a little bit insecure and wanting to prove yourself and being super competitive? Is it just that you can’t turn that off, and that’s why you’re still here? NR: I’m a super extreme competitor, I need to compete, I want to win, and I have now chosen venture capital as my space to try and win more and more in the future. And I think, yeah, this is what I’m carrying over from the sport. I was very methodical about how to get that win in sports, every detail. I worked on every single detail possible to put all the pieces together to be the best that I could be and to get to that win eventually, and I think that’s something that I’m now replicating in the world of venture capital, trying to optimize everything and put everything together to be able to win more and more. How do you think about that with your kids, just out of curiosity? Your daughter sort of popped into the background on the call here. NR: So with my kids, because I went through such extreme intensity in my sporting career, I am, with them, more focused on well-being rather than pushing them towards some success. But at the same time, you just credited your massive drive and competitiveness with your success.
NR: Exactly, yeah, but well-being and happiness is what I put at a higher level for my kids, and that doesn’t necessarily have to be success. So I’m very eager to try and help them discover their real passions, and we’re getting there. My daughter, I put her in a go-kart two weeks ago, and she drove slower than I could walk, so I could walk faster, and she ended up crying. I hope she doesn’t listen to this one day, but I won’t say which one it is, so we’re fine, because I have two daughters. So it was clear that this is not her passion, and we will never go again. But I can see that her passion is music, guitar, singing, and so there I do nudge her towards more lessons, guitar lessons, drum lessons, without overdoing it, because I see that that’s her natural passion, you know? So that’s the approach I’m taking, but definitely really focused on happiness and well-being. So you mentioned you’re on holiday in Ibiza. I understand you have an ice cream shop there, is that right? NR: Yeah, with my wife, because she’s an interior designer, so she’s super creative, and for some reason both of us love ice cream, and we’ve been coming to Ibiza all our lives, and there’s never been a nice ice cream place. So just as a hobby, we said, “Hey, why don’t we open one ourselves?” — a common friend of ours likes to make ice cream, so we did that, and it’s become a huge business. We now have a chain here in Ibiza, and it’s very successful, the number one ice cream place. So Ben, next time you’re in Ibiza, ice cream is on us. (laughing) Sounds like a deal. You have an interesting life: you learned five languages growing up, you have parents from different countries. Obviously, as part of being an F1 driver, you’re all over the world. You’re doing this connection between Germany in particular and Silicon Valley.
Do you feel like, you talk about eras and riding them, starting and beginning, in terms of F1 — do you feel that, in that era, you’re like the pinnacle of globalized civilization? Do you feel that that is an era that is going to persist past you, or do you feel it cracking and changing? NR: This is related to the sport, or? Just in general, just given you are like an international man of mystery, although maybe not that mysterious, but your superpower is connecting and linking all these disparate pieces together and seeing the ability to build through them. And I’m wondering, is that something, an opportunity, that you think is going to persist, given the way the world is going? NR: Well, I’m very optimistic in that sense, I’m very optimistic. And I see a long road ahead. And I think it’s an amazing time for venture capital now, it’s incredible, a time like we’ve never seen before, the speed of innovation, and maybe my F1 speed also helps me there, it doesn’t scare me at the moment, because I’m used to driving 220 miles an hour. So maybe I’m one of the only people in the world who is not scared by the speed of innovation that we’re seeing in the startup ecosystem, because I’m quite used to speed. You’ve actually focused a lot on e-mobility and electric vehicles. I do have to ask you, how are you feeling about the current F1 regulations, this 50-50 split? There are a lot of complaints that drivers’ skill is being taken away. What’s your view? NR: I saw a message from Toto recently, actually, and he said the F1 driver job might be the very last job that AI is going to endanger. Because it’s very, very hard for AI to replicate what we are doing in that racing car at the edge of physics. But has it been diminished a little bit, if you’re going around a curve or you’re on a straight and your car’s just slowing down on its own?
NR: No, I understand. F1 has tried to stay technologically relevant, so they have gone full hybrid, which is one of the most efficient powertrains in the world, the way they’ve done it, but of course, yeah, it’s a little bit to the detriment of racing on the edge, because now they go through a high-speed corner and then actually downshift on the straight after the corner, which is unheard of in the sport. But to be honest, I’m quite easygoing about that, because I like to really focus on just, “Is the racing exciting?”, “Are there good battles?”, “Is it unpredictable?”, “Are there rivalries?”, and as long as that’s happening, I think all fans will kind of forget about these regulations and will just enjoy the sport once again and be super excited. I think the season is shaping up really nicely. We have this super underdog, this 19-year-old who was really struggling last year, who suddenly has come to life and is showing his real talent and is dominating the championship so far. 19 years old, he’s still like a child, it’s incredible: Kimi Antonelli, an Italian guy, driving for Mercedes. So it’s so exciting to see him in front, and now everybody else trying to catch up to him, I think it’s great. You are associated with Mercedes, they are doing very well, and I am a Kimi fan, my kids got a picture with him last year, so he’s by default who we’re cheering for. But who do you cheer for in F1? NR: I do cheer for Kimi as well now, because he used to be my driver in go-karting, so I’ve known him since he was 12 years old, and he is a generational talent on the level of [Max] Verstappen and Hamilton. His talent is exceptional, and he’s so humble and authentic, a nice guy also, so you can only cheer for him. It’s such a challenge that he’s facing, being a driver of the Mercedes team, leading the championship all of a sudden, an incredible challenge, and I can so relate, because I was in that position and it’s so hard.
It is so hard what he’s getting himself into now for the rest of the year. I’ve been writing to him, and just without telling him what he should do, I told him what I did and what worked for me. One thing, for example, was to really take it race by race, don’t think about the end of the season, don’t think about the championship, just race by race, try and optimize for the next race, go in to win, and that’s it, and then the rest will just see how it goes. Are you surprised it’s been a decade and Lewis [Hamilton] is still in F1? NR: I am quite surprised, because that’s a long time, and we weren’t exactly young at the time. When I stopped 10 years ago, he was already almost 32, and he’s still going now, which is incredible, and huge respect for him to keep going, keep grinding, keep the motivation. He still seems as motivated as ever, driving really well again this year, he’ll definitely win some races this year, I think, so he’s doing really well. And every win that Lewis gets is another notch on your belt, right? NR: (laughing) There is a little bit of an egotistical view to it, which sometimes I do think about. Yes, the better he does, the better my success looks, which is nice, yeah. You won one, you beat Lewis. It’s a championship, if you’re going to win one, that’s about as good as it gets. But, hey, you didn’t stop there, it’s super impressive what you’ve built, very interesting to learn more, and I look forward in 10 years to when Nico Rosberg is the champion VC investor. What is it, the Midas List? Are you gunning for number one? NR: Yeah, sure, the Midas List, that’s going to be a hard one, but those kinds of targets, at some point, yes. Nico Rosberg, great to talk to you. NR: Thank you very much. This Daily Update Interview is also available as a podcast. To receive it in your podcast player, visit Stratechery. The Daily Update is intended for a single recipient, but occasional forwarding is totally fine!
If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly. Thanks for being a supporter, and have a great day!

Mythos, Muse, and the Opportunity Cost of Compute

Listen to this post : In January 2025, Doug O’Laughlin at Fabricated Knowledge declared that o1 and reasoning models marked the end of Aggregation Theory: I believe that there is no practical limit to the improvements of models other than economics, and I think that will be the real constraint in the future. It is reasonable that if we spent infinite dollars on a model, it would be improved. The problem is whether infinite dollars would make sense for a business. That is going to be the key question for 2025. How do the economics of AI make this work? One of the core assumptions about the internet has just been broken. Marginal costs now exist again, meaning that most hyperscalers will become increasingly capital-intensive. The era of Aggregation Theory is behind us, and AI is again making technology expensive. This relation of increased cost from increased consumption is anti-internet era thinking. And this will be the big problem that will be reckoned with this year. Hyperscaler’s business models are mainly underpinned by the marginal cost being zero. So, as long as you set up the infrastructure and fill an internet-scale product with users, you can make money. This era will soon be over, and the future will be much weirder and more compute-intensive. Looking back on the 2010s, we will probably consider them a naive time in the long arc of technology. One of our fundamental assumptions about this period is unraveling. This will be the single most significant change in the technology landscape going forward. Aggregation Theory was, if I may say so myself, the single best way to understand the 2010s, particularly consumer tech. It explained the dynamics undergirding Google and Facebook’s dominance, as well as the App Store and Amazon’s e-commerce business; it was also a useful ( albeit incomplete ) framework to understand an entire host of consumer services like Uber, Airbnb, and Netflix. 
It’s worth pointing out, however, that some of the critical insights undergirding Aggregation Theory are much older, and are embedded in the fundamental nature of tech itself. They are, as O’Laughlin notes, rooted in the concept of zero marginal costs. Marginal costs are how much it costs to make one more unit of a good. Consider a widget-making factory: Land and machines are clearly fixed costs; you have to have both to get started, and you are paying for both whether or not you make one more widget. Raw material, on the other hand, is clearly a marginal cost: if you make one more widget, you need one more widget’s worth of raw material. When it comes to physical goods, electricity and humans are also marginal costs: you need more or fewer of them depending on whether you make more or fewer widgets. Where marginal costs matter is that they provide a price floor. Companies will operate unprofitably because profit and loss is an accounting concept that incorporates depreciation, i.e. your fixed costs. For example, imagine that a company spent $1,000 on a factory to make widgets that have a marginal cost of $10: as long as the price of widgets is >$10 the company will make them even if they don’t earn enough money to cover their depreciation costs (i.e. they operate at a loss) because at least they are still making a marginal profit on each widget (what the company may not do is invest in any more fixed costs, and, eventually, will probably go bankrupt from interest on the debt that likely financed those fixed costs). I explain all of this precisely because it’s almost completely immaterial to tech. First, there generally are no raw material costs, because the outputs are digital. 
Second, because there are no raw material costs, and because the fixed costs are so large, electricity and humans are generally treated as fixed costs, not marginal costs: of course you will run your servers all of the time and at full capacity, because every scrap of additional revenue you can generate is worth it. AI very much fits in this paradigm: the output is digital, and while AI chips use a lot of electricity, the cost is a fraction of the cost of the chips themselves, which is to say that no one with AI chips is making marginal cost calculations in terms of utilizing them. They’re going to be used! Rather, the decision that matters is what they will be used for. Consider Microsoft: last quarter the company missed the Street’s Azure growth expectations not because there wasn’t demand, but because the company decided to use its capacity for its own products. CFO Amy Hood said on the company’s earnings call : I think it’s probably better to think about the Azure guidance that we give as an allocated capacity guide about what we can deliver in Azure revenue. Because as we spend the capital and put GPUs specifically, it applies to CPUs, the GPUs more specifically, we’re really making long-term decisions. And the first thing we’re doing is solving for the increased usage in sales and the accelerating pace of M365 Copilot as well as GitHub Copilot, our first-party apps. Then we make sure we’re investing in the long-term nature of R&D and product innovation. And much of the acceleration that I think you’ve seen from us and products over the past a bit is coming because we are allocating GPUs and capacity to many of the talented AI people we’ve been hiring over the past years. Then, when you end up, is that, you end up with the remainder going towards serving the Azure capacity that continues to grow in terms of demand. 
And a way to think about it, because I think, I get asked this question sometimes, is if I had taken the GPUs that just came online in Q1 and Q2 in terms of GPUs and allocated them all to Azure, the KPI would have been over 40. And I think the most important thing to realize is that this is about investing in all the layers of the stack that benefit customers. And I think that’s hopefully helpful in terms of thinking about capital growth, it shows in every piece, it shows in revenue growth across the business and shows as OpEx growth as we invest in our people. The cost that Microsoft is contending with here is not marginal cost, but rather opportunity cost: compute spent in one area cannot be used in another area; in the case of these earnings, Microsoft was admitting that they could have made their Azure number if they wanted to, but chose to prioritize their own workloads because, as CEO Satya Nadella noted later in the call, those have higher gross margin profiles and higher lifetime value. It’s opportunity costs, not marginal costs, that are the challenge facing hyperscalers. How much compute should go to customers, and which ones? How much should be reserved for internal workloads? Microsoft needs to balance Azure — both for its enterprise customers and OpenAI — and its software business; Amazon needs to balance its e-commerce business, AWS, and its strategic investments in both Anthropic and OpenAI. Google has to balance GCP, its own strategic investment in Anthropic, and its consumer businesses. Last week Anthropic announced Mythos, its most advanced model. And, in somewhat typical Anthropic fashion, it did so by focusing on its dangers; from the introductory post for Project Glasswing , the company’s initiative for leveraging Mythos to address security: We formed Project Glasswing because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity.
Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities. Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout—for economies, public safety, and national security—could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes. In an Update last week I analogized Anthropic’s “disaster-porn-as-marketing-tool” approach to The Boy Who Cried Wolf ; what’s important about that analogy is not just that the boy raised false alarms, but also that, in the end, the wolf did come. To that end, I wrote two weeks ago about the myriad of security issues that underpin all software, and my optimism that AI would solve these issues in the long run, even if it made things much worse in the short run. In other words, it’s actually not important whether or not Mythos represents a major security threat: if this model doesn’t, a future model will; to that end, I do support leveraging Mythos to proactively find and fix bugs before bad actors can find and exploit them. At the same time, it’s also worth noting that there are other reasons for Anthropic to not make Mythos widely available, limiting access to a finite number of companies with a high capacity and willingness to pay. The first are those opportunity costs: Anthropic is already short on compute serving its current models; X was overrun with complaints and debates this weekend about Anthropic allegedly dumbing down Claude over the last month or so . 
Making Mythos more widely available — particularly to subscription plans that don’t pay per usage — would make the situation much worse. In other words, Anthropic isn’t facing a marginal cost problem, but an opportunity cost problem: where to allocate its compute. Of course this could become a margin problem: I suspect that Anthropic is going to overcome its conservatism in terms of compute by acquiring more compute from hyperscalers and neoclouds, and paying dearly for the privilege. The key to handling those costs will be to charge more for Claude going forward; that, by extension, means maintaining pricing power, which leads to a second benefit of not releasing Mythos broadly. Anthropic certainly faces competition from OpenAI; for both frontier labs, however, the real competition in the long run are open source models. Right now those primarily come from China, and a key ingredient in fast-following frontier models is distillation; from Anthropic’s blog : We have identified industrial-scale campaigns by three AI laboratories—DeepSeek, Moonshot, and MiniMax—to illicitly extract Claude’s capabilities to improve their own models. These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions. These labs used a technique called “distillation,” which involves training a less capable model on the outputs of a stronger one. Distillation is a widely used and legitimate training method. For example, frontier AI labs routinely distill their own models to create smaller, cheaper versions for their customers. But distillation can also be used for illicit purposes: competitors can use it to acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently. I absolutely believe this is a real problem, and wrote as much when DeepSeek R1 was released last year . 
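To make the marginal-cost versus opportunity-cost distinction concrete, the allocation logic running through the Microsoft and Anthropic examples above can be sketched as a tiny greedy exercise: once the capital is spent, compute has effectively zero marginal cost, so every unit gets used, and the only question is which workload gets it. The workload names and margin figures below are hypothetical illustrations, not actual Microsoft or Anthropic numbers; this is a sketch of the reasoning, not anyone's real capacity-planning system.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    gross_margin: float  # value captured per GPU-unit allocated (hypothetical)
    demand: int          # GPU-units the workload could absorb (hypothetical)

def allocate(capacity: int, workloads: list[Workload]) -> dict[str, int]:
    """Greedily fill the highest-margin workloads first; whatever remains
    goes to lower-margin customers. The marginal cost of running a GPU is
    treated as zero, so capacity is always fully used if demand allows."""
    plan: dict[str, int] = {}
    for w in sorted(workloads, key=lambda w: w.gross_margin, reverse=True):
        granted = min(w.demand, capacity)
        plan[w.name] = granted
        capacity -= granted
    return plan

demand = [
    Workload("first-party Copilots", gross_margin=0.8, demand=60),
    Workload("internal R&D", gross_margin=0.7, demand=30),
    Workload("Azure customers", gross_margin=0.5, demand=100),
]

# With 100 units of capacity, the cloud customers get only what remains
# after the higher-margin internal workloads are served: 100 - 60 - 30 = 10.
print(allocate(100, demand))
```

The point of the sketch is that the "cost" of serving the lowest-margin customer is never a raw-materials bill; it is the higher-margin workload the same GPUs could have served instead, which is exactly the trade-off Hood described and the one Anthropic faces in deciding how broadly to release Mythos.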
I also think it’s in the interest of everyone other than the frontier labs to pretend that it isn’t; open source models are not subject to the frontier labs’ markup or compute constraints, which is exactly why it benefits most companies to have them available, whether or not they are distilled. Of course that doesn’t mean they are free to run: you still need to provide the compute. Notice, however, how that makes stopping distillation even more of a priority for the frontier labs: first, they want to protect their margins. Second, however, their biggest cost is opportunity cost: the customers they can’t serve because they don’t have enough compute. The extent to which they can make compute less useful for their potential customers — by stopping open source models from distilling their models — is the extent to which they can acquire that compute for themselves at more favorable rates. Mythos wasn’t the only new model announced last week: Meta released the first fruit of their new frontier lab as well. From the company’s blog post: Today, we’re excited to introduce Muse Spark, the first in the Muse family of models developed by Meta Superintelligence Labs. Muse Spark is a natively multimodal reasoning model with support for tool-use, visual chain of thought, and multi-agent orchestration. Muse Spark is the first step on our scaling ladder and the first product of a ground-up overhaul of our AI efforts. To support further scaling, we are making strategic investments across the entire stack — from research and model training to infrastructure, including the Hyperion data center… Muse Spark offers competitive performance in multimodal perception, reasoning, health, and agentic tasks. We continue to invest in areas with current performance gaps, such as long-horizon agentic systems and coding workflows. Muse Spark isn’t state of the art, but it’s in the game, and overall a positive first impression from Meta Superintelligence Labs.
What is most notable to me, however, is the extent to which the last nine months of AI have made clear that CEO Mark Zuckerberg made the right call to embark on that “ground-up overhaul of [Meta’s] AI efforts”. The trigger for O’Laughlin’s post that I opened this Article with was reasoning, where models using more tokens led to better answers; since then agents have exponentially increased token demand, as they can use LLMs continuously without a human in the loop. This is a huge driver of the soaring demand for Claude, as well as OpenAI’s Codex. Moreover, this use case is so potentially profitable that not only is Anthropic’s revenue sky-rocketing, but OpenAI is pivoting its focus to enterprise. Indeed, you can make the argument that one of OpenAI’s biggest challenges is the fact it has such a popular consumer product in ChatGPT. I, with my Aggregation Theory lens, have long maintained that that userbase was a big advantage for OpenAI, but that assumed that the company could effectively monetize it, which is why I have argued so vociferously for an advertising model. OpenAI has big projections for exactly that, but until that materializes, that big consumer base is a big opportunity cost in terms of OpenAI’s focus and compute. The company has, to its credit and in the face of widespread skepticism, made significant investments in more compute, but the temptation to allocate more and more compute to agentic use cases that enterprises will pay for, even at the expense of the consumer business, will be very large. This puts Meta in a unique position relative to everyone else in the industry: unlike any of the hyperscalers or the frontier labs, Meta does not have an enterprise or cloud business to worry about. That means that serving the consumer market comes with no opportunity costs. Of course those opportunity costs would be much smaller anyway, given that Meta already has an at-scale advertising business to monetize usage.
In other words, Meta may actually face less competition in winning the consumer space than it might have seemed a few months ago, simply because that is their primary focus — and because they have their own model, which means they don’t need to worry about not having access to the frontier labs (much of this analysis applies to Google, of course). This, by the same token, is why Meta should open source Muse, just like they did Llama. The entities that will be most hurt by widespread availability of a frontier model are other frontier labs, who will see their pricing power reduced and face increased competition for compute. This will make it even harder for them to bear the opportunity cost of pursuing the consumer market, leaving it for Meta. So is “the era of Aggregation Theory…behind us”? On one hand, the insight that the way to create and maintain value will come from owning the customer is almost certainly going to continue to be the case. On the consumer side, owning customers leads to advertising, which in turn provides the revenue to serve those customers. On the enterprise side — which, I would note, has never been an arena where Aggregation Theory was meant to be applied — I think it’s likely that both Anthropic and OpenAI continue to move up the stack and deliver features that compete with software providers directly (an approach that is also in line with not making leading edge models publicly available). On the other hand, O’Laughlin’s observation that we are and will continue to be compute constrained is an important one: companies will not be able to assume they can serve everyone, because serving one set of customers imposes the opportunity cost of not serving another.
This won’t, at least in theory, last forever: at some point AI will be “good enough” for enough use cases that there will be enough compute capacity to take advantage of the fact that there really aren’t meaningful marginal costs entailed in serving AI; that theoretical future, however, feels further away than ever. OpenAI is betting that this compute constraint — and the deals they have made to overcome it — will matter more than Anthropic’s current momentum with end users. From Bloomberg: OpenAI told investors this week that its early push to dramatically increase computing resources gives it a key advantage over Anthropic PBC at a moment when its longtime rival is gaining ground and mulling a potential public offering. The ChatGPT maker said it has outpaced Anthropic by “rapidly and consistently” adding computing capacity to support wider adoption of its software, according to a note the company sent to some of its investors after Anthropic announced a more powerful AI model called Mythos. The ambitious infrastructure build-out, criticized by some as too costly, has enabled OpenAI to better keep pace with rising demand for AI products, the memo states. I’m less certain that this will be dispositive. When it comes to AI, distribution is still free and transaction costs are still zero — the two preconditions for Aggregators — which means that the winners should be those with the most compelling products. Those products will win the most users, providing the money necessary to source the compute to serve them; consider Anthropic’s deal to secure a meaningful portion of TPU supply, which, given the capacity constraints at TSMC, is ultimately an example of taking supply from Google. I suspect that Anthropic can take more, including already built hyperscaler and neocloud capacity. Yes, that compute will be more expensive, but if demand is high enough the necessary cash flow will be there.
In other words, my bet is that owning demand will ultimately trump owning supply, suggesting that the underlying principles of Aggregation Theory live on. To put it another way, I think that OpenAI will need to win with better products, not just more compute; then again, if more compute is the key to better products, then does supply matter most? Regardless, they’ll certainly be focused on delivering both to the enterprise customers who are driving Anthropic’s astonishing growth. The real cost may be the consumer market they currently dominate, given that Meta has nothing to lose and everything to gain.

You need land for the factory
You need machines for the factory
You need electricity to operate the machines
You need humans to operate the machines
You need the raw material for the widgets

André Arko 3 days ago

Software developers have become their own joke

Creating software is complicated. It’s hard to figure out exactly what you need to build without a lot of trial and error. It almost always requires both exploring possible options and refining something until it works really well. But those things aren’t the same! Your research prototype is not a good product that people will happily pay for. Back in the olden days, when software literally came from BigCo R&D departments, we managed to invent Unix, and the mouse, and GUIs, and Ethernet, and TCP/IP, and a ton of other stuff we all use constantly today. Those research divisions didn’t ship viable consumer products, though. Doug Engelbart demoed a mouse-driven GUI in 1968, but you couldn’t buy a home computer with a mouse-driven GUI until 1979, and they didn’t become commercially popular until the Macintosh in 1984. Even years or decades of research weren’t enough, and years (or decades!) of development work also needed to be done before the results were ready for people to use. Early literature about creating software, written by Fred Brooks and his peers, seems to contain the internalized view that both R and D are required. That’s not surprising, since R&D departments created most software back then, but we seem to have lost track of that connection. Even though our jobs are descended from those R&D labs of yore, we somehow lost the industry job of “software researcher”, and only “software developer” remains. Instead, research happens in academia, where an argument and some pseudocode is all you need to publish a paper. In that world, development is effectively non-existent. (I admit the division isn’t perfectly clear-cut. Sometimes academics will start companies around their research that create a product, or more likely get acquired to add a feature to a product. And sometimes Linus Torvalds will just build a new operating system, without doing any academic research on it, and it will get so popular everyone uses it.
The point is that industry and academia have each publicly claimed one half of R&D while disowning the other.) The broader separation of research and development into academia and industry is really unfortunate, because good software needs both research and development as inputs. If you don’t do any research, you can’t identify which parts will be hard (or impossible) until after it’s too late. You also won’t have a good idea of what parts are important until after you’ve put in most or all of the work to create the parts that don’t matter. If you don’t do development, you won’t ever have something robust enough that other people can use it successfully. Meanwhile, on the other side, it feels like developers work hard to convince themselves there are no research aspects involved in their jobs. We call anything research-ish by another name, like “design”, “user experience”, “prototyping”, “de-risking”, “a spike”, and a lot of other funny euphemisms that avoid referring to the work as research. It seems like we’re trying to convince ourselves that we don’t do Research any more, because we are just Developers. This cultural lack of clarity around research in software development spaces really hit home for me this week, as I read yet another treatise on working with LLM-driven agents for development. The two most popular takes that I have seen are “these tools are a fundamental shift in the nature of software development” and “these tools change nothing about building software at all”. Then the two sides start screaming at each other about how the other side is delusional and time will prove them completely wrong, and I lose interest. If we instead start from the premise that all software work requires research (where the problem space must be explored) and development (where solutions must be implemented and refined), there’s something hiding in the sometimes messy overlap between those two ideas that I’m not seeing come up in any discussions.
No one can take the output of software research and treat it like it’s the output of software development. Not Bell Labs, not Xerox PARC, not Microsoft middle managers, and not “solo founders managing a team of AI agents” today. Unfortunately, seeing a prototype and becoming convinced it’s complete is not a new problem. It’s been the bane of software development possibly since the very beginning, when (apocryphally) a manager would review a mockup and conclude the project was now complete and could be shipped to customers immediately. Today, instead of telling that story as a joke, software developers have somehow turned themselves into the boss from the joke, shouting that it’s time to ship the research prototype because it “looks finished”. How did we do this to ourselves? It seems like, back when we always had to do all the work ourselves, it was harder for software developers to be confused this way. If a developer knows they skipped every validation and edge case, it’s much easier to realize it’s not finished. If an LLM agent says “here’s a comprehensive implementation”, without mentioning all the validations and edge cases it skipped, many (and possibly most) developers will not notice the parts that are missing. This phenomenon is bad for a lot of reasons, including one reason you have probably already thought of: we’re going to get a lot more software claiming to be “comprehensive” and “fully implemented” when it’s really a partially finished prototype that’s full of holes. In a world full of research prototypes being pitched as completed development work, life is about to get worse for everyone who uses software. The docs are even more wildly wrong than they were before, customer support is telling you that your problem is solved by a feature that doesn’t exist, and company leadership is so excited they are planning to fire as many humans as possible so they can have more of it. I don’t want worse software!
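The gap between a “comprehensive implementation” and finished development work is easy to see in miniature. The following is a hypothetical example of my own, not output from any particular tool: both functions work on the demo input, and only the second has had the unglamorous development work done.

```python
def parse_price(text):
    """What a prototype (or an eager agent) ships: handles the
    happy path and looks finished."""
    return float(text.replace("$", "").replace(",", ""))

def parse_price_checked(text):
    """The same function after development: the validations and
    edge cases the prototype silently skipped."""
    if not isinstance(text, str) or not text.strip():
        raise ValueError("price must be a non-empty string")
    # float() still raises ValueError on junk like "N/A" or "free"
    value = float(text.strip().lstrip("$").replace(",", ""))
    if value < 0:
        raise ValueError("price cannot be negative")
    return value

# Both agree on the demo input...
print(parse_price("$1,234.50"), parse_price_checked("$1,234.50"))
# ...but only the checked version fails loudly on bad input:
# parse_price(None) blows up with an AttributeError deep inside,
# while parse_price_checked("") raises a clear ValueError.
```

The prototype is not wrong, exactly; it is research output. The problem the essay describes is a reviewer (human or otherwise) reading the first function and declaring the second one already done.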
The software we already have is mostly terrible. Not only much worse software, but also much more of it, is pretty much my worst case scenario. What I actually want is better software, even if that means less of it. Unfortunately, instead of making better software, software developers have decided to become the butt of their own joke, shipping software that doesn’t work, with a footnote that says they know it doesn’t work but they are still shipping it. I don’t see any way to stop it, but I hate it anyway.

iDiallo 5 days ago

Your friends are hiding their best ideas from you

Back in college, the final project in our JavaScript class was to build a website. We were a group of four, and we built the best website in class. It was for a restaurant called the Coral Reef. We found pictures online, created a menu, and settled on a solid theme. I was taking a digital art class in parallel, so I used my Photoshop skills to place our logo inside pictures of our fake restaurant. All of a sudden, something clicked. We were admiring our website on a CRT monitor when my classmate pulled me aside. She had an idea. A business idea. An idea so great that she couldn’t share it with the rest of the team. She whispered, covering her mouth with one hand so a lip reader couldn’t steal this fantastic idea: “what if we build websites for people?” This was the 2000s, of course it was a fantastic idea. The perfect time to spin up an online business after a market crash. But what she didn’t know was that, while I was in class in the mornings, my afternoons were spent scouring Craigslist and building crappy websites for a hundred to two hundred dollars a piece. I wasn’t going to share my measly spoils. If anything, this was the perfect time to build that kind of service. That’s a great idea, I said. There is something satisfying about having an idea validated. A sort of satisfaction we get from the acknowledgment. We are smart, and our ideas are good. Whenever someone learned that I was a developer, they felt this urge to share their “someday” idea. It’s an app, a website, or some technology I couldn’t even make sense of. I used to try to dissect these ideas, get to the nitty-gritty details, scrutinize them. But that always ended in hostility. “Yeah, you don’t get it. You probably don’t have enough experience” was a common response when I didn’t give a resounding yes. I don’t get those questions anymore, at least not framed in the same way. I have worked for decades in the field, and I even have a few failed start-ups under my belt. I’m ready to hear your ideas.
But that job has been taken, not by another eager developer with even more experience, or maybe a successful start-up on their résumé. No, not a person. AI took this job. Somewhere behind a chatbot interface, an AI is telling one of your friends that their idea is brilliant. Another AI is telling them to write out the full details in a prompt and it will build the app in a single stroke. That friend probably shared a localhost:3000 link with you, or a Lovable app, last year. That same friend was satisfied with the demo they saw then and has most likely moved on. In the days when I stood as a judge, validating an idea was rarely what sparked a business. The satisfaction was in the telling. And today, a prompt is rarely a spark either. In fact, the prompt is not enough. My friends share a link to their ChatGPT conversation as proof that their idea is brilliant. I can’t deny it, the robot has already spoken. I’m not the authority on good or bad ideas. I’ve called ideas stupid that went on to make millions of dollars. (A ChatGPT wrapper for SMS, for instance.) A decade ago, I was in Y Combinator’s Startup School. In my batch, there were two co-founders: one was the developer, and the other was the idea guy. In every meeting, the idea guy would come up with a brand new idea that had nothing to do with their start-up. The instructor tried to steer him toward being the salesman, but he wouldn’t budge. “My talent is in coming up with ideas,” he said. We love having great ideas. We’re just not interested in starting a business, because that’s what it actually takes. A friend will joke, “here’s an idea,” then proceed to tell me their idea. “If you ever build it, send me my share.” They are not expecting me to build it. They are happy to have shared a great idea. As for my classmate, she never spoke of the business again. But over the years, she must have sent me at least a dozen clients. It was a great idea after all.

Stratechery 5 days ago

2026.15: Myth and Mythos

Welcome back to This Week in Stratechery! As a reminder, every Friday we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week. This week’s Sharp Tech video is on why OpenAI’s enterprise pivot makes sense. Anthropic Anthropic Anthropic. In the current AI era, it feels like a new company is crowned the winner every few months, and right now Anthropic is wearing the crown. However, a point I make on Sharp Tech is that Anthropic’s exponential growth includes the part of the curve everyone misses: the company has been on this once-barely-visible trajectory for nearly two years now. Now the company has what is undoubtedly the most powerful model in the world, so powerful, in fact, that Anthropic says it can’t release it publicly. There’s reason for cynicism, given Anthropic’s history, but the part of “The Boy Who Cried Wolf” myth everyone forgets is that the wolf did come in the end. — Ben Thompson The New York Times and Another Paradigm Shift. If you’re interested in media, this week’s Stratechery Interview with New York Times CEO Meredith Kopit Levien is a fantastic listen. The Times has nailed the internet era better than any media company in the world, and they’ve succeeded by making deliberate choices — a paywall before it was cool, a clear point of view, integrated business and editorial strategies — to differentiate themselves from a sea of commoditized content in an era of aggregators and content abundance. That playbook worked wonders for the Times in the previous generation of the internet, and I enjoyed hearing Levien’s thoughts on updating it for an era dominated by AI and video. — Andrew Sharp The New Yorker Explains Sam Altman.
This week’s Sharp Text hit a few different beats, including thoughts on the Strait of Hormuz and a fun bit of E-ZPass history, but I opened with a take on the sprawling Sam Altman profile from the New Yorker. The 16,000-word profile is certainly an exhaustive recital of questions that have been asked about Altman for more than a decade, but better topics went unexplored. It’s frustrating — and representative of too much tech coverage — that so much effort went into what’s effectively a well-written Wikipedia entry, anchored by a predetermined conclusion, and ignoring questions more dramatic than whether Sam Altman is a good person. — AS OpenAI Buys TBPN, Tech and the Token Tsunami — OpenAI’s purchase of TBPN makes no sense, which may be par for the course for OpenAI. Then, AI is breaking stuff, starting with tech services. Anthropic’s New TPU Deal, Anthropic’s Computing Crunch, The Anthropic-Google Alliance — Anthropic needs compute, and Google has the most: it’s a natural partnership, particularly for Google. Anthropic’s New Model, The Mythos Wolf, Glasswing and Alignment — Anthropic says its new model is too dangerous to release; there are reasons to be skeptical, but to the extent Anthropic is right, that raises even deeper concerns. An Interview with New York Times CEO Meredith Kopit Levien About Betting on Humans With Expertise — An interview with New York Times Company CEO Meredith Kopit Levien about human expertise as a moat against Aggregators and AI. Hormuz, Rushmore and a Sam Altman Story That Missed the Story — On the New Yorker’s profile of Sam Altman, the future in the Middle East, and the power of E-ZPass history.
OpenAI Buys TBPN
Mythos, Altman, New York Times
VLIW: The “Impossible” Computer
Gas Turbine Blades and their Heat-Defying Single-Crystal Superalloys
A Ceasefire and Reports of PRC Pressure; Another Politburo Investigation; Mythos, DeepSeek, and a Token Crunch
An Exclusive Hornets-Suns Report and Mail on LeBron, Wemby, the Pistons, ABS in the NBA, Bulls Fandom for Kids
Malone to Carolina and Karnisovas Out in Chicago, Cooper and Kon Battling to the Finish, A Jokic-Wemby Classic in Denver
Mythos and Project Glasswing, The Year of Anthropic Continues Apace, Q&A on the NYT, Altman, De-globalization


Premium: The Hater's Guide to OpenAI

Soundtrack: The Dillinger Escape Plan — Setting Fire To Sleeping Giants In what The New Yorker’s Andrew Marantz and Ronan Farrow called a “tense call” after his brief ouster from OpenAI in 2023, Sam Altman seemed unable to reckon with a “pattern of deception” across his time at the company: No, he cannot. Sam Altman is a deeply-untrustworthy individual, and, like OpenAI, lives on the fringes of truth, using a compliant media to launder statements that are, for legal reasons, difficult to call “lies” but certainly resemble them. For example, back in November 2025, Altman told venture capitalist Brad Gerstner that OpenAI was doing “well more” than $13 billion in annual revenue when the company would do — and this is assuming you believe CNBC’s source — $13.1 billion for the entire year. I guarantee you that, if pressed, Altman would say that OpenAI was doing “well more than” $13 billion of annualized revenue at the time, which was likely true based on OpenAI’s stylized math, which works out as so (per The Information): This means that, per CNBC’s reporting, OpenAI barely scratched $10 billion in revenue in 2025, and that every single story about OpenAI’s revenue other than my own reporting (which came directly from Azure) massively overinflates its sales. The Information’s piece about OpenAI hitting $4.3 billion in revenue in the first half of 2025 should really say “$3.44 billion,” but even then, my own reporting suggests that OpenAI likely made a mere $2.27 billion in the first half of last year, meaning that even that $10 billion number is questionable. It’s also genuinely insane to me that more people aren’t concerned about OpenAI, not as a creator of software, but as a business entity continually misleading its partners, the media, and the general public.
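The gap between “annualized” and actual revenue is pure arithmetic: a run-rate takes the latest month and multiplies it by twelve, which overstates a fast-growing company’s calendar-year total. The figures below are invented for illustration only, not OpenAI’s actual numbers; they just show how a roughly $10 billion year can coexist with a December run-rate “well more than” $13 billion.

```python
def annualized_run_rate(latest_month):
    """'Annualized revenue' as commonly quoted: latest month x 12."""
    return latest_month * 12

# Hypothetical company growing linearly from $0.5B/month in January
# to $1.2B/month in December (illustrative numbers only):
monthly = [0.5 + 0.7 * i / 11 for i in range(12)]

actual_year = sum(monthly)                       # what was really booked
december_run_rate = annualized_run_rate(monthly[-1])

print(round(actual_year, 2))        # ~10.2: the calendar-year total
print(round(december_run_rate, 2))  # ~14.4: the quotable "annualized" figure
```

Both numbers are “true”; the faster the growth, the wider the spread between them, which is exactly what makes the run-rate framing so convenient.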
To put it far more bluntly, the media has failed to hold OpenAI accountable, enabling a company built on deception, rationalizing and normalizing ridiculous and impossible ideas just because Sam Altman said them. Let me give you a very obvious example. About a month ago, per CNBC, “...OpenAI reset spending expectations, telling investors its compute target was around $600 billion by 2030.” This is, on its face, a completely fucking insane thing to say, even if OpenAI were a profitable company. Microsoft, a company with hundreds of billions of dollars of annual revenue, has about $42 billion in quarterly operating expenses. OpenAI cannot afford to pay these agreements. At all. Hell, I don’t think any company can! And instead of saying that, or acknowledging the problem, CNBC simply repeats the statement of “$600 billion in compute spend,” laundering Altman and OpenAI’s reputation as it did (with many of the same writers and TV hosts) with Sam Bankman-Fried. CNBC claimed mere months before the collapse of FTX that the exchange had grown revenue by 1,000% “during the crypto craze,” with its chief executive having “...survived the market wreckage and still expanded his empire.” You might say “how could we possibly know?” and the answer is “read CNBC’s own reporting that said that Bankman-Fried intentionally kept FTX in the Bahamas,” which said that Bankman-Fried had intentionally reduced his stake in Canadian finance firm Voyager (which eventually collapsed on similar terms to FTX) to avoid regulatory disclosures around (Bankman-Fried’s investment vehicle) Alameda’s finances. This piece was written by a reporter who has helped launder the reputation of Stargate Abilene, claiming it was “online” despite only a fraction of its capacity actually existing. The same goes for OpenAI’s $300 billion deal with Oracle that OpenAI cannot afford and Oracle does not have the capacity to serve.
These deals do not make any logical sense, the money does not exist, and the utter ridiculousness of reporting them as objective truths rather than ludicrous overpromises allowed Oracle’s stock to pump and OpenAI to continue pretending it could actually ever have hundreds of billions of dollars to spend. OpenAI now claims it makes $2 billion a month, but even then I have serious questions about how much of that is real money, considering the proliferation of discounted subscriptions (such as the ones that pop up when you cancel and offer you three months of discounted access to ChatGPT Plus) and free compute deals, such as the $2500 given to Ramp customers, millions of tokens in exchange for sharing your data, the $100,000 token grants given to AI policy researchers, and the OpenAI For Startups program that appears to offer thousands (or even tens of thousands) of dollars of tokens to startups. While I don’t have proof, I would bet that OpenAI likely includes these free tokens in its revenues and then counts them as part of its billions of dollars of sales and market spend. I also think that revenue growth is a little too convenient, accelerating only to match Anthropic, which recently “hit” $30 billion in annualized revenue under suspicious circumstances. I can only imagine OpenAI will soon announce that it’s actually hit $35 billion in annualized revenue, or perhaps $40 billion, and if that happens, you know that OpenAI is just making shit up. Regardless, even if OpenAI is actually making $2 billion a month in revenue, it’s likely losing anywhere from $4 billion to $10 billion to make that revenue.
Per my own reporting from last year, OpenAI spent $8.67 billion on inference to make $4.329 billion in revenue, and that’s not including training costs that I was unable to dig up — and those numbers were before OpenAI spent tens of millions of dollars in inference costs propping up its doomed Sora video generation product, or launched its Codex coding environment. In simpler terms, OpenAI’s costs have likely accelerated dramatically with its supposed revenue growth. And all of this is happening before OpenAI has to spend the majority of its capital. Oracle has, per my sources in Abilene, only managed to successfully build and generate revenue from two buildings out of the eight that are meant to be done by the end of the year, which means that OpenAI is only paying a small fraction of the final costs of one Stargate data center. Its $138 billion deal with Amazon Web Services is only in its early stages, and as I explained a few months ago in the Hater’s Guide To Microsoft, Redmond’s Remaining Performance Obligations that it expects to make revenue from in the next 12 months have remained flat for multiple quarters, meaning that OpenAI’s supposed purchase of “an incremental $250 billion in Azure compute” has yet to commence. In practice, this means that OpenAI’s expenses are likely to massively increase in the coming months. And while the “$122 billion” funding round it raised — with $35 billion of it contingent on either AGI or going public (Amazon), and $60 billion of it paid in tranches by SoftBank and NVIDIA — may seem like a lot, keep in mind that OpenAI had received $22.5 billion from SoftBank on December 31, 2025, a little under four months ago. This suggests that either OpenAI is running out of capital, or has significant up-front commitments it needs to fulfil, requiring massive amounts of cash to be sent to Amazon, Microsoft, CoreWeave (which it pays on net 360 terms) and Oracle.
And if I’m honest, I think the entire goal of the funding round was to plug OpenAI’s leaky finances long enough to take it public, against the advice of CFO Sarah Friar. One under-discussed part of Farrow and Marantz’s piece was a quote about OpenAI’s overall finances, emphasis mine: As I wrote up earlier in the week, OpenAI CFO Sarah Friar does not believe, per The Information, that OpenAI is ready to go public, and is concerned about both revenue growth slowing and OpenAI’s ability to pay its bills: To make matters worse, Friar also no longer reports to Altman — and god is it strange that the CFO doesn’t report to the CEO! — and it’s actually unclear who it is she reports to at all, as Fiji Simo, to whom she currently reports, has taken an indeterminately-long medical leave of absence. Friar has also, per The Information, been left out of conversations around financial planning for data center capacity. These are the big, flashing warning signs of a company with serious financial and accounting issues, run by Sam Altman, a CEO with a vastly-documented pattern of lies and deceit. Altman is sidelining his CFO, rushing the company to go public so that his investors can cash out and the larger con of OpenAI can be dumped onto public investors. And beneath the surface, the raw economics of OpenAI do not make sense. You’ll notice I haven’t talked much about OpenAI’s products yet, and that’s because I do not believe they can exist without venture capital funding both them and the customers that buy them. These products only have market share as long as other parties continue to build capacity or throw money into the furnace. To explain: While OpenAI is not systemically necessary, the continued enabling and normalization of its egregious and impossible promises has created an existential threat to multiple parties named above.
Its continued existence requires more money than anybody has ever raised for a company — private or public — and in the event it’s allowed to go public, I believe that both retail investors and large equity investors like SoftBank will be left holding the bag.

OpenAI has a fundamental lack of focus as a business, despite how many articles have claimed over the last year that it’s working on a “SuperApp” and has some sort of renewed plan to take on whoever OpenAI perceives as the competition in any given calendar month. Everything OpenAI does is a reaction to somebody else. Its Atlas browser was a response to Perplexity’s Comet browser, its first (of multiple!) Code Reds in 2025 was a reaction to Google’s Gemini 3, and its rapid deployment of its Codex model and platform was to compete with Anthropic’s Claude Code. I’ve read about this company and the surrounding industry for hours a day for several years, and I can’t think of a single product that OpenAI has launched first. Even its video-generating social network app Sora was beaten to market by five days by Meta’s putrid and irrelevant “Vibes.”

Actually, that’s not true. OpenAI did have one original idea in 2025 — the launch of GPT-5, a much-anticipated new model that included a “model router” to make it “more efficient” — except it turned out that it boofed on benchmarks, and that the model router actually made it (as I reported last year) more expensive, which led to the router being retired in December 2025.

I tend to be pretty light-hearted in what I write, but please take me seriously when I say I have genuine concerns about the dangers posed by OpenAI. I believe that OpenAI is an incredibly risky entity, not due to the power of its models or its underlying assets, but due to Sam Altman’s ability to con people and find others who will con in his stead.
Those responsible for rooting out con artists — regulators, investors, and the media — have not simply failed, but actively assisted Altman in this con. Here are the crucial elements of the con: Sam Altman is a dull, mediocre man who loves money and power. He appears superficially charming, but his actual skill is ingratiating himself with others and having them owe him favors, or otherwise feel indebted to him. He remembers people’s names and where he met them, and is very good at emailing people, writing checks, or finding reasons for somebody else to write a check. He is not technical — he can barely code and misunderstands basic machine learning (to quote Futurism) — but is very good at making the noises that people want to hear, be they big scary statements that confirm their biases or massive promises of unlimited revenue that don’t really make any rational sense.

While OpenAI might have started on noble terms, it has since morphed into a massive con led by the Valley’s most notable con artist. I realize that those who like AI might find this offensive, but what else do you call somebody who makes promises they can’t keep ($300 billion to Oracle, $200 billion of revenue by 2030), spreads nonsensical financials (promises to spend $600 billion on compute), announces deals that don’t exist (see: NVIDIA’s $100 billion funding and the entire Stargate project), and speaks in hyperbolic terms to pump the value of his stock (such as basically every time he talks about Superintelligence)?

Altman has taken advantage of a tech and business media that wants to see him win, a market divorced from true fundamentals, desperate venture capitalists at the end of their rope, hyperscalers that have run out of hypergrowth ideas, and multiple large companies like Oracle and SoftBank that are run by people who can’t do maths.
OpenAI is a pseudo-company that can only exist with infinite resources, its software sold on lies, its infrastructure built and paid for by other parties, and its entire existence fueled by compounding layers of leverage and risk. OpenAI has never made sense, and was only rationalized through a network of co-conspirators. OpenAI has never had a path to profitability, and never had a product that was worth the actual cost of selling it. The ascension of this company has only been possible as part of an exploitation of ignorance and desperation, and its collapse will be dangerous for the entire tech industry.

Today I’ll explain in great detail the sheer scale of Sam Altman’s con, how it was executed, the danger it poses to its associated parties, and how it might eventually collapse. This is the Hater’s Guide To OpenAI, or Sam Altman, Freed.

OpenAI’s ChatGPT subscriptions are, like every LLM product, deeply unprofitable, which means that OpenAI needs constant funding to keep providing them. I have found users of OpenAI Codex who have been able to burn between $1,000 and $2,000 in the space of a week on a $200-a-month subscription, and OpenAI just reset rate limits for the second time in a month. This isn’t a real business.

OpenAI’s API customers (the ones paying for access to its models) are, for the most part, venture-backed startups providing services like Cursor and Perplexity that are powered by these models. These startups are all incredibly unprofitable, requiring them to raise hundreds of millions of dollars every few months (as is the case with Harvey, Lovable, and many other big-name AI firms), which means that a large chunk — some estimate around 27% of its revenue — is dependent on customers that stop existing the moment that venture capital slows down.
OpenAI’s infrastructure partners like CoreWeave and Oracle are taking on anywhere from a few billion to over a hundred billion dollars’ worth of debt to build data centers for OpenAI, putting both companies in material jeopardy in the event of OpenAI’s failure to pay or overall collapse. 67% of CoreWeave’s 2025 revenue came from Microsoft renting capacity to rent to OpenAI, and $22 billion (32%) of CoreWeave’s $66.8 billion revenue backlog is tied to OpenAI, backlog that requires CoreWeave to build more capacity to fill. Oracle took on $38 billion in debt in 2025, and is in the process of raising another $50 billion as it lays off thousands of people, with said debt’s sole purpose being to build data center capacity for OpenAI.

OpenAI’s lead investor SoftBank is putting itself in dire straits to fund the company, with over $60 billion invested so far, existentially tying SoftBank’s overall financial health to both OpenAI’s stock price and SoftBank’s ability to continue paying (or refinancing) its loans. SoftBank took on a year-long $15 billion bridge loan in 2025, had to sell its entire stake in NVIDIA, and expanded its ARM-stock-backed margin loan to over $11 billion to give OpenAI $30 billion in 2025, and then took on another $40 billion bridge loan a few weeks ago to fund the $30 billion it promised for OpenAI’s latest funding round.

The other crucial elements of the con: Creating a halo of uncertainty around the actual efficacies of LLMs, to the point that a cult of personality grew around a technology, obfuscating its actual outcomes and efficacies to the point that it could be sold based on what it might do rather than what it actually does. Creating a halo of “genius” around Altman himself, aided by constant and vague threats of human destruction, with the suggestion that only Altman could solve them. Normalizing the idea that it’s both necessary and important to let a company burn billions of dollars.
Normalizing the idea that it’s okay for a company to run perpetual losses, and perpetuating the idea that these losses are necessary for innovation to continue at large.

Kev Quirk 5 days ago

Motorbike Servicing Rant

So my BMW S1000XR is now a year old and it's going in for its first "full service". It had its "break-in" service after a few weeks of ownership, but that's just an oil change. New bikes come with a very thin oil inside the engine that's used to help with the break-in process. After 500 or so miles, this needs to be swapped out for proper oil. I contacted the dealership for a price and some potential dates; this is the breakdown they came back with:

Labour - £150
Oil disposal - £20
Oil - £80.60
Sump plug washer - £0.96
Oil filter - £17.29
Brake fluid - £11.92
Tax @ 20% - £56.15
Total: £336.92 (~$455)

So nearly £350 for what's effectively an hour's work and around £50 in parts. I'm mechanically minded and could easily do this at home, but like most modern vehicles, my BMW doesn't come with a service book that is stamped. These days the service history is all stored centrally with BMW, which means the service has to be carried out by them. There is a misconception that home servicing will void the warranty of a new bike. It won't, as long as the person doing the service uses OEM parts and has done it to the manufacturer's specification, which I always do. But I bought this bike from BMW, so if I hand it back after 3 years with a generic eBay service book that's been stamped by me, even though it's been done to a high standard, it will affect the trade-in value. Ipso facto, they have me by the balls. I get it, margins are small and this is how dealerships make money, but I wish they would make it accessible for mechanically minded people, like me, to service at home.

Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.
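For what it's worth, the dealership's sums do check out. A quick sketch (my own, not the dealership's, assuming the quote is just the line items summed with a flat 20% VAT on top):

```python
# Recreating the service quote: sum the line items, then apply 20% VAT.
# Figures are taken from the dealership's breakdown above.
line_items = {
    "Labour": 150.00,
    "Oil disposal": 20.00,
    "Oil": 80.60,
    "Sump plug washer": 0.96,
    "Oil filter": 17.29,
    "Brake fluid": 11.92,
}

subtotal = sum(line_items.values())
vat = round(subtotal * 0.20, 2)
total = round(subtotal + vat, 2)

print(f"Subtotal:  £{subtotal:.2f}")  # £280.77
print(f"VAT @ 20%: £{vat:.2f}")       # £56.15 — matches the quote
print(f"Total:     £{total:.2f}")     # £336.92
```

Which confirms the "Tax @ 20%" line is VAT on the whole job, labour included, not just on the parts.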

Stratechery 1 week ago

An Interview with New York Times CEO Meredith Kopit Levien About Betting on Humans With Expertise

Listen to this post: Good morning, This week’s Stratechery Interview is with New York Times Company CEO Meredith Kopit Levien . Levien became CEO in 2020, after previously serving as Chief Operating Officer, Chief Revenue Officer, and Head of Advertising. I previously interviewed Kopit Levien in August 2022 . The New York Times editorial team always elicits strong reactions, both in the political realm and also in tech, but that’s not what this interview is about; what is indisputable is that the New York Times as a business is both incredibly interesting and incredibly successful. Over the last decade the newspaper has gone from strength to strength, building a thriving subscription business, expanding its bundle from news to Games to Sports to Cooking and more, and now — to take things full circle — has a rapidly growing advertising business. We discuss all of that in this interview, starting with the Games and Sports categories, why the bundle is about expanding the New York Times brand, and the company’s recent push into vertical video. Then we discuss what it means to be a destination site, while also using Aggregators to acquire customers. We spend time on AI, including the New York Times lawsuit against OpenAI, why Kopit Levien sees humans as the moat against AI content, and how the company is using AI on both the business and editorial sides. Finally we discuss the potential for building communities, why advertising is working, and how surviving in an Aggregator and AI world is about fighting entropy. As a reminder, all Stratechery content, including interviews, is available as a podcast; click the link at the top of this email to add Stratechery to your podcast player. On to the Interview: This interview is lightly edited for clarity. Meredith Kopit Levien, welcome back to Stratechery. MKL: Hi Ben, thanks for having me, so happy to be here. 
It’s hard to believe, but it has been four-and-a-half years since you last came on — I was thinking two or three years ago — nope, it’s almost half a decade. I was actually shocked that I’ve been doing interviews for that long, but apparently I’ve been doing them for like six, six-and-a-half years. MKL: You have, and I’ve listened to a lot of them! I appreciate it. Well, we already did the whole background conversation then, we both worked for the student newspaper, lots of commonality there. So let’s fast forward to the time of that interview. It was August 2022, and speaking of mind-blowing lengths of time, you had bought Wordle earlier that year, it’s hard to believe it’s been that long and then you had just purchased The Athletic . How do you feel about those acquisitions five years on? MKL: That’s such a fun place to start. We acquired both of them, if I remember correctly, within a week of each other, and I would say we feel great about both of them and both of them have exceeded our expectations in so many ways. Is Wordle the greatest media acquisition of all time? MKL: You know what I tell people? That New York Times Games is the most up-and-to-the-right thing I’ve experienced in my career in terms of just people’s attention to it and the way it kind of touched culture and still touches culture every day, and the ability for Wordle to be like a megaphone for these other incredible games that we already had that most people didn’t know about. And then what’s so amazing to me is we now have, I think 11 games — half of them are free, half of them are paid games, tens of millions of people play our games every day. And we have made the vast majority, we’ve made those games. And before Wordle and after Wordle, Wordle in and of itself is extraordinary, but before and after, we’ve made other extraordinary games, it’s so awesome. 
Is it a bit annoying that everyone thinks about Wordle, “Oh, you bought Wordle”, and you’re like, “Look, we made most of these, give us more credit here!”? MKL: Listen, credit to Josh Wardle, it’s an awesome game, and it just touched culture like nothing else. But it has served us so brilliantly — it has just shined this huge light on all these other games and it’s given us a chance to prove our chops as a game studio and we just keep making hits. I am so proud of our games team, Jonathan Knight and the whole team around him, they have done such good work and they are still hard, hard at it, that team works so hard. I’m a Connections player, so Wyna Liu is my hero, but they’re all amazing and they put out really good work. Games, it’s going swimmingly, I hope we get to talk even more about it. As long as we’re here — because we were talking a bit about how Wordle sort of came out of the blue — it was this game that popped up, you snapped it up, super smart — and we were talking in our interview about it being an in-point to the New York Times broadly. MKL: Yeah. Has that evolved as you expected or has it evolved in different ways? In the context of not just Games being a property but also it tying into the whole thing. MKL: What a great question. To answer that, let me step back for a minute and say our strategy is for the whole of the New York Times and all the different parts of the portfolio to be an essential subscription for curious people everywhere who want to understand the world and make the most of their lives.
We’ve got three pillars to that, 1) be, and become even more every day, the world’s best news destination 2) have these leading lifestyle products, including Games, but also Sports, Recipes, shopping advice, that really help people do their passion more deeply or better or enjoy it even more and then put those two things together, news and the lifestyle products, in an interconnected experience so that the New York Times is incredibly relevant to you every single day, whatever is going on in the world or your world. Right. This is a point you made before, is you wanted the New York Times to not just be — sometimes the news is slow, or sometimes stuff’s happening you don’t care about, and you wanted to have other stuff for people along the way. MKL: Listen, I want to be really clear. We are first and foremost a high quality independent news journalism company, that is our mission, it is the most value-creating thing we do for society and economically, and that is by miles. And to your original question, it’s just amazing to have all these other points of introduction to people and point all these other ways to bring people into the Times ecosystem and to get them to form a habit with us. Once we do that, once we can engage them in something, our bet is that we can engage them in more and more, and there’s lots of examples of that. You mentioned you had three things, you had the news, you had the lifestyle, what was the third one? MKL: Yeah, so news, news is such a small word for such a big idea. You mentioned that sports is a lifestyle so is sports not news? Is that lifestyle? It’s kind of interesting where that fits. MKL: We do sports news, we do sports journalism, we do news journalism. But let me stay on the news thing for a minute because we’re often even trying ourselves in how we articulate it to not let it be this small idea. 
We do high quality, original, independent journalism, which means we are unearthing new and important information through reporting and also providing often deeply reported commentary and analysis on the really big topics that are going on in the world and also on things that just matter at the level of relevance of people’s daily lives. You could read us today for what is happening with this fragile ceasefire in Iran and you could also read us today for health advice or for what movie to go see or what restaurant people are eating in in New York City right now. News is this very broad thing at The New York Times, and we’ve got these four lifestyle products. I would say to you what we’re doing with The Athletic is absolutely journalism, often it is news journalism, but make no mistake, we are doing it with the rigor and the independence that The Times does. It’s journalism, but we are doing it for fans. Right. It never occurred to me until you sort of mentioned it — it’s not wrong to say that sports is a lifestyle category. MKL: Totally. That intersection is actually kind of interesting to think about. MKL: Let me tell you something — I have an almost 15-year-old, he is an athlete, and he is a giant sports fan and when I think, “What are his lifestyle pursuits?”, when I fill out the parent statement in the school applications, first he’s a sports fan, and The Athletic is serving that fandom. Do you think there’s a bit where some of this sports journalism has been caught up in the “We are journalists” bit and has missed the fact that people watch sports in many cases as a pastime to relax? I look forward to turning on the baseball game at night, I don’t want the perils of the world, this is supposed to be an escape. It’s also most helpful to put it in this lifestyle category because that’s actually meeting people where they are. MKL: I think that’s a great point.
What I will say is The Athletic often does very hard-hitting sports journalism, it is certainly covering the important topics and the tough topics across the major leagues and teams in the United States and European football and a bunch of other things, so it is doing that, hard stop. But if you look at the multiplicity of things they’re doing and you look in a day’s time, it’s probably well over 100 stories that get published every day, an enormous amount of that is beat reporting on what happened to your team in the league that you most likely watch and it is literally meant to make you closer to the team, the fan, the game. I think all high quality information is — consumers of information want uncompromised information and so The Athletic is just like uncompromised the way The Times is uncompromised, it’s going to pursue the truth wherever it may lead, even when that’s to uncomfortable places. But the whole purpose of the broad set of things we do at The Athletic is to make you a better fan, and we know that. Whereas the purpose, and again, that does not mean we don’t do hard-hitting journalism, we absolutely do, but we are independent of anyone’s interest in that journalism but the sports fan. And for the Times, we’re not writing or producing our work for any particular audience, we’re doing it in service to the public’s interest. Is that a value of keeping The Athletic brand separate from the New York Times? MKL: We are absolutely committed to building the brand The Athletic, it was a deliberate choice, I’m very invested in that choice and we’ve still got a lot of running room to build it. I say the biggest opportunity with The Athletic is just to make more sports fans. We’re making real progress with it and let me tell you, you asked me at the beginning, “How’s it going?”, we bought a company that was losing a ton of money because they were investing into a huge sports newsroom, it’s like a giant newsroom with a little business. 
We said it would take some time, but then it would be accretive to the Times — it is absolutely that. We got there in many ways earlier and better than we expected and today we’ve got well over 500 journalists at The Athletic. So it’s an even bigger journalistic proposition and it’s really contributing as a business to The Times and we’re thrilled about that, and I want to say we’re only four years and a few months in, we’re just getting started on all the ways we can support fandom of the major sports. I think we’re nailing the journalism thing, you’re always going to get better and better at that, they were good at it before we acquired them, we’ve helped them be even better at it, do it more robustly, do it in a more edited way and add a layer of national, and in some cases global, sports coverage. But there’s just a lot of white space in the market to serve fans deeply reported, uncompromised information and we’re going to do that. You have such a good product organization and you have the whole Games initiative, how much do you think about the prospects for games in the context of sports? Whether this be fantasy sports or a whole host of daily pick-ems — it’s interesting because there’s obviously a huge gambling angle to this, but how many of those sort of offerings are possible without necessarily being gambling? MKL: Yeah, great question. We think there’s real opportunity for Puzzles/Games and Sports, we think we’re good at both of those things. We already have our first collab, I think it’s about a year old, we launched a Sports Connections puzzle, it is super fun. We did some great marketing for it with famous athletes, which was hilarious, and it’s played a lot, so people love it, and I would say that is early. We’re building out the team, we just hired a new Chief Product Officer at The Athletic, he comes following years of building communities at Facebook.
We took one of the guys from the Times newsroom who’d been a leader of the Upshot, who’s incredible at building interactive work, and he’s now leading interactive work at The Athletic, so we think there’s real opportunity for that. And I’ll tell you just this week, it might even be today, I’m losing track of my dates, we are launching something called The Beast . I don’t know if you’re an NFL fan, but it is the most comprehensive guide I think that exists on the planet to the NFL draft class and it includes literally information on thousands of players who are draft hopefuls and then very deep profiles of 400 of them. Before we owned The Athletic, and actually until a year ago, we’d publish it like as a book, a physical book, it’s this like monster book because there’s so much information in it and teams use it, there’s nothing else like it. Now you’ll see as it launches this week, it’s got all these incredible interactive features now on the individual player profiles and if you’re someone, if you love an NFL team and you really care, you’re going to pay attention to The Beast. So I think we’re just getting started on features that may be games and also other things that support a fan who’s super passionate about their team. I keep interrupting you, but you mentioned three things, so we’ve got to get that third thing. What was the third thing in addition to news and lifestyle? MKL: World’s best news destination, leading lifestyle products, and put those two things together in an interconnected product experience for a bundle that makes The Times relevant for whatever is going on in your world, or the bigger world, every single day. That’s the idea. Got it. We talked a lot about bundling last time and obviously that’s really the core of your strategy, how though has that evolved in the last five years? Is this really a most people are coming in the door through these lifestyle brands and you’re bringing them to the news, whereas it used to be the other way before? 
I’m throwing that out there as a hypothesis, how does that actually work? MKL: I actually think the essence of it is about having this portfolio of world-class news coverage, news broadly defined, and then not just products, but these products that either are or are becoming the leaders in their category. These categories are giant spaces where tens of millions, in some cases hundreds of millions, of people spend a lot of time. It’s the fact that we have rare and valuable news coverage and lifestyle products in these huge spaces that’s really working. So to me, the word “bundle” can mean — the lowest common denominator version of it is, “It’s a marketing concept or merchandising concept” — in our experience, we’ve got this singular idea of being essential in meeting a lot of different kinds of information and experience needs in a person’s life. Rather than it be this idea of, “We’ve got one big important thing” — I’m going to come back to news in a minute because news is central to all of it — but you’ve got this one major hero thing and then you append a bunch of other stuff so the consumer thinks there’s some other value there, we have invested and built these products out in such a way where each thing should be deeply valuable: the person who cares about buying the right products and is going to deeply research them, for example, uses Wirecutter. You talked about expanding the brand, is this what you mean? Where you hear “New York Times”, it’s not, of course news is always the most important, I know you’re going to say that, so I’ll say that for you. MKL: I’m going to say that again and again, because it’s true. It’s also the most economic-value creating thing we do. Right. But you want people to think, “New York Times, that’s the best games”, or, “That’s the best cooking”.
MKL: New York Times makes the best puzzles, it has the best recipes, and by the way, just advice for home cooks who want to cook, it’s where I go if I’m a sports fan, and it’s absolutely going to give me the best uncompromised shopping advice — that’s sort of the spirit of it. It’s not just a news indicator, it’s a “stamp of quality” indicator. MKL: It’s a stamp of rigor and quality, and I’m going to keep using this word, “uncompromised”. Really high quality information that’s done in an uncompromised way and therefore has value at real scale. And the “uncompromised” comes from the business model? MKL: Uncompromised comes from the idea that at our core what we do is independent journalism. You could even say every bit of it, even the games, are journalistic in that they are planned in a very deliberate and thought-out way. Right. They’re not randomly generated, someone is actually editing every puzzle. MKL: That’s right. Humans with expertise are making these things and in some cases harnessing technology to do that even better. It’s really working, and I want to say to you, I wouldn’t have had these words four-and-a-half years ago, but at the core, what we’re trying to do in a very complex information ecosystem, really shaped and controlled by a small number of dominant tech platforms, is make news coverage and products that are so good that people seek them out and ask for them by name. A destination site. MKL: Seek them out, ask for them by name, make room in their lives. The destination site has been — there’s a few companies that I always feel very pleased about, I feel like they’re like my children in a way. MKL: Are we one of your kids? You are one of my kids! MKL: I appreciate that, we could use all the parents, we could use it.
That’s why I loved that strategy document that you guys did, it’s been like a decade now, I’ve mentioned it multiple times — I’m like, “This is beautiful”, and I think it really was on this point of destination sites, this idea that the way around a world of Aggregators that just commoditizes everything is people have to seek you out directly. Google will say competition is only a click away and no one seems to take that seriously, but people can actually click on you and go there. MKL: My answer, we all read your Aggregation Theory and all the updates you’ve done to Aggregation Theory. The way I think about it is for more than a decade, we have had these like four D’s that we’re obsessed with. Ready? So what do I mean by that? We know we exist in an ecosystem shaped by these dominant tech platforms, and so we have to have a wide free layer for our work, we have to, otherwise you can’t bring in the next subscribers. So we are very deliberate where we can be about how we go about doing that and the idea is we need to be able to get you to sample our stuff and fall in love with it and we’ve got to give you enough time and space to make a habit of it so that ultimately you subscribe. Yeah, that’s really interesting. I was going to ask this towards the end, but that’s a good lead into it. You’ve had a big focus on video recently, and it’s super interesting – actually, I have a few questions about this. One is it’s pretty weird to go to the video tab on the desktop and all the videos are vertical. Was that very controversial? MKL: There’s video all over the site now so you’re going to see it in a lot of places. When we say destination, we know a lot of people during the workday are reading us or watching us or listening to us on the desktop web, but we are so kind of first to that phone. Our bet is the ability to watch a video on a phone, you are going to want it in vertical, and we now have a home for it in this tab.
I encourage everybody, download our app, and you get the best version of what we’re doing. Download the app and make sure you register your user account and get the experience. It’s really interesting because I’ve noticed with Stratechery actually, a huge portion of my audience now is just audio, I think more than half my subscribers listen instead of read. You mentioned you mostly listen, which is fine. But as far as the reading goes, actually, I still have a huge amount of people reading on the desktop as compared to mobile. MKL: By the way, I listen when I run because all my other media time is reading. And now I’m forcing myself to watch. Right, you’ve got to dogfood it. MKL: I’m listening to YouTube when I run. Just talking shop, is there a bit where, as you look back on the evolution of media, it turned out that the browser ended up being a text medium, and the phone was actually the multimedia platform? MKL: That’s such a great question, that’s so well put and I need to take that in for a minute and think about it. What I’ll say that I think is related to that: in a web world, we needed a website that people would type in and then pin and always be able to go back to, that worked and the Times has been very good at that. In an iOS and Android world, we need an app, and we’re very, very good at that. I would actually say to you, we’re still pretty early in really getting more and more people to use our app. Today, the majority of people who use our app are subscribers, the engagement is enormous, but it’s mostly the people who subscribe. We have not made the app a really important place for prospects and we’re starting to do that, the Watch tab is part of that. I think it remains to be seen whether the Times is as preferred a brand and a source for watching as it is for reading and listening.
Which, by the way, I want to say to you, those things are not going to go away, we’ve been at this for 175 years. MKL: The old media doesn’t go away, the people who do it still do it. They vary it a bit, but many of them still do it. To your point, a big part of your approach is that you have this huge reporting base, and since the medium is all ones and zeros, they can write an article, and they can be on a podcast, and they can show up in video. MKL: And they can put a camera, they can literally hold a camera in front of them from somewhere on the edges of Iran and describe what they’re seeing. So I think it remains to be seen, I think the market is still kind of forming and structuring. We regard video as doing three really important things for us. One is it helps us engage the people we already have, and anything that helps us engage the people we already have is very good for business. Churn mitigation is always a win if you’re a subscription business. MKL: It’s good for business, and I would argue it’s good for journalistic impact and everything. Good for society, but very good for business. We also think there is an enormous number of people in all generations of life, but especially young people, who spend time watching, and they’re either watching news or they’re watching things that are in a zone adjacent. We are the only generation that really just maximized text, it’s been all downhill ever since. We got all the text in the world, we read it all, and now everyone’s just watching video. MKL: I could do a whole other episode on that and fight to get my very intelligent kid to just like sit back and read and how important I think that is to brain development. But we think video will help us engage whole new audiences, that is a big bet we’re making, we’re already starting to see some of that, we are very excited about it.
And then the third thing that video does for us, and I think that’s really important, I think we all know that trust in all institutions is at an all-time low, trust in media is at an all-time low, I hate the word “media” because it lumps in journalism and a bunch of other things, but trust in all of it is low. And the more we can show you the work, the more we believe you will come to understand what an independent journalistic process to pursue the truth wherever it may lead looks like. Interesting. So it’s like brand-enhancing for what you’re going for overall. MKL: Totally, and trust building. I’ll just tell you, we are much more aggressive today than we’ve been. One of the formats that we’ve scaled the most and there’s still so much room to go is just a reporter on camera describing the story. Which by the way then your production is vertical anyway so it ties right in. MKL: But there are times you go into a studio and explain something, so it doesn’t have to only be vertical, it goes a really long way. And we have made a very deliberate choice where we’ve said, we don’t particularly have a business model on TikTok or Instagram or YouTube Shorts, but we’ve got to be in those places. I wanted to ask you about that because when you think about podcasts, for example, there’s a huge push in general to be on YouTube and I think it’s pretty obvious because podcasts are incredible for audience retention. I’ve talked about how, for my business, all these people listening to Stratechery don’t go anywhere. Whereas before, people would have emails build up and they’re like, “I have too many emails, I should just unsubscribe”. The problem is I get much less sharing, because it’s much easier to forward an email; with a podcast, you just go to the next podcast and then it’s sort of done. So you have podcasts in general going to YouTube because they feel like the algorithm is the way to acquire new users.
The reason to bring this up is I go to the New York Times YouTube page right now, your last main video is from seven days ago. Your last Short is more recent, but it’s about “Trump escalates threats to destroy Iran”. Well, there’s been some news development since those threats. MKL: You think? Consult top of app. But the point is clearly it’s not a priority for you. How does that tie into the balance of destination site versus customer acquisition and all those sorts of things? MKL: It’s a great question. Let me start by saying our general thesis, and I’ve been here a long time now, so I’ve got enough reps to say it bears out. If we make great work that should scale because it’s unlike anything else out there, and it’s important, it will. I want to say that, that is our bet. And so I will say to you, we’re still at it. That’s my bet too. MKL: I listened to enough of your work to know you think that too. It’s a really important principle that we’ve just like hit again and again and again as a business. First, we have to make like the best stuff there is, and it’s got to be done in an independent way and it’s got to be done with rigor and to a high standard of quality. So the chapter we’re in now with video is very much scaling production, which is like, “What are we making?”, “What is it?”, “What is the New York Times if you can watch it?”. We are early in that and we’re going to admit that all over the place. We are, as I started to say, putting a lot of that work out there. The best place to experience it is come to our app, go to the website; even if on the site some of it is shot for vertical, the best place to experience it is our destinations. But we need to be in the places where huge numbers of people are. So the work is also on TikTok and Instagram, it’s on YouTube in short form, and we’re starting to put our longer form stuff there too.
And the truth is, it’s a place where we can see, you are right, a lot of it is dictated by algorithms, but also you get a sense of what is a hit. I’m going to name a few things that are just like unequivocally hits at the New York Times as video. The Ezra Klein show was only a podcast, it’s now a video show too — that guy is so brilliant, he has such an incredible following, we are so excited about that show. Right around the time we were putting him on video, we launched another show: to the extent that Ezra is examining the biggest ideas on the left, Ross Douthat is examining the biggest ideas that are animating the right. Ross has been a longtime columnist at the Times; I think we launched the pod and video at the same time, it was one of the first ones where we said, we’re going out. You say they’re going huge, are they going huge on your properties, or are they going huge on the RSS feeds and the other platforms? MKL: Out in the ecosystem. And when I say huge, we were early in all of this, they’re building audiences and growing. The Daily is huge, and The Morning is, I think, the largest general interest news newsletter on the Internet in terms of readership, five or six million people open it every day. And do you see very tangible, measurable evidence that people are finding this on other platforms and coming back to the Times and subscribing? Or is this more ethereal, this is enhancing the brand, in the long run this will pay off? MKL: It’s a great question. The broad answer I’m going to give you, and I ran the subscription business for a long time, I was on top of the product organization, I was accountable for it: the thing I’m sure of is that we have to make stuff that is so good that it’s worth paying for even in the presence of free and less expensive alternatives, and we also have to have many tens of millions of people who do not yet pay, who are regularly engaging with our work.
We do believe we have to be sort of out there in the ecosystem — of course, you and I both know, you know, we see a receding link-based economy. Did you see that discussion between Nate Silver and Nikita Bier the other day? MKL: Oh, I haven’t seen it yet. They were talking about, because Nate Silver did some sort of article about who’s getting prominence on X and things along those lines, and one of Nikita’s pushbacks about The New York Times not having prominence, not just on X but on all social platforms, is you do what I do, which is we’re old and lazy and just post an article with a link, and Twitter doesn’t feature links anymore. Fine, it is what it is, I have my built-in audience, it’s okay. And it’s like, well, if you actually want to grow, you have to do the whole thread thing like, “This is what’s in this article”, and at the end there’s a link. And Nikita pointed out that the New York Times does the bare minimum, it’s basically like an RSS feed for links, of course they’re not getting featured. Is that something where, I’m telling you now, you didn’t read it, you’re like, “Oh yeah, we should fix that”, or is that a, “Well, you know what? We’re not a social media company, we are a destination site, and that’s just the way it’s going to be”. MKL: It’s a fair question, I think you should regard us as first and most importantly trying to make the best stuff that can and should scale because it’s amazing. And remind me, I’m going to mention two other video shows to you that are so different. And then we are also looking to always master the evolving audience ecosystem. And I think if you followed us, it’s interesting: on YouTube, we’re doing more now show by show to build audience, so just like you mentioned the New York Times channel, Ezra’s feed is surely updated, Ross Douthat’s feed is updated. I’ll mention these two other shows.
Our cooking team launched a show maybe six months ago called The Pizza Interview ; we have this amazing test kitchen on the west side of Manhattan and like every major celebrity with something important to say can come on that show now, they make a pizza and they talk about their work. So the cast of Stranger Things came with the finale, Ariana Grande came. That’s a great concept. MKL: It’s amazing. And that show is building so much momentum, so different than what you would expect. It is fun, it’s really working. We’ve had a show, I don’t know if you’re a music fan, Ben, but we’ve got a music critic and a music reporter, Jon Caramanica and Joe Coscarelli, who have had a podcast at The Times for like a decade called Popcast , where they talk about music. It was sort of made at the edges of the enterprise, these guys are so talented, and we’ve just brought them to video and kind of prime time, and man is that scaling. They actually did a live show at an all-company meeting with Lizzo, it was unbelievable. They’re getting everybody, it’s so, so great. What you see is we are just in the early days of saying, “How and where should we build the big audience for this?”. The Daily, which nine years in is still in the top podcasts, and is I think the largest general interest news podcast: most people do not listen on The New York Times, they listen on Apple or Spotify. MKL: And you know that because of what you do for a living. So we’re open-minded about that and also pushing really hard on the companies that shape the ecosystem to make it so that great stuff can scale. Yeah, I’ve had plenty of discussions with YouTube. MKL: I’m sure we’re going to talk about that too. Well, we’ve actually gone quite long, I do need to ask you about — there’s this technology called AI you may have heard of, I do have a few questions for you on that. Just to get it out of the way, you’re in ongoing litigation with OpenAI.
Obviously, I’m sure that constrains what you can talk about to a certain extent. But sort of big picture, what’s the point of this? What do you want to accomplish? MKL: We’re in ongoing litigation, two-and-a-half years now with OpenAI and Microsoft, we’ve also sued Perplexity . Why? They stole our stuff, they used it without permission, without fair value exchange, copyright infringement, and they built products that compete with us, so that’s why. Let me just say, why did the Times do this? You know, we have spent over 175 years, an enormous amount of resources on high-quality independent journalism, and I want to say this, we’re fighting here, obviously, for the Times, but for the industry writ large, for high quality journalism and content creation writ large, and for the public to have high quality information and content. We have made an enormous investment, we’ve been doing it for a very long time, and we have a huge number of works. Is your biggest concern the training or the output? MKL: We believe that there should be sustainable fair value exchange for our work used in any way, number one, so fair value exchange sustainably. Number two, we believe we should have control, and the law says we should have control, over how our work is used, and I would say those are kind of for everyone. And for the Times very specifically, by the way, we’re not just suing, we have a deal with Amazon , we choose to deal; these things are of a piece: enforcement of our rights in court and dealing are both to put a stake in the ground to say high quality journalism deserves to be paid for, and it should be. And, by the way, the LLMs are only going to be as good as the information that courses through them. The third bit is can we do a deal that’s consistent with our long-term strategy, which involves ultimately having direct relationships with our consumers.
Do you worry about — you’ve had this huge growth in terms of these lifestyle verticals, things like recommendations, things like cooking. Some of those, AI is really, really good and useful at; do you feel a threat there? Have you seen an impact there? MKL: We’re enforcing our rights in court for very specific reasons. I want to do a number of AI categories so let’s set aside the court case. Let’s just say in terms of NYT Cooking, super compelling. Also, I go to ChatGPT, I ask for a recipe and it will give me one. MKL: Totally fair question. I want to say to you first, we’re also using AI like assertively in our product. Right, my next question is how you’re actually using it. MKL: Let’s come back to that. The most important part of our strategy, and maybe to the extent there’s a theme from this conversation, is that The New York Times creates human-led high quality news journalism and all this other stuff, including recipes that are better because of the humanity, the expertise, the professional process that goes into them. And I want to say, because you asked about cooking specifically, every one of those recipes, we have 25,000 recipes and counting in a database, every one of them, human-tasted, human-tested, they’re better. People say to me all the time, “Your recipes are just better”, yes! Because professional chefs and cooks are making them, and it doesn’t get published until we’ve done that. We think that’s going to have enduring value, we think in an information ecosystem where it’s harder and harder to find quality stuff, brands are going to matter more and human-made content is going to matter more. The week you filed the lawsuit, when I wrote about it, I entitled it The New York Times’ AI Opportunity . MKL: I remember what you wrote about it. In this world of everyone getting individualized content, that actually makes you more valuable, not less. MKL: Listen, society needs a shared fact base.
People need high quality, uncompromised information and they need to be able to find it with ease and they need to be able to know what is true and worth their time, and we think the Times and each of our portfolio brands, each of our lifestyle brands, is like a signal to that. So we are obviously investing enormously into all that. Has that been validated in the numbers? MKL: Look at our business results. It’s been a strong period for our business results, I can’t tell you what will happen in the future, but I can tell you we are very, very focused on two things. One, making our products even more kind of rare and valuable at real scale to people, and we are also incredibly focused, part of how I got into this chair, we are incredibly focused on harnessing technology to make the journalism richer where it can help us do that, make our journalists able to get to more things or get to the things more deeply. We are incredibly focused on using technology, and this includes AI, to make the work more accessible. I told you earlier, I’m a runner, you can listen to almost every article now. You can’t listen to the live journalism, but everything else you can listen to in an automated voice, and I think we’re on the third generation of that voice, it’s so much better. It’s still like, it’ll mispronounce one or two things, but it’s great. See, I read my own articles and I still mispronounce things, so maybe that’s actually the human component. The moment it starts pronouncing things perfectly, I’ll know it’s a robot. MKL: We’ve been aggressive with that.
Let me give you an example in the journalism: the Epstein Files , I think it was like three-and-a-half million pages, they came out like late in the day on a Friday, and we’ve got a whole AI Initiatives team in the newsroom and they built a tool to be able to comb those documents, and the magic of what we were able to do with them was the fact that we could create this tool that said, like, there’s all these different story angles to get to, how do you get at them with ease? And then the beat reporters and the editors who have the expertise and the kind of rigor to say, “What should the public know from this?”, it’s the combination of those things that made it awesome. I’m going to give you one more example where I just kind of said immediately, “Oh, there’s a real interesting opportunity here”. Remember the Sydney Sweeney jeans/genes thing? MKL: So the early read on that was that the left was up in arms about this Sydney Sweeney ad, and we had journalists who basically did a story using AI to comb social media to sort of say, “How did this happen?”, and what they found was it was actually a construction on the right; the idea that there was kind of fury about it started as a construction on the right and then became a bigger thing. So I think any new technology, it is our job, it is my job, to see that people are not afraid of it, and are using it in responsible and appropriate ways. We’ve just rolled out Claude Code to our product engineering team, so they can prototype faster and do all kinds of things. So The Times is not anti-AI or anti any other tech; we have laid a stake in the ground to say this next chapter of the ecosystem has got to be shaped in a way that allows high quality journalism organizations and other high quality creative content organizations to do their work in a way where they can earn the living they should from that work, but we are certainly not anti-tech.
Just to go back to this AI bit and The New York Times AI Opportunity idea. You just touched on the “This is a trusted brand, it’s validated by humans” part; it’s leaning into the humanity of it. I’ve expanded that bit a little bit as well as I’ve been thinking about this thesis , and I have this concept that I’ve been thinking about called totem content , where if everyone is reading AI content, everyone’s reading different stuff. The idea of having one piece that, “Did you read the Stratechery article today?”, or whatever it might be, is actually going to be more valuable, not less. I’ve been thinking about this in the context of community; it feels like no content company has ever solved community. You have a thriving comment section, but you’re not making friends in the comment section, it’s sort of a performative bit. MKL: We’re not introducing friends to one another, not necessarily yet. If I know someone who is interested in the same sports team or is interested in Wordle or Connections or whatever it might be or is interested in a particular facet of the world and I knew who they were, there’s something there and there’s a continual trigger for us to talk about it. Where’s your thinking about this? You do this all the time, there’s lots of group chats with New York Times articles shared in them; is that something, though, that you want to or you see an opportunity to lean more into? MKL: My very short answer is yes, with like a double underline. Yes, yes, yes. At the core of the mission’s role is to help society make sense of itself in a way that serves the common interest, the public interest, and “common” is the main word in community. So yes, and I agree with you, I don’t think it’s been solved in any way yet by us or anybody else in the sort of publishing or journalism industry, but we’re beginning to focus on it much more earnestly. I want to say two other things.
Within the news report, we do a ton of culture and lifestyle journalism, and going back a couple of years, we launched the 100 Best Books , and we launched it with a bunch of input from experts beyond the Times, but of course, all coalescing around our books experts, and we launched it with a bunch of features, because it was like an inherently shareable idea, “I read these books, Ben, you should read these books, what’s on your book list?”, and then we did it for movies . We’re just at the beginning of it, I think it’s a huge opportunity, I am super interested in it. And the last thing I want to say, and it kind of brings us back to where you started with me. I will never forget, I was with my son and his friend, on the ferry to the Vineyard, and his friend was like, “Oh my gosh, I play Wordle every day and then after that, I go and I play…”, and he named four rip-offs because he liked the game so much. Point being, we need to make more games, we have, we did, we’re still making more. But none of those games from the competitors, people may play them, but you don’t hear about them the way you hear about Wordle, they haven’t broken through. Why is that? There is one puzzle a day from a company whose brand ethos is that it makes you smarter, that you do with the people you love, and by the way, it’s true for Wordle and Connections and Strands. Everyone’s playing the exact same puzzle. MKL: And it is a shared experience. Just to go back, you asked me about sports; fandom is a shared experience, and we’re thinking very hard about how we support that game moment, and I think The Athletic has a very big opportunity here. And I think in news, what we want — journalism can’t solve society’s big problems, and there are many big problems, but society’s problems cannot be solved without high quality independent journalism.
So the idea of, “Can we get more people engaged with one another?”, on really big, important, weighty topics that need independent journalism, I think that’s a big idea and a big opportunity for The Times, for journalism, for the country, for the world. Has the New York Times fully crossed the Valley of Despair in terms of advertising? Part of all this was you had to like build a subscription business, but now that you’re known as a subscription business, advertising is suddenly a growth opportunity instead of a decline to manage? MKL: I came to run the ad business; the woman who runs the ad business now, Joy Robins , she’s an extraordinary leader. The ad business, I joke all the time, is going so much better under her than it ever did many years ago. I think that we have really found a formula that works. What is that formula? MKL: We are, and I bet long after I’m here we will be, a subscription-first business, meaning we make things that are meant to be extraordinary to consumers at great scale. So many of our ads are shown to subscribers because so much of our engagement is from subscribers, and we’re obsessed, especially in a changing ecosystem, with getting the next group, the prospects, really, really, really engaged with our work. Our obsession with engagement and with quality products puts us in giant spaces that marketers want to be near: news broadly defined, but on the authority of news. Marketers want to be next to other healthy, thriving brands, and I think The Times is that today, but they also want to be in sports and they want to be next to our games, which are cultural sensations, and by the way, do you think marketers like shopping? Quality shopping and cooking, there are so many marketers who want to do stuff with that.
I do think we’ve arrived, I’ve been more optimistic and excited about our ad business over the last year than I’ve been at any other point, and I think given the scale that we have achieved — Ben, you and I both grew up on the web, just think about the number of page views the New York Times has, like, all that engagement. And we’ve spent half a decade, longer than that, building very sophisticated first-party data. So we’re never going to have the scale of a platform or the targetability of a platform, but we are certainly well above what I would suspect any other kind of publisher can do. That’s the question — is there anything actually generalizable from the New York Times? Like you’ve done it, you’ve won it, can anyone actually replicate this? MKL: First of all, we have not won anything, I want to say that very clearly. We have so much more to do, to grow, to make sure. Relative to basically every other newspaper, I’m going to declare you a winner. MKL: Let me tell you the few things that I think are absolutely extensible. I often say we’ve spent so much of our time wanting to make a market and then support a market for digital subscriptions to journalism, and journalism being something of value that is worth paying for. We believe that a thriving, healthy ecosystem with lots of competitors who we’re fighting every day with is actually better; it’s certainly better for society, we think it’s just better generally. And I want to say there are you, Puck, there are so many other things that have been invented since I came to The New York Times. So in some ways, there are aspects of the information ecosystem and journalism that are thriving, certainly not local journalism, certainly not deeply reported journalism, and that’s very unfortunate. The things that I think are extensible: one, when I get asked, “Why has the Times succeeded?”, if I can only give one short answer, it is we kept investing in journalism, that’s it.
Good times, bad times, we kept investing in the journalism. There was something there that actually was worth paying for, one. And two, we stuck to our values. So the Times can’t be bought, the journalism is never compromised, we can’t be cowed, we can be hated in lots of places, and people know they’re still going to get our best understanding, they’re going to get the results of a pursuit of truth wherever it will lead, even when that’s to uncomfortable places. If I had to boil it down to like two short things, I’m ripping off a line from our publisher, AG Sulzberger , that I think does it so beautifully, he says, “It’s value and values”, we kept investing to make sure the product was still really valuable and then we just never let go of our values, I think that those are ideas that are extensible to everyone. The other thing I’ll say to you, and this is maybe my contribution, we clocked early on, 9 or 10 years ago, we are competing for engagement with the most powerful companies, information companies, the world has ever known, who are so much richer than us, so dominant, and we’ve got to get really good at engagement. We’ve got to get really good at making people want to come back, and we’ve also believed in the power of brands as signals to get people to ask for us. I say all the time, they’ve got to ask for us by name. The New York Times, Wordle, Connections, Strands, The Athletic, Cooking, Wirecutter, people have to ask for us by name, and we’ve invested into all those things, I think those are all extensible ideas. Well that’s why I say you’re one of my idea children, destination site; I write about Aggregators and my personal strategy is to do everything the exact opposite of them, because why would I want to even compete in that game? So that certainly resonates. MKL: And you have so many readers and listeners at The New York Times, we’ve been reading you for so long you have felt like a parent of us. Well, I appreciate it.
You are, for the record, older than me, The New York Times I should say. 175 years this year, very exciting, congratulations. MKL: (laughing) Very exciting. Can I say one thing? If we can do anything with like a 175th — Is it a birthday? Is it an anniversary? — if we can do anything in this moment, the most important thing we want to accomplish is just raising people’s consciousness of the idea of what high quality independent journalism is and does. It is human beings with a professional process and real expertise going out into the world and unearthing new information, following a very honed professional process to do so, so that the public can know what’s happening. We are spending a lot of our energy this year at 175 years old, just trying to remind people what that is, and there’s so many other things you can do in media now. You know, I listen to a bunch of stuff, there’s so many things that are like adjacent to news. Oh, I appreciate it. I’m not a reporter, so I need someone to actually go out and unearth facts. MKL: But it is not that, most of it is not that, and I think as local journalism has been in such dire straits for so long, and there’s so few local newspapers and fewer journalists, and as people get more and more of their media diet fed to them by an algorithm that’s meant to match the things they already think, and as leaders work to discredit independent journalism, with all those forces going on in the world, I think the public has a — I think it’s just harder to know or remember or be conscious of the importance of the thing our journalists are doing every single day. There’s one thing, I know we’ve gone slightly long, but when you say that, what I find inspiring and why I like to talk to you and write about the New York Times is, I’m sure it’s a relief to you, I’m just completely independent of any partisanship or political angle. MKL: Totally, you’re not compromised.
I find it so interesting from a business perspective, and what you’re articulating there is inspiring: it’s a fight against entropy, where the easiest path for people and for publications is to just give in to the algorithm, as it were. And it’s kind of nice to go to YouTube and not see any of your videos there, because it’s sort of an assertion that that’s not the path we’re going to go down, and I certainly can relate to that and find that inspiring, and that’s why I enjoyed talking to you. MKL: I enjoyed talking to you, this was a lot of fun, thank you. This Daily Update Interview is also available as a podcast. To receive it in your podcast player, visit Stratechery . The Daily Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly. Thanks for being a supporter, and have a great day!


AI Is Really Weird

If you like this piece and want to support my independent reporting and analysis, why not subscribe to my premium newsletter? It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, detailed analyses of NVIDIA , Anthropic and OpenAI’s finances , and the AI bubble writ large . I just put out a massive Hater’s Guide To The SaaSpocalypse , as well as last week’s deep dive into How AI Isn't Too Big To Fail . Subscribing helps directly support my free work, and premium subscribers don’t see this ad in their inbox. I can’t get over how weird the AI bubble has become. Hyperscalers are planning to spend over $600 billion on data center construction and GPUs predominantly bought from NVIDIA, the largest company on the stock market, all to power generative AI, a technology that’s so powerful that none of them will discuss how much it’s making them, or what it is we’re all meant to be so excited about. To make matters weirder , Microsoft, a company that spent $37.5 billion in capital expenditures in its last quarter on AI , recently updated the terms and conditions of its LLM-powered “Copilot” service to say that it was “for entertainment purposes only,” discussing a product that apparently has 15 million users as part of enterprise Microsoft 365 subscriptions , and is sold to both local and national governments overseas , including the US federal government . That’s so weird! What’re you doing, Microsoft? What do you mean it’s for entertainment purposes? You’re building massive data centers to drive this! Well, okay, you’re building them at some point.
As I discussed a few weeks ago, despite everybody talking about the hundreds of gigawatts of data centers being built “to power AI,” only 5GW are actually “under construction,” with “under construction” meaning anything from “we’ve got some scaffolding up” to “we’re about to hand over the keys to the customer.” But isn’t it weird we’re even building those data centers to begin with? Why? What is it that AI does that makes it so essential — or, rather, entertaining — that we keep funding and building these things? Every day we hear about “the power of AI,” and we’re beaten over the head with scary propaganda saying “AI will take our jobs,” but nobody can really explain — outside of outright falsehoods about “AI replacing all software engineers” — what it is that makes any of this worthy of taking up any oxygen, let alone essential, or a justification for so many billions of dollars of investment. Instead of providing an actual answer of some sort, AI boosters respond by saying it’s “just like the dot com bubble” — another weird thing to do considering 168,000 people lost their jobs as the NASDAQ dropped by 80% in two years, only 16% of the world even used the internet, and those that did in America had an average internet speed of 50 kilobits per second (and only 52% of Americans had access in 2000 anyway). Conversely, to quote myself: And with that incredibly easy access, only 3% of households pay for AI. Boosters will again use this talking point to say that “we’re in the early days,” but that’s only true if you think that “early days” means “people aren’t really using it yet.” Yet the “early days” argument is inherently deceptive. While the Large Language Model hype cycle might have only begun in 2022, the entirety of the media and markets has focused its attention on AI, along with hundreds of billions of dollars of venture capital and nearly a trillion dollars of hyperscale capex investment.
AI progress isn’t hampered by a lack of access, talent, resources, novel approaches, or industry buy-in, but by a single-minded focus on Large Language Models, a technology so obviously limited from the very beginning that Gary Marcus was able to call it in 2022. Saying it’s “the early days” also doesn’t really make sense when faced with the rotten and incredibly unprofitable economics of AI. The early days of the internet were not unprofitable because of the underlying technology of serving websites, but because of the incredibly shitty businesses that people were building. Pets.com spent $400 per customer in customer acquisition costs, millions of dollars on advertising, and had hundreds of employees for a business with a little over $600,000 in quarterly revenue — and as a result, nothing about its failure was about “the early days of the internet” at all, as was the case with Kozmo, or any number of other dot com flameouts. Similarly, internet infrastructure companies like Winstar collapsed because they tried to grow too fast and signed stupid deals, not because of any flaw in the underlying technology. For example, in 1998, Lucent Technologies signed its largest deal — a $2 billion “equipment and finance agreement” — with telecommunications company Winstar, which promised to bring in “$100 million in new business over the next five years” and build a giant wireless broadband network, along with expanding Winstar’s optical networking. Eager math-heads in the audience will be able to see the issue with borrowing $2 billion to make $100 million over five years, just as eager news-heads will laugh at WIRED magazine in 1999 saying that Winstar’s “small white dish antennas…[heralded] a new era and new mind-set in telecommunications.” Winstar died two years later because its business was built to grow at a rate that its underlying product couldn’t support.
In the end, microwave internet (high-speed internet delivered via radio waves) has become an $8 billion-a-year industry, despite everybody’s excitement. In any case, anyone who tells you that we’re in “the early days of AI” has either been conned or is in the process of conning you, as they’re using the phrase to deflect from issues of efficacy or underlying economic weakness. In fact, that’s a great place to go next. Probably the weirdest thing about this entire era is how nobody wants to talk about the fact that AI isn’t actually doing very much, and that AI agents are just chatbots plugged into an API. Per Redpoint Ventures’ Reflections on the State of the Software and AI Market, “the agent maturity curve is still early, but the TAM implications are enormous,” with agents able to “...run discretely for minutes, [and] execute end-to-end tasks with some oversight.” What tasks, exactly? Who knows! Truly, nobody seems able to say. To paraphrase Steven Levy at WIRED, 2025 was meant to be the year of AI agents, but turned out to be the year of talking about AI agents. Agents were/are meant to be autonomous pieces of software that go off and do distinct tasks. In reality, it’s kind of hard to say what those tasks are.
“AI agent” now refers to literally anything anybody wants it to, but ultimately means “chatbot that has access to some systems.” The New York Times’ Ezra Klein recently talked to the entity currently inhabiting former journalist and Anthropic co-founder Jack Clark about “how fast AI agents would rip through the economy,” but despite speaking for over an hour, the closest we got was “it wrote up a predator-prey simulation (a complex-sounding but extremely common kind of webgame that Anthropic likely ingested through its training material)” and “chatbots that talk to each other about tasks,” and if you think I’m kidding, this is how he described it: Anyway, this is all bad, because multiple papers have now shown that, and I quote, agents are “...incapable of carrying out computational and agentic tasks beyond a certain complexity,” with Futurism adding that said complexity was pretty low. The word “agent” is meant to make you think of powerful autonomous systems that carry out complex and minute tasks, when in reality it’s…a chatbot. It’s always a fucking chatbot. It might be a chatbot with API access or a chatbot that generates a plan that another chatbot looks at and says something about, but it’s still chatbots talking to chatbots. When you strip away the puffery, nobody seems to actually talk about what AI does. Let’s take a look at CNBC’s piece on Goldman Sachs’ supposed contract with Anthropic to build “autonomous systems for time-intensive, high-volume back-office work”: …okay, but like, what does it do? Right, brilliant. Great. Love it. What tasks? What is the thing you’re paying for? Okay, great, we have two things it might do in the future, and that’s “employee surveillance” (?) and making pitchbooks. The upshot is that, with the help of the agents in development, clients will be onboarded faster and issues with trade reconciliation or other accounting matters will be solved faster, Argenti said. Onboarding? Chatbot.
“Issues with trade reconciliation”? Chatbot connected to a knowledge base, like we’ve had for years but worse and more expensive. Oh, and “other accounting matters” will be solved faster, always with the future tense with these guys. How about Anthropic and outsourcing body shop giant Infosys’ “AI agents for telecommunications and other regulated industries”? Let’s go through the list of tasks and say what they mean, my comments in bold: How about OpenAI’s “Frontier” platform for businesses to “build, deploy and manage AI agents that do real work”? Shared context? Chatbot. Onboarding? Chatbot. Hands-on learning with feedback? Chatbot. Clear permissions and boundaries? Chatbot setting. Let’s check out the diagram! Uhuh. Great. What real-world tasks? Uhhh. Reason over data? Chatbot. “Complex tasks”? No idea, it doesn’t say. “Working with files”? It doesn’t say how it works with files, but I’d bet the pitch is that it can analyze, summarize and create charts based on them, charts that may or may not have errors in them; based on my experience of trying to get these things to make charts (as a test, I’d never use them in my actual work), it doesn’t seem to be able to do that reliably. “Evaluation and optimization loops”? Unclear, because we have no idea what the tasks are. What are the agents planning, acting, or executing on? Again, no idea. Yet the media continues to perpetuate the myth of some sort of present or future “agentic AI” that will destroy all employment. A few weeks ago, CNBC mindlessly repeated that ServiceNow CEO Bill McDermott believed that agents would send college grad unemployment over 30%. NowAssist, ServiceNow’s AI platform, is capable of — you guessed it! — summarization, conversational exchanges, content creation, code generation and search: a fucking chatbot just like the other chatbots.
A few weeks ago, The New York Times wrote about how “AI agents are fun, useful, but [not to] give them your credit card,” saying that they can “do more than just chat…they can edit files, send emails, book trips and cause trouble”: Sure sounds like you connected a chatbot to your email there Mr. Heyneman.  Let’s go through these: Yes, you can string together chatbots with various APIs and have the chatbot be able to activate certain systems. You could also do the same with a button you bought on Etsy connected to your computer via USB if you really wanted to. The ability to connect something to something else does not mean that anything useful happens at the end, and LLMs are extremely bad at the kind of deterministic actions that define the modern knowledge economy, especially when choosing to do them based on their interpretation of human language. AI agents do not, as sold, actually exist. Every “AI agent” you read about is a chatbot talking to another chatbot connected to an API and a system of record, and the reason that you haven’t heard about their incredible achievements is because AI agents are, for the most part, fundamentally broken.  Even OpenClaw, which CNBC confusingly called a “ ChatGPT moment ,” is just a series of chatbots with the added functionality of requiring root access to your computer and access to your files and emails. Let’s see how CNBC described it back in February :  Hmmm interesting. I wonder if they say what that means: Reading this, you might be fooled into believing that OpenClaw can actually do any of this stuff correctly, and you’d be wrong! OpenClaw is doing the same chatbot bullshit, just in a much-more-expensive and much-more convoluted way, requiring either a well-secured private space or an expensive Mac Mini to run multiple AI services and do, well, a bunch of shit very poorly. 
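To make the “chatbot with API access” point concrete, here is a deliberately minimal sketch of what an “agent” loop looks like under the hood. Everything in it (the stub model, the tool name, the message format) is invented for illustration; a real product swaps the stub for an LLM API call, but the control flow is the same:

```python
# A toy "AI agent": a chat loop that lets the model trigger tool calls.
# The stub model, tool names, and message format are all invented for
# illustration -- the point is that the control flow is just a chatbot
# whose replies sometimes get routed to a function.

def stub_model(transcript):
    """Stand-in for an LLM call. Real agents make an API request here."""
    if not any(m["role"] == "tool" for m in transcript):
        # First pass: the "model" decides to call a tool.
        return {"tool": "lookup_order", "args": {"order_id": "123"}}
    # Second pass: it has a tool result, so it answers.
    return {"answer": "Order 123 ships Tuesday."}

TOOLS = {
    "lookup_order": lambda order_id: f"order {order_id}: ships Tuesday",
}

def run_agent(user_message, max_turns=5):
    transcript = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        reply = stub_model(transcript)
        if "answer" in reply:          # the model is done talking
            return reply["answer"]
        # The "agentic" step: dispatch the requested tool and feed the
        # result back into the conversation.
        result = TOOLS[reply["tool"]](**reply["args"])
        transcript.append({"role": "tool", "content": result})
    return "gave up"

print(run_agent("When does my order ship?"))
```

Strip away the branding and that loop is the whole product: a chatbot whose replies occasionally get routed to a function.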
The same goes for things like Perplexity’s “Computer,” which it describes as “an independent digital worker that completes tasks and workflows for you,” which means, I shit you not, that it can search, generate stuff (words, code, images), and integrate with Gmail, Outlook, Github, Slack, and Notion, places where it can also drop stuff it’s generated. Yes, all of this is dressed up with fancy terms like “persistent memory across sessions” (a document the chatbot reads and information it can access) and “authenticated integrations” (connections via API that basically any software can have). But in reality, it’s just a further compute-intensive way of trying to fit a square peg in a round hole, by which I mean having a hallucination-prone chatbot do actual work. The only reason Jensen Huang is talking about OpenClaw is that there’s nothing else for Jensen Huang to talk about: That’s wild, man. That’s completely wild. What’re you talking about? What can NemoClaw or OpenClaw or whatever-the-fuck actually do? What is the actual output? That’s so fucking weird! I can already hear the haters in my head screaming “but Ed, coding models!” and I’m kind of sick of talking about them, because nobody can actually tell me what I’m meant to be amazed or surprised by. To be clear, LLMs can absolutely write code, and can absolutely create software, but neither of those means that the code is good, stable or secure, or that the same can be said of the software they create. They do not have ideas, nor do they create unique concepts — everything they create is based on training data fed to them that was first scraped from Stack Overflow, Github and whatever code repositories Anthropic, OpenAI, and Google have been able to get their hands on. It’s unclear what the actual economic or productivity effects are, other than an abundance of new code that’s making running companies harder.
Per The New York Times: As I wrote a few weeks ago, LLMs are good at writing a lot of code, not good code, and the more people you allow to use them, the more code you’re going to generate, which means the more time you’re either going to need to review that code, or the more vulnerabilities you’re going to create as a result. Worse still, hyperscalers like Meta and Amazon are allowing non-technical people to ship code themselves, which is creating a crisis throughout the tech industry. Worse yet, LLMs allow shitty software engineers who would otherwise be isolated by their incompetence to feign enough intelligence to get by, leading to them actively lowering the quality of code being shipped. Per the Times: The Times also notes that because LLM coding works better on a device rather than a web interface, “...engineers are downloading their entire company’s code to their laptops, creating a security risk if the laptop goes missing.” Speaking frankly, it appears that LLMs can write code, and create some software, but without any guarantee that said code will compile, run, be secure, performant, or easy to read and maintain. For an experienced and ethical software engineer, LLMs can likely provide some speedup, though not in a way that appears to be documented in any academic sense, other than research suggesting they actually make them slower. And I think it’s fair to ask what any of this actually means. What’s the advantage of having an LLM write all of your code? Are you shipping faster? Is the code better? Are there many more features being shipped? What is the actual thing you can point at that has materially changed for the better? Software engineers don’t seem happier, nor do they seem to be paid more, nor do they seem to be being replaced by AI, nor do we have any examples of truly vibe coded software companies shipping incredible, beloved products.
In fact, I can’t think of a new piece of software I’ve used in the last few years that actually impressed me outside of Flighty . Where’s the beef? What am I meant to be looking at? What’re you shipping that’s so impressive? Why should I give a shit? Isn’t it weird that we’re even having this conversation? Shouldn’t it be obvious by now? This week, economist Paul Kedrosky told me on the latest episode of my show Better Offline that AI is “...nowhere to be seen yet in any really meaningful productivity data anywhere,” and only appears in the non-residential fixed investments side of America’s GDP, at (and I quote again) “...levels we last saw with the railroad build out or with rural electrification.” That’s so fucking weird! NVIDIA is the largest company on the US stock market and has sold hundreds of billions of dollars of GPUs in the last few years, with many of them sold to the Magnificent Seven, who are building massive data centers and reopening nuclear power plants to power them, and every single one of them is losing money doing so, with revenues so putrid they refuse to talk about them!   And all that to make…what, Gemini? To power ChatGPT and Claude? What does any of this actually do that makes any of those costs actually matter? And as I’ve discussed above, what, literally, does this software do that makes any of this worth it?   Ask the average AI booster — or even member of the media — and they’ll say something about “lots of code being written by AI,” or “novel discoveries” (unrelated to LLMs) or “LLMs finding new materials ( based on an economics paper with faked data )” or “people doing research,” or, of course, “that these are the fastest-growing companies of all time.” That “growth” is only possible because all of the companies in question heavily subsidize their products , spending $3 to $15 for every dollar of revenue. 
Even then, only OpenAI and Anthropic seem to be able to make “billions of dollars of revenue,” a statement that I put in quotes because however many billions there might be is up for discussion. Back in November 2025, I reported that OpenAI had made — based on its revenue share with Microsoft — $4.329 billion between January and September 2025, despite The Information reporting that it had made $4.3 billion in the first half of the year based on disclosures to shareholders. While a few outlets wrote it up, my reporting has been outright ignored by the rest of the media. No other outlet reached out to me or otherwise acknowledged it, and every outlet has continued to repeat that OpenAI “made $13 billion in 2025,” despite that being very unlikely given that it would have required it to have made over $8 billion in a single quarter. While I understand why — I’m an independent, after all — these numbers directly contradict existing reporting, which, if I were a reporter, would give me a great deal of concern about the validity of my reporting and the sources that had provided it. Similarly, when Anthropic’s CFO said in a sworn affidavit that it had only made $5 billion in its entire existence, nobody seemed particularly bothered, despite reports saying it had made $4.5 billion in 2025, and multiple “annualized revenue” reports — including Anthropic’s own — that added up to over $6.6 billion. Though I cannot say for certain, both of these situations suggest that Anthropic and OpenAI are misleading their investors, the media and the general public. If I were a reporter who had written about Anthropic or OpenAI’s revenues previously, I would be concerned that I had published something that wasn’t true, and even if I were certain that I was correct, I would have to consider the existence of information that ran counter to my own.
I would be concerned that Anthropic or OpenAI had lied to me, or that they were lying to someone else, and I would work diligently to try and find out what happened. I would, at the very least, publish that there was conflicting information. The S-1 will give us the truth, I guess. Let’s talk for a moment about margins, because they’re very important to measuring the health of a business. Back in February, in my Hater’s Guide To Anthropic, I raised concerns that Dario Amodei was using a different way to calculate margins than other companies do. Amodei told the FT in December 2024 that he didn’t think profitability was based on how much you spent versus how much you made: He then did the same thing in an interview with John Collison in August 2025: Almost exactly six months later, in a February 13, 2026 appearance on the Dwarkesh Podcast, Dario would once again try and discuss profitability in terms other than “making more money than you’ve spent”: The above quote has been used repeatedly to suggest that Anthropic has 50% gross margins and is “profitable,” which is extremely weird in and of itself as that’s not what Dario Amodei said at all. Based on The Information’s reporting from earlier in the year, Anthropic’s “gross margin” was 38%. Yet things have become even more confusing thanks to reporting from Eric Newcomer, who (in reporting on an investor presentation by Coatue from January) revealed that Anthropic’s gross margin was “45% in the quarter ended Sep-25,” with the crucial note that — and I quote — “Non-GAAP gross margins [are] calculated by Anthropic management…[are] unaudited, company-provided, and may not be comparable to other companies.” This means that however Anthropic calculates its margins, it isn’t doing so based on Generally Accepted Accounting Principles, which means that the real margins probably suck ass, because Anthropic loses billions of dollars a year, just like OpenAI.
Yet one seemingly innocent line in there gives me even more pause: “Model payback improving significantly as revenue scales faster than R&D training costs.” This directly matches Dario Amodei’s bizarre idea that “...If you consider each model to be a company, the model that was trained in 2023 was profitable. You paid $100 million, and then it made $200 million of revenue.” Yes, I know it’s a “stylized fact” or whatever, but that’s what he said, and I think that their IPO might have a rude surprise in the form of a non-EBITDA margin calculation that makes even the most ardent booster see red. This week, The Wall Street Journal published a piece about OpenAI and Anthropic’s finances that included one of the most offensive lines in tech media history: Two thoughts: As I said a few months ago about training costs: The Journal also adds that both Anthropic and OpenAI are showing investors two versions of their earnings — one with training costs, and one without — without adding the commentary that this is extremely deceptive or, at the very least, extremely unusual. The more I think about it, the more frustrated I get. Having two sets of earnings is extremely dodgy! Especially when the difference between them is billions of dollars. This should be immediately concerning to every financial journalist, the reddest of red flags, the biggest sign that something weird is happening… …but because this is the AI industry, the Journal runs propaganda instead: That “fast-growing” part is only possible because both Anthropic and OpenAI subsidize the compute of their subscribers, allowing them to burn $3 to $15 for every dollar of subscription revenue. And no, this is nothing like Uber or Amazon; that’s a silly comparison: click that link, read what I said, and then never bring it up again. I realize my suspicion around Anthropic’s growth has become something of a meme at this point, but I’m sorry, something is up here.
Let’s line it all up: Anthropic was making $9 billion in annualized revenue at the end of 2025, or approximately $750 million in a 30-day period. Per Newcomer, as of December 2025, this is how Anthropic’s revenue breaks down: Per The Information, Anthropic also sells its models through Microsoft, Google and Amazon, and for whatever reason reports all of the revenue from their sales as its own, then books whatever cut it gives them as a sales and marketing expense: The Information also adds that “...about 50% of Anthropic’s gross profits on selling its AI via Amazon has gone to Amazon,” and that “...Google typically takes a cut of somewhere between 20% and 30% of net revenue, after subtracting infrastructure costs.” The problem here is that we don’t know the actual amounts of revenue that come from Amazon or Google (or Microsoft, for that matter, which started selling Anthropic’s models late last year), which makes it difficult to parse how much of a cut they’re getting. Nevertheless, something is up with Anthropic’s revenue story. Let’s humour Anthropic for a second and say that what it’s saying is completely true: it went from making $750 million in monthly revenue in January to $2.5 billion in monthly revenue in April 2026. That’s remarkable growth, made even more remarkable by the fact that — based on its December breakdown — most of it appears to have come from API sales. The leap from $750 million to $1.16 billion between December and February feels, while ridiculous, not entirely impossible, but the further ratchet up to $2.5 billion is fucking weird! But let’s try and work it out. On February 5, 2026, Anthropic launched Opus 4.6, followed by Claude Sonnet 4.6 on February 17, 2026.
Based on OpenRouter token burn rates, Opus 4.5 was burning around 370 billion tokens a week. Immediately on release, Opus 4.6 started burning way, way more tokens — 524 billion in its first week, then 643 billion, then 634 billion, then 771 billion, then 822 billion, then 976 billion, eventually going over a trillion tokens burned in the final week of March. In the weeks approaching its successor’s launch, Sonnet 4.5 burned between 500 billion and 770 billion tokens a week. A week after launch, Sonnet 4.6 burned 636 billion tokens, then 680 billion, then 890 billion, and, by about a month in, it had burned over a trillion tokens in a single week. Reports across Reddit suggest that these new models burn far more tokens than their predecessors with questionable levels of improvement. The sudden burst in token burn across OpenRouter doesn’t suggest that a bunch of people suddenly decided to connect to Anthropic and other services’ models, but that the models themselves had started to burn nearly twice the number of tokens to do the same tasks. At this point, I estimate Anthropic’s revenue split to be more in the region of 75% API and 25% subscriptions, based on its supposed $2.5 billion in annualized revenue (out of $14 billion, so a little under 18%) in February coming from “Claude Code” (read: subscribers to Claude; there’s no standalone “Claude Code” subscription). If that’s the case, I truly have no idea how it could’ve possibly accelerated so aggressively, and as I’ve mentioned before, there is no way to reconcile having made $5 billion in lifetime revenue as of March 9, 2026, having $14 billion in annualized revenue on February 12, 2026, and having $4.5 billion in revenue for the year 2025. Things get more confusing when you hear how Anthropic calculates its annualized revenues, per The Information: So, Anthropic is annualizing based on the last four weeks of API revenue times 13, a number that’s extremely easy to manipulate using, say, launches of new products.
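To show how sensitive that multiply-by-13 method is, here is a toy calculation (all figures invented) of how a single launch-window spike inflates an “annualized” number:

```python
# "Annualized revenue," as described above: take the most recent four
# weeks of API revenue and multiply by 13 (13 four-week periods per year).
# The dollar figures are invented purely to show the leverage a single
# good window has on the headline number.

def annualized(last_four_weeks_revenue_m):
    """Annualize a four-week revenue figure (in $M) by multiplying by 13."""
    return last_four_weeks_revenue_m * 13

steady_window = 160   # $160M in a typical four-week window
launch_window = 190   # the same business during a model-launch spike

print(annualized(steady_window))  # 2080 -> "$2.08B annualized"
print(annualized(launch_window))  # 2470 -> "$2.47B annualized"
```

A four-week window that happens to contain a model launch annualizes to a permanently higher-looking run rate, even if spend falls back the following month.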
In simpler terms, Anthropic is cherry-picking four-week windows of API spend — ones that are pumped by big announcements and new model releases — and annualizing them. The one million token context window is a big deal, too, having been raised from 200,000 tokens in previous models. With Opus and Sonnet 4.6, Anthropic lets users use up to one million tokens of context, which means that both models can now carry a very, very large conversation history, one that includes every single output, file, or, well, anything that was generated as a result of using the model via the API. This leads to context bloat that absolutely rinses your token budget. To explain: the context window is the information that the model can consider at once. With 4.6, Anthropic by default allows you to load in one million tokens’ worth of information, which means that every single prompt or action you take has the model re-read up to one million tokens’ worth of information unless you actively “trim” the window through context editing. Let’s say you’re trying to work out a billing bug in a codebase via whatever interface you’re using to code with LLMs. You load in a 350,000-token codebase, a system prompt (i.e. “you are a talented software engineer”; here’s an example), a few support tickets, and a bunch of word-heavy logs to try and fix it. On your first turn (question), you ask it to find the bug, and you send all of that information through. It spits out an answer, and then you ask it how to fix the bug…but “asking it to fix the bug” also re-sends everything, including the codebase, tickets and logs. As a result, you’re burning hundreds of thousands of tokens with every single prompt. Although this is a simplified example, it’s the case across basically any coding product, such as Claude Code or Cursor.
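The arithmetic of that billing-bug example can be sketched directly; the token counts below are invented, but in line with the scenario above:

```python
# Why long contexts rinse your token budget: without trimming, every turn
# re-sends the entire conversation so far as input, and every answer makes
# the next turn's input slightly bigger. Numbers are invented to mirror
# the billing-bug example (codebase + system prompt + tickets + logs).

CONTEXT = 400_000        # initial context loaded in, in tokens
OUTPUT_PER_TURN = 2_000  # tokens the model generates per answer

total_input = 0
history = CONTEXT
for turn in range(5):        # five questions about the same bug
    total_input += history   # the whole window goes back in as input
    history += OUTPUT_PER_TURN  # and each answer grows the window

print(total_input)  # 2_020_000 input tokens for just five questions
```

Five questions about one bug, and you have already paid for over two million input tokens, which is the mechanism behind "burning hundreds of thousands of tokens with every single prompt."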
While Cursor uses codebase indexing to selectively fetch pieces of the codebase without constantly loading it into the context window, one developer using Claude inside of Cursor watched a single tool call burn 800,000 tokens by pulling an entire database into the context window, and I imagine others have run into similar problems. To be clear, Anthropic charges at a per-million-token rate of $5 per million input and $25 per million output, which means that those casually YOLOing entire codebases into context are burning shit tons of cash (or, in the case of subscribers, hitting their rate limits faster). If Anthropic actually made $2.5 billion in a month — we’ll find out when it files its S-1! — it likely came not from genuine growth or a surge of adoption, but from its existing products suddenly costing a shit ton more because of how they’re engineered. The other possibility is the nebulous form of “enterprise deals” that Anthropic allegedly has, and the theory that they somehow clustered in this three-month-long period, but that just feels too convenient. If 70% or more of Anthropic’s revenue is truly from API calls, this would suggest: I don’t see much evidence of Anthropic creating custom integrations that actually matter, or — and fuck have I looked! — any real examples of businesses “doing stuff with Claude” other than making announcements about vague partnerships. There’s also one other option: that Silicon Valley is effectively subsidizing Anthropic through an industry-wide token-burning psychosis. And based on some recent news, there’s a chance that’s the case. As I discussed a few weeks ago, Silicon Valley has a “tokenmaxxing” problem, where engineers are encouraged to burn as many tokens as possible, at times by their peers, and at others by their companies. The most egregious — and honestly, worrying!
— version of this came from The Information’s recent story about Meta employees competing on an internal leaderboard to see who can burn the most tokens, deliberately increasing the size of their prompts and the amount of concurrent sessions ( along with unfettered and dangerous OpenClaw usage ) to do so:   The Information reports that the dashboard, called “Claudeonomics” (despite said dashboard covering other models from OpenAI, Google, and xAI), has sparked competition within Meta, with users burning a remarkable 60 trillion tokens in the space of a month, with one individual averaging around 281 billion tokens, which The Information remarks could cost millions of dollars. Meta’s company-mandated psychosis also gives achievements for particular things like using multiple models or high utilization of the cache. Here’s one very worrying anecdote: One poster on Twitter says that there are people at Meta running loops burning tokens to rise up the leaderboards, and that Meta’s managers also measure lines of code as a success metric.  The Information says that, considering Anthropic’s current pricing for its models, that 60 trillion tokens could be as much as $900 million in the space of a month, though adds that this assumes that every token being burned was on Claude Opus 4.6 (at $15 per 1 million tokens).  I personally think this maths is a bit fucked, because it assumes that A) everybody is only using Claude Opus, B) that none of that token burn runs through the cache (which it obviously does, and the cache charges 50%, as pointed out by OpenCode co-founder Dax Radd ), and C) that Meta is entirely using the API (versus paying for a $200-a-month Claude Max subscription for each user).  Digging in further, it appears that a few years ago Meta created an internal coding tool called CodeCompose , though a source at Meta tells me that developers use VSCode and an assistant called Devmate connected to models from Anthropic, OpenAI and xAI. 
One engineer on Reddit — albeit an anonymous one! — had some commentary on the subject: If we assume that Meta is an enterprise customer paying API rates for its tokens, it’s reasonable to assume — at even a low $5-per-million average — that it’s spending $300 million or more a month on API calls. As Radd also added, there’s likely a discount involved; he suggested 20%, which I agree with. Even if it’s $300 million, that’s still fucking insane. That’s still over three billion dollars a year. If this is what’s actually happening, and this is what’s contributing to Anthropic’s growth, this is not a sustainable business model, which is par for the course for Anthropic, a company that has only ever lost billions of dollars. Encouraging workers to burn as many tokens as possible is incredibly irresponsible and antithetical to good business or software engineering. Writing great software is, in many cases, an exercise in efficiency and nuance, building something that runs well, is accessible and readable by future engineers working on it, and ideally uses as few resources as it can. TokenMaxxing runs contrary to basically all good business and software practices, encouraging waste for the sake of waste, and resulting in few measurable productivity benefits and, in the case of Meta, nothing user-facing that actually seems to have improved.
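The competing cost estimates above are easy to reproduce. This sketch prices out 60 trillion tokens a month under three sets of assumptions: The Information's all-Opus list-price figure, the lower $5-per-million blended average, and a scenario that layers on the cache and discount objections; all three are back-of-envelope combinations for illustration, not Meta's actual bill:

```python
# Rough monthly cost of 60 trillion tokens under the competing assumptions
# discussed above. Every scenario is a back-of-envelope illustration.

TOKENS = 60_000_000_000_000  # 60 trillion tokens in a month

def cost(rate_per_million, cache_share=0.0, discount=0.0):
    """Monthly cost in dollars.

    rate_per_million: blended $ price per million tokens
    cache_share: fraction of tokens served from cache, billed at half rate
    discount: assumed enterprise rebate off the total
    """
    full = TOKENS * (1 - cache_share) * rate_per_million / 1_000_000
    cached = TOKENS * cache_share * (rate_per_million / 2) / 1_000_000
    return (full + cached) * (1 - discount)

print(cost(15))                                # $900M: everything at Opus list price
print(cost(5))                                 # $300M: a $5/M blended average
print(cost(5, cache_share=0.5, discount=0.2))  # $180M: half cached, 20% discount
```

Even the most charitable combination of assumptions lands in the hundreds of millions of dollars a month, which is why the exact figure matters less than the order of magnitude.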
Venture capitalist Nick Davidov mentioned yesterday that sources at Google Cloud “started seeing billions of tokens per minute from Meta, which might now be as big as a quarter of all the token spend in Anthropic.” While I can’t verify this information (and Davidov famously deleted his photos using Claude Cowork while attempting to reorganize his wife’s desktop), if that’s the case, Meta is a load-bearing pillar of Anthropic’s revenue — and, just as importantly, a large chunk of Anthropic’s revenue flows through Google Cloud, which means A) that Anthropic’s revenue truly hinges on Google selling its models, and B) that said revenue is heavily inflated by the fact that Anthropic books revenue without cutting out Google’s 20%+ revenue share. In any case, TokenMaxxing is not real demand, but an economic form of AI psychosis. There is no rational reason to tell somebody to deliberately burn more resources without a defined output or outcome other than increasing how much of the resource is being used. I have confirmed with a source at Meta that there is no actual metric or tracking of any return on investment involved in token burn, meaning that TokenMaxxing’s only purpose is to burn more tokens to go higher on a leaderboard, and it is creating bad habits across a company that already has decaying products and leadership. To make matters worse, TokenMaxxing also teaches people to use Large Language Models poorly. While I think LLMs are massively overrated and have their outcomes and potential massively overstated, anyone I know who actually uses them for coding generally has habits built around making sure token burn isn’t too ridiculous, as well as ways to do things faster without LLMs and to be intentional with the models they use for particular tasks.
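The revenue-share point can be made concrete with a small sketch. Both numbers here are assumptions carried over from the estimates above (a 20% channel cut and $300 million a month in spend), not confirmed terms:

```python
# Hypothetical figures: how a cloud provider's revenue share shrinks what a
# model vendor actually keeps, versus the gross number it books as revenue.

def net_of_channel(gross, channel_share=0.20):
    """Revenue retained after the channel provider's cut."""
    return gross * (1 - channel_share)

monthly_gross = 300e6                   # assumed spend per month ($300M)
annualized_gross = monthly_gross * 12   # $3.6B/yr, the "headline" number
annualized_net = net_of_channel(annualized_gross)  # what's actually retained

print(f"Booked (gross): ${annualized_gross / 1e9:.2f}B per year")
print(f"Retained (net): ${annualized_net / 1e9:.2f}B per year")
```

Under these assumptions, a $3.6 billion headline figure retains only about $2.88 billion after the channel's cut, which is the gap between booked and kept revenue the paragraph above describes.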
TokenMaxxing literally encourages you to do the opposite — to use whatever you want in whatever way you want to spend as much money as possible to do whatever you want, because the only thing that matters is burning more tokens. Furthermore, TokenMaxxing is exactly the kind of revenue that disappears first. Zuckerberg has reorganized his AI team four or five times already, and massively shifted Meta’s focus multiple times in the last five years, proving that at the very least he’ll move on a whim depending on external forces. After laying off tens of thousands of people in the last few years, Meta has shown it’s fully capable of dumping entire business lines or groups at a moment’s notice, and while moving on from AI might be embarrassing, that would require Mark Zuckerberg to experience shame, or any kind of emotion other than anger. This is the kind of revenue that a business needs to treat with extreme caution, and if Meta is truly spending $300 million or more a month on tokens, Anthropic’s annualized revenues are aggressively and irresponsibly inflated to the point that they can’t be taken seriously, especially if said revenue travels through Google Cloud, which takes another 20% off the top at the very least. Though the term is pretty new, the practice of encouraging your engineers to use AI as much as humanly possible is an industry-wide phenomenon, especially across hyperscalers like Amazon, Microsoft and Google, all of whom have, until recently, directly pushed their workers to use models with few restraints. Shopify and other large companies are encouraging their workers to reflexively rely on AI, with performance reviews that include stats around your token burn and other nebulous “AI metrics” that don’t seem to connect to actual productivity. I’m also hearing — though I’ve yet to be able to confirm it — that Anthropic and other model providers are forcing enterprise clients to start using the API directly rather than paying for monthly subscriptions.
Combined with mandates to “use as much AI as possible,” this naturally increases the cost of having software engineers, which — and I say this not wanting anyone to lose their jobs — does the literal opposite of replacing workers with AI. Instead, organizations are arbitrarily raising the cost of doing business without any real reason. Because we’re still in the AI hype cycle, this kind of wasteful spending is both tolerated and encouraged, and the second that financial conditions worsen or stock prices drop due to increasing operating expenses, these same companies will cut back on API spend, which will overwhelmingly crush Anthropic’s glowing revenues. I think it’s also worth asking at this point what it is we’re actually fucking doing. We’re building — theoretically — hundreds of gigawatts of data centers, feeding hundreds of billions of dollars to NVIDIA to buy GPUs, all to build capacity for demand that doesn’t appear to exist, with only around $65 billion of revenue (not profit) for the entire generative AI industry in 2025, with much of that flowing from two companies (Anthropic and OpenAI) making money by offering their models to unprofitable AI startups that cannot survive without endless venture capital, which is also the case for both AI labs. Said data centers make up 90% of NVIDIA’s revenue, which means that 8% or so of the S&P 500’s value comes from a company that makes money selling hardware to people who immediately lose money on installing it. That’s very weird! Even if you’re an AI booster, surely you want to know the truth, right? The most prominent companies in the AI industry — Anthropic and OpenAI — burn billions of dollars a year, have margins that get worse over time, and absolutely no path to profitability, yet the majority of the media act as if this is a problem that they will fix, even going as far as to make up rationalizations as to how they’ll fix it, focusing on big revenue numbers that wilt under scrutiny.
That’s extremely weird, and only made weirder by members of the media who seem to think it’s their job to defend AI companies’ bizarre and brittle businesses. It’s weird that the media’s default approach to AI has, for the most part, been to accept everything that the companies say, no matter how nonsensical it might be. I mean, come on! It’s fucking weird that OpenAI plans to burn $121 billion in the next two years on compute for training its models, and that the media’s response is to say that somehow it will break even in 2030, even though there’s no actual explanation anywhere as to how that might happen other than vague statements about “efficiency.” That’s weird! It’s really, really weird! It’s also weird that we’re still having a debate about “the power of AI” and “what agents might do in the future” based on fantastical thoughts about “agents on the internet” that do not exist, cannot exist, and will never exist, and it’s fucking weird that executives and members of the media keep acting as if that’s the case. It’s also weird that people discussing agents don’t seem to want to discuss that OpenAI’s Operator Agent does not work, that AI browsers are fundamentally broken, or that agentic AI does not do the things people say it does. In fact, that’s one of the weirdest parts of the whole AI bubble: the possibility of something existing is enough for the media to cover it as if it exists, and a product saying that it will do something is enough for the media to believe it does it. It’s weird that somebody saying they will spend money is enough to make the media believe that something is actually happening, even if the company in question — say, Anthropic — literally can’t afford to pay for it. It’s also weird how many outright lies are taking place, and how little the media seems to want to talk about them. Stargate was a lie! The whole time it was a lie!
That time that Sam Altman and Masayoshi Son and Larry Ellison stood up at the White House and talked about a $500 billion infrastructure project was a lie! They never formed the entity! That’s so weird! Hey, while I have you, isn’t it weird that OpenAI spent hundreds of millions of dollars to buy tech podcast TBPN “to help with comms and marketing”? It’s even weirder considering that TBPN was already a booster for OpenAI! It’s also weird that a lot of AI data center projects don’t seem to actually exist, such as Nscale’s project to make “one of the most powerful AI computing centres ever” that is literally a pile of scaffolding, and that despite that announcement the company was able to raise $2 billion in funding. It’s also weird that we’re all having to pretend that any of this matters. The revenues are terrible, Large Language Models are yet to provide any meaningful productivity improvements, and the only reason that they’ve been able to get as far as they have is a compliant media and a venture capital environment born of a lack of anything else to invest in. Coding LLMs are popular only because of their massive subsidies and corporate encouragement, and in the end will be seen as a useful-yet-incremental, way-too-expensive way to make the easy things easier and the harder things harder, all while filling codebases with masses of unintentional, bloated code. If everybody was forced to pay their actual costs for LLM coding, I do not believe for a second that we’d have anywhere near the amount of mewling, submissive and desperate press around these models. The AI bubble has every big, flashing warning sign you could ask for. Every company loses money. Seemingly every AI data center is behind schedule, and the vast majority of them aren’t even under construction. OpenAI’s CFO does not believe that it’s ready to go public in 2026, and Sam Altman’s reaction has been to have her report to somebody other than him, the CEO.
Both OpenAI and Anthropic’s margins are worse than they projected. Every AI startup has to raise hundreds of millions of dollars, and their products are so weak that they can only make millions of dollars of revenue after subsidizing the underlying cost of goods to the point of mass unprofitability. And it’s really weird that the mainstream media takes the diametrically opposed view — that all of this is totally permissible under the auspices of hypergrowth, that these companies will simply grow larger, that they will somehow become profitable in a way that nobody can actually describe, that demand for AI data centers will exist despite there being no signs of that happening. I get it. Living in my world is weird in and of itself. If you think like I do, you have to see every announcement by Anthropic or OpenAI as suspicious — which should be the default position of every journalist, but I digress — and any promise of spending billions of dollars as impossible without infinite resources. At the end of this era, I think we’re all going to have to have a conversation about the innate credulity of the business and tech media, and how often that was co-opted to help the rich get richer. Until then, can we at least admit how weird this all is?

Telecommunications: AI agents will help carriers modernize network operations, simplify customer lifecycle management, and improve service delivery—bringing intelligent automation to one of the most operationally complex and regulated industries in the world. Meaningless. Automation of what?

Financial services: AI agents will help firms detect and assess risk faster, automate compliance reporting, and deliver more personalized customer interactions, such as tailoring financial advice based on a client's full account history and market conditions. Chatbot! “More-personalized interactions” are a chatbot with a connection to a knowledge system, as is any kind of “tailored financial advice.” Compliance reporting?
Summarizing or pulling documents from places, much like any LLM can do, except that it’ll likely get shit wrong, which is bad for compliance.

Manufacturing and engineering: Claude will help accelerate product design and simulation, reducing R&D timelines and enabling engineers to test more iterations before production. I assume this refers to people using Claude Code to do coding, which is what it does.

Software development: Teams will use Claude Code to write, test, and debug code, helping developers move faster from design to production. Claude Code.

Enterprise operations: Claude Cowork will help teams automate routine work like document summarization, status reporting, and review cycles. Literally a chatbot that deleted every single one of a guy’s photos when he asked it to organize his wife’s desktop. “Gather information” — search tool, part of chatbots for years. “Write reports” — generative AI’s most basic feature, with no details on quality. “Edit files” — to do what exactly? Chatbot feature. “Send and receive messages through email and text” — generating and reading text, connected to an email account. “Delegate work” — what work? No need to get specific! Are you fucking kidding me? If you simply remove billions of dollars in costs, OpenAI is profitable! Why do you think these companies are going to break even anytime soon? You have absolutely no basis for doing so other than leaks from the company!

Anthropic said on February 12, 2026 that it had hit $14 billion in annualized revenue. This would work out to roughly $1.16 billion in a 30-day period; let’s assume from January 11, 2026 to February 11, 2026. Anthropic’s CFO said on March 9, 2026 that it had made “exceeding $5 billion” in lifetime revenue. On March 3, 2026, Dario Amodei said it had hit $19 billion in annualized revenue. This would work out to $1.58 billion in a 30-day period. Let’s assume this is for the period from February 2, 2026 to March 2, 2026.
On April 6, 2026, Anthropic said it had hit $30 billion in annualized revenue. This works out to about $2.5 billion in a 30-day period. Let’s assume that said period is March 6, 2026 to April 6, 2026.

Anthropic’s $14 billion in annualized revenue from February 12, 2026 includes both the launch of Claude Opus 4.6 and the height of the OpenClaw hype cycle, where people were burning hundreds of dollars of tokens a day. This announcement also included the launch of Anthropic’s 1 million token context window in beta for Opus 4.6.

Anthropic’s $19 billion in annualized revenue from March 3, 2026 included the launches of both Claude Opus 4.6 and Claude Sonnet 4.6. This period includes around half of the January 11 to February 11, 2026 window from the previous $14 billion annualized number, and the launch of the beta of the 1 million token context window for Sonnet 4.6. To be clear, the betas required you to explicitly turn on the 1 million token context window, and had higher pricing around long context.

Anthropic’s $30 billion in annualized revenue from April 6, 2026 included two weeks’ worth of massive token burn from the launches of Sonnet and Opus 4.6, and includes a few days of the previous window (March 3 to April 5). This also included the general availability of the 1-million token context window, enabled by default and billed at the standard pricing. Massive new customers are making payments up front, which makes this far from “recurring” revenue; they are spending tons of money immediately, burning hundreds of millions of dollars a month in tokens, and paying Anthropic handsomely for it.
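The run-rate conversions in this timeline all follow the same simple arithmetic: annualized revenue divided by twelve to get a rough 30-day figure. A quick Python sketch, using the announcement dates and numbers above:

```python
# Convert "annualized revenue" announcements into rough 30-day figures,
# using the same division-by-twelve the analysis above relies on.

def monthly_run_rate(annualized):
    """Approximate revenue over a ~30-day period from an annualized figure."""
    return annualized / 12

announcements = {
    "2026-02-12": 14e9,  # $14B annualized
    "2026-03-03": 19e9,  # $19B annualized
    "2026-04-06": 30e9,  # $30B annualized
}

for date, annualized in announcements.items():
    print(f"{date}: ~${monthly_run_rate(annualized) / 1e9:.2f}B per 30 days")
```

Note that "annualized" here means the latest monthly figure multiplied by twelve, so dividing by twelve simply recovers that single month; it says nothing about whether the rate is sustained.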

0 views
Stratechery 1 week ago

Anthropic’s New TPU Deal, Anthropic’s Computing Crunch, The Anthropic-Google Alliance

Anthropic needs compute, and Google has the most: it's a natural partnership, particularly for Google.

0 views
Anton Sten 1 week ago

What a UX strategy is — and why most teams should write one

A UX strategy is a short document that says what good looks like for the people using your product, and how the team plans to deliver it. That's it. No frameworks, no McKinsey decks, no 113 slides. When I join a project, one of the first things I do is ask if there's a UX strategy in place. Most of the time there isn't. Sometimes there's a brand book, or a product roadmap, or a Notion page someone wrote a year ago and then forgot. Rarely is there a document that actually says: this is what we mean by a good experience, and this is how we're going to get there. It's not that teams don't care. They almost always do. It's that nobody's written it down, so the care gets spent in fifty different directions and the product ends up feeling like a committee made it. Which, in a way, it did. ## What a UX strategy actually is The phrase trips people up, so I usually pull the two words apart. **UX** is what someone experiences when they use your product. Not what the product does — what it feels like to use. Two apps can have identical feature lists and feel completely different. iPhone and Android. Notion and Confluence. Linear and Jira. The features are the same on paper. The experience isn't. **Strategy**, stripped of the consultant baggage, is three questions. Where are we now. Where do we want to be. How do we get there. That's the whole shape of it. Everything else is detail. Put them together and a UX strategy is a document that answers those three questions specifically about the experience you're building. What's the experience like today, what do you want it to feel like, and what are you actually going to do to close the gap. It's not a deliverable for clients. It's not a marketing document. It's a working tool the team uses to make decisions when nobody's in the room to ask. ## What goes in one I've written a lot of these over the years and the contents vary, but the bones are usually the same. **Where you are now.** A short, honest snapshot. 
Who your users are, what they're trying to do, where they get stuck, and how the current experience compares to alternatives. Not a research report — a summary the team can hold in their heads. If it's longer than two pages, it's not doing its job. **Where you want to be.** This is the part most teams skip, because it requires picking a direction and sticking to it. Not goals like "improve the user experience" — that's not a goal, that's a wish. Specific principles you can actually hold a design decision against. We'll come back to those in a minute, because they're the part that does the most work. **How you'll get there.** The practical bit. What's going to change. Who's going to do it. What you'll stop doing to make room. I'm partial to this section because it's the part most strategies leave out, and it's the reason most strategies don't survive contact with real work. A direction without a plan is a wish list. Length-wise, a good UX strategy is short. A page can be enough. Two is plenty. Anything longer and people will paste it into Claude and ask for the summary — which means the summary is the real strategy, and you wrote the rest for nothing. ## Goals are principles, not action items The most important section in any UX strategy is the one defining what good looks like. And the best way I've found to do that is to write principles, not features. Principles are desired outcomes. They sound like sentences, not roadmap items. Some I've used over the years: **Design for everyone.** Build for the eighty percent, not the loudest twenty. Every team I've worked with has someone — a stakeholder, a power user, a vocal customer — who keeps asking for the next feature. Most of those features serve almost nobody. A principle like this gives the team something to point at when the request comes in, instead of just saying no and feeling bad about it. **Optimize for speed.** Most products are judged on how fast they feel before they're judged on anything else. 
Not literal load time — perceived speed. How quickly something responds. How few steps it takes. People will forgive almost anything if the product feels fast. **Different is good.** Make the important thing obvious. The primary action should be visually distinct, placed where people expect it, and impossible to miss. Insecurity is the root of bad user experiences. If the user is wondering what to do next, you've already failed. **Always start with what's familiar.** Your users spend most of their time in other apps, not yours. Look at the patterns they already know. Borrow shamelessly from the conventions of the industry you're in. Familiarity isn't a lack of imagination — it's respect for the user's time. These are just examples. Yours will be different, and they should be. The point isn't the specific principles, it's the form. Write things you can hold a real design decision against on a Tuesday afternoon when nobody's watching. ## Why this matters more now For most of my career, UX strategy was the kind of document large teams wrote because they could afford to. Smaller teams skipped it. They were busy shipping, and shipping was the hard part. Shipping isn't the hard part anymore. The cost of building has collapsed. Anyone can put a working product on the internet in a weekend. Tools write half the code. AI handles the parts that used to take a junior designer a week. The bottleneck used to be execution, and execution is now nearly free. Which means the hard part is the part that was always hard but easier to ignore: knowing what's worth building, who it's for, and what good would even look like. When making things was expensive, you had to be careful before you started. Now you can start anything, which is exactly why so many teams are shipping a lot of things that nobody needed. A UX strategy used to be a luxury. It's becoming the thing that separates teams who ship useful work from teams who ship a lot of work. 
I've written about this from a couple of different angles — [vibe coding for designers](https://www.antonsten.com/articles/vibe-coding-for-designers/) covers what changes when designers can build, and [simple is hard](https://www.antonsten.com/articles/simple-is-hard/) is about why restraint is the harder discipline. A UX strategy is the document that makes restraint possible. ## What a strategy actually does The thing nobody tells you about UX strategies is that the document itself isn't really the point. The point is the conversations you have while writing it. The disagreements that surface. The assumptions that turn out not to be shared. The "wait, is *that* what we're optimizing for?" moments that happen when you try to put it on paper. The strategy is the artifact. The alignment is the work. When I look back at the projects where the team shipped well and the ones where it didn't, the difference was almost never talent or budget. It was whether the team agreed on what they were actually trying to do. Sometimes that agreement existed without a document. More often it didn't, and the document was what made it real. If you're on a team without a UX strategy, you don't need a long one. You don't need a template. You don't even need to call it a strategy if the word makes people roll their eyes. You need a few pages that say what good looks like, what you're going to do about it, and what you're going to stop doing to make room. Then you need everyone on the team to actually read it. The surprising thing isn't that most teams don't have a UX strategy. It's that most of them are doing fine without one, until suddenly they aren't. A strategy is what you wish you'd written before things got hard. *I wrote a chapter on UX strategy in [Products People Actually Want](https://www.antonsten.com/books/products-people-actually-want/) — if you want the longer version.*

0 views
Kev Quirk 1 week ago

I Hate Insurance!

So yesterday I received an email from Admiral, our insurance provider, where we have a combined policy for both our cars and our home. Last year this cost £1,426.00, but this year the renewal had gone up by a huge 33%, to £1,897.93, broken down as follows:

- Wife's car - £339.34
- My car - £455.68
- Our home (building & contents) - £1,102.91

Even at last year's price this was a shit tonne of money, so I started shopping around and here's what I ended up with:

- Wife's car - £300.17
- My car - £402.22
- Our home (building and contents) - £533.52
- Total: £1056.86 (44% reduction!)

These policies have at least the same cover as Admiral. In some cases, better. I knew it would be cheaper shopping around, but I didn't think it would be nearly half. So, I called Admiral to see what they could do for me, considering I've been a loyal customer for 7 years. They knocked £167.83 (8.8%) off the policy for me, bringing the revised total to £1,730.10. Nice to see that long-term customers are rewarded with the best price! 🤷🏻‍♂️ So I obviously went with the much cheaper option and renewed with 3 different companies. It's a pain, as I'll now need to renew 3 policies at the same time every year, but if it means saving this much money, I'm happy to do it. Next year I'll get a multi-quote from Admiral to see if they're competitive. Something tells me they will be, as with most things these days, getting new customers is more important than retaining existing ones. Unfortunately having car and home insurance is a necessary evil in today's world, but I'm glad I was able to make it a little more palatable by saving myself over £700! If your insurance is up for renewal, don't just blindly renew - shop around as there's some serious savings to be had. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.

1 view

News: OpenAI CFO Doesn't Believe Company Ready For IPO, Unsure Revenue Will Support Commitments

News out of The Information's Anissa Gardizy and Amir Efrati over the weekend - OpenAI CFO Sarah Friar has apparently clashed with CEO Sam Altman over timing around OpenAI's IPO, emphasis mine: I cannot express how strange this is. Generally a CFO and CEO are in lock-step over IPO timing, or at the very least the CFO has an iron grip on the actual timing because, well, CEOs love to go public and the CFO generally exists to curb their instincts. Nevertheless, Clammy Sam Altman has clearly sidelined Friar, and as of August last year, the CFO of OpenAI doesn't report to the CEO. In fact, the person Friar reports to (Fiji Simo) just took a medical leave of absence: It is extremely peculiar to not have the Chief Financial Officer report to the Chief Executive Officer, but remember folks, this is OpenAI, the world's least-normal company! Anyway, all of this seemed really weird, so I asked investor, writer and economist Paul Kedrosky for his thoughts: Very cool! Paul is also a guest on this week's episode of my podcast Better Offline, by the way. Out at 12AM ET Tuesday. Anyway, The Information's piece also adds another fun detail - that OpenAI's margins were even worse than expected in 2025: Riddle me this, Batman! If your AI company always has to buy extra compute to meet demand, and said extra compute always makes margins worse, doesn't that mean that your company will either always be unprofitable or die because it buys too much compute? Say, that reminds me of something Anthropic CEO Dario Amodei said to Dwarkesh Patel earlier in the year... It is extremely strange that the CFO of a company doesn't report to the CEO of that company, and even more strange that the CFO is directly saying "we are not ready for IPO" as its CEO jams his foot on the accelerator. It's clear that both OpenAI and Anthropic are rushing toward a public offering so that their CEOs can cash out, and that their underlying economics are equal parts problematic and worrying.
Though I am entirely guessing here, I imagine Friar sees something within OpenAI's finances that gives her pause. An S-1 - one of the filings a company makes before going public - is an audited document, and I imagine the whimsical mathematics that OpenAI engages in - such as, per The Wall Street Journal, calculating profitability without training compute - might not match up with what actual financiers crave. If you like this piece and want to support my independent reporting and analysis, why not subscribe to my premium newsletter? It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I just put out a massive Hater’s Guide To The SaaSpocalypse, as well as last week’s deep dive into How AI Isn't Too Big To Fail. Supporting my premium supports my free newsletter.

- OpenAI CFO Sarah Friar has, per The Information, said that OpenAI is not ready to go public in 2026, in part because of the "risks from its spending commitments" and uncertainty over whether the company's revenue growth would support those commitments.
- Friar (CFO) no longer reports to Sam Altman (CEO), and hasn't done so since August 2025.
- OpenAI's margins were lower in 2025 "...due to the company having to buy more expensive compute at the last minute."

0 views
HeyDingus 1 week ago

The difference between a company that makes money and a company that makes something worth caring about

David Sparks blogs that companies whose leaders actually give a damn about the products are the ones worth watching: You could argue that’s unhealthy. Maybe it is. But there’s something about a CEO who feels physical pain when the product falls short. That energy flows downhill. When the person at the top cares that much, everyone else figures out pretty quickly that they’d better care too. […] You can spot it pretty easily. When a CEO talks about their company, do they talk about the product or the business? Walt talked about the park. Steve talked about the iPhone. Jensen talks about the chip. The ones who love the product can’t help themselves. The ones who don’t talk about market share and strategic initiatives. Sparks’ sentiment pairs well with Marco Arment’s letter to presumed future Apple CEO John Ternus: Apple doesn’t settle for fine, functional, or good enough in its hardware (and thanks for your incredible work on that). We love making and using products that aren’t just great, but greater than they need to be, always raising the bar of greatness for its own sake. Software, services, revenue sources, and world impact need to be held to that same standard. Focus on making great computers with great user experiences above all else, and you can trust that every other major goal will follow: profit, market share, expansion, impact, and benefit to the world. We have high expectations for Ternus. I hope he can live up to them. HeyDingus is a blog by Jarrod Blundy about technology, the great outdoors, and other musings. If you like what you see — the blog posts , shortcuts , wallpapers , scripts , or anything — please consider leaving a tip , checking out my store , or just sharing my work. Your support is much appreciated! I’m always happy to hear from you on social , or by good ol' email .

0 views
Stratechery 1 week ago

OpenAI Buys TBPN, Tech and the Token Tsunami

OpenAI's purchase of TBPN makes no sense, which may be par for the course for OpenAI. Then, AI is breaking stuff, starting with tech services.

0 views
Hugo 1 weeks ago

AI & Layoffs: What if Artificial Intelligence Is Just an Excuse?

Well, here we are—tech layoffs are exploding. According to RationalFX, the total number of departures is expected to reach 273,000 by the end of the year. And while this figure alone doesn't mean much, know this: it represents roughly 10 times the annual volume of pre-COVID layoffs. So, can we really say that humans are being progressively replaced by AI, as so many claim? In France, INSEE speaks of a contraction in the job market directly linked to the rise of AI. But correlation doesn't imply causation, so we're entitled to wonder if there's something else hiding behind all the hype. So I wanted to dig deeper and explore the root causes to understand this wave. And it turns out AI might not be our biggest concern. If you read the latest news, there's plenty to worry about. And I could've cited Meta, Amazon, Klarna, ASML, Ericsson, Salesforce—the list goes on. In most cases, AI is cited as one of the reasons. And this narrative has a major advantage, because on paper, these companies say: we're automating, we're gaining productivity, and we're cutting fixed costs. Which tends to reassure shareholders. Block's stock price, for example, recovered a bit in February following the announcements. Same with Oracle's stock price (announcement made March 30th). Now doubts linger, and as one article put it: "Isn't this just layoffs with better marketing—AI washing?" Block is the new name for Square, a payments company you might know from its little payment terminal that's now fairly ubiquitous. But Block isn't just a payment terminal—it also spans crypto companies, because its founder, Jack Dorsey, is a big believer in cryptocurrencies. Jack Dorsey also co-founded Twitter, which he sold to Elon Musk a few years ago. And Jack tends to think big. Twitter had 8,000 employees when he sold it—a company that now runs with 2,800 people. At Block, the company tripled its headcount post-COVID. We're talking about a 12,000-employee company that had just 4,000 pre-2020.
Sure, you can understand it by looking at the COVID effect on Block's stock. But the return to reality in 2022 hit hard: the company stagnated, and with payroll exploding, things couldn't end well. So performance improvement plans started to emerge. Because if you look at the economic fundamentals, as this article does, you realize that Block is far less profitable than its competitors, with gross margins half theirs. Today, AI is mostly a "pretty" way to hide management mistakes and reassure investors.

Oracle's case is a bit different. Officially, it's not about cuts driven by productivity gains, but a reorientation of investments toward infrastructure to support AI. In their case too, the stock price is rather concerning, but it's not the main driver of the changes. As one article puts it, it's primarily about investment:

The job cuts at Oracle come as it has invested heavily in AI, spending both on its own infrastructure and on partnerships with other companies like OpenAI. It plans to spend at least $50bn on infrastructure this year, and it has also raised $50bn in debt in order to "meet demand" for even more AI infrastructure. Oracle is also part of the Stargate initiative, alongside OpenAI, SoftBank and MGX, an AI investment fund backed by US President Donald Trump.

Here, it's really about reorienting capital from a flagging traditional activity toward the one that's supposed to replace it in a few years. Honestly, I won't criticize it. It's a strategy, a bet. A huge bet, but one that falls into the same category as what Kodak should've done when digital arrived. And Oracle doesn't want to be the next Kodak.

That's really the issue: nobody wants to be the next Kodak. When a leader (like Block, Google, or Meta) lays off 10% of its workforce and its stock goes up the next day, every other company is tempted to do the same. Laying people off because you mismanaged your company would be an admission of failure.
But laying people off because you're "transforming through AI" is a vision of the future. And this FOMO, the fear of missing out, explains a lot of the current departure plans. Gartner calls it "RIFs before reality": the anticipation of unrealized gains:

The employment deal is being rewritten in real time. CEOs are making bold moves based on AI's promise rather than its proven impact. Layoffs linked to AI dominated headlines last year, but Gartner data shows fewer than 1% were due to actual productivity gains.

This anticipation drives investment reorientations, and Oracle's case is representative here. Not everyone is investing in infrastructure, but many are reinvesting in engineering to automate other business functions and, most importantly, to be ready for the future.

AI is no longer just a growth story; it's a cost-reduction tool, and firms are restructuring accordingly. What we're witnessing is a shift from headcount-driven expansion to automation-led productivity, a transition that will define the tech sector in the coming years. —Alan Cohen, analyst at RationalFX

Now, this isn't new, and I notice a certain hypocrisy among some developers who are discovering today that their profession has always been about automating other people's jobs. It's a shame to only discover it when it touches us personally.

Anyway, what's certain is that companies are anticipating cuts without yet having proof of the gains to come. And it's not just a few layoffs. We're seeing other signs, notably raised by levels.fyi's founder: a simplification of career paths. A layoff plan is temporary; but when you start eliminating rungs in career ladders, it signals you're anticipating a durable, global reduction in headcount.

And yet, once again, the gains aren't that obvious so far. We all have our opinion on this. I consider myself more productive with AI, but not everyone agrees. In any case, these are just opinions. There are studies on the topic of productivity, but no consensus.
You can find studies showing we're less productive, but you can also find others saying the opposite. The causes are multiple. The first is what's called the productivity paradox:

You can see the computer age everywhere but in the productivity statistics

Yes, back then we wondered whether computers really made us more productive; it was far from certain. This paradox is explained in two ways. First, companies spend more time configuring tools, training people, and reorganizing workflows than actually producing more. Second, a new technology requires a learning period, which can be quite long, before it is mastered. And that's what we're seeing today: AI usage is totally new, and many people are just faster at doing what they did badly before.

Besides, it's not as if we knew how to measure developer productivity anyway. I'll remind you that this question still hasn't found a universal answer since we started asking it. Now, I've also heard plenty of CTOs and IT directors privately say they have the means to prove it. But they don't want to, because proving it would mean making decisions they don't want to make. And I can tell you that in this period, I'm glad I'm no longer a CTO.

Still, as we've seen, productivity gains or not, can we really say all current layoffs are AI-related? Probably not. A recently cited study shows that 59% of HR leaders admit AI was used as a "cover" to justify budget cuts that were actually driven by:

Over-hiring post-COVID
Investor pressure to increase margins
Internal strategy mistakes

But even that would be overly reductive. It's mostly the nth demonstration that we've entered a new era post-COVID. Between rising inflation, ongoing trade wars, endless debates about tariffs, skyrocketing energy costs, and various conflicts paralyzing parts of international commerce, we're really in a recession. AI is a facade to hide the rest.
When Trump gets excited about his Stargate project (building datacenters), it's storytelling to hide the mess, even if it's true that AI is probably one of the drivers of the military sector in the coming years, and losing ground on it is probably making the US nervous.

Because the worst part is that even on AI, it's not certain the people leading the dance will be American. Recent Chinese models like Ernie, DeepSeek, Qwen, and Kimi are largely on par with Gemini or ChatGPT, without necessarily costing the same: Kimi and DeepSeek reportedly cost 10% of their American counterparts during their training phases. Which, incidentally, is encouraging but mostly logical. Technology improves, and we've never seen tech stay this inefficient over time. The computer that sent a rocket to the moon was less powerful than our smartphones, despite consuming far more energy.

For all these reasons, US companies are in full downsizing mode. The AI players need to become more competitive, so they're investing heavily while cutting payroll at the same time. Other tech companies are following suit, further constrained by hyper-unfavorable economic conditions, and in a context where saying you're laying off to increase productivity is easier to sell than admitting reality.

And us, in the middle of all this? Well... I'll be honest: I really wondered how to conclude this piece. I always try to end on a positive note, but the exercise is difficult here. I'll try anyway.

Is this the end of an era? Probably the era of unreasonable hyper-growth, which isn't so bad. This forced downsizing might help us get back to basics instead of just chasing vanity metrics (like headcount). It's also a global economic shift, with a US bloc that seems to be faltering. I want to see some positivity in thinking that Europe has cards to play. We're less affected than the US by the recent massive waves of layoffs, probably because our payrolls are less inflated and our social models more solid.
While the American giants painfully refocus, it's Europe's moment to catch up. These new technologies, more accessible and efficient, let us move faster with fewer resources. Maybe it's finally time to create real European tech alternatives: more sober, more pragmatic.

On that note, you can go back to your normal activities.
