Posts in Journalism (20 found)
Simon Willison 4 days ago

Highlights from my appearance on the Data Renegades podcast with CL Kao and Dori Wilson

I talked with CL Kao and Dori Wilson for an episode of their new Data Renegades podcast titled Data Journalism Unleashed with Simon Willison. I fed the transcript into Claude Opus 4.5 to extract this list of topics with timestamps and illustrative quotes. It did such a good job I'm using what it produced almost verbatim here - I tidied it up a tiny bit and added a bunch of supporting links.

What is data journalism and why it's the most interesting application of data analytics [02:03] "There's this whole field of data journalism, which is using data and databases to try and figure out stories about the world. It's effectively data analytics, but applied to the world of news gathering. And I think it's fascinating. I think it is the single most interesting way to apply this stuff because everything is in scope for a journalist."

The origin story of Django at a small Kansas newspaper [02:31] "We had a year's paid internship from university where we went to work for this local newspaper in Kansas with this chap Adrian Holovaty. And at the time we thought we were building a content management system."

Building the "Downloads Page" - a dynamic radio player of local bands [03:24] "Adrian built a feature of the site called the Downloads Page. And what it did is it said, okay, who are the bands playing at venues this week? And then we'll construct a little radio player of MP3s of music of bands who are playing in Lawrence in this week."

Working at The Guardian on data-driven reporting projects [04:44] "I just love that challenge of building tools that journalists can use to investigate stories and then that you can use to help tell those stories. Like if you give your audience a searchable database to back up the story that you're presenting, I just feel that's a great way of building more credibility in the reporting process."

Washington Post's opioid crisis data project and sharing with local newspapers [05:22] "Something the Washington Post did that I thought was extremely forward thinking is that they shared [the opioid files] with other newspapers. They said, 'Okay, we're a big national newspaper, but these stories are at a local level. So what can we do so that the local newspaper and different towns can dive into that data for us?'"

NICAR conference and the collaborative, non-competitive nature of data journalism [07:00] "It's all about trying to figure out what is the most value we can get out of this technology as an industry as a whole." NICAR 2026

ProPublica and the Baltimore Banner as examples of nonprofit newsrooms [09:02] "The Baltimore Banner are a nonprofit newsroom. They have a hundred employees now for the city of Baltimore. This is an enormously, it's a very healthy newsroom. They do amazing data reporting... And I believe they're almost breaking even on subscription revenue [correction, not yet], which is astonishing."

The "shower revelation" that led to Datasette - SQLite on serverless hosting [10:31] "It was literally a shower revelation. I was in the shower thinking about serverless and I thought, 'hang on a second. So you can't use Postgres on serverless hosting, but if it's a read-only database, could you use SQLite? Could you just take that data, bake it into a blob of a SQLite file, ship that as part of the application just as another asset, and then serve things on top of that?'"

Datasette's plugin ecosystem and the vision of solving data publishing [12:36] "In the past I've thought about it like how Pinterest solved scrapbooking and WordPress solved blogging, who's going to solve data like publishing tables full of data on the internet? So that was my original goal."

Unexpected Datasette use cases: Copenhagen electricity grid, Brooklyn Cemetery [13:59] "Somebody was doing research on the Brooklyn Cemetery and they got hold of the original paper files of who was buried in the Brooklyn Cemetery. They digitized those, loaded the results into Datasette and now it tells the story of immigration to New York."

Bellingcat using Datasette to investigate leaked Russian food delivery data [14:40] "It turns out the Russian FSB, their secret police, have an office that's not near any restaurants and they order food all the time. And so this database could tell you what nights were the FSB working late and what were the names and phone numbers of the FSB agents who ordered food... And I'm like, 'Wow, that's going to get me thrown out of a window.'" Bellingcat: Food Delivery Leak Unmasks Russian Security Agents

The frustration of open source: no feedback on how people use your software [16:14] "An endless frustration in open source is that you really don't get the feedback on what people are actually doing with it."

Open office hours on Fridays to learn how people use Datasette [16:49] "I have an open office hours Calendly, where the invitation is, if you use my software or want to use my software, grab 25 minutes to talk to me about it. And that's been a revelation. I've had hundreds of conversations in the past few years with people."

Data cleaning as the universal complaint - 95% of time spent cleaning [17:34] "I know every single person I talk to in data complains about the cleaning that everyone says, 'I spend 95% of my time cleaning the data and I hate it.'"

Version control problems in data teams - Python scripts on laptops without Git [17:43] "I used to work for a large company that had a whole separate data division and I learned at one point that they weren't using Git for their scripts. They had Python scripts, littering laptops left, right and center and lots of notebooks and very little version control, which upset me greatly."

The Carpentries organization teaching scientists Git and software fundamentals [18:12] "There's an organization called The Carpentries. Basically they teach scientists to use Git. Their entire thing is scientists are all writing code these days. Nobody ever sat them down and showed them how to use the UNIX terminal or Git or version control or write tests. We should do that."

Data documentation as an API contract problem [21:11] "A coworker of mine said, you do realize that this should be a documented API interface, right? Your data warehouse view of your project is something that you should be responsible for communicating to the rest of the organization and we weren't doing it."

The importance of "view source" on business reports [23:21] "If you show somebody a report, you need to have view source on those reports... somebody would say 25% of our users did this thing. And I'm thinking I need to see the query because I knew where all of the skeletons were buried and often that 25% was actually a 50%."

Fact-checking process for data reporting [24:16] "Their stories are fact checked, no story goes out the door without someone else fact checking it and without an editor approving it. And it's the same for data. If they do a piece of data reporting, a separate data reporter has to audit those numbers and maybe even produce those numbers themselves in a separate way before they're confident enough to publish them."

Queries as first-class citizens with version history and comments [27:16] "I think the queries themselves need to be first class citizens where like I want to see a library of queries that my team are using and each one I want to know who built it and when it was built. And I want to see how that's changed over time and be able to post comments on it."

Two types of documentation: official docs vs. temporal/timestamped notes [29:46] "There's another type of documentation which I call temporal documentation where effectively it's stuff where you say, 'Okay, it's Friday, the 31st of October and this worked.' But the timestamp is very prominent and if somebody looks that in six months time, there's no promise that it's still going to be valid to them."

Starting an internal blog without permission - instant credibility [30:24] "The key thing is you need to start one of these without having to ask permission first. You just one day start, you can do it in a Google Doc, right?... It gives you so much credibility really quickly because nobody else is doing it."

Building a search engine across seven documentation systems [31:35] "It turns out, once you get a search engine over the top, it's good documentation. You just have to know where to look for it. And if you are the person who builds the search engine, you secretly control the company."

The TIL (Today I Learned) blog approach - celebrating learning basics [33:05] "I've done TILs about 'for loops' in Bash, right? Because okay, everyone else knows how to do that. I didn't... It's a value statement where I'm saying that if you've been a professional software engineer for 25 years, you still don't know everything. You should still celebrate figuring out how to learn 'for loops' in Bash."

Coding agents like Claude Code and their unexpected general-purpose power [34:53] "They pretend to be programming tools but actually they're basically a sort of general agent because they can do anything that you can do by typing commands into a Unix shell, which is everything."

Skills for Claude - markdown files for census data, visualization, newsroom standards [36:16] "Imagine a markdown file for census data. Here's where to get census data from. Here's what all of the columns mean. Here's how to derive useful things from that. And then you have another skill for here's how to visualize things on a map using D3... At the Washington Post, our data standards are this and this and this." Claude Skills are awesome, maybe a bigger deal than MCP

The absurd 2025 reality: cutting-edge AI tools use 1980s terminal interfaces [38:22] "The terminal is now accessible to people who never learned the terminal before 'cause you don't have to remember all the commands because the LLM knows the commands for you. But isn't that fascinating that the cutting edge software right now is it's like 1980s style— I love that. It's not going to last. That's a current absurdity for 2025."

Cursor for data? Generic agent loops vs. data-specific IDEs [38:18] "More of a notebook interface makes a lot more sense than a Claude Code style terminal 'cause a Jupyter Notebook is effectively a terminal, it's just in your browser and it can show you charts."

Future of BI tools: prompt-driven, instant dashboard creation [39:54] "You can copy and paste a big chunk of JSON data from somewhere into [an LLM] and say build me a dashboard. And they do such a good job. Like they will just decide, oh this is a time element so we'll do a bar chart over time and these numbers feel big so we'll put those in a big green box."

Three exciting LLM applications: text-to-SQL, data extraction, data enrichment [43:06] "LLMs are stunningly good at outputting SQL queries. Especially if you give them extra metadata about the columns. Maybe a couple of example queries and stuff."

LLMs extracting structured data from scanned PDFs at 95-98% accuracy [43:36] "You file a freedom of information request and you get back horrifying scanned PDFs with slightly wonky angles and you have to get the data out of those. LLMs for a couple of years now have been so good at, 'here's a page of a police report, give me back JSON with the name of the arresting officer and the date of the incident and the description,' and they just do it."

Data enrichment: running cheap models in loops against thousands of records [44:36] "There's something really exciting about the cheaper models, Gemini Flash 2.5 Lite, things like that. Being able to run those in a loop against thousands of records feels very valuable to me as well." datasette-enrichments

Multimodal LLMs for images, audio transcription, and video processing [45:42] "At one point I calculated that using Google's least expensive model, if I wanted to generate captions for like 70,000 photographs in my personal photo library, it would cost me like $13 or something. Wildly inexpensive." Correction: with Gemini 1.5 Flash 8B it would cost 173.25 cents

First programming language: hated C++, loved PHP and Commodore 64 BASIC [46:54] "I hated C++ 'cause I got my parents to buy me a book on it when I was like 15 and I did not make any progress with Borland C++ compiler... Actually, my first programming language was Commodore 64 BASIC. And I did love that. Like I tried to build a database in Commodore 64 BASIC back when I was like six years old or something."

Biggest production bug: crashing The Guardian's MPs expenses site with a progress bar [47:46] "I tweeted a screenshot of that progress bar and said, 'Hey, look, we have a progress bar.' And 30 seconds later the site crashed because I was using SQL queries to count all 17,000 documents just for this one progress bar." Crowdsourced document analysis and MP expenses

Favorite test dataset: San Francisco's tree list, updated several times a week [48:44] "There's 195,000 trees in this CSV file and it's got latitude and longitude and species and age when it was planted... and get this, it's updated several times a week... most working days, somebody at San Francisco City Hall updates their database of trees, and I can't figure out who."

Showrunning TV shows as a management model - transferring vision to lieutenants [50:07] "Your job is to transfer your vision into their heads so they can go and have the meetings with the props department and the set design and all of those kinds of things... I used to sniff at the idea of a vision when I was young and stupid. And now I'm like, no, the vision really is everything because if everyone understands the vision, they can make decisions you delegate to them." The Eleven Laws of Showrunning by Javier Grillo-Marxuach

Hot take: all executable code with business value must be in version control [52:21] "I think it's inexcusable to have executable code that has business value that is not in version control somewhere."

Hacker News automation: GitHub Actions scraping for notifications [52:45] "I've got a GitHub actions thing that runs a piece of software I wrote called shot-scraper that runs Playwright, that loads up a browser in GitHub actions to scrape that webpage and turn the results into JSON, which then get turned into an atom feed, which I subscribe to in NetNewsWire."

Dream project: whale detection camera with Gemini AI [53:47] "I want to point a camera at the ocean and take a snapshot every minute and feed it into Google Gemini or something and just say, is there a whale yes or no? That would be incredible. I want push notifications when there's a whale."

Favorite podcast: Mark Steel's in Town (hyperlocal British comedy) [54:23] "Every episode he goes to a small town in England and he does a comedy set in a local venue about the history of the town. And so he does very deep research... I love that sort of like hyperlocal, like comedy, that sort of British culture thing." Mark Steel's in Town available episodes

Favorite fiction genre: British wizards caught up in bureaucracy [55:06] "My favorite genre of fiction is British wizards who get caught up in bureaucracy...
I just really like that contrast of like magical realism and very clearly researched government paperwork and filings." The Laundry Files, Rivers of London, The Rook

I used a Claude Project for the initial analysis, pasting in the HTML of the transcript since that included elements. The project uses the following custom instructions:

You will be given a transcript of a podcast episode. Find the most interesting quotes in that transcript - quotes that best illustrate the overall themes, and quotes that introduce surprising ideas or express things in a particularly clear or engaging or spicy way. Answer just with those quotes - long quotes are fine.

I then added a follow-up prompt saying:

Now construct a bullet point list of key topics where each item includes the mm:ss in square braces at the end
Then suggest a very comprehensive list of supporting links I could find

Here's the full Claude transcript of the analysis.
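The "shower revelation" idea - bake your data into a SQLite file at build time, then serve it strictly read-only - is easy to try for yourself. Here's a minimal Python sketch using only the standard library; the `trees` table and its two rows are invented for illustration (a nod to the San Francisco tree dataset), and Datasette itself does far more than this:

```python
import os
import sqlite3
import tempfile

# "Deploy" step: build the SQLite file once and ship it as a static asset.
path = os.path.join(tempfile.mkdtemp(), "trees.db")
conn = sqlite3.connect(path)
conn.execute(
    "CREATE TABLE trees (id INTEGER PRIMARY KEY, species TEXT, lat REAL, lon REAL)"
)
conn.executemany(
    "INSERT INTO trees (species, lat, lon) VALUES (?, ?, ?)",
    [
        ("Coast Live Oak", 37.77, -122.42),
        ("Monterey Cypress", 37.76, -122.45),
    ],
)
conn.commit()
conn.close()

# "Serve" step: open the file with SQLite's mode=ro URI flag, so no
# request handler can ever mutate the baked-in data.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
count = ro.execute("SELECT COUNT(*) FROM trees").fetchone()[0]
print(count)  # prints 2
```

Any attempt to INSERT or UPDATE through the read-only connection raises `sqlite3.OperationalError`, which is exactly the property that makes a blob of SQLite safe to serve from stateless or serverless hosting.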

Rik Huijzer 1 week ago

The X Community Notes Are Different, Are They?

In some cases, Wikipedia feels the need to "clarify" a certain video, which a commenter aptly called "the blue box of gaslighting": ![YouTube_screenshot_demonstrating_Wikipedia_fact-checking.png](/files/ce94431fd8117f45) Now X Community Notes were promised to be something else, but to me they look very similar. The note posts some helpful "context": ![x-fact-checking-moon.png](/files/34a372a5cf49e063) The reason I'm critical is that, especially on a topic like this, the note doesn't add any information. People who believe that the moon landing was staged will still believe that a...

Manuel Moreale 2 weeks ago

Y’all are great

I keep hearing and reading people bitching and moaning about the web being dead, lamenting the good old days of the web, when real people were out there, and sites weren’t all about promoting some shit nobody cares about or attempting to amass an audience only to then flip it in exchange for money. And I’m sitting here, screaming at my screen «That web you’re missing is still here, you dumbdumb, you just have to leave your stupid corporate, algodriven, social media jail to find it» . This past Friday the interview with the lovely Nic Chan went live on People and Blogs. Her site has something mine does not: analytics. And they're public! That offered the rare opportunity for me to see the effect the series has on a featured blog. This series lives on my blog but has nothing to do with me. It exists to connect you, the human who’s reading this, with all the other wonderful humans that are still out there, spending their time making sure the old school web, the one made by the people, for the people, is not dying. And seeing that bump on Nic’s analytics made me so happy. Because it means the series is working and doing its job. And it’s all because people like you are taking the time to read these interviews and click on those links to visit those blogs. And maybe you’re also taking time to reach out to those people and connect with them. This is the web many people are missing, a web that is, in fact, still here, very much alive. Y’all are great. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs

neilzone 2 weeks ago

Perhaps I just stop reading the news?

I have been looking for a while for a reliable, online, text-based source of important (subjective, I know, but to me that doesn't include sport or celebrities or what is on TV) UK and world news, with a focus on reporting rather than analysis. At this point, I've basically given up; I don't think that what I want exists, paid or free. But do I need to read "the news" anyway? I wonder what I really get from it, other than an increasing sense of despair and frustration. I get updates from key primary sources, through a combination of RSS and website monitoring. I'm not concerned about missing a key regulatory or legislative update, which is important to me from a work point of view. I subscribe to 404Media, which I enjoy, although a more UK-focussed version would be amazing. I occasionally look at our local news site, when I can stomach the clickbait headlines. I think I've got more uBlock Origin filters set up for that site than for any other, in an attempt to make it usable. I'd rather hoped that there was a subscription option which does away with all the advertising, gives actually informative headlines and the like, but no - it is an app-based offering, with an "ad-lite … experience". I can see what people are discussing in the fediverse, where my filters for most party politics are pretty effective. But predominantly I enjoy the fediverse as a place to chat and have fun, not to be exposed to "news". Having an appreciation of what is going on in the world, in a geopolitical sense, is also useful for my work, and that is a bit trickier. It is primarily for this that I've continued to read the BBC news, despite my increasing dissatisfaction with it. But perhaps it is time - even for just a test period - for me to stop reading "news sites", and see how I fare.

Ruslan Osipov 2 weeks ago

PC Gamer physical edition is good, actually

I spend a lot of time in front of a computer or a phone, even now that I have a kid. Hey - she needs to sleep, and I have some time to kill. Many of my hobbies revolve around a screen too - like playing video games, tinkering with stuff, or writing. It's unsurprising that I've been wanting to take a step away from the screen and find a way to engage with physical media more. I used to read a lot of books - I don't anymore. I listen to audiobooks sometimes, but it's been a good year or two since I last sat down and read a book cover to cover. That's fine - life ebbs and flows, and even though sitting down and reading books used to be a huge part of my life - they aren't today, and that's okay. But it's nice to put down devices and just hold something in your hand. I worked around this limitation though and decided to get more into magazines. Yeah, print media is still alive and kicking. We have two physical publications in our household this year - The New Yorker, and PC Gamer. Two very different magazines, and you can probably tell which subscription appealed to my wife - and which one to me. I've been reading both, although I'll admit that PC Gamer has received more of my attention. Hey - unlike The New Yorker, which oppressively sends you a new issue each week, PC Gamer has been sending me issues monthly. And I don't need to tell you that The New Yorker is a great publication - it's got a hell of a reputation, and for a good reason. It's quality journalism, and peak writing, or so I'm told, and it certainly reads that way despite my limited knowledge on the subject. But I do know a thing or two about video games, and one thing I know is that gaming journalism from major publications - PC Gamer included - has been steadily declining in quality over the past decade. Between corporate relationships, out of touch and burnt out reviewers, and sanitized, often generic pieces - I have been avoiding mainstream gaming media.
There are lots of small independent reviewers who do a wonderful job covering the titles I care about, and I trust those a lot more. I've read somewhere that the print edition of PC Gamer is somewhat different. You still have the same people working on the issue, but the time pressure's different, articles can't be updated once they go live, and there's much more fun and creative writing. I'm sure all of that's available online too, but I don't think I would've read any of that if the magazine wasn't already in my hands. Reading editions of PC Gamer feels like stepping into a time capsule, in large part due to fairly substantial retro game coverage - you can't exactly publish breaking news in a monthly print, so the focus is much more on having interesting things to say. Chronicles of Oblivion in-character playthroughs, developer interviews, quirky reviews - there's lots to love. I've heard Edge Magazine is well known for high quality writing and timeless game critique. I think I'll check that out too - here, I just subscribed.

Manuel Moreale 3 weeks ago

A moment with a decidedly less gloomy church

If you’re subscribed to my From the Summit newsletter, you might recognise this church. It’s the same one I wrote about in the most recent missive, only this time there was a lovely sunny day and the whole place was not engulfed in the fog. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs

Rik Huijzer 2 months ago

A new description for the YouTube Wikipedia "fact checking"

In a YouTube comment, someone gave a great description for the Wikipedia fact-checking: ![YouTube_screenshot_demonstrating_Wikipedia_fact-checking.png](/files/ce94431fd8117f45) The correct description for this "context" is "the blue box of gaslighting". In general, the word "context" visible above the box is also misleading since providing "context" is often a euphemism for lying. Brought to you by a comment below an AwakenWithJP video.


Summary of reading: July - September 2025

"The Compromise" by Sergei Dovlatov - (read in Russian) the author was a journalist in the Soviet Union in the 60s and 70s. This book is a humorous, semi-biographical account of some of the issues faced by Soviet journalists in their attempt to report news aligned with party lines. Very good writing, though the Russian in this book was a bit difficult for me at times. "Twilight of the Gods: War in the Western Pacific, 1944-1945" by Ian Toll - the third part of the trilogy. As an overall conclusion to the series, I will reiterate the earlier feedback: the writing is great, the book is very readable despite its immense size, but I wish the author's focus was elsewhere. If you're looking for very detailed tactical accounts of key battles, this is the book for you. It doesn't have much about the more strategic aspects, and especially the U.S. industrial capacity that played such a key role in the war. How was the production scaled so much, especially with millions of people drafted? I'd definitely be interested in looking for additional sources of information on this subject. "Threaded Interpretive Languages" by R.G. Loeliger - describes some traditional approaches to implementing FORTH (which is the prime example of a threaded interpretive language, or TIL) in assembly. This book is from the late 1970s, so the target machine used is a Z80. Overall it's pretty good, with useful diagrams and quirky humor, but it certainly shows its age. "System Design Interview – An insider's guide" by Alex Xu - a book form of the author's guidelines for system design interviews. It's okay, far from great. The sections are all very repetitive and the sum total of unique insights and ideas in the book is low. Moreover, it's some sort of samizdat instant-Amazon printing of rather low quality, with no index, unfocused diagrams and barely any copyediting. "Why Nations Fail: The Origins of Power, Prosperity, and Poverty" by Daron Acemoglu and James A.
Robinson - describes the authors' theory of why some countries are rich and others are poor. The crux of the theory is extractive vs. inclusive political and economic institutions; in other words, a dictatorship vs. a pluralist government. Overall, the theory is interesting and insightful; the book is a bit scattered, though, with the authors jumping between examples haphazardly, making it difficult to focus. I like that the book doesn't shy away from making predictions for the future rather than just analyzing history. "A Biography of the Pixel" by Alvy Ray Smith - the history of computer graphics, told by one of the founders of Pixar. Some parts of this book are good, but I can't say I really enjoyed most of it. Lots of very detailed history and names, and project names, etc. "The Age of Revolution: A History of the English Speaking Peoples, Volume III" by Winston Churchill - covers the period from 1688 to 1815. Though this series is ostensibly about all the "English speaking peoples", the focus is clearly on England. There's some coverage of the USA, but it mostly focuses on the interactions with the British (revolution and war of 1812), and there's also quite a bit on Napoleon and France. The series becomes somewhat more interesting as it approaches the more modern era. "The Nvidia Way: Jensen Huang and the making of a tech giant" by Tae Kim - a very interesting and well-written biography of Nvidia, from the early founding days to ~2024. "Babylon: Mesopotamia and the Birth of Civilization" by Paul Kriwaczek - an interesting historic account of Mesopotamia, from Eridu until the fall of Babylon. "Demon Copperhead" by Barbara Kingsolver - a novel about a boy coming of age as an orphan in foster care, houses of friends, etc. The backdrop is the opioid epidemic of the early 2000s in Appalachia, with broken families and lots of drugs.
The book is pretty good, but the Pulitzer Prize here is clearly for the unsettling coverage of an ongoing hot topic, not for any sort of literary flourish. "The Color of Our Sky" by Amita Trasi - the fictional story of two girls from different castes in India who find their lives intertwined in complex ways. Some thought-provoking and troubling accounts of traditions still prevalent in India in relation to discrimination, human trafficking, child abuse and modern slavery. "El murmullo de las abejas" by Sofía Segovia - (read in Spanish) slightly mystical novel about the life of an aristocratic family in the north of Mexico in the early 20th century. Maybe it's just the Spanish, but I definitely got "100 años de soledad" vibes from this book: the mysticism, the multi-generational story going in circles, the ambience. "The Mysterious Island" by Jules Verne

Rik Huijzer 2 months ago

Wolves in the Netherlands

In the Netherlands, we currently have about 100 wolves walking around. Politicians say this is a "natural phenomenon" and that they walked here of their own volition. Meanwhile, animals in rural areas are being killed by these wolves and even a runner has been attacked, while the government forbids shooting them because it wants to first "establish a healthy state of conservation." The media is now even informing people about how to spot the difference between a wolf and a dog. Various other groups question the theory since the Netherlands is a tiny country with no real large swaths...

ava's blog 2 months ago

notebook 10 - yapping edition

Don't know why I had so much to say this time! Feeling better again. Reply via email Published 28 Sep, 2025

ava's blog 2 months ago

gazette post: resisting self-flanderization

Quick note that I published a post on Self-Flanderization for the Grizzly Gazette here :) Reply via email Published 14 Sep, 2025

Armin Ronacher 2 months ago

What’s a Foreigner?

Across many countries, resistance to immigration is rising — even places with little immigration, like Japan, now see rallies against it. I’m not going to take a side here. I want to examine a simpler question: who do we mean when we say “foreigner”? I would argue there isn’t a universal answer. Laws differ, but so do social definitions. In Vienna, where I live, immigration is visible: roughly half of primary school children don’t speak German at home. Austria makes citizenship hard to obtain. Many people born here aren’t citizens; at the same time, EU citizens living here have broad rights and labor-market access similar to native Austrians. Over my lifetime, the fear of foreigners has shifted: once aimed at nearby Eastern Europeans, it now falls more on people from outside the EU, often framed through religion or culture. Practically, “foreigner” increasingly ends up meaning “non-EU.” Keep in mind that over the last 30 years the EU went from 12 countries to 27. That’s a significant increase in social mobility. I believe this is quite different from what is happening in the United States. The present-day US debate is more tightly tied to citizenship and allegiance, which is partly why current fights there include attempts to narrow who gets citizenship at birth. The worry is less about which foreigners come and more about the terms of becoming American and whether newcomers will embrace what some define as American values. Inside the EU, the concept of EU citizenship changes social reality. Free movement, aligned standards, interoperable social systems, and easier labor mobility make EU citizens feel less “foreign” to each other — despite real frictions. The UK before Brexit was a notable exception: less integrated in visible ways and more hostile to Central and Eastern European workers. Perhaps another sign that the level of integration matters. In practical terms, allegiances are also much less clearly defined in the EU.
There are people who live their entire lives in other EU countries and whose allegiance is no longer clearly aligned to any one country. Legal immigration itself is widely misunderstood. Most systems are both far more restrictive in some areas and far more permissive in others than people assume. On the one hand, what’s called “illegal” is often entirely lawful. Many who are considered “illegal” are legally awaiting asylum decisions or are accepted refugees. These are processes many think shouldn’t exist, but they are, in fact, legal. On the other hand, the requirements for non-asylum immigration are very high, and most citizens of a country themselves would not qualify for skilled immigration visas. Meanwhile, the notion that a country could simply “remove all foreigners” runs into practical and ethical dead ends. Mobility pressures aren’t going away; they’re reinforced by universities, corporations, individual employers, demographics, and geopolitics. Citizenship is just a small wrinkle. In Austria, you generally need to pass a modest German exam and renounce your prior citizenship. That creates odd outcomes: native-born non-citizens who speak perfect German but lack a passport, and naturalized citizens who never fully learned the language. Legally clear, socially messy — and not unique to Austria. The high hurdle to obtaining a passport also leads many educated people to intentionally opt out of becoming citizens. The cost that comes with renouncing a passport is not to be underestimated. Where does this leave us? The realities of international mobility leave our current categories of immigration straining and misaligned with what the population at large thinks immigration should look like. Economic anxiety, war, and political polarization are making some groups of foreigners targets, while the deeper drivers behind immigration will only keep intensifying. Perhaps we need to admit that we’re all struggling with these questions.
The person worried about their community or country changing too quickly and the immigrant seeking a better life are both responding to forces larger than themselves. In a world where capital moves freely but most people cannot, where climate change might soon displace millions, and where birth rates are collapsing in wealthy nations, our immigration systems will be tested and stressed, and our current laws and regulations are likely inadequate.

A Working Library 2 months ago

Apocalypse

An apocalypse is always both an ending and a beginning. Lizzie Wade charts past apocalypses, correcting glib narratives that too often presume neat binaries of winners and losers, or assert that apocalypses were always complete. In fact, what happens during and after an apocalypse is never straightforward, and a great deal of adapting—and surviving—takes place amid the ruins. Wade shows how we live in a post-apocalyptic world, one wrought by colonial atrocities of which the consequences are still unfolding. But within that acknowledgement is a hint of power: if we choose to heed the lessons of the apocalypses of the past, we just might learn how to survive the one we’re in now—and all the ones ahead. View this post on the web, subscribe to the newsletter, or reply via email.


How To Argue With An AI Booster

Editor's Note: For those of you reading via email, I recommend opening this in a browser so you can use the Table of Contents. This is my longest newsletter - a 16,000-word-long opus - and if you like it, please subscribe to my premium newsletter. Thanks for reading! In the last two years I've written no less than 500,000 words, with many of them dedicated to debunking both current and long-standing myths about the state of technology and the tech industry itself. While I feel no resentment — I really enjoy writing, and feel privileged to be able to write about this and make money doing so — I do feel that there is a massive double standard between those perceived as "skeptics" and "optimists." To be skeptical of AI is to commit yourself to near-constant demands to prove yourself, and endless nags of "but what about?" with each one — no matter how small — presented as a fact that defeats any points you may have. Conversely, being an "optimist" allows you to take things like AI 2027 — which I will fucking get to — seriously to the point that you can write an entire feature about fan fiction in the New York Times and nobody will bat an eyelid. In any case, things are beginning to fall apart. Two of the actual reporters at the New York Times (rather than a "columnist") reported out last week that Meta is yet again "restructuring" its AI department for the fourth time, and that it’s considering "downsizing the A.I. division overall," which sure doesn't seem like something you'd do if you thought AI was the future. Meanwhile, the markets are also thoroughly spooked by an MIT study covered by Fortune that found that 95% of generative AI pilots at companies are failing, and though MIT NANDA has now replaced the link to the study with a Google Form to request access, you can find the full PDF here, in the kind of move that screams "PR firm wants to try and set up interviews." Not for me, thanks!
In any case, the report is actually grimmer than Fortune made it sound, saying that "95% of organizations are getting zero return [on generative AI]." The report says that "adoption is high, but transformation is low," adding that "...few industries show the deep structural shifts associated with past general-purpose technologies such as new market leaders, disrupted business models, or measurable changes in customer behavior." Yet the most damning part was the "Five Myths About GenAI in the Enterprise," which is probably the most wilting takedown of this movement I've ever seen: These are brutal, dispassionate points that directly deal with the most common boosterisms. Generative AI isn't transforming anything, AI isn't replacing anyone, enterprises are trying to adopt generative AI but it doesn't fucking work, and the thing holding back AI is the fact it doesn't fucking work. This isn't a case where "the enterprise" is suddenly going to save these companies, because the enterprise already tried, and it isn't working. An incorrect read of the study has been that it's the "learning gap" that makes these things less useful, when the study actually says that "...the fundamental gap that defines the GenAI divide [is that] users resist tools that don't adapt, model quality fails without context, and UX suffers when systems can't remember." This isn't something you learn your way out of. The products don't do what they're meant to do, and people are realizing it. Nevertheless, boosters will still find a way to twist this study to mean something else. They'll claim that AI is still early, that the opportunity is still there, that we "didn't confirm that the internet or smartphones were productivity boosting," or that we're in "the early days" of AI, somehow, three years and hundreds of billions and thousands of articles in. I'm tired of having the same arguments with these people, and I'm sure you are too.
No matter how much blindingly obvious evidence there is to the contrary, they will find ways to ignore it. They continually make smug comments about people "wishing things would be bad" or suggesting you are stupid — and yes, that is their belief! — for not believing generative AI is disruptive. Today, I’m going to give you the tools to fight back against the AI boosters in your life. I’m going to go into the generalities of the booster movement — the way they argue, the tropes they cling to, and the ways in which they use your own self-doubt against you. They’re your buddy, your boss, a man in a gingham shirt at Epic Steakhouse who won't leave you the fuck alone, a Redditor, a writer, a founder or a simple con artist — whoever the booster in your life is, I want you to have the words to fight them with. So, this is my longest newsletter ever, and I built it for quick reference - and, for the first time, gave you a Table of Contents. Now, an AI booster is not, in many cases, an actual fan of artificial intelligence. People like Simon Willison or Max Woolf who actually work with LLMs on a daily basis don’t see the need to repeatedly harass everybody, or talk down to them about their unwillingness to pledge allegiance to the graveyard smash of generative AI. In fact, the closer I’ve found somebody to actually building things with LLMs, the less likely they are to emphatically argue that I’m missing out by not doing so myself. No, the AI booster is symbolically aligned with generative AI. They are fans in the same way that somebody is a fan of a sports team, their houses emblazoned with every possible piece of tat they can find, their Sundays living and dying by the success of the team, except even fans of the Dallas Cowboys have a tighter grasp on reality. Kevin Roose and Casey Newton are two of the most notable boosters, and — as I’ll get into later in this piece — neither of them has a consistent or comprehensive knowledge of AI.
Nevertheless, they will insist that “everybody is using AI for everything” — a statement that even a booster should realize is incorrect based on the actual abilities of the models. But that’s because it isn’t about what’s actually happening, it’s about allegiance. AI symbolizes something to the AI booster — a way that they’re better than other people, that makes them superior because they (unlike “cynics” and “skeptics”) are able to see the incredible potential in the future of AI, but also how great it is today, though they never seem to be able to explain why outside of “it replaced search for me!” and “I use it to draw connections between articles I write,” which is something I do without AI using my fucking brain. Boosterism is a kind of religion, interested in finding symbolic “proof” that things are getting “better” in some indeterminate way, and that anyone that chooses to believe otherwise is ignorant. I’ll give you an example. Thomas Ptacek’s “My AI Skeptic Friends Are All Nuts” was catnip for boosters — a software engineer using technical terms like “interact with Git” and “MCP,” vague charts, and, of course, an extremely vague statement that says hallucinations aren’t a problem: Is it? Anyway, my favourite part of the blog is this: Nobody projects more than an AI booster. They thrive on the sense they’re oppressed and villainized after years of seemingly every outlet claiming they’re right regardless of whether there’s any proof. They sneer and jeer and cry constantly that people are not showing adequate amounts of awe when an AI lab says “we did something in private, we can’t share it with you, but it’s so cool,” and constantly act as if they’re victims as they spread outright misinformation, either through getting things wrong or never really caring enough to check. Also, none of the booster arguments actually survive a thorough response, as Nik Suresh proved with his hilarious and brutal takedown of Ptacek’s piece.
There are, I believe, some people who truly do love using LLMs, yet they are not the ones defending them. Ptacek’s piece drips with condescension, to the point that it feels like he’s trying to convince himself how good LLMs are, and because boosters are eternal victims, he wrote them a piece that they could send around to skeptics and say “heh, see?” without being able to explain why it was such a brutal takedown, mostly because they can’t express why other than “well, this guy gets it!” One cannot be the big, smart genius that understands the glory and power of AI while also acting like a scared little puppy every time somebody tells them it sucks. In fact, that’s a great place to start. When you speak to an AI booster, you may get the instinct to shake them vigorously, or respond to their post by saying to do something with your something, or that they’re “stupid.” I understand the temptation, but you want to keep your head on a swivel — they thrive on victimisation. I’m sorry if you are an AI booster and this makes you feel bad. Please reflect on your work and how many times you’ve referred to somebody who didn’t understand AI in a manner that suggested they were ignorant, or tried to gaslight them by saying “AI was powerful” while providing no actionable ways in which it is. You cannot — and should not! — allow these people to act as if they are being victimized or “othered.” First and foremost: there are boosters at pretty much every major think tank, government agency and media outlet. It’s extremely lucrative being a booster. You’re showered with panel invites, access to executives, and are able to get headlines by saying how scared you are of the computer with ease. Being a booster is the easy path! Being a critic requires you to constantly have to explain yourself in a way that boosters never have to. If a booster says this to you, ask them to explain: There is no answer here, because this is not a coherent point of view.
Boosters are more successful, get more perks and are in general better-treated than any critic. Fundamentally, these people exist in the land of the vague. They will drag you toward what's just on the horizon, but never quite define what the thing that dazzles you will be, or when it will arrive. Really, their argument comes down to one thought: you must get on board now, because at some point it'll be so good that you'll feel stupid for ever doubting that something that kind of sucks would become really good. If this line sounds familiar, it’s because you’ve heard it a million times before, most notably with crypto. They will make you define what would impress you, which isn't your job, in the same way finding a use case for them isn't your job. In fact, you are the customer! Here’s a great place to start: say “that’s a really weird thing to say!” It is peculiar to suggest that somebody doesn’t get how to use a product, and that we, as the customer, must justify ourselves to our own purchases. Make them justify their attitude. Just like any product, we buy software to serve a need. This is meant to be artificial *intelligence* — why is it so fucking stupid that I have to work out why it's useful? The answer, of course, is that it has no intellect, is not intelligent, and Large Language Models are being pushed up a mountain by a cadre of people who are either easily impressed or invested — either emotionally or financially — in its success due to the company they keep or their intentions for the world. If a booster suggests you “just don’t get it,” ask them to explain: Their use cases will likely be that AI has replaced search for them, that they use it for brainstorming or journaling, proof-reading an article, or looking through a big pile of their notes (or some other corpus of information) and summarizing it or pulling out insights.
If a booster refers to AI “being powerful” and getting “more powerful,” ask them: The core of the AI booster’s argument is to make you feel bad. They will suggest you are intentionally not liking AI because you're a hater, or a cynic, or a Luddite. They will suggest that you are ignorant for not being amazed by ChatGPT. To be clear, anyone with a compelling argument doesn’t have to make you feel bad to convince you. The iPhone - and to be clear, I am referring to the concept of the smartphone and its utility, and I am aware that there was marketing for the iPhone - didn’t need a fucking marketing campaign to explain why one device that can do a bunch of things you already find useful was good. You don't have to be impressed by ANYTHING by default, and any product — especially software — designed to make you feel stupid for "not getting it" is poorly designed. ChatGPT is the ultimate form of Silicon Valley Sociopathy — you must do the work to find the use cases, and thank them for being given the chance to do so. AI is not even good, reliable software! It resembles the death of the art of technology — inconsistent and unreliable by definition, inefficient by design, financially ruinous, and it ADDS to the cognitive load of the user by requiring them to be ever-vigilant. So, here’s a really easy way to deal with this: if a booster ever suggests you are stupid or ignorant, ask them why it’s necessary to demean you to get their point across! Even if you are unable to argue on a technical level, make them explain why the software itself can’t convince you. Boosters will do everything they can to pull you off course. If you say that none of these companies make money, they’ll say it’s the early days. If you say AI companies burn billions, they’ll say the cost of inference is coming down. If you say the industry is massively overbuilding, they’ll say that this is actually just like the dot com boom and that the infrastructure will be picked up and used in the future.
If you say there are no real use cases, they’ll say that ChatGPT has 700 million weekly users. Every time it’s the same god damn arguments, so I’ve sat down and written as many of them as I can think of. Print this and feed it to your local booster today. Anytime a booster says “AI will,” tell them to stop and explain what AI can do, and if they insist, ask them both when to expect the things they’re talking about, and if they say “very soon,” ask them to be more specific. Get them to agree to a date, then call them on that date. There’s that “will” bullshit again. Agents don’t work! They don’t work at all. The term “agent” means, to quote Max Woolf, “a workflow where the LLM can make its own decisions, [such as in the case of] web search [where] the LLM is told “you can search the web if you need to” then can output “I should search the web” and do so.” Yet “agent” has now become a mythical creature that means “totally autonomous AI that can do an entire job.” If anyone tells you “agents are…” you should ask them to point to one. If they say “coding,” please demand that they explain how autonomous these things are, and if they say that they can “refactor entire codebases,” ask them what that means, and also laugh at them. Here’s a comprehensive rundown, but here’s a particularly important part: Long story short, agents are not autonomous, they do not replace jobs, they cannot “replace coders,” and they are not going to do so because probabilistic models are a horrible means of taking precise actions, and almost anyone who brings up agents as a booster is either misinformed or in the business of misinformation. Let's start with a really simple question: what does this actually mean? In many cases, I think they're referring to AI as being "like the early days of the internet." "The early days of the internet" can refer to just about anything. Are we talking about dial-up? DSL?
Are we talking about the pre-platform days when people accessed it via CompuServe or AOL? Yes, yes, I remember that article from Newsweek; I already explained it here: in any case, one guy saying that the internet won't be big doesn't mean a fucking thing about generative AI, and you are a simpleton if you think it does. One guy being wrong in some way is not a response to my work. I will crush you like a bug. If your argument is that the early internet required expensive Sun Microsystems servers to run, Jim Covello of Goldman Sachs addressed that by saying that those costs "pale in comparison," adding that we also didn't need to expand our power grid to build the early Web. The claim that nobody saw the smartphone coming is a straight-up lie. Sorry! As Jim Covello noted, there were hundreds of presentations in the early 2000s that included roadmaps accurately fitting how smartphones rolled out, and no such roadmap exists for generative AI. The iPhone was also an immediate success as a thing that people paid for, with Apple selling four million units in the space of six months. Hell, in 2006 (the year before the iPhone launched), there were an estimated 17.7 million smartphone shipments worldwide (mostly from BlackBerry and other companies building on Windows Mobile, with Palm vacuuming up the crumbs), though to be generous to the generative AI boosters, I’ll disregard those. The original Attention Is All You Need paper — the one that kicked off the transformer-based Large Language Model era — was published in June 2017. ChatGPT launched in November 2022. Nevertheless, if we're saying "early days" here, we should actually define what that means. As I mentioned above, people paid for the iPhone immediately, despite it being a device that was completely and utterly new.
While there was a small group of consumers that might have used similar devices (like the iPAQ), this was a completely new kind of computing, sold at a premium, requiring you to have a contract with a specific carrier (Cingular, now known as AT&T). Conversely, ChatGPT's "annualized" revenue in December 2023 was $1.6 billion (or $133 million a month), for a product that had, by that time, raised over $10 billion, and while we don't know what OpenAI lost in 2023, reports suggest it burned over $5 billion in 2024. Big tech has spent over $500 billion in capital expenditures in the last 18 months, and all told — between investments of cloud credits and infrastructure — will likely sink over $600 billion by year's end. The "early days" of the internet were defined not by a lack of investment or attention, but by obscurity. Even in 2000 — around the time of the dot-com bubble — only 52% of US adults used the internet, and it would take another 19 years for 90% of US adults to do so. Those early days were also defined by limited functionality. The internet would become so much more because of the things that hyper-connectivity allowed us to do, and both faster internet connections and the ability to host software in the cloud would change, well, everything. We could define what “better” would mean, and make reasonable predictions about what people could do on a “better” internet. Yet even in those early days, it was obvious why you were using the internet, and how it might grow from there. One did not have to struggle to explain why buying a book online might be useful, or why a website might be a quicker reference than a trip to the library, or why downloading a game or a song might be a good idea. While habits might have needed adjusting, it was blatantly obvious what the value of the early internet was. It's also unclear when the early days of the internet ended. Only 44% of US adults had access to broadband internet by 2006.
Were those the early days of the internet? The answer is "no," and this point is brought up by people with a poor grasp of history and a flimsy attachment to reality. The early days of the internet were very, very different to any associated tech boom since, and we need to stop making the comparison. The internet also grew in a vastly different information ecosystem. Generative AI has had the benefit of mass media — driven by the internet! — along with social media (and social pressure) to "adopt AI" for multiple years. According to Pew, as of mid-June 2025, 34% of US adults have used ChatGPT, with 79% saying they had "heard at least a little about it." Furthermore, ChatGPT has always had a free version. On top of that, a study from May 2023 found that over 10,900 news headlines mentioned ChatGPT between November 2022 and March 2023, and a BrandWatch report found that in the first five months after its release, ChatGPT received over 9.24 million mentions on social media. Nearly 80% of people have heard of ChatGPT, and over a quarter of Americans have used it. If we're defining "the early days" based on consumer exposure, that ship has sailed. If we're defining "the early days" by the passage of time, it's been eight years since Attention Is All You Need, and three since ChatGPT came out. While three years might not seem like a lot of time, the whole foundation of an "early days" argument is that in the early days, things do not receive the venture funding, research, attention, infrastructural support or business interest necessary to make them "big." In 2024, nearly 33% of all global venture funding went to artificial intelligence, and according to The Information, AI startups have raised over $40 billion in 2025 alone, with Statista adding that AI absorbed 71% of VC funding in Q1 2025. These numbers also fail to account for the massive infrastructure that companies like OpenAI and Anthropic don't have to pay for.
The early internet was limited in ways that generative AI simply is not. In generative AI's case, Microsoft, Google, and Amazon have built out the "fiber optic cables" for Large Language Models. OpenAI and Anthropic have everything they need. They have (even if they say otherwise) plenty of compute, access to the literal greatest minds in the field, the constant attention of the media and global governments, and effectively no regulations or restrictions stopping them from training their models on the works of millions of people, or destroying our environment. They have already had this support. OpenAI was allowed to burn half a billion dollars on a training run for GPT-4.5 and 5. If anything, the massive amounts of capital have allowed us to massively condense the time in which a bubble goes from "possible" to "bursting and washing out a bunch of people," because the tech industry has such a powerful follower culture that only one or two unique ideas can exist at one time. The "early days" argument hinges on obscurity and limited resources, something that generative AI does not get to whine about. Companies that make effectively no revenue can raise $500 million to do the same AI coding bullshit that everybody else does. In simpler terms, these companies are flush with cash, have all the attention and investment they could possibly need, and are still unable to create a product with a defined, meaningful, mass-market use case. In fact, I believe that thanks to effectively infinite resources, we've speed-run the entire Large Language Model era, and we're nearing the end. These companies got what they wanted. Bonus trick: ask them to tell you what “the fiber boom” was. So, a little history.
The "fiber boom" began after the telecommunications act of 1996 deregulated large parts of America's communications infrastructure, creating a massive boom — a $500 billion one to be precise, primarily funded with debt : In one sense, explaining what happened to the telecom sector is very simple: the growth in capacity has vastly outstripped the growth in demand. In the five years since the 1996 bill became law, telecommunications companies poured more than $500 billion into laying fiber optic cable, adding new switches, and building wireless networks. So much long-distance capacity was added in North America, for example, that no more than two percent is currently being used. With the fixed costs of these new networks so high and the marginal costs of sending signals over them so low, it is not a surprise that competition has forced prices down to the point where many firms have lost the ability to service their debts. No wonder we have seen so many bankruptcies and layoffs. This piece, written in 2002 , is often cited as a defense against the horrifying capex associated with generative AI, as that fiber optic cable has been useful for delivering high-speed internet. Useful, right? This period was also defined by a gluttony of over-investment, ridiculous valuations and outright fraud . In any case, this is not remotely the same thing and anyone making this point needs to learn the very fucking basics of technology. GPUs are built to shove massive amounts of compute into one specific function again and again, like generating the output of a model (which, remember, mostly boils down to complex maths). Unlike CPUs, a GPU can't easily change tasks, or handle many little distinct operations, meaning that these things aren't going to be adopted for another mass-scale use case. In simpler terms, this was not an infrastructure buildout. 
The GPU boom is a heavily-centralized, capital-expenditure-funded asset bubble where a bunch of chips will sit in warehouses waiting for somebody to make up a use case for them, and if a compelling one existed, we'd already have it, because we already have all the fucking GPUs. You are describing fan fiction. AI 2027 is fan fiction. Anyone who believes in it is a mark! It doesn’t matter if all of the people writing the fan fiction are scientists, or that they all have “the right credentials.” They themselves say that AI 2027 is a “guess,” an “extrapolation” (a guess) with “expert feedback” (someone editing your fan fiction), and involves “experience at OpenAI” (there are people who worked on the shows they write fan fiction about). I am not going to go line-by-line to cut this apart any more than I am going to write a lengthy takedown of someone’s erotic Banjo-Kazooie story, because both are fictional. The entire premise of this nonsense is that at some point someone invents a self-learning “agent” that teaches itself stuff, and it does a bunch of other stuff as a result, with different agents with different numbers after them. There is no proof this is possible, nobody has done it, nobody will do it. AI 2027 was written specifically to fool people who wanted to be fooled, with big charts and the right technical terms used to lull the credulous into a wet dream and a New York Times column where one of the writers folds their hands and looks worried. It was also written to scare people who are already scared. It makes big, scary proclamations, with tons of links to stuff that looks really legitimate but, when you piece it all together, is still fan fiction.
My personal favourite part is “Mid 2026: China Wakes Up,” which involves China’s intelligence agencies trying to steal OpenBrain’s agent (no idea who this company could be referring to, I’m stumped!), before the headline of “AI Takes Some Jobs” after OpenBrain released a model... oh god, I am so bored even writing up this tripe! Sarah Lyons put it well, arguing that AI 2027 (and AI in general) is no different from the spurious “spectral evidence” used to accuse someone of being a witch during the Salem Witch Trials. Anyway, AI 2027 is fan fiction, nothing more, and just because it’s full of fancy words and has five different grifters on its byline doesn’t mean anything. Bonus trick: Ask them to explain whether things have actually got cheaper, and if they say they have, ask them why there are no profitable AI companies. If they say “they’re currently in growth stage,” ask them why there are no profitable AI companies. At this point they should try and kill you. In an interview on a podcast from earlier in the year, journalist Casey Newton took issue with my work. Newton then says — several octaves higher, showing how mad he isn't — that "[he] thought what [he] said was very civil" and that there are "things that are true and there are things that are false, like you can choose which ones you wanna believe." I am not going to be so civil. Other than the fact that Casey refers to "micro-innovations" (?) and "DeepSeek being on a curve that was expected," he makes — as many do — two very big mistakes, ones that I personally would not have made in a sentence that began by suggesting I knew how the technology works. Inference — and I've gotten this one wrong in the past too! — is everything that happens from when you put in a prompt to when it generates an output. It's when an AI, based on your prompt, "infers" meaning.
To be more specific, and quoting Google, "...machine learning inference is the process of running data points into a machine learning model to calculate an output such as a single numerical score." Casey will try and weasel out of this one and say this is what he meant. It wasn't. Casey, like many people who talk about stuff without learning about it first, is likely referring to the fact that the price of tokens for some models has gone down in some cases. The problem, however, is that these are raw token costs, not actual expressions or evaluations of token burn in a practical setting. Worse still… well, the cost of inference actually went up. In an excellent blog for Kilocode, Ewa Szyszka explained: Token consumption per application grew a lot because models allowed for longer context windows and bigger suggestions from the models. The combination of a steady price per token and more token consumption caused app inference costs to grow about 10x over the last two years. To explain in really simple terms: while the costs of old models may have decreased, new models cost about the same, and the "reasoning" that these models do burns way, way more tokens. When these new models "reason," they break a user's input into component parts, then run inference on each one of those parts. When you plug an LLM into an AI coding environment, it will naturally burn an absolute ton of tokens, in part because of the large amount of information you have to load into the prompt (the "context window," with token burn increasing with the size of that information), and in part because generating code is inference-intensive. That's how you get app inference costs growing about 10x over two years even as per-token prices hold steady.
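The distinction between price per token and cost per task is simple arithmetic. Here's a toy sketch of it; the prices and token counts are illustrative assumptions, not real vendor figures:

```python
def task_cost(tokens_per_task: int, usd_per_million_tokens: float) -> float:
    """Dollar cost of one task: tokens burned times the per-token price."""
    return tokens_per_task * usd_per_million_tokens / 1_000_000

# Hypothetical older model: pricier per token, but it answers directly.
old_model = task_cost(tokens_per_task=2_000, usd_per_million_tokens=10.0)

# Hypothetical "reasoning" model: half the per-token price, but it burns
# an order of magnitude more tokens "thinking" before it answers.
reasoning_model = task_cost(tokens_per_task=50_000, usd_per_million_tokens=5.0)

# Per-token price halved, yet the task costs 12.5x more ($0.02 -> $0.25).
```

The per-token price dropped 50% while the per-task cost rose more than an order of magnitude, which is the shape of Szyszka's 10x figure: steady prices, exploding consumption.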
I refuse to let this point go, because people love to say "the cost of inference is going down" when the cost of inference has increased, and they do so to a national audience, all while suggesting I am wrong somehow. I am not wrong. In fact, software development influencer Theo Browne recently put out a video called "I was wrong about AI costs (they keep going up)," which he breaks down as follows: The price drops have, for the most part, stopped. See the below chart from The Information: You cannot, at this point, fairly evaluate whether a model is "cheaper" just based on its cost-per-token, because reasoning models are inherently built to use more tokens to create an output. Reasoning models are also the only way that model developers have been able to improve the efficacy of new models, using something called "test-time compute" to burn extra tokens to complete a task. And in basically anything you're using today, there's gonna be some sort of reasoning model, especially if you're coding. The cost of inference has gone up. Statements otherwise are purely false, and are the opinion of somebody who does not know what he's talking about. ...maybe? It sure isn't trending that way, nor has it gone down yet. I also predict that there's going to be a sudden realization in the media that it's going up, which has kind of already started. The Information had a piece recently about it, where they note that Intuit paid $20 million to Azure last year (primarily for access to OpenAI's models), and is on track to spend $30 million this year, which "outpaces the company's revenue growth in the same period, raising questions about how sustainable the spending is and how much of the cost it can pass along to customers." The problem here is that the architecture underlying Large Language Models is inherently unreliable.
I imagine OpenAI's introduction of the router to GPT-5 is an attempt to moderate both the costs of the model chosen and the amount of exposure to reasoning models for simple queries — though Altman was boasting on August 10th about the "significant increase" in both free and paid users' exposure to reasoning models. Worse still, a study written up by VentureBeat found that open-weight models burn between 1.5 and 4 times more tokens, in part due to a lack of token efficiency, and in particular thanks to — you guessed it! — reasoning models: The findings challenge a prevailing assumption in the AI industry that open-source models offer clear economic advantages over proprietary alternatives. While open-source models typically cost less per token to run, the study suggests this advantage can be “easily offset if they require more tokens to reason about a given problem.” And models keep getting bigger and more expensive, too. Why? Because model developers hit a wall of diminishing returns, and the only way to make their models do more was to make them burn more tokens to generate a more accurate response (this is a very simple way of describing reasoning, which OpenAI launched in September 2024 and others followed). As a result, all the "gains" from "powerful new models" come from burning more and more tokens. The cost-per-million-token number is no longer an accurate measure of the actual costs of generative AI, because it's much, much harder to tell how many tokens a reasoning model may burn, and it varies (as Theo Browne noted) from model to model. In any case, there really is no changing this path. They are out of ideas. So, I've heard the Uber comparison maybe 50 times in the last year, to the point that I had to talk about it in my July 2024 piece "How Does OpenAI Survive." Nevertheless, people make a few points about Uber and AI that I think are fundamentally incorrect, and I'll break them down for you.
I've seen this argument a lot, and it's one that's both ahistorical and alarmingly ignorant of the very basics of society. So, OpenAI got a $200 million defense contract with an "estimated completion date of July 2026," and is selling ChatGPT Enterprise to the US government for a dollar a year (along with Anthropic, which sells access to Claude for the same price; even Google is undercutting them, selling Gemini access at 47 cents for a year). You're probably reading that and saying "oh no, that means the government has paid them now, they're never going away," and I cannot be clear enough: making you believe that is the intention of these deals. They are built specifically to make you feel like these things are never going away. This is also an attempt to get "in" with the government at a rate that makes "trying" these models a no-brainer. ...and??????? "The government is going to have cheap access to AI software" does not mean that "the government relies on AI software." Every member of the government having access to ChatGPT — something that is not even necessarily the case! — does not make this software useful, let alone essential, and if OpenAI burns a bunch of money "making it work for them," it still won't be essential, because Large Language Models are not actually that useful for doing stuff! Uber used lobbyist Bradley Tusk to steamroll local governments into allowing Uber to operate in their cities, but Tusk did not have to convince local governments that Uber was useful or train people how to use it. Uber's "too big to fail" moment was that local cabs kind of fucking sucked just about everywhere. Did you ever try to take a yellow cab from Downtown Manhattan to Hoboken, New Jersey? Or Brooklyn? Or Queens? Did you ever try to pay with a credit card? How about trying to get a cab outside of a major metropolitan area? Do you remember how bad that was?
I am not glorifying Uber the company, but the experience that Uber replaced was very, very bad. As a result, Uber did become too big to fail, because people now rely upon it, since the old system sucked. Uber used its masses of venture capital to keep prices low and get people used to it, too, but the fundamental experience was better than calling a cab company and hoping it showed up. I also want to be clear that this is not me condoning Uber; take public transport if you can! Uber has created a new kind of horrifying, extractive labor practice which deprives people of benefits and dignity, paying off academics to help the media gloss over the horrors of its platform. It is also now having to increase prices. What, exactly, is the "essential" experience of generative AI? What essential experience are we going to miss if ChatGPT disappears tomorrow? And on an enterprise or governmental level: what exactly are these tools doing for governments that would make removing them so painful? What use cases? What outcomes? Uber's "essential" nature is that millions of people use it in place of regular taxis, and it effectively replaced decrepit, exploitative systems like the yellow cab medallions in New York with its own tech-enabled exploitation system that, nevertheless, worked far better for the user. Sidenote: I acknowledge that the disruption Uber brought to the medallion system had horrendous consequences for the owners of said medallions — some of whom had paid more than a million dollars for the privilege to drive a New York taxi cab, and were burdened under mountains of debt. There is no such use case with ChatGPT, or any other generative AI system. You cannot point to one use case that is anywhere near as necessary as cabs in cities, and indeed the biggest use cases — things like brainstorming and search — are either easily replaced by any other commoditized LLM or literally already exist with Google Search.
Sorry, this is a really simple one. These data centers are not, in and of themselves, driving much economic growth beyond the costs of building them. As I've discussed again and again, there's maybe $40 billion in revenue and no profit coming out of these companies. There isn't any economic growth! They're not holding up anything! These data centers, once built, also create very little economic activity. They don't create jobs, they take up massive amounts of land and utilities, and they piss off and poison their neighbors. If anything, letting these things die would be a political win. There is no "great loss" associated with the death of the Large Language Model era. Taking away Uber would genuinely affect people's ability to get places. So, the classic (and wrong!) argument about OpenAI and companies like OpenAI is that Uber burned a bunch of money and is now "cash-flow positive" or "profitable." Let's talk about raw losses, and where people are making this assumption. Uber lost $24.9 billion in the space of four years (2019 to 2022), in part because of the billions it was spending on sales and marketing and R&D — $4.6 billion and $4.8 billion respectively in 2019 alone. It also massively subsidized the cost of rides — which is why prices had to increase — and spent heavily on driver recruitment, burning cash to get scale, the classic Silicon Valley way. This is absolutely nothing like how Large Language Models are growing, and I am tired of defending this point. OpenAI and Anthropic burn money primarily through compute costs and specialized talent. These costs are increasing, especially with the rush to hire every single AI scientist at the most expensive price possible. There are also essential, immovable costs that neither OpenAI nor Anthropic has to shoulder — the construction of the data centers necessary to train and run inference for their models, which I will get to in a little bit.
Yes, Uber raised $33.5 billion (through multiple rounds of post-IPO debt, though it raised about $25 billion in actual funding). Yes, Uber burned an absolute ass-ton of money. Yes, Uber has scale. But Uber was not burning money as a means of making its product functional or useful. Furthermore, the costs associated with Uber — its capital expenditures from 2019 through 2024 were around $2.2 billion! — are minuscule compared to the actual costs of OpenAI and Anthropic. Both OpenAI and Anthropic lost around $5 billion in 2024, but their infrastructure was entirely paid for by Microsoft, Google or Amazon. While we don't know how much of this infrastructure is specifically for OpenAI or Anthropic, as the largest model developers it's fair to assume that a large chunk — at least 30% — of Amazon and Microsoft's capital expenditures have been to support these loads (I leave out Google, as it's unclear whether it's expanded its infrastructure for Anthropic, but we know Amazon has done so). As a result, the true "cost" of OpenAI and Anthropic is at least ten times what Uber burned. Amazon spent $83 billion in capital expenditures in 2024 and expects to spend $105 billion in 2025. Microsoft spent $55.6 billion in 2024 and expects to spend $80 billion this year. Based on my (conservative) calculations, the true "cost" of OpenAI is around $82 billion, and that only includes capex from 2024 onward, based on 30% of Microsoft's capex (as not everything has been invested yet in 2025, and OpenAI is not necessarily all of the capex) and the $41.4 billion of funding it's received so far. The true cost of Anthropic is around $77.1 billion, including all its funding and 30% of Amazon's capex from the beginning of 2024.
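Those back-of-envelope numbers work out as follows. All figures are in billions of dollars and come from the totals above; the 30% attribution is the stated assumption, and Anthropic's funding figure is the remainder implied by the $77.1 billion total:

```python
ATTRIBUTION = 0.30  # assumed share of each hyperscaler's capex supporting its partner lab

# OpenAI: 30% of Microsoft's 2024 capex ($55.6B) and expected 2025 capex
# ($80B), plus the $41.4B in funding OpenAI has raised.
openai_true_cost = ATTRIBUTION * (55.6 + 80.0) + 41.4    # ~82

# Anthropic: 30% of Amazon's 2024 capex ($83B) and expected 2025 capex
# ($105B), plus roughly $20.7B in funding (implied by the $77.1B total).
anthropic_true_cost = ATTRIBUTION * (83.0 + 105.0) + 20.7  # ~77.1

# Uber's total losses from 2019 to 2022, for scale.
uber_losses = 24.9
```

That's how the piece arrives at roughly $82 billion and $77.1 billion respectively, each a multiple of Uber's entire four-year loss.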
These are inexact comparisons, but the classic argument is that Uber "burned lots of money and worked out okay," when in fact the combined capital expenditures from 2024 onwards necessary to make Anthropic and OpenAI work are each — on their own — four times the amount Uber burned over a decade. I also believe these numbers are conservative. There's a good chance that Anthropic and OpenAI dominate the capex of Amazon and Microsoft, in part because what the fuck else are they buying all these GPUs for, as their own AI services don't seem to be making that much money at all. Anyway, to put it real simple: AI has burned more in the last two years than Uber burned in ten. Uber didn't burn money in the same way, didn't burn much by way of capital expenditures, didn't require massive amounts of infrastructure, and isn't remotely the same in any way, shape or form, other than that it burned a lot of money — and that burning wasn't because it was trying to build the core product, but rather trying to scale. I covered this in depth in the Hater's Guide To The AI Bubble, but the long and short of it is that AWS is a platform and an obvious necessity, one that has burned about ten percent of what Amazon et al. have burned chasing generative AI, and one that had proven demand before it was built. Also, AWS was break-even in three years. OpenAI was founded in fucking 2015, and even if you start from November 2022, by AWS standards it should be break-even by now! Amazon Web Services was created out of necessity — Amazon's infrastructure needs were so great that it effectively had to build both the software and hardware necessary to deliver a store that sold theoretically everything to theoretically anywhere: handling the traffic from customers, delivering the software that runs Amazon.com quickly and reliably, and, well, making sure things ran in a stable way.
It didn't need to come up with a reason for people to run web applications — they were already doing so themselves, but in ways that cost a lot, were inflexible, and required specialist skills. AWS took something that people already did, something for which there was proven demand, and made it better. Eventually, Google and Microsoft would join the fray. As I've discussed in the past, this metric is basically "month × 12," and while it's a fine measure for high-gross-margin businesses like SaaS companies, it isn't for AI. It doesn't account for churn (when people leave). It's also a number used intentionally to make a company sound more successful — so you can say "$200 million annualized revenue" instead of "$16.6 million a month." Also, if they're leading with this number, it's likely that number isn't consistent month to month! Simple answer: why have literally none of them done this yet? Why not one? There's that "will" bullshit, once again, always about the "will." We do not know how thinking works in humans, and thus cannot extrapolate it to a machine; at the very least, human beings have the ability to re-evaluate things and learn, a thing that LLMs cannot do and will never do. We do not know how to get to AGI. Sam Altman said in June that OpenAI was "now confident [they knew] how to build AGI as we have traditionally understood it." In August, Altman said that AGI was "not a super useful term," and that "the point of all this is it doesn't really matter and it's just this continuing exponential of model capability that we'll rely on for more and more things." So, yeah, total bullshit. Even Meta's Chief AI Scientist says it isn't possible with transformer-based models. We don't know if AGI is possible, and anyone claiming they do know is lying. This, too, is hogwash, no different from your buddy's friend's uncle who works at Nintendo and says Mario is coming to PlayStation.
Ilya Sutskever and Mira Murati raised billions for companies with no product, let alone a product roadmap, and they did so because they saw a good opportunity for a grift and to throw a bunch of money at compute. Also: if someone from "deep within the AI industry" has told somebody "big things are coming," they are doing so to con them or to make them think they have privileged information. Ask for specifics. This argument is posed as a comeback to my suggestion that AI isn't particularly useful, a proof point that this movement is not inherently wasteful, or that there are, in fact, use cases for ChatGPT that are lasting, meaningful or important. I disagree. In fact, I believe ChatGPT — and LLMs in general — have been marketed based on lies of inference. Ironic, I know. I also have grander concerns and suspicions about what OpenAI considers a "user" and how it counts revenue, which I'll get into later in the week on my premium newsletter, which you should subscribe to. Here's a hint, though: 500,000 of OpenAI's "5 million business customers" are from its $15 million deal with Cal State University, which works out to around $2.50 a user a month. It's also started doing $1-a-month trials of its $30-a-month "Teams" subscription, and one has to wonder how many of those are counted in that total, and for how long. I do not know the scale of these offers, nor how long OpenAI has been offering them. A Redditor posted about the deal a few months ago, saying that OpenAI was offering up to five seats at once. In fact, I've found a few people talking about these deals, with one adding that they were offered an annual $10-a-month ChatGPT Plus subscription, and another saying a few weeks ago that they'd seen people offered this deal for canceling their subscription. Suspicious. But there's a greater problem at play. So, ChatGPT has 700 million weekly active users. OpenAI has yet to provide a definition — and yes, I've asked!
— which means that an "active" user could be somebody who has gone to ChatGPT once in the space of a week. This term is extremely flimsy, and doesn't really tell us much. Similarweb says that in July 2025 ChatGPT.com had 1.287 billion total visits, making it a very popular website. What do these facts actually mean, though? As I said previously, ChatGPT has had probably the most sustained PR campaign for anything outside of a presidency or a pop star. Every single article about AI mentions OpenAI or ChatGPT, and every single feature launch — no matter how small — gets a slew of coverage. Every single time you hear "AI," you're made to think of "ChatGPT" by a tech media that has never stopped to think about its role in hype, or its responsibility to its readers. And as this hype has grown, the publicity compounds, because the natural thing for a journalist to do when everybody is talking about something is to talk about it more. ChatGPT's immediate popularity may have been viral, but the media took the ball and ran with it, and then proceeded to tell people it did stuff it did not. People were thus pressured to try this service under false pretenses, something that continues to this day. I'll give you an example. On March 15, 2023, Kevin Roose of the New York Times said that OpenAI's GPT-4 was "exciting and scary," exacerbating (his words!) "...the dizzy and vertiginous feeling I've been getting whenever I think about A.I. lately," wondered if he was experiencing "future shock," then described how it was an indeterminate level of "better," citing something that immediately sounded ridiculous: In one test, conducted by an A.I. safety research group that hooked GPT-4 up to a number of other systems, GPT-4 was able to hire a human TaskRabbit worker to do a simple online task for it — solving a Captcha test — without alerting the person to the fact that it was a robot. The A.I.
even lied to the worker about why it needed the Captcha done, concocting a story about a vision impairment. That doesn't sound remotely real! I went and looked up the paper, and here is the entire extent of what OpenAI shared: This safety card led to the perpetration of one of the earliest falsehoods — and most eagerly-parroted lies — that ChatGPT and generative AI are capable of "agentic" actions. Outlet after outlet — led by Kevin Roose — eagerly repeated an entire series of events that doesn't remotely make sense, starting with the fact that this is not something you can hire a Taskrabbit to do. Or, at the very least, not without a contrived situation where you create an empty task and ask them to complete it. Why not use Mechanical Turk? Or Fiverr? They've got tons of people offering this service! But I'm a curious little critter, so I went further and followed their citation to a link on METR's research page. It turns out that what actually happened was that METR had a researcher copy-paste the generated responses from the model and otherwise handle the entire interaction with Taskrabbit, and based on the plural "Taskrabbit contractors," it appears to have taken multiple tries. On top of that, it appears that OpenAI/METR were prompting the model on what to say, which kind of defeats the point. Emphases mine, and comments in [brackets]: It took me five whole minutes to find that piece — which is cited on the GPT-4 system card — read it, then write this one. It did not require any technical knowledge other than the ability to read stuff. It is transparently, blatantly obvious that GPT-4 did not "hire" a Taskrabbit or, indeed, take any of these actions — it was prompted to, and they do not show the prompts they used, likely because they had to use so many of them. Anyone falling for this is a mark, and OpenAI should have gone out of its way to correct people. Instead, they sat back and let people publish outright misinformation.
Roose, along with his co-host Casey Newton, would go on to describe this example at length on a podcast that week, narrating an entire sequence where "the human actually gets suspicious" and "GPT 4 reasons out loud that it should not reveal that [it is] a robot," at which point "the TaskRabbit solves the CAPTCHA." During this conversation, Newton gasps and says "oh my god" twice, and when he asks Roose "how does the model understand that in order to succeed at this task, it has to deceive the human?" Roose responds "we don't know, that is the unsatisfying answer," and Newton laughs and states "we need to pull the plug. I mean, again, what?" Credulousness aside, the GPT-4 marketing campaign was incredibly effective, creating an aura that allowed OpenAI to take advantage of the vagueness of its offering as people — including members of the media — willfully filled in the blanks for them. Altman has never had to work to sell this product. Think about it — have you ever heard OpenAI tell you what ChatGPT can do, or seen it go to great lengths to describe its actual abilities? Even on OpenAI's own page for ChatGPT, the text is extremely vague: Scrolling down, you're told ChatGPT can "write, brainstorm, edit and explore ideas with you." It can "generate and debug code, automate repetitive tasks, and [help you] learn new APIs." With ChatGPT you can "learn something new...dive into a hobby...answer complex questions" and "analyze data and create charts." What repetitive tasks? Who knows. How am I learning? Unclear. It's got thinking built in! What that means is left unclear and unexplained, which allows a user to incorrectly believe that ChatGPT has a brain. To be clear, I know what reasoning means, but this website does not attempt to explain what "thinking" means.
You can also "offload complex tasks from start to finish with an agent," which can, according to OpenAI, "think and act, proactively choosing from a toolbox of agentic skills to complete tasks for you using its own computer." This is an egregious lie, employing the kind of weasel-wording that would be used to torture "I.R. Baboon" for an eternity. Precise in its vagueness, OpenAI's copy is honed to make reporters willing to simply write down whatever they see and interpret it in the most positive light. And thus the lie of inference began. What "ChatGPT" meant was muddied from the very beginning, and thus ChatGPT's actual outcomes have never been fully defined. What ChatGPT "could do" became a form of folklore — a non-specific form of "automation" that could "write code" and "generate copy and images," that can "analyze data," all things that are true, but from which one can infer much greater meaning. One can infer that "automation" means the automation of anything related to text, or that "write code" means "write the entirety of a computer program." OpenAI's ChatGPT agent is not, by any extension of the word, "already a powerful tool for handling complex tasks," and OpenAI has not, in any meaningful sense, committed to any actual outcomes. As a result, potential users — subject to a 24/7 marketing campaign — have been pushed toward a website that can theoretically do anything or nothing, and have otherwise been left to their own devices. The endless gaslighting, societal pressure, media pressure, and pressure from their bosses have pushed hundreds of millions of people to try a product that even its creators can't really describe. As I've said in the past, OpenAI is deliberately using Weekly Active Users so that it doesn't have to publish its monthly active users, which I believe would be higher. Why wouldn't it do this? Well, OpenAI has 20 million paying ChatGPT subscribers and five million "business customers," with no explanation of what the difference might be.
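For what it's worth, the conversion arithmetic here is simple enough to check. A quick sketch — the paying-user total combines the figures above, and the monthly-active numbers are guesses, since OpenAI doesn't publish them:

```python
# Rough conversion math for OpenAI's reported user numbers. Paying users
# combine 20M ChatGPT subscribers and 5M "business customers"; the monthly
# active figures below are guesses, as OpenAI only reports weekly actives.
paying = 20_000_000 + 5_000_000
weekly_actives = 700_000_000

print(f"vs. WAU: {paying / weekly_actives:.1%}")          # ≈ 3.6%
for guessed_mau in (800_000_000, 900_000_000):
    print(f"vs. {guessed_mau // 1_000_000}M MAU: {paying / guessed_mau:.1%}")
```

Measured against weekly actives, that's roughly three and a half percent; against any plausible monthly-active figure, it drops toward or below 3%.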
This is already a mediocre (3.5%) conversion rate, yet its monthly active users (which are likely either 800 million or 900 million, but these are guesses!) would make that rate lower than 3%, which is pretty terrible considering everybody says this shit is the future. I'm also tired of having people claim that "search" or "brainstorm" or "companions" are lasting, meaningful business models. So, on August 20, 2025, OpenAI announced on CNBC that it had hit its first $1 billion month, which brings it exactly in line with my estimated $5.26 billion in revenue that I believe it has made as of the end of July. However, remember what the MIT study said: enterprise adoption is high but transformation is low. There are tons of companies throwing money at AI, but they are not seeing actual returns. OpenAI's growth as the single-most-prominent company in AI (and, if we're honest, one of the most prominent in software writ large) makes sense, but at some point it will slow, because the actual returns for the businesses aren't there. If there were, we'd have one article where we could point at a ChatGPT integration that, at scale, helped a company make or save a bunch of money, written in plain English and not in the gobbledygook of "profit improvement." Also… OpenAI is projected to make $12.7 billion in 2025. How exactly will it do that? Is it really making $1.5 billion a month by the end of the year? Even if it does, is the idea that it keeps burning $10 billion or more every year into eternity? What actual revenue potential does OpenAI have long-term? Its products are about as good as everyone else's, cost about the same, and do the same things. ChatGPT is basically the same product as Claude or Grok or any number of different LLMs. The only real advantages that OpenAI has are infrastructure and brand recognition.
These models have clearly hit a wall, with training yielding diminishing returns, meaning that OpenAI's infrastructural advantage is simply that it can continue providing its service at scale, nothing more. That infrastructure isn't making its business cheaper, other than the fact that it mostly hasn't had to pay for it...other than the site in Abilene, Texas, where it's promised Oracle $30 billion a year by 2028. I'm sorry, I don't buy it! I don't buy that this company will continue growing forever, and its stinky conversion rate isn't going to change anytime soon. How? Literally…how! How? How! HOW??? Nobody ever answers this question! "Efficiencies"? If you're going to say GPT-5 — here's a scoop I have about how it's less efficient! It's very, very, very common for people to conflate "AI" with "generative AI." Make sure that whatever you're claiming or being told is actually about Large Language Models, as there are all sorts of other kinds of machine learning that people love to bring up. LLMs have nothing to do with Folding@Home, autonomous cars, or most disease research. A lot of people think that they're going to tell me "I use this all the time!" and that'll change my mind. I cannot express enough how irrelevant it is that you have a use case, as every use case I hear is one of the following: This would all be fine and dandy if people weren't talking about this stuff as if it were changing society. None of these use cases come close to explaining why I should be impressed by generative AI. It also doesn't matter if you yourself have kind of a useful thing that AI did for you once. We are so past the point when any of that matters. AI is being sold as a transformational technology, and I have yet to see it transform anything. I have yet to hear one use case that truly impresses me, or even one thing that feels possible now that wasn't possible before. This isn't even me being a cynic — I'm ready to be impressed! I just haven't been in three fucking years and it's getting boring.
Also, tell me with a straight face that any of this shit is worth the infrastructure. One of the most braindead takes about AI and coding is that "vibe coding" is "allowing anyone to build software." While technically true, in that one can just type "build me a website" into one of many AI coding environments, this does not mean the result is functional or useful software. Let's make this really clear: AI cannot "just handle coding." Read this excellent piece by Colton Voege, then read this piece by Nik Suresh. If you contact me about AI and coding without reading these I will send them to you and nothing else, or crush you like a car in a garbage dump, one or the other. Also, show me a vibe coded company. Not a company where someone who can code has quickly spun up some features, but a fully-functional, secure, and useful app made entirely by someone who cannot code. You won't be able to find this, because it isn't possible. Vibe coding is a marketing term based on lies, peddled by people who lack either knowledge or morals. Are AI coding environments making people faster? I don't think so! In fact, a recent study suggested they actually make software engineers slower. The reason that nobody is vibe coding an entire company is that software development is not just "put a bunch of code in a pile and hit 'go,'" and oftentimes when you add something it breaks something else. This is all well and good if you actually understand code — it's another thing entirely when you are using Cursor or Claude Code like a kid at an arcade machine, turning the wheel repeatedly and pretending they're playing the demo. Vibe coders are also awful for the already-negative margins of most AI coding environments, as every single thing they ask the model to do is imprecise, burning tokens in pursuit of a goal they themselves don't understand. "Vibe coding" does not work, it will not work, and pretending otherwise is at best ignorance and at worst supporting a campaign built on lies.
If you are an AI booster, please come up with better arguments. And if you truly believe in this stuff, you should have a firmer grasp on why you do so. It's been three years, and the best some of you have is "it's real popular!" or "Uber burned a lot of money!" Your arguments are based on what you wish were true rather than what's actually true, and it's deeply embarrassing. Then again, there are many well-intentioned people who aren't necessarily AI boosters who repeat these arguments, regardless of how thinly-framed they are, in part because we live in a high-information, low-processing society where people tend to put great faith in those who are confident in what they say and sound smart. I also think the media is failing on a very basic level to realize that its fear of missing out or seeming stupid is being used against it. If you don't understand something, it's likely because the person you're reading or hearing it from doesn't either. If a company makes a promise and you don't understand how it would deliver on it, it's the company's job to explain how, and your job to say, in clear and defined language, that it isn't plausible. This has gone beyond simple "objectivity" into the realm of an outright failure of journalism. I have never seen more misinformation about the capabilities of a product in my entire career, and it's largely peddled by reporters who either don't know or have no interest in knowing what's actually possible, in part because all of their peers are saying the same nonsense. As things begin to collapse — and they sure look like they're collapsing, but I am not making any wild claims about "the bubble bursting" quite yet — it will look increasingly deranged to bluntly publish everything that these companies say. Never have I seen an act of outright contempt more egregious than Sam Altman saying that GPT-5 was actually bad, and that GPT-6 will be even better. Members of the media: Sam Altman does not respect you. He is not your friend.
He is not secretly confiding in you. He thinks you are stupid and easily manipulated, and that you will print anything he says — largely because many members of the media do print exactly what he says, whenever he says it. To be clear, if you wrote about it and actively mocked it, that's fine. But let's close by discussing the very nature of AI skepticism, and the so-called "void" between those who "hate" AI and those who "love" AI, from the perspective of one of the more prominent people on the "skeptic" side. Critics and skeptics are not given the benefit of grace, patience, or, in many cases, hospitality when it comes to their position. While they may receive interviews and opportunities to "give their side," it is always framed as the work of a firebrand, an outlier, somebody with dangerous ideas that they must eternally justify. They are demonized, their points under constant scrutiny, their allegiances and intentions constantly interrogated for some sort of moral or intellectual weakness. "Skeptic" and "critic" are words said with a sneer or trepidation — a signal that the listener should be suspicious that this person isn't agreeing that AI is the most powerful, special thing ever. To not immediately fall in love with something that everybody is talking about is to be framed as a "hater," to have oneself introduced with the words "not everybody agrees..." on 40% of appearances. By comparison, AI boosters are the first to get TV appearances and offers to be on panels, their coverage featured prominently on Techmeme, selling slop-like books called shit like The Future Of Intelligence: Masters Of The Brain featuring 18 interviews with different CEOs who all say the same thing.
They do not have to justify their love — they simply have to remember all the right terms, chirping out "test-time compute" and "the cost of inference is going down" enough times to summon Wario Amodei to give them an hour-long interview where he says "the models, they are, in years, going to be the most powerful school teacher ever built." And yeah, I did sell a book, because my shit fucking rocks. I have consistent, deeply-sourced arguments that I've built on over the course of years. I didn't "become a hater" because I'm a "contrarian," I became a hater because the shit that these fucking oafs have done to the computer pisses me off. I wrote The Man Who Killed Google Search because I wanted to know why Google Search sucked. I wrote Sam Altman, Freed because at the time I didn't understand why everybody was so fucking enamoured with this damp sociopath. Everything I do comes from genuine curiosity and an overwhelming frustration with the state of technology. I started writing this newsletter with 300 subscribers and 60 views, and have written it as an exploration of subjects that grows as I write. I do not have it in me to pretend to be anything other than what I am, and if that is strange to you, well, I'm a strange man, but at least I'm an honest one. I do have a chip on my shoulder, in that I really do not like it when people try to make other people feel stupid, especially when they do so as a means of making money for themselves or somebody else. I write this stuff out because I have an intellectual interest, I like writing, and by writing, I am able to learn about and process my complex feelings about technology. I happen to do so in a manner that hundreds of thousands of people enjoy every month, and if you think that I've grown this by "being a hater," you are doing yourself the disservice of underestimating me, which I will use to my advantage by writing deeper, more meaningful, more insightful things than you. 
I have watched these pigs ruin the computer again and again, and make billions doing so, all while the media celebrates the destruction of things like Google, Facebook, and the fucking environment in pursuit of eternal growth. I cannot manufacture my disgust, nor can I manufacture whatever it is inside me that makes it impossible to keep quiet about the things I see. I don't know if I take this too seriously or not seriously enough, but I am honoured that I am able to do it, and have 72,000 of you subscribed to find out when I do so.

Chris Coyier 3 months ago

Your Own Newspaper, Or Not

You've likely heard me go on about how much I like and encourage using an RSS reader. Molly White frames it nicely: What if you could take all your favorite newsletters, ditch the data collection, and curate your own newspaper? It could include independent journalists, bloggers, mainstream media, worker-owned media collectives, and just about anyone else who publishes online. Even podcast episodes, videos from your favorite YouTube channels, and online forum posts could slot in, too. Only the stuff you want to see, all in one place, ready to read at your convenience. No email notifications interrupting your peace (unless you want them), no pressure to read articles immediately. Wouldn't that be nice? Here are a few more reasons: I could keep going listing reasons and resources and yadda yadda, but right now, I'm thinking about the pushback. Why would someone not do this? Tell me! I'm curious. I know one pushback I've recently heard is that it's easy to screw up. For instance, you're like: I like The Verge. So you subscribe to The Verge RSS feed, and then only a handful of other things. The Verge publishes a lot, so now every time you visit your reader, it's all Verge stuff, and you get sick of it in 2 days. That feels very fair. You gotta unsubscribe from that if that's how you feel. It's constant work to curate your feeds so that it's a nice pace for you and a nice collection of stuff you actually do want to read. That's work, and not everyone wants work. They don't want another inbox to manage, which is fair. If you don't do the RSS thing, what are your reasons?

Rik Huijzer 4 months ago

Does Whitney Webb fear for her life after exposing corruption?

Whitney Webb wrote _A Nation Under Blackmail_, which exposed blackmail rings in the United States. She gave a great answer to the question of whether she now fears for her life: > I ultimately think, you know, to paraphrase a line from the sci-fi book Dune: "fear is the mind-killer." [...] I think fear is the ultimate tool that benefits [the elite] instead of us. Full quote from Dune: > I must not fear. > Fear is the mind-killer. > Fear is the little-death that brings total obliteration. > I will face my fear. > I will permit it to pass over me and through me. > And when it has gone pas...


The Hater's Guide To The AI Bubble

Hey! Before we go any further — if you want to support my work, please sign up for the premium version of Where's Your Ed At, it's a $7-a-month (or $70-a-year) paid product where every week you get a premium newsletter, all while supporting my free work too. Also, subscribe to my podcast Better Offline, which is free. Go and subscribe, then download every single episode. Here are parts 1, 2 and 3 of the audio version of the Hater's Guide. One last thing: This newsletter is nearly 14,500 words. It's long. Perhaps consider making a pot of coffee before you start reading. Good journalism is making sure that history is actively captured and appropriately described and assessed, and it's accurate to describe things as they currently are as alarming. And I am alarmed. Alarm is not a state of weakness, or belligerence, or myopia. My concern does not dull my vision, even though it's convenient to frame it as somehow alarmist, like I have some hidden agenda or bias toward doom. I profoundly dislike the financial waste, the environmental destruction, and, fundamentally, I dislike the attempt to gaslight people into swearing fealty to a sickly and frail pseudo-industry where everybody but NVIDIA and consultancies loses money. I also dislike the fact that I, and others like me, are held to a remarkably different standard than those who paint themselves as "optimists," which typically means "people who agree with what the market wishes were true." Critics are continually badgered, prodded, poked, mocked, and jeered at for not automatically aligning with the idea that generative AI will be this massive industry, constantly having to prove themselves, as if somehow there's something malevolent or craven about criticism, as if critics "do this for clicks" or "to be a contrarian." I don't do anything for clicks. I don't have any stocks or short positions.
My agenda is simple: I like writing, it comes to me naturally, I have a podcast, and it is, on some level, my job to try and understand what the tech industry is doing on a day-to-day basis. It is easy to try and dismiss what I say as going against the grain because "AI is big," but I've been railing against bullshit bubbles since 2021 — the anti-remote work push ( and the people behind it ), the Clubhouse and audio social networks bubble , the NFT bubble , the made-up quiet quitting panic , and I even, though not as clearly as I wished, called that something was up with FTX several months before it imploded .  This isn't "contrarianism."  It's the kind of skepticism of power and capital that's necessary to meet these moments, and if it's necessary to dismiss my work because it makes you feel icky inside, get a therapist or see a priest. Nevertheless, I am alarmed , and while I have said some of these things separately, based on recent developments, I think it's necessary to say why.  In short, I believe the AI bubble is deeply unstable, built on vibes and blind faith, and when I say "the AI bubble," I mean the entirety of the AI trade. And it's alarmingly simple, too. But this isn’t going to be saccharine, or whiny, or simply worrisome. I think at this point it’s become a little ridiculous to not see that we’re in a bubble. We’re in a god damn bubble, it is so obvious we’re in a bubble, it’s been so obvious we’re in a bubble, a bubble that seems strong but is actually very weak, with a central point of failure. I may not be a contrarian, but I am a hater . I hate the waste, the loss, the destruction, the theft, the damage to our planet and the sheer excitement that some executives and writers have that workers may be replaced by AI — and the bald-faced fucking lie that it’s happening, and that generative AI is capable of doing so. 
And so I present to you — the Hater's Guide to the AI Bubble, a comprehensive rundown of arguments I have against the current AI boom's existence. Send it to your friends, your loved ones, or print it out and eat it. No, this isn't gonna be a traditional guide, but something you can look at and say "oh, that's why the AI bubble is so bad." And at this point, I know I'm tired of being gaslit by guys in gingham shirts who desperately want to curry favour with other guys in gingham shirts but who also have PhDs. I'm tired of reading people talk about how we're "in the era of agents" that don't fucking work and will never fucking work. I'm tired of hearing about "powerful AI" that is actually crap, and I'm tired of being told the future is here while having the world's least-useful, most-expensive cloud software shoved down my throat. Look, the generative AI boom is a mirage: it hasn't got the revenue or the returns or the product efficacy for it to matter, everything you're seeing is ridiculous and wasteful, and when it all goes tits up I want you to remember that I wrote this and tried to say something. As I write this, NVIDIA is currently sitting at $170 a share — a dramatic reversal of fortune after the pummelling it took from the DeepSeek situation in January, which sent it tumbling to a brief late-April trip below $100 before things turned around. The Magnificent 7 stocks — NVIDIA, Microsoft, Alphabet (Google), Apple, Meta, Tesla and Amazon — make up around 35% of the value of the US stock market, and of that, NVIDIA's market value makes up about 19% of the Magnificent 7. This dominance is also why ordinary people ought to be deeply concerned about the AI bubble. The Magnificent 7 is almost certainly a big part of their retirement plans, even if they're not directly invested. Back in May, Yahoo Finance's Laura Bratton reported that Microsoft (18.9%), Amazon (7.5%), Meta (9.3%), Alphabet (5.6%), and Tesla (0.9%) alone make up 42.4% of NVIDIA's revenue.
The breakdown makes things worse. Meta spends 25% — and Microsoft an alarming 47% — of its capital expenditures on NVIDIA chips, and as Bratton notes, Microsoft also spends money renting servers from CoreWeave, which analyst Gil Luria of D.A. Davidson estimates accounted for $8 billion (more than 6%) of NVIDIA's revenue in 2024. Luria also estimates that neocloud companies like CoreWeave and Crusoe — which exist only to provide AI compute services — account for as much as 10% of NVIDIA's revenue. NVIDIA's climbing stock value comes from its continued revenue growth. In the last four quarters, NVIDIA has seen year-over-year growth of 101%, 94%, 78% and 69%, and, in the last quarter, a little statistic was carefully brushed under the rug: that NVIDIA missed, though narrowly, on data center revenue. This is exactly what it sounds like — GPUs that are used in servers, rather than gaming consoles and PCs. Analysts estimated it would make $39.4 billion from this category, and NVIDIA only (lol) brought in $39.1 billion. Then again, it could be attributed to its problems in China, especially as the H20 ban has only just been lifted. In any case, it was a miss! NVIDIA's quarter-over-quarter growth has also become aggressively normal — from 69%, to 59%, to 12%, to 12% again each quarter, which, again, isn't bad (it's pretty great!), but when 88% of your revenue is based on one particular line in your earnings, it's a pretty big concern, at least for me. Look, I'm not a stock analyst, nor am I pretending to be one, so I am keeping this simple: 35% of the US stock market is held up by five or six companies buying GPUs. If NVIDIA's growth story stumbles, it will reverberate through the rest of the Magnificent 7, making them rely on their own AI trade stories. And, as you will shortly find out, there is no AI trade, because generative AI is not making anybody any money. I'm so tired of people telling me that companies are "making tons of money on AI."
Nobody is making a profit on generative AI other than NVIDIA. No, really, I'm serious. If they keep their promises, by the end of 2025, Meta, Amazon, Microsoft, Google and Tesla will have spent over $560 billion in capital expenditures on AI in the last two years, all to make around $35 billion. This is egregiously fucking stupid. As of January 2025, Microsoft's "annualized" — meaning [best month]x12 — revenue from artificial intelligence was around $13 billion, a number that it chose not to update in its last earnings, likely because it's either flat or not growing, though it could update it in its upcoming late-July earnings. The problem with this revenue is that $10 billion of it, according to The Information, comes from OpenAI's spend on Microsoft's Azure cloud, and Microsoft offers preferential pricing — "a heavily discounted rental rate that essentially only covers Microsoft's costs for operating the servers," according to The Information. In simpler terms, 76.9% of Microsoft's AI revenue comes from OpenAI, and is sold at just above or at cost, making Microsoft's "real" AI revenue about $3 billion, or around 3.75% of this year's capital expenditures, or 16.25% if you count OpenAI's revenue, which costs Microsoft more money than it earns. The Information reports that Microsoft made $4.7 billion in "AI revenue" in 2024, of which OpenAI accounted for $2 billion, meaning that for the $135.7 billion that Microsoft has spent in the last two years on AI infrastructure, it has made $17.7 billion, of which OpenAI accounted for $12.7 billion. Things do not improve elsewhere. An analyst estimates that Amazon, which plans to spend $105 billion in capital expenditures this year, will make $5 billion on AI in 2025, rising, and I quote, by "as much as 80%," suggesting that Amazon may have made a measly $2.77 billion in 2024 on AI in a year when it spent $83 billion in capital expenditures.
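To make the "annualized" sleight of hand concrete, here's a quick sketch of the Microsoft arithmetic above (figures are the cited ones; the monthly breakdown is just what the definition implies):

```python
# "Annualized" revenue here means best-month revenue multiplied by 12,
# NOT revenue actually earned over twelve months. Figures as cited above.
annualized_ai_revenue = 13_000_000_000   # Microsoft's reported AI run rate
best_month = annualized_ai_revenue / 12  # the single month that figure implies

openai_azure_spend = 10_000_000_000      # OpenAI's share, per The Information
openai_share = openai_azure_spend / annualized_ai_revenue
print(f"{openai_share:.1%}")             # the share of AI revenue from OpenAI
```

One strong month of roughly $1.08 billion is enough to report a "$13 billion annualized" figure, and a bit over three-quarters of that revenue traces back to a single customer being billed at around cost.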
[editor's note: this piece originally said "$1 billion" instead of "$2.77 billion" due to a math error, sorry!] Last year, Amazon CEO Andy Jassy said that "AI represents for sure the biggest opportunity since cloud and probably the biggest technology shift and opportunity in business since the internet." I think he's full of shit. Bank of America analyst Justin Post estimated a few weeks ago that Google's AI revenue would be in the region of $7.7 billion, though his math is, if I'm honest, a little generous: Google's "One" subscription includes increased cloud storage across Google Drive, Gmail and Google Photos, and added a $20-a-month "premium" plan in February 2024 that included access to Google's various AI models. Google has claimed that the "premium AI tier accounts for millions" of the 150 million subscribers to the service, though how many millions is impossible to estimate — but that won't stop me trying! $3.1 billion in 2025 revenue works out to $258 million a month, which would mean 12.9 million Google One subscribers also paying for the premium AI tier. This isn't out of the realm of possibility — after all, OpenAI has 15.5 million paying subscribers — but Post is making a generous assumption here. Nevertheless, we'll accept the numbers as they are. And the numbers fuckin' stink! Google's $1.1 billion in workspace service revenue came from a forced price hike on those who use Google services to run their businesses, meaning that this is likely not a number that can significantly increase without punishing them further. $7.7 billion of revenue — not profit! — on $75 billion of capital expenditures. Nasty! Someone's gonna get mad at me for saying this, but I believe that Meta is simply burning cash on generative AI.
There is no product that Meta sells that monetizes Large Language Models, but every Meta product now has them shoved into it, such as your Instagram DMs oinking at you to generate artwork based on your conversation. Nevertheless, we do have some sort of knowledge of what Meta is saying due to the copyright infringement case Kadrey v. Meta. Unsealed judgment briefs revealed in April that Meta is claiming that "GenAI-driven revenue will be more than $2 billion," with estimates as high as $3 billion. The same document also claims that Meta expects to make $460 billion to $1.4 trillion in total revenue through 2035, the kind of thing that should get you fired in an iron ball into the sun. Meta makes 99% of its revenue from advertising, and the unsealed documents state that it "[generates] revenue from [its] Llama models and will continue earning revenue from each iteration," and will "share a percentage of the revenue that [it generates] from users of the Llama models...hosted by those companies," with the companies in question redacted. Max Zeff of TechCrunch adds that Meta lists host partners like AWS, NVIDIA, Databricks, Groq, Dell, Microsoft Azure, Google Cloud, and Snowflake, so it's possible that Meta makes money from licensing to those companies. Sadly, the exhibits further discussing these numbers are filed under seal. Either way, we are now at $332 billion of capital expenditures in 2025 for $28.7 billion of revenue, of which $10 billion is OpenAI's "at-cost or just above cost" revenue. Not great. Despite its prominence, Tesla is one of the members of the Magnificent 7 least exposed to the AI trade, as Elon Musk has turned it into a meme stock company. That doesn't mean, of course, that Musk isn't touching AI.
xAI, the company that develops racist Large Language Model "Grok" and owns what remains of Twitter, apparently burns $1 billion a month, and The Information reports that it makes a whopping $100 million in annualized revenue — so, about $8.33 million a month. There is a shareholder vote for Tesla to potentially invest in xAI, which will probably happen, allowing Musk to continue to pull leverage from his Tesla stock until the company's decaying sales and brand eventually swallow him whole. But we're not talking about Elon Musk today. Apple Intelligence radicalized millions of people against AI, mostly because it fucking stank. Apple clearly got into AI reluctantly, and now faces stories about how it "fell behind in the AI race," which mostly means that Apple aggressively introduced people to the features of generative AI by force, and it turns out that people don't really want to summarize documents, write emails, or make "custom emoji," and anyone who thinks they would is a fucking alien. In any case, Apple hasn't bet the farm on AI, in that it hasn't spent two hundred billion dollars on infrastructure for a product with a limited market that only loses money. To be clear, I am not saying that any of the Magnificent 7 are going to die — just that five companies' spend on NVIDIA GPUs largely dictates how stable the US stock market will be. If any of these companies (but especially NVIDIA) sneeze, your 401k or your kid's college fund will catch a cold. I realize this sounds a little simplistic, but by my calculations, NVIDIA's value underpins around 8% of the value of the US stock market. At the time of writing, it accounts for roughly 7.5% of the S&P 500 — an index of the 500 largest US publicly-traded companies. A disturbing 88% of NVIDIA's revenue comes from enterprise-scale GPUs primarily used for generative AI, and five companies' spend makes up 42% of that revenue.
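A crude sketch of that concentration, using the figures above (7.5% index weight, 88% data-center revenue share, 42% from five customers). It naively assumes index exposure scales linearly with revenue mix, which is a simplification, not how valuation actually works:

```python
nvidia_index_weight = 0.075  # NVIDIA's rough share of the S&P 500
dc_revenue_share = 0.88      # share of revenue from enterprise-scale GPUs
top5_share = 0.42            # share of revenue from five big customers

# Naive linear pass-through: portion of the index riding on AI GPU demand,
# and the portion resting on just five buyers.
ai_exposure = nvidia_index_weight * dc_revenue_share
five_buyer_exposure = nvidia_index_weight * top5_share

print(f"{ai_exposure:.1%} of the index tied to AI GPU demand")
print(f"{five_buyer_exposure:.2%} resting on five companies")
```

Even with generous rounding, somewhere around 6-7% of the index's value traces back to data-center GPU demand, with roughly 3% resting on five buyers.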
In the event that any one of these companies makes significant changes to its investments in NVIDIA chips, it will eventually have a direct and meaningful negative impact on the wider market. NVIDIA's earnings are, effectively, the US stock market's confidence, and everything rides on five companies — and if we're honest, really four companies — buying GPUs for generative AI services or to train generative AI models. Worse still, these services lose their operators massive amounts of money while producing very little revenue, meaning that the AI trade is not driving any real, meaningful revenue growth. Any of these companies talking about "growth from AI" or "the jobs that AI will replace" or "how AI has changed their organization" is hand-waving to avoid telling you how much money these services are actually making. If they were making good money and experiencing real growth as a result of these services, they wouldn't shut the fuck up about it! They'd be in your ear and up your ass hooting about how much cash they were rolling in! And they're not, because they aren't rolling in cash, and are in fact blowing nearly a hundred billion dollars each to build massive, power-hungry, costly data centers for no real reason. Don't watch the mouth — watch the hands. These companies are going to say they're seeing growth from AI, but unless they actually show you the growth and enumerate it, they are hand-waving. So, one of the most annoying and consistent responses to my work is to say that either Amazon or Amazon Web Services "ran at a loss," and that Amazon Web Services — the invention of modern cloud computing infrastructure — "lost money and then didn't." The thing is, this statement is one of those things that people say because it sounds rational. Amazon did lose money, and Amazon Web Services was expensive, that's obvious, right?
The thing is, I've never really had anyone explain this point to me, so I am finally going to sit down and deal with this criticism, because every single person who mentions it thinks they just pulled Excalibur from the stone and can now decapitate me. They claim that because people in the past doubted Amazon — in part because of the burn rate of Amazon Web Services as the company built out its infrastructure — and were proven wrong, I too must be wrong. This isn't Camelot, you rube! You are not King Arthur! I will address both the argument itself and the "they" part of it too — because if the argument is that the people who got AWS wrong should not be trusted, then we should no longer trust them now, the same people actively propagandizing our supposed generative AI future. So, I'm honestly not sure where this argument came from, because there is, to my knowledge, no story about Amazon Web Services where somebody suggested its burn rate would kill Amazon. But let's go back in time to the May 31, 1999 piece that some might be thinking of, called "Amazon.bomb," and how writer Jacqueline Doherty was mocked soundly for "being wrong" about Amazon, which has now become quite profitable. I also want to be clear that Amazon Web Services didn't launch until 2006, and Amazon itself would become reliably profitable in 2003. Technically, Amazon had opened up Amazon.com's web services for developers to incorporate its content into their applications in 2002, but what we consider AWS today — cloud storage and compute — launched in 2006. But okay, what did she actually say? Her argument, for the most part, is that Amazon was burning cash, had a ton of competition from other people doing similar things, and that analysts backed her up: Fair arguments for the time, though perhaps a little narrow-minded.
The assumption wasn't that what Amazon was building was a bad idea, but that Amazon wouldn't be the ones to build it, with one saying: In simpler terms: Amazon's business model wasn't in question. People were buying shit online. In fact, this was just before the dot com bubble burst, and when optimism about the web was at a high point. Yet the comparison stops there — people obviously liked buying shit online, it was the business models of many of these companies — like WebVan — that sucked. Amazon Web Services was an outgrowth of Amazon's own infrastructure, which had to expand rapidly to deal with the influx of web traffic for Amazon.com, which had become one of the world's most popular websites and was becoming increasingly complex as it sold things other than books. Other companies had their own infrastructure, but if a smaller company wanted to scale, they'd basically need to build their own thing. It's actually pretty cool what Amazon did! Remember, this was the early 2000s, before Facebook, Twitter, and a lot of the modern internet we know that runs on services like Amazon Web Services, Microsoft Azure and Google Cloud. It invented the modern concept of compute! But we're here to talk about Amazon Web Services being dangerous for Amazon and people hating on it. A November 2006 story from Bloomberg talked about Jeff Bezos' Risky Bet to "run your business with the technology behind his web site," saying that "Wall Street [wanted] him to mind the store." Bezos was referred to as a "one-time internet poster boy" who became "a post-dot-com piñata." Nevertheless, this article has what my haters crave: Pretty bad, right? My goose is cooked? All those analysts seem pretty mad! Except it's not, my goose is raw! Yours, however, has been in the oven for over a year! Emphasis mine: That's right — the ongoing costs aren't the problem. Hey wait a second, that's a name! I can look up a name! Scott W.
Devitt now works at Wedbush as its managing director of equity research, and has said AI companies would enter a new stage in 2025...god, just read this: The analyst says the market will enter phase two in 2025, with software companies and cloud provider stocks expected to see gains. He adds that cybersecurity companies could also benefit as the technology evolves. Devitt specifically calls out Palantir, Snowflake, and Salesforce as those who would "gain." In none of these cases am I able to see the actual revenue from AI, but Salesforce itself said that it will see no revenue growth from AI this year. Palantir, as discovered by the Autonomy Institute's recent study, recently added the following to its public disclosures: What I'm saying is that analysts can be wrong! And they can be wrong at scale! There is no analyst consensus that agrees with me. In fact, most analysts appear to be bullish on AI, despite the significantly worse costs and total lack of growth! Yet even in this Hater's Parade, the unnamed journalist makes a case for Amazon Web Services: Nice try, chuckles! In 2015, the year that Amazon Web Services became profitable, Morgan Stanley analyst Katy Huberty believed that it was running at a "material loss," suggesting that $5.5 billion of Amazon's "technology and content expenses" was actually AWS expenses, with a "negative contribution of $1.3 billion." Here is Katy Huberty, the analyst in question, declaring six months ago that "2025 [will] be the year of Agentic AI, robust enterprise adoption, and broadening AI winners." So, yes, analysts really got AWS wrong. But putting that aside, there might actually be a comparison here! Amazon Web Services absolutely created a capital expenditures drain on Amazon. From Forbes's Chuck Jones: In today's money, this means that Amazon spent $6.76 billion in capital expenditures on AWS in 2014.
Assuming it was this much every year — it wasn't, but I want to make an example of every person claiming that this is a gotcha — it took $67.6 billion and ten years (though one could argue it was nine) of pure capital expenditures to turn Amazon Web Services into a business that now makes billions of dollars a quarter in profit. That $67.6 billion is $15.4 billion less than Amazon's capital expenditures for 2024 alone, and the $6.76 billion annual figure is less than one-fifteenth of its projected capex spend for 2025. And to be clear, the actual capital expenditure numbers are likely much lower, but I want to make it clear that even when factoring in inflation, Amazon Web Services was A) a bargain and B) a fraction of the cost of what Amazon has spent in 2024 or 2025. Some people compare Large Language Models and their associated services to Amazon Web Services, or services like Microsoft Azure or Google Cloud, and they are wrong to do so. Amazon Web Services, when it launched, comprised things like (and forgive how much I'm diluting this) Amazon's Elastic Compute Cloud (EC2), where you rent space on Amazon's servers to run applications in the cloud, and Amazon Simple Storage Service (S3), which is enterprise-level storage for applications. In simpler terms, if you were providing a cloud-based service, you used Amazon both to store the stuff that the service needed and to do the actual cloud-based processing (compute — the way your computer loads and runs applications, but delivered to thousands or millions of people). This is a huge industry. Amazon Web Services alone brought in revenues of over $100 billion in 2024, and while Microsoft and Google don't break out their cloud revenues, they're similarly large parts of their businesses, and Microsoft has used Azure in the past to patch over shoddy growth. These services are also selling infrastructure. You aren't just paying for the compute, but the ability to access storage and deliver services with low latency — so users have a snappy experience — wherever they are in the world.
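The AWS capex comparison above, restated as code. The $6.76 billion figure comes from the piece; the roughly $83 billion for Amazon's 2024 capex is an assumption implied by the $15.4 billion gap, and treating every year of the build-out as costing as much as 2014 is a deliberate overestimate:

```python
aws_capex_2014 = 6.76e9   # 2014 AWS capex, inflation-adjusted
years = 10                # 2006 launch through 2015 profitability

# Deliberately overstated: pretend every year cost as much as 2014 did.
aws_buildout = aws_capex_2014 * years
amazon_capex_2024 = 83e9  # approximate 2024 figure implied by the text

print(f"AWS build-out (overstated): ${aws_buildout / 1e9:.1f}B")
print(f"Short of 2024 capex by: ${(amazon_capex_2024 - aws_buildout) / 1e9:.1f}B")
```

Even at its most inflated, the entire decade-long AWS build-out costs less than a single recent year of Amazon's AI-era capex.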
The subtle magic of the internet is that it works at all, and a large part of that is the cloud compute infrastructure and the oligopoly of the main providers having such vast data centers. This is much cheaper than doing it yourself, until a certain point: Dropbox moved away from Amazon Web Services as it scaled. It also allows someone else to take care of maintenance of the hardware and make sure it actually gets to your customers. You also don't have to worry about spikes in usage, because these things are usage-based, and you can always add more compute to meet demand. There is, of course, nuance — security-specific features, content-specific delivery services, database services — behind these clouds. You are buying into the infrastructure of the infrastructure provider, and the reason these products are so profitable is, in part, because you are handing off the problems and responsibility to somebody else. And based on that idea, there are multiple product categories you can build on top of it, because ultimately cloud services are about Amazon, Microsoft and Google running your infrastructure for you. Large Language Models and their associated services are completely different, despite these companies attempting to prove otherwise, and it starts with a very simple problem: why did any of these companies build these giant data centers and fill them full of GPUs? Amazon Web Services was created out of necessity — Amazon's infrastructure needs were so great that it effectively had to build both the software and hardware necessary to deliver a store that sold theoretically everything to theoretically anywhere, handling the traffic from customers, delivering the software that runs Amazon.com quickly and reliably, and, well, making sure things ran in a stable way. It didn't need to come up with a reason for people to run web applications — they were already doing so themselves, but in ways that cost a lot, were inflexible, and required specialist skills.
AWS took something that people already did, and that there was proven demand for, and made it better. Eventually, Google and Microsoft would join the fray. And that appears to be the only similarity with generative AI — that due to the ridiculous costs of both the data centers and GPUs necessary to provide these services, it's largely impossible for others to even enter the market. Yet after that, generative AI feels more like a feature of cloud infrastructure rather than infrastructure itself. AWS and similar megaclouds are versatile, flexible and multifaceted. Generative AI does what generative AI does, and that's about it. You can run lots of different things on AWS. What are the different things you can run using Large Language Models? What are the different use cases, and, indeed, user requirements that make this the supposed "next big thing"? Perhaps the argument is that generative AI is the next AWS or similar cloud service because you can build the next great companies on the infrastructure of others — the models of, say, OpenAI and Anthropic, and the servers of Microsoft. So, okay, let's humour this point too. You can build the next great AI startup, and you have to build it on one of the megaclouds because they're the only ones that can afford to build the infrastructure. One small problem. Let's start by establishing a few facts: None of this is to say that one hundred million dollars isn't a lot of money to you and me, but in the world of Software-as-a-Service or enterprise software, this is chump change. Hubspot had revenues of $2.63 billion in its 2024 financial year. We're three years in, and generative AI's highest-grossing companies — outside OpenAI ($10 billion annualized as of early June) and Anthropic ($4 billion annualized as of July), both of which lose billions a year — have three major problems: But let's start with Anysphere and Cursor, its AI-powered coding app, and its $500 million of annualized revenue. Pretty great, right?
It hit $200 million in annualized revenue in March, then hit $500 million annualized revenue in June after raising $900 million. That's amazing! Sadly, it's a mirage. Cursor's growth was a result of an unsustainable business model that it's now had to replace with opaque terms of service, dramatically restricted access to models, and rate limits that effectively stop its users from using the product at the price point they were used to. It's also horribly unprofitable, and a sign of things to come for generative AI. A couple of weeks ago, I wrote up the dramatic changes that Cursor made to its service in the middle of June on my premium newsletter, and discovered that they timed precisely with Anthropic (and OpenAI to a lesser extent) adding "service tiers" and "priority processing," which is tech language for "pay us extra if you have a lot of customers or face rate limits or service delays." These price shifts have also led to companies like Replit having to make significant changes to their pricing models in ways that disfavor users. I will now plagiarise myself: In simpler terms, Cursor raised $900 million and very likely had to hand large amounts of that money over to OpenAI and Anthropic to keep doing business with them, and then immediately changed its terms of service to make them worse. As I said at the time: Cursor is the largest and most-successful generative AI company, and these aggressive and desperate changes to its product suggest A) that its product is deeply unprofitable and B) that its current growth was a result of offering a product that was not the one it would sell in the long term. Cursor misled its customers, and its current revenue is, as a result, highly unlikely to stay at this level. Worse still, the two Anthropic engineers who left to join Cursor two weeks ago just returned to Anthropic. This heavily suggests that whatever they saw at Cursor wasn't compelling enough to make them stay.
As I also said: I realize it's likely a little boring hearing about software as a service, but this is the only place where generative AI can really make money. Companies buying hundreds or thousands of seats are how industries that rely upon compute grow, and without that growth, they're going nowhere. To give you some context, Netflix makes about $39 billion a year in subscription revenue, and Spotify about $18 billion. These are the single most popular consumer software subscriptions in the world — and OpenAI's 15.5 million subscribers suggest that it can't rely on them for the kind of growth that would actually make the company worth $300 billion (or more). Cursor is, as it stands, the one example of a company thriving using generative AI, and it appears its rapid growth was a result of selling a product at a massive loss. Today, Cursor's product is significantly worse, and its subreddit is full of people furious at the company for the changes. In simpler terms, Cursor was the company that people mentioned to prove that startups could make money by building products on top of OpenAI and Anthropic's models, yet the truth is that the only way to do so and grow is to burn tons of money. While the tempting argument is to say that Cursor's "customers are addicted," this is clearly not the case, nor is it a real business model. This story also showed that Anthropic and OpenAI are the biggest threats to their customers, and will actively rent-seek and punish their success stories, looking to loot as much as they can from them. To put it bluntly: Cursor's growth story was a lie. It reached $500 million in annualized revenue selling a product it can no longer afford to sell, suggesting material weakness in its own business and any and all coding startups. It is also remarkable — and a shocking failure of journalism — that this isn't in every single article about Anysphere. I'm serious! Perplexity? Perplexity only has $150 million in annualized revenue!
It spent 167% of its 2024 revenue on compute services from Anthropic, OpenAI, and Amazon — $57 million against $34 million of revenue! It lost $68 million! And worse still, it has no path to profitability, and it's not even anything new! It's a search engine! Professional gasbag Alex Heath just did a flummoxing interview with Perplexity CEO Aravind Srinivas, who, when asked how it would become profitable, appeared to experience a stroke: Aravind, do you smell toast? And don't talk to me about "AI browsers," I'm sorry, it's not a business model. How are people going to make revenue on this, hm? What do these products actually do? Oh, they can poorly automate accepting LinkedIn invites? It's like God himself has personally blessed my computer. Big deal! In any case, it doesn't seem like you can really build a consumer AI startup that makes anything approaching a real company. Other than ChatGPT, I guess? Arguably the biggest sign that things are troubling in the generative AI space is that we use "annualized revenue" at all, which, as I've mentioned repeatedly, means multiplying a month's revenue by 12 and saying "that's our annualized!" The problem with this number is that, well, people cancel things. While your June might be great, if 10% of your subscribers churn in a bad month (due to, say, a change in your terms of service), that's a chunk of your annualized revenue gone. But the worst sign is that nobody is saying the monthly figures, mostly because the monthly figures kinda suck! $100 million of annualized revenue is $8.33 million a month. To give you some scale, Amazon Web Services hit $189 million ($15.75 million a month) in revenue in 2008, two years after launch, and while it took until 2015 to hit profitability, it actually hit break-even in 2009, though it invested cash in growth for a few years after. Right now, not a single generative AI software company is profitable, and none of them are showing the signs of the kind of hypergrowth that previous "big" software companies had.
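The annualized-revenue sleight of hand is easy to show in a few lines (the 10% churn month is the hypothetical from above):

```python
annualized = 100e6
monthly = annualized / 12
print(f"${monthly / 1e6:.2f}M a month")  # the unglamorous monthly figure

# One bad month with 10% churn, and the "annualized" number drops with it:
after_churn = monthly * 0.90
print(f"${after_churn * 12 / 1e6:.0f}M new annualized")
```

$100 million "annualized" is $8.33 million a month, and one churn-heavy month knocks the headline number down to $90 million — which is why companies quote the annualized figure and never the monthly one.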
While Cursor is technically "the fastest growing SaaS of all time," it did so using what amounts to fake pricing. You can dress this up as "growth stage" or "enshittification" (it isn't, by the way; generally price changes make things profitable, which this did not), but Cursor lied. It lied to the public about what its product would do long-term. It isn't even obvious whether its current pricing is sustainable. Outside of Cursor, what other software startups are there? Everyone loves to talk about enterprise search company Glean — a company that uses AI to search and generate answers from your company's files and documents. In December 2024, Glean raised $260 million, proudly stating that it had over $550 million of cash in hand with "best-in-class ARR growth." A few months later in February 2025, Glean announced it'd "achieved $100 million in annual recurring revenue in fourth quarter FY25, cementing its position as one of the fastest-growing SaaS startups and reflecting a surging demand for AI-powered workplace intelligence." In this case, ARR could literally mean anything, as it appears to be based on quarters — meaning it could be an average of the last three months of the year, I guess? Anywho, in June 2025, Glean announced it had raised another funding round, this time raising $150 million, and, troublingly, added that since its last round, it had "...surpassed $100M in ARR." Five months into the fucking year and your revenue figure is the same? That isn't good! That isn't good at all! Also, what happened to that $550 million in cash? Why did Glean need more? Hey wait a second, Glean announced its raise on June 18, 2025, two days after Cursor's pricing increase and the same day that Replit announced a similar hike! It's almost as if everybody's costs dramatically increased due to the introduction of Anthropic's Service Tiers and OpenAI's Priority Processing. I'm guessing, but isn't it kind of weird that all of these companies raised money at about the same time?
Hey, that reminds me. If you look at what generative AI companies do (note that the following is not a quality barometer), it's probably one of the following things: Every single generative AI company that isn't OpenAI or Anthropic does one or a few of these things, and I mean every one of them, because every single generative AI company uses Large Language Models, which have inherent limits on what they can do. LLMs can generate, they can search, they can edit (kind of!), they can transcribe (sometimes accurately!) and they can translate (often less accurately). As a result, it's very, very difficult for a company to build something unique. Though Cursor is successful, it is ultimately a series of system prompts, a custom model that its users hate, a user interface, and connections to models by OpenAI and Anthropic, both of whom have competing products and make money from Cursor and its competitors. Within weeks of Cursor's changes to its services, Amazon and ByteDance released competitors that, for the most part, do the same thing. Sure, there are a few differences in how they're designed, but design is not a moat, especially in a high-cost, negative-profit business, where your only way of growing is to offer a product you can't afford to sustain. The only other moat you can build is the services you provide, which, when your services are dependent on a Large Language Model, are dependent on the model developer, who, in the case of OpenAI and Anthropic, could simply clone your startup, because the only valuable intellectual property is theirs. You may say "well, nobody else has any ideas either," to which I'll say that I fully agree. My Rot-Com Bubble thesis suggests we're out of hypergrowth ideas, and yeah, I think we're out of ideas related to Large Language Models too. At this point, I think it's fair to ask — are there any good companies you can build on top of Large Language Models?
I don't mean adding LLM-related features, I mean an AI company that actually sells a product that people buy at scale that isn't called ChatGPT. In previous tech booms, companies would make their own "models" — their own infrastructure, or the things that make them distinct from other companies — but the generative AI boom effectively changes that by making everybody build stuff on top of somebody else's models, because training your own models is both extremely expensive and requires vast amounts of infrastructure. As a result, much of this "boom" is about a few companies — really two, if we're honest — getting other companies to try and build functional software for them. I wanted to add one note — that, ultimately, OpenAI and Anthropic are bad for their customers. Their models are popular (by which I mean their customers' customers will expect access to them), meaning that OpenAI and Anthropic can (as they did with Cursor) arbitrarily change pricing, service availability or functionality based on how they feel that day. Don't believe me? Anthropic cut off access to AI coding platform Windsurf because it looked like it might get acquired by OpenAI. Even by big tech standards, this fucking sucks. And these companies will do it again! Because all Large Language Models require more data than anyone has ever needed, they all basically have to use the same data, either taken from the internet or bought from one of a few companies (Scale, Surge, Turing, Together, etc.). While they can get customized data or do customized training and reinforcement learning, these models are all transformer-based, and they all function similarly, and the only way to make them different is by training them, which doesn't make them much different, just better at things they already do. I already mentioned OpenAI and Anthropic's costs, as well as Perplexity's $50 million+ bill to Anthropic, Amazon and OpenAI off of a measly $34 million in revenue.
These companies cost too much to run, and their functionality doesn't make enough money to make them make sense. The problem isn't just the pricing, but how unpredictable it is. As Matt Ashare wrote for CIO Dive last year, generative AI makes a lot of companies' lives difficult through the massive spikes in costs that come from power users, with few ways to mitigate them. One of the ways that a company manages its cloud bills is by having some degree of predictability — which is difficult to do with the constant slew of new models and demands for new products to go with them, especially when said models can (and do) cost more with subsequent iterations. As a result, it's hard for AI companies to actually budget. "But Ed!" you cry, "What about AGENTS?" Let me tell you about agents. The term "agent" is one of the most egregious acts of fraud I've seen in my entire career writing about this crap, and that includes the metaverse. When you hear the word "agent," you are meant to think of an autonomous AI that can go and do stuff without oversight, replacing somebody's job in the process, and companies have been pushing the boundaries of good taste and financial crimes in pursuit of them. Most egregious of them is Salesforce's "Agentforce," which lets you "deploy AI agents at scale" and "brings digital labor to every employee, department and business process." This is a blatant fucking lie. Agentforce is a god damn chatbot platform: it's for launching chatbots, which can sometimes plug into APIs that allow them to access other information, but which are neither autonomous nor "agents" by any reasonable definition. Not only does Salesforce not actually sell "agents," its own research shows that agents only achieve around a 58% success rate on single-step tasks, meaning, to quote The Register, "tasks that can be completed in a single step without needing follow-up actions or more information."
On multi-step tasks — so, you know, most tasks — they succeed a depressing 35% of the time. Last week, OpenAI announced its own "ChatGPT agent" that can allegedly go "do tasks" on a "virtual computer." In its own demo, the agent took 21 or so minutes to spit out a plan for a wedding with destinations, a vague calendar and some suit options, and then showed a pre-prepared demo of the "agent" preparing an itinerary of how to visit every major league ballpark. In this case, the "agent" took 23 minutes, and produced arguably the most confusing-looking map I've seen in my life. It also missed out every single major league ballpark on the East Coast — including Yankee Stadium and Fenway Park — and added a random stadium in the middle of the Gulf of Mexico. What team is that, eh Sam? The Deepwater Horizon Devils? Is there a baseball team in North Dakota? I should also be clear this was the pre-prepared example. As with every Large Language Model-based product — and yes, that's what this is, even if OpenAI won't talk about which model — results are extremely variable. Agents are difficult, because tasks are difficult, even if they can be completed by a human being that a CEO thinks is stupid. What OpenAI appears to be doing is using a virtual machine to run scripts that its models trigger. Regardless of how well it works (it works very, very poorly and inconsistently), it's also likely very expensive. In any case, every single company you see using the word agent is trying to mislead you. Glean's "AI agents" are chatbots with if-this-then-that functions that trigger events using APIs (the connectors between different software services), not taking actual actions, because that is not what LLMs can do. ServiceNow's AI agents that allegedly "act autonomously and proactively on your behalf" are, despite claiming they "go beyond 'better chatbots,'" still ultimately chatbots that use APIs to trigger different events using if-this-then-that functions.
Sometimes these chatbots can also answer questions that people might have, or trigger an event somewhere. Oh, right, that's the same thing. The closest we have to an "agent" of any kind is a coding agent, which can make a list of things that you might do on a software project and then go and generate the code and push stuff to GitHub when you ask it to, and it can do so "autonomously," in the sense that you can let it just run whatever task seems right. When I say "ask it to" or "go and," I mean that these agents are not remotely intelligent, and when let run rampant, they fuck up everything and create a bunch of extra work. Also, a study found that AI coding tools made engineers 19% slower. Nevertheless, none of these products are autonomous agents, and anybody using the term agent likely means "chatbot." And it's working, because the media keeps repeating everything these companies say. I realize we've taken kind of a scenic route here, but I needed to lay the groundwork, because I am well and truly alarmed. According to a UBS report from the 26th of June, the public companies running AI services are making absolutely pathetic amounts of money from AI: ServiceNow's use of "$250 million ACV" — so, annual contract value — may be one of the more honest explanations of revenue I've seen, putting it in the upper echelons of AI revenue unless, of course, you think for two seconds about whether these are AI-specific contracts. Or, perhaps, are they contracts that merely include AI? Eh, who cares. These are also year-long agreements that could churn, and according to Gartner, over 40% of "agentic AI" projects will be canceled by end of 2027. And really, ya gotta laugh at Adobe and Salesforce, both of whom have talked so god damn much about generative AI and yet have only made around $100 million in annualized revenue from it. Pathetic! These aren't futuristic numbers! They're barely product categories! And none of this seems to include costs.
I haven't really spent time on my favourite subject — OpenAI being a systemic risk to the tech industry. Anthropic is in a similar, but slightly better position — it is set to lose $3 billion this year on $4 billion of revenue. It also has no path to profitability, recently jacked up prices on Cursor, its largest customer, and had to put restraints on Claude Code after allowing users to burn 100% to 10,000% of their revenue. These are the actions of a desperate company.

Nevertheless, OpenAI and Anthropic's revenues amount to, by my estimates, more than half of the entire revenue of the generative AI industry, including the hyperscalers. To be abundantly clear: the two companies that amount to around half of all generative artificial intelligence revenue are ONLY LOSING MONEY.

I've said a lot of this before, which is why I'm not harping on about it, but the most important company in the entire AI industry needs to convert by the end of the year or it's effectively dead, and even if it does, it burns billions and billions of dollars a year and will die without continual funding. It has no path to profitability, and anyone telling you otherwise is a liar or a fantasist.

Worse still, outside of OpenAI...what is there, really? As I wrote earlier in the year, there is really no significant adoption of generative AI services or products. ChatGPT has 500 million weekly users, and otherwise, it seems that other services struggle to get 15 million of them. And while the 500 million weekly users sounds — and, in fairness, is — impressive, there's a world of difference between someone using a product as part of their job, and someone dicking around with an image generator, or a college student trying to cheat on their homework.

Sidebar: Google cheated by combining Google Gemini with Google Assistant to claim that it has 350 million users. Don't care, sorry.
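For anyone who wants the Anthropic arithmetic spelled out — using the round figures reported above, nothing more precise than that — a $3 billion loss on $4 billion of revenue implies roughly $7 billion in costs, or about $1.75 spent for every dollar earned:

```python
# Sanity check of the reported figures (round numbers only, as cited above)
revenue_bn = 4.0                        # reported revenue, $bn
loss_bn = 3.0                           # reported loss, $bn
costs_bn = revenue_bn + loss_bn         # implied costs: 7.0
cost_per_dollar = costs_bn / revenue_bn # dollars spent per dollar earned
print(costs_bn, cost_per_dollar)        # → 7.0 1.75
```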
This is worrying on so many levels, chief of which is that everybody has been talking about AI for three god damn years, everybody has said "AI" in every earnings call and media appearance and exhausting blog post, and we still can't scrape together the bits needed to make a functional industry. I know some of you will probably read this and point to ChatGPT's users, and I quote myself here: It has, allegedly, 500 million weekly active users — and, by the last count, only 15.5 million paying subscribers, an absolutely putrid conversion rate even before you realize that the actual conversion rate would be measured against monthly active users. That's how any real software company actually defines its metrics, by the fucking way.

Why is this impressive? Because it grew fast? It literally had more PR and more marketing and more attention and more opportunities to sell to more people than any company has ever had in the history of anything. Every single industry has been told to think about AI for three years, and they've been told to do so because of a company called OpenAI. There isn't a single god damn product since Google or Facebook that has had this level of media pressure, and both of those companies launched without the massive amount of media (and social media) that we have today.

ChatGPT is a very successful growth product and an absolutely horrifying business. OpenAI is a banana republic that cannot function on its own; it does not resemble Uber, Amazon Web Services, or any other business in the past other than WeWork, the other company that SoftBank spent way too much money on. And outside of ChatGPT, there really isn't anything else.

Before I wrap up — I'm tired, and I imagine you are too — I want to address something. Yes, generative AI has functionality. There are coding products and search products that people like and pay for.
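If you want the "putrid" conversion rate as an actual number, it's a one-liner from the figures in the quote above — 15.5 million paying subscribers against 500 million weekly active users:

```python
# The subscriber math from the quote above
weekly_users = 500_000_000
paying_subscribers = 15_500_000
conversion = paying_subscribers / weekly_users
print(f"{conversion:.1%}")  # → 3.1%
```

And remember, that 3.1% is flattered by using weekly actives as the denominator; against monthly actives it would be lower still.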
As I have discussed above, none of these companies are profitable, and until one of them is profitable, generative AI-based companies are not real businesses. In any case, the problem isn't so much that LLMs "don't do anything," but that people talk about them doing things they can't do. I believe that the generative AI market is a $50 billion revenue industry masquerading as a $1 trillion one, and the media is helping.

As I've explained at length, the AI trade is not one based on revenue, user growth, the efficacy of tools or the significance of any technological breakthrough. Stocks are not moving based on whether companies are making money on AI, because if they were, they'd be moving downward. However, due to the vibes-based nature of the AI trade, companies are benefiting from the press inexplicably crediting growth to AI with no proof that that's the case.

OpenAI is a terrible business, and the only businesses worse than OpenAI are the companies built on top of it. Large Language Models are too expensive to run, and have limited abilities beyond the ones I've named previously, and because everybody is running models that all, on some level, do the same thing, it's very hard for people to build really innovative products on top of them.

And, ultimately, this entire trade hinges on GPUs. CoreWeave was initially funded by NVIDIA, its IPO funded partially by NVIDIA, NVIDIA is one of its customers, and CoreWeave raises debt on the GPUs it buys from NVIDIA to build more data centers, while also using the money to buy GPUs from NVIDIA. This isn't me being polemical or hysterical — this is quite literally what is happening, and how CoreWeave operates. If you aren't alarmed by that, I'm not sure what to tell you. Elsewhere, Oracle is buying $40 billion in GPUs for the still-unformed Stargate data center project, and Meta is building a Manhattan-sized data center to fill with NVIDIA GPUs.
OpenAI is Microsoft's largest Azure client — an insanely risky proposition on multiple levels, not simply in the fact that it's serving that compute at-cost but that Microsoft executives believed OpenAI would fail in the long term when they invested in 2023 — and Microsoft is NVIDIA's largest client for GPUs, meaning that any changes to Microsoft's future interest in OpenAI, such as reducing its data center expansion, would eventually hit NVIDIA's revenue.

Why do you think DeepSeek shocked the market? It wasn't because of any clunky story around training techniques. It was because it said to the market that NVIDIA might not sell more GPUs every single quarter in perpetuity.

Microsoft, Meta, Google, Apple, Amazon and Tesla aren't making much money from AI — in fact, they're losing billions of dollars on whatever revenues they do make from it. Their stock growth is not coming from actual revenue, but from the vibes around "being an AI company," which means absolutely jack shit when you don't have the users, finances, or products to back them up.

So, really, everything comes down to NVIDIA's ability to sell GPUs, and this industry, if we're really honest, at this point only exists to do so. Generative AI does not provide significant revenue growth, its products are not useful in ways that unlock significant business value, and the products that have some adoption run at a grotesque loss.

I realize I've thrown a lot at you, and, for the second time this year, written the longest thing I've ever written. But I needed to write this, because I'm really worried. We're in a bubble. If you do not think we're in a bubble, you are not looking outside. Apollo Global Chief Economist Torsten Slok said it last week. Well, okay, what he said was much worse: We are in a bubble.
Generative AI does not do the things that it's being sold as doing, and the things it can actually do aren't the kind of things that create business returns, automate labor, or really do much more than serve as one extension of a cloud software platform. The money isn't there, the users aren't there, every company seems to lose money, and some companies lose so much money that it's impossible to tell how they'll survive.

Worse still, this bubble is entirely symbolic. The bailouts of the Great Financial Crisis were focused on banks and funds that had failed because they ran out of money, and the TARP initiative existed to plug the holes with low-interest loans. There are few holes to plug here, because even if OpenAI and Anthropic somehow became eternal money-burners, the AI trade exists based on the continued and continually-increasing sale and use of GPUs. There are limited amounts of capital, but also limited amounts of data centers to actually put GPUs in, and on top of that, at some point growth will slow at one of the Magnificent 7, at which point costs will have to come down from things that lose them tons of money, such as generative AI.

Before you ask: you do not have proof for this statement! The cost of tokens going down is not the same thing as the cost of inference going down! Everyone saying this is saying it because a guy once said it to them! You don't have proof! I have more proof for what I am saying! While it theoretically might be, all evidence points to larger models costing more money, especially reasoning-heavy ones like Claude Opus 4. Inference is not the only thing happening, and if this is your one response, you are a big bozo and doofus and should go back to making squeaky noises when you see tech executives or hear my name.

Okay, so one argument is that these companies will use ASICs — customized chips for specific operations — to reduce the amount they're spending.
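To show why "tokens got cheaper" doesn't mean "inference got cheaper," here's a toy calculation. The prices and token counts are invented purely to illustrate the shape of the argument — they are not real figures for any model: if per-token price falls 3x but a reasoning-heavy model emits 10x the tokens per request, the per-request cost goes up.

```python
# Illustrative only — these prices and token counts are invented, not real
# figures for any model.
old_price_per_m = 30.0    # $ per million tokens, older model
old_tokens = 1_000        # tokens emitted per request
new_price_per_m = 10.0    # $ per million tokens — 3x cheaper per token
new_tokens = 10_000       # reasoning models emit far more tokens per request

old_cost = old_price_per_m * old_tokens / 1_000_000
new_cost = new_price_per_m * new_tokens / 1_000_000
print(old_cost, new_cost)  # per-request cost rises despite cheaper tokens
```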
A few thoughts: I am worried because despite all of these obvious, brutal and near-unfixable problems, everybody is walking around acting like things are going great with AI. The New York Times claims everybody is using AI for everything — a blatant lie, one that exists to prop up an industry that has categorically failed to deliver the innovations or returns that it promised, yet still receives glowing press from a tech and business media that refuses to look outside and see that the sky is red and frogs are landing everywhere. Other than the frog thing, I'm not even being dramatic.

Everywhere you look in the AI trade, things get worse — no revenue, billions being burned, no moat, no infrastructure play, no comparables in history other than the dot-com bubble and WeWork, and a series of flagrant lies spouted by the powerful and members of the press that are afraid of moving against market consensus.

Worse still, despite NVIDIA's strength, NVIDIA is the market's weakness, through no fault of its own, really. Jensen Huang sells GPUs, people want to buy GPUs, and now the rest of the market is leaning aggressively on one company, feeding it billions of dollars in the hopes that the things they're buying start making them a profit. And that really is the most ridiculous thing. At the center of the AI trade sit GPUs that, on installation, immediately start losing the company in question money. Large Language Models burn cash for negative returns to build products that all kind of work the same way.

If you're going to say I'm wrong, sit and think carefully about why. Is it because you don't want me to be right? Is it because you think "these companies will work it out"? This isn't anything like Uber, AWS, or any other situation. It is its own monstrosity, a creature of hubris and ignorance caused by a tech industry that's run out of ideas, built on top of one company. You can plead with me all you want about how there are actual people using AI.
You've probably read the "My AI Skeptic Friends Are All Nuts" blog, and if you're gonna send it to me, read the response from Nik Suresh first. If you're going to say that I "don't speak to people who actually use these products," you are categorically wrong and in denial.

I am only writing with this aggressive tone because, for the best part of two years, I have been made to repeatedly explain myself in a way that no AI "optimist" ever is, and I admit I resent it. I have written hundreds of thousands of words with hundreds of citations, and still, to this day, there are people who claim I am somehow flawed in my analysis, that I'm missing something, that I am somehow failing to make my case. The only people failing to make their case are the AI optimists still claiming that these companies are making "powerful AI." And once this bubble pops, I will be asking for an apology.

I love ending pieces with personal thoughts about stuff because I am an emotional and overly honest person, and I enjoy writing a lot. I do not, however, enjoy telling you at length how brittle everything is. An ideal tech industry would be one built on innovation, revenue, and real growth based on actual business returns — one that helped humans be better, not one that outright lied about replacing them.

All that generative AI has done is show how much lust there is in both the markets and the media for replacing human labor — and yes, it is in the media too. I truly believe there are multiple reporters who feel genuine excitement when they write scary stories about how Dario Amodei says white collar workers will be fired in the next few years in favour of "agents" that will never exist.

Everything I'm discussing is the result of the Rot Economy thesis I wrote back in 2023 — the growth-at-all-costs mindset that has driven every tech company to focus on increasing quarterly revenue numbers, even if the products suck, or are deeply unprofitable, or, in the case of generative AI, both.
Nowhere has there been a more pungent version of the Rot Economy than in Large Language Models, or more specifically GPUs. By making everything about growth, you inevitably reach a point where the only thing you know how to do is spend money, and both LLMs and GPUs allowed big tech to do the thing that worked before — building a bunch of data centers and buying a bunch of chips — without making sure they'd done the crucial work of "making sure this would create products people like." As a result, we're now sitting on top of one of the most brittle situations in economic history — our markets held up by whether four or five companies will continue to buy chips that start losing them money the second they're installed.

I am disgusted by how many people are unwilling or unable to engage with the truth, favouring instead a scornful, contemptuous tone toward anybody who doesn't believe that generative AI is the future. If you are a writer who writes about AI while smarmily insulting people who "don't understand AI," you are a shitty fucking writer, because either AI isn't that good or you're not good at explaining why it's good. Perhaps it's both.

If you want to know my true agenda, it's that I see in generative AI and its boosters something I truly dislike. Large Language Models authoritatively state things that are incorrect because they have no concept of right or wrong. I believe that the writers, managers and executives who find it exciting do so because it gives them the ability to pretend to be intelligent without actually learning anything, to do everything they can to avoid actual work or responsibility for themselves or others. There is an overwhelming condescension that comes from fans of generative AI — the sense that they know something you don't, something they double down on.
We are being forced to use it by bosses, or by services we like that now insist it's part of our documents or our search engines, not because it does something, but because those pushing it need us to use it to prove that they know what's going on. To quote my editor Matt Hughes: "...generative AI...is an expression of contempt towards people, one that considers them to be a commodity at best, and a rapidly-depreciating asset at worst."

I haven't quite cracked why, but generative AI also brings out the worst in some people. By giving the illusion of labor, it excites those who are desperate to replace or commoditize it. By giving the illusion of education, it excites those who are too idle to actually learn things by convincing them that in a few minutes they can learn quantum physics. By giving the illusion of activity, it allows the gluttony of Business Idiots that control everything to pretend that they do something. By giving the illusion of futurity, it gives reporters that have long since disconnected from actual software and hardware the ability to pretend that they know what's happening in the tech industry. And, fundamentally, its biggest illusion is economic activity, because despite being questionably useful and burning billions of dollars, its need to do so creates a justification for spending billions of dollars on GPUs and data center sprawl, which allows big tech to sink money into something and give the illusion of growth.

I love writing, but I don't love writing this. I think I'm right, and it's not something I'm necessarily happy about. If I'm wrong, I'll explain how I'm wrong in great detail, and not shy away from taking accountability, but I really do not think I am, and that's why I'm so alarmed. What I am describing is a bubble, and one with an obvious weakness: one company's ability to sell hardware to four or five other companies, all to run services that lose billions of dollars. At some point the momentum behind NVIDIA slows.
Maybe it won't even be sales slowing — maybe it'll just be the suggestion that one of its largest customers won't be buying as many GPUs. Perception matters just as much as actual numbers, and sometimes more, and a shift in sentiment could start a chain of events that knocks down the entire house of cards. I don't know when, I don't know how, but I really, really don't know how I'm wrong.

I hate that so many people will see their retirements wrecked, and that so many people intentionally or accidentally helped steer the market in this reckless, needless and wasteful direction, all because big tech didn't have a new way to show quarterly growth. I hate that so many people have lost their jobs because companies are spending the equivalent of the entire GDP of some European countries on data centers and GPUs that won't actually deliver any value.

But my purpose here is to explain to you, no matter your background or interests or creed or whatever way you found my work, why it happened. As you watch this collapse, I want you to tell your friends why — the people responsible and the decisions they made — and make sure it's clear that there are people responsible. Sam Altman, Dario Amodei, Satya Nadella, Sundar Pichai, Tim Cook, Elon Musk, Mark Zuckerberg and Andy Jassy have overseen a needless, wasteful and destructive economic force that will harm our markets (and, by extension, our economy) and the tech industry writ large, and when this is over, they must be held accountable.

And remember that you, as a regular person, can understand all of this. These people want you to believe this is black magic, that you are wrong to worry about the billions wasted or question the usefulness of these tools. You are smarter than they reckon and stronger than they know, and a better future is one where you recognize this, and realize that power and money don't make a man righteous, right, or smart.
I started writing this newsletter with 300 subscribers, and I now have 67,000 and a growing premium subscriber base. I am grateful for the time you’ve given me, and really hope that I continue to help you see the tech industry for what it currently is — captured almost entirely by people that have no interest in building the future.
