Will the explainer post go extinct?
Will short-form non-fiction internet writing go extinct? This may seem like a strange question to ask. After all, short-form non-fiction internet writing is currently, if anything, on the ascent—at least for politics, money, and culture war—driven by the shocking discovery that many people will pay the cost equivalent of four hardback books each year to support their favorite internet writers. But, particularly for “explainer” posts, the long-term prospects seem dim.

I write about random stuff and then send it to you. If you just want to understand something, why would you read my rambling if AI could explain it equally well, in a style customized for your tastes, and then patiently answer your questions forever?

I mean, say you can explain some topic better than AI. That’s cool, but once you’ve published your explanation, AI companies will put it in their datasets, thankyouverymuch, after which AIs will start regurgitating your explanation. And then—wait a second—suddenly you can’t explain that topic better than AI anymore.

This is all perfectly legal, since you can’t copyright ideas, only presentations of ideas. It used to take work to create a new presentation of someone else’s ideas. And there used to be a social norm to give credit to whoever first came up with some idea. This created incentives to create ideas, even if they weren’t legally protected. But AI can instantly slap a new presentation on your ideas, and no one expects AI to give credit for its training data. Why spend time creating content just so it can be nostrified by the Borg? And why read other humans if the Borg will curate their best material for you?

So will the explainer post survive? Let’s start with an easier question: Already today, AI will happily explain anything. Yet many people read human-written explanations anyway. Why do they do that? I can think of seven reasons:

Accuracy. Current AI is unreliable. If I ask about information theory or how to replace the battery on my laptop, it’s very impressive but makes some mistakes. But if I ask about heritability, the answers are three balls of gibberish stacked on top of each other in a trench-coat. Of course, random humans make mistakes, too. But if you find a quality human source, it is far less likely to contain egregious mistakes. This is particularly true across “large contexts” and for tasks where solutions are hard to verify.

AI is boring. At least, writing from current popular AI tools is boring, by default.

Parasocial relationships. If I’ve been reading someone for a long time, I start to feel like I have a kind of relationship with them. If you’ve followed this blog for a long time, you might feel like you have a relationship with me. Calling these “parasocial relationships” makes them sound sinister, but I think this is normal and actually a clever way of using our tribal-band programming to help us navigate the modern world. Just like in “real” relationships, when I read someone I have a parasocial relationship with, I have extra context that makes it easier to understand them, I feel a sense of human connection, and I feel like I’m getting a sort of update on their “story”. I don’t get any of that with (current) AI.

Skin in the game. If a human screws something up, it’s embarrassing. They lose respect and readers. On a meta-level, AI companies have similar incentives not to screw things up. But AI itself doesn’t (seem to) care. Human nature makes it easier to trust someone when we know they’re putting some kind of reputation on the line.
Conspicuous consumption. Since I read Reasons and Persons, I can brag to everyone that I read Reasons and Persons. If I had read some equally good AI-written book, probably no one would care.

Coordination points. Partly, I read Reasons and Persons because I liked it. And, I guess, maybe I read it so I can brag about the fact that I read it. (Hey everyone, have I mentioned that I read Reasons and Persons?) But I also read it because other people read it. When I talk to those people, we have a shared vocabulary and set of ideas that makes it easier to talk about other things. This wouldn’t work if we had all explored the same ideas through fragmented AI “tutoring”.

Change is slow. Here we are 600 years after the invention of the printing press, and the primary mode of advanced education is still for people to physically go to a room where an expert is talking and write down stuff the expert says. If we’re that slow to adapt, then maybe we read human-written explainers simply out of habit.

How much does each of these really matter? How much confidence should they give us that explainer posts will still exist a decade from now? Let’s handle them in reverse order.

Sure, society takes time to adapt to technological change. But I don’t think college lectures are a good example of this, or that they’re a medieval relic that only survives out of inertia. On the contrary, I think they survive because we haven’t really found any other model of education that’s fundamentally better.

Take paper letters. One hundred years ago, these were the primary form of long-distance communication. But after the telephone was widely distributed, it only took a few decades to kill the letter in almost all the cases where the phone was better. When email and texting showed up, they killed off almost all remaining use of paper letters. They still exist, but they’re niche. The same basic story holds for horses, the telegraph, card catalogs, slide rules, VHS tapes, vacuum tubes, steam engines, ice boxes, answering machines, sailboats, typewriters, the short story, and the divine right of kings. When we have something that’s actually better, we drop the old ways pretty quickly. Inertia alone might keep explainer posts alive for a few years, but not more than that.

Western civilization began with the Iliad. Or, at least, we’ve decided to pretend it did. If you read the Iliad, then you can brag about reading the Iliad (good) and you have more context to engage with everyone else who read it (very good). So people keep reading the Iliad. I think this will continue indefinitely.

But so what? The Iliad is in that position because people have been reading/listening to it for thousands of years. But if you write something new and there’s no “normal” reason to read it, then it has no way to establish that kind of self-sustaining legacy. Non-fiction in general has a very short half-life. And even when coordination points exist, people often rely on secondary sources anyway. Personally, I’ve tried to read Wittgenstein, but I found it incomprehensible. Yet I think I’ve absorbed his most useful idea by reading other people’s descriptions. I wonder how much “Wittgenstein” is really a source at this point, as opposed to a label. Also… explainer posts typically aren’t the Iliad. So I don’t think this will do much to keep explainer posts alive, either.

(Aside: I’ve never understood why philosophy is so fixated on original sources, instead of continually developing new presentations of old ideas like math and physics do. Is this related to the fact that philosophers go to conferences and literally read their papers out loud?)
I trust people more when I know they’re putting their reputation on the line, for the same reason I trust restaurants more when I know they rely on repeat customers. AI doesn’t give me this same reason for confidence. But so what? This is a loose heuristic. If AI were truly more accurate than human writing, I’m sure most people would learn to trust it in a matter of weeks. If AI were ultra-reliable but people really needed someone to hold accountable, AI companies could perhaps offer some kind of “insurance”. So I don’t see this keeping explainers alive, either.

Humans are social creatures. If bears had a secret bear Wikipedia and you went to the entry on humans, it would surely say, “Humans are obsessed with relationships.” I feel confident this will remain true. I also feel confident that we will continue to be interested in what people we like and respect think about matters of fact. It seems plausible that we’ll continue to enjoy getting that information bundled together with little jokes or bursts of personality. So I expect our social instincts will provide at least some reason for explainers to survive.

But how strong will this effect be? When explainer posts are read today, what fraction of readers are familiar enough to have a parasocial relationship with the author? Maybe 40%? And when people are familiar, what fraction of their motivation comes from the parasocial relationship, as opposed to just wanting to understand the content? Maybe another 40%? Those are made-up numbers, but multiply them together and you get something like 16%. So I think it’s hard to avoid the conclusion that parasocial relationships explain only a fraction of why people read explainers today.

And there’s another issue. How do parasocial relationships get started if there’s no other reason to read someone? They might keep established authors going for a while at reduced levels, but it would be hard for new writers to rise up.

Maybe popular AIs are a bit boring, today. But I think this is mostly due to the final reinforcement learning step. If you interact with “base models”, they are very good at picking up style cues and not boring at all. So I highly doubt that there’s some fundamental limitation here. And anyway, does anyone care? If you just want to understand why vitamin D is technically a type of steroid, how much does style really matter, as opposed to clarity? I think style mostly matters in the context of a parasocial relationship, meaning we’ve already accounted for it above.

I don’t know for sure if AI will ever be as accurate as a high-quality human source. Though it seems very unlikely that physics somehow precludes creating systems that are more accurate than humans. But if AI is that accurate, then I think this exercise suggests that explainer posts are basically toast. All the above arguments are just too weak to explain most of why people read human-written explainers now. So I think it’s mostly just accuracy. When that human advantage goes, I expect human-written explainers to go with it.

I can think of three main counterarguments.

First, maybe AI will fix discovery. Currently, potential readers of explainers often have no way to find potential writers. Search engines have utterly capitulated to SEO spam. Social media soft-bans outward links. If you write for a long time, you can build up an audience, but few people have the time and determination to do that. If you write a single explainer in your life, no one will read it. The rare exceptions to this rule either come from people contributing to established (non-social media) communities or from people with exceptional social connections. So—this argument goes—most potential readers don’t bother trying to find explainers, and most potential writers don’t bother creating them. If AI solves that matching problem, explainers could thrive.
Second, maybe society will figure out some new way to reward people who create information. Maybe we fool around with intellectual property law. Maybe we create some crazy Xanadu-like system where, in order to read some text, you have to first sign a contract to pay the author based on the value you derive, and this is recursively enforced on everyone who’s downstream of you. Hell, maybe AI companies decide to solve the data wall problem by paying people to write stuff. But I doubt it.

Third, maybe explainers will follow a trajectory like chess. Up until perhaps the early 1990s, humans were so much better than computers at chess that computers were irrelevant. After Deep Blue beat Kasparov in 1997, people quickly realized that while computers could beat humans, human+computer teams could still beat computers. This was called Advanced Chess. Within 15-20 years, however, humans became irrelevant. Maybe there will be a similar Advanced Explainer era? (I kid, that era started five years ago.)

Will the explainer post go extinct? My guess is mostly yes, if and when AI reaches human-level accuracy.

Incidentally, since there’s so much techno-pessimism these days: I think this outcome would be… great? It’s a little grim to think of humans all communicating with AI instead of each other, yes. But the upside is all of humanity having access to more accurate and accessible explanations of basically everything. If this is the worst effect of AGI, bring it on.