A Treatise on AI Chatbots Undermining the Enlightenment
On chatbot sycophancy, passivity, and the case for more intellectually challenging companions
A lovely little write-up by my friend Steve Krouse on how vibe code and legacy code are roughly the same thing: “code that nobody understands.” I particularly like this graph, which illustrates the relationship between vibe code and understanding:

<img src="https://maggieappleton.com//images/smidgeons/vibe-code.png" alt="A line chart with vibe on the Y axis and understanding on the X axis with a downwards diagonal line" />

This type of discussion feels helpful in a moment where the term “vibe coding” is being tossed around in vague and unhelpful ways. It rings true to me that it's a continuous spectrum, and no professional developers are sitting at the all-vibes end of it. As many have pointed out, not all code written with AI assistance is vibe code. Per the original definition, it's code written in contexts where you “forget that the code even exists.” Or as the fairly fleshed-out Wikipedia article puts it: “A key part of the definition of vibe coding is that the user accepts code without full understanding.”

Like many developers, I'm constantly grappling with how much understanding I'm willing to hand over to Cursor or Claude Code. I sincerely try to keep it minimal, or at least have them walk me through the functionality line-by-line if I feel I'm out of my depth. But it's always easier and faster to YOLO it – an impulse I have to actively keep in check.

Our AI minions are also exceptional tools for learning when you've moved too far towards the high-vibes-low-understanding end of the spectrum. I particularly like getting Claude to write me targeted exercises to practice new concepts when I get lost in generated functions or fail to implement something correctly sans-AI. Even though doubling down on engineering skills sometimes feels like learning to operate a textile loom in 1820.
In a wonderfully dramatic change to my life, I became a mother two months ago. My son was born at the end of March via an unplanned but otherwise uncomplicated c-section. Parenthood has been predictably overwhelming, exhausting, and existentially glorious.

My days are now spent holding a sleeping newborn on my chest, timing wake windows, picking up the dropped pacifier for the 19th time, trying to eat with 0.5 hands free, and watching an eternal stream of Gilmore Girls episodes on a precariously balanced iPhone while feeding/burping/soothing/rocking/patting this tiny human. It swings between hard physical labour with high cortisol levels and extremely chill, serene, joyful stretches a dozen times throughout the day and night.

I had doubts about becoming a mother when I was younger. Mostly related to systemic gender inequality, believing I would need to sacrifice my whole career for it, and thinking myself incapable of bearing the responsibility (which, to be fair, I was before age ~28). I spent a solid year in angst and turmoil trying to figure it out. All the parents around me only shared details of how stressful, sleep-deprived, expensive, and burdensome their new lives were. Perhaps because it felt too trite or vulnerable to put into words the love, joy, and purpose that comes with it.

Being on the other side, I now realise there was no calculation or algorithm or pro/con list or financial spreadsheet that could have helped me understand what it would feel like. Nothing that would do justice to the emotional weight of holding your sleeping baby that you made with your own body. Of watching them grin back at you with uncomplicated joy. Of realising you'll get to watch them grow into a full person; one that is – at least genetically – half you and half the person you love most in the world. Of watching them trip out as they realise they have hands.

I can now say with certainty I am evolutionarily wired for this. Perhaps not everyone is. But everything in me is designed to feel existential delight at each little fart, squeak, grunt, and sneeze that comes out of this child. Delight that is unrivalled by any successful day at work, fully shipped feature, long cathartic run, or Sunday morning buttery croissant – the banal highlights of my past life.

When I think back to my pre-baby self, trying to calculate herself into a clear decision, I wish I could let her feel for one minute what it's like to hold him. And tell her I can't believe I ever considered depriving myself of this.

In other news, I've read no books (other than Your Baby Week by Week and Secrets Of The Baby Whisperer), had few higher-order thoughts, and binge-watched all of Motherland. As this child learns to sleep in more predictable ways, I'm looking forward to being less of a zombie and engaging with the world again.
A tiny tool to calculate when your baby might arrive
The New Scientist used freedom of information laws to get the ChatGPT records of the UK's technology secretary. The headline hints at a damning exposé, but the piece ends up being a story about a politician making pretty reasonable and sensible use of language models to be more informed and make better policy decisions. He asked it why small business owners are slow to adopt AI, which popular podcasts he should appear on, and to define terms like antimatter and digital inclusion.

This all seems extremely fine to me. Perhaps my standards for politicians are too low, but I assume they don't actually know much and rely heavily on advisors to define terms for them and decide on policy improvements. And I think ChatGPT connected to some grounded sources would be a decent policy advisor. Better than most human policy advisors, at least when it comes to consistency, rapidly searching and synthesising lots of documents, and avoiding personal bias. Models still carry the bias of their creators, but it all becomes a trade-off between human flaws and model flaws.

Claiming language models should have anything to do with national governance feels slightly insane. But we're also sitting in a moment where Trump and Musk are implementing policies that trigger trade wars and crash the U.S. economy. And I have to think "What if we just put Claude in charge?"

I joke. Kind of.
Well, I've had a dramatic start to the year. Normally, the design agency I joined a short eight months ago, unexpectedly closed down in January. Despite running for a decade and working with almost every major tech company, client work slowed down and the founders decided to close up shop. It's been a sad time. Everyone I worked with there was exceptionally talented and kind. I'm thankful I got to build with them for a short while.

I was already due to start maternity leave in March, so Normally closing just moved that date up a bit sooner. But I managed to fit in a couple of months of work with Deep Mirror before taking my baby break. They're a London-based startup using machine learning to speed up the drug discovery process, specifically by helping medicinal chemists generate ideas for new molecules. While I was completely new to the field of drug discovery, many of the design challenges echoed the ones I'd worked on with Elicit – complex research workflows, information-dense interfaces, and making the inner workings of models and their reasoning process visible to users. I've learned I like this shape of work: AI/ML tools designed to help scientific researchers who have high standards and need to thoroughly understand how models “reason” and how answers are generated. It's fertile ground for responsible AI interface design.

My baby break has now started. Only two weeks remain until the new human arrives. A terrifyingly short timeline. Luckily, the excitement of meeting our child and the physical discomfort of late pregnancy outweigh any fears about birth or the impending marathon of sleep deprivation. I'd happily start labour tomorrow if I had any say in the matter.

Given that I won't be in a 9-5 job for the next six months, I've stocked up on new books. Though it's naïve to think I'll have the mental capacity to read any of them in between baby feedings and waking up a dozen times a night. But one can hope. I've added the full pile to my Antilibrary, but these are the ones I'm most excited about:

<a href="https://www.google.co.uk/books/edition/Soldiers_and_Kings/EzPBEAAAQBAJ"><strong>Soldiers and Kings: Survival and Hope in the World of Human Smuggling</strong></a> by Jason De León

This got my attention when it started popping up on all the “best of” ethnography lists in 2024, and then went on to win the National Book Award for Nonfiction. I expect it to be a slightly intense read, but well-researched ethnographies are my favourite genre.

<a href="https://www.google.co.uk/books/edition/Cue_the_Sun/GObnEAAAQBAJ"><strong>Cue the Sun! The Invention of Reality TV</strong></a> by Emily Nussbaum

Like most of us, I have a love/hate/fascination/repulsion relationship with reality TV. I've watched my fair share of trash series, but will happily defend (most of) them as time well spent. They're always insightful windows into our collective value systems and cultural narratives, and I'm keen to read Nussbaum's critical take on the medium.

<a href="https://www.google.co.uk/books/edition/The_Invention_of_Nature/w1WNBQAAQBAJ"><strong>The Invention of Nature: The Adventures of Alexander von Humboldt</strong></a> by Andrea Wulf

Given my long-standing preoccupation with how we try to define and divide “nature” from “culture”, it's about time I did a bit more historical reading into the origins of this cultural dichotomy.

I've been using a bit of my pre-baby time to build as well. I added a new section to this garden called Smidgeons.
These are teeny tiny posts: links with a bit of commentary, research papers I enjoyed, or one-liners that would otherwise go on Bluesky.

I'm also quite deep into a new research project and set of prototypes I'm calling Lodestone. It's an exploration of how language models might be able to get us to think more, not less. Specifically, I'm interested in whether models can enable me to be a better critical thinker and rigorous writer. Not by writing for me, but by guiding me through a well-defined process of understanding what claims I'm making, what evidence I have to support them, and how my argument structure fits together (there's a rough sketch of what I mean at the end of this post). I'm tackling it from a few angles, but here are some previews from the latest prototype:

The code is all open source on GitHub, though it'll evolve a lot from here. I'll publish more about it soon, but the ideas still feel early and my thesis is unproven. I'll wait until it all gels together a bit more.

I should mention that starting this summer I'll be looking for a new role as a Design Engineer or technically-inclined Product Designer. I'm planning to be on maternity leave until early September, but I'm happy to start talking to companies, teams, and founders now if you think we could be a good fit. Just email hello at maggieappleton.com or DM me on Bluesky.
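To make the Lodestone idea a little more concrete: the heart of it is prompting a model to critique rather than compose. Here's a minimal, hypothetical sketch of that pattern – assuming the Anthropic Messages API, with a made-up function name and prompt. None of this is Lodestone's actual code.

```ts
// Hypothetical sketch: ask a model to surface the claims in a draft,
// rather than write the draft for you. Assumes a Node environment
// with ANTHROPIC_API_KEY set; not Lodestone's actual implementation.

async function extractClaims(draft: string): Promise<string> {
  const response = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-latest",
      max_tokens: 1024,
      // The system prompt does the interesting work: it forbids rewriting
      // and asks only for claims, supporting evidence, and gaps.
      system:
        "You are a writing coach. Do not rewrite the text. List each claim " +
        "the author makes, the evidence offered for it, and any claims that " +
        "currently have no support.",
      messages: [{ role: "user", content: draft }],
    }),
  });
  const data = await response.json();
  return data.content[0].text; // the model's claim/evidence breakdown
}
```

The design choice that matters here is in the prompt, not the plumbing: the model is constrained to analysis, so the thinking and writing stay with you.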
We have a new(ish) benchmark, cutely named “Humanity's Last Exam.” If you're not familiar with benchmarks, they're how we measure the capabilities of particular AI models like o1 or Claude Sonnet 3.5. Each one is a standardised test designed to check a specific skill set. When you run a model on a benchmark it gets a score, which allows us to create leaderboards showing which model is currently the best for that test. To make scoring easy, the answers are usually formatted as multiple choice, true/false, or unit tests for programming tasks (there's a toy version of this scoring loop at the end of this post).

Among the many problems with using benchmarks as a stand-in for “intelligence” (other than the fact they're multiple choice standardised tests – do you think that's a reasonable measure of human capabilities in the real world?) is that our current benchmarks aren't hard enough. New models routinely achieve 90%+ on the best ones we have. So there's a clear need for harder benchmarks to measure model performance against.

Hence, Humanity's Last Exam. Made by Scale AI and the Center for AI Safety, who crowdsourced “the hardest and broadest set of questions ever” from experts across domains. 2,700 questions at the moment, some of which they're keeping private to prevent future models training on the dataset and memorising answers ahead of time. Questions like this:

<img src="https://maggieappleton.com//images/smidgeons/last-exam-1.png" alt="Samples of the diverse and challenging questions submitted to Humanity's Last Exam." />

<img src="https://maggieappleton.com//images/smidgeons/last-exam-2.png" alt="Samples of the diverse and challenging questions submitted to Humanity's Last Exam." />

<img src="https://maggieappleton.com//images/smidgeons/last-exam-3.png" alt="Samples of the diverse and challenging questions submitted to Humanity's Last Exam." />

So far, it's doing its job well – the highest scoring model is OpenAI's Deep Research at 26.6%, with other common models like GPT-4o, Grok, and Claude only getting 3-4% correct. Maybe it'll last a year before we have to design the next “last exam.”

When people make sweeping statements like “language models are bullshit machines” or “ChatGPT lies,” it usually tells me they're not seriously engaged in any kind of AI/ML work or productive discourse in this space. First, because saying a machine “lies” or “bullshits” implies motivated intent in a social context, which language models don't have. Models doing statistical pattern matching aren't purposefully trying to deceive or manipulate their users. And second, broad generalisations about “AI”'s correctness, truthfulness, or usefulness are meaningless outside of a specific context. Or rather, a specific model measured on a specific benchmark or reproducible test.

So, next time you hear someone making grand statements about AI capabilities (whether critical or overhyped), ask: which model are they talking about? On what benchmark? With what prompting techniques? With what supporting infrastructure around the model? Everything is in the details, and the only way to be a sensible thinker in this space is to learn about the details.
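Since everything is in the details, here's what the mechanical part of those details looks like: the toy scoring loop mentioned above. The types and function names are made up for illustration – real harnesses handle prompting, answer parsing, and retries, but the core is just exact-match grading over a question set.

```ts
// A toy scoring loop for a multiple-choice benchmark. Illustrative only:
// `askModel` stands in for whichever model you're evaluating.

type BenchmarkItem = {
  question: string;
  choices: string[]; // e.g. ["A) ...", "B) ...", "C) ...", "D) ..."]
  answer: string; // the correct choice label, e.g. "B"
};

async function scoreModel(
  askModel: (item: BenchmarkItem) => Promise<string>,
  items: BenchmarkItem[],
): Promise<number> {
  let correct = 0;
  for (const item of items) {
    const prediction = await askModel(item);
    // Exact-match grading is what makes multiple choice easy to score.
    if (prediction.trim() === item.answer) correct++;
  }
  // Accuracy becomes the leaderboard number, e.g. 0.266 = 26.6%.
  return correct / items.length;
}
```

This is also why “which benchmark, which prompting technique?” matters so much: everything upstream of that exact-match comparison – how the question is phrased, how the answer is extracted – changes the score.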
If you're not distressingly embedded in the torrent of AI news on Twixxer like I reluctantly am, you might not know what DeepSeek is yet. Bless you. From what I've gathered:

<img src="https://maggieappleton.com//images/smidgeons/deepseek1.png" alt="DeepSeek R1 showing its thinking" />

Here's the announcement Tweet:

TL;DR: high-quality reasoning models are getting significantly cheaper and more open-source. This means companies like Google, OpenAI, and Anthropic won't be able to maintain a monopoly on access to fast, cheap, good-quality reasoning. This is a net good for everyone.
A roboticist breaks down common misconceptions about what's hard and easy in robotics. A response to everyone asking “can't we just stick a large language model into its brain to make it more capable?”

Contrary to the assumptions of many people, making robots perceive and move in the world the way humans can turns out to be an extraordinarily hard problem to solve, while seemingly “hard” problems like scoring well on intelligence tests, winning at chess, and acing the GMAT turn out to be much easier. Everyone thought it would be extremely hard and computationally expensive to teach computers language, and easy to teach them to identify objects visually. The opposite turned out to be true. This is known as Moravec's Paradox.

I especially liked the ending where Dan explores why people are so resistant to the idea that picking up a cup is more complex than solving logic puzzles. Partly anthropocentrism: humans are special because we can do higher-order thinking; any lowly animal can sense the world and move through it. Partly social class bias: people who work manual labour jobs using their bodies are less valued than people who sit still using their intellect to solve problems.
Researchers submitted entirely AI-generated exam answers to the undergraduate psychology department of a “reputable” UK university. The vast majority went undetected, and the AI answers achieved higher scores than real students.

“We report a rigorous, blind study in which we injected 100% AI written submissions into the examinations system in five undergraduate modules, across all years of study, for a BSc degree in Psychology at a reputable UK university. We found that 94% of our AI submissions were undetected. The grades awarded to our AI submissions were on average half a grade boundary higher than that achieved by real students. Across modules there was an 83.4% chance that the AI submissions on a module would outperform a random selection of the same number of real student submissions.”

I have to assume educators are swiftly moving to hand-written exams under supervised conditions and oral exams. Anything else seems to negate the point of exams.
A browser extension that filters out engagement bait from your feed on Twixxer. Uses Llama 3.3 under the hood to analyse Tweets in real time and then blurs out sensationalist political content. Or whatever else you prompt it to blur – the system prompt is editable:

<img src="https://maggieappleton.com//images/smidgeons/unbaited.png" alt="System settings and a customisable prompt for the Unbaited app" />

This is certainly one way to try and manage Twixxer's slow descent into right-wing extremist content. Though I'm taking this more as a thought experiment and interesting prototype than a sincere suggestion we should spend precious energy burning GPUs on clickbait filtering. Integrating LLMs into the browsing experience and using them to selectively curate content for you is the more interesting move here.
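The underlying pattern is simple enough to sketch. Here's a toy version – assuming Groq's OpenAI-compatible endpoint for Llama 3.3, with made-up prompt text and function names. This is illustrative of the general approach, not Unbaited's actual implementation.

```ts
// Toy version of the LLM-as-feed-filter pattern, not Unbaited's code.
// Assumes Groq's OpenAI-compatible chat endpoint serving Llama 3.3.
const GROQ_API_KEY = "..."; // in a real extension this lives in settings

// The editable system prompt is the whole product: change it to blur
// whatever category of content you like.
const SYSTEM_PROMPT = `Reply "BLUR" if the tweet is engagement bait or
sensationalist political content. Otherwise reply "KEEP".`;

async function shouldBlur(tweetText: string): Promise<boolean> {
  const res = await fetch("https://api.groq.com/openai/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${GROQ_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "llama-3.3-70b-versatile",
      messages: [
        { role: "system", content: SYSTEM_PROMPT },
        { role: "user", content: tweetText },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content.trim().startsWith("BLUR");
}

// In a content script, something like: blur tweets as they scroll past.
// if (await shouldBlur(tweet.innerText)) tweet.style.filter = "blur(8px)";
```

One model call per tweet is exactly the “burning GPUs on clickbait filtering” trade-off mentioned above, which is why this reads better as a prototype than a product.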
Welcome to the smidgeon stream. This is a new kind of content on the Garden. One that was overdue. They're called smidgeons. Teeny, tiny entries. The kinds of things I used to put in Tweets, before Twitter died a terrible death. Most are only a few sentences long. They're mainly links to notable things – good articles, papers, and ideas. I've been meaning to do this for a while, but a recent migration to Astro suddenly made it much easier.
How to use Zotero's translator and Tana Paste formatting to easily import papers into Tana
Reflections on the strange experience of growing a human from scratch, without any conscious understanding of how you are doing it