Posts in Writing (20 found)
neilzone Yesterday

Resources to aid understanding someone else's perimenopause / menopause

I asked for reading recommendations, for a partner of someone who is going through the perimenopause / menopause. I got a lot of responses; thank you. I have included below those which seemed most relevant, for me to follow up on them. Apologies if I didn’t include your particular suggestions. I received quite a lot of advice too; thank you.

Thayer said: I often help men understand their partners’ journeys as part of my therapy & coaching as it really affects men as well

- “Burning Up, Frozen Out” by Joe Warner and Rob Kemp
- “Menopause Manifesto” by Dr Jen Gunter (several recommendations for this)
- “Perimenopause Power” by Maisie Hill
- “Woman on Fire” by Sheila de Liz (multiple recommendations)
- anything by Dr Louise Newsome
- Trans experience of the menopause by Quinn Rhodes
- Two posts by Sundial: “Perimenopause hit me like a brick” and “Perimenopause: My HRT Journey”
- “Nobody told me about the way menopause restructures marriage. Here’s what I wish I knew then.”
- Ben’s toots
- “Body of Evidence”, including this episode
- “What’s Up Docs?”, including this episode
- “BDSM and the menopause”
- a Davina McCall documentary (possibly this one)


I Will Never Respect A Website

If you like this piece and want to support my independent reporting and analysis, why not subscribe to my premium newsletter? It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I recently put out the timely and important Hater’s Guide To The SaaSpocalypse, another on How AI Isn't Too Big To Fail, and a deep (17,500 word) Hater’s Guide To OpenAI. Subscribing to premium is both great value and makes it possible to write these large, deeply-researched free pieces every week.

Soundtrack: Muse — Stockholm Syndrome

I think the most enlightening thing about AI is that it shows you how even the most mediocre text inspires some sort of emotion. Soulless LinkedIn slop makes you feel frustration with a person for their lack of authenticity, but you can still imagine how they forced it out of their heads. You still connect with them, even if it’s in a bad way.

AI copy is dead. It is inert. The reason you can spot it is that it sounds hollow. I don’t care if a website says stuff because of what I typed in, just like I don’t care if it responds in a way that sounds human, because it all feels like nothing to me. I am not here to give a website respect, I will not be impressed by a website, nor will I grant a website any extra credit if it can’t do the right thing every time. The computer is meant to work for me. If the computer doesn’t do what I want, I change the kind of computer I use. LLMs will always hallucinate, their outputs are not trustworthy as a result, they cannot be deterministic, and any chance of any mistake of any kind is unforgivable. I don’t care how the website made you feel: it’s a machine that doesn’t always work, and that’s not a very good machine.

I feel nothing when I see an LLM’s output. Tell me thank you or whatever, I don’t care. You’re a website.
Oh you can spit out code? Amazing. Still a website.

Perhaps you’ve found value in LLMs. Congratulations! You should feel no compulsion to convince me, nor should you feel any pride in using a particular website. And if you feel you’re being judged for using AI, perhaps you should ask why you feel so vilified? Did the industry do something to somehow warrant judgment? Is there something weird or embarrassing about the product, such as it famously having a propensity to get things wrong? Perhaps it loses billions of dollars? Oh, it’s damaging to the environment too? And people are telling outright lies about it and constantly saying it’ll replace people’s jobs? And the CEOs are all greedy oafish sociopaths?

Did you try being cloying, judgmental, condescending, and aggressive to those who don’t like AI? Oh, that didn’t work? I can’t imagine why.

Sounds embarrassing! You must really like that website.

ChatGPT is a website. Claude is a website. While I guess Claude Code runs in a terminal window, that just means it’s an app, which I put in exactly the same mental box as I do a website.

Yet everything you read or hear or see about AI does everything it can to make you think that AI is something other than a website or an app. People who “discover the power of AI” immediately stop discussing it in the same terms as Microsoft Word, Google, or any other app or website. It’s never just about what AI can do today, but always about some theoretical “AGI” or vague shit about “AI agents” that are some sort of indeterminate level of “valuable” without anyone being able to describe why. Truly useful technology isn’t described in oblique or hyperbolic terms. For example, last week, IBM’s Dave McCann described using a series of “AI agents” to Business Insider.

Sounds like a website to me. Sounds like a website using an LLM to summarize stuff to me. Why are we making all this effort to talk about what a website does?
My friend, this isn’t a “series of agents.” It’s an LLM that looks at stuff and spits out an answer. Chatbots have done this kind of thing forever. These aren’t “agents.” “Agents” makes it sound like there’s some sort of futuristic autonomous presence rather than a chatbot that’s looking at documents using technology that’s guaranteed to hallucinate incorrect information.

Here’s a fun exercise: replace the word “agent” with “app,” and replace “AI” with “application.” In fact, let’s try that with the next quote: A variety of functions including searching for stuff, looking at stuff, generating stuff, transcribing a meeting, and searching for stuff. Wow! Who gives a fuck. Every “AI agent” story is either about code generation, summarizing some sort of information source, or generating something based on an information source that you may or may not be able to trust.

“Agent” is an intentional act of deception, and even “modern” agents like OpenClaw and its respective ripoffs ultimately boil down to “I can send you a reminder” or “I can transcribe a text you send me.” Yet everybody seems to want to believe these things are “valuable” or “useful” without ever explaining why. A page of OpenClaw integrations claiming to share “real projects, real automations [and] real magic” includes such incredible, magical use cases as “reads my X bookmarks and discusses them with me,” “check incoming mail and remove spam,” “researches people before meetings and creates briefing docs,” “schedule reminders,” “tracking who visits a website” (summarizing information), and “using voice notes to tell OpenClaw what to do,” which includes “distilling market research” (searching for stuff) and “tightening a proposal” (generating stuff after looking at it). I’d have no quarrel with any of this if it wasn’t literally described as magical and innovative. This is exactly the shit that software has always done — automations, shortcuts, reminders, and document work.
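The substitution exercise above is easy to make literal. A minimal sketch in Python (the example sentence is invented for illustration):

```python
# The swap proposed above, made literal: replace the hype terms with
# their mundane equivalents and see whether the claim still sounds
# futuristic. (The example sentence is made up for illustration.)

def deflate(text: str) -> str:
    # Order matters: handle the compound phrase before its parts.
    for hyped, plain in [("AI agent", "app"), ("agent", "app"), ("AI", "application")]:
        text = text.replace(hyped, plain)
    return text

print(deflate("Our AI agents search documents and generate summaries with AI."))
# → Our apps search documents and generate summaries with application.
```

The residue ("with application") is the point: once the buzzword is gone, the sentence describes ordinary software or describes nothing at all.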
Boring, potentially useful stuff done in an inefficient way, requiring a Mac Mini and hundreds of dollars a day of API calls.

Even Stephen Fry’s effusive review of the iPad from 2010, in referring to it as a “magical object,” still called it “class,” “a different order of experience,” remarking on its speed, responsiveness and “smooth glide,” and noting that it’s so simple. Even Fry, a writer beloved for his effervescence and sophisticated lexicon, was still able to point at the things he liked (such as the design and simplicity) in clear terms. Even in couching it in terms of the future, Fry is still able to cogently explain why he’s excited about the present. Conversely, articles about Large Language Models and their associated products often describe them in one of three ways:

This simply doesn’t happen outside of bubbles. The original CNET review of the iPhone — a technology I’d argue literally changed the way that human beings live their lives — still described it in terms that mirrored the reality we live in:

I’d argue that technologies like cloud storage, contactless payments, streaming music, and video and digital photography have transformed our societies in ways that were obvious from the very beginning. Nobody sat around cajoling us to accept that we’d need to sunset our Nokia 3210s and get used to touchscreens, because it was blatantly obvious that things were better the moment you used the first iPhone. Nobody ostracized you for not being sufficiently excited about iPhone apps.

Git, launched in 2005, is arguably one of the single most transformational technologies in tech history, changing how software engineers built all kinds of software. And I’d argue that GitHub, which came a few years later, was equally transformational.
I can’t find a single example of somebody being shamed for not being sufficiently excited, other than people arguing over whether Git was the superior version control software, or saying that GitHub, a cloud-based repository for code and collaboration, was obvious in its utility. Those that liked it didn’t feel particularly defensive. Even articles about GitHub’s growth spoke entirely in terms rooted in the present.

I realize this was before the hyper-polarized world of post-Musk Twitter, one where venture capital and the tech industry in general were a fraction of their current size, but it’s really weird how different it feels when you read about how the stuff that actually mattered was covered. I must repeat that this was a very different world with very different incentives. Today’s tech industry is a series of giant group chats across various social networks and physical locations, with a much larger startup community (Y Combinator’s last batch had 199 people — the first had 8) influenced heavily by the whims of investors and the various cults of personality in the valley. While social pressure absolutely existed, the speed at which it could manifest and mutate was minute in comparison to the rabid dogs of Twitter or the current state of Hacker News. There were fewer VCs, too.

In any case, no previous real or imagined tech revolution (outside of cryptocurrency, an industry with obvious corruption and financial incentives) has ever inspired such eager defensiveness, tribalism or outright aggression toward dissenters, nor such ridiculous attempts to obfuscate the truth about a product. We’ve never had a cult of personality around a specific technology at this scale. There is something that AI does to people — in the way it both functions and the way that people react to it — that inspires them to act defensively, weirdly, tribally.

I think it starts with LLMs themselves, and the feeling they create within a user. We all love prompts.
We love to be asked questions about ourselves. We feel important when somebody takes interest in what we’re doing, and even more so when they remember things about it and seem to be paying attention. LLMs are built to completely focus themselves on us, and do so while affirming every single interaction.

Human beings also naturally crave order and structure, which means we’ve created frameworks in our heads about what authoritative-sounding or authoritative-looking information looks like, and the language that engenders trust in it. We trust Wikipedia both because it’s an incredibly well-maintained library of information riddled with citations and because it tonally and structurally resembles an authoritative source. Large Language Models have been explicitly trained (on much of the internet, including Wikipedia) to deliver information in a structured manner that makes us trust it like we would another authoritative source, massaged with language we’d expect from a trusted friend or endlessly patient teacher.

All of this is done with the intention of making you forget that you’re using a website. And that deception is what starts to make people act strangely. The fact that an LLM can maybe do something is enough to make people try it, along with the constant pressure from social media, peers and the mainstream media.

Some people — myself included — have used LLMs to do things, seen that making them do said things isn’t going to happen very easily, and walked away, because I am not going to use a website that doesn’t do what it says. As I’ve previously said, technology is a tool to do stuff.
Some technology requires you to “get used to it” — iPhones and iPads were both novel (and weird) in their time, as was learning to use the Moonlander ZSK — but basically no example involves tolerating the inherent failings of the underlying product under the auspices of it “one day being better.” Nowhere else in the world of technology does someone gaslight you into believing that the problems don’t exist or will magically disappear. It’s not like the iPhone only occasionally allowed you to successfully take a photo, with reliable photography something you’d have to wait until the iPhone 3GS to enjoy. While the picture quality improved over time, every generation of iPhone did the same basic thing successfully, reliably, and consistently.

I also think that the challenge of making an LLM do something useful is addictive and transformative. When people say they’ve “learned to use AI,” often they mean that they’ve worked out ways to fudge their prompts, navigate its failures, mitigate its hallucinations, and connect it to various different APIs and systems of record in such a way that it now, on a prompt, does something, and because they’re the ones that built this messy little process, they feel superior — because the model has repeatedly told them that they were smart for doing it and celebrated with them when they “succeeded.”

The term “AI agent” exists as both a marketing term and a way to ingratiate the user. Saying “yeah, I used a chatbot to do some stuff” sounds boring, like you’re talking to an app or a website, but “using an AI agent” makes you sound like a futuristic cyber-warrior, even though you’re doing exactly the same thing. LLMs are excellent digital busyboxes for those who want to come up with a way to work differently rather than actually doing work.
In WIRED’s article about journalists using AI, Alex Heath boasts that he “feels like he’s cheating in a way that feels amazing”:

The linguistics of “transmitting an idea to an AI agent” misrepresent what is a deeply boring and soulless experience. Alex speaks into a microphone, his words are transcribed, then an LLM burps out a draft. A bunch of different services connect to Claude Cowork and a text document (that’s what the “custom set of instructions” is) that says how to write like him, and then it writes like him, and then he talks to it and then sometimes writes bits of the story himself.

This is also most decidedly not automation. Heath still must sit and prompt a model again and again. He must still maintain connections to various services and make sure the associated documents in Notion are correct. He must make sure that Granola actually gets the transcriptions from his interview. He must (I would hope) still check both the AI transcription and the output from the model to make sure quotes are accurate. He must make sure his calendar reflects accurate information. He must make sure that Claude still follows his “voice and writing style” — if you can call it that given the amount of distance between him and the product.

Well, Alex, you’re not telling anybody anything; your ideas and words come out of a Large Language Model that has convinced you that you’re writing them.

In any case, Heath’s process is a great example of what makes people think they’re “using powerful AI.” Large Language Models are extremely adept at convincing human beings to do most of the work and then credit “AI” with the outcomes. Alex’s process sounds convoluted and, if I’m honest, a lot more work than the old way of doing things. It’s like writing a blog using a machine from Pee-wee’s Playhouse. I couldn’t eat breakfast that way every morning. I bet it would get old pretty quick.

This is the reality of the Large Language Model era. LLMs are not “artificial intelligence” at all.
They do not think, they do not have knowledge, they are conjuring up their own training data (or reflecting post-training instructions from those developing them or documents instructing them to act a certain way), and any time you try and make them do something more complicated, they begin to fall apart, and/or become exponentially more expensive.

You’ll notice that most AI boosters have some sort of bizarre, overly complicated way of explaining how they use AI. They spin up “multiple agents” (chatbots) that each have their own “skills document” (a text document) and connect “harnesses” (Python scripts, text files that tell it what to do, a search engine, an API) that “let it run agentic workflows” (query various tools to get an outcome).

The so-called “agentic AI” that is supposedly powerful and autonomous is actually incredibly demanding of its human users — you must set it up in so many different ways and connect it to so many different services and check that every “agent” (different chatbot) is instructed in exactly the right way, and that none of these agents cause any problems (they will) with each other. Oh, don’t forget to set certain ones to “high-thinking” for certain tasks and make sure that other tasks that are “easier” are given to cheaper models, and make sure that those models are prompted as necessary so they don’t burn tokens.

But the process of setting up all those agents is so satisfying, and when they actually succeed in doing something — even if it took fucking forever and costs a bunch and is incredibly inefficient — you feel like a god! And because you can “spin up multiple agents,” each one ready and waiting for you to give them commands (and ready to affirm each and every one of them), you feel powerful, like you’re commanding an army that also requires you to monitor whatever it does. The reason that LLMs have become so interesting for software engineers is that this is already how they lived.
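Stripped of the branding, the "multiple agents with skills documents and harnesses" setup described above usually reduces to something like the following sketch. All names here are hypothetical and the model call is stubbed out so the sketch runs on its own; a real setup would call an LLM API at that point:

```python
# Hypothetical sketch of an "agentic workflow": the "skills document" is
# a text file, a "harness" is an ordinary function, and the "agent" is a
# prompt assembled from both and sent to a chat model. The model call is
# stubbed so no API is needed.

def chat_model(prompt: str) -> str:
    """Stand-in for a real LLM API call."""
    return f"[model reply to a {len(prompt)}-char prompt]"

def run_agent(skills_doc: str, task: str, harnesses: dict) -> str:
    """Paste the skills doc, tool output, and task into one prompt and send it."""
    tool_output = "\n".join(f"{name}: {fn(task)}" for name, fn in harnesses.items())
    prompt = f"{skills_doc}\n\n{tool_output}\n\nTask: {task}"
    return chat_model(prompt)

# A "harness" is just a function the loop can call.
harnesses = {"search": lambda q: f"top results for '{q}'"}
print(run_agent("You are a meticulous analyst.", "summarize my inbox", harnesses))
```

That is the whole trick: string concatenation, a function call, and a chatbot on the other end.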
Writing software is often a case of taping together different systems and creating little scripts and automations that make them all work, and the satisfaction of building functional software is incredible, even at the early stages. Large Language Models perform an impression of automating that process, but for the most part force you, the user, to do the shit that matters, even if that means “be responsible for the code that it puts out.” Heath’s process does not appear to take less time than his previous one — he’s just moved stuff around a bit and found a website to tell him he’s smart for doing so.

They are Language Models interpreting language without any knowledge or thoughts or feelings or ability to learn, and each time they read something they interpret meaning based on their training data, which means they can (and will!) make mistakes, and when they’re, say, talking to another chatbot to tell it what to do next, that little mistake might build a fundamental flaw into the software, or just break the process entirely. And Large Language Models — using the media — exist to try and convince you that these mistakes are acceptable.

Consider Anthropic’s Claude For Finance tool, which claims to “automate financial modeling” with “pre-built agents” (chatbots) but really appears to just create questionably useful models via Excel spreadsheets and “financial research” based on connecting to documents in your various systems, I imagine with a specific system prompt. Anthropic also proudly announced that it had scored 55.3% on the Finance Agent Test.

I hate to repeat myself, but I will not respect a website, and I will not tolerate something being “55% good” at something if its alleged use case is that it’s an artificial intelligence.
Yet that’s the other remarkable thing about the LLM era — that there are people who are extremely tolerant of potential failures because they believe they’re either A) smart enough to catch them or B) smart enough to build systems that do so for them, with a little sprinkle of “humans make mistakes too,” conflating “an LLM that doesn’t know anything fucking up by definition” with “a human being with experiences and the capacity for adaptation making a mistake.”

I truly have no beef with people using LLMs to speed up Python scripts to do fun little automations or to dig through big datasets, but please don’t try and convince me they’re being futuristic by doing so. If you want to learn Python, I recommend reading Al Sweigart’s Automate The Boring Stuff. Anybody who sneers at you and says you are being “left behind” because you’re not using AI should be forced to show you what it is they’ve created or done, and the specific system they used to do so. They should have to show you how much work it took to prepare the system, and why it’s superior to just doing it themselves.

Karpathy also had a recent (and very long) tweet about “the growing gap in understanding of AI capability,” involving more word salad than a fucking Sweetgreen:

Wondering what those “staggering improvements” are? The one tangible (and theoretical!) example Karpathy gives shows how hard people work to overstate the capabilities of LLMs. “Coherently restructuring” a codebase might happen when you feed it to an LLM (while also costing a shit-ton of tokens, but putting that aside), or it might not understand at all because Claude Opus is acting funny that day, or it might sort-of fix it but mess something subtle up that breaks things in the future.
This is an LLM doing exactly what an LLM does — it looks at a block of text, sees whether it matches up with what a user said, sees how that matches with its training data, and then either tells you things to do or generates new code, much like it would do if you had a paragraph of text you needed to fact-check. Perhaps it would get some of the facts right if connected to the right system. Perhaps it might make a subtle error. Perhaps it might get everything wrong.

This is the core problem with the “checkmate, boosters — AI can write code!” argument. AI can write code. We knew that already. It gets “better” as measured by benchmarks that don’t really compare to real-world success, and even with the supposedly meteoric improvements over the last few months, nobody can actually explain what the result of it being better is, nor does it appear to extend to any domain outside of coding. You’ll also notice that Karpathy’s language is as ingratiating to true believers as it is vague. Other domains are left unexplained other than references to “research” and “math.” I’m in a research-heavy business, and I have tried the most powerful LLMs and highest-priced RAG/post-RAG research tools, and every time I find them bereft of any unique analysis or suggestions.

I don’t dispute that LLMs are useful for generating code, nor do I question whether or not they’re being used by software developers at scale. I just think that they would be used dramatically less if there weren’t an industrial-scale publicity campaign run through the media and the majority of corporate America both incentivizing and forcing them to do so. Similarly, I’m not sure anybody would’ve been anywhere near as excited if OpenAI and Anthropic hadn’t intentionally sold them a product that was impossible to support long-term.

This entire industry has been sold on a lie, and as capacity becomes an issue, even true believers are turning on the AI labs.
About a year ago, I warned you that Anthropic and OpenAI had begun the Subprime AI Crisis, where both companies created “priority processing tiers” for enterprise customers (read: AI startups like Replit and Cursor), dramatically increasing the cost of running their services to the point that both startups had to dramatically change their features as a result. A few weeks later, I wrote another piece about how Anthropic was allowing its subscribers to burn thousands of dollars’ worth of tokens on its $100 and $200-a-month subscriptions, and asked the following question at the end:

I was right to ask: a few weeks ago (as I wrote in The Subprime AI Crisis Is Here), Anthropic added “peak hours” to its rate limits, and users across the board found that they were burning through their limits, in some cases in only a few prompts. Anthropic’s response was, after saying it was looking into why rate limits were being hit so fast, to say that users were ineffectively utilizing the 1-million-token context window and failing to adjust Claude’s “thinking effort level” based on whatever task it is they were doing. Anthropic’s customers were (and remain) furious, as you can see in the replies of its thread on the r/Anthropic Subreddit.

To make matters worse, it appears that — deliberately or otherwise — Anthropic has been degrading the performance of both Claude Opus 4.6 and Claude Code itself, with developers, including AMD Senior AI Director Stella Laurenzo, documenting the problem at length (per VentureBeat):

Think that Anthropic cares? Think again:

Another developer found that Claude Opus 4.6 was “thinking 67% less than it used to,” though Anthropic didn’t even bother to respond. In fact, Anthropic has done very little to explain what’s actually happening, other than to say that it doesn’t degrade its models to better serve demand.
To be clear, this is far from the only time that I’ve seen people complain about these models “getting dumber” — users on basically every AI Subreddit will say, at some point, that models randomly can’t do things they used to be able to, with nobody really having an answer other than “yeah dude, same.” Back in September 2025, developer Theo Browne complained that Claude had got dumber, but Anthropic near-immediately responded to say that the degraded responses were a result of bugs that “intermittently degraded responses from Claude,” adding the following:

Which begs the question: is Anthropic accidentally making its models worse? Because it’s obvious it’s happening, it’s obvious they know something is happening, and its response, at least so far, has been to say that either users need to tweak their settings or nothing is wrong at all. Yet these complaints have happened for years, and have reached a crescendo with the latest ones that involve, in some cases, Claude Code burning way more tokens for absolutely no reason, hitting rate limits earlier than expected or wasting actual dollars spent on API calls.

Some suggest that the problems are a result of capacity issues over at Anthropic, which have led to a stunning (at least for software used by millions of people) amount of downtime, per the Wall Street Journal:

This naturally led to boosters (and, for that matter, the Wall Street Journal) immediately saying that this was a sign of the “insatiable demand for AI compute”:

Before I go any further: if anyone has been taking $2.75-per-hour-per-GPU for any kind of Blackwell GPU, they are losing money. Shit, I think they are at $4.08. While these are examples from on-demand pricing (versus paid-up years-long contracts like Anthropic buys), if they’re indicative of wider pricing on Blackwell, this is an economic catastrophe. In any case, Anthropic’s compute constraints are a convenient excuse to start fucking over its customers at scale.
Rate limits that were initially believed to be a “bug” are now the standard operating limits of using Anthropic’s services, and its models are absolutely, fundamentally worse than they were even a month ago.

It’s January 14 2026, and you just read The Atlantic’s breathless hype-slop about Claude Code, believing that it was “bigger than the ChatGPT moment,” that it was an “inflection point for AI progress,” and that it could build whatever software you imagined. While you’re not exactly sure what it is you’re meant to be excited about, your boss has been going on and on about how “those who don’t use AI will be left behind,” and your boss allows you to pay $200 for a year’s access to Claude Pro.

You, as a customer, no longer have access to the product you purchased. Your rate limits are entirely different, service uptime is measurably worse, and model performance has, for some reason, taken a massive dip. You hit your rate limits in minutes rather than hours. Prompts that previously allowed you a healthy back-and-forth over a project are now either impractical or impossible.

Your boss now has you vibe-coding barely-functional apps as a means of “integrating you with the development stack,” but every time you feed it a screenshot of what’s going wrong with the app you seem to hit your rate limits again. You ask your boss if he’ll upgrade you to the $100-a-month subscription, and he says that “you’ve got to make do, times are tough.” You sit at your desk trying to work out what the fuck to do for the next four hours, as you do not know how to code and what little you’ve been able to do is now impossible.

This is the reality for a lot of AI subscribers, though in many cases they’ll simply subscribe to OpenAI Codex or another service that hasn’t brought the hammer down on their rate limits. …for now, at least.
The con of the Large Language Model era is that any subscription you pay for is massively subsidized, and that any product you use can and will see its service degraded as these companies desperately try to either ease their capacity issues or lower their burn rate. Yet it’s unclear whether “more capacity” means that things will be cheaper, or better, or just a way for Anthropic to scale an increasingly shitty experience.

To explain: when an AI lab like Anthropic or OpenAI “hits capacity limits,” it doesn’t mean that they start turning away business or stop accepting subscribers, but that current (and new) subscribers will face randomized downtime and model issues, along with increasingly punishing rate limits. Neither company is facing a financial shortfall as a result of being unable to provide their services (rather, they’re facing financial shortfalls because they’re providing their services to customers). And yet, the only people paying the price for these “capacity limits” are the customers.

This is because AI labs must, when planning capacity, make arbitrary guesses about how large the company will get, and in the event that they acquire too much capacity, they’ll find themselves in dire financial straits, as Anthropic CEO Dario Amodei told Dwarkesh Patel back in February:

What happens if you don’t buy enough compute? Well, you find yourself having to buy it last-minute, which costs more money, which further erodes your margins, per The Information:

In other words, compute capacity is a knife-catching game. Ordering compute in advance lets you lock in a better rate, but having to buy compute at the last minute spikes those prices, eating any potential margin that might have been saved as a result of serving that extra demand. Order too little compute and you’ll find yourself unable to run stable and reliable services, spiking your costs as you rush to find more capacity.
Order too much capacity and you’ll have too little revenue to pay for it. It’s important to note that the “demand” in question here isn’t revenue waiting in the wings, but customers that are already paying you who want to do more with the product they paid for. More capacity allows you to potentially onboard new customers, but they too face the same problems as your capacity fills.

This also begs the question: how much capacity is “enough”? It’s clear that current capacity issues are a result of the inference (the creation of outputs) demands of Anthropic’s users. What does adding more capacity do, other than potentially bringing that under control?

This also suggests that Anthropic’s (and, by extension, OpenAI’s) business model is fundamentally flawed. At its current infrastructure scale, Anthropic cannot satisfactorily serve its current paying customer base, and even with this questionably-stable farce of a product, Anthropic still expects to burn $14 billion. While adding more capacity might potentially allow new customers to subscribe, said new customers would also add more strain on capacity, which would likely mean that nobody’s service improves but Anthropic still makes money.

It ultimately comes down to the definition of the word “demand.” Let me explain. Data center development is very slow. Only 5GW of capacity is under construction worldwide (and “construction” can mean anything from a single steel beam to a near-complete building).
As a result, both Anthropic and OpenAI are planning and paying for capacity years in advance based on “demand.” “Demand” in this case doesn’t just mean “people who want to pay for services,” but “the amount of compute that the people who pay us now and may pay us in the future will need for whatever it is they do.”  The amount of compute that a user may use varies wildly based on the model they choose and the task in question — a source at Microsoft once told me in the middle of last year that a single user could take up as many as 12 GPUs with a coding task using OpenAI’s o4-mini — which means that in a very real sense these guys are guessing and hoping for the best. It also means that their natural choice will be to fuck over their current users to ease their capacity issues, especially when those users are paying on a monthly or — ideally — annual basis. OpenAI and Anthropic need to show continued revenue growth, which means that they must have capacity available for new customers, which means that old customers will always be the first to be punished. We’re already seeing this with OpenAI’s new $100-a-month subscription, a kind of middle ground between its $20 and $200-a-month ChatGPT subscriptions that appears to have immediately reduced rate limits for $20-a-month subscribers.  To obfuscate the changes further, OpenAI also launched a bonus rate limit period through May 31 2026 , telling users that they will have “10x or 20x higher rate limits than plus” on its pricing page while also featuring a tiny little note that’s very easy for somebody to miss: This is a fundamentally insane and deceptive way to run a business, and I believe things will only get worse as capacity issues continue. 
Not only must Anthropic and OpenAI find a way to make their unsustainable and unprofitable services burn less money, but they must also constantly dance with metering out whatever capacity they have to their customers, because the more extra capacity they buy, the more money they lose.   However you feel about what LLMs can do, it’s impossible to ignore the incredible abuse and deception happening to just about every customer of an AI service. As I’ve said for years, AI companies are inherently unsustainable due to the unreliable and inconsistent outputs of Large Language Models and the incredible costs of providing the services. It’s also clear, at this point, that Anthropic and OpenAI have both offered subscriptions that were impossible to provide at scale at the price and availability they offered leading up to 2026, and that they did so with the intention of growing their revenue to acquire more customers, equity investment and attention.  As a result, customers of AI services have built workflows and habits based on an act of deceit. While some will say “this is just what tech companies do, they get you in when it’s cheap then jack up the price,” saying so is an act of cowardice and allegiance with the rich and powerful.  To be clear, Anthropic and OpenAI need to do this. They’ve always needed to do this. In fact, the ethical thing to do would’ve been to charge for and restrict the services in line with their actual costs so that users could have reliable and consistent access to the services in question. As of now, anyone who purchases any kind of AI subscription is subject to the whims of both the AI labs and their ability to successfully manage their capacity, which may or may not involve making the product that a user pays for worse. The “demand” for AI as it stands is an act of fiction, as much of that demand was conjured up using products that were either cheaper or more available. 
Every one of those effusive, breathless hype-screeds about Claude Code from January or February 2026 is discussing a product that no longer exists. On June 1 2026, any article or post about Codex’s efficacy must be rewritten, as rate limits will be halved .  While for legal reasons I’ll stop short of the most obvious word, Anthropic and OpenAI are running — intentionally or otherwise — deeply deceitful businesses where their customers cannot realistically judge the quality or availability of the service long-term. These companies are also clearly aware that their services are oversubscribed and capacity-constrained, yet aggressively court and market toward new customers, guaranteeing further service degradations and potential issues with models. This applies even to API customers, who face exactly the same downtime and model quality issues, all with the indignity of paying on a per-million-token basis, even when Claude Opus 4.6 decides to crap itself while refactoring something, runs token-intensive “agents” to fix simple bugs, or fails to abide by a user’s guidelines .  This is not a dignified way to use software, nor is it an ethical way to sell it.  How can you plan around this technology? Every month some new bullshit pops up. While incremental model gains may seem like a boon, how do you actually say “ok, let’s plan ahead” for a technology that CHANGES, for better or for worse, at random intervals? You’re constantly reevaluating model choices and harnesses and prompts and all kinds of other bullshit that also breaks in random ways because “that’s how large language models work.” Is that fun? Is that exciting? Do you like this? It seems exhausting to me, and nobody seems to be able to explain what’s good about it. How, exactly, does this change?  Right now, I’d guess that OpenAI has access to around 2GW of capacity ( as of the end of 2025 ), and Anthropic around 1GW based on discussions with sources. 
OpenAI is already building out around 10GW of capacity with Oracle, as well as locking in deals with CoreWeave ( $22.4 billion ), Amazon Web Services ( $138 billion ), Microsoft Azure ( $250 billion ), and Cerebras (“ 750MW ”). Meanwhile, Anthropic is now bringing on “multiple gigawatts of Google’s next-generation TPU capacity ” on top of deals with Microsoft , Hut8 , CoreWeave and Amazon Web Services. Both of these companies are making extremely large bets that their growth will continue at an astonishing, near-impossible rate. If OpenAI has reached “ $2 billion a month ” (which I doubt it can pay for) with around 2GW of capacity, this means that it has pre-ordered compute assuming it will make $10 billion or $20 billion a month in a few short years, which fits with The Information’s reporting that OpenAI projects it will make $113 billion in revenue in 2028. And if it doesn’t make that much revenue — and also doesn’t get funding or debt to support it — OpenAI will run out of money, much as Anthropic will if that capacity gets built and it doesn’t make tens of billions of dollars a month to pay for it. I see no scenario where costs come down, or where rate limits are eased. In fact, I think that as capacity limits get hit, both Anthropic and OpenAI will degrade the experience for the user (either through model degradation or rate limit decay) as much as they can.  I imagine that at some point enterprise customers will be able to pay for an even higher priority tier, and that Anthropic’s “Teams” subscription (which allows you to use the same subsidized subscriptions as everyone else) will be killed off, forcing any organization paying for Claude Code (and eventually Codex) to do so via the API, as has already happened for Anthropic’s enterprise users. Anyone integrating generative AI is part of a very large and randomized beta test. The product you pay for today will be materially different in its quality and availability in mere months. 
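The revenue-per-gigawatt extrapolation above can be written out explicitly. The inputs are the figures cited in this piece (roughly $2 billion a month on roughly 2GW, with around 10GW more on order); the linear revenue-per-gigawatt scaling is a simplifying assumption for the back-of-envelope math, not anything either company has published.

```python
# Back-of-envelope version of the revenue-per-gigawatt extrapolation,
# using only the figures cited in the piece and assuming revenue must
# scale linearly with capacity to keep today's ratio.

current_revenue_per_month = 2e9   # "$2 billion a month"
current_capacity_gw = 2.0         # ~2GW at the end of 2025
revenue_per_gw_month = current_revenue_per_month / current_capacity_gw

planned_capacity_gw = 10.0        # ~10GW building out with Oracle alone
implied_monthly = planned_capacity_gw * revenue_per_gw_month
implied_annual = implied_monthly * 12

print(f"${revenue_per_gw_month:,.0f} per GW per month")  # ~$1B/GW-month
print(f"${implied_monthly:,.0f} a month implied")        # ~$10B a month
print(f"${implied_annual:,.0f} a year implied")          # ~$120B a year
```

The ~$120 billion annual figure this sketch produces sits right next to the $113 billion 2028 projection reported by The Information, which is the point: the pre-ordered capacity only makes sense if revenue grows roughly five-fold.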
I told you this would happen in September 2024 . I have been trying to warn you this would happen, and I will repeat myself: these companies are losing so much more money than you can imagine, and they are going to twist the knife in and take as many liberties with their users and the media as they can on the way down.  It is fundamentally insane that we are treating these companies as real businesses, either in their economics or in the consistency of the product they offer.  These are unethical products sold in deceptive ways, both in their functionality and availability, and to defend them is to assist in a society-wide con with very few winners. And even if you like this, mark my words — your current way of life is unsustainable, and these companies have already made it clear they will make the service worse, without warning, and that’s if they even acknowledge that they’ve done anything at all. The thing you pay for is not sustainable at its current price and they have no way to fix that problem.  Do you not see you are being had? Do you not see that you are being used?  Do any of you think this is good? Does any of this actually feel like progress?  I think it’s miserable, joyless and corrosive to the human soul, at least in the way that so many people talk about AI. It isn’t even intelligent. It’s just more software that is built to make you defend it, to support it, to do the work it can’t so you can present the work as your own but also give it all the credit.  And to be clear, these companies absolutely fucking loathe you. They’ll make your service worse at a moment’s notice and then tell you nothing is wrong.  Anyone using a subscription to OpenAI or Anthropic’s services needs to wake up and realize that their way of life is going away — that rate limits will make current workflows impossible, that prices will increase, and that the product they’re selling even today is not one that makes any economic sense. 
Every single LLM product is being sold under false pretenses about what’s actually sustainable and possible long term. With AI, you’re not just the product, you’re a beta tester that pays for the privilege. And you’re a mark for untrustworthy con men selling software using deceptive and dangerous rhetoric.  I will be abundantly clear for legal reasons that it is illegal to throw a Molotov cocktail at anyone, as it is morally objectionable to do so. I explicitly and fundamentally object to the recent acts of violence against Sam Altman. It is also morally repugnant for Sam Altman to somehow suggest that the careful, thoughtful, determined, and eagerly fair work of Ronan Farrow and Andrew Marantz is in any way responsible for these acts of violence. Doing so is a deliberate attempt to chill the air around criticism of AI and its associated companies. Altman has since walked back the comments , claiming he “wishes he hadn’t used” a non-specific amount of the following words: These words remain on his blog, which suggests that Altman doesn’t regret them enough to remove them. I do, however, agree with Mr. Altman that the rhetoric around AI does need to change.  Both he and Mr. Amodei need to immediately stop overstating the capabilities of Large Language Models. Mr. Altman and Mr. Amodei should not discuss being “ scared ” of their models, or being “uncomfortable” that men such as they are in control unless they wish to shut down their services, or that they “ don’t know if models are conscious .”  They should immediately stop misleading people through company documentation that models are “ blackmailing ” people or, as Anthropic did in its Mythos system card , suggest a model has “broken containment and sent a message” when it A) was instructed to do so and B) did not actually break out of any container. 
They must stop discussing threats to jobs without actual meaningful data that is significantly more sound than “jobs that might be affected someday but for now we’ve got a chatbot.” Mr. Amodei should immediately cease any and all discussion of AI potentially or otherwise eliminating 50% of white collar jobs , and should actively reject and denounce any suggestion of AI “ creating a white collar bloodbath ,” just as Mr. Altman should cease predicting when Superintelligence might arrive. Those who defend AI labs will claim that these are “difficult conversations that need to be had,” when in actuality they engage in dangerous and frightening rhetoric as a means of boosting a company’s valuation and garnering attention. If either of these men truly believed these things, they would do something about it other than saying “you should be scared of us and the things we’re making, and I’m the only one brave enough to say anything.”  These conversations are also nonsensical and misleading when you compare them to what Large Language Models can do, and this rhetoric is a blatant attempt to scare people into paying for software today based on what it absolutely cannot and will not do in the future . It is an attempt to obfuscate the actual efficacy of a technology as a means of deceiving investors, the media and the general public.  Both Altman and Amodei engage in the language of AI doomerism as a means of generating attention, revenue and investment capital, actively selling their software and future investment potential based on their ownership of a technology that they say (disingenuously) is potentially going to take everybody’s jobs.  
Based on reports from his Instagram , the man who threw the Molotov cocktail at Sam Altman’s house was at least partially inspired by If Anyone Builds It, Everyone Dies, a doomer porn fantasy written by a pair of overly-verbose dunces spreading fearful language about the power of AI, inspired by the fearmongering of Altman himself. Altman suggested in 2023 that one of the authors might deserve the Nobel Peace Prize . I only see one side engaged in dangerous rhetoric, and it’s the one that has the most to gain from spreading it. I need to be clear that this act of violence is not something I endorse in any way. I am also glad that nobody was hurt.  I also think we need to be clear about the circumstances — and the rhetoric — that led somebody to do this, and why the AI industry needs to be well aware that the society they’re continually threatening with job loss is one full of people that are very, very close to the edge. This is not about anybody being “deserving” of anything, but a frank evaluation of cause and effect.  People feel like they’re being fucking tortured every time they load social media. Their money doesn’t go as far. Their financial situation has never been worse . Every time they read something it’s a story about ICE patrols or a near-nuclear war in Iran, or that gas is more expensive, or that there are worrying things happening in private credit. Nobody can afford a house and layoffs are constant. One group, however, appears to exist in an alternative world where anything they want is possible. They can raise as much money as they want . They can build as big a building as they want anywhere in the world. Everything they do is taken so seriously that the government will call a meeting about it . Every single media outlet talks about everything they do. Your boss forces you to use it. Every piece of software forces you to at least acknowledge that they use it too. 
Everyone is talking about it with complete certainty despite it not being completely clear why. As many people writhe in continual agony and fear, AI promises — but never quite delivers — some sort of vague utopia at the highest cost known to man. And these companies are, in no uncertain terms, coming for your job.  That’s what they want to do. They all say it. They use deceptively-worded studies that talk about “AI-exposed” careers to scare and mislead people into believing LLMs are coming for their jobs, all while spreading vague proclamations about how said job loss is imminent but also always 12 months away . Altman even says that jobs that will vanish weren’t real work to begin with , much as former OpenAI CTO Mira Murati said that some creative jobs shouldn’t have existed in the first place . These people who sell a product with no benefit comparable on any level to its ruinous, trillion-dollar cost are able to get anything they want at a time when those who work hard are given a kick in the fucking teeth, sneered at for not “using AI” that doesn’t actually seem to make their lives easier, and then told that their labor doesn’t constitute “real work.” At a time when nobody living a normal life feels like they have enough, the AI industry always seems to get more. There’s not enough money for free college or housing or healthcare or daycare but there’s always more money for AI compute.  Regular people face the harshest credit market in generations but private credit and specifically data centers can always get more money and more land .  AI can never fail — it can only be failed. 
If it doesn’t work, you simply don’t know how to “use AI” properly and will be “at a huge disadvantage” despite the sales pitch being “this is intelligent software that just does stuff.”  AI companies can get as much attention as they need, their failings explained away, their meager successes celebrated like the ball dropping on New Year’s Eve, their half-assed sub-War Of The Worlds “Mythos” horseshit treated like they’ve opened the gates of Hell .  Regular people feel ignored and like they’re not taken seriously, and the people being given the most money and attention are the ones loudly saying “we’re richer than anyone has ever been, we intend to spend more than anyone has ever spent, and we intend to take your job.”  Why are they surprised that somebody mentally unstable took them seriously? Did they not think that people would be angry? Constantly talking about how your company will make an indeterminate number of people jobless while also being able to raise over $162 billion in the space of two years and take up as much space on Earth as you please is something that could send somebody over the edge.  Every day the news reminds you that everything sucks and is more expensive unless you’re in AI, where you’ll be given as much money as you want and told you’re the most special person alive. I can imagine it tearing at a person’s soul as the world beats them down. What they did was a disgraceful act of violence.  Unstable people in various stages of torment act in erratic and dangerous ways. The suspect in the Molotov cocktail incident apparently had a manifesto in which he listed the names and addresses of both Altman and multiple other AI executives, and, per CNBC, discussed the threat of AI to humanity as a justification for his actions. I am genuinely happy to hear that this person was apprehended without anyone being hurt.  
These actions are morally wrong, and are also the direct result of the AI industry’s deceptive and manipulative scare campaign, one promoted by men like Altman and Amodei, as well as doomer fanfiction writers like Yudkowsky, and, of course, Daniel Kokotajlo of AI 2027 — both of whom have had their work validated and propagated via the New York Times.  On the subject of “dangerous rhetoric,” I think we need to reckon with the fact that the mainstream media has helped spread harmful propaganda, and that a lack of scrutiny of said propaganda is causing genuine harm.  I also do not hear any attempts by Mr. Altman to deal with the actual, documented threat of AI psychosis, and the people who have been twisted by Large Language Models to take their lives and those of others . These are acts of violence that could have been stopped had ChatGPT and similar applications not been anthropomorphized by design, and trained to be “friendly.”  These dangerous acts of violence were not inspired by Ronan Farrow publishing a piece about Sam Altman. They were caused by a years-long publicity campaign that has, since the beginning, been about how scary the technology is and how much money its owners make.  I separately believe that these executives and their cohort are intentionally scaring people as a means of growing their companies, and that these continual statements of “we’re making something to take your job and we need more money and space to do it” could be construed as a threat by somebody that’s already on edge.  I agree that the dangerous rhetoric around AI must stop. Dario Amodei and Sam Altman must immediately cease their manipulative and disingenuous scare-tactics, and begin describing Large Language Models in terms that match their actual abilities, all while dispensing with any further attempts to extrapolate their future capabilities. Enough with the fluff. Enough with the bullshit. Stop talking about AGI. 
Start talking about this like regular old software, because that’s all that ChatGPT is.  In the end, if Altman wants to engage with “good-faith criticism,” he should start acting in good faith. That starts with taking ownership of his role in a global disinformation campaign. It starts with recognizing how the AI industry has sold itself based on spreading mythology with the intent of creating unrest and fear.  And it starts with Altman and his ilk accepting any kind of responsibility for their actions. I’m not holding my breath.

Kev Quirk 5 days ago

I've Completed 100 Days To Offload (Again)

I just published my motorbike servicing rant and went over to my Pure Blog Dashboard to take a look at some stats, when I noticed this: 101 posts in the last year, which means I've completed 100 Days to Offload for a second time! 🎉 The whole point of the challenge is to get you to publish 100 posts on your personal blog in a year. Mission accomplished! If you're interested in taking part in the challenge too, make sure you get yourself added to the hall of fame once you've completed it. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email , or leave a comment .

DYNOMIGHT 1 week ago

I quit drinking for a year

In early January 2025, a family friend was over for lunch. One of my many guilty midwit pleasures is a love of New Year’s resolutions, so I asked her if she had made any. She said no, but mentioned that she had some relatives that were doing “damp January”. In case you’re not aware, Dry January is a challenge many people do to quit drinking alcohol during the month of January. These folks were doing a variant in which, instead of not drinking, one simply drinks less. For some reason, this triggered me. I thought, “Are you kidding? You can’t even stop drinking for a single month? Do you know how pathetic that is?” And then, “Fuck you! Fuck you for doing damp January! You know what, I’m going to stop drinking for a year !” To be clear, these thoughts were directed at people I’ve never even met. In retrospect, I wonder what was going on with me emotionally. But I take resolutions seriously, so I felt committed. We are now 15 months down the timeline, so I’ll make my report. This will sound odd, but I swear it’s true. Not drinking was so easy that it was almost easier than my previous baseline of not-not-drinking. Before starting this resolution, I didn’t drink much—perhaps two or three drinks per week. But I often thought about drinking. Every time I saw friends or went to a restaurant, I thought, “Should I have a drink?” Usually I decided not to. But making that decision required effort. After a few weeks of not drinking, that question never even came up. Drinking was simply not a thing I did, so I never needed to negotiate with myself. Theoretically, you could allow yourself one drink a month instead of zero. Theoretically, that should be easier. But I’m pretty sure I’d find it harder, because alcohol would still be an option , a thing to consider. Early on, I sometimes wanted a drink. But gradually I noticed that I didn’t really want a drink, I just wanted a thing . 
I can’t find a precise name for this concept in psychology, but often, some deep part of my brain seems to scream, “I WANT A THING.” It could be alcohol, but I found dessert worked just as well. I suspect that a new shirt or meeting a new dog would also work. I was not able to stop my brain from doing this. When it demanded a thing, I gave it a thing. I just substituted a non-alcohol thing. So, over the year, I became interested in desserts and even more interested in tea. The struggle was The Chocolates. Shortly after I made this resolution, my mother gave me a bag of chocolates that each contained a bit of whiskey. In general, I don’t keep chocolate at home. If anyone gives me chocolate, I immediately eat all of it and then text the giver, “Thanks for the chocolate, I ate it instead of dinner, it’s all gone, this is what will always happen if you give me chocolate.” But I couldn’t eat the Chocolates, because they contained alcohol. I managed to get guests to eat a few. A couple of times I came close to draining out the alcohol and eating the chocolate container. I even considered throwing them away, but that felt wrong. So instead I spent a year glaring at them and waiting for them to apologize for the anguish they were causing me. This represented half the difficulty of this resolution. I do not recommend it. Keep your things separate. Have you heard that alcohol is bad for sleep? Because alcohol is bad for sleep . I’ve always known that was true, abstractly. But sleep is variable. If I didn’t sleep well on an individual night, I was never sure: Was that because of the alcohol, or was it random variation? After a year without alcohol, I am very confident that yes indeed, alcohol is bad for sleep , because my sleep during 2025 was much better than in previous years. 
Sure, like anyone else, I still sometimes wake up and start thinking about oblivion rushing towards me, and how everything I love will vanish into time, and how all that was once future and hope inevitably becomes static and dust, and how the plague of bluetooth speakers continues to spread across the globe. But now: less! I wish there was a drug I could take that would give me energy and improve my mood and make me physically healthier and smarter, all without side-effects. I don’t think such a drug exists. But we do have the opposite! So, sadly, I’ve come to believe that alcohol is basically the perfect anti-nootropic. That’s not because it makes you dumb while you’re drunk. (True, but who cares?) Rather, that’s because it is bad for sleep , and therefore makes you worse across all dimensions the next day. I did find not drinking to have one clear downside: It’s just not that much fun to hang out with people who are drinking if you are not drinking yourself. To be clear, this is a limited effect. It’s only an issue at bars or certain parties where people are there to drink . I don’t go to many such gatherings, but when I did, I felt it was less fun. It’s not that I missed alcohol. Instead, my theory is that drinking parties are a sort of joint role-playing exercise: “Let’s all get together and collectively reduce our inhibitions and see what happens.” It’s fun not (just) because everyone is taking a recreational drug, but because it’s a joint social experience. If you don’t drink, then you aren’t fully participating. It seems like it should be possible to reproduce this effect without alcohol. You could imagine other ways to push the social equilibrium out of balance. Like… Masks? Or weird environments? Or mutual disclosure games? Should people get together and do a group cold plunge? Unfortunately, all these are complicated and/or carry some kind of social stigma. So until we figure something better out, this is a real cost of not drinking. 
It was minor for me, but it probably depends a lot on where you are in life. All other effects were minor. I guess I saved money at restaurants. I actually lost a bit of weight over the year, despite all the extra desserts, though I can’t say for sure if alcohol was the cause. Otherwise, once I stopped thinking of alcohol as an option, I rarely thought about the resolution at all, except when I saw those damn chocolates. Towards the end of the year, I started wondering if I should quit drinking forever. But I never came to a conclusion, because I rarely thought about alcohol. I considered having a drink at midnight on New Year’s Eve, but I happened to be on a plane that crossed the International Date Line and thus skipped New Year’s Eve. And then… for the first few months of 2026, I still didn’t drink. That wasn’t because of any decision. It just never seemed appealing because (a) sleep and (b) I’d broken the mental link between want thing and drink alcohol . Eventually, I ate the chocolates, and I had a glass of wine when visiting some friends. If I can continue rarely drinking while almost never thinking about drinking, I’ll probably do that. If I slowly slide back into always thinking of alcohol as a live option and always negotiating with myself, I might just resolve to quit forever. So that’s my story. Obviously, it’s heavily colored by my own idiosyncrasies, so it’s hard to say if it offers any general lesson. I do think people underrate the long-term health impact of drinking. The effect on heart disease is debated, but everyone agrees that any alcohol increases the risk of cancer. Still, the long-term effects from occasional light drinking probably aren’t huge. What’s really underrated are the short-term effects, via worse sleep. If I had to give advice, it would be this: If you drink, and you think you might be better off not drinking, why not try it? 
Maybe you’ll find that champagne is essential to your happiness and drink it every night, to hell with the costs. Maybe you’ll find a different baseline, or maybe you’ll quit forever. Whatever you decide, you’ll have full information.


We

In a glass-walled city ruled by the totalitarian One State, citizens have no privacy, no identity, no freedom, and no names: they each bear only a number. As they prepare to launch their first spaceship, The Integral , citizens are implored to write poems, treatises, and manifestos glorifying the One State and honoring this extraordinary time. D-503, the builder of The Integral, is not a writer, but he gamely takes up the challenge and discovers something quite shocking: he has a soul, a spirit, desires which exceed the container that the One State has set out for him, that make him long for something new. In the discovery of his own power of imagination is the greatest threat the One State will ever face—and his one chance for freedom. View this post on the web , subscribe to the newsletter , or reply via email .

neilzone 1 week ago

Sex and the Fedi

Over the weekend, Girl on the Net - an esteemed sex blogger who, incidentally, happens to be one of the smartest, strongest, and downright loveliest people that I know - tooted: If you ever get sick of me banging on about my life and think ‘ugh I wish she would stick to the porn’ then please know: hardly anyone ever boosts the … porn.

And this made me think. I had an engaging conversation with numerous people about it, and I still don’t have good answers, but I enjoyed the discussion and wanted to keep a note of it. This is that note.

I follow and chat with quite a lot of sex positive / sex work-related people in the fediverse, and many have expressed similar sentiments. They create, they share, they get “likes” - and, of course, ample criticism - but very few boosts / shares. It must be incredibly demoralising. (I am in a different position, in that I neither know nor care how many views my blogposts get.)

It made me ponder why people do not share sex-related content, when sex is clearly part of life for many (but not all) people. My thoughts were:

- Stigma about sex as pleasure. It’s fine to have sex, but not to talk about it. One of Girl on the Net’s regular themes is communication, and simply asking questions (not just about sex, but also about sex and one’s preferences and horizons). But I imagine that, for some, talking about sex is uncomfortable, including sharing other people talking about sex.
- Concerns relating to professional expectations and obligations. I fall into this category. I am sex positive, but I do not know where the Solicitors Regulation Authority would draw the line, and I don’t wish to be even close to where that line might be. So I play it safe, even though there is stuff that I would like to post or share. But, oh well, self-censorship ftw. Sometimes, I would love not to be “me” online.
- Being embarrassed about what others here might think. Similar, but different, to the points above. This is about other fedizens, who might be co-workers, employers, family members, or whatever.
- Sex as being in the sphere of one’s private life.
- Older people, perhaps especially men, being self-aware of engaging with younger adults posting sex-related stuff, and coming across as creepy. I completely get this, and I am somewhat paranoid about it myself. Several people responded to say that, yes, they felt like this. They might want to engage with public content (and I’m not talking about responding lasciviously, or sending dick pics), but do not want to be perceived as being inappropriate.

I received some thought-provoking feedback too:

- Women and non-binary people said that they felt unsafe boosting or posting sex-related content, because of reactions from men hitting on them. That, by posting about sex, some men took it as an unwelcome opportunity to solicit sex with them.
- Some people not wanting to boost because they feel that they don’t have enough followers to make it worthwhile. In terms of increasing the distribution of a toot, yes, that makes sense. It probably still sends a nice endorphin boost to the poster, though, that someone likes their work enough to want to boost it :) Where someone has a popular “main” account and a less popular “alt” account, but would only be willing/able to post sex-related stuff via that alt, this perhaps comes into play.
- Just not liking the stuff enough to boost it. Fair enough!
- Concerns over whether their server rules allow boosting of this kind of content, and not wanting to get blocked / banned.

I can understand each of these, and why they might lead to a “like” rather than a “boost”. None of them inhibits paying or tipping someone as a thank you for their work, though, which is another way of being supportive. But this also comes against a backdrop of increasing difficulties for sex workers and other people posting sex-related stuff. Payment processors denying income streams.
Platform operators enforcing their ever more restrictive morality rules, making working harder, and requiring more admin just to keep going. If people take, take, take, without giving back in some meaningful way, then that is challenging even for those who create and share for fun (for appreciation, perhaps, rather than tooting into the void), let alone those for whom this is their livelihood. I wish that I had better answers than I do.

iDiallo 1 week ago

It's not that deep

I have these Sunday evenings where I find myself sitting alone at the kitchen table, thinking about my life and how I got here. Usually, these sessions end with an inspiring idea that makes me want to get up and build something. I remember the old days when I couldn't even sleep because I had all these ideas bubbling in my head, and I could just get up and do it because I had no familial responsibilities. I still have that flair in me, but I don't always give in to those ideas. Instead, sometimes I choose to do something much simpler. I read.

Sometimes it's a book, sometimes it's a blog post. But always, it's something that stimulates my mind more than any startup idea or tech disruption. There is a blog I follow; I'm not even sure how I stumbled upon it. I don't think it has a newsletter, and it's not in my RSS feed. But it's in my mind. It's as if I can just feel it when the author posts something new. Right there on the kitchen table, I load it up, and I get a glimpse into someone else's life. I don't know much about this person, but reading her writing is soothing. It's not commercialized; the most I can say about it is, well, it is human.

When I read something that is written, anything that's written, I expect to hear the voice of the person behind it. Whether it is a struggle, a victory, or just a small remark. It only makes sense when there is a person behind it. Sometimes people write, and in order to sound professional, they remove their voice from it. It becomes like reading a Corporate Memphis blog. Devoid of any humanity.

It's weird how I have these names in my head. Keenen Charles. I check his blog on Sunday evenings as well. In fact, here are a few I read recently, in no particular order: Keenen Charles, Jerry (I particularly liked this specific article), and the blogger whose name I don't know.

This isn't to tell you that you need to read those articles or you will be left behind. It's not that deep. You can find things you like, and enjoy them at your own comfort. They don't have to be world-changing, they don't have to turn you into a millionaire, they just have to make you smile or nod for a moment. The world is constantly trying to remind us that we are at the edge of destruction. But you, the person sitting there, reading a random blog post from this random Guinean guy, yes you. Take it easy for the rest of the day.

ava's blog 1 week ago

small thoughts part 9

In ‘small thoughts’ posts, I’m posting a collection of short thoughts and opinions that don’t warrant their own post. :) It's been a while!

I know self love exists, because I feel it and my body lives it (most of the time). I know it’s easy to pretend that self love doesn’t exist, because a (negative) ego should not exist; but in my view, seeing yourself as a part of a whole instead, in order to cope with depression, is also done out of self love. Your body wants to survive. Even if you hate yourself, there is a part in you mental illness cannot touch (for now?) that wants to heal and seek ways to live regardless and make it bearable. If you have to pretend that self love isn’t real because you can’t consciously do it yet, that’s fine. But I know it is there, because otherwise I would not care about what I eat or drink, about healing illnesses, about fitness, about community, about higher goals than survival, like education and hobbies. I wanna enrich my body and my mind. I wanna act on my potential. I wanna be the best partner and friend I can be. Loving myself made loving others better, easier, healthier. Loving myself makes me show up for communities better. Loving myself makes me sacrifice for others within my boundaries, without burning out and without resentment.

Self love is seldom selfish. It doesn’t have to be. I think there’s a misconception that self love inherently includes self-obsessed navel-gazing, which in turn makes you constantly nitpick yourself and your life and focus on what you deserve but aren’t getting, and therefore makes you sad; but I disagree. It doesn’t have to be narcissistic and obsessive at all. It doesn’t have to mean putting yourself before others constantly, just in the way you’ll put the oxygen mask on yourself before you can help others put theirs on.

Sometimes I am afraid to show my kindness online.
My kindness naturally ebbs and flows - never truly gone, but there are phases where I really go out of my way and go extra hard, and phases where it’s just basic kindness. But like everyone, I can have bad days or a disagreement; I can be low energy, my patience can run out, or I may need to criticize someone, set boundaries or call something out. In theory, all of that can be done kindly, or at least need not detract from kindness. It can even be kindness. But in practice, some people don’t respect things until you say them in a very sharp tone, or you let your negative feelings show. And realistically, the second you don’t let people do what they want, or you criticize them, no matter how softly, they’ll see you as unkind. Kindness, to them, is you always being unconditionally supportive.

I am a little scared of people feeling tricked when I can’t keep up a strong habit of kindness at all times. I’ve seen it in the past, when people who made kindness one of their most prominent features (or were just seen that way) were dragged for suddenly being unkind - arguing with someone, being rude during a bad day, or whatever. It was unfair. People felt as if they had used kindness to cover up actually being an asshole, as if being kind was just an act. It makes me sad and scared that one bad moment can undo months or years of consistent kindness. I don’t know if I can be that perfect. On one hand, I get it - there really are these love & light girlies who preach all that stuff but are really toxic, mean and gossipy in real life. I acknowledge the stories of past school bullies always posting about ‘positive vibes only’ online. But it also makes it hard to show open kindness without putting yourself in a very limiting box of perfect behavior.
Not to mention that there’s a gender aspect to it too: higher kindness requirements for women, more situations where you’re required to be kind, and normal behaviors read as unkindness because you’re not a servant, doormat, motherly figure, etc.; looking young or feminine, or wearing dresses or predominantly pink stuff, increases the effects, in my experience. People, especially men, expect me to be a lot more motherly, forgiving, patient, kind and servant-like than I am. I actually just want to act and behave like myself, without someone slapping onto me that I must be very kind or motherly as a defining trait, just to accidentally violate that invisible role and have people claim that a rude moment is somehow my true self and all the genuine kindness was a mask.

Hot take I am willing to change my mind about: I think, looking back, it was a mistake that we treated people liking, commenting on and following the same individual (as in, a social media account of a private person, not a company or band etc.) as a “community”. There is no community-building or organizing going on in the comment sections of LA influencers, for example. Your readers (or viewers) all consume it independently from one another. There is barely any interaction between them, and what there is is often neither positive nor in-depth. No one bonds over “😂😂😂” or “Agreed.” or “Good post.” or some summary of the post. The views and likes you get are also partly people who checked you out once, and that’s that. Really, the people who see you online have nothing in common most of the time. They’re most often not gathering under a shared message, movement or art style, nor do they really know each other, and pretending it is so has had a role in para-social behaviors. Implying you have a community or fanbase as a simple social media account or blogger is like implying the people who watch the same ads on YouTube are one.

Reply via email. Published 04 Apr, 2026

A Room of My Own 1 week ago

Craving Quiet: Stepping Away for a While

Lately I've realised that even though I'm barely on social media, my life still feels 95% digital. I don't post on LinkedIn. My Instagram account mostly exists so I can open links people send me when I absolutely have to. I keep a fake Facebook account for Marketplace, and I use my real account (I've had it since the beginning of Facebook and all my friends live there, so it stays) for Messenger only. But there is more than social media to occupy our time now. My days are still full of feeds, links, apps, messages (WhatsApp groups and such), digital projects, and little things I feel like I should be keeping up with. And they are easy to keep up with; my phone is always in my hand anyway.

RELATED: I Choose Living Over Documenting · On the Compulsion to Record · The Journal Project I Can’t Quit · The Art of Organizing (Things That Don’t Need to Be Organized)

At work we showcase our AI agents, and I wonder (from my anecdotal experience) if we are creating more busy work for ourselves, replacing reflection and, with it, actual productivity and output and good old “getting the job done.” Most of our work meetings now have extensive transcripts that turn into minutes, notes, action points and insights. I remember when the output of such a meeting would be 2-3 points that we actually remembered. AI Generated Workslop certainly is a thing now.

I need a break from it all. And from all the self-imposed shoulds, such as scanning my old journals into Day One. Backing up Day One, which hasn't been backed up in a while. An external hard drive backup that's probably a year overdue. A Trello board full of things I want to do but don't really want to or have to do, or maybe I want to do them but can't justify the time when I already feel so busy. After a full day of work and virtual meetings, I feel completely depleted. Those self-imposed obligations, things that used to be fun because they were few and far between, are no longer acceptable.
I used to sneak in 15 minutes of personal things at work. Now when I have a break, I'd rather grab a coffee with someone or go for a walk. I crave analog. I crave nature. I crave quiet thinking time (not with a meditation app). I have made some changes already and they seem to be sticking. We have dinner at the table now, which has been good, at least we get some family time before everyone retreats to their own corners. We used to eat while watching a show together as a family, which is fine every now and then, but it was too much of it all. But still my phone is somewhere nearby, and I'm half-watching TV and half-checking a message or voice journaling into an app. None of it is thoughtful. It's just me blabbering. My brain feels like it's all over the place. I used to be able to sit with my own thoughts. I haven't been able to do that in a long time. My daughter broke her arm two weeks ago. She has a purple cast all her friends signed, and she was wondering whether to keep it when it comes off. I told her how I broke my arm as a kid, and she asked if I kept my cast. I said I would have liked to, but what we have now is better. I can take a clear photo of hers and she'll have that memory without keeping the physical thing. Then she asked if I had a photo of mine. I didn't. It never even occurred to me. Back then we took maybe 20 photos a year, if that, and they were all the more precious for it. Now I'm struggling to keep my monthly saves under 150 photos and screenshots, most of which I probably don't need. RELATED: My Photo Management and Memory Keeping Workflow I love my Day One journals , I really do. I just exported all of 2025 to PDF and JSON. But reading back through it, it's every tiny minutia of my life. I like to think it'll be interesting to me one day. Probably not to anyone else. And I wonder whether the time I spent on it was worth it. Yes, there are some insights there , but nothing that I didn’t already know. 
I wonder what I would have gained had I allowed myself that thinking time instead of outsourcing it to AI. RELATED: Committing to the Thinking Life

If my house burned down and I lost everything, the memories that matter are still in my head. I'm a cumulative experience of all of it. Do I need the artifact to know who I am? I still have journals from my 20s and 30s sitting back home in Bosnia. Thick ones, full of pasted tickets and stubs and mementos. I haven't looked at them in years, but I can't let them go. My plan is to eventually scan them, maybe pay one of my kids to do it, since they won't be able to read my handwriting anyway. RELATED: Letting Go of Old Journals and Mementos

But anyway. The point is, I just need a break. From reading things online, from note-keeping, from digital journaling, blogging, saving notes and highlights (even my Readwise subscription feels intrusive now), from all of it. I've decided to do a 30-day digital detox. Within reason, because I still have to work. But I'm off until Tuesday, so I have a few days to ease into it. I'm lucky and privileged that I can do this. That I can shut down for a while, stop following things I can't influence, and let go of expectations I put on myself.

So that's what I'm doing. Simplifying my phone, deleting apps, putting the phone away when I get home. If we're watching something as a family, fine. One episode. But otherwise, even if I'm bored and restless, I'll go for a walk or play a board game, read a book. Journal (on paper). I'll do nothing, like I used to. Go to bed early. Meet a friend for coffee (and be more proactive about that). It's all become too hard because easy distractions that scratch the itch of everything are too easy. Calm my mind. Slow down. It's been too much. Time to reclaim myself.

And if you've gotten this far: the world is reminding me once again of E.M. Forster's The Machine Stops, which I wrote about in 2020. It feels eerily even more relevant now.

Manuel Moreale 1 week ago

Anthony Nelzin-Santos

This week on the People and Blogs series we have an interview with Anthony Nelzin-Santos, whose blog can be found at z1nz0l1n.com . Tired of RSS? Read this in your browser or sign up for the newsletter . People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. Bonjour ! I’m a militant wayfarer, budding typographer, pathological reader, slow cyclist, obsessive tinkerer, dangerous cook, amateur bookbinder, homicidal gardener, mediocre sewist, and fanatical melomaniac living in Lyon (France). I was a technology journalist and journalism teacher for sixteen years, but i now work in instructional design. In my spare time, i take photos of old storefronts to preserve a rapidly fading typographical tradition. One of these days, i’ll finally finish the typefaces i’ve been working on forever. And my novel. And the painting of the bathroom. (My wife is a saint.) I was born a few years before the web was invented and grew up at this fascinating time when everybody wanted to do something with it, but nobody knew quite what yet. We were still supposed to learn Logo and Pascal in technology class, but most of the teachers understood the importance of the web and taught us the basics of HTML and CSS. I built my first website in 2000… as a school assignment! By 2007, i was one of those insufferable tech bloggers who made enough money to feel entitled, but not enough to feel safe. (I moonlighted as a graphic designer.) When more established outlets came knocking at my door, i shut down my blog and became one of those insufferable tech journalists who make enough money to feel entitled, but not enough to feel safe. (I moonlighted as a journalism teacher.) I kept a personal blog under the “zinzolin” moniker. This shade of purple is my favourite colour, partly because it sounds a bit like my name. 
Over the years, it became more and more difficult to find the energy to write recreationally after having spent the day writing professionally. In 2025, feeling more than a little burnt out, i rebooted my blog and switched from French to English. Fortunately, the name is equally weird in both languages. I don’t have a process so much as a way of managing the incessant chatter in my head. I write to give myself the permission to forget, and i publish to gift myself the ability to remember. You’ll never catch me without some way to capture those little “brain itches” — a notebook, the Bloom app, a digital recorder, the back of my hand… (I wrote part of this interview as a long series of text messages to myself!) In the middle of the week, i start reviewing my notes to find a common theme or extract the strongest idea. When an incomplete thought keeps coming back, i don’t try to force it by staring at a blinking cursor. I take a long walk, and usually, i have to stop part way to write. Most of the actual blogging is done long before i sit down to properly draft my weekly note. I have this romantic notion that the more comfortable i am, the more i can edit, the worse my writing tends to get. If i could, i’d write everything longhand in a rickety train, stream-of-consciousness style, and publish the raw scans of my notebooks. You wouldn’t be able to read half of it, but i can assure you the illegible half would be Nobel-prize worthy. But then, some things only happen after a few hours of diligent editing. If i give myself enough time, i can stop transcribing my notes and start conversing with them. There’s always something worth exploring in the gap between our past and present selves – even if the past was two days ago – but that delicate work requires a conducive environment. 
Judging by my recent output, it looks like this environment comprises a good chair , a MacBook Air on one of those ugly lap desks, my custom international QWERTY layout , iA Writer for writing and Antidote for proofreading, cosy lighting, just the right amount of background noise, and most important of all, a pot of delicious coffee. I’ve tried pretty much every CMS and SSG under the sun, but i’ve always come back to WordPress, until Matt Mullenweg reminded us that a benevolent dictator still is a dictator . Z1NZ0L1N is now built on Ghost and hosted by Magic Pages . I used to use Tinylytics and Buttondown , but i’m now using Ghost’s integrated analytics and newsletter features. My other websites are hosted on a VPS with Infomaniak , which is also where i get my domain names, e-mail, and assorted cloud services. That’s a question i had to ask myself when i rebooted Z1NZ0L1N last year. I switched to English in a bid to better separate my professional output from my recreational output. I jettisoned most of my audience, but i found a new community around the IndieWeb Carnival and quickly rebuilt a readership on my own merits. I get excited each time i get an e-mail from someone i don’t know from a country on the other side of the globe. I wanted to find a way to publish regularly without turning Z1NZ0L1N into the umpteenth link blog. After a few experiments, i’ve settled on a weekly note that’s part “what i’m doing”, part “what the rest of the world is doing”. This is old-school blogging meets recommendation algorithms — and i love it. Some things haven’t changed, though, and will never change. I use an open-source CMS that i could host myself, not a proprietary platform that i can’t control. I designed my theme myself. I don’t play the SEO/GEO game. I pay a little less than €10/month for Magic Pages’ starter plan with the custom themes add-on. Considering that it saves me €15/month in third-party services, i’d say it’s a fair price. 
I pay €12/year for the domain, but i also registered a few variations, including , which was first registered in 1999! Blogging is my least expensive hobby — by far.

As someone who’s worked a lot on the economics of independent publishing, i’m happily subscribed to a few news outlets and magazines. I like the idea of $1/month memberships for blogs, but in practice, i find it hard to track multiple micro-subscriptions on top of my existing (and frankly far too numerous) digital subscriptions. I wonder if we should create blogging collectives, almost like unions and co-ops, to collect and redistribute a single subscription among members. In the meantime, i’ll continue not talking about my Ko-Fi page.

The Forest and Ye Olde Blogroll are fantastic discovery tools. A lot of my favourite bloggers have already been featured in People and Blogs: VH Belvadi, BSAG, Frank Chimero, Keenan, Piper Haywood, Nick Heer, Tom McWright, Riccardo Mori, Jim Nielsen, Kev Quirk, Arun Venkatesan, Zinzy… I’d love to see how Rob Weychert, Chris Glass, Josh Ginter or Melanie Richards would answer. Their approaches to blogging couldn’t be more different, but they each informed mine in their own way.

Since 2008, i’ve taken thousands of photos of old storefronts. It began as a way to inform my typographical practice, but it rapidly became an excuse to go out and pay attention – really pay attention – to the world around me. You wouldn’t believe the things i’ve discovered in side streets, the number of conversations i’ve struck up after taking a picture of a once-beloved shop, and how my way of looking at the evolution of cities has entirely changed. If you’re up for a little challenge, find your own collection. It might be cool doors, weird postboxes, triangular things, every bookshop in Nova Scotia, sewer manholes, purple things, number signs… It’ll give you another perspective not only when travelling in foreign places, but also on your (not so) familiar surroundings.
It doesn’t cost a penny, but it’ll pay off immensely. Now that you're done reading the interview, go check the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 135 interviews . People and Blogs is possible because kind people support it.

iDiallo 2 weeks ago

13th Year of Blogging

Of all the days to start a blog, I chose April Fools' Day. It wasn't intentional, maybe more of a reflection of my mindset. When I decide to do something, I shut off my brain and just do it. This was a commitment I made without thinking about the long-term effects. I knew writing was hard, but I didn't know how hard. I knew that maintaining a server was hard, but I didn't know the stress it would cause. Especially that first time I went viral. Seeing traffic pour in, reading back the article, and realizing it was littered with errors. I was scrambling to fix those errors while users hammered my server. I tried restarting it to relieve the load and update the content, but to no avail. It was a stressful experience. One I wouldn't trade for anything in the world. 13 years later, it feels like the longest debugging session I've ever run. Random people message me pointing out bugs. Some of it is complete nonsense. But others... well, I actually sent payment to a user who sent me a proof of concept showing how to compromise the entire server. I thought he'd done some serious hacking, but when I responded, he pointed me to one of my own articles where I had accidentally revealed a vulnerability in my framework. The amount you learn from running your own blog can't be replicated by any other means. Unlike other side projects that come and go, the blog has to remain. Part of its value is its longevity. No matter what, I need to make sure it stays online. In the age of AI, it feels like anyone can spin up a blog and fill it with LLM-generated content to rival any established one. But there's something no LLM can replicate: longevity. No matter what technology we come up with, no tool can create a 50-year-old oak tree. The only way to have one is to plant a seed and give it the time it needs to grow. Your very first blog post may not be entirely relevant years later, but it's that seed. Over time, you develop a voice, a process, a personality. 
Even when your blog has an audience of one, it becomes a reflection of every hurdle you cleared. For me, it's the friction in my career, the lessons I learned, the friends I made along the way. And luckily, it's also the audience that keeps me honest and stops me from spewing nonsense. Nothing brings a barrage of emails faster than being wrong. Maybe that's why I subconsciously published it on April Fools' Day. Maybe that's the joke. I'm going to keep adding rings to my tree, audience or no audience; I'm building longevity. Thank you for being part of this journey.

Extra: Some articles I wrote on April Fools' Day:

- So you've been blogging for 2 years
- Quietly waiting for Overnight Success
- Happy 5th Anniversary
- Count the number of words with MySQL
- How to self-publish a book in 7 years
- The Art of Absurd Commitment
- Happy 12th Birthday Blog
- What is Copilot exactly?

Brain Baking 2 weeks ago

Favourites of March 2026

Our daughter turned three. We’re beyond exhausted, but a ripgrep search in this repository yields five more instances of the word exhausted in combination with parenting, so I’ll shut up. I guess we also celebrate that after three years of pure chaos, we’re… still alive? Previous month: February 2026.

I am just two levels short of finishing Gobliins 6 before deciding to throw in the towel. Thanks to the increased presence of moon logic, the entire adventure was more frustrating than relaxing. As a big Gobliins fan, I have to admit: the game left me a bit disappointed. It’s all right; I’ll just replay Gob3 again. As it left me wanting more, I went back to the original Gobliiins game, which I somehow missed as back in the day my dad bought Gobliins 2 and we just continued with 3 without looking back. It’s still worth exploring, but it's very basic, and the presence of the life bar is a very strange (and bad!) design choice that fortunately was abandoned in the sequels. I charged the Analogue Pocket and hope to get in some good ol’ Game Boy (Color) games in the coming month.

I read a depressing amount of personal genAI tales; more than enough to fill another blog post. I’ll try to keep these out of here as much as possible. My wife bumped into a hacker called Un Kyu Lee crafting his own micro journal hardware. The result looks very cool, including a hinge to hang it on the door as a physical reminder. I’d rather keep on journaling with my fountain pens, but still, very cool!

Related topics: / metapost / By Wouter Groeneveld on 1 April 2026. Reply via email.

Michael vibe-code-ported an X11 window manager into Wayland; an interesting Claude experiment to see how agentic development works. Greg Newman hosted the Emacs Blog Post Carnival 2025-07 on writing experiences and summarised the participating links. Lots of little gems in there. Rijksmuseum writes about the discovery of the new Rembrandt painting.
Well, “new”—it’s been in a private collection for years and only recently resurfaced. Peter Bridger shares his experience at the retro happening SWAG February 2026. I wish we had something similar nearby! Chuck Jordan shares SimCity vibes. As one of the original programmers involved in the project, he would know. (Via The Virtual Moose)

The 1MB Club has an interesting (older) article I read last month: consider disabling HTTPS auto redirects. I can’t remember why I turned this back on: I want my old WinXP machine to be able to reach it as well, without the extra TLS overhead. Funny though: they mention “You can freely view this website on both HTTPS and HTTP.” I remove the s in the protocol, press Enter, and get redirected. Whoops.

PolyWolf has been thinking about blazing fast static site generators. This is a goldmine, as I have a wild idea to write my own generator in Clojure. When the exhaustion and brain fog go away, that is. According to Rishi Baldawa, the reviewer isn’t the bottleneck. This one’s a bit AI-flavoured, so beware if you’re coming down with an AI cold. (I know I have. Handkerchiefs full.) Marcin Wichary’s keyboard grandmastery again shines through in his Apple Fn endgame article. I wish his keyboard book weren’t sold out. Wordsmith writes about the underrated simplicity of the original Harvest Moon (1996) video game. Dale Mellor defends using a dynamically-produced blog site, which is a nice change given the static site generator craziness. I’m still on Hugo and have little need for the points he brings up, but still, some others might. Tazjin tries out Guix as a Nixer. I was eyeing Guix as a budding Lisp fanboy, but both options still can’t seem to fit in my head. I’ll let it stew for a little while longer. Homo Ludditus announces distro hopping time. The conclusion? “The madhouse could be a valid destination. But I’m still looking for better alternatives.” So much for 2026 as the year of the Linux desktop, huh.
The Digital Antiquarian writes about the year of peak Might & Magic, when New World Computing was still on top of the world.

Here’s an interesting thought experiment by Andrey Listopadov: What if structural editing was a mistake?

In this 2020 post by Vincent Bernat, photos of a bunch of cool vintage PC expansion cards are shared, in conjunction with period-correct software that made great use of them.

Gabor Torok switched to KDE Plasma, an interesting read because we both switched to OSX because of reasons and are trying to crawl out of the Apple hole. I don’t know if I’m quite ready yet.

Did you know there’s a relation between knitting and programming? Abbey Perini does.

Mykal Machon shares some insightful guiding principles to lead a fuller life. Judging by the principles, I don’t think Mykal has any young kids.

I’m using this as a checklist to find out if I missed essential albums: Hip Hop Golden Age’s Top 40 Hip Hop Albums of 1998.

Here’s another GitHub “awesome” list; this time public APIs. Could be useful. Already used for my courses.

It doesn’t hurt to link to the 2007 Slow Code manifesto.

FontCrafter is a cool way to generate a real font based on your handwriting.

WireTap is an open source Ngrok alternative.

The Stump Window Manager is the only WM (except the obvious EXWM) I could find that’s written in Common Lisp.

I should look into Ulauncher if I ever want to make the switch to Linux to replace Alfred.

Christoph Frick shares a cool GitHub Gist showing that you can write your AwesomeWM config in Fennel instead of Lua.

Yazi looks like an Emacs Dired inside a shell?
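As an aside: the kind of repository-wide word count from the top of this post can be reproduced with plain grep too (ripgrep’s `rg -c` is the faster drop-in). The directory and files below are a made-up miniature stand-in for the blog’s repository, not the actual search:

```shell
# Build a tiny stand-in corpus (hypothetical; the real search ran over the blog's repo)
mkdir -p /tmp/blogdemo
printf 'we are beyond exhausted\nfine day\n' > /tmp/blogdemo/a.md
printf 'exhausted again\nexhausted parents everywhere\n' > /tmp/blogdemo/b.md

# Count matching lines per file; `rg -c exhausted /tmp/blogdemo` does the same, faster
grep -rc 'exhausted' /tmp/blogdemo | sort
```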

David Bushell 2 weeks ago

I quit. The clankers won.

… is what I’m reading far too often! Some of you are losing faith! A growing sentiment amongst my peers — those who haven’t already resigned to an NPC career path † — is that blogging is over. Coding is cooked. What’s the point of sharing insights and expertise when the Cognitive Dark Forest will feed on our humanity? Before I’m dismissed as an ill-informed hater, please note: I’ve done my research.

† To be fair it’s a valid choice in this economy. Clock in, slop around, clock out. Why not?

It’s never been more important to blog. There has never been a better time to blog. I will tell you why. We’re being starved for human conversation and authentic voices. What’s more: everyone is trying to take your voice away. Do not opt out of using it yourself.

First let’s accept the realities. The giant plagiarism machines have already stolen everything. Copyright is dead. Licenses are washed away in clean rooms. Mass surveillance and tracking are a feature, privacy is a bug. Everything is an “algorithm” optimised to exploit. How can we possibly combat that?

From a purely selfish perspective it’s never been easier to stand out and assert yourself as an authority. When everyone is deferring to the big bullshitter in the cloud, your original thoughts are invaluable. Your brain is your biggest asset. Share it with others for mutual benefit. I find writing stuff down improves my memory and hardens my resolve. I bet that’s true for you too. It’s part rote learning, part rubberducking †. Writing publicly in blog form forces me to question assumptions. Even when research fails me, Cunningham’s Law saves me.

† Some will claim writing into a predictive chat box helps too, and sure, they’re absolutely right!

Blogging makes you a better professional. No matter how small your audience, someone will eventually stumble upon your blog and it will unblock their path. Don’t accept a fate being forced upon you.
The AI industry is 99% hype; a billion-dollar industrial complex to put a price tag on creation. At this point, if you believe AI is ‘just a tool’ you’re wilfully ignoring the harm. (Regardless, why do I keep being told it’s an ‘extreme’ stance if I decide not to buy something?) The 1% utility AI has is overshadowed by the overwhelming mediocrity it regurgitates.

“We’re saying goodbye to Sora. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing.” (@soraofficialapp - XCancel)

Is there anything, in the entire recorded history of human creation, that could have possibly mattered less than the flatulence Sora produced? NFTs had more value. I’m not protective over the word “art”. Generative AI is art. It’s irredeemably shit art; end of conversation. A child’s crayon doodle is also lacking refined artistry, but we hang it on our fridge because a human made it, and that matters. We care, and caring has a positive effect on our lives. When you pass human creativity through the slop wringer, or just prompt an incantation, the result is continvoucly morged; a vapid mockery of the input. The garbage out no longer matters, nobody cares, nobody benefits.

I forgot where I was going with this… oh right: don’t resign yourself to the deskilling of our craft. You should keep blogging! Take pride in your ability and unique voice. But please don’t desecrate yourself with slop.

“The only winning move is not to play.” (WarGames, 1983)

We’ve gotten too comfortable with the convenience of Big Tech. We do not have to continue playing their game. Don’t buy the narratives they’re selling. The AI industry is built on the predatory business model of casinos. Except they’ve forgotten the house is supposed to win. One upside of this looming economic and intellectual depression is that the media is beginning to recognise gatekeepers are no longer the hand that feeds them. Big Tech is not the web.
You don’t have to use it nor support it. Blog for the old web, the open web, the indie web — the web you want to see. And if you think I’m being dramatic and I’ve upset your new toys, you’re welcome to be left behind in the miasmatic dystopia these technofascists are racing to build.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

Manuel Moreale 2 weeks ago

Slash AI

I’ve seen /ai pages popping up here and there on other people’s blogs. The idea for these pages is, and I quote, to «promote trust and transparency». Trust, in the context of the 2026 internet—and society in general—is quite the complex topic. Dishing out trust willy-nilly is no longer a reasonable thing to do, and I also think we’re getting to the point where the “benefit of the doubt” is no longer worth considering. If I were to write on this /ai page that I don’t let these tools touch anything I post on this blog, would you trust me? Would that change the perception you have of me? And if you did trust me, why are you doing it? After all, you have no way to actually know for sure. But that is precisely what trust is, isn’t it? Trust is not based on knowledge, but on instinct, on intuitions, on feelings, and on prior experience.

Personally, I couldn’t care less what you write on your /ai page. The same way I couldn’t care less if you use em-dashes. Words are cheap, easy to write, and they mean less and less. But your history, all the baggage you carry with you, all you have written and said: that is harder to fake. Building it is time-consuming, but destroying it takes a second. If you start posting AI slop, my trust in you is gone in an instant, and no matter how you try to justify it, that trust will not come back.

Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for $1/month :: See my generous supporters :: Subscribe to People and Blogs

ava's blog 2 weeks ago

offer: blogmaxxing class

Looksmaxxing is all the rage nowadays, but what about your blog? Look no further! I am easily one of the bloggers ever, and I have compiled everything I have learned in the years on this platform. And you guys get it first, for 50% off! ✍️ For only 67.67 Euro , you'll get course material covering ✨ For a steal of 69.99 Euro , you unlock access to everything about 🚀 The final lessons are yours for 42.00 Euro : Your blog deserves more than mediocrity. It deserves at least 50 upvotes . With this, you’ll unlock the secret 3-step system top bloggers use to dominate the Trending page while looking effortlessly perfect. ⏳ WARNING: Only 17 spots left for VIP access , and only available until 01.04.2026 23:59:59 CET ! Reply via email Published 01 Apr, 2026 High-impact writing and leveling up your Word/Memorability Ratio . Striking the balance between Jestermaxxing and Corporatemogging . Sharp sentence structure for a chiseled outline! Lessons learned from beating your header with a hammer. Smoothing out your CSS wrinkles with hardcore AI Sculpting ™. How the optimal font-weight changed my life! The art of biohacking Cortisol and Dopamine spikes that turns readers into fans. FOMO Widgets : “ 15 people are reading this now, ” and other social proof hacks that build core community moments! The undeniable magic of using OpenClaw to auto-respond to reader mails and letting it clean your Inbox for you :)

ava's blog 2 weeks ago

rose ▪ bud ▪ thorn - march 2026

Reply via email Published 31 Mar, 2026 I was featured as a Country Reporter on noyb's channels! My summaries made it into their newsletter 4 times this month. I reached Gold Status in my volunteering (20+ summarized and translated decisions for GDPRhub) now. Next up is the Magenta Status at 35+ :) I've written 4 exams this month; if I pass all of them, that's 30 ECTS! I think I'll pass 3. Switched away from Discord. I have no issue with being classified as a teen on the platform because it doesn't stop me from doing anything, but the move fit in with living my actual values like I do with other tech/media things (preferring open source, EU, etc.). I'm on both Matrix and Fluxer. Did some spring cleaning, like clearing out the fridge, wiping the inside, and rearranging the contents, together with throwing away expired toiletries, putting like 2 years of used batteries in the battery collection bin, decluttering a drawer, and vacuuming under and behind the sofa and bed. I've really felt like pouring extra energy into my looks lately. Got back into oil massages for my scalp, hair treatments, sheet masks, teeth bleaching, and got my nails done again (after going natural since December) and got a pedicure, too. I bought new dress pants that are so insanely comfortable, good looking and flattering, it's ridiculous! My yearly gyn checkup came back fine, and I finally caved and got proper treatment for my PCOS and endometriosis. I went out for some runs in the late evening :) haven't run outside in ages, I usually limit it to the treadmill. I went out to parks and forests, enjoying the weather and my free time after the exams. It was super healing and relaxing. Journaled more. Went to a vegan food fair. I applied to a job opening sent to me by a fellow blogger (James) and got an interview!!! I think I did well :) Upcoming: More decluttering and selling, tidying up the basement. Planning to go to two museum exhibitions soon before they close. 
Gonna go on vacation with two friends for 8 days next month! Booked tickets for an upcoming data protection event. Working on business cards (and maybe stickers?) for it. I've had some issues with my illnesses. :( The stress of intense studying most of February and March, weird weather changes, straining work stuff, eating a little too much sugar, the family situation, and starting two new medications this month sent my body over the edge. That made my fitness goals and studying a bit harder. I also unfortunately didn’t taper off a bigger dose of an anti-anxiety med I occasionally take as needed and accidentally caused agonizing withdrawal symptoms without realizing in time 🥴 I cut contact with the last family member I was still talking to. It's stressful to withstand all the attempts to reach out to me, and to stick with the decision without guilt. My wardrobe is stressing me a little. I preferred not to own much. Unfortunately, the less you have, the more you wear the same things, and the more they get washed and worn out. At some point, you want to replace a lot of it at the same time. That's not only financially painful, but also annoying when your goal is to sew most of your clothes yourself and you currently have neither the time nor the energy to buy fabric and sew the things you need. I am annoyed at walking into these fast fashion places, seeing nothing I like, then forcing myself to look at stuff more closely, and everything is XS, feels like a trash bag, and costs too much for how flimsy and unethical it is. I'll have to try my luck with thrifting more, but even that has been overrun with Shein trash. If I make it to the second interview round, I might have to decline it. I like the company, they’re a great and respected employer, generous, and the interview was fun… but there are some dealbreakers for me, which hurts. I sat with it after, and slept on it now, and I just don’t think I’ll be happy in these circumstances. 
:( I wish it wasn’t so, because they were in the Top 3 of places I’d wanna work at, and I want a job in data protection badly. But it doesn’t feel right, and I can’t justify moving forward with it, all things considered. It feels like the wrong time for me. Maybe another open position in a couple years?

Cassidy Williams 2 weeks ago

My rainbow sweater

My sister got me a rainbow cardigan sweater a couple years ago for Christmas that is very fluffy and floppy. It doesn’t have pockets, it doesn’t have buttons, it just kind of drapes on me and is like a small blanket with arms. It’s not a practical sweater, but it’s cozy. Because it’s not practical, I always have to remember to, for example, wear only pants that have pockets with it, so that I can put my stuff (phone, lip balm, etc) somewhere. I always have to wear certain shirts that don’t bunch up in a certain way when the sweater is feeling extra floppy. It’s just… not the most convenient sweater. But hoo boy, my babies love my rainbow sweater. My oldest loves to sit on my lap and have me envelop her in it in a hug. My youngest loves to bury his face in it when he’s sleepy. Both of them love to pet it because it’s so soft. They admire the colors. They tangle their fingers in it and hold on tight to the loops. They flop with the sweater, and with me, and it’s the coziest thing in the world. I love, love, love putting on this sweater. I get a little giddy thinking about how the babies will gravitate towards it as soon as they see the thick loops plopped across my shoulders. It’s impractical, and it’s weird, but it brings me the best warm cuddles ever.

Manuel Moreale 2 weeks ago

Nikhil Anand

This week on the People and Blogs series we have an interview with Nikhil Anand, whose blog can be found at nikhil.io. Tired of RSS? Read this in your browser or sign up for the newsletter. People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. Hi, I'm Nikhil! I grew up in the UAE and came to the United States for college and graduated with a degree in biomedical engineering. I worked in academia and industry for about 15 years before deciding to turn my attention and energies towards problems in healthcare. I'm now a graduate student at Columbia University's Medical Center and am studying clinical informatics and loving the magnificent beehive that is New York City. With the time I have, I love going to art museums, practicing calligraphy, reading short stories and graphic novels, and watching every suspense/mystery show or movie I can (huge fan of the genre; for example I've watched all of Columbo at least three times). I'm also trying to learn CAD and have 3D printed several small abominations. I started blogging around 2003 after discovering blogs like Kottke.org, Jeffrey Zeldman's blog, Greg Storey's Airbag.ca, and Todd Dominey's WhatDoIKnow.org. My first blog was at freeorange.net which I now use as a placeholder for my tiny LLC's future site. I used to live in Ames, Iowa at the time and decided to blog about what I knew of stuff going on in the town: gossip, lectures and shows I'd attended, photos of random scenes and events, and so on. That last part proved to be great: I'd hear from quite a few alumni or former residents who'd have photo requests for nostalgia and I'd gladly oblige, especially since I was super excited to use my first digital camera, a whopping 5 megapixel Sony DSC-F717 😊 I then stopped blogging for about 10 or so years and resumed in 2018. 
My current blog is essentially a freeform dump: just this mélange of stuff I find interesting and/or may want to reference later. There's really no audience in mind. I use a lot of tags on my posts and am often delighted by exploring them a while later. I moved all my bookmarks over from PinBoard (an excellent service) and am trying to get off Instagram. I'm also trying to be better about making and sharing things (photos, calligraphy, art) no matter how terrible they are, and not just consuming them. As for the name, I really wanted a domain hack, , but this sadly required permission from the Israeli government that I was pretty sure I wouldn't get 😅 So I went with the shortest and 'coolest' TLD I could find and ended up with nikhil.io. I also have nikhil.fish as an alias for no reason. I think my site's half a tumblelog. As for the other half, I have a Markdown file called in my iCloud Drive that I dump inchoate thoughts into (it's at about half a meg right now). I also use the excellent Things app on my phone to save blog posts, names, recommendations, articles, and media of interest to peruse later. When I have time, I look at these two sources to post and comment on something I think is beautiful, interesting, or funny. All professional creatives I know personally have a dedicated space where they do their work, and they have told me that this matters immensely to them. In my case, I have a setup I've used reliably over many years and love it. I especially love my sit-to-stand desk (on wheels), giant display, and clickity-clack keyboard. I always listen to ambient music or white noise while working on anything (Loscil's works are a favorite). I've found that I just cannot focus in coffeehouses or libraries. And I absolutely cannot work or think in harsh "cool white" lighting (3000K or lower; if you need me to divulge secrets, just put me in a room with two tubelights for thirty seconds). 
I know a lot of people (like my wife, a writer) who can work anywhere, and I may be a bit envious. I am also in the habit of pacing around and muttering things to myself while working, and these are not nice things to do at coffeehouses or libraries. I write all my posts in Markdown and use an old and heavily modded version of 11ty.js with several Markdown-it plugins and supported by quite a few and Node scripts to generate the HTML pages. Images are processed with Sharp. The blog theme is a mess of TSX and SASS files. All posts and code are in and GitHub. I build everything on my laptop and sync all the files to an S3 bucket that serves my blog through CloudFront. Not really. I've spent enough time monkeying with the design/structure and code that my setup now fits my needs like a bespoke suit. You can always nerd out over tooling, and it's a lot of fun, but I've suspended that in favor of using the tools. For the time being at least 😅 Now if my wife or a friend were starting a blog, I would absolutely recommend a platform like Bear. Anything simple, hosted, not creepy, and not run by greedy and/or awful people. It costs ~$5 a month. A giant part of that cost is the domain name. Zero revenue. No plans on 'growing' it or whatever; it's just my little garden on the internet. I have no problem with people monetising their blogs as long as the strategy they employ is respectful to visitors' privacy and unobtrusive to their experience. Patronage/memberships aside, The Deck comes to mind as an ad platform that achieved both these things very well. I do have my problems with platforms like Substack and might write a blog post about this later. Please interview Chris Glass! His lovely and popular blog is a huge inspiration for mine, layout and content, and he's been at it since at least 2003 IIRC. Another old favorite is Witold Riedel's log. I'm also really digging this blog I discovered recently. 
I just put up a small project I've wanted to do for a while: my own little curated digital gallery of art I've loved over the years. It was mostly a design exercise, but I thought I might use some LLM to discover some themes in why I love these works (or maybe you just love looking at things and don't really need to understand why). Other than that, I am so happy with what feels to me like a resurgence in personal blogging (here's a recent index of personal blogs from readers of Hacker News). Thank you for having me in your beautiful space and featuring several other lovely and interesting people! This is a fantastic project, Manu 🤗 Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 134 interviews. People and Blogs is possible because kind people support it.

Robin Moffatt 3 weeks ago

Interesting links - March 2026

I’ve had a huge amount of fun this month exploring quite what AI (in the form of Claude Code) can do for a data engineer. Rather than just hack around at a prompt, I took a bit more of a considered approach to it, building a harness to test out different prompts and skills. You can read my write-up here, the headline of which is that Claude Code literally isn’t going to replace data engineers (yet). I’ve also written up an AI Disclosure for my blog which I’ll keep up to date as my use of AI evolves, along with a sweary rant about why you basically have to get on board with AI if you value your career.

Jim Nielsen 3 weeks ago

Code as a Tool of Process

Steve Krouse wrote a piece that has me nodding along: Programming, like writing, is an activity, where one iteratively sharpens what they're doing as they do it. (You wouldn't believe how many drafts I've written of this essay.) There’s an incredible amount of learning and improvement, i.e. sharpening, to be had through the process of iteratively building something. As you bring each aspect of a feature into reality, it consistently confronts you with questions like, “But how will this here work?” And “Did you think of that there?” If you jump over the process of iteratively building each part and just ask AI to generate a solution, you miss the opportunity of understanding the intricacies of each part which amounts to the summation of the whole. I think there are a lot of details that never bubble to the surface when you generate code from English as it’s simply not precise enough for computers. Writing code is a process that confronts you with questions about the details. If you gloss over the details, things are going to work unexpectedly and users will discover the ambiguity in your thinking rather than you (see also: “bugs”). Writing code is a tool of process. As you go, it sharpens your thinking and helps you discover and then formulate the correctness of your program. If you stop writing code and start generating it, you lose a process which helped sharpen and refine your thinking. That’s why code generation can seem so fast: it allows you to skip over the slow, painful process of sharpening without making it obvious what you’re losing along the way. You can’t understand the trade-offs you’re making, if you’re not explicitly confronted with making them. To help me try to explain my thinking (and understand it myself), allow me a metaphor. Imagine mining for gold. There are gold nuggets in the hills. And we used to discover them by using pick axes and shovels. Then dynamite came along. Now we just blow the hillside away. 
Nuggets are fragmented into smaller pieces. Quite frankly, we didn’t even know if there were big nuggets or small flecks in the hillside because we just blasted everything before we found anything. After blasting, we take the dirt and process it until all we have left is a bunch of gold — most likely in the form of dust. So we turn to people, our users, and say “Here’s your gold dust!” But what if they don’t want dust? What if they want nuggets? Our tools and their processes don’t allow us to find and discover that anymore. Dynamite is the wrong tool for that kind of work. It’s great in other contexts. If you just want a bunch of dust and you’re gonna melt it all down, maybe that works fine. But for finding intact, golden nuggets? Probably not. It’s not just the tool that helps you, it’s the process the tool requires. Picks and shovels facilitate a certain kind of process. Dynamite another. Code generation is an incredible tool, but it comes with a process too. Does that process help or hurt you achieve your goals? It’s important to be cognizant of the trade-offs we make as we choose tools and their corresponding processes for working because it’s trade-offs all the way down. Reply via: Email · Mastodon · Bluesky
