
Adblocking ≠ Piracy

I told Kevin I was going to write this post since we were discussing this topic the other day. This is my half of the argument; maybe he'll write an "Adblocking = Piracy" post on his site if he finds the time between one meeting and the other. I am not the first person to write this post; I am sure I won't be the last. Plenty of people have expressed their opinion on this subject, and so far, no consensus has been reached (and I suspect never will).

For me, the reason why the two are not the same is very simple. When I pirate something (a game, a TV show, a movie, music, you name it), the original, legal, implied agreement was pretty straightforward: someone created something and put it up for sale, and if you want that something, you have to exchange money in order to get access to said something. There are no ambiguities here, and it's a fairly simple transaction. That's how most of society works. There's a more complex discussion we can have to figure out if piracy = stealing, but that's a separate discussion, and it's not relevant here.

With adblocking, on the other hand, the implied agreement is more complex. To start, while browsing the web, I don't know upfront if the link I'm about to click on has ads or not. So the argument that you shouldn't use adblockers because you have accepted to be served ads while consuming a specific piece of content is shaky at best in my view. I could see that argument being more valid if ads weren't displayed straight away, and I was given the option to leave the site before ads were displayed to me, but this is not what's happening on the web.

Then there's the issue of what being served an ad means. Do I have to watch the ad? Does it have to be displayed on my screen? If ads are displayed on the sidebar of your website, and I keep that portion of the browser outside my screen on purpose, is that adblocking? I'm literally not allowing the ads on my screen after all. If the ads load and I have a script that, after 100ms, replaces them with pictures of cats, is that ok? If I design an adblocker that grabs all the ads on your page and moves them all to the bottom of the page, and I never reach that portion of the site, is that ok?

The moment your data has reached my computer, I should be free to interact with it however I see fit. And if I decide to strip away most of the junk you sent my way, it's my right to do so, the same way it was my right to stand up and walk away or change channel when TV ads were running.

Adblocking is not piracy. And actually, I think more people should run adblockers. Actually, all people should run adblockers and force businesses to reconsider how they monetise their content. But I'll be curious to hear from the people who are in the "adblocking is piracy" camp. Kevin, go write that blog post.


(Un)portable defer in C

Modern system programming languages, from Hare to Zig, seem to agree that defer is a must-have feature. It's hard to argue with that, because defer makes it much easier to free memory and other resources correctly, which is crucial in languages without garbage collection.

The situation in C is different. There was an N2895 proposal by Jens Gustedt and Robert Seacord in 2021, but it was not accepted for C23. Now there's another N3734 proposal by JeanHeyd Meneide, which will probably be accepted in the next standard version. Since defer isn't part of the standard, people have created lots of different implementations. Let's take a quick look at them and see if we can find the best one.

C23/GCC • C11/GCC • GCC/Clang • MSVC • Long jump • STC • Stack • Simplified GCC/Clang • Final thoughts

C23/GCC

Jens Gustedt offers a brief version. This approach combines C23 attribute syntax ([[...]]) with GCC-specific features: nested functions and the cleanup attribute. It also uses the non-standard __COUNTER__ macro (supported by GCC, Clang, and MSVC), which expands to an automatically increasing integer value.

Nested functions and cleanup in GCC

A nested function (also known as a local function) is a function defined inside another function. Nested functions can access variables from the enclosing scope, similar to closures in other languages, but they are not first-class citizens and cannot be passed around like function pointers. The cleanup attribute runs a function when the variable goes out of scope. The function should take one parameter, which is a pointer to a type that's compatible with the variable. If the function returns a value, it will be ignored.

On the plus side, this version works just like you'd expect defer to work. On the downside, it's only available in C23+ and only works with GCC (not even Clang supports it, because of the nested function).

C11/GCC

We can easily adapt the above version to use C11. The main downside remains: it's GCC-only.

GCC/Clang

Clang fully supports the cleanup attribute, but it doesn't support nested functions. Instead, it offers the blocks extension, which works somewhat similarly. We can use Clang blocks to make a version that works with both GCC and Clang. Now it works with Clang, but there are several things to be aware of:

- We must compile with the blocks extension enabled (-fblocks) when using Clang.
- We must put a semicolon after the closing brace of the deferred block.
- If we need to modify a variable inside the block, the variable must be declared with __block.

On the plus side, this implementation works with both GCC and Clang. The downside is that it's still not standard C, and won't work with other compilers like MSVC.

MSVC

MSVC, of course, doesn't support the cleanup attribute. But it provides "structured exception handling" with the __try and __finally keywords. The code in the __finally block will always run, no matter how the __try block exits — whether it finishes normally, returns early, or crashes (for example, from a null pointer dereference). This isn't the defer we're looking for, but it's a decent alternative if you're only programming for Windows.

Long jump

There are well-known defer implementations by Jens Gustedt and moon-chilled that use setjmp and longjmp. I'm mentioning them for completeness, but honestly, I would never use them in production. The first one is extremely large, and the second one is extremely hacky. Also, I'd rather not use long jumps unless it's absolutely necessary. In Gustedt's library, all deferred statements run at the end of the guarded block, no matter how we exit the block (normally or through an early exit).

STC

The stc library probably has the simplest implementation ever: the deferred statement is passed as a macro argument and is used as the increment of a single-iteration for loop, while the "defer-aware" block of code is the loop body.
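A minimal sketch of that for-loop trick (my own macro name and code, not STC's actual implementation):

```c
#include <stdio.h>
#include <stdlib.h>

/* Run the block once; the deferred statement is the for-loop "increment",
 * so it runs right after the block body finishes. */
#define my_defer(deferred) \
    for (int _done = 0; !_done; _done = 1, (deferred))

int main(void) {
    char *buf = malloc(16);
    if (!buf) return 1;
    my_defer(free(buf)) {
        printf("using buf at %p\n", (void *)buf);
    } /* free(buf) runs here */
    return 0;
}
```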
Since the increment runs after the body, the deferred statement executes after the main code. This approach works with all mainstream compilers, but it falls apart if you try to exit early with return or break.

Stack

Dmitriy Kubyshkin provides a defer implementation that adds a "stack frame" of deferred calls to any function that needs them. This version works with all mainstream compilers. Also, unlike the STC version, defers run correctly in case of early exit. Unfortunately, there are some drawbacks:

- Defer only supports single-function calls, not code blocks.
- We always have to call a setup macro at the start of the function and exit through a special return macro.
- In the original implementation, Dmitriy overrides the return keyword, but this won't compile with strict compile flags (which I think we should always use).
- The deferred function runs before the return value is evaluated, not after.

Simplified GCC/Clang

The Stack version above doesn't support deferring code blocks. In my opinion, that's not a problem, since most defers are just "free this resource" actions, which only need a single function call with one argument. If we accept this limitation, we can simplify the GCC/Clang version by dropping GCC's nested functions and Clang's blocks. It works like a charm.

Final thoughts

Personally, I like the simpler GCC/Clang version better. Not having MSVC support isn't a big deal, since we can run GCC on Windows or use the Zig compiler, which works just fine. But if I really need to support GCC, Clang, and MSVC — I'd probably go with the Stack version. Anyway, I don't think we need to wait for defer to be added to the C standard. We already have defer at home!
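To make that concrete, here's a hedged sketch of what such a simplified, single-call defer can look like on GCC and Clang (illustrative names and macros, not the exact code from the post):

```c
#include <stdio.h>
#include <stdlib.h>

/* A deferred call: one function taking one void* argument. */
typedef struct {
    void (*fn)(void *);
    void *arg;
} defer_call;

static void defer_run(defer_call *d) { d->fn(d->arg); }

#define DEFER_CAT_(a, b) a##b
#define DEFER_CAT(a, b) DEFER_CAT_(a, b)

/* Register fn(arg) to run when the enclosing scope ends
 * (GCC and Clang, via the cleanup attribute). */
#define defer(fn, arg)                                      \
    __attribute__((cleanup(defer_run)))                     \
    defer_call DEFER_CAT(defer_, __COUNTER__) = { (fn), (arg) }

static void close_file(void *p) { fclose(p); }

int main(void) {
    FILE *f = fopen("/dev/null", "w");
    if (!f) return 1;
    defer(close_file, f);        /* runs on every exit path below */

    char *buf = malloc(16);
    if (!buf) return 1;          /* close_file(f) still runs here */
    defer(free, buf);            /* free matches void (*)(void *) */

    fputs("hello\n", f);
    return 0;                    /* free(buf), then close_file(f) */
}
```

An early return anywhere in the scope still triggers the registered calls, in reverse order of registration, which is exactly the behaviour you'd want from defer.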


An Interview with Benedict Evans About AI and Software

An interview with Benedict Evans about the crisis facing software, the future of the corporation, OpenAI, and the struggle to define the LLM paradigm.


Rewriting pycparser with the help of an LLM

pycparser is my most widely used open source project (with ~20M daily downloads from PyPI [1]). It's a pure-Python parser for the C programming language, producing ASTs inspired by Python's own ast module. Until very recently, it's been using PLY: Python Lex-Yacc for the core parsing. In this post, I'll describe how I collaborated with an LLM coding agent (Codex) to help me rewrite pycparser to use a hand-written recursive-descent parser and remove the dependency on PLY. This has been an interesting experience and the post contains lots of information and is therefore quite long; if you're just interested in the final result, check out the latest code of pycparser - the main branch already has the new implementation.

While pycparser has been working well overall, there were a number of nagging issues that persisted for years. I began working on pycparser in 2008, and back then using a YACC-based approach for parsing a whole language like C seemed like a no-brainer to me. Isn't this what everyone does when writing a serious parser? Besides, the K&R2 book famously carries the entire grammar of the C99 language in an appendix - so it seemed like a simple matter of translating that to PLY-yacc syntax. And indeed, it wasn't too hard, though there definitely were some complications in building the ASTs for declarations (C's gnarliest part).

Shortly after completing pycparser, I got more and more interested in compilation and started learning about the different kinds of parsers more seriously. Over time, I grew convinced that recursive descent is the way to go - producing parsers that are easier to understand and maintain (and are often faster!). It all ties in to the benefits of dependencies in software projects as a function of effort. Using parser generators is a heavy conceptual dependency: it's really nice when you have to churn out many parsers for small languages. But when you have to maintain a single, very complex parser, as part of a large project - the benefits quickly dissipate and you're left with a substantial dependency that you constantly grapple with.

And then there are the usual problems with dependencies; dependencies get abandoned, and they may also develop security issues. Sometimes, both of these become true. Many years ago, pycparser forked and started vendoring its own version of PLY. This was part of transitioning pycparser to a dual Python 2/3 code base when PLY was slower to adapt. I believe this was the right decision, since PLY "just worked" and I didn't have to deal with active (and very tedious in the Python ecosystem, where packaging tools are replaced faster than dirty socks) dependency management.

A couple of weeks ago this issue was opened for pycparser. It turns out that some old PLY code triggers security checks used by some Linux distributions; while this code was fixed in a later commit of PLY, PLY itself was apparently abandoned and archived in late 2025. And guess what? That happened in the middle of a large rewrite of the package, so re-vendoring the pre-archiving commit seemed like a risky proposition. On the issue it was suggested that "hopefully the dependent packages move on to a non-abandoned parser or implement their own"; I originally laughed this idea off, but then it got me thinking... which is what this post is all about.

The original K&R2 grammar for C99 had - famously - a single shift-reduce conflict having to do with dangling elses belonging to the most recent if statement.
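For reference, the dangling else is the classic case where a snippet like this (illustrative code) is grammatically ambiguous; C resolves it by attaching the else to the nearest unmatched if:

```c
if (a)
    if (b)
        do_this();
    else            /* belongs to `if (b)`, not to `if (a)` */
        do_that();
```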
And indeed, other than the famous lexer hack used to deal with C's type name / ID ambiguity, pycparser only had this single shift-reduce conflict. But things got more complicated. Over the years, features were added that weren't strictly in the standard but were supported by all the industrial compilers. The more advanced C11 and C23 standards weren't beholden to the promises of conflict-free YACC parsing (since almost no industrial-strength compilers use YACC at this point), so all caution went out of the window. The latest (PLY-based) release of pycparser has many reduce-reduce conflicts [2]; these are a severe maintenance hazard because it means the parsing rules essentially have to be tie-broken by order of appearance in the code. This is very brittle; pycparser has only managed to maintain its stability and quality through its comprehensive test suite. Over time, it became harder and harder to extend, because YACC parsing rules have all kinds of spooky-action-at-a-distance effects. The straw that broke the camel's back was this PR which again proposed to increase the number of reduce-reduce conflicts [3]. This - again - prompted me to think "what if I just dump YACC and switch to a hand-written recursive descent parser", and here we are.

None of the challenges described above are new; I've been pondering them for many years now, and yet biting the bullet and rewriting the parser didn't feel like something I'd like to get into. By my private estimates it'd take at least a week of deep heads-down work to port the gritty 2000 lines of YACC grammar rules to a recursive descent parser [4]. Moreover, it wouldn't be a particularly fun project either - I didn't feel like I'd learn much new and my interests have shifted away from this project. In short, the potential well was just too deep.

I've definitely noticed the improvement in capabilities of LLM coding agents in the past few months, and many reputable people online rave about using them for increasingly larger projects. That said, would an LLM agent really be able to accomplish such a complex project on its own? This isn't just a toy, it's thousands of lines of dense parsing code. What gave me hope is the concept of conformance suites mentioned by Simon Willison. Agents seem to do well when there's a very clear and rigid goal function - such as a large, high-coverage conformance test suite. And pycparser has a very extensive one. Over 2500 lines of test code parsing various C snippets to ASTs with expected results, grown over a decade and a half of real issues and bugs reported by users. I figured the LLM could either succeed or fail and throw its hands up in despair, but it's quite unlikely to produce a wrong port that would still pass all the tests. So I set it to run.

I fired up Codex in pycparser's repository, and wrote a first prompt just to make sure it understood me and could run the tests. Codex figured it out (I gave it the exact command, after all!); my next prompt was the real thing [5] - asking for the port to a hand-written recursive descent parser. Here Codex went to work and churned for over an hour. Having never observed an agent work for nearly this long, I kind of assumed it had gone off the rails and would fail sooner or later. So I was rather surprised and skeptical when it eventually came back reporting success. It took me a while to poke around the code and run it until I was convinced - it had actually done it! It wrote a new recursive descent parser with only ancillary dependencies on PLY, and that parser passed the test suite.
After a few more prompts, we've removed the ancillary dependencies and made the structure clearer. I hadn't looked too deeply into code quality at this point, but at least on the functional level - it succeeded. This was very impressive!

A change like the one described above is impossible to code-review as one PR in any meaningful way, so I used a different strategy. Before embarking on this path, I created a new branch, and once Codex finished the initial rewrite, I committed this change, knowing that I would review it in detail, piece by piece, later on. Even though coding agents have their own notion of history and can "revert" certain changes, I felt much safer relying on Git. In the worst case, if all of this went south, I could nuke the branch and it would be as if nothing ever happened. I was determined to only merge this branch into main once I was fully satisfied with the code. In what follows, I had to git reset several times when I didn't like the direction in which Codex was going. In hindsight, doing this work in a branch was absolutely the right choice.

Once I'd sufficiently convinced myself that the new parser was actually working, I used Codex to similarly rewrite the lexer and get rid of the PLY dependency entirely, deleting it from the repository. Then, I started looking more deeply into code quality - reading the code created by Codex and trying to wrap my head around it. And - oh my - this was quite the journey.

Much has been written about the code produced by agents, and much of it seems to be true. Maybe it's a setting I'm missing (I'm not using my own custom AGENTS.md yet, for instance), but Codex seems to be that eager programmer that wants to get from A to B whatever the cost. Readability, minimalism and code clarity are very much secondary goals. Using raise...except for control flow? Yep. Abusing Python's weak typing (like having None, False and other values all mean different things for a given variable)? For sure. Spreading the logic of a complex function all over the place instead of putting all the key parts in a single switch statement? You bet.

Moreover, the agent is hilariously lazy. More than once I had to convince it to do something it initially said was impossible, and even insisted again in follow-up messages. The anthropomorphization here is mildly concerning, to be honest. I could never imagine I would be writing something like the following to a computer, and yet - here we are: "Remember how we moved X to Y before? You can do it again for Z, definitely. Just try".

My process was to see how I could instruct Codex to fix things, and intervene myself (by rewriting code) as little as possible. I've mostly succeeded in this, and did maybe 20% of the work myself. My branch grew dozens of commits, falling into roughly these categories:

(1) The code in X is too complex; why can't we do Y instead?
(2) The use of X is needlessly convoluted; change Y to Z, and T to V in all instances.
(3) The code in X is unclear; please add a detailed comment - with examples - to explain what it does.

Interestingly, after doing (3), the agent was often more effective in giving the code a "fresh look" and succeeding in either (1) or (2). Eventually, after many hours spent in this process, I was reasonably pleased with the code. It's far from perfect, of course, but taking the essential complexities into account, it's something I could see myself maintaining (with or without the help of an agent). I'm sure I'll find more ways to improve it in the future, but I have a reasonable degree of confidence that this will be doable. It passes all the tests, so I've been able to release a new version (3.00) without major issues so far.
The only issue I've discovered is that some of CFFI's tests are overly precise about the phrasing of errors reported by pycparser; this was an easy fix. The new parser is also faster, by about 30% based on my benchmarks! This is typical of recursive descent when compared with YACC-generated parsers, in my experience. After reviewing the initial rewrite of the lexer, I spent a while instructing Codex on how to make it faster, and it worked reasonably well.

While working on this, it became quite obvious that static typing would make the process easier. LLM coding agents really benefit from closed loops with strict guardrails (e.g. a test suite to pass), and type annotations act as such. For example, had pycparser already been type annotated, Codex would probably not have overloaded values to multiple types (like None vs. False vs. others). In a followup, I asked Codex to type-annotate pycparser (running checks using ty), and this was also a back-and-forth because the process exposed some issues that needed to be refactored. Time will tell, but hopefully it will make further changes in the project simpler for the agent. Based on this experience, I'd bet that coding agents will be somewhat more effective in strongly typed languages like Go, TypeScript and especially Rust.

Overall, this project has been a really good experience, and I'm impressed with what modern LLM coding agents can do! While there's no reason to expect that progress in this domain will stop, even if it does - these are already very useful tools that can significantly improve programmer productivity. Could I have done this myself, without an agent's help? Sure. But it would have taken me much longer, assuming that I could even muster the will and concentration to engage in this project. I estimate it would take me at least a week of full-time work (so 30-40 hours) spread over who knows how long to accomplish. With Codex, I put in an order of magnitude less work into this (around 4-5 hours, I'd estimate) and I'm happy with the result.

It was also fun. At least in one sense, my professional life can be described as the pursuit of focus, deep work and flow. It's not easy for me to get into this state, but when I do I'm highly productive and find it very enjoyable. Agents really help me here. When I know I need to write some code and it's hard to get started, asking an agent to write a prototype is a great catalyst for my motivation. Hence the meme at the beginning of the post.

One can't avoid a nagging question - does the quality of the code produced by agents even matter? Clearly, the agents themselves can understand it (if not today's agent, then at least next year's). Why worry about future maintainability if the agent can maintain it? In other words, does it make sense to just go full vibe-coding? This is a fair question, and one I don't have an answer to. Right now, for projects I maintain and stand behind, it seems obvious to me that the code should be fully understandable and accepted by me, and the agent is just a tool helping me get to that state more efficiently. It's hard to say what the future holds here; it's going to be interesting, for sure.

There was also the lexer to consider, but this seemed like a much simpler job. My impression is that in the early days of computing, lex gained prominence because of strong regexp support which wasn't very common yet.
These days, with excellent regexp libraries existing for pretty much every language, the added value of lex over a custom regexp-based lexer isn't very high. That said, it wouldn't make much sense to embark on a journey to rewrite just the lexer; the dependency on PLY would still remain, and besides, PLY's lexer and parser are designed to work well together. So it wouldn't help me much without tackling the parser beast.


Writing an LLM from scratch, part 32b -- Interventions: gradient clipping

I'm still working on training the best GPT-2 small sized base model that I can with a number of FLOPs roughly equal to two days on my own machine -- my "extra credit" exercise after having worked through Sebastian Raschka's book "Build a Large Language Model (from Scratch)". In the last post I trained a baseline model -- one with the same architecture and almost the same training code as in the minimal training run in the book, just modified to run using DDP on an 8x A100 40 GiB/GPU machine in the cloud. There are a bunch of "interventions" I want to try to see if they'll make it better, as measured by the loss they get on a test set. I'll do a post for each intervention, and this is the first: gradient clipping.

In the training chart for the baseline model, you can see that there are three places where the loss suddenly spiked up, at around global steps 4,200, 13,000, and 23,000. There are a number of things that could cause loss spikes like that:

- A "bad batch" -- that is, one batch, or even one sequence in a batch, was massively different in structure to the others that the model had seen, so it just had much worse loss. That doesn't seem likely in this case, though: the numbers on the chart are averages over 617 global steps each, and it would take a truly pathological sequence to move the needle that much.
- Something weird in the optimiser. That's not something I understand well, but according to the various LLMs I'm working with, it's a possibility.
- Exploding gradients. This is my working hypothesis, and so in this post I'll try out gradient clipping, the normal solution to that problem.

Exploding gradients are common in RNNs, and also happen in LLMs like this one. I spent a bit of time reading around to find out how they happen, and the ah-ha moment came when I came across this post from Wanshun Wong. Not only is the post itself a good intro in terms of how it affects RNNs, but in the "further reading" at the end, there's some gold: Chapter 10.11 of [1] has a good overview of how gradient clipping works.

Now, I bought a copy of "Deep Learning" at the same time as I bought Raschka's book, but I'd only glanced through it. Now was the time to get it down from the shelf -- and, indeed, section 10.11.1 is all about clipping to handle exploding gradients. I'll put the explanation of how they happen into my own words, to see if I can clarify things (at least in my mind).

Normally, when we learn about gradient descent, it's illustrated with nice smooth loss charts like this imaginary one for a single-parameter model. We're told that we might start at point A. The gradient is quite high and negative, so we multiply it by our learning rate and subtract it from our parameter. That gets us to point B. This time around, the gradient is smaller as the curve is flatter there, so when we do the same -- multiply by LR and subtract -- we take a smaller step, and wind up at C. Rinse and repeat and we'll wind up near the minimum.

The problem is, what if the loss curve actually looks quite different: we start at A, with a small gradient, move a little to the right, and now we're at B, halfway down a cliff! The gradient is massive, and when we subtract it, even scaled by the learning rate, we can zoom off somewhere to the right -- maybe not even on the chart. Indeed, you can imagine a cliff that is so steep that it would have vertical portions -- negative infinite gradients in this case -- and no matter what your learning rate is, you'll wind up with an infinite parameter update and everything will break. It's hard to see how a model can continue training in a case like that.

Now, what can cause steep cliffs like that? The book says "strongly nonlinear functions, such as those computed by a recurrent neural net over many time steps". If you know about RNNs (I wrote about them if you'd like a summary), you'll remember that a single RNN might be quite shallow -- maybe three or four layers -- but when you're doing backpropagation, you run a number of inputs through, one after the other, work out the overall loss, and then "unroll" it to something similar to a "vanilla" neural net to do the backward pass.
To put that in concrete terms, a 3-layer neural network trained with a 100-element sequence would unroll to a 300-layer deep network. Every one of those layers has several operations, including (in the implementation I was looking at in my post above) a tanh. It's not surprising that there are cliffs in the loss landscape -- it's more surprising that there are any smooth bits!

Now in LLMs, we don't have that unrolling through time -- but our network is deep enough as it is. For the GPT-2 small model, disregarding the embeddings and the final output head, we have 12 Transformer layers, each of which is multiple matrix multiplications for attention, then a softmax, then another layer, and then a feed-forward... mapping precisely to the equivalent vanilla NN is hard, but I think you can treat each one as at least four layers, so we've got 48. And there are GELUs and logs and exps¹ dotted around, so again -- we should expect cliffs.

So if sometimes we'll get crazy gradients, what can we do about them? We clip them. Clipping gradients simply means that if they get larger than a particular number -- v, which we define -- we reduce them to that number. In other words, we have a cap on how big they can get. "Deep Learning" ("DL" from now on) suggests two ways to do it. Remember that while in the example above we only had one parameter -- on the X axis -- for the GPT-2 small LLM we're training, we have 163 million of them. So the gradients, instead of being one number, will be a 163M-long vector, one per parameter. The two ways to clip are:

- We clip element-wise. If any one of the gradients in the vector is larger than v, we reduce it to v.
- We clip based on the norm: the length of the gradient vector in -- in our case -- 163M-dimensional space. That sounds harder than it is -- it's really just an extension of the Pythagorean equation a² + b² = c² to multiple dimensions. If you want to work out the length of a vector (a, b) then you can use Pythagoras to work out c = √(a² + b²), and that generalises to any number of dimensions. So for our model we'd just square all 163M elements of the vector, sum those, and take the square root of the result, and that's the norm.² If the norm is greater than v, we just divide every element of the gradient vector by the norm and multiply the result by v, to produce a new gradient vector whose norm is v.

The second feels more elegant -- we're scaling all of the elements of the gradient vector by the same amount, so it still points in the same direction. Interestingly, though, DL says that the two methods "work similarly", which I'll read as "are pretty much the same in practice".

DL then goes on to say how infinite or not-a-number gradients should be handled. With the first way, clearly doing it naively would set every element in the gradient vector to v, which would make the total size (norm) of the update very large. With the second, it would be even worse -- we'd still wind up with completely junk gradients, because the norm would be infinite, and in Python inf / inf is nan, so we'd be applying gradients with NaNs in them at best. That would be likely to knock our model into unrecoverable territory, as any parameter that had that applied to it would be NaN forever. Their suggested solution is that if you get garbage gradients like that, you can take a random step -- that is, create a new gradient to apply that has the norm v but just points in a random direction. The idea is that this will move you away from the cliff-ridden part of the loss landscape where you've found yourself (more about that later), and things will continue nicely.

So, anyway, how to do this in practice? PyTorch has a function, clip_grad_norm_, and that's what's referenced in almost every bit of writing I've found about how to clip gradients. So I decided to use that, assuming it would do what was described in DL's second option and that it would do the random updates they suggest for non-finite gradients. (I was half-correct -- see later.) As to how to use it: if we had a normal training loop, where we were just using a normal optimiser, we would simply call it on the model's parameters after the backward pass and before the optimiser step, passing in the max value v from above as the clipping threshold. However, for our training code using Automatic Mixed Precision (AMP), it's a little more complicated -- but luckily, the AMP explainer we've been using has a section explaining what to do.
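Just to pin down the norm-based clipping in symbols before getting to the training loop (my notation, not the book's): with g the gradient vector over all 163M parameters and v the threshold,

$$ \|g\|_2 = \sqrt{\sum_i g_i^2}, \qquad g \leftarrow \begin{cases} g & \text{if } \|g\|_2 \le v \\ \dfrac{v}{\|g\|_2}\, g & \text{if } \|g\|_2 > v \end{cases} $$

so the clipped gradient keeps its direction, and its norm never exceeds v.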
Right now we have the standard AMP loop: scale the loss, call backward, and step the optimiser through the scaler. Per that explainer, we need to add an unscale-then-clip step before the optimiser step. That looks a bit weird; we're "unscaling" the gradients, then clipping them, then using the scaler to step the optimiser. You'd think that you'd need to "re-scale" the scaler after clipping the gradients -- to get back to where you started from before the optimiser step. From the help page I gather it keeps track of whether or not the gradients it has right now are currently scaled and handles them appropriately based on that state when it steps the optimiser.

Anyway, given that we know what the code looks like now, we need to implement it in a way that can be easily switched on for this experiment (and potentially in the future), but which also allows us to not use it if we don't want to. The best way with our setup is to make it a training option, with the clipping value extracted from the config file where we call it, and we can simply disable it in the function that we use to find the maximum micro-batch size for our current hardware, as all we're testing for there is memory usage -- we don't care if we're doing good updates. Here's the code delta for that, plus a bugfix to allow for config files without that setting in them.

But it would also be useful to be able to track when it "fired" -- that is, when we had to clip our gradients. Then we can see two things:

- Whether we actually did wind up clipping them and fixing those loss spikes
- Whether we were clipping at other times -- we don't want to be doing it unnecessarily.

Now, the docs for clip_grad_norm_ say that it returns the "[t]otal norm of the parameter gradients (viewed as a single vector)". It doesn't say whether that's before or after the clipping, but given that the return value would always be capped at the threshold if it was after, I'm going to guess that it returns the pre-clipping norm (ChatGPT agrees). So we can chart that; changes in these diffs: 1, 2, 3, 4.

So we now have code to clip gradients to a given norm size and to chart the gradient norms so that we know what they were before clipping. The question is, what should that clipping norm be? Some googling around suggested that there was no standard way of saying "for such-and-such a kind of model, gradients should be clipped at around x". For example, one commenter on this Reddit thread says "Common values are 1, 3, 5, 8, 10", and likewise the sample code in this tutorial has 1, as does this one. So my initial thought was, let's just use 1. But then I wondered, what actually are the gradient norms that we're getting in normal training?

I decided to run a local short train on 3m tokens (a thousandth of the full training set, taking just less than four minutes) with very frequent checkpointing, and gradient clipping set to 1, and see what happened. You can see that the "grad max" line is almost always above the "grad clip" -- we're almost always clipping. This doesn't sound right. It looked like the range of the grad max was generally between 1.1 and a little above 3, so I set the clipping norm to 3.5 and did another train. Our loss is about the same, but we're no longer clipping -- and that's what we want; there was no evidence of exploding gradients for that short run -- just big updates near the start, as you'd expect. I then ran the same with no gradient clipping at all, and got exactly the same shape for the loss chart as I did with gradient clipping at 3.5, and the same final loss -- that's a good signal that clipping is not affecting the train when we stay inside the limit, which is exactly what we want.

So, it was time to train our model!
I kicked off the train, and after a little while, I looked at the training chart, which is updated dynamically as the model trains. You can see the dotted green lines, both the light one and the dark one -- that is, the "grad max" and the "grad avg" -- disappear starting just before global step 4,000, only coming back at about 5,500 -- that is, these were not plotted for global steps 4,319 and 4,936, even though the loss was. What was going on? I took a look at the checkpoint meta file for the first of those to see what the actual numbers were, and they were logged as infinite. Aha! The PyPlot code I was using could not handle infinite values, which is entirely reasonable. That was easy enough to fix, though -- I just replaced positive infinity by 1,000,000 and negative infinity by -1,000,000, and then (in the interest of getting a proper from-scratch run) kicked everything off from the beginning.

That training run completed, and its chart is a little hard to read, but if you look closely at the green lines, you can see that there are seven periods where gradients were either very large or infinite. Weirdly, though, out of the seven, two of them were two checkpoint periods long (that is, two periods of 617 global steps). That felt weird, though of course we're looking at the maximum gradient norm and the average gradient norm -- so two single infinite/high-gradient steps in successive 617-step periods would lead to that effect.

What was even stranger, though, was that if you look at the training chart for the run with no gradient clipping, we have only three loss spikes rather than seven -- though it's also very noticeable that the gradient-clipped run had only two small loss spikes, unlike the three larger ones in the unclipped run. The training loss the gradient-clipped run reported at the end was better, too, versus 3.743 at the end of the baseline train.

So it was time to download it, and run the sequence-completion smoke test. Coherent enough! Next, we evaluate it against our held-back test set. So, the loss had gone down -- but only from 3.743 to 3.678, a reduction of 0.065, or about 1.7%. That's not actually all that bad! After all, in my initial experiments on my local machine, training for a Chinchilla-optimal number of tokens from FineWeb-Edu (rather than the regular FineWeb I'm using now) got a loss of 4.167 on the same dataset (weirdly worse with the more-curated training set), and training for a further Chinchilla-optimal number of tokens only brought that down to 4.135, for a difference of 0.032, or 0.7%. It's not strictly comparable due to the different training sets, but speaking very loosely, we could say that gradient clipping for this train had more effect than doubling the training time for the other one. That's pretty nifty.

But the question remained: why those long periods of high gradients, even with gradient clipping? And why were there still loss spikes -- in particular the one just before global step 12,000, which lasted for two checkpoint periods? Remember that when I started the first run of this train, and got the chart with the missing bits, it was because the logged grad max and grad avg were infinite. What happens when clip_grad_norm_ gets an infinite gradient -- either one that has an infinity as one of its components, or one that (due to numerical overflow) winds up with a norm of infinity anyway?
I'd been kind of assuming that it did what the authors described in "Deep Learning" -- a random update of norm v -- given that the book stated pretty confidently that you "can" do it but then appeared to consider the topic closed. But it doesn't! If you check that link to the docs, you'll see that it has a parameter, error_if_nonfinite, which is False by default. If it's set to True, clip_grad_norm_ will raise an exception if the norm is positive or negative infinity, or if it's not a number -- which catches both the infinite component and the norm overflow cases above. But if it's not set -- and we weren't setting it -- and the norm or the gradients are non-finite, then clip_grad_norm_ will essentially give you garbage gradients. Depending on the exact cause, elements will either be infinities of one sign or another, or NaNs. And if these are added to parameters, then those parameters will become garbage too.

Now that leads to the question: given that we know that somewhere in the period between the checkpoint at global step 4,319 and the previous one at 3,702 there was an infinite norm at some point, how on earth did the model manage to continue training after that? Loss went up at around the same time, but it wasn't completely broken as it would have been with NaNs or infinities in its parameters. Obscurely enough, the answer turned out to be in the AMP explainer, in a comment in one of the bits of example code. Regarding the GradScaler class we're using, the comment notes that the scaler skips the optimiser step entirely if the gradients contain infs or NaNs.

So what was happening was that the scaler -- something we introduced into our code to get a speedup by using 16-bit floats instead of 32-bit whenever PyTorch thought it would make sense -- was protecting us against infinite and NaN gradients as a side-effect. It was skipping updates that would have polluted our weights with bad values from non-finite gradients.

If the above comes across as a little frustrated, then it's because I am a bit! From a software engineering viewpoint, this situation really does feel a bit like a rather messy part of the API. There are three things that it's reasonable for a library to do with infinite/NaN gradients:

- Blindly apply them and expect the developer to sanitise their inputs.
- Raise an error.
- Take some kind of default sane action, like skipping the update.

Now, if we look at that error_if_nonfinite parameter, we can see that the first two of those cases are handled there; and the developer can choose which option to follow. It's not where I'd personally put it (the step function on the optimiser seems more natural) and I think I'd probably set the default to True too, but I can also imagine good reasons for it being the way it is -- backward compatibility for one. But the "skip non-finite gradients" being a (not even optional!) behaviour that is on a class designed for handling mixed-precision training just seems outright bonkers. I would be surprised if there weren't people out there who've spent days trying to work out why their training runs failed catastrophically when they decided to switch from mixed-precision to "full fat" 32-bit floats, not realising that a hardly-even-documented feature of the scaler³ had been saving them from gradient issues previously.

Anyway, rant over. What does this all mean? There are three ways a gradient can explode:

- It can get very large, still be finite, and have a finite norm.
- It can get very large, still be finite, but have an infinite norm (eg. due to numerical overflow).
- It can become infinite -- that is, at least one of the parameters' gradients is infinite (which of course means an infinite norm regardless of any numerical stuff).

With both the baseline code and our new code, the scaler was saving us from the last two of those, by skipping the optimiser steps with non-finite gradients. However, the baseline run was not protected against the first kind -- large but finite gradients with a finite norm -- while this run was protected. What I'm almost certain is happening here is that in all of my training runs so far, there have been all three kinds of issues with exploding gradients.
The scaler, which, again, we introduced for faster training, happened to be saving us from the infinite gradients/norms. But we were still being bitten by the finite but excessively large ones. And that, I think, is why this training run had a positive -- not huge, but certainly worthwhile -- effect on the test set loss. If I had more time, I think I'd do another run, logging all three of those categories of error to see how frequent they are, and charting the result.

That might go some way to explaining the final question I had here: why is it that the renowned "Deep Learning" suggests a random update to get away from the cliff where you've found yourself, while we seem to be getting away with just skipping the update, which is much simpler? Well, the book was written in 2016, and I guess rather a lot has changed in the last 10 years :-) My guess is that their solution might have been a solid default in the age of RNNs, but might not make so much sense with the kind of models we're training these days.

I think I can see a way in which that makes sense. Think of the illustration of a loss "cliff" in a one-parameter world that we had at the start of this post. If you happen to wind up on that cliff, you're in trouble. But imagine a two-parameter model -- the line of the loss function becomes a surface. Just as in the real world you might be able to walk along the edge at the top of a cliff and find a nice easy slope down next to it, you can imagine that the cliff in the two-parameter case might be less of a problem because you don't need to be lucky enough to jump down it -- you can walk around it. Extrapolating examples like this to higher dimensions is risky, but I think it should hold that the more dimensions you're working with, the less likely it is that a cliff is an issue -- you're more likely to be able to find a way around it. I've heard a very similar argument made for why local minima are less of an issue with lots of parameters. It's certainly worth saying that this is far from a mathematical proof, but I think it's a decent grounding for intuition.

Now think about an RNN. Although you're doing back-propagation through time over what amounts to a very deep network, there aren't actually all that many parameters, certainly compared to an LLM like this. Each parameter is involved in the back-propagation multiple times. So, thinking of it that way, the gradient vector for the RNNs they were dealing with was of much lower dimensionality than the ones we're dealing with, even for this tiny model. They say that the random step "will typically move away from the numerically unstable configuration". I'm probably playing fast and loose here, but I'll take that as something like: if you wound up on a cliff, you were likely in a very "cliffy" area of the loss landscape. "Teleporting" randomly to somewhere some distance away was a sensible way to handle that. In our situation, even if the area is "cliffy" in the direction that one particular batch might push us, we have so many extra dimensions that it may well be that it won't be so bad with the next one. So just skipping the problematic update -- under all of those assumptions -- seems a perfectly reasonable way to handle it.

All of this, BTW, made me think back to validation loss.
In our previous training runs, where we were measuring it just before each checkpoint, its spikes were in general correlated with but not identical to spikes in training loss. Now, of course, exploding gradients don't have to be related to high training loss -- there's enough non-linearity in there that we can treat them as being completely uncorrelated, I think. But you definitely would expect them to have an effect on validation loss if applied. Disregarding the infinite ones (which were being filtered out anyway), the very high ones that we are now clipping would, in the unclipped baseline train, seem very likely to have caused validation loss spikes. So: if I hadn't stripped that out, we would likely have been able to see a clear difference in the validation loss line between clipped and unclipped. That would have been useful! I'm not going to re-introduce it, though. Best to keep the number of code changes to a minimum if I'm trying to compare like with like over the course of these intervention tests.

I think that's enough for gradient clipping. I may come back and do the experiment another time to see what the relative ratios of the different kinds of problematic gradients are. Are there parts of the train where we get lots of them as a percentage (ie. we're somewhere "cliffy" in the loss landscape)? How many infinite gradient vs infinite norm vs big-but-not-infinite instances do we have relative to each other, and to normal gradient updates? What do we see if we have validation loss? And so on. But for now: gradient clipping definitely helps, and goes on the positive interventions list!

I'm thinking I'll see what happens with switching off dropout next. That should at least be a bit easier... Stay tuned!

¹ Oh my.

² Technically the L2 norm -- if you used cubes/cube root it would be L3, and likewise for the power of four and L4 and so on. But the L2 is the one used for gradient clipping.

³ Shades of Douglas Adams, really: "But the plans were on display..." "On display? I eventually had to go down to the cellar to find them." "That's the display department." "With a flashlight." "Ah, well, the lights had probably gone." "So had the stairs." "But look, you found the notice, didn't you?" "Yes," said Arthur, "yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard'."

[1] I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning (2016), MIT Press.


Heritability of intrinsic human life span is about 50% when heritability is redefined to be something completely different

How heritable is hair color? Well, if you're a redhead and you have an identical twin, they will definitely also be a redhead. But the age at which twins go gray seems to vary a bit based on lifestyle. And there's some randomness in where melanocytes end up on your skull when you're an embryo. And your twin might dye their hair! So the correct answer is, some large number, but less than 100%.

OK, but check this out: Say I redefine "hair color" to mean "hair color except ignoring epigenetic and embryonic stuff and pretending that no one ever goes gray or dyes their hair et cetera". Now, hair color is 100% heritable. Amazing, right?

Or—how heritable is IQ? The wise man answers, "Some number between 0% and 100%, it's not that important, please don't yell at me." But whatever the number is, it depends on society. In our branch of the multiverse, some kids get private tutors and organic food and $20,000 summer camps, while other kids get dysfunctional schools and lead paint and summers spent drinking Pepsi and staring at glowing rectangles. These things surely have at least some impact on IQ. But again, watch this: Say I redefine "IQ" to be "IQ in some hypothetical world where every kid got exactly the same school, nutrition, and parenting, so none of those non-genetic factors matter anymore." Suddenly, the heritability of IQ is higher. Thrilling, right? So much science.

If you want to redefine stuff like this… that's not wrong. I mean, heritability is a pretty arbitrary concept to start with. So if you prefer to talk about heritability in some other world instead of our actual world, who am I to judge?

Incidentally, a recent paper does exactly this kind of thing. I stress that this is a perfectly OK paper. I'm picking on it mostly because it was published in Science, meaning—like all Science papers—it makes grand claims but is woefully vague about what those claims mean or what was actually done. Also, publishing in Science is morally wrong and/or makes me envious. So I thought I'd try to explain what's happening. It's actually pretty simple. At least, now that I've spent several hours reading the paper and its appendix over and over again, I've convinced myself that it's pretty simple. So, as a little pedagogical experiment, I'm going to try to explain the paper three times, with varying levels of detail.

The normal way to estimate the heritability of lifespan is using twin data. Depending on what dataset you use, this will give 23-35%. This paper built a mathematical model that tries to simulate how long people would live in a hypothetical world in which no one dies from any non-aging-related cause, meaning no car accidents, no drug overdoses, no suicides, no murders, and no (non-age-related) infectious disease. On that simulated data, for simulated people in a hypothetical world, heritability was 46-57%.

Everyone seems to be interpreting this paper as follows: "Aha! We thought the heritability of lifespan was 23-35%. But it turns out that it's around 50%. Now we know!" I understand this. Clearly, when the editors at Science chose the title for this paper, their goal was to lead you to that conclusion. But this is not what the paper says. What it says is this: "We built a mathematical model of an alternate universe in which nobody died from accidents, murder, drug overdoses, or infectious disease. In that model, heritability was about 50%."

Let's start over. Here's figure 2 from the paper. Normally, heritability is estimated from twin studies.
The idea is that identical twins share 100% of their DNA, while fraternal twins share only 50%. So if some trait is more correlated among identical twins than among fraternal twins, that suggests DNA influences that trait. There are statistics that formalize this intuition. Given a dataset that records how long various identical and fraternal twins lived, these produce a heritability number. Two such traditional estimates appear as black circles in the above figures. For the Danish twin cohort, lifespan is estimated to be 23% heritable. For the Swedish cohort, it's 35%.

This paper makes a "twin simulator". Given historical data, they fit a mathematical model to simulate the lifespans of "new" twins. Then they compute heritability on this simulated data. Why calculate heritability on simulated data instead of real data? Well, their mathematical model contains an "extrinsic mortality" parameter, which is supposed to reflect the chance of death due to all non-aging-related factors like accidents, murder, or infectious disease. They assume that the chance someone dies from any of this stuff is constant over people, constant over time, and that it accounts for almost all deaths for people aged between 15 and 40.

The point of building the simulator is that it's possible to change extrinsic mortality. That's what's happening in the purple curves in the above figure. For a range of different extrinsic mortality parameters, they simulate datasets of twins. For each simulated dataset, they estimate heritability just like with a real dataset. Note that the purple curves above nearly hit the black circles. This means that if they run their simulator with extrinsic mortality set to match reality, they get heritability numbers that line up with what we get from real data. That suggests their mathematical model isn't totally insane. If you decrease extrinsic mortality, then you decrease the non-genetic randomness in how long people live. So heritability goes up. Hence, the purple curves go up as you go to the left.

My explanation of this paper relies on some amount of guesswork. For whatever reason, Science has decided that papers should contain almost no math, even when the paper in question is about math. So I'm mostly working from an English description. But even that description isn't systematic. There's no place in the paper where they clearly lay out all the things they did, in order. Instead, you get little hints, sort of randomly distributed throughout the paper. There's an appendix, which the paper confidently cites over and over. But if you actually read the appendix, it's just more disconnected explanations of random things, except now with equations set in glorious Microsoft Word format.

Now, in most journals, authors write everything. But Science has professional editors. Given that every single statistics-focused paper in Science seems to be like this, we probably shouldn't blame the authors of this one. (Other than for their decision to publish in Science in the first place.) I do wonder what those editors are doing, though. I mean, let me show you something. In the first paragraph where they start to actually explain what they actually did, on the first page, the text ends with an h(t,θ). What the hell is that, you ask? That's a good question, because it was never introduced before this and is never mentioned again. I guess it's just supposed to be f(t,θ), which is fine. (I yield to none in my production of typos.)
But if paying journals ungodly amounts of money brought us to this, of what use are those journals?

Probably most people don't need this much detail and should skip this section. For everyone else, let's start over one last time.

The "normal" way to estimate heritability is by looking at correlations between different kinds of twins. Intuitively, if the lifespans of identical twins are more correlated than the lifespans of fraternal twins, that suggests lifespan is heritable. And it turns out that one estimator for heritability is "twice the difference between the correlation among identical twins and the correlation among fraternal twins, all raised together." There are other similar estimators for other kinds of twins. These normally say lifespan is perhaps 20% to 35% heritable.

This paper created an equation to model the probability a given person will die at a given age. The parameters of the equation vary from person to person, reflecting that some of us have DNA that predisposes us to live longer than others. But the idea is that the chances of dying are fairly constant between the ages of 15 and 40, after which they start increasing. This equation contains an "extrinsic mortality" parameter. This is meant to reflect the chance of death due to all non-aging-related factors like accidents or murder, etc. They assume this is constant. (Constant with respect to people and constant over time.) Note that they don't actually look at any data on causes of death. They just add a constant risk of death that's shared by all people at all ages to the equation, and then they call this "extrinsic mortality".

Now remember, different people are supposed to have different parameters in their probability-of-death equations. To reflect this, they fit a Gaussian distribution (bell curve) to the parameters with the goal of making it fit with historical data. The idea is that if the distribution over parameters were too broad, you might get lots of people dying at 15 or living until 120, which would be wrong. If the distribution were too concentrated, then you might get everyone dying at 43, which would also be wrong. So they find a good distribution, one that makes the ages people die in simulation look like the ages people actually died in historical data.

Right! So now they have:

- An equation that's supposed to reflect the probability a given person dies at a given age.
- A distribution over the parameters of that equation that's supposed to produce population-wide death ages that look like those in real historical data.

Before moving on, I remind you of two things:

- They assume their death equation entirely determines the probability someone will die in a given year.
- They assume that the shape of someone's death equation is entirely determined by genetics.

That is, the event of a person dying at a given age is random, but the probability that this happens is assumed to be fixed and determined by genes and genes alone.

Now they simulate different kinds of twins. To simulate identical twins, they just draw parameters from their parameter distribution, assign those parameters to two different people, and then let them randomly die according to their death equation. (Is this getting morbid?) To simulate fraternal twins, they do the same thing, except instead of giving the two twins identical parameters, they give them correlated parameters, to reflect that they share 50% of their DNA. How exactly do they create those correlated parameters? They don't explain this in the paper, and they're quite vague in the supplement. As far as I can tell they sample two sets of parameters from their parameter distribution such that the parameters are correlated at a level of 0.5.

Now they have simulated twins. They can simulate them with different extrinsic mortality values. If they lower extrinsic mortality, heritability of lifespan goes up. If they lower it to zero, heritability goes up to around 50%.
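To put the pipeline into symbols (my notation, not the paper's actual equations, so treat this as a sketch of the idea): each simulated person's chance of dying at age t is something like

$$ f(t, \theta) \;=\; \underbrace{m}_{\text{extrinsic mortality, constant}} \;+\; \underbrace{g(t, \theta)}_{\text{genetic part, rising with age}} $$

with θ drawn from the fitted distribution (identical parameters for identical twins, parameters correlated at 0.5 for fraternal twins), and heritability is then computed on the simulated lifespans with the usual twin estimator,

$$ \hat{h}^2 \;=\; 2\,\big(r_{\text{identical}} - r_{\text{fraternal}}\big). $$

Lowering m is the knob that produces the purple curves, and setting m to zero is the "no accidents, no murders" world.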
Almost all human traits are partly genetic and partly due to the environment and/or random. If you could change the world and reduce the amount of randomness, then of course heritability would go up. That’s true for life expectancy just like for anything else. So what’s the point of this paper? There is a point! Sure, obviously heritability would be higher in a world without accidents or murder. We don’t need a paper to know that. But how much higher? It’s impossible to say without modeling and simulating that other world. Our twin datasets are really old. It’s likely that non-aging-related deaths are lower now than in the past, because we have better healthcare and so on. This means that the heritability of lifespan for people alive today may be larger than it was for the people in our twin datasets, some of whom were born in 1870. We won’t know for sure until we’re all dead, but this paper gives us a way to guess. Have I mentioned that heritability depends on society? And that heritability changes when society changes? And that heritability is just a ratio and you should stop trying to make it be a non-ratio because only-ratio things cannot be non-ratios? This is a nice reminder. Honestly, I think the model the paper built is quite clever. Nothing is perfect, but I think this is a pretty good run at the question of “how high would the heritability of lifespan be if extrinsic mortality were lower?” I only have two objections. The first is to the Science writing style. This is a paper describing a statistical model. So shouldn’t there be somewhere in the paper where they explain exactly what they did, in order, from start to finish? Ostensibly, I think this is done in the left-hand column on the second page, just with little detail because Science is written for a general audience. But personally I think that description is the worst of all worlds. Instead of giving the high-level story in a coherent way, it throws random technical details at you without enough information to actually make sense of them. Couldn’t the full story with the full details at least be in the appendix? I feel like this wasted hours of my time, and that if someone wanted to reproduce this work, they would have almost no chance of doing so from the description given. How have we as a society decided that we should take our “best” papers and do this to them? But my main objection is this: At first, I thought this was absurd. The fact that people die in car accidents is not a “confounding factor”. And pretending that no one dies in car accidents does not “address” some kind of bias. That’s just computing heritability in some other world. Remember, heritability is not some kind of Platonic form. It is an observational statistic. There is no such thing as “true” heritability, independent of the contingent facts of our world. But upon reflection, I think they’re trying to say something like this: Heritability of intrinsic human lifespan is about 50% when extrinsic mortality is adjusted to be closer to modern levels. The problem is: I think this is… not true? Here are the actual heritability estimates in the paper, varying by dataset (different plots), the cutoff year (colors), and extrinsic mortality (x-axis). When extrinsic mortality goes down, heritability goes up. So the obvious question is: What is extrinsic mortality in modern people? This is a tricky question, because “extrinsic mortality” isn’t some simple observational statistic. It is a parameter in their model. (Remember, they never looked at causes of death.)
So it’s hard to say, but they seem to suggest that extrinsic mortality in modern people is 0.001 / year, or perhaps a bit less. The above figures have the base-10 logarithm of extrinsic mortality on the x-axis. And the base-10 logarithm of 0.001 is -3. But if you look at the curves when the x-axis is -3, the heritability estimates are not 50%. They’re more like 35-45%, depending on the particular model and age cutoff. So here’s my suggested title: Heritability of intrinsic human lifespan is about 40% when extrinsic mortality is adjusted to modern levels, according to a simulation we built. There might be a reason I don’t work at Science.

0 views

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

For decades, software companies commanded premium pricing not only for their data, but for their interfaces. The specialized keyboards. The Excel integrations. The workflow automations. Users spent years mastering these systems. Companies built processes hardcoded to specific tools. Switching meant massive productivity loss. The interface WAS the product. I haven’t used Google in a year. An LLM chat is my browser. Soon, knowledge workers won’t use specialized software interfaces either. The LLM chat will be their interface to everything. This isn’t incremental change. This is the completion of Ben Thompson’s Aggregation Theory. In this article: Why Aggregation Theory left suppliers with one critical asset: their interface How vertical software built empires on workflow complexity, not data Why LLMs absorb the interface layer entirely When interfaces are commoditized, it’s API versus API Valuation Framework: the math is brutal Who wins, who loses, and what comes next Ben Thompson’s framework reshaped how we think about internet economics. The value chain was simple: Suppliers → Distributors → Consumers. Pre-internet, high distribution costs created leverage for distributors. TV networks controlled what content got aired. Newspapers decided which stories mattered. Retailers chose which products reached shelves. Then distribution costs collapsed to zero. Transaction costs followed. Power shifted from distributors to a new species: aggregators. The classic aggregators emerged: Google aggregated websites via search. Facebook aggregated content via social graph. Amazon aggregated merchants via marketplace. Uber and Airbnb aggregated physical supply via mobile apps. Thompson identified the virtuous cycle: Better UX → More users → More suppliers → Better UX. The aggregator wins by owning the consumer relationship, commoditizing suppliers until they become interchangeable. THE WEB 2.0 AGGREGATION STACK But suppliers retained two critical assets. Their interface and their data. The paradox of Web 2.0 aggregation was structural. Google commoditized discovery. When you search “best Italian restaurant SF,” you don’t care which site ranks #1. The source is fungible. But you still visit that site. You see their brand. You experience their UX. You navigate their reservation system. This created a hard limit on commoditization: Discovery: Commoditized (Google owns it) Interface: Protected (suppliers own it) Data: Protected (suppliers own it) The interface layer mattered for four reasons: Brand persistence: Users saw the New York Times, not just “a news source.” Brand equity survived aggregation. UX differentiation: Suppliers could compete on design, speed, features. A better interface meant higher conversion. Switching costs: Users developed muscle memory, workflow habits. Learning a new system had real friction. Monetization control: Suppliers owned their conversion funnels. They controlled the paywall, the checkout, the subscription flow. Vertical software is the perfect case study. Financial data terminals, legal research platforms, medical databases, real estate analytics, recruiting tools. They all pull from data that’s largely commoditized or licensable. Yet they command premium pricing. Why? Because the interface IS the moat. THE INTERFACE MOAT IN VERTICAL SOFTWARE Same data. Different interfaces. Premium pricing. Knowledge workers spent years learning specialized interfaces. The muscle memory is real. They’re not paying for data.
They’re paying to not relearn a workflow they’ve spent a decade mastering. Companies built models and processes hardcoded to specific plugins. Changing providers means rebuilding workflows, retraining teams, risking errors during the transition. Switching costs weren’t about data. They were about the interface. This is why vertical software traded at 20-30x earnings. The market believed the interface was defensible. But is it today? Subscribe now LLMs don’t just aggregate suppliers. They absorb the interface itself. When LLMs commoditize the interface, what’s left? Just the data. And then it’s API against API. Pure commodity competition. The three-layer collapse: What changes structurally: THE VISIBILITY COLLAPSE Users never see the supplier’s brand Users never experience the supplier’s UX Users don’t know where information originated The entire web becomes a backend database Consider a knowledge worker today using specialized vertical software. They open the application. Navigate to the screening tool. Set parameters. Export to Excel. Build a model. Run scenarios. Each step involves interacting with the software’s interface. Each step reinforces the switching cost. Now consider a knowledge worker with an LLM chat: “ Show me all software companies with >$1B market cap, P/E under 30, growing revenue >20% YoY. “ “ Build a DCF model for the top 5. “ “ Run sensitivity analysis on discount rate.” The user never touched any specialized interface. They don’t know (or care) which data provider the LLM queried. The LLM found the cheapest available source with adequate coverage. This is complete commoditization. Not just of discovery, but of the entire supplier experience. When interfaces are commoditized, all that remains is API versus API. What happens to pricing power when interfaces disappear: The old model (vertical software): $10-25K/seat/year Multi-year contracts with annual escalators 95%+ retention because switching means retraining Gross margins >80% The new model: Data licensing fees (pennies per query) No user lock-in (LLM can switch sources instantly) Margin compression to commodity levels Retention based purely on data quality and coverage The math is brutal. If a vertical software company’s interface was 60% of their value, and LLMs eliminate interface value entirely, what remains is pure data value. And if that data isn’t proprietary, if it can be licensed or replicated, there’s nothing left. VALUE DECOMPOSITION If no proprietary data you are in big trouble. This is Aggregation Theory applied to its logical conclusion. Look at financial data software. Companies that built empires on interface complexity are watching their moats evaporate. A $20B market cap company with no truly proprietary data should trade at $5-8B once LLMs absorb their interface value. That’s not a bear case. That’s math. The same logic applies everywhere interfaces created moats: Financial data : Terminals that charge $12-24K/year for interfaces over largely commoditized data feeds. When an LLM can query the same data directly, the interface premium evaporates. Legal research : Platforms charging premium prices for interfaces over case law that’s largely public domain. The specialized search and citational tools become worthless when an LLM can do it better. Medical databases : Clinical decision support tools that charge physicians for point-of-care recommendations. Exactly what LLMs excel at. Real estate analytics : Comprehensive databases accessed through specialized workflow tools. 
LLMs querying the same data through APIs eliminate the workflow lock-in. Recruiting : Search and outreach tools charging $10K+/year. When an LLM can query professional networks and draft personalized outreach, the interface value disappears. The only survivors: companies with truly proprietary data that cannot be replicated or licensed. If interfaces are irrelevant, what do suppliers need? The old stack: Frontend framework (React, Vue) Design system (component library) UX research (user testing, A/B tests) Brand marketing (differentiation) SEO optimization (Google discovery) The new stack: Clean, structured data (markdown, JSON) API/MCP endpoints (machine accessibility) Data quality monitoring (accuracy, freshness) That’s it. All software becomes API. A restaurant today invests in a beautiful website with parallax scrolling, professional food photography, reservation system integration, review management, local SEO. All to make humans want to click “Book Now.” A restaurant in the LLM era needs: # Bella Vista Italian Restaurant ## Location: 123 Main St, San Francisco ## Hours: Mon-Thu 5-10pm, Fri-Sat 5-11pm ## Menu: - Margherita Pizza: $22 - Spaghetti Carbonara: $24 ## Reservation API: POST /book {date, time, party_size} That’s everything an LLM needs. The $50K website becomes a text file and an API endpoint. Vertical software’s beautiful interfaces become: MCP endpoint: /query Parameters: {filters, fields, format} Returns: [structured data] No keyboard shortcuts to learn. No plugins to install. No interface to build. Just data, accessible via API. Subscribe now Traditional REST APIs had structural limitations that preserved switching costs: Rigid schemas requiring exact field names Extensive documentation humans had to read Bespoke integration for every service Stateless interactions without conversation context This created a moat: integration effort. Even if data was commoditized, the cost of switching APIs was non-trivial. Someone had to write new code, test edge cases, handle errors differently. MCP changes this. Model Context Protocol eliminates integration friction: When switching between data sources requires zero integration work, the only differentiator is data quality, coverage, and price. This is true commodity competition. SWITCHING COST COLLAPSE The New Aggregation Framework Reframing Thompson’s model for the LLM era: AGGREGATION EVOLUTION Original Aggregation Theory (2015): Suppliers → [Aggregator] → Consumers The aggregator (Google/Facebook) achieved zero distribution cost, zero transaction cost, and commoditized suppliers. But suppliers kept their interface and their data. LLM Aggregation Theory (2025): APIs → [LLM Chat] → Consumers The LLM achieves zero distribution cost, zero transaction cost, AND zero interface cost. Complete supplier invisibility. What remains is API versus API. The aggregator layer gets thicker while the supplier layer gets thinner . In Web 2.0, Google was a thin routing layer. It pointed you to suppliers who owned your attention once you clicked. The supplier had the relationship. The supplier had the interface. The supplier converted you. In the LLM era, the chat owns your entire interaction. Suppliers are invisible infrastructure. You don’t know where the information came from. You don’t experience their brand. You never see their interface. Vertical software in 2020: The product that owned the workflow. Vertical software in 2030: An API that the LLM queries. The moat wasn’t data. 
It was that knowledge workers lived inside these interfaces 10 hours a day. That interface now lives inside the LLM chat. The New Value Matrix The Winners: LLM Chat Interface Owners: Whoever owns the chat interface owns the user relationship. OpenAI with ChatGPT. Anthropic with Claude. Microsoft with Copilot. Google with Gemini. They capture the interface value that vertical software loses. The new aggregators. Proprietary Data Owners: Companies with truly unique, non-replicable data. The key test: Can this data be licensed or scraped? If yes, not defensible. If no, you survive. MCP-First Startups : Companies building for agents, not humans. No legacy interface to protect. No beautiful UI to maintain. Just clean data served through MCP endpoints that LLMs can query. They can undercut incumbents on price because they have no interface investment to recoup. The Losers: Interface-Moat Businesses : Any vertical software where “workflow” was the value. The interface that justified premium pricing becomes worthless. A $20B company with no proprietary data becomes a $5-8B company. Traditional Aggregators (Maybe): Google and Meta commoditized suppliers. Now LLMs could commoditize them. But here’s the nuance: only if they fail to own the LLM chat layer themselves. Google has Gemini and insane distribution. Meta has Llama. The race is on. If they win the chat interface, they stay aggregators. If they lose it, they become the commoditized. Content Creators : UGC platforms lose relevance when AI generates personalized content. The creator economy inverts: infinite AI content, zero human creators needed for most use cases. The UI/UX Industry: Beautiful interfaces become irrelevant when the LLM chat is the only interface. Hundreds of billions per year in frontend development... for what? Figma (amazing product!) is down by 90%. The framework for repricing interface businesses is simple: How much of the business is interface versus data? Most vertical software is 60-80% interface, 20-40% data. When LLMs absorb the interface, that value evaporates. Is the data truly proprietary? If it can be licensed, scraped, or replicated, there’s no moat left. Pure commodity competition. This is not a bear case. This is math. The market hasn’t priced this in because LLM capabilities are new (less than 2 years at scale), MCP adoption is early (less than 1 year), enterprise buyers move slowly (3-5 year contracts), and incumbents are in denial. But the repricing is coming in my opinion. The arc of internet economics: Pre-Internet (1950-1995) : Distributors controlled suppliers. High distribution costs created leverage. Web 1.0 (1995-2005) : Distribution costs collapsed. Content went online but remained siloed. Web 2.0 (2005-2023) : Transaction costs collapsed. Aggregators emerged. Suppliers were commoditized but kept their interfaces. LLM Era (2023+) : Interface costs collapse. LLMs complete aggregation. Suppliers become APIs. It’s API versus API, and whoever has no proprietary data loses. What Thompson got right: Suppliers would be commoditized. Consumer experience would become paramount. Winner-take-all dynamics would emerge. What Thompson couldn’t have predicted: The interface itself would be absorbed. Suppliers would become invisible. The aggregator would BE the experience, not just route to it. All software would become API. In the LLM era, the internet becomes a database. Structured data in, natural language out. No websites, no interfaces, no brands. Just APIs serving data to AI. 
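To make "the internet becomes a database" concrete, here is roughly what a supplier's entire surface area could shrink to: a few structured records behind one endpoint. This is an illustrative sketch only; the route, fields, and bare-bones standard-library server are placeholders rather than anyone's actual API, and the data is the toy restaurant example from earlier in the piece.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# The whole "product": a handful of structured records.
DATA = {
    "name": "Bella Vista Italian Restaurant",
    "location": "123 Main St, San Francisco",
    "hours": {"Mon-Thu": "5-10pm", "Fri-Sat": "5-11pm"},
    "menu": [
        {"item": "Margherita Pizza", "price_usd": 22},
        {"item": "Spaghetti Carbonara", "price_usd": 24},
    ],
}

class SupplierAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        # Whatever an agent asks for comes back as structured JSON -- no brand,
        # no layout, no conversion funnel, just data.
        body = json.dumps(DATA).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), SupplierAPI).serve_forever()
```

An agent pointed at that endpoint gets everything the parallax-scrolling website was trying to convey, with nothing left for a human to click through.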
For someone who spent a decade building beautiful interfaces, this is bittersweet. All those carefully crafted interactions, pixel-perfect layouts, workflow optimizations... obsolete. But this is what progress looks like. The UX of chatting with an LLM is infinitely better than navigating specialized software. And that’s all that matters. Aggregation Theory told us suppliers would be commoditized. LLMs are finishing the job. The interface moat is dead. What remains is data. And if your data isn’t proprietary, neither is your business.

0 views
Evan Schwartz Yesterday

Scour - January Update

Hi friends, In January, Scour scoured 805,241 posts from 16,555 feeds (939 were newly added). I also rolled out a lot of new features that I'm excited to tell you about. Maybe because of some of these, I found more posts than usual that I thought were especially worth sharing. You can find them at the bottom of this post. Let's dive in! The Scour homepage has been completely revamped. It includes a new tagline, a more succinct description, and a live demo where you can try out my feed right from that page. Let me know what you think! Scour also finally has its own logo! (And it looks great on my phone's home screen, if I do say so myself! See below ) Have you ever wondered how Scour works? There is now a full documentation section, complete with detailed write-ups about Interests , Feeds , Reactions , How Ranking Works , and more. There are also guides specifically for RSS users and readers of Hacker News , arXiv , Reddit , and Substack . All of the docs have lots of interactive elements, which I wrote about in Building Docs Like a Product . My favorite one is on the Hacker News guide where you can search for hidden gems that have been submitted to HN but that have not reached the front page. Thanks to Tiago Ferreira , Andrew Doran , and everyone else who gave me the feedback that they wanted to understand more about how Scour works! Scour is now a Progressive Web App (PWA). That means you can install it as an icon on your home screen and access it easily. Just open Scour on your phone and follow the instructions there. Thanks to Adam Benenson for the encouragement to finally do this! This is one of the features I have most wanted as a user of Scour myself. When you're browsing the feed, Scour now keeps track of which items you've seen and scrolled past so it shows you new content each time you check it. If you don't want this behavior, you can disable it in the feed filter menu or change your default view to show seen posts. If you subscribe to specific feeds, as opposed to scouring all of them, it's now easier to find the feed for an article you liked . Click the "..." menu under the post, then "Show Feeds" to show feeds where the item was found. When populating that list, Scour will now automatically search the website where the article was found to see if it has a feed that Scour wasn't already checking. This makes it easy to discover new feeds and follow websites or authors whose content you like. This was another feature I've wanted for a long time myself. Previously, when I liked an article, I'd copy the domain and try to add it to my feeds on the Feeds page. Now, Scour does that with the click of a button. Some of the most disliked and flagged articles on Scour had titles such as "The Top 10..." or "5 tricks...". Scour now automatically penalizes articles with titles like those. Because I'm explicitly trying to avoid using popularity in ranking , I need to find other ways to boost high-quality content and down-rank low-quality content. You can expect more of these types of changes in the future to increase the overall quality of what you see in your feed. Previously, posts found through Google News links would show Google News as the domain under the post. Now, Scour extracts the original link. You can now navigate your feed using just your keyboard. Type to get the list of available keyboard shortcuts. Finally, here are some of my favorite posts that I found on Scour in January. There were a lot! Happy Scouring! Have feedback for Scour? 
Post it on the feedback board and upvote others' suggestions to help me prioritize new features! I appreciate this minimalist approach to coding agents: Pi: The Minimal Agent Within OpenClaw , even though it didn't yet convince me to switch away from Claude Code. A long and interesting take on which software tools will survive the AI era: Software Survival 3.0 . Scour uses Litestream for backup. While this new feature isn't directly relevant, I'm excited that it's now powering Fly.io's new Sprites offering (so I expect it to be a little more actively developed): Litestream Writable VFS . This is a very cool development in embedding models: a family of different size (and, as a result, cost) models whose embeddings are interoperable with one another: The Voyage 4 model family: shared embedding space with MoE architecture . A thought-provoking piece from Every about How AI Made Pricing Hard Again . TL;DR: over are the days where SaaS businesses have practically zero marginal cost for additional users or additional usage. A nice bit of UX design history about the gas tank arrow indicator on a car, with a lesson applied to AI: The Moylan Arrow: IA Lessons for AI-Powered Experiences . Helpful context for Understanding U.S. Intervention in Venezuela . Stoolap: an interesting new embedded database. Stoolap 0.2 Released For Modern Embedded SQL Database In Rust . I keep browsing fonts and, while I decided not to use this one for Scour, I think this is a neat semi-sans-serif from an independent designer: Heliotrope .

0 views

Ad Blockers didn’t help kill the open web

In the spirit of the open web, I’m writing this post to disagree with something someone else has posted on their own site. Specifically, a post titled “ Ad Blockers helped kill the open web ” by Christian Heilmann . I 100% agree with Christian when he writes that: The experience for users who don’t employ [ad blockers] is different, though, to the extend that the web is becoming unusable. What I disagree with is what follows: Here’s the thing: I am 100% sure these things are connected. The more people block ads, the more aggressive advertising became. To the extend that a newspaper site of today reminds you of illegal download or porn sites of the early 2000s. This is a generalization that’s maybe true in some cases. Maybe. Maybe if we’re only talking newspapers and other news sites, maybe that’s true. Again, maybe. I suspect there are other factors at play in the newspaper landscape, and it’s not just a matter of people blocking ads therefore, we need more ads. But the title of the post isn’t “Ad Blockers helped kill newspapers” but rather “Ad Blockers helped kill the open web”. That’s a much different claim, one that I strongly disagree with. The argument about not wanting to be tracked, I agree, is debatable. Some people don’t want to be tracked but are happy to do the tracking on their own sites. Still, I do think it should be my right not to be tracked around, and if the only way to do that is run tracker blockers, then so be it. But there is a difference between tracking prevention and blocking ads. Not every ad is predatory and designed to record your actions There probably is a difference, but honestly, the burden shouldn’t be on the user to figure it out. And so blocking everything seems to be the best course of action. Also, you can totally still run ads even when people have adblockers. They can’t be programmatic ads, sure. And they might be less effective (debatable), but that’s not a problem for the users to deal with. It’s a business issue. I agree that the web platform failed at figuring out a way to deal with monetisation. Everything ultimately falls back on Ads because it’s the only idea that “works”. But to me, the issue is that we have an overabundance of content, and most content is not worth paying for. Most content is not worth anything. This post is worth nothing. Before the web, nobody was going to pay anything to read something like this. At best, I could write it and send it to a newspaper as an opinion piece, and maybe they’d be interested in publishing it. But for some reason, the web has morphed our perception of content to the point where everything needs to generate money because everything is considered valuable. Well, it isn’t. The vast majority of sites on the web don’t need to make money. The vast majority of sites on the web cost very little to keep going. Adblockers have not killed those sites because there are no ads to be blocked there. To circle back to the topic of payments, Flattr was a good idea. Funnily enough, I even had the same idea years ago before discovering that someone had already built it (obviously). But that’s also not really a solution. Because the reality of the web is that if you provide something for free, most people are not going to pay for it. A physical paper I also skim at the store before purchasing it to see if it is worth it. This is also an already solved problem. Sites can have paywalls or limited free content previews. It doesn’t have to be an all-or-nothing approach. 
The problem with the web is that for years, corporations and individuals, who were running big and small sites, were happy to slap Google ads on their sites and cash in on the money while simultaneously helping make Google the dominant force it is today. And then enshittification kicked in, and Google started fucking those people over. This is the classic case of a shortsighted move from a previous generation that is screwing the subsequent ones. All that said, the open web is not dead. Maybe a small subset of sites whose business depended on publishing content for free is dying. And maybe it's a good thing. But I’m not gonna feel sorry for running dual adblockers both at the browser and the network level. Surveillance capitalism fucking sucks, and we should maybe start fixing that before we worry about adblockers. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs

0 views
ava's blog Yesterday

a month without caffeine - conclusion

In January, I wrote about doing a month without caffeine and gave an update one week in. In the original post, I wrote about realizing I was using it to override exhaustion rather than addressing it. I had been relying on matcha, black tea, some coffee and caffeinated water flavours to get through poor sleep, university pressure, workouts, and social commitments, which ultimately led to burnout. So I decided to quit for a month, and also not allow any decaf products, since they also contain a lesser amount. I experienced withdrawal headaches, nausea and changes to my hunger, but also my energy became steadier, my mood calmer, and my concentration more sustainable without the sharp spikes and crashes. I concluded with some lessons for when I resume, namely reserving caffeinated drinks for when it really matters, not consuming them after noon, reducing the caffeine intake (less strong matcha or black tea, less coffee shots etc.) and not using it to suppress hunger or other needs. Now that the month has passed, I'm back to report that it continued like the first post ended; I feel very calm, emotions and situations are more manageable, focus and task-switching are less of an issue. Getting up and going to bed feel easier. What took the longest to normalize were the gastrointestinal effects; it became clear my body relied on the caffeine to do that business at the usual times, and at first, everything was very delayed and I dealt with constipation. But during the third week, it went back to normal. I've had quite a few moments towards the end where I almost gave in, but I persisted. Sometimes I just really crave a specific taste or mouthfeel, and nothing can really replace matcha for me. It's such a comfort and reward. I'm also very, very used to having specific kinds of beverages to study or work on something, so breaking that was difficult. I think this reset was great. I found out I can just go without caffeine as well without a meaningful drop in productivity, and I genuinely feel happier, more rested and stable. Now I know it's still entirely optional and I can enjoy it for the taste or specific rituals to get ready :) I like to think I have reset my palate with this too, which will come in handy for upcoming matcha reviews! Now I will enjoy a mocha chocolate bar I saved for this! Reply via email Published 04 Feb, 2026

0 views
iDiallo Yesterday

We installed a single turnstile to feel secure

After the acquisition by a much larger company, security became a top priority. Our company occupied three tall buildings, each at least 13 stories high. Key card readers were installed next to every entrance, every elevator car, and even at the parking lot entrance, which itself was eight stories tall. The parking lot system was activated first. If you wanted to park your car, you needed to scan your pass. It didn't take long for lines to start forming, but they were still manageable. Then the doors were activated. I would often forget my key card on my desk and get stuck in the stairwell. After lunch, I'd climb the stairs all the way to the 11th floor, only to find myself locked out at the door. Fortunately, the buildings were full of people, and there was always someone to open the door for me. I'd slip in suspiciously while they contemplated the email that clearly said not to let anyone in with your own card. While we were battling to get used to the key cards, the company was installing turnstiles on the ground floor of every building. They looked futuristic, but I was already anticipating a problem the designers hadn't considered. Each building had 13 floors. Each floor was full of employees. Hundreds of employees per building would each have to scan their card to get in. I'm a software engineer. I understand that security isn't an optional feature you build on top of your application. Instead, you need to implement safeguards at the foundation. In fact, one of the most important applications I was working on was a tool to manage how different teams retrieved their tasks from Jira. If you've read this blog before, you know I always complain about Jira. Anyway, the original designer of this application must have been pressed for time. Each action in the app required a call to the Jira endpoint, which needed authentication. He never saved the auth token returned by the API. Instead, each call had to re-authenticate and then perform its task. Did he ask the user to reenter the password every single time? No, he was smarter than that. Did he save the credentials in the database in plain text? He might have been an intern, but he wasn't crazy. No! Instead, he saved the username and password in the cookies. But for good measure, it was base64 encoded. Eventually, we received the email. All turnstiles were going to be activated. The following Monday, they would run in mock mode, where the turnstiles would remain open, but we'd have to scan and wait for the beep and green light before entering. I arrived at 8:30am. I met my colleagues and hundreds of other employees in the lobby. When the first person scanned their card, the machine beeped and turned green. We all clapped in celebration. We took turns making our way to the machine. Beep, turn green, next. But it grumbled for some employees and turned red. That was fine though, it was mock day. We all went about our day. The next day, when I came to work, I remained in my car, stuck in line for the parking lot for at least 10 minutes. Looking outside, I saw long lines of people circling each building. I managed to park my car and discovered that the line of people extended all the way down to the parking level. I waited in line for at least 30 minutes just to make it to the lobby. I texted my manager that I'd be late for the daily standup because I was stuck in line. She didn't text back. Instead, she waved at me from the front of the line. Scanning was already slow; you had to wait to be approved.
But once you passed the turnstile, there was another line for the elevators. The elevator key card readers were also active. Imagine a couple dozen people all trying to squeeze into crowded elevators, each going to a different floor, and each trying to scan their key card to access their floor because someone who wasn't authorized for that floor couldn't scan it for them. Some elevator doors opened with a few people already inside because they couldn't scan their cards in the crowd, so they'd gone back down for a second attempt. In other words, it was complete chaos. It took more than an hour to go from the parking lot to my desk on the 11th floor. The next day, I decided to save time and take an Uber to work. Those were the days when an Uber ride cost only $3 . I thought I was being smart, but another hundred people or so had the same idea. We had a pile of Uber rides lining up outside, each trying to drop off their riders and blocking the way to the parking lot, causing yet another traffic jam. Inside the building, it was still the same chaos. I only saved a few minutes. On the third day, they shut down the turnstiles. They clearly weren't working. They also disabled the key card readers in the elevators. It was a relief. Security was supposedly a priority, yet nobody ever talked about the Jira credentials saved in cookies. I received significant pushback when I requested we install a Redis service to store the generated auth tokens. I had to write entire documents to justify using it and request enterprise support from a vendor. After a month, the security issue was fixed to no fanfare. We did, however, receive an email celebrating the installation of three new turnstiles in the lobby. They never turned the elevator key card readers back on. They remained dormant, a reminder of the mess we'd gone through. The turnstiles were visible. They were expensive. They disrupted everyone's day and made headlines in company-wide emails. Management could point to them and say that we're taking security seriously. Meanwhile, thousands of employees had their Jira credentials stored in cookies. A vulnerability that could expose our entire project management system. But that fix required documentation, vendor approval, a month of convincing people it mattered. A whole lot of begging. Security theater checks a box. It makes people feel like something is being done. Real security is invisible. It's reviewing code, implementing proper authentication, storing tokens correctly. It doesn't come with a ribbon-cutting ceremony or a celebratory email. It's just good engineering that nobody notices when it's done right. But security theater is impossible to miss.
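For what it's worth, the invisible fix is a few lines of code. Here is a rough sketch of the token-caching approach described above: keep the Jira auth token server-side with an expiry instead of round-tripping credentials in cookies. The key name, TTL, and helper are made up for illustration; this is not the actual application's code.

```python
import redis  # assumes the redis-py client and a Redis instance on localhost

r = redis.Redis(host="localhost", port=6379)
TOKEN_TTL_SECONDS = 55 * 60  # made-up value: expire a little before Jira would

def get_jira_token(user_id: str, authenticate) -> str:
    """Return a cached Jira auth token, re-authenticating only when it has expired."""
    key = f"jira_token:{user_id}"
    cached = r.get(key)
    if cached:
        return cached.decode()
    token = authenticate()                  # the one real authentication call
    r.setex(key, TOKEN_TTL_SECONDS, token)  # cached server-side, never in a cookie
    return token

# And why base64-in-a-cookie was never protection:
# base64.b64decode("dXNlcjpodW50ZXIy") == b"user:hunter2" -- it's encoding, not encryption.
```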

0 views

A Note on Blogging Anonymously

This blog is anonymous. I wrote a bit about that in my blogging journey , how I made the mistake of announcing my first blog to all my friends and family then got self-conscious, and how that really stifled what I wanted to write about. I wrote about it in more detail in another post, but the simple version is this: this space is mine, a Room of My Own . Blogging felt like it belonged to a privileged few (a leftover belief from the early 2000s — I binged on those blogs like no other), and it wasn’t until Facebook that writing in public under my own name felt accessible. I also believed continuation had to be earned - that validation or “success” would give me permission to keep going. That whole thing around visibility and validation is captured so well in this quote from Baby Reindeer …because… because fame encompasses judgment, right? And I… I feared judgment my entire life. That’s why I wanted fame, because when you’re famous, people see you as that, famous. They’re not thinking all the other things that I’m scared they’re thinking. Like, “That guy’s a loser or a drip or a fucking fa*ggot.” They think, “It’s the guy from that thing.” “It’s the funny guy.” And I wanted so badly to be the funny guy. “Why keep your blog anonymous, why not just journal then?” someone asked me after we emailed about one of my blog posts. And although I do journal privately, writing publicly (even anonymously) does something different. When I know someone might read what I’m saying, I have to distil the idea. It forces clarity. I stop rambling and try to focus. And the bonus is that sometimes what I write resonates with someone else, and we exchange ideas. Over the last few years, and through my blogging struggle (I hate that it was a struggle: start, stop, change domains, shut down, start again), I’ve also realised that what I want to write about here isn’t something I know many people in real life are interested in. And even when I do try to have those conversations, I don’t really get anywhere in depth. It almost feels like there’s no real interest in topics that are admittedly a bit niche: do I put my notes in Obsidian or Bear? Where do admin notes live? How do I track the books I read? Or my thoughts on success, scarcity, work, life, and all that. There are probably people in real life who are interested in productivity and examining life this way, but maybe, like me, they keep those opinions elsewhere. I do sometimes talk about productivity. People love discussing it at a high level, but I want details: where do you put your meeting notes? How do you track your to-dos, personal vs team vs project? Every now and then I meet someone at work who enthusiastically walks me through their  system, how they streamline OneNote with Teams and Outlook (which I also use at work). I love picking up little bits and pieces. And on that note, I secretly admire people who don’t care about any of this and just… get on with it somehow. What I’m trying to say is that I don’t necessarily need people who know me to know what I think about certain topics. Some things just aren’t for your professional life. For me, there’s a clear separation between work and life, and I like to keep it that way. Even though I do make friends at work, as I wrote about in (my very first!) blog post What Happens When Your 9–5 Defines You I still want a professional boundary between what I say here and who I am at work. I want the freedom to write whatever I want, without worrying whether it’s work-appropriate. 
If I want to write about weight loss, menopause, or something else like that, I don’t need everyone (not that everyone would be reading it, but it would feel that way to me) at work knowing about it. If I want to write about relationships, I haven’t really, so far, but I want that option, without wondering who might read it. My blog has mostly been about my favourite topic in the world: obsessing over tools - how I use them, why I use them - and optimising processes, alongside examining the life topics I tend to fixate on. I want this blog to be a mix of everything I am. Maybe if I wasn’t working, I’d feel comfortable opening it up at this point. But I haven’t told anyone about this blog at all. And if someone ever read it and worked out it was me, fine. But that’s not likely to happen any time soon. I know a lot of people use their blog as a professional CV. In some ways, I wish I could do that. I even had a domain with my full name, which has just expired. But I don’t think I’d ever be comfortable with it, and I don’t really need a static personal site. I have LinkedIn for that, and, I suppose,  I’m quite Gen X in that way. What I do want is a blog. Something I can be prolific on, or not, as much as I want. And that freedom, that anonymity, is what makes it possible.

0 views

Date Arithmetic in Bash

Date and time management libraries in many programming languages are famously bad. Python's datetime module comes to mind as one of the best (worst?) examples, and so does JavaScript's Date class . It feels like these libraries could not have been made worse on purpose, or so I thought until today, when I needed to implement some date calculations in a backup rotation script written in bash. So, if you wanted to learn how to perform date and time arithmetic in your bash scripts, you've come to the right place. Just don't blame me for the nightmares.

0 views
Stratechery Yesterday

Apple Earnings, Supply Chain Speculation, China and Industrial Design

Apple's earnings could have been higher but the company couldn't get enough chips; then, once again a new design meant higher sales in China.

0 views
Giles's blog Yesterday

Writing an LLM from scratch, part 32a -- Interventions: training a baseline model

I'm rounding out my series of posts on Sebastian Raschka's book "Build a Large Language Model (from Scratch)" by seeing how I could train the best base model I can from scratch on my own hardware. I started by training one in two days on my RTX 3090, and found that while it was a decent little model, it wasn't as good as the original GPT-2 small, either in terms of the loss it got on my test dataset, or in terms of how good it was at following instruction prompts after fine-tuning on them. I decided that I wanted to see what levers I could pull -- dropout, attention weight biases, and so on -- to make it better. For that, I didn't want to have my PC tied up for days at a time with multiple long training runs, so I learned how to train faster in the cloud. That led to some refinements in the prompt-following test I was using, and I also spent a bit of time on a side quest getting the various models I'd trained onto Hugging Face Hub. Now it's time to try the various "interventions", as I'll call them -- the levers to pull to see if I can make the model better. This post is to recap what they are, and to describe what I did to establish a baseline model to compare to. I listed a number of possible interventions at the end of the RTX 3090 post; I'm not going to do them all, but for completeness, here's the full list: The amount of training data. I'm not going to dig into this one; it looks like it does help, but the returns diminish rapidly, so I think that in order to get any serious improvement we'd need to train for much more than two days locally. In the one "extended training" test I did, I managed to get the loss down from 4.167 to 4.135, which was... less-than-inspiring. The number of epochs. I'm going to stick to single-epoch training -- that is, I'll train on a single pass through an amount of non-repeating data chosen to take 48 hours to handle on my local machine. The bias on the W_q, W_k and W_v matrices. This one definitely sounds worth looking into -- easy, as it's just a change to a config flag, and makes the model more like the original GPT-2. I'll give that a go. Dropout. I've read that for single-epoch training, dropout doesn't help (which doesn't quite work with my mental model of what it's for, but does sound plausible). Worth a look! The learning rate, and weight decay. The values I've used for these are basically copypasta from the book. I think I should learn to understand these and try to optimise them a bit. The precision. I'm using AMP, which means that some calculations are done in 16-bit rather than 32-bit, and calling with to let PyTorch choose to use the GPU's tensor cores, which use TF32, a kind of "32-bit float lite" (see the post on the local train for details). Those both (at least potentially) reduce the precision of the train below what you'd get if you trained with full-fat . Would reverting that be worth the longer train time? I should probably at least poke at that. The batch size. I've already, in effect, tried playing with that. The different cloud machines I played with had different amounts of per-GPU VRAM, so supported different per-GPU micro-batch sizes. So I wound up trying batch sizes from 512 (the same as the original GPT-2 was trained with) down to 104 in the cloud, plus my local trains with a batch size of 6. I did a rough-and-ready calculation at the end of the cloud training post where I estimated that the ideal batch size might be something like 97. So, probably not worth much more investigation. Exploding gradients. In one of my local trains, and in three out of the four cloud trains, I had sudden spikes in both training and validation loss. It generally took quite a bit of training -- maybe 10-15% of training time -- to get back on track after some of these, so we had what could be seen as wasted time in the training runs. Exploding gradients can be fixed by gradient clipping, which is relatively easy to do. Definitely worth investigating! I'm going to work through each of those apart from the first two and the batch size (and will retrospectively add links to the posts when I do), trying a train with just that intervention and nothing else, on a cloud machine. Once that's done, I'll bake all of the things that helped into the training loop, and do another local train -- with gradient accumulation to make the batch size match the cloud instances'. The cloud machine size that I decided to use for this was the one that came out the most cost-effective (and due to its VRAM size, had the best loss) in my earlier cloud training test: an 8x A100 machine with 40 GiB VRAM per GPU. But first, we need a baseline model. I've already done a train on an 8x A100 40 GiB machine -- why do we need a new one? In my cloud training post, I came to the conclusion that the cost in terms of training time of running a periodic validation loop as we trained was not really worth it, at least in this case. Two of the biggest reasons to have validation during training are to work out when you're overfitting on a multi-epoch train, and to see how your model can handle datasets that it has not been trained on. In a single-epoch train like this, you're not going to overfit -- every sample it sees will be new to it -- and the training loss itself is over samples it's not been trained on at the time it was calculated, for the same reason (though of course it will be trained on them as soon as we do the backward pass starting with that loss). Of course, it's not perfect -- a big benefit of the validation loss is that it's over the same held-back dataset on every run -- and there are arguments for keeping it (albeit, perhaps doing full runs less frequently than I was). But for these experiments, I decided that I'd simply drop it. I also wanted to introduce a consistent random seed at the start of the training loop. I didn't have that in my cloud trains, and of course if we want to have solid results on whether each intervention really does improve matters, then we need one so that we can be sure they're all starting from the same point.
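(The seeding itself is just the usual boilerplate -- something like the generic sketch below, rather than the exact code in my training script.)

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    # One seed for every RNG the training loop touches, so each intervention
    # run starts from the same initial weights and the same data shuffling.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # also seeds the CUDA generators on current PyTorch versions
```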
Both of those meant that I couldn't use the earlier train on the 8x A100 40 GiB machine as a baseline; I'd need a new one, introducing those two changes: no validation during the training run (using training loss as a proxy), and setting a random seed at the start for reproducibility. So: what was the baseline train going to look like? The first step was to strip out the validation code and to replace it with code that just took periodic checkpoints, keeping track of which one had the best average training loss over the period since the previous one. Next, I decided to plot on the training chart that is generated during the run not just the training loss, but also an indicator of the maximum and minimum training loss over all of the steps in that period. Then I added the random seed , which I set to 42. A couple of bugfixes, and we were left with this version of the code . One thing to highlight: in the file that specifies the various training parameters, I set the per-GPU micro-batch size to 12 rather than the 13 I'd used on this size of machine earlier. Two reasons for that: Firstly, I'm going to want to do a local run with gradient accumulation later, using all of the helpful interventions. With gradient accumulation, you do a number of steps with batches that you can fit into your memory, but you don't update the gradients each time. After a number of those, you do one big update based on the accumulated gradients -- hence the name. The full batch is all of those smaller batches taken together. If I want that to closely match the cloud train, I'll want the accumulated batches to be the same size as each global batch in the cloud. Now, on my local machine, I can fit a batch of 6 into VRAM. So that means that the full batch needs to be divisible by 6 1 . On the cloud train, with a micro-batch of 13 and 8 GPUs, we had an overall batch size of 104 in the previous train. 104 is not divisible by 6: no joy. But with a micro-batch size of 12, we have an overall batch of 12 × 8 = 96 , which means we'd be able to do gradient accumulation and do a parameter update every 96 ÷ 6 = 16 steps. Secondly, while my estimate of the ideal overall batch size was based on a rather arbitrary bit of curve-fitting, it did say that 97 was the ideal size. So it could be interesting to see whether it did help! So, having coded that up and set up the configuration, it was time to run it. Here's the training chart it came up with: Note the loss spikes at around global steps 4,200, 13,000 and 23,000. Those are important, I'll explain why later. The training run reported this at the end: So it took about 3h24m to train, even less than we expected from the previous cloud experiments' estimates of how long it would take excluding validation. About US$35 in cost. Here is the model on Hugging Face Hub . Let's see how it looks. For these intervention posts, I won't run the instruction-following tests, as they can only be run against a batch of models in one go to get results that are consistent with each other . But the smoke test -- how does it complete the sequence is worthwhile: Looks good! Reasonably coherent. Now we can find the loss on our held-back test set: That's a bit worse than the 3.674 we got for the original cloud train. Either the calculations of the optimal batch size I did were not quite right (entirely likely, they were very ad-hoc) or the model weights we started with, given the random seed we're using, just happened to lead us in a slightly worse direction (also plausible). 
Either way, it's in line with what we expected, and is still better than the test loss of 3.725 that we got with the second-best machine in the cloud comparison post (the 8x H100 80 GiB with a global batch size of 216).

So: we have a solid baseline model. Before we wrap up, let's consider those spikes in the loss that I called out in the training chart. Random spikes in the loss are a Bad Thing, right? Certainly they're a bad thing for a train in general, especially if you don't know for sure what's causing them. But my working assumption has been that they're caused by exploding gradients -- for some specific sample in the dataset, the gradients have gone up to some insanely high value, and we've had a bad update to our parameters as a result. It hasn't completely knocked the model back to its starting point, but it does take some time to recover, so we lose the benefit of some of our training.

If that is the case -- and it's not just something like a batch happening to have stuff that's wildly different to the rest of the training data, or something weird in the optimiser -- then gradient clipping is the solution. I wanted to see if it would help the model quality in general, but of course if we hadn't had any loss spikes in this baseline train, it would have been hard to tell whether it did! So I was very glad to see them here: if there had been none, I would either have had to do a gradient clipping experiment with no real expectation of it helping, or do another baseline train with a different random seed in the hope that that one had some spikes, which would have cost another US$35. All in all, it was good to see them there, as it sets us up well for that experiment.

So, we've trained a baseline model that we can make changes to -- the interventions I listed at the start -- and get a pretty reliable understanding of whether or not they help the quality of the final model. With that in place, we're in a good position to start running those intervention tests!

Given the loss spike situation in that chart, I think that a solid first one to go for -- even though it was the last in the list at the top of this post -- is gradient clipping (there's a quick sketch of what it involves below). Where are those loss spikes coming from, and if it's exploding gradients, what happens if we limit the damage they do with gradient clipping? Stay tuned! I've already done the training run for that (while I wrote this one up), so I should be able to post about it tomorrow.
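For reference, gradient clipping itself is just one extra line before the optimizer step. A toy sketch -- not my actual code, and the max_norm of 1.0 is purely an example value:

```python
import torch
from torch import nn

model = nn.Linear(128, 50257)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(6, 128)
targets = torch.randint(0, 50257, (6,))

loss = loss_fn(model(inputs), targets)
loss.backward()

# The extra line: if the combined norm of all gradients exceeds max_norm,
# scale them down so that it equals max_norm -- a single pathological batch
# can then no longer produce a huge parameter update.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)

optimizer.step()
optimizer.zero_grad()
```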
[1] Well, you could potentially do something with batches of different sizes, but that would be fiddly.

0 views
Rob Zolkos Yesterday

Deep Dive: How Claude Code’s /insights Command Works

The /insights command in Claude Code generates a comprehensive HTML report analyzing your usage patterns across all your Claude Code sessions. It's designed to help you understand how you interact with Claude, what's working well, where friction occurs, and how to improve your workflows. Its output is really cool and I encourage you to try it and read it through!

Description: "Generate a report analyzing your Claude Code sessions". Output: an interactive HTML report saved to disk. But what's really happening under the hood? Let's trace through the entire pipeline.

The insights generation is a multi-stage process:

1. Collect all your session logs
2. Filter out agent sub-sessions and internal operations
3. Extract metadata from each session (tokens, tools used, duration, etc.)
4. Run LLM analysis to extract "facets" (qualitative assessments) from session transcripts
5. Aggregate all the data across sessions
6. Generate insights using multiple specialized prompts
7. Render an interactive HTML report

The facets are cached locally, so subsequent runs are faster.

Before any LLM calls, Claude Code processes your session logs to extract structured metadata. Some sessions are excluded from the analysis entirely:

- Agent sub-sessions (files whose names start with a specific prefix)
- Internal facet-extraction sessions
- Sessions with fewer than 2 user messages
- Sessions shorter than 1 minute

If a session transcript exceeds 30,000 characters, it's chunked into 25,000-character segments and each chunk is summarized before facet extraction.

This is the core qualitative analysis. For each session (up to 50 new sessions per run), Claude analyzes the transcript to extract structured "facets" -- qualitative assessments of what happened. Model: Haiku (fast, cost-effective). Max output tokens: 4096. The extracted data for each session covers things like:

- Unique identifier
- When the session began
- How long the session lasted
- Number of user messages
- Token usage
- Which tools were used and how often
- Programming languages detected from file extensions
- Git activity
- How often you interrupted Claude
- Tool failures and their categories
- Code changes
- Feature usage
- Your initial message
- Brief session summary

Once all session data and facets are collected, they're aggregated and processed through multiple specialized analysis prompts. Model: Haiku. Max output tokens: 8192 per prompt.

Each analysis prompt receives aggregated statistics:

- Total sessions, messages, duration, tokens
- Git commits and pushes
- Active days and streaks
- Peak activity hours

Plus text summaries:

- SESSION SUMMARIES: up to 50 brief summaries
- FRICTION DETAILS: up to 20 friction details from facets
- USER INSTRUCTIONS TO CLAUDE: up to 15 repeated instructions users gave Claude

The longest of these prompts provides actionable recommendations:

- CLAUDE.md additions to try
- Features to explore
- "On the horizon" opportunities
- A fun memorable moment

The final LLM call generates an executive summary that ties everything together. This prompt receives all the previously generated insights as context.

All the collected data and LLM-generated insights are then rendered into an interactive HTML report, covering things like:

- Daily activity charts
- Tool usage distribution
- Language breakdown
- Satisfaction distribution
- Outcome tracking
- Project areas with descriptions
- Interaction style analysis
- What's working well (impressive workflows)
- Friction analysis with specific examples

Facets are cached per-session, so running /insights multiple times only analyzes new sessions. All analysis happens locally using the Anthropic API. Your session data stays on your machine -- the HTML report is generated locally and can be shared at your discretion.

The facet extraction focuses on patterns in your interactions, not the content of your code:

- What types of tasks you ask for
- How you respond to Claude's output
- Where friction occurs in the workflow
- Which tools and features you use

To get the most out of it:

- Use Claude Code regularly -- more sessions = richer analysis
- Give feedback -- say "thanks" or "that's not right" so satisfaction can be tracked
- Don't filter yourself -- natural usage patterns reveal the most useful insights
- Run periodically -- check in monthly to see how your patterns evolve
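To make the early stages a bit more concrete, here's a rough sketch in Python of the filtering and chunking rules described above. This is my own illustration, not Claude Code's actual implementation -- in particular, the Session fields and the "agent-" file-name prefix are assumptions:

```python
from dataclasses import dataclass

# Thresholds taken from the description above; everything else is illustrative.
MIN_USER_MESSAGES = 2
MIN_DURATION_SECONDS = 60
CHUNK_TRIGGER_CHARS = 30_000
CHUNK_SIZE_CHARS = 25_000


@dataclass
class Session:
    file_name: str
    user_messages: int
    duration_seconds: float
    transcript: str
    is_internal: bool = False   # e.g. a facet-extraction session


def should_analyze(session: Session) -> bool:
    """Apply the exclusion rules before any LLM calls are made."""
    if session.file_name.startswith("agent-"):   # hypothetical prefix for agent sub-sessions
        return False
    if session.is_internal:
        return False
    if session.user_messages < MIN_USER_MESSAGES:
        return False
    if session.duration_seconds < MIN_DURATION_SECONDS:
        return False
    return True


def chunk_transcript(transcript: str) -> list[str]:
    """Split very long transcripts into segments for per-chunk summarisation."""
    if len(transcript) <= CHUNK_TRIGGER_CHARS:
        return [transcript]
    return [transcript[i:i + CHUNK_SIZE_CHARS]
            for i in range(0, len(transcript), CHUNK_SIZE_CHARS)]
```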

0 views
Xe Iaso Yesterday

Did Zendesk get popped?

I don't know how to properly raise this, but I've gotten at least 100 emails from various Zendesk customers (no discernible pattern, everything from Soundcloud to GitLab Support to the Furbo Pet Camera). Is Zendesk being hacked? I'll update the post with more information as it is revealed.

0 views
Lalit Maganti Yesterday

Rendering 100k trace events faster with exponential search

We’ve recently been looking into optimizing rendering performance of the Perfetto UI on large traces. We discovered that there was some inefficiency in our data fetching logic, especially when you’re very zoomed out. In this case, there can be a lot of slices (spans) which are so small that they take less than one pixel of width. So for each pixel, we need to figure out “what is the event which we should draw for this pixel”. Over time we’ve come to the conclusion that the best thing to draw is the slice with the largest duration in that pixel.

We can break this into two sub-problems:

1. What is the range of events which correspond to each pixel?
2. What is the event with the maximum duration for that pixel?

We’re going to focus on 1) in this post as that’s where the slowdown was. 2) is fascinating but also surprisingly orthogonal. If you’re interested, I would suggest reading this excellent post from Tristan Hume explaining the basic algorithm we use.
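To give a flavour of the technique in the title -- this is an illustrative sketch, not the actual Perfetto UI code -- exponential (or "galloping") search finds the first event whose start time is at or after a pixel's left edge by doubling a probe offset from the previous answer and then binary-searching within the bracketed range:

```python
import bisect


def exponential_search(starts, target, lo=0):
    """Index of the first value in sorted `starts[lo:]` that is >= target.

    Doubles the probe distance from `lo` until the target is bracketed,
    then finishes with a binary search over that small range.
    """
    if lo >= len(starts) or starts[lo] >= target:
        return lo
    step = 1
    while lo + step < len(starts) and starts[lo + step] < target:
        step *= 2
    return bisect.bisect_left(starts, target,
                              lo + step // 2, min(lo + step, len(starts)))


# Walk across the pixels, reusing the previous answer as the starting point.
slice_starts = [0, 3, 4, 9, 15, 16, 23, 42, 57, 58, 60, 71]   # made-up event start times
pixel_edges = [0, 10, 20, 30, 40, 50, 60, 70, 80]             # made-up pixel boundaries

idx = 0
for left_edge in pixel_edges:
    idx = exponential_search(slice_starts, left_edge, idx)
    print(left_edge, idx)
```

Because the probe distance doubles, each lookup costs roughly O(log d), where d is how far the answer moved since the previous pixel -- usually tiny when sweeping across adjacent pixels.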

2 views

I just discovered Kokoro free Text To Speech that runs locally

GitHub: https://github.com/eduardolat/kokoro-web

I can’t wait to try out the API. I am planning a series of video posts on Instagram, YouTube and TikTok for people who just don’t read (a lot more these days, it seems). Just my blog posts being read out, with the web page scrolling in the background. The challenge will be automating the video creation from start to finish. Here is the first one – this was my first blog post ever (not very long!)

0 views