Latest Posts
devansh Today

Bypassing egress filtering in BullFrog GitHub Action using shared IP

This is the third vulnerability I'm disclosing in BullFrog, alongside "Bypassing egress filtering in BullFrog GitHub Action" and a sudo restriction bypass in BullFrog GitHub Action. Unlike those two, which exploit specific implementation gaps, this one is a fundamental design flaw, the kind that doesn't have a quick patch because it stems from how the filtering is architected.

How BullFrog's Egress Filtering Works

BullFrog markets itself as a domain-based egress filter. You give it a list of domains you trust, enable the blocking egress policy, and everything else should be denied. The operative word there is "should".

When a workflow step makes a DNS query, BullFrog intercepts the DNS response and checks the queried domain name against your allowlist. If the domain is allowed, BullFrog takes the resolved IP address from the DNS answer and adds it to a system-level firewall whitelist (nftables). From that point on, any traffic to that IP is permitted, with no further domain-level inspection.

The Layer 3/4 Problem

BullFrog operates at the network layer (Layer 3) and transport layer (Layer 4). It can see IP addresses and ports. It cannot see HTTP Host headers, TLS SNI values, or any application-layer content. That's a Layer 7 problem, and BullFrog doesn't go there.

Shared Infrastructure Is Everywhere

The modern internet is not a one-to-one mapping of domains to IP addresses. It never really was, but today the gap is dramatic: a single IP address on a CDN like Cloudflare or CloudFront can serve hundreds of thousands of distinct domains. BullFrog's model assumes an IP corresponds to one domain, or at least one trusted context. That assumption is wrong.

Consider what gets whitelisted in a typical CI workflow:

- A dependency registry → Cloudflare CDN
- A static files resource → Azure CDN
- Blog storage hosted in the cloud → Google infrastructure

Every one of these resolves to infrastructure shared with thousands of other tenants. The moment BullFrog whitelists the IP for a registry, it has also implicitly whitelisted every other domain served from that same Cloudflare edge node, including an attacker's domain pointing to the same IP.

Vulnerability

Once an allowed domain is resolved and its IP is added to the nftables whitelist, an attacker can reach any other domain on that same IP by:

- Using the allowed domain's URL, so the connection goes to the already-whitelisted IP: no new DNS lookup, no new policy check
- Injecting a different Host header to tell the server which virtual host to serve

BullFrog never sees the Host header. The firewall sees a packet destined for a permitted IP and passes it through. The server on the other end sees the injected Host header and responds with content from an entirely different, supposedly blocked domain.

Vulnerable Code

The flaw lives at agent/agent.go#L285: two problems in one function. First, it opens up the IP without any application-layer binding, so all traffic to that IP is permitted, not just traffic for the domain that triggered the rule. Second, the else-if branch means that even a DNS query for a blocked domain gets logged as "allowed" if its IP happens to already be in the whitelist. The policy has effectively already been bypassed before the HTTP connection is even made.

Proof of Concept

Infrastructure Setup

This PoC uses a DigitalOcean droplet running Nginx with two virtual hosts on the same IP: one "good" (allowed by BullFrog policy), one "evil" (blocked). nip.io is used as a wildcard DNS service so no domain purchase is needed. SSH into your droplet and configure Nginx with the two virtual hosts.

The Workflow

Both domains resolve to the same droplet IP. BullFrog is only told to allow the "good" domain. The final step returns a response served by the "evil" virtual host, through a connection BullFrog marked as allowed, to a domain BullFrog was explicitly told to block.

Real-World Impact

The DigitalOcean + nip.io setup is a controlled stand-in for the real threat model, which is considerably worse. Consider what actually gets whitelisted in production CI workflows:

- Your dependency registry resolves to Cloudflare. An attacker with any domain on Cloudflare can receive requests from that runner once the registry IP is whitelisted.
- Your static file server resolves to Azure CDN. Every GitHub Actions workflow that pulls artifacts whitelists a slice of Azure's IP space.

An attacker doesn't need to compromise the legitimate service. They just need to host their C2 or exfiltration endpoint on the same CDN and inject the right Host header. The guarantee evaporates entirely for any target on shared infrastructure, which in practice means most of the internet.

Disclosure Timeline

- Discovery & Report: 28th November 2025
- Vendor Contact: 28th November 2025
- Vendor Response: None
- Public Disclosure: 28th February 2026
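The decision logic described above can be modeled in a few lines. This is a toy simulation of an IP-keyed allowlist, my own sketch for illustration rather than BullFrog's actual code; the domains and the IP are placeholders. It shows why a Host header for a blocked domain is invisible to the filter once the shared IP is whitelisted:

```python
# Toy model of an L3/L4 egress filter: policy is keyed on IP only.
# Domain names (good.example / evil.example) and the IP are illustrative.

allowed_domains = {"good.example"}
dns = {
    "good.example": "203.0.113.7",  # both vhosts share one IP,
    "evil.example": "203.0.113.7",  # as on a CDN edge node
}

firewall_allowed_ips = set()

def on_dns_answer(domain: str) -> None:
    """Mimics the DNS hook: whitelist the resolved IP of an allowed domain."""
    if domain in allowed_domains:
        firewall_allowed_ips.add(dns[domain])

def packet_permitted(dst_ip: str) -> bool:
    """Mimics the firewall check: only the destination IP is visible here."""
    return dst_ip in firewall_allowed_ips

# A CI step resolves the allowed registry...
on_dns_answer("good.example")

# ...and traffic addressed to the blocked tenant on the same IP now passes,
# because the filter never sees a domain name at this layer.
assert packet_permitted(dns["evil.example"])
```

In the real attack, the "packet to the whitelisted IP" is simply an HTTP request to the allowed domain's address carrying a forged Host header (curl's `-H "Host: ..."` flag is enough); nothing at Layer 3/4 can tell the difference.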


MOOving to a self-hosted Bluesky PDS

Bluesky is a “Twitter clone” that runs on the AT Protocol. I have to be honest, I’d struggle to explain how atproto works. I think it’s similar to Nostr but like, good? When atproto devs talk about The Atmosphere they sound like blockchain bros. The marketing needs work. Bluesky, however, is a lot of fun. Feels like early Twitter. Nobody cool uses Twitter anymore. It’s a cesspit of racists asking Gork to undress women.

Mastodon and Bluesky are the social platforms I use. I’ve always been tempted to self-host my own Mastodon instance but the requirements are steep, so I use the omg.lol server instead. Self-hosting the Bluesky PDS is much less demanding.

The host machine is a Raspberry Pi; I glued an NVMe onto the underside. All services run as Docker containers for easy security sandboxing. I say easy, but it took many painful years to master Docker. I have the Pi on a VLAN firewall because I’m extra paranoid.

I set up my Bluesky PDS using the official Docker container. It’s configured with environment variables and has a single data volume mounted, which I back up to my NAS.

I’ve put Caddy in front of the PDS container. Right now it just acts as a reverse proxy; this gives me flexibility later if I want to add access logs, rate limiting, or other plugins.

The tunnel links Caddy to the outside world via Cloudflare to avoid exposing my home IP address. Booo! If you know a good European alternative please let me know! Cloudflare also adds an extra level of bot protection. The guides I followed suggest adding wildcard DNS for the tunnel, but Cloudflare has shuffled the dashboard for the umpteenth time and I can’t figure out how. I think sub-domains are only used for user handles anyway. I use a different custom domain for my handle, verified with a manual TXT record.

Allowing the PDS to send emails isn’t strictly necessary. It’s useful for password resets, and I think it’ll send a code if I migrate PDS again.
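Since the Caddy layer above is just a reverse proxy, the whole Caddyfile can be tiny. A sketch with placeholder names; the hostname, the upstream container name, and the PDS port (the official container defaults to 3000, but check your env) are all assumptions to adapt:

```
# Hostname, upstream name and port are placeholders for your setup.
# TLS can be left to the Cloudflare Tunnel, so plain HTTP is fine here.
http://pds.example.com {
	reverse_proxy pds:3000
}
```

Keeping it this small now means access-log or rate-limit directives can be dropped into the same site block later without touching the tunnel.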
I went through the hassle of adding my PDS domain to Proton Mail and followed their SMTP guide, which shows how the PDS environment variables are formatted. It took me forever to figure out where the username and password went.

PDS MOOver by Bailey Townsend is the tool that does the data migration. It takes your Bluesky password and probably sees your private key, so use at your own risk! I set up a new account to test it before I YOLO’d my main. MOOve successful!

I still log in through the official app, but I now select “custom account provider” and enter my PDS domain. SkyTools has a tool that confirms it, Bluesky Debug can check handles are verified correctly, and PDSls.dev is a neat atproto explorer.

I cross-referenced the following guides for help:

- Notes on Self Hosting a Bluesky PDS Alongside Other Services
- Self-host federated Bluesky instance (PDS) with CloudFlare Tunnel
- Host a PDS via a Cloudflare Tunnel
- Self-hosting Bluesky PDS

Most of the Cloudflare stuff in them is outdated because Cloudflare rolls dice every month.

Bluesky is still heavily centralised, but the atproto layer allows anyone to control their own data, and I like doing that on principle. I don’t like maintenance, but I’ve heard that’s minimal for a PDS. Supposedly it’s possible to migrate back to Bluesky’s PDS if I get bored. I’m tempted to build something in The Atmosphere. Any ideas?

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
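An appendix for anyone wiring up the same SMTP step: the Proton settings end up in the PDS env file roughly like this. The variable names are from the official PDS distribution as I understand it, and every value below is a placeholder, so double-check against the current docs and Proton's SMTP guide:

```
# Username and password live in the URL userinfo, URL-encoded (@ becomes %40).
# Host and port come from Proton's SMTP guide; all values here are placeholders.
PDS_EMAIL_SMTP_URL=smtps://you%40example.com:smtp-token@smtp.protonmail.ch:465/
PDS_EMAIL_FROM_ADDRESS=admin@example.com
```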

iDiallo Today

Why we feel an aversion towards LLM articles

Last year, I pushed myself to write and publish every other day for the whole year. I had accumulated a large number of subjects over the years, and I was ready to start blogging again. After writing a dozen or so articles, I couldn't keep up. What was I thinking? 180 articles in a year is too much. I barely wrote 4 articles in 2024. But there was this new emerging technology that people wouldn't stop talking about. What if I used it to help me achieve my goal?

Have you ever heard of Mo Samuels? You probably haven't. But you must have heard of Seth Godin, right? Seth Godin is the author of several bestsellers. He is an icon in the world of marketing, and at one point he nudged me just enough to quit an old job. This is someone I deeply respected, and I bought his book All Marketers are Liars with great anticipation. I was several chapters in when he dropped this statement:

"I didn't write this book."

What does he mean by that? His name is on the cover. These are the familiar words I often heard in his seminars. What is he trying to say?

"What I mean is that Seth Godin didn't write this book. It was written by a freelancer for hire named Mo Samuels. Godin hired me to write it based on a skimpy three-page outline."

What? Mo Samuels? Who is Mo Samuels? If that name were on the cover, I wouldn't have bought the book in the first place.

"Does that bum you out? Does it change the way you feel about the ideas in this book? Does the fact that Seth paid me $10,000 and kept the rest of the advance money make the book less valuable?"

Well, yeah. It doesn't change the ideas in the book. But it is deceptive. I bought it specifically to read his words. Not someone else's.

"Why should it matter who wrote a book? The words don't change, after all. Yet I'm betting that you care a lot that someone named Mo wrote this book instead of the guy on the dust jacket. In fact, you're probably pretty angry."
"Well, if you've made it this far, you realize that there is no Mo Samuels, and in fact, I was pulling your leg. I (Seth Godin) wrote every word of this book."

Imagine he hadn't added that last line. I never return a book after purchase, but this would have been a first. We don't just buy random books; a name carries value. I bought this book specifically because I wanted insight from this author. Anything less would have been a betrayal.

Well, that's how people feel when they read an LLM-generated article. I wouldn't have noticed if I hadn't used LLMs to write articles on this very blog. The first time, I wrote a draft that had all the elements I wanted to present. The problem was that the structure didn't entirely make sense. The story arc didn't really pay off, and the pacing was off. DeepSeek was just making the rounds, releasing open weights and open source code. I decided to use it to help me structure the article. The result was impressive. Not only had it fixed the pacing, it restructured the article in a way that made much more sense. Where I had dense blocks of information, DeepSeek turned them into convenient bullet points that were much easier to read. I was satisfied with the result and immediately published it. What I failed to notice, or maybe was too mesmerized to notice, was that the sentence structure had also been rewritten.

I didn't use LLMs every time I wrote, but throughout the year I had at least a dozen AI-enhanced articles. When published, they sounded just fine. The problem started when I wanted to reference one of those articles in a new post. Reading through the AI-enhanced post felt strange. A paragraph I vaguely remembered and wanted to quote didn't sound like what I remembered. The articles were bloated with words I would never use. They had quips that seemed clever at the time but didn't sound like me at all. I ended up rewriting sections of those posts before quoting them.
The second problem appeared whenever I landed on someone else's blog. I noticed the same patterns. The same voice. The same quips. "It's not just X, but Y." "Here's the part I find disturbing." "The irony is not lost on me." "It is a stark reminder." These and many more writing tropes were uniformly distributed across my LLM-assisted articles and countless others across the web. It felt like Mo Samuels was a guest writer on all of our blogs. And here's the kicker (another famous trope): I'm not singling out DeepSeek here. ChatGPT, Claude, Gemini, they all seem to have taken the same "Writing with Mo Samuels" masterclass. It feels like this voice, no matter what personality you try to prompt it with, is the average of all the English language on the web.

I wouldn't say readers of this blog are here for my distinct voice or writing style. I'm not famous or anything. But I know they can spot Mo from a mile away. My goal is not to trick readers. I want the stories and work experiences I share here to come from me, and I want to give readers that same assurance.

So here is what I did. Since my goals are more modest this year, I've rewritten several of those lazy articles. I spend more time writing, and I try to hold onto this idea that's gaining traction among bloggers: "If you didn't bother writing, why should anyone bother reading?" I want to share my thoughts, even if no one reads them. When I come back to rediscover my own writing, I want to recognize my own voice in it. But if you do read this blog, if it sucks, if you disagree, if you have an opinion to share, you should know that I wrote it. Not Mo Samuels.


AI clones and data protection

A few days ago, news spread through the web about a Meta project for letting an AI run the social media account of a deceased person, emulating the person's activity like posting content and responding to messages. The goal was to maintain engagement on the platform and reduce the grief when a person passes away. If you believe a screenshot going around, a poster on 4chan revealed this years prior, saying it has the internal name "Project Lazarus", referencing the Lazarus of Bethany. While Meta spokespeople said they had no plans to pursue this (yet?), there are other services like ELIXIR AI, who want to push digital immortality via an "eternal doppelganger from a customer's lifetime data".

In general, we are already dealing with a deluge of deepfakes online. Not only are people using AI to remove the clothes from images of people, but they are also creating new images, video and audio material with a person's physical and vocal likeness, trained on anything from a handful of photos up to terabytes of video material if it's a popular and active YouTuber. This also happens in the education and entertainment industry. Notable figures have digital copies in museums and other places to be interacted with, and deceased actors get "revived" to show up or to lend their voice to a character. Researchers talk about this as "spectral labour" in a "postmortal society", meaning the "exploitation of digital remains for aesthetically pleasing, politically charged, and communicative representations". The companies that provide these resurrection services are referred to as the "transcendence industry".

The tech and availability are changing fast, and as with any developing field, it can be hard to apply existing legal frameworks that didn't have this use case specifically in mind. While I have to leave the issues around general ethics and monetization to another day, I'd like to focus on (European) data protection and privacy laws!
First up, good to know: Are your body and voice capable of being personal data? Yes! They make you identifiable. You can also see this in Article 9 GDPR, which prohibits processing data related to racial or ethnic origin, as well as genetic data, biometric data for the purpose of uniquely identifying a natural person, and data concerning health, unless it falls under very specific allowed purposes. Your body carries this type of information. Additionally, the European Data Protection Board has issued guidelines suggesting that voice data is considered inherently biometric.

That means making a model of you via a series of photos from different angles, motion capture, voice recordings etc. is processing personal data, some of it sensitive data under Article 9 GDPR. This is then further processed during AI training and finetuning to reproduce a person's physical or vocal likeness reliably.

Recital 51 of the GDPR mentions: "The processing of photographs should not systematically be considered to be processing of special categories of personal data as they are covered by the definition of biometric data only when processed through a specific technical means allowing the unique identification or authentication of a natural person."

So, simply taking or editing some pictures is not considered processing of special (sensitive) personal data, as that would reach too far; it needs specific technical means that take measurements to turn it into biometric data, like when you set up FaceID to unlock your phone, or when you get an eye scan or fingerprint scan to be able to unlock a door. There are actually quite a few interesting discussions on whether taking a picture of someone wearing glasses is processing data about their health - but I digress. AI models trained to reproduce your likeness reliably have turned you into a dataset, a bunch of measurements, a model, which generally counts as biometric data processing.
Once data processing falls under Article 9 GDPR, the legal bases of Article 6 GDPR - like legitimate interest, fulfillment of a contract, compliance, etc. - fall away, as only the specific allowances of Article 9(2) GDPR make an exception from the general prohibition. In the case of the entertainment and education industry, that will likely reduce it to the explicit consent named in: "a) the data subject has given explicit consent to the processing of those personal data for one or more specified purposes, except where Union or Member State law provide that the prohibition referred to in paragraph 1 may not be lifted by the data subject".

This is impossible for people who have already passed away, but you can usually ask their estate or remaining family members for consent in their stead.

Consent, under GDPR, always needs to be given freely. Article 7 GDPR says, among other things: "When assessing whether consent is freely given, utmost account shall be taken of whether, inter alia, the performance of a contract, including the provision of a service, is conditional on consent to the processing of personal data that is not necessary for the performance of that contract." This is also referred to as the coupling prohibition. That may be difficult to avoid in the entertainment industry: What if getting the role is tied to agreeing to AI cloning, if not explicitly, then implicitly? What if refusing, at some point, gets you blacklisted? What if agreeing has an effect on your success and income at an agency? Many actors now have to deal with this as studios try to reduce the time actors spend on set to cut costs via AI clones, and also want a backup AI clone option in case the actor dies during production.

What's also problematic: How do you freely and productively consent to something you don't understand?
Of course, you don't need to be an expert in everything, but usually, stuff is pretty straightforward when it comes to taking pictures, video or audio recordings. Explaining how AI models work has been very difficult, even for people deeply involved, and now we are likely dealing with studios that are completely uninvolved with the company that actually handles the AI cloning. And how do you properly inform someone contractually about how their data will be used and processed if the field and its possibilities develop so fast? It's difficult to anticipate potential future use cases you'd want or not want. And if the data gets sent somewhere outside of the EEA, you have a so-called 'third country transfer' to worry about, which needs special considerations and protections.

Now, we have established that your body and voice are personal data, and that processing them in this way falls under the GDPR. What about your clone data within the training set, or the output itself? This is a bit controversial at the moment! It makes sense that this would also be regarded as personal data, as it is still identifiably you when it gets used with zero alterations. Where it gets problematic are use cases where you lend your likeness to something, especially your voice. For example: use for an ad that is not supposed to literally embody you, but instead just offer a neutral voice-over; or you're the new voice for Siri; or you might dub a cartoon character. Obviously, your friends and family could reliably recognize your voice, so it could count. But there are data protection authorities in Germany who vouch for a more usage-oriented interpretation, meaning: If your clone is used to identify you and represent you in some content, it is biometric identification, but if your voice is just used as one voice for a job, it's just imitation or synthesis.
I don't agree with that, as the data itself and the identification methods are still the same, and current synthesis usage can still be used for biometric identification later, but that's the discussion right now.

Okay, so this type of data generally falls under the GDPR. That means I have the same rights as usual - the right to deletion, too. But as I said before in my post about AI and the GDPR, it can be hard or impossible to delete data from a training set. Deleting the entire model or having to retrain it would incur massive costs and losses; it would make more sense to have more individual models that can be more easily separated and deleted, if possible. But since that is not in the control of the person holding the rights, it might be hard to enforce them. It's equally difficult for the output of these models: that falls under the GDPR as well and would be affected by deletion or restriction requests, but that's also where lots of contracts, laws and rights collide. It needs to be assessed in each case individually.

There was an interesting case in Germany a while ago: a YouTuber used an AI-generated voice of a famous voice actor in his videos, and the actor objected to it. The YouTuber had around 190,000 subscribers and an associated online shop. He published two political satire videos on YouTube that used an AI-generated voice closely imitating the actor's voice, but didn't label it as AI. Viewers in the comments identified the voice as the actor's as well. The videos ended with references to the online shop, which sold merchandise linked to the channel's political opinions. The actor objected to the use of his voice, requested that the YouTuber stop, and wanted reimbursement of legal costs. The YouTuber agreed to cease, but refused to pay damages, arguing that the voice was synthetic, lawfully acquired from an AI voice provider, and used for satire rather than advertising.
Meanwhile, the actor claimed that the AI-generated voice constituted use of his personal voice, that the processing occurred without consent, and that it created the impression that he endorsed the videos and products. He also sought compensation equivalent to his usual licensing fees.

The court sided with the actor and found that the YouTuber interfered with the actor's right to his own voice: despite being AI, the voice closely imitated a distinctive personal characteristic. The court considered that a significant part of the audience would associate the voice with the data subject, which was sufficient to establish personal attribution. As expected and further explained above, the court rejected the reasoning of "legitimate interest" in Article 6(1)(f) GDPR, and found that the voice primarily served the YouTuber's commercial interests. No exemption applied under Article 85 GDPR, as the processing was neither journalistic nor genuinely artistic in a way that would justify overriding the data subject's rights, particularly given the commercial context and the lack of transparency about the AI-generated nature of the voice. As a consequence, the court ordered the YouTuber to pay €4,000 as a fictitious license fee for the unauthorized use of the voice and €1,155.80 in reimbursable legal costs, plus interest.

I think it's important to talk about this as it doesn't only affect actors and voice actors, or historical people's likeness used in the classroom or at concerts, but also has the potential to affect you. Your employer could ask to make an AI clone of you, for example. At the data protection law conference I attended in Munich, the AI Officer of a big insurance firm said they are holding the legally required data protection trainings for their employees via AI-generated videos and AI-generated avatars of him and his colleague.
That means employees who need to do the training get into a digital environment with this avatar of him that responds, smiles, blinks and leads them through the material, some of which is AI generated as well. Circling back again to this research paper, we are at a point in time where, depending on your job, your body and voice can work independently of you, and people can monetize you after your death not by further selling what you produced in your lifetime, but by producing new things indefinitely that you had no hand in while you were alive, or by selling access to "you". Eerie, huh? So it's important to know your rights and what's going on in the space :)

Published 02 Mar, 2026


Anthropic and Alignment

"Just because you do not take an interest in politics doesn't mean politics won't take an interest in you." ― Pericles

This is not an article about the campaign being waged by the U.S. against Iran, but it's a useful — and timely — analogy. There is a never-ending debate that can be had about the concept of International Law and who might be violating it. Some will argue that the U.S. is in violation for the attacks; others will note that Iran has been serially violating International Law with both its overt actions and its support of terror networks for my entire life. What is important to note is that the entire debate is ultimately pointless: the very concept of "international law" is fake, not because pertinent statutes and agreements don't exist, but because their effectiveness is ultimately rooted in their enforceability. That, by extension, means there must be an entity to enact such enforcement, with the capability to match, and such an entity does not exist. Yes, there is the United Nations, but said body only operates by the agreement of its members, and their willingness to subjugate themselves not only to its edicts, but also to put forward the capabilities to enforce its mandates.

In other words, the only agents that matter are nation states themselves, and the relative power of those nation states is not a function of lawyers and judges but rather their ability to project force and coerce others. To put it another way, if, after this weekend, you want to hold onto the concept of International Law, then realize the debate has been resolved: Iran was in violation, because their military just had their clock cleaned by the U.S., which means the U.S. decides who is right and who is wrong.

While most of the U.S. and certainly the rest of the world were preoccupied with the happenings in Iran, another fervent debate has been ongoing in tech.
Once again one of the parties is the United States itself, but the other entity in question is a private company, Anthropic. From the Wall Street Journal:

The federal government will stop working with Anthropic and designate the artificial intelligence company a supply-chain risk, a dramatic escalation of the government's clash with the company over how its technology can be used by the Pentagon. While Anthropic's relationship with the administration hit a new low, rival OpenAI said late Friday that it reached an agreement with the Defense Department to have its models used in classified settings, until recently a status only held by Anthropic. Friday's quick-fire developments between the Pentagon and two Silicon Valley darlings are poised to shape the future of how the federal government, and particularly the Pentagon, uses cutting-edge AI tools.

Anthropic staked out its position earlier in the week in a Statement from Dario Amodei on [its] discussions with the Department of War:

In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today's technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now. To our knowledge, these two exceptions have not been a barrier to accelerating the adoption and use of our models within our armed forces to date.

The Department of War has stated they will only contract with AI companies who accede to "any lawful use" and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a "supply chain risk" — a label reserved for US adversaries, never before applied to an American company — and to invoke the Defense Production Act to force the safeguards' removal.
These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security. Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

I actually didn't realize before this episode that the National Security Agency (NSA) is a part of the Department of War; that certainly provides useful context around the surveillance point. And, as we saw a decade ago with the Snowden revelations, the NSA can be both aggressive and creative in its interpretations of what is legal in terms of surveillance. One might have hoped that telecom companies in particular might have taken a stand like Anthropic did.

At the same time, what is the standard by which it should be decided what is allowed and not allowed, if not laws, which are passed by an elected Congress? Anthropic's position is that Amodei — who I am using as a stand-in for Anthropic's management and its board — ought to decide what its models are used for, despite the fact that Amodei is not elected and not accountable to the public. And, on the second point, who decides when and in what way American military capabilities are used? That is the responsibility of the Department of War, which ultimately answers to the President, who is also elected. Once again, however, Anthropic's position is that an unaccountable Amodei can unilaterally restrict what its models are used for.
It's worth noting that there are reports that Anthropic's concerns may be broader than just fully autonomous weapons; from Semafor:

Anthropic is one of the few "frontier" large language models available for classified use by the US government because it is available through Amazon's Top Secret Cloud and through Palantir's Artificial Intelligence Platform, which is how its Claude chatbot ended up appearing on the screens of officials who were monitoring the seizure of then-Venezuelan President Nicolás Maduro… Soon after the Maduro raid, during a regular check-in that Palantir holds with Anthropic, an Anthropic official discussed the operation with a Palantir senior executive, who gathered from the exchange that the AI startup disapproved of its technology being used for that purpose. The Palantir executive was alarmed by the implication of Anthropic's inquiry that the company might resist the use of its technology in a US military operation, and reported the conversation back to the Pentagon, a senior Defense Department official said.

Anthropic denied it objected to whatever involvement Claude may have had in the Maduro raid, but the Semafor story resonates given the trend in some tech circles to resist any involvement in military operations. And, to that end, one could argue that this stand-off is ending as it should: Anthropic and its models will be removed from the Department of War tech stack, and an alternative will take their place.

Amodei has been outspoken about other aspects of AI and national security; from Bloomberg in January:

Anthropic Chief Executive Officer Dario Amodei said selling advanced artificial intelligence chips to China is a blunder with "incredible national security implications" as the US moves to allow Nvidia Corp. to sell its H200 processors to Beijing. "It would be a big mistake to ship these chips," Amodei said in an interview with Bloomberg Editor-in-Chief John Micklethwait at the World Economic Forum in Davos, Switzerland.
“I think this is crazy. It’s a bit like selling nuclear weapons to North Korea.” This rather raises the stakes of a messy procurement decision: consider the implications if we take Amodei’s analogy literally. Start with Iran: beyond the fact that Iran has been responsible for the deaths of thousands of Americans throughout the Middle East and beyond, one of the arguments for the U.S. intervention is that Iran continues to pursue nuclear weapons capabilities. It’s North Korea that shows why: North Korea doesn’t need to buy nuclear weapons, because they already have them, and it certainly makes any sort of potential military action against them considerably more complicated. Nuclear weapons make you an effective lawyer in the (nonexistent 1) court of international law! In short, nuclear weapons meaningfully tilt the balance of power; to the extent that AI is of equivalent importance is the extent to which the United States has far more interest in not only what Anthropic lets it do with its models, but also what Anthropic is allowed to do period. This, I think, gives important context to the designation of Anthropic as a supply chain risk. Secretary of War Pete Hegseth said on X: In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. This would decimate Anthropic: at a bare minimum the company relies on cloud hosting from AWS, Microsoft, and Google, all of which have contracts with the Department of War; I imagine the same applies to Nvidia. Fortunately for the company, Hegseth’s declaration does seem out of step with the law, which limits Hegseth’s authority to work covered by U.S. 
government contracts; in other words, AWS could still serve Anthropic models, as long as it doesn’t use Anthropic models for any of its services offered to the U.S. government. Regardless, this is an extreme measure that has been met with near universal dismay, even amongst people who are sympathetic to the idea that a private company should not have veto power over the U.S. military. Why would the U.S. government want to kneecap one of its AI champions? In fact, Amodei already answered the question: if nuclear weapons were developed by a private company, and that private company sought to dictate terms to the U.S. military, the U.S. would absolutely be incentivized to destroy that company. The reason goes back to the question of international law, North Korea, and the rest: Anthropic talks a lot about alignment; this insistence on controlling the U.S. military, however, is fundamentally misaligned with reality. Current AI models are obviously not yet so powerful that they rival the U.S. military; if that is the trajectory, however — and no one has been more vocal in arguing for that trajectory than Amodei — then it seems to me the choice facing the U.S. is actually quite binary: Note that I’m not making the (very good) argument put forward by Anduril founder Palmer Luckey about the importance of democratic oversight; Luckey wrote on X : This gets to the core of the issue more than any debate about specific terms. Do you believe in democracy? Should our military be regulated by our elected leaders, or corporate executives?… The fact that this is a debate over AI does not change the underlying calculus. The same problems apply to definitions and use of ethically fraught but important capabilities like surveillance systems or autonomous weapons. It is easy to say “But they will have cutouts to operate with autonomous systems for defensive use!”, but you immediately get into the same issues and more — what is autonomous? What is defensive? 
What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive? At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corpos and their shadow advisors. I still believe. And that is why “bro just agree the AI won’t be involved in autonomous weapons or mass surveillance why can’t you agree it is so simple please bro” is an untenable position that the United States cannot possibly accept. Again, I think this is a good argument; the one I am putting forward, however, is much more basic and brutal, and doesn’t have anything to do with belief or not in the American experiment (although I’m with Luckey in that regard): it simply isn’t tolerable for the U.S. to allow for the development of an independent power structure — which is exactly what AI has the potential to undergird — that is expressly seeking to assert independence from U.S. control. I don’t, for the record, want Anthropic to be destroyed, and I want them to be a U.S. AI champion. I also, for the record, don’t trust Amodei’s judgment in terms of either national security or AI security. In terms of national security, I already commented on Amodei’s Davos comments on X : Last year I laid out in AI Promise and Chip Precariousness why I believed a systemic view of the U.S.-China rivalry entailed some painful tradeoffs when it came to chips and China: The important takeaway that is relevant to this Article is that Taiwan is the flashpoint in both scenarios. 
A pivot to Asia is about gearing up to defend Taiwan from a potential Chinese invasion or embargo; a retrenchment to the Americas is about potentially granting — or acknowledging — China as the hegemon of Asia, which would inevitably lead to Taiwan’s envelopment by China. This is, needless to say, a discussion where I tread gingerly, not least because I have lived in Taipei off and on for over two decades. And, of course, there is the moral component entailed in Taiwan being a vibrant democracy with a population that has no interest in reunification with China. To that end, the status quo has been simultaneously absurd and yet surprisingly sustainable: Taiwan is an independent country in nearly every respect, with its own border, military, currency, passports, and — pertinent to tech — economy, increasingly dominated by TSMC; at the same time, Taiwan has not declared independence, and the official position of the United States is to acknowledge that China believes Taiwan is theirs, without endorsing either that position or Taiwanese independence. Chinese and Taiwanese do, in my experience, handle this sort of ambiguity much more easily than do Americans; still, gray zones only go so far. What has been just as important are realist factors like military strength (once in favor of Taiwan, now decidedly in favor of China), economic ties (extremely deep between Taiwan and China, and China and the U.S.), and war-waging credibility. Here the Ukraine conflict and the resultant China-Russia relationship looms large, thanks to the sharing of military technology and overland supply chains for oil and food that have resulted, even as the U.S. has depleted itself. That, by extension, gets at another changing factor: the hollowing out of American manufacturing under Pax Americana has been directly correlated with China’s dominance of the business of making things, the most essential war-fighting capability. 
Still, there is — or rather was — a critical factor that might give China pause: the importance of TSMC. Chips undergird every aspect of the modern economy; the rise of AI, and the promise of the massive gains that might result, only make this need even more pressing. And, as long as China needs TSMC chips, they have a powerful incentive to leave Taiwan alone. The key thing to consider is the opposite scenario: cutting China off from advanced chips doesn’t just reduce the likelihood that Chinese companies are dependent on a U.S.-based ecosystem, it also reduces the cost of destroying TSMC. More than that, if AI becomes as capable as Amodei says it will — the equivalent, or more, of nuclear weapons — then it actually becomes game theory optimal for China to do exactly that: if China can’t have AI, then it is, at least under current circumstances, relatively easy to make sure that nobody does. Amodei is, as the quote above notes, cognizant of China as a threat generally; it concerns me that he consistently fails to acknowledge that the implication of his recommended course of action in terms of chip controls is to risk destroying AI for everybody. Then again, Amodei isn’t really a fan of AI for everybody: he and Anthropic have been vocal opponents of open source models, and were major drivers of what I considered a very misguided Biden executive order about AI . Like the Taiwan situation, I think these positions evince a failure to think systematically: There is certainly room for disagreement on these points; what concerns me about Amodei and Anthropic in particular is the consistent pattern of being singularly focused on being the one winner with all of the power, with limited consideration of how everyone else may react to that situation. Or, to be more blunt, the reality that other people exist and they have guns and missiles and yes, nuclear weapons. 
Might still makes right, and I personally would rather not hand over the future of humanity to a person and a company that seems to consistently forget that fact. I do think this post on X from Ramez Naam is the most optimistic way to frame the debate this weekend: I do have tremendous discomfort about AI’s surveillance capabilities in particular; there are a lot of safeguards we thought we had that were actually mostly due to the friction entailed in overcoming them. AI, even more than computers and the Internet, is a friction solvent, and I completely understand why Anthropic’s pushback on this specific point resonates broadly. The way to address this new reality, however, is with new laws and through strengthening accountable oversight; cheering or even demanding that an unelected executive decide how and where such powerful capabilities can be used is the road to an even more despotic future. Our adversaries, meanwhile, will certainly be developing autonomous fighting capabilities (and yes, I admit my chip prescriptions make this more likely much sooner — tradeoffs are hard!); the U.S. will need to move in this direction if we are to remain the ultimate source of international law. And, by the U.S., I mean a democratically elected President and Congress, not a San Francisco executive. I don’t want that, and, more pertinently, the ones with guns aren’t going to tolerate it. Anthropic needs to align itself with that reality. Yes, The Hague exists; it’s subject to all of the same limitations as the United Nations  ↩ Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI. 
For example, under current law, the government can purchase detailed records of Americans’ movements, web browsing, and associations from public sources without obtaining a warrant, a practice the Intelligence Community has acknowledged raises privacy concerns and that has generated bipartisan opposition in Congress. Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale. Fully autonomous weapons. Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer. In addition, without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails, which don’t exist today. International law is ultimately a function of power; might makes right. There are some categories of capabilities — like nuclear weapons — that are sufficiently powerful to fundamentally affect the U.S.’s freedom of action; we can bomb Iran, but we can’t bomb North Korea. To the extent that AI is on the level of nuclear weapons — or beyond — is the extent to which Amodei and Anthropic are building a power base that potentially rivals the U.S. military. Option 1 is that Anthropic accepts a subservient position relative to the U.S. 
government, and does not seek to retain ultimate decision-making power about how its models are used, instead leaving that to Congress and the President. Option 2 is that the U.S. government either destroys Anthropic or removes Amodei. First, were there only closed AI systems, then unimaginable power would be vested in the owners of those systems; it seems that Amodei thinks that power should be wielded by him (at a minimum, I would prefer that be wielded by the U.S. government). Second, the idea that AI safety can only be guaranteed by a limited number of responsible stewards ignores the massive incentives that exist to build other models. This was clear years ago when only a few companies were working on AI models, and has been proven out by what has happened in reality so far. Third, in a world of AI proliferation, the best defense against AI will be AI; this means that more AI is actually safer than limited AI, which means open source is ultimately safer.


small thoughts part 8

In ‘small thoughts’ posts, I’m posting a collection of short thoughts and opinions that don’t warrant their own post. :) It's been a while! I’m looking back and am so grateful for everything I got myself through. The times I was alone, sick, in pain but still went to appointments, walked the dog, got groceries and picked up meds. The way I still always kept my home clean or resolved a pile of dishes after a few days. The way I would plan self care for myself; baths, making myself good meals, booking massages, scheduling walks in the forests, making playlists for these walks. Making time to stretch, to meditate, to do a little ritual for myself, or the evenings I spent hours helping strangers online while sipping on my tea, feeling cozy, safe, content, in my own world. I remember all the times I set out to watch something either on my TV or PC and prepared a thermos so I’d have lots of tea and not have to get up, and arranged cookies and nuts and some bread or fruits on a board for me. All the creams on my face and body, my hair. I’m so glad I did that. Now my wife does a lot of these for me. I think one day we will look back on this and realize we lived the dream. Just buying whatever we want at the grocery store, buying a lot for our shared niche hobbies, my wife being home all the time due to being unemployed, me being home most of the week, home office after all work’s done spent taking baths and gaming and grocery shopping and painting and watching things together, cleaning together, getting nails done… I used to think peoplewatching is for judging them, because that’s what my mum always did. But you can just watch them neutrally, or even compliment them in your head. People get less scary to me after spending time peoplewatching. It’s like in everyday life, they’re like cars I dodge on my way to something, and bad experiences stick out for longer. But when I am just a body observing somewhere in a corner, everyone is so human to me. 
So many people look interesting to talk to. I see little details on them that tell a story. Maybe I should make it a habit to sit in this café weekly, observing, sitting there with my notebook, and trying to talk to people who look inviting and like they wouldn’t mind. It would be a good practice for my hesitancy to talk to others, too. Too bad I usually have a job to do around this time. I guess I could try working from here, but it’s less nice. I always recover from a sort of work-induced misanthropy during time off, and when I have to work with people again or commute, it all comes back. Do I idealize people once they’re strangers from a distance, and just notice how rotten people are once I get close and am affected by their actions? I hate how my job burns me out on people and it’s not even customer-facing; it’s other employees causing me to feel that way. I wonder what the truth is; if my job is a bad influence on my view on people, or if it’s easy to love them from afar. Maybe both. The truth might be in the middle. I could work a job that makes me love people more, and I can acknowledge that it’s easy to think a stranger seems nice when you don’t actually know them. I regret leaving my notebook at home. I’d prefer to write this in there and not type it in my phone. Thinking about how it has never been easier to socialize, technically . Yes, third spaces disappear, yes less being outside; but all the messaging, video calling, social media, feeds, aggregators etc. lets you meet hundreds of people so quickly. Your selection to choose from is so much bigger than just locally. There’s more opportunities for travel for the average person compared to just 100 years ago, too. Lots like that, and still we complain about disconnection. 
I see it and I think, maybe it’s not necessarily that we live in disconnected times in general; it’s that you replaced connection with consumption of podcasts, and you frequently leave or never even join messaging servers and group chats, and you delete your accounts and purge your friend lists every couple months. You put off responding to messages and emails, and you lurk in most spaces you have accounts in, and you lock your profile and hide yourself from feeds. So? How are you supposed to capitalize on the social aspect of it all? It would be impossible to create a tool to help you. Change has to come from you. You have to open yourself up to receive love. Reply via email Published 02 Mar, 2026

Xe Iaso Today

The Unbound Scepter

Nobody warns you about the dreams. Not properly. Yesterday I killed my inner Necron — wrote the whole thing by voice from my hospital bed, felt the deepest peace of my life, went to sleep on whatever cocktail of post-op medications they had me on. Seroquel and Xanax, among other things. Doctors mention "vivid dreams" as a Seroquel side effect like it's nothing. Vivid. That word is doing an extraordinary amount of heavy lifting for what actually happened to me last night. Content warning: this post documents a medication-induced nightmare and gets into some heavy territory around belief systems, vulnerability, and psychological symbolism. These are prescribed medications, not recreational substances. If you're not in the headspace for this right now, it'll be here when you are. Last night I had a dream that was structured enough to have a narrator, a symbolic child heir, and a thesis statement delivered directly to my face before I woke up. I'm not exaggerating. I'm treating this as a trip report because honestly that's what it was. The details are already going fuzzy but the core of it burned in hard enough that I'm typing this up before it fades. Here's what I remember. The dream opened in a mall. Fluorescent lights, tile floors that went on forever, the works. There was an Old Navy ahead of me. But the world had gone full Purge — total lawlessness, everything collapsed — and the Old Navy staff had barricaded themselves inside and were defending it. Like, actively. With the energy of a last stand. My brain decided that in the post-apocalypse, the hill worth dying on was affordable basics. I was naked. Completely exposed, standing in the middle of all this, and I needed to get into that store. Not like "oh I should get dressed" — the desperation was animal-level. Find clothes. Cover yourself. The staff wouldn't let me in. Every step felt like wading through mud. You know that dream thing where your legs just won't work? 
Thirty feet to Old Navy and I could not close the distance. It was right there . At the center of everything stood a child. A boy, maybe eight or nine, but carrying himself like royalty. In the dream's logic he was the heir to Old Navy — I know how that sounds, but the dream was completely serious about it. He was the successor to this throne. Around his head he had this triangular scepter that worked as both crown and weapon. He kept showing up ahead of me, always blocking the way forward. The scepter was sealed. The triangle was closed — every vertex connected, no way in, no way out. And I just knew what that meant, the way you know things in dreams without anyone telling you: his belief system was a closed loop. Totally self-referencing. Nothing could get in and nothing could escape, and he had no idea, because from inside a sealed triangle there's no such thing as "outside." This maps to what epistemologists call a closed epistemic loop — a belief structure where all evidence gets interpreted through the existing framework, making disconfirmation structurally impossible. Conspiracy theories work this way. So do certain theological traditions. So do some software architectures, honestly. Standing near the child was a black mage. And I mean the Final Fantasy kind — tall, robed, face hidden in shadow. I'd literally been writing about Final Fantasy yesterday so I guess my brain had the assets loaded. But he wasn't threatening. He was... explaining things? Like a tour guide for whatever my subconscious was trying to show me. Very patient. Very calm. Spoke directly to me about what I was seeing. His subject was how belief systems work. He called them principalities of the mind — self-contained little kingdoms where every belief props up every other belief. Contradictions bounce off. The whole thing holds together through pure internal consistency, even when there's nothing underneath it. You can't see the foundation from inside. 
The child heir was his example — look, here's what a sealed principality looks like when you give it a body and a crown. Movement never got easier. I kept pushing through the mud, the child kept showing up with that sealed scepter catching the light, and the mage just... kept talking. Honestly it was like being in the world's most surreal college lecture. I couldn't take notes. I was naked and covered in dream-molasses. And then everything started dissolving. The mall went first, then the Old Navy fortress, then the chaos outside — all of it pulling apart. But the mage stayed. He looked right at me. Not past me, not through me — at me. And he said: "Your scepter is unbound — do with this what you will." I woke up and lay there for a long time. The contrast hit me while I was staring at the hospital ceiling. The child's scepter was sealed — a closed system that couldn't take in anything new. Perfect, complete, and totally stuck . Mine was unbound. Whatever that meant. I honestly don't know if this is my unconscious mind processing the surgery, the medication doing something weird to my REM cycles, or just the kind of thing that happens when you stare down your own mortality and then your brain has opinions about it. What I do know is that the symbolism was so on-the-nose it felt like getting a lecture from my own subconscious. In chaos magick — which, yes, is a real thing I've read about, I'm not just making this up — there's this concept that beliefs are tools. You pick one up, use it, put it down when it stops being useful. It's not who you are. It's "a person's preferred structure of reality," emphasis on preferred . You can swap it out. Principalities of the mind are what happens when you forget your beliefs are a tool and start treating them like physics. The triangle seals shut. The scepter becomes a prison you can't see from inside. 
And the part the black mage was so patient about — the really messed up part — is that from inside a sealed principality, everything seems fine. Your beliefs are consistent, reality makes sense, and you have no idea you're trapped because the cage is made of your own assumptions. An unbound scepter is the opposite of comfortable. Your worldview has gaps in it, entry points where new information can come in and rearrange everything. That's scary. But it also means you can actually change, which is more than the heir could say. Wait, so the good outcome here is having a belief system with holes in it? I mean... kind of? A sealed scepter means you never have to doubt anything but you also never grow. An unbound one is overwhelming but at least you can move . The heir was frozen. Perfect and still and going absolutely nowhere forever. Maybe that's why I couldn't move in the dream. Wading through mud, barely able to take a step — but I was taking steps. The heir just stood there. He didn't struggle because he had nowhere to go. His triangle was already complete. "Do with this what you will." That's what the mage said. Not telling me what to do with it. Just... handing me the choice. An unbound scepter doesn't come with instructions. I think the dream was telling me something I already knew. Or maybe reminding me that knowing it once isn't enough — you have to keep choosing to stay open. The triangle is always trying to close. Your scepter is unbound. Do with this what you will. Now if you'll excuse me, I have a hospital discharge to survive and a husband to hug.


You can't always fix it

I have some weird hobbies, and one of those is opening up the network tab on just about anything I'm using. Sometimes, I find egregious problems. Usually, this is something that can be fixed, when responsibly reported. But over time, I learned a bitter lesson: sometimes, you can't get it fixed. Recently, I was waiting for a time-sensitive delivery of medication. It used a courier company which focused on just delivering prescription medications. I opened up the tracking page on my computer, and saw the information I wanted: the medication would probably arrive around 6 PM. But... what if there's more? And what are they doing with my data? Can anyone else see it? So I peeked at the network tools, and was disappointed by what I saw. The first time this happened, I was surprised. By now, I expect to see this. And what I saw was every customer's address along the delivery route. I also saw how much the courier would get paid per stop, what their hourly rate was, and the driver's GPS coordinates (though these were sometimes missing). After the package was delivered, the tracking page changed and displayed a feedback form, my signature, and a picture of my porch. The JSON payload no longer included the entire route, but it included my address, and the payload from an easily guessable related endpoint did still contain the entire route. And that route? It included other recipients' ids, which can be used to find their home addresses, names, contents of the package (sometimes), a photo of their porch, and a copy of their signature. Um. This is bad, right? I've actually found approximately this vulnerability in two separate couriers' tracking pages (and they're using different software). One of them was even worse for them: it included their Stripe private key, I suppose as a bug bounty for people without ethics. And each time I find it, I try to report it. And I fail. They don't let me report it. These companies don't list security contacts. 
The staff I can find on LinkedIn or their website don't have email addresses that I can find or guess. Mail sent to the addresses I do find listed has all bounced. I tried going through back channels. I messaged the pharmacy which was using this courier. I talked to my prescriber, who was shocked at this issue. And the next time I got a delivery, it came via UPS instead (they do not have a leaky sieve for a tracking page, but they did "lose" my prescription once). But I don't know if they just did that for me , the miscreant who looks at her network tools? Or did they switch everyone over to a different courier? Either way, at least my data was safe now, right? It was, until I started using a different pharmacy, and this one is back to using the leaky couriers again. Sigh. I got pretty upset about this at one point. There's a security issue! Data is being leaked, I must get this fixed! And someone told me something really wise: "it's not your responsibility to fix this, and you've done everything you can (and more than you had to)." And ultimately, she was right. I was getting myself worked up about it, but it's not my responsibility to fix. Sometimes there will be things like this that are bad, that I cannot fix, and that I have to accept. So, where do I go from here? I could probably publicly name-and-shame the couriers, but it would not do anything productive. It would not get their attention to fix it, and it wouldn't be seen by the folks who need to know (pharmacists and prescribers). So I'm not going to disclose the specific company, because the main thing it would do is risk me getting in legal trouble, for dubious benefit. I've already notified the pharmacists and prescribers that I know; it's on them, if they want to let anyone else know.
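For what it's worth, the fix is not exotic: it's ordinary authorization scoping on the server. Here's a rough sketch of what the tracking endpoint should do before responding — filter the route payload down to the authenticated recipient's own stop and drop courier-compensation fields entirely. Field names like `stops` and `driver_pay_per_stop` are hypothetical stand-ins, not the couriers' actual schema.

```python
# Sketch of server-side scoping for a route payload. All field names are
# hypothetical; the point is that the filtering happens before the JSON
# leaves the API, so other recipients' data never reaches the browser.

def scope_tracking_payload(payload: dict, recipient_id: str) -> dict:
    """Return a copy of the route payload containing only the caller's own
    stop, with driver-compensation fields stripped out."""
    own_stops = [
        {k: v for k, v in stop.items() if k != "driver_pay_per_stop"}
        for stop in payload.get("stops", [])
        if stop.get("recipient_id") == recipient_id
    ]
    return {
        "route_id": payload.get("route_id"),
        "stops": own_stops,
        # Driver hourly rate and live GPS simply never belong in a
        # customer-facing response, so they aren't copied over at all.
    }

# A made-up leaky payload of the shape described above:
route = {
    "route_id": "r-1",
    "driver_hourly_rate": 18.50,
    "stops": [
        {"recipient_id": "a", "address": "123 Elm St", "driver_pay_per_stop": 4.0},
        {"recipient_id": "b", "address": "456 Oak Ave", "driver_pay_per_stop": 4.0},
    ],
}
print(scope_tracking_payload(route, "a"))
```

The same principle covers the "easily guessable related endpoint": every object lookup needs an ownership check tied to the authenticated session, because an unguessable URL on its own is not access control.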


Expert Beginners and Lone Wolves will dominate this early LLM era

After migrating this blog from a static site generator into Drupal in 2009, I noted: As a sad side-effect, all the blog comments are gone. Forever. Wiped out. But have no fear, we can start new discussions on many new posts! I archived all the comments from the old 'Thingamablog' version of the blog, but can't repost them here (at least, not with my time constraints... it would just take a nice import script, but I don't have the time for that now).

Jim Nielsen Yesterday

Book Notes: “Blood In The Machine” by Brian Merchant

For my future self, these are a few of my notes from this book. A take from one historian on the Luddite movement: If workmen disliked certain machines, it was because of the use to which they were being put, not because they were machines or because they were new. Can’t help but think of AI. I don’t worry about AI becoming AGI and subjugating humanity. I worry that it’s put to use consolidating power and wealth into the hands of a few at the expense of many. The Luddites smashed things: to destroy, specifically, ‘machinery hurtful to commonality’ — machinery that tore at the social fabric, unduly benefitting a single party at the expense of the rest of the community. Those who deploy automation can use it to erode the leverage and earning power of others, to capture for themselves the former earnings of a worker. It’s no wonder CEOs are all about their employees using AI: it gives them the leverage. Respect for the natural rights of humans has been displaced in favor of the unnatural rights of property. Richard Arkwright was an entrepreneur in England. His “innovation” wasn’t the technology for spinning yarn he invented (“pieced together from the inventions of others” would be a better wording), but rather the system of modern factory work he created for putting his machines to work. Arkwright’s “main difficulty”, according to early business theorist Andrew Ure, did not “lie so much in the invention of a proper mechanism for drawing out and twisting cotton into a continuous thread, as in […] training human beings to renounce their desultory habits of work and to identify themselves with the unvarying regularity of the complex automaton.” This was his legacy […] for all his innovation, the secret sauce in his groundbreaking success was labor exploitation. Not much has changed (which is kind of the point of the book). 
The model for success is:
- Look at the technologies of the day
- Recognize what works and could turn a profit
- Steal the ideas and put them into action with unmatched aggression and shamelessness

As the author says: [Impose discipline and rigidity on workers, and adapt] them to the rhythms of the machine and the dictates of capital — not the other way around.


Step aside, phone: week 3

Three-quarters of the way through this “challenge”, and the findings are mostly the same. Phone usage is very easy to keep in check if you decide to put your mind to it. The past seven days have been very similar to the previous seven, and that’s good, since this type of phone usage needs to become the new normal. Contrary to the previous week, this time it was the first half of the week that saw higher usage, and that was mostly due to a few long Telegram sessions late in the day on Monday and Tuesday. 44 of the 54 minutes logged on Monday, and 32 of the 45 logged on Tuesday, were spent on Telegram. Only 26 minutes out of 46 on Wednesday; the rest of the usage was work-related, since I had to do a few phone calls and test a couple of things on mobile Safari. The second half of the week saw a lot less phone time, but I did have to spend a lot more time at my computer, taking care of client stuff, and that’s why I barely picked up the phone. Which is fine. I still have not consumed content on the phone, three weeks in. That’s awesome, and I want it to stay that way. Again, very pleased with how this month-long experiment is going, and I do have some takeaways, but I’ll wait until next Sunday to share them.

<antirez> Yesterday

Redis patterns for coding

Here LLMs and coding agents can find: 1. Exhaustive documentation about Redis commands and data types. 2. Commonly used patterns. 3. Configuration hints. 4. Algorithms that can be built using Redis commands. https://redis.antirez.com/ Some humans claim this documentation is actually useful for actual people, as well :) I'm posting this to make sure search engines will index it.

devansh Yesterday

Hacking Better-Hub

Better-Hub ( better-hub.com ) is an alternative GitHub frontend — a richer, more opinionated UI layer built on Next.js that sits on top of the GitHub API. It lets developers browse repositories, view issues, pull requests, code blobs, and repository prompts, while authenticating via GitHub OAuth. Because Better-Hub mirrors GitHub content inside its own origin, any unsanitized rendering of user-controlled data becomes significantly more dangerous than it would be on a static page — it has access to session tokens, OAuth credentials, and the authenticated GitHub API. That attack surface is exactly what I set out to explore. Description The repository README is fetched from GitHub and run through a markdown-to-HTML pipeline with raw HTML passthrough enabled — with zero sanitization — then stored in component state and injected directly into the page. Because the README is entirely attacker-controlled, any repository owner can embed arbitrary JavaScript that executes in every viewer's browser on better-hub.com. Steps to Reproduce Impact Session hijacking via cookie theft, credential exfiltration, and full client-side code execution in the context of better-hub.com. Chains powerfully with the GitHub OAuth token leak (see vuln #10). Description Issue descriptions are rendered with the same vulnerable pipeline: markdown to HTML with raw HTML allowed and no sanitization. The resulting HTML is inserted directly into the DOM inside the thread entry component, meaning a malicious issue body executes arbitrary script for every person who views it on Better-Hub. Steps to Reproduce Impact Arbitrary JavaScript execution for anyone viewing the issue through Better-Hub. Can be used for session hijacking, phishing overlays, or CSRF-bypass attacks. Description Pull request bodies are fetched from GitHub and processed through the same pipeline with raw HTML enabled and no sanitization pass, then rendered unsafely. An attacker opening a PR with an HTML payload in the body causes XSS to fire for every viewer of that PR on Better-Hub. Steps to Reproduce Impact Stored XSS affecting all viewers of the PR. 
Particularly impactful in collaborative projects where multiple team members review PRs. Description The same unsanitized pipeline applies to PR comments. Any GitHub user who can comment on a PR can inject a stored XSS payload that fires for every Better-Hub viewer of that conversation thread. Steps to Reproduce Impact A single malicious commenter can compromise every reviewer's session on the platform. Description The image proxy endpoint proxies GitHub repository content and determines the response Content-Type from the file extension in the query parameter. For SVG files it sets an SVG Content-Type and serves the content inline rather than as a download. An attacker can upload a JavaScript-bearing SVG to any GitHub repo and share a link to the proxy endpoint — the victim's browser executes the script within better-hub.com's origin. Steps to Reproduce Impact Reflected XSS with a shareable, social-engineered URL. No interaction with a real repository page is needed — just clicking a link is sufficient. Easily chained with the OAuth token leak for account takeover. Description When viewing code files larger than 200 KB, the application hits a fallback render path that outputs raw file content without any escaping. An attacker can host a file exceeding the 200 KB threshold containing an XSS payload — anyone browsing that file on Better-Hub gets the payload executed. Steps to Reproduce Impact Any repository owner can silently weaponize a large file. Because code review is often done on Better-Hub, this creates a highly plausible attack vector against developers reviewing contributions. Description The file-content handler reads file content from a shared Redis cache. Cache entries are keyed by repository path alone — not by requesting user. The cached entry is marked as shareable, so once any authorized user views a private file through the handler or the blob page, its contents are written to Redis under a path-only key. Any subsequent request for the same path — from any user, authenticated or not — is served directly from cache, completely bypassing GitHub's permission checks. 
Steps to Reproduce Impact Complete confidentiality breach of private repositories. Any file that has ever been viewed by an authorized user is permanently exposed to unauthenticated requests. This includes source code, secrets in config files, private keys, and any other sensitive repository content. Description A similar cache-keying problem affects the issue page. When an authorized user views a private repo issue on Better-Hub, the issue's full content is cached and later embedded in Open Graph meta properties of the page HTML. A user who lacks repository access — and sees the "Unable to load repository" error — can still read the issue content by inspecting the page source, where it leaks in the meta tags served from cache. Steps to Reproduce Impact Private issue contents — potentially including bug reports, credentials in descriptions, or internal discussion — are accessible to any unauthenticated party who knows or guesses the URL. Description Better-Hub exposes a Prompts feature tied to repositories. For private repositories, the prompt data is included in the server-rendered page source even when the requester does not have repository access. The error UI correctly shows "Unable to load repository," but the prompt content is already serialized into the HTML delivered to the browser. Steps to Reproduce Impact Private AI prompts — which may contain internal instructions, proprietary workflows, or system prompt secrets — leak to unauthenticated users. Description The session helper returns a session object that includes the raw GitHub access token. This session object is passed as props directly to client components. Next.js serializes component props and embeds them in the page HTML for hydration, meaning the raw GitHub access token is present in the page source and accessible to any JavaScript running on the page — including scripts injected via any of the XSS vulnerabilities above. The fix is straightforward: strip the access token from the session object before passing it as props to client components. 
Token usage should remain server-side only. When chained with any XSS in this report, an attacker can exfiltrate the victim's GitHub OAuth token and make arbitrary GitHub API calls on their behalf — reading private repos, writing code, managing organizations, and more. This elevates every XSS in this report from session hijacking to full GitHub account takeover . Description The home page redirects authenticated users to the destination specified in a query parameter with no validation or allow-listing. An attacker can craft a login link that silently redirects the victim to an attacker-controlled domain immediately after they authenticate. Steps to Reproduce Impact Phishing attacks exploiting the trusted better-hub.com domain. Can be combined with OAuth token flows for session fixation attacks, or used to redirect users to convincing fake login pages post-authentication. All issues were reported directly to the Better-Hub team. The team was responsive and attempted rapid remediation. The Vulnerabilities
01. Unsanitized README → XSS
02. Issue Description → XSS
03. Stored XSS in PR Bodies
04. Stored XSS in PR Comments
05. Reflected XSS via SVG Image Proxy
06. Large-File XSS (>200 KB)
07. Cache Deception — Private File Access
08. Authz Bypass via Issue Cache
09. Private Repo Prompt Leak
10. GitHub OAuth Token Leaked to Client
11. Open Redirect via Query Parameter
Steps to reproduce, per finding: Create a GitHub repository with the XSS payload in its README, then view the repository through Better-Hub and observe the popup. Create a GitHub issue with the payload in the body, then navigate to the issue via Better-Hub to trigger it. Open a pull request whose body contains the payload, then view the PR through Better-Hub to observe the popup. Post a PR comment containing the payload, then view the comment thread via Better-Hub to trigger the XSS. 
Create an SVG file with a script payload in a public GitHub repo and direct the victim to the proxy URL for it. Create a file containing the payload, padded to exceed 200 KB, then browse to the file on Better-Hub; the XSS fires immediately. Create a private repository and add a file. As the repository owner, navigate to the file's URL to populate the cache, then open the same URL in an incognito window or as a completely different user: the private file content is served — no authorization required. Create a private repo and an issue with a sensitive body. Open the issue as an authorized user, then open the same URL in a different session (no repo access). While the access-error UI is shown, view the page source — the issue details appear in the meta tags. Create a private repository and create a prompt in it. Open the prompt URL as an unauthorized user and view the page source — the prompt details are present in the HTML despite the access-denied UI. Log in to Better-Hub with GitHub credentials, then open the home page with an attacker-controlled redirect parameter: you are immediately redirected to the attacker's destination.
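The shapes of the fixes implied above can be sketched in a few lines of JavaScript. All helper names here are hypothetical — this is not Better-Hub's actual code, just an illustration of each remediation (and of the output-encoding principle behind the XSS findings):

```javascript
// Hypothetical sketches -- not Better-Hub's actual code.

// Findings 01-06: never hand attacker-controlled markup to the DOM verbatim.
// A real markdown pipeline needs an allowlist sanitizer (e.g. DOMPurify);
// plain escaping below just illustrates why raw insertion is exploitable.
function escapeHtml(s) {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Finding 07: cache keys must include the requesting identity, not just the
// repository path, or private content is served across users.
function cacheKey(userId, owner, repo, path) {
  return `file:${userId}:${owner}/${repo}/${path}`;
}

// Finding 10: strip the GitHub token before the session object is serialized
// into client-component props for hydration.
function toClientSession(session) {
  const { accessToken, ...safe } = session;
  return safe; // the token stays server-side
}

// Finding 11: only follow same-origin, path-style redirect targets.
function safeRedirect(target) {
  return target.startsWith("/") && !target.startsWith("//") ? target : "/";
}

console.log(escapeHtml('<img src=x onerror="alert(1)">'));
console.log(cacheKey("u42", "acme", "repo", "config.yml"));
console.log(toClientSession({ user: "dev", accessToken: "gho_x" }));
console.log(safeRedirect("https://evil.example/phish"));
```

Note the extra `//` check in the redirect helper: protocol-relative URLs like `//evil.example` start with a slash but still leave the origin, so a plain "starts with /" test is not enough.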

Daniel Mangum Yesterday

Fooling Go's X.509 Certificate Verification

Below are two X.509 certificates. The first is the Certificate Authority (CA) root certificate, and the second is a leaf certificate signed by the private key of the CA.

ca.crt.pem

-----BEGIN CERTIFICATE-----
MIIBejCCASGgAwIBAgIUda4UvlFzwQEO/fD0f4hAnj+ydPYwCgYIKoZIzj0EAwIw
EjEQMA4GA1UEAxMHUm9vdCBDQTAgFw0yNjAyMjcxOTQ3NDZaGA8yMTI2MDIwMzE5
NDc0NlowEjEQMA4GA1UEAxMHUm9vdCBDQTBZMBMGByqGSM49AgEGCCqGSM49AwEH
A0IABKL5BB9aaQ2TtNgUymEsa/+s2ZlTXVll0N22KKWxh0N/JdgHcjrKfzqRlVrt
UN2GXdvsdLOq15TxBq97WvE07lKjUzBRMB0GA1UdDgQWBBTAVEw9doSzY1DuPVxP
EnwEp/+VJDAfBgNVHSMEGDAWgBTAVEw9doSzY1DuPVxPEnwEp/+VJDAPBgNVHRMB
Af8EBTADAQH/MAoGCCqGSM49BAMCA0cAMEQCIHrSTk/KJHAjn3MC/egvfxMM1NpG
GEzMB7EH+VXWz7RfAiAyhwy4E9hc8/qsTI+4iKf2o/zMRu5H2GNJOLqOngglbQ==
-----END CERTIFICATE-----

leaf.crt.pem

-----BEGIN CERTIFICATE-----
MIIBHjCBxAIULE3hvnYxU91g9c9H3+uGCSqXi4MwCgYIKoZIzj0EAwIwEjEQMA4G
A1UEAwwHUm9vdCBDQTAgFw0yNjAyMjcxOTQ3NDZaGA8yMTI2MDIwMzE5NDc0Nlow
DzENMAsGA1UEAwwEbGVhZjBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABKDZ21Yh
+1AQp1TrxrS8FquIVEHrFRSXncX9xl5vVhZFqvblzTp2Tg7TER5x7rHG1TIqQL1z
xDX4TB+nZOWkyAcwCgYIKoZIzj0EAwIDSQAwRgIhAMeo5t2d1RWL/SB0E+mvvIZP
jFT0wDWX1Bm26MtxRcf9AiEApG96fs70WF1JliFgzkTiNvbG7Gj4SvErZ9nNX/Lr
PnA=
-----END CERTIFICATE-----

If you downloaded these certificates, you could visually see that the latter references the former as its Issuer.

JSLegendDev Yesterday

If You Like PICO-8, You'll Love KAPLAY (Probably)

I’ve been checking out PICO-8 recently. For those unaware, it’s a nicely constrained environment for making small games in Lua. It provides a built-in editor allowing you to write code, make sprites, make tile maps and make sounds. This makes it ideal for prototyping game ideas or making small games. You know what tool is also great for prototyping game ideas or making small games? Well… KAPLAY ! It’s a simple, free and open source library for making games in JavaScript. I suspect there might be a sizeable overlap between people who like PICO-8 and those who would end up liking or even loving KAPLAY as well if they gave it a try. During my PICO-8 learning journey, I came across this nice tutorial teaching you how to make a coin collecting game in 10 minutes. In this article, I’d like to teach you how to build roughly the same game in KAPLAY. This will better demonstrate in what ways this game library makes game development faster, much like PICO-8. Feel free to follow along if you wish to! KAPLAY lacks all of the tools included in PICO-8. There is no all-in-one package you can use to write your code, make your sprites, build your maps or even make sounds. You might be wondering, then, how KAPLAY is in any way similar to PICO-8 if it lacks all of this? My answer: KAPLAY makes up for it by making the coding part really easy by offering you a lot of logic built-in. For example, it handles collisions, physics, scene management, animations, etc. for you. You’ll see some of this in action when we arrive at the part where we write the game’s code. Now, how do we use KAPLAY? Here’s the simplest way I’ve found. You install VSCode (a popular code editor) along with the Live Server extension (found in the extensions marketplace within the editor). You then create a folder that you open within VSCode. Once the folder is opened, we only need to create two files. One called index.html and the other main.js. 
Your index.html file should contain the following: Since KAPLAY works on the web, it lives within a web page. index.html is that page. Then, we link our JavaScript file to it. We set the type to “module” so we can use import statements in our JS. We then add the following: Voilà! We can now use the KAPLAY library. Since we installed the Live Server extension, you should now have access to a “Go Live” button at the bottom of the editor. To actually run the game, all you have to do is click it. This will open the web page in your default browser. KAPLAY by default creates a canvas with a checkered pattern. One pretty cool thing with this setup is that every time you change something in your code and hit save (Ctrl+S, or Cmd+S on a Mac), the web page reloads and you can see your latest changes instantly. I’ve created the following spritesheet to be used in our game. Note that since the image is transparent, the cloud to the right is not really visible. You can download the image above to follow along. The next step is to place the image in the same folder as our HTML page and JavaScript file. We’re now ready to make our game.
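Before walking through each piece, here is roughly where we will end up — the whole coin catcher as a single sketch. The calls follow the KAPLAY functions discussed in this article (kaplay, loadSprite, setGravity, add, loop, onCollide), but the specific numbers (canvas size, gravity, speeds) are illustrative guesses of mine, so treat it as a starting point rather than a copy-paste solution. It runs in the browser page set up above:

```javascript
// Sketch of the full game, assuming the KAPLAY API described in this article.
kaplay({ width: 960, height: 540, letterbox: true });

loadSprite("atlas", "./spritesheet.png", { sliceX: 3, sliceY: 1 });
setGravity(1600);

// Sky-blue background rectangle.
add([rect(1000, 1000), color(135, 206, 235)]);

const basket = add([
  sprite("atlas", { frame: 0 }),              // frame 0 = basket
  anchor("center"),
  pos(center().x, height() - 40),
  area({ shape: new Rect(vec2(0), 60, 20) }), // hitbox
  body({ isStatic: true }),                   // collides, ignores gravity
]);

onKeyDown("left", () => basket.move(-400, 0));
onKeyDown("right", () => basket.move(400, 0));

let score = 0;
const scoreLabel = add([text("0"), pos(24, 24)]);

// Spawn a falling coin once per second at a random X position.
loop(1, () => {
  add([
    sprite("atlas", { frame: 1 }), // frame 1 = coin
    pos(randi(10, 950), 0),
    area(),
    body(),                        // affected by gravity
    offscreen({ destroy: true }),  // clean up coins that fall past the bottom
    "coin",                        // tag used for collision lookup
  ]);
});

basket.onCollide("coin", (coin) => {
  destroy(coin);
  score++;
  scoreLabel.text = String(score);
});
```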
KAPLAY’s add function is used to create a game object by providing an array of components. These components are offered by KAPLAY and come with prebuilt functionality. The rect() component sets the graphics of the game object to be a rectangle with a width and height of 1000. On the other hand, the color component sets its color. You should have the following result at this stage. Creating The Basket The basket is also a game object with several different components. Here is what each does: sprite() sets the sprite used by the game object. The first param is for providing the name of the sprite we want to use. Since we’re using a spritesheet which contains three different sprites in the same image, we need to specify the frame to use. The basket sprite corresponds to frame 0. anchor() By default, game objects are positioned based on their top-left corner. However, I prefer having it centered. The anchor component is for this purpose. pos() is used to set the position of the game object on the canvas. Here we also use center(), a KAPLAY-provided function that returns the coordinates of the center of the canvas. area() is used to set the hitbox of a game object. This will allow KAPLAY’s physics system to handle collisions for us. There is a debug mode you can access by pressing the f1 (fn+f1 on Mac) key which will make hitboxes visible. Example when debug mode is on. As for setting the shape of the hitbox, you can use the Rect class which takes 3 params. The first expects a vec2 (a data structure offered by KAPLAY used to hold a pair of values) describing where to place the hitbox relative to the game object. If set to 0, the hitbox will have the same position as the game object. The two remaining params set its width and height. Finally, the body component is used to make the game object susceptible to physics. If added alone, the game object will be affected by gravity. However, to prevent this, we can set the isStatic property to true. 
This is very useful, for example, in a platformer where platforms need to be static so they don’t fall. Here we can use the move method, available on all game objects, to make the basket move in the desired direction. The loop function spawns a coin every second. We use the randi function to set a random X position between 10 and 950. The offscreen component is used to destroy the game object once it’s out of view. Finally, a simple string “coin” is added alongside the array of components to tag the game object being created. This will allow us to determine which coin collided with the basket so we can destroy it and increase the score. Text can be displayed by creating a game object with the text component. To know when a coin collides with the basket, we can use its onCollide method (available by default). The first param of that method is the tag of the game object you want to check collisions with. Since we have multiple coins using the “coin” tag, the specific coin currently colliding with the basket will be passed as a param to the collision handler. Now we can destroy the coin, increase the score and display the new score. As mentioned earlier, KAPLAY does not have a map making tool. However, it does offer the ability to create maps using arrays of strings. For anything more complex, you should check out Tiled , which is also open source and made for that purpose. Where we place the # character in the string array determines where clouds will be placed in the game. Publishing a KAPLAY game is very simple. You compress your folder into a .zip archive and upload it to itch.io or any other site you wish to. The game will be playable in the browser without players needing to download it. Now, what if you’d like to make it downloadable as well? A very simple tool you can use is GemShell. It allows you to make executables for Windows/Mac/Linux in what amounts to a click. You can use the lite version for free. 
If you plan on upgrading, you can use my link to get 15% off your purchase. To be transparent, this is an affiliate link. If you end up purchasing the tool using my link, I’ll get a cut of that sale. I just scratched the surface with KAPLAY today. I hope it gave you a good idea of what it’s like to make games with it. If you’re interested in more technical articles like this one, I recommend subscribing to not miss out on future publications. Subscribe now In the meantime, you can check out the following:


Notes on Lagrange Interpolating Polynomials

Polynomial interpolation is a method of finding a polynomial function that fits a given set of data perfectly. More concretely, suppose we have a set of n+1 distinct points [1] : (x_0,y_0), (x_1,y_1), \dots, (x_n,y_n) And we want to find the polynomial coefficients \{a_0\cdots a_n\} such that: p(x)=a_0+a_1 x+a_2 x^2+\cdots+a_n x^n Fits all our points; that is p(x_0)=y_0 , p(x_1)=y_1 etc. This post discusses a common approach to solving this problem, and also shows why such a polynomial exists and is unique. When we assign all points (x_i, y_i) into the generic polynomial p(x) , we get: a_0+a_1 x_i+a_2 x_i^2+\cdots+a_n x_i^n=y_i \quad (i=0,\dots,n) We want to solve for the coefficients a_i . This is a linear system of equations that can be represented by the following matrix equation: \begin{pmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ \vdots & & & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_n \end{pmatrix} The matrix on the left is called the Vandermonde matrix . This matrix is known to be invertible (see Appendix for a proof); therefore, this system of equations has a single solution that can be calculated by inverting the matrix. In practice, however, the Vandermonde matrix is often numerically ill-conditioned, so inverting it isn’t the best way to calculate exact polynomial coefficients. Several better methods exist. Lagrange interpolation polynomials emerge from a simple, yet powerful idea. Let’s define the Lagrange basis functions l_i(x) ( i \in [0, n] ) as follows, given our points (x_i, y_i) : l_i(x_j)=\begin{cases}1 & j=i \\ 0 & j\neq i\end{cases} In words, l_i(x) is constrained to 1 at x_i and to 0 at all other x_j . We don’t care about its value at any other point. The linear combination: p(x)=\sum_{i=0}^{n}y_i l_i(x) is then a valid interpolating polynomial for our set of n+1 points, because it’s equal to y_i at each x_i (take a moment to convince yourself this is true). How do we find l_i(x) ? The key insight comes from studying the following function: l'_i(x)=\prod_{j\neq i}(x-x_j) This function has terms (x-x_j) for all j\neq i . It should be easy to see that l'_i(x) is 0 at all x_j when j\neq i . What about its value at x_i , though? We can just assign x_i into l'_i(x) to get: l'_i(x_i)=\prod_{j\neq i}(x_i-x_j) And then normalize l'_i(x) , dividing it by this (constant) value. We get the Lagrange basis function l_i(x) : l_i(x)=\prod_{j\neq i}\frac{x-x_j}{x_i-x_j} Let’s use a concrete example to visualize this. 
Suppose we have the following set of points we want to interpolate: (1,4), (2,2), (3,3) . We can calculate l'_0(x) , l'_1(x) and l'_2(x) , and get the following: Note where each l'_i(x) intersects the x axis. These functions have the right roots at all x_{j\neq i} . If we normalize them to obtain l_i(x) , we get these functions: Note that each polynomial is 1 at the appropriate x_i and 0 at all the other x_{j\neq i} , as required. With these l_i(x) , we can now plot the interpolating polynomial p(x)=\sum_{i=0}^{n}y_i l_i(x) , which fits our set of input points: We’ve just seen that the linear combination of Lagrange basis functions: p(x)=\sum_{i=0}^{n}y_i l_i(x) is a valid interpolating polynomial for a set of n+1 distinct points (x_i, y_i) . What is its degree? Since the degree of each l_i(x) is n , the degree of p(x) is at most n . We’ve just derived the first part of the Polynomial interpolation theorem : Polynomial interpolation theorem : for any n+1 data points (x_0,y_0), (x_1, y_1)\cdots(x_n, y_n) \in \mathbb{R}^2 where no two x_j are the same, there exists a unique polynomial p(x) of degree at most n that interpolates these points. We’ve demonstrated existence and degree, but not yet uniqueness . So let’s turn to that. We know that p(x) interpolates all n+1 points, and its degree is at most n . Suppose there’s another such polynomial q(x) . Let’s construct: r(x)=p(x)-q(x) What do we know about r(x) ? First of all, its value is 0 at all our x_i , so it has n+1 roots . Second, we also know that its degree is at most n (because it’s the difference of two polynomials of such degree). These two facts are a contradiction. No non-zero polynomial of degree \leq n can have n+1 roots (a basic algebraic fact related to the Fundamental theorem of algebra ). So r(x) must be the zero polynomial; in other words, our p(x) is unique \blacksquare . Note the implication of uniqueness here: given our set of n+1 distinct points, there’s only one polynomial of degree \leq n that interpolates it. 
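The construction is short enough to check numerically. Here is a small sketch (mine, not from the original post) that builds p(x)=\sum_i y_i l_i(x) directly and evaluates it on the example points (1,4), (2,2), (3,3):

```javascript
// Lagrange interpolation: p(x) = sum_i y_i * l_i(x),
// with l_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j).
function lagrange(points) {
  return (x) =>
    points.reduce((sum, [xi, yi], i) => {
      let li = 1;
      points.forEach(([xj], j) => {
        if (j !== i) li *= (x - xj) / (xi - xj);
      });
      return sum + yi * li;
    }, 0);
}

const p = lagrange([[1, 4], [2, 2], [3, 3]]);
console.log(p(1), p(2), p(3)); // 4 2 3 -- reproduces the input points
```

Evaluating anywhere else gives the unique degree-at-most-2 polynomial through those three points.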
We can find its coefficients by inverting the Vandermonde matrix, by using Lagrange basis functions, or any other method [2] . The set P_n(\mathbb{R}) consists of all real polynomials of degree \leq n . This set - along with addition of polynomials and scalar multiplication - forms a vector space . We called l_i(x) the "Lagrange basis" previously, and they do - in fact - form an actual linear algebra basis for this vector space. To prove this claim, we need to show that Lagrange polynomials are linearly independent and that they span the space. Linear independence : we have to show that s(x)=\sum_{i=0}^{n}a_i l_i(x)=0 implies a_i=0 \quad \forall i . Recall that l_i(x) is 1 at x_i , while all other l_j(x) are 0 at that point. Therefore, evaluating s(x) at x_0 , we get: s(x_0)=a_0=0 Similarly, we can show that a_i is 0 for all i \blacksquare . Span : we’ve already demonstrated that the linear combination of l_i(x) : p(x)=\sum_{i=0}^{n}y_i l_i(x) is a valid interpolating polynomial for any set of n+1 distinct points. Using the polynomial interpolation theorem , this is the unique polynomial interpolating this set of points. In other words, for every q(x)\in P_n(\mathbb{R}) , we can identify any set of n+1 distinct points it passes through, and then use the technique described in this post to find the coefficients of q(x) in the Lagrange basis. Therefore, the set l_i(x) spans the vector space \blacksquare . Previously we’ve seen how to use the \{1, x, x^2, \dots x^n\} basis to write down a system of linear equations that helps us find the interpolating polynomial. This results in the Vandermonde matrix . Using the Lagrange basis, we can get a much nicer matrix representation of the interpolation equations. Recall that our general polynomial using the Lagrange basis is: p(x)=\sum_{i=0}^{n}a_i l_i(x) Let’s build a system of equations for each of the n+1 points (x_i,y_i) . For x_0 : p(x_0)=\sum_{i=0}^{n}a_i l_i(x_0) By definition of the Lagrange basis functions, all l_i(x_0) where i\neq 0 are 0, while l_0(x_0) is 1. So this simplifies to: p(x_0)=a_0 But the value at node x_0 is y_0 , so we’ve just found that a_0=y_0 . 
We can produce similar equations for the other nodes as well, p(x_1)=a_1 , etc. In matrix form: \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_n \end{pmatrix} We get the identity matrix; this is another way to trivially show that a_0=y_0 , a_1=y_1 and so on. Given some numbers \{x_0 \dots x_n\} , a matrix of this form: V=\begin{pmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ \vdots & & & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{pmatrix} is called the Vandermonde matrix. What’s special about a Vandermonde matrix is that we know it’s invertible when the x_i are distinct. This is because its determinant is known to be non-zero . Moreover, its determinant is [3] : \det(V)=\prod_{0\leq i<j\leq n}(x_j-x_i) Here’s why. To get some intuition, let’s consider some small-rank Vandermonde matrices. Starting with a 2-by-2: \det\begin{pmatrix}1 & x_0 \\ 1 & x_1\end{pmatrix}=x_1-x_0 Let’s try 3-by-3 now: \det\begin{pmatrix}1 & x_0 & x_0^2 \\ 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2\end{pmatrix} We can use the standard way of calculating determinants to expand from the first row: (x_1 x_2^2 - x_2 x_1^2) - x_0(x_2^2 - x_1^2) + x_0^2(x_2 - x_1) Using some algebraic manipulation, it’s easy to show this is equivalent to: (x_1-x_0)(x_2-x_0)(x_2-x_1) For the full proof, let’s look at the generalized n+1 -by- n+1 matrix again. Recall that subtracting a multiple of one column from another doesn’t change a matrix’s determinant. For each column k>1 , we’ll subtract the value of column k-1 multiplied by x_0 from it (this is done on all columns simultaneously). The idea is to make the first row all zeros after the very first element. Now imagine we erase the first row and first column of this reduced matrix; we’ll call the resulting matrix W . Because the first row of the reduced matrix is all zeros except the first element, we have: \det(V)=\det(W) Note that the first row of W has a common factor of x_1-x_0 , so when calculating \det(W) , we can move this common factor out. Same for the common factor x_2-x_0 of the second row, and so on. Overall, we can write: \det(V)=(x_1-x_0)(x_2-x_0)\cdots(x_n-x_0)\cdot\det(V') But the smaller matrix V' is just the Vandermonde matrix for \{x_1 \dots x_n\} . If we continue this process by induction, we’ll get: \det(V)=\prod_{0\leq i<j\leq n}(x_j-x_i) If you’re interested, the Wikipedia page for the Vandermonde matrix has a couple of additional proofs.
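The determinant formula is easy to sanity-check numerically. Here is a small sketch (mine, not from the post) comparing a direct 3-by-3 determinant against the product over pairs for x = {1, 2, 3}:

```javascript
// Verify det(V) = prod_{i < j} (x_j - x_i) on a small Vandermonde matrix.
const xs = [1, 2, 3];

// Build the Vandermonde matrix: row i is [1, x_i, x_i^2].
const V = xs.map((x) => xs.map((_, k) => x ** k));

// Direct 3x3 determinant via cofactor expansion along the first row.
function det3(m) {
  return (
    m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1]) -
    m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0]) +
    m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0])
  );
}

// The closed-form product over all pairs i < j.
let product = 1;
for (let i = 0; i < xs.length; i++)
  for (let j = i + 1; j < xs.length; j++) product *= xs[j] - xs[i];

console.log(det3(V), product); // both are 2 for {1, 2, 3}
```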

iDiallo Yesterday

“How old are you?” Asked the OS

A new law was passed in California requiring every operating system to collect the user's age at account creation time. The law is AB-1043 , passed in October 2025. How does it work? Does it apply to offline systems? When I set up my Raspberry Pi at home, is this enforced? What if I give an incorrect age? Am I breaking the law now? What if I set up my account correctly, but then my kids use the device? What happens? There is no way to enforce this law, but I suspect that's not the point. It's similar to statements you find in IRS documents. The IRS requires you to report all income from illegal activities, such as bribes and scams. Obviously, if you are getting a bribe, you wouldn't report it, but by not reporting it you are breaking additional laws that can be used to prosecute you. When you don't report your age to your OS, whether it's a Windows device or a Tamagotchi, you are breaking the law. It's not enforced, of course, but when you are suspected of any other crime, you can be arrested for the age violation first, then prosecuted for something else. What a world we live in.

Hugo Yesterday

Dogfooding: Why I Migrated My Own Blog to Writizzy

In 2022, I created an open-source static blog generator: Bloggrify . It’s conceptually similar to Hugo —it generates a static site (just a bunch of HTML files) that you can host for free on Cloudflare, GitHub Pages, or Bunny.net . Before that, I had tried everything: WordPress, Joomla, Medium. I wanted to regain flexibility and customize my blog exactly how I wanted. But let’s be honest: I’m a developer, and I mainly wanted a new technical playground. Fast forward to 2026, and I have to admit: using a static blog has become a major friction point for my writing. So, I decided to migrate again, this time to a managed platform: Writizzy , another product I’m building. This move is a great opportunity to talk about several things: Bloggrify started as a love letter to the Nuxt ecosystem, specifically nuxt-content . Back when I migrated from WordPress, my criteria were simple: In 2022, it wasn't a "product" yet—just my personal blog code made public. It only became a full-fledged open-source project in 2024, with a dedicated site and a proper README to encourage contributions. I wanted the product to be "opinionated." Nuxt-content does 90% of the heavy lifting, but it’s a generic tool. For a real blog, you still need to build the RSS feed, sitemap, robots.txt, comments, table of contents, share buttons, newsletter integration, analytics, and SEO. That’s what Bloggrify is: a "starter pack" that comes with everything pre-configured. Think of it as Docus , but for blogs instead of documentation. I’m a numbers person. When I launch a project, I want to see usage. It might sound trivial, but considering the effort it takes to manage NPM releases (which is honestly a nightmare), handle versioning, and maintain themes, you expect a minimum return on investment. Bloggrify reached 164 stars on GitHub and sits somewhere in the middle of the pack on Jamstack.org . That’s... okay, I guess. But in reality, I have almost zero feedback on its actual usage. 
A few rare GitHub issues, one contributor who was active for a few weeks before vanishing, and then silence. I only know of one blog that used it before switching back to Hugo. The experience has been bittersweet. Building in the dark is demotivating. However, it did lead me to launch two other side-products: I launched Broadcast and Pulse in 2024 and 2025. They’re living a quiet life, but they aren't "exploding." My target audience is static bloggers—mostly developers. And as we know, developers are the hardest group to convince to pay for a service! Still, I’m satisfied. These products taught me how to build a SaaS, handle subscriptions, and find my ideal tech stack. My own newsletters were sent via Broadcast (reaching about 150 subscribers), and I used Pulse to track which articles were actually being read. The reality? These two tools generate about €100 in Monthly Recurring Revenue (MRR) . Not enough to retire on, but a great learning experience. And that brings us to Writizzy. With Bloggrify, I realized my writing workflow had become painful. Between maintaining the framework, jumping between spell-checkers, writing in Markdown, spinning up a local server to check for broken links, and waiting for build/deployment times... I was losing hours. For my last article, someone pointed out a few typos. It took me 20 minutes between editing the file and seeing the fix live. Add to that the friction of managing images in an IDE and the recent Nuxt 4 / Nuxt-content updates which, while I love them, have made the dev experience slightly slower for simple blogging. To be honest, I wasn't aware of that. I put up with these inconveniences and was still very happy to have “flexibility” in what I could do with my blog. I wasn't fully aware of this "friction" until I built Writizzy . Writizzy is the synthesis of my blogging experience. 
It’s a mix of Substack, Ghost, and Medium, but built as a European alternative with four core pillars: I moved my English blog to Writizzy first (this one), with no intention of moving the french one. But I soon noticed I was writing much faster on the English site. The workflow was just... better. Copy-pasting images directly into the editor, instant previews, no server to launch. It was a joy. I hesitated for a long time before migrating eventuallycoding.com . I knew that by doing so, I was taking the risk of killing Bloggrify. If even I don't use it anymore, the project enters a danger zone. When you don’t use your own product daily—when you’re no longer obsessed with the problem it solves—it’s almost impossible to stay attached to it. This is a symptom I see in so many "disposable" projects across the internet: built by people who flutter from one idea to the next without any real skin in the game. So yes, moving away from Bloggrify is a risk. But I’ve come to terms with it. Today, I have almost zero evidence that Bloggrify is being used. Meanwhile, Writizzy already has 314 blogs and 11 paying users (€135 MRR) in just four months. Why stubbornly cling to Bloggrify? Ultimately, I believe I’m solving the same problem with Writizzy, but in a much better way. I receive feedback emails and feature requests every single week. I get constant positive reinforcement from people actually using it. The product isn’t perfect, but it improves every day. It improves because real users are pushing me to refine the site, fix what’s broken, and add the features that absolutely need to be there. And it also improves because I use it constantly. This is the massive benefit of dogfooding . Every day, I am confronted with my own software, so I know exactly what needs to change. So yes, Bloggrify is moving to maintenance mode. I’m taking this opportunity to turn all templates into Open Source. Two of them were "premium," but it wouldn't make sense to keep them that way today. 
I tell myself I’ll still evolve it from time to time, but honestly, I wonder if I’m just lying to myself. As for Hakanai.io , I’m definitely continuing. The problem it solves still fascinates me. I get great feedback, especially on Broadcast. Pulse , however, suffers from being misunderstood. It’s a "blog analytics" product, and people don't really grasp what that entails—SEO advice, outlier detection, evergreen content tracking. I’m not great at marketing, so it mostly flies under the radar except for the readers of this blog who took the time to test it. But I’m motivated to keep them alive. As for Writizzy , there is no doubt. The product is incredibly exciting to build. The stakes are high: building a platform for expression that exists outside the US-centric giants. The traction is there, and the numbers follow (+45% MoM user growth). Welcome to this blog, now officially on Writizzy. As a reader, you can already test several things: The Discover feed to read other articles from Writizzy bloggers. We’ve handpicked a few to start with, and this feed will become even more customizable in the future. Welcome home. Dogfooding: Why you absolutely must use your own products. The harsh reality of Open Source: Why it’s harder than it looks. Product Satisfaction: The joy of building something people actually use. The future of my projects: Bloggrify, Writizzy, and Hakanai.io . A simple templating language (Markdown). Extensibility (RSS feeds, sitemaps, etc.). Low carbon footprint (static sites are incredibly efficient). broadcast.hakanai.io : A newsletter system for static blogs based on RSS feeds. pulse.hakanai.io : A specialized analytics tool for bloggers (not just generic web traffic). Sustainability : Focusing on reversibility and interoperability. Discoverability. Economic accessibility : Implementing Purchasing Power Parity (PPP). Transparency. The comments section . The newsletter subscription (if you haven’t already).

Justin Duke Yesterday

February, 2026

Last month's Wednesday update, I recorded from a train headed to Middelburg. This month I write closer to home, temporally and otherwise. I am en route to the office on a very early Friday morning. We are still eight days from being able to return home with a new set of floors and an absent population of termites awaiting us. Eight days is not so far away. In fact, I have to remind myself a few times every day to make sure the message sticks.

I mentioned that I'm going in early. It is currently six in the morning. We're staying with my parents out in the West End, and the office has a distance to it now that robs it of much of its novelty. Half of the value I placed in it was its Goldilocks nature of being just far enough away from home to feel like a true second place without actually imposing any tax on the distance traveled. Obviously, it is privileged of me to say that a 30-minute commute is onerous. But the reason why I'm going early is to avoid some of the traffic. For the past few weeks, I've been moving up my schedule a couple hours to spend afternoons with Lucy. It's easy to forget too how lucky I am to be able to do this.

This serves as a good metonymy for February writ large. Reminders of luck and flexibility in having parents happy to host us for weeks on end. And in having uncles excited to spend languorous long weekends with their niece. Lucky for a child who wants for nothing and ends every day with a smile on her face. Lucky for a wife who can move mountains and carry rivers. Lucky for time at all to write, to think, and to hit send before going on with my day.

Xe Iaso Yesterday

Killing my inner Necron

Hey everybody, I wanted to make this post to be the announcement that I did in fact survive my surgery. I am leaving the hospital today, and I want to just write up what I've had on my mind over these last couple months and why I have not been as active in open source as I wanted to be. This is being dictated to my iPhone using voice control. I have not edited this. I am in the hospital bed right now; I have no real ability to edit this. As a result, typos are intact and are intended as part of the reading experience.

That week leading up to surgery was probably one of the scariest weeks of my life. Statistically, I know that the procedure I was going to go through has a very low all-time mortality rate. I also know that propofol, the anesthesia that was being used, also has a very low all-time mortality rate. However, one person is all it takes to be that one lucky one in a million. No, I mean unlucky. Leading up to surgery, I was afraid that I was going to die during the surgery, so I prepared everything possible such that if I did die there would be as little bad happening as possible. I made peace with my God. I wrote a will. I did everything that one is expected to do when there is a potential chance that your life could be ended, including filing an extension for my taxes.

Anyway, the point of this post is that I want to explain why I named the latest release of Anubis Necron. Final Fantasy is a series of role-playing games originally based on one development team's game of Advanced Dungeons & Dragons in the 80s. In the Final Fantasy series there are a number of legendary summons that get repeated throughout different incarnations of the games. These summons usually represent concepts or spiritual forces or forces of nature. The one that came to mind when I was in that pre-operative state was Necron. Necron is summoned through the fear of death. Specifically, the fear of the death of an entire kingdom.
All the subjects are absolutely mortified that they are going to die, and nothing that they can do is going to change that.

Content warning: spoilers for the Final Fantasy 14 expansion Dawntrail. In Final Fantasy 14 these legendary summons are named primals. These primals become the main story driver of several expansions. I'd be willing to argue that A Realm Reborn is actually just the story of Ifrit (Fire), Garuda (Wind), Titan (Earth), and Lahabrea (Edgelord). Late into Dawntrail, Necron gets introduced. The nation state of Alexandria has fused into the main overworld. In Alexandria, citizens know not death. When they die, their memories are uploaded into the cloud so that they can live forever in living memory. As a result, nobody alive really knows what death is or how to process it, because it's just not a threat to them. Worst case, if their body actually dies they can just have a new soul injected into it and revive on the spot.

Part of your job as the player is to break this system of eternal life, as powering it requires the lives of countless other creatures. So by the end of the expansion, an entire kingdom of people that did not know the concept of death suddenly have it thrust upon them. They cannot just go get more souls to compensate for accidental injuries in the field. They cannot just get uploaded when they die. The kingdom that lost the fear of death suddenly had the fear of death thrust back at them. And thus, Necron was summoned by the Big Bad™️ using that fear of death.

I really didn't understand that part of the story until the week leading up to my surgery. The week where I was contacting people to let them know what was going on, how to know if I was OK, and what they should do if I'm not. In that week, I ended up killing my fear of death.
I don't remember much from the day of the operation, but what I do remember is this: when I was wheeled into the operating theater, before they placed the mask over my head to put me to sleep, they asked me one single question. "Do you want to continue?"

In that moment everything swirled into my head again. All of the fear of death. All of the worries that my husband would be alone. That fear that I would be that unlucky one in a million. And with all of that in my head, with my heart beating out of my chest, I said yes. The mask went down. And everything went dark.

I got what felt like the best sleep of my life. And then I felt myself, aware again. In that awareness I felt absolutely nothing. Total oblivion. I was worried that that was it. I was gone. And then I heard the heart rate monitor and felt the blood pressure cuff squeeze around my arm. And in that moment I knew I was alive. I had slain my inner Necron, and I felt the deepest peace of my life.

And now I am in recovery. I am safe. I am going to make it. Do not worry about me. I will make it. Thank you for reading this; I hope it helped somehow. If anything, it helped me to write this all out. I'm going to be using Claude Code to publish this on my blog, so please forgive me; like I said, I am literally dictating this from an iPhone in the hospital room that I've been in for the last seven days.

Let the people close to you know that you love them.
