Posts in Security (20 found)
devansh Yesterday

HonoJS JWT/JWKS Algorithm Confusion

After spending some time looking for security issues in JS/TS frameworks, I moved on to Hono - fast, clean, and popular enough that small auth footguns can become "big internet problems". This post is about two issues I found in Hono's JWT/JWKS verification path:

- a default algorithm footgun in the JWT middleware that can lead to forged tokens if an app is misconfigured
- a JWK/JWKS algorithm selection bug where verification could fall back to an untrusted value

Both were fixed in hono 4.11.4, and GitHub Security Advisories were published on January 13, 2026.

JWT / JWK / JWKS Primer

If you already have experience with JWT stuff, you can skip this. JWT is a signed token: a header, a payload, and a signature. The header includes `alg` (the signing algorithm). JWK is a JSON representation of a key (e.g. an RSA public key). JWKS is a set of JWKs, usually hosted at something like `/.well-known/jwks.json`. The key point here is that algorithm choice must not be attacker-controlled.

Vulnerabilities

[CVE-2026-22817] - JWT middleware "unsafe default" (HS256)

Hono's JWT helper documents that `alg` is optional - and defaults to HS256. That sounds harmless until you combine it with a very common real-world setup:

- The app expects RS256 (asymmetric)
- The developer passes an RSA public key string
- But they don't explicitly set `alg`

In that case, the verification path defaults to HS256, treating that public key string as an HMAC secret, and that becomes forgeable because public keys are, well… public.

Why this becomes an auth bypass

If an attacker can generate a token that passes verification, they can mint whatever claims the application trusts and walk straight into protected routes. This is the "algorithm confusion" class of bugs, where you think you're doing asymmetric verification, but you're actually doing symmetric verification with a key the attacker knows.

Who is affected?

This is configuration-dependent. The dangerous case is: you use the JWT middleware with an asymmetric public key and you don't pin `alg`. The core issue is that Hono defaults to HS256, so a public key string can accidentally be used as an HMAC secret, allowing forged tokens and auth bypass.

Advisory / severity

Advisory: GHSA-f67f-6cw9-8mq4. This was classified as High (CVSS 8.2) and mapped to CWE-347 (Improper Verification of Cryptographic Signature). Affected versions: < 4.11.4. Patched version: 4.11.4.

[CVE-2026-22817] - JWK/JWKS middleware fallback

In the JWK/JWKS verification middleware, Hono could pick the verification algorithm like this:

- Use `alg` from the selected JWK if present
- Otherwise, fall back to `alg` from the JWT header (unverified input)

GitHub's advisory spells it out: when the selected JWK doesn't explicitly define an algorithm, the middleware falls back to using the `alg` from the unverified JWT header - and since `alg` in JWK is optional and commonly omitted, this becomes a real-world issue.

Why it matters

If the matching JWKS key lacks `alg`, verification falls back to the token-controlled `alg`, enabling algorithm confusion / downgrade attacks. Trusting the token's `alg` is basically letting the attacker influence how you verify the signature. Depending on surrounding constraints (allowed algorithms, how keys are selected, and how the app uses claims), this can lead to forged tokens being accepted and authz/authn bypass.

Advisory / severity

Advisory: GHSA-3vhc-576x-3qv4. This was classified as High (CVSS 8.2), also CWE-347, with affected versions < 4.11.4 and patched in 4.11.4.

The Fix

Both advisories took the same philosophical stance: make `alg` explicit. Don't infer it from attacker-controlled input.

Fix for #1 (JWT middleware)

The JWT middleware now requires an explicit `alg` option — a breaking change that forces callers to pin the algorithm instead of relying on defaults. (Example configuration shown in the advisory; a sketch follows below.)

Fix for #2 (JWK/JWKS middleware)

The JWK/JWKS middleware now requires an explicit allowlist of asymmetric algorithms, and it no longer derives the algorithm from untrusted JWT header values. It also explicitly rejects symmetric HS* algorithms in this context. (Example configuration shown in the advisory; a second sketch follows below.)
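Here is a sketch of what the before/after looks like in practice. It assumes Hono's documented `jwt` middleware options (`secret` and `alg`); the `RSA_PUBLIC_KEY` constant is a placeholder, and the advisory's exact snippet may differ:

```ts
import { Hono } from 'hono'
import { jwt } from 'hono/jwt'

const app = new Hono()
const RSA_PUBLIC_KEY = '-----BEGIN PUBLIC KEY-----\n...' // placeholder

// Before (vulnerable): no `alg` given, so verification defaults to HS256.
// The RSA *public* key string is then used as an HMAC secret, and anyone
// who can read the public key can forge tokens that pass verification.
app.use('/protected/*', jwt({ secret: RSA_PUBLIC_KEY }))

// After (patched, hono >= 4.11.4): `alg` is required, pinning the
// algorithm on the server side regardless of what the token claims.
app.use('/protected/*', jwt({ secret: RSA_PUBLIC_KEY, alg: 'RS256' }))
```

For the JWK/JWKS fix, the selection rule matters more than any particular option name (the exact configuration is in GHSA-3vhc-576x-3qv4). A hypothetical helper illustrating the patched behaviour - identifiers here are illustrative, not Hono's internals:

```ts
// Hypothetical sketch: pin an allowlist and never trust the JWT header alone.
const ALLOWED_ALGS = ['RS256', 'ES256']

function selectVerificationAlg(
  jwkAlg: string | undefined, // `alg` on the matched JWK (optional per spec)
  headerAlg: string,          // `alg` from the *unverified* JWT header
): string {
  // Vulnerable versions effectively did: return jwkAlg ?? headerAlg
  const candidate = jwkAlg ?? headerAlg
  if (candidate.startsWith('HS'))
    throw new Error('symmetric algorithms rejected for JWK verification')
  if (!ALLOWED_ALGS.includes(candidate))
    throw new Error(`algorithm not in allowlist: ${candidate}`)
  return candidate
}
```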
Disclosure Timeline

- Discovery: 9 Dec, 2025
- First Response: 9 Dec, 2025
- Patched in: hono 4.11.4
- Advisories published: 13 Jan, 2026 (GHSA-f67f-6cw9-8mq4, GHSA-3vhc-576x-3qv4)

0 views
Marc Brooker 3 days ago

Agent Safety is a Box

Keep a lid on it. Before we start, let’s cover some terms so we’re thinking about the same thing. This is a post about AI agents, which I’ll define (riffing off Simon Willison 1 ) as: An AI agent runs models and tools in a loop to achieve a goal. Here, goals can include coding, customer service, proving theorems, cloud operations, or many other things. These agents can be interactive or one-shot; called by humans, other agents, or traditional computer systems; local or cloud; and short-lived or long-running. What they don’t tend to be is pure. They typically achieve their goals by side effects. Side effects include modifying the local filesystem, calling another agent, calling a cloud service, making a payment, or starting a 3D print. The topic of today’s post is those side effects. Simply, what agents can do. We should also be concerned with what agents can say, and I’ll touch on that topic a bit as I go. But the focus is on do. Agents do things with tools. These could be MCP-style tools, powers, skills, or one of many other patterns for tool calling. But, crucially, the act of doing inference doesn’t do anything. Without the do, the think seems less important. The right way to control what agents do is to put them in a box. The box is a strong, deterministic, exact layer of control outside the agent which limits which tools it can call, and what it can do with those tools. The most important one of those properties is outside the agent. Alignment and other AI safety topics are important. Steering, careful prompting, and context management help a lot. These techniques have a lot of value for liveness (success rate, cost, etc.), but are insufficient for safety. They’re insufficient for safety for the same reason we’re building agents in the first place: because they’re flexible, adaptive, creative 2 problem solvers. Traditional old-school workflows are great. They’re cheap, predictable, deterministic, understandable, and well understood. But they aren’t flexible, adaptive, or creative. One change to a data representation or API, and they’re stuck. One unexpected exception case, and they can’t make progress. We’re interested in AI agents because they can make progress towards a broader range of goals without having a human think about all the edge cases beforehand. Safety approaches which run inside the agent typically run against this hard trade-off: to get value out of an agent we want to give it as much flexibility as possible, but to reason about what it can do we need to constrain that flexibility. Doing that, with strong guarantees, by trying to constrain what an agent can think, is hard. The other advantage of the box, the deterministic layer around an agent, is that it allows us to make some crisp statements about what matters and doesn’t. For example, if the box deterministically implements the policy “a refund can only be for the original purchase price or less, and only one refund can be issued per order”, we can exactly reason about how large refunds can be without worrying about the prompt injection attack of the week.

What is the Box?

The implementation of the box depends a lot on the type of agent we’re talking about. In later posts I’ll look a bit at local agents (the kind I run on my laptop), but for today I’ll start with agents in the cloud. In this cloud environment, agents implemented in code run in a secure execution environment like AgentCore Runtime.
Each agent session running inside this environment gets a secure, isolated place to run its loop, execute generated code, store things in local memory, and so on. Then, we have to add a way to interact with the outside world. To allow the agent to do things. This is where gateways (like AgentCore Gateway) come in. The gateway is the singular hole in the box. The place where tools are given to the agent, where those tools are controlled, and where policy is enforced. This scoping of tools differs from the usual concerns of authorization: typical authorization is concerned with what an actor can do with a tool; the gateway’s control is concerned with which tools are available. Agents can’t bypass the Gateway, because the Runtime stops them from sending packets anywhere else. Old-school network security controls.

The Box’s Policy

The simplest way this version of the box constrains what an agent can do is by constraining which tools it can access 3 . Then we need to control what the agent can do with these tools. This is where authorization comes in. In the simplest case, the agent is working on behalf of a human user, and inherits a subset of its authorizations. In a future post I’ll write about other cases, where agents have their own authorization and the ability to escalate privilege, but none of that invalidates the box concept. Regardless, most of today’s authorization implementations don’t have sufficient power and flexibility to express some of the constraints we’d like to express as we control what an agent can do. And they don’t tend to compose across tools. So we need a policy layer at the gateway. AgentCore Policy gives fine-grained, deterministic control over the ways that an agent can call tools. Using the powerful Cedar policy language, AgentCore Policy is super flexible. But most people don’t want to learn Cedar, so we built on our research on converting human intent to policy to allow policies to also be expressed in natural language. Here’s what a policy looks like; see the sketch at the end of this post. By putting these policies at the edge of the box, in the gateway, we can make sure they are true no matter what the agent does. No errant prompt, context, or memory can bypass this policy. Anyway, this post has gotten very long, and there’s still some ground to cover. There’s more to say about multi-agent systems, memories, local agents, composition of policies, and many other topics. But hopefully the core point is clear: by building a deterministic, strong box around an agent we can get a level of safety and control that’s impossible to achieve without it. If this sounds interesting, and you’d like to spend an hour on it, here’s me talking about it at re:Invent ’25.

1. Simon’s version is “An LLM agent runs tools in a loop to achieve a goal”, but I like to expand the definition to capture agents that may use smaller models and multiple models, and to highlight that inference is just one tool used by the larger system.
2. I don’t love using the word creative in this sense, because it implies something is happening that really isn’t. But it’s not a terrible mental model.
3. Which, of course, also requires that these tools are built in a way that they can’t be deputized to have their own unexpected side effects. In general, SaaS and cloud tools are built with an adversarial model which assumes that clients are badly-intentioned and so strictly scopes their access, so a lot of this work has already been done.
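As referenced above, here is a minimal sketch of the kind of refund policy the post describes, written in Cedar-style syntax. The entity and attribute names are hypothetical, not AgentCore's actual schema:

```cedar
// Hypothetical Cedar-style policy: deny any refund that exceeds the
// original purchase price, or that targets an already-refunded order.
forbid (
  principal,
  action == Action::"IssueRefund",
  resource
) when {
  context.amount > resource.originalPurchasePrice ||
  resource.refundIssued
};
```

Because this rule is evaluated deterministically at the gateway, it holds no matter what the model was prompted or tricked into attempting.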

0 views
ava's blog 4 days ago

i'm looking for work!

I'm currently employed full-time working with pharmaceutical databases, but I'm looking to shift into job roles centered around Data Protection Law, like Compliance and Privacy, or Data Governance, preferably in the 📍Nuremberg/Erlangen/Fürth area 🇩🇪, where I am relocating to from NRW. My current role in drug regulatory affairs has already given me hands-on experience with highly regulated environments and sensitive data, which is a strong foundation that I'm bringing into the new role. This could be...

- Data Protection Officer
- Privacy/Data Protection Consultant
- Compliance/Regulatory Counsel (Privacy)
- Data Governance Manager

... or similar roles! :) In October 2025, I finished a 1.5-year advanced studies program in 6 months to become a certified consultant in data protection law. Aside from that, I've been a part-time student pursuing a Bachelor of Laws (LL.B) at a distance-learning university since 2022, and I'm over halfway done. I'm looking to add a Master's in Data Protection Law in the future. In my free time, I write this blog, particularly about data protection law and tech. I also volunteer as a Country Reporter for noyb.eu on their GDPRhub project, translating and summarizing court cases pertaining to national and European data protection law, specifically German and Austrian cases. You can see my current list of contributions here, and there are more to come. When possible, I also attend events and conferences, like the 2nd Beschäftigtendatenschutztag 2025 in Munich. I'm very passionate about the work and love to self-teach and research. I'm particularly interested in working within a team in a hybrid working setup, with a regular in-office presence to collaborate and learn. That said, I remain open to fully remote roles if the role and organization are a good match. Looking ahead, I would be very open to pursuing additional professional certifications where they are relevant to the role, such as the AIGP or ISO 27001 Lead Implementer. This is a snapshot of what I’m currently working toward and excited about! If you think my profile could be a good fit, or if you’re working in this space and feel like exchanging notes, or just know people who do, I’m always happy to hear from you. Published 10 Jan, 2026, last updated 12 hours, 16 minutes ago.

7 views
devansh 4 days ago

ElysiaJS Cookie Signature Validation Bypass

The recent React CVE(s) made quite a buzz in the industry. It was a pretty powerful vulnerability, one that directly leads to pre-auth RCE (one of the most impactful vuln classes). The React CVE inspired me to investigate vulnerabilities in other JS/TS frameworks. I selected Elysia as my target for several reasons: active maintenance, ~16K GitHub stars, clear documentation, and a clean codebase - all factors that make for productive security research. While scrolling through the codebase, one specific code block looked interesting: cookies.ts#L413-L426. It took me less than a minute to identify the "anti-pattern" here. Can you see what's wrong? We'll get to it in a bit, but first, a little primer on ElysiaJS cookie signing.

Elysia and Cookie Signing

Elysia treats cookies as reactive signals, meaning they're mutable objects you can read and update directly in your route handlers without getters/setters. Cookie signing adds a cryptographic layer to prevent clients from modifying cookie values (e.g., escalating privileges in a session token). Elysia uses a signature appended to the cookie value, tied to a secret key. This ensures integrity (data wasn't altered) and authenticity (it came from your server). On a higher level, it works something like this:

- Signing: When you set a cookie (e.g., profile.value = data), Elysia hashes the serialized value + secret, and appends the signature to the cookie.
- Unsigning/Verification: On read, Elysia checks the signature against the secret. If invalid (tampered or wrong secret), it throws an error or rejects the cookie.

Secrets Rotation

Rotating secrets is essential for security hygiene (e.g., after a potential breach or as a periodic refresh). Elysia handles this natively with multi-secret support. How it works:

- Provide secrets as an array: [oldestDeprecated, ..., currentActive].
- Elysia tries the latest secret first for signing new cookies.
- For reading, it falls back sequentially through the array until a match (or fails).

Vulnerability

The code at cookies.ts#L413-L426 is responsible for handling cookie-related logic (signing, unsigning, secrets rotation). Now, going back to the vulnerability: when a secrets array is in use, the verification logic does the following.

- Sets the "verified" flag to true (assumes the cookie is valid before checking anything!)
- Loops through each secret
- Calls the unsign routine for each secret
- If any secret successfully verifies, sets the flag to true (wait, it's already true - this does nothing), stores the unsigned value, and breaks
- If no secrets verify, the loop completes naturally without ever modifying the flag
- Checks whether the flag is false... but it's still true from step 1
- No error is thrown - the tampered cookie is accepted as valid

The guard check at the end becomes completely useless because the flag can never be false. This is dead code. You see now? Basically, if you are using a vulnerable version of Elysia with a secrets array (secrets rotation), a complete auth bypass is possible because the error never gets thrown. This seemed like a pretty serious issue, so I dropped a DM to Elysia's creator SaltyAom, who quickly confirmed the issue.

Proof of Concept

At this point, we knew this was a valid issue, but we still needed a PoC to showcase what it can do, so a security advisory could be created. Given my limited experience with TypeScript, I looked into the docs of Elysia and studied sample snippets. After getting a decent understanding of the syntax Elysia uses, it was time to create the PoC app. I had the basic idea in my mind of how the PoC app would look: it would have a protected resource only an admin can access, and by exploiting this vulnerability I should be able to reach the protected resource without authenticating as admin or even having admin cookies. Eventually, I came up with the following PoC for demonstrating impact.

What It Does

- Allows one-time signup of an admin account only
- Allows an existing admin to log in
- Issues a signed session cookie once logged in
- Protects a secret route so only a logged-in admin can access it

Without signing up as admin or logging in, issue a cURL request carrying a tampered, unsigned admin cookie - and we get access to the protected content without ever using a signed admin cookie. Pretty slick, no?

Let's Break It

The developer likely meant to initialize the verification flag to false. Instead, they initialized it to true. A sketch of the flawed logic follows below.
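Here is a minimal, self-contained sketch of the bug. Identifier names are illustrative, not Elysia's exact internals (the real code is at src/cookies.ts#L413-L426):

```ts
import { createHmac, timingSafeEqual } from 'node:crypto'

// Stand-in for Elysia's unsign routine: returns the data if the
// signature matches this secret, or false otherwise.
function unsignCookie(value: string, secret: string): string | false {
  const i = value.lastIndexOf('.')
  if (i < 0) return false
  const data = value.slice(0, i)
  const sig = value.slice(i + 1)
  const expected = createHmac('sha256', secret).update(data).digest('base64url')
  return sig.length === expected.length &&
    timingSafeEqual(Buffer.from(sig), Buffer.from(expected))
    ? data
    : false
}

function verify(value: string, secrets: string[]): string {
  let passed = true // BUG: assumed valid before any check runs
  let unsigned = value
  for (const secret of secrets) {
    const result = unsignCookie(value, secret)
    if (result !== false) {
      passed = true // already true; this assignment changes nothing
      unsigned = result
      break
    }
  }
  if (!passed) throw new Error('invalid cookie signature') // dead code
  return unsigned // a tampered cookie falls through as "valid"
}

// verify('admin', ['current-secret']) returns 'admin' instead of throwing.
// The v1.4.19 fix amounts to starting pessimistic: let passed = false
```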
The attacker only needs to:

- Capture or observe one valid cookie (even their own)
- Edit the cookie value to some other user's identity in their browser or with curl, and remove the signature
- Send it back to the server

That's literally it.

This vulnerability was fixed in v1.4.19. With this fix in place, the verification logic now works correctly.

- Affected Versions: Elysia ≤ v1.4.18 (confirmed), potentially earlier versions
- Fixed Versions: v1.4.19

Disclosure Timeline

- Discovery: 9th December 2025
- Vendor Contact: 9th December 2025
- Vendor Response: 9th December 2025
- Patch Release: 13th December 2025
- CVE Assignment: Pending

References

- Vulnerable Code: src/cookies.ts#L413-L426
- Elysia Documentation: elysiajs.com
- Elysia Cookie Documentation: elysiajs.com/patterns/cookie

0 views
Heather Burns 4 days ago

Resistance honeypots

Journos: if you are advising your readers on counter-surveillance measures but not dropping the adtech, you are not helping your readers. You are building a honeypot to trap them.

0 views

Network-Wide Ad Blocking with Tailscale and AdGuard Home

One of the frustrations with traditional network-wide ad blocking is that it only works when you’re at home. The moment you leave your network, you’re back to seeing ads and trackers on every device. But if you’re already running Tailscale, there’s a simple fix: run AdGuard Home on a device in your tailnet and point all your devices at it. The result? Every device on your Tailscale network gets full ad blocking and secure DNS resolution, whether you’re at home, in a coffee shop, or on the other side of the world.

I’ve been taking digital privacy more seriously in recent years. I prefer encrypted email via PGP, block ads and trackers wherever possible, and generally try to minimise the data I leak online. I’ve been running Pi-hole for years, but it always felt like a half-measure. It worked great at home, but my phone and laptop were unprotected the moment I stepped outside. I could have set up a VPN back to my home network, but that felt clunky. With Tailscale, the solution is elegant. Every device is already connected to my tailnet, so all I need is a DNS server that’s accessible from anywhere on that network. AdGuard Home fits the bill perfectly. It’s lighter than Pi-hole, has a cleaner interface, and supports DNS-over-HTTPS out of the box for upstream queries. The other benefit is that this setup preserves Tailscale’s Magic DNS: I can still access my tailnet devices by name, while all other DNS queries go through AdGuard for secure resolution and ad blocking.

You’ll need:

- A device on your Tailscale network that’s always on (a small home server, Raspberry Pi, or even an old laptop)
- AdGuard Home installed on that device
- Access to your Tailscale admin console

SSH into your always-on device and run the official installer script from the AdGuard Home documentation. It installs AdGuard Home to /opt/AdGuardHome and sets it up as a systemd service. Once installed, open the setup wizard in your browser at your device’s Tailscale IP on port 3000. During setup:

- Set the DNS listen address to your device’s Tailscale IP (a 100.x.y.z address)
- Set the admin interface to the same Tailscale IP on port 3000
- Create an admin username and password

The key here is binding to your Tailscale IP rather than 0.0.0.0. This ensures AdGuard only listens on your tailnet, not on your local network or the public internet.

By default, AdGuard will use your system’s DNS servers for upstream queries. That’s not ideal. We want encrypted DNS all the way through. In AdGuard Home, go to Settings → DNS settings → Upstream DNS servers and replace the defaults with Quad9’s DNS-over-HTTPS and DNS-over-TLS endpoints. Quad9 is a privacy-focused resolver that also blocks known malicious domains. For the Bootstrap DNS servers (used to resolve the upstream hostnames), add Quad9’s plain resolver addresses. Both settings are shown in the sketch below.
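These are Quad9’s published endpoints; worth double-checking against Quad9’s own documentation before copying:

```
Upstream DNS servers:
  https://dns.quad9.net/dns-query    # DNS-over-HTTPS
  tls://dns.quad9.net                # DNS-over-TLS

Bootstrap DNS servers:
  9.9.9.9
  149.112.112.112
```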
I’d also recommend enabling DNSSEC validation and Optimistic caching in the same settings page for better security and performance.

Now the easy part. Open your Tailscale admin console and:

- Add your device’s Tailscale IP as a Global nameserver
- Enable Override local DNS

That’s it. Every device on your tailnet will now use your AdGuard instance for DNS resolution. This setup gives you:

- Ad and tracker blocking everywhere, not just at home
- Encrypted DNS queries, so your ISP can’t see what domains you’re resolving
- Malware protection via Quad9, which blocks known malicious domains at the DNS level
- A single dashboard to view query logs and statistics for all your devices in one place
- No client configuration, since Tailscale pushes the DNS settings automatically

If you do keep logging enabled, the query logs can be useful for identifying apps that are phoning home or misbehaving. But there’s a trade-off here. By default, AdGuard Home logs every DNS query from every device. That’s useful for debugging, but it felt uncomfortable to me. The majority of my family use my tailnet, and I have no interest in knowing what sites they’re visiting. I also don’t need my own traffic logged if it isn’t necessary. I’ve turned off query logging entirely in Settings > General settings > Query log configuration, and disabled statistics as well. Ad blocking still works without any of this data being stored.

Since all your devices depend on this DNS server, you’ll want to make sure it’s reliable. If the device running AdGuard goes offline, DNS resolution will fail for your entire tailnet. A few options to mitigate this:

- Run AdGuard on a device that’s always on (a dedicated home server or cloud VPS)
- Add a fallback DNS server in Tailscale (though this bypasses AdGuard when your server is down)
- Run a second AdGuard instance on another device and add both as nameservers

For my setup, I’m running it on a small Intel NUC that’s always on anyway. It’s been rock solid so far. This is one of those setups that takes ten minutes and then quietly improves your life. Every device on my tailnet now gets ad blocking and secure DNS without any per-device configuration. The combination of Tailscale’s networking and AdGuard’s filtering is genuinely elegant. If you’re already running Tailscale, this is worth the effort.
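A quick way to sanity-check the finished setup from any device on the tailnet, assuming a hypothetical AdGuard Tailscale IP of 100.101.102.103:

```sh
# A normal domain should resolve as usual...
dig @100.101.102.103 example.com +short

# ...while a known ad/tracker domain should come back blocked
# (0.0.0.0 or an empty answer, depending on AdGuard's blocking mode).
dig @100.101.102.103 doubleclick.net +short
```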

0 views

Who Benefited from the Aisuru and Kimwolf Botnets?

Our first story of 2026 revealed how a destructive new botnet called Kimwolf has infected more than two million devices by mass-compromising a vast number of unofficial Android TV streaming boxes. Today, we’ll dig through digital clues left behind by the hackers, network operators and services that appear to have benefitted from Kimwolf’s spread. On Dec. 17, 2025, the Chinese security firm XLab published a deep dive on Kimwolf, which forces infected devices to participate in distributed denial-of-service (DDoS) attacks and to relay abusive and malicious Internet traffic for so-called “residential proxy” services. The software that turns one’s device into a residential proxy is often quietly bundled with mobile apps and games. Kimwolf specifically targeted residential proxy software that is factory installed on more than a thousand different models of unsanctioned Android TV streaming devices. Very quickly, the residential proxy’s Internet address starts funneling traffic that is linked to ad fraud, account takeover attempts and mass content scraping. The XLab report explained its researchers found “definitive evidence” that the same cybercriminal actors and infrastructure were used to deploy both Kimwolf and the Aisuru botnet — an earlier version of Kimwolf that also enslaved devices for use in DDoS attacks and proxy services. XLab said it had suspected since October that Kimwolf and Aisuru had the same author(s) and operators, based in part on shared code changes over time. But it said those suspicions were confirmed on December 8 when it witnessed both botnet strains being distributed by the same Internet address at 93.95.112[.]59. Image: XLab. Public records show the Internet address range flagged by XLab is assigned to Lehi, Utah-based Resi Rack LLC. Resi Rack’s website bills the company as a “Premium Game Server Hosting Provider.” Meanwhile, Resi Rack’s ads on the Internet moneymaking forum BlackHatWorld refer to it as a “Premium Residential Proxy Hosting and Proxy Software Solutions Company.” Resi Rack co-founder Cassidy Hales told KrebsOnSecurity his company received a notification on December 10 about Kimwolf using their network “that detailed what was being done by one of our customers leasing our servers.” “When we received this email we took care of this issue immediately,” Hales wrote in response to an email requesting comment. “This is something we are very disappointed is now associated with our name and this was not the intention of our company whatsoever.” The Resi Rack Internet address cited by XLab on December 8 came onto KrebsOnSecurity’s radar more than two weeks before that. Benjamin Brundage is founder of Synthient, a startup that tracks proxy services. In late October 2025, Brundage shared that the people selling various proxy services which benefitted from the Aisuru and Kimwolf botnets were doing so at a new Discord server called resi[.]to. On November 24, 2025, a member of the resi-dot-to Discord channel shares an IP address responsible for proxying traffic over Android TV streaming boxes infected by the Kimwolf botnet. When KrebsOnSecurity joined the resi[.]to Discord channel in late October as a silent lurker, the server had fewer than 150 members, including “Shox” — the nickname used by Resi Rack’s co-founder Mr. Hales — and his business partner “Linus,” who did not respond to requests for comment. Other members of the resi[.]to Discord channel would periodically post new IP addresses that were responsible for proxying traffic over the Kimwolf botnet.
As the screenshot from resi[.]to above shows, that Resi Rack Internet address flagged by XLab was used by Kimwolf to direct proxy traffic as far back as November 24, if not earlier. All told, Synthient said it tracked at least seven static Resi Rack IP addresses connected to Kimwolf proxy infrastructure between October and December 2025. Neither of Resi Rack’s co-owners responded to follow-up questions. Both have been active in selling proxy services via Discord for nearly two years. According to a review of Discord messages indexed by the cyber intelligence firm Flashpoint, Shox and Linus spent much of 2024 selling static “ISP proxies” by routing various Internet address blocks at major U.S. Internet service providers. In February 2025, AT&T announced that effective July 31, 2025, it would no longer originate routes for network blocks that are not owned and managed by AT&T (other major ISPs have since made similar moves). Less than a month later, Shox and Linus told customers they would soon cease offering static ISP proxies as a result of these policy changes. Shox and Linus, talking about their decision to stop selling ISP proxies. The stated owner of the resi[.]to Discord server went by the abbreviated username “D.” That initial appears to be short for the hacker handle “Dort,” a name that was invoked frequently throughout these Discord chats. Dort’s profile on resi dot to. This “Dort” nickname came up in KrebsOnSecurity’s recent conversations with “Forky,” a Brazilian man who acknowledged being involved in the marketing of the Aisuru botnet at its inception in late 2024. But Forky vehemently denied having anything to do with a series of massive and record-smashing DDoS attacks in the latter half of 2025 that were blamed on Aisuru, saying the botnet by that point had been taken over by rivals. Forky asserts that Dort is a resident of Canada and one of at least two individuals currently in control of the Aisuru/Kimwolf botnet. The other individual Forky named as an Aisuru/Kimwolf botmaster goes by the nickname “Snow.” On January 2 — just hours after our story on Kimwolf was published — the historical chat records on resi[.]to were erased without warning and replaced by a profanity-laced message for Synthient’s founder. Minutes after that, the entire server disappeared. Later that same day, several of the more active members of the now-defunct resi[.]to Discord server moved to a Telegram channel where they posted Brundage’s personal information, and generally complained about being unable to find reliable “bulletproof” hosting for their botnet. Hilariously, a user by the name “Richard Remington” briefly appeared in the group’s Telegram server to post a crude “Happy New Year” sketch that claims Dort and Snow are now in control of 3.5 million devices infected by Aisuru and/or Kimwolf. Richard Remington’s Telegram account has since been deleted, but it previously stated its owner operates a website that caters to DDoS-for-hire or “stresser” services seeking to test their firepower. Reports from both Synthient and XLab found that Kimwolf was used to deploy programs that turned infected systems into Internet traffic relays for multiple residential proxy services. Among those was a component that installed a software development kit (SDK) called ByteConnect, which is distributed by a provider known as Plainproxies.
ByteConnect says it specializes in “monetizing apps ethically and free,” while Plainproxies advertises the ability to provide content scraping companies with “unlimited” proxy pools. However, Synthient said that upon connecting to ByteConnect’s SDK they instead observed a mass influx of credential-stuffing attacks targeting email servers and popular online websites. A search on LinkedIn finds the CEO of Plainproxies is Friedrich Kraft, whose resume says he is co-founder of ByteConnect Ltd. Public Internet routing records show Mr. Kraft also operates a hosting firm in Germany called 3XK Tech GmbH. Mr. Kraft did not respond to repeated requests for an interview. In July 2025, Cloudflare reported that 3XK Tech (a.k.a. Drei-K-Tech) had become the Internet’s largest source of application-layer DDoS attacks. In November 2025, the security firm GreyNoise Intelligence found that Internet addresses on 3XK Tech were responsible for roughly three-quarters of the Internet scanning being done at the time for a newly discovered and critical vulnerability in security products made by Palo Alto Networks. Source: Cloudflare’s Q2 2025 DDoS threat report. LinkedIn has a profile for another Plainproxies employee, Julia Levi, who is listed as co-founder of ByteConnect. Ms. Levi did not respond to requests for comment. Her resume says she previously worked for two major proxy providers: Netnut Proxy Network and Bright Data. Synthient likewise said Plainproxies ignored their outreach, noting that the ByteConnect SDK remains active on devices compromised by Kimwolf. A post from the LinkedIn page of Plainproxies Chief Revenue Officer Julia Levi, explaining how the residential proxy business works. Synthient’s January 2 report said another proxy provider heavily involved in the sale of Kimwolf proxies was Maskify, which currently advertises on multiple cybercrime forums that it has more than six million residential Internet addresses for rent. Maskify prices its service at a rate of 30 cents per gigabyte of data relayed through their proxies. According to Synthient, that price range is insanely low and is far cheaper than any other proxy provider in business today. “Synthient’s Research Team received screenshots from other proxy providers showing key Kimwolf actors attempting to offload proxy bandwidth in exchange for upfront cash,” the Synthient report noted. “This approach likely helped fuel early development, with associated members spending earnings on infrastructure and outsourced development tasks. Please note that resellers know precisely what they are selling; proxies at these prices are not ethically sourced.” Maskify did not respond to requests for comment. The Maskify website. Image: Synthient. Hours after our first Kimwolf story was published last week, the resi[.]to Discord server vanished, Synthient’s website was hit with a DDoS attack, and the Kimwolf botmasters took to doxing Brundage via their botnet. The harassing messages appeared as text records uploaded to the Ethereum Name Service (ENS), a distributed system for supporting smart contracts deployed on the Ethereum blockchain. As documented by XLab, in mid-December the Kimwolf operators upgraded their infrastructure and began using ENS to better withstand the near-constant takedown efforts targeting the botnet’s control servers. An ENS record used by the Kimwolf operators taunts security firms trying to take down the botnet’s control servers. Image: XLab.
By telling infected systems to seek out the Kimwolf control servers via ENS, even if the servers that the botmasters use to control the botnet are taken down, the attacker only needs to update the ENS text record to reflect the new Internet address of the control server, and the infected devices will immediately know where to look for further instructions. “This channel itself relies on the decentralized nature of blockchain, unregulated by Ethereum or other blockchain operators, and cannot be blocked,” XLab wrote. The text records included in Kimwolf’s ENS instructions can also feature short messages, such as those that carried Brundage’s personal information. Other ENS text records associated with Kimwolf offered some sage advice: “If flagged, we encourage the TV box to be destroyed.” An ENS record tied to the Kimwolf botnet advises, “If flagged, we encourage the TV box to be destroyed.” Both Synthient and XLab say Kimwolf targets a vast number of Android TV streaming box models, all of which have zero security protections, and many of which ship with proxy malware built in. Generally speaking, if you can send a data packet to one of these devices you can also seize administrative control over it. If you own a TV box that matches one of these model names and/or numbers, please just rip it out of your network. If you encounter one of these devices on the network of a family member or friend, send them a link to this story (or to our January 2 story on Kimwolf) and explain that it’s not worth the potential hassle and harm created by keeping them plugged in.

0 views
Jim Nielsen 1 week ago

The AI Security Shakedown

Matthias Ott shared a link to a post from Anthropic titled “Disrupting the first reported AI-orchestrated cyber espionage campaign”, which I read because I’m interested in the messy intersection of AI and security. I gotta say: I don’t know if I’ve ever read anything quite like this article. At first, the article felt like a responsible disclosure — “Hey, we’re reaching an inflection point where AI models are being used effectively for security exploits. Look at this one.” But then I read further and found statements like this: “[In the attack] Claude didn’t always work perfectly. It occasionally hallucinated […] This remains an obstacle to fully autonomous cyberattacks.” Wait, so is that a feature or a bug? Is it a good thing that your tool hallucinated and proved a stumbling block? Or is this a bug you hope to fix? The more I read, the more difficult it became to discern whether this security incident was a helpful warning or a feature sell. “With the correct setup, threat actors can now use agentic AI systems for extended periods to do the work of entire teams of experienced hackers: analyzing target systems, producing exploit code, and scanning vast datasets of stolen information more efficiently than any human operator. Less experienced and resourced groups can now potentially perform large-scale attacks of this nature.” Shoot, this sounds like a product pitch! Don’t have the experience or resources to keep up with your competitors who are cyberattacking? We’ve got a tool for you! Wait, so if you’re creating something that can cause so much havoc, why are you still making it? Oh good, they address this exact question: “This raises an important question: if AI models can be misused for cyberattacks at this scale, why continue to develop and release them? The answer is that the very abilities that allow Claude to be used in these attacks also make it crucial for cyber defense.” Ok, so the article is a product pitch: “We’ve reached a tipping point in security. Look at this recent case where our AI was exploited to do malicious things with little human intervention. No doubt this same thing will happen again. You better go get our AI to protect yourself.” But that’s my words. Here’s theirs: “A fundamental change has occurred in cybersecurity. We advise security teams to experiment with applying AI for defense in areas like Security Operations Center automation, threat detection, vulnerability assessment, and incident response. We also advise developers to continue to invest in safeguards across their AI platforms, to prevent adversarial misuse. The techniques described above will doubtless be used by many more attackers—which makes industry threat sharing, improved detection methods, and stronger safety controls all the more critical.” It appears AI is simultaneously the problem and the solution. It’s a great business to be in, if you think about it. You sell a tool for security exploits and you sell the self-same tool for protection against said exploits. Everybody wins! I can’t help but read this post and think of a mafia shakedown. You know, where the mafia implies threats to get people to pay for their protection — a service they created the need for in the first place. ”Nice system you got there, would be a shame if anyone hacked into it using AI. Better get some AI to protect yourself.” I find it funny that the URL slug for the article is: That’s a missed opportunity. They could’ve named it: Reply via: Email · Mastodon · Bluesky

0 views
daniel.haxx.se 1 week ago

curl 8.18.0

Download curl from curl.se!

Numbers

- the 272nd release
- 5 changes
- 63 days (total: 10,155)
- 391 bugfixes (total: 13,376)
- 758 commits (total: 37,486)
- 0 new public libcurl functions (total: 100)
- 0 new curl_easy_setopt() options (total: 308)
- 0 new curl command line options (total: 273)
- 69 contributors, 36 new (total: 3,571)
- 37 authors, 14 new (total: 1,430)
- 6 security fixes (total: 176)

Security

This time there is no less than six separate vulnerabilities announced:

- CVE-2025-13034: skipping pinning check for HTTP/3 with GnuTLS
- CVE-2025-14017: broken TLS options for threaded LDAPS
- CVE-2025-14524: bearer token leak on cross-protocol redirect
- CVE-2025-14819: OpenSSL partial chain store policy bypass
- CVE-2025-15079: libssh global knownhost override
- CVE-2025-15224: libssh key passphrase bypass without agent set

Changes

There are a few this time, mostly around dropping support for various dependencies:

- drop support for VS2008 (Windows)
- drop Windows CE / CeGCC support
- drop support for GnuTLS < 3.6.5
- gnutls: implement CURLOPT_CAINFO_BLOB
- openssl: bump minimum OpenSSL version to 3.0.0

See the release presentation video for a walk-through of some of the most important/interesting fixes done for this release, or go check out the full list in the changelog.
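Of the changes, CURLOPT_CAINFO_BLOB support for GnuTLS builds is the most code-visible. A minimal sketch of what the option does (the PEM content is a placeholder, not a real certificate):

```c
#include <curl/curl.h>
#include <string.h>

int main(void) {
  /* Placeholder PEM-encoded CA bundle held in memory rather than on disk. */
  static const char pem[] =
    "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n";

  CURL *curl = curl_easy_init();
  if (curl) {
    struct curl_blob ca;
    ca.data = (void *)pem;
    ca.len = strlen(pem);
    ca.flags = CURL_BLOB_COPY; /* let libcurl keep its own copy */

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* Previously OpenSSL-only territory; this release wires it up
       for GnuTLS builds too. */
    curl_easy_setopt(curl, CURLOPT_CAINFO_BLOB, &ca);

    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```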

0 views
neilzone 1 week ago

Dealing with apt's warning 'Policy will reject signature within a year, see --audit for details'

I’ve noticed that an increasing number of `apt update` runs result in a warning that “Policy will reject signature within a year, see --audit for details”. Running with `--audit` (as suggested) results in output identifying the affected repositories and keys. My understanding is that - as that output suggests - there has been a change in key-handling policy by apt, and that keys which were previously acceptable are (or, rather, will be) no longer acceptable by default. The “correct” way of solving this is for the repository provider to update their signing key to something which is compliant. However, I have no control over what a repository provider does, or when they will do it. For instance, the warning message above suggests to me that I will have a problem on 1 February 2026, so under a month away. I can suppress this warning - and tell apt to accept the key - by adding an option to the relevant repository entry. Or, to avoid having to add that each time, I can add a slightly-tweaked version of it to an apt config file. Hopefully, though, repository providers will update their keys (which will then need re-importing).

0 views
Danny McClelland 1 week ago

Using Proton Pass CLI to Keep Linux Scripts Secure

If you manage dotfiles in a public Git repository, you’ve probably faced the dilemma of how to handle secrets. API keys, passwords, and tokens need to live somewhere, but committing them to version control is a security risk. Proton has recently released a CLI tool for Proton Pass that solves this elegantly. Instead of storing secrets in files, you fetch them at runtime from your encrypted Proton Pass vault.

The CLI is currently in beta. The installer places the binary on your PATH, and authenticating opens a browser for Proton authentication. Once complete, you’re ready to use the CLI: you can list your vaults, view an item, fetch a specific field, or get JSON output (useful for parsing multiple fields).

I have several tools that need API credentials. Rather than storing these in config files, I created wrapper scripts that fetch credentials from Proton Pass at runtime - for example, a wrapper for a TUI application that needs API credentials. The key insight: fetching JSON once and parsing it with jq is faster than making separate API calls for each field.

The Proton Pass API call takes a few seconds. For frequently-used tools, this adds noticeable latency. The solution is to cache credentials in the Linux kernel keyring (a sketch of this pattern appears at the end of this post). With caching:

- First run: ~5-6 seconds (fetches from Proton Pass)
- Subsequent runs: ~0.01 seconds (from kernel keyring)

The cache expires after one hour, or when you log out, and it can be cleared manually.

The CLI also has built-in commands for secret injection: one command passes secrets as environment variables, and another processes template files, using a URI syntax to reference secrets. For applications that read credentials from config files (like WeeChat’s), the wrapper can update the file before launching. The CLI can also act as an SSH agent, loading keys stored in Proton Pass - useful if you store SSH private keys in your vault.

This approach keeps secrets out of your dotfiles repository entirely. The wrapper scripts reference Proton Pass item names, not actual credentials. Your secrets remain encrypted in Proton’s infrastructure and are only decrypted locally when needed. The kernel keyring cache is per-user and lives only in memory. It’s cleared on logout or reboot, and the TTL ensures credentials don’t persist indefinitely. For public dotfiles repositories, this is a clean solution: commit your wrapper scripts freely, keep your secrets in Proton Pass.
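As referenced above, here is a sketch of the keyring caching pattern, using the standard keyutils `keyctl` tool. The `fetch_from_proton_pass` function is a placeholder for the actual Proton Pass CLI invocation, and the item name is hypothetical:

```sh
#!/bin/sh
# Cache a secret in the per-user kernel keyring for one hour.
KEY_NAME="myapp-api-credentials"

if key_id=$(keyctl search @u user "$KEY_NAME" 2>/dev/null); then
  # Fast path: secret is already cached in the keyring.
  SECRET=$(keyctl pipe "$key_id")
else
  # Slow path: fetch from Proton Pass (placeholder function, ~5s).
  SECRET=$(fetch_from_proton_pass)
  key_id=$(printf '%s' "$SECRET" | keyctl padd user "$KEY_NAME" @u)
  keyctl timeout "$key_id" 3600   # expire the cached copy after one hour
fi

printf '%s\n' "$SECRET"
```

Clearing the cache manually is just `keyctl unlink "$key_id" @u`, and because the keyring lives in kernel memory, nothing ever touches disk.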

0 views
Carlos Becker 1 week ago

LIVE from GitHub Universe: Inside the GitHub Secure Open Source Fund

I had a chat with Greg Cochran (GitHub), Christian Grobmeier (log4j), Michael Geers (evcc), and Camila Maia (ScanAPI) about the GitHub Secure Open Source Fund. It was recorded on the last day of GitHub Universe 2025.

0 views
The Jolly Teapot 1 week ago

New year, new me, new web browsing setup?

Since we’re at the start of a new year, I will stop fine-tuning everything on this blog and let it live as the receptacle it’s supposed to be. With my mind cleared of HTML and CSS concerns, I now have energy to waste on new optimisations of my digital environment, and this time with an old favourite of mine: content blockers. * 1 In 2022, I experimented with blocking JavaScript on a per-site basis, which, at the time, allowed me to feel better about my behaviour on the web. You see, I thought that I was not actively refusing adverts. I was just disabling a specific technology on my web browser; not my fault if most ads are enabled via JS after all. True, ads couldn’t reach my house, but not because I actively refused their delivery; simply because the trucks used for their delivery weren’t allowed to drive on my pedestrian-only street. Ethically, I preferred this approach to the one blocking all ads blindly on every site, even if the consequences, from the publishers’ perspective, were the same. I know it was very hypocritical of me, and I know I was still technically blocking the ads. Nevertheless, I felt less guilty blocking the technology used for ads, and not the ads directly. This setup was fine, until it wasn’t. My web experience was not great. Blocking JavaScript by default breaks too many non-media sites, and leaving it on made me realise how awful browsing the web without a content blocker can be. The only way for this system to work was to have patience and discipline with the per-site settings. Eventually, I gave up and reinstalled the excellent Wipr Safari extension on all my devices a few weeks later. Last year, on top of Wipr, I also tried services like NextDNS and Mullvad DNS. With these, the browser ad blocker becomes almost superfluous, as all it has to do is remove empty boxes that were supposed to be ads before being blocked by the DNS. It was an efficient setup, but I was still blocking ads, which kept on bothering me. While I happily support a few publications financially, I can’t do the same for all the sites I visit. For the ones I am not paying, seeing ads seems like a fair deal; blocking ads was making me feel increasingly guilty. * 2 Like I wrote in the other post on the topic: Somehow, I always feel a little bit of shame and guilt when talking about content blockers, especially ad blockers. Obviously ads are too often the only way many publishers manage to make decent money on the internet: every newspaper can’t be financially successful with subscriptions, and every media company can’t survive only on contributions and grants. That’s why recently, I stopped using Mullvad as my DNS resolver, and switched to Quad9, which focuses on privacy protection and not ad blocking. I also uninstalled Wipr. Today, I rely solely on StopTheScript. What’s new this time around is that I will try to be more disciplined than I was three years ago, and do the work to make this system last. What I do is set the default StopTheScript setting to “Ask”. When a site aggressively welcomes me with three or four banners masking the article I came to read, I click on the StopTheScript icon, allow it to block JavaScript on the website, and refresh the page. Two clicks, one keyboard shortcut. In most cases, these steps are easier and faster than the usual series of events.
You know, the one where you need to reload the page with ad blockers disabled, just so you can close the modal window that was blocking scrolling on the page, and then reload the page once again, this time with ad blockers enabled. With JavaScript turned off, visiting most websites is a breeze: my computer feels like it uses an M4 chip and not an Intel Core i5, the page is clean, the article is there, it works. There are a few media sites that refuse to display anything with JS turned off, but I’d say that 95% of the time it’s fine, and I can live my life without a proper ad blocker. * 3 For websites where ads are tolerable, I don’t bother blocking JavaScript, I let it pass. In my mind, this is how my first interaction with a website goes if it were a department store: [opens page at URL] Website: “ Hi dear visitor, I see you’re looking at this product, but may I interest you in a free newsletter? Or would you like to share your Google account with us so next time you come back we’ll know? Also, could you sign this agreement real quick? Oh, and by the way, have you seen that we have a special offer currently? Would you like a cookie? ” Me: “ Hello, yes, oh wow, hum… wait a second… ” [blocks JavaScript] Me: “ Sorry, I don’t speak your language and don’t understand anything you say .” [Salesperson goes away instantly] Me: “ Ah, this is nice and quiet. ” Maybe I’m wrong, but to me, this is a more “polite” default behaviour than using an ad blocker from the get-go, which, in this analogy, would be something like this: [opens page at URL] Ad blocker: “ Alright, well done team, great job. We arrested all sales people, handcuffed them, and brought them all to the basement. All clear. The boss can come in. ” Me: “ Ah, this is nice and quiet. ” If you have a better analogy, I’m all ears: I really struggled with this one. I’m not sure how long this JS blocking setup will last this time. I’m not sure if it feels that much better to block JS permanently on some websites rather than blocking ads. All I know is that most websites are much quicker to load without JavaScript, much easier to handle by my machine, and just for those reasons, StopTheScript may be the best content blocker for Safari. I guess it’s not surprising that all the cool new web browsers include a JavaScript toggle natively.

1. Why are they called content blockers and not ad blockers? Pretty sure it’s some sort of diplomatic lingo used to avoid hurting the feelings of ad companies. I don’t like the word content, but calling ads and trackers content is just weird. ^
2. I know I could use an ad blocker and disable it on some websites, or only activate it on the most annoying sites, but ad blockers tend to disappear in the background, don’t they? ^
3. I mention media sites because obviously ecommerce sites, video sites, and interactive sites require JavaScript. Interestingly, Mastodon doesn’t need it to display posts, whereas Bluesky does. ^

0 views

The Kimwolf Botnet is Stalking Your Local Network

The story you are reading is a series of scoops nestled inside a far more urgent Internet-wide security advisory. The vulnerability at issue has been exploited for months already, and it’s time for a broader awareness of the threat. The short version is that everything you thought you knew about the security of the internal network behind your Internet router probably is now dangerously out of date. The security company Synthient currently sees more than 2 million infected Kimwolf devices distributed globally but with concentrations in Vietnam, Brazil, India, Saudi Arabia, Russia and the United States. Synthient found that two-thirds of the Kimwolf infections are Android TV boxes with no security or authentication built in. The past few months have witnessed the explosive growth of a new botnet dubbed Kimwolf , which experts say has infected more than 2 million devices globally. The Kimwolf malware forces compromised systems to relay malicious and abusive Internet traffic — such as ad fraud, account takeover attempts and mass content scraping — and participate in crippling distributed denial-of-service (DDoS) attacks capable of knocking nearly any website offline for days at a time. More important than Kimwolf’s staggering size, however, is the diabolical method it uses to spread so quickly: By effectively tunneling back through various “ residential proxy ” networks and into the local networks of the proxy endpoints, and by further infecting devices that are hidden behind the assumed protection of the user’s firewall and Internet router. Residential proxy networks are sold as a way for customers to anonymize and localize their Web traffic to a specific region, and the biggest of these services allow customers to route their traffic through devices in virtually any country or city around the globe. The malware that turns an end-user’s Internet connection into a proxy node is often bundled with dodgy mobile apps and games. These residential proxy programs also are commonly installed via unofficial Android TV boxes  sold by third-party merchants on popular e-commerce sites like Amazon , BestBuy, Newegg , and Walmart . These TV boxes range in price from $40 to $400, are marketed under a dizzying range of no-name brands and model numbers , and frequently are advertised as a way to stream certain types of subscription video content for free . But there’s a hidden cost to this transaction: As we’ll explore in a moment, these TV boxes make up a considerable chunk of the estimated two million systems currently infected with Kimwolf. Some of the unsanctioned Android TV boxes that come with residential proxy malware pre-installed. Image: Synthient. Kimwolf also is quite good at infecting a range of Internet-connected digital photo frames that likewise are abundant at major e-commerce websites. In November 2025, researchers from Quokka published a report (PDF) detailing serious security issues in Android-based digital picture frames running the Uhale app — including Amazon’s bestselling digital frame as of March 2025. There are two major security problems with these photo frames and unofficial Android TV boxes. The first is that a considerable percentage of them come with malware pre-installed, or else require the user to download an unofficial Android App Store and malware in order to use the device for its stated purpose (video content piracy). The most typical of these uninvited guests are small programs that turn the device into a residential proxy node that is resold to others. 
The second big security nightmare with these photo frames and unsanctioned Android TV boxes is that they rely on a handful of Internet-connected microcomputer boards that have no discernible security or authentication requirements built-in. In other words, if you are on the same network as one or more of these devices, you can likely compromise them simultaneously by issuing a single command across the network. The combination of these two security realities came to the fore in October 2025, when an undergraduate computer science student at the Rochester Institute of Technology began closely tracking Kimwolf’s growth, and interacting directly with its apparent creators on a daily basis. Benjamin Brundage is the 22-year-old founder of the security firm Synthient , a startup that helps companies detect proxy networks and learn how those networks are being abused. Conducting much of his research into Kimwolf while studying for final exams, Brundage told KrebsOnSecurity in late October 2025 he suspected Kimwolf was a new Android-based variant of Aisuru , a botnet that was incorrectly blamed for a number of record-smashing DDoS attacks last fall. Brundage says Kimwolf grew rapidly by abusing a glaring vulnerability in many of the world’s largest residential proxy services. The crux of the weakness, he explained, was that these proxy services weren’t doing enough to prevent their customers from forwarding requests to internal servers of the individual proxy endpoints. Most proxy services take basic steps to prevent their paying customers from “going upstream” into the local network of proxy endpoints, by explicitly denying requests for local addresses specified in RFC-1918 , including the well-known Network Address Translation (NAT) ranges 10.0.0.0/8, 192.168.0.0/16, and 172.16.0.0/12. These ranges allow multiple devices in a private network to access the Internet using a single public IP address, and if you run any kind of home or office network, your internal address space operates within one or more of these NAT ranges. However, Brundage discovered that the people operating Kimwolf had figured out how to talk directly to devices on the internal networks of millions of residential proxy endpoints, simply by changing their Domain Name System (DNS) settings to match those in the RFC-1918 address ranges. “It is possible to circumvent existing domain restrictions by using DNS records that point to 192.168.0.1 or 0.0.0.0,” Brundage wrote in a first-of-its-kind security advisory sent to nearly a dozen residential proxy providers in mid-December 2025. “This grants an attacker the ability to send carefully crafted requests to the current device or a device on the local network. This is actively being exploited, with attackers leveraging this functionality to drop malware.” As with the digital photo frames mentioned above, many of these residential proxy services run solely on mobile devices that are running some game, VPN or other app with a hidden component that turns the user’s mobile phone into a residential proxy — often without any meaningful consent. In a report published today , Synthient said key actors involved in Kimwolf were observed monetizing the botnet through app installs, selling residential proxy bandwidth, and selling its DDoS functionality. “Synthient expects to observe a growing interest among threat actors in gaining unrestricted access to proxy networks to infect devices, obtain network access, or access sensitive information,” the report observed. 
“Kimwolf highlights the risks posed by unsecured proxy networks and their viability as an attack vector.” After purchasing a number of unofficial Android TV box models that were most heavily represented in the Kimwolf botnet, Brundage further discovered the proxy service vulnerability was only part of the reason for Kimwolf’s rapid rise: He also found virtually all of the devices he tested were shipped from the factory with a powerful feature called Android Debug Bridge (ADB) mode enabled by default. Many of the unofficial Android TV boxes infected by Kimwolf include the ominous disclaimer: “Made in China. Overseas use only.” Image: Synthient. ADB is a diagnostic tool intended for use solely during the manufacturing and testing processes, because it allows the devices to be remotely configured and even updated with new (and potentially malicious) firmware. However, shipping these devices with ADB turned on creates a security nightmare because in this state they constantly listen for and accept unauthenticated connection requests. For example, opening a command prompt and typing “adb connect” along with a vulnerable device’s (local) IP address followed immediately by “:5555” will very quickly offer unrestricted “super user” administrative access. Brundage said by early December, he’d identified a one-to-one overlap between new Kimwolf infections and proxy IP addresses offered for rent by China-based IPIDEA, currently the world’s largest residential proxy network by all accounts. “Kimwolf has almost doubled in size this past week, just by exploiting IPIDEA’s proxy pool,” Brundage told KrebsOnSecurity in early December as he was preparing to notify IPIDEA and 10 other proxy providers about his research. Brundage said Synthient first confirmed on December 1, 2025 that the Kimwolf botnet operators were tunneling back through IPIDEA’s proxy network and into the local networks of systems running IPIDEA’s proxy software. The attackers dropped the malware payload by directing infected systems to visit a specific Internet address and to call out the pass phrase “krebsfiveheadindustries” in order to unlock the malicious download. On December 30, Synthient said it was tracking roughly 2 million IPIDEA addresses exploited by Kimwolf in the previous week. Brundage said he has witnessed Kimwolf rebuilding itself after one recent takedown effort targeting its control servers — from almost nothing to two million infected systems just by tunneling through proxy endpoints on IPIDEA for a couple of days. Brundage said IPIDEA has a seemingly inexhaustible supply of new proxies, advertising access to more than 100 million residential proxy endpoints around the globe in the past week alone. Analyzing the exposed devices that were part of IPIDEA’s proxy pool, Synthient said it found more than two-thirds were Android devices that could be compromised with no authentication needed. After charting a tight overlap in Kimwolf-infected IP addresses and those sold by IPIDEA, Brundage was eager to make his findings public: The vulnerability had clearly been exploited for several months, although it appeared that only a handful of cybercrime actors were aware of the capability. But he also knew that going public without giving vulnerable proxy providers an opportunity to understand and patch it would only lead to more mass abuse of these services by additional cybercriminal groups.
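The defensive check Brundage’s advisory calls for is simple to state: resolve the requested hostname first, then refuse to forward the request if any answer falls in private, loopback, link-local or unspecified address space. Below is a minimal sketch of that check in C. It is a hypothetical helper for illustration, not code from any proxy vendor; it handles only IPv4, and a real exit node would also have to pin the vetted address for the actual connection, since re-resolving later would reopen the hole.

/* Reject hostnames whose DNS answers land in internal address space,
 * including the 192.168.0.1 and 0.0.0.0 cases named in the advisory.
 * Hypothetical helper, IPv4 only; compile on any POSIX system. */
#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

static bool ipv4_is_internal(uint32_t a)   /* a in host byte order */
{
    return (a >> 24) == 10        /* 10.0.0.0/8                    */
        || (a >> 20) == 0xAC1     /* 172.16.0.0/12                 */
        || (a >> 16) == 0xC0A8    /* 192.168.0.0/16                */
        || (a >> 24) == 127       /* 127.0.0.0/8 loopback          */
        || (a >> 16) == 0xA9FE    /* 169.254.0.0/16 link-local     */
        || a == 0;                /* 0.0.0.0, also in the advisory */
}

static bool host_is_internal(const char *host)
{
    struct addrinfo hints, *res, *ai;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;    /* IPv6 needs its own list (::1, fc00::/7, ...) */
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, NULL, &hints, &res) != 0)
        return true;              /* cannot vet it: fail closed */
    bool internal = false;
    for (ai = res; ai; ai = ai->ai_next) {
        uint32_t a = ntohl(((struct sockaddr_in *)ai->ai_addr)->sin_addr.s_addr);
        if (ipv4_is_internal(a))
            internal = true;      /* one bad answer taints the host */
    }
    freeaddrinfo(res);
    return internal;
}

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <hostname>\n", argv[0]); return 2; }
    puts(host_is_internal(argv[1]) ? "REFUSE: resolves to internal address space"
                                   : "ok to forward");
    return 0;
}

Filtering on the hostname alone, as the affected services apparently did, blocks nothing: the attacker controls the DNS zone and can point any innocent-looking name at 192.168.0.1.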
On December 17, Brundage sent a security notification to all 11 of the apparently affected proxy providers, hoping to give each at least a few weeks to acknowledge and address the core problems identified in his report before he went public. Many proxy providers who received the notification were resellers of IPIDEA that white-labeled the company’s service. KrebsOnSecurity first sought comment from IPIDEA in October 2025, in reporting on a story about how the proxy network appeared to have benefitted from the rise of the Aisuru botnet, whose administrators appeared to shift from using the botnet primarily for DDoS attacks to simply installing IPIDEA’s proxy program, among others. On December 25, KrebsOnSecurity received an email from an IPIDEA employee identified only as “Oliver,” who said allegations that IPIDEA had benefitted from Aisuru’s rise were baseless. “After comprehensively verifying IP traceability records and supplier cooperation agreements, we found no association between any of our IP resources and the Aisuru botnet, nor have we received any notifications from authoritative institutions regarding our IPs being involved in malicious activities,” Oliver wrote. “In addition, for external cooperation, we implement a three-level review mechanism for suppliers, covering qualification verification, resource legality authentication and continuous dynamic monitoring, to ensure no compliance risks throughout the entire cooperation process.” “IPIDEA firmly opposes all forms of unfair competition and malicious smearing in the industry, always participates in market competition with compliant operation and honest cooperation, and also calls on the entire industry to jointly abandon irregular and unethical behaviors and build a clean and fair market ecosystem,” Oliver continued. Meanwhile, the same day that Oliver’s email arrived, Brundage shared a response he’d just received from IPIDEA’s security officer, who identified himself only by the first name Byron. The security officer said IPIDEA had made a number of important security changes to its residential proxy service to address the vulnerability identified in Brundage’s report. “By design, the proxy service does not allow access to any internal or local address space,” Byron explained. “This issue was traced to a legacy module used solely for testing and debugging purposes, which did not fully inherit the internal network access restrictions. Under specific conditions, this module could be abused to reach internal resources. The affected paths have now been fully blocked and the module has been taken offline.” Byron told Brundage IPIDEA also instituted multiple mitigations for blocking DNS resolution to internal (NAT) IP ranges, and that it was now blocking proxy endpoints from forwarding traffic on “high-risk” ports “to prevent abuse of the service for scanning, lateral movement, or access to internal services.” An excerpt from an email sent by IPIDEA’s security officer in response to Brundage’s vulnerability notification. Brundage said IPIDEA appears to have successfully patched the vulnerabilities he identified. He also noted he never observed the Kimwolf actors targeting proxy services other than IPIDEA, which has not responded to requests for comment. Riley Kilmer is founder of Spur.us, a technology firm that helps companies identify and filter out proxy traffic.
Kilmer said Spur has tested Brundage’s findings and confirmed that IPIDEA and all of its affiliate resellers indeed allowed full and unfiltered access to the local LAN. Kilmer said one model of unsanctioned Android TV boxes that is especially popular — the Superbox, which we profiled in November’s Is Your Android TV Streaming Box Part of a Botnet? — leaves Android Debug Mode running on localhost:5555. “And since Superbox turns the IP into an IPIDEA proxy, a bad actor just has to use the proxy to localhost on that port and install whatever bad SDKs [software development kits] they want,” Kilmer told KrebsOnSecurity. Superbox media streaming boxes for sale on Walmart.com. Both Brundage and Kilmer say IPIDEA appears to be the second or third reincarnation of a residential proxy network formerly known as 911S5 Proxy, a service that operated between 2014 and 2022 and was wildly popular on cybercrime forums. 911S5 Proxy imploded a week after KrebsOnSecurity published a deep dive on the service’s sketchy origins and leadership in China. In that 2022 profile, we cited work by researchers at the University of Sherbrooke in Canada who were studying the threat 911S5 could pose to internal corporate networks. The researchers noted that “the infection of a node enables the 911S5 user to access shared resources on the network such as local intranet portals or other services.” “It also enables the end user to probe the LAN network of the infected node,” the researchers explained. “Using the internal router, it would be possible to poison the DNS cache of the LAN router of the infected node, enabling further attacks.” 911S5 initially responded to our reporting in 2022 by claiming it was conducting a top-down security review of the service. But the proxy service abruptly closed up shop just one week later, saying a malicious hacker had destroyed all of the company’s customer and payment records. In July 2024, the U.S. Department of the Treasury sanctioned the alleged creators of 911S5, and the U.S. Department of Justice arrested the Chinese national named in my 2022 profile of the proxy service. Kilmer said IPIDEA also operates a sister service called 922 Proxy, which the company has pitched from Day One as a seamless alternative to 911S5 Proxy. “You cannot tell me they don’t want the 911 customers by calling it that,” Kilmer said. Among the recipients of Synthient’s notification was the proxy giant Oxylabs. Brundage shared an email he received from Oxylabs’ security team on December 31, which acknowledged Oxylabs had started rolling out security modifications to address the vulnerabilities described in Synthient’s report. Reached for comment, Oxylabs confirmed they “have implemented changes that now eliminate the ability to bypass the blocklist and forward requests to private network addresses using a controlled domain.” But it said there is no evidence that Kimwolf or other attackers exploited its network. “In parallel, we reviewed the domains identified in the reported exploitation activity and did not observe traffic associated with them,” the Oxylabs statement continued. “Based on this review, there is no indication that our residential network was impacted by these activities.” Consider the following scenario, in which the mere act of allowing someone to use your Wi-Fi network could lead to a Kimwolf botnet infection.
In this example, a friend or family member comes to stay with you for a few days, and you grant them access to your Wi-Fi without knowing that their mobile phone is infected with an app that turns the device into a residential proxy node. At that point, your home’s public IP address will show up for rent at the website of some residential proxy provider. Miscreants like those behind Kimwolf then use residential proxy services online to access that proxy node on your IP, tunnel back through it and into your local area network (LAN), and automatically scan the internal network for devices with Android Debug Bridge mode turned on. By the time your guest has packed up their things, said their goodbyes and disconnected from your Wi-Fi, you now have two devices on your local network — a digital photo frame and an unsanctioned Android TV box — that are infected with Kimwolf. You may have never intended for these devices to be exposed to the larger Internet, and yet there you are. Here’s another possible nightmare scenario: Attackers use their access to proxy networks to modify your Internet router’s settings so that it relies on malicious DNS servers controlled by the attackers — allowing them to control where your Web browser goes when it requests a website. Think that’s far-fetched? Recall the DNSChanger malware from 2012 that infected more than a half-million routers with search-hijacking malware, and ultimately spawned an entire security industry working group focused on containing and eradicating it. Much of what is published so far on Kimwolf has come from the Chinese security firm XLab, which was the first to chronicle the rise of the Aisuru botnet in late 2024. In its latest blog post, XLab said it began tracking Kimwolf on October 24, when the botnet’s control servers were swamping Cloudflare’s DNS servers with lookups for the distinctive domain 14emeliaterracewestroxburyma02132[.]su. This domain and others connected to early Kimwolf variants spent several weeks topping Cloudflare’s chart of the Internet’s most sought-after domains, edging Google.com and Apple.com out of their rightful spots in the top 5 most-requested domains. That’s because during that time Kimwolf was asking its millions of bots to check in frequently using Cloudflare’s DNS servers. The Chinese security firm XLab found the Kimwolf botnet had enslaved between 1.8 and 2 million devices, with heavy concentrations in Brazil, India, the United States of America and Argentina. Image: blog.xLab.qianxin.com It is clear from reading the XLab report that KrebsOnSecurity (and security experts) probably erred in misattributing some of Kimwolf’s early activities to the Aisuru botnet, which appears to be operated by a different group entirely. IPIDEA may have been truthful when it said it had no affiliation with the Aisuru botnet, but Brundage’s data left no doubt that its proxy service clearly was being massively abused by Aisuru’s Android variant, Kimwolf. XLab said Kimwolf has infected at least 1.8 million devices, and has shown it is able to rebuild itself quickly from scratch. “Analysis indicates that Kimwolf’s primary infection targets are TV boxes deployed in residential network environments,” XLab researchers wrote. “Since residential networks usually adopt dynamic IP allocation mechanisms, the public IPs of devices change over time, so the true scale of infected devices cannot be accurately measured solely by the quantity of IPs.
In other words, the cumulative observation of 2.7 million IP addresses does not equate to 2.7 million infected devices.” XLab said measuring Kimwolf’s size also is difficult because infected devices are distributed across multiple global time zones. “Affected by time zone differences and usage habits (e.g., turning off devices at night, not using TV boxes during holidays, etc.), these devices are not online simultaneously, further increasing the difficulty of comprehensive observation through a single time window,” the blog post observed. XLab noted that the Kimwolf author shows an almost “obsessive” fixation on Yours Truly, apparently leaving “easter eggs” related to my name in multiple places through the botnet’s code and communications. Image: XLAB. One frustrating aspect of threats like Kimwolf is that in most cases it is not easy for the average user to determine whether any devices on their internal network are vulnerable and/or already infected with residential proxy malware. Let’s assume that through years of security training or some dark magic you can successfully identify that residential proxy activity on your internal network was linked to a specific mobile device inside your house: From there, you’d still need to isolate and remove the app or unwanted component that is turning the device into a residential proxy. Also, the tooling and knowledge needed to achieve this kind of visibility just isn’t there from an average consumer standpoint. The work that it takes to configure your network so you can see and interpret logs of all traffic coming in and out is largely beyond the skillset of most Internet users (and, I’d wager, many security experts). But it’s a topic worth exploring in an upcoming story. Happily, Synthient has erected a page on its website that will state whether a visitor’s public Internet address was seen among those of Kimwolf-infected systems. Brundage also has compiled a list of the unofficial Android TV boxes that are most highly represented in the Kimwolf botnet. If you own a TV box that matches one of these model names and/or numbers, please just rip it out of your network. If you encounter one of these devices on the network of a family member or friend, send them a link to this story and explain that it’s not worth the potential hassle and harm created by keeping them plugged in. The top 15 product devices represented in the Kimwolf botnet, according to Synthient. Chad Seaman is a principal security researcher with Akamai Technologies. Seaman said he wants more consumers to be wary of these pseudo Android TV boxes to the point where they avoid them altogether. “I want the consumer to be paranoid of these crappy devices and of these residential proxy schemes,” he said. “We need to highlight why they’re dangerous to everyone and to the individual. The whole security model where people think their LAN (local area network) is safe, that there aren’t any bad guys on the LAN so it can’t be that dangerous is just really outdated now.” “The idea that an app can enable this type of abuse on my network and other networks, that should really give you pause,” Seaman said of deciding which devices to allow onto your local network. “And it’s not just Android devices here. Some of these proxy services have SDKs for Mac and Windows, and the iPhone.
It could be running something that inadvertently cracks open your network and lets countless random people inside.” In July 2025, Google filed a “John Doe” lawsuit (PDF) against 25 unidentified defendants collectively dubbed the “BadBox 2.0 Enterprise,” which Google described as a botnet of over ten million unsanctioned Android streaming devices engaged in advertising fraud. Google said the BADBOX 2.0 botnet, in addition to compromising multiple types of devices prior to purchase, also can infect devices by requiring the download of malicious apps from unofficial marketplaces. Google’s lawsuit came on the heels of a June 2025 advisory from the Federal Bureau of Investigation (FBI), which warned that cyber criminals were gaining unauthorized access to home networks by either configuring the products with malware prior to the user’s purchase, or infecting the device as it downloads required applications that contain backdoors — usually during the set-up process. The FBI said BADBOX 2.0 was discovered after the original BADBOX campaign was disrupted in 2024. The original BADBOX was identified in 2023, and primarily consisted of Android operating system devices that were compromised with backdoor malware prior to purchase. Lindsay Kaye is vice president of threat intelligence at HUMAN Security, a company that worked closely on the BADBOX investigations. Kaye said the BADBOX botnets and the residential proxy networks that rode on top of compromised devices were detected because they enabled a ridiculous amount of advertising fraud, as well as ticket scalping, retail fraud, account takeovers and content scraping. Kaye said consumers should stick to known brands when it comes to purchasing things that require a wired or wireless connection. “If people are asking what they can do to avoid being victimized by proxies, it’s safest to stick with name brands,” Kaye said. “Anything promising something for free or low-cost, or giving you something for nothing just isn’t worth it. And be careful about what apps you allow on your phone.” Many wireless routers these days make it relatively easy to deploy a “Guest” wireless network on-the-fly. Doing so allows your guests to browse the Internet just fine but it blocks their device from being able to talk to other devices on the local network — such as shared folders, printers and drives. If someone — a friend, family member, or contractor — requests access to your network, give them the guest Wi-Fi network credentials if you have that option. There is a small but vocal pro-piracy camp that is almost condescendingly dismissive of the security threats posed by these unsanctioned Android TV boxes. These tech purists positively chafe at the idea of people wholesale discarding one of these TV boxes. A common refrain from this camp is that Internet-connected devices are not inherently bad or good, and that even factory-infected boxes can be flashed with new firmware or custom ROMs that contain no known dodgy software. However, it’s important to point out that the majority of people buying these devices are not security or hardware experts; the devices are sought out because they dangle something of value for “free.” Most buyers have no idea of the bargain they’re making when plugging one of these dodgy TV boxes into their network.
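For readers who want to check the gear already on their own network, the signal is simple: a device that accepts TCP connections on port 5555 is likely exposing ADB. The following is a rough, hypothetical probe in C (POSIX sockets; pass it one LAN address), not a substitute for proper network tooling; an open port is a reason to investigate the device, not proof of infection.

/* Probe one LAN host for an open TCP port 5555 (where ADB listens). */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <lan-ip>\n", argv[0]);
        return 2;
    }
    struct sockaddr_in dst = { .sin_family = AF_INET, .sin_port = htons(5555) };
    if (inet_pton(AF_INET, argv[1], &dst.sin_addr) != 1) {
        fprintf(stderr, "bad address: %s\n", argv[1]);
        return 2;
    }
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }
    /* On Linux, SO_SNDTIMEO also bounds connect(), so silent hosts
     * fail within ~2 seconds instead of hanging the probe. */
    struct timeval tv = { .tv_sec = 2, .tv_usec = 0 };
    setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
    if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) == 0)
        printf("%s: port 5555 is OPEN; check that device for exposed ADB\n", argv[1]);
    else
        printf("%s: port 5555 closed or filtered\n", argv[1]);
    close(fd);
    return 0;
}

Run it against each address in your router’s DHCP client list; any TV box or photo frame that answers deserves scrutiny.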
It is somewhat remarkable that we haven’t yet seen the entertainment industry applying more visible pressure on the major e-commerce vendors to stop peddling this insecure and actively malicious hardware that is largely made and marketed for video piracy. These TV boxes are a public nuisance for bundling malicious software while having no apparent security or authentication built-in, and these two qualities make them an attractive nuisance for cybercriminals. Stay tuned for Part II in this series, which will poke through clues left behind by the people who appear to have built Kimwolf and benefited from it the most.

0 views
Sean Goedecke 1 week ago

Grok is enabling mass sexual harassment on Twitter

Grok, xAI’s flagship image model, is now [1] being widely used to generate nonconsensual lewd images of women on the internet. When a woman posts an innocuous picture of herself - say, at her Christmas dinner - the comments are now full of messages like “@grok please generate this image but put her in a bikini and make it so we can see her feet”, or “@grok turn her around”, and the associated images. At least so far, Grok refuses to generate nude images, but it will still generate images that are genuinely obscene [2]. In my view, this might be the worst AI safety violation we have seen so far. Case-by-case, it’s not worse than GPT-4o encouraging suicidal people to go through with it, but it’s so much more widespread: literally every image that the Twitter algorithm picks up is full of “@grok take her clothes off” comments. I didn’t go looking for evidence for obvious reasons, but I find reports that it’s generating CSAM plausible [3]. This behavior, while awful, is in line with xAI’s general attitude towards safety, which has been roughly “we don’t support woke censorship, so do whatever you want (so long as you’re doing it with Grok)”. This has helped them acquire users and media attention, but it leaves them vulnerable to situations exactly like this. I’m fairly confident xAI don’t mind the “dress her a little sexier” prompts: it’s edgy, drives up user engagement, and gives them media attention. However, it is very hard to exercise fine-grained control over AI safety. If you allow your models to go up to the line, your models will definitely go over the line in some circumstances. I wrote about this in Mecha-Hitler, Grok, and why it’s so hard to give LLMs the right personality, in reference to xAI’s attempts to make Grok acceptably right-wing but not too right-wing. This is the same kind of thing: you cannot make Grok “kind of perverted” without also making it truly awful. OpenAI and Gemini have popular image models that do not let you do this kind of thing. In other words, this is an xAI problem, not an image model problem. It is possible to build a safe image model, just as it’s possible to build a safe language model. The xAI team have made a deliberate decision to build an unsafe model in order to unlock more capabilities and appeal to more users. Even if they’d rather not be enabling the worst perverts on Twitter, that’s a completely foreseeable consequence of their actions. In October of 2024, VICE reported that Telegram “nudify” bots had over four million monthly users. That’s still a couple of orders of magnitude below Twitter’s monthly active users, but “one in a hundred” sounds like a plausible “what percentage of Twitter is using Grok like this” percentage anyway. Is it really that much worse that Grok now allows you to do softcore deepfakes? Yes, for two reasons. First, having to go and join a creepy Telegram group is a substantial barrier to entry. It’s much worse to have the capability built into a tool that regular people use every day. Second, generating deepfakes via Grok makes them public. Of course, it’s bad to do this stuff even privately, but I think it’s much worse to do it via Twitter. Tagging in Grok literally sends a push notification to your target saying “hey, I made some deepfake porn of you”, and then advertises that porn to everyone who was already following them. Yesterday xAI rushed out an update to rein this behavior in (likely a system prompt update, given the timing). I imagine they’re worried about the legal exposure, if nothing else.
But this will happen again. It will probably happen again with Grok. Every AI lab has a big “USER ENGAGEMENT” dial where left is “always refuse every request” and right is “do whatever the user says, including generating illegal deepfake pornography”. The labs are incentivized to turn that dial as far to the right as possible. In my view, image model safety is a different topic from language model safety. Unsafe language models primarily harm the user (via sycophancy, for instance). Unsafe image models, as we’ve seen from Grok, can harm all kinds of people. I tend to think that unsafe language models should be available (perhaps not through ChatGPT dot com, but certainly for people who know what they’re doing). However, it seems really bad for everyone on the planet to have a “turn this image of a person into pornography” button. At minimum, I think it’d be sensible to pursue entities like xAI under existing CSAM or deepfake pornography laws, to set up a powerful counter-incentive for people with their hands on the “USER ENGAGEMENT” dial. I also think it’d be sensible for AI labs to strongly lock down “edit this image of a human” requests, even if that precludes some legitimate user activity. Earlier this year, in The case for regulating AI companions, I suggested regulating “AI girlfriend” products. I mistakenly thought AI companions or sycophancy might be the first case of genuine widespread harm caused by AI products, because of course nobody would ship an image model that allowed this kind of prompting. Turns out I was wrong.
[1] There were reports in May of this year of similar behavior, but it was less widespread and xAI jumped on it fairly quickly. ↩
[2] Clever prompting by unethical fetishists can generate really degrading content (to the point where I’m uncomfortable going into more detail). I saw a few cases earlier this year of people trying this prompting tactic and Grok refusing them. It seems the latest version of Grok now allows this. ↩
[3] Building a feature that lets you digitally undress 18-year-olds but not 17-year-olds is a really difficult technical problem, which is one of the many reasons to never do this. ↩

0 views
flak 2 weeks ago

using lava lamps to break RSA

It’s well known, in some circles, that the security of the internet depends on a wall of lava lamps to protect us from hackers. It is perhaps less well known that hackers can turn around and use this technology to augment their attacks.
background
Trillions of dollars in transactions are protected by the RSA algorithm. The security of this algorithm depends on the difficulty of factoring a large number, assumed to be infeasible when the prime numbers are selected randomly. To this end, Cloudflare has a wall of lava lamps to keep us all safe. The chaotic perturbations in the lava flow can be used to generate random numbers. However, Cloudflare has fallen victim to a regrettably common SEV-CRIT random number generator vulnerability. They have exposed the internal state of their system to the internet. Countless puff pieces include pictures of the lava wall, allowing attackers to recreate its internal state. There are even tubefluencers with videos of the wall in action. In this paper, we present a novel technique that uses pictures of a wall of lava lamps to calculate prime factors. As the chaotic behavior of lava lamps depends on quantum effects, it is not possible to replicate these results with solely conventional computing techniques.
method
We download a picture of lava lamps. (Optionally using a local image.) We reduce the entropy of the image using the SHA-512 algorithm to 512 bits. We further reduce it to 128 bits using the MD5 algorithm. A further reduction to 32 bits is performed with the CRC32 algorithm. This concludes stage one, entropy compaction and stabilization. We next proceed to stage two, factor extraction. We use a three bit extractor mask (hexadecimal: 0x00000007) to examine the low three bits looking for either of the prime numbers 3 or 5. If found, that’s our result. Otherwise we right shift by one position and repeat.
results
We achieve a success rate exceeding 99.9% when factoring 15. Larger values such as 21 are also factored 66% of the time. Even more challenging targets such as 35 can be factored with a 33% success rate. Ongoing experimentation suggests this technique is capable of factoring 46% of all positive integers. We hope to improve on this result with further refinement to the factor extraction stage. Theoretical calculations suggest a three bit extractor may be sufficient to achieve a 77% success rate. Refer to figure 1.
figure 1
The author’s lava factor tool factoring 15.
acknowledgements
The author is indebted to Peter Gutmann’s pioneering work in dog factorization. No lava lamps were permanently damaged in the conduct of this experiment.
source
Source code is provided in accordance with the principles of knowledge sharing. Commercial use prohibited.
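For readers who wish to reproduce these results, here is a faithful sketch of the described pipeline in C, assuming OpenSSL for SHA-512/MD5 and zlib for CRC32 (the author’s actual lava factor tool may differ); feed it any lava lamp photo and compile with -lcrypto -lz.

/* Stage one: entropy compaction and stabilization (SHA-512 -> MD5 ->
 * CRC32). Stage two: factor extraction with a three bit mask, scanning
 * for the primes 3 or 5. Satire faithfully preserved. */
#include <openssl/md5.h>
#include <openssl/sha.h>
#include <zlib.h>
#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s lava.jpg\n", argv[0]); return 2; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    unsigned char buf[65536];
    size_t n = fread(buf, 1, sizeof(buf), f);   /* lava entropy intake */
    fclose(f);

    unsigned char s[SHA512_DIGEST_LENGTH], m[MD5_DIGEST_LENGTH];
    SHA512(buf, n, s);                      /* reduce image to 512 bits */
    MD5(s, sizeof(s), m);                   /* further reduce to 128    */
    uint32_t c = (uint32_t)crc32(0L, m, sizeof(m));  /* compact to 32   */

    for (int i = 0; i < 30; i++, c >>= 1) { /* three bit extractor mask */
        uint32_t low = c & 0x00000007;
        if (low == 3 || low == 5) {
            printf("prime factor found: %u\n", low);
            return 0;
        }
    }
    printf("no factor extracted; consult a fresher lamp\n");
    return 1;
}

Since every 3-bit window hits 3 or 5 with probability 1/4, thirty windows fail only about (3/4)^30 of the time, which is where the claimed 99.9%-plus success rate for factoring 15 comes from.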

0 views

CHERIoT RTOS: An OS for Fine-Grained Memory-Safe Compartments on Low-Cost Embedded Devices

CHERIoT RTOS: An OS for Fine-Grained Memory-Safe Compartments on Low-Cost Embedded Devices Saar Amar, Tony Chen, David Chisnall, Nathaniel Wesley Filardo, Ben Laurie, Hugo Lefeuvre, Kunyan Liu, Simon W. Moore, Robert Norton-Wright, Margo Seltzer, Yucong Tao, Robert N. M. Watson, and Hongyan Xia SOSP'25
This paper is a companion to a previous paper which described the CHERIoT hardware architecture. This work presents an OS that doesn’t look like the systems you are used to. The primary goal is memory safety (and security more broadly). Why rewrite your embedded code in Rust when you can switch to a fancy new chip and OS instead? Recall that a CHERI capability is a pointer augmented with metadata (bounds, access permissions). CHERI allows a more restrictive capability to be derived from a less restrictive one (e.g., reduce the bounds or remove access permissions), but not the other way around. CHERIoT RTOS doesn’t have the notion of a process; instead it has compartments. A compartment comprises code and compartment-global data. Compartment boundaries are trust boundaries. I think of it like a microkernel operating system. Example compartments in CHERIoT include:
Boot loader
Context switcher
Heap allocator
Thread scheduler
The boot loader is fully trusted and is the first code to run. The hardware provides the boot loader with the ultimate capability. The boot loader then derives more restrictive capabilities, which it passes to other compartments. You could imagine a driver compartment which is responsible for managing a particular I/O device. The boot loader would provide that compartment with a capability that enables the compartment to access the MMIO registers associated with the device. There is no user space/kernel space distinction here, only a set of compartments, each with a unique set of capabilities. Fig. 3 illustrates a compartment: Source: https://dl.acm.org/doi/10.1145/3731569.3764844
Sealed Capabilities
The CHERIoT hardware architecture supports sealing of capabilities. Sealing a capability is similar to deriving a more restrictive one, only this time the derived capability is useless until it is unsealed by a compartment which holds a capability with unsealing permissions. I think of this like a client encrypting some data before storing it on a server. The data is useless to everyone except for the client who can decrypt it. Cross-compartment function calls are similar to system calls and are implemented with sealed capabilities. Say compartment A needs to be able to call a function exported by compartment B. At boot, the boot loader derives a “function call” capability which is a pointer into the export table associated with B, seals that capability, and passes it to compartment A at initialization. The boot loader also gives the switcher a capability which allows it to unseal the function call capability. When A wants to call the function exported by B, it passes the sealed capability to the switcher. The switcher then unseals the capability and uses it to read metadata about the exported function from B’s export table. The switcher uses this metadata to safely perform the function call. Capability sealing also simplifies inter-compartment state management. Say compartment A calls into compartment B (for networking) to create a TCP connection. The networking compartment can allocate a complicated tree of objects and then return a sealed capability which points to that tree.
Compartment A can hold on to that capability and pass it as a parameter for future networking function calls (which B will unseal and then use). Compartment A doesn’t need to track per-connection objects in its global state.
Heap Allocator
The heap compartment handles memory allocation for all compartments. There is just one address space shared by all compartments, but capabilities make the whole thing safe. As described in the previous summary, when an allocation is freed, the heap allocator sets associated revocation bits to zero. This prevents use-after-free bugs (in conjunction with the CHERIoT hardware load filter). Similar to garbage collection, freed memory is quarantined (not reused) until a memory sweep completes which ensures that no outstanding valid capabilities are referencing the memory to be reused. The allocator supports allocation capabilities which can enforce per-compartment quotas.
Threads
If you’ve had enough novelty, you can rest your eyes for a moment. The CHERIoT RTOS supports threads, and they mostly behave like you would expect. The only restriction is that threads are statically declared in code. Threads begin execution in the compartment that declares them, but then threads can execute code in other compartments via cross-compartment calls.
Micro-reboots
Each compartment is responsible for managing its own state with proper error handling. If all else fails, the OS supports micro-reboots, where a single compartment can be reset to a fresh state. The cross-compartment call mechanism supported by the switcher enables the necessary bookkeeping for micro-reboots. The steps to reboot a single compartment are:
Stop new threads from calling into the compartment (these calls fail with an error code)
Fault all threads which are currently executing in the compartment (this will also result in error codes being returned to other compartments)
Release all resources (e.g., heap data) which have been allocated by the compartment
Reset all global variables to their initial state
I wonder how often a micro-reboot of one compartment results in an error code which causes other compartments to micro-reboot. If a call into a compartment which is in the middle of a micro-reboot can fail, then I could see that triggering a cascade of micro-reboots. The ideas here remind me of Midori, which relied on managed languages rather than hardware support. I wonder which component is better to trust, an SoC or a compiler?
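To make the capability rules concrete, here is a toy software model in C of the two invariants described above: derivation can only shrink bounds and permissions, and a sealed capability is inert until unsealed with the matching right. This is purely illustrative; real CHERIoT capabilities are hardware-enforced tagged pointers, not structs, and none of these names are the CHERIoT API.

/* Toy model of capability derivation and sealing - NOT the CHERIoT API. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PERM_READ  1u
#define PERM_WRITE 2u

typedef struct {
    uintptr_t base, top;   /* bounds                       */
    unsigned  perms;       /* PERM_READ | PERM_WRITE       */
    unsigned  seal_type;   /* 0 = unsealed, else inert     */
} cap_t;

/* Derive a more restrictive capability; any attempt to grow fails. */
static bool cap_derive(const cap_t *in, cap_t *out,
                       uintptr_t base, uintptr_t top, unsigned perms)
{
    if (in->seal_type || base < in->base || top > in->top ||
        (perms & ~in->perms))
        return false;
    *out = (cap_t){ base, top, perms, 0 };
    return true;
}

static cap_t cap_seal(cap_t c, unsigned type) { c.seal_type = type; return c; }

/* Only a holder of the matching unseal right (e.g. the switcher) succeeds. */
static bool cap_unseal(cap_t *c, unsigned unseal_right)
{
    if (c->seal_type == 0 || c->seal_type != unseal_right)
        return false;
    c->seal_type = 0;
    return true;
}

int main(void)
{
    cap_t root = { 0x1000, 0x2000, PERM_READ | PERM_WRITE, 0 };
    cap_t ro, grown;
    printf("shrink to read-only slice: %d\n",           /* 1: allowed   */
           cap_derive(&root, &ro, 0x1000, 0x1100, PERM_READ));
    printf("grow bounds back out:      %d\n",           /* 0: refused   */
           cap_derive(&ro, &grown, 0x1000, 0x3000, PERM_READ));

    cap_t sealed = cap_seal(ro, 42);  /* e.g. handed out by compartment B */
    printf("unseal with wrong right:   %d\n", cap_unseal(&sealed, 7));  /* 0 */
    printf("unseal with right 42:      %d\n", cap_unseal(&sealed, 42)); /* 1 */
    return 0;
}

In this analogy, compartment A can carry the sealed struct around and pass it back, but only the switcher (holding right 42) can turn it back into something usable, which is exactly why A cannot forge or tamper with B’s connection state.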

0 views
daniel.haxx.se 2 weeks ago

no strcpy either

Some time ago I mentioned that we went through the curl source code and eventually got rid of all strncpy() calls. strncpy() is a weird function with a crappy API. It might not null terminate the destination and it pads the target buffer with zeroes. Quite frankly, most code bases are probably better off completely avoiding it because each use of it is a potential mistake. In that particular rewrite when we made strncpy calls extinct, we made sure we would either copy the full string properly or return an error. It is rare that copying a partial string is the right choice, and if it is, we can just as well memcpy() it and handle the null terminator explicitly. This meant no case for using strlcpy or anything such either. strncpy density in curl over time
But strcpy? strcpy however, has its valid uses and it has a less bad and confusing API. The main challenge with strcpy is that when using it we do not specify the length of the target buffer nor of the source string. This is normally not a problem because in a C program it should only be used when we have full control of both. But normally and always are not necessarily the same thing. We are all human and we all make mistakes. Using strcpy implies that there is at least one, or maybe two, buffer size checks done prior to the function invocation. In a good situation. Over time however – let’s imagine we have code that lives on for decades – when code is maintained, patched, improved and polished by many different authors with different mindsets and approaches, those size checks and the function invocation may glide apart. The further away from each other they go, the bigger is the risk that something happens in between that nullifies one of the checks or changes the conditions for the strcpy. To make sure that the size checks cannot be separated from the copy itself, we introduced a string copy replacement function the other day that takes the target buffer, target size, source buffer and source string length as arguments, and only if the copy can be made and the null terminator also fits is the operation done. This made it possible to implement the replacement using memcpy(). Now we can completely ban the use of strcpy in curl source code, like we already did strncpy. Using this function version is a little more work and more cumbersome than strcpy since it needs more information, but we believe the upsides of this approach outweigh the extra pain involved. I suppose we will see how that will fare down the road. Let’s come back in a decade and see how things developed! strcpy density in curl over time the strcopy source
An additional minor positive side-effect of this change is of course that this should effectively prevent the AI chatbots from reporting strcpy uses in curl source code and insisting it is insecure if anyone would ask (as people still apparently do). It has been proven numerous times already that strcpy in source code is like a honey pot for generating hallucinated vulnerability claims. Still, this will just make them find something else to make up a report about, so there is probably no net gain. AI slop is not a game we can win.
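The post links to the strcopy source rather than inlining it, but based on the description (target buffer, target size, source buffer, source string length; copy only if the whole string plus terminator fits), a sketch of the contract might look like this. The exact name, return convention and edge-case handling in curl may well differ.

#include <string.h>

/* Sketch of the described replacement: the target size and source length
 * travel with the call, and nothing is written unless the full string
 * plus its null terminator fits. Name and return style are assumptions. */
static int strcopy(char *dst, size_t dstsize,
                   const char *src, size_t srclen)
{
    if (!dst || !src || srclen >= dstsize)
        return 1;                  /* error: would not fit (or bad args)  */
    memcpy(dst, src, srclen);      /* bounded copy of exactly srclen bytes */
    dst[srclen] = '\0';            /* explicit null terminator            */
    return 0;                      /* success: full string copied         */
}

A call site then reads something like if(strcopy(buf, sizeof(buf), name, namelen)) goto fail; so the buffer size and the copy can never drift apart during later maintenance, which is the whole point of the exercise.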

0 views

Happy 16th Birthday, KrebsOnSecurity.com!

KrebsOnSecurity.com celebrates its 16th anniversary today! A huge “thank you” to all of our readers — newcomers, long-timers and drive-by critics alike. Your engagement this past year here has been tremendous and truly a salve on a handful of dark days. Happily, comeuppance was a strong theme running through our coverage in 2025, with a primary focus on entities that enabled complex and globally-dispersed cybercrime services. Image: Shutterstock, Younes Stiller Kraske. In May 2024, we scrutinized the history and ownership of Stark Industries Solutions Ltd., a “bulletproof hosting” provider that came online just two weeks before Russia invaded Ukraine and served as a primary staging ground for repeated Kremlin cyberattacks and disinformation efforts. A year later, Stark and its two co-owners were sanctioned by the European Union, but our analysis showed those penalties have done little to stop the Stark proprietors from rebranding and transferring considerable network assets to other entities they control. In December 2024, KrebsOnSecurity profiled Cryptomus, a financial firm registered in Canada that emerged as the payment processor of choice for dozens of Russian cryptocurrency exchanges and websites hawking cybercrime services aimed at Russian-speaking customers. In October 2025, Canadian financial regulators ruled that Cryptomus had grossly violated its anti-money laundering laws, and levied a record $176 million fine against the platform. In September 2023, KrebsOnSecurity published findings from researchers who concluded that a series of six-figure cyberheists across dozens of victims resulted from thieves cracking master passwords stolen from the password manager service LastPass in 2022. In a court filing in March 2025, U.S. federal agents investigating a spectacular $150 million cryptocurrency heist said they had reached the same conclusion. Phishing was a major theme of this year’s coverage, which peered inside the day-to-day operations of several voice phishing gangs that routinely carried out elaborate, convincing, and financially devastating cryptocurrency thefts. A Day in the Life of a Prolific Voice Phishing Crew examined how one cybercrime gang abused legitimate services at Apple and Google to force a variety of outbound communications to their users, including emails, automated phone calls and system-level messages sent to all signed-in devices. Nearly a half-dozen stories in 2025 dissected the incessant SMS phishing or “smishing” coming from China-based phishing kit vendors, who make it easy for customers to convert phished payment card data into mobile wallets from Apple and Google. In an effort to wrest control over this phishing syndicate’s online resources, Google has since filed at least two John Doe lawsuits targeting these groups and dozens of unnamed defendants. In January, we highlighted research into a dodgy and sprawling content delivery network called Funnull that specialized in helping China-based gambling and money laundering websites distribute their operations across multiple U.S.-based cloud providers. Five months later, the U.S. government sanctioned Funnull, identifying it as a top source of investment/romance scams known as “pig butchering.” Image: Shutterstock, ArtHead. In May, Pakistan arrested 21 people alleged to be working for Heartsender, a phishing and malware dissemination service that KrebsOnSecurity first profiled back in 2015.
The arrests came shortly after the FBI and the Dutch police seized dozens of servers and domains for the group. Many of those arrested were first publicly identified in a 2021 story here about how they’d inadvertently infected their computers with malware that gave away their real-life identities. In April, the U.S. Department of Justice indicted the proprietors of a Pakistan-based e-commerce company for conspiring to distribute synthetic opioids in the United States. The following month, KrebsOnSecurity detailed how the proprietors of the sanctioned entity are perhaps better known for operating an elaborate and lengthy scheme to scam westerners seeking help with trademarks, book writing, mobile app development and logo designs. Earlier this month, we examined an academic cheating empire turbocharged by Google Ads that earned tens of millions of dollars in revenue and has curious ties to a Kremlin-connected oligarch whose Russian university builds drones for Russia’s war against Ukraine. An attack drone advertised on a website hosted in the same network as Russia’s largest private education company — Synergy University. As ever, KrebsOnSecurity endeavored to keep close tabs on the world’s biggest and most disruptive botnets, which pummeled the Internet this year with distributed denial-of-service (DDoS) assaults that were two to three times the size and impact of previous record DDoS attacks. In June, KrebsOnSecurity.com was hit by the largest DDoS attack that Google had ever mitigated at the time (we are a grateful guest of Google’s excellent Project Shield offering). Experts blamed that attack on an Internet-of-Things botnet called Aisuru that had rapidly grown in size and firepower since its debut in late 2024. Another Aisuru attack on Cloudflare just days later practically doubled the size of the June attack against this website. Not long after that, Aisuru was blamed for a DDoS that again doubled the previous record. In October, it appeared the cybercriminals in control of Aisuru had shifted the botnet’s focus from DDoS to a more sustainable and profitable use: Renting hundreds of thousands of infected Internet of Things (IoT) devices to proxy services that help cybercriminals anonymize their traffic. However, it has recently become clear that at least some of the disruptive botnet and residential proxy activity attributed to Aisuru last year likely was the work of people responsible for building and testing a powerful botnet known as Kimwolf. Chinese security firm XLab, which was the first to chronicle Aisuru’s rise in 2024, recently profiled Kimwolf as easily the world’s biggest and most dangerous collection of compromised machines — with approximately 1.83 million devices under its thumb as of December 17. XLab noted that the Kimwolf author “shows an almost ‘obsessive’ fixation on the well-known cybersecurity investigative journalist Brian Krebs, leaving easter eggs related to him in multiple places.” Image: XLab, Kimwolf Botnet Exposed: The Massive Android Botnet with 1.8 million infected devices. I am happy to report that the first KrebsOnSecurity stories of 2026 will go deep into the origins of Kimwolf, and examine the botnet’s unique and highly invasive means of spreading digital disease far and wide. The first in that series will include a somewhat sobering and global security notification concerning the devices and residential proxy services that are inadvertently helping to power Kimwolf’s rapid growth.
Thank you once again for your continued readership, encouragement and support. If you like the content we publish at KrebsOnSecurity.com, please consider making an exception for our domain in your ad blocker. The ads we run are limited to a handful of static images that are all served in-house and vetted by me (there is no third-party content on this site, period). Doing so would help further support the work you see here almost every week. And if you haven’t done so yet, sign up for our email newsletter! (62,000 other subscribers can’t be wrong, right?). The newsletter is just a plain text email that goes out the moment a new story is published. We send between one and two emails a week, we never share our email list, and we don’t run surveys or promotions. Thanks again, and Happy New Year everyone! Be safe out there.

0 views
Danny McClelland 2 weeks ago

Omarchy Hardening

A few weeks ago, I came across A Word on Omarchy which highlighted some security gaps in Omarchy’s default configuration. Things like LLMNR being enabled, UFW configured but not actually running, and relaxed login attempt limits. The post resonated with me. Omarchy is a fantastic opinionated setup for Arch Linux with Hyprland, but like any distribution that prioritises convenience, some security defaults get loosened in the process. That’s not necessarily wrong, it’s a trade-off, but it’s worth knowing about. So I built Omarchy Hardening. It’s an interactive terminal script that walks you through five hardening options:
Disable LLMNR - Prevents name poisoning attacks on local networks
Enable UFW Firewall - For earlier Omarchy versions where UFW wasn’t enabled by default
Tailscale-only SSH - Restricts SSH to your Tailscale network, making it invisible to the public internet
Limit Login Attempts - Reduces failed attempts from 10 back to 3 before lockout
Configure Git Signing - Enables SSH commit signing for verified commits
Each option shows exactly what will change before you confirm. Nothing is selected by default. The script opens with a warning, and I’ll repeat it here: you should not rely on automation to secure your system. The best approach is to understand your distribution and make these changes yourself. Read the source code. Run the commands manually. This builds knowledge you’ll need when things go wrong. The tool exists to demonstrate what these changes look like and to make them easier to apply consistently. But it’s not a substitute for understanding. If you’re curious about going further, the README includes a section on additional hardening steps. OpenSnitch is worth particular attention. It’s an application-level firewall that prompts you whenever a program tries to make a network connection. Educational and practical. The code is on GitHub: dannymcc/omarchy-hardening
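For a sense of what the first and fourth options likely amount to on a systemd-based Arch install (my guess at the underlying changes, not necessarily what the script does verbatim): disabling LLMNR is a one-line drop-in for systemd-resolved, and the lockout threshold lives in faillock.conf.

# /etc/systemd/resolved.conf.d/10-disable-llmnr.conf
[Resolve]
LLMNR=no

# /etc/security/faillock.conf (read by pam_faillock)
deny = 3

The first change needs a systemctl restart systemd-resolved to take effect. Checking these two files by hand is also a quick way to audit whether the script, or you, actually applied them.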

0 views