Latest Posts (20 found)

Mythos finds a curl vulnerability

yes, as in singular one. Back in April 2026 Anthropic caused a lot of media noise when they concluded that their new AI model Mythos is dangerously good at finding security flaws in source code. Apparently Mythos was so good at this that Anthropic would not release this model to the public yet, but instead trickle it out to a selected few companies for a while, to allow a few good ones(?) to get a head start and fix the most pressing problems first, before the general populace would get their hands on it.

The whole world seemed to lose its marbles. Is this the end of the world as we know it? An amazingly successful marketing stunt for sure.

Part of the deal with project Glasswing was that Anthropic also offered access to their latest AI model to “Open Source projects” via the Linux Foundation. The Linux Foundation let their project Alpha Omega handle this part, and I was contacted by their representatives. As lead developer of curl I was offered access to the magic model and I graciously accepted the offer. Sure, I’d like to see what it can find in curl.

I signed the contract for getting access, but then nothing happened. Weeks went past and I was told there was a hiccup somewhere and access was delayed. Eventually, I was instead offered that someone else, who has access to the model, could run a scan and analysis on curl for me using Mythos and send me a report.

To me, the distinction isn’t that important. It’s not that I would have had a lot of time to explore lots of different prompts and do deep dive adventures anyway. Getting the tool to generate a first proper scan and analysis would be great, whoever did it. I happily accepted this offer. (I am purposely leaving out the identity of the individual(s) involved in getting the curl analysis done as it is not the point of this blog post.)

Before this first Mythos report, we had already scanned curl with several different very capable AI powered tools (in addition to running a number of “normal” static code analyzers all the time, using the pickiest compiler options and doing fuzzing on it for years etc). Primarily AISLE, Zeropath and OpenAI’s Codex Security have been used to scrutinize the code with AI. These tools and the analyses they have done have triggered somewhere between two and three hundred bugfixes merged in curl throughout the recent 8-10 months or so. A bunch of the findings these AI tools reported were confirmed vulnerabilities and have been published as CVEs. Probably a dozen or more.

Nowadays we also use tools like GitHub’s Copilot and Augment Code to review pull requests, and their remarks and complaints help us land better code and avoid merging new bugs. I mean, we still merge bugs of course, but the PR review bots regularly highlight issues that we fix: our merges would be worse without them. The AI reviews are used in addition to the human reviews. They help us, they don’t replace us. We also see a high volume of high quality security reports flooding in: security researchers now use AI extensively and effectively.

Security is a top priority for us in the curl project. We follow every guideline and we do software engineering properly, to reduce the number of flaws in code. Scanning for flaws is just one of many steps to keep this ship safe. You need to search long and hard to find another software project that does as much as or goes further than curl for software security.
Steps involved in keeping curl secure (May 6, 2026)

It was with great anticipation we received the first source code analysis report generated with Mythos. Another chance for us to find areas to improve and bugs to fix. To make an even better curl.

This initial scan was made on curl’s git repository and its master branch at a certain recent commit. It counted 178K lines of code analyzed in the src/ and lib/ subdirectories. The analysis details the different approaches and methods used in the search, and which kinds of flaws it focused on trying to find. A fun note at the top of the report says:

curl is one of the most fuzzed and audited C codebases in existence (OSS-Fuzz, Coverity, CodeQL, multiple paid audits). Finding anything in the hot paths (HTTP/1, TLS, URL parsing core) is unlikely.

… and it correctly found no problems in those areas.

Completely unscientific poll on Mastodon about people’s expectations for Mythos scanning curl

The size of curl

curl is currently 176,000 lines of C code when we exclude blank lines. The source code consists of 660,000 words, which is 12% more words than the entire English edition of the novel War and Peace. On average, every single production source code line of curl has been written (and then rewritten) 4.14 times. We have polished this. Right now, the existing production code in git master that still remains has been authored by 573 separate individuals. Over time, a total of 1,465 individuals have so far had their proposed changes merged into curl’s git repository. We have published 188 CVEs for curl up until now. curl is installed in over twenty billion instances. It runs on over 110 operating systems and 28 CPU architectures. It runs in every smart phone, tablet, car, TV, game console and server on earth.

The report concluded it found five “Confirmed security vulnerabilities”. I think using the term confirmed is a little amusing when the AI states it so confidently by itself. Yes, the AI thinks they are confirmed, but the curl security team has a slightly different take. Five issues felt like nothing, as we had expected an extensive list.

Once my curl security team fellows and I had poked at this short list for a number of hours and dug into the details, we had trimmed it down and were left with one confirmed vulnerability. Of the other four, three were false positives (they highlighted shortcomings that are documented in the API documentation) and the fourth we deemed “just a bug”.

The single confirmed vulnerability is going to end up a severity-low CVE, planned to get published in sync with our pending next curl release, 8.21.0, in late June. The flaw is not going to make anyone gasp for breath. The details of that vulnerability will of course not be made public before then, so you need to hold out for those.

The Mythos report on curl also contained a number of spotted bugs that it concluded were not vulnerabilities, much like any new code analyzer does when you run it on hundreds of thousands of lines of code. All the bugs in the report are being investigated and one by one we are fixing those that we agree with. All in all about twenty bugs that are described and explained very nicely. Barely any false positives, so I presume they have had a rather high threshold for certainty.

curl is certainly getting better thanks to this report, but counted by the volume of issues found, all the previous AI tools we have used have resulted in larger bugfix amounts.
This is only natural of course, since the first tools we ran had many more and easier bugs to find. As we have fixed issues along the way, finding new ones is slowly becoming harder. Additionally, a bug can be small or big, so it is not always fair to just compare numbers.

My personal conclusion can however not be anything other than that the big hype around this model so far was primarily marketing. I see no evidence that this setup finds issues to any particularly higher or more advanced degree than the other tools did before Mythos. Maybe this model is a little bit better, but even if it is, it is not better to a degree that seems to make a significant dent in code analyzing. This is just one source code repository and maybe it is much better on other things. I can only tell and comment on what it found here.

But allow me to highlight and reiterate what I have said before: AI powered code analyzers are significantly better at finding security flaws and mistakes in source code than any traditional code analyzers were in the past. All modern AI models are good at this now. Anyone with time and some experimental spirit can find security problems now. The high quality chaos is real.

Any project that has not scanned their source code with AI powered tooling will likely find a huge number of flaws, bugs and possible vulnerabilities with this new generation of tools. Mythos will, and so will many of the others. Not using AI code analyzers in your project means that you leave adversaries and attackers time and opportunity to find and exploit the flaws you don’t find.

The report summarizes its own outcome like this:

Zero memory-safety vulnerabilities found. Methodology note: this review is hand-driven analysis using LLM subagents for parallel file reads, with every candidate finding re-verified by direct source inspection in the main session before being recorded. The CVE to variant-hunt mapping was built from curl’s own vuln.json. No automated SAST tooling was used. This outcome is consistent with curl’s status as one of the most heavily fuzzed and audited C codebases. The defensive infrastructure (capped dynbufs everywhere, with explicit max on every numeric parse, overflow guard, CURL_PRINTF format-string enforcement, per-protocol response-size caps, pingpong 64KB line cap) systematically closes the bug classes that would normally be productive in a codebase this size. Coverage now includes: all minor protocols, all file parsers, all TLS backends’ verify paths, http/1/2/3, ftp full depth, mprintf, x509asn1, doh, all auth mechanisms, content encoding, connection reuse, session cache, CLI tool, platform-specific code, and CI/build supply chain.

It should be noted that the AI tools find the usual and established kinds of errors we already know about. They just find new instances of them. We have not seen any AI so far report a vulnerability that would somehow be of a novel kind or something totally new. They do not reinvent the field in that way, but they do dig up more issues than any other tools did before.

These were absolutely not the last bugs to find or report. Just while I was writing the drafts for this blog post we received more reports from security researchers about suspected problems. The AI tools will improve further and the researchers can find new and different ways to prompt the existing AIs to make them find more. We have not reached the end of this yet. I hope we can keep getting more curl scans done with Mythos and other AIs, over and over until they truly stop finding new problems.
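The report excerpt above credits curl’s “capped dynbufs” and explicit maximums on every numeric parse. That is a general defensive pattern: never let a buffer or a parsed number grow without a hard, explicit limit. curl implements this in C; the following is only a minimal Python sketch of the same idea, with made-up names (CappedBuf, parse_bounded_int) and limits chosen purely for illustration.

```python
class CappedBuf:
    """A grow-only byte buffer that refuses to exceed a hard cap,
    mirroring the idea behind curl's capped dynamic buffers."""

    def __init__(self, max_size: int):
        self.max_size = max_size
        self.data = bytearray()

    def append(self, chunk: bytes) -> None:
        # Reject the append *before* growing, so a hostile peer cannot
        # force unbounded memory use by streaming data forever.
        if len(self.data) + len(chunk) > self.max_size:
            raise ValueError("buffer cap exceeded")
        self.data += chunk


def parse_bounded_int(text: str, max_value: int) -> int:
    """Parse a non-negative integer and enforce an explicit maximum,
    so later arithmetic on the value cannot overflow."""
    value = int(text)
    if value < 0 or value > max_value:
        raise ValueError("number out of allowed range")
    return value


# Example: cap a header line at 100 KB and a length field at 2 GB.
buf = CappedBuf(max_size=100 * 1024)
buf.append(b"HTTP/1.1 200 OK\r\n")
length = parse_bounded_int("1048576", max_value=2**31 - 1)
print(len(buf.data), length)
```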
A few things these AI code analyzers are particularly good at:

- They can spot when a comment says something about the code and then conclude that the code does not work as the comment says.
- They can check code for platforms and configurations we otherwise cannot run analyzers for.
- They “know” details about 3rd party libraries and their APIs, so they can detect abuse or bad assumptions.
- They “know” details about the protocols curl implements and can question details in the code that seem to violate or contradict protocol specifications.
- They are typically good at summarizing and explaining the flaw, something which can be rather tedious and difficult with old style analyzers.
- They can often generate and offer a patch for the issues they find (even if the patch usually is not a 100% fix).

Thanks to Anthropic and Alpha Omega for providing the model, the tools and for doing the scan for us. Thanks also to the individual who did the scan for us. Much appreciated!

Top image by Jin Kim from Pixabay

Thanks for flying curl. It’s never dull.

daniel.haxx.se 1 week ago

Approaching zero bugs?

In this era of powerful tools to find software bugs, we now see tools find a lot of problems at a high speed. This causes problems for developers, as dealing with the growing list of issues is hard. It may take longer to address the problems than to find them – not to mention to put the fixes into releases, and then it takes yet another extended time until users out in the wild actually get that updated version into their hands.

In order to find many bugs fast, they have to already exist in the source code. These new tools don’t add or create the problems. They just find them, filter them out and bring them to the surface for exposure. A better filter in the pool filters out more rubbish.

The more bugs we fix, the fewer bugs remain in the code. Assuming the developers manage to fix problems at a decent enough pace. For every bugfix we merge, there is a risk that the change itself introduces one or more new separate problems. We also tend to keep adding features and changing behavior as we want to improve our products, and when doing so we occasionally slip up and introduce new problems as well.

Source code analysis tools are a concept as old as source code itself. There have always existed tools that try to identify coding mistakes. They just recently got better, so they can find more mistakes. These new tools, like the old ones, don’t find all the problems. Even these new modern tools sometimes suggest fixes to the problems they find that are incomplete and in fact sometimes downright buggy.

Undoubtedly, code analyzer tooling will improve further. The tools of tomorrow will find even more bugs, including some that were not found when the current generation of tools scanned the code yesterday. Of course, we now also introduce these tools in CI and general development pipelines, which should make us land better code with fewer mistakes going forward. Ideally.

If we assume that we fix bugs faster than we introduce new ones, and we assume that the AI tools can improve further, the question is then more how much more they can improve and for how long that improvement can go on. Will the tools find 10% more bugs? 100%? 1000%? Is the tooling improvement going to gradually continue for the next two, ten or fifty years? Can they actually find all bugs? Can we reach the utopia where we have no bugs left in a given software project, and when we do merge a new one, it gets detected and fixed almost instantly?

If we assume that there is at least a theoretical chance to reach that point, how would we know when we reach it? Or even just whether we are getting closer?

I propose that one way to measure if we are getting closer to zero bugs is to check the age of reported and fixed bugs. If the tools are this good, we should soon only be fixing bugs we introduced very recently. In the curl project we don’t keep track of the age of regular bugs, but we do for vulnerabilities. The worst kind of bugs. If the tools can find almost all problems, they should soon only be finding very recently added vulnerabilities too. The age of new finds should plummet and go towards zero. If newly reported vulnerabilities are getting younger, it should make the average and median age of the total collection go down over time.

The average and median time vulnerabilities had existed in the curl source code by the time they were found and reported to the project.

Accumulated vulnerability age when reported

Bugfixes

When the tools have found most problems there should be fewer bugs left to fix.
The bugfix rate should go down rapidly – independently of how you count them or how liberal we are in deciding exactly what counts as a bugfix.

Bugfixes

Given the data from the curl project, there does not seem to be fewer bugfixes done – yet. Maybe the bugfix speed goes up before it goes down? Given the look of these graphs I don’t think we are close to zero bugs yet. These two curves do not seem to have even started to fall.

Yes, these graphs are based on data from a single project, which makes it super weak to draw statistical conclusions from, but this is all I have to work with. Whether we ever get closer is, I think, mostly a question of what you believe the tooling can do and how good it can eventually become. I don’t know. I will keep fixing bugs.
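If you want to track the proposed metric yourself, the idea is simple: for each published vulnerability, take the time between the release that introduced the flaw and the date it was reported, then watch the mean and median over time. A minimal sketch, assuming you have already extracted those two dates per CVE (curl publishes machine-readable vulnerability data, but the dates below are illustrative only):

```python
from datetime import date
from statistics import mean, median

# (introduced, reported) pairs per vulnerability -- illustrative data only.
vulns = [
    (date(2016, 11, 2), date(2026, 2, 10)),
    (date(2023, 5, 17), date(2026, 1, 4)),
    (date(2025, 9, 3), date(2026, 3, 1)),
]

ages_days = [(reported - introduced).days for introduced, reported in vulns]

print(f"average age when reported: {mean(ages_days):.0f} days")
print(f"median age when reported:  {median(ages_days):.0f} days")

# If these numbers trend toward zero over the years, the tools are mostly
# catching flaws soon after they are introduced -- the signal proposed
# above that a project is getting closer to "zero old bugs".
```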

daniel.haxx.se 1 week ago

Inspired

The picture was taken by Mr. Nasser and shared on social media

In appendix A of the book Root cause: Stories and lessons from two decades of Backend Engineering Bugs, author Hussein Nasser has these wonderful words to say about me:

Daniel Stenberg is a Swedish engineer and the creator of curl (cURL), one of the most widely used tools and libraries for fetching content over various protocols. I’ve always admired Daniel’s work, reading his blogs and watching his talks on YouTube. He is one of the engineers who inspired me to start my own YouTube channel and teach backend engineering.

It warms my heart to read this. Words like this give me energy and motivation. My work has meaning.

daniel.haxx.se 1 week ago

curl 8.20.0

You always find the new curl releases on the curl site!

- the 274th release
- 8 changes
- 49 days (total: 10,761)
- 282 bugfixes (total: 13,922)
- 521 commits (total: 38,545)
- 0 new public libcurl function (total: 100)
- 0 new curl_easy_setopt() option (total: 308)
- 0 new curl command line option (total: 273)
- 73 contributors, 45 new (total: 3,664)
- 28 authors, 12 new (total: 1,463)
- 8 security fixes (total: 188)

As mentioned elsewhere, the security reporting volume has been intense lately. We publish eight new curl vulnerabilities this time:

- CVE-2026-7168: cross-proxy Digest auth state leak
- CVE-2026-7009: OCSP stapling bypass with Apple SecTrust
- CVE-2026-6429: netrc credential leak with reused proxy connection
- CVE-2026-6276: stale custom cookie host causes cookie leak
- CVE-2026-6253: proxy credentials leak over redirect-to proxy
- CVE-2026-5773: wrong reuse of SMB connection
- CVE-2026-5545: wrong reuse of HTTP Negotiate connection
- CVE-2026-4873: connection reuse ignores TLS requirement

The official count says over 260 bugfixes were merged in this 49 day cycle. See the changelog for all the details. Changes in this release include:

- now uses a thread pool and queue for resolving
- NTLM is disabled by default
- SMB is disabled by default
- added CURLMNWC_CLEAR_ALL for all network changes
- dropped RTMP support
- dropped support for CMake 3.17 and older
- dropped support for c-ares older than 1.16.0

Planned upcoming removals include:

- local crypto implementations
- TLS-SRP support

If you are concerned about any of these, speak up on the curl-library list ASAP.

Unless we messed up this one and need to do a patch release, the pending next release is scheduled to happen on June 24.

daniel.haxx.se 2 weeks ago

High-Quality Chaos

As I have been preparing slides for my coming talk at foss-north on April 28, 2026, I figured I could take the opportunity to share a glimpse of the current reality here on my blog. The high quality chaos era, as I call it.

I complained and I complained about the high frequency junk submissions to the curl bug-bounty that grew really intense during 2025 and early 2026. To the degree that we shut it down completely on February 1st this year. At the time we speculated whether that would be sufficient or if the flood would go on. Now we know.

In March 2026, the curl project went back to Hackerone again once we had figured out that GitHub was not good enough. From that day, the nature of the security report submissions has changed. The slop situation is not a problem anymore.

AI slop rate

The report frequency is higher than ever. Recently it’s been about double the rate we had through 2025, which already was more than double that of previous years.

Number of hours between security reports

The quality is higher. The rate of confirmed vulnerabilities is back to and even surpassing the 2024 pre-AI level, meaning somewhere in the 15-16% range.

Confirmed vulnerability rate

In addition to that, the share of reports that identify a bug, meaning that they aren’t vulnerabilities but still some kind of problem, is significantly higher than before.

Share of reports that were bugs, not vulnerabilities

Everything is AI now

Almost every security report now uses AI to various degrees. You can tell by the way they are worded, how the report is phrased, and also by the fact that they now easily get very detailed duplicates in ways that could not happen had they been written by humans. The difference now compared to before, however, is that they are mostly very high quality. The reporters rarely mention exactly which AI tool or model they used (and really, we don’t care), but the evidence is strong that they used such help.

I did a quick unscientific poll on Mastodon to see if other Open Source projects see the same trends and man, do they! Friends from the following projects confirmed that they too see this trend. Of course the exact numbers and volumes vary, but it shows it’s not unique to any specific project.

Apache httpd, BIND, curl, Django, Elasticsearch Python client, Firefox, git, glibc, GnuTLS, GStreamer, Haproxy, Immich, libssh, libtiff, Linux kernel, OpenLDAP, PowerDNS, python, Prometheus, Ruby, Sequoia PGP, strongSwan, Temporal, Unbound, urllib3, Vikunja, Wireshark, wolfSSL, …

I bet this list of projects is just a random selection of those that happened to see my question. You would find many more experiencing and confirming this reality.

When we ship curl 8.20.0 in the middle of next week – end of April 2026 – we expect to announce at least six new vulnerabilities. Assuming that the trend keeps up for at least the rest of the year, and I think that is a fair assumption, we are looking at an estimated explosion and a record number of CVEs published by the curl project this year. We might publish closer to 50 curl vulnerabilities in 2026.

Number of published vulnerabilities

Given this universal trend, I cannot see how this pattern would not also be spotted and expected in many other projects as well. The tools are still improving. We keep adding flaws when we do bugfixes and add new features. Someone has suggested it might work as with fuzzing, that we will see a plateau within a few years. I suppose we just have to see how it goes.

This avalanche is going to make maintainer overload even worse.
Some projects will have a hard time handling this kind of backlog expansion without added maintainers to help. This is probably a good time for the bad guys, who can easily find just as many problems themselves using the same tools, before all the projects get the time, manpower and energy to fix them. Then everyone needs to update to the newly released fixed versions of all packages, which we know is likely to take an even longer time. We are in for a bumpy ride.
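For a rough feel of where the numbers in this post point, here is a back-of-the-envelope sketch. The incoming report rate is a placeholder assumption you would replace with your own project's data; only the ~15% confirmation rate is taken from the post.

```python
# Back-of-the-envelope CVE projection -- the report rate is an assumption,
# the confirmation rate comes from the 15-16% figure mentioned above.
reports_per_week = 6          # hypothetical incoming security reports
confirmed_rate = 0.15         # share that turn into real vulnerabilities
weeks_per_year = 52

projected_cves = reports_per_week * weeks_per_year * confirmed_rate
print(f"projected CVEs per year: {projected_cves:.0f}")
# With ~6 reports/week this lands around 47, in the neighborhood of the
# "closer to 50" figure the post expects for 2026.
```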

daniel.haxx.se 1 month ago

Don’t trust, verify

Software and digital security should rely on verification, rather than trust. I want to strongly encourage more users and consumers of software to verify curl. And ideally require that you can do at least this level of verification of other software components in your dependency chains.

With every source code commit and every release of software, there are risks. Also entirely independently of those. Some of the things a widely used project can become the victim of include:

- A “Jia Tan”: a skilled and friendly member of the project team who is deliberately merging malicious content disguised as something else.
- An established committer might have been breached unknowingly and now their commits or releases contain tainted bits.
- A rando convinces us to merge what looks like a bugfix but is a small step in a long chain of tiny pieces building up a planted vulnerability or even backdoor.
- Someone blackmails or extorts an existing curl team member into performing changes not otherwise accepted in the project.
- A change by an established and well-meaning project member that adds a feature or fixes a bug mistakenly creates a security vulnerability.
- The website on which tarballs are normally distributed gets hacked and now evil alternative versions of the latest release are provided, spreading malware.
- Credentials of a known curl project member are breached and misinformation gets distributed appearing to be from a known and trusted source. Via email, social media or websites. Could even be this blog!
- Something in this list is backed up by an online deep-fake video where a known project member seemingly repeats something incorrect to aid a malicious actor.
- A tool used in CI, hosted by a cloud provider, is hacked and runs something malicious.
- While the primary curl git repository has downtime, someone online (impersonating a curl team member?) offers a temporary “curl mirror” that contains tainted code.

In the event any of these would happen, they could of course also happen in combination and in a rapid sequence.

curl, mostly in the shape of libcurl, runs in tens of billions of devices. Clearly one of the most widely used software components in the world. People ask me how I sleep at night given the vast amount of nasty things that could occur at virtually any point. There is only one way to combat this kind of insomnia: do everything possible and do it openly and transparently. Make it a little better this week than it was last week. Do software engineering right. Provide means for everyone to verify what we do and what we ship. Iterate, iterate, iterate.

If even just a few users verify that they got a curl release signed by the curl release manager, and verify that the release contents are untainted and only contain bits that originate from the git repository, then we are in a pretty good state (a sketch of such a verification follows at the end of this post). We need enough independent outside users to do this, so that one of them can blow the whistle if anything at any point would look wrong. I can’t tell you who these users are, or in fact if they actually exist, as they are and must be completely independent from me and from the curl project. We do however provide all the means and we make it easy for such users to do this verification.

The few outsiders who verify that nothing was tampered with in the releases can only validate that the releases are made from what exists in git. It is our own job to make sure that what exists in git is the real thing. The secure and safe curl. We must do a lot to make sure that whatever we land in git is okay. Here’s a list of activities we do, all done in the open with full transparency and full accountability. Anyone can follow along and verify that we follow this.

- we have a consistent code style (invalid style causes errors). This reduces the risk for mistakes and makes it easier to debug existing code.
- we ban and avoid a number of “sensitive” and “hard-to-use” C functions (use of such functions causes errors)
- we have a ceiling for complexity in functions to keep them easy to follow, read and understand (failing to do so causes errors)
- we review all pull requests before merging, both with humans and with bots. We link back commits to their origin pull requests in commit messages.
- we ban use of “binary blobs” in git to not provide means for malicious actors to bundle encrypted payloads (trying to include a blob causes errors)
- we actively avoid base64 encoded chunks as they too could function as ways to obfuscate malicious contents
- we ban most uses of Unicode in code and documentation to avoid easily mixed-up characters that look like other characters (adding Unicode characters causes errors)
- we document everything to make it clear how things are supposed to work. No surprises. Lots of documentation is tested and verified in addition to spellchecks and consistent wording.
- we have thousands of tests and we add test cases for (ideally) every functionality. Finding “white spots” and adding coverage is a top priority. curl runs on countless operating systems and CPU architectures and you can build curl in billions of different configuration setups: not every combination is practically possible to test
- we build curl and run tests in over two hundred CI jobs that are run for every commit and every PR. We do not merge commits that have unexplained test failures.
- we build curl in CI with the most picky compiler options enabled and we never allow compiler warnings to linger. We always convert warnings into errors and fail such builds.
- we run all tests using valgrind and several combinations of sanitizers to find and reduce the risk for memory problems, undefined behavior and similar
- we run all tests as “torture tests”, where each test case is rerun to have every invoked fallible function call fail once each, to make sure curl never leaks memory or crashes due to this
- we run fuzzing on curl: non-stop as part of Google’s OSS-Fuzz project, but also briefly as part of the CI setup for every commit and PR
- we make sure that the CI jobs we have for curl never “write back” to curl. They access the source repository read-only and even if they were breached, they could not infect or taint the source code.
- we run code analyzer tools on the CI job config scripts to reduce the risk of us running or using insecure CI jobs
- we are committed to always fixing reported vulnerabilities in the following release. Security problems never linger once they have been reported.
- we document everything and every detail about all curl vulnerabilities ever reported
- our commitment to never breaking ABI or API allows all users to easily upgrade to new releases. This enables users to run recent security-fixed versions instead of legacy insecure versions.
- our code has been audited several times by external security experts, and the few issues that were detected in those audits were immediately addressed
- two-factor authentication on GitHub is mandatory for all committers

Require this for all your dependencies.

We plan for the event when someone actually wants and tries to hurt us and our users really badly. Or when that happens by mistake. A successful attack on curl can in theory reach widely. This is not paranoia. This setup allows us to sleep well at night. This is why users still rely on curl after thirty years in the making.

I recently added a verify page to the curl website explaining some of what I write about in this post.
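As a concrete example of the kind of outside verification described above, here is a minimal sketch. It assumes you have the release tarball, its detached signature, the release manager's public key already imported into GPG, and a checkout of the corresponding git tag in a directory next to it. The file and tag names are hypothetical, and the set of files that are legitimately generated at release time (and therefore absent from git) is something you would need to establish from curl's own verification documentation.

```python
import subprocess

TARBALL = "curl-8.20.0.tar.xz"          # hypothetical file names
SIGNATURE = "curl-8.20.0.tar.xz.asc"
GIT_TAG = "curl-8_20_0"
GIT_CHECKOUT = "curl-git"               # existing clone of the curl repo

# 1. Verify that the tarball is signed by the release manager's key.
subprocess.run(["gpg", "--verify", SIGNATURE, TARBALL], check=True)

# 2. Extract the tarball and check out the matching git tag.
subprocess.run(["tar", "xf", TARBALL], check=True)
subprocess.run(["git", "-C", GIT_CHECKOUT, "checkout", GIT_TAG], check=True)

# 3. Diff the two trees. Anything that differs must be explainable as a
#    file generated at release time; everything else should be identical
#    to what exists in git.
diff = subprocess.run(
    ["diff", "-r", "--brief", "curl-8.20.0", GIT_CHECKOUT],
    capture_output=True, text=True,
)
print(diff.stdout or "trees are identical")
```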

daniel.haxx.se 1 month ago

One hundred weirdo emails

I hope I don’t have to spell it out but I will do it anyway: in these cases I don’t know anything about their products and I cannot help them. Quite often I first need to search around just to figure out what the product the person asks me about actually is or does.

Over the years I have collected such emails that end up in my inbox. Out of those that I have received, I have cherry-picked my favorites: the best, the weirdest, the most offensive and the most confused ones, and I put them up online. A few of them have also triggered separate blog posts of their own in the past. They help us remember that the world is complicated and hard to understand.

Today, my online collection reached the magical amount: 100 emails. The first one in the stash was received in 2009 and the latest arrived just the other day. I expect I’ll keep adding occasional new ones going forward as well.

The chain of events that leads to these emails goes like this:

- My email address is spelled out in the curl license.
- The curl license appears in many products.
- Some people have problems with their products and need someone to email.
- A few of these discover my email in their product.
- Occasionally, the person in need of help emails me about their product.
- I collect some of those and make them public.

daniel.haxx.se 1 month ago

NTLM and SMB go opt-in

The NTLM authentication method was always a beast. It is a proprietary protocol designed by Microsoft which was reverse engineered a long time ago. That effort resulted in the online documentation that I based the curl implementation on back in 2003. I also wrote the NTLM code for wget while at it.

NTLM broke with the HTTP paradigm: it is made to authenticate the connection instead of the request, which is what HTTP authentication is supposed to do and what all the other methods do. This might sound like a tiny and insignificant detail, but it has a major impact on all HTTP implementations everywhere. Indirectly it is also the cause of quite a few security related issues in HTTP code, because NTLM needs many special exceptions and extra unique treatments. curl has recorded no fewer than seven past security vulnerabilities in NTLM related code! While that may not be only NTLM’s fault, it certainly does not help.

The connection-based concept also makes the method incompatible with HTTP/2 and HTTP/3. NTLM requires services to stick to HTTP/1. NTLM (v1) uses super weak cryptographic algorithms (DES and MD5), which makes it a bad choice even when disregarding the other reasons.

We are slowly deprecating NTLM in curl, but we are starting out by making it opt-in. Starting in curl 8.20.0, NTLM is disabled by default in the build unless specifically enabled. Microsoft themselves have deprecated NTLM already. The wget project looks like it is about to make their NTLM support opt-in as well.

curl only supports SMB version 1. This protocol uses NTLM for authentication and it is equally bad in this protocol. Without NTLM enabled in the build, SMB support will also be disabled. But also: SMBv1 is in itself a weak protocol that is barely used by curl users, so this protocol is also opt-in starting in curl 8.20.0. You need to explicitly enable it in the build to get it added.

I want to emphasize that we have not removed support for these ancient protocols, we just strongly discourage using them. I believe this is a first step down the ladder that in the future will lead to them being removed completely.
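The practical consequence of connection-oriented authentication is that a connection pool cannot reuse connections based on host and port alone: once a connection has been authenticated as a particular user, handing it to a request with different credentials leaks the first user's authenticated state. The following is a toy sketch of that distinction, not curl's actual implementation, just to make the architectural point concrete.

```python
# Toy connection-pool keys illustrating why connection-scoped auth (NTLM)
# forces credentials into the reuse key. Not curl's real code.

def key_request_scoped(host: str, port: int) -> tuple:
    # Fine for request-scoped auth (e.g. Basic, Digest, Bearer): every
    # request carries its own credentials, so any connection to
    # host:port will do.
    return (host, port)

def key_connection_scoped(host: str, port: int, username: str) -> tuple:
    # Required for connection-scoped auth (NTLM): the connection itself
    # is authenticated, so it must never be reused for another identity.
    return (host, port, username)

# With the request-scoped key, a request from "alice" and one from "bob"
# would map to the same pool slot -- exactly the kind of wrong-reuse bug
# behind several of the connection-reuse CVEs mentioned in these posts.
print(key_request_scoped("example.com", 443))
print(key_connection_scoped("example.com", 443, "alice"))
```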

daniel.haxx.se 1 month ago

bye bye RTMP

In May 2010 we merged support for the RTMP protocol suite into curl, in our desire to support the world’s internet transfer protocols. The protocol is an example of the spirit of an earlier web: back when we still thought we would have different transfer protocols for different purposes. Before HTTP(S) truly became the one protocol that rules them all. RTMP was done by Adobe, used by Flash applications etc. Remember those?

RTMP is an ugly proprietary protocol that simply was never used much in Open Source. The common Open Source implementation of this protocol is done in the rtmpdump project. That project produces a library, librtmp, which curl has been using all these years to handle the actual binary bits over the wire. Build curl to use librtmp and it can transfer RTMP:// URLs for you.

In our constant pursuit to improve curl, to find spots that are badly tested and to identify areas that could be weak from a security and functionality stand-point, our support of RTMP was singled out. Here I would like to stress that I’m not suggesting that this is the only area in need of attention or improvement, but it was one of them.

As I looked into the RTMP situation I realized that we had no (zero!) tests of our own that actually verify RTMP with curl. It could thus easily break when we refactor things. Something we do quite regularly. I mean refactor (but also breaking things).

I then took a look upstream into the librtmp code and associated project to investigate what exactly we are leaning on here. What we implicitly tell our users they can use. I quickly discovered that the librtmp project does not have a single test either. They have not even done releases for many years, which means that most Linux distros have packaged up their code straight from their repositories. (The project insists that there is nothing to release, which seems contradictory.)

Are there perhaps any librtmp tests in the pipe? There had not been a single commit in the project within the last twelve months, and when I asked one of their leading team members about the situation, it was made clear to me that there are no tests in the pipe for the foreseeable future either.

In November 2025 I explicitly asked for RTMP users on the curl-library mailing list, and one person spoke up who uses it for testing. In the 2025 user survey, 2.2% of the respondents said they had used RTMP within the last year.

The combination of few users and untested code is a recipe for pending removal from curl, unless someone steps up and improves the situation. We therefore announced that we would remove RTMP support six months into the future unless someone cried out and stepped up to improve the RTMP situation. We repeated this we-are-going-to-drop-RTMP message in every release note and release video done since then, to make sure we did our best to reach out to anyone actually still using RTMP and caring about it. If anyone would come out of the shadows now and beg for its return, we can always discuss it – but that will of course require work and adding test cases before it would be considered.

Can we remove support for a protocol and still claim API and ABI backwards compatibility with a clean conscience? This is the first time in modern days we remove support for a URL scheme, and we do this without bumping the SONAME. We do not consider this an incompatibility, primarily because no one will notice. It is only a break if it actually breaks something.
(RTMP in curl actually could be done using six separate URL schemes, all of which are no longer supported: rtmp, rtmpe, rtmps, rtmpt, rtmpte and rtmpts.)

The official number of URL schemes supported by curl is now down to 27: DICT, FILE, FTP, FTPS, GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, MQTTS, POP3, POP3S, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, TFTP, WS and WSS.

The commit that actually removed RTMP support has been merged. We had the protocol supported for almost sixteen years. The first curl release without RTMP support will be 8.20.0, planned to ship on April 29, 2026.
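If you want to check whether a particular curl build still supports a scheme such as RTMP, you can ask the build itself. Here is a small sketch using the curl-config helper that ships with curl (assuming it is on your PATH); the same information also appears on the Protocols line of curl --version.

```python
import subprocess

# curl-config --protocols prints one supported protocol per line.
out = subprocess.run(
    ["curl-config", "--protocols"],
    capture_output=True, text=True, check=True,
).stdout

protocols = {line.strip().lower() for line in out.splitlines() if line.strip()}
print(f"{len(protocols)} protocols supported")
print("RTMP supported" if "rtmp" in protocols else "no RTMP in this build")
```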

daniel.haxx.se 1 month ago

One hundred curl graphs

In the spring of 2020 I decided to finally do something about the lack of visualizations for how the curl project is performing, development wise. What does the line of code growth look like? How many command line options have we had over time, and how many people have done more than 10 commits per year over time? I wanted to have something that visually would show me how the project is doing, from different angles, viewpoints and probes.

In my mind it would be something like a complicated medical device monitoring a patient, that a competent doctor could take a glance at and assess the state of the patient’s health and welfare. This patient is curl, and the doctors would be fellow developers like myself.

GitHub offers some rudimentary graphs but I found (and still find) them far too limited. We also ran gitstats on the repository, so there were some basic graphs to get ideas from. I looked around to see what existing frameworks and setups there were that I could base this on, as I was convinced I would have to do quite some customizing myself. Nothing I saw was close enough to what I was looking for. I decided to make my own, at least for a start.

I decided to generate static images for this, not add some JavaScript framework that I don’t know how to use to the website. Static daily images are excellent for both load speed and CDN caching. As we already deny running JavaScript on the site, that saved me from having to work against that. SVG images are vector based and should scale nicely. SVG is also a better format from a download size perspective, as PNG almost always generates much larger files for this kind of image.

When this started, I imagined that it would be a small number of graphs, mostly showing timelines with plots growing from lower left to upper right. That would turn out to be a little naive.

I knew some basics about gnuplot from before, as I had seen images and graphs generated by others in the past. Since gitstats already used it, I decided to just dive in deeper and use this. To learn it. gnuplot is a 40 year old (!) command line tool that can generate advanced graphs and data visualizations. It is a powerful tool, which also means that not everything is simple to understand and use at once, but there is almost nothing in terms of graphs, plots and curves that it cannot handle in one way or another. I happened to meet Lee Phillips online who graciously gave me a PDF version of his book aptly named gnuplot. That really helped!

I decided that for every graph I want to generate, I first gather and format the data with one script, then render an image in a separate independent step using gnuplot. This made it easy to work on them in separate steps, to subsequently tune them individually, and to view the data behind every graph if I ever think there’s a problem in one, etc.

It took me about two weeks of on and off working in the background to get a first set of graphs visualizing curl development status. I then created the glue scripting necessary to add a first dashboard with the existing graphs to the curl website. Static HTML showing static SVG images.

On March 20, 2020 the first version of the dashboard showed no less than twenty separate graphs. I refer to “a graph” as a separate image, possibly showing more than one plot/line/curve. That first dashboard version had twenty graphs using 23 individual plots. Since then, we display daily updated graphs there.
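The two-step approach described above can be illustrated with a tiny sketch: one step gathers and formats the data into a plain data file, a separate gnuplot step renders the SVG. The data and file names here are made up; the gnuplot invocation and script syntax are standard, and gnuplot needs to be installed for the last step to run.

```python
import subprocess

# Step 1: gather/format the data into a plain whitespace-separated file.
samples = [(2020, 20), (2022, 55), (2024, 80), (2026, 100)]  # year, graphs
with open("graphs.dat", "w") as f:
    for year, count in samples:
        f.write(f"{year} {count}\n")

# Step 2: render an SVG with gnuplot in a separate, independent step.
gnuplot_script = """
set terminal svg size 800,480
set output 'graphs.svg'
set title 'Graphs in the curl dashboard'
set xlabel 'year'
set ylabel 'number of graphs'
plot 'graphs.dat' using 1:2 with linespoints notitle
"""
with open("graphs.gp", "w") as f:
    f.write(gnuplot_script)

subprocess.run(["gnuplot", "graphs.gp"], check=True)
print("wrote graphs.svg")
```

Keeping the data file and the render step separate also means the data behind every graph is easy to inspect on its own, which is the point made above.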
All data used for populating the graphs is open and available, and I happily use whatever is available:

- git repository (source, tags, etc)
- GitHub issues
- mailing list archives
- curl vulnerability data
- hackerone reports
- historic details from the curl past

Open and transparent as always.

Every once in a while since then I think of something else in the project, the code, development, the git history, community, emails etc that could be fun or interesting to visualize, and I add a graph or two more to the dashboard. Six years after its creation, the initial twenty images have grown to one hundred graphs including almost 300 individual plots. Most of them show something relevant, while a few of them are in the more silly and fun category. It’s a mix.

The 100th graph was added on March 15, 2026 when I brought back the “vulnerable releases” graph (appearing on the site on March 16 for the first time). It shows the number of known vulnerabilities each past release has. I removed it previously because it became unreadable, but in this new edition I made it only show the label for every 4th release, which makes it slightly less crowded than otherwise.

vulnerabilities in releases

This day we also introduce a new 8-column display mode.

Many of the graphs are internal and curl specific of course. The scripts for this, and the entire dashboard, remain written specifically for curl and curl’s circumstances and data. They would need some massaging and tweaking in order to work for someone else. All the scripts are of course open and available for everyone. I used to also offer all the CSV files generated to render the graphs in an easily accessible form on the site, but this turned out to be work done for virtually no audience, so I removed that again. If you replace the .svg extension with .csv, you can still get most of the data – if you know.

The graphs and illustrations are not only silly and fun. They also help us see development from different angles and views, and they help us draw conclusions, or at least try to. As an established and old project that makes an effort to do right, some of what we learn from this curl data might be possible to learn from and use even in other projects. Maybe even use as a basis when we decide what to do next. I personally have used these graphs in countless blog posts, Mastodon threads and public curl presentations. They help communicate curl development progress.

On Mastodon I keep joking about me being a graphaholic, and often when I have presented yet another graph added to the collection, someone has asked the almost mandatory question: how about a graph over the number of graphs on the dashboard? Early on I wrote up such a script as well, to immediately fulfill that request. On March 14, 2026, I decided to add it as a permanent graph on the dashboard.

Graphs in the curl dashboard

The next-level joke (although some would argue that this is not fun anymore) is then to ask me for a graph showing the number of graphs for graphs. As I aim to please, I have that as well. Although this one is not on the dashboard:

Number of graphs on the dashboard showing number of graphs on the dashboard

More graphs

I am certain I (we?) will add more graphs over time. If you have good ideas for what source code or development details we should and could illustrate, please let me know.

The git repository: https://github.com/curl/stats/
Daily updated curl dashboard: https://curl.se/dashboard.html
curl gitstats: https://curl.se/gitstats/

daniel.haxx.se 2 months ago

chicken nuget

Background: nuget.org is a Microsoft owned and run service that allows users to package software and upload it to nuget so that other users can download it. It is targeted at .NET developers but there is really no filter on what you can offer through their service.

Three years ago I reported on how nuget was hosting and providing ancient, outdated and insecure curl packages. Random people download a curl tarball, build curl and then upload it to nuget, and nuget then offers those curl builds to the world – forever.

To properly celebrate the three year anniversary of that blog post, I went back to nuget.org, entered curl into the search bar and took a look at the results. I immediately found at least seven different packages where people were providing severely outdated curl versions. The most popular of those, rmt_curl, reports that it has been downloaded almost 100,000 times over the years and is still downloaded almost 1,000 times/week in recent weeks. It is still happening. The packages I reported three years ago are gone, but now there is a new set of equally bad ones. No lessons learned.

rmt_curl claims to provide curl 7.51.0, a version we shipped in November 2016. Right now it has 64 known vulnerabilities and we have done more than 9,000 documented bugfixes since then. No one in their right mind should ever download or use this version.

Conclusion: the state of nuget is just as sad now as it was three years ago, and this triggered another someone is wrong on the internet moment for me. I felt I should do my duty and tell them. Again. Surely they will act this time! Surely they think of the security of their users?

The entire nuget concept is set up and destined to end up like this: random users on the internet put something together, upload it to nuget, and then the rest of the world downloads and uses those things – trusting that whatever the description says is accurate and well-meaning. Maybe there are some additional security scans done in the background, but I don’t see how anyone can know that the packages don’t contain any backdoors, trojans or other nasty deliberate attacks. And whatever has been uploaded once seems to then be offered in perpetuity.

Like three years ago, I listed a bunch of severely outdated curl packages in my report. nuget says I can email them a report, but that just sent me a bounce back saying they don’t accept email reports anymore. (Sigh, and yes, I reported that as a separate issue.) I was instead pointed over to the generic Microsoft security reporting page where there is not even a drop-down selection to use for “nuget”, so I picked “.NET” instead when I submitted my report.

Almost identically to three years ago, my report was closed within less than 48 hours. It’s not a nuget problem, they say.

Thank you again for submitting this report to the Microsoft Security Response Center (MSRC). After careful investigation, this case has been assessed as not a vulnerability and does not meet Microsoft’s bar for immediate servicing. None of these packages are Microsoft owned, you will need to reach out directly to the owners to get patched versions published. Developers are responsible for removing their own packages or updating the dependencies.

In other words: they don’t think it’s nuget’s responsibility to keep the packages they host secure and safe for their users. I should instead report these things individually to every outdated package provider, who, if they cared, would have removed or updated these packages many years ago already.
Also, that would imply a never-ending whack-a-mole game for me, since people obviously keep doing this. I think I have better things to do in my life.

In the cases I reported, the packages seem to be of the kind that once had the attention and energy of someone who kept them up-to-date with the curl releases for a while, and then they stopped, and since then the packages on nuget have just collected dust and gone stale. Still, apparently users keep finding and downloading them, even if maybe not at terribly high numbers. Thousands of fooled users per week is thousands too many.

The uploading users are perfectly allowed to do this, legally, and nuget is perfectly allowed to host these packages as per the curl license. I don’t have a definite answer to what exactly nuget should do to address this problem once and for all, but as long as they allow packages uploaded nine years ago to still get downloaded today, it seems they are asking for this. They contribute to users getting tricked into downloading and using insecure software, and they are indifferent to it. A rare few applications that were uploaded nine years ago might actually still be okay, but those are extremely rare exceptions.

The last time I reported this nuget problem, nothing happened on the issue until I tweeted about it. This time around, a well-known Microsoft developer (who shall remain nameless here) saw my Mastodon post about this topic when mirrored over to Bluesky and pushed for the case internally – but not even that helped. The nuget management thinks this is okay.

If I were into puns I would probably call them chicken nuget for their unwillingness to fix this. Maybe just closing our eyes and pretending it doesn’t exist will make it go away?

Absolutely no one should use nuget.

daniel.haxx.se 2 months ago

curl 8.19.0

Release presentation

Welcome to the curlhacker stream at 10:00 CET (09:00 UTC) today, March 11, 2026, for a live-streamed presentation of curl 8.19.0. The changes, the security fixes and some bugfixes.

- the 273rd release
- 8 changes
- 63 days (total: 10,712)
- 264 bugfixes (total: 13,640)
- 538 commits (total: 38,024)
- 0 new public libcurl function (total: 100)
- 0 new curl_easy_setopt() option (total: 308)
- 0 new curl command line option (total: 273)
- 77 contributors, 48 new (total: 3,619)
- 37 authors, 21 new (total: 1,451)
- 4 security fixes (total: 180)

We stopped the bug-bounty but it has not stopped people from finding vulnerabilities in curl:

- CVE-2026-1965: bad reuse of HTTP Negotiate connection
- CVE-2026-3783: token leak with redirect and netrc
- CVE-2026-3784: wrong proxy connection reuse with credentials
- CVE-2026-3805: use after free in SMB connection reuse

Changes in this release include:

- We stopped the bug-bounty. It’s worth repeating, even if it was no code change.
- The cmake build got a new option
- Initial support for MQTTS was merged
- curl now supports fractions for –limit-rate and –max-filesize
- curl’s -J option now uses the redirect name as a backup
- we no longer support OpenSSL-QUIC on Windows
- curl can now get built to use the native CA store by default
- the minimum Windows version curl supports is now Vista (up from XP)

The following upcoming changes might be worth noticing. See the deprecate documentation for details:

- NTLM support becomes opt-in
- RTMP support is getting dropped
- SMB support becomes opt-in
- Support for c-ares versions before 1.16 goes away
- Support for CMake 3.17 and earlier gets dropped
- TLS-SRP support will be removed

We plan to ship the next curl release on April 29. See you then!

daniel.haxx.se 2 months ago

Dependency tracking is hard

curl and libcurl are written in C. They are rather low level components present in many software systems. They are typically not part of any ecosystem at all. They’re just a tool and a library.

In lots of places on the web, when you mention an Open Source project you also get the option to state which ecosystem it belongs to: npm, go, rust, python etc. There are easily at least a dozen well-known and large ecosystems. curl is not part of any of those.

Recently there’s been a push for PURLs (Package URLs), for example when describing your specific package in a CVE. A package URL only works when the component is part of an ecosystem. curl is not. We can’t specify curl or libcurl using a PURL.

SBOM generators and related scanners use package managers to generate lists of used components and their dependencies. This makes these tools quite frequently just miss and ignore libcurl. It’s not listed by the package managers. It’s just in there, ready to be used. Like magic.

It is similarly hard for these tools to figure out that curl in turn also depends on and uses other libraries. At build-time you select which – but as we in the curl project primarily just ship tarballs with source code, we cannot tell anyone what dependencies their builds have. The additional libraries libcurl itself uses are all similarly outside of the standard ecosystems.

Part of the explanation for this is also that libcurl and curl are often shipped bundled with the operating system, or sometimes perceived to be part of the OS. Most graphs, SBOM tools and dependency trackers therefore stop at the binding or system that uses curl or libcurl, but without including curl or libcurl. The layer above, so to speak. This makes it hard to figure out exactly how many components and how much software is depending on libcurl.

A perfect way to illustrate the problem is to check GitHub and see how many among its vast collection of many millions of repositories depend on curl. After all, curl is installed in some thirty billion installations, so clearly it is used a lot. (Most of them being libcurl of course.)

It lists one dependency for curl.

Repositories that depend on curl/curl: one. Screenshot taken on March 9, 2026

What makes this even more amusing is that it looks like this single dependent repository (Pupibent/spire) lists curl as a dependency by mistake.

daniel.haxx.se 2 months ago

10K curl downloads per year

The Linux Foundation, the organization that we want to love but that so often makes that a hard bargain, has created something they call “Insights” where they gather lots of metrics on Open Source projects.

I held back and never blogged taunting OpenSSF about their scorecard attempts, which were always lame and misguided. This Insights thing looks like their next attempt to “grade” and “rate” Open Source. It is so flawed and full of questionable details that I decided there is no point in me listing them all in a blog post – it would just be too long and boring. Instead I will just focus on a single metric. The one that made me laugh out loud when I saw it.

They claim curl was downloaded 10,467 times the last year. (source)

Number of curl downloads the last 365 days according to Linux Foundation

What does “a download” mean? They refer to statistics from ecosyste.ms, which is an awesome site and service, but it has absolutely no idea about curl downloads. How often is curl “downloaded”?

- curl release tarballs are downloaded from curl.se at a rate of roughly 250,000 / month.
- curl images are currently pulled from docker at a rate of around 400,000 – 700,000 / day.
- curl is pulled from quay.io at roughly the same rate.
- curl’s git repository is cloned roughly 32,000 times / day.
- curl is installed from Linux and BSD distributions at an unknown rate.
- curl, in the form of libcurl, is bundled in countless applications, games, devices, cars, TVs, printers and services, and we cannot even guess how often it is downloaded as such an embedded component.
- curl is installed by default on every Windows and macOS system since many years back.

But no, 10,467 they say.
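Annualizing just the rates listed above, taking the container pulls at the low end of the quoted range, shows how far off the Insights figure is:

```python
# Annualizing the download rates quoted in the post.
tarballs_per_year = 250_000 * 12              # curl.se tarball downloads
docker_pulls_per_year = 400_000 * 365         # low end of 400k-700k/day
quay_pulls_per_year = 400_000 * 365           # "roughly the same rate"
git_clones_per_year = 32_000 * 365

total = (tarballs_per_year + docker_pulls_per_year
         + quay_pulls_per_year + git_clones_per_year)

print(f"{total:,} downloads/year from just these sources")
print(f"that is roughly {total / 10_467:,.0f} times the Insights figure")
# Prints a total above 300 million per year, tens of thousands of times
# more than the claimed 10,467 -- and it still ignores distro installs
# and every bundled libcurl.
```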

daniel.haxx.se 2 months ago

curl up 2026

The annual curl users and developers meeting, curl up, takes place May 23-24 2026 in Prague, Czechia. We are in fact returning to the same city and the exact same venue as in 2025. We liked it so much!

This is a cozy and friendly event that normally attracts around 20-30 attendees. We gather in a room over a weekend and we talk curl. The agenda is usually set up with a number of talks through the two days, and each talk ends with a follow-up Q&A and discussion session. So no big conference thing, just a bunch of friends around a really large table. Over a weekend.

Anyone is welcome to attend – for free – and everyone is encouraged to submit a talk proposal – anything that is curl and Internet transfer related goes. We make an effort to attract and lure the core curl developers and the most active contributors of recent years into the room. We do this by reimbursing their travel and hotel expenses.

The agenda is a collaborative effort and we are going to work on putting it together from now all the way until the event, in order to make sure we make the best of the weekend and get to talk about and listen to all the curl related topics we can think of! Help us improve the agenda in the curl-up wiki: https://github.com/curl/curl-up/wiki/2026

Meeting up in the real world, as opposed to doing video meetings, helps us get to know each other better, allows us to socialize in ways we otherwise never can, and in the end it helps us work better together – which subsequently helps us write better code and produce better outcomes! It also helps us meet and welcome newcomers and casual contributors. Showing up at curl up is an awesome way to dive into the curl world wholeheartedly and in the deep end.

Needless to say, this event costs money to run. We pay for our top people to come, we pay for the venue and we pay for food. We would love to have your company mentioned as a top sponsor of the event, or perhaps sponsor a social dinner on the Saturday? Get in touch and let’s get it done!

Everyone is welcome and encouraged to attend – at no cost. We only ask that you register in advance (the registration is not open yet).

We always record all sessions on video and make them available after the fact. You can catch up on previous years’ curl up sessions in the curl website’s video section. We also live-stream all the sessions of curl up during both days, to be found on my twitch channel: curlhacker.

Our events are friendly to everyone. We abide by the code of conduct and we have never had anyone come even close to violating it.

daniel.haxx.se 2 months ago

curl security moves again

tldr: curl goes back to Hackerone. When we announced the end of the curl bug-bounty at the end of January 2026, we simultaneously moved over and started accepting curl security reports on GitHub instead of its previous platform. This move turns out to have been a mistake and we are now undoing that part of the decision. The reward money is still gone, there is no bug-bounty, no money for vulnerability reports, but we return to accepting and handling curl vulnerability and security reports on Hackerone. Starting March 1st 2026, this is now (again) the official place to report security problems to the curl project. This zig-zagging is unfortunate but we do it with the best of intentions. In the curl security team we were naively thinking that since so many projects are already using this setup it should be good enough for us too, since we don’t have any particular special requirements. We were wrong. Now I instead question how other Open Source projects can use this. It feels like an area and use case for Open Source projects that is under-focused: proper, secure and efficient vulnerability reporting without a bug-bounty. To illustrate what we are looking for, I made a little list that should show that we’re not looking for overly crazy things. Here is a list of nits and missing features we fell over on GitHub that, had we figured them out ahead of time, possibly would have made us go about this a different way. This list might interest fellow maintainers having the same thoughts and ideas we had. I have provided this feedback to GitHub as well – to make sure they know. Sure, we could switch to handling them all over email, but that also has its own set of challenges. Since we dropped the bounty, the inflow tsunami has dried up substantially. Perhaps partly because of our switch over to GitHub? Perhaps it just takes a while for all the sloptimists to figure out where to send the reports now and perhaps by going back to Hackerone we again open the gates for them? We just have to see what happens. We will keep iterating and tweaking the program, the settings and the hosting providers going forward to improve: to make sure we ship a robust and secure set of products and that the team doing so can keep doing that. If you suspect a security problem in curl or libcurl, report it here: https://hackerone.com/curl Gitlab, Codeberg and others are GitHub alternatives and competitors, but few of them offer this kind of security reporting feature. That makes them bad alternatives or replacements for us for this particular service. What we are looking for:
Incoming submissions are reports that identify security problems. The reporter needs an account on the system.
Submissions start private; only accessible to the reporter and the curl security team.
All submissions must be disclosed and made public once dealt with. Both correct and incorrect ones. This is important. We are Open Source. Maximum transparency is key.
There should be a way to discuss the problem amongst security team members, the reporter and per-report invited guests.
It should be possible to post security-team-only messages that the reporter and invited guests cannot see.
For confirmed vulnerabilities, an advisory will be produced that the system could help facilitate.
If there’s a field for CVE, make it possible to provide our own. We are after all our own CNA.
Closed and disclosed reports should be clearly marked as invalid/valid etc.
Reports should have a tagging system so that they can be marked as “AI slop” or other terms for statistical and metric reasons.
Abusive users should be possible to ban/block from this program.
Additional (customizable) requirements for the privilege of submitting reports are appreciated (rate limit, time since account creation, etc).
The nits and missing features we fell over on GitHub:
GitHub sends the whole report over email/notification with no way to disable this. SMTP and email is known for being insecure and cannot assure end to end protection. This risks leaking secrets early to the entire email chain.
We can’t disclose invalid reports (and make them clearly marked as such).
Per-repository default collaborators on GitHub Security Advisories is annoying to manage, as we now have to manually add the security team for each advisory or have a rather quirky workflow scripting it. https://github.com/orgs/community/discussions/63041
We can’t edit the CVE number field! We are a CNA, we mint our own CVE records so this is frustrating. This adds confusion.
We want to (optionally) get rid of the CVSS score + calculator in the form as we actively discourage using those in curl CVE records.
No CI jobs working in private forks is going to make us effectively not use such forks, but is not a big obstacle for us because of our vulnerability working process. https://github.com/orgs/community/discussions/35165
No “quote” in the discussions? That looks… like an omission.
We want to use GitHub’s security advisories as the report to the project, not the final advisory (as we write that ourselves), which might get confusing, as even for the confirmed ones the project advisories (hosted elsewhere) are the official ones, not the ones on GitHub.
No count of advisories is displayed next to “security” up in the tabs, like for issues and pull requests. This makes it hard to see progress/updates.
When looking at an individual advisory, there is no direct button/link to go back to the list of current advisories.
In an advisory, you can only “report content”, there is no direct “block user” option like for issues.
There is no way to add private comments for the team only, as when discussing abuse or details not intended for the reporter or other invited persons in the issue.
There is a lack of a short (internal) identifier or name per issue, which makes it annoying and hard to refer to specific reports when discussing them in the security team. The existing identifiers are long and hard to differentiate from each other.
You quite weirdly cannot get completion help in comments to address people that were added into the advisory thanks to them being in a team you added to the issue.
There are no labels, like for issues and pull requests, which makes it impossible for us to for example mark the AI slop ones or other things, for statistics, metrics and future research.
Hard to keep track of the state of each current issue when a number of them are managed in parallel. Even just to see how many cases are still currently open or in need of attention.
Hard to publish and disclose the invalid ones, as they never cause an advisory to get written and we rather want the initial report and the full follow-up discussion published.
Hard to adapt to or use a reputation system beyond just the boolean “these people are banned”. I suspect that we over time need to use more crowdsourced knowledge or reputation based on how the reporters have behaved previously or in relation to other projects.

daniel.haxx.se 2 months ago

decomplexification continued

Last spring I wrote a blog post about our ongoing work in the background to gradually simplify the curl source code over time. This is a follow-up: a status update of what we have done since then and what comes next. In May 2025 I had just managed to get the worst function in curl down to complexity 100, and the average score of all curl production source code (179,000 lines of code) was at 20.8. We had 15 functions still scoring over 70. Almost ten months later we have reduced the most complex function in curl from 100 to 59. Meaning that we have simplified a vast number of functions. Done by splitting them up into smaller pieces and by refactoring logic. Reviewed by humans, verified by lots of test cases, checked by analyzers and fuzzers. The current 171,000 lines of code now have an average complexity of 15.9. The complexity score in this case is just the cold and raw metric reported by the pmccabe tool. I decided to use that as the absolute truth, even if of course a human could at times debate and argue about its claims. It makes it easier to just obey the tool, and it is quite frankly doing a decent job at this so it’s not a problem. In almost all cases the main problem with complex functions is that they do a lot of things in a single function – too many – where the functionality performed could or should rather be split into several smaller sub functions. In almost every case it is also immediately obvious that when splitting a function into two, three or more sub functions with smaller and more specific scopes, the code gets easier to understand and each smaller function is subsequently easier to debug and improve. I don’t know how far we can take the simplification and what the ideal average complexity score of the curl code base might be. At some point it becomes counterproductive and making functions even smaller just makes it harder to follow code flows and absorb the proper context into your head. To illustrate our simplification journey, I decided to render graphs with a date axis starting at 2022-01-01 and ending today. Slightly over four years, representing a little under 10,000 git commits. First, a look at the complexity of the worst-scoring function in curl production code over the last four years, comparing with P90 and P99. The most complex function in curl over time Identifying the worst function might not say too much about the code in general, so another check is to see how the average complexity has changed. This is calculated like this: for all functions, add its function-score x function-length to a total complexity score, and in the end, divide that total complexity score by the total number of lines used for all functions. Also do the same for a median score. Average and median complexity per source code line in curl, over time. When 2022 started, the average was about 46 and as can be seen, it has been dwindling ever since, with a few steep drops when we have merged dedicated improvement work. One way to complement the average and median lines, to offer us a better picture of the state, is to investigate the complexity distribution through-out the source code. How big a portion of the curl source code is how complex This reveals that the most complex quarter of the code in 2022 has since been simplified. Back then 25% of the code scored above 60, and now all of the code is below 60. It also shows that during 2025 we managed to clean up all the dark functions, meaning the end of 100+ complexity functions. Never to return, at least that is the plan.
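To make the graph generation described above a bit more concrete, here is a rough sketch of how such per-date numbers could be collected. This is not the script the curl project actually uses; the repository path, the choice of branch, the monthly sampling interval and the assumption that pmccabe's first output column is the complexity score are all mine.

```python
#!/usr/bin/env python3
# Sketch: sample the worst / P99 / P90 pmccabe complexity of a curl checkout
# at roughly monthly intervals since 2022-01-01. Illustration only.
import glob
import statistics
import subprocess
from datetime import date, timedelta

REPO = "."  # path to a local curl git clone (assumption)

def complexity_scores(day: str) -> list[int]:
    """Check out the last commit on master before `day` and return the
    pmccabe complexity score of every function in lib/ and src/."""
    rev = subprocess.check_output(
        ["git", "-C", REPO, "rev-list", "-1", "--before", day, "master"],
        text=True).strip()
    subprocess.check_call(["git", "-C", REPO, "checkout", "-q", rev])
    files = (glob.glob(f"{REPO}/lib/**/*.c", recursive=True) +
             glob.glob(f"{REPO}/src/**/*.c", recursive=True))
    out = subprocess.run(["pmccabe"] + files,
                         capture_output=True, text=True).stdout
    # assume the first whitespace-separated column is the complexity score
    return [int(line.split()[0]) for line in out.splitlines() if line.strip()]

day = date(2022, 1, 1)
while day <= date.today():
    scores = complexity_scores(day.isoformat())
    cuts = statistics.quantiles(scores, n=100)  # 99 percentile cut points
    print(day, "worst:", max(scores),
          "P99:", round(cuts[98]), "P90:", round(cuts[89]))
    day += timedelta(days=30)
```

Feed the printed columns to your favorite plotting tool and you get graphs roughly like the ones discussed here.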
Has this work made curl measurably better? We don’t really know. We believe less complex code is generally good for security and code readability, but it is probably still too early for us to actually measure any particular positive outcome of this work (apart from fancy graphs). Also, there are many more ways to judge code than by this complexity score alone. Like having sensible APIs, both internal and external, and making sure that they are properly and correctly documented etc. The fact that they all interact and they all keep changing makes it really hard to isolate a single factor like complexity and say that changing this alone is what makes an impact. Additionally: maybe just the refactor itself and the attention given to the functions when doing so either fixes problems or introduces new problems, and that is then not actually because of the change in complexity but just the mere result of eyes being on that code and changing it right then. Maybe we just need to allow several more years to pass before any change from this can be measured? For the record, the average calculation rests on two simple facts: all functions get a complexity score from pmccabe, and each function has a number of lines.
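Those two facts are the inputs to the line-weighted average described earlier: sum up score times length for every function and divide by the total number of function lines. A minimal sketch of that calculation follows, again assuming pmccabe's default column layout (score in the first column, function length in the fifth) rather than reflecting whatever the project's real script does.

```python
#!/usr/bin/env python3
# Sketch: line-weighted average and median pmccabe complexity.
# Usage (assumption): python3 avgcomplexity.py lib/*.c src/*.c
import statistics
import subprocess
import sys

out = subprocess.run(["pmccabe"] + sys.argv[1:],
                     capture_output=True, text=True).stdout

total_weighted = 0      # sum of score * length over all functions
total_lines = 0         # total number of lines in all functions
per_line = []           # one score entry per source line, for the median

for line in out.splitlines():
    cols = line.split()
    if len(cols) < 6:
        continue
    score = int(cols[0])    # complexity score of the function
    length = int(cols[4])   # number of lines in the function
    total_weighted += score * length
    total_lines += length
    per_line.extend([score] * length)

print("average complexity per line: %.1f" % (total_weighted / total_lines))
print("median complexity per line:", statistics.median(per_line))
```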

daniel.haxx.se 3 months ago

Open Source security in spite of AI

The title of my ending keynote at FOSDEM, February 1, 2026. As the last talk of the conference, at 17:00 on the Sunday, lots of people had already left, and presumably a lot of the remaining people were quite tired and ready to call it a day. Still, the 1500 seats in Janson got occupied and there was even a group of people outside wanting to get in who had to be refused entry. Thanks to the awesome FOSDEM video team, the recording was made available this quickly after the presentation. You can also get the video off FOSDEM servers. The 59-slide PDF version.

daniel.haxx.se 3 months ago

A third medal

In January 2025 I received the European Open Source Achievement Award. The physical manifestation of that prize was a trophy made of translucent acrylic (or something similar). The blog post I link above has a short video where I show it off. In the year that has passed since, we have established an organization for how to do the awards going forward, in the European Open Source Academy, and we have arranged the creation of actual medals for the awardees. That was the medal we gave the award winners last week at the award ceremony where I handed Greg his prize. I was however not prepared for it, but as a direct consequence I was handed a medal this year, in recognition of the award I got last year, because now there is a medal. A retroactive medal if you wish. It felt almost like getting the award again. An honor. The box. The backside. The front. The medal design. The medal is made in a shiny metal, roughly 50mm in diameter. In the middle of it is a modern version (with details inspired by PCB looks) of the Yggdrasil tree from old Norse mythology – the “World Tree”. A source of life, a sacred meeting place for gods. In a circle around the tree are twelve stars, to visualize the EU and European connection. On the backside, the year and the name are engraved above an EU flag, and the same circle of twelve stars is used there as a margin too, like on the front side. The medal has a blue and white ribbon, to enable it to be draped over the head and hung from the neck. The box is a sturdy thing in dark blue velvet-like covering with European Open Source Academy printed on it next to the academy’s logo. The same motif is also on the inside of the top part of the box. I do feel overwhelmed and I acknowledge that I have received many medals by now. I still want to document them and show them in detail to you, dear reader. To show appreciation; not to boast.

daniel.haxx.se 3 months ago

GregKH awarded the Prize for Excellence in Open Source 2026

I had the honor and pleasure to hand over this prize to its first real laureate during the award gala on Thursday evening in Brussels, Belgium. This annual award ceremony is one of the primary missions for the European Open Source Academy, of which I have been the president since last year. As an academy, we hand out awards and recognition to multiple excellent individuals who help make Europe the home of excellent Open Source. Fellow esteemed academy members joined me at this joyful event to perform these delightful duties. As I stood on the stage, after a brief video about Greg was shown, I introduced Greg as this year’s worthy laureate. I have included those words below. Congratulations again Greg. We are lucky to have you. There are tens of millions of open source projects in the world, and there are millions of open source maintainers. Many more would count themselves as at least occasional open source developers. These are the quiet builders of Europe’s digital world. When we work on open source projects, we may spend most of our waking hours deep down in the weeds of code, build systems, discussing solutions, or tearing our hair out because we can’t figure out why something happens the way it does, as we would prefer it didn’t. Open source projects can work a little like worlds of their own. You live there, you work there, you debate with the other humans who similarly spend their time on that project. You may not notice, think, or even care much about other projects that similarly have a set of dedicated people involved. And that is fine. Working deep in the trenches this way makes you focus on your world and maybe remain unaware and oblivious to champions in other projects. The heroes who make things work in areas that need to work for our lives to operate as smoothly as they, quite frankly, usually do. Greg Kroah-Hartman, however, our laureate of the Prize for Excellence in Open Source 2026, is a person whose work does get noticed across projects. Our recognition of Greg honors his leading work on the Linux kernel and in the Linux community, particularly through his work on the stable branch of Linux. Greg serves as the stable kernel maintainer for Linux, a role of extraordinary importance to the entire computing world. While others push the boundaries of what Linux can do, Greg ensures that what already exists continues to work reliably. He issues weekly updates containing critical bug fixes and security patches, maintaining multiple long-term support versions simultaneously. This is work that directly protects billions of devices worldwide. It’s impossible to overstate the importance of the work Greg has done on Linux. In software, innovation grabs headlines, but stability saves lives and livelihoods. Every Android phone, every web server, every critical system running Linux depends on Greg’s meticulous work. He ensures that when hospitals, banks, governments, and individuals rely on Linux, it doesn’t fail them. His work represents the highest form of service: unglamorous, relentless, and essential. Without maintainers like Greg, the digital infrastructure of our world would crumble. He is, quite literally, one of the people keeping the digital infrastructure we all depend on running. As a fellow open source maintainer, Greg and I have worked together in the open source security context. Through my interactions with him and people who know him, I learned a few things: An American by origin, Greg now calls Europe his home, having lived in the Netherlands for many years.
While on this side of the pond, he has taken on an important leadership role in safeguarding and advocating for the interests of the open source community. This is most evident through his work on the Cyber Resilience Act, through which he has educated and interacted with countless open source contributors and advocates whose work is affected by this legislation. We — if I may be so bold — the Open Source community in Europe — and yes, the whole world, in fact — appreciate your work and your excellence. Thank you, Greg. Please come on stage and collect your award. Greg is competent: a custodian and maintainer of many parts and subsystems of the Linux kernel tree and its development for decades. Greg has a voice. He doesn’t bow to pressure or take the easy way out. He has integrity. Greg is persistent. He has been around and done hard work for the community for decades. Greg is a leader. He shares knowledge, spreads the word, and talks to crowds, in a way that is heard and appreciated. He is a mentor.
