Latest Posts (20 found)

Open Source security in spite of AI

The title of my closing keynote at FOSDEM on February 1, 2026. As the last talk of the conference, at 17:00 on the Sunday, lots of people had already left, and presumably many of those remaining were quite tired and ready to call it a day. Still, the 1,500 seats in Janson were occupied and a group of additional people outside wanting to get in had to be refused entry. Thanks to the awesome FOSDEM video team, the recording was made available this quickly after the presentation. You can also get the video off the FOSDEM servers. The 59-slide PDF version is also available.

daniel.haxx.se 2 days ago

A third medal

In January 2025 I received the European Open Source Achievement Award. The physical manifestation of that prize was a trophy made of translucent acrylic (or something similar). The blog post linked above has a short video where I show it off.

In the year that has passed since, we have established an organization for how to do the awards going forward in the European Open Source Academy, and we have arranged the creation of actual medals for the awardees. That was the medal we gave the award winners last week at the award ceremony where I handed Greg his prize.

I was however not prepared for what followed: as a direct consequence I was handed a medal this year, in recognition of the award I got last year, because now there is a medal. A retroactive medal if you wish. It felt almost like getting the award again. An honor.

The box. The backside. Front. The medal design.

The medal is made of a shiny metal, roughly 50mm in diameter. In the middle of it is a modern version (with details inspired by PCB looks) of the Yggdrasil tree from old Norse mythology – the “World Tree”. A source of life, a sacred meeting place for gods. In a circle around the tree are twelve stars, to visualize the EU and European connection. On the backside, the year and the name are engraved above an EU flag, and the same circle of twelve stars is used there as a margin too, like on the front side. The medal has a blue and white ribbon, to enable it to be draped over the head and hung from the neck.

The box is a sturdy thing in a dark blue velvet-like covering with European Open Source Academy printed on it next to the academy’s logo. The same motif is also on the inside of the top part of the box.

I do feel overwhelmed and I acknowledge that I have received many medals by now. I still want to document them and show them in detail to you, dear reader. To show appreciation; not to boast.

daniel.haxx.se 5 days ago

GregKH awarded the Prize for Excellence in Open Source 2026

I had the honor and pleasure to hand over this prize to its first real laureate during the award gala on Thursday evening in Brussels, Belgium. This annual award ceremony is one of the primary missions for the European Open Source Academy, of which I have been the president since last year. As an academy, we hand out awards and recognition to multiple excellent individuals who help make Europe the home of excellent Open Source. Fellow esteemed academy members joined me at this joyful event to perform these delightful duties.

As I stood on the stage, after a brief video about Greg was shown, I introduced Greg as this year’s worthy laureate. I have included the said words below. Congratulations again Greg. We are lucky to have you.

There are tens of millions of open source projects in the world, and there are millions of open source maintainers. Many more would count themselves as at least occasional open source developers. These are the quiet builders of Europe’s digital world.

When we work on open source projects, we may spend most of our waking hours deep down in the weeds of code, build systems, discussing solutions, or tearing our hair out because we can’t figure out why something happens the way it does, as we would prefer it didn’t. Open source projects can work a little like worlds of their own. You live there, you work there, you debate with the other humans who similarly spend their time on that project. You may not notice, think, or even care much about other projects that similarly have a set of dedicated people involved. And that is fine. Working deep in the trenches this way makes you focus on your world and maybe remain unaware and oblivious to champions in other projects. The heroes who make things work in areas that need to work for our lives to operate as smoothly as they, quite frankly, usually do.

Greg Kroah-Hartman, however, our laureate of the Prize for Excellence in Open Source 2026, is a person whose work does get noticed across projects. Our recognition of Greg honors his leading work on the Linux kernel and in the Linux community, particularly through his work on the stable branch of Linux.

Greg serves as the stable kernel maintainer for Linux, a role of extraordinary importance to the entire computing world. While others push the boundaries of what Linux can do, Greg ensures that what already exists continues to work reliably. He issues weekly updates containing critical bug fixes and security patches, maintaining multiple long-term support versions simultaneously. This is work that directly protects billions of devices worldwide.

It’s impossible to overstate the importance of the work Greg has done on Linux. In software, innovation grabs headlines, but stability saves lives and livelihoods. Every Android phone, every web server, every critical system running Linux depends on Greg’s meticulous work. He ensures that when hospitals, banks, governments, and individuals rely on Linux, it doesn’t fail them. His work represents the highest form of service: unglamorous, relentless, and essential. Without maintainers like Greg, the digital infrastructure of our world would crumble. He is, quite literally, one of the people keeping the digital infrastructure we all depend on running.

As a fellow open source maintainer, Greg and I have worked together in the open source security context. Through my interactions with him and people who know him, I learned a few things:

Greg is competent: a custodian and maintainer of many parts and subsystems of the Linux kernel tree and its development for decades.

Greg has a voice. He doesn’t bow to pressure or take the easy way out. He has integrity.

Greg is persistent. He has been around and done hard work for the community for decades.

Greg is a leader. He shares knowledge, spreads the word, and talks to crowds. In a way that is heard and appreciated. He is a mentor.

An American by origin, Greg now calls Europe his home, having lived in the Netherlands for many years. While on this side of the pond, he has taken on an important leadership role in safeguarding and advocating for the interests of the open source community. This is most evident through his work on the Cyber Resilience Act, through which he has educated and interacted with countless open source contributors and advocates whose work is affected by this legislation.

We — if I may be so bold — the Open Source community in Europe — and yes, the whole world, in fact — appreciate your work and your excellence.

Thank you, Greg. Please come on stage and collect your award.

daniel.haxx.se 1 week ago

curl distro meeting 2026

We are doing another curl + distro online meeting this spring, in what has now become an established annual tradition. A two-hour discussion, meeting and workshop for curl developers and curl distro maintainers.

2026 curl distro meeting details

The objective for these meetings is simply to make curl better in distros. To make distros do curl better. To improve curl in each and every way we think we can, together. A part of this process is to get to see the names and faces of the people involved and to grease the machine to improve cross-distro collaboration on curl related topics. Anyone who feels this is a subject they care about is welcome to join. We aim for the widest possible definition of distro and we don’t attempt to define the term.

The 2026 version of this meeting is planned to take place in the early evening European time, morning US west coast time. With the hope that this covers a large enough number of curl-interested people. The plan is to do this on March 26, and all the details, planning and discussion items are kept on the dedicated wiki page for the event. Please add your own discussion topics that you want to know or talk about, and if you feel inclined, add yourself as an intended participant. Feel free to help make this invite reach the proper people.

See you on March 26!

daniel.haxx.se 1 week ago

Improving curl -J

We introduced curl’s --remote-header-name option, also known as -J, back in February 2010. A decent number of years ago.

The option is used in combination with -O (--remote-name) when downloading data from an HTTP(S) server and it instructs curl to use the filename from the incoming Content-Disposition: header when saving the content, instead of the filename from the URL passed on the command line (if provided). That header would later be specified further in RFC 6266. The idea is that for some URLs, the server can provide a more suitable target filename than what the URL contains from the beginning. Like when you do a command similar to the first example collected at the end of this post.

Without -J, the content would be saved in a target output file called ‘download’ – since curl strips off the query part. With -J, curl parses the Content-Disposition: response header that contains a better filename; in the example, fun.jpg.

This approach works pretty well but has several limitations. An obvious one is that if the site, instead of providing a Content-Disposition header, only redirects the client to a new URL to do the download from, curl does not pick up the new name but keeps using the one from the originally provided URL. This is not what most users want and not what they expect. As a consequence, we have had this potential improvement mentioned in the TODO file for many years. Until today.

We have now merged a change that makes curl with -J pick up the filename from Location: headers, and it uses that filename if there is no Content-Disposition. This means that if you now rerun a similar command line as mentioned above, but one that is allowed to follow redirects, and the site redirects curl to the actual download URL for the tarball you want to download, curl saves the contents of that transfer in a local file named after the redirect target. If there is both a redirect and a Content-Disposition header, the latter takes precedence.

Since this gets the filename from the server’s response, you give up control of the name to someone else. This can of course potentially mess things up for you. curl ignores all provided directory names and only uses the filename part. If you want to save the download in a dedicated directory other than the current one, use --output-dir. As an additional precaution, using -J implies that curl avoids clobbering, overwriting, any existing files already present with the same filename, unless you also use --clobber.

Since the final name used for storing the data is selected based on the contents of a header passed from the server, using this option in a scripting scenario introduces a challenge: what filename did curl actually use? A user can easily extract this information with curl’s -w option, as in the last example at the end of this post. Such a command line outputs the used filename to stdout. Tweak the command line further to instead direct that name to stderr or to a specific file etc. Whatever you think works.

The content-disposition RFC mentioned above details a way to provide a filename encoded as UTF-8 using the filename* parameter, for example one that includes a U+20AC Euro sign. curl still does not support this filename* style of providing names. This limitation remains because curl cannot currently convert such a provided name into a local filename using the provided characters – with certainty. Room for future improvement!

This -J improvement ships in curl 8.19.0, coming in March 2026.
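The command-line examples that belonged with this post did not survive here, so below is a reconstructed sketch. The URLs are placeholders and the exact invocations are my assumptions, but -O, -J, -L and the -w variable %{filename_effective} are real curl options.

    # without -J: saved as 'download' (curl strips the query part)
    curl -O 'https://example.com/download?file=fun.jpg'

    # with -J: saved using the server's Content-Disposition filename, e.g. fun.jpg
    curl -OJ 'https://example.com/download?file=fun.jpg'

    # new in 8.19.0: with -L, the name can come from the Location: redirect target
    curl -OJL 'https://example.com/download/latest'

    # for scripts: ask curl which filename it actually used
    curl -OJL -w '%{filename_effective}\n' 'https://example.com/download/latest'

    # the RFC 6266 filename* style that curl still does not support:
    # Content-Disposition: attachment; filename*=UTF-8''%e2%82%ac%20rates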

daniel.haxx.se 1 week ago

The end of the curl bug-bounty

tldr: an attempt to reduce the terror reporting. There is no longer a curl bug-bounty program. It officially stops on January 31, 2026.

After having had a few half-baked previous takes, in April 2019 we kicked off the first real curl bug-bounty with the help of Hackerone, and while it stumbled a bit at first, it has been quite successful I think. We attracted skilled researchers who reported plenty of actual vulnerabilities for which we paid fine monetary rewards. We have certainly made curl better as a direct result of this: 87 confirmed vulnerabilities and over 100,000 USD paid as rewards to researchers. I’m quite happy and proud of this accomplishment. I would like to especially highlight the awesome Internet Bug Bounty project, which has paid the bounties for us for many years. We could not have done this without them. Also of course Hackerone, who has graciously hosted us and been our partner through these years.

Looking back, I think we can say that the downfall of the bug-bounty program started slowly in the second half of 2024 but accelerated badly in 2025. We saw an explosion in AI slop reports combined with a lower quality even in the reports that were not obvious slop – presumably because they too were actually misled by AI but with that fact just hidden better. Maybe the first five years made it possible for researchers to find and report the low hanging fruit. In previous years we had a rate of somewhere north of 15% of the submissions ending up confirmed vulnerabilities. Starting 2025, the confirmed-rate plummeted to below 5%. Not even one in twenty was real.

The never-ending slop submissions take a serious mental toll to manage and sometimes also a long time to debunk. Time and energy that is completely wasted while also hampering our will to live. I have also started to get the feeling that a lot of the security reporters submit reports with a bad faith attitude. These “helpers” try too hard to twist whatever they find into something horribly bad and a critical vulnerability, but they rarely actively contribute to actually improve curl. They can go to extreme efforts to argue and insist on their specific current finding, but not to write a fix or work with the team on improving curl long-term etc. I don’t think we need more of that.

There are these three bad trends combined that make us take this step: the mind-numbing AI slop, humans doing worse than ever and the apparent will to poke holes rather than to help. In an attempt to do something about the sorry state of curl security reports, this is what we do:

We no longer offer any monetary rewards for security reports – no matter which severity. In an attempt to remove the incentives for submitting made-up lies.

We stop using Hackerone as the recommended channel to report security problems. To make the change immediately obvious and because without a bug-bounty program we don’t need it.

We refer everyone to submit suspected curl security problems on GitHub using their Private vulnerability reporting feature.

We continue to immediately ban and publicly ridicule everyone who submits AI slop to the project.

We believe that we can maintain and continue to evolve curl security in spite of this change. Maybe even improve thanks to this, as hopefully this step helps prevent more people from pouring sand into the machine. Ideally we reduce the amount of wasted time and effort. I believe the best and our most valued security reporters will still tell us when they find security vulnerabilities.

If you suspect a security problem in curl going forward, we advise you to head over to GitHub and submit it there. Alternatively, you send an email with the full report to the project’s security address. In both cases, the report is received and handled privately by the curl security team. But with no monetary reward offered.

Hackerone was good to us and they have graciously allowed us to run our program on their platform for free for many years. We thank them for that service. As we now drop the rewards, we feel it makes a clean cut and sends a clearer message to everyone involved by also moving away from Hackerone as a platform for vulnerability reporting. It makes the change more visible.

It is probably going to be harder for us to publicly disclose every incoming security report in the same way we have done it on Hackerone for the last year. We need to work out something to make sure that we can keep doing it at least imperfectly, because I believe in the goodness of such transparency.

Let me emphasize that this change does not impact our presence and mode of operation with the curl repository and its hosting on GitHub. We hear about projects having problems with low-quality AI slop submissions on GitHub as well, in the form of issues and pull-requests, but for curl we have not (yet) seen this – and frankly, I don’t think switching to a GitHub alternative would save us from that.

Compared to others, we seem to be affected by the sloppy security reports to a higher degree than the average Open Source project. With the help of Hackerone, we got numbers on how the curl bug-bounty has compared with other programs over the last year. It turns out curl’s program has seen more volume and noise than other public open source bug bounty programs in the same cohort. Over the past four quarters, curl’s inbound report volume has risen sharply, while other bounty-paying open source programs in the cohort, such as Ruby, Node, and Rails, have not seen a meaningful increase and have remained mostly flat or declined slightly. In the chart, the pink line represents curl’s report volume, and the gray line reflects the broader cohort.

Inbound Report Volume on Hackerone: curl compared to OSS peers

We suspect the idea of getting money for it is a big part of the explanation. It brings in real reports, but makes it too easy to be annoying with little to no penalty to the user. The reputation system and available program settings were not sufficient for us to prevent sand from getting into the machine. The exact reason why we suffer more of this abuse than others remains a subject for further speculation and research.

There is a non-zero risk that our guesses are wrong and that the volume and security report frequency will keep up even after these changes go into effect. If that happens, we will deal with it then and take further appropriate steps. I prefer not to overdo things or overplan already now for something that ideally does not happen.

People keep suggesting that one way to deal with the report tsunami is to charge security researchers a small amount of money for the privilege of submitting a vulnerability report to us. A curl reporters security club with an entrance fee. I think that is a less good solution than just dropping the bounty. Some of the reasons include:

Charging people money in an international context is complicated and a maintenance burden. Dealing with charge-backs, returns and other complaints and friction adds work.

It would limit who could or would submit issues. Even some who actually find legitimate issues.

Maybe we need to do this later anyway, but we stay away from it for now.

We have seen other projects and repositories experience similar AI-induced problems for pull requests, but this has not been a problem for the curl project. I believe that for PRs we have much better means to sort out the weeds with automatic means, since we have tools, tests and scanners to verify such contributions. We don’t need to waste any human time on pull requests until the quality is good enough to get green check-marks from 200 CI jobs.

I will do a talk at FOSDEM 2026 titled Open Source Security in spite of AI that of course will touch on this subject.

We never say never. This is now, and we might have reasons to reconsider and make a different decision in the future. If we do, we will let you know. These changes are applied now with the hope that they will have a positive effect for the project and its maintainers. If that turns out not to be the outcome, we will of course continue and apply further changes later.

Since I created the pull request for updating the bug-bounty information for curl on January 14, almost two weeks before we merged it, various media picked up the news and published articles. Long before I posted this blog post. Also discussed (indirectly) on Hacker News.

The Register: Curl shutters bug bounty program to remove incentive for submitting AI slop
Elektroniktidningen: cURL removes bug bounties
Heise online: curl: Projekt beendet Bug-Bounty-Programm
Neowin: Beloved tool, cURL is shutting down its bug bounty over AI slop reports
Golem: Curl-Entwickler dreht dem “KI-Schrott” den Geldhahn zu
Linux Easy: cURL chiude il programma bug bounty: troppi report generati dall’AI
Bleeping Computer: Curl ending bug bounty program after flood of AI slop reports
The New Stack: Drowning in AI slop, cURL ends bug bounties
Ars Technica: Overrun with AI slop, cURL scraps bug bounties to ensure “intact mental health”
PressMind Labs: cURL konczy program bug bounty – czy to koniec jakosci zgloszen?
Socket: curl Shuts Down Bug Bounty Program After Flood of AI Slop Reports

daniel.haxx.se 2 weeks ago

libcurl memory use some years later

One of the trickier things in software is gradual degradation. Development that moves in the wrong direction slowly over time and never triggers any alarms or upsets users. Then one day you take a closer look and you realize that an area that used to be so fine several years ago no longer is.

Memory use is one of those things. It is easy to gradually add more and larger allocations over time as we add features and make new cool architectural designs. curl and libcurl literally run in billions of installations and it is important for us that we keep memory use and allocation count to a minimum. It needs to run on small machines and it needs to be able to scale to a large number of parallel connections without draining available resources. So yes, even in 2026 it is important to keep allocations small and as few as possible.

In July 2025 we added a test case to curl’s test suite (3214) that simply checks the sizes of fifteen important structs. Each struct has a fixed upper limit which it may not surpass without causing the test to fail. Of course we can adjust the limits when we need to, as it might be entirely okay to grow them when the features and functionalities motivate that, but this check makes sure that we do not mistakenly grow the sizes simply because of a mistake or bad planning.

It is of course a question of balance. How much memory is a feature and added performance worth? Every libcurl user probably has their own answer to that, but I decided to take a look at how we do today and compare with data I blogged five years ago. The point in time I decided to compare with here, curl 7.75.0, is fun to use because it was a point in time where I had given the size use in curl some focused effort and minimization work. libcurl memory use was then made smaller and more optimized than it had been for almost a decade before that.

The struct sizes always vary depending on which features are enabled, but in my tests here they are “maximized”, with as many features and backends enabled as possible. Let’s take a look at three important structs: the multi handle, the easy handle and the connectdata struct. Now compared to then, five years ago. The sizes in bytes (as used in the calculations below):

struct         curl 7.75.0   current git
multi handle       416           816
easy handle      5,272         5,352
connectdata      1,472           912

As seen in the table, two of the structs have grown and one has shrunk. Let’s see what impact that might have. If we assume a libcurl-using application doing 10 parallel transfers that have 20 concurrent connections open, libcurl five years ago needed:

1472 x 20 + 5272 x 10 + 416 = 82,576 bytes

While libcurl in current git needs:

912 x 20 + 5352 x 10 + 816 = 72,576 bytes

Incidentally that is exactly 10,000 bytes less, five years and many new features later. This said, part of the reason the structs change is that we move data between them and to other structs. The few mentioned here are not the whole picture.

Using a bleeding edge curl build, this command line on my 64-bit Linux Debian host does 107 allocations that need, at their maximum, 133,856 bytes. Compared to five years ago, where it needed 131,680 bytes done in a mere 96 allocations. curl now needs 1.6% more memory for this, done with 11% more allocation calls. I believe the current amounts are still okay considering we have refactored, developed and evolved the library significantly over the same period.

As a comparison, downloading the same file twenty times in parallel over HTTP/1 using the same curl build needs 2,222 allocations but only a total of 308,613 bytes allocated at peak. Twenty times the number of allocations but only about 2.3 times the maximum size, compared to the single file download.

Caveat: this measures clear text HTTP downloads. Almost everything transferred these days uses TLS, and if you add TLS to this transfer, curl itself does only a few additional allocations but, more importantly, the TLS library involved allocates much more memory and does many more allocations. I just consider those allocations to be someone else’s optimization work.

I generated a few graphs that illustrate memory use changes in curl over time based on what I described above.

The “easy handle” is the handle an application creates and that is associated with each individual transfer done with libcurl.

curl easy handle size changes over time

The “multi handle” is a handle that holds one or more easy handles. An application has at least one of these and adds many easy handles to it, or the easy handle has one of its own internally.

curl multi handle size changes over time

The “connectdata” is an internal struct for each existing connection libcurl knows about. A normal application that makes multiple transfers, either serially or in parallel, tends to make the easy handle hold at least a few of these since libcurl uses a connection pool by default for subsequent transfers.

curl connectdata struct size over time

Here is data from the internal tracking of memory allocations done when the curl tool is invoked to download a 512 megabyte file from a locally hosted HTTP server. (Generally speaking though, downloading a larger size does not use more memory.)

curl downloading a 512MB file needs this much memory and allocations

Conclusion

I think we are doing alright and none of these struct sizes or memory uses have gone bad. We offer more features and better performance than ever, but keep memory spend at a minimum.
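Test 3214 itself is not shown in this post. As a rough illustration of the idea, a struct-size ceiling can also be enforced at build time with a static assert; the struct and the limit here are made-up stand-ins, not curl’s actual code:

    #include <assert.h>  /* static_assert (C11) */

    /* hypothetical example struct standing in for one of the guarded ones */
    struct easy_example {
      char scratch[5000];
      int flags;
    };

    /* fails the build if the struct grows past its budget; the limit is
       raised deliberately when a feature motivates the growth */
    static_assert(sizeof(struct easy_example) <= 5352,
                  "easy_example grew past its allowed size");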

daniel.haxx.se 2 weeks ago

Now with MQTTS

Back in 2020 we added MQTT support to curl. When curl 8.19.0 ships in the beginning of March 2026, it also adds MQTTS, meaning MQTT done securely over TLS. This bumps the number of supported transfer protocols to 29, not too long after the project turned 29 years old.

The 29 transfer protocols (or schemes) that curl supports in January 2026

libcurl backends as of now

What’s MQTT? Wikipedia describes it as a lightweight, publish–subscribe, machine-to-machine network protocol for message queue/message queuing service. It is designed for connections with remote locations that have devices with resource constraints or limited network bandwidth, such as in the Internet of things (IoT). It must run over a transport protocol that provides ordered, lossless, bi-directional connections—typically, TCP/IP.

If things go as planned, the number of supported protocols will decrease again soon, as we have RTMP scheduled for removal later in the spring of 2026.
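A minimal example of what this enables; the broker hostname and topic are placeholders, and per curl’s MQTT support a plain GET subscribes to a topic while POSTing data publishes to it:

    # subscribe to a topic, now over TLS (new in curl 8.19.0)
    curl mqtts://broker.example.com/home/temperature

    # publish a value to the same topic
    curl -d 22.5 mqtts://broker.example.com/home/temperature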

daniel.haxx.se 2 weeks ago

My first 20,000 curl commits

Some of you may of course think: what, only 20,000 commits after almost thirty years in the project, what kind of slacker is that guy? But yes, today I merged my 20,000th commit into the curl repository – out of a total of 37,604 commits (53%). Not that anyone is counting.

20,000 – Today, January 17, 2026
19,000 – March 2025
18,000 – February 2024
17,000 – December 2022
16,000 – November 2021
15,000 – September 2020

The first kept curl git commit is dated December 29, 1999. That is the date of our source code import into SourceForge, as I quite annoyingly decided not to keep the prior history. The three years of development and the commits that happened before that import date are therefore not included in this count.

These 20,000 commits have been done on 5,589 separate days, meaning 59% of all days since December 1999. It also means I have done an average of 2.1 commits per day since then. The curl commits done before 2010 were not actually made with git, but with CVS. The curl source repository was converted to git when we switched hosting over to GitHub.

As of today, 1,431 separate individuals have authored commits merged into the curl source repository. 16 of us have made more than 100 commits. Five authors have written more than 1,000 commits. 941 of the authors only wrote a single commit (so far)! The second-most prolific curl committer by number of commits (Yang Tse) has almost 2,600 commits but he stopped being active back in 2013.

The top-20 all-time curl commit authors as of now:

Daniel Stenberg (20000 commits)
Yang Tse (2587 commits)
Viktor Szakats (2496 commits)
Steve Holme (1916 commits)
Dan Fandrich (1435 commits)
Stefan Eissing (941 commits)
Jay Satiro (773 commits)
Guenter Knauf (662 commits)
Gisle Vanem (498 commits)
Marc Hoersken (461 commits)
Marcel Raad (405 commits)
Patrick Monnerat (362 commits)
Kamil Dudka (255 commits)
Daniel Gustafsson (217 commits)
renovate[bot] (183 commits)
Tatsuhiro Tsujikawa (150 commits)
Michael Kaufmann (84 commits)
Alessandro Ghedini (83 commits)
Fabian Keil (77 commits)
Nick Zitzmann (70 commits)

My share of the total number of commits has been shrinking gradually for a long time and that is a good thing. It means we have awesome contributors and maintainers helping out. Not too far into the future I expect my share to go below 50%.

the number of commits done by the top-20 commit authors in curl over time
Number of commit authors in curl over time
Number of unique authors per month over time
Daniel’s share of authored commits over time

Future

These are my first 20,000 commits. I have no plans to go anywhere. I have averaged about 800 commits per year in the curl source code repository for the last 25 years. At that rate, reaching 30,000 would take another 12.5 years, so until about mid 2038 or so. If I manage to keep up that speed. Feels distant.

This was my commit 20,000.

daniel.haxx.se 2 weeks ago

More HTTP/3 focus, one backend less

In the curl project we have a long tradition of offering multiple optional backends for specific protocols. In this spirit we have added experimental support for a number of different HTTP/3 + QUIC backends over time. A while ago we dropped one of those experiments, the msh3 backend. Today we clean up even more and remove support for yet another backend: the OpenSSL-QUIC stack. We are now down to only supporting two different HTTP/3 alternatives: the ngtcp2 + nghttp3 combo, or quiche. And out of those two, the quiche backend is still considered experimental. The first release shipping with this change will be curl 8.19.0.

OpenSSL-QUIC is the QUIC stack implemented and provided by OpenSSL. To make matters a little complicated, this is a separate thing from the QUIC API that OpenSSL also offers. The first one is a full QUIC implementation, the second one is an API that is powerful enough to allow a separate QUIC implementation to use OpenSSL for its cryptographic and TLS needs.

2019 – BoringSSL introduced an API for QUIC. QUIC implementations picked it up and it worked. A pull request was made for OpenSSL to allow them to provide the same API so that QUIC stacks all over could use OpenSSL.
2021 – OpenSSL eventually declined to merge the pull-request and announced they would instead implement their own QUIC stack – one that nobody had asked for.
2023 – OpenSSL 3.2 shipped with support for their own QUIC stack. It was broken in many ways.
2025 – OpenSSL version 3.4.1 was released and now the QUIC stack worked decently. In OpenSSL 3.5.0 they announced a QUIC API that now finally allowed independent QUIC stacks to use OpenSSL.

Skilled contributors added support for OpenSSL-QUIC to curl primarily to allow people using OpenSSL to still be able to use HTTP/3. OpenSSL’s own QUIC implementation only ever reached experimental state in curl, meaning that we explicitly and strongly discourage users from using it in production and reserve ourselves the right to change functionality and more between versions.

There are three reasons why it did not graduate from experimental, and they are also the reasons why we think we are better off without offering support for it:

The API is lacking. We have communicated with the OpenSSL-QUIC team since even before the API first shipped and it still does not offer the knobs and controls we would like to make it a competitive QUIC alternative. We don’t feel they care much.

The performance is bad. And by bad I mean really bad. The leading QUIC implementation alternative, ngtcp2, transfers data much faster in all benchmarks and comparisons. Sometimes up to a factor of three difference.

The memory use is abysmal. The amount of extra memory required to do transfers with OpenSSL-QUIC compared to ngtcp2 can reach a factor of twenty.

This makes the curl backend situation simpler in the HTTP/3 and QUIC department, as the image below tries to show.

HTTP/3 backends in curl in January 2026
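For reference, this is how to tell which HTTP/3 backend a given curl build carries and how to request HTTP/3 for a transfer (the URL is a placeholder):

    # the first line names the QUIC backend (ngtcp2/nghttp3 or quiche)
    # and the Features line lists HTTP3 when one is built in
    curl --version

    # try HTTP/3, with fallback to earlier HTTP versions
    curl --http3 https://example.com/

    # require HTTP/3, no fallback
    curl --http3-only https://example.com/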

daniel.haxx.se 4 weeks ago

curl 8.18.0

Download curl from curl.se!

the 272nd release
5 changes
63 days (total: 10,155)
391 bugfixes (total: 13,376)
758 commits (total: 37,486)
0 new public libcurl function (total: 100)
0 new curl_easy_setopt() option (total: 308)
0 new curl command line option (total: 273)
69 contributors, 36 new (total: 3,571)
37 authors, 14 new (total: 1,430)
6 security fixes (total: 176)

This time there are no fewer than six separate vulnerabilities announced:

CVE-2025-13034: skipping pinning check for HTTP/3 with GnuTLS
CVE-2025-14017: broken TLS options for threaded LDAPS
CVE-2025-14524: bearer token leak on cross-protocol redirect
CVE-2025-14819: OpenSSL partial chain store policy bypass
CVE-2025-15079: libssh global knownhost override
CVE-2025-15224: libssh key passphrase bypass without agent set

There are a few changes this time, mostly around dropping support for various dependencies:

drop support for VS2008 (Windows)
drop Windows CE / CeGCC support
drop support for GnuTLS < 3.6.5
gnutls: implement CURLOPT_CAINFO_BLOB
openssl: bump minimum OpenSSL version to 3.0.0

See the release presentation video for a walk-through of some of the most important/interesting fixes done for this release, or go check out the full list in the changelog.

daniel.haxx.se 4 weeks ago

6,000 curl stickers

I am heading to FOSDEM again at the end of January. I go there every year and I have learned that there is a really sticker-happy audience there. The last few times I have been there, I have given away several thousand curl stickers. As I realized I did not actually have a few thousand stickers left, I had to restock.

I consider stickers a fun and somewhat easy way to market the curl project. It helps us get known and seen out there in the world. The stickers are paid for by curl donations. Thanks to all of you who have donated!

This time I ordered the stickers from stickerapp.se. They have a rather fancy web UI editor and tools to make sure the stickers become exactly the way I want them. I believe the total order price was actually slightly cheaper than the previous provider I used. I ordered five classic curl sticker designs and I introduced a new one. Here is the full set:

Six different curl stickers

Die cut curl logo 7.5cm x 2.8cm – the classic “small” curl logo sticker. (bottom left in the photo)
Die cut curl logo 10cm x 3.7cm – the slightly larger curl logo sticker. (top row in the photo)
Rounded rectangle 7.5cm x 4.1cm – yes we curl, the curl symbol and my face. (mid left in the photo)
Oval 7.5cm x 4cm – with the curl logo. (bottom right in the photo)
Round 2.5cm x 2.5cm – small curl symbol. (in the middle of the photo) My favorite. Perfect for the backside of a phone. Fits perfectly in the logo on the lid of a Framework laptop.
Round 4cm x 4cm – curl symbol in a slightly larger round version. The new sticker variant in the set. (on the right side in the middle row in the photo)

The quality and feel of the products are next to identical to previous sticker orders. They look great! I got 1,000 copies of each variant this time.

The official curl logo, the curl symbol, the colors and everything related are freely available and anyone is welcome to print their own stickers at will: https://curl.se/logo/

I bring curl stickers to all events I go to. Ask me! There is no way to buy stickers from me or from the curl project. I encourage you to look me up and ask for one or a few. At FOSDEM I try to make sure the wolfSSL stand has plenty to hand out, since it is a fixed geographical point that might be easier to find than me.

daniel.haxx.se 1 month ago

no strcpy either

Some time ago I mentioned that we went through the curl source code and eventually got rid of all strncpy() calls. strncpy() is a weird function with a crappy API. It might not null terminate the destination and it pads the target buffer with zeroes. Quite frankly, most code bases are probably better off completely avoiding it because each use of it is a potential mistake.

In that particular rewrite, when we made strncpy calls extinct, we made sure we would either copy the full string properly or return an error. It is rare that copying a partial string is the right choice, and if it is, we can just as well do the copy with memcpy() and handle the null terminator explicitly. This meant there was no case for using strlcpy or anything such either.

strncpy density in curl over time

But strcpy?

strcpy however has its valid uses, and it has a less bad and confusing API. The main challenge with strcpy is that when using it we do not specify the length of the target buffer nor of the source string. This is normally not a problem because in a C program it should only be used when we have full control of both. But normally and always are not necessarily the same thing. We are all human and we all make mistakes.

Using strcpy implies that there is at least one, or maybe two, buffer size checks done prior to the function invocation. In a good situation. Over time however – let’s imagine we have code that lives on for decades – when code is maintained, patched, improved and polished by many different authors with different mindsets and approaches, those size checks and the function invocation may glide apart. The further away from each other they go, the bigger the risk that something happens in between that nullifies one of the checks or changes the conditions for the strcpy.

To make sure that the size checks cannot be separated from the copy itself, we introduced a string copy replacement function the other day that takes the target buffer, target size, source buffer and source string length as arguments, and only if the copy can be made and the null terminator also fits is the operation done. This made it possible to implement the replacement using memcpy(). Now we can completely ban the use of strcpy in curl source code, like we already did strncpy.

Using this function version is a little more work and more cumbersome than strcpy since it needs more information, but we believe the upsides of this approach will outweigh the extra pain involved. I suppose we will see how that will fare down the road. Let’s come back in a decade and see how things developed!

strcpy density in curl over time

the strcopy source

An additional minor positive side-effect of this change is of course that this should effectively prevent the AI chatbots from reporting strcpy uses in curl source code and insisting it is insecure if anyone would ask (as people still apparently do). It has been proven numerous times already that strcpy in source code is like a honey pot for generating hallucinated vulnerability claims. Still, this will just make them find something else to make up a report about, so there is probably no net gain. AI slop is not a game we can win.
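The actual curl replacement function is not reproduced in this post and its real name and signature may differ, but a minimal sketch matching the description above (target buffer, target size, source buffer, source length, all-or-nothing via memcpy) could look like this:

    #include <stdbool.h>
    #include <string.h>

    /* hypothetical sketch of a bounded, all-or-nothing string copy:
       copies slen bytes from src plus a null terminator into dst,
       but only if all of it fits within dsize */
    static bool safe_strcpy(char *dst, size_t dsize,
                            const char *src, size_t slen)
    {
      if(slen >= dsize)
        return false;        /* no room for string + null terminator */
      memcpy(dst, src, slen);
      dst[slen] = '\0';
      return true;
    }

Because the size check and the copy live in the same function, later maintenance cannot accidentally separate them, which is the whole point described above.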

daniel.haxx.se 1 month ago

A curl 2025 review

Let’s take a look back and remember some of what this year brought.

At more than 3,400 commits, we did 40% more commits in curl this year than in any single previous year! At some point during 2025, all the other authors in the project together passed me in total lines added to the curl repository. Meaning that out of all the lines ever added in the curl repository, I have now added less than half.

More than 150 individuals authored commits we merged during the year. Almost one hundred of them were first-timers. Thirteen authors wrote ten or more commits. Viktor Szakats did the most commits per month for almost all months in 2025. Stefan Eissing has now done the latest commit for 29% of the product source code lines – where my share is 36%. About 598 authors have their added contributions still “surviving” in the product code. This is down from 635 at the end of last year.

We have 232 more tests at the end of this year compared to last December (now at 2,179 separate test cases), and for the first time ever we have more than twelve test cases per thousand lines of product source code. (Sure, counting test cases is rather pointless and weird since a single test can be small or big, simple or complex etc, but that’s the only count we have for this.)

The eight releases we did through the year is a fairly average amount.

No major revolution happened this year in terms of big features or changes. We reduced source code complexity a lot. We have stopped using some more functions we deem were often the reasons for errors or confusion. We have increased performance. We have reduced the number of allocations used. We added experimental support for HTTPS-RR, the DNS record. The bugfix frequency rate beat new records towards the end of the year as nearly 450 bugfixes shipped in curl 8.17.0.

This year we started doing release candidates. For every release we upload a series of candidates before the actual release so that people can help us test what is almost the finished version. This helps us detect and fix regressions before the final release rather than immediately after.

We end the year with 6 more curl command line options than we had last new year’s eve; now at 273 in total. The curl man page continued to grow; it is now more than 500 lines longer than last year (7,090 lines), which means that even when counting the number of man page lines per command line option it grew, from 24.7 to 26.

libcurl grew by a mere 100 lines of code over the year while the command line tool got 1,150 new lines. libcurl is now a little over 149,000 lines. The command line tool has 25,800 lines. Most of the commits clearly went into improving the products rather than expanding them. See also the dropped support list below.

This year OpenSSL finally introduced and shipped an API that allows QUIC stacks to use vanilla OpenSSL, starting with version 3.5. As a direct result of this, the use of the OpenSSL QUIC stack has been marked as deprecated in curl and is queued for removal early next year. As we also removed msh3 support during 2025, we are looking towards a 2026 with only two supported QUIC and HTTP/3 backends in curl.

This year the number of AI slop security reports for curl really exploded. The curl security team has gotten a lot of extra load because of this. We have been mentioned in media a lot during the year because of this. The reports not evidently made with AI help have also gotten significantly worse quality-wise while the total volume has increased – a lot. Also adding to our collective load. We published nine curl CVEs during 2025, all at severity low or medium.

A new breed of AI-powered high quality code analyzers, primarily ZeroPath and Aisle Research, started pouring in bug reports to us with potential defects. We have fixed several hundred bugs as a direct result of those reports – so far. This is in addition to the regular set of code analyzers we run against the code and for which we of course also fix the defects they report.

At the end of the year 2025 we see 79 TB of data getting transferred monthly from curl.se. This is up from 58 TB (+36%) for the exact same period last year. We don’t have logs or analysis so we don’t know for sure what all this traffic is, but we know that only a tiny fraction is actual curl downloads. A huge portion of this traffic is clearly not human-driven.

More than two hundred pull requests were opened each month in curl’s GitHub repository. For a brief moment during the fall we reached zero open issues. We have over 220 separate CI jobs that at the end of the year spend more than 25 CPU days per day verifying our ongoing changes.

The curl dashboard expanded a lot. I removed a few graphs that were not accurate anymore, but the net total change is still that we went up from 82 graphs in December 2024 to 92 separate illustrations in December 2025. Now with a total of 259 individual plots (+25).

We removed old/legacy things from the project this year, in an effort to remove laggards, to keep focus on what’s important and to make sure all of curl is secure:

Support for Visual Studio 2005 and older (removed in 8.13.0)
Secure Transport (removed in 8.15.0)
BearSSL (removed in 8.15.0)
msh3 (removed in 8.16.0)
winbuild build system (removed in 8.17.0)

It was a crazy year in this aspect (as well) and I was honored with:

European Open Source Achievement Award 2025
Developer of the year 2025
Swedish IVA Gold Medal 2025

I also dropped out of the Microsoft MVP program during the year, which I was accepted into in October 2024.

I attended eight conferences and talked – in five countries. My talks are always related to curl in one way or another. I also participated in podcasts during the year. Always related to curl. The conferences and podcasts included:

Open Infra Forum
Joy of Coding
Open Source Summit Europe
Security Weekly
Open Source Security Day
Two DevOps
Netstack.FM
Software Engineering Radio
OsProgrammadores

daniel.haxx.se 1 month ago

20,000 issues on GitHub

The curl project moved its source code hosting over to GitHub in March 2010, but we kept the main bug tracker running like before – on Sourceforge. It took us a few years, but in 2015 we finally ditched the Sourceforge version fully. We adopted and switched over to the pull request model and we labeled the GitHub issue tracker the official one to use for curl bugs. Announced on the curl website proper on March 9, 2015.

GitHub holds issues and pull requests in the same number series, and since a few years back they also added discussions to the mix. This number is another pointless one, but it is large and even, so let’s celebrate it!

Issue one in curl’s GitHub repository is from October 2010.
Issue 100 is from May 18, 2014.
Issue 500 is from Oct 20, 2015.
Issue 10,000 was created November 29, 2022. That meant 9,500 issues created in 2,597 days: 3.7 issues/day on average over seven years.
Issue 20,000 (a pull request really) was created today, on December 16, 2025. 10,000 more issues created in 1,113 days: 9 issues/day over the last three years.

The pace at which primarily new pull requests are submitted has certainly gone up over recent years, as this graph clearly shows. (Since the current month is only half over so far, the drop at the right end of the plot is quite expected.)

Number of issues and pull-requests submitted each month

We work hard in the project to keep the number of open issues and pull requests low even when the frequency rises.

Number of open issues and pull requests any given day

It can also be noted that issues and pull requests are typically closed fast. Out of the ones that are closed with instructions in the git commit message, the trend looks like below. Half of them are closed within 6 hours.

Number of hours until an issue is closed, when closed with git commit instructions

Of course, these graphs are updated daily and shown on the curl dashboard.

Note: we have not seen the AI slop tsunami in the issues and pull requests as we do on Hackerone. This growth is entirely human made and benign.

daniel.haxx.se 2 months ago

Parsing integers in C

In the standard libc API set there are multiple functions provided that do ASCII-numbers-to-integer conversions. They are handy and easy to use, but also error-prone and quite lenient in what they accept and silently just swallow.

atoi() is perhaps the most common and basic one. It converts from a string to a signed integer. There is also the companion atol() which instead converts to a long. Some problems these have include that they return 0 instead of an error, that they have no checks for under- or overflow, and in the atol() case there is the challenge that long has different sizes on different platforms. So neither of them can reliably be used for 64-bit numbers. They also don’t say where the number ended. Using these functions opens up your parser to not detect and handle errors or weird input. We write better and stricter parsers when we avoid these functions.

strtol(), along with its siblings strtoul() and strtoll() etc, is more capable. These functions have overflow detection and they can detect errors – like if there is no digit at all to parse. However, these functions too happily swallow leading whitespace and they allow a + or – in front of the number. The long versions of these functions have the problem that long is not universally 64-bit, and the long long version has the problem that it is not universally available. The overflow and underflow detection with these functions is quite quirky, involves errno and forces us to spend multiple extra lines of conditions on every invoke just to be sure we catch those.

I think we in the curl project, as well as more or less the entire world, have learned through the years that it is usually better to be strict when parsing protocols and data, rather than be lenient and try to accept many things and guess what it otherwise maybe meant. As a direct result of this, we make sure that curl parses and interprets data exactly as that data is meant to look, and we error out as soon as we detect the data to be wrong. For security and for solid functionality, providing syntactically incorrect data is not accepted. This also implies that all number parsing has to be exact, handle overflows and maximum allowed values correctly and conveniently, and errors must be detected. It always supports up to 64-bit numbers.

I have previously blogged about how we have implemented our own set of parsing functions in curl, and these also include number parsing. curlx_str_number() is the most commonly used of the ones we have created. It parses a string and stores the value in a 64-bit variable (which in curl code is always present and always 64-bit). It also has a max value argument so that it returns error if the number is too large. And it of course also errors out on overflows etc.

This function of ours does not allow any leading whitespace and certainly no prefixed pluses or minuses. If they should be allowed, the surrounding parsing code needs to explicitly allow them.

The curlx_str_number function is most probably a little slower than the functions it replaces, but I don’t think the difference is huge, and the convenience and the added strictness are most welcome. We write better code and parsers this way. More secure. (curlx_str number source code)

As of yesterday, November 12 2025, all of those weak function calls have been wiped out from the curl source code. The drop seen in early 2025 was when we got rid of all strtol() variations. Yesterday we finally got rid of the last atoi() calls.

libc number function call density in curl production code

(Daily updated version of the graph.)

The function mentioned above uses a ‘curlx’ prefix. We use this prefix in curl code for functions that exist in libcurl source code but that can be used by the curl tool as well – sharing the same code without them being offered by the libcurl API. A thing we do to reduce code duplication and share code between the library and the command line tool.
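curlx_str_number() itself is not quoted in this post and its exact signature may differ; as a sketch of what such a strict parser involves, here is a digits-only 64-bit parser with a maximum-value argument (names here are mine, not curl’s):

    #include <stdint.h>

    /* hypothetical sketch: parse decimal digits at *str into *out.
       No leading whitespace, no signs, overflow-checked, bounded by max.
       Advances *str past the digits. Returns 0 on success, -1 on error. */
    static int str_number(const char **str, int64_t *out, int64_t max)
    {
      const char *p = *str;
      int64_t num = 0;
      if(*p < '0' || *p > '9')
        return -1;                          /* at least one digit required */
      while(*p >= '0' && *p <= '9') {
        int digit = *p++ - '0';
        if(num > (INT64_MAX - digit) / 10)  /* would overflow 64 bits */
          return -1;
        num = num * 10 + digit;
      }
      if(num > max)                         /* larger than the caller allows */
        return -1;
      *out = num;
      *str = p;                             /* report where the number ended */
      return 0;
    }

Unlike atoi() or strtol(), a parser shaped like this rejects "  12", "+12" and "-12" outright, reports exactly where the number ended, and cannot silently wrap around.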

daniel.haxx.se 3 months ago

curl 8.17.0

Download curl from curl.se.

the 271st release
11 changes
56 days (total: 10,092)
448 bugfixes (total: 12,537)
699 commits (total: 36,725)
2 new public libcurl function (total: 100)
0 new curl_easy_setopt() option (total: 308)
1 new curl command line option (total: 273)
69 contributors, 35 new (total: 3,534)
22 authors, 5 new (total: 1,415)
1 security fix (total: 170)

CVE-2025-10966: missing SFTP host verification with wolfSSH. curl’s code for managing SSH connections when SFTP was done using the wolfSSH powered backend was flawed and missed host verification mechanisms.

We drop support for several things this time around:

drop Heimdal support
drop the winbuild build system
drop support for Kerberos FTP
drop support for wolfSSH
up the minimum libssh2 requirement to 1.9.0

And then we did some other smaller changes:

add a notifications API to the multi interface
expand to use 6 characters per size in the progress meter
support Apple SecTrust – use the native CA store
add wcurl to the command line tool: import v2025.11.04
write-out: make able to output all occurrences of a header

We set a new project record this time with no fewer than 448 documented bugfixes since the previous release. The release presentation mentioned above discusses some of the perhaps most significant ones.

There is a small set of pull-requests waiting to get merged, but other than that our future is not set and we greatly appreciate your feedback, submitted issues and provided pull-requests to guide us. If this release happens to include an annoying regression, there might be a patch release already next week. If we are lucky and it doesn’t, then we aim for an 8.18.0 release in early January 2026.

daniel.haxx.se 3 months ago

Yes really, curl is still developed

One of the most common reactions or questions I get about curl when I show up at conferences somewhere and do presentations: — is curl still being actively developed? How many more protocols can there be? This of course being asked by people without very close proximity or insight into the curl project and probably neither into the internet protocol world – which frankly probably is most of the civilized world. Still, these questions keep surprising me. Can projects actually ever get done ? (And do people really believe that adding protocols is the only thing that is left to do?) There are new car models being made every year in spite of the roads being mostly the same for the last decades and there are new browser versions shipped every few weeks even though the web to most casual observers look roughly the same now as it did a few years ago. Etc etc. Even things such as shoes or bicycles are developed and shipped in new versions every year. In spite of how it may appear to casual distant observers, very few things remain the same over time in this world. This certainly is also true for internet, the web and how to do data transfers over them. Just five years ago we did internet transfers differently than how we (want to) do them today. New tweaks and proposals are brought up at least on a monthly basis. Not evolving implies stagnation and eventually… death. As standards, browsers and users update their expectations, curl does as well. curl needs to adapt and keep up to stay relevant. We want to keep improving it so that it can match and go beyond what people want from it. We want to help drive and push internet transfer technologies to help users to do better , more efficient and more secure operations. We like carrying the world’s infrastructure on our shoulders. One of the things that actually have occurred to me, after having worked on this project for some decades by now – and this is something I did not at all consider in the past, is that there is a chance that the project will remain alive and in use the next few decades as well. Because of exactly this nothing-ever-stops characteristic of the world around us, but also of course because of the existing amount of users and usage. Current development should be done with care, a sense of responsibility and with the anticipation that we will carry everything we merge today with us for several more decades – at least. At the latest curl up meeting, I had session I called 100 year curl where I brought up thoughts for us as a project that we might need to work on and keep in mind if indeed we believe the curl project will and should be able to celebrate its 100th birthday in a future. It is a slightly overwhelming (terrifying even?) thought but in my opinion not entirely unrealistic. And when you think about it, we have already traveled almost 30% of the way towards that goalpost. — I used curl the first time decades ago and it still looks the same. This is a common follow-up statement. What have we actually done during all this time that the users can’t spot? A related question that to me also is a little amusing is then: — You say you worked on curl full time since 2019, but what do you actually do all days? We work hard at maintaining backwards compatibility and not breaking existing use cases. If you cannot spot any changes and your command lines just keep working, it confirms that we do things right. curl is meant to do its job and stay out of the way. To mostly be boring. A dull stack is a good stack. 
We have refactored and rearranged the internal architecture of curl and libcurl several times in the past and we keep doing it at regular intervals, as we improve and adapt to new concepts, new ideas and the ever-evolving world. But we never let that impact the API or the ABI, and we never break any previously working curl tool command lines.

I personally think this is curl’s secret super power. The one thing we truly have accomplished and managed to stick to: stability. In several senses of the word. curl offers stability in an unstable world.

Counting commit frequency or any other metric of project activity, the curl project is actually doing more development now, and at a higher pace, than ever before in its entire lifetime. We do this to offer you and everyone else the best, most reliable, fastest, most feature rich, best documented and most secure internet transfer library on the planet.
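To give a feel for what that stability means in practice, here is a minimal sketch of a transfer using libcurl’s classic easy interface. All of these calls – curl_global_init, curl_easy_init, curl_easy_setopt, curl_easy_perform, curl_easy_strerror, curl_easy_cleanup – have been in the public API for over two decades, so a program along these lines written way back then should still build and run against a current libcurl. The URL is of course just a placeholder.

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURLcode res;
      CURL *curl;

      /* initialize libcurl's global state once per program */
      curl_global_init(CURL_GLOBAL_DEFAULT);

      curl = curl_easy_init();
      if(curl) {
        /* set the URL to transfer from (placeholder URL) */
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

        /* follow HTTP redirects, like the -L command line option does */
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

        /* do the transfer; the response body goes to stdout by default */
        res = curl_easy_perform(curl);
        if(res != CURLE_OK)
          fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));

        /* free the handle when done */
        curl_easy_cleanup(curl);
      }

      curl_global_cleanup();
      return 0;
    }

Build it with something like cc fetch.c -lcurl (assuming the libcurl development files are installed). The command line equivalent is simply curl -L https://example.com/ – and both styles of use have kept working, unchanged, across a very long list of releases.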


A gold ceremony to remember

There are those moments in life you know already from the start are going to be rare, once-in-a-lifetime events. This evening was one of those times.

On a dark and wet autumn Friday afternoon my entire family and I dressed up to the fanciest level you could expect and took a taxi to the Stockholm City Hall. Anja my wife, and my kids Agnes and Rex.

Rex, Agnes, Daniel, Anja. The Stenberg family.

This was the Royal Swedish Academy of Engineering Sciences’ ( IVA ) 106th Högtidssammankomst (“festive gathering”) since its founding in 1919. Being one of the four gold medal recipients of the night, our family got a dedicated person assigned to us who would help us “maneuver” the venue and agenda. Thanks Linus!

In the golden hall, Anja and I took our reserved seats in the front row as the almost 700 other guests slowly entered and filled up every last available chair. The other guests were members of the Academy or special invitees: ministers, the speaker of the parliament and so on. All in tail coats, evening dresses and the like, to conform with the dress code of the night.

The Golden Hall before people arrived

The golden hall is named after its gold-colored walls, filled with paintings of historic Swedish figures that contribute to a pompous and important atmosphere and spirit. This is the kind of room you want to get awards in.

Part of the program in this golden hall was the gold medal award ceremony. After short two-minute videos about each of the awardees and our respective deeds and accomplishments had been shown on the giant screen at the front of the room, we awardees were called to the stage.

Three gold medals and one large gold medal were handed out to my fellow awardees and myself this year. Carl-Henric Svanberg received the large gold medal. Mats Danielsson and Helena Hedblom were awarded the gold medal, the same as I. The medals were handed to us one by one by Marcus Wallenberg .

Photographer: Erik Cronberg. Marcus and me shaking hands, with Helena Hedblom on the right.

Photographer: Erik Cronberg. Marcus on the left, me in the middle and Mats Danielsson behind me.

In one of the agenda items in the golden hall, IVA’s CEO Sylvia Schwaag Serger gave a most inspiring talk about Swedish engineering and mentioned an amazing list of feats and accomplishments from the last year, with hope and anticipation for the future. I and curl were also mentioned in her speech. Even more humbled. The audience here included some of the top minds and engineering brains in Sweden. Achievers and great minds. The kind of people you want appreciation from, because they know a thing or two.

A small break followed. We strolled down to the giant main hall for some drinks. The blue hall, which is somewhat famous to anyone who has ever watched the Nobel Prize banquets. Several people told me the story that the original intent was for the walls to be blue, but…

Projecting patterns on the walls

Banquet

At about 19:00, Anja and I had to sneak up a floor again together with the crowd of others who were seated at the main long table you can see in the photo above. Table 1. On the balcony someone mentioned I should wear the prize, so with some help I managed to get it around my neck. It’s not a bad feeling, I can tell you.

Daniel, wearing the IVA gold medal.

As everyone else in the hall found their way to their seats, we got to do a slow procession down the big wide stairs into the main hall and find our way to our seats. Then followed a most wonderful three-course meal.
I had excellent company in my table neighbors and we had a lively and interesting conversation all through the dinner. There were a few welcome short interruptions in the form of speeches and music performances. A most delightful dinner.

After the final apple tart was finished, there was coffee and more drinks served upstairs again, as the golden hall had apparently managed to transition while we ate downstairs.

Disco(?) in the golden hall

When the clock eventually approached midnight, the entire Stenberg family walked off into the night and went home. A completely magical night was over, but it will live on in my mind for a long time. Thank you to every single one involved.

Entertainment

Program

Menu

The medal

The medal has an image of Prometheus on the front side, and Daniel Stenberg 2025 engraved on the back side. On the back it also says the name of the Academy and för framstående gärning, for outstanding achievement. A medal to be proud of.

In the box

Front side

Back side

Of course I figured this moment in time also called for a graph.


On 110 operating systems

In November 2022, after I had already been keeping track and adding names to this slide for a few years, we could boast about curl having run on 89 different operating systems , and only one year later we celebrated having reached 100 operating systems . This time I am back with another update, and here is the official list of the 110 operating systems that have run curl.

I don’t think curl is unique in having reached this many operating systems, but I think it is a rare thing, and I think it is even rarer that we have actually tracked down all these names to have them mentioned – and counted. For several of these cases, no patches or improvements were ever sent back to the curl project and we don’t know how much or how little work was required to make them happen.

The exact definition of “operating system” in this context is vague, but separate Linux distributions do not count as separate operating systems. There are probably more systems to include. Please tell me if you have run curl on something not currently mentioned.
