Posts in Open-source (20 found)

How I discover new (and old) blogs and websites

One of the great things about having a blog is that you get a space that is entirely yours, where you share whatever you want and you make it look exactly how you want it to look. It's a labor of creativity and self-expression. Another encouraging aspect of having a blog is being read by others. I love receiving emails from people who liked a post. It's just nice to know I'm not shouting into the void! But take, for instance, posts I wrote last year or many years ago. How do those get discovered? Perhaps you wrote an awesome essay on your favorite topic back in 2022. How can I or anyone else stumble upon your work? Making it easy to discover hidden gems from the indie web was my motivation for making powRSS. powRSS is a public RSS feed aggregator to help you find the side of the internet that seldom appears on corporate search engines. It surfaces posts and blogs going all the way back to 1995. You never know what you're going to find and I think it's really fun. Today I made a video showing how it works.

1 view
Daniel Mangum Yesterday

Interesting SPI Routing with iCE40 FPGAs

A few weeks ago I posted about how much fun I was having with the Fomu FPGA development board while travelling. This project from Tim ‘mithro’ Ansell and Sean ‘xobs’ Cross is not new, but remains a favorite of mine because of how portable it is — the entire board can fit in your USB port! The Fomu includes a Lattice Semiconductor iCE40 UltraPlus 5K, which has been a popular FPGA option over the past few years due to its reverse-engineered bitstream format and the ability to program it with a fully open source toolchain (see updated repository here).

0 views
Evan Schwartz Yesterday

Scour - October Update

Hi friends, In October, Scour ingested 1,042,894 new posts from 14,140 sources. I was also training for the NYC Marathon (which is why this email comes a few days into November)! Last month was all about Interests: Your weekly email digest now includes a couple of topic recommendations at the end. And, if you use an RSS reader to consume your Scour feed, you’ll also find interest recommendations in that feed as well. When you add a new interest on the Interests page, you’ll now see a menu of similar topics that you can click to quickly add. You can browse the new Popular Interests page to find other topics you might want to add. Infinite scrolling is now optional. You can disable it and switch back to explicit pages on your Settings page. Thanks to Tomáš Burkert for this suggestion! Earlier, Scour’s topic recommendations were a little too broad. I tried to fix that and now, as you might have noticed, they’re often too specific. I’m still working on solving this “Goldilocks problem”, so more on this to come! Finally, here were a couple of my favorite posts that I found on Scour in October: “Introducing RTEB: A New Standard for Retrieval Evaluation”, “Everything About Transformers”, and “Turn off Cursor, turn on your mind”. Happy Scouring! - Evan

1 view
neilzone 2 days ago

Using vimwiki as a personal, portable, knowledge base

A while back, I was looking for a tool to act as basically a semi-organised dumping ground for all sorts of notes and thoughts. I wanted it to be Free software, easy to keep in sync / use across multiple devices, usable offline / without a LAN connection, and able to render Markdown nicely. I looked at logseq, which looked interesting, but decided to give vimwiki a go. I spend a lot of my time in vim already, so this seemed like it would fit into the way I work very easily. And I was right. Since it is “just” a collection of .md files, it appeals to me from a simplicity point of view, and also makes synchronisation and backing up very easy. There are multiple ways to install vimwiki. I went for a plugin-manager installation, adding the necessary lines to my .vimrc (although I already had one of them). To add a new wiki with support for Markdown (rather than the default vimwiki syntax), I put the details into the g:vimwiki_list variable in my .vimrc. Then, I opened vim, and used <Leader>ww to open the wiki. On the first use, there was a prompt to create the first page. The basic vimwiki-specific keybindings are indeed the ones I use the most to manage the wiki itself. For me, “<Leader>” is “\”. Otherwise, I just use vim normally, which is a significant part of the appeal for me. The wiki is just a collection of markdown files, in the directory specified in the “path” field in the configuration. This makes synchronisation easy. I sync my vimwiki directory with Nextcloud, so that it propagates automatically onto my machines, and I can also push it to git, so that I can grab it on my phone. This works for me, and means that I don’t need to configure, secure etc. another sync tool or a dedicated sync system. There is support for multiple wikis, although I have not experimented much with this. Each wiki gets its own entry in g:vimwiki_list. You can use :VimwikiUISelect in vim to select which wiki you want to use. I really like vimwiki. It is simple but effective, and because it runs in vim, it does not require me to learn a different tool, or adjust my workflow. I just open vim and open my wiki. Prior to vimwiki, I was just dropping .md or .txt files into a directory which got synchronised, so this is not massively different, other than being more convenient. Everything is still file-based, but with an increased ease of organisation. For someone who didn’t already use vim, it is probably a more challenging choice.
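For readers who want to try the same setup, here is a minimal sketch of the kind of .vimrc configuration described above, assuming the vim-plug plugin manager; the wiki path is illustrative, not taken from the post.

```vim
" Install vimwiki via vim-plug (one of several installation options)
call plug#begin()
Plug 'vimwiki/vimwiki'
call plug#end()

" Declare a Markdown wiki rather than the default vimwiki syntax.
" 'path' should point at whatever directory you synchronise.
let g:vimwiki_list = [{'path': '~/vimwiki/',
      \ 'syntax': 'markdown', 'ext': '.md'}]
```

With something like this in place, <Leader>ww opens (or offers to create) the wiki index, and each additional dictionary in g:vimwiki_list defines a further wiki.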

1 view
neilzone 3 days ago

Upgrading our time recording system from Kimai v1 to Kimai v2

I have used Kimai as a FOSS time recording system for probably the best part of 10 years. It is a great piece of software, allowing multiple users to record the time that they spend on different tasks, linked to different customers and projects. I use it for time tracking for decoded.legal, recording all my working time. I run it on a server which is not accessible from the Internet, so the fact that we were running the now long-outdated v1 of the software did not bother me too much. But, as part of ongoing hygiene / system security stuff, I’ve had my eye on upgrading it to Kimai v2 for a while now, and I’ve finally got round to upgrading it. Fortunately, there is a clear upgrade path from v1 to v2, and It Just Worked. The installation of v2 was itself pretty straightforward, with clear installation instructions. I then imported the data from v1, and the migration/importer tool flagged a couple of issues which needed fixing (e.g. no email address associated with system users, which is now a requirement). The documentation was good in terms of how to deal with those. All in all, it took about 20 minutes to install the new software, sort out DNS, the web server configuration, TLS, and so on, and then import the data from the old installation. I used the export functionality to compare the data in v2 with what I had in v1, to check that there were no (obvious, anyway) disparities. There were not, which was good! One of the changes in Kimai v2 is the ability to create customised exportable timesheets easily, using the GUI tool. This means that, within a couple of minutes, I had created the kind of timesheet that I provide to clients along with each monthly invoice, so that they can see exactly what I did on which of their matters, and how long I spent on it. For clients who prefer to pay on the basis of time spent, this is important. This is nothing fancy; just a clear summary on the front page, and then a detailed breakdown. I have yet to work out how to group the breakdown on a per-project basis, rather than a single chronological list, but I doubt that this will be much of a problem. I have yet to investigate the possibility of some automation, particularly around the generation of timesheets at the end of each month, one per customer. I’ll still check each of them by hand, of course, but automating their production would be nice. Or, even if not automated, just one click to produce them all. As with v1, Kimai v2 stores its data in a MariaDB database, so automating backups is straightforward. Again, there are clear instructions, which is a good sign.
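Since everything lives in MariaDB, a cron-driven dump is one easy way to automate those backups. A minimal sketch, assuming the database is called kimai2 and that credentials are supplied via ~/.my.cnf; the names and paths are illustrative rather than from the post.

```sh
#!/bin/sh
# Dump the Kimai database to a date-stamped file.
# --single-transaction takes a consistent snapshot without locking tables.
BACKUP_DIR=/var/backups/kimai
mkdir -p "$BACKUP_DIR"
mysqldump --single-transaction kimai2 \
  > "$BACKUP_DIR/kimai2-$(date +%Y%m%d).sql"
```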

0 views
daniel.haxx.se 3 days ago

curl 8.17.0

Download curl from curl.se. This is the 271st release: 11 changes, 56 days since the previous release (total: 10,092), 448 bugfixes (total: 12,537), 699 commits (total: 36,725), 2 new public libcurl functions (total: 100), 0 new curl_easy_setopt() options (total: 308), 1 new curl command line option (total: 273), 69 contributors, 35 new (total: 3,534), 22 authors, 5 new (total: 1,415), 1 security fix (total: 170). CVE-2025-10966: missing SFTP host verification with wolfSSH. curl’s code for managing SSH connections when SFTP was done using the wolfSSH-powered backend was flawed and missed host verification mechanisms. We drop support for several things this time around: Heimdal, the winbuild build system, Kerberos FTP, and wolfSSH, and we up the minimum libssh2 requirement to 1.9.0. And then we did some other smaller changes: a notifications API added to the multi interface, the progress meter expanded to use 6 characters per size, support for Apple SecTrust to use the native CA store, a new option added to the command line tool, wcurl imported at v2025.11.04, and write-out made able to output all occurrences of a header. We set a new project record this time with no less than 448 documented bugfixes since the previous release. The release presentation discusses some of the perhaps most significant ones. There is a small set of pull-requests waiting to get merged, but other than that our future is not set, and we greatly appreciate your feedback, submitted issues and provided pull-requests to guide us. If this release happens to include an annoying regression, there might be a patch release already next week. If we are lucky and it doesn’t, then we aim for an 8.18.0 release in early January 2026.

0 views
neilzone 4 days ago

Using LibreOffice and other Free software for documents as a lawyer

I was asked recently about how I get on using LibreOffice for document-related legal work, and I promised to write down some thoughts. The short answer is that I use a mix of LibreOffice and other FOSS tools, and I’m very positive about what I do and how I do it, with no particular concerns. (I’ve written more broadly about how I use Free software for legal work; this blogpost is more specific.) This is about my experience. Yours might be different. You might not want to, or be able to, use, or try, LibreOffice (or vim, or git, or whatever). And that’s fine. I’m not trying to convert or persuade anyone. I do a lot of work which entails producing and amending documents, and exchanging documents with others. This includes contracts, policies and procedures, and collaborative report writing. Occasionally, it means filling in other people’s forms. I use LibreOffice’s Writer for this. I use Writer pretty much every day, and have done for several years, with a wide range of clients and counterparties, including large law firms, small companies, and government departments, and I have no concerns or significant gripes. I have made templates for my most common types of document, and I have styles set up to make formatting easy and consistent. (I don’t know why people produce documents without styles, but that’s just a personal gripe.) I have exchanged complex documents, usually with lots of tracked changes and comments, with many, many recipients, and I have had no problems with tracked changes, or people not being able to open documents or see what I have done. I’ve had a document recently where automatic numbering had gone wrong, and one where formatting had been messed up, but these were both documents which started life 5+ years ago, and I have not been able to identify whether this was a LibreOffice Writer issue, or a Word (or whatever tool others involved have been using) issue, or something else. In both cases, I fixed them rapidly and got on with things. I don’t know what Word is like recently, but when I last used it a few years ago, I found automatic numbering and formatting were mostly fine but occasionally a pain back then too, so perhaps this is just par for the course. I found Writer’s recent change to dark mode / theming a bit of a pain, but I seem to have resolved it now. For version control of documents, I don’t do anything fancy. I have a script which appends a date and timestamp to the beginning of the file’s name, and this works well. I get a directory of drafts, with clear naming / sequencing. I’ve experimented with git and documents, and while it sort of works to a point, it is not the right approach for me at the moment. There are a few factors which might aid my positive experience. I do a lot of advisory work, where I produce reports, advice notes, and briefing notes. I don’t tend to use LibreOffice for this, preferring instead to use vim, writing in Markdown. For instance, this is how I prepared the new terms of service for mastodon.social / mastodon.online, and, on a friendly basis outside work, a draft vendor agreement for postmarketOS. This means none of the cruft of a document filetype, and it means that I can use git for version control in a way that actually works (unlike with documents). It also makes it easy to produce diffs. But it doesn’t work well for things like cross-referencing; it is not the right tool for the job. If the output needs to be a nicely-formatted PDF, I use pandoc and typst to convert the Markdown using a template.
This makes producing a formatted document very easy, while letting me focus on the content. Some clients send and receive plain text / .md files (and, yes, you, who likes LaTeX files :)) and share .diffs; others prefer documents. Both are fine with me, and I go with whichever works better for each client or each situation. I do not use Impress, the presentation tool, other than for viewing presentations which are sent to me. Instead, I use reveal.js for presentations, writing in markdown and presenting in my browser. I really like reveal.js. I can easily upload my presentations for people to view, and I can convert them to .pdf for distribution. I’ve not had to work on a collaborative presentation in the last 5+ years; I imagine that I’d have to use Impress, or a client’s hosted tool of choice, if someone wanted that. I use the spreadsheet tool, Calc, when I need a spreadsheet, which is not very often. It is mostly basic accountancy. For my limited uses, Calc has been absolutely fine, and I’m certainly not qualified to comment on it in any detail. Some clients want me to use their choice of hosted tools - Microsoft, Google Docs, Cryptpad, Nextcloud, etherpad, and so on. That’s fine; if a client wants to use them, and gives me access, I use them. All the ones that I’ve tried so far work fine in Firefox. I’m also happy to make PRs to, or commit directly into, a client’s git repositories. Over the past few years, I’ve hosted instances of Collabora (via Nextcloud), Cryptpad, and etherpad. All have had their pros and cons, and perhaps that’s something for a different blogpost. Most recently, I hosted etherpad, but right now, I’m not hosting any of these. I just don’t use them enough. I don’t depend on any third-party plug-ins or integrations. I imagine that for someone whose work depends on that kind of thing, Writer might not be a good fit. I don’t do litigation, or anything which requires court filings.
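As a concrete illustration of that Markdown-to-PDF step, here is a hedged sketch; recent pandoc versions can render PDFs via typst, and the file names and template here are hypothetical stand-ins for the author's own.

```sh
# Convert an advice note written in Markdown into a formatted PDF,
# rendering with typst and applying a custom template.
pandoc advice-note.md \
  --pdf-engine=typst \
  --template=letterhead.typ \
  -o advice-note.pdf
```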

0 views
daniel.haxx.se 4 days ago

Yes really, curl is still developed

One of the most common reactions or questions I get about curl when I show up at conferences somewhere and do presentations: — is curl still being actively developed? How many more protocols can there be? This is of course asked by people without very close proximity to, or insight into, the curl project, and probably not into the internet protocol world either – which frankly probably is most of the civilized world. Still, these questions keep surprising me. Can projects actually ever get done? (And do people really believe that adding protocols is the only thing that is left to do?) There are new car models being made every year in spite of the roads being mostly the same for the last decades, and there are new browser versions shipped every few weeks even though the web to most casual observers looks roughly the same now as it did a few years ago. Etc etc. Even things such as shoes or bicycles are developed and shipped in new versions every year. In spite of how it may appear to casual distant observers, very few things remain the same over time in this world. This certainly is also true for the internet, the web, and how to do data transfers over them. Just five years ago we did internet transfers differently than how we (want to) do them today. New tweaks and proposals are brought up at least on a monthly basis. Not evolving implies stagnation and eventually… death. As standards, browsers and users update their expectations, curl does as well. curl needs to adapt and keep up to stay relevant. We want to keep improving it so that it can match and go beyond what people want from it. We want to help drive and push internet transfer technologies to help users to do better, more efficient and more secure operations. We like carrying the world’s infrastructure on our shoulders. One of the things that actually has occurred to me, after having worked on this project for some decades by now – and this is something I did not at all consider in the past – is that there is a chance that the project will remain alive and in use for the next few decades as well. Because of exactly this nothing-ever-stops characteristic of the world around us, but also of course because of the existing amount of users and usage. Current development should be done with care, a sense of responsibility and with the anticipation that we will carry everything we merge today with us for several more decades – at least. At the latest curl up meeting, I had a session I called 100 year curl where I brought up thoughts for us as a project that we might need to work on and keep in mind if indeed we believe the curl project will and should be able to celebrate its 100th birthday in the future. It is a slightly overwhelming (terrifying even?) thought but in my opinion not entirely unrealistic. And when you think about it, we have already traveled almost 30% of the way towards that goalpost. — I used curl the first time decades ago and it still looks the same. This is a common follow-up statement. What have we actually done during all this time that the users can’t spot? A related question that to me also is a little amusing is then: — You say you worked on curl full time since 2019, but what do you actually do all days? We work hard at maintaining backwards compatibility and not breaking existing use cases. If you cannot spot any changes and your command lines just keep working, it confirms that we do things right. curl is meant to do its job and stay out of the way. To mostly be boring. A dull stack is a good stack.
We have refactored and rearranged the internal architecture of curl and libcurl several times in the past, and we keep doing it at regular intervals as we improve and adapt to new concepts, new ideas and the ever-evolving world. But we never let that impact the API or the ABI, or break any previously working curl tool command lines. I personally think that this is curl’s secret super power. The one thing we truly have accomplished and managed to stick to: stability. In several aspects of the word. curl offers stability in an unstable world. Counting commit frequency or any other metric of project activity, the curl project is actually doing more development now, and at a higher pace, than ever before during its entire lifetime. We do this to offer you and everyone else the best, the most reliable, the fastest, the most feature-rich, the best documented and the most secure internet transfer library on the planet.

0 views
xenodium 4 days ago

agent-shell 0.17 improvements + MELPA

While it's only been a few weeks since the last agent-shell post, there are plenty of new updates to share. What's agent-shell again? A native Emacs shell to interact with any LLM agent powered by ACP (Agent Client Protocol). Before getting to the latest and greatest, I'd like to say thank you to new and existing sponsors backing my projects. While the work going in remains largely unsustainable, your contributions are indeed helping me get closer to sustainability. Thank you! If you benefit from my content and projects, please consider sponsoring to make the work sustainable. Work paying for your LLM tokens and other tools? Why not get your employer to sponsor agent-shell also? Now on to the very first update… Both agent-shell and acp.el are now available on MELPA. As such, installation now boils down to a couple of lines of configuration (see the sketch after this post). OpenCode and Qwen Code are two of the latest agents to join agent-shell. Both are accessible through the agent picker, but also directly via their own commands. Adding files as context has seen quite a few improvements in different shapes. Thank you Ian Davidson for contributing embedded context support. You can now take a screenshot and automatically send it over to the shell. A little side-note, did you notice the activity indicator in the header bar? Yep. That's new too. File completion remains experimental, but it can be enabled via a user option. From any file you can now send the current file to the shell. If a region is selected, region information is sent also. Fancy sending a different file other than the current one? Invoke with a prefix argument, or use the dedicated command, which also operates on files (selection or region), DWIM style ;-) You may have noticed paths in section titles are no longer displayed as absolute paths. We're shortening those relative to project roots. While you can invoke with a prefix to create new shells, a dedicated (and more discoverable) new-shell command is now available. Cancelling prompt sessions is much more reliable now. If you experienced a shell getting stuck after cancelling a session, that's because we were missing part of the protocol implementation. This is now implemented. Use the new helper to automatically insert shell (i.e. bash) command output. Initial work for automatically saving markdown transcripts is now in place. We're still iterating on it, but if keen to try things out, you can enable it via a user option. Applied changes are now displayed inline. New commands can now be used to change the session mode. You can now find out what capabilities and session modes are supported by your agent. Expand either of the two sections. Tired of pressing keys hunk by hunk to accept changes from the diff buffer? Now just press a single key from the diff viewer to accept all hunks. Same goes for rejecting. We get a new basic transient menu. We got lots of awesome pull requests from wonderful folks. Thank you for your contributions! Beyond what's been showcased here, much love and effort's been poured into polishing the experience. Interested in the nitty-gritty? Have a look through the 173 commits since the last blog post. If agent-shell or acp.el are useful to you, please consider sponsoring its development. LLM tokens aren't free, and neither is the time dedicated to building this stuff ;-) Arthur Heymans: Add a Package-Requires header (PR). Elle Najt: Execute commands in devcontainer (PR). Elle Najt: Fix Write tool diff preview for new files (PR). Elle Najt: Inline display of historical changes (PR). Elle Najt: Live Markdown transcripts (PR).
Elle Najt: Prompt session mode cycling and modeline display (PR). Fritz Grabo: Devcontainer fallback workspace (PR). Guilherme Pires: Codex subscription auth (PR). Hordur Freyr Yngvason: Make qwen authentication optional (PR). Ian Davidson: Embedded context support (PR). Julian Hirn: Fix quick-diff window restoration for full-screen (PR). Ruslan Kamashev: Hide header line altogether (PR). festive-onion: Show Planning mode more reliably (PR).
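For context on the MELPA point above, a minimal installation sketch using use-package; it assumes MELPA is not yet in your package-archives, and it is not quoted from the post.

```elisp
;; Add MELPA, then install agent-shell (acp.el is also on MELPA).
(require 'package)
(add-to-list 'package-archives
             '("melpa" . "https://melpa.org/packages/") t)
(package-initialize)

(use-package agent-shell
  :ensure t)
```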

0 views
devansh 4 days ago

On AI Slop vs OSS Security

Disclosure: Certain sections of this content were grammatically refined/updated using AI assistance, as English is not my first language. Quite ironic, I know, given the subject being discussed. I have now spent almost a decade in the bug bounty industry, started out as a bug hunter (who initially used to submit reports with minimal impact, low-hanging fruits like RXSS, SQLi, CSRF, etc.), then moved on to complex chains involving OAuth, SAML, parser bugs, supply chain security issues, etc., and then became a vulnerability triager for HackerOne, where I have triaged/reviewed thousands of vulnerability submissions. I have now almost developed an instinct that tells me if a report is BS or a valid security concern just by looking at it. I have been at HackerOne for the last 5 years (Nov 2020 - Present), currently as a team lead, overseeing technical services with a focus on triage operations. One decade of working on both sides, first as a bug hunter, and then on the receiving side reviewing bug submissions, has given me a unique vantage point on how the industry is fracturing under the weight of AI-generated bug reports (sometimes valid submissions, but most of the time, the issues are just plain BS). I have seen cases where it was almost impossible to determine whether a report was a hallucination or a real finding. Even my instincts and a decade of experience failed me, and this is honestly frustrating, not so much for me, because as part of the triage team, it is not my responsibility to fix vulnerabilities, but I do sympathize with maintainers of OSS projects whose inboxes are drowning. Bug bounty platforms have already started taking this problem seriously, as more and more OSS projects are complaining about it. This is my personal writing space, so naturally, these are my personal views and observations. These views might be a byproduct of my professional experience gained at HackerOne, but in no way are they representative of my employer. I am sure HackerOne, as an organization, has its own perspectives, strategies, and positions on these issues. My analysis here just reflects my own thinking about the systemic problems I see and potential solutions(?). There are fundamental issues with how AI has infiltrated vulnerability reporting, and they mirror the social dynamics that plague any feedback system. First, the typical AI-powered reporter, especially one just pasting GPT output into a submission form, neither knows enough about the actual codebase being examined nor understands the security implications well enough to provide insight that projects need. The AI doesn't read code; it pattern-matches. It sees functions that look similar to vulnerable patterns and invents scenarios where they might be exploited, regardless of whether those scenarios are even possible in the actual implementation. Second, some actors with misaligned incentives interpret high submission volume as achievement. By flooding bug bounty programs with AI-generated reports, they feel productive and entrepreneurial. Some genuinely believe the AI has found something real. Others know it's questionable but figure they'll let the maintainers sort it out. The incentive is to submit as many reports as possible and see what sticks, because even a 5% hit rate on a hundred submissions is better than the effort of manually verifying five findings. The result? 
Daniel Stenberg, who maintains curl, now sees about 20% of all security submissions as AI-generated slop, while the rate of genuine vulnerabilities has dropped to approximately 5%. Think about that ratio. For every real vulnerability, there are now four fake ones. And every fake one consumes hours of expert time to disprove. A security report lands in your inbox. It claims there's a buffer overflow in a specific function. The report is well-formatted, includes CVE-style nomenclature, and uses appropriate technical language. As a responsible maintainer, you can't just dismiss it. You alert your security team, volunteers, by the way, who have day jobs and families and maybe three hours a week for this work. Three people read the report. One person tries to reproduce the issue using the steps provided. They can't, because the steps reference test cases that don't exist. Another person examines the source code. The function mentioned in the report doesn't exist in that form. A third person checks whether there's any similar functionality that might be vulnerable in the way described. There isn't. After an hour and a half of combined effort across three people, that's 4.5 person-hours—you've confirmed what you suspected: this report is garbage. Probably AI-generated garbage, based on the telltale signs of hallucinated function names and impossible attack vectors. You close the report. You don't get those hours back. And tomorrow, two more reports just like it will arrive. The curl project has seven people on its security team. They collaborate on every submission, with three to four members typically engaging with each report. In early July 2025, they were receiving approximately two security reports per week. The math is brutal. If you have three hours per week to contribute to an open source project you love, and a single false report consumes all of it, you've contributed nothing that week except proving someone's AI hallucinated a vulnerability. The emotional toll compounds exponentially. Stenberg describes it as "mind-numbing stupidities" that the team must process. It's not just frustration, it's the specific demoralization that comes from having your expertise and goodwill systematically exploited by people who couldn't be bothered to verify their submissions before wasting your time. According to Intel's annual open source community survey, 45% of respondents identified maintainer burnout as their top challenge. The Tidelift State of the Open Source Maintainer Survey is even more stark: 58% of maintainers have either quit their projects entirely (22%) or seriously considered quitting (36%). Why are they quitting? The top reason, cited by 54% of maintainers, is that other things in their life and work took priority over open source contributions. Over half (51%) reported losing interest in the work. And 44% explicitly identified experiencing burnout. But here's the gut punch: the percentage of maintainers who said they weren't getting paid enough to make maintenance work worthwhile rose from 32% to 38% between survey periods. These are people maintaining infrastructure that powers billions of dollars of commercial activity, and they're getting nothing. Or maybe they get $500 a year from GitHub Sponsors while companies make millions off their work. The maintenance work itself is rarely rewarding. You're not building exciting new features.
You're addressing technical debt, responding to user demands, managing security issues, and now—increasingly—sorting through AI-generated garbage to find the occasional legitimate report. It's like being a security guard who has to investigate every single alarm, knowing that 95% of them are false, but unable to ignore any because that one real threat could be catastrophic. When you're volunteering out of love in a market society, you're setting yourself up to be exploited. And the exploitation is getting worse. Toxic communities, hyper-responsibility for critical infrastructure, and now the weaponization of AI to automate the creation of work for maintainers—it all adds up to an unsustainable situation. One Kubernetes contributor put it simply: "If your maintainers are burned out, they can't be protecting the code base like they're going to need to be." This transforms maintainer wellbeing from a human resources concern into a security imperative. Burned-out maintainers miss things. They make mistakes. They eventually quit, leaving projects unmaintained or understaffed. A typical AI slop report will reference function names that don't exist in the codebase. The AI has seen similar function names in its training data and invents plausible-sounding variations. It will describe memory operations that would indeed be problematic if they existed as described, but which bear no relationship to how the code actually works. One report to curl claimed an HTTP/3 vulnerability and included fake function calls and behaviors that appeared nowhere in the actual codebase. Stenberg has publicly shared a list of AI-generated security submissions received through HackerOne, and they all follow similar patterns: professional formatting, appropriate jargon, and completely fabricated technical details. The sophistication varies. Some reports are obviously generated by someone who just pasted a repository URL into ChatGPT and asked it to find vulnerabilities. Others show more effort—the submitter may have fed actual code snippets to the AI and then submitted its analysis without verification. Both are equally useless to maintainers, but the latter takes longer to disprove because the code snippets are real even if the vulnerability analysis is hallucinated. Here's why language models fail so catastrophically at this task: they're designed to be helpful and provide positive responses. When you prompt an LLM to generate a vulnerability report, it will generate one regardless of whether a vulnerability exists. The model has no concept of truth—only of plausibility. It assembles technical terminology into patterns that resemble security reports it has seen during training, but it cannot verify whether the specific claims it's making are accurate. This is the fundamental problem: AI can generate the form of security research without the substance. While AI slop floods individual project inboxes, the broader CVE infrastructure faces its own existential crisis. And these crises compound each other in dangerous ways. In April 2025, MITRE Corporation announced that its contract to maintain the Common Vulnerabilities and Exposures program would expire. The Department of Homeland Security failed to renew the long-term contract, creating a funding lapse that affects everything: national vulnerability databases, advisories, tool vendors, and incident response operations. The National Vulnerability Database experienced catastrophic problems throughout 2024.
CVE submissions jumped 32%, creating massive processing delays. By March 2025, NVD had analyzed fewer than 300 CVEs, leaving more than 30,000 vulnerabilities backlogged. Approximately 42% of CVEs lack essential metadata like severity scores and product information. Now layer AI slop onto this already-stressed system. Invalid CVEs are being assigned at scale. A 2023 analysis by former insiders suggested that only around 20% of CVEs were valid, with the remainder being duplicates, invalid, or inflated. The issues include multiple CVEs being assigned for the same bug, CNAs siding with reporters over project developers even when there's no genuine dispute, and reporters receiving CVEs based on test cases rather than actual distinct vulnerabilities. The result is that the vulnerability tracking system everyone relies on is becoming less trustworthy exactly when we need it most. Security teams can't rely on CVE assignments to prioritize their work. Developers don't trust vulnerability scanners because false positive rates are through the roof. The signal-to-noise ratio has deteriorated so badly that the entire system risks becoming useless. Banning submitters doesn't work at scale. You can ban an account, but creating new accounts is trivial. HackerOne implements reputation scoring where points are gained or lost based on report validity, but this hasn't stemmed the tide because the cost of creating throwaway accounts is essentially zero. Asking people to "please verify before submitting" doesn't work. The incentive structure rewards volume, and people either genuinely believe their AI-generated reports are valid or don't care enough to verify. Polite requests assume good faith, but much of the slop comes from actors who have no stake in the community norms. Trying to educate submitters about how AI works doesn't scale. For every person you educate, ten new ones appear with fresh GPT accounts. The problem isn't knowledge—it's incentives. Simply closing inboxes or shutting down bug bounty programs "works" in the sense that it stops the slop, but it also stops legitimate security research. Several projects have done this, and now they're less secure because they've lost a channel for responsible disclosure. None of the easy answers work because this isn't an easy problem. Disclosure Requirements represent the first line of defense. Both curl and Django now require submitters to disclose whether AI was used in generating reports. Curl's approach is particularly direct: disclose AI usage upfront and ensure complete accuracy before submission. If AI usage is disclosed, expect extensive follow-up questions demanding proof that the bug is genuine before the team invests time in verification. This works psychologically. It forces submitters to acknowledge they're using AI, which makes them more conscious of their responsibility to verify. It also gives maintainers grounds to reject slop immediately if AI usage was undisclosed but becomes obvious during review. Django goes further with a section titled "Note for AI Tools" that directly addresses language models themselves, reiterating that the project expects no hallucinated content, no fictitious vulnerabilities, and a requirement to independently verify that reports describe reproducible security issues. Proof-of-Concept Requirements raise the bar significantly.
Requiring technical evidence such as screencasts showing reproducibility, integration or unit tests demonstrating the fault, or complete reproduction steps with logs and source code makes it much harder to submit slop. AI can generate a description of a vulnerability, but it cannot generate working exploit code for a vulnerability that doesn't exist. Requiring proof forces the submitter to actually verify their claim. If they can't reproduce it, they can't prove it, and you don't waste time investigating. Projects are choosing to make it harder to submit in order to filter out the garbage, betting that real researchers will clear the bar while slop submitters won't. Reputation and Trust Systems offer a social mechanism for filtering. Only users with a history of validated submissions get unrestricted reporting privileges or monetary bounties. New reporters could be required to have established community members vouch for them, creating a web-of-trust model. This mirrors how the world worked before bug bounty platforms commodified security research. You built reputation over time through consistent, high-quality contributions. The downside is that it makes it harder for new researchers to enter the field, and it risks creating an insider club. But the upside is that it filters out low-effort actors who won't invest in building reputation. Economic Friction fundamentally alters the incentive structure. Charge a nominal refundable fee—say $50—for each submission from new or unproven users. If the report is valid, they get the fee back plus the bounty. If it's invalid, you keep the fee. This immediately makes mass AI submission uneconomical. If someone's submitting 50 AI-generated reports hoping one sticks, that's now $2,500 at risk. But for a legitimate researcher submitting one carefully verified finding, $50 is a trivial barrier that gets refunded anyway. Some projects are considering dropping monetary rewards entirely. The logic is that if there's no money involved, there's no incentive for speculative submissions. But this risks losing legitimate researchers who rely on bounties as income. It's a scorched earth approach that solves the slop problem by eliminating the entire ecosystem. AI-Assisted Triage represents fighting fire with fire. Use AI tools trained specifically to identify AI-generated slop and flag it for immediate rejection. HackerOne's Hai Triage system embodies this approach, using AI agents to cut through noise before human analysts validate findings. The risk is obvious: what if your AI filter rejects legitimate reports? What if it's biased against certain communication styles or methodologies? You've just automated discrimination. But the counterargument is that human maintainers are already overwhelmed, and imperfect filtering is better than drowning. The key is transparency and appeals. If an AI filter rejects a report, there should be a clear mechanism for the submitter to contest the decision and get human review. Transparency and Public Accountability leverage community norms. Curl recently formalized that all submitted security reports will be made public once reviewed and deemed non-sensitive. This means that fabricated or misleading reports won't just be rejected, they'll be exposed to public scrutiny. This works as both deterrent and educational tool. If you know your slop report will be publicly documented with your name attached, you might think twice. 
And when other researchers see examples of what doesn't constitute a valid report, they learn what standards they need to meet. The downside is that public shaming can be toxic and might discourage good-faith submissions from inexperienced researchers. Projects implementing this approach need to be careful about tone and focus on the technical content rather than attacking submitters personally. Every hour spent evaluating slop reports is an hour not spent on features, documentation, or actual security improvements. And maintainers are already working for free, maintaining infrastructure that generates billions in commercial value. When 38% of maintainers cite not getting paid enough as a reason for quitting, and 97% of open source maintainers are unpaid despite massive commercial exploitation of their work, the system is already broken. AI slop is just the latest exploitation vector. It's the most visible one right now, but it's not the root cause. The root cause is that we've built a global technology infrastructure on the volunteer labor of people who get nothing in return except burnout and harassment. So what does sustainability actually look like? First, it looks like money. Real money. Not GitHub Sponsors donations that average $500 a year. Not swag and conference tickets. Actual salaries commensurate with the value being created. Companies that build products on open source infrastructure need to fund the maintainers of that infrastructure. This could happen through direct employment, foundation grants, or the Open Source Pledge model where companies commit percentages of revenue. Second, it looks like better tooling and automation that genuinely reduces workload rather than creating new forms of work. Automated dependency management, continuous security scanning integrated into development workflows, and sophisticated triage assistance that actually works. The goal is to make maintenance less time-consuming so burnout becomes less likely. Third, it looks like shared workload and team building. No single volunteer should be a single point of failure. Building teams with checks and balances where members keep each other from taking on too much creates sustainability. Finding additional contributors willing to share the burden rather than expecting heroic individual effort acknowledges that most people have limited time available for unpaid work. Fourth, it looks like culture change. Fostering empathy in interactions, starting communications with gratitude even when rejecting contributions, and publicly acknowledging the critical work maintainers perform reduces emotional toll. Demonstrating clear processes for handling security issues gives confidence rather than trying to hide problems. Fifth, it looks like advocacy and policy at organizational and governmental levels. Recognition that maintainer burnout represents an existential threat to technology infrastructure. Development of regulations requiring companies benefiting from open source to contribute resources. Establishment of security standards that account for the realities of volunteer-run projects. Without addressing these fundamentals, no amount of technical sophistication will prevent collapse. The CVE slop crisis is just the beginning. We're entering an arms race between AI-assisted attackers or abusers and AI-assisted defenders, and nobody knows how it ends. HackerOne's research indicates that 70% of security researchers now use AI tools in their workflow. AI-powered testing is becoming the industry standard.
The emergence of fully autonomous hackbots—AI systems that submitted over 560 valid reports in the first half of 2025—signals both opportunity and threat. The divergence will be between researchers who use AI as a tool to enhance genuinely skilled work and those who use it to automate low-effort spam. The former represents the promise of democratizing security research and scaling our ability to find vulnerabilities. The latter represents the threat of making the signal-to-noise problem completely unmanageable. The challenge is developing mechanisms that encourage the first group while defending against the second. This probably means moving toward more exclusive models. Invite-only programs. Dramatically higher standards for participation. Reputation systems that take years to build. New models for coordinated vulnerability disclosure that assume AI-assisted research as the baseline and require proof beyond "here's what the AI told me." It might mean the end of open bug bounty programs as we know them. Maybe that's necessary. Maybe the experiment of "anyone can submit anything" was only viable when the cost of submitting was high enough to ensure some minimum quality. Now that AI has reduced that cost to near-zero, the experiment might fail soon if things don't improve. So, net-net, here's where we are: When it comes to vulnerability reports, what matters is who submits them and whether they've actually verified their claims. Accepting reports from everyone indiscriminately is backfiring catastrophically because projects are latching onto submissions that sound plausible while ignoring the cumulative evidence that most are noise. You want to receive reports from someone who has actually verified their claims, understands the architecture of what they're reporting on, and isn't trying to game the bounty system or offload verification work onto maintainers. Such people exist, but they're becoming harder to find amidst the deluge of AI-generated content. That's why projects have to be selective about which reports they investigate and which submitters they trust. Remember: not all vulnerability reports are legitimate. Not all feedback is worthwhile. It matters who is doing the reporting and what their incentives are. The CVE slop crisis shows the fragility of open source security. Volunteer maintainers, already operating at burnout levels, face an explosion of AI-generated false reports that consume their limited time and emotional energy. The systems designed to track and manage vulnerabilities struggle under the dual burden of structural underfunding and slop inundation. The path forward requires holistic solutions combining technical filtering with fundamental changes to how we support and compensate open source labor. AI can be part of the solution through better triage, but it cannot substitute for adequate resources, reasonable workloads, and human judgment. Ultimately, the sustainability of open source security depends on recognizing that people who maintain critical infrastructure deserve more than exploitation. They deserve compensation, support, reasonable expectations, and protection from abuse. Without addressing these fundamentals, no amount of technical sophistication will prevent the slow collapse of the collaborative model that has produced so much of the digital infrastructure modern life depends on. The CVE slop crisis isn't merely about bad vulnerability reports.
It's about whether we'll choose to sustain the human foundation of technological progress, or whether we'll let it burn out under the weight of automated exploitation. That's the choice we're facing. And right now, we're choosing wrong.

0 views
Jeff Geerling 1 week ago

The Arduino Uno Q is a weird hybrid SBC

The Arduino Uno Q is... a weird board. It's the first product born out of Qualcomm's buyout of Arduino. It's as if you married an Intel CPU and a Raspberry Pi RP2040 microcontroller—oh wait, Radxa's X4 did that. Arduino even tried it before with their old Yún board, which had Linux running on a MIPS CPU, married to an ATmega microcontroller.

0 views
Farid Zakaria 1 week ago

Nix derivation madness

I’ve written a bit about Nix and I still face moments where foundational aspects of the package system confound and surprise me. Recently I hit an issue that stumped me, as it broke some basic comprehension I had of how Nix works. I wanted to produce the build and runtime graph for the Ruby interpreter. I have Ruby, but I don’t seem to have the derivation (.drv) file present on my machine. No worries, I think: I can substitute it and download it from the NixOS cache. I guess the NixOS cache doesn’t seem to have it. 🤷 This was actually perplexing me at this moment. In fact there are multiple discourse posts about it. My mental model of Nix, though, is that I must have first evaluated the derivation (drv) in order to determine the output path to even substitute. How could the NixOS cache not have it present? Is this derivation wrong somehow? Nope. This is the derivation Nix believes produced this Ruby binary, according to the database. 🤨 What does the binary cache itself say? Even the cache itself thinks this particular derivation produced this particular Ruby output. What if I try a different command? So I seem to have a completely different derivation that resulted in the same output, which is not what the binary cache announces. WTF? 🫠 Thinking back to a previous post, I remember touching on modulo fixed-output derivations. Is that what’s going on? Let’s investigate from first principles. 🤓 Let’s first create our fixed-output derivation. ☝️ Since this is a fixed-output derivation (FOD), the produced path will not be affected by changes to the derivation beyond its declared output contents. Now we will create a derivation that uses this FOD. The output path for this derivation will change on changes to the derivation, except if the derivation path for the FOD changes. This is in fact what makes it “modulo” the fixed-output derivations. Let’s test this all out by changing our derivation. Let’s do this by just adding some garbage attribute to the derivation. What happens now? The path of the derivation itself has changed, but the output path remains consistent. What about the derivation that leverages it? It also got a new derivation path, but the output path remained unchanged. 😮 That means changes to fixed-output derivations didn’t cause new outputs in either derivation, but they did create a completely new tree of files. 🤯 That means in nixpkgs, changes to fixed-output derivations can cause them to have new store paths for their .drv files but result in dependent derivations having the same output path. If the output path had already been stored in the NixOS cache, then we lose the link between the new .drv and this output path. 💥 The amount of churn that we are creating in derivations was unbeknownst to me. It can get even weirder! This example came from @ericson2314. We will duplicate the FOD to another file whose only difference is the value of the garbage. Let’s now use both of these in our derivation. We can now instantiate and build this as normal. What is weird about that? Well, let’s take the JSON representation of the derivation and remove one of the inputs. We can do this because although there are two input derivations, we know they both produce the same output! Let’s load this modified derivation back into our store and build it again! We got the same output. Not only is the mapping from derivations to output paths many-to-one, but we can also take certain derivations and completely change them by removing inputs and still get the same output! 😹 The road to Nix enlightenment is no joke and full of dragons.
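The post's code snippets were lost in syndication; here is a hedged reconstruction of the experiment using raw derivations. The file layout, names, and the garbage attribute are illustrative, not the author's originals.

```nix
# fod.nix: a fixed-output derivation plus a derivation that depends on it.
let
  # The FOD's output path is pinned by the declared hash, so editing
  # unrelated attributes (like `garbage`) moves the .drv path but
  # leaves the output path alone.
  fod = derivation {
    name = "fixed-output";
    system = builtins.currentSystem;
    builder = "/bin/sh";
    args = [ "-c" "echo hello > $out" ];
    outputHashMode = "flat";
    outputHashAlgo = "sha256";
    # sha256 of "hello\n"; on a mismatch, Nix reports the correct hash.
    outputHash = "5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03";
    garbage = "change me and watch the .drv path move";
  };
in
# The dependent derivation's output path is computed "modulo" the FOD,
# i.e. from the FOD's content hash rather than its .drv path.
derivation {
  name = "uses-fod";
  system = builtins.currentSystem;
  builder = "/bin/sh";
  args = [ "-c" "cat ${fod} > $out" ];
}
```

Running nix-instantiate on the file before and after editing the garbage attribute shows the .drv paths changing while the output paths reported by nix-build stay put, which is exactly the churn described above.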

2 views
Daniel De Laney 1 week ago

Free software scares normal people

I’m the person my friends and family come to for computer-related help. (Maybe you, gentle reader, can relate.) This experience has taught me which computing tasks are frustrating for normal people. Normal people often struggle with converting video. They will need to watch, upload, or otherwise do stuff with a video, but the format will be weird. (Weird, broadly defined, is anything that won’t play in QuickTime or upload to Facebook.) I would love to recommend Handbrake to them, but the user interface is by and for power users. Opening it makes normal people feel unpleasant feelings. This problem is rampant in free software. The FOSS world is full of powerful tools that only have a “power user” UI. As a result, people give up. Or worse: they ask people like you and me to do it for them. I want to make the case to you that you can (and should) solve this kind of problem in a single evening. Take the example of Magicbrake, a simple front end I built. It hides the power and flexibility of Handbrake. It does only the one thing most people need Handbrake for: taking a weird video file and making it normal. (Normal, for our purposes, means a small MP4 that works just about anywhere.) There is exactly one button. This is a fast and uncomplicated thing to do. Unfortunately, the people who have the ability to solve problems like this are often disinclined to do it. “Why would you make Handbrake less powerful on purpose?” “What if someone wants a different format?” “What about [feature/edge case]?” The answer to all these questions is the same: a person who needs or wants that stuff can use Handbrake. If they don’t need everything Handbrake can do and find it bewildering, they can use this. Everyone wins. It’s a bit like obscuring the less-used functions on a TV remote with tape. The functions still exist if you need them, but you’re not required to contend with them just to turn the TV on. People benefit from stuff like this, and I challenge you to make more of it. Opportunities are everywhere. The world is full of media servers normal people can’t set up. Free audio editing software that requires hours of learning to be useful for simple tasks. Network monitoring tools that seem designed to ward off the uninitiated. Great stuff normal people don’t use. All because there’s only one UI, and it’s designed to do everything. 80% of the people only need 20% of the features. Hide the rest from them and you’ll make them more productive and happy. That’s really all it takes.
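For the curious, the one-button behaviour can be approximated with HandBrake's own command-line interface; a hedged sketch using a stock preset, with illustrative file names.

```sh
# Turn a "weird" video into a normal, widely playable MP4.
# "Fast 1080p30" is one of HandBrake's built-in presets.
HandBrakeCLI --preset "Fast 1080p30" \
  -i weird-video.mkv \
  -o normal-video.mp4
```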

1 view
Preah's Website 1 week ago

Reddix: Reddit in the terminal

In the latest email from Terminal Trove, I spotted a tool called Reddix. It's a terminal user interface for reddit, where you can set up your account and browse distraction-free through a simple reddit interface using the keyboard and keyboard shortcuts. You can upvote, downvote, and even view images using the kitty graphics protocol. The setup isn't hard. If you have eget, you can run a single command to install it (see the sketch below). You can also install the latest release from GitHub. You also need kitty graphics if you want to show images and videos; on macOS you can install kitty with Homebrew. The more technical part appears to be actually signing in, but it's still not that hard. Here are the instructions on the Reddix GitHub: Create a Reddit “script” app at https://www.reddit.com/prefs/apps and set the redirect URI. Launch reddix, press m, and follow the guided menu for setup. Prefer to configure things manually? Copy the example config into place and fill in your credentials. Your "User ID" is the App ID you get after creating the script from your account, and it gives you the secret to input on Reddix as well. Then, you authorize Reddix to use your account and you're all set. It probably won't be my daily driver, but it's certainly fun to use :)
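As a sketch of the eget route mentioned above: eget fetches the right prebuilt binary from a project's GitHub releases. The owner/repo path below is a placeholder, so check the Reddix GitHub page for the real one.

```sh
# Download the latest Reddix release binary into ~/.local/bin.
eget some-owner/reddix --to ~/.local/bin
```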

0 views
xenodium 1 week ago

time-zones now on MELPA. Do I have your support?

A little over a week ago, I introduced time-zones, an Emacs utility to easily check city times around the world. Today, I'm happy to report, the package has been accepted into MELPA. It's been wonderful to see how well time-zones was received on Reddit. ✓ You asked for MELPA publishing and I delivered. ✓ You asked for DST display and I delivered. ✓ You asked for a UTC picker and I delivered. ✓ You asked for UTC offset display and I delivered. ✓ You asked for Windows support and I delivered. ✓ You asked for help and bug fixes and I delivered. Bringing features and improving our beloved text editor takes time and effort. time-zones isn't my first package; I've also published a bunch of Emacs packages. Will you help make this work sustainable?

0 views
neilzone 1 week ago

What goes on at a meeting of the Silicon Corridor Linux User Group in 2025

I found this post in my drafts, half completed. I am not really sure why I started it, but I did start it at some point earlier this year, so now I will finish it.

I am a long-time member of our local Linux user group, the curiously named Silicon Corridor Linux User Group (SCLUG) . (Its website looks much how you might expect the website of a Linux user group to look.) Given that we’ve only met in Reading for as long as I can remember, I guess that it is really the Reading And Thereabouts Linux User Group. RATLUG.

I first went to a SCLUG meeting in around 2005, when I was back in the area after university. The group had an active email list, which was the primary means of communication. We met at the seating area in the front of a Wetherspoons (urgh). I think because the food was cheap. It certainly wasn’t because it was good, or a pleasant place conducive to a good chat, given how loud and crowded it was. But it was fun , and it was enjoyable to chat with people developing, supporting, and using Linux (and BSD etc.). Meetings were well attended, and we often struggled for space.

I stopped going for quite a few years, both because I really wasn’t a fan of Wetherspoons, and also life got in the way. I started to go again just before the first Covid lockdown. It was still in Wetherspoons, but oh well. I think that I managed one meeting before everything was shut down.

We moved online during the Covid lockdowns, using jitsi as a platform for chatting. I rather enjoyed it. I particularly liked the convenience of being able to join from home, rather than travel all the way to Reading for a couple of hours. But it was not a success from a numbers point of view, and while I liked the idea of people proposing mini-talks (as I like the idea of using the LUG as a place to learn things), that did not catch on.

So now we are in 2025, and SCLUG keeps going. Times have changed, though. The mailing list is almost silent; we have a Signal group instead, but there is relatively little chat between meetings.

We still meet in person, once a month, of a Wednesday evening. We have, finally, moved from Wetherspoons to another pub, thank goodness. The fact that meetings were in Wetherspoons was a significant factor in me not bothering to go, so I was keen to encourage a move to somewhere… better. At the moment, we meet in the covered garden area of The Nag’s Head and, in the warmer and lighter months, it is quite pleasant. We’ve acknowledged that this is not going to be viable for much longer because of the weather, and the pub itself is small and noisy, so I suspect that we are back to looking for another venue.

It is not a big group. I reckon that, on average, there are probably six or seven of us at most meetings. Visitors / drop-ins are very welcome; the Signal group is a good way of finding us, else look for the penguin on the table if I remember to bring it.

“Meetings” sounds a bit formal, since it is just us sitting and chatting. There is no formality to it at all, really; turn up, have a chat, and leave whenever. I tend to be there a bit earlier than the times on the website, and leave not too late in the evening.

The conversation tends to be of a technical bent, although not just Linux by any means. Self-hosting comes up a fair amount, as do people’s experiments with new devices and technologies, and chats about tech and society and politics etc. While I doubt that anyone who didn’t have an interest in such things would enjoy it, there’s certainly no expectation of knowledge, experience, or expertise, nor any elitism or snobbery.

I can’t say that I learn a huge amount - for me, it is definitely more social than educational. Even with a small number of people, I have to have enough social spoons left to persuade myself to go into Reading of a Wednesday evening for a chat. We have not done anything like PGP key signing, or helping people install Linux, or anything similar, for as long as I can remember.

Is there still a point to a LUG in 2025? Yes, I think so. There are, of course, so many online places where one can go to chat about Linux, and to seek support, that an in-person group is not needed for this. To me, SCLUG is really now a social thing. A pleasant and laid back evening, once a month, to chat with people with complementary interests. It strikes me as one of those things that will continue for as long as there are people willing and able to turn up and chat. Perhaps that will wane at some point…

0 views
André Arko 1 week ago

We want to move Ruby forward

On September 9, without warning, Ruby Central kicked out the maintainers who have cared for Bundler and RubyGems for over a decade. Ruby Central made these changes against the established project policies , while ignoring all objections from the maintainers’ team . At the time, Ruby Central claimed these changes were “temporary”. However:

- None of the “temporary” changes made by Ruby Central have been undone, more than six weeks later.
- Ruby Central still has not communicated with the removed maintainers about restoring any permissions.
- Ruby Central still has not offered “operator agreements” or “contributor agreements” to any of the removed maintainers.
- The Ruby Together merger agreement plainly states that it is the maintainers who will decide what is best for their projects, not Ruby Central.

Last week, Matz stepped in to assume control of RubyGems and Bundler himself. His announcement states that the Ruby core team will assume control and responsibility for the primary RubyGems and Bundler GitHub repository. Ruby Central did not communicate with any removed maintainers before transferring control of the rubygems/rubygems GitHub repo to the Ruby core team. On October 24th, Shan publicly confirmed she does not believe the maintainers need to be told why they were removed .

While we know that Ruby Central had no right to act the way they did, it is nevertheless clear to us that the Ruby community will be better off if the codebase, maintenance, and legal rights to RubyGems and Bundler are all together in the same place. To bring this about, we are prepared to transfer our interests in RubyGems and Bundler to Matz , end the dispute over the GitHub enterprise account, 2 GitHub organizations, and 70 repositories, and hand over all rights in the Bundler logo and Bundler name, including the trademark applications in the US, EU, and Japan.

Once we have entered into a legal agreement to settle any legal claims with Ruby Central and transfer all rights to Matz, the former maintainers will step back entirely from the RubyGems and Bundler projects, leaving them fully and completely to Matz, and by extension to the entire Ruby community.

Although Ruby Central’s actions were not legitimate, our commitment to the Ruby community remains strong. We’re choosing to focus our energy on projects to improve Ruby for everyone, including rv , Ruby Butler , jim , and gem.coop .

Signed,
The former maintainers: André , David , Ellen , Josef , Martin , and Samuel

0 views
neilzone 2 weeks ago

Is now the best time ever for Linux laptops?

As I’ve said, ad nauseam probably, I like my secondhand ThinkPads. But I’m not immune to the charms of other machines and, as far as I can tell, now is an amazing time for Linux laptops. By which I mean companies selling laptops with Linux pre-installed, or with no OS pre-installed, or aimed at Linux users. Yes, it’s a bit subjective.

There seems to be quite a range of machines, at quite a range of prices, with quite a range of Linux and other non-Windows/macOS operating systems available. This isn’t meant to be a comprehensive list, just some thoughts on a few of them that have crossed my timeline recently. All have points that I really like but, right now at least, if my current ThinkPad died, I’d probably just buy another eBay ThinkPad…

Update 2025-10-25: This is a list, not recommendations, but personally I won’t be buying a Framework machine: “Framework flame war erupts over support of politically polarizing Linux projects”

I love the idea of the Framework laptops , which a user can repair and upgrade with ease. Moving away from “disposable” IT, into well-built systems which can be updated in line with user needs, and readily repaired, is fantastic. Plus, they have physical switches to disconnect microphone and camera, which I like. I’ve seen more people posting about Framework machines than I have about pretty much all of the others here put together, so my guess is that these are some of the more popular Linux-first machines at the moment. I know a few people who have, or had, one of these. Most seem quite happy. One… not so much. But the fact that multiple people I know have them means, perhaps, sooner rather than later, I’ll get my hands on one temporarily, to see what it is like.

I only heard about Malibal while seeing if there was anything obvious that I’d missed from this post. Their machines appear to start at $4197, based on what they displayed when I clicked on the link to Linux machines, which felt noteworthy. And some of the stuff on their website seems surprising. Update 2025-10-25: The link about their reasons for not shipping to Colorado no longer works, nor is it available via archive.org (“This URL has been excluded from the Wayback Machine.”). Again, this is a list, not recommendations, but this thread on Reddit does not make for good reading.

I’m slipping this in because I have a soft spot for Leah’s Minifree range of machines even though, strictly, they are not “Linux-first” laptops, but rather Libreboot machines, which can come with a Linux installation. I massively admire what Leah is doing here, both in terms of funding their software development work, and also helping reduce electronic waste through revitalising used equipment. Of all the machines and companies in this blog post, Minifree’s are, I think, the ones which tempt me the most.

I think the MNT Pocket Reform is a beautiful device, in a sort-of-quirky kind of way. In my head, these are hand-crafted, artisan laptops. Could I see myself using it every day? Honestly, no. The keyboard would concern me, and I am not sure I see the attraction of a trackball. (I’d happily try one though!) But I love the idea of a 7" laptop, and this, for me, is one of its key selling points. If I saw one in person, could I be tempted? Perhaps…

The Pinebook Pro is a cheap ARM laptop. I had one of these, and it has gone to someone who could make better use of it than I could. Even its low price - I paid about £150 for it, I think, because it was sold as “broken” (which it was not) - could not really make up for the fact that I found it underpowered for my needs. This is probably a “me” thing, and perhaps my expectations were simply misaligned. The Pine64 store certainly hints in this direction: “Please do not order the Pinebook Pro if you’re seeking a substitute for your X86 laptop, or are just curious”

Purism makes a laptop, a tablet, and a mini desktop PC . I love their hardware kill switches for camera and microphone. A camera cover is all well and good, but I’d really like to have a way of physically disconnecting the microphone on my machines. Again, I don’t think I know anyone who has one.

Were it not for a friend of mine, I wouldn’t even be aware of Slimbook. Matija, who wrote up his experiences setting up a Slimbook Pro X 14 , is the only person I’ve seen mention them. But there they are, with a range of Linux-centric laptops , at a range of prices.

I could be tempted by a Linux-first tablet, and StarLabs’ StarLite looks much the best of the bunch… But, at £540 + VAT, or thereabouts, with a keyboard, it is far from cheap for something that I don’t think would replace my actual laptop.

I’m aware of System 76 , but I’m not sure I know anyone who has one of their machines.

As with System 76, I’m aware of Tuxedo , which certainly appears to have an impressive range of machines. But I don’t think I’ve heard or seen of anyone using one.

0 views
fLaMEd fury 2 weeks ago

Disable AI In Firefox

What’s going on, Internet? To the outrage of the Firefox community across the web, Mozilla has started rolling out AI across our beloved browser, with the features enabled by default.

I’ve found the new Firefox “AI” features, like the pop-ups that appear when highlighting text, to be more distracting than useful. The sidebar chat isn’t something I need either; if I want that experience, I’ll just open ChatGPT in a containerised tab.

If you’d like to turn these features off, open in the Firefox address bar, search for , set it to false, and that should disable everything. If you’d rather try some features while disabling others, keep set to true and toggle each feature individually.

I’m giving Smart Tab Groups a try for now, as I’m curious to see how the “AI” handles organising my dozens of open tabs. I’ll let you know how that goes.

Below is a list of the “AI” features you can disable in , along with a short explanation of what I understand each one does. Enjoy.
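For reference, the same switches can also be flipped from a user.js file in your Firefox profile. A sketch follows, with a loud caveat: the pref names below are commonly cited but may differ across Firefox versions, so treat them as assumptions and search about:config for “browser.ml” to confirm what your build actually ships:

```js
// user.js - place in your Firefox profile folder (or set the same
// prefs by hand in about:config). Pref names are assumptions that
// may vary by Firefox version; search about:config for "browser.ml".

// Master switch for the on-device "AI" features.
user_pref("browser.ml.enable", false);

// Or leave the master switch on and disable features individually:
user_pref("browser.ml.chat.enabled", false);        // AI chatbot sidebar
user_pref("browser.ml.linkPreview.enabled", false); // AI link previews
```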

0 views