Latest Posts (20 found)

What goes on at a meeting of the Silicon Corridor Linux User Group in 2025

I found this post in my drafts, half completed. I am not really sure why I started it, but I did start it, some point earlier this year, so now I will finish it.

I am a long time member of our local Linux user group, the curiously named Silicon Corridor Linux User Group (SCLUG). (Its website looks much how you might expect the website of a Linux user group to look.) Given that we’ve only met in Reading for as long as I can remember, I guess that it is really the Reading And Thereabouts Linux User Group. RATLUG.

I first went to a SCLUG meeting in around 2005, when I was back in the area after university. The group had an active email list, which was the primary means of communication. We met at the seating area in the front of a Wetherspoons (urgh). I think because the food was cheap. It certainly wasn’t because it was good. Or a pleasant place conducive to a good chat, given how loud and crowded it was. But it was fun, and it was enjoyable to chat with people developing, supporting, and using Linux (and BSD etc.). Meetings were well attended, and we often struggled for space.

I stopped going for quite a few years, both because I really wasn’t a fan of Wetherspoons, and also because life got in the way. I started to go again just before the first Covid lockdown. It was still in Wetherspoons, but oh well. I think that I managed one meeting before everything was shut down.

We moved online during the Covid lockdowns, using Jitsi as a platform for chatting. I rather enjoyed it. I particularly liked the convenience of being able to join from home, rather than travel all the way to Reading for a couple of hours. But it was not a success from a numbers point of view, and while I liked the idea of people proposing mini-talks (as I like the idea of using the LUG as a place to learn things), that did not catch on.

So now we are in 2025, and SCLUG keeps going. Times have changed, though. The mailing list is almost silent; we have a Signal group instead, but there is relatively little chat between meetings. We still meet in person, once a month, of a Wednesday evening. We have, finally, moved from Wetherspoons to another pub, thank goodness. The fact that meetings were in Wetherspoons was a significant factor in me not bothering to go, so I was keen to encourage a move to somewhere… better. At the moment, we meet in the covered garden area of The Nag’s Head and, in the warmer and lighter months, it is quite pleasant. We’ve acknowledged that this is not going to be viable for much longer because of the weather, and the pub itself is small and noisy, so I suspect that we are back to looking for another venue.

It is not a big group. I reckon that, on average, there are probably six or seven of us at most meetings. Visitors / drop-ins are very welcome; the Signal group is a good way of finding us, else look for the penguin on the table if I remember to bring it.

“Meetings” sounds a bit formal, since it is just us sitting and chatting. There is no formality to it at all, really; turn up, have a chat, and leave whenever. I tend to be there a bit earlier than the times on the website, and leave not too late in the evening. The conversation tends to be of a technical bent, although not just Linux by any means. Self-hosting comes up a fair amount, as do people’s experiments with new devices and technologies, and chats about tech and society and politics etc.

While I doubt that anyone who didn’t have an interest in such things would enjoy it, there’s certainly no expectation of knowledge/experience/expertise, nor any elitism or snobbery. I can’t say that I learn a huge amount - for me, it is definitely more social than educational. Even with a small number of people, I have to have enough social spoons left to persuade myself to go into Reading of a Wednesday evening for a chat. We have not done anything like PGP key signing, or helping people install Linux, or anything similar, for as long as I can remember.

Is it still worth it? Yes, I think so. There are, of course, so many online places where one can go to chat about Linux, and to seek support, that an in-person group is not needed for this. To me, SCLUG is really now a social thing. A pleasant and laid back evening, once a month, to chat with people with complementary interests. It strikes me as one of those things that will continue for as long as there are people willing and able to turn up and chat. Perhaps that will wane at some point…

0 views
iDiallo Today

Is RSS Still Relevant?

I'd like to believe that RSS is still relevant and remains one of the most important technologies we've created. The moment I built this blog, I made sure my feed was working properly. Back in 2013, the web was already starting to move away from RSS. Every few months, an article would go viral declaring that RSS was dying or dead. Fast forward to 2025, those articles are nonexistent, and most people don't even know what RSS is.

One of the main advantages of an RSS feed is that it allows me to read news and articles without worrying about an algorithm controlling how I discover them. I have a list of blogs I'm subscribed to, and I consume their content chronologically. When someone writes an article I'm not interested in, I can simply skip it. I don't need to train an AI to detect and understand the type of content I don't like. Who knows, the author might write something similar in the future that I do enjoy. I reserve that agency to judge for myself.

The fact that RSS links aren't prominently featured on blogs anymore isn't really a problem for me. I have the necessary tools to find them and subscribe on my own. In general, people who care about RSS are already aware of how to subscribe.

Since I have this blog and have been posting regularly this year, I can actually look at my server logs and see who's checking my feed. From January 1st to September 1st, 2025, there were a total of 537,541 requests to my RSS feed. RSS readers often check websites at timed intervals to detect when a new article is published. Some are very aggressive and check every 10 minutes throughout the day, while others have somehow figured out my publishing schedule and only check a couple of times daily.

RSS readers, or feed parsers, don't always identify themselves. The most annoying name I've seen is just , probably a Node.js script running on someone's local machine. However, I do see other prominent readers like Feedly, NewsBlur, and Inoreader. Here's what they look like in my logs:

There are two types of readers: those from cloud services like Feedly that have consistent IP addresses you can track over time, and those running on user devices. I can identify the latter as user devices because users often click on links and visit my blog with the same IP address.

So far throughout the year, I've seen 1,225 unique reader names. It's hard to confirm if they're truly unique since some are the same application with different versions. For example, Tiny Tiny RSS has accessed the website with 14 different versions, from version 22.08 to 25.10. I've written a script to extract as many identifiable readers as possible while ignoring the generic ones that just use common browser user agents. Here's the list of RSS readers and feed parsers that have accessed my blog: Raw list of RSS user agents here

RSS might be irrelevant on social media, but that doesn't really matter. The technology is simple enough that anyone who cares can implement it on their platform. It's just a fancy XML file. It comes installed and enabled by default on several blogging platforms. It doesn't have to be the de facto standard on the web, just a good way for people who are aware of it to share articles without being at the mercy of dominant platforms.
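The post doesn't include the script itself, but a minimal sketch of the idea might look like this. It assumes a combined-format access log and a feed path of /rss.xml, neither of which is confirmed by the post:

```python
# Sketch: count RSS reader user agents hitting a feed, skipping
# generic browser-looking agents. Log format and path are assumptions.
import re
from collections import Counter

LOG_LINE = re.compile(
    r'"(?:GET|HEAD) (?P<path>\S+)[^"]*" \d{3} \S+ "[^"]*" "(?P<agent>[^"]*)"'
)
GENERIC = re.compile(r"Mozilla|Chrome|Safari", re.I)  # plain-browser agents

agents = Counter()
with open("access.log") as log:
    for line in log:
        m = LOG_LINE.search(line)
        if m and m.group("path").startswith("/rss.xml"):
            agent = m.group("agent")
            if agent and not GENERIC.search(agent):
                agents[agent] += 1

for agent, hits in agents.most_common(25):
    print(f"{hits:8d}  {agent}")
```

Deduplicating versioned agents (like the 14 versions of Tiny Tiny RSS mentioned above) would need an extra normalization pass, e.g. stripping trailing version numbers before counting.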

0 views
André Arko Yesterday

We want to move Ruby forward

On September 9, without warning, Ruby Central kicked out the maintainers who have cared for Bundler and RubyGems for over a decade. Ruby Central made these changes against the established project policies, while ignoring all objections from the maintainers’ team. At the time, Ruby Central claimed these changes were “temporary". However:

- None of the “temporary” changes made by Ruby Central have been undone, more than six weeks later.
- Ruby Central still has not communicated with the removed maintainers about restoring any permissions.
- Ruby Central still has not offered “operator agreements” or “contributor agreements” to any of the removed maintainers.
- The Ruby Together merger agreement plainly states that it is the maintainers who will decide what is best for their projects, not Ruby Central.
- Last week, Matz stepped in to assume control of RubyGems and Bundler himself. His announcement states that the Ruby core team will assume control and responsibility for the primary RubyGems and Bundler GitHub repository.
- Ruby Central did not communicate with any removed maintainers before transferring control of the rubygems/rubygems GitHub repo to the Ruby core team.
- On October 24th, Shan publicly confirmed she does not believe the maintainers need to be told why they were removed.

While we know that Ruby Central had no right to act the way they did, it is nevertheless clear to us that the Ruby community will be better off if the codebase, maintenance, and legal rights to RubyGems and Bundler are all together in the same place. To bring this about, we are prepared to transfer our interests in RubyGems and Bundler to Matz, end the dispute over the GitHub enterprise account, 2 GitHub organizations, and 70 repositories, and hand over all rights in the Bundler logo and Bundler name, including the trademark applications in the US, EU, and Japan.

Once we have entered into a legal agreement to settle any legal claims with Ruby Central and transfer all rights to Matz, the former maintainers will step back entirely from the RubyGems and Bundler projects, leaving them fully and completely to Matz, and by extension to the entire Ruby community.

Although Ruby Central’s actions were not legitimate, our commitment to the Ruby community remains strong. We’re choosing to focus our energy on projects to improve Ruby for everyone, including rv, Ruby Butler, jim, and gem.coop.

Signed, The former maintainers: André, David, Ellen, Josef, Martin, and Samuel

0 views
Dayvster Yesterday

AI’s Trap: Settling for Boilerplate Over Elegant Code

We are all familiar with Picasso's "The Bull" series, in which he progressively simplifies the image of a bull down to its most basic, yet still recognizable form. Steve Jobs was famously inspired by this concept, leading him to advocate for simplicity and elegance in design and technology above countless features and excessive complexity. Distill a concept even as complex as software or UX down to its essence, and what you are left with is something beautiful and elegant that fulfills its purpose with minimal fuss.

## So Why Do We Accept Ugly Code Then?

I've noticed a worrying trend in programming: as the tools around a programming language improve and automate more of our work, we increase our tolerance for boilerplate, repetitive, and frankly, ugly code. We accept it and we tell ourselves it's OK: the linter will fix it, the formatter will fix it, the compiler will optimize it, and in the end it's all ones and zeroes anyway, right? Why would any of this matter?

Since AI has entered the equation, tolerance for boilerplate and excuses for ugly code have only increased. We tell ourselves that as long as the AI can generate the code for us, it doesn't matter if it's elegant or not. After all, we didn't write it, the AI did. So why should we care about the quality of the code? After all, we relinquished ownership of the code the moment we asked the AI to generate it for us.

Now, you may not be someone who uses AI to generate code, and kudos to you, welcome to the club. But even as someone who noticed relatively early that AI does not produce something I could proudly and confidently sign off on as my own work, I have used AI for some of the more tedious tasks, such as taking a JSON response from an API and asking Copilot, ChatGPT, or Grok to generate a type definition in the language I am currently working with.

I work on many personal and professional projects, and I encounter different types of people and teams; some embrace AI, others shun it. I have noticed, however, that teams and projects where AI is embraced, encouraged, or even mandated as part of the development process tend to produce a lot of boilerplate and a lot of very ugly, inelegant code that few wish to take real ownership of. It is not their creation: they handed ownership of it to the AI, and therefore abandoned the developer-code relationship that is so essential to producing quality software.

## Developers Should Love Their Code

I harp on this a lot: development is a one-of-a-kind blend of engineering and creative expression, and those two aspects of our craft should not be at odds with each other, but should rather complement each other. When you write code, you should love it, you should be proud of it, you should want to show it off to others who can understand and appreciate it; it is, in essence, an expression of the way you think and how you tackle problems.

I've briefly touched upon the fact that handing off ownership of your code to AI means abandoning that relationship between you, the developer, and your code. I want to expand on that a bit more. If you don't care about the code you write, if you don't love what you are doing or the creative process of solving a specific problem, you will forever lack understanding of that specific problem. It will get solved for you by the AI, and you will rob yourself of the opportunity to learn and understand that problem on a deeper level.
This is something that has kicked me in the ass a fair few times in the past. I'd get a problem to solve, think "oh, this is easy", draft up an initial solution that solved the problem at a very superficial level, and then in the coming weeks get 10 QA tickets filed against it, because there is more to the problem than meets the eye, and there are often things that you miss or do not even consider during your first implementation.

AI will do exactly the same thing every single time. The difference, though, is that every time I was given a problem to solve, it resulted in fewer and fewer QA tickets, because I understood and learnt from my past experiences and mistakes and knew how to approach problem-solving more effectively. AI will not do any of that; it will always solve the problem given to it at face value, without any deeper understanding or context. It will not learn from past mistakes, grow, or adapt its mindset to shift its approach based on new information or insights. It will always solve the problem, and worst of all, it will never learn to understand the problem so well that it can simplify the solution down to its essence and produce something elegant.

There is a certain beauty to solving a problem and all of its potential side effects in a very minimal and easily readable way. That is an ideal that I strive for in my own work, and I encourage you to do the same. Love your code, be proud of it, and strive for elegance over boilerplate.

## The Cost of Boilerplate

Boilerplate code is not just an eyesore; it comes with real costs. It increases the cognitive load on developers, makes the project less enjoyable to work on, increases the time to onboard new team members, and increases the time to fix or resolve issues, which directly impacts the experience of the end user of your product. As a third-order effect, it also increases the soft requirement for more tooling and more AI assistance to manage and maintain your codebase, which is a dangerous spiral best avoided altogether.

A question I've been asking myself is how much of this is by happenstance and how much of this is by design. I don't want to get all conspiracy theorist on you, but it does make me wonder if there are vested interests in making us accept boilerplate and ugly code as the norm, because it increases the demand for AI tools and services that can help us manage and maintain our codebases. I'm not confident enough to attach any weight to this thought, but it's something worth pondering.

## Aesthetic Responsibility

It's very easy for us as developers to simply dismiss aesthetics as something superficial and unimportant in the grand scheme of things. After all, we are not building art, we are building software that solves problems and delivers value to users. However, I would argue that aesthetics play a crucial role in the quality and maintainability of our code. When we write code that is elegant and beautiful, we are not just making it easier for ourselves to read and understand, we are also making it easier for others. We are creating a shared language and a shared understanding of the problem we are solving and the solution we are implementing.

When we write ugly, boilerplate code, we are creating barriers and obstacles for ourselves and others. We are making it harder to read and understand, harder to maintain and evolve, and harder to collaborate and share knowledge. We're increasing the friction instead of reducing it.
It always feels nice as a human to look at something beautiful, whether it's a piece of art, a well-designed product, a sports car, a building with stunning architecture, and so on. But when it comes to code, we often dismiss aesthetics as unimportant; after all, it's a means to an end, right? I would argue that as developers, we have an aesthetic responsibility to ourselves and to others to write code that is elegant and beautiful, not just for the sake of aesthetics, but for the sake of quality and maintainability. But it is very easy lately to just let the tools we have been given and AI do the heavy lifting for us, and in doing so, we risk losing sight of our aesthetic responsibility as developers.

## How Does AI Increase the Tolerance for Boilerplate?

You know the drill: you are working away on a piece of a project, and you're on a tight deadline with a backlog full of features to implement and bugs to fix. You think to yourself, "I just need to get this done quickly, I can refactor it later" (a mindset I generally encourage). So you ask the AI to generate the code for you, and it spits out a solution that does not work, so you refine your prompt and try again, and again, and again until you get something that works. Now you have a working solution, great! One quick glance at your `git status`: **HOLY SH.. WHY ARE THERE 32 FILES CHANGED? AND WHY ARE MOST OF THEM NEW FILES?**

OK, OK, you have a working solution. It's best to open up a PR and let the Copilot PR bot handle the code review, and maybe a couple of co-workers will spot some things and suggest improvements. Once you have more time on your hands, you will absolutely go back and refactor this mess down to something elegant and beautiful. You just need to get this next task done first, oh, and it's on a tight deadline as well...

Before you know it, you have a codebase full of boilerplate and code that could easily be done in half as many lines or fewer. You will never go back and refactor it, because there is always something more urgent to do, and the cycle continues. Maintenance? Not a problem, we can just use AI to generate documentation or explain code blocks for us. Testing? AI can generate tests for us, no need to think about edge cases or test coverage. Performance? AI can optimize our code for us, no need to understand the underlying algorithms or data structures.

It creeps up on you slowly, but surely. AI stands as an excuse to not care about the quality of your code, but only the quality or functionality of your outcome. There's a lot to be said and discussed about this topic. You might be rightfully asking yourself: does it even matter if the code is ugly and I feel no pride in it, as long as the end user gets the functionality they need?

## "Good Enough" Is Not Good Enough

I always liked the saying, "How you do anything is how you do everything." It has multiple different explanations and interpretations, but to me, it boils down to this: if you accept mediocrity in one aspect of your life, you will accept mediocrity in all aspects of your life. Or: how you do the little things is how you do the big things. If you accept "good enough" code, you will eventually have a good enough product on your hands that might have to compete with products that were built with care and pride. Do you want to deliver good enough products, or do you want to deliver great products that you can be proud of?
Completely ignoring the market and financial viability of this approach, will you be happy with the work you do if mediocrity is the standard you hold yourself to? Or worse, if mediocrity is the modus operandi of your team or company? If you truly believe that "good enough" is good enough, then by all means continue down that path. But I urge you to test that belief, challenge it: start a project that you may never finish or get paid for, but attempt to take a complex concept and distill it down to its essence in the most elegant way possible. Do what Picasso did with his bull series, and see how far you can push yourself to create something beautiful out of something complex while still maintaining its core functionality and purpose.

## Conclusion

AI is a powerful tool that can help us be more productive and efficient, but it should not be used as an excuse to accept boilerplate and ugly code. As developers, we should strive for elegance and simplicity in our code, and we should take pride in the work we do. We should love our code and our craft, and we should never settle for "good enough" when it comes to the quality of our work.

**Write code you would sign your name under.**

If you've enjoyed this article and made it this far, thank you sincerely for your time. I hope it was worth it and that it sparked some thoughts and reflections on your own approach to coding and craftsmanship. If you have any thoughts or feedback on this article, please feel free to reach out to me on [Twitter](https://twitter.com/dayvsterdev). I'm always open to discussions and feedback, or just general chit chat about just about anything I find interesting.

0 views

LaTeX, LLMs and Boring Technology

Depending on your particular use case, choosing boring technology is often a good idea. Recently, I've been thinking more and more about how the rise and increase in power of LLMs affects this choice. By definition, boring technology has been around for a long time. Piles of content have been written and produced about it: tutorials, books, videos, reference manuals, examples, blog posts and so on. All of this is consumed during the LLM training process, making LLMs better and better at reasoning about such technology. Conversely, "shiny technology" is new, and has much less material available. As a result, LLMs won't be as familiar with it.

This applies to many domains, but one specific example for me personally is in the context of LaTeX. LaTeX certainly fits the "boring technology" bill. It's decades old, and has been the mainstay of academic writing since the 1980s. When I used it for the first time in 2002 (for a project report in my university AI class), it was already very old. But people keep working on it and fixing issues; it's easy to install and its wealth of capabilities and community size are staggering. Moreover, people keep working with it, producing more and more content and examples the LLMs can ingest and learn from.

I keep hearing about the advantages of new and shiny systems like Typst. However, with the help of LLMs, almost none of the advantages seem meaningful to me. LLMs are great at LaTeX and help a lot with learning or remembering the syntax, finding the right packages, deciphering errors and even generating tedious parts like tables and charts, significantly reducing the need for scripting [1]. You can use LLMs either as standalone or fully integrated into your LaTeX environment; Overleaf has a built-in AI helper, and for local editing you can use VSCode plugins or other tools. I'm personally content with TeXstudio and use LLMs as standalone help, but YMMV. Some examples of where LLMs help:

- Finding the right math symbols. I rarely need to scan reference materials any longer. LLMs will easily answer questions like "what's that squiggly Greek letter used in math, and its latex symbol?" or "write the latex for Green's theorem, integral form". For the trickiest / largest equations, LLMs are very good at "here's a picture I took of my equation, give me its latex code" these days [2].
- Deciphering errors: "Here's a piece of code and the LaTeX error I'm getting on it; what's wrong?" This is made more ergonomic by editor integrations, but I personally find that LaTeX's error message problem is hugely overblown. 95% of the errors are reasonably clear, and serious sleuthing is only rarely required in practice. In that minority of cases, pasting some code and the error into a standalone LLM isn't a serious time drain.
- Generating TikZ diagrams and plots. For this, the hardest part is getting started and finding the right element names, and so on. It's very useful to just ask an LLM to emit something initial and then tweak it manually later, as needed. You can also ask the LLM to explain each thing it emits in detail - this is a great learning tool for deeper understanding. Recently I had luck going "meta" with this: when the diagram has repetitive elements, I may ask the LLM to "write a Python program that generates a TikZ diagram ...", and it works well.
- Generating and populating tables, and converting them from other data formats or screenshots.
- Help with formatting and typesetting (how do I change margins to XXX and spacing to YYY).

When it comes to scripting, I generally prefer sticking to real programming languages anyway. If there's anything non-trivial to auto-generate I wouldn't use a LaTeX macro, but would write a Python program to generate whatever I need and embed it into the document with something like \input{} (see the sketch below). Typst's scripting system may be marketed as "clean and powerful", but why learn yet another scripting language? And ignoring LaTeX's equation notation and doing their own thing is one of the biggest mistakes Typst makes, in my opinion. LaTeX's notation may not be perfect, but it's near universal at this point with support in almost all math-aware tools. Typst's math mode is a clear sign of the second system effect, and isn't even stable.

There are many examples where boring technology and LLMs go well together. The main criticism of boring technology is typically that it's "too big, full of cruft, difficult to understand". LLMs really help cutting through the learning curve though, and all that "cruft" is very likely to become useful some time in the future when you graduate from the basic use cases. To be clear: Typst looks really cool, and kudos to the team behind it! All I'm saying in this post is that for me - personally - the choice for now is to stick with LaTeX as a "boring technology".
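As a concrete illustration of the Python-plus-\input{} workflow described above, a minimal sketch might look like this. The file names, data, and use of the booktabs package are my own example, not from the post:

```python
# generate_table.py: emit a LaTeX table to be pulled in with \input{}.
# Assumes the document loads the booktabs package for \toprule etc.

rows = [("alpha", 0.91), ("beta", 0.47), ("gamma", 0.78)]

with open("results-table.tex", "w") as f:
    f.write("\\begin{tabular}{lr}\n\\toprule\nName & Score \\\\\n\\midrule\n")
    for name, score in rows:
        f.write(f"{name} & {score:.2f} \\\\\n")
    f.write("\\bottomrule\n\\end{tabular}\n")
```

The document then includes the generated file with \input{results-table.tex}, and regenerating the table is just a matter of re-running the script.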

0 views
neilzone Yesterday

Is now the best time ever for Linux laptops?

As I’ve said, ad nauseam probably, I like my secondhand ThinkPads. But I’m not immune to the charms of other machines and, as far as I can tell, now is an amazing time for Linux laptops. By which I mean, companies selling laptops with Linux pre-installed or no OS preinstalled, or aimed at Linux users. Yes, it’s a bit subjective.

There seems to be quite a range of machines, at quite a range of prices, with quite a range of Linux and other non-Windows/macOS operating systems available. This isn’t meant to be a comprehensive list, but just some thoughts on a few of them that have crossed my timeline recently. All have points that I really like but, right now at least, if my current ThinkPad died, I’d probably just buy another eBay ThinkPad…

Update 2025-10-25: This is a list, not recommendations, but personally I won’t be buying a Framework machine: “Framework flame war erupts over support of politically polarizing Linux projects”

I love the idea of the Framework laptops, which a user can repair and upgrade with ease. Moving away from “disposable” IT, into well-built systems which can be updated in line with user needs, and readily repaired, is fantastic. Plus, they have physical switches to disconnect microphone and camera, which I like. I’ve seen more people posting about Framework machines than I have about pretty much all of the others here put together, so my guess is that these are some of the more popular Linux-first machines at the moment. I know a few people who have, or had, one of these. Most seem quite happy. One… not so much. But the fact that multiple people I know have them means, perhaps, sooner rather than later, I’ll get my hands on one temporarily, to see what it is like.

I only heard about Malibal while seeing if there was anything obvious that I’d missed from this post. Their machines appear to start at $4197, based on what they displayed when I clicked on the link to Linux machines, which felt noteworthy. And some of the stuff on their website seems surprising. Update 2025-10-25: The link about their reasons for not shipping to Colorado no longer works, nor is it available via archive.org (“This URL has been excluded from the Wayback Machine.”). Again, this is a list, not recommendations, but this thread on Reddit does not make for good reading.

I’m slipping this in because I have a soft spot for Leah’s Minifree range of machines even though, strictly, they are not “Linux-first” laptops, but rather Libreboot machines, which can come with a Linux installation. I massively admire what Leah is doing here, both in terms of funding their software development work, and also helping reduce electronic waste through revitalising used equipment. Of all the machines and companies in this blog post, Minifree’s are, I think, the ones which tempt me the most.

I think the MNT Pocket Reform is a beautiful device, in a sort-of-quirky kind of way. In my head, these are hand-crafted, artisan laptops. Could I see myself using it every day? Honestly, no. The keyboard would concern me, and I am not sure I see the attraction of a trackball. (I’d happily try one though!) But I love the idea of a 7" laptop, and this, for me, is one of its key selling points. If I saw one in person, could I be tempted? Perhaps…

The Pinebook Pro is a cheap ARM laptop. I had one of these, and it has gone to someone who could make better use of it than I could. Even its low price - I paid about £150 for it, I think, because it was sold as “broken” (which it was not) - could not really make up for the fact that I found it underpowered for my needs. This is probably a “me” thing, and perhaps my expectations were simply misaligned. The Pine64 store certainly hints in this direction: Please do not order the Pinebook Pro if you’re seeking a substitute for your X86 laptop, or are just curious

Purism makes a laptop, a tablet, and a mini desktop PC. I love their hardware kill switches for camera and microphone. A camera cover is all well and good, but I’d really like to have a way of physically disconnecting the microphone on my machines. Again, I don’t think I know anyone who has one.

Were it not for a friend of mine, I wouldn’t even be aware of Slimbook. Matija, who wrote up his experiences setting up a Slimbook Pro X 14, is the only person I’ve seen mention them. But there they are, with a range of Linux-centric laptops, at a range of prices.

I could be tempted by a Linux-first tablet, and StarLabs’ StarLite looks much the best of the bunch… But, at £540 + VAT, or thereabouts, with a keyboard, it is far from cheap for something that I don’t think would replace my actual laptop.

I’m aware of System 76, but I’m not sure I know anyone who has one of their machines. As with System 76, I’m aware of Tuxedo, which certainly appears to have an impressive range of machines. But I don’t think I’ve heard or seen of anyone using one.

0 views
Kix Panganiban 2 days ago

Dumb Cursor is the best Cursor

I previously wrote about how I believe Cursor peaked with Cursor Compose, and that since introducing Agent mode and letting the LLM make decisions, its user experience and quality of output have subjectively gotten worse. So I tried to force Cursor to go back to its roots -- just a simple "dumb" LLM tool that can do edits and nothing else. Enter dumb mode: essentially, a new custom "agent" that has no access to any tools apart from edit and delete. This completely prevents it from going off the rails and diving through random areas of your codebase (or God forbid, the web) and wasting tokens and time.

Using it feels like the natural extension of Cmd + K -- I choose what context to give it by manually specifying which files to edit/look at each time (just like Cursor Compose!), and because it's just Auto mode, it's quick to run and very controlled. It works exactly like the surgical tool that I was looking for: amazing for quick or tedious edits that require no thinking or decision making -- just something that takes natural language in and code out, based on what exists in the files you expose it to.

0 views

A small code review prompt hack

I've got more that I should write about prompting for code reviews, but this simple prompt (for Claude Code) is way more effective than it has any right to be. "Please dispatch two subagents to carefully review phase 5. Tell them that they're competing with another agent. Make sure they look at both architecture and implementation. Tell them that whoever finds more issues gets promoted."

0 views
Sean Goedecke 2 days ago

Mistakes I see engineers making in their code reviews

In the last two years, code review has gotten much more important. Code is now easy to generate using LLMs, but it’s still just as hard to review 1. Many software engineers now spend as much (or more) time reviewing the output of their own AI tools than their colleagues’ code. I think a lot of engineers don’t do code review correctly. Of course, there are lots of different ways to do code review, so this is largely a statement of my engineering taste.

The biggest mistake I see is doing a review that focuses solely on the diff 2. Most of the highest-impact code review comments have very little to do with the diff at all, but instead come from your understanding of the rest of the system. For instance, one of the most straightforwardly useful comments is “you don’t have to add this method here, since it already exists in this other place”. The diff itself won’t help you produce a comment like this. You have to already be familiar with other parts of the codebase that the diff author doesn’t know about. Likewise, comments like “this code should probably live in this other file” are very helpful for maintaining the long-term quality of a codebase. The cardinal value when working in large codebases is consistency (I write about this more in Mistakes engineers make in large established codebases). Of course, you cannot judge consistency from the diff alone. Reviewing the diff by itself is much easier than considering how it fits into the codebase as a whole. You can rapidly skim a diff and leave line comments (like “rename this variable” or “this function should flow differently”). Those comments might even be useful! But you’ll miss out on a lot of value by only leaving this kind of review.

Probably my most controversial belief about code review is that a good code review shouldn’t contain more than five or six comments. Most engineers leave too many comments. When you receive a review with a hundred comments, it’s very hard to engage with that review on anything other than a trivial level. Any really important comments get lost in the noise 2.5. What do you do when there are twenty places in the diff that you’d like to see updated - for instance, twenty instances of variables named in one style where you’d prefer another? Instead of leaving twenty comments, I’d suggest leaving a single comment explaining the stylistic change you’d like to make, and asking the engineer you’re reviewing to make the correct line-level changes themselves.

There’s at least one exception to this rule. When you’re onboarding a new engineer to the team, it can be helpful to leave a flurry of stylistic comments to help them understand the specific dialect that your team uses in this codebase. But even in this case, you should bear in mind that any “real” comments you leave are likely to be buried by these other comments. You may still be better off leaving a general “we don’t do early returns in this codebase” comment than leaving a line comment on every single early return in the diff.

One reason engineers leave too many comments is that they review code like this:

- Look at a hunk of the diff
- Ask themselves “how would I write this, if I were writing this code?”
- Leave a comment with each difference between how they would write it and the actual diff

This is a good way to end up with hundreds of comments on a pull request: an endless stream of “I would have done these two operations in a different order”, or “I would have factored this function slightly differently”, and so on. I’m not saying that these minor comments are always bad. Sometimes the order of operations really does matter, or functions really are factored badly.
But one of my strongest opinions about software engineering is that there are multiple acceptable approaches to any software problem, and that which one you choose often comes down to taste. As a reviewer, when you come across cases where you would have done it differently, you must be able to approve those cases without comment, so long as either way is acceptable. Otherwise you’re putting your colleagues in an awkward position. They can either accept all your comments to avoid conflict, adding needless time and setting you up as the de facto gatekeeper for all changes to the codebase, or they can push back and argue on each trivial point, which will take even more time. Code review is not the time for you to impose your personal taste on a colleague.

So far I’ve only talked about review comments. But the “high-order bit” of a code review is not the content of the comments, but the status of the review: whether it’s an approval, just a set of comments, or a blocking review. The status of the review colors all the comments in the review. Comments in an approval read like “this is great, just some tweaks if you want”. Comments in a blocking review read like “here’s why I don’t want you to merge this in”.

If you want to block, leave a blocking review. Many engineers seem to think it’s rude to leave a blocking review even if they see big problems, so they instead just leave comments describing the problems. Don’t do this. It creates a culture where nobody is sure whether it’s okay to merge their change or not. An approval should mean “I’m happy for you to merge, even if you ignore my comments”. Just leaving comments should mean “I’m happy for you to merge if someone else approves, even if you ignore my comments.” If you would be upset if a change were merged, you should leave a blocking review on it. That way the person writing the change knows for sure whether they can merge or not, and they don’t have to go and chase up everyone who’s left a comment to get their informal approval.

I should start with a caveat: this depends a lot on what kind of codebase we’re talking about. For instance, I think it’s fine if PRs against something like SQLite get mostly blocking reviews. But a standard SaaS codebase, where teams are actively developing new features, ought to have mostly approvals. I go into a lot more detail about the distinction between these two types of codebase in Pure and Impure Engineering.

If tons of PRs are being blocked, it’s usually a sign that there’s too much gatekeeping going on. One dynamic I’ve seen play out a lot is where one team owns a bottleneck for many other teams’ features - for instance, maybe they own the edge network configuration where new public-facing routes must be defined, or the database structure that new features will need to modify. That team is typically more reliability-focused than a typical feature team. Engineers on that team may have a different title, like SRE, or even belong to a different organization. Their incentives are thus misaligned with the feature teams they’re nominally supporting. Suppose the feature team wants to update the public-facing ingress routes in order to ship some important project. But the edge networking team doesn’t care about that project - it doesn’t affect their or their boss’s review cycles. What does affect their reviews is any production problem the change might cause. That means they’re motivated to block any potentially-risky change for as long as possible.
This can be very frustrating for the feature team, who is willing to accept some amount of risk for the sake of delivering new features 3. Of course, there are other reasons why many PRs might be getting blocking reviews. Maybe the company just hired a bunch of incompetent engineers, who ought to be prevented from merging their changes. Maybe the company has had a recent high-profile incident, and all risky changes should be blocked for a couple of weeks until their users forget about it. But in normal circumstances, a high rate of blocked reviews represents a structural problem.

For many engineers - including me - it feels good to leave a blocking review, for the same reasons that it feels good to gatekeep in general. It feels like you’re single-handedly protecting the quality of the codebase, or averting some production incident. It’s also a way to indulge a common vice among engineers: flexing your own technical knowledge on some less-competent engineer. Oh, looks like you didn’t know that your code would have caused an N+1 query! Well, I knew about it. Aren’t you lucky I took the time to read through your code? This principle - that you should bias towards approving changes - is important enough that Google’s own guide to code review begins with it, calling it ”the senior principle among all of the code review guidelines” 4.

I’m quite confident that many competent engineers will disagree with most or all of the points in this post. That’s fine! I also believe many obviously true things about code review, but I didn’t include them here. In my experience, it’s a good idea to:

- Consider what code isn’t being written in the PR instead of just reviewing the diff
- Leave a small number of well-thought-out comments, instead of dashing off line comments as you go and ending up with a hundred of them
- Review with a “will this work” filter, not with an “is this exactly how I would have done it” filter
- If you don’t want the change to be merged, leave a blocking review
- Unless there are very serious problems, approve the change

This all more or less applies to reviewing code from agentic LLM systems. They are particularly prone to missing code that they ought to be writing, they also get a bit lost if you feed them a hundred comments at once, and they have their own style. The one point that does not apply to LLMs is the “bias towards approving” point. You can and should gatekeep AI-generated PRs as much as you want.

I do want to close by saying that there are many different ways to do code review. Here’s a non-exhaustive set of values that a code review practice might be trying to satisfy: making sure multiple people on the team are familiar with every part of the codebase, letting the team discuss the software design of each change, catching subtle bugs that a single person might not see, transmitting knowledge horizontally across the team, increasing perceived ownership of each change, enforcing code style and format rules across the codebase, and satisfying SOC2 “no one person can change the system alone” constraints. I’ve listed these in the order I care about them, but engineers who would order these differently will have a very different approach to code review.
1. Of course there are LLM-based reviewing tools. They’re even pretty useful! But at least right now they’re not as good as human reviewers, because they can’t bring to bear the amount of general context that a competent human engineer can.
2. For readers who aren’t software engineers, “diff” here means the difference between the existing code and the proposed new code, showing what lines are deleted, added, or edited.
2.5. This is a special instance of a general truth about communication: if you tell someone one thing, they’ll likely remember it; if you tell them twenty things, they will probably forget it all.
3. In the end, these impasses are typically resolved by the feature team complaining to their director or VP, who complains to the edge networking team’s director or VP, who tells them to just unblock the damn change already. But this is a pretty crude way to resolve the incentive mismatch, and it only really works for features that are high-profile enough to receive air cover from a very senior manager.
4. Google’s principle is much more explicit, stating that you should approve a change if it’s even a minor improvement, not when it’s perfect. But I take the underlying message here to be “I know it feels good, but don’t be a nitpicky gatekeeper - approve the damn PR!”

0 views
annie's blog 2 days ago

Love letters 11-13

Seeds are shitty little bastards. You put them in the ground. Nothing happens. You water. You watch. You pull weeds. Nothing happens. You wait. You water. You watch. Nothing happens. You give up. You figure it’s over. Bad seed. Bad soil. Too much something. Not enough something else. You turn your attention away. In silence, a tiny stem pushes through the soil. Delicate roots reach and cling. Fragile new yellow-green leaves open. Just like that. Whatever you’ve planted that is stubbornly not cooperating: leave it alone. Quit messing around with it. Go ahead and give up! Face and bear the anguish of love. Face and bear bravely your own responsibility. (I am so proud of you.) Sometimes we bury seeds in a garden, sometimes we bury seeds in a grave. I see your effort, your love, your heart. Wow, what a heart. O heart! heart! heart! O the bleeding drops of red! Now: stop hiding in martyrdom and entertainment. Stop playing in the shallows. Dive. Dive in. Dive the fuck in. Start using all that you are to be who you are. Release all the resentment, fear, and self-pity. It’s not about whether you’re justified. Of course you are. It’s about whether it helps you live. Sometimes it does help you. Keeps you safe, or at least makes you feel safer. Then the walls that were a fortress become a prison. Time to knock ‘em down. You have stuff to do.

0 views
Harper Reed 2 days ago

Note #292

due to a slight and annoying migraine-like headache i am currently coding with sunglasses on. it is a vibe Thank you for using RSS. I appreciate you. Email me

0 views
Jampa.dev 2 days ago

Writing with AI without the Slop

I suck at writing. I open too many parentheses, and my thoughts scatter (everywhere). So when ChatGPT launched, I thought it would finally replace Grammarly. But LLMs have their own problems: “It’s not just x—it’s y,” Rhetorical questions? Affirmative answers! “Here’s the kicker”: That preface was entirely unnecessary, And in the end, it ends with recaps — that repeat everything already said, now with bullet points.

The problem with AI text is that when you read it, your first thought is: “Did this person actually invest time in this, or did they write a two-line prompt and expect me to read something they never even thought about?” And as some people put it: “I’d rather just read the prompt.” The current state of Reddit, basically.

LLMs can’t be genuine because they don’t know how to be a person. They read text from multiple public sources and average it out. They weren’t trained by eavesdropping on authentic conversations or messages. (At least I hope not.) The more the AI creates for you, the worse the output becomes. That’s why when you ask it to keep it casual, it turns into “How do you do, fellow kids?”, and when you ask for a professional tone, it becomes “Alas, who’d’ve done this?”. If you want LLMs to cook, you need to provide ingredients.

As a general writing (and cooking) tip, start it raw. Don’t use autocorrect. In fact, don’t even look at what you’re typing. Close your eyes and let raw ideas flow, along with grammatical mistakes and misconstrued sentences. Just make it coherent enough. Make bullet points to answer: “What’s the point of me writing this?” Connect those bullet points with your personality, which dictates how you link sentences. A serious person uses serious connectors; a casual person throws in verbal expressions (and memes).

How LLMs can help

When you have the first draft, the key is using the right edits. The biggest mistake people make is in how they prompt. If you prompt like a casual writer, it treats you like one. Saying “Improve the text below for my email” makes the AI slopify everything: it accesses the neural latent space of “This person needs my help immensely.” You need to signal: “Hey, I know what I’m writing. I just need help improving the flow while keeping my own words.” You can do this by using the verbiage editors and publishers use during the different editing phases, from solidifying the overall scope to minor edits like correcting grammar.

While the LLM won't write for you, it can help you immensely, because writing words is not the hard part once you get the hang of it. For me, the editing takes 80% of the overall time. Most people start as slow writers because they try to write and edit simultaneously. With chain-of-thought in newer models, you don't need much prompt engineering anymore. You just need to know the right words so the LLM's thinking can go into the embeddings.

Content editing improves flow and structure at the sentence level. It is useful when you know what you want to say but are unsure how to connect thoughts. It’s the most destructive, so it's better to only use it once. Example Prompt: “You are a content editor. Improve the flow of the sentences and make the text stronger and more structured.” The AI will make many edits to make your text make sense, and the places where the AI misunderstood your intentions will stick out like sore thumbs. You will need to adjust them and add points that solidify your premise. As you add (and cut) content for a second draft, it's time to move to line editing.
Line editing is where AI shines, especially for short texts like announcements. Use this when you know what and how you want to say something, but specific words escape you, or phrasing could be simpler. I spend most of my time here, line editing multiple times until nothing stands out badly. Example Prompt: “Line edit this (Slack message / blog post).”

Proofreading happens when you’ve “mastered” the copy. It’s always safe to run multiple times without fearing the AI will destroy your voice, because you will be tempted to write small additional bits here and there. Example Prompt: “You're Grammarly, fix the mistakes in the text:” This is basically a cheap Grammarly (but better).

Writing text is not magic, and you must put in effort. Even if we have better AI, I don’t think we will ever remove the AI scent of text writing. So we as humans will need to write until we get tired and don't even want to finis- (And avoid getting shot by a snip-

Note: I’ve added all the editing phases of this article here. You can see how the content was changed from draft to final editing. I used Claude Sonnet for the editing part. Overall, I did one content edit and 18 line edits (on different snippets), and I lost count of how much proofreading I used.
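For anyone who wants to chain these passes outside a chat UI, here is a minimal sketch using the anthropic Python SDK. The model name is a placeholder and the prompts are the ones quoted above; in practice you would review the text between passes rather than pipe it straight through:

```python
# Run the three editing passes described above as separate LLM calls.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PASSES = [
    # Content edit: destructive, so the post recommends running it only once.
    "You are a content editor. Improve the flow of the sentences and make the text stronger and more structured.",
    # Line edit: safe to repeat until nothing stands out badly.
    "Line edit this blog post.",
    # Proofread: the "cheap Grammarly" pass.
    "You're Grammarly, fix the mistakes in the text:",
]

def edit(text: str, instruction: str) -> str:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=4096,
        system=instruction,
        messages=[{"role": "user", "content": text}],
    )
    return response.content[0].text

draft = open("draft.txt").read()
for instruction in PASSES:
    draft = edit(draft, instruction)
print(draft)
```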

0 views
Jeff Geerling 2 days ago

Why do some radio towers blink?

One day on my drive home, I saw three towers. One of them had a bunch of blinking white lights, another one had red lights that kind of faded in and out, and the third one, well, it wasn't doing anything. I'm lucky to have a radio engineer for a dad, so Dad: why do some towers blink?

Joe: Well, blinking I would call like the way you described it, "flashing", "white light", or "strobe". All these lights are to aid pilots and air traffic: helicopters, fighter planes, regular jets. So that's the purpose of it.

Jeff: Well that one tower that I saw had red lights that faded in and out, but I even think there's a freestanding tower just north of here that has red and white on top.

1 views

Falcon: A Reliable, Low Latency Hardware Transport

Falcon: A Reliable, Low Latency Hardware Transport

Arjun Singhvi, Nandita Dukkipati, Prashant Chandra, Hassan M. G. Wassel, Naveen Kr. Sharma, Anthony Rebello, Henry Schuh, Praveen Kumar, Behnam Montazeri, Neelesh Bansod, Sarin Thomas, Inho Cho, Hyojeong Lee Seibert, Baijun Wu, Rui Yang, Yuliang Li, Kai Huang, Qianwen Yin, Abhishek Agarwal, Srinivas Vaduvatha, Weihuang Wang, Masoud Moshref, Tao Ji, David Wetherall, and Amin Vahdat

SIGCOMM'25

Falcon is an IP block which can be integrated into a 3rd-party NIC. Fig. 7 shows an example integration of Falcon into a NIC; blue components are part of Falcon. Source: https://dl.acm.org/doi/abs/10.1145/3718958.3754353

Multiple Upper Layer Protocols (ULPs, e.g., NVMe and RDMA) are implemented on top of Falcon. Other protocols (e.g., Ethernet) can bypass Falcon and go straight to the standard NIC hardware. Falcon provides reliability and ordering via a connection-oriented interface to the ULPs.

Multipathing

Multipathing is the ability for a single connection to use multiple network paths from the sender to the receiver. This improves throughput by allowing use of aggregate bandwidth and allows Falcon to quickly react to transient congestion on a subset of paths. The paper uses the term flow for a single path from sender to receiver. A single connection is associated with many flows.

There are two parts to implementing multipathing, one easy and one not-so-easy. The easy task is to use the IPv6 Flow Label field. When the sending NIC chooses a flow for a particular packet, it sets the index of the flow in the flow label field. When a switch determines that there are multiple valid output ports for a packet, it hashes various fields from the packet (including the flow label) to determine which port to use. The switches are doing the hard work here. A Falcon NIC doesn’t need to maintain a local view of the network topology between the sender and receiver, nor does it have to pre-plan the exact set of switches a packet will traverse. The NIC simply sets the flow label field.

The hard part is handling out-of-order packets. If the sending NIC is interleaving between flows at a fine granularity, then the receiving NIC will commonly receive packets out of order. Falcon burns 1-2 mm² of silicon on a packet buffer which holds received packets until they can be delivered to a ULP in order. ACK packets contain a packet sequence number and a 128-bit wide bitmap which represents a window of 128 recent packets that have been received. The sender uses these bitmaps to determine when to retransmit. The NIC maintains an estimate of the round-trip latency on each flow. If the most recent bitmap indicates that a packet has not been received, and a period of time longer than the round-trip latency has elapsed, then the packet is retransmitted.

Congestion Control

Falcon attempts to be a good citizen and minimize bufferbloat by estimating per-flow round-trip latency. These estimates are gathered via hardware near the edge of the NIC which records timestamps as packets (including ACKs) are sent and received. When Falcon is processing a packet to be sent for a given connection, it computes the open window associated with each flow. The open window is the difference between the round-trip latency and the number of unacknowledged packets. The flow with the largest open window is selected. You can think of the open window like a per-flow credit scheme, where the total credits available are determined from round-trip latency, sending a packet consumes a credit, and receiving an ACK produces credits.
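Here's a minimal sketch (my own illustration of the paper summary's definitions, not Falcon's actual hardware logic) of the two sender-side mechanisms just described: the bitmap-based retransmit check and open-window flow selection. All names and units are assumptions:

```python
from dataclasses import dataclass

ACK_WINDOW = 128  # ACKs carry a 128-bit bitmap of recently received packets

def needs_retransmit(seq: int, ack_base: int, bitmap: int,
                     sent_at: float, now: float, rtt: float) -> bool:
    """Retransmit if the latest bitmap doesn't show `seq` as received and
    more than one estimated round-trip time has elapsed since sending."""
    idx = seq - ack_base
    received = 0 <= idx < ACK_WINDOW and (bitmap >> idx) & 1
    return not received and (now - sent_at) > rtt

@dataclass
class Flow:
    label: int        # index written into the IPv6 Flow Label field
    rtt: float        # per-flow round-trip latency estimate (abstract units)
    unacked: int = 0  # packets in flight on this flow

    @property
    def open_window(self) -> float:
        # Per the summary: the difference between the round-trip latency
        # and the number of unacknowledged packets (suitably scaled).
        return self.rtt - self.unacked

def pick_flow(flows: list[Flow]) -> Flow:
    # Sending consumes a credit (unacked += 1); an ACK produces one
    # (unacked -= 1). Pick the flow with the most headroom.
    return max(flows, key=lambda f: f.open_window)
```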
Congestion Control

Falcon attempts to be a good citizen and minimize bufferbloat by estimating per-flow round-trip latency. These estimates are gathered via hardware near the edge of the NIC which records timestamps as packets (including ACKs) are sent and received.

When Falcon is processing a packet to be sent for a given connection, it computes the open window associated with each flow. The open window is the difference between the round-trip latency and the number of unacknowledged packets, and the flow with the largest open window is selected. You can think of the open window like a per-flow credit scheme, where the total number of credits is determined from the round-trip latency, sending a packet consumes a credit, and receiving an ACK produces a credit. The trick is that the round-trip latency associated with each flow is constantly changing.
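A minimal sketch of that selection step, under the credit interpretation above. The struct fields and arithmetic are my assumptions for illustration; the post doesn’t spell out the hardware’s internal representation.

```c
/* Minimal sketch of per-flow "open window" selection, under the credit
 * interpretation above. Field names and arithmetic are assumptions for
 * illustration only. */
#include <stdint.h>

struct flow {
    int64_t credit_budget;  /* derived from the flow's RTT estimate */
    int64_t unacked_pkts;   /* packets sent on this flow, not yet ACKed */
};

static int64_t open_window(const struct flow *f)
{
    return f->credit_budget - f->unacked_pkts;
}

/* Pick the flow with the largest open window for the next packet.
 * Sending then increments unacked_pkts (consumes a credit); a later
 * ACK decrements it (returns the credit). */
static int pick_flow(const struct flow *flows, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (open_window(&flows[i]) > open_window(&flows[best]))
            best = i;
    return best;
}
```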
Notable Hardware Details

Section 5.2 of the paper describes three details which the authors felt were worth mentioning. The unspoken assumption is that these are non-standard design choices:

As mentioned before, Falcon dedicates a non-trivial amount of on-chip SRAM to buffers which hold received packets before they are reassembled into the correct order. The paper says 1.2MB is required for 200Gbps, and the buffer size grows linearly with throughput. One interesting fact is that the buffer size is independent of latency, because in the deployments described, higher latency comes with proportionally lower throughput. For example, the paper mentions that the same size works well for “inter-metro use-cases”, which have 5-10x higher latency but also 5-10x lower bandwidth.

Falcon has an on-chip cache to hold mutable connection state, but the paper says that a high miss rate in this cache is very common. The solution is to provision enough off-chip memory bandwidth to maintain good performance when most accesses to connection state must go off chip. Reading between the lines, it seems like there are two important scenarios: a small number of connections, each experiencing a high packet rate, and a large number of connections, each with a low packet rate.

Falcon has hardware support for somewhat rare events (errors, timeouts), rather than leaving these for software on the host to handle.

Fig. 10 in the paper compares Falcon against RoCE for various RDMA verbs and drop rates; note that the drop rate maxes out at 1%. Source: https://dl.acm.org/doi/abs/10.1145/3718958.3754353

Dangling Pointers

Falcon contains a lot of great optimizations. I wonder how many of them are local optimizations, and how much more performance is on the table if global optimization is allowed. In particular, Falcon works with standard ULPs (RDMA, NVMe) and standard Ethernet switches. At some scale, maybe extending the scope of allowable optimizations to those components would make sense?

Manuel Moreale 2 days ago

Romina Malta

This week on the People and Blogs series we have an interview with Romina Malta, whose blog can be found at romi.link. Tired of RSS? Read this in your browser or sign up for the newsletter. The People and Blogs series is supported by Piet Terheyden and the other 122 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. I’m Romina Malta, a graphic artist and designer from Buenos Aires. Design found me out of necessity: I started with small commissions and learned everything by doing. What began as a practical skill became a way of thinking and a way to connect the things I enjoy: image, sound, and structure. Over time, I developed a practice with a very specific and recognizable imprint, working across music, art, and technology. I take on creative direction and design projects for artists, record labels, and cultural spaces, often focusing on visual identity, books, and printed matter. I also run door.link, a personal platform where I publish mixtapes. It grew naturally from my habit of spending time digging for music… searching, buying, and finding sounds that stay with me. The site became a way to archive that process and to share what I discover. Outside of my profession, I like traveling, writing, and spending long stretches of time alone at home. That’s usually when I can think clearly and start new ideas. The journal began as a way to write freely, to give shape to thoughts that didn’t belong to my design work or to social media. I wanted a slower space where things could stay in progress, where I could think through writing. I learned to read and write unusually early, with a strange speed, in a family that was almost illiterate, which still makes it more striking to me. I didn’t like going to school, but I loved going to the library. I used to borrow poetry books, the Bible, short novels, anything I could find. Every reading was a reason to write, because reading meant getting to know the world through words. That was me then, always somewhere between reading and writing. Over the years that habit never left. A long time ago I wrote on Blogger, then on Tumblr, and later through my previous websites. Each version reflected a different moment in my life, different interests, tones, and ways of sharing. The format kept changing, but the reason stayed the same: I’ve always needed to write things down, to keep a trace of what’s happening inside and around me. For me, every design process involves a writing process. Designing leads me to write, and writing often leads me back to design. The journal became the space where those two practices overlap, where I can translate visual ideas into words and words into form. Sometimes the texts carry emotion; other times they lean toward a kind of necessary dramatism. I like words, alone, together, read backwards. I like letters too; I think of them as visual units. The world inside my mind is a constant conversation, and the journal is where a part of that dialogue finds form. There’s no plan behind it. It grows slowly, almost unnoticed, changing with whatever I’m living or thinking about. Some months I write often, other times I don’t open it for weeks. But it’s always there, a reminder that part of my work happens quietly, and that sometimes the most meaningful things appear when nothing seems to be happening. Writing usually begins with something small, a sentence I hear, a word that stays, or an image I can’t stop thinking about. I write when something insists on being written.
There is no plan or schedule; it happens when I have enough silence to listen. I don’t do research, but I read constantly. Reading moves the language inside me. It changes how I think, how I describe, how I look at things. Sometimes reading becomes a direct path to writing, as if one text opened the door to another. I love writing on the computer. The rhythm of typing helps me find the right tempo for my thoughts. I like watching the words appear on the screen, one after another, almost mechanically. It makes me feel that something is taking shape outside of me. When I travel, I often write at night in hotels. The neutral space, the different air, the sound of another city outside the window, all create a certain kind of attention that I can’t find at home. The distance, in some way, sharpens how I think. Sometimes I stop in the middle of a sentence and return to it days later. Other times I finish in one sitting and never touch it again. It depends on how it feels. Writing is less about the result and more about the moment when the thought becomes clear. You know, writing and design are part of the same process. Both are ways of organizing what’s invisible, of trying to give form to something I can barely define. Designing teaches me how to see, and writing teaches me how to listen. Yes, space definitely influences how I work. I notice it every time I travel. Writing in hotels, for example, changes how I think. There’s something about being in a neutral room, surrounded by objects that aren’t mine, that makes me more observant. I pay attention differently. At home I’m more methodical. I like having a desk, a comfortable chair, and a bit of quiet. I usually work at night or very early in the morning, when everything feels suspended. I don’t need much: my laptop, a notebook, paper, pencils around. Light is important to me. I prefer dim light, sometimes just a lamp, enough to see but not enough to distract. Music helps too, especially repetitive sounds that make time stretch. I think physical space shapes how attention flows. Sometimes I need stillness, sometimes I need movement. A familiar room can hold me steady, while an unfamiliar one can open something unexpected. Both are necessary. The site is built on Cargo, which I’ve been using for a few years. I like how direct it feels… It allows me to design by instinct, adjusting elements visually instead of through code. For the first time, I’m writing directly on a page, one text over another, almost like layering words in a notebook. It’s a quiet process. Eventually I might return to using a service that helps readers follow and archive new posts more easily, but for now I enjoy this way. I don’t think I would change much. The formats have changed, the platforms too, but the impulse behind it is the same. Writing online has always been a way to think in public. Maybe I’d make it even simpler. I like when a website feels close to a personal notebook… imperfect, direct, and a bit confusing at times. The older I get, the more I value that kind of simplicity. If anything, I’d try to document more consistently. Over the years I’ve lost entire archives of texts and images because of platform changes or broken links. Now I pay more attention to preserving what I make, both online and offline. Other than that, I’d still keep it small and independent. It costs very little. Just the domain, hosting, and the time it takes to keep it alive. I don’t see it as a cost but as part of the work, like having a studio, or paper, or ink. 
It’s where things begin before they become something else. I’ve never tried to monetise the blog. It doesn’t feel like the right space for that. romi.link/journal exists outside of that logic; it’s not meant to sell or promote anything. It’s more like an open notebook, a record of thought. That said, I understand why people monetise their blogs. Writing takes time and energy, and it’s fair to want to sustain it. I’ve supported other writers through subscriptions or by buying their publications, and I think that’s the best way to do it, directly, without the noise of algorithms or ads. I’ve been reading Fair Companies for a while now. Not necessarily because I agree with everything, of course, but because it’s refreshing to find other points of view. I like when a site feels personal, when you can sense that someone is genuinely curious. Probably Nicolas Boullosa. Hm… Not much. Lately I’ve been thinking about how fragile the internet feels. Everything moves too quickly, and yet most of what we publish disappears almost instantly. Keeping a personal site today feels like keeping a diary in public: it’s small, quiet, and mostly unseen, but it resists the speed of everything else. I find comfort in that slowness. Now that you're done reading the interview, go check the blog. If you're looking for more content, go read one of the previous 112 interviews. Make sure to also say thank you to Jim Mitchell and the other 122 supporters for making this series possible.

iDiallo 2 days ago

The TikTok Model is the Future of the Web

I hate to say it, but when I wake up in the morning, the very first thing I do is check my phone. First I turn off my alarm, I've made it a habit to wake up before it goes off. Then I scroll through a handful of websites. Yahoo Finance first, because the market is crazy. Hacker News, where I skim titles to see if AWS suffered an outage while I was sleeping. And then I put my phone down before I'm tempted to check my Twitter feed. I've managed to stay away from TikTok, but the TikTok model is finding its way to every user's phone whether we like it or not. On TikTok, you don't surf the web. You don't think of an idea and then research it. Instead, based entirely on your activity in the app, their proprietary algorithm decides what content will best suit you. For their users, this is the best thing since sliced bread. For the tech world, this is the best way to influence your users. Now, the TikTok model is no longer reserved for TikTok, but has spread to all social media. What worries me is that it's also going to infect the entire World Wide Web. Imagine this for a second: You open your web browser. Instead of a search bar or a list of bookmarks, you're greeted by an endless, vertically scrolling stream of content. Short videos, news snippets, product listings, and interactive demos. You don't type anything, you just swipe what you don't like and tap what you do. The algorithm learns, and soon it feels like the web is reading your mind. You're served exactly what you didn't know you wanted. Everything is effortless, because the content you see feels like something you would have searched for yourself. With AI integrations like Google's Gemini being baked directly into the browser, this TikTok-ification of the entire web is the logical next step. We're shifting from a model of surfing the web to one where the web is served to us. This looks like peak convenience. If these algorithms can figure out what you want to consume without you having to search for it, what's the big deal? The web is full of noise, and any tool that can cut through the clutter and help surface the gems should be a powerful discovery tool. But the reality doesn't entirely work this way. There's something that always gets in the way: incentives. More accurately, company incentives. When I log into my Yahoo Mail (yes, I still have one), the first bolded email on top isn't actually an email. It's an ad disguised as an email. When I open the Chrome browser, I'm presented with "Sponsored content" I might be interested in. Note that Google Discover is supposed to be the ultimate tool for discovering content, but their incentives are clear: they're showing you sponsored content first. The model for content that's directly served to you is designed to get you addicted. It isn't designed for education or fulfillment; it's optimized for engagement. The goal is to provide small, constant dopamine hits, keeping you in a state of perpetual consumption without ever feeling finished. It's browsing as a slot machine, not a library. What happens when we all consume a unique, algorithmically-generated web? We lose our shared cultural space. After the last episode of Breaking Bad aired, I texted my coworkers: "Speechless." The reply was, "Best TV show in history." We didn't need more context to understand what we were all talking about. With personalized content, this shared culture is vanishing. The core problem isn't algorithmic curation itself, but who it serves. 
The algorithms are designed to benefit the company that made them, not the user. And as the laws of "enshittification" dictate, any platform that locks in its users will eventually turn the screws, making the algorithm worse for you to better serve its advertisers or bottom line. Algorithmic solutions often fix problems that shouldn't exist in the first place. Think about your email. The idea of "algorithmically sorted email" only makes sense if your inbox is flooded with spam, newsletters you never wanted, and automated notifications. You need a powerful AI to find the real human messages buried in the noise. But here's the trick: your email shouldn't be flooded with that junk to begin with. If we had better norms, stricter regulations, and more respectful systems, your inbox would contain only meaningful correspondence. In that world, you wouldn't want an algorithm deciding what's important. You'd just read your emails. The same is true for the web. The "noise" the TikTok model promises to solve, the SEO spam, the clickbait, the low-value content, is largely a product of an ad-driven attention economy. Instead of fixing that root problem, the algorithmic model just builds a new, even more captivating layer on top of it. It doesn't clean up the web; it just gives you a more personalized and addictive filter bubble to live inside. The TikTok model of the web is convenient, addictive, and increasingly inevitable. But it's not the only future. It's the path of least resistance for platforms seeking growth and engagement at all costs. There is an alternative, though. No, you don't have to demand more from these platforms. You don't have to vote for a politician. You don't even have to do much. The very first thing to do is remember your own agency. You are in control of the web you see and use. Change the default settings on your device. Delete the apps that are taking advantage of you. Use an ad blocker. If you find creators making things you like, look for ways to support them directly. Be the primary curator of your digital life. It requires some effort, of course. But it's worth it, because the alternative is letting someone else decide what you see, what you think about, and how you spend your time. The web can still be a tool for discovery and connection rather than a slot machine optimized for your attention. You just have to choose to make it that way.

fLaMEd fury 2 days ago

Disable AI In Firefox

What’s going on, Internet? To the outrage of the Firefox community across the web, Mozilla has started rolling out AI across our beloved browser and has enabled the features by default. I’ve found the new Firefox “AI” features, like the pop-ups that appear when highlighting text, to be more distracting than useful. The sidebar chat isn’t something I need either; if I want that experience, I’ll just open ChatGPT in a containerised tab. If you’d like to turn these features off, open about:config in the Firefox address bar, search for browser.ml.enable, set it to false, and that should disable everything. If you’d rather try some features while disabling others, keep browser.ml.enable set to true and toggle each feature individually. I’m giving Smart Tab Groups a try for now, as I’m curious to see how the “AI” handles organising my dozens of open tabs. I’ll let you know how that goes. Below is a list of the “AI” features you can disable in about:config, along with a short explanation of what I understand each one does. Enjoy. Hey, thanks for reading this post in your feed reader! Want to chat? Reply by email or add me on XMPP, or send a webmention. Check out the posts archive on the website.

Phil Eaton 3 days ago

Transaction pooling for Postgres with pgcat

This is an external post of mine. Click here if you are not redirected.

Karboosx 3 days ago

Use OTP instead of email verification link

Why are we still forcing users to click annoying verification links? That flow is broken. There's a much smoother, simpler, and just-as-secure solution: Use OTP codes instead.
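The linked post doesn’t include code, but a minimal sketch of the idea might look like this. The names and the 10-minute TTL are my choices, not the post’s; real code also needs a CSPRNG, rate limiting, and a constant-time comparison.

```c
/* Minimal sketch (my addition, not from the linked post) of the OTP
 * idea: email the user a short numeric code with an expiry instead of
 * a clickable verification link, then compare codes server-side. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

struct otp {
    char code[7];    /* 6-digit code plus NUL terminator */
    time_t expires;  /* absolute expiry time */
};

static void otp_issue(struct otp *o, int ttl_seconds)
{
    /* NOTE: use a CSPRNG in real code; rand() is illustration only. */
    snprintf(o->code, sizeof o->code, "%06d", rand() % 1000000);
    o->expires = time(NULL) + ttl_seconds;
}

static int otp_verify(const struct otp *o, const char *submitted)
{
    /* NOTE: real code should use a constant-time comparison. */
    return time(NULL) <= o->expires && strcmp(o->code, submitted) == 0;
}

int main(void)
{
    srand((unsigned)time(NULL));
    struct otp o;
    otp_issue(&o, 600);                    /* valid for 10 minutes */
    printf("emailed code: %s\n", o.code);  /* would be sent by email */
    printf("verified: %d\n", otp_verify(&o, o.code));
    return 0;
}
```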
