Latest Posts (20 found)

EDM: An Ultra-Low Latency Ethernet Fabric for Memory Disaggregation

EDM: An Ultra-Low Latency Ethernet Fabric for Memory Disaggregation Weigao Su and Vishal Shrivastav ASPLOS'25 This paper describes incremental changes to Ethernet NICs and switches to enable efficient disaggregation of memory without the need for a separate network (e.g., CXL) for memory traffic. Fig. 1 shows the north star: Source: https://dl.acm.org/doi/10.1145/3669940.3707221 Servers are partitioned into Compute Nodes and Memory Nodes. When a compute node wants to access remote memory, it issues a request to its local NIC, which sends the request to the correct memory node (via a switch). The key problem this paper addresses is Ethernet fabric latency (i.e., the time taken for requests/responses to flow between NICs and switches). The paper assumes that the latency between the processor and the NIC is low (and cites other papers which describe techniques for reducing this latency to below 100ns). Typical Ethernet fabric latency is measured in microseconds, which is much higher than a local memory access. The Ethernet hardware stack can be decomposed into MAC and PHY layers. The MAC is higher level and sits on top of the PHY. The paper proposes implementing EDM (Ethernet Disaggregated Memory) with modifications to the PHY layer in both the NIC and the switch. Normal network packets flow through the MAC and PHY as they usually would, but a side channel exists which allows remote memory accesses to be handled directly by the enhanced PHY layer. Fig. 3 illustrates the hardware changes in Ethernet NICs and switches. Source: https://dl.acm.org/doi/10.1145/3669940.3707221 Remote memory access requests and responses are smaller than typical Ethernet packets. Additionally, end-to-end application performance is more sensitive to remote memory access latency than the latency of regular network traffic. The bulk of the paper describes how EDM achieves low latency for remote memory traffic. The EDM PHY modifications allow a memory request to preempt a non-memory packet. Say the MAC sends a 1KiB packet to the PHY, which begins to send the packet over the wire in 66-bit blocks. If a memory request shows up in the middle of transmitting the network packet, the PHY can sneak the memory request onto the wire between 66-bit blocks, rather than waiting for the whole 1KiB to be sent. Standard Ethernet requires 96 bits of zeros to be sent on the wire between each packet. This overhead is small for large packets, but it is non-trivial for small packets (like remote memory access requests). The EDM PHY modifications allow these idle bits to be used for remote memory accesses. The MAC still sees the gaps, but the PHY does not. If you ask an LLM what could possibly go wrong by trying to use the inter-frame gap to send useful data, it will spit out a long list. I can’t find too much detail in the paper about how to ensure that this enhancement is robust. The possible problems are limited to the PHY layer however, as the MAC still sees the zeros it expects. To avoid congestion and dropping of memory requests, EDM uses an in-network scheduling algorithm somewhat like PFC. The EDM scheduler is in the PHY layer of the switch. Senders notify the switch when they have memory traffic to send, and the switch responds later with a grant, allowing a certain amount of data to be sent. The authors implemented EDM on FPGAs (acting as both NIC and switch).
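To make the preemption mechanism described above concrete, here is a toy Python sketch, purely my own illustration rather than the paper's design or its actual block format: a bulk frame goes out as 66-bit blocks, pending memory requests slip in between blocks, and the 96-bit inter-frame gap can carry one more small memory message instead of idle bits.

```python
from collections import deque

BLOCK_BITS = 66   # 64b/66b-encoded PHY block
IFG_BITS = 96     # standard Ethernet inter-frame gap (idle bits)

def transmit(frame_bits: int, mem_requests: deque):
    """Yield what goes on the wire: frame blocks, with pending memory
    requests preempting the frame between blocks, and the inter-frame
    gap reused for a small memory message if any are still queued."""
    n_blocks = -(-frame_bits // BLOCK_BITS)   # ceiling division
    for i in range(1, n_blocks + 1):
        yield f"frame block {i}/{n_blocks}"
        if mem_requests and i < n_blocks:
            # Preemption point: a memory request sneaks onto the wire
            # between blocks instead of waiting for the whole frame.
            yield f"memory msg: {mem_requests.popleft()}"
    if mem_requests:
        # The MAC still sees an idle gap, but the PHY carries a memory
        # message in it rather than 96 zero bits.
        yield f"memory msg in {IFG_BITS}-bit gap: {mem_requests.popleft()}"
    else:
        yield f"{IFG_BITS} idle bits (inter-frame gap)"

reqs = deque(["READ 0x1000", "WRITE 0x2000", "READ 0x3000"])
for event in transmit(frame_bits=3 * BLOCK_BITS, mem_requests=reqs):
    print(event)
```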
Table 1 compares latencies for TCP/IP, RDMA, raw Ethernet packets, and EDM, breaking down latencies at each step: Source: https://dl.acm.org/doi/10.1145/3669940.3707221 Fig. 7 throws CXL into the mix: Source: https://dl.acm.org/doi/10.1145/3669940.3707221 Dangling Pointers Section 3.3 “Practical Concerns” has a discussion of what could go wrong (e.g., fault tolerance and data corruption). It is hard to judge how much work is needed to make this into something that industry could rely on.

Jim Nielsen Yesterday

A Brief History of App Icons From Apple’s Creator Studio

I recently updated my collection of macOS icons to include Apple’s new “Creator Studio” family of icons. Doing this — in tandem with seeing funny things like this post on Mastodon — got me thinking about the history of these icons. I built a feature on my icon gallery sites that’s useful for comparing icons over time. For example, here’s Keynote: (Unfortunately, the newest Keynote isn’t part of that collection because I have them linked in my data by their App Store ID and it’s not the same ID anymore for the Creator Studio app — I’m going to have to look at addressing that somehow so they all show up together in my collection.) That’s one useful way of looking at these icons. But I wanted to see them side-by-side, so I dug them all up. Now, my collection of macOS icons isn’t complete. It doesn’t show every variant since the beginning of time, but it’s still interesting to see what’s changed within my own collection. So, without further ado, I present the variants in my collection. The years labeled in the screenshots represent the year in which I added them to my collection (not necessarily the year that Apple changed them). For convenience, I’ve included a link to the screenshot of icons as they exist in my collection (how I made that page, if you’re interested). Final Cut Pro: Compressor: Pixelmator Pro: (Granted, Pixelmator wasn’t one of Apple’s own apps until recently but its changes follow the same pattern showing how Apple sets the tone for itself as well as the ecosystem.) One last non-visual thing I noticed while looking through these icons in my archive. Apple used to call their own apps in the App Store by their name, e.g. “Keynote”. But now Apple seems to have latched on to what the ecosystem does by attaching a description to the name of the app, e.g. “Keynote: Design Presentations”.
Keynote -> Keynote: Design Presentations
Pages -> Pages: Create Documents
Numbers -> Numbers: Make Spreadsheets
Final Cut Pro -> Final Cut Pro: Create Video
Compressor -> Compressor: Encode Media
Logic Pro -> Logic Pro: Make Music
MainStage -> MainStage: Perform Live
Pixelmator Pro -> Pixelmator Pro: Edit Images
Reply via: Email · Mastodon · Bluesky

David Bushell Yesterday

Big Design, Bold Ideas

I’ve only gone and done it again! I redesigned my website. This is the eleventh major version. I dare say it’s my best attempt yet. There are similarities to what came before and plenty of fresh CSS paint to modernise the style. You can visit my time machine to see the ten previous designs that have graced my homepage. Almost two decades of work. What a journey! I’ve been comfortable and coasting for years. This year feels different. I’ve made a career building for the open web. That is now under attack. Both my career, and the web. A rising sea of slop is drowning out all common sense. I’m seeing peers struggle to find work, others succumb to the chatbot psychosis. There is no good reason for such drastic change. Yet change is being forced by the AI industrial complex on its relentless path of destruction. I’m not shy about my stance on AI. No thanks! My new homepage doubles down. I won’t be forced to use AI but I can’t ignore it. Can’t ignore the harm. Also I just felt like a new look was due. Last time I mocked up a concept in Adobe XD. Adobe is now unfashionable and Figma, although swank, has that Silicon Valley stench. Penpot is where the cool kids paint pretty pictures of websites. I’m somewhat of an artist myself so I gave Penpot a go. My current brand began in 2016 and evolved in 2018. I loved the old design but the rigid layout didn’t afford much room to play with content. I spent a day pushing pixels and was quite chuffed with the results. I designed my bandit game in Penpot too (below). That gave me the confidence to move into real code. I’m continuing with Atkinson Hyperlegible Next for body copy. I now license Ahkio for headings. I used Komika Title before but the all-caps was unwieldy. I’m too lazy to dig through backups to find my logotype source. If you know what font “David” is please tell me! I worked with Axia Create on brand strategy. On that front, we’ll have more exciting news to share later in the year! For now what I realised is that my audience here is technical. The days of small business owners seeking me are long gone. That market is served by Squarespace or Wix. It’s senior tech leads who are entrusted to find and recruit me, and peers within the industry who recommend me. This understanding gave me focus. To illustrate why AI is lame I made an interactive mini-game! The slot machine metaphor should be self-explanatory. I figured a bit of comedy would drive home my AI policy. In the current economy if you don’t have a sparkle emoji is it even a website? The game is built with HTML canvas, web components, and synchronised events I over-complicated to ensure a unique set of prizes. The secret to high performance motion blur is to cheat with pre-rendered PNGs. In hindsight I could have cheated more with a video. I commissioned Declan Chidlow to create a bespoke icon set. Declan delivered! The icons look so much better than the random assortment of placeholders I found. I’m glad I got a proper job done. I have neither the time nor skill for icons. Declan read my mind because I received an 88×31 web badge bonus gift. I had mocked up a few badges myself in Penpot. Scroll down to see them in the footer. Declan’s badge is first and my attempts follow. I haven’t quite nailed the pixel look yet. My new menu is built with invoker commands and view transitions for a JavaScript-free experience. Modern web standards are so cool when they work together! I do have a tiny JS event listener to polyfill old browsers.
The pixellated footer gradient is done with a WebGL shader. I had big plans but after several hours and too many Stack Overflow tabs, I moved on to more important things. This may turn into something later but I doubt I’ll progress trying to learn WebGL. Past features like my Wasm static search and speech synthesis remain on the relevant blog pages. I suspect I’ll be finding random one-off features I forgot to restyle. My homepage ends with another strong message. The internet is dominated by US-based big tech. Before backing powers across the Atlantic, consider UK and EU alternatives. The web begins at home. I remain open to working with clients and collaborators worldwide. I use some ‘big tech’ but I’m making an effort to push for European alternatives. US-based tech does not automatically mean “bad” but the absolute worst is certainly thriving there! Yeah I’m English, far from the smartest kind of European, but I try my best. I’ve been fortunate to find work despite the AI threat. I’m optimistic and I refuse to back down from calling out slop for what it is! I strongly believe others still care about a job well done. I very much doubt the touted “10x productivity” is resulting in 10x profits. The way I see it, I’m cheaper, better, and more ethical than subsidised slop. Let me know on the socials if you love or hate my new design :) P.S. I published this Sunday because Heisenbugs only appear in production. Thanks for reading! Follow me on Mastodon and Bluesky . Subscribe to my Blog and Notes or Combined feeds.

matduggan.com Yesterday

GitButler CLI Is Really Good

My workflow has remained mostly the same for over a decade. I write everything in Vim using the configuration found here. I run Vim from inside of tmux with a configuration found here. I write things on a git branch, made with the CLI, then I add them with to that branch, trying to run all of the possible linting and tests with before I waste my time on GitHub Actions. Then I run which is an alias to . Finally I successfully commit, then I copy-paste the URL returned by GitHub to open a PR. Then I merge the PR and run to go back to the primary branch, which is an alias to . This workflow, I think, is pretty familiar for anyone working with GitHub a lot. Now you'll notice I'm not saying because almost nothing I'm doing has anything to do with . There's no advantage to my repo being local to my machine, because everything I need to actually merge and deploy code lives on GitHub. The CI runs there, the approval process runs there, the monitoring of the CI happens there, the injection of secrets happens there. If GitHub is down my local repo does, effectively, nothing. My source of truth is always remote, which means I pay the price for complexity locally but I don't benefit from it. At most jobs: This means the following is also true: Almost all the features of are wasted on me in this flow. Now because this tool serves a million purposes and is designed to operate in a way that almost nobody uses it for, we all pay the complexity price of and never reap any of the benefits. So instead I keep having to add more aliases to paper over the shortcomings of . These are all the aliases I use at least once a week. Git's offline-first design creates friction for online-first workflows, and GitButler CLI eliminates that friction by being honest about how we actually work. (Edit: I forgot to add this disclaimer. I am not, nor have ever been an employee/investor/best friends with anyone from GitButler. They don't care that I've written this and I didn't communicate with anyone from that team before I wrote this.) So let's take the most basic command as an example. This is my flow that I do 2-3 times a day without my aliases. I do this because can't make assumptions about the state of the world. However because GitButler is designed with the assumption that I'm working online, we can skip a lot of this nonsense. Its status command understands that there is always a remote main that I care about and that when I run a status I need to understand my status relative to the remote main as it exists right now. Not how it existed the last time I remembered to pull. However this is far from the best trick it has up its sleeve. You're working on a feature, notice an unrelated bug, and now you have to stash, checkout, fix, commit, push, checkout back, stash pop. Context switching is expensive and error-prone. GitButler effectively hacks a solution into that fixes this with multiple branches applied simultaneously. Assign files to different branches without leaving your workspace. What do I mean by that? Let's start again with my status. Great, looks good. Alright, so let's say I make 2 new branches. I'm working on a new feature for adding auth and while I'm working on that, I see a typo I need to fix in a YAML. I can work on both things at the same time: And easily commit to both at the same time without doing anything weird. Stacked PRs are the "right" way to break up large changes so people on your team don't throw up at being asked to review 2000 lines, but Git makes them miserable.
When the base branch gets feedback, you have to rebase every dependent branch, resolve conflicts, force-push, and pray. Git doesn't understand branch dependencies. It treats every branch as independent, so you have to manually maintain the stack. GitButler solves this problem with first-class stacked branches. The dependency is explicit, and updates propagate automatically. So what do I mean? Let's say I make a new API endpoint in some Django app. First I make the branch. So let's say I'm working on the branch and get some good feedback on my PR. It's easy to resolve the comments there while leaving my branched off this as a stacked thing that understands the relationship back to the first branch as shown here. In practice this is just a much nicer way of dealing with a super common workflow. Maybe the most requested feature from new users I encounter is an easier undo. When you mess up in Git, recovery means diving into , understanding the cryptic output, and hoping you pick the right . One wrong move and you've made it worse. GitButler's is just easier to use. So the basic undo functionality is super simple to understand. rolls me back one operation. To me the mental model of a snapshot makes a lot more sense than the git history model. I do an action, I want to undo that action. This is better than the git option of: I've been using GitButler in my daily work since I got the email that the CLI was available and I've really loved it. I'm a huge fan of what this team is doing to effectively remodel and simplify Git operations in a world where almost nobody is using it in the way the tool was originally imagined to be used. I strongly encourage folks to go check it out for free at: https://docs.gitbutler.com/cli-guides/cli-tutorial/tutorial-overview . It does a ton of things (like help you manage PRs) that I didn't even touch on here. Let me know if you find something cool that I forgot at: https://c.im/@matdevdug
- You can't merge without GitHub (PRs are the merge mechanism)
- You can't deploy without GitHub (Actions is the deployment trigger)
- You can't get approval without GitHub (code review lives there)
- Your commits are essentially "drafts" until they exist on GitHub
- You never work disconnected intentionally
- You don't use local branches as long-lived divergent histories
- You don't merge locally between branches (GitHub PRs handle this)
- You don't use for archaeology — you use GitHub's blame/history UI (I often use git log personally but I have determined I'm in the minority on this).
- Your local repo might be offline for days or weeks
- The "remote" might be someone else's laptop, not a central server
- Divergent histories are expected and merging is a deliberate, considered act

iDiallo Yesterday

Microsoft Should Watch The Expanse

My favorite piece of technology in science fiction isn't lightsabers, flying spaceships, or even robots. It's AI. But not just any AI. My favorite is the one in the TV show The Expanse. If you watch The Expanse, the most advanced technology is, of course, the Epstein drive (an unfortunate name in this day and age). In their universe, humanity can travel to distant planets, the Belt, and Mars. Mars has the most high-tech military, which is incredibly cool. But the AI is still what impresses me most. If you watched the show, you're probably wondering what the hell I'm talking about right now. Because there is no mention of AI ever. The AI is barely visible. In fact, it's not visible at all. Most of the time, there aren't even voices. Instead, their computer interfaces respond directly to voice and gesture commands without returning any sass. In Season 1, Miller (the detective) is trying to solve a crime. Out of the blue, he just says, "Plot the course the Scopuli took over the past months." The course is plotted right there in his living room. No fuss, no interruptions, no "OK Google." And when he finally figures it out, no one says "You are absolutely right!" He then interacts with the holographic display in real time, asking for additional information and manipulating the data with gestures. At no point does he anthropomorphize the AI. It's always there, always available, always listening, but it never interrupts. This type of interaction is present throughout the series. In the Rocinante, James Holden will give commands like "seal bulkhead," "plot intercept course," or "scan for life signs," and the ship's computer simply executes. There are no loading screens, no chatbot personality trying to be helpful. The computer doesn't explain what it's doing or ask for confirmation on routine tasks. It just works. When Holden needs tactical information during a firefight, he doesn't open an app or navigate menus. He shouts questions, and relevant data appears on his helmet display. When Naomi needs to calculate a complex orbital maneuver, she doesn't fight with an interface. She thinks out loud, and the system provides the calculations she needs. This is the complete opposite of Microsoft's Copilot... Yes, this is about Copilot. In Microsoft's vision, they think they're designing an AI assistant, an AI copilot that's always there to help. You have Copilot in Excel, in Edge, in the taskbar. It's everywhere, yet it's as useless as you can imagine. What is Copilot? Is it ChatGPT or a wrapper around it? Is it a code assistant? Is it a search engine? Or wait, is it all of Microsoft Office now? It's attached to every application, yet it hasn't been particularly helpful. We now use Teams at work, and I see Copilot popping up every time to offer to help me, just like Clippy. OK, fine, I asked for the meaning of a term I hear often in this company. Copilot doesn't know. Well, it doesn't say it doesn't know. Instead, it gives me the definition of what it thinks the term means in general. Imagine for a second you're a manager and you hear developers talking about issues with Apache delaying a project. You don't know what Apache is, so you ask Copilot. It tells you that the Apache are a group of Native American tribes known for their resilience in the Southwest. If you don't know any better, you might take that definition at face value, never knowing that Copilot does not have access to any of the company data. Now in the project retro, you'll blame a Native American tribe for delaying the project.
Copilot is everywhere, yet it is nowhere. Nobody deliberately opens it to solve a problem. Instead, it's like Google Plus from back in the day. If you randomly clicked seven times on the web, you would somehow end up with a Google Plus account and, for some reason, two YouTube accounts. Copilot is visible when it should be invisible, and verbose when it should be silent. It interrupts your workflow to offer help you didn't ask for, then fails to provide useful answers when you actually need them. It's the opposite of the AI in The Expanse. It doesn't fade into the background. It is constantly reminding you that you need to use it here and now. In The Expanse, the AI doesn't have a personality because it doesn't need one. It's not trying to be your friend or impress you with its conversational abilities. It's a tool, refined to perfection. It is not trying to replace your job, it is there to support you. Copilot only exists to impress you, and it fails at it every single time. Satya should binge-watch The Expanse. I'm not advocating for AI everything, but I am all for creating useful tools. And Copilot, as it currently exists, is one of the least useful implementations of AI I've encountered. The best technology is invisible. It doesn't announce itself, doesn't demand attention, and doesn't try to be clever. It simply works when you need it and disappears when you don't. I know Microsoft won't read this or learn from it. Instead, I expect Windows 12 to be renamed Microsoft Copilot OS. In The Expanse, the AI turns people into heroes. In our world, Copilot, Gemini, ChatGPT, all want to be the heroes. And they will differentiate themselves by trying to be the loudest.

ava's blog Yesterday

are you out of touch?

In Mina Le's latest video, she quotes Adam Aleksic about quitting or severely reducing social media and phone use: "For one, it's the equivalent of sticking your head in the sand and pretending like the algorithm doesn't exist. Whether you like it or not, our culture is still being shaped by these platforms, and they won't go away by themselves. All of our music and fashion aesthetics are either defined by or against the algorithm, which means that even the "countercultural" tastes of the No Phone People are necessarily influenced by it. Engaging with algorithmic media - in a limited, deliberate manner - is thus important to understanding your experience in society as a whole. Not engaging, meanwhile, makes you vulnerable to being blindsided by sudden social or political shifts. Each Reddit argument and YouTube comment war is an epistemic basis for understanding the current state of cultural discourse. If you ignore those, you lose touch with reality as most people experience it." I can see why he'd think that, and maybe to a small extent I can understand. We feel out of control about our screen behavior at times, and we expect drastic changes from drastic measures, when a bit more nuance could be more helpful. But in my view, the importance of social media in staying culturally in touch is completely overstated. People still go outside! People go to work, to university, to school, to their clubs and other responsibilities or hobby spaces. They talk to their friends, family, superiors and acquaintances and they see what people vote for locally. They see the banners, flags, posters and stickers in their area. They witness what the strangers on the sidewalk, in cafes, restaurants, public transport and other spaces talk about. The quote, on the other hand, acts as if people's only connection to others or the outside world in general is through their phone, which is nuts. No one is blindsided by a cultural shift for not having social media unless they also do not interact with anyone outside of their home. Not everyone in your real life is part of "your bubble". Plenty of us have family members, peers or coworkers with wildly different views that we still interact with. Yes, these are mass platforms where tons of content gets created, and music snippets, memes and viral moments have shaped our time and memories of specific years, don't get me wrong - but this ignores that a lot of the accounts are simply lurkers who do not contribute at all. Many have a very weak output that has no impact at all (or no lasting one), or they create on a private, locked down profile for people they approved. For every area, country, and even globally, there are a few hundred creators who truly shape culture, but they do so in a way that either transcends the online space, or only makes a local impact that no one outside is missing out on. The view also doesn't take into account how sturdy algorithmic bubbles now seem to be. What some see as a huge trend online is actually something small in the grand scheme of things, and it's something their friend hasn't even seen, despite otherwise living in the same area and having similar tastes. You can be on social media and still "miss out" on whatever Adam means; you can also be off of social media and your friends will send you (or screen record for you) funny posts and short-form videos from Tumblr, Tiktok, X and more anyway.
News outlets and publications like 404media pick up internet drama and memes as well, and commentary/video essay YouTubers like Hannah Alonzo, Kiki Chanel, Brooke Sharks, Becauseimmissy and more show and break down viral videos and creators and give more insight into what's going on socially and culturally in 40-90 minute long videos. This is far more valuable to me (and my attention span, I guess!) than just seeing the original video on a feed. It contextualizes a lot of videos under a shared topic, identifies a pattern, and tends to be published a few weeks later, only giving time to things that truly lasted a while or were blowing up. It's an amazing filter, and you do not need to have any accounts or spend hours of time on a feed that makes you sad and harvests your data if you don't want to. You don't even need a phone to consume all that - you can do it on a cheap laptop, if you want to. I disagree with the notion that it is culturally important to be very aware of what goes on in comment sections. They are notoriously filled with inflammatory trash because it is easier to fire off a comment than to write an email or write a long-form blog post about it. People comment on things without opening the link or fully reading the post, and just read the title, rushing to be the first ones to comment and get more engagement. Comment sections also suffer from the usual review bias, where people usually only feel the need to comment if they feel strongly about something (usually negatively). That means the impression you'll get from these will be very skewed towards the loud, often abrasive minority and their upvoters. As things that make you feel strongly get more engagement, feeds get distorted and comments asking for the most extreme consequences or showing the most extreme view get catapulted to the top visually. The websites and many of the commenters not only skew towards focusing on US culture and issues, they also skew towards an American lens on things. If you really want to be in touch with culture (especially if you do not live in the US), you cannot base your cultural understanding on these! In a way, this quote reads to me like an addict justifying why they should stay; like a smoker who says they need the breaks to rest and socialize, or the alcoholic who says they need the bar to socialize and the drinks to loosen up, as "social lubricant". Lots of culture and tradition in my country involves alcohol, yet I don't drink, and the disadvantages of that have yet to show. It's important to note that social media is Adam Aleksic's job. He gets his success from his short-form content on TikTok. It will never be in the interest of people in that industry for others to log off or stop consuming. His job necessitates that he posts frequently, stays up to date, consumes the feed and jumps on any trend he can, even if it's just the latest slang word explained through an etymologist's lens. Content creators also have to, at times, overstate their importance and impact to justify it all - the sums of money, the dark patterns, money off of unethical platforms, or spending so much time in front of a screen, some even essentially living a lie for content. It's all supposed to be worth something, to be for the common good, be done for the people, and immortalize... something, I guess. In my view, not everyone needs to experience everything firsthand or be directly knowledgeable about everything. It's better that way, even.
You can always rely on articles, long-form video essays accessible without accounts, and podcasts from different sources, or simple conversations with others to keep you updated on stuff that's not on your radar. If it's important enough it will make its way to you, filtered and curated in a way that makes sense to you and focuses on what is truly important to you. If you want to know more, you are free to research and dive deeper. But it will always be impossible for you to be aware of everything. I do not need to know about the latest looksmaxxing trend that will vanish in a month, but I do care about how influencers consistently normalize overconsumption and how it is done. Others seeing it for me and sparking a conversation about it is how I was still able to write this without having an account on any of the big platforms. I know it can be scary to suddenly feel like you do not understand internet culture or memes anymore, but being less in touch with youth culture is a normal part of getting older, and the speed at which we go through trends and viral content has increased massively. Most things you do not understand right now that make you question whether it was the right choice to leave some socials behind are things you will never hear about again. You'll see what stands the test of time and what doesn't. The full piece is here, if you are interested in the quote's context. Reply via email Published 09 Feb, 2026

Stratechery Yesterday

Google Earnings, Google Cloud Crushes, Search Advertising and LLMs

Google announced a massive increase in CapEx that blew away expectations; the company's earnings results explain why the increase is justified.


2026-6: Week Notes

It was a short week at work thanks to the Waitangi Day long weekend. Over the last five years we’ve lived in New Zealand, my family has built a tradition of going camping over that weekend. You can usually count on decent weather over that weekend (it wasn’t that great this year unfortunately, but good enough). 🏕️We went camping by the river again, a spot the kids absolutely love. They spent hours swimming; it was too cold for me, but just being there was perfect. And just as I started properly relaxing, it was time to pack up and head home. ⛺️We’ve been talking about upgrading our tent and are looking at a Zempire one that seems like a good fit for how we actually camp. That said, I think I’ll save the whole camping and gear rabbit hole for a separate post, maybe. 📚I finished reading The Safekeep by Yael van der Wouden this week and really enjoyed it. It felt relatable in a lot of ways (war is unfair, so unfair). I’m looking forward to talking about it at book club. 📖Lately, I’ve been reading a lot about minimalism and decluttering. Minimalism feels so close and yet always just out of reach. People often comment on how minimalist our home is, but to me, it’s still not minimalist enough. I want to be much more ruthless about what I actually need versus what I’m keeping simply because I have space for it. That’s very much a work in progress. And my husband’s tendency to keep every gift anyone has ever given him definitely doesn’t help. 📍Work itself has been intense. When things are this busy, I feel depleted and don’t have much energy left for blogging or creative hobbies. In those phases, I mostly want to read or do something physical. Even if it is sorting out a drawer or clearing a shelf… or getting rid of something. There’s professional development I want to do, but right now I just feel too tired to engage with it properly. 🪑I booked an appointment with a psychotherapist for the first time. Work covers a few mental health sessions, and I feel like I’m at a point where talking things through could help. I chose someone who felt like a good fit, hard to explain why, but something resonated. We’re meeting online tomorrow, and I want to talk about the expectations I place on myself, where I feel I fall short, and my ongoing anxiety. 📸On a more practical note, I mostly kept up with my photo management for January. I didn’t finish everything, but I did most of it. 🤓I’ve also decided to buy a new Kindle. My current one is about 15 years old and still works perfectly, but newer Paperwhites let you email highlights directly from the device. I read a lot of PDFs and want my highlights to flow straight into Readwise without any friction (at the moment I have to manually transfer them using a cable). I also tried the latest Paperwhite recently, and it’s fast. Once you experience that, it’s hard to go back. The plan is to keep one Kindle in my bedroom for bedtime reading and one for the living room. Small luxury, but I’m really looking forward to it. I tried to buy it yesterday, but the shop was out of stock. Will try again later.


Leaning on AI

It’s been five months since my last dedicated Lean post and as usual I have started to lose steam on Lean projects. After the thrill of discovering the world of formalized mathematics started to wear off, I did not find motivation to push as hard as before. The SF Math with Lean work group kept me vaguely connected (at least in the one hour a week we meet (see retro)) but other than that I wasn’t putting in more than a few hours a week on math and Lean.

HeyDingus Yesterday

7 Things This Week [#182]

A weekly list of interesting things I found on the internet, posted on Sundays. Sometimes themed, often not. 1️⃣ Jose Munoz has a good tip for not getting sucked into doom-scrolling apps by Siri Suggestions in Search and the App Library: simply hide them from those areas. [ 🔗 josemunozmatos.com ] 2️⃣ I love a good stats-based pitch. Herman provides one for the benefits of morning exercise. [ 🔗 herman.bearblog.dev ] 3️⃣ Jason Fried explains a clever design detail about the power reserve indicator on a mechanical watch. [ 🔗 world.hey.com ] 4️⃣ I found myself nodding along to Chris Coyier’s list of words you should probably avoid using in your writing. [ 🔗 css-tricks.com ] 5️⃣ I spent a surprising amount of time recently perusing the depths of Louie Mantia’s portfolio and blog after reading his People & Blogs interview. He’s worked on so many cool things, lots of which have touched my life. [ 🔗 lmnt.me ] 6️⃣ Robert Birming made me feel a little better about my less-than-tidy house. [ 🔗 robertbirming.com ] 7️⃣ I’m not going to buy it, but I’m certainly intrigued by this tiny eReader that attaches via MagSafe onto the back of your phone. I love my Kobo, but it so often gets left behind. This would be a remedy. [ 🔗 theverge.com ] Thanks for reading 7 Things. If you enjoyed these links or have something neat to share, please let me know. And remember that you can get more links to internet nuggets that I’m finding every day by following me @jarrod on the social web. HeyDingus is a blog by Jarrod Blundy about technology, the great outdoors, and other musings. If you like what you see — the blog posts, shortcuts, wallpapers, scripts, or anything — please consider leaving a tip, checking out my store, or just sharing my work. Your support is much appreciated! I’m always happy to hear from you on social, or by good ol' email.

./techtipsy Yesterday

SteamOS on a ThinkPad P14s gen 4 (AMD) is quite nice

In April 2024, I wrote about the Lenovo ThinkPad P14s gen 4 and how it does not suck under Linux. That is still true. It’s been fantastic, and a very reliable laptop during all that time. The P14s gen 4 comes with a CPU that is still solid today, the AMD Ryzen 7 PRO 7840U, and that comes with impressive integrated graphics in the form of an AMD Radeon 780M. I’ve had a Steam Deck. I’ve also accidentally built a Steam Machine. I had to put SteamOS on this laptop to see how well it does. I did a quick Bazzite test the last time around, but after being impressed with how well the stock SteamOS image runs on a random machine with an AMD GPU, I had to test that, too. The normal way to install SteamOS on a machine is to take the Steam Deck recovery image and to install it on your own machine that has one NVMe SSD. I didn’t want to do exactly that, I wanted to run it off of a USB SATA SSD, which the recovery image does not support, as it hard-codes the target SSD for the SteamOS installation to . There’s a handy project out there that customizes the recovery script to allow you to install SteamOS to any target device, but I learned about that after the fact. I went a slightly different route: I imaged the SteamOS installation from my DIY Steam Machine build, wrote it to the 4TB USB SSD that I had available for testing, and after that I resized the partition to take up the full disk. Bam, clean SteamOS on a USB SSD! Oh, and before I did that, I did the same process but to a 128 GB Samsung FIT USB 3.0 thumb drive. The game library images did load a bit slowly, but it was a great demonstration of how low you can go with the hardware requirements. I wouldn’t recommend actually installing games on such a setup as that would likely kill the USB thumb drive very quickly. I ran the SteamOS setup on this laptop over a USB-C dock that only supports running at up to 4K at 30Hz, so I did my testing with a 1080p 60Hz setup. You’re unlikely to want to run this setup at 4K anyway, unless you’re a fan of light, easy to run games like Katamari or Donut County. In most games, the experience was enjoyable. 1080p resolution, maybe change the settings to medium or low in some cases, and you’ll likely have a solid gaming experience. Forza Horizon 4? No problem, 1080p high settings and a solid, consistent experience. Need for Speed Hot Pursuit Remastered was an equally enjoyable experience, and I did not have to turn the settings down from high/ultra. God of War Ragnarök was pushing the setup to the limits. With 1080p, low/medium settings you can expect 30+ FPS. If you include AMD FSR settings in the mix and also enable FSR frame generation, you can have a perfectly enjoyable 50-60 FPS experience. Some UI hints were a bit “laggy” with frame generation, but I’m genuinely surprised how well that rendering trick worked. I’ll admit it, my eyesight is not the best, but given the choice of a crisp but laggy picture, and a slightly blurrier but smoother experience, I’d pick the latter. After a pint of Winter Stout, you won’t even notice the difference. 1 Wreckfest was also heaps fun. It did push the limits of the GPU at times, but running it at 1080p and medium/high settings is perfectly enjoyable. The observed power usage throughout the heaviest games, measured via SteamOS performance metrics ( ), was around 30-40 W, with the GPU using up most of that budget. In most games, the CPU was less heavily loaded, and in the games that required good single thread performance, it could provide it. I like SteamOS.
It’s intentionally locked down in some aspects (but you can unlock it with one command), and the Flatpak-only approach to software installation will make some people mad, but I like this balance. It almost feels like a proper console-type experience, almost. Valve does not officially support running SteamOS on random devices, but they haven’t explicitly prevented it either. I love that. Take any computer from AMD that has been manufactured in the last 5 years, slap SteamOS on it, and there is a very high chance that you’ll have a lovely gaming experience, with the level of detail and resolution varying depending on what hardware you pick. A top of the line APU from AMD seems to do the job well enough for most casual gamers like myself, and if the AMD Strix Halo based systems were more affordable, I would definitely recommend getting one if you want a small but efficient SteamOS machine. Last year, we saw the proliferation of gaming-oriented Linux distros. The Steam Machine is shipping this year. DankPods is covering gaming on Linux. 2026 has to be the year of the Linux (gaming) desktop. that’s the tipsy part in techtipsy ↩︎

baby steps Yesterday

Hello, Dada!

Following on my Fun with Dada post, this post is going to start teaching Dada. I’m going to keep each post short – basically just what I can write while having my morning coffee. 1 Here is a very first Dada program. I think all of you will be able to guess what it does. Still, there is something worth noting even in this simple program: “You have the right to write code. If you don’t write a function explicitly, one will be provided for you.” Early on I made the change to let users omit the function and I was surprised by what a difference it made in how light the language felt. Easy change, easy win. Here is another Dada program. Unsurprisingly, this program does the same thing as the last one. “Convenient is the default.” Strings support interpolation (i.e., ) by default. In fact, that’s not all they support, you can also break them across lines very conveniently. This program does the same thing as the others we’ve seen: When you have a immediately followed by a newline, the leading and trailing newline are stripped, along with the “whitespace prefix” from the subsequent lines. Internal newlines are kept, so something like this: would print Of course you could also annotate the type of the variable explicitly: You will find that it is . This in and of itself is not notable, unless you are accustomed to Rust, where the type would be . This is of course a perennial stumbling block for new Rust users, but more than that, I find it to be a big annoyance – I hate that I have to write or everywhere that I mix constant strings with strings that are constructed. Similar to most modern languages, strings in Dada are immutable. So you can create them and copy them around: OK, we really just scratched the surface here! This is just the “friendly veneer” of Dada, which looks and feels like a million other languages. Next time I’ll start getting into the permission system and mutation, where things get a bit more interesting. My habit is to wake around 5am and spend the first hour of the day doing “fun side projects”. But for the last N months I’ve actually been doing Rust stuff, like symposium.dev and preparing the 2026 Rust Project Goals. Both of these are super engaging, but all Rust and no play makes Niko a dull boy. Also a grouchy boy. ↩︎
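To make the multi-line string behavior described in this post concrete, here is a small Python analogue of the described stripping rules (my own sketch, not Dada syntax or its implementation): drop the leading and trailing newline, remove the common whitespace prefix, and keep internal newlines.

```python
import textwrap

def dedent_block(raw: str) -> str:
    """Rough analogue of the described stripping for multi-line strings."""
    lines = raw.split("\n")
    if lines and lines[0] == "":
        lines = lines[1:]                  # strip the leading newline
    if lines and lines[-1].strip() == "":
        lines = lines[:-1]                 # strip the trailing newline + closing indent
    return textwrap.dedent("\n".join(lines))

greeting = dedent_block("""
    Hello, Dada!
    Goodbye, boilerplate.
""")
print(greeting)  # prints two lines, with the four-space prefix removed
```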

Dominik Weber Yesterday

Lighthouse update February 9th

During the past week I finished the most important onboarding improvements. For new users it's now easier to get into Lighthouse. The biggest updates were:
- An onboarding email drip which explains the features of Lighthouse
- Feed subscribe changes, now showing a suggestion list of topics and curated feeds, and a search for websites and feeds to subscribe to
The next step became clear after talking to users and potential customers. The insight was that even if the structure and features of Lighthouse are much better for content curation, it doesn't matter if not all relevant content can be pulled into Lighthouse. This means first and foremost websites that don't have a feed or newsletter. So the next feature will be a website-to-feed conversion, so that websites can be subscribed to even if they don't have a feed or newsletter.
## Pricing
Big parts of the indie business community give the advice to charge more. "You're not charging enough, charge more" is generic and relatively popular advice. I stopped frequenting these (online) places as much, so I'm not sure they give the same advice in the current environment, but for a long time I read this advice a lot. I'm sure in some areas this holds true, but I have since realized that the content aggregator space is different. It's a relatively sticky type of product, people don't like to switch. Even if OPML exports and imports make it easy to move feeds, additional custom features like newsletter subscriptions, rule setups, tags, and so on make it harder to move. So people rightfully place a risk premium on smaller products. Pricing it close to the big ones is too high, and I now consider this a mistake. So I'm lowering the price from 10€ to 7€ for the premium plan. Another issue is the 3-part pricing structure. Everyone does it because the big companies do. And maybe at this point the big companies do it because "it's always been done that way". But as a small company I don't yet know where the lines are, which features are important to which customer segment. Therefore I'll remove the 2nd paid plan, to only have a free and one paid plan. I'm worried that the pricing changes are seen as erratic, but honestly too few people care yet for this worry to be warranted or important. What I find interesting is that I'm much more confident on the product side than on the business side. On the one hand this is clear, because I'm a software engineer. But on the other hand I believe it's also because (software) products are additive. In the sense that features can always be added. For pricing, there is only ever one. The more time I have the more features I can add, so the only decision is what to do first. For pricing it doesn't matter how much time I have, I must always choose between one or the other. It doesn't really have a consequence, but I found it an interesting meta-thought.
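For illustration only, here is a rough Python sketch of what a website-to-feed conversion could look like. This is my own guess at the shape of such a feature, not Lighthouse's implementation; the URL is a placeholder, and a real converter would need per-site heuristics to pick out actual article links and dates.

```python
import urllib.request
from html.parser import HTMLParser
from xml.sax.saxutils import escape

class LinkCollector(HTMLParser):
    """Collect (href, link text) pairs from a page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            self.links.append((self._href, " ".join(self._text).strip()))
            self._href = None

def site_to_rss(url: str, limit: int = 20) -> str:
    """Fetch a page and emit minimal RSS 2.0 built from its links."""
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    items = "".join(
        f"<item><title>{escape(text or href)}</title><link>{escape(href)}</link></item>"
        for href, text in collector.links[:limit]
    )
    return ('<?xml version="1.0"?><rss version="2.0"><channel>'
            f"<title>Feed for {escape(url)}</title><link>{escape(url)}</link>"
            f"{items}</channel></rss>")

print(site_to_rss("https://example.com"))  # placeholder URL
```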


A Language For Agents

Last year I first started thinking about what the future of programming languages might look like now that agentic engineering is a growing thing. Initially I felt that the enormous corpus of pre-existing code would cement existing languages in place but now I’m starting to think the opposite is true. Here I want to outline my thinking on why we are going to see more new programming languages and why there is quite a bit of space for interesting innovation. And just in case someone wants to start building one, here are some of my thoughts on what we should aim for! Does an agent perform dramatically better on a language that it has in its weights? Obviously yes. But there are less obvious factors that affect how good an agent is at programming in a language: how good the tooling around it is and how much churn there is. Zig seems underrepresented in the weights (at least in the models I’ve used) and also changing quickly. That combination is not optimal, but it’s still passable: you can program even in the upcoming Zig version if you point the agent at the right documentation. But it’s not great. On the other hand, some languages are well represented in the weights but agents still don’t succeed as much because of tooling choices. Swift is a good example: in my experience the tooling around building a Mac or iOS application can be so painful that agents struggle to navigate it. Also not great. So, just because it exists doesn’t mean the agent succeeds and just because it’s new also doesn’t mean that the agent is going to struggle. I’m convinced that you can build yourself up to a new language if you don’t want to depart everywhere all at once. The biggest reason new languages might work is that the cost of coding is going down dramatically. The result is that the breadth of an ecosystem matters less. I’m now routinely reaching for JavaScript in places where I would have used Python. Not because I love it or the ecosystem is better, but because the agent does much better with TypeScript. The way to think about this: if important functionality is missing in my language of choice, I just point the agent at a library from a different language and have it build a port. As a concrete example, I recently built an Ethernet driver in JavaScript to implement the host controller for our sandbox. Implementations exist in Rust, C, and Go, but I wanted something pluggable and customizable in JavaScript. It was easier to have the agent reimplement it than to make the build system and distribution work against a native binding. New languages will work if their value proposition is strong enough and they evolve with knowledge of how LLMs train. People will adopt them despite being underrepresented in the weights. And if they are designed to work well with agents, then they might be designed around familiar syntax that is already known to work well. So why would we want a new language at all? The reason this is interesting to think about is that many of today’s languages were designed with the assumption that punching keys is laborious, so we traded certain things for brevity. As an example, many languages — particularly modern ones — lean heavily on type inference so that you don’t have to write out types. The downside is that you now need an LSP or the resulting compiler error messages to figure out what the type of an expression is. Agents struggle with this too, and it’s also frustrating in pull request review where complex operations can make it very hard to figure out what the types actually are.
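As a small illustration of that review problem, using Python as a stand-in (the function and names here are invented for the example): with nothing written out, a reviewer or a grep-only agent has to guess what flows through the code, while spelled-out types keep that information local to the diff.

```python
import json
from typing import Any

# Undeclared: a reader of the diff cannot tell what this returns without
# running tooling or chasing call sites.
def load_config(path):
    with open(path) as f:
        return json.load(f)

# Written out: more keystrokes, but the type is visible right where the
# change is reviewed, no language server required.
def load_config_typed(path: str) -> dict[str, Any]:
    with open(path) as f:
        data = json.load(f)
    if not isinstance(data, dict):
        raise ValueError(f"expected a JSON object in {path}")
    return data
```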
Fully dynamic languages are even worse in that regard. The cost of writing code is going down, but because we are also producing more of it, understanding what the code does is becoming more important. We might actually want more code to be written if it means there is less ambiguity when we perform a review. I also want to point out that we are heading towards a world where some code is never seen by a human and is only consumed by machines. Even in that case, we still want to give an indication to a user, who is potentially a non-programmer, about what is going on. We want to be able to explain to a user what the code will do without going into the details of how. So the case for a new language comes down to: given the fundamental changes in who is programming and what the cost of code is, we should at least consider one. It’s tricky to say what an agent wants because agents will lie to you and they are influenced by all the code they’ve seen. But one way to estimate how they are doing is to look at how many changes they have to perform on files and how many iterations they need for common tasks. There are some things I’ve found that I think will be true for a while. The language server protocol lets an IDE infer information about what’s under the cursor or what should be autocompleted based on semantic knowledge of the codebase. It’s a great system, but it comes at one specific cost that is tricky for agents: the LSP has to be running. There are situations when an agent just won’t run the LSP — not because of technical limitations, but because it’s also lazy and will skip that step if it doesn’t have to. If you give it an example from documentation, there is no easy way to run the LSP because it’s a snippet that might not even be complete. If you point it at a GitHub repository and it pulls down individual files, it will just look at the code. It won’t set up an LSP for type information. A language that doesn’t split into two separate experiences (with-LSP and without-LSP) will be beneficial to agents because it gives them one unified way of working across many more situations. It pains me as a Python developer to say this, but whitespace-based indentation is a problem. The underlying token efficiency of getting whitespace right is tricky, and a language with significant whitespace is harder for an LLM to work with. This is particularly noticeable if you try to make an LLM do surgical changes without an assisted tool. Quite often they will intentionally disregard whitespace, add markers to enable or disable code and then rely on a code formatter to clean up indentation later. On the other hand, braces that are not separated by whitespace can cause issues too. Depending on the tokenizer, runs of closing parentheses can end up split into tokens in surprising ways (a bit like the “strawberry” counting problem), and it’s easy for an LLM to get Lisp or Scheme wrong because it loses track of how many closing parentheses it has already emitted or is looking at. Fixable with future LLMs? Sure, but also something that was hard for humans to get right too without tooling. Readers of this blog might know that I’m a huge believer in async locals and flow execution context — basically the ability to carry data through every invocation that might only be needed many layers down the call chain. Working at an observability company has really driven home the importance of this for me. The challenge is that anything that flows implicitly might not be configured. Take for instance the current time. 
You might want to implicitly pass a timer to all functions. But what if a timer is not configured and all of a sudden a new dependency appears? Passing all of it explicitly is tedious for both humans and agents and bad shortcuts will be made. One thing I’ve experimented with is having effect markers on functions that are added through a code formatting step. A function can declare that it needs the current time or the database, but if it doesn’t mark this explicitly, it’s essentially a linting warning that auto-formatting fixes. The LLM can start using something like the current time in a function and any existing caller gets the warning; formatting propagates the annotation. This is nice because when the LLM builds a test, it can precisely mock out these side effects — it understands from the error messages what it has to supply. For a rough sketch of what this could look like, see the example after this section. Agents struggle with exceptions, they are afraid of them. I’m not sure to what degree this is solvable with RL (Reinforcement Learning), but right now agents will try to catch everything they can, log it, and do a pretty poor recovery. Given how little information is actually available about error paths, that makes sense. Checked exceptions are one approach, but they propagate all the way up the call chain and don’t dramatically improve things. Even if they end up as hints where a linter tracks which errors can fly by, there are still many call sites that need adjusting. And like the auto-propagation proposed for context data, it might not be the right solution. Maybe the right approach is to go more in on typed results, but that’s still tricky for composability without a type and object system that supports it. The general approach agents use today to read files into memory is line-based, which means they often pick chunks that span multi-line strings. One easy way to see this fall apart: have an agent work on a 2000-line file that also contains long embedded code strings — basically a code generator. The agent will sometimes edit within a multi-line string assuming it’s the real code when it’s actually just embedded code in a multi-line string. For multi-line strings, the only language I’m aware of with a good solution is Zig, but its prefix-based syntax is pretty foreign to most people. Reformatting also often causes constructs to move to different lines. In many languages, trailing commas in lists are either not supported (JSON) or not customary. If you want diff stability, you’d aim for a syntax that requires less reformatting and mostly avoids multi-line constructs. What’s really nice about Go is that you mostly cannot import symbols from another package into scope without every use being prefixed with the package name. Eg: fmt.Println instead of a bare Println. There are escape hatches (import aliases and dot-imports), but they’re relatively rare and usually frowned upon. That dramatically helps an agent understand what it’s looking at. In general, making code findable through the most basic tools is great — it works with external files that aren’t indexed, and it means fewer false positives for large-scale automation driven by code generated on the fly (eg: , invocations). Much of what I’ve said boils down to: agents really like local reasoning. They want it to work in parts because they often work with just a few loaded files in context and don’t have much spatial awareness of the codebase. They rely on external tooling like grep to find things, and anything that’s hard to grep or that hides information elsewhere is tricky.
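Picking up the effect-marker idea from the implicit-context discussion above, here is a hypothetical Python sketch; the Env bundle, the marker comment, and the function names are all invented for illustration, since the post does not specify a concrete syntax.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass
class Env:
    """Explicit bundle of ambient effects a function is allowed to use."""
    now: Callable[[], datetime]

# effect: uses(clock) -- the kind of marker a formatter/linter could add here
# and propagate to every caller once this function starts reading the time.
def order_age_days(env: Env, created_at: datetime) -> float:
    return (env.now() - created_at).total_seconds() / 86400.0

# Production callers pass the real clock...
prod_env = Env(now=lambda: datetime.now(timezone.utc))

# ...and a test supplies the effect precisely, with no hidden global state,
# which is exactly what the warning/error message would tell an agent to do.
def test_order_age_days() -> None:
    fixed = datetime(2026, 2, 9, tzinfo=timezone.utc)
    created = datetime(2026, 2, 7, tzinfo=timezone.utc)
    assert order_age_days(Env(now=lambda: fixed), created) == 2.0
```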
What makes agents fail or succeed in many languages is just how good the build tools are. Many languages make it very hard to determine what actually needs to be rebuilt or retested because there are too many cross-references. Go is really good here: it forbids circular dependencies between packages (import cycles), packages have a clear layout, and test results are cached.

Agents often struggle with macros. It was already pretty clear that humans struggle with macros too, but the argument for them was mostly that code generation was a good way to have less code to write. Since that is less of a concern now, we should aim for languages with less dependence on macros. There's a separate question about generics and comptime. I think they fare somewhat better because they mostly generate the same structure with different placeholders, and it's much easier for an agent to understand that.

Related to greppability: agents often struggle to understand barrel files, and they don't like them. Not being able to quickly figure out where a class or function comes from leads to imports from the wrong place, or to missing things entirely and wasting context by reading too many files. A one-to-one mapping from where something is declared to where it's imported from is great, and it does not have to be overly strict either. Go kind of goes this way, but not to an extreme. Any file within a directory can define a function, which isn't optimal, but it's quick enough to find and you don't need to search too far. It works because packages are forced to be small enough to find everything with grep.

The worst case is free re-exports all over the place that completely decouple the implementation from any trivially reconstructable location on disk. Or worse: aliasing. Agents often hate it when aliases are involved. In fact, you can even get them to complain about it in thinking blocks if you let them refactor something that uses lots of aliases. Ideally a language encourages good naming and discourages aliasing at import time as a result.

Nobody likes flaky tests, but agents like them even less. Ironic, given how good agents are at creating flaky tests in the first place. That's because agents currently love to mock, and most languages do not support mocking well. So many tests end up accidentally not being concurrency safe, or depending on development-environment state that then diverges in CI or production. Most programming languages and frameworks make it much easier to write flaky tests than non-flaky ones, because they encourage indeterminism everywhere.

In an ideal world the agent has one command that lints and compiles and tells it whether everything worked out, and maybe another command to run all the tests that need running. In practice most environments don't work like this. For instance, in TypeScript you can often run the code even though it fails type checks. That can gaslight the agent. Likewise, different bundler setups can cause something to succeed locally only for a slightly different setup in CI to fail later. The more uniform the tooling, the better. Ideally it either runs or it doesn't, and there is mechanical fixing for as many linting failures as possible so that the agent does not have to do it by hand.

So, will we actually see new languages? I think we will. We are writing more software now than we ever have — more websites, more open source projects, more of everything. Even if the ratio of new languages stays the same, the absolute number will go up.
But I also truly believe that many more people will be willing to rethink the foundations of software engineering and the languages we work with. That's because while for some years it has felt like you need to build a lot of infrastructure for a language to take off, now you can target a rather narrow use case: make sure the agent is happy, and extend from there to the human.

I just hope we see two things. First, some outsider art: people who haven't built languages before trying their hand at it and showing us new things. Second, a much more deliberate effort to document what works and what doesn't from first principles. We have actually learned a lot about what makes good languages and how to scale software engineering to large teams. Yet a consumable, written-down overview of good and bad language design is very hard to come by. Too much of it has been shaped by opinion on rather pointless things instead of hard facts.

Now, though, we are slowly getting to the point where facts matter more, because you can actually measure what works by seeing how well agents perform with it. No human wants to be subjected to surveys, but agents don't care. We can see how successful they are and where they are struggling.

0 views
Den Odell Yesterday

Fast by Default

After 25 years building sites for global brands, I kept seeing the same pattern appear. A team ships new features, users quietly begin to struggle, and only later do the bug reports start trickling in. Someone finally checks the metrics, panic spreads, and feature development is put on hold so the team can patch problems already affecting thousands of people. The fixes help for a while, but a month later another slowdown appears and the cycle begins again. The team spends much of its time firefighting instead of building.

I call this repeating sequence of ship, complain, panic, patch the Performance Decay Cycle. Sadly, it's the default state for many teams and it drains morale fast. There has to be a better way.

When I stepped into tech-lead roles, I started experimenting. What if performance was something we protected from the start rather than something we cleaned up afterward? What if the entire team shared responsibility instead of relying on a single performance-minded engineer to swoop in and fix things? And what if the system itself made performance visible early, long before issues hit production? Across several teams and many iterations, a different pattern began to emerge. I now call it Fast by Default.

Fast by Default is the practice of embedding performance into every stage of development so speed becomes the natural outcome, not a late rescue mission. It involves everyone in the team, not just engineers.

Most organizations treat performance as something to address when it hurts, or they schedule a bug-fix sprint every few months. Both approaches are expensive, unreliable, and almost always too late. By the time a slowdown is noticeable, the causes are already baked into the rendering strategy, the data-fetching sequence, and the component boundaries. These decisions define a ceiling on how fast your system can ever be. You can tune within that ceiling, but without a rebuild, you can't break through it.

Meanwhile, the baseline slowly drifts. Slow builds and sluggish interactions become expected. What felt unacceptable in week 1 feels normal by month 6. And once a feature ships, the attention shifts. Performance work competes with new ideas and roadmap pressure. Most teams never return to clean things up.

Performance regressions rarely announce themselves through one dramatic failure. They accumulate quietly, through dozens of reasonable decisions. A feature adds a little more JavaScript, a new dependency brings a hidden transitive load, and a design tweak introduces layout movement. A single page load still feels fine, but interactions begin to feel heavier. More features are added, more code ships, and slowly the slow path becomes the normal path. It shows up most clearly at the dependency level: each import made sense in isolation and passed through code review. No single decision broke the experience; the combination did.

This is why prevention always beats the cure. If you want to avoid returning to a culture of whack-a-mole fixes, you need to change the incentives so fast outcomes happen naturally. The core idea is simple: make the fast path easier than the slow path. Once you do that, performance stops depending on vigilance or heroics. You create systems and workflows that quietly pull the team toward fast decisions without friction.

Here's what this looks like day-to-day: if your starting point is a client-rendered SPA, you're already fighting uphill.
Server-first rendering with selective hydration (often called the Islands Architecture) gives you a performance margin that doesn't require constant micro-optimization to maintain. It also helps clarify how much of your SPA truly needs to be a SPA.

When dependency size appears directly in your IDE, bundle size and budget checks run automatically in CI, and hydration warnings surface in local development, developers see the cost of their changes immediately and fix issues while the context is still fresh.

Reaching for utility-first libraries, choosing smaller dependencies, and cultivating a culture where the first question is "do we need this?" rather than "why not?" keeps complexity from compounding.

When reviewers consistently ask how a change affects render time or memory pressure, the entire team levels up. The question becomes part of the craft rather than an afterthought, and eventually it appears in every pull request.

Teams that stay fast don't succeed because they have more performance experts; they succeed because they distribute ownership. Designers think about layout stability, product managers scope work with speed in mind, and engineers treat performance budgets as part of correctness rather than a separate concern. Everyone understands that shipping fast code is as important as shipping correct code.

For this to work, regressions need to surface early. That requires continuous measurement, clear ownership, and tooling that highlights problems before users do. Once the system pulls in the right direction with minimal resistance, performance becomes self-sustaining.

A team with fast defaults ships fast software in month 1, and they're still shipping fast software in month 12 and month 36 because small advantages accumulate in their favor. A team living in the Performance Decay Cycle may start with acceptable performance, but by month 12 they find themselves planning a dedicated performance sprint, and by month 36 they're discussing a rewrite. The difference isn't expertise or effort; it's the approach they started from.

Speed is leverage because it builds trust, sharpens design, and accelerates development. Once you lose it, you lose more than seconds: you lose users, revenue, and confidence in your own system. Fast by Default is how teams break this cycle and build systems that stay fast as they grow. For more on this model, see https://fastbydefault.com.

This article was first published on 4 December 2025 at https://calendar.perfplanet.com/2025/fast-by-default/

0 views
Kev Quirk Yesterday

Step Aside, Phone!

I read this post on Manu's blog and it immediately resonated. I've been spending more time than I'd like to admit staring at my phone recently, and most of that consists of a stupid game, or YouTube shorts. If you also want to cut down on some of your phone usage, feel free to join in; I'll be happy to include links to your posts.

As a benchmark, my screen time this week averaged around 2.5 hours per day on my phone and 1.5 hours per day on my tablet. That's bloody embarrassing - 28 hours in one week sat staring at (mostly) pointless shite on a fucking screen.

I think my phone usage is more harmful as it's stupid stuff, whereas my tablet is more reading posts in my RSS reader, and "proper" YouTube (whatever that is). I think reducing both and picking up my Kindle more - or just being bored - will be far healthier though.

So count me in, Manu.

Thanks for reading this post via RSS. RSS is great, and you're great for using it. ❤️ You can reply to this post by email, or leave a comment.

1 view

How To Quiet A Ugreen 4800 Plus Without Sacrificing Drive Temps

I recently got a Ugreen 4800 Plus NAS, and it is basically perfect for what I wanted. Four bays, enough CPU, enough RAM, nice build quality, and it does not look like a sci-fi router from 2012. The first thing I did was wipe the OS it shipped with and install TrueNAS. That part was also great.

The not-so-great part was the noise. I expected it to be louder than my old Synology, mostly because I moved from "HDDs in a plastic box" to "a more PC-like NAS with more airflow". Still, it was louder than I thought it would be, and it had this annoying behavior where the fan would randomly ramp up. Which is exactly the kind of thing you notice at night.

0 views
ava's blog 2 days ago

privacy professionals: working at a messaging/social media platform

Welcome to a little series I'm starting, where I ask people working in the privacy field 7 questions about their work! This includes Data Protection Officers, Managers and Consultants, and other members of Privacy & Compliance teams. I find career advice and more specific information about the field to be lacking online, so I want to change that and host it myself :)

First up is an employee from the privacy team at a social media/messaging platform! I messaged them via their support platform with the questions and asked for consent to publish the answers, and received this response from one of the employees. Note: an earlier version of this post mentioned the company name; they have since asked me to anonymize it.

1. Can you describe your career path and what led you to become a Data Protection Officer (or similar role)?

I started as a lawyer and then transitioned into the corporate world, leveraging my law degree in a major corporation in their emerging privacy program. Another one of our teammates actually spent 25 years in teaching, took her CIPP/US, and transitioned careers. In privacy specifically you will see many backgrounds and stories of people "falling into" this career. Our DPO has experience across multiple companies and years of experience to make it to where he is now as a leader in the company.

2. What drew you specifically to data protection law and privacy as a profession?

I loved the legal aspect of it and the ability to leverage my law degree. Fascinating intersection where humanity meets privacy.

3. What does a typical day in your role look like?

Our team works with customer-facing requests and internal team meetings, discussing ways we can continue to serve our customers and also lead with excellence in compliance and communication. Compliance, legal regulations, new laws etc. are all things we spend time working on, studying, and implementing within our platform.

4. What aspects of your work do you find most rewarding or challenging?

Every day comes with a new opportunity. With the ever-changing privacy landscape the team is always learning, growing, and adapting. It's a very dynamic atmosphere. Love the challenge!

5. Which skills, qualities, or experiences do you consider essential for someone in such a role?

Being a good listener as number one! A background in privacy law and certifications such as CIPP/US, AI, etc. A well-rounded approach to both the legal aspects and the human impact, which can come through experience, reading, and working in the industry.

6. How do you keep up with the rapidly changing landscape of data protection regulations?

Reading, conferences, webinars, the IAPP, and associations. Once you immerse yourself in understanding privacy you will find it touches virtually every part of our human existence in the marketplace, health, education, housing, finance etc. It is truly a fascinating industry.

7. If you could give advice to someone aspiring to enter this role, what would it be?

It's a great career with growing impact across all industries. I would say consume content that makes you better: books, podcasts, articles. Check out the IAPP website, which has lots of resources. Stay up to date on different laws and regulations being passed. Finally, keep reaching out to industry leaders, and think about how you want to show up, whether through certification, law school, etc. It is always a bonus to get internships or the equivalent.
In the end though, I would say: no matter what you do, work on your character through the decisions that you make in your day-to-day life now. Integrity, honesty, work ethic, humility, and curiosity will take you far in whatever you do!

Thank you to this employee for the reply! I'm still reaching out to other companies, but if you know some that would be interested, or know people working in the privacy field who would like to answer these, please shoot me a message! :)

Reply via email

Published 08 Feb, 2026

0 views
Kev Quirk 2 days ago

I've Moved to Pure Blog!

In my last post I introduced Pure Blog and ended the post by saying: "I'm going to take a little break from coding for a few days, then come back and start migrating this site over to Pure Blog. Dogfoodin' yo!"

Yeah, I didn't take a break. Instead I've pretty much spent my entire weekend at the computer migrating this site from Jekyll to Pure Blog, and trying to make sure everything works ok. Along the way there were features that I wanted to add into Pure Blog to make my life easier, which I've now done. These include:

- Hooks so I can automatically purge Bunny CDN cache when posts are published/updated.
- Implementing data files so I can generate things like my Blogroll and Projects pages from YML lists.
- Adding shortcodes so I can have a site-wide email setting and things like my Reply by email button work at the bottom of every post.
- A post layout partial so I can add custom content below my posts without moving away from Pure Blog's upstream code.

As well as all this, I've also changed the way Pure Blog is formatted so that it's easier for people to update their Pure Blog version. While I was there, I also added a simple little update page in settings so people can see if they're running the latest version or not.

Finally, I decided to give the site a new lick of paint. Which was by far the easiest part of this whole thing - just some custom CSS in the CMS and I ended up with this nice (albeit brutal) new design. The way I've architected Pure Blog should allow me to very easily change the design going forward, which is just fantastic for a perpetual fiddler, such as myself.

OK, that's enough for one weekend. I hope publishing this post doesn't bring any other issues to the surface, but we shall see. Now I really am going to take a break from coding. This has been so much fun, and I continue to learn a lot. For now though, my brain needs a rest.

Oh, if you're using Pure Blog, please do let me know - I'd love to hear your feedback. The reply button below should be working fine. 🙃

Thanks for reading this post via RSS. RSS is great, and you're great for using it. ❤️ You can reply to this post by email, or leave a comment.

0 views