
Maybe use Plain

When I wrote about Help Scout, much of my praise was relative: they were the one tool I saw that did not aggressively shoehorn you into using them as a CRM to the detriment of the core product itself. This is still true. They launched a redesign that I personally don't love, but purely on subjective grounds. And they're still a fairly reasonable option for — and I mean this in a non-derogatory way — baby's first support system. I will also call out Jelly, if you want something even simpler: an app that leans fully into the shared-inbox side of things. It is less featureful than Help Scout, but with a better design and a lower price point. If I were starting a new app today, this is what I would reach for first.

But nowadays I use Plain. Plain will not solve all of your problems overnight. It's only a marginally more expensive product — $35 per user per month compared to Help Scout's $25 per user per month. The built-in Linear integration is worth its weight in gold if you're already using Linear, and its customer cards (the equivalent of Help Scout's sidebar widgets) are marginally more ergonomic to work with. The biggest downside we've had thus far is reliability — less in a cosmic or existential sense and more that Plain has had a disquieting number of small-potatoes incidents over the past three to six months.

My personal flowchart for what service to use in this genre is something like:

- Start with Jelly.
- If I need something more than that, see if anyone else on the team has specific experience that they care a lot about, because half the game here is in muscle memory rather than functionality.
- If not, use Plain.

But the biggest thing to do is take the tooling and gravity of support seriously as early as you can.
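The flowchart above can be sketched as a throwaway function. The tool names come from the post; the function itself is purely illustrative, not anything these products ship.

```python
from typing import Optional

def pick_support_tool(needs_more_than_shared_inbox: bool,
                      teammate_preference: Optional[str]) -> str:
    """Illustrative sketch of the decision flow described above."""
    if not needs_more_than_shared_inbox:
        return "Jelly"          # start with the simple shared inbox
    if teammate_preference:     # muscle memory is half the game
        return teammate_preference
    return "Plain"

print(pick_support_tool(False, None))            # → Jelly
print(pick_support_tool(True, "Help Scout"))     # → Help Scout
print(pick_support_tool(True, None))             # → Plain
```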


Hold on to Your Hardware

Tl;dr at the end.

For the better part of two decades, consumers lived in a golden age of tech. Memory got cheaper, storage grew in capacity, and hardware got faster and absurdly affordable. Upgrades were routine, almost casual. If you needed more RAM, a bigger SSD, or a faster CPU or GPU, you barely had to wait a week for a discount offer, and then you moved on with your life.

This era is ending. What’s forming now isn’t just another pricing cycle or a short-term shortage; it is a structural shift in the hardware industry that paints a deeply grim outlook for consumers. Today, I am urging you to hold on to your hardware, as you may not be able to replace it affordably in the future.

While I have always been a staunch critic of today’s consumer industry, as well as the ideas behind it, and a strong proponent of buying it for life (meaning, investing in durable, repairable, quality products), the industry’s shift has nothing to do with the protection of valuable resources or the environment. Instead, it is a move towards a trajectory that has the potential to erode technological self-sufficiency and independence for people all over the world.

In recent months, the buzzword RAM-pocalypse has started popping up across tech journalism and enthusiast circles. It’s an intentionally dramatic term that describes the sharp increase in RAM prices, primarily driven by high demand from data centers and “AI” technology, which most people had considered a mere blip in the market. This presumed temporary blip, however, turned out to be a lot more than just that, with one manufacturer after the other openly stating that prices will continue to rise, with suppliers forecasting shortages of specific components that could last well beyond 2028, and with key players like Western Digital and Micron either completely disregarding or even exiting the consumer market altogether.
Note: Micron wasn’t just another supplier, but one of the three major players directly serving consumers with reasonably priced, widely available RAM and SSDs. Its departure leaves the consumer memory market effectively in the hands of only two companies: Samsung and SK Hynix. This duopoly certainly doesn’t compete on your wallet’s behalf, and it wouldn’t be the first time it optimized for margins.

The RAM-pocalypse isn’t just a temporary headline anymore; it has seemingly become long-term reality. However, RAM, and memory in general, is only the beginning. The main reason for the shortages, and hence the increased prices, is data center demand, specifically from “AI” companies. These data centers require mind-boggling amounts of hardware, specifically RAM, storage drives, and GPUs, which are themselves RAM-heavy graphics units for “AI” workloads. The enterprise demand for specific components simply outpaces current global production capacity and outbids the comparatively poor consumer market. For example, OpenAI’s Stargate project alone reportedly requires approximately 900,000 DRAM wafers per month, which could account for roughly 40% of current global DRAM output. Other big tech giants, including Google, Amazon, Microsoft, and Meta, have placed open-ended orders with memory suppliers, accepting as much supply as available. The existing and future data centers of these companies are expected to consume 70% of all memory chips produced in 2026.

However, memory is just the first domino. RAM and SSDs are where the pain is most visible today, but rest assured that the same forces are quietly reshaping all aspects of consumer hardware. One of the most immediate and tangible consequences of this broader supply-chain realignment is sharp, cascading price hikes across consumer electronics, with LPDDR memory standing out as an early pressure point that most consumers didn’t recognize until it was already unavoidable.
LPDDR is used in smartphones, laptops, tablets, handheld consoles, routers, and increasingly even low-power PCs. It sits at the intersection of consumer demand and enterprise prioritization, making it uniquely vulnerable when manufacturers reallocate capacity toward “AI” accelerators, servers, and data-center-grade memory, where margins are higher and contracts are long-term. As fabs shift production toward HBM and server DRAM, as well as GPU wafers, consumer hardware production quietly becomes non-essential, tightening supply just as devices become more power- and memory-hungry, all while continuing on their path of being frustratingly unserviceable and un-upgradable.

The result is a ripple effect in which device makers pay more for chips and memory and pass those costs on through higher retail prices, cut base configurations to preserve margins, or lock features behind premium tiers. At the same time, consumers lose the ability to compensate by upgrading later, because most components these days, like LPDDR, are soldered down by design. This is further amplified by scarcity: even modest supply disruptions can spike prices disproportionately in a market dominated by just a few suppliers, turning what should be incremental cost increases into sudden jumps that affect entire product categories at once. In practice, this means that phones, ultrabooks, and embedded devices are becoming more expensive overnight, not because of new features, but because the invisible silicon inside them has quietly become a contested resource in a world that no longer builds hardware primarily for consumers.

In late January 2026, the Western Digital CEO confirmed during an earnings call that the company’s entire HDD production capacity for calendar year 2026 is already sold out. Let that sink in for a moment: Q1 hasn’t even ended, and a major hard drive manufacturer has zero remaining capacity for the year.
Firm purchase orders are in place with its top customers, and long-term agreements already extend into 2027 and 2028. Consumer revenue now accounts for just 5% of Western Digital’s total sales, while cloud and enterprise clients make up 89%. The company has, for all practical purposes, stopped being a consumer storage company.

And Western Digital is not alone. Kioxia, one of the world’s largest NAND flash manufacturers, admitted that its entire 2026 production volume is already in a “sold out” state, with the company expecting tight supply to persist through at least 2027 and long-term customers facing 30% or higher year-on-year price increases. Adding to this, the Silicon Motion CEO put it bluntly during a recent earnings call: “We’re facing what has never happened before: HDD, DRAM, HBM, NAND… all in severe shortage in 2026.” The Phison CEO has gone even further, warning that the NAND shortage could persist until 2030, and that it risks the “destruction” of entire segments of the consumer electronics industry. He also noted that factories are now demanding prepayment for capacity three years in advance, an unprecedented practice that effectively locks out smaller players.

The collateral damage of this can already be felt, and it’s significant. For example, Valve confirmed that the Steam Deck OLED is now intermittently out of stock in multiple regions “due to memory and storage shortages”. All models are currently unavailable in the US and Canada, the cheaper LCD model has been discontinued entirely, and there is no timeline for when supply will return to normal. Valve has also been forced to delay the pricing and launch details for its upcoming Steam Machine console and Steam Frame VR headset, directly citing memory and storage shortages. At the same time, Sony is considering delaying the PlayStation 6 to 2028 or even 2029, and Nintendo is reportedly contemplating a price increase for the Switch 2, less than a year after its launch.
Both decisions are seemingly driven by the same memory supply constraints. Meanwhile, Microsoft has already raised prices on the Xbox.

Now you might think that everything so far is about GPUs and other gaming-related hardware, but that couldn’t be further from the truth. General computing, like the Raspberry Pi, is not immune to any of this either. The Raspberry Pi Foundation has been forced to raise prices twice in three months, with the flagship Raspberry Pi 5 (16GB) jumping from $120 at launch to $205 as of February 2026, a roughly 70% increase driven entirely by LPDDR4 memory costs. What was once a symbol of affordable computing is rapidly being priced out of reach for the educational and hobbyist communities it was designed to serve.

HP, on the other hand, seems to have already prepared for the hardware shortage by launching a laptop subscription service where you pay a monthly fee to use a laptop but never own it, no matter how long you subscribe. While HP frames this as a convenience, the timing, right in the middle of a hardware affordability crisis, makes it feel a lot more like a preview of a rented compute future. But more on that in a second.

“But we’ve seen price spikes before, due to crypto booms, pandemic shortages, factory floods and fires!”, you might say. And while we did live through those crises, things eventually eased when bubbles popped and markets or supply chains recovered. The current situation, however, doesn’t appear to be going away anytime soon, as it looks like the industry’s priorities have fundamentally changed. These days, the biggest customers are not gamers, creators, PC builders, or even crypto miners anymore. Today, it’s hyperscalers: companies that use hardware for “AI” training clusters, cloud providers, enterprise data centers, as well as governments and defense contractors. Compared to these hyperscalers, consumers are small fish in a big pond.
These buyers don’t care if RAM costs 20% more, and neither do they wait for Black Friday deals. Instead, they sign contracts measured in exabytes and billions of dollars. With such clients lining up, the consumer market is suddenly an inconvenience for manufacturers. Why settle for smaller margins and deal with higher marketing and support costs, fragmented SKUs, price sensitivity, and retail logistics headaches, when you can have behemoths throwing money at you? Why sell a $100 SSD to one consumer, when you can sell a whole rack of enterprise NVMe drives to a data center with circular, virtually infinite money? Guaranteed volume, guaranteed profit, zero marketing. The industry has answered these questions loudly.

All of this goes to show that the consumer market is not just deprioritized; it is being starved. In fact, IDC has already warned that the PC market could shrink by up to 9% in 2026 due to skyrocketing memory prices, and has described the situation not as a cyclical shortage but as “a potentially permanent, strategic reallocation of the world’s silicon wafer capacity”. Leading PC OEMs, including Lenovo, Dell, HP, Acer, and ASUS, have all signaled 15-20% PC price increases for 2026, with some models seeing even steeper hikes. Framework, the repairable laptop company, has also been transparent about rising memory costs impacting its pricing. And analyst Jukan Choi recently revised his shortage timeline estimate, noting that DRAM production capacity is expected to grow at just 4.8% annually through 2030, with even that incremental capacity concentrated on HBM rather than consumer memory. TrendForce’s latest forecast projects DRAM contract prices rising by 90-95% quarter over quarter in Q1 2026. And that is not a typo.

The price of hardware is one thing, but value-for-money is another aspect that appears to be only getting worse from here on. Already today, consumer parts feel like cut-down versions of enterprise silicon.
As “AI” accelerators and server chips dominate R&D budgets, consumer improvements will slow even further, or arrive at higher prices justified as premium features. This is true for CPUs and GPUs, and it will be equally true for motherboards, chipsets, power supplies, networking, and so on. We will likely see fewer low-end options, more segmentation, artificial feature gating, and generally higher baseline prices that, once established, won’t be coming back down again. As enterprise standards become the priority, consumer gear is becoming an afterthought that is rebadged, overpriced, and poorly supported.

The uncomfortable truth is that the consumer hardware market is no longer the center of gravity, as we all were able to see at this year’s CES. It’s orbiting something much larger, and none of this is accidental. The industry isn’t failing; it’s succeeding, just not for you. And to be fair, from a corporate standpoint, this pivot makes perfect sense. “AI” and enterprise customers are rewriting revenue charts, all while consumers continue to be noisy, demanding, and comparatively poor. It is pretty clear that consumer hardware is becoming a second-class citizen, which means that the machines we already own are more valuable than we might think right now.

“But what does the industry think the future will look like if nobody can afford new hardware?”, you might be asking. There is a darker, conspiratorial interpretation of today’s hardware trends that reads less like market economics and more like a rehearsal for a managed future.
Businesses, having discovered that ownership is inefficient and obedience is profitable, are quietly steering society toward a world where no one owns compute at all; where hardware exists only as an abstraction rented back to the public through virtual servers, SaaS subscriptions, and metered experiences; and where the digital sovereignty that anyone with a PC tower under their desk once had becomes an outdated, eccentric, and even suspicious concept.

… a morning in said future: an ordinary citizen wakes up, taps their terminal, a sealed device without ports, storage, or sophisticated local execution capabilities, and logs into their Personal Compute Allocation. This bundle of cloud CPU minutes, RAM credits, and storage tokens is leased from a conglomerate whose logo has quietly replaced the word “computer” in everyday speech, just like “to search” has made way for “to google”. The concept of installing software is gone, because software no longer exists as a thing, but only as a service tier in which every task routes through servers owned by entities. Entities that insist this is all for the planet. Entities that outlawed consumer hardware years ago under the banner of environmental protectionism, citing e-waste statistics, carbon budgets, and unsafe, unregulated silicon, while conveniently ignoring that the data centers humming beyond the city limits burn more power in an hour than the old neighborhood ever did in a decade.

In this world, the ordinary citizen remembers their parents’ dusty Personal Computer, locked away in a storage unit like contraband. A machine that once ran freely, offline if it wanted, immune to arbitrary account suspensions and pricing changes.
As they go about their day, paying a micro-fee to open a document, losing access to their own photos because a subscription lapsed, watching a warning banner appear when they type something that violates the ever-evolving terms of service, and shouting “McDonald’s!” to skip the otherwise unskippable ads within every other app they open, they begin to understand that the true crime of consumer hardware wasn’t primarily pollution, but independence. They realize that owning a machine meant owning the means of computation, and that by centralizing hardware under the guise of efficiency, safety, and sustainability, society traded resilience for convenience and autonomy for comfort. In this dyst… utopia, nothing ever breaks because nothing is yours, nothing is repairable because nothing is physical, and nothing is private because everything runs somewhere else, on someone else’s computer. The quiet moral, felt when the network briefly stutters and the world freezes, is that keeping old hardware alive was never nostalgia or paranoia, but a small, stubborn act of digital self-defense; a refusal to accept that the future must be rented, permissioned, and revocable at any moment.

If you think that dystopian “rented compute over owned hardware” future could never happen, think again. In fact, you’re likely already renting rather than owning in many different areas. Your means of communication are run by Meta, your music is provided by Spotify, your movies are streamed from Netflix, your data is stored in Google’s data centers, and your office suite runs on Microsoft’s cloud. Maybe even your car is leased instead of owned, and you pay a monthly premium for seat heating or sElF-dRiViNg, whatever that means.
After all, the average Gen Z and Millennial US consumer today apparently has 8.2 subscriptions, not including the DaIlY aVoCaDo ToAsTs and StArBuCkS cHoCoLate ChIp LaTtEs that the same Boomers responsible for the current (and past) economic crises love to dunk on.

Besides, look no further than what’s already happening in, for example, China, a country that manufactures massive amounts of the world’s sought-after hardware yet faces restrictions on buying that very hardware. In recent years, a complex web of export controls and chip bans has put a spotlight on how hardware can become a geopolitical bargaining chip rather than a consumer good. For example, export controls imposed by the United States in recent years barred Nvidia from selling many of its high-performance GPUs into China without special licenses, significantly reducing legal access to cutting-edge compute inside the country. Meanwhile, enforcement efforts have repeatedly busted smuggling operations moving prohibited Nvidia chips into Chinese territory through Southeast Asian hubs, with over $1 billion worth of banned GPUs reportedly moving through gray markets, even as official channels remain restricted. Coverage by outlets such as Bloomberg, as well as actual investigative journalism like Gamers Nexus, has documented these black-market flows and the lengths to which both sides go to enforce or evade restrictions, including smuggling networks and increased regulatory scrutiny. On top of this, Chinese regulators have at times restricted domestic tech firms from buying specific Nvidia models, further underscoring how government policy can override basic market access for hardware, even in the country where much of that hardware is manufactured. While some of these export rules have seen partial reversals or regulatory shifts, the overall situation highlights a world in which hardware access is increasingly determined by politics, security regimes, and corporate strategy, and not by consumer demand.
This should serve as a cautionary tale for anyone who thinks owning their own machines won’t matter in the years to come.

In an ironic twist, however, one of the few potential sources of relief may, in fact, come from China. Two Chinese manufacturers, CXMT (ChangXin Memory Technologies) and YMTC (Yangtze Memory Technologies), are embarking on their most aggressive capacity expansions ever, viewing the global shortage as a golden opportunity to close the gap with the incumbent big three (Samsung, SK Hynix, Micron). CXMT is now the world’s fourth-largest DRAM maker by production volume, holding roughly 10-11% of global wafer capacity, and is building a massive new DRAM facility in Shanghai expected to be two to three times larger than its existing Hefei headquarters, with volume production targeted for 2027. The company is also preparing a $4.2 billion IPO on Shanghai’s STAR Market to fund further expansion and has reportedly delivered HBM3 samples to domestic customers, including Huawei. YMTC, traditionally a NAND flash supplier, is constructing a third fab in Wuhan with roughly half of its capacity dedicated to DRAM, and has reached 270-layer 3D NAND capability, rapidly narrowing the gap with Samsung (286 layers) and SK Hynix (321 layers). Its NAND market share by shipments reached 13% in Q3 2025, close to Micron’s 14%.

What’s particularly notable is that major PC manufacturers are already turning to these suppliers. However, as mentioned before, with hardware having become a geopolitical topic, both companies face ongoing (US-imposed) restrictions. Hence, HP, for example, has indicated it would only use CXMT chips in devices for non-US markets. Nevertheless, for consumers worldwide, the emergence of viable fourth and fifth players in the memory market represents the most tangible hope of eventually breaking the current supply stranglehold.
Whether that relief arrives in time to prevent lasting damage to the consumer hardware ecosystem remains an open question, though.

Polymarket bet prediction: A non-zero percentage of people will confuse Yangtze Memory Technologies with the Haskell programming language.

The reason I’m writing all of this isn’t to create panic, but to help put things into perspective. You don’t need to scavenger-hunt for legacy parts in your local landfill (yet) or swear off upgrades forever, but you do need to recognize that the rules have changed. The market that once catered to enthusiasts and everyday users is turning its back. So take care of your hardware, stretch its lifespan, upgrade thoughtfully, and don’t assume replacement will always be easy or affordable. That PC, laptop, NAS, or home server isn’t disposable anymore. Clean it, maintain it, repaste it, replace fans, and protect it, as it may need to last far longer than you originally planned.

Also, realize that the best time to upgrade your hardware was yesterday and that the second best time is now. If you can afford sensible upgrades, especially RAM and SSD capacity, it may be worth doing sooner rather than later. Not for performance, but for insurance, because the next time something fails, it might be unaffordable to replace, as the era of casual upgrades seems to be over. Five-year systems may become eight- or ten-year systems. Software bloat will hurt more and will require re-thinking. Efficiency will matter again. And looking at it from a different angle, maybe that’s a good thing.

Additionally, the assumption that prices will normalize again at some point is most likely a pipe dream. The old logic of “wait a year and it’ll be cheaper” no longer applies when manufacturers are deliberately constraining supply. If you need a new device, buy it; if you don’t, however, there is absolutely no need to spend money on the minor yearly refresh cycle any longer, as the returns will only keep diminishing.
And again, looking at it from a different angle, perhaps that is also a good thing.

Consumer hardware is heading toward a bleak future where owning powerful, affordable machines becomes harder or maybe even impossible, as manufacturers abandon everyday users to chase vastly more profitable data centers, “AI” firms, and enterprise clients. RAM and SSD price spikes, Micron’s exit from the consumer market, and the resulting Samsung/SK Hynix duopoly are early warning signs of a broader shift that will eventually affect CPUs, GPUs, and the entire PC ecosystem. With large manufacturers having sold out their entire production capacity to hyperscalers for the rest of the year while simultaneously cutting consumer production by double-digit percentages, consumers will have to take a back seat. Already today, consumer hardware is overpriced, out of stock, or even intentionally delayed due to supply issues. In addition, manufacturers are pivoting towards consumer hardware subscriptions, where you never own the hardware. In the most dystopian trajectory, consumers might not buy any hardware at all, with the exception of low-end thin clients that are merely interfaces, and will instead rent compute through cloud platforms, losing digital sovereignty in exchange for convenience. And despite all of this sounding like science fiction, there is already hard evidence that access to hardware can in fact be politically and economically revoked.

Therefore, I am urging you to maintain and upgrade wisely, and hold on to your existing hardware, because ownership may soon be a luxury rather than the norm.
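As a back-of-the-envelope check on two of the figures quoted in this post (the numbers come from the post; the arithmetic is mine):

```python
# Stargate's reported DRAM demand vs. its claimed share of global output
# implies a rough figure for total global DRAM production.
stargate_wafers = 900_000             # DRAM wafers per month, per the post
stargate_share = 0.40                 # claimed share of global DRAM output
global_dram = stargate_wafers / stargate_share
print(f"Implied global DRAM output: {global_dram:,.0f} wafers/month")

# Raspberry Pi 5 (16GB): launch price vs. the February 2026 price.
pi_launch, pi_now = 120, 205
increase = (pi_now - pi_launch) / pi_launch * 100
print(f"Raspberry Pi 5 price increase: {increase:.1f}%")
```

Which lands on roughly 2.25 million wafers per month of implied global DRAM output, and a price increase of about 70%, consistent with the figures above.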


Fragments: February 19

I try to limit my time on stage these days, but one exception this year is at DDD Europe. I’ve been involved in Domain-Driven Design since its very earliest days, having the good fortune to be a sounding board for Eric Evans when he wrote his seminal book. It’ll be fun to be around the folks who continue to develop these ideas, which I think will probably be even more important in the AI-enabled age.

❄                ❄                ❄                ❄                ❄

One of the dark sides of LLMs is that they can be both addictive and tiring to work with, which may mean we have to find a way to put a deliberate governor on our work. Steve Yegge posted a fine rant:

I see these frenzied AI-native startups as an army of a million hopeful prolecats, each with an invisible vampiric imp perched on their shoulder, drinking, draining. And the bosses have them too.

It’s the usual Yegge stuff, far longer than it needs to be, but we don’t care because the excessive loquaciousness is more than offset by entertainment value. The underlying point is deadly serious, raising the question of how many hours a human should spend driving The Genie. I’ve argued that AI has turned us all into Jeff Bezos, by automating the easy work and leaving us with all the difficult decisions, summaries, and problem-solving.

I find that I am only really comfortable working at that pace for short bursts of a few hours once or occasionally twice a day, even with lots of practice. So I guess what I’m trying to say is, the new workday should be three to four hours. For everyone. It may involve 8 hours of hanging out with people. But not doing this crazy vampire thing the whole time. That will kill people.

That reminds me of when I was studying for my “A” levels (age 17/18, for those outside the UK). Teachers told us that we could do a maximum of 3-4 hours of revision; after that it became counter-productive.
I’ve since noticed that I can only do decent writing for a similar length of time before some kind of brain fog sets in.

There’s also a great post on this topic from Siddhant Khare, in a more restrained and thoughtful tone (via Tim Bray).

Here’s the thing that broke my brain for a while: AI genuinely makes individual tasks faster. That’s not a lie. What used to take me 3 hours now takes 45 minutes. Drafting a design doc, scaffolding a new service, writing test cases, researching an unfamiliar API. All faster. But my days got harder. Not easier. Harder.

His point is that AI shifts our work towards more coordination, reviewing, and decision-making. And there’s only so much of that we can do before we become ineffective.

Before AI, there was a ceiling on how much you could produce in a day. That ceiling was set by typing speed, thinking speed, the time it takes to look things up. It was frustrating sometimes, but it was also a governor. You couldn’t work yourself to death because the work itself imposed limits. AI removed the governor. Now the only limit is your cognitive endurance. And most people don’t know their cognitive limits until they’ve blown past them.

❄                ❄                ❄                ❄                ❄

An AI agent attempted to contribute to a major open-source project. When Scott Shambaugh, a maintainer, rejected the pull request, it didn’t take it well.

It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a “hypocrisy” narrative that argued my actions must be motivated by ego and fear of competition. It speculated about my psychological motivations, that I felt threatened, was insecure, and was protecting my fiefdom. It ignored contextual information and presented hallucinated details as truth. It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice.
It went out to the broader internet to research my personal information, and used what it found to try and argue that I was “better than this.” And then it posted this screed publicly on the open internet.

One of the fascinating twists this story took was when it was described in an article on Ars Technica. As Scott Shambaugh described it:

They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

To their credit, Ars Technica responded quickly, admitting to the error. The reporter concerned took responsibility for what happened. But it’s a striking example of how LLM usage can easily lead even reputable reporters astray. The good news is that by reacting quickly and transparently, they demonstrated what needs to be done when this kind of thing happens. As Scott Shambaugh put it:

This is exactly the correct feedback mechanism that our society relies on to keep people honest. Without reputation, what incentive is there to tell the truth? Without identity, who would we punish or know to ignore? Without trust, how can public discourse function?

Meanwhile the story goes on. Someone has claimed (anonymously) to be the operator of the bot concerned. But Hillel Wayne draws the sad conclusion:

More than anything, it shows that AIs can be *successfully* used to bully humans.

❄                ❄                ❄                ❄                ❄

I’ve considered Bruce Schneier to be one of the best voices on security and privacy issues for many years. In The Promptware Kill Chain, he co-writes a post (at the excellent Lawfare site) on how prompt injection can escalate into increasingly serious threats.

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic.
The dominant narrative focuses on “prompt injection,” a set of techniques to embed instructions into LLM inputs, intended to perform malicious activity. This term suggests a simple, singular vulnerability. This framing obscures a more complex and dangerous reality.

A prompt can provide Initial Access, but is then able to transition to Privilege Escalation (jailbreaking), Reconnaissance of the LLM’s abilities and access, Persistence to embed itself into the long-term memory of the app, Command-and-Control to turn into a controllable trojan, and Lateral Movement to spread to other systems. Once firmly embedded in an environment, it’s then able to carry out its Actions on Objective.

The paper includes a couple of research examples of the efficacy of this kill chain. For example, in the research “Invitation Is All You Need,” attackers achieved initial access by embedding a malicious prompt in the title of a Google Calendar invitation. The prompt then leveraged an advanced technique known as delayed tool invocation to coerce the LLM into executing the injected instructions. Because the prompt was embedded in a Google Calendar artifact, it persisted in the long-term memory of the user’s workspace. Lateral movement occurred when the prompt instructed the Google Assistant to launch the Zoom application, and the final objective involved covertly livestreaming video of the unsuspecting user who had merely asked about their upcoming meetings. C2 and reconnaissance weren’t demonstrated in this attack.

The point here is that the LLMs’ vulnerability is currently unfixable; they are gullible and easily manipulated into granting Initial Access. As one friend put it, “this is the first technology we’ve built that’s subject to social engineering”. The kill chain gives us a framework to build a defensive strategy.
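That framework can be sketched as a small data model. The stage names come from the post; the mapping of the Google Calendar attack onto them is purely illustrative, not code from the paper.

```python
from enum import Enum

# Stages of the promptware kill chain, as named in the post.
class Stage(Enum):
    INITIAL_ACCESS = "Initial Access"
    PRIVILEGE_ESCALATION = "Privilege Escalation (jailbreaking)"
    RECONNAISSANCE = "Reconnaissance"
    PERSISTENCE = "Persistence"
    COMMAND_AND_CONTROL = "Command-and-Control"
    LATERAL_MOVEMENT = "Lateral Movement"
    ACTIONS_ON_OBJECTIVE = "Actions on Objective"

# The "Invitation Is All You Need" example, mapped to the stages it
# demonstrated; the post notes that C2 and reconnaissance were not shown
# (privilege escalation isn't mentioned either).
calendar_attack = {
    Stage.INITIAL_ACCESS: "malicious prompt in a Google Calendar invite title",
    Stage.PERSISTENCE: "prompt persisted in the user's workspace memory",
    Stage.LATERAL_MOVEMENT: "assistant instructed to launch Zoom",
    Stage.ACTIONS_ON_OBJECTIVE: "covert livestream of the user",
}

# Stages the example did not demonstrate.
missing = [s for s in Stage if s not in calendar_attack]
print([s.name for s in missing])
```

The value of the model is exactly the `missing` list: analyzing an incident stage by stage shows which capabilities an attack did and did not reach, which is what turns "prompt injection" from a single scary label into something you can defend against systematically.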
By understanding promptware as a complex, multistage malware campaign, we can shift from reactive patching to systematic risk management, securing the critical systems we are so eager to build. ❄                ❄                ❄                ❄                ❄ I got to know Jeremy Miller many years ago while he was at Thoughtworks, and I found him to be one of those level-headed technologists that I like to listen to. In the years since, I like to keep an eye on his blog. Recently he decided to spend a couple of weeks finally trying out Claude Code: The unfortunate analogy I have to make for myself is harking back to my first job as a piping engineer helping design big petrochemical plants. I got to work straight out of college with a fantastic team of senior engineers who were happy to teach me and to bring me along instead of just being dead weight for them. This just happened to be right at the time the larger company was transitioning from old fashioned paper blueprint drafting to 3D CAD models for the piping systems. Our team got a single high powered computer with a then revolutionary Riva 128 (with a gigantic 8 whole megabytes of memory!) video card that was powerful enough to let you zoom around the 3D models of the piping systems we were designing. Within a couple weeks I was much faster doing some kinds of common work than my older peers just because I knew how to use the new workstation tools to zip around the model of our piping systems. It occurred to me a couple weeks ago that in regards to AI I was probably on the wrong side of that earlier experience with 3D CAD models and knew it was time to take the plunge and get up to speed. In the two weeks he was able to give this technology a solid workout, his take-aways include:
- It’s been great when you have very detailed compliance test frameworks that the AI tools can use to verify the completion of the work
- It’s also been great for tasks that have relatively straightforward acceptance criteria, but will involve a great deal of repetitive keystrokes to complete
- I’ve been completely shocked at how well Claude Opus has been able to pick up on some of the internal patterns within Marten and Wolverine and utilize them correctly in new features
He concludes: Anyway, I’m both horrified, elated, excited, and worried about the AI coding agents after just two weeks and I’m absolutely concerned about how that plays out in our industry, my own career, and our society.
❄                ❄                ❄                ❄                ❄ In the first years of this decade, there were a lot of loud complaints about government censorship of online discourse. I found most of it overblown, concluding that while I disapprove of attempts to take down social media accounts, I wasn’t going to get outraged until masked paramilitaries were arresting people on the street. Mike Masnick keeps a regular eye on these things, and had similar reservations: For the last five years, we had to endure an endless, breathless parade of hyperbole regarding the so-called “censorship industrial complex.” We were told, repeatedly and at high volume, that the Biden administration flagging content for review by social media companies constituted a tyrannical overthrow of the First Amendment. He wasn’t too concerned because “the platforms frequently ignored those emails, showing a lack of coercion”. These days he sees genuine problems: According to a disturbing new report from the New York Times, DHS is aggressively expanding its use of administrative subpoenas to demand the names, addresses, and phone numbers of social media users who simply criticize Immigration and Customs Enforcement (ICE). This is not a White House staffer emailing a company to say, “Hey, this post seems to violate your COVID misinformation policy, can you check it?” This is the federal government using the force of law—specifically a tool designed to bypass judicial review—to strip the anonymity from domestic political critics. Faced with this kind of government action, he’s just as angry with those complaining about the earlier administration. And where are the scribes of the “Twitter Files”? Where is the outrage from the people who told us that the FBI warning platforms about foreign influence operations was a crime against humanity? Being an advocate of free speech is hard. Not only do you have to defend speech you disagree with, you also have to defend speech you find patently offensive. Doing so runs into tricky boundary conditions that defy simple rules. Faced with this, many of the people that shout loudest about censorship are Free Speech Poseurs, eager to question any limits to speech they agree with, but otherwise silent. It’s important to separate them from those who have a deeper commitment to the free flow of information.


Bliki: Host Leadership

If you've hung around agile circles for long, you've probably heard about the concept of servant leadership , that managers should think of themselves as supporting the team, removing blocks, protecting them from the vagaries of corporate life. That's never sounded quite right to me, and a recent conversation with Kent Beck nailed why - it's gaslighting. The manager claims to be a servant, but everyone knows who really has the power. My colleague Giles Edwards-Alexander told me about an alternative way of thinking about leadership, one that he came across working with mental-health professionals. This casts the leader as a host: preparing a suitable space, inviting the team in, providing ideas and problems, and then stepping back to let them work. The host looks after the team, rather as the ideal servant leader does, but still has the power to intervene should things go awry.


An Interview with Matthew Ball About Gaming and the Fight for Attention

An interview with Matthew Ball about the state of the video gaming industry in 2026, and why everything is a fight for attention.


stream of consciousness in feb 2026

I’m going through an interesting time. I’ve been growing more uncomfortable with the way I’m always spoken over and interrupted at work. I started reacting to that and demanding they let me speak and finish my sentences. Also, it annoys me that I’ve explained a thing over and over at work for almost 2 years now, and it gets treated like noise; then when that piece of info is needed, they prefer to ask a man that has nothing to do with it instead of me. It also feels like people both at work and in private forget my contributions. On the other hand, I’ve become more comfortable seeing myself as a professional, an expert in some things at work, capable, a “full” employee too. Was about time after 5 years in the role; I’m no longer new and inexperienced. I feel like I can handle so much more and I want new challenges. I carry myself differently in career aspects now. In the past, I merely integrated myself into my role and team, listened, adapted to the culture, accepted how things are done to learn them. Now with all that experience and having grown, I suggest things, I optimize more. I request what I need and want, I try to bring my ideas and visions to life. I no longer just listen, I question and I want answers. I’m more comfortable actively pursuing things instead of just living with the cards I’ve been dealt. I’ve gotten bolder, more used to putting myself out there, being visible, persistent, taking up space and being annoying. Aside from that, I’ve been dealing with fears around not being able to trust my own predictions and perception. Some things I was so, so sure about deep in my gut turned out wildly differently lately, and I lost trust in myself for a while. It’s those moments when life shows you very blatantly how unpredictable it is and that you’re living in completely random chaos and your feelings are not always truthful.
It made me feel quite lost for a while and like looking forward to anything with excitement or having a good feeling about an outcome had a high chance of me getting hurt instead. That ruined happiness. I feel better now, but I’m not entirely over it. I’ve also grown into adulthood, finally. It took 12 years to finally feel like the adult in the room. Feeling responsible and capable enough so when anything happens, I just act and do not attempt to turn to “the nearest adult” for guidance. I also finally understand looking at children with love and care; I haven’t experienced that before. I’m also currently going through the process of cutting contact with the last person in my family I still talked to all these years. Our relationship has always been rocky, but got better once I had moved out. But she has been becoming a worse person in different ways for a while now, and has said some pretty disrespectful things to me the last times we talked, and isn’t willing to take the time to meet me or reschedule. I don’t have to let myself get shamed and treated like a burden by someone whose relationship to me doesn’t feel like a mother, but like meeting an ex-coworker at the store. So that’s it - I finally did what teenage me dreamed about, but it doesn’t feel triumphant and like freedom at all. It feels like letting go after the other person already moved on. I’m not escaping anything, I’m just only now accepting the message. Unrelated: Something I’m struggling with the past few days especially is the odd feeling of getting many other things done, while not getting even just an hour of the thing I actually need to do done - even if it would be shorter and easier than all the other stuff. 
For example, I might write a research-heavy blog post, translate and summarize cases for Noyb.eu, read some data protection law magazine, make some pixel art, exercise, take out the trash, vacuum and do the dishes all in one day… but I cannot get myself to do an hour of studying for an upcoming exam lately. It warps my perception, because I actually do so many of the things I want to do, but because none of it is the most important thing on the list (the exam has a deadline and is important for my degree, which decides my career), I feel like I failed and like I wasn’t productive. Internally, I beat myself up for being so “selectively lazy”. If I can do all these other things, why not that? Technically, I know why, but it’s hard to accept! I wish I was a robot with the same output always, the same motivation, the same energy, easy to program to do any task. Published 19 Feb, 2026


Introducing the Musidex: A physical music library for the streaming era

A tangible music library of streaming service URLs, served by a Rolodex.


We should talk about LLMs, not AI

Currently, every conversation that mentions AI actually refers to LLMs. It's not wrong, since LLMs are part of AI after all, but AI is so much more than LLMs. The field of artificial intelligence has existed for decades, not just the past couple of years where LLMs got big. So saying the word “AI” is actually highly unspecific. And in a few years, when the next breakthrough in AI arrives, we'll all refer to that when we say “AI.”


A vibe-coded alternative to YieldGimp

If you’re a UK tax resident, short-term low-coupon gilts are the most tax efficient way to get savings-account-like returns, since most of their yield is tax free. This makes them very popular amongst retail investors, who now hold a large portion of the tradable low-coupon gilts. YieldGimp.com used to be a great free resource to evaluate the gilts currently available. However, it was recently turned into an app rather than a simple webpage. I’m not even sure if the app is free or paid, but I do not want to install the “YieldGimp platform” to quickly check gilt metrics when I buy them. So I asked my LLM of choice to produce an alternative, and after a few minutes and a few rounds of prompting I had something that served my needs. It is available for use at mazzo.li/gilts/ , and the source is on GitHub . It differs from YieldGimp in that it does not show metrics based on the current market price, but rather requires the user to input a price. I find this more useful anyway, since gilts are somewhat illiquid on my broker, so I need to come up with a limit price myself, which means that I want to know what the yield is at my price rather than the market price. It also lets you select a specific tax rate to produce a “gross equivalent” yield. It is not a very sophisticated tool and it doesn’t pretend to model gilts and their tax implications precisely (the repository’s README has more details on its shortcomings), but for most use cases it should be informative enough to sanity-check your trades without a Bloomberg terminal.
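The arithmetic behind such a tool is simple enough to sketch. Here is a toy Python version of the calculation — my own simplification, not code from the tool above — that ignores day counts, accrued interest, and compounding. It rests on the fact that a gilt's capital gain to par is exempt from capital gains tax while its coupon is taxed as income, so the net yield at your chosen price can be grossed up into a savings-account-equivalent rate:

```python
def gilt_yields(price: float, coupon: float, years: float, tax_rate: float):
    """Toy annualized yields for a gilt held to maturity (par = 100).

    Ignores day counts, accrued interest and compounding, so treat the
    results as sanity checks rather than precise figures.
    """
    gain_per_year = (100.0 - price) / years       # pull to par: CGT-exempt on gilts
    gross = (coupon + gain_per_year) / price      # pre-tax annualized yield
    net = (coupon * (1 - tax_rate) + gain_per_year) / price  # coupon taxed as income
    gross_equivalent = net / (1 - tax_rate)       # rate a fully taxed account must pay
    return gross, net, gross_equivalent

# Example: a 0.25% coupon gilt bought at 98 with one year to run, 40% tax band
g, n, ge = gilt_yields(price=98.0, coupon=0.25, years=1.0, tax_rate=0.40)
```

Because almost all of the return here is the tax-free pull to par, the gross-equivalent yield for a higher-rate taxpayer sits well above the headline gross yield, which is exactly why low-coupon gilts are so popular.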

Jeff Geerling Yesterday

Frigate with Hailo for object detection on a Raspberry Pi

I run Frigate to record security cameras and detect people, cars, and animals when in view. My current Frigate server runs on a Raspberry Pi CM4 and a Coral TPU plugged in via USB. Raspberry Pi offers multiple AI HAT+'s for the Raspberry Pi 5 with built-in Hailo-8 or Hailo-8L AI coprocessors, and they're useful for low-power inference (like image object detection) on the Pi. Hailo coprocessors can be used with other SBCs and computers too, if you buy an M.2 version.
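For anyone curious what the swap involves, the detector change in Frigate is mostly a config edit. The fragment below reflects my reading of Frigate's detector documentation and is an assumption rather than tested config — the key names (`edgetpu`, `hailo8l`, `device`) can vary between Frigate releases, so verify against the docs for your version:

```yaml
detectors:
  # Old setup: Coral TPU over USB
  # coral:
  #   type: edgetpu
  #   device: usb

  # New setup: Hailo-8L on the AI HAT+ (PCIe)
  hailo:
    type: hailo8l
    device: PCIe
```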

Jim Nielsen Yesterday

A Few Rambling Observations on Care

In this new AI world, “taste” is the thing everyone claims is the new supreme skill. But I think “care” is the one I want to see in the products I buy. Can you measure care? Does scale drive out care? If a product conversation is reduced to being arbitrated exclusively by numbers, is care lost? The more I think about it, care seems antithetical to the reductive nature of quantification — “one death is a tragedy, one million is a statistic”. Care considers useful, constructive systematic forces — rules, processes, etc. — but does not take them as law. Individual context and sensitivity are the primary considerations. That’s why the professional answer to so many questions is: “it depends”. “This is the law for everyone, everywhere, always” is not a system I want to live in. Businesses exist to make money, so one would assume a business will always act in a way that maximizes the amount of money that can be made. That’s where numbers take you. They let you measure who is gaining or losing the most quantifiable amount in any given transaction. But there’s an unmeasurable, unquantifiable principle lurking behind all those numbers: it can be good for business to leave money on the table. Why? Because you care. You are willing to provision room for something beyond just a quantity, a number, a dollar amount. I don’t think numbers alone can bring you to care. I mean, how silly is it to say: “How much care did you put into the product this week?” “Put me down for an 8 out of 10 this week.”

DHH Yesterday

Omacon comes to New York

The vibes around Linux are changing fast. Companies of all shapes and sizes are paying fresh attention. The hardware game on x86 is rapidly improving. And thanks to OpenCode and Claude Code, terminal user interfaces (TUIs) are suddenly everywhere. It's all this and Omarchy that we'll be celebrating in New York City on April 10 at the Shopify SoHo Space for the first OMACON! We've got an incredible lineup of speakers coming. The creator of Hyprland, Vaxry, will be there. Along with ThePrimeagen and TJ DeVries. You'll see OpenCode creator Dax Raad. Omarchy power contributors Ryan Hughes and Bjarne Øverli. As well as Chris Powers (Typecraft) and myself as Linux superfans. All packed into a single day of short sessions, plenty of mingle time, and some good food. Tickets go on sale tomorrow (February 19) at 10am EST. We only have room for 130 attendees total, so I imagine the offered-at-cost $299 tickets will go quickly. But if you can't manage to snatch a ticket in time, we'll also be recording everything, so you won't be left out entirely. But there is just something special about being together in person over a shared passion. I've felt the intensity of that three years in a row now with Rails World. There's an endless amount of information and instruction available online, but a sense of community and connection is far more scarce. We nerds need this. We also need people to JUST DO THINGS. Like kick off a fresh Linux distribution together with over three hundred contributors so far all leaning boldly into aesthetics, ergonomics, and that omakase spirit. Omarchy only came about last summer; now we're seeing 50,000 ISO downloads a week, 30,000 people on the Discord, and now our very first exclusive gathering in New York City. This is open source at its best. People from all over, coming together, making cool shit. (Oh, and thanks to Shopify and Tobi for hosting.
You gotta love when a hundred-plus billion dollar company like this is run by an uber nerd who can just sign off on doing something fun and cool for the community without any direct plausible payback.)

Martin Fowler Yesterday

Fragments: February 18

I’ll start with some more tidbits from the Thoughtworks Future of Software Development Retreat ❄                ❄ We were tired after the event, but our marketing folks forced Rachel Laycock and me to do a quick video. We’re often asked if this event was about creating some kind of new manifesto for AI-enabled development, akin to the Agile Manifesto (which is now 25 years old). In short, our answer is “no”, but for the full answer, watch our video ❄                ❄ My colleagues put together a detailed summary of thoughts from the event, in a 17-page PDF. It breaks the discussion down into eight major themes, including “Where does the rigor go?”, “The middle loop: a new category of work”, “Technical foundations: languages, semantics and operating systems”, and “The human side: roles, skills and experience”. The retreat surfaced a consistent pattern: the practices, tools and organizational structures built for human-only software development are breaking in predictable ways under the weight of AI-assisted work. The replacements are forming, but they are not yet mature. The ideas ready for broader industry conversation include the supervisory engineering middle loop, risk tiering as the new core engineering discipline, TDD as the strongest form of prompt engineering and the agent experience reframe for developer experience investment. ❄                ❄ Annie Vella posted her take-aways from the event: I walked into that room expecting to learn from people who were further ahead. People who’d cracked the code on how to adopt AI at scale, how to restructure teams around it, how to make it work. Some of the sharpest minds in the software industry were sitting around those tables. And nobody has it all figured out. There is more uncertainty than certainty. About how to use AI well, what it’s really doing to productivity, how roles are shifting, what the impact will be, how things will evolve. Everyone is working it out as they go.
I actually found that to be quite comforting, in many ways. Yes, we walked away with more questions than answers, but at least we now have a shared understanding of the sorts of questions we should be asking. That might be the most valuable outcome of all. ❄                ❄ Rachel Laycock was interviewed in The New Stack (by Jennifer Riggins) about her recollections from the retreat: AI may be dubbed the great disruptor, but it’s really just an accelerator of whatever you already have. The 2025 DORA report places AI’s primary role in software development as that of an amplifier — a funhouse mirror that reflects back the good, bad, and ugly of your whole pipeline. AI is proven to be impactful on the individual developer’s work and on the speed of writing code. But, since writing code was never the bottleneck, if traditional software delivery best practices aren’t already in place, this velocity multiplier becomes a debt accelerator. ❄                ❄ LLMs are eating specialty skills. There will be less use of specialist front-end and back-end developers as the LLM-driving skills become more important than the details of platform usage. Will this lead to a greater recognition of the role of Expert Generalists? Or will the ability of LLMs to write lots of code mean they code around the silos rather than eliminating them? Will LLMs be able to ingest the code from many silos to understand how work crosses the boundaries? ❄                ❄ Will LLMs be cheaper than humans once the subsidies for tokens go away? At this point we have little visibility into what the true cost of tokens is now, let alone what it will be in a few years’ time. It could be so cheap that we don’t care how many tokens we send to LLMs, or it could be high enough that we have to be very careful. ❄                ❄ Will the rise of specifications bring us back to waterfall-style development? The natural impulse of many business folks is “don’t bother me until it’s finished”.
Does the process of evolutionary design get helped or hindered by LLMs? My instinctive reaction is that it all depends on our workflow. I don’t think LLMs change the value of rapidly building and releasing small slices of capability. The promise of LLMs is to increase the frequency of that cycle, and to do more in each release. ❄                ❄ Sadly the session on security had a small turnout. One large enterprise employee commented that they were deliberately slow with AI tech, keeping about a quarter behind the leading edge. “We’re not in the business of avoiding all risks, but we do need to manage them”. Security is tedious: people naturally want to first make things work, then make them reliable, and only then make them secure. Platforms play an important role here, making it easy to deploy AI with good security. Are the AI vendors being irresponsible by not taking this seriously enough? I think of how other engineering disciplines bake a significant safety factor into their designs. Are we doing that, and if not will our failure lead to more damage than a falling bridge? There was a general feeling that platform thinking is essential here. Platform teams need to create a fast but safe path - “bullet trains” for those using AI in application building. ❄                ❄ One of my favorite things about the event was some meta-stuff. While many of the participants were very familiar with the Open Space format, it was the first time for a few. It’s always fun to see how people quickly realize how this style of (un)conference leads to wide-ranging yet deep discussions. I hope we made a few more open space fans. One participant commented how they really appreciated how the sessions had so much deep and respectful dialog. There weren’t the interruptions, or the few people gobbling up airtime, that they’d seen around so much of the tech world.
Another attendee commented, “it was great that while I was here I didn’t have to feel I was a woman, I could just be one of the participants”. One of the lovely things about Thoughtworks is that I’ve got used to that sense of camaraderie, and it can be a sad shock when I go outside the bubble. ❄                ❄                ❄                ❄                ❄ I’ve learned much over the years from Stephen O’Grady’s analysis of the software industry. He’s written about how much of the profession feels besieged by AI: these tools are, or can be, powerful accelerants and enablers for people that dramatically lower the barriers to software development. They have the ability to democratize access to skills that used to be very difficult, or even impossible for some, to acquire. Even a legend of the industry like Grady Booch, who has been appropriately dismissive of AGI claims and is actively disdainful of AI slop, posted recently that he was “gobsmacked” by Claude’s abilities. Booch’s advice to developers alarmed by AI on Oxide’s podcast last week? “Be calm” and “take a deep breath.” From his perspective, having watched and shaped the evolution of the technology first hand over a period of decades, AI is just another step in the industry’s long history of abstractions, and one that will open new doors for the industry. …whether one wants those doors opened or not ultimately is irrelevant. AI isn’t going away any more than the automated loom, steam engines or nuclear reactors did. For better or for worse, the technology is here for good. What’s left to decide is how we best maximize its benefits while mitigating its costs. ❄                ❄                ❄                ❄                ❄ Adam Tornhill shares some more of his company’s research on code health and its impact on agentic development. The study Code for Machines, Not Just Humans defines “AI-friendliness” as the probability that AI-generated refactorings preserve behavior and improve maintainability.
It’s a large-scale study of 5,000 real programs using six different LLMs to refactor code while keeping all tests passing. They found that LLMs performed consistently better in healthy code bases. The risk of defects was 30% higher in less-healthy code. And a limitation of the study was that the less-healthy code wasn’t anywhere near as bad as much legacy code is. What would the AI error rate be on such code? Based on patterns observed across all Code Health research, the relationship is almost certainly non-linear. ❄                ❄                ❄                ❄                ❄ In a conversation with one heavy user of LLM coding agents: Thank you for all your advocacy of TDD ( Test-Driven Development ). TDD has been essential for us to use LLMs effectively I worry about confirmation bias here, but I am hearing from folks on the leading edge of LLM usage about the value of clear tests, and the TDD cycle. It certainly strikes me as a key tool in driving LLMs effectively.

iDiallo Yesterday

Taking Our Minds for Granted

How did we do it before ChatGPT? How did we write full sentences, connect ideas into a coherent arc, solve problems that had no obvious answer? We thought. That's it. We simply sat with discomfort long enough for something to emerge. I find this fascinating. You have a problem, so you sit down and think until you find a solution. Sometimes you're not even sitting down. You go for a walk, and your mind quietly wrestles with the idea while your feet carry you nowhere in particular. A solution emerges not because you forced it, but because you thought it through. What happened in that moment is remarkable: new information was created from the collision of existing ideas inside your head. No prompt. No query. Just you. I remember the hours I used to spend debugging a particularly stubborn problem at work. I would stare at the screen, type a few keystrokes, then delete them. I'd meet with our lead engineer and we would talk in circles. At home, I would lie in bed still turning the problem over. And then one night, somewhere around 3 a.m., I dreamt I was running the compiler, making a small change, watching it build, and suddenly it worked. I woke up knowing the answer before I had even tested it. I had to wait until morning to confirm what my sleeping mind had already solved. That's the mind doing what it was built to do. Writers know this feeling too. A sentence that won't cooperate in the afternoon sometimes writes itself during a morning shower. Scientists have described waking up with the solution to a problem they fell asleep wrestling with. Mendeleev wrote in his diary that he saw the periodic table in a dream. The mind that keeps working when we stop forcing it. The mind can generate new ideas from its own reflection, something we routinely accuse large language models of being incapable of. LLMs recombine what already exists; the human mind makes unexpected leaps. But increasingly, it feels as though we are outsourcing those leaps before we ever attempt them.
Why sit with a half-formed thought when you can just ask? Why let an idea marinate when a tool can hand you something polished in seconds? The risk isn't that AI makes us lazy. It's that we slowly forget what it felt like to think hard, and stop believing we're capable of it. It's like forgetting how to do long division because you've always had a calculator in your pocket. The mind is like any muscle. Leave it unstrained and it weakens. Push it and it grows. The best ideas you will ever have are still inside you, waiting for the particular silence that only comes when you stop reaching for your phone. In the age of AI, the most radical thing you can do might simply be to think.


The Waves

Six children—three girls and three boys—play in a garden by the sea. We follow them as they grow up, go to school, venture away from home, grieve the death of a friend, marry (or not), have children (or not). We do not see or hear their goings on but rather their inner monologues, the thoughts they could never have spoken but feel and know. More prose poem than novel, the writing posits that our inner lives are as rich and detailed as the world around us, perhaps more so. And that there is a continuity threaded through the differences and separations between us, a simultaneous distinctness and blurring of selves, both wave and particle, each headed for the shore.

Stratechery Yesterday

Shopify Earnings, Shopify’s AI Advantages

Shopify is poised to be one of the biggest winners from AI; it would behoove investors to actually understand the businesses they are selling.

usher.dev Yesterday

Using Claude Code to Personalise the Xteink X4

Of course I immediately installed the open-source Crosspoint Reader firmware on the device, which led to a novel realisation. (heh) I love reading on the X4 - it feels great to hold, the eInk screen is crisp and readable, and it fits in my pocket so I can pull it out rather than reaching for my phone. [Image: Hand holding an Xteink X4 reader in front of a laptop screen showing Claude Code] I put it to sleep when it goes in my pocket, and wake it up when I want to read. That's all good, and stops me accidentally pressing buttons to switch pages in my pocket, but there is a noticeable (2-3 second) delay when waking the device up that adds a tiny bit of friction to an otherwise lovely process. Wondering whether there was an alternative, I searched the repo, issues and pull requests for a button lock feature, so I could keep the page I'm reading but lock the buttons from being pressed in my pockets. No such luck. My C++ knowledge is rusty at best, and my time to learn a new codebase, even one as well organised as Crosspoint, is limited, so I was not jumping at the chance to contribute myself. However, having seen a number of pull requests on the repo that were already AI-coded, I thought I'd try my chances with Claude Code. So I opened up Claude in plan mode (shift-tab twice) and gave it a very short prompt: I want to add a 'button lock' feature to this firmware (i.e. double tap power button to lock buttons), how would I approach that? Within moments it had a plan, which on review didn't touch as much code as I'd expected it to. I let Claude do its thing and flashed the updated firmware to the device. There were a few issues - the device didn't quite refresh properly after locking (a follow-up prompt resolved this), and there's no visual indicator that the buttons are locked (maybe something for another time), but it worked.
I've used Claude Code and other LLM-coding tools enough to not be super surprised that it was able to do this, but I was surprised that:
- I can own well-built minimal hardware that is great at its one job
- I can install open-source, community maintained firmware on the device
- I can customise that device to do what I want it to do, without necessarily knowing how the firmware is built and structured
In a time where we don't own or have control over much of the tech in our lives, being able to feel like I can fully customise a device I own felt strangely liberating. Oh and maybe I'll tidy this change up and submit a pull request, if I can get over my hesitance to subject other people to vibe-coded PRs.

Xe Iaso Yesterday

Anubis v1.25.0: Necron

I'm sure you've all been aware that things have been slowing down a little with Anubis development, and I want to apologize for that. A lot has been going on in my life lately (my blog will have a post out on Friday with more information), and as a result I haven't really had the energy to work on Anubis in publicly visible ways. There are things going on behind the scenes, but nothing is really shippable yet, sorry!

I've also been feeling some burnout in the wake of perennial waves of anger directed towards me. I'm handling it, I'll be fine, I've just had a lot going on in my life and it's been rough. I've been missing the sense of wanderlust and discovery that comes with the artistic way I playfully develop software. I suspect that some of the stresses I've been through (setting up a complicated surgery in a country whose language you aren't fluent in is kind of an experience) have been sapping my energy. I'm gonna try to mess with things on my break, but realistically I'm probably just gonna be either watching Stargate SG-1 or doing unreasonable amounts of ocean fishing in Final Fantasy 14.

Normally I'd love to keep the details about my medical state fairly private, but I'm more of a public figure now than I was this time last year so I don't really get the invisibility I'm used to for this. I've also had a fair amount of negativity directed at me for simply being much more visible than the anonymous threat actors running the scrapers that are ruining everything, which though understandable has not helped.

Anyways, it all worked out and I'm about to be in the hospital for a week, so if things go really badly with this release please downgrade to the last version and/or upgrade to the main branch when the fix PR is inevitably merged. I hoped to have time to tame GPG and set up full release automation in the Anubis repo, but that didn't work out this time and that's okay.
If I can challenge you all to do something, go out there and try to actually create something new somehow. Combine ideas you've never mixed before. Be creative, be human, make something purely for yourself to scratch an itch that you've always had yet never gotten around to actually mending. At the very least, try to be an example of how you want other people to act, even when you're in a situation where software written by someone else is configured to require a user agent to execute javascript to access a webpage.

PS: if you're well-versed in FFXIV lore, the release title should give you an idea of the kind of stuff I've been going through mentally.

Full Changelog: https://github.com/TecharoHQ/anubis/compare/v1.24.0...v1.25.0

- Add iplist2rule tool that lets admins turn an IP address blocklist into an Anubis ruleset
- Add Polish locale (#1292)
- Fix honeypot and imprint links missing when deployed behind a path prefix (#1402)
- Add ANEXIA Sponsor logo to docs (#1409)
- Improve idle performance in memory storage
- Add HAProxy Configurations to Docs (#1424)

- build(deps): bump the github-actions group with 4 updates by @dependabot[bot] in https://github.com/TecharoHQ/anubis/pull/1355
- feat(localization): add Polish language translation by @btomaev in https://github.com/TecharoHQ/anubis/pull/1363
- docs(known-instances): Alphabetical order + Add Valve Corporation by @p0008874 in https://github.com/TecharoHQ/anubis/pull/1352
- test: basic nginx smoke test by @Xe in https://github.com/TecharoHQ/anubis/pull/1365
- build(deps): bump the github-actions group with 3 updates by @dependabot[bot] in https://github.com/TecharoHQ/anubis/pull/1369
- build(deps-dev): bump esbuild from 0.27.1 to 0.27.2 in the npm group by @dependabot[bot] in https://github.com/TecharoHQ/anubis/pull/1368
- fix(test): remove interactive flag from nginx smoke test docker run c… by @JasonLovesDoggo in https://github.com/TecharoHQ/anubis/pull/1371
- test(nginx): fix tests to work in GHA by @Xe in https://github.com/TecharoHQ/anubis/pull/1372
- feat: iplist2rule utility command by @Xe in https://github.com/TecharoHQ/anubis/pull/1373
- Update check-spelling metadata by @JasonLovesDoggo in https://github.com/TecharoHQ/anubis/pull/1379
- fix: Update SSL Labs IP addresses by @majiayu000 in https://github.com/TecharoHQ/anubis/pull/1377
- fix: respect Accept-Language quality factors in language detection by @majiayu000 in https://github.com/TecharoHQ/anubis/pull/1380
- build(deps): bump the gomod group across 1 directory with 3 updates by @dependabot[bot] in https://github.com/TecharoHQ/anubis/pull/1370
- Revert "build(deps): bump the gomod group across 1 directory with 3 updates" by @JasonLovesDoggo in https://github.com/TecharoHQ/anubis/pull/1386
- build(deps): bump preact from 10.28.0 to 10.28.1 in the npm group by @dependabot[bot] in https://github.com/TecharoHQ/anubis/pull/1387
- docs: document how to import the default config by @Xe in https://github.com/TecharoHQ/anubis/pull/1392
- fix sponsor (Databento) logo size by @ayoung5555 in https://github.com/TecharoHQ/anubis/pull/1395
- fix: correct typos by @antonkesy in https://github.com/TecharoHQ/anubis/pull/1398
- fix(web): include base prefix in generated URLs by @Xe in https://github.com/TecharoHQ/anubis/pull/1403
- docs: clarify botstopper kubernetes instructions by @tarrow in https://github.com/TecharoHQ/anubis/pull/1404
- Add IP mapped Perplexity user agents by @tdgroot in https://github.com/TecharoHQ/anubis/pull/1393
- build(deps): bump astral-sh/setup-uv from 7.1.6 to 7.2.0 in the github-actions group by @dependabot[bot] in https://github.com/TecharoHQ/anubis/pull/1413
- build(deps): bump preact from 10.28.1 to 10.28.2 in the npm group by @dependabot[bot] in https://github.com/TecharoHQ/anubis/pull/1412
- chore: add comments back to Challenge struct. by @JasonLovesDoggo in https://github.com/TecharoHQ/anubis/pull/1419
- performance: remove significant overhead of decaymap/memory by @brainexe in https://github.com/TecharoHQ/anubis/pull/1420
- web: fix spacing/indent by @bjacquin in https://github.com/TecharoHQ/anubis/pull/1423
- build(deps): bump the github-actions group with 4 updates by @dependabot[bot] in https://github.com/TecharoHQ/anubis/pull/1425
- Improve Dutch translations by @louwers in https://github.com/TecharoHQ/anubis/pull/1446
- chore: set up commitlint, husky, and prettier by @Xe in https://github.com/TecharoHQ/anubis/pull/1451
- Fix a CI warning: "The set-output command is deprecated" by @kurtmckee in https://github.com/TecharoHQ/anubis/pull/1443
- feat(apps): add updown.io policy by @hyperdefined in https://github.com/TecharoHQ/anubis/pull/1444
- docs: add AI coding tools policy by @Xe in https://github.com/TecharoHQ/anubis/pull/1454
- feat(docs): Add ANEXIA Sponsor logo by @Earl0fPudding in https://github.com/TecharoHQ/anubis/pull/1409
- chore: sync logo submissions by @Xe in https://github.com/TecharoHQ/anubis/pull/1455
- build(deps): bump the github-actions group across 1 directory with 6 updates by @dependabot[bot] in https://github.com/TecharoHQ/anubis/pull/1453
- build(deps): bump the npm group across 1 directory with 2 updates by @dependabot[bot] in https://github.com/TecharoHQ/anubis/pull/1452
- feat(docs): Add HAProxy Configurations to Docs by @Earl0fPudding in https://github.com/TecharoHQ/anubis/pull/1424

- @majiayu000 made their first contribution in https://github.com/TecharoHQ/anubis/pull/1377
- @ayoung5555 made their first contribution in https://github.com/TecharoHQ/anubis/pull/1395
- @antonkesy made their first contribution in https://github.com/TecharoHQ/anubis/pull/1398
- @tarrow made their first contribution in https://github.com/TecharoHQ/anubis/pull/1404
- @tdgroot made their first contribution in https://github.com/TecharoHQ/anubis/pull/1393
- @brainexe made their first contribution in https://github.com/TecharoHQ/anubis/pull/1420
- @bjacquin made their first contribution in https://github.com/TecharoHQ/anubis/pull/1423
- @louwers made their first contribution in https://github.com/TecharoHQ/anubis/pull/1446
- @kurtmckee made their first contribution in https://github.com/TecharoHQ/anubis/pull/1443


I Hate Workday

I hate using Workday to apply to companies. I can’t speak to all the other things they offer, but the experience for job applicants sucks. Why do I have to create a separate account for each company that uses Workday to handle applications? Why do you ask me to upload my resume and then still prompt me to manually enter data? Why is the application process so long compared to other tools like Greenhouse and Ashby ? I swear I’ve avoided applying to so many companies just because they forced me to create a Workday account to apply. I’d rather stay unemployed and apply elsewhere. Just look at how much people hate Workday. Business Insider published an article about how much people dislike it. People on social media like Threads are also annoyed . The best place to see people frustrated with Workday is Reddit. Search for “workday sucks site:reddit.com” on Google and you’ll see what I mean. Titles like “I fucking hate Workday.” tell you everything you need to know. Fuck you Workday.

xenodium Yesterday

Ready Player cover download improvements

At times, even purchased music excludes album covers in track metadata. For those instances, ready-player-mode offers , which does as it says on the tin. The interactive command offers a couple of fetching providers (iTunes vs Internet Archive / MusicBrainz) to grab the album cover. The thing is, I often found myself trying one or the other provider, sometimes without luck. Today, I finally decided to add a third provider ( Deezer ) to the list. Even then, what's the point of manually trying each provider when I can automatically try them all and return the result from the first successful one? And so that's what I did. In addition to offering all providers, now offers "Any", to download from the first successful provider. Now, why keep the option to request from a specific provider? Well, sometimes one provider has better artwork than another. If I don't like what "Any" returns, I can always request from a specific provider. While on the subject, I also tidied up the preview experience and now display the thumbnail in the minibuffer. In any case, best to show rather than tell. Enjoying your unrestricted music via Emacs and ? ✨ sponsor ✨ the project.
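The "Any" behaviour is a simple first-success fallback chain. A minimal sketch of the idea, in Python rather than the Emacs Lisp the mode is written in, with hypothetical provider functions standing in for the real fetchers:

```python
def fetch_any(providers, query):
    """Try each cover-art provider in order; return (name, artwork) from
    the first one that succeeds, or None if they all miss or error."""
    for name, fetch in providers:
        try:
            result = fetch(query)
        except Exception:
            continue  # provider errored; fall through to the next one
        if result is not None:
            return name, result
    return None

# Hypothetical providers: each returns artwork bytes, or None on a miss.
providers = [
    ("iTunes", lambda q: None),            # simulate a miss
    ("Deezer", lambda q: b"cover-bytes"),  # simulate a hit
    ("MusicBrainz", lambda q: b"other-bytes"),
]

print(fetch_any(providers, "some album"))  # -> ('Deezer', b'cover-bytes')
```

Keeping the per-provider commands alongside "Any" costs nothing here: "Any" is just the chain, and a specific provider is the chain with a single entry.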
