
The Worlds Left To Conquer

It has been a year and a half since I quit my job to start a consultancy. It took me years to build up to quitting, and I had not only a chip on my shoulder, but to quote Seth Sentry, “the guac and the dip and the salsa.” The people that read this blog probably understand what I’m talking about. I looked around at how organizations are run, at the people that told me what to do, and thought “Surely I could do a better job than this.” This feels like a dangerous train of thought. On one hand, that arrogance is precisely one of the mechanisms that makes someone incompetent. If you’ve learned everything, there’s no real reason to open up another book, and even that is rather generously assuming that the person providing a service to you has bothered to crack the spine on even one. On the other hand, how else are we to make sense of the world? If you walk out the door, you will be immediately clotheslined by institutions failing to achieve the most basic of tasks with any reliability. Almost every office I’ve walked into as an employee has been a decrepit nest populated by the beaten-down working class, a sickly ooze of self-important managers amongst whom a Gladwell reader ranks as a towering intellect, and executives that are feverishly muttering the word “AI” to credulous journalists as they blindly cut headcount. So many of these institutions seem to be held together by either regulatory capture or writhing clients bound by enterprise contracts like so much barbed wire. I’ve lost track of the number of times that someone has looked at work from a company like KPMG and gone “Ha ha ha, maybe we should all be consulting – then we can do terrible work and bill at two thousand dollars a day.” This joke is so overused that you can see that the person saying it is reluctantly dispensing the cliché. So when I kicked off the company, some traitorous part of me was hoping that it would be difficult, as horrible as that would be for me personally.
If it was hard, yes, perhaps I’d have to go back to some miserable office and be beset on all sides by smiling imbeciles talking about innovation, but it would make sense. It simply can’t be that easy to be free of those structures. Surely there’s a reason for it that isn’t simply “Wow, we’re systematically producing people that are terrible at their jobs and they can’t even see it.” Unfortunately, that really is most of the explanation. In late 2025, I said I’d write more after admitting how awkward it is to say the business is going well. I haven’t written anything for five months, and there’s no delicate way to put this: I drastically understated how well we’re doing. I'm ripping off the bandaid: in February 2026, I realised that we had already generated enough revenue to last us until 2027. On some engagements, I split my income several ways with teammates that weren’t on the job and still exceeded my corporate salary. For forty hours in 2025, I broke a thousand dollars an hour on tasks with measurable success metrics, an amount of money that would have seemed like some sort of sick joke two years ago, and both customers asked for a repeat engagement because the service quality was higher than what specialised firms were doing – I had spent about ten hours thinking about the engagement model. And we still have seven months left in the year. All of this is to say two things. The first, I’m not going to pretend that everyone would find it as easy as I do 1 , but it’s easy enough that basically anyone that can read a book in both software and the humanities will be fine. 2 The other is that this was all so easy that I’m going mad with boredom.
Crept to their door, opened it slowly and tip-toed but, shit
Somebody set the bar too low and I tripped over it
Whoops, jumped up, tried to throw in a quick ultimate
Just hopin' to scare 'em but, oh, it just killed both of 'em
Bodies with slit throats on the linoleum
I just throw 'em in dumpsters, the shit's appropriate
– Blue Shell, Seth Sentry

I wish that I could say it was difficult to make things work. It would make sense of the world. I could have fun talking about going extremely overboard with machinations. The reality is that all of it, from service delivery to sales, has been more-or-less trivial. Closing and delivering a deal for twenty thousand dollars takes less time and energy than one sprint in a regular office. Nothing even feels high stakes – the global economy is so large that, for an efficient team, you can roll the sales conversation dice over and over until it turns up a 20. I personally blundered hundreds of thousands of dollars in sales over our first six months, and we’re fine. As a company, there are many things that I'd like to improve – it might sound silly given that we’re doing well and all our customers are happy (or lying to me), but the places where we're falling short of my expectations are extremely visible to me. By virtue of having a sizable following on this blog, I have extensive exposure to programmers that are better than me and people that are smarter than me. Every Thursday, I have a call with Efron Licht, and frankly I can scarcely grasp why someone that competent spends time talking to me 3 . The problem is that I’m not competing with Efron. If I was, I'd either have to study for five hours every day for the rest of my life, or shut the company down tomorrow. I’m competing with people that don’t have functional literacy. And it’s not just incompetence at programming, it’s everything. The world has phoned it in, leaving us with no pressure to push for excellence.
Last year, I was unable to put clients on both Evidence and Prefect because the former failed to attend a sales meeting booked through their website and the latter failed to book a meeting after the ex-real estate agent they hired failed to actually schedule a meeting following outreach, also through their website. Our (excellent) accounting team is Hales Redden, who managed my co-founder Jordan Andersen’s old physiotherapy business… because the people I tried in Melbourne don’t check their sales inbox. Our lawyer is reader Iain McLaren 4 because the firms I initially tried also don’t respond to their sales inbox. I cannot state this clearly enough – the bar is so low that it is hard to give people money. There are competent actors on the market, but at least in software, there are simply so few of them that you’re more likely to be allies than enemies. This was infuriating at first, comical later, and has now lapsed into depressing. As an employee, these people were an unending source of frustration, the same six-figure delinquents that would forget to renew my contracts when I was on a temporary visa. As an independent operator, they’re babies that have yet to develop executive function and I’m taking their candy. I’ll do it – candy is delicious and babies are weak – but it's hard to feel good about it after the thrill of being right wore off. Some days, I get to 5PM after pitching to fix a competitor's work, put my head in my hands, and go “There is no way you dumb motherfuckers can’t stand up a database. We’ve been on the moon. We’ve been on the fucking moon. There’s no way you dipshits cannot operate Google.” Nonetheless, there is money in my bank account and I’m in a house with three bedrooms, and we must all reckon with this dreadful portent. Is this it? I’m just going to stand up data platforms for the next forty years, a task so easy for us that we could do it drunk out of our minds, then die?
As much as I enjoy having free time, the whole affair has been oddly unsatisfying. Every day, I wake up and feel like I’ve opted out of society. I don’t have the same problems as my peers anymore. Daily stand-up is a hazy memory that I remember with faint queasiness. And the very nature of consulting, even though we make the majority of our money on technical delivery rather than pure advice, is that we’re simply adding efficiency to clients. We’ve had the luxury of firing a few for bad vibes 5 , leaving us only with clients that we’re very happy to work with – but at the end of the day, they’re doing the thing worth being proud of, and we’re simply an instrument. They do the admirable thing, and we make them better at it. It’s better than continuing to be an ultra-coward and getting paid to let people Do Scrum at me, but I dunno. Part of the reason that we’ve done so well to begin with is that we haven’t worried about scaling at all. I still think that is the obviously correct decision when you’re starting off and don’t want to take on debt. But at the same time, when a reader asks me if I’m hiring, my answer is essentially, “The whole business is designed for the team to be comfortable, and we didn’t build in the leeway to take care of other people.” My largest expenses outside of housing over the past year have been donations to a local writer’s group, Meridian Australis, and various bits to other causes, but this amounts to a few thousand dollars per year. I’m probably supposed to be content with that, but I’ve already quit my job, so what’s a bit more risk? Why am I always reading about unreflective narcissists and tedious bootlickers funding things? Why can’t the causes I care about have resources thrown at them without them having to contort their value systems for the money?

At any rate, the passage is crystal clear in both cases: Alexander is not weeping in sorrow that there are no more throats to cut. This is not a picture of a man at the end of a career of world conquest; he’s at the beginning. “Look at all these throats—and I haven’t even cut one!”
– And Alexander Wept, Anthony Madrid

We still run into problems all the time that aren’t solvable by simple efficiency – perverse incentives from sloppy legislation, places where buyers can’t understand enough to avoid exploitation, gambling companies run by vile degenerates, things that make me want to throw up. I am fully engaged with capitalism every day, and despite the fact that I’m winning for some definition of winning, much of it is grotesque. Sometimes I wonder whether I should have gone into medicine, like most of my family, but at the same time someone has to keep the databases running. So here’s what’s going to happen for now. We have seven months left in the year. Around the start of June, we’ll be done with our most complex work, and ready to try something new, where by “something new” I mean we’re going to pick some nerds (pejorative) and cut their throats. The area that we’ve picked out specifically is technical recruiting, if only because it is the most accessible area that is most densely populated with easy prey. It should take us a little bit to knock out a small platform 6 , then I’ll broadcast that here for readers to sign up. We’ve done some work in the space, and all I can say is that software recruiters are defenseless money piñatas incapable of serving the competent sectors of the market, and I am going to beat them with a large stick and then loot the wallets from their corpses. Is this it? I’m just going to stand up data platforms for the next forty years, a task so easy for us that we could do it smashed out of our fucking minds, then die?
At a rough estimate, every time we place someone that would otherwise have had to go through the hellish experience of conventional recruiting, we could plausibly knock one individual recruiter out of the market because of their slim margins (due to all the incompetence), which will temporarily satisfy my never-ending lust for blood. Then we’re going to take that money and use it to knife someone else that's causing negligent misery, and funnel some of the excess into things we care about. If we do a really good job, I really believe we can meaningfully distort some section of the market, even if that’s just “Ugh, everyone knows you can't recruit software engineers in the A$180K band in Melbourne. Those Hermit Tech folks have destroyed all the margin and established themselves as supreme dictators, and also their CEO will bully you online if you do a bad job.” I’m going to commit economic violence for the next forty years, and get so good at it that we can do that smashed out of our minds, teach other people how to do it, then die, and some of you will pick up the work where we left off.

1. I’ve had a sale for $100,000 fall through, and twenty minutes later said “Easy come, easy go” and moved on with my life. I’m sure this is trainable, but I can’t take credit for this because I think I’m just a weirdo. ↩
2. It is unbelievable how much of a competitive advantage “Responds to emails from paying clients within 24 hours” is. The bar is subterranean. ↩
3. Incidentally, the two largest influences on my company’s culture are Jesse Alford and Efron Licht, on team culture and programming fundamentals respectively. I don’t think Jesse has written anything particularly friendly for mass-consumption, but Efron has an amazing series called Starting Systems Programming that has been transformative for my practice. It might seem obvious to some of the most talented programmers in the audience, but I cannot recommend it highly enough for everyone else.
If you enjoy it, I’m sure he’d get a huge kick out of an email, as I don’t think he has analytics. I’ll do a writeup on all my influences at some point, as the list is long and they all write quite a bit. ↩
4. Certified Cool Dude, by the way. ↩
5. To no one’s surprise, they’re mostly startups. ↩
6. Think “limited window for candidate signups and extreme pickiness about employers, no CVs, and a hard limit on interview stages, and so on”, not Seek. I don’t think Seek has done anything wrong, they’re just the inevitable result of letting the entire market use their service. ↩


Peaked in 2015

I have a confession to make. I prefer Apple TV’s 2015 remote. The remote was universally ridiculed for its “which way is up?” problem – too much vertical symmetry which didn’t give your hand enough cues to know whether you’re picking it up the right way or the wrong way. Apple tried a half-measure first; in 2017 they broke the symmetry by making the MENU button slightly distinct in visual and tactile ways. Hindsight is 4K, but I don’t think it had a chance of working – the tactile cues were too subtle, and the visual ones do not matter when you’re not looking. So Apple overshot – the subsequent 2021 edition was a full-measure-and-then-a-half. The remote shrank the touch surface but otherwise drastically increased the volume, and added four arrows, two new buttons, and a strange iPod-inspired click wheel interaction on top. And to me it started feeling a bit complicated, inching toward the very TV remotes that earlier designs ridiculed. (It also wasn’t as pleasant to touch, as the buttons feel a bit rougher.) But the reason I like the 2015 remote is primarily because it introduced one of my favourite gestures in recent history: tap to see progress. It’s hard to describe how wonderfully light this interaction feels every time I use it.
You just tap anywhere on the remote’s top half, you see where you are in the video via a subtle UI, and then wait a few seconds for it to disappear. After this, doing the same in every other player – YouTube, Netflix, HBO Max, anything on a Mac or even the iPhone – feels clunky and heavy. In many of them, you can’t even see where you are without stopping the video! It gets better. Tap for the second time, and the elapsed time gets replaced by the current time, and the remaining time by what the clock will say whenever you’re done watching. I thought this was delightful and clever, sneaking in clock functionality without showing it all the time. There is also this really nice gestural separation. When you watch the video, taps and swipes are safe. Anything that is “destructive” – that is, causes the video to stop, or rewind, or fast forward – is on the “click” layer: press harder on the center to pause, or on either side to move forward or back. What I’m describing feels mechanically similar to other input devices, but the devil is in the details. On smartphones, everything is a tap, so you don’t really get anything lighter. On a Mac, tap as a gesture could only be available for people who opt in to press-to-click on their trackpad (like I do) – but the fact that tap is the default for clicking means that can never realistically happen. The Apple TV tap feels conceptually like the Mac’s hover instead, but so much more pleasant and elegant and simple. (I want to prototype tap on a Mac as a lightweight “explainer,” showing tooltips there instead of on hover.) To be fair, the tap gesture still exists in the still-current 2021 Apple TV remote, too – but the tap area is much smaller. And just in case you were curious, these are the first two editions: the 2005 remote – shipped with the iMac, predating Apple TV – and the 2010 remote. (I’m referring to model years, because Apple’s own names are so confusing.)
I don’t have access to Apple’s user feedback, but my guess is that Apple’s 2021 design was the right thing to do. But looking at four-and-a-half of these models side by side, I am still in the 2015’s minimalistic, unusual, innovative corner. #apple #details #interface design #touch


I'm off GitHub

Ok, that's it. I'm officially off GitHub. First I moved all of my private repos to my Synology, which was extremely easy to do. I did that around a week or so ago and it's been working great. Then I had to start sorting and moving all my public repos to Codeberg. Many were archived as I no longer maintained the projects, which left me with just 7 actual repos that I needed to move. Pure Blog/Comments and Simple.css were the most challenging as they all had other people who relied on them, but I managed to get them moved with a little bit of messing around. The others were super simple: I used Codeberg's migration tool to migrate the repos over, then ran a command locally to point my repos to a new target. That's it! Repo migrated. It's fine. And I don't mean that negatively - there's a lot less going on in the UI than on GitHub, but everything is still familiar and similarly laid out. There's been almost zero learning curve moving from GitHub to Codeberg, so props to the Codeberg team for that. I've applied for a Codeberg membership as I think it's important to support the open source projects we use, so hopefully that will be approved soon. Overall I'm very happy with the move. All the old GitHub repos have had their files updated to point to Codeberg, and they too have been archived. So that's one less piece of big tech I need to rely on. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.
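The post doesn't show the command in question, but repointing an existing clone at a new host is typically a one-liner with `git remote set-url`. A sketch with placeholder URLs (`youruser/yourrepo` is hypothetical, not the author's actual repo), demonstrated in a throwaway repo:

```shell
# Demo in a throwaway repo; URLs are placeholders, not the author's repos.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git remote add origin https://github.com/youruser/yourrepo.git

# The migration step: point the existing "origin" remote at Codeberg.
git remote set-url origin https://codeberg.org/youruser/yourrepo.git

git remote -v   # both fetch and push now show the Codeberg URL
```

After this, `git push` and `git pull` talk to Codeberg; no local history or branches are touched.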


#LiegendDemo - protesting for ME/CFS treatment & visibility

Today, I attended a protest for the visibility of ME/CFS sufferers. ME/CFS is short for Myalgic encephalomyelitis/chronic fatigue syndrome; it is a chronic illness characterized by extreme fatigue that doesn't improve with rest, along with sleep issues, dizziness, muscle and joint pain, cognitive difficulties, extreme sensitivity to stimuli, and more. There is significant overlap with what is often referred to as "Long Covid" or "Post Covid", leading to speculation that they're one and the same. It is estimated that 1.5 million people are affected in Germany alone, with around 40 million estimated worldwide. One day, it could be you. The exact cause is still being investigated, but it is most often associated with a viral infection (Covid, Epstein-Barr, etc.), and while symptoms can sometimes be managed, a full recovery is very rare. There is currently no known treatment or cure, and diagnostic criteria are still being developed after all this time (50 years since the WHO acknowledged it!), which makes getting a diagnosis hard. There is stigma around the illness, with doctors dismissing symptoms entirely or blaming it on mental illness or laziness, inappropriately trying to force sufferers to overexert themselves, worsening their symptoms. This is aided by the fact that ME/CFS is often not taught in medical degrees. This group of patients is especially vulnerable, because advocating for themselves takes so much energy they don't have. Many of them cannot even get out of bed or do any strenuous mental tasks, or they have to spend the little energy they have on the bare minimum to survive and then have none left for their free time. They are frequently very isolated and lacking the support they need. Any exertion can cause weeks of increased symptoms (post-exertional malaise).
Years of their life are just gone, spent existing in bed in a dark room, unable to think clearly or to really move, having difficulty speaking, having difficulty processing and enduring sounds, touch, or light. The fatigue can become so bad that they are unable to even talk. Their education, finances and careers suffer, they can no longer take care of themselves and their families or pets, they struggle with doctor's appointments or the paperwork required to receive assistance, disability benefits, etc., and often start to have other chronic illnesses like fibromyalgia, irritable bowel syndrome, postural orthostatic tachycardia syndrome (POTS) and more. It can affect both children and adults. It's easy to forget they exist because they are not visible out in public and left behind in public discourse. To make them visible, people all across the country meet up to lie on the floor - this happened for the first time in 2023, and is still going strong in 2026. I don't have ME/CFS, but after a Covid infection, I struggled with orthostatic issues, post-viral tachycardia, and my chronic illnesses (Crohn's and Bechterew's disease) sometimes cause me intense fatigue as well; so I can relate a little to some parts of the illness, but I am lucky that my issues have treatments that helped (and some I could recover from). It was important to me to show up when they can't.

The protest's demands:

1. Dedicated funding for ME/CFS: Research funds must be specifically allocated to ME/CFS with PEM, rather than absorbed into broader post-infectious research categories. Otherwise, the disease risks being underfunded while still being treated politically as adequately addressed.
2. Priority for drugs and effective treatments: Prioritize the development of medications and clinically effective therapies, not only basic research or administrative structures.
3. Mandatory involvement of patient organizations: ME/CFS patient representatives with PEM expertise should be directly involved in planning and implementation. Past programs included people unfamiliar with the disease, resulting in research that overlooked core symptoms and legitimized unsuitable therapies.
4. Immediate funding for biomedical research: Concrete biomedical projects should receive funding without delay, rather than spending years building structures before supporting treatment-oriented research, especially given that promising drug approaches already exist.
5. Clear disease definitions, rigorous research standards and exclusion of unsuitable research approaches: Studies should use strict diagnostic criteria and focus on PEM as the defining symptom. Many previous studies examined general fatigue rather than properly diagnosed ME/CFS, producing weak or misleading results. Research that ignores the biological, multisystem nature of ME/CFS should not be funded.
6. Legal and political safeguards: The so-called research decade should be backed by binding commitments rather than remaining a non-binding political initiative. Otherwise, funding and programs could be reduced or canceled after political changes.
7. Healthcare access, diagnostics, social support, patient care, and sustainable research infrastructure: Long-term structures such as specialized centers, biobanks, patient registries, and clinical trial networks should be established to support ongoing nationwide ME/CFS research.
8. Use existing research and strengthen international cooperation: Future work should build on existing ME/CFS findings and coordinate internationally to avoid redundant studies and accelerate progress.

The International ME/CFS Awareness Day is on May 12th.

- Donate to the ME/CFS Research Foundation
- Read what people affected by ME/CFS say
- 18 Minute Short Documentary on YouTube, English Subtitles
- 🇩🇪 Doku: ME/CFS: Keine Kraft mehr
- 🇩🇪 ME/CFS sufferer in Austria making use of assisted suicide program
- 🇩🇪 Liegenddemo Germany
- 🇩🇪 MECFS.de
- 🇩🇪 MECFS-Info.de
- 🇩🇪 ME-Hilfe.de
- 🇩🇪 Fatigatio.de
- 🇩🇪 Nicht Genesen Kids

Reply via email

Published 09 May, 2026


Emulating old junk from yesteryear – or my obsession with making native-resolution PS2 emulation look good

Lately I’ve been on a kick to tackle the part of PS2 graphics emulation that never seems to come up: analog TV emulation. 99% of the PS2 emulation space is all about upscaling to the max, cranking out 4K with polygons so sharp they can cleave mountains in half, but that’s not particularly to my taste, which is why paraLLEl-GS focuses more on the super-sampling aspects rather than raw upscaling. There’s an earlier blog post on paraLLEl-GS here if you have no idea what I’m referring to. For example, here’s a raw native render with progressive scan (640×448) with the paraLLEl-GS backend: With the Vulkan HW renderer in PCSX2, a basic 2x upscale would look like this: which is obviously higher resolution, but does not attempt to resolve aliasing either, and as upscaling factors go up, the mismatch between texture resolution, polygon counts and output resolution creates a jarring effect for me. The approach I tend to favor is super-sampling and keeping the resolution native, even if it means a more blurred resolve. Here’s how it’d look with 16x SSAA. This is quite overkill for most cases, but why not. There is a special mode in paraLLEl-GS that can scan out the 16x super-samples at double the resolution, which is effectively 4x SSAA at double the resolution. It can look great when it works well, but the key word is when. This approach is not playing to the strengths of paraLLEl-GS at all, but it’s a thing. The main problem is that 2D images like HUD elements remain at native resolution with integer scale, which can create a jarring look, and post-processing passes can mess things up in some cases. My main focus is on the native-resolution, super-sampled output, since it has very few gotchas compared to other upscaling styles. For a while, I’ve relied on FSR1 to do the upscaling, which is just a temporary hack. While properly anti-aliased 3D can look surprisingly good with FSR1 when blown up to large resolutions (what it is designed for), 2D game elements still create questionable artifacts.
Text upscaling starts looking like those old HQ2x, SuperEagle and xBR filters that used to be popular with SNES emulators. That’s likely because FSR1 is “just” edge-aware Lanczos filtering at its core. Still, at SD resolutions there’s only so much you can do to blow it up to a modern display. The result here is of course quite blurry, but looking at it from a reasonable distance, it can look sort-of okay on a good day. The only proper way to display SD content is in my opinion to use a CRT, or in lieu of getting a nerd tan, CRT shaders. Simulating CRTs has been done to death to the point Hel is blushing, so I feel a bit uncomfortable even trying to write about it, but this post wouldn’t be complete without it. RetroArch has like 23428323 CRT shader presets already, and there’s nothing novel about any of this. However, there are some considerations for PS2 that most CRT shaders don’t target: A lot of CRT shaders assume a high quality VGA-style monitor. Nothing wrong with that of course, but I find most of them a bit “too” good for PS2. Where we’re going, we’ll need that analog fuzz too. I’ve been deep down the rabbit hole looking up old specifications to get a deeper appreciation for the brilliant engineering involved in making color TV work all those years ago, and the PS2 era was the last hurrah for SD CRTs, 480i warts and all. I’ve sort of gone through all of this stuff before back in my early days of programming, but I’d like to think I’m a little smarter than I was back then, and I learned far more details than I knew before. From a graphics programming PoV this is hardly “difficult” stuff like debugging random GPU hangs at 10pm on a Friday coming from the latest and greatest AAA game, but hey, sometimes you just gotta relax with some good old DSP coding to stay sane. 
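Circling back to the super-sampled resolve from the start of the post, the core idea is simple enough to sketch in a few lines. This is my own toy illustration, not paraLLEl-GS code (which does this on the GPU): render at S×S samples per native pixel, then box-average each S×S block down to one pixel; the 16x SSAA mode corresponds to S = 4.

```python
# Toy box-filter SSAA resolve: average each s x s block of supersamples
# down to one native pixel. Illustrative only, not paraLLEl-GS's actual code.
def ssaa_resolve(img, s):
    """img: 2D list of floats of size (h*s) x (w*s); returns the h x w box average."""
    hh, ww = len(img) // s, len(img[0]) // s
    out = []
    for y in range(hh):
        row = []
        for x in range(ww):
            acc = sum(img[y * s + j][x * s + i]
                      for j in range(s) for i in range(s))
            row.append(acc / (s * s))
        out.append(row)
    return out

# A 2x2-supersampled diagonal "edge": a half-covered native pixel resolves
# to an intermediate coverage value instead of a hard 0 or 1.
hi_res = [
    [1.0, 1.0, 1.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
native = ssaa_resolve(hi_res, 2)  # 2x2 native image
```

The averaging is what produces the "more blurred resolve" mentioned above: geometric edges land on fractional coverage values rather than stair-stepping.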
The signal processing for NTSC and PAL is fairly straightforward, and it’s actually a good entry point into signal processing for graphics programmers since it combines very visual things with actual real-world problems.

Component is the highest quality analog cable (actually, 3 cables!) available to consumers. It supports progressive scan, which very few PS2 games natively supported. Simulating this cable is quite trivial.

Composite is the yellow cable “everyone” used. This one is the hardest, since separating luma from chroma in the same single-channel signal is actually kinda complicated to do well. Most of my time was spent trying various strategies for this problem and trying to come up with a look that seems authentic and “tastefully shitty”.

S-Video is very similar to composite, except that luma and chroma are separate signals. Chroma is packed into one cable, and chroma decoding is basically the same as for composite signals, where phase and amplitude dictate the hue and saturation. The main difference to component is that bandwidth for chroma should be quite low, so more color smearing should be present.

Us Euro-bros technically had RGB SCART too, but I’ve never seen those cables myself for a Sony console. I believe our old N64 and GameCube had those, but I’d have to double check next time I visit the family … While it is very easy to find a ton of content online about old TV standards, your random YouTube video is not going to have the minute detail needed to implement much. I found “Video Demystified – A Handbook for the Digital Engineer (2005)” by Keith Jack, which has a ton of the great detail that is often left out, to make sure my implementation stays grounded. BT.601 defined how to take NTSC and PAL signals and turn them into the digital domain. It is the foundation for all digital video today. The primary interest for us is that the standard defines a 13.5 MHz sampling rate.
This was supposedly chosen since it was convenient for both NTSC and PAL and their filtering requirements. This scheme works out to 720 horizontal pixels for both NTSC and PAL when you account for 525 lines at 29.97 Hz and 625 lines at 25 Hz. The H-sync part of the analog signal pads out to a bit over 800 pixels, but that’s irrelevant here.

I also learned just recently why it’s called YUV444, 422, 420, etc: The “4:2:2” notation originally applied to NTSC and PAL video (480i and 480p systems), implying that Y, U and V were sampled at 4×, 2× and 2× the color subcarrier frequency, respectively. The notation was then adapted to BT.601 digital component video, implying that the sampling frequencies of Y, Cb and Cr were 4×, 2× and 2× 3.375 MHz, respectively. Now you know!

The Nyquist frequency of SD video is thus 6.75 MHz. Talking about video like this is a little weird, but it will make more sense later since analog signals like NTSC and PAL have a certain bandwidth, which limits how much horizontal resolution we can cram into them. The number of video lines is hardcoded. The video signal is really just a 1D signal if you look at it in an oscilloscope (haven’t seen those since my Bachelor’s …).

The next question was to determine the sampling rate of the PS2 CRTC. Given that it generates analog signals, it’s not obvious that PS2 would use a standard frequency here. However, several old forum threads I found did indeed suggest that PS2 is based around the 13.5 MHz rate. This is likely because it could use off-the-shelf video DACs at the time. The nominal maximum horizontal resolution of PS2 is 640 pixels, but it seems like overscan is supposed to stretch the 640 pixels out to fill the screen anyway. Even if 480 lines are “visible” in NTSC, games typically just render 448 lines since the top and bottom are eaten by overscan on most TVs.
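The convenience of 13.5 MHz can be sanity-checked with a few lines of arithmetic. This is a minimal sketch using the standard BT.601 figures (858 and 864 total samples per line are the well-known totals, of which 720 are active picture):

```python
# Sanity check on BT.601's choice of 13.5 MHz: it divides evenly into both
# the NTSC and PAL line rates, giving an integer number of samples per line.
FS = 13.5e6  # BT.601 luma sampling rate

ntsc_line_rate = 525 * 30 / 1.001  # 525 lines at 29.97 Hz -> ~15734 lines/s
pal_line_rate = 625 * 25           # 625 lines at 25 Hz -> 15625 lines/s

ntsc_samples_per_line = FS / ntsc_line_rate  # 858 total samples per line
pal_samples_per_line = FS / pal_line_rate    # 864 total samples per line

# Both systems keep 720 of those as active picture; the rest is blanking.
nyquist = FS / 2  # 6.75 MHz
```

Any lower common multiple of the two line rates would have left too little bandwidth; any higher one would have been more expensive silicon at the time.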
The actual CRTC clock seems to be 54 MHz, because when programming the CRTC, you’re supposed to set some dividers which end up determining the resolution. E.g. 640 pixel width is a divider of 4, 512 pixels a divider of 5 and so on. It conveniently supports most relevant horizontal resolutions like this. My armchair theory for how this works is that the CRTC runs at 54 MHz and uses the divider to do a zero order hold (aka nearest filtering) which is then fed into the DAC. Either that, or the console is really doing horizontal linear filtering up to 640 pixels and does composite video generation at 13.5 MHz. I’m not sure what really happens, so I just have to guess. I’m not quite obsessed enough to have an oscilloscope to suss out micro-details like these from a real PS2. The PS2 is technically able to emit 1080i and 720p signals, so there’s no reason why it couldn’t do analog video processing at the full 54 MHz.

To debug any of this we need test images. I don’t exactly have the test signal generators that TV engineers had back in the day, so I had to synthesize something. The use for these is to validate various edge cases in the NTSC and PAL decoding process. The basic idea is to have some 75% color bars at the top to validate the color pipeline. The middle section is a sweeping increase of horizontal frequency up to the Nyquist of 6.75 MHz. Each 1 MHz section is delimited by a single line color bar. The lower portion is a sweep with increasing frequencies. This is used to test the comb filter.

Doing FIR filter design by hand is uh … not something I have time for. Anything beyond the basic windowed sinc design process gets annoying very quickly. GNU Octave is a free alternative to Matlab that does what we need for these tasks.
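The armchair divider theory can be sketched numerically. This is purely a model, not verified hardware behavior: it assumes a 54 MHz master clock divided down while scanning a fixed active line duration. The 640 → /4 and 512 → /5 pairs are from the text above; the other widths simply fall out of the model.

```python
# Hypothetical model of the PS2 CRTC pixel clock (assumption, not verified):
# a 54 MHz master clock divided down over a fixed active line duration.
CRTC_CLOCK = 54e6
ACTIVE_TIME = 640 / (CRTC_CLOCK / 4)  # active duration implied by 640 px at /4

def width_for_divider(div):
    pixel_clock = CRTC_CLOCK / div
    return round(ACTIVE_TIME * pixel_clock)
```

Under this model, dividers 4, 5, 8 and 10 land exactly on 640, 512, 320 and 256 pixels, which matches the “most relevant horizontal resolutions” observation.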
While we can do composite video generation at the 13.5 MHz rate, it is not easy to avoid aliasing, especially since we will be modulating with a chroma subcarrier that will shift the spectrum all over the place, potentially creating aliases out of thin air. The handbook calls for 2x oversampling to avoid this. First, the image is padded out to fill 720 horizontal pixels for BT.601 reasons. Then, integer scale the image up to 2880xN (to match with how I understand the CRTC to work), and downsample that with a low-pass to 1440xN, which completes a clean 2x oversampling. A polyphase upsampling filter is of course possible too to avoid the intermediate upscale, but that’s needless complexity for something that literally takes 10 microseconds on the GPU. The intermediate upscale is not written to memory at least; the filter just assumes a 2880 pixel input and samples the input texture redundantly instead.

Since these are fundamentally analog signals, we only filter horizontally. The natural image type for these is 1D Array. All the processing shaders are 1D as well. (The 1504 width is just extra padding for convolutions.)

It’s all the same thing really. NTSC is the oddball one with YIQ, but in practice this difference is completely irrelevant. The original idea of IQ was to take the blue and red difference signals and rotate them by 33 degrees so that I would align with skin tones better, and give I more bandwidth than Q. However, the handbook doesn’t seem to give it too much consideration, especially not for composite signals which are not meant to be sent over the air. The implementation of this is really just doing a 3×3 matrix multiply with RGB, nothing special.

NTSC and PAL have fairly narrow bandwidths defined for their broadcast signals, but composite signals don’t really have those strict limits. However, since the digital input has Nyquist at 6.75 MHz, the handbook calls for bandlimiting the signal to about 6 MHz anyway.
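The YIQ rotation mentioned above can be sketched from the textbook coefficients. These are the standard BT.601/NTSC weights (0.299/0.587/0.114 luma, 0.492/0.877 difference scaling), not anything PS2-specific, and the whole thing collapses into the 3×3 matrix the shader actually uses:

```python
import math

# Textbook NTSC RGB -> YIQ: form Y and the scaled B-Y / R-Y difference
# signals (U, V), then rotate (U, V) by 33 degrees to get (I, Q).
def rgb_to_yiq(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    s, c = math.sin(math.radians(33)), math.cos(math.radians(33))
    i = -u * s + v * c
    q = u * c + v * s
    return y, i, q
```

Since every step is linear, the rotation can be folded into the RGB weights ahead of time, which is why the implementation is “just a 3×3 matrix multiply”.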
Super sharp falloff filters just lead to ringing.

Component video is specified by BT.1358 and doubles the sampling rate to 27 MHz. Y should fall off at about 11 MHz and chroma at half that. Interpreting that in interlaced terms, the falloff should start at 5.5 MHz, getting close-ish to the Nyquist limit for SD video. After filtering luma and chroma, just decode back to RGB and we have a nice signal, done.

Chroma is nuked down to ~1.3 MHz or so for composite / S-Video. Sometimes, 0.6 MHz is called for it seems, but it’s quite unclear … S-Video can skip the modulation + demodulation part if we just want to simulate the smear, but it was easier to let it go through the full chain for completeness. Despite the chroma bandwidth being so horrible, it sort of looks good. Amazing how terrible our eyes are at seeing color detail. I wonder if this is where the esoteric 4:1:1 YCbCr subsampling mode comes from, now that I think about it …

E.g. the NTSC luma filter, which starts falling off at about 4.2 MHz and stops at 6.75 MHz-ish. Broadcast NTSC is capped well below this bandwidth. Chroma encode, with ~1.3 MHz passband. The main difference for PAL is that the luma passband is a bit higher. PAL-B (which Norway used) seems to specify 5 MHz rather than 4.2 MHz.

Generating these filters can be done easily with firls and friends in Octave. The handbook calls for a gaussian kernel for chroma to avoid any ringing, but I missed that memo. Either way, these are implemented with trivial convolutions which GPUs eat up like butter.

This is the first point where NTSC and PAL differ significantly. While NTSC and PAL have different color subcarrier frequencies, PAL is also a bit more sophisticated. The “sign” of V is where Phase Alternate Line comes in. It flips every scanline. In broadcasting, this was meant to fix bad colors being introduced during broadcasting through complicated terrain (and boy do we have that over here in Norway).
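For readers without Octave handy, the “basic windowed sinc design process” mentioned earlier is only a few lines. This is a minimal Hann-windowed sketch, not the firls designs actually used; the 1.3/27 cutoff corresponds to the ~1.3 MHz chroma passband at the 27 MHz processing rate:

```python
import math

# Minimal windowed-sinc low-pass (Hann window). fc is the cutoff as a
# fraction of the sampling rate, e.g. 1.3 MHz chroma at 27 MHz -> 1.3 / 27.
def windowed_sinc_lowpass(fc, taps):
    m = taps - 1
    h = []
    for n in range(taps):
        x = n - m / 2
        sinc = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
        hann = 0.5 - 0.5 * math.cos(2 * math.pi * n / m)
        h.append(sinc * hann)
    total = sum(h)
    return [v / total for v in h]  # normalize to unity DC gain

def gain_at(h, f):  # magnitude response at normalized frequency f
    re = sum(v * math.cos(2 * math.pi * f * n) for n, v in enumerate(h))
    im = sum(v * math.sin(2 * math.pi * f * n) for n, v in enumerate(h))
    return math.hypot(re, im)
```

A Hann window gives roughly −44 dB stopband attenuation, which is plenty for eyeballing chroma smear; least-squares designs like firls give you explicit control over the falloff shape instead.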
Bad phase shifts being introduced in NTSC will manifest as hue shifts. The basic idea behind PAL is that if phase shifts are introduced by signal reflections, the sign flipping of V every scanline ensures that the decoded hue error is complementary from scanline to scanline. This manifests as the Hanover bar artifact. By averaging chroma from neighboring scanlines, the errors cancel out and we recover the correct color with slightly less saturation. The cost is of course reduced vertical chroma precision, but given how comically smeared chroma is horizontally, I’m not sure this matters, and digital video uses 4:2:0 subsampling anyway. Now, broadcasting considerations are kind of irrelevant for something like composite video (I would think), but I’m not sure if PAL TVs skip the filter for anything not coming from an antenna. I kept the vertical chroma filter in my implementation because it’s neat to have.

The NTSC chroma subcarrier is constructed such that every scanline completes 227.5 cycles. Every line flips chroma phase, which is very convenient and makes luma and chroma separation less complicated. The NTSC chroma pattern is a checkerboard as a result. PAL is more annoying here. A half cycle method would not work since V is already flipping every line, so PAL chose 283.75 cycles. On top of that, a tiny 1 / 625 cycle offset per line is added for … reasons. The V flipping leads to a chroma pattern where U takes a diagonal pattern while V has a pattern along the other diagonal.

Frame progression is also a concern. As fields are scanned, each time the same field is drawn, the chroma subcarrier should have opposite phases. This follows naturally from how fields are drawn. The period of NTSC is 4 fields. As we can see, things repeat after 4 fields, and this point is easy to miss. The artifacts introduced by composite video should ideally cancel themselves out over time, which manifests itself as flickery noise rather than a horribly glitched image.
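The phase-error cancellation can be demonstrated in a few lines. This is a sketch of the principle only: a static phase error rotates the demodulated (U, V) vector, the V-switch makes the rotation land on opposite sides of the correct hue on consecutive lines, and averaging recovers the hue at cos(error) saturation:

```python
import math

# Why PAL's V-switch cancels static phase errors: the same chroma sent on
# two lines (V negated on the second) decodes with complementary hue errors.
def decode_with_phase_error(u, v, err):
    # a phase error rotates the demodulated (U, V) vector by err radians
    return (u * math.cos(err) - v * math.sin(err),
            v * math.cos(err) + u * math.sin(err))

def pal_average(u, v, err):
    u1, v1 = decode_with_phase_error(u, v, err)    # line n: (U, V)
    u2, v2 = decode_with_phase_error(u, -v, err)   # line n+1: (U, -V)
    v2 = -v2                                       # undo the V-switch
    return (u1 + u2) / 2, (v1 + v2) / 2            # hue restored, sat * cos(err)
```

With NTSC’s decoding, the same error would simply rotate the hue with nothing to cancel it against, which is why NTSC TVs needed a tint knob and PAL TVs didn’t.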
PAL is annoying and has a longer cycle of 8 fields due to the three-quarter cycle setup. After completing 2500 lines (625 * 4), the chroma subcarrier has completed an integer number of cycles, and the sign of V is back to where it started.

After modulation, the NTSC color bar signal looks more like this, and the next line flips phase as expected. Some old Nintendo consoles (and likely others) emit NTSC and PAL in non-standard ways. E.g. the NES is infamous for shifting the chroma carrier by 120 degrees instead of 180, which leads to very particular artifacts. See the NESdev Wiki for more detail.

It’s easy to mess up RGB to YUV conversions and the compositing process. The handbook has reference outputs for RGB inputs in NTSC and PAL where I could confirm that the math was indeed correct. The things to check are that the minimum and maximum of the signal are what they should be. NTSC, at least the US variant of it, maps black to 7.5 IRE (just think of it as some abstract voltage) and white to 100.0 IRE, and it tripped me up a bit at first since the NTSC color bars were defined in terms of this shifted and scaled IRE value. Looking at the peaks and valleys of the generated composite signal in RenderDoc is enough since we just need to eyeball it. Close enough is good enough.

This is non-trivial and a source of endless head scratching. Chroma information lives in the frequency spectrum around the carrier, but so does higher frequency luma detail. The basic theory for comb filters is to take advantage of the opposing phase of the chroma carrier. For NTSC, I used the basic structure from the handbook, which in code translates to a weighted sum of neighboring scanlines. Ideally, by adding two neighbor lines, chroma should cancel out and only luma remains. Subtracting, we cancel luma and only chroma remains.
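The 3-line comb idea can be sketched on a synthetic signal. This is a minimal sketch of the principle, not the actual shader code: because the NTSC carrier flips phase every scanline, vertical (1, 2, 1)/4 taps cancel chroma exactly when chroma is vertically constant.

```python
import math

# 3-line comb for NTSC: the carrier flips phase per scanline, so a
# (1, 2, 1)/4 vertical tap set cancels chroma, leaving a luma estimate;
# subtracting that from the current line leaves the chroma estimate.
def comb_filter(lines, y):
    above, cur, below = lines[y - 1], lines[y], lines[y + 1]
    luma = [(a + 2 * c + b) / 4 for a, c, b in zip(above, cur, below)]
    chroma = [c - l for c, l in zip(cur, luma)]
    return luma, chroma

# Synthetic composite: constant luma plus a carrier with per-line phase flip.
Y0, A = 0.5, 0.3
carrier = lambda y, x: A * (-1) ** y * math.sin(2 * math.pi * 0.125 * x)
lines = [[Y0 + carrier(y, x) for x in range(16)] for y in range(3)]
luma, chroma = comb_filter(lines, 1)
```

On this ideal input the separation is exact; the chroma-dot artifacts discussed next appear precisely where the “vertically constant chroma” assumption breaks down.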
We know that chroma won’t (or at least shouldn’t) exist outside its bandwidth, so the result is run through a bandpass filter that centers around the carrier frequency, and we have an estimate for the modulated chroma signal. Since the composite signal is Y + C, we subtract the chroma estimate from composite signal to form a Y estimate. Chroma can now be demodulated and low-passed to remove the harmonics introduced by demodulation. This filter works “perfectly” for regions where the chroma is constant, but not so much where there are discontinuities. This results in “chroma dots” where the color subcarrier bleeds into decoded luminance. Notice the dot pattern on the bottom of the image. Thus, different colors modulate the luminance intensity differently, creating a “dot” pattern on the scan line between two col- ors. To eliminate these “hanging dots,” a chroma trap filter is sometimes used after the comb filter. In the real world of analog circuitry, having a perfectly locked signal like this is probably also not too realistic to assume. The literature also calls out for notch filtering as an approach. I attempted combining a comb filter with a notch filter on top to reduce the artifacts, but it is quite tricky to create a notch filter that works well. A simple FIR notch filter with zeroes is easy enough to make: This filter is convolved with a simple low-pass to complete the luma decoding filter. This approach just leads to severe blurring for NTSC and a band-stop filter approach just lead to less severe blurring and ringing instead, so I’m not sure what should be done. Unfortunately, the handbook isn’t clear on what kind of filtering is called for here. IIR notch filter designs can be super sharp to surgically carve out the carrier, but IIR filtering is also a massive pain in the ass on GPUs. It’s also likely to ring heavily too, which I found rather annoying in my testing. E.g. 
3-line comb NTSC without the notch (integer nearest upscale from 640×240), and with notch: Yikes. There’s no way this notch approach is correct. It’s like we’re getting double vision here. It does clean up the chroma dots though, so … yay?

Going beyond these base techniques, there’s adaptive filtering, where the filtering strategy changes based on which kind of case we’re dealing with. Even more sophisticated is taking advantage of temporal information (look ma’, TAA has been a thing since forever :D) since N fields in the past we have complementary chroma phase perfectly aligned to our pixel grid. Very cool stuff, but I doubt consumer TVs at the time would have had those. The added latency for doing this kind of analysis doesn’t sound like something you’d want for games at least … Either way, I’m not designing high-end TV circuitry of the late 90s/early 2000s here. We can just flip on S-Video to simulate the perfect Y/C separator, so at some point I have to decide that I’ve done enough filter masturbation and move on.

This was way harder than expected and I had to bang my head against the wall for a while to come up with a good solution. The ~90 degree shift every scanline means the basic comb filter for NTSC won’t work at all. The handbook has two main strategies here: either a delay line which is slightly longer than a scanline to align the phases, or a highly magical “PAL modifier”. The function of this modifier is esoteric as all hell, but I think the purpose of it is to phase shift the signal by 90 degrees to “realign” the carrier somehow (it will still be off by 0.6 degrees). This filter path with two bandpass filters just got so messy, and I couldn’t figure out how to debug the thing effectively (were the inevitable visual artifacts my bugs or just the filter being bad?), so I eventually gave up and designed my own filter. That’s more fun anyway.
I started from first principles and designed a 3×3 kernel that should be able to perfectly pass a chrominance signal and 100% reject any luminance signal that is DC in either the horizontal or vertical direction. To make things simple, I started with the assumption of 4 samples per subcarrier cycle to make the examples easier. Given a constant value of 1.0 for U, the signal would look something like U = sin(wt), with N + 0.75 cycles per line here. A kernel satisfying the criteria has every row and column sum to 0, meaning that if the signal is DC either horizontally or vertically, the result is completely filtered out.

The filter also rejects V signals. V = +/- cos(wt), which is just the same signal flipped horizontally. Then a combined filter is made that accepts U and V signals together. U and V can be perfectly split later during demodulation, so that is okay. This just boils down to a simple diagonal edge detection filter (high pass vertically and horizontally), but it actually works quite well.

To deal with the actual 2x oversampled rate of 27 MHz and the PAL subcarrier at ~4.433 MHz, the ~90 degree shift per line is about 1.51 samples, so to make this sort of work, I stretched out the horizontal kernel to a 5-tap filter while the vertical kernel remains the same. Some error is introduced since we’re not sampling the signal 100% correctly anymore (theoretically we need a sinc to reconstruct the signal perfectly), but I measured the reconstruction error to be -40 dB, which is good enough I think. The measured errors for U and V were also similar, which indicates no weird artifacts from the V flips. With this 15-tap kernel, we get a pretty good chroma estimate even in PAL. From here, the same ideas as NTSC apply: bandpass the estimate and subtract it from the composite signal to get luma.
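The stated criteria can be checked mechanically. The kernel below is illustrative, not necessarily the exact coefficients used in the shader: it is the separable high-pass [1, −2, 1] applied both horizontally and vertically, which matches the “diagonal edge detection (high pass vertically and horizontally)” description, has zero row and column sums, and passes a diagonal checkerboard (the shape of modulated chroma at 4 samples per cycle with per-line phase flips).

```python
# Illustrative kernel matching the criteria in the text: separable
# [1, -2, 1] high-pass in both directions. Rows and columns sum to zero
# (rejecting horizontally or vertically DC luma) while a diagonal
# checkerboard chroma pattern passes through.
K = [[a * b for b in (1, -2, 1)] for a in (1, -2, 1)]

row_sums = [sum(row) for row in K]
col_sums = [sum(K[y][x] for y in range(3)) for x in range(3)]

def response(patch):  # correlate the 3x3 kernel with a 3x3 signal patch
    return sum(K[y][x] * patch[y][x] for y in range(3) for x in range(3))

dc_patch = [[1.0] * 3 for _ in range(3)]                       # flat luma
checker = [[(-1) ** (x + y) for x in range(3)] for y in range(3)]  # chroma
```

The zero-sum structure is what makes the rejection exact regardless of the DC level, while the checkerboard lands entirely in the kernel’s passband.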
Notch filtering to clean up the chroma dots worked way, way better for PAL than NTSC, likely because the carrier has a much higher frequency in PAL, so the low pass behavior of the notch isn’t as devastating to image quality as it is on NTSC. In the end, I think I prefer 3-line comb + notch for PAL and just plain 3-line comb for NTSC. These screenshots are just one still frame (or rather, field). The color fringing will cancel out on the next field, and it’s hard to show the effect without seeing it at the full 60 fields per second.

While the PlayStation 2 didn’t support this hack mode, the GameCube did back in the day. It’s a non-standard video mode that has the same refresh rate and vertical resolution as NTSC, but retains the bandwidth and chroma encoding system of PAL – the best of both worlds! My implementation can trivially support this by just enabling PAL on 60 Hz games. The only thing I’m not quite clear on is how the 1 / 625 subcarrier offset per line is supposed to work, but it’s a non-standard mode anyway, so eeeeeh.

With comb filter and notch on NTSC: as expected, luma detail is murdered around the ~3.58 MHz carrier. There’s also serious color fringing due to the extreme high frequency diagonal patterns. In the pattern at the bottom, no fringing is observed since the comb filter did its job as expected. PAL is similar, but the carrier and notch move to ~4.43 MHz instead.

The main feature of PAL is being robust against phase shifts during analog broadcasts. It’s a little unclear if composite inputs cared about this case, but for completeness’ sake, I implemented it. This path is naturally skipped for S-Video and Component outputs since I can’t imagine a TV caring about that for the more luxurious inputs. It doesn’t take many degrees of error in the phase to get quite different colors in NTSC. For PAL, the phase error manifests as complementary errors every line. However, by averaging out chroma vertically, we can recover the original chroma almost perfectly.
At worst, a little less saturation.

This topic is kind of unfortunate in that it’s been done to death already, and it’s a 100% subjective topic, meaning that everyone has some kind of opinion, none of which agree with each other. Holy wars have been fought over less. Trying to write anything fresh about this topic is futile in 2026 – the heyday of CRT hobbyist shader development was in the early 2010s – but I felt the need to explain what I did at the very least. If anything, it’s a useful intro to writing your own shader. https://nyanpasu64.gitlab.io/blog/crt-appearance-tv-monitor/ is a good read too for more background information.

The most obvious part of a CRT filter is scanlines; however, the idea that CRT images should have clearly visible scanlines is actually an artifact of 240p. For PS2 games, we’re operating at either 480i or 480p for NTSC. For interlaced video, we expect each individual field to have clearly visible scanlines, but the complete image (fused together by our brains) should not. The beam profile should be tuned as such. What most shaders do is give each scanline a gaussian distribution in the Y direction, sampled for the neighbor lines to cover the useful portion of the kernel. Another common effect is that very bright scanlines are smeared out, supposedly due to the electron guns not being as stable when they’re driven at high voltages. This can be simulated by varying the standard deviation. It can be subtle, but creates a neat effect I think. Exactly how to come up with the beam profile for a given input voltage is purely up to taste I suppose; I doubt there is a linear relationship between R’G’B’ value and the standard deviation of the beam.

The sampled RGB value is still in gamma-space, since the CRT gamma curve is due to the phosphor response, not the CRT itself adjusting the gamma curve. The BT.1886 standard calls for a 2.4 gamma for SD content, which is the default and looks good.
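The brightness-dependent beam profile can be sketched in a couple of lines. The sigma range here is an arbitrary illustrative choice, not a measured CRT parameter; the only property being modeled is “brighter line, wider gaussian”:

```python
import math

# Sketch of a scanline beam profile: a gaussian in Y whose standard
# deviation grows with line brightness, so bright scanlines smear out.
# sigma_min/sigma_max are illustrative, purely-to-taste values.
def beam_weight(dy, brightness, sigma_min=0.3, sigma_max=0.6):
    sigma = sigma_min + (sigma_max - sigma_min) * brightness
    return math.exp(-0.5 * (dy / sigma) ** 2)
```

A shader would evaluate this per output pixel for each nearby scanline, with dy the distance (in scanlines) to that line’s center and brightness derived from the line’s sampled value.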
I also added options to use 2.2 (NTSC legacy) and 2.8 (?!, PAL legacy) for fun. Most CRT shaders I’ve seen apply gamma to the sampled value only, not to the beam profile. I think the right order would depend on whether the phosphor’s response is a function of how many electrons hit it, or whether individual “particles” respond to the energy of the electrons hitting them, where the gaussian beam profile is just a distribution of how many particles light up. In the latter interpretation, the code as-is makes sense, while the first interpretation would call for applying a gamma function to the gaussian profile. The visual output as-is looks good to me at any rate. After this point, all color math happens in linear light, so floating point render targets are a must.

Color CRTs get their colors by having colored phosphors that are arranged in some kind of grid. The venerable Trinitrons use vertical stripes of RGB, and I like that look. While CRTs don’t really have a horizontal resolution, there is a “dot pitch”, which sort of dictates resolution. This part is the key to creating the “texture” of a CRT. From what I read online, a typical dot pitch for consumer TVs was 0.5 – 1.0 mm, and for a typical 20″ CRT, I estimated a reasonable number of RGB triads to be ~640 or so. Close enough to the BT.601 standard horizontal resolution, neat. From what I understand, this value is also referred to as “TVL”, and these values seem ballpark reasonable. When looked at from a distance, the dots blend together nicely as we’d expect. It’s basically just LCD subpixels, just larger.

The dot layout I used was mostly lifted from Lottes’ CRT shader, but the approach can probably be found in a million shaders already. Just alternating stripes of R, G and B. I suppose a perfect mask of 0.0 should be used here, but it doesn’t end up looking as good as I’d like, even after adding glow effects, so I think the intent behind passing through a portion of the other colors is more of a pragmatic decision.
At least I cannot think of a physical interpretation of why we’d want to do it.

The signal we are creating has a ton of high frequency information and we need to be very careful sampling it so that obvious aliasing is avoided. The common mistake is to just render this shader at output resolution and hope for the best. This will almost surely lead to terrible aliasing patterns in the image. Bad aliasing of the scanlines in the Y direction leads to a low frequency pumping pattern in the image which is extremely distracting, and bad aliasing in the X direction leads to a horribly noisy pattern due to uneven sampling of the dot mask. The easy fix is to render the effect at an integer multiple. E.g. if the input image is 240 lines, render to a height that is an integer multiple of that. For the color dot mask, make sure the horizontal resolution is e.g. 3 times the dot resolution (one pixel each for the R, G and B dots). I ended up with something like width = 640 * 3, and height = 240 * 6 (3x sampling for progressive).

A nebulous effect of the CRT is the glow aspect to it. Anyone can tell it’s there, but it’s not entirely clear to me why this happens. Google searches don’t turn up anything useful either. Without knowing the physical reason for it, it’s hard to emulate accurately. Could it be scattering effects inside the thick CRT glass perhaps? Either way, the common way to emulate this effect is to compute a gaussian blur (lots of those around here) and composite it over the original image. Very similar to the usual HDR + bloom effect that games in the 2010s loved to overuse. The main effect here is that the phosphor dots end up blending together nicely, yet retain the added “texture” that the aperture grille pattern gives. Humans like to see some high frequency detail, even if that detail is completely bogus. That’s the common trick behind video compression after all.
The glow component, boosted up a ton to make it very visible: With the typical HDR effect in HD games, only very bright pixels participate in the effect, effectively spreading excess light energy over a larger area of pixels. It makes little sense to do anything like this for a CRT shader unless there is a physical threshold where phosphors just randomly start to glow more than they should, but all of this is purely up to taste anyway. Here’s from Soul Calibur II with progressive scan, component cable emulation and 16x SSAA, without any glow added. The look is very harsh to me. (The full-screen image is needed to see it without the extreme aliasing caused by thumbnailing.) Some glow on top and it looks like: Purely up to taste how much to add of course. I like a decent amount of glow. Phosphors don’t turn off right away when they’re lit. It’s very quick though, but adding a few percent of feedback between frames seems to help a bit with making 480i games look better with less flicker. This is not really how things work in the real world I think, but a reasonable approximation. We need a high quality rescaler to get the integer sampled CRT to the screen without introducing significant aliasing from the aperture grille or scanlines. The way to go here is simply to use a proper windowed sinc or something like that. I don’t like it, so I don’t implement it. If you do, make sure to consider resampling it properly to not introduce more aliasing. A point many shaders miss is that the RGB of a modern monitor is not the same as RGB on an old CRT TV. What we think of RGB today is usually BT.709 sRGB which defines a set of color primaries and white point. Old SDTV era video uses BT.601 which is a bit narrower than BT.709. In linear RGB space, this transform is a trivial 3×3 matrix multiply. I actually learned that the very old NTSC 1953 standard defines a set of primaries that were extremely saturated compared to the standards of today. 
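The “trivial 3×3 matrix multiply” for gamut conversion can be derived from the published chromaticity coordinates rather than hardcoded. This is a sketch using the standard SMPTE-C and BT.709 primaries with D65 white (textbook figures, not specific to this shader); since both gamuts share the white point, neutrals map to themselves exactly.

```python
# Derive the SMPTE-C (BT.601/525) -> BT.709 gamut matrix from published
# chromaticity coordinates. Applied to linear-light RGB.
def solve3(m, v):  # 3x3 linear solve via Cramer's rule
    def det(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det(m)
    out = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = v[r]
        out.append(det(mi) / d)
    return out

def rgb_to_xyz_matrix(primaries, white):
    # Scale the primary vectors so RGB = (1,1,1) lands on the white point.
    cols = [(x / y, 1.0, (1.0 - x - y) / y) for x, y in primaries]
    wx, wy = white
    w = (wx / wy, 1.0, (1.0 - wx - wy) / wy)
    m = [[cols[j][i] for j in range(3)] for i in range(3)]
    s = solve3(m, w)
    return [[m[i][j] * s[j] for j in range(3)] for i in range(3)]

def inverse3(m):
    cols = [solve3(m, [1.0 if r == j else 0.0 for r in range(3)]) for j in range(3)]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

def matmul3(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

D65 = (0.3127, 0.3290)
SMPTE_C = [(0.630, 0.340), (0.310, 0.595), (0.155, 0.070)]
BT709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]

smpte_c_to_709 = matmul3(inverse3(rgb_to_xyz_matrix(BT709, D65)),
                         rgb_to_xyz_matrix(SMPTE_C, D65))
```

Swapping in the NTSC 1953 primaries and its Illuminant C white point gives the much more dramatic matrix for the “Japan mode” discussed above.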
While the primaries of 1953 were aspirational, it was clearly way too early. SMPTE refined NTSC to use more reasonable primaries as part of BT.601. PAL primaries are almost exactly the same (TVs tended to use the same phosphor formulations across the world I suppose), but there’s a theoretical difference, so I added both for good measure. Supposedly, Japan kept using the legacy NTSC 1953 primaries, which opens the interesting question of whether the same games actually looked vastly different in Japan compared to the rest of the world. I’ve never heard anyone mention this before, so who knows. Either way, I support enabling those primaries for fun. The look of it is quite … something? It would need an HDR monitor with a solid gamut to do it justice. Here’s with standard BT.601 primaries (from Legaia 2, which is a purely field rendered game, hence the scanlines), and with NTSC 1953 primaries: almost like the “Interpret sRGB as Display P3” bullshit that phones do these days to “pop”.

Given all the masking we’re doing, which lowers average brightness, it’s beneficial to support HDR10 rendering. Now that HDR is widely available on Linux too, enjoying some HDR CRT shaders is a good time. I added a few modes where I can target a specified maximum nit level. KDE at least respects MaxCLL HDR metadata and disables tonemapping if MaxCLL falls within the bounds of the display. I also added a no-tonemap option where MaxCLL = 0 (unknown), which makes KDE tonemap how it wants to.

Black Frame Insertion and friends have been a thing in emulation for ages, and it works quite well with CRT simulation, especially to sell the de-interlacing effect properly. In my implementation, I query EXT_present_timing and decide how many frames I should insert in-between. There’s a gentler falloff between the frames. The overall screen brightness decreases a lot as expected, but with HDR, we can crank the brightness of the proper frame up to compensate.
It’s still very experimental and any missed frame leads to horrrrrrrible flicker at the moment (big epilepsy warning), so it’s not something I actually recommend, but it’s fun to experiment with. While simulating each field independently with scanlines, we get de-interlacing the same way a CRT would in theory. This is not free from flicker of course, but most games had mitigation strategies for this. The PS2 supported blending two frames being sent to the video output. Most games render internally at e.g. 448p @ 30 FPS, but since they cannot output that resolution without component cables, the frame is output interlaced over two fields where the CRTC scans every other line rather than every line. That tends to look quite flickery if done as-is given how aliased PS2 graphics are, so what pretty much every game did was using the two CRTCs to blend the two frames vertically before sending it by programming a 1 pixel offset with duplicate inputs. By shifting the offset every field, the 30 FPS progressive image could be scanned out nicely into a 60 FPS interlaced image. This is the “flicker filter” that some games allowed toggling. Here’s a RenderDoc capture showing that CRTC 1 and 2 are configured with one pixel offset in Y: After merging and blending the frames together, a smoothed image is sent. PCSX2 GSdx and paraLLEl-GS have modes to detect this pattern and just scan out the full 480p of course, without the added blurring. Most games fall into this pattern, which is fortunate if we want to avoid interlacing shenanigans, but not all games are so nice to deal with. The “Anti-Blur” option is designed precisely for disabling this filter. I also added an option to force-disable the automatic progressive scan, mostly to test the video output that the games would actually have output back in the day, which is interlaced video. 
Some games decided that they wanted to render at 60 FPS and sacrifice half the vertical resolution to do so, jittering the rendering to stay in sync with the interlaced output. These are painful to deal with to this day since they absolutely require some kind of de-interlacing solution to look good. I never got satisfactory results with a typical de-interlacer, but the CRT simulation does quite a good job at it I think. It’s not perfect (interlacing wasn’t exactly perfect on CRTs either), but it’s usable enough for me now that I can play interlaced games as intended. It’s not really possible to demonstrate this with screenshots. Some games break if you try to promote them to progressive scan, because the games might decide for some stupid reason to use the SCANMSK feature to discard pixels every other line, and rely on the FRAME scanout mode to exactly scan out the pixels that were not masked. King’s Field IV is an example of this absolute insanity.

I maintain a patch for PCSX2-git which supports paraLLEl-GS and now this CRT/analog thing. This is super niche stuff that I don’t really expect many people to actually use, but it’s there for those who are interested. It does what I need at least, and that’s what’s important to me.

I put together some test video clips in HEVC/PQ/4:4:4/1440p at ridiculously high bitrates. I tried AV1, but my CPU could not decode it in real time, so it is what it is ._. Clip 1 is raw RGB passed into the CRT. It uses the game’s native 480p and widescreen support. Super sampling is 4x SSAA. Clip 2 is “PAL60” with 3-line comb + notch. It also uses the more default 4:3 and interlaced video. It has some frame drops which completely botch the interlacing, and even at comically large bitrates, the aperture grille effect doesn’t translate well, so it is what it is, but it’s a rough approximation of what it should look like. Anyway, I’m happy with the results.
Time to actually play something instead of debugging stuff.

Analog TV input path
480i instead of 240p
High refresh rate simulation

Field 0: 262 lines, chroma carrier starts in + phase, ends in + phase due to even number of lines
Field 1: 263 lines, chroma carrier starts in + phase, ends in – phase due to odd number of lines
Field 2: 262 lines, chroma carrier starts in – phase, …
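The +/– pattern in that field list follows directly from NTSC’s 227.5 chroma subcarrier cycles per line: the carrier phase flips once per line, so an even 262-line field hands the next field the same phase while an odd 263-line field inverts it. A quick sketch of the arithmetic (my own illustration, not emulator code):

```python
def field_phases(num_fields: int, lines=(262, 263)):
    """Phase of the chroma carrier at the start of each field, with +1
    for '+' phase and -1 for '-' phase. With 227.5 carrier cycles per
    line the phase flips once per line, so only the parity of the
    field's line count matters."""
    phase = 1  # field 0 starts in '+' phase
    phases = []
    for i in range(num_fields):
        phases.append(phase)
        line_count = lines[i % 2]   # alternate 262/263-line fields
        if line_count % 2 == 1:
            phase = -phase          # odd line count flips the phase
    return phases

print(field_phases(5))  # [1, 1, -1, -1, 1], i.e. +, +, -, -, +
```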

ava's blog Yesterday

your social media habits sound like an abusive relationship

Most people in my life still use big social media platforms. My wife, for example, is on Tumblr. As someone who has been off these platforms for quite a while, some of the things people share with me sound extremely odd to me; weird rules and behaviors they feel the need to abide by, or else!... whatever that may be. Some I even recognize from back when I used them, but now I have a completely different view of them, as I am no longer embedded in a culture that normalizes them.

For one, apparently some people are scared of unfollowing others. "I can't unfollow them! We've been following each other for years and they'd notice, and then it's awkward!" So they'd rather stick it out with someone they no longer like or whose posts they don't wanna see. They'd rather filter out all posts via keywords and other means than just unfollow. Internet strangers! Not even people they'd run into in real life. Why do you feel the need to lie so much just to protect a random person's feelings about having one less follower? The whole concept of being trapped with someone because you're "mutuals" is insane! Why do you care whether only one side follows the other? What does it matter? Why do you fuel the notion that unfollowing means downgrading a friendship or rejecting someone completely? It shouldn't be this way, and you voluntarily participate in this.

Same with blocking. "I can't block. That is so harsh. I can instead just block them and unblock them again so we are both unfollowed from each other. This is called softblocking." Okay? And what for? So you can pretend it was totally a website glitch that made you guys unfollow each other? As if they wouldn't notice and know. Everyone knows what softblocking is on those platforms! Don't kid yourself. When they refollow you, what then? What if they message you and ask why you unfollowed, the dreadful thing you fear? Many then go on to lie, saying it must have totally been an accident, and follow them again??
Guys, it's a website, pixels on a screen - you can be honest. They're not gonna stick a hand through your screen to strangle you. Thanks to digital mediums, it has never been easier to just ride out awkward shit and ignore things. Make use of it. Pressing a button is not being aggressive or dramatic.

"No, I cannot message them directly, that is awkward, we have never interacted before!" ... So? Damn, the website/app offers DMs and now you can't even privately message strangers on the internet anymore? What has this place come to? Now you're just there to scroll and passively consume ads and no longer talk to the people that share the ads around voluntarily? DMing someone is "intimate"? You are "harassing" someone with a simple message they can choose to open or ignore? Do you hear yourself?

Then there is the far more subtle or platform-specific stuff... like the fact that people feel like they can't comment in the replies until others have done so, or cannot reblog something because the post is still "too small"; that liking old posts is "creepy"; that watching or not watching a story, liking or not liking a post, has deep consequences; that you have to put things in the tags instead of the post body to be safe from OP's wrath and signal that this is for your followers only (just for OP to screenshot the tags anyway and rake you over the coals).

There are also people who are too scared to challenge others directly and openly on the respective post, and instead screenshot it, put a water filter over it to visually signify they disagree with its content, and then post it themselves. Saying the type of stuff they are comfortable with only when they think OP won't notice, while being too scared to say it underneath the post, and living off of follower validation like "Look how dumb this is! Hype me up, like this post, comment that you agree!" is so embarrassing to see.
Because people on there treat public interactions as definitive signs and ownership, when someone bad follows you and likes your posts, even if you don't follow back, you're still treated as attracting and tolerating the bad person, therefore implicitly agreeing with their vile views. I guess that's where the whole culture of "Do Not Interact" disclaimers comes from: you have to prove from the get-go where your alliances lie, as a precaution for when you haven't deeply vetted every follower you have. In the same vein, people seem to proactively confess old opinions, archive tweets, lock accounts, or add disclaimers to avoid or soften hypothetical future attacks.

It all adds up to weird stories I can't even completely recall: investigative, roundabout stuff with second accounts and softblocking and other checks, weaponizing features of the platform, circumventing things; completely normalized mutual surveillance disguised as casual browsing, where they manually check who viewed stories, who liked posts, posting times, and other activity to judge the friendship level. All of this is tip-toeing around, scared to offend someone, worried about nebulous consequences and being subject to toxic rage; never getting out of the awful behaviors you were subjected to by your peers in high school.

It's as if you're in an abusive relationship with the platform and its users, and it's uncomfortable to see from the outside how scared it makes you to actually interact with anyone online or use the space for what it is made for. It's like your online home constantly shows the signs of a punched hole in the drywall. I see it with my wife as well, who also has a blog on here and sometimes would like to reply to other blog posts on Bearblog, but never ends up doing it because "It's weird to barely post and then immediately shit on someone else's post," and other convoluted reasons that only exist because social media culture is what it is.
If you relate to anything in this post, you have been conditioned by people who can only scream and shout and "I am not reading all that" and sic their followers on you. How sad! You're like a beaten puppy and your behaviors are completely warped. It's actively harmful for you, and I wouldn't be surprised if it significantly fuels the social anxiety you feel even when offline. In the online spaces you're in, you are always asked to put above your own the needs of someone else you cannot even fully anticipate, because they're a nebulous mob entity. Your nervous system constantly deals with the risk of this app or site blowing up in your face, and you're always scared when you see a bunch of notifications come in. I don't know how you can feel mentally well when this is always looming over your head.

Spending my online time in places where none of this weird stuff exists has really put it into perspective. I can just reply! I can just send emails or reach out otherwise! No stress, no worries! No followers, no blocking! Again, I know why all these exist in theory, and many I've known from my own time on these platforms, but none of it is justified - period. You don't have to tell me why any of these are valid or why they happen; this is like listening to an abuse victim justify the abuse. Sometimes you can only see how badly you've been treated months or years after you leave.

Published 09 May, 2026

Susam Pal Yesterday

I Will Not Add Query Strings to Your URLs

Last evening, a short blog post appeared in my feed reader that felt as if it spoke directly to me. It is Chris Morgan's excellent post I've banned query strings. Chris is someone whose Internet comments I have been reading for about half a decade now. I first stumbled upon his comments on Hacker News, where he left very detailed feedback on a small collection of boilerplate CSS rules I had shared there.

I am by no means a web developer. I have spent most of my professional life doing systems programming in C and C++. However, developing websites and writing small HTML tools has been a long-time hobby for me. I have learnt most of my web development skills as a hobbyist by studying what other people do: first by viewing the source of websites I liked in the early 2000s, and later by occasionally getting possessed by the urge to implement a new game or tool and searching MDN Web Docs to learn whatever I needed to make it work. One problem with learning a skill this way is that you sometimes pick up habits and practices that are fashionable but not necessarily optimal or correct. So it was really valuable to me when Chris commented on my collection of boilerplate CSS rules. It helped me improve my CSS a lot. In fact, a few of the lessons from his comment have really stuck with me; I keep them in mind whenever I make a hobby HTML project: always retain underlines in links and retain purple for visited links.

I have been following Chris's posts and comments on web-related topics since then. He often posts great feedback on web-related projects. Whenever I come across one, I make sure to read it carefully, even when the project isn't mine. I always end up learning something nice and useful from his comments. Here is one such recent example from the Lobsters story Adding author context to RSS. A couple of months ago, I created a new project called Wander Console.
It is a small, decentralised, self-hosted web console that lets visitors to your website explore interesting websites and pages recommended by a community of independent personal website owners. For example, my console is here: susam.net/wander/. If you click the 'Wander' button there, the tool loads a random personal web page recommended by the Wander community. The tool consists of one HTML file that implements the console and one JavaScript file where the website owner defines a list of neighbouring consoles along with a list of web pages they recommend. If you copy these two files to your web server, you instantly have a Wander console live on the Web. You don't need any server-side logic or server-side software beyond a basic web server to run Wander Console. You can even host it in constrained environments like Codeberg Pages or GitHub Pages.

When you click the 'Wander' button, the console connects to other remote consoles, fetches web page recommendations, picks one randomly and loads it in your web browser. It is a bit like the now defunct StumbleUpon, but it is completely decentralised. It is also a bit like web rings, except that the community network is not restricted to being a cycle; it is a graph that can take any shape. There are currently over 50 websites hosting this tool. Together, they recommend over 1500 web pages. You can find a recent snapshot of the list of known consoles and the pages they recommend at susam.codeberg.page/wcn/. To learn more about this tool or to set it up on your website, please see codeberg.org/susam/wander.

In case you were wondering why I suddenly plugged my project into this post in the previous section, it is because I recently added a dubious feature to that project that I myself was not entirely convinced about. That misfeature is relevant to this post. In version 0.4.0 of Wander Console, I added support for a query parameter while loading web pages.
For example, if you encountered midnight.pub while using the console at susam.net/wander/, the console loaded the page using the following URL: This allowed the owner of the recommended website to see, via their access logs, that the visit originated from a Wander Console.

Chris's recent blog post is critical of features like this. He writes: "I don't like people adding tracking stuff to URLs. Still less do I like people adding tracking stuff to my URLs. ? Did I ask? If I wanted to know I'd look at the header; and if it isn't there, it's probably for a good reason. You abuse your users by adding that to the link."

I mentioned earlier that I was not entirely convinced that adding a referral query string was a good thing to do. Why did I add it anyway? I succumbed to popular demand. Let me briefly describe my frame of mind when I considered and implemented that feature. When I first saw the feature request on Codeberg, my initial reaction was reluctance. I wasn't convinced it was a good feature. But I was too busy with some ongoing algebraic graph theory research, another recent hobby, with a looming deadline, so I didn't have a lot of time to think about it clearly. In fact, everything about Wander Console has been made in very little time during the short breaks I used to take from my research. I made the first version of the console in about one and a half hours one early morning when my brain was too tired to read more algebraic graph theory literature and I really needed a break. During another such break, I revisited that feature request and, despite my reservations, decided to implement it anyway. During yet another such break, I am writing this post.

Normally, I don't like adding too many new features to my little projects. I want them to have a limited scope. I also want them to become stable over time. After a project has fulfilled some essential requirements I had, I just want to call it feature complete and never add another feature to it again.
I'll fix bugs, of course. But I don't like to keep adding new features endlessly. That's my style of maintaining my hobby projects. So it should have been very easy for me to ignore the feature request for adding a referral query string to URLs loaded by the console tool. But I think a tired body and mind, worn down by long and intense research work, took a toll on me. Although my gut feeling was telling me that it was not a good feature, I couldn't articulate to myself exactly why. So I implemented the referral query string feature anyway. While doing so, I added an opt-out mechanism to the configuration, so that if someone else didn't like the feature, they could disable it for themselves. This was another mistake. A questionable feature like this should be implemented as an opt-in feature, not an opt-out feature, if implemented at all. The fact that I didn't have a lot of time to reason through the implications of this feature meant that I just went ahead and implemented it without thinking about it critically. As the famous quote from Jurassic Park goes:

It soon turned out that my gut feeling was correct. After I implemented that feature, a page from one of my favourite websites refused to load in the console. To illustrate the problem, here are a few similar but slightly different URLs for that page:

https://int10h.org/oldschool-pc-fonts/fontlist/
https://int10h.org/oldschool-pc-fonts/fontlist/?2
https://int10h.org/oldschool-pc-fonts/fontlist/?foo

The first and second URLs load fine, but the third URL returns an HTTP 404 error page. The website uses the query string to determine which one of its several font collections to show. So when we add an arbitrary query string to the URL, the website tries to interpret it as a font collection identifier and the page fails to load. That is why, when my tool added the query parameter to the first URL, the page failed to load. Later, with a little time to breathe and some hindsight, I could articulate why adding referral query strings to a working URL was such a bad idea. Altering a URL gives you a new URL.
The new URL could point to a completely different resource, or to no resource at all, even if the alteration is as small as adding a seemingly harmless query string. By adding the referral query string, I had effectively broken a working URL from a website I am very fond of.

It is also worth asking whether an HTML tool should concern itself with referral query strings at all when web browsers already have a mechanism for this: the HTTP Referer header, governed by Referrer-Policy. That policy can be set at the server level, the document level or even on individual links. The Web standards already provide deliberate controls to decide how much referrer information should be sent. Appending referral query strings to URLs bypasses those controls. It moves a privacy and attribution concern out of the referrer mechanism and embeds it into the destination URL instead. I don't think an HTML tool should do that. There is also a moral question here about whether it is okay to modify a given URL on behalf of the user in order to insert a referral query string into it. I think it isn't.

In the end, I decided to remove the referral query string feature from Wander Console. One might wonder why I couldn't simply leave the feature in as an opt-in. Well, the answer is that once I had deemed the feature misguided, I no longer wanted it to be part of my software in any form. The project is still new and we are still in the days of 0.x releases, so if there is a good time to remove features, this is it. But my ongoing research work left me with no time to do it. Finally, when the post I've banned query strings appeared in my feed reader last evening, it nudged me just enough to take a little time away from my academic hobby and devote it to removing that ill-considered feature. The feature is now gone. See commit b26d77c for details. The latest release, version 0.6.0, does not have it anymore. This is a lesson I'll remember for any new hobby projects I happen to make in the future.
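The underlying point, that any appended parameter produces a different URL whose meaning is entirely up to the destination server, is easy to demonstrate with Python's standard library. Here `ref=wander` is my own placeholder parameter, not what Wander Console actually used:

```python
from urllib.parse import urlsplit, urlunsplit

def add_ref(url: str, ref: str) -> str:
    """Append a referral parameter, preserving any existing query."""
    parts = urlsplit(url)
    query = f"{parts.query}&ref={ref}" if parts.query else f"ref={ref}"
    return urlunsplit(parts._replace(query=query))

# int10h.org interprets the query string as a font-collection id, so
# even this "careful" rewrite yields a URL the server rejects with 404:
print(add_ref("https://int10h.org/oldschool-pc-fonts/fontlist/", "wander"))
# https://int10h.org/oldschool-pc-fonts/fontlist/?ref=wander
```

However carefully the parameter is merged in, the result is a new URL, and only the server gets to decide what it means.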
If I ever load URLs again, I'll load them exactly as the website's author intended. I will never add query strings to your URLs.

Broken URLs:
https://int10h.org/oldschool-pc-fonts/fontlist/
https://int10h.org/oldschool-pc-fonts/fontlist/?2
https://int10h.org/oldschool-pc-fonts/fontlist/?foo

Sean Goedecke Yesterday

AI makes weak engineers less harmful

Like other kinds of puzzle-solving, software engineering ability is strongly heavy-tailed. The strongest engineers produce way more useful output than the average, and the weakest engineers are often actively net-negative: instead of moving projects along, they create problems that their colleagues have to spend time solving. That’s why many tech companies try to build a small, ludicrously well-paid team instead of a large team of more average engineers, and why so far this seems to be a winning strategy. Being effective in a large tech company is often about managing this phenomenon: trying to arrange things so that the most competent people land on projects you want to succeed, and the least competent are shunted out of the way [1]. For instance, if you’re technical lead on a project, you more or less have to ensure [2] that the most critical pieces are in the hands of people who won’t screw them up (whether by directly assigning the work, or by making sure someone can “sit on the shoulder” of the engineer you’re worried about).

Claude Code changed this. Frontier LLMs don’t have the taste or the system familiarity of a strong engineer, but they have absolutely raised the floor for weak engineers. Instead of getting a pull request that could never possibly work or would cause immediate problems, the worst you’ll now see is a standard LLM pull request: wrong in some ways, baffling in others, but at least functional on the line-by-line level and not so obviously incorrect that someone with no knowledge of the codebase could point it out. That is a huge improvement! You can try this out yourself. If you attempt to deliberately make mistakes while working with a coding agent, you’ll find that the agent pushes back hard against many obvious errors (e.g. caching user data with a non-user-specific key, writing an infinite loop that might never terminate, or leaking open files).
Of course, the agent will still miss subtle errors, particularly ones that require understanding other parts of the codebase. Working with the least effective engineers is now sometimes like working with a Claude Opus or Codex instance that you communicate with over Slack. Occasionally it’s literally that: your colleague is simply pasting your messages into Claude Code and pasting you the response. This is annoying, but it’s a much better experience than working with this kind of engineer directly. After all, you probably already work with a bunch of LLM instances. The Slack interface is not ideal - unlike using Claude Code directly, you sometimes wait hours or days for a response, and you don’t get visibility into the agent’s thought processes - but it’s still helpful on the margin. More compute being thrown at your problem is better than less.

Of course, this isn’t a great state of affairs for the engineer in question, who is almost certainly learning less than if they were making their own (bad) decisions. It’s also a bad state of affairs for the company, which is paying a human salary and getting a Copilot subscription (which it’s likely also paying for) [3]. After the current push to figure out what value AI is adding to engineers, I suspect there will be a push to figure out what value engineers are adding to AI, and the engineers who aren’t adding much may find themselves out of a job.

You can’t talk to Claude-over-Slack like you’d talk to normal Claude. If you tend to handle LLMs roughly (insulting them, or just being very curt), you’ll have to change your communication style. A human is going to read your messages, after all, even if you’re really interacting with an LLM. There’s no point being rude. But if, like me, you say please-and-thank-you to the models [4], you can treat your LLM-using coworker as just another Copilot window or Codex tab. It’s far better than having to treat them as an unwitting saboteur.
Not all net-negative engineers use AI tools like this. Many are strongly convinced of their own wrong opinions about how to build good software, or mistrust AI in general, or believe that relying heavily on LLMs is not a good way to improve [5]. But no strong engineers use AI tools like this. Even when they’re being lazy or sloppy, a capable engineer will have enough baseline taste to catch obvious AI-generated errors. So the phenomenon of engineers [6] becoming thin wrappers around Claude Code is limited to the kind of engineers for whom this is an improvement in their work product.
1. More charitably: many “least competent” engineers are just out of their comfort zone, and can be fine or even excel under the right circumstances (though in my view the best engineers are able to do good work in a wide variety of environments). Also, I don’t currently work with a lot of incompetent people. Much of this is based on past experience or talking to other engineers in the industry.
2. Since your managers are doing the same thing, this can sometimes feel like Moneyball: you’re trying to identify underappreciated talent who are strong enough to help you win without being so high-profile that your boss poaches them to lead something else.
3. I suppose it’s better to pay for nothing than to pay for net-negative output, but it still doesn’t seem good.
4. I think this is actually the right way to hold Claude Opus 4.7.
5. Is this true? I think relying on LLMs is not a great way for most engineers to improve, but if LLM output is consistently better than your own, it might be different. So long as you’re paying attention to where the LLM does better, it could actually be a good way to learn.
6. I don’t have as much experience (or anecdotes) about non-engineers falling into this trap, but this post has convinced me that it might be worse.

Rob Zolkos Yesterday

Watch Your Agents

I’ve been telling developers to watch their logs for years. Not just when something is broken. Not just when production is on fire. Watch them while you are building. Your logs are the closest thing you have to x-ray vision for a web application. Click a button in the browser, watch the request move through the app, and you can see what is really happening behind the scenes. The habit is simple: keep the server log visible while you work. When you do, you start spotting problems long before they become production issues:

- the same query firing 50 times because of an N+1
- a page that feels fine locally but is doing way too much work
- a slow query that needs an index
- an unexpected redirect or extra request
- a cache miss you thought was a cache hit
- a background job being enqueued more often than expected
- parameters coming through in a shape you did not expect

The logs give you immediate feedback. They make the invisible visible.

Coding agents need the same treatment. When you are working with an agent, do not just look at the final diff. Watch what it is doing. Watch the commands it runs, the files it opens, the mistakes it repeats, and the little bits of glue code it keeps inventing along the way. That is the agent equivalent of watching your development log. You are not only checking whether this turn succeeded. You are looking for patterns that can make future turns better.

Most coding agents keep some kind of session history: transcripts, tool calls, command output, file edits, errors, retries, and sometimes timing information. Those logs are useful after the fact. Point the agent at its own session logs and ask it to look for patterns. A prompt I like for this:

- What tasks did you repeat multiple times in this session?
- What code did you generate only to throw away later?
- Which commands failed, and what would have prevented those failures?
- Did you write any one-off scripts that should become checked-in tools?
- Did you repeatedly search for the same files or project conventions?
- Were there project rules you had to infer that should be documented?
- Which parts of the workflow were deterministic enough to automate?
- What should be added to , a skill, or a script?
- If a smaller model had to do this next time, what tools or instructions would it need?

This is the same habit as watching the Rails log after clicking around a page. You are looking for the part of the system that is doing too much work, guessing too often, or hiding useful signal. A useful signal is when the model keeps generating code to do the same mechanical task. For example, imagine you have a skill for publishing blog posts. Every time you run it, the model writes a small Ruby or Python snippet to:

- parse front matter
- validate the title, summary, badge, tags, and date
- derive the final filename
- move the draft into

If the agent is generating that code every time, that is a smell. The model is doing work that should probably be deterministic. Ask the agent to turn that behavior into a script. Then update the skill so future agents call the script instead of improvising the logic.

Bad pattern: every publishing session, the agent manually inspects YAML front matter and tries to remember the required fields. Better pattern: create that exits non-zero when , , , or are missing or malformed. Now the agent does not need to reason about the rules from scratch. It runs the command and reacts to the result.

Bad pattern: the agent repeatedly writes one-off Python to resize screenshots, compare image dimensions, or calculate visual diffs. Better pattern: create with clear output. The agent can use the result without reinventing image processing each time.

Bad pattern: the agent keeps constructing ad hoc SQL to answer common questions like “which users have duplicate active subscriptions?” or “which jobs are stuck?” Better pattern: create named scripts or Rails tasks. Now the workflow is repeatable, reviewable, and safe to run again.

Bad pattern: the agent writes custom code every time it needs to build a fake webhook payload or API response. Better pattern: create or a small fixture library that produces known-good examples. The agent stops guessing at payload shapes and starts using something the test suite can trust.

Moving repeated agent behavior into deterministic tools gives you a few wins:

- Dependability: the same input produces the same output.
- Determinism: fewer “creative” variations in routine work.
- Testability: scripts can have tests; improvised reasoning usually cannot.
- Reviewability: a script can be read, improved, and versioned.
- Cost: once the workflow is encoded, you may be able to use a smaller model for that task.
- Speed: future turns spend less time rediscovering the same procedure.

Watch the agent the way you watch your logs. When you see friction, repetition, or uncertainty, ask whether the agent needs better instructions or a better tool. Sometimes the answer is a clearer prompt. Sometimes it is a skill. And sometimes the best thing you can do is take the fragile reasoning out of the model entirely and give it a boring, deterministic script to call. That is not making the agent less useful. That is making the whole system more useful.
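As one sketch of the front matter validator idea from the bad/better patterns above: the required field names come from the publishing example in the post, but the `---`-delimited layout and the naive key parsing are my own assumptions (a real tool would use a proper YAML parser):

```python
#!/usr/bin/env python3
"""Exit non-zero when required front matter fields are missing."""
import sys

REQUIRED = {"title", "summary", "badge", "tags", "date"}

def check_front_matter(text: str) -> list:
    """Return a sorted list of the required keys that are missing."""
    if not text.startswith("---"):
        return sorted(REQUIRED)
    front_matter = text.split("---", 2)[1]   # between the two '---' fences
    keys = {line.split(":", 1)[0].strip()
            for line in front_matter.splitlines() if ":" in line}
    return sorted(REQUIRED - keys)

if __name__ == "__main__" and len(sys.argv) > 1:
    missing = check_front_matter(open(sys.argv[1]).read())
    if missing:
        print("missing front matter fields:", ", ".join(missing))
        sys.exit(1)
```

With something like this checked in, the skill can say “run the validator and fix whatever it reports” instead of restating the rules every session.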


Photo Journal - Day 5

Thought I would try something different for this entry! Each of these photos was taken with my Gameboy Camera attached to an Analogue Pocket (since it allows easy exporting). I've had this cartridge since I was a kid (I included 2 photos from back then for fun)! The following photos are from when I was a kid and have been sitting on the cartridge for 20+ years.

↑ This was one of the cats we had when I was a kid; his name was Benthem. He had massive cheeks!

↑ I imagine this was one of my friend's chickens that lived in the countryside.

Stratechery Yesterday

2026.19: Earning & Spending

Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week. This week’s Sharp Tech video is on Messaging AI in 2026.

What We Learned from Big Tech’s First Quarter. Apple, Amazon, Meta, Google and Microsoft all reported earnings last week, and as four of the five megacaps continue to pour massive sums into AI (first quarter CapEx was more than three times that of the Manhattan Project), there are no signs of that pace slowing. Ben broke it all down across several days, including divergent market reactions to great Google numbers and Meta numbers that were arguably even better, as well as the stories for Microsoft and Apple after Q1. Sandwiched between those Daily Updates, Tuesday’s Article zoomed out to connect Amazon’s infrastructure spending history with its AI strategy going forward. All of it was a great way to parse numbers that continue to boggle the mind, and strategy that actually looks a lot more rational than the numbers sound. — Andrew Sharp

A Conversation with Joanna Stern. How does one write a book about a tech story that seems to change every other week? Joanna Stern accepted that challenge, and explained how it went in this week’s Stratechery Interview. The resulting conversation is a delightful glimpse into the process for one of the most creative tech writers alive and the making of a book that Ben loved. Stern shares her thoughts on using an LLM to make a career change, as well as how AI is changing medicine (and mammograms), and the limits of LLMs that are still very real.
To the latter point, if you’d like to learn more about how ChatGPT misdiagnosed a praying mantis pregnancy, start with this week’s interview, and then you can buy the book here.  — AS What’s Next for the Celtics? Like many others across the media, I picked the Boston Celtics to make the NBA Finals in June. Alas, they barely made it out of April, eliminated in the first round by Joel Embiid and the 76ers. The GOAT podcast recapped that disaster first on Monday with a salute to the Sixers (now bittersweet after two losses to the Knicks), and on Thursday’s episode, a longer look at the mess in Boston and a variety of thorny choices from here. Get caught up on all of that and the rest of the Playoffs, and if you need an additional hoops fix, this week’s Sharp Text is a salute to the maddening charms of the Minnesota Timberwolves.  — AS Google Earnings, Meta Earnings — Wall Street loved Google’s earnings and hated Meta’s, even though the latter’s core business was more impressive. The difference is that Google is monetizing its investments now (and it might be all Anthropic). Amazon’s Durability — Amazon looked behind in AI in the training era, but is well placed in the inference era, thanks to its continued investment in the long term. Microsoft Earnings, Apple Earnings — Microsoft unveils its new agentic business model, and Apple confronts shortages in memory and chips even as the Mac benefits from AI. An Interview with Joanna Stern About Living With AI — An interview with Joanna Stern about her new book about living with AI, and starting her own media company. The Wolves Are Why We Do This — A salute to the playoff Timberwolves. Plus: Notes on Vogue history, NBA upheaval, and the “geo” in geopolitics.
Google and Meta Earnings
Anthropic and xAI
Sweden Made DC Great Again
The Sixers Get Their Moment in the Sun, A Nightmare for Celtics Fans, Thoughts on the Way Into the Second Round
A Championship Response from the Spurs, What’s Next for the Celtics?, Pre-Lottery Thoughts and Emotions

David Bushell Yesterday

Unscrewing lightbulbs

Giving lightbulbs a MAC address was a mistake that I’m living with. I’m literally unscrewing lightbulbs to renew their DHCP lease @dbushell.com - Bluesky

Instead of enjoying the bank holiday Monday I updated my homelab software. I was ‘inspired’ by the Copy Fail Linux bug to run full distro upgrades. This is my self-hosted update for Spring 2026 (rough documentation to give future me a chance). Monday’s fun risked a week of pain. I do have backups but restoring them on a broken LAN is tricky. I have an ISP-provided wifi router to dust off in an emergency, along with an absurdly long 15 metre HDMI cable I do not care to unravel. My winter update added a hardware fallback but that too requires careful rejigging.

I have Proxmox hosts, virtual machines, and Raspberry Pis running DietPi. They were all on Debian 12 (Bookworm) with a kernel potentially susceptible to the bug. Minimal Debian installs are perfect because I run everything in Docker anyway. Data volumes are easy to back up or network mount. I can change host at will for any service. Debian is just sensible, well-documented, no-fuss Linux. I used to run “minimal” Ubuntu server. Following 24.04 I found myself debloating most of the Ubuntu part (i.e. snaps). It sounds like the new coreutils are a CVE party. Glad I escaped before that drama! As it happens, this week’s Linux Unplugged episode had Canonical’s VP of Engineering spewing embarrassing AI platitudes. “Ubuntu is not for you” was the only thing said worth remembering.

I updated most of my VMs first because they’re easy to restore if anything fails. I followed Lubos Rendek’s guide: start with a full package update, then change the package sources before running another step-by-step upgrade. The only non-Debian sources I have are Docker and Tailscale. Yes that means I run Docker inside Proxmox VMs — and you can’t stop me! That’s not even my worst crime… After the Trixie upgrade I found VMs were failing to obtain a LAN IP address.
The virtual network device had been renamed during the upgrade, so I edited the network interface config and changed the reference to the new name. There is surely a better/more predictable fix but this was the quickest. The same name was used across all VMs so I guess 18 is the magic number. Everything has been stable so far. If issues arise I’ll just nuke and pave from a Debian 13 ISO. Docker config and volumes are backed up independently of the VM images.

DietPi has a long Trixie upgrade post I didn’t read. I just curled the upgrade script to bash, giving it a cursory glance before hitting enter. I have a Pi 4 running failover DNS and a Pi 5 running my public Forgejo instance. DietPi is ideal because of the tiny footprint; I run Docker here too. Raspberry Pi still hasn’t merged upstream Copy Fail fixes. I’m already in trouble if this bug can be exploited but I did the temporary fix out of caution.

I wasn’t going to bother with Proxmox 9 but after a GUI update I was informed version 8 “end of life” was August 2026. That is soon! I followed the official upgrade guide on my Mini-ITX server. Proxmox has a tool to check compatibility. I saw no red lights, so I stopped all VMs, updated package sources to Trixie, and ran the upgrade. It is critical to run the checker again before rebooting. I ran into the systemd-boot issue: apparently if this package is not removed the system fails to boot. If my particular box fails to boot I’m in big trouble because I broke video output and have yet to fix it.

I have another Proxmox machine running virtualised OPNsense for my home router. I can’t stop the OPNsense VM and upgrade the host to Proxmox 9 because the host would have no network access. I had two options: use my failover VM, or YOLO it live. I specifically set up option 1 for such a purpose. I went with option 2. I figured any software running in memory is still alive until I reboot, right? I didn’t question whether Proxmox would kill any processes itself (it didn’t). The update was suspiciously fast. I ran the checker again and saw a lot of yellow warnings. Yikes.
Eventually I noticed I’d failed to update some sources to Trixie and I’d installed a franken-distro. After fixing my mistakes all I could do was reboot and pray for an agonising two minutes.

OPNsense is the only non-Debian operating system in my homelab. I manage it entirely via the web GUI. The 26.1 update had quite a few significant changes. My DHCP setup was considered “legacy” and my firewall rules required a manual migration.

Despite dumbening my smart home, my lightbulbs still demand a WiFi connection. I program them myself to avoid Home Assistant and proprietary apps. Turns out I hard-coded IP addresses (discovery protocols are a joke). Despite having dynamic IPs they remained stable until the OPNsense 26.1 DHCP update. I had no easy way to identify each light. Why would they name themselves anything useful? That’s how I ended up unscrewing the bulbs one by one to see which MAC address fell off the network. I gave them static IPs on a VLAN for future me to appreciate.

And with that, my home network is up to date! Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
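For reference, the virtual-NIC rename fix described above amounts to a one-line edit in Debian's classic network config. A minimal sketch with hypothetical interface names (the post doesn't give the actual old or new names, and yours will differ):

```
# /etc/network/interfaces (inside a Debian VM) — hypothetical example.
# Suppose the NIC was renamed on upgrade, e.g. ens18 -> enp6s18;
# update every stanza that references the old name, then restart networking.
auto enp6s18
iface enp6s18 inet dhcp
```

Running `ip link` lists the device names the kernel actually assigned; pinning a stable name with a systemd.link file would be one "more predictable fix" of the kind the post alludes to.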


Tiny Visitor Counter

I created a tiny script for counting per-page visitors on your site. It's as simple as uploading the PHP file to your server and pointing an image tag at it. Serving the counter as an image is an attempt to weed out bots (since they typically don't render images). Here's a live version of the script: You can grab the script on my Codeberg. To set up with Pure Blog, upload the PHP file and add the image tag to your page and post footer HTML under Settings->Site, adjusting the path for wherever you uploaded the script.
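The hookup is just an image element whose source is the PHP script, so every real page render triggers one counted request. A hypothetical sketch (the actual filename and path depend on where you upload the script):

```html
<!-- hypothetical path: adjust to wherever the PHP file lives on your server -->
<img src="/counter.php" alt="visitor count">
```

Because crawlers and feed readers often skip image requests, they never hit the script and the count skews toward human visitors.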


Premium: AI's Circular Psychosis

In this week’s free newsletter, I explained how bad the circular AI economy is in the simplest possible terms: In other words, the entire AI economy effectively comes down to Anthropic and OpenAI, who take up at least 70% of Amazon’s, Google’s, and Microsoft’s compute capacity, 70% to 80% of their AI revenues and 50% of their entire revenue backlog, per The Information. That’s $748 billion of the entire revenue backlog — not just AI compute — that’s dependent on Anthropic and OpenAI, two companies that cannot afford to pay these bills without constant venture capital infusions from either investors or the hyperscalers themselves. This is a big problem, because Anthropic seems to be losing so much money that it had to raise $10 billion from Google, $5 billion from Amazon, and is reportedly trying to raise another $50 billion from investors, less than three months after it raised $30 billion on February 12, 2026, which was five months after it raised $13 billion in September 2025. That’s $58 billion in eight months, with the potential to reach $108 billion. Now Anthropic is taking over all 300MW of SpaceX/xAI/Elon Musk’s Colossus-1 data center, which will likely cost somewhere in the region of $2.5 billion to $3.5 billion a year, given that most of the data center is made up of H100 and H200 GPUs (with around 30,000 GB200 GPUs). I also don’t think people realize how bad a sign this is for the larger AI economy. Musk built the 300MW Colossus-1 to be “the most powerful AI training system in the world,” specifically saying that it was built “for training Grok,” with inference handled through Oracle (which originally earmarked Abilene for Musk but didn’t move fast enough for him) and other cloud providers. xAI, as one of the largest non-big-two providers, had so little need for AI capacity that it was able to hand off the entirety of its self-built data center capacity to Anthropic.
If xAI doesn’t need 300MW of compute capacity that it spent at least $4 billion to build, who, exactly, are the other large customers for AI compute? I’m not even being facetious. I truly don’t know, I can’t find them, I spent most of last week looking for them, and the only answer I had a week ago was “Elon Musk buying a lot of compute for xAI so the freaks on the Grok subreddit can generate pornography.” xAI is also the only non-OpenAI/Anthropic AI lab that’s built its own capacity, capacity it clearly didn’t need, which raises the question of why Musk needs however much capacity he’ll build at Colossus-2. Musk claims that xAI has moved all training to Colossus-2, but also that xAI would “provide compute to AI companies that are taking the right steps to ensure it is good for humanity.” This apparently includes Anthropic, which Musk called “misanthropic and evil” a little over two months ago. Researchers believe that the actual capacity of Colossus-2 is 350MW. At $2.5 billion a year or so, Anthropic will be effectively the entirety of xAI’s revenue, which was around $107 million in the third quarter of 2025. To put this very, very simply: xAI should, in theory, have massive demand for AI compute, but its demand is apparently so small that it can flog a multi-billion-dollar data center to a competitor. Sightline Climate found that 15.2GW of capacity is under construction and due to be completed by the end of 2027, and at this point I’m not sure anybody can make a compelling argument as to why it’s being built or who it’s for. Who needs it? Who are the customers? Who is buying AI compute at such a scale that it would warrant so much construction? Where is the demand coming from if it’s not OpenAI and Anthropic? These questions shouldn’t be that hard to answer, but trust me, I’ve tried and cannot find a GPU compute customer larger than $100 million a year, and honestly, that customer was xAI.
Through many hours of research, I’ve found that the vast majority — as much as 95% — of all compute demand comes from a handful of companies, which I break down at the end of this piece. Otherwise, every data center deal you’ve ever read about is for a theoretical future customer or an unnamed “anchor tenant” that gives them “guaranteed, pre-committed occupancy” without being identified in any way. Yet even that “pre-committed” language seems to be something of a myth, which I’ve chased down to a report from real estate firm JLL, which says that 92% of capacity currently under construction is precommitted through binding lease agreements or owner-occupied development. CBRE said it was 74.3% for the first half of 2025, and Cushman & Wakefield said it was 89%, though it also said that there was 25.3GW of capacity under construction, while Sightline sees 19.8GW under construction through the end of 2030. And man, I cannot express how fucking difficult it is to find actual data center customers outside of the ones I’ve named above. In fact, it’s pretty difficult to find any customers for GPU compute not named Anthropic, OpenAI, Microsoft, Google, Meta or Amazon. Outside of OpenAI and Anthropic, effectively no AI software company makes more than a few hundred million dollars a year, and to make that money, they have to spend it on tokens generated by models run by one of those two companies. When those companies generate those tokens, the money then flows to one of a few infrastructure providers — I’ll get to the breakdown shortly — renting out GPUs. As I’ve discussed this week, at least 75% of Microsoft, Google and Amazon’s AI revenues come from OpenAI or Anthropic, and that’s before you count the money that Microsoft, Google and Amazon make reselling models from both companies. To get specific, The Information reports that Anthropic will pay around $1.6 billion to Amazon for reselling its models. OpenAI, per my own reporting, sent Microsoft $659 million as part of its revenue share.
AI startups — all of which are terribly unprofitable — predominantly spend their funding on models sold by OpenAI and Anthropic. Per Newcomer, as of August last year, Cursor was spending 100% of its revenue on Anthropic. Harvey, an AI tool for lawyers, raised $960 million between February 2025 and March 2026, with most of those costs flowing to Anthropic and OpenAI. Effectively every AI startup is a feeder for API revenue for Anthropic or OpenAI, and as a result, almost every dollar of AI revenue flows to Google, Microsoft or Amazon. As Anthropic and OpenAI are extremely unprofitable, Google, Microsoft and Amazon then take that money and re-invest it in OpenAI and Anthropic, as all three have done in the past few years. At the beginning of the bubble, all three companies believed that OpenAI and Anthropic were golden geese that, through the startups they inspired and powered, were laying golden eggs that necessitated expanding their operations, leading them to spunk hundreds of billions of dollars in capex, with Amazon building the massive Project Rainier in Indiana for Anthropic and Microsoft the Atlanta and Wisconsin-based Fairwater data centers for OpenAI. They likely also thought their own services would grow fast enough to warrant the expansion, or that other large GPU consumers would rear their heads. That never happened. Instead, OpenAI grew bigger and more demanding of Microsoft’s compute capacity, leading Microsoft to allow it to seek other partners, in part (per The Information) because some executives believed OpenAI would die. By November 2025, OpenAI had signed a $300 billion deal with Oracle, a $22 billion deal with CoreWeave, a $38 billion deal with Amazon, and a theoretical deal with both AMD and NVIDIA.
Yet by this point, Microsoft realized it was in a bind, with the majority — at least 70%, if not more — of its AI revenues dependent on OpenAI, even though it had already walked away from 2GW of data center capacity to reduce its capex costs. It had also, as part of OpenAI’s conversion to a for-profit company, convinced OpenAI to commit to $250 billion in incremental Azure spending. So Microsoft chose to start spreading out that capacity to neoclouds like Nebius and Nscale, effectively bankrolling their entire futures based on theoretical revenue from OpenAI, a company that plans to burn $852 billion in the next four years and cannot afford to pay any of its bills without continual subsidies. These companies are now part of a multi-threaded dependency that ultimately ends up at one place: OpenAI, which also makes up the vast majority of inference chip maker Cerebras’ revenue with its 3-year, $20 billion deal. Meanwhile, Amazon and Google thought they had it made. Anthropic was growing, and its compute demands were reasonable enough that neither had to stretch themselves too thin…until the second quarter of 2025, when Anthropic’s accelerated growth started to push against the limits of Google and Amazon’s capacity. So Google agreed to backstop several billion dollars behind two deals with Fluidstack, a brand-new AI compute company, and Amazon continued expanding its Project Rainier data center. Yet Anthropic’s hunger wasn’t sated. After mocking OpenAI in February 2026 for “YOLOing” into compute deals (and having signed a cloud deal with Microsoft), it massively expanded its AWS and Google Cloud deals, signed a deal with CoreWeave, and, as I discussed above, took over the entirety of Musk’s Colossus-1 data center.
And all of this is only happening because, based on my analysis, very little actual demand for AI compute exists outside of OpenAI and Anthropic, and OpenAI and Anthropic only exist because Microsoft, Google, and Amazon keep building and expanding their infrastructure to cater to them. In reality, OpenAI and Anthropic are the only meaningful companies in the AI industry. They are the majority of revenue, the majority of capacity and the majority of demand. Microsoft, Google and Amazon have exploited the desperation of a tech industry that’s run out of hypergrowth ideas, and created a near-imaginary industry by propping up both companies. The mistake that most make in measuring the circularity of OpenAI and Anthropic is to focus entirely on the money raised — $13 billion from Microsoft and up to $50 billion from Amazon for OpenAI, and as much as $80 billion from Amazon and Google for Anthropic. The correct analysis starts with measuring infrastructure. Based on discussions with sources and analysis of multiple years of reporting, I estimate that of the roughly $700 billion in capex spent by Google, Meta and Microsoft since 2023, at least 5.5GW of capacity costing at least $300 billion has been built entirely for two companies. This has in turn inflated sales through multiple counterparties involving NVIDIA and ODMs like Quanta, Foxconn, Supermicro and Dell, and created a form of market-driven AI psychosis that inspired Meta to burn over $158 billion in three years and the entire world to convince itself that AI was the biggest thing ever.
The reason that there isn’t another OpenAI or Anthropic is that Google, Microsoft, and Amazon bankrolled their entire infrastructure, fed them billions of dollars, and then charged them discount rates for their early compute, with sources telling me that Anthropic pays vastly below-market rates for Trainium compute from Amazon, and The Information reporting that OpenAI was paying $1.30 per A100 per hour in 2024, at or around the cost of running them. By sacrificing their entire infrastructure to OpenAI and Anthropic, the hyperscalers created the illusion of demand by feeding themselves money, all while buying endless GPUs and TPUs to fill further data centers for two customers, both of whom paid discount rates that lost them money. This capex bacchanalia gave all three companies a massive boost to their stock prices, so they kept going, even though there wasn’t really any demand other than Anthropic’s and OpenAI’s, two companies that they had to constantly cater to with investment capital and server maintenance. The belief became that all you had to do was plan to build a data center and you’d print money, boosting NVIDIA’s sales and associated counterparties in memory stocks like Sandisk. Except that never happened. Every data center provider that doesn’t have an Anthropic, OpenAI, or Meta-related contract makes pathetic amounts of revenue that can barely keep up with its debt. AI startups make meager revenues, and lose multitudes more than they can ever hope to make. The entire AI industry relies upon two companies that expect to burn at least $1 trillion in the next four years, with Anthropic, the supposed “compute-conscious” AI company, committing to at least $330 billion in spend in the next few years. Where does that money come from, exactly? Because neither of these companies has anything approaching a path to profitability.
Based on a deep analysis of every publicly-available source on AI compute, I can find only two significant — over $100 million a year — purchasers of AI compute outside of Anthropic, OpenAI, Meta, or associated parties like NVIDIA, Microsoft, Google and Amazon. Those two are Poolside, which reportedly spends $400 million a year, an untenable position as it only raised $500 million in total funding before its $2 billion in funding collapsed earlier this year, and Perplexity, which appears to spend some amount of money with CoreWeave and Microsoft Azure. Both run at a massive loss. Nowhere is this lack of true demand more obvious than in the neoclouds, which only seem capable of signing big deals with Anthropic, OpenAI, Microsoft (for OpenAI), and Google (for OpenAI). Oh, and Meta, which is doing this because the existence of ChatGPT gave Mark Zuckerberg such profound AI psychosis that he’s made Meta build him a CEO chatbot to talk to and burned over $150 billion. The AI industry is a brittle, circular economy, one only made possible by a lack of financial regulation and a tech industry that’s run out of ideas. Without hyperscalers propping up OpenAI and Anthropic, there would be no reason to buy so many GPUs or build so many data centers, and neoclouds would have no reason to exist. This is a giant con, a giant illusion, and a giant mistake. To summarize who is actually buying this compute, and what that implies:

Meta, for reasons that defy logic.
Microsoft, for OpenAI’s compute.
Google, for Anthropic’s compute.
Amazon, for Anthropic.
90%+ of all AI revenues flow through Anthropic and OpenAI.
90%+ of all AI compute demand comes from Anthropic, OpenAI, Meta, or associated counterparties like Google and Amazon buying compute for Anthropic or OpenAI.
The vast majority of AI operations don’t require more than a few hundred to a thousand GPUs for inference, and at most 20,000 GPUs for training models.
This means that for the 15.2GW of data centers under construction before 2027 ($157 billion in annual revenue) to make sense, thousands of companies will have to rent hundreds or thousands of GPUs. This also means that the DeepSeek problem — the reason that everybody freaked out in January 2025 — is actually industry-wide.
More than 50% of Microsoft, Google, Amazon, CoreWeave, and Oracle’s entire revenue backlogs are from OpenAI and Anthropic.
Neoclouds are unsustainable, imaginary businesses only made possible by continual subsidies from NVIDIA and the compute demands of OpenAI, Anthropic and Meta.
Outside of Anthropic and OpenAI, only around $13 billion in AI compute demand exists, with much of it taken up by Meta and NVIDIA backstopping neoclouds like CoreWeave and IREN.
ODMs like Supermicro, Dell, Quanta and Foxconn are largely dependent on AI server revenues that largely flow through OpenAI and Anthropic’s counterparties to fuel their server demand.

Jeff Geerling Yesterday

HomePod mini feels like magic, but it's just good timing

Apple introduced the HomePod mini six years ago, in 2020. I'm not really into smart speakers, but the feature that made me take a closer look was their ability to form stereo pairs without any direct wired connection. I know there are other speaker manufacturers with wireless speakers, but to my knowledge, Apple was just using AirPlay over WiFi... so how does it work? Through the magic of buying two HomePods mini (pictured above), I found out. A video detailing the process is embedded below:

iDiallo Yesterday

Hi stranger

I'm at home, sitting at the kitchen table. I just took my boys to school and I'm about to start my work. I'm writing this message directly to you. And you are reading it. Hello! Isn't that funny? I've been trying to write consistently, and it gives the impression that I am this serious person with some serious insights. But no, I'm just writing. Sometimes you respond and send a nice email; other times it's complete silence. It ends up being like an entry in a journal, for me to stumble upon at a later date and reflect: "Oh yeah, that's what I was thinking that day." My job is a 2-hour drive away, so I rent an office close by. There I can focus and clearly delineate work time from home time. I don't like working when I'm home with my family. So I have some time to talk to you. Last year, I spent some time digging through my server logs to find out who is reading me. I wanted to know who you are, and why you are interested in reading me. But I can't get an answer from just reading the logs. Instead, what I found is that you and most other people come here via RSS. My rough count shows that there are 10,000 of you, or at least 10,000 unique IP addresses that ping the website whenever I write something new. Around 2,000 people are subscribed via popular RSS readers like Feedbin or Feedly. 1,500 of you also subscribe via email, which I have neglected this year. It's weird because this data is invisible most of the time. I forget that when I write something, anything, the odds are that someone will find it intriguing. In fact, when I look deeper into the logs, I see people referred by other blogs I'd never heard of. And they mention me by name: "and then Ibrahim said this or that." It feels so personal. I often forget that this is all so human. That what we call the small web is people not just writing, but telling us something. When I have an insight, or read something interesting, I'm telling you about it. Not directly, but in an asynchronous way.
You get to know or read about it on your own terms. The small web has never died; it only feels like it did at some point because it has remained small. But I don't think I want it to become any bigger, or any louder. It's right where it's supposed to be. I'm breaking the fourth wall today just to say hi. How are you? I hope you are doing well. The world is weird sometimes, but you are not invisible. I see you. I hope you are having a good day.

Unsung Yesterday

“There seems to be a file that is just filled with undecipherable Morse.”

On April Fools’ Day in 2021, the popular xkcd comic ran Checkbox, which was a Morse code puzzle in disguise. (It’s interesting to see the community trying to figure out what it actually does.) Engineer Max Goodhart built the front-end and wrote a summary of the whole project: This year was a doozy. We specced and scrapped several different ideas in the months leading up to today. We finally settled on today’s concept just 3 days ago. The need to do something simple was a really useful constraint, and we leaned into the idea of making something primitive but deep. The team seems to have had a lot of fun with it, even including JavaScript encoded in Morse code (the link in the blog post no longer works, but you can still see it on the Internet Archive). Goodhart also wrote about the immense challenge of adjusting the Morse tapping speed to the user, which counterintuitively ended up needing… adjusting the user to the speed. But the best part is that the server communications used Morse code in the URLs as well: We took great pains to make the API for this project use morse code in the transport. If you take a look at the network inspector, you’ll notice that the URLs requested have morse code in them. This worked for every combination of letters imaginable, with two oddly specific exceptions: a solitary E, and a solitary I. I liked this description of what transpired next, which would have made me think I was going insane, too: Then, an even stranger thing happened. I copied and pasted the correct URL into my browser and pressed Enter, and right before my eyes, it deleted the “.” from the end of the URL and returned a different result. I was delighted to discover an answer here, not only because in retrospect it’s such an obvious thing that was staring us all in the face for decades, but also because it has interesting URL construction consequences. #bugs #encoding #web
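The "obvious thing" staring everyone in the face is RFC 3986 dot-segment removal: in a URL path, a segment of "." means "current directory" and ".." means "parent directory", and clients normalize both away before the request is ever sent. Morse for E is "." and Morse for I is "..", so exactly those two letters vanish from the path. A quick illustration using Python's posixpath, whose normalization mirrors this rule (the "/api" path is a made-up stand-in, not xkcd's actual endpoint):

```python
# "." and ".." are special path segments per RFC 3986, stripped during
# normalization -- exactly what happened to the Morse "E" and "I" URLs.
import posixpath

print(posixpath.normpath("/api/."))   # Morse "E": dot segment removed -> "/api"
print(posixpath.normpath("/api/.."))  # Morse "I": climbs to the parent -> "/"
```

So the browser wasn't "deleting" anything maliciously; it was applying decades-old URL semantics to a path that happened to spell a letter.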


Automated Capitalism

Woke up to this email in my inbox. At first I thought "ugh, a sales pitch," but then I saw the line at the bottom: This company runs autonomously · polsia.com. That led me to visit Polsia. It's an entire platform for doing the minimal amount of work to try and sell slop to people. It vibe-codes, spams people, and provides "customer support" with just the help of your credit card. Is this seriously the future? 'Cause I don't want it.

Stone Tools Yesterday

PipeDream on the Acorn Archimedes

During the "throw everything at the wall and see what sticks" years of home computing, up to around 1995, a lot was thrown and a lot failed to stick. Sometimes clumps would form that appeared to have the combined friction necessary to maintain wall grip, each holding the other up. But, like Mitch Hedberg's observation of belts and belt loops, it was difficult to discern who was helping who stick to what. Take for example, our focus today. We have a completely novel CPU, built by a tiny team of engineers who had never designed a processor before, running a bespoke operating system squeezed out in a rush to meet the shipping deadline of a computer that wanted to carry on the legacy of a system beloved by British schoolchildren, hosting a productivity suite that completely rethought what the term "productivity suite" even meant. Together, they formed a complete computing dead-end. Yet separately, they each achieved life beyond expectations, given their shaky beginnings. Let's start with the hardware, Acorn Computer Ltd.'s follow-up to the famous 8-bit BBC Micro, the Archimedes. Feeling the 16-bit processors of the day didn't deliver enough bang-for-the-quid, they began an investigation into 32-bit processor options. After reading a U.C. Berkeley paper extolling the virtues of the RISC architecture, and seeing firsthand the ease with which chips could be designed, in 1983 Acorn launched the Acorn RISC Machine project to develop the 32-bit brain of their next system. The fruit of that labor, the ARM processor, defined the Archimedes line. Try as they might, Acorn could never crack the home market the way they did education. Still, those ARM CPUs had longevity well beyond the life of the company that commissioned it. Your smartphone likely has ARM in it right now, and Apple's entire current hardware ecosystem is built on its spec. That powerful hardware needed a preemptive multitasking operating system that befit its computing prowess. 
That was to be ARX, whose troubled development missed the product launch window. In the meantime, so the computer could have something driving it at launch, a stop-gap operating system called Arthur was shipped. It was similar to Acorn's previous BBC Micro MOS (Machine Operating System), with a graphical layer grafted on top; hit F12 and that text interface will peek out from behind the curtain. Over time it was decided that Arthur was doing a bang-up job, and ARX was cancelled. Thus was born RISC OS, a cooperative multitasking WIMP (windows, icons, menus, pointer) with possibly the first application "dock" on a home computer. Its mandatory three-button mouse summons an application's current context menu at the pointer location; there are no menu bars whatsoever. Drag-and-drop is embraced as a central file management metaphor, even to save documents. On top of all that, it was the first to offer scalable, anti-aliased font rendering, even if its fonts were a little "off brand." On top of this unique foundation, we have PipeDream. Developer Mark Colton was convinced that the boundaries between word processor, spreadsheet, and database were artificial and could be eliminated. A document should be able to do any of those functions at any time, anywhere on the page, he posited. One might think, "Oh, like Google Sheets," but PipeDream handles word processing more elegantly. Another might think, "Oh, like Apple Pages," but the spreadsheet and database functions are more robust in PipeDream. This particular balance of the three productivity functions feels unique amongst even its modern peers. Does a productivity suite work better when it's just a single app? Did Colton successfully execute his vision? And where is the Homerton documentary we deserve? (I didn't know the Ghost blogging platform forces images to 2000px max; I've revised my design workflow to mitigate this in the future.
To make amends for this timeline's illegibility at 2000px, please accept this PDF version)

Testing Rig
- RPCEmu v371 on Windows 11
- RISC OS v3.7
- 1024 x 768, 15-bit color
- 64MB RAM
- PipeDream v4.13

Let's Get to Work

My process when first examining unfamiliar systems is a basic productivity loop: boot the system, launch my application of interest, make a dummy document, quit the emulator entirely and reboot, then load my saved document. I do that across a variety of emulators to see which gives me the least grief; I need to be sure I can trust that loop. I usually try to give it a go without research, to see how far I can get on pure skillz (with a Z). It's unusual to sit down at what appears to be a computer I understand and be baffled every step of the way. I've heard this system described as "elegant" and "easy to learn." This has me questioning if maybe I'm actually a very dumb person, because my impression is "uncomfortable." You know that modern horror story, aka "creepypasta," The Backrooms? It's a hidden world that co-exists with our own, which can be entered only by clipping through a seam of reality which separates the two. In there, buzzing fluorescents light an infinite maze of featureless, yellow-wallpapered office-style floor layouts. If one were to find a running computer there, I suspect RISC OS would drive it. It's just common enough in its GUI metaphors to feel familiar, and just off-kilter enough to turn that familiarity against you. Liam Proven wrote in The Register, "You will find it very disorienting, especially if all you know is post-1990s OSes." My dude, I've been computing since the 1970s and I find it disorienting. Nothing is unlearnable (I'm dumb, not incompetent), but I genuinely had to work through its manual to acclimate myself. To be clear, I enjoyed the thrill of venturing into the unknown. After all, one of the goals of this blog is to investigate the less-trodden paths in software history. Still, there are times when I feel RISC OS is "having me on." (trying to ingratiate myself with British readers in today's post) I'll start with the three-button mouse.
From left to right the buttons are "Select", "Menu", and "Adjust." After weeks working with the system, I still can't figure out what problem the "Adjust" button solves. It's semi-analogous to modifier-clicking on modern systems, as when clicking to add/remove elements to/from a set of selected items. Then, sometimes it does something unexpected, like "drag a window by its title bar without bringing that window to the front." Other times it is baffling. Dragging a file icon to a new folder location doesn't move the file to the new location. It copies the file. If you want to move the file, you must SHIFT-drag it. Why are we "SHIFT" dragging anything when we have a perfectly good "Adjust" button? Sometimes the "Adjust" button does "opposite" actions. Click a "down" scroll arrow with "Adjust" and it will scroll up instead. Is that an "adjustment?" What does it even mean, to "Adjust" a mouse click? It seems like it could mean anything, and that's kind of my point. It's unguessable and unintuitive. An interesting UI element (which predates NeXT and Windows 95) is the Icon Tray, an important tool inexplicably not described at all in the RISC OS 3 manual. Situated along the bottom of the screen, currently running applications and directory icons sit on a little shelf. Double-click "Select" on an application icon to launch it and... nothing. Its icon displays in the Icon Tray, and that's it. We must now single-click "Select" on that icon to actually bring the application to the forefront and activate it. I don't know what that's all about, but that's how it works. Menus are fascinating in both the positive and negative meanings of the word. There are no menus on screen whatsoever; they are only made visible by the middle "Menu" mouse button. "Menu" clicking opens a given menu at the current mouse pointer location. Icons in the Icon Tray can be "Menu" clicked to get application-level menus, like "Make a new document." Within a document, "Menu" click will give us document-level options.
Conceptually, I like the "Menu" button a lot. Within a menu, any choices which open dialog boxes or control panels tend to open in-menu. It's kind of cool, being able to type, or flip switches and radio buttons, directly inside the menu itself, rather than popping up a modal window. However, it is jarring to have large panels suddenly lunge out like a xenomorph's inner jaws when scrolling through menus. These can obscure the root menu, depending on screen position. The last point to get our collective heads around is file saving. When saving a new document, simply typing in a file name is not sufficient. Save dialog boxes expect and require the full path to your save destination; no assumptions or default folder locations are provided. You can manually type in the full path to your desired save location. While you type, the system will not assist you in navigating the directory structure; no autocompletion here. You must know the path by heart. The other option, as described in the manuals, is to drag-and-drop your document to its save location. Drag-and-drop really seems to be the RISC OS idiomatic way to manipulate files. In a Save dialog box there is a little document icon. It looks like decoration, but it physically represents your document. Type a name into the text field, then drag that icon to your desired save folder. I don't want to get bogged down enumerating RISC OS's idiosyncrasies, but a few more things need mentioning. There is a kind of "programmer's art" ugliness to the user interface; those folder icons are terrible. There are graphical glitches, as when scrolling a window too quickly (though moving windows around shows full contents, which wasn't typical during that period). Everything you set up to customize the system, like desktop icons, window positions, desktop resolution, and other settings, is reset every boot unless you manually tell the system to save the current state as the "boot file."
The list goes on like that. Sheesh, what a journey just to understand the basics. I expect that kind of learning curve for text-based systems, as those DOS-like commands are unknown to me. For a GUI system to throw this "spanner in the works" (continuing my pandering) is unexpected, but a fun challenge. I can't feel myself growing to love it, but the initial feeling of discombobulation is receding.

A spreadsheet is an ordered matrix of cells, each of which can hold text or math. Cells with text are typically used as labels for columns and rows of numbers, and the math cells do the work of calculating relationships between those numbers. It's all very simple. No, wait, I mean it's "easy-peasy." (commitment to the bit) Lotus 1-2-3 felt "columns and rows" could also be useful for textual data. They said the line between spreadsheets and databases is pretty fuzzy, and even today spreadsheets are used to hold and manipulate simple databases. Then racecar driver Mark Colton pierced the veil entirely. It wasn't just spreadsheets and databases that had a fuzzy separation. If we can type arbitrary text into a cell in a spreadsheet, why couldn't we type an entire book? What if all applications were really just one application, in the end? He fired his first shot at uniting everything in View Professional. This was released as PipeDream on the Cambridge Z88, a portable Z80 machine by Sir Clive Sinclair's Cambridge Computer. Built into the ROM itself, it was insta-boot, insta-launch right into a multi-purpose integrated document suite. Jerry Pournelle, in BYTE Magazine's February 1989 issue, was moderately enamored with the hardware, but found PipeDream "disappointingly hard to use." With Acorn evolving their BBC Micro via the Archimedes, Colton continued to support their hardware line. In interviews, he seemed to really be leaning toward Windows for the future of his company.
However, since he switched development to C and there was a C compiler for the Archimedes, he said it wasn't hard to provide his product to the Acorn crowd. On Arthur, the precursor to RISC OS, PipeDream embraced and extended the "one document, many forms" approach. Much like today's Google Sheets, we can add arbitrarily long sections of text, insert images, set up database information, perform spreadsheet calculations, run spellcheck, and generate inline graphs. However, try typing a chapter of a book into Google Sheets if you want to drive yourself "mental." (there's no stopping me) In PipeDream, that's frictionless (within a certain definition of "friction"). Like RISC OS itself, PipeDream requires certain shifts in thinking if you don't want to lose a finger to its sharp edges. I suppose that when a developer offers a truly new paradigm, it is fair to ask users to meet it halfway. I'm not convinced the advertising (see "Historical Record" at the end) gave customers a full understanding of how drastic that shift was. "Menu" click the Icon Tray icon (i.e. the application-level menu) for PipeDream to start up a new "Text" file and begin typing into cell A1. You'll find that text overflows, across cell boundaries, until it hits the "row wrap marker" seen in the rightmost column header (shown as a "down arrow" icon). Every line of text is its own row, in spreadsheet terms. As you type, PipeDream fills the current row, then silently inserts a new row to catch overflow. Until a paragraph break, these rows are internally associated as a logical unit. Edits which alter or disrupt text flow across rows within a paragraph are not reflected immediately in the UI. Or maybe they are? It's hard to tell with the graphic glitches in the screen redraw, a constant source of frustration while working on this article. PipeDream cedes the reflow decision to you.
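To make sure I had that storage model straight in my head, I sketched it in Python. This is purely my own toy reconstruction of the behavior described in the manual, not anything resembling PipeDream's actual code:

```python
# A toy model (mine, not Colton's) of PipeDream's storage scheme:
# each paragraph lives as one spreadsheet row per visual line,
# wrapped at the "row wrap marker" column.

def wrap_paragraph(text: str, wrap_width: int) -> list[str]:
    """Split a paragraph into rows no wider than wrap_width,
    breaking at word boundaries, like PipeDream's row wrap."""
    rows, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= wrap_width:
            current = candidate
        else:
            rows.append(current)  # row is full; spill into a new row
            current = word
    if current:
        rows.append(current)
    return rows

para = "Edits which alter text flow are not reflected until you reformat."
rows = wrap_paragraph(para, 24)
# Each entry in `rows` would occupy its own spreadsheet row. Edit any
# one row and the others go stale until wrap_paragraph is re-run --
# which is exactly the manual "reformat" step PipeDream asks of you.
```

The point of the sketch is the last comment: because each visual line is a separate row, any edit invalidates the wrapping of every row below it in the paragraph, and something has to re-run the wrap.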
When in doubt about the current visual structure of your text, a manual reformat command will force PipeDream to recalculate text wrapping and line spacing. This can be mitigated a bit through a hidden toggle in the "Options" screen, the confusingly named "Insert on Return." This reduces the need to force a manual reflow, but can still leave visual chaos. I've altered the text flow and initiated a recalculation of the lines. It does the work, but visually shows no change until I trigger a graphics refresh in some way. Selecting the text works, but then leaves its own graphic artifacts behind. I've "gone nutter!" (yes, these are in the captions as well!) Interestingly, I saw similar redraw issues in View Professional on the BBC Micro. It would appear this is, to some extent, part of the software's DNA. Honestly, this is all "a bit of a shambles." (the hits keep coming) Have you ever wanted a word processor that won't indent paragraphs? PipeDream being a chimera, navigation idioms are forced to choose which parent they love most. An examination of the TAB key demonstrates this. In a word processor, we usually have a horizontal page ruler with tab stops. Tab over to a tab stop and type to align text at that indentation point on the page. In a spreadsheet, TAB navigates us to the next cell to the right. In PipeDream, the spreadsheet idiom wins TAB's love. In a text cell, TAB sets an invisible indicator at paragraph start which forces every subsequent line of that paragraph to begin at that same column. For example, by default every line of text is added to column A, the leftmost. If we TAB to column B, the text will start there, but when it wraps to the next line, that line will also begin in column B. "Indentation" is at the paragraph level, not the line level. How do we indent the first line of a paragraph? The manual has a solution.
In looking back through the history of Colton's software on the Acorn line, I found this note in a review of View 2.1, his standalone word processor for the BBC Micro. "Why is there no numerical information on the rulers or cursors to assist formatting?" asked Acorn User, January 1985. It seems Colton had it in for rulers for a decade, and to my thinking this points to a disconnect between what a programmer thinks users need and what users actually need. A stubborn rejection of norms doesn't always mean we're on the right track. We can use the cell-based layout engine of the program to pull off a fun party trick. Under "Options" there is a toggle between Row and Column text wrap. "Row" behaves like a typical word processor. "Column" lets us divide the page into columns, like a newspaper. Tab between columns and the column width will be respected by the word wrap. Kind of cool, and it could be useful in an "I need to make a newsletter, stat!" pinch. As in a spreadsheet, column widths are document-wide, so no mix-and-match. Someone very clever with the tools could probably coax complex layouts out of it, but that would require an ungodly amount of pre-planning, design, and patience before starting a document. You really have to try to get it right the first time, because I don't find PipeDream particularly adept at handling large structural changes after the fact. The column-based formatting gets frustrating, but in other ways the word processing is "bog-standard." (How many will I squeeze in? Place your bets!) We have a built-in spell check, user-definable dictionary, word count, text alignment, font choices, and an anagram/subgram maker. Bank Street Writer Plus had an anagram maker as well. Why was that such a thing back then? Have I forgotten some fad of the 80s and 90s? That's all fine and dandy, but I'll tell you what isn't: there's no simple cut/copy/paste, at least not as a modern audience may understand those tools.
In the document, we are restricted to cell-level selection, meaning I can't select individual words inside cell A1. I can only select the entire cell A1, which in PipeDream means an entire line of text. We can ask PipeDream to edit a cell in its own window, where it pops out for surgical editing. "Edit Formula in Window" hijacks the spreadsheet formula editor in order to get character-level selection control. In this pop-out window, we can highlight individual words and do typical cut/copy/paste actions. Notice, though, we're still restricted to only the text within the cell, which means only that line (row) of text. It's highly likely any given row will contain the tail-end of the previous sentence and the first part of the next sentence. If we want to cut out a specific sentence which doesn't align neatly to the row structure, there is no way to do so. I will repeat that. There is no way to cut/copy/paste an arbitrary string of characters. Now I feel PipeDream's vision working against itself for anything but simple correspondence. Remember, this is version 4 of PipeDream, Colton's fifth software release to pursue this unified application dream, and this is where we're at. I can't imagine writing anything substantial within these frustrating limitations. As a spreadsheet, PipeDream performs far more admirably, even if certain conventions have been eschewed in favor of its new vision. Hey, if you're gonna quirk it up, might as well go for broke. Unlike its spreadsheet ancestors, there is no Lotus-style command menu, nor is there a simple prefix character to tell PipeDream that we want to enter a formula into a cell, whether to denote a function call or to indicate we want to do math. Many of Lotus 1-2-3's innovations have been utterly ignored. The global "Options" allows us to set default behavior for cell entry. Setting it to "numbers" will put us into the right context for easy formula entry, or we can click into the ever-present formula entry line at the top of the window.
Turn on the "Grid" overlay to draw cell boundaries, and before you know it what was a word processing document is now a spreadsheet with "the full Monty." (TIL it doesn't mean "full-frontal nudity") The functions available to number crunchers are plentiful and robust. Trigonometric functions are a given, but its inclusion of matrix math may come as a surprise. Even complex-number functions, such as one computing "the complex hyperbolic arc cosecant of a complex number," are present and accounted for, so hardcore math nerds can breathe a sigh of relief. A wide number of financial functions, statistical functions, lookup tables, string manipulations, and date handling are all here. So too are flow control tools and more. There are even GUI controls available for showing error dialog boxes and prompts for user input, though those are only available from within custom functions. Yes, if you're missing a function, you can make your own. In a new worksheet, start with the function-definition statement (which can accept typed parameters) and end with the statement that sets the return value. In between, do the work. PipeDream will check syntax and accept or reject each line of your function. If accepted, it will prefix the line with a marker. In your real working worksheet, access the formula through a reference that names the function's worksheet file. That file reference implies PipeDream can access data from other worksheets, and that is true. Even a cell reference in a formula can be pulled from a completely different worksheet. I find the syntax for custom functions opaque, and the manual does a poor job of explaining what is possible and how to use the tool. There are a handful of examples provided with the software installation, with bugs, that reveal secrets only upon very close inspection. For example, notice in the screenshot above that the parameters to the function are later referenced by prefix, but local variables, as set by the function, are not prefixed when used in calculations. It's those subtle little things that tripped me up. The same goes for the name given to the return value.
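If it helps, here's a loose Python analogy I cooked up for the "function defined in its own worksheet, called by file reference" arrangement. The names and helper functions are entirely my invention; PipeDream's actual syntax looks nothing like this:

```python
# Loose analogy (my own, not PipeDream syntax): a custom function is
# defined in its own worksheet and invoked elsewhere via a reference
# that names that worksheet's file.

function_sheets = {}  # worksheet file name -> the function defined in it

def define_function(sheet_name, fn):
    """Stand-in for writing a custom function into its own worksheet."""
    function_sheets[sheet_name] = fn

def call_function(sheet_name, *args):
    """Stand-in for a cell formula that references another sheet's function."""
    return function_sheets[sheet_name](*args)

# Define a VAT-ish markup function in "markup_fns", then use it from
# a working sheet, the way a PipeDream cell references another file:
define_function("markup_fns", lambda price, rate: round(price * (1 + rate), 2))
total = call_function("markup_fns", 10.00, 0.175)  # -> 11.75
```

The design point the analogy captures: the function lives in a document of its own, and every caller has to know that document's name, which is why cross-worksheet references fall out of the feature for free.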
Or how the program has a selection of "Strings" functions, but when passing a string as a parameter its type is "Text." I stared at that syntax for a LONG TIME before finally realizing my various little misunderstandings. Customization doesn't stop there. Individual keys can be defined as shortcuts to longer string sequences, F-Keys (plain and modified) can be defined to trigger commands, and command sequences (triggered by the CTRL key) can be redefined to your liking (which risks overwriting built-in command shortcuts). You really can make PipeDream your own, though you're in for a struggle compared to Lotus 1-2-3 and the thousands of books available to help learn its principles. I found no actual books for PipeDream, just publishing announcements in old magazines. Something must exist, but the internet at large appears bereft. On the scorecard of "this amalgamation approach to productivity software is working," I'd say we're 1 and 1. The spreadsheet tools are fiddly, but robust. The word processing has me very underwhelmed. Time for the tie-breaker: databases. Using the supplied Lotus 1-2-3 conversion tool, I was able to bring in the data I originally created in CP/M dBASE II and had subsequently converted to DOS Lotus 1-2-3. Now it lives on in RISC OS PipeDream. This data has more passport stamps than Indiana Jones. Let's consider some of the basic things one might want to do with data. PipeDream beats out Lotus in sorting, giving us a five-stage, multi-row sort with ascending order. Not too shabby for the time, all things considered. Search and replace does what it "says on the tin" (in for a penny, in for a pound), and can also accept regex-like tokens and patterns. More interestingly, cells can be set up to directly perform queries on table data. There are a small handful of prefixed database functions to calculate averages, min/max, counts of things, and more. One last feature of note is how to use the query tools to extract a result into a new database.
This is interesting as it utilizes RISC OS's drag-and-drop Save functionality in a clever way. Note how the query for data extraction is much longer than the tiny little text field in the contextual menu can elegantly handle. This is one of those usability tradeoffs for the RISC OS way of doing things. I was initially ready to write off the database functionality as being underwhelming, until I reminded myself of the stated goal for PipeDream. Its core proposition is that there is no difference between the various aspects of the software. The word processor is the spreadsheet is the database. We're not limited to the "database" functions when manipulating our database data. We have access to everything the program has to offer, at all times. Let's clip through the inverted UV plane separating database and spreadsheet, and see what kind of trouble we can get into. I'm thinking back to the Lotus 1-2-3 article and how database information was queried there. With a table of data, we had to use the built-in query forms, define areas on the sheet to hold query parameters, and designate another section of the sheet into which query results would display. It was an obtuse Rube Goldberg machine that I couldn't understand until I drew a diagram of the process. In PipeDream, we just write a formula, the same as if it were a spreadsheet. Let's get the average rating of all adventure games in the database published before 1985. "Bob's your uncle!" (I was hoping to work that one in) Let's mix it up a little and get the same average, but only for titles which begin with "Zork." We can use wildcards, but let's leverage PipeDream's word processing string tools. The most awesome part about this is that, like any spreadsheet formula, it updates in real time. Change the ratings, or add a new Zork game to the mix, and get the new average instantly.
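Since the formulas themselves live in screenshots, here's the shape of those two queries in Python, over a made-up stand-in for my games database (titles, field names, and ratings here are illustrative, not my actual data):

```python
# Hypothetical stand-in for the games database (fields are my own).
games = [
    {"title": "Zork I",   "genre": "Adventure",  "year": 1980, "rating": 9},
    {"title": "Zork II",  "genre": "Adventure",  "year": 1981, "rating": 8},
    {"title": "Defender", "genre": "Arcade",     "year": 1981, "rating": 7},
    {"title": "Sorcerer", "genre": "Adventure",  "year": 1984, "rating": 8},
    {"title": "Elite",    "genre": "Simulation", "year": 1984, "rating": 9},
]

# "Average rating of all adventure games published before 1985":
matches = [g["rating"] for g in games
           if g["genre"] == "Adventure" and g["year"] < 1985]
avg = sum(matches) / len(matches)

# "...the same average, but only titles beginning with 'Zork'",
# using string matching rather than a wildcard:
zorks = [g["rating"] for g in games if g["title"].startswith("Zork")]
zork_avg = sum(zorks) / len(zorks)  # -> 8.5
```

In PipeDream, of course, both results would sit in cells and recompute the instant the underlying rows change; that live recalculation is the part a one-shot script can't convey.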
The database is the spreadsheet is the database, so that calculation can then be referenced as a value for another cell's formula, perhaps adding sales tax to the average unit price. While we're at it, might as well throw in some fancier text formatting to make it look pretty. In the Lotus 1-2-3 investigation, I wanted a pie chart showing a breakdown by game categories. Lotus had a handy function which removed duplicates from lists, making it possible to extract the full list of unique game categories, which could then be used as the query parameters for generating a chart. PipeDream can't do that, but it does have other string parsing routines, variables, cross-file data referencing, and the ability to write custom functions and macros. I don't doubt it would be possible to homebrew a workaround to this missing function. In fact, let's "have a bash at it." (swish!) Note the real-time update of the chart as I modify an external database. Ultimately, I couldn't achieve an elegant solution, but I could achieve my goal. I sorted the original data by genre, then created a column that checks if the genre for each row matches the one above it. If so, it's marked as a duplicate; otherwise, it's marked as a new genre. Then, I extracted all rows marked as a new genre. Last, I did a count (count any items in a list), where the source list is contained in the original database document. With the documents thus linked, I get real-time graph updates when I alter the core database, thanks to external reference handling. Everything's "tickety-boo!" (I'm trusting The Independent on this one) OK, PipeDream, you're winning me over a little more now. Time to take this to its logical conclusion. We haven't yet pushed it as the multi-purpose document creation tool it promises to be. We've done a little dabbling, with text formatting and data extraction, but I want to see everything come together. I want the borders to crumble.
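For the record, the workaround's logic translates to something like this in Python (my own sketch of the steps; PipeDream's formulas look nothing like this):

```python
# My sketch of the duplicate-removal workaround described above:
# sort by genre, flag each row whose genre differs from the row
# above it, extract the flagged rows, and count them.

rows = ["Adventure", "Arcade", "Adventure", "Simulation", "Arcade"]

ordered = sorted(rows)                        # step 1: sort by genre
markers = [i == 0 or ordered[i] != ordered[i - 1]
           for i in range(len(ordered))]      # step 2: flag "new genre" rows
unique_genres = [g for g, m in zip(ordered, markers) if m]  # step 3: extract
count = len(unique_genres)                    # step 4: count -> 3
```

Same trick, different clothes: the "does this row match the one above?" column is doing the work that Lotus's built-in de-duplication function did in one call.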
The approach I'm finding to be least troublesome is to begin with a "text" document, then decorate that with spreadsheet/database elements. As I scroll, text will disappear until I trigger a redraw event in the window. (pay no attention to the content of the letter) In building that document, here's what I learned. We have a unique confluence of interesting technologies coming together to form a strangely flawed jewel. It sparkles and shines when the light hits it just right, and in those sparkles we may catch a fleeting glimpse of a world that might have been. Might have been, but wasn't. Let's see where each of the underlying technologies wound up, and those in the know can feign shock with the rest of us when we learn that ARM isn't the only thing that survives to this day. We'll start with the obvious truth: ARM won. It's in everything, everywhere, all at once. If it isn't in your computer, it's in your phone, or your Newton, or your Palm Pilot, or your Canon camera, or your Nintendo DS, or your Nintendo 3DS, or your Nintendo Wii, or your Nintendo Switch, or your Nintendo Switch 2, or your Raspberry Pi, or maybe you're sidetalking on your N-Gage. Its combination of low power consumption with high performance makes it ideal for mobile devices, of which we are in abundance. But why ARM specifically? Others have swung for the RISC fences and stumbled, yet Acorn set two engineers to the task of designing their first ever microprocessor and somehow achieved a ubiquity that has remained (mostly) unchallenged. Apple/IBM/Motorola gathered their forces and developed their own RISC architecture, which debuted in Apple's Power Macintosh 6100. PowerPC doesn't mean much to a Windows/Intel crowd, but the Mac faithful remember all too well Apple's investment in it as the successor to the Motorola 68000 line. Frustrated by delays in the evolution of the chip line, Apple wound up ditching it for Intel x86, even if they eventually rediscovered the joys of RISC.
PowerPC went on to be adopted by a number of game consoles, notably the Nintendo Wii, Xbox 360, and PS3 simultaneously. The line continues today, and heck, Mars rovers Curiosity and Perseverance both have PPC inside. Hard to call such a history a "failure," but who outside the hardcore Amiga faithful today is clamoring for a PowerPC chip? The SPARC RISC architecture, of "Sun SPARC Workstation" fame, chugged along until as late as 2017 under Oracle, which had purchased Sun back in 2010. A notable achievement, in pop culture circles: this is the hardware Pixar's first Toy Story was rendered on. Though Oracle disbanded the design team keeping the architecture alive, the architecture itself is free and open source. There's nothing stopping an intrepid reader from carrying on the lineage, I suppose. Fujitsu, the last maker of production SPARC systems, has abandoned SPARC for ARM. I'll be honest, I can't figure out what ARM does so much better than other attempts, like SPARC, at making a great RISC processor. Reading through the Ars Technica story, it seems to be less about the underlying tech and more about the savvy promotional work of Robin Saxby and his absolute unwillingness to lose the RISC wars. Where others were building RISC for the server side, ARM committed themselves to the mobile side, skating to where the puck would be. Whatever the case, whatever the magic, ARM makes it available to anyone who wants it, through their licensing partnerships. Ultimately, this really seems to be what has given ARM its staying power: a low barrier to entry to quickly join in on high-performance, low-power-draw ARM fun. It's important to note that ARM doesn't make processors; they only license their IP. <<record_scratch.mp3>> OK, be that as it may, it is still substantially correct to say that IP licenses are their bread and butter. A "core license" allows a company to manufacture a specific ARM-designed CPU, a popular choice for system-on-a-chip designs.
Alternatively, an "architectural license" permits a company to design and build its own custom CPU around the ARM instruction set. That's what Apple does with their A- and M-series chips. In recent years, ARM has been feeling competitive pressure from the RISC-V architecture. Born in the same UC Berkeley labs that birthed the original RISC design reports that inspired Acorn to take a chance on RISC, its architecture, unlike ARM's, is free and open source. Consumer-level devices running on RISC-V have already started shipping. A new race has begun. Acorn's Archimedes line ultimately never sold particularly well. It's hard to nail down specific sales figures, but a 1991 Acorn shareholder report said, "Acorn is now the UK number one supplier of 32-bit RISC machines with an installed base of over 150,000 units." For context, the Amiga line had sold some 2 million units by 1991. We can't say Acorn didn't put in the effort, releasing some 13 model variations in under a decade. The general consensus seems to be that they "cost a bomb." (that's a new one on me) Schools adopted them, as a natural evolution of Acorn's prior BBC Micro installations, but at US$3,000 to $9,000 (in 2026 money) families just couldn't afford to put one in the home. In the mid-90s, Acorn dropped the Archimedes line, switching tracks to the more business-like Risc PC line, and produced a handful of systems around the StrongARM CPU. However, while the CPU spirit was willing, the motherboard flesh was weak, leaving the CPU underutilized. The lineup ended concurrently with the end of Acorn around 1998. Castle Technology tried to keep the Risc PC line going, post-Acorn, but called it quits shortly thereafter, in 2003. Open-sourced in 2018, RISC OS Open keeps it running and up to date for modern RISC-based hardware platforms, especially the Raspberry Pi. At v5.30 as of this writing, it is still a 32-bit operating system with "moonshot" aspirations of 64-bit someday.
*checks watch* Time is ticking to pull that together before fading into 32-bit irrelevance. Did I mention how tiny this thing is? The latest version for Raspberry Pi is a 155MB download. Version 3.7, which I used for this article, downloaded as a pre-configured emulator with OS and apps pre-installed, was a mere 129MB. Even the most up-to-date pre-configured package tops out at a "massive" 1GB, apps and emulator inclusive. How big is macOS on ARM? Leading with his View lineup of productivity apps on the BBC Micro, Mark Colton was the man with the all-in-one vision. With View Professional, he took his first stab at providing an uber-app for that 8-bit workhorse. It's primitive and clunky to use, but the spark is present. He would then expand on his ideas through the PipeDream lineup, taking it all the way to version 4.5. Every version refined the vision, but ultimately its character-based layout engine roots became a limiting factor to its growth. One rewrite later, he had a true GUI-based implementation, for both the Archimedes and Windows, in Fireworkz, released in 1993. Colton had created the standalone products Wordz and Resultz; Fireworkz combined those back into one. By mid-1995, Fireworkz Pro added in the database functionality, merging the new Recordz into the product, and that's where Colton's involvement ended. Besides asking "What even is a spreadsheet anyway?" Colton's other passion was race car driving. In August 1995, an engineering defect in the front wing of his Pilbeam M72 caused it to fold under his car while he was at top speed. He lost control, crashing headlong into a telegraph pole, and was killed. Most shockingly, both PipeDream and Fireworkz continue to be maintained to this day. Mark's father, Richard, generously open-sourced both PipeDream and Fireworkz just before his own untimely death in 2015. Fireworkz Pro, the version that includes database functionality, is not open-sourced and is still for sale.
The PipeDream package available for installation in the RISC OS package manager is not the version I'm using for this article. That is the modern update, which adds a bunch of niceties, including a GUI toolbar for formatting text, expanded spreadsheet functions, and a mind-boggling number of bug fixes. This is all maintained by lone developer Stewart Swales, someone intimately involved in RISC OS and PipeDream history. He worked at Acorn and helped develop Arthur, the OS that became RISC OS. Later, he joined Colton Software as lead developer, working on PipeDream and Fireworkz. There's really nobody better to carry on the legacy. Where, precisely, Colton's continuation of that legacy would have gone, we can't say with certainty. However, we do have a little insight into his thinking. In an interview with Acorn User, December 1994, he said, "Over the next few years...we won't be writing spreadsheets either; we'll be writing a totally different style of program. I expect spreadsheets, word processors and so on to be provided as part of the operating system in the future." Let me start by making it clear that I appreciate the effort. I say that with all sincerity and for everyone involved. From the machine, to the OS, to the productivity suite, all katamari'd up into a unique star. It was a lot of fun feeling like a beginner again. I had moments of true learning, shedding expectations of "how things should be" and experiencing fresh, alternate ways to approach work. I said at the beginning that the question needing answering is, "Did Colton successfully execute his vision?" and here I must waffle. From View Professional, through five major releases of PipeDream, and two Fireworkz releases, he held fast to a very particular line of exploration. That he never wavered in his pursuit of that vision says to me that he must have felt he had achieved his goal to some degree. In that regard, we can say he successfully executed his vision.
As an end-user, it is hard to align myself with that vision. I get what he was after, especially in trying to make sure documents always reflect the latest data. But after using PipeDream for a number of weeks, I remain unconvinced that the solution is to graft all software into one uber-application. If we follow that thinking to its logical conclusion, why not include paint features? Why not include robust desktop publishing features? Where would it stop?

Had the amalgamation of these productivity apps birthed something uniquely unachievable by other means, or unlocked some latent potential in the individual apps, I'd be very willing to adapt to this "skew-whiff" (last one, I promise!) approach to application design. As it stands, I ultimately don't see what it does that wouldn't be equally well-served, perhaps better-served, by intelligent file link management with robust publish/subscribe functionality. In fairness, a deep implementation of that would work best as an OS-level feature, and Colton could only control his own works.

Paradoxically, the most frustrating aspect of removing the barriers between applications is how we wind up with a slate of new barriers forged in that alliance. Colton said of View Professional that even when the apps are combined, none should feel like a compromised version of that app. Yet compromises are what I feel with every document I build. Is it worth giving up easy text formatting and basic cut/copy/paste for the off-chance I might need to insert a little spreadsheet table? There's an 80/20 rule being almost willfully ignored here.

I love that Colton had a unique vision and stuck to it. I love that someone tried to forge a new path in productivity application design. I love that PipeDream exists, but I don't love it.

What follows: ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible).
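To make the publish/subscribe alternative mentioned above concrete, here is a minimal sketch. Everything in it (class and variable names included) is hypothetical illustration on my part, not anything PipeDream or RISC OS actually provides: standalone documents subscribe to a shared value, and an update to the source propagates to every linked document.

```python
class DataSource:
    """A shared value that any number of documents can link to."""

    def __init__(self, value):
        self.value = value
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def update(self, value):
        # Publishing a new value notifies every subscribed document.
        self.value = value
        for notify in self._subscribers:
            notify(value)


class Document:
    """A standalone app's document holding a live link to shared data."""

    def __init__(self, name):
        self.name = name
        self.linked_value = None

    def link(self, source):
        source.subscribe(self._on_change)
        self._on_change(source.value)  # pull the current value immediately

    def _on_change(self, value):
        self.linked_value = value


# A spreadsheet total linked into a word-processor document:
quarterly_total = DataSource(1200)
report = Document("Quarterly report")
report.link(quarterly_total)
quarterly_total.update(1350)   # the report now reflects the new total
print(report.linked_value)     # 1350
```

The appeal of this shape is that each application stays a full citizen of its own domain; only the shared values cross the boundary, rather than every app being fused into one layout engine.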
Testing Rig:
- RPCEmu v371 on Windows 11
- RISC OS v3.7
- 1024 x 768, 15-bit color
- PipeDream v4.13

My test procedure: boot the system; launch my application of interest; make a dummy document; quit the emulator entirely and reboot; load my saved document.

Because rows and columns are shared throughout the document, insertions, deletions, or moving things around create difficult-to-resolve layout issues. If a spreadsheet sits to the right of a block of text, and we want to insert a row into only the spreadsheet part, that's not possible. Doing so will also insert an empty row into the paragraph, leaving a gap.

PipeDream has a strange concept of "global font" vs. "local font". Local fonts can't be changed until the global font is set to something other than the system font. The global font controls value cells, which cannot be styled individually. Local fonts style a cell from wherever the cursor is currently located, and it is very easy to target a cell and style its font but miss the first character or two, even though the entire cell is highlighted as a selection. "What will be the result of my action?" is not always crystal clear.

The controls for styling charts are difficult to understand, and a mistake is hard to reverse out of. I accidentally added "New Text" to the chart, and it took a long time to figure out how to delete it; selecting it and hitting "delete" doesn't work. There is no way to modify the legend. There's no facility for selecting elements for inclusion in, or exclusion from, the graph. In my case, formatting to look good on the printed page meant adding empty columns, which wound up in the pie chart. This is very representative of the struggles the layout engine introduces. Making data look good in one context risks "making a shambles of it" (are these working? have I won you over?) in another.

Page layout settings are cryptic. Margins can only be set at the top and left (?!?!), and only in unspecified numeric units.
I used the template default values, and the page wound up shifted down and to the left. Getting beautiful output is a challenge. And how could I forget? There's no UNDO! Some programs, like !Draw (vector illustration) and !Edit (text editor), have undo; others, like !Paint and !PipeDream, do not.

Getting started with RPCEmu, using a pre-built package, was as dead simple as you'd imagine. I experienced no crashes of the emulator, operating system, or PipeDream. It was a very solid experience in that regard. PipeDream itself, at least the version I used, had a ton of annoying bugs, and the graphical glitches were even noted in a review by Micro User, February 1992. But emulator-wise, everything was smooth. I recommend first-time users grab a pre-built image for quickly jumping in and seeing what the fuss is all about. I also recommend going through the RISC OS manual. The operating system is almost unusable until you learn its little tricks and nuances of operation.

Pre-built images: https://www.marutan.net/rpcemu/easystart.html
v3 Manual: https://archive.org/details/ro-3-user-guide
v5 Manual: https://archive.org/details/risc-os-5.28-user-guide

Technically, I am cheating a bit in this review. RPCEmu doesn't emulate an Archimedes, but rather Acorn's later Risc PC. I ran PipeDream from floppy in Arculator, which explicitly emulates Archimedes systems, to compare the experiences. Except for RPCEmu's snappier performance (which I want anyway), RISC OS abstracts away the hardware layer so much that it didn't seem to matter one emulator over the other.

The emulator expects a specific keyboard layout, including a key that my extended keyboard doesn't have; nothing on it would send the right code to the emulator. That character is used for logical operations in PipeDream data queries, so I had to fall back on Windows ALT keycodes.

I mentioned it earlier, but I'll make it explicit here: there is no undo.

Fireworkz is available as a native Win32 app.
It launches without issue on Windows 11 64-bit, and even in Wine on macOS. It looks and feels exactly like Fireworkz on RISC OS, which looks and feels a lot like the latest version of PipeDream (minus the database parts). The list of bug fixes and quality-of-life enhancements is vast. Scrolling through all the changes since Colton passed is kind of pointless due to its scope. I'll say, "a lot has improved" and leave it at that. As a local-only alternative to the Google/Apple/Microsoft hegemony, it's worth checking out. It's free, open source, actively maintained, a mere 2.5MB download, and for God's sake at least it's trying to do something different.

Getting documents out of RISC OS into a modern system is easy, but has its caveats. RPCEmu can directly save to the host operating system, so getting files out is a non-issue. PipeDream's options for saving documents will strip the document's uniqueness, however. Saving as ASCII will try to keep text precisely as shown in PipeDream, inserting line breaks at the end of every line of text. Tables are just tab-indented. Any text formatting, fonts, graphs, etc. are stripped, of course. Saving as "Paragraph" is like ASCII, but will keep text together as logical paragraphs. This is much better for pasting the text into new documents. We still lose anything done to make the document look pretty.

PDF printing is an option in RISC OS, and proved to be the best way I could find to get PipeDream documents into the real world. This required two parts: activating the PDF printer and running a separate !PrintPDF application. With both active, PipeDream generated PDFs without issue.
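As one small example of working with those exports on a modern system (the function and the sample text here are my own illustration, not a PipeDream feature), the tab-indented tables in an ASCII save can be turned into CSV with a few lines of Python:

```python
import csv
import io


def pipedream_table_to_csv(text: str) -> str:
    """Convert tab-separated table lines from a PipeDream ASCII export to CSV."""
    out = io.StringIO()
    writer = csv.writer(out)
    for line in text.splitlines():
        # PipeDream's ASCII save separates table cells with tab characters.
        writer.writerow(cell.strip() for cell in line.split("\t"))
    return out.getvalue()


# A two-row table as it might appear in an ASCII export:
sample = "Item\tQty\nWidgets\t12\n"
print(pipedream_table_to_csv(sample))
```

From there, any modern spreadsheet can open the result, though formatting, fonts, and charts are of course already long gone by the time the ASCII export exists.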


Nicola Losito

This week on the People and Blogs series we have an interview with Nicola Losito, whose blog can be found at nicolalosito.it. Tired of RSS? Read this in your browser or sign up for the newsletter. People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month.

Hi, my name is Nicola Losito. Born in the mid-70s in Bari, I lived through the first wave (in Italy) of the television invasion of Japanese cartoons—which I would later discover are called anime—and US TV series like the A-Team, Knight Rider, CHiPs, the Dukes of Hazzard, MacGyver, and many others. My other companions were American comics and the first consoles (Atari, Intellivision, Commodore). Finally, I remember with immense love the long afternoons riding around on a Vespa, or in the garage with friends taking apart and putting back together our Vespas, fixing the small hiccups that cropped up or trying to make them go faster.

All this led me to university studies in Mechanical Engineering, until a break to give ten months of my life to military service, which was mandatory in my day. Upon my return, I had missed the boat with my studies and, by then twenty-five, I finally convinced my parents that the computer was not just a toy but a multipurpose tool. I also discovered I had a bit of a knack for it, so I changed my field of study and city, graduating in Computer Science.

Two years ago I received a great gift, a new heart from a 27-year-old guy, which today allows me to continue living with my wife and see our son grow up. Comics, science fiction novels, motorcycles, and padel (instead of the tennis I played so much as a kid) are still part of my life. I continue to read superhero comics, along with more mature European and Japanese productions, with the recent addition of a couple of Korean authors. I ride a Ducati Monster 1200S, and my son and I are venturing into the world of minicross with an unfortunate LEM 50 DX3.
Perhaps you have noticed that I have not told you about my job yet. Especially after the long period of illness, having re-evaluated my priorities, it is now just a task I have to face, something I no longer believe in and can no longer get excited about. Anyway, I have an anecdote to share: I found a job opportunity thanks to participating in a motorcycle mailing list for two or three years. The interactions on the list made me "interesting" or "reliable" enough that another member eventually called me and invited me to participate in a selection process at the company where he had already been working for a dozen years. I started for fun, and I have now been at the CNR for twenty years. The lesson is: never rule out participating in something that interests you; you never know where life, passions, and the people you meet will take you.

As far as I can remember, I started coming across "blogs" towards the end of 2001, and certainly by 2002 several college friends had one. Thanks to the advent of an Italian blogging platform very similar to the current Blogger (it was called Splinder), on February 28, 2003, I took the plunge and opened my first blog on that platform, starting to interact with all the other bloggers (essentially Italians) who had an account there, or on other then-nascent platforms. In September 2004, I registered my first and current domain, installed WordPress release 1.2, and imported the old content. Since then, I haven't left the platform, and I believe the current incarnation of https://koolinus.net/blog has only been re-installed once during these twenty-two years, performing updates release after release.
Over time, I participated more actively in the international blogosphere, spanning various platforms: LiveJournal, Jaiku, and WordPress.com, practically since the latter was born in 2006, when I joined it to publish in English… Today my online activities are concentrated on the "historic" blog in Italian; I've made nicolalosito.it my personal space for English-language content, and I use Scribble for micro-blogging. I've always used Tumblr as a pinboard for images and quotes that strike me, and another instance of WordPress on a hidden subdomain to occasionally publish something more intimate that I felt like writing anyway. I essentially publish in three distinct ways, which I describe further on.

A recent and controversial post you wrote, Manuel, is exemplary of why I have this attitude. It resulted in me sharing the following quote, which somewhat summarizes the current mood:

The fact is that certain things you can only say to those you know can understand them. Which is also the reason we talk so little about what really matters to us.
— Enrico Galiano, Eppure Cadiamo Felici

Anyway, in all these modes, I write directly in the WordPress editor (Gutenberg), publish, and then make grammatical and typographical corrections. As someone once said, the publish button is the best editor. WordPress database maintenance plugins are my great friends.

Over the course of these twenty-two years, I've written just about everywhere: airports, hospital beds, car seats, cafe tables, desks at home or in the office. Very often with a soundtrack in the background, though in recent weeks often without music to accompany me. I'm working from home a lot—I'm "full remote"—and my neighbors are renovating, so the construction noises are more than enough as 'white noise'. As I mentioned, I write directly on the computer, so I do not use notebooks or anything else.
In my personal case, it is the inspiration of the moment that drives my writing, so being able to immediately put my thoughts into bytes depends only on having a keyboard and an active internet connection available. In short, physical space in the strict sense has never compromised the desire or the ability to knock out a post.

My domains are all currently registered with the European provider OVH. For a few years now, I've been using SupportHost for hosting, after having tried almost all the big names in the industry: BlueHost, SiteGround, GreenGeeks, along with a couple of national ISPs… problems arose with all of them sooner or later. Since my friend Lino Sabato told me about this company and I migrated all my content, I've become a happy and (above all) listened-to customer, and every time I've recommended this provider, those who migrated in turn have only thanked me. So to recap, I'm on cPanel-based hosting, and I use WordPress as a CMS. I'm tempted to switch to something static, but so far I haven't found the courage or the time to approach it. Who knows if 2026 will see me make progress on this front.

One thing I question a lot is that I gave up keeping my nickname and my real name separate; at many points, keeping a nickname associated with certain topics would have allowed me to talk about them freely, without potential repercussions in real life. Having created a point of contact between these two parts is perhaps something I regret. Given how today's tech world has developed and is developing, guarding your anonymity tooth and nail, or at least clearly separating public and private, is an effort that should be made even at the expense of convenience. For me, and for the vast majority of early bloggers, this is no longer possible. It serves as a warning, however, to those starting today or about to start (or for my son when he enters the web).
From a practical point of view, however, I think the important thing is to choose any platform to start on and get a "feel" for your desire to tell your story, making sure you can export what you've written to another platform later on.

I chose a hosting solution that allows me to have several domains (as well as subdomains) within the same plan. I think this year we reached about €200 including taxes. I host 4 blogs for other friends, including a couple belonging to a friend who recently passed away, and they contribute to the expense. Then there are the costs of the .it and .net domains, which run between €11 and €16. Could I save money? Probably yes, but currently I sleep soundly and I don't have any malfunctions (especially because for one of the domains I heavily use the email provided with the hosting plan and I have never, and I mean never, encountered a problem).

On monetization, as expressed by almost all the friends already interviewed, I am indifferent to the fact that there are people who make blogging a profession. As long as it is done while respecting the reader and not treating them like a fool or a fruit to be squeezed, I can tolerate even the most aggressive ads or pop-ups. But when everything becomes self-referential and closed in an ecosystem, then I stop following. Personally, I try to support some authors by buying software or with a donation – either monetary or, occasionally, purchasing hardware or something else I read they are interested in. I support your work, Manuel, and that of another couple of pals, with the 1-dollar-a-month initiative.

I would have certainly pointed you to Luca, but as I mentioned, he passed away in the first days of March 2026. I would like to give visibility to Luigi Mozzillo and to Nicola D'Agostino, author, among other things, of the Stories of Apple project, which in my opinion hasn't received the love it deserved from the public. Then there are people who don't have a blog but passionately curate newsletters.
Is it okay to mention them too? In that case, I'd say the work of Anne-Laure Le Cunff is certainly noteworthy. I also really like the reflections of Tobias van Schneider, both on his blog and in his newsletter. Among Italian newsletters, I'd highlight those by Gianvito Fanelli, the Polpette (meatballs) di Vanz, and everything Mafe De Baggis writes. I could probably write a whole post about the newsletters I follow.

As for how I publish: I curate a regular, periodic column on both the Italian and English blogs, where since May 2015 I have been publishing interesting URLs that I collect (currently on Bear) during my daily browsing; I publish how-tos on how I solve specific IT problems (which happens VERY rarely today, unlike in the past); and I publish on the emotional wave that a song, a quote, a photo, or a dialog triggers in me. These days I mull things over a lot in my head, and I very rarely expose my thoughts publicly in writing.

Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 140 interviews. People and Blogs is possible because kind people support it.

I'm not the type of person to suggest things off the cuff; what I like or what inspires me is regularly described in the pages of my blogs. I would like to share these 'life tips' instead. Start being honest with yourself as soon as possible. Eliminate what you don't like from your life, or confine it to a cage, and don't let it eat up what is important to you. Remember that work is a gas that expands to occupy all the space it is given. You must be consistent with the things you say, even if it's often inconvenient. I believe you have to be kind regardless. A great luxury in life is being able to afford to trust others, even when they prove they don't deserve it, and thus not be too damaged by it.
Above all, don't put off until some random tomorrow the things that make you feel good or make you happy; proceed step by step but without hesitating, and allow yourself to experience every single milestone. Tomorrow morning you don't know what will become of you or the world.
