Latest Posts (20 found)

your social media habits sound like an abusive relationship

Most people in my life still use big social media platforms. My wife, for example, is on Tumblr. As someone who has been off of these platforms for quite a while, some of the things people share with me sound extremely odd to me: weird rules and behaviors they feel the need to abide by, or else!... whatever that may be. Some I even recognize from back when I used them, but now I have a completely different view of them, as I am no longer embedded in a culture that normalizes them.

For one, apparently some people are scared of unfollowing others. "I can't unfollow them! We've already followed each other for years and they'd notice and then it's awkward!" So they'd rather stick it out with someone they no longer like or whose posts they don't wanna see. They'd rather filter out all posts via keywords and other means than just unfollow. Internet strangers! Not even people they'd run into in real life. Why do you feel the need to lie so much just to protect a random person's feelings about having one less follower? The whole concept of being trapped with someone because you're "mutuals" is insane! Why do you care whether only one side follows the other? What does it matter? Why do you fuel the notion that unfollowing means downgrading a friendship or rejecting someone completely? It shouldn't be this way, and you voluntarily participate in this.

Same with blocking. "I can't block. That is so harsh. I can instead just block them and unblock them again so we are both unfollowed from each other. This is called softblocking." Okay? And what for? So you can pretend it was totally a website glitch that made you guys unfollow each other? As if they wouldn't notice and know. Everyone knows what softblocking is on those platforms! Don't kid yourself. When they refollow you again, what then? What if they message you and ask why you unfollowed, the dreadful thing you fear? Many then go on to lie, saying it must have totally been an accident, and follow them again?? Guys, it's a website, pixels on a screen - you can be honest. They're not gonna stick a hand through your screen to strangle you. Thanks to digital mediums, it has never been easier to just ride out awkward shit and ignore things. Make use of it. Pressing a button is not being aggressive or dramatic.

"No, I cannot message them directly, that is awkward, we have never interacted before!" ...So? Damn, the website/app offers DMs and now you can't even privately message strangers on the internet anymore? What has this place come to? Now you're just there to scroll and passively consume ads and no longer talk to the people that share the ads around voluntarily? DMing someone is "intimate"? You are "harassing" someone with a simple message they can choose to open or ignore? Do you hear yourself?

Then there is the far more subtle or platform-specific stuff... like the fact that people feel like they can't comment in the replies until others have done so, or cannot reblog something because the post is still "too small"; that liking old posts is "creepy"; that watching or not watching a story, liking or not liking a post, has deep consequences; that you have to put things in the tags instead of the post body to be safe from OP's wrath and signal that this is for your followers only (just for OP to screenshot the tags anyway and rake you over the coals).
There are also people who are too scared to challenge others directly and openly on the respective post, and instead screenshot it, put a filter over it to visually signify they disagree with its content, and then post it themselves? The type of stuff they are comfortable saying when they think OP won't notice, while being too scared to say it underneath the post, all while living off of follower validation like "Look how dumb this is! Hype me up, like this post, comment that you agree!", is so embarrassing to see.

Because people on there treat public interactions as definitive signs of allegiance and ownership, when someone bad follows you and likes your posts, while you don't even follow back, you're still treated as attracting and tolerating the bad person, and therefore as implicitly agreeing to their vile views. I guess that's where the whole culture of "Do Not Interact" disclaimers comes from: you have to prove from the get-go where your allegiances are, as a precaution for when you haven't deeply vetted every follower you have. In the same vein, people seem to proactively confess old opinions, archive tweets, lock accounts, or add disclaimers to avoid or soften hypothetical future attacks.

It all adds up to weird stories I can't even completely recall... investigative, roundabout stuff with second accounts and softblocking and other checks, weaponizing features of the platform, circumventing things, completely normalized mutual surveillance disguised as casual browsing, where they actively check who viewed stories, who liked posts, posting times, and other activity to judge the friendship level? All of this is tip-toeing around, scared to offend someone, worried about nebulous consequences and being subject to toxic rage; never getting out of the awful behaviors you were subjected to by your peers in high school.

It's as if you're in an abusive relationship with the platform and its users, and it's uncomfortable to see from the outside how scared it makes you to actually interact with anyone online or use the space for what it is made for. It's like your online home constantly shows the signs of a hole punched in the drywall. I see it with my wife as well, who also has a blog on here and sometimes would like to reply to other blog posts on Bearblog, but never ends up doing it because "It's weird to barely post and then immediately shit on someone else's post." and other convoluted reasons that only exist because social media culture is what it is.

If you relate to anything in this post, you have been conditioned by people who can only scream and shout and "I am not reading all that" and sic their followers on you. How sad! You're like a beaten puppy and your behaviors are completely warped. It's actively harmful for you, and I wouldn't be surprised if it significantly fuels the social anxiety you feel even when offline. In the online spaces you're in, you are always asked to put the needs of someone else above yours, someone you cannot even fully anticipate because they're a nebulous mob entity. Your nervous system constantly deals with the risk of this app or site blowing up in your face, and you're always scared when you see a bunch of notifications coming up. I don't know how you can feel mentally well when this is always looming over your head.

Spending my online time in places where none of this weird stuff exists has really put it into perspective. I can just reply! I can just send emails or reach out otherwise! No stress, no worries! No followers, no blocking!
Again, I know why all of these exist in theory, and many I've known from my own time on these platforms, but none of it is justified. Period. You don't have to tell me why any of these are valid or why they happen; this is like listening to an abuse victim justify the abuse. Sometimes you can only see how badly you've been treated months or years after you leave.

Published 09 May, 2026


AI makes weak engineers less harmful

Like other kinds of puzzle-solving, software engineering ability is strongly heavy-tailed. The strongest engineers produce way more useful output than the average, and the weakest engineers are often actively net-negative: instead of moving projects along, they create problems that their colleagues have to spend time solving. That's why many tech companies try to build a small, ludicrously well-paid team instead of a large team of more average engineers, and why so far this seems to be a winning strategy.

Being effective in a large tech company is often about managing this phenomenon: trying to arrange things so that the most competent people land on projects you want to succeed, and the least competent are shunted out of the way 1. For instance, if you're technical lead on a project, you more or less have to ensure 2 that the most critical pieces are in the hands of people who won't screw them up (whether by directly assigning the work, or by making sure someone can "sit on the shoulder" of the engineer you're worried about).

Claude Code changed this. Frontier LLMs don't have the taste or the system familiarity of a strong engineer, but they have absolutely raised the floor for weak engineers. Instead of getting a pull request that could never possibly work or would cause immediate problems, the worst you'll now see is a standard LLM pull request: wrong in some ways, baffling in others, but at least functional on the line-by-line level and not so obviously incorrect that someone with no knowledge of the codebase could point it out. That is a huge improvement! You can try this out yourself. If you attempt to deliberately make mistakes while working with a coding agent, you'll find that the agent pushes back hard against many obvious errors (e.g. caching user data with a non-user-specific key, writing a loop that might never terminate, or leaking open files). Of course, the agent will still miss subtle errors, particularly ones that require understanding other parts of the codebase.

Working with the least effective engineers is now sometimes like working with a Claude Opus or Codex instance that you communicate with over Slack. Occasionally it's literally that: your colleague is simply pasting your messages into Claude Code and pasting the response back to you. This is annoying, but it's a much better experience than working with this kind of engineer directly. After all, you probably already work with a bunch of LLM instances. The Slack interface is not ideal - unlike using Claude Code directly, you sometimes wait hours or days for a response, and you don't get visibility into the agent's thought processes - but it's still helpful on the margin. More compute being thrown at your problem is better than less.

Of course, this isn't a great state of affairs for the engineer in question, who is almost certainly learning less than if they were making their own (bad) decisions. It's also a bad state of affairs for the company, which is paying a human salary and getting a Copilot subscription (which it's likely also paying for) 3. After the current push to figure out what value AI is adding to engineers, I suspect there will be a push to figure out what value engineers are adding to AI, and the engineers who aren't adding much may find themselves out of a job.

You can't talk to Claude-over-Slack like you'd talk to normal Claude. If you tend to handle LLMs roughly (insulting them, or just being very curt), you'll have to change your communication style.
A human is going to read your messages, after all, even if you're really interacting with an LLM. There's no point being rude. But if, like me, you say please-and-thank-you to the models 4, you can treat your LLM-using coworker as just another Copilot window or Codex tab. It's far better than having to treat them as an unwitting saboteur.

Not all net-negative engineers use AI tools like this. Many are strongly convinced of their own wrong opinions about how to build good software, or mistrust AI in general, or believe that relying heavily on LLMs is not a good way to improve 5. But no strong engineers use AI tools like this. Even when they're being lazy or sloppy, a capable engineer will have enough baseline taste to catch obvious AI-generated errors. So the phenomenon of engineers 6 becoming thin wrappers around Claude Code is limited to the kind of engineers for whom this is an improvement in their work product.

1. More charitably: many "least competent" engineers are just out of their comfort zone, and can be fine or even excel under the right circumstances (though in my view the best engineers are able to do good work in a wide variety of environments). Also, I don't currently work with a lot of incompetent people. Much of this is based on past experience or talking to other engineers in the industry. ↩

2. Since your managers are doing the same thing, this can sometimes feel like Moneyball: you're trying to identify underappreciated talent who are strong enough to help you win without being so high-profile that your boss poaches them to lead something else. ↩

3. I suppose it's better to pay for nothing than to pay for net-negative output, but it still doesn't seem good. ↩

4. I think this is actually the right way to hold Claude Opus 4.7. ↩

5. Is this true? I think relying on LLMs is not a great way for most engineers to improve, but if LLM output is consistently better than your own, it might be different. So long as you're paying attention to where the LLM does better, it could actually be a good way to learn. ↩

6. I don't have as much experience (or anecdotes) about non-engineers falling into this trap, but this post has convinced me that it might be worse. ↩


Photo Journal - Day 5

Thought I would try something different for this entry! Each of these photos was taken with my Gameboy Camera attached to an Analogue Pocket (since it allows easy exporting). I've had this cartridge since I was a kid (I included 2 photos from back then for fun)!

The following photos are from when I was a kid and have been sitting on the cartridge for 20+ years.

↑ This was one of the cats we had when I was a kid, his name was Benthem. He had massive cheeks!

↑ I imagine this was one of my friend's chickens that lived in the countryside.


2026.19: Earning & Spending

Welcome back to This Week in Stratechery! As a reminder, every Friday we're sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don't want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week. This week's Sharp Tech video is on Messaging AI in 2026.

What We Learned from Big Tech's First Quarter. Apple, Amazon, Meta, Google and Microsoft all reported earnings last week, and as four of the five megacaps continue to pour massive sums into AI (first quarter CapEx was more than three times that of the Manhattan Project), there are no signs of that pace slowing. Ben broke it all down across several days, including divergent market reactions to great Google numbers and Meta numbers that were arguably even better, as well as the stories for Microsoft and Apple after Q1. Sandwiched between those Daily Updates, Tuesday's Article zoomed out to connect Amazon's infrastructure spending history with its AI strategy going forward. All of it was a great way to parse numbers that continue to boggle the mind, and strategy that actually looks a lot more rational than the numbers sound. — Andrew Sharp

A Conversation with Joanna Stern. How does one write a book about a tech story that seems to change every other week? Joanna Stern accepted that challenge, and explained how it went in this week's Stratechery Interview. The resulting conversation is a delightful glimpse into the process for one of the most creative tech writers alive and the making of a book that Ben loved. Stern shares her thoughts on using an LLM to make a career change, as well as how AI is changing medicine (and mammograms), and limits of LLMs that are still very real. To the latter point, if you'd like to learn more about how ChatGPT misdiagnosed a praying mantis pregnancy, start with this week's interview, and then you can buy the book here. — AS

What's Next for the Celtics? Like many others across the media, I picked the Boston Celtics to make the NBA Finals in June. Alas, they barely made it out of April and were eliminated in the first round by Joel Embiid and the 76ers. The GOAT podcast recapped that disaster first on Monday with a salute to the Sixers (now bittersweet after two losses to the Knicks), and on Thursday's episode, a longer look at the mess in Boston and a variety of thorny choices from here. Get caught up on all of that and the rest of the Playoffs, and if you need an additional hoops fix, this week's Sharp Text is a salute to the maddening charms of the Minnesota Timberwolves. — AS

Google Earnings, Meta Earnings — Wall Street loved Google's earnings, and hated Meta's, even though the latter's core business was more impressive. The difference is that Google is monetizing its investments now (and it might be all Anthropic).

Amazon's Durability — Amazon looked behind in AI in the training era, but is well placed in the inference era, thanks to its continued investment in the long term.

Microsoft Earnings, Apple Earnings — Microsoft unveils its new agentic business model, and Apple confronts shortages in memory and chips even as the Mac benefits from AI.

An Interview with Joanna Stern About Living With AI — An interview with Joanna Stern about her new book about living with AI, and starting her own media company.
The Wolves Are Why We Do This — A salute to the playoff Timberwolves. Plus: Notes on Vogue history, NBA upheaval, and the "geo" in geopolitics.

Google and Meta Earnings
Anthropic and xAI
Sweden Made DC Great Again
The Sixers Get Their Moment in the Sun, A Nightmare for Celtics Fans, Thoughts on the Way Into the Second Round
A Championship Response from the Spurs, What's Next for the Celtics?, Pre-Lottery Thoughts and Emotions


Unscrewing lightbulbs

Giving lightbulbs a MAC address was a mistake that I'm living with. I'm literally unscrewing lightbulbs to renew their DHCP lease
@dbushell.com - Bluesky

Instead of enjoying the bank holiday Monday I updated my homelab software. I was 'inspired' by the Copy Fail Linux bug to run full distro upgrades. This is my self-hosted update for Spring 2026 (rough documentation to give future me a chance). Monday's fun risked a week of pain. I do have backups but restoring them on a broken LAN is tricky. I have an ISP-provided wifi router to dust off in an emergency, along with an absurdly long 15 metre HDMI cable I do not care to unravel. My winter update added a hardware fallback but that too requires careful rejigging.

I have Proxmox hosts, virtual machines, and Raspberry DietPis. They were all on Debian 12 (Bookworm) with a kernel potentially susceptible to the bug. Minimal Debian installs are perfect because I run everything in Docker anyway. Data volumes are easy to back up or network mount. I can change host at will for any service. Debian is just sensible, well-documented, no-fuss Linux. I used to run "minimal" Ubuntu server. Following 24.04 I found myself debloating most of the Ubuntu part (i.e. snaps). It sounds like the new coreutils are a CVE party. Glad I escaped before that drama! As it happens, this week's Linux Unplugged episode had Canonical's VP of Engineering spewing embarrassing AI platitudes. "Ubuntu is not for you" was the only thing said worth remembering.

I updated most of my VMs first because they're easy to restore if anything fails. I followed Lubos Rendek's guide: start with a full package update, then change the package sources before running another step-by-step upgrade. The only non-Debian sources I have are Docker and Tailscale. Yes that means I run Docker inside Proxmox VMs — and you can't stop me! That's not even my worst crime… After the Trixie upgrade I found VMs were failing to obtain a LAN IP address. The virtual network device had been renamed. I edited the network config and just changed the device name reference. There is surely a better/more predictable fix but this was the quickest. The same name was used across all VMs so I guess 18 is the magic number. Everything has been stable so far. If issues arise I'll just nuke and pave from a Debian 13 ISO. Docker config and volumes are backed up independently of the VM images.
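For reference, the Bookworm-to-Trixie flow described above boils down to a handful of commands per VM. This is a hedged sketch of the standard Debian release-upgrade steps, not the author's exact session:

```sh
# Sketch of an in-place Debian 12 (Bookworm) -> 13 (Trixie) upgrade.
# Assumes root, and that third-party sources (Docker, Tailscale) live
# in codename-based .list files under sources.list.d.
apt update && apt full-upgrade        # bring Bookworm fully up to date first
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
apt update
apt upgrade --without-new-pkgs       # minimal upgrade step, per the release notes
apt full-upgrade                     # then the full release upgrade
reboot
```

If a VM then comes back up without networking, the rename fix above likely amounts to checking what `ip link` now reports and updating the device name in /etc/network/interfaces (assuming the minimal installs use ifupdown).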
DietPi has a long Trixie upgrade post I didn't read. I just curled the upgrade script to bash, giving it a cursory glance before hitting enter. I have a Pi 4 running failover DNS and a Pi 5 running my public Forgejo instance. DietPi is ideal because of the tiny footprint; I run Docker here too. Raspberry Pi still hasn't merged upstream Copy Fail fixes. I'm already in trouble if this bug can be exploited but I did the temporary fix out of caution.

I wasn't going to bother with Proxmox 9 but after a GUI update I was informed version 8 "end of life" was August 2026. That is soon! I followed the official upgrade guide on my Mini-ITX server. Proxmox has a tool to check compatibility. I saw no red lights so I stopped all VMs, updated package sources to Trixie, and ran the upgrade. It is critical to run the compatibility check again before rebooting. I ran into the systemd-boot issue. Apparently if the stale systemd-boot package is not removed the system fails to boot. If my particular box fails to boot I'm in big trouble because I broke video output and have yet to fix it.

I have another Proxmox machine running virtualised OPNsense for my home router. I can't stop the OPNsense VM and upgrade the host to Proxmox 9 because the host would have no network access. I had two options:

Use my failover VM
YOLO it live

I specifically set up option 1 for such a purpose. I went with option 2. I figured any software running in memory is still alive until I reboot, right? I didn't question whether Proxmox would kill any processes itself (it didn't). The update was suspiciously fast. I ran the compatibility check again and saw a lot of yellow warnings. Yikes. Eventually I noticed I'd failed to update some sources to Trixie and I'd installed a franken-distro. After fixing my mistakes all I could do was reboot and pray for an agonising two minutes.

OPNsense is the only non-Debian operating system in my homelab. I manage it entirely via the web GUI. The 26.1 update had quite a few significant changes. My DHCP setup was considered "legacy" and my firewall rules required a manual migration. Despite dumbening my smart home my lightbulbs still demand a WiFi connection. I program them myself to avoid Home Assistant and proprietary apps. Turns out I hard-coded IP addresses (discovery protocols are a joke). Despite having dynamic IPs they remained stable until the OPNsense 26.1 DHCP update. I had no easy way to identify each light. Why would they name themselves anything useful? That's how I ended up unscrewing the bulbs one by one to see which MAC address fell off the network. I gave them static IPs on a VLAN for future me to appreciate. And with that, my home network is up to date!

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.


Tiny Visitor Counter

I created a tiny script for counting per-page visitors on your site. It's as simple as uploading the PHP file to your server and pointing an <img> tag to it. Leveraging the script as an image is an attempt to weed out bots (since they typically don't render images). Here's a live version of the script:

You can grab the script on my Codeberg. To set it up with Pure Blog, upload the PHP file and add the image-tag HTML to your page and post footer HTML under Settings->Site, adjusting the path to match wherever you uploaded the script.
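As a rough illustration of how such a script can work (hypothetical names throughout: counter.php and counts.json are made up, and the real script presumably draws the visitor count as an image rather than returning a blank pixel):

```php
<?php
// counter.php - illustrative per-page visitor counter served as an image.
// Hypothetical sketch, not the actual script: counts are kept in counts.json
// next to this file, and the page is identified via the HTTP referer.

$page = parse_url($_SERVER['HTTP_REFERER'] ?? '', PHP_URL_PATH) ?: 'unknown';

$file = __DIR__ . '/counts.json';
$counts = is_file($file) ? (json_decode(file_get_contents($file), true) ?: []) : [];
$counts[$page] = ($counts[$page] ?? 0) + 1;
file_put_contents($file, json_encode($counts), LOCK_EX);

// Serve a 1x1 transparent GIF so the script can sit behind an <img> tag;
// disable caching so every page view actually reaches the script.
header('Content-Type: image/gif');
header('Cache-Control: no-store');
echo base64_decode('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7');
```

The footer HTML would then be something along the lines of <img src="/counter.php" alt="" width="1" height="1">, again assuming the script sits at the site root.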


Premium: AI's Circular Psychosis

In this week's free newsletter, I explained how bad the circular AI economy is in the simplest-possible terms. In other words, the entire AI economy effectively comes down to Anthropic and OpenAI, who take up at least 70% of Amazon's, Google's, and Microsoft's compute capacity, 70% to 80% of their AI revenues and 50% of their entire revenue backlog, per The Information. That's $748 billion of the entire revenue backlog — not just AI compute — that's dependent on Anthropic and OpenAI, two companies that cannot afford to pay these bills without constant venture capital infusions from either investors or the hyperscalers themselves.

This is a big problem, because Anthropic seems to be losing so much money that it had to raise $10 billion from Google, $5 billion from Amazon, and is reportedly trying to raise another $50 billion from investors, less than three months after it raised $30 billion on February 12, 2026, which was five months after it raised $13 billion in September 2025. That's $58 billion in eight months, with the potential to reach $108 billion.

Now Anthropic is taking over all 300MW of SpaceX/xAI/Elon Musk's Colossus-1 data center, which will likely cost somewhere in the region of $2.5 billion to $3.5 billion a year, given that most of the data center is made up of H100 and H200 GPUs (with around 30,000 GB200 GPUs). I also don't think people realize how bad a sign this is for the larger AI economy. Musk built the 300MW Colossus-1 to be "the most powerful AI training system in the world," specifically saying that it was built "for training Grok," with inference handled through Oracle (which originally earmarked Abilene for Musk but didn't move fast enough for him) and other cloud providers. xAI, as one of the largest non-big-two providers, had so little need for AI capacity that it was able to hand off the entirety of its self-built data center capacity to Anthropic.

If xAI doesn't need 300MW of compute capacity that it spent at least $4 billion to build, who, exactly, are the other large customers for AI compute? I'm not even being facetious. I truly don't know, I can't find them, I spent most of last week looking for them, and the only answer I had a week ago was "Elon Musk buying a lot of compute for xAI to make the freaks on the Grok Subreddit able to generate pornography."

xAI is also the only non-OpenAI/Anthropic AI lab that's built its own capacity, capacity it clearly didn't need, which raises the question of why Musk needs however much capacity he'll build at Colossus-2. Musk claims that xAI had moved all training to Colossus-2, but also that xAI would "provide compute to AI companies that are taking the right steps to ensure it is good for humanity." This apparently includes Anthropic, which Musk called "misanthropic and evil" a little over two months ago. Researchers believe that the actual capacity of Colossus-2 is 350MW. At $2.5bn a year or so, Anthropic will be effectively the entirety of xAI's revenue, which was at around $107 million in the third quarter of 2025.

To put this very, very simply: xAI should, in theory, have massive demand for AI compute, but its demand is apparently so small that it can flog a multi-billion-dollar data center to a competitor.

Sightline Climate found that 15.2GW of capacity is under construction and due to be completed by the end of 2027, and at this point I'm not sure anybody can make a compelling argument as to why it's being built or who it's for. Who needs it? Who are the customers?
Who is buying AI compute at such a scale that it would warrant so much construction? Where is the demand coming from if it's not OpenAI and Anthropic? These questions shouldn't be that hard to answer, but trust me, I've tried and cannot find a GPU compute customer larger than $100 million a year, and honestly, that customer was xAI.

Through many hours of research, I've found that the vast majority — as much as 95% — of all compute demand comes from a few places:

Meta, for reasons that defy logic.
Microsoft, for OpenAI's compute.
Google, for Anthropic's compute.
Amazon, for Anthropic.

Otherwise, every data center deal you've ever read about is for a theoretical future customer or an unnamed "anchor tenant" that gives them "guaranteed, pre-committed occupancy" without being identified in any way. Yet even that "pre-committed" language seems to be something of a myth, which I've chased down to a report from real estate firm JLL, which says that 92% of capacity currently under construction is precommitted through binding lease agreements or owner-occupied development. CBRE said it was 74.3% for the first half of 2025, and Cushman & Wakefield said it was 89%, though it also said that there was 25.3GW of capacity under construction, while Sightline sees 19.8GW under construction through the end of 2030. And man, I cannot express how fucking difficult it is to find actual data center customers outside of the ones I've named above. In fact, it's pretty difficult to find any customers for GPU compute not named Anthropic, OpenAI, Microsoft, Google, Meta or Amazon.

Outside of OpenAI and Anthropic, effectively no AI software makes more than a few hundred million dollars a year, and to make that money, they have to spend it on tokens generated by models run by one of those two companies. When those companies generate those tokens, they then flow to one of a few infrastructure providers — I'll get to the breakdown shortly — to rent out GPUs.

As I've discussed this week, at least 75% of Microsoft, Google and Amazon's AI revenues come from OpenAI or Anthropic, and that's before you count the money that Microsoft, Google and Amazon make reselling models from both companies. To get specific, The Information reports that Anthropic will pay around $1.6 billion to Amazon for reselling its models. OpenAI, per my own reporting, sent Microsoft $659 million as part of its revenue share. AI startups — all of whom are terribly unprofitable — predominantly spend their funding on models sold by OpenAI and Anthropic. Per Newcomer, as of August last year, Cursor was spending 100% of its revenue on Anthropic. Harvey, an AI tool for lawyers, raised $960 million between February 2025 and March 2026, with most of those costs flowing to Anthropic and OpenAI.

Effectively every AI startup is a feeder for API revenue for Anthropic or OpenAI, and as a result, almost every dollar of AI revenue flows to either Google, Microsoft or Amazon. As Anthropic and OpenAI are extremely unprofitable, Google, Microsoft and Amazon then take that money and re-invest it in OpenAI and Anthropic, as Google, Amazon and Microsoft have all done in the past few years.

At the beginning of the bubble, all three companies believed that OpenAI and Anthropic were golden geese that were, through the startups they inspired and powered, laying golden eggs that necessitated expanding their operations, leading them to spunk hundreds of billions of dollars in capex, with Amazon building the massive Project Rainier in Indiana for Anthropic and Microsoft the Atlanta and Wisconsin-based Fairwater data centers for OpenAI.
They likely also thought their own services would grow fast enough to warrant the expansion, or that other large GPU consumers would rear their heads. That never happened. Instead, OpenAI grew bigger and more demanding of Microsoft's compute capacity, leading to Microsoft allowing it to seek other partners, in part (per The Information) because some executives believed OpenAI would die.

By November 2025, OpenAI had signed a $300 billion deal with Oracle, a $22 billion deal with CoreWeave, a $38 billion deal with Amazon, and a theoretical deal with both AMD and NVIDIA. Yet by this point, Microsoft realized it was in a bind, with the majority — at least 70%, if not more — of its AI revenues dependent on OpenAI, but it had already walked away from 2GW of data center capacity to reduce its capex costs. It had also, as part of OpenAI's conversion to a for-profit company, convinced it to spend $250 billion in incremental revenue on Azure.

So Microsoft chose to start spreading out that capacity to neoclouds like Nebius and Nscale, effectively bankrolling their entire futures based on theoretical revenue from OpenAI, a company that plans to burn $852 billion in the next four years and cannot afford to pay any of its bills without continual subsidies. These companies were now part of a multi-threaded dependency that ultimately ended up at one place: OpenAI, which also makes up the vast majority of inference chip maker Cerebras' revenue with its 3-year, $20 billion deal.

Meanwhile, Amazon and Google thought they had it made. Anthropic was growing, and its compute demands were reasonable enough that neither had to stretch themselves too thin…until the second quarter of 2025, when Anthropic's accelerated growth led it to start pushing against the limits of Google and Amazon's capacity.

So Google agreed to backstop several billion dollars behind two deals with Fluidstack, a brand new AI compute company, and Amazon continued expanding its Project Rainier data center.

Yet Anthropic's hunger wasn't sated. After mocking OpenAI in February 2026 for "YOLOing" into compute deals (and having signed a cloud deal with Microsoft), it massively expanded its AWS and Google Cloud deals, signed a deal with CoreWeave, and as I discussed above, took over the entirety of Musk's Colossus-1 data center.

And all of this is only happening because, based on my analysis, very little actual demand for AI compute exists outside of OpenAI and Anthropic, and OpenAI and Anthropic only exist because of Microsoft, Google, and Amazon both building and expanding their infrastructure to cater to them.

In reality, OpenAI and Anthropic are the only meaningful companies in the AI industry. They are the majority of revenue, the majority of capacity and the majority of demand. Microsoft, Google and Amazon have exploited the desperation in a tech industry that's run out of hypergrowth ideas, and created a near-imaginary industry by propping up both companies.

The mistake that most make in measuring the circularity of OpenAI and Anthropic is to focus entirely on the money raised — $13 billion from Microsoft and up to $50 billion from Amazon for OpenAI, and as much as $80 billion from Amazon and Google for Anthropic. The correct analysis starts with measuring infrastructure.
Based on discussions with sources and analysis of multiple years of reporting, I estimate that of the roughly $700 billion in capex spent by Google, Meta and Microsoft since 2023, at least 5.5GW of capacity costing at least $300 billion has been built entirely for two companies. This has in turn inflated sales through multiple counterparties involving NVIDIA, ODMs like Quanta, Foxconn, Supermicro and Dell, and created a form of market-driven AI psychosis that inspired Meta to burn over $158 billion in three years and the entire world to convince itself that AI was the biggest thing ever.

The reason that there isn't another OpenAI or Anthropic is that Google, Microsoft, and Amazon bankrolled their entire infrastructure, fed them billions of dollars, and then charged them discount rates for their early compute, with sources telling me that Anthropic pays vastly below-market rates for Trainium compute from Amazon, and The Information reporting that OpenAI was paying $1.30 per A100 per hour in 2024, at or around the cost of running them.

By sacrificing their entire infrastructure to OpenAI and Anthropic, the hyperscalers created the illusion of demand by feeding themselves money, all while buying endless GPUs and TPUs to fill further data centers for two customers, both of whom paid discount rates that lost them money.

This capex bacchanalia gave all three companies a massive boost to their stock prices, so they kept going, even though there wasn't really demand other than for Anthropic or OpenAI, two companies that they had to constantly cater to with investment capital and server maintenance. The belief became that all you had to do was plan to build a data center and you'd print money, boosting NVIDIA's sales and associated counterparties in memory stocks like Sandisk. Except that never happened.

Every data center provider that doesn't have an Anthropic, OpenAI, or Meta-related contract makes pathetic amounts of revenue that can barely keep up with their debt. AI startups make meager revenues, and lose multitudes more than they can ever hope to make.

The entire AI industry relies upon two companies that expect to burn at least $1 trillion in the next four years, with Anthropic, the supposed "compute-conscious" AI company, committing to at least $330 billion in spend in the next few years. Where does that money come from, exactly? Because neither of these companies has anything approaching a path to profitability.

Based on a deep analysis of every publicly-available source on AI compute, I can find only two significant — over $100 million a year — purchasers of AI compute outside of Anthropic, OpenAI, Meta, or associated parties like NVIDIA, Microsoft, Google and Amazon. Those two are Poolside, which reportedly spends $400 million a year, an untenable position as it only raised $500 million in total funding before its $2 billion in funding collapsed earlier this year, and Perplexity, which appears to spend some amount of money with CoreWeave and Microsoft Azure. Both run at a massive loss.

Nowhere is this lack of true demand more obvious than in the neoclouds, which only seem capable of signing big deals with Anthropic, OpenAI, Microsoft (for OpenAI), and Google (for OpenAI). Oh, and Meta, who is doing this because the existence of ChatGPT gave Mark Zuckerberg such profound AI psychosis that he's made Meta build him a CEO chatbot to talk to and burned over $150 billion.
The AI industry is a brittle, circular economy, one only made possible by a lack of financial regulation and a tech industry that's run out of ideas. Without hyperscalers propping up OpenAI and Anthropic, there would be no reason to buy so many GPUs or build so many data centers, and neoclouds would have no reason to exist. This is a giant con, a giant illusion, and a giant mistake.

90%+ of all AI revenues flow through Anthropic and OpenAI.

90%+ of all AI compute demand comes from Anthropic, OpenAI, Meta, or associated counterparties like Google and Amazon buying compute for Anthropic or OpenAI.

The vast majority of AI operations don't require more than a few hundred to a thousand GPUs for inference, and at most 20,000 GPUs for training models. This means that for the 15.2GW of data centers under construction before 2027 ($157 billion in annual revenue) to make sense, thousands of companies will have to rent hundreds or thousands of GPUs. This also means that the DeepSeek problem — the reason that everybody freaked out in January 2025 — is actually industry-wide.

More than 50% of Microsoft, Google, Amazon, CoreWeave, and Oracle's entire revenue backlogs are from OpenAI and Anthropic.

Neoclouds are unsustainable, imaginary businesses only made possible by continual subsidies from NVIDIA and the compute demands of OpenAI, Anthropic and Meta.

Outside of Anthropic and OpenAI, only around $13 billion in AI compute demand exists, with much of it taken up by Meta and NVIDIA backstopping neoclouds like CoreWeave and IREN.

ODMs like Supermicro, Dell, Quanta and Foxconn are largely dependent on AI server revenues that largely flow through OpenAI and Anthropic's counterparties to fuel their server demand.

iDiallo Today

Hi stranger

I'm at home, sitting at the kitchen table. I just took my boys to school and I'm about to start my work. I'm writing this message directly to you. And you are reading it. Hello! Isn't that funny? I've been trying to write consistently, and it gives the impression that I am this serious person with some serious insights. But no, I'm just writing. Sometimes you respond, you send a nice email; other times it's complete silence. It ends up being like an entry in a journal, for me to stumble upon at a later date and reflect: "Oh yeah, that's what I was thinking that day." My job is a 2 hour drive away, so I rent an office close by. There I can focus and clearly delineate work time from home time. I don't like working when I'm home with my family. So I have some time to talk to you.

Last year, I spent some time digging through my server logs to find out who is reading me. I wanted to know who you are, and why you are interested in reading me. But I can't get an answer from just reading the logs. Instead, what I found is that you and most other people come here via RSS. My rough count shows that there are 10,000 of you, or at least 10,000 unique IP addresses that ping the websites whenever I write something new. There are around 2,000 people subscribed via popular RSS readers like Feedbin or Feedly. 1,500 of you also subscribe via email, which I have neglected this year. It's weird because this data is invisible most of the time. I forget that when I write something, anything, the odds are that someone will find it intriguing. In fact, when I look deeper into the logs, I see people are referred by other blogs I had never heard about. And they mention me by name: "and then Ibrahim said this or that." It feels so personal.

I often forget that this is all so human. That what we call the small web is people not just writing, but telling us something. When I have an insight, or read something interesting, I'm telling you about it. Not directly, but in an asynchronous way. You get to know or read about it on your own terms. The small web has never died; it feels like it did at some point because it has remained small. But I don't think I want it to become any bigger, or any louder. It's right where it's supposed to be.

I'm breaking the 4th wall today just to say hi. How are you? I hope you are doing well. The world is weird sometimes, but you are not invisible. I see you. I hope you are having a good day.

Unsung Today

“There seems to be a file that is just filled with undecipherable Morse.”

On April Fools' Day 2021, the popular xkcd comic ran Checkbox, which was a Morse code puzzle in disguise. (It's interesting to see the community trying to figure out what it actually does.) Engineer Max Goodhart built the front-end and wrote a summary of the whole project:

This year was a doozy. We specced and scrapped several different ideas in the months leading up to today. We finally settled on today's concept just 3 days ago.

The need to do something simple was a really useful constraint, and we leaned into the idea of making something primitive but deep. The team seems to have had a lot of fun with it, including even JavaScript being encoded in Morse code (the link in the blog post no longer works, but you can still see it on the Internet Archive). Goodhart also wrote about the immense challenge of adjusting the Morse tapping speed to the user, which counterintuitively ended up needing… adjusting the user to the speed. But the best part is that the server communications used Morse code in the URLs, as well:

We took great pains to make the API for this project use morse code in the transport. If you take a look at the network inspector, you'll notice that the URLs requested have morse code in them. This worked for every combination of letters imaginable, with two oddly specific exceptions: a solitary E, and a solitary I.

In Morse, E is a single dot and I is two dots, so those two URLs ended in the path segments "." and "..", exactly the dot-segments that URL normalization removes. I liked this description of what transpired next, which would have made me think I was going insane, too:

Then, an even stranger thing happened. I copied and pasted the correct URL into my browser and pressed Enter, and right before my eyes, it deleted the "." from the end of the URL and returned a different result.

I was delighted to discover an answer here, not only because in retrospect it's such an obvious thing that was staring us all in the face for decades, but also because it has interesting URL construction consequences.

#bugs #encoding #web


Automated Capitalism

Woke up to this email in my inbox. At first I thought "ugh, a sales pitch", but then I saw the line at the bottom:

This company runs autonomously · polsia.com

This led me to visit Polsia. It's an entire platform for doing the minimal amount of work to try and sell slop to people. It vibe codes, spams people, and provides "customer support" with just the help of your credit card. Is this seriously the future? Cause I don't want it.


PipeDream on the Acorn Archimedes

During the "throw everything at the wall and see what sticks" years of home computing, up to around 1995, a lot was thrown and a lot failed to stick. Sometimes clumps would form that appeared to have the combined friction necessary to maintain wall grip, each holding the other up. But, like Mitch Hedberg's observation of belts and belt loops, it was difficult to discern who was helping who stick to what. Take for example, our focus today. We have a completely novel CPU, built by a tiny team of engineers who had never designed a processor before, running a bespoke operating system squeezed out in a rush to meet the shipping deadline of a computer that wanted to carry on the legacy of a system beloved by British schoolchildren, hosting a productivity suite that completely rethought what the term "productivity suite" even meant. Together, they formed a complete computing dead-end. Yet separately, they each achieved life beyond expectations, given their shaky beginnings. Let's start with the hardware, Acorn Computer Ltd.'s follow-up to the famous 8-bit BBC Micro, the Archimedes. Feeling the 16-bit processors of the day didn't deliver enough bang-for-the-quid, they began an investigation into 32-bit processor options. After reading a U.C. Berkeley paper extolling the virtues of the RISC architecture, and seeing firsthand the ease with which chips could be designed, in 1983 Acorn launched the Acorn RISC Machine project to develop the 32-bit brain of their next system. The fruit of that labor, the ARM processor, defined the Archimedes line. Try as they might, Acorn could never crack the home market the way they did education. Still, those ARM CPUs had longevity well beyond the life of the company that commissioned it. Your smartphone likely has ARM in it right now, and Apple's entire current hardware ecosystem is built on its spec. That powerful hardware needed a preemptive multitasking operating system that befit its computing prowess. That was to be ARX , whose troubled development missed the product launch window. In the meantime, so the computer could have something driving it at launch, a stop-gap operating system called Arthur was shipped. It was similar to Acorn's previous BBC Micro MOS (Machine Operating System), with a graphical layer grafted on top; hit F12 and that text interface will peek out from behind the curtain. Over time it was decided that Arthur was doing a bang-up job and ARX was cancelled. Thus was born RISC OS, a cooperative multitasking WIMP (windows, icons, menu, pointer) with possibly the first application "dock" on a home computer. Its mandatory three-button mouse summons an application's current context menu at the pointer location; there are no menu bars whatsoever. Drag-and-drop is embraced as a central file management metaphor, even to save documents. On top of all that, it was the first to offer scalable, anti-aliased font rendering, even if its fonts were a little "off brand." On top of this unique foundation, we have PipeDream . Developer Mark Colton was convinced that the boundaries between word processor, spreadsheet, and database were artificial and could be eliminated. A document should be able to do any of those functions at any time, anywhere on the page, he posited. One might think, "Oh, like Google Sheets ." but PipeDream handles word processing more elegantly. Another might think, "Oh, like Apple Pages " but the spreadsheet and database functions are more robust in PipeDream . 
This particular balance of the three productivity functions feels unique amongst even its modern peers. Does a productivity suite work better when it's just a single app? Did Colton successfully execute his vision? And where is the Homerton documentary we deserve?

(I didn't know the Ghost blogging platform forces images to 2000px max; I've revised my design workflow to mitigate this in the future. To make amends for this timeline's illegibility at 2000px, please accept this PDF version)

Testing Rig
RPCEmu v371 on Windows 11
RISC OS v3.7
1024 x 768, 15-bit color
64MB RAM
PipeDream v4.13

Let's Get to Work

My process when first examining an unfamiliar system is to establish a basic productivity loop I can trust. I do that across a variety of emulators to see which gives me the least grief. I usually try to give it a go without research, to see how far I can get on pure skillz (with a Z). It's unusual to sit down at what appears to be a computer I understand and be baffled every step of the way. I've heard this system described as "elegant" and "easy to learn." This has me questioning if maybe I'm actually a very dumb person, because my impression is "uncomfortable."

You know that modern horror story, aka "creepypasta", The Backrooms? It's a hidden world that co-exists with our own, which can be entered only by clipping through a seam of reality which separates the two. In there, buzzing fluorescents light an infinite maze of featureless, yellow-wallpapered office-style floor layouts. If one were to find a running computer there, I suspect RISC OS would drive it. It's just common enough in its GUI metaphors to feel familiar, and just off-kilter enough to turn that familiarity against you. Liam Proven wrote in The Register, "You will find it very disorienting, especially if all you know is post-1990s OSes." My dude, I've been computing since the 1970s and I find it disorienting. Nothing is unlearnable (I'm dumb, not incompetent), but I genuinely had to work through its manual to acclimate myself. To be clear, I enjoyed the thrill of venturing into the unknown. After all, one of the goals of this blog is to investigate the less-trodden paths in software history. Still, there are times when I feel RISC OS is "having me on." (trying to ingratiate myself with British readers in today's post)

I'll start with the three-button mouse. From left to right the buttons are "Select", "Menu", and "Adjust." After weeks working with the system, I still can't figure out what problem the "Adjust" button solves. It's semi-analogous to Ctrl-clicking on modern systems, as when clicking to add/remove elements to/from a set of selected items. Then, sometimes it does something unexpected like "drag a window by its title bar without bringing that window to the front." Other times it is baffling. Dragging a file icon to a new folder location doesn't move the file to the new location. It copies the file. If you want to move the file, you must Shift-drag. Why are we "SHIFT" dragging anything when we have a perfectly good "Adjust" button? Sometimes the "Adjust" button does "opposite" actions. Click a "down" scroll arrow with "Adjust" and it will scroll up instead. Is that an "adjustment?" What does it even mean, to "Adjust" a mouse click? It seems like it could mean anything, and that's kind of my point. It's unguessable and unintuitive.

An interesting UI element (which predates NeXT and Windows 95) is the Icon Tray, an important tool inexplicably not described at all in the RISC OS 3 manual.
Situated along the bottom of the screen, currently running applications and directory icons sit on a little shelf. Double-click "Select" on an application icon to launch it and... nothing. Its icon displays in the Icon Tray, and that's it. We must now single-click "Select" on that icon to actually bring the application to the forefront and activate it. I don't know what that's all about, but that's how it works.

Menus are fascinating in both the positive and negative meanings of the word. There are no menus on screen whatsoever; they are only made visible by the middle "Menu" mouse button. "Menu" clicking opens a given menu at the current mouse pointer location. Icons in the Icon Tray can be "Menu" clicked to get application-level menus, like "Make a new document." Within a document, "Menu" click will give us document-level options. Conceptually, I like the "Menu" button a lot. Within a menu, any choices which open dialog boxes or control panels tend to open in-menu. It's kind of cool, being able to type, or flip switches and radio buttons, directly inside the menu itself, rather than popping up a modal window. However, it is jarring to have large panels suddenly lunge out like a xenomorph's inner jaws when scrolling through menus. These can obscure the root menu, depending on screen position.

The last point to get our collective heads around is file saving. When saving a new document, simply typing in a file name is not sufficient. Save dialog boxes expect and require the full path to your save destination; no assumptions or default folder locations are provided. You can manually type in the full path to your desired save location, something like ADFS::HardDisc4.$.Documents.MyFile. While you type, the system will not assist you in navigating the directory structure; no autocompletion here. You must know the path by heart. The other option, as described in manuals, is to drag-and-drop your document to its save location. Drag-and-drop really seems to be the RISC OS idiomatic way to manipulate files. In a Save dialog box there is a little icon for the application. It looks like decoration, but it physically represents your document. Type a name into the text field, then drag that icon to your desired save folder.

I don't want to get bogged down enumerating RISC OS's idiosyncrasies, but a few more things need mentioning. There is a kind of "programmer's art" ugliness to the user interface; those folder icons are terrible. There are graphical glitches, as when scrolling a window too quickly (though moving windows around shows full contents, which wasn't typical during that period). Everything you set up to customize the system, like desktop icons, window positions, desktop resolution, and other settings, is reset every boot unless you manually tell the system to save the current state as the "boot file." The list goes on like that.

Sheesh, what a journey just to understand the basics. I expect that kind of learning curve for text-based systems, as those DOS-like commands are unknown to me. For a GUI system to throw this "spanner in the works" (continuing my pandering) is unexpected, but a fun challenge. I can't feel myself growing to love it, but the initial feeling of discombobulation is receding.

A spreadsheet is an ordered matrix of cells, each of which can hold text or math. Cells with text are typically used as labels for columns and rows of numbers, and the math cells do the work of calculating relationships between those numbers. It's all very simple. No, wait, I mean it's "easy-peasy."
(commitment to the bit)

Lotus 1-2-3 felt "columns and rows" could also be useful for textual data. They said the line between spreadsheets and databases is pretty fuzzy, and even today spreadsheets are used to hold and manipulate simple databases. Then racecar driver Mark Colton pierced the veil entirely. It wasn't just spreadsheets and databases that had a fuzzy separation. If we can type arbitrary text into a cell in a spreadsheet, why couldn't we type an entire book? What if all applications were really just one application, in the end?

He fired his first shot at uniting everything in View Professional. This was released as PipeDream on the Cambridge Z88, a portable Z80 machine by Sir Clive Sinclair's Cambridge Computer. Built into the ROM itself, it was insta-boot, insta-launch right into a multi-purpose integrated document suite. Jerry Pournelle, in BYTE Magazine's February 1989 issue, was moderately enamored with the hardware, but PipeDream was "disappointingly hard to use." With Acorn evolving their BBC Micro via the Archimedes, Colton continued to support their hardware line. In interviews, he seemed to really be leaning toward Windows for the future of his company. However, since he switched development to C and there was a C compiler for the Archimedes, he said it wasn't hard to provide his product to the Acorn crowd.

On Arthur, the precursor to RISC OS, PipeDream embraced and extended the "one document, many forms" approach. Much like today's Google Sheets, we can add arbitrarily long sections of text, insert images, set up database information, perform spreadsheet calculations, run spellcheck, and generate inline graphs. However, try typing a chapter of a book into Google Sheets if you want to drive yourself "mental." (there's no stopping me) In PipeDream, that's frictionless (within a certain definition of "friction"). Like RISC OS itself, PipeDream also requires certain shifts in thinking to not lose a finger to its sharp edges. I suppose that when a developer offers a truly new paradigm, it is fair to ask users to meet it halfway. I'm not convinced the advertising (see "Historical Record" at the end) gave customers a full understanding of how drastic that shift was.

"Menu" click the Icon Tray icon (i.e. the application-level menu) for PipeDream to start up a new "Text" file and begin typing into cell A1. You'll find that text overflows, across cell boundaries, until it hits the "row wrap marker" seen in the rightmost column header (shown as a "down arrow" icon). Every line of text is its own row, in spreadsheet terms. As you type, PipeDream fills the current row, then silently inserts a new row to catch overflow. Until a paragraph break, these rows are internally associated as a logical unit. Edits which alter or disrupt text flow across rows within a paragraph are not reflected immediately in the UI. Or maybe they are? It's hard to tell with the graphic glitches in the screen redraw, a constant source of frustration while working on this article.

PipeDream concedes the reflow point itself. When in doubt about the current visual structure of your text, a manual reformat action will force PipeDream to recalculate text wrapping and line spacing. This can be mitigated a bit through a hidden toggle in the "Options" screen, the confusingly named "Insert on Return." This reduces the need to force a manual reflow, but can still leave visual chaos.

I've altered the text flow and initiated a recalculation of the lines.
Have you ever wanted a word processor that won't indent paragraphs? PipeDream being a chimera, navigation idioms are forced to choose which parent they love most. An examination of the TAB key demonstrates this. In a word processor, we usually have a horizontal page ruler with tab stops. Tab over to a tab stop and type to align text at that indentation point on the page. In a spreadsheet, TAB navigates us to the next cell to the right. In PipeDream, the spreadsheet idiom wins TAB's love. In a text cell, TAB sets an invisible indicator at paragraph start which forces every subsequent line of that paragraph to begin at that same column. For example, by default every line of text is added to column A, the leftmost. If we TAB to column B, the text will start there, but when it wraps to the next line, that line will also begin in column B. "Indentation" is at the paragraph level, not the line level. How do we indent the first line of a paragraph? The manual has a solution.

In looking back through the history of Colton's software on the Acorn line, I found this note in a review of View 2.1, his standalone word processor for the BBC Micro. "Why is there no numerical information on the rulers or cursors to assist formatting?" asked Acorn User, January 1985. It seems Colton had it in for rulers for a decade, and to my thinking this points to a disconnect between what a programmer thinks users need, versus what users actually need. A stubborn rejection of norms doesn't always mean we're on the right track.

We can use the cell-based layout engine of the program to pull off a fun party trick. Under "Options" there is a toggle between Row and Column text wrap. "Row" behaves like a typical word processor. "Column" lets us divide the page into columns, like a newspaper. Tab between columns and the column width will be respected by the word wrap. Kind of cool, and could be useful in a "I need to make a newsletter, stat!" pinch. Like a spreadsheet, column widths are document-wide, so no mix-and-match. Someone very clever with the tools could probably coax complex layouts out of it, but that would require an ungodly amount of pre-planning, design, and patience before starting a document. You really have to try to get it right the first time, because I don't find PipeDream particularly adept at handling large structural changes after the fact.

The column-based formatting gets frustrating, but in other ways the word processing is "bog-standard." (How many will I squeeze in? Place your bets!) We have a built-in spell check, user-definable dictionary, word count, text alignment, font choices, and an anagram/subgram maker. Bank Street Writer Plus had an anagram maker as well. Why was that such a thing back then? Have I forgotten some fad of the 80s and 90s? That's all fine and dandy, but I'll tell you what isn't: there's no simple cut/copy/paste, at least not as a modern audience may understand those tools. In the document, we are restricted to cell-level selection, meaning I can't select individual words inside cell A1.
I can only select the entire cell A1, which in PipeDream means an entire line of text. We can ask PipeDream to edit a cell in its own window, where it pops out for surgical editing. "Edit Formula in Window" hijacks the spreadsheet formula editor in order to get character-level selection control. In this pop-out window, we can highlight individual words and do typical cut/copy/paste actions. Notice, though, we're still restricted to only the text within the cell, which means only that line (row) of text. It's highly likely any given row will contain the tail end of the previous sentence and the first part of the next sentence. If we want to cut out a specific sentence which doesn't align neatly to the row structure, there is no way to do so. I will repeat that. There is no way to cut/copy/paste an arbitrary string of characters. Now I feel PipeDream's vision working against itself for anything but simple correspondence. Remember, this is version 4 of PipeDream, Colton's fifth software release to pursue this unified application dream, and this is where we're at. I can't imagine writing anything substantial within these frustrating limitations.

As a spreadsheet, PipeDream performs far more admirably, even if certain conventions have been eschewed in favor of its new vision. Hey, if you're gonna quirk it up, might as well go for broke. Unlike its spreadsheet ancestors, there is no "/" menu, nor is there a simple way to tell PipeDream that we want to enter a formula into a cell, as with Lotus's "@" to denote a function call, or "+" to indicate we want to do math. Many of Lotus 1-2-3's innovations have been utterly ignored. The global "Options" allows us to set default behavior for cell entry. Setting it to "numbers" will put us into the right context for easy formula entry, or we can click into the ever-present formula entry line at the top of the window. Turn on the "Grid" overlay to draw cell boundaries and before you know it what was a word processing document is now a spreadsheet with "the full Monty." (TIL it doesn't mean "full-frontal nudity")

The functions available to number crunchers are plentiful and robust. Trigonometric functions are a given, but its inclusion of matrix math may come as a surprise. Even complex functions, like one which computes the complex hyperbolic arc cosecant of a complex number, are present and accounted for, so hardcore math nerds can breathe a sigh of relief. A wide number of financial functions, statistical functions, lookup tables, string manipulations, and date handling are all here. So too are flow control tools, like conditionals and loops. There are even GUI controls available for showing error dialog boxes and prompts for user input, though those are only available from within custom functions.

Yes, if you're missing a function, you can make your own. In a new worksheet, start a formula with a function definition (which can accept typed parameters) and end it with a result statement. In between, do the work. PipeDream will check syntax and accept or reject each line of your function. If accepted, it will prefix the line with a marker. In your real working worksheet, access the formula by name, qualified with a reference to the worksheet file that defines it. That file reference implies PipeDream can access data from other worksheets, and that is true. Even a cell reference in a formula can be pulled from a completely different worksheet. I find the syntax for custom functions opaque, and the manual does a poor job of explaining what is possible and how to use the tool.
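To sketch the shape of that idea without PipeDream's (opaque, as noted) notation: definitions live in their own worksheet, and working sheets call them through a file-qualified name. Here's a conceptual Python stand-in of my own; the colon-qualified reference and all names are invented for illustration, not PipeDream syntax.

```python
# Hypothetical analogue: custom functions are defined once, in a sheet of
# their own, and invoked from elsewhere by a sheet-qualified name.
function_sheets = {
    "defs": {
        # "function definition ... result statement" collapsed into a lambda:
        "with_tax": lambda price, rate=0.08: round(price * (1 + rate), 2),
    }
}

def call(qualified_name, *args):
    sheet, name = qualified_name.split(":")   # e.g. "defs:with_tax"
    return function_sheets[sheet][name](*args)

print(call("defs:with_tax", 19.99))  # 21.59
```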
There are a handful of examples provided with the software installation, with bugs, that reveal secrets only upon very close inspection. For example, notice in the screenshot above that the parameters to the function are later referenced with a prefix, but local variables, as set within the function, are not prefixed when used in calculations. It's those subtle little things that tripped me up. The same with the special name the return value must be assigned to. Or how the program has a selection of "Strings" functions, but when passing a string as a parameter its type is "Text." I stared at that syntax for a LONG TIME before finally realizing my various little misunderstandings.

Customization doesn't stop there. Individual keys can be defined as shortcuts to longer string sequences, F-Keys (plain and modified) can be defined to trigger commands, and command sequences (triggered by the CTRL key) can be redefined to your liking (which risks overwriting built-in command shortcuts). You really can make PipeDream your own, though you're in for a struggle compared to Lotus 1-2-3 and the thousands of books available to help learn its principles. I found no actual books for PipeDream, just publishing announcements in old magazines. Something must exist, but the internet at large appears bereft.

On the scorecard of "this amalgamation approach to productivity software is working," I'd say we're 1 and 1. The spreadsheet tools are fiddly, but robust. The word processing has me very underwhelmed. Time for the tie-breaker: databases. Using the supplied Lotus 1-2-3 conversion tool, I was able to bring in the data I originally created in CP/M dBASE II and had subsequently converted to DOS Lotus 1-2-3. Now it lives on in RISC OS PipeDream. This data has more passport stamps than Indiana Jones.

Let's consider some of the basic things one might want to do with data. PipeDream beats out Lotus in sorting, giving us a five-stage, multi-row sort with ascension. Not too shabby for the time, all things considered. Search and replace does what it "says on the tin" (in for a penny, in for a pound), and can also accept regex-like tokens and patterns. More interestingly, cells can be set up to directly perform queries on table data. There are a small handful of prefixed database functions to calculate averages, min/max, counts of things, and more. One last feature of note is how to use the query tools to extract a result into a new database. This is interesting as it utilizes RISC OS's drag-and-drop Save functionality in a clever way. Note how the query for data extraction is much longer than the tiny little text field in the contextual menu can handle elegantly. This is one of those usability tradeoffs for the RISC OS way of doing things.

I was initially ready to write off the database functionality as being underwhelming, until I reminded myself of the stated goal for PipeDream. Its core proposition is that there is no difference between the various aspects of the software. The word processor is the spreadsheet is the database. We're not limited to the "database" functions when manipulating our database data. We have access to everything the program has to offer, at all times. Let's clip through the inverted UV plane separating database and spreadsheet, and see what kind of trouble we can get into. I'm thinking back to the Lotus 1-2-3 article and how database information was queried there.
With a table of data, we had to use the built-in query forms, define areas on the sheet to hold query parameters, and designate another section of the sheet into which query results would display. It was an obtuse Rube Goldberg machine that I couldn't understand until I drew a diagram of the process. In PipeDream, we just write a formula, the same as if it were a spreadsheet. Let's get the average rating of all adventure games in the database published before 1985. "Bob's your uncle!" (I was hoping to work that one in) Let's mix it up a little and get the same average, but only for titles which begin with "Zork." We can use wildcards, but let's leverage PipeDream's word processing string tools. The most awesome part about this is that, like any spreadsheet formula, it updates in real time. Change the ratings, or add a new Zork game to the mix, and get the new average instantly. The database is the spreadsheet is the database, so that calculation can then be referenced as a value for another cell's formula, perhaps adding sales tax to the average unit price. While we're at it, might as well throw in some fancier text formatting to make it look pretty.

In the Lotus 1-2-3 investigation, I wanted a pie chart showing a breakdown by game categories. Lotus had a handy function which removed duplicates from lists, making it possible to extract the full list of unique game categories, which could then be used as the query parameters for generating a chart. PipeDream can't do that, but it does have other string parsing routines, variables, cross-file data referencing, and the ability to write custom functions and macros. I don't doubt it would be possible to homebrew a workaround to this missing function. In fact, let's "have a bash at it." (swish!) Note the real-time update of the chart as I modify an external database.

Ultimately, I couldn't achieve an elegant solution, but I could achieve my goal. I sorted the original data by genre, then created a column that checks if the genre for each row matches the one above it. If so, the cell gets one marker value; otherwise, the other. Then, I extracted all rows whose marker said "doesn't match." Last, I applied a counting function (count any items in a list), where the source list is contained in the original database document. With the documents thus linked, I get real-time graph updates when I alter the core database, thanks to external reference handling. Everything's "tickety-boo!" (I'm trusting The Independent on this one)

OK, PipeDream, you're winning me over a little more now. Time to take this to its logical conclusion. We haven't yet pushed it as the multi-purpose document creation tool it promises to be. We've done a little dabbling, with text formatting and data extraction, but I want to see everything come together. I want the borders to crumble. The approach I'm finding to be least troublesome is to begin with a "text" document, then decorate that with spreadsheet/database elements. As I scroll, text will disappear until I trigger a redraw event in the window. (pay no attention to the content of the letter)

In building that document, here's what I learned. We have a unique confluence of interesting technologies coming together to form a strangely flawed jewel. It sparkles and shines when the light hits it just right, and in those sparkles we may catch a fleeting glimpse of a world that might have been. Might have been, but wasn't.
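(For the curious, here's my reconstruction of that duplicate-elimination dance as a Python sketch, with made-up data standing in for the game database; this mirrors the logic of the steps, not PipeDream's formula syntax.)

```python
# Step-by-step analogue of the workaround: sort by genre, flag rows whose
# genre differs from the row above, extract the flagged rows, count them.
games = [
    {"title": "Zork I", "genre": "Adventure"},
    {"title": "Lode Runner", "genre": "Action"},
    {"title": "Zork II", "genre": "Adventure"},
    {"title": "Ultima IV", "genre": "RPG"},
]

rows = sorted(games, key=lambda g: g["genre"])            # 1. sort by genre
flags = [i == 0 or rows[i]["genre"] != rows[i - 1]["genre"]
         for i in range(len(rows))]                       # 2. flag first-of-genre rows
unique_genres = [r["genre"] for r, f in zip(rows, flags) if f]  # 3. extract
print(len(unique_genres), unique_genres)                  # 4. count -> 3 genres
```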
Let's see where each of the underlying technologies wound up; those in the know can feign shock with the rest of us when we learn that ARM isn't the only thing that survives to this day. We'll start with the obvious truth: ARM won. It's in everything, everywhere, all at once. If it isn't in your computer, it's in your phone, or your Newton, or your Palm Pilot, or your Canon camera, or your Nintendo DS, or your Nintendo 3DS, or your Nintendo Wii, or your Nintendo Switch, or your Nintendo Switch 2, or your Raspberry Pi, or maybe you're sidetalking on your N-Gage. Its combination of low power consumption with high performance makes it ideal for mobile devices, of which we are in abundance.

But why ARM specifically? Others have swung for the RISC fences and stumbled, yet Acorn set two engineers to the task of designing their first ever microprocessor and somehow achieved a ubiquity that has remained (mostly) unchallenged. Apple/IBM/Motorola gathered their forces and developed their own RISC architecture, which debuted in Apple's Power Macintosh 6100. PowerPC doesn't mean much to a Windows/Intel crowd, but the Mac faithful remember all too well Apple's investment in it as the successor to the Motorola 68K line. Frustrated by delays in the evolution of the chip line, Apple wound up ditching it for Intel x86, even if they eventually rediscovered the joys of RISC. PowerPC went on to be adopted by a number of game consoles, notably the Nintendo Wii, Xbox 360, and PS3 simultaneously. The line continues today, and heck, Mars rovers Curiosity and Perseverance both have PPC inside. Hard to call such a history a "failure," but who outside the hardcore Amiga faithful today is clamoring for a PowerPC chip?

The SPARC RISC architecture, of "Sun SPARC Workstation" fame, chugged along until as late as 2017, some years after Oracle purchased Sun. A notable achievement, in pop culture circles: this is the hardware Pixar's first Toy Story was rendered on. Though Oracle disbanded the design team keeping the architecture alive, the architecture itself is free and open source. There's nothing stopping an intrepid reader from carrying on the lineage, I suppose. Fujitsu, the last of the production line for the series, has abandoned SPARC for ARM.

I'll be honest, I can't figure out what ARM does so much better than other attempts, like SPARC, at making a great RISC processor. Reading through the Ars Technica story, it seems to be less about the underlying tech and more about the savvy promotional work of Robin Saxby and his absolute unwillingness to lose the RISC wars. Where others were building RISC for the server side, ARM committed themselves to the mobile side, skating to where the puck would be. Whatever the case, whatever the magic, ARM makes it available to anyone who wants it, through their licensing partnerships. Ultimately, this really seems to be what has given ARM its staying power: a low barrier to entry to quickly join in on high-performance, low-power-draw ARM fun. It's important to note that ARM doesn't make processors; they only license their IP. <<record_scratch.mp3>> OK, be that as it may, it is still substantially correct to say that IP licenses are their bread and butter. A "core license" allows a company to manufacture a specific ARM-designed CPU, a popular choice for system-on-a-chip designs. Alternatively, an "architectural license" permits a company to design and build its own custom CPU around the ARM instruction set. That's what Apple does with their A- and M-series chips.
In recent years, ARM has been feeling light competitive pressure from the RISC-V architecture. Born in the same UC Berkeley labs that birthed the original RISC design reports that inspired Acorn to take a chance on RISC, its architecture, unlike ARM's, is free and open source. Consumer-level devices running on RISC-V have already started shipping. A new race has begun.

Acorn's Archimedes line ultimately never sold particularly well. It's hard to nail down specific sales figures, but a 1991 Acorn shareholder report said, "Acorn is now the UK number one supplier of 32-bit RISC machines with an installed base of over 150,000 units." For context, the Amiga line had sold some 2 million units by 1991. We can't say Acorn didn't put in the effort, releasing some 13 model variations in under a decade. The general consensus seems to be that they "cost a bomb." (that's a new one on me) Schools adopted them, as a natural evolution of Acorn's prior BBC Micro installations, but at US$3,000 to $9,000 (in 2026 money) families just couldn't afford to put one in the home. In the mid-90s, Acorn dropped the Archimedes line, switching tracks to the more business-like Risc PC line, and produced a handful of systems around the StrongARM CPU. However, while the CPU spirit was willing, the motherboard flesh was weak, leaving the CPU underutilized. The lineup ended concurrently with the end of Acorn around 1998. Castle Technology tried to keep the Risc PC line going, post-Acorn, but called it quits shortly thereafter, in 2003.

Open-sourced in 2018, RISC OS Open keeps it running and up to date for modern RISC-based hardware platforms, especially the Raspberry Pi. At v5.30 as of this writing, it is still a 32-bit operating system with "moonshot" aspirations of 64-bit someday. *checks watch* Time is ticking to pull that together before fading into 32-bit irrelevance. Did I mention how tiny this thing is? The latest version for Raspberry Pi is a 155MB download. Version 3.7, which I used for this article, downloaded as a pre-configured emulator with OS and apps pre-installed, was a mere 129MB. Even the most up-to-date pre-configured package tops out at a "massive" 1GB, apps and emulator inclusive. How big is macOS on ARM?

Leading in with his View lineup of productivity apps on the BBC Micro, Mark Colton was the man with the all-in-one vision. With View Professional, he took his first stab at providing an uber-app for that 8-bit workhorse. It's primitive and clunky to use, but the spark is present. He would then expand on his ideas through the PipeDream lineup, taking it all the way to version 4.5. Every version refined the vision, but ultimately its character-based layout engine roots became a limiting factor to its growth. One rewrite later, he had a true GUI-based implementation, for both the Archimedes and Windows, in Fireworkz, released in 1993. Having created the standalone products Wordz and Resultz, Fireworkz combined those back into one. By mid-1995, Fireworkz Pro added in the database functionality, merging the new Recordz into the product, and that's where Colton's involvement ended.

Besides asking "What even is a spreadsheet anyway?" Colton's other passion was race car driving. In August 1995, an engineering defect in the front wing of his Pilbeam M72 caused it to fold under his car while he was at top speed. He lost control, crashing headlong into a telegraph pole, and was killed. Most shockingly, both PipeDream and Fireworkz continue to be maintained to this day.
Mark's father, Richard, generously open-sourced both PipeDream and Fireworkz just before his own untimely death in 2015. Fireworkz Pro, the version that includes database functionality, is not open-sourced and is still for sale. The PipeDream package available for installation in the RISC OS package manager is not the version I'm using for this article; it is the modern update, which adds a bunch of niceties, including a GUI toolbar for formatting text, expanded spreadsheet functions, and a mind-boggling number of bug fixes. This is all maintained by lone developer Stewart Swales, someone intimately involved in the RISC OS and PipeDream history. He worked at Acorn and helped develop Arthur, the OS that became RISC OS. Later, he joined Colton Software as lead developer, working on PipeDream and Fireworkz. There's really nobody better to carry on the legacy.

Where, precisely, Colton's continuation of that legacy would have gone, we can't say with certainty. However, we do have a little insight into his thinking. In an interview with Acorn User, December 1994, he said, "Over the next few years...we won’t be writing spreadsheets either; we'll be writing a totally different style of program. I expect spreadsheets, word processors and so on to be provided as part of the operating system in the future."

Let me start by making it clear that I appreciate the effort. I say that with all sincerity and for everyone involved. From the machine, to the OS, to the productivity suite, all katamari'd up into a unique star. It was a lot of fun feeling like a beginner again. I had moments of true learning, shedding expectations of "how things should be" and experiencing fresh, alternate ways to approach work. I said at the beginning, the question that needs answering is, "Did Colton successfully execute his vision?" and here I must waffle. From View Professional, through five major releases of PipeDream, and two Fireworkz releases, he held fast to a very particular line of exploration. That he never wavered in his pursuit of that vision says to me that he must have felt he had achieved his goal to some degree. In that regard, we can say he successfully executed his vision.

As an end-user, it is hard to align myself to that vision. I get what he's after, especially when trying to make sure documents always reflect the latest data. After using PipeDream for a number of weeks, I remain unconvinced that the solution is to graft all software into one uber-application. If we follow that thinking to its logical conclusion, then why not include paint features? Why not include robust desktop publishing features? Where would it stop? Had the amalgamation of these productivity apps birthed something uniquely unachievable by other means, or unlocked some latent potential in the individual apps, I'd be very willing to adapt to this "skew-whiff" (last one, I promise!) approach to application design. As it stands, I ultimately don't see what it does that wouldn't be equally well-served, perhaps better-served, by intelligent file link management with robust publish/subscribe functionality. In fairness, a deep implementation of that would work best as an OS-level feature, and Colton could only control his own works. Paradoxically, the most frustrating aspect of removing the barriers between applications is how we wind up with a slate of new barriers forged in that alliance. Colton said of View Professional that even when the apps are combined, none should feel like a compromised version of that app.
Yet, compromises are what I feel with every document I build. Is it worth giving up easy text formatting and basic cut/copy/paste for the off-chance I might need to insert a little spreadsheet table? There's an 80/20 rule being almost willfully ignored here. I love that Colton had a unique vision and stuck to it. I love that someone tried to forge a new path in productivity application design. I love that PipeDream exists, but I don't love it.

Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible).

Testing Rig:
- RPCEmu v371 on Windows 11
- RISC OS v3.7
- 1024 x 768, 15-bit color
- PipeDream v4.13

My test sequence:
- boot the system
- launch my application of interest
- make a dummy document
- quit the emulator entirely and reboot
- load my saved document

Because rows and columns are shared throughout the document, insertions and deletions, or moving things around, create difficult-to-resolve layout issues. If a spreadsheet sits to the right of a block of text, and we want to insert a row into only the spreadsheet part, that's not possible. Doing so will also insert an empty row into the paragraph, leaving a gap.

PipeDream has a strange concept of "global font" vs. "local font". Local fonts can't be changed until the global font is set to something other than the system font. The global font controls value cells, which cannot be styled individually. Local fonts will style a cell from wherever the cursor is currently located, and it is very easy to target a cell and style its font, but miss the first character or two, even though the entire cell is highlighted as a selection. "What will be the result of my action?" is not always crystal clear.

The controls for styling charts are difficult to understand, and messing up is hard to reverse out of. I accidentally added "New Text" to the chart and it took a long time to figure out how to delete it; selecting it and hitting "delete" doesn't work. There is no way to modify the legend. There's no facility for selecting elements for inclusion/exclusion from the graph. In my case, formatting to look good on the printed page meant adding empty columns which wound up in the pie chart. This is very representative of the struggles the layout engine introduces. Making data look good in one context risks "making a shambles of it" (are these working? have I won you over?) in another.

Page layout settings are cryptic. Margins can only be set to the top and left (?!?!) and only in unspecified numeric units. I used the template default values, and the page wound up shifted down and to the left. Getting beautiful output is a challenge. How could I forget? There's no UNDO! Some programs, like !Draw (vector illustration) and !Edit (text editor), have undo, and others, like !Paint and !PipeDream, do not.

Getting started with RPCEmu, using a pre-built package, was as dead simple as you'd imagine. I experienced no crashes of the emulator, operating system, or PipeDream. It was a very solid experience in that regard. PipeDream itself, at least the version I used, had a ton of annoying bugs, and the graphical glitches were even noted in a review by Micro User, February 1992. But emulator-wise, everything was smooth. I recommend first-time users grab a pre-built image for quickly jumping in and seeing what the fuss is all about. I also do recommend going through the RISC OS Manual. The operating system is almost unusable until you learn its little tricks and nuances of operation.
Pre-built images: https://www.marutan.net/rpcemu/easystart.html
v3 Manual: https://archive.org/details/ro-3-user-guide
v5 Manual: https://archive.org/details/risc-os-5.28-user-guide

Technically, I am cheating a bit in this review. RPCEmu doesn't emulate an Archimedes but rather Acorn's later Risc PC. I ran PipeDream from floppy in Arculator, which explicitly emulates Archimedes systems, to compare the experiences. Except for RPCEmu's snappier performance (which I want anyway), RISC OS itself abstracts away the hardware layer so much that it didn't seem to matter one emulator over the other. The emulator itself expects a specific keyboard, with one key situated in a position my keyboard doesn't have; nothing on my extended keyboard would send the right code to the emulator. That character is used for logical operators in PipeDream data queries, so I had to use Windows ALT keycodes. I mentioned it earlier, but I'll make it explicit here: there is no undo.

Fireworkz is available as a native Win32 app. It launches without issue on Windows 11 64-bit, and even in Wine on macOS. It looks and feels exactly like Fireworkz on RISC OS, which looks and feels a lot like the latest version of PipeDream (minus the database parts). The list of bug fixes and quality-of-life enhancements is vast. Scrolling through all changes since Colton passed is kind of pointless due to its scope. I'll say, "a lot has improved" and leave it at that. As a local-only alternative to the Google/Apple/Microsoft hegemony, it's worth checking out. It's free, open source, actively maintained, a mere 2.5MB download, and for God's sake at least it's trying to do something different.

Getting documents out of RISC OS into a modern system is easy, but has its caveats. RPCEmu can directly save to the host operating system, so getting files out is a non-issue. PipeDream's options for saving documents will strip the document's uniqueness, however. Saving as ASCII will try to keep text precisely as shown in PipeDream, inserting line breaks at the end of every line of text. Tables are just tab-indented. Any text formatting, fonts, graphs, etc. are stripped, of course. Saving as "Paragraph" is like ASCII, but will keep text together as logical paragraphs. This is much better for pasting the text into new documents. We still lose anything done to make the document look pretty. PDF printing is an option in RISC OS, and proved to be the best way I could find to get PipeDream documents into the real world. This required two parts: activating the PDF printer and running a separate !PrintPDF application. With both active, PipeDream generated PDFs without issue.

ava's blog Yesterday

kicking out human slop from my online space

I have made slow adjustments to my feed reader every other day now to exclude some empty negativity from my online space, and also to no longer click on some YouTube frontpage stuff. Don't get me wrong: there is still plenty of negativity in there in some ways, but at least it is productive negativity that lets me know how Big Tech has messed up again, or about new privacy-invading laws, policy proposals, and more that relate to my field of interest. I am just no longer engaging with:

LOOK AT THESE DISRESPECTFUL RANDOS (IN-LAWS FROM HELL! UNREASONABLE AIRBNB HOSTS!)
overconsumption has gone too far [new trend]
Meet Dumbass69, the biggest PREDATOR you have NEVER HEARD OF
There are more victims by Dumbass69 than we previously thought...
This random person on TikTok with 3k followers did something weird and gross....
The Rise And Fall of this Creator
Look at this new stupid and cringe content kids like (and I am an adult)
Watch me react to content meant to entertain and be a little silly, while I act like the premise is UNHINGED AND CRAZY and the creators must be ON DRUGS to think of this....
Society is cooked (based on 5 cherry-picked examples)
EVERYONE HATES THIS PART OF THIS MOVIE (it was just 5 people in a comment section)

How I slid into this happened slowly. Some of the creators making these weren't always this way; I just noticed them pivoting more and more into this and now have lost interest. On the other hand, I often needed 30-60 minute videos for the treadmill, and these were easily available and at least somewhat entertaining. I am also not immune to certain shock topics. But I want something more than just pointing at some rando and laughing, or saying how stupid this or that thing is, when this just increases its visibility and no one would really disagree anyway. No one thinks these people are reasonable or this product is the best thing since sliced bread, which is why there are no compelling arguments ever. It's just the most obvious takeaways, showing the original video in fullsize while they are in the corner. It feels True Crime adjacent, as it is also thrilling, cathartic, validating: "yeah, I also think this is bad, we are all in the in-group, we are the reasonable people, I am on the right side". Shock, upset, rage, disbelief, while being reassured by the creator that you aren't insane for feeling that way. At the same time, this builds a connection with you, because you are seemingly going through this "together". You feel happy when the creator comes to the same conclusions as you, like ahh, they're just like me! Maybe this type of online commentary genre should be called True Asshole, since you're not talking about crime, but assholes out there, and still employ the same tactics for viewers.

I'm okay with it if it's about showing a general overarching trend and really adding some of your own perspective, analysis, studies, article excerpts, statistics etc. to the topic, with the focus being you, and an example here and there that is not focused on the person you took it from. I also love when there's a company analysis/takedown. But it is so boring and dreadful at this point to watch the same few YouTubers react to the same topic with the same video examples, where most of the video is just you being tricked into feeling like you're spending time with the creator watching stupid shit on their phone.

I am also tired of the flattening everywhere, which is at the core of why every video by some creators now is that way. Everyone finds one thing that "works" and then obsessively laser-focuses on that to appease the opaque and mysterious algorithm and the mob. A switch is flipped, and they immediately make that their brand and only produce things in that style. Everyone carefully separates different aspects of themselves into different accounts and platforms, as if it were offensive to be multi-layered online (and I guess it is to the algorithm). Even people who started something as a hobby and didn't plan for monetization are suddenly intrigued by it when one of their things pops off, and then the hunt for money changes everything, because more eyes = more money, and so you have to box yourself in to what the masses want.
Others look at that and then think "This is the proper way to vlog/blog/make commentary/..., I should do this too, and not whatever childish amateurish shit I have been doing where I just talk about what I enjoy!" and that's so wrong! They aren't doing it "correctly"; you are just watching someone in a hamster wheel of their own making, pandering to what is most successful at the moment. It's actually sad to see, because you enjoy the creations of great people who then go on to flatten themselves. One piece with more plump, petty, crass language gets more engagement because "Finally someone says it how it is! So cathartic to read!" and now every one of their releases employs this as a strategy, usually getting more extreme with time. There's a great writer I like to read, but reading his pieces, I always have to sort of ignore these petty squabbles because it just makes me sad, as they don't seem authentic and passionate anymore, but like he has locked himself into this Say-The-Line-esque role. There is also a YouTube channel I used to watch that gained popularity by ribbing a little on a specific company in good fun, and being embroiled in a legal battle with them at some point, but nowadays the host just acts without any sort of class or professionalism about it, as outrage and extremism make more money. I cannot bear to watch him anymore, as I just see a capitalism muppet doing a weird shtick and nothing more. Throwing good and reliable review content away for this is a sign of the times, I guess. Anyway, more time and space for uplifting and genuinely creative things the creator actually wanted to make. :) Reply via email Published 08 May, 2026


Canvas Breach Disrupts Schools & Colleges Nationwide

An ongoing data extortion attack targeting the widely used education technology platform Canvas disrupted classes and coursework at school districts and universities across the United States today, after a cybercrime group defaced the service’s login page with a ransom demand that threatened to leak data from 275 million students and faculty across nearly 9,000 educational institutions.

[A screenshot shared by a reader showing the extortion message that was shown on the Canvas login page today.]

Canvas parent firm Instructure responded to today’s defacement attacks by disabling the platform, which is used by thousands of schools, universities and businesses to manage coursework and assignments, and to communicate with students. Instructure acknowledged a data breach earlier this week, after the cybercrime group ShinyHunters claimed responsibility and said they would leak data on tens of millions of students and faculty unless paid a ransom. The stated deadline for payment was initially set at May 6, but it was later pushed back to May 12.

In a statement on May 6, Instructure said the investigation so far shows the stolen information includes “certain identifying information of users at affected institutions, such as names, email addresses, and student ID numbers, as well as messages among users.” The company said it found no evidence the breached data included more sensitive information, such as passwords, dates of birth, government identifiers or financial information. The May 6 update stated that Canvas was fully operational, and that Instructure was not seeing any ongoing unauthorized activity on their platform. “At this stage, we believe the incident has been contained,” Instructure wrote.

However, by mid-day on Thursday, May 7, students and faculty at dozens of schools and universities were flooding social media sites with comments saying that a ransom demand from ShinyHunters had replaced the usual Canvas login page. Instructure responded by pulling Canvas offline and replacing the portal with the message, “Canvas is currently undergoing scheduled maintenance. Check back soon.” “We anticipate being up soon, and will provide updates as soon as possible,” reads the current message on Instructure’s status page.

While the data stolen by ShinyHunters may or may not contain particularly sensitive information (ShinyHunters claims it includes several billion private messages among students and teachers, as well as names, phone numbers and email addresses), this attack could hardly have come at a worse time for Instructure: Many of the affected schools and universities are in the middle of final exams, and a prolonged outage could be highly damaging for the company.

The extortion message that greeted countless Canvas users today advised the affected schools to negotiate their own ransom payments to prevent the publication of their data — regardless of whether Instructure decides to pay. “ShinyHunters has breached Instructure (again),” the extortion message read. “Instead of contacting us to resolve it they ignored us and did some ‘security patches.'”

A source close to the investigation who was not authorized to speak to the press told KrebsOnSecurity that a number of universities have already approached the cybercrime group about paying. The same source also pointed out that the ShinyHunters data leak blog no longer lists Instructure among its current extortion victims, and that the samples of data stolen from Canvas customers were removed as well.
Data extortion groups like ShinyHunters will typically only remove victims from their leak sites after receiving an extortion payment or after a victim agrees to negotiate.

Dipan Mann, founder and CEO of the security firm Cloudskope, slammed Instructure for referring to today’s outage as a “scheduled maintenance” event on its status page. Mann said ShinyHunters first demonstrated they’d breached Instructure on May 1, prompting Instructure’s Chief Information Security Officer Steve Proud to declare the following day that the incident had been contained. But Mann said today’s attack is at least the third time in the past eight months that Instructure has been breached by ShinyHunters.

In a blog post today, Mann noted that in September 2025, ShinyHunters released thousands of internal University of Pennsylvania files — donor records, internal memos, and other confidential materials — through what the Daily Pennsylvanian and other outlets later determined was, in part, a Canvas/Instructure-mediated access path.

“Penn was the named victim,” Mann wrote. “Instructure was the mechanism. The incident was treated as a Penn-specific story by most of the national press and quietly handled by Instructure as a customer-specific matter. That framing was wrong then. It is dramatically more wrong in light of the May 2026 events, which now look like the planned escalation of an attack pattern that ShinyHunters had been working against Instructure’s environment for at least eight months prior. The September 2025 Penn breach was the proof of concept. The May 1, 2026 incident was the production run. The May 7, 2026 recompromise was ShinyHunters demonstrating publicly that the May 2 ‘containment’ did not happen.”

In February, a ShinyHunters spokesperson told The Daily Pennsylvanian that Penn failed to pay a $1 million ransom demand. On March 5, ShinyHunters published 461 megabytes worth of data stolen from Penn, including thousands of files such as donor records and internal memos.

ShinyHunters is a prolific and fluid cybercriminal group that specializes in data theft and extortion. They typically gain access to companies through voice phishing and social engineering attacks that often involve impersonating IT personnel or other trusted members of a targeted organization. Last month, ShinyHunters relieved the home security giant ADT of personal information on 5.5 million customers. The extortion group told BleepingComputer they breached the company by compromising an employee’s Okta single sign-on account in a voice phishing attack that enabled access to ADT’s Salesforce instance. BleepingComputer says ShinyHunters recently has taken credit for a number of extortion attacks against high-profile organizations, including Medtronic, Rockstar Games, McGraw Hill, 7-Eleven and the cruise line operator Carnival.

The attack on Canvas customers is just one of several major cybercrime campaigns being launched by ShinyHunters at the moment, said Charles Carmakal, chief technology officer at the Google-owned Mandiant Consulting. Carmakal declined to comment specifically on the Canvas breach, but said “there are multiple concurrent and discrete ShinyHunters intrusion and extortion campaigns happening right now.” Cloudskope’s Mann said what happens next depends largely on whether Instructure’s customers — the universities, K-12 districts, and education ministries paying for Canvas — choose to apply pressure or absorb the breach quietly.
“The history of education-vendor incidents suggests the path of least resistance is the second one,” he concluded.


Notes on the Hantavirus Outbreak

Right now there’s a cruise ship parked outside Cabo Verde because of an outbreak of Andes virus. Yep, another cruise ship. I don’t get the appeal. It’s like a big open-air serial passage experiment: you get a bunch of old people with failing immune systems in close contact and race a pathogen through them. How much should I worry about this? Is this early January of 2020? I tried asking Claude but the biosecurity filter kept blocking my queries. The WHO says:

Although uncommon, limited human‑to‑human transmission of HPS due to Andes virus has been reported in community settings involving close and prolonged contact. Secondary infections among healthcare workers have been previously documented in healthcare facilities, though remain rare. WHO currently assesses the risk to the global population from this event as low […] WHO advises against the application of any travel or trade restrictions based on the current information available on this event.

So, hantavirus is the family. They are carried by rodents and spread by aerosols. In humans they can cause hantavirus pulmonary syndrome (HPS), which has a case fatality rate (CFR) of between 30 and 60%. Not great! Used to be these infections were mouse-to-human dead-ends. But Andes virus (ANDV), first identified in 1995, is known to spread from human to human. The last time there was an outbreak was 2018–2019 in Epuyén, Chubut, a town of 1,500 on the lee side of the Andes (quite beautiful). Described in this paper. 34 known infections and 11 deaths, for a CFR of 32%. The $R_0$ was 2.12, reduced to 0.96 after control measures were implemented. Given the small number of cases, there should be some uncertainty about the $R_0$. But $R_0 > 1$ is the threshold for sustainable transmission. In this outbreak, the index case, while symptomatic, attended a birthday party with 100 other people, and infected five guests in 90 minutes, who went on to infect more people. The authors write:

The super-spreading capability of the ANDV Epuyén/18−19 strain shows a facility ($R>2$) for sustaining continuous chains of transmission if no control measures are enforced.

The appendix has some interesting stuff on how patients were infected at the birthday party. A further concern here is the incubation period: Wikipedia says the incubation period is between one and eight weeks. In the Chubut outbreak, the distribution was: [chart of the incubation-period distribution omitted]. Which is not good. I don’t have more data to draw a nice-looking CDF.

Now this all sounds quite bad. Are there reasons to be optimistic? First, Argentina has had 710 cases of HPS in the period 1995–2008 (Martínez 2010) and a further 533 cases in the period 2009–2017 (Alonso 2019), and we are all still alive. In the latter period, most of these cases are from occupational/recreational exposure to rodent feces and only 1.8% of cases are from suspected human-to-human transmission. So, over 1,200 cases and every one of them fizzled out, but for one outbreak which was limited after successful contact tracing and quarantine. Second, the virus has left Argentina before: once to Switzerland in 2016, and once to the United States in 2018. In the second case the patient “while ill, [traveled] on two commercial domestic flights”. And neither export led to a general outbreak. Third, in a small outbreak like the Chubut one, the $R_0$ can vary wildly from social factors unconnected to the virus, e.g. if the birthday party had not happened. You need a large $n$ to get the $R_0$ as a property of the virus itself.
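That intuition is easy to check numerically. Here's a back-of-envelope simulation of my own (assuming Poisson-distributed secondary cases; the paper's methods are more careful): at the two reported $R_0$ values, see how often a chain starting from a single index case fizzles out on its own.

```python
import math, random

def poisson(lam):
    # Knuth's method: multiply uniforms until the product drops below e^-lam.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def final_size(r0, cap=500):
    # Branching process: each case infects Poisson(r0) others; stop when the
    # chain dies out or clearly takes off (reaches `cap` cases).
    size = generation = 1
    while generation and size < cap:
        generation = sum(poisson(r0) for _ in range(generation))
        size += generation
    return size

random.seed(1)
for r0 in (2.12, 0.96):
    sizes = [final_size(r0) for _ in range(2_000)]
    fizzled = sum(s < 34 for s in sizes) / len(sizes)
    print(f"R0={r0}: {fizzled:.0%} of single introductions fizzle out")
```

Even above the $R_0 = 1$ threshold, a sizable minority of chains go extinct by chance alone, and below it essentially all of them do, which is at least consistent with 1,200-plus Argentine cases producing exactly one sustained outbreak.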
It’s possible the Chubut outbreak just had anomalously high transmission. What does this add up to? I don’t know. On the balance of evidence, I think this outbreak is more likely than not to fizzle out. In the interest of accountability, and putting my beliefs on record (which is the only objective way to judge the accuracy of your mental model), I’m gonna say:

70% probability the outbreak ends with fewer than 300 deaths.
90% probability the outbreak ends with fewer than 1,000 deaths.

And yet. And yet it feels so much like early COVID, particularly with public health authorities making very complacent remarks that “it’s not that transmissible, contact tracing will work, quarantine will work”. Complacency at the start, and severity at the end, is exactly why COVID was such a fuckup.

Sean Goedecke Yesterday

Notes on incidents

Incidents are boring. Most of what you actually do during an incident is wait: for some other team to investigate, or for a deploy to finish, or for the result of some change to become apparent, or for someone else who’s been paged to come online. It’s stressful, but there’s often just not that much to do.

Most incidents resolve on their own. People love to share war stories about incidents where some hero engineer improvised a clever fix that instantly repaired the system. That rarely happens. Well-designed software systems tend to come good by themselves, and many modern systems are at least partly well-designed, by virtue of being built out of really solid pieces. If a server process is crashing or leaking memory, Kubernetes will kill the pod and bring it back up. If a service is overloaded and jammed up, clients will (hopefully) trigger circuit breakers and back off until it can recover. Temporary spikes in expensive operations will often just fill up a queue instead of taking the entire system down. Most incident calls I’ve been on - well over half - would have come good by themselves in roughly the same time without any human intervention.

Most incident-resolving actions make incidents worse. Engineers jump too quickly to resolve incidents. Oh, the queue size is huge? Don’t worry, I’m here in a production console to clear the queue! Unfortunately, some of the jobs I just nuked were doing important billing work and aren’t automatically re-queued, so this queue-latency incident just became a billing incident as well. Another classic in this genre is “engineer forces a series of redeploys to ‘fix’ a concerning-looking metric, and the concurrent deploys cause far more stress on the system than whatever was causing the metric to look weird”.

For that reason, the first thing you should do in an incident is nothing. When I was paged late at night, I used to have a habit of pouring myself a glass of scotch before I joined the call. This was only partly for the tranquilizing effects of alcohol: the main reason was to have a ritual I could go through to convince myself that I wasn’t rushing, and that it was OK to take a few breaths and relax before jumping into the problem [1]. Making a cup of tea or going for a walk around the house would probably have served as well.

Effective incident-resolving actions are often dull. Typically the action needed to resolve the incident - assuming it doesn’t resolve on its own - is to temporarily disable some problematic feature until the system recovers. This is never a complex code change. Typically someone spends five minutes putting together the patch, and then an hour waiting for reviews, CI, and deploying. If you’re very lucky, you’ll get to write a “wrap a cache around it” code change.

In an incident, there is no substitute for knowledge of the system. Five strong engineers can troubleshoot on an incident call and get nowhere, while one half-drunk engineer who’s familiar with the codebase can swan in and immediately fix the problem. This is because the kinds of actions that resolve incidents are so simple: if you’ve been the one working on the project, you likely already know exactly what feature flag to check and disable, or what code change to revert.

Resolving incidents requires courage. Incident calls can be scary. When engineers are scared, they often reach for consensus: hedging their statements, asking the group if they agree a particular course of action is safe, deferring to each other, and so on.
But if you’re the one with knowledge of the system, you have to be decisive. Say “I’m going to do X”, wait thirty seconds, then do it. While it’s usually net-negative to have a powerful manager fidgeting on the incident call, this is one of the rare cases where it can be helpful - executives are very comfortable saying “okay, do it now” about technical courses of action they don’t fully understand.

Resolving incidents buys a lot of political credit. One thing that I think surprises a lot of engineers who are new to on-call is how grateful managers and executives are for even really simple fixes (i.e. “turn off the feature flag”). This is because incidents are one of the few times that non-technical leadership are directly confronted with their lack of control over the technical sphere. When the team is building a product, your VP has a lot of freedom to guide the process and make decisions. But when there’s an active incident, they have to just sit there and trust that their technical employees are going to pull them out of the fire. It’s a scary situation, particularly for someone who’s used to exercising a degree of power in the workplace.

However, always resolving incidents is (by itself) not a durable position of power. This is a little counter-intuitive. Surely if you’re always resolving incidents, you’re indispensable? The problem is that incident-resolving work is almost always so technical as to be completely opaque to executives. They know the incident has resolved, but they don’t know if you did a heroic effort or merely did the obvious thing. They also can’t point to your successes as theirs (which is always the most reliable way to get VPs and directors on your side), because incidents are expected to be fixed, and it’s always better not to have had the incident at all.

[1] I don’t need to do this anymore because I just don’t get as keyed up about incidents as I used to. ↩

Unsung Yesterday

“This was a user-friendly computer.”

The Pixar animated short Lifted (2006) was released in front of Ratatouille:

[video frame from Lifted]

I’ve always been amused by this imaginary interface, which is so clearly not how any sort of computer would work. Or so I thought. These are photos I took in Melbourne in 2024 of CSIRAC, Australia’s first digital computer from about 1949:

[photos of CSIRAC]

This is a “console” of the computer, used to tactically probe or input specific memory addresses (in binary), and to control functions like stopping and starting the program. Any proper programming and eventually inputting data would happen using gentler I/O devices like typewriter keyboards, paper tape, and magnetic storage.

[photos of the CSIRAC console]

Physical consoles like this one were last seen in the 1970s on hobbyist home computers such as the Altair 8800, and the Console app on your Mac diligently spitting out logs is its spiritual and virtual successor. But even if a CSIRAC console feels hostile today, 75 years ago it was quite the opposite:

And [CSIRAC] helped there too. It could display all its working registers and the last 16 instructions executed. It could be given an address at which to stop (a “breakpoint”), and be stepped by one instruction at a time. It even had lights to show the computer’s internal states. This was a user-friendly computer.

CSIRAC stood for Commonwealth Scientific and Industrial Research Automatic Computer, a typical naming scheme of the era. We also got ENIAC (Electronic Numerical Integrator and Computer) in 1945, BINAC (Binary Automatic Computer) in 1949, EDVAC (Electronic Discrete Variable Automatic Computer) in 1946, ILLIAC (Illinois Automatic Computer) in 1952, and then SEAC, SWAC, ORDVAC, TREAC, AVIDAC, FLAC, WEIZAC, BIZMAC, RAMAC, and UNIVAC. The story goes that the name of 1952’s MANIAC (Mathematical Analyzer Numerical Integrator and Automatic Computer) was chosen to highlight and put a stop to the goofy naming practice. Did it work?
I am not sure. Not only were two more MANIACs produced, but we also got 1953's JOHNNIAC (nicknamed "pneumoniac," since it needed a lot of air conditioning), and SILLIAC (Sydney ILLIAC) in 1956. The last computer I can find using that naming scheme was TIFRAC, operating in India between 1960 and 1965.

CSIRAC had real work to do, but today it is known chiefly for being the first computer to play music in real time. The quality is… I'll let you judge, with the links below pointing to short MP3s preserved by Paul Doornbusch and subsequently by the Internet Archive:

Auld Lang Syne
Chopin's March
In Cellar Cool

(I particularly enjoyed an alt recording of In Cellar Cool where CSIRAC itself appears in the background as a constant humming presence.)

Do you miss your PC speaker yet? Engineers working on other room-sized computers of that era did similar things; whether this was solely one of the first attempts to humanize the big scary machines, or a distraction from the computers' typically military uses, is left as an exercise for the listener.

Today, one of the 1960s machines still plays music, headlining a fascinating annual tradition – every December, the PDP-1 restoration crew at the Computer History Museum in California invites visitors to sing carols with a computer older than most of them.

[video: caroling with the PDP-1]

[photos: the PDP-1 and its console]

The last photo takes us back to where we started. Neither CSIRAC nor the PDP-1 might be user-friendly by today's standards, but damn, wouldn't you want some of your computer's interface to feel this way?

#history #sound design #youtube


Notes on the xAI/Anthropic data center deal

There weren't a lot of big new announcements from Anthropic at yesterday's Code w/ Claude event, but the biggest by far was the deal they've struck with SpaceX/xAI to use "all of the capacity of their Colossus data center". As I mentioned in my live blog of the keynote, that's the one with the particularly bad environmental record. The gas turbines installed to power the facility initially ran without Clean Air Act permits or pollution control devices, which they got away with by classifying them as "temporary". Credible reports link it to increases in hospital admissions related to poor air quality.

Andy Masley, one of the most prolific voices pushing back against misleading rhetoric about data centers (see The AI water issue is fake and Data center land issues are fake), had this to say about Colossus:

I would simply not run my computing out of this specific data center

I get that Anthropic are severely compute-constrained, but in a world where the very existence of "AI data centers" is a red-hot political issue (see recent news out of Utah for a fresh example), signing up with this particular data center is a really bad look.

There was a lot of initial chatter about how this meant xAI were clearly giving up on their own Grok models, since all of their capacity would be sold to Anthropic instead. That was a misconception - Anthropic are getting Colossus 1, but xAI are keeping their larger Colossus 2 data center for their own work.

As an interesting side note, the night before the Anthropic announcement, xAI sent out a deprecation notice for Grok 4.1 Fast and several other models, providing just two weeks' notice before shutdown, reported here by @xlr8harder from SpeechMap:

This is terrible @xai. I just spent time and money to migrate to grok 4.1 fast, and you're disabling it with less than two weeks notice, after releasing it in November, with no migration path to a fast/cheap alternative. I will never depend on one of your products again.

Here's SpeechMap's detailed explanation of how they selected Grok 4.1 Fast for their project in March. Were xAI serving those models out of Colossus 1?

xAI owner Elon Musk (who previously delighted in calling Anthropic "Misanthropic") tweeted the following:

By way of background for those who care, I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed. [...] After that, I was ok leasing Colossus 1 to Anthropic, as SpaceXAI had already moved training to Colossus 2.

And then shortly afterwards:

Just as SpaceX launches hundreds of satellites for competitors with fair terms and pricing, we will provide compute to AI companies that are taking the right steps to ensure it is good for humanity. We reserve the right to reclaim the compute if their AI engages in actions that harm humanity.

Presumably the criteria for "harm humanity" are decided by Elon himself. Sounds like a new form of supply chain risk for Anthropic to me!

You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

Thomasorus Yesterday

Chemical love combo

Chemical Love is one of the signature moves of I-no, a character from the fighting game series Guilty Gear. The Chemical Love combos are among the most complex things I ever had to master in a fighting game, and the biggest knowledge check I ever encountered just to be able to properly use a character. It took me days of trial and error to make it work for the first time. It's a great testament to how older fighting games used to be humbling bottomless pits.

Chemical Love is a powerful move: when doing it, I-no flips herself into the air, does a front split, and launches a sound wave that travels almost full screen and knocks the opponent down (a very powerful thing in Guilty Gear, as recovery timings are fixed and give plenty of time to prepare your next attack). Its counterpart is that its execution is demanding: a half-circle backward then forward motion + Kick button (or →↘↓↙← → Kick). But this move isn't just a standalone attack you throw out of nowhere, it's also a core part of I-no's combos. And that's where it becomes both complicated and amazing.

Guilty Gear XX provides a move-cancelling technique called Roman Cancel. By pressing 3 buttons, it frees the character from whatever it is currently doing, while leaving its existing projectiles active. The basic version creates a red halo around the character. Some of those cancels, when done during a specific time window that usually lasts 2 to 4 frames (roughly 33 to 67 milliseconds at 60 frames per second) and only exists on specific moves, are called Force Roman Cancels (or FRC) and show a blue halo instead of a red one.

Chemical Love has an FRC point, meaning it can be used during a combo. But as I-no lifts herself into the air on the first frame of the move, cancelling it makes her fall to the ground before she can continue and attack. And as I-no does not have a traditional running motion, she cannot catch the opponent she just hit with the sound wave.

Guilty Gear XX allows characters to dash in the air, and I-no can do it too. Since we are cancelling Chemical Love, a move that lifts I-no into the air, she should be able to just air dash without having to touch the ground first, right? Well, no. Because while the move gives her some properties akin to being in the air (like invincibility on the legs), I-no is considered on the ground, so she cannot air dash. She has to jump first, then do Chemical Love in the air, then cancel it with the FRC, then air dash.

Jumping and then inputting the complicated motion of Chemical Love is nearly impossible to do in such a short amount of time, which is why a Tiger Knee motion has to be used. A Tiger Knee motion means inputting the move's motion, then an extra upward direction, then the button that triggers the move. By doing so, the game's input buffer is tricked: it has the move's motion in memory, but only triggers it when the button is pressed, which happens after the jump. For I-no, it means doing a half-circle backward that ends with an up-backward direction that acts as a jump, then forward, then the button (or →↘↓↙←↖ → Kick). By doing so, I-no does Chemical Love close to the ground, but is considered in the air.

Once the character is properly considered in the air, the cancel can happen and I-no can technically air dash, but the player has to input a double tap forward extremely fast before I-no falls. That's almost impossible to do this close to the ground, so once again the game's buffer must be abused. Instead of doing Chemical Love > 3 buttons > Forward, Forward, it's necessary to input Chemical Love > Forward > 3 Buttons > Forward. This way, I-no doesn't have time to fall back to the ground and instantly air dashes out of the cancel, allowing the combo to continue, as the sketch below illustrates.
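I have no idea how Guilty Gear XX's actual input parser is written, but here's a minimal Python sketch of the general idea behind the Tiger Knee trick: a buffer remembers recent directional inputs, and the special still triggers when the button arrives a few frames later, even though a jump slipped in between. The window length, names, and input stream are all made-up assumptions for illustration.

```python
from collections import deque

BUFFER_FRAMES = 15  # assumed leniency window for motion inputs

class InputBuffer:
    def __init__(self):
        # (frame, direction) pairs, oldest first
        self.history = deque(maxlen=BUFFER_FRAMES)

    def push(self, frame, direction):
        self.history.append((frame, direction))

    def motion_done(self, motion, now):
        """True if `motion` appears in order within the window,
        ignoring unrelated inputs (like the jump) in between."""
        idx = 0
        for frame, direction in self.history:
            if now - frame > BUFFER_FRAMES:
                continue  # too old, outside the window
            if direction == motion[idx]:
                idx += 1
                if idx == len(motion):
                    return True
        return False

# Numpad notation: 6=forward, 3=down-forward, 2=down, 1=down-back,
# 4=back, 7=up-back. Half circle back then forward: 6 3 2 1 4 6.
CHEMICAL_LOVE = [6, 3, 2, 1, 4, 6]

buf = InputBuffer()
airborne = False

# Tiger Knee input stream: the motion, with an up-back (7) slipped
# in that starts a jump before the final forward.
for frame, direction in enumerate([6, 3, 2, 1, 4, 7, 6]):
    buf.push(frame, direction)
    if direction == 7:
        airborne = True  # the jump begins; the motion stays buffered

# The Kick button arrives a few frames later, once I-no is airborne:
if buf.motion_done(CHEMICAL_LOVE, now=8) and airborne:
    print("Chemical Love comes out in the air, just above the ground")
```

The key property is that matching skips over inputs that don't belong to the motion, which is exactly what lets the jump direction sneak in without eating the buffered half-circle.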
And that's it, that's all the basics needed to do a proper combo with I-no! But it's not over. Depending on the starter before Chemical Love, the distance, and other factors like the weight of the character, the rest of the combo changes.

There's actually another way of doing this combo! In Guilty Gear XX, there's a technique called Jump Install, which consists of making the game believe the character is both on the ground and in the air, allowing them to store a jump or an air dash until they go back to their idle stance. The way it works is by using normal moves that are jump-cancellable, which means their recovery animations can be cancelled by the character jumping. But the animation of the jump can itself be cancelled by another move before the jump starts, yet the jump still registers. So the character does an attack, cancels its animation to jump, then cancels the jump animation with another attack. Nothing on the screen indicates a Jump Install happened; it can only be seen after the fact, when a character uses the air dash or the additional jump in a situation where it's normally not possible.

Which is perfect for I-no! If she can make the game believe she is in the air while she is on the ground, then when she cancels Chemical Love with the FRC, she can air dash without doing the additional Tiger Knee motion.
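Purely as illustration, here's a toy state model of the Jump Install idea, assuming a single stored-jump flag; every name and rule below is hypothetical, not real engine code.

```python
class INo:
    def __init__(self):
        self.airborne = False
        self.jump_installed = False  # stored jump/air-dash permission

    def jump_cancellable_normal(self):
        print("normal attack (its recovery can cancel into a jump)")

    def cancel_recovery_into_jump(self):
        # The jump registers with the engine the moment it starts...
        self.jump_installed = True
        print("jump startup (registered)")

    def cancel_jump_into_special(self):
        # ...but its animation is immediately cancelled by another
        # move, so I-no never actually leaves the ground. The stored
        # jump persists until she returns to her idle stance.
        print("Chemical Love, grounded, with a jump stored")

    def frc(self):
        print("Force Roman Cancel (blue halo)")

    def try_air_dash(self):
        # Normally a grounded character cannot air dash; the
        # installed jump makes the option available anyway.
        if self.airborne or self.jump_installed:
            print("air dash! the combo continues")
        else:
            print("grounded, no air dash")

ino = INo()
ino.jump_cancellable_normal()
ino.cancel_recovery_into_jump()
ino.cancel_jump_into_special()
ino.frc()
ino.try_air_dash()
```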
This combo is an incredible knowledge check even by fighting game standards of the period. It requires not only knowing how I-no works, but also knowing some Guilty Gear-specific techniques that are not explained in the game manual, as well as other, more generic fighting game techniques. Before the internet allowed knowledge to be shared more easily, it was nearly impossible to learn this by yourself.

Today, most fighting games don't put essential stuff like this behind a knowledge or execution barrier. For business reasons, they prefer to tone down the strength of the moves and reduce the complexity of motions, which makes the games more accessible, less punitive, and overall more successful. Still, it feels good to do this combo in a real match. It's the same feeling as beating a hard boss in any other game. I miss this kind of feeling in more recent games.