Posts in Hardware (20 found)

Meta Compute, The Meta-OpenAI Battle, The Reality Labs Sacrifice

Mark Zuckerberg announced Meta Compute, a bet that winning in AI means winning with infrastructure; this, however, means retreating from Reality Labs.

0 views

Running cheap and crappy USB hard drives in RAID0 is indeed a very terrible idea

Some of my dumb experiments result in interesting findings and unexpected successes. Some end up with very predictable failures. What happens when you have two crappy USB hard drives running in RAID0 mode? Nothing, until something goes wrong on one of the drives. Here’s what it looks like: But in a way, this setup worked exactly as expected. If you want to have a lot of storage on the cheap, or simply care about performance, or both, then running disks in RAID0 mode is a very sensible thing to do. I used it mainly for having a place where I can store a bunch of data temporarily, such as full disk images or data that I can easily replace. Now I can test that theory out! I feel like I need to point out that this is not the fault of the file system. When you instruct a file system to provide zero redundancy, then that is what you will get.

0 views

LoopFrog: In-Core Hint-Based Loop Parallelization

LoopFrog: In-Core Hint-Based Loop Parallelization
Marton Erdos, Utpal Bora, Akshay Bhosale, Bob Lytton, Ali M. Zaidi, Alexandra W. Chadwick, Yuxin Guo, Giacomo Gabrielli, and Timothy M. Jones
MICRO'25

To my Kanagawa pals: I think hardware like this would make a great target for Kanagawa, what do you think?

The message of this paper is that there is plenty of loop-level parallelism available which superscalar cores are not yet harvesting. Fig. 1 illustrates the classic motivation for multi-core processors: scaling the processor width by 4x yields only a 2x IPC improvement. In general, wider cores are heavily underutilized.

Source: https://dl.acm.org/doi/10.1145/3725843.3756051

The main idea behind LoopFrog is to add hints to the ISA which allow a wide core to exploit more loop-level parallelism in sequential code.

Structured Loops

If you understand Fig. 2, then you understand LoopFrog; the rest is just details.

Source: https://dl.acm.org/doi/10.1145/3725843.3756051

The compiler emits instructions which the processor can use to understand the structure of a loop. Processors are free to ignore the hints. A loop which can be optimized by LoopFrog comprises three sections: a header, which launches each loop iteration; a body, which accepts values from the header; and a continuation, which computes values needed for the next loop iteration (e.g., the value of induction variables).

Each execution of the header launches two threadlets. A threadlet is like a thread but is only ever executed on the core which launched it. One threadlet launched by the header executes the body of the loop. The other threadlet launched by the header is the continuation, which computes values needed for the next loop iteration. Register loop-carried dependencies are allowed between the header and continuation, but not between body invocations. That is the key which allows multiple bodies to execute in parallel (see Fig. 2c above, and the sketch at the end of this summary).

At any one time, there is one architectural threadlet (the oldest one), which can update architectural state. All other threadlets are speculative. Once the architectural threadlet for loop iteration i completes, it hands the baton over to the threadlet executing iteration i+1, which becomes architectural.

Dependencies through memory are handled by the speculative state buffer (SSB). When a speculative threadlet executes a memory store, data is stored in the SSB and actually written to memory later on (i.e., after that threadlet is no longer speculative). Memory loads read from both the L1 cache and the SSB, and then disambiguation hardware determines which data to use and which to ignore.

The hardware implementation evaluated by the paper does not support nested parallelization; it simply ignores hints inside of nested loops.

Fig. 6 shows simulated performance results for an 8-wide core. A core which supports 4 threadlets is compared against a baseline which does not implement LoopFrog.

Source: https://dl.acm.org/doi/10.1145/3725843.3756051

LoopFrog can improve performance by about 10%. Fig. 1 at the top shows that an 8-wide core experiences about 25% utilization, so there may be more fruit left to pick.
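To make the header/body/continuation split concrete, here is a minimal C sketch of how a compiler might carve up a simple loop. The function names are illustrative, and the hint "instructions" appear only as comments, since the real encoding is an ISA-level detail defined by the paper, not by this sketch.

```c
#include <stddef.h>

/* Stand-in for per-iteration work; assumed side-effect free. */
static long heavy_work(long x) { return x * 3 + 1; }

/* Original sequential loop. */
void plain_loop(const long *in, long *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = heavy_work(in[i]);
}

/* The same loop viewed through LoopFrog's structure. */
void annotated_loop(const long *in, long *out, size_t n) {
    size_t i = 0;
    while (i < n) {
        /* header: launches two threadlets and hands each the values it
           needs for this iteration (here: i, in, out) */

        /* body threadlet: depends only on header-supplied values, never on
           a previous body, so several bodies can run as speculative
           threadlets on a wide core */
        out[i] = heavy_work(in[i]);

        /* continuation threadlet: computes the state needed by the next
           iteration (the induction variable); loop-carried register
           dependencies are allowed only on the header/continuation path */
        i = i + 1;
    }
}
```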

0 views
Jeff Geerling 2 days ago

Raspberry Pi Pico Mini Rack GPS Clock

I wanted to have the most accurate timepiece possible mounted in my mini rack. Therefore I built this: a GPS-based clock running on a Raspberry Pi Pico in a custom 1U 10" rack faceplate. The clock displays time based on a GPS input, and will not display time until a GPS timing lock has been acquired. For full details on designing and building this clock, see:

When you turn on the Pico, the display shows no time until a fix is acquired. Upon 3D fix, you get a time on the clock, and the colon starts blinking. If the 3D fix is lost, the colon goes solid. When the 3D fix is regained, the colon starts blinking again.
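The colon behaviour described above amounts to a tiny state machine. Here is a minimal C sketch of that logic; gps_has_3d_fix(), display_colon(), and millis() are hypothetical helpers standing in for the real firmware, which is linked from the post and will differ.

```c
#include <stdbool.h>

/* Hypothetical helpers standing in for the real GPS and display code. */
bool gps_has_3d_fix(void);        /* true while the GPS reports a 3D fix */
void display_colon(bool lit);     /* turn the colon segment on or off    */
unsigned long millis(void);       /* milliseconds since boot             */

/* Called periodically: solid colon while the fix is lost, blinking while held. */
void update_colon(void) {
    if (!gps_has_3d_fix())
        display_colon(true);                  /* fix lost: colon goes solid */
    else
        display_colon((millis() / 500) % 2);  /* fix held: blink at ~1 Hz   */
}
```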

0 views
Stratechery 2 days ago

Apple: You (Still) Don’t Understand the Vision Pro

Dear Apple, I was, given my interest in virtual and augmented reality, already primed to have a high degree of interest in the Vision Pro, but even so, I appreciate how you have gone out of your way to make sure I’m intrigued. You let me try the Vision Pro the day it was announced , and while I purchased my own the day it shipped (and had it flown over to Taiwan), you recently sent me a demo version of the M5 Vision Pro (it’s definitely snappier, although I don’t like the Dual Knit Band at all; the Solo Knit Band continues to fit my head best). However, the reason I truly know you are trying to win my heart is that not only did you finally show a live sporting event in the Vision Pro, and not only was it an NBA basketball game, but the game actually featured my Milwaukee Bucks! Sure, I had to jump through VPN hoops to watch the broadcast, which was only available in the Lakers home market, but who am I to complain about watching Giannis Antetokounmpo seal the game with a block and a steal on LeBron James in my M5 Vision Pro? And yet, complain I shall: you have — like almost every video you have produced for the Vision Pro — once again shown that you fundamentally do not understand the device you are selling. I’m incredibly disappointed, and cannot in good faith recommend any model of the Vision Pro to basketball fans (or anyone else for that matter). Apple, you are one of the grandfather’s of the tech industry at this point; it’s hard to believe that you are turning 50 this year! Still, you are much younger than TV generally, and sports on TV specifically. The first U.S. television broadcast of a sporting event was a Columbia-Princeton baseball game on May 17, 1939 on NBC; there was one camera accompanying the radio announcer. Three months later NBC televised the first Major League Baseball game between the Brooklyn Dodgers and Cincinnati Reds; this time they used two cameras. All televised sports face a fundamental limitation when it comes to the fan experience: the viewer is experiencing something that is happening in real life 3D on a 2D screen; the solution NBC discovered from the very beginning was to not try and recreate the in-person experience, but to instead create something uniquely suited to this new medium. Two cameras became three, then four, then 147 — that’s how many cameras Fox used for last year’s Super Bowl broadcast . Of course many of those cameras were specialized: included in that number were 27 super slow motion cameras, 23 high resolution cameras, 16 robotic cameras, 10 wireless cameras, and two SkyCams. The job of stitching all of those cameras together into one coherent broadcast falls on the production team, housed in a specially equipped truck outside the stadium; that team coordinates with the broadcast booth to provide a seamless experience where every jump feels natural and pre-meditated, even though it’s happening in real time. It’s a great experience! And, of course, there is the pre-game, half-time, and post-game shows, which used an additional 64 cameras, including 12 wireless cameras, eight robotic cameras, seven augmented reality cameras, and a FlyCam. No broadcast is complete without something to fill the time when the game isn’t on. After all, as advanced as TV broadcasts may be, they still face the fundamental limitation that confronted NBC: how do you translate an in-person experience into something that is compelling for people on their couch looking at a 2D screen? 
When I first tried the Vision Pro the demo included a clip from an NBA game that was later cut from the demo that shipped with the device (which was the one available in Apple Stores); it jumped out at me at the time : What was much more compelling were a series of immersive video experiences that Apple did not show in the keynote. The most striking to me were, unsurprisingly, sports. There was one clip of an NBA basketball game that was incredibly realistic: the game clip was shot from the baseline, and as someone who has had the good fortune to sit courtside, it felt exactly the same, and, it must be said, much more immersive than similar experiences on the Quest. It turns out that one reason for the immersion is that Apple actually created its own cameras to capture the game using its new Apple Immersive Video Format. The company was fairly mum about how it planned to make those cameras and its format more widely available, but I am completely serious when I say that I would pay the NBA thousands of dollars to get a season pass to watch games captured in this way. Yes, that’s a crazy statement to make, but courtside seats cost that much or more, and that 10-second clip was shockingly close to the real thing. What is fascinating is that such a season pass should, in my estimation, look very different from a traditional TV broadcast, what with its multiple camera angles, announcers, scoreboard slug, etc. I wouldn’t want any of that: if I want to see the score, I can simply look up at the scoreboard as if I’m in the stadium; the sounds are provided by the crowd and PA announcer. To put it another way, the Apple Immersive Video Format, to a far greater extent than I thought possible, truly makes you feel like you are in a different place. The first thing that has been frustrating about the Vision Pro has been the overall absence of content; Apple, you produced a number of shows for launch, and then added nothing for months. The pace has picked up a bit, but that has revealed a second frustration: I think that your production stinks! One of the first pieces of sports content that you released was an MLS Season in Review immersive video in March 2024; I wrote in an Update : I have a lot to say about this video and, by extension, the Vision Pro specifically, and Apple generally. Let me work my way up, starting with the video: it’s terrible. The problem — one that was immediately apparent before I got into all of the pedantry below — is that while the format is immersive, the video is not immersive at all. This is the big problem: This is a screenshot of a stopwatch Mac app I downloaded because it supported keyboard shortcuts (and could thus use it while watching the immersive video). There are, in a five minute video, 54 distinct shots; that’s an average of one cut every six seconds! Moreover, there wasn’t that much gameplay: only 2 minutes and 32 seconds. Worse, some of the cuts happen in the same highlight — there was one play where there was a sideline view of the ball being passed up the field, and then it switched to a behind-the-goal view for the goal. I actually missed the goal the first time because I was so discombobulated that it took me a few seconds to even figure out where the ball was. In short, this video was created by a team that had zero understanding of the Vision Pro or why sports fans might be so excited about it. 
I never got the opportunity to feel like I was at one of these games, because the moment I started to feel the atmosphere or some amount of immersion there was another cut (and frankly, the cuts were so fast that I rarely if ever felt anything). This edit may have been perfect for a traditional 2D-video posted on YouTube; the entire point of immersive video on the Vision Pro, though, is that it is an entirely new kind of experience that requires an entirely new approach. I had the exact same response when you released a video of a Metallica concert last March : As for the concert itself, the video was indeed very cool. The opening shot following James Hetfield walking into the stadium was very compelling, and, well, it was immersive. And then you cut to another camera angle, and while that camera angle was also immersive, the video as a whole no longer was. What followed was a very enjoyable 30 minutes or so — I’ll probably watch it again — but it felt like a particularly neat documentary, not like I was at a concert. You had a monologue from each member of the band, you had shots of the crowd, you had three songs, all, as Apple proudly noted in their press release, shot with “14 Apple Immersive Video cameras using a mix of stabilized cameras, cable-suspended cameras, and remote-controlled camera dolly systems that moved around the stage.” That means the final product was edited together from those 14 cameras and the four interviews, which is to say it was a produced artifact of a live experience; at no point did I feel like I was at the concert. News flash: I didn’t watch the video again. I’m just not that interested in a TV-style documentary of Metallica. I added: We are nearly two years on from that introduction, and over a year beyond the actual launch of the Vision Pro, and there has yet to be an experience like I envisioned and thought was coming. What is frustrating is that the limiting factor is Apple itself: the company had 14 Apple Immersive Video cameras at this concert, but what I want is only one. I want an Apple Immersive Video camera planted in the audience, and the opportunity to experience the concert as if I were there, without an Apple editor deciding what I get to see and when. Needless to say, you probably already know why I thought Friday’s telecast was a big disappointment. I understand, Apple, why it’s not easy to record or even take a screenshot of a copyrighted game; please bear with me while I describe the experience using text. When I started the broadcast I had, surprise surprise, a studio show, specially tailored for the Apple Vision Pro. In other words, there was a dedicated camera, a dedicated presenter, a dedicated graphics team, etc. There was even a dedicated announcing team! This all sounds expensive and special, and I think it was a total waste. Here’s the thing that you don’t seem to get, Apple: the entire reason why the Vision Pro is compelling is because it is not a 2D screen in my living room; it’s an immersive experience I wear on my head. That means that all of the lessons of TV sports production are immaterial. In fact, it’s worse than that: insisting on all of the trappings of a traditional sports broadcast has two big problems: first, because it is costly, it means that less content is available than might be otherwise. And second, it makes the experience significantly worse . Jump ahead to game action. 
The best camera was this one on the scorer’s table: I have, as I noted, had the good fortune of sitting courtside at an NBA game, and this very much captured the experience. The biggest sensation you get by being close to the players is just how tall and fast and powerful they are, and you got that sensation with the Vision Pro; it was amazing. The problem, however, is that you would be sitting there watching Giannis or LeBron or Luka glide down the court, and suddenly you would be ripped out of the experience because the entirely unnecessary producer decided you should be looking through one of these baseline cameras under the hoop: These are also not bad seats! I’ve had the good fortune of sitting under the basket as well. These are the seats where you really get a sense of not just the power but also the physicality of an NBA game: I would gladly watch an entire game from here. But alas, I was only granted a few seconds, before the camera changed again. This was absolutely maddening — so maddening, that I am devoting a front page Article to a device no one but me cares about, in the desperate attempt to get someone at your company to listen. What makes the Vision Pro unique is the sense of presence: you really feel like you are wherever the Vision Pro takes you. In other words, when I’m wearing the Vision Pro, and the camera actually stays fixed — like, for example, when you set up a special fourth camera specifically for the Lakers Girls performance, which I think was the single longest continual shot in the entire broadcast — I get the sensation of sitting courtside at Crypto.com Arena, and it’s amazing. Suddenly $3,499 feels cheap! However, when I’m getting yanked around from camera to camera, the experience is flat out worse than just watching on TV. Just think about it: would it be enjoyable to be teleported from sideline to baseline to baseline and back again, completely at the whim of some producer, and often in the middle of the play, such that you have to get your bearings to even figure out what is going on? It would be physically uncomfortable — and that’s exactly what it was in the Vision Pro. What is so frustrating is that the right approach is so obvious that I wrote about it the day you announced this device: one camera, with no production. Just let me sit courtside and watch an NBA game. I don’t need a scoreboard, I can look up and see it. I don’t need a pre-game or post-game show, I can simply watch the players warm-up. I don’t need announcers, I’d rather listen to the crowd and the players on the court. You have made a device that, for this specific use case, is better than TV in every way, yet you insist on producing content for it like it is TV! Just stop! There will be more games this year; from your press release last October : Basketball fans will soon be able to experience NBA games like never before in Apple Immersive on Apple Vision Pro, with a selection of live Los Angeles Lakers matchups during the 2025-26 season, courtesy of Spectrum SportsNet. Viewers will feel the intensity of each game as if they were courtside, with perspectives impossible to capture in traditional broadcasts. The schedule of games will be revealed later this fall, with the first game streaming by early next year, available through the forthcoming Spectrum SportsNet app for Vision Pro. That schedule was announced last week , and there are six games total (including last Friday’s). Six! That’s it. 
I get it, though: producing these games is expensive: you need a dedicated studio host, a dedicated broadcast crew, multiple cameras, a dedicated production crew, and that costs money. Except you don’t need those things at all . All that you need to do, to not just create a good-enough experience but a superior experience, is simply set up the cameras and let me get from the Vision Pro what I can’t get from anything else: the feeling that I am actually there. And, I would add, you shouldn’t stop with the Lakers: there should be Vision Pro cameras at every NBA game, at every NFL game, at every NHL game, at every MLB game — they should be standard issue at every stadium in the world. There should be Vision Pro cameras at every concert hall and convention center. None of these cameras need a dedicated host or announcers or production crew, because the Vision Pro isn’t TV; it’s actual presence, and presence is all you need. $3,499 is a lot of money for physically uncomfortable TV; it’s an absolute bargain if it’s a way to experience any live experience in the world on demand. But, alas, you refuse. So nope, I still can’t recommend the Vision Pro, not because it’s heavy or expensive or has an external battery, but because you, Apple, have no idea what makes it special.

0 views
Stratechery 5 days ago

2026.02: AI Power, Now and In 100 Years

Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week. This week’s Stratechery video is on Netflix and the Hollywood End Game.

Will AI Replace Humans? Dorm room discussions are not generally Stratechery’s domain, but for the terminally online it’s getting harder to escape the insistence in some corners of the AI world that humans will no longer be economically necessary, and that we should make changes now to address what they insist is a foregone conclusion. I disagree: humans may be doomed, but as long as we are around, we’ll want other humans — and will build economies that reflect that fact. More generally, as we face this new paradigm, it’s essential to remember that technology is an amoral force: what is good or bad comes down to decisions we make, and trying to preemptively engineer our way around human nature guarantees a bad outcome. — Ben Thompson

The Future of Power Generation. Through the second half of last year it became clear that one of the defining challenges of the modern era in the U.S. will be related to power: we need more than we can generate right now, electrical bills are skyrocketing, and that tension figures to compound as AI becomes more integral to the economy. If these topics interest you (and they should!), I heartily recommend this week’s Stratechery Interview with Jeremie Eliahou Ontiveros and Ajey Pandey, two SemiAnalysis analysts who provide a rundown on what the biggest AI labs are doing to address these challenges, the importance of natural gas, and how the market is responding to the most fundamental AI infrastructure challenge of them all. Reminder on that last point: Bubbles have benefits! — Andrew Sharp

What China Thinks of What Happened in Caracas. On this week’s Sharp China, Bill Bishop and I returned from a holiday that was busier than expected and broke down various aspects of China’s response to the upheaval in Venezuela after the CCP’s “all-weather strategic partnership” with the Nicolás Maduro regime was rendered null and void by the United States. Topics include: A likely unchanged calculus on Taiwan, a propaganda gift, questions about oil imports, and Iran looming as an even bigger wild card for PRC fortunes. Related to all this, on Sharp Text this week I wrote about the logic of the Maduro operation on the American side, and the dizzying nature of decoding U.S. foreign policy in a modern era increasingly defined by cold war objectives but without cold war rhetoric. — AS

AI and the Human Condition — AI might replace all of the jobs; that’s only a problem if you think that humans will care, but if they care, they will create new jobs.

Nvidia and Groq, A Stinkily Brilliant Deal, Why This Deal Makes Sense — Nvidia is licensing Groq’s technology and hiring most of its employees; it’s the most potent application of tech’s don’t-call-it-an-acquisition deal model yet.

Nvidia at CES, Vera Rubin and AI-Native Storage Infrastructure, Alpamayo — Nvidia’s CES announcements didn’t have much for consumers, but affect them all the same.
An Interview with Jeremie Eliahou Ontiveros and Ajey Pandey About Building Power for AI — An Interview with Jeremie Eliahou Ontiveros and Ajey Pandey about how AI labs and hyperscalers are leveraging demand to build out entirely new electrical infrastructure for AI.

Notes from Schrödinger’s Cold War — Why the U.S. captured Nicolás Maduro, and the challenge of decoding U.S. foreign policy in an era defined by cold war objectives, but without cold war rhetoric.

CES and Humanity
Tahoe Icons
High-Five to the Belgrade Hand
The Remarkable Computers Built Not to Fail
China’s Venezuela Calculations; Japan’s Rare Earth Access; A Reported Pause on Nvidia Purchases; The Meta-Manus Deal Under Review
A New Year’s MVP Ballot, The Jokic Injury and a Jaylen Brown Renaissance, The Pistons and Their Possibilities
The Economy in the 22nd Century, Amoral Tech and Silicon Valley Micro-Culture, What Nvidia Is Getting From Groq

0 views

Dynamic Load Balancer in Intel Xeon Scalable Processor

Dynamic Load Balancer in Intel Xeon Scalable Processor: Performance Analyses, Enhancements, and Guidelines
Jiaqi Lou, Srikar Vanavasam, Yifan Yuan, Ren Wang, and Nam Sung Kim
ISCA'25

This paper describes the DLB accelerator present in modern Xeon CPUs. The DLB addresses a problem similar to the one discussed in the state-compute replication paper: how to parallelize packet processing when RSS (static NIC-based load balancing) is insufficient. Imagine a 100 Gbps NIC receiving a steady stream of 64B packets and sending them to the host. If RSS is inappropriate for the application, then another parallelization strategy would be for a single CPU core to distribute incoming packets to all of the others. To keep up, that load-distribution core would have to process 200M packets per second, but state-of-the-art results top out at 30M packets per second. The DLB is an accelerator designed to solve this problem.

Fig. 2 illustrates the DLB hardware and software architecture:

Source: https://dl.acm.org/doi/10.1145/3695053.3731026

A set of producer cores can write 16B queue elements (QEs) into a set of producer ports (PPs). In a networking application, one QE could map to a single packet. A set of consumer cores can read QEs out of consumer queues (CQs). QEs contain metadata which producers can set to enable ordering within a flow/connection, and to control relative priorities. The DLB balances the load at each consumer while honoring ordering constraints and priorities. A sketch of what such a queue element might carry appears at the end of this summary.

A set of cores can send QEs to the DLB in parallel without suffering too much from skew. For example, imagine a CPU with 128 cores. If DLB is not used, and instead RSS is configured to statically distribute connections among those 128 cores, then skew could be a big problem. If DLB is used, and there are 4 cores which write into the producer ports, then RSS can be configured to statically distribute connections among those 4 cores, and skew is much less likely to be a problem.

Fig. 5 shows that DLB works pretty well: two software load balancers are compared against a configuration which uses the DLB accelerator. DLB offers similar throughput and latency to RSS, but with much more flexibility.

Source: https://dl.acm.org/doi/10.1145/3695053.3731026

AccDirect

One awkward point in the design above is the large number of CPU cycles consumed by the set of producer cores which write QEs into the DLB. The paper proposes AccDirect to solve this. The idea is that the DLB appears as a PCIe device, and therefore a flexible NIC can use PCIe peer-to-peer writes to send packets directly to the DLB. The authors find that the NVIDIA BlueField-3 has enough programmability to support this. Fig. 9 shows that this results in significant power savings, but not too much of a latency improvement:

Source: https://dl.acm.org/doi/10.1145/3695053.3731026

Dangling Pointers

I feel like it is common knowledge that fine-grained parallelism doesn’t work well on multi-core CPUs. In the context of this paper, the implication is that it is infeasible to write a multi-core packet processor that primarily uses pipeline parallelism. Back-of-the-envelope: at 400 Gbps and 64B packets (roughly 780M packets per second, or just under 100M batches of 8 packets per second, leaving a few dozen cycles per batch on a core clocked at a few GHz), there is a budget of about 40 8-wide SIMD instructions to process a batch of 8 packets. If there are 128 cores, then maybe the aggregate budget is 4K instructions per batch of 8 packets across all cores. This doesn’t seem implausible to me.
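Based on the description above (16-byte QEs carrying a packet handle plus flow, ordering, and priority metadata), here is a sketch of what one queue element might look like. The field names and layout are hypothetical assumptions for illustration; the authoritative format is defined by Intel's DLB documentation.

```c
#include <stdint.h>

/* Hypothetical 16-byte queue element, mirroring the metadata the paper
   says producers can set: flow identity for ordering, and priority. */
struct dlb_qe {
    uint64_t pkt_addr;    /* pointer/handle to the packet buffer         */
    uint32_t flow_id;     /* QEs with the same flow_id are kept in order */
    uint8_t  priority;    /* relative scheduling priority                */
    uint8_t  sched_type;  /* e.g., ordered vs. unordered delivery        */
    uint16_t reserved;
};

/* The point of the compact format is that a producer core can emit one QE
   per packet with a single 16-byte store to a producer port. */
_Static_assert(sizeof(struct dlb_qe) == 16, "QE must be exactly 16 bytes");
```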

0 views
Stratechery 6 days ago

An Interview with Jeremie Eliahou Ontiveros and Ajey Pandey About Building Power for AI

An Interview with Jeremie Eliahou Ontiveros and Ajey Pandey about how AI labs and hyperscalers are leveraging demand to build out entirely new electrical infrastructure for AI.

0 views
Stratechery 1 weeks ago

Nvidia at CES, Vera Rubin and AI-Native Storage Infrastructure, Alpamayo

Nvidia's CES announcements didn't have much for consumers, but affect them all the same.

0 views

The XOR Cache: A Catalyst for Compression

The XOR Cache: A Catalyst for Compression
Zhewen Pan and Joshua San Miguel
ISCA'25

Inclusive caches seem a bit wasteful. This paper describes a clever mechanism for reducing that waste: store two cache lines of data in each physical cache line! In most of the paper there are only two caches in play (L1 and LLC), but the idea generalizes.

Fig. 1 illustrates the core concept. In this example, there are two cache lines (A and B) in the LLC. A is also present in the L1 cache of a core. Here is the punchline: the LLC stores A⊕B. If the core which has cached A experiences a miss when trying to access B, then the core resolves the miss by asking the LLC for A⊕B. Once that data arrives at the L1, B can be recovered by computing A⊕(A⊕B). The L1 stores A and B separately, so as to not impact L1 hit latency.

Source: https://dl.acm.org/doi/10.1145/3695053.3730995

Coherence Protocol

The mechanics of doing this correctly are implemented in a cache coherence protocol described in section 4 of the paper. We’ve discussed the local recovery case, where the core which needs B already holds A in its L1 cache. If the core which requires B does not hold A, then two fallbacks are possible. Before describing those cases, the following property is important to highlight: the cache coherence protocol ensures that if the LLC holds A⊕B, then some L1 cache in the system will hold a copy of either A or B. If some action was about to violate this property, then the LLC would request a copy of A or B from some L1, and use it to split A⊕B into separate cache lines in the LLC holding A and B.

Direct forwarding occurs when some other core holds a copy of B. In this case, the system requests the other core to send B to the core that needs it. The final case is called remote recovery. If the LLC holds A⊕B and no L1 cache holds a copy of B, then some L1 cache in the system must hold a copy of A. The LLC sends A⊕B to the core which has A locally cached. That core computes A⊕(A⊕B) to recover a copy of B and sends it to the requestor.

Another tricky case to handle is when a core writes to A or B. The cache coherence protocol handles this case similarly to eviction and ensures that the LLC will split cache lines as necessary so that all data is always recoverable.

The LLC has a lot of freedom when it comes to deciding which cache lines to pair up. The policies described in this paper optimize for intra-cache line compression (compressing the data within a single cache line). The LLC hardware maintains a hash table. When searching for a partner for cache line A, the LLC computes a hash of the contents of A to find a set of potential partners. One hash function described by the paper is sparse byte labeling, which produces 6 bits for each 8-byte word in a cache line. Each bit is set to 0 if the corresponding byte in the word is zero. The lower two bytes of each word are ignored. The idea here is that frequently the upper bytes of a word are zero. If two cache lines have the same byte label, then when you XOR them together the merged cache line will have many zero bytes (i.e., they have low entropy and are thus compressible). The LLC can optimize for this case by storing compressed data in the cache and thus increasing its effective capacity. A sketch of the XOR recovery and of this labeling appears at the end of this summary.

This paper relies on prior work related to compressed caches. The takeaway is that not only is there a potential 2x savings possible from logically storing two cache lines in one physical location, but there are also further savings from compressing these merged cache lines.

Fig. 13 compares the compression ratio achieved by XOR cache against prior work (taller bars are better). The right-most set of bars shows the geometric mean:

Source: https://dl.acm.org/doi/10.1145/3695053.3730995

Fig. 15 shows performance impacts associated with this scheme:

Source: https://dl.acm.org/doi/10.1145/3695053.3730995

Dangling Pointers

It seems to me that XOR Cache mostly benefits from cache lines that are rarely written. I wonder if there are ways to predict if a particular cache line is likely to be written in the near future.
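Here is a minimal C sketch of the two mechanisms described above: XOR-based recovery of a merged line, and the sparse byte labeling hash used to pick good partners. The exact bit ordering of the label is my assumption; the paper defines the precise encoding.

```c
#include <stdint.h>

#define LINE_BYTES 64

/* XOR two 64-byte lines. The LLC stores xor_lines(A, B); a core that
   already holds A recovers B by XORing the merged line with A again. */
void xor_lines(const uint8_t *x, const uint8_t *y, uint8_t *out) {
    for (int i = 0; i < LINE_BYTES; i++)
        out[i] = x[i] ^ y[i];
}

/* Sparse byte labeling sketch: 6 bits per 8-byte word, one per upper byte,
   set when that byte is non-zero; the two low bytes of each word are
   ignored. Lines with equal labels XOR into mostly-zero (compressible)
   merged lines. */
uint64_t sparse_byte_label(const uint8_t *line) {
    uint64_t label = 0;
    for (int w = 0; w < LINE_BYTES / 8; w++)
        for (int b = 2; b < 8; b++)                 /* skip bytes 0 and 1 */
            label = (label << 1) | (line[w * 8 + b] != 0);
    return label;                                   /* 8 words x 6 bits = 48 bits */
}
```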

0 views
Stratechery 1 weeks ago

Nvidia and Groq, A Stinkily Brilliant Deal, Why This Deal Makes Sense

Nvidia is licensing Groq's technology and hiring most of its employees; it's the most potent application of tech's don't-call-it-an-acquisition deal model yet.

0 views
Jason Fried 1 weeks ago

The big regression

My folks are in town visiting us for a couple months so we rented them a house nearby. It’s new construction. No one has lived in it yet. It’s amped up with state-of-the-art systems. You know, the ones with touchscreens of various sizes, IoT appliances, and interfaces that try too hard. And it’s terrible. What a regression.

The lights are powered by Control4. And require a demo to understand how to use the switches, understand which ones control what, and to be sure not to hit THAT ONE because it’ll turn off all the lights in the house when you didn’t mean to. Worse.

The TV is the latest Samsung which has a baffling UI just to watch CNN. My parents aren’t idiots, but definitely feel like they’re missing something obvious. They aren’t — TVs have simply gotten worse. You don’t turn them on anymore, you boot them up.

The Miele dishwasher is hidden flush with the counters. That part is fine, but here’s what isn’t: It wouldn’t even operate the first time without connecting it with an app. This meant another call to the house manager to have them install an app they didn’t know they needed either. An app to clean some peanut butter off a plate? For serious? Worse.

Thermostats... Nest would have been an upgrade, but these other proprietary ones from some other company trying to be Nest-like are baffling. Round touchscreens that take you into a dark labyrinth of options just to be sure it’s set at 68. Or is it 68 now? Or is that what we want it at, but it’s at 72? Wait... What? Which number is this? Worse.

The alarm system is essentially a 10” iPad bolted to the wall that has the fucking weather forecast on it. And it’s bright! I’m sure there’s a way to turn that off, but then the screen would be so barren that it would be filled with the news instead. Why can’t the alarm panel just be an alarm panel? Worse.

And the lag. Lag everywhere. Everything feels a beat or two behind. Everything. Lag is the giveaway that the system is working too hard for too little. Real-time must be the hardest problem.

Now look... I’m no luddite. But this experience is close to conversion therapy. Tech can make things better, but I simply can’t see it in these cases. I’ve heard the pitches too — you can set up scenes and one button can change EVERYTHING. Not buying it. It actually feels primitive, like we haven’t figured out how to make things easy yet. That some breakthrough will eventually come when you can simply knock a switch up or down and it’ll all make sense. But we haven’t evolved to that point yet.

It’s really the contrast that makes it alarming. We just got back from a vacation in Montana. Rented a house there. They did have a fancy TV — seems those can’t be avoided these days — but everything else was old school and clear. Physical up/down light switches in the right places. Appliances without the internet. Buttons with depth and physically-confirmed state change rather than surfaces that don’t obviously register your choice. More traditional round rotating Honeywell thermostats that are just clear and obvious. No tours, no instructions, no questions, no fearing you’re going to do something wrong, no wondering how something works. Useful and universally clear. That’s human; that’s modern. -Jason

0 views
Jeff Geerling 1 weeks ago

Raspberry Pi is cheaper than a Mini PC again (that's not good)

Almost a year ago, I found that N100 Mini PCs were cheaper than a decked-out Raspberry Pi 5. So comparing systems with:

16GB of RAM
512GB NVMe SSD
Including case, cooler, and power adapter

Back in March last year, a GMKtec Mini PC was $159, and a similar-spec Pi 5 was $208. Today? The same GMKtec Mini PC is $246.99, and the same Pi 5 is $246.95. Today, because of the wonderful RAM shortages, the Mini PC is the same price as a fully kitted-out Raspberry Pi 5.

0 views
Grumpy Gamer 1 weeks ago

This Time For Sure

I think 2026 is the year of Linux for me. I know I’ve said this before, but it feels like Apple has lost its way. Liquid Glass is the last straw, plus their draconian desire to lock everything down gives me moral pause. It is only a matter of time before we can’t run software on the Mac that wasn’t purchased from the App Store. I use Linux on my servers so I am comfortable using it, just not in a desktop environment. Some things I worry about:

A really good C++ IDE. I get a lot of advice for C++ IDEs from people who only use them now and then or just to compile, but don’t live in them all day and need to visually step into code and even ASM. I worry about CLion but am willing to give it a good try. Please don’t suggest an IDE unless you use them for hardcore C++ debugging.

I will still make Mac versions of my games and code signing might be a problem. I’ll have to look, but I don’t think you can do it without a Mac. I can’t do that on a CI machine because for my pipeline the CI machine only compiles the code. The .app is built locally and that is where the code signing happens. I don’t want to spin up a CI machine to make changes when the engine didn’t change. My build pipeline is a running bash script; I don’t want to be hopping between machines just to do a build (which I can do 3 or 4 times a day).

The only monitor I have is a Mac Studio monitor. I assume I can plug a Linux machine into it, but I worry about the webcam. It wouldn’t surprise me if Apple made it Mac only.

The only keyboard I have is a Mac keyboard. I really like the keyboard, especially how I can unlock the computer with the touch of my finger. I assume something like this exists for Linux.

I have an iPhone but I only connect it to the computer to charge it. So not an issue.

I worry about drivers for sound, video, webcams, controllers, etc. I know this is all solvable but I’m not looking forward to it. I know from releasing games on Linux our number-one complaint is related to drivers.

Choosing a distro. Why is this so hard? A lot of people have said that it doesn’t really matter so just choose one. Why don’t more people use Linux on the Desktop? This is why. To a Linux desktop newbie, this is paralyzing.

I’m going to miss Time Machine for local backups. Maybe there is something like it for Linux.

I really like the Apple M processors. I might be able to install Linux on Mac hardware, but then I really worry about drivers. I just watched this video from Veronica Explains on installing Linux on Mac silicon.

The big big worry is that there is something big I forgot. I need this to work for my game dev. It’s not a weekend hobby computer.

I’ve said I was switching to Linux before; we’ll see if it sticks this time. I have a Linux laptop but when I moved I didn’t turn it on for over a year and now I get BIOS errors when I boot. Some battery probably went dead. I’ve played with it a bit and nothing seems to work. It was an old laptop and I’ll need a new faster one for game dev anyway. This will be a long, well-thought-out journey. Stay tuned for the “2027 - This Time For Sure” post.

1 views
Kev Quirk 1 weeks ago

I've Pre-Ordered the Clicks Communicator

I've yearned for a Blackberry form-factor for years, and now Clicks have made that wish come true. I had to pre-order one! If you don’t know what the Clicks Communicator is, this 12 minute video should help: BlackBerry’s design will always have a special place in my heart. I much prefer a physical keyboard over a touchscreen, and I’ve said many times that smartphones are far too big these days. The Clicks Communicator is smaller and it has a proper QWERTY keyboard. It is all very BlackBerry, and I love that. The team have also teamed up with the Niagara Launcher developer to deliver a more focused UI. That was yet more good news for me, as I already use Niagara Launcher on my Pixel 9a. It felt like a match made in heaven, so I pre-ordered one immediately. In all honesty, I do not understand why Clicks are marketing the Communicator as a companion device. I assume they are positioning it as a slimmed down alternative for people who still want a flagship phone, but that framing feels odd. It will be running full fat Android 16, and their FAQ confirms (in the very first question, no less) that the Communicator can be used as a primary device. That is exactly how I intend to use it. The companion device messaging is confusing. At first, I assumed it was something closer to the Light Phone , but it is not that at all. It’s a normal phone. I am not a marketer, so perhaps there is a strategy here that I am missing, I just hope it does not hurt their sales. Either way, I am genuinely looking forward to receiving my Clicks Communicator later this year. I will, of course, write about it once it arrives. Has this cool new phone piqued anyone else’s interest? Thanks for reading this post via RSS. RSS is great, and you're great for using it. ❤️ You can reply to this post by email , or leave a comment .

0 views
Bill Mill 1 weeks ago

my tools in 2026

Here's a brief survey of the tools I'm currently using I use a 14-inch MacBook Pro M1 Max that I bought in 2021. It is, on balance, the best computer I have ever owned. The keyboard is usable (not the best macbook keyboard ever, but... fine), the screen is lovely, it sleeps when it's supposed to sleep and the battery still lasts a long time. The right shift key is broken, but using the left one hasn't proved to be much of a challenge. I usually replace my computers every 5 years, but I don't see any reason why I'd need a new one next year. The most important piece of software on my computer is neovim . I've been using vim-ish software since 2003 or so, and on balance I think the neovim team has done a great job shepherding the software into the modern age. I get a bit irritated that I need to edit my configuration more often than I used to with Vim, but coding tools are changing so fast right now that it doesn't seem possible to go back to the world where I edited my config file once every other year. My vim config is available here . I won't list all the plugins; there aren't that many , and most of them are trivial, rarely-used or both. The ones I need are: Now that vim has native LSP support, I could honestly probably get away with just those plugins. Here's a screenshot of what nvim looks like in a terminal window for me, with a telescope grep open: I use kitty and my config is here . I'd probably switch to ghostty if it supported opening hyperlinks in a terminal application , but it doesn't. I use this feature constantly so I want to explain a bit about why I find it so valuable. A common workflow looks like this: Here's a video demonstrating how it works: The kitty actions are configured in this file - the idea is that you connect a mime type or file extension to an action; in this case it's for a filename without a line number, and if it has a line number. I switched from Firefox to Orion recently, mostly because I get the urge to switch browsers every six months or so when each of their annoyances accumulate. Orion definitely has bugs, and is slow in places, and Safari's inspector isn't nearly as nice as Firefox or Chrome. I wouldn't recommend other people switch, even though I'm currently enjoying it. That's it, I don't use any more. Browser plugins have a terrible security story and should be avoided as much as possible. I use Obsidian , which I publish to the web with the code here (see generating HTML )) It's not clear to me why I like using obsidian rather than just editing markdown files in vim, but I'm very happy with it. I use Apple's Mail.app. It's... fine enough I guess. I also occasionally use the fastmail web app, and it's alright too. I am very happy with Fastmail as an email host, and glad I switched from gmail a decade or so ago. I use Slack for work chat and several friend group chats. I hate it even though it's the best chat app I've ever used. I desperately want a chat app that doesn't suck and isn't beholden to Salesforce, but I hate Discord and IRC. I've made some minor attempts at replacing it with no success. It's the piece of software I use day to day that I would most love to replace. I also use Messages.app for SMS and texts I switched in October from Spotify to Apple Music. I dislike Apple Music, but I also disliked Spotify ever since I switched from Rdio when it died. I'm still on the lookout for a good music listening app. Maybe I'll try Qobuz or something? I don't know. 
I've also used yt-dlp to download a whole bunch of concerts and DJ sets from youtube (see youtube concerts for a list of some of them) and I often listen to those. I have an old iPhone mounted to my desk, and use Reincubate Camo to connect it to video apps. I occasionally use OBS to record a video or add a goofy overlay to video calls, but not that often. I use Adobe Lightroom to import photos from my Fuji X-T30 and Apple Photos to manage photos. With the demise of flickr, I really have no place to post my photos and I've considered adding something to my website but haven't gotten it done. I use vlc and IINA for playing videos, and ffmpeg for chopping them up from the command line Software editor neovim plugins terminal software Browser browser plugins Note Taking telescope.nvim I have "open by filename search" bound to and "open by grep search" bound to , and those are probably the two most common tools I use in vim. Telescope looks great and works fast (I use telescope-fzf-native for grep) codecompanion I added this in January last year, and it's become the default way I interact with LLMs. It has generally very nice features for working with buffers in vim and handles communications with LLMs cleanly and simply. You don't need Cursor et al to work with LLMs in your editor! I do use claude code for agentic missions, as I have not had much success with agentic mode in codecompanion - I use it more for smaller tasks while I'm coding, and it's low-friction enough to make that pretty painless sonokai colorscheme I use a customized version , and it makes me happy. I to a project directory I either remember a file name or a string I can search for to find where I want to work In the former case, I do In the latter, I do Then I click on the filename or line number to open that file or jump to that line number in a file I love mise for managing versions of programming environments. I use it for node, terraform, go, python, etc etc I have files in most of my projects which set important environment variables, so it has replaced direnv for me as well fd for finding files, a better find ripgrep for grepping atuin for recording my command line history GNU Make and occasionally Just for running tasks is broken and annoying in old, predictable ways; but I know how it works and it's available everywhere here's an example makefile I made for a modern js project is modern in important ways but also doesn't support output file targets, which is a feature I commonly use in ; see the above file for an example gh for interacting with github. I particularly use my alias for quite a lot to open pull requests jq for manipulating json llm for interacting with LLMs from the command line; see An AI tool I find useful for an example Bitwarden - I dunno, it's fine, it mostly doesn't annoy me uBlock origin - I'm very happy with it as an ad blocker

0 views
Stone Tools 1 weeks ago

XPER on the Commodore 64

In 1984, Gary Kildall and Stewart Cheifet covered "The Fifth Generation" of computing and spoke with Edward Feigenbaum, the father of "expert systems." Kildall started the show saying AI/expert systems/knowledge-based systems (it's all referred to interchangeably) represented a "quantum leap" in computing. "It's one of the most promising new softwares we see coming over the horizon." One year later, Kildall seemed pretty much over the whole AI scene. In an episode on "Artificial Intelligence" he did nothing to hide his fatigue from the guests: "AI is one of those things that people are pinning to their products now to make them fancier and to make them sell better." He pushed back hard against the claims of the guests, and seemed less-than-impressed with an expert system demonstration. The software makers of those "expert systems" begged to differ. There is a fundamental programmatic difference in the implementation of expert systems which enables a radical reinterpretation of existing data, they argued. Guest Dr. Hubert Dreyfus re-begged to re-differ, suggesting it should really be called a "competent system." Rules-based approaches can only get you about 85% of the way toward expertise; it is intuition which separates man from machine, he posited. I doubt Dreyfus would have placed as high as 85% competence on a Commodore 64. The creator of XPER, Dr. Jacques Lebbe, was undeterred, putting what he knew of mushrooms into it to democratize his knowledge. XPER, he reasoned, could do the same for other schools of knowledge even on humble hardware. So, just how much expertise can one cram into 64K anyway? So, what is an "expert system" precisely? According to Edward Feigenbaum, creator of the first expert system DENDRAL, in his book The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World, "It is a computer program that has built into it the knowledge and capability that will allow it to operate at the expert's level. (It is) a high-level intellectual support for the human expert." That's a little vague, and verges on over-promising. Let's read on. "Expert systems operate particularly well where the thinking is mostly reasoning, not calculating - and that means most of the world's work." Now he's definitely over-promising. After going through the examples of expert systems in use, it boils down to a system which can handle combinatorial decision trees efficiently. Let's look at an example. A doctor is evaluating a patient's symptoms. A way to visualize her thought process for a diagnosis might take the below form. An expert system says, "That looks like a simple decision tree. I happen to know someone who specializes in things like that, hint hint." XPER is a general-purpose tool for building such a tree from expert knowledge, carrying the subtle implication (hope? prayer?) that some ephemeral quality of the decision making process might also be captured as a result. Once the tree is captured, it is untethered from the human expert and can be used by anyone. XPER claims you can use it to build lots of interesting things. It was created to catalog mushrooms, but maybe you want to build a toy. How about a study tool for your children? Let's go for broke and predict the stock market! All are possible, though I'm going to get ahead of your question and say one of those is improbable. I have a couple of specific goals this time. First, the tutorial is a must-do, just look at this help menu. This is the program trying to HELP ME.
After I get my head around that alphabet soup, I want to build a weather predictor. The manual explicitly states it as a use case and by gum I'ma gonna do it. I'm hoping that facts like, "Corns ache something intolerable today" and "Hip making that popping sound again" can factor into the prediction at some point. First things first, what does this program do? I don't mean in the high-level, advertising slogan sense, I mean "What specific data am I creating and manipulating with XPER ?" It claims "knowledge" but obviously human knowledge will need to be molded into XPER knowledge somehow. Presently, we don't speak the same language. XPER asks us to break our knowledge down into three discrete categories, with the following relationships of object, feature, and attribute: My Gen-X BS alarm is ringing that something's not fully formed with this method for defining knowledge. Can everything I know really be broken down into three meager components and simple evaluations of their qualities? Defining objects happens in a different order than querying, which makes it a little fuzzy to understand how the two relate. While we define objects as collections of attributes, when querying against attributes to uncover matching objects. The program is well-suited to taxonomic identification. Objects like mushrooms and felines have well-defined, observable attributes that can be cleanly listed. A user of the system could later go through attribute lists to evaluate, "If a feline is over 100kg , has stripes , and climbs trees which feline might it be?" For a weather predictor, I find it difficult to determine what objects I should define. My initial thought was to model "a rainy day" but that isn't predictive. What I really want is to be able to identify characteristics which lead into rainy days. "Tomorrow's rain" is an attribute on today's weather, I have naively decided. This is getting at the heart of what XPER is all about; it is a vessel to hold data points. Choosing those data points is the real work, and XPER has nothing to offer the process. This is where the manual really lets us down. In the Superbase 64 article, I noted how the manual fails by not explaining how to transition from the "old way" to the "new way" of data cataloging. For a program which suggests building toys from it, the XPER manual doesn't provide even a modicum of help in understanding how to translate my goals into XPER objects. The on-disk tutorial database of "felines" shows how neatly concepts like "cat identification" fit into XPER framing. Objects are specific felines like "jaguar," "panther," "mountain lion." Features suggest measureable qualities like "weight," "tree climbing," "fur appearance," "habitat" etc. Attributes get more specific, as "over 75kg," "yes," "striped," and "jungle." For the weather predictor, the categories of data are similarly precise. "cloud coverage," "temperature," "barometer reading," "precipitation," "time of year," "location," and so forth may serve our model. Notice that for felines we could only define rough ranges like "over 75kg" and not an exact value. We cannot set a specific weight and ask for "all cats over some value." XPER contains no tools for "fuzzy" evaluations and there is no way to input continuous data. Let's look at the barometer reading, as an example. Barometer data is hourly, meaning 24 values per day. How do I convert that into a fixed value for XPER ? To accurately enter 24 hours of data, I would need to set up hourly barometer features and assign 30? 50? 
To accurately enter 24 hours of data, I would need to set up hourly barometer features and assign 30? 50? possible attributes for the barometer reading. Should we do the same for temperature? Another 24 features, each with 30 or more attributes, one per degree change? Precipitation? Cloud coverage? Wind speed and direction? Adorableness of feline? Besides the fact that creating a list of every possible barometric reading would be a ridiculous waste of time, it's not even possible in the software. A project is limited to 250 objects, 50 features, and 300 attributes, with no more than 14 attributes in any given feature (see the notes at the end). We must think deeply about what data is important to our problem, and I'd say that not even the expert whose knowledge is being captured would know precisely how to structure XPER for maximum accuracy. The Fifth Generation warns us: "GiT GuD" as the kids say. (Do they still say that?!)

The graphic above, output by XPER's "Printer" module, reveals the underlying data structure of the program. Its model of the data is called a "frame," a flat 2-D graph where objects and attributes collide. That's it. Kind of anticlimactic, I suppose, but it imbues our data with tricks our friend Superbase can't perform.

First, this lets us query the data in human-relatable terms, as a kind of Q&A session with an expert. "Is it a mammal?" "Does it have striped fur?" "Does it go crazy when a laser pointer shines at a spot on the floor?" Through a session, the user is guided toward an object, by process of elimination, which matches all known criteria, if one exists.

Second, we can set up the database to exclude certain questions depending upon previous answers. "What kind of fur does it have?" is irrelevant if we told it the animal is a fish, and features can be set up to have such dependencies. This is called a father/son relationship in the program, and also a parent/child relationship in the manual. Put simply, "fur depends on being a mammal."

Third, we can do reverse queries to extract new understandings which aren't immediately evident. In the feline example it isn't self-evident, but can be extracted, that "all African felines which climb trees have retractile claws." For the weather predictor I hope to see if "days preceding a rainy day" share common attributes.

The biggest frustration with the system is how all knowledge is boxed into the frame, and for the weather predictor this is especially limiting. With zero relationship between data points, trends cannot be identified. Questions which examine change over time are not possible, just "Does an object have an attribute, yes or no?" To simulate continuous data, I need to pre-bake trends of interest into each object's attributes. For example, I know the average barometric pressure for a given day, but because XPER can't evaluate prior data, it can't evaluate whether the pressure is rising or falling. Since it can't determine this for itself, I must encode that as a feature like "Barometric Trend" with attributes "Rising," "No Change," and "Falling." The more I think about the coarseness with which I am forced to represent my data, the clearer it is to me how much is being lost with each decision. That 85/15 competency is looking more like 15/85 in the other direction.

Collecting data for the weather predictor isn't too difficult. I'm using https://open-meteo.com to pull a spreadsheet covering one month of data. I'll coalesce hourly readings, like barometric pressure, into average daily values. Temperature will be a simple "min" and "max" for the day. Precipitation will be represented as the sum of all precipitation for the day. And so on. As a professional not-a-weather-forecaster, I'm pulling whatever data strikes me as "interesting."
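To make that "pre-baking" step concrete, here is a sketch of how one day's hourly readings could be collapsed into the coarse attributes XPER can actually hold. The bin edges, labels, and function names are my own guesses for illustration; nothing here comes from XPER or from open-meteo's column names.

```python
# Collapse one day's hourly readings into coarse, XPER-friendly attributes.
# All thresholds and attribute labels below are invented for illustration.
def daily_attributes(hourly_pressure_hpa, hourly_temp_c, hourly_precip_mm,
                     prev_day_avg_pressure_hpa):
    avg_pressure = sum(hourly_pressure_hpa) / len(hourly_pressure_hpa)

    # Feature "Barometric Pressure": binned, since XPER has no continuous values.
    if avg_pressure < 1005:
        pressure_attr = "Low"
    elif avg_pressure <= 1020:
        pressure_attr = "Normal"
    else:
        pressure_attr = "High"

    # Feature "Barometric Trend": pre-bakes the day-over-day change that XPER
    # cannot compute for itself.
    delta = avg_pressure - prev_day_avg_pressure_hpa
    trend_attr = "Rising" if delta > 1 else ("Falling" if delta < -1 else "No Change")

    return {
        "Barometric Pressure": pressure_attr,
        "Barometric Trend": trend_attr,
        "Max Temp": "Warm" if max(hourly_temp_c) >= 15 else "Cool",
        "Min Temp": "Mild" if min(hourly_temp_c) >= 5 else "Cold",
        "Precipitation": "Rain" if sum(hourly_precip_mm) > 0.5 else "None",
    }
```

Each returned value then becomes one attribute choice when keying the day into XPER, and every threshold is a place where detail quietly leaks away.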
In the spirit of transparency, I mentally abandoned the "expert" part of "expert system" pretty early on. This guy *points at self with thumbs* ain't no expert. Having somewhat hippy-dippy parents, I've decided that Mother Earth holds secrets which elude casual human observation. To that end, I'm including "soil temperature (0 - 7cm)" as a data point, along with cloud coverage and relative humidity, to round out my data for both systematic and "I can't spend months of my life on this project" reasons.

After collecting November data for checkpoint years 2020, 2022, and 2024, actually entering the data is easier than expected. XPER provides useful F-key shortcuts which let me step through objects and features swiftly. What I thought would take days to input wound up only being a full afternoon. Deciding which data I want, collecting it, and preparing it for input was the actual work, which makes sense. Entering data is easy; becoming the expert is hard.

Even as I enter the data, I catch fleeting glimpses of patterns emerging, and they're not good. It's an interesting phenomenon, having utterly foreign data start to feel familiar. Occasionally I accidentally correctly predict whether the next day's weather has rain. Am I picking up on some subliminal pattern? If so, might XPER "see" what I'm seeing? I'm not getting my hopes up, but I wonder if early fascination with these forays into AI was driven by a similar feeling of possibility. We're putting information into a system and legitimately not knowing what will come out of the processing. There is a strong sense of anticipation, a powerful gravity to this work. It is easy to fool myself into believing I'm unlocking a cheat code to the universe. Compare this to modern-day events if you feel so inclined.

At the same time, there's obviously not enough substance to this restricted data subset. As I enter that soil temperature data, 90% of the values keep falling into the same bin. My brainstorm for this was too clever by half, and wrong. As well, I sometimes find that I'm entering exactly the same information twice in a row, but the weather results are different enough to make me pause.

Expert systems have a concept of "discriminating" and "non-discriminating" features. If a given data point for every object in a group of non-eliminated objects is the same, that data point is said to be "non-discriminating." In other words, "it don't matter" and will be skipped by XPER during further queries. The question then is, whose fault is this? Did I define my data attributes incorrectly for this data point, or is the data itself dumb? I can only shrug, "Hey, I just work here."
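For the curious, "non-discriminating" is easy to state in code. A minimal sketch, with invented object and feature names:

```python
# Among the objects still in the running, any feature whose attribute is
# identical across all of them tells us nothing and can be skipped.
# The data and names below are invented for illustration.
def non_discriminating(remaining_objects):
    """Return features whose attribute is the same for every remaining object."""
    rows = list(remaining_objects.values())
    if not rows:
        return []
    return [feature for feature in rows[0]
            if all(row.get(feature) == rows[0][feature] for row in rows[1:])]

remaining = {
    "Nov 3, 2024":  {"Cloud Cover": "Overcast", "Soil Temp": "Cool", "Tomorrow": "Rain"},
    "Nov 12, 2024": {"Cloud Cover": "Overcast", "Soil Temp": "Cool", "Tomorrow": "Dry"},
}
print(non_discriminating(remaining))  # ['Cloud Cover', 'Soil Temp'] -- "it don't matter"
```

Which is exactly the uncomfortable situation above: identical inputs, different weather, and nothing left to tell the two days apart.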
XPER has a bit of a split personality. Consider how a new dataset is created. From the main menu, enter the Editor. From there you have four options. First, I go to the option with the seemingly redundant name "Initializing Creating." Then I set up any features, attributes, and objects I know about, return to this screen, and save. Later I want to create new objects. I choose "Creating" and am asked, "Are you sure y/n". Am I sure? Am I sure about what? I don't follow, but yes, I am sure I want to create some objects. I confirm, and I'm back at a blank screen, my data wiped.

That word "initializing" is doing the heavy lifting on this menu. "Initialize" means "first-time setup of a dataset," which also, almost as a side effect, gives the user an opportunity to input whatever data happens to be known at that moment. "Initial Creation" might be more accurate? Later, when you want to add more data, that means you now want to edit your data, called "revising" in XPER, and that means the revising option. The initializing option is only ever used the very first time you start a new data set; revising is for every time you append or delete afterward.

The prompts and UI are unfortunately obtuse and unhelpful. "Are you sure y/n" is too vague to make an informed decision. The program would benefit greatly from a status bar displaying the name of the current in-memory dataset, whether it has been saved or not, and a hint on how close we are to the database limit. Prompts should be far more verbose, explaining intent and consequence. A status bar showing the current data set would be especially useful because of the other weird quirk of the program: how often it dumps data to load in a new part of the program. XPER is four independent programs bound together by a central menu. Entering a new area of the program means effectively loading a new program entirely, which requires its own separate data load. If you see the prompt "Are you sure y/n" what it really means is, "Are you sure you want me to forget your data, because the next screen you go to will not preserve it. y/n" That's wordy, but honest.

With that lesson learned, I'm adding three more data points to the weather predictor: temperature trend, barometric trend, and vapor pressure deficit (another "gut feeling" choice on my part). Trends should make up a little for the lack of continuous data. This will give me a small thread of data which leads into a given day, the data for that day, and a little data leading out into the next day. That fuzzes up the boundaries. It feels right, at the very least. Appending the new information is easy and painless. Before, I used F3/F4 to step through all features of a given object. This time I'm using F5/F6 to step through a single feature across all objects. This only took about fifteen minutes.

I'm firmly in "manual memory management" territory with this generation of hardware. Let's see where we sit relative to the maximum potential. Features like this really make one appreciate the simple things in life, like a mouse, GUI checklists, and simple grouping mechanisms.

XPER can compare objects or groups of objects against one another, identifying elements which are unique to one group or the other. You get two groups, full stop. Items in those groups, and only those groups, will be compared when the comparison command is used. We can put objects individually into one of those two groups, or we can create an object definition and request that "all objects matching this definition go into group 1 or 2". This is called a STAR object. I created two star objects: one with tomorrow's weather as rain, and one with tomorrow's weather as every type except rain. Groups were insta-built with a simple command that assigns a star object to each group, one of them being my "rainy day" star object. I can ask for an AND or OR comparison between the two groups, and with any luck some attribute will be highlighted (inverted text) or otherwise marked as being unique or exclusive to one group or the other. If we find something, we've unlocked the secret to rain prediction! Take THAT, Cobra Commander! Contrary to decades of well-practiced Gen-X cynicism, I do feel a tiny flutter of "hope" in my stomach. Let's see what the XPER analysis reveals!

The only thing unique between rainy days and not is the fact that it rained.

The Jaccard Distance, developed by Grove Karl Gilbert in 1884, is a measure of the similarity/diversity between two sets (as in "set theory" sets). The shorter the distance, the more "similar" the compared sets are. XPER can measure this distance between objects: given the object ID of interest, it will run a distance check of that object against all other objects. On my weather set with about 90 objects, it took one minute to compare Nov. 1, 2020 with all other days at 100% C64 speed. Not bad!
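Both of the tricks just described, the two-group comparison and the Jaccard distance, reduce nicely to set operations. A rough sketch with invented weather attributes (this is the idea, not XPER's syntax):

```python
# Each object is reduced to a set of (feature, attribute) pairs.
# The group contents below are invented for illustration.
def exclusive_attributes(group_a, group_b):
    """Pairs that appear somewhere in one group and nowhere in the other."""
    seen_a = set().union(*group_a)
    seen_b = set().union(*group_b)
    return seen_a - seen_b, seen_b - seen_a

def jaccard_distance(a, b):
    """1 - |A intersect B| / |A union B|: 0.0 for identical sets, 1.0 for disjoint ones."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

rainy = [{("Barometric Trend", "Falling"), ("Cloud Cover", "Overcast")},
         {("Barometric Trend", "Falling"), ("Cloud Cover", "Broken")}]
dry   = [{("Barometric Trend", "Rising"), ("Cloud Cover", "Clear")},
         {("Barometric Trend", "No Change"), ("Cloud Cover", "Overcast")}]

only_rainy, only_dry = exclusive_attributes(rainy, dry)
print(only_rainy)                          # here, falling pressure and broken cloud appear only on rainy days
print(jaccard_distance(rainy[0], dry[0]))  # 1.0 -- these two particular days share nothing
```

On the real dataset, as noted above, the only attribute exclusive to rainy days turned out to be the rain itself.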
What can we squeeze out of this thing? By switching into "Inquirer" mode, then loading up the data set of interest, a list of top-level object features is presented. Any features not masked by a parent feature are "in the running" as possible filters to narrow down our data. So, we start by entering what we know about our target subject. One by one, we fill in information by selecting a feature, then selecting the attribute(s) of that feature, and the database updates its internal state, quietly eliminating objects which fall outside our inquiry. There is a command which looks at the "remaining objects," meaning "objects which have not yet been eliminated by our inquiry so far." Running it against the "jaguar," we can ask XPER to tell us which features, in order, we should answer to narrow down to the jaguar as quickly as possible. It essentially ranks the features in order of importance to that specific object. It sounds a bit like feature weighting, but it's definitely not; XPER isn't anywhere close to that level of sophistication. In this data set, if I answer "big" for "prey size" I immediately zero in on the jaguar, it being the currently most-discriminating feature for that feline.

You might be looking at this and wondering how, exactly, this could possibly predict the weather. You and me both, buddy. The promise of Fifth Gen systems and the reality are colliding pretty hard now.

Feigenbaum and "The Fifth Generation" have been mentioned a few times so far, so I should explain that a little. Announced in 1981, started in 1982, and lasting a little more than a decade, "The Fifth Generation" of computing was Japan's moniker for an ambitious nationwide initiative. According to the report of Japan's announcement, Fifth Generation Computer Systems: Proceedings of the International Conference on Fifth Generation Computer Systems, Tokyo, Japan, October 19-22, 1981, Japan had four goals:

To increase productivity in low-productivity areas.
To meet international competition and contribute toward international cooperation.
To assist in saving energy and resources.
To cope with an aged society.

In Fifth Generation Computers: Concepts, Implementation, and Uses (1986), Peter Bishop wrote, "The impact on those attending the conference was similar to that of the launch of the Soviet Sputnik in 1957." During a hearing before the Committee on Science and Technology in 1981, Representative Margaret Heckler said, "When the Soviets launched Sputnik I, a remarkable engineering accomplishment, the United States rose to the challenge with new dedication to science and technology. Today, our technology lead is again being challenged, not just by the Soviet Union, but by Japan, West Germany, and others." Scott Armstrong, writing for The Christian Science Monitor in 1983 in an article titled "Fuchi - Japan's computer guru," said, "The debate now - one bound to intensify in the future - is whether the US needs a post-Sputnik-like effort to counter the Japanese challenge. Japan's motive (reflects) a sense of nationalism as much as any economic drive." Innovation Policies for the 21st Century: Report of a Symposium (2007) remarked of Japan's Fifth Generation inroads into supercomputers, "This occasioned some alarm in the United States, particularly within the military." It would be fair to say there was "Concern," with a capital C.
In 1989's The Fifth Generation: The Future of Computer Technology (separate from Feigenbaum's The Fifth Generation), Jeffrey Hsu and Joseph Kusnan said Japan had three research projects:

Superspeed Computer Project (the name says it all)
The Next-Generation Industries Project (developing the industrial infrastructure to produce components for a superspeed computer)
Fifth-Generation Computer Project

The "Fifth Generation" was specifically the software side, which the conference claimed "will be knowledge information processing systems based on innovative theories and technologies that can offer the advanced functions expected to be required in the 1990's overcoming the technical limitations inherent in conventional computers."

Expert systems played a huge role during the AI boom of the 80s, possibly by distancing themselves from "AI" as a concept, focusing instead on far more plausible goals. They're adjacent to, but aren't really, "artificial intelligence." This Google N-gram chart shows how "expert system" had more traction than the ill-defined "artificial intelligence." Though they do contain interesting heuristics, there is no "intelligence" in an expert system. Even the state of the art demonstrated on Computer Chronicles looked no more "intelligent" than a Twine game. That sounds non-threatening; I don't think anyone ever lost a job to a Choose Your Own Adventure book. In those days, even something that basic had cultural punch.

Feigenbaum's The Fifth Generation foreshadowed today's AI climate, if perhaps a bit blithely. That guy wasn't alone. In 1985, Aldo Cimino, of Campbell's Soup Co., had his 43 years of experience trouble-shooting canned soup sterilizers dumped onto floppy by knowledge engineers before he retired. They called it "Aldo on a Disk" for a time. He didn't mind, and made no extra money off the brain dump, but said the computer "only knows 85% of what he does." Hey, that's the same percentage Hubert Dreyfus posited at the start of this article! That system was retired about 10 years later, suffering from the same thing that a lot of expert systems of the day did: brittleness. From the paper "Expert Systems and Knowledge-Based Engineering (1984-1991)" by Jo Ann Oravec: "Brittleness (inability of the system to adapt to changing conditions and input, thus producing nonsensical results) and “knowledge engineering bottlenecks” were two of the more popular explanations why early expert system strategies have failed in application." Basically, such systems were inflexible to changing inputs (that's life), and nobody wanted to spend the time or money to teach them the new rules. The Campbell's story was held up as an exemplar of the success possible with such systems, and even it couldn't keep its job. It was canned. (Folks, jokes like that are the Stone Tools Guarantee™ an AI didn't write this.)

Funnily enough, the battles lost during the project may have actually won the war. There was a huge push toward parallelism in compute during this period. You might be familiar with a particularly gorgeous chunk of hardware called the Connection Machine. Japan's own highly parallel computers, the Parallel Inference Machines (PIM), running software built with their own bespoke programming language, KL1, seemed like the future. Until they didn't. PIM and Thinking Machines and others all fell to the same culprit. Any gains enjoyed by parallel systems were relatively slight, and the software to take advantage of those parallel processors was difficult to write. In the end, the rise of fast, cheap CPUs evaporated whatever advantages parallel systems promised. Today we've reversed course once more on our approach to scaling compute.
As Wikipedia says, "the hardware limitations foreseen in the 1980s were finally reached in the 2000s" and parallelism became fashionable once more. Multi-core CPUs and GPUs with massive parallelism are now put to use in modern AI systems, bringing Fifth Generation dreams closer to reality 35 years after Japan gave up.

In a "Webster's defines an expert system as..." sense, I suppose XPER meets a narrow definition. It can store symbolic knowledge in a structured format and allow non-experts to interrogate expert knowledge and discover patterns within. That's not bad for a Commodore 64! If we squint, it could be mistaken for a "real" expert system at a distance, but it's not a "focuser." It borrows the melody of expert systems, yet is nowhere near the orchestral maneuverings of its true "fifth generation" brothers and sisters. Because XPER lacks inference, the fuzzy result of an inquiry relies on the human operator to make sense of it. Except for mushroom and feline taxonomy, you're unlikely to get a "definitive answer" to queries. Rather, the approach is to put in data and hope to narrow the possibility space down enough to have something approachable. Then, look through that subset and see if a tendency can be inferred. The expert was in our own hearts all along.

Before I reveal the weather prediction results, we must heed an omen from page 10 of the manual. I'm man enough to admit my limits: I'm a dummy. When a dummy feeds information into XPER, the only possible result is that XPER itself also becomes a dummy. With that out of the way, here's a Commodore 64, using 1985's state-of-the-art AI expert system, predicting tomorrow's weather over two weeks.

Honestly, not a lot. Ultimately, this wound up being far more "toy" than "productivity," much to my disappointment. A lot of that can be placed on me, for not having an adequate sense of the program's limitations going in. Some of that's on XPER though, making promises it clearly can't keep. Perhaps that was pie-in-the-sky thinking, and a general "AI is going to change everything" attitude. Everyone was excited for the sea-change! It was used for real scientific data analysis by real scientists, so it would be very unfair of me to dismiss it entirely. On the other hand, there were contemporary expert systems on desktop microcomputers which provided far more robust implementations, along with the advantages of genuine heuristic evaluation. In that light, XPER can't keep up, though it is a noble effort.

Overall, I had fun working with it. I honestly enjoyed finding and studying the data, and imagining what could be accomplished by inspecting it just right. Notice that XPER was conspicuously absent during that part of the process, though. Perhaps the biggest takeaway is "learning is fun," but I didn't need XPER to teach me that.

Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible).

VICE, in C64 mode
Speed: ~200% (quirk noted later)
Snapshots are in use; very handy for database work
Drives: XPER seems to only support a single drive

XPER v1.0.3 (claims C128, but that seems to be only in "C64 Mode")
250 objects
50 features
300 attributes (but no more than 14 in any given feature)
Any time we're in C64 land, disk loading needs Warp mode turned on. Actual processing of data is fairly snappy, even at normal 100% CPU speed; certainly much faster than Superbase. I suspect XPER is mostly doing bitwise manipulations, nothing processing-intensive. XPER did crash once while doing inquiry on the sample feline database. Warp mode sometimes presents repeating input, sometimes not. I'm not entirely certain why it was inconsistent in that way. XPER seems to only recognize a single disk drive.

Don't even think about it, you're firmly in XPER Land. XPER 2 might be able to import your data, though. You'll still be in XPER Land, but you won't be constrained by the C64 any longer.

As a toy, it's fine. For anything serious, it can't keep up even with its contemporaries:

No fuzzy values
No weighted probabilities/certainties
No forward/backward chaining
Limited "explanation" system, as in "Why did you choose that, XPER?" (demonstrated by another product in Computer Chronicles' 1985 "Artificial Intelligence" episode)
No temporal sequences (i.e., data changes over time)
No ability to "learn" or self-adapt over time
No inference

0 views

Pixoo64 Ruby Client

I bought a Pixoo64 LED Display to play around with, and I love it! It connects to WiFi and has an on-board HTTP API so you can program it. I made a Ruby client for it that even includes code to convert PNG files to the binary format the sign wants. One cool thing is that the display can be configured to fetch data from a remote server, so I configured mine to fetch PM2.5 and CO2 data for my office. Here’s what it’s looking like so far: Yes, this is how I discovered I need to open a window 😂
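For anyone curious about the PNG-conversion step, the gist is resizing to the panel's 64x64 and flattening to raw RGB bytes. The actual client is written in Ruby; this is only a rough Python sketch of the idea, the file name is made up, and how those bytes get wrapped up for the Pixoo64's HTTP API is a separate, device-specific step not shown here.

```python
# Rough sketch: turn an arbitrary PNG into the flat 64x64 RGB byte stream a
# 64x64 LED panel expects. Uses Pillow; the input file name is hypothetical.
from PIL import Image

def png_to_rgb_bytes(path: str) -> bytes:
    """Load an image, force it to 64x64 RGB, and return the flat R,G,B bytes."""
    img = Image.open(path).convert("RGB").resize((64, 64))
    return img.tobytes()  # 64 * 64 * 3 = 12288 bytes

if __name__ == "__main__":
    print(len(png_to_rgb_bytes("office_dashboard.png")))  # 12288
```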

0 views
Circus Scientist 1 week ago

Happy New Year (resolutions)

I feel like I got a lot done in 2025 – but there is still more to do when it comes to the Open Source POV Poi and other related projects I am working on.

First – a quick personal note: I didn't get as much circus-related work as usual in December. I wrote a blog post where I blamed it on Google and it went VIRAL on Hacker News. More than 150 000 people visited my site in 48 hours! It seems I am not alone in thinking that Google Search is going away. Read the post here: https://www.circusscientist.com/2025/12/29/google-is-dead-where-do-we-go-now/ – note: read until the end, there is a happy ending!

Since some people are still having trouble with the ESP32 version of SmartPoi, I am going to first update the ESP8266 (D1 Mini) Arduino version. I haven't touched the code in more than a year and it still doesn't have the single image display I added to the C3 version. Look out for this update before the end of January.

Next I have to finish my ESP32 C3 poi – I have one fully soldered and all of the components and pieces for both poi on my desk. This will be a reference for anyone trying to make their own, and hopefully after doing a full build we can work out the best working version – without power issues or re-starting or anything else.

I also have everything ready for a cheap IR LED poi set. This is going to help anyone (like me at the moment) who is on a budget. I will be doing a full tutorial on that.

Happy New Year and a big shout out to my Patreon Supporters. Did you know you can buy SmartPoi (and Smart Hoops!) in Brazil right now? Commercial design and build by Flavio ( https://www.instagram.com/hoop_roots_lovers/ ). I am also in contact with a supporter from the Dominican Republic who is developing his own version, which will also be for sale soon. Not to mention the Magic Poi being built over in Australia.

The post Happy New Year (resolutions) appeared first on Circus Scientist.

0 views