Posts in Hardware (20 found)

MagiCache: A Virtual In-Cache Computing Engine

MagiCache: A Virtual In-Cache Computing Engine. Renhao Fan, Yikai Cui, Weike Li, Mingyu Wang, and Zhaolin Li. ISCA'25.

This paper presents an implementation of the RISC-V vector extensions where all vector computation occurs in the cache (i.e., SRAM-based in-memory computation). It contains an accessible description of in-SRAM computation, and some novel extensions.

Recall that SRAM is organized as a 2D array of bits. Each row represents a word, and each column represents a single bit location in many words. A traditional read operation occurs by activating a single row. Analog values are read out from each bit and placed onto shared bit lines. There are two bit lines per column (one holding the value, one holding the complement). Values flow down to sense amplifiers that output digital values.

Prior work has shown that this basic structure can be augmented to perform computation. Rather than activating a single row, two rows are activated simultaneously (let's call the values of these rows A and B). The shared bit lines perform computation in the analog domain, which results in two expressions appearing on the output of the sense amplifiers: (A AND B) and (A NOR B). Fig. 1(a) shows a diagram of such an SRAM array. Source: https://dl.acm.org/doi/10.1145/3695053.3731113

If you slap some digital logic at the end of the sense amplifiers, then you can generate other functions like OR, XOR, XNOR, NAND, shift, and add (shift and add involve horizontal connections). Fig. 4(c) shows a hardware diagram of this additional logic at the end of the sense amplifiers. Note that the resulting value can be written back into the SRAM array for future use. Multiplication is not directly supported but can be implemented with a sequence of shift and add operations. Source: https://dl.acm.org/doi/10.1145/3695053.3731113

Virtual Engine

The innovation in this paper is to dynamically share a fixed amount of on-chip SRAM for two separate purposes: caching and a vector register file.
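The sense-amplifier trick described above is easy to model in software. Here's a toy sketch (mine, not the paper's hardware) of how two-row activation yields AND and NOR, and how a little digital logic after the sense amplifiers derives the other bitwise functions from just those two outputs:

```python
# Toy model of in-SRAM bitwise compute: activating two rows A and B
# places (A AND B) and (A NOR B) on the sense-amplifier outputs; extra
# digital logic derives OR, NAND, XOR, XNOR from those two results.

def sense_amps(a: int, b: int, width: int = 8):
    """What the SRAM array itself produces for a two-row activation."""
    mask = (1 << width) - 1
    return a & b, ~(a | b) & mask          # (A AND B), (A NOR B)

def derived_ops(a: int, b: int, width: int = 8):
    mask = (1 << width) - 1
    and_ab, nor_ab = sense_amps(a, b, width)
    or_ab = ~nor_ab & mask                 # OR   = NOT(NOR)
    nand_ab = ~and_ab & mask               # NAND = NOT(AND)
    xor_ab = or_ab & nand_ab               # XOR  = (A OR B) AND NOT(A AND B)
    xnor_ab = (and_ab | nor_ab) & mask     # XNOR = AND OR NOR
    return {"AND": and_ab, "OR": or_ab, "XOR": xor_ab,
            "NAND": nand_ab, "NOR": nor_ab, "XNOR": xnor_ab}

a, b = 0b1100, 0b1010
ops = derived_ops(a, b)
assert ops["XOR"] == a ^ b and ops["OR"] == a | b
```

Note that every column computes in parallel, which is where the throughput comes from; the width here just stands in for however many columns the array has.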
The logical vector register file capacity required for a particular algorithm depends on the number of architectural registers used and the width of each architectural register (the RISC-V vector extensions allow software to configure a logical vector width). Note that this hardware does not have separate vector ALUs; the computation is performed directly in the SRAM arrays.

Fig. 6 illustrates how the hardware dynamically allocates SRAM space between generic cache storage and vector registers (with in-memory compute). The unit of allocation is a segment. The width of a vector register determines how many segments it requires. Source: https://dl.acm.org/doi/10.1145/3695053.3731113

Initially, all SRAM space is dedicated to caching. When the hardware processes an instruction that writes to an uninitialized vector register, the hardware allocates segments to hold data for that register (evicting cached data if necessary). This system assumes an enlightened compiler, which will emit an instruction to hint to the hardware when it has reached a point in the instruction stream where no vector register has valid content. The hardware can use this hint to reallocate all memory back to being used for caching.

Fig. 8 shows performance results normalized against prior work (labeled here). This shows a 20%-60% performance improvement, which is pretty good considering that the baseline offers an order-of-magnitude improvement over a standard in-order vector processor. Source: https://dl.acm.org/doi/10.1145/3695053.3731113

Dangling Pointers

I wonder how this would compare to hardware that did not have a cache, but rather a scratchpad with support for in-memory computing.
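The allocation policy is simple enough to sketch. Below is my own toy model of the idea (not the paper's controller; the segment size and the method names are invented for illustration): segments start out as cache, get claimed when a vector register is first written, and are all released when the compiler's "nothing live" hint arrives.

```python
# Toy model of MagiCache-style dynamic allocation: a fixed pool of SRAM
# segments is shared between caching and vector registers.

class SegmentPool:
    def __init__(self, total_segments: int):
        self.free = total_segments   # segments currently usable as cache
        self.regs = {}               # vreg name -> segments held

    def write_vreg(self, name: str, vlen_bits: int, seg_bits: int = 512):
        """First write to an uninitialized register claims segments."""
        if name in self.regs:
            return                   # already allocated
        need = -(-vlen_bits // seg_bits)      # ceil division
        if need > self.free:
            raise MemoryError("not enough segments even after eviction")
        self.free -= need            # evicting cached data if necessary
        self.regs[name] = need

    def hint_all_dead(self):
        """Compiler hint: no vector register holds live data any more."""
        self.free += sum(self.regs.values())
        self.regs.clear()

pool = SegmentPool(total_segments=64)
pool.write_vreg("v0", vlen_bits=4096)   # claims 8 segments
pool.write_vreg("v1", vlen_bits=1024)   # claims 2 more
assert pool.free == 54
pool.hint_all_dead()                    # everything back to caching
assert pool.free == 64
```

The interesting design point is that eviction cost is paid lazily and only when vector code actually runs, which is why an all-cache default makes sense.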

Brain Baking Yesterday

My Workspaces

This post is inspired by Franck Sauer's My Workspaces. I love Franck's setup and the background story behind each photo. I've been meaning to write this for months but postponed the search for old desktop setup photos because I wasn't sure where to start. Back in the nineties, we didn't brainlessly press that button: every shot was one less on the film roll and added to the cost. Hence my oldest setup, the 486 in my dad's makeshift office that also served as the washing machine room, is lost forever. My parents got me a sturdy but boring-looking IKEA desk that I used extensively up until 2015. My room looked more or less the same from the moment that piece of furniture came in until I moved out. Here's a picture of my then brand-new Flatron CRT showcasing Smash Bros. Melee (I played on the GameCube through a PCI TV card): My 'workspace' in 2006. Note the white DS Lite in the background, putting this photo somewhere after June 2006. There are more photos of my gaming setup from 2002-2007 in case you're interested. I once built a virtual tour of my room in the form of an HTML imagemap website, but that too is lost in time. There's not much else to see in the photo now, except for a sliver of a blue DELL laptop I used for more serious university work. I wish I had kept that keyboard around though, it was surprisingly comfortable. Not as cool as Microsoft's Natural Keyboard Elite, but still! At some point in time, I was also dumb enough to sell the Wavebird and all my GameCube games. What was I thinking… I moved out in 2008 and rented a cheap flat for three years to save up money before my wife & I bought our first home. Again, my meticulous archival work proves to be not that meticulous after all: I can't find a single photo of that apartment, except for the empty rooms just before I moved in. The IKEA desk moved to the living room as I didn't own a TV. On the other hand, it probably wasn't worth saving, as "workspace" implies some work had to be done there.
I was a software development consultant back then and worked at the client's offices. Those were long hours and long commutes, meaning not much got done at home. Here's an unremarkable-at-best picture of what a typical office space looked like in those years: My office workspace in 2008, with a corporate HP laptop plugged into a then-already-old CRT from a client. Yes, that on the lower right is my wallet. I believe it still is now. When we bought a house and started living together, we had a spare room to throw in everything we couldn't find a good spot for. This included my cheap bookcase and the very same IKEA desk: My workspace in 2013. I can't recall any work being done there at all. The Monkey Island poster I already had hanging on the wall a year before I left my parents' place; it's still with me now, as you'll see in the later pics. I can't believe any work was done at all in that "office": I was still a consultant and working from home was a big no-no. That meant the space was largely unused, which is a shame, because now that I look back at it, it looks cosy, especially with that chicken hug stuffed in the lower left of the bookcase! I started to resent the commutes. I quit my job and we sold up and bought another house, where we still live as I type this. One of the three bedrooms became my "office". I'll still use quotes here as, again, not much was done there. I didn't like locking myself in that room upstairs while my wife was downstairs watching TV. The Nintendo Switch was my big saviour 1 : a hybrid handheld system that I could play on the couch! My workspace in 2014. Left: that same IKEA desk survived yet another move. This photo was taken right after we moved in, hence the lack of decorations. Right: the living room/kitchen, where most of my writing was done. Again, this post is far from impressive compared to Franck's cool setups. Most of my writing and thinking happened on the kitchen table.
In 2012-2013 I bought a MacBook Air and have since loved inventing a makeshift workspace wherever. Working from home was still the big exception. After four years I quit my job again to rejoin academia and pursue a PhD. That meant the way I worked shifted radically: more individually, and more from home. On top of that, in 2020, a thing called COVID happened, and we were suddenly forced to work from home. Just like many others, I finally started taking the home workspace environment seriously. I already published the result in the 2021 retro desktop setup post: My 2020 workspace featuring a 486 machine, a beige Win98 tower, a WinXP one, and on the far right, the 'work horse' MacBook and second screen. If you look closely enough, you'll notice the same skylight as in the leftmost photo from 2014. I jammed as much retro hardware as I could find into that tiny room, binning the IKEA desk (R.I.P.) and buying more IKEA stuff (Linnmon). In 2020, after eight years of faithful service, the old MacBook Air was replaced by the one I'm typing this on (on the far right). Thankfully, the Monkey Island posters survived. There are more photos of this setup in the linked post. For the first time in my life, I felt truly happy in my home workspace. It became my sanctuary: me, surrounded by old junk. And then our daughter started poisoning the place with baby toys: The other side of the retro room: Billy bookshelves and baby toys. At least I managed to fend off most of the toys and eventually, when she got older, we managed to contain her junk within her room or below stairs. Until the second kid came along and kicked me out. Our house looks big but really isn't, so we renovated to create more space. Still, my workspace became his bedroom, so I had to move to the old living room: My workspace in 2025, with a bigger window overlooking the front garden and street.
Later that year I properly fixed the cable work, re-laid another Ethernet cable, and started thinking about how I could restore my retro hardware. Unfortunately, only the 486 is on display right now, and that one hasn't been touched in almost a year due to busy parenthood. At least now there was room for another IKEA case that can hold more board games than the previous one in the hallway could (that one, of course, got claimed by the kids). I prepare my lessons here and like the bigger window, but I do miss the previous workspace. Hardware-wise, nothing much changed, except for a mechanical keyboard. Perhaps I should throw in a retro TV to hook up the SNES. I don't know. Since becoming a parent, this stuff matters less but I miss it more; it's hard to explain. As for gaming, most of it is done on the couch with the Analogue Pocket, the Switch, or just with the MacBook on my lap. So much for having a dedicated workspace… As a bonus photo, here's the current state of the above workspace at the time of writing: The current state of the 2025 workspace. Whoops... Yeah, I know… That's a mild exaggeration as I was already a big GB(A) and DS fanboy. It did rejuvenate my interest in handheld gaming.  ↩︎ Related topics: setup. By Wouter Groeneveld on 14 April 2026. Reply via email.

iDiallo 2 days ago

You paid for it, you should be comfortable in it

A friend of mine bought a Tesla Roadster back in the early 2010s. At the time, spotting a Tesla on the road was a rare event. Maybe even occasion enough to stop and take a picture. I never got the chance to photograph one, let alone drive one, until I met this new friend recently. This was my chance to experience the car firsthand. We walked to the parking structure to see it. As soon as he opened the door, something looked... off. On the outside, it was a pristine, six-figure roadster. But the inside looked completely custom. Not "custom" in the sense of a professional shop install, but more like the driver himself grabbed a hammer and chisel and made it his own. First, the driver's seat had been altered. It was much lower than usual and didn't match the passenger seat. My friend stands 6'7", and the Roadster is a tiny car. He physically couldn't fit, so he modified the seat rails to lower it. But that fix created a new problem: the door armrest now dug into his hip. So he took a file to the interior panel, shaved it down, and 3D printed a smaller, ergonomic armrest. He even 3D printed a cup holder for the passenger side so his coffee was within reach. To me, the idea of taking a Dremel or a file to a $100,000+ car was unimaginable. You'd have to be crazy to do it. He caught the look on my face and shrugged. "Hey, it's my car. I paid for it. I intend to be comfortable in it." I had never thought of it like that. The sentiment stuck with me. Recently, when I read an article by Kent Walters about filing down the corners of his MacBook, those same feelings resurfaced. My work MacBook has edges so sharp that I've often felt like I was slicing my wrist on the chassis. I treated this as a design flaw I had to endure. But not Kent. He treated it as an obstacle to be removed. He literally filed down the corners of his laptop to ensure the machine he uses every day was comfortable. I may not have the guts to file my work-issued MacBook, but I'm no stranger to customization...
in software. I modify my tools constantly. I spend days tweaking my IDE, remapping keyboard shortcuts, and writing custom scripts until the software is unrecognizable to anyone else on my team. I don't think twice about rewriting a config file to make the tool fit my brain. When I was a kid, I always had a screwdriver around, fixing devices that weren't really broken. On the home computer, I modified everything. I once deleted all the files to improve performance. It didn't work, but it led to a fruitful career. But somehow, when it comes to expensive hardware now, I freeze. I treat the physical object as a museum piece to be preserved. I bought a docking station to banish the laptop to a shelf, using an external mouse and keyboard to avoid touching the sharp chassis. I built a complex workaround to accommodate the tool, rather than performing the simple, brutal act of modifying the tool to accommodate me. We treat our physical tools as if they are on loan from the manufacturer. You'll see a musician buy a vintage guitar but refuse to adjust the action, terrified of ruining the "collector's value." Meanwhile, the working guitarist has sanded down the neck and covered it in stickers because it feels better in their hand. The software engineer accepts the default keybindings to avoid "bad habits," while the power user creates a layout that doubles their speed. If you own a tool, whether it's a car, a computer, or a line of code, you own the right to change it. The manufacturer designed it for the "average" user, but you are a specific human with specific needs. Remember grandma's couch in the living room? It had that plastic cover on it. It was so uncomfortable, but no one dared to remove it. The plastic was there to preserve the sofa. No one got to enjoy it; instead, everyone accommodated the couch only to preserve its value. A value that no one ever benefits from. Don't let the perceived value of an object stop you from making it truly yours.
A tool with battle scars is a tool that is loved.

マリウス 4 days ago

KTT x 80Retros GAME 1989 Orange

I picked up the KTT x 80Retros GAME 1989 Orange switches a while ago at Funkeys, a brick-and-mortar mechanical keyboard store in Yongsan-gu, Seoul, and it's my first linear switch. Given its surprisingly cheap price, I really didn't expect much from it, to be honest. KTT is a name people normally associate with budget options, like Peaches, Sea Salts, and Strawberries. They're the kind of switches that show up in beginner build guides, and they're generally good stuff, but not really the kind of thing that made me stop and think about what I was typing on. However, the GAME 1989 Orange changed that perception for me, and it did it in a way I genuinely didn't see coming. But before we get into the switch itself, we need to talk about the vibe, because the vibe is half the story here. 80Retros is a relatively young brand out of China that debuted on ZFrontier around December 2023 with an interest check for their GAME 1989 cherry-profile PBT keycap set inspired by the original Game Boy. They describe themselves as lovers of all things vintage and retro, and unlike a lot of brands that slap "retro" on things as a marketing afterthought, they actually seem to mean it. What's remarkable is how fast they've moved since then. Within a few years, they went from a single keycap IC to pushing out nearly a dozen different switches across two separate manufacturers (KTT and HMX), along with matching keycap sets in multiple colorways. The G.O.A.T. of switch reviews himself, ThereminGoat, covered this in detail in his HMX Volume 0-T review, and the GAME timeline is pretty interesting: the original HMX-manufactured GAME 1989 switches came first, followed by what he calls the "Film Trio" (the KD200, FJ400, and GAME 1989 Classic), all packaged in these absolutely gorgeous film canister-inspired containers that look like oversized Kodak rolls.
The film canister thing started as a nod to the KD200 and FJ400 being camera-brand-inspired, but the community loved the packaging so much that 80Retros seemingly just kept using it for everything, even for switches that have nothing to do with photography. The KTT-manufactured GAME 1989 Orange and Red are the newer entries in this expanding catalogue, released as part of an "Expanded Film Series" in early 2025 alongside a Silent White variant and an HMX XMAS switch. So we're looking at a brand that is absolutely not slowing down. On paper, a PC top and PA66 bottom is a pretty classic material combo; KTT has used variations of this pairing for years. What makes this switch interesting is the KT2 stem made out of their proprietary UPE blend. UPE (ultra-high molecular weight polyethylene) is a material that's been showing up more and more in the switch world, but it's one of those things where the specific manufacturer's blend matters enormously. Keygeek's U4, for example, sounds glassy and solid. KTT's KT2 is more dry, a bit foamy, and (this is the part I didn't expect) it brings an audible character that I can only describe as "marble-y". It's not soft, but it's not hard either. It sits in this interesting middle ground. At 4mm travel with a pole bottom-out, the switch is technically a long-pole linear, but the full travel distance means it doesn't feel like one in the snappy, sharp way that most long-poles do. The pole bottom-out is there, but it's mellowed out by the travel length and the stem material. More on that later. Stock smoothness is good, and I mean genuinely good. Probably not HMX-tier buttery, and probably not the absolute smoothest thing I've tried in recent years, but there's a quality to the travel that feels deliberate and controlled. The factory lube is present but light: a thin coating on the bottom housing railings, some on the stem legs and leaf, and the springs seem lightly done too.
There is a texture to the keystroke and some people might call it scratch, but I’m not sure that would be fair, though it’s not entirely wrong either. UPE blends can be unpredictable when paired with other housing materials. Sometimes you get something silky, sometimes you get audible friction. The KT2 blend with this PC/PA66 housing produces a slight tactile grain in the travel that I genuinely enjoy. It’s subtle enough that you won’t notice it during normal typing speed, but if you slow-press a single key at ear level, it’s there. Spring-wise, 40g actuation bottoming out at around 50g is on the lighter side, especially for me and my usual Frankenswitches . I wouldn’t call it featherweight, but if you tend to bottom out hard, you’ll definitely hit the end of the stroke with minimal effort. The springs are clean, without noticeable ping in my set. The factory lube on the springs seems to do its job. One thing to note is that there’s reportedly about a 3g variance between individual switches. I couldn’t verify that precisely, but I did notice the occasional key that felt marginally different. Not a dealbreaker for me, but if you’re the kind of person who weighs every spring in a batch, keep it in mind. As for wobble, it is present. There’s some slight vertical (north-south) wobble and maybe a touch of east-west if you go looking for it. This seems to be a known trade-off with KTT ’s newer molds. Their older switches like the Hyacinths seemingly had incredibly tight tolerances, but those molds are from a different era. KTT has been retooling to accommodate new materials like their KT2 and KT3 blends, and the fit isn’t quite as snug as the old stuff. As for films, they probably do help to tighten up the housings and I’ve read that filming the switches apparently also compresses the sound profile slightly. Personally, the wobble doesn’t bother me too much. 
The sound profile is where the GAME 1989 Orange gets genuinely interesting, because the sound profile is busy, and I mean that in a good way. The bottom-out is lower-pitched than you'd typically expect from a PC-topped switch. The PA66 bottom housing and the KT2 stem material seemingly pull the tone down into a territory that's thocky without being mushy. There's a definite pop to the keystroke, and the bottom-out has weight to it. The top-out (the return stroke) is a touch brighter, creating a slight tonal contrast between the downstroke and upstroke that gives the switch a lot of auditory dimension. There's a lot happening acoustically at any given keystroke, and none of it sounds muddied or confused. The "marble-y" quality I mentioned earlier really comes through in the sound. It's not a wet, lubed sound, but a relatively dry and more textured one, with a character that feels… natural, for lack of better words. The slight scratch in the travel actually adds to the sound profile rather than detracting from it. The initial contact, the pole hitting bottom, the spring compression, the return: they all remain distinct from each other, and layered. Volume-wise, it's moderate. Definitely not silent, but also not exactly loud. Slightly quieter than your average long-pole, which makes sense given the full 4mm travel and the way the KT2 material absorbs some of the impact energy. I haven't yet tested them on any of my aluminium builds, but at least on the few keyboards Funkeys had these switches on, as well as on my Kunai, I find that the sound profile works beautifully. That said, these switches are definitely less ideal for quiet/public environments, like open-plan offices and cafes. The switches come factory lubed and they work just fine stock. I'd personally resist the urge to lube them further unless you specifically want to kill the audible scratch, which I think is part of the charm.
If you do lube, know that you're trading character for smoothness, and these are already reasonably smooth to begin with. They accept films, and filming them does seem to tighten the sound slightly: less resonance in the housing, a more compressed signature. Depending on your build and plate material, that might be exactly what you want or exactly what you don't. Try a few with and without before committing. As for the packaging, if you buy the 35-switch sets, they come in those aforementioned film canister containers. It's genuinely lovely and a nice touch that makes the whole experience feel considered. Not something I'd pay extra for, but it's a detail that matters for the overall product identity. One thing to note is that the canisters open very easily. I wouldn't walk around holding them upside down unless I wanted to play "find 35 switches hidden underneath the furniture". The KTT x 80Retros GAME 1989 Orange surprised me. It's a switch that trades ultra-polished, frictionless perfection for something with a dry, textured, slightly scratchy keystroke that somehow comes together into a sound profile that's warm, full, and more complex than it has any right to be at this price point. It's not perfect. The wobble is there, and the housing tolerances aren't as tight as the best in the business. It doesn't feel like every other linear on the market, at least not like the ones I've had the chance to try over the past years. It has character, which, in a hobby that's increasingly crowded with technically excellent but personality-free switches, has its charm. If you want the smoothest linear available, look elsewhere. If you want something that sounds interesting, feels engaging, and comes wrapped in an homage to a long-gone era, give the 1989 Orange a shot. I'm genuinely glad I did. Disclaimer: I'm not a switch scientist.
I don’t own a force curve rig, I can’t tell you the exact durometer of the KT2 blend, and my ears are probably not calibrated to the standards of someone like ThereminGoat . This review is based on my personal experience typing on these switches across a few different boards and ultimately actively using them on my primary keyboard . Your mileage may vary based on your plate material, case, keycaps, and other factors. Take everything here as one person’s experience and use it as a starting point for your own.


BlogLog April 10 2026

Subscribe via email or RSS

I added a new page to my blog, linked in the header, showing all the specifications of my homelab and self-hosted services. It will be updated as I continue to update my services or infrastructure. Fixed misspellings in the Overview of My Homelab post.

Kev Quirk 5 days ago

Motorbike Servicing Rant

So my BMW S1000XR is now a year old and it's going in for its first "full service". It had its "break-in" service after a few weeks of ownership, but that's just an oil change. New bikes come with a very thin oil inside the engine that's used to help with the break-in process. After 500 or so miles, this needs to be swapped out for proper oil. I contacted the dealership for a price and some potential dates; this is the breakdown they came back with:

Labour - £150
Oil disposal - £20
Oil - £80.60
Sump plug washer - £0.96
Oil filter - £17.29
Brake fluid - £11.92
Tax @ 20% - £56.15
Total: £336.92 (~$455)

So nearly £350 for what's effectively an hour's work and around £50 in parts. I'm mechanically minded and could easily do this at home, but like most modern vehicles, my BMW doesn't come with a service book that gets stamped. These days the service history is all stored centrally with BMW, which means the service has to be carried out by them. There is a misconception that home servicing will void the warranty of a new bike. It won't, as long as the person doing the service uses OEM parts and works to the manufacturer's specification - which I always do. But I bought this bike from BMW, so if I hand it back after 3 years with a generic eBay service book that's been stamped by me, even though the work has been done to a high standard, it will affect the trade-in value. Ipso facto, they have me by the balls. I get it, margins are small and this is how dealerships make money, but I wish they would make it accessible for mechanically minded people, like me, to service at home. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.
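For what it's worth, the dealer's arithmetic does check out. A quick sanity check (the figures are from the quote above; the code is mine):

```python
# Verify the dealership's quote: sum of labour and parts, plus 20% VAT.
items = {
    "Labour": 150.00,
    "Oil disposal": 20.00,
    "Oil": 80.60,
    "Sump plug washer": 0.96,
    "Oil filter": 17.29,
    "Brake fluid": 11.92,
}
subtotal = sum(items.values())        # pre-tax amount
vat = round(subtotal * 0.20, 2)       # 20% tax line
total = round(subtotal + vat, 2)
assert vat == 56.15
assert total == 336.92
```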

Daniel Mangum 5 days ago

PSA Crypto: The P is for Portability

Arm’s Platform Security Architecture (PSA) was released in 2017, but it was two years until the first beta release of the PSA Cryptography API in 2019, and another year until the 1.0 specification in 2020. Aimed at securing connected devices and originally targeting only Arm-based systems, PSA has evolved with the donation of the PSA Certified program to GlobalPlatform in 2025, allowing non-Arm devices, such as popular RISC-V microcontrollers (MCUs), to achieve certification.


Overview of My Homelab

I've had a homelab for quite some time now, although it hasn't been a linear process. I first got into it when I heard about Plex, which at first I was under the impression was a free streaming service with everything. I set it up with the installer on my computer and was frustrated and confused to learn that it wouldn't work unless my PC stayed on, and that it didn't really offer ad-free, subscription-free streaming (apparently you had to acquire the content yourself). I gave up on it for who knows how long. Then, I heard about Jellyfin, an open-source alternative that a lot of people seemed to like. I wanted to learn more. I set up Jellyfin on my computer and loaded some movies onto it, then streamed them from the same PC hosting it. Okay, I thought. So it provides a video player, basically. Big deal. I had no idea how to access it from other devices or do anything interesting. So again I gave up. It wasn't until my brother and I went halfsies on a Synology NAS on June 14, 2024 1 and I had a few years of university and self-tinkering knowledge under my belt that I truly got into homelabbing and self-hosting. At that point, I knew full well what a server and a client were, and all about networking. 2 I set up the Synology NAS, at the time living with my parents, and installed both the 8TB HDD that I had bought for my items and the 16TB HDD that my brother bought for his. 3 I used it as network-attached storage, as intended, at first. Backups and all that. However, I really wanted to get into hosting services. I had been following technical blogs at that point as well as r/selfhosted and really wanted to sink my teeth into it. The Synology NAS has limited resources, being mainly for storage. That didn't stop me from hosting some basic items. I started with Plex, then moved on to Jellyfin. I hosted both at the same time so that if Jellyfin didn't work, I could just use Plex. To this day I use Infuse on my Apple TV and other devices and have it hooked up to my Jellyfin server. Next, I tried Mealie, then switched to Tandoor, since I love to cook and bake at home.
I also set up Actual Budget, which is probably one of my top-used services now. It completely changed the way I handle my money. Eventually, I went in on a used Dell PowerEdge R730, a 2U rack-mounted enterprise server designed for data center and business-critical workloads. For me, it's a great noise-making machine that has lots of upgrade potential! Here are the boring technical details: A year into using it, and it does exactly what I need it to do every time, no questions asked. Over time, I connected it to an APC UPS to protect it from power outages, and hooked up a used Dell Optiplex I had sitting around to the same UPS. I used to call the Optiplex my "Minecraft Machine," because all it did was run Minecraft servers (and it worked excellently). At this point, I've moved all my servers to the PowerEdge, managed by Crafty Controller for easy setup and server start-and-stop. The Optiplex now serves as a remote desktop solution, since my lab is at my parents', 4 allowing me to access the network easily. I also use Tailscale to access several services remotely without fully exposing them. When I want to expose a service normally, I use free Cloudflare Tunnels. For my hypervisor, I have Proxmox installed on the PowerEdge, and all of my services run in their own LXC containers. In the future, I hope to migrate most services to a more energy-efficient and compact mini computer running Ubuntu or Debian Server, managed with Docker instead. For now, Proxmox is very powerful and intuitive, and it made it incredibly easy for me to set up snapshots and backups as well as monitor resource usage. Finally, here is a list of my services: It's quite easy to get started making a homelab or self-hosting services yourself. Buying a VPS can make it even easier, with options like Hostinger's one-click deployments. You can also simply install Linux with Docker containers on an old laptop or another computer you don't use anymore. I know it's been more than worth it for me.
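If you go the "old laptop plus Docker" route, a single Compose file per service is a gentle starting point. Here's a minimal sketch for Jellyfin (my own guess at a sensible layout, not the author's actual configuration; paths and ports are illustrative):

```yaml
# docker-compose.yml: a basic Jellyfin instance.
# ./config and ./cache persist state; the media mount is read-only.
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"        # web UI and API
    volumes:
      - ./config:/config
      - ./cache:/cache
      - /mnt/media:/media:ro
    restart: unless-stopped
```

Run it with `docker compose up -d`, then browse to port 8096 to finish setup.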
Check out r/selfhosted, the self.hst newsletter, and YouTube if you want to learn more about self-hosting.

Subscribe via email or RSS

The PowerEdge R730 specs:
Server Series: PowerEdge R730
8 Bay 2.5" SFF
H730 RAID Adapter
Dual Xeon Processors
Dual 750W PSU
Processor Series: Intel Xeon E5
Server CPU Model: E5-2667 v4
CPU Socket: Dual LGA 2011
Number of Processor Cores: 16
Memory Type: DDR4
Memory Frequencies Supported: 1333, 1600, 1866, 2133
Total RAM (GB): 16
Total Memory Slots Available: 24
Maximum Memory Supported (GB): 768
Maximum # of Hard Drives: 8
Maximum Hard Drive Size Supported (GB): 43200
Total Hot-Swap Bays: 8
Total PCI Express X8 Slots: 3
Total PCI Express X16 Slots: 1
Total USB Ports: 4
Front USB 2.0 Ports: 2
Total Serial Ports: 1
Optical Drive Type: DVD Player
LAN Compatibility: 10/100/1000 Gigabit

1. I went through my Amazon order history for this date. ↩
2. I would say my first experience hosting a server was hosting multiple Minecraft servers over the years for me and my friends. This is also where I learned basic networking concepts, like what a LAN is, what TCP/UDP is, port forwarding, etc. ↩
3. I thought this was enough storage to last a lifetime at the time. Scroll through r/DataHoarder and think again. ↩
4. My parents' house is powered by solar panels, making this a much cheaper and more manageable option for my poor student situation. ↩

0 views
neilzone 1 week ago

Thoughts on increasing ssh security using a hardware security key

I have been using hardware security keys (including YubiKeys and Titan keys) for FIDO2 and TOTP for a while, but not for ssh. At the moment, I harden the ssh config on my servers, lock down access by IP address, and use password-protected certificates for authentication, blocking password-based authentication. So I think that I do at least reasonably well as it is. But I was interested to see if I could introduce a further layer of security for ssh, using a security key. My security keys support the generation of both resident and non-resident keys. Resident keys are stored in a slot on the YubiKey, while non-resident keys are stored on the client computer, but require the YubiKey to be present. I picked non-resident. I set a passphrase as part of the ssh-keygen process, so, when it comes to using that key, I need to enter that passphrase and insert and touch the security key. So now someone would need:
to be connected to the correct network
to have a copy of my private key
to know the passphrase for that private key
to have one of my security keys (my main security key, or my backup security key)
I can, I think, add a PIN to the YubiKey but, to date, I have not done this. Perhaps I should. Honestly, I was probably fine without this, but, well, I had the security keys, so why not. But, while this works fine from my laptop, I can’t get it to work on my phone (GrapheneOS). At the moment, I use Termux, and from there, I can ssh into my servers. But I can’t get Termux to use my *-sk keypair. There is a six-year-old issue in the Termux GitHub repo which indicates that it might, at some point, be coming, and that would be welcome. Apparently it can be done using a closed-source tool, but since I’m only looking to use FOSS, that’s not on the cards for me. So that is a bit of a pain, as it is convenient to be able to log in from my phone from time to time.
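For anyone wanting to try the same setup, the flow looks roughly like this. This is a sketch under assumptions (OpenSSH 8.2 or later, a FIDO2-capable key plugged in); the host and file names are illustrative, not the author's:

```
# Generate a non-resident, passphrase-protected keypair backed by the
# hardware token (you will be prompted for a passphrase and to touch
# the key):
#   ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
#
# ~/.ssh/config fragment pointing a host at that key; the private key
# file on disk is useless without the physical token present:
Host myserver
    HostName server.example.com
    User me
    IdentityFile ~/.ssh/id_ed25519_sk
    IdentitiesOnly yes
```

Adding `-O resident` instead would store the key on the token itself, which is the resident variant the author decided against.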

0 views
DHH 1 week ago

Panther Lake is the real deal

Intel really delivered with Panther Lake. A 2026 Dell XPS 14 using this chipset with an IPS screen can hit just 1.4 watts of idle power draw on Omarchy. That's good enough for over 47 hours!! And in real-world mixed use on another 74-Wh machine, I've seen around 16 hours of battery life. That's a huge jump over the ~6 hours I was getting over the past two years from AMD-powered Framework laptops. Technically, Intel already had something close to Panther Lake on efficiency with the Lunar Lake chips from last year, but those were quite slow on multi-core workloads (like a developer would need). With Panther Lake (358H), I'm getting 17,500 on Geekbench 6, which is about 10% faster than the already excellent AMD HX370, and a match for Apple's M5. Apple remains ahead on single-core performance, but even there, Panther Lake is on par with an M3. And I don't remember anyone complaining that those were too slow. What everyone has been pining for is better battery life, and now we've got it. On a machine with excellent integrated graphics that are good enough to play a ton of triple-A games, no less! But we're getting more than that. The PC makers are getting their act together on all fronts. Haptic touchpads on a level with Apple's are now standard on both high-end Dell and Asus laptops. Many of the new machines also have tandem OLED screens that blow even the nice micro-LED options from Apple out of the water. And PCs are now somehow both sleeker and slimmer than the MacBooks. Jonathan Ive knew this; he was just a bit ahead of the components, and he was willing to sacrifice reliability to get to what wasn't possible back then. But now it is, and the PC makers are taking full advantage. Now I know that any comparison between Macs and PCs is moot for most people. There's not a lot of cross-shopping going on these days. If you're locked into the Apple walled garden, it's hard to untangle yourself, so most just continue to buy whatever their team offers.
But for the few who are either fed up with Apple in general, macOS Tahoe in particular, or just want to try a whole new way of computing with Omarchy, it's fantastic that battery life is no longer a blocker. It's been the #1 reason cited by folks who've been interested in trying Omarchy, but felt like they couldn't let go of Apple's efficiency advantage. Now that's largely gone. I also just love a good turnaround story. Intel had been on the ropes for years. Now they have a fantastic integrated GPU that's compatible with all the tens of thousands of PC games on the market, a super-efficient CPU that's a match for an M5 on multi-core and an M3 on single-core performance, and a range of PC makers finally taking the fight directly to Apple on touchpads, build quality, and weight. These new Panther Lake CPUs are made in Arizona too, btw. With the world as it is, I think any American should breathe a sigh of relief that if things get spicy with Taiwan, there's more to frontier computing than a TSMC plant within a short reach of China. There's still more work to be done on that front (as Intel CPU cores still come from TSMC!), but it's a huge step in the right direction. Personally, I'm just thrilled that competition is lifting all boats. Apple gave the entire laptop industry a huge wake-up call in 2020 with the introduction of the M chips. Intel's former CEO, Pat Gelsinger, saw the threat clearly, kicked off the 18A plan, but sadly didn't last long enough in the top seat to see his bet pay off with Panther Lake. The rest of us now benefit from his boldness. I'm also thrilled to see both Dell and Intel leaning into Linux. Omarchy 3.5 ships with every possible tweak to make these Panther Lake chips perform at their best, and that was only possible because Michael Dell assigned a team to work on it. So much love to Mr Dell for letting us borrow the brains and commits from senior engineers within both his company and Intel to ship this big new release. 
If you've been waiting on the sidelines for a laptop that can run Omarchy and still get amazing battery life, now is your magic moment. Give the new Dell XPS series, or any of the other laptops shipping with Panther Lake, a try. I think you'll be as impressed as I've been.

0 views
Martin Alderson 1 week ago

What next for the compute crunch?

I thought it'd be a good time to continue on the same theme as my previous two articles, The Coming AI Compute Crunch and Is the AI Compute Crunch Here?, given that both OpenAI and Anthropic are now publicly agreeing they are (very?) compute starved. I came across a really interesting tweet from the COO of GitHub which underlines the scale of change that the world is seeing now: This shows that GitHub in the last 3 months (!) has seen a ~14x annualised increase in the number of commits. Commits are a crude proxy for inference demand - but even directionally, if we assume that most of the increase is due to coding agents hitting the mainstream, it points to an outrageously large increase in compute requirements for inference. If anything, this is probably a huge undercount - many people new to "vibe coding" are unlikely to get their heads round Git(Hub) quickly - distributed source control is quite confusing to non-engineers (and, at least for me, took longer than I'd like to admit to get totally fluent with it as an engineer). Plus this doesn't include all the Cowork usage, which is very unlikely to go anywhere near GitHub. OpenAI's Thibault Sottiaux (head of the Codex team) also tweeted recently that AI companies are going through a phase of demand outstripping supply: It's been rumoured - and indeed in my opinion highly likely given how compute intensive video generation is - that Sora was shut down to free up compute for other tasks. All AI companies are feeling this intensely. Even worse, there is a domino effect with this - when Claude Code starts tightening usage limits or experiencing compute-related outages, people start switching to e.g. Codex or OpenCode, putting increased pressure on them. As I mentioned in my last articles, I believe everyone was looking at the "crazy" compute deals that OpenAI, Anthropic, Microsoft etc. were signing back in ~2025 (seemingly like they were going out of fashion) the wrong way.
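As a sanity check on figures like these, here's a back-of-envelope sketch relating quarterly and annualised growth multiples. The ~14x figure comes from the post; the conversion itself is my arithmetic, not the author's:

```python
# Back-of-envelope: relate a quarter-over-quarter growth multiple to an
# annualised one by compounding over four quarters.

def annualise(q_multiple: float) -> float:
    """Compound a Q/Q multiple over four quarters."""
    return q_multiple ** 4

def quarterly(annual_multiple: float) -> float:
    """The Q/Q multiple implied by an annualised figure."""
    return annual_multiple ** 0.25

# A "~14x annualised" commit increase implies roughly 1.9x per quarter...
print(f"{quarterly(14):.2f}x per quarter")
# ...while sustained 10x Q/Q demand would compound to 10,000x annualised.
print(f"{annualise(10):,.0f}x annualised")
```

The gap between those two readings is exactly why "annualised" headline numbers for three months of data need careful interpretation.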
Signing a $100bn "commitment" to buy a load of GPU capacity does not suddenly create said capacity. Concrete needs to be poured, power needs to be connected, natural gas turbines need to be ordered [1] and GPUs need to be fabricated, racked and networked. All of these products are in short supply, as is the labour required. One of the key points I think worth highlighting, which often gets overlooked, is how difficult the rollout of GB200 (NVidia's latest chips) has been. Unlike previous generations of GPUs from NVidia, the GB200-series is fully liquid cooled - not air cooled as before. Liquid cooling at gigawatt scale just hasn't really been done in datacentres before. From what I've heard it's been unbelievably painful. Liquid cooling significantly increases the power density/m², which makes the electrical engineering required harder - plus a real shortage of skilled labour [2] to plumb it all together - and even shortages of various high end plumbing components have led to most (all?) of the GB200 rollout being vastly behind schedule. While no doubt these issues will get resolved - and the supply chains will gain experience and velocity in delivering liquid cooled parts - this has certainly put even more pressure on what compute is available in the short to medium term. Even worse, Stargate's 1 GW datacentre under construction in the UAE is now a chess piece in the geopolitical tensions of the recent US/Iran conflict, with the Iranian government putting out a video featuring the construction site. The longer term issue I wrote about in my previous articles on this subject is the hard constraint on DRAM fabrication. While SK Hynix recently signed an $8bn deal for more EUV production equipment from ASML, it's unlikely to come online for another couple of years. Indeed I noticed Sundar Pichai specifically called out memory as a significant constraint on his recent appearance on the Stripe podcast.
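To make the memory constraint concrete, here's a rough sketch of the KV-cache arithmetic that compression schemes attack. The model shape below is illustrative (round numbers in the range of current large open models), not tied to any model or figure in the post:

```python
# Rough KV-cache sizing: K and V are each cached per layer, per KV head,
# per position. Shrinking bytes-per-value is where quantisation schemes
# claw back memory.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_val):
    # Factor of 2: one tensor for keys, one for values.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_val

# Illustrative 80-layer model with grouped-query attention, 128k context:
fp16 = kv_cache_bytes(80, 8, 128, 128_000, 2)    # 16-bit cache
int4 = kv_cache_bytes(80, 8, 128, 128_000, 0.5)  # 4-bit quantised

print(f"fp16: {fp16 / 2**30:.1f} GiB per sequence")  # ~39 GiB
print(f"int4: {int4 / 2**30:.1f} GiB per sequence")  # ~9.8 GiB
```

Tens of gigabytes of DRAM per long-context sequence, multiplied across every concurrent user, is why KV-cache compression buys real headroom - and why it still only buys a window when demand grows exponentially.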
While recent innovations like TurboQuant are extremely promising in driving memory requirements down via KV cache compression, given the pace at which AI usage is growing it at best buys a small window of breathing room. I believe the next 18-24 months are going to be defined by compute shortages. When you have exponential demand increases and ~linear additions on the supply side, the market is going to be pretty volatile, to say the least. The cracks are already showing. Anthropic's uptime is now famously at "one nine" of reliability, and doesn't seem to be getting any better. I don't envy the pressure on SRE teams trying to scale these systems dramatically while deploying new models and efficiency strategies. We've seen Anthropic introduce increasingly heavy-handed measures on the Claude subscription side - starting with "peak time" usage limits being cut significantly, and now moving to ban even usage from 3rd-party agent harnesses - no doubt to try and reduce demand. The issue is that if my guesswork at the start of the article is correct and Anthropic is seeing ~10x Q/Q inference demand, there is only so much you can do by banning 3rd-party use of the product - 1st-party use will quickly eat that up. And time-based rationing - while extremely useful to smooth out the peaks and troughs - can only go so far. Eventually you incentivise it enough that you max out your compute 24/7. That's not to say there isn't a lot more they can (and will) do here, but when you are facing those kinds of demand increases it doesn't end up getting you to a steady state. That really only leaves one lever to pull - price. I was hesitant in my previous articles to suggest major price increases, as gaining marketshare is so important to everyone involved in this trillion dollar race, but if all AI providers are compute starved then I think the game theory involved changes.
The paradox of this, though, is that as models get better and better - and the rumours around the new "Spud" and "Mythos" models from OpenAI and Anthropic point that way - users get less price sensitive. While spending $200/month when ChatGPT first brought out their Pro subscription seemed almost comically expensive for the value you could get out of it, I class my $200/month Anthropic subscription as some of the best value going and would probably pay a lot more for it if I had to, even with current models. We're in completely uncharted territory as far as I can tell. I've been doing a lot of reading recently about the initial electrification of Europe and North America in the late 1800s/early 1900s, but the analogy quickly breaks down - the demand growth is so much steeper and the supply issues were far less concentrated. So, we're about to find out what people will actually pay for intelligence on tap. My guess is a lot more than most expect - which is both extremely bullish for the industry and going to be extremely painful for users in the short term. [3] Fundamentally, I believe there is a near infinite demand for machines approaching or surpassing human cognition, even if that capability is spread unevenly across domains. The supply will catch up eventually. But it's the "eventually" that's going to hurt. Increasingly large AI datacentres are skipping grid connections (too slow to come online) and connecting straight to natural gas pipelines, installing their own gas turbines and generation sets ↩︎ I've also read that various manufacturing problems from NVidia have led to parts leaking, which famously does not combine well with high voltage electrical systems. ↩︎ One flip side of this is how much better the small models have got. I'll be writing a lot more on this, but Gemma 4 26b-a4b running locally is hugely impressive for software engineering.
It's not quite good enough, but perhaps we are only a few months off local models on consumer hardware being "good enough". Maybe it's worth buying that Mac or GPU you were thinking about as a hedge? ↩︎

0 views

How do you compute?

A recent Tildes thread about computer monitor usage made me wonder what kind of setup others are using, so I spun up a survey! I have a theory about how readers of this blog will most likely respond, but I'm very curious to see the reality. You can take the 3-question survey here: surveys.darnfinesoftware.com The survey will be open for ~7 days before it auto-implodes and all responses are deleted. I'll post a follow-up on what the data looks like (or you can view it yourself at the above link).

0 views
./techtipsy 1 week ago

The most unstable computer in my fleet is now the most critical one

Remember that failed experiment where I ran Jellyfin off of a LattePanda V1? Do you recall all the parts where I said what this single board computer cannot do? Yeah, I remember. Then I took it and put two of my most critical services on it: the blog you’re reading right now, and my WireGuard setup. Trust me, it makes more sense with some context. The board is incapable of doing anything other than serving content from the eMMC module, and it has a functioning network port. It doesn’t seem to crash in these scenarios. When I try anything else with this board, especially things that involve USB connectivity, things break. This makes the board ideal for a light workload that needs to be up 24/7. The biggest threat to my uptime is not internet connectivity or loss of power (although that did happen for the first time in a year recently); it’s me getting new ideas to try out on my setup, which results in downtime. This board is so unreliable for trying those ideas out that it removes any and all temptation to do that, resulting in a computer that has the highest chance of actually being up and running for a very long time. To play things safe, I used an IKEA SJÖSS 20W USB-C power adapter that I got for 3 EUR, with a cheap USB-C to USB-A adapter thrown into the mix. It looks janky, but the adapter outputs 5V 3A, which makes it the beefiest power adapter that I have in my fleet for plain USB-A powered devices. I then hit the board with some stress tests, including maxing out the 2 GB of memory. It ran really well for days, no issues at all. I also improved the cooling situation. I am now a proud owner of an assortment of M2, M2.5 and M3 screws and bits, and equipped with a Makita cordless drill, I made some mounting holes in an old aluminium server heat sink. The drilling was a complete hack job, everything was misaligned, but it was good enough. Certainly better than holding the board and heat sink together with thin velcro strips.
The cooling performance is completely adequate: the board hits a maximum of 65°C with the heat sink facing down, well below the point at which the board starts to throttle its CPU. The theoretical maximum WireGuard throughput on this board is about 340 Mbps, measured using the fantastic wg-bench solution. Remember the part about the USB ports being flaky? Yeah. That didn’t stop me from getting a USB Gigabit Ethernet adapter to remove one of the main limitations of the LattePanda V1. Based off of vibe-recommendations by Claude, I got a TP-Link UE300 for its alleged low power usage and its availability at a local computer store in Estonia. It seems to work well enough: you can push gigabit speeds through it, and the actual WireGuard performance that I could push through it with an actual workload was about 420 Mbit/s, higher than indicated by the benchmark, and plenty fast for most workloads, especially in external networks that are usually slower than that. A few hours after making that change, a HN post put some mild load on the LattePanda V1; what good timing. As of publishing this post, the blog has been running mostly off of the LattePanda V1 for over a month now, with the gap being caused by contemplating getting that USB Ethernet adapter and temporarily running the blog and WireGuard off of another mini PC during that time. Did you notice?

0 views
Jeff Geerling 1 week ago

Build your own Dial-up ISP with a Raspberry Pi

Last year my aunt let me add her original Tangerine iBook G3 clamshell to my collection of old Macs. It came with an AirPort card—a $99 add-on Apple made that ushered in the Wi-Fi era. The iBook G3 was the first consumer laptop with built-in Wi-Fi antennas, and by far the cheapest way to get a computer onto an 802.11 wireless network.

0 views
Jeff Geerling 2 weeks ago

DRAM pricing is killing the hobbyist SBC market

Today Raspberry Pi announced more price increases for all Pis with LPDDR4 RAM, alongside a 'right-sized' 3GB RAM Pi 4 for $83.75. The price increases bring the 16GB Pi 5 up to $299.99. Despite today's date, this is not a joke. I published a video going over the state of the hobbyist 'high end SBC' market (4/8/16 GB models in the current generation), which I'll embed below. But if you'd like the tl;dr:

0 views
Brain Baking 2 weeks ago

Favourites of March 2026

Our daughter turned three. We’re beyond exhausted, but a ripgrep search in this repository yields five more instances of the word exhausted in combination with parenting, so I’ll shut up. I guess we also celebrate that after three years of pure chaos, we’re… still alive? Previous month: February 2026. I am just two levels short of finishing Gobliins 6 but decided to throw in the towel. Thanks to the increased presence of moon logic, the entire adventure was more frustrating than relaxing. As a big Gobliins fan, I have to admit: the game left me a bit disappointed. It’s all right; I’ll just replay Gob3 again. As it left me wanting more, I went back to the original Gobliiins game that I somehow missed: back in the day, my dad bought Gobliins 2 and we just continued with 3 without looking back. It’s still worth exploring but very basic, and the presence of the life bar is a very strange (and bad!) design choice that fortunately was abandoned in the sequels. I charged the Analogue Pocket and hope to get in some good ol’ Game Boy (Color) games in the coming month. I read a depressing amount of personal genAI tales; more than enough to fill another blog post. I’ll try to keep these out of here as much as possible. My wife bumped into a hacker called Un Kyu Lee crafting his own micro journal hardware. The result looks very cool, including a hinge to hang it on the door as a physical reminder. I’d rather keep on journaling with my fountain pens, but still, very cool! Related topics: / metapost / By Wouter Groeneveld on 1 April 2026. Reply via email. Michael vibe-code-ported an X11 window manager to Wayland; an interesting Claude experiment to see how agentic development works. Greg Newman hosted the Emacs Blog Post Carnival 2025-07 on writing experiences and summarised the participating links. Lots of little gems in there. Rijksmuseum writes about the discovery of the new Rembrandt painting.
Well, “new”—it’s been in private collection for years and only recently resurfaced. Peter Bridger shares his experience at the retro happening SWAG February 2026. I wish we had something similar nearby! Chuck Jordan shares SimCity vibes. As one of the original programmers involved in the project, he would know. (Via The Virtual Moose) The 1MB Club has an interesting (older) article I read last month: consider disabling HTTPS auto-redirects. I can’t remember why I turned this back on: I want my old WinXP machine to be able to reach my site as well, without the extra TLS overhead. Funny though: they mention “You can freely view this website on both HTTPS and HTTP.” I remove the s in the protocol, press Enter, and get redirected. Whoops. PolyWolf has been thinking about blazing fast static site generators. This is a goldmine, as I have a wild idea to write my own generator in Clojure. When the exhaustion and brain fog go away, that is. According to Rishi Baldawa, the reviewer isn’t the bottleneck. This one’s a bit AI-flavoured, so beware if you’re coming down with an AI cold. (I know I have. Handkerchiefs full.) Marcin Wichary’s keyboard grandmastery again shines through in his Apple Fn endgame article. I wish his keyboard book wasn’t sold out. Wordsmith writes about the underrated simplicity of the original Harvest Moon (1996) video game. Dale Mellor defends using a dynamically-produced blog site, which is a nice change given the static site generator craziness. I’m still on Hugo and have little need for the points he brings up, but still, some others might. Tazjin tries out Guix as a Nixer. I was eyeing Guix as a budding Lisp fanboy, but both options still can’t seem to fit in my head. I’ll let it stew for a little while longer. Homo Ludditus announces distro hopping time. The conclusion? “The madhouse could be a valid destination. But I’m still looking for better alternatives.” So much for 2026 as the year of the Linux desktop, huh.
The Digital Antiquarian writes about the year of peak Might & Magic, when New World Computing was still on top of the world. Here’s an interesting thought experiment by Andrey Listopadov: What if structural editing was a mistake? In this 2020 post by Vincent Bernat, photos of a bunch of cool vintage PC expansion cards are shared in conjunction with time-period-correct software that made great use of them. Gabor Torok switched to KDE Plasma, an interesting read because we both switched to OSX because of reasons and are trying to crawl out of the Apple hole. I don’t know if I’m quite ready yet. Did you know there’s a relation between knitting and programming? Abbey Perini does. Mykal Machon shares some insightful guiding principles to lead a fuller life. Judging by the principles, I don’t think Mykal has any young kids. I’m using this as a checklist to find out if I missed essential albums: Hip Hop Golden Age’s Top 40 Hip Hop Albums of 1998. Here’s another GitHub “awesome” list; this time public APIs. Could be useful. Already used it for my courses. It doesn’t hurt to link to the 2007 Slow Code manifesto. FontCrafter is a cool way to generate a real font based on your handwriting. WireTap is an open source Ngrok alternative. The Stump Window Manager is the only WM (except the obvious EXWM) I could find that’s written in Common Lisp. I should look into Ulauncher if I ever want to make the switch to Linux and replace Alfred. Christoph Frick shares a cool GitHub Gist showcasing that you can write your AwesomeWM config in Fennel instead of Lua. Yazi looks like Emacs Dired inside a shell?

0 views
Andy Bell 2 weeks ago

I want an alarm clock

Nothing fancy is needed here and certainly nothing “smart”, but my one actual use for an Apple Watch — as a chill alarm clock — is silly really. I’m so fed up with my Apple Watch, so has anyone got a recommendation for an alarm clock that:
Is chill with the sounds. I don’t need to be yelled awake, thanks.
Allows me to set a different alarm time — or no alarm — for different days
Is not smart and never connects to the internet
Doesn’t tick

2 views

RTSpMSpM: Harnessing Ray Tracing for Efficient Sparse Matrix Computations

RTSpMSpM: Harnessing Ray Tracing for Efficient Sparse Matrix Computations Hongrui Zhang, Yunan Zhang, and Hung-Wei Tseng ISCA'25 I recall a couple of decades ago when Pat Hanrahan said something like “all hardware wants to be programmable”. You can find a similar sentiment here: With most SGI machines, if you opened one up and looked at what was actually in there—processing vertexes in particular, but for some machines, processing the fragments—it was a programmable engine. It’s just that it was not programmable by you; it was programmable by me. And now, twenty years later, GPU companies have bucked the programmability trend and added dedicated ray tracing hardware to their chips. Little did they know, users would find a way to utilize this hardware for applications that have nothing to do with graphics. The task at hand is multiplying two (very) sparse matrices (A and B). Each matrix can be partitioned into a 2D grid, where most cells in the grid contain all 0’s. Cells in A with non-zero entries must be multiplied by specific cells in B with non-zero entries (using a dense matrix multiplication for each product of two cells). The core idea is elegantly simple, and is illustrated in Fig. 5: Source: https://dl.acm.org/doi/full/10.1145/3695053.3731072 The steps are: (1) build a ray tracing acceleration structure corresponding to the non-zero cells in B; (2) for each non-zero cell in A, trace a ray through B to determine if there are any non-zero cells in B that need to be multiplied by the current cell in A. In Fig. 5 the coordinates of the non-zero cells in matrix A are: [(2, 1) (2, 3) (3, 3) (7, 1)]. The figure shows rays overlaid on top of the result matrix, but I find it easier to think of the rays traced through matrix B. The ray corresponding to the cell in A at (2, 1) has a column index of 1, so the algorithm traces a ray horizontally through B at row 1. The ray tracing hardware will find that this ray intersects with the cell from B at coordinate (1, 4).
So, these cells are multiplied together to determine their contribution to the result. Fig. 7 has benchmark results. All results are normalized to the performance of a baseline library (i.e., values greater than one represent a speedup); the CPU baseline corresponds to the Intel MKL library running on a Core i7 14700K processor. The “w/o RT cores” bars show results from the same algorithm with ray tracing implemented in general CUDA code rather than using the ray tracing accelerators. It is amazing that this beats the baseline across the board. Source: https://dl.acm.org/doi/full/10.1145/3695053.3731072 Dangling Pointers It seems like the core problem to be solved here is pointer-chasing. I wonder if a more general-purpose processor that is located closer to off-chip memory could provide similar benefits.
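The ray-per-cell mapping can be sketched in plain Python, without any RT hardware. This is my toy rendering of the idea, not the paper's code: each non-zero cell of A conceptually fires a ray across one row of B, and cell contents are scalars here for simplicity (the paper uses dense sub-blocks). The coordinates match the Fig. 5 example above:

```python
# Toy SpMSpM via the paper's ray-per-cell mapping: a non-zero cell of A
# at (row, col) "traces a ray" through row `col` of B; every non-zero
# cell of B it hits contributes a product to C[row][b_col].
from collections import defaultdict

def sparse_matmul(a_cells, b_cells):
    """a_cells/b_cells: dicts mapping (row, col) -> value."""
    # Index B by row, standing in for the ray-tracing acceleration
    # structure built over B's non-zero cells.
    b_by_row = defaultdict(list)
    for (r, c), v in b_cells.items():
        b_by_row[r].append((c, v))

    c_cells = defaultdict(float)
    for (r, c), v in a_cells.items():
        for (bc, bv) in b_by_row[c]:   # the "ray" through row c of B
            c_cells[(r, bc)] += v * bv
    return dict(c_cells)

# The Fig. 5 layout: A's non-zero cells at (2,1), (2,3), (3,3), (7,1);
# B has a single non-zero cell at (1,4). Values are made up.
A = {(2, 1): 1.0, (2, 3): 2.0, (3, 3): 3.0, (7, 1): 4.0}
B = {(1, 4): 5.0}
print(sparse_matmul(A, B))  # only the rays through B's row 1 hit anything
```

Only A's cells with column index 1 find an intersection, which is exactly what the RT cores determine in hardware; the misses (rays through B's empty row 3) cost nothing but the traversal.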

0 views

Moving to Windows

After nearly a decade of using Mac OS, and more recently several years of Linux, I've come to realize I've been denying myself. I'm a Windows guy, through and through. I love Copilot, and only Windows puts it everywhere, including Notepad! I don't want to write words, that's so last year. Basic computing skills are a thing of the past; chatting with Copilot is the future. I get too comfortable with what I use day to day, and want recommendations on apps and services I should subscribe to. I love that Windows reminds me of things I could spend money on via the Start menu, the taskbar and the lock screen. It's refreshing. I can't stand open source, it's terrifying. I prefer a closed platform. Why would I trust random contributors over a multi-billion dollar enterprise that has my best interests at heart? I love friends, so why wouldn't I want to make new friends by sharing my information with "advertising partners"? They'll make my life better, using complex algorithms to figure out exactly what I want to buy before even I know. I have a laptop with 32 GiB of RAM and a Ryzen 7 that go mostly unused on Fedora. What a waste. Windows will make sure I get my money's worth by filling that memory and running that CPU. Stability is boring, why would I want the same experience day in and day out? Look at how exciting Microsoft has made GitHub, every day is an adventure on the status page. I want that joy in my operating system. So there it is, the truth is out, Windows is my home. I'm nuking Fedora on my System76 Pang12 and installing Windows 11 (well, once I create a Microsoft account and have WiFi so I can get through the installer). Now if I could just get past this blue screen of death that says "April Fools - Brought To You By Microslop". Comments? Email me!

0 views
HeyDingus 2 weeks ago

I’m returning my Studio Display XDR and buying another one

Sooo… I did a thing. I couldn’t help but be slightly dissatisfied by the clarity of my Studio Display XDR’s nano-texture display. It just made everything look a little less than Retina-quality. And for this price, I don’t want to have lingering regrets each time I use it. So, I ordered a second, non-nano-texture version, banking on Apple’s generous return policy. It came in today. I set it up about 30 minutes ago. I put the two displays side by side and… it’s no question. The nano-texture is going back. Showing the same content on each display, at the same brightness level, I can absolutely see the fuzziness introduced by the “matte” display. It’s not that nano-texture is all bad. I love how it looks when the display is dark — there are zero reflections. 1 But the point is to enjoy it while the display is on. Without nano-texture, everything is as crisp as I had hoped. I tend to lean toward the display when I’m concentrating, and even close up, the display is razor sharp. I technically have until April 9th to send back the nano-texture XDR, but, honestly, I think I’m going to package it up tonight. Well… maybe tomorrow. I might as well enjoy having 10k pixels of display at my disposal while I can. If I hold onto the original display until the last day that I can send it back, I will have had it for 24 days. That’s a full 10 extra days beyond the stated 14-day return period. It’s possible that I could have squeezed in even a few more days by initiating the return today, the 14th day after it was delivered, instead of on the 11th. With that in mind, one could get nearly a month of use for testing and comparing Apple’s products, with the ability to return them (free shipping both ways) for a full refund. That’s serious commitment to customer satisfaction, and one area where Apple’s standards haven’t slipped.
To boot, by paying with Apple Card’s Monthly Installments (which allow you to pay for an item over 12 months with 0% interest), I’ve only been charged $287.92 for the nano-texture display, and $263.92 for the regular one. I think that was just the taxes for each one. To be sure, it’s a privileged position I’m in to be able to do these shenanigans, but there’s a lot to be said for how easy Apple has made it to purchase even its most expensive products with very little risk. If I were in an environment with light sources behind me, my decision might be very different. I think there’s definitely a place for this non-reflective display — it’s just not in my home office. ↩︎ HeyDingus is a blog by Jarrod Blundy about technology, the great outdoors, and other musings. If you like what you see — the blog posts, shortcuts, wallpapers, scripts, or anything — please consider leaving a tip, checking out my store, or just sharing my work. Your support is much appreciated! I’m always happy to hear from you on social, or by good ol' email.

0 views