Posts in Hardware (20 found)
HeyDingus Yesterday

I’m not a ring guy, but…

I’m not a ring guy. My parents had to cajole me into getting a class ring back in high school, telling me that it would be something that I would later regret if I didn’t get one. So I got one, tried wearing it, and ended up hating the feeling of it always spinning ’round my finger. And then I lost it in my bowling ball bag for like a year. I’ve got no idea where it is today.

My next ring was my wedding band. Again, following customary traditions, I spent so much of my savings on an engagement and wedding ring combo for my wife. But for my own ring, I wasn’t particular. I looked around online for design ideas, liked the look of a tungsten one, found one for like $15 on Amazon, and clicked ‘Buy Now’. It still looks good as new over seven years later. And while I liked the feel of it better than my old class ring since it was symmetrical and didn’t tend to fall to one side of my finger or the other, I still prefer my fingers unornamented. In fact, since becoming a mountain guide, I’ve worn my wedding band on a piece of cord around my neck, lest it get wedged in a rock somewhere while I’m climbing, which could be disastrous. I’d like to get a tattooed ring on my finger someday.1

Likewise, I’ve tended to be skeptical of the fitness rings, such as the Oura, partly because I figure I’d dislike wearing it at least as much as any other ring. But also because my Apple Watch already handles all my fitness tracking, and I wouldn’t want another thing to remember to charge.

All that being said, I’m as surprised as anyone that the Index 01, Pebble’s latest gadget, caught my interest. It’s a ring, but instead of packing in more features than its competition, the Index is designed to do less. Its primary role is to be an ever-present way to record short notes-to-self. It’s got a tiny LED and a little microphone that’s activated by pressing a physical button. That’s it. Eric Migicovsky, Pebble’s founder, is selling the Index as “external memory for your brain”. It doesn’t have any fitness tracking sensors. It doesn’t record everything around you, 24/7, like other AI gadgets, to make a perfect transcript of your life. It’s basically a dedicated personal note taker, and that’s what makes it so interesting to me.

In fact, I’ve been trying to solve this ‘take a quick note’ problem on my own for years. My brain comes up with its best ideas when I’m out for a hike, but that’s also when I least want to pull out my phone to type it out. So, I rigged up a solution with Apple Shortcuts to trigger voice-to-text with my iPhone’s Action button so that I can easily save my ideas and to-dos to Drafts without breaking stride. But it’s an imperfect solution as I look a little goofy in front of my clients when I mutter into my phone in the backcountry. Plus, I have to have my phone with me, and the audio isn’t saved, just the transcript. The Index remedies a lot of that rigmarole by virtue of being a dedicated device that’s always with you, that saves the audio recording, and that’s less intrusive and distracting than pulling out a smartphone.

The physical button. You have to hold it down to make a recording. No wondering if it’s working. Migicovsky insists it has a great click-feel, and I’m inclined to believe him. It’s designed to be worn on your index finger, putting the button always in reach of your thumb to start a recording. That’s so smart, as it means it can be used discreetly with one hand.
My Apple Watch often needs to be operated with the other hand, and its raise-to-speak to Siri feature is somewhat unreliable. Adding the button was a great idea.

You can’t charge it. This one’s a bit controversial, I know. Just read the comments on the announcement video — it’s basically the only thing people are talking about. The non-replaceable battery is a bummer, but I get it. I’d want a ring to be as unobtrusive as possible, and leaving out the charging bits and accessible battery cuts down on a lot of bulk. It’s definitely more svelte than an Oura. Furthermore, I have enough gadgets that I need to remember to charge every day. If it can just stay on my finger, it has a way higher chance of becoming an ingrained workflow. While I don’t want to contribute to e-waste, Pebble says they’ll recycle it when the battery dies, supposedly in two or so years with typical use.

The price. If this thing cost $300+, like most smart rings, I certainly wouldn’t be psyched to replace it every two years. But at $99 ($75 for pre-orders), I think they priced it well to be a reasonable curiosity purchase. And it’s a one-time payment — there’s no ongoing subscription cost!

Additional actions. While its primary purpose — and my main interest in it — rests with its always-ready note-taking, it sounds like the Index can do a little processing and take action on some commands. From the announcement post:

Actions: While the primary task is remembering things for you, you can also ask it to do things like ‘Send a Beeper message to my wife - running late’ or answer simple questions that could be answered by searching the web. You can configure button clicks to control your music - I love using this to play/pause or skip tracks. You can also configure where to save your notes and reminders (I have it set to add to Notion).

Customizable and hackable: Configure single/double button clicks to control whatever you want (take a photo, turn on lights, Tasker, etc). Add your own voice actions via MCP. Or route the audio recordings directly to your own app or server!

Supposedly, you’ll be able to hook it up to MCP to do more AI stuff with the recordings. I don’t know enough about MCP, so that’s not of huge interest to me. But if it can send quick messages, make reminders and calendar events, and control audio playback — and do so reliably — that’d be pretty great.

Works offline. It doesn’t have or need an internet connection to work. Transferring the audio file goes directly to your phone, and the transcription is done there, on-device. If you set those additional actions that need the internet, that’s another story, but the Index will serve its primary purpose offline, without sending your (potentially very personal) recordings to anyone’s servers.

Less-than-stellar water-resistance. Pebble’s billed the Index as something that you never have to take off, but then notes it’s water-resistant only to 1 meter. They note, “You can wash your hands, do dishes, and shower with it on, but we don’t recommend swimming with it.” That’s not a deal-breaker, but I’ve grown so used to not worrying about swimming with my watch that I’d be a little grumpy about having to remember to take off my ring before jumping in a pool or lake.

Short answer, yes. I’m intrigued enough that I placed a pre-order this morning. But I’m still a little iffy on whether I’ll keep it. As I mentioned, I wear my wedding band as a necklace so that it doesn’t put my finger at risk when I’m climbing. That would still be a factor with the Index.
But I’m willing to give it a shot.

1. My wife insists that I put my wedding ring back on my finger for date night, or culturally significant events like weddings and such. I don’t mind. ↩︎

0 views
Jeff Geerling 2 days ago

The DC-ROMA II is the fastest RISC-V laptop and is odd

Inside this Framework 13 laptop is a special mainboard developed by DeepComputing in collaboration with Framework. It has an 8-core RISC-V processor, the ESWIN 7702X—not your typical AMD, Intel, or even Arm SoC. The full laptop version I tested costs $1119 and gets you about the performance of a Raspberry Pi. A Pi 4—the one that came out in 2019.

0 views

AMD GPU Debugger

I’ve always wondered why we don’t have a GPU debugger similar to the ones used for CPUs: a tool that allows pausing execution and examining the current state. This capability feels essential, especially since the GPU’s concurrent execution model is much harder to reason about. After searching for solutions, I came across rocgdb, a debugger for AMD’s ROCm environment. Unfortunately, its scope is limited to that environment. Still, this shows it’s technically possible. I then found a helpful series of blog posts by Marcell Kiss, detailing how he achieved this, which inspired me to try to recreate the process myself.

The best place to start learning about this is RADV. By tracing what it does, we can find out how to do it. Our goal here is to run the most basic shader without using Vulkan (aka RADV, in our case).

First of all, we need to open the DRM file to establish a connection with the KMD, using a simple open(“/dev/dri/cardX”). Tracing RADV shows that it then calls into libdrm, the library that acts as middleware between user-mode drivers (UMDs), such as RADV, and kernel-mode drivers (KMDs), such as the amdgpu driver. When we try to do some actual work, we first have to create a context, again through libdrm. Next up, we need to allocate two buffers, one for our code and the other for writing our commands into; we do this by calling a couple of libdrm functions. Here we’re choosing the memory domain and assigning flags based on the parameters; some buffers we will need uncached, as we will see.

Now that we have the memory, we need to map it. I opt to map anything that can be CPU-mapped, for ease of use. We have to map the memory into both the GPU and the CPU virtual address space. The KMD creates the page table when we open the DRM file, as shown here. So map it to the GPU VM and, if possible, to the CPU VM as well. At this point there’s a libdrm function that does all of this setup for us and maps the memory, but I found that even when specifying the uncached flag, it doesn’t always tag the page as uncached (not quite sure if it’s a bug in my code or something in libdrm), so I opted to do it manually here and issue the IOCTL call myself.

Now we have the context and two buffers. Next, fill those buffers and send our commands to the KMD, which will then forward them to the Command Processor (CP) in the GPU for processing. Let’s compile our code. We can use clang’s assembler for that. The bash script compiles the code; since we’re only interested in the actual machine code, we use objdump to figure out the offset and the size of the relevant section and copy it to a new file called asmc.bin. Then we can just load that file and write its bytes to the CPU-mapped address of the code buffer.
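To make that setup concrete, here is a minimal sketch using libdrm_amdgpu’s helper API. The post drives some of these steps by hand with raw IOCTLs instead, so treat this as an illustration rather than the author’s exact code; the sizes, flags, and lack of error handling are simplifications.

```c
/* Sketch: open the DRM node, initialize libdrm_amdgpu, create a context,
 * allocate a GTT buffer, CPU-map it, and map it into the GPU address space.
 * Error handling is omitted; sizes and flags are illustrative. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <amdgpu.h>       /* from libdrm (libdrm_amdgpu) */
#include <amdgpu_drm.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);    /* or the render node */

    uint32_t major, minor;
    amdgpu_device_handle dev;
    amdgpu_device_initialize(fd, &major, &minor, &dev);     /* what RADV calls after open() */

    amdgpu_context_handle ctx;
    amdgpu_cs_ctx_create(dev, &ctx);                        /* the GPU context */

    /* One such buffer holds the shader code, another the command stream. */
    struct amdgpu_bo_alloc_request req = {
        .alloc_size     = 4096,
        .phys_alignment = 4096,
        .preferred_heap = AMDGPU_GEM_DOMAIN_GTT,            /* CPU-visible system memory */
        .flags          = AMDGPU_GEM_CREATE_CPU_GTT_USWC,   /* write-combined (uncached) CPU view */
    };
    amdgpu_bo_handle bo;
    amdgpu_bo_alloc(dev, &req, &bo);

    void *cpu_ptr;
    amdgpu_bo_cpu_map(bo, &cpu_ptr);                        /* CPU mapping for writing code/commands */

    uint64_t gpu_va;
    amdgpu_va_handle va;
    amdgpu_va_range_alloc(dev, amdgpu_gpu_va_range_general,
                          req.alloc_size, 4096, 0, &gpu_va, &va, 0);
    amdgpu_bo_va_op(bo, 0, req.alloc_size, gpu_va, 0,
                    AMDGPU_VA_OP_MAP);                      /* map into the GPU page table */

    printf("buffer at GPU VA 0x%llx, CPU %p\n", (unsigned long long)gpu_va, cpu_ptr);
    return 0;
}
```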
Next up, filling in the commands. This was extremely confusing for me because it’s not well documented; it was mostly a matter of learning how RADV does things and trying to do similar things. Also, shout-out to the folks on the Graphics Programming Discord server for helping me, especially Picoduck. The commands are encoded in a special packet format that comes in multiple types, and we only care about one of them: each packet has an opcode and a count of the data it carries. The first thing we need to do is program the GPU registers, then dispatch the shader. Some of those registers are responsible for a number of configurations; pgm_[lo/hi] hold the pointer to the code buffer, and others set the number of threads inside a work group.

All of those are set using register-setting packets. It’s worth mentioning that we can set multiple registers in one packet if they’re consecutive. Then we append the dispatch command. Now we want to write those commands into our buffer and send them to the KMD. (Here is a good point to make a more complex shader that outputs something, for example writing 1 to a buffer.)

No GPU hangs?! Nothing happened?! Cool, cool — now we have a shader that runs on the GPU. What’s next? Let’s try to hang the GPU by pausing the execution, aka make the GPU trap.

The RDNA3 ISA manual does mention two registers, TBA and TMA; here’s how they describe them, respectively:

TBA: Holds the pointer to the current trap handler program address. Per-VMID register. Bit [63] indicates if the trap handler is present (1) or not (0) and is not considered part of the address (bit[62] is replicated into address bit[63]). Accessed via S_SENDMSG_RTN.

TMA: Temporary register for shader operations. For example, it can hold a pointer to memory used by the trap handler.

You can configure the GPU to enter the trap handler when encountering certain exceptions listed in the RDNA3 ISA manual.

We know from Marcell Kiss’s blog posts that we need to compile a trap handler, which is a normal shader the GPU switches to when encountering a trap. The TBA register has a special bit that indicates whether the trap handler is enabled. Since these are privileged registers, we cannot write to them from user space. To bridge this gap for debugging, we can utilize the debugfs interface. Luckily, we have UMR, which uses that debugfs interface, and it’s open source, so we can copy AMD’s homework here, which is great.

The amdgpu KMD exposes a couple of files in debugfs under /sys/kernel/debug/dri/<N>; one of them is a register read/write interface backed by a function in the kernel that writes to the registers. It works by simply opening the file, seeking to the register’s offset, and then writing; it also performs some synchronisation and writes the value correctly. We do need to provide more parameters about the register before writing to the file, though, and we do that using an ioctl call. The ioctl arguments contain two sub-structs because there are two types of registers, GRBM and SRBM, each of which is banked by different constructs; you can learn more about some of them here in the Linux kernel documentation. It turns out our registers are SRBM registers and banked by VMIDs, meaning each VMID has its own TBA and TMA registers.

Cool, now we need to figure out the VMID of our process. As far as I understand, VMIDs are a way for the GPU to identify a specific process context, including the page table base address, so the address translation unit can translate a virtual memory address. The context is created when we open the DRM file. VMIDs get assigned dynamically at dispatch time, which is a problem for us; we want to write to those registers before dispatch. We can obtain the VMID of the dispatched process by querying the relevant hardware register with s_getreg_b32. I do a hack here: I enable the trap handler in every VMID. There are 16 of them; the first is special and used by the KMD, and the last 8 are allocated to the amdkfd driver, so we loop over the remaining VMIDs and write to those registers. This can cause issues for other processes using other VMIDs, by enabling trap handlers in them and writing the virtual address of our trap handler, which is only valid within our virtual memory address space. It’s relatively safe, though, since most other processes won’t cause a trap.[^1]

[^1]: Other processes need to have an s_trap instruction or have trap-on-exception flags set, which is not true for most normal GPU processes.
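To make that more tangible, here’s a rough sketch of driving that debugfs register interface from user space. The struct and ioctl below mirror definitions in the kernel’s amdgpu_debugfs.h (UMR carries a copy of them); this is a debugfs interface rather than stable UAPI, so the names and numbers should be verified against your kernel, and the register offset and value are placeholders.

```c
/* Sketch: write a per-VMID (SRBM-banked) register through amdgpu's debugfs.
 * The struct and ioctl mirror the kernel's amdgpu_debugfs.h; they are not a
 * stable UAPI, so verify against your kernel (UMR carries a copy as well).
 * Register offsets and the value written below are placeholders. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct amdgpu_debugfs_regs2_iocdata {
    uint32_t use_srbm, use_grbm, pg_lock;
    struct { uint32_t se, sh, instance; } grbm;
    struct { uint32_t me, pipe, queue, vmid; } srbm;
};

/* Mirrors AMDGPU_DEBUGFS_REGS2_IOC_SET_STATE from the kernel tree. */
#define REGS2_SET_STATE _IOWR(0x20, 0, struct amdgpu_debugfs_regs2_iocdata)

static int write_banked_reg(int fd, uint32_t vmid, off_t reg_offset, uint32_t value)
{
    struct amdgpu_debugfs_regs2_iocdata bank = {0};
    bank.use_srbm  = 1;      /* TBA/TMA are SRBM registers, banked per VMID */
    bank.srbm.vmid = vmid;
    if (ioctl(fd, REGS2_SET_STATE, &bank) < 0)
        return -1;
    /* The file's write handler treats the file offset as the register offset. */
    return pwrite(fd, &value, sizeof(value), reg_offset) == sizeof(value) ? 0 : -1;
}

int main(void)
{
    int fd = open("/sys/kernel/debug/dri/0/amdgpu_regs2", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    off_t    tba_lo_offset = 0x0;  /* placeholder: look up the TBA offset for your ASIC */
    uint32_t tba_lo_value  = 0x0;  /* placeholder: low bits of the trap handler's GPU VA */

    /* VMID 0 belongs to the KMD and 8..15 to amdkfd, so poke 1..7 only. */
    for (uint32_t vmid = 1; vmid < 8; vmid++)
        write_banked_reg(fd, vmid, tba_lo_offset, tba_lo_value);

    close(fd);
    return 0;
}
```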
Now we can write to TMA and TBA through that interface. In my own code I use bitfields for the register layouts, because working with them is much easier than with macros, and while bitfield layout is not guaranteed by the C spec, it is guaranteed by the System V ABI, which Linux adheres to.

Anyway, now that we can write to those registers, if we enable the trap handler correctly, the GPU should hang when we launch our shader, provided we added an s_trap instruction to it or enabled the corresponding bit in the rsrc3 register.[^2]

[^2]: Available since RDNA3, if I’m not mistaken.

Now, let’s try to write a trap handler. (If you wrote a different shader that outputs to a buffer, you can try writing to that buffer from the trap handler, which is a nice way to make sure it’s actually being run.) We need two things: our trap handler, and some scratch memory to use when needed, whose address we will store in the TMA register. The trap handler is just a normal program running in a privileged state, meaning we have access to special registers like TTMP[0-15].

When we enter a trap handler, we first need to ensure that the state of the GPU registers is saved, just as the kernel does for CPU processes when context-switching: a copy of the stable registers, the program counter, and so on. The problem, though, is that we don’t have a stable ABI for GPUs, or at least not one I’m aware of, and compilers use all the registers they can, so we need to save everything. AMD GPUs’ Command Processors (CPs) have context-switching functionality, and the amdkfd driver does implement some context-switching shaders. The problem is they’re not documented, and we have to figure them out from the amdkfd driver source and from other parts of the driver stack that interact with it, which is a pain in the ass. I did a workaround here, since I had no luck understanding how it works, plus some other reasons I’ll discuss later in the post.

The workaround is to use only TTMP registers plus a combination of specific instructions to copy the values of some registers, which then frees up more instructions to copy the remaining registers. The main idea is to make use of a store instruction that adds the index of the current thread within the wave to the write address, i.e. it writes to $$ ID_{thread} \times 4 + address $$ This allows us to write a unique value per thread while using only TTMP registers, which are unique per wave, not per thread,[^3] so we can save the context of a single wave. The problem is that if we have more than one wave, they will overlap, and we will have a race condition.

[^3]: VGPRs are unique per thread, and SGPRs are unique per wave.

Now that we have those values in memory, we need to tell the CPU, “hey, we got the data”, and pause the GPU’s execution until the CPU issues a command. Also, notice that we can simply modify those values from the CPU. Before we tell the CPU, we write some values that might help it. Then the GPU should just wait for the CPU; the spin code is implemented as described by Marcell Kiss here.

The main loop on the CPU side goes: enable the trap handler, dispatch the shader, wait for the GPU to write a specific value to a specific address to signal that all the data is there, then examine and display it, and finally tell the GPU “all clear, go ahead”. Since our buffers are uncached, we just keep looping and checking whether the GPU has written the register values. When it does, the first thing we do is halt the wave by writing to the appropriate register, which lets us do whatever we want with the wave without causing any issues. If we stay halted for too long, the GPU’s CP will reset the command queue and kill the process, but we can change that behaviour by adjusting the lockup_timeout parameter of the amdgpu kernel module.
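Put together, the CPU side of that handshake boils down to roughly the following sketch. The extern helpers and the flag values are placeholders standing in for the command submission and register pokes described above, not the post’s actual code:

```c
/* Sketch of the CPU-side debug loop described above. The extern helpers and
 * flag values are placeholders for the command submission and debugfs register
 * pokes from earlier sections, not real API calls. */
#include <stdint.h>

#define TRAP_DATA_READY 0xCAFEu  /* GPU -> CPU: saved state is in the TMA buffer */
#define TRAP_RESUME     0x1u     /* CPU -> GPU: all clear, go ahead */

extern void submit_dispatch(void);                   /* build and submit the command buffer */
extern void halt_wave(void);                         /* poke the wave-control register */
extern void resume_wave(void);
extern void dump_saved_state(const uint32_t *mbox);  /* saved PC, SGPRs, VGPRs, ... */

void debug_loop(volatile uint32_t *mailbox)          /* CPU mapping of the uncached TMA buffer */
{
    /* The trap handler is assumed to already be enabled for our VMIDs. */
    submit_dispatch();

    /* The buffer is uncached, so the trap handler's write becomes visible here. */
    while (mailbox[0] != TRAP_DATA_READY)
        ;

    /* Halt the wave so we can inspect or modify it safely. If we stay halted
     * for long, bump the amdgpu.lockup_timeout module parameter. */
    halt_wave();
    dump_saved_state((const uint32_t *)mailbox);

    /* Hand control back: the trap handler checks the start of the TMA buffer. */
    mailbox[0] = TRAP_RESUME;
    resume_wave();
}
```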
From here on, we can do whatever we want with the data we have — all the data we need to build a proper debugger. We will come back to what to do with it in a bit; for now, let’s assume we did what was needed. Once we’re done on the CPU side, we write to the first byte of our TMA buffer, since the trap handler checks for that, then resume the wave, and the trap handler should pick it up. We can resume by writing to the register again, and then the GPU should continue. We need to restore everything and return the program counter to the original address. Depending on whether it’s a hardware trap or not, the program counter may point to the instruction before, or to the instruction itself; the ISA manual and Marcell Kiss’s posts explain that well, so refer to them.

Now we can run compiled code directly, but we don’t want people to compile their code manually, extract the text section, and hand it to us. The plan is to take SPIR-V code, compile it correctly, then run it, or, even better, integrate with RADV and let RADV give us more information to work with. My main plan was to fork RADV and make it report the Vulkan calls for us, so we get a better view of the GPU work and know which buffers/textures it’s using, etc. That seems like a lot more work, though, so I’ll keep it in mind but won’t do it for now, unless someone is willing to pay me for that ;). For now, let’s just use RADV’s compiler. Luckily, RADV has a mode in which it does no actual work and opens no DRM files (just a fake Vulkan device), which is perfect for our case here, since we care about nothing other than compiling code. We can enable it by setting an environment variable, and then we just call what we need.

Now that we have a well-structured loop and communication between the GPU and the CPU, we can run SPIR-V binaries to some extent. Let’s see how we can make it an actual debugger. We talked earlier about CPs natively supporting context switching; this appears to be a compute-specific feature, which prevents us from implementing it for other types of shaders. It does appear, though, that mesh shaders and ray-tracing shaders are just compute shaders under the hood, which would let us use that functionality. For now, debugging one wave feels like enough, and we can also modify the wave parameters to debug specific indices. Here are some of the features.

For stepping, there are two control bits we can use: one enters the trap handler after each instruction, and the other enters it before the first instruction. This means we can automatically enable instruction-level stepping.
Regarding breakpoints, I haven’t implemented them, but they’re rather simple to implement here: since we have the base address of the code buffer and know the size of each instruction, we can calculate the program counter locations ahead of time, keep a list of them available to the GPU, and have the trap handler binary-search that list.

The ACO shader compiler does generate an instruction-level source code mapping, which is good enough for our purposes here. By taking the offset[^4] of the current program counter and indexing into the code buffer, we can retrieve the current instruction and disassemble it, as well as find the source code mapping from the debug info.

[^4]: We can get that by subtracting the address of the code buffer from the current program counter.

Watching accesses to particular buffers or textures could be implemented by marking the GPU page as protected. On a GPU fault, we enter the trap handler, check whether the faulting address is within the range of our buffers and textures, and then act accordingly. Also, looking through the register listings, there are some entries which suggest that the hardware already supports this natively, so we might not even need to do that dance. It needs more investigation on my part, though, since I didn’t implement this.

Inspecting variables at the source level needs some serious plumbing, since we need to make the NIR (Mesa’s intermediate representation) optimisation passes propagate debug info correctly. I already started on this here. Then we need to make ACO track variables and store that information. This requires ditching the simple UMD we made earlier and using RADV, which is what should happen eventually: our custom driver could pause before a specific frame, or get triggered by a key, and then ask before each dispatch whether to attach to it, or something similar. Since we would have a full, proper Vulkan implementation, we would already have all the information we need — buffers, textures, push constants, types, variable names, etc. — and that would be a much better and more pleasant debugger to use.

Finally, here’s some live footage: https://youtu.be/HDMC9GhaLyc

Here is an incomplete user-mode page-walking implementation for gfx11, aka the RX 7900 XTX.

0 views
Jeff Geerling 6 days ago

The RAM Shortage Comes for Us All

Memory price inflation comes for us all, and if you're not affected yet, just wait. I was building a new PC last month using some parts I had bought earlier this year. The 64 Gigabyte T-Create DDR5 memory kit I used cost $209 then. Today? The same kit costs $650 !

0 views
Jeff Geerling 6 days ago

Why doesn't Apple make a standalone Touch ID?

I finally upgraded to a mechanical keyboard. But because Apple's so protective of their Touch ID hardware, there aren't any mechanical keyboards with that feature built in. But there is a way to hack it. It's incredibly wasteful, and takes a bit more patience than I think most people have, but you basically take an Apple Magic Keyboard with Touch ID, rip out the Touch ID, and install it in a 3D printed box, along with the keyboard's logic board.

0 views
neilzone 1 week ago

Using gpioset and gpioget to control the gpio pins on a Raspberry Pi with a relay board under Debian Trixie

A couple of years ago, I bodged a web-controlled garage door opener with a Raspberry Pi. It worked fine, until I upgraded the Raspberry Pi in question to Debian Trixie. I noted that the relevant files in /sys/class/gpio were no longer present, and some further research showed that this was an intentional change: “In the upstream kernel, /sys/class/gpio (the sysfs interface) has been deprecated in favor of a device interface, /dev/gpiochipN. The old interface is gone from the kernel in the nightly Debian builds (bookworm/sid) for Raspberry Pi.” So, unsurprisingly, my old way of doing things was not working. The documentation for the relay board has not been updated. Fortunately, after a bit of experimentation, I could get it working again using gpioset and gpioget. I could not find a way of stopping after a fixed period of time (I was expecting one of the options to do it, but it did not), so I ended up wrapping it in a timeout, which is also a bodge. Anyway, this is now what I am using:

0 views
Michael Lynch 1 week ago

My First Impressions of MeshCore Off-Grid Messaging

When my wife saw me playing with my new encrypted radio, she asked what it was for. “Imagine,” I said, “if I could type a message on my phone and send it to you, and the message would appear on your phone. Instantly!” She wasn’t impressed. “It also works if phone lines are down due to a power outage… or societal collapse.” Still nothing. “If we’re not within radio range of each other, we can route our messages through a mesh network of our neighbors’ radios. But don’t worry! The radios encrypt our messages end-to-end, so nobody else can read what we’re saying.” By this point, she’d left the room. My wife has many wonderful qualities, but, if I’m being honest, “enthusiasm for encrypted off-grid messaging” has never been one of them. The technology I was pitching to my wife was, of course, MeshCore. If you’d like to skip to the end, check out the summary . MeshCore is software that runs on inexpensive long-range (LoRa) radios . LoRa radios transmit up to several miles depending on how clear the path is. Unlike HAM radios, you don’t need a license to broadcast over LoRa frequencies in the US, so anyone can pick up a LoRa radio and start chatting. MeshCore is more than just sending messages over radio. The “mesh” in the name is because MeshCore users form a mesh network. If Alice wants to send a message to her friend Charlie, but Charlie’s out of range of her radio, she can route her message through Bob, another MeshCore user in her area, and Bob will forward the message to Charlie. If Alice is within radio range of Bob but not Charlie, she can tell Bob’s MeshCore radio to forward her message to Charlie. I’m not exactly a doomsday prepper, but I plan for realistic disaster scenarios like extended power outages, food shortages, and droughts. When I heard about MeshCore, I thought it would be neat to give some devices to friends nearby so we could communicate in an emergency. And if it turned out that we’re out of radio range of each other, maybe I could convince a few neighbors to get involved as well. We could form a messaging network that’s robust against power failures and phone outages. MeshCore is a newer implementation of an idea that was popularized by a technology called Meshtastic . I first heard about Meshtastic from Tyler Cipriani’s 2022 blog post . I thought the idea sounded neat, but Tyler’s conclusion was that Meshtastic was too buggy and difficult for mainstream adoption at the time. I have no particular allegiance to MeshCore or Meshtastic, as I’ve never tried either. Some people I follow on Mastodon have been excited about MeshCore, so I thought I’d check it out. Most MeshCore-compatible devices are also compatible with Meshtastic, so I can easily experiment with one and later try the other. I only have a limited understanding of the differences between Meshtastic and MeshCore, but what I gather is that MeshCore’s key differentiator is preserving bandwidth. Apparently, Meshtastic hits scaling issues when many users are located close to each other. The Meshtastic protocol is chattier than MeshCore, so I’ve seen complaints that Meshtastic chatter floods the airwaves and interferes with message delivery. MeshCore attempts to solve that problem by minimizing network chatter. I should say at this point that I’m not a radio guy. It seems like many people in the LoRa community are radio enthusiasts who have experience with HAM radios or other types of radio broadcasting. I’m a tech-savvy software developer, but I know nothing about radio communication. 
If I have an incorrect mental model of radio transmission, that’s why. The MeshCore firmware runs on a couple dozen devices, but the official website recommends three devices in particular. The cheapest one is the Heltec v3. I bought two for $27/ea. At $27, the Heltec v3 is the cheapest MeshCore-compatible device I could find. I connected the Heltec v3 to my computer via the USB-C port and used the MeshCore web flasher to flash the latest firmware. I selected “Heltec v3” as my device, “Companion Bluetooth” as the mode, and “v1.9.0” as the version. I clicked “Erase device” since this was a fresh install. Then, I used the MeshCore web app to pair the Heltec with my phone over Bluetooth. Okay, I’ve paired my phone with my MeshCore device, but… now what? The app doesn’t help me out much in terms of onboarding. I try clicking “Map” to see if there are any other MeshCore users nearby. Okay, that’s a map of New Zealand. I live in the US, so that’s a bit surprising. Even if I explore the map, I don’t see any MeshCore activity anywhere, so I don’t know what the map is supposed to do. The map of New Zealand reminded me that different countries use different radio frequencies for LoRa, and if the app defaults to New Zealand’s location, it’s probably defaulting to New Zealand broadcast frequencies as well. I went to settings and saw fields for “Radio Settings,” and I clicked them expecting a dropdown, but it expects me to enter a number. And then I noticed a subtle “Choose Preset” button, which listed presets for different countries that were “suggested by the community.” I had no idea what any of them meant, but who am I to argue with the community? I chose “USA/Canada (Recommended).” I also noticed that the settings let me change my device name, so that seemed useful: It seemed like there were no other MeshCore users within range of me, which I expected. That’s why I bought the second Heltec. I repeated the process with an old phone and my second Heltec v3, but they couldn’t see each other. I eventually realized that I’d forgotten to configure my second device for the US frequency. This is another reason I wish the MeshCore app took initial onboarding more seriously. Okay, they finally see each other! They can both publish messages to the public channel. My devices could finally talk to each other over a public channel. If I communicate with friends over MeshCore, I don’t want to broadcast our whole conversation over the public channel, so it was time to test out direct messaging. I expected some way to view a contact in the public channel and send them a direct message, but I couldn’t. Clicking their name did nothing. There’s a “Participants” view, but the only option is to block, not send a direct message. This seems like an odd design choice. If a MeshCore user posts to the public channel, why can’t I talk to them? I eventually figured out that I have to “Advert.” There are three options: “Zero Hop,” “Flood Routed,” and “To Clipboard.” I don’t know what any of these mean, but I figure “flood” sounds kind of rude, whereas “Zero Hop” sounds elegant, so I do a “Zero Hop.” Great! Device 2 now sees device 1. Let’s say hi to Device 1 from Device 2. Whoops, what’s wrong? Maybe I need to “Advert” from Device 2 as well? Okay, I do, and voila! Messages now work. This is a frustrating user experience. If I have to advert from both ends, why did MeshCore let me send a message on a half-completed handshake? 
I’m assuming “Advert” is me announcing my device’s public key, but I don’t understand why that’s an explicit step I have to do ahead of time. Why can’t MeshCore do that implicitly when I post to a public channel or attempt to send someone a direct message? Anyway, I can talk to myself in both public channels and DMs. Onward!

The Heltec v3 boards were a good way to experiment with MeshCore, but they’re impractical for real-world scenarios. They require their own power source, and a phone to pair with. I wanted to power it from my phone with a USB-C to USB-C cable, but the Heltec board wouldn’t power up from my phone. In a real emergency, that’s too many points of failure. The MeshCore website recommends two other MeshCore-compatible devices, so I ordered those: the Seeed SenseCAP T-1000e ($40) and the Lilygo T-Deck+ ($100).

I bought the Seeed SenseCAP T-1000e (left) and the Lilygo T-Deck+ (right) to continue experimenting with MeshCore.

The T-1000e was a clear improvement over the Heltec v3. It’s self-contained and has its own battery and antenna, which feels simpler and more robust. It’s also nice and light. You could toss it into a backpack and not notice it’s there. The T-1000e feels like a more user-friendly product compared to the bare circuit board of the Heltec v3. Annoyingly, the T-1000e uses a custom USB cable, so I can’t charge it or flash it from my computer with one of my standard USB cables.

The Seeed T-1000e uses a custom USB cable for charging and flashing.

I used the web flasher for the Heltec, but I decided to try flashing the T-1000e directly from source. I use Nix, and the repo conveniently ships a Nix environment definition, so the dependencies installed automatically. I then flashed the firmware for the T-1000e. From there, I paired the T-1000e with my phone, and it was basically the same as using the Heltec. The only difference was that the T-1000e has no screen, so it falls back to a default Bluetooth pairing password. Does that mean anyone within Bluetooth range can trivially take over my T-1000e and read all my messages? It also seems impossible to turn off the T-1000e, which is undesirable for a broadcasting device. The manufacturer advises users to just leave it unplugged for several days until the battery runs out. Update: MeshCore contributor Frieder Schrempf just fixed this in commit 07e7e2d, which is included in the v1.11.0 MeshCore firmware. You can now power off the device by holding down the button at the top of the T-1000e.

Now it was time to test the Lilygo T-Deck. This was the part of MeshCore I’d been most excited about since the very beginning. If I handed my non-techy friends a device like the T-1000e, there were too many things that could go wrong in an actual emergency. “Oh, you don’t have the MeshCore app? Oh, you’re having trouble pairing it with your phone? Oh, your phone battery is dead?” The T-Deck looked like a 2000s-era Blackberry. It seemed dead-simple to use because it was an all-in-one device: no phone pairing step or app to download. I wanted to buy a bunch, and hand them out to my friends. If society collapsed and our city fell into chaos, we’d still be able to chat on our doomsday hacker Blackberries like it was 2005.

As soon as I turned on my T-Deck, my berry was burst. This was not a Blackberry at all. As a reminder, this is what a Blackberry looked like in 2003:

A Blackberry smartphone in 2003

Before I even get to the T-Deck software experience, the hardware itself is so big and clunky.
We can’t match the quality of a hardware product that we produced 22 years ago?

Right off the bat, the T-Deck was a pain to use. You navigate the UI by clicking a flimsy little thumbwheel in the center of the device, but it’s temperamental and ignores half of my scrolls. Good news: there’s a touchscreen. But the touchscreen misses half my taps. There are three ways to “click” a UI element. You can click the trackball, push the “Enter” key, or tap the screen. Which one does a particular UI element expect? You just have to try all three to find out!

I had a hard time even finding instructions for how to reflash the T-Deck+. I found this long Jeff Geerling video where he expresses frustration with how long it took him to find reflashing instructions… and then he never explains how he did it! This is what worked for me:

1. Disconnect the T-Deck from USB-C.
2. Power off the T-Deck.
3. Connect the T-Deck to your computer via the USB-C port.
4. Hold down the thumbwheel in the center.
5. Power on the device.

Confusingly, there’s no indication that the device is in DFU mode. I guess the fact that the screen doesn’t load is sort of an indication. On my system, I also see logs indicating a connection.

Once I figured out how to navigate the T-Deck, I tried messaging, and the experience remained baffling. For example, guess what screen I’m on here: What does this screen do? If you guessed “chat on Public channel,” you’re a better guesser than I am, because the screen looks like nothing to me. Even when it displays chat messages, it only vaguely looks like a chat interface: Oh, it’s a chat UI. I encountered lots of other instances of confusing UX, but it’s too tedious to recount them all here. The tragic upshot for me is that this is not a device I’d rely on in an emergency. There are so many gotchas and dead-ends in the UX that would trip people up and prevent them from communicating with me.

Even though the T-Deck broke my heart, I still hoped to use MeshCore with a different device. I needed to see how these devices worked in the real world rather than a few inches away from each other on my desk. First, I took my T-1000e to a friend’s house about a mile away and tried messaging the Heltec back in my home office. The transmission failed, as it seemed the two devices couldn’t see each other at all from that distance. Okay, fair enough. I’m in a suburban neighborhood, and there are lots of houses, trees, and cars between my house and my friend’s place. The next time I was riding in a car away from my house, I took along my T-1000e and tried messaging the Heltec v3 in my office. One block away: messages succeeded. Three blocks away: still working. Five blocks away: failure. And then I was never able to reach my home device until returning home later that day. Maybe the issue is the Heltec? I keep trying to leave the Heltec at home, but I read that the Heltec v3 has a particularly weak antenna. I tried again by leaving my T-1000e at home and taking the T-Deck out with me. I could successfully message my T-1000e from about five blocks away, but everything beyond that failed.

The other part of the MeshCore ecosystem I haven’t mentioned yet is repeaters.

The SenseCAP Solar P1-Pro, a solar-powered MeshCore repeater

MeshCore repeaters are like WiFi extenders. They receive MeshCore messages and re-broadcast them to extend their reach. Repeaters are what create the “mesh” in MeshCore. The repeaters send messages to other repeaters and carry your MeshCore messages over longer distances. There are some technologically cool repeaters available. They’re solar powered with an internal battery, so they run independently and can survive a few days without sun.
The problem was that I didn’t know how much difference a repeater makes. A repeater with a strong antenna would broadcast messages well, but does that solve my problem? If my T-Deck can’t send messages to my T-1000e from six blocks away, how is it going to reach the repeater? By this point, my enthusiasm for MeshCore had waned, and I didn’t want to spend another $100 and mount a broadcasting device to my house when I didn’t know how much it would improve my experience.

MeshCore’s firmware is open-source, so I took a look to see if there was anything I could do to improve the user experience on the T-Deck. The first surprise with the source code was that there were no automated tests. I wrote simple unit tests, but nobody from the MeshCore team has responded to my proposal, and it’s been about two months. From casually browsing, the codebase feels messy but not outrageously so. It’s written in C++, and most of the classes have a large surface area with 20+ non-private functions and fields, but that’s what I see in a lot of embedded software projects. Another code smell was that my unit test calls a function that encodes raw bytes to a hex string, and MeshCore’s implementation depends on headers for two crypto libraries, even though the function has nothing to do with cryptography. It’s the kind of needless coupling MeshCore would avoid if they wrote unit tests for each component. My other petty gripe was that the code doesn’t have consistent style conventions. Someone proposed using the formatter configuration file that’s already in the repo, but a maintainer closed the issue with the guidance, “Just make sure your own IDE isn’t making unnecessary changes when you do a commit.” Why? Why in 2025 do I have to think about where to place my curly braces to match the local style? Just set up a formatter so I don’t have to think about mundane style issues anymore.

I originally started digging into the MeshCore source to understand the T-Deck UI, but I couldn’t find any code for it. I couldn’t find the source to the MeshCore Android or web apps either. And then I realized: it’s all closed-source. All of the official MeshCore client implementations are closed-source and proprietary. Reading the MeshCore FAQ, I realized critical components are closed-source. What!?! They’d advertised this as open-source! How could they trick me? And then I went back to the MeshCore website and realized they never say “open-source” anywhere. I must have dreamed the part where they advertised MeshCore as open-source. It just seems like such an open-source thing that I assumed it was. But I was severely disappointed to discover that critical parts of MeshCore are proprietary.

Without open-source clients, MeshCore doesn’t work for me. I’m not an open-source zealot, and I think it’s fine for software to be proprietary, but the whole point of off-grid communication is decentralization and technology freedom, so I can’t get on board with a closed-source solution. Some parts of the MeshCore ecosystem are indeed open-source and liberally licensed, but critically the T-Deck firmware, the web app, and the mobile apps are all closed-source and proprietary. The firmware I flashed to my Heltec v3 and T-1000e is open-source, but the mobile and Android apps (clients) I used to use the radios were closed-source and proprietary. As far as I see, there are no open-source MeshCore clients aside from the development CLI.

I still love the idea of MeshCore, but it doesn’t yet feel practical for communicating in an emergency.
The software is too difficult to use, and I’ve been unable to send messages farther than five blocks (about 0.3 miles). I’m open to revisiting MeshCore, but I’m waiting on open-source clients and improvements in usability.

On the plus side: it is incredibly cool to send text messages without relying on a big company’s infrastructure, and the concept delights the part of my brain that enjoys disaster prep. MeshCore runs on a wide variety of low-cost devices, many of which also work for Meshtastic, and there’s an active, enthusiastic community around it.

On the minus side: all of the official MeshCore clients are closed-source and proprietary. The user experience is too brittle for me to rely on in an emergency, especially if I’m trying to communicate with MeshCore beginners. Most of the hardware assumes you’ll pair it with your mobile phone over Bluetooth, which introduces many more points of failure and complexity. The only official standalone device is the T-Deck+, but I found it confusing and frustrating to use. There’s no written getting-started guide; there’s a FAQ, but it’s a hodgepodge of details without much organization. There’s a good unofficial intro video, but I prefer text documentation.

0 views
./techtipsy 1 week ago

Oops, I accidentally built a Steam Machine

I like the Steam Deck. It’s what convinced me that gaming on Linux is actually viable now. But after playing through games like God of War Ragnarök 1, I felt like I needed an upgrade. I love playing with the Steam Deck, but what I love more is playing without having to fiddle with graphics settings a lot. Great story and gameplay can only hide the fact that you’re running at 720p 30Hz on a big screen for a little bit. I also get to play relatively rarely, so I might as well make it a better, more enjoyable experience. Quality vs quantity.

I went on the lookout for a used PC with roughly these requirements:

- any modern 6-core CPU or better (includes both Intel and AMD, as the CPU does not matter much here 2)
- an AMD GPU that can do 1080p/4K gaming, depending on the game (NVIDIA was out of the question due to lack of support on SteamOS, and Intel GPUs are a risk that I was not willing to take right now)
- has to support an NVMe drive (using the SteamOS recovery image method is dependent on this)
- acceptable case, PSU and cooling setup (if it does not burn the house down and keeps the machine cool and quiet, then I’m fine with anything)

The AMD GPU being a hard requirement turned out to be an interesting challenge. I wasn’t looking into putting together a custom build, but was rather going for a setup that works and that I can customize according to my specific needs. Turns out that most of the PC-s out there on the market are based around NVIDIA GPU-s, and AMD builds of this range are relatively rare, with a guesstimate of the ratio being roughly 10 NVIDIA-based machines to 1 AMD-based machine. The good side of this is that the selection process was made way simpler, as I got to choose between 3-4 options in the end. During my search I also saw some machines that I would call absolute overkill, and I almost got one in a bidding war, but eventually I found a more sensible option. It also included a monitor, keyboard, mouse and three SSD-s that I didn’t really need, but the PC itself was decent. Here’s what I landed on:

- Intel i5-10500 (6 cores, 12 threads at a reasonable speed: 4.2 GHz in real-life use)
- adequate Cooler Master CPU cooler that does a lot of RGB if needed
- 16 GB DDR4 RAM @ 2666 MT/s (I soon upgraded this to 32 GB because my brother had some leftover modules from his own memory upgrade; I forced the modules to run at 3200 MT/s, and it’s memtest-stable, so good enough for me)
- AMD RX 6600XT with 8GB VRAM (some might scoff at the VRAM amount, but coming from a Steam Deck where 16GB was shared between CPU and GPU, this is plenty!)
- 512GB NVMe SSD
- three 256GB SATA SSD-s (the previous owner put them in as RAID0, which is clever and works well as a game library)
- some Gigabyte motherboard that works (it really doesn’t matter here)
- some Fractal Design case, possibly a Define-series one (all I know is that it’s huuuuuuuuuge)

All-in-all, it cost me 365 EUR in Estonia in October 2025, and so far I’ve made about 25 EUR back from the SSD sales alone, with some items still up for sale. It’s not as portable as a Steam Deck, but it’s cheaper even if we account for the cost of the game controller and cables/accessories/adapters that you usually need.

Regarding the operating system choice, I tried both SteamOS from the Steam Deck recovery image, and Bazzite. Both work fine, and in the default couch gaming mode you won’t notice a difference, but I ended up defaulting to SteamOS because I had my setup and configuration changes tuned around that. The SteamOS recovery image approach does assume that you have an NVMe drive available, so if you lack one, you’re better off trying Bazzite as that can be installed on any drive. I replaced the NVMe SSD with a cheap 128GB one and utilized the bigger drive in the LattePanda IOTA setup that now serves as my home server. As a game library drive, I took a 1TB Samsung SSD that I had around, which roughly matches the storage that I had available on my Steam Deck, which I had ended up modding with a 1TB M.2 2230 SSD. With games like God of War Ragnarök taking up around 176GB, it’s not going to be the most luxurious arrangement, but for now it’s okay.

The Fractal case that it came with is fully metal, with sound dampening material present on the side panels. It’s a bit banged up, but still a pretty nice experience if you have the room for storing one in your setup. The case had one flaw that I stumbled upon: the power button on the Fractal case liked to get stuck, which seems to be a common issue with that model. I fixed that with a random power button that I sourced from a local electronic parts supplier for a few euros, and that works really well now, with the additional bonus of it being slightly more cat-proof.
The default fan curves on the motherboard were a bit too aggressive, so I had to slightly tune them down, and now the machine is quiet while doing a great job of keeping the internals cool. You can hear a subtle whirring when you’re in the same room with it, but during gaming it stays at reasonable volumes and is not noticeable. Certainly quieter than a Steam Deck would be.

The AMD GPU is a low/midrange model, but it gets the job done in 1080p gaming, and with a lot of titles it can do 4K with ease. In God of War Ragnarök I stuck with 1080p and cranked the settings, but with games like Need for Speed Hot Pursuit Remastered, I pushed the resolution to 4K with high/ultra settings, and it runs smoothly at 60Hz. This setup also taught me that Linux supports HDR now, which was news to me! My tech setup usually lags behind the state of the art, mostly because I don’t really see a need to upgrade to the latest and greatest thing out there if the current one works well enough, but this was a really nice surprise. My TV has a crappy HDR implementation, so I don’t get the full HDR experience, but it’s nice to see the TV show that HDR logo when I start up the machine.

Regarding the gaming experience, I’ve only noticed a few sore spots. For whatever reason, Need for Speed (2015) just does not start up on anything but an actual Steam Deck. It just doesn’t work here. I can’t be arsed to investigate this yet; the wonky physics in this game are perhaps not worth that effort. It’s also clear that the choice of an Intel CPU is generally fine, but in God of War Ragnarök it was running too well, so the CPU kept dropping down to lower clock speeds, which then made the game performance inconsistent. Finding that this was the issue was actually quite straightforward: when I first loaded the game, the shader compilation was taking place in the background, and even though the CPU was at a constant 100% usage, the game ran quite smoothly. It only started stuttering after that was done, and the integrated performance overlay helped confirm the issue, as its most detailed preset shows the frame time and CPU clock speed graphs really well.

Since this is just a Linux box, you can of course run a few commands to fix it. 3 Here’s how I fixed it: create a desktop entry, plus a small script that it launches, and don’t forget to mark the script as executable. Note that the script does require that you have set up passwordless sudo on the SteamOS installation; this can be configured in the sudoers file by adjusting the relevant line. With all that set up, in desktop mode, right-click on the desktop shortcut, choose “Add to Steam”, and now you can run this script any time in Steam gaming mode, even while a game is running!

All-in-all, I’m very satisfied with the experience that a cheap gaming PC box provides with SteamOS. The installation is painless, my wireless controllers just work, and aside from a few rare exceptions, my games run really well. It’s also way easier on my eyes, and with the 4K resolution I can actually see oncoming cars better in games like Need for Speed Hot Pursuit Remastered. 4

Less than three weeks after buying that gaming PC, the Steam Machine was officially announced. The rumored specs suggest a 6-core/12-thread CPU, 16GB of DDR5 RAM, and a custom AMD GPU with 8GB of VRAM that seems to be roughly comparable to an AMD RX 6600XT level of performance. It seems that I have accidentally built a Steam Machine. Oops.
Of course, the specs and final performance are not public at the time of writing, and the Steam Machine has many benefits (better SteamOS compatibility, good WiFi, smaller size, likely more efficient and quiet), but it’s still interesting how close I got with my setup and selection criteria. I was slightly disappointed that I got this machine right before that announcement, but then I reminded myself of the fact that I can enjoy games on the big screen right now, and the Steam Machine is scheduled for release in Q1 2026, which can be as late as the 31st of March 2026. And hey, when the Steam Machine does come out and I decide to get one, the current gaming desktop will make for a very good home server candidate with all the room that it has available, and the six SATA ports on the motherboard sure look tempting. I’m pretty sure that the Fractal case also allows something crazy like 17+ hard drives installed in it.

This approach of building my own Steam Machine of sorts did lead to me selling my Steam Deck. Better to have someone else enjoy it than having it sit in a box until its battery dies. That also serves as a major sign of confidence for this big box that makes my sparse downtime sessions more fun. If you have a machine with a modern AMD GPU, then give SteamOS a try; you might be surprised at how well it works. Even a laptop with an AMD APU can do it, as long as you temper your expectations regarding the image quality.

1. it’s a banger, try it if you’re into the story, or you just want to indiscriminately smash and kill. ↩︎
2. this is called foreshadowing ↩︎
3. some might see it as an “ugh, Linux moment” type of thing, but I see it as freedom to fix issues that you would otherwise be unable to even diagnose and address. Power to the players! ↩︎
4. you can probably tell that I had a blast replaying that game for the 5th time recently. It’s not even the best NFS game, and yet I love playing it over and over again. ↩︎

1 views

Self-hosting my photos with Immich

For every cloud service I use, I want to have a local copy of my data for backup purposes and independence. Unfortunately, the tool I had been using for this stopped working in March 2025 when Google restricted the OAuth scopes, so I needed an alternative for my existing Google Photos setup. In this post, I describe how I have set up Immich, a self-hostable photo manager. Here is the end result: a few (live) photos from NixCon 2025:

I am running Immich on my Ryzen 7 Mini PC (ASRock DeskMini X600), which consumes less than 10 W of power in idle and has plenty of resources for VMs (64 GB RAM, 1 TB disk). You can read more about it in my blog post from July 2024: “When I saw the first reviews of the ASRock DeskMini X600 barebone, I was immediately interested in building a home-lab hypervisor (VM host) with it. Apparently, the DeskMini X600 uses less than 10W of power but supports latest-generation AMD CPUs like the Ryzen 7 8700G!” Read more →

I installed Proxmox, an Open Source virtualization platform, to divide this mini server into VMs, but you could of course also install Immich directly on any server. I created a VM (named “photos”) with 500 GB of disk space, 4 CPU cores and 4 GB of RAM. For the initial import, you could assign more CPU and RAM, but for normal usage, that’s enough. I (declaratively) installed NixOS on that VM as described in this blog post: “For one of my network storage PC builds, I was looking for an alternative to Flatcar Container Linux and tried out NixOS again (after an almost 10 year break). There are many ways to install NixOS, and in this article I will outline how I like to install NixOS on physical hardware or virtual machines: over the network and fully declaratively.” Read more →

Afterwards, I enabled Immich with a small NixOS configuration. At this point, Immich is available locally, but not over the network, because NixOS enables a firewall by default. I could open the port in the firewall, but I actually want Immich to only be available via my Tailscale VPN, for which I don’t need to open firewall access — instead, I use Tailscale serve to forward traffic to Immich. Because I have Tailscale’s MagicDNS and TLS certificate provisioning enabled, that means I can now open https://photos.example.ts.net in my browser on my PC, laptop or phone.

At first, I tried importing my photos using the official Immich CLI. Unfortunately, the upload was not running reliably and had to be restarted manually a few times after running into a timeout. Later I realized that this was because the Immich server runs background jobs like thumbnail creation, metadata extraction or face detection, and these background jobs slow down the upload to the extent that the upload can fail with a timeout. The other issue was that, even after the upload was done, I realized that Google Takeout archives for Google Photos contain metadata in separate JSON files next to the original image files, and these files are not considered by the official CLI. Luckily, there is a great third-party tool called immich-go, which solves both of these issues! It pauses background tasks before uploading and restarts them afterwards, which works much better, and it does its best to understand Google Takeout archives. I ran immich-go and it worked beautifully.

My main source of new photos is my phone, so I installed the Immich app on my iPhone, logged into my Immich server via its Tailscale URL and enabled automatic backup of new photos via the icon at the top right.
My main source of new photos is my phone, so I installed the Immich app on my iPhone, logged into my Immich server via its Tailscale URL and enabled automatic backup of new photos via the icon at the top right. I am not 100% sure whether these settings are correct, but it seems like camera photos generally go into Live Photos, and Recent should cover other files…?! If anyone knows, please send an explanation (or a link!) and I will update the article. I also strongly recommend disabling notifications for Immich, because otherwise you get notifications whenever it uploads images in the background. These notifications are not required for background upload to work, as an Immich developer confirmed on Reddit. Open Settings → Apps → Immich → Notifications and un-tick the permission checkbox: Immich’s documentation on backups contains some good recommendations. The Immich developers recommend backing up the entire contents of , which is on NixOS. The subdirectory contains SQL dumps, whereas the 3 directories , and contain all user-uploaded data. Hence, I have set up a systemd timer that runs to copy onto my PC, which is enrolled in a 3-2-1 backup scheme (a rough sketch of such a copy job appears at the end of this post). Immich (currently?) does not contain photo editing features, so to rotate or crop an image, I download the image and use GIMP. To share images, I still upload them to Google Photos (depending on who I share them with). The two most promising options in the space of self-hosted image management tools seem to be Immich and Ente. I got the impression that Immich is more popular in my bubble, and Ente gave me the impression that its scope is far larger than what I am looking for: Ente is a service that provides a fully open source, end-to-end encrypted platform for you to store your data in the cloud without needing to trust the service provider. On top of this platform, we have built two apps so far: Ente Photos (an alternative to Apple and Google Photos) and Ente Auth (a 2FA alternative to the deprecated Authy). I don’t need an end-to-end encrypted platform. I already have encryption on the transit layer (Tailscale) and disk layer (LUKS), no need for more complexity. Immich is a delightful app! It’s very fast and generally seems to work well. The initial import is smooth, but only if you use the right tool. Ideally, the official CLI could be improved. Or maybe immich-go could be made the official one. I think the auto backup is too hard to configure on an iPhone, so that could also be improved. But aside from these initial stumbling blocks, I have no complaints.
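(As referenced above, here is a rough sketch of the kind of copy job such a systemd timer could trigger. The host name and directory paths are placeholders, not the author's actual setup; the Immich state directory was not preserved in this copy of the post.)

# Hypothetical pull-style backup, run periodically on the PC (e.g. from a
# systemd timer or cron): copy the Immich state directory from the "photos"
# VM over the Tailscale network.
rsync -aH --delete \
  photos.example.ts.net:/var/lib/immich/ \
  /backup/immich/

Because the SQL dumps live in a subdirectory of the same state directory as the uploaded media, a single rsync of that directory covers both, matching the backup recommendation quoted above.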

0 views
Taranis 2 weeks ago

Datacenters in space are a terrible, horrible, no good idea.

In the interests of clarity, I am a former NASA engineer/scientist with a PhD in space electronics. I also worked at Google for 10 years, in various parts of the company including YouTube and the bit of Cloud responsible for deploying AI capacity, so I'm quite well placed to have an opinion here. The short version: this is an absolutely terrible idea, and really makes zero sense whatsoever. There are multiple reasons for this, but they all amount to saying that the kind of electronics needed to make a datacenter work, particularly a datacenter deploying AI capacity in the form of GPUs and TPUs, is exactly the opposite of what works in space. If you've not worked specifically in this area before, I'll caution against making gut assumptions, because the reality of making space hardware actually function in space is not necessarily intuitively obvious. The first reason for doing this that seems to come up is abundant access to power in space. This really isn't the case. You basically have two options: solar and nuclear. Solar means deploying a solar array with photovoltaic cells – something essentially equivalent to what I have on the roof of my house here in Ireland, just in space. It works, but it isn't somehow magically better than installing solar panels on the ground – you don't lose that much power through the atmosphere, so intuition about the area needed transfers pretty well. The biggest solar array ever deployed in space is that of the International Space Station (ISS), which at peak can deliver a bit over 200kW of power. It is important to mention that it took several Shuttle flights and a lot of work to deploy this system – it measures about 2500 square metres, over half the size of an American football field. Taking the NVIDIA H200 as a reference, the per-GPU-device power requirements are on the order of 0.7kW per chip. These won't work on their own, and power conversion isn't 100% efficient, so in practice 1kW per GPU might be a better baseline. A huge, ISS-sized, array could therefore power roughly 200 GPUs. This sounds like a lot, but let's keep some perspective: OpenAI's upcoming Norway datacenter is intending to house 100,000 GPUs, probably each more power hungry than the H200. To equal this capacity, you'd need to launch 500 ISS-sized satellites. In contrast, a single server rack (as sold by NVIDIA preconfigured) will house 72 GPUs, so each monster satellite is only equivalent to roughly three racks. Nuclear won't help. We are not talking nuclear reactors here – we are talking about radioisotope thermoelectric generators (RTGs), which typically have a power output of about 50W - 150W. So not enough to even run a single GPU, even if you can persuade someone to give you a subcritical lump of plutonium and not mind you having hundreds of chances to scatter it across a wide area when your launch vehicle explosively self-disassembles.
Thermal Regulation
I've seen quite a few comments about this concept where people are saying things like, "Well, space is cold, so that will make cooling really easy, right?" Really, really no. Cooling on Earth is relatively straightforward. Air convection works pretty well – blowing air across a surface, particularly one designed to have a large surface area to volume ratio like a heatsink, transfers heat from the heatsink to the air quite effectively.
If you need more power density than can be directly cooled in this way (and higher power GPUs are definitely in that category), you can use liquid cooling to transfer heat from the chip to a larger radiator/heatsink elsewhere. In datacenters on Earth, it is common to set up cooling loops where machines are cooled via chilled coolant (usually water) that is pumped around racks, with the heat extracted and cold coolant returned to the loop. Typically the coolant is cooled via convective cooling to the air, so one way or another this is how things work on Earth. In space, there is no air. The environment is close enough to a hard, total vacuum as makes no practical difference, so convection just doesn't happen. On the space engineering side, we typically think about thermal management , not just cooling. Thing is, space doesn't really have a temperature as such. Only materials have a temperature. It may come as a surprise, but in the Earth-Moon system the average temperature of pretty much anything ends up basically the same as the average temperature of Earth, since the same balance of absorbed sunlight and radiated heat that sets Earth's temperature applies to anything else at roughly this distance from the Sun. If a satellite is rotating, a bit like a chicken on a rotisserie, it will tend toward having a consistent temperature that's roughly similar to that of the Earth surface. If it isn't rotating, the side pointing away from the sun will tend to get progressively colder, with a limit due to the cosmic microwave background, around 4 Kelvin, just a little bit above absolute zero. On the sunward side, things can get a bit cooked, hitting hundreds of degrees centigrade. Thermal management therefore requires very careful design, making sure that heat is carefully directed where it needs to go. Because there is no convection in a vacuum, this can only be achieved by conduction, or via some kind of heat pump. I've designed space hardware that has flown in space. In one particular case, I designed a camera system that needed to be very small and lightweight, whilst still providing science-grade imaging capabilities. Thermal management was front and centre in the design process – it had to be, because power is scarce in small spacecraft, and thermal management has to be achieved whilst keeping mass to a minimum. So no heat pumps or fancy stuff for me – I went in the other direction, designing the system to draw a maximum of about 1 watt at peak, dropping to around 10% of that when the camera was idle. All this electrical power turns into heat, so if I can draw 1 watt only while capturing an image, then turn the image sensor off as soon as the data is in RAM, I can halve the consumption; then, when the image has been downloaded to the flight computer, I can turn the RAM off and drop the power down to a comparative trickle. The only thermal management needed was bolting the edge of the board to the chassis so the internal copper planes in the board could transfer any heat generated. Cooling even a single H200 will be an absolute nightmare. Clearly a heatsink and fan won't do anything at all, but there is a liquid cooled H200 variant. Let's say this was used. This heat would need to be transferred to a radiator panel – this isn't like the radiator in your car, no convection, remember? – which needs to radiate heat into space. Let's assume that we can point this away from the sun. The Active Thermal Control System (ATCS) on the ISS is an example of such a thermal control system. This is a very complex system, using an ammonia cooling loop and a large thermal radiator panel system.
It has a dissipation limit of 16kW, so roughly 16 H200 GPUs, a bit under the equivalent of a quarter of a ground-based rack. The thermal radiator panel system measures 13.6m x 3.12 m, i.e., roughly 42.5 square metres. If we use 200kW as a baseline and assume all of that power will be fed to GPUs, we'd need a system 12.5 times bigger, i.e., roughly 531 square metres, or about 2.6 times the size of the relevant solar array. This is now going to be a very large satellite, dwarfing the ISS in area, all for the equivalent of three standard server racks on Earth.
Radiation Tolerance
This is getting into my PhD work now. Assuming you can both power and cool your electronics in space, you have the further problem of radiation tolerance. The first question is where in space? If you are in low Earth orbit (LEO), you are below the inner radiation belt, where radiation dose is similar to that experienced by high altitude aircraft – more than an airliner, but not terrible. Further out, in mid Earth orbit (MEO), where the GPS satellites live, satellites are not protected by the Van Allen belts – worse, this orbit is literally inside them. Outside the belts, you are essentially in deep space (details vary with how close to the Sun you happen to be, but the principles are similar). There are two main sources of radiation in space – from our own star, the Sun, and from deep space. This basically involves charged particles moving at a substantial percentage of the speed of light, from electrons to the nuclei of atoms with masses up to roughly that of oxygen. These can cause damage directly, by smashing into the material from which chips are made, or indirectly, by travelling through the silicon die without hitting anything but still leaving a trail of charge behind them. The most common consequence of this happening is a single-event upset (SEU), where a direct impact or (more commonly) a particle passing through a transistor briefly (approx 600 picoseconds) causes a pulse to happen where it shouldn't have. If this causes a bit to be flipped, we call this a SEU. Other than damage to data, they don't cause permanent damage. Worse is single-event latch-up. This happens when a pulse from a charged particle causes a voltage to go outside the power rails powering the chip, causing a transistor essentially to turn on and stay on indefinitely. I'll skip the semiconductor physics involved, but the short version is that if this happens in a bad way, you can get a pathway connected between the power rails that shouldn't be there, burning out a gate permanently. This may or may not destroy the chip, but without mitigation it can make it unusable. For longer duration missions, which would be the case with space based datacenters because they would be so expensive that they would have to fly for a long time in order to be economically viable, it's also necessary to consider total dose effects. Over time, the performance of chips in space degrades, because repeated particle impacts make the tiny field-effect transistors switch more slowly and turn on and off less completely. In practice, this causes maximum viable clock rates to decay over time, and for power consumption to increase. Though not the hardest issue to deal with, this must still be mitigated or you tend to run into a situation where a chip that was working fine at launch stops working because either the power supply or cooling has become inadequate, or the clock is running faster than the chip can cope with.
It's therefore necessary to have a clock generator that can throttle down to a lower speed as needed – this can also be used to control power consumption, so rather than a chip ceasing to function it will just get slower. The next FAQ is, can't you just use shielding? No, not really, or maybe up to a point. Some kinds of shielding can make the problem worse – an impact to the shield can cause a shower of particles that then cause multiple impacts at once, which is far harder to mitigate. The very strongest cosmic rays can go through an astonishing amount of solid lead – since mass is always at a premium, it's rarely possible to deploy significant amounts of shielding, so radiation tolerance must be built into the system (this is often described as Radiation Hardness By Design, RHBD). GPUs and TPUs and the high bandwidth RAM they depend on are absolutely worst case for radiation tolerance purposes. Small geometry transistors are inherently much more prone both to SEUs and latch-up. The very large silicon die area also makes the frequency of impacts higher, since that scales with area. Chips genuinely designed to work in space are taped out with different gate structures and much larger geometries. The processors that are typically used have roughly the performance of a 20-year-old PowerPC. Bigger geometries are inherently more tolerant, both to SEUs and total dose, and the different gate topologies are immune to latch up, whilst providing some degree of SEU mitigation via fine-grained redundancy at the circuit level. Taping out a GPU or TPU with this kind of approach is certainly possible, but the performance would be a tiny fraction of that of a current generation Earth-based GPU/TPU. There is a you-only-live-once (my terminology!) approach, where you launch the thing and hope for the best. This is commonplace in small cubesats, and also why small cubesats often fail after a few weeks on orbit. Caveat emptor!
Communications
Most satellites communicate with the ground via radio. It is difficult to get much more than about 1Gbps reliably. There is some interesting work using lasers to communicate with satellites, but this depends on good atmospheric conditions to be feasible. Contrast this with a typical server rack on Earth, where 100Gbps rack-to-rack interconnect would be considered the low end, and it's easy to see that this is also a significant gap.
Conclusions
I suppose this is just about possible if you really want to do it, but I think I've demonstrated above that it would be extremely difficult to achieve, disproportionately costly in comparison with Earth-based datacenters, and would offer mediocre performance at best. If you still think this is worth doing, good luck, space is hard. Myself, I think it's a catastrophically bad idea, but you do you.

0 views
DHH 2 weeks ago

Local LLMs are how nerds now justify a big computer they don't need

It's pretty incredible that we're able to run all these awesome AI models on our own hardware now. From downscaled versions of DeepSeek to gpt-oss-20b, there are many options for many types of computers. But let's get real here: they're all vastly behind the frontier models available for rent, and thus for most developers a curiosity at best. This doesn't take anything away from the technical accomplishment. It doesn't take anything away from the fact that small models are improving, and that maybe one day they'll indeed be good enough for developers to rely on them in their daily work. But that day is not today. Thus, I find it spurious to hear developers evaluate their next computer on the prospect of how well it's capable of running local models. Because they all suck! Whether one sucks a little less than the other doesn't really matter. And as soon as you discover this, you'll be back to using the rented models for the vast majority of the work you're doing. This is actually great news! It means you really don't need a 128GB VRAM computer on your desk. Which should come as a relief now that RAM prices are skyrocketing, exactly because of AI's insatiable demand for more resources. Most developers these days can get by with very little, especially if they're running Linux. So as an experiment, I've parked my lovely $2,000 Framework Desktop for a while. It's an incredible machine, but in the day-to-day, I've actually found I barely notice the difference compared to a $500 mini PC from Beelink (or Minisforum). I bet you likely need way less than you think too.

1 views
Jeff Geerling 2 weeks ago

Air Lab is the Flipper Zero of air quality monitors

This air quality monitor costs $250. It's called the Air Lab , and I've been using it to measure the air in my car, home, studio, and a few events over the past few months. And in using it over the course of a road trip I learned to not run recirculate in my car quite as often—more on that later. Networked Artifacts built in some personality:

0 views
@hannahilea 2 weeks ago

Learning to learn how to play with electronics

A journey of a thousand doofy hardware projects starts with a single Adafruit blink

0 views
Jeff Geerling 2 weeks ago

How to silence the fan on a CM5 after shutdown

Out of the box, if you buy a Raspberry Pi Compute Module 5, install it on the official CM5 IO Board, and install a fan on it (e.g. my current favorite, the EDAtec CM5 Active Cooler), you'll notice the fan ramps up to 100% speed after you shut down the Pi. That's not fun, since at least for a couple of my CM5s, they are more often powered down than running, creating a slight cacophony!

0 views
Jason Fried 3 weeks ago

Quality: The Concept2 RowErg

The Concept2 RowErg is one of the highest quality products I've ever used. Had one for years now, feels like it'll last another 100. Simple construction, durable materials, low maintenance. Comically easy to assemble. Tips up for storage, leaving a tiny footprint. The PM5 display is simple B&W, no touchscreen, just a few easy-to-use-when-sweaty rubberized buttons. Just two D batteries that seem to last forever. No plugs, no charging, no cables needed. Roll it around on wheels, steady once flat. Perfectly grips the ground, no wobble, no rattle, no movement. The whole thing is just right. I've rarely encountered a product so well considered. They knew where to stop. To me, this is a pinnacle product. The model to build towards. No matter what you make, aim to make it as well as the Concept 2 RowErg. And all that for under $1000. One of the few products I've paid this much for that feels like a steal. No affiliation, just a fan. https://concept2.com/ergs/rowerg -Jason

0 views
./techtipsy 3 weeks ago

Every time I write about a single board computer, half the internet goes down

It happened again. This time it’s Cloudflare. The last time I wrote about a single board computer, it was AWS that went down on the same day. Today, I wrote about the LattePanda IOTA. I’ll let y’all know once I plan on writing about another single board computer; it seems to be bad for the internet.

0 views
./techtipsy 3 weeks ago

LattePanda IOTA review: how does it perform as a home server?

Disclosure: the review sample was provided by DFRobot, the makers of LattePanda. I am allowed to keep the review sample indefinitely, no money exchanged hands, and as always, this post covers my own thoughts and views on the product. 1 In 2023, I happened to find a LattePanda V1 for sale at a good price. Given the then-poor availability of affordable Raspberry Pi units, I got one for testing and finding potential use cases for it in my setup. However, it was just a little bit too weak for any practical uses in 2023, with its CPU and USB connectivity being just slow enough to be of less use, and the networking being capped at 100 Mbit/s. In 2025, we have the spiritual successor to it: the LattePanda IOTA. It keeps the same form factor, but the connectivity and raw power have all received a significant jump, with the CPU performance rivalling my current home server, the trusty ThinkPad T430. The marketing materials list all sorts of sensible use cases for it. I’m sure that it works fine for those, but I’m only interested in one thing: how close does this board get to being the perfect home server? The perfect home server uses very little power, offers plenty of affordable storage and provides a lot of performance when it’s actually being relied upon. In my case, low power means less than 5 W while idling, 10+ TB of redundant storage for data resilience and integrity concerns, and performance means about 4 modern CPU cores’ worth (low-to-midrange desktop CPU performance). The model I’m reviewing is the 8GB RAM/64GB eMMC one, with a Windows 11 installation on it (not activated). Along with the review unit itself, I got sent the following accessories:
active cooler
M.2 M-key expansion board
51W PoE expansion board
M.2 4G LTE expansion board
UPS expansion board
The board was tested with a Lenovo 65W USB-C power adapter, because that’s what I had available. Given the specs of the board and the accessories, that should be plenty. As far as I know, USB power delivery seems to work fine and it’s not just a weird USB-C connector that requires specific voltages to work. The M.2 NVMe SSD used in this review is a 512 GB Samsung PM9A1. I got that one from another PC that really didn’t need a boot drive that large. Most of the testing was done with a fresh Fedora Server 43 installation, kernel version 6.17.7. The key specs of my unit:
CPU: Intel N150, 4 cores, 4 threads, up to 3.6 GHz
RAM: 8/16 GB (depending on model)
Onboard storage: 64/128GB eMMC (depending on model)
Networking: gigabit Ethernet port
Real-time clock: yes!
I suggest looking at the spec sheet if you’re interested in all the fine details and available configurations. The overall connectivity has been improved with the new version of this board compared to the old board. The USB ports are all fast 10 Gbit/s ones, and we have actual PCIe connectivity to play with, although the available bandwidth is quite limited with a PCIe 3.0 x1 lane available on the port that both the M.2 M-key and PoE adapter connect to. What caught my eye was the CPU performance. I’ve been proudly running an old ThinkPad T430 as a server for a while now, with some failed attempts to find a more low-power and efficient solution. The Intel N150 is now offering similar levels of performance, but in a much smaller power envelope. When it comes to more specialized functionalities, such as GPIO and the RP2040 microcontroller, I don’t currently have a solid use case for them, so they won’t be covered in this review. I might fancy giving them a go in the future though; it would be nice to get some environmental sensors on it to monitor the temperature and humidity of the server room (which is a closet). Since I also don’t have a 4G LTE modem available, I did not test the associated adapter.
The way you can add expansion boards to the LattePanda IOTA is quite similar to how the Raspberry Pi 5 and other similar single board computers do it: you simply run a flexible cable to an adapter board, and bam, you have extra connectivity! With the M.2 M-key adapter kit, you get the adapter itself, some mounting screws and brass stand-offs, and a tiny little flexible cable for the PCIe signal. The link speed is PCIe 3.0, with one lane available. In theory, this means a maximum of 1 GB/s of throughput. In practice and with this board and SSD combination, I got a maximum of ~810 MB/s. I expect some levels of losses with these types of setups, so in my view this seems normal. For the test, I just did a . The SSD itself supports up to 4 lanes of PCIe connectivity so that should not be a limiting factor here. The lovely part about M.2 NVMe ports is that you can use them for a lot of off-label use cases. Fancy some SATA ports? There’s an adapter for that. 2 Or a network card? Some fancy AI accelerator thingy? Or a full-sized GPU? Anything will work (probably), as long as the cables and adapters are high quality, and you provide extra power to the device through other means. The only device on my network that is connected over PoE is currently an Ubiquiti Wi-Fi access point, and that is unlikely to change in the near future because that would require a full replacement of my networking gear. 3 However, I still gave this board a quick go, and I’m happy to report that it also works as an additional standalone Ethernet port. The Ethernet controller seems to be similar or the same as on the main board, and it shows up as a separate networking device. Both are Realtek NIC-s ( ), and they work with the driver. Realtek has a spotty compatibility story overall on Linux from what I’ve read, but this one seems to work fine on Fedora Server 43. I was very close to pulling the trigger and turning it into a beefy router so that I could finally move my WireGuard networks onto the router, as my current one cannot do more than 20 Mbit/s of WireGuard traffic, but I didn’t end up going through with that idea because of how well the SBC did in other areas. As some of you might know, I’m a fan of playing with fire, I mean, 18650 Li-ion battery cells, and I’m hoping to one day build a solar-powered server of my own (of which there are many examples ). I took some spare 18650 cells that came from an old ThinkPad battery, made sure that the voltages are more-or-less the same, and threw them on the board. Connecting the UPS board with the standoffs was fine, but the cable connecting it with the SBC was finicky. I triple-checked that the connector was the right way, but had to still use an uncomfortable amount of force to connect it all up. The battery cells themselves sit snugly on the board, and unless you drop the board, they should not fall out on their own. You’d still want to build a case around it if you’re going to actually put it to use in rough environments. The manual for the UPS board emphasizes that it only works on Windows 10/11, and sadly that seems to be the case: the UPS does not seem to show up as a USB-listed device, and tools like NUT did not find anything to monitor with a quick 5-minute investigation. The UPS board also has an interesting selection of switches that you can use to adjust the behaviour of the board, like automatically turning the board on when power comes back on, and setting an 80% battery charge limit.
The first one was not really necessary to use; the board would follow whatever setting you have enabled on the SBC itself. I configured mine via UEFI settings to automatically turn on with a power adapter connected, and that worked here as well. The run time of your LattePanda IOTA with the UPS expansion board will heavily depend on your workloads and the quality of your battery cells. Mine were used cells, and then I hit the board with to create some load on it. It ran for over an hour like that, and then I got bored and wanted to proceed with testing other accessories. The marketing materials mention up to 8 hours of runtime, and I suspect that with good Li-ion cells and workloads where you idle most of the time, it will likely be achievable. The board seems to trigger a hard shutdown on Linux because the host OS is not aware of a battery being connected. Not that catastrophic for most modern filesystems and database engines, but something to consider in your own workloads in case they are Linux-based. The UPS board seems to handle power connection and disconnection events well enough; it did not do anything weird when repeatedly plugging and unplugging the USB-C cable. 4 Based on the readings from a wall outlet energy meter, the board uses up to 20W when charging the cells. It’s possible for the board to pull more than that with a maximum CPU load and connected peripherals, so I wonder if that may be an issue with more intense usage scenarios. During charging and discharging cycles, even under heavy loads, the battery cells did not get hot and were at best warm to the touch. It’s gigabit. Fine for my use case given that I still live in 2006 and only have devices that support gigabit Ethernet speeds at best (excluding the Ubiquiti Wi-Fi AP), but certainly less than some competing products. Compared to the LattePanda V1, the USB port performance is actually decent for my use case. I can connect up to three USB-connected storage devices to the board, so that’s exactly what I did. I set up three different USB-connected devices:
USB hard drive (Seagate Basic)
USB SATA SSD (Samsung QVO 4TB in ICY BOX USB-SATA adapter)
USB NVMe SSD (512 GB Samsung PM9A1 with some random cheap USB to M.2 NVMe adapter)
For each device (including the on-board eMMC device), I ran a , which puts a sequential read workload on all the drives in an infinite loop (a rough sketch of such a loop follows below). After about 72 TB of data read in less than 24 hours, I checked the kernel logs and there were no stability issues whatsoever. The NVMe SSD started throttling due to heat, which was expected with that cheap adapter. Assuming no issues with any cables and adapters, the USB ports seem to be solid enough for running storage devices off of. Yes, it can be a horrible idea in some use cases, but at the same time my ThinkPad T430 has been excellent with USB-based storage, and that’s with one of the USB ports being coffee-stained! The eMMC chip is also more performant compared to the previous iteration, with sequential read speeds averaging around 316 MB/s, writes around 175 MB/s, and average read latency being around 0.15 ms. Certainly good enough for a boot drive.
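(The exact benchmark commands were not preserved here; the following is only a rough illustration of the kind of sequential-read loop described, assuming plain dd from coreutils. The device paths are placeholders; double-check them before running anything like this.)

# Hypothetical sketch: read each drive sequentially in an infinite loop and
# report throughput. This only reads, but verify the device paths first.
for dev in /dev/sda /dev/sdb /dev/nvme0n1; do
  (while true; do
    dd if="$dev" of=/dev/null bs=1M iflag=direct status=progress
  done) &
done
wait

A tool like fio gives more controlled numbers, but a simple dd loop is enough to soak-test USB and adapter stability over many terabytes of reads.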
As is tradition with Windows, the initial impressions are horrible: update processes running in the background made the active cooler go wild and the device felt sluggish. But after that process is done, the experience is not bad at all if you look past the OS being Windows. I did not do a thorough investigation and I suggest formatting the device boot drive either way when receiving it, but the Windows 11 installation looked clean enough, with no obvious bloatware. The LattePanda V1 had some quirks. The performance was iffy, and you had to specify a Linux kernel parameter on first boot so that Fedora Linux does not mistake the optional display interface for an always-connected primary display. The previous version also didn’t include a real-time clock (RTC) by default, which meant that it was impossible to schedule some systemd timers as the time would always jump years ahead on boot on distros like Fedora Server. I got stuck in a reboot loop with a scheduled reboot job that way; it was not fun to recover from. With the LattePanda IOTA, I have not observed any weird oddities and quirks with it. Even the kernel logs don’t show anything that’s problematic, and the RTC is handy to have around as that helps avoid the issue mentioned above. With the LattePanda V1, the cooler was not strictly required, but strongly recommended if you were going to use the board with moderate to high sustained loads. My solution was to slap an inappropriately sized heat sink on it with a thermal pad and zip ties and/or velcro strips, which looked horrible, because it was. With the LattePanda IOTA, the cooler is now a mandatory part of the assembly. It can be fitted with either a passive cooler , a case with passive cooling , or an active cooler . The active cooler does a good job of keeping the board cool, but it does get super loud at higher loads. The default fan curve is very primitive, with the fan changing its speed in big and sudden increments. Bursty workloads certainly feel bursty with this fan. You will not want to be in the same room with this active cooler. The sound profile is very similar to a thin and light laptop, and the fan has a very strong high-pitched whine to it. Here’s an audio recording of the noise under heavy load if you’re interested (MP3 file). Recorded using a Google Pixel 8a. You can mitigate the active cooler noise issue by reducing the CPU clock speed by setting a lower power limit in UEFI settings, or on Linux, setting a lower CPU performance ceiling using driver option once on boot (see the sketch further below for one way to do this). This comes at the obvious cost of some raw performance, but given that CPU power scales non-linearly, you may not even notice it that much. If you are sensitive to fan noise, then do get the passive cooler and slap a Noctua fan on it; it will likely be a much better experience with both the cooling performance and noise levels. Oh, and fun fact: I got so carried away with testing that I actually forgot to remove the plastic film on the larger thermal pad that cools the supporting components. And then I did about 24 hours of stress testing with that arrangement. I can confirm that the design of the board is idiot-proof, as I did not actually notice any severe throttling or thermal issues with that mistake. You can actually see the plastic film being present in a few photos of the board in this review. I still can’t believe that after all these years I ended up making that one mistake that you usually see online in tech support gore posts.
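(The specific driver option is not named in this copy of the post; as a rough sketch, here is how a CPU performance ceiling is commonly set on Linux with the intel_pstate driver or the generic cpufreq sysfs interface. The percentage and frequency values below are illustrative, not the author's settings.)

# Hypothetical sketch: cap the CPU at ~60% of its maximum performance to keep
# the active cooler quiet (intel_pstate driver). Applies until reboot; re-run
# from a boot-time unit if you want it to persist.
echo 60 | sudo tee /sys/devices/system/cpu/intel_pstate/max_perf_pct

# Alternative that works with most cpufreq drivers: set an absolute ceiling in kHz
# (here 2.2 GHz) on every core.
echo 2200000 | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq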
The idle power consumption of the LattePanda IOTA seems to be around 4.0W, slightly higher than the Raspberry Pi 5 8GB at around 3.2W, but lower than most x86 mini PC-s, whose idle power consumption is typically in the range of 6-14W. During the disk read speed stress test, I saw a maximum of 24.4W pulled from the wall. With the disk read stress test and a full CPU stress test, I saw a peak of 36.3W, with it quickly dropping down as the CPU settled down at a lower clock speed. This board came surprisingly close to my perfect home server criteria that I had outlined earlier this year. Less than 5W when idling? Check. 10+ TB of redundant storage? Check. 4 modern cores’ worth of CPU performance? Check. Enough performance during bursty workloads? So far, yes. I then installed a fresh copy of Fedora Server 43 and moved all my home server workloads to it. The eMMC storage is used as a boot drive, writes are disabled, workloads requiring good latency and speed are on the 512GB NVMe SSD, and bulk storage is connected via two existing USB-SATA adapters taken from one of those WD Elements/MyBook external hard drive enclosures. Then it just worked. No issues. 5 The drop in the overall power consumption of my whole home server and networking stack was also immediately noticeable. Here are my observations of the CPU performance and behaviour after hitting it with an all-core CPU load:
2.9 GHz for a short time period (10-15 seconds), with the CPU hovering around 80°C
2.2-2.3 GHz after that, with the CPU dropping to around 70°C
I have seen the CPU hit around 3.6 GHz with a single core load while there is nothing running in the background, but during my normal home server operations there is enough work spread across all 4 cores that this doesn’t happen all that often, and 2.9 GHz is the ceiling for single core performance. The only limiting factor so far has been the 8 GB of memory on my review unit, but on the bright side that limitation forced me to review the memory usage of some of the jobs that I run on my home server, which ended up with me finding a few resource hogs and then fixing them all up. Now I can run about 30 Docker containers of various resource consumption on a single board computer, and with less than 4GB of RAM used. I set up an 8GB swap file on the SSD, just in case. Thanks to the relatively small boot drive, I also learned that even if you move the Docker folder to another location, will still clutter up your boot drive, so you’ll have to change that path in its file setting. I’m genuinely impressed with how well the LattePanda IOTA runs as a home server. The board isn’t really designed with that use case in mind, and I suspect that the Intel N150 might be doing most of the heavy lifting here, but still, very impressive! Is it the perfect home server? No, but it’s pretty damn close to my definition of it. For those interested in what options are available on the board via its UEFI settings, here are some screenshots of the settings. 6
(Screenshots in the original post cover: Advanced -> ACPI, CPU configuration, Super IO configuration, Serial port 1 configuration, SMART Fan Control, Trusted Computing, NVMe configuration (no device connected at the time of the screenshot, oops), Power configuration, USB configuration, Serial Port console redirection, SDIO configuration, Realtek PCIe Ethernet controller; Chipset -> System Agent (SA) configuration, Device configuration; Security -> Secure Boot; Save & Exit.)
If the LattePanda IOTA with its adapters fits your project requirements, you’re aware of its limitations, and the price is right, then I believe it’s a solid choice for your next project. My testing didn’t immediately break it, even when I forgot to remove the plastic film on one of the thermal pads. The current pricing of it and its accessories seems to be roughly in the ballpark of the Raspberry Pi 5 8GB (based on prices in Estonia).
Boards like the Zimaboard 2 (have not tested it myself) are more expensive, but they’re also catering to a slightly different audience and have better specs, like 2.5G Ethernet ports and SATA ports with power delivery suitable for running two 3.5" hard drives straight from the board. It’s hard to beat the bargain that you can get from a used mini PC or NAS, but it won’t come with the charm, low power consumption and bragging rights that a single board computer gets you, especially if you’re using it for an off-label use case like I am. 7 In the meantime, I’ll keep rocking it as a home server. In case something noteworthy happens, I’ll update this post, which is brought to you by the very same LattePanda IOTA at the time of publishing.
this also marks the first time that I’ve been sent a review sample throughout the course of running this blog!  ↩︎
do note that with most M.2 PCIe->SATA adapters, the controller of the adapter determines how good of an experience you will have. With some, I’ve read that the controllers may not handle some failure scenarios well, one device having issues may throw off the whole controller, and now you have a bigger mess.  ↩︎
the earliest PC motherboard with a gigabit Ethernet connection that I’ve personally used was manufactured in 2006. That’s how long gigabit Ethernet has been around for in the consumer space.  ↩︎
say that 10 times in a row!  ↩︎
I know, that usually does not happen on this blog.  ↩︎
being a prolific open source influencer does not bring in as much money as you’d think, so I haven’t bought a proper capture device yet.  ↩︎
no, but seriously, I cannot be the only one who has a strange affection towards SBC-s with their bare PCB-s. I can’t tell a capacitor from a resistor, but the boards are just so damn cool, right?  ↩︎

0 views
Ruslan Osipov 3 weeks ago

Modality, tactility, and car interfaces

Modal interfaces are genuinely cool. For the uninitiated, a “modal” interface is one where the same input does different things depending on the state (or mode) the system is in. Think of your smartphone keyboard popping up only when you need to type, or a gas pedal driving the car forward or backward depending on the gear. I love the concept enough to dedicate a whole chapter of Mastering Vim to it. But there’s a time and a place for everything, and a car’s center console is neither the time nor the place for a flat sheet of glass. I was traveling this week and rented a Kia EV6 - a perfectly serviceable electric car. I was greeted by a sleek touch panel that toggles control between the air conditioning and the audio system. Dear car manufacturers: please, I am begging you, stop. When I’m driving down the highway at 75 miles per hour, the absolute last thing I should be doing is taking my eyes off the road to visually verify which mode my AC knobs are in so I can turn down the volume. I can’t feel my way around the controls because gently grazing the surface of the screen registers as a button press. It’s not just annoying - it’s unsafe. Modality works fine when you have physical feedback. My old Pebble Time Round ( may it rest in peace ) had a tactile modal interface. It had four buttons that did different things depending on the context. But because they were physical, clicky buttons, I could operate the watch without ever looking at it. I could skip a track or dismiss a notification while riding my bike, purely by feel. Compare that to modern smart watches, or, worse, earbuds. Don’t even get me started on touch controls on earbuds. I’m out here riding my bike through rough terrain - I do not have the fine motor control required to perform a delicate gesture on a wet piece of plastic lodged in my ear. I miss the click. I miss the resistance. I miss knowing I’ve pressed a button without needing confirmation from the software. We’ve optimized for screens that can be anything in so many areas of our lives, but these screens aren’t particularly good at controlling stuff when we’re living said lives. Yeah, I miss analog buttons.

0 views
Brain Baking 3 weeks ago

Why I Don't Need a Steam Machine

For those of you who are living under a rock, Valve announced three new hardware devices joining their Steam Deck line-up: a new controller, a VR headset, and the GameCube—no wait, GabeCube—no wait, Steam Machine. The shiny little cube is undoubtedly Valve’s (second) attempt to break into the console market. This time, it might just work. The hardware is ready to arrive in your living room next spring. The biggest question is: will it arrive in our living room? Reading all the hype has certainly enthused me (e.g. Brendon’s The Steam Machine is the Future , PC Gamer’s Valve is all over ARM , Eurogamer’s Steam Machine preview , ResetEra’s Steam Hardware thread ); especially the part where the Machine is just a PC that happens to be tailored towards console gaming. According to Valve, you can install anything you want on it—it’s SteamOS, just like your trusty Deck, meaning you can boot into KDE and totally do your thing. Except that this shiny little cube is six times as powerful. I’m sure Digital Foundry will validate that next year.
Valve's newly announced Steam Machine: a mysterious looking sleek black box.
However, this post isn’t about specs, expectations, or dreams: it’s about tempering my own enthusiasm. I’d like to tell myself why I don’t really need a Steam Machine. The following list will hopefully make it easier to say no when the buy buttons become available.
You’re a retro gamer. You don’t need the power of six Steam Decks. To do what, run DOSBox?
Your TV doesn’t support 4K. Again, no need for those 4K 60 FPS.
You generally dislike AAA games. With the Steam Machine, you might be able to finally properly run DOOM Eternal and all of the Assassin’s Creed games. That you don’t like playing.
You don’t have time to play games anyway. Ouch, that hurts, but it’s not untrue.
The TV will be occupied anyway. The Steam Machine is not a Switch: you can’t switch to handheld mode. When are you going to play on the Machine if the TV is being used to watch your wife’s favourite shows?
You already have too many gaming related hardware pieces. That’ll mean you’ll have to divide your time by an even bigger number to devote an equal amount to playing them.
There’s no room for yet another nondescript box under the TV. See above: why don’t you first try to do something with that SNES Mini and PlayStation Mini besides letting them collect dust?
You’re a physical gamer. This is Steam. There will be no insertion of cartridges, no blowing of carts, and no staring at game collections on a shelf. It’s Steam, not Good Old Games. Sure, it can run GOG games, but the Machine is primarily designed to run Steam.
You avoid purchasing from Steam like the plague, yet you’re willing to buy a Machine dedicated to it? Are you crazy?
The last time you booted Steam was over a year ago. Don’t tell me you’re suddenly interested in running the platform on a dedicated machine.
You don’t have time to fiddle with configuration. Button and trackpad mappings to get the controls just right enough to play strategy games designed to be played with keyboard and mouse will only leave you frustrated.
Your MacBook can emulate Windows games just fine. You recently bought CrossOver and played Wizordum and older Windows 98/XP stuff on it. It even runs Against The Storm flawlessly. No need for Proton or whatever. In two years, you’ll upgrade your M1 to an M4+: there’s the power upgrade. If CrossOver is struggling to run that particular game you so badly want to play, it’ll be buttery smooth in a few years. You’re going to do the laptop upgrade anyway regardless of the Steam Machine.
You already have a huge gaming backlog. Thanks to your buddy Joel you bought too many physical Switch games that are still waiting to be touched. Are you really ready to open up another can of worms?
You dislike a digital backlog. It’s easy to have hundreds of games on there: see your GOG purchases. Why don’t you try to count the ones that you actually played, let alone finished.
You’re not going to use the Machine to run office software. Your laptop and other retro machines are good enough at handling that task. What are you really going to do with this cube besides gaming?
Those cool looking indie games will be released for Switch in due time anyway. Remember Pizza Tower ? It’s out on Switch now. Remember to buy the cart on Fangamer, together with the Anton Blast one.
It’s rumoured to cost more than . Save that money for a Switch 2 if the games become interesting enough to justify that upgrade; currently, they’re not. Also, see the backlog point above.
All HDMI ports, both on the TV and your external monitors, are occupied. Unless you’re willing to constantly switch cables, you’ll need to invest in an HDMI switch. Another .
You can’t buy this without buying the Steam Controller. That’s easily another you already spent buying the Mobapad controller for your Switch as a replacement for the semi-broken Joy-Cons.
You can’t buy this as an expense on the company. You’re closing down the company, remember. (More on that later)
The cool looking LED and programmable front display don’t justify an expensive purchase. After the initial excitement wears off, the LED will become annoying and you’ll simply turn it off.
So you see, I don’t really need a Steam Machine… Fuck it, I’m getting one.
By Wouter Groeneveld on 16 November 2025.

0 views
Jason Scheirer 3 weeks ago

The Innioasis Y1 Music Player

I’ve been enjoying standalone MP3 players! The Innioasis Y1 kept coming across my radar, I like the form factor, it was $50. What the heck, why not. The community for this thing is insane, it’s just as active as the people doing weird things with my RG35XX . It’s really cool seeing so many people doing neat things with such a simple piece of hardware. And like the RG35XX, part of the value proposition is this is a cheap piece of commodity hardware that would not have been possible in this way even 5 years ago, but is now inexpensive enough and flexible enough to be an incredible product for the money. I saw you could put a flavor of Rockbox on the thing so I did that. The UI out of the box isn’t nearly as polished but there’s a neat community-supported updater that goes so far as to install skins for you. I’m currently using another Adwaita adaptation I found on the y1 subreddit which handles CJK correctly, which turns out to be important to me. Rockbox has the ability to create a play log file, so I can scrobble my commute/work listening again ! I use the LastFMLog plugin to manually create a file, then use rb-scrobbler to upload it. It’s a manual process I only do every week or so, but it’s okay. This is awesome.
Three surprises. Not quite complaint territory but worth knowing about:
No external storage. My Shanlings had a TF card slot so I could expand and swap the storage easily. This is internal. 128GB so it won’t hold my whole library but it holds everything I care about.
No touchscreen. Again, coming from Shanling this took a little bit of getting used to. Pure iPod classic ergonomics, buttons only.
Build quality is not super solid. The screen is plastic, not glass, and it scratched almost immediately. You can definitely “feel” a center of gravity while the majority of the device feels light. No metal in its construction. It doesn’t feel brittle but by no means is it a luxury experience.
This thing is a lot of fun to use, though! The novelty will eventually wear off but it feels good to have something iPod shaped in my life again.

0 views