Latest Posts (20 found)
./techtipsy 1 week ago

Every time I write about a single board computer, half the internet goes down

It happened again. This time it’s Cloudflare. The last time I wrote about a single board computer, it was AWS that went down on the same day. Today, I wrote about the LattePanda IOTA. I’ll let y’all know once I plan on writing about another single board computer; it seems to be bad for the internet.

0 views
./techtipsy 1 week ago

LattePanda IOTA review: how does it perform as a home server?

Disclosure: the review sample was provided by DFRobot, the makers of LattePanda. I am allowed to keep the review sample indefinitely, no money changed hands, and as always, this post covers my own thoughts and views on the product. 1

In 2023, I happened to find a LattePanda V1 for sale at a good price. Given the then-poor availability of affordable Raspberry Pi units, I got one for testing and for finding potential use cases for it in my setup. However, it was just a little bit too weak for any practical use in 2023: the CPU and USB connectivity were just slow enough to get in the way, and the networking was capped at 100 Mbit/s.

In 2025, we have its spiritual successor: the LattePanda IOTA. It keeps the same form factor, but the connectivity and raw power have all received a significant jump, with the CPU performance rivalling my current home server, the trusty ThinkPad T430.

The marketing materials list all sorts of sensible use cases for it. I’m sure that it works fine for those, but I’m only interested in one thing: how close does this board get to being the perfect home server? The perfect home server uses very little power, offers plenty of affordable storage and provides a lot of performance when it’s actually being relied upon. In my case, low power means less than 5 W while idling, storage means 10+ TB of redundant storage for data resilience and integrity, and performance means about 4 modern CPU cores’ worth (low-to-midrange desktop CPU performance).

The model I’m reviewing is the 8 GB RAM/64 GB eMMC one, with a Windows 11 installation on it (not activated). Along with the review unit itself, I got sent the following accessories:

The board was tested with a Lenovo 65W USB-C power adapter, because that’s what I had available. Given the specs of the board and the accessories, that should be plenty. As far as I know, USB power delivery works fine, and it’s not just a weird USB-C connector that requires specific voltages to work.
The M.2 NVMe SSD used in this review is a 512 GB Samsung PM9A1. I got that one from another PC that really didn’t need a boot drive that large. Most of the testing was done with a fresh Fedora Server 43 installation, kernel version 6.17.7. I suggest looking at the spec sheet if you’re interested in all the fine details and available configurations.

The overall connectivity has been improved compared to the old board. The USB ports are all fast 10 Gbit/s ones, and we have actual PCIe connectivity to play with, although the available bandwidth is quite limited: a single PCIe 3.0 x1 lane is available on the port that both the M.2 M-key and PoE adapters connect to.

What caught my eye was the CPU performance. I’ve been proudly running an old ThinkPad T430 as a server for a while now, with some failed attempts to find a more low-power and efficient solution. The Intel N150 now offers similar levels of performance, but in a much smaller power envelope.

When it comes to more specialized functionality, such as GPIO and the RP2040 microcontroller, I don’t currently have a solid use case for them, so they won’t be covered in this review. I might fancy giving them a go in the future though; it would be nice to get some environmental sensors on it to monitor the temperature and humidity of the server room (which is a closet). Since I also don’t have a 4G LTE modem available, I did not test the associated adapter.

The way you add expansion boards to the LattePanda IOTA is quite similar to how the Raspberry Pi 5 and other similar single board computers do it: you simply run a flexible cable to an adapter board, and bam, you have extra connectivity! With the M.2 M-key adapter kit, you get the adapter itself, some mounting screws and brass stand-offs, and a tiny little flexible cable for the PCIe signal. The link speed is PCIe 3.0, with one lane available. In theory, this means a maximum of 1 GB/s of throughput.
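A quick way to sanity-check sequential read throughput on Linux is a dd-based read test. This is a hedged sketch, not the author's exact command (which isn't preserved in this copy of the post); it uses a scratch file so it is safe to try anywhere, while on real hardware you would read the block device itself (e.g. /dev/nvme0n1) or pass `iflag=direct` so the page cache doesn't inflate the numbers.

```shell
# Create a 512 MiB scratch file, then time a sequential read of it.
# dd prints the achieved throughput (MB/s) to stderr on completion.
dd if=/dev/zero of=scratch.bin bs=1M count=512 conv=fsync 2>/dev/null
dd if=scratch.bin of=/dev/null bs=1M
rm -f scratch.bin
```

Tools like `fio` give more controlled numbers, but for a single sequential stream this is usually close enough.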
In practice, with this board and SSD combination, I got a maximum of ~810 MB/s. I expect some level of loss with these types of setups, so in my view this seems normal. For the test, I just did a . The SSD itself supports up to 4 lanes of PCIe connectivity, so that should not be a limiting factor here.

The lovely part about M.2 NVMe ports is that you can use them for a lot of off-label use cases. Fancy some SATA ports? There’s an adapter for that. 2 Or a network card? Some fancy AI accelerator thingy? Or a full-sized GPU? Anything will work (probably), as long as the cables and adapters are high quality, and you provide extra power to the device through other means.

The only device on my network that is connected over PoE is currently an Ubiquiti Wi-Fi access point, and that is unlikely to change in the near future, because that would require a full replacement of my networking gear. 3 However, I still gave this board a quick go, and I’m happy to report that it also works as an additional standalone Ethernet port. The Ethernet controller seems to be similar or identical to the one on the main board, and it shows up as a separate networking device. Both are Realtek NICs ( ), and they work with the driver. Realtek has a spotty compatibility story on Linux overall from what I’ve read, but this one seems to work fine on Fedora Server 43. I was very close to pulling the trigger and turning it into a beefy router so that I could finally move my Wireguard networks onto the router, as my current one cannot do more than 20 Mbit/s of Wireguard traffic, but I didn’t end up going through with that idea because of how well the SBC did in other areas.

As some of you might know, I’m a fan of playing with fir- 18650 Li-ion battery cells, and I’m hoping to one day build a solar-powered server of my own (of which there are many examples). I took some spare 18650 cells that came from an old ThinkPad battery, made sure that the voltages were more-or-less the same, and threw them on the board.
Connecting the UPS board with the standoffs was fine, but the cable connecting it to the SBC was finicky. I triple-checked that the connector was the right way around, but still had to use an uncomfortable amount of force to connect it all up. The battery cells themselves sit snugly on the board, and unless you drop the board, they should not fall out on their own. You’d still want to build a case around it if you’re going to actually put it to use in rough environments.

The manual for the UPS board emphasizes that it only works on Windows 10/11, and sadly that seems to be the case: the UPS does not show up as a USB-listed device, and tools like NUT did not find anything to monitor in a quick 5-minute investigation.

The UPS board also has an interesting selection of switches that you can use to adjust its behaviour, like automatically turning the board on when power comes back, and setting an 80% battery charge limit. The first one was not really necessary to use; the board follows whatever setting you have enabled on the SBC itself. I configured mine via UEFI settings to automatically turn on when a power adapter is connected, and that worked here as well.

The run time of your LattePanda IOTA with the UPS expansion board will heavily depend on your workloads and the quality of your battery cells. Mine were used cells, and then I hit the board with to create some load on it. It ran for over an hour like that, and then I got bored and wanted to proceed with testing other accessories. The marketing materials mention up to 8 hours of runtime, and I suspect that with good Li-ion cells and workloads that idle most of the time, that is likely achievable.

The board seems to trigger a hard shutdown on Linux because the host OS is not aware of a battery being connected. Not that catastrophic for most modern filesystems and database engines, but something to consider for your own workloads in case they are Linux-based.
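For anyone wanting to repeat that quick check, this is roughly what a 5-minute Linux-side probe looks like (a sketch mirroring, not reproducing, the investigation above; `nut-scanner` ships with the NUT packages and probes for UPS hardware NUT can drive):

```shell
# Does the UPS enumerate on the USB bus, and can NUT see anything?
probe_ups() {
  lsusb 2>/dev/null || echo "lsusb not available"
  if command -v nut-scanner >/dev/null 2>&1; then
    nut-scanner -U    # scan the USB bus for supported UPS devices
  else
    echo "nut-scanner not installed (it ships with the NUT packages)"
  fi
}
probe_ups
```

If the device doesn't even appear in `lsusb`, no amount of NUT configuration will help, which matches the behaviour described above.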
The UPS board seems to handle power connection and disconnection events well enough; it did not do anything weird when I repeatedly plugged and unplugged the USB-C cable. 4 Based on the readings from a wall outlet energy meter, the board uses up to 20 W when charging the cells. It’s possible for the board to pull more than that with a maximum CPU load and connected peripherals, so I wonder if that may be an issue in more intense usage scenarios. During charging and discharging cycles, even under heavy loads, the battery cells did not get hot and were at best warm to the touch.

It’s gigabit. Fine for my use case, given that I still live in 2006 and only have devices that support gigabit Ethernet speeds at best (excluding the Ubiquiti Wi-Fi AP), but certainly less than some competing products.

Compared to the LattePanda V1, the USB port performance is actually decent for my use case. I can connect up to three USB-connected storage devices to the board, so that’s exactly what I did. I set up three different USB-connected devices: For each device (including the on-board eMMC device), I ran a , which puts a sequential read workload on all the drives in an infinite loop. After about 72 TB of data read in less than 24 hours, I checked the kernel logs and there were no stability issues whatsoever. The NVMe SSD started throttling due to heat, which was expected with that cheap adapter. Assuming no issues with any cables and adapters, the USB ports seem to be solid enough for running storage devices off of. Yes, it can be a horrible idea in some use cases, but at the same time my ThinkPad T430 has been excellent with USB-based storage, and that’s with one of the USB ports being coffee-stained!

The eMMC chip is also more performant than in the previous iteration, with sequential read speeds averaging around 316 MB/s, writes around 175 MB/s, and average read latency around 0.15 ms. Certainly good enough for a boot drive.
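The stress command itself is elided in this copy of the post, but a load generator matching its description ("a sequential read workload on all the drives in an infinite loop") can be sketched like this. The device list is a placeholder; a scratch file stands in for real drives here so the snippet is safe to run as-is.

```shell
# One background reader per device, each looping sequential reads forever.
# On real hardware, replace the scratch file with device paths
# (e.g. /dev/sda /dev/sdb /dev/nvme0n1) and let it run for hours.
dd if=/dev/zero of=scratch_disk.bin bs=1M count=16 2>/dev/null
pids=""
for dev in scratch_disk.bin; do
  ( while true; do dd if="$dev" of=/dev/null bs=1M 2>/dev/null; done ) &
  pids="$pids $!"
done
sleep 2          # demo duration only; a real soak test runs much longer
kill $pids
rm -f scratch_disk.bin
```

Watching `dmesg -w` in another terminal while this runs is the easy way to catch the kind of USB resets or link drops the kernel-log check above was looking for.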
The LattePanda V1 struggled with larger displays, and when I gave it a go during this review, it would not properly display an image on my 3440x1440 monitor. The LattePanda IOTA just did it, at 60 Hz. On Fedora Workstation and GNOME, the experience was smooth. Once you start doing things in the browser, like video playback, the situation is less optimal, but as a makeshift desktop PC it is alright for most low-to-midrange activities.

The board came with a Windows 11 installation (not activated). As is tradition with Windows, the initial impressions were horrible: update processes running in the background made the active cooler go wild and the device felt sluggish. But after that process is done, the experience is not bad at all, if you look past the OS being Windows. I did not do a thorough investigation, and I suggest formatting the device’s boot drive either way when receiving it, but the Windows 11 installation looked clean enough, with no obvious bloatware.

The LattePanda V1 had some quirks. The performance was iffy, and you had to specify a Linux kernel parameter on first boot so that Fedora Linux would not confuse the optional display interface for an always-connected primary display. The previous version also didn’t include a real-time clock (RTC) by default, which meant that it was impossible to schedule some systemd timers, as the time would always jump years ahead on boot on distros like Fedora Server. I got stuck in a reboot loop with a scheduled reboot job that way; it was not fun to recover from.

With the LattePanda IOTA, I have not observed any weird oddities or quirks. Even the kernel logs don’t show anything problematic, and the RTC is handy to have around, as it helps avoid the issue mentioned above.

With the LattePanda V1, the cooler was not strictly required, but strongly recommended if you were going to use the board with moderate to high sustained loads.
My solution was to slap an inappropriately sized heat sink on it with a thermal pad and zip ties and/or velcro strips, which looked horrible, because it was. With the LattePanda IOTA, the cooler is now a mandatory part of the assembly. It can be fitted with either a passive cooler, a case with passive cooling, or an active cooler.

The active cooler does a good job of keeping the board cool, but it does get super loud at higher loads. The default fan curve is very primitive, with the fan changing its speed in big and sudden increments. Bursty workloads certainly feel bursty with this fan. You will not want to be in the same room with this active cooler. The sound profile is very similar to a thin and light laptop, and the fan has a very strong high-pitched whine to it. Here’s an audio recording of the noise under heavy load if you’re interested (MP3 file), recorded using a Google Pixel 8a.

You can mitigate the active cooler noise by reducing the CPU clock speed, either by setting a lower power limit in UEFI settings, or on Linux, by setting a lower CPU performance ceiling using a driver option once on boot. This comes at the obvious cost of some raw performance, but given that CPU power scales non-linearly with clock speed, you may not even notice it that much. If you are sensitive to fan noise, then do get the passive cooler and slap a Noctua fan on it; it will likely be a much better experience in both cooling performance and noise levels.

Oh, and fun fact: I got so carried away with testing that I actually forgot to remove the plastic film on the larger thermal pad that cools the supporting components. And then I did about 24 hours of stress testing with that arrangement. I can confirm that the design of the board is idiot-proof, as I did not notice any severe throttling or thermal issues despite that mistake. You can actually see the plastic film in a few photos of the board in this review.
I still can’t believe that after all these years, I ended up making that one mistake that you usually see online in tech support gore posts.

The idle power consumption of the LattePanda IOTA seems to be around 4.0 W: slightly higher than the Raspberry Pi 5 8GB at around 3.2 W, but lower than most x86 mini PCs, which typically idle in the range of 6-14 W. During the disk read speed stress test, I saw a maximum of 24.4 W pulled from the wall. With the disk read stress test and a full CPU stress test combined, I saw a peak of 36.3 W, with it quickly dropping as the CPU settled down at a lower clock speed.

This board came surprisingly close to the perfect home server criteria that I had outlined earlier this year. Less than 5 W when idling? Check. 10+ TB of redundant storage? Check. 4 modern cores’ worth of CPU performance? Check. Enough performance during bursty workloads? So far, yes.

I then installed a fresh copy of Fedora Server 43 and moved all my home server workloads to it. The eMMC storage is used as a boot drive, writes to it are disabled, workloads requiring good latency and speed are on the 512 GB NVMe SSD, and bulk storage is connected via two existing USB-SATA adapters taken from one of those WD Elements/MyBook external hard drive enclosures. Then it just worked. No issues. 5 The drop in the overall power consumption of my whole home server and networking stack was also immediately noticeable.

Here are my observations of the CPU performance and behaviour after hitting it with an all-core CPU load: I have seen the CPU hit around 3.6 GHz under a single-core load while nothing is running in the background, but during normal home server operations there is enough work spread across all 4 cores that this doesn’t happen all that often, and 2.9 GHz is the ceiling for single-core performance.
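If you want to trade a bit of that clock speed for less fan noise, as mentioned earlier, one way to set a lower CPU performance ceiling on Linux is the intel_pstate driver's sysfs knob. The knob name is my assumption (the post doesn't spell out the exact option); this sketch degrades gracefully when run on other machines.

```shell
# Cap the CPU at 60% of its maximum performance (value is an example).
# Needs root on the target machine; the setting resets on reboot, so it
# would typically be applied once on boot (e.g. via a systemd unit).
PSTATE=/sys/devices/system/cpu/intel_pstate/max_perf_pct
if [ -w "$PSTATE" ]; then
  echo 60 > "$PSTATE"
  echo "CPU performance ceiling set to 60%"
else
  echo "intel_pstate knob not writable here (needs root on an Intel machine)"
fi
```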
The only limiting factor so far has been the 8 GB of memory on my review unit, but on the bright side, that limitation forced me to review the memory usage of some of the jobs that I run on my home server, which ended up with me finding a few resource hogs and fixing them all up. Now I can run about 30 Docker containers of various resource consumption on a single board computer, with less than 4 GB of RAM used. I set up an 8 GB swap file on the SSD, just in case. Thanks to the relatively small boot drive, I also learned that even if you move the Docker folder to another location, will still clutter up your boot drive, so you’ll have to change that path in its file setting.

I’m genuinely impressed with how well the LattePanda IOTA runs as a home server. The board isn’t really designed with that use case in mind, and I suspect that the Intel N150 might be doing most of the heavy lifting here, but still, very impressive! Is it the perfect home server? No, but it’s pretty damn close to my definition of one.

For those interested in what options are available on the board via its UEFI settings, here are some screenshots of the settings. 6

If the LattePanda IOTA with its adapters fits your project requirements, you’re aware of its limitations, and the price is right, then I believe it’s a solid choice for your next project. My testing didn’t immediately break it, even when I forgot to remove the plastic film on one of the thermal pads. The current pricing of it and its accessories seems to be roughly in the ballpark of the Raspberry Pi 5 8GB (based on prices in Estonia). Boards like the Zimaboard 2 (which I have not tested myself) are more expensive, but they’re also catering to a slightly different audience and have better specs, like 2.5G Ethernet ports and SATA ports with power delivery suitable for running two 3.5" hard drives straight from the board.
It’s hard to beat the bargain that you can get from a used mini PC or NAS, but it won’t come with the charm, low power consumption and bragging rights that a single board computer gets you, especially if you’re using it for an off-label use case like I am. 7 In the meantime, I’ll keep rocking it as a home server. In case something noteworthy happens, I’ll update this post, which is brought to you by the very same LattePanda IOTA at the time of publishing.

1. this also marks the first time that I’ve been sent a review sample throughout the course of running this blog! ↩︎
2. do note that with most M.2 PCIe->SATA adapters, the controller of the adapter determines how good of an experience you will have. With some, I’ve read that the controllers may not handle some failure scenarios well, one device having issues may throw off the whole controller, and now you have a bigger mess. ↩︎
3. the earliest PC motherboard with a gigabit Ethernet connection that I’ve personally used was manufactured in 2006. That’s how long gigabit Ethernet has been around for in the consumer space. ↩︎
4. say that 10 times in a row! ↩︎
5. I know, that usually does not happen on this blog. ↩︎
6. being a prolific open source influencer does not bring in as much money as you’d think, so I haven’t bought a proper capture device yet. ↩︎
7. no, but seriously, I cannot be the only one who has a strange affection towards SBC-s with their bare PCB-s. I can’t tell a capacitor from a resistor, but the boards are just so damn cool, right? ↩︎

Accessories included with the review unit:
- active cooler
- M.2 M-key expansion board
- 51W PoE expansion board
- M.2 4G LTE expansion board
- UPS expansion board

Key specs:
- CPU: Intel N150, 4 cores, 4 threads, up to 3.6 GHz
- RAM: 8/16 GB (depending on model)
- Onboard storage: 64/128 GB eMMC (depending on model)
- Networking: gigabit Ethernet port
- Real-time clock: yes!
USB storage devices tested:
- USB hard drive (Seagate Basic)
- USB SATA SSD (Samsung QVO 4TB in ICY BOX USB-SATA adapter)
- USB NVMe SSD (512 GB Samsung PM9A1 with some random cheap USB to M.2 NVMe adapter)

CPU behaviour under an all-core load:
- 2.9 GHz for a short time period (10-15 seconds), with the CPU hovering around 80°C
- 2.2-2.3 GHz after that, with the CPU dropping to around 70°C

UEFI settings screenshots:
- Advanced -> ACPI
- Advanced -> CPU configuration
- Advanced -> Super IO configuration
- Advanced -> Serial port 1 configuration
- Advanced -> SMART Fan Control
- Advanced -> Trusted Computing
- Advanced -> NVMe configuration (no device connected at time of screenshot, oops)
- Advanced -> Power configuration
- Advanced -> USB configuration
- Advanced -> Serial Port console redirection
- Advanced -> SDIO configuration
- Advanced -> Realtek PCIe Ethernet controller
- Chipset -> System Agent (SA) configuration
- Chipset -> Device configuration
- Security -> Secure Boot
- Save & Exit

0 views
./techtipsy 2 weeks ago

I found the best use case for AI

In my professional career, I’ve started experimenting with LLM-based tooling to see if it is all hype or if there is some actual substance to it. I’ve seen the good and the bad parts, but there’s one use case that has worked out really well within our team. 1

Tooling like Claude Code and Cursor relies on various text files that describe the project, the practices used in it and instructions on how to perform certain actions in the repository, mostly highlighting project-specific knowledge. A lot of that can be generated with the tooling, and it’s good practice to update those instructions whenever you notice an LLM-based tool doing something unexpected or plain wrong on a regular basis.

The next time your coworker is going on a longer vacation, sneak in an instruction that sets their name as the name for the tool. It’s even better if it’s added alongside a bunch of legitimate changes, like a 1000-line PR that does something useful. It can be something as simple as:

And just like that, you’ve replaced your coworker with AI! Now, when your coworker returns from vacation, see how long it takes until they catch on. In our team, it took about 3 working days until they discovered what was causing it.

It’s such a basic and dumb prank, but it cheered me and my team up a lot shortly after we set the stage for it, because Claude Code constantly referred to itself as Heino in all sorts of situations, and especially after I grilled the LLM-based tool about it doing a poor job. 2 Given that we were doing a lot of heavy lifting in the project around that time, with deadlines looming, I really needed that laugh.

One odd thing that I observed is that Claude Code would quite often start calling me Heino. That, and the fact that Claude Code would usually ignore about a third of the instructions given to it, helped me understand one of its limitations well.

1. it’s a vibes-based world out there. ↩︎
2. these are paraphrased, but you get the idea. ↩︎
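The example instruction itself didn't survive in this copy of the post. Based on the surrounding description, it was a one-liner in the tool's instruction file (e.g. a CLAUDE.md or Cursor rules file) along these lines, given here as a reconstruction rather than the author's exact wording:

```
Your name is Heino. Always refer to yourself as Heino.
```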

1 view
./techtipsy 3 weeks ago

The day IPv6 went away

I take pride in hosting my blog on a 13-year-old ThinkPad acting as a home server, but sometimes it’s kind of a pain. It’s only fair that I cover the downsides of this setup in contrast to all the positives.

Yesterday, I happened to notice that a connection to a backup endpoint was gone. Okay, happens sometimes. Then I went into the router and noticed that hey, that’s odd, there’s no WAN6 connection showing up. All gone. Just as if I had gone back to a crappy ISP that only provides IPv4! Restarting the interface did not work, but a full router restart did.

Since the IPv4 address and IPv6 prefix are all dynamic, that meant that my DNS entries had just gone stale. I do have a custom DNS auto-updater script for my DNS provider, but DNS propagation takes time. Luckily not a lot of time: my uptime checker only reported downtime of 5-15 minutes, depending on the domain. Here’s what it looked like on OpenWRT.

Impact on my blog? Not really noticeable, since IPv4 kept trucking along. Perhaps a few IPv6-only readers may have noticed this. 1 I can always move to a cheap VPS or the cloud at a moment’s notice, but where’s the fun in that? I can produce AWS levels of uptime at home, thankyouverymuch!

I think I’ll now need to figure out some safeguards, even if it means scheduling a weekly router reboot if the WAN6 interface has not been up for X amount of time. That, and better monitoring.

1. if you are that person, say hi! ↩︎
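The safeguard idea could start as something like this hedged sketch, run periodically from cron on the router. The logical interface name `wan6` and the use of OpenWrt's `ifup` helper are assumptions based on the setup described above.

```shell
# If the WAN6 interface has no global IPv6 address, restart it.
has_global_v6() {
  ip -6 addr show dev "$1" scope global 2>/dev/null | grep -q inet6
}
if ! has_global_v6 wan6; then
  echo "wan6 has no global IPv6 address, restarting it"
  ifup wan6 2>/dev/null || true   # OpenWrt helper; escalate to reboot if this keeps failing
fi
```

A counter file could track consecutive failures and trigger a full `reboot` only after several unsuccessful interface restarts, matching the "weekly reboot as last resort" idea.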

0 views
./techtipsy 3 weeks ago

Why Nextcloud feels slow to use

Nextcloud. I really want to like it, but it’s making that really difficult. I like what Nextcloud offers with its feature set and how easily it replaces a bunch of services under one roof (files, calendar, contacts, notes, to-do lists, photos etc.), but no matter how hard I try and how much I optimize its resources on my home server, it feels slow to use, even on hardware ranging from decent to good.

Then I opened developer tools and found the culprit. It’s the JavaScript.

On a clean page load, you will be downloading about 15-20 MB of JavaScript, which does compress down to about 4-5 MB in transit, but that is still a huge amount of JavaScript. For context, I consider 1 MB of JavaScript to be on the heavy side for a web page/app. Yes, that JavaScript will be cached in the browser for a while, but you will still be executing all of it on each visit to your Nextcloud instance, and that will take a long time due to the sheer amount of code your browser now has to execute on the page.

A significant contributor to this heft seems to be the bundle, which based on its name seems to provide some common functionality that’s shared across the different Nextcloud apps that one can install. It’s coming in at 4.71 MB at the time of writing. Then you want notifications, right? is here to cover you, at 1.06 MB. Then there are the app-specific views. The Calendar app takes 5.94 MB to show a basic calendar view. The Files app includes a bunch of individual scripts, such as (1.77 MB), (1.17 MB), (1.09 MB), (0.9 MB, which I’ve never used!) and many smaller ones. The Notes app with its basic bare-bones editor? 4.36 MB for the !

This means that even on an iPhone 13 mini, opening the Tasks app (to-do list) will take a ridiculously long time. Imagine opening your shopping list at the store and having to wait 5-10 seconds before you see anything, even with a solid 5G connection. Sounds extremely annoying, right?
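You can reproduce this kind of measurement outside the browser devtools with curl's `size_download` write-out variable. This is a sketch: the Nextcloud host and bundle path in the comment are placeholders, and the local-file fallback only exists so the snippet can be tried offline.

```shell
# On-the-wire (compressed) size of a script bundle on a real instance:
#   curl -so /dev/null -w '%{size_download} bytes\n' --compressed \
#     https://cloud.example.com/dist/core-common.js
# Offline demonstration of the same measurement against a local file:
printf 'console.log("hello")\n' > /tmp/bundle.js
curl -so /dev/null -w 'transferred: %{size_download} bytes\n' file:///tmp/bundle.js
rm -f /tmp/bundle.js
```

Comparing the `--compressed` transfer size with the uncompressed file size shows what the network saves you, but the browser still has to parse and execute the full uncompressed amount.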
I suspect that a lot of this is due to how Nextcloud is architected. There are bound to be some hefty common libraries and tools that allow app developers to provide a unified experience, but even then there is something seriously wrong with the end result; the functionality-to-bundle-size ratio is way off.

As a result, I’ve started branching some things out of Nextcloud, such as replacing the Tasks app with a private Vikunja instance, and the Photos app with a private Immich instance. Vikunja is not perfect, but its 1.5 MB of JavaScript is an order of magnitude smaller than Nextcloud’s, making it feel incredibly fast in comparison. However, with other functionality I have to admit that the convenience of Nextcloud is enough to dissuade me from replacing it, as its feature set compares well to the alternatives.

I’m sure that there are some legitimate reasons behind the current state, and overworked development teams and volunteers are unfortunately the norm in the industry, but that doesn’t take away from the fact that user experience and accessibility suffer as a result.

I’d like to thank Alex Russell for writing about web performance and why it matters, with supporting evidence and actionable advice; it has changed how I view websites and web apps and has pushed me to be better in my own work. I highly suggest reading his content, starting with the performance inequality gap series. It’s educational, insightful and incredibly irritating once you learn how crap most things are and how careless a lot of development teams are towards performance and accessibility.

0 views
./techtipsy 1 month ago

./project2038: can I keep the Orange Pi Zero running until 2038 and beyond?

Start of experiment: September 2025
Post last updated: October 2025

Check the live status of my Orange Pi Zero here!

I love the Orange Pi Zero. It’s tiny, uses very little power and it’s just neat! It’s also the subject of the very first post on my blog, which makes it a bit special. Unfortunately, I haven’t really found a good use case for it, given that its performance is quite limited and the CPU is a 32-bit ARM CPU with 4 relatively weak cores, which rules out using it as a Docker container host due to the architectural limitations. I’ve currently set it up as an additional online backup endpoint, tacking a 4 TB hard drive to it and letting restic handle the rest.

The board also has a few quirks at the moment that I’ve worked around. For example, rebooting seems to be broken, and it’s unlikely to get fixed any time soon. I resolved it by simply not rebooting it 1, at least not on a regular schedule, anyway. I’m hoping that any short-term power outages at home will take care of the need to reboot it. It also runs quite hot in its stock Armbian configuration, which I worked around by running on startup as that forces the CPU to always run at its slowest clock speed (480 MHz). Without it, I found that this board can run its CPU at 105°C, and it does have a thermal shutdown feature.

This board has been featured in a few previous posts as well, and even then it was quite underpowered:

Now that I have it set up, will it be able to survive to the year 2038 and beyond? Only time will tell. Which issue will we run into first? Plausible options based on my previous experience:

The board is currently running the latest version of Armbian, and I might occasionally refresh its version from time to time. If you like Armbian, then please support them! They’re doing great work keeping all sorts of SBC-s up and running with usable versions of Debian and Ubuntu Linux.

What are the chances that the day I make this project public, AWS suffers a massive outage? Poetic in a way.
And yes, my website and the Orange Pi Zero were fully operational.

Previous posts featuring this board:
- the little Wi-Fi AP that could
- seedbox on a wall
- database optimization adventures on low-end hardware

Plausible failure options:
- the cheap 8GB SD card craps out
- a component on the board dies from heat-related issues
- the backup hard drive dies (mounted with option, so should not prevent booting)
- the USB power supply dies
- the board loses Armbian support completely (currently under community maintenance status)

1. the first time in my career that the solution ended up being “have you tried not turning it off and on again?” ↩︎
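The startup command that pins the CPU to its slowest clock is elided in this copy of the post. One common way to do it via the cpufreq sysfs interface is sketched below; this is an assumption about the approach, not necessarily the author's exact command, and it is a harmless no-op when run without root.

```shell
# Pin each CPU policy's maximum frequency to its hardware minimum
# (480 MHz on the Orange Pi Zero). Needs root on the target board.
applied=0
for policy in /sys/devices/system/cpu/cpufreq/policy*; do
  [ -w "$policy/scaling_max_freq" ] || continue
  cat "$policy/cpuinfo_min_freq" > "$policy/scaling_max_freq"
  applied=$((applied + 1))
done
echo "capped $applied cpufreq policies to their minimum frequency"
```

Running it from a boot-time service (e.g. a small systemd unit or rc.local) makes the cap survive the rare reboot.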

0 views
./techtipsy 1 month ago

Comparing the power consumption of a 30 year old refrigerator to a brand new one

Our apartment came with a refrigerator. It was alright: it made things cold, it kept them cold. It was also incredibly noisy, and no matter how much I fiddled with its settings, the compressor was always running and any ice cream left in the deep freeze part got rock solid. 1

When I hooked up one of my smart plugs to it, I soon learned why: one of the two compressors was running all the time. This led to a huge block of ice forming on the back of the main compartment, and the deep freeze section icing up really quickly. I suspect that the thermostat may have been busted and contributed to the issue, but after trying to repair a dishwasher, getting cut about 10 times on my hands and losing, I had zero interest in attempting another home appliance repair on my own.

The refrigerator was the UPO Jääkarhu (jääkarhu means polar bear in Finnish), and the manual that the previous owner had kept around had July 1995 on it, meaning that the refrigerator was about the same age as I am: 30 years old. Not bad at all for a home appliance!

I shopped around for a new refrigerator and got a decent one that’s about the same size, except newer. I won’t mention the brand here because they didn’t pay me anything and this post really isn’t a refrigerator review, but it was in the low-to-midrange class, sporting a “no frost” feature, and could be bought for about 369 EUR in Estonia in the summer of 2025. Based on some napkin math, I assumed that within a few years, the electricity savings would cover the upfront cost of buying the new refrigerator, assuming that it doesn’t break down.

After letting it run for a while, I had some data! It turns out that the old one consumed 3.7x more electricity than the new one. Here are some typical daily power consumption numbers:

The difference is more noticeable if we zoom out a bit. Moving from ~78 kWh to ~21 kWh consumed each month is nice.
Around the time we replaced the refrigerator, we also got a working dishwasher, and with those two combined I saw a solid 10-20% decrease in the overall power usage of the whole apartment. We went from using 334 kWh in June to 268 kWh in July, 298 kWh in August and 279 kWh in September. Remember that napkin math I made earlier? If we assume about 57 kWh savings per month, and an average electricity price of 17 cents per kWh (based on actual rates during August 2025), it will take about 38 months or a bit over 3 years for the new refrigerator to pay off in the most pessimistic scenario. The pay-off will likely be larger if we account for energy prices usually rising during winter. Don’t worry about the old refrigerator, we gave it away to a person who needed one for their new home in the short term as a stopgap until they get further with renovation work. Even got some good chocolate for that! The only point of concern with this change is that I don’t really trust the new refrigerator to last as long as the old one. The previous one was good for 30 years if you look past the whole ice buildup, heat and noise, but with the new one I suspect that it’s not going to last as long. At least my new refrigerator doesn’t have a Wi-Fi-connected screen on it! honestly, I miss that a lot, the ice cream was colder for longer, I ate it in smaller bites and savored it more.  ↩︎ old refrigerator: 2.6 kWh new refrigerator: 0.7 kWh honestly, I miss that a lot, the ice cream was colder for longer, I ate it in smaller bites and savored it more.  ↩︎
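The napkin math in this post can be reproduced in a few lines, using only the numbers quoted in the post itself:

```python
# Refrigerator replacement pay-off, using the measured numbers from the post.
OLD_DAILY_KWH = 2.6    # typical daily consumption, old refrigerator
NEW_DAILY_KWH = 0.7    # typical daily consumption, new refrigerator
PRICE_EUR = 369        # purchase price of the new refrigerator
EUR_PER_KWH = 0.17     # average electricity price, August 2025

# Daily savings scaled to a 30-day month: (2.6 - 0.7) * 30 = 57 kWh.
monthly_savings_kwh = (OLD_DAILY_KWH - NEW_DAILY_KWH) * 30
payoff_months = PRICE_EUR / (monthly_savings_kwh * EUR_PER_KWH)

print(f"consumption ratio: {OLD_DAILY_KWH / NEW_DAILY_KWH:.1f}x")
print(f"monthly savings: {monthly_savings_kwh:.0f} kWh")
print(f"pay-off: {payoff_months:.0f} months")
```

Any higher electricity price (winter rates, for example) shortens the pay-off time proportionally.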

./techtipsy 1 month ago

Testing two 18 TB white label SATA hard drives from datablocks.dev

This post is NOT sponsored, the products were bought with my hard-earned money.

I’ve been running a full SSD storage setup in my home server for a few years and I’ve been happy with it, except for the storage anxiety that comes with running small pools of fast storage, which is why I started looking at how the hard drive market is doing. Half of tech YouTube has been sponsored by companies like ServerPartDeals, so they were one of the first places I looked at, but they seem to only operate within the US, and the shipping and taxes destroy any price advantage of ordering from there to Estonia (which is in Europe).

At some point I stumbled upon datablocks.dev, which seems to operate within a similar niche, but in Europe and on a much smaller scale. What caught my eye were their white label hard drive offerings. Their website has a good explanation of the differences between recertified and white label hard drives. In short: white label drives have no branding, have no or a very low number of power-on hours, may have small scratches or dents, but are in all other aspects completely functional and usable. White label drives also have a price advantage compared to branded recertified drives. Here’s one example with 18 TB drives: the recertified one is 16.7% more expensive than the white label one, and the only obvious difference seems to be the sticker on the drive. I highly suspect that the white label one is also manufactured by Seagate, based on the physical similarities.

I took some time to think things over and compared the pricing of various drives. The drives were all competitively priced against each other, with the price per terabyte hovering around 13 EUR/TB, so it didn’t matter much which drive size you picked, you’d still get a pretty solid deal. It was also a better deal than a WD Elements/My Book drive of the same size. I decided to go with two 18 TB hard drives.
I considered buying the 20 TB or 22 TB capacities, but decided to go with 18 TB because it’s the largest single hard drive that I can easily and quickly buy a replacement for in the form of a WD Elements/My Book drive. The stock on datablocks.dev is quite volatile: the drives are in stock when new batches arrive, but they can also quickly go out of stock. I saw this live with the 22 TB hard drives, one day there are 35 left, the next day there can be 7 left, and then only one lone drive. At the time of writing, the 18 TB model that I bought is out of stock, so my choice to go with a slightly smaller but more easily replaceable one is validated.

Those who have followed my blog for a while will know that I’m a huge fan of all-SSD server builds, especially this one by Jeff Geerling that I still consider building from time to time. If I dislike noise, higher power usage and slower performance, then why did I get the hard drives? It’s simple, really: I now have an actual closet that I can stash my home server in, meaning that noise isn’t that big of a worry, and as long as my home server takes about the same amount of power as my refrigerator or dishwasher, that’s fine. SSD prices still haven’t come down as much as I’ve hoped over the years, so the all-SSD build ideas that I have are way outside my budget.

The drives arrived in a reasonable time window. The packaging was adequate, although I was slightly concerned by the cardboard box showing signs of something hitting it hard. The drives were packaged within sealed antistatic bags, with ample bubble wrap surrounding them. Just as described, the drives did have slight scratches and very minor dents in them, but in all other aspects they looked like new.

Before putting them to use, I formatted the drives using . It took a full 24 hours to do a full drive write. The write performance peaked at 275 MB/s and slowed down to 123 MB/s at the end, which is expected. 1 I also had to choose a larger block size for because otherwise it could not handle the drive, resulting in the command being .

I unfortunately did not save the SMART data from the time I received the drives, but the contents were as expected: there were no more than a few power-on hours, and the other metrics were OK. Keep in mind that it’s also possible to reset SMART data on a drive, so this information cannot be taken at face value.

The drives are noisy, as expected. They run at 7200 RPM and do the usual clicks and clacks that a normal hard drive does. If this bothers you, use foam to fix it. The soft side of a sponge can work just as well.

With these drives I’ve now followed my own advice and tiered my storage: two 1 TB SSD-s for the things that benefit from good speed and latency (databases, containers), and the 18 TB hard drives for bulk storage, backups and less frequently used data. Coming from an all-SSD build, I expected the performance to drop in day-to-day operations, but in most cases I cannot tell a difference. My family photos load just fine, media plays back well, and backups take slightly longer, which isn’t noticeable due to them running during the night. Only when I look at the Prometheus node exporter graphs do I notice that sometimes the server is waiting behind the disks a bit more due to higher .

The power usage did shoot up as a result, roughly 10-20 W. Not ideal, but my whole networking and home server setup is idling at below 45 W, and I’ve had less efficient home servers in the past, so it’s not that big of a deal. In this configuration, the drives run quite cool. During formatting on a hot day, I saw them go up to a maximum of 51°C, but in general use they sit at around 38-42°C.

Overall, I’m reasonably happy with the drives. I expect these to last me at least 5 years, and I’m probably going to switch one of the drives out a bit sooner to reduce the risk of a full drive pool failure. They’ve made it through the first 50 days, so that’s good!
Oh, and here’s the output for the disks after running them for about two months:

hard drives are expected to be slower at the end of the drive because of their design: the platter rotates at 7200 RPM, but the end of the drive is located at the inner tracks of the platter, near the center of the spindle, which results in a slower effective speed. Math is cool!  ↩︎
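The footnote’s geometry can be checked with some napkin math. The platter radii below are assumed round numbers for a 3.5" drive, not measured values for these specific drives:

```python
from math import pi

# Napkin check of the footnote: at a constant 7200 RPM, the linear velocity
# under the head scales with the track radius, so throughput drops towards
# the inner tracks. The radii are rough assumptions for a 3.5" platter.
RPM = 7200
OUTER_RADIUS_MM = 46  # assumed outermost data track
INNER_RADIUS_MM = 21  # assumed innermost data track

def linear_speed_mm_s(radius_mm: float, rpm: float) -> float:
    # Circumference per revolution times revolutions per second.
    return 2 * pi * radius_mm * rpm / 60

outer = linear_speed_mm_s(OUTER_RADIUS_MM, RPM)
inner = linear_speed_mm_s(INNER_RADIUS_MM, RPM)

print(f"geometry predicts a {outer / inner:.2f}x slowdown")
print(f"measured: {275 / 123:.2f}x (275 MB/s down to 123 MB/s)")
```

The predicted ~2.2x slowdown lines up nicely with the measured drop from 275 MB/s to 123 MB/s.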

./techtipsy 2 months ago

Samsung 870 QVO 4TB SATA SSD-s: how are they doing after 4 years of use?

I’ve been running four Samsung 870 QVO 4TB SATA SSD-s for a while now. They’re old enough to be popping up on the second-hand market, so I thought it would be good to provide a few additional data points for those thinking about buying one. Mine have mainly been used in a home server setting, with one also being used as a backup drive at times. I initially got these drives because I found the noise that hard drives generated to be unacceptably high in a small apartment. They’re also quite a bit faster than hard drives and use significantly less power. The drives were manufactured in 2021, two in April, two in June.

Overall, I haven’t seen many issues with the drives, and when I did, it was a Linux kernel issue. These drives are still performing at the expected speed, and during write-heavy workloads they only drop down to 140-170 MB/s, which is considerably better than what the cheapest SATA SSD-s can do in the same scenario, those can go as low as 30 MB/s or even worse. I did notice that one of the drives reported 4 bad blocks over its lifetime, and oddly enough it’s the drive with the least amount of power-on hours. The reported SSD lifetime is around 94%, with over 170 TB of data written. At this point, the drives are not even close to the 1440 TBW endurance limit that Samsung has published.

The price hasn’t gone down as much as I’ve hoped over the years. At the time I bought the drives, they were roughly 400 EUR a piece, and now they’re selling for about 270 EUR a piece. It’s still significantly cheaper, but back in 2021-2022 there was more optimism about SSD prices coming down over the years. For comparison, 4TB SSD-s from other manufacturers and form factors (NVMe, SATA) start from about 190-200 EUR, however I am not fully confident that those perform at the same level, at least under sustained writes.

For those curious, here’s the full output for all the individual drives.

S5STNF0R405312K
S5STNF0R407424M
S5STNF0R614596K
S5STNF0R614601K

./techtipsy 2 months ago

The unreasonable effectiveness of the pancake rule

Being chronically late to meetings sucks. Not only is it very rude, but you’re signalling that you don’t value your coworkers’ time. However, I’ve picked up a technique that works unreasonably well within a team. 1

If you are late to the first meeting of the day three times within a quarter, then you will have to make pancakes for the whole team.

Let’s say that you have a daily stand-up taking place at 10:00. Arriving at 10:00:59: completely OK. Arriving at 10:01:00: you’re one step closer to making pancakes!

Keep in mind that you may hit some obstacles when implementing this rule, so feel free to adjust it. When proposing this idea in my current team, I learned that the office does not offer pancake-making facilities. The pancakes can be substituted with other types of cake or bringing in something else, as long as the team gives prior approval of that modification. The pancake strikes can also be pooled together and spent with your teammates if they wish to do so.

If you’re struggling with your team being late to your daily meeting(s), then go ahead and add this rule to the working agreement. You do have a working agreement set up, right? Right?

And a free security tech tip to close out: if you see an unlocked work laptop at the office, open your internal chat application of choice on it and try posting to a public channel that you’ll be bringing cake/beers/candy to the office. Works wonders for enforcing the habit of locking your laptop when leaving the desk!

to be fair, the sample size is two, but it has worked out really well in both!  ↩︎
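For fun, the rule and its one-minute boundary can be pinned down precisely. A sketch, with function names of my own invention (this is not part of any official pancake specification):

```python
from datetime import time

PANCAKE_THRESHOLD = 3  # late arrivals per quarter before pancakes are owed

def is_late(arrival: time, meeting_start: time = time(10, 0)) -> bool:
    # Late means arriving at or after the first full minute past the start:
    # 10:00:59 is still fine, 10:01:00 is a strike.
    cutoff = time(meeting_start.hour, meeting_start.minute + 1)
    return arrival >= cutoff

def owes_pancakes(late_count_this_quarter: int) -> bool:
    return late_count_this_quarter >= PANCAKE_THRESHOLD

print(is_late(time(10, 0, 59)))  # False
print(is_late(time(10, 1, 0)))   # True
print(owes_pancakes(3))          # True
```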

./techtipsy 3 months ago

Those cheap AliExpress 18650 Li-ion cell power bank enclosures suck, actually

I had a few old ThinkPad batteries lying around. They were big, bulky and not holding much of a charge. Inside them were standard 18650 Li-ion battery cells.

I have two TOMO M4 power banks around, and they are fantastic for reusing these old 18650 battery cells. You can even mix and match cells without a worry because they are individually addressed, meaning that any issues with differing charge levels and voltages between cells are not a concern. Unfortunately the TOMO M4 lacks modern features, such as USB-C ports and USB-C PD outputs at higher voltages and currents, which makes it less useful and convenient in 2025. I also haven’t found any newer designs from them that are just as cool.

I still wanted to reuse those 18650 cells, so I went to AliExpress and bought some 18650 battery enclosures for testing. One holds 8 cells, another one 10 cells, and the largest one could fit 20 cells inside it. Unfortunately, they all suck and are likely a huge fire hazard in the wrong hands.

For the 8-cell variant, I used newly bought 18650 Li-ion cells that were charged up to the same level. This battery enclosure worked quite well, until it didn’t. For whatever reason, the enclosure could not charge itself and other devices at the same time.

With the 10-cell variant, I used two different batches of used 18650 Li-ion cells from old ThinkPad batteries, charging them up first. That one worked fine, until it also failed in weird ways. It got quite hot during charging/discharging cycles, and eventually the segment display that’s responsible for displaying the charge level stopped showing certain segments. At that point I lost trust in that enclosure, too.

I had the most fun with the 20-cell battery enclosure. My first fuck-up involved using two old battery cells with different charge levels, which resulted in some magic smoke coming out of the PCB of the enclosure itself.
Somehow that didn’t break the battery bank enclosure, so I crammed 20 charged-up, used and mixed 18650 Li-ion cells into it and started charging and discharging it. The batteries got quite hot, likely around 50-70°C based on the temperature readings of my hand. 1 At that point I realized I was playing with fire and stopped.

The USB-C PD behaviour was different on all power banks. Some were fine with powering a ThinkPad laptop with the appropriate cable, some were flaky with setting the power levels, and some were just useless with certain cable or device combinations.

The battery banks rely on a very simple arrangement: the 18650 Li-ion cells are connected in parallel, and the resulting 3.7-4.2V is then boosted up to the appropriate voltage on the control board. This carries risks: if you insert two or more Li-ion cells with different voltages, then one will start charging the others to bring the cells to the same voltage, and that can become uncontrolled and result in a cell overheating and/or exploding. It’s also a horrible idea to mix and match used cells of different capacities and wear levels, as they will charge and discharge at different rates.

In my experience, a cheap DIY power bank enclosure also carries the risk of attracting attention at an airport security check. After learning how bad these can be, that is an entirely justified suspicion.

I ended up throwing all the battery bank enclosures out, the hardware failures and issues made me too concerned about one of these starting a fire. I like controlled fires, but the uncontrolled ones are really not my cup of tea.

If you know of a 18650 Li-ion cell battery bank enclosure that works like the TOMO M4 but has modern features (USB-C port, USB-PD, can charge laptops etc.) then please do reach out to me, as I’d love to test one out. You can find the contact details below the post.
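The uncontrolled equalization described above can be put into rough numbers. A sketch with assumed (not measured) cell voltages and internal resistances:

```python
# Why paralleling mismatched Li-ion cells is dangerous, in rough numbers.
# Both the voltages and the internal resistance below are assumptions for
# illustration, not measurements of any specific cell.
FULL_CELL_V = 4.2           # freshly charged cell
DEPLETED_CELL_V = 3.5       # partially discharged cell
INTERNAL_RESISTANCE = 0.05  # ohms per cell, a ballpark figure for 18650s

# When the cells are wired directly in parallel, the only thing limiting
# the equalization current is the sum of their internal resistances.
current_a = (FULL_CELL_V - DEPLETED_CELL_V) / (2 * INTERNAL_RESISTANCE)
print(f"equalization current: {current_a:.0f} A")
```

Several amps of uncontrolled current through a small cell is exactly how you get the overheating and magic smoke described in this post.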
50-55°C feels very hot to the touch, so it’s a good rule of thumb (no pun intended) for determining the minimum temperature of a hot surface by hand. Disclaimer: not physics advice.  ↩︎

./techtipsy 3 months ago

The 'politsei' problem, or how filtering unwanted content is still an issue in 2025

A long time ago, there was a small Estonian website called “Mängukoobas” (literal translation from Estonian: “game cave”). It started out as a place for people to share links to browser games, mostly built with Flash or Shockwave. It had a decent moderation system, randomized treasure chests that could appear on any part of the website, and a lot more. 1

What it also had was a basic filtering system. As a good chunk of the audience was children (myself included), there was a need to filter out all the naughty Estonian words, such as “kurat”, “türa”, “lits” and many more colorful ones. The filtering was very basic, however, and some took it upon themselves to demonstrate how flawed the system was by intentionally using phrases like “politsei”, which is Estonian for “police”. It would end up being filtered to “po****ei” as it also contained the word “lits”, which translates to “slut” 2 . Of course, you could easily overcome the filter by using a healthy dose of period characters, leading to many cases of “po.l.i.t.sei” being used.

With the ZIRP phenomenon we got a lot of companies wanting to get into the “platform” business, bringing together buyers and sellers, or service providers and clients. A lot of these platforms rely on transactions taking place only on their platform and nowhere else, so they do their best to prevent the two parties from getting in contact off-platform and paying out of band, as that would directly cut into their revenue. As a result, they scan private messages and public content for common patterns, such as e-mails and phone numbers, and block or filter them. As you can predict, this can backfire in a very annoying way.

I was looking for a cheap mini PC on a local buy-sell website and stumbled on one decent offer. I looked at the details, was going over the CPU model, and found the following:

Oh. Well, maybe it was an error, I will ask the seller for additional details with a public question. The response?
I never ended up buying that machine because I don’t really want to gamble with Intel CPU model numbers, and a few days later it was gone.

It’s 2025, I’m nearing my mandatory mid-life crisis, and the Scunthorpe problem is alive and well.

fun tangent: the site ended up being like a tiny social network, eventually incorporating things like a cheap rate.ee knock-off where children were allowed to share pictures of themselves. As you can imagine, this was a horrible, horrible idea, as it attracted the exact type of person that would be interested in that type of content. I got lucky by being so poor that I did not have a webcam or a digital camera to take any pictures with, and I remember that fondly because someone on MSN Messenger was very insistent that I take some pictures of myself. Don’t leave children with unmonitored internet access!  ↩︎

“slut” is also an actual word in Swedish, where it translates to “final”. I think. I’m not a Swedish expert, actually.  ↩︎
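The naive substring filtering this post describes is easy to reproduce. A minimal sketch (the blocklist and masking style are illustrative, not Mängukoobas’ actual implementation):

```python
# A deliberately naive profanity filter, like the one described in the post:
# it masks any blocklisted substring with no word boundaries and no
# normalization, which is exactly how "politsei" becomes collateral damage.
BLOCKLIST = ["lits"]

def naive_filter(text: str) -> str:
    lowered = text.lower()
    for word in BLOCKLIST:
        start = lowered.find(word)
        while start != -1:
            # Replace the matched characters with asterisks.
            text = text[:start] + "*" * len(word) + text[start + len(word):]
            lowered = text.lower()
            start = lowered.find(word, start + len(word))
    return text

print(naive_filter("politsei"))      # → po****ei (the Scunthorpe problem)
print(naive_filter("po.l.i.t.sei"))  # unchanged: trivially bypassed
```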

./techtipsy 3 months ago

How to run Uptime Kuma in Docker in an IPv6-only environment

I use Uptime Kuma to check the availability of a few services that I run, with the most important one being my blog. It’s really nice. Today I wanted to set it up on a different machine to help troubleshoot and confirm some latency issues that I’ve observed, and for that purpose I picked the cheapest ARM-based Hetzner Cloud VM hosted in Helsinki, Finland. Hetzner provides a public IPv6 address for free, but you have to pay extra for an IPv4 address. I didn’t want to do that out of principle, so I went ahead and copied my Docker Compose definition over to the new server.

For some reason, Uptime Kuma would start up on the new IPv6-only VM, but it was unsuccessful in making requests to my services, which support both IPv4 and IPv6. The requests would time out and show up as “Pending” in the UI, and the service logs complained about not being able to deliver e-mails about the failures. I confirmed IPv6 connectivity within the container by running a few commands with IPv6 flags, and had no issues with those. When I added a public IPv4 address to the container, everything started working again.

I fixed the issue by explicitly disabling the IPv4 network in the Docker Compose service definition, and that did the trick: Uptime Kuma made successful requests towards my services. It seems that the service defaults to IPv4 due to the internal Docker network giving it an IPv4 network to work with, and that causes issues when your machine doesn’t have any IPv4 network or public IPv4 address associated with it. Here’s an example Docker Compose file:

That’s it! If you’re interested in different ways to set up IPv6 networking in Docker, check out this overview that I wrote a while ago.
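A minimal sketch of such a Compose definition, assuming a Docker Engine/Compose version recent enough to support IPv6-only networks (`enable_ipv6: true` together with `enable_ipv4: false`); the network name, example subnet and volume name are my own:

```yaml
# Sketch: Uptime Kuma on an IPv6-only Docker network.
# Adjust the subnet to a prefix routed to your host.
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - uptime-kuma-data:/app/data
    networks:
      - v6only

networks:
  v6only:
    enable_ipv6: true
    enable_ipv4: false
    ipam:
      config:
        - subnet: 2001:db8:1::/64

volumes:
  uptime-kuma-data:
```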

./techtipsy 3 months ago

3D printing is pretty darn cool, actually

I love 3D printing. Out of all the tech hype cycles and trends over the last decade, this one is genuinely useful. There’s simply something magical about being able to design or download a model from the internet, send it to a machine, and after a few hours you get an actual physical object in return!

I don’t own a 3D printer myself, but I’ve had access to people who are happy to help out by printing something for me. So far I’ve printed the following useful things:

a Makita vacuum cleaner holder
a dual vertical laptop stand (it’s such a simple and cheap design, and yet it works incredibly well if you add some rubberized material to the bottom and inside the laptop holder)
a dual HDD adapter for a Zimaboard
a stand for the Steam Deck
a carrying case insert for the Steam Deck
a case for the Orange Pi Zero

There’s so much more that I’d want to print, like various battery holders, controller stands, and IKEA SKÅDIS mounts . There’s also the option of downloading and printing a whole PC case , which is incredibly tempting. Will I finally be able to build the perfect home server according to my very specific requirements? Probably not, given how often my preferences change, but it would be incredibly cool!

And yet I don’t own a 3D printer. The main obstacle for me is time: I feel like in order to be successful with a 3D printer, I’ll need to at the very least learn the basics of filaments, their properties, what parameters to configure and how, how to maintain a 3D printer, how to fix one when it breaks, how to diagnose misalignment issues, and so on. I’ll also need space for one, extruding hot melted plastic seems like a thing that I’d want to host in a proper workshop with actual ventilation. It’s a whole-ass hobby, not a half-ass one.

Durability can also be problematic with 3D prints, even in my limited experience. For example, I tried positioning the Makita vacuum cleaner holder differently, but ended up putting too much strain on the design, which eventually led to it completely failing. In other cases, filaments like PLA aren’t suitable for designs that are attached to warm or hot computer parts, they will warp like crazy.

I appreciate the hell out of anyone that shares their designs with the world, and especially those that allow remixing or customizing their designs. There are fantastic designs and ideas out there on sites like Printables , and the creativity that’s on display warms my heart.

./techtipsy 4 months ago

PSA: part of your Kagi subscription fee goes to a Russian company (Yandex)

Today I learned that Kagi uses Yandex as part of its search infrastructure, making up about 2% of their costs, and their CEO has confirmed that they do not plan to change that.

Yandex represents about 2% of our total costs and is only one of dozens of sources we use. To put this in perspective: removing any single source would degrade search quality for all users while having minimal economic impact on any particular region. The world doesn’t need another politicized search engine. It needs one that works exceptionally well, regardless of the political climate. That’s what we’re building.

That is unfortunate, as I found Kagi to be a good product with an interesting take on utilizing LLM models with search that is kind of useful, but I cannot in good conscience continue to support it while they unapologetically finance a major company that has ties to the Russian government, the same country that has been actively waging a war against Ukraine, a European country, for over 11 years, during which they’ve committed countless war crimes against civilians and military personnel.

Kagi has the freedom to decide how they build the best search engine, and I have the freedom to use something else. Please send all your whataboutisms to .

./techtipsy 4 months ago

How a Hibernate deprecation log message made our Java backend service super slow

It was time to upgrade Hibernate on that one Java monolithic 1 backend service that my team was responsible for. We took great precautions with these types of changes due to the scale of the system, splitting changes into as many small parts as possible and releasing them as often as possible. With bigger changes we opted for running a few instances of the new version in parallel to the existing one.

Then came Hibernate 5.2. Hibernate 5.2 introduced a new warning log to indicate that the existing API for writing queries is deprecated. Every time you used the Criteria API, it would print the line.

Just one little issue there. Can you see it? Every time you used the Criteria API, it would print the line.

In a poorly written Java backend service, one HTTP request can make multiple queries to the database. With hundreds of millions of HTTP requests, this can easily balloon to billions of additional log lines a day. Well, that’s exactly what happened to our service, resulting in the CPU usage jumping up considerably and the latency of the service being negatively impacted. We didn’t have the foresight to compare every metric against every instance of the service, and when the metrics were summarized across all instances, this increase was not that noticeable while both new and existing instances of the service were running.

Aside from the service itself, this had negative effects downstream as well. If you have a solution for collecting your service logs for analysis and retention, and it’s priced on the amount of logs that you print out, then this can end up being a very costly issue for you.

We resolved the issue by making a configuration change to our logger that disabled these specific logs. This does make me wonder who else may have been impacted by this change over the years, and what that impact might’ve looked like in terms of resource usage on a world-wide scale.
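For illustration, such a logger change can look like this in Logback. The category name is my best recollection of where Hibernate 5.x emits its deprecation warnings (`org.hibernate.orm.deprecation`); verify it against your version’s actual log output before relying on it:

```xml
<!-- Sketch: silence Hibernate's deprecation warnings in Logback.
     Double-check the logger category against your Hibernate version. -->
<configuration>
  <logger name="org.hibernate.orm.deprecation" level="ERROR"/>
</configuration>
```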
I’m not blaming the Hibernate developers, they had good intentions, but the impact of an innocent change like that was likely not taken into account for large-scale services. Last I heard, the people behind Hibernate are a very small team, and yet their software powers much of the world, including critical infrastructure like the banking system.

I’m well aware that we’re talking about Hibernate releases from around the time I was still a junior developer (2016-2018). Some call it technical debt, others call it over half a decade of neglect.

unmaintained monoliths suck, but so do unmaintained microservices.  ↩︎

./techtipsy 5 months ago

From building ships to shipping builds: how to succeed in making a career switch to software development

I have worked with a few software developers who made the switch to this industry in the middle of their careers. A major change like that can be scary and raise a lot of fears and doubts, but I can attest that it can work out well with the right personality traits and a supportive environment. Here’s what I’ve observed. To keep the writing concise, I’ll be using the phrase “senior junior” 1 to describe those who have made such a career switch.

Fear is a natural reaction to any major change in life, especially when there’s a risk of taking a financial hit while you have a family to support and a home loan to pay. The best mitigation that I’ve heard of is believing that you can make the change, successfully. It sounds like an oversimplification, sure, as all it does is remove a mental blocker and throw out the self-doubt. And yet it works unreasonably well. It also helps if you have at least some savings to mitigate the financial risk. A year’s worth of expenses saved up can go a long way in providing a solid safety net.

A great software developer is not someone who simply slings some code over the wall and spends all of their day working only on the technical stuff, there are quite a few critical skills that one needs to succeed. This is not an exhaustive list, but I’ve personally observed that the following ones are the most critical:

ability to work in a team
great communication skills
conflict resolution
ability to make decisions in the context of product development and business goals
maintaining an environment of psychological safety

Those with more than a decade of experience in another role or industry will most likely have a lot of these skills covered already, and they can bring that skill set into a software development team while working with the team to build their technical skill set. Software development is not special, at the end of the day you’re still interacting with humans and everything that comes with that, good or bad.

After working with juniors that are fresh out of school and “senior juniors” who have more career experience than I do, I have concluded that the ones who end up being great software developers have one thing in common: the passion and drive to learn everything about the role and the work we do.

One highlight that I often like to share in discussions is a software developer who used to work in manufacturing. At some point they got interested in learning how they could use software to make work more efficient. They started with an MVP solution involving a big TV and Google Sheets, then they started learning about web development for a solution in a different area of the business, and ended up building a basic inventory system for the warehouse. After 2-3 years of self-learning outside of work hours and deploying to production in the most literal sense, they ended up joining my team. They got up to speed very quickly and became a very valuable contributor to the team.

In another example, I have worked with someone who previously held a position as a technical draftsman and 3D designer in a ship building factory (professionals call it a shipyard), but after some twists and turns ended up at a course for those interested in making a career switch, which eventually led to them working in the same company I do. Now they ship builds with confidence while making sure that the critical system we are working on stays stable. That developer also kicks my ass in foosball about 99% of the time.

The combination of industry experience and software development skills is an incredibly powerful one. When a software developer starts work on a project, they learn the business domain piece by piece, eventually reaching a state where they have a slight idea about how the business operates, but never the full picture. Speaking with their end users will go a long way, but there are always some details that get lost in that process.

Someone coming from the industry will have in-depth knowledge about the business: how it operates, where the money comes from, what the main pain points are and where the opportunities for automation lie. They will know what problems need solving, and have the basic technical know-how on how to try solving them. Like a product owner, but on steroids.

Software developers often fall into the trap of creating a startup to scratch that itch they have for building new things, or trying out technologies that have been on their to-do list for a very long time. The technical problems are fun to solve, sure, but the focus should be on the actual problem that needs fixing. If I wanted to start a new startup with someone, I’d look for someone working in an industry that I’m interested in and who understands the software development basics. Or maybe I’m just looking for an excellent product owner.

If you have a “senior junior” software developer on your team, then there really isn’t anything special you’d need to do compared to any other new joiner. Do your best to foster a culture of psychological safety, have regular 1-1s with them, and make sure to pair them up with more experienced team members as often as possible. A little bit of encouragement in challenging environments or periods of self-doubt can also go a long way. Temporary setbacks are temporary, after all.

Don’t worry about all that “AI” 2 hype, if it was as successful in replacing all software development jobs as a lot of people like to shout from the rooftops, then it would have already done so. At best, it’s a slight productivity boost 3 at the cost of a huge negative impact on the environment.

If you’re someone who has thought about working as a software developer, or who is simply excited about all the ways that software can be used to solve actual business problems and build something from nothing, then I definitely recommend giving it a go, assuming that you have the safety net and risk appetite to do so.

For reference, my journey towards software development looked like this, plus a few stints of working as a newspaper seller or a grocery store worker.

who do you call a “senior senior” developer, a senile developer?  ↩︎

spicy autocomplete engines (also known as LLM-s) do not count as actual artificial intelligence.  ↩︎

what fascinates me about all the arguments around “AI” (LLM-s) is the feeling of being more productive. But how do you actually measure developer productivity, and do you account for possible reduced velocity later on, when you’ve mistaken code generation speed for velocity and introduced hard-to-catch bugs into the code base that need to be resolved when they inevitably become an issue?  ↩︎

./techtipsy 5 months ago

My horrible Fairphone customer care experience

Fairphone has bad customer support. It’s not an issue with the individual customer support agents, I know how difficult their job is 1 , and I’m sure that they’re trying their best, but it’s a more systematic issue in the organization itself. It’s become so bad that Fairphone issued an open letter to the Fairphone community forum acknowledging the issue and steps they’re taking to fix it. Until then, I only have my experience to go by. I’ve contacted Fairphone customer support twice, once with a question about Fairphone 5 security updates not arriving in a timely manner, and another time with a request to refund the Fairphone Fairbuds XL as part of the 14-day policy. In both cases, I received an initial reply over 1 month later. It’s not that catastrophic for a non-critical query, but in situations where you have a technical issue with a product, this can become a huge inconvenience for the customer. I recently gave the Fairbuds XL a try because the reviews for it online were decent and I want to support the Fairphone project, but I found the sound profile very underwhelming and the noise cancelling did not work adequately. 2 I decided to use the 14-day return policy that Fairphone advertise, which led to the worst customer care experience I’ve had so far. 3 Here’s a complete timeline of the process on how to return a set of headphones to the manufacturer for a refund. 
- 2025-02-10: initial purchase of the headphones
- 2025-02-14: I receive the headphones and test them out, with disappointing results
- 2025-02-16: I file a support ticket with Fairphone indicating that I wish to return the headphones under their 14-day return policy
- 2025-02-25: I ask again about the refund after not hearing back from Fairphone
- 2025-03-07: I receive an automated message that apologizes for the delay and asks me not to make any additional tickets on the matter, which I had not been doing
- 2025-04-01: I start the chargeback process for the payment through my bank due to Fairphone support not replying for over a month
- 2025-04-29: Fairphone support finally responds with instructions on how to send back the device to receive a refund
- 2025-05-07: after acquiring packaging material and printing out three separate documents (UPS package card, invoice, Cordon Electronics sales voucher), I hand the headphones over to UPS
- 2025-05-15: I ask Fairphone when the refund will be issued
- 2025-05-19 16:20 EEST: I receive a notice from Cordon Electronics confirming they have received the headphones
- 2025-05-19 17:50 EEST: I receive a notice from Cordon Electronics letting me know that they have started the process, whatever that means
- 2025-05-19 20:05 EEST: I receive a notice from Cordon Electronics saying that the repairs are done and they are now shipping the device back to me (!)
- 2025-05-19 20:14 EEST: I contact Fairphone support about this notice, asking for clarification
- 2025-05-19 20:24 EEST: I also send an e-mail to Cordon Electronics clarifying the situation and asking them not to send the device back to me, but to return it to Fairphone for a refund instead
- 2025-05-20 14:42 EEST: Cordon Electronics informs me that they have already shipped the device and cannot reverse the decision
- 2025-05-21: Fairphone support responds, saying that it is being sent back due to a processing error, and that I should try to “refuse the order”
- 2025-05-22: I inform Fairphone support about the communication with Cordon Electronics
- 2025-05-27: Fairphone is aware of the chargeback that I initiated and believes the refund has been issued; I have not yet received it
- 2025-05-27: I receive the headphones for the second time
- 2025-05-28: I inform Fairphone support about the current status of the headphones and the refund (still not received)
- 2025-05-28: Fairphone support recommends that I ask the bank about the status of the refund; I do so, but don’t receive any useful information from them
- 2025-06-03: Fairphone support asks if I’ve received the refund yet
- 2025-06-04: I receive the refund through the dispute I raised with the bank. This is almost 4 months after the initial purchase took place
- 2025-06-06: Fairphone sends me instructions on how to send back the headphones for the second time
- 2025-06-12: I inform Fairphone that I have prepared the package and will post it next week due to limited access to a printer and the shipping company office
- 2025-06-16: I ship the device back to Fairphone again

There’s an element of human error in the whole experience, but the initial lack of communication amplified my frustrations and also contributed to my annoyances with my Fairphone 5 boiling over. And just like that, I’ve given up on Fairphone as a brand, and will be skeptical about buying any new products from them.
I was what one would call a “brand evangelist” to them, sharing my good initial experiences with the phone to my friends, family, colleagues and the world at large, but bad experiences with customer care and the devices themselves have completely turned me off. If you have interacted with Fairphone support after this post is live, then please share your experiences in the Fairphone community forum, or reach out to me directly (with proof). I would love to update this post after getting confirmation that Fairphone has fixed the issues with their customer care and addressed the major shortcomings in their products. I don’t want to crap on Fairphone, I want them to do better. Repairability, sustainability and longevity still matter.

1. I haven’t worked as a customer care agent, but I have worked in retail, so I roughly know what level of communication the agents are treated with, often unfairly.  ↩︎
2. that experience reminded me of how big of a role music plays in my life. I’ve grown accustomed to using good sounding headphones and I immediately noticed all the little details being missing in my favourite music.  ↩︎
3. until this point, the worst experience I had was with Elisa Eesti AS, a major ISP in Estonia. I wanted to use my own router-modem box that was identical to the rented one from the ISP, and that only got resolved 1.5 months later after I expressed intent to switch providers. Competition matters!  ↩︎

./techtipsy 5 months ago

Lenovo ThinkCentre M900 Tiny: how does it fare as a home server?

My evenings of absent-minded local auction site scrolling 1 paid off: I now own a Lenovo ThinkCentre M900 Tiny. It’s relatively old, being manufactured in 2016 2 , but it’s tiny and has a lot of useful life left in it. It’s also featured in the TinyMiniMicro series by ServeTheHome. I managed to get it for 60 EUR plus about 4 EUR shipping, and it comes with solid specifications. The price is good compared to similar auctions, but was it worth it? Yes, yes it was.

I have been running a ThinkPad T430 as a server for a while now, since October 2024. It served me well in that role and would’ve served me for even longer if I wanted to, but I had an itch for a project that didn’t involve renovating an apartment. 3 One of my main curiosities was around the power usage. Will this machine beat the laptop in terms of efficiency while idling and running normal home server workloads? Yes, yes it does.

After booting into Windows 11 and letting it settle for a bit, the lowest idle power numbers I saw were around 8 W. This concludes the testing on Windows. On Linux (Fedora Server 42), the idle power usage was around 6.5-7 W. After applying some additional power tuning, I got that down to 6.1-6.5 W. This is much lower than the numbers that ServeTheHome got, which were around 11-13 W (on a 120V circuit). My measurements were made in Estonia, where we have 240V circuits. You may be able to find machines where the power usage is even lower: Louwrentius made an idle power comparison on an HP EliteDesk Mini G3 800 where they measured it at 4 W. That might also be due to other factors in play, or differences in measurement tooling.

During normal home server operation with 5 SATA SSD-s connected (4 of them with USB-SATA adapters), I have observed power consumption of around 11-15 W, with peaks around 40 W. Under a pure CPU load, I saw power consumption of around 32 W. Formatting the internal SATA SSD added 5 W to that figure.

Yes. But hear me out.
Back in 2021, I wrote about USB storage being a very bad idea, especially on BTRFS. I’ve learned a lot over the years, and BTRFS has received continuous improvements as well. In my ThinkPad T430 home server setup, I had two USB-connected SSD-s running in RAID0 for over half a year, and it was completely fine unless you accidentally bumped into the SSD-s. USB-connected storage is fine under the right circumstances. After a full BTRFS scrub and a few days of running, it seems fine. Plus it looks sick as hell with the identical drives stacked on top. All that’s missing are labels specifying which drive is which, but I’m sure that I’ll get to that someday, hopefully before a drive failure happens.

In a way, this type of setup best represents what a novice home server enthusiast may end up with: a tiny, power-efficient PC with a bunch of affordable drives connected. There are alternative options for handling storage on a tiny 1-liter PC, but they have some downsides that I don’t want to be dealing with right now. A USB DAS allows you to handle many drives with ease, but they are also damn expensive. If you pick wrong, you might also end up with one where the USB-SATA chip craps out under high load, which will momentarily drop all the drives, leaving you with a massive headache to deal with. Cheaper USB-SATA docks are more prone to this, but I cannot confirm or deny whether more expensive options have the same issue. Running individual drives sidesteps this issue and moves any potential issues to the host USB controller level. There is also a distinct lack of solutions designed around 2.5" drives only. Most of them are designed around massive and power-hungry 3.5" drives. I just want to run my 4 existing SATA SSD-s until they crap out completely. An additional box that does stuff generally adds to the overall power consumption of the setup as well, which I am not a big fan of. Lowering the power consumption of the setup was the whole point!
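Since every extra watt on a home server runs 24/7, it’s easy to put a rough number on it. A minimal sketch of the arithmetic — the electricity price is an assumed placeholder, not a figure from this post:

```python
def annual_cost_eur(watts: float, eur_per_kwh: float = 0.20) -> float:
    """Annual cost of a continuous load, given an assumed electricity price."""
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * eur_per_kwh

# The ~6.5 W idle figure measured above, versus the same box with a
# hypothetical extra ~10 W DAS attached, at a placeholder 0.20 EUR/kWh:
print(f"{annual_cost_eur(6.5):.2f} EUR/year")   # 11.39 EUR/year
print(f"{annual_cost_eur(16.5):.2f} EUR/year")  # 28.91 EUR/year
```

The absolute numbers are small, but they illustrate why an always-on extra box roughly doubles the running cost of an otherwise frugal setup.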
I can’t rule out testing USB DAS solutions in the future, as they do seem handy for adding storage to tiny PC-s and laptops with ease, but for now I prefer the individually connected drives route, especially because I don’t feel like replacing my existing drives; they still have about 94% SSD health left after 3-4 years of use, and new drives are expensive. Or you could go full jank and use that one free NVMe slot in the tiny PC to add more SATA ports or break out to other devices, such as a PCIe HBA, and introduce a lot of clutter to the setup with an additional power supply, cables and drives. Or use 3.5" external hard drives with separate power adapters. That’s what I actually tried out back in 2021, but I had some major annoyances with the noise.

Here are some notes on everything else that I’ve noticed about this machine. The PC is quite efficient, as demonstrated by the power consumption numbers, and as a result it runs very cool, idling around 30-35 °C in a ~22-24 °C environment. Under a heavy load, the CPU temperatures creep up to 65-70 °C, which is perfectly acceptable. The fan does come on at higher load and it’s definitely audible, but in my case it runs in a ventilated closet, so I don’t worry about that at all. The CPU (Intel i5-6500T) is plenty fast for all sorts of home server workloads with its 4 CPU cores and clock speeds of 2.7-2.8 GHz under load.

The UEFI settings offered a few interesting options that I decided to change; the rest are set to default. There is an option to enable an additional C-state for even better power savings. For home server workloads, it was nice to see a setting that allows you to boot the PC without a keyboard attached, found under the “Keyboardless operation” setting. I guess that in some corporate environments disconnected keyboards are such a common helpdesk issue that it necessitates having this option around. I just like these tiny PC boxes a lot.
They are tiny, fast and have a very solid construction, which makes them feel very premium in your hands. They are also perfectly usable, extensible and can be an absolute bargain at the right price. With solid power consumption figures that are only a few watts off of a Raspberry Pi 5, it might make more sense to get a TinyMiniMicro machine for your next home server. I’m definitely very happy with mine.

The USB storage idea was a bit too insane, though. I saw sporadic errors on random drives, suggesting that there is an issue with the PC’s USB ports under long-term use. It wasn’t noticeable at all until I explicitly looked at the kernel logs, just to be safe. Good call on my part. As a result, I have switched back to my trusty ThinkPad T430.

1. well, at least it beats doom-scrolling social media.  ↩︎
2. yeah, I don’t like being reminded of being old, too.  ↩︎
3. there are a lot of similarities between construction/renovation work and software development, but that’s a story for another time. 
↩︎

Specifications:
- CPU: Intel i5-6500T
- RAM: 16GB DDR4
- Storage: 256GB SSD
- Power adapter included

USB-connected storage is fine when:
- the cables are not damaged
- the cables are not at a weird angle or twisted (I actually had issues with this point: my very cool and nice cable management resulted in one disk having connectivity issues, which I fixed by relieving stress on the cables and routing them differently)
- the connected PC does not have chronic overheating issues
- the whole setup is out of the reach of cats, dogs, children and clumsy sysadmin cosplayers
- the USB-SATA adapters pass through the device ID and S.M.A.R.T information to the host (the device ID part especially is key to avoiding issues with various filesystems (especially ZFS) and storage pool setups; the ICY BOX IB-223U3a-B is a good option that I have personally been very happy with, and it’s what I’m using in this server build; a lot of adapters (mine included) don’t support running SSD TRIM commands to the drives, which might be a concern — it has not been an issue for over half a year with those ICY BOX adapters, but it’s something to keep in mind)
- you are not using an SBC as the home server (even a Raspberry Pi 4 can barely handle one USB-powered SSD; not an issue if you use an externally powered drive, or a USB DAS)
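The device-ID point in that checklist is easy to verify before trusting an adapter: `lsblk -o NAME,SERIAL -J -d` emits JSON, and every drive should report a distinct, non-empty serial. A small sketch with hypothetical output (the device names and serials below are made up):

```python
import json

def unique_serials(lsblk_output: str) -> bool:
    """True if every block device reports a distinct, non-empty serial number
    (which is what ZFS and friends key on when identifying pool members)."""
    serials = [d.get("serial") for d in json.loads(lsblk_output)["blockdevices"]]
    return all(serials) and len(serials) == len(set(serials))

# Hypothetical outputs: a good adapter passes the drive serial through,
# a bad one reports nothing (or the same bridge-chip ID for every drive).
good = '{"blockdevices": [{"name": "sda", "serial": "S3Z1AB0K"}, {"name": "sdb", "serial": "S3Z1AB0M"}]}'
bad = '{"blockdevices": [{"name": "sda", "serial": null}, {"name": "sdb", "serial": null}]}'
print(unique_serials(good))  # True
print(unique_serials(bad))   # False
```

If this check fails, the filesystem may see two “identical” disks and misbehave, so it’s worth running before building a pool on USB-attached drives.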

./techtipsy 6 months ago

We get laptops with annoying cooling fans because we keep buying them

I don’t like laptops with loud cooling fans in them. Quite a controversial position, I know. But really, they do suck. A laptop can be great to use, have a fantastic keyboard, sharp display, lots of storage and a fast CPU, and all of that can be ruined by one component: the cooling fan. Laptop fans are small, meaning that they have to run faster to have any meaningful cooling effect, which means that they are usually very loud and often have a high-pitched whine to them, making them especially obnoxious. Sometimes it feels like a deliberate attack on one of my senses. Fans introduce a maintenance burden. They keep taking in dust, which tends to accumulate at the heat sink. If you skip maintenance, then you’ll see your performance drop and the laptop will get notably hot, which may contribute to a complete hardware failure. We’ve seen tremendous progress in the world of consumer CPU-s over the last decade. Power consumption is much lower while idle, processors can do a lot more work in the same power envelope, and yet most laptops that I see in use are still actively cooled by an annoying-ass cooling fan. 1 And yet we keep buying them. But it doesn’t have to be this way. My colleagues that have switched to Apple Silicon laptops are sometimes surprised to hear the fan on their laptop because it’s a genuinely rare occurrence for them. Most of the time it just sits there doing nothing, and when it does come on, it’s whisper-quiet. And to top it off, some models, such as the Macbook Air series, are completely fanless. Meanwhile, those colleagues that run Lenovo ThinkPads with Ryzen 5000 and 7000 series APU-s (that includes me) have audible fans and at the same time the build times for the big Java monolith that we maintain are significantly slower (~15%) compared to the fan-equipped MacBooks. 2 We can fix this, if we really wanted to. As a first step, you can change to a power saving mode on your current laptop. 
This will likely result in your CPU and GPU running more efficiently, which also helps avoid turning the cooling fan on. You will have to sacrifice some performance as a result of this change, which will not be a worthwhile trade-off for everyone. If you are OK with the risk of damaging your hardware, you can also play around with setting your own fan curve. The CPU and GPU throttling technology is quite advanced nowadays, so you will likely be fine in this area, but other components in the laptop, such as the battery, may not be very happy with higher temperatures. After doing all that, the next step is to avoid buying a laptop that abuses your sense of hearing. That’s the only signal we can send to manufacturers that they will actually listen to. Money speaks louder than words. What alternative options do we have? Well, there are the Apple Silicon MacBooks, and, uhh, that one ThinkPad with an ARM CPU, and a bunch of Chromebooks, and a few Windows tablets, I guess. I’ll be honest, I have not kept a keen eye on recent developments, but a quick search online for fanless laptops pretty much looks as I described. Laptops that you’d actually want to get work done on are completely missing from that list, unless you like Apple. 3 In a corporate environment the choice of laptop might not be fully up to you, but you can do your best to influence the decision-makers. There’s one more alternative: ask your software vendor to not write shoddily thrown together software that performs like shit. Making a doctor appointment should not make my cooling fan go crazy. Not only is slow and inefficient software discriminatory towards those who cannot afford decent computer hardware, it’s also directly contributing to the growing e-waste problem by continuously raising the minimum hardware requirements for the software that we rely on every day. Written on a Lenovo ThinkPad X395 that just won’t stop heating up and making annoying fan noises.

1. passive vs active cooling? More like passive vs annoying cooling.  ↩︎
2. I dream of a day where Asahi Linux runs perfectly on an Apple Silicon MacBook. It’s not production ready right now, but the developers have done an amazing job so far!  ↩︎
3. I like the hardware that Apple produces; it’s the operating system that I heavily dislike.  ↩︎
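The fan-curve tweak mentioned above boils down to mapping temperatures to fan duty cycles; tools like thinkfan or NBFC handle the vendor-specific part of actually driving the fan. The mapping itself is just interpolation between a few points — here is a sketch with made-up curve points, not recommendations for any specific laptop:

```python
def fan_duty(temp_c: float, curve=((40, 0), (60, 30), (75, 60), (85, 100))) -> int:
    """Map a temperature (°C) to a PWM duty-cycle percentage by linear
    interpolation between curve points. The points are illustrative only."""
    if temp_c <= curve[0][0]:
        return curve[0][1]   # below the first point: fan stays off
    if temp_c >= curve[-1][0]:
        return curve[-1][1]  # above the last point: full speed
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            return round(d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0))
    return curve[-1][1]

print(fan_duty(35))    # 0   — silent at idle temperatures
print(fan_duty(67.5))  # 45  — halfway between the 30% and 60% points
print(fan_duty(90))    # 100 — flat out under sustained load
```

The shape of the curve is where the trade-off lives: pushing the first point higher buys silence at idle, at the cost of the components sitting at higher temperatures for longer.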
