Posts in Ui (20 found)
Ruslan Osipov 6 days ago

Turns out Windows has a package manager

I have a Windows 11 PC, and something that really annoyed me about Windows for decades is the inability to update all installed programs at once. It’s just oh-so-annoying to have to update each program manually, which is worse for things I don’t use often - meaning every time I open a program, I have to deal with update pop-ups. I was clearly living under a rock, because all the way back in 2020 Microsoft introduced a package manager which lets you install and, more importantly, update packages. It’s as simple as opening a command line (ideally as administrator, so you don’t have to keep hitting yes on the permission prompt for every program) and running a single command. Yup, that’s it. You’ll update the vast majority of software you have installed. Some software isn’t compatible, but when I ran the command for the first time, Windows updated a little over 20 packages, which included the apps I find myself having to update manually the most often. To avoid having to do this manually, I’ve used Windows Task Scheduler to create a new weekly task which runs a batch file consisting of a single line. I just had to make sure Run with the highest privileges is enabled in the task settings. So long, pesky update reminders. My Windows apps will finally stay up-to-date, hopefully.
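The exact command was lost from the text above, but the package manager described is almost certainly winget, whose upgrade-everything invocation is `winget upgrade --all`. A sketch of the one-line batch file for the weekly Task Scheduler job (file name is illustrative):

```shell
# Assumed reconstruction: the post's elided command is winget's
# upgrade-all. The scheduled task runs a one-line batch file;
# written out here for illustration (the .bat name is a placeholder).
printf 'winget upgrade --all\r\n' > update-all.bat
cat update-all.bat
```

Pointing the Task Scheduler action at this file, with "Run with highest privileges" enabled, reproduces the setup the post describes.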

0 views
Ruslan Osipov 1 month ago

Thoughts on 3D printing

A few months back my wife gifted me a 3D printer: an entry-level Bambu Lab A1 Mini . It’s a really cool little machine - it’s easy to set up, and it integrates with Maker World - a vast repository of free 3D models. Now that I’ve lived with a 3D printer for nearly half a year, I’d like to share what I’ve learned. After booting up the printer, printing a benchy - a little boat that tests printer calibration settings - and seeing thousands of incredible designs on Bambu Lab’s Maker World, I thought I’d never have to buy anything ever again. I was wrong. While some stuff printed on a 3D printer is fantastic, it’s not always the best replacement for mass-produced objects. Many mass-produced plastic items are made with injection molding - liquid plastic that gets poured into a mold - and that produces a much stronger final product. That might be different if you’re printing with tougher plastics like ABS, but you also wouldn’t be using beginner-friendly machines like the A1 Mini to do that. So yeah, you still need to buy the heavy-duty plastic stuff. And even as you print things, I wouldn’t say it’s cheaper than buying them from a store. It’s probably about the same, given the occasional failed prints, the cost of the 3D printer, the need for multiple filaments, and the fact that by having a 3D printer you’re more likely to print things you don’t exactly need. Oh, I’ve printed so many useless things - it’s amazing. The Elden Ring warrior jar Alexander planter. A Solaire of Astora figurine. A beautiful glitch art sculpture. I even got a 0.2mm nozzle (smaller than the default 0.4mm) and managed to 3D print passable wargame and D&D miniatures. Which was pretty awesome, although you have to pay for the nicest-looking models, which does take away from the enjoyment of making plastic miniatures appear in your house “out of nowhere”.
I’m not against artists getting paid - they certainly deserve it - but the printed models were comparable to a mid-range Reaper miniature, if you know what I mean. That certainly isn’t terrible, but it’s harder to justify breaking even. Maybe I could get better at getting the small details to print nicely. Oh, and if you’re into wargames - this thing easily prints incredible terrain. A basic 3D printer will pay for itself once you furnish a single battlefield. Once you’re done printing basic things, you do need to start fiddling with the settings. Defaults only take you so far, and if you want a smoother surface, smaller details, or an improvement in any other quality indicator - you have to tinker with the settings and produce test prints. It’s a hobby in its own right, and it’s fun and rewarding, but it can get in the way when you’re just trying to print something really cool. But the most incredible feeling of accomplishment came when I needed something specific around the house and was able to design it myself. We bought some hanging plants, and I wished I could just hang them on the picture rail of our century home. I was able to design a hanger, and it took me 3 iterations to create an item that fits my house perfectly and that I love. My mom needed a plastic replacement part for a long-discontinued juicer. I was able to design the thing (don’t worry, I covered the PLA in food-safe epoxy), and the juicer will see another few decades of use. Door stops, highly specific tools, garden shenanigans - the possibilities are endless. It took me a few months to move past using others’ designs and start making my own - Tinkercad has been sufficient for my use cases so far, although I’m sure I’ll outgrow it as my projects get more complicated. 3D printers aren’t quite a consumer product yet, but my A1 Mini showed me that this future is getting closer.
Someday, we might all have a tiny 3D printer at home (or a cheap corner 3D printing shop?) to quickly and effortlessly create many household objects. Until then, 3D printers remain a tinkerer’s tool - but a really fun one at that - and modern printers keep lowering the barrier to entry, making it much easier to get into the hobby.

0 views
codedge 1 month ago

Random wallpaper with swaybg

Setting a wallpaper in Sway, with swaybg, is easy. Unfortunately there is no way to set a random wallpaper automatically out of the box. Here is a little helper script to do that. The script is based on a post from Sylvain Durand 1 with some slight modifications. I just linked the script in my sway config instead of setting a background there. Sway config : The script spawns a new instance, changes the wallpaper, and kills the old instance. With this approach there is no flickering of the background when changing wallpapers. An always up-to-date version can be found in my dotfiles . Original script from Sylvain Durand: https://sylvaindurand.org/dynamic-wallpapers-with-sway/   ↩︎
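The helper described above can be sketched roughly like this (a sketch, not the author’s exact script - the wallpaper directory and swaybg options are assumptions; the real version is in the linked post and dotfiles):

```shell
# Sketch of the described approach: start a new swaybg instance on a
# random image first, then kill the old instance, so the background
# never flickers. Paths and options are placeholders.
cat > random-wallpaper.sh <<'EOF'
#!/bin/sh
DIR="$HOME/Pictures/wallpapers"           # assumed wallpaper location
NEW="$(find "$DIR" -type f | shuf -n 1)"  # pick a random image
OLD_PID="$(pidof swaybg || true)"
swaybg -i "$NEW" -m fill &                # spawn the new instance first
sleep 1                                   # let it draw before killing the old one
if [ -n "$OLD_PID" ]; then kill $OLD_PID; fi
EOF
chmod +x random-wallpaper.sh
cat random-wallpaper.sh
```

The sway config can then run this on startup or a timer, e.g. `exec_always ~/.config/sway/random-wallpaper.sh`.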

0 views

LLMs Eat Scaffolding for Breakfast

We just deleted thousands of lines of code. Again. Each time a new LLM comes out, it’s the same story. LLMs have limitations, so we build scaffolding around them. Each model introduces new capabilities, so old scaffolding must be deleted and new scaffolding added. But as we move closer to superintelligence, less scaffolding is needed. This post is about what it takes to build successfully in AI today. Every line of scaffolding is a confession: the model wasn’t good enough.
LLMs can’t read PDFs? Let’s build a complex system to convert PDF to markdown.
LLMs can’t do math? Let’s build a compute engine to return accurate numbers.
LLMs can’t handle structured output? Let’s build complex JSON validators and regex parsers.
LLMs can’t read images? Let’s use a specialized image-to-text model to describe the image to the LLM.
LLMs can’t read more than 3 pages? Let’s build a complex retrieval pipeline with a search engine to feed the best content to the LLM.
LLMs can’t reason? Let’s build chain-of-thought logic with forced step-by-step breakdowns, verification loops, and self-consistency checks.
Etc, etc... millions of lines of code to add external capabilities to the model. But look at models today: GPT-5 is solving frontier mathematics, Grok-4 Fast can read 3000+ pages with its 2M context window, Claude Sonnet 4.5 can ingest images and PDFs, and all models have native reasoning capabilities and support structured outputs. The once-essential scaffolding is now obsolete. Those tools are baked into the models’ capabilities. It’s nearly impossible to predict what scaffolding will become obsolete and when. What appears to be essential infrastructure and industry best practice today can turn into legacy technical debt within months. The best way to grasp how fast LLMs are eating scaffolding is to look at their system prompts (the top-level instructions that tell the AI how to behave).
Looking at the prompt used in Codex, OpenAI’s coding agent, from the o3 model to GPT-5 is mind-blowing. The o3 prompt: 310 lines. The GPT-5 prompt: 104 lines. The new prompt removed 206 lines - a 66% reduction. GPT-5 needs way less handholding. The old prompt had complex instructions on how to behave as a coding agent (personality, preambles, when to plan, how to validate). The new prompt assumes GPT-5 already knows this and only specifies the Codex-specific technical requirements (sandboxing, tool usage, output formatting). The new prompt removed all the detailed guidance about autonomously resolving queries, coding guidelines, and git usage. It’s also less prescriptive. Instead of “do this and this” it says “here are the tools at your disposal.” As we move closer to superintelligence, the models require more freedom and leeway (scary, lol!). Advanced models require simple instructions and tooling. Claude Code, the most sophisticated agent today, relies on a simple filesystem instead of a complex index, and uses bash commands (find, read, grep, glob) instead of complex tools. It moves so fast. Each model introduces a new paradigm shift. If you miss a paradigm shift, you’re dead. Having an edge in building AI applications requires deep technical understanding, insatiable curiosity, and low ego. By the way, because everything changes, it’s good to focus on what won’t change. Context window is how much text you can feed the model in a single conversation. Early models could only handle a couple of pages. Now it’s thousands of pages, and it’s growing fast. Dario Amodei, the founder of Anthropic, expects 100M+ context windows, while Sam Altman hinted at billions of context tokens . It means LLMs can see more context, so you need less scaffolding like retrieval-augmented generation.
November 2022: GPT-3.5 with 4K context
November 2023: GPT-4 Turbo with 128K context
June 2024: Claude 3.5 Sonnet with 200K context
June 2025: Gemini 2.5 Pro with 1M context
September 2025: Grok-4 Fast with 2M context
Models used to stream at 30-40 tokens per second. Today’s fastest models, like Gemini 2.5 Flash and Grok-4 Fast, hit 200+ tokens per second - a 5x improvement. On specialized AI chips (LPUs), providers like Cerebras push open-source models to 2,000 tokens per second. We’re approaching real-time LLMs: full responses to complex tasks in under a second. LLMs are becoming exponentially smarter. With every new model, benchmarks get saturated. On the path to AGI, every benchmark will get saturated. Every job that can be done will be done by AI. As with humans, a key factor in intelligence is the ability to use tools to accomplish an objective. That is the current frontier: how well a model can use tools such as reading, writing, and searching to accomplish a task over a long period of time. This is important to grasp. Models will not improve their language translation skills (they are already at 100%), but they will improve how they chain translation tasks over time to accomplish a goal. For example, you can say, “Translate this blog post into every language on Earth,” and the model will work for a couple of hours on its own to make it happen. Tool use and long-horizon tasks are the new frontier. The uncomfortable truth: most engineers are maintaining infrastructure that shouldn’t exist. Models will make it obsolete, and the survival of AI apps depends on how fast you can adapt to the new paradigm. That’s where startups have an edge over big companies. Big corps are late by at least two paradigms.
Some examples of scaffolding that are on the decline:
Vector databases : companies paying thousands per month when they could now just put docs in the prompt or use agentic search instead of RAG ( my article on the topic ).
LLM frameworks : these frameworks solved real problems in 2023. In 2025? They’re abstraction layers that slow you down. The best practice now is to use the model API directly.
Prompt engineering teams : companies hiring “prompt engineers” to craft perfect prompts when current models just need clear instructions and open tools.
Model fine-tuning : teams spending months fine-tuning models only for the next generation of out-of-the-box models to outperform their fine-tune (cf. my 2024 article on that ).
Custom caching layers : building Redis-backed semantic caches that add latency and complexity when prompt caching is built into the API.
This cycle accelerates with every model release. The best AI teams have mastered four critical skills:
Deep model awareness : they understand exactly what today’s models can and cannot do, building only the minimal scaffolding needed to bridge capability gaps.
Strategic foresight : they distinguish between infrastructure that solves today’s problems and infrastructure that will survive the next model generation.
Frontier vigilance : they treat model releases like breaking news. Missing a single capability announcement from OpenAI, Anthropic, or Google can render months of work obsolete.
Ruthless iteration : they celebrate deleting code. When a new model makes their infrastructure redundant, they pivot in days, not months.
It’s not easy.
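For illustration, “use the model API directly” can mean a single HTTP call with no framework in between. This is a sketch following OpenAI’s public chat-completions API shape; the model name is a placeholder, and the script is only written to a file here, not executed:

```shell
# Sketch: no framework, no abstraction layer - one HTTP call to the model.
# Endpoint shape per OpenAI's public API docs; model name is illustrative.
cat > ask.sh <<'EOF'
#!/bin/sh
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-5", "messages": [{"role": "user", "content": "Summarize the attached docs."}]}'
EOF
chmod +x ask.sh
cat ask.sh
```

Long documents simply go into the `content` field - the “put files in prompt” approach the post contrasts with RAG pipelines.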
Teams are fighting powerful forces:
Lack of awareness : teams don’t realize models have improved enough to eliminate scaffolding (this is massive, btw).
Sunk cost fallacy : “We spent 3 years building this RAG pipeline!”
Fear of regression : “What if the new approach is simple but doesn’t work as well on certain edge cases?”
Organizational inertia : getting approval to delete infrastructure is harder than building it.
Resume-driven development : “RAG pipeline with vector DB and reranking” looks better on a resume than “put files in prompt”.
In AI, the best teams build for fast obsolescence and stay at the edge. Software engineering sits on top of a complex stack. More layers, more abstractions, more frameworks. Complexity was sophistication. A simple web form in 2024? React for UI, Redux for state, TypeScript for types, Webpack for bundling, Jest for testing, ESLint for linting, Prettier for formatting, Docker for deployment… AI is inverting this. The best AI code is simple and close to the model. Experienced engineers look at modern AI codebases and think: “This can’t be right. Where’s the architecture? Where’s the abstraction? Where’s the framework?” The answer: the model ate it, bro. Get over it. The worst AI codebases are the ones that were best practices 12 months ago. As models improve, the scaffolding becomes technical debt. The sophisticated architecture becomes the liability. The framework becomes the bottleneck. LLMs eat scaffolding for breakfast, and the trend is accelerating. Thanks for reading!

0 views
ptrchm 1 month ago

Event-driven Modular Monolith

The main Rails app I currently work on has just turned eight. It’s not a huge app. It doesn’t deal with web-scale traffic or large volumes of data. Only six people work on it now. But eight years of pushing new code adds up. This is a quick overview of some of the strategies we use to keep the codebase maintainable. After the first few years, our codebase suffered from typical ailments: tight coupling between domains, complex database queries spread across various parts of the app, overgrown models, a maze of side effects triggered by ActiveRecord callbacks , endlessly chained associations (e.g. ) – with an all-encompassing model sitting on top of the pile. The topics covered:
Modular Monolith
Pub/Sub (Events)
Patterns
Service Objects
Repositories for Database Queries
Slim and Dumb Models
Bonus: A Separate Frontend App
How Do I Start?

0 views
Luke Hsiao 2 months ago

Berkeley Mono Variable (TX-02) in Ghostty

Inspired by Michael Bommarito’s post , I’m just dropping some quick notes on getting Berkeley Mono Variable (TX-02) to work in Ghostty. Specifically, Berkeley Mono , released on 2024-12-31, running in Ghostty . Using the variable version of the font is highly convenient: it is very fast to change styles and tweak things until it is exactly how you like it, without having to iterate by installing static fonts. I suggest you read his post first for lots of nice context on fonts and their features. Then, the key bit of information I needed to get this working was the following. TX-02, the updated version of Berkeley Mono, has different OpenType features than the original. There is no documentation I could find on exactly what they mean, but via some trial and error, I’ve landed on the following config. Specifically, I found that and appear to change what the stylistic sets do (to different things than my comments). , , and don’t do anything that I noticed. So, effectively, it seems to me that the only two features you actually care about are your settings, and then whether you want ligatures with , and what style of / you want via stylistic sets. Another tip: if you have static versions of Berkeley Mono installed, I noticed that sometimes prevents Ghostty from loading Berkeley Mono Variable. I’m unsure why, but I was able to resolve it by removing the static fonts, configuring things, and then putting them back.
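For reference, a Ghostty config in this spirit might look like the following. This is a sketch: the specific feature codes (`calt`, `ss01`) are illustrative, since the post’s exact choices were lost in extraction - check the font’s own feature list for what each set does:

```
# Ghostty config sketch (feature names are illustrative, not the author's)
font-family = Berkeley Mono Variable
# Each OpenType feature is toggled with its own font-feature line:
font-feature = -calt   # drop ligatures; use +calt to keep them
font-feature = +ss01   # a stylistic set, e.g. an alternate zero or slash style
```

Each `font-feature` line toggles one OpenType feature on (`+`) or off (`-`), which matches the post’s conclusion that ligatures and stylistic sets are the two knobs that matter.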

0 views
Uros Popovic 3 months ago

RTL generation for custom CPU Mrav

Overview of how SystemVerilog RTL code is generated in the build flow for the Mrav custom CPU core.

0 views
Maurycy 3 months ago

Optimized cyanotypes:

As far as I’m aware, this is the most sensitive cyanotype formula on the internet, and is just about usable for in-camera photography (ISO 0.0001): The sensitizer solution must be protected from blue and UV light. The developer is very slightly light sensitive, but realistically, it should be fine. The paper should be protected from stray light during the process. The developer solution can be reused multiple times: apply it liberally and collect the excess. My version is around 5 times as sensitive, and has well-preserved highlights, allowing it to achieve comparable results in 1/20th the time of the classic formula: enough to turn what would be a 3 hour exposure into a 10 minute exposure. Using sunlight, a good exposure is between 100 kilolux-seconds and 1000 kilolux-seconds, and the effective ISO is around 0.0001. (The original method has an ISO of around 0.000005.) It doesn’t get as dark as the classic formula , maxing out at the dark blue shown in the image. This can actually be an advantage for photography because it keeps the contrast manageable: the original formula tends to have very dark shadows, bright highlights and little in the way of midtones. The standard iron-ferricyanide/cyanotype formula has a number of problems: [1] Because the pigment is formed during the exposure, it blocks light and slows down the reaction. The result is that it needs an exposure that’s much longer than it otherwise would. [2] A lot of pigment gets lost during washing. Even though the particles are insoluble, small ones can get suspended in water and carried away — resulting in missing highlights at best and the entire image disappearing at worst. [3] Alkaline buffered paper just doesn’t work. The base affects the photochemistry itself, leading to a blotchy appearance, and also bleaches the pigment over time. [4] The final problem is that citrate really isn’t a good electron donor for photo-reduction. Of all the carboxylic acids, iron (III) oxalate is the best at responding to light.
The reaction is also pH sensitive, and works best in an acidic environment, something that isn’t present in the classic formula. [1] can be fixed by using a two-step process, where the iron (III) salt is applied to the paper, exposed, and only then treated with ferricyanide. For [4], ferric ammonium oxalate is available, but it’s easier to just add oxalic acid to ferric ammonium citrate. The excess acid also takes care of the pH issue. As a bonus, the oxalic acid also takes care of [2], because it results in larger pigment crystals, and [3], because it neutralizes any buffers that may be present. Iron (III) oxalate based formulas tend to leave a yellow stain composed of iron (II) oxalate on the paper, which can be dissolved in citric acid. Doing this during development also allows the otherwise trapped iron to contribute to image formation. Slowest to fastest: I did not test Mike Ware’s “New Cyanotype”, because I don’t have ferric ammonium oxalate, and don’t want to play with dichromate. This test puts it between classic and two-step. This is similar to Herschel’s original, but with a different ratio of citrate to ferricyanide. Probably the most common contemporary mixture. Note: a concentrated solution should be prepared, which will form crystals of ferric potassium oxalate. These need to be discarded, and then the remaining liquid is diluted before use. The sensitized paper is blue due to the lack of the intense yellow of ferricyanide and the presence of trace Prussian blue. Similar mixtures are commonly used in commercial blueprinting. The main product is the reduced form, Prussian white, so the print must be oxidized with hydrogen peroxide before viewing. This formula produces a slightly fogged result. No frills (and least sensitive) two-step process.
Popularized by hands-on-pictures.com. Acidified two-step process: more sensitive than the standard two-step. This is a usable alternative if you don’t have oxalic acid. Two-step acidified with oxalic acid, which is quite strong, and the resulting oxalate ion is better than citrate at photoreduction. Current record holder in my testing. The process: spread the sensitizer on the paper. It doesn’t take much, just slightly wet the surface. I find spreading with a glass rod works better than brushing it on. Let the paper dry in a dark area. Expose the paper. Apply the developer solution. No finesse required: just pour it on. Wash the print with water for a minute or so to remove the unreacted chemicals. Even an invisible amount of residue can fog the image. The reaction is self-limiting. Pigment washout. Limited paper compatibility. Slowest to fastest:
Classic [18% of max @ 25s in sun]
Mike Ware’s “New Cyanotype”
2-Step classic “Cyanotype Rex”
2-Step: ferric ammonium citrate + citric acid
Blue sheet: classic with ferrocyanide
2-Step blue sheet: ferrocyanide developer
2-Step: ferric ammonium citrate + oxalic acid [18% of max @ 1s in sun]

0 views
underlap 3 months ago

Arch linux take two

After an SSD failure [1] , I have the pleasure of installing arch linux for the second time. [2] Last time was over two years ago (in other words, I remember almost nothing of what was involved) and since then I’ve been enjoying frequent rolling upgrades (only a couple of which wouldn’t boot and needed repairing). While waiting for the new SSD to be delivered, I burned a USB stick with the latest arch ISO in readiness. I followed the instructions to check the ISO signature using gpg: So this looks plausible, but to be on the safe side, I also checked that the sha256 sum of the ISO matched the one on the arch website. My previous arch installation ran out of space in the boot partition, so I ended up fiddling with the configuration to avoid keeping a backup copy of the kernel. This time, I have double the SSD capacity, so I could (at least) double the size of the boot partition. But what is a reasonable default size for the boot partition? According to the installation guide , a boot partition isn’t necessary. In fact, I only really need a root ( ) partition since my machine has a BIOS (rather than UEFI). Since there seem to be no particular downsides to using a single partition, I’ll probably go with that. Then I don’t need to choose the size of a boot partition. The partitioning guide states: If you are installing on older hardware, especially on old laptops, consider choosing MBR because its BIOS might not support GPT. If you are partitioning a disk that is larger than 2 TiB (≈2.2 TB), you need to use GPT. My system BIOS was dated 2011 [3] and the new SSD has 2 TB capacity, so I decided to use a BIOS/MBR layout, especially since this worked fine last time. Here are the steps I took after installing the new SSD. Boot from the USB stick containing the arch ISO. Check ethernet is connected using ping. It was already up to date. Launch and set the various options: I then chose the Install option.
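The verification steps mentioned above can be sketched as follows. The gpg line is the one from the Arch installation guide (the ISO filename is the Arch default; adjust for your release); the hash comparison is demonstrated on a stand-in file rather than the real ISO:

```shell
# Signature check per the Arch installation guide (run in the download dir):
#   gpg --keyserver-options auto-key-retrieve --verify archlinux-x86_64.iso.sig
# Belt and braces: also compare the SHA-256 sum against the website's value.
# Demonstrated here on a stand-in file:
printf 'iso-bytes' > archlinux-demo.iso
sha256sum archlinux-demo.iso
```

If a sums file is available, `sha256sum -c sha256sums.txt --ignore-missing` does the comparison in one step.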
It complained that there was no boot partition, so I went back and added a 2 GB fat32 boot partition. Chose the install option again. The installation began by formatting and partitioning the SSD. Twelve minutes later, I took the option to reboot the system after the installation completed. After Linux booted (with the slow-painting grub menu, which I’ll need to switch to text), I was presented with a graphical login for i3. After I logged in, it offered to create an i3 config for me, which I accepted. I reconfigured i3 based on the contents of my dotfiles git repository, and installed my cloud provider CLI in order to access restic/rclone backups from the previous arch installation. At this point I feel I have a usable arch installation, and it’s simply a matter of setting up the tools I need and restoring data from backups. I wanted to start Dropbox automatically on startup, and Dropbox as a systemd service was just the ticket. The failed SSD had an endurance of 180 TBW and lasted 5 years. The new SSD has an endurance of 720 TBW, so I hope it will last longer, although 20 years (5 × 720/180) seems unlikely. ↩︎ I was being ironic: it was quite painful the first time around. But this time I know how great arch is, so I’ll be more patient installing it. Also, I have a backup and a git repo containing my dotfiles, so I won’t be starting from scratch. ↩︎ There was a BIOS update available to fix an Intel advisory about a side-channel attack. However, I couldn’t confirm that my specific hardware was compatible with the update, so it seemed too risky to apply. Also, browsers now mitigate the side-channel attack. In addition, creating a bootable DOS USB drive seems to involve either downloading an untrusted DOS ISO or attempting to create a bootable Windows drive (for Windows 10 or 11, which may require a license key), neither of which I relish. ↩︎

0 views
Jason Fried 4 months ago

A fly and luck

There was a tiny fly right by the drain, and I was about to wash my hands. Turning on the water would have sent it right down the hole. A quick end, or a drawn-out drowning after a struggle - hard to know. But that would be that; there was no getting out. Somehow, for a moment, I slipped into contemplation. I could just turn on the water, I could rescue it, I could use a different sink

0 views
Blargh 5 months ago

Software defined KISS modem

I’ve kept working on my SDR framework in Rust called RustRadio , that I’ve blogged about twice before . I’ve been adding a little bit here, a little bit there, with one of my goals being to control a whole AX.25 stack. As seen in the diagram in this post , we need:
Applications, client and server — I’ve made those .
AX.25 connected mode stack (OSI layer 4, basically) — the kernel’s sucks, so I made that too .
A modem (OSI layers 1-2), turning digital packets into analog radio — the topic of this post.
Applications talk in terms of streams. The AX.25 implementation turns that into individual data frames. The most common protocol for sending and receiving frames is KISS . I’ve not been happy with the existing KISS modems for a few reasons. The main one is that they just convert between packets and audio . I don’t want audio, I want I/Q signals suitable for SDRs. On the transmit side it’s less of a problem for regular 1200bps AX.25, since either the radio will turn audio into an FM-modulated signal, or, if using an SDR, it’s trivial to add the audio-to-I/Q step. On transmit you do have to trigger PTT, though. You can do VOX, but it’s not optimal. But on the receive side it’s a completely different matter. Once it’s audio, the information about the RF signal strength is gone. That makes it impossible to work on more advanced reception strategies such as whole packet clock recovery , or soft decoding . Soft decoding would allow things like “CRC doesn’t match, but this one bit had a very low RF signal strength, so if flipping that bit fixes the CRC, then that’s probably correct.” Once you have a pluggable KISS modem you can also innovate on making the modem better. A simple example is to just run the same modem in multiple copies , thereby increasing the bandwidth (both in the Hz sense and the bps sense). Since SDRs are not bound to audio as a communication medium, they can also be changed to use more efficient modulations. Wouldn’t it be cool to build a QAM modulation scheme, with LDPC and “real” soft decoding?
Yes, an SDR based modem does have two main challenges:
Power. SDRs don’t transmit at high power, so you need to send the signal through a power amplifier.
Duplex. Most TX-capable SDRs have two antenna ports: one for TX, one for RX. You’ll need to have two antennas, or figure out a safe way to transmit on the same antenna without destroying the RX port.
For the duplex problem, the cheap and simple solution is to use frequencies on different bands, and put a band pass filter on the receive port, thus blocking the transmitted power. SDR outputs are not clean, so you’ll need a filter on the transmit path too anyway. In other words, you can just use a diplexer . It gets harder if RX and TX need to be on the same band, or worse, the exact same frequency. Repeaters tend to use cavity filters . But that’s a bit bulky for my use cases, and in any case cavity filters don’t work if the frequency is exactly the same. More likely, the better approach here is half duplex, with a relay switching from RX to TX and back. But you need to synchronize it so that there’s no race condition that accidentally plows 10W into your receive port, even for a split second. That’s a problem for the future; for now I’m just using two antennas. I’ve implemented it. It works. It’s less than 250 lines of Rust, and the actual transmitter and receiver are really easy to follow. Well… to me at least. In order to not introduce too many things at a time, here’s how to use the regular Linux kernel stack with my new bell202 modem. Bell202 is the standard and most used amateur radio data mode, often just referred to as “1200bps packet”. Build and start the modem: Create Linux AX.25 config: Attach the kernel to the modem: Now use it as normal:
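The post’s exact config and attach commands were lost in extraction, but the standard way to hook the Linux kernel AX.25 stack to a KISS modem looks roughly like this (a sketch: callsign, port name, and pty path are placeholders; `kissattach` is from ax25-tools):

```shell
# /etc/ax25/axports defines the AX.25 port (placeholder callsign/description):
cat > axports.example <<'EOF'
# name  callsign  speed  paclen  window  description
radio   N0CALL-1  1200   255     7       bell202 SDR modem
EOF
# Then, assuming the modem exposes a KISS pty (path is illustrative):
#   sudo cp axports.example /etc/ax25/axports
#   sudo kissattach /dev/pts/7 radio
#   sudo ip link set ax0 up
cat axports.example
```

After `kissattach`, the kernel exposes an `ax0` network interface that the usual AX.25 tools (axcall, listen, etc.) can use.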

Cassidy Williams 5 months ago

Generating open graph images in Astro

Something that always bugged me about this blog is that the open graph/social sharing images used this for every single post: I had made myself a blank SVG template (of just the rainbow-colored pattern) for each post literally years ago, but didn’t want to manually create an image per blog post. There are different solutions out there for this, like the Satori library, or using a service like Cloudinary , but they didn’t fit exactly how I wanted to build the images, and I clearly have a problem with control. So, I built myself my own solution! Last year, I made a small demo for Cosynd with Puppeteer that screenshotted websites and put it into a PDF for our website copyright offering, aptly named screenshot-demo . I liked how simple that script was, and thought I could follow a similar strategy for generating images. My idea was to: And then from there, I’d do this for every blog title I’ve written. Seemed simple enough? Reader, it was not. BUT it worked out in the end! Initially, I set up a fairly simple Astro page with HTML and CSS: With this, I was able to work out what size and positioning I wanted my text to be, and how I wanted it to adjust based on the length of the blog post title (both in spacing and in size). I used some dummy strings to do this pretty manually (like how I wanted it to change ever so slightly for titles that were 4 lines tall, etc.). Amusing note, this kind of particular design work is really fun for me, and basically impossible for AI tools to get right. They do not have my eyes nor my opinions! I liked feeling artistic as I scooted each individual pixel around (for probably too much time) and made it feel “perfect” to me (and moved things in a way that probably 0 other people will ever notice). Once I was happy with the dummy design I had going, I added a function to generate an HTML page for every post, so that Puppeteer could make a screenshot for each of them. With the previous strategy, everything worked well. 
But, my build times were somewhat long, because altogether the build was generating an HTML page per post (for people to read), a second HTML page per post (to be screenshotted), and then a screenshot image from that second HTML page. It was a bit too much. So, before I get into the Puppeteer script part with you, I’ll skip to the part where I changed up my strategy (as the kids say) to use a single page template that accepted the blog post title as a query parameter. The Astro page I showed you before is almost exactly the same, except: The new script on the page looked like this, which I put on the bottom of the page in a tag so it would run client-side: (That function is an interesting trick I learned a while back where tags treat content as plaintext to avoid accidental or dangerous script execution, and their gives you decoded text without any HTML tags. I had some blog post titles that had quotes and other special characters in them, and this small function fixed them from breaking in the rendered image!) Now, if you wanted to see a blog post image pre-screenshot, you can go to the open graph route here on my website and see the rendered card! In my folder, I have a script that looks mostly like this: This takes the template ( ), launches a browser, navigates to the template page, loops through each post, sizes it to the standard Open Graph size (1200x630px), and saves the screenshot to my designated output folder. From here, I added the script to my : I can now run to render the images, or have them render right after ! This is a GitHub Gist of the actual full code for both the script and the template! There was a lot of trial and error with this method, but I’m happy with it. I learned a bunch, and I can finally share my own blog posts without thinking, “gosh, I should eventually make those open graph images” (which I did literally every time I shared a post). If you need more resources on this strategy in general: I hope this is helpful for ya!
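The single-template-plus-query-parameter approach boils down to building one URL per post title. A minimal sketch (the "/og" route name is a hypothetical stand-in, not necessarily Cassidy's exact route):

```typescript
// Build the URL for a single OG-image template page, passing the
// post title as a query parameter. URL/URLSearchParams handle the
// percent-encoding, so quotes and other special characters survive.
function ogUrl(baseUrl: string, title: string): string {
  const url = new URL("/og", baseUrl); // "/og" is an illustrative route name
  url.searchParams.set("title", title);
  return url.toString();
}

console.log(ogUrl("https://example.com", 'Generating "open graph" images'));
```

A screenshot script can then loop over every post, visit each such URL, and save a 1200x630px capture.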

Rafael Camargo 6 months ago

Customizing checkboxes and radio buttons without hacks

For the longest time, I thought it was impossible to style native checkboxes and radio buttons without pulling off some kind of creative stunt — like what the big component libraries do, such as Material UI, Ant Design and VuetifyJS. But guess what. It's totally possible. Checkboxes and radio buttons can be customized easily without hacks.

JSLegendDev 6 months ago

How to Build a Sonic Themed Infinite Runner Game in TypeScript With KAPLAY - Part 2/2

In the previous part of the tutorial, we finished implementing Sonic’s movement and jumping logic. We also implemented platforms and background infinite scrolling. In this part, we will finish what remains to be implemented:

- Implementing Rings for Sonic to Collect
- Implementing a Scoring System
- Adding Enemies
- Implementing Collision Logic With Enemies
- Finishing The Scoring UI
- Implementing The Game Over Scene

In the file, add the following code: Import the needed assets in . The function creates a ring game object. The component adds an method to that game object (used later in the tutorial) to destroy it once it leaves the screen. “ring” is a tag used to identify the game object in collision logic, which we will cover later. Multiple game objects can have the same tag. In , add the following logic in the game scene to spawn rings. We create a recursive function called . When called, it first creates a ring by calling . In KAPLAY, you can set an update loop specific to a game object that will be destroyed if the game object is destroyed. In that update loop, we make the ring move to the left at the same rate as the game’s speed. This gives the illusion that Sonic is approaching the ring while in reality it’s the other way around. We use the method to destroy the ring when it exits the screen to the left. Using KAPLAY’s function we’re able to get a random number between 0.5 and 3 representing the time to wait before spawning another ring. KAPLAY’s function is used to only call the function once the wait time has elapsed. Implementing a Scoring System Now that the rings are spawned, we need to write the logic for Sonic to collect them, which means keeping track of the score. In , under our game scene, add the following code: In addition to creating variables related to the score, we created a game object acting as our score UI. Using the component, we’re able to display text on the screen. The second param of that component is used to set the font and sizing needed.
Finally, we use the component to make sure the score UI is always displayed on top of other game objects by setting its layer to 2. You should have the following result. Now, let’s update the score every time Sonic collides with a ring. Add the following code in our game scene: We used Sonic’s built-in method, which takes as the first param the tag of a game object you want to check collisions with. The second param is a function that will run in case a collision does occur. Here, we play the “ring” sound and then destroy the ring game object Sonic collided with. Finally, we increment the score and change the score UI’s text to reflect the new score. If you run the game now, you should see the score updating every time Sonic collides with a ring. Adding Enemies The code needed for adding enemies to our game is going to be very similar to the code for adding rings. The only difference is that, unlike rings, if Sonic touches an enemy, it’s game over. However, if Sonic jumps on that enemy, the enemy gets destroyed. In , add the following code: Here, we defined a function for creating our enemy, the “Motobug”. We used components that should now be familiar to you. However, you might have noticed that we pass an object to the area component. This is something you can do to define a custom hitbox shape. Here, we’re setting the shape of the hitbox to be a rectangle using KAPLAY’s Rect constructor. It allows you to set the hitbox’s origin relative to the game object. If you pass k.vec2(0,0), the origin will be the same as the game object’s. The second and third params of the constructor are used to set the width and the height of the hitbox. Once we add enemies to the game, you’ll be able to use the debug mode to view how our hitbox configuration for Motobug is rendered. Add the following code to : The logic for spawning “Motobugs” is mostly the same as the logic for “rings”. However, the “Motobug”’s update loop is slightly different.
When the game’s speed is below 3000, we make the “Motobug” move faster than the scrolling of the platforms so that it appears to be moving along the platforms towards Sonic. Otherwise, it would look like Sonic is the one moving towards stationary “Motobugs”. However, when the game’s speed gets really fast, it isn’t possible to really tell the difference. In that case, we simply make the “Motobug” move at the same rate as the scrolling platforms. At this point, you should see enemies spawn in your game. Implementing Collision Logic With Enemies At the moment, if Sonic collides with an enemy, nothing happens. Likewise if he jumps on one. Let’s add the following code in : If you run the game now, you should be able to jump on enemies, and if Sonic hits an enemy while grounded, you will be transitioned over to an empty game over screen. You’ll notice that we added logic to multiply the player’s score if they jump on multiple enemies before hitting the ground. We’re also storing the current player score in local storage so we can display it later in the game over scene. Since our game is very fast-paced, it’s hard for players to keep track of how many rings they’re collecting. They would have to look up to the top left of the screen while risking not seeing an enemy in time to avoid it or jump on it. To mitigate this and to give the player a better sense of what they’re doing, I opted to display the number of rings collected after every collision with a ring or a jump on an enemy. This will also make combos easier to understand. Add the following code in to implement this feature: Now, if you run the game, you should see a +1 appear every time Sonic collides with a ring and a +10, x2, x3, etc. when he jumps on one or many “Motobugs”. An important concept present in the code above is that game objects can have child game objects assigned to them in KAPLAY.
This is what we do here: Instead of calling the function to create a game object, we can call the method to create a child game object of an existing game object. Here, we create the as a child of Sonic so that its position is relative to him. Finally, for our game over screen, let’s display the player’s current vs. best score and allow them to try the game again if they wish to. In the game over scene code in , add the following: While it should be relatively easy to figure out what the code above does, I’d like to explain what we do here: Using KAPLAY’s function, we’re able to get the data we previously set in local storage. However, when the player plays the game for the first time, they will not have a best score. That’s why we set to be 0 if returns null, which is possible. We do the same with currentScore. Now, if you run the project, you should have the following game over screen appear after getting hit by an enemy. After 1 second, you should be able to press the “Jump” button (in our case, click or press the space key) to play the game again. Deployment Assuming you want to be able to publish the game on web portals like itch.io, you can make a build by creating a vite.config.ts file at the root of your project’s folder and specifying the base as . Now, run the command and you should see a folder appear in your project files. Make sure your game still works by testing the build using . Finally, once ready to publish, zip your folder and upload it to itch.io or to other web game platforms of your liking. Hope you enjoyed learning how to make games in TypeScript with KAPLAY. If you’re interested in seeing more web development and game development tutorials from me, I recommend subscribing so you don’t miss out on future releases. If you’re up for it, you can also check out my beginner React.js tutorial.
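The scoring rules described above (+1 per ring, an enemy-stomp combo multiplier that resets on landing, and a best score that defaults to 0 when local storage is empty) can be sketched in plain TypeScript, independent of KAPLAY. Names here are illustrative, not the tutorial's exact code:

```typescript
// Illustrative sketch of the tutorial's scoring bookkeeping.
class ScoreKeeper {
  score = 0;
  private combo = 0; // enemies stomped since last touching the ground

  collectRing(): number {
    this.score += 1;
    return 1; // amount to show as "+1" feedback next to Sonic
  }

  stompEnemy(): number {
    this.combo += 1;
    const gained = 10 * this.combo; // +10, then x2 = +20, x3 = +30, ...
    this.score += gained;
    return gained;
  }

  land(): void {
    this.combo = 0; // landing resets the combo multiplier
  }

  // Like reading a possibly-missing best score from local storage.
  static bestScore(stored: string | null): number {
    return stored === null ? 0 : Number(stored);
  }
}

const s = new ScoreKeeper();
s.collectRing(); // +1
s.stompEnemy();  // +10
s.stompEnemy();  // +20 (combo x2, still airborne)
s.land();
console.log(s.score); // 31
console.log(ScoreKeeper.bestScore(null)); // 0
```

In the game itself this state lives in the scene, and the returned amounts drive the "+1" / "x2" child text objects attached to Sonic.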

Michael Lynch 6 months ago

Refactoring English: Month 5

At the start of each month, I declare what I’d like to accomplish. Here’s how I did against those goals: I originally set out to write a guide that focused on Kickstarter, but the more I wrote, the less I felt like Kickstarter was the interesting part. I was more excited about crowdfunding as a path for self-published authors, and Kickstarter is just one way of crowdfunding.

Evan Schwartz 8 months ago

Building a fast website with the MASH stack in Rust

I'm building Scour, a personalized content feed that sifts through noisy feeds like Hacker News Newest, subreddits, and blogs to find great content for you. It works pretty well -- and it's fast. Scour is written in Rust, and if you're building a website or service in Rust, you should consider using this "stack". After evaluating various frameworks and libraries, I settled on a couple of key ones and then discovered that someone had written it up as a stack. Shantanu Mishra described the same set of libraries I landed on as the "mash 🥔 stack" and gave it the tagline "as simple as potatoes". This stack is fast and nice to work with, so I wanted to write up my experience building with it to help spread the word. TL;DR: The stack is made up of Maud, Axum, SQLx, and HTMX and, if you want, you can skip down to where I talk about synergies between these libraries. (Also, Scour is free to use and I'd love it if you tried it out and posted feedback on the suggestions board!) Scour uses server-side rendered HTML, as opposed to a Javascript or WebAssembly frontend framework. Why? First, browsers are fast at rendering HTML. Really fast. Second, Scour doesn't need a ton of fancy interactivity and I've tried to apply the "You aren't gonna need it" principle while building it. Holding off on adding new tools helps me understand the tools I do use better. I've also tried to take some inspiration from Herman from BearBlog's approach to "Building software to last forever". HTML templating is simple, reliable, and fast. Since I wanted server-side rendered HTML, I needed a templating library and Rust has plenty to choose from. The main two decisions to make were: Here is a non-exhaustive list of popular template engines and where they fall on these two axes: I initially picked because of its popularity, performance, and type safety. (I quickly passed on all of the runtime-evaluated options because I couldn't imagine going back to a world of runtime type errors.
Part of the reason I'm writing Rust in the first place is compile-time type safety!) After two months of using , however, I got frustrated with its developer experience. Every addition to a page required editing both the Rust struct and the corresponding HTML template. Furthermore, extending a base template for the page header and footer was surprisingly tedious. templates can inherit from other templates . However, any values passed to the base template (such as whether a user is logged in) must be included in every page's Rust struct , which led to a lot of duplication. This experience sent me looking for alternatives. Maud is a macro for writing fast, type-safe HTML templates right in your Rust source code. The format is concise and makes it easy to include values from Rust code. The Hello World example shows how you can write HTML tags, classes, and attributes without the visual noise of angle brackets and closing tags: Rust values can be easily spliced into templates (HTML special characters are automatically escaped ): Control structures like , , , , and are also very straightforward: Partial templates are also easy to reuse by turning them into small functions that return : All in all, Maud provides a pleasant way to write HTML components and pages. It also ties in nicely with the rest of the stack (more on that later). Axum is a popular web framework built by the Tokio team. The framework uses functions with extractors to declaratively parse HTTP requests. The Hello World example illustrates building a router with multiple routes, including one that handles a POST request with a JSON body and returns a JSON response: Axum extractors make it easy to parse values from HTTP bodies, paths, and query parameters and turn them into well-defined Rust structs. And, as we'll see later, it plays nicely with the rest of this stack. Every named stack needs a persistence layer. SQLx is a library for working with SQLite, Postgres, and MySQL from async Rust. 
SQLx has a number of different ways of working with it, but I'll show one that gives a flavor of how I use it: You can derive the trait for structs to map between the database row and your Rust types. Note that you can derive both and 's and on the same structs to use them all the way from your database to the Axum layer. However, in practice I've often found that it is useful to separate the database types from those used in the server API -- but it's easy to define implementations to map between them. The last part of the stack is HTMX . It is a library that enables you to build fairly interactive websites using a handful of HTML attributes that control sending HTTP requests and handling their responses. While HTMX itself is a Javascript library, websites built with it often avoid needing to use custom Javascript directly. For example, this button means "When a user clicks on this button, issue an AJAX request to /clicked, and replace the entire button with the HTML response". Notably, this snippet will replace just this button with the HTML returned from , rather than the whole page like a plain HTML form would. HTMX has been having a moment, in part due to essays like The future of HTMX where they talked about "Stability as a Feature" and "No New Features as a Feature". This obviously stands in stark contrast to the churn that the world of frontend Javascript frameworks is known for. There is a lot that can and has been written about HTMX, but the logic clicked for me after watching this interview with the creator of it. The elegance of HTMX -- and the part that makes its promise of stability credible -- is that it was built from first principles to generalize the behavior already present in HTML forms and links . Specifically, (1) HTML forms and links (2) submit GET or POST HTTP requests (3) when you click a Submit button and (4) replace the entire screen with the response. 
HTMX asks and answers the questions: By generalizing these behaviors, HTMX makes it possible to build more interactive websites without writing custom Javascript -- and it plays nicely with backends written in other languages like Rust. Since we're talking about Rust and building fast websites, it's worth emphasizing that while HTMX is a Javascript library, it only needs to be loaded once. Updating your code or website behavior will have no effect on the HTMX libraries, so you can use the directive to tell browsers or other caches to indefinitely store the specific versions of HTMX and any extensions you're using. The first visit might look like this: But subsequent visits only need to load the HTML: This makes for even faster page loads for return users. Overall, I've had a good experience building with this stack, but I wanted to highlight a couple of places where the various components complemented one another in nice ways. Earlier, I mentioned my frustration with , specifically around reusing a base template that includes different top navigation bar items based on whether a user is logged in or not. I was wondering how to do this with Maud, when I came across this Reddit question: Users of maud (and axum): how do you handle partials/layouting? David Pedersen, the developer of Axum, had responded with this gist . In short, you can make a page layout struct that is an Axum extractor and provides a method that returns : When you use the extractor in your page handler functions, the base template automatically has access to the components it needs from the request: This approach makes it easy to reuse the base page template without needing to explicitly pass it any request data it might need. (Thanks David Pedersen for the write-up -- and for your work on Axum!) 
This is somewhat table stakes for HTML templating libraries, but it is a nice convenience that Maud has an Axum integration that lets you return directly from Axum routes (as seen in the examples just above). HTMX has a number of very useful extensions, including the Preload extension. It preloads HTML pages and fragments into the browser's cache when users hover or start clicking on elements, such that the transitions happen nearly instantly. The Preload extension sends the header with every request it initiates, which pairs nicely with middleware that sets the cache response headers: (Of course, this same approach can be implemented with any HTTP framework, not just Axum.) Update: after writing this post, u/PwnMasterGeno on Reddit pointed out the crate to me. This library includes Axum extractors and responders for all of the headers that HTMX uses. For example, you can use the header to determine if you need to send the full page or just the body content. also has a nice feature for cache management. It has a that automatically sets the component of the HTTP cache headers based on the request headers you use, which will ensure the browser correctly resends the request when the request changes in a meaningful way.
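Earlier the post mentions that it is often useful to keep database types separate from API types, with conversion impls between them. A minimal sketch of that pattern in plain Rust (in the real app the DB struct would derive sqlx::FromRow; the names here are illustrative, not Scour's actual types):

```rust
// DB-layer type: mirrors the table, so nullable columns are Options.
struct DbFeed {
    id: i64,
    url: String,
    title: Option<String>, // may be NULL in the database
}

// API-layer type: shaped for what the frontend needs.
struct ApiFeed {
    id: i64,
    url: String,
    title: String, // the API always presents some title
}

impl From<DbFeed> for ApiFeed {
    fn from(db: DbFeed) -> Self {
        ApiFeed {
            id: db.id,
            // Fall back to the URL when no title was stored.
            title: db.title.unwrap_or_else(|| db.url.clone()),
            url: db.url,
        }
    }
}

fn main() {
    let api: ApiFeed = DbFeed {
        id: 1,
        url: "https://example.com/feed".into(),
        title: None,
    }
    .into();
    assert_eq!(api.title, "https://example.com/feed");
    println!("{} {}", api.id, api.title);
}
```

Keeping the two layers separate means a schema change doesn't silently change the API, at the cost of one small, explicit mapping.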
I'd love if it were faster, but it's not a deal-breaker for me. For anyone building with the MASH stack, I would highly recommend splitting your code into smaller crates so that the compiler only has to recompile the code you actually changed. Also, there's an unmerged PR for Maud to enable updating templates without recompiling , but I'm not sure if that will end up being merged. If you have any other suggestions for bringing down compile times, I'd love to hear them! HTMX's focus on building interactivity through swapping HTML chunks sent from the backend sometimes feels overly clunky. For example, the Click To Edit example is a common pattern involving replacing an Edit button with a form to update some information such as a user's contact details. The stock HTMX way of doing this is fetching the form component from the backend when the user clicks the button and swapping out the button for the form. This feels inelegant because all of the necessary information is already present on the page, save for the actual form layout. It seems like some users of HTMX combine it with Alpine.js , Web Components, or a little custom Javascript to handle this. For the moment, I've opted for the pattern lifted from the HTMX docs but I don't love it. If you're building a website and using Rust, give the MASH stack a try! Maud is a pleasure to use. Axum and SQLx are excellent. And HTMX provides a refreshing rethink of web frontends. That said, I'm not yet sure if I would recommend this stack to everyone doing web development. If I were building a startup making a normal web app, there's a good chance that TypeScript is still your best bet. But if you are working on a solo project or have other reasons that you're already using Rust, give this stack a shot! If you're already building with these libraries, what do you think? I'd love to hear about others' experiences. Thanks to Alex Kesling for feedback on a draft of this post! Discuss on r/rust , r/htmx or Hacker News . 
If you haven't already signed up for Scour, give it a try and let me know what you think !


Intel 9 285K on ASUS Z890: not stable!

Update (2025-05-15): Turns out the CPU was faulty! See My 2025 high-end Linux PC for a new article on this build, now with a working CPU. Update (2025-09-07): The replacement CPU also died and I have given up on Intel. See Bye Intel, hi AMD! for more details on the AMD 9950X3D. In January I ordered the components for a new PC and expected that I would publish a successor to my 2022 high-end Linux PC 🐧 article. Instead, I am now sitting on a PC which regularly encounters crashes of the worst-to-debug kind, so I am publishing this article as a warning for others in case you wanted to buy the same hardware. Which components did I pick for this build? Here’s the full list: Total: ≈1800 CHF, excluding the Graphics Card I re-used from a previous build. …and the next couple of sections go into detail on how I selected these components. I have been a fan of Fractal cases for a couple of generations. In particular, I realized that the “Compact” series offers plenty of space even for large graphics cards and CPU coolers, so that’s now my go-to case: the Fractal Define 7 Compact (Black Solid). I really like building components into the case and working with the case. There are no sharp edges, the mechanisms are a pleasure to use and the cable-management is well thought-out. The only thing that wasn’t top-notch is that Fractal ships the case screws in sealed plastic packages that you need to cut open. I would have wished for a re-sealable plastic baggie so that one can keep the unused screws instead of losing them. I wanted to keep my options open regarding upgrading to an nVidia 50xx series graphics card at a later point. Those models have a TGP (“Total Graphics Power”) of 575 watts, so I needed a power supply that delivers enough power for the whole system even at peak power usage in all dimensions. I ended up selecting the Corsair RM850x, which reviews favorably (“leader in the 850W gold category”) and was available at my electronics store of choice.
This was a good choice: the PSU indeed runs quiet, and I really like the power cables (e.g. the GPU cable) that they include: they are very flexible, which makes them easy to cable-manage. I have been avoiding PCIe 5 SSDs so far because they consume a lot more power compared to PCIe 4 SSDs. While bulk streaming data transfer rates are higher on PCIe 5 SSDs, random transfers are not significantly faster. Most of my compute workload are random transfers, not large bulk transfers. The power draw situation with PCIe 5 SSDs seems to be getting better lately, with the Phison E31T being the first controller that implements power saving. A disk that uses the E31T controller is the Corsair Force Series MP700 Elite. Unfortunately, said disk was unavailable when I ordered. Instead, I picked the Samsung 990 Pro with 4 TB. I made good experiences with the Samsung Pro series over the years (never had one die or degrade performance), and my previous 2 TB disk is starting to fill up, so the extra storage space is appreciated. One annoying realization is that most mainboard vendors seem to have moved to 2.5 GbE (= 2.5 Gbit/s ethernet) onboard network cards. I would have been perfectly happy to play it safe and buy another Intel I225 1 GbE network card, as long as it just works with Linux. In the 2.5 GbE space, the main players seem to be Realtek and Intel. Most mainboard vendors opted for Realtek as far as I could see. Linux includes the driver for Realtek network cards, but you need a recent-enough Linux version (6.13+) that includes commit “ r8169: add support for RTL8125D ”, accompanied by a recent-enough linux-firmware package. Even then, there is some concern around stability and ASPM support. See for example this ServerFault post by someone working on the driver. Despite the Intel 1 GbE options being well-supported at this point, Intel’s 2.5 GbE options might not fare any better than the Realtek ones: I found reports of instability with Intel’s 2.5 GbE network cards . 
Aside from the network cards, I decided to stick to the ASUS prime series of mainboards, as I have had good experiences with those in my past few builds. Here are a couple of thoughts on the ASUS PRIME Z890-P mainboard I went with: I am a long-time fan of Noctua’s products: This company makes silent fans with great cooling capacity that work reliably! For many years, I have swapped out every one of my PC’s fans with Noctua fans, and it was always an upgrade. Highly recommended. Hence, there was no question that I would pick the latest and greatest Noctua CPU cooler for this build: the Noctua NH-D15 G2. There are a couple of things to pay attention to with this cooler: Probably the point that raises the most questions about this build is why I selected an Intel CPU over an AMD CPU. The primary reason is that Intel CPUs are so much better at power saving! Let me explain: Most benchmarks online are for gamers and hence measure a usage curve that goes “start game, run PC at 100% resources for hours”. Of course, when you never let the machine idle, you mainly care about power efficiency: how much power do you need to use to achieve the desired result? My use-case is software development, not gaming. My usage curve oscillates between “barely any usage because Michael is reading text” and “complete this compilation as quickly as possible with all the power available”. For me, both absolute power consumption at idle and absolute performance need to be best-of-class. AMD’s CPUs offer great performance (the recently released Ryzen 9 9950X3D is even faster than the Intel 9 285K), and have great power efficiency, but poor power consumption at idle: with ≈35W of idle power draw, Zen 5 CPUs consume ≈3x as much power as Intel CPUs! Intel’s CPUs offer great performance (like AMD), but excellent power consumption at idle.
Therefore, I can’t in good conscience buy an AMD CPU, but if you want a fast gaming-only PC or run an always-loaded HPC cluster with those CPUs, definitely go ahead :) I don’t necessarily recommend any particular nVidia graphics card, but I have had to stick to nVidia cards because they are the only option that work with my picky Dell UP3218K monitor . From time to time, I try out different graphics cards. Recently, I got myself an AMD Radeon RX 9070 because I read that it works well with open source drivers. While the Radeon RX 9070 works with my monitor (great!), it seems to consume 45W in idle, which is much higher than my nVidia cards, which idle at ≈ 20W. This is unacceptable to me: Aside from high power costs and wasting precious resources, the high power draw also means that my room will be hotter in summer and the fans need to spin faster and therefore louder. Maybe I’ll write a separate article about the Radeon RX 9070. On the internet, I read that there was some issue related to the Power Limits that mainboards come with by default. Therefore, I did a UEFI firmware update first thing after getting the mainboard. I upgraded to version 1404 (2025/01/10) using the provided ZIP file ( ) on an MS-DOS FAT-formatted USB stick with the EZ Flash tool in the UEFI firmware interface. Tip: do not extract the ZIP file, otherwise the EZ Flash tool cannot update the Intel ME firmware. Just put the ZIP file onto the USB disk as-is. I verified that with this UEFI version, the is 250W, and , which are exactly the values that Intel recommends. Great! I also enabled XMP and verified that memtest86 reported no errors. To copy over the data from the old disk to the new disk, I wanted to boot a live linux distribution (specifically, grml.org ) and follow my usual procedure: boot with the old disk and the new (empty) disk, then use to copy the data. It’s nice and simple, hard to screw up. 
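The author's exact copy command is elided above; the usual tool for this kind of whole-disk clone from a live distribution is dd. Here is an illustrative sketch, demonstrated on plain files so it is safe to run — against real hardware you would point if= and of= at block devices (e.g. a hypothetical /dev/nvme0n1 and /dev/nvme1n1, double-checked with lsblk first, since dd happily overwrites anything):

```shell
# Stand-ins for the old and new disks; the real invocation would use
# block devices, e.g. if=/dev/nvme0n1 of=/dev/nvme1n1.
OLD=/tmp/old-disk.img
NEW=/tmp/new-disk.img
printf 'pretend this is a whole disk' > "$OLD"

# Clone the old "disk" onto the new one. A large block size speeds up
# bulk copies; conv=fsync flushes to the target before dd exits.
dd if="$OLD" of="$NEW" bs=64M status=progress conv=fsync

# Verify the copy is byte-identical.
cmp "$OLD" "$NEW" && echo "copy verified"
```

After the copy, a partition editor can grow the last partition into the new disk's extra space.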
Unfortunately, while grml 2024.12 technically does boot up, there are two big problems. First, there is no network connectivity, because the kernel and linux-firmware versions are too old (the onboard NIC needs the “r8169: add support for RTL8125D” kernel change). Second, I could not get Xorg to work at all: not with the Intel integrated GPU, nor with the nVidia dedicated GPU, nor with any of the other options in the grml menu. This wasn’t merely a convenience problem: I needed the graphical partition editor for its partition moving/resizing support. Ultimately, it was easier to upgrade my old PC to Linux 6.13 and linux-firmware 20250109, then put in the new disk and copy over the installation.

At this point (early February), I switched to this new machine as my main PC. Unfortunately, I could never get it to run stably! This journal shows some of the issues I faced and what I tried to troubleshoot them.

One of the first issues I encountered with this system was that after resuming from suspend-to-RAM, I was greeted with a login window instead of my X11 session. I couldn’t find any good tips online for the error message in the logs, so I figured I’d wait and see how frequently this happens before investigating further.

On Feb 18th, after resume-from-suspend, none of my USB peripherals would work anymore! This affected all USB ports of the machine and could not be fixed, not even by a reboot, until I fully killed power to the machine. The kernel log showed messages about the USB host controller (“HC”) dying. The HC dying issue happened again when I was writing an SD card in my USB card reader. To try and fix it, I updated the UEFI firmware and disabled the XMP RAM profile. To rule out any GPU-specific issues, I decided to switch back from the Inno3D GeForce RTX 4070 Ti to my older MSI GeForce RTX 3060 Ti.

On Feb 28th, my PC did not resume from suspend-to-RAM. It would not even react to a ping; I had to hard-reset the machine. When checking the syslog afterwards, there were no entries.
I checked my power monitoring and saw that the machine consumed 50W (well above idle power, and far above suspend-to-RAM power) throughout the entire night. Hence, I suspect that suspend-to-RAM did not work correctly and the machine never actually suspended.

On March 4th, I was running the test suite for a medium-sized Django project (= 100% CPU usage) when I encountered a really hard crash: the machine stopped working entirely, meaning all peripherals like keyboard and mouse stopped responding, and the machine did not even respond to a network ping anymore.

At this point, I had enough and switched back to my 2022 PC. What use is a computer that doesn’t work? My hierarchy of needs has stability as the foundation, then speed and convenience. This machine exhausted my tolerance for frustration with its frequent crashes.

Manawyrm actually warned me about the ASUS board: “ASUS boards are a typical gamble as always – they fired their firmware engineers about 10 years ago, so you might get a nightmare of ACPI troubleshooting hell now (or it’ll just work). ASRock is worth a look as a replacement if that happens. Electronics are usually solid, though…”

I didn’t expect that this PC would crash so hard, though. Like, if it couldn’t suspend/resume, that would be one thing (a dealbreaker, but somewhat expected and understandable, probably fixable), but a machine that runs into a hard lockup when compiling/testing software? No thanks. I will buy a different mainboard to see if that helps, likely the ASRock Z890 Pro-A. If you have any recommendations for a Z890 mainboard that actually works reliably, please let me know!

Update 2025-04-17: I have received the ASRock Z890 Pro-A, but the machine shows exactly the same symptoms! I also swapped the power supply, which did not help either. Running Prime95 crashed almost immediately. At this point, I have to assume the CPU itself is defective and have started an RMA. I will post another update once (if?) I get a replacement CPU.
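Both failure modes above can be exercised on demand, which is useful for verifying a replacement part. A sketch combining a suspend/resume loop (via `rtcwake`, which suspends to RAM and programs the RTC to wake the machine) with a sustained all-core load (via `stress-ng`, similar in spirit to a big test suite, not my exact workload); the loop count, delay, and flags are arbitrary choices, and the real commands are commented out so the sketch is safe to dry-run:

```shell
# Run as root on the machine under test.

# 1) Suspend/resume loop: a machine with broken resume typically
#    fails within the first few iterations.
for i in $(seq 1 5); do
  echo "suspend/resume cycle $i"
  # rtcwake -m mem -s 30   # suspend to RAM, auto-wake after 30 s
done

# 2) Sustained all-core load (assumes stress-ng is installed);
#    an unstable CPU/board usually locks up well before the timeout.
CORES=$(nproc)
echo "would stress $CORES cores for 10 minutes"
# stress-ng --cpu "$CORES" --timeout 10m --metrics-brief
```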
Update 2025-05-11: The CPU was indeed faulty! See My 2025 high-end Linux PC for a new article on this build, now with a working CPU.

A few more notes on the components: I like the quick-release PCIe mechanism: ASUS understood that people had trouble unlocking large graphics cards from their PCIe slot, so they added a lever-like mechanism that is easily reachable. In my couple of usages, this worked pretty well!

I wrote about slow boot times with my 2022 PC build that were caused by time-consuming memory training. On this ASUS board, I noticed that they blink the Power LED to signal that memory training is in progress. Very nice! It hadn’t occurred to me previously that the various phases of the boot could be signaled by different Power LED blinking patterns :) The downside of this feature is that while the machine is in suspend-to-RAM, the Power LED also blinks. This is annoying, so I might just disconnect the Power LED entirely.

The UEFI firmware includes what they call a Q-Dashboard: an overview of what is installed/connected in which slot. Quite nice.

As for the Noctua NH-D15 G2: I decided to configure it with one fan instead of two, because using only one fan is the quietest setup while still providing plenty of cooling capacity for this build. There are 3 different versions that differ in how their base plate is shaped; Noctua recommends: “For LGA1851, we generally recommend the regular standard version with medium base convexity” ( https://noctua.at/en/intel-lga1851-all-you-need-to-know ). The height of this cooler is 168 mm, which fits well into the Fractal Define 7 Compact Black.

Carlos Becker 11 months ago

Automatically merge dependabot pull requests

A couple of weeks ago I added a small automation to automatically merge dependabot pull requests if the build succeeds.
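The usual building block for this kind of automation is GitHub's auto-merge feature via the `gh` CLI — a sketch, not necessarily the exact workflow from the post (the PR number is a placeholder, and the real command is commented out):

```shell
# Enable auto-merge so GitHub merges the PR once required checks
# pass. Typically run from a workflow triggered on dependabot PRs,
# with GH_TOKEN set in the environment.
PR=123   # placeholder PR number
echo "enabling auto-merge for PR #$PR"
# gh pr merge --auto --squash "$PR"
```

With `--auto`, the merge is queued rather than forced, so nothing lands until the build actually succeeds.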
