Posts in Tutorial (20 found)

Pausing a CSS animation with getAnimations()

It’s Blogvent, day 9, where I blog daily in December! CSS animations are cool, but sometimes you want them to just cool it. You can pause them with the getAnimations() method! When you call getAnimations() on an element, you get an array of all of the Animation objects on said element, which includes CSS animations. There are various things you can do with a returned Animation object, like getting the currentTime of the animation’s timeline, or the playback state of the animation (playState), or in our case, actually pausing the animation with pause(). We could loop through every Animation object in that array and pause it. Or, if you just want one animation to pause, you can filter from the returned results. Here’s a real demo where there’s only one animation happening, so we pause it based on the current animationName. See the Pen getAnimations() demo by Cassidy (@cassidoo) on CodePen. Hope this was helpful!

0 views
pkh.me 3 days ago

A series of tricks and techniques I learned doing tiny GLSL demos

In the past two months or so, I spent some time making tiny GLSL demos. I wrote an article about the first one, Red Alp. There, I went into detail about the whole process, so I recommend checking it out first if you're not familiar with the field. We will look at 4 demos: Moonlight, Entrance 3, Archipelago, and Cutie. But this time, for each demo, we're going to cover one or two things I learned from it. It won't be a deep dive into every aspect because it would be extremely redundant. Instead, I'll take you along a journey of learning experiences. See it on its official page, or play with the code on its Shadertoy port. In Red Alp, I used volumetric raymarching to go through the clouds and fog, and it took quite a significant part of the code to make the absorption and emission convincing. But there is an alternative technique that is surprisingly simpler. In the raymarching loop, the color contribution at each iteration becomes 1/d or c/d, where d is the distance field value at the current ray position, and c an optional color tint if you don't want to work in grayscale. Some variants exist, for example 1/d^2, but we'll focus on 1/d. Let's see how it looks in practice with a simple cube raymarch where we use this peculiar contribution. The signed distance function of the cube is from the classic Inigo Quilez page. For the rotation you can refer to Xor's or Blackle's articles. For the general understanding of the code, see my previous article on Red Alp. The first time I saw it, I wondered whether it was a creative take, or if it was backed by physical properties. Let's simplify the problem with the following figure: the glowing object sends photons that spread all around it. The further we go from the object, the more spread out these photons are, basically following the inverse square law 1/r^2, which gives the photon density, where r is the distance to the target object.
Let's say we send a ray and want to know how many photons are present along the whole path. We have to "sum", or rather integrate, all these photon density measures along the ray. Since we are doing a discrete sampling (the dots on the figure), we need to interpolate the photon density between each sampling point as well. Given two arbitrary sampling points and their corresponding distances d_n and d_{n+1}, any intermediate distance can be linearly interpolated with r=\mathrm{mix}(d_n,d_{n+1},t) where t is within [0,1]. Applying the inverse square law from before (1/r^2), the integrated photon density between these 2 points can be expressed with this formula (in t range): \int_0^1 \Delta t / \mathrm{mix}(d_n,d_{n+1},t)^2 \, dt. t being normalized, the \Delta t is here to cover the actual segment distance. With the help of Sympy we can do the integration, and the result is \Delta t/(d_n d_{n+1}). Now the key is that in the loop, the \Delta t stepping is actually d_{n+1}, so we end up with 1/d_n. And we find back our mysterious 1/d. It's "physically correct", assuming vacuum space. Of course, reality is more complex, and we don't even need to stick to that formula, but it was nice figuring out that this simple fraction is a fairly good model of reality. In the cube example we didn't go through the object. But if we were to add some transparency, we could have used a different stepping formula, with one term interpreted as absorption and the other as the pass-through, or transparency. I first saw this formula mentioned in Xor's article on volumetrics. To understand it a bit better, here is my intuitive take: the offset causes a potential penetration into the solid at the next iteration, which wouldn't happen otherwise (or only very marginally). When inside the solid, the absolute value causes the ray to continue further (by the amount of the distance to the closest edge). Then the multiplication factor makes sure we don't penetrate too fast into it; it's the absorption, or "damping". This is basically the technique I used in Moonlight to avoid the complex absorption/emission code.
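As a stdlib-only sanity check of that derivation (the post performs the symbolic version with Sympy), we can numerically integrate the inverse-square density over one segment and confirm it comes out to 1/d_n when \Delta t = d_{n+1}. The distances below are arbitrary test values:

```python
# Numeric sanity check of the 1/d glow formula: integrating the
# inverse-square photon density 1/r^2 between two sampling points at
# distances d_n and d_{n+1}, with segment length Δt = d_{n+1}, should
# come out to 1/d_n.

def integrated_density(d_n: float, d_n1: float, steps: int = 100_000) -> float:
    """Midpoint-rule integral of Δt / mix(d_n, d_n1, t)^2 over t in [0, 1]."""
    dt = d_n1  # the Δt stepping in the raymarch loop equals d_{n+1}
    total = 0.0
    h = 1.0 / steps
    for i in range(steps):
        t = (i + 0.5) * h
        r = d_n + (d_n1 - d_n) * t  # mix(d_n, d_n1, t)
        total += dt / (r * r) * h
    return total

print(integrated_density(2.0, 3.0))  # ≈ 0.5, i.e. 1/d_n with d_n = 2
```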
See it on its official page, or play with the code on its Shadertoy port. This demo was probably one of the most challenging, but I'm pretty happy with its atmospheric vibe. It's kind of different from the usual demos of this size. I initially tried with some voxels, but I couldn't make it work with the light under 512 characters (the initialization code was too large, not the branchless DDA stepping). It also had annoying limitations (typically the animation was unit bound), so I fell back to a classic raymarching. The first thing I did differently was to use an L-∞ norm instead of a Euclidean norm for the distance function: every solid is a cube, so it's appropriate to use simpler formulas. For the light, it's not an illusion, it's an actual light: after the first raymarch to a solid, the ray direction is reoriented toward the light and the march runs again (that's what the macro does). Whether or not it hits a solid defines if the fragment should be lit up. A bad surprise of this demo was uncovering two driver bugs on mobile. The first was worked around with the macro (actually saving 3 characters in the process), but the second one had to be unpacked and made me lose 2 characters. Another thing I studied was how to set up the camera in a non-perspective isometric or dimetric view. I couldn't make sense of the maths from the Wikipedia page (it just didn't work), but Sympy rescued me again: inspecting the matrices and factoring out the common terms, we obtain the transform matrices. The ray direction is common to all fragments, so we use the central UV coordinate (0,0) as reference point. We push it forward for convenience: (0,0,1), and transform it with our matrix. This gives the central screen coordinate in world space. Since the obtained point coordinate is relative to the world origin, to go from that point to the origin, we just have to flip its sign.
The ray direction formula follows from that. To get the ray origin of every other pixel, the remaining question is: what is the smallest distance we need to step back the screen coordinates such that, when applying the transformation, the view doesn't clip into the ground at y=0? This requirement can be modeled as an inequality, the -1 being the lowest y-screen coordinate (which we don't want in the ground). The lazy bum in me just asks Sympy to solve it: we get z>\sqrt{2} for isometric, and z>\sqrt{3} for dimetric. With an arbitrary scale of the coordinate we end up with the final setup, where an arbitrary small value is added to make sure the y-coordinate ends up above 0. In Entrance 3, I used a rough approximation of the isometric setup. See it on its official page, or play with the code on its Shadertoy port. For this infinite procedurally generated Japan, I wanted to mark a break from my red/orange obsession. Technically speaking, it's actually fairly basic if you're familiar with Red Alp. I used the same noise for the mountains/islands, but the water uses a different noise. The per-octave noise curve has the particularity of shifting the coordinate with its derivative. This is some form of domain warping that gives the nice effect here. When I say the coordinate, I'm really referring to the x-axis position. It is not needed to work with the z-component (xz forms the flat plane) because each octave of the fbm has a rotation that "mixes" both axes, so it is actually baked in. I didn't come up with the formula, but found it first in this video by Acerola. I don't know if he's the original author, but I've seen the formula replicated in various places. See it on its official page, or play with the code on its Shadertoy port. Here I got cocky and thought I could manage to fit it in 512 chars. I failed, by 90 characters.
I did use the smoothmin operator for the first time: every limb of Cutie's body is composed of two spheres creating a rounded cone (two spheres of different sizes smoothly merged like metaballs). Then I used simple inverse kinematics (IK) for the animation. Using leg parts with a size of 1 helped simplify the formula and make it shorter. You may be wondering about the smooth visuals themselves: I didn't use the depth map but simply the number of iterations. Due to the nature of the raymarching algorithm, when a ray passes close to a shape, it slows down significantly, increasing the number of iterations. This is super useful because it exaggerates the contour of the shapes naturally. It's wrapped into an exponential, but defines the output color directly. I will continue making more of those, keeping my artistic ambition low because of the 512-character constraint I'm imposing on myself. You may be wondering why I keep this obsession with 512 characters; many people called me out on this one. Why 512 in particular? It happens to be the size of a toot on my Mastodon instance so I can fit the code there, and I found it to be a good balance. (As an aside, the two mobile driver bugs mentioned earlier: one with tricky for-loop compounds on Snapdragon/Adreno, hit because I was trying hard to avoid the macros and functions; and one with chained assignments on Imagination/PowerVR, typically affecting the Google Pixel 10 Pro.) There are actually many arguments for the constraint: A tiny demo has to focus on one or two very scoped aspects of computer graphics, which makes it perfect as a learning medium. It's part of the artistic performance: it's not just techniques and visuals, the wizardry of the code is part of why it's so impressive. We're in an era of visuals; people have been fed the craziest VFX ever, but have they seen them done with a few hundred bytes of code? The constraint helps me finish the work: when making art, there is always this question of when to stop. Here there is an intractable point where I just cannot do more and I have to move on.
Similarly, it prevents my ambition from tricking me into some colossal project I will never finish or even start. That format has a ton of limitations, and that's its strength. Working on such a tiny piece of code for days or weeks just brings me joy. I do feel like a craftsperson, spending an unreasonable amount of time perfecting it, for the beauty of it. I'm trying to build a portfolio, and it's important for me to keep it consistent. If the size limit was different, I would have done things differently, so I can't change it now. If I had hundreds more characters, Red Alp might have had birds, the sky opening to cast a beam of light on the mountains, etc.
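For reference, the smoothmin operator mentioned for Cutie's limbs can be sketched as below. This is the classic polynomial smooth minimum popularized by Inigo Quilez; the demo's actual variant may differ:

```python
def smoothmin(a: float, b: float, k: float) -> float:
    """Polynomial smooth minimum: behaves like min(a, b), but blends
    the two distance fields smoothly when they are within k of each other."""
    h = max(k - abs(a - b), 0.0) / k
    return min(a, b) - h * h * k * 0.25

# Far apart: a plain minimum. Close together: slightly less than both,
# which is what merges two spheres into a metaball-like rounded cone.
print(smoothmin(1.0, 5.0, 0.5))  # -> 1.0
print(smoothmin(1.0, 1.1, 0.5))  # -> a bit less than 1.0
```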

0 views
Abhinav Sarkar 5 days ago

Solving Advent of Code 2025 in Janet: Day 1–4

I’m solving the Advent of Code 2025 in Janet. After doing the last five years in Haskell, I wanted to learn a new language this year. I’ve been eyeing the “New Lisps” 1 for a while now, and I decided to learn Janet. Janet is a Clojure-like Lisp that can be interpreted, embedded and compiled, and comes with a large standard library with concurrency, HTTP and PEG parser support. I want to replace Python with Janet as my scripting language. Here are my solutions for Dec 1–4. This post was originally published on abhinavsarkar.net. All my solutions follow the same structure because I wrote a template to create new empty solutions. Actually, I added a fair bit of automation this time to build, run, test and benchmark the solutions. Day 1 was a bit mathy, but it didn’t take too long to figure out. I spent more time polishing the solution into idiomatic Janet code. The PEG grammar to parse the input was the most interesting part of the day for me. If you know Janet, you can notice this is not the cleanest code, but that’s okay, it was my day 1 too. The most interesting part of the day 2 solution was the macro that reads the input at compile time and creates a custom function to check whether a number is in one of the given ranges. This turned out to be almost 4x faster than writing the same thing as a function. Notice the PEG grammar to parse the input. So short and clean! I also leaned into the imperative and mutable nature of the Janet data structures. The code is still not the cleanest, as I was still learning. The first part of day 3 was pretty easy to solve, but using the same solution for the second part just ran forever. I realized that this is a dynamic programming problem, but I don’t like doing array-based solutions, so I simply rewrote the solution to add caching. And it worked! It is definitely on the slower side, but I’m okay with it. The code has become a little more idiomatic Janet. Day 4 is when I learned more about Janet control flow structures.
The solution for part 2 is a straightforward breadth-first traversal. The interesting parts are the control flow forms used. So concise and elegant! That’s it for now. The next note will drop after 4 or 5 days. You can browse the code repo to see the full setup. If you have any questions or comments, please leave a comment below. If you liked this post, please share it. Thanks for reading! The new Lisps that interest me are: Janet, Fennel and Jank. ↩︎
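The day-3 trick (memoize the recursive solution instead of writing an array-based DP table) translates directly to other languages. As a hedged illustration in Python terms, with a stand-in recurrence that merely has the same overlapping-subproblem shape as such puzzles (the actual logic lives in the author's Janet repo):

```python
from functools import cache

# Hypothetical recursive count with overlapping subproblems; @cache turns
# the exponential recursion into dynamic programming without an explicit table.
@cache
def ways(n: int) -> int:
    if n <= 1:
        return 1
    return ways(n - 1) + ways(n - 2)

print(ways(50))  # instant with caching; effectively "runs forever" without it
```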

0 views
neilzone 6 days ago

Adding a button to Home Assistant to run a shell command

Now that I have fixed my garage door controller, I wanted to see if I could use it from within Home Assistant, primarily so that I can then have a widget on my phone screen. In this case, the shell command is a simple invocation to a remote machine. (I am not aware of a way to add a button to an Android home screen to run a shell command or bash script directly.) Adding a button to run a shell command or bash script in Home Assistant was pretty straightforward, following the Home Assistant documentation for shell commands. To my configuration.yaml file, I added shell_command entries. The quirk here was that reloading the configuration.yaml file from within the Home Assistant UI was insufficient: I needed to completely restart Home Assistant to pick up the changes. Once I had restarted Home Assistant, the shell commands were available. To add buttons, I needed to create “helpers”. I did this from the Home Assistant UI, via Settings / Devices & services / Helpers / Create helper — one helper for each button. After I had created each helper, I went back into the helper’s settings to add zone information, so that it appeared in the right place in the dashboard. Having created the button/helper and the shell command, I used an automation to link the two together. I did this via the Home Assistant UI, under Settings / Automations & scenes. For the button, the trigger was a change in state of the button, with no parameters specified. The “then do” action is the shell command for the door in question.
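Per the Home Assistant shell command documentation, the configuration.yaml entry takes this shape. The command names and the ssh invocation below are hypothetical placeholders, not the author's actual setup:

```yaml
shell_command:
  # Hypothetical example: trigger the opener script on a remote machine
  garage_door_open: ssh -i /config/ssh/id_ed25519 pi@garage.local ./open-door.sh
  garage_door_close: ssh -i /config/ssh/id_ed25519 pi@garage.local ./close-door.sh
```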

0 views

The what, how, and why of CSS clamp()

It’s Blogvent, day 4, where I blog daily in December! CSS clamp() is cool and you should use it. In a sentence, clamp() lets you assign a value to a CSS property between a minimum and a maximum range, using a preferred value in that range. It’s really helpful for responsive layouts and typography! clamp() has three parameters, in order: a minimum value, a preferred value, and a maximum value. You can assign it to a property, like width: clamp(200px, 40%, 400px). The column width here is always between 200px and 400px wide, and defaults to 40% of its container width. If that 40% is less than 200px, the width will be 200px. Similarly, if that 40% is more than 400px, the width will be 400px. Or, another example: a font size that is always between 16px and 24px, and defaults to 4% of the screen’s width. If a screen is 1000px wide, that means the font size would be 40px if it were that exact 4%, but with this function, it is capped at 24px. Why use it? It’s shorter! Honestly that’s why. You can accomplish a lot with a single line of clamp() (that is arguably easier to maintain) compared to a set of media queries. It reduces reliance on multiple rules and functions. A typical media query approach for a column width needs a rule per breakpoint, but with clamp(), you can do it in one declaration. This is way shorter, and I would argue, easier to read and maintain! CSS clamp() is widely supported, so you can safely use it across your apps and websites. If you’d like to learn more, here are some handy links for ya: the Clamp Calculator, the CSS clamp() documentation, and the CSS Tricks Almanac. Until next time!
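The resolution arithmetic above is easy to sanity-check: clamp(MIN, VAL, MAX) resolves like max(MIN, min(VAL, MAX)). A tiny sketch, in Python only to exercise the numbers from the font-size example:

```python
def css_clamp(minimum: float, preferred: float, maximum: float) -> float:
    """How clamp(MIN, VAL, MAX) resolves: the preferred value, kept in range."""
    return max(minimum, min(preferred, maximum))

# font-size: clamp(16px, 4vw, 24px) on a 1000px-wide viewport:
# 4vw = 40px, which exceeds the 24px maximum, so it is capped.
print(css_clamp(16, 0.04 * 1000, 24))  # -> 24
# On a 300px viewport, 4vw = 12px, which is below the 16px floor.
print(css_clamp(16, 0.04 * 300, 24))   # -> 16
```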

0 views
neilzone 1 weeks ago

Using gpioset and gpioget to control the gpio pins on a Raspberry Pi with a relay board under Debian Trixie

A couple of years ago, I bodged a web-controlled garage door opener with a Raspberry Pi. It worked fine, until I upgraded the Raspberry Pi in question to Debian Trixie. I noted that the relevant files in /sys/class/gpio were no longer present, and some further research showed that this was an intentional change: In the upstream kernel, /sys/class/gpio (the sysfs interface) has been deprecated in favor of a device interface, /dev/gpiochipN. The old interface is gone from the kernel in the nightly Debian builds (bookworm/sid) for Raspberry Pi. So, unsurprisingly, my old way of doing things was not working. The documentation for the relay board has not been updated. Fortunately, after a bit of experimentation, I could get it working again using gpioset and gpioget. I could not find a way of stopping gpioset after a fixed period of time (I was expecting one of its options to do it, but it did not), so I ended up wrapping it in timeout, which is also a bodge. Anyway, this is now what I am using:
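For reference, a sketch of what that looks like with the libgpiod v2 tools that ship with Trixie. The chip name, line offset, and hold time here are hypothetical placeholders; adjust them to your relay board's wiring:

```shell
# Drive the relay line high, and let timeout(1) kill gpioset after 2 seconds
# (gpioset holds the requested line state only while it keeps running).
timeout 2 gpioset --chip gpiochip0 17=1

# Read the current value of the same line back.
gpioget --chip gpiochip0 17
```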

0 views
The Coder Cafe 1 weeks ago

Build Your Own Key-Value Storage Engine—Week 3

Curious how leading engineers tackle extreme scale challenges with data-intensive applications? Join Monster Scale Summit (free + virtual). It’s hosted by ScyllaDB, the monstrously fast and scalable database. Agenda: Week 0: Introduction; Week 1: In-Memory Store; Week 2: LSM Tree Foundations; Week 3: Durability with Write-Ahead Logging. Last week, you built the first version of an LSM: an in-memory memtable for recent writes, immutable SSTables on disk, and a MANIFEST file listing the SSTable files. However, if the database crashes, data in the memtable would be lost. This week, you will focus on durability by introducing Write-Ahead Logging (WAL). A WAL is an append-only file on disk that records the same operations you keep in memory. How it works: on write, record it in the WAL and the memtable; on restart, you read the WAL from start to end and apply each record to the memtable. Introducing a WAL is not free, though. Writes are slower because each write also goes to the WAL. It also increases write amplification, the ratio of data written to data requested by a client. Another important aspect of durability is when to synchronize a file’s state with the storage device. When you write to a file, it may appear as saved, but the bytes may sit in memory caches rather than on the physical disk. These caches are managed by the OS’s filesystem, an abstraction over the disk. If the machine crashes before the data is flushed, you can lose data. To force the data to stable storage, you need to call a sync primitive. The simple, portable choice is to call fsync, a system call that flushes a file’s buffered data and required metadata to disk. 💬 If you want to share your progress, discuss solutions, or collaborate with other coders, join the community Discord server: Join the Discord. For the WAL data format, you won’t use JSON like the SSTables, but NDJSON (Newline-Delimited JSON). It is a true append-only format with one JSON object per line.
On each put: append a record to the WAL file, opened in append mode, with the operation field set to put and the key and value fields set to the provided key and value. Then update the memtable with the same logic as before: if the key exists, update the value; otherwise, create a new entry. Finally, acknowledge the HTTP request. On startup: create an empty WAL file if it doesn’t exist, then replay the WAL from start to end, applying each valid line to the memtable. Keep the same flush trigger (2,000 entries) and the same logic (stop-the-world operation) as last week. Write the new SSTable: flush the memtable as a new immutable JSON SSTable file with keys sorted (same as before), fsync the SSTable file, and fsync the parent directory of the SSTable to make the new filename persistent. Update the MANIFEST atomically: read the current MANIFEST lines into memory and append the new SSTable filename, write the entire list to a temporary file from the start, rename the temporary file over the MANIFEST, and fsync the parent directory of the MANIFEST. Reset the WAL: truncate the WAL to zero length and fsync the WAL file. On the client side, if the server is unavailable, do not fail; retry indefinitely with a short delay (or exponential backoff). To assess durability: run the client against the same input file (put.txt), and stop and restart your database randomly during the run. Your client should confirm that no acknowledged writes were lost after recovery. As a stretch goal, add a per-record checksum to each WAL record. On startup, verify records and stop at the first invalid/truncated one, discarding the tail. For reference, ScyllaDB checksums segments using CRC32; see its commitlog segment file format for inspiration. Regarding the flush process, if the database crashes after step 1 (write the new SSTable) and before step 2 (update the MANIFEST atomically), you may end up with a dangling SSTable file on disk. Add a startup routine to delete any file that exists on disk but is not listed in the MANIFEST. This keeps the data directory aligned with the MANIFEST after a crash. That’s it for this week! Your storage engine is now durable.
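The write path and recovery steps above can be sketched in a few lines of Python. The file name and JSON field names here are this sketch's own choices, since the series leaves them up to you:

```python
import json
import os

WAL_PATH = "wal.ndjson"

def wal_append(key: str, value: str) -> None:
    """Append one NDJSON put record and force it to stable storage before acking."""
    with open(WAL_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps({"op": "put", "key": key, "value": value}) + "\n")
        f.flush()             # push Python's buffer to the OS...
        os.fsync(f.fileno())  # ...then the OS cache to the disk

def wal_replay() -> dict:
    """Rebuild the memtable by applying WAL records from start to end."""
    memtable: dict = {}
    if os.path.exists(WAL_PATH):
        with open(WAL_PATH, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                try:
                    rec = json.loads(line)
                except json.JSONDecodeError:
                    break  # truncated tail after a crash: discard the rest
                if rec.get("op") == "put":
                    memtable[rec["key"]] = rec["value"]
    return memtable

wal_append("k1", "v1")
wal_append("k1", "v2")  # the later write wins on replay
print(wal_replay())     # -> {'k1': 'v2'}
```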
On restart, data that was in the memtable is recovered from the WAL. This is made possible by fsync and the atomic update of the MANIFEST. Deletion is not handled yet. In the worst case, a miss can read all SSTables, which quickly becomes highly inefficient. In two weeks, you will add a delete endpoint and learn how SSTables are compacted so the engine can reclaim space and keep reads efficient. In your implementation, you used fsync as a simple “make it durable now“ button. In practice, the OS offers finer control both over what you sync and when you sync. What: fdatasync (or opening the file with O_DSYNC) persists the data without pushing unrelated metadata, which is usually what you want for WAL appends. You can go further to bypass the page cache and sync only the data you wrote, but that comes with extra complexity. When: while calling a sync primitive after every request is offered by systems that promise durability, it is often not the default. Many databases use group commit, which batches several writes into one call to amortize the cost while still providing strong guarantees. For additional information, see “A write-ahead log is not a universal part of durability” by Phil Eaton. For example, RocksDB provides options for tuning WAL behavior to meet the needs of different applications: synchronous WAL writes (what you implemented this week), group commit, or no WAL writes at all. If you want, you can also explore group commit in your implementation and its impact on durability and latency/throughput, since this series will not cover it later. Also, you should know that since a WAL adds I/O to the write path, storage engines use a few practical tricks to keep it fast and predictable. A common one is to preallocate fixed-size WAL segments at startup to avoid the penalty of dynamic allocation, prevent write fragmentation, and align buffers for O_DIRECT (an open(2) flag for direct I/O that bypasses the OS page cache).
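The group commit idea is worth a tiny sketch too: collect whatever is pending, write it in one batch, fsync once, then acknowledge everyone in the batch. This toy version (single-threaded driver, names invented here) shows only the amortization, not a production design:

```python
import json
import os
import queue
import threading

class GroupCommitWAL:
    """Toy group commit: many enqueued records, one write, one fsync."""

    def __init__(self, path: str):
        self.f = open(path, "a", encoding="utf-8")
        self.pending: queue.Queue = queue.Queue()

    def put(self, key: str, value: str) -> threading.Event:
        """Enqueue a record; the returned event fires once it is durable."""
        done = threading.Event()
        rec = json.dumps({"op": "put", "key": key, "value": value})
        self.pending.put((rec, done))
        return done

    def commit_once(self) -> None:
        """Drain everything queued so far, write it, and fsync exactly once."""
        batch = []
        while not self.pending.empty():
            batch.append(self.pending.get())
        if batch:
            self.f.write("".join(rec + "\n" for rec, _ in batch))
            self.f.flush()
            os.fsync(self.f.fileno())  # one fsync amortized over the batch
            for _, done in batch:
                done.set()             # now safe to acknowledge each client

wal = GroupCommitWAL("group_wal.ndjson")
events = [wal.put(f"k{i}", "v") for i in range(3)]
wal.commit_once()
print(all(e.is_set() for e in events))  # -> True
```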
At The Coder Cafe, we serve timeless concepts with your coffee to help you master the fundamentals. Written by a Google SWE and trusted by thousands of readers, we support your growth as an engineer, one coffee at a time.

0 views
xenodium 1 weeks ago

Bending Emacs - Episode 7: Eshell built-in commands

With my recent rinku post and Bending Emacs episode 6 both fresh in mind, I figured I may as well make another Bending Emacs episode, so here we are: Bending Emacs Episode 7: Eshell built-in commands. Check out the rinku post for a rundown of things covered in the video. Liked the video? Please let me know. Got feedback? Leave me some comments. Please go like my video, share it with others, and subscribe to my channel. If there's enough interest, I'll continue making more videos! Enjoying this content or my projects? I am an indie dev. Help make it sustainable by ✨ sponsoring ✨. Need a blog? I can help with that. Maybe buy my iOS apps too ;)

0 views
xenodium 1 weeks ago

Rinku: CLI link previews

In my last Bending Emacs episode, I talked about overlays and used them to render link previews in an Emacs buffer. While the overlays merely render an image, the actual link preview image is generated by rinku, a tiny command line utility I built recently. rinku leverages macOS APIs to do the actual heavy lifting, rendering/capturing a view off screen, and saving it to disk. Similarly, it can fetch preview metadata, also saving the related thumbnail to disk. In both cases, rinku outputs JSON. By default, rinku fetches metadata for you; in this instance, the image looks a little something like this. On the other hand, there is a flag that generates a preview, very much like the ones you see in native macOS and iOS apps; that preview renders as follows. While overlays are one way to integrate rinku anywhere in Emacs, I had been meaning to look into what I can do for eshell in particular. Eshell is just another buffer, and while overlays could do the job, I wanted a shell-like experience. After all, we already know we can echo images into an eshell buffer. Before getting to rinku on eshell, there's a related hack I'd been meaning to get to for some time… While we're all likely familiar with the cat command, I remember being a little surprised to find that eshell offers an alternative elisp implementation. Surprised too? Go check it! Where am I going with this? Well, if eshell's cat command is an elisp implementation, we know its internals are up for grabs, so we can technically extend it to display images too. eshell/cat is just another function, so we can advise it to add image superpowers. I was pleasantly surprised at how little code was needed. It basically scans for image arguments to handle within the advice and otherwise delegates to the original implementation. And with that, we can see our freshly powered-up cat command in action. By now, you may wonder why the detour when the post was really about rinku? You see, this is Emacs, and everything compounds!
We can now leverage our revamped cat command to give similar superpowers to rinku, by merely adding an eshell function. As we now know, rinku outputs things to JSON, so we can parse the process output and subsequently feed the image path to cat. rinku can also output link titles, so we can show those too whenever possible. With that, we can see the lot in action. While non-Emacs users are often puzzled by how frequently we bring user flows and integrations into our beloved editor, once you learn a little elisp, you start realising how relatively easily things can integrate with one another, and pretty much everything is up for grabs. Reckon rinku and these tips will be useful to you? Enjoying this blog or my projects? I am an 👉 indie dev 👈. Help make it sustainable by ✨ sponsoring ✨. Need a blog? I can help with that. Maybe buy my iOS apps too ;)

0 views
fLaMEd fury 1 weeks ago

Contain The Web With Firefox Containers

What’s going on, Internet? While tech circles are grumbling about Mozilla stuffing AI features into Firefox that nobody asked for (lol), I figured I’d write about a feature people might actually like if they’re not already using it. This is how I’m containing the messy sprawl of the modern web using Firefox Containers. After the ability to run uBlock Origin, containers are easily one of Firefox’s best features. I’m happy to share my setup, which helps contain the big, bad, evil, and annoying parts of the web. Not because I visit these sites often or on purpose. I usually avoid them. But for the moments where I click something without paying attention, or I need to open a site just to get a piece of information and fail (lol, login walls), or I end up somewhere I don’t want to be, containers stop that one slip from bleeding into the rest of my tabs. Firefox holds each site in its own space so nothing spills into the rest of my browsing. Here’s how I’ve split things up. Nothing fancy. Just tidy and logical. Nothing here is about avoiding these sites forever. It’s about containing them so they can’t follow me around. I use two extensions together: MAC handles the visuals, Containerise handles the rules. You can skip MAC and let Containerise auto-create containers, but you lose control over colours and icons, so everything ends up looking the same. I leave MAC’s site lists empty so it doesn’t clash with Containerise. Containerise becomes the single source of truth. If I need to open something in a specific container, I just right-click and choose Open in Container. Containers don’t fix the surveillance web, but they do reduce the blast radius. One random visit to Google, Meta, Reddit or Amazon won’t bleed into my other tabs. Cookies stay contained. Identity stays isolated. Tracking systems get far less to work with. Well, that’s my understanding of it anyway.
It feels like one of the last features in modern browsers that still puts control back in the user’s hands, without having to give up the open web. Just letting you know that I used ChatGPT (in a container) to help me create the regex here - there was no way I was going to be able to figure that out myself. So while Firefox keeps pandering to the industry with AI features nobody asked for (lol), there’s still a lot to like about the browser. Containers, uBlock Origin, and the general flexibility of Firefox still give you real control over your internet experience. Hey, thanks for reading this post in your feed reader! Want to chat? Reply by email or add me on XMPP , or send a webmention . Check out the posts archive on the website. Firefox Multi Account Containers (MAC) for creating and customising the containers (names, colours, icons). Containerise for all the routing logic using regex rules.

0 views
Oya Studio 1 week ago

Vibe Coding: Beyond the Joke – A Serious Tool for Rapid Prototyping

Learn how to create custom animated scenes for your live streams using Flutter as a source in OBS.

0 views
Andre Garzia 1 week ago

My chocolate truffles recipe

# My Brigadeiro Recipe (chocolate truffles)

This is a very traditional recipe from Brazil that is a staple of birthday parties all over the country. The ingredients are:

* 1 tin of condensed milk (395g)
* 1/2 of a cup of cream (100g)
* 1/2 tablespoon of butter (15g)
* 1 and a half tablespoons of cocoa powder (23g)

Mix everything in a pan until well combined. Turn the heat to medium and stir until you can drag a spatula through the mix and, when it comes back together, it folds from the top like a wave. Be aware that the mixture will rise in the pot before going down again. If you have oreos, you can make little bats for halloween as seen in the photo below.

![](/2025/11/img/3f29fa74-c6a2-45c0-8332-fa77772c25ab.jpg)

0 views
xenodium 1 week ago

Bending Emacs - Episode 6: Overlays

The Bending Emacs series continues with a new episode. Bending Emacs Episode 6: Overlays Today we had a quick intro to overlays. Here's the snippet I used for adding the overlay: Similarly, this is what we used for removing the overlay. As for the experiments, you can find: Hope you enjoyed the video! Liked the video? Please let me know. Got feedback? Leave me some comments . Please go like my video , share with others, and subscribe to my channel . If there's enough interest, I'll continue making more videos! Enjoying this content or my projects ? I am an indie dev. Help make it sustainable by ✨ sponsoring ✨ Need a blog? I can help with that . Maybe buy my iOS apps too ;) Redaction snippet at the related blog post . Dired media metadata at Ready Player's ready-player-dired.el . Link previews: While I don't have elisp to share for link previews just yet, I did release a tiny thumbnail utility named rinku ;)

0 views
Rob Zolkos 2 weeks ago

A Mermaid Validation Skill for Claude Code

AI coding agents generate significantly more markdown documentation than we used to write manually. This creates opportunities to explain concepts visually with mermaid diagrams - flowcharts, sequence diagrams, and other visualizations defined in text. When Claude generates these diagrams, the syntax can be invalid even though the code looks correct. Claude Code skills provide a way to teach Claude domain-specific workflows - in this case, validating diagrams before marking the work complete. Mermaid diagrams are text-based and version-controllable, which makes them useful for documentation. The syntax can be finicky:

- Missing arrows or incorrect arrow types
- Unbalanced brackets
- Invalid node names
- Typos in keywords

A diagram might look correct in markdown but fail to render. Without validation, you discover this only after the work is complete. Skills in Claude Code are markdown files that provide instructions for specific tasks. They can be invoked manually or triggered automatically when Claude detects a relevant situation. Here is the mermaid validation skill: The skill uses (mermaid CLI) for validation. Install it globally: Create the skill file at: Or in your project at: When Claude creates or edits a markdown file containing mermaid diagrams:

- The skill is automatically invoked
- Validation runs via
- Errors are fixed and re-validated
- Success is only reported when diagrams render correctly

Invalid diagrams have no value. Rather than leaving validation to the user, Claude verifies its own work. Manual validation requires remembering to ask for it every time. Skills make validation automatic and consistent. Every diagram gets checked, and errors are fixed before you see them. The pattern applies beyond mermaid diagrams. Any output that can be validated by a tool is a candidate for a validation skill. If a tool can identify errors, Claude can fix them before marking the task complete. Let me know if you have questions about setting this up.
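The same validation loop can be scripted outside of a skill, too. Here is a rough Python sketch of the idea: extract every mermaid fence from a markdown file and try to render each one with mermaid-cli. The helper names (`extract_mermaid_blocks`, `validate_block`) and the temp-file handling are my own, not from the skill; the `mmdc -i in -o out` invocation is standard mermaid-cli usage, and it must be installed for validation to actually run:

```python
import os
import re
import subprocess
import tempfile

def extract_mermaid_blocks(markdown: str):
    """Pull the body of every mermaid fence out of a markdown string."""
    fence = "`" * 3  # triple backtick, built up to avoid nesting fences here
    pattern = fence + r"mermaid\n(.*?)" + fence
    return re.findall(pattern, markdown, flags=re.DOTALL)

def validate_block(diagram: str) -> bool:
    """Render one diagram with mermaid-cli; non-zero exit means invalid.
    Requires mermaid-cli (mmdc) to be installed globally."""
    with tempfile.NamedTemporaryFile("w", suffix=".mmd", delete=False) as f:
        f.write(diagram)
        src = f.name
    try:
        proc = subprocess.run(["mmdc", "-i", src, "-o", src + ".svg"],
                              capture_output=True, text=True)
        return proc.returncode == 0
    finally:
        os.unlink(src)

# A tiny markdown document with one diagram in it:
doc = ("Intro text.\n" + "`" * 3 + "mermaid\n"
       "graph TD\n  A --> B\n" + "`" * 3 + "\nOutro text.\n")
blocks = extract_mermaid_blocks(doc)
```

Looping `validate_block` over `blocks` and failing on the first `False` gives you the same "no invalid diagram ships" guarantee in CI that the skill gives you inside Claude Code.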

0 views
Brain Baking 2 weeks ago

Rendering Your Java Code Less Error Prone

Error Prone is Yet Another Programming Cog invented by Google to improve their Java build system. I’ve used the multi-language PMD static code analyser before (don’t shoot the messenger!), but Error Prone takes it a step further: it hooks itself into your build system, surfacing programming errors as compile-time errors. Great, right, detecting errors earlier, without having to kick an external process like PMD into gear? Until you’re forced to deal with hundreds of errors after enabling it: sure. Expect a world of hurt when your intention is to switch to Error Prone just to improve code linting, especially for big existing code bases. Luckily, there’s a way to gradually tighten the screw: first let it generate a bunch of warnings and only when you’ve tackled most of them, turn on Error! Halt! mode. When using Gradle with multiple subprojects, things get a bit more convoluted. This mainly serves as a recollection of things that finally worked—feeling of relief included. The root file: The first time you enable it, you’ll notice a lot of nonsensical errors popping up: that’s what that is for. We currently have the following errors disabled: Error Prone’s powerful extendability resulted in Uber picking up where Google left off by releasing NullAway , a plug-in that does annotation-based null checking fully supporting the JSpecify standard . That is, it checks for stupid stuff like: JSpecify is a good attempt at unifying these annotations—last time I checked, IntelliJ suggested auto-importing them from five different packages—but the biggest problem is that you’ll have to dutifully annotate where needed yourself. There are OpenRewrite JSpecify recipes available to automatically add them but that won’t even cover 20% of the cases, as when it comes to manual if null checks and the use of , NullAway is just too stupid to understand what your intentions are. NullAway assumes non-null by default. 
This is important, because in Java object terminology, everything is nullable by default. You won’t need to add a lot of annotations, but adding has a significant ripple effect: if that’s nullable, then the object calling this object might also be, which means I should add this annotation here and here and here and here and here and… Uh oh. After 100 compile errors, Gradle gives up. I fixed 100 errors, recompiled, and 100 more appeared. This fun exercise lasted almost an entire day until I was the one giving up. The potential commit touched hundreds of files and added more bloat to one of the most bloated (it’s Java, remember) code bases I’ve ever seen. Needless to say, we’re currently evaluating our options here. I’ve also had quite a bit of trouble picking the right combination of plug-ins for Gradle to get this thing working. In case you’d like to give it a go, extend the above configuration with: You have to point NullAway to the base package path ( ) otherwise it can’t do its thing. Note the configuration: we had a lot of POJOs with private constructors that set fields to while they actually cannot be null because of serialisation frameworks like Jackson/Gson. Annotate these with and NullAway will ignore them. If you thought fixing all Error Prone errors was painful, wait until you enable NullAway. Every single statement needs its annotation. OpenRewrite can help, but up to a point, as for more complicated assignments you’ll need to decide for yourself what to do. Not that the exercise didn’t bear any fruit. I’ve spotted more than a few potential mistakes we made in our code base this way, and it’s fun to try and minimize nullability. The best option of course is to rewrite the whole thing in Kotlin and forget about the suffix. All puns aside, I can see how Error Prone and its plug-ins can help catch bugs earlier, but it’s going to come at a cost: that of added annotation bloat. 
You probably don’t want to globally disable too many errors so is also going to pop up much more often. A difficult team decision to make indeed. Related topics: / java / By Wouter Groeneveld on 25 November 2025.  Reply via email . —that’s a Google-specific one? I don’t even agree with this thing being here… —we’d rather have on every line next to each other —we can’t update to JDK9 just yet —we’re never going to run into this issue —good luck with fixing that if you heavily rely on reflection

1 view
Jack Vanlightly 2 weeks ago

Demystifying Determinism in Durable Execution

Determinism is a key concept to understand when writing code using durable execution frameworks such as Temporal, Restate, DBOS, and Resonate. If you read the docs you see that some parts of your code must be deterministic while other parts do not have to be. This can be confusing to a developer new to these frameworks. This post explains why determinism is important and where it is needed and where it is not. Hopefully, you’ll have a better mental model that makes things less confusing. We can break down this discussion into: Recovery through re-execution. Separation of control flow from side effects. Determinism in control flow. Idempotency and duplication tolerance in side effects. This post uses the terms “control flow” and “side effect”, but there is no agreed upon set of terms across the frameworks. Temporal uses “workflow” and “activity” respectively. Restate uses terms such as “handler”, “action” and “durable step”. Each framework uses different vocabulary and has a different architecture behind it. There isn’t a single overarching concept that covers everything, but the one outlined in this post provides a simple way to think about determinism requirements in a framework agnostic way. Durable execution takes a function that performs some side effects, such as writing to a database, making an API call, sending an email etc, and makes it reliable via recovery (which in turn depends on durability). For example, a function with three side effects: Step 1, make a db call. Step 2, make an API call. Step 3, send an email. If step 2 fails (despite in situ retries) then we might leave the system in an inconsistent state (the db call was made but not the API call). In durable execution, recovery consists of executing the function again from the top, and using the results of previously run side effects if they exist. For example, we don’t just execute the db call again, we reuse the result from the first function execution and skip that step. 
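The replay mechanic can be sketched in a few lines of Python. This is a framework-agnostic toy, not any real SDK's API (the `durable_step` helper and in-memory `journal` are invented for illustration; a real framework persists the journal durably), but it shows why a retry effectively jumps to the first unexecuted step:

```python
# Toy sketch of recovery through re-execution. Completed side effects
# are journaled; on retry the function runs again from the top, but
# journaled steps return their stored results instead of re-running.
journal = {}   # step name -> stored result (persisted durably in reality)
calls = []     # records each actual side-effect execution

def durable_step(name, fn):
    if name in journal:        # replay: reuse the recorded result
        return journal[name]
    result = fn()              # first execution: run the side effect
    journal[name] = result
    return result

def process():
    a = durable_step("db", lambda: calls.append("db") or "row-42")
    b = durable_step("api", lambda: calls.append("api") or "ok")
    c = durable_step("email", lambda: calls.append("email") or "sent")
    return (a, b, c)

first = process()   # runs all three side effects for real
retry = process()   # "recovery": replays entirely from the journal
```

After both calls, `calls` still contains each side effect exactly once: the retry executed the whole function body again, but every step was served from the journal.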
This becomes equivalent to jumping to the first unexecuted step and resuming from there. Fig 1. A function is retried, using the results of the prior partial execution where available. So, durable execution ensures that a function can progress to completion via recovery, which is a retry of the function from the top. Resuming from where we left off involves executing the code again but using stored results where possible in order to resume from where it failed. In my Coordinated Progress model, this is the combination of a reliable trigger and progressable work . A function is a mix of executing control flow and side effects. The control flow itself may include state, and branches (if/then/else) or loops execute based on that state. The control flow decides which side effects to execute based on this looping and branching. Fig 2. Control flow and side effects In Temporal, the bad_login function would be a workflow and the block_account and send_warning_email would be activities . The workflow and activity work is separated into explicit workflow and activity tasks, possibly run on different workers. Other frameworks simply treat this as a function and wrap each side effect to make it durable. I could get into durable promises and continuations but that is a topic I will cover in a future post. So let’s look at another example. First we retrieve a customer record, then we check if we’re inside of the promo end date, if so, charge the card with a 10% discount, else charge the full amount. Finally send a receipt email. This introduces a bug that we’ll cover in the next section. Fig 3. process_order function as a mix of control flow (green) and side effects (grey) Durable execution treats the control flow differently from the side effects, as we’ll see in sections 3 and 4. Determinism is required in the control flow because durable execution re-executes code for recovery. 
While any stored results of side effects from prior executions are reused, the control flow is executed in full. Let’s look at an example: Fig 4. Double charge bug because of a non-deterministic if/else In the first execution, the current time is within the promo date, so the then-branch is executed, charging the card with the discount. However, on the second invocation, the current time is after the promo end date, causing the else-branch to execute, double charging the customer. Fig 5. A non-deterministic control flow causes a different branch to execute during the function retry. This is fixed by making the now() deterministic by turning it into a durable step whose result is recorded. Then the second time it is executed, it returns the same datetime (it becomes deterministic). The various SDKs provide deterministic dates, random numbers and UUIDs out of the box. Another fun example is if we make the decision based on the customer record retrieved from the database. In this variant, the decision is made based on the loyalty points the customer currently has. Do you see the problem? If the send email side effect fails, then the function is retried. However, the points value of the order was deducted from the customer in the last execution, so that in execution 2, the customer no longer has enough loyalty points! Therefore the else-branch is executed, charging their credit card! Another double payment bug. We must remember that the durable function is not an atomic transaction. It could be considered a transaction which has guarantees around making progress, but not one atomic change across systems. We can fix this new double charge bug by ensuring that the same customer record is returned on each execution. We can do that by treating the customer record retrieval as a durable step whose result will be recorded. Fig 6. Make the customer retrieval deterministic if the control flow depends on it. 
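The now() fix can be sketched the same way: wrap the non-deterministic read in a recorded step so every replay sees the value from the first execution. Again a toy with a hypothetical `durable_step` helper and in-memory journal, not a real SDK's API:

```python
import datetime

# Toy: make now() deterministic by recording it as a durable step.
journal = {}   # a real framework would persist this durably

def durable_step(name, fn):
    if name not in journal:
        journal[name] = fn()   # recorded on the first execution only
    return journal[name]       # every replay sees the same value

def process_order(promo_end):
    # The branch decision is stable across retries, because the
    # recorded datetime never changes on replay.
    now = durable_step("now", datetime.datetime.now)
    if now < promo_end:
        return "charge card with 10% discount"
    return "charge card full amount"

promo_end = datetime.datetime.now() + datetime.timedelta(days=1)
first = process_order(promo_end)
retry = process_order(promo_end)   # replay: same branch, no double charge
```

Even if the real retry happened after the promo ended, the recorded timestamp forces the same branch, which is exactly what the SDK-provided deterministic dates, random numbers and UUIDs do for you.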
Re-execution of the control flow requires determinism: it must execute based on the same decision state every single time and it must also pass the same arguments to side effect code every single time. However, side effects themselves do not need to be deterministic, they only require idempotency or duplication tolerance. Durable execution re-executes the control flow as many times as is needed for the function to make progress to completion. However, it typically avoids executing the same side effects again if they were previously completed. The result of each side effect is durably stored by the framework and a replay only needs the stored result. Therefore side effects do not need to be deterministic, and often determinism would be undesirable anyway. A db query that retrieves the current number of orders or the current address of a customer may return a different result every time. That’s a good thing, because the number of orders might change, and an address might change. If the control flow depends on the number of orders, or the current address, then we must ensure that the control flow is always given the same answer. This is achieved by storing the result of the first execution, and using that result for every replay (making the control flow deterministic). Now to idempotency. What if a side effect does complete, but a failure of some kind causes the result to not be stored by the framework? Well, the durable execution framework will replay the function, see no stored result and execute the side effect again. For this reason we want side effects to either be idempotent or otherwise tolerate running more than once. For example, we might decide that sending the same email again is ok. The cost of reliable idempotency might not be worth it. On the other hand, a credit card payment most definitely should be idempotent. Some frameworks make the separation of control flow from side effects explicit, namely, Temporal. 
In the Temporal programming model, the workflow definition is the control flow and each activity is a side effect (or some sort of non-deterministic operation). Other frameworks such as Resonate and Restate are based on functions which can call other functions which can result in a tree of function calls. Each function in this tree has a portion of control flow and side effects (either executed locally or via a call to another function). Fig 7. A tree of function calls, with control-flow in each function. The same need for determinism in the control flow applies to each of these functions. This is guaranteed by ensuring the same inputs, and the replacement of non-deterministic operations (such as date/times, random numbers, ids, retrieved objects) with deterministic ones. Our mental model is built on separating a durable function into the control flow and the side effects. Some frameworks actually explicitly separate the two (like Temporal) while others are more focused on composable functions. The need for determinism in control flow is a by-product of recovery being based on retries of the function. If we could magically reach into the function, to the exact line to resume from, reconstructing the local state and executing from there, we wouldn’t need deterministic control flow code. But that isn’t how it works. The function is executed again from the top, and it better make the same decisions again, or else you might end up with weird behaviors, inconsistencies or even double charging your customers. The side effects absolutely can and should be non-deterministic, which is fine because they should generally only be executed once, even if the function itself is executed many times. For those failure cases where the result is not durably stored, we rely on idempotency or duplication tolerance. This is a pretty generalized model. There are a number of nuances and differences across the frameworks. 
Some of the examples would actually result in a non-determinism error in Temporal, due to how it records event history and expects a matching replay. The developer must learn the peculiarities of each framework. Hopefully this post provides a general overview of determinism in the context of durable execution.

0 views
Jimmy Miller 2 weeks ago

The Easiest Way to Build a Type Checker

A type checker is a piece of software that feels incredibly simple, yet incredibly complex. Seeing Hindley-Milner written in a logic programming language is almost magical, but it never helped me understand how it was implemented. Nor does actually trying to read anything about Algorithm W or any academic paper explaining a type system. But thanks to David Christiansen , I have discovered a setup for type checking that is so conceptually simple it demystified the whole thing for me. It goes by the name Bidirectional Type Checking. The two directions in this type checker are inferring types and checking types. Unlike Hindley-Milner, we do need some type annotations, but these are typically at function definitions. So code like the sillyExample below is completely valid and fully type checks despite lacking annotations. How far can we take this? I'm not a type theory person. Reading papers in type theory takes me a while, and my comprehension is always lacking, but this paper seems like a good starting point for answering that question. So, how do we actually create a bidirectional type checker? I think the easiest way to understand it is to see a full working implementation. So that's what I have below for a very simple language. To understand it, start by looking at the types to figure out what the language supports, then look at each of the cases. But don't worry, if it doesn't make sense, I will explain in more detail below. Here we have, in ~100 lines, a fully functional type checker for a small language. Is it without flaw? Is it feature complete? Not at all. In a real type checker, you might not want to know only if something typechecks, but you might want to decorate the various parts with their type; we don't do that here. We don't do a lot of things. But I've found that this tiny bit of code is enough to start extending to much larger, more complicated code examples. 
If you aren't super familiar with the implementation of programming languages, some of this code might strike you as a bit odd, so let me very quickly walk through the implementation. First, we have our data structures for representing our code: Using this data structure, we can write code in a way that is much easier to work with than the actual string that we use to represent code. This kind of structure is called an "abstract syntax tree". For example This structure makes it easy to walk through our program and check things bit by bit. This simple line of code is the key to how all variables, all functions, etc, work. When we enter a function or a block, we make a new Map that will let us hold the local variables and their types. We pass this map around, and now we know the types of things that came before it. If we wanted to let you define functions out of order, we'd simply need to do two passes over the tree. The first to gather up the top-level functions, and the next to type-check the whole program. (This code gets more complicated with nested function definitions, but we'll ignore that here.) Each little bit of may seem a bit trivial. So, to explain it, let's add a new feature, addition. Now we have something just a bit more complicated, so how would we write our inference for this? Well, we are going to do the simple case; we are only allowed to add numbers together. Given that our code would look something like this: This may seem a bit magical. How does make this just work? Imagine that we have the following expression: There is no special handling in for so we end up at If you trace out the recursion (once you get used to recursion, you don't actually need to do this, but I've found it helps people who aren't used to it), we get something like So now for our first left, we will recurse back to , then to , and finally bottom out in some simple thing we know how to . This is the beauty of our bidirectional checker. 
We can interleave these and calls at will! How would we change our add to work with strings? Or coerce between number and string? I leave that as an exercise to the reader. It only takes just a little bit more code. I know for a lot of people this might all seem a bit abstract. So here is a very quick, simple proof of concept that uses this same strategy above for a subset of TypeScript syntax (it does not try to recreate the TypeScript semantics for types). If you play with this, I'm sure you will find bugs. You will find features that aren't supported. But you will also see the beginnings of a reasonable type checker. (It does a bit more than the one above, because otherwise the demos would be lame. Mainly multiple arguments and adding binary operators.) But the real takeaway here, I hope, is just how straightforward type checking can be. If you see some literal, you can its type. If you have a variable, you can look up its type. If you have a type annotation, you can the type of the value and it against that annotation. I have found that following this formula makes it quite easy to add more and more features.
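The infer/check split is small enough to sketch in a few lines of Python. This is my own toy mirroring the article's structure, not the article's code (which targets a different language and a richer AST); expressions are plain tuples rather than a parsed syntax tree:

```python
# Toy bidirectional checker. Expressions are tuples:
#   ("num", 3)  ("str", "hi")  ("var", name)  ("add", left, right)
#   ("ann", expr, type)   <- a type annotation switches direction

def infer(env, expr):
    tag = expr[0]
    if tag == "num":
        return "Number"                 # literals infer their type directly
    if tag == "str":
        return "String"
    if tag == "var":
        return env[expr[1]]             # variables: look up the bound type
    if tag == "add":
        check(env, expr[1], "Number")   # only numbers may be added,
        check(env, expr[2], "Number")   # so check both operands
        return "Number"
    if tag == "ann":
        check(env, expr[1], expr[2])    # annotation: check, then trust it
        return expr[2]
    raise TypeError(f"cannot infer {tag}")

def check(env, expr, expected):
    actual = infer(env, expr)           # fallback rule: infer, then compare
    if actual != expected:
        raise TypeError(f"expected {expected}, got {actual}")

env = {"x": "Number"}
result = infer(env, ("add", ("num", 1), ("var", "x")))
```

Tracing `result`: the `add` case checks each operand, each check falls back to inferring (`Number` for the literal, a lookup for the variable), and the two directions recurse into each other exactly as described above.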

0 views
Jeff Geerling 2 weeks ago

How to silence the fan on a CM5 after shutdown

Out of the box, if you buy a Raspberry Pi Compute Module 5, install it on the official CM5 IO Board, and install a fan on it (e.g. my current favorite, the EDAtec CM5 Active Cooler ), you'll notice the fan ramps up to 100% speed after you shut down the Pi. That's not fun, since at least for a couple of my CM5s, they are more often powered down than running, creating a slight cacophony!

0 views