Latest Posts (10 found)
Lukáš Lalinský 1 month ago

How I turned Zig into my favorite language to write network programs in

I’ve been watching the Zig language for a while now, given that it was created for writing audio software (low-level, no allocations, real time). I never paid too much attention though; it seemed a little weird to me and I didn’t see a real need for it. Then I saw a post from Andrew Kelley (creator of the language) on Hacker News about how he reimplemented my Chromaprint algorithm in Zig, and that got me really interested.

I’d been planning to rewrite AcoustID’s inverted index for a long time. I had a couple of prototypes, but none of the approaches felt right. I was going through some rough times and wanted to learn something new, so I decided to use the project as an opportunity to learn Zig. And it was great, writing Zig is a joy. The new version was faster and more scalable than the previous C++ one.

I was happy, until I wanted to add a server interface. In the previous C++ version, I used Qt, which might seem very strange for server software, but I wanted a nice way of doing asynchronous I/O and Qt allowed me to do that. It was callback-based, but Qt has a lot of support for making callbacks usable. In the newer prototypes, I used Go, specifically for the ease of networking and concurrency. With Zig, I was stuck. There are some Zig HTTP servers, so I could use those, but I wanted to implement my legacy TCP server as well, and that’s a lot harder, unless I want to spawn a lot of threads.

Then I made a crazy decision: to use Zig also for implementing a clustered layer on top of my server, using NATS as a messaging system. So I wrote a Zig NATS client, and that gave me a lot of experience with Zig’s networking capabilities.

Fast forward to today: I’m happy to introduce Zio, an asynchronous I/O and concurrency library for Zig. If you look at the examples, you will not really see where the asynchronous I/O is, but it’s there, in the background, and that’s the point. Writing asynchronous code with callbacks is a pain. Not only that, it requires a lot of allocations, because you need state to survive across callbacks.

Zio is an implementation of Go-style concurrency, but limited to what’s possible in Zig. Zio tasks are stackful coroutines with fixed-size stacks. When you run an I/O operation, it is initiated in the background and the current task is suspended until the operation is done. When it’s done, the task is resumed and the result is returned. That gives you the illusion of synchronous code, allowing for much simpler state management.

Zio supports fully asynchronous network and file I/O, has synchronization primitives (mutexes, condition variables, etc.) that work with the cooperative runtime, has Go-style channels, OS signal watchers and more. Tasks can run in single-threaded mode, or multi-threaded, in which case they can migrate from thread to thread for lower latency and better load balancing.

And it’s FAST. I don’t want to be posting benchmarks here, maybe later when I have more complex ones, but the single-threaded mode is beating any framework I’ve tried so far. It’s much faster than both Go and Rust’s Tokio. Context switching is virtually free, comparable to a function call. The multi-threaded mode, while still not as robust as Go/Tokio, has comparable performance. It’s still a bit faster than either of them, but that performance might go down as I add more fairness features.

Because it implements the standard interfaces for reader/writer, you can actually use external libraries that are unaware they are running within Zio. Here is an example of an HTTP server:

When I started working with Zig, I really thought it was going to be a niche language to write the fast code in, and that I’d need a layer on top of that in a different language. With Zio, that changed. The next step for me is to update my NATS client to use Zio internally. And after that, I’m going to work on an HTTP client/server library based on Zio.

Lukáš Lalinský 8 months ago

My AI helpers, CodeRabbit and SourceGraph Cody

I’ve been an early adopter of AI coding tools. I’ve been using GitHub Copilot since the technical preview stages in 2021. It was mind-blowing to me. The interface was pretty minimal compared to what we have now, but even at that stage, it was revolutionizing the way I work. I’ve dreamed for a long time about programming without having to actually write all the code, and it was starting to become a reality. All in all, I was pretty happy with it.

Last year, I discovered Cody from SourceGraph. I tried the trial and I was hooked. It had so much more context about the code I’m working on. I could just select a function, tell it to refactor something in it, and it would do it directly in my editor. Writing documentation, generating tests, writing new code, everything became easier. I used it last year to write a replacement for the acoustid-index server, something I’d been planning for a long time, but I decided to also learn a new language, Zig, on the project. Cody made the process really effortless. It included countless refactorings, as I was still learning the right patterns in the language, and I was doing most of the work without actually writing the code myself. This year, I’ve started using the chat with thinking models a lot more often, along with Cody’s ability to apply the code blocks from the chat to the editor. Even better, I’m actually using this for free, as part of their support for open source. It’s such a good tool that I’d be happy to pay for it now, and will definitely start doing that once my current free license expires.

And this year I discovered CodeRabbit for automated code reviews. I was super skeptical about this, but they also have a free plan for open source projects, so I decided to give it a try. I’m maintaining AcoustID alone, so having another set of eyes looking at the code, even if mechanical ones, is welcome. And I was blown away. On the first pull request, it actually found a small logical error I had in the code. And this kept happening again and again. After some time, I switched it to the assertive profile, and now I actually enjoy opening a pull request and going through the suggestions it makes. Yes, sometimes they are obsessive, but that’s OK. I’ve tried alternatives, like Gemini or Copilot, both of which have options to do code reviews, but the level of quality is on a completely different level. Gemini and Copilot feel like useless toys compared to CodeRabbit.

The last four years have completely changed my approach to programming, and for the better. As good as all these new AI tools are, I don’t really expect them to be replacing technical programming jobs. You really need to evaluate their outputs, and if you are not able to do that critically, you will deal with a lot of bullshit code. But if you can judge the quality of the output, these are great helpers and I’m really looking forward to what the future brings.

Lukáš Lalinský 8 months ago

Goodbye, Google Chrome

I upgraded my laptop today, which included Google Chrome, and surprise, surprise: when it restarted, it started telling me that the uBlock Origin extension is no longer supported and will be disabled. I’m paying for YouTube Premium, but I still can’t live without an ad blocker. It turns out you can just enable the extension again and it will keep working, but for how long? So I decided to switch. I’ve been using Chrome since the very early days, long before it was mainstream. The more popular Chrome got, the more hostile Google became. Now, removing my ad blocker is just them saying they don’t want me to use their browser. That’s OK, now we have alternatives, even with ad blockers built in. My first choice would have been Opera, but unfortunately 1Password doesn’t support it, so I went with the next best option, Brave. So far, I’m pretty happy with it.


Msgpack serialization library for Zig

I’ve been playing with Zig over the last few weeks. The language had been on my radar for a long time, since it was originally developed for writing audio software, but I never paid too much attention to it. It seems that it’s becoming more popular, so I decided to learn it and picked a small task of rewriting the AcoustID fingerprint index server in it. That is still in progress, but there is one side product that is almost ready, a library for handling msgpack serialization. The library can only be used with static schemas, defined using Zig’s type system. There are many options for generating compact messages, almost competing with protobuf, but without separate proto files and protoc. I’m quite happy with the API, which is mainly possible thanks to Zig’s comptime type reflection. This is the most basic usage: See the project on GitHub for more details.


Chromaprint 1.4 released

A new version of Chromaprint has been released. This is a fairly big release I originally intended to call 2.0, but one key feature I was planning to include is not yet finished, so I decided to go with 1.4 instead. So what’s new?

The biggest feature is that all components of the audio fingerprinting process now work in a streaming fashion and can provide partial results at any time. That means it’s now possible to feed a continuous audio stream to the process and get back partial fingerprints. This is useful if you want to fingerprint e.g. an internet radio stream and do not want to explicitly split the stream into small chunks. A side effect of these changes is that the whole process is now faster and uses less memory.

This change is also reflected in the fpcalc utility, which can now work on streams. In fact, the fpcalc utility has been completely rewritten. It now also supports JSON output for easier integration, since you can find a JSON parser in the standard library of almost every new programming language. Here is an example using an online radio: Or you can fingerprint raw PCM data from an external audio input: This opens up many options for how Chromaprint can be used outside of AcoustID.

There has also been a big source code cleanup:

- A lot of old code has been removed.
- We started using C++11 features, so it’s no longer possible to compile Chromaprint with old C++ compilers.
- Unit tests no longer depend on Boost.
- KissFFT is now bundled in the package and used as a fallback if no other FFT library is found. That means Chromaprint can now be built without any external dependencies.
- The public C API now uses standard fixed-size int types from stdint.h. This breaks API-level backwards compatibility, so you will need to modify your programs. The binary interface has not changed, and programs compiled against the old version of the library will continue working.
- All source code from Chromaprint written by me has been relicensed from LGPL to MIT. Note that this does not mean much for the library as a whole, since it still depends on LGPL code, but it’s now easier to reuse parts of the code in other projects, if needed.

We also have fpcalc binaries for ARM processors now. They were built and tested on a Raspberry Pi, but might work on other ARM devices.

- Source code tarball (597 KB)
- Static binaries for the fpcalc tool:
  - Windows, i686 (1.4 MB)
  - Windows, x86_64 (1.5 MB)
  - macOS, i386 (1.1 MB)
  - macOS, x86_64 (1.2 MB)
  - Linux, i686 (1.3 MB)
  - Linux, x86_64 (1.2 MB)
  - Linux, armhf (1.2 MB)
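The JSON output mentioned above is what makes integration easy: any language can pick it up with its standard library. As a sketch, here is how a Go program could parse it. The field names (`duration`, `fingerprint`) are my reading of fpcalc’s JSON output and worth verifying against the version you have installed.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Fingerprint mirrors the JSON object fpcalc prints with JSON output
// enabled. The field names are an assumption here, not taken from the
// post itself -- check them against your fpcalc build.
type Fingerprint struct {
	Duration    float64 `json:"duration"`
	Fingerprint string  `json:"fingerprint"`
}

// parse decodes one fpcalc JSON result.
func parse(data []byte) (Fingerprint, error) {
	var fp Fingerprint
	err := json.Unmarshal(data, &fp)
	return fp, err
}

func main() {
	// A hypothetical fpcalc result, truncated for the example.
	input := []byte(`{"duration": 210.5, "fingerprint": "AQAAEw..."}`)
	fp, err := parse(input)
	if err != nil {
		panic(err)
	}
	fmt.Println(fp.Duration, fp.Fingerprint[:5])
}
```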


mbdata 2016.07.17 released

I have released a new version of mbdata . It’s a small Python package for working with the MusicBrainz database using SQLAlchemy. This version updates the SQLAlchemy models to the latest MusicBrainz database schema. It also adds support for Python 3 in addition to Python 2.


Chromaprint 1.3.2 released

A new version of Chromaprint has been released. This is a very small bug fix release fixing an fpcalc crash on a corrupt file.

Changes since version 1.3.1:

- Fixed crash on an invalid audio file that FFmpeg could not decode.
- Fixed build on Ubuntu 14.04 with libav.

- Source code tarball (525 KB)
- Static binaries for the fpcalc tool:
  - Windows, 32-bit (1 MB)
  - Windows, 64-bit (1 MB)
  - Mac OS X, 32-bit, 10.4+ (964 KB)
  - Mac OS X, 64-bit, 10.4+ (944 KB)
  - Linux, 32-bit (1 MB)
  - Linux, 64-bit (1 MB)


Chromaprint 1.3.1 released

A new version of Chromaprint has been released.

Changes since version 1.3:

- Fixed fingerprint generation to actually restrict fingerprints to the requested length.
- Fixed SONAME version for the shared library.

- Source code tarball (525 KB)
- Static binaries for the fpcalc tool:
  - Windows, 32-bit (1 MB)
  - Windows, 64-bit (1 MB)
  - Mac OS X, 32-bit, 10.4+ (964 KB)
  - Mac OS X, 64-bit, 10.4+ (944 KB)
  - Linux, 32-bit (1 MB)
  - Linux, 64-bit (1 MB)


Chromaprint 1.3 released

A new version of Chromaprint has been released. This is another small release; there are no changes to the core functionality.

Changes since version 1.2:

- The binary packages have been built with FFmpeg 2.8.6, adding support for DSF files.
- You can use a new function to get the full fingerprint.
- New function for calculating SimHash from the fingerprint data.
- Added info section to the fpcalc executable on Mac OS X.
- Generate .pc (pkg-config) file on Mac OS X when not building a framework.
- Removed use of some long-deprecated FFmpeg APIs.
- Some smaller bug fixes.

- Source code tarball (525 KB)
- Static binaries for the fpcalc tool:
  - Windows, 32-bit (1 MB)
  - Windows, 64-bit (1 MB)
  - Mac OS X, 32-bit, 10.4+ (964 KB)
  - Mac OS X, 64-bit, 10.4+ (944 KB)
  - Linux, 32-bit (1 MB)
  - Linux, 64-bit (1 MB)


Let’s Encrypt and Nginx

I’m late to the game, but I finally gave Let’s Encrypt a try and I love it. The biggest advantage is the fact that SSL certificates can be completely automated. No more remembering how to renew certificates once a year. These are mostly just notes for my future use, but maybe they will be useful for somebody. This is how I use Let’s Encrypt with Nginx:

1. Install the letsencrypt client.
2. Create a directory for the client to use for authorization.
3. Add a snippet to my nginx site config that allows the letsencrypt client to manage authorization files for my domain.
4. Generate the first certificate.
5. Put the generated certificate into the HTTPS section of my nginx config.
6. Set up a cron job that will make sure my certificates stay up to date.
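On the nginx side, the authorization and certificate parts of a webroot-style setup look roughly like this. The domain and directory paths below are placeholders for illustration, not necessarily the exact values from my config:

```nginx
# HTTP server: expose the authorization files so the letsencrypt
# client can answer the domain-validation challenge.
# /var/www/letsencrypt is a placeholder for the directory created
# for the client earlier.
server {
    listen 80;
    server_name example.com;

    location /.well-known/acme-challenge/ {
        root /var/www/letsencrypt;
    }
}

# HTTPS server: point nginx at the generated certificate files.
# Let's Encrypt keeps the live certificates under
# /etc/letsencrypt/live/<domain>/.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}
```

After renewal (whether triggered manually or from cron), nginx needs a reload to pick up the new certificate files.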
