Posts in Zig (20 found)
Lukáš Lalinský 1 months ago

How I turned Zig into my favorite language to write network programs in

I’ve been watching the Zig language for a while now, given that it was created for writing audio software (low-level, no allocations, real time). I never paid too much attention though, it seemed a little weird to me and I didn’t see the real need. Then I saw a post from Andrew Kelley (creator of the language) on Hacker News, about how he reimplemented my Chromaprint algorithm in Zig, and that got me really interested. I’ve been planning to rewrite AcoustID’s inverted index for a long time, I had a couple of prototypes, but none of the approaches felt right. I was going through some rough times, wanted to learn something new, so I decided to use the project as an opportunity to learn Zig. And it was great, writing Zig is a joy. The new version was faster and more scalable than the previous C++ one. I was happy, until I wanted to add a server interface. In the previous C++ version, I used Qt , which might seem very strange for a server software, but I wanted a nice way of doing asynchronous I/O and Qt allowed me to do that. It was callback-based, but Qt has a lot of support for making callbacks usable. In the newer prototypes, I used Go, specifically for the ease of networking and concurrency. With Zig, I was stuck. There are some Zig HTTP servers, so I could use those. I wanted to implement my legacy TCP server as well, and that’s a lot harder, unless I want to spawn a lot of threads. Then I made a crazy decision, to use Zig also for implementing a clustered layer on top of my server, using NATS as a messaging system, so I wrote a Zig NATS client , and that gave me a lot of experience with Zig’s networking capabilities. Fast forward to today, I’m happy to introduce Zio, an asynchronous I/O and concurrency library for Zig . If you look at the examples, you will not really see where is the asynchronous I/O, but it’s there, in the background and that’s the point. Writing asynchronous code with callbacks is a pain. 
Not only that, it requires a lot of allocations, because you need state to survive across callbacks. Zio is an implementation of Go style concurrency, but limited to what’s possible in Zig. Zio tasks are stackful coroutines with fixed-size stacks. When you run , this will initiate the I/O operation in the background and then suspend the current task until the I/O operation is done. When it’s done, the task will be resumed, and the result will be returned. That gives you the illusion of synchronous code, allowing for much simpler state management. Zio support fully asynchronous network and file I/O, has synchronization primitives (mutexes, condition variables, etc.) that work with the cooperative runtime, has Go-style channels, OS signal watches and more. Tasks can run in single-threaded mode, or multi-threaded, in which case they can migrate from thread to thread for lower latency and better load balancing. And it’s FAST. I don’t want to be posting benchmarks here, maybe later when I have more complex ones, but the single-threaded mode is beating any framework I’ve tried so far. It’s much faster than both Go and Rust’s Tokio. Context switching is virtually free, comparable to a function call. The multi-threaded mode, while still not being as robust as Go/Tokio, has comparable performance. It’s still a bit faster than either of them, but that performance might go down as I add more fairness features. Because it implements the standard interfaces for reader/writer, you can actually use external libraries that are unaware they are running within Zio. Here is an example of a HTTP server: When I started working with Zig, I really thought it’s going to be a niche language to write the fast code in, and then I’ll need a layer on top of that in a different language. With Zio, that changed. The next step for me is to update my NATS client to use Zio internally. And after that, I’m going to work on a HTTP client/server library based on Zio.

0 views
Chris Coyier 2 months ago

Strongbacks

Back when I went to the Alaska Folk Festival , a real highlight was catching The Strongbacks do their version of sea shanties live on the main stage. I remember a real tear-jerker protest shanty that I’d love to hear again. As fate would have it, I also went to Zig Zag campout this year and met a fella named Evan who was an excellent clawhammer player from Astoria, Oregon. I didn’t realize until the last night at the community showcase concert that Evan as *in* The Strongbacks. He plugged that they have a new album coming out at the end of his performance at that show and… now it’s out! It’s on all the stuff (ughgk) but perhaps easiest right here is a YouTube “topic” for the whole album. I really like this one: I haven’t listened to the whole thing yet. Hopefully it’s got that protest one in it, but if not, it’ll live in my brain.

0 views
matklad 2 months ago

Look Out For Bugs

One of my biggest mid-career shifts in how I write code was internalizing the idea from this post: Don’t Write Bugs Historically, I approached coding with an iteration-focused mindset — you write a draft version of a program, you set up some kind of a test to verify that it does what you want it to do, and then you just quickly iterate on your draft until the result passes all the checks. This was a great approach when I was only learning to code, as it allowed me to iterate past the things which were not relevant for me at that point, and focus on what matters. Who cares if it is or in the “паблик статик войд мэйн стринг а-эр-джи-эс”, it’s just some obscure magic spell anyway, and completely irrelevant to the maze-traversing thingy I am working on! Carrying over this approach past the learning phase was a mistake. As Lawrence points out, while you can spend time chasing bugs in the freshly written code, it is possible to dramatically cut the amount of bugs you introduce in the first place, if you focus on optimizing that (and not just the iteration time). It felt (and still feels) like a superpower! But there’s already a perfectly fine article about not making bugs, so I am not going to duplicate it. Instead, I want to share a related, but different super power: You can find bugs by just reading code. I remember feeling this superpower for the first time. I was investigating various rope implementations, and, as a part of that, I looked at the , the implementation powering IntelliJ, very old and battle tested code. And, by just reading the code, I found a bug, since fixed . It wasn’t hard, the original code is just 500 lines of verbose Java (yup, that’s all that you need for a production rope). And I wasn’t even trying to find a bug, it just sort-of jumped out at me while I was trying to understand how the code works. That is, you can find some existing piece of software, carefully skim through implementation, and discover real problems that can be fixed. 
You can do this to your software as well! By just re-reading a module you wrote last year, you might find subtle problems. I regularly discover TigerBeetle issues by just covering this or that topic on IronBeetle : bug discovered live , fixed , and PR merged . Here are some tips for getting better at this: The key is careful, slow reading. What you actually are doing is building the mental model of a program inside your head. Reading the source code is just an instrument for achieving that goal. I can’t emphasize this enough: programming is all about building a precise understanding inside your mind, and then looking for the diff between your brain and what’s in git. Don’t dodge an opportunity to read more of the code. If you are reviewing a PR, don’t review just the diff, review the entire subsystem. When writing code, don’t hesitate to stop and to probe and feel the context around. Go for or to understand the historical “why” of the code. When reading, mostly ignore the textual order, don’t just read each source file top-down. Instead, use these two other frames: Start at or subsystem equivalent, and use “goto definition” to follow an imaginary program counter. Identify the key data structures and fields, and search for all places where they are created and modified. You want to see a slice across space and time, state and control flow (c.f. Concurrent Expression Problem ). Just earlier today I used the second trick to debug an issue for which I haven’t got a repro. I identified as the key assignment that was recently introduced, then ctrl + f for , and that immediately revealed a gap in my mental model. Note how this was helped by the fact that the thing in question, , was always called that in the source code! If your language allows it, avoid , use proper names. Identify and collect specific error-prone patterns or general smells in the code. In Zig, if there’s an allocator and a in the same scope, you need to be very careful . 
If there’s an isolated tricky function, it’s probably fine. If there’s a tricky interaction between functions, it is a smell, and some bugs are lurking there. Bottom line: reading the code is surprisingly efficient at proactively revealing problems. Create space for calm reading. When reading, find ways to build mental models quickly, this is not entirely trivial.

0 views
matklad 3 months ago

Reserve First

A short post about a coding pattern that is relevant for people who use the heap liberally and manage memory with their own hands. Let’s start with two bugs. The first one is from Andrew Kelley’s HYTRADBOI 2025 talk, “Programming Without Pointers” : The second one is from the Ghostty terminal emulator: Can you spot the two bugs? In lieu of a spoiler, allow me to waste your bandwidth with a Dante Gabriel Rossetti painting: In both functions, a bug happens when the second expression throws. In the case, we insert an item into a hash table, but leave it uninitialized. Accessing the item later will crash in the best case. The Ghostty example is even more interesting. It actually tries to avoid this exact problem, by attempting to carefully revert changes in the block. But it fails to do so properly! While the data is restored to on error, the still frees , so we end up with uninitialized memory all the same: Both are “exception safety” problems: if we attempt an operation that mutates an object, and an error happens midway, there are three possible outcomes: The object state remains as if we didn’t attempt the operation. The object is left in a different, but valid state. The object becomes invalid and unsafe to use. In these two cases in particular, the only source of errors is fallible allocation. And there’s a pattern to fix it: As a reminder , is a Zig idiom for expressing “no errors after this point”. Applying the pattern to two examples we get: Memory reservation is a magic trick, contains all the failures, but doesn’t change the data structure! Do you see how powerful that is? I learned this pattern from Andrew Kelley during the coffee break after the talk! I haven’t measured the optimal level of spice here to make the truest possible statement. Instead I opted for dumping as much spice as possible to get the brain gears grinding: Zig should remove and rename to just . If you want to insert a single item, that’s two lines now. 
Don’t insert items one-by-one, reserve memory in bulk, up-front. Zig applications should consider aborting on OOM. While the design goal of handling OOM errors correctly is laudable, and Zig makes it possible, I’ve seen only one application, xit which passes “matklad spends 30 minutes grepping for ” test. For libraries, prefer leaving allocation to the caller, or use generative testing with an allocator that actually returns errors. Alternatively, do as TigerBeetle. We take this pattern literally, reserve all resources in main, and never allocate memory afterwards: ARCHITECTURE.md#static-memory-allocation

0 views
matklad 3 months ago

Zig's Lovely Syntax

It’s a bit of a silly post, because syntax is the least interesting detail about the language, but, still, I can’t stop thinking how Zig gets this detail just right for the class of curly-braced languages, and, well, now you’ll have to think about that too. On the first glance, Zig looks almost exactly like Rust, because Zig borrows from Rust liberally. And I think that Rust has great syntax, considering all the semantics it needs to express (see “Rust’s Ugly Syntax” ). But Zig improves on that, mostly by leveraging simpler language semantics, but also through some purely syntactical tasteful decisions. How do you spell a number ninety-two? Easy, . But what type is that? Statically-typed languages often come with several flavors of integers: , , . And there’s often a syntax for literals of a particular types: , , . Zig doesn’t have suffixes, because, in Zig, all integer literals have the same type: : The value of an integer literal is known at compile time and is coerced to a specific type on assignment or ascription: To emphasize, this is not type inference, this is implicit comptime coercion. This does mean that code like generally doesn’t work, and requires an explicit type. Raw or multiline strings are spelled like this: This syntax doesn’t require a special form for escaping itself: It nicely dodges indentation problems that plague every other language with a similar feature. And, the best thing ever: lexically, each line is a separate token. As Zig has only line-comments, this means that is always whitespace. Unlike most other languages, Zig can be correctly lexed in a line-by-line manner. Raw strings is perhaps the biggest improvement of Zig over Rust. 
Rust brute-forces the problem with syntax, which does the required job, technically, but suffers from the mentioned problems: indentation is messy, nesting quotes requires adjusting hashes, unclosed raw literal breaks the following lexical structure completely, and rustfmt’s formatting of raw strings tends to be rather ugly. On the plus side, this syntax at least cannot be expressed by a context-free grammar! For the record, Zig takes C syntax (not that C would notice): The feels weird! It will make sense by the end of the post. Here, I want only to note part, which matches the assignment syntax . This is great! This means that grepping for gives you all instances where a field is written to. This is hugely valuable: most of usages are reads, but, to understand the flow of data, you only need to consider writes. Ability to mechanically partition the entire set of usages into majority of boring reads and a few interesting writes does wonders for code comprehension. Where Zig departs from C the most is the syntax for types. C uses a needlessly confusing spiral rule. In Zig, all types are prefix: While pointer type is prefix, pointer dereference is postfix, which is a more natural subject-verb order to read: Zig has general syntax for “raw” identifiers: It is useful to avoid collisions with keywords, or for exporting a symbol whose name is otherwise not a valid Zig identifier. It is a bit more to type than Kotlin’s delightful , but manages to re-use Zig’s syntax for built-ins ( ) and strings. Like, Rust, Zig goes for function declaration syntax. This is such a massive improvement over C/Java style function declarations: it puts token (which is completely absent in traditional C family) and function name next to each other, which means that textual search for allows you to quickly find the function. Then Zig adds a little twist. While in Rust we write The arrow is gone! 
Now that I’ve used this for some time, I find arrow very annoying to type, and adding to the visual noise. Rust needs the arrow: Rust has lambdas with an inferred return type, and, in a lambda, the return type is optional. So you need some sort of an explicit syntax to tell the parser if there is return type: And it’s understandable that lambdas and functions would want to use compatible syntax. But Zig doesn’t have lambdas, so it just makes the type mandatory. So the main is Related small thing, but, as name of the type, I think I like more than . Zig is using and for binding values to names: This is ok, a bit weird after Rust’s, whose would be in Zig, but not really noticeable after some months. I do think this particular part is not great, because , the more frequent one, is longer. I think Kotlin nails it: , , . Note all three are monosyllable, unlike and ! Number of syllables matters more than the number of letters! Like Rust, Zig uses syntax for ascribing types, which is better than because optional suffixes are easier to parse visually and mechanically than optional prefixes. Zig doesn’t use and and spells the relevant operators as and : This is easier to type and much easier to read, but there’s also a deeper reason why they are not sigils. Zig marks any control flow with a keyword. And, because boolean operators short-circuit, they are control flow! Treating them as normal binary operator leads to an entirely incorrect mental model. For bitwise operations, Zig of course uses and . Both Zig and Rust have statements and expressions. Zig is a bit more statement oriented, and requires explicit returns: Furthermore, because there are no lambdas, scope of return is always clear. Relatedly, the value of a block expression is void. A block is a list of statements, and doesn’t have an optional expression at the end. 
This removes the semicolon problem — while Rust rules around semicolons are sufficiently clear (until you get to macros), there’s some constant mental overhead to getting them right all the time. Zig is more uniform and mechanical here. If you need a block that yields a value, Zig supports a general syntax for breaking out of a labeled block: Rust makes pedantically correct choice regarding s: braces are mandatory: This removes the dreaded “dangling else” grammatical ambiguity. While theoretically nice, it makes -expression one-line feel too heavy. It’s not the braces, it’s the whitespace around them: But the ternary is important! Exploding a simple choice into multi-line condition hurts readability. Zig goes with the traditional choice of making parentheses required and braces optional: By itself, this does create a risk of style bugs. But in Zig formatter (non-configurable, user-directed) is a part of the compiler, and formatting errors that can mask bugs are caught during compilation. For example, is an error due to inconsistent whitespace around the minus sign, which signals a plausible mixup of infix and binary minus. No such errors are currently produced for incorrect indentation (the value add there is relatively little, given ), but this is planned. NB: because Rust requires branches to be blocks, it is forced to make synonym with . Otherwise, the ternary would be even more unusable! Syntax design is tricky! Whether you need s and whether you make or mandatory in ifs are not orthogonal! Like Python, Zig allows on loops. Unlike Python, loops are expressions, which leads to a nicely readable imperative searches: Zig doesn’t have syntactically-infinite loop like Rust’s or Go’s . Normally I’d consider that a drawback, because these loops produce different control flow, affecting reachability analysis in the compiler, and I don’t think it’s great to make reachability dependent on condition being visibly constant. But! 
As Zig places semantics front and center, and the rules for what is and isn’t a comptime constant are a backbone of every feature, “anything equivalent to ” becomes sufficiently precise. Incidentally, these days I tend to write “infinite” loops as Almost always there is an up-front bound for the number of iterations until the break, and its worth asserting this bound, because debugging crashes is easier than debugging hangs. , , , , and all use the same Ruby/Rust inspired syntax for naming captured values: I like how the iterator comes first, and then the name of an item follows, logically and syntactically. I have a very strong opinion about variable shadowing. It goes both ways: I spent hours debugging code which incorrectly tried to use a variable that was shadowed by something else, but I also spent hours debugging code that accidentally used a variable that should have been shadowed! I really don’t know whether on balance it is better to forbid or encourage shadowing! Zig of course forbids shadowing, but what’s curious is that it’s just one episode of the large crusade against any complexity in name resolution. There’s no “prelude”, if you want to use anything from std, you need to import it: There are no glob imports, if you want to use an item from std, you need to import it: Zig doesn’t have inheritance, mixins, argument-dependent lookup, extension functions, implicit or traits, so, if you see , that is guaranteed to be a boring method declared on type. Similarly, while Zig has powerful comptime capabilities, it intentionally disallows declaring methods at compile time. Like Rust, Zig used to allow a method and a field to share a name, because it actually is syntactically clear enough at the call site which is which. But then this feature got removed from Zig. More generally, Zig doesn’t have namespaces. There can be only one kind of in scope, while Rust allows things like I am astonished at the relative lack of inconvenience in Zig’s approach. 
Turns out that is all the syntax you’ll ever need for accessing things? For the historically inclined, see “The module naming situation” thread in the rust mailing list archive to learn the story of how rust got its syntax. The lack of namespaces touches on the most notable (by its absence) feature of Zig syntax, which deeply relates to the most profound aspect of Zig’s semantics. Everything is an expression. By which I mean, there’s no separate syntactic categories of values, types, and patterns. Values, types, and patterns are of course different things. And usually in the language grammar it is syntactically obvious whether a particular text fragment refers to a type or a value: So the standard way is to have separate syntax families for the three categories, which need to be internally unambiguous, but can be ambiguous across the categories because the place in the grammar dictates the category: when parsing , everything until is a pattern, stuff between and is a type, and after we have a value. There are two problems here. First, there’s a combinatorial explosion of sorts in the syntax, because, while three categories describe different things, it turns out that they have the same general tree-ish shape. The second problem is that it might be hard to maintain category separation in the grammar. Rust started with the three categories separated by a bright line. But then, changes happen. Originally, Rust only allowed syntax for assignment. But today you can also write to do unpacking like Similarly, the turbofish used to move the parser from the value to the type mode, but now const parameters are values that can be found in the type position! The alternative is not to pick this fight at all. Rather than trying to keep the categories separately in the syntax, use the same surface syntax to express all three, and categorize later, during semantic analysis. In fact, this is already happens in the example — these are different things! 
One is a place (lvalue) and another is a “true” value (rvalue), but we use the same syntax for both. I don’t think such syntactic unification necessarily implies semantic unification, but Zig does treat everything uniformly, as a value with comptime and runtime behavior (for some values, runtime behavior may be missing, for others — comptime): The fact that you can write an where a type goes is occasionally useful. But the fact that simple types look like simple values syntactically consistently make the language feel significantly less busy. As a special case of everything being an expression, instances of generic types look like this: Just a function call! Though, there’s some resistance to trickery involved to make this work. Usually, languages rely on type inference to allow eliding generic arguments. That in turn requires making argument syntax optional, and that in turn leads to separating generic and non-generic arguments into separate parameter lists and some introducer sigil for generics, like or . Zig solves this syntactic challenge in the most brute-force way possible. Generic parameters are never inferred, if a function takes 3 comptime arguments and 2 runtime arguments, it will always be called with 5 arguments syntactically. Like with the (absence of) importing flourishes, a reasonable reaction would be “wait, does this mean that I’ll have to specify the types all the time?” And, like with import, in practice this is a non-issue. The trick are comptime closures. Consider a generic : We have to specify type when creating an instance of an . But subsequently, when we are using the array list, we don’t have to specify the type parameter again, because the type of variable already closes over . This is the major truth of object-orienting programming, the truth so profound that no one even notices it: in real code, 90% of functions are happiest as (non-virtual) methods. And, because of that, the annotation burden in real-world Zig programs is low. 
While Zig doesn’t have Hindley-Milner constraint-based type inference, it relies heavily on one specific way to propagate types. Let’s revisit the first example: This doesn’t compile: and are different values, we can’t select between two at runtime because they are different. We need to coerce the constants to a specific runtime type: But this doesn’t kick the can sufficiently far enough and essentially reproduces the with two incompatible branches. We need to sink coercion down the branches: And that’s exactly how Zig’s “Result Location Semantics” works. Type “inference” runs a simple left-to-right tree-walking algorithm, which resembles interpreter’s . In fact, is exactly what happens. Zig is not a compiler, it is an interpreter. When evaluates an expression, it gets: When interpreting code like the interpreter passes the result location ( ) and type down the tree of subexpressions. If branches store result directly into object field (there’s a inside each branch, as opposed to one after the ), and each coerces its comptime constant to the appropriate runtime type of the result. This mechanism enables concise syntax for specifying enums: When evaluates the switch, it first evaluates the scrutinee, and realizes that it has type . When evaluating arm, it sets result type to for the condition, and a literal gets coerced to . The same happens for the second arm, where result type further sinks down the . Result type semantics also explains the leading dot in the record literal syntax: Syntactically, we just want to disambiguate records from blocks. But, semantically, we want to coerce the literal to whatever type we want to get out of this expression. In Zig, is a shorthand for . I must confess that did weird me out a lot at first during writing code (I don’t mind reading the dot). It’s not the easiest thing to type! But that was fixed once I added snippet, expanding to . 
The benefits to lightweight record literal syntax are huge, as they allow for some pretty nice APIs. In particular, you get named and default arguments for free: I don’t really miss the absence of named arguments in Rust, you can always design APIs without them. But they are free in Zig, so I use them liberally. Syntax wise, we get two features (calling functions and initializing objects) for the price of one! Finally, the thing that weirds out some people when they see Zig code, and makes others reconsider their choice GitHub handles, even when they haven’t seen any Zig: syntax for built-in functions. Every language needs to glue “userspace” code with primitive operations supported by the compiler. Usually, the gluing is achieved by making the standard library privileged and allowing it to define intrinsic functions without bodies, or by adding ad-hoc operators directly to the language (like Rust’s ). And Zig does have a fair amount of operators, like or . But the release valve for a lot of functionality are built-in functions in distinct syntactic namespace, so Zig separates out , , , , , , , , , and . There’s no need to overload casting when you can give each variant a name. There’s also for type ascription. The types goes first, because the mechanism here is result type semantics: evaluates the first argument as a type, and then uses that as the type for the second argument. Curiously, I think actually can be implemented in the userspace: In Zig, a type of function parameter may depend on values of preceding (comptime) ones! My favorite builtin is . First, it’s the most obvious way to import code: Its crystal clear where the file comes from. But, second, it is an instance of reverse syntax sugar. You see, import isn’t really a function. You can’t do The argument of has to be a string, syntactically. It really is syntax, except that the function-call form is re-used, because it already has the right shape. So, this is it. 
Just a bunch of silly syntactical decisions, which add up to a language which is positively enjoyable to read. As for big lessons, obviously, the less features your language has, the less syntax you’ll need. And less syntax is generally good, because varied syntactic constructs tend to step on each other toes. Languages are not combinations of orthogonal aspects. Features tug and pull the language in different directions and their combinations might turn to be miraculous features in their own right, or might drag the language down. Even with a small feature-set fixed, there’s still a lot of work to pick a good concrete syntax: unambiguous to parse, useful to grep, easy to read and not to painful to write. A smart thing is of course to steal and borrow solutions from other languages, not because of familiarity, but because the ruthless natural selection tends to weed out poor ideas. But there’s a lot of inertia in languages, so there’s no need to fear innovation. If an odd-looking syntax is actually good, people will take to it. Is there anything about Zig’s syntax I don’t like? I thought no, when starting this post. But in the process of writing it I did discover one form that annoys me. It is the while with the increment loop: This is two-thirds of a C-style loop (without the declarator), and it sucks for the same reason: control flow jumps all over the place and is unrelated to the source code order. We go from condition, to the body, to the increment. But in the source order the increment is between the condition and the body. In Zig, this loop sucks for one additional reason: that separating the increment I think is the single example of control flow in Zig that is expressed by a sigil, rather than a keyword. This form used to be rather important, as Zig lacked a counting loop. It has form now, so I am tempted to call the while-with-increment redundant. 
Annoyingly, is almost equivalent to But not exactly: if contains a , or , the version would run the one extra time, which is useless and might be outright buggy. Oh well.

0 views
matklad 3 months ago

Partially Matching Zig Enums

Usually, you handle it like this: But once in a while, there’s common handling code you want to run for several variants. The most straightforward way is to duplicate: But this gets awkward if common parts are not easily extractable into function. The “proper” way to do this is to refactor the enum: This gets very awkward if there’s one hundred usages of , 95 of them look better with flat structure, one needs common code for ab case, and the four remaining need common code for ac. The universal recipe for solving the AB problem relies on a runtime panic: And… this is fine, really! I wrote code of this shape many times, and it never failed at runtime due to a misapplied refactor later. Still, every time I write that , I die inside a little. Surely there should be some way to explain to the compiler that is really unreachable there? Well, as I realized an hour ago, in Zig, you can! This is the awkward runtime-panicky and theoretically brittle version: And here’s a bullet-proof compiler-checked one: There are two tricks here. forces the compiler to generate the program twice, where is bound to comptime value. The second trick is , which instructs the compiler to fail if it gets to the else branch. But, because is known at comptime, compiler knows that is in fact unreachable, and doesn’t hit the error. Adding a bug fails compilation, as intended:


How I like to install NixOS (declaratively)

For one of my network storage PC builds , I was looking for an alternative to Flatcar Container Linux and tried out NixOS again (after an almost 10 year break). There are many ways to install NixOS, and in this article I will outline how I like to install NixOS on physical hardware or virtual machines: over the network and fully declaratively. The term declarative means that you describe what should be accomplished, not how. For NixOS, that means you declare what software you want your system to include (add the package to the environment.systemPackages config option, or enable a module) instead of, say, running an imperative install command. A nice property of the declarative approach is that your system follows your configuration, so by reverting a configuration change, you can cleanly revert the change to the system as well. I like to manage declarative configuration files under version control, typically with Git. When I originally set up my current network storage build, I chose CoreOS (later Flatcar Container Linux) because it was an auto-updating base system with a declarative cloud-init config. The NixOS manual’s “Installation” section describes a graphical installer (“for desktop users”, based on the Calamares system installer and added in 2022) and a manual installer. With the graphical installer, it’s easy to install NixOS to disk: just confirm the defaults often enough and you’ll end up with a working system. But there are some downsides: the graphical installer is clearly not meant for remote or automated installation. The manual installer on the other hand is too manual for my taste: expand “Example 2” and “Example 3” in the NixOS manual’s Installation summary section to get an impression. To be clear, the steps are very doable, but I don’t want to install a system this way in a hurry. For one, manual procedures are prone to mistakes under stress. And also, copy & pasting commands interactively is literally the opposite of writing declarative configuration files.
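To make the declarative idea concrete, a configuration.nix fragment of this shape (the package choices here are just examples) replaces imperative install commands:

```nix
# configuration.nix (fragment)
{ pkgs, ... }: {
  # Declare the software the system should include…
  environment.systemPackages = with pkgs; [ git htop ];
  # …or enable a whole module:
  services.openssh.enable = true;
}
```

Removing a line from this file and rebuilding removes the corresponding software from the system, which is what makes configuration changes cleanly revertible.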
Ideally, I would want to perform most of the installation from the comfort of my own PC, meaning the installer must be usable over the network. Also, I want the machine to come up with a working initial NixOS configuration immediately after installation (no manual steps!). Luckily, there is a (community-provided) solution: nixos-anywhere . You take care of booting a NixOS installer, then run a single command and nixos-anywhere will SSH into that installer, partition your disk(s) and install NixOS to disk. Notably, nixos-anywhere is configured declaratively, so you can repeat this step any time. (I know that nixos-anywhere can even SSH into arbitrary systems and kexec-reboot them into a NixOS installer, which is certainly a cool party trick, but I like the approach of explicitly booting an installer better, as it seems less risky and more generally applicable/repeatable to me.) I want to use NixOS for one of my machines, but not (currently) on my main desktop PC. Hence, I installed only the nix tool (for building, even without running NixOS) on Arch Linux. Now, running nix-shell -p hello should drop you into a new shell in which the GNU hello package is installed. By the way, the Nix page on the Arch Linux wiki explains how to use nix to install packages, but that’s not what I am interested in: I only want to remotely manage NixOS systems. Previously, I said “you take care of booting a NixOS installer”, and that’s easy enough: write the ISO image to a USB stick and boot your machine from it (or select the ISO and boot your VM). But before we can log in remotely via SSH, we need to manually set a password. I also need to SSH with an overridden TERM environment variable because the termcap file of rxvt-unicode (my preferred terminal) is not included in the default NixOS installer environment. Similarly, my configured locales do not work and my preferred shell (Zsh) is not available. Wouldn’t it be much nicer if the installer was pre-configured with a convenient environment?
With other Linux distributions, like Debian, Fedora or Arch Linux, I wouldn’t attempt to re-build an official installer ISO image. I’m sure their processes and tooling work well, but I am also sure it’s one extra thing I would need to learn, debug and maintain. But building a NixOS installer is very similar to configuring a regular NixOS system: same configuration, same build tool. The procedure is documented in the official NixOS wiki . I copied the customizations I would typically put into my configuration.nix, imported the installer ISO module from nixpkgs and put the result in a single file. To build the ISO image, I set the NIX_PATH environment variable to point to that file and to select the upstream channel for NixOS 25.05. After about 1.5 minutes on my 2025 high-end Linux PC , the installer ISO can be found under result/ (1.46 GB in size in my case). Unfortunately, the nix project has not yet managed to enable the “experimental” new command-line interface (CLI) by default, despite it having been available for 5+ years, so we need to create a config file and enable the modern interface. How can you tell old from new? The old commands are hyphenated (nix-build), the new ones are separated by a blank space (nix build). You’ll notice I also enabled Nix flakes , which I use so that my nix builds are hermetic and pinned to a certain revision of nixpkgs and any other nix modules I want to include in my build. I like to compare flakes to version lock files in other programming environments: the idea is that building the system in 5 months will yield the same result as it does today. To verify that flakes work, run a flake-style command (not a hyphenated legacy one). For reference, here is the configuration I use to create a new VM for NixOS in Proxmox. The most important setting is the firmware type (= UEFI boot, which is not the default), so that I can use the same boot loader configuration on physical machines as in VMs. Before we can boot our (unsigned) installer, we need to enter the UEFI setup and disable Secure Boot. Note that Proxmox enables Secure Boot by default, for example.
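For reference, enabling the modern interface amounts to a tiny config file (the user-level path is shown here; the system-wide /etc/nix/nix.conf works as well):

```ini
# ~/.config/nix/nix.conf
experimental-features = nix-command flakes
```

With this in place, both the space-separated commands (nix build) and flake references work.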
Then, boot the custom installer ISO on the target system, and ensure that SSH login works without prompting for a password. Declare a flake.nix with the following content, declare your disk config (for disko) and your desired NixOS configuration in their own files, and lock the flake. After about one minute, my VM was installed and rebooted! Tip: Last month, I had to temporarily pin nixos-anywhere to the latest released version (1.9.0) because of issue nixos-anywhere#510. Now that the declarative part of the system is in place, we need to take care of the stateful part. In my case, the only stateful part that needs setting up is the Tailscale mesh VPN. To set up Tailscale, I log in via SSH and run tailscale up. Then, I add the new node to my network by following the link. Afterwards, in the Tailscale Machines console , I disable key expiration and add ACL tags. Now, after I change something in my configuration file, I use nixos-rebuild switch remotely to roll out the change to my NixOS system. Note that not all changes are fully applied as part of a switch: while systemd services are generally restarted, newly required kernel modules are not automatically loaded (e.g. after enabling the Coral hardware accelerator in Frigate). So, to be sure everything took effect, reboot your system after deploying changes. One of the advantages of NixOS is that in the boot menu, you can select which generation of the system you want to run. If the latest change broke something, you can quickly reboot into the previous generation to undo that change. Of course, you can also undo the configuration change and deploy a new generation — whichever is more convenient in the situation. With this article, I hope I could convey what I wish someone had told me when I started using Nix and NixOS. Two caveats about the graphical installer worth repeating: you need to manually enable SSH after the installation (locally, not via the network), and while it generates an initial NixOS configuration for you, there is no way to inject your own. Where do you go from here?
Using nixos-anywhere, fetch the hardware-configuration.nix from the installer and install NixOS to disk. Enable flakes and the new CLI. Use nixos-anywhere to install remotely. Build a custom installer if you want, it’s easy! Use nixos-rebuild’s builtin --target-host flag for remote deployment. Read through all the documentation on nixos.org → Learn . Here are a couple of posts from people in and around my bubble that I looked at for inspiration / reference, in no particular order: Michael Lynch wrote about setting up an Oracle Cloud VM with NixOS and about managing his Nix configuration . Nelson Elhage wrote about using Nix to test dozens of Python interpreters as part of his performance investigation into the Python 3.14 tail-call interpreter . Vincent Bernat wrote about using Nix to build an SD card image for an ARM single board computer . Mitchell Hashimoto shared his extensive NixOS configs . Wolfgang has a YouTube video about using NixOS for his Home Server ( → his configs ). Contact your local Nix community! I recently attended the “Zero Hydra Failures” event of the Nix Zürich group and the kind people there were happy to talk about all things Nix :)

Lukáš Lalinský 8 months ago

My AI helpers, CodeRabbit and SourceGraph Cody

I’ve been an early adopter of AI coding tools. I’ve been using GitHub Copilot since the technical preview stages in 2021. It was mind-blowing to me. The interface was pretty minimal compared to what we have now, but even at that stage, it was revolutionizing the way I work. I’ve dreamed for a long time about programming without having to actually write all the code, and it was starting to become a reality. All in all, I was pretty happy with it. Last year, I discovered Cody from SourceGraph . I tried the trial and I was hooked. It had so much more context about the code I’m working on. I could just select a function, tell it to refactor something, and it would do it directly in my editor. Writing documentation, generating tests, writing new code: everything became easier. I used it last year to write a replacement for the acoustid-index server, something I’d been planning for a long time, but I decided to also learn a new language, Zig , on the project. Cody made the process really effortless. It involved countless refactorings, as I was still learning the right patterns in the language, and I was doing most of the work without actually writing the code myself. This year, I’ve started using the chat with thinking models a lot more often, together with Cody’s ability to apply the code blocks from the chat to the editor. Even better, I’m actually using this for free, as part of their support for open source. It’s such a good tool that I’d be happy to pay for it now, and I will definitely start doing that once my current free license expires. And this year I discovered CodeRabbit for automated code reviews. I was super skeptical about this, but they also have a free plan for open source projects, so I decided to give it a try. I’m maintaining AcoustID alone, so having another set of eyes looking at the code, even if mechanical ones, is welcome. And I was blown away. On the first pull request, it actually found a small logical error I had in the code.
And this kept happening again and again. After some time, I switched it to the assertive profile, and now I actually enjoy opening a pull request and going through the suggestions it makes. Yes, sometimes they are obsessive, but that’s OK. I’ve tried alternatives, like Gemini or Copilot, both of which have options to do code reviews, but the quality is on a completely different level. Gemini and Copilot feel like useless toys compared to CodeRabbit. The last four years have completely changed my approach to programming, and for the better. As good as all these new AI tools are, I don’t really expect them to replace technical programming jobs. You really need to evaluate their outputs, and if you are not able to do that critically, you will deal with a lot of bullshit code. But if you can judge the quality of the output, these are great helpers and I’m really looking forward to what the future brings.

zackoverflow 8 months ago

I spent 181 minutes waiting for the Zig compiler this week

TL;DR: The Zig compiler takes about 1 minute and 30 seconds to compile a debug build of Bun. Zig's language server doesn't do basic things like type-checking, so I often have to run the compiler to see if my code works.


Msgpack serialization library for Zig

I’ve been playing with Zig over the last few weeks. The language had been on my radar for a long time, since it was originally developed for writing audio software, but I never paid too much attention to it. It seems to be becoming more popular, so I’ve decided to learn it and picked a small task: rewriting the AcoustID fingerprint index server in it. That is still in progress, but there is one side product that is almost ready, a library for handling msgpack serialization . The library can only be used with static schemas, defined using Zig’s type system. There are many options for generating compact messages, almost competing with protobuf, but without separate proto files and protoc. I’m quite happy with the API, which is mainly possible thanks to Zig’s comptime type reflection. See the project on GitHub for usage examples and more details.
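The original usage snippet didn't make it into this extract, and I don't want to guess the library's actual API, so here is only a hedged sketch (struct and function names are mine) of the comptime type reflection such a schema-based serializer builds on:

```zig
const std = @import("std");

// Hypothetical schema struct; the real library defines its own API,
// this only illustrates the reflection primitive underneath.
const Song = struct {
    id: u32,
    duration: u16,
};

// @typeInfo exposes a struct's fields at compile time, so a
// serializer can walk field names and types with zero runtime cost.
// (Zig 0.13-era syntax; newer versions spell the tag .@"struct".)
fn fieldCount(comptime T: type) usize {
    return @typeInfo(T).Struct.fields.len;
}

fn firstFieldName(comptime T: type) []const u8 {
    return @typeInfo(T).Struct.fields[0].name;
}

test "reflecting over a schema struct" {
    try std.testing.expectEqual(@as(usize, 2), fieldCount(Song));
    try std.testing.expectEqualStrings("id", firstFieldName(Song));
}
```

Because the schema is an ordinary Zig type, the compiler can generate a compact, field-order-based encoding without any separate schema files or code generation step.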

seated.ro 1 years ago

a weekend love story - raylib/zig

Before this weekend, I was a plebeian who used JavaScript for 80% of my tasks. Now I am an esoteric plebeian who has used zig once. Anyway, I decided to give zig a shot and try to build a game with it. There were really only two libraries I wanted to learn (sokol and raylib), and I went with raylib. I went through the official raylib docs to get a feel for the API, then I searched for zig bindings and found two: Not-Nik/raylib-zig and ryupold/raylib-zig. The second one hasn’t been updated in 6 months, and it isn’t using the zig package manager. Ergo, I was left with no choice but to use the first one. It was fairly easy to set things up, I just needed to install zig and raylib-zig. NOTE : We will be using zig v0.12.0 instead of v0.13.0, we will see why later. I used zvm to make life easier managing various zig versions, but you can install zig however you want. Follow this part of the docs to install raylib-zig. Do this inside your game directory. This adds raylib-zig as a dependency to our project. You can check the build.zig.zon file to verify. Now the most important part if you want to compile your game for the web with emscripten. IF YOU’RE NOT BUILDING FOR THE WEB, PLEASE SKIP . Install as mentioned here: emscripten installation guide , and make sure to run the activation command. NOTE : Just make sure not to clone the emsdk into $HOME/.emscripten, because emscripten uses that as the default cache directory. It will fuck your build up. (I didn’t do this at all) If you followed the raylib-zig installation guide, your build.zig should look something like this. If it doesn’t, it should now! Now we need to add a block specific to the emscripten target. I missed this code block and was stuck trying to figure out how to get the emscripten run step to work. :( I wrote an asteroids clone following along with this video: zig space rocks - jdh . Go write whatever game you want!
Here is a screenshot of the game in action. Here are some other cool games made in zig: tetris , terrain-zigger . Below is some cool zig code I wrote that I quite like. This syntax is so fucking cool, I will never forget to free again. This code is for deciding when to play the low/high bloop sound. It increases in intensity the longer a player is alive, which makes it feel like the game is getting harder. Pretty cool! I also avoided for loops with bit shifting. Absolutely unnecessary, but I did it anyway. One thing I think is important: for some reason I wasn’t able to just use a standard allocator for the heap allocations. So I reused some code from the ryupold/examples-raylib.zig repo and made a custom emscripten allocator. Without this, I was getting memory issues. I also increased the memory limit to 16MB. Idk which of those fixed it, but it works now! After that, I was sitting at a solid level of memory usage, so I think I was good. You can build the game for desktop with zig build. For the web, you need to run the build against the emscripten target. Now, as to why we used zig v0.12.0: for some reason the web build command fails for the emscripten target on v0.13.0, as mentioned in this open issue . If someone can figure out why, please mention it in the issue! You can drop everything in the output folder and host it wherever. Was a fun start to my zig arc! I hope you guys don’t waste time debugging shit like me lol. May we zig harder every day. Source Code
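For illustration, assuming the syntax being praised for making cleanup unforgettable is `defer` (the original snippet didn't survive extraction), a sketch of why it helps:

```zig
const std = @import("std");

// `defer` schedules cleanup at scope exit, right next to the
// acquisition, so no return path can forget it.
fn shout(allocator: std.mem.Allocator, word: []const u8) ![]u8 {
    const scratch = try allocator.alloc(u8, word.len);
    defer allocator.free(scratch); // runs on every exit path
    for (word, 0..) |c, i| scratch[i] = std.ascii.toUpper(c);
    return allocator.dupe(u8, scratch);
}

test "defer frees the scratch buffer on every path" {
    const out = try shout(std.testing.allocator, "zig");
    defer std.testing.allocator.free(out);
    try std.testing.expectEqualStrings("ZIG", out);
}
```

In a game loop this same pattern pairs naturally with resource setup/teardown calls, keeping the unload right beside the load.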

maxdeviant.com 1 years ago

2023 in Review

I haven't written a year-in-review since 2019 . I'm sure it's no coincidence that this was the last one before COVID-19 reared its ugly head and threw things into disarray. I find that it can be hard to write these when everything since March 2020 bleeds together for me, but perhaps that is all the more reason that I should be writing them. So in hopes of a return to form, here is my 2023 year-in-review. After just shy of three years at WorkOS, in September I left WorkOS and started working at Zed Industries. I've been assisting with a massive rewrite since I started, which we'll be launching early next year. I'm very much looking forward to getting that out into the world and getting to work on our 2024 roadmap! This year marked the 13th anniversary of my family moving back to the US from China. This was a massive transition for me, and I'm still uncovering the ways in which it's impacted me since. In the past two years I've started to explore and unpack these feelings. Back in June 2022 I first learned the term "unresolved grief" and was able to put a name to the weight I've been shouldering all this time. I've also learned that it's a fairly common phenomenon among third culture kids (TCKs), like myself. After dragging my feet for a while, I finally started seeing a therapist this year. She is a TCK herself and has experience counseling TCKs through their unresolved grief. So far attending therapy has been immensely helpful as I start to unpack all of the baggage from my life overseas. Two things that I've learned in the process thus far: In June I started work on something that I've wanted to build for a long time: my own programming language. This undertaking has always been on the shelf as a "someday" project, with the skills and expertise needed to pull this off always feeling out of reach. One day I was struck by the realization that not building a programming language due to feeling ill-versed on how to build one was a self-fulfilling prophecy of failure. 
That was the kick I needed to take the plunge and do something to move me closer to the goal. And thus began Crane . There's still a long ways to go before it feels like an honest-to-God programming language, but I'm proud of taking the first step in the right direction. In a somewhat similar vein, in August I resumed work on a long-running project of building an MMO-lite. MMORPGs have always been my favorite games to play, and I've always wanted to make one that was my own. Unfortunately, MMORPGs are also the most difficult games to make, and making one solo is almost certainly impossible. However, I still believe it possible—with sufficient scope-adjusting—to make a game that bears some resemblance to an MMORPG that I would enjoy playing. I've worked on this project off-and-on for a while now, but this year I made some significant progress towards something that is actually starting to feel like a game. I'm still not ready to share too much about it publicly, but what I can say is that I have the bones in place for an RPG where multiple players can interact within a shared, persistent world controlled by a separate server. At the beginning of the year I put together a short list of goals for this year. Write Rust, and a lot of it I've been shouting Rust's praises from the rooftops for a while now, all the while not having a ton of experience using it in anger. I wanted to change that this year, and set myself this goal of writing lots of Rust code. At the time, I was still using TypeScript at my day job, which meant that any Rust I was writing was still in my spare time. The bulk of my Rust coding was in the two projects discussed above: the Crane compiler and my work-in-progress multiplayer game. Since joining Zed Industries in September, I'm now writing Rust full-time, which has been a dream come true. With all the Rust I've written this year, I can safely say that this goal was an absolute success. 
Distance myself from platforms I don't control I wrote this goal in the wake of Twitter being taken over and summarily sacked . These events left me feeling wary of all platforms that I don't have direct control over, and prompted a shift towards owning as much of my data as possible. For instance, I now have a routine job that syncs all of my Last.fm listening data over to flat files in a GitHub repo 1 . I have all my tweets backed up there as well, up until Twitter did away with API access. I'm still spending far too much time being consumptive on platforms like Twitter and Instagram. Connect with others more intentionally For a while now I've been feeling increasingly isolated in my personal life. I've fallen out of touch with most of my old friends due to time, distance, or both. While I didn't make as much progress as I would have liked in this area, I did try to be more intentional about connecting with my parents and siblings. Still a long ways to go. Do better at planning and cooking meals It doesn't feel like there's been any noticeable improvement in this area. Getting dinner on the table each night still feels like a chore, and I usually find myself scrambling at the last minute to figure out what we're going to eat. This often results in eating less-than-healthy meals, like frozen pizza. Find a form of exercise that I don't hate I really haven't been good about exercising this year. I've tried riding the stationary bike a bit, but it never lasts more than a week or two before I fall out of the habit. On a positive note, I upgraded my home office to an adjustable standing desk this year. I try to stand for about half of my work day, which I've found has left me feeling better overall. I'm quite happy with my code stats for this year. My GitHub chart has a lot of green: I also hit the ground running in the Zed repo: Overall my music listening was down a bit from previous years, but I still found time to listen to a bunch of it. 
These are the albums I listened to the most in 2023: For me, 2023 has embodied the polar extremes of good and bad. This year has been simultaneously tumultuous and rewarding, joyous and mournful. I feel like I've learned more about myself in this year alone than in all of the years preceding it. But these discoveries have often come at the cost of confronting what has been buried deep in my psyche. And yet, I remain optimistic that the best days are ahead, and that this new year will contain much to be thankful for. Still figuring out contingency plans for GitHub itself... There is way more to unpack than I initially thought. I feel like I'm just at the tip of the iceberg. I need to give myself the space to process my experiences, and understand that it's not a straight path to being fully healed.
Equinox - City State
Nosebleeds - MisterWives
Zig - Poppy
The Fortress in the Forest - Sanguine Forest
EVERGREEN - PVRIS
Всё - Рожь
DAMSEL IN DISTRESS - GIRLI
Blackbraid II - Blackbraid
Gag Order - Kesha
In The Faith That Looks Through Death - Vital Spirit
Digital Pacific - Luna Shadows
Volume II - Manuel Gardner Fernandes
VOL. 4 :: SLAVES OF FEAR - HEALTH
Kx5 - kx5
After Hours - The Weeknd
Euphoric - Georgia
HELLO, THANK YOU - Blvck Ceiling
Diorama - MØL
Dystopia - Old Seas Young Mountains
Fatalism - Polaris

Daniel Mangum 1 years ago

Understanding Every Byte in a WASM Module

In my last post, we explored how to build WASM modules, both with and without WASI support, using Clang. In a comment on Reddit, it was mentioned that much of the setup I walked through in that post could be avoided by just leveraging Zig’s WASI support. This is a great point, and I would recommend doing the same. The following command is inarguably simpler than what I described. $ zig cc --target=wasm32-wasi However, there are two reasons why knowing how to use Clang for compilation is useful.
