Posts in Zig (20 found)
Anton Zhiyanov 2 weeks ago

'Better C' playgrounds

I have a soft spot for the "better C" family of languages: C3, Hare, Odin, V, and Zig. I'm not saying these languages are actually better than C, they're just different. But I needed to come up with an umbrella term for them, and "better C" was the only thing that came to mind. I believe playgrounds and interactive documentation make programming languages easier for more people to learn. That's why I created online sandboxes for these langs. You can try them out below, embed them on your own website, or self-host and customize them. If you're already familiar with one of these languages, maybe you could even create an interactive guide for it? I'm happy to help if you want to give it a try.

• C3: An ergonomic, safe, and familiar evolution of C (homepage • tutorial • community)
• Hare: A systems programming language designed to be simple, stable, and robust (homepage • tutorial • community)
• Odin: A high-performance, data-oriented systems programming language (homepage • tutorial • community)
• V: A language with C-level performance and rapid compilation speeds (homepage • tutorial • community)
• Zig: A language designed for performance and explicit control with powerful metaprogramming (homepage • tutorial • community)

If you want to do more than just "hello world," there are also full-size online editors. They're pretty basic, but can still be useful.

matklad 3 weeks ago

Static Allocation For Compilers

TigerBeetle famously uses “static allocation”. Infamously, the use of the term is idiosyncratic: what is meant is not static arrays, as found in embedded development, but rather a weaker “no allocation after startup” form. The amount of memory a TigerBeetle process uses is not hard-coded into the ELF binary; it depends on the runtime command line arguments. However, all allocation happens at startup, and there’s no deallocation. The long-lived event loop goes round and round happily without allocating.

I’ve wondered for years if a similar technique is applicable to compilers. It seemed impossible, but today I’ve managed to extract something actionable from this idea.

Static allocation depends on the physics of the underlying problem. And distributed databases have surprisingly simple physics, at least in the case of TigerBeetle. The only inputs and outputs of the system are messages. Each message is finite in size (1MiB). The actual data of the system is stored on disk and can be arbitrarily large, but the diff applied by a single message is finite. And, if your input is finite, and your output is finite, it’s actually quite hard to need to allocate extra memory!

This is worth emphasizing: it might seem like doing static allocation is tough and requires constant vigilance and manual accounting for resources. In practice, I learned that it is surprisingly compositional. As long as the inputs and outputs of a system are finite, non-allocating processing is easy. And you can put two such systems together without much trouble. routing.zig is a good example of such an isolated subsystem.

The only issue here is that there isn’t a physical limit on how many messages can arrive at the same time. Obviously, you can’t process arbitrarily many messages simultaneously. But in the context of a distributed system over an unreliable network, a safe move is to drop a message on the floor if the required processing resources are not available.
Counter-intuitively, not allocating is simpler than allocating, provided that you can pull it off! Alas, it seems impossible to pull it off for compilers. You could say something like “hey, the largest program will have at most one million functions”, but that will lead to both wasted memory and poor user experience. You could also use a single yolo arena of a fixed size, like I did in Hard Mode Rust, but that isn’t at all similar to “static allocation”. With arenas, the size is fixed explicitly, but you can OOM. With static allocation it is the opposite: no OOM, but you don’t know how much memory you’ll need until startup finishes!

The “problem size” for a compiler isn’t fixed: both the input (source code) and the output (executable) can be arbitrarily large. But that is also the case for TigerBeetle. The size of the database is not fixed, it’s just that TigerBeetle gets to cheat and store it on disk, rather than in RAM. And TigerBeetle doesn’t do “static allocation” on disk: it can fail at runtime, and it includes a dynamic block allocator to avoid that for as long as possible by re-using no-longer-relevant sectors.

So what we could say is that a compiler consumes arbitrarily large input, and produces arbitrarily large output, but those “do not count” for the purpose of static memory allocation. At the start, we set aside an “output arena” for storing finished, immutable results of the compiler’s work. We then say that this output is accumulated after processing a sequence of chunks, where chunk size is strictly finite. While limiting the total size of the code-base is unreasonable, limiting a single file to, say, 4 MiB (runtime-overridable) is fine. Compiling then essentially becomes a “stream processing” problem, where both inputs and outputs are arbitrarily large, but the filter program itself must execute in O(1) memory.
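A minimal sketch of this shape (the 4 MiB chunk limit and the output arena follow the post; all names and the file list are illustrative assumptions, and `std.heap.FixedBufferAllocator` / `ArenaAllocator` are the 0.14-era Zig std APIs):

```zig
const std = @import("std");

pub fn main() !void {
    var gpa_state = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa_state.deinit();
    const gpa = gpa_state.allocator();

    // O(N): the output arena holds finished, immutable compiler results,
    // never freed until exit.
    var output = std.heap.ArenaAllocator.init(gpa);
    defer output.deinit();

    // O(1): per-chunk scratch space, strictly bounded and reset per chunk.
    const max_chunk_size = 4 * 1024 * 1024;
    const scratch_buf = try gpa.alloc(u8, max_chunk_size);
    defer gpa.free(scratch_buf);
    var scratch = std.heap.FixedBufferAllocator.init(scratch_buf);

    const files = [_][]const u8{ "a.zig", "b.zig" }; // stand-in for real input
    for (files) |file| {
        scratch.reset(); // each chunk starts from a clean O(1) slate
        _ = file; // compileChunk(file, scratch.allocator(), output.allocator());
    }
}
```

Anything allocated from `scratch` past the fixed bound fails loudly, which is exactly the “chunk must fit” property the post wants.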
With this setup, it is natural to use indexes rather than pointers for “output data”, which then makes it easy to persist it to disk between changes. And it’s also natural to think about “chunks of changes” not only spatially (the compiler sees a new file), but also temporally (the compiler sees a new version of an old file). Are there any practical benefits here? I don’t know! But it seems worth playing around with! I feel that a strict separation between O(N) compiler output and O(1) intermediate processing artifacts can clarify a compiler’s architecture, and I won’t be too surprised if O(1) processing in compilers leads to simpler code the same way it does for databases.

Michael Lynch 1 month ago

Refactoring English: Month 12

Hi, I’m Michael. I’m a software developer and founder of small, indie tech businesses. I’m currently working on a book called Refactoring English: Effective Writing for Software Developers . Every month, I publish a retrospective like this one to share how things are going with my book and my professional life overall. At the start of each month, I declare what I’d like to accomplish. Here’s how I did against those goals: I’ve gotten stuck on my design docs chapter. There’s a lot I want to cover, and I’m having trouble articulating some of it and deciding how much of it belongs in the book. Part of the problem is that the chapter is so long that it feels overwhelming to tackle all at once. My new plan is to break the chapter into smaller sections and focus on those one at a time. I think this is my last “hard” chapter, as I have a better sense of what I want to say in the remaining chapters. I keep procrastinating on this even though I enjoy doing it and get useful responses. I keep automating more of the logistical work in the hopes that reducing initial friction will motivate me to do it more. 3,508 people read the post, so it was somewhat successful at attracting new readers. Bob Nystrom, the author I was writing about, liked my article , which was gratifying. I figured even if my article flopped, at least it would let Bob Nystrom know how much I appreciated his work. November was a good month in terms of visits and sales. Visits were down slightly from October, but it was still one of the strongest months of the year. I did a Black Friday discount for 30% off. I only advertised it to readers on my mailing list, as I always feel strange spamming a sale everywhere. But the announcement was successful, as 18 customers purchased for a total of $359.41. Peter Spiess-Knafl , co-founder of zeitkapsl , cited Refactoring English in a blog post , which reached #1 on Lobsters . 
I was glad to see Peter’s post, as my plan for the book has always been for it to help readers write successful blog posts and be happy enough about the book that they recommend it. I read Hacker News so often that I feel like I’d be good at predicting which stories will reach the front page, but I’ve never tested this belief rigorously. So, I made a game to test my accuracy. The game shows me the newest submissions to Hacker News, and the player predicts whether or not they’ll reach the front page: The biggest problem with the game is that a story can take up to 24 hours to reach the front page. Waiting 24 hours for results sucks the fun out of the game. I tried changing the rules so that you’re predicting whether an article will reach the front page in its first 30 minutes, but 30 minutes still feels painfully slow. My new idea is to make a tentative call 10 minutes after a story has been submitted. Given the story’s age, upvotes, and comment count, I can calculate some rough probability of whether it has a chance of hitting the front page. So, if you predicted a story would reach the front page, but 10 minutes later, it still has no upvotes or comments, the game will tentatively tell you that you got it wrong, but you can still get the points back if the story makes a miraculous comeback in the next 24 hours. I thought about making a version of the game where you guess the results of past stories. That way, I could give instant feedback because the answer is already available, but that feels less fun, as other people have made similar games. Plus, for the HN diehards I’m hoping this game appeals to, past data ruins it because you kind of remember what was on the front page and what wasn’t. My wife and I had our first child last year , so we wanted a way to share baby photos with our family privately. Some of my friends had used apps like this, but they were all ad-supported. 
I hate the idea of companies slapping ads on photos of my child, so I looked for other options. When I came across TinyBeans, I thought I’d found a winner. They had a paid version that disabled ads, and privacy was the main feature they advertised: perfect! Then, I started using TinyBeans, and there were ads everywhere. “Buy our photo books!” “Give us more personal information!” I opened the app just now and had to dismiss three separate ads to see photos of my own child. TinyBeans shows me three huge ads when I open the app, even though I’m a paying customer and have dismissed these exact ads dozens of times before. It also turns out that my family members receive even more ads than I see, including for third-party services. Here’s a recent one that encourages my family to invest in some scammy AI company: When TinyBeans sends emails to my family, they stick spammy ads like these in between photos of my son. The “no ads” promise of the paid tier is limited to me and my wife; TinyBeans bombards everyone else in my family with ads and upsells. I wanted to ditch TinyBeans early on, but I was too busy with new parent stuff to find a new app and migrate my whole family to it. So, each month, I begrudgingly give TinyBeans my $9. Then, Black Friday happened. TinyBeans sent me an email patting themselves on the back for not cluttering my inbox with Black Friday deals because all the deals would be in the app. TinyBeans sends me a pointless email to boast about not cluttering my inbox with pointless emails. Great, an email congratulating yourself about how little you’ll email me. But that wasn’t even true! TinyBeans proceeded to send me four more emails telling me to check my app for Black Friday deals: After promising not to bombard me with Black Friday promotions, TinyBeans emailed me five Black Friday promotions. That pushed me over the edge, and now I’m on a spite mission to create my own TinyBeans replacement and stop giving TinyBeans my money. 
“And what are your reasons for wanting to create an app to share baby photos?” The only functionality I care about in TinyBeans is: How hard could that be? 20 hours of dev work? The TinyBeans web and Android apps suck anyway, so I’ll be glad to move away from them. And because the experience is mostly email-based, I can replace TinyBeans with my own app without my family having to do any work as part of the migration. I’m not starting a company to compete with TinyBeans. I just want to make a web app that replaces TinyBeans’ functionality. One of my shameful secrets as a developer is that I’m bad at managing windows on my screen. I compensate by overusing my mouse, even though that’s slow and inefficient. Last year, I switched from Windows to Linux and got a 49" ultrawide monitor . While Windows was designed for mouse-happy users like me, Linux desktops are much more keyboard-focused, so my lack of keyboard discipline began catching up with me. I’d keep opening windows and never close them, so I’d end up with 10+ VS Code windows, 10+ Firefox windows, and 5 different instances of the calculator app for one-off calculations. They were all in one big pile in the middle of my desktop. At that point, it was obvious I was wasting tons of screen real estate and burning time locating my windows. I tried a few different window managers, but I kept running into issues. Like I couldn’t get lockscreens to work, or they’d fail to use my monitor’s full 5120x1440 resolution. The fastest person I’ve ever seen navigate their computer is my friend okay zed . I asked him for advice, and he explained his approach to window management . His strategy is to use many virtual desktops where windows are almost always full screen within the desktop. He uses xmonad, but he suggested I try Awesome Window Manager. I liked okay’s philosophy of single-purpose virtual desktops, so I created an Awesome window manager configuration around it. 
So, I have a dedicated desktop for my blog, a dedicated desktop for my book, one for email, etc. I try to limit myself to 1-2 windows per desktop, but sometimes I’ll pull up a third or fourth while looking something up. Here’s what my blog desktop looks like, which is pretty typical: one VS Code window for editing, one Firefox window for viewing the result, and sometimes a second Firefox window for looking stuff up. I didn’t like any of the default desktop modes, so I had to roll my own. It gives each window 25% of my screen’s width, and if I open more than four, it squishes everything to fit. I can also manually expand or contract windows with Shift+Win+H and Shift+Win+L. Except sometimes I accidentally lock myself out because Win+L is my hotkey for locking the screen. Based on a few weeks with Awesome, here’s how I’m feeling: I was talking to LGUG2Z on Mastodon about how annoying it is to embed tweets on my blog. If the user deletes their tweet, I end up with dead content in my post. Even when it works, my readers have to load trackers from Twitter. I’ve been working around it by just screenshotting tweets, but that’s an ugly solution. I want to embed tweets in Hugo (the static site generator I use for this blog) with a shortcode, and then Hugo could fetch the tweet data and store it under source control so that I don’t have an ongoing dependency on Twitter. LGUG2Z explored this idea and implemented support for it on his Zola blog. He runs a script to pre-download data once from external sources (like tweets), and then he can embed the content in his blog without re-retrieving it at blog build time or reader visit time. I tried to adapt LGUG2Z’s solution for Hugo, but it got too complicated. I wrote a standalone script that downloads data from Twitter and then renders it in a tweet-like UI. Regular text tweets worked okay, but once I got to tweets with embedded media or retweets, it felt like I was building too much on a shaky foundation.
I used to store all of my photos on Google Photos. Despite my privacy concerns, Google Photos was just so much better than anything else that I held my nose and just gave them all my photos. I’ve since become more privacy sensitive and distrustful of Google, so I stopped uploading new photos to Google Photos, but I haven’t found a replacement. I’ve heard good things about Immich and Ente, so I was glad to see this detailed writeup from Michael Stapelberg about his experience setting up an Immich server using NixOS . Firefox recently improved their Enhanced Tracking Protection , a feature I didn’t realize existed. I turned it on, and it blocks trackers that uBlock was allowing and hasn’t had any false positives. I just discovered “Rich Friend, Poor Friend” from 2022 and the follow up from a few weeks ago. I definitely relate to hiring professionals instead of asking my friends for help (e.g., hiring movers instead of asking friends). I’m maybe in the worst part of the curve where I’m wealthy enough to not want to ask friends to help me move but not so wealthy that I have a separate guest house to make it easy to host them. The Deel corporate espionage story is getting surprisingly little attention in my bubble. In March 2025, Rippling revealed that they discovered one of their employees was actually a corporate spy working for their competitor, Deel. When they caught the spy, he ran into the bathroom and tried to flush his phone down the toilet. Rippling posted an update in November that they found banking records showing that Deel had routed payments to the spy through the wife of Deel’s COO. The wife was, coincidentally, a compliance lead at Robinhood, another company known for its scummy ethics . As an unhappy former Deel customer, I’m happy to see them get their comeuppance. I’m working on a game to predict which posts will reach the front page of Hacker News. I’m creating a family photo sharing app out of spite. I switched to a keyboard-first window manager. 
• Result: Published one new chapter.
• Result: I only reached out to two readers (one responded).
• Result: Published “What Makes the Intro to Crafting Interpreters so Good?”

• My family can browse the baby photos and videos I’ve uploaded.
• My family members can subscribe to receive new photos and videos via email.
• My family members can comment or give emoji reactions to photos.

What I like
• Encourages me to keep single-purpose desktops for better focus.
• Encourages me to navigate via keyboard hotkeys rather than mouse clicks.
• Doesn’t crash on suspend 2% of the time like Gnome did.

What I dislike
• Everything is implemented in and configured through Lua, a language I don’t know. I’m using LLMs to write all my configs.
• The configuration is fairly low-level, so you have to write your own logic for things like filling the viewport without overflowing it. I don’t like any of the default desktop modes, so I had to roll my own.
• The documentation is all text, which feels bizarre for software designed specifically around graphics.
• If you accidentally define conflicting hotkeys, Awesome doesn’t warn you.
• If I click a link outside of Firefox, sometimes it loads the link in a browser that isn’t on my current desktop. I’m guessing it loads it on whatever Firefox window I most recently touched.

What I still need to figure out
• How to implement “scratchpad” functionality. Like if I want to pull up my password manager as a floating window or summon the calculator for a quick calculation, then dismiss it.
• How to put more widgets into the status bar, like network connectivity and resource usage.

• Published “What Makes the Intro to Crafting Interpreters so Good?”
• Published “My First Impressions of MeshCore Off-Grid Messaging”.
• Published “Add a VLAN to OPNsense in Just 26 Clicks Across 6 Screens”.
• Created a tiny Zig utility called count-clicks to count clicks and keystrokes on an X11 system.
• Got Awesome Window Manager working.

• Quick feedback is important in creating a fun game.
• TinyBeans actually has a lot of ads, even on the paid version.
• The Awesome window manager is a better fit for my needs than Gnome.

• Publish a game that attracts people to the Refactoring English website.
• Publish two chapters of Refactoring English.
• Write a design doc for a just-for-fun family photo sharing app.

If you’re interested in beta testing the “Will it Hit the Front Page?” game, reach out.

Lukáš Lalinský 2 months ago

How I turned Zig into my favorite language to write network programs in

I’ve been watching the Zig language for a while now, given that it was created for writing audio software (low-level, no allocations, real time). I never paid too much attention though; it seemed a little weird to me and I didn’t see the real need. Then I saw a post from Andrew Kelley (creator of the language) on Hacker News, about how he reimplemented my Chromaprint algorithm in Zig, and that got me really interested. I’ve been planning to rewrite AcoustID’s inverted index for a long time. I had a couple of prototypes, but none of the approaches felt right. I was going through some rough times and wanted to learn something new, so I decided to use the project as an opportunity to learn Zig. And it was great, writing Zig is a joy. The new version was faster and more scalable than the previous C++ one. I was happy, until I wanted to add a server interface. In the previous C++ version, I used Qt, which might seem very strange for server software, but I wanted a nice way of doing asynchronous I/O and Qt allowed me to do that. It was callback-based, but Qt has a lot of support for making callbacks usable. In the newer prototypes, I used Go, specifically for the ease of networking and concurrency. With Zig, I was stuck. There are some Zig HTTP servers, so I could use those. But I wanted to implement my legacy TCP server as well, and that’s a lot harder, unless I want to spawn a lot of threads. Then I made a crazy decision, to use Zig also for implementing a clustered layer on top of my server, using NATS as a messaging system. So I wrote a Zig NATS client, and that gave me a lot of experience with Zig’s networking capabilities. Fast forward to today, I’m happy to introduce Zio, an asynchronous I/O and concurrency library for Zig. If you look at the examples, you will not really see where the asynchronous I/O is, but it’s there, in the background, and that’s the point. Writing asynchronous code with callbacks is a pain.
Not only that, it requires a lot of allocations, because you need state to survive across callbacks. Zio is an implementation of Go-style concurrency, but limited to what’s possible in Zig. Zio tasks are stackful coroutines with fixed-size stacks. When you run an I/O operation, Zio will initiate it in the background and then suspend the current task until the operation is done. When it’s done, the task will be resumed, and the result will be returned. That gives you the illusion of synchronous code, allowing for much simpler state management. Zio supports fully asynchronous network and file I/O, has synchronization primitives (mutexes, condition variables, etc.) that work with the cooperative runtime, has Go-style channels, OS signal watches and more. Tasks can run in single-threaded mode, or multi-threaded, in which case they can migrate from thread to thread for lower latency and better load balancing. And it’s FAST. I don’t want to be posting benchmarks here, maybe later when I have more complex ones, but the single-threaded mode is beating any framework I’ve tried so far. It’s much faster than both Go and Rust’s Tokio. Context switching is virtually free, comparable to a function call. The multi-threaded mode, while still not as robust as Go/Tokio, has comparable performance. It’s still a bit faster than either of them, but that performance might go down as I add more fairness features. Because it implements the standard interfaces for reader/writer, you can actually use external libraries that are unaware they are running within Zio. Here is an example of an HTTP server: When I started working with Zig, I really thought it was going to be a niche language to write the fast code in, and then I’d need a layer on top of that in a different language. With Zio, that changed. The next step for me is to update my NATS client to use Zio internally. And after that, I’m going to work on an HTTP client/server library based on Zio.
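To illustrate the “synchronous shape” the post describes, here is a plain blocking TCP server written against Zig’s 0.14-era std.net (this is deliberately not Zio’s API; Zio’s point is that code keeping exactly this shape runs on asynchronous coroutines, with `accept` and `read` suspending the task instead of blocking a thread):

```zig
const std = @import("std");

pub fn main() !void {
    const addr = try std.net.Address.parseIp("127.0.0.1", 8080);
    var server = try addr.listen(.{ .reuse_address = true });
    defer server.deinit();

    while (true) {
        // Blocking here with std; with a Zio-style runtime, a call of this
        // shape would suspend the current task until a client connects.
        const conn = try server.accept();
        defer conn.stream.close();

        var buf: [1024]u8 = undefined;
        const n = try conn.stream.read(&buf);
        // Echo the bytes back; a real server would parse HTTP here.
        try conn.stream.writeAll(buf[0..n]);
    }
}
```

Note there are no callbacks and no heap-allocated per-request state: everything lives on the (coroutine) stack, which is the allocation win the post is after.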

Chris Coyier 3 months ago

Strongbacks

Back when I went to the Alaska Folk Festival, a real highlight was catching The Strongbacks doing their version of sea shanties live on the main stage. I remember a real tear-jerker protest shanty that I’d love to hear again. As fate would have it, I also went to Zig Zag campout this year and met a fella named Evan who was an excellent clawhammer player from Astoria, Oregon. I didn’t realize until the last night at the community showcase concert that Evan was *in* The Strongbacks. At the end of his performance at that show, he plugged that they have a new album coming out and… now it’s out! It’s on all the stuff (ughgk) but perhaps easiest right here is a YouTube “topic” for the whole album. I really like this one: I haven’t listened to the whole thing yet. Hopefully it’s got that protest one in it, but if not, it’ll live in my brain.

matklad 4 months ago

Look Out For Bugs

One of my biggest mid-career shifts in how I write code was internalizing the idea from this post: Don’t Write Bugs. Historically, I approached coding with an iteration-focused mindset: you write a draft version of a program, you set up some kind of a test to verify that it does what you want it to do, and then you just quickly iterate on your draft until the result passes all the checks. This was a great approach when I was only learning to code, as it allowed me to iterate past the things which were not relevant for me at that point, and focus on what matters. Who cares what exactly goes into the “public static void main String args” incantation, it’s just some obscure magic spell anyway, and completely irrelevant to the maze-traversing thingy I am working on! Carrying over this approach past the learning phase was a mistake. As Lawrence points out, while you can spend time chasing bugs in freshly written code, it is possible to dramatically cut the amount of bugs you introduce in the first place, if you focus on optimizing that (and not just the iteration time). It felt (and still feels) like a superpower! But there’s already a perfectly fine article about not making bugs, so I am not going to duplicate it. Instead, I want to share a related, but different superpower: You can find bugs by just reading code. I remember feeling this superpower for the first time. I was investigating various rope implementations, and, as a part of that, I looked at the rope implementation powering IntelliJ, very old and battle-tested code. And, by just reading the code, I found a bug, since fixed. It wasn’t hard, the original code is just 500 lines of verbose Java (yup, that’s all you need for a production rope). And I wasn’t even trying to find a bug, it just sort-of jumped out at me while I was trying to understand how the code works. That is, you can find some existing piece of software, carefully skim through the implementation, and discover real problems that can be fixed.
You can do this to your software as well! By just re-reading a module you wrote last year, you might find subtle problems. I regularly discover TigerBeetle issues by just covering this or that topic on IronBeetle: bug discovered live, fixed, and PR merged. Here are some tips for getting better at this:

The key is careful, slow reading. What you are actually doing is building the mental model of a program inside your head. Reading the source code is just an instrument for achieving that goal. I can’t emphasize this enough: programming is all about building a precise understanding inside your mind, and then looking for the diff between your brain and what’s in git.

Don’t dodge an opportunity to read more of the code. If you are reviewing a PR, don’t review just the diff, review the entire subsystem. When writing code, don’t hesitate to stop and to probe and feel the context around. Go for git log or git blame to understand the historical “why” of the code.

When reading, mostly ignore the textual order, don’t just read each source file top-down. Instead, use these two other frames: Start at main or its subsystem equivalent, and use “goto definition” to follow an imaginary program counter. Identify the key data structures and fields, and search for all places where they are created and modified. You want to see a slice across space and time, state and control flow (c.f. Concurrent Expression Problem).

Just earlier today I used the second trick to debug an issue for which I haven’t got a repro. I identified the key assignment that was recently introduced, then searched for every place it appears, and that immediately revealed a gap in my mental model. Note how this was helped by the fact that the thing in question was always called by the same name in the source code! If your language allows it, avoid aliases, use proper names.

Identify and collect specific error-prone patterns or general smells in the code. In Zig, if there’s an allocator and an arena in the same scope, you need to be very careful.
If there’s an isolated tricky function, it’s probably fine. If there’s a tricky interaction between functions, it is a smell, and some bugs are lurking there. Bottom line: reading the code is surprisingly efficient at proactively revealing problems. Create space for calm reading. When reading, find ways to build mental models quickly, this is not entirely trivial.

matklad 5 months ago

Reserve First

A short post about a coding pattern that is relevant for people who use the heap liberally and manage memory with their own hands. Let’s start with two bugs. The first one is from Andrew Kelley’s HYTRADBOI 2025 talk, “Programming Without Pointers”. The second one is from the Ghostty terminal emulator. Can you spot the two bugs? In lieu of a spoiler, allow me to waste your bandwidth with a Dante Gabriel Rossetti painting:

In both functions, a bug happens when the second expression throws. In the first case, we insert an item into a hash table, but leave it uninitialized. Accessing the item later will crash in the best case. The Ghostty example is even more interesting. It actually tries to avoid this exact problem, by attempting to carefully revert changes in the error-handling block. But it fails to do so properly! While the data is restored on error, the cleanup code still frees the old allocation, so we end up with uninitialized memory all the same.

Both are “exception safety” problems: if we attempt an operation that mutates an object, and an error happens midway, there are three possible outcomes: The object state remains as if we didn’t attempt the operation. The object is left in a different, but valid state. The object becomes invalid and unsafe to use.

In these two cases in particular, the only source of errors is fallible allocation. And there’s a pattern to fix it: reserve the memory up front. As a reminder, errdefer comptime unreachable is a Zig idiom for expressing “no errors after this point”. Applying the pattern to the two examples, memory reservation works like a magic trick: it contains all the failures, but doesn’t change the data structure! Do you see how powerful that is? I learned this pattern from Andrew Kelley during the coffee break after the talk!

I haven’t measured the optimal level of spice here to make the truest possible statement. Instead I opted for dumping as much spice as possible to get the brain gears grinding: Zig should remove append and rename appendAssumeCapacity to just append. If you want to insert a single item, that’s two lines now.
Don’t insert items one-by-one; reserve memory in bulk, up-front. Zig applications should consider aborting on OOM. While the design goal of handling OOM errors correctly is laudable, and Zig makes it possible, I’ve seen only one application, xit, which passes the “matklad spends 30 minutes grepping” test. For libraries, prefer leaving allocation to the caller, or use generative testing with an allocator that actually returns errors. Alternatively, do as TigerBeetle does. We take this pattern literally, reserve all resources in main, and never allocate memory afterwards: ARCHITECTURE.md#static-memory-allocation
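The reserve-first pattern can be sketched against Zig’s std.ArrayListUnmanaged (the 0.14-era std API; the function and its arguments are illustrative):

```zig
const std = @import("std");

// Reserve first: the only fallible step happens before any mutation,
// so an OutOfMemory error leaves `list` exactly as it was.
fn appendPair(
    gpa: std.mem.Allocator,
    list: *std.ArrayListUnmanaged(u32),
    a: u32,
    b: u32,
) error{OutOfMemory}!void {
    try list.ensureUnusedCapacity(gpa, 2);
    // "No errors after this point": these calls cannot fail.
    list.appendAssumeCapacity(a);
    list.appendAssumeCapacity(b);
}
```

Because every fallible call sits above every mutating call, the function is trivially exception-safe: on error the caller sees the list untouched, with no cleanup code to get wrong.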

matklad 5 months ago

Zig's Lovely Syntax

It’s a bit of a silly post, because syntax is the least interesting detail about the language, but, still, I can’t stop thinking how Zig gets this detail just right for the class of curly-braced languages, and, well, now you’ll have to think about that too. On first glance, Zig looks almost exactly like Rust, because Zig borrows from Rust liberally. And I think that Rust has great syntax, considering all the semantics it needs to express (see “Rust’s Ugly Syntax”). But Zig improves on that, mostly by leveraging simpler language semantics, but also through some purely syntactical tasteful decisions. How do you spell the number ninety-two? Easy, `92`. But what type is that? Statically-typed languages often come with several flavors of integers of different widths and signedness, and there’s often a suffix syntax for literals of a particular type. Zig doesn’t have suffixes, because, in Zig, all integer literals have the same type: `comptime_int`. The value of an integer literal is known at compile time and is coerced to a specific type on assignment or ascription. To emphasize, this is not type inference, this is implicit comptime coercion. This does mean that code like `var x = 92;` generally doesn’t work, and requires an explicit type. Raw or multiline strings are spelled with a leading `\\` on each line. This syntax doesn’t require a special form for escaping itself. It nicely dodges indentation problems that plague every other language with a similar feature. And, the best thing ever: lexically, each line is a separate token. As Zig has only line comments, this means that `\n` is always whitespace. Unlike most other languages, Zig can be correctly lexed in a line-by-line manner. Raw strings are perhaps the biggest improvement of Zig over Rust.
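A small sketch of the multiline string form described above (the SQL content is made up):

```zig
const std = @import("std");

// Each `\\` line is a separate token, so a newline never occurs in the
// middle of a token, and no escaping of quotes or backslashes is needed.
const query =
    \\SELECT *
    \\FROM users
    \\WHERE name = "multi\line"
;

test {
    try std.testing.expect(std.mem.indexOf(u8, query, "FROM users") != null);
}
```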
Rust brute-forces the problem with the `r#"..."#` syntax, which does the required job, technically, but suffers from the mentioned problems: indentation is messy, nesting quotes requires adjusting hashes, an unclosed raw literal breaks the following lexical structure completely, and rustfmt’s formatting of raw strings tends to be rather ugly. On the plus side, this syntax at least cannot be expressed by a context-free grammar! For the record, Zig takes C’s designated-initializer syntax for record literals (not that C would notice). The leading dot feels weird! It will make sense by the end of the post. Here, I want only to note the `.field = value` part, which matches the assignment syntax `field = value`. This is great! This means that grepping for `field =` gives you all the instances where a field is written to. This is hugely valuable: most of the usages are reads, but, to understand the flow of data, you only need to consider writes. The ability to mechanically partition the entire set of usages into a majority of boring reads and a few interesting writes does wonders for code comprehension. Where Zig departs from C the most is the syntax for types. C uses a needlessly confusing spiral rule. In Zig, all types are prefix: `*T` is a pointer, `?T` an optional, `[4]T` an array. While the pointer type is prefix, pointer dereference is postfix (`ptr.*`), which is a more natural subject-verb order to read. Zig has a general syntax for “raw” identifiers: `@"..."`. It is useful to avoid collisions with keywords, or for exporting a symbol whose name is otherwise not a valid Zig identifier. It is a bit more to type than Kotlin’s delightful backticks, but manages to re-use Zig’s syntax for built-ins (`@`) and strings. Like Rust, Zig goes for the `fn` keyword in function declaration syntax. This is such a massive improvement over C/Java style function declarations: it puts the `fn` token (which is completely absent in the traditional C family) and the function name next to each other, which means that a textual search for `fn name` allows you to quickly find the function. Then Zig adds a little twist. While in Rust we write the return type after an arrow, in Zig it simply follows the parameter list. The arrow is gone!
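For illustration, a declaration in the arrowless style (function name and body are made up):

```zig
const std = @import("std");

// Search for "fn add" to find the declaration; the return type follows
// the parameter list directly, with no `->` between them.
fn add(a: i64, b: i64) i64 {
    return a + b;
}

test {
    try std.testing.expectEqual(@as(i64, 5), add(2, 3));
}
```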
Now that I’ve used this for some time, I find the arrow very annoying to type, and adding to the visual noise. Rust needs the arrow: Rust has lambdas with an inferred return type, and, in a lambda, the return type is optional. So you need some sort of explicit syntax to tell the parser whether there is a return type. And it’s understandable that lambdas and functions would want to use compatible syntax. But Zig doesn’t have lambdas, so it just makes the type mandatory. So the signature of main is spelled out in full: `pub fn main() void`. A related small thing: as the name of the unit type, I think I like `void` more than `()`. Zig is using `const` and `var` for binding values to names. This is ok, a bit weird after Rust, whose `let` and `let mut` would be `const` and `var` in Zig, but not really noticeable after some months. I do think this particular part is not great, because `const`, the more frequent one, is longer. I think Kotlin nails it: `val`, `var`, `fun`. Note all three are monosyllabic, which is not true of the Zig and Rust spellings! The number of syllables matters more than the number of letters! Like Rust, Zig uses the `name: Type` syntax for ascribing types, which is better than a prefix type because optional suffixes are easier to parse visually and mechanically than optional prefixes. Zig doesn’t use `&&` and `||` and spells the relevant operators as `and` and `or`. This is easier to type and much easier to read, but there’s also a deeper reason why they are not sigils. Zig marks any control flow with a keyword. And, because boolean operators short-circuit, they are control flow! Treating them as normal binary operators leads to an entirely incorrect mental model. For bitwise operations, Zig of course uses `&` and `|`. Both Zig and Rust have statements and expressions. Zig is a bit more statement oriented, and requires explicit returns. Furthermore, because there are no lambdas, the scope of a return is always clear. Relatedly, the value of a block expression is void. A block is a list of statements, and doesn’t have an optional expression at the end.
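A sketch of the statement-oriented style: returns are explicit, and a value-producing block uses a label (names here are made up):

```zig
const std = @import("std");

fn classify(n: i64) []const u8 {
    // Returns are always explicit; a block never implicitly yields
    // its trailing expression.
    if (n < 0) return "negative";
    // A labeled block is the general way to compute a value in a block:
    const label: []const u8 = blk: {
        if (n == 0) break :blk "zero";
        break :blk "positive";
    };
    return label;
}

test {
    try std.testing.expectEqualStrings("zero", classify(0));
    try std.testing.expectEqualStrings("positive", classify(92));
}
```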
This removes the semicolon problem — while Rust’s rules around semicolons are sufficiently clear (until you get to macros), there’s some constant mental overhead to getting them right all the time. Zig is more uniform and mechanical here. If you need a block that yields a value, Zig supports a general syntax for breaking out of a labeled block. Rust makes the pedantically correct choice regarding `if`s: braces are mandatory. This removes the dreaded “dangling else” grammatical ambiguity. While theoretically nice, it makes a one-line `if`-expression feel too heavy. It’s not the braces, it’s the whitespace around them. But the ternary is important! Exploding a simple choice into a multi-line condition hurts readability. Zig goes with the traditional choice of making parentheses required and braces optional. By itself, this does create a risk of style bugs. But in Zig the formatter (non-configurable, user-directed) is a part of the compiler, and formatting errors that can mask bugs are caught during compilation. For example, inconsistent whitespace around a minus sign is an error, because it signals a plausible mixup of infix and prefix minus. No such errors are currently produced for incorrect indentation (the value add there is relatively little, given the autoformatter), but this is planned. NB: because Rust requires branches to be blocks, it is forced to make a block evaluate to its trailing expression. Otherwise, the ternary would be even more unusable! Syntax design is tricky! Whether blocks have values and whether you make braces or parentheses mandatory in ifs are not orthogonal! Like Python, Zig allows `else` on loops. Unlike Python, loops are expressions, which leads to nicely readable imperative searches. Zig doesn’t have a syntactically-infinite loop like Rust’s `loop` or Go’s bare `for`. Normally I’d consider that a drawback, because these loops produce different control flow, affecting reachability analysis in the compiler, and I don’t think it’s great to make reachability dependent on the condition being visibly constant. But!
As Zig places semantics front and center, and the rules for what is and isn’t a comptime constant are a backbone of every feature, “anything equivalent to `while (true)`” becomes sufficiently precise. Incidentally, these days I tend to write “infinite” loops with an explicit iteration bound. Almost always there is an up-front bound for the number of iterations until the break, and it’s worth asserting this bound, because debugging crashes is easier than debugging hangs. `if`, `while`, `for`, `switch`, and `catch` all use the same Ruby/Rust inspired syntax for naming captured values: `|item|`. I like how the iterator comes first, and then the name of an item follows, logically and syntactically. I have a very strong opinion about variable shadowing. It goes both ways: I spent hours debugging code which incorrectly tried to use a variable that was shadowed by something else, but I also spent hours debugging code that accidentally used a variable that should have been shadowed! I really don’t know whether on balance it is better to forbid or encourage shadowing! Zig of course forbids shadowing, but what’s curious is that it’s just one episode of a larger crusade against any complexity in name resolution. There’s no “prelude”: if you want to use anything from std, you need to import it. There are no glob imports: if you want to use an item from std, you need to name it explicitly. Zig doesn’t have inheritance, mixins, argument-dependent lookup, extension functions, implicits, or traits, so, if you see `foo.bar()`, that is guaranteed to be a boring method declared on `foo`’s type. Similarly, while Zig has powerful comptime capabilities, it intentionally disallows declaring methods at compile time. Like Rust, Zig used to allow a method and a field to share a name, because it actually is syntactically clear enough at the call site which is which. But then this feature got removed from Zig. More generally, Zig doesn’t have namespaces. There can be only one kind of thing per name in scope, while Rust allows a type and a value with the same name to coexist. I am astonished at the relative lack of inconvenience in Zig’s approach.
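The explicit-import style described above, in miniature:

```zig
// No prelude and no glob imports: everything enters the scope through
// `@import` and explicit declarations.
const std = @import("std");
const ArrayList = std.ArrayList; // "importing" one item is just an assignment

test {
    // Types are values, so we can check the shorthand refers to the
    // same generic type at comptime.
    try std.testing.expect(ArrayList(u8) == std.ArrayList(u8));
}
```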
Turns out that the `.` is all the syntax you’ll ever need for accessing things? For the historically inclined, see the “The module naming situation” thread in the Rust mailing list archive to learn the story of how Rust got its `::` syntax. The lack of namespaces touches on the most notable (by its absence) feature of Zig syntax, which deeply relates to the most profound aspect of Zig’s semantics. Everything is an expression. By which I mean, there are no separate syntactic categories of values, types, and patterns. Values, types, and patterns are of course different things. And usually in the language grammar it is syntactically obvious whether a particular text fragment refers to a type or a value. So the standard way is to have separate syntax families for the three categories, which need to be internally unambiguous, but can be ambiguous across the categories, because the place in the grammar dictates the category: when parsing `let pattern: Type = value`, everything until `:` is a pattern, the stuff between `:` and `=` is a type, and after `=` we have a value. There are two problems here. First, there’s a combinatorial explosion of sorts in the syntax, because, while the three categories describe different things, it turns out that they have the same general tree-ish shape. The second problem is that it might be hard to maintain category separation in the grammar. Rust started with the three categories separated by a bright line. But then, changes happen. Originally, Rust only allowed plain places on the left-hand side of an assignment. But today you can also write `(a, b) = (b, a)` to do unpacking. Similarly, the turbofish `::<>` used to move the parser from the value to the type mode, but now const parameters are values that can be found in the type position! The alternative is not to pick this fight at all. Rather than trying to keep the categories separate in the syntax, use the same surface syntax to express all three, and categorize later, during semantic analysis. In fact, this already happens in the assignment example — the two sides are different things!
One is a place (lvalue) and another is a “true” value (rvalue), but we use the same syntax for both. I don’t think such syntactic unification necessarily implies semantic unification, but Zig does treat everything uniformly, as a value with comptime and runtime behavior (for some values, runtime behavior may be missing, for others — comptime). The fact that you can write an arbitrary expression where a type goes is occasionally useful. But the fact that simple types look like simple values syntactically consistently makes the language feel significantly less busy. As a special case of everything being an expression, instances of generic types look like ordinary calls: `ArrayList(u8)`. Just a function call! Though, there’s some trickery involved to make this work. Usually, languages rely on type inference to allow eliding generic arguments. That in turn requires making argument syntax optional, and that in turn leads to separating generic and non-generic arguments into separate parameter lists and some introducer sigil for generics, like `<>` or `[]`. Zig solves this syntactic challenge in the most brute-force way possible. Generic parameters are never inferred: if a function takes 3 comptime arguments and 2 runtime arguments, it will always be called with 5 arguments syntactically. Like with the (absence of) importing flourishes, a reasonable reaction would be “wait, does this mean that I’ll have to specify the types all the time?” And, like with imports, in practice this is a non-issue. The trick is comptime closures. Consider a generic array list: we have to specify the element type when creating an instance. But subsequently, when we are using the array list, we don’t have to specify the type parameter again, because the type of the variable already closes over it. This is the major truth of object-oriented programming, the truth so profound that no one even notices it: in real code, 90% of functions are happiest as (non-virtual) methods. And, because of that, the annotation burden in real-world Zig programs is low.
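A sketch of a generic type as a plain function, with methods closing over the comptime parameter (the `Pair` type is made up for illustration):

```zig
const std = @import("std");

// A generic type is an ordinary function from types to types.
fn Pair(comptime T: type) type {
    return struct {
        first: T,
        second: T,

        // The method closes over `T`; callers never repeat the type argument.
        fn sum(self: @This()) T {
            return self.first + self.second;
        }
    };
}

test {
    const p = Pair(u32){ .first = 40, .second = 2 };
    try std.testing.expectEqual(@as(u32, 42), p.sum());
}
```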
While Zig doesn’t have Hindley-Milner constraint-based type inference, it relies heavily on one specific way to propagate types. Let’s revisit the first example. It doesn’t compile as written: the two comptime literals are different values, and we can’t select between the two at runtime, because they are different. We need to coerce the constants to a specific runtime type. But coercing the result doesn’t kick the can sufficiently far and essentially reproduces the `if` with two incompatible branches. We need to sink the coercion down into the branches. And that’s exactly how Zig’s “Result Location Semantics” works. Type “inference” runs a simple left-to-right tree-walking algorithm, which resembles an interpreter’s eval. In fact, that is exactly what happens. Zig is not a compiler, it is an interpreter. When it evaluates an expression, it gets a result type and a result location. When interpreting an assignment, the interpreter passes the result location and type down the tree of subexpressions. The branches store the result directly into the object’s field (there’s a store inside each branch, as opposed to one after the `if`), and each branch coerces its comptime constant to the appropriate runtime type of the result. This mechanism enables concise syntax for specifying enums. When Zig evaluates a switch, it first evaluates the scrutinee, and realizes that it has some enum type. When evaluating an arm, it sets the result type for the arm’s condition to that enum, and a bare literal like `.foo` gets coerced to the enum. The same happens for the second arm, where the result type further sinks down into the arm’s body. Result type semantics also explains the leading dot in the record literal syntax. Syntactically, we just want to disambiguate records from blocks. But, semantically, we want to coerce the literal to whatever type we want to get out of this expression. In Zig, `.{ ... }` is a shorthand for a record literal whose type is supplied by the result type. I must confess that the leading dot did weird me out a lot at first when writing code (I don’t mind reading the dot). It’s not the easiest thing to type! But that was fixed once I added an editor snippet for it.
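A sketch of result types sinking into literals — both the bare enum literal and the leading-dot record literal get their type from the context (the `Color`/`Pixel` types are made up):

```zig
const std = @import("std");

const Color = enum { red, green, blue };
const Pixel = struct { x: u16, y: u16, color: Color = .red };

test "result type sinks into literals" {
    const cond = true;
    // Enum literal: `.green` is coerced to Color by the result type,
    // inside each branch of the if.
    const c: Color = if (cond) .green else .blue;
    try std.testing.expectEqual(Color.green, c);

    // Record literal: `.{ ... }` means "whatever struct the result type wants".
    const p: Pixel = .{ .x = 1, .y = 2 };
    try std.testing.expectEqual(Color.red, p.color);
}
```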
The benefits of lightweight record literal syntax are huge, as they allow for some pretty nice APIs. In particular, you get named and default arguments for free: pass a struct literal as the last parameter. I don’t really miss the absence of named arguments in Rust, you can always design APIs without them. But they are free in Zig, so I use them liberally. Syntax-wise, we get two features (calling functions and initializing objects) for the price of one! Finally, the thing that weirds out some people when they see Zig code, and makes others reconsider their choice of GitHub handles, even when they haven’t seen any Zig: the `@` syntax for built-in functions. Every language needs to glue “userspace” code with primitive operations supported by the compiler. Usually, the gluing is achieved by making the standard library privileged and allowing it to define intrinsic functions without bodies, or by adding ad-hoc operators directly to the language (like Rust’s `as`). And Zig does have a fair amount of operators, like `++` or `+%`. But the release valve for a lot of functionality are built-in functions in a distinct syntactic namespace, so Zig separates out `@intCast`, `@truncate`, `@bitCast`, `@ptrCast`, `@alignCast`, `@enumFromInt`, `@intFromEnum`, `@intFromFloat`, `@floatFromInt`, and `@floatCast`. There’s no need to overload casting when you can give each variant a name. There’s also `@as` for type ascription. The type goes first, because the mechanism here is result type semantics: `@as` evaluates the first argument as a type, and then uses that as the result type for the second argument. Curiously, I think `@as` actually can be implemented in userspace: in Zig, the type of a function parameter may depend on the values of preceding (comptime) ones! My favorite builtin is `@import`. First, it’s the most obvious way to import code: it’s crystal clear where the file comes from. But, second, it is an instance of reverse syntax sugar. You see, import isn’t really a function. You can’t pass it a runtime value; the argument of `@import` has to be a string, syntactically. It really is syntax, except that the function-call form is re-used, because it already has the right shape. So, this is it.
Just a bunch of silly syntactical decisions, which add up to a language that is positively enjoyable to read. As for big lessons: obviously, the fewer features your language has, the less syntax you’ll need. And less syntax is generally good, because varied syntactic constructs tend to step on each other’s toes. Languages are not combinations of orthogonal aspects. Features tug and pull the language in different directions, and their combinations might turn out to be miraculous features in their own right, or might drag the language down. Even with a small feature-set fixed, there’s still a lot of work to pick a good concrete syntax: unambiguous to parse, useful to grep, easy to read and not too painful to write. A smart thing is of course to steal and borrow solutions from other languages, not because of familiarity, but because ruthless natural selection tends to weed out poor ideas. But there’s a lot of inertia in languages, so there’s no need to fear innovation. If an odd-looking syntax is actually good, people will take to it. Is there anything about Zig’s syntax I don’t like? I thought no, when starting this post. But in the process of writing it I did discover one form that annoys me. It is the while-with-increment loop: `while (condition) : (increment) body`. This is two-thirds of a C-style for loop (without the declarator), and it sucks for the same reason: control flow jumps all over the place and is unrelated to the source code order. We go from the condition, to the body, to the increment. But in the source order the increment is between the condition and the body. In Zig, this loop sucks for one additional reason: that `:` separating the increment is, I think, the single example of control flow in Zig that is expressed by a sigil, rather than a keyword. This form used to be rather important, as Zig lacked a counting loop. It has the `for (0..n)` range form now, so I am tempted to call the while-with-increment redundant.
Annoyingly, `while (condition) : (increment) body` is almost equivalent to `while (condition) { defer increment; body }`. But not exactly: if the body contains a `break` or a `return`, the `defer` version would run the increment one extra time, which is useless and might be outright buggy. Oh well.
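For comparison, the two counting-loop spellings side by side (the computed sums are made up):

```zig
const std = @import("std");

test "counting loops" {
    // The while-with-increment form: control flow hops from the
    // condition to the body to the increment.
    var sum: usize = 0;
    var i: usize = 0;
    while (i < 5) : (i += 1) sum += i;

    // The newer range form reads top to bottom:
    var sum2: usize = 0;
    for (0..5) |j| sum2 += j;

    try std.testing.expectEqual(sum, sum2);
    try std.testing.expectEqual(@as(usize, 10), sum);
}
```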

matklad 5 months ago

Partially Matching Zig Enums

Usually, you handle it with a switch that has one prong per variant. But once in a while, there’s common handling code you want to run for several variants. The most straightforward way is to duplicate it. But this gets awkward if the common parts are not easily extractable into a function. The “proper” way to do this is to refactor the enum, grouping the variants that share handling. This gets very awkward if there are one hundred usages of the enum, 95 of them look better with a flat structure, one needs common code for the AB case, and the four remaining need common code for AC. The universal recipe for solving the AB problem relies on a runtime panic: match the variants together, then switch again with an `else => unreachable` for the impossible cases. And… this is fine, really! I wrote code of this shape many times, and it never failed at runtime due to a misapplied refactor later. Still, every time I write that `unreachable`, I die inside a little. Surely there should be some way to explain to the compiler that the else branch is really unreachable there? Well, as I realized an hour ago, in Zig, you can! The awkward runtime-panicky and theoretically brittle version uses a plain `unreachable`; the bullet-proof compiler-checked one uses two tricks. `inline` forces the compiler to generate the prong twice, once per matched variant, with the captured tag bound to a comptime value. The second trick is `comptime unreachable`, which instructs the compiler to fail the build if it gets to the else branch. But, because the tag is known at comptime, the compiler knows that the branch is in fact unreachable, and doesn’t hit the error. Adding a bug fails compilation, as intended.
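A sketch of the compiler-checked version described above, combining `inline` prongs with `comptime unreachable` (the enum and the handling code are made up):

```zig
const std = @import("std");

const State = enum { a, b, c };

fn handle(s: State) u32 {
    return switch (s) {
        // `inline` generates this prong once per variant, with `tag`
        // bound to a comptime-known value.
        inline .a, .b => |tag| common: {
            const base: u32 = 10; // shared handling for .a and .b
            break :common base + switch (tag) {
                .a => 1,
                .b => 2,
                // Analyzed only if a new variant is mis-routed here:
                // a compile error instead of a runtime panic.
                else => comptime unreachable,
            };
        },
        .c => 0,
    };
}

test {
    try std.testing.expectEqual(@as(u32, 11), handle(.a));
    try std.testing.expectEqual(@as(u32, 12), handle(.b));
    try std.testing.expectEqual(@as(u32, 0), handle(.c));
}
```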


How I like to install NixOS (declaratively)

For one of my network storage PC builds, I was looking for an alternative to Flatcar Container Linux and tried out NixOS again (after an almost 10 year break). There are many ways to install NixOS, and in this article I will outline how I like to install NixOS on physical hardware or virtual machines: over the network and fully declaratively. The term declarative means that you describe what should be accomplished, not how. For NixOS, that means you declare what software you want your system to include (add a package to the `environment.systemPackages` config option, or enable a module) instead of, say, imperatively running a package installation command. A nice property of the declarative approach is that your system follows your configuration, so by reverting a configuration change, you can cleanly revert the change to the system as well. I like to manage declarative configuration files under version control, typically with Git. When I originally set up my current network storage build, I chose CoreOS (later Flatcar Container Linux) because it was an auto-updating base system with a declarative cloud-init config. The NixOS manual’s “Installation” section describes a graphical installer (“for desktop users”, based on the Calamares system installer and added in 2022) and a manual installer. With the graphical installer, it’s easy to install NixOS to disk: just confirm the defaults often enough and you’ll end up with a working system. But there are some downsides: The graphical installer is clearly not meant for remote installation or automated installation. The manual installer on the other hand is too manual for my taste: expand “Example 2” and “Example 3” in the NixOS manual’s Installation summary section to get an impression. To be clear, the steps are very doable, but I don’t want to install a system this way in a hurry. For one, manual procedures are prone to mistakes under stress. And also, copy & pasting commands interactively is literally the opposite of writing declarative configuration files.
Ideally, I would want to perform most of the installation from the comfort of my own PC, meaning the installer must be usable over the network. Also, I want the machine to come up with a working initial NixOS configuration immediately after installation (no manual steps!). Luckily, there is a (community-provided) solution: nixos-anywhere. You take care of booting a NixOS installer, then run a single command, and nixos-anywhere will SSH into that installer, partition your disk(s) and install NixOS to disk. Notably, nixos-anywhere is configured declaratively, so you can repeat this step any time. (I know that nixos-anywhere can even SSH into arbitrary systems and kexec-reboot them into a NixOS installer, which is certainly a cool party trick, but I like the approach of explicitly booting an installer better, as it seems less risky and more generally applicable/repeatable to me.) I want to use NixOS for one of my machines, but not (currently) on my main desktop PC. Hence, I installed only the `nix` tool (for building, even without running NixOS) on Arch Linux. Now, running a nix-shell with the GNU hello package should drop you into a new shell in which that package is installed. By the way, the Nix page on the Arch Linux wiki explains how to use nix to install packages, but that’s not what I am interested in: I only want to remotely manage NixOS systems. Previously, I said “you take care of booting a NixOS installer”, and that’s easy enough: write the ISO image to a USB stick and boot your machine from it (or select the ISO and boot your VM). But before we can log in remotely via SSH, we need to manually set a password. I also need to SSH with an overridden `TERM` environment variable, because the termcap file of rxvt-unicode (my preferred terminal) is not included in the default NixOS installer environment. Similarly, my configured locales do not work, and my preferred shell (Zsh) is not available. Wouldn’t it be much nicer if the installer was pre-configured with a convenient environment?
With other Linux distributions, like Debian, Fedora or Arch Linux, I wouldn’t attempt to re-build an official installer ISO image. I’m sure their processes and tooling work well, but I am also sure it’s one extra thing I would need to learn, debug and maintain. But building a NixOS installer is very similar to configuring a regular NixOS system: same configuration, same build tool. The procedure is documented in the official NixOS wiki. I copied the customizations I would typically put into my `configuration.nix`, imported the installer module from nixpkgs and put the result in a new file. To build the ISO image, I set the `NIX_PATH` environment variable to point to that file and to select the upstream channel for NixOS 25.05. After about 1.5 minutes on my 2025 high-end Linux PC, the installer ISO can be found in the build output directory (1.46 GB in size in my case). Unfortunately, the nix project has not yet managed to enable the “experimental” new command-line interface (CLI) by default, despite it having been available for 5+ years, so we need to create a config file and enable the modern interface. How can you tell old from new? The old commands are hyphenated (`nix-build`), the new ones are separated by a blank space (`nix build`). You’ll notice I also enabled Nix flakes, which I use so that my nix builds are hermetic and pinned to a certain revision of nixpkgs and any other nix modules I want to include in my build. I like to compare flakes to version lock files in other programming environments: the idea is that building the system in 5 months will yield the same result as it does today. To verify that flakes work, run one of the new flake-aware commands (not one of the old hyphenated ones). For reference, here is the configuration I use to create a new VM for NixOS in Proxmox. The most important setting is the OVMF firmware (= UEFI boot, which is not the default), so that I can use the same boot loader configuration on physical machines as in VMs. Before we can boot our (unsigned) installer, we need to enter the UEFI setup and disable Secure Boot. Note that Proxmox enables Secure Boot by default, for example.
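For reference, the config file mentioned above that enables the modern CLI and flakes is presumably the standard one-liner; on non-NixOS systems it lives at `~/.config/nix/nix.conf` (or `/etc/nix/nix.conf`):

```ini
# ~/.config/nix/nix.conf
experimental-features = nix-command flakes
```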
Then, boot the custom installer ISO on the target system, and ensure that logging in over SSH works without prompting for a password. Declare a `flake.nix` with the following content. Declare your disk config. Declare your desired NixOS config… and lock it. After about one minute, my VM was installed and rebooted! Tip: Last month, I had to temporarily pin nixos-anywhere to the latest released version (1.9.0) because of issue nixos-anywhere#510. Now that the declarative part of the system is in place, we need to take care of the stateful part. In my case, the only stateful part that needs setting up is the Tailscale mesh VPN. To set up Tailscale, I log in via SSH and run `tailscale up`. Then, I add the new node to my network by following the link. Afterwards, in the Tailscale Machines console, I disable key expiration and add ACL tags. Now, after I have changed something in my configuration file, I use `nixos-rebuild` remotely to roll out the change to my NixOS system. Note that not all changes are fully applied as part of a switch: while systemd services are generally restarted, newly required kernel modules are not automatically loaded (e.g. after enabling the Coral hardware accelerator in Frigate). So, to be sure everything took effect, reboot your system after deploying changes. One of the advantages of NixOS is that in the boot menu, you can select which generation of the system you want to run. If the latest change broke something, you can quickly reboot into the previous generation to undo that change. Of course, you can also undo the configuration change and deploy a new generation — whichever is more convenient in the situation. With this article, I hope I could convey what I wish someone would have told me when I started using Nix and NixOS. To recap why the graphical installer doesn’t fit this workflow: you need to manually enable SSH after the installation — locally, not via the network. The graphical installer generates an initial NixOS configuration for you, but there is no way to inject your own initial NixOS configuration. Where do you go from here?
Using nixos-anywhere, fetch the hardware-configuration.nix from the installer and install NixOS to disk. Enable flakes and the new CLI. Use nixos-anywhere to install remotely. Build a custom installer if you want, it’s easy! Use `nixos-rebuild`’s builtin `--target-host` flag for remote deployment. Read through all documentation on nixos.org → Learn. Here are a couple of posts from people in and around my bubble that I looked at for inspiration / reference, in no particular order: Michael Lynch wrote about setting up an Oracle Cloud VM with NixOS and about managing his Zig configuration. Nelson Elhage wrote about using Nix to test dozens of Python interpreters as part of his performance investigation into Python 3.14 tail-call interpreter performance. Vincent Bernat wrote about using Nix to build an SD card image for an ARM single board computer. Mitchell Hashimoto shared his extensive NixOS configs. Wolfgang has a YouTube video about using NixOS for his Home Server (→ his configs). Contact your local Nix community! I recently attended the “Zero Hydra Failures” event of the Nix Zürich group and the kind people there were happy to talk about all things Nix :)

Lukáš Lalinský 10 months ago

My AI helpers, CodeRabbit and SourceGraph Cody

I’ve been an early adopter of AI coding tools. I’ve been using GitHub Copilot since the technical preview stages in 2021. It was mind-blowing to me. The interface was pretty minimal compared to what we have now, but even at that stage, it was revolutionizing the way I work. I’ve dreamed for a long time about programming without having to actually write all the code, and it was starting to become a reality. All in all, I was pretty happy with it. Last year, I discovered Cody from SourceGraph. I tried the trial and I was hooked. It had so much more context about the code I’m working on. I could just select a function, tell it to refactor something in it, and it would do it directly in my editor. Writing documentation, generating tests, writing new code — everything became easier. I used it last year to write a replacement for the acoustid-index server, something I’ve been planning for a long time, but I decided to also learn a new language, Zig, on the project. Cody made the process really effortless. It included countless refactorings, as I was still learning the right patterns in the language, and I was doing most of the work without actually writing the code myself. This year, I’ve started using the chat with thinking models a lot more often, along with Cody’s ability to apply the code blocks from the chat to the editor. Even better, I’m actually using this for free, as part of their support for open source. It’s such a good tool that I’d be happy to pay for it now, and I will definitely start doing that once my current free license expires. And this year I discovered CodeRabbit for automated code reviews. I was super skeptical about this, but they also have a free plan for open source projects, so I decided to give it a try. I’m maintaining AcoustID alone, so having another set of eyes looking at the code, even if mechanical ones, is welcome. And I was blown away. On the first pull request, it actually found a small logical error I had in the code.
And this kept happening again and again. After some time, I switched it to the assertive profile, and now I actually enjoy opening a pull request and going through the suggestions it makes. Yes, sometimes they are obsessive, but that’s OK. I’ve tried alternatives, like Gemini or Copilot, both of which have options to do code reviews, but the level of quality is on a completely different level. Gemini and Copilot feel like useless toys compared to CodeRabbit. The last four years have completely changed my approach to programming, and for the better. As good as all these new AI tools are, I don’t really expect them to replace technical programming jobs. You really need to evaluate their outputs, and if you are not able to do that critically, you will deal with a lot of bullshit code. But if you can judge the quality of the output, these are great helpers and I’m really looking forward to what the future brings.

zackoverflow 10 months ago

I spent 181 minutes waiting for the Zig compiler this week

TL;DR: The Zig compiler takes about 1 minute and 30 seconds to compile debug builds of Bun. Zig's language server doesn't do basic things like type-checking, so I often have to run the compiler to see if my code works.


Msgpack serialization library for Zig

I’ve been playing with Zig over the last few weeks. The language had been on my radar for a long time, since it was originally developed for writing audio software, but I never paid too much attention to it. It seems to be becoming more popular, so I decided to learn it and picked a small task: rewriting the AcoustID fingerprint index server in it. That is still in progress, but there is one side product that is almost ready, a library for handling msgpack serialization . The library can only be used with static schemas, defined using Zig’s type system. There are many options for generating compact messages, almost competing with protobuf, but without separate proto files and protoc. I’m quite happy with the API, which is mainly possible thanks to Zig’s comptime type reflection. This is the most basic usage: See the project on GitHub for more details.
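As a rough illustration of the static-schema idea, here is a minimal sketch of what usage driven by comptime type reflection could look like. The `msgpack` import name and the `encode`/`decode` functions are assumptions for illustration, not necessarily the library's actual API:

```zig
const std = @import("std");
const msgpack = @import("msgpack"); // assumed module name

// The "schema" is just an ordinary Zig struct; the library can walk
// its fields at comptime, so no .proto files or code generation.
const Message = struct {
    id: u32,
    name: []const u8,
};

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const msg = Message{ .id = 1, .name = "test" };

    // Hypothetical API: serialize a value of a statically known type.
    const bytes = try msgpack.encode(allocator, msg);
    defer allocator.free(bytes);

    // Hypothetical API: deserialize back into the same static type.
    const decoded = try msgpack.decode(Message, allocator, bytes);
    _ = decoded;
}
```

The appeal of this style is that the struct definition is the single source of truth: field names, types, and optionality all come from the type system rather than a separate schema file.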

seated.ro 1 year ago

a weekend love story - raylib/zig

Before this weekend, I was a plebeian who used JavaScript for 80% of my tasks. Now I am an esoteric plebeian who has used zig once. Anyway, I decided to give zig a shot and try to build a game with it. There were really only two libraries I wanted to learn (sokol and raylib), and I went with raylib. I went through the official raylib docs to get a feel for the API, then I searched for zig bindings and found two: Not-Nik/raylib-zig and ryupold/raylib-zig . The second one hasn’t been updated in 6 months, and it isn’t using the zig package manager. Ergo, leaving me no choice but to use the first one.

It was fairly easy to set things up, I just needed to install zig and raylib-zig. NOTE : We will be using zig v0.12.0 instead of v0.13.0, we will see why later. I used zvm to make life easier managing various zig versions, but you can install zig however you want. Follow this part of the docs to install raylib-zig. Do this inside your game directory. This adds raylib-zig as a dependency to our project. You can check the file to verify.

Now for the most important part if you want to compile your game for the web with emscripten. IF YOU’RE NOT BUILDING FOR THE WEB, PLEASE SKIP . Install as mentioned here: emscripten installation guide and make sure to do the command. NOTE : Just make sure not to clone the emsdk into $HOME/.emscripten, because emscripten uses that as the default cache directory. It will fuck your build up. (I didn’t do this at all)

If you followed the raylib-zig installation guide, your build.zig should look something like this: If it doesn’t, it should now! Now we need to add a block specific to the emscripten target. I missed this code block and was stuck trying to figure out how to get the emscripten run step to work. :(

I wrote an asteroids clone following along with this video: zig space rocks - jdh . Go write whatever game you want!
Here is a screenshot of the game in action: Here are some other cool games made in zig: tetris , terrain-zigger

Below is some cool zig code I wrote that I quite like: This syntax is so fucking cool, I will never forget to again. This code is for deciding when to play the low/high bloop sound. It increases in intensity the longer a player is alive. Makes it feel like the game is getting harder. Pretty cool! I also avoided for loops with bit shifting. Absolutely unnecessary, but I did it anyway.

One thing I think is important: for some reason I wasn’t able to just use a for the heap allocations. So I reused some code from the ryupold/examples-raylib.zig repo and made a custom emscripten allocator. Without this, I was getting issues. I also increased the memory limit to 16MB. Idk which of those fixed it, but it works now! After that, I was sitting at a solid of memory usage, so I think I was good.

You can build the game for desktop with . For the web, you need to run the following command: Now, as to why we used . For some reason the build command above fails for the emscripten target with as mentioned in this open issue . If someone can figure out why, please mention it in the issue! You can drop everything in the folder and host it wherever.

Was a fun start to my zig arc! I hope you guys don’t waste time debugging shit like me lol. May we zig harder every day. Source Code
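The syntax being praised above is presumably Zig's `defer`, which registers cleanup right next to the call that acquires a resource. A sketch of how it looks in a raylib game loop, using function names from the Not-Nik/raylib-zig README (exact names and signatures may differ between binding versions, so treat this as illustrative):

```zig
const rl = @import("raylib"); // Not-Nik/raylib-zig binding (assumed version)

pub fn main() void {
    rl.initWindow(800, 450, "asteroids");
    // defer: the cleanup call sits right next to the init call,
    // so you can't forget it, and it runs even on early returns.
    defer rl.closeWindow();

    rl.setTargetFPS(60);

    while (!rl.windowShouldClose()) {
        rl.beginDrawing();
        // Runs at the end of each loop iteration's scope.
        defer rl.endDrawing();

        rl.clearBackground(rl.Color.black);
        rl.drawText("may we zig harder", 190, 200, 20, rl.Color.white);
    }
}
```

Because deferred calls run at scope exit in reverse order, pairing every acquire with an adjacent `defer` keeps teardown correct no matter how the scope is exited.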
