Latest Posts (17 found)
Dayvster 1 week ago

Is Odin Just a More Boring C?

## Why I Tried Odin

### Background

My recent posts have been diving deep into Zig and C, a shift from my earlier focus on React and JavaScript. This isn't a pivot but a return to my roots. I started programming at 13 with C and C++, and over the years I've built a wide range of projects in systems programming languages like C, C++, Rust, and now Zig. From hobby experiments and custom Linux utilities to professional embedded systems work (think vehicle infotainment, tracking solutions, and low-level components), I've always been drawn to the power and precision of systems programming. Alongside this, I've crafted tools for my own environment and tackled plenty of backend engineering, blending my full-stack expertise with a passion for low-level control.

### Why Odin Caught My Eye

I, like many others, initially dismissed Odin as that language primarily intended for game development. It took me a moment, or should I say many moments, to realize just how misguided that notion was. Let's analyze what game development actually means: it means building complex systems that need to be efficient, performant, and reliable. It means working with graphics, physics, input handling, networking, and more. It means dealing with concurrency, memory management, and low-level optimizations. In other words, game development is a perfect fit for a systems programming language like Odin.

So if it's intended for game development, it should also be a great fit for general systems programming and desktop applications, and since game dev usually means manual memory management without a garbage collector, it should even be usable to some extent for embedded systems. So after I gave myself a good slap on the forehead for being a bit of an idiot, I decided to give Odin a fair shot and build something useful with it.
## The Project

Now, I may have been a bit liberal with the word "useful" there. What I actually decided to build is what I usually build whenever I want to try out a new language: a tiny key-value store with a pub/sub system. It won't win any awards for originality, and I'm pretty sure the folks over at Redis aren't exactly shaking in their boots. It is the most basic, most barebones implementation of both, lacking any real features that would make it usable in a production environment. But it is a good exercise in understanding the language and its capabilities, mainly because it involves a few different aspects of programming that are relevant to systems programming: data structures, memory management, concurrency, and networking. And even if you create something as basic and lacking as I have in this example, you still have room for experimentation and exploration to add more features.

### Building a Tiny KV Store With Pub/Sub

My initial minimal proof of concept was simple and straightforward.
```odin
package main

import "core:fmt"
import "core:time"

KVStore :: struct {
	store: map[string]string,
}

kvstore_init :: proc() -> KVStore {
	return KVStore{store = map[string]string{}}
}

kv_put :: proc(kv: ^KVStore, key: string, value: string) {
	kv.store[key] = value
}

kv_get :: proc(kv: ^KVStore, key: string) -> string {
	if value, ok := kv.store[key]; ok {
		return value
	}
	return ""
}

PubSub :: struct {
	subscribers: map[string][]proc(msg: string),
}

pubsub_init :: proc() -> PubSub {
	return PubSub{subscribers = map[string][]proc(msg: string){}}
}

subscribe :: proc(ps: ^PubSub, topic: string, handler: proc(msg: string)) {
	if arr, ok := ps.subscribers[topic]; ok {
		new_arr := make([]proc(msg: string), len(arr) + 1)
		for i in 0 ..< len(arr) {
			new_arr[i] = arr[i]
		}
		new_arr[len(arr)] = handler
		ps.subscribers[topic] = new_arr
	} else {
		ps.subscribers[topic] = []proc(msg: string){handler}
	}
}

publish :: proc(ps: ^PubSub, topic: string, msg: string) {
	if handlers, ok := ps.subscribers[topic]; ok {
		for handler in handlers {
			handler(msg)
		}
	}
}

kv: KVStore

main :: proc() {
	kv = kvstore_init()
	ps := pubsub_init()

	handler1 :: proc(msg: string) {
		fmt.println("Sub1 got:", msg)
		kv_put(&kv, "last_msg", msg)
	}
	handler2 :: proc(msg: string) {
		fmt.println("Sub2 got:", msg)
	}
	handler3 :: proc(msg: string) {
		fmt.println("Sub3 got:", msg)
	}

	subscribe(&ps, "demo", handler1)
	subscribe(&ps, "demo", handler2)
	subscribe(&ps, "demo", handler3)

	publish(&ps, "demo", "Welcome to dayvster.com")
	time.sleep(2 * time.Second)
	publish(&ps, "demo", "Here's another message after 2 seconds")

	last := kv_get(&kv, "last_msg")
	fmt.println("Last in kvstore:", last)
}
```

As you can see, it currently lacks any real error handling, concurrency, or persistence, but it does demonstrate the basic functionality of a key-value store with pub/sub capabilities. What I have done is create two main structures, `KVStore` and `PubSub`.
The `KVStore` structure contains a map to store key-value pairs and provides functions to put and get values. The `PubSub` structure contains a map of subscribers for different topics and provides functions to subscribe to topics and publish messages. The `main` function initializes the key-value store and pub/sub system, defines a few handlers for incoming messages, subscribes them to a topic, and then publishes some messages to demonstrate the functionality. From this basic example we've explored how to handle memory management in Odin, how to work with data structures like maps and slices, and how to define and use procedures.

### Memory Management

Like C and Zig, Odin employs manual memory management, but like Zig it offers user-friendly utilities to streamline the process, in contrast to C's more rudimentary approach. For instance, the `make` function in Odin enables the creation of slices with a defined length and capacity, akin to Zig's slice allocation. In the code above, `make([]proc(msg: string), len(arr)+1)` generates a slice of procedure pointers with a length of `len(arr)+1`. Essentially, it allocates memory on the heap and returns a slice header, which includes a pointer to the allocated memory along with the length and capacity of the slice.

**But how and when is that memory freed?** In this code, memory allocated by `make` (e.g., for the slice in `subscribe`) and for maps (e.g., `kv.store` and `ps.subscribers`) is not explicitly freed. Since this is a short-lived program, the memory is reclaimed by the operating system when the program exits. However, in a long-running application, you'd need to use Odin's `delete` procedure to free slices and maps explicitly.
In Odin, for example:

```odin
kvstore_deinit :: proc(kv: ^KVStore) {
	delete(kv.store)
}

pubsub_deinit :: proc(ps: ^PubSub) {
	for topic, handlers in ps.subscribers {
		delete(handlers)
	}
	delete(ps.subscribers)
}
```

So let's add that to the `main` function before it exits, to ensure we clean up properly:

```odin
// ... existing code ...
main :: proc() {
	// ... existing code ...
	pubsub_deinit(&ps)
	kvstore_deinit(&kv)
}
```

Well, would you look at that: we just added proper memory management to our tiny KV store with pub/sub system, and all it took was a few lines of code. I'm still a huge fan of C, but this does feel nice and clean, not to mention really readable and easy to understand. Is our code now perfect and fully memory safe? Not quite. It still needs error handling and thread safety (way later) for production use, but it's a solid step toward responsible memory management.

### Adding Concurrency

To make our pub/sub system more realistic, we've introduced concurrency to the `publish` procedure using Odin's `core:thread` library. This allows subscribers to process messages simultaneously, mimicking real-world pub/sub behavior. Since `handler1` modifies `kv.store` via `kv_put`, we've added a mutex to `KVStore` to ensure thread-safe access to the shared map. Here's how it works:

- **Concurrent Execution with Threads**: The `publish` procedure now runs each handler in a separate thread created with `thread.create`. Each thread receives the handler and message via `t.user_args`, and `thread.start` kicks off execution. Threads are collected in a dynamic array (`threads`), which is cleaned up using `defer delete(threads)`. The `thread.join` call ensures the program waits for all threads to finish, and `thread.destroy` frees thread resources. This setup enables `handler1`, `handler2`, and `handler3` to process messages concurrently, with output order varying based on thread scheduling.
- **Thread Safety with Mutex**: Since `handler1` updates `kv.store` via `kv_put`, concurrent access could lead to race conditions, as Odin's maps aren't inherently thread-safe. To address this, a `sync.Mutex` is added to `KVStore`. The `kv_put` and `kv_get` procedures lock the mutex during map access, ensuring only one thread modifies or reads `kv.store` at a time. The mutex is initialized in `kvstore_init` and destroyed in `kvstore_deinit`.

```odin
publish :: proc(ps: ^PubSub, topic: string, msg: string) {
	if handlers, ok := ps.subscribers[topic]; ok {
		threads := make([dynamic]^thread.Thread, 0, len(handlers))
		defer delete(threads)

		for handler in handlers {
			msg_ptr := new(string)
			msg_ptr^ = msg

			t := thread.create(proc(t: ^thread.Thread) {
				handler := cast(proc(msg: string))t.user_args[0]
				msg_ptr := cast(^string)t.user_args[1]
				handler(msg_ptr^)
				free(msg_ptr)
			})
			t.user_args[0] = rawptr(handler)
			t.user_args[1] = rawptr(msg_ptr)
			thread.start(t)
			append(&threads, t)
		}

		for t in threads {
			thread.join(t)
			thread.destroy(t)
		}
	}
}
```

This implementation adds concurrency by running each handler in its own thread, allowing parallel message processing. The mutex ensures thread safety for `kv.store` updates in `handler1`, preventing race conditions. Odin's `core:thread` library simplifies thread management, offering a clean, pthread-like experience. Odin's threading feels a bit like C's pthreads but without the usual headache, and it's honestly a breeze to read and write. For this demo, the mutex version keeps everything nice and tidy. However, in a real application you'd still want more robust error handling, possibly a thread pool for efficiency, and some way to manage thread lifecycle, errors, and so on...
## Adding Persistence

I haven't added persistence to this code personally, because I feel it would quickly spiral the demo, which I wanted to keep simple and focused, into something much more complex. But if you wanted to add persistence, you could use Odin's `core:os` file APIs to read and write the `kv.store` map to a file. You would need to serialize the map to a string format (like `JSON` or `CSV`) when saving and deserialize it when loading. Luckily Odin has `core:encoding/json` and `core:encoding/csv` libraries that can help with this, which should at the very least make that step fairly trivial. So if you feel like it, give it a shot and let me know how it goes. Do note that this step is a lot harder than it may seem, especially if you want to do it properly and performantly.

## Now to Compile and Run

Now here's the thing: the first time I ran `odin build .` I thought I'd messed up somewhere, because it took a split second and produced no output, no warnings, nothing. But I did see that a binary was produced, named after the folder I was in. So I ran it:

```bash
❯ ./kvpub
Sub1 got: Welcome to dayvster.com
Sub2 got: Welcome to dayvster.com
Sub3 got: Welcome to dayvster.com
Sub1 got: Here's another message after 2 seconds
Sub2 got: Here's another message after 2 seconds
Sub3 got: Here's another message after 2 seconds
Last in kvstore: Here's another message after 2 seconds
```

And there you have it: a tiny key-value store with pub/sub capabilities, built in Odin. It compiled bizarrely fast; in fact I used a util ([pulse](https://github.com/dayvster/pulse)) I wrote to benchmark processes and their execution time, and it clocked in at a blazing 0.4 seconds to compile:

```bash
❯ pulse --benchmark --cmd 'odin build .' --runs 3
┌──────────────┬──────┬─────────┬─────────┬─────────┬───────────┬────────────┐
│ Command      ┆ Runs ┆ Avg (s) ┆ Min (s) ┆ Max (s) ┆ Max CPU%  ┆ Max RAM MB │
╞══════════════╪══════╪═════════╪═════════╪═════════╪═══════════╪════════════╡
│ odin build . ┆ 3    ┆ 0.401   ┆ 0.401   ┆ 0.401   ┆ 0.00      ┆ 0.00       │
└──────────────┴──────┴─────────┴─────────┴─────────┴───────────┴────────────┘
```

Well, I couldn't believe that, so I ran it again, this time with `--runs 16` to get a better average, and it still came in at a very respectable `0.45` seconds (max). **OK, that is pretty impressive**, and consistent. But maybe my tool is broken? I'm not infallible, after all. So I re-confirmed it with `hyperfine`, and it came out at:

```bash
❯ hyperfine "odin build ."
Benchmark 1: odin build .
  Time (mean ± σ):     385.1 ms ±  12.5 ms    [User: 847.1 ms, System: 354.6 ms]
  Range (min … max):   357.3 ms … 400.1 ms    10 runs
```

God damn, that is fast. Now, I know the program is tiny and simple, but that is still impressive, and it makes me wonder how it would handle a larger codebase. If you have any feedback or insights on this, please let me know; I am really curious. Just for sanity's sake I also ran `time odin build .` and it came out at, you've guessed it, `0.4` seconds.

### Right so it's fast, but how's the experience?

Well, I have to say it was pretty smooth overall. The compiler is fast, and the error messages are generally clear and helpful, if perhaps a bit... verbose for my taste. **For example**, I intentionally introduced a simple typo in the `map` keyword and named it `masp` to showcase what I mean:

```bash
❯ odin build .
/home/dave/Workspace/TMP/odinest/main.odin(44:31) Error: Expected an operand, got ]
        subscribers: masp[string][]proc(msg: string),
                                  ^
/home/dave/Workspace/TMP/odinest/main.odin(44:32) Syntax Error: Expected '}', got 'proc'
        subscribers: masp[string][]proc(msg: string),
                                   ^
/home/dave/Workspace/TMP/odinest/main.odin(44:40) Syntax Error: Expected ')', got ':'
        subscribers: masp[string][]proc(msg: string),
                                           ^
/home/dave/Workspace/TMP/odinest/main.odin(44:41) Syntax Error: Expected ';', got identifier
        subscribers: masp[string][]proc(msg: string),
                                            ^
```

I chose this typo specifically because I wanted to showcase how Odin handles errors when you try to build. It could simply say `Error: Unknown type 'masp'`, but instead it goes on to produce four separate errors that all stem from the same root cause. This is obviously because the parser gets confused and can't make sense of the code anymore, so you get every single error that results from the initial mistake, even when they are on the same line. Now, would I love to see them condensed into a single error message, since they stem from the same line and the same root cause? Yes, I would. But that's just my personal preference.

## Where Odin Shines

### Simplicity and Readability

Odin kinda feels like a modernized, somehow even more boring C, but in the best way possible. It's simple, straightforward, and easy to read. It does not try to have some sort of clever syntax or fancy features; it really feels like a no-nonsense, no-frills language that wants you to start coding and being productive as quickly as possible. In fact, this brings me to my next point.

### The Built-in Libraries Galore

I was frankly blown away by just how much is included in the standard and vendored (more on that later) libraries.
I mean, it has everything you'd expect from a modern systems programming language, but it also comes with a ton of complete data structures, algorithms, and utilities that you would usually have to rely on third-party libraries for in C or even Zig. For more info, just look at [Odin's Core Library](https://pkg.odin-lang.org/core/), and I mean really look at it and read it, do not just skim it. Here's an example: [flags](https://pkg.odin-lang.org/core/flags/), a complete command line argument parser, or even [rbtree](https://pkg.odin-lang.org/core/container/rbtree/), a complete implementation of a red-black tree data structure that you can just import and use right away.

But what really blew me away was...

### The Built-in Vendor Libraries / Packages

Odin comes with a set of vendor libraries that give you useful bindings to things like `SDL2/3`, `OpenGL`, `Vulkan`, `Raylib`, `DirectX`, and more. This is really impressive because it means you can start building games or graphics applications right away without having to worry about setting up bindings or dealing with C interop. Now, I'm not entirely sure whether these vendor bindings are all maintained and created by the Odin team; from what I could gather so far it would certainly seem so, but I could be wrong. If you know more about this, please let me know. All that aside, these bindings are really well done and easy to use.
For example, here's how you can create a simple window with SDL2 in Odin:

```odin
package main

import sdl "vendor:sdl2"

main :: proc() {
	sdl.Init(sdl.INIT_VIDEO)
	defer sdl.Quit()

	window := sdl.CreateWindow(
		"Odin SDL2 Black Window",
		sdl.WINDOWPOS_CENTERED,
		sdl.WINDOWPOS_CENTERED,
		800,
		600,
		sdl.WINDOW_SHOWN,
	)
	defer sdl.DestroyWindow(window)

	renderer := sdl.CreateRenderer(window, -1, sdl.RENDERER_ACCELERATED)
	defer sdl.DestroyRenderer(renderer)

	event: sdl.Event
	running := true
	for running {
		for sdl.PollEvent(&event) {
			if event.type == sdl.EventType.QUIT {
				running = false
			}
		}
		sdl.SetRenderDrawColor(renderer, 0, 0, 0, 255)
		sdl.RenderClear(renderer)
		sdl.RenderPresent(renderer)
	}
}
```

This code creates a simple window with a black background using SDL2. It's pretty straightforward and easy to understand, especially if you're already familiar with SDL2 or SDL3.

### C Interop

Odin makes it trivially easy to interop with C libraries. This is done via `foreign import`, where you create an import name and link to the library file, and `foreign` blocks to declare the individual functions or types. I could explain it with examples here, but Odin's own documentation does a way better job, and skipping them will keep this post from getting even longer than it already is. So please check out [Odin's C interop](https://odin-lang.org/news/binding-to-c/) documentation for more info.

## Where Odin Feels Awkward

### Standard Library Gaps

While Odin's standard library is quite comprehensive, there are still some gaps and missing features that can make certain tasks more cumbersome. For example, while it has basic file I/O capabilities, it lacks more advanced features like file watching or asynchronous I/O. Additionally, while it has a decent set of data structures, it lacks some more specialized ones like tries or bloom filters; I'd also love to see a B+ tree implementation in the core library.
But those are at most nitpicks, and finding third-party libraries or writing your own implementations is usually straightforward. However...

### No Package Manager

I really like languages that come with their own package manager; it makes it so much easier to discover, install, and manage third-party libraries and dependencies. Odin currently lacks a built-in package manager, which means you have to manually download and include third-party libraries in your projects. This can be a bit of a hassle, especially, I'd imagine, for larger projects with multiple dependencies.

### Smaller Nitpicks

- **Dir inconsistencies**: I love how it auto-named my binary after the folder I was in, but I wish it did the same whenever I ran `odin run` and `odin build`; I had to explicitly specify `odin run .` and `odin build .`. That felt a bit inconsistent to me, because if it knows the folder we are in, why not just use that as the default when we want to run or build in the current directory?
- **Error messages**: As mentioned earlier, while Odin's error messages are generally clear, they can sometimes be overly verbose, especially when multiple errors stem from a single root cause. It would be nice to see more concise error reporting in such cases; I'd love to see errors either collapsed into a single message with an array of messages from the same line, or somehow grouped together into blocks.

### Pointers are `^` and not `*`

I'm on a German keyboard, and the `^` character is a bit of a pain to type, especially compared to the `*` character, which is right next to the `Enter` key on my keyboard. I get that Odin wants to differentiate itself from C and C++, but this small change feels unnecessary and adds a bit of friction to the coding experience.
These are, as the title says, just minor nitpicks and in no way detract from the overall experience of using Odin; they're just minor annoyances that I personally had while using the language. Your experience may differ vastly, and none of these may even bother you.

## So Is Odin Just a More Boring C?

In a way, yes, kind of. It's very similar in approach and philosophy, but with more "guard rails" and helpful utilities to make the experience smoother and more enjoyable. And the, what I so far assume are, first-party bindings to popular libraries via the vendor packages really make it stand out in a great way: you get a lot more consistency and predictability than you would if you were to use C with those same libraries.

And I guess that's the strength of Odin: it's so boring that it just lets you be a productive programmer without getting in your way or trying to be too clever or fancy. I use "boring" here in an affectionate way; if you've ever read any of my other posts, you'll know that I do not appreciate complexity and unnecessary cleverness in programming, which is why, I suppose, I'm often quite critical of Rust, even though I do like it for certain use cases. In this respect I'd say Odin is very similar to Go: both are fantastic boring languages that let you get stuff done without much fuss or hassle. The main difference is that Go decided to ship with a garbage collector and Odin did not, which, for me personally, makes Odin vastly more appealing.

### Syntax and Ergonomics

Odin's syntax is like C with a modern makeover: clean, readable, and less prone to boilerplate. It did take me quite a while to retrain my muscle memory from `*` to `^` for pointers, and from `func`, `fn`, `fun`, and `function` to `proc` for functions. But once I got over that initial hump, it felt pretty natural.
Also, `::` for type declarations is a bit unusual and took me longer to get used to than I care to admit, as I'm fairly used to `::` being scope resolution in languages like C++ and Rust. But again, once I got used to it, it felt fine. Everything else about the syntax felt pretty intuitive and straightforward.

## Who Odin Might Be Right For

### Ideal Use Cases

- **Game Development**: Honestly, I totally see where people are coming from when they say Odin is great for game development. The built-in vendor libraries for SDL2/3, OpenGL, Vulkan, Raylib, and more make it super easy to get started, and the language's performance and low-level capabilities are a great fit for the demands of game programming.
- **Systems Programming**: Odin's manual memory management, low-level access, and performance make it a solid choice for systems programming tasks like writing operating systems, device drivers, or embedded systems. I will absolutely be writing some utilities for my Linux setup in Odin in the near future.
- **Desktop Applications**: Again, this is where those vendor libraries shine, making it easy to build cross-platform desktop applications with graphical interfaces, as long as you're fine with doing some manual drawing of components. I'd love to see a binding for something like `GTK` or `Qt` in the vendor packages in the future.
- **General Purpose Programming**: This brings me back to my intro, where I said it took me a while to realize that if Odin is good for game development, then realistically it should be good for anything and everything you wish to create with it. So yeah, give it a shot and make something cool with it.

### Where It's Not a Good Fit Yet

- **Web Development**: The net library is pretty darn nice and very extensive; however, it does seem more fit for general networking tasks than for simplifying your life as a web backend developer.
I'm sure there are already a bunch of third-party libraries for this, but if you're a web dev, you are almost spoiled for choice at the moment by languages that support web development out of the box with all the fixings and doodads.

## Final Thoughts

### Would I Use It Again?

Absolutely; in fact, I will. I've already started planning some small utilities for my Linux setup in Odin. I really like the simplicity and readability of the language, as well as the comprehensive standard and vendor libraries. The performance is also impressive, especially the fast compile times.

### Source Code and Feedback

You can find the complete source code for the tiny key-value store with pub/sub capabilities on my GitHub: [dayvster/odin-kvpub](https://github.com/dayvster/odin-kvpubsub)

If you create anything cool with it, I'd love to see it, so do hit me up on any of my socials. I'd love to hear your thoughts and experiences with Odin, whether you've used it before or are considering giving it a try. Feel free to leave a comment or reach out to me on Twitter [@dayvster](https://twitter.com/dayvsterdev).

Appreciate the time you took to read this post, and happy coding!

Dayvster 1 week ago

My Battle Tested React Hooks Are Now Open Source

## Don't yap just give me the links

OK, I get it, you're busy. Here you go:

- [GitHub Repo](https://github.com/dayvster/react-kata)
- [NPM Package](https://www.npmjs.com/package/react-kata)

Have a great day!

## What's This?

I've been developing in React since 2013, and it's been through a lot of changes and updates over the years. We've seen patterns come and go; however, when hooks were introduced in 2019, I was instantly... hooked. No wait, please don't leave, that was a good joke and you know it!

![](https://media3.giphy.com/media/v1.Y2lkPTc5MGI3NjExYjdtenpmZ2E1MnIwcTIxeGZ1cTV5OXQ1bGtobWN3am1wOTRkczlhbCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/28STqyG0HPzIQ/giphy.gif)

Anyhow, now that that's out of the way: I wanted to share with you a collection of hooks that I have written over and over and over again in my past projects. These hooks have been battle-tested in production applications and multiple client, personal, and professional projects.

## Why Open Source?

I have a post on my blog about why I believe everyone should open source their work: [Open Source Your Projects](https://dayvster.com/blog/open-source-your-projects/). I've recently been redesigning my website and fixing up my old blog posts, and when I stumbled upon this one and re-read it fully, I discovered I am a hypocrite by virtue of pure laziness. Yes, I did not open source these hooks sooner because I was simply lazy. I had them in various projects, repos, side projects, etc. But I never took the time to actually put them together in a single package and publish it. I just kept either navigating to those projects and copy-pasting the hooks around, or, worse yet, rewriting them from scratch every time I needed them, which is just plain dumb.

So I decided to stop being a hypocrite and finally open source a bunch of my stuff; part of that was collecting all of these React hooks and putting them together in a single package.
So here it is.

## Introducing React-Kata

- [GitHub Repo](https://github.com/dayvster/react-kata)
- [NPM Package](https://www.npmjs.com/package/react-kata)

React-Kata is a collection of battle-tested React hooks that I've used in various projects over the years. These hooks are designed to make your life easier and help you write cleaner, more efficient code.

### Why React-Kata?

Kata is a term used in martial arts to describe a set of movements and techniques that are practiced repeatedly to improve skill and mastery. Similarly, React-Kata is a collection of hooks that I've rewritten, reused, and refined over and over again, so I thought the name was appropriate. Also, I did martial arts as a younger person and wanted to show off a bit; lay off.

## Examples

- `useLocalStorage`: A hook that makes it easy to work with local storage in your React applications.
- `useDebounce`: A hook that debounces a value, making it easier to work with user input and avoid unnecessary re-renders.
- `useThrottle`: A hook that throttles a value, making it easier to work with user input and avoid unnecessary re-renders.
- `useTimeout`: A hook that makes it easy to work with timeouts in your React applications.
- `useInterval`: A hook that makes it easy to work with intervals in your React applications.
- `useSessionStorage`: A hook that makes it easy to work with session storage in your React applications.

...and many, many more; to see all of them, check out the [GitHub Repo Hooks overview](https://github.com/dayvster/react-kata?tab=readme-ov-file#-hooks-overview).

## Code Examples

### useShimmer

This hook generates a shimmer-effect SVG that can be used as a placeholder while content is loading. It takes width and height as parameters and returns an SVG string.
```tsx
import { useShimmer } from 'react-kata';

function ShimmerDemo() {
  const shimmer = useShimmer(400, 300);
  return ;
}
```

### useWhyDidYouUpdate

This hook helps you identify why a component re-rendered by logging the changed props to the console or handling them with a custom callback. I can not overstate how often this one has saved my ass in the past.

```tsx
import React, { useState } from 'react';
import { useWhyDidYouUpdate } from 'react-kata';

function Demo(props) {
  // Logs changed props to console by default
  useWhyDidYouUpdate('Demo', props);

  // Or provide a callback to handle changes
  useWhyDidYouUpdate('Demo', props, changedProps => {
    // Custom handling, e.g. send to analytics
    alert('Changed: ' + JSON.stringify(changedProps));
  });

  return <div>{props.value}</div>;
}
```

### useTheme

Lets you simply manage themes in your application. Supports auto, light, dark, and custom themes. For custom themes you simply input the name of the theme, and it will be applied as the `data-theme` attribute on the html element. It will also be stored in local storage and retrieved from local storage on page load.

```tsx
import React from 'react';
import { useTheme } from 'react-kata';

function Demo() {
  // Supports auto, light, dark, and custom themes
  const [theme, setTheme, toggleTheme] = useTheme(["auto", "light", "dark", "solarized"]);

  return (
    <div>
      <p>Current theme: {theme}</p>
      <button onClick={toggleTheme}>Toggle Theme</button>
      <button onClick={() => setTheme("solarized")}>Solarized</button>
      <button onClick={() => setTheme("auto")}>Auto</button>
    </div>
  );
}
```

### useReload

Allows you to pass a custom condition that, when met, will trigger a page reload.

```tsx
import React from 'react';
import { useReload } from 'react-kata';

function Demo() {
  // Only reload if user confirms
  const reload = useReload(() => window.confirm('Reload?'));
  return <button onClick={reload}>Reload Page</button>;
}
```

## Conclusion

I hope you find these hooks as useful as I have.
If you have any suggestions for new hooks or improvements to existing ones, please feel free to open an issue or submit a pull request on GitHub. I always appreciate feedback, criticism, suggestions, and especially pull requests. Happy coding!

Dayvster 3 weeks ago

Why Zig Feels More Practical Than Rust for Real-World CLI Tools

## Introduction

So when it comes to memory management, there are two terms you really need to know: the stack and the heap.

The stack is a region of memory that stores temporary data that is only needed for a short period of time. It operates in a last-in, first-out (LIFO) manner, meaning that the most recently added data is the first to be removed, as the name suggests. Basically, imagine a stack of plates: if you want to remove one plate, you remove the top one; remove the middle plate and disaster awaits, in this analogy. The stack is typically used for storing function parameters, local variables, and return addresses. It is fast and efficient because it has a fixed size and does not require dynamic memory allocation. The size of the stack is usually limited, and if a program tries to use more stack space than is available, it results in a stack overflow error. This can happen if a function calls itself recursively too many times or if a program allocates too much memory on the stack.

The heap, as the name suggests, is a region of memory used for dynamic memory allocation. Unlike the stack, the heap does not have a fixed size and can grow or shrink as needed. The heap is typically used for storing data that needs to persist beyond the lifetime of a single function call, such as objects or data structures that are created at runtime. Imagine the heap as a pile of clothes in a disorganized household: you can add or remove clothes as needed, and as long as the pile isn't too big, you can find what you need with relative speed and ease. But it will quickly become a nightmare if you let it grow out of control. The heap is managed by the operating system and requires dynamic memory allocation, which can be slower and less efficient than stack allocation. The heap can also become fragmented over time, since we do not always store data in a contiguous block of memory.
This can lead to performance issues and make it more difficult to allocate large blocks of memory.

### Rust's Borrow Checker

Rust's borrow checker is a pretty powerful tool that helps ensure memory safety at compile time. It enforces a set of rules that govern how references to data can be used, preventing common memory safety errors such as null pointer dereferencing, dangling pointers and so on. However, you may have noticed the words **compile time** in the previous sentence. If you have any experience with systems programming you will know that compile time and runtime are two very different things. Basically, compile time is when your code is being translated into machine code that the computer can understand, while runtime is when the program is actually running and executing its instructions. The borrow checker operates at compile time, which means it can only catch memory safety issues that can be determined statically, before the program is actually run. In other words, the borrow checker can only catch issues at compile time, but it will not fix the underlying issue: developers misunderstanding memory lifetimes or overcomplicating ownership. The compiler can only enforce the rules you’re trying to follow; it can’t teach you good patterns, and it won’t save you from bad design choices.

### Story Time

Last weekend I made a simple CLI tool to help me manage my notes. It parses `~/.notes` into a list of notes, then builds a tag index mapping strings to references into that list. Straightforward, right? Not in Rust. The borrow checker blocks you the moment you try to add a new note while also holding references to the existing ones. Mutability and borrowing collide, lifetimes show up, and suddenly you’re restructuring your code around the compiler instead of the actual problem.
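Here's a minimal sketch of that collision and the usual escape hatch. The names (`Note`, `add_note`) are hypothetical stand-ins for the real tool, not code from it: a tag index that stores `&Note` references borrows the `Vec` and freezes it against mutation, so one common workaround is to index notes by position instead.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the notes parsed from `~/.notes`.
struct Note {
    text: String,
}

// If `index` held `&Note` references, every later `notes.push` would be
// rejected: pushing needs `&mut notes` while the index still borrows it
// immutably. Storing positions (usize) instead keeps the borrow checker happy.
fn add_note(notes: &mut Vec<Note>, index: &mut HashMap<String, usize>, tag: &str, text: &str) {
    notes.push(Note { text: text.to_string() });
    index.insert(tag.to_string(), notes.len() - 1);
}

fn main() {
    let mut notes: Vec<Note> = Vec::new();
    let mut index: HashMap<String, usize> = HashMap::new();

    add_note(&mut notes, &mut index, "todo", "buy milk");
    add_note(&mut notes, &mut index, "errands", "water plants");

    // Look a note up through the index by position, not by reference.
    let todo = &notes[index["todo"]];
    println!("todo -> {}", todo.text);
}
```

The position-based index compiles cleanly, but notice the trade: the compiler no longer knows the `usize` entries stay valid, so removing a note can silently invalidate the index, which is exactly the kind of restructuring-around-the-compiler the paragraph above describes.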
In Zig, we would just allocate the list with an allocator, store pointers into it for the tag index, and mutate freely when we need to add or remove notes. No lifetimes, no extra wrappers, no compiler gymnastics; that’s a lot more straightforward.

### But Dave, isn't that the exact point of Rust's borrow checker?

Yes, it is. However, by using Zig I managed to get most of the benefits of Rust's memory safety without the complexity or ceremony of the borrow checker. All it took was some basic understanding of memory management and a bit of discipline. I was able to produce two CLIs that are both memory safe and efficient, but the Zig one was far more straightforward, easier to reason about, and took less time to write.

## What is Safety, Really, for CLI Tools?

This is where a lot of developers trip up. Rust markets itself as a language that produces safe software, a great marketing hook, but with one tiny problem: memory safety is one puzzle piece of overall software safety. I'm not sure if the Rust Foundation does this on purpose, a sort of blanket statement to make it seem like memory safety is the be-all and end-all of software safety, or if they just don't want to constantly prefix safety with memory safety (even though they should). But back to the main point: memory safety is just one aspect of software safety. You can argue whether it's a big or small piece of the puzzle, and I'd say it depends on the software and use case, but it's definitely not the only piece. So what exactly is safety in terms of CLI tools? Memory safety alone does not make a program safe. Your CLI tool can still crash, produce wrong results, corrupt files, leak sensitive data, be vulnerable to various types of attacks, or just behave in a way that is not expected.
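As a concrete illustration, here is a minimal, hypothetical Rust sketch (the name `tag_note` is mine, not from any real tool): it is perfectly memory safe and compiles without complaint, yet it silently loses data, because `HashMap::insert` replaces the old value under a reused key and the code ignores the returned `Option` carrying it.

```rust
use std::collections::HashMap;

// Memory safe, borrow-checker approved, and still wrong: reusing a tag
// silently clobbers the earlier note, because the Option returned by
// `insert` (holding the displaced old value) is ignored.
fn tag_note(index: &mut HashMap<String, String>, tag: &str, note: &str) {
    index.insert(tag.to_string(), note.to_string());
}

fn main() {
    let mut index: HashMap<String, String> = HashMap::new();
    tag_note(&mut index, "todo", "buy milk");
    tag_note(&mut index, "todo", "water plants"); // "buy milk" is gone for good

    // Nothing crashed, nothing segfaulted; data was still lost.
    println!("todo -> {}", index["todo"]);
}
```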
Let's go back to my `Notes CLI`: its Rust version may never segfault, but it could silently overwrite my index or tags or corrupt my files if I make a mistake in my logic, or perhaps it could store my file in a temporary location that is world readable, exposing my notes to anyone on the system. Is that safe? No. Would using Zig solve any of those issues automatically? Also no. Is my example a bit contrived? Yes, but it illustrates the point that memory safety is not the only thing that matters when it comes to software safety. In fact, you should also consider other aspects of safety such as:

- **Predictable Behavior**: The program should do what the user expects, even when input is malformed or unexpected. A CLI that panics on a missing file or fails silently on a corrupted note is not safe.
- **Avoiding Crashes or Silent Corruption**: The program should handle errors gracefully, providing meaningful feedback to the user instead of crashing or corrupting data. A CLI that crashes on a malformed note or silently overwrites existing notes is not safe.
- **Manageable Performance**: The program should perform well under expected workloads, avoiding excessive resource consumption or slowdowns. A CLI that becomes unresponsive when managing a large number of notes is not safe. This is where it really helps to understand the memory allocation and performance characteristics of your language of choice.
- **Sensitive Data Handling**: The program should protect sensitive data from unauthorized access or exposure. A CLI that stores notes in a world-readable temporary file is not safe.
- **Robustness Against Attacks**: The program should be resilient against common attack vectors, such as injection attacks or buffer overflows. A CLI that can be exploited to execute arbitrary code or corrupt data is not safe.

And this is precisely where Rust's memory safety shines: it can help prevent certain types of vulnerabilities that arise from memory mismanagement.
However, it's not a silver bullet that guarantees overall safety.

## The Borrow Checker: Strengths and Limitations

The borrow checker is impressive. It prevents dangling references, double frees, and mutable aliasing at compile time, things that would otherwise cause segfaults or undefined behavior. It’s why Rust can claim “memory safe without a garbage collector.”

### Strengths

- **Zero data races / mutable aliasing issues**: The compiler guarantees that only one mutable reference exists at a time, and that immutable references cannot be combined with mutable ones.
- **Strong compile-time guarantees**: Many memory-related bugs are caught before you even run the program.
- **Early bug detection**: You find mistakes before shipping code, which is a huge win in long-lived services or concurrent systems.

### Limitations / Pain Points

- **Cognitive overhead**: You’re constantly thinking about lifetimes, ownership, and borrow scopes, even for simple tasks. A small CLI like my notes tool suddenly feels like juggling hot potatoes.
- **Boilerplate and contortions**: You end up introducing clones, wrappers (`Rc`, `RefCell`), or redesigning data structures just to satisfy the compiler. Your code starts serving the compiler, not the problem.
- **Compile-time only**: The borrow checker cannot fix logic bugs, prevent silent corruption, or make your CLI behave predictably. It only ensures memory rules are followed.
- **Edge cases get messy**: Shared caches, global state, or mutable indexes often trigger lifetime errors that are annoying to work around.

At this point, the Rust borrow checker can feel more like a mental tax than a helpful tool, especially for short-lived CLI projects. You’re trading developer ergonomics for a compile-time guarantee that, in many CLI scenarios, may be overkill.

## Zig's Approach to Safety and Simplicity

Zig takes a different approach to safety and simplicity.
It provides manual memory management with optional safety checks, allowing developers to choose the level of control they need. This can lead to more straightforward code for certain use cases, like CLI tools. However, where it really shines is how it does manual memory management, which I've briefly touched upon in my other blog post [Zig Allocators Explained](https://dayvster.com/blog/zig-allocators-explained/). Long story short, Zig gives you allocators, a set of tools that help you manually manage your memory in a more structured and predictable way. You can use a general purpose allocator like `std.heap.GeneralPurposeAllocator`, or you can create your own custom allocator that fits your specific needs. This gives you more control over how memory is allocated and deallocated, which can lead to more efficient and predictable memory usage. Combined with Zig's `defer` statement, which lets you schedule cleanup code to run when a scope is exited and makes it easy to manage resources, this gives you most of the power of Rust's borrow checker without the complexity and ritual. However, it asks one thing of you in return: discipline. Your software will only be as safe as you make it. We can make the same claim about Rust: you can throw `copy` and `clone` and `unsafe` around your code and throw away all the benefits of the borrow checker in a heartbeat. The two languages are polar opposites in this regard. Zig places the burden on the developer and makes it easy for them to produce memory safe software, whereas Rust places the burden on the compiler and makes it hard for developers to produce memory unsafe software. Back to the main point: Zig's approach to memory management is, in my subjective opinion, more practical for most of my use cases, especially for CLI tools. It allows me to write straightforward code that is easy to reason about and maintain, without the overhead of the borrow checker.
I can allocate a list of notes, store pointers to them in a tag index, and mutate the list freely when I need to add or remove notes. No lifetimes, no extra wrappers, no compiler gymnastics; that’s a lot more straightforward. Oh, I almost forgot: Zig also has the `comptime` feature, which allows you to execute code at compile time. This can be useful for generating code, performing static analysis, optimizing performance, and even for testing, which is a really nice bonus and can be a small helper when it comes to memory safety.

## Developer Ergonomics Matter and Developers are not Idiots

When developing software we want to be productive and efficient, and most of all we want to be correct and produce good software. However, we also want to enjoy the process of creation and not feel like we are fighting the tools we use. Developer ergonomics is a term that refers to how easy and comfortable it is to use a programming language or framework. It encompasses things like syntax, tooling, documentation, and community support. A language with good developer ergonomics can make it easier to write correct code, while a language with poor developer ergonomics can make it harder to do so. I'd say that, as it currently stands, Rust has poor developer ergonomics but produces memory safe software, whereas Zig has good developer ergonomics and allows me to produce memory safe software with a bit of discipline. I personally prefer languages where I do not have to succumb to too much ceremony and ritual to get things done. I want to be able to express my ideas in code without having to constantly think about the underlying mechanics of the language, and yet I want to be responsible and produce good software. With C and C++ this was a tiny bit harder, as you basically had to learn some useful and practical memory management patterns and techniques; Zig comes with them baked in.
I feel like Zig really respects its developers and treats them like adults: it gives you the tools and expects you to use them wisely. Rust, on the other hand, feels like it treats developers like children that need to be constantly supervised and guided, which can be frustrating and demotivating. Developers are not idiots. Sure, even the smartest amongst us still produce memory safety issues or bugs in their software, and it's silly to assume that with enough training and practice we can become perfect developers, but we can become better developers. We can learn from our mistakes and improve our skills; we can learn to write better code and produce better software. It's not good to abstract that away to the compiler and assume that it will magically make us better developers; I don't personally think it will. In fact, not to sound too cliche, but I think that the journey to becoming a better developer is a series of mistakes and fixes: we learn from our mistakes and improve our skills. What does it say about a language that tries to abstract away the mistakes we make? Does it really help us become better developers?

## Final Thoughts

Rust is amazing if you’re building something massive, multithreaded, or long-lived, where compile-time guarantees actually save your life. The borrow checker, lifetimes, and ownership rules are a boon in large systems. But for small, practical CLI tools? Rust can feel like overkill. That’s where Zig shines. Lightweight, fast, and straightforward, you get memory safety without constantly bending over backward for the compiler. You can allocate a list, track pointers, and mutate freely without extra wrappers, lifetimes, or contortions. Iterating feels natural, the code is easier to reason about, and you get stuff done faster. Memory safety is important, but it’s just one piece of the puzzle.
Predictable behavior, maintainable code, and robustness are just as critical, and that’s exactly why Zig often feels more practical for real-world CLI tools. At the end of the day, it’s not about which language is “better.” It’s about what fits your workflow and the kinds of projects you build. For me, Zig hits the sweet spot: memory safe, low ceremony, and developer-friendly, perfect for small tools that actually get things done.

## References

- The Stack and the Heap
- Zig Allocators Explained
- Rustonomicon - The Dark Arts of Unsafe Rust
- Zig Documentation - Memory Management
- Rust Documentation - The Rust Programming Language
- Zig Documentation - Comptime
- Rust Documentation - Ownership
- Zig Documentation - Defer
- Rust Documentation - Error Handling
- Zig Documentation - Error Handling
- Rust Documentation - Concurrency
- Zig Documentation - Concurrency
- Rust Documentation - Testing
- Zig Documentation - Testing
- Rust Documentation - Performance

Dayvster 3 weeks ago

Are We Chasing Language Hype Over Solving Real Problems?

## Intro

As you may have heard or seen, there is a bit of controversy around Ubuntu adopting a rewritten version of GNU Core Utils in Rust. This has sparked a lot of debate in the tech community. This decision by Canonical got me thinking about this whole trend or push of rewriting existing software in Rust, which seems to be happening a lot lately. To put it bluntly, I was confused by the need to replace GNU Core Utils with a new implementation, as GNU Core Utils has been around since arguably the 90s, more realistically the 2000s, and has been battle tested and proven to be reliable, efficient, effective and, most importantly, secure, having basically never had any major security vulnerabilities in its entire existence. So why then would we deem it necessary to replace it with a new implementation in Rust? Why would anyone go through the trouble of rewriting something that already works perfectly fine and has been doing so for decades, when the end result at best is going to be a tool that does the same thing as the original and in the very best case scenario offers the same performance? What bothers me even more is the bigger pattern this points to. Are we as developers more interested in chasing new languages and frameworks than actually solving real problems? I strongly subscribe to the idea that software development is 60% problem solving and 40% creative exploration and innovation. But lately it feels like the balance is shifting more and more towards the latter. We seem to be more interested in trying out the latest and greatest languages and frameworks than in actually solving real problems.

## The Hype of New Languages and Shiny Object Syndrome

We've all been there in the past, haven't we? Getting excited about a new programming language or framework that promises to solve all our problems and make our lives easier.
It's easy to get caught up in the hype and want to try out the latest and greatest technology, but it's important to remember that just because something is new doesn't mean it's better. We need to be careful not to fall into the trap of "shiny object syndrome" where we chase after the latest trends without considering whether they actually solve our problems or improve our workflows. It's important to evaluate new technologies based on their merits and how they fit into our existing systems and workflows, rather than simply jumping on the bandwagon because everyone else is doing it. Now for the important question: **Do I think the developers of coreutils-rs are doing this just because Rust is the new hotness?** Short and simple: No, no I do not. I believe they have good intentions and are likely trying to improve upon the existing implementation in some way. However, I do not agree with them that there is a need for a rewritten version of GNU Core Utils in Rust. I also do not agree that GNU Core Utils is inherently insecure or unsafe.

### Why do we get Excited About New Languages?

It's also important to briefly touch upon the psychological aspect of why we get excited about new languages. New languages often come with new features, syntax, and paradigms that can be appealing to developers. They may also promise to solve problems that existing languages struggle with, such as performance, concurrency, or memory safety. Additionally, new languages can offer a fresh perspective on programming and can inspire creativity and innovation. Not to mention the community aspect: new usually means a changing of the guard, new people, new ideas, new ways of thinking about problems. All of these factors can contribute to the excitement and enthusiasm that developers feel when a new language is introduced.
This enthusiasm can sometimes lead to an almost zealous approach of wanting everything and anything to be written only in the new language by this new and fresh community of developers. This can lead to a situation where existing and well-established software is rewritten in the new language, even if there is no real need for it. This can be seen as a form of "language evangelism" where developers are trying to promote their favorite language by rewriting existing software in it.

## The Case of GNU Core Utils

As I've briefly touched upon earlier, GNU Core Utils is a collection of basic file, shell and text manipulation utilities that are fundamental to the operation of Unix-like operating systems. These utilities include commands like `ls`, `cp`, `mv`, `rm`, `cat`, `echo`, and many others. They are essential for performing everyday tasks in the command line interface (CLI) and are used by system administrators, developers, and users alike. Some of these can run hundreds of times per second, so performance is absolutely crucial. Even a small reduction in the performance of a utility that is run by some OS-critical daemon can have a significant impact on the overall performance of the system. GNU Core Utils has been optimized for this for about 30+ years at this point. Is it really worth tossing all of those lessons and optimizations out the window just to rewrite it in a new language? I've also briefly touched upon the fact that at best, in the absolute **best case scenario**, a rewritten version of GNU Core Utils in Rust would be able to match the performance of the original implementation. As we know, GNU Core Utils is mostly written in C with some C++ mixed in sparingly. So far benchmarks have shown time and time again that at best, with a lot of optimizations and tweaks, Rust can only ever match the performance of C, and in most cases it is actually slower.
So the best case outcome of this rewrite is that we get a tool that does the same thing as the original and at best offers the same performance. So what is the actual benefit of this rewrite? Where is the value, what is the actual problem being solved here?

## When Hype Overshadows Real Problems

This is the crux of the issue: it's very, very easy to get swept up in the excitement of a new language and want to use it for everything and anything under the sun. As developers we love novelty and communities with enthusiasm and fresh ideas. It's stimulating, it's fun, it feels like progress, it feels like we are finally building something again instead of just rehashing and maintaining. We all know from personal experience that creating a new project is more fun and enjoyable than maintaining an existing one, and this is a natural human tendency. Now do I think this is one of the reasons the developers of coreutils-rs are doing this? Yes, I do. But in the end they are solving a problem that does not exist.

### It's not just about Core Utils

Now with how often I've mentioned this specific example of GNU Core Utils, you might think I want to single them out or have some sort of grudge or specific issue with this particular project. No, not really... I think this project is indicative of the larger issue we face in the tech community. It's very easy to get caught up in the excitement of new languages and frameworks and want to use them for everything and anything. This can lead to a situation where we are rewriting existing software in new languages without considering whether it actually solves any real problems or improves our workflows.

## Problem Solving Should Be Our North Star

At the end of the day, software development is about solving real problems, not about chasing novelty, shiny new languages, or personal curiosity.
Every line of code we write, every framework we adopt, every library we integrate should be justified by the problems it helps solve, not by the hype around it. Yet increasingly, it feels like the industry is losing sight of this. We celebrate engineers for building in the “new hot language,” for rewriting tools, or for adopting the latest framework, even when the original solution worked perfectly fine. We reward novelty over necessity, excitement over impact. This is not a phenomenon isolated to core utils or systems programming; it happens in web development, mobile development, data science, and pretty much every other area of software development. We too often abandon tried and true solutions and ideas in favor of new and exciting shiny ones without considering whether they actually solve any real problems or improve our workflows. For example, web development went full circle with React Server Components, where we went from separation of concerns straight back to PHP-style mixing of HTML and logic, server rendering and delivering interactive components to the client. Or the whole GraphQL craze, where traditional REST APIs were abandoned en masse for a new and exciting way of doing things that promised to solve the dreaded problems of "over-fetching" and "under-fetching" of data. Yet in reality, it introduced a whole new set of problems and complexities that were not present in traditional REST APIs. Or perhaps the whole microservices and microfrontend craze, where a lot of projects were abandoned or rewritten to be split into smaller and smaller pieces. **Was it all bad? Should we always just stick to what works and only ever maintain legacy projects and systems? Heck no!** There is definitely a place for innovation and new ideas in software development. New languages, frameworks, and tools can bring fresh perspectives and approaches to problem-solving.
However, it's important to evaluate these new technologies based on their merits and how they fit into our existing systems and workflows, rather than simply jumping on the bandwagon because everyone else is doing it. We need to be more critical and thoughtful about the technologies we adopt and the projects we undertake. We need to ask ourselves whether a new language or framework actually solves a real problem or improves our workflows, or if we're just chasing novelty for its own sake.

## A Final Thought

At the end of the day, it’s not about Rust, React, GraphQL, or the latest microservices fad. It’s about solving real problems. Every line of code, every framework, every rewrite should have a purpose beyond curiosity or hype. We live in a culture that rewards novelty, celebrates “cool” tech, and often mistakes excitement for progress. But progress isn’t measured in how many new languages you touch, or how many shiny rewrites you ship; it’s measured in impact, in the problems you actually solve for users, teams, and systems. So next time you feel the pull of the newest hot language, framework, or tool, pause. Ask yourself: “Am I solving a real problem here, or just chasing excitement?” Because at the end of the day, engineering isn’t about what’s trendy; it’s about what works, what matters, and what actually makes a difference. And that, my friends, is the craft we should all be sharpening.

Dayvster 3 weeks ago

Dev Culture Is Dying: The Curious Developer Is Gone

## When Curiosity Led the Way

If you have been in software development for a while, you might remember a time when developers were launching unique and innovative products and projects just for the sake of curiosity, learning, or even just because they had a particular interest in a specific topic. This curiosity and problem solving mindset gave us some of the best tools that we still use today, such as VLC, Linux, Git, Apache HTTP Server, Docker (arguably), and many many more. These tools were not created by large corporations or solopreneurs looking to increase their MRR or ARR. They were created by curious developers who wanted to solve a unique problem they had or even just wanted to learn something new.

### Nights Spent Chasing Ideas and Tinkering

I still remember back in the 2000s (2003-2009) the nights I spent tinkering with new technologies, frameworks, and programming languages. I would often find myself staying up late into the night, fueled by curiosity and a desire to learn more about the craft of software development. I would make the dumbest of projects and the strangest of shortcuts just because I could and just to see if it would work. Even if it would only serve me and no one else, I would still make it because it was simply fun.

### Learning Without a Purpose

There is something to be said about learning without a clear purpose, goal, or even an expected reward at the end of your journey. It allows you to explore new ideas and concepts without the pressure of having to deliver a specific outcome. It allows you to be creative and even tinker with suboptimal implementations and solutions, or even some that are flat out insane or idiotic. Because at the end of your journey, you will not be met with disappointment that you did not create a new product or service that will generate passive income or be used by hundreds of thousands of people.
No, that was never your expectation going into it in the first place; you started the journey simply because you were curious and you wanted to create something, even if your target demographic was just yourself. This in many ways leads to a better learning journey and a more fulfilling experience, as you are not bound by the constraints of having to deliver a specific outcome or meet certain expectations. You can simply explore and learn at your own pace and in your own way. Don't get me wrong, this does not apply only to newcomers to the field or junior software developers; this applies to every single developer out there, even the most experienced ones. Personally I'd consider myself fairly experienced in the field of software development. I started learning C++ back in 2003 and my first job as a software developer was in 2008, so I've been in the field for a while now. In fact, the longer I am in the field the more I realize that I know nothing. There is always something new to learn and explore, and I find myself constantly tinkering with technologies, be they new or old. Learning is not only about new and shiny tech; sometimes it's assembly or system design, microcontrollers, embedded systems, etc. This is plain and simply the **tinkerer's mindset**, and I believe that this mindset is slowly dying out in the field of software development. I find fewer and fewer like-minded people, and I encounter more and more pushback along the lines of "Why are you wasting your time with that? You should be focusing on X, Y or Z instead." or "That is not going to help you in your career." or even "You should be focusing on building products that will generate passive income or be used by hundreds of thousands of people."

## The Era of Metrics and Shiny Things

I find that there has been a strong shift in developer culture over the past decade or so. A very strong and worrying shift towards metrics, revenue optimization, delivering "value" and "building for the masses".
I'm not sure this is a good shift, but it is one that is happening nonetheless. It seems to me that the focus has shifted from curiosity, learning, and a joy for creating cool things to a focus on metrics, observables, and problem solving for your niche audience. I see countless developers spending their free time using technologies they do not enjoy, building products they do not care about, for an audience they do not understand, simply because they believe that this is what they should be doing in order to be successful as a software developer or to be taken seriously in the field. Many who I talk to believe that this will set them apart from the rest of the pack, or that they are a temporarily embarrassed startup CTO/founder, or that they are building the next big thing that will paint their name in the stars and grant them the fame and respect of their fellow developers. But how can you ever hope to build something that huge if you do not even care about it? If the problem you are solving is not even a problem you yourself have, or worse yet, care about? This is where a deeper issue shows itself: when you don't care about what you are building, you start looking elsewhere for that sense of progress, accomplishment, or even identity. You become a Next.js developer, a React developer, a Rust developer etc... You start to identify yourself by the tools you use rather than the problems you solve or the things you create.

### Chasing Every New Framework or Idea

If you've identified with anything in this article so far, then take a moment and answer this to yourself honestly: how often did you find yourself working on your product or project, only to think, oh, but this new framework/library/module/plugin is so much better, I should be using that instead of what I am doing right now, I need to improve my stack, I need to be using the latest and greatest?
Because I am building something that will eventually be used by hundreds of thousands of people, so why stunt my growth, why risk being left behind? Naturally your webapp has to use the latest version of React or Next.js with its latest features and optimizations; a year or so back (2023-2024) that was React Server Components. Or maybe you just had to switch to the newest version of Vue.js or Angular because they have some new feature that will make your life easier or your app faster or more scalable. Or perhaps your utilities or backend are written in Go or Node or C#, and really you should be using Rust because it's just so damn fast and memory efficient. You can't pass that up, can you? So you title yourself after whatever language, framework or library you are currently using. You are no longer a software developer; you are a Next.js developer, a specialist in your field. You chase every new shiny thing and you write a product or service in that shiny thing, optimizing for MRR, ARR, DAU, MAU, SEO rankings, conversion rates and all that jazz. Then you wonder why your product or service is not taking off, why no one is using it, why you are not getting any traction. Weird... you used all the right things, you used the technology that you should have used, you optimized for all the right metrics, you did everything by the book. So why is it not working?

## What we Lost Along the Way

Constantly adopting the latest and greatest thing, not because it inspires you or because you care about it, but simply because you think you should be using it in order to be successful, is a recipe for disaster. Not just for you, but for our entire developer culture as a whole. I don't want to sound overly dramatic, but I do lament the loss of the curious developer, the tinkerer, the obsessed creative that just wants to build something cool even if nobody cares about it, even if it only solves their own problem.
I think we are slowly killing this mindset; it's slowly disappearing from our culture, and whether that will be good or bad, only time will tell. If you were to ask me, I'd say it's a very bad thing. Don't get me wrong, we have occasional bright sparks of innovation and creativity: HTMX, Bun, Astro, Zig, and many others come to mind. These are few and far between, but they show that there are still curious developers out there; they are just harder to find, shrinking in numbers, and being drowned out by the noise of metric seeking and revenue optimization.

## The World Moves On, But Some of Us Remember

I don't want to sound too much like a middle-aged man lamenting a world that is changing around him. I understand the world moves on, and you get more jaded and cynical as you get older. But I assure you this is not that. This is a pattern I've been noticing for a while, and it worries me. The tools and projects that were built by curious developers are still around and still in use, but compared to before we get relatively few new ones that are truly built out of curiosity and passion. There are occasional sparks, but not like before. Think of all the amazing software you use today, think which ones were made by insanely curious developers, and think of how old that software is or when it came out; then think of more modern software and how many of those were made by massive corporations or solopreneurs, or were just flat out bought or sold. I think we are losing something very important in our culture, and I hope we can find it again before it's too late. Before the curious developer is gone for good and we are left with a sea of software built with no privacy concerns, horrible monetization strategies, bloated frameworks and libraries, and no ownership mindset, not for you the consumer and not for the creator.
## The Death of Ownership is not Just for the Consumer

We've all seen the shifting tide: consumers no longer own their software. You may buy the newest Adobe suite, a JetBrains IDE, the latest iPhone or Android, or even the latest Windows, but you do not own it. It can be taken away from you at any time; you simply rent it, paying a monthly fee to use it. You do not own it, you simply have a license to use it. But do we ever take time to consider the loss of ownership for the creators? The developers, the curious tinkerers, the obsessed creatives that build these tools and software. Do they own it? Or do they simply rent it out to the highest bidder or sell it off to the largest corporation? Do people still want to build something that is uniquely theirs, or do they simply want to build the latest and greatest SaaS that they can rent out to the masses? Do they care about the software they build, or do they simply care about the metrics, the revenue, the growth? You can argue that Linus Torvalds owns Linux and cares about Linux the kernel; you can argue that Jean-Baptiste Kempf owns VLC and cares about VLC. Does Solomon Hykes own Docker and care about Docker? Does Daniel Ek own Spotify and care about Spotify? Does Mark Zuckerberg own Facebook and care about Facebook? Are they owners, true owners, of the product they built, or did they simply become renters of their own creation, slaves to metrics and revenue optimization? Again, I don't want to be overly dramatic, but this is an important question we should all be asking ourselves. I know I am asking myself this question more and more often as I see the world around me change and the developer culture shift towards metrics and revenue optimization.

## Carving Space for Curiosity and Innovation

I implore you to find time in your life to be curious and creative, to tinker and build something just for the sake of building it. Even if no one else cares about it, even if it only solves your own problem.
Make something cool, something unique; don't care about others, build it for yourself, build it because you want to and because you can. Don't let the world tell you what you should be doing, what you should be building, what you should be using. No matter how ambitious or dumb or idiotic it may seem to others, make it because you want to, because it makes you happy, because it makes you feel alive. Software development is a unique craft: it's equal parts creative and equal parts engineering, two opposing forces that when combined can create something truly amazing. Fight the temptation to add marketing into the mix and dilute the craft with it.

### Build what you Can't Ship

Have a project in mind that you've always wanted to tackle, but it never made sense to you to do it because it would never be used by anyone else or it would never make you any money? Do it anyway. Build it, tinker with it, learn from it. Who cares if you can't ship it to the masses, who cares if it's useless. Make it, create something from nothing, just because you can.

### Share the Spark

You might think this goes against my previous point, but it really does not. Share your work, share your creations, bring others into your world. If nobody responds, who cares? You made it, you created something from nothing. Maybe the value is in the journey and not the destination. Maybe the value is in the learning and not the outcome. Maybe the value is in the process and not the product. And who knows, maybe your unique problem will be shared by others, maybe your unique solution will inspire others to create something new, something unique. Maybe your curiosity will spark a fire in someone else, and they will go on to create something truly amazing. It's not impossible: it happened for Linux, it happened for VLC, heck, it happened for Git. Just try to conceptualize what an insane idea Git was, even when SVN was a well-established and widely used version control system.
Who in their right mind would think that a distributed version control system would be a good idea? Yet here we are: Git is the de facto standard for version control in software development.

## Conclusion

I wrote this article to lament the loss of the curious spark in our developer culture, not to criticize or judge anyone. I understand the world moves on, I understand that we all have to make a living and that we all have to pay the bills. But I also believe that we should not lose sight of what makes software development such a unique and special craft. If you've made it this far, thank you, sincerely, thank you. It's one of my longer articles, and I appreciate you taking the time to read it. I hope it has sparked something in you, I hope it has made you think about your own journey as a software developer, and I hope it has inspired you to be curious and creative again.

Dayvster 4 weeks ago

Why I Still Reach for C for Certain Projects

## The Project That Made Me Choose C Again

So a while back I got tired of the X server and decided to give Wayland a try. I use Arch (BTW) with a tiling window manager (dwm) and I wanted to try something new, since I had a couple of annoyances with the X server. I'd heard some good things about Wayland, so I thought, you know what, why not, let's give it a shot. After 30-45 minutes my Arch and Hyprland setup was done and ready to go. I was pretty happy with it, but I was missing some features I'd previously had, such as notifications for when somebody posts new content to their RSS feed. I quite like RSS feeds; I use them to keep up to date with blogs, news, streams, releases, etc. So I thought to myself, why not write a small program that checks my RSS feeds and sends me a notification when there's something new. Now the way I was gonna go about this was pretty simple. I would write a simple C daemon that would run in the background, check my RSS feeds on a random interval between 5 and 15 minutes, and if there was something new, send out a notification using `notify-send`. Then comes the tricky part: I wanted `swaync` to let me offer two buttons on the notification, one to open the link in my default browser and one to ignore the notification and mark it as ignored in a flat file on my system, so that I could see if I have to remove certain feeds that I tend to ignore a lot. Now here's the problem: you can't really do that with `swaync`. I mean, it does support buttons, but it doesn't really let you handle the button clicks in a way that would allow you to do what I wanted. So I had to come up with a workaround. The workaround was to have another C program run as a daemon that would listen on DBus for a very specific notification that would contain an action with a string of `openUrl("url")` or `ignoreUrl("url")` and then handle the action accordingly.
This way I could have `swaync` send out a notification with the buttons, and when the user (me) clicks on one of the buttons, it would send out a DBus notification that my second daemon would pick up and handle.

## Why C Was the Right Choice Here

Now you might be wondering why I chose C for this project and not something like Python, which would have let me write this specific program much faster and enjoy the rest of my weekend. Well, my answer to you is simple: I don't like Python, and I didn't feel like wasting multiple hundreds of MB of RAM on something this simple and small. In fact, I DID write it in Python and Go first with their respective DBus libraries; both were super easy to work with, and I had an initial working prototype within less than an hour. But as I ran a simple `htop`, I saw that the Python version was using around 150 MB of RAM and the Go version around 50 MB. Now don't get me wrong, I'm not on an old ThinkPad; I have RAM to spare, 32 GB to be exact. But why waste it on something this small and simple? Plus, C is just so much more fun and exciting to work with. So I set up my C project, which was just a simple `Makefile` and a couple of `.c` and `.h` files. I imported `dbus/dbus.h` and got to work. Now I'd be lying if I said it took me no time at all; in fact, it took me roughly 3-4 hours, which is a lot longer than Python or Go. But in the end I had a working prototype that was using around 1-2 MB of RAM and was super fast and responsive. I also got to brush up on my DBus skills, which were a bit rusty since I hadn't worked with it in a while. Getting a simple program like that from 150 to 50 to 1-2 MB of RAM usage is a huge improvement, and it really showcases the power and strength of C as a programming language.
Look at it this way: I may have spent multiple hours longer writing this program in C, but it will just continue to run in the background, using a fraction of the resources that Python or Go would have used, for a long time to come. Additionally, this probably won't be the only modification I make to my system. Now imagine if I were this careless with 10-20 small programs running in the background, and let's make a wild assumption that each of those programs would use about 50-200 MB of RAM. Now we're looking at 500-4000 MB of RAM usage, or precisely one Chrome tab! No, I don't think that's an acceptable tradeoff for a couple of hours of my time. So in the end, I think C was the right choice for this project, and I'm pretty happy with the result.

## Why Modern Languages Aren't Always the Best Fit

That long rant above is a cute story and anecdote, but I feel like it also highlights a bigger issue that developers face today. A lot of times we just wanna complete a certain task as quickly as possible and have it running before we've had our second coffee. I get it, we're all busy people, and when resources are as cheap as they are today, saving a few MB of RAM or a few ms of CPU time isn't really a big deal. But there is something so nice and satisfying about writing a small program in C that does exactly what you want it to do and nothing more. It's like a breath of fresh air in a world where everything is becoming more and more bloated and resource hungry. It feels a bit zen, taking everything down to the bare minimum and just focusing on the task at hand. I'd liken developing in C to woodworking: you have to be precise and careful with your cuts, but in the end you get a beautiful product that you can be uniquely proud of. Sure, you could go pick up a flat pack from your local IKEA and have a nice-looking table within an hour. But it will be soulless and generic and not really made well, not something you can proudly say to your visitors, "Hey, I built this myself."
I mean, you could... but they probably won't be as impressed with your flat pack assembly as if you were to show them a handcrafted table you made yourself.

## C Excels in Low-Level Systems Programming and Efficiency

So maybe the example project that I described at the beginning of the article is too rickety or not really highbrow enough; that's fair, I get that. But even if you think my project was stupid or silly, or could have been done better or easier, there is no denying that C simply is the gold standard for low-level systems programming and efficiency. C gives you unparalleled control over system resources, memory management, and performance optimization. This makes it the go-to choice for developing operating systems, embedded systems, and performance-critical applications where every byte of memory and every CPU cycle counts. For example, the Linux kernel, which powers a vast majority of servers, desktops, and mobile devices worldwide, is primarily written in C. This is because C allows developers to write code that can directly interact with hardware and manage system resources efficiently. If you drive any modern vehicle built in the past decade or so, chances are that you have multiple microcontrollers in your car running C code to manage everything from engine performance to emergency braking systems (AEB, ABS) and distance control systems (ACC). For these systems you are very limited in the amount of RAM you can afford, and most importantly you cannot afford much latency or delay in processing. Now how often have you heard about these systems failing and leading to fatal crashes? Not very often, I'd wager. Of course safety standards, regulations, and certifications play a big role in this, but the choice of programming language is also a big factor. C is a mature and well-established language with a long history of use in safety-critical systems, making it a reliable choice for such applications.
It does its job exceptionally well, it does it with a very small footprint, and most importantly you can run it in perpetuity on very limited hardware without worrying about failure or issues due to resource exhaustion.

## Real World Scenarios Where C Shines

Here are a few real-world scenarios where C is the preferred choice:

- **Operating Systems**: As mentioned earlier, the Linux kernel is written in C. Other operating systems like Windows and macOS also have significant portions written in C.
- **Embedded Systems**: C is widely used in embedded systems development for devices like microcontrollers, IoT devices, and automotive systems due to its efficiency and low-level hardware access.
- **Game Development**: Many game engines and performance-critical game components are written in C or C++ to achieve high performance and low latency.
- **Database Systems**: Many database management systems, such as MySQL and PostgreSQL, are implemented in C to ensure efficient data handling and query processing.
- **Network Programming**: C is often used for developing network protocols and applications due to its ability to handle low-level socket programming and efficient data transmission.
- **Compilers and Interpreters**: Many programming language compilers and interpreters are written in C to leverage its performance and low-level capabilities.
- **High-Performance Computing**: C is used in scientific computing and simulations where performance is critical, such as in numerical libraries and parallel computing frameworks.
- **System Utilities**: Many system utilities and command-line tools in Unix-like operating systems are written in C for efficiency and direct system access.
- **Device Drivers**: C is the language of choice for writing device drivers that allow the operating system to communicate with hardware devices.
- **Cryptography**: Many cryptographic libraries and algorithms are implemented in C to ensure high performance and security.
- **Audio/Video Processing**: Libraries like FFmpeg are written in C to handle multimedia processing efficiently.
- **Web Servers**: Popular web servers like Apache and Nginx are written in C to handle high loads and provide fast response times.

You get my point: C is everywhere, and it excels in scenarios where performance, efficiency, and low-level system access are paramount.

## When C Isn't the Right Choice

Now as you can see, I'm a pretty big proponent of C, and you'll often hear people who exclusively develop in C (sadly not me) say: "Anything and everything under the sun can be written in C." Which is true; if you were to remove all programming languages from existence save one, my choice would always be C, as we could potentially rebuild everything from the ground up. However, even I have to admit that C may not always be the best choice; everything has its ups and downs. So C may not be your best choice for:

- **Rapid Prototyping**: If you need to quickly prototype an idea or concept, higher-level languages like Python or JavaScript may be more suitable due to their ease of use and extensive libraries.
- **Web Development**: For building web applications, languages like JavaScript (with frameworks like React, Angular, or Vue) or Python (with frameworks like Django or Flask) are often preferred due to their ease of use and extensive web development libraries.
- **Data Science and Machine Learning**: Languages like Python and R are commonly used in data science and machine learning due to their extensive libraries and frameworks (e.g., TensorFlow, PyTorch, Pandas).
- **Mobile App Development**: For mobile app development, languages like Swift (for iOS) and Kotlin (for Android) are often preferred due to their platform-specific features and ease of use.
- **Memory Safety**: If memory safety is a primary concern, languages like Rust or Go may be more suitable due to their built-in memory safety features and garbage collection (Go).
- **Ease of Learning**: If you're new to programming, languages like Python or JavaScript may be easier to learn due to their simpler syntax and extensive learning resources.
- **Community and Ecosystem**: If you need access to a large community and ecosystem of libraries and frameworks, languages like JavaScript, Python, or Java may be more suitable due to their extensive ecosystems.

## Conclusion

So there you have it: why I still reach for C for certain projects. It's not always the fastest or easiest choice, but for small, efficient programs that need to run lean and mean, it often makes sense. C isn't flashy or trendy, but it gets the job done, and sometimes that's all that matters. Curious to hear how others tackle similar projects: C, Rust, Go, or something else?

Dayvster 1 months ago

Stop Abstracting and Start Programming

How often do you find yourself writing code with a clear goal in mind when suddenly this annoying urge at the back of your mind tells you:

> Oh man, this is so repetitive and single use, you really should abstract this, oh and this function up there, man, that function could be way more generic, you should totally use generics here, heck, you know what, make it all modular, do it now, abandon your task and do it now, you will thank me later.

That voice is an asshole. Ignore it, do not listen; it is trying to lead you down a path of pain and misery.

## Why Over-Abstraction Hurts

As programmers we have this idea in our minds that eventually the code we are writing right now will be reused by ourselves or someone else in the future, and obviously we want to make it as reusable and as generic as possible, so that when the time comes we can just plug and play it into something else with minimal effort and complete whatever task we have at hand with relative ease. The problem with this mindset is that it's mostly true but can easily be overdone to annoying extremes. You can easily end up in a situation where you have a codebase that is so abstracted and generic that it becomes impossible to understand what the code is actually doing. You end up with layers upon layers of indirection, making it hard to trace the flow of data and logic. This can lead to increased complexity, making it difficult for new developers (or even yourself) to understand and maintain the code. I've worked on countless codebases in the past where changing something as simple as a label required me to dig through 5-7 levels of files. This is not ideal, and it's really hard to build a mental model of the codebase when you have to jump through so many hoops just to understand what a piece of code is doing. In fact...

## Anecdote

I once worked on a project where I had to change the text on a single button. Should've been a two-minute job. Instead, it turned into a nightmare.
The label wasn't in the component. It wasn't in the props. It wasn't even in the constants file. Nope. It was hidden behind a "UIContextProvider" that wrapped another "GenericLabelRenderer" that passed down to a "LocalizedStringFactory." By the time I finally found the damn string, I had clicked through six different files and completely lost track of what I was even trying to do. That's the cost of over-abstraction. Something dead simple became a multi-hour scavenger hunt because someone thought they were being clever by making the text system "flexible" and "reusable." Spoiler: nobody ever reused it.

## When to Abstract your Code?

This is a tricky question, and the answer, as always, is: it depends. But there are some general rules and guidelines that personally help me stay productive and avoid over-abstraction.

### Write it First, Abstract Later

Start by writing the code in a straightforward manner. Focus on getting the functionality working first. Once you have a working solution, you can then look for opportunities to abstract and refactor.

### YAGNI (You Ain't Gonna Need It)

Avoid adding abstractions for features or use cases that you don't currently need. It's easy to fall into the trap of over-engineering for hypothetical future scenarios. Only abstract when you have a clear and present need for it.

### Keep it Simple

Strive for simplicity in your code. If an abstraction adds unnecessary complexity without a clear benefit, it's probably not worth it. Simple code is often easier to understand and maintain.

### Limit the Levels of Indirection

Try to keep the number of layers of abstraction to a minimum. If you find yourself needing to jump through multiple files or layers to understand a piece of code, it might be a sign that the abstraction is too deep.

### Use Descriptive Names

When you do create abstractions, use clear and descriptive names.
This can help make the purpose of the abstraction more apparent and reduce the cognitive load when reading the code. I find the most value in the first point: if you just make it work first, even with a quick and dirty implementation that merely proves it works, you can easily see which parts require refactoring and abstraction once the initial implementation is done. From what little experimentation I've done with this technique, I've found that I end up finishing my work a lot faster than if I had tried to abstract everything from the get-go, mostly because this way I avoid the dreaded **analysis paralysis**, where you just keep thinking about how to make something generic and reusable instead of getting the job done. And that last part is really crucial; I know I am repeating myself here, but it's really easy to find yourself spiraling into a rabbit hole of generic functions and complex what-if scenarios that drive your architecture in a direction that is not really useful for the task at hand.

## Conclusion

Abstraction is a powerful tool in software development, but like any tool, it can be misused. By focusing on simplicity, writing code that works first, and being mindful of when and how to abstract, you can avoid the pitfalls of over-abstraction and create code that is both maintainable and understandable. Remember, the goal is to write code that solves problems effectively, not to create the most generic and reusable code possible. So next time you feel that urge to abstract everything away, take a step back, breathe, and ask yourself if it's really necessary for the task at hand.

Dayvster 1 months ago

In Defense of C++

## The Reputation of C++

C++ has often been criticized for its complexity, steep learning curve, and most of all for its ability to let developers not only shoot themselves in the foot, but blow off their whole leg in the process. But do these criticisms hold up under scrutiny? In this blog post, I aim to tackle some of the most common criticisms of C++ and provide a balanced perspective on its strengths and weaknesses.

## C++ is "Complex"

C++ is indeed a complex language, with a vast array of features and capabilities. For any one thing you wish to achieve in C++, there are about a dozen different ways to do it, each with its own trade-offs and implications. So, as a developer, how are you to know which approach is the best one for your specific use case? Surely you have to have a deep understanding of the language to make these decisions, right? **Not really...** I mean, don't get me wrong, it helps, but it's not a hard requirement. Premature optimization is the root of all evil, and in C++, you can write perfectly fine code without ever needing to worry about the more complex features of the language. You can write simple, readable, and maintainable code in C++ without ever needing to use templates, operator overloading, or any of the other more advanced features. There's this idea that for everything you want to do in any programming language, you need to use the most efficient and correct approach possible. Python has this with its pythonic way of doing things, Java has this, C# has this, and Go has this. Heck, even something as simple as painting HTML onto a browser needs to be reinvented every couple of years and argued about ad nauseam. Here's the thing, though: in most cases, there is no one right way to do something. The hallowed "best approach" is often just a matter of personal or team preference.
The idea that if you just write your code in the "best" and correct way, you'll never need to worry about maintaining it is just plain wrong. Don't worry so much about using the "best" approach; worry more about writing code that is easy to read and understand. If you do that, you'll be fine.

## C++ is "Outdated"

C++ is very old; in fact, it came out in 1985. To put that into perspective, that's the same year the first version of Windows was released, and 6 years before the first version of Linux came out, or, to drive the point even further home, back when the last 8-bit computers were still being released. So yes, C++ is quite old by any standard. But does that make it outdated? **Hell no.** It's not like C++ has just been sitting around unchanged since its 1985 release. C++ has been actively developed and improved upon for some 40 years now, with new features and capabilities being added all the time. C++20, released in 2020, introduced a number of new features and improvements, and C++23 brought significant enhancements, particularly in the standard library and constexpr capabilities. Notably, concepts, ranges, and coroutines have been expanded, bringing modern programming paradigms to C++ and making the language more powerful and expressive than ever before. **But Dave, what we mean by outdated is that other languages have surpassed C++ and provide a better developer experience.** A matter of personal taste, I guess. C++ is still one of the most widely used programming languages, with a huge ecosystem of libraries and tools. It's used in a wide range of applications, from game development to high-performance computing to embedded systems. Many of the most popular and widely used software applications in the world are written in C++. I don't think C++ is outdated by any stretch of the imagination; you have to bend the definition of outdated quite a bit to make that claim.
## C++ is "Unsafe"

Ah, finally, we get to the big one, and yes, I will draw comparisons to Rust, as it's the "memory safe" language that a lot of people claim will or should replace C++. **In fact, let's get the main point out of the way right now.**

### Rewrites of C++ codebases to Rust always yield more memory-safe results than before

Countless companies have cited how they improved their security, or reduced the number of reported bugs or memory leaks, simply by rewriting their C++ codebases in Rust. **Now is that because of Rust?** I'd argue in some small part, yes. However, I think the biggest factor is that any rewrite of an existing codebase is going to yield better results than the original. When you rewrite a codebase, you have the opportunity to rethink and redesign the architecture, fix bugs, and improve the overall quality of the code. You get to leverage all the lessons learned from the previous implementation, all the issues that were found and fixed and that you already know about. All the headaches that would be too much of a pain to fix in the existing codebase, you can just fix in the new one. Imagine, if you will, that you've built a shed. It was a bit wobbly, and you didn't really understand proper wood joinery when you first built it, so it has a few other issues, like structural integrity and a leaky roof. After a few years, you build a new one, and this time you know all the mistakes you made the first time around, so you build it better, stronger, and more weatherproof. In the process, you decide to replace the materials you previously used; say, instead of using maple, you opt for oak. Is it correct to say that the new shed is better only because you used oak instead of maple? Or is that a really small part of the overall improvement? That's how I feel when I see these companies claim that rewriting their C++ codebases in Rust has made them more memory safe.
It's not because of Rust; it's because they took the time to rethink and redesign their codebase and implemented all the lessons learned from the previous implementation.

### But that does not change the fact that C++ is unsafe

Yes, C++ can be unsafe if you don't know what you're doing. But here's the thing: all programming languages are unsafe if you don't know what you're doing. You can write unsafe code in Rust, you can write unsafe code in Python, you can write unsafe code in JavaScript. Memory safety is just one aspect of safety in programming languages; you can still write unsafe code in memory-safe languages. Just using Rust will not magically make your application safe; it will just make it a lot harder to have memory leaks or memory safety issues. The term "unsafe" is a bit too vague in this context, and I think it's being used as a catch-all term, which to me reeks of marketing speak.

### Can C++ be made safer?

Yes, C++ can be made safer; in fact, it can even be made memory safe. There are a number of libraries and tools available that can help make C++ code safer, such as smart pointers, static analysis tools, and memory sanitizers. Heck, if you wish, you can even add a garbage collector to C++ if you really want to (please don't). But the easiest and most straightforward way to make C++ safer is to simply learn about smart pointers and use them wherever necessary. Smart pointers are a way to manage memory in C++ without having to manually allocate and deallocate it. They handle the memory management for you automatically, making it much harder to end up with memory leaks or dangling pointers, which is the main criticism of C++ in the first place.

## C++ is Hard to Read

Then don't write it that way. C++ is a multi-paradigm programming language; you can write procedural code, object-oriented code, functional code, or a mix of all three. You can write simple and readable code in C++ if you want to.
You can also write complex and unreadable code in C++ if you want to. It's all about personal or team preference. Here's a rule of thumb I like to follow for C++: make it look as much like C as you possibly can, and avoid using the more advanced features of the language unless you really need them. Use smart pointers, avoid raw pointers, and use the standard library wherever possible. You can do a heck of a lot of programming by just using C++ as you would C and introducing complexity only when you really need to.

### But doesn't that defeat the whole purpose of C++? Why not just use C then?

C++ is, for most practical purposes, a superset of C: you can write C code in C++, and it will work just fine. C++ adds a lot of features and capabilities on top of C. If you start with C, then you are locked into C, and that's fine for a lot of cases, don't get me wrong, but C++ gives you the option to use more advanced features of the language when you need them. You can start with C and then gradually introduce C++ features as you need them. You don't have to use all the features of C++ if you don't want to. Again, going back to my shed analogy: if you build a shed out of wood, you can always add a metal roof later if you need to. You don't have to build the whole shed out of metal if you don't want to.

## C++ has a confusing ecosystem

C++ has a large ecosystem built over the span of 40 years or so, with a lot of different libraries and tools available. This can make it difficult to know which libraries and tools to use for a specific task. But this is not unique to C++; every programming language has this problem. Again, the simple rule of thumb is to use the standard library wherever possible; it's well maintained and has a lot of useful features. For other tasks, like networking or GUI development, there are a number of well-known libraries that are widely used and well maintained. Do some research and find out which libraries are best suited for your specific use case.
**Avoid Boost like the plague.** Boost is a large collection of libraries that is widely used in the C++ community. However, many of the libraries in Boost are outdated and no longer maintained, and they also tend to be quite complex and difficult to use. If you can avoid using Boost, do so. Unless you are writing a large and complex application that requires the specific features provided by Boost, you are better off using other libraries that are more modern and easier to use. Do not add the performance overhead and binary-size bloat of Boost to your application unless you really need to.

## C++ is not a good choice for beginners

Programming is not a good choice for beginners, woodworking is not a good choice for beginners, and car mechanics is not a good choice for beginners. Programming is hard; it takes time and effort to learn, as all things do. There is no language that is truly good for beginners; everything has its trade-offs. Fact is, if you wanna get into something like **systems programming or game development**, then starting with Python or JavaScript won't really help you much; you will eventually need to learn C or C++. If your goal is to become a web developer or data scientist, then start with Python or JavaScript. If you just want a job in the programming industry, I don't know, learn Java or C#, both great languages that get a lot of undeserved hate but offer a lot of job opportunities. Look, here's the thing: if you're just starting out in programming, yeah, it's gonna be hard no matter what language you choose.
I'd actually argue that starting with C or C++ is far better than starting with something that obscures a lot of the underlying concepts of programming. By starting with Python or JavaScript, you are doing yourself a disservice in the long run: you trade the pain of learning the fundamentals while your understanding is still fresh and malleable for the pain of learning them later, when you have a lot more invested in your current understanding of programming. But hey, that's just my opinion.

## C++ vs Rust: Friends or Rivals?

Rust has earned a lot of love in recent years, and for good reason. It takes memory safety seriously, and its borrow checker enforces discipline that C++ often leaves to the programmer. That said, Rust is still building its ecosystem, and the learning curve can feel just as steep, just in different ways. C++ may not prevent you from shooting yourself in the foot, but it gives you decades of battle-tested tooling, compilers, and libraries that power everything from Chrome to Unreal Engine. In practice, many teams use Rust and C++ together rather than treating them as enemies. Rust shines in new projects where safety is the priority, while C++ continues to dominate legacy systems and performance-critical domains.

## Is C++ Still Used in 2025?

The short answer: absolutely. Despite the constant chatter that it's outdated, C++ remains one of the most widely used languages in the world. Major browsers like Chrome and Firefox are still written in it. Game engines like Unreal run on it. Automotive systems, financial trading platforms, and even AI frameworks lean heavily on C++ for performance and control. New standards (C++20, C++23) keep modernizing the language, ensuring it stays competitive with younger alternatives. If you peel back the layers of most large-scale systems we rely on daily, you'll almost always find C++ humming away under the hood.
## Conclusion

C++ is a powerful and versatile programming language that has stood the test of time. While it does have its complexities and challenges, it remains a relevant and widely used language in today's tech landscape. With the right approach and mindset, C++ can be a joy to work with and can yield high-performance, efficient applications. So the next time you hear someone criticize C++, take a moment to consider the strengths and capabilities of this venerable language before dismissing it outright.

Hope you enjoyed this blog post. If you did, please consider sharing it with your friends and colleagues. If you have any questions or comments, please feel free to reach out to me on [Twitter](https://twitter.com/dayvster).

Dayvster 1 month ago

Zig Error Handling

```zig
const std = @import("std");

const DocError = error{InvalidFormat};

fn processDocument(path: []const u8) !void {
    // `try` propagates `FileNotFound`, `AccessDenied`, and any other
    // member of `std.fs.File.OpenError` to the caller.
    const file = try std.fs.cwd().openFile(path, .{});
    // `defer` ensures the file is closed even if reading fails below.
    defer file.close();

    var buf: [256]u8 = undefined;
    const n = try file.read(&buf);
    if (n == 0) return DocError.InvalidFormat;
}

pub fn main() void {
    const docs = [_][]const u8{ "notes.txt", "report.txt" };
    for (docs) |doc| {
        processDocument(doc) catch |err| switch (err) {
            error.FileNotFound => std.debug.print("{s}: file not found\n", .{doc}),
            error.AccessDenied => std.debug.print("{s}: access denied\n", .{doc}),
            error.InvalidFormat => std.debug.print("{s}: invalid format\n", .{doc}),
            else => std.debug.print("{s}: {}\n", .{ doc, err }),
        };
    }
}
```

In this example, `processDocument` attempts to open and read a file. It can return:

- `FileNotFound` if the file doesn't exist,
- `InvalidFormat` if the file content is not as expected,
- `AccessDenied` if there are permission issues,
- or any other errors from `std.fs.File.OpenError` and `std.fs.File.ReadError`.

In `main`, we attempt to process multiple documents, handling errors appropriately based on their type. We use `try` to propagate errors from the file operations and custom checks, and `catch` in `main` to handle them. We also use `defer` to ensure the file is closed properly, even if an error occurs during reading.

## Conclusion

Zig's error handling model is simple, explicit, and powerful. By treating errors as values and using constructs like `try` and `catch`, Zig allows developers to write clear and maintainable code that gracefully handles errors. Whether you're writing low-level system code or high-level application logic, Zig's error handling model provides the tools you need to manage errors effectively. I hope this post has given you a good understanding of how error handling works in Zig and why it's a great fit for many programming scenarios.

Dayvster 1 year ago

Read Directory Contents With Zig

```zig
const std = @import("std");

pub fn main() !void {
    // 1. The allocator we want to use: the page allocator.
    const allocator = std.heap.page_allocator;

    // 2. Open the current working directory as an iterable directory.
    var dir = try std.fs.cwd().openIterableDir(".", .{});
    defer dir.close();

    // 3./4. An ArrayList to hold the file names, deinitialized on exit.
    var names = std.ArrayList([]const u8).init(allocator);
    defer {
        for (names.items) |name| allocator.free(name);
        names.deinit();
    }

    // 5. Iterate through the directory and collect the entry names.
    // The iterator reuses its internal name buffer, so we copy each name.
    var it = dir.iterate();
    while (try it.next()) |entry| {
        try names.append(try allocator.dupe(u8, entry.name));
    }

    // 6. Print the collected names to the standard output.
    for (names.items) |name| {
        std.debug.print("{s}\n", .{name});
    }
}
```

## Now let's break down the code:

1. At the very tippy top we define the allocator we want to use, in this case the page allocator.
2. We then open the current working directory as an iterable directory. The `openIterableDir` function takes two parameters: the first is the sub-path, which, like the name suggests, lets you open a subdirectory of the current working directory, and the second is a set of flags, which we are not using in this case.
3. We then create an `ArrayList` to hold the file names; this is a dynamic array that can grow and shrink as needed.
4. We use the allocator to initialize the `ArrayList` and then defer the deinit to free the memory when the function exits.
5. We then iterate through the directory contents using the directory iterator and append the file names to the `ArrayList`.
6. Finally, we print the contents of the `ArrayList` to the standard output.

And that's it! You've successfully read the contents of a directory in Zig using the page allocator. If you wish to learn more about the page allocator or any other allocator that Zig comes with, feel free to check out my article on [Zig Allocators Explained](/blog/zig-allocators-explained/).

## Conclusion

Thank you for taking the time to read the article to the end. I hope you've learnt something new and exciting and that you may consider using or learning Zig in the future. If you have any questions or feedback, feel free to reach out to me on [Twitter](https://x.com/dayvsterdev).

Dayvster 1 year ago

The Future of Frontend Development

## Introduction

The year was 2003, and I was a young kid, only 13 years of age. I'd had my first computer for about 7 years at that point, but this was the year my family switched from dial-up to DSL broadband internet. Which meant my allotted 1-2 hours of internet time per day turned into non-stop internet access. This was the first time I was able to explore the internet without any restrictions. I was able to watch videos, play games, chat with friends, and most importantly I was able to access forums, blogs, and websites that would teach me how to code.

My first language was C; in my youthful naivete I thought it would be super cool to learn how to become a game dev (like most kids). On one of the IRC channels I frequented, I was told that C++ was what all the cool kids were using, so I switched over to that. Through C++ I got into network programming, and through that I slowly but steadily got into web development. At that point I was an angsty 16-year-old teenager, and my language of choice was PHP. I built my first few websites for family and friends, and I was hooked on web development. I was able to build something that was accessible to everyone, something that was easy to share, and something that was easy to maintain.

But through the years the web has evolved the quickest and gone through some of the most drastic changes. We went from simple XHTML + PHP dynamic websites, to interactive web applications built with jQuery and AJAX, to full-blown SPAs (single-page applications) built with React, Angular, and Vue, to PWAs and WASM. It has been a wild ride, and I'm excited to see where the future of frontend development will take us. In this article I will be discussing some of the emerging trends and technologies that I believe will shape the future of frontend development.

**Note:** These are not predictions. I don't have a crystal ball in my possession, my smoke machine is broken, and I'm all out of tarot cards.
These are just my thoughts and opinions on things that, in the current year of 2024, are gaining traction and can potentially shape the future of frontend development. A lot of these are old ideas that are being revisited and reimagined, and some have been around for a while but are only now gaining traction.

## Emerging Trends and Technologies

### The Rise of the Headless CMS

Back in the ol' days of the web we simply defaulted to using WordPress for all our CMS needs. It was easy, user friendly, and could be molded into anything you wanted with some moderate effort, and best of all it did both the "frontend and backend" for you. That last part was ironically also its biggest weakness: it limited your choices, and because it had to accommodate more and more user needs, it became bloated and slow.

#### What is a Headless CMS?

A headless CMS is essentially a content management system stripped to its essentials; it only serves as an interface between your content and your frontend. It doesn't care about how you display your content; it only cares about how you store and manage it. This means you are free to use any frontend technology you want to display your content; heck, you can even go full static site generator, or have your mobile app or desktop app consume the same content from the same CMS as your website. It follows the UNIX principle of "do one thing and do it well"; in this case, that one thing is managing your content.

I believe that the rise of the headless CMS cannot and will not be stopped, and it will only continue to grow in popularity. The benefits far outweigh the negatives. In fact, some headless CMSs out there let you build your own bespoke CMS dashboard with minimal hassle, independent of your data source, for example Directus, Strapi, and Sanity. What's more, plenty of these headless CMSs are offered as a service that your end clients can simply access and manage via their browser.
This means you can build a website for a client, and they can manage their content without having to worry about the technical details of how their website works or how to deploy it.

### The Continued Dominance of JavaScript Frameworks

Every second a new JavaScript framework is born; every minute a new JavaScript framework dies. There's no denying that JavaScript frameworks have had a huge impact on frontend development. They have made it incredibly easy to build complex web applications with very minimal effort, for a small trade-off in maintainability, occasional performance issues, and production-breaking bugs due to some obscure JavaScript quirk.

I don't think JavaScript frameworks are going anywhere anytime soon. In fact, I believe that they will continue to dominate the frontend development landscape for the foreseeable future. They have become the de facto standard for building web applications, and they have a huge ecosystem of libraries and tools that make building web applications a breeze. I believe they will evolve further and adopt more features from other languages and frameworks. For example, React has adopted server components, which allow you to essentially run Node.js-style backend code in your frontend codebase. This means you can do things like database queries, file system operations, and other backend operations in your frontend code; they will then be executed on the server, and the result will be sent to the client. Is that good, bad, or ugly? Yes, all of the above. But it's a trend that I believe will continue to grow and evolve.

### The Integration of WebAssembly

In recent years WebAssembly has gained a lot of traction and has become a popular choice for building high-performance web applications.

#### What is WebAssembly?

WebAssembly is a binary instruction format for a stack-based virtual machine. It is designed as a portable target for compilation of high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications.
Basically, it allows you to run code written in other languages, like C, C++, and Rust, in the browser. This means you can build web applications that run at near-native speed in the browser, which is great for performance-intensive applications that would traditionally be desktop applications. Figma is a great example of a web application that uses WebAssembly to run incredibly fast and smooth in the browser.

I believe that downloading and installing desktop applications will become more and more of a niche thing for most common users of the web. WebAssembly will allow developers to build high-performance web applications that can run on any device with a modern browser. This removes a pretty big chunk of friction, and humans will always take the path of least resistance: why download and install an application when you can just open a browser and use it right away? Now, I am aware of all the security implications of that, and also that this means we won't ever truly own software anymore, which I personally don't think is the best thing for the future of software, but that's a topic for another article.

### The Growing Importance of Web Accessibility

Web accessibility has been a hot topic for a while now, and it's only going to get hotter (rawr). The web is for everyone, and it should be accessible to everyone. This means that we as developers have a responsibility to make sure our websites are accessible to everyone. Odds are that you, the reader, are a healthy individual, but many users out there are not, and they should have the same access to the wonderful place that is the web as you or I do. I believe that web accessibility will become a standard requirement for all websites in the future. Whether that is because of legislation or because of a shift in mindset remains to be seen. But I believe it will happen.
Or at least I hope it will happen.

### The Evolution of Progressive Web Apps (PWAs)

Progressive Web Apps, or PWAs for short, have been around for a while now, but I feel like they've only just started to gain some traction; it's been slow for sure.

#### What is a Progressive Web App?

A progressive web app is basically just a web app with a manifest file that makes it installable on your device, typically mobile but also desktop. It also has a service worker that allows it to work offline and load faster. It has an interesting dynamic with WASM as well: you could build your PWA with WASM and have it run in the browser while also being easily installable on any device. This means you could build a decently performing mobile app with just web technologies and have it be installable anywhere. I'm personally not the biggest fan of PWAs, but I can see the appeal for some use cases.

I believe that PWAs will become more and more popular in the future, as they offer a way to build cross-platform applications with web technologies without relying on Electron or having to rewrite your application for every platform or device. They can also bypass phone app stores and desktop stores, which has caused some controversy with Apple in the past.

### The Ascendancy of Artificial Intelligence (AI) and Machine Learning (ML)

AI and ML have been around for a while now. AI has kinda exploded in recent years with ChatGPT, Copilot, and other AI-powered tools; in fact, it seems like 2024 is the year where people try to put AI into everything. Only a matter of time till they try to put themselves into AI, but that's a different story. I believe that AI and ML will become really useful tools for a lot of people, technologies, and use cases. Personally I'm not a big fan of using AI for creative work; I find that humans are a lot better at actually creating creative things than AI is.
In fact, AI cannot really create as of yet; it can just kinda assume and predict based on the data it has been fed. So there's a point of diminishing returns when it comes to AI and creative work. But plenty of companies will use AI in creative ways, such as meeting and call assistants that can take notes, do sentiment analysis, and recall the entire meeting in great detail for you. To put it simply, imagine if you had an AI assistant that you could ask about that one point your manager was making during that big meeting he wanted everyone to attend, but you were too busy browsing social media on your phone. Asking your manager to repeat himself long after the meeting has ended would be embarrassing, or income-changing, depending on your manager. But if you had an AI assistant, you could simply ask it all your questions.

### The Rise of Streaming and Real-Time Communication

At the moment the web operates on a request-response model. You make a request to a server, wait a bit while the server processes your request and prepares a response, and then it sends the response back to you. This is fine, but fairly slow. I believe there will be a point at which this type of communication between server and client is no longer the most efficient way to communicate, and something will make the cost of the alternative a lot lower. Basically, socket communication is superior to request-response communication in almost every single way; its main drawbacks are that it's more expensive and more difficult to maintain and debug. But I think even simple websites will simply stream their content over to your browser in the future. The only interesting question is what something like SEO will look like when you don't really have a static page to index anymore.
I personally think that SEO will become less and less important in the future as the web becomes more and more of a real-time experience, plus the fact that much of the use case for search engines is already covered by AI assistants at the moment.

### The Power of Web Workers and Service Workers

Misko Hevery has a tendency to be ahead of his time but not popular. He has recently created Qwik, a new-ish JavaScript framework that relies heavily on web workers and service workers to deliver a fast and smooth experience to the user. Now, you may know Misko as the creator of Angular, the first hugely popular and widely used JavaScript frontend framework.

Web workers and service workers are a way to run JavaScript code in the background of your application. This means you can do things like offload heavy computations to a web worker and have it run in the background while your main thread is free to do other things. This can greatly improve the performance of your web application and make it feel more responsive and smooth. I think Misko is on the right track with this; workers will most likely be a very important building block of modern JavaScript frameworks in the future. However, I believe that it won't be Qwik that popularizes this, but rather React or Svelte or some other completely new framework.

### The Rise of Low-Code/No-Code Development

Let me preface this by saying that I hate that this is becoming so popular; in fact, the only thing I hate more is that there are thought leaders and guides out there that recommend building your solo startup by heavily utilising no-code tools to bridge something quick and dirty together as a proof of concept. No-code and low-code tools are tools that let you create with minimal to no coding knowledge or experience required. They are great for prototyping and building simple applications.
You can create a website, a mobile app, or a web application without writing a single line of code. I believe that due to the entrepreneurial spirit of the current generation and the rise of the POC and MVP culture, no-code and low-code tools will become increasingly popular in the future. They offer folks the ability to scaffold their idea out into a working prototype to showcase to investors, or even to launch to the general public. I don't like it, but that's what I think will happen.

### The Continued Growth of Static Site Generators

What is old will be new again. Static sites once ruled the web, then dynamic sites took over, and now static sites are making a comeback. We're kinda forming two camps in web development recently: on one side you've got the minimal-effort, wire-a-lot-of-different-tools-together approach to getting a working web app or website, and on the other side you have the let's-keep-it-simple, build-everything-as-simply-as-possible gang (**cough HTMX cough**). Static site generators bridge that gap in a way, because with tools like Astro and Svelte you can build a static site that is also a web application, with all the complexity and interplay of different tools and libraries that you want. I believe that static site generators will continue to grow in popularity, as they offer a way to build fast and performant websites that can also scale into more complex and feature-rich web applications within the same codebase.

## Conclusion

The web still remains one of the quickest-evolving and most exciting platforms to develop for. In just 20 years we went from simple static websites filled with GIFs and auto-playing music to fully fledged applications that run in your browser and let you do work that was previously only possible after a very long desktop application installation process. It hasn't been that long, but the web has come a long way, and I'm excited to see where it will go in the future.
This blog post outlined a couple of my thoughts on the trends and technologies that I see emerging or growing in popularity at the moment. Whether or not I'm right remains to be seen, but I'm excited to see where the future of frontend development will take us.

Dayvster 1 year ago

Building Blocks Of Zig: Unions

```zig
const std = @import("std");

// A tagged union: a value that can hold one of several possible types.
const ConfigValue = union(enum) {
    int: i64,
    float: f64,
    string: []const u8,
};

pub fn main() void {
    const value = ConfigValue{ .string = "dark" };
    // Switching on a tagged union handles every possible variant.
    switch (value) {
        .int => |i| std.debug.print("int: {d}\n", .{i}),
        .float => |f| std.debug.print("float: {d}\n", .{f}),
        .string => |s| std.debug.print("string: {s}\n", .{s}),
    }
}
```

## Conclusion

So there it is: we've learned about Zig unions and how they can be used to define types that can hold one of multiple possible value types. I know it may seem a bit confusing at first, but once you get the hang of it you'll see how powerful unions can be. I'd recommend using them whenever you are dealing with values that can have multiple different possible types, such as configuration values, user input, APIs with different return types for the same field, etc. I hope you enjoyed this post and learned something new about Zig. If you have any questions or comments, feel free to reach out to me on [Twitter](https://twitter.com/dayvsterdev).

Dayvster 1 year ago

Jotai atomWithStorage

```jsx
import { useAtom } from "jotai";
import { atomWithStorage } from "jotai/utils";

// Works just like a regular atom, but persists to localStorage.
const themeAtom = atomWithStorage("theme", "dark");

function ThemeSwitcher() {
  const [theme, setTheme] = useAtom(themeAtom);
  return (
    <button onClick={() => setTheme(theme === "dark" ? "light" : "dark")}>
      Current theme: {theme}
    </button>
  );
}
```

As you can see, you can use `atomWithStorage` just like you would use `atom`; it really is as easy as that. It will automatically default to persisting the data in `localStorage`, but you can also pass in a third argument to specify where you want to store the data, for example `sessionStorage` wrapped with `createJSONStorage` (also exported from `jotai/utils`):

```jsx
const themeAtom = atomWithStorage(
  "theme",
  "dark",
  createJSONStorage(() => sessionStorage)
);
```

### Learn more about Jotai

You can find out more about why I love Jotai in my post [Why I love Jotai](/blog/why-i-love-jotai/), and you can find the official documentation [here](https://jotai.org/docs/introduction).

Dayvster 1 year ago

Building Blocks Of Zig: slices

```zig
const std = @import("std");

pub fn main() void {
    var arr = [_]i32{ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

    // create a slice of the first 5 elements of the array
    const slice = arr[0..5];

    // access elements just like you would with an array
    const firstElement = slice[0];
    std.debug.print("first element: {d}\n", .{firstElement});

    // iterate over the slice
    for (slice) |item| {
        std.debug.print("{d}\n", .{item});
    }
}
```

In this example we are simply iterating over each element in the slice and printing it out to the console, much like you would with an array.

## Conclusion

Slices are a powerful feature in Zig that allow you to work with contiguous sequences of elements. They are similar to arrays but have a few key differences: a slice is a pointer to the first element of the sequence plus a length. You can create slices using the `[]` syntax and access elements in a slice using the `[]` syntax. You can also iterate over a slice using a for loop. Slices are a fundamental building block in Zig and are used to represent arrays, strings, and other data structures that are contiguous in memory and have a length that is known at runtime.

Dayvster 1 year ago

Building Blocks Of Zig: Understanding Structs

```zig
const std = @import("std");

// A struct with default field values.
const Goblin = struct {
    health: u32 = 100,
    next: ?*Goblin = null,

    // Structs can have methods.
    fn isAlive(self: Goblin) bool {
        return self.health > 0;
    }
};

// A struct grouping related data: a simple linked list of Goblins.
const GoblinList = struct {
    first: ?*Goblin = null,
    last: ?*Goblin = null,
    len: usize = 0,
};

pub fn main() void {
    var goblin = Goblin{};
    const list = GoblinList{ .first = &goblin, .last = &goblin, .len = 1 };
    std.debug.print("goblins: {d}, alive: {}\n", .{ list.len, goblin.isAlive() });
}
```

## Conclusion

Structs are a fundamental building block of Zig and are used to group related data together in a logical and structured way. Structs can have default field values, be packed, have undefined fields, have methods, and be returned from functions, which is how Zig expresses generics. Structs are a powerful and flexible way to model data in Zig and are used extensively in the standard library and in many third-party libraries.

Dayvster 1 year ago

Zig is the Next Big Programming Language

## What is Zig?

Zig is a general-purpose programming language designed for performance, safety, and simplicity. It aims to provide modern features while remaining lightweight and efficient. Developed with a focus on readability and robustness, Zig offers compile-time memory safety checks, error handling without exceptions, and predictable behavior. It supports various platforms, making it suitable for multiple application domains.

What that means is that Zig is a language designed to be easy to read, write, and maintain, all while having performance and efficiency comparable to C, with a few modern features that make it a joy to work with. I've been using Zig for a few years now (since 2020) and I've been loving it. It's been my go-to language for a lot of my hobby projects, server-side applications, and even some zany web dev projects that I have running on my home network.

## Can Zig Replace C?

That is hard to say. Personally, I think it's entirely possible, because Zig is also a drop-in compiler for C and C++. You can use Zig to compile your C and C++ code without ever needing to use GCC, Clang, or MSVC. This means all C and C++ code can be compiled with, and even interop with, Zig.

## How does Zig manage memory?

Zig has a very unique way of handling memory allocation, which is drastically different from the approach Rust took. With Zig, you have full control over how you allocate and deallocate your memory. That means you have arena allocators, page allocators, general-purpose allocators, and so on. You can even create your own custom allocator, and together with comptime, you can even make a custom allocator that automatically deallocates memory when it goes out of scope, which may be super familiar to a lot of you Rust devs out there.

## When does Zig outshine Rust?

The performance of Zig and Rust is pretty much neck and neck; that is to say, the differences in performance are negligible.
However, Zig has a lot of cool features that Rust does not and most likely never will have, such as the ability to compile C and C++ code, interop with C and C++, consume C and C++ libraries, do manual memory management, and use comptime, which is similar to Rust macros but much more powerful and straightforward. So I wouldn't say it's a matter of Zig outshining Rust, but rather Zig being a different tool with a different approach to solving problems; an approach that I find much more enjoyable and straightforward than Rust's.

## Can Zig compile C++?

Yes, as mentioned above, one of Zig's biggest strengths is its ability to compile C++ and C code.

## Can Zig be used for embedded systems?

I've used Zig on my Raspberry Pi and it's been a joy to work with. Now, I know that a Raspberry Pi is not strictly the same as proper embedded systems programming, but it's at the very least comparable. Basically, if you can run C on it, you can run Zig on it for the most part. So yes, Zig can be used for embedded systems programming.

## Zig vs Go

Much like Go, Zig is a systems programming language, but unlike Go, Zig is a low-level language that gives you full control over memory allocation and deallocation. Go, on the other hand, is a garbage-collected language that does not give you that level of control. There are a lot of situations where garbage collection is too slow, too inefficient, or simply does not fit the use case. If that is your current use case, then Zig would be a much better choice than Go. Of course, on the other hand, Zig is, at the time of writing this article, still not stable and nowhere near as mature as Go, so do take that into consideration.

## Zig vs C++

Zig is a much simpler language than C++. While it builds on the same principles as C++ and allows you to allocate and deallocate memory manually just like C++, it does not have the same level of complexity that C++ has. Zig is designed to be a modern language that is easy to read, write, and maintain.
C++ has been around for a while and powers some of the biggest and most vital applications in the world. But it is not without its flaws, one of which I specifically dislike: how many different ways it gives you to achieve what is essentially the same thing. Zig, on the other hand, is much more straightforward. Also, Zig does not have, nor will it ever have, classes, so if OOP is something you can't live without, then Zig may not be for you. But I'd still urge you to try it and experience a paradigm shift. I know I did, and I've never looked back.

## When will Zig be stable?

That is hard to say. The Zig team is working hard to get Zig to a stable release, but as of writing this article, it is still on version 0.something, so it's hard to say when it will be stable. What I can tell you is that you can already write a heck of a lot of stuff with Zig.

## Does Zig have a package manager?

No, Zig does not currently have a package manager. One is planned and actively worked on and will most likely be released sometime in 2024. But for now, we sadly have to manage our dependencies manually.

## Does Zig have async?

No, Zig at the moment of writing this article does not have async, but it is planned and actively worked on, so it's only a matter of time before Zig has async. Also, as a side note, you can split your Zig application into multiple binaries and let the OS scheduler run them concurrently, which does a much better job than most async implementations in other languages; give it a try, you may be pleasantly surprised.

## Why should I use Zig?

Zig is a fantastic modern language that is a joy to use and work with. It reads wonderfully and gives you, the developer, complete autonomy over your entire application, including how you manage your memory. Instead of managing your memory behind the scenes, it gives you the tools and the power to do that yourself.
I find this approach of teaching instead of enforcing much better, and arguably even safer in practice, than Rust's approach of enforcing strict memory safety by having the developer jump through hoops. Not to mention that it's still entirely possible to write memory-unsafe code in Rust; inside `unsafe` blocks it's painfully easy to do so. So use Zig if you want a modern, performant, and memory-conscious language that is actively developed by a team of incredibly talented developers.

## Is Zig hard to learn?

Zig the language itself is not hard to learn; I'd even say it's surprisingly simple. The main difficulty comes from the fact that Zig is a low-level systems programming language, so you will have to understand some core computer science concepts and have a good low-level understanding of how your system works. But if you are willing to put in the time and effort to learn Zig, you will be rewarded.

## Conclusion

Zig is a fantastic language that I've loved using and will continue to use, and I will most likely write about it a lot more in the future. It's a language that I enjoy working with, and I firmly believe it has a bright future and a lot of potential to become a major player in the programming language space. With this article, I hope I've given you a good overview of what Zig is and why you should consider using it, and hopefully answered some of the questions you may have had about Zig. If you have any more questions, feel free to reach out on Twitter. I'm always happy to help out and chat.

Dayvster 1 year ago

Zig Allocators Explained

;
    const allocator = gpa.allocator();
    const memory = try allocator.alloc(u8, 215);
    defer allocator.free(memory);
}
```

Simple and straightforward, just like the page allocator but with a few more features and safety checks.

## So what's the point of all this?

The main point of using allocators in Zig is to give you full control over how your memory is allocated and deallocated. This is similar to how Rust forces you to be conscious of memory allocations by freeing memory when it exits scope, unless you use one of the many mechanisms it has to prevent that from happening. Zig kind of flips the script: it does nothing for you under the hood, but it still forces you to be conscious of memory allocations by making you do it yourself, in an easy and straightforward way. This gives you much of the memory discipline of Rust with the simplicity of something like C or Go. It's a very unique and interesting approach to memory management, and one of the many reasons why I love Zig so much.

## Conclusion

I hope this article has given you a good understanding of what allocators are and how Zig uses them to handle memory allocation. If you have any questions or feedback, feel free to reach out via Twitter. I'm always happy to help out, receive feedback, or just chat about this kind of stuff in general. Thanks for reading and happy coding!
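To round out the allocator examples above, here is a minimal sketch of another allocator from `std.heap` (assuming a recent Zig version): `FixedBufferAllocator` serves allocations out of a plain stack buffer, so no heap is touched at all, yet the calling code uses the exact same `Allocator` interface:

```zig
const std = @import("std");

pub fn main() !void {
    // Back the allocator with a fixed stack buffer; no heap calls are made.
    var buffer: [1024]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buffer);
    const allocator = fba.allocator();

    // Same alloc/free interface as the general purpose allocator.
    const memory = try allocator.alloc(u8, 215);
    defer allocator.free(memory);

    std.debug.print("allocated {d} bytes from a stack buffer\n", .{memory.len});
}
```

Because every allocating function in Zig takes an `Allocator` parameter, swapping one allocator for another like this is a one-line change at the call site.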
