Latest Posts (20 found)
NorikiTech 1 month ago

Rust’s lack of constructors

Part of the “Rustober” series. I found it interesting that Rust doesn’t have initializers. (It doesn’t have classes either, and those two facts may be related.) You either set all the fields on a struct directly, create a function which acts like a memberwise initializer and is named by convention, or use a builder pattern. Here’s an example of such a function. Rust helps us a bit here by allowing us to use parameter names directly in place of matching field names. Library struct fields are often private, and the only way to initialize one is through an associated function (= static method = no self parameter). Of course, the parameters can be arbitrary and don’t have to match the struct fields — it’s simply a function, not an initializer. According to the “Constructors” chapter in the Rustonomicon: There is exactly one way to create an instance of a user-defined type: name it, and initialize all its fields at once. That’s it. Every other way you make an instance of a type is just calling a totally vanilla function that does some stuff and eventually bottoms out to The One True Constructor. … The reasons for this are varied, but it largely boils down to Rust’s philosophy of being explicit. … In concrete contexts, a type will provide a static method called new for any kind of “default” constructor. This has no relation to new in other languages and has no special meaning. It’s just a naming convention.

NorikiTech 1 month ago

Rust struct field order

Part of the “Rustober” series. One of the Rust quirks is that when initializing a struct, the named fields can be given in any order. In Swift, this is an error. However, looking at the rules for C initialization, the C behavior appears to be the same: “designated initializers” have been available since C99. Possibly, this also has to do with Rust’s struct update syntax, where you can initialize a struct based on another instance; in that case the set of field names is incomplete, so their order cannot really matter — the fields are matched by name.

NorikiTech 1 month ago

Finished the Rust Book!

Part of the “Rustober” series. Just finished my paper copy of “The Rust Programming Language” (2nd edition, covering Rust 2021), the same one that I shared in a previous post. I thought earlier that I’d finish about half, up to the more advanced topics, and I’d be ready to work on my side project. However, looking at all the advanced syntax and concepts the external libraries are using, the choice to finish reading the book was worth it. Towards the end, even though the density of information became higher, the book showed how to organize and work with a larger amount of Rust code, and what principles multithreading and macros are based on. Now, looking at some of the function signatures, it’s not entirely new territory. The current, Rust 2024 edition of the book — and the upcoming 3rd edition of the paper book coming out next year — has an additional chapter about asynchronous programming that didn’t exist before: “Fundamentals of Asynchronous Programming: Async, Await, Futures, and Streams”. I’ll have a cursory look before looking at Axum’s docs…

NorikiTech 1 month ago

Cargo dependency tree

Part of the “Rustober” series. Rust’s cargo tool is quite pleasant to use. I know that Rust projects generally use many dependencies in total (dozens), and I thought: how do I get a dependency tree? Turns out, cargo has a built-in command for that. The output for my project that I described in “A Rust web service” is a bit scary! The stars mean that a dependency is present somewhere else in the list, so its own dependencies are omitted. Looking at all the different versions, I immediately thought: what happens if two dependencies use different versions of the same crate? Cargo usually resolves the issue by bumping one version (if possible) to match the other, but if the versions are SemVer-incompatible (i.e. 1.0 and 2.0), then both are included separately. Of course, this does not always work smoothly, but I don’t expect to run into this kind of problem in my project (and even then there are ways of resolving it). What’s interesting is that in the latter case you can get a confusing “expected X, found X” type error, with a note that two different versions of the crate may be in use (source). Cargo has specific recommendations on how libraries should follow SemVer to reduce the possibility of such problems.
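For reference, the command is cargo tree (built into Cargo since 1.44). A sketch of what this looks like — the crate names and versions below are illustrative, not my actual output:

```text
$ cargo tree
myservice v0.1.0
├── axum v0.7.5
│   ├── tokio v1.38.0
│   └── ...
└── tokio v1.38.0 (*)        <- the star: subtree already shown above

$ cargo tree --duplicates    # list crates present at more than one version
```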

NorikiTech 1 month ago

Rules of ownership and borrowing in Rust

Part of the “Rustober” series. Coming from any practical programming language to Rust, we need to learn its system of memory ownership, which is different from what we’re used to. Even if some concepts are familiar from languages with manual memory management (like C and C++), or their application is familiar from languages with a garbage collector, Rust will surprise you. I like how simple and elegant the rules of memory ownership are. There are only three:

1. Each value has an owner.
2. There can only be one owner at a time.
3. When the owner goes out of scope, the value will be dropped.

The second rule, I think, is the most important and gives Rust its power to enforce correctness. In C, you can allocate memory and it’s just there: you “own” it as long as you have a pointer. You can use the memory throughout its lifetime, but it’s an open question who owns it at any one time. When you don’t need the memory anymore, you free() it. Because single ownership can’t be enforced, you can try to use memory somewhere else as if it’s still yours — that is a SEGFAULT. After you’ve freed the memory, you can try to free it again somewhere else — that, too, is a SEGFAULT. Finally, you can choose not to free the memory at all, which is a memory leak, but it’s fairly safe because your program doesn’t crash. Rust also allows borrowing a value to use its memory without changing the owner. That comes with its own set of rules:

1. At any given time, you can have either one mutable reference or any number of immutable references.
2. References must always be valid.

The rules above prevent the value from being changed from two places at once, or from being changed under you. A classic example of that in C is resizing an array. You have a pointer to it in one place and consider it stable and safe (nothing in C is safe). Elsewhere, you resize the array by calling realloc() to give it more memory. You get a new pointer to the array, and the “stable and safe” pointer immediately becomes invalid. As I like to say, it is now a pointer to a SEGFAULT. Even worse is resizing it because something doesn’t fit and then losing the new pointer. Tracking ownership is enabled by lifetimes — everything has a lifetime scope, the longest being 'static, the life of the program.
Early in Rust development, lifetime annotations were required in most places, and that made the code really ugly (I remember my initial impressions of looking at early Rust). However, since then lifetime elision rules have been adopted, and now lifetime annotations are needed much less often. Lifetime elision is also very simple (but took a long time to get right!):

1. Each elided lifetime in the function or closure parameters becomes a distinct lifetime parameter.
2. If there is exactly one lifetime used in the parameters (elided or not), that lifetime is assigned to all elided output lifetimes.
3. If the receiver has type &Self or &mut Self (i.e. it’s a method), then the lifetime of that reference to Self is assigned to all elided output lifetime parameters.

For example, a clear violation of these rules is when you pass two different things into a function and return one of them. The compiler cannot tell automatically which one of them will pass its lifetime to the output, so you must tell it by annotating the parameter and output lifetimes. What I found interesting is that lifetimes are a generic parameter, just like generic type parameters, which means you can do to them the things you do to generics, e.g. apply trait bounds (this especially comes into play with trait objects). And more importantly, you cannot assign lifetimes arbitrarily; you can only reveal (= annotate) to the compiler the relationships between lifetimes that already exist.

NorikiTech 1 month ago

A Rust web service

Part of the “Rustober” series. A web service in Rust is much more than it seems. Whereas in Go you can just import net/http and start serving, in Rust you have to bring your own async runtime. I remember when async was very new in Rust, and apparently since then the project leadership never settled on a specific runtime. Now you can pick and choose (good), but you also have to pick and choose (not so good). In practical, real-world terms it means you use Tokio. And tower. And hyper. And axum. And sqlx. I think that’s it? My default intention is to use as few dependencies as possible — preferably none. For Typingvania I wrote the engine from scratch in C, and the only library I ship with the game is SDL. DDPub, which runs this website, only really needs Markdown support. However, with Rust I want to try something different and commit to the existing ecosystem. I expect there will be plenty of challenges as it is. So how does all of the above relate to each other, and why is it necessary? In short, as far as I understand it now:

1. Tokio, being the source (all three of the others are subprojects), provides the runtime I mentioned above and defines how everything else on top of it works.
2. tower provides the Service trait that defines how data flows around, with abstractions and utilities around it, but is protocol-agnostic.
3. hyper implements HTTP on top of tower’s Service trait, but as a building block, not as a framework.
4. axum is a framework for building web applications — an abstraction over tokio and hyper.

Because all of the libraries are so integrated with each other, this enables (hopefully) a fairly ergonomic experience where I don’t have to painstakingly plug things together. They are all mature libraries tested in production, so even though it’s a lot of API surface, I consider it a worthwhile time investment: my project will work, and more likely than not I’ll have to work with them again if/when I get to write Rust in a commercial setting — or just work on more side projects.
Effectively, I will be using mostly axum facilities implemented using the other three libraries. Finally, sqlx is an asynchronous SQL toolkit using tokio directly.

NorikiTech 1 month ago

Rust traits vs Swift protocols

Part of the “Rustober” series. As I said in the first post of the series, parts of Rust are remarkably similar to Swift. Let’s try to compare Rust traits to Swift protocols. I’m very new to Rust and I’m not aiming for completeness, so take it with a grain of salt. Looking at them both, Swift leans towards developer ergonomics (many things are implicit, less strict rules around what can be defined where) and Rust leans towards compile-time guarantees: there’s less flexibility but also less ambiguity. For example, in Swift you can add multiple protocol conformances at once, and the compiler will pick up any types that are named the same as the associated types. Even this short example shows how flexible Swift is — and we haven’t even seen generics yet. I’m convinced generics in Rust traits are better designed than in Swift, partly because they are more granular. Whenever I tried to compose anything complicated out of Swift protocols, I always ran into problems either with “Self or associated type requirements” (when a protocol can only be used as a generic constraint) or with existential types. Here’s a real example where Swift couldn’t help me constrain an associated type on a protocol, so I had to leave it simply as an associated type without additional conformance. The idea is to have a service that can swap between multiple instances of concrete providers, all conforming to several different types and ultimately descending (in the protocol-conformance sense, not the class-inheritance sense) from one common ancestor. Similar code in Rust does not have this problem. I’m looking forward to exploring the differences (and similarities) (and bashing my head against the wall) when I get to write some actual Rust code.

NorikiTech 1 month ago

Rust on Linux

Part of the “Rustober” series. What surprised me the most in the 2024 State of Rust Survey results was how many people were actually developing on Linux, which has clearly been leading the chart for several years running. Truly, the year of Linux on the desktop. The presence of WSL implies that some people could be developing in a Linux environment on Windows, but the percentage is still too high. This is surprising to me, because over my career I’ve never worked at companies where Linux was viable for developers to use, simply because there were either Windows or macOS requirements most of the time. It would be interesting to work at a shop that encouraged or allowed Linux. I’ll use this opportunity to drop a Linux hardware tier list I found on Reddit. I usually have an underpowered Linux nettop or laptop that I use for writing notes, but recent AMD chips look promising enough that I might be tempted.

NorikiTech 2 months ago

Motivation for learning Rust

Part of the “Rustober” series. Rust is my next first-choice language for making things. It now possesses a combination of traits (get it?) that make it an obvious choice. I’m writing Swift full-time at work (and have been since 2014) and I would have happily picked Swift if it ever became what it was supposed to:

Swift never became truly cross-platform — I was really hoping for it. Rust did.
Apple butchered the language after 4.0 to make SwiftUI look nice. A lot of complexity was added even though the community wasn’t happy.
Swift compile times get worse and worse, and they are never mentioned as a priority. Rust compile times constantly improve.
Et cetera… my list of grievances with Swift is very long.

I simply cannot use Swift, my strongest language, for any of the projects I do outside of work. Frustrated, I have tried other languages in their strong domains:

While I love Common Lisp, it’s simply too niche — I would need to implement most of what I need myself, and I have limited time. I couldn’t find in myself the will to commit to it without second-guessing.
C++ is a solid choice for lower-level tasks, but nobody uses it to write web services, and half of my ideas are web services.
I liked using C to write all of Typingvania. I understand its appeal and draw, but in C most things felt tedious, and far from everything could be abstracted enough to be ergonomic. Handling memory didn’t bother me; I learned most of my lessons through objects’ lifetimes in C++.
Finally, I wrote multiple things in Go, and it probably came closest to what I was looking for (which was the reason I committed to it in the first place). However, I got progressively more disillusioned because of its C interop, its lack of constraints, the poor generics (too little, too late), and general inelegance (as a result of its nonexpressiveness). Yes, I could achieve a lot quickly and pick the project back up later, as long as I used Go exclusively, and not as part of something else.

For me, a major constraint is that I work on multiple things and sometimes leave them for a year or more, and then come back and work on them some more. All of the languages above were used for different projects. Consequently, I need to remember all of them at once if I need to pick a project up again. However, looking back, all of them could be done in Rust without major changes. Why haven’t I learned Rust earlier then? I made an attempt a few years ago but found it too difficult for my taste. Considering the constraint above, it made sense. If it were one of many, it wouldn’t have worked. To work in big languages like C++ and Rust (or, let’s be honest, in any language except Go), you need to commit; otherwise, after a year of not using it, it goes completely out of your head. Besides reviving your half-dead project, you’ll need to relearn the language as well. I also thought it was changing too fast (all of the languages above were changing much slower) and there was too much drama in the community, so its future seemed uncertain. Yet in the last few years Rust has proven that it is stable, mature and developer-friendly. The language, the tooling and the number of available libraries keep improving. I believe it’s a solid career choice as well, on the horizon of a few years. Hell, it even plays well with AI because of the strong type system and borrow checker. I’m starting to use it in earnest to build my first project in it; let’s see how it goes.
Remarkably, Rust seems much closer to Swift than I expected, with common use of optionals, traits and enums with associated types. P.S. C# and Java are missing from the above list. While I did use both for a time, I was never comfortable fully committing to either, because each means a completely new set of libraries and ways of doing things.

NorikiTech 3 months ago

Typingvania is out now!

This is just a short post to say that Typingvania is OUT NOW on Steam (Windows, macOS) and the Mac App Store! When I started the project back in the beginning of 2024, I knew I wanted a release “sometime in the future”, but I didn’t realize how much the game would change, or when I’d be able to release it. From a simple prototype it grew into something I was serious about, so I spared no effort helping it come to life. I’m happy with how it turned out. I used all the tech I wanted to use and implemented all the features I had in mind from the very beginning. It looks good and feels good. Now it’s time to put it into people’s hands. My local meetup group, Bristol Indie Developers, has been a continuous inspiration to keep working on Typingvania. I’m sure that without those monthly meetups I would have quietly dropped it like most other projects I start. I can highly recommend regular in-person meetups to keep the creative juices flowing. When you know a meetup is coming up in a week or two, naturally you sit down and get to work. I’ll leave you to it.

NorikiTech 6 months ago

Brain and LLMs language representation

Today I came across the article “Deciphering language processing in the human brain through LLM representations” on the Google Research blog. It claims the neural activation in the brain for understanding and producing speech is similar to that of Whisper, OpenAI’s speech recognition model. Representing language on the computer, or even better, deciphering how exactly it is represented in the brain, has been the holy grail of linguistics for decades. While the article highlights many differences as well as similarities, it is nevertheless very interesting research. I also noticed that they claim “linear” similarity, but the correlation values are all under 0.25, i.e. low. So, something similar is happening, but not more than that. As someone with a linguistics education who studied languages for many years, I find it fascinating how the machinery has improved in recent years. It’s still nowhere near perfection, in my opinion, and bridging the last 5% may well take many more years. Real-world language is just too messy and nuanced, and that’s what makes speech and text fun.

NorikiTech 6 months ago

Make your game, not an engine

Noel Berry’s recent article “Making Video Games in 2025 (without an engine)” has been making the rounds on Hacker News. Noel Berry is one of the developers of Celeste, a successful and technically excellent game. The main point of the article is that he doesn’t use the big commercial engines (Unreal and Unity) because he doesn’t need most of their functionality. It’s more fun to do things on your own and have full control over them. At the same time, not using an engine does not mean you have to implement everything yourself. There is a range of frameworks for every other language with good platform support (FNA, SDL, Raylib, etc.), so that you don’t have to do the plumbing but can start developing your game right away. Another point he makes is that if you do need advanced 3D, or just less effort, Godot is a great choice, because it’s truly open source and now capable enough to achieve your vision. (Last year, I wrote “Godot quickstart” following my first steps with the engine.) Compared to Noel, I’m a complete amateur who has only dabbled in game development, but the path I took with Typingvania was motivated by the same reasons. I started out prototyping in Godot, but it made things that were important to me difficult. I wanted resolution-independent GPU text rendering and easier custom binary resource access, and writing my own engine for Typingvania allowed me to do that. Just as Noel says, I used SDL to handle all the system and low-level decisions for me. Using a framework, I still had to implement all the game systems, but they were not generic plumbing — they were specific to the game I was making and only did what I needed, nothing more. Which brings me to the advice in the post’s title: make your game, not an engine. I have seen several experienced gamedevs say it. A trap many beginner game developers fall into, especially if (like me) they’re coming from a programming background, is to start implementing their own game engine to power their game.
It’s “cool” and fun and a great learning opportunity. Invariably it turns into a kitchen-sink project and the new game dev gets overwhelmed. Some persevere and create fairly good general-purpose engines, which doesn’t bring them any closer to releasing a game. The game engine becomes their project, instead of the game they’d been making. If you want to actually release a game, work on the game instead of its engine. Implement only the systems you need to support what you are making. You can keep the “engine” code separate from the “game” code and potentially extract the engine later (that’s how many commercial engines were born). Making a complete game engine requires hundreds of small decisions on API design, and I found I was completely unequipped to make them because I simply didn’t have the experience. However, I knew exactly how I wanted a particular element in my game to behave, and I used that to guide the implementation on the “engine” side. Looking back, I should have stayed with Godot, because at times I found the need to implement yet another system exhausting. Now that most of the features are in place, it’s not the most ergonomic experience and I’d rather use a language more expressive than plain C, but it works and I’m happy with the result. My final point is: do whatever you want! There is no one true way, and you don’t know before you try it for yourself. Would I have made better progress if I had stayed with Godot? Maybe. But would I have made that progress if I were not motivated by building the engine tech that tickled my fancy? There’s no way to know. So just try whatever you want and have fun and see where it takes you.

NorikiTech 6 months ago

Typingvania Devlog 6

First let’s look at what changed since the previous devlog. The main Typingvania codebase is now more than 15,000 lines of C! I studied to be a linguist and I strongly believe software should be available in as many languages as possible. Typingvania does not have much UI text, no more than 200–250 strings, and this is a trivial amount to translate into many languages. I started with several European languages I can read (and my native Russian) and translated what I had with an LLM. As I get closer to release, I will have all the text verified by native speakers and add translations into more languages, so that everyone is comfortable, whatever language they speak. In Typingvania, the UI language does not equal the book language. You can choose any UI language and at the same time type a book in any language. If I didn’t mention it before: there will be books in multiple languages. At launch there will be at least two, maybe more, depending on how much effort I can squeeze in. It is impossible to have a typing game without some kind of statistics, because “number go up” is the main mechanic. You improve what you track, and I want to give what numbers I can. First of all, I’ve added an overall statistics screen with totals and averages that tells you how you are doing in general. How much of the book have you typed? How accurately, on average, are you typing? Here’s my overview screen after three and a half hours of typing (another statistic I’m tracking). It’s comprehensive, but we can do better. Now you can also see how you are doing over time, in this case, over all of the typing history for this book. The charts first came out all lumpy, and I spent a few days making them look as crisp as the text — I’m happy with the result. The faded line is the actual values that show how consistent you are, but the colored trend line gives you a better idea of how you are doing over time. As you keep typing, it will become harder to see how you’ve been doing recently.
To help with that, I added a separate screen that only shows your last 20 rounds. At 30-something words per round, 20 rounds are your last 10–30 minutes of typing. Both of the above are across all characters, but later I will add screens where you will see your accuracy and speed for each individual character. For me, “;” is particularly stubborn. There will be more statistics screens, but I’ll talk about them when I implement them. Overtype was the UI I implemented very early in the Godot prototype but couldn’t do for a long time in this version, even though it’s critical — I wouldn’t be able to release without it. Here’s how it looks, before we talk about it more. This UI is necessary because in Typingvania, I present the original text of the book. If the author writes “ætat” (which is Latin for “aged”), it stays on the screen — but that should not prevent you from typing it. That’s why I introduced overtype, which replaces untypeable characters with characters you can type directly on the keyboard layout you’ve chosen when starting the book. In this particular case, if you had, say, a Danish keyboard and the book supported the Danish layout, overtype would not appear, because you could type “æ” directly. Similarly, accented letters in French words and the pound sign (£) are replaced so you can easily type them on the default US English keyboard. Overtype will do much more work when the language of the book is different from what you type. For example, for a book in Mandarin Chinese you will (likely) be typing pinyin instead, so over “你好” you will see overtype prompting you to type “ni3 hao3”. If you ever studied Asian languages, you are familiar with a similar feature called “ruby text” or (for Japanese) “furigana” that shows how certain characters are pronounced. Overtype works the same, but you need to type the characters!
You can either check yourself, if you are learning the language and know the words, or learn the pronunciation of words you don’t know — while improving your typing. I tried to make this input as intuitive and unobtrusive as possible, so it doesn’t break your typing flow when you already know what you need to type. I have already changed how the upcoming word appears several times, and now I think I finally got it right. I’m showing the full next line, but far enough down that it doesn’t distract you, and faded by a step so that the upcoming active words are the same shade as the inactive words on the current line. If they were the same shade as the active words on the current line (as before), they would pull your focus toward themselves and you’d be distracted. Let’s see if I change it again next time :) I completely automated the building, signing and Apple notarization process for Mac builds, and now Typingvania has simultaneous builds for both Windows and Mac on Steam, as well as the App Store. Cloud saves also work on Steam and are cross-platform, so if you use two computers with the same keyboard like me, you can use either OS to run Typingvania and have all your typing statistics in one save. Wink wink, wishlist Typingvania on Steam now! At the end of the previous devlog I said that before release I wanted to implement statistics (done — not all screens are there, but enough, and I collect the data), overtype (done) and, finally, multiple layouts for books (not done yet). To support multiple layouts, I need to flip the book data file inside out, which is as complicated as it sounds but also means support for multiple languages for books. I actually delayed this devlog by a couple of days to finish the design for the updated data file and figure out how to change the tooling.
I reckon it will take a couple of weeks to implement, but with that done, Typingvania will be data- (if not yet content- or UI-) ready for release — after 6 months of development of this version (I did the Godot prototype between January and April 2024). After that, I will start preparing for release by doing a million small things I have on my todo list that I know exactly how to do but that have not been the priority. See you here next month, yeah?

NorikiTech 1 year ago

Beyond syntax

Several times I saw someone on LinkedIn post something along the lines of: Why won’t they hire me, a senior engineer with 15 years of experience, for a position that requires language X, but I’ve written in languages Y and Z — don’t they know a true software engineer can learn languages quickly on the job? I have attempted to learn and write several languages over the years (in no particular order: Python, Rust, JavaScript, PHP, Objective-C, Kotlin, Common Lisp, Swift, C++, Erlang, Go, Ruby, C, Lua) and I can confidently say: “It depends”. For me, the most difficult parts by far have been not the languages themselves, but related aspects: usage domains, ecosystem and best practices. True, you can learn the syntax and write some code that runs in weeks, but it will take you months or even years to become proficient in what to write (usage domain) and how (ecosystem and best practices). A usage domain, or knowledge domain, is the class of problems you’re solving with a language. Mobile development, machine learning and game development are all broad usage domains. These are, in my opinion, the hardest to switch and are the main source of difficulty when learning a new language. This one is a little tongue-in-cheek, because you don’t necessarily have to change industries when changing your working language (especially if it’s JavaScript). But think about it: some languages are more aligned with certain domains than others; consequently, there are more jobs in those domains that would allow you to work in that language. Most of those jobs, besides language requirements, have knowledge domain requirements. If you wanted to write C at work, depending on the industry, you would be expected to know, for example, embedded development and all that entails, including electrical engineering and small controller architectures — not to mention the knowledge specific to writing C in such environments.
If you wanted to write Swift full-time, the job expectations would almost certainly include intimate knowledge of macOS or iOS, their UI guidelines and generally knowledge specific to Apple platforms. From my mobile development experience, when iOS development switched to Swift or Kotlin replaced Java on Android, the learning curve for mobile developers was not particularly steep. We kept using the same libraries on the same platform and wrote logically equivalent code. There is a saying I like: “You can write Java in any language”, and that’s exactly what we did for the first several months. We didn’t know yet what the best practices were for the new language, however we were very familiar with the domain and could get away with writing Objective-C in Swift and Java in Kotlin. Today, if we wrote code like that, we’d be laughed out of the room. The usage domain is something to think about when considering a language you’d like to work in. Of course, most languages are general-purpose and you can, with some effort, write almost anything in any of them, so you can choose any one you like for your personal projects. My point is particularly about professional use. Maybe I’d enjoy writing OCaml (I hear it has a great type system), but where, most notably, is it used in a professional setting? Writing compilers and high-frequency trading. Both of those impose hard requirements on domain knowledge, and knowing only the language is emphatically not enough. The language’s ecosystem is the set of related tooling and especially libraries. Few people write most of the supporting code they need themselves (serious game developers are a notable exception), and you’ll probably use libraries to perform the tasks you’ve set out to do. Libraries for some tasks are huge and in some jobs, they will be part of the job description (for a popular example, React development on the web). 
A language does not achieve that much by itself, and (again, talking about a professional setting) you will be expected to know most popular libraries associated with a particular language by name, and to know one or more of them intimately. An expert, some say, is someone who has made all the mistakes there are to make in a certain domain. It is the same with libraries: to be an expert, you have to have made the mistakes, and that means having spent time writing real code using those libraries. Even if you learn the language well, you won’t write production-grade code until you’ve learned to use the libraries; effectively, your contribution will be far from expert level. It takes a long time to really learn and become familiar with some of them. Some libraries even have other libraries written on top of them that you’ll also be expected to know and use expertly.

Tooling can be similarly difficult for certain languages: CMake, for example, the build system primarily used for C++, is as flexible as it is arcane. JavaScript, despite not being compiled, also uses build systems, and an expert JS developer will have used several. Julia Evans in her post “Importing a frontend Javascript library without a build system” says:

I needed to figure out how to import a Javascript library in my code without using a build system, and it took FOREVER to figure out how to import it because the library’s setup instructions assume that you’re using a build system.

If the tools are unfamiliar and you have no access to experts, you can spend days banging your head against a wall over what may be a typical issue solved in a known way. This is both discouraging and unproductive.

The language’s best practices are the set of techniques a professional developer would use to write what is called “idiomatic code”: code that is natural, efficient and gets the job done in a way that works well with the design of a particular programming language.
Only by mastering best practices will you start writing code that other developers consider elegant.

In linguistics there exists the notion of linguistic relativity, also known as the Sapir–Whorf hypothesis, which says (simply put) that language defines thought. The modern understanding is that language does not define, but rather influences, thought in subtle ways. This idea is succinctly expressed by the saying we all know: if all you have is a hammer, everything looks like a nail. In exactly the same way, every programming language has certain approaches to problems, and the more you program in it, the more you learn to solve problems in a certain way. For example, functional languages favor function composition, so it becomes natural for you to solve problems through function composition. If you don’t regularly use a language with another paradigm, switching to it may be mentally difficult because its best practices suggest solving problems in a way that now feels unnatural.

Coming back to the phrase I used above, “you can write Java in any language” — now it’s clear what it means: to write code in another language as if it were Java, that is, using Java’s best practices rather than those native to the other language. (Sorry for using Java as a scapegoat, but we’re all familiar with the characteristically verbose OOP style of its early versions.)

The next time you see that LinkedIn post, remember that what seems like a simple language switch is more like moving to a new country. You might quickly learn to order coffee in the local language, but true integration requires understanding the culture and, eventually, learning to think like a local. If I were trying to change my main working language, my best bet would probably be switching roles within the knowledge domain I know well, or venturing out into an adjacent one. Meanwhile, I would work on a project in the new language to help me figure out what libraries people use most and what tools are available.
The struggle to set them up would give me the initial practical knowledge to build upon.

NorikiTech 1 years ago

“We can do better”

The CI pipeline takes an hour to run and is getting longer every month. There are five slightly different implementations of the same thing in the codebase. The docs for the network module were last updated two major versions ago. The project has a few thousand warnings. “It is what it is.”

The project you work on may not be as bad as this, but there are pockets of persistent inefficiency all around. Larger organizations have more of them simply because there are more cracks between domains of ownership. When you work alone, you’re authorized to make things better by default. When a hundred people and a dozen or more teams share a project, it becomes unclear who can change what (and to what extent). People start looking away: “It’s not what I’m supposed to work on. This will disrupt other people’s work.” The team needs to be disciplined and conscious of creeping entropy to avoid this. Unchecked, the project naturally becomes a Big Ball of Mud as described by Brian Foote and Joseph Yoder in their 1997 article of the same name:

A BIG BALL OF MUD is haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle. We’ve all seen them. Their code shows unmistakable signs of unregulated growth, and repeated, expedient repair.

Someone on X said just the other day: “Pull requests and code review are a complete waste of time! Why would I work in such a low-trust environment?” Radical (engagement bait) and generally true, but most of us do not have free choice of workplace and project. Sooner or later you may end up working on a Big Ball of Mud, and what are you going to do then?

I find that with just four words, you can start improving the situation (no, they are not “rewrite it in Rust”). When I noticed I was frustrated, I started saying: “We can do better”. What does it mean? It’s a call to remember our ambition and striving for engineering excellence. No one purposefully wants to be a bad software engineer.
We all want to write neat and elegant code. When I look at another ream of sloppy code, saying “We can do better” brings that feeling back: “Is this really the best we can do as a team?” This naturally leads to the next question: “What small thing can I do right now to make this just a tiny bit better?”

Small changes compounded over time make a big difference. It’s the core idea of the Japanese concept of kaizen, or continuous improvement. Looking at a swamp of bad code, you may feel it can only be completely drained and the pit filled with fresh concrete — but for a single person, that’s too much work. Instead, you can do what you can alongside your regular work. This undercurrent of improvement gradually clears the water around where you’re working.

Even when a team is not particularly engaged, it never opposes those who take the initiative to make improvements. That empowers you to effect good change. Remember small steps? Nobody opposes small, clearly beneficial changes. You will find that people actually want change for the better. Who doesn’t? Your efforts will be cheered, and then things really start to turn around — led by your example, people begin bringing and implementing their own ideas. People feel powerless on large, poorly managed projects — you can’t improve much alone. Showing that positive change doesn’t have to be large empowers others to make tiny changes and build up from there. Slowly, this creates a net-positive culture and melts tech debt.

What is a tiny change? Adding one line to the project README saying which exact version of supporting software to use. Improving a frequent error message to be clearer. Writing a two-line bash script to automate something people had to do by hand. Sorting a list of five debug options so that the one people use most often is on top. Adding a three-line helper method and replacing 74 ad-hoc implementations across the whole codebase. All of the ideas above are improvements to the developer experience.
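As a sketch of what that last kind of change might look like (the helper and its call sites are invented for illustration, not taken from any real codebase):

```python
# Hypothetical tiny change: one shared helper replacing many ad-hoc
# copies of the same truncation logic scattered across a codebase.

def truncate(text: str, limit: int = 80) -> str:
    """Cut text to `limit` characters, appending an ellipsis if cut."""
    return text if len(text) <= limit else text[: limit - 1] + "…"

# Before: call sites sliced strings by hand, each slightly differently.
# After: one obvious place to test, and one place to fix bugs in.
print(truncate("a short line"))       # unchanged
print(truncate("x" * 100, limit=10))  # nine x's plus an ellipsis
```

A change like this takes minutes to make, yet every future reader of those dozens of call sites benefits from it.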
What about delivering customer value? It’s simple: a better developer experience means developers spend less time being frustrated with their tools and more time working to deliver the best customer experience. All of the project’s code, if you think about it, is a tool for bringing value to customers. Of course, the same approach can — and should — be used for improving the user experience.

It is what it is, until we make it better.

NorikiTech 1 years ago

To know is not enough

I like to find things out to add them to my mental, intuitive decision-making framework. These “factoids” bring more data points and evidence to the surface to strengthen the result. The big “but” is that while learning new facts is enjoyable, the result is not improved as much as I’d like to believe. While the knowledge-gathering activity is useful for writing, by sparking unexpected associations, it falls flat for skill-based activities like software development.

When learning a new programming language or a new paradigm, I try to reach for what I call “thinking like an expert”. I read a lot, quickly, including advanced topics, to avoid making novice mistakes. Most learning material starts slowly (which is not a bad thing per se) and skips over many concepts or pieces of syntax you will often see in real code. Instead of learning simplified coding, I jump straight to code an experienced developer would write.

It works to a degree. The deep dive creates a lot of ephemeral knowledge that needs reinforcement, otherwise it quickly disappears. As I mentioned in “Physical education”, the brain creates many new, tentative connections, but then you have to immediately put them to use and let them grow. After I’ve read a lot, I must write a lot of code, or I’ll lose most of the knowledge. The truth is, I usually start working on something only to be blocked by indirectly related problems that I can never predict — and I’ve been through the cycle many times. The dev environment doesn’t work, the library you want to use turns out to be difficult to integrate, the language behaves differently than you expect at a meta level the books don’t describe well… Meanwhile, your newly acquired knowledge evaporates from your mind alarmingly quickly. The result is that by the time I get to implement what I’ve read, I need to read about it again.
So far, at least with learning programming languages, what works best for me is to:

Read one or more comprehensive texts that cover all aspects of the language, including expert-level patterns. I don’t study this stuff closely and read it quickly, almost skimming — I know I’ll forget more than half of it anyway. Being aware that these concepts exist helps eliminate unknown unknowns.

Start working on a real project immediately instead of solving toy problems. Toy problems help you learn the basics, but a real project has enough basics to learn them too, and is more engaging.

Solve the next immediate problem, and only that. Trying to solve everything at once only makes you confused, but building up your project solution by solution, eventually you can ship it. Every aspect of the project can be improved, and while doing it you’re writing “real code” in the context of a larger codebase.

Gradually you learn the basics and build up to more advanced concepts (revolutionary, I know). The difference from regular “start with X language” tutorials is learning about the advanced concepts first and using them sooner (but not immediately). My theory is that this helps you write decent code earlier than otherwise. That’s exactly how I picked up Go again last year to work on DDPub, the engine this website runs on. Since then I’ve used Go in several other projects and jumped straight in every time. For me, trying to figure out the solution to a specific problem results in much better retention than simply knowing all the advanced things without a way to use them.

This is all somewhat similar to the Feynman Technique:

Study a topic.
Explain the topic to someone else in as simple terms as possible (in this case, to a computer through programming).
Identify gaps where your explanation doesn’t work (i.e. you can’t implement something because you don’t know exactly how).
Improve your understanding and try again.

You cannot be sure you truly know something until you try to explain it — whether through writing, talking, or code.

NorikiTech 1 years ago

“Million Dollar Weekend”, Noah Kagan

Before starting “Million Dollar Weekend” (MDW going forward), I thought it would be similar to other business books: a lot of fluff and a couple of takeaways. There is nothing inherently wrong with that; it’s a proven formula, and if you don’t like the fluff, either find a summary (no shortage of them for any popular book) or (my solution) just read faster. I quickly found out there was not much fluff in MDW, and it went hard from the very beginning.

I started MDW because I had a question: “How do I learn to think like a person who launches businesses?” I’ve been struggling with that over the years. I read books and studied courses, even watched people do it at close range. This reminds me of a quote (that I’m sure I read in a book but couldn’t find the source): “How do I draw a masterpiece?” “Become a person who draws masterpieces and draw naturally.” It’s one thing to know and quite another to be able to do it. I work full-time as a software engineer, and (having read the book) the skillset I use every day and the goals I pursue at work are completely different from what a successful entrepreneur would do to work on a business.

Kagan does an admirable job of not only telling you what he suggests you do, but also what people often get wrong. I saw myself in many of his descriptions. I won’t retell the book here; it’s only a little over 200 pages and the more important things are all in the first half — you can read all of it in a couple of hours.

What I learned was that at the core of Kagan’s approach are two things: playful but relentless experimentation, and directly asking actual people for money. What they share is the expectation of failure. Making an experiment, you are prepared for it to fail — and just as prepared to make the next one. Asking people for money, you are prepared for each person to say “no” — and ready to immediately ask the next person.
Another thing I found particularly useful was his criteria for ideas worth trying, from his unique perspective as a successful online entrepreneur. The most common error startup founders make is not getting “product-market fit” right, or simply building something nobody wants. Kagan addresses this head-on with examples and questions to ask yourself and prospective customers. Developers like to write code and don’t like talking to people — sometimes it works out, but for most people (Kagan insists) it’s infinitely more useful to talk to as many potential customers as possible even before writing any code. Paul Graham makes the same argument in his classic essay “Do Things That Don’t Scale”:

…a combination of shyness and laziness. They’d rather sit at home writing code than go out and talk to a bunch of strangers and probably be rejected by most of them. …a lot of startup founders are trained as engineers, and customer service is not part of the training of engineers. You’re supposed to build things that are robust and elegant, not be slavishly attentive to individual users like some kind of salesperson.

Why am I posting this review on a tech blog? MDW is a useful change of perspective from “I’m just doing my job”. As software engineers, we are in the unique position of producing infinitely scalable products (in contrast to physical goods). This also means we can do a lot of experiments with no downsides — just throw the results away if they don’t work and try something else.

Hey, now I can use the lesson from the book and ask! If you liked this, please subscribe to the newsletter by putting your email in the form below. I’ll make it worth your while.

NorikiTech 1 years ago

Hayaku devlog 1

Welcome to the first development log of Hayaku, attempt number six! (I explain why it’s the sixth attempt in my post on the history of Hayaku.) Half the game jam time has passed, and while the project is nowhere near releasable, it’s playable and in much better shape than it has ever been over the past six years. To start, check out this video of me playing:

For this first leg of the jam, I focused on implementing the core game loop that you see in the video: getting some questions scheduled and showing hints, progressing through the full deck while up-ranking correctly answered questions and de-ranking incorrect ones, and finally, making the tap-slide control work according to spec. Godot made achieving all of this straightforward, except for one instance where the UI is slightly off-center and I couldn’t find a way to make it align naturally — however, I’ve already come up with a way to do it with minimal coding.

What you see in the video is basically it: that’s the whole game. You will spend 99% of your time on this gameplay screen. Every part of it will be improved, but you will still open it, start the game and then keep going through the questions. Originally designed to be mobile-first, it can work on desktop in horizontal mode, but it is really meant to be played in short bursts while you’re waiting for something else.

Last week I brought it to a local gamedev meetup in the same state you see in the video and got favorable responses. As I explained multiple times during the meetup, the goal of the game is to make you comfortable reading Japanese (kana and later kanji) so you start reading faster over time. This is an aspect of learning Japanese that most learning resources gloss over. You are supposed to learn to automatically read kana “naturally”, but I experienced a long period where I was shaky, and that hindered my ability to learn kanji because there was always a moment or two of hesitation.
In language learning, it’s much better to learn very few things very well — they will be committed to long-term memory, you will never forget them, and they will be the permanent foundation you can slowly expand. To round out the meetup demo, I added one question that showcases the later stages of Hayaku. Here you are already past the romaji questions, where you learn kana, and on to kanji:

Godot’s grown-up text layout engine lets me fully implement my idea of highlighting the matching parts of phrases to help you make sense of them. At the same time, my simple ruby text (or “furigana”, when it’s specific to Japanese) implementation shows you the reading of the kanji.

You don’t need to “win” in Hayaku and you shouldn’t be afraid to “lose”. If you get a question wrong, it simply shows you the correct answer and moves on to the next question. The question that you got wrong is de-ranked and will be shown again. To “progress” (get new questions), you need to get all the previous questions right. It’s simple but effective — a variation of spaced repetition.
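That rank-up/de-rank loop can be sketched in a few lines. This is my reading of the description above, not Hayaku’s actual code, and the class and method names are made up:

```python
# Sketch of a rank-based question scheduler: wrong answers reset a
# question's rank, and new questions unlock only after full mastery.

class Deck:
    def __init__(self, questions, mastery_rank=3):
        # Every question starts at rank 0; mastery_rank means "learned".
        self.ranks = {q: 0 for q in questions}
        self.mastery_rank = mastery_rank

    def next_question(self):
        # Serve the least-mastered question first.
        return min(self.ranks, key=self.ranks.get)

    def answer(self, question, correct):
        if correct:
            self.ranks[question] += 1   # up-rank on a correct answer
        else:
            self.ranks[question] = 0    # de-rank: back to the start

    def progressed(self):
        # True once every current question is mastered.
        return all(r >= self.mastery_rank for r in self.ranks.values())

deck = Deck(["あ", "い", "う"])
deck.answer("あ", correct=True)
deck.answer("い", correct=False)
print(deck.next_question())  # the de-ranked "い" comes back before "う"
```

Real spaced-repetition schedulers also grow the time interval between reviews; this sketch only captures the rank mechanics described in the post.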

NorikiTech 1 years ago

High distraction environment

If you want to do deep work, you need to reach a state of deep focus. You can get halfway there by eliminating distractions. A typical modern workplace in “digital”, however, is what I call a high distraction environment — the opposite of what is required to do great, focused work.

The company encourages collaboration by expecting you to be available for a chat or a “quick call” during working hours. You start each week with at least several hours of meetings already in your calendar. Some of those meetings require preparation time, possibly several days in advance. Todos keep piling up and priorities are constantly shifting, so you have to spend mental energy deciding where to slot each new task. Just as you manage to carve out half an afternoon for work, something urgent comes up that needs immediate attention. I’m not even talking about sitting in an open office where people around you are talking all day.

You feel like your time is out of your control; you’re constantly distracted and struggle to find enough time to do your work. I know I do! And it’s not just my experience: this happens by default unless the company or your team makes it a priority to promote focused work without distractions.

Distractions such as those above are insidious because they are “work”: collaboration (for lack of a better word), helping other people, keeping yourself informed. However, by engaging in them you produce nothing of value. Unchecked, they will eat all of your time. Even worse, they pollute whatever time is left in the workday for actual work. Nagging thoughts about other tasks and meetings don’t let you bring your full focus to the task at hand. With chats, I can either be present, so people can message me, or be working; not both. I found that I cannot focus (and get on with my work) when I have a chat window or email open, even if nobody is writing.
While I dislike some of Jonathan Blow’s opinions, this time I fully agree when he says in the video above:

You can’t have a lot of stuff nagging on your mind. Programming is like [deeper design thinking] too. You’re trying to put together intricate structures in your mind and you’re trying to visualize things that are very abstract and hard to visualize, and you need a lot of focus to get there. Somehow, I don’t understand why, but the threat of being interrupted — even the idea that I can be interrupted — can prevent that focus from being attained.

Linus Torvalds wants his computer to be completely silent to avoid distraction:

The way I work is that I want to not have external stimulation. I want my office to be quiet. The loudest thing in the room – by far – should be the occasional purring of the cat.

These people realize that you absolutely need your full focus to produce extraordinary work. They consistently do it and consistently achieve remarkable results. Why is the lack of focus so bad? Feelings aside, doesn’t the work get done in the end? Yes, but it’s the result of shallow effort, which leads to lower-quality work delivered later. I say “we can do better” at work a lot; I’m striving for excellence, not mediocrity. Few companies are openly satisfied with creating a mediocre product for their customers, but that is what happens when the people who work on the product don’t get the space they need to produce brilliant work. As a software engineer, I shouldn’t be struggling to find time to do my job well; it’s ridiculous! Yet this is the grim reality. I still do good work, but I know I could do much more. I know because I have — in low distraction environments.

You might say, “Well, just get better at time management and setting priorities”; sometimes I say this myself. Some of this can be done from the bottom up.
I’ve had a very good experience working remotely in a time zone where I had only four hours of crossover with my coworkers, because I could do four hours of highly focused work every day knowing I wouldn’t be interrupted. Turning off email and chat for a certain period also works, but not as well, because I know other people are there and might need my help. People wear noise-cancelling headphones at the office to dampen the noise, but headphones give me a headache.

The magic happens when the whole company embraces ways of working that lead to excellent work, such as Basecamp with their Shape Up process or Doist, which has bet on async-first communication. To a typical Scrum shop, this looks like blasphemy: how will any work appear or get done if there are no regular ceremonies? Unfortunately, this requires investing heavily in talent and attracting (and keeping) the right people, who care about their work. The larger and more rigid the company, the harder it is.

For me, the solution is simply to be left alone. I know what to do; let me do it. In the time free from distraction, I’ll make things better — for everyone.

NorikiTech 1 years ago

Typingvania Devlog 3

I haven’t done much I can share since the last devlog back in March. I’m working on dynamically suggesting words based on your ability, and that has proved to be a non-trivial algorithm. I’ve made some progress, but it’s not ready to show yet, sorry!
