Clojure+ is a project to improve Clojure stdlib.
I’ve been working on OSS projects for almost 15 years now. Things are simple in the beginning - you’ve got a single project, no users to worry about, and all the time and focus in the world. Things have changed quite a bit for me over the years, and today I’m the maintainer of a couple of dozen OSS projects, mostly in the realms of Emacs, Clojure, and Ruby. People often ask me how I manage to work on so many projects besides having a day job that obviously takes up most of my time.

My recipe is quite simple, and I refer to it as “burst-driven development”. Long ago I realized that it’s totally unsustainable for me to work effectively in parallel on several quite different projects. That’s why I normally keep a closer eye on my bigger projects (e.g. RuboCop, CIDER, Projectile and nREPL), where I try to respond quickly to tickets and PRs, while I typically do (focused) development on only 1-2 projects at a time. There are often (long) periods when I barely check a project, only to suddenly decide to revisit it and hack vigorously on it for several days or weeks. I guess that’s not ideal for the end users, as some of them might feel that I “undermaintain” some (smaller) projects much of the time, but this approach has worked very well for me for quite a while.

The time I’ve spent developing OSS projects has taught me that:

- few problems require some immediate action
- you can’t always have good ideas for how to improve a project
- sometimes a project is simply mostly done, and that’s OK
- less is more
- “hammock time” is important

To illustrate all of the above with an example, let me tell you a bit about copilot.el 0.3, a project I became the primary maintainer of about 9 months ago. Initially there were many things about the project that frustrated me and that I wanted to fix and improve. After a month of relatively focused work I had mostly achieved my initial goals, and I put the project on the back burner for a while, although I kept reviewing PRs and thinking about it in the background. Today I remembered I hadn’t done a release there in quite a while, and 0.3 was born.

Tomorrow I might remember some features in Projectile that have been in the back of my mind for ages and finally implement them. Or not. I don’t have any planned order in which I revisit my projects; I just go wherever my inspiration (or current problems related to the projects) takes me.

And that’s a wrap. Nothing novel here, but I hope some of you will find it useful to know how I approach the topic of multi-project maintenance overall. The “job” of a maintainer is sometimes fun, sometimes tiresome and boring, and occasionally quite frustrating. That’s why it’s essential to have a game plan for dealing with it that doesn’t take a heavy toll on you and eventually make you hate the projects that you lovingly developed in the past. Keep hacking!
In my decade-plus of maintaining my dotfiles, I’ve written a lot of little shell scripts. Here’s a big list of my personal favorites. and are simple wrappers around system clipboard managers, like on macOS and on Linux. I use these all the time. prints the current state of your clipboard to stdout, and then whenever the clipboard changes, it prints the new version. I use this once a week or so. copies the current directory to the clipboard. Basically . I often use this when I’m in a directory and I want to use that directory in another terminal tab; I copy it in one tab and to it in another. I use this once a day or so. makes a directory and s inside. It’s basically . I use this all the time—almost every time I make a directory, I want to go in there. changes to a temporary directory. It’s basically . I use this all the time to hop into a sandbox directory. It saves me from having to manually clean up my work. A couple of common examples: moves and to the trash. Supports macOS and Linux. I use this every day. I definitely run it more than , and it saves me from accidentally deleting files. makes it quick to create shell scripts. creates , makes it executable with , adds some nice Bash prefixes, and opens it with my editor (Vim in my case). I use this every few days. Many of the scripts in this post were made with this helper! starts a static file server on in the current directory. It’s basically but handles cases where Python isn’t installed, falling back to other programs. I use this a few times a week. Probably less useful if you’re not a web developer. uses to download songs, often from YouTube or SoundCloud, in the highest available quality. For example, downloads that video as a song. I use this a few times a week…typically to grab video game soundtracks… similarly uses to download something for a podcast player. There are a lot of videos that I’d rather listen to like a podcast. I use this a few times a month. downloads the English subtitles for a video.
(There’s some fanciness to look for “official” subtitles, falling back to auto-generated subtitles.) Sometimes I read the subtitles manually, sometimes I run , sometimes I just want it as a backup of a video I don’t want to save on my computer. I use this every few days. , , and are useful for controlling my system’s wifi. is the one I use most often, when I’m having network trouble. I use this about once a month. parses a URL into its parts. I use this about once a month to pull data out of a URL, often because I don’t want to click a nasty tracking link. prints line 10 from stdin. For example, prints line 10 of a file. This feels like one of those things that should be built in, like and . I use this about once a month. opens a temporary Vim buffer. It’s basically an alias for . I use this about once a day for quick text manipulation tasks, or to take a little throwaway note. converts “smart quotes” to “straight quotes” (sometimes called “dumb quotes”). I don’t care much about these in general, but they sometimes weasel their way into code I’m working on. It can also make the file size smaller, which is occasionally useful. I use this at least once a week. adds before every line. I use it in Vim a lot; I select a region and then run to quote the selection. I use this about once a week. returns . (I should probably just use .) takes JSON at stdin and pretty-prints it to stdout. I use this a few times a year. and convert strings to upper and lowercase. For example, returns . I use these about once a week. returns . I use this most often when talking to customer service and need to read out a long alphanumeric string, which has only happened a couple of times in my whole life. But it’s sometimes useful! returns . A quick way to do a lookup of a Unicode string. I don’t use this one that often…probably about once a month. cats . I use for , for a quick “not interested” response to job recruiters, to print a “Lorem ipsum” block, and a few others. 
I probably use one or two of these a week. Inspired by Ruby’s built-in REPL, I’ve made:

- to start a Clojure REPL
- to start a Deno REPL (or a Node REPL when Deno is missing)
- to start a PHP REPL
- to start a Python REPL
- to start a SQLite shell (an alias for )

prints the current date in ISO format, like . I use this all the time because I like to prefix files with the current date. starts a timer for 10 minutes, then (1) plays an audible ring sound (2) sends an OS notification (see below). I often use to start a 5 minute timer in the background (see below). I use this almost every day as a useful way to keep track of time. prints the current time and date using and . I probably use it once a week. It prints something like this: extracts text from an image and prints it to stdout. It only works on macOS, unfortunately, but I want to fix that. (I wrote a post about this script.) (an alias, not a shell script) makes a happy sound if the previous command succeeded and a sad sound otherwise. I do things like which will tell me, audibly, whether the tests succeed. It’s also helpful for long-running commands, because you get a little alert when they’re done. I use this all the time. basically just plays . Used in and above. uses to play audio from a file. I use this all the time, running . uses to show a picture. I use this a few times a week to look at photos. is a little wrapper around some of my favorite internet radio stations. and are two of my favorites. I use this a few times a month. reads from stdin, removes all Markdown formatting, and pipes it to a text-to-speech system ( on macOS and on Linux). I like using text-to-speech when I can’t proofread out loud. I use this a few times a month. is an wrapper that compresses a video a bit. I use this about once a month. removes EXIF data from JPEGs. I don’t use this much, in part because it doesn’t remove EXIF data from other file formats like PNGs…but I keep it around because I hope to expand this one day. is one I almost never use, but you can use it to watch videos in the terminal. It’s cursed and I love it, even if I never use it. is my answer to and , which I find hard to use. For example, runs on every file in a directory. I use this infrequently but I always mess up so this is a nice alternative. is like but much easier (for me) to read—just the PID (highlighted in purple) and the command. or is a wrapper around that sends , waits a little, then sends , waits and sends , waits before finally sending . If I want a program to stop, I want to ask it nicely before getting more aggressive. I use this a few times a month. waits for a PID to exit before continuing. It also keeps the system from going to sleep. I use this about once a month to do things like: is like but it really really runs it in the background. You’ll never hear from that program again. It’s useful when you want to start a daemon or long-running process you truly don’t care about. I use and most often. I use this about once a day. prints but with newlines separating entries, which makes it much easier to read. I use this pretty rarely—mostly just when I’m debugging a issue, which is unusual—but I’m glad I have it when I do. runs until it succeeds. runs until it fails. I don’t use this much, but it’s useful for various things. will keep trying to download something. will stop once my tests start failing. is my emoji lookup helper. For example, prints the following: prints all HTTP statuses. prints . As a web developer, I use this a few times a month, instead of looking it up online. just prints the English alphabet in upper and lowercase. I use this surprisingly often (probably about once a month). It literally just prints this: changes my whole system to dark mode. changes it to light mode. It doesn’t just change the OS theme—it also changes my Vim, Tmux, and terminal themes. I use this at least once a day. puts my system to sleep, and works on macOS and Linux. I use this a few times a week. recursively deletes all files in a directory. I hate that macOS clutters directories with these files! I don’t use this often, but I’m glad I have it when I need it. is basically . Useful for seeing the source code of a file in your path (used it for writing up this post, for example!). I use this a few times a month. sends an OS notification. It’s used in several of my other scripts (see above). I also do something like this about once a month: prints a v4 UUID. I use this about once a month. These are just scripts I use a lot. I hope some of them are useful to you! If you liked this post, you might like “Why ‘alias’ is my last resort for aliases” and “A decade of dotfiles”. Oh, and contact me if you have any scripts you think I’d like.
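The Deno-or-Node fallback above can be sketched roughly like this. The original script’s name was lost in extraction, so `pick_js_repl` and the overall structure are my assumptions, not the author’s actual code:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a "Deno REPL, falling back to Node" wrapper.
# pick_js_repl prints which binary would be used: deno if installed, else node.

pick_js_repl() {
  if command -v deno >/dev/null 2>&1; then
    echo deno   # Deno is installed: its bare invocation opens a REPL
  else
    echo node   # otherwise fall back to the Node.js REPL
  fi
}

# To actually launch the chosen REPL:
#   exec "$(pick_js_repl)"
pick_js_repl
```

The `command -v` check is the portable way to test for a program’s presence, which is the same trick the static-file-server script presumably uses to fall back when Python is missing.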
Translations: Russian

Syntax highlighting is a tool. It can help you read code faster. Find things quicker. Orient yourself in a large file. Like any tool, it can be used correctly or incorrectly. Let’s see how to use syntax highlighting to help you work.

Most color themes have a unique bright color for literally everything: one for variables, another for language keywords, constants, punctuation, functions, classes, calls, comments, etc. Sometimes it gets so bad one can’t see the base text color: everything is highlighted. What’s the base text color here?

The problem with that is: if everything is highlighted, nothing stands out. Your eye adapts and considers it the new norm: everything is bright and shiny, and instead of getting separated, it all blends together.

Here’s a quick test. Try to find the function definition here: See what I mean? So yeah, unfortunately, you can’t just highlight everything. You have to make decisions: what is more important, what is less. What should stand out, what shouldn’t. Highlighting everything is like assigning “top priority” to every task in Linear. It only works if most of the tasks have lesser priorities. If everything is highlighted, nothing is highlighted.

There are two main use-cases you want your color theme to address:

1. Look at something and tell what it is by its color (you can tell by reading the text, yes, but why do you need syntax highlighting then?)
2. Search for something. You want to know what to look for (which color).

1 is a direct index lookup: color → type of thing. 2 is a reverse lookup: type of thing → color. Truth is, most people don’t do these lookups at all. They might think they do, but in reality, they don’t. Let me illustrate. Before: Can you see it? I misspelled for and its color switched from red to purple. Here’s another test. Close your eyes (not yet! Finish this sentence first) and try to remember: what color does your color theme use for class names?

If the answer to both questions is “no”, then your color theme is not functional. It might give you comfort (as in: I feel safe, if it’s highlighted, it’s probably code), but you can’t use it as a tool. It doesn’t help you.

What’s the solution? Have an absolute minimum of colors. So few that they all fit in your head at once. For example, my color theme, Alabaster, only uses four:

- Green for strings
- Purple for constants
- Yellow for comments
- Light blue for top-level definitions

That’s it! And I was able to type it all from memory, too. This minimalism allows me to actually do lookups: if I’m looking for a string, I know it will be green. If I’m looking at something yellow, I know it’s a comment. Limit the number of different colors to what you can remember. If you swap green and purple in my editor, it’ll be a catastrophe. If somebody swapped colors in yours, would you even notice?

What should you highlight? Something there isn’t a lot of. Remember—we want highlights to stand out. That’s why I don’t highlight variables or function calls—they are everywhere; your code is probably 75% variable names and function calls. I do highlight constants (numbers, strings). These are usually used more sparingly and often are reference points—a lot of logic paths start from constants. Top-level definitions are another good idea. They give you an idea of the structure quickly. Punctuation: it helps to separate names from syntax a little bit, and you care about names first, especially when quickly scanning code.

Please, please don’t highlight language keywords. , , , stuff like this. You rarely look for them: “where’s that if” is a valid question, but you will be looking not at the keyword, but at the condition after it. The condition is the important, distinguishing part. The keyword is not. Highlight names and constants. Grey out punctuation. Don’t highlight language keywords.

The tradition of using grey for comments comes from the times when people were paid by the line. If you have something like of course you would want to grey it out! This is bullshit text that doesn’t add anything and was written to be ignored. But for good comments, the situation is the opposite. Good comments ADD to the code. They explain something that couldn’t be expressed directly. They are important.

So here’s another controversial idea: comments should be highlighted, not hidden away. Use bold colors, draw attention to them. Don’t shy away. If somebody took the time to tell you something, then you want to read it.

Another secret nobody is talking about is that there are two types of comments:

- Explanations
- Disabled code

Most languages don’t distinguish between those, so there’s not much you can do syntax-wise. Sometimes there’s a convention (e.g. vs in SQL); then use it! Here’s a real example from the Clojure codebase that makes perfect use of the two types of comments:

Per statistics, 70% of developers prefer dark themes. Being in the other 30%, that question has always puzzled me. Why? And I think I have an answer. Here’s a typical dark theme: and here’s a light one: On the latter, the colors are way less vibrant. Here, I picked them out for you: This is because dark colors are in general less distinguishable and more muddy. Look at the Hue scale as we move brightness down: Basically, in the dark part of the spectrum, you just get fewer colors to play with. There’s no “dark yellow” or good-looking “dark teal”. Nothing can be done here. There are no magic colors hiding somewhere that have both good contrast on a white background and look good at the same time. By choosing a light theme, you are dooming yourself to a very limited, bad-looking, barely distinguishable set of dark colors. So it makes sense. Dark themes do look better. Or rather: light ones can’t look good. Science ¯\_(ツ)_/¯

There is one trick you can do that I don’t see a lot of. Use background colors! Compare: The first one has nice colors, but the contrast is too low: letters become hard to read. The second one has good contrast, but you can barely see the colors. The last one has both: high contrast and clean, vibrant colors. Lighter colors are readable even on a white background since they fill a lot more area. Text is the same brightness as in the second example, yet it gives the impression of clearer color. It’s all upside, really.

UI designers have known about this trick for a while, but I rarely see it applied in code editors: If your editor supports choosing a background color, give it a try. It might open light themes up for you.

What about typography? Don’t use it. This goes into the same category as too many colors. It’s just another way to highlight something, and you don’t need too many, because you can’t highlight everything. In theory, you might try to replace colors with typography. Would that work? I don’t know. I haven’t seen any examples.

Some themes pay too much attention to being scientifically uniform. Like, all colors have the exact same lightness, and hues are distributed evenly on a circle. This could be nice (to know if you have OCD), but in practice, it doesn’t work as well as it sounds: the idea of highlighting is to make things stand out. If you make all colors the same lightness and chroma, they will look very similar to each other, and it’ll be hard to tell them apart. Our eyes are way more sensitive to differences in lightness than in color, and we should use that, not try to negate it.

Let’s apply these principles step by step and see where it leads us. We start with the theme from the start of this post: First, let’s remove highlighting from language keywords and re-introduce the base text color: Next, we remove color from variable usage: and from function/method invocation: The thinking is that your code is mostly references to variables and method invocations. If we highlight those, we’ll have to highlight more than 75% of your code. Notice that we’ve kept variable declarations. These are not as ubiquitous and help you quickly answer a common question: where does this thing come from?

Next, let’s tone down punctuation: I prefer to dim it a little bit because it helps names stand out more. Names alone can give you a general idea of what’s going on, and the exact configuration of brackets is rarely equally important. But you might roll with base-color punctuation, too: Okay, getting close.

Let’s highlight comments: We don’t use red here because you usually need it for squiggly lines and errors. This is still one color too many, so I unify numbers and strings to both use green: Finally, let’s rotate the colors a bit. We want to respect nesting logic, so function declarations should be brighter (yellow) than variable declarations (blue). Compare with what we started with: In my opinion, we got a much more workable color theme: it’s easier on the eyes and helps you find stuff faster.

I’ve been applying these principles for about 8 years now. I call this theme Alabaster, and I’ve built it a couple of times for the editors I used:

- JetBrains IDEs
- Sublime Text (twice)

It’s also been ported to many other editors and terminals; the most complete list is probably here. If your editor is not on the list, try searching for it by name—it might be built-in already! I always wondered where color themes come from, and now I’ve become the author of one (and I still don’t know). Feel free to use Alabaster as is or build your own theme using the principles outlined in the article—either is fine by me.

As for the principles themselves, they have worked out fantastically for me. I’ve never wanted to go back, and just one look at any “traditional” color theme gives me a scare now. I suspect that the only reason we don’t see more restrained color themes is that people have never really thought about it. Well, this is your wake-up call. I hope this will inspire people to use color more deliberately and to change the default way we build and use color themes.
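As a concrete illustration of the four-color idea above, here is what such a minimal theme might look like as a VS Code-style `tokenColors` fragment. The scopes are standard TextMate scope names; the hex values are illustrative stand-ins, not Alabaster’s actual palette:

```json
{
  "tokenColors": [
    { "scope": "string",               "settings": { "foreground": "#448C27" } },
    { "scope": "constant",             "settings": { "foreground": "#7A3E9D" } },
    { "scope": "comment",              "settings": { "foreground": "#AA5D00" } },
    { "scope": "entity.name.function", "settings": { "foreground": "#325CC0" } }
  ]
}
```

Everything else, including language keywords, variables, and function calls, simply keeps the base text color by having no rule at all.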
Not all environments have Lisp-aware structural editing. Some are only line-oriented. How does one go about editing Lisp line-by-line?
Threading macros make Lisp-family languages much more readable. Other languages too, potentially! Except… other languages don’t have macros. How do we go about enabling threading “macros” there?
How Datomic differs from other databases, and why the absence of a query optimizer is sometimes better than its presence.
The steps in this guide have generated A$1,179,000 in salary (updated 13th April, 2025), measured as the sum of the highest annual salaries friends and readers have reached after following along, where they were willing to attribute their success to actions in here. If it works for you, email me so I can bump the number up. I currently run my business out of my own pocket. If I don't make sales, I lose savings, and it's as simple as that. I am all-in on creating work I love by force of arms, and I'd sooner leave the industry than be disrespected at a normal workplace again. The impetus to run that risk comes from two places. The first is that my tolerance for middle managers jerking themselves off at my expense is totally eroded, and I realized that I either had to do something about it or stop complaining. I'll happily go to an office if I think it will produce something I care about, but I will not do it because someone wants to impress a withered husk who thinks his sports car makes him attractive to young women. The second , and what this post is about, is that I am really good at getting jobs, and have friends with a very deep understanding of how the job market works. In Australia, when you apply for a job without permanent residency, you are filtered out of all applications immediately. It is the first question on all online application forms, and the reason is that companies do not want to deal with visa renewals and they have far too many candidates. This leads to a situation where any characteristic that is remotely inconvenient but not noticeably correlated with suitability for the business is grounds for rejection. It is not uncommon for immigrants to take months, sometimes over a year, to find their first job actually writing code. Despite being a non-white with no professional network in the country and an undesirable visa, I had my first paid programming engagement lined up before finalizing the move off my student visa.
I had a full-time job on A$117K lined up for the same day my full work visa kicked in. I continued to dig up work whenever a contract was expiring, even landing a gig mid-COVID, and while most of these jobs left much to be desired , I believe this has more to do with the state of the industry in Australia than anything that I did. And I have only gotten better at this over the past two years, because while I despaired about the state of software in general, I never stopped thinking and experimenting about how to regain some control over how I'm treated. Almost everyone I spend time with now has walked away from a job without flinching. I've done it . I once caught up with a friend, and he said "Work is stupid, I'm going to Valencia for a year." I said, "W-what? Valencia? When? For a year ?" "Yeah, a year. I'm going in two weeks." And then that glorious son of a bitch did it . Came home. Had a job waiting for him. Quit that job, got another job at more money. Quit that job, got one interstate because he felt like it at similar pay for half the work. All in a "weak" market. I get a lot of emails from people who despair about the state of the industry or who otherwise can't find jobs, and I always end up giving the same advice. I don't have the time to keep doing that. So in this post, I'm going to attempt to convince thousands of people that you should have much higher standards for what you tolerate, that you can build up the reserves to do your version of going to Valencia (this could just be staying home and playing with your kids for six months), and that it is immensely risky not to have this ability in your back pocket. Along the way, we will answer questions like "How long should a CV be?", "What should go on it?", and "When will this suffering end?" From Scott Smitelli's phenomenal The Ideal Candidate Will Be Punched In The Stomach : What was the plan here? Why did you leave a perfectly sustainable—if difficult and slightly unrewarding—job to do this crap? 
What even is this crap? You are, and let’s not mince words here, you are a thief. That’s the only way to make sense of this situation: You are stealing money from somebody, somehow. This is money you have not earned. There is no legitimate way that you should be receiving any form of compensation for simply absorbing abuse. These people, maybe the whole company but certainly the people in your immediate management chain, are irredeemably damaged to be using human beings that way. They will take, and take, and smile at you like they’re doing you some kind of favor, and maybe throw you a little perk from time to time to distract you from thinking about it too hard. But you? You can’t stop thinking. You can’t stop thinking. You can’t stop thinking. If you're in this photo and don't like it, this blog post is for you. We have one end-goal. A career where you're paid well, are treated with real respect, and we will not settle for less. And I mean real respect, as in "we will not proceed on this major project without your professional blessing, and you can fire abusive clients", not "you can work from home two days a week if the boss is feeling generous". I had a brief email exchange with Erez Zukerman, the CEO of ZSA last year, and asked how their customer support is so good — it's the best customer support I've ever experienced and there's no close second. He replied: For support, the basic understanding is that support is the heart of the business. It is not an afterthought. Support is a senior position at ZSA, with judgment, power of review over features and website updates before anything is released (nothing major goes out without a green line from every member of support), the power to fire customers (!), real flexibility when it comes to the schedule, etc. 
There are also lots of ways to expand, like writing (Robin has been writing incredible blog posts and creating printables), recording (Tisha recorded Tisha Talks Switches which thousands of people enjoyed), and more. Anything short of that isn't real respect. Not a special parking spot. Not the ability to pick up your kids sometimes. Not a patronizing award on Teams. Most places fall short of this, and because we have all agreed to demand better for ourselves, we are going to consider all of these places as mildly abusive. A lot of office jobs seem like a slow death of the soul — better than the swift death of the body that careers like construction work offer, but that isn't a reason to stop striving. Shoddy work. Hour-long stand-ups. The deadlines are somehow always urgent and must be delivered immediately, but are also always late and everyone knows they'll be late from day one. This is delightful at times — office scenes in improvised theater get funnier the straighter you play them — but many people eventually feel that something vital is missing from their work lives. I really enjoy David Whyte's The Three Marriages as an antidote to the tedious objection of "Work to live, don't live to work". It's a part of life, and while it isn't all of life, being bored and treated like a disposable cog for eight hours a day shouldn't be any part of your life. If you're happy to coast, adieu, catch you later. This is a no judgement zone for the next five minutes. Here is a quick reality check. I have, by virtue of hundreds of people reaching out to me over this blog, seen the "I want to leave my bad job" story play out far more times than a typical person does in a lifetime. It always plays out in one of two ways. The first is that the person immediately and aggressively looks for new jobs. This usually goes well. If it does not go well, they can always find a new job again.
When the job is pursued through "normal" mechanisms, such as cold Seek applications, these jobs almost never meet the standard I set above: great pay, great team, great interview process, and whatever office arrangement you prefer. But they've always been doing better along at least some of the four measures. The other story is much more typical, and it goes something like this: I'd love to leave, but there's something keeping me.

- One more year and I'll get a new title, and then I'll be so well-placed for a new job.
- I've heard the market is bad, so I should wait until it picks up again.
- I'll get a raise soon, then I'll negotiate for a new job.
- I'm scared of keeping up with mortgage repayments.
- I just need a year to finish up this project, it'll look great on my CV.
- My network is terrible, so I don't have the same options open to me.
- I think I can make a difference if I'm given a few more months.

In two years, this second approach has never gone well. Never, ever, ever. Consider this real exchange, copied verbatim and redacted.

May, 2024:

Me: I'm a little bit concerned that the pathway above leads to delaying indefinitely (there's always going to be a risk of moving then getting laid off - so what risk level do you actually tolerate, and how is that balanced against [COMPANY] being run so badly that you can get laid off there too?) but you know your situation better than me.

Reader: Well, the company was bought out and seems to be stabilizing.

July, 2024:

Reader: Got some fantastic news! Gonna get a raise at [COMPANY], 20%! It came as a surprise, apparently they think I earn too little so they're giving me a raise because of that.

November, 2024:

Reader: Wanted to let you know I got news, I'm gonna be fired next month.

This happens so often that it's actually boring for me. I've had exchanges like the above often enough that I know the person is finished months before they do. Play stupid games, win stupid prizes.
They will either be let go, burn out and quit, or burn out and stay there as their health deteriorates. No one, at any level short of executive, has managed to have the impact that would be required for them to feel it was worth the cost. The thing that is missing, to my eyes, is some sense of confidence and self-respect. I hear lots of supposed barriers to getting a better job, but almost none of them are convincing, especially from people in the first world, so what I'm actually hearing about are psychological barriers. It takes a certain degree of confidence to know that you have worth, because a great deal of our society, whether by coincidence or design, causes people to feel like they're not desirable. If you don't have confidence, you feel trapped in your current situation, because what if you can't find something else? What if you're not good enough? This is a real risk, but guess what, life's risky! Two months ago, one of my high school classmates, one of the fittest people I know, died of an aneurysm at age 29. Think about it this way: enough people read this blog that if you are reading this sentence, you have just drawn a ticket in the "heart attack kills me by December" lottery. This isn't hypothetical, this will happen — someone reading this will die having spent a few hundred hours on spreadsheets this year, and perhaps even have time to think "I wish I had listened to Ludic, he is so smart and wise." 1

And on self-respect, I will concede that you're getting the vestiges of my time spent in psychology, but why would anyone respect you if you let someone do Scrum at you for hours? No one respected me when I let people do Scrum at me, and that was my fault.

"It what der street trolls make when dey is short o' cash an' ... what is it dey's short of, Brick?" The moving spoon paused. "Dey is short o' self-respec', Sergeant," he said, as one might who'd had the lesson shouted into his ear for twenty minutes.

So where do we start off?
Well, the first thing to do is bury the idea that you need this particular job, or that you are otherwise unworthy. And we're going to do that by getting really good at getting mediocre jobs, and we're almost always going to want to be doing day-rate contracts. We are going to do a lot of things that I do not endorse when going for a good job, like sending your CV anywhere, talking to recruiters, etc. Regular full-time jobs obtained through mass-market channels have dysfunctional social dynamics that are too complex to get into here. Patrick McKenzie writes:

Many people think job searches go something like this:
- See ad for job on Monster.com
- Send in a resume.
- Get an interview.
- Get asked for salary requirements.
- Get offered your salary requirement plus 5%.
- Try to negotiate that offer, if you can bring yourself to.
This is an effective strategy for job searching if you enjoy alternating bouts of being unemployed, being poorly compensated, and then treated like a disposable peon.

Working jobs like the above comes at a real cost, even if you can get them at-will. I had an episode of intense burnout which resulted in a year of recovery, and I had to think very hard about how to not feel trapped in a bad situation again, even if the business fails. I do not want to attend hour-long stand-ups anymore. This section is about how to get the above jobs as effectively and painlessly as possible, but they will still not be great, and if you do them forever then I will be very disappointed in you. In any case, if one must engage with the market in this way to build confidence and a reputation, then day-rate contracts are amazing. I am heavily in favour of contracting. The day rate is much higher. You are forced to continue searching every few months, which means you are also forced to always be aware that you have options, and we will discuss how to minimize the pain of this. You will meet far more people because you will be at a new workplace every few months.
Here in Australia, a weak contracting job will pay A$1K per day, which is approximately double what a permanent employee earns. I.e., for every six months of contracting, you can afford six months of unemployment, and you're still as well off as you would have been if you had been permanently employed over that period. Contracts are terminated more frequently, but you're also in a much better position because you've saved way more money per day worked, you've met tons more people, and your CV is always up-to-date. And you also knew it was going to expire in six months, so having it end three months early isn't a horrible shock to your planning. You are also excluded from the most mentally draining practices in a corporate environment, and afforded a higher status than regular employees. You will usually not be asked to attend pointless meetings, and instead be left free to execute on technical work, particularly if you indicate that you can manage scope independently. If someone does ask you to attend a pointless meeting, you can recite the Litany of a Thousand Dollars a Day in your head over and over as the project manager attempts to flay your mind. You know that delightful period after you've submitted a resignation and you're about to get out? That's the whole contract. A six-month contract feels like handing in a resignation with six months of notice. When the CEO says "Can we put GenAI and blockchain in the product?", you can close your eyes, my God you are so happy, and whisper "Inshallah, I will not be on this train when it derails". None of these jobs will be great. This is not a good way to get jobs in the long-term. This is a boring, soulless way for someone that does not have any appreciable career capital or networking ability to generate adequate jobs on high pay.
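The back-of-envelope arithmetic behind that claim is worth spelling out. A minimal sketch, where the permanent daily rate and the number of working days are assumptions for illustration (only the A$1K day rate comes from the text):

```python
# Back-of-envelope check: if a contract day pays roughly double a
# permanent day, six months of contracting yields a full year of
# permanent-equivalent income. All figures besides the day rate are
# illustrative assumptions.
day_rate = 1000               # A$ per contract day (from the text)
permanent_daily = 500         # assumed: roughly half the contract rate
workdays_per_half_year = 120  # assumed working days in six months

contract_income = day_rate * workdays_per_half_year
permanent_income_full_year = permanent_daily * workdays_per_half_year * 2

print(contract_income, permanent_income_full_year)  # prints 120000 120000
```

Six months of contract work matches twelve months of permanent pay, which is where the "six months of contracting buys six months of unemployment" line comes from.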
We only bother with this so that you know that if your business explodes, or the cool non-profit you find fires you, or if a new boss comes in and abuses you, you'll know deep down that you can walk right out that door and tell them to get fucked. I should note that the advice in this section was heavily contributed to by a friend who wishes to remain anonymous, but let us all send them silent thanks. Anyway, we take Sturgeon's Law very seriously on this blog. Ninety percent of everything is crap. It is with this understanding that we must proceed. There is a pathway to navigate here, and it relies entirely on that broad understanding. Let us begin with recruiters. Recruiters are an unfortunate reality of the industry. I still haven't worked out why they exist when a company can just post a job ad themselves, and their talent team has to filter out the candidates themselves anyway, but whatever. They're here, and I've learned enough about the world to accept that it's 50/50 on whether their existence is economically rational. In 2019, about a month into my first full-time programming job, I received a call from a recruiter. They were looking for someone with Airflow experience to work a contract with Coles, a massive Australian grocery chain. I had no idea what to really say to this, being inexperienced and hugely underconfident, so I just listened to his questions and answered them. Most of my answers were a sad: "Ah, no, sorry, I know what AWS is but I've never used it before at a real business. I know what Airflow is, but I've never..." Until finally we come across the fateful question: "And do you know Linux?" Why, yes, I do know Linux. At that stage of my career, it never even occurred to me to ponder what knowing Linux is. Do I even know my keyboard if I can't construct one from scratch? What does he mean, know? How deranged would I have to be to say I know Python, without qualification, without being a core contributor?
But none of that occurred to me, I just said yes. He is delighted, we get to chatting, and we quickly realize that we're both working our first jobs! He is a year younger, also nervous about his job, and is so happy to be talking to someone that just sounds like a normal person. He is soon comfortable enough to ask me a very vulnerable question: "So what is Linux?" I answer, and I've been doing nothing but teach psychologists-in-training statistics for a year, so the explanation is good. Each good explanation leads to another, until I'm fielding questions like: "What is Airflow? What is AWS?" We hang up, on good terms, and I stare at the wall for a long moment. There are people out there just, like, calling around and functionally asking "Have you used FeebleGlorp for eight years?" with no internal model of what FeebleGlorp might be? That can't be right. Everyone at school told me that affairs would be very serious in the real world. Affairs are not very serious in the real world. Affairs have never been less serious. I told myself for a while that this must have been because he was so young, but no, they're actually almost all like this. I have only ever met two recruiters with intact brains.[2] To quote a reader with extensive HR experience who attempted to explain this dysfunction to me:

While there are professionals that specialize in tech and with time develop enough depth to understand the discipline and move the needle in the right direction, for most recruiters it is not an economic advantage to do so; as the winds of the market are ever changing, recruiters are always the first ones to go onto the chopping block when there are layoffs. Better to be a generalist recruiter and keep your job options open.

I.e., the recruiters you are talking to probably go out of their way to avoid learning anything, because they may be recruiting in a different industry next month. This means a few things.
I normally do not send CVs anywhere and decry them, but I've reversed my stance. They're a terrible way to get good jobs, but a heavily optimized CV will demolish most other candidates, who are about as unserious as the recruiters. So how do we optimize? Well, we're trying to get past recruiters. On your CV, quality indicators only matter if the recruiter can understand them, and as per the above they do not understand anything. At 12:34 PM today, while writing this blog post, I got a call from a recruiter and I asked them a question for blog material. "Hey, question about my CV, would it be better if I mentioned that I'm well-regarded by Linus Torvalds?" (This is not true, we don't know each other.) And they said, "Uh, I'd leave that out, these are very busy people and need technical credentials." Recruiters are only looking for one thing. They are looking for the number of years of experience that you say you have in buzzword, and possibly that you've worked somewhere like Google, but I've never seen a Googler compete for open-market contracts, so don't feel too disadvantaged. Years of experience with buzzword is the only thing that matters. Delete everything else. Link to your GitHub profile? Goodbye, none of these people are going to read that. I've been assured that the typical talent team spends five to ten seconds per CV. "I am a passionate front-end developer with a drive for..." No one cares, and if you reflect on how you felt even writing that sentence, of course no one cares. You didn't care. My CV used to say things like "deployed a deep learning project in collaboration with Google engineers" and it had sections like this:

Some of the most talented people I know in Australia have told me that this would qualify me for an instant interview on their teams, but this CV does not work because the person reading your CV will not care about the craft.
If someone that does care reads it, it will be after four untalented people decided it was allowed to land on their desk, and at that point they're going to interview you anyway, so your CV doesn't matter. The ideal CV starts with lines like this:

Five (5) years expert skills in cloud database development and integration (Databricks, Snowflake) using ETL/ELT tools (dbt and Airbyte) and deploying cloud computing with AWS (EC2, RDS) and Azure (VM) cloud platforms

The rest of the CV should be more lines like that; nothing else matters. A senior talent acquisition nerd at McKenzie told me that CVs should be one page, because it shows that the candidate is concise. Their counterpart at another agency said that you need three pages or you can barely get to know the candidate. Which of them is right? Neither: both of them have no idea what they're talking about, because both of them are just eyeballing it, coming up with post-hoc rationalizations for behavior that ignores the real hard question of why they specialize in hiring talent in fields that they cannot describe. I now trend towards a three-page CV for no reason other than it looks like I must have more experience if it won't fit on one page, and it gives me more space to put buzzwords in. And when I say buzzwords, I mean you need the room to write things like "Amazon Web Services (AWS)" because some of the people reading the CVs do not know they are the same thing. Act on the principle of minimum charity, and accept that this version of your CV will never get you a great job. We know what we're optimizing for at this stage, and it isn't amazing colleagues, it is the ability to refill your coffers very quickly and with minimal pain. Okay, but which buzzwords do you pick? If you hop onto a job search platform, you are going to see many jobs that are essentially asking you to cosplay as a software engineer. For example, I have just hopped onto Seek and punched in "data engineer", my own subspecialty.
This immediately yields this job from an international consultancy whose frontpage reads:

GenAI is the most powerful force in business—and society—since the steam engine. As software and code generate more value than ever, every worker, business leader, student, and parent is now asking: Are we ready?

Wow, that sure is something! I think I speak on behalf of all of us when I say "please stop, you're hurting us". Also, it looks like there isn't a single mention of AI on their website in 2022, so I'm really impressed that they've become experts in a novel technology just in time to cash in on over-excited executives. But what does the actual job listing entail?

Proficient in Azure Data Factory, Databricks, SQL Server Integration Services (SSIS), and SQL Server Analysis Services (SSAS).

And from this, by mental force, I can tell you everything you need to know about the job. My third eye is fully open, and the recruiting department's pathetic attempts to ward off my psychic intrusions are but tattered veils before a hurricane. They are almost certainly recruiting data engineers for a company with a very weak IT department, probably a government client, that is in the middle of a failing cloud migration. SSIS is the phylactery of a millennia-old lich-king, a piece of software that runs out of SQL Server on old government data warehouses everywhere. The first time I had to fix an SSIS production outage, the senior engineer on my team told me to "untick all the boxes on that screen, then re-tick them all and click save", and that actually solved the problem. The entire point of a cloud migration is to stop using SSIS and use something better, but that would require you to be good at your job, so instead consultancies sell Azure Data Factory. Azure Data Factory is notable for having been forged in the hottest furnace in Hell.
The last time I used it, I clicked "save" on a slow internet connection and it started to open and close random panels for five minutes before saving my work, which I can only assume means that the product has to open every component on the front-end to fetch data from the DOM to populate a POST request... which is, you know, certainly one way that we could do things. Why use something so bad? It's because Azure Data Factory can be used to run SSIS packages! So now you're on the cloud, and have a new bad service running your old bad service, all without actually improving anything! And of course, they are both tools that do not require programming, so the consultancy can sell you a team of non-programmers for $2,000 per day. I've worked alongside one of these teams. They had one good developer who desperately tried to handle all of their work at once, and, I shit you not, four "engineers" that spent eight hours a day copying-and-pasting secrets out of legacy SSIS packages into Azure Data Factory's secret manager for weeks on end. With a bit of experience, most job listings are simply an honor roll of dead IT projects. And because many executives hop onto the same bandwagons at the same times (but call it innovation), there seem to be specific patterns for the type of cog that companies are pursuing at any given moment. The friend who gave me most of the tips in here has an "Azure Data Engineer" CV, where he removes all mention of AWS work he has done so that government recruiters don't hurt their pretty little heads, and vice versa. Companies on Azure want Databricks because you can spin it up from the Azure UI, and companies on AWS similarly use Snowflake because of groupthink. Just smash those words onto the page. Every field can think of some variation of this. If you're a data scientist, it'll be a few common patterns to try to cram LLMs into things. If you're a front-end developer, it's probably going to be a soup of React and its supporting technologies.
Again, no one reading your CV until the final stage will know anything. Once, a recruiter had coffee with me, and they asked me why Git is such a popular programming language. I write my CV in Overleaf because I can make faster edits during the early phases of figuring out which patterns work, and fiddling with layout is probably the most annoying part of any sort of CV-writing. This is a tough situation. What I did was look up a few "easy" jobs, like data analyst, hop onto LinkedIn, navigate to that company's page, then navigate to someone that looks like they might be leading the relevant team. Do not message HR. They, as a rule, do not have human frailties like mercy and kindness while they are at work. Go straight to someone that actually cares if you are good at the job, and impress upon them that you are a real person, who either has a very cool life story about changing career pathways late in life, or who is an adorable graduate UwU. If you are super, super, super desperate, my company has an unpaid internship for graduates that really, really, really think that they just need a tiny bit of experience to get taken seriously. Once you're done scrubbing all signs of personality or competence out of your CV, leaving only eight (8) years of experience, what then? If you're going to be doing this all the time, how do we make it relatively painless? The first thing is to hop onto a bunch of job platforms and upload the CV. That's simple enough. This means that recruiters will start reaching out to you every week or so, and some of them will have jobs for you. 90% of them will fail to secure you an interview. I start the conversations with something like "In the interest of making sure we're making good use of time, what's the expected compensation for the job?". They'll say a number. If the number isn't high enough, thank them for their time and hang up. Don't waste your time.
They will not present you to the client if you ask them to do any work beyond sending a CV and collecting a commission; from their perspective, you are cattle to be sold. If I were desperate, I would take the first job offered to me at any pay, then not slow my search down at all. Most contracts allow you to quit on very short notice, so use that against the employer instead of having it used against you for a change. The second thing is that you can start testing out the CV in the lowest-effort manner possible. The recommendation from my friend that has experimented the most is to grab a job platform's app on your phone, and to apply to maybe three or four jobs every morning. Don't bother with ones that ask you to make accounts on new job platforms, or write cover letters, or anything like that. Save a filter that removes anything you aren't willing to do, whether it's pay that's too low or a long commute. Err on the side of being picky, and do this every workday, even if you already have a contract. If the list of jobs becomes empty, then you must either relax your constraints or move to a new area. Sorry! If you get as far as a call with a human and are later rejected, ask them, especially recruiters, what employers want to see. They will tell you which buzzwords are good. If there is any conceivable way that you can claim to have experience in an important buzzword, write it down. This is incidentally how the strain of doing this is best managed: by not doing anything more arduous than reading a few jobs and clicking "apply", then not thinking about it until the next day. Don't apply to so many that it feels like even a bit of an ordeal. Do not let rejection affect you; most of the people involved in this process do not deserve your respect in this instance. I am sure they are lovely husbands and wives and sons, and we don't care right now.
It has taken up to two months before calls started rolling in, and that is why I'd suggest doing this more-or-less constantly, even when settled into a contract role. You want to know if jobs have suddenly started to dry up, or if you need to make adjustments to your buzzword soup. A fair number of these jobs won't do any sort of diligence. The interview will be fine. Questions will sometimes be on the level of "Do you know Python?", a real question that a real director asked me before paying me hundreds of thousands of dollars. I've done a few more unpleasant interviews, detailed here, but at this point they don't bother me. If I found myself in another one of these situations, I would hang up mid-call. Eat your assertiveness vegetables, they'll put hair on your chest. Quit. You got this job, you'll get another job. Don't quit, duh. Listen man, I didn't design the industry, but I rolled brown skin and an Indian name at character creation. I'm just doing what I've gotta do. All those jobs will be mediocre, but you won't feel like any particular person has too much power over you. But still, the second job market is where you actually want to be. This is the promised land where people have functioning test suites, the executives know something about the work being undertaken, your colleagues are not Senior Void Gazers who have been so thoroughly beaten down by the industry that they dully repeat "it's a living and I have kids", and as a bonus you're probably paid about 50% more. It is so totally divorced from the first job market that people in it sometimes do not understand that the first job market exists. Famous Netflix-guy-turned-Twitch-streamer the Primeagen has never even heard of PowerBI, which is probably the most popular analytics tool on the planet. These people are blessed. It is not accessible via Seek. It is accessible entirely through having well-placed friends and a reputation for being a cool person with a modicum of self-respect.
You can't generate these by pulling the "apply for job" lever over and over. This way you don't have to pray that your friends' companies are hiring at any given moment; you'll just always know that you've got an interview every few weeks. Because getting in here isn't very predictable, this section is general advice in no particular order. If the company asks you to do Leetcode stuff, my opinion right now is that they're probably at least a bit serious, but I don't think a place that asks you to grovel before entry is a great place to be along non-technical dimensions. Erik Dietrich calls this type of interview "carnival cash", rewarding compliant employees and middle managers with the opportunity to terrorize their fellow humans instead of with money. I'm not that sure about this point. I'd probably be bad at a Leetcode interview, so I'm biased against them. Maybe they're correlated with high-quality programming performance in some way that I don't understand. People often say "I don't have any connections" or "My network is terrible". This was a 0% judgement zone earlier. It is now a 0% sympathy zone. There is a phenomenon I refer to as "trying to try". It can be broadly summarized as any set of behaviors where someone has not seriously engaged their brain, does not really believe that they're doing anything with a serious chance of success, and is more-or-less just looking for reasons to say that they tried but failed. This happens in subtle places. For example, when training with beginner sabre fencers, you can stand perfectly immobile and they will very consistently hit your blade instead of you. They are so panicked and upset that their body is not trying to win, simply going through the motions of what fencing looks like. This manifests in all sorts of ways that I'll talk about one day, but it's so apparent in the job search. "I've applied for fifty jobs and no one responded".
A good indicator that you're trying to try is the shape of the effort. Most people tell me they've applied for jobs and didn't get responses. Slightly savvier people tell me they've sent some cold emails out. Some people beyond that say they've started attending Meetups but had no luck. None of them have done anything remotely interesting or otherwise indicative of novel thought. I got my first programming job by emailing Josh Wiley from my psychology degree, a man who did not know me at all, but I had been in one lecture with him, and his wife was the only senior academic honest enough to tell me not to undertake a PhD. I still have the original email. We had a brief back-and-forth, and two weeks later one of his colleagues said "One of my PhD students is freaking out because they can't process some data in R", and that got me my first paid programming job, processing microsaccade data in sleep-deprived drivers. A few weeks later, I saw that a data analyst job was up for grabs at a nearby university. The smooth-brained thing to do would have been to apply via Seek and get ignored. I instead went on LinkedIn, looked up the company, looked up the word "lead" and cold messaged someone who seemed like they might have something to do with the job. This led me to Dave Coulter, who I still catch up with every few months, and a job offer that let me skip straight to being a mid-level engineer. During the interview, when they asked "Have you programmed professionally?", I described the microsaccade project and they hired me. I didn't mention that it was about thirty hours of work in total, and they didn't ask. I actually lost the original position to someone with six more years of experience, who was offered the original data analyst role or a much more highly-paid contract. They wanted the stability, so they took the permanent role, leaving me with a massive pay bump for the contract role, and we both quit at the same time anyway.
And they do not conceptualize it as losing tens of thousands of dollars, but they were functionally unemployed for months relative to what they could have earned. Score one for contracts. Those still ended up being mediocre jobs, but I just wanted to illustrate that there is a level of trying that looks more like "there is a gun to my head and I'm willing to do unorthodox things to survive", and the people that email me for jobs have never reached the unorthodox part of that. Presumably the people that do reach this point do not need to email me for jobs. I woke up this morning to an email from Dan Tentler from Phobos about safe ways to run Incus with NixOS images pre-loaded with Airflow and an overlay VPN to client sites. Dan learned about Phobos from a group of hackers in Oslo. I learned about overlay VPNs in December from the CTO of Hyprefire, Stefan Prandl, when asking for advice on network security. I have a discussion about something like this every day, even if it isn't in the tech space. Before that, at a relatively "decent" engineering company, the most complex discussion I had was trying to explain to someone that their Snowflake workload was crashing because they were trying to read 2M+ records into a hash map, and that this takes a lot of memory. I have learned more in the last three months than I had in the previous three years, basically along every dimension of my profession. I'm trying to catch up for years of working with mediocre performers, and it's hard. It's definitely doable, and remember that I'm doing this while spending half my time on sales, so you can do it faster than me, but there is a real cost to not working with really great people. I've studied hard over the past few years, but nothing comes close to just having awesome people around you. This matters because really good teams don't hire total scrubs that haven't taken control of their education. The first job market does not reward skill or personal development.
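For readers who haven't hit that failure mode: the Snowflake anecdote is an instance of a general pattern, materializing an entire result set in memory when a streaming aggregate would do. A minimal sketch, where the row shape and counts are invented for illustration (nothing here is from the original incident):

```python
# Illustrative only: the difference between materializing every row in a
# dict (resident memory grows with row count) and streaming a running
# aggregate (constant memory). Row contents are made up.

def load_all(rows):
    # O(n) memory: the whole result set lives in the map at once.
    index = {}
    for row in rows:
        index[row["id"]] = row
    return index

def stream_total(rows):
    # O(1) memory: each row is consumed and then discarded.
    total = 0
    for row in rows:
        total += row["amount"]
    return total

rows = [{"id": i, "amount": i % 10} for i in range(10_000)]
print(stream_total(rows))  # prints 45000, without holding 10k rows in a map
```

At 10,000 rows both work; at 2M+ wide rows, the dict version is the one that falls over.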
The second one does actually require you to be good. The best offer I've received from a good company (A$185K) was obtained not through Seek, but by meeting my current co-founder Ash Lally during the preparation for a game of Twilight Imperium IV where I absolutely smoked everyone. The only other place that I've considered might be acceptable to settle down at, much better than the offer I received through Ash, was the result of getting coffee with a local reader, then eventually being invited to drinks with their team a few times. We mostly talked about split keyboards and Star Citizen. It has been a few months since I quit my last job, and I used to say all sorts of conciliatory things like "Sure, that engineer is terrible, but most of them are good!", but money talks: I only offered one of my former colleagues a job with me. In retrospect, most of them had the potential to be good, but enough years in a typical corporate setting will ruin this. When I was 20, people were happy to hire me because I had potential. Now, potential is still important, but it's important that I've at least demonstrated that some of it is manifesting. Many engineers have pathologies that I think make them unsuitable for work on a healthy team, in the same way that some people need to do some self-work to enter a healthy relationship. For example, I know many people who feel guilty taking time off, so they'll burn themselves out without someone constantly getting them to slow down. I'm sympathetic, but a team as small as mine doesn't have time to walk someone through that level of self-harm and still deliver for clients reliably. We help each other through lots of little quirks we need to deprogram out of corporate contexts, but we need to be starting from a place of some progress.
An example that Modal's Jonathon Belotti sent my way is that Modal's most high-performing team members will get a two-week deadline, then confidently spend the entirety of the first week reading a book on the technology they're about to use. Most engineers I know, including myself a few years ago, would rather hack incompetently for two weeks. The essential reason for this is being too underconfident to act on our beliefs about how engineering should be done (or worse, not having those beliefs at all), and we'd rather fail in the approved, visibly-working fashion than risk looking unorthodox. "I programmed the whole two weeks and failed!" feels easier to justify than "I read a book for one of those weeks and failed!". But team members should be picked for their judgement, and they are good for the team in proportion to the quality of that judgement and their willingness to exercise it in the face of orthodoxy. People are awful at asking for work. Here is how I advise people do it: if you don't have a good time, just leave it be. You're here to rekindle old relationships and meet interesting people, and maybe they can help you out. The moment you start asking for help from people that you don't even want as friends is the moment that the entire endeavour becomes sleazy. I think of each person I know (in the context of job searching) as some sort of machine that randomly spits out jobs in a uniform distribution over a year. Let's say each person has a 5% chance of turning up a job every month, maybe more or less depending on the market. If you want an 80% chance at a job every month, you have to have enough people with you in mind that you're rolling the dice enough times to get that number. Many people tell me that they attended a few Meetups and had no results, even though that's what you're supposed to do. It's good that they tried, but most large Meetups seem to be populated by people who are ineffectually looking for jobs. Don't be ineffectual.
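The "job machine" model above can be made concrete. A quick sketch of the arithmetic; the 5% and 80% figures come from the text, while treating each contact as independent is my simplifying assumption:

```python
import math

# If each contact independently turns up a job with probability p in a
# given month, n contacts give a combined monthly chance of
# 1 - (1 - p)**n. Independence is assumed for illustration.

def monthly_chance(p, n):
    return 1 - (1 - p) ** n

def contacts_needed(p, target):
    # Smallest n such that monthly_chance(p, n) >= target.
    return math.ceil(math.log(1 - target) / math.log(1 - p))

print(contacts_needed(0.05, 0.80))  # prints 32
```

So at a 5% monthly chance per person, you need roughly thirty people who have you in mind before an 80% monthly chance of a job falls out, which is why "I attended a few Meetups" doesn't move the needle.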
Large Meetups were frustrating when I was a student because everyone interesting was swarmed by students trying desperately to look employable without being needy, and they are frustrating as a non-student because now I get swarmed. People go "Oh, I am a data scientist, I will go to the Data Science Meetup". That's better than not going out at all, but strictly inferior to going to a tiny Meetup with ten nerds that are deeply into Elixir or some other niche bit of technology. You will form real connections with the latter, and the fact that you know what Clojure is will be enough to make many people at such a place want to work with you. If you are in a city with a functioning tech industry and can't think of any interesting technology, then it's going to be really hard to justify why you deserve a spot on a good team, so maybe solve that problem first. If you're decent at writing and have opinions on something... write. It's amazing for meeting people. I have several readers that have sent me their writing, and without any intervention from me, about 30% of them hit the front page of Hacker News on the strength of their material. There are surprisingly few people putting out good material on almost all topics, especially in the age of LLM slop. Reader Mira Welner wrote about something as generic as Git checkout and hit the front page. Bernardo Stein, mentioned in various places on the blog as the guy that coaches me through my worst engineering impulses from my corporate career, has front-paged by writing about NixOS. Nat Bennett, who I've been getting advice from for months and am now hiring to coach my team, front-paged Hacker News writing about the notebook they keep at new jobs. Even Scott Smitelli, who I quoted earlier for having written this fantastic piece, emailed it to me, and before I could finish reading it people were already recommending it to me through other channels.
It's super easy to meet people through writing if you aren't afraid of pushing out your real opinions. And indeed, you will see extremely stupid comments on all of the above writing, so you will need to be unafraid. Fine. Tell people that you, personally, are ChatGPT. Someone else may lose their job and think "I wish I hadn't listened to Ludic, he is so stupid and foolish", but I refuse to acknowledge them. ↩ The main one is Gary Donovan, who I didn't even meet in the wild. I met a reader for coffee, and that reader worked with a really nice engineering company. That company said Gary is their favourite recruiter. The first time I called him, I said something about Lisp and it turned out he had a copy of The Little Schemer in front of him at that very second, and we later had a great talk about engineering culture in F1 over ramen. I am still reeling at the implications for neuroscience of a recruiter that can read — is it possible that some of them are sentient? ↩
A special Programming Languages: Theory, Design and Implementation edition of some interesting articles I recently read on the internet: There is something amazing about making your own programming language. In “You Should Make a New Programming Language” Nicole Tietz-Sokolskaya puts forward some great reasons to do the same, but I do it just for the sheer excitement of witnessing a program written in my own language run. Why aren’t there programming languages that are convenient to write but slow by default, and allow the programmer to drop to a harder-to-write but more performant form if required? Alex Kladov ponders this question in “On Ousterhout’s Dichotomy” , and offers a possible solution. I am a big fan of Algebraic data types , and consider them an indispensable tool in the modern programmer’s toolbox. In “Where Does the Name ‘Algebraic Data Type’ Come From?” Li-yao Xia investigates the possible sources of the name, going back to the programming languages of half a century ago. Follow Casey Rodarmor down the rabbit hole to learn where an unexpected newline character comes from in this entertaining and enlightening article “Whence ‘\n’?” . Turnstyle is an esoteric, graphical functional language by Jasper Van der Jeugt. I have never seen anything like it before. It’s truly mind-blowing and I’m still trying to understand how it works. As good programmers, we try to stay away from the dark corners of programming languages, but Justine Tunney takes a head-first dive into them and comes up with an enthralling tale in the article “Weird Lexical Syntax” . I am not going to lie, I love Lisps! I must have implemented at least a dozen of them by now. If you are like me, you may have wondered “Why Is It Easy to Implement a Lisp?” . Eli Bendersky puts forward a compelling argument. What better way to implement a fast (and small) Lisp than to compile it to LLVM IR? Using Clojure this time, John Jacobsen showcases it in “To The Metal… Compiling Your Own Language(s)” .
Phil Eaton takes an ingenious approach in “Compiling Dynamic Programming Languages” , one that had never occurred to me before, but will now be a part of my toolbox forever. Here’s another technique that I was only vaguely familiar with: JIT compilation using macros. In “Runtime Optimization with Eval” Gary Verhaegen demonstrates this technique using Clojure. When compiling dynamically typed programming languages, we need to tag pointers to data with the runtime type information. In “What Is the Best Pointer Tagging Method?” Troy Hinckley describes some good ways of doing so. I relish Max Bernstein’s articles about programming language implementation techniques. In “What’s in an e-graph?” they describe an optimization technique used in compilers, based on e-graphs. I love atypical uses of Programming Language Theory. Adam Dueck explains their PLT adventure in “How I Learned Pashto Grammar Through Programming Syntax Trees” . Brainfuck, the most popular of esoteric programming languages, has been on my mind a lot recently. And who better to learn about compiling BF from than Wilfred Hughes? In “An Optimising BF Compiler” they go over the algorithms they used to write “An Industrial-Grade Brainfuck Compiler” . And lastly, from the wicked mind of Srijan Paul comes a twist: “Compiling to Brainf#ck” , about their programming language Meep that, you guessed it, compiles to BF. If you have any questions or comments, please leave a comment below. If you liked this post, please share it. Thanks for reading! This note was originally published on abhinavsarkar.net .
So you went ahead and created a new programming language, with an AST, a parser, and an interpreter. And now you hate that you have to write programs in your new language in files just to run them? You need a REPL ! In this post, we’ll create a shiny REPL with lots of nice features using the Haskeline library to go along with your new PL that you implemented in Haskell. This post was originally published on abhinavsarkar.net . First, a short demo: That is a pretty good REPL, isn’t it? You can even try it online 1 , running entirely in your browser. Let’s assume that we have created a new small Lisp 2 , just large enough to be able to conveniently write and run the Fibonacci function that returns the nth Fibonacci number . That’s it, nothing more. This lets us focus on the features of the REPL 3 , not the language. We have a parser to parse the code from text to an AST, and an interpreter that evaluates an AST and returns a value. We are not going into the details of the parser and the interpreter; listing the type signatures of the functions they provide is enough for this post. Let’s start with the AST: That’s right! We named our little language FiboLisp. FiboLisp is expression oriented; everything is an expression. So naturally, we have an AST. Writing the Fibonacci function doesn’t require many syntactic facilities. In FiboLisp we have: We also have function definitions, captured by , which records the function name, its parameter names, and its body as an expression. And finally we have s, which are a bunch of function definitions to define, and another bunch of expressions to evaluate. Short and simple. We don’t need anything more 4 . This is how the Fibonacci function looks in FiboLisp: We can see all the AST types in use here. Note that FiboLisp is lexically scoped.
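For reference, the one computation FiboLisp is built to express, the naive doubly recursive Fibonacci, sketched here in Kotlin rather than in FiboLisp itself:

```kotlin
// The nth Fibonacci number via the naive double recursion, the single
// program the post's toy language exists to run.
fun fib(n: Int): Int = if (n < 2) n else fib(n - 1) + fib(n - 2)

fun main() {
    println((0..9).map(::fib)) // [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
}
```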
The module also lists a bunch of keywords ( ) that can appear in the car 5 position of a Lisp expression, which we use later for auto-completion in the REPL, and some functions to convert the AST types to nice looking strings. For the parser, we have this pared-down code: The essential function is , which takes the code as a string, and returns either a on failure, or a on success. If the parser detects that an S-expression is not properly closed, it returns an error. We also have this pretty-printer module that converts function ASTs back to pretty Lisp code: Finally, the last thing before we hit the real topic of this post, the FiboLisp interpreter: We have elided the details again. All that matters to us is the function that takes a program, and returns either a runtime error or a value. is the runtime representation of the values of FiboLisp expressions, and all we care about is that it can be shown and fully evaluated via 6 . also takes a function that’ll be demystified when we get into implementing the REPL. Lastly, we have a map of built-in functions and a list of built-in values. We expose them so that they can be treated specially in the REPL. If you want, you can go ahead and fill in the missing code using your favourite parsing and pretty-printing libraries 7 , and the method of writing interpreters. For this post, those implementation details are not necessary. Let’s package all this functionality into a module for ease of importing: Now, with all the preparations done, we can go REPLing. The main functionality that a REPL provides is entering expressions and definitions, one at a time, which it Reads, Evaluates, and Prints, and then Loops back, letting us do the same again. This can be accomplished with a simple program that prompts the user for an input and does all these with it. However, such a REPL will be quite lackluster.
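The simple read-evaluate-print loop described above, sketched in Kotlin rather than the post's Haskell, with a placeholder evaluator standing in for the FiboLisp interpreter:

```kotlin
// A bare-bones REPL skeleton: read a line, evaluate it, print, loop.
// The evaluator here is a stand-in that just echoes its input; in the
// post this is where the FiboLisp interpreter would be invoked.
fun evalLine(line: String): String = "=> $line"

// Reading and printing are injected so the loop can be driven by a
// terminal or, for testing, by canned inputs.
fun repl(read: () -> String?, print: (String) -> Unit) {
    while (true) {
        print("> ")
        val line = read() ?: break     // EOF (e.g. Ctrl+D) quits the REPL
        if (line.isBlank()) continue   // empty input: just prompt again
        print(evalLine(line) + "\n")
    }
}

fun main() {
    repl(::readLine) { kotlin.io.print(it) }
}
```

A real REPL would replace evalLine with parse-then-interpret and keep session state between iterations, which is exactly what the rest of the post builds.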
These days programming languages come with advanced REPLs like IPython and nREPL , which provide many functionalities beyond simple REPLing. We want FiboLisp to have a great REPL too. You may have already noticed some advanced features that our REPL provides in the demo. Let’s state them here: Haskeline — the Haskell library that we use to create the REPL — provides only basic functionalities, upon which we build to provide these features. Let’s begin. As usual, we start the module with many imports 8 : Notice that we import the previously shown module qualified as , and Haskeline as . Another important library that we use here is terminfo , which helps us do colored output. A REPL must preserve its context through a session. In the case of FiboLisp, this means we should be able to define a function 9 as one input, and then use it later in the session, one or many times 10 . The REPL should also respect the REPL settings through the session till they are unset. Additionally, the REPL has to remember whether it is in the middle of writing a multiline input. To support multiline input, the REPL also needs to remember the previous indentation, and the input entered on previous lines of a multiline input. Together these form the : Let’s deal with settings first. We set and unset settings using the and commands. So, we write the code to parse the settings: Nothing fancy here, just splitting the input into words and going through them to make sure they are valid. The REPL is a monad that wraps over : also lets us do IO — is it really a REPL if you can’t do printing — and deal with exceptions. Additionally, we have a read-only state that is a function, which will be explained soon. The REPL starts in the single-line mode, with no indentation, function definitions, settings, or previously seen input. Let’s go top-down.
We write the function that is the entry point of this module: This sets up Haskeline to run our REPL using the functions we provide in the later sections: and . This also demystifies the read-only state of the REPL: a function that adds colors to our output strings, depending on the capabilities of the terminal in which our REPL is running. We also set up a history file to remember the previous REPL inputs. When the REPL starts, we output some messages in nice colors, which are defined as: Off we go REPLing now: We infuse our with the powers of Haskeline by wrapping it with Haskeline’s monad transformer, and call it the type. In the function, we , it, and again. We also deal with the user quitting the REPL (the case), and hitting Ctrl + C to interrupt typing or a running evaluation (the handling for ). Wait a minute! What is that imperative-looking doing in our Haskell code? That’s right, we are looking through some lenses! If you’ve never encountered lenses before, you can think of them as pairs of setters and getters. The lenses above are for setting and getting the corresponding fields from the data type 11 . The , , and functions are for getting, setting and modifying, respectively, the state in the monad using lenses. We see them in action at the beginning of the function when we use to set the various fields of to their initial values in the monad. All that is left now is actually reading the input, evaluating it and printing the results. Haskeline gives us functions to read the user’s input as text. However, being Haskellers, we prefer some structure around it: We’ve got all previously mentioned cases covered with the data type. We also do some input validation and capture errors for the failure cases with the constructor. is used for when the user quits the REPL. Here is how we read the input: We use the function provided by Haskeline to show a prompt and read the user’s input as a string. The prompt shown depends on the of the REPL state.
In the mode we show , whereas in the mode we show . If there is no input, that means the user has quit the REPL. In that case we return , which is handled in the function. If the input is empty, we read more input, preserving the previous indentation ( ) in the mode. If the input starts with , we parse it for various commands: The and cases are straightforward. In the case of , we make sure to check that the file we are asked to load is located somewhere inside the current directory of the REPL or its recursive subdirectories. Otherwise, we deny loading by returning a . We parse the settings using the function we wrote earlier. If the input is not a command, we parse it as code: We append the previously seen input (in case of multiline input) with the current input and parse it using the function provided by the module. If parsing fails with an , it means that the input is incomplete. In that case, we set the REPL line mode to , the REPL indentation to the current indentation, and the seen input to the previously seen input appended with the current input, and read more input. If it is some other error, we return a with it. If the result of parsing is a program, we return it as a input. That’s it for reading the user input. Next, we evaluate it. Recall that the function calls the function with the read input: The cases of , and are straightforward. For settings, we insert or remove the setting from the REPL settings, depending on whether it is being set or unset. For the other cases, we call the respective helper functions. For a command, we check if the requested identifier maps to a user-defined or builtin function, and if so, print its source. Otherwise we print an error. For a command, we check if the requested file exists. If so, we read and parse it, and interpret the resultant program. In case of any errors in reading or parsing the file, we catch and print them.
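In the post, the "input is incomplete" signal comes from the parser's unclosed-S-expression error. Stripped of the parser, the idea reduces to something like this Kotlin sketch (which naively counts parentheses and ignores strings and comments):

```kotlin
// Decide whether a chunk of Lisp input is complete or the REPL should keep
// reading more lines. A real implementation gets this from the parser's
// error type; this sketch just tracks parenthesis depth.
fun isIncomplete(input: String): Boolean {
    var depth = 0
    for (c in input) when (c) {
        '(' -> depth++
        ')' -> depth--
    }
    return depth > 0   // still-open parens mean multiline mode continues
}

fun main() {
    println(isIncomplete("(define (fib n)")) // true: keep reading
    println(isIncomplete("(fib 10)"))        // false: ready to evaluate
}
```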
Finally, we come to the workhorse of the REPL: the interpretation of the user-provided program: We start by combining the user-defined functions in the current input with the previously defined functions in the session, such that current functions override previous functions with the same names. At this point, if the setting is set, we print the program AST. Then we invoke the function provided by the module. Recall that the function takes the program to interpret and a function of type . This function is a color-adding wrapper over the function returned by the Haskeline function 12 . This function allows non-REPL code to safely print to the Haskeline-driven REPL without garbling the output. We pass it to the function so that the interpreter can invoke it when the user code invokes the builtin function or similar. We make sure to and the value returned by the interpreter so that any lazy values or errors are fully evaluated 13 , and the measured elapsed time is correct. If the interpreter returns an error, we print it. Otherwise we convert the value to a string, and if it is not empty 14 , we print it. Finally, we print the execution time if the setting is set, and set the REPL defs to the current program defs. That’s all! We have completed our REPL. But wait, I think we forgot one thing … The REPL would work fine with this much code, but it would not be a good experience for the user, because they’d have to type everything without any help from the REPL. To make it convenient for the user, we provide contextual auto-completion while typing. Haskeline lets us plug in our custom completion logic by setting a completion function, which we did way back at the start. Now we need to implement it. Haskeline provides us the function to easily create our own completion function.
It takes a callback function that it calls with the current word being completed (the word immediately to the left of the cursor), and the content of the line before the word (to the left of the word), reversed. We use these to return different completion lists of strings. Going case by case: This covers all cases, and provides helpful completions, while avoiding bad ones. And this completes the implementation of our wonderful REPL. I wrote this REPL while implementing a Lisp that I wrote 15 while going through the Essentials of Compilation book, which I thoroughly recommend for getting started with compilers. It started as a basic REPL, and gathered a lot of nice functionalities over time. So I decided to extract and share it here. I hope that this Haskeline tutorial helps you in creating beautiful and useful REPLs. Here is the complete code for the REPL. If you have any questions or comments, please leave a comment below. If you liked this post, please share it. Thanks for reading! The online demo is rather slow to load and to run, and works only on Firefox and Chrome. Even though I managed to put it together somehow, I don’t actually know how it exactly works, and I’m unable to fix the issues with it. ↩︎ Lisps are awesome and I absolutely recommend creating one or more of them as an amateur PL implementer. Some resources I recommend are: the Build Your Own Lisp book, and the Make-A-Lisp tutorial. ↩︎ REPLs are wonderful for doing interactive and exploratory programming where you try out small snippets of code in the REPL, and put your program together piece-by-piece. They are also good for debugging because they let you inspect the state of running programs from within. I still fondly remember the experience of connecting (or jacking in ) to running productions systems written in Clojure over REPL, and figuring out issues by dumping variables. ↩︎ We don’t even need . 
We can, and have to, define variables by creating functions, with parameters serving the role of variables. In fact, we can’t even assign or reassign variables. Functions are the only scoping mechanism in FiboLisp, much like old-school JavaScript with its IIFEs . ↩︎ car is obviously Contents of the Address part of the Register, the first expression in a list form in a Lisp. ↩︎ You may be wondering why we need the instances for the errors and values. This will become clear when we write the REPL. ↩︎ I recommend the sexp-grammar library, which provides both parsing and printing facilities for S-expression based languages. Or you can write something by yourself using parsing and pretty-printing libraries like megaparsec and prettyprinter . ↩︎ We assume that our project’s Cabal file sets the default-language to GHC2021, and the default-extensions to , , , and . ↩︎ Recall that there is no way to define variables in FiboLisp. ↩︎ If the interpreter allows mutually recursive function definitions, functions can be called before defining them. ↩︎ We are using the basic-lens library here, which is the tiniest lens library, and provides only the five functions and types we see used here. ↩︎ Using the function returned from is not necessary in our case because the REPL blocks when it invokes the interpreter. That means nothing but the interpreter can print anything while it is running. So the interpreter could actually print directly to and nothing would go wrong. However, imagine a case in which our code starts a background thread that needs to print to the REPL. In such a case, we must use the Haskeline-provided print function instead of printing directly. When printing to the REPL using it, Haskeline coordinates the prints so that the output in the terminal is not garbled. ↩︎ Now we see why we derive instances for errors and . ↩︎ The returned value could be of type void with no textual representation, in which case we would not print it.
↩︎ I wrote the original REPL code almost three years ago. I refactored, rewrote and improved a lot of it in the course of writing this post. As they say, writing is thinking. ↩︎
Recently I followed the very good Coursera course “ Algorithms, Part I ”. The exercises were in Java, and the most fun one was implementing a two-dimensional version of a k-d tree . Since I sometimes do generative art in Clojure , I thought this would be a fun algorithm to implement myself. There already exist other implementations, for example this one , but this time I wanted to learn, not use. A 2-d tree is a spatial data structure that is efficient for nearest neighbour and range searches in a two-dimensional coordinate system. It is a generalization of a binary search tree to two dimensions. Recall that in a binary search tree, one builds a tree structure by inserting elements such that the left nodes are always lower, and the right nodes are always higher. That way, one only needs $O(\log(n))$ lookups to find a given element. See Wikipedia for more details. In a 2-d tree one manages to do the same with points $(x,y)$ by alternating the comparison on the $x$ or $y$ coordinate. For each insertion, one splits the coordinate system in two. Look at the following illustration: This is the resulting tree structure after having inserted the points $(0.5, 0.5)$, $(0.6, 0.3)$, $(0.7, 0.8)$, $(0.4, 0.8)$, and $(0.4, 0.6)$. For each level of the tree, the coordinate to compare with alternates. The following illustration shows how the tree divides the coordinate system into sub-regions: To illustrate searching, let’s look up $(0.4, 0.6)$. We first compare it with $(0.5, 0.5)$. The $x$ coordinate is lower, so we look at the left subtree. Now the $y$ coordinate is lower, so we look at the left subtree again, and we have found our point. This is 2 compares instead of the maximum 5. There’s a lot more explanation on Wikipedia . Let’s jump straight to the implementation in Clojure.
We first define a node to contain three values: the point to insert, a boolean indicating if we are comparing vertically or horizontally (vertical means comparing the $x$-coordinate), and a rectangle indicating which subregion the node corresponds to. (Note: we don’t really need to carry around the rectangle information - it can be computed from the boolean and the previous point. I might optimize this later.) Insertion is almost identical to insertion into a binary search tree. Where the structure shines is when looking for the nearest neighbour. The strategy is as follows: keep track of the “best so far” point, and only explore subtrees that are worth exploring. When is a subtree worth exploring? Only when its region is closer to the search point than the current best point. In addition, we do one optimization when there are two subtrees: we explore the closest subtree first. Here’s the full code: In the recursion, we keep a stack of paths (it looks like ). When exploring a new node, we add it to the top of the stack, and when recurring, we pop the current stack. Here’s how the data structure looks after inserting the same points as in the illustration above: I wanted the tree structure to behave like a normal Clojure collection. The way to do this is to implement the required interfaces. For example, to be able to use , , , , etc., we have to implement the interface. To find out which methods we need to implement, I found this Gist very helpful. I create a new type that I call using : When implementing , I took a lot of inspiration (and implementation) from this blog post by Nathan Wallace. Also, thanks to the Reddit user for pointing out a bad implementation. Here is the diff after his comments. We can create a helper method to create new trees: Now we can create a new tree like this: Also, the following code works: (get all points whose second coordinate is greater than 0.5) The full code can be seen here on Github .
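The author's implementation is in Clojure (elided above). For readers who want the bare algorithm without the collection-interface machinery, the alternating-axis insertion and the pruned nearest-neighbour search can be sketched like this in Kotlin (my sketch, not the author's code; it also skips the rectangle bookkeeping the note above says is optional):

```kotlin
// A minimal 2-d tree: level 0 splits on x ("vertical"), level 1 on y, and
// so on, exactly as in the post's illustration.
data class Point(val x: Double, val y: Double)

class Node(val p: Point, val vertical: Boolean) {
    var left: Node? = null
    var right: Node? = null
}

fun insert(root: Node?, p: Point, vertical: Boolean = true): Node {
    if (root == null) return Node(p, vertical)
    val goLeft = if (root.vertical) p.x < root.p.x else p.y < root.p.y
    if (goLeft) root.left = insert(root.left, p, !root.vertical)
    else root.right = insert(root.right, p, !root.vertical)
    return root
}

fun sqDist(a: Point, b: Point): Double {
    val dx = a.x - b.x
    val dy = a.y - b.y
    return dx * dx + dy * dy
}

// Nearest-neighbour search with the pruning rule from the post: visit the
// far subtree only if the splitting line is closer than the best point so
// far, and explore the nearer subtree first.
fun nearest(root: Node?, q: Point, best: Point? = null): Point? {
    if (root == null) return best
    var b = if (best == null || sqDist(root.p, q) < sqDist(best, q)) root.p else best
    val diff = if (root.vertical) q.x - root.p.x else q.y - root.p.y
    val (near, far) = if (diff < 0) root.left to root.right else root.right to root.left
    b = nearest(near, q, b)
    if (diff * diff < sqDist(b!!, q)) b = nearest(far, q, b) // prune the far side
    return b
}

fun main() {
    var root: Node? = null
    val pts = listOf(
        Point(0.5, 0.5), Point(0.6, 0.3), Point(0.7, 0.8),
        Point(0.4, 0.8), Point(0.4, 0.6),
    )
    for (p in pts) root = insert(root, p)
    println(nearest(root, Point(0.41, 0.61))) // Point(x=0.4, y=0.6)
}
```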
I did this partly to learn a simple geometric data structure, but also to make another tool in my generative art toolbox . Implementing an algorithm helps immensely when trying to understand it: I got better and better at visualizing how these trees look. There are two main things about Clojure I want to mention in this section: the library, and the Clojure interfaces. The library is a property-based testing library. In a few words, given constraints on inputs, it can generate test data for your function. In one particular case, it helped me verify that my code had a bug and produce a minimal example of it (the bug was that I forgot to recur in the clause in the function). By writing some “simple-ish” code, I got an example of an input that made the return a wrong answer. Here is the code: It is probably more verbose than needed, but the summary is this: the function returns a generator , which, given some constraints, can return sample inputs (in this case: vectors of points). Then I compare the result from the tree search with the brute-force result given by first sorting the points, then picking the first point. The way I’ve set it up, whenever I run my tests, generates 10000 test cases and fails the test if my implementation doesn’t return the correct result. This was very handy, and quite easy to set up. It was rewarding to implement the Clojure core interfaces for my type ( , , etc.). What was a bit frustrating though, was the lack of documentation. I ended up reading a lot of Clojure source code to understand the control flows. Basically, the only thing I know about is that it is a Java interface like this: Then I had to search the Clojure source code to understand how it was supposed to be used. A docstring or two would be nice. I found many blog posts that implemented custom types ( this one , this one , or this one ), but very little in Clojure’s own documentation.
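The property-testing pattern described above, generate random inputs and compare the optimised implementation against a brute-force oracle, looks roughly like this without any library (a Kotlin sketch; the "fast" function here is a stand-in single-pass scan, not the post's 2-d tree):

```kotlin
import kotlin.random.Random

fun sqDist(a: Pair<Double, Double>, b: Pair<Double, Double>): Double {
    val dx = a.first - b.first
    val dy = a.second - b.second
    return dx * dx + dy * dy
}

// The implementation under test (a stand-in for the tree's nearest search).
fun nearestFast(points: List<Pair<Double, Double>>, q: Pair<Double, Double>) =
    points.minByOrNull { sqDist(it, q) }

// The brute-force oracle from the post: sort the points, pick the first.
fun nearestOracle(points: List<Pair<Double, Double>>, q: Pair<Double, Double>) =
    points.sortedBy { sqDist(it, q) }.firstOrNull()

// Generate random point vectors and fail if implementation and oracle
// ever disagree; a fixed seed keeps the test reproducible.
fun checkProperty(cases: Int = 1000, rng: Random = Random(42)): Boolean {
    repeat(cases) {
        val pts = List(rng.nextInt(1, 20)) { rng.nextDouble() to rng.nextDouble() }
        val q = rng.nextDouble() to rng.nextDouble()
        if (nearestFast(pts, q) != nearestOracle(pts, q)) return false
    }
    return true
}

fun main() {
    println(checkProperty()) // true
}
```

A real property-based library adds what this sketch lacks: automatic shrinking of failing inputs down to a minimal example, which is what made the forgotten-recur bug easy to diagnose.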
On the flip side, I got to read some of the Clojure source code, which was very educational. I also got to understand a bit more about the usefulness of protocols (using and to provide several implementations). Here it was very useful to read the source code of thi-ng/geom . I learned a lot, and I got one more tool to make generative art. Perhaps later I could publish the code as a library, but I should really battle-test it a bit more first (anyone can copy the code; it is open source on my Github). I used the data structure to create the following pictures (maybe soon I’ll link to my own page instead of Instagram). The function was very useful in making the code fast enough. Until then, thanks for reading this far!
I just got a new Macbook, and I thought it would be useful for my future self to write down what I installed on it. Luckily the history file in my shell is long enough to remember everything. The order of the steps is quite random. I hear there are other alternatives out there, but I stick to Brew for now. I installed Brew the “official” way: This seemed to automatically install the XCode Command Line Tools . Follow the install instructions (this adds an init script to my file): This installs Clojure and OpenJDK 21 . From the official Clojure documentation. I do my Clojure programming in Emacs with Cider and clojure-lsp . I use this version of Emacs on Mac. Install with: The second line makes it possible to open Emacs from Finder. My Emacs configuration is stored here . Since I don’t install it on a new machine very often, I usually have to restart Emacs a few times before it works. I use Tmux to manage windows in my terminal. My Tmux configuration is Git-managed. Here is the current version: This first requires installing the Tmux plugin manager . The package makes copy-on-select work as expected. I wrote about how I use Tmux here . Mostly by habit I use oh-my-zsh for terminal configuration. I’m mostly happy with the default configuration. I do a lot of frontend development, so I will probably need to install more Node-related packages, but I needed Node to let Emacs install LSP clients automatically (many of them are distributed on NPM ). I do a lot of my backup using Jottacloud . They have a CLI utility to select directories to back up. From the official documentation : For editor integration, also add the language server : I was looking for a good window manager for Mac. Spectacle is not maintained anymore (I do think it still works), but after some searching, I found Rectangle . Open source and easy to use. I mostly use to maximize windows and / to move windows to the left/right half. I use GNU Stow to manage (some of) my dotfiles. A good intro is here .
I keep the dotfiles in a private repository (maybe I’ll make it public one day). I use bat sometimes to read code files with syntax highlighting in the terminal. And ripgrep for fast search (it also makes some Emacs plugins faster). Remember to update the Git config. At the moment mine looks like this: This blog is built using Jekyll , so it needs Ruby installed. It also uses (at the moment) an old version of Ruby, so I installed Ruby 2.7.3 with a version manager: That seems to be all (for now) that is installed via the CLI. I have usually also installed iTerm2, but I noticed I don’t use many of its features (tabs, themes, etc.), so for now I’m sticking with the built-in Terminal app. It’s easy and it stores all my passwords. For the interruptions. All my files. Sometimes I need a VPN. What’s life without music? (silent) The app allows downloading series. I have the Remarkable tablet , and I often use the app to upload PDFs. Sometimes I play games. Unfortunately, many games don’t work on Mac anymore - and I might be too lazy to try installing Windows on it.
I've been writing code in Kotlin on and off over a few months, and I think I'm now at that unique stage of learning something new where I already have a sense of what's what, but am not yet so far along that I've forgotten the beginner's pain points. Here's a dump of some of my impressions, good and bad. We were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp. — Guy Steele Kotlin drags Java programmers another half of the rest of the way. That is to say, Kotlin doesn't feel like a real functional-first language. It's still mostly Java with all its imperativism, mutability and OO, but layered with some (quite welcome) syntactic sugar that makes it less verbose and actually encourages a functional style. Where it still feels mostly Java-ish is when you need to work with Java libraries. Which is most of the time, since the absolutely transparent Java interop doesn't make writing Kotlin-flavored libraries a necessity. For starters, you don't have to put everything in classes with methods any more. Plain top-level functions are perfectly okay. You also don't need to write/generate a full-blown class if what you really need is a struct/record. Instead you just do: These have some handy features (like comparability) implemented out of the box, which is nice. And then you can pass them to functions as plain arguments, without necessarily having to make them methods on those arguments' classes. Like other newer languages (Swift, Rust), Kotlin allows you to add your own methods to existing classes, even to built-in types. They are neatly scoped to whatever package they're defined in, and don't hijack the type for the entirety of the code in your program. The latter is what happens when you add a new method to a built-in class dynamically in Ruby, and as far as I know, it's a constant source of bad surprises. It doesn't require any special magic.
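Both ideas from this section in one sketch, a data class standing in for a struct/record, plus an extension method scoped to its file (the class and names are illustrative, not from the post):

```kotlin
// A struct/record is a one-liner; equals, hashCode, toString and copy
// come for free.
data class User(val name: String, val age: Int)

// An extension method on an existing type. Unlike Ruby monkey-patching,
// it is only visible where its package is imported.
fun User.greeting(): String = "Hi, $name!"

fun main() {
    val a = User("Ada", 36)
    println(a == User("Ada", 36)) // structural equality: true
    println(a.copy(age = 37))     // User(name=Ada, age=37)
    println(a.greeting())         // Hi, Ada!
}
```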
Just keep in mind that an extension function is not really different from a plain function; only the first parameter is going to be named this, and it's going to be available implicitly. This, I think, is actually a big deal, because looser coupling between types and functions operating on them pushes you away from building rigid hierarchies. And by now I believe most people have realized that inheritance doesn't scale. So these days the only real value in having methods over plain functions is the ability to compose functions in the natural direction: … as opposed to Yes, I know your Haskell/OCaml/Clojure have their own way of doing it. Good. Kotlin has chaining. Kotlin uses val and var for declaring local data as immutable and mutable, respectively. val is encouraged as the default, and the compiler will yell at you if you use var without actually needing to mutate the variable. This is very similar to Rust's let and let mut. Unfortunately, however, Kotlin doesn't enforce immutability of a class instance inside its methods, so it's still totally possible to do: … and have internal state changed unpredictably. Kotlin is another new language adopting the "everything is an expression" paradigm. You can assign the result of, say, an if statement to a variable, or return it. This plays well with a shortened syntax for functions consisting of a single expression, which doesn't involve curly braces or the return keyword: You still need return in imperative functions and for early bail-outs. This is all good; I don't know of any downsides. I think Kotlin has easily the best syntax for nameless in-place functions out of all languages with curly braces: You put the body of the function within braces, no extra keywords or symbols required. If it has one argument (which is very common), it gets an implicit short name, it. This one is really cool: if the lambda is the last argument of the accepting function, you can take it outside the parentheses, and if there are no other arguments, you can omit the parentheses altogether.
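The conventions above in action, if-as-expression, single-expression function bodies, and trailing lambdas with the implicit it (the collection and numbers are my own example):

```kotlin
// Trailing lambdas: the last lambda argument moves outside the parentheses,
// and with no other arguments the parentheses disappear entirely.
// `it` is the implicit name in a single-argument lambda.
fun sumOfOddSquares(xs: List<Int>): Int =
    xs.filter { it % 2 == 1 }        // no parentheses at all
      .map { it * it }
      .fold(0) { acc, x -> acc + x } // the initial value keeps its parentheses

// "Everything is an expression": the result of `if` can be the whole
// body of a single-expression function, no braces or `return` needed.
fun parity(n: Int): String = if (n % 2 == 0) "even" else "odd"

fun main() {
    println(sumOfOddSquares(listOf(1, 2, 3, 4, 5))) // 1 + 9 + 25 = 35
    println(parity(4))                              // even
}
```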
So filtering, mapping and reducing a collection looks like a chain of `filter { … }`, `map { … }` and `fold(…) { … }` calls. Note the absence of parentheses after the first two functions. The line with `fold` is more complicated because it does have an extra argument, an initial value, which has to go into parentheses, and it also takes a two-argument lambda, so it needs to name the arguments. Many times you can get away with not inventing a name for another temporary variable: `let` takes the object on which it was called, passes it as the single argument to its lambda, where you can use it as, well, `it`, and then returns whatever was returned from the lambda. This makes for succinct, closed pieces of code which otherwise would either bleed their local variables outside the scope or require a named function. This reminds me of similar idioms in Clojure, and Kotlin also has its own null-safe variant, `?.let`, which only works when the value is not `null`. If the receiver is `null`, the `?.` operator safely short-circuits the whole thing and doesn't call the block. Speaking of `let`, it's actually one of no fewer than five slight variations of the same idea. They vary by which name the object gets inside the lambda block, and by what they return: the object itself or the result of the lambda. Here they are: `also` passes the object as `it` and returns the object; `let` passes the object as `it` and returns the result of the block; `apply` passes the object as `this` and returns the object; `run` passes the object as `this` and returns the result of the block. Technically, you can get by with only ever using one of them, because you can always reference the object explicitly, and the difference between `it` and `this` is mostly cosmetic: sometimes you can save a few characters by omitting the receiver, sometimes you still need an explicit name to avoid shadowing in nested blocks, so you switch to the other style. The real reason for all these variations is that they're supposed to convey different semantics. In practice, I would say it creates more fuss than it helps, but that may just be my lack of habit. And no, I didn't forget about the fifth one, `with`, which is just a variant of `run`, except you pass the object in parentheses instead of putting it in front of a dot: `with(obj) { … }` instead of `obj.run { … }`. I can only justify its existence by a (misplaced) nostalgia for the similar `with` statement from Pascal and early JavaScript.
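The chained style and the `let` idioms look roughly like this (the variable names are mine):

```kotlin
val numbers = listOf(1, 2, 3, 4, 5)

// Trailing lambdas: no parentheses after filter/map;
// fold keeps parentheses for its initial value and names its two lambda arguments.
val sumOfDoubledEvens = numbers
    .filter { it % 2 == 0 }
    .map { it * 2 }
    .fold(0) { acc, n -> acc + n }   // (2*2) + (4*2) = 12

// `let` avoids inventing a throwaway local variable…
val label = numbers.maxOrNull().let { "max is $it" }   // "max is 5"

// …and `?.let` only runs the block for non-null values.
val firstBig: Int? = numbers.find { it > 10 }          // null
val message = firstBig?.let { "found $it" } ?: "nothing big"
```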
And there's a reason nobody uses it anymore: the implicit scope was a reliable source of hard-to-spot bugs. By the way, this sudden language complexity is something that Lisps manage to avoid by simply not having a distinction between "functions" and "methods", and by always returning the last expression of a form. "An elegant weapon for a more civilized age", and all that :-) That one caught me off guard. It turns out there's a difference depending on what kind of value you call `map`, `filter` and such on. Calling them on a `List` does not produce a lazy sequence; it actually produces a concrete list. If you want a lazy result, you should convert the concrete collection to a `Sequence` first, with `asSequence()`. That's one more gotcha to be aware of if you want to avoid allocating memory for temporary results at every step of your data transformations. In Python, tuples are as much of a workhorse as dicts and lists. One of their underappreciated properties is their natural orderability: as long as the corresponding elements of two tuples are comparable with each other, the tuples are also comparable, with the leftmost elements being the most significant. This is tremendously convenient when sorting collections of custom elements, because you only need to provide a function mapping your custom value to a tuple. Kotlin doesn't have tuples. It has pairs, but they aren't orderable and, well, sometimes you need three elements. Or four! So when you want to compare custom elements you have two options. One: define comparability for your custom class. Which you do at the class declaration, way too far away from the place where you're sorting them. Or it may not work for you at all if you need to sort the same elements in more than one way. Two: define a comparator function in place. Kotlin lambdas help here, but since it needs to return -1/0/1, it's going to be sprawling and repetitive: for each pair of elements, compare one against the other, check for zero, return if non-zero, otherwise move on to the next element.
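Here's a sketch of the sprawling in-place comparator described above (`Person` and the sort order are invented for the example):

```kotlin
data class Person(val last: String, val first: String, val age: Int)

val people = listOf(
    Person("Smith", "Jane", 40),
    Person("Jones", "Bob", 25),
    Person("Smith", "Adam", 33),
)

// Hand-rolled comparator: compare, check for zero, fall through, repeat.
val byNameThenAge = Comparator<Person> { a, b ->
    var c = a.last.compareTo(b.last)
    if (c == 0) c = a.first.compareTo(b.first)
    if (c == 0) c = a.age.compareTo(b.age)
    c
}

val sorted = people.sortedWith(byNameThenAge)
```

To be fair, the stdlib's `compareBy(Person::last, Person::first, Person::age)` collapses this boilerplate considerably, though you still can't just map a value to an ad-hoc ordered tuple the Python way.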
Bleh… It's probably widespread type inference that we owe the resurgence in the popularity of typed languages to. It's what makes them palatable. But implementations are not equally capable across the board. I can't claim a lot of cross-language experience here, but one thing I noticed about Kotlin is that it often doesn't go as far as, say, Rust in figuring out what it is that you meant. For example, Kotlin can't figure out the element type of an initially empty list based on what data you're adding to it later; Rust does this just fine. It's a contrived example, but in practice I have also stumbled against Kotlin's inability to look at how a type is being used later. This is not a huge problem, of course… I'm going to bury the lede here and first give you two examples that look messy (to me) before uncovering the True Source of Evil. The first thing is the `in` and `out` modifiers for type parameters. There is a long, detailed article about them in the docs on generics, which I could only sort of understand after the third read. It all has to do with trying to explain to the compiler the IS-A relationship between containers of sub- and supertypes. Like: a container of a subtype could be treated as a container of the supertype if you only ever read items from it, but you obviously can't write a random supertype value into it. Or something… The second example is about extension methods (those that you define on some third-party class in your namespace): they can't be virtual. It may not be immediately apparent why, until you realize that slapping a method on a class is not the same as overriding it in a descendant; it is simply syntactic sugar for a free-standing function. So a call to an extension method doesn't actually look into the VMT of the receiver's class; it resolves statically to a function in the local namespace.
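The empty-list inference gap looks roughly like this (a sketch; the Rust comparison is paraphrased in a comment):

```kotlin
// Kotlin wants the element type up front; it won't infer it from later additions:
// val xs = mutableListOf()   // error: not enough information to infer type variable T
val xs = mutableListOf<Int>() // the explicit type parameter is required
xs.add(42)

// Rust, by contrast, happily infers `Vec<i32>` for `let mut xs = Vec::new();`
// from a later `xs.push(42);`.
```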
Tim Bray beat me to writing about this, with some thoughts very similar to mine: Fixing JSON. I especially like his idea about native times, along with prefixing them with a special character as a parser hint. I'd like to propose some tweaks, however, based on my experience of writing JSON parsers (twice). Not only do you not need commas and colons, they actually make parsing more complicated. When you're inside an array or an object, you already know when to expect the next value or key, but you have to diligently check for commas and colons for the sole purpose of signaling errors if you don't find them where expected. Add to that the edge cases with trailing commas and empty containers, and you get a really complicated state machine with no real purpose. My proposal is simpler than Tim's, though: no need to actually remove them, just equate them to whitespace. As in: `,` and `:` are whitespace. That's it. It removes all the complications from parsing, and humans can still write them for aesthetics. And by the way, this approach works fine in Clojure for vectors and maps. JSON is defined as a UTF-8 encoded stream of bytes. That is already enough to encode the entirety of Unicode. Yet on top of that, there's another escape scheme using `\uXXXX`. One could probably speculate it was added to enable authoring tools that could only operate in the ASCII subset of UTF-8, but thankfully we've moved away from those dark ages already. Handling those is a pain in the ass for a parser, especially a streaming one. Dealing with single-letter escapes like `\n` is easy, but with `\uXXXX` you need an extra buffer, you need to check for edge cases with not-yet-enough characters, and you're probably going to need a whole separate class of errors for those. Gah…
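To illustrate how much the commas-as-whitespace rule simplifies a scanner, here's a toy tokenizer sketch of my own (not from the post, and deliberately naive: no escape handling):

```kotlin
// Toy scanner: commas and colons are just whitespace, so the main loop
// never has to track any "was a separator expected here?" state.
fun tokenize(input: String): List<String> {
    val tokens = mutableListOf<String>()
    var i = 0
    while (i < input.length) {
        val c = input[i]
        when {
            c.isWhitespace() || c == ',' || c == ':' -> i++  // skip all separators alike
            c == '{' || c == '}' || c == '[' || c == ']' -> { tokens.add(c.toString()); i++ }
            c == '"' -> {  // naive string literal (no escapes)
                val end = input.indexOf('"', i + 1)
                tokens.add(input.substring(i, end + 1)); i = end + 1
            }
            else -> {      // number / true / false / null
                val start = i
                while (i < input.length && input[i] !in " \t\n,:{}[]\"") i++
                tokens.add(input.substring(start, i))
            }
        }
    }
    return tokens
}
```

With this rule, `{"a": 1, "b": [2, 3]}` and `{"a" 1 "b" [2 3]}` produce the identical token stream.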