Posts in Elixir (15 found)
Ginger Bill 1 month ago

Package Managers are Evil

n.b. This is a written version of a dialogue from a YouTube video: 2 Language Creators vs 2 Idiots | The Standup.

Package managers (for programming languages) are evil 1 . To start, I need to make a few distinctions between concepts a lot of programmers mix up: packages, package repositories, build systems, and package managers. These are all separate and can have no relation to one another. I have nothing against packages; in fact Odin has packages built into the language. I have nothing against repositories, as that's how a lot of people discover new packages—a search engine, something I think everyone uses on a daily basis 2 . Build systems are usually language dependent/specific, and for Odin I have tried to minimize the need for a build system entirely (at least as a separate thing), where most projects will build with a plain odin build invocation, which works because the linking information is defined in the source code itself through the foreign import system.

This leaves package managers; what do they do? A package manager downloads packages from a repository, handles the dependencies and tries to resolve them, and then it downloads their dependencies, and their dependencies, and their dependencies… and you can probably see where my criticism is going. This is the automation of dependency hell. The problem is that not everything needs to be automated, especially hell.

Dependency hell is a real thing which anyone who has worked on a large project has experienced. Projects have thousands, if not tens of thousands, of dependencies where you don't know if they work properly, where the bugs are, or how anything is being handled—it's awful. This is the wrong thing to automate. You can do all of this manually; it doesn't stop you getting into hell, it just slows you down, as you can still put yourself into hell (in fact everyone puts themselves into hell voluntarily). The point is that it makes you think about how you get there, so if you have to download things manually, you will start thinking "maybe I don't want this" or "maybe I can do this instead". And when you need to update packages, being manual forces you to be very careful. That's my general criticism: the unnecessary automation.

Most package managers also have to define what a package is, because the language itself does not have a well-defined concept of a package. JavaScript is a great example of this, as there are multiple different package managers for the language (npm being one of the most popular), but because each package manager defines the concept of a package differently, it results in the need for a package manager manager. Yes… this is a real thing. This is why I am saying it is evil, as it will send you to hell quicker.

When using some languages, such as Go, most people don't seem to need many third-party packages even though Go has a built-in package manager. The entrance to hell seems too far away and hard to get to 3 . The reason such languages don't fall into this trap as quickly is that those languages have a really good core/standard library—batteries included. When using Go, for example, you don't need any third-party libraries to make a web server; Go has it all there and you are done. Go even has a Go compiler built into the standard library; in fact it has two, a high-level one for tooling and one which is the actual compiler itself 4 .

In real life, when you have a dependency, you are responsible for it 5 . If the thing that is dependent on you does something wrong, like a child or a business, you might end up in jail, as you are responsible for it.
Package dependencies are not that far different, but people trust them with little-to-no verification. And when something goes wrong, you are on the hook to maintain it. It is a thing you should worry about and take care of.

A common thing that people bring up about package managers is security risks. There are indeed serious problems, especially when you blindly trust things you have just randomly started depending on from off the internet. However, for my needs, those are not even the biggest worries for what I work on, but they might be for you! At work we currently use SDL2 for our windowing stuff, and we have found a huge number of bugs and hate it to the point that I/we will probably write our own window and input handling system from scratch for each operating system we target. At least then it is our code and we can depend on it and correct it when things go wrong; we are not taking on an extra dependency. I know SDL2 is used by millions of people, but we keep hitting all of the bugs. "But it's great though!". SDL3 might fix it all, but the time to integrate SDL3 would be about the same as the time it would take me to write it from scratch.

I am not advocating writing things from scratch. I wish there were libraries I could say "Just Work™", but I still have to depend on them, and they are a liability; not just security liabilities but bug liabilities. Each dependency is a potential liability. People rarely, if ever, vet their code, especially third-party code. Most people assume random code off the internet works. This is a societal issue where programmers are very high-trust in a place where you should have the least amount of trust possible. To put it bluntly, a lot of programmers come from highly developed countries which are in general high-trust societies, and then they apply that to the rest of their online world. This means you only need one person to do something malicious to something millions depend on to screw everything up. It doesn't even have to be malicious—it could be a funny bug where, if you clicked one pixel on the screen, it starts Rick Rolling you.

n.b. This argument was made by ThePrimeagen, not myself:

We've had an explosion of engineers over the past ten years, who have come in just at the advent of all of these package managers coming out, for all of these languages, all at the same time. So programming felt very daunting; when you don't know how something works, it feels very daunting, especially when you first start out. The thing that is confusing, especially with the high-trust argument that was being made, is that there is this weird Gell-Mann amnesia effect going on. You read one page and it's all about horses and you feel "man, I know a lot about horses". Then you flip to the next page and it's about Javascript and you go "man, they got everything wrong about Javascript". Then you flip to the next page and "man, I know a lot about beetles". You've just forgotten that they are super wrong on the thing you understood, but you think everything else is correct. You'll find engineers who will go "some of my coworkers are so horrible, hey, let me download this library off the internet, this is going to be awesome". It's crazy, as if they look and go "wow, one third of our staff cannot program anything, also I am going to trust every open source package I've downloaded". So there is this Gell-Mann amnesia in programming, where people who do open source or open things are viewed as the best of the engineers when that isn't true.
Most people assume programming is like every other industry, like actual engineering, which has been around for thousands of years, or modern science, which has been around for about half a millennium. People trust who they perceive to be the "experts", as you see all of these articles, books, conference videos, etc., and they all tell you stuff which for the most part does not necessarily seem true. I remember trusting those who were perceived to be "experts" espousing "wisdom". However, as I have programmed more over the years, I have realized there is very, very little wisdom in this industry. This industry is 70–75 years old at best, and that is not old enough to have any good evolutionary selection pressure. It is not old enough to get rid of the bad things—it hasn't evolved quickly enough. We will find out in a few HUNDRED years, and I mean hundreds, what actual good wisdom in programming is.

There are some laws we know, like Conway's Law: "organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations". Or to rephrase it in programming terms, the structure of the code will reflect the company that programs it. But that is one of the only laws that we know to exist.

My general view is that package managers (and not the other things I made distinctions about) are probably in general a net-negative for the entire programming landscape, and should be avoided if possible.

Excerpt from the Odin FAQ: https://odin-lang.org/docs/faq/#how-do-i-manage-my-code-without-a-package-manager

Through manual dependency management. Regardless of the language, it is a very good idea that you know what you are depending on in your project. Copying and vendoring each package manually, and fixing the specific versions down, is the most practical approach to keeping a code-base stable, reliable, and maintainable. Automated systems such as generic package managers hide the complexity and complications in a project which are much better not hidden away. Not everything that can be automated ought to be automated. The automation of dependency hell is a case which should not be encouraged. People love to put themselves in hell, dragging others down with them, and a package manager enables that. Another issue is that for other languages, the concept of a package is ill-defined in the language itself. And as such, the package manager itself is usually trying to define the concept of what a package is, which leads to many issues. Sometimes, if there are multiple competing package managers with different definitions of what a package is, the monstrosity of a package-manager-manager arises, and the hell that brings with it.

1. The term "evil" is being used partially hyperbolically to make a point. ↩︎
2. I primarily use DuckDuckGo, but I also use Google and many others because they are pretty much all bad. ↩︎
3. This sentence is a quote from ThePrimeagen from that video. ↩︎
4. ThePrimeagen mentioned the "Klingon Approach" as a joke. This refers to Klingons (a species of humanoids in Star Trek) which have redundant organs. A very nerdy joke. ↩︎
5. This comment was from José Valim (the creator of the Elixir programming language) and this was a very good point which I wanted to add to this article. ↩︎

0 views
Emil Privér 2 months ago

About AI

For the last 1.5 years, I have forced myself to work with and learn AI, mostly because the future of software engineering will inevitably have more AI within it. I've focused on optimizing my workflow to understand when AI is a genuinely useful tool versus when it's a hindrance. Now, 1.5 years later, I feel confident enough to say I've learned enough about AI to have some opinions, which is why I'm writing this post.

AI has become a race between countries and companies, mostly due to status. The company that creates an AGI first will win and get the most status. The models provided by USA-based companies are heavy and require a lot of resources to operate, which is why we build data centers of GPUs in the USA and Norway. At the same time, China delivers models that are almost as good as Claude Opus but need a fraction of the resources to deliver a result.

There's a strange disconnect in the industry. On one hand, GitHub claims that 20 million users are on Copilot, and Sundar Pichai says that over 25% of the code at Google is now written by AI. On the other hand, independent studies show that AI actually makes experienced developers slower. The common thread seems to be that companies selling AI solutions advocate for their efficiency, while independent sources tell a different story. It's also incredibly difficult to measure AI's true efficiency. Most metrics focus on whether we accept an AI's suggestion, not on whether we accept that code, leave it unedited, and ship it to production—mostly because tracking that is a profoundly difficult task.

My experience lands somewhere in the middle. I've learned that AI is phenomenally good at helping me with all the "bullshit code": refactoring, simple tasks that take two minutes to develop, or analyzing a piece of code. But for anything else, AI is mostly in my way, especially when developing something new. The reason is that AI can lure you down a deep rabbit hole of bad abstractions that can take a significant amount of time to fix. I've learned that you must understand in detail how you want to solve a problem to even have a fair shot at AI helping you. When a task is more than just busywork, AI gets in the way. The many times I've let AI do most of the job, I've been left with more bugs and poorly considered details in the implementation. Programming is the type of work where there often is no obvious solution; we need to "feel" the code as we work with it to truly understand the problem.

When I work on something I've never worked on before, AI is a nice tool to get some direction. Sometimes this direction is really good, and sometimes it's horrible and makes me lose a day or two of work. It's a gamble, but the more experienced you become, the easier it is to know when you're going in the wrong direction. But I am optimistic. I do think we can have a beautiful future where engineers and AI can work side by side and create cool stuff.

I've used IDEs, chat interfaces, web interfaces like Lovable, and CLIs, and it's with CLIs that I've gained the most value. So far, CLIs are the best way to work with AI because you have full control over the context, and you are forced to think through the solution before you hit enter. In contrast, IDEs often suggest code and make changes automatically, sometimes without my full awareness. In a way, CLIs keep me in the loop as an active participant, not a passive observer.

For everything I don't like doing, AI is phenomenally good. Take design, for instance.
I've used Lovable and Figma to generate beautiful UIs and then copied the HTML to implement in an Elixir dashboard, and the results have been stunning. I also use AI when I write articles to help with spelling and maintaining a clear narrative thread. It's rare, but sometimes you get lucky and the AI handles a simple task for you perfectly.

There are a couple of really nice things about AI that I've learned. I previously mentioned getting rid of the boring bullshit stuff I do in my daily work—this bullshit stuff is about 5% of everything I do on a daily basis. For example, today I managed to one-shot a new feature using AI. The feature was adding search to one of our APIs using a Generalized Search Tree (GiST) index. I added the migration file with the create index, and the AI added query parameters to the HTTP handler and updated the database function to search when a search parameter has been added.

But the thing where AI has provided me the most value in my day-to-day work is that instead of bringing a design to a meeting, we can bring a proof of concept. This changes a lot because developers can now easily understand what we want to build, and we can save tons of time on meetings. Building PoCs is also really good when we enter sales meetings, as a common hard problem in sales is understanding what the customer really wants. When we can bring a PoC to the meeting, it's easier to get feedback faster on what the customer wants because they can try it and give direct feedback on something they can press on. This is an absolute game-changer.

Another really nice thing is when we have a design and we ask the AI to turn a Figma design into some React code—it can generate the boilerplate and some of the design. It doesn't one-shot it, but it brings some kind of start which saves us some time. The reason why AI works well with frontend is that the hard part about frontend is not writing the code from a design; it's optimizing it for 7 billion people with different minds and thoughts, different devices, internet connections, and different disabilities.

I use research mode from time to time when I have some bug that is super hard to find. For instance, I switched PostgreSQL drivers in Go a while ago. When I pushed to production, we got an error where the connection pool manager complained that the transaction ID already exists, so I rolled back. I did some quick DuckDuckGoing to find the issue but didn't find it. So I asked Gemini to do research on the error, and a comment in some GitHub issue suggested changing the default query exec mode from cache statement to cache describe (see the sketch below). By making this change, I solved the problem. I probably would have spent another 20 minutes debugging or searching for the solution, and research mode could point me in the correct direction, which helped me solve it faster. But it's a gamble—sometimes what the AI suggests is the wrong direction and you lose some time.

Vibe coding, for those who don't know what it is, is when you prompt your way to software. You instruct an AI to build something and only look at the result, rather than using AI as a pair-programmer to build the software, where you check the code you add. There is a major problem with this, as AI is, for a fact, terrible at writing code and introduces security problems. It also doesn't understand the concept of optimizing code—it just generates a result.
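The post does not show the snippet it refers to, but "cache statement" and "cache describe" match the query exec modes in pgx v5, so here is a minimal sketch of what that change typically looks like, assuming pgx v5 with pgxpool; the connection string and function names are illustrative, not taken from the post:

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
)

// newPool builds a pgx connection pool with the query exec mode switched
// from the default (cache statement) to cache describe, as the GitHub
// comment mentioned in the post suggested.
func newPool(ctx context.Context, dsn string) (*pgxpool.Pool, error) {
	cfg, err := pgxpool.ParseConfig(dsn)
	if err != nil {
		return nil, err
	}
	// pgx v5 defaults to QueryExecModeCacheStatement (named prepared statements).
	// QueryExecModeCacheDescribe caches statement descriptions instead.
	cfg.ConnConfig.DefaultQueryExecMode = pgx.QueryExecModeCacheDescribe
	return pgxpool.NewWithConfig(ctx, cfg)
}

func main() {
	// Placeholder DSN; replace with your own connection string.
	pool, err := newPool(context.Background(), "postgres://user:pass@localhost:5432/app")
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()
}
```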
For instance, I asked Claude in a project to fetch all books and then fetch the authors and merge the books and authors into a schema structure where the book includes the author's name. This is a simple task, but Claude preferred to fetch (database query) the author in a loop instead of fetching all books first, then their authors, and merging them together in a loop. This created an N+1 query. If we have 10,000 books, this would mean 10,001 queries, which is not a smart way of fetching data, as it could lead to high usage of the database and make it expensive because we need to fetch so much data (a sketch of the two query shapes follows further down).

Vibe coding will pass. I don't think it will stay for much longer, mostly because we require developers to actually understand what we're working with, and vibe coding doesn't teach you—plus vibe coding creates too much slop. Instead, it will be more important to learn how to work with context and instructions to the AI.

But there is one aspect of it which I think is both great and dangerous. When you're a new developer, it's really easy to struggle with programming because it sometimes takes time to get feedback on what you're working with. This is why Python or JavaScript are great languages to start with (even if JavaScript is a horrible language), because it's easy to get feedback on what you're building. This is also why it's great to start with frontend when you're new, because the hardest part of building frontends is not the design, and you can get feedback in the UI on all changes you make, which makes you feel more motivated and happy to continue programming. If you don't get the feedback, it's easy to enter a situation where you feel that you're not getting anywhere and you give up. But with an AI, we can get a lot of help on the way to building a bit more advanced stuff. When you're a new developer, it's not really the most important thing to learn everything—it's to keep going forward and building more stuff, as you will learn more after a while. The only problem with vibe coding and learning is that it takes you more time to learn new stuff when you don't do it yourself.

There are some services that claim non-technical people can build SaaS companies using their services, which of course is a lie. For the non-technical people I've talked to regarding this matter, a tool like Replit can be good when what they sell is not a SaaS offering—for example, they are a barber and need a website. When we try to vibe code a SaaS company using a prompt web interface, these "non-technical" founders often throw their money into a lake, because what they want to build doesn't work and has tons of bugs and wrongly built solutions, because you need to understand software in order to write software. There is a saying in programming: the last 10% of the solution is 90% of the work. The last 10% is the details, such as: how should we handle this amount of messages—should it be an HTTP handler or should we create a message queue? How should my system and the other system communicate? What is a product and what is an ingredient? This is the type of stuff that non-technical people don't understand, and the AI doesn't understand it either—the AI just generates code.
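To make the books/authors example above concrete, here is a hypothetical Go sketch of the two query shapes, again assuming pgx; the table names, struct, and functions are illustrative, since the post does not show its actual code:

```go
package store

import (
	"context"

	"github.com/jackc/pgx/v5/pgxpool"
)

type Book struct {
	ID       int64
	Title    string
	AuthorID int64
	Author   string
}

// fetchBooks loads the books themselves; both approaches below start here.
func fetchBooks(ctx context.Context, db *pgxpool.Pool) ([]Book, error) {
	rows, err := db.Query(ctx, `SELECT id, title, author_id FROM books`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	var books []Book
	for rows.Next() {
		var b Book
		if err := rows.Scan(&b.ID, &b.Title, &b.AuthorID); err != nil {
			return nil, err
		}
		books = append(books, b)
	}
	return books, rows.Err()
}

// FetchBooksNPlusOne mirrors the shape the AI produced: one query for the
// books, then one author query per book (10,000 books means 10,001 queries).
func FetchBooksNPlusOne(ctx context.Context, db *pgxpool.Pool) ([]Book, error) {
	books, err := fetchBooks(ctx, db)
	if err != nil {
		return nil, err
	}
	for i := range books {
		// One extra round trip to the database per book.
		err := db.QueryRow(ctx,
			`SELECT name FROM authors WHERE id = $1`, books[i].AuthorID,
		).Scan(&books[i].Author)
		if err != nil {
			return nil, err
		}
	}
	return books, nil
}

// FetchBooksBatched is the shape described in the post: fetch the books,
// fetch all of their authors in a single query, and merge in memory.
func FetchBooksBatched(ctx context.Context, db *pgxpool.Pool) ([]Book, error) {
	books, err := fetchBooks(ctx, db)
	if err != nil {
		return nil, err
	}
	ids := make([]int64, 0, len(books))
	for _, b := range books {
		ids = append(ids, b.AuthorID)
	}
	rows, err := db.Query(ctx,
		`SELECT id, name FROM authors WHERE id = ANY($1)`, ids)
	if err != nil {
		return nil, err
	}
	defer rows.Close()
	names := make(map[int64]string)
	for rows.Next() {
		var id int64
		var name string
		if err := rows.Scan(&id, &name); err != nil {
			return nil, err
		}
		names[id] = name
	}
	if err := rows.Err(); err != nil {
		return nil, err
	}
	for i := range books {
		books[i].Author = names[books[i].AuthorID]
	}
	return books, nil
}
```

Both versions return the same data; the difference is two round trips to the database instead of N+1.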
AI is fundamentally the biggest tech debt generator I've worked with in my life. Tech debt, for those who don't know what it means, is the trade-off created when we develop software. Tech debt can create situations where the system is too expensive or it becomes too much of a problem to get new features out in production due to limitations in other parts of the system. Every company has some kind of tech debt—some have more and some have less. Limiting tech debt is easy: you only need to say no to stuff, or not write any code at all, or focus on building software in a standard and simple way. The simpler (but not stupid) the software, the lower the amount of tech debt. For example, the trade-off in choosing PostgreSQL over DynamoDB is that it requires more manual work to scale a PostgreSQL database, while DynamoDB scales automatically as it's managed. When we get traffic spikes, DynamoDB handles this better, but at the same time, DynamoDB forces us to structure our data in a different way, and you don't really know what you will pay for the usage. It also increases memory usage, as you need to fetch entire documents and not just some specific fields, which is common for NoSQL databases.

The reason why AI is the biggest tech debt generator I've seen is that it leads us into rabbit holes that can be really hard to get out of. For instance, the AI could suggest you store data in big JSON blobs in a relational database for "flexibility". Another problem is that AI loves complexity, and the problem is that we accept complexity. Complexity creates issues where code becomes harder to maintain and creates bugs. AI often prefers not to reuse code and instead writes the same logic again. Another issue with AI and tech debt is that AI creates tons of security vulnerabilities, as it still doesn't understand security well, and many new developers don't understand it either—for instance, how to prevent SQL injections.

A general claim from companies who sell AI services is that developers become more efficient, and some developers think they are, mostly because the only thing they look at is when they write code and not the part after writing code, such as debugging. For developers who use AI, some parts of their job switch from writing code to debugging, code review, and fixing security vulnerabilities. More experienced developers also feel that they are being held back by AI-generated slop code due to how badly the AI thinks, while more inexperienced developers feel the opposite.

For instance, a while ago I added bulk endpoints to our existing API and asked AI to do the job. I thought I would save some time, as most of this was just boilerplate code and some copying. The task was to take the existing logic for creating, updating, and deleting objects. I prompted Claude to do updates and creations in parallel because of how the logic worked, so it created channels and goroutines to process the data in parallel. When I started to test this solution, I quite quickly saw some problems. For example, the AI didn't handle channels correctly—it set a fixed length on the channel, which created a situation where the code would hang as we tried to add more data to the channels than was being consumed. This was an easy fix, but I spent some time debugging it. When we added deletion of objects, we didn't build a solution where we deleted multiple IDs at once; instead we used goroutines and channels to send multiple delete SQL queries, which created a much slower and more expensive solution (a sketch of the difference follows below). This was also fixed. Instead of logging the error and returning a human-readable error, we returned the actual error, which could create security vulnerabilities. After my try at offloading more to the AI, I stopped and went back to writing everything myself, as it generally took less time.
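For the deletion part of that bulk endpoint, here is a rough sketch of the two approaches described above, once more assuming pgx and using illustrative table and function names rather than anything from the post:

```go
package store

import (
	"context"
	"sync"

	"github.com/jackc/pgx/v5/pgxpool"
)

// DeleteObjectsFanOut is roughly the shape the AI generated: one goroutine
// and one DELETE statement per ID, coordinated with channels. Every ID costs
// a round trip and a connection from the pool.
func DeleteObjectsFanOut(ctx context.Context, db *pgxpool.Pool, ids []int64) error {
	errs := make(chan error, len(ids))
	var wg sync.WaitGroup
	for _, id := range ids {
		wg.Add(1)
		go func(id int64) {
			defer wg.Done()
			_, err := db.Exec(ctx, `DELETE FROM objects WHERE id = $1`, id)
			errs <- err
		}(id)
	}
	wg.Wait()
	close(errs)
	for err := range errs {
		if err != nil {
			return err
		}
	}
	return nil
}

// DeleteObjectsBulk deletes all IDs in a single statement: one round trip,
// one connection, and the whole batch succeeds or fails together.
func DeleteObjectsBulk(ctx context.Context, db *pgxpool.Pool, ids []int64) error {
	_, err := db.Exec(ctx, `DELETE FROM objects WHERE id = ANY($1)`, ids)
	return err
}
```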
It's also a really weird way of measuring efficiency by how much code we accept, especially when most of the code won't even hit production. But I guess they need to sell their software in some way.

So with all these tools built for developers, I realized that the people who gain the most from them are not the developers—it's all the people around who don't write code. It's easier for customers to show what they really want, we can enter sales meetings with a PoC which makes the selling part easier, and product owners can generate a PoC to show the developers how they think and get quicker feedback. Another good thing is when we can summarize text from different sources into one concise piece of text, which could help us avoid reading multiple Slack channels, emails, and calendar entries. This saves a nice amount of time as well. As a manager, AI is really nice for getting a summary of how everything is going at the company, what tasks everyone is working on, and the status of those tasks, instead of having refinement meetings to get status updates. There are probably multiple things we do on a daily basis that AI can help with to prevent needless meetings and organizing, removing the need for many managers. The biggest gain we get so far is on all the easy tasks we repeat on a daily basis.

This is a new part of this article, added 7 August 2025. About 40% of NVIDIA's revenue comes from 5 companies (Microsoft, Amazon, Meta, Google, and Tesla), which means that if one of these companies—for example, Amazon—decides to cut its AI spending in half, NVIDIA would lose 50% of its revenue from Amazon. This would adjust the stock price, which would later affect the other big tech companies. Most of these AI companies are not even profitable. NVIDIA is 7.74% of the index value of the S&P 500 according to https://slickcharts.com/sp500 . The biggest tech companies represent about 35% of the US stock market, and NVIDIA is 19% of them. The difference between NVIDIA and other companies selling hardware is that other companies have a bigger offering. If one customer of NVIDIA says they are not interested in buying more GPUs, there's a big chance that more companies will do the same (like a domino effect), and NVIDIA would have little to fall back on. The chance of this happening is quite high, as most of these big tech companies are not profitable and they are all pressured to show returns on their AI investments. Imagine the cost this would have for the individual investor and our savings accounts. One problem for NVIDIA is that they need to sell more GPUs every quarter in order to keep their revenue up.

This fragility extends beyond NVIDIA to the entire AI service ecosystem. Most companies that sell services on top of LLMs price them at costs that aren't sustainable long-term. Cursor is a good example—their rapid growth came from a business plan that didn't generate profit, which means they needed to heavily restrict model usage and add rate limits to avoid losing money on customer usage. The irony is that the biggest income will come from the infrastructure to run LLMs and AI, not the actual LLMs themselves.
The reason I added this is that the AI market is built on hyperscalers and has created a bubble more dangerous than the dot-com era. I think we will have a bright future with AI and that AI can help us in so many ways—for example, removing stress from our day-to-day tasks that might not even be related to jobs, such as repetitive tasks we do with our family. If an AI can replace these repeated tasks, I could spend more time with my fiancé, family, friends, and dog, which is awesome, and I am looking forward to that. It is also sad how companies use AI as an excuse to fire people to “optimize” while the real problem is the company structure, the ways of working, and the type of people working at the company.

0 views
Michael Lynch 3 months ago

My First Impressions of Gleam

I'm looking for a new programming language to learn this year, and Gleam looks like the most fun. It's an Elixir-like language that supports static typing. I read the language tour, and it made sense to me, but I need to build something before I can judge a programming language well.

0 views
Michael Lynch 4 months ago

A Simple Example of Calling an Elixir Library from Gleam

I've been experimenting a bit with Gleam and Elixir lately as part of my search for a new programming language. One of Gleam's flagship features is that it can call Elixir code and libraries, but I couldn't find any examples of how to do that. I wrote a simple example of calling an Elixir library from a Gleam project, based on my beginner's understanding of the Gleam/Elixir/Erlang ecosystem.

0 views
Ludicity 5 months ago

Get Weird And Disappear

Can't leave rap alone, the game needs me 'Til we grow beards, get weird and disappear Into the mountains, nothin' but clowns down here The most treasured object in my home is the cheapest bookshelf I could find. It has been overflowing for quite some time, despite my efforts to only buy books of exceptional quality and relegating the rest to my kindle. Many of the books are unread, but that hasn't stopped me from adding to the pile 1 . My partner has gracefully accepted that I will stop at every bookstore that looks remotely interesting, and patiently reminds me that my past-self has issued strict orders to my current-self to stop acquiring tomes until I have space for a second bookshelf. Over time, I have dropped from buying a book a week to one every month, which represents a spectacular effort for me. Despite the gradual development of what I'd like to think of as iron will, I caved upon stumbling across this particular edition of A Wizard of Earthsea . This was a repudiation of all that hard-won bookshelf space saving. I had read A Wizard of Earthsea when I was thirteen, and I didn't really get it. It was a book about wizards. So what? It was slow and ponderous, and my teenage self knew that an interesting wizard should be firing lightning bolts at dragons. But the illustration on this copy of the book was so gorgeous that I bought it anyway. It was only when I slotted it into my bookshelf, and it was fully juxtaposed against the others, that I realized how hideous so many of my other books were. I treasure their contents , but consider the first two books I pulled at random from the shelf. The Robin Hobb book on the left is generally beautiful, but it is done a disservice by the George R.R Martin quote splayed across the front cover, its marketing payload expended the moment I purchased the book, but which I am nonetheless obligated to look at for the next thirty years. Almost all my books are thus marred. And this is, sadly, a relatively tasteful example compared to much of the competition. Consider instead the copy of American Gods on the right, sweet merciful Christ, it is a truly repulsive abomination 2 . Just look at it . The quote at the top is George R. R. Martin once again, which says something about what marketing departments think is impactful. In the top right, we see Ian McShane, which is incredibly jarring because I know him only for his role in John Wick. Along the bottom we have the tedious "No. 1 Bestselling Author". And most offensively, thanks to the screaming red bubble, my descendants will forever know that this precious artefact handed down from their great grandfather once had a no-doubt mediocre second season on Amazon Prime, whatever the fuck an "Amazon Prime" is. Did they turn the rainforest into a cinema when they were done destroying it? Upon closer inspection, my bookshelf is riddled with tasteless advertising smuggled into my home by boring marketing departments, and I am not open to arguments that this meaningfully boosts sales. Some people might have differing taste, but this is bland to the point of nothingness. At least erotica has the decency to slap a shirtless hunk on there, and we can envision the target demographic. Not a shred of this crassness is evident in this copy of A Wizard of Earthsea , and some part of me thought "This is Quality", and amidst the shrieking cacophony of the other titles ("New York Times Bestseller! Twilight Meets Game Of Thrones! Breathtaking!"), it is the one I walked away with. It is now one of my favourite books. 
A Wizard of Earthsea is a towering work of genius. Going back to it now that I'm older, I must simply accept that my teenage judgement was... well, a teenager's judgement. It is beautifully written, wise, and it does not feel the need to overstay its welcome with contemporary notions of how long a book should be. The writing speaks for itself:

"You want to work spells," Ogion said presently, striding along. "You've drawn too much water from that well. Wait. Manhood is patience. Mastery is nine times patience." He spoke softly and his eyes were somber as he looked at Ged. "You thought, as a boy, that a mage is one who can do anything. So I thought, once. So did we all. And the truth is that as a man's real power grows and his knowledge widens, ever the way he can follow grows narrower: until at last he chooses nothing, but does only and wholly what he must do..."

While it was not a given, it is not surprising that the gentleness of the prose is matched perfectly with the elegance of the overall presentation. What was surprising is that cracking open the book reveals an introduction from Le Guin herself, specifically lamenting the immense difficulty she had over the years in simply getting artwork done which was not at terrible odds with the contents.

... until very recently, the books of Earthsea had no illustrations. This was partly my own decision. After Ruth's unique wraparound jacket for the first edition of A Wizard of Earthsea — with its splendidly stylized, copper-brown portrait face — cover art for the books mostly went out of my control.

This is the beautiful cover to which Le Guin is referring. Le Guin continues:

The results could be ghastly — the droopy, lily-white wizard of the first Puffin UK paperback;... ...the silly man with sparks shooting out of his fingers that replaced him... ... I was ashamed of the covers that gave the reader every wrong idea about the people and the place. I resented publishers' art departments that met any suggestion that the cover might resemble something or someone in the book by rejecting it, informing me loftily that they Knew what would Sell (a mystery that no honest cover designer would ever claim to know).

Let us take a moment to appreciate how funny those covers are, and then take another moment to soak up some secondhand embarrassment — somewhere, alive or dead, there are people that told fucking Le Guin how to go about her business, to turn her characters from brown-skinned to white, who perhaps even rejected some early manuscripts. And at least some of them must be aware that Ursula K. Le Guin took a moment to personally light them the fuck up in the foreword of one of the most famous books of all time.

This is not the first time that I've seen people in the publishing industry complain about the narrow-mindedness of their peers. Consider the afterword from my copy of Zen and the Art of Motorcycle Maintenance, which includes a letter from Pirsig's original editor, James Landis, who went to immense lengths to have the book published:

The book is not, as I think you now realize from your correspondence with other publishers, a marketing man's dream, and if our particular variety of that species... tries to make trouble over the book, I'll just have to supply what pressure I can to overrule him: I'm determined to fight this thing through to the end you and I desire... No sense in trying to kid you by pretending that the book is 'commercial' in the way that term is understood by most people in publishing...
However, I have ultimate faith in the book's being good, valuable, which gets us into standards beyond commercial and gets us also into what, to my mind, publishing is all about. That was from the 40th anniversary reprinting. We could speculate about the unknown unknowns — perhaps there are thousands of James Landis' out there, and we are witnessing the one who survived. It is surely possible to publish a phenomenal book and have it be ignored. But for my money, what happened is that Landis has real taste, and that while the modern economy sometimes appears to be filled entirely by bullshit, that only makes it easier for quality to win out if you're willing to stake something on it. But it doesn't come easily, because if you want to do anything interesting , and probably weird, you will immediately be besieged, from within and without, by tedious people and tired ideas. Getting permission from people without soul in the game is oppressive. Consider the final line from Landis above: However, I have ultimate faith in the book's being good, valuable, which gets us into standards beyond commercial and gets us also into what, to my mind, publishing is all about. If you run into Landis and his ilk, perhaps you can do something interesting. You have someone in your corner, who will perhaps take a risk on your behalf, because while they may preserve the skin they have in the game by refusing to be weird in front of their peers, they have to choose between that and their soul. But you will probably not run into Landis. You will probably meet the typical editor, who would be delighted if you offered to write Grit meets The Subtle Art of Not Giving A F*ck 3 . You will meet the real estate agent who will never be swayed by considerations other than the size of your bank account, or the government bureaucrat who will decide that you cannot immigrate because you are two days below the age of 23, or some other tasteless and impotent cog. Even consider startups, a community replete with people who jerk themselves off to the label of entrepreneur so hard that they're at risk of permanent groinal damage. And because we do work with some startups, I will make apologies to the twelve sane startup founders out there. 4 It's distressing how many young people are under the impression that startups are where you go to innovate and assert yourself. You're young, full of optimism, and have minimal obligations. Your only real weakness is your lack of experience, and it is with that inexperience that you... you sign a deal with people who are possibly dozens of years your senior in cunning, law, and finance? To play games on their terms, where you're a total loser if the business earns just enough money to live comfortably on your own terms? Where you have to justify your decisions, where they can pressure you to sell more of your equity, and ultimately appoint a new CEO if you make a misstep? But I digress. To continue the theme of publishing, last year I was approached by an editor at a mid-tier publisher. I made sure to read some of the editor's short stories before our first meeting, and it turned out they were themselves a talented writer with a real passion for the craft. In fact, while looking up the link to the article above, I discovered they published a book as recently as 2024. And they wanted to help me get published! My name, on a book! I knew that it was unlikely to do any significant sales, as blog-writing and book-writing are two very different skills, but still! A book! 
Things progress well over the next few days, and it turns out the editor shared many of my frustrations with how companies run. Just as it looked like I was going to sign on, the sales team comes with their Opinions, and I must sadly say that they were not sophisticated Opinions. To begin with, they wanted to stipulate that swearing was absolutely not allowed before knowing anything else about me or what we wanted to write. I wasn't actually intending to swear much, if at all, but I also saw no need to limit myself up front. Why agree to play left-handed before we've even tried out the game? More conditions are added. They will not even guarantee that they'll try to get it on bookshelves. I reach out to some other people who have published with them, they reveal that they were forced to write their entire book in Microsoft Word templates. Communications were confused and unclear. Even the payments were described as "opaque". Basically everything you would expect out of the typical tech project. A picture begins to emerge, of a serious editor trying to do their best, surrounded by colleagues who are about as unserious as anything I've described in the tech world. Nothing surprising, but still, why not self-publish? An editor and an artist will only run a few thousand dollars, a tiny investment compared to the time it takes to write a book, and whatever I lack in experience can be compensated for with giving a shit. To quote one of my co-founders, Jordan Andersen, it's only arrogant if you're wrong 5 . In the meantime though, I frequently find myself reflecting on how bizarre the knee-jerk "no swearing" rule was. Sure, there are always going to be some concessions to be made to marketing. I don't just label all my posts "Unnamed Post #7", because I would like to be read at least a little bit. But swearing? Who gives a shit? Is that really going to be the difference between a good book and a bad book? Why on earth would you hear that you've got someone interested in writing a book whose writing is doing millions of hits that very week and then immediately tell them that you know what will sell better than they will? In fact, the only other book they've got that's well-known enough for me to recognize it has swearing in it , so I know it isn't even a principled brand stance. A midwit just went "I am slightly self-conscious about how I appear to my boss" and that was enough to tank the whole thing. I will never stop citing Sturgeon's Law . Ninety percent of everything is crud. That is to say, ninety percent of everything is crud to a practitioner who has taken the time to develop judgement. That is to say there is some distribution of judgement and bravery, and a whole lot of people are clumped in one not-very-good batch. These days, when people become concerned about the seeming arrogance in that statement, I point them at real estate agents and say: "That's how bad many professionals are, but you can only tell when the issue is as blatant as a broken window in your living room." I still have not met a professional whose competence I've been able to verify that will not endorse something close to Sturgeon's Law. The best I've been able to manage is that, if you ignore all secondary attributes required for a job such as empathy, communication skills, maturity, and restrict yourself purely to the mechanical task at hand, some medical subspecialties are consistently okay... but with all those caveats, you'd still never want them to be your physician. 
In all fields, very weak practitioners seem to cling to a series of practices or principles without a nuanced understanding of why they're doing those things. Consider, for example, a junior engineer. The trajectory of the typical junior engineer is that they will hear a principle like Don't Repeat Yourself, and immediately begin writing functions to generalize two pieces of code that even look similar. This is normal and even admirable — it is a single-objective optimization problem, and they're optimizing! Junior engineers are supposed to be weak practitioners, but this person is trying . They will then become slightly more sophisticated, and learn about premature abstraction, and suddenly we have two principles that are in conflict. Collapsing duplicated code means you are Not Repeating Yourself, but you are probably introducing an abstraction, and now you have a tradeoff between two principles 6 . And if you keep going, you will pick up dozens to hundreds of these, implicit and explicit, and must become comfortable with the fact that you're going to have to balance them all simultaneously via the application of judgement, and you will be wrong sometimes. With all that nuance comes the boon and burden of being able to disregard principles. For example, we use Test Driven Development , but I was recently working on something that required spinning up a headless Chrome browser to convert a HTML document to a PDF. Unfortunately, this is very slow in a test suite and annoying in some CI/CD products for various reasons . Because I have an idea about why I test, and what the value of it is, and how likely this is to break, I made the decision to just not test that bit of the application when running tests in GitHub Actions. Will I regret that? Maybe, probably, but the point is that my rules are there for reasons and sometimes those reasons don't feel like they apply. 7 Truly weak practitioners will come to you with totally degenerate rules or principles. This is the "no swearing" publisher, but it is also the "no dogs" landlord, the "no risk" security guy, the "must have ten years of Git experience" recruiter and I've no doubt that everyone reading this can think of at least one system where they've thought "I can't believe I'm at the mercy of a person that isn't even thinking ". Frequently their rules are not even rules, they're just phrases that competent people used to say, and eventually they became well-known enough to become fake-able signal. Data-driven. Garbage in, garbage out. Agile. All of these may have previously signalled novel thought, and now anyone that's really savvy does their best to not use the phrases because they know how tacky it makes them look, even if they actually use them in meaningful ways . This can even happen when you've bypassed the large, visible, external gatekeepers, such as recruiters. All it takes is one person within an organization that loves talking and again lacks that baseline competency. One of my co-founders, Ash Lally, the infamous Business Jaguar 8 , was on a contract outside the context of our consultancy recently. They are tasked with pulling data from one system into another system, which is about the most boring and typical thing that a data engineer can do. The employer is not technologically advanced and this task is not their core competency, so as is the case for almost all work in this category, the solution is to schedule some code to run once every ten minutes to pull it in. 
They are working with someone that, to steal the words of a reader, has had a very impressive career in numeric terms, i.e, they're terrible at their job, but boy oh boy have they been doing it for a long time. Ash's counterpart considers pulling the data in every ten minutes, then comes back with: "Event-driven is good for internal system integration." Event-driven means that instead of writing something very simple to grab all new data every ten minutes, you instead write a very bespoke system that detects new events in real-time and copies it over to the target system. This is way more complicated and totally unsuitable for a company that hasn't figured out version control yet. And also totally unproductive, because the data can't be used without another piece of data which has a hard limit of being calculated... once every ten minutes. So the company still has a ten minute delay before anything can be used, plus all the insane overhead. But still, we should hear them out, why is event-driven good for system integration? I can think of some reasons, but I'll never know if they're the reasons this person had in mind, because that information is not forthcoming! They heard those words somewhere from someone authoritative-sounding and they now feel comfortable regurgitating them at other people. But we're going event-driven now! Bonus points, this infrastructure supports emergency services, so you can commit stochastic murder this way! It's like serial killing but you've tricked a whole department into being your co-conspirators! Even when you've passed all the normal gatekeepers, you will sometimes need permission to do the engineering equivalent of tying your shoelaces correctly. 9 Above, I talk about people who have even gotten as far as misapplying principles. As frustrating as that is, that isn't even the worst you can do . Some people never reflect on their craft at all, or do not view it as a craft. The person that says "event-driven is good for internal systems integration" has at least taken a moment to read something, even if it's a terrible LinkedIn post or a Medium blog. Things can get so, so much worse. Remember all your peers at university that would say "I dream of working in HR"? No, probably not, because that sentence has possibly never been uttered in all of human existence. The field is laden to the point of breaking with people that drifted into it because it was the first job they could get that paid white-collar wages for a qualification no more sophisticated than knowing where some of the buttons are in Workday. Some of them, by dint of personality and ethics, are good at their jobs, but they are few and far between, and perhaps most importantly are not systematically produced. Is it actually a surprise, under those conditions, that HR seems to be useless whenever you interact with them? Could we expect any other outcome? It's a testament to the power that so many of them have relinquished to the economy, by dint of being unemployable on the grounds of technical skill or competence , that they bother to get out of bed at all. They don't want to be there any more than you want them there. Jordan, the same co-founder responsible for "it's only arrogant if you're wrong", wrote a single blog post on our company website. During a meeting where Jordan asked for some feedback, we spent some time caught up on this little snippet. I just want to pause and note how fucked this was. 
While I was an accredited allied health professional at the time, it's fucking ridiculous that a bunch of recreationists were put in charge of dictating how anyone should respond to a medical emergency...

And here's what the team was split on — do we swear on the company blog? You can probably figure out where I sit on the issue. Go for it! What are you, some sort of coward? But the team wasn't sure, and the truth is that I wasn't entirely sure either. Maybe I can get away with that on my personal blog, but is that too crass? I mean, some people feel it's already too crass here. Will people be scared to take us to their managers? And most importantly, other consultancies aren't doing this, should we be more like them?

Eventually, Ricky Ling chimes in (I'm sorry there are so many of us), and his take for Jordan is the Quality-focused take. "If you think it makes the writing better, do it". We still aren't sure, but we know that we didn't quit our jobs and give up all that sweet, sweet salary to be cowards, so we do it. That single post has generated us a huge amount of outreach from cool people despite not going viral or being widely circulated as far as I know, and we almost started prematurely neutering it because a group of people who already deviate wildly were still worried about doing something even a little bit different.

Being seen to deviate is extremely hard, in large part because to deviate is to be seen, and the gaze of others is deeply uncomfortable. There's an exercise that actors do, where they sit in two chairs opposite each other, make eye contact, and then one of them just starts saying things that they're feeling. "I am aware that my hands are warm." "I am aware that I am uncomfortable with you staring at me." "I am aware that I want to burst out laughing." This is very, very uncomfortable to do. I've done it three times, for about ten minutes each time, and I cannot emphasize how much we all hated it. From this we can infer that actors are total fucking weirdos, but also that acting as a field has identified that there is something deep to unpack in reflecting on what it is like to be perceived without the luxury of flinching or breaking eye contact. This aversion to deviation and subsequent perception is enough to make some people totally impotent.

In the earlier days of the consultancy, I envisioned that our sales pipeline would be something like this: a happy reader reaches out, and we take it from there. This sometimes does happen, but it has never happened at a company without a really spectacular culture. What happens at the average company or government agency is something to the effect of: "I am working on a project, and we're being scammed by some other consultants. The whole thing is months late, and it isn't going to finish any time soon. Is there any chance you can help out?" I reply, "Yeah, of course, we'd love to. But we'd need an introduction to someone inside the company who might be willing to admit that something is wrong." And this is where, every single time, the person that initially reached out has decided that they don't want to take the risk of being visible. I understand — a lowly engineer suggesting an external consultancy with hot takes is really, really unusual, especially so if they present hard-to-ignore and inconvenient evidence that a project is failing. It has possibly never happened to the average CTO across their entire career.
The engineer would be as visible as it is possible to be, perhaps moreso than if they hauled off and punched someone in the face, and during the next round of layoffs they will either be promoted for being weird or fired for being weird. In either case, redundancy is no longer dependent on who is at the top of the list when the CFO sorts the salary column on a spreadsheet by descending. In the past year, I've seen people have huge impacts at companies by exerting some bravery. This sometimes means getting a bully fired, sometimes it means getting the first tech project to work smoothly in a decade. In some of these cases, the person involved was made redundant shortly afterwards for interfering with people's selfish political aims, and other times they were promoted for actually giving a damn (and yes, sometimes for unknowingly helping someone else's selfish political aims). But all of them went in willing to put something on the line. A huge number of employees will venture nothing for their beliefs, and while that is fine or whatever, bla bla bla, they have kids, they're also going to be largely ineffective because they won't change anything big, because big change is novel and weird, and novel and weird might get you noticed, and getting noticed means something is going to happen to you. Our company is really into weird stuff. Our favourite TTRPG is Burning Wheel, an extremely niche product, hundreds of pages long, whose creator refused to release a PDF for years because he felt that it watered down the experience. Our favourite traditional board game is Twilight Imperium , a sci-fi masterpiece that takes thirteen hours to play. For internal projects, we program Elixir, a language with near-zero name recognition amongst engineering genpop that compiles to Erlang, a language developed in the 80s by Ericsson of all places. We're an Extreme Programming shop. Most of my spend this year has been learning material and one-on-one coaching for engineers in the company. We started with six co-founders, something that caused several banks a great deal of consternation when we tried to open our account, and I am now realizing makes it very hard to write blog posts without introducing what must feel to a reader like an endless stream of new names. It gets stranger. We pair program on all work 10 . We have no network drive, just Git repositories where all documentation is written in Markdown, and the proposals that must go to clients are converted to HTML so we can style them with CSS. AI? Not only do we not turn on AI-assistance during programming, we'll crucify each other for copying a StackOverflow example if we can't explain the behaviour with reference to code or core documentation. We contracted Mira Welner this year, internet famous for frontpaging Hackernews a few times as a fresh grad, who described our work as "the opposite of vibe programming", which will make different people have wildly different opinions about whether we're competent. We invite our clients to sip from the Chalice of Madness, wherein we pray that they will find that the Kool-aid is delicious and refreshing, amen. If there is a conventional script for something, we have probably thrown it out the window or seriously considered it. I'm obviously convinced that these practices are good or I wouldn't be doing them, but they have a secondary characteristic that may be more valuable than their originally envisioned purpose. They filter out shoddy, default thinking, and otherwise shake people into vivid awareness. 
Everyone is constantly immersed in weirdness, and that makes the next piece of weirdness less scary. I mean, we may still fail as a business for many reasons, but it won't be because we weren't thinking , it'll be because we had stupid thoughts , which at least has the virtue of meaning we actually tried. If you had a company policy of "we only hire people that can pass an interview about Burning Wheel rules", I suspect you would hire a stronger team than what the average company manages. At least they'll have something in common and they've demonstrated refined taste in one area. I don't even care if they can program, they'll figure it out. Iroh: What do you plan to do now that you've found the Avatar's bison? Keep it locked in our new apartment? Should I go put on a pot of tea for him? Zuko: First, I have to get it out of here. Iroh: And then what? You never think these things through! Some years ago, I knew a man that was laid up in hospital for a chronic illness. He was not having a good time, but it must be admitted that some of this bad time was the product of a long-running pattern of poor life decisions. C’est la vie. In any case, one day he is having a particularly bad day, and he messages me from the hospital. He's going to leave and pick up a pack of cigarettes. This perplexes me to no end. "But you don't smoke and are almost out of money, right?" And yes, it turns out I had not misremembered. He wanted to start smoking at that moment in time. Now, this is actually a fairly extraordinary amount of effort. Imagine, you're comfortable in a hospital bed, in a terrible mood, and all the normal pressures that lead to smoking are absent, such as peers to pressure you. But ex nihilo , you now grudgingly swing to your feet, fish out your last bit of money (cigarettes are supremely expensive in Australia, approximately a whopping USD $20 per pack), and march off to try and form a financially crippling addiction. Surely you have enough problems, and don't need to go through the trouble of leveraging your executive function to make more of them? After mulling this over for a while, I come to the realization that this fellow felt the need to inhabit the role of the hard-done-by man on the street. And what does the hard-done-by man do? Well, on the television or a play, he would light up a cigarette of course! Probably take a big, long drag off it, then close his eyes and exhale a stream of smoke into the sky as he sighs. It's real cool , yeah, he's a real cool dude. It's all symbolic, y'see? He's destroying his body as if to say "the world is wearing me down so quickly, the cigarettes won't have time to get me, heh". This is practically understandable, in a strange, catharsis-y way. We all know social scripts exist. There are rough guidelines on how much to tip a waiter in the U.S. In Australia, you should stand on the left side of an escalator if you're not intending to walk up it. What I hadn't really thought about was that some of them are so strong and latent that they can spontaneously cause a person to self-inflict a nicotine addiction. I hadn't even really noticed that I had an archetype of the put-upon smoker in my mind, but there it was, and you likely knew exactly what I was talking about. These scripts are adaptive in some situations — for example, well-adjusted men should be aware that the comradely arm on the shoulder is a great deal more appropriate with a male colleague than a female one, and an even better-adjusted person will just not touch their colleagues. 
But other scripts, especially the ones that are too small to think about or too large to see, drive insane life decisions. When I was studying psychology, almost everyone, including me, picked the course because we weren't sure what we wanted to do with our lives, but the script says finish high school and then take on tens of thousands of dollars in debt. That script is so large that it basically encompasses your whole upbringing. When you try to look at it, you see grey, and think "there is no elephant here, just a cement wall". Or when someone looks for a job they just start sending CVs out, even though that basically doesn't work at all. The assumption that this is how you get a job is so quiet that people don't even realize they're making a decision. They think the decision is whether they should have a one-page CV or a three-page CV, not whether they should have a CV at all. And again, when you interface with a sufficiently large system, they will have their own scripts, and you will be expected to abide by them. Your counterparty will usually unthinkingly apply them, and if you find someone that is willing to actually look at the situation at hand, treasure them.

The other issue with going off-script, or otherwise deviating, is that you have to accept that you're going to look extremely silly if things don't work out. Someone, usually someone that isn't taking any risks themselves, will be a dick about it. Let's say that my company does well enough for me to draw a full tech-tier salary in 2026. I'll be praised for whatever I did differently. Six co-founders! My God, he's a genius! You can keep most of them working contract roles, which means your revenue requirements are low, but it only takes them about an hour a week to mobilize their networks for sales! His mind is a thousandfold blade, so keen is his thinking!

Sure, now let's say that it collapses in June because a co-founder wants to pivot to a SaaS offering and I don't. Six co-founders! My God, what a fool! Everyone knows there are reasons that VCs prefer two to three founders and no more! A huge risk for a startup is the founders having a falling-out, and the risk increases exponentially with each node in the fully connected network! His mind is being passed around by stoned college grads, so blunt is his thinking!

I can skip the embarrassment by doing what everyone else does. If I failed the normal and boring way, by running a non-Extreme Programming consultancy, with one co-founder, that writes Python because everyone else does, selling GenAI bullshit 11 , I would be beyond critique. I'd just say "Gosh golly, sales are hard", and everyone would nod sympathetically. Guess what though, whether or not people point and laugh, they certainly aren't going to pay your rent for you. Who gives a fuck about embarrassment?

LaRusso: Do you think I stand a chance at the tournament?
Miyagi: Not matter what Miyagi think. Miyagi not fighting.

What most of this comes down to, fundamentally, is whether you trust your own judgement, what you're willing to stake on it, and whether that conviction is strong enough to throw away the guarantees that you'd get from conforming. Because you could take the guaranteed money for writing the book, and if you don't really believe that your judgement is superior, perhaps the money is worth it. Readers call me with some frequency to ask me if I think their startup idea is good. Could it sell? Would it work? Should I do it?
How the hell would I know, I'm just some guy with a blog and a business that may explode in the next year. But if I were the owner of a $100M company, would I really know any better? If anyone knew, then when you asked them they would steal your idea. But no one knows, so the idea is safe and sound, and deep down the asker must be aware of this or they wouldn't be asking.

Or take last week, when a student visited me from a local university to ask if they should drop out of university to run a startup. They really didn't like the idea of a corporate job, found university extremely tedious and their peers uninspired, so on, so forth. They were clearly very intelligent, but also clearly very young. What do you say there? Because the knee-jerk impulse, for fear of being irresponsible, is to say the boring thing without thinking about it. Finish your degree first. Get a few years of corporate experience. Or even, I know you're depressed, but don't quit your job without something else lined up. None of that is necessarily bad advice, but we should view ourselves with suspicion when we think about saying them, because they also absolve us of guilt if things go awry. Who could blame us for giving the conservative line, even if you can easily play the tape forward and see that it results in damnation?

The advice that people are too scared to give, but which they sometimes really believe, with the usual caveats for a mental health event that has crippled the recipient's ability to see clearly: "What does it matter what I think? Do what you think is best for yourself, and yes, you might lose your life savings. What's the alternative, doing what other people say is best for the rest of your life?"

Thank you for reading all the way through. For this kindness, I leave you with a parting gift. Penguin, what the fuck is going on over there?

PS: Oh yeah, I put this in to justify writing, hire my nerds or whatever.

PPS: Reader and now close friend Phil Giammattei could use some help with a horrible brush with cancer in the family. You can support him here.

PPPS: You all crushed Phil's goal, thank you so much for your generosity. Things are obviously Extremely Bad in many parts of the world right now, and it's wonderfully refreshing to see people pull together, even if it is for a degenerate like Phil. I'd also like to take a moment to curse the person that dethroned me from my spot of Top Donation, and to double-curse them for doing it anonymously, embarrassing me further.

Nassim Taleb calls this the "antilibrary", the collection of books that one owns that one has not yet read. Taleb argues that an antilibrary is there to educate and humble, while a library of already-read books has much of its usefulness shifted to signalling erudition. If you were to point out that the optimal antilibrary strategy is to constantly buy books and never read any, Taleb (and I) would call you a nerd.  ↩

It was also a gift. Sorry Ricky, the Red Rising copy you got me alongside this was much better!  ↩

It is endlessly entertaining to me that the cover of The Subtle Art of Not Giving A F*ck , a book so desperately trying to be edgy, censors the word fuck, achieving a flaccidity that goes beyond simply deciding not to swear at all.  ↩

100% of startup founders nodding at this.  ↩

It is a custom that whenever I or Jordan call our shot and mess up, we call the other person and loudly declare "I was arrogant and wrong", then accept ruthless mockery.
↩ It is remarkable how many companies seem to forget that the ultimate value is arguably Working Software, and while all the subsidiary principles may fall out of Working Software when you shake it very hard, you can't do away with it. Why calculate velocity if you know you aren't shipping, scrumlords?  ↩ The scary thing about messing with long-running, almost-mystical practices like testing is that their second-order effects run so deep that I get many benefits that I haven't even noticed, so making these decisions also means I may have thrown away some things I don't understand. It's like becoming convinced that I can leave a candle out of the summoning ritual.  ↩ I am slowly realizing that my company is comprised entirely of fucking lunatics, and I am the cracked porcelain mask of sanity that attends sales calls.  ↩ Our scathing disdain for this has not prevented us from chanting "event-driven is good for internal system integration" whenever we need to annoy Ash, which is probably not bullying.  ↩ We incidentally allow anyone to pair program with us over a four hour session, where you can choose to be amazed at our great team energy, or if you'd prefer laugh at how bad we are at programming, so long as you'd consider writing a nice review if you have fun and see strong evidence of competence. Just email me!  ↩ I am bored just describing this.  ↩

Ludicity 7 months ago

Ludic's Guide To Getting Software Engineering Jobs

The steps in this guide have generated A$1,179,000 in salary (updated 13th April, 2025), measured as the sum of the highest annual salaries friends and readers have reached after following along, where they were willing to attribute their success to the actions in here. If it works for you, email me so I can bump the number up.

I currently run my business out of my own pocket. If I don't make sales, I lose savings, and it's as simple as that. I am all-in on creating work I love by force of arms, and I'd sooner leave the industry than be disrespected at a normal workplace again. The impetus to run that risk comes from two places. The first is that my tolerance for middle managers jerking themselves off at my expense is totally eroded, and I realized that I either had to do something about it or stop complaining. I'll happily go to an office if I think it will produce something I care about, but I will not do it because someone wants to impress a withered husk who thinks his sports car makes him attractive to young women. The second, and what this post is about, is that I am really good at getting jobs, and have friends with a very deep understanding of how the job market works.

In Australia, when you apply for a job without permanent residency, you are filtered out of all applications immediately. It is the first question on all online application forms, and the reason is that companies do not want to deal with visa renewals and they have far too many candidates. This leads to a situation where any characteristic that is remotely inconvenient but not noticeably correlated with suitability for the business is grounds for rejection. It is not uncommon for immigrants to take months, sometimes over a year, to find their first job actually writing code. Despite being a non-white with no professional network in the country and an undesirable visa, I had my first paid programming engagement lined up before finalizing the move off my student visa. I had a full-time job on A$117K lined up for the same day my full work visa kicked in. I continued to dig up work whenever a contract was expiring, even landing a gig mid-COVID, and while most of these jobs left much to be desired, I believe this has more to do with the state of the industry in Australia than anything that I did. And I have only gotten better at this over the past two years, because while I despaired about the state of software in general, I never stopped thinking about and experimenting with how to regain some control over how I'm treated.

Almost everyone I spend time with now has walked away from a job without flinching. I've done it. I once caught up with a friend, and he said "Work is stupid, I'm going to Valencia for a year." I said, "W-what? Valencia? When? For a year?"

"Yeah, a year. I'm going in two weeks."

And then that glorious son of a bitch did it. Came home. Had a job waiting for him. Quit that job, got another job at more money. Quit that job, got one interstate because he felt like it at similar pay for half the work. All in a "weak" market.

I get a lot of emails from people who despair about the state of the industry or who otherwise can't find jobs, and I always end up giving the same advice. I don't have the time to keep doing that.
So in this post, I'm going to attempt to convince thousands of people that you should have much higher standards for what you tolerate, that you can build up the reserves to do your version of going to Valencia (this could just be staying home and playing with your kids for six months), and that it is immensely risky not to have this ability in your back pocket. Along the way, we will answer questions like "How long should a CV be?", "What should go on it?", and "When will this suffering end?" From Scott Smitelli's phenomenal The Ideal Candidate Will Be Punched In The Stomach : What was the plan here? Why did you leave a perfectly sustainable—if difficult and slightly unrewarding—job to do this crap? What even is this crap? You are, and let’s not mince words here, you are a thief. That’s the only way to make sense of this situation: You are stealing money from somebody, somehow. This is money you have not earned. There is no legitimate way that you should be receiving any form of compensation for simply absorbing abuse. These people, maybe the whole company but certainly the people in your immediate management chain, are irredeemably damaged to be using human beings that way. They will take, and take, and smile at you like they’re doing you some kind of favor, and maybe throw you a little perk from time to time to distract you from thinking about it too hard. But you? You can’t stop thinking. You can’t stop thinking. You can’t stop thinking. If you're in this photo and don't like it, this blog post is for you. We have one end-goal. A career where you're paid well, are treated with real respect, and we will not settle for less. And I mean real respect, as in "we will not proceed on this major project without your professional blessing, and you can fire abusive clients", not "you can work from home two days a week if the boss is feeling generous". I had a brief email exchange with Erez Zukerman, the CEO of ZSA last year, and asked how their customer support is so good — it's the best customer support I've ever experienced and there's no close second. He replied: For support, the basic understanding is that support is the heart of the business. It is not an afterthought. Support is a senior position at ZSA, with judgment, power of review over features and website updates before anything is released (nothing major goes out without a green line from every member of support), the power to fire customers (!), real flexibility when it comes to the schedule, etc. There are also lots of ways to expand, like writing (Robin has been writing incredible blog posts and creating printables), recording (Tisha recorded Tisha Talks Switches which thousands of people enjoyed), and more. Anything short of that isn't real respect. Not a special parking spot. Not the ability to pick up your kids sometimes . Not a patronizing award on Teams. Most places fall short of this, and because we have all agreed to demand better for ourselves, we are going to consider all of these places as mildly abusive. A lot of office jobs seem like a slow death of the soul — better than the swift death of the body that careers like construction work offer, but that isn't a reason to stop striving. Shoddy work. Hour long stand-ups. The deadlines are somehow always urgent and must be delivered immediately, but are also always late and everyone knows they'll be late from day one. 
This is delightful at times — office scenes in improvised theater get funnier the straighter you play them — but many people eventually feel that something vital is missing from their work lives. I really enjoy David Whyte's The Three Marriages as an antidote to the tedious objection of "Work to live, don't live to work". It's a part of life, and while it isn't all of life, being bored and treated like a disposable cog for eight hours a day shouldn't be any part of your life. If you're happy to coast, adieu, catch you later.

This is a no judgement zone for the next five minutes. Here is a quick reality check. I have, by virtue of hundreds of people reaching out to me over this blog, seen the "I want to leave my bad job" story play out far more times than a typical person does in a lifetime. It always plays out in one of two ways.

The first is that the person immediately and aggressively looks for new jobs. This usually goes well. If it does not go well, they can always find a new job again. When the job is pursued through "normal" mechanisms, such as cold Seek applications, these jobs almost never meet the standard I set above: great pay, great team, great interview process, and whatever office arrangement you prefer. But they've always been doing better along at least some of the four measures.

The other story is much more typical, and it goes something like this: I'd love to leave, but there's something keeping me. One more year and I'll get a new title, and then I'll be so well-placed for a new job. I've heard the market is bad, so I should wait until it picks up again. I'll get a raise soon, then I'll negotiate for a new job. I'm scared of keeping up with mortgage repayments. I just need a year to finish up this project, it'll look great on my CV. My network is terrible, so I don't have the same options open to me. I think I can make a difference if I'm given a few more months.

In two years, this second approach has never gone well. Never, ever, ever. Consider this real exchange, copied verbatim and redacted.

May, 2024:
Me: I'm a little bit concerned that the pathway above leads to delaying indefinitely (there's always going to be a risk of moving then getting laid off - so what risk level do you actually tolerate, and how is that balanced against [COMPANY] being run so badly that you can get laid off there too?) but you know your situation better than me.
Reader: Well, the company was bought out and seems to be stabilizing.

July, 2024:
Reader: Got some fantastic news! Gonna get a raise at [COMPANY], 20%! It came as a surprise, apparently they think I earn too little so they're giving me a raise because of that.

November, 2024:
Reader: Wanted to let you know I got news, I'm gonna be fired next month.

This happens so often that it's actually boring for me. I've had exchanges like the above often enough that I know the person is finished months before they do. Play stupid games, win stupid prizes. They will either be let go, burn out and quit, or burn out and stay there as their health deteriorates. No one, at any level short of executive, has managed to have the impact that would be required for them to feel it was worth the cost.

The thing that is missing, to my eyes, is some sense of confidence and self-respect. I hear lots of supposed barriers to getting a better job, but almost none of them are convincing, especially from people in the first world, so what I'm actually hearing about are psychological barriers.
It takes a certain degree of confidence to know that you have worth, because a great deal of our society, whether by coincidence or design, causes people to feel like they're not desirable. If you don't have confidence, you feel trapped in your current situation, because what if you can't find something else? What if you're not good enough? This is a real risk, but guess what, life's risky! Two months ago, one of my high school classmates, one of the fittest people I know, died of an aneurysm at age 29. Think about it this way: enough people read this blog that if you are reading this sentence, you have just drawn a ticket in the "heart attack kills me by December" lottery. This isn't hypothetical, this will happen — someone reading this will die having spent a few hundred hours on spreadsheets this year, and perhaps even have time to think "I wish I had listened to Ludic, he is so smart and wise." 1

And on self-respect, I will concede that you're getting the vestiges of my time spent in psychology, but why would anyone respect you if you let someone do Scrum at you for hours? No one respected me when I let people do Scrum at me, and that was my fault.

"It what der street trolls make when dey is short o' cash an' ... what is it dey's short of, Brick?" The moving spoon paused. "Dey is short o' self-respec', Sergeant," he said, as one might who'd had the lesson shouted into his ear for twenty minutes.

So where do we start off? Well, the first thing to do is bury the idea that you need this particular job, or that you are otherwise unworthy. And we're going to do that by getting really good at getting mediocre jobs, and we're almost always going to want to be doing day-rate contracts. We are going to do a lot of things that I do not endorse when going for a good job, like sending your CV anywhere, talking to recruiters, etc. Regular full-time jobs obtained through mass-market channels have dysfunctional social dynamics that are too complex to get into here. Patrick McKenzie writes:

Many people think job searches go something like this:
See ad for job on Monster.com
Send in a resume.
Get an interview.
Get asked for salary requirements.
Get offered your salary requirement plus 5%.
Try to negotiate that offer, if you can bring yourself to.
This is an effective strategy for job searching if you enjoy alternating bouts of being unemployed, being poorly compensated, and then treated like a disposable peon.

Working jobs like the above comes at a real cost, even if you can get them at-will. I had an episode of intense burnout which resulted in a year of recovery, and I had to think very hard about how to not feel trapped in a bad situation again, even if the business fails. I do not want to attend hour-long stand-ups anymore. This section is about how to get the above jobs as effectively and painlessly as possible, but they will still not be great, and if you do them forever then I will be very disappointed in you.

In any case, if one must engage with the market in this way to build confidence and a reputation, then day-rate contracts are amazing. I am heavily in favour of contracting. The day rate is much higher. You are forced to continue searching every few months, which means you are also forced to always be aware that you have options, and we will discuss how to minimize the pain of this. You will meet far more people because you will be at a new workplace every few months.
Here in Australia, a weak contracting job will pay A$1K per day, which is approximately double what a permanent employee earns. I.e, for every six months of contracting, you can afford six months of unemployment, and you're still as well off as you were if you had been permanently employed over that period. Contracts are terminated more frequently, but you're also in a much better position because you've saved way more money per day worked, you've met tons more people, and your CV is always up-to-date. And you also knew it was going to expire in six months, so having it end three months early isn't a horrible shock to your planning.

You are also excluded from the most mentally draining practices in a corporate environment, and afforded a higher status than regular employees. You will usually not be asked to attend pointless meetings, and instead be left free to execute on technical work, particularly if you indicate that you can manage scope independently. If someone does ask you to attend a pointless meeting, you can recite the Litany of a Thousand Dollars a Day in your head over and over as the project manager attempts to flay your mind. You know that delightful period after you've submitted a resignation and you're about to get out? That's the whole contract. A six month contract feels like handing in a resignation with six months of notice. When the CEO says "Can we put GenAI and blockchain in the product?", you can close your eyes, my God you are so happy, and whisper "Inshallah, I will not be on this train when it derails".

None of these jobs will be great. This is not a good way to get jobs in the long-term. This is a boring, soulless way for someone that does not have any appreciable career capital or networking ability to generate adequate jobs on high pay. We only bother with this so that you know that if your business explodes, or the cool non-profit you find fires you, or if a new boss comes in and abuses you, you'll know deep down that you can walk right out that door and tell them to get fucked. I should note that the advice in this section was heavily contributed to by a friend who wishes to remain anonymous, but let us all send them silent thanks.

Anyway, we take Sturgeon's Law very seriously on this blog. Ninety percent of everything is crap. It is with this understanding that we must proceed. There is a pathway to navigate that relies entirely on the broad understanding that:

Let us begin with recruiters. Recruiters are an unfortunate reality of the industry. I still haven't worked out why they exist when a company can just post a job ad themselves, and their talent team has to filter out the candidates themselves anyway, but whatever. They're here, and I've learned enough about the world to accept that it's 50/50 on whether their existence is economically rational.

In 2019, about a month into my first full-time programming job, I received a call from a recruiter. They were looking for someone with Airflow experience to work a contract with Coles, a massive Australian grocery chain. I had no idea what to really say to this, being inexperienced and hugely underconfident, so I just listened to his questions and answered them. Most of my answers were a sad: "Ah, no, sorry, I know what AWS is but I've never used it before at a real business. I know what Airflow is, but I've never..." Until finally we come across the fateful question: "And do you know Linux?"

Why, yes, I do know Linux. At that stage of my career, it never even occurred to me to ponder what knowing Linux is.
Do I even know my keyboard if I can't construct one from scratch? What does he mean know ? How deranged would I have to be to say I know Python, without qualification, without being a core contributor? But none of that occurred to me, I just said yes. He is delighted, we get to chatting, and we quickly realize that we're both working our first jobs! He is a year younger, also nervous about his job, and is so happy to be talking to someone that just sounds like a normal person. He is soon comfortable enough to ask me a very vulnerable question: "So what is Linux?" I answer, and I've been doing nothing but teach psychologists-in-training statistics for a year, so the explanation is good. Each good explanation leads to another, until I'm fielding questions like: "What is Airflow? What is AWS?" We hang up, on good terms, and I stare at the wall for a long moment. There are people out there just like, calling around and functionally asking "Have you used FeebleGlorp for eight years?" with no internal model of what FeebleGlorp might be? That can't be right. Everyone at school told me that affairs would be very serious in the real world. Affairs are not very serious in the real world. Affairs have never been less serious. I told myself for a while that this must have been because he was so young, but no, they're actually almost all like this. I have only ever met two recruiters with intact brains 2 . To quote a reader with extensive HR experience who attempted to explain this dysfunction to me: While there are professionals that specialize in tech and with time develop enough depth to understand the discipline and move the needle in the right direction, for most recruiters it is not an economic advantage to do so; as the winds of the market are ever changing, recruiters are always the first ones to go into the chopping block when there are layoffs. Better to be a generalist recruiter and keep your job options open. I.e, the recruiters you are talking to probably go out of their way to avoid learning anything, because they may be recruiting in a different industry next month. This means a few things. I normally do not send CVs anywhere and decry them, but I've reversed my stance. They're a terrible way to get good jobs, but a heavily optimized CV will demolish most other candidates, who are about as unserious as the recruiters. So how do we optimize? Well, we're trying to get past recruiters. On your CV, quality indicators only matter if the recruiter can understand them, and as per the above they do not understand anything . At 12:34 PM today, while writing this blog post, I got a call from a recruiter and I asked them a question for blog material. "Hey, question about my CV, would it be better if I mentioned that I'm well-regarded by Linus Torvalds?" (This is not true, we don't know each other.) And they said, "Uh, I'd leave that out, these are very busy people and need technical credentials." Recruiters are only looking for one thing. They are looking for the number of years of experience that you say you have in buzzword, and possibly that you've worked somewhere like Google — but I've never seen a Googler compete for open-market contracts, so don't feel too disadvantaged. Years of experience with buzzword is the only thing that matters. Delete everything else. Link to your GitHub profile? Goodbye, none of these people are going to read that. I've been assured that the typical talent team spends five to ten seconds per CV . 
I am a passionate front-end developer with a drive for — no one cares, and if you reflect on how you felt even writing that sentence, of course no one cares. You didn't care. My CV used to say things like "deployed a deep learning project in collaboration with Google engineers" and it had sections like this: Some of the most talented people I know in Australia have told me that this would qualify me for an instant interview on their teams, but this CV does not work because the person reading your CV will not care about the craft. If someone that does care reads it, it will be after four untalented people decided it was allowed to land on their desk, and at that point they're going to interview you anyway so your CV doesn't matter.

The ideal CV starts with lines like this:

Five (5) years expert skills in cloud database development and integration (Databricks, Snowflake) using ETL/ELT tools (dbt and Airbyte) and deploying cloud computing with AWS (EC2, RDS) and Azure (VM) cloud platforms

The rest of the CV should be more lines like that, nothing else matters. A senior talent acquisition nerd at McKenzie told me that CVs should be one page, because it shows that the candidate is concise. Their counterpart at another agency said that you need three pages or you can barely get to know the candidate. Which of them is right? Both of them have no idea what they're talking about, because both of them are just eyeballing it, coming up with post-hoc rationalizations for their behavior that ignore the real hard question of why they specialize in hiring talent in fields that they cannot describe. I now trend towards a three page CV for no reason other than it looks like I must have more experience if it won't fit on one page, and it gives me more space to put buzzwords in. And when I say buzzwords, I mean you need the room to write things like "Amazon Web Services (AWS)" because some of the people reading the CVs do not know they are the same thing. Act on the principle of minimum charity, and accept that this version of your CV will never get you a great job. We know what we're optimizing for at this stage, and it isn't amazing colleagues, it is the ability to refill your coffers very quickly and with minimal pain.

Okay, but which buzzwords do you pick? If you hop onto a job search platform, you are going to see many jobs that are essentially asking you to cosplay as a software engineer. For example, I have just hopped onto Seek and punched in "data engineer", my own subspecialty. This immediately yields this job from an international consultancy whose frontpage reads:

GenAI is the most powerful force in business—and society—since the steam engine. As software and code generate more value than ever, every worker, business leader, student, and parent is now asking: Are we ready?

Wow, that sure is something! I think I speak on behalf of all of us when I say "please stop, you're hurting us". Also it looks like there isn't a single mention of AI on their website in 2022, so I'm really impressed that they've become experts in a novel technology just in time to cash in on over-excited executives.

But what does the actual job listing entail?

Proficient in Azure Data Factory, Databricks, SQL Server Integration Services (SSIS), and SQL Server Analysis Services (SSAS).

And from this, by mental force, I can tell you everything you need to know about the job.
My third eye is fully open, and the recruiting department's pathetic attempts to ward off my psychic intrusions are but tattered veils before a hurricane. They are almost certainly recruiting data engineers for a company with a very weak IT department, probably a government client, that is in the middle of a failing cloud migration. SSIS is the phylactery of a millennia-old lich-king, a piece of software that runs out of SQL Server on old government data warehouses everywhere. The first time I had to fix an SSIS production outage, the senior engineer on my team told me to "untick all the boxes on that screen, then re-tick them all and click save", and that actually solved the problem. The entire point of a cloud migration is to stop using SSIS and use something better, but that would require you to be good at your job, so instead consultancies sell Azure Data Factory.

Azure Data Factory is notable for having been forged in the hottest furnace in Hell. The last time I used it, I clicked "save" on a slow internet connection and it started to open and close random panels for five minutes before saving my work, which I can only assume means that the product has to open every component on the front-end to fetch data from the DOM to populate a POST request... which is, you know, certainly one way that we could do things. Why use something so bad? It's because Azure Data Factory can be used to run SSIS packages! So now you're on the cloud, and have a new bad service running your old bad service, all without actually improving anything! And of course, they are both tools that do not require programming, so the consultancy can sell you a team of non-programmers for $2,000 per day. I've worked alongside one of these teams. They had one good developer who desperately tried to handle all of their work at once, and I shit you not, four "engineers" that spent eight hours a day copying-and-pasting secrets out of legacy SSIS packages into Azure Data Factory's secret manager for weeks on end.

With a bit of experience, most job listings are simply an honor roll of dead IT projects. And because many executives hop onto the same bandwagons at the same times (but call it innovation), there seem to be specific patterns for the type of cog that companies are pursuing at any given moment. The friend who gave me most of the tips in here has an "Azure Data Engineer" CV, where he removes all mention of AWS work he has done so that government recruiters don't hurt their pretty little heads, and vice versa. Companies on Azure want Databricks because you can spin it up from the Azure UI, and companies on AWS similarly use Snowflake because of groupthink. Just smash those words onto the page. Every field can think of some variation of this. If you're a data scientist, it'll be a few common patterns to try to cram LLMs into things. If you're a front-end developer, it's probably going to be a soup of React and its supporting technologies. Again, no one reading your CV until the final stage will know anything. Once, a recruiter had coffee with me, and they asked me why Git is such a popular programming language.

I write my CV in Overleaf because I can make faster edits during the early phases of figuring out which patterns work, and fidgeting with layout is probably the most annoying part of any sort of CV-writing.

This is a tough situation.
What I did was look up a few "easy" jobs, like data analyst, hop onto LinkedIn, navigate to that company's page, then navigate to someone that looks like they might be leading the relevant team. Do not message HR. They, as a rule, do not have human frailties like mercy and kindness while they are at work. Go straight to someone that actually cares if you are good at the job, and impress upon them that you are a real person, who either has a very cool life story about changing career pathways late in life, or who is an adorable graduate UwU. If you are super, super, super desperate, my company has an unpaid internship for graduates that really, really, really think that they just need a tiny bit of experience to get taken seriously.

Once you're done scrubbing all signs of personality or competence out of your CV, leaving only eight (8) years of experience, what then? If you're going to be doing this all the time, how do we make it relatively painless?

The first thing is to hop onto a bunch of job platforms and upload the CV. That's simple enough. This means that recruiters will start reaching out to you every week or so, and some of them will have jobs for you. 90% of them will fail to secure you an interview. I start the conversations with something like "In the interest of making sure we're making good use of time, what's the expected compensation for the job?". They'll say a number. If the number isn't high enough, thank them for their time and hang up. Don't waste your time. They will not present you to the client if you ask them to do any work beyond sending a CV and collecting a commission — from their perspective, you are cattle to be sold. If I were desperate, I would take the first job offered to me at any pay, then not slow my search down at all. Most contracts allow you to quit on very short notice, so use that against the employer instead of having it used against you for a change.

The second thing is that you can start testing out the CV in the lowest-effort manner possible. The recommendation from my friend that has experimented the most is to grab a job platform's app on your phone, and to apply to maybe three or four jobs every morning. Don't bother with ones that ask you to make accounts on new job platforms, or write cover letters, or anything like that. Save a filter that removes anything you aren't willing to do, whether it's pay that's too low or a long commute. Err on the side of being picky, and do this every workday, even if you already have a contract. If the list of jobs becomes empty, then you must either relax your constraints or move to a new area. Sorry!

If you get as far as a call with a human and are later rejected, ask them, especially recruiters, what employers want to see. They will tell you which buzzwords are good. If there is any conceivable way that you can claim to have experience in an important buzzword, write it down. This is incidentally how the strain of doing this is best managed — by not doing anything more arduous than reading a few jobs and clicking "apply", then not thinking about it until the next day. Don't apply to so many that it feels like even a bit of an ordeal. Do not let rejection affect you, most of the people involved in this process do not deserve your respect in this instance. I am sure they are lovely husbands and wives and sons and we don't care right now.
It has taken up to two months before calls have started rolling in, and that is why I'd suggest doing this more-or-less constantly, even when settled into a contract role. You want to know if jobs have suddenly started to dry up, or if you need to make adjustments to your buzzword soup. A fair number of these jobs won't do any sort of diligence. The interview will be fine. Questions will sometimes be on the level of "Do you know Python?", a real question that a real director asked me before paying me hundreds of thousands of dollars. I've done a few more unpleasant interviews, detailed here, but at this point they don't bother me. If I found myself in another one of these situations, I would hang up mid-call. Eat your assertiveness vegetables, they'll put hair on your chest.

Quit. You got this job, you'll get another job.

Don't quit, duh.

Listen man, I didn't design the industry, but I rolled brown skin and an Indian name at character creation. I'm just doing what I've gotta do.

All those jobs will be mediocre, but you won't feel like any particular person has too much power over you. But still, the second job market is where you actually want to be. This is the promised land where people have functioning test suites, the executives know something about the work being undertaken, your colleagues are not Senior Void Gazers who have been so thoroughly beaten down by the industry that they dully repeat "it's a living and I have kids", and as a bonus you're probably paid about 50% more. It is so totally divorced from the first job market that people in it sometimes do not understand that the first job market exists. Famous Netflix-guy-turned-Twitch-streamer, the Primagen, has never even heard of PowerBI, which is probably the most popular analytics tool on the planet. These people are blessed.

It is not accessible via Seek. It is accessible entirely through having well-placed friends and a reputation for being a cool person with a modicum of self-respect. You can't generate these by pulling the "apply for job" lever over and over. This way you don't have to pray that your friends' companies are hiring at any given moment, you'll just always know that you've got an interview every few weeks. Because getting in here isn't very predictable, this section is general advice in no particular order.

If the company asks you to do Leetcode stuff, my opinion right now is that they're probably at least a bit serious, but I don't think a place that asks you to grovel before entry is a great place to be along non-technical dimensions. Erik Dietrich calls this type of interview "carnival cash", rewarding compliant employees and middle managers with the opportunity to terrorize their fellow humans instead of with money. I'm not that sure about this point. I'd probably be bad at a Leetcode interview, so I'm biased against them. Maybe they're correlated with high quality programming performance in some way that I don't understand.

People often say "I don't have any connections" or "My network is terrible". This was a 0% judgement zone earlier. It is now a 0% sympathy zone. There is a phenomenon I refer to called "trying to try". It can be broadly summarized as any set of behaviors where someone has not seriously engaged their brain, does not really believe that they're doing anything with a serious chance of success, and are more-or-less just looking for reasons to say that they tried but failed.
This happens in subtle places — for example, when training with beginner sabre fencers, you can stand perfectly immobile and they will very consistently hit your blade instead of you. They are so panicked and upset that their body is not trying to win, simply going through the motions of what fencing looks like. This manifests in all sorts of ways that I'll talk about one day, but it's so apparent in the job search. "I've applied for fifty jobs and no one responded". A good indicator that you're trying to try is that you are:

Most people tell me they've applied for jobs and didn't get responses. Slightly savvier people tell me they've sent some cold emails out. Some people beyond that say they've started attending Meetups but had no luck. None of them have done anything remotely interesting or otherwise indicative of novel thought.

I got my first programming job by emailing Josh Wiley from my psychology degree, a man who did not know me at all, but I had been in one lecture with him, and his wife was the only senior academic honest enough to tell me not to undertake a PhD. I still have the original email. We had a brief back-and-forth, and two weeks later one of his colleagues said "One of my PhD students is freaking out because they can't process some data in R", and that got me my first paid programming job, processing microsaccade data in sleep-deprived drivers.

A few weeks later, I saw that a data analyst job was up for grabs at a nearby university. The smooth-brained thing to do would have been to apply via Seek and get ignored. I instead went on LinkedIn, looked up the company, looked up the word "lead", and cold-messaged someone who seemed like they might have something to do with the job. This led me to Dave Coulter, who I still catch up with every few months, and a job offer that let me skip straight to being a mid-level engineer. During the interview, when they asked "Have you programmed professionally?", I described the microsaccade project and they hired me. I didn't mention that it was about thirty hours of work in total, and they didn't ask. I actually lost the original position to someone with six more years of experience, who was offered the original data analyst role or a much more highly-paid contract. They wanted the stability, so they took the permanent role, leaving me with a massive pay bump for the contract role, and we both quit at the same time anyway. And they do not conceptualize it as losing tens of thousands of dollars, but they were functionally unemployed for months relative to what they could have earned. Score one for contracts.

Those still ended up being mediocre jobs, but I just wanted to illustrate that there is a level of trying that looks more like "there is a gun to my head and I'm willing to do unorthodox things to survive", and the people that email me for jobs have never reached the unorthodox part of that. Presumably the people that do reach this point do not need to email me for jobs.

I woke up this morning to an email from Dan Tentler from Phobos about safe ways to run Incus with NixOS images pre-loaded with Airflow and an overlay VPN to client sites. Dan learned about Phobos from a group of hackers in Oslo. I learned about overlay VPNs in December from the CTO of Hyprefire, Stefan Prandl, when asking for advice on network security. I have a discussion about something like this every day, even if it isn't in the tech space.
Before that, at a relatively "decent" engineering company, the most complex discussion I had was trying to explain to someone that their Snowflake workload was crashing because they were trying to read 2M+ records into a hash map and that this takes a lot of memory. I have learned more in the last three months than I have in the previous three years, basically along every dimension of my profession. I'm trying to catch up for years of working with mediocre performers, and it's hard . It's definitely doable, and remember that I'm doing this while spending half my time on sales so you can do it faster than me, but there is a real cost to not working with really great people. I've studied hard over the past few years, but nothing comes close to just having awesome people around you. This matters because really good teams don't hire total scrubs that haven't taken control of their education. The first job market does not reward skill or personal development. The second one does actually require you to be good. The best offer I've received from a good company (A$185K) was obtained not through Seek, but by meeting my current co-founder Ash Lally during the preparation for a game of Twilight Imperium IV where I absolutely smoked everyone . The only other place that I've considered might be acceptable to settle down, much better than the offer I received through Ash, was the result of getting coffee with a local reader, then eventually being invited to drinks with their team a few times. We mostly talked about split keyboards and Star Citizen. It has been a few months since I quit my last job, and I used to say all sorts of conciliatory things like "Sure, that engineer is terrible, but most of them are good!", but money talks. I only offered one of them a job with me. In retrospect, most of them had the potential to be good, but enough years in a typical corporate setting will ruin this. When I was 20, people were happy to hire me because I had potential. Now, potential is still important, but it's important that I've at least demonstrated that some of it is manifesting . Many engineers have pathologies that I think make them unsuitable for work on a healthy team, in the same way that some people need to do some self-work to enter a healthy relationship. For example, I know many people who feel guilty taking time off, so they'll burn themselves out without someone constantly getting them to slow down. I'm sympathetic, but a team as small as mine doesn't have time to walk someone through that level of self-harm and still deliver for clients reliably. We help each other through lots of little quirks we need to deprogram out of corporate contexts, but we need to be starting from a place of some progress. An example that Modal's Jonathon Belotti sent my way is that Modal's most high-performing team members will get a two week deadline, then confidently spend the entirety of the first week reading a book on the technology they're about to use. Most engineers I know, including myself a few years ago, would rather hack incompetently for two weeks. The essential reason for this is being too underconfident to act on our beliefs about how engineering should be done (or worse, not having those beliefs at all), and we'd rather fail in the approved, was-working-visibly fashion than risk looking unorthodox. "I programmed the whole two weeks and failed!" feels easier to justify than "I read a book for one of those weeks and failed!". 
But team members should be picked for their judgement, and they are good for the team in proportion to the quality of that judgement and their willingness to exercise it in the face of orthodoxy.

People are awful at asking for work. Here is how I advise people do it. If you don't have a good time, just leave it be. You're here to rekindle old relationships and meet interesting people, and maybe they can help you out. The moment you start asking people for help that you don't even want as friends is the moment that the entire endeavour becomes sleazy. I think of each person I know (in the context of job searching) as some sort of machine that randomly spits out jobs in a uniform distribution over a year. Let's say each person has a 5% chance of turning up a job every month, maybe more or less depending on the market. If you want an 80% chance at a job every month, you have to have enough people with you in mind that you're rolling the dice enough times to get that number (a quick back-of-the-envelope version of that number is worked out at the end of this post).

Many people tell me that they attended a few Meetups and had no results, even though that's what you're supposed to do. It's good that they tried, but most large Meetups seem to be populated by people who are ineffectually looking for jobs. Don't be ineffectual. Large Meetups were frustrating when I was a student because everyone interesting was swarmed by students trying desperately to look employable without being needy, and it is frustrating as a non-student because now I get swarmed. People go "Oh, I am a data scientist, I will go to the Data Science Meetup". That's better than not going out at all, but strictly inferior to going to a tiny Meetup with ten nerds that are deeply into Elixir or some other niche bit of technology. You will form real connections with the latter, and the fact that you know what Clojure is will be enough to make many people at such a place want to work with you. If you are in a city with a functioning tech industry and can't think of any interesting technology, then it's going to be really hard to justify why you deserve a spot on a good team, so maybe solve that problem first.

If you're decent at writing and have opinions on something... write. It's amazing for meeting people. I have several readers that have sent me their writing, and without any intervention from me, about 30% of them hit the front-page of Hackernews on the strength of their material. There are surprisingly few people putting out good material on almost all topics, especially in the age of LLM slop. Reader Mira Welner wrote about something as generic as Git checkout and hit the front-page. Bernardo Stein, mentioned in various places on the blog as the guy that coaches me through my worst engineering impulses from my corporate career, has front-paged by writing about NixOS. Nat Bennett, who I've been getting advice from for months and am now hiring to coach my team, front-paged Hackernews writing about the notebook they keep at new jobs. Even Scott Smitelli, who I quoted earlier for having written this fantastic piece, emailed it to me, and before I could finish reading it people were already recommending it to me through other channels. It's super easy to meet people through writing if you aren't afraid of pushing out your real opinions, and indeed you will see extremely stupid comments on all of the above writing, so you will need to be unafraid.

Fine. Tell people that you, personally, are ChatGPT.
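As promised, the back-of-the-envelope for the numbers game above, using the same purely illustrative 5% figure: if each person who might think of you has an independent 5% chance of surfacing a job in a given month, the chance that at least one of n people does is 1 - 0.95^n. Asking for an 80% chance means 1 - 0.95^n >= 0.8, so n >= ln(0.2) / ln(0.95), which is about 31.4, call it roughly thirty people. The exact numbers don't matter; the point is that a handful of contacts is a lottery ticket, while a few dozen starts to look like a plan.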
Someone else may lose their job and think "I wish I hadn't listened to Ludic, he is so stupid and foolish", but I refuse to acknowledge them.  ↩ The main one is Gary Donovan who I didn't even meet in the wild. I met a reader for coffee, and that reader worked with a really nice engineering company. That company said Gary is their favourite recruiter. The first time I called him, I said something about Lisp and it turned out he had a copy of The Little Schemer in front of him at that very second, and we later had a great talk about engineering culture in F1 over ramen. I am still reeling at the implications in neuroscience of a recruiter that can read — is it possible that some of them are sentient?  ↩

0 views
Lambda Land 10 months ago

Should Programming Languages be Safe or Powerful?

Should a programming language be powerful and let a programmer do a lot, or should it be safe and protect the programmer from bad mistakes? Contrary to what the title insinuates, these are not diametrically opposed attributes. Nevertheless, this is the mindset that underlies notions such as, “macros, manual memory management, etc. are power tools—they’re not supposed to be safe.”

If safety and power are not necessarily opposed, why does this notion persist? The problem—I think—is that historically you did have to trade safety for certain kinds of power: if you wanted to write a high-performance device driver, C—with all its unsafe behavior—was your only option. This founded the idea that the “power tools” of the industry were fundamentally dangerous. There are a few things wrong with this, though:

Power is relative to the domain of interest. Both Haskell and C are powerful, but in completely different ways. So, when judging whether an aspect of a language is powerful or not, consider its application.

Expressive languages get you power without sacrificing safety. New advances in programming language research have found ways to express problem domains more precisely. This means that we have less and less reason to breach safety and reach into the unsafe implementation details to get our work done.

It’s good to add safety to power tools. A safe power tool is more trustworthy than an unsafe one. This holds for real-world tools: I will never use a table saw without a functioning saw stop.

Specifically in the case of macros, there’s been an evolution from powerful-but-unsafe procedural macros in Lisp to safe-but-less-powerful pattern macros in Scheme, and finally to powerful-and-safe macros in Racket. More safety means higher reliability—something that everyone wants. And with advances in making languages more expressive, you can have a language perfectly suited to a particular domain without sacrificing safety.

A language that lets you do more of what you want to do is more powerful than a language where you can’t do what you want. But what does “what you want to do” encompass? If you want to write device drivers, then C is great for you. However, C is not as expressive in some of the ways that, say, Haskell is. For example, in Haskell, I can write lazy, recursive definitions. Here’s a list of all the Fibonacci numbers (yes, all of them; Haskell is lazy, so it will compute as many as you ask for):

Before you tell me that that’s just a useless cute trick, I actually had to use this when I was building the balancing algorithm in my rope data structure for my text editor written in Haskell. Haskell is incredibly powerful in an expressive sense: a single line of code can elegantly capture a complicated computation.

The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise. Edsger Dijkstra

Power is closely related to the domain of interest: a language is powerful in a particular realm of problems. C is powerful for working with memory directly. Conversely, Haskell or Racket is more powerful than C in pretty much every other domain because these languages give the user tremendous ability to match the program to the domain. This is a meta-power that sets high-level languages apart from lower-level ones. Safe languages can be just as powerful as their unsafe counterparts—in many cases, they are more powerful because the abstractions they create better fit the domain.
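As a rough sketch of the same on-demand, infinite-sequence idea, here is an Elixir analogue using Stream.unfold (my illustration, not the Haskell definition the post refers to):

    # Nothing is computed until you ask for it.
    fibs = Stream.unfold({0, 1}, fn {a, b} -> {a, {b, a + b}} end)

    Enum.take(fibs, 10)
    # => [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]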
Whenever a tradeoff between power and safety must be made, that is a sign that the language is not the right fit for the domain.

Consider how immutability gives you local reasoning power. At one of my industry jobs, our codebase was a mixture of Ruby and Elixir. Both are safe languages, but Elixir is immutable. When I was working on some Elixir code, I could read a function and not have to worry about a value getting modified somewhere in a call it makes. To understand the output of the function, I didn’t have to worry too much about the implementation of the functions it called. In contrast, if you did the same sort of thing in Ruby, a method could do something sneaky like set the name to a default value if it didn’t exist. You might think, “well, just document that behavior.” Now I need to read the documentation of every function I encounter—I might as well go read the code to be sure the documentation isn’t out of date. Local reasoning means that, to understand what gets passed around, I don’t have to worry in the first place about whether one function will do something to the result of another. In this case, I did have to understand what every method call did to understand the function. This made it harder to track down errors because I had to account for all the side effects that could happen at every method call.

Certain things like immutability might seem constraining, but constraints can liberate you by allowing you to rely on particular behaviors. Elixir doesn’t let you modify things in-place, but you can rely on this, which makes understanding and composing code easier. Haskell forces you to express side-effects in the type system, but this lets you know that calling a function whose type signature doesn’t mention IO won’t do any IO or throw an exception. Rust doesn’t have null like Java does, but you know that when you get a pointer, you can safely dereference it, and you don’t have to do all the null checking that you have to do in Java.

The evolution of syntax macros in Lisp, Scheme, and Racket provides an interesting real-world instance of how safety and power can start off as a trade-off but, with better language design, become complementary. I don’t have the space here to do a deep dive into Lisp macros, but here’s the short of it: Lisp macros are just functions that receive code as data. This code is represented as nested lists of symbols. All a macro needs to do is return a new list of symbols that will be spliced right into the call site. The problem with this is that these macros are unhygienic: if a macro introduces a new variable, that variable is just a bare symbol that can be inadvertently captured, producing unexpected output. This is very bad! To use a macro safely, you need to be sure that it’s not introducing variables that you might accidentally capture. Lisp provides a mechanism to avoid some of the pitfalls with variable capture (the gensym function, which makes a fresh symbol for you to use; some other languages such as Julia have a gensym function too, though gensym is a poor substitute for proper hygiene), but that’s not the end of the danger. If I have a macro that expands to a call to a helper function, I would expect this to be the function that was in scope at the time I defined the macro. However, this might not be the case—a user might inadvertently redefine a function, and then the macro would not behave in the expected way.

Scheme has a facility called syntax-rules, which lets you define transformations between a pattern and a template. (Rust’s macro_rules! form is essentially syntax-rules from Scheme, but a little fancier, with some syntax classes and such.)
This is safe; the examples from the Lisp section run as expected. However, we’ve lost some of the power, because we can only define transformations between templates. We can’t, for example, write a macro that does some deep inspection of the code and makes decisions on how to expand. Furthermore, there’s no way for us to intentionally break hygiene when we really want to.

Racket resolves the dilemma of having to choose between powerful Lisp-like procedural macros and safe Scheme-like hygienic macros by giving us fully hygienic procedural macros! I have another blog post discussing macros in Lisp, Scheme, and Racket where I go into some detail about the evolution of those macro systems. And if you want to dive deep into macro hygiene, see Matthew Butterick’s excellent explainer on Hygiene from his book Beautiful Racket.

The upshot of it is that Racket uses a combination of features (scope sets, syntax objects, etc.) to give the user a richer way of specifying syntax than simple dumb lists of symbols. This avoids inadvertent variable capture as well as keeps function references lined up nicely. However, macros can still do arbitrary computation, which means that we’re not constrained in the way that the pattern-transformation macros in Scheme are. And just to prove that Racket is just as powerful as Common Lisp, here’s the classic macro (this example is inspired by Greg Hendershott’s fabulous tutorial Fear of Macros). The key form there lets us introduce new bindings intentionally, whilst still keeping us from accidental breaches of macro hygiene.

Consequently, Racket’s macro system is far more useful than Lisp’s or Scheme’s, and this is because of Racket’s safety and expressiveness. You can actually build trustworthy systems on top of Racket’s macro system because you’re not constantly foot-gunning yourself with hygiene malfunctions, and the macros are expressive enough to do some rather complicated things. Safe systems let us build software that is more capable and more reliable. Unsafe power is something to improve, not grudgingly accept—and much less defend as somehow desirable. Languages like Rust and Zig have made systems programming immune to whole hosts of errors by being more expressive than C, and languages like Racket are leading the way in making metaprogramming more useful, more reliable, and less like dark magic.

If you want to learn more about writing macros in Racket, check out Beautiful Racket by Matthew Butterick and Fear of Macros by Greg Hendershott. I also highly recommend listening to Runar Bjarnason’s talk at Scala World, Constraints Liberate, Liberties Constrain, wherein he discusses how constraining one part of a system can open up freedoms for later components that build on that constrained part.

0 views
Lambda Land 1 years ago

Skills That I Needed When I Started My PhD

I’m starting my third year as a PhD student. I thought it would be good to look back on some of the things that have helped me to this point. I study programming languages, but I imagine these things will help anyone in computer science—and some might have application to other STEM fields as well. There are many softer skills that you need as a PhD student: curiosity, good work ethic, organization, etc. These are essential and nothing can replace them. (Note: that was not an exhaustive list.) I’m going to focus on some of the tools and hard skills that made the ride a little more comfortable. These complement, rather than compete with, the softer skills that one develops as a beginning researcher. This is a rough list, and not a how-to article. This is mostly just a collection of things I’ve seen other people lacking that have caused them to struggle. If you are considering doing a PhD, you might want to pick up some of these skills as you get ready to start to help you hit the ground running.

I recommend reading The Pragmatic Programmer (Thomas, David and Hunt, Andrew, 2019). It’s written primarily for industry programmers, but there’s a lot in there that applies to anyone in CS research. All of the things I mention in this section are covered in detail in there.

You have got to know Git. If you cannot wrangle versions of your software and papers (yes, put the papers you write under version control), you will waste much time shooting yourself in the foot and trying to recover work you lost. You will also be laughed to scorn should you ever depart academia for a stint in industry if you do not know Git. In all of the papers I have worked on, we have used Git to collaborate. We’ve typically used GitHub, which is fine as forges go, but I’ve also worked with a self-hosted GitLab instance, and that was fine too.

It is incredibly helpful to know a scripting language. I grew up on Perl, which makes munging large amounts of text a piece of cake. You don’t have to learn Perl, but you should get really comfortable with a language that makes it easy to manipulate text and files. Makefiles are also super helpful. I like using Makefiles to simply give names to a particular workflow. A Makefile for building a paper might look something like the sketch below. Now, instead of remembering all the incantations necessary to do some task, I have given that task a name by which I can call it.

You must become proficient with the command line. If you are doing research, you will likely need to run software that other researchers have produced. And more likely than not, this will be rough software with bugs and sharp edges that is meant to demonstrate some research concept rather than be some practical tool ready for developers who only know how to code through YouTube videos and ChatGPT. That this software is rough is a feature of research software, not a bug. There is rarely, if ever, a GUI available. You are going to have to do stuff on the command line, so get used to it. Getting used to the command line helps with Scripting as well. Any task you do on the command line, you can write a script to automate. Building little scripts to e.g. build your paper, your homework, your experiments, etc. will save you time in the long run.

Emacs or Vim—pick one and learn it really well. VS Code is flashy and all, but it doesn’t have the same depth and breadth of customizations that Emacs and Vim give you. Also, Emacs and Vim are free software. You are in control!
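The Makefile mentioned above might look something like this; a plausible minimal version, assuming a latexmk-based build and a paper named paper.tex (illustrative only, not the author's original):

    # Recipe lines must be indented with tabs.
    paper.pdf: paper.tex refs.bib
        latexmk -pdf paper.tex

    watch:
        latexmk -pdf -pvc paper.tex    # rebuild whenever a source file changes

    clean:
        latexmk -C                     # delete all generated files

    .PHONY: watch clean

Now typing "make" builds the PDF and "make clean" tidies up, without remembering any latexmk flags.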
I, of course, love Emacs and I even made a starter kit called Bedrock to help some of my friends in my research lab get started with Emacs. I use Emacs to program, write papers, take notes, manage email, track tasks, and more. I made a list of my top Emacs packages a few weeks ago if you’d like more ideas on what is possible. Vim is fine too and I will still respect you if you choose to go that route. ;)

Familiarity with LaTeX has definitely helped me. Fighting with LaTeX is no fun, but you will have to do a little bit of it at some point. Lots of people like using Overleaf; I prefer the command line. Don’t get me wrong: Overleaf is awesome and makes collaborating in a Google Docs sort of way possible, but you lose some flexibility, and if something goes wrong on Overleaf right before your deadline, you’re toast.

There is a lovely computer science bibliography hosted at dblp.org. When I was going through the bibliography for my last paper I was able to find lots of missing DOIs simply by putting the title of the paper into the search bar; DBLP found all the bibliographic information that I needed.

Take notes whenever you learn how to do something that wasn’t obvious to you when you started out doing it. I like the Zettelkasten method for taking notes: whenever I learn how to e.g. do some complex layout in LaTeX or learn a neat Makefile trick, I write it down. You can think of it as writing your own personal man pages. (If you don’t know what a man page is: man is the standard manual system available on UNIX-like systems, e.g. FreeBSD, macOS, and Linux. Open a terminal and run man man to read the manual page for man itself. You really need to get comfortable with the command line.) Some of these notes I rarely look back at. Others I revisit regularly. But even though I might not review some notes that frequently, there are cases where something on my system will break and a years-old note comes to my rescue from the last time I had to solve that problem. For example, I took notes on how to upgrade my language server for Elixir. I don’t upgrade that thing very often, but there is a little tweak I need to do just because of how my system is set up that is not obvious. It took me a few hours of debugging the first time, but, because I took notes, it now only takes me a few minutes.

Academics generally love email. It’s simple, robust, and doesn’t change its UI every few weeks, unlike some popular chat platforms. Unfortunately many universities are forcing everyone to move to Outlook. This is a very bad thing. Fortunately, there are some workarounds that you can use to reclaim some control over your email. I have a sweet workflow with my email. That’s right, I do it all from Emacs. Now, while I do recommend you learn how to use Emacs, I understand that not everyone will start using Emacs. Everyone should get proficient with their email client and know how to use it well. I recommend anything that you can control entirely from the keyboard.

You should also get comfortable with editing replies. You know how, when you reply to an email, you usually see the quoted message below your cursor? Some mail clients will make the > at the beginning of each quoted line pretty with different colored lines and whatnot. It’s all angle brackets under the hood, and you can still edit it as described here. Just typing your reply above the email is called “top-posting”, and it’s considered bad form. You can actually edit the bit that was sent to interleave your reply with bits of the prior email.
This makes it easier for people to know what you’re replying to. When used appropriately, this makes emails much more pleasant to read. It doesn’t break the email thread either; you can still see the chain of replies. You need some way to keep track of tasks. I have a workflow based off of Org-mode, which I will not detail here. The short of it is that you need to be spending at least a little time with some regularity “sharpening the saw” 1 by making sure that whatever tool you use to keep track of tasks is working for you. Thomas, David and Hunt, Andrew (2019). The Pragmatic Programmer, Addison-Wesley. https://en.wikipedia.org/wiki/The_7_Habits_of_Highly_Effective_People ↩︎

0 views
Lambda Land 1 years ago

I Probably Hate Writing Code in Your Favorite Language

The Tao gave birth to machine language. Machine language gave birth to the assembler. The assembler gave birth to the compiler. Now there are ten thousand languages. Each language has its purpose, however humble. Each language expresses the Yin and Yang of software. Each language has its place within the Tao. But do not program in COBOL if you can avoid it. The Tao of Programming

I probably hate writing code in your favorite programming language, whatever it may be. This is because I get frustrated by basically all of the top 10 languages you’ll find listed anywhere, for various reasons. Do I hate programming in Python? You bet I do. “But it’s Python! Python is the best programming language on earth!” I can hear you say. I grant that it has its place. Python wins because its ecosystem of libraries is so huge and because there are so many resources for new users. It’s also garbage collected, which means memory safety is not an issue. It’s the current hot thing, because there is so much support for machine learning in Python. (I don’t consider Python quite as boring as Java!) But my problem is that Python is a boring language. This isn’t a bad thing necessarily. If you’re interested in solving a problem with a known solution and you’re doing it for business, then a boring language is probably better for you than, say, Haskell.

Why do I think Python is boring? In part because of its philosophy: There should be one—and preferably only one—obvious way to do it. The Zen of Python

Python has a model of how it wants you to solve problems. That’s right: it wants you to solve problems with objects, classes, and explicit loops. Got a problem that’s the perfect fit for a functional paradigm? Well, I guess you can use map and filter, but you only get a single expression inside of lambdas, data structures are all mutable, and you can’t use recursion to handle lists. Ugh.

I could tell similar stories for other languages that I don’t like programming in. These languages include JavaScript, Go, Java, and C++. Go and Java seem to have been made with huge teams of programmers in mind: make the language and syntax as simple as possible, and then even simpler at the expense of expressivity! This guards against programmers coming up with a clever way to express their problem in a domain-specific way—that’s probably a virtue in large companies. But that’s not how I like to program.

The thing I hate about all of the languages I listed is their emphasis on mutation. When I call a function and pass it a list or object or whatever, I have no guarantees about that thing’s value when the function returns. That means, to understand some code, I have to understand all of the functions that get called. In contrast, when I write in a language like Elixir or Haskell, which have immutable data structures, I can look at some code and not have to know what the functions it calls do to their arguments; I just know they return a value of some kind, and I’m free to continue using the original bindings as much as I like because their values have not changed. It might not seem like much in this example, but it is a big deal when you’re neck-deep in a debugging session. I once worked on a codebase that was half in Elixir and half in Ruby. I spent most of my time on the Elixir side. One time when I had to do some debugging in Ruby, I found it so difficult to trace the execution of the program because data was being changed in method calls.
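A minimal sketch of the kind of Elixir code that paragraph describes (the module and function names here are hypothetical, not the post's):

    defmodule Profile do
      def describe(user, orders, prefs) do
        greeting = greeting_for(user)   # whatever this does, it cannot mutate `user`
        summary  = summarize(orders)    # `orders` is still the same list afterwards
        theme    = theme_from(prefs)    # likewise for `prefs`
        "#{greeting}: #{summary}, #{theme} theme"
      end

      defp greeting_for(user), do: "Hello #{user.name}"
      defp summarize(orders), do: "#{length(orders)} orders"
      defp theme_from(prefs), do: Map.get(prefs, :theme, "light")
    end

    Profile.describe(%{name: "Ada"}, [1, 2], %{theme: "dark"})
    # => "Hello Ada: 2 orders, dark theme"

Each line of describe/3 can be read on its own: no call further down can reach back and change user, orders, or prefs.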
If this doesn’t make much sense to you, you might have to experience it first: once you’ve worked in a large functional codebase, you will find yourself bewildered by all the spooky-action-at-a-distance that goes on inside a large OO codebase.

Other things that frustrate me in programming languages include:
Automatic type conversion (looking at you JavaScript).
No type inference (if you’re gonna be statically typed, don’t make me write out the type every time, Java).
No structural typing (type is determined by the shape, not the class name).
No good functional data structures.
No metaprogramming.
No TCO/limits on stack depth.

That last one is something that really bothers me about Python. (Stack frames in Racket cost 2 words; source: I asked Matthew Flatt about Racket’s stack frame size once.) Either do proper tail-call elimination or, if you really absolutely must have all of your precious stack frames, performance and elegance be darned, then allocate your stack frames on the heap and stop worrying about it already! (I seem to recall a conversation where someone with knowledge of these things implied that this was in the works. I don’t know any details about it though.) Seriously though: some solutions lend themselves really well to a nice tail-recursive solution. But can you rely on such an implementation to be performant or even run in Python? Nope. Argh!!

Clearly, I like functional programming: it fits how my mind works, and I think it is in a lot of ways objectively better than other paradigms for software engineering. Immutability gives you the ability to reason locally about your code—not to mention not having to worry about race conditions when mutating data in concurrent environments!

To parallel the first list that I wrote, here are things that I like in a language:
Easy, explicit conversions between different types of data.
Dynamic typing or powerful type inference or gradual typing!
Structural typing (having nominal typing too can be nice when needed; but given one or the other I’ll take structural over nominal any day).
Functional data structures like cons cells, maps and sets supporting functional updates, and RRB trees!
Powerful macros that let me extend the language.
Proper TCO.

Macros can be a two-edged sword. That said, a lot of the danger around macros has largely been ameliorated. Elixir is a great example of this: Elixir has a small core and uses macros a lot to define basic language constructs in terms of simpler ones.

What languages do I enjoy programming in? Racket is my favorite: it’s designed to be flexible and give the programmer maximum ability to express their intent to the computer. Racket is a programmable programming language. Other languages I enjoy include Haskell, Elixir, and Rust. Haskell is the ur-functional language, and it’s really fun to use the type system to describe your domain. Pretty soon the compiler starts keeping you from making all sorts of mistakes that would be hard to catch with basic testing. In Elixir, you get lots of nice functional data structures, proper TCO, pattern matching, and soon gradual typing! Rust is great because it has a phenomenal type system with good type inference; its metaprogramming story could be improved though.

I want to make it clear that I am not attempting to start a flame-war or saying that Python, Java, et al. are useless: they have their place and are very respectable works of engineering. All I am saying is that, given a choice of language for a hobby project, I will pick something else because I don’t want to be frustrated by the language when I work. Anyway, that’s the end of my griping about languages. (For today, at least.)

There will always be things we wish to say in our programs that in all known languages can only be said poorly. A language that doesn’t affect the way you think about programming, is not worth knowing. Alan Perlis

0 views
Lambda Land 1 years ago

Chorex: Guaranteeing Deadlock Freedom in Elixir

Chorex is a brand-new Elixir library for choreographic programming [ 3 ]: Chorex provides a macro-based DSL that lets you describe how processes communicate to perform a computation. This top-down description of interacting processes is called a choreography. From this choreography, Chorex creates modules for each process that handle all the message-passing in the system. The interactions performed by the generated code will never deadlock by construction, because the choreographic DSL ensures that no processes will be waiting on each other at the same time.

This is a research project; if you like experimenting with new things, please try this out! The best way to leave feedback is by opening an issue on the Chorex repository. Chorex is still in active development, and we would love to see whatever you make with Chorex. Chorex is available on hex.pm. Development is on GitHub. Try it out!

Chorex enables choreographic programming in Elixir. A choreography is a birds-eye view of communicating parties in a concurrent system: you describe the different actors and how they send messages to each other. From this choreography you can create an endpoint projection, which just means you create some code for each of the concurrent actors that handles all the communication.

Choreographic programming ensures deadlock freedom by construction. That means you will not be able to accidentally create a system of actors that accidentally deadlock. It’s still possible to have other kinds of bugs that freeze the system (e.g. one of the actors hangs in an infinite loop), but it eliminates an entire class of bug that is difficult to track down in real applications. Additionally, Chorex implements higher-order choreographies [ 1 ] which let you treat choreographies as first-class citizens in your language. This improves the modularity of code built with choreographies.

Chorex does all this by leveraging Elixir’s macro system: you write down a choreography using the macro provided by Chorex. The macro expands into several modules: one for each actor in your system. You then create another module for each actor in the system which uses the respective macro-generated module; the macro-generated module handles the communication between the different parties in the choreography, and your hand-written module handles all the bits internal to that node.

Let’s look at an example. Here’s a simple, classic example: someone wants to buy a book, so they ask the seller for the price. The seller responds with the price. Here’s a diagram of that communication: And here is the corresponding choreography describing that: The macro will create (roughly) the following code: along with a macro. Now we create modules for each of our actors (the buyer and the seller), and we use the generated modules to handle the communication: To kick off the choreography, start up a process for each actor and send them everyone’s PID: Now you can wait for the processes to send you (the parent that started the choreography) their return values. From the choreography, we expect the buyer actor to finish with the price. We can get that like so, after sending the actors the config for the network:

In sum, this is how you use Chorex: Choreographies can get a lot more complicated than this puny example here. See the Chorex README and module documentation for more extensive examples with Chorex. Lugović and Montesi built an IRC client and server in Java with a choreography [ 2 ]—I’m excited to see what’s possible in Elixir!
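To see the class of bug that "deadlock freedom by construction" rules out, here is a deliberately broken sketch in plain Elixir (ordinary spawn/send/receive, not Chorex code): two processes that each insist on receiving before they send, so neither ever makes progress.

    ping = spawn(fn ->
      receive do
        {:from_pong, pid} -> send(pid, {:from_ping, self()})
      end
    end)

    pong = spawn(fn ->
      receive do
        {:from_ping, pid} -> send(pid, {:from_pong, self()})
      end
    end)

    # Neither process ever sends first, so both sit in `receive` forever.
    # A choreography makes this kind of ordering mistake inexpressible.
    {ping, pong}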
Chorex is a research project, meaning that its primary function is to prove out new ideas. Development speed takes priority over stability of features and API. This is a scout and a trailblazer, not a surveyor and road-laying machine. We would like to make Chorex as useful as possible; historically, choreographic programming libraries have been cutting-edge research projects. Chorex is still research-oriented, but if we can make it useful to people other than ourselves, then that’s a big win. :) Moreover, no one has done choreographic programming with Elixir-style concurrency, where processes have mailboxes and where there are existing idioms around processes and communication.

This is not intended to be a production-grade system. Maybe some day, but not today. Please don’t use this to build a production system and then blame us when your system goes down. Please do use this in hobby projects and let us know what you manage to build! Please send us any feedback you have! You can contact me directly or open an issue on the Chorex repository. We would love to see anything you make with Chorex.

While building the macro, I realized I needed to walk an AST and gather up a list of functions that an endpoint would need to define. This inspired me to create a writer monad; I documented how I stumbled upon a pattern that a monad solved quite elegantly earlier on my blog.

Write a choreography to describe your system.
The macro will create modules for each endpoint.
Implement each endpoint’s derived behaviour.
Fire off the choreography.
Await replies.

0 views
Lambda Land 1 years ago

My Top Emacs Packages

If you ask anyone what the best Emacs packages are, you’ll almost definitely hear Magit (the only Git porcelain worth using) and Org Mode (a way to organize anything and everything in plain text) listed as #1 and #2. And they’re right! I use those packages extensively every day. Besides those two powerhouses, there are a handful of packages that make using Emacs a delight. If I had to ever use something else, I would miss these packages most:

Avy: Jump around your screen crazy fast. Teleport to any character with ~5 key strokes. See https://karthinks.com/software/avy-can-do-anything/ for more reasons why it’s awesome. I rely almost exclusively on one of its jump commands and keep it bound to an easy key.

Embark: Kind of like a super-charged right-click for Emacs. Works beautifully in dired and when selecting files in the minibuffer. There’s an easy way to make it play well with Avy, which is just the best.

Eat: A terminal emulator that’s faster than almost all the other terminal emulators for Emacs. The only emulator it’s not faster than is Vterm, which is pretty dang speedy. Eat has been more than fast enough for all my needs, however. Additionally, it can make a terminal emulator in a particular region, so if you use Eshell, you can get a little terminal emulator for every command you run. Normally, if you run a command that emits escape sequences, you see the ugly terminal escape characters printed as text. With Eat, however, those terminal escape characters get interpreted correctly. Interactive programs (e.g. the Julia and Elixir REPLs) work flawlessly with it.

Jinx: Best spellchecking ever. It can spellcheck based off of the font-lock face; I keep this on when I program to get on-the-fly spellchecking of code comments and strings. I keep its correction command bound à la flyspell because it is so darn helpful. Supports checking documents with mixed languages. This is one of the packages I miss most when I’m editing text outside of Emacs.

Citar: The best way to add citations in Emacs, hands-down. Reads BibTeX, inserts in Org-mode, LaTeX, whatever.

These next packages are all by Daniel Mendler. They improve selecting commands, buffers, files, etc. from the minibuffer and in-buffer completion interfaces. These make Emacs insanely ergonomic and excellent. They replace packages like Helm, Ivy/Counsel/Swiper, and Company. In comparison to those packages, Vertico + Consult + Corfu are lighter-weight, faster, less buggy (in my experience; I’ve tried them all!), and work better with other Emacs packages because they follow the default built-in APIs.

Vertico: Lighter-weight, less buggy vertical completing-read interface. Replaces Ivy. Incredibly flexible. Works out-of-the-box with everything that uses the completing-read interface, so you don’t need special packages to make it play nice. Recommend adding Marginalia as well, by the same author, to add extra infos.

Consult: Better than Counsel. The live preview is amazing; I use its commands in place of the built-in buffer switcher and in place of Swiper. Its project-search command is :fire: for searching large projects with instant previewable results. Pairs well with Embark to save results to a buffer.

Corfu: Lightweight completion pop-up library. Pairs well with Cape, by the same author. See also Orderless, which enhances completion matching everywhere from the minibuffer to the Corfu popup.

Vertico + Consult + Orderless + Marginalia + Corfu + Cape + Embark is sometimes called the “minad stack”. Embark and Orderless are both developed by Omar Camarena (oantolin), who frequently collaborates with Daniel Mendler. When I asked Omar on Reddit about the name, Omar replied that “minad stack” is fine; another name they’ve tried for the stack is “iceberg”, which I think is a good name too.
It’s the new hotness—that said, it’s gotten really really stable over the past two years. If you like these packages, consider sponsoring their maintainers! These are some of my favorite open-source projects and I try to support them when I can. If you like these packages, you might like my Emacs Bedrock starter kit which, unlike many other starter kits, is meant to be a no-nonsense no-fluff no-abstraction bare-bones start for you to fork and tinker with to your liking. The stock configuration only installs one package (which-key, which is amazing) but includes some extra example configuration. The extras/base.el file includes sample starter configuration for most of the above packages. (I should add to it, come to think of it…) Eat is not the fastest terminal emulator, Vterm is. Thanks to a Redditor who pointed this out.

0 views
Lambda Land 1 years ago

Boilerplate Busting in Functional Languages

This is the story of how I solved a problem (ugly, cumbersome boilerplate code) that I ran into while writing a program in a functional language (Elixir). Functional programming languages often pride themselves on expressiveness and elegance; but occasionally they are not amenable to the most obvious solutions to the problems we wish to solve. In this case, the simplest solution to my problem would have been to have a global mutable variable. But no one likes those. The solution most programmers would have found obviates mutation, but the code ends up being rather clunky. This inelegance stems from two intertwined issues pulling the code in different directions. However, the two concerns are so intertwined that it can be difficult to see them as separate issues at all! In this blog post, I hope I can show you a new way of looking at this class of problem that will let you see what those two issues are, and how to cleanly split them apart to get nice, maintainable, functional code.

You take your analytic knife, put the point directly on the term Quality and just tap, not hard, gently, and the whole world splits, cleaves, right in two… and the split is clean. There’s no mess. No slop. No little items that could be one way or the other. Not just a skilled break but a very lucky break. Sometimes the best analysts, working with the most obvious lines of cleavage, can tap and get nothing but a pile of trash. And yet here was Quality; a tiny, almost unnoticeable fault line; a line of illogic in our concept of the universe; and you tapped it, and the whole universe came apart, so neatly it was almost unbelievable. Zen and the Art of Motorcycle Maintenance, Robert M. Pirsig

I will use a concrete example to describe my specific problem, though the issue is general enough that you likely have encountered it. After I walk through the example I’ll cover the essential elements that make my solution work, and how it generalizes to similar problems.

Suppose that you are writing an application that lets people track their workout habits. Every time that they succeed in meeting a goal, they register what they accomplished. Now you have a database full of logged events like “went to the gym” or “swam 1000 m” or “ran a mile”, etc. Now you need some way to convert this set of events into reward points—preferably in a way that the user finds motivating. But every user is different, so let’s say that you make it so that users can customize exactly how goal completions translate into reward points. Somewhere in your app you let users write a little equation that your program will then evaluate against the events that they have logged. In the end, the user gets a single point value. This is, in essence, a little interpreter. I will not go over how to build an interpreter here, but the gist of it is that you walk down the AST of the equation and evaluate the leaves and nodes recursively until you wind up with a single number of points at the end.

Now let’s say that you are processing a large number of such requests, and you would like to batch all of the database calls. In the previous example, there were four database queries, one of which was a duplicate. (Each query in the surface syntax, or query node in the AST, induces a database query.) To improve performance, you could batch all of these database calls—thereby also eliminating duplicate queries—and then have this data on hand as you walk the AST.
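To make the setup concrete, here is a hypothetical sketch of what such an equation's AST and a naive evaluator could look like (the node shapes and names are mine, not the post's):

    defmodule Points do
      # Evaluate a user's reward formula, e.g. 2 * gym_visits + swim_sessions.
      def eval({:num, n}), do: n
      def eval({:query, name}), do: run_query(name)   # each of these hits the database
      def eval({:add, a, b}), do: eval(a) + eval(b)
      def eval({:mul, a, b}), do: eval(a) * eval(b)

      defp run_query(_name), do: 1   # stand-in for a real database lookup
    end

    ast = {:add, {:mul, {:num, 2}, {:query, "gym_visits"}}, {:query, "swim_sessions"}}
    Points.eval(ast)   # => 3 with the stubbed query

Every :query node triggers its own database call, which is exactly the cost the batching idea below is trying to avoid.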
So here is the new operation we would like to perform: we want to walk through the AST, collect all of the database calls into a set, and replace each instance of a database query (the query nodes) in the expression with a reference (reference nodes) that we can link to the batched results. We will create a fresh identifier every time we encounter a new query; duplicate queries will use the first reference.

Now we need to implement it. We could just create a variable that we can mutate as we walk down the tree: every time we encounter a node that looks like a query, we generate a fresh identifier (or look up an old one if it’s a duplicate query) and replace the node with a reference. Once we’re done walking the tree, we have a new AST with reference nodes instead of query nodes, and a list of queries that we can execute in one go. This could work, but the fact that we are using global mutable state should ring alarm bells for anyone—not just functional programmers. Whenever we call our transform function, we would have to ensure that we clear out the old list of accumulated information. Don’t forget about all the other problems that global mutable state brings. There must be a better way.

How might our function be more pure? Instead of just returning a modified tree, we can return a tuple of the new AST node plus a list of queries. (I’ll call this specific shape an AST-queries-tuple throughout.) This eliminates the need for a global variable, and now every call to our optimization function is pure. It’s easier to test and reason about. However, this means that we have to take care to combine this information whenever we do a recursive call. It becomes even more cumbersome when we recur over elements in a list and we need to combine all their results together. A well-crafted reduce makes things work OK, but I think you can agree the resulting code isn’t the most straightforward to read and understand what’s going on.

This is quite a bit of boilerplate. It’s not the worst code ever—it’s certainly better than our first solution with a global variable—but we seem to be saying more than we need to here. How can we clean up this code? The code is messy because there are actually two competing concerns here: we have some main computation that we care about (transforming the tree) and some side information (the set of database queries) that we’d like to collect in parallel. If we can separate these concerns, our code will improve.

Now that we see the two intertwined issues, how do we go about separating them? We will still carry around the AST-queries-tuple, but we are going to pull out the logic that governs how we keep track of the list of queries and keep it separate from the AST transformation logic. First, let’s define a module, a type to help us keep track of an AST-queries-tuple, and a function that takes some AST and pairs it with an empty list of queries. Second, the clever bit: we write a function that lets us manipulate the AST value inside the tuple without worrying about how to combine the sets of queries. This function takes an AST-queries-tuple and gives it to a function argument that expects just the AST bit. That function argument should return a new AST-queries-tuple. Our function will then merge the two lists of queries together without the function parameter ever having to worry about it. Now we can use this to write our optimization function!
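A minimal sketch of the helper module just described (the module and function names are mine; the post's own code isn't reproduced):

    defmodule QueryAcc do
      # An AST-queries-tuple: the (possibly transformed) AST plus the queries gathered so far.
      def wrap(ast), do: {ast, []}

      # Hand `fun` just the AST; `fun` returns a new AST-queries-tuple,
      # and we merge its queries with the ones we already collected.
      def bind({ast, queries}, fun) do
        {new_ast, new_queries} = fun.(ast)
        {new_ast, queries ++ new_queries}
      end
    end

The AST-transformation logic only ever sees plain AST nodes; QueryAcc.bind/2 is the single place that knows how the query lists get combined.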
Before we get there, remember that Elixir has a handy set of customizable infix operators that we can use as shorthand. It would be nice if we could use one of them for this… but I’m getting ahead of myself. The point is that instead of writing the nested calls out longhand, we can just write them with an infix operator. You might be able to see now how this would make writing this little optimization pass a lot cleaner.

We can go a step further on the syntax though: with a little metaprogramming imagination, we can write some shorthand for the notation that looks more like ordinary variable assignment. It’s not that hard to do. Now we can write the handler for a node like so, and that gets transformed into the nested anonymous function notation we saw previously. Now we don’t have to think about merging the list of queries any more: the operator handles all that for us.

Bigger savings come if we think about making a version of map that works with our AST-queries-tuples. We’ll give it its own name, since it’s like a map where we’re mashing the results together: here we map over a list of values, collect all the resulting sets of queries, and merge them together. This gives us a big savings when we write something like the function that recurs over a list of child nodes.

Notice that all the handling of the extra information has been lifted out of our code. Not only does it make it clearer what the core intent of the functions is, but we also get some added flexibility around how we structure that extra data. If we wanted to use a map or a set instead of a list of tuples as the second element in the AST-queries-tuple, then with this refactoring we only have to modify the handful of helper functions. The rest of the code can stay the same!

So, with a little bit of work, we’ve gone from a solution using global mutable state 🤢 to passing around an AST-queries-tuple 😐 to abstracting out the tuple entirely, gaining clarity and flexibility along the way. 🤩 Our threading-related functions are actually generic enough that they don’t need to be about ASTs and lists of queries—as long as we are doing some main computation with a little extra data gathering on the side, this pattern should apply. Wouldn’t it be nice if this pattern had a name?

Surprise! This is exactly the writer monad! This whole post has been a monad tutorial in disguise! If you’ve been exposed to monads before, you might recognize our wrapping function as return and our threading function as bind or—as the Haskell programmers like to call it—>>=. The assignment-style macro is just do-notation. (I couldn’t think of a clever name for the map-like function, so I just pretended the “m” stood for mash instead of monad.)

“Monad” is just an interface. There’s a subtle difference between interfaces and typeclasses, and I’ll get to that shortly. This is meant to build intuition. That’s all there is to it. To make your data structure (like our AST-queries-tuple) conform to the Monad interface, you need functions like return and bind. Once you have those, you have a monad. OK, there are certain properties (called the “monad laws”—don’t worry, they’re not that scary even though the name sounds ominous) that these functions need to satisfy, but they’re pretty easy to hit. If you don’t satisfy these laws, your monad won’t behave predictably in certain circumstances. If you’re getting started with monads, don’t worry about it right now. That’s pretty much all there is to it.

There isn’t a fixed number of monads; there is, however, a set of more-or-less “standard” monads which have been discovered to be generally useful; the Haskell Wiki has a nice list here. Among these is the “maybe” monad, which lets you focus on the happy-path of computation and abstracts away the failure path.
In Elixir, you can see this pattern with the with idiom. (The <- notation commonly seen in this idiom closely follows Haskell’s do-notation.) There are many other useful monads besides the writer and maybe monads. Some of these (like the IO monad) are pretty specific to Haskell and other pure functional languages that don’t support the same kinds of control flow or constructs; others have wider application.

Most of the value (I think) of monads is not having return and bind, but all the helper functions built around them, do-notation and friends. While I was writing some Haskell, I got to know all the monad-related functions and how useful they were. These helper functions are what make programming with monads natural and powerful. The core functions return and bind are all you “need” to make a monad, but no one would actually program with just those. If you ever find something that you think would work well modeled as a monad, be sure to implement additional functions beyond return and bind. You can see a list of functions Haskell implements for monads here if you want some inspiration.

You hear a lot about monads with languages like Haskell, but not so much with other functional languages like Elixir or Rust. Part of this is need, and part is because of ergonomics. Haskell needs monads to implement effectful computation. Effects include exceptions (modeled by the maybe monad) or logging information (the writer monad). Languages that have these effects natively don’t strictly need these monads. (Though, as we’ve seen, writing a monad can help other languages, even when they have uncontrolled side-effects, like Elixir.) Haskell makes using monads ergonomic through its typeclass mechanism. Other languages have more constrained method dispatching mechanisms, so you have to jump through some hoops to get monads to work as seamlessly as they do in Haskell.

If you’re familiar with an OO language, you’ve almost certainly come across the idea of an interface: it’s just a specification of methods an object needs to implement. Typeclasses are similar: they specify a set of functions needed for a type to belong to a typeclass. There are a few key differences between typeclasses and interfaces however:

Interfaces are closed, meaning, if an object doesn’t implement the interface, you can’t do anything about it, unless you modify the definition of the object itself. In contrast, typeclasses are open, meaning that I can implement the requisite functions to turn a datatype into a monad, even if I can’t modify the definition of the datatype itself.

Interfaces specify methods that dispatch on their object. If I call a method on an object, the method gets looked up in whatever class that object belongs to. Typeclass functions can dispatch on the return type of the function. For example, the return (i.e. pure) function needs to dispatch on whatever type it’s expected to return. If I wrote an expression whose result was expected to be, say, a Maybe, return would dispatch to the version specified for the Maybe type; in a list context, it would use the list’s implementation. This makes the monad functions really generic; with a return that can dispatch on the expected return type, you can write monadic code without thinking much about which monad exactly you’re using.

In languages that don’t have typeclasses, you need to take special steps to ensure that you dispatch to the proper variant of the monad. Racket has a monad library that works via a generics interface system, plus a few tricks to teach its equivalent of return what type it should return. I am sure the same tricks would apply to Elixir. Indeed, Elixir has protocols, which are like interfaces, but they are open.
They still dispatch on the shape of the first argument passed to them, though, so you would need to pull a trick like the Racket library does and pass an argument to ignore, just to get the dispatch right. Elixir has less need for monads than Haskell because its functions are impure. (A function can do arbitrary IO, send messages, throw exceptions, etc.) But there are still cases (as we have seen) where a monad can make life easier. Consider using a monad library the next time you need to avoid ugly side-effects!

Functional languages are not immune to sprouting boilerplate. And while most of the design patterns in a certain OO cookbook are “invisible or simpler” in functional languages (quote from Peter Norvig; see: https://norvig.com/design-patterns/design-patterns.pdf ), some patterns crop up when similar problems arise. Monads are a powerful tool for dividing the essential from the incidental in a program. Exactly what constitutes the essential versus incidental parts—along with how to separate them—can be tricky to see at first. I think this is because separating these concerns in mainstream functional languages gets less visibility, and not because of any inherent difficulty of the problem. Perhaps if everyone started out programming by learning Racket and got comfortable with functional idioms, monads would be as natural as the Visitor pattern. Certainly it would be more comfortable!

I was surprised and delighted when a monadic solution appeared as the most natural solution to a problem I was working on. Now, you might say that’s because I work as a programming languages researcher. However, the last two times I was working in industry, we had some sort of language interpreter, and I had to walk an AST. Knowing the writer monad would have saved me a lot of time and effort. I hope you can start seeing some monadic patterns in your code, and that you’ll be able to make a monad to make it easier to reason about and refactor your code as well.

Thanks to Scott Wiersdorf for the initial impetus to write this, as well as some thoughtful feedback on the prose and outline. Thanks also to Mark Ericksen for some additional comments.

https://wiki.haskell.org/All_About_Monads
https://learnyouahaskell.com/a-fistful-of-monads

0 views
Lambda Land 1 years ago

Towards Fearless Macros

Macros are tricky beasts. Most languages—if they have macros at all—usually include a huge “here there be dragons” warning to warn curious would-be macro programmers of the dangers that lurk ahead. What is it about macros that makes them so dangerous and unwieldy? That’s difficult to answer in general: there are many different macro systems with varying degrees of ease-of-use. Moreover, making macros easy to use safely is an open area of research—most languages that have macros don’t have the features necessary to implement macros safely. Hence, most people steer clear of macros. There are many ways to characterize macro systems; I won’t attempt to cover them all here, but here’s the spectrum I’ll be covering in this post:

Figure 1: A spectrum of how easy macro systems are to use safely

If you’ve done any C programming, you’ve likely run into things like: That bit is a macro—albeit a C macro. These operate just after the lexer: they work on token streams. It’s a bit like textual search-and-replace, though it knows a little bit about the structure of the language (not much: just what’s a token and what’s not), so you won’t run into problems with text that sits inside a string, because that text is not a token—it’s just part of a string. C macros can’t do very much: you scan the token stream for a macro, then fill in the variables to the macro, and then replace the macro and the arguments it has consumed with the filled-out template that is the macro definition. This prevents you from doing silly things like replacing something sitting inside of a string literal, but it’s far, far from being safe, as we’ll see in the next section.

In contrast to C’s macros, Lisp’s macros are much more powerful. Lisp macros operate after the lexer and the parser have had a go at the source code—Lisp macros operate on abstract syntax trees, or ASTs, which is what the compiler or interpreter works with. Why is this a big deal? The ASTs capture the language’s semantics around precedence, for instance. In C you can write a macro that does unexpected things, like this: the macro didn’t know anything about precedence and we computed the wrong thing. This means that, to use a macro in C, you have to have a good idea of how it’s doing what it’s intended to do. That means C macros are leaky abstractions that prevent local reasoning: you have to consider both the macro definition and where it’s used to understand what’s going on.

In contrast, Lisp macros are an improvement because they will rewrite the AST and the precedence you’d expect will be preserved. You can do this, for example. Lisp macros are also procedural macros, meaning you can execute arbitrary code inside of a macro to generate new ASTs. Macros in Lisp and its descendants are essentially functions from AST → AST. This opens up a whole world of exciting possibilities! Procedural macros constitute a “lightweight compiler API”. [ 4 ]

(“Same except for variable names” is also called alpha-equivalence. This comes from the λ-calculus, which states that the particular choice of variable name should not matter. E.g. \(\lambda x.x\) and \(\lambda y.y\) are the same function in the lambda calculus, just as \(f(x) = x + 2\) and \(g(y) = y + 2\) are the same function in algebra.)

Lisp macros aren’t without danger—many a Lisp programmer has shot their foot off with a macro. One reason is that Lisp macros are not hygienic—variables in the macro’s implementation may leak into the context of the macro call.
This means that two Lisp programs that are the same except for different variable names can behave differently: The fact that the macro implementation uses a variable with a particular name (tmp-leaky) has leaked through to the user of the macro (tmp-capture). This phenomenon is called variable capture, and it exposes this macro as a leaky abstraction! There are ways to mitigate this using , but those are error-prone manual techniques. It makes macro writing feel like you're writing in an unsafe, lower-level language.

Scheme's macros introduce a concept known as hygiene, which prevents variable capture automatically: In this case, the variable that the macro introduces (tmp-intro-macro) is not the same thing that the variable from the calling context (tmp-intro-let) refers to. This separation of scopes happens automatically behind the scenes, so there's no chance of accidental variable capture. Breaking hygiene does have its uses—for example, one might want to add a form inside the body of a loop. There are ways around hygiene, but they are not without problems; for more details see [2]. If you'd like to know more about hygiene, [1] is an excellent resource.

Since Scheme macros (and Lisp macros more generally) allow running arbitrary Scheme code—including code from other modules—the dependency graph between modules can get so tangled that clean builds of a Scheme codebase become impossible. Racket solves this problem with its phase separation, which puts clear delimiters between when functions and macros are available to different parts of the language. This detangles dependency graphs without sacrificing the expressive power of macros. I wrote a little bit about phase separation; you can read more in the Racket docs as well as Matthew Flatt's paper [3] on phase separation. Racket also has a robust system, called scope sets, for reasoning about where a variable's definition comes from; this notion makes reasoning about where variables are bound sensible. See a blog post as well as the paper "Binding as Sets of Scopes" by Matthew Flatt for more on scope sets.

Phase separation and scope sets make Racket macros the safest to use: Racket macros compose sensibly and hide their implementation details, so it is easy to write macros that can be used as if they were built-in language constructs. Racket also goes beyond the form that it inherited from Scheme; Racket's macro-building system makes generating good error messages easy. There's a little bug in the macro we used earlier: the form only takes an identifier (i.e. a variable) as its first argument. We don't have any error checking inside the macro; if we were to call it with something that wasn't an identifier, we'd get an error in terms of what the macro expands to, not the macro call itself: This isn't good, because the form the error complains about doesn't appear in our code at all! We could add some error handling in our macro to manually check that and are identifiers, but that's a little tedious. Racket's helps us out: Much better! Now our error is in terms that the macro user will recognize. There are lots of other things it can do that make it easy to write correct macros that generate good error messages—a must for macros that become part of a library.

Many modern languages use macros; I'll only talk about a few more here. If something's missing, that's probably because I didn't want to be exhaustive.
Julia macros have a lot of nice things: they operate on ASTs and they're hygienic, though the way hygiene is currently implemented is a little strange: all variables get 'd automatically—meaning they all get replaced with some generated symbol that won't clash with any possible variable or function name—whether they come from inside the macro or originated from the calling code. Part of the problem is that all variables are represented as simple symbols, which [1] shows is insufficient to properly implement hygiene. Evidently there is some ongoing work to improve the situation. This is a good example of research ideas percolating into industry languages, I think.

Elixir has robust AST macros, and its standard library makes heavy use of them; many "core" Elixir constructs are actually macros that expand to smaller units of Elixir. Elixir actually gets hygiene right! Unlike Julia, variables in Elixir's AST have metadata—including scope information—attached to them. This and other aspects of Elixir's macro system open it up to lots of exciting possibilities. The Nx library brings support for numerical and GPU programming to Elixir, and it works essentially by implementing a custom Elixir compiler in Elixir itself; macros play a big role in this. (Me thinking that Elixir is a big mainstream language should tell you something about the languages I spend my time with in my job as a PhD student.) I think Elixir macros are really neat—they're the most powerful I've seen in a "big mainstream" language.

Rust supports two kinds of macros: macros-by-example and procedural macros. Macros-by-example are a simple pattern-to-pattern transformation. Here's an example from The Rust Book: This macro takes a pattern like and expands it to a pattern like . Notice how the marks a part of the template that can be repeated; this is akin to Racket or Scheme's repetition form. Macros-by-example work on the AST, but you can't perform arbitrary computation on the AST. For that, you need procedural macros. Rust's procedural macros (called "proc macros") work on a token stream, and you can perform arbitrary computation, which puts them in a funny middle ground between C and Lisp. There is a Rust crate that you can use to parse a Rust token stream into Rust AST, but you don't get any nice source information from the AST nodes, which makes producing good error messages a challenge. I personally find Rust macros to be disappointing.

There's a wide variety of macro systems. The best macro systems:
- Operate on the AST rather than on a stream of tokens
- Avoid leaking implementation details through inadvertent variable capture by being hygienic
- Produce good error messages that are in terms of the caller's context
- (Bonus) Have good phase separation to enforce clear separation between complex macro systems
Different languages have different features in their macro systems; some languages make it easy to use macros sensibly, while for others macros are a formidable challenge to use properly—make sure you know what your language provides and the trade-offs involved.

It turns out you can do a lot with functions. Powerful functional programming languages let you do so much with first-class functions. If you can get access to first-class continuations, as you can in Racket and Scheme, you can create powerful new programming constructs without having to resort to macros. I came across the JuliaCon 2019 keynote talk where Steven Johnson explains how many of the things you can do with macros can be handled just with Julia's type dispatch.
If you can do something with functions, you probably should: functions are first-class values in most languages these days, and you'll enjoy increased composability, better error messages, and code that is easier for your peers to read and understand.

Macros introduce little languages wherever you use them. For simple macros, you might not have any constraints on what you may write under the scope of a macro. As an example, consider a macro that adds a loop construct to a language by rewriting it to another kind of looping mechanism: you shouldn't have any restriction on what you can write inside the body of the loop. However, more complex macros can impose more restrictions on what can and cannot be written under their lexical extent, and these restrictions may or may not be obvious. For example, accidental variable capture limits what can be safely written, and grammatical errors (e.g. using an expression where an identifier was expected) can lead to inscrutable errors. Better macro systems mitigate these problems. It's not enough to just have a macro system that uses ASTs; you need a macro system that makes it easy to write correct macros with clear error messages, so they truly feel like natural extensions of the language. Few languages do this right.

Macro systems have improved since the 1960s. While Lisp excluded many of the pitfalls of C macros by construction, you still had to use kluges like to manually avoid variable capture. Scheme got rid of that with hygienic macros, and Racket improved matters further with scope sets and phase separation. It is now so much easier to build robust macro-based abstractions.

Macros are good—anyone can write macros and experiment with new syntactic constructs. Development and extension of the language is no longer the sole domain of the language designer and maintainer: library authors can experiment with different approaches to various problems. We see this a lot with Elixir: Elixir's core language is really rather small; most of the magic powering popular libraries like Ecto or Phoenix comes from a choice set of macro abstractions. These and other libraries are free to experiment with novel syntax without fear of cluttering and coupling the core language with bad abstractions that would then need to be maintained in perpetuity. Macros can be powerful when used correctly—something made much easier by modern macro systems.
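To ground the Elixir points above with something runnable, here is a minimal sketch of a macro in the spirit of Elixir's own control-flow constructs. The names (MacroSketch, my_unless, double) are hypothetical examples, not the actual standard-library implementations, but the hygiene behaviour they demonstrate is what the post describes:

```elixir
defmodule MacroSketch do
  # A control-flow macro that expands to a smaller unit of Elixir (an `if`).
  # This is a hypothetical stand-in, not Kernel's real implementation.
  defmacro my_unless(condition, do: block) do
    quote do
      if unquote(condition) do
        nil
      else
        unquote(block)
      end
    end
  end

  # Hygiene demo: the `tmp` introduced inside the quote lives in the macro's
  # own scope, so it cannot capture or clobber a `tmp` at the call site.
  defmacro double(expr) do
    quote do
      tmp = unquote(expr)
      tmp + tmp
    end
  end
end

defmodule MacroSketchDemo do
  require MacroSketch

  def run do
    tmp = 100
    doubled = MacroSketch.double(5)
    # The caller's `tmp` is untouched by the macro's internal `tmp`.
    {tmp, doubled, MacroSketch.my_unless(false, do: :ran)}
    #=> {100, 10, :ran}
  end
end
```

Because Elixir attaches scope metadata to variables in its AST, the tmp created inside double/1 can never capture the caller's tmp; in a non-hygienic Lisp you would need a gensym-style workaround to get the same guarantee.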

tekin.co.uk 5 years ago

Better Git diff output for Ruby, Python, Elixir, Go and more

The regular Git users amongst you will be familiar with the diff output that breaks down into "hunks" like so: The first line (starting ) is known as the hunk header, and is there to help orientate you within the change. It gives us the line numbers for the change (the numbers between the ), but also a textual description of the enclosing context where the change happened, in this example . Git tries to figure out this enclosing context, whether it's a function, module or class definition. For C-like languages it's pretty good at this. But for the Ruby example above it has failed to show us the immediate context, which is actually a method called . That's because out of the box Git isn't able to recognise the Ruby syntax for a method definition, which would be . What we really want to see is: And it's not just Ruby where Git struggles to figure out the correct enclosing context; many other programming languages and file formats also get short-changed when it comes to the hunk header context. Thankfully, not only is it possible to configure a custom regex specific to your language to help Git better orient itself, there's even a pre-defined set of patterns for many languages and formats right there in Git. All we have to do is tell Git which patterns to use for our file extensions.
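As a sketch of what that wiring looks like (assuming a Git version recent enough to ship these built-in diff drivers; the driver names below are the ones Git defines, and the elixir one is newer than the others), you declare the mapping in a .gitattributes file at the root of your repository:

```
# .gitattributes: map file extensions to Git's built-in hunk-header patterns
*.rb  diff=ruby
*.py  diff=python
*.ex  diff=elixir
*.exs diff=elixir
*.go  diff=golang

# For a format without a built-in driver you can supply your own regex
# (hypothetical driver name "mylang"):
#   git config diff.mylang.xfuncname '^\s*def .*$'
```

Commit the file, and subsequent diffs (including git log -p output) will use the language-aware patterns when computing hunk headers.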

Matthias Endler 5 years ago

What Happened To Programming In The 2010s?

A while ago, I read an article titled "What Happened In The 2010s" by Fred Wilson. The post highlights key changes in technology and business during the last ten years. This inspired me to think about a much narrower topic: What Happened To Programming In The 2010s? 🚓 I probably forgot like 90% of what actually happened. Please don't sue me. My goal is to reflect on the past so that you can better predict the future.

Where To Start? From a mile-high perspective, programming is still the same as a decade ago:
- Punch program into editor
- Feed to compiler (or interpreter)
- Bleep Boop 🤖
- Receive output
But if we take a closer look, a lot has changed around us. Many things we take for granted today didn't exist a decade ago.

What Happened Before? Back in 2009, we wrote jQuery plugins, ran websites on shared hosting services, and uploaded content via FTP. Sometimes code was copy-pasted from dubious forums, tutorials on blogs, or even hand-transcribed from books. Stack Overflow (which launched on 15th of September 2008) was still in its infancy. Version control was done with CVS or SVN—or not at all. I signed up for GitHub on 3rd of January 2010. Nobody had even heard of a Raspberry Pi (which only got released in 2012). (Source: xkcd #2324)

An Explosion Of New Programming Languages. The last decade saw the creation of a vast number of new and exciting programming languages. Crystal, Dart, Elixir, Elm, Go, Julia, Kotlin, Nim, Rust, Swift, and TypeScript all released their first stable version! Even more exciting: all of the above languages are now developed in the open, and the source code is freely available on GitHub. That means everyone can contribute to their development—a big testament to Open Source. Each of those languages introduced new ideas that were not widespread before:
- Strong Type Systems: Kotlin and Swift made optional null types mainstream, TypeScript brought types to JavaScript, and algebraic datatypes are common in Kotlin, Swift, TypeScript, and Rust.
- Interoperability: Dart compiles to JavaScript, Elixir interfaces with Erlang, Kotlin with Java, and Swift with Objective-C.
- Better Performance: Go promoted goroutines and channels for easier concurrency and impressed with a sub-millisecond garbage collector, while Rust avoids garbage-collector overhead altogether thanks to ownership and borrowing.
This is just a short list, but innovation in the programming-language field has greatly accelerated.

More Innovation in Older Languages. Established languages didn't stand still either. A few examples: C++ woke up from its long winter sleep and released C++11 after its last major release in 1998. It introduced numerous new features like lambdas, smart pointers, and range-based loops. At the beginning of the last decade, the latest PHP version was 5.3. We're at 7.4 now. (We skipped 6.0, but I'm not ready to talk about it yet.) Along the way, it got over twice as fast. PHP is a truly modern programming language now with a thriving ecosystem. Heck, even Visual Basic has tuples now. (Sorry, I couldn't resist.)

Faster Release Cycles. Most languages adopted a quicker release cycle. Here's a list for some popular languages:

The Slow Death Of Null. Close to the end of the previous decade, in a talk from 25th of August 2009, Tony Hoare described the null pointer as his Billion Dollar Mistake. A study by the Chromium project found that 70% of their serious security bugs were memory safety problems (same for Microsoft). Fortunately, the notion that our memory-safety problem isn't caused by bad coders has finally gained some traction. Many mainstream languages embraced safer alternatives to null: nullable types, option types, and result types. Languages like Haskell had these features before, but they only gained popularity in the 2010s.

Revenge of the Type System. Closely related is the debate about type systems. The past decade has seen type systems make a comeback: TypeScript, Python, and PHP (just to name a few) started to embrace them. The trend goes towards type inference: add types to make your intent clearer for other humans and in the face of ambiguity—otherwise, skip them.
Java, C++, Go, Kotlin, Swift, and Rust are popular examples with type-inference support. I can only speak for myself, but I think writing Java has become a lot more ergonomic in the last few years.

Exponential Growth Of Libraries and Frameworks. As of today, npm hosts 1,330,634 packages. That's over a million packages that somebody else is maintaining for you. Add another 160,488 Ruby gems, 243,984 Python projects, and top it off with 42,547 Rust crates. (Chart: number of packages for popular programming languages; don't ask me what happened to npm in 2019. Source: Module Counts) Of course, there's the occasional leftpad, but all of this also means we have to write less library code ourselves and can focus on business value instead. On the other hand, there are more potential points of failure, and auditing is difficult. There is also a large number of outdated packages. For a more in-depth discussion, I recommend the Census II report by the Linux Foundation & Harvard [PDF]. We also went a bit crazy on frontend frameworks:
- Angular in 2010
- React in 2013
- Vue in 2014
- Svelte in 2016
- …and soon Yew?

No Free Lunch. A review like this wouldn't be complete without taking a peek at Moore's Law. It has held up surprisingly well in the last decade (chart source: Wikipedia). There's a catch, though: looking at single-core performance, the curve is flattening (chart source: Stanford University, The Future of Computing (video)). The new transistors prophesied by Moore don't make our CPUs faster but instead add other kinds of processing capabilities like more parallelism or hardware encryption. There is no free lunch anymore. Engineers have to find new ways of making their applications faster, e.g. by embracing concurrent execution. Callbacks, coroutines, and eventually async/await are becoming industry standards. GPUs (Graphics Processing Units) became very powerful, allowing for massively parallel computations, which caused a renaissance of machine learning for practical use-cases: "Deep learning becomes feasible, which leads to machine learning becoming integral to many widely used software services and applications." — Timeline of Machine Learning on Wikipedia. Compute is ubiquitous, so in most cases energy efficiency now plays a more prominent role than raw performance (at least for consumer devices).

Unlikely Twists Of Fate. Microsoft is a cool kid now. It acquired GitHub, announced the Windows Subsystem for Linux (which should really be called the Linux Subsystem for Windows), and open sourced MS-DOS and .NET. Even the Microsoft Calculator is now open source. IBM acquired Red Hat. Linus Torvalds apologized for his behavior and took time off. Open source became the default for software development (?).

Learnings. If you're now thinking "Matthias, you totally forgot X", then I brought that point home. This is not even close to everything that happened; you'd roughly need a decade to talk about all of it. Personally, I'm excited about the next ten years. Software is eating the world—at an ever-faster pace.
