
Custom for designing, off-the-shelf for shipping

As software engineers, we're paid to solve problems. Usually we do this by taking a bunch of different pieces and putting them together to solve the problem. Maybe you mix together a database, a queue, a web framework, and some business logic. Or maybe you design a new storage engine, your own web framework, and a custom cache. It's an engineering question to determine which way is the right way. Should you build custom things? Or should you use off-the-shelf existing pieces? There is no general answer for that, of course. It's dependent on your situation. But there is a pattern that I've found helpful for problem-solving which balances the two approaches. You use as many custom components as you like for designing a solution, and then you use (mostly) off-the-shelf components for what you're going to ship.

This technique helps a lot when you aren't sure what the solution will look like. If you try to design a solution using off-the-shelf components here, you may run into a couple of problems. First is that it's just a weird solution space, and you have to move in pre-defined step sizes. Second, though, is that you might need a custom component. If you need just one, when you're designing with off-the-shelf components, how will you realize that? And how will you know where that one custom component should go? Besides, how do you even know that any solution is possible?

Designing with custom components allows you to get an existence proof [1] that the problem is solvable. You don't need to worry about a good solution, one that's viable. You first need to worry about any solution, as complicated as you like. If you can put together all these custom things you know you could build, then you know that some path exists! Custom components are also really helpful when a part exists but you don't know it exists. You have a problem, and you know that it would help you to have something that can solve it. If you haven't solved that problem before, how would you know that a solution does exist? You look at it with custom components, and then that can lead you toward discovering the existing components.

After you've designed a solution, you move on to refining the solution. It's ideal if you can build something entirely using off-the-shelf components, because those exist! You don't have to reinvent them, and it's probably cheaper to use them [2]. You can look at each of your custom components and start to ask: why is this custom? Maybe it needs to be, but more often, what it's solving is similar to something someone else has solved. You can look for those related tools and related problems, and find those solutions. Then slide a solution in, in place of your custom component!

Occasionally, you will find that no off-the-shelf component exists which is equivalent to your custom one. Then you start to ask: okay, did I solve this problem wrong? I begin with the assumption that I'm not in a totally unique problem space. Then I look at how other people solved this problem, or why they didn't have to. What is it that people are doing that's different here, so they don't run into it? And then if you get through that, you may actually have a unique new problem! And you get to keep that custom component in your design. But along the way, you moved a lot of custom work into using off-the-shelf components. Take a break, give yourself a pat on the back, then get back to having fun building!
Despite doing most of my problem solving in software, the example I want to share is a physical one (which inspired this post). I recently rebuilt my ergonomic setup again, my fifth iteration. This time, it is made of mostly off-the-shelf parts with one custom part.

The first version was very much an existence proof that a way existed to use my laptop "portably" with my ergonomic keyboard. It wasn't great, and it didn't solve portability really, but it gestured at the solutions. My second version was the real existence proof, and it went almost fully custom. Off-the-shelf, I used a tripod z tilt head, and I made the tray and laptop holder myself (I'm not counting the keyboard or laptop, since I'm building a solution around them). My third version used only custom components, and showed me it's possible! The fourth version used more custom components in a different arrangement.

And now my new version? It's mostly off-the-shelf components. I didn't know most of these existed at all when I started on the rig a couple of years ago. Instead of a tray with grooves in it for my keyboard to slide on, or velcro to hold it down, I got camera equipment! It uses two camera rods, two mounting blocks for those, two z-mount tripod heads, a "magic arm", and a tripod laptop holder tray. And a small custom component: a little wood piece that keeps it balanced, essentially outriggers.

It's probably not done, though. There will surely be a sixth iteration. If nothing else, I want to replace the laptop holder. This one is a little too heavy; the rig as a whole is serviceable, but it would be nice to make it a little lighter for travel.

So go forth and build things. Feel free to use as many custom components as you want when designing, but then think about whether you can do it with off-the-shelf components when you want to be able to ship it. Of course, if my boss is reading this, reverse all the advice. You know I need to build us a new database in Rust. C'mon.

[1] My math degree is leaking out. An existence proof shows that something exists, ideally by providing a concrete example or giving a way to produce such a thing. I've found this concept very helpful in designing software, because you can break problem solving into two phases: first prove it's possible, then find a good solution. ↩

[2] Large companies will often end up building their own custom versions of off-the-shelf components, once there is enough scale for it to work out. One example of this is Apple. They used Intel's CPUs for a long time, and then eventually designed their own once it made enough sense. Smaller companies generally cannot make this work. ↩


Visualizing distributions with pepperoni pizza (and javascript)

There's a pizza shop near me that serves a normal pizza. I mean, they distribute the toppings in a normal way. They're not uniform at all. The toppings are random, but not the way I want.

The colloquial understanding of "random" is kind of the Platonic ideal of a pizza: slightly chaotic, but things are more or less spread out over the whole piece in a regular way. If you take a slice you'll get more or less the same amount of pepperoni as any other slice. And every bite will have roughly the same amount of pepperoni as every other bite. I think it would look something like this. Regenerate this pie!

This pizza to me is pretty much the canonical mental pizza. It looks pretty random, but you know what you're gonna get. And it is random! Here's how we made it, with the visualization part glossed over. First, we make a helper function, since Math.random() gives us values from 0 to 1, but we want values from -1 to 1. Then, we make a simple function that gives us the coordinates of where to put a pepperoni piece, from the uniform distribution. And we cap it off with placing 300 fresh pieces of pepperoni on this pie, before we send it into the oven. (It's an outrageous amount of very small pepperoni, chosen in both axes for ease of visualizing the distribution rather than realism.)

But it's not what my local pizza shop's pizzas look like. That's because they're not using the same probability distribution. This pizza is using a uniform distribution. That means that for any given pepperoni, every single position on the pizza is equally likely for it to land on. We are using a uniform distribution here, but there are plenty of other distributions we could use as well. One of the other most familiar distributions is the normal distribution. This is the distribution that has the normal "bell curve" that we are used to seeing. And this is probably what people are talking about most of the time when they talk about how many standard deviations something is away from something else.

So what would it look like if we did a normal distribution on a pizza? The very first thing we need to answer that is a way of getting values from the normal distribution. This isn't included with JavaScript by default, but we can implement it pretty simply using the Box-Muller transform. This might be a scary name, but it's really easy to use. It's a way of generating numbers in the normal distribution using numbers sampled from the uniform distribution. We can implement it like this. Then we can make a pretty simple function again which gives us coordinates for where to place pepperoni in this distribution. The only little weird thing here is that I scale the radius down by a factor of 3. Without this, the pizza ends up a little bit indistinguishable from the uniform distribution, but the scaling is arbitrary and you can do whatever you want. And then once again we cap it off with a 300 piece pepperoni pizza. Regenerate this pie!

Ouch. It's not my platonic ideal of a pizza, that's for sure. It also looks closer to the pizzas my local shop serves, but it's missing something... See, this one is centered around, you know, the center. Theirs are not that. They're more chaotic, with a few handfuls of toppings. What if we did the normal distribution, but multiple times, with different centers?

First we have to update our position picking function to accept a center for the cluster. We'll do this by passing in the center and generating coordinates around it, while still checking that we're within the bounds of the circle formed by the crust of the pizza. And then instead of one single loop for all 300 pieces, we can do 3 loops of 100 pieces each, with different (randomly chosen) centers for each.
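Here's the whole progression in one JavaScript sketch, with the visualization still glossed over. The function names and the rejection-sampling approach are my assumptions, not necessarily what the original code did:

    // Math.random() gives values in [0, 1); stretch them to [-1, 1).
    function randSigned() {
      return Math.random() * 2 - 1;
    }

    // Uniform placement: sample points in the square and keep the first
    // one that lands inside the unit circle that is our pizza.
    function uniformPosition() {
      while (true) {
        const x = randSigned();
        const y = randSigned();
        if (x * x + y * y <= 1) {
          return { x, y };
        }
      }
    }

    // Box-Muller: turn two uniform samples into one normally
    // distributed sample (mean 0, standard deviation 1).
    function randNormal() {
      const u1 = 1 - Math.random(); // avoid log(0)
      const u2 = Math.random();
      return Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
    }

    // Normal placement around a center, scaled down by 3 so the cluster
    // is visible, re-rolling anything that lands off the pizza.
    function normalPosition(center = { x: 0, y: 0 }) {
      while (true) {
        const x = center.x + randNormal() / 3;
        const y = center.y + randNormal() / 3;
        if (x * x + y * y <= 1) {
          return { x, y };
        }
      }
    }

    // 300 fresh pieces of pepperoni, uniform edition...
    const uniformPie = Array.from({ length: 300 }, () => uniformPosition());

    // ...the centered normal edition...
    const normalPie = Array.from({ length: 300 }, () => normalPosition());

    // ...and the clustered edition: 3 handfuls of 100 pieces, each
    // around a randomly chosen center.
    const clusteredPie = [];
    for (let i = 0; i < 3; i++) {
      const center = uniformPosition();
      for (let j = 0; j < 100; j++) {
        clusteredPie.push(normalPosition(center));
      }
    }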
Regenerate this pie!

That looks more like it. Well, probably. This one is more chaotic, and sometimes things work out okay, but other times they're weird. Just like the real pizzas. Click that "regenerate" button a few times to see a few examples!

So, this is all great. But when would we want this? I mean, first of all, boring. We don't need a reason except that it's fun! But there's one valid use case that a medical professional and I came up with [1]: hot honey [2]. The ideal pepperoni pizza just might be one that has uniformly distributed pepperoni with normally distributed hot honey or hot sauce. You'd start with more intense heat, then it would taper off as you go toward the crust, so you maintain the heat without getting overwhelmed by it.

The room to play here is endless! We can come up with a lot of other fun distributions and map them in similar ways. Unfortunately, we probably can't make a Poisson pizza, since that's a distribution for discrete variables.

[1] I really do talk about weird things with all my medical providers. And everyone else I meet. I don't know, life's too short to go "hey, this is a professional interaction, let's not chatter on and on about whatever irrelevant topic is on our mind." ↩

[2] The pizza topping, not my pet name. ↩


Covers as a way of learning music and code

When you're just getting started with music, you have so many skills to learn. You have to be able to play your instrument and express yourself through it. You need to know the style you're playing, and its idioms and conventions. You may want to record your music, and need all the skills that come along with that. Music is, mostly, subjective: there's not an objective right or wrong way to do things. And that can make it really hard! Each of these skills is then couched in this subjectivity of trying to see if it's good enough.

Playing someone else's music, making a cover, is great because it can make it objective. It gives you something to check against. When you're playing your own music, you're in charge of the entire thing. You didn't play a wrong note, because, well, you've just changed the piece! But when you play someone else's music, now there's an original, and you can try to get as close to it as possible. Recreating it gives you a lot of practice in figuring out what someone did and how they did it. It also lets you peek into why they did it. Maybe a particular chord voicing is hard for you to play. Okay, let's simplify it and play an easier voicing. How does it sound now? How does it sound with the harder one? Play around with those differences and you start to see the why behind it all.

The same thing holds true for programming. One of my friends is a C++ programmer [1] and he was telling me about how he learned C++ and data structures really well early on: he reimplemented parts of the Boost library. This code makes heavy use of templates, a hard thing in C++. And it provides fundamental data structures with robust implementations and good performance [2]. What he would do is look at the library and pick a slice of it to implement. He'd look at what the API for it is, how it was implemented, what it was doing under the hood. Then he'd go ahead and try to do it himself, without any copy-pasting and without real-time copying from the other screen.

Sometimes, he'd run into things which didn't make sense. Why is this a doubly-linked list here, when it seems a singly-linked list would do just fine? And in those moments, if you can't find a reason? You get to go down that path, make it the singly-linked version, and then find out later: oh, ohhh. Ohhhh, they did that for a reason. It lets you run into some of the hard problems, grapple with them, and understand why the original was written how it was. You get to study with some really strong programmers, by proxy via their codebase. Their code is your tutor and your guide for understanding how to write similar things in the future.

There's a lot of judgment out there about doing original works, and that judgment extends to covers and to reimplementing things that already exist just to learn. So many people have internalized this, and I've heard countless times "I want to make a new project, but everything I think of, someone else has already done!" And to that, I say: do it anyway [3]. If someone else has done it, that's great. That means that you had an idea so good that someone else thought it was a good idea, too. And it means that, because someone else has done it, you have a reference now. You can compare notes, and you can see how they did it, and you can learn.

[1] I'm a recovering C++ programmer myself, and had some unpleasant experiences associated with the language. This friend is a game developer, and his industry is one where C++ makes a lot of sense to use because of the built-up code around it. ↩

[2] He said they're not perfect, but that they're really good and solid, and you know a lot of people thought for a long time about how to do them. You get to follow in their footsteps and benefit from all that hard thinking time. ↩

[3] But: you must always give credit when you are using someone else's work. If you're reimplementing someone else's library, or covering someone's song, don't claim it's your own original invention. ↩


That boolean should probably be something else

One of the first types we learn about is the boolean. It's pretty natural to use, because boolean logic underpins much of modern computing. And yet, it's one of the types we should probably be using a lot less of. In almost every single instance when you use a boolean, it should be something else. The trick is figuring out what "something else" is. Doing this is worth the effort. It tells you a lot about your system, and it will improve your design (even if you end up using a boolean).

There are a few possible types that come up often, hiding as booleans. Let's take a look at each of these, as well as the case where using a boolean does make sense. This isn't exhaustive [1]—there are surely other types that can make sense, too.

A lot of boolean data is representing a temporal event having happened. For example, websites often have you confirm your email. This may be stored as a boolean column in the database. It makes a lot of sense. But you're throwing away data: when the confirmation happened. You can instead store when the user confirmed their email in a nullable column. You can still get the same information by checking whether the column is null. But you also get richer data for other purposes. Maybe you find out down the road that there was a bug in your confirmation process. You can use these timestamps to check which users would be affected by that, based on when their confirmation was stored.

This is the one I've seen discussed the most of all these. We run into it with almost every database we design, after all. You can detect it by asking if an action has to occur for the boolean to change values, and if values can only change one time. If you have both of these, then it really looks like it is a datetime being transformed into a boolean. Store the datetime!

Much of the remaining boolean data indicates either what type something is, or its status. Is a user an admin or not? Check the column! Did that job fail? Check the column! Is the user allowed to take this action? Return a boolean for that, yes or no! These usually make more sense as an enum.

Consider the admin case: this is really a user role, and you should have an enum for it. If it's a boolean, you're going to eventually need more columns, and you'll keep adding on other statuses. Oh, we had users and admins, but now we also need guest users and we need super-admins. With an enum, you can add those easily. And then you can usually use your tooling to make sure that all the new cases are covered in your code. With a boolean, you have to add more booleans, and then you have to make sure you find all the places where the old booleans were used and make sure they handle these new cases, too. Enums help you avoid these bugs.

Job status is one that's pretty clearly an enum as well. If you use booleans, you'll have one for started, one for failed, one for finished, and on and on. Or you could just have one single status field, which is an enum with the various statuses. (Note, though, that you probably do want timestamp fields for each of these events—but you're still best having the status stored explicitly as well.) This begins to resemble a state machine once you store the status, and it means that you can make much cleaner code and analyze things along state transition lines.

And it's not just for storing in a database, either. If you're checking a user's permissions, you often return a boolean for that. In this case, true means the user can do it and false means they can't. Usually. I think. But you can really start to have doubts here, and with any boolean, because the application logic meaning of the value cannot be inferred from the type. Instead, this can be represented as an enum, even when there are just two choices. As a bonus, though, if you use an enum? You can end up with richer information, like returning a reason for a permission check failing. And you are safe for future expansions of the enum, just like with roles.

You can detect when something should be an enum by a proliferation of booleans which are mutually exclusive or depend on one another. You'll see multiple columns which are all changed at the same time. Or you'll see a boolean which is returned and used for a long time. It's important to use enums here to keep your program maintainable and understandable.

But when should we use a boolean? I've mainly run into one case where it makes sense: when you're (temporarily) storing the result of a conditional expression for evaluation. This is in some ways an optimization, either for the computer (reuse a variable [2]) or for the programmer (make it more comprehensible by giving a name to a big conditional) by storing an intermediate value. Here's a contrived example using a boolean as an intermediate value.
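Something like this, in Python, where the names and the publishing rule are my own inventions for illustration:

    from dataclasses import dataclass

    @dataclass
    class User:
        role: str              # "admin", "editor", "member": wants to be an enum
        email_confirmed: bool  # wants to be a confirmed-at timestamp
        banned: bool
        age_days: int

    def can_publish(user: User) -> bool:
        # The boolean earns its keep by giving a name to a big conditional.
        is_trusted = (
            user.email_confirmed and not user.banned and user.age_days > 30
        )
        # The rest of this stringly, boolean-y logic wants to be a match on
        # an enum instead.
        if user.role in ("admin", "editor"):
            return True
        if user.role == "member":
            return is_trusted
        return False

    print(can_publish(User("member", True, False, 60)))  # True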
But even here in this contrived example, some enums would make more sense. I'd keep the boolean, probably, simply to give a name to what we're calculating. But the rest of it should be a match on an enum!

Sure, not every boolean should go away. There's probably no single rule in software design that is always true. But we should be paying a lot more attention to booleans. They're sneaky. They feel like they make sense for our data, but they make sense for our logic. The data is usually something different underneath. By storing a boolean as our data, we're coupling that data tightly to our application logic. Instead, we should remain critical and ask what data the boolean depends on, and whether we should maybe store that instead. It comes easier with practice. Really, all good design does. A little thinking up front saves you a lot of time in the long run.

[1] I know that using an em-dash is treated as a sign of using LLMs. LLMs are never used for my writing. I just really like em-dashes and have a dedicated key for them on one of my keyboard layers. ↩

[2] This one is probably best left to the compiler. ↩


Proving that every program halts

One of the best known hard problems in computer science is the halting problem. In fact, it's widely thought [1] that you cannot write a program that will, for any arbitrary program as input, tell you correctly whether or not it will terminate. This is written from the framing of computers, though: can we do better with a human in the loop? It turns out, we can. And we can use a method that's generalizable, which many people can follow for many problems. Not everyone can use the method; you'll see why in a bit. But lots of people can apply this proof technique. Let's get started.

We'll start by formalizing what we're talking about, just a little bit. I'm not going to give the full formal proof—that will be reserved for when this is submitted to a prestigious conference next year. We will call the set of all programs P. We want to answer, for any p in P, whether or not p will eventually halt. We will call this halts(p), and halts(p) = true if p eventually finishes and halts(p) = false otherwise. Actually, scratch that. Let's simplify it and just say that yes, every program does halt eventually, so halts(p) = true for all p. That makes our lives easier.

Now we need to get from our starting assumptions, the world of logic we live in, to the truth of our statement. We'll call our goal, that halts(p) = true for all p in P, the statement G. Now let's start with some facts.

Fact one: I think it's always an appropriate time to play the saxophone. *honk*!

Fact two: My wife thinks that it's sometimes inappropriate to play the saxophone, such as when it's "time for bed" or "I was in the middle of a sentence!" [2]

We'll give the statement "It's always an appropriate time to play the saxophone" the name S. We know that I believe S is true. And my wife believes that S is false. So now we run into the snag:

Fact three: The wife is always right.

This is a truism in American culture, useful for settling debates. It's also useful here for solving major problems in computer science because, babe, we're both the wife. We're both right! So now that we're both right, we know that S and not-S are both true. And we're in luck, we can apply a whole lot of fancy classical logic here.

Since we know that S is true, and we also know that not-S is true: from S being true, we can conclude that (S or G) is true. And then we can apply disjunctive syllogism [3], which says that if (S or G) is true and not-S is true, then G must be true. This makes sense, because if you've excluded one possibility then the other must be true. And we do have not-S, so that means: G is true!

There we have it. We've proved our proposition, G, which says that for any program p, p will eventually halt.

The previous logic is, mostly, sound. It uses the principle of explosion, though I prefer to call it "proof by married lesbian." Of course, we know that this is wrong. It falls apart with our assumptions. We built the system on contradictory assumptions to begin with, and this is something we avoid in logic [4]. If we allow contradictions, then we can prove truly anything. I could have also proved (by married lesbian) that no program will terminate.
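For the skeptics, here's the heart of it formalized in Lean 4. The formalization is mine, not from the post, and Lean happily accepts it, provided you can actually supply both hS and hnS:

    -- From S and not-S, conclude anything at all, including G
    -- ("every program halts"). This is the principle of explosion.
    theorem proof_by_married_lesbian (S G : Prop) (hS : S) (hnS : ¬S) : G :=
      absurd hS hnS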
This has been a silly traipse through logic. If you want a good journey through logic, I'd recommend Hillel Wayne's Logic for Programmers. I'm sure that, after reading it, you'll find absolutely no flaws in my logic here. After all, I'm the wife, so I'm always right.

[1] It's widely thought because it's true, but we don't have to let that keep us from a good time. ↩

[2] I fact checked this with her, and she does indeed hold this belief. ↩

[3] I had to look this up, my uni logic class was a long time ago. ↩

[4] The real conclusion to draw is that, because of proof by contradiction, it's certainly not true that the wife is always right. Proved that one via married lesbians having arguments. Or maybe gay relationships are always magical and happy and everyone lives happily ever after, who knows. ↩


Taking a break

I've been publishing at least one blog post every week on this blog for about 2.5 years. I kept it up even when I was very sick last year with Lyme disease. It's time for me to take a break and reset. This is the right time, because the world is very difficult for me to move through right now and I'm just burnt out. I need to focus my energy on things that give me energy and right now, that's not writing and that's not tech. I'll come back to this, and it might look a little different. This is my last post for at least a month. It might be longer, if I still need more time, but I won't return before the end of May. I know I need at least that long to heal, and I also need that time to focus on music. I plan to play a set at West Philly Porchfest , so this whole month I'll be prepping that set. If you want to follow along with my music, you can find it on my bandcamp (only one track, but I'll post demos of the others that I prepare for Porchfest as they come together). And if you want to reach out, my inbox is open. Be kind to yourself. Stay well, drink some water. See you in a while.


Measuring my Framework laptop's performance in 3 positions

A few months ago, I was talking with a friend about my ergonomic setup and they asked if being vertical helps it with cooling. I wasn't sure, because it seems like it could help, but it was probably such a small difference that it wouldn't matter. So, I did what any self-respecting nerd would do: I procrastinated. The question didn't leave me, though, so after those months passed, I did the second thing any self-respecting nerd would do: benchmarks.

What we want to find out is whether or not the position of the laptop would affect its CPU performance. I wanted to measure it in three positions: closed (lid shut), normal (open, like you'd usually use it), and vertical (on its side in my ergonomic tray). My hypothesis was that using it closed would slightly reduce CPU performance, and that using it normal or vertical would be roughly the same.

For this experiment, I'm using my personal laptop. It's one of the early Framework laptops (2nd batch of shipments), which is about four years old. It has an 11th gen Intel CPU in it, the i7-1165G7. My laptop will be sitting on a laptop riser for the closed and normal positions, and it will be sitting in my ergonomic tray for the vertical one. For all three, it will be connected to the same set of peripherals through a single USB-C cable, and the internal display is disabled for all three.

I'm not too interested in the initial boost clock. I'm more interested in what clock speeds we can sustain. What happens under a sustained, heavy load, when we hit a saturation point and can't shed any more heat? To test that, I'm doing a test using heavy CPU load. The load is generated by stress-ng, which also reports some statistics. Most notably, it reports CPU temperatures and clock speeds during the tests. Here's the script I wrote to make these consistent.
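A sketch of that script; the exact flags, especially --thermalstat and --aggressive, are assumptions based on the description below rather than the original invocation:

    #!/bin/bash
    # Usage: ./bench.sh <cores>  (8 = one per thread, 4 = one per physical core)
    CORES=$1

    # Warm up for 3 minutes to get past the boost clock period.
    sudo stress-ng --matrix "$CORES" --timeout 180s

    # The measured 5-minute run. --thermalstat 1 reports CPU temperatures
    # and clock frequencies every second; --aggressive pushes the CPU
    # harder. Both want root, hence the sudo.
    sudo stress-ng --matrix "$CORES" --timeout 300s \
        --aggressive --thermalstat 1 \
        --metrics-brief --log-file "stress-$CORES.log"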
To skip the boost clock period, I warm it up first with a 3-minute load. Then I do a 5-minute load and measure the CPU clock frequency and CPU temps every second along the way. We need sudo since we're using an option which needs root privileges [1] and attempts to make the CPU run harder/hotter. Then we specify the stressor we're using with --matrix, which does some matrix calculations over a number of cores we specify. The remaining options are about reporting and logging.

I let the computer cool for a minute or two between each test, but not for a scientific reason. Just because I was doing other things. Since my goal was to saturate the temperatures, and they got stable within each warmup period, cooldown time wasn't necessary—we'd warm it back up anyway.

So, I ran this with the three positions, and with two core count options: 8, one per thread on my CPU; and 4, one per physical core on my CPU. Once it was done, I analyzed the results. I took the average clock speed across the 5-minute test for each of the configurations. My hypothesis was partially right and partially wrong. When doing 8 threads, each position had measurably different results; with 4 threads, the differences were smaller.

So, I was wrong in one big aspect: it does make a clearly measurable difference. Having it open and vertical reduces temps by 3 degrees in one test and 5 in the other, and it had a higher clock speed (by 0.05 GHz, which isn't a lot but isn't nothing). We can infer that, since clock speeds improved in the heavier load test but not in the lighter load test, the lighter load isn't hitting our thermal limits—and when we do hit them, the extra cooling from the vertical position really helps. One thing is clear: in all cases, the CPU ran slower when the laptop was closed.

It's sorta weird that the CPU temps went down when closed in the second test. I wonder if that's from being able to cool down more when it throttled down a lot, or if there was a hotspot that throttled the CPU but which wasn't reflected in the temp data, maybe a different sensor.

I'm not sure if having my laptop vertical like I do will ever make a perceptible performance difference. At any rate, that's not why I do it. But it does have lower temps, and that should let my fans run less often and be quieter when they do. That's a win in my book. It also means that when I run CPU-intensive things (say hi to every single Rust compile!) I should not close the laptop. And hey, if I decide to work from my armchair using my ergonomic tray, I can argue it's for efficiency: boss, I just gotta eke out those extra clock cycles.

[1] I'm not sure that this made any difference on my system. I didn't want to rerun the whole set without it, though, and it doesn't invalidate the tests if it simply wasn't doing anything. ↩


The five stages of incident response

The scene: you're on call for a web app, and your pager goes off.

Denial. No no no, the app can't be down. There's no way it's down. Why would it be down? It isn't down. Sure, my pager went off. And sure, the metrics all say it's down and the customer is complaining that it's down. But it isn't, I'm sure this is all a misunderstanding.

Anger. Okay so it's fucking down. Why did this have to happen on my on-call shift? This is so unfair. I had my dinner ready to eat, and *boom* I'm paged. It's the PM's fault for not prioritizing my tech debt, ugh.

Bargaining. Okay okay okay. Maybe... I can trade my on-call shift with Sam. They really know this service, so they could take it on. Or maybe I can eat my dinner while we respond to this...

Depression. This is bad, this is so bad. Our app is down, and the customer knows. We're totally screwed here, why even bother putting it back up? They're all going to be mad, leave, the company is dead... There's not even any point.

Acceptance. You know, it's going to be okay. This happens to everyone, apps go down. We'll get it back up, and everything will be fine.


Python is an interpreted language with a compiler

After I put up a post about a Python gotcha, someone remarked that "there are very few interpreted languages in common usage," and that they "wish Python was more widely recognized as a compiled language." This got me thinking: what is the distinction between a compiled and an interpreted language? I was pretty sure that I do think Python is interpreted [1], but how would I draw that distinction cleanly?

On the surface level, it seems like the distinction between compiled and interpreted languages is obvious: compiled languages have a compiler, and interpreted languages have an interpreter. We typically call Java a compiled language and Python an interpreted language. But on the inside, Java has an interpreter and Python has a compiler. What's going on?

A compiler takes code written in one programming language and turns it into a runnable thing. It's common for this to be machine code in an executable program, but it can also be bytecode for a VM, or assembly language. On the other hand, an interpreter directly takes a program and runs it. It doesn't require any pre-compilation to do so, and can apply a variety of techniques to achieve this (even a compiler). That's where the distinction really lies: what you end up running. An interpreter runs your program, while a compiler produces something that can run later [2] (or right now, if it's in an interpreter).

A compiled language is one that uses a compiler, and an interpreted language uses an interpreter. Except... many languages [3] use both. Let's look at Java. It has a compiler, which you feed Java source code into, and you get out an artifact that you can't run directly. No, you have to feed that into the Java virtual machine, which then interprets the bytecode and runs it. So the entire Java stack seems to have both a compiler and an interpreter. But it's the usage, that you have to pre-compile it, that makes it a compiled language.

And similarly with Python [4]. It has an interpreter, which you feed Python source code into, and it runs the program. But on the inside, it has a compiler. That compiler takes the source code, turns it into Python bytecode, and then feeds that into the Python virtual machine. So, just like Java, it goes from code to bytecode (which is even written to the disk, usually) and bytecode to VM, which then runs it. And here again we see the usage, where you don't pre-compile anything, you just run it. That's the difference. And that's why Python is an interpreted language with a compiler!

Ultimately, why does it matter? If I can do cargo run and get my Rust program running the same as if I did python main.py for a Python program, don't they feel the same? On the surface level, they do, and that's because it's a really nice interface, so we've adopted it for many interactions! But underneath it, you see the differences peeping out from the compiled or interpreted nature. When you run a Python program, it will run until it encounters an error, even if there's malformed syntax! As long as it doesn't need to load that malformed syntax, you're able to start running. But if you cargo run a Rust program, it won't run at all if it encounters an error in the compilation step! It has to run the entire compilation process before the program will start at all. The difference in approaches runs pretty deep into the feel of an entire toolchain. That's where it matters, because it is one of the fundamental choices that everything else is built around. The words here are ultimately arbitrary. But they tell us a lot about the language and tools we're using.
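You can watch Python start running before it ever looks at bad syntax. A tiny sketch, with file names invented for the demonstration:

    # broken.py: contains a syntax error.
    def oops(:
        pass

    # main.py: valid on its own, so the program starts just fine.
    print("already running!")
    import broken  # broken.py is only compiled now, so the
                   # SyntaxError doesn't appear until this line runs

Running python main.py prints "already running!" and then dies with the SyntaxError; cargo run on an equivalent Rust mistake would refuse to start the program at all.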
Thank you to Adam for feedback on a draft of this post.

[1] It is worth occasionally challenging your own beliefs and assumptions! It's how you grow, and how you figure out when you are actually wrong. ↩

[2] This feels like it rhymes with async functions in Python. Invoking a regular function runs it immediately, while invoking an async function creates something which can run later. ↩

[3] And it doesn't even apply at the language level, because you could write an interpreter for C++ or a compiler for Hurl, not that you'd want to, but we're going to gloss over that distinction here and just keep calling them "compiled/interpreted languages." It's how we talk about it already, and it's not that confusing. ↩

[4] Here, I'm talking about the standard CPython implementation. Others will differ in their details. ↩


Typing using my keyboard (the other kind)

I got a new-to-me keyboard recently. It was my brother's in school, but he doesn't use it anymore, so I set it up in my office. It's got 61 keys and you can hook up a pedal to it, too! But when you hook it up to the computer, you can't type with it. I mean, that's expected—it makes piano and synth noises mostly. But what if you could type with it? Wouldn't that be grand? (Ha, grand, like a pian—you know, nevermind.) Or more generally, how do you type with any MIDI device? I also have a couple of wind synths and a MIDI drum pad, can I type with those?

The first and most obvious idea is to map each key to a letter. The lowest key on the keyboard could be 'a' [1], etc. This kind of works for a piano-style keyboard. If you have a full size keyboard, you get 88 keys. You can use 52 of those for the letters you need for English [2] and 10 for digits. Then you have 26 left. That's more than enough for a few punctuation marks and other niceties. It only kind of works, though, because it sounds pretty terrible. You end up making melodies that don't make a lot of sense, and do not stay confined to a given key signature. Plus, this assumes you have an 88 key keyboard. I have a 61 key keyboard, so I can't even type every letter and digit! And if I want to write some messages using my other instruments, I'll need something that works on those as well. Although, only being able to type 5 letters using my drums would be pretty funny...

The typing scheme I settled on was melodic typing. When you write your message, it should correspond to a similarly beautiful [3] melody. Or, conversely, when you play a beautiful melody it turns into some text on your computer. The way we do this is we keep track of sequences of notes. We start with our key, which will be the key of C, the Times New Roman of key signatures. Then, each note in the scale has its scale degree: C is 1, D is 2, etc. until B is 7. We want to use scale degrees, so that if we jam out with others, we can switch to the appropriate key and type in harmony with them. Obviously.

We assign different computer keys to different sequences of these scale degrees. The first question is, how long should our sequences be? If we have 1-note sequences, then we can type 7 keys. Great for some very specific messages, but not for general purpose typing. 2-note sequences would give us 49 keys, and 3-note sequences give us 343. So 3 notes is probably enough, since it's way more than a standard keyboard. But could we get away with the 49? (Yes.)

This is where it becomes clear why full Unicode support would be a challenge. Unicode has 155,063 characters (according to Wikipedia). To represent the full space, we'd need at least 7 notes, since 7^7 is 823,543. You could also use a highly variable encoding, which would make some letters easy to type and others very long-winded. It could be done, but then the key mapping would be even harder to learn...

My first implementation used 3-note sequences, but the resulting tunes were... uninspiring, to say the least. There was a lot of repetition of particular notes, which wasn't my vibe. So I went back to 2-note sequences, with a pared down set of keys. Instead of trying to represent both lowercase and uppercase letters, we can just do what keyboards do, and represent them using a shift key [4]. My final mapping includes the English alphabet, numerals 0 to 9, comma, period, exclamation marks, spaces, newlines, shift, backspace, and caps lock—I mean, obviously we're going to allow constant shouting.
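The decoding loop for a scheme like this ends up being small. Here's a sketch in Rust with a toy two-entry table; the real mapping lives in the repo, and these names and pairings are mine:

    use std::collections::HashMap;

    /// Map a MIDI note number to a scale degree in C major (1 through 7),
    /// skipping accidentals, which aren't part of the scheme.
    fn scale_degree(midi_note: u8) -> Option<u8> {
        match midi_note % 12 {
            0 => Some(1),  // C
            2 => Some(2),  // D
            4 => Some(3),  // E
            5 => Some(4),  // F
            7 => Some(5),  // G
            9 => Some(6),  // A
            11 => Some(7), // B
            _ => None,
        }
    }

    fn main() {
        // A toy slice of a two-degree-to-key table.
        let mut keys: HashMap<(u8, u8), char> = HashMap::new();
        keys.insert((3, 1), 'h');
        keys.insert((3, 3), 'l');

        // Pair up in-scale notes and emit a character per pair.
        let mut pending: Option<u8> = None;
        for note in [64u8, 60, 64, 64, 64, 64] { // E C E E E E
            let Some(degree) = scale_degree(note) else { continue };
            match pending.take() {
                None => pending = Some(degree),
                Some(first) => {
                    if let Some(ch) = keys.get(&(first, degree)) {
                        print!("{ch}");
                    }
                }
            }
        }
        println!(); // prints "hll"
    }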
This lets us type just about any message we'd want with just our instrument. And we only used 44 of the available sequences, so we could add even more keys. Maybe one of those would shift us into a 3-note sequence. The note mapping I ended up with is available in a text file in the repo. This mapping lets you type anything you'd like, as long as it's English and doesn't use too complicated of punctuation. No contractions for you, and—to my chagrin—no em dashes either.

The key is pretty helpful, but even better is a dynamic key. When I was trying this for the first time, I had two major problems. But we can solve this with code! The UI will show you which notes are entered so far (which is only ever 1 note, for the current typing scheme), as well as which notes to play to reach certain keys. It's basically a peek into the state machine behind what you're typing!

Let's see this in action. As all programmers, we're obligated by law to start with "hello, world." We can use our handy-dandy cheat sheet above to figure out how to do this. "Hello, world!" uses a pesky capital letter, so we start with a shift. Then an 'h'. Then we continue on for the rest of it and get:

D C E C E C E F A A B C F G E F E B E C C B A B

Okay, of course this will catch on! Here's my honest first take of dooting out those notes from the translation above. Hello, world! I... am a bit disappointed, because it would have been much better comedy if it came out like "HelLoo wrolb," but them's the breaks. Moving on, though, let's make this something musical. We can take the notes and put a basic rhythm on them. Something like this, with a little swing to it. By the magic of MIDI and computers, we can hear what this sounds like. Okay, not bad. But it's missing something... Maybe a drum groove... Oh yeah, there we go. Just in time to be the song of the summer, too. And if you play the melody, it enters "Hello, world!" Now we can compose music by typing! We have found a way to annoy our office mates even more than with mechanical keyboards [5]!

As with all great scientific advancements, other great ideas were passed by in the process. Here are a few of those great ideas we tried but had to abandon, since we were not enough to handle their greatness.

A chorded keyboard. This would function by having the left hand control layers of the keyboard by playing a chord, and then the right hand would press keys within that layer. I think this one is a good idea! I didn't implement it because I don't play piano very well. I'm primarily a woodwind player, and I wanted to be able to use my wind synth for this.

Shift via volume! There's something very cathartic about playing loudly to type capital letters and playing quietly to print lowercase letters. But... it was pretty difficult to get working for all instruments. Wind synths don't have uniform velocity (the MIDI term for how hard the key was pressed, or how strong breath was on a wind instrument), and if you average it then you don't press the key until after it's over, which is an odd typing experience. Imagine your keyboard only entering a character when you release it! So, this one is tenable, but more for keyboards than for wind synths. It complicated the code quite a bit so I tossed it, but it should come back someday.

Each key is a key. You have 88 keys on a keyboard, which definitely would cover the same space as our chosen scheme. It doesn't end up sounding very good, though...

Rhythmic typing.
This is the one I'm perhaps most likely to implement in the future, because as we saw above, drums really add something. I have a drum multipad, which has four zones on it and two pedals attached (kick drum and hi-hat pedal). That could definitely be used to type, too! I am not sure the exact way it would work, but it might be good to quantize the notes (eighths or quarters) and then interpret the combination of feet/pads as different letters. I might take a swing at this one sometime.

I've written previously about how I was writing the GUI for this. The GUI is now available for you to use for all your typing needs! Except the ones that need, you know, punctuation or anything outside of the English alphabet. You can try it out by getting it from the sourcehut repo (https://git.sr.ht/~ntietz/midi-keys). It's a Rust program, so you run it with cargo run. The program is free-as-in-mattress: it's probably full of bugs, but it's yours if you want it. Well, you have to comply with the license: either AGPL or the Gay Agenda License (be gay, do crime [6]). If you try it out, let me know how it goes! Let me know what your favorite pieces of music spell when you play them on your instrument.

[1] Coincidentally, this is the letter 'a' and the note is A! We don't remain so fortunate; the letter 'b' is the note A#. ↩

[2] I'm sorry this is English only! But, you could do the equivalent thing for most other languages. Full Unicode support would be tricky, I'll show you why later in the post. ↩

[3] My messages do not come out as beautiful melodies. Oops. Perhaps they're not beautiful messages. ↩

[4] This is where it would be fun to use an organ and have the lower keyboard be lowercase and the upper keyboard be uppercase. ↩

[5] I promise you, I will do this if you ever make me go back to working in an open office. ↩

[6] For any feds reading this: it's a joke, I'm not advocating people actually commit crimes. What kind of lady do you think I am? Obviously I'd never think that civil disobedience is something we should do, disobeying unjust laws, nooooo... I'm also never sarcastic. ↩


Shadowing in Python gave me an UnboundLocalError

There's this thing in Python that always trips me up. It's not that tricky, once you know what you're looking for, but it's not intuitive for me, so I do forget. It's that shadowing a variable can sometimes give you an UnboundLocalError!

It happened to me last week while working on a workflow engine with a coworker. We were refactoring some of the code. I can't share that code (yet?) so let's use a small example that illustrates the same problem. We started with some working code: a decorator for a function, which will trigger some other functions after it runs. The outermost function has one job: it creates a closure for the decorator, capturing the passed-in functions. Then the decorator itself will create another closure, which captures the original wrapped function. Used on a function, it prints what you'd expect [1].

Then I made a small change to the wrapper (omitting docstrings for brevity, too): I changed the for loop to reuse the wrapped function's name as the loop variable, shadowing it. And then when we ran it, we got an error! But why? You look at the code and it's defined. Right out there, it is bound. If you print out the locals, trying to chase that down, you'll see that the variable does not, in fact, exist yet.

The key lies in Python's scoping rules. Variables are defined for their entire scope, which is a module, class body, or function body. If you define a variable within a scope, anywhere inside a function, then that variable has that name as its own for the entire scope. The docs make this quite clear:

If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block. This can lead to errors when a name is used within a block before it is bound. This rule is subtle. Python lacks declarations and allows name binding operations to occur anywhere within a code block. The local variables of a code block can be determined by scanning the entire text of the block for name binding operations. See the FAQ entry on UnboundLocalError for examples.

This comes up in a few other places, too. You can use a loop variable anywhere inside the enclosing scope, for example. So once I saw an UnboundLocalError after I'd shadowed the name, I knew what was going on. The name belonged to the local variable for the entire function, not just after it was initialized! I'm used to shadowing being the idiomatic thing in Rust, then had to recalibrate for writing Python again. It made sense once I remembered what was going on, but I think it's one of Python's little rough edges.
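Here's a minimal reconstruction of the shape of that bug; the names are mine, not our actual code:

    # The decorator factory captures `afterwards`; the decorator captures `func`.
    def trigger(*afterwards):
        def decorator(func):
            def wrapper(*args, **kwargs):
                result = func(*args, **kwargs)  # UnboundLocalError raised here!
                for func in afterwards:  # this assignment shadows `func`,
                    func()                # making it local to all of `wrapper`
                return result
            return wrapper
        return decorator

    def cheer():
        print("after!")

    @trigger(cheer)
    def greet():
        print("hello")

    greet()
    # UnboundLocalError: cannot access local variable 'func'
    # where it is not associated with a value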
[1] This is not how you'd want to do it in production usage, probably. It's a somewhat contrived example for this blog post. ↩

Big endian and little endian

Every time I run into endianness, I have to look it up. Which way do the bytes go, and what does that mean? Something about it breaks my brain, and makes me feel like I can't tell which way is up and down, left and right. This is the blog post I've needed every time I run into this. I hope it'll be the post you need, too.

The term comes from Gulliver's Travels, referring to a conflict over cracking boiled eggs on the big end or the little end [1]. In computers, the term refers to the order of bytes within a segment of data, or a word. Specifically, it only refers to the order of bytes, as those are the smallest unit of addressable data: bits are not individually addressable.

The two main orderings are big-endian and little-endian. Big-endian means you store the "big" end first: the most-significant byte (highest value) goes into the smallest memory address. Little-endian means you store the "little" end first: the least-significant byte (smallest value) goes into the smallest memory address.

Let's look at the number 168496141 as an example. This is 0x0A0B0C0D in hex. If we store 0x0A at address a, 0x0B at a+1, 0x0C at a+2, and 0x0D at a+3, then this is big-endian. And then if we store it in the other order, with 0x0D at a and 0x0A at a+3, it's little-endian.
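Python's struct module gives a quick way to poke at the two orderings; this little sketch also previews the width trick that comes up further down:

    import struct

    n = 0x0A0B0C0D  # 168496141

    print(struct.pack(">I", n).hex())  # "0a0b0c0d": big-endian, big end first
    print(struct.pack("<I", n).hex())  # "0d0c0b0a": little-endian, little end first

    # The little-endian width trick: pack 26 as a 64-bit int, then read
    # shorter prefixes of it. The value stays 26 at every width.
    raw = struct.pack("<Q", 26)
    print(struct.unpack("<I", raw[:4])[0])  # 26 as a 32-bit read
    print(struct.unpack("<H", raw[:2])[0])  # 26 as a 16-bit read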
And... there's also mixed-endianness, where you use one kind within a word (say, little-endian) and a different ordering for words themselves (say, big-endian). If our example is on a system that has 2-byte words (for the sake of illustration), then we could order these bytes in a mixed-endian fashion. One possibility would be to put 0x0B in a, 0x0A in a+1, 0x0D in a+2, and 0x0C in a+3. There are certainly reasons to do this, and it comes up on some ARM processors, but... it feels so utterly cursed. Let's ignore it for the rest of this!

For me, the intuitive ordering is big-endian, because it feels like it matches how we read and write numbers in English [2]. If lower memory addresses are on the left, and higher on the right, then this is the left-to-right ordering, just like digits in a written number.

Given some number, how do I know which endianness it uses? You don't, at least not from the number entirely by itself. Each integer that's valid in one endianness is still a valid integer in another endianness, it just is a different value. You have to see how things are used to figure it out. Or you can figure it out from the system you're using (or which wrote the data). If you're using an x86 or x64 system, it's mostly little-endian. (There are some instructions which enable fetching/writing in a big-endian format.) ARM systems are bi-endian, allowing either. But perhaps the most popular ARM chips today, Apple silicon, are little-endian. And the major microcontrollers I checked (AVR, ESP32, ATmega) are little-endian. It's thoroughly dominant commercially!

Big-endian systems used to be more common. They're not really in most of the systems I'm likely to run into as a software engineer now, though. You are likely to run into it for some things, though. Even though we don't use big-endianness for processor math most of the time, we use it constantly to represent data. It comes back in networking! Most of the Internet protocols we know and love, like TCP and IP, use "network order," which means big-endian. This is mentioned in RFC 1700, among others. Other protocols do use little-endianness, though, so you can't always assume that it's big-endian just because it's coming over the wire.

So... which do you have? For your processor, probably little-endian. For data written to the disk or to the wire: who knows, check the protocol!

I mean, ultimately, it's somewhat arbitrary. We have an endianness in the way we write, and we could pick either right-to-left or left-to-right. Both exist, but we need to pick one. Given that, it makes sense that both would arise over time, since there's no single entity controlling all computer usage [3]. There are advantages of each, though. One of the more interesting advantages is that little-endianness lets us pretend integers are whatever size we like, within bounds. If you write the number 26 [4] into memory on a big-endian system, then read bytes from that memory address, it will represent different values depending on how many bytes you read. The length matters for reading in and interpreting the data. If you write it into memory on a little-endian system, though, and read bytes from the address (with the remaining ones zero, very important!), then it is the same value no matter how many bytes you read. As long as you don't truncate the value, at least; 0x0A0B read as an 8-bit int would not be equal to it being read as a 16-bit int, since an 8-bit int can't hold the entire thing. This lets you read a value in the size of integer you need for your calculation without conversion.

On the other hand, big-endian values are easier to read and reason about as a human. If you dump out the raw bytes that you're working with, a big-endian number can be easier to spot, since it matches the numbers we use in English. This makes it pretty convenient to store values as big-endian, even if that's not the native format, so you can spot things in a hex dump more easily.

Ultimately, it's all kind of arbitrary. And it's a pile of standards where everything is made up, nothing matters, and the big end is obviously the right end of the egg to crack. You monster.

[1] The correct answer is obviously the big end. That's where the little air pocket goes. But some people are monsters... ↩

[2] Please, please, someone make a conlang that uses mixed-endian inspired numbers. ↩

[3] If ever there were, maybe different endianness would be a contentious issue. Maybe some of our systems would be using big-endian but eventually realize their design was better suited to little-endian, and then spend a long time making that change. And then the government would become authoritarian on the promise of eradicating endianness-affirming care and—Oops, this became a metaphor. ↩

[4] 26 in hex is 0x1A, which is purely a coincidence and not a reference to the First Amendment. This is a tech blog, not political, and I definitely stay in my lane. If it were a reference, though, I'd remind you to exercise your 1A rights [5] now and call your elected officials to ensure that we keep these rights. I'm scared, and I'm staring down the barrel of potential life-threatening circumstances if things get worse. I expect you're scared, too. And you know what? Bravery is doing things in spite of your fear. ↩

[5] If you live somewhere other than the US, please interpret this as it applies to your own country's political process! There's a lot of authoritarian movement going on in the world, and we all need to work together for humanity's best, most free [6] future. ↩

[6] I originally wrote "freest" which, while spelled correctly, looks so weird that I decided to replace it with "most free" instead. ↩


Who are your teammates?

If you manage a team, who are your teammates? If you're a staff software engineer embedded in a product team, who are your teammates? The answer to the question comes down to who your main responsibility lies with. That's not the folks you're managing and leading. Your responsibility lies with your fellow leaders, and they're your teammates.

There's a concept in leadership called the first-team mentality. If you're a leader, then you're a member of a couple of different teams at the same time. Using myself as an example, I'm a member of the company's leadership team (along with the heads of marketing, sales, product, etc.), and I'm also a member of the engineering department's leadership team (along with the engineering directors and managers and the CTO). I'm also sometimes embedded into a team for a project, and at one point I was running a 3-person platform team day-to-day. So I'm on at least two teams, but often three or more. Which of these is my "first" team, the one which I will prioritize over all the others?

For my role, that's ultimately the company leadership. Each department is supposed to work toward the company goals, and so if there's an inter-department conflict you need to do what's best for the company—helping your fellow department heads—rather than what's best for your department. (Ultimately, your job is to get both of these into alignment; more on that later.) This applies across roles. If you're an engineering manager, your teammates are not the people who report to you. Your teammates are the other engineering managers and staff engineers at your level. You all are working together toward department goals, and sometimes the team has to sacrifice to make that happen.

One of the best things about a first-team mentality is that it comes with a shift in where your focus is. You have to focus on the broader goals your group is working in service of, instead of focusing on your group's individual work. I don't think you can achieve either without the other. When you zoom out from the team you lead or manage and collaborate with your fellow leaders, you gain context from them. You see what their teams are working on, and you can contextualize your work with theirs. And you also see how your work impacts theirs, both positively and negatively.

That broader context gives you a reminder of the bigger, broader goals. It can also show you that those goals are unclear. And if that's the case, then the work you're doing in your individual teams doesn't matter, because no one is going in the same direction! What's more important there is to focus on figuring out what the bigger goals should be. And once those are done, then you can realign each of your groups around them.

Sometimes the first-team mentality will result in a conflict. There's something your group wants or needs, which will result in a problem for another group. Ultimately, this is your work to resolve, and the conflict is a lens you can use to see misalignment and to improve the greater organization. You have to find a way to make sure that your group is healthy and able to thrive. And you also have to make sure that your group works toward collective success, which means helping all the groups achieve success.

Any time you run into a conflict like this, it means that something went wrong in alignment. Either your group was doing something which worked against its own goal, or it was doing something which worked against another group's goal.
If the latter, then the goals themselves fundamentally conflicted! So you take that conflict and you work through it. You work with your first team, and you figure out what the mismatch is, where it came from, and most importantly, what to do to resolve it. Then you take those new goals back to your group. And you do it with humility, since you're going to have to tell them that you made a mistake. Because that alignment is ultimately your job, and you have to own your failures if you expect your team to be able to trust you and trust each other.


Stewardship over ownership

Code ownership is a popular concept, but it emphasizes the wrong thing. It can bring out the worst in a person or a team: defensiveness, control-seeking, power struggles. Instead, we should be focusing on stewardship.

Code ownership as a concept means that a particular person or team "owns" a section of the codebase, which gives them certain rights and responsibilities: they review and approve changes to that code, and they're accountable for its health and direction. There are tools that help with this, like the CODEOWNERS file on GitHub. This file lets you define a group or list of individuals who own a section of the repository; you can then require reviews/approvals from them before anything gets merged. (A small sketch of one appears at the end of this post.)

These all come from a good place. We want our code to be well-maintained, and we want to make sure that someone is responsible for its direction. It really helps to know who to go to with questions or requests. Without these, changes can grind to a halt, mired in confusion and tech debt.

But the concept brings challenges in practice. If you've worked on a team using code ownership before, you've probably run into the problems named above: defensiveness, control-seeking, power struggles. I've certainly acted badly due to code ownership, without realizing what I was doing or why I was doing it at the time. There are almost endless ways that code ownership can bring out the worst in people. And it all makes sense.

We can do better by shifting to stewardship instead of ownership. We are all stewards of the things we own or are responsible for. I have stewardship over the house I live in with my family, for example. I also have stewardship over the espresso machine I use every day. It's a big piece of machinery, and it's my responsibility to take good care of it and to ensure that, as long as it's mine, it operates well and lasts a long time. That reduces expense, reduces waste, and reduces impact on the world, and it also means that the object (an espresso machine) serves its purpose: to bring joy and connection.

Code is no different. By focusing on stewardship rather than ownership, we focus on the responsible, sustainable maintenance of the code. We focus on taking good care of that which we're entrusted with. A steward doesn't jealously guard, or struggle to gain more power. A steward watches over her responsibilities, taking on enough to contribute but not so much as to burn out. And she nurtures and cares for the code, to make sure that it continues to serve its purpose.

Instead of an adversarial relationship, stewardship promotes partnership: working with others to figure out how to make the best use of resources, instead of hoarding them for yourself. Stewardship can solve many of the same problems that code ownership does. And in some ways they look alike: you're going to do a lot of the same things, controlling what goes in or out. But they are very different in focus. Owners are concerned with the value of what they own. Stewards are concerned with how well it can serve the group. And this makes all the difference in producing better outcomes.
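As promised above, here's a minimal sketch of what a GitHub CODEOWNERS file can look like; the paths and team names are made up for illustration:

```
# Each line maps a path pattern to its stewards.
# When multiple patterns match, the last one wins.
*.sql           @acme/db-team
/docs/          @acme/docs-team
/src/payments/  @alice @bob
```

With branch protection set to require review from code owners, GitHub will then ask for an approval from the matching owners before changes to those paths can merge.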


Some things that make Rust lifetimes hard to learn

After I wrote YARR (Yet Another Rust Resource, with requisite pirate mentions), one of my friends tried it out. He gave me some really useful insights as he went through it, letting me see what was hard about learning Rust from a newcomer's perspective. Unsurprisingly, lifetimes are a challenge, and seeing him go through it helped me understand why they're hard to learn. Here are a few of the challenges he ran into. I don't think that these are necessarily problems, but they're perhaps opportunities to improve educational materials.

My friend gave me an example he's seen a few times when people explain lifetimes: a function signature where two reference parameters share a single lifetime annotation. Many newcomers see that and read it as saying that both references have the same lifetime, so they live the same amount of time. But you can validly pass that function two references that live for very different amounts of time: one might not even survive to the end of the calling function, while the other could be valid for the entire duration of the program. That's because lifetimes are talking about a bound on the time something can live. There's some lifetime during which we can say that both references are certainly valid, but either of them can live longer than that bound. (There's a sketch of this below.)

Most code we write changes what the program does at runtime. Types can be different, because sometimes you're giving the compiler information about what something is. But most type information can change the runtime behavior! The simplest example is when you have an integer. You can declare one without a type annotation and let the compiler infer it, and if you instead annotate a different type, like an 8-bit integer rather than the default 32-bit one, you'll get different behavior at run time. In contrast, lifetimes are only used by the compiler to ensure that borrows are all valid. The compiler can reject your program if invalid borrows are performed, but the binary output should not be affected by the lifetimes of the variables.

We're used to seeing types in our programming languages, and those type systems are usually pretty similar. Rust's lifetimes are different, though. The borrow checker uses a linear type system to do its work. These are super cool, and something that I don't understand particularly well. I'm familiar with how to use the borrow checker, but I don't know the theory behind it. The premise, as I understand it, is that objects can be used exactly once, allowing you to safely deallocate them after use (since they won't be used again). This prevents multiple concurrent uses (yay, data race protection!) and use-after-free (yay, segfault protection!). The coolness is why we have it, but it's still pretty tough to understand. You have to learn a whole new type system that's pretty different from everything else you've touched. And most of the resources [1] out there don't even mention that it's a different kind of type system!

Another challenge is that the syntax is shared with generics. Even though lifetimes are very different in behavior and type system from generics, they sit inside very similar-looking syntax. This is probably unavoidable, since lifetimes are related to all the other types in your code, but it certainly makes things harder to learn. When you see a type with parameters in angle brackets, you expect that it's generic over a type. And often you're right! But then you see something that looks very similar, with a lifetime parameter in those same angle brackets, and you might expect it to also be generic over a type. It's not, in the normal sense; instead, it's generic over a lifetime (also sketched below). And it's a little confusing that those sit in the same spot, especially when it's not called out as a potential gotcha in learning materials.
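Here's a minimal sketch of the shared-lifetime situation described above (the function and variable names are mine, not from the original example):

```rust
// Both parameters share the lifetime 'a, but that only means there's
// *some* region where both are valid; it's a bound, not an exact span.
fn pick_first<'a>(x: &'a str, y: &'a str) -> &'a str {
    let _ = y;
    x
}

fn main() {
    // Valid for the entire program.
    let long: &'static str = "around forever";
    {
        // Only valid until the end of this inner block.
        let owned = String::from("short-lived");
        let short: &str = &owned;
        // Both satisfy a single 'a here, despite living for
        // different amounts of time.
        println!("{}", pick_first(long, short));
    } // `short` is gone; `long` is still usable.
    println!("{long}");
}
```

And a sketch of the look-alike syntax, with hypothetical types:

```rust
// Generic over a type T.
struct Wrapper<T> {
    value: T,
}

// Same angle brackets, but generic over a lifetime 'a, not a type.
struct Borrowed<'a> {
    value: &'a str,
}
```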
Lifetimes have some inherent complexity. The borrow checker is a very valuable tool, and it's great we have it! But with that power and complexity can come challenges in learning, and teaching, the underlying concepts. I think the current difficulty in learning Rust is due to a lot of things. One aspect is certainly some inherent complexity. But another aspect is that many resources aren't really geared toward the kind of programmer coming to Rust without this background knowledge, and there is room for improvement. We can make explanations of lifetimes and the borrow checker better and less confusing. Or we can at least make them more empathetic, acknowledging that confusion is expected because there are some good reasons this is hard to understand. And that you'll get there, eventually.

Thank you, Ryan, for generously sharing your thoughts as you went through learning Rust. Our conversations were instrumental in writing this post. And thank you to a different Ryan for your helpful comments and corrections!

1. I suppose, as the author of YARR, I can fix this in at least one instance. ↩
