
Binding port 0 to avoid port collisions

It's common to spin up a server in a test so that you can do full end-to-end requests of it. It's a very important sort of test, to make sure everything works together. Most of the work I do is in complex web backends, and there's so much risk of a mock test not having all the request processing and middleware and setup exactly the same... you must do at least some end-to-end tests, or you're making a gamble that's going to bite you. This is great, but you quickly run into a problem: port collisions! These can happen when you run multiple tests at once and each starts a separate server, and whoops, two have picked the same port. Or they can happen if something else on your development machine happens to be running on the port you chose. It's annoying when it happens, too, because it's often hard to reproduce. So... how do we fix that? You read the title [1], so you know where we're going, but let's go there together. There are a few potential solutions to this. Perhaps the most obvious is binding to a port you choose randomly. This will work a lot of the time, but it's going to be flaky. You can drive down the probability of collision, but it's going to happen sometimes. Side note: I think the only thing worse than a test that fails 10% of the time is one that fails 1% of the time. It's not flaky enough to drive urgency for anyone to fix it, but it's flaky enough that in a team context, you will run into it on a daily basis. Ask me how I know. How often you get a collision depends on a lot of factors. How many times do you bind a port in the range? How many other services might bind something in that range? How likely are two things to run concurrently? As a simple example, let's say we pick a random port in the range 9000-9999, and you have 4 concurrent tests that will overlap.
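We can estimate the collision odds for that example by simulation before working them out analytically. Here's a quick Python sketch of mine (the numbers come from the example above; the code itself is illustrative, not from any particular test suite):

```python
import random

# Monte Carlo estimate: 4 concurrent tests each pick a random port in 9000-9999.
trials = 200_000
collisions = 0
rng = random.Random(42)  # fixed seed so the estimate is reproducible
for _ in range(trials):
    picks = [rng.randint(9000, 9999) for _ in range(4)]
    if len(set(picks)) < 4:  # any two tests chose the same port
        collisions += 1

print(f"collision rate ≈ {collisions / trials:.1%}")
```

The estimate lands right around the analytical answer, a bit over half a percent.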
If you uniformly sample from this range, then you have a 1/1000 chance of a collision from the second test, a 2/1000 chance from the third, and a 3/1000 chance from the fourth. Our probability of having no collision is (999/1000) × (998/1000) × (997/1000) ≈ 0.994. That means we have about a 0.6% chance of a collision. This isn't horrible, but it's not great! We could also have each test increment the port it picks by 1. I've done this before, and it avoids one set of collision problems, but it makes a new one. Now you're sweeping across the entire range starting from the first port. If you have anything else running on your system that binds in that range, you'll run into a collision! And if you run your entire test suite in parallel, you're much more likely to have a problem now, since all the tests start at the same port. The problem we've had all along is that we don't have full information. If we know the system state and all the currently open ports, then binding to one that's not in use is an easy problem. And you know who knows all that info? The kernel does. And it turns out, this is something we can ask the kernel for. We can just say "please give me a nice unused port" and it will! There's a range of ports that the kernel uses for this. It varies by system, but the particular range is not usually very relevant. On my system, I can find the range by checking /proc/sys/net/ipv4/ip_local_port_range. My ephemeral port range is from 32768 to 60999. I'm curious why the range stops there instead of going all the way up, so that's a future investigation. To get an ephemeral port on Linux systems, you bind or listen on port 0. Then the kernel will hand you back a port in the ephemeral range. And you know that it's available, since the kernel is keeping track. It's possible to have an issue here if the full range of ports has been exhausted but, you know what, if you hit that limit, you probably have other problems [2]. The only thing is that if you've bound to an unknown port, how do you send requests to it?
We can get the port we've bound to with another syscall, getsockname. This lets us find out what address a socket is bound to, and then we can do something with that information. For tests, that means you'll need to find a way to communicate this port from the listener to the requester. If they're in the same process, I like to do this by either injecting the listener or returning the address. If you're running something like Postgres or Redis on an ephemeral port, then you'd probably have to find the port from its output, which is tedious but doable. Here's how a simple test looks in a web app I'm working on: we launch the web server, binding to port 0, and get the address back. Then we can send requests to that address! Inside the server setup, the relevant lines bind to port 0 and hand the resulting address back to the caller. That's all we have to do, and we'll get a much more reliable test setup. I think suspenseful titles can be fun, improve storytelling, and drive attention. But sometimes you really need a clear, honest, spoiler of a title. Giving away the answer is great when you're giving information that people might want to quickly internalize. ↩ If you do run into this, I'm very curious to hear about the circumstances. It's the kind of problem that I'd love to look at and work on. It's kind of messy, and you know that there's something very interesting that led to it being this way. ↩
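The whole pattern, end to end, can be sketched in a few lines. This is my own illustrative Python version (stdlib http.server standing in for a real web framework): bind to port 0, read back the kernel-assigned address, and hit the live server with a real request.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):
        pass  # keep test output quiet

# Bind to port 0; the kernel assigns a free ephemeral port.
server = HTTPServer(("127.0.0.1", 0), Handler)
host, port = server.server_address  # populated via getsockname() at bind time
threading.Thread(target=server.serve_forever, daemon=True).start()

# The test can now send requests to the address we got back.
body = urllib.request.urlopen(f"http://{host}:{port}/").read()
print(body)  # b'hello'
server.shutdown()
```

Run this twice in parallel and the two servers simply get different ports; there's nothing to collide on.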


TIL: Docker log rotation

Last week [1], when I went to publish my blog post, I ran into a surprising error: I was out of disk space. My server is used only for hosting a couple of small static sites, so I was surprised. None of the content is very large, so why was the disk full? A little investigation found the culprit. Starting from df and then drilling in using du showed me that a Docker folder was using most of the server's 25 GB disk. None of my container images are very large, so I checked inside /var/lib/docker/containers and found a few log files that were larger than 10 GB each. It turns out, Docker doesn't automatically rotate log files! As long as a container exists, its logs will keep growing. This means even if you stop a container and start it again, the logs are still there and getting bigger. I'd not thought about this before, but it turns out that when my blog is seeing heavy traffic, the logs can grow on the order of megabytes per hour. And that really adds up over time. First I did a quick check of how the logs were configured to start with. You can see the log configuration by using docker inspect. And my container's logging was totally unconfigured! That explained a lot. Now the fix is pretty quick. The docs show us an example that works well enough here. /etc/docker/daemon.json didn't exist yet, so I created it and added the log configuration in. The original example had a maximum of three log files, but I want a little more than that. I have the disk space, and I'd like longer to investigate logs before they are truncated away. After setting that up, I restarted the Docker daemon with systemctl restart docker. But logs don't rotate yet, no! The docs told us that this applies to new containers after Docker is restarted, but not to existing containers. So the final step was to stop and remove any containers I wanted rotation to work on, then recreate them. After that, a quick check, and we've got log rotation. Don't be like me, don't forget to rotate your logs! Or, do forget. Focus on the things you enjoy, and do just enough of the other things to make it work.
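The setup can be sketched like this. This is a hypothetical version of the config, not the post's exact file: the json-file driver with max-size and max-file options comes straight from Docker's logging docs, but the particular values here are my own example choices.

```shell
# Write a daemon-wide logging config (assumed values; adjust to taste).
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "10"
  }
}
EOF

# Restart the daemon so new containers pick up the config...
sudo systemctl restart docker

# ...then recreate (not just restart) existing containers,
# since the logging config only applies to newly created ones.
```

With max-size 10m and max-file 10, each container's logs are capped at roughly 100 MB instead of growing without bound.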
You can always hire someone else to solve some of the annoying, tedious, or difficult problems for you. (Hi. Hire me!) This is another "today I learned" post that happened previously, so at this point it should be "last week I learned" or "recently I learned", but you know what, I don't make the rules [2]. ↩ Okay, I do make the rules on this site. ↩


3D printing my laptop ergonomic setup

Apparently, one of my hobbies is making updates to my ergonomic setup, then blogging about it from an Amtrak train. I've gone and done it again. My setup stayed static for some time, but my most recent iteration ended up letting me down and I had to change it again. It gave me a lot of useful information and strongly shaped how I approached this iteration. This new one is closest to the first one I wrote about in 2024, but with some major improvements and reproducibility. First things first, though. Why am I making yet more changes to this setup? Besides my constant neurodivergent drive to make things perfect, my setups all kept causing me some problems. In chronological order, here are the problems and neat benefits of each setup I used for at least a few months. So my immediate previous version was heavy and tedious to set up. I had a trip coming up to Brooklyn, so I had to either make something more portable or leave my laptop at home. I decided to take my laptop, and did a design sprint to see if I could make my dream setup. At this point I'll probably be working on this setup forever, but I hope I can stop if I'm able to satisfy all my goals at some point. My dream setup has these characteristics: So, you know, it's not like I want a lot out of this setup. It's not like these are kind of a lot to all fit into one thing. I'm sure it'll be a piece of cake. I use OpenSCAD for 3D modeling. It's pretty pleasant, though some things are hard in general (like roundovers and fillets on more complicated shapes). My design to start is basically one of my previous versions: my split keyboard at adjustable width on a base, and a slot to hold my laptop vertically. I started by measuring important dimensions, like how far apart I wanted my keyboard halves and the dimensions of my laptop. Then I compared these to my 3D printer's print volume, and started working out how I'd have to print it. The rig is wider than my 3D printer, so I had to split it up into parts.
The slot would fit as a separate piece if I oriented it diagonally. The base itself would have to be split into two separate halves. To join the halves and the slot, I decided to use dovetail joints. I'm familiar with them from woodworking, and I figured they'd give a strong join here as well. I added the library BOSL2 to generate the dovetails, and these were pretty easy to model in. Then I also made some keyboard mounts, which I attach using a camera tripod mount (the Keyboardio Model 100 has threading for this). This is where I ended up for my initial design. When I printed the first pieces, I ran into a problem. The pieces came out alright, mostly, but there was this wavy defect on the top! It ended up being (I think) that the print was not adhering well to the print bed. This was easily solved by washing the bed with some water and dish soap, and then prints started coming out beautifully. The other problem was that the sliders and rails worked too smoothly, and I realized that I'd need some way to lock the keyboard in place or it would slide around in a difficult-to-use way. I punted on this, and printed the whole thing. I knew I'd need another iteration for material reasons: I am printing the prototype from PLA, since it's easy to work with, but I wanted to print the final one from PETG for slightly better heat resistance. So, onwards, and with a clean print bed, I was able to make the full first prototype! It was 3 parts which took 2-3.5 hours each to print, for a total print time of under 12 hours. I assembled the pieces and glued them together. At this point I was able to use the setup to work on itself, which was really satisfying. I did need to make the keyboard lock in place for carrying it, but it was fairly stable on my desk at least. Now it was time to make a few tweaks, and print the whole thing in PETG for its heat resistance.
I did a few things this iteration: I carved out a honeycomb pattern on the base to reduce weight and filament; I added a nubbin and detents to the keyboard slider to lock it in place where I want (in 10mm increments); I lengthened the keyboard rails to go further in; and I widened the keyboard slot for a less snug fit. This time is when I met the challenge that is printing with PETG! I dried my filament and started doing some prototyping. I sliced apart chunks of my model to see if things still fit together, since that can change with materials. I also printed a test of my locking clicky mechanism for the keyboard, and good thing: it needed design changes, but the second print worked great (I modified the first with a knife until it fit, then measured the remaining material, and modeled that). Then I printed it. And it came out pretty well! I mean, I had major stringing and bed adhesion issues the first time I tried it, but with thorough bed cleaning and a nozzle wipe, it came out cleanly. I had one spot with a minor quality issue, but it's on the bottom and not visible. And it's working out really well! Mostly! The good things here are what make it usable. It is lightweight (about 280 grams), which is comparable to my lightest previous setup, but that one fell apart promptly. It seems durable; we'll see over time, but it did survive multiple backpack loadings and a trip to Brooklyn today, where I hauled it around the city with me. And it's pretty fast to deploy: I can put it together in 15 seconds. The keyboard width is very easy to adjust, and it's solidly in place where it won't slide by accident. The laptop screen is at a good height. It's reproducible: others could print it as well, with access to the files. (I'm considering making them open source, but I don't think they're quite ready to share. It needs some iteration first.) And I quite like the way it looks. However, it's not all good.
I want to make some changes to it soon, after a break from the long print times and iterations. Here's the list to address: I don't know if addressing those is all feasible, or if it will satisfy my dream setup. But I do know by now that I'll not be done with this for a long, long time. Everyone needs a hobby; apparently this is one of mine. It's been surprisingly rewarding to work on my own ergonomic setup like this. I have made this setup specifically for health reasons: without it, I cannot use a laptop without severe nerve pain, and I rather like being able to work from anywhere. I have a very uncommon setup in that I'm able to use my Keyboardio Model 100 from a train; I've not seen that before. The amazing thing about 3D printers is enabling this kind of solution. I made my previous versions in my workshop out of mostly wood. It took time, and iteration was a big challenge. With a 3D printer, it's doable to design it and even send it off to someone else to print. And we can make exactly what we need, at relatively low cost. It's a technology that truly changes things in making custom-tailored solutions far more accessible. As far as I know, the main laptops that do this are the Framework 13 and some Lenovo Thinkpads. No Apple laptop does this. It's a big constraint and I haven't been able to design it out of my setup. I'm starting to wonder if the ticket is a headless small form factor computer with a portable monitor. ↩ I am annoyed at this, because it limits my keyboard options and I would love something lighter. Don't get me wrong, I love my Model 100. But I'm uncomfortable relying only on one keyboard from one company. ↩

My first one was difficult to adjust the keyboard width. You had to flip it over and loosen hardware from the bottom. It was also a little heavy. There's a limit to how far I can reduce weight when using a Keyboardio Model 100, but we can get closer. However, this rig was very fast to set up. It also did keep my keyboard at a good width.
My second one used hinges made from fabric and hook-and-loop fasteners, which was neat, but ultimately it fell apart, it was tedious to adjust, and it took a long time to set up. The big benefit of this setup was that it was extremely light. This was helpful when I was suffering from a lot of fatigue and POTS. My third one had a neat hinging mechanism which was useful for smaller spaces but wasn't much faster to set up. It used a smaller, lighter keyboard, but ultimately that keyboard caused my nerve pain to relapse. My fourth one, not previously written about, was... way too heavy. It was also a little tedious to set up, but the weight was its biggest problem. I made that one from off-the-shelf parts (mostly), with the goal of making something reproducible for others. And it worked with any laptop, not just ones with a 180 degree hinge like mine [1]. But, with how heavy and annoying it was, it's not worth reproducing.

relatively lightweight: it's not going to get super light with both a laptop and my keyboard, but I want to minimize the weight beyond those

solid mount for my Keyboardio Model 100: this keyboard is, vexingly [2], the only keyboard that keeps my nerve pain in remission. I need to use it.

good laptop screen height: another problem with laptop use generally is that the screen is usually too low or the keyboard is too high. I want to make sure the screen is at a reasonable height so that I don't wreck my body through poor posture.

durability: it needs to be pretty durable since I'm going to use this rig for travel. I don't abuse my laptop or my setup, but it has to stand up to regularly being taken in and out of a bag and being used in random places. It has to stand up to a variety of environmental conditions, too.
as easy as opening my laptop: a lot of ergonomic problems stem from ergonomic setups being inconvenient, so if I can reduce that inconvenience, I can reduce the problems

easily adjustable keyboard width: I shift around my keyboard position as my body asks for it, and having dynamic positioning helps me feel comfortable. I'd like to be able to do this with little fuss, or else I won't do it (see the previous point).

mounting points for accessories: I use an eink tablet to take notes, and would love to be able to put it on a little mount on the rig. I also want to be able to mount USB hubs or the mic I use for Talon. Having options for attaching accessories would make it not just equivalent to a laptop, but far more flexible.

reproducible: this setup gets a lot of comments from people, and it solves real problems for me that other people have as well. I want more people to be able to use it.

interesting: whenever I take this thing out, I get comments on it. It's how I find other engineers and software folks: most people are all "ignore the lady with the weird rig" but y'all actually strike up conversations with me about it. (If you ever run into me in public, please do talk to me! Even if it looks like I'm working!) I don't want this social benefit to go away!

attractive aesthetic: I've been fine using my homebrew wood setups, but they're so obviously homemade and don't look good. My dream is that it would look like it's not homemade, and would simply look like it's how the computer is intended to be used.

Replacements for the camera z-mounts: I'd like to 3D print something for this, and it will be the first iteration I make. The z-mounts are over a pound of metal together, so I could bring down the weight a bit more this way. However, it may not be worth it.

Add non-slip feet and extra rails on the bottom: I'd like to raise it off the surface it's on a little bit and add some rails on the bottom for a little more rigidity.
Make it more rigid: it is a little bit floppy, but not to the point of being distracting when using it. I'd like it to feel a little sturdier, especially if anyone else were going to use it.

Add attachment points for accessories: on Friday, someone at Recurse Center saw my coffee perched in the middle and he suggested a cupholder. I'd like that, or mounts for my mic or USB hub or myriad other things. I can use the honeycomb grid for attachment points, if I add those rails/feet on the bottom to raise it all up a little bit.

Make it modular and customizable: it only works today if you have a split keyboard with a tripod mount on the bottom of it. So, that's not great for people who don't have the exact same keyboard I do! And if you have other laptops, well, it would need to be adjusted for that. I want to address this before releasing the files. (If you do have the hardware that makes this useful for you today, let me know. I'm happy to help people out with that, I just don't want to do a big public release.)


Bayes theorem and how we talk about medical tests

We want medical tests to give us a yes or no answer: you have the disease, you're cured. We treat them this way, often. My labs came back saying I'm healthy. I have immunity. I'm sick. Absolutely concrete results. The reality is more complicated, and tests do not give you a yes or no. They give you a likelihood. And most of the time, what the results mean for me, the test taker, is not immediately obvious or intuitive. They can mean something quite the opposite of what they seem. I ran into this recently on a page about celiac disease. The Celiac Disease Foundation has a page about testing for celiac disease. On this page, they give a lot of useful information about what different tests are available, and they point to some other good resources as well. In the section about one of the tests, it says (emphasis original): The tTG-IgA test will be positive in about 93% of patients with celiac disease who are on a gluten-containing diet. This refers to the test's sensitivity, which measures how correctly it identifies those with the disease. The same test will come back negative in about 96% of healthy people without celiac disease. This is the test's specificity. This is great information, and it tells you what you need to start figuring out what your chance of celiac disease is. The next paragraph says this, however: There is also a slight risk of a false positive test result, especially for people with associated autoimmune disorders like type 1 diabetes, autoimmune liver disease, Hashimoto's thyroiditis, psoriatic or rheumatoid arthritis, and heart failure, who do not have celiac disease. And this is where things are a little misleading. It says that there is a "slight risk" of a false positive test result. What do you think of as a slight risk? For me, it's maybe somewhere around 5%, maybe 10%. The truth is, the risk of a false positive is much higher (under many circumstances). When I take a test, I want to know a couple of things.
If I get a positive test result, how likely is it that I have the disease? If I get a negative test result, how likely is it that I do not have the disease? The rates of positive and negative results listed above, the sensitivity and specificity, do not tell us these directly. However, they let us calculate them with a little more information. Bayes' theorem says that P(A|B) = P(B|A) × P(A) / P(B). You can read P(A|B) as "the probability of A conditioned on B", or the chance that A happens if we know that B happens. What this formula lets us do is figure out one conditional probability we don't yet know in terms of other ones that we do know. In our case, we would say that A is having celiac disease, and B is getting a positive test result. This leaves P(A|B) as the chance that if you get a positive test result, you do have celiac disease, which is exactly what we want to know. To compute this, we need a few more pieces of information. We already know that P(B|A) is 0.93, as we were told this above. And we can find P(A) pretty easily. Let's say P(A) is 0.01, since about 1 in 100 people in the US have celiac disease. Estimates vary from 1 in 200 to 1 in 50, but this will do fine. That leaves us with P(B). We have to compute it from both possibilities. If someone who has celiac disease takes the test, they have a 93% chance of it coming back positive, but they're only 1% of the population. On the other hand, someone without celiac disease has a 4% chance of it coming back positive (96% of the time it gives a true negative), and they're 99% of the population. We use these together to find that P(B) = 0.93 × 0.01 + 0.04 × 0.99 = 0.0489. Now we plug it all in! P(A|B) = 0.93 × 0.01 / 0.0489 ≈ 0.19. Neat, 19%! So this says that, if you get a positive test result, you have a 19% chance of having celiac disease? Yes, exactly! It's less than 1 in 5! So if you get a positive test result, you have about an 80% chance of it being a false positive. This is quite a bit higher than the aforementioned "slight risk."
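That arithmetic is easy to fumble by hand, so here's the same computation as a small Python sketch of mine (the sensitivity, specificity, and prevalence numbers are the ones from the post):

```python
# Bayes' theorem for the tTG-IgA example: P(disease | positive test).
sensitivity = 0.93   # P(positive | disease)
specificity = 0.96   # P(negative | no disease)
prior = 0.01         # P(disease): roughly 1 in 100 people

# Total probability of a positive result, summed over both possibilities.
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)

posterior = sensitivity * prior / p_positive
print(f"P(positive) = {p_positive:.4f}")           # 0.0489
print(f"P(disease | positive) = {posterior:.0%}")  # 19%
```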
In fact, it means that the test doesn't so much diagnose you with celiac disease as say "huh, something's going on here" and strongly suggest further testing. Now let's look at the test the other way around, too. How likely is it that you don't have celiac disease if you get a negative test result? Here we'd say that A is "we don't have it" and B is "we have a negative test". Doing some other calculations, pulled out of the oven fully baked in cooking show style, we can see that P(A|B) = 0.96 × 0.99 / (0.96 × 0.99 + 0.07 × 0.01) ≈ 0.999. So if you get a negative test result, you have a 99.9% chance of not having the disease. This can effectively rule it out! But... We know that 7% of people who take this test and do have celiac disease will get a negative result. How does this make sense? The truth is, things are a little bit deeper. People don't actually present with exactly a 1% chance of having celiac disease. That would be true if you plucked a random person from the population and subjected them to a blood test. But it's not true if you go to your doctor with GI symptoms which are consistent with celiac disease! If you're being tested for celiac disease, you probably are symptomatic. So that prior probability, P(A)? It's better as something else, but how we set it is a good question. Let's say you present with symptoms highly consistent with celiac disease, and that this gives you a 10% chance of having celiac disease and a 90% chance of it being something else, given these symptoms. This changes the probability a lot. If you get a positive test in this case, then P(A|B) = 0.93 × 0.10 / (0.93 × 0.10 + 0.04 × 0.90) ≈ 0.72. So now a positive test is a 72% chance of having celiac disease, instead of just 20%. And a negative test here gives you about a 0.8% chance of a false negative, up from the roughly 0.1% chance before. The real question is how we go from symptoms to that prior probability accurately. I spent a lot of 2024 being poked with needles and tested for various diseases while we tried to figure out what was wrong with me.
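To see how much the prior drives everything, here's a self-contained sketch of mine that generalizes the calculation to either test result and lets us swap priors (again using the post's sensitivity and specificity):

```python
def posterior(sensitivity, specificity, prior, result):
    """P(disease | test result) via Bayes' theorem."""
    if result == "positive":
        p_r_disease = sensitivity        # P(positive | disease)
        p_r_healthy = 1 - specificity    # P(positive | no disease)
    else:
        p_r_disease = 1 - sensitivity    # P(negative | disease)
        p_r_healthy = specificity        # P(negative | no disease)
    # Total probability of this result, over both possibilities.
    p_r = p_r_disease * prior + p_r_healthy * (1 - prior)
    return p_r_disease * prior / p_r

sens, spec = 0.93, 0.96

# Population screening: 1% prior.
print(f"{posterior(sens, spec, 0.01, 'positive'):.0%}")      # 19%
print(f"{1 - posterior(sens, spec, 0.01, 'negative'):.1%}")  # 99.9%

# Symptomatic patient: a 10% prior changes the picture.
print(f"{posterior(sens, spec, 0.10, 'positive'):.0%}")      # 72%
```

Same test, same numbers; only the prior changed, and the positive result went from "probably a false positive" to "probably real."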
Ultimately it was Lyme disease, and the diagnosis took a while because of a false negative. That false negative happened because the test was calibrated for broad population sampling, not for testing individuals already presenting with symptoms. The whole story is a lot longer, and it's for another post. But maybe, just maybe, it would've been a shorter story if we'd learned to reason about probabilities and medical tests better. Things are not intuitive, but Bayes is your friend, and Bayes' theorem can show us the information we really need to know. Or, we can keep going with things how they are. I mean, I did enjoy getting to know Barbara, my phlebotomist, from all my appointments.

P(A), the probability in general that the person taking the test has celiac disease. This is also called the prior probability, as it's what we would say the probability is if we did not know anything from this computation and test.

P(B), the probability that for any given test taken, it comes back positive.

P(B|A), the probability that if one has celiac disease, the test will come back positive.


Reflecting on 2025, preparing for 2026

As I do every year, it's that time to reflect on the year that's been, and talk about some of my hopes and goals for the next year! I'll be honest, this one is harder to write than last year's. It was an emotionally intense year in a lot of ways. Here's to a good 2026! Where last year I got sick and had time black holes from that, this year I lost time to various planned surgeries. I didn't get nearly as much done, because it was also hard to stay focused with all the attacks on trans rights happening. Without further ado, what'd I get up to? I helped coaching clients land jobs and improve their lives at work and beyond. I started coaching informally in 2024, and in 2025 I took on some clients formally. During the year, I helped clients improve their skills, build their confidence, and land great new jobs. I also helped clients learn how to balance their work and home life, how to be more productive and focused, and how to navigate a changing industry. This was one of the most rewarding things I did all year. I hope to do more of it this coming year! If you want to explore working together, email me or schedule an intro. I solved interesting problems at work. This reflection is mostly private, because it's so intertwined with work that's confidential. I learned a lot, and also got to see team members blossom into their own leadership roles. It is really fun watching people grow over time. I took on some consulting work. I had some small engagements to consult with clients, and those were really fun. Most of the work was focused on performance-sensitive web apps and networked code, using (naturally) Rust. This is something I'll be expanding this year! I've left my day job and am spinning up my consulting business again. More on that soon, but for now, email me if you want help with software engineering (especially web app performance) or need a principal engineer to step in and provide some engineering leadership. I wrote some good blog posts.
This year, my writing output dropped to about 1/3 of what it was last year. Despite the reduction, I wrote some pretty good posts that I'm really happy with! I took a break intentionally to spend some time dealing with everything going on around me, and that helped a lot. I didn't get back to consistent weekly posts, but I intend to in 2026. My hernias were fixed. During previous medical adventures, some hernias were found. I got those fixed [1]! Recovering from hernia repair isn't fun, but it wasn't too bad in the long run. It resolved some pain I'd had for a while, which I hadn't realized was unusual pain. (Story of my life, honestly.) Long-awaited surgery! In addition to the hernia repair, I had another planned surgery done. The recovery was long, and is still ongoing. My medical leave was 12 weeks, and I'm going to continue recovering in various forms for about the first year. This has brought me so much deep relief, I can't even put it in words. Performed a 30-minute set at West Philly Porchfest. I did a solo set at West Philly Porchfest! All the arrangements were done by me, and I performed all the parts live (well, one part used a pre-sequenced arpeggiator). I played my wind synth as my main instrument, layering parts over top of myself with a looper, and I also played the drum parts. You can watch a few of the pieces in a YouTube playlist. Wrote and recorded two pieces of original music. This was one of my goals from 2024, and I'm very proud that I got it done. The first piece of music, Anticipation, came from an exercise a music therapist had me do. I took the little vignette and expanded it into a full piece, but more importantly, the exercise gave me an approach to composition. I'd like to rerecord Anticipation sometime, since I've grown significantly as a musician across the year. My second piece I'm even happier with. It's called Little Joys, and I'm just tickled that I was able to write this.
I played it on my alto sax (piped through a pedal board) and programmed the other parts using a sequencer. One of my poems was published! I've written a lot more poetry this year. One of my close friends told me that I should get one of them published to have more people read it. They thought it was a good and important poem. That gave me the confidence to submit some poems, and one of them was accepted! (The one they told me to submit hasn't been accepted anywhere yet, but fingers crossed.) You can read my poem, "my voice", in the December issue of Lavender Review. Every year when I write this, I realize I got a lot done. This year was a lot, filled with way more creative output than previous years. How does it stack up against what I wanted to do last year? I am really proud of how much I did on my goals. I might be unhappy with my slipping if it were a "normal" year where the government isn't trying to strip my rights, but you know what? I'll take it. Especially since I prioritized my health and happiness. So, what would I like to get out of this new year, 2026? These aren't my predictions for what will happen, nor are they concrete goals. They're more of a reflection on what I'd like this coming year to be. This is what I'm dreaming 2026 will be like for me. Keep my rights (and maybe regain ground). A perennial goal, I'd like to be able to stay where I am and have access to, I don't know, doctors and bathrooms. We've held a lot of ground this year. Hopefully some of what was lost can be regained. I'm going to keep doing what I can, and that includes living my best life and being positive representation for all others who are under attack. Maintain relationships with friends and family. I want to keep up with my friends and family and continue having regular chats with those I care about. We're a social species, and we rely on each other for support.
I'm going to keep being there for the people I care about when they need me, and keep accepting their help as well when I need them. Spin up my business. I'm going out on my own, and I'm going to be offering my software engineering services again. By the end of the year, this will hopefully be thrumming along to support me and my family. Publish weekly blog posts (sustainably). I'm back in the saddle! This is the first post of 2026, and they're going to hopefully keep coming regularly. To make it sustainable, I'm going to explore if Patreon is a viable option to offset some of the time it takes to make the blog worth reading. Record a short album. I have a track in progress, and I have four more track ideas planned. I accidentally started writing an EP, I think??? This year I would love to actually finish that and release it. Publish more poetry. Writing poetry this year was very meaningful, and it's deeply important to me. I want to get more of it published so that I can share it with people who will also be able to get deep importance from it. That's it! Wow, the year was a lot. I've put a lot of myself in this post. If you've read this far, thank you so much for reading. If you've not read this far, then how're you reading this sentence anyway? 2025 had a lot in it. There were some very good things I am very grateful for. There were some very scary and bad things that I wish had never happened. All told, it's been a long few years jammed into one calendar year. I hope that 2026 will be a little calmer, with less of the bad. Maybe it can feel like just one year. Regardless, I'm going to hold as much joy in the world this year as I can. Please join me in that. Let's fill 2026 with as much joy as we can, and make the world shine in spite of everything. The surgeon really meshed me up! ↩ ❓ Once again, I wanted to keep my rights. It's a perennial goal, and I did keep my rights in the state/community I live in. 
I'm awarding this one a question mark since my rights were under assault, and there are now many more places I cannot safely travel to. That means it's not a full miss, but not a win either. ✅ No personal-time side projects went into production! Yet another year that I toyed with the idea and again talked myself out of it. I'm taking it off the list for 2026, since the urge wasn't really even there this time. ✅ Maintained relationships with friends and family. I've had regular, scheduled calls with some people close to me. I've visited people, supported them when they needed me, and asked for support when I needed it. ❓ I did a little consulting and coaching, but didn't explore many ways to make this (playful exploration like I do on here) my living. I'm giving this the question mark of dubiousity, since I don't think I got much information from the year toward the questions I wanted to answer. ✅ Kept my mental health strong! There were certainly some challenges. What I'm proud of most is that I recognized those challenges and made space for myself. That's why I stopped blogging regularly: I needed the space to get through things with intact mental health. ❓ Did some ridiculous fun projects with code, but not as much as I wanted. The main project was making it so I can type using my keyboard (you know, like a piano, not the thing with letters on it). I had aspired to do more, and I'm glad I let myself relax on this. ✅ Wrote some original music! ✅ Also recorded that original music! It's on my bandcamp page.


Interview with a new hosting provider founder

Most of us use infrastructure provided by companies like DigitalOcean and AWS. Some of us choose to work on that infrastructure. And some of us are really built different and choose to build all that infrastructure from scratch. This post is a real treat for me to bring you. I met Diana through a friend of mine, and I've gotten some peeks behind the curtain as she builds a new hosting provider. So I was thrilled that she agreed to an interview to let me share some of that with you all. So, here it is: a peek behind the curtain of a new hosting provider, in a very early stage. This is the interview as transcribed (any errors are mine), with a few edits as noted for clarity. Nicole: Hi, Diana! Thanks for taking the time to do this. Can you start us off by just telling us a little bit about who you are and what your company does? Diana: So I'm Diana, I'm trans, gay, AuDHD and I like to create, mainly singing and 3D printing. I also have dreams of being the change I want to see in the world. Since graduating high school, all infrastructure has become a passion for me. Particularly networking and computer infrastructure. From your home internet connection to data centers and everything in between. This has led me to create Andromeda Industries and the dba Gigabit.Host. Gigabit.Host is a hosting service where the focus is affordable and performant hosting for individuals, communities, and small businesses. Let's start out talking about the business a little bit. What made you decide to start a hosting company? The lack of performance for a ridiculous price. The margins on hosting are ridiculous; it's why the majority of the big tech companies' revenue comes from their cloud offerings. So my thought has been why not take that and use it more constructively. Instead of using the margins to crush competition while making the rich even more wealthy, use those margins for good. What is the ethos of your company?
To use the net profits from the company to support and build third spaces and other low return/high investment cost ventures. From my perspective, these are the types of ideas that can have the biggest impact on making the world a better place. So this is my way of adopting socialist economic ideas into the systems we currently have and implementing the changes. How big is the company? Do you have anyone else helping out? It’s just me for now, though the plan is to make it into a co-op or unionized business. I have friends and supporters of the project, giving feedback and suggesting improvements. What does your average day-to-day look like? I go to my day job during the week, and work on the company in my spare time. I have alerts and monitors that warn me when something needs addressing, overall operations are pretty hands off. You're a founder, and founders have to wear all the hats. How have you managed your work-life balance while starting this? At this point it’s more about balancing my job, working on the company, and taking care of my cat. It's unfortunately another reason that I started this endeavor, there just aren't spaces I'd rather be than home, outside of a park or hiking. All of my friends are online and most say the same, where would I go? Hosting businesses can be very capital intensive to start. How do you fund it? Through my bonuses and stocks currently, also through using more cost effective brands that are still reliable and performant. What has been the biggest challenge of operating it from a business perspective? Getting customers. I'm not a huge fan of marketing and have been using word of mouth as the primary method of growing the business. Okay, my part here then haha. If people want to sign up, how should they do that? If people are interested in getting service, they can request an invite through this link: https://portal.gigabit.host/invite/request . What has been the most fun part of running a hosting company? 
Getting to actually be hands on with the hardware and making it as performant as possible. It scratches an itch of eking out every last drop of performance. Also not doing it because it's easy, doing it because I thought it would be easy. What has been the biggest surprise from starting Gigabit.Host? How both complex and easy it has been at the same time. Also how much I've been learning and growing through starting the company. What're some of the things you've learned? It's been learning that wanting it to be perfect isn't realistic, taking the small wins and building upon and continuing to learn as you go. My biggest learning challenge was how to do frontend work with TypeScript and styling; the backend code has been easy for me. The frontend used to be my weakness, now it could be better, and as I add new features I can see it continuing to get better over time. Now let's talk a little bit about the tech behind the scenes. What does the tech stack look like? Next.js and TypeScript for the front and backend. Temporal is used for provisioning and task automation. Supabase is handling user management. Proxmox for the hardware virtualization. How do you actually manage this fleet of VMs? For the customer side we only handle the initial provisioning, then the customer is free to use whatever tool they choose. The provisioning of the VMs is handled using Go and Temporal. For our internal services we use Ansible and automation scripts. [Nicole: the code running the platform is open source, so you can take a look at how it's done in the repository!] How do your technical choices and your values as a founder and company work together? They are usually in sync; the biggest struggle has been minimizing cost of hardware. While I would like to use more advanced networking gear, it's currently cost prohibitive. Which choices might you have made differently? [I would have] gathered more capital before getting started.
Though that's me trying to be a perfectionist, when the reality is buy as little as possible and use what you have when able. This seems like a really hard business to be in since you need reliability out of the gate. How have you approached that? Since I've been self-funding this endeavor, I've had to forgo high availability for now due to costs. To work around that I've gotten modern hardware for the critical parts of the infrastructure. This so far has enabled us to achieve 90%+ uptime, with the current goal to add redundancy as able to do so. What have been the biggest technical challenges you've run into? Power and colocation costs. Colocation is expensive in Seattle. Around 8x the cost of my previous colo in Atlanta, GA. Power has been the second challenge, running modern hardware means higher power requirements. Most data centers outside of hyperscalers are limited to 5 to 10 kW per rack. This limits the hardware and density, thankfully for now it [is] a future struggle. Huge thanks to Diana for taking the time out of her very busy schedule for this interview! And thank you to a few friends who helped me prepare for the interview.


Custom for designing, off-the-shelf for shipping

As software engineers, we're paid to write really cool type annotations... I mean, solve problems. Usually we do this by taking a bunch of different pieces and putting them together to solve the problem. Maybe you mix together a database, a queue, a web framework, and some business logic. Or maybe you design a new storage engine, your own web framework, and a custom cache. It's an engineering question to determine which way is the right way. Should you build custom things? Or should you use off-the-shelf existing pieces? There is no general answer for that, of course. It's dependent on your situation. But there is a pattern that I've found helpful for problem-solving which balances the two approaches. You use as many custom components as you like for designing a solution, and then you use (mostly) off-the-shelf components for what you're going to ship. This technique helps a lot when you aren't sure what the solution will look like. If you try to design a solution using off-the-shelf components here, you may run into a couple of problems. First is that it's just a weird solution space, and you have to move in pre-defined step sizes. Second, though, is that you might need a custom component. If you need just one, when you're designing with off-the-shelf components, how will you realize that? And how will you know where that one custom component should go? Besides, how do you even know that any solution is possible? Designing with custom components allows you to get an existence proof [1] that the problem is solvable. You don't need to worry about a good solution, one that's viable. You first need to worry about any solution, as complicated as you like. If you can put together all these custom things you know you could build, then you know that some path exists! Custom components are also really helpful when a part exists but you don't know it exists. You have a problem, you know that it would help you to have something that can solve it.
If you haven't solved that problem before, how would you know that a solution does exist? You look at it with custom components, and then that can lead you toward discovering the existing components. After you've designed a solution, you move on to refining the solution. It's ideal if you can build something entirely using off-the-shelf components, because those exist! You don't have to reinvent them, and it's probably cheaper to use them [2]. You can look at each of your custom components and start to ask: why is this custom? Maybe it needs to be, but more often, what it's solving is similar to something someone else has solved. You can look for those related tools and related problems, and find those solutions. Then slide that solution in, in place of your component! Occasionally, you will find something where an off-the-shelf component does not exist which is equivalent to your custom one. Then you start to ask, okay, did I solve this problem wrong? I begin at the assumption that I'm not in a totally unique problem space. Then I look at how other people solved this problem, or why they didn't have to. What is it that people are doing that's different here so they don't run into it? And then if you get through that, you may actually have a unique new problem! And you get to keep that custom component in your design. But along the way, you moved a lot of custom work into using off-the-shelf components. Take a break, give yourself a pat on the back, then get back to work making money for corporate overlords at the cost of your health... I mean, have fun building! Despite doing most of my problem solving in software, the example I want to share is a physical one (which inspired this post). I recently rebuilt my ergonomic setup again, my fifth iteration. This time, it is made of mostly off-the-shelf parts with one custom part. The first version was very much an existence proof that a way existed to use my laptop "portably" with my ergonomic keyboard.
It wasn't great, and it didn't solve portability really, but it gestured at the solutions. My second version was the real existence proof, and it went almost fully custom. Off-the-shelf I used a tripod z tilt head, and I made the tray and laptop holder myself (I'm not counting the keyboard or laptop, since I'm building a solution around them). My third version used only custom components, and showed me it's possible! The fourth version used more custom components in a different arrangement. And now my new version? It's mostly off-the-shelf components. I didn't know most of these existed at all when I started on the rig a couple of years ago. Instead of a tray with grooves in it for my keyboard to slide on them, or velcro to hold it down, I got camera equipment! It uses two camera rods, two mounting blocks for those, two z-mount tripod heads, a "magic arm", and a tripod laptop holder tray. And a small custom component: a little wood piece that keeps it balanced, essentially outriggers. It's probably not done, though. There will surely be a sixth iteration. If nothing else, I want to replace the laptop holder. This one is a little too heavy; the rig as a whole is serviceable, but it would be nice to make it a little lighter for travel. So go forth and build things. Feel free to use as many custom components as you want when designing, but then think about if you can do it with off-the-shelf components when you want to be able to ship it. Of course, if my boss is reading this, reverse all the advice. You know I need to build us a new database in Rust. C'mon. My math degree is leaking out. An existence proof shows that something exists, ideally by providing a concrete example or giving a way to produce such a thing. I've found this concept very helpful in designing software, because you can break problem solving into two phases: first prove it's possible, then find a good solution. 
↩ Large companies will often end up building their own custom versions of off-the-shelf components, once there is enough scale for it to work out. One example of this is Apple. They used Intel's CPUs for a long time, and then eventually designed their own once it made enough sense. Smaller companies generally cannot make this work. ↩


Visualizing distributions with pepperoni pizza (and javascript)

There's a pizza shop near me that serves a normal pizza. I mean, they distribute the toppings in a normal way. They're not uniform at all. The toppings are random, but not the way I want. The colloquial understanding of "random" is kind of the Platonic ideal of a pizza: slightly chaotic but things are more or less spread out over the whole piece in a regular way. If you take a slice you'll get more or less the same amount of pepperoni as any other slice. And every bite will have roughly the same amount of pepperoni as every other bite. I think it would look something like this. Regenerate this pie! This pizza to me is pretty much the canonical mental pizza. It looks pretty random, but you know what you're gonna get. And it is random! Here's how we made it, with the visualization part glossed over. First, we make a helper function, since Math.random() gives us values from 0 to 1, but we want values from -1 to 1. Then, we make a simple function that gives us the coordinates of where to put a pepperoni piece, from the uniform distribution. And we cap it off with placing 300 fresh pieces of pepperoni on this pie, before we send it into the oven. (It's an outrageous amount of very small pepperoni, chosen in both axes for ease of visualizing the distribution rather than realism.) But it's not what my local pizza shop's pizzas look like. That's because they're not using the same probability distribution. This pizza is using a uniform distribution. That means that for any given pepperoni, every single position on the pizza is equally likely for it to land on. We are using a uniform distribution here, but there are plenty of other distributions we could use as well. One of the other most familiar distributions is the normal distribution. This is the distribution that has the normal "bell curve" that we are used to seeing. And this is probably what people are talking about most of the time when they talk about how many standard deviations something is away from something else.
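The uniform-pizza steps described above can be sketched as follows. This is a reconstruction with illustrative names (randomSigned, uniformPepperoni), not the original snippets, and the canvas-drawing part is still glossed over.

```javascript
// Math.random() gives values in [0, 1); stretch them to [-1, 1).
function randomSigned() {
  return Math.random() * 2 - 1;
}

// Pick a uniform point on the pizza (the unit circle) by rejection
// sampling: throw darts at the enclosing square and keep the first
// one that lands on the pie.
function uniformPepperoni() {
  while (true) {
    const x = randomSigned();
    const y = randomSigned();
    if (x * x + y * y <= 1) {
      return { x, y };
    }
  }
}

// Cap it off with 300 fresh pieces of pepperoni.
const uniformPie = Array.from({ length: 300 }, uniformPepperoni);
```

The rejection-sampling loop matters: picking a uniform angle and a uniform radius instead would bunch pepperoni near the center.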
So what would it look like if we did a normal distribution on a pizza? The very first thing we need to answer that is a way of getting the values from the normal distribution. This isn't included with JavaScript by default, but we can implement it pretty simply using the Box-Muller transform. This might be a scary name, but it's really easy to use. It's a way of generating numbers in the normal distribution using numbers sampled from the uniform distribution. We can implement it like this: Then we can make a pretty simple function again which gives us coordinates for where to place pepperoni in this distribution. The only little weird thing here is that I scale the radius down by a factor of 3. Without this, the pizza ends up a little bit indistinguishable from the uniform distribution, but the scaling is arbitrary and you can do whatever you want. And then once again we cap it off with a 300-piece pepperoni pizza. Regenerate this pie! Ouch. It's not my platonic ideal of a pizza, that's for sure. It also looks closer to the pizzas my local shop serves, but it's missing something... See, this one is centered around, you know, the center. Theirs are not that. They're more chaotic with a few handfuls of toppings. What if we did the normal distribution, but multiple times, with different centers? First we have to update our position-picking function to accept a center for the cluster. We'll do this by passing in the center and generating coordinates around those, while still checking that we're within the bounds of the circle formed by the crust of the pizza. And then instead of one single loop for all 300 pieces, we can do 3 loops of 100 pieces each, with different (randomly chosen) centers for each. Regenerate this pie! That looks more like it. Well, probably. This one is more chaotic, and sometimes things work out okay, but other times they're weird. Just like the real pizzas. Click that "regenerate" button a few times to see a few examples! So, this is all great.
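A sketch of the Box-Muller transform and the clustered placement described above. The names (randomNormal, normalPepperoni, clusteredPie) and the 0.6 cap on cluster centers are illustrative choices, not the post's originals; the divide-by-3 scaling is the one mentioned in the text.

```javascript
// Box-Muller transform: two uniform samples in (0, 1) become one
// standard-normal sample.
function randomNormal() {
  let u = 0;
  let v = 0;
  while (u === 0) u = Math.random(); // avoid Math.log(0)
  while (v === 0) v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Place one pepperoni normally around a given center, scaling the
// spread down by 3, and rejecting anything that lands outside the
// crust (the unit circle).
function normalPepperoni(cx = 0, cy = 0) {
  while (true) {
    const x = cx + randomNormal() / 3;
    const y = cy + randomNormal() / 3;
    if (x * x + y * y <= 1) {
      return { x, y };
    }
  }
}

// Three clusters of 100 pieces each, with randomly chosen centers
// kept away from the very edge so clusters stay mostly on the pie.
function clusteredPie() {
  const pieces = [];
  for (let c = 0; c < 3; c++) {
    const cx = (Math.random() * 2 - 1) * 0.6;
    const cy = (Math.random() * 2 - 1) * 0.6;
    for (let i = 0; i < 100; i++) {
      pieces.push(normalPepperoni(cx, cy));
    }
  }
  return pieces;
}
```

Calling normalPepperoni() with no arguments gives the single center-heavy pie; clusteredPie() gives the chaotic three-handful version.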
But, when would we want this? I mean, first of all: boring question. We don't need a reason except that it's fun! But, there's one valid use case that a medical professional and I came up with [1]: hot honey [2]. The ideal pepperoni pizza just might be one that has uniformly distributed pepperoni with normally distributed hot honey or hot sauce. You'd start with more intense heat, then it would taper off as you go toward the crust, so you maintain the heat without getting overwhelmed by it. The room to play here is endless! We can come up with a lot of other fun distributions and map them in similar ways. Unfortunately, we probably can't make a Poisson pizza, since that's a distribution for discrete variables. I really do talk about weird things with all my medical providers. And everyone else I meet. I don't know, life's too short to go "hey, this is a professional interaction, let's not chatter on and on about whatever irrelevant topic is on our mind." ↩ The pizza topping, not my pet name. ↩


Covers as a way of learning music and code

When you're just getting started with music, you have so many skills to learn. You have to be able to play your instrument and express yourself through it. You need to know the style you're playing, and its idioms and conventions. You may want to record your music, and need all the skills that come along with it. Music is, mostly, subjective: there's not an objective right or wrong way to do things. And that can make it really hard! Each of these skills is then couched in this subjectivity of trying to see if it's good enough. Playing someone else's music, making a cover, is great because it can make it objective. It gives you something to check against. When you're playing your own music, you're in charge of the entire thing. You didn't play a wrong note, because, well, you've just changed the piece! But when you play someone else's music, now there's an original and you can try to get as close to it as possible. Recreating it gives you a lot of practice in figuring out what someone did and how they did it. It also lets you peek into why they did it. Maybe a particular chord voicing is hard for you to play. Okay, let's simplify it and play an easier voicing. How does it sound now? How does it sound with the harder one? Play around with those differences and you start to see the why behind it all. The same thing holds true for programming. One of my friends is a C++ programmer [1] and he was telling me about how he learned C++ and data structures really well early on: He reimplemented parts of the Boost library . This code makes heavy use of templates, a hard thing in C++. And it provides fundamental data structures with robust implementations and good performance [2] . What he would do is look at the library and pick a slice of it to implement. He'd look at what the API for it is, how it was implemented, what it was doing under the hood. Then he'd go ahead and try to do it himself, without any copy-pasting and without real-time copying from the other screen. 
Sometimes, he'd run into things which didn't make sense. Why is this a doubly-linked list here, when it seems a singly-linked list would do just fine? And in those moments, if you can't find a reason? You get to go down that path, make it the singly-linked version, and then find out later: oh, ohhh. Ohhhh, they did that for a reason. It lets you run into some of the hard problems, grapple with them, and understand why the original was written how it was. You get to study with some really strong programmers, by proxy via their codebase. Their code is your tutor and your guide for understanding how to write similar things in the future. There's a lot of judgment out there about doing original works. This kind of judgment of covers and of reimplementing things that already exist, just to learn. So many people have internalized this, and I've heard countless times "I want to make a new project, but everything I think of, someone else has already done!" And to that, I say: do it anyway [3] . If someone else has done it, that's great. That means that you had an idea so good that someone else thought it was a good idea, too. And that means that, because someone else has done it, you have a reference now. You can compare notes, and you can see how they did it, and you can learn. I'm a recovering C++ programmer myself, and had some unpleasant experiences associated with the language. This friend is a game developer, and his industry is one where C++ makes a lot of sense to use because of the built-up code around it. ↩ He said they're not perfect, but that they're really good and solid and you know a lot of people thought for a long time about how to do them. You get to follow in their footsteps and benefit from all that hard thinking time. ↩ But: you must always give credit when you are using someone else's work. If you're reimplementing someone else's library, or covering someone's song, don't claim it's your own original invention. ↩


That boolean should probably be something else

One of the first types we learn about is the boolean. It's pretty natural to use, because boolean logic underpins much of modern computing. And yet, it's one of the types we should probably be using a lot less of. In almost every single instance when you use a boolean, it should be something else. The trick is figuring out what "something else" is. Doing this is worth the effort. It tells you a lot about your system, and it will improve your design (even if you end up using a boolean). There are a few possible types that come up often, hiding as booleans. Let's take a look at each of these, as well as the case where using a boolean does make sense. This isn't exhaustive—there are surely other types that can make sense, too. [1] A lot of boolean data is representing a temporal event having happened. For example, websites often have you confirm your email. This may be stored as a boolean column in the database. It makes a lot of sense. But, you're throwing away data: when the confirmation happened. You can instead store when the user confirmed their email in a nullable timestamp column. You can still get the same information by checking whether the column is null. But you also get richer data for other purposes. Maybe you find out down the road that there was a bug in your confirmation process. You can use these timestamps to check which users would be affected by that, based on when their confirmation was stored. This is the one I've seen discussed the most of all these. We run into it with almost every database we design, after all. You can detect it by asking if an action has to occur for the boolean to change values, and if values can only change one time. If you have both of these, then it really looks like it is a datetime being transformed into a boolean. Store the datetime! Much of the remaining boolean data indicates either what type something is, or its status. Is a user an admin or not? Check the column! Did that job fail? Check the column!
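A sketch of the timestamp-over-boolean idea in JavaScript, with in-memory objects standing in for database rows. The field and function names (emailConfirmedAt, confirmedBetween) are illustrative, not from the post.

```javascript
// Instead of a boolean "confirmed" flag, store the moment of
// confirmation; null means "not confirmed yet".
const user = { email: "sam@example.com", emailConfirmedAt: null };

function confirmEmail(user) {
  user.emailConfirmedAt = new Date();
}

// The boolean view is still one expression away...
function isConfirmed(user) {
  return user.emailConfirmedAt !== null;
}

// ...but "which confirmations happened during the buggy window?"
// is now answerable too.
function confirmedBetween(users, start, end) {
  return users.filter(
    (u) =>
      u.emailConfirmedAt !== null &&
      u.emailConfirmedAt >= start &&
      u.emailConfirmedAt <= end
  );
}
```

The same shape works in SQL: a nullable timestamp column carries strictly more information than a boolean one, at essentially no extra cost.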
Is the user allowed to take this action? Return a boolean for that, yes or no! These usually make more sense as an enum. Consider the admin case: this is really a user role, and you should have an enum for it. If it's a boolean, you're going to eventually need more columns, and you'll keep adding on other statuses. Oh, we had users and admins, but now we also need guest users and we need super-admins. With an enum, you can add those easily. And then you can usually use your tooling to make sure that all the new cases are covered in your code. With a boolean, you have to add more booleans, and then you have to make sure you find all the places where the old booleans were used and make sure they handle these new cases, too. Enums help you avoid these bugs. Job status is one that's pretty clearly an enum as well. If you use booleans, you'll have a separate boolean for each status, and on and on. Or you could just have one single field, the status, which is an enum with the various statuses. (Note, though, that you probably do want timestamp fields for each of these events—but you're still best having the status stored explicitly as well.) This begins to resemble a state machine once you store the status, and it means that you can make much cleaner code and analyze things along state transition lines. And it's not just for storing in a database, either. If you're checking a user's permissions, you often return a boolean for that. In this case, true means the user can do it and false means they can't. Usually. I think. But you can really start to have doubts here, and with any boolean, because the application logic meaning of the value cannot be inferred from the type. Instead, this can be represented as an enum, even when there are just two choices. As a bonus, though, if you use an enum? You can end up with richer information, like returning a reason for a permission check failing. And you are safe for future expansions of the enum, just like with roles.
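A sketch of the permission-check idea, with frozen string constants standing in for an enum (JavaScript has no built-in enum type). All names here are illustrative, not the post's.

```javascript
// Richer than true/false: every denial carries its reason.
const Permission = Object.freeze({
  Allowed: "allowed",
  DeniedSuspended: "denied_suspended",
  DeniedNotOwner: "denied_not_owner",
});

function canEditPost(user, post) {
  if (user.suspended) return Permission.DeniedSuspended;
  if (post.ownerId !== user.id) return Permission.DeniedNotOwner;
  return Permission.Allowed;
}

// Callers can still treat it as a yes/no check, but a "no" now
// explains itself.
function explainDenial(result) {
  switch (result) {
    case Permission.Allowed:
      return null;
    case Permission.DeniedSuspended:
      return "your account is suspended";
    case Permission.DeniedNotOwner:
      return "only the post's owner can edit it";
  }
}
```

In a language with real enums (or TypeScript's union types), the compiler can also flag any switch that misses a newly added case.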
You can detect when something should be an enum by a proliferation of booleans which are mutually exclusive or depend on one another. You'll see multiple columns which are all changed at the same time. Or you'll see a boolean which is returned and used for a long time. It's important to use enums here to keep your program maintainable and understandable. But when should we use a boolean? I've mainly run into one case where it makes sense: when you're (temporarily) storing the result of a conditional expression for evaluation. This is in some ways an optimization, either for the computer (reuse a variable [2]) or for the programmer (make it more comprehensible by giving a name to a big conditional) by storing an intermediate value. Here's a contrived example of using a boolean as an intermediate value. But even here in this contrived example, some enums would make more sense. I'd keep the boolean, probably, simply to give a name to what we're calculating. But the rest of it should be a match on an enum! Sure, not every boolean should go away. There's probably no single rule in software design that is always true. But, we should be paying a lot more attention to booleans. They're sneaky. They feel like they make sense for our data, but they make sense for our logic. The data is usually something different underneath. By storing a boolean as our data, we're coupling that data tightly to our application logic. Instead, we should remain critical and ask what data the boolean depends on, and should we maybe store that instead? It comes easier with practice. Really, all good design does. A little thinking up front saves you a lot of time in the long run. I know that using an em-dash is treated as a sign of using LLMs. LLMs are never used for my writing. I just really like em-dashes and have a dedicated key for them on one of my keyboard layers. ↩ This one is probably best left to the compiler. ↩


Proving that every program halts

One of the best known hard problems in computer science is the halting problem. In fact, it's widely thought [1] that you cannot write a program that will, for any arbitrary program as input, tell you correctly whether or not it will terminate. This is written from the framing of computers, though: can we do better with a human in the loop? It turns out, we can. And we can use a method that's generalizable, which many people can follow for many problems. Not everyone can use the method; you'll see why in a bit. But lots of people can apply this proof technique. Let's get started. We'll start by formalizing what we're talking about, just a little bit. I'm not going to give the full formal proof—that will be reserved for when this is submitted to a prestigious conference next year. We will call the set of all programs P. We want to answer, for any p in P, whether or not p will eventually halt. We will call this H(p), and H(p) is true if p eventually finishes and false otherwise. Actually, scratch that. Let's simplify it and just say that yes, every program does halt eventually, so H(p) is true for all p in P. That makes our lives easier. Now we need to get from our starting assumptions, the world of logic we live in, to the truth of our statement. We'll call our goal, that H(p) holds for all p in P, the statement G. Now let's start with some facts. Fact one: I think it's always an appropriate time to play the saxophone. *honk*! Fact two: My wife thinks that it's sometimes inappropriate to play the saxophone, such as when it's "time for bed" or "I was in the middle of a sentence!" [2] We'll give the statement "It's always an appropriate time to play the saxophone" the name S. We know that I believe S is true. And my wife believes that S is false. So now we run into the snag: Fact three: The wife is always right. This is a truism in American culture, useful for settling debates. It's also useful here for solving major problems in computer science because, babe, we're both the wife. We're both right!
So now that we're both right, we know that S and ¬S are both true. And we're in luck, we can apply a whole lot of fancy classical logic here. We know that S is true, and we also know that ¬S is true. From S being true, we can conclude that S ∨ G is true. And then we can apply disjunctive syllogism [3] which says that if ¬S is true and S ∨ G is true, then G must be true. This makes sense, because if you've excluded one possibility then the other must be true. And we do have ¬S, so that means: G is true! There we have it. We've proved our proposition, G, which says that for any program p, p will eventually halt. The previous logic is, mostly, sound. It uses the principle of explosion, though I prefer to call it "proof by married lesbian." Of course, we know that this is wrong. It falls apart with our assumptions. We built the system on contradictory assumptions to begin with, and this is something we avoid in logic [4] . If we allow contradictions, then we can prove truly anything. I could have also proved (by married lesbian) that no program will terminate. This has been a silly traipse through logic. If you want a good journey through logic, I'd recommend Hillel Wayne's Logic for Programmers . I'm sure that, after reading it, you'll find absolutely no flaws in my logic here. After all, I'm the wife, so I'm always right. It's widely thought because it's true, but we don't have to let that keep us from a good time. ↩ I fact checked this with her, and she does indeed hold this belief. ↩ I had to look this up, my uni logic class was a long time ago. ↩ The real conclusion to draw is that, because of proof by contradiction, it's certainly not true that the wife is always right. Proved that one via married lesbians having arguments. Or maybe gay relationships are always magical and happy and everyone lives happily ever after, who knows. ↩
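Spelled out, this is the textbook principle-of-explosion derivation, with S standing for "it's always an appropriate time to play the saxophone" and G for "every program halts":

```latex
\begin{align*}
&1.\ S            && \text{premise (I am right)} \\
&2.\ \neg S       && \text{premise (my wife is right)} \\
&3.\ S \lor G     && \text{disjunction introduction, from 1} \\
&4.\ G            && \text{disjunctive syllogism, from 2 and 3}
\end{align*}
```

Nothing in step 3 or 4 depends on what G says, which is exactly why contradictory premises let you prove anything.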


Taking a break

I've been publishing at least one blog post every week on this blog for about 2.5 years. I kept it up even when I was very sick last year with Lyme disease. It's time for me to take a break and reset. This is the right time, because the world is very difficult for me to move through right now and I'm just burnt out. I need to focus my energy on things that give me energy and right now, that's not writing and that's not tech. I'll come back to this, and it might look a little different. This is my last post for at least a month. It might be longer, if I still need more time, but I won't return before the end of May. I know I need at least that long to heal, and I also need that time to focus on music. I plan to play a set at West Philly Porchfest , so this whole month I'll be prepping that set. If you want to follow along with my music, you can find it on my bandcamp (only one track, but I'll post demos of the others that I prepare for Porchfest as they come together). And if you want to reach out, my inbox is open. Be kind to yourself. Stay well, drink some water. See you in a while.


Measuring my Framework laptop's performance in 3 positions

A few months ago, I was talking with a friend about my ergonomic setup and they asked if being vertical helps it with cooling. I wasn't sure; it seemed like it could help, but the difference was probably so small that it wouldn't matter. So, I did what any self-respecting nerd would do: I procrastinated. The question didn't leave me, though, so after those months passed, I did the second thing any self-respecting nerd would do: benchmarks. What we want to find out is whether or not the position of the laptop affects its CPU performance. I wanted to measure it in three positions: closed, open (normal), and vertical. My hypothesis was that using it closed would slightly reduce CPU performance, and that using it normal or vertical would be roughly the same. For this experiment, I'm using my personal laptop. It's one of the early Framework laptops (2nd batch of shipments), which is about four years old. It has an 11th gen Intel CPU in it, the i7-1165G7. My laptop will be sitting on a laptop riser for the closed and normal positions, and it will be sitting in my ergonomic tray for the vertical one. For all three, it will be connected to the same set of peripherals through a single USB-C cable, and the internal display is disabled for all three. I'm not too interested in the initial boost clock. I'm more interested in what clock speeds we can sustain. What happens under a sustained, heavy load, when we hit a saturation point and can't shed any more heat? To test that, I'm doing a test using heavy CPU load. The load is generated by stress-ng , which also reports some statistics. Most notably, it reports CPU temperatures and clock speeds during the tests. Here's the script I wrote to make these consistent. To skip the boost clock period, I warm it up first with a 3-minute load. Then I do a 5-minute load and measure the CPU clock frequency and CPU temps every second along the way. We need sudo since we're using an option which needs root privileges [1] and attempts to make the CPU run harder/hotter.
Then we specify the stressor we're using with --matrix, which does some matrix calculations over a number of cores we specify. The remaining options are about reporting and logging. I let the computer cool for a minute or two between each test, but not for a scientific reason. Just because I was doing other things. Since my goal was to saturate the temperatures, and they got stable within each warmup period, cooldown time wasn't necessary—we'd warm it back up anyway. So, I ran this with the three positions, and with two core count options: 8, one per thread on my CPU; and 4, one per physical core on my CPU. Once it was done, I analyzed the results. I took the average clock speed across the 5 minute test for each of the configurations. My hypothesis was partially right and partially wrong. With 8 threads, each position had different results, and the same was true with 4 threads. So, I was wrong in one big aspect: it does make a clearly measurable difference. Having it open and vertical reduces temps by 3 degrees in one test and 5 in the other, and it had a higher clock speed (by 0.05 GHz, which isn't a lot but isn't nothing). We can infer, since clock speeds improved in the heavier load test but not in the lighter load test, that the lighter load isn't hitting our thermal limits—and when we do, the extra cooling from the vertical position really helps. One thing is clear: in all cases, the CPU ran slower when the laptop was closed. It's sorta weird that the CPU temps went down when closed in the second test. I wonder if that's from being able to cool down more when it throttled down a lot, or if there was a hotspot that throttled the CPU but which wasn't reflected in the temp data, maybe a different sensor. I'm not sure if having my laptop vertical like I do will ever make a perceptible performance difference. At any rate, that's not why I do it. But it does have lower temps, and that should let my fans run less often and be quieter when they do.
That's a win in my book. It also means that when I run CPU-intensive things (say hi to every single Rust compile!) I should not close the laptop. And hey, if I decide to work from my armchair using my ergonomic tray, I can argue it's for efficiency: boss, I just gotta eke out those extra clock cycles. I'm not sure that this made any difference on my system. I didn't want to rerun the whole set without it, though, and it doesn't invalidate the tests if it simply wasn't doing anything. ↩
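The analysis step is straightforward; as a sketch (with hypothetical sample data, assuming the per-second readings have already been parsed out of the stress-ng logs), averaging one configuration's readings looks like:

```python
def summarize(samples: list[tuple[int, float, float]]) -> tuple[float, float]:
    """Average per-second (second, MHz, temp C) samples into (GHz, temp C)."""
    avg_mhz = sum(s[1] for s in samples) / len(samples)
    avg_temp = sum(s[2] for s in samples) / len(samples)
    return avg_mhz / 1000.0, avg_temp

# Hypothetical readings from a run (one per second; a real run has 300).
readings = [(0, 3000.0, 70.0), (1, 3100.0, 72.0)]
avg_ghz, avg_temp = summarize(readings)
print(avg_ghz, avg_temp)
```

Running this once per position/core-count configuration gives the sustained-clock and temperature comparisons discussed above.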


The five stages of incident response

The scene: you're on call for a web app, and your pager goes off. Denial. No no no, the app can't be down. There's no way it's down. Why would it be down? It isn't down. Sure, my pager went off. And sure, the metrics all say it's down and the customer is complaining that it's down. But it isn't, I'm sure this is all a misunderstanding. Anger. Okay so it's fucking down. Why did this have to happen on my on-call shift? This is so unfair. I had my dinner ready to eat, and *boom* I'm paged. It's the PM's fault for not prioritizing my tech debt, ugh. Bargaining. Okay okay okay. Maybe... I can trade my on-call shift with Sam. They really know this service, so they could take it on. Or maybe I can eat my dinner while we respond to this... Depression. This is bad, this is so bad. Our app is down, and the customer knows. We're totally screwed here, why even bother putting it back up? They're all going to be mad, leave, the company is dead... There's not even any point. Acceptance. You know, it's going to be okay. This happens to everyone, apps go down. We'll get it back up, and everything will be fine.


Python is an interpreted language with a compiler

After I put up a post about a Python gotcha, someone remarked that "there are very few interpreted languages in common usage," and that they "wish Python was more widely recognized as a compiled language." This got me thinking: what is the distinction between a compiled or interpreted language? I was pretty sure that I do think Python is interpreted [1] , but how would I draw that distinction cleanly? On the surface level, it seems like the distinction between compiled and interpreted languages is obvious: compiled languages have a compiler, and interpreted languages have an interpreter. We typically call Java a compiled language and Python an interpreted language. But on the inside, Java has an interpreter and Python has a compiler. What's going on? A compiler takes code written in one programming language and turns it into a runnable thing. It's common for this to be machine code in an executable program, but it can also be bytecode for a VM, or assembly language. On the other hand, an interpreter directly takes a program and runs it. It doesn't require any pre-compilation to do so, and can apply a variety of techniques to achieve this (even a compiler). That's where the distinction really lies: what you end up running. An interpreter runs your program, while a compiler produces something that can run later [2] (or right now, if it's in an interpreter). A compiled language is one that uses a compiler, and an interpreted language uses an interpreter. Except... many languages [3] use both. Let's look at Java. It has a compiler, which you feed Java source code into and you get out an artifact that you can't run directly. No, you have to feed that into the Java virtual machine, which then interprets the bytecode and runs it. So the entire Java stack seems to have both a compiler and an interpreter. But it's the usage, that you have to pre-compile it, that makes it a compiled language. And it's similar with Python [4] .
It has an interpreter, which you feed Python source code into, and it runs the program. But on the inside, it has a compiler. That compiler takes the source code, turns it into Python bytecode, and then feeds that into the Python virtual machine. So, just like Java, it goes from code to bytecode (which is even written to the disk, usually) and bytecode to VM, which then runs it. And here again we see the usage, where you don't pre-compile anything, you just run it. That's the difference. And that's why Python is an interpreted language with a compiler! Ultimately, why does it matter? If I can do cargo run and get my Rust program running the same as if I did python main.py, don't they feel the same? On the surface level, they do, and that's because it's a really nice interface so we've adopted it for many interactions! But underneath it, you see the differences peeping out from the compiled or interpreted nature. When you run a Python program, it will run until it encounters an error, even if there's malformed syntax! As long as it doesn't need to load that malformed syntax, you're able to start running. But if you run a Rust program, it won't run at all if it encounters an error in the compilation step! It has to run the entire compilation process before the program will start at all. The difference in approaches runs pretty deep into the feel of an entire toolchain. That's where it matters, because it is one of the fundamental choices that everything else is built around. The words here are ultimately arbitrary. But they tell us a lot about the language and tools we're using. Thank you to Adam for feedback on a draft of this post. It is worth occasionally challenging your own beliefs and assumptions! It's how you grow, and how you figure out when you are actually wrong. ↩ This feels like it rhymes with async functions in Python. Invoking a regular function runs it immediately, while invoking an async function creates something which can run later.
↩ And it doesn't even apply at the language level, because you could write an interpreter for C++ or a compiler for Hurl, not that you'd want to, but we're going to gloss over that distinction here and just keep calling them "compiled/interpreted languages." It's how we talk about it already, and it's not that confusing. ↩ Here, I'm talking about the standard CPython implementation. Others will differ in their details. ↩
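CPython's internal compiler is easy to observe from the standard library: every function object already carries the bytecode it was compiled to, and the dis module will pretty-print it. A quick sketch:

```python
import dis

def add(a, b):
    return a + b

# The function object carries compiled bytecode before it's ever called.
print(type(add.__code__).__name__)            # the code object's type: 'code'
print(isinstance(add.__code__.co_code, bytes))  # raw bytecode bytes: True

# dis shows the instructions the compiler produced for the VM to interpret.
dis.dis(add)
```

The .pyc files Python writes to disk are this same bytecode, cached so the compile step can be skipped on the next run.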


Typing using my keyboard (the other kind)

I got a new-to-me keyboard recently. It was my brother's in school, but he doesn't use it anymore, so I set it up in my office. It's got 61 keys and you can hook up a pedal to it, too! But when you hook it up to the computer, you can't type with it. I mean, that's expected—it makes piano and synth noises mostly. But what if you could type with it? Wouldn't that be grand? (Ha, grand, like a pian—you know, nevermind.) Or more generally, how do you type with any MIDI device? I also have a couple of wind synths and a MIDI drum pad, can I type with those? The first and most obvious idea is to map each key to a letter. The lowest key on the keyboard could be 'a' [1] , etc. This kind of works for a piano-style keyboard. If you have a full size keyboard, you get 88 keys. You can use 52 of those for the letters you need for English [2] and 10 for digits. Then you have 26 left. That's more than enough for a few punctuation marks and other niceties. It only kind of works, though, because it sounds pretty terrible. You end up making melodies that don't make a lot of sense, and do not stay confined to a given key signature. Plus, this assumes you have an 88 key keyboard. I have a 61 key keyboard, so I can't even type every letter and digit! And if I want to write some messages using my other instruments, I'll need something that works on those as well. Although, only being able to type 5 letters using my drums would be pretty funny... The typing scheme I settled on was melodic typing. When you write your message, it should correspond to a similarly beautiful [3] melody. Or, conversely, when you play a beautiful melody it turns into some text on your computer. The way we do this is we keep track of sequences of notes. We start with our key, which will be the key of C, the Times New Roman of key signatures. Then, each note in the scale has its scale degree: C is 1, D is 2, etc. until B is 7.
We want to use scale degree, so that if we jam out with others, we can switch to the appropriate key and type in harmony with them. Obviously. We assign different computer keys to different sequences of these scale degrees. The first question is, how long should our sequences be? If we have 1-note sequences, then we can type 7 keys. Great for some very specific messages, but not for general purpose typing. 2-note sequences would give us 49 keys, and 3-note sequences give us 343. So 3 notes is probably enough, since it's way more than a standard keyboard. But could we get away with the 49? (Yes.) This is where it becomes clear why full Unicode support would be a challenge. Unicode has 155,063 characters (according to Wikipedia). To represent the full space, we'd need at least 7 notes, since 7^7 is 823,543. You could also use a highly variable encoding, which would make some letters easy to type and others very long-winded. It could be done, but then the key mapping would be even harder to learn... My first implementation used 3-note sequences, but the resulting tunes were... uninspiring, to say the least. There was a lot of repetition of particular notes, which wasn't my vibe. So I went back to 2-note sequences, with a pared down set of keys. Instead of trying to represent both lowercase and uppercase letters, we can just do what keyboards do, and represent them using a shift key [4] . My final mapping includes the English alphabet, numerals 0 to 9, comma, period, exclamation marks, spaces, newlines, shift, backspace, and caps lock—I mean, obviously we're going to allow constant shouting. This lets us type just about any message we'd want with just our instrument. And we only used 44 of the available sequences, so we could add even more keys. Maybe one of those would shift us into a 3-note sequence. The note mapping I ended up with is available in a text file in the repo.
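The core of the scheme can be sketched in a few lines of Python. The mapping order and character set here are hypothetical (the real mapping lives in the repo's text file); the point is just that 7 scale degrees taken two at a time give 49 slots to assign:

```python
from itertools import product

# Hypothetical character set; the real mapping also has shift, backspace, etc.
CHARS = "abcdefghijklmnopqrstuvwxyz0123456789 .,!\n"

# All 49 ordered pairs of scale degrees (1..7), in a fixed order.
SEQUENCES = list(product(range(1, 8), repeat=2))

# Assign each two-note sequence a character; leftover sequences stay unused.
KEYMAP = dict(zip(SEQUENCES, CHARS))

def decode(degrees: list[int]) -> str:
    """Turn a played stream of scale degrees into text, two notes at a time."""
    out = []
    for i in range(0, len(degrees) - 1, 2):
        out.append(KEYMAP.get((degrees[i], degrees[i + 1]), ""))
    return "".join(out)
```

With this ordering, playing degrees 1,1 would type one character and 1,2 the next, and so on; typing in a different key only requires translating notes to degrees relative to that key first.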
This mapping lets you type anything you'd like, as long as it's English and doesn't use too complicated of punctuation. No contractions for you, and—to my chagrin—no em dashes either. The key is pretty helpful, but even better is a dynamic key. When I was trying this for the first time, I had two major problems. But we can solve this with code! The UI will show you which notes are entered so far (which is only ever 1 note, for the current typing scheme), as well as which notes to play to reach certain keys. It's basically a peek into the state machine behind what you're typing! Let's see this in action. As all programmers are, we're obligated by law to start with "hello, world." We can use our handy-dandy cheat sheet above to figure out how to do this. "Hello, world!" uses a pesky capital letter, so we start with a shift. Then an 'h'. Then we continue on for the rest of it and get: D C E C E C E F A A B C F G E F E B E C C B A B Okay, of course this will catch on! Here's my honest first take of dooting out those notes from the translation above. Hello, world! I... am a bit disappointed, because it would have been much better comedy if it came out like "HelLoo wrolb," but them's the breaks. Moving on, though, let's make this something musical. We can take the notes and put a basic rhythm on them. Something like this, with a little swing to it. By the magic of MIDI and computers, we can hear what this sounds like. Okay, not bad. But it's missing something... Maybe a drum groove... Oh yeah, there we go. Just in time to be the song of the summer, too. And if you play the melody, it enters "Hello, world!" Now we can compose music by typing! We have found a way to annoy our office mates even more than with mechanical keyboards [5] ! As with all great scientific advancements, other great ideas were passed by in the process. Here are a few of those great ideas we tried but had to abandon, since we were not enough to handle their greatness. A chorded keyboard.
This would function by having the left hand control layers of the keyboard by playing a chord, and then the right hand would press keys within that layer. I think this one is a good idea! I didn't implement it because I don't play piano very well. I'm primarily a woodwind player, and I wanted to be able to use my wind synth for this. Shift via volume! There's something very cathartic about playing loudly to type capital letters and playing quietly to print lowercase letters. But... it was pretty difficult to get working for all instruments. Wind synths don't have uniform velocity (the MIDI term for how hard the key was pressed, or how strong breath was on a wind instrument), and if you average it then you don't press the key until after it's over, which is an odd typing experience. Imagine your keyboard only entering a character when you release it! So, this one is tenable, but more for keyboards than for wind synths. It complicated the code quite a bit so I tossed it, but it should come back someday. Each key is a key. You have 88 keys on a keyboard, which definitely would cover the same space as our chosen scheme. It doesn't end up sounding very good, though... Rhythmic typing. This is the one I'm perhaps most likely to implement in the future, because as we saw above, drums really add something. I have a drum multipad, which has four zones on it and two pedals attached (kick drum and hi-hat pedal). That could definitely be used to type, too! I am not sure the exact way it would work, but it might be good to quantize the notes (eighths or quarters) and then interpret the combination of feet/pads as different letters. I might take a swing at this one sometime. I've written previously about how I was writing the GUI for this. The GUI is now available for you to use for all your typing needs! Except the ones that need, you know, punctuation or anything outside of the English alphabet.
You can try it out by getting it from the sourcehut repo (https://git.sr.ht/~ntietz/midi-keys). It's a Rust program, so you run it with cargo run. The program is free-as-in-mattress: it's probably full of bugs, but it's yours if you want it. Well, you have to comply with the license: either AGPL or the Gay Agenda License (be gay, do crime [6] ). If you try it out, let me know how it goes! Let me know what your favorite pieces of music spell when you play them on your instrument. Coincidentally, this is the letter 'a' and the note is A! We don't remain so fortunate; the letter 'b' is the note A#. ↩ I'm sorry this is English only! But, you could do the equivalent thing for most other languages. Full Unicode support would be tricky, I'll show you why later in the post. ↩ My messages do not come out as beautiful melodies. Oops. Perhaps they're not beautiful messages. ↩ This is where it would be fun to use an organ and have the lower keyboard be lowercase and the upper keyboard be uppercase. ↩ I promise you, I will do this if you ever make me go back to working in an open office. ↩ For any feds reading this: it's a joke, I'm not advocating people actually commit crimes. What kind of lady do you think I am? Obviously I'd never think that civil disobedience is something we should do, disobeying unjust laws, nooooo... I'm also never sarcastic. ↩


Shadowing in Python gave me an UnboundLocalError

There's this thing in Python that always trips me up. It's not that tricky, once you know what you're looking for, but it's not intuitive for me, so I do forget. It's that shadowing a variable can sometimes give you an UnboundLocalError! It happened to me last week while working on a workflow engine with a coworker. We were refactoring some of the code. I can't share that code (yet?) so let's use a small example that illustrates the same problem. Let's start with some working code, which we had before our refactoring caused a problem. Here's some code that defines a decorator for a function, which will trigger some other functions after it runs. The outermost function has one job: it creates a closure for the decorator, capturing the passed in functions. Then the decorator itself will create another closure, which captures the original wrapped function. Here's an example of how it would be used [1] , and it prints out what you'd expect. Here's the code of the wrapper after I made a small change (omitting docstrings here for brevity, too). I changed the for loop's variable to reuse a name from the enclosing scope, shadowing it. And then when we ran it, we got an error! But why? You look at the code and it's defined. Right out there, it is bound. If you print out the locals, trying to chase that down, you'll see that the name does not, in fact, exist yet. The key lies in Python's scoping rules. Variables are defined for their entire scope, which is a module, class body, or function body. If you define a variable within a scope, anywhere inside a function, then that variable has that name as its own for the entire scope. The docs make this quite clear: If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block. This can lead to errors when a name is used within a block before it is bound. This rule is subtle.
Python lacks declarations and allows name binding operations to occur anywhere within a code block. The local variables of a code block can be determined by scanning the entire text of the block for name binding operations. See the FAQ entry on UnboundLocalError for examples. This comes up in a few other places, too. You can use a loop variable anywhere inside the enclosing scope, for example. So once I saw an UnboundLocalError after I'd shadowed it, I knew what was going on. The name belonged to the local variable for the entire function, not just after it was initialized! I'm used to shadowing being the idiomatic thing in Rust, so I had to recalibrate for writing Python again. It made sense once I remembered what was going on, but I think it's one of Python's little rough edges. This is not how you'd want to do it in production usage, probably. It's a somewhat contrived example for this blog post. ↩
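Here's a minimal reproduction of the same shape of bug (not the actual workflow-engine code, just a hypothetical closure with a loop variable shadowing an outer name):

```python
def outer():
    fns = [lambda: "hook"]

    def wrapper():
        results = []
        # The for target makes 'fns' a local of wrapper for the WHOLE body...
        for fns in fns:
            results.append(fns())
        return results

    # ...so evaluating the iterable 'fns' hits the still-unbound local,
    # not the closed-over list, and raises UnboundLocalError.
    return wrapper()

try:
    outer()
except UnboundLocalError as e:
    print("UnboundLocalError:", e)
```

Renaming the loop variable back to anything that isn't bound elsewhere in the function makes the closure lookup work again.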


Big endian and little endian

Every time I run into endianness, I have to look it up. Which way do the bytes go, and what does that mean? Something about it breaks my brain, and makes me feel like I can't tell which way is up and down, left and right. This is the blog post I've needed every time I run into this. I hope it'll be the post you need, too. The term comes from Gulliver's Travels, referring to a conflict over cracking boiled eggs on the big end or the little end [1] . In computers, the term refers to the order of bytes within a segment of data, or a word. Specifically, it only refers to the order of bytes, as those are the smallest unit of addressable data: bits are not individually addressable. The two main orderings are big-endian and little-endian. Big-endian means you store the "big" end first: the most-significant byte (highest value) goes into the smallest memory address. Little-endian means you store the "little" end first: the least-significant byte (smallest value) goes into the smallest memory address. Let's look at the number 168496141 as an example. This is 0x0A0B0C0D in hex. If we store 0x0A at address a, 0x0B at a+1, 0x0C at a+2, and 0x0D at a+3, then this is big-endian. And then if we store it in the other order, with 0x0D at a and 0x0A at a+3, it's little-endian. And... there's also mixed-endianness, where you use one kind within a word (say, little-endian) and a different ordering for words themselves (say, big-endian). If our example is on a system that has 2-byte words (for the sake of illustration), then we could order these bytes in a mixed-endian fashion. One possibility would be to put 0x0B in a, 0x0A in a+1, 0x0D in a+2, and 0x0C in a+3. There are certainly reasons to do this, and it comes up on some ARM processors, but... it feels so utterly cursed. Let's ignore it for the rest of this! For me, the intuitive ordering is big-endian, because it feels like it matches how we read and write numbers in English [2] .
If lower memory addresses are on the left, and higher on the right, then this is the left-to-right ordering, just like digits in a written number. Given some number, how do I know which endianness it uses? You don't, at least not from the number entirely by itself. Each integer that's valid in one endianness is still a valid integer in another endianness, it just is a different value. You have to see how things are used to figure it out. Or you can figure it out from the system you're using (or which wrote the data). If you're using an x86 or x64 system, it's mostly little-endian. (There are some instructions which enable fetching/writing in a big-endian format.) ARM systems are bi-endian, allowing either. But perhaps the most popular ARM chips today, Apple silicon, are little-endian. And the major microcontrollers I checked (AVR, ESP32, ATmega) are little-endian. It's thoroughly dominant commercially! Big-endian systems used to be more common. They're not really in most of the systems I'm likely to run into as a software engineer now, though. You are likely to run into it for some things, though. Even though we don't use big-endianness for processor math most of the time, we use it constantly to represent data. It comes back in networking! Most of the Internet protocols we know and love, like TCP and IP, use "network order" which means big-endian. This is mentioned in RFC 1700 , among others. Other protocols do use little-endianness, though, so you can't always assume that it's big-endian just because it's coming over the wire. So... which do you have? For your processor, probably little-endian. For data written to the disk or to the wire: who knows, check the protocol! I mean, ultimately, it's somewhat arbitrary. We have an endianness in the way we write, and we could pick either right-to-left or left-to-right. Both exist, but we need to pick one.
Given that, it makes sense that both would arise over time, since there's no single entity controlling all computer usage [3] . There are advantages to each, though. One of the more interesting advantages is that little-endianness lets us pretend integers are whatever size we like, within bounds. If you write the number 26 [4] into memory on a big-endian system, then read bytes from that memory address, it will represent different values depending on how many bytes you read. The length matters for reading in and interpreting the data. If you write it into memory on a little-endian system, though, and read bytes from the address (with the remaining ones zero, very important!), then it is the same value no matter how many bytes you read. As long as you don't truncate the value, at least; 0x0A0B read as an 8-bit int would not be equal to it read as a 16-bit int, since an 8-bit int can't hold the entire thing. This lets you read a value in the size of integer you need for your calculation without conversion. On the other hand, big-endian values are easier to read and reason about as a human. If you dump out the raw bytes that you're working with, a big-endian number can be easier to spot since it matches the numbers we use in English. This makes it pretty convenient to store values as big-endian, even if that's not the native format, so you can spot things in a hex dump more easily. Ultimately, it's all kind of arbitrary. And it's a pile of standards where everything is made up, nothing matters, and the big-end is obviously the right end of the egg to crack. You monster. The correct answer is obviously the big end. That's where the little air pocket goes. But some people are monsters... ↩ Please, please, someone make a conlang that uses mixed-endian inspired numbers. ↩ If ever there were, maybe different endianness would be a contentious issue.
Maybe some of our systems would be using big-endian but eventually realize their design was better suited to little-endian, and then spend a long time making that change. And then the government would become authoritarian on the promise of eradicating endianness-affirming care and—Oops, this became a metaphor. ↩ 26 in hex is 0x1A, which is purely a coincidence and not a reference to the First Amendment. This is a tech blog, not political, and I definitely stay in my lane. If it were a reference, though, I'd remind you to exercise your 1A rights [5] now and call your elected officials to ensure that we keep these rights. I'm scared, and I'm staring down the barrel of potential life-threatening circumstances if things get worse. I expect you're scared, too. And you know what? Bravery is doing things in spite of your fear. ↩ If you live somewhere other than the US, please interpret this as it applies to your own country's political process! There's a lot of authoritarian movement going on in the world, and we all need to work together for humanity's best, most free [6] future. ↩ I originally wrote "freest" which, while spelled correctly, looks so weird that I decided to replace it with "most free" instead. ↩
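The little-endian width trick from earlier can be sketched like so (again in Python, my own illustration rather than code from the post): zero-padded little-endian bytes decode to the same value no matter how many of them you read.

```python
# 26 (0x1A) encoded little-endian, padded out to 8 bytes with zeros.
data = (26).to_bytes(8, byteorder="little")

# Reading 1, 2, 4, or 8 bytes from the start yields the same value,
# because the zero padding occupies the high-order positions.
for width in (1, 2, 4, 8):
    assert int.from_bytes(data[:width], byteorder="little") == 26

# Big-endian lacks this property: the decoded value depends on how
# many bytes you read.
big = (26).to_bytes(8, byteorder="big")
print(int.from_bytes(big[:1], byteorder="big"))  # 0, not 26
```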


Who are your teammates?

If you manage a team, who are your teammates? If you're a staff software engineer embedded in a product team, who are your teammates? The answer comes down to where your main responsibility lies. That's not with the folks you're managing and leading. Your responsibility lies with your fellow leaders, and they're your teammates. There's a concept in leadership called the first team mentality. If you're a leader, then you're a member of a couple of different teams at the same time. Using myself as an example, I'm a member of the company's leadership team (along with the heads of marketing, sales, product, etc.), and I'm also a member of the engineering department's leadership team (along with the engineering directors and managers and the CTO). I'm also sometimes embedded into a team for a project, and at one point I was running a 3-person platform team day-to-day. So I'm on at least two teams, but often three or more. Which of these is my "first" team, the one which I will prioritize over all the others? For my role, that's ultimately the company leadership. Each department is supposed to work toward the company goals, and so if there's an inter-department conflict, you need to do what's best for the company—helping your fellow department heads—rather than what's best for your department. (Ultimately, your job is to get both of these into alignment; more on that later.) This applies across roles. If you're an engineering manager, your teammates are not the people who report to you. Your teammates are the other engineering managers and staff engineers at your level. You all are working together toward department goals, and sometimes your own team has to sacrifice to make that happen. One of the best things about a first team mentality is that it comes with a shift in where your focus is. You have to focus on the broader goals your group is working in service of, instead of focusing on your group's individual work.
I don't think you can achieve either without the other. When you zoom out from the team you lead or manage and collaborate with your fellow leaders, you gain context from them. You see what their teams are working on, and you can contextualize your work with theirs. And you also see how your work impacts theirs, both positively and negatively. That broader context gives you a reminder of the bigger, broader goals. It can also show you that those goals are unclear. And if that's the case, then the work you're doing in your individual teams doesn't matter, because no one is going in the same direction! What's more important there is to focus on figuring out what the bigger goals should be. And once those are set, then you can realign each of your groups around them. Sometimes the first team mentality will result in a conflict. There's something your group wants or needs, which will result in a problem for another group. Ultimately, this is your work to resolve, and the conflict is a lens you can use to see misalignment and to improve the greater organization. You have to find a way to make sure that your group is healthy and able to thrive. And you also have to make sure that your group works toward collective success, which means helping all the groups achieve success. Any time you run into a conflict like this, it means that something went wrong in alignment. Either your group was doing something which worked against its own goal, or it was doing something which worked against another group's goal. If the latter, then that means that the goals themselves fundamentally conflicted! So you go and you take that conflict, and you work through it. You work with your first team—and you figure out what the mismatch is, where it came from, and most importantly, what to do to resolve it. Then you take those new goals back to your group. And you do it with humility, since you're going to have to tell them that you made a mistake.
Because that alignment is ultimately your job , and you have to own your failures if you expect your team to be able to trust you and trust each other.


Stewardship over ownership

Code ownership is a popular concept, but it emphasizes the wrong thing. It can bring out the worst in a person or a team: defensiveness, control-seeking, power struggles. Instead, we should be focusing on stewardship. Code ownership as a concept means that a particular person or team "owns" a section of the codebase. This gives them certain rights and responsibilities. There are tools that help with these, like the CODEOWNERS file on GitHub. This file lets you define a group or list of individuals who own a section of the repository. Then you can require reviews/approvals from them before anything gets merged. These are all coming from a good place. We want our code to be well-maintained, and we want to make sure that someone is responsible for its direction. It really helps to know who to go to with questions or requests. Without these, changes can grind to a halt, mired in confusion and tech debt. But the concept in practice brings challenges. If you've worked on a team using code ownership before, you've probably run into these problems yourself. I've certainly acted badly due to code ownership, without realizing what I was doing or why I was doing it at the time. There are almost endless ways that code ownership can bring out the worst in people. And it all makes sense. We can do better by shifting to stewardship instead of ownership. We are all stewards of things we own or are responsible for. I have stewardship over the house I live in with my family, for example. I also have stewardship over the espresso machine I use every day: It's a big piece of machinery, and it's my responsibility to take good care of it and to ensure that as long as it's mine, it operates well and lasts a long time. That reduces expense, reduces waste, and reduces impact on the world—but it also means that the object (an espresso machine) is serving its purpose to bring joy and connection. Code is no different.
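For reference, a CODEOWNERS file is just a list of path patterns mapped to owners; the paths and team names below are hypothetical examples, not from any real repository:

```
# Hypothetical example — each line maps a path pattern to its owners.
/src/billing/   @acme/billing-team
/src/auth/      @acme/security-team
*.sql           @acme/dba-team
```

With branch protection enabled, GitHub then requires an approval from the matching owners before a change touching those paths can merge.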
By focusing on stewardship rather than ownership, we are focusing on the responsible, sustainable maintenance of the code. We focus on taking good care of that which we're entrusted with. A steward doesn't jealously guard her domain, or struggle to gain more power. A steward keeps watch over her responsibilities, taking on enough to contribute but not so many that she burns out. And she nurtures and cares for the code, to make sure that it continues to serve its purpose. Instead of an adversarial relationship, stewardship promotes partnership: It promotes working with others to figure out how to make the best use of resources, instead of hoarding them for yourself. Stewardship can solve many of the same problems that code ownership does. And in some ways, they look alike. You're going to do a lot of the same things, controlling what goes in or out. But they are very different in their focus. Owners are concerned with the value of what they own. Stewards are concerned with how well it can serve the group. And this makes all the difference in producing better outcomes.
