Latest Posts (10 found)
Schneems 4 months ago

Don't McBlock me

"That cannot be done" is rarely true, but it's a phrase I've heard more and more from technical people, offered without any rationale or further explanation. This tendency to use absolute language when making blocking statements reminded me of a useful "McDonald's rule" that I was introduced to many years ago when deciding where to eat with friends. It goes something like this: If I say to a friend, "I'm hungry, let's go to McDonald's" (or wherever), they're not allowed to block me without making a counter-suggestion. They can't just say "No," they have to say something like "How about Arby's" instead.

This simple rule changes the dynamic from suggester/blocker to proposer/counter-proposer. If someone simply refuses to be involved, they've McBlocked me. In practice, though, it's hard to always have a suggestion you're willing to run with, so a relaxed version of the rule is that the other person has to AT LEAST specify why not. Instead of "no" it must be "no, because." For example, it could be "I had a burger for lunch" or "I'm banned for life after jumping on a table and demanding Szechuan dipping sauce." This helps show that you're not just blocking things: you understand the goal and want to move the conversation forward. It gives the other person something to work with.

Easy for eats, but what about tech? I work for Heroku, and recently there was a stack EOL where customers were asked to migrate off of Ubuntu 20.04 (heroku-20). In this (many-month-long) deprecation process, I saw a lot of people make a lot of absolute statements. One of them was: "You cannot run Rails 4 on heroku-22." Which, as you'll guess, is only half the story. What they meant was: "Rails 4.2 saw its last release in 2020 and is quite thoroughly EOL. That version cannot run on any Ruby version from 3.1.x to 3.4.x, which are the versions present on heroku-22 or above, due to library errors. Therefore, to run Rails 4 on heroku-22, you would have to fork it, patch the security vulnerabilities yourself, and update it to run on a modern Ruby version."

Which, to be fair, sounds a lot like "cannot be done," but with more words. But, as you'll also have likely guessed, once you know about the possible path forward, however impractical, it might give you other ideas. You might start asking questions like, "If we have to fork and maintain it, anyone else would have to as well; I wonder if someone else already did." This could send you down a quick search where you might discover that Rails LTS is a thing and basically provides a managed fork of Rails 4.2, for a fee, that runs with the latest Ruby versions. I wrote about the existence of this service previously.

Now, that new thing could still be a bad idea, and you might still not end up doing it, but the key here is that you're not saying "no," you're saying "here are the barriers I know about." A good way to test whether you're just using more words to say "no" is to ask whether your statement is falsifiable or satisfiable in some way. A "no, because" statement instead of a plain "no" moves the problem from a blocker into an opportunity.

You can see this in a really good open source conversation. Instead of "this can't be done," someone can send a PR. Instead of "I won't merge your PR," they can comment: "I agree/disagree with the problem/opportunity you've raised; I'm uncomfortable merging this because of ." A quick story, and you can go.
Before writing this post, I pitched the wordsmithing of "McBlocker" to my wife at the dinner table (where you can tell we are very cool and fun people). My kids, ages 7 and 9, were there. My 9 y/o asked me to take him to the library after dinner (did I mention how cool we are?), where I talked with him about types of non-fiction he might like. I was describing biographies when he blurted out, "I don't like biographies." To which I responded, "Hey, don't McBlock me," and when I got a laugh of recognition in return, I figured the phrase was worth a blog post. If you enjoyed this, you might enjoy my service for helping people contribute to open source (free) or my book How to Open Source (paid). Now, go McRepost this to your favorite federated social network!

Schneems 5 months ago

Bad Type Patterns - The Duplicate duck

Why aren't people writing more types? Perhaps it's because the intermediate and expert developers deleted the patterns that didn't work and left no trace for beginners to learn from. This post details some code I recently deleted that has a pattern I call the "duplicate duck." You can learn the process I used to develop the type, and why I deleted it. Further, I advocate for Rust developers to document and share their mistakes in the hope that we can all learn from them.

A "duplicate duck" is a type that implements a subset of traits of a popular type with the same results. In my case, I wrote a type that I later realized was identically duck typed to syn::Error, and my struct added nothing. I deleted my type with no loss in functionality, and the world was better for it. I saved my code before throwing it away. The following is the story of my design process and eventual epiphany.

Quick background: I write Rust for Heroku, where I maintain the Ruby Cloud Native Buildpack. I also maintain a free service, CodeTriage, and wrote a book, How to Open Source, for turning coders into contributors.

I've been hacking on proc macros recently; you can read about a recent investigation in "A Daft proc-macro trick: How to Emit Partial-Code + Errors". I want proc macro authors to emit as many accumulated errors as possible (versus stopping on the first one), and I'm also a fan of unit testing. I wanted to add a return type for my functions that said, "I return many accumulated errors," and I wanted that return type to be unit-testable.

In my code, I've been accumulating errors in a collection, which makes it easy to combine them into a single syn::Error. However, I don't want to return a result holding that collection from my functions, as the error state isn't guaranteed to be non-empty. A good type should make invalid state impossible to represent. To guarantee my type always had at least one error, I separated out the first error from the rest of the collection. Even if the rest of the collection is empty, the type definition guarantees we can always turn this into a single error.

Warning: Just because the docs state something doesn't mean it's true.

Note the visibility: by default I make the struct and its associated functions visible, but not the fields. When I'm unsure of my design, it's easier to change things later if all access goes through functions.

This type allowed me to introduce helper functions that say, "I take in any slice of attributes and parse them, returning either a vector of parsed values or one or more syn errors." So far, so good. But my macro needs a syn::Error to generate error tokens, and my function returns my custom type, so I needed a way to convert my type into a syn::Error. Based on the properties of the type, we know we can always perform that conversion infallibly, so I exposed it via a conversion trait implementation. As a bonus, the try operator (?) will implicitly call that conversion, which keeps call sites tidy.

With that added, I needed a way to test my logic to ensure I was capturing multiple errors. To render the error on failure, the type needs to implement Debug. It's not pretty, but it worked and was easy; this code path is only ever called under test. To expose multiple errors for testing, I chose to implement the IntoIterator trait, meaning we can now convert the struct into something that produces a series of syn::Error-s. Since we've already got a collection lying around, and I knew that it implemented the same trait, I piggybacked my logic on top.
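Since the original code was deleted, here's a loose reconstruction of what a "duplicate duck" like the one described above can look like. The names (ManyErrors and its methods) are hypothetical; the point is that every capability shown here, infallible conversion into a single error and iteration over all of the errors, turns out to be something syn::Error already provides on its own.

```rust
use syn::Error;

// Hypothetical reconstruction: a container guaranteed to hold at least one error.
#[derive(Debug)]
pub(crate) struct ManyErrors {
    first: Error,
    rest: Vec<Error>,
}

impl ManyErrors {
    pub(crate) fn new(first: Error) -> Self {
        Self { first, rest: Vec::new() }
    }

    pub(crate) fn push(&mut self, error: Error) {
        self.rest.push(error);
    }
}

// At least one error is always present, so converting to a single `syn::Error`
// is infallible: fold the rest into the first with `combine`.
impl From<ManyErrors> for Error {
    fn from(value: ManyErrors) -> Error {
        let mut combined = value.first;
        for error in value.rest {
            combined.combine(error);
        }
        combined
    }
}

// Expose every stored error, mostly for unit tests.
impl IntoIterator for ManyErrors {
    type Item = Error;
    type IntoIter = std::iter::Chain<std::iter::Once<Error>, std::vec::IntoIter<Error>>;

    fn into_iter(self) -> Self::IntoIter {
        std::iter::once(self.first).chain(self.rest)
    }
}
```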
All of this allowed me to write a test that parses a single field with multiple attributes on it. In this case, one of the attributes is invalid, and I want to assert that parsing does not stop after the first problem it sees. The test converts my result into an iterator and then asserts that there are two elements. Great!

While the above approach worked fine, I kept applying this pattern, bubbling up errors, until I hit a failure in my code: the error said that I was returning only two errors instead of four. Which was confusing. I moved the code into an integration test and saw four errors. At this point it dawned on me that somewhere I was storing multiple errors in a single syn::Error and then placing that combined error in my collection. Basically, I had a multi-error nested inside my multi-error type. If that was hard to follow: each entry in my collection could itself secretly hold several combined errors.

Essentially, my type allowed for what I thought was uninspectable state: each entry could hold N errors. As I went through the stages of grief for my beautiful type with a fundamental flaw, I hit on the idea that perhaps I could upstream a change to expose the internal combined errors from syn::Error. I thought IntoIterator was a good candidate to add. But to my shock, when I opened the docs, the IntoIterator implementation for syn::Error was right there this whole time. I just missed it.

When I realized that syn::Error already implemented every trait that I needed, I was able to replace every use of my type with syn::Error and swap my helpers for functions that return it. Then, with zero other logic changes, my code compiled. That confirmed my suspicion that I had written a duck-typed duplicate of a commonly available struct. The only value my type brought was that it hinted that the function was written with error accumulation in mind, but it could not guarantee that the accumulation logic was correct. It didn't seem like this minor social hint was enough to justify the extra code; I could achieve similar goals with a type alias.

If a type doesn't introduce new capabilities or constraints and can be replaced by an existing, stable type, it should probably be deleted in favor of the more common type. Just because a type starts to smell a little foul (or rather "fowl"), does that mean you need to get rid of it? Producing a new type guarantees that there are no mix-ups between your type and the common type. New typing could also allow you to restrict operations to a subset of the common type. Both of these things are about adding constraints. A third reason to keep a duck around would be the stability of the interface. If you're going to expose your type via a library and you're worried it might change, then it could be helpful to wrap the type so your downstream users don't have to change their code even if the underlying logic or implementation changes.

When in doubt, consider documenting your duck and explaining what constraints the new type adds over the original. After writing them down, search for an already existing type that has the same behaviors. Perhaps go so far as to document why those types don't meet your needs. If you cannot enumerate those differences well, then perhaps it's a sign you should ditch your duck. In my case, I had explicitly called out the behaviors I needed and even went as far as implementing IntoIterator. Those are two strong signs that I should have investigated my claims and looked for features provided by existing trait implementations.

One of the reasons I missed that syn::Error already met my needs was that I didn't stop to consider why certain traits were implemented on the struct or think about how they might be used to expose the data that I needed. Over time I've gotten better at internalizing and mentally mapping trait names to the behaviors they provide. Still, I've got some more work to do.
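For reference, here's roughly what syn::Error provides out of the box, which is everything the deleted type was doing. This is a minimal sketch assuming syn and proc-macro2 as dependencies:

```rust
use proc_macro2::Span;
use syn::Error;

fn main() {
    // Accumulate several errors into one value with `syn::Error::combine`.
    let mut error = Error::new(Span::call_site(), "first problem");
    error.combine(Error::new(Span::call_site(), "second problem"));

    // `syn::Error` implements `IntoIterator`, yielding each combined error
    // back out, which is exactly what the custom type duplicated.
    let messages: Vec<String> = error.into_iter().map(|e| e.to_string()).collect();
    assert_eq!(messages, vec!["first problem", "second problem"]);
}
```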
Hopefully, after this experience, when I see strong hints that I'm re-implementing an existing type as a duck, I won't forget to check existing trait implementations for what I need. Beyond "trying harder" and "writing a blog post as penitence so I don't do it again," I thought it would be nice if this behavior was also shown via an example, so I sent a PR to syn to add some examples to syn::Error::combine. I don't think we need to clutter all code with documentation for every possible use case of every possible trait, but this very useful iteration functionality fits in nicely when demonstrating how the combine behavior works. Hopefully, the addition of these docs will be received well and not as an albatross.

I would like to encourage everyone to pay attention to your types and the pain you're feeling around them. If you find you've written a type that you later refactored away, consider pausing and capturing why it was written and why the world is better off without it. What other "bad type" patterns are out there, and how can we make it easier for newcomers to spot and avoid them?

Schneems 6 months ago

A Daft proc-macro trick: How to Emit Partial-Code + Errors

Update (2025/04/02): The change I suggested below was merged in PR #64. It's pretty neat that I went from knowing nothing about this project to contributing to it in the span of a single blog post.

A recent Oxide and Friends podcast episode, "A crate is born," detailed the creation of a proc macro for deriving "diffable" data structures with a trick I want to tell you about. To help rust-analyzer as much as possible, @rain explained that the macro should always emit as much valid source code as possible, even when an error is emitted. They didn't go into detail, so I looked into the internals that made this code + error emitting behavior possible and wanted to share. Podcast link: A Crate is Born.

Who am I? I write Rust code for Heroku, mainly on the Ruby Cloud Native Buildpack (CNB). CNBs are an alternative to Dockerfile for building OCI images. You can learn more by following a language-specific tutorial you can run locally. I also wrote a book on Open Source contribution (paid) and I maintain an Open Source contribution app, CodeTriage.com (free).

Skip this if you already understand the problem statement: The Rust compiler will stop when it hits code that cannot compile. However, rust-analyzer (the Language Server Protocol implementation that powers IDEs like VS Code) tries to recover after an error because it can't just stop rendering type hints. Intuitively, it makes sense that if you have an invalid function in your code, it shouldn't break syntax highlighting (or other features) in your valid code.

Daft (v0.1.2) emits trait implementations and sometimes generates new data structures. From the snapshot tests, an input struct generates a corresponding diff struct and trait implementation. If the macro did not emit this information (possibly due to some hypothetical error not present in the example), then rust-analyzer wouldn't know that the generated struct was expected to exist, what its fields were, or that the derived method returned that struct. In short, the IDE would be generally less helpful.

Now that you understand the goal, how do we emit code when there's an error? The short version is that macros don't output code or errors; they emit tokens. The daft crate collects errors and continues when possible. If it can generate code, it will emit that generated code as tokens before turning the errors into tokens and then emitting both. An earlier version of the code made this explicit: the output produces tokens from both the generated code and the errors. But don't take my word for it, read the source: the entry point for the derive macro returns an output struct holding one field for the valid code that was generated and one field for the errors, and that struct implements ToTokens, emitting the valid code followed by the errors (if they exist). Put it all together, and you have a crate that emits partially generated code, even with errors. Neat.
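Here's a minimal sketch of that core trick. This is not Daft's actual internals, and it assumes the proc-macro2, quote, and syn crates; the point is only that generated code and errors are both just tokens, so a macro can emit them together:

```rust
use proc_macro2::TokenStream;
use quote::quote;

/// Emit whatever code was successfully generated, followed by every error
/// rendered as a `compile_error!` invocation at its original span.
fn emit(code: TokenStream, errors: Vec<syn::Error>) -> TokenStream {
    let errors = errors.iter().map(syn::Error::to_compile_error);
    quote! {
        #code
        #(#errors)*
    }
}
```

The compiler still reports each error at the right location, while rust-analyzer sees the generated items as usual.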
Now that I knew how Daft implemented this feature, I wanted to understand when they chose to apply this pattern. I reviewed the snapshot tests and developed my own classifications. There are three classes of failures in the snapshot tests.

First, proc-macros cannot emit warnings. If the coder entered slightly-off information that wouldn't affect compilation, the macro author must choose between letting it slide or raising an error. There's no in-between. When there's a problem but daft can determine programmer intent, it will emit code and errors. I call this a "warning error." The primary example in the snapshot tests is when an attribute is used on an enum that is already a leaf by default (an internal concept to the crate). Note that the daft crate does not use "warning error" as terminology; I am making that distinction based on my analysis of the snapshot tests. Notes are here.

Second, the program can fail to compile even when code is emitted without error, for example, if a trait bound is not satisfied. The macro author cannot detect the problem because the reflection tools don't expose the necessary information; they must rely on the compiler errors to guide their user.

Finally, there are situations where the author cannot safely emit code because an input is ambiguous or wrong. With these "plain" errors, if the macro author tried to guess and got it incorrect, they'd be feeding rust-analyzer incorrect information, which might confuse the end user more. For example, if an attribute that doesn't exist is found, the macro author has no idea what was intended there and shouldn't guess. While working on this classification exercise, I found two snapshot tests where code isn't emitted but could be.

Should every macro do this? Obligatory: "It depends." If your rust-analyzer experience is horribly broken due to a proc-macro problem, then this is a great trick to suggest. However, it isn't a critical feature that every macro should have. Instead, libraries should focus on improving error accumulation (discussed below). This code + error functionality requires a lot of plumbing, and ultimately, there's only one code path (or two, if they like my suggestion) that generates code + errors. Looking at how this code path came to be, it seems it was added because the plumbing already existed and the opportunity presented itself. From that lens, it's easy to see why Daft goes this extra mile: the cost to implement was (comparatively) low.

I buried the lede: emitting code + errors is likely too much of an ask for most crates, but every proc macro should accumulate errors. Let's look at that now.

What do I mean by accumulated errors? In a Python (or Ruby) program that raises an error, you have to fix that error to find out if there's another one lurking that also needs to be fixed. Rust programmers don't like playing that game; they want as many errors upfront as possible. In one daft example there are two errors on line 5 of the input and one error each on lines 8 and 9. Fields and their attributes are parsed iteratively, so it's common for a macro to stop iterating on the first problem (line 5) before returning. Instead, Daft stores the errors and continues parsing until it no longer can. I don't think it's the end of the world if a proc macro only emits a single error at a time, but it's a requirement if you're aiming for a "Michelin star proc-macro" experience.

Instead of using a return value, daft passes an accumulator that holds the errors to every fallible function. If there's an error, it's added to the accumulator. This pattern also means that instead of having to choose between emitting code or an error (via a Result), the programmer can do both by returning a value while mutating the accumulator. That would indicate the problem is more of a "warning error," since the data structure can still be safely returned. Functions that return an optional value are likely holding one or more plain errors that prevent code generation when nothing is returned (see the sketch below).
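A rough sketch of that accumulator pattern follows. The names and the plain Vec accumulator are illustrative, not Daft's actual API; fallible helpers receive the error collection and keep going instead of bailing at the first problem:

```rust
use proc_macro2::Span;
use syn::Error;

/// Parse one field name, pushing an error (and returning `None`) when it is invalid.
fn parse_field(name: &str, errors: &mut Vec<Error>) -> Option<String> {
    if let Some(rest) = name.strip_prefix("daft_") {
        Some(rest.to_string())
    } else {
        errors.push(Error::new(
            Span::call_site(),
            format!("unknown field `{name}`"),
        ));
        None
    }
}

fn main() {
    let mut errors = Vec::new();
    let fields: Vec<String> = ["daft_one", "bogus", "daft_two", "nope"]
        .iter()
        .copied()
        .filter_map(|name| parse_field(name, &mut errors))
        .collect();

    // Two fields parsed successfully and two errors accumulated; none were lost.
    assert_eq!(fields.len(), 2);
    assert_eq!(errors.len(), 2);
}
```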
Beyond affording the ability to return code + errors, not using a Result means that the try operator (?) cannot be used accidentally for an early/eager return. This property encourages the macro author to capture as many errors as possible and emit them all instead of only emitting the first error.

It's a neat pattern, but it's not the only way to accumulate errors. The syn::Error struct can combine multiple errors without an accumulator by using syn::Error::combine. This pattern is useful when the function signature isn't changeable. For example, the Parse trait is commonly used by proc macros as a building block, and it has a fixed signature. With this pattern, the error behavior of the function is encoded in its return type; the most explicit variant is verbose, but it prevents representing an invalid state when code and errors are returned simultaneously. The downside of this technique is that nothing prevents an early return on error with try (?). I was curious how this pattern would look implemented in place of the daft one, so I experimented with a draft (not daft) PR. Note: The PR is to my own branch, not theirs. I don't think any maintainer loves waking up to a giant PR with the word "refactoring" in it.

We learned why rust-analyzer is sensitive to macro output. We explored the mechanics that Daft uses to emit code + errors and to accumulate errors. I introduced an alternative error accumulation method, and I made some strong statements: namely, that emitting code + errors is a nice-to-have, while accumulating and emitting all errors is an achievable best practice.

Coming from Ruby, proc macros are wonderful things that allow Rust developers to write powerful and expressive DSLs, and I love them. With the power to meta-program, there's also the possibility to meta-confuse your end user or toolchain (like rust-analyzer). I love that the Daft maintainers put as much work and care into the failure modes as into the rest of their logic, in addition to accumulating and presenting as many errors as possible. I hope you enjoyed learning about these patterns as much as I did. FYI, I'm working on a proc-macro tutorial and would love to hear from readers on Mastodon or Reddit about what real-world patterns you've seen for improving the end-user experience, especially around errors.

Schneems 7 months ago

Installing the sassc Ruby gem on a Mac. A debugging story

I'm not exactly sure about the timeline, but at some point, installing the sassc gem stopped working for me on my Mac (ARM). Initially, I thought this was because the gem was no longer maintained and its last release was in 2020, but I was wrong. It's 100% installable today. Read the rest to find out the real culprit and how to fix it. FWIW, some folks on lobste.rs suggested switching to sass-embedded for sass needs. This post still works, but in the future it might not.

In this post I'll explain some things about native extension libraries in Ruby and, in the process, tell you how to fix the error below if you're getting it on your Mac. The short version: you can get the gem installing again by updating your Xcode command line tools and re-installing Ruby. If you want to know more about native compilation or my debugging process, read on!

There might be a simpler way to solve the problem (such as directly editing the rbconfig file), but I'm comfortable sharing the above steps because that's what I've done. If you fixed this differently, post the solution on your own site or in the comments somewhere.

When I get an error, it makes sense to search for it and ask an LLM (if that's your thing). I did both. GitHub Copilot suggested that I make sure command-line tools are installed and that the library is installed via Homebrew. This was unhelpful, but it's worth double-checking. Searching brought me to https://github.com/sass/sassc-ruby/issues/248, which led to https://github.com/sass/sassc-ruby/issues/225#issuecomment-2391129846, suggesting that the problem is related to RbConfig and native extensions. Those comments have the fix in them but don't go into detail on why it works. This post attempts to dig deeper using a debugging mindset.

Meta narrative: This article skips between explaining things I know to be true and debugging via doing. Skip any explanations you feel are tedious.

Skip this if you know what a native extension is. Most Ruby libraries are plain Ruby code. For an example, look at https://github.com/zombocom/mini_histogram. When you install such a gem, it downloads the source code, and that's all that's needed to run it (well, that and a Ruby version installed on the machine). The term "native extensions" refers to libraries that use Ruby's C API or FFI in some way. There are a few reasons why someone would want to do this.

For developers who haven't used much C or C++, it's useful to know that system-installed packages are how those languages (mostly) share code. There's no rubygems.org for C packages. Things like apt for Ubuntu might be conflated as a "C package manager," but it's really like Homebrew (for Mac), where it installs things globally. Then, when you compile a program in C, it can dynamically or statically link to other libraries to use them.

Back to native extensions: When a gem with a native extension is installed, the source code is downloaded, but then a secondary compilation process is invoked. Here's a tutorial on creating a native extension; it utilizes a tool called rake-compiler. But under the hood it effectively boils down to this: when you install the gem, it runs compilation commands such as make on the system. This process generates compiled binaries, and those binaries are compiled against a specific CPU architecture that is native to the machine you're on, hence why they're called native extensions. You're using native (binary) code to extend Ruby's capabilities.

Skip this if you understand why CPP files would be found in the gem. Compiling code is hard. Or rather, dependency management is hard, and compiling code requires that the platform have certain dependencies installed; therefore, compiling code is hard.
To make life easier, one common pattern Ruby developers use is to vendor dependencies into their native extension gem. Rather than assuming a library is installed on the system in a location that is easy to find, the gem can instead bring that code along with it. Here you can see that sassc brings C++ source code from libsass. In this case, sassc may have dependencies that it hasn't vendored and expects to find on the system, but the key is that when you install it, it needs to compile not just its own bridge code (using Ruby's C API) but the vendored libsass as well. That is where the errors are coming from; it's not able to compile these C++ files.

For completeness: There's another type of vendoring that native-extension gems can do. They can statically compile and vendor in a binary. This bypasses the need to compile at install time and is much faster, but it moves the burden to the gem maintainer. Here's an example where Nokogiri 1.18.4 is precompiled to run on my ARM Mac. You don't need to know this for debugging the install problem, since that process isn't being used here.

When debugging, I like to remove layers of abstraction when possible to boil the problem down to its core essence. You might think "I cannot install the gem" is the problem, but really that's the context; the real problem is that within that process, the make step fails. The output of the command isn't terribly well structured, but there are hints that this is the core problem: it is saying, "When I am in this directory" and "I run this command," then I get this output.

When someone is experiencing an exception in their Rails app, I encourage them to try copying that code into a console session to reproduce the problem without the overhead of the request/response cycle. This helps reduce the scope and removes a layer of abstraction. Here, removing abstraction means manually going into that directory and running the same command. Doing this gave me the same error.

I was curious about how to get more information out of make and found a Stack Overflow post suggesting a flag that will list out the commands make runs. Running that gave me some output. If you're familiar with that kind of output, you probably spotted the problem. If not, let's detour and explain what this make tool even is.

Skip this if you know what make is and how to write a Makefile. GNU make describes itself as: "GNU Make is a tool which controls the generation of executables and other non-source files of a program from the program's source files." The Ruby library Rake is a similar concept implemented in Ruby; the name "Rake" is short for "Ruby (M)ake." In Rake, you can define a task and its prerequisites, and the Rake tool will resolve those to ensure they're run in the correct order without being run multiple times. This is commonly used for database migrations and generating assets for a web app, such as CSS and JS.

Technically, that's all Make does as well: it allows you to define tasks in a reusable way, and it handles some of the logic of execution. In practice, make has become the go-to composition tool for compiling C programs. In that world, there are projects that don't even tell you how to build the binaries, because they expect you to know to run make, in the same way some Ruby developers might forget to include instructions for adding a gem to the Gemfile in the README of their rubygem. You can see a makefile in action by following Ruby's instructions on compiling Ruby itself. At the end of the day, make does very little. It's almost more like its own language that happens to be useful for compiling code rather than a "compiling code" tool.
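If you've never written one, here's a tiny hypothetical Makefile showing the two pieces that matter for this story: a variable and a rule whose recipe uses it. (Recipe lines must be indented with a real tab; this blog may have converted it to spaces.)

```make
# A variable holding the C++ compiler command
CXX = g++

# A rule: running `make hello` executes each recipe line below it
hello:
	$(CXX) --version
	echo "done building hello"
```

Running `make hello` prints each command before running it, and `$(CXX)` is replaced with whatever the variable holds, which is exactly the substitution that goes wrong later in this post.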
The result is that the bulk of the logic comes from the contents of the Makefile and what the developer put in there, rather than from the Make tool itself. The output ends up being indistinguishable from a bunch of shell scripts in a trenchcoat.

Now we know that make does very little, and from its output we see two lines (I added a space for clarity). We could remove make from the equation by running them directly. The first worked as expected; what about the next line? It starts with false, a command that, per its manual page, does nothing except exit with a failing status. So no matter what comes after this command, it will simply exit non-zero. This command can never work. This seems odd, and definitely not what the author of this Makefile intended.

If this is the bug, and I think it is, where is it coming from? Is it dynamic: something in the environment (environment variables), the result of shelling out to some other utility on disk, or some config file? Or is it static, baked in already? Re-running make with the env vars mentioned in the GitHub comments has no effect; it's the same output. This leads me to believe it's something static. Looking at the contents of the Makefile: huh, that's weird. Where is that value used? The failing line comes from a variable reference in the Makefile.

Skip this if you know make syntax. To understand what this is doing, we can write a tiny make program (like the one sketched above). The indentation under the target should be a tab, but your editor or my blogging process might have converted it into a space. When we run it, make prints the command and then the output of that command. We're not limited to static commands, though: modify the file to extract the command into a variable, and the variable reference produces the same effective command. What that means is that the Makefile tells make to replace the variable reference with that broken false command, which is not what we want. But where did that value come from? I'm glad you asked.

If you search the gem's source code for that line, you won't find it. That's because this Makefile is generated. When we looked at native extensions before, notice that I talked about a generated build process and not about hand-rolling a Makefile. Even when we looked at sassc's Makefile, it wasn't hardcoded; it came to be when the gem's extconf.rb ran. This Makefile is generated at install time.

When you compile Ruby, it needs to gather information about the system in order to know how to compile itself: things like "what compiler are you using" (it could be gcc or clang, for example). Ruby isn't the only program that needs to know this stuff; native extension code that compiles needs to know it, too. When you compile Ruby, it generates a file that contains this information, which Ruby users can access via RbConfig. From the docs: "The module storing Ruby interpreter configurations on building. This file was created by mkconfig.rb when ruby was built. It contains build information for ruby which is used e.g. by mkmf to build compatible native extensions. Any changes made to this file will be lost the next time ruby is built."

So that info is what Ruby recorded at compile time. Where is it? When I looked at that file, I saw something alarming. When Ruby was compiled, it concluded that it should use clang to compile C code, but it mistakenly concluded that it should use the false command to compile C++ code (that is what these configuration entries mean). It SHOULD be clang++ or something like g++, but it's not. When the Makefile for the gem is generated, false gets hardcoded into it by mistake, because the generator pulls that information from the RbConfig module generated by Ruby at compile time. Why did it record false? Well, I don't know.
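If you want to check what your own Ruby recorded, a quick way is to ask RbConfig directly. This is a hypothetical session; your values will differ:

```
$ ruby -e 'puts RbConfig::CONFIG["CC"]'
clang

$ ruby -e 'puts RbConfig::CONFIG["CXX"]'
false
```

A healthy install reports a real C++ compiler (something like clang++) for the second command; false is the broken value this post is about.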
I assume it has something to do with the interplay between Ruby's configuration script and the Xcode developer tools. I didn't debug down that pathway. Since we can fix the problem by re-installing the same version of Ruby with a newer version of the Xcode developer tools, it seems that the problem is in Xcode, but there might be a more complicated interaction involved (perhaps Ruby is doing something Xcode didn't expect, for example). Thankfully, others came before me and figured out where the problem was coming from and how to fix it. They suggested what I did above: update the Xcode command line tools and re-install Ruby. After doing this, you can inspect the rbconfig file again: looking good. It no longer reports false.

I mentioned above that it might be possible to manually edit these files to fix the problem. That would save the time and energy of re-compiling your Rubies. But you definitely want to upgrade your Xcode developer tools and ensure that future Ruby installs have the right information. Going through the motions of this full process for at least one Ruby version (assuming you're using a version switcher like chruby or asdf) is recommended. Personally, I uninstalled everything to decrease the chances that I'll have to re-learn about this problem and find this blog post X months/years in the future because I missed something in my process.

For those of you without this problem: hopefully, this was educational. You might be wondering why I decided to blog about this specific topic (of all things). Well, I've got to do something while I'm recompiling all those Rubies, and learning-via-teaching is a core pedagogy of mine. If you enjoyed this post, you might also enjoy my free service CodeTriage or my book How to Open Source.

Schneems 10 months ago

My Red Hot ADHD Programming 'Affliction'

Sorry, Dave, ADHD is real, and (not acknowledging it) can hurt you. Hi. I'm Richard. I'm a Ruby Core Contributor. I also code in Rust and enjoy giving talks and writing books about How to (Contribute to) Open Source. I was diagnosed with ADHD in my late 30s. What does it mean that I was "diagnosed" with ADHD? Am I simply a speed junkie? What even is ADHD, and why is there so much misinformation and misunderstanding about it? Keep reading to find out.

For context, this is inspired by DHH, the creator of Ruby on Rails, musing out loud that maybe ADHD isn't really a thing and that people who claim to have it are really just speed junkies in his post 'Cold reading an ADHD affliction'. Though it's turned into a bit of a personal essay on my ADHD journey and thoughts on what it means to be a leader in the age of social media.

First off, what was my diagnosis story? As a child, I was told I had a "great memory" and was a solid A- to B+ student. School wasn't interesting, so I didn't really try. My biggest behavioral issue was that I snuck books into class and read them under my desk. If your mental picture of someone with ADHD is an out-of-control "little monster," then this seems like the opposite. I got through college with a degree, taught myself Ruby on Rails (thanks for the framework, Dave!), got a job, and, at some point, started giving conference talks around the world. So far, so good. No ADHD on the horizon for me!

I started seeing a therapist because I'm a knowledge worker, and I wanted to make sure my brain was in top shape. A high-performing athlete sees physical therapists and trainers even when performing well. I heard others talk about their ADHD on social media. I was surprised at some symptoms like "hyperfocus," the blessing/curse that is an inability to stop a task you're engaged with. I heard about time blindness and other traits that resonated with me. As Jessica McCabe from "How to ADHD" (YouTube channel and book) would say, "But every [programmer] has those problems, right?"

You might think this is where my therapist stood up and loudly declared, "NO, YOU HAVE ADHD." But they didn't; I've found there's a bit of a stigma about ADHD, even within the mental health community. My therapist's approach was to not validate my "hey, this is resonating with me" but instead to ask, "What are the impacts of that being true? How can that information be helpful?" For lack of a better term, they "leaned out" of the self-diagnosis but didn't dismiss it (thankfully). They encouraged me to seek more materials and bring them back to discuss.

Oof, I skipped a part. Why was I bringing that self-diagnosis up to my therapist? ADHD doesn't just affect your professional life. In fact, my ADHD is amazing for my professional life (usually), as it helps me debug a problem for hours as the day slips away. It also affects relationships. Life for me growing up was tough. I always felt like something was wrong. I felt like life was harder for me than everyone else, but I did well enough, so it mostly sounded like I was a big complainer to adults. I was able to develop coping mechanisms that worked, such as bringing books to school when I was young. I had more agency to adapt to the world when I got older. This is one reason I'm so drawn to open source: it lets me fix problems that are trivial to others but deeply painful and personal.
While I could terraform my professional and adult life to work with my brain quirks and preferences, terraforming my wife and children wasn't an option. Children are quite inflexible in their needs, and it was difficult for me to adapt. I'm generally meh at doing mundane and basic things but then will over-compensate by knocking a handful of really difficult (and interesting) things out of the park. You can't do that with kids. All their needs are primal yet basic, and it doesn't matter how well you can change a diaper - there's no way to do it so well that your infant says, "Wow, that was great."

One day, I was having an argument with my wife. It was about how I never planned dates. It got heated. She had to go to a meeting, and I proposed returning to the conversation: "Hey, how about we grab lunch after your meeting to talk it over." I realized that I needed to meet the basic expectation of planning events, and I tried to compensate by taking action then and there. The suggestion might seem trite or pandering, but she thought it was good and agreed.

An hour later, she walked into my office (garage), and I literally couldn't remember why she was there. Or rather, I was in the middle of another task, and when she walked in, I had to rack my brain to quickly retrieve the info that would provide context to her "I'm here." It took a split second too long, and she noticed.

Turns out that I don't "have a good memory." What I have is the ability to recall extremely detailed information about a situation, given that I'm primed with the context. This gives the false impression of a good memory, but it makes things like "forgetting" you were fighting with your wife and had agreed to go to lunch with her really confusing to other people. It makes it seem like you care about other things but not them. In fact, I deeply care about the people around me, especially my wife. She got pissed; I had a bit of a breakdown. We did go to lunch, and while there, I realized that I had lost track of time and missed my remote call with my therapist. Oops.

That was a bit of a boiling-over moment, and I was convinced something was wrong in a way that "try harder (at relationships)" was not going to fix. I pushed my therapist harder. They suggested an awful info-session class on relationships for couples with ADHD that left my wife in tears. Eventually, we found the book "ADHD and Us" by Anita Robertson, and I could really see myself and my relationship in the words of that book. Even with the book in hand and anecdotes resonating with both my wife and me, I still deeply questioned my self-diagnosis. My therapist at the time still didn't want to "label" me. I oscillated between "everyone has these problems" and "this matches my lived experience too well to be a coincidence." It turns out that self-doubt and doubting whether or not you've got ADHD is really common.

So when I see posts by people who want to be leaders in the community musing out loud that maybe ADHD isn't really a thing and that it's all in our heads, it's pretty hurtful. Not because he's wrong to have those thoughts or question norms, but because everyone who has ADHD already HAS THOSE THOUGHTS. It's a deeply unhelpful take in an already misunderstood world of ADHD, misunderstood even within the medical community. At this point, I was not on meds, but I started bringing up the things from the book more and more. I didn't actually know how to get meds to try them, but I was medication-curious.
I found an online ADHD provider, and I felt a mixture of relief and doubt when they said, "Yes, it sounds like you have ADHD," and gave me a prescription. Was I telling them what they wanted to hear because I had hyperfocused on the topic and knew all the symptoms inside and out? Or did I actually have ADHD? For the longest time, that was my biggest battle: answering the question, "Do I actually have ADHD? Or is this all in my head?"

I eventually found an available in-person psychiatrist. They are REALLY hard to find. I blankly asked them, "Is this real, or did I make it up?" and through many sessions, they affirmed that I had ADHD. But I still didn't believe it. I sought out a more empirical approach. I eventually found some quite expensive but thorough testing. I signed up for hours of testing at a specialty clinic where they administered mind games like "say the color of a word when the word is the spelling of a different color." For one test, I sat for 15 extremely boring minutes in front of a computer, and every time I heard a beep or saw a dot on the screen, I clicked a mouse button. I then took my meds and, about an hour later, took the exact same test.

At the end of the day, I was convinced I was making everything up; the tests seemed easy. I could see how someone could game the attention one, but I was doing this for me; I wanted to know. I focused and tried as hard as possible. I was convinced the before and after scores would be the same. Spoiler alert: they were not. From the report:

"Integrated Visual and Auditory Continuous Performance Test - 2nd Edition (IVA-CPT-2): This is a computer-based test specifically designed to assess for symptoms of ADHD. Richard's performance is considered a valid estimate of his functioning. Overall, he earned Impaired scores when he was tested without stimulant medication, whereas his performance was in the Normal range when he was tested with stimulant medication."

Apparently, I was "impaired" on the first test, without my medication. On the second test, with my medication, I'm "normal." So maybe I didn't do so great, and the medication does have a measurable positive impact, even if I can't always tell a difference when I take it.

Later, I would move my prescription to my general practitioner (which I also found out is probably the normal way that most people are prescribed ADHD medication). That doctor also asserted that I had ADHD. And finally, I moved to see Anita Robertson as a therapist. You might remember that name as the author of the book "ADHD and Us," which I loved so much. It turns out she lives in Austin, Texas. She also affirmed my suspicions.

So, do I have ADHD, Dave? I don't know. I still question everything about my brain regularly. Your post didn't help that, but it didn't cause it either, so I don't blame you for that question. Yes, I take "speed" regularly. Still, it's not like people with ADHD are the only people in the world stimming themselves to keep up with society. It's also a weird drug. I can tell a difference if I haven't taken it, but it's very subtle to me. I have my medication in a timer cap because I genuinely cannot tell whether I've taken it or not. Sometimes I forget. Before meds, I was self-medicating with caffeine, a lot of it. I started drinking coffee when I was a child, with lots of milk and sugar, but out of the same pot as my parents. I loved how it felt, and the world seemed less bad, if just a bit. I still enjoy coffee, but more for the taste, and I have far less of it than I did before getting on medication.
Medication helps quite a bit in many areas of my life, personally and professionally. But it's not a magic fix. One common community line is "Therapy and meds, or therapy. But not just meds." To that extent, I don't think anyone can or should decide what's right for me and my body.

One thing I HATE about ADHD is that it's basically a meaningless term. Telling someone "I have ADHD" is usually as helpful as telling someone "I like ants." It might be true, but most people need help figuring out what to do with the information. For the record, I hate ants; I'm mildly allergic to them. Why say that I have ADHD if it's not helpful? It's helpful to me to find others who have similar struggles so we can learn from one another and see one another. I'm a member of the ADHD programmers subreddit, where I try to help fight the tide of negativity and self-doubt from time to time. People regularly ask there if they should tell their managers, co-workers, or even friends that they have ADHD. Usually, the answer is "no" or "it depends." Again, only some people react to that information positively or helpfully.

So what is positive or helpful? I told my last manager I had ADHD, and while I didn't quite regret it, I got mentally lumped in with another co-worker who also has ADHD. I didn't like that, as I felt negatively judged for someone else's behavior. I sought to work with them to understand what accommodations could look like. I also did this with the help of my therapist. Some accommodations were helpful, some were not. If you're looking for ways to help someone with ADHD, this video talking about things that help and things that don't was useful.

Plenty of resources are also out there, but it's a bit of a minefield. Some clinicians and personalities come in with doom and gloom: "ADHD is a chronic and debilitating…yada-yada," and some come in with genuinely helpful and useful advice. One challenge here is that not all ADHD is alike. It's like an RPG where you can assign certain character attributes at the beginning of the game. Everyone in the world has all of the attributes. An "average" or "median" player might assign all skills equally or go for a common min/max build, like a brawler that maxes strength or a rogue that maxes stealth. Someone with ADHD has just as many points to apply, but they end up in uncommon areas.

One thing I'm quite good at is metacognition, i.e., thinking about thinking. Basically, I've never had a quiet moment in my brain while awake. Nearly every action I take, I think about it beforehand, and sometimes I think about why I'm thinking about it while doing it. This translates to me being very good at writing tutorials, as I'm composing a mini-tutorial in my brain every time I do any task. It also translates to usability work, because not only do I mess up a lot of processes accidentally, but I can remember exactly WHY I made the assumptions I made. I know what to change about the process or environment to steer me down a better path. I make it sound like a superpower, but it's got a downside as well: I overthink some of the simplest tasks, and standardized test-taking is super hard for me. I'm constantly questioning whether the problem is really "that easy" or if the author hid some tricky language I'm not catching.

Not all ADHD folks are good at metacognition. Going back to the RPG analogy: all wizards might have more intellect than strength; that's one attribute that helps classify them as wizards, but some might have higher charisma than others.
ADHD is kind of like that, but it's harder to pin down. One common category is self-agency: the ability to say to yourself, "I want to do X," and then actually be able to follow through with it. Like "I want to do the dishes" or "I want to send a patch to the Puma webserver." Kids have a still-developing pre-frontal cortex and, therefore, have problems with agency. Last night, my 6-year-old kept telling me he wanted to apologize for his mistake, but he just couldn't. He had the right idea; his car was pointed in the right direction, but there wasn't any charge in the battery to get anywhere. That's a common negative.

One frequently cited positive is "good in an emergency." One example that comes up at home is when my wife took me to a wedding with a bonfire. I didn't know anyone, but I was generally social, when another guest walked by and fell INTO the bonfire. Everyone else froze. I eyed a piece of wood that hadn't caught fire yet, braced there with one hand, and pulled him out with my other hand. He was dusting himself off, and no one else had moved. Everyone said things like "Wow" and "That was amazing, what you just did," but I literally didn't even think about it. I don't know why everyone didn't jump up and do the same thing. On the flip side, I did not notice that I opened 10 drawers to find a thing and didn't close any of them.

In an RPG context, what distinguishes us is that our skill points are distributed in a not-normal way. You may have heard of neuro-divergent, neuro-diverse, or neuro-spicy folks. That's what I think of when I hear those terms. I imagine a bell curve: in the middle are most people; at either end, both high and low, is a much smaller group, and that's where you'll find us. We generally have challenges that are hard to comprehend or relate to, but we excel in equally surprising ways. We're not better or worse; we just are.

One scene that comes to mind is from the classic "Butch Cassidy and the Sundance Kid." Butch and Sundance are on the run from the law and need a job to feed themselves. They know how to rob, so they figure they're qualified for a job protecting goods from robbers (more or less). In the job interview, the hiring manager hands Sundance a firearm and asks him to shoot a brick. It's the gunslinger equivalent of a whiteboarding interview. Sundance moves to holster his weapon and then draw it, when the guy stops him. He tells him, "I just want to know if you can shoot," meaning drawing is above and beyond; he's looking for basic proficiency. Sundance misses the easier shot and says, "I'm better when I move." He couldn't even do the minimum. The guy is confused, so Sundance holsters the weapon, then draws and fires three times in rapid succession, hitting the brick in an impressive display. On the surface, he can't do the basics, but when tasked to go above and beyond in a field he cares about… he can deliver.

That's a lot of rambling, so let's wrap this up. The same cognition testing shows I have a higher-than-average IQ. On several tests for things like visual perception, I rank as "superior." In some ways, this discrepancy helped me hide my symptoms and is what led me to not get a diagnosis sooner. Now that I've got my diagnosis, am I happier? Am I more productive? Is my family life better? In general, I'm more attentive to mundane things than before. I might not notice much difference, but my wife reports a noticeable change for the better. I can do boring things like standardized compliance training more easily at work.
But meds and therapy aren't "fixes," as there's no problem to be fixed. They're about fitting my square brain and needs into the round holes of society. This isn't a "society is bad and should accommodate me" conclusion. Society did not wrong me. Society didn't make me. I'm not the Joker. If I could change one thing, it would be that we all try to have more empathy for one another. That doesn't mean you must agree with everyone's actions or perceptions. It means that when someone tells you about a lived experience you don't understand, come to it with curiosity before condemnation.

I wrote this in response to a post by a "leader" in the community who idly wondered some thoughts aloud. If I ask for empathy, he deserves some as a person, the same as anyone else. If we want more empathy from one another, we need it from our leaders. We need their curiosity. Dave has actively fought to be a leader in the Ruby community. The post right before this ADHD screed is about him joining the board of directors of a billion-dollar corporation. He is definitively a person of power. The titles before that are:

I appreciate his openness. I don't feel he's "hiding" anything from you or me. But there was a time when "Daring Greatly" by Brené Brown was on everyone's bookshelf, daring people to be vulnerable. Many didn't read it and missed that oversharing is its own form of deflection. Dave's rambling stream-of-consciousness makes for interesting controversy, but what does it achieve? As a leader, what is Dave leading us towards? I don't think he knows, at least not here. I wish he would distinguish his musings to differentiate between "I understand the consequences of these words and the outsized impact they may have and endorse them as a leader of a company, a board, and a community" versus "I woke up and had a thought and didn't have anyone else to say it to, so I wrote it here."

I also wish for more conversations with him, personally and within the community. I wish he were more curious about some of these statements. They're certainly strong opinions, but are they weakly held? It's unknowable. What exactly does he want out of that ADHD post? He claims it's for me to say, "I do speed." Weird flex, but okay. I take Lisdexamfetamine. I've not hidden that from anyone; I bring it up in casual conference conversations. He would know, but I don't believe I've ever seen him at a RubyConf, and he usually leaves the day after his talk at RailsConf, at least for as long as I've been going to them (since ~2012). I've only chatted with Dave when I organized a fancy dinner. I'm guessing he'll be around for Rails World, so maybe I can catch him there next year. Maybe I can ask him, "What led to that ADHD observation and thought that produced the article? How does that make you feel? What do you need that leads you to want to write about this? What specific request might you have for your reader?" Maybe if you see him, you can ask.

Schneems 11 months ago

RubyConf 2024: Cloud Native Buildpack Hackday (and other Ruby deploy tools, too!)

I've spent the last decade+ working on Ruby deploy tooling, including (but not limited to) the Heroku classic buildpack and the upcoming Cloud Native Buildpack. If you want to contribute to a Ruby deployment or packaging tool (even if it's not one I maintain), I can help. If you want to learn more about Cloud Native Buildpacks (CNBs) and maybe get a green square on GitHub (or TWO!), keep reading for more resources.

Note: This post is for an in-person hackday event at RubyConf 2024 happening on Thursday, November 14th. If you found this but are away from the event, you can still follow along, but I won't be available for in-person collaboration.

If you're new to Cloud Native Buildpacks, they're a way to generate OCI images (like docker) without a Dockerfile. Buildpacks take your application code on disk as input and inspect it to determine, for example, that it's a Ruby app and needs to install gems with bundler.

Know before you go! Not strictly required, but it will make your life better with iffy wifi: clone the repo and install dependencies ahead of time (a sketch of those steps is at the end of this post).

If you've never heard of a buildpack, there are getting-started guides you can try. If you find a bug or run into questions, I can help. Once you've played with a buildpack, you're ready for prime time. Below, you'll find some sample things to hack on. You can tackle one by yourself if you're ready, or work on one with me. A well-scoped-out task with an example change involves modifying code but requires minimal Rust knowledge:

- Test drive Hanami with a Ruby CNB, document the experience, and suggest changes or fixes.
- https://github.com/heroku/buildpacks-ruby/issues/333
- https://github.com/heroku/buildpacks-ruby/issues/298
- Write a Cloud Native Buildpack (no link). There's a bash tutorial at https://buildpacks.io/docs/. For possible buildpack ideas, you can look at the existing Heroku "classic" buildpacks.
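For reference, the "clone the repo and install dependencies" prep mentioned above probably looks something like this. The repository name comes from the issue links above; your exact steps may differ:

```
# Clone the Ruby Cloud Native Buildpack (written in Rust) and build it once
# while you still have good wifi.
$ git clone https://github.com/heroku/buildpacks-ruby
$ cd buildpacks-ruby
$ cargo build
```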

Schneems 1 year ago

Docker without Dockerfile: Build a Ruby on Rails application image in 5 minutes with Cloud Native Buildpacks (CNB)

I love the power of containers, but I've never loved Dockerfiles. In this post we'll build a working OCI image of a Ruby on Rails application that can run locally without the need to write or maintain a Dockerfile. You will learn about the Cloud Native Buildpack (CNB) ecosystem and how to utilize the pack CLI to build images. Let's get to it!

This post is extracted from a tutorial I wrote for Heroku Cloud Native Buildpacks. Future revisions will be updated on the GitHub repo.

We assume you have docker installed and a working copy of git. Next, you will need to install the CLI tool for building CNBs, pack CLI. If you're on a Mac, you can install it via Homebrew. Ensure that pack is installed correctly. Once pack is installed, the only configuration you'll need for this tutorial is to set a default builder. You can view your default builder at any time.

The following tutorial is built on amd64 architecture (also known as x86). If you are building on a machine with a different architecture (such as arm64/aarch64 for a Mac), you will need to tell Docker to use amd64. You can do this via a flag or by exporting an environment variable.

Skip ahead if you want to build the application first and get into the details later; you won't need to know about builders for the rest of this tutorial. In short, a builder is a delivery mechanism for buildpacks. A builder contains references to base images and individual buildpacks. A base image contains the operating system and system dependencies. Buildpacks are the components that will configure an image to run your application; that's where the bulk of the logic lives and why the project is called "Cloud Native Buildpacks" and not "Cloud Native Builders." You can view the contents of a builder via the pack CLI. The output shows the various buildpacks representing the different languages supported by this builder, such as Ruby and Node.js.

How do you configure a CNB? Give it an application. While a Dockerfile is procedural, buildpacks are declarative. A buildpack will determine what your application needs to function by inspecting the code on disk. For this example, we're using a pre-built Ruby on Rails application. Download it now and verify you're in the correct directory; this tutorial was built against a specific commit SHA.

Now build an image named my-image-name by executing the heroku builder against the application with the pack build command. Verify that you see "Successfully built image my-image-name" at the end of the output, and verify that the image is present locally.
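As a rough sketch of the setup and build commands this section describes (the builder tag here is a guess, so check the Heroku CNB docs for the current name; the image name my-image-name comes from the text above):

```
# One-time setup: point pack at a default builder
$ pack config default-builder heroku/builder:24

# Build the application in the current directory into an OCI image
$ pack build my-image-name --path .
```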
Unlike Dockerfile, all images produced by CNBs can be rebased. The CNB API also improves on many of the pitfalls outlined in the satirical article Write a Good Dockerfile in 19 'Easy' Steps. Even though we used pack and CNBs to build our image, it can be run with your favorite tools like any other OCI image. We will be using the docker command line to run our image. By default, images boot into a web server configuration. Launch the app we just built, and when you visit http://localhost:9292 you should see a working web application. Don't forget to stop the docker container when you're done.

So far, we've downloaded an application via git, run a single command to generate an image, and then used that image as if it had been generated via a Dockerfile, via the docker run command. In addition to running the image as a web server, you can access the container's terminal interactively: in a new terminal window, open a shell in the running container. Now you can inspect the container interactively, for example listing the files on disk with ls, and do anything else you would typically do via an interactive container session.

Skip this section if you want to try building your application with CNBs and learn about container structure later. If you're an advanced user, you might be interested in learning more about the internal structure of the image on disk. You can access the image disk interactively by using the docker command above. If you view the root directory, you'll see there is a /layers folder. Every buildpack that executes gets a unique folder inside it. Individual buildpacks can compose multiple layers from their buildpack directory; for example, the ruby binary is present within the Ruby buildpack's directory. OCI images are represented as sequential modifications to disk. By scoping buildpack disk modifications to their own directory, the CNB API guarantees that changes to a layer in one buildpack will not affect the contents of disk of another layer. This means that OCI images produced by CNBs are rebaseable by default, while those produced by a Dockerfile are not.

We saw before how the image booted a web server by default. This is accomplished using an entrypoint. In another terminal, outside of the running container, you can view that entrypoint, and from within the image you can see that file on disk. While you might not need this level of detail to build and run an application with Cloud Native Buildpacks, it is useful to understand how they're structured if you ever want to write your own buildpack.

So far we've learned that CNBs are a declarative interface for producing OCI images (like docker). They aim to be no to low configuration, and once built, you can interact with them like any other image. For the next step, we encourage you to try running pack build with the Heroku builder against your own application and let us know how it went. We encourage you to share your experience by opening a discussion and walking us through what happened. We are actively working on our Cloud Native Buildpacks and want to hear about your experience. The documentation below covers some intermediate-level topics that you might find helpful.

Language support is provided by individual buildpacks that are shipped with the builder. The example above uses the Heroku Ruby and Node.js buildpacks, which are visible on GitHub. When you execute pack build with a builder, every buildpack has the opportunity to "detect" whether it should execute against that project.
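The run-and-inspect commands were also lost in extraction. Below is a sketch using plain docker; the image name comes from the build step above, port 9292 matches the tutorial's URL, and the /layers and /cnb paths follow CNB conventions, but verify everything against the real output.

```sh
# Boot the default web process and map the app's port
docker run -it --rm --env PORT=9292 -p 9292:9292 my-image-name

# In another terminal: get an interactive shell inside the running container
docker exec -it $(docker ps -q --filter ancestor=my-image-name) bash
ls /            # note the CNB directories such as /layers and /cnb
ls /layers      # one directory per buildpack that executed

# Outside the container: view the image's entrypoint (the default web process)
docker inspect my-image-name --format '{{.Config.Entrypoint}}'
```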
The Node.js buildpack looks for a package.json in the root of the project and, if found, knows how to detect a Node version and install dependencies. In addition to this auto-detection behavior, you can specify buildpacks through the --buildpack flag with the pack CLI or through a project.toml file at the root of your application. For example, if you wanted to install Ruby, Node.js, and Python you could create a project.toml file in the root of your application and specify those buildpacks. Ensure that the dependency files each buildpack detects (for example a Gemfile.lock, a package.json, and a requirements.txt) all exist, then build your application. You can run the image and inspect that everything is installed as expected.

Most buildpacks rely on existing community standards to let you configure your application declaratively. They can also implement custom logic based on file contents on disk or environment variables present at build time. The Procfile is a configuration file format that was introduced by Heroku in 2011; you can now use this behavior on your CNB-powered application via the Procfile buildpack, which, like the rest of the buildpacks in our builder, is open source. The Procfile buildpack allows you to configure your web startup process, and the getting started guide ships with a Procfile. By including this file and using the Procfile buildpack, your application will receive a default web process. You can configure this behavior by changing the contents of that file.
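As a sketch of the multi-buildpack configuration described above, the file below uses the project descriptor format with Heroku buildpack IDs. Both the schema keys and the IDs are assumptions here, so check them against pack's project.toml documentation and against the builder's buildpack list.

```toml
# project.toml at the application root (a sketch; verify IDs with the builder inspection output)
[_]
schema-version = "0.2"

# Buildpacks run in the order listed
[[io.buildpacks.group]]
id = "heroku/nodejs"

[[io.buildpacks.group]]
id = "heroku/ruby"

[[io.buildpacks.group]]
id = "heroku/python"

# Optional: reads a Procfile to set the web process
[[io.buildpacks.group]]
id = "heroku/procfile"
```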

Schneems 2 years ago

It's dangerous to go alone, `pub` `mod` `use` this.rs

What exactly do `mod` and `use` do in Rust? And how exactly do I "require" that other file I just made? This article is what I wish I could have given a younger me. This post starts with a fast IDE-centric tutorial requiring little prior knowledge, good if you have a Rust project and want to figure out how to split up files. Afterward, I'll dig into details so you can understand how to reason about file loading and modules in Rust.

Tip: If you enjoy this article, check out my book How to Open Source to help you transform from a coder to an open source contributor.

Skip this if: You're the type of person who likes to see all the ingredients before you see the cake. For this tutorial, I'll be using VS Code and the rust-analyzer extension. With that software installed, create a new Rust project via the `cargo new` command. (Note: the `tree` command in the output isn't required; I'm using it to show file hierarchy. You can install it on a Mac via Homebrew.) In the `src` directory, there's only one file, `main.rs`. Since some people think tutorial apps are a joke, let's make an app that tells jokes. Create a new file next to `main.rs` to hold a joke, add a function to that file, then modify your `main.rs` to call it.

This code fails with an error: you cannot use the new code in `main.rs` because `main.rs` cannot find it. When you create a file in a Rust project and get an error that it cannot be found, navigate to the file in your VS Code editor and hit Cmd+. (the command key and period on Mac, or Control and period on Windows). You'll get a "quick fix" prompt asking if you want to insert a module declaration. Your editor may look different than mine, but note the quick-fix menu right under my cursor. If you hit enter, your `main.rs` gains a `mod` declaration for the new file.

Skip if: You already know how to use cargo-watch in the VS Code terminal. Run your tests on save in the VS Code terminal with cargo watch. You can open the command palette by pressing Cmd+Shift+P (command, shift, and "p" on Mac, or Control, Shift, and "p" on Windows), then type "toggle terminal" and hit enter. This will bring up the terminal. Then install cargo-watch and run the watch command in your terminal. This command tells cargo-watch to watch the file system for changes on disk, then clear (`-c`) the window and execute tests (`-x test`). This will make iteration faster.

Make sure cargo watch is running and save the file. Note that tests are failing since the program still cannot compile. The error "exists but is inaccessible" is similar to what we saw before, but with additional information, and running the command it suggests gives a great clue on its very last line: we need to update the function to be public. Edit your file to make the function `pub`. On save, it still fails, but we've got a different message (always a good thing): it suggests "consider importing this function" by adding a `use` statement to `main.rs`. Use the quick-fix menu on the function call in `main.rs`. After accepting that option, the `use` statement is added, and when I save the file, it compiles!

To recap what happened here: you don't have to memorize EVERYTHING required. All in all, our tools either did the work or gave us a strong hint as to what to do next. Start mapping if-this-error to then-that-fix behavior while learning `mod`, `use`, and file loading.

To import code from a file next to `main.rs`, you must add a `mod` declaration to `main.rs`. How do you add files in a different directory? Let's say we want to tell several kinds of jokes, so we split them into a directory with different files. Use the results of the first tutorial and add to it: create a subdirectory under `src`, write a joke into one file in that directory, and another joke into a second file. With these files saved, use the quick-fix menu, which will prompt you to insert a `mod` declaration.
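The code listings from this first tutorial did not survive extraction. Here is a minimal sketch of where it ends up; the file name `joke.rs` and the function name `tell` are stand-ins for whatever the original post used.

```rust
// src/joke.rs (hypothetical file name)
// `pub` makes the function visible outside of this module.
pub fn tell() -> String {
    "Why do programmers prefer dark mode? Because light attracts bugs.".to_string()
}
```

```rust
// src/main.rs
mod joke;       // loads src/joke.rs as the `joke` module
use joke::tell; // brings the function into scope (optional; `joke::tell()` also works)

fn main() {
    println!("{}", tell());
}
```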
Select the first option on both files and save both. You might be surprised it didn't modify `main.rs` like our first tutorial. Instead, the extension modified the module file for the new directory with these additions. Modify `main.rs` to use the contents of those modules now. These two lines implicitly bind the module imported above to its own namespace; you can also make this explicit with a `use` statement. You can save and run this code. Make sure you're sitting down so you don't end up rolling on the floor laughing.

To recap what happened here: the function will output two jokes.

Sometimes my kids think a joke is so funny that they want to hear it again. Let's tell one of the jokes again, this time calling it directly from the other joke's function. When you save, you'll get an error. We've seen this error before: "cannot find function in this scope". The help ends with this line: "If the item you are importing is not defined in some super-module of the current module, then it must also be declared as public (e.g., `pub`)." That's not super helpful, since the function is already public. Before, we saw that two invocations are basically the same thing, so you might guess that calling the function from inside the other module is the same as calling it through its full path, which makes it a bit more explicit. You might also notice that this path appears nowhere in the current file. Where did it come from? That function came from another module, but Rust cannot find it. The first time we called the function, we started from the crate root and traversed the path. Let's try that same technique. Did that work? No. But our message changed again (always worth celebrating). Before, the error said that the function was "inaccessible." Now it's saying that we're referencing a private module. It points at the line where that module is declared. Hover over that declaration and press Cmd+. again. It asks if you want to change the visibility of the module. Accept that change and save. Then the file compiles!

To recap what happened here: you might wonder, "What's `pub(crate)` and how is it different from `pub`?" The `pub(crate)` declaration sets visibility to public but is limited to the crate's scope. This is less privileged than `pub`. The `pub` declaration allows a different crate with access to your code (via FFI or a library) to use that code. By setting `pub(crate)`, you indicate a semi-private state: changing that code might affect code in other files of your project, but it won't break other people's code. I prefer using `pub(crate)` by default and only elevating to `pub` as needed. However, the core part of this exercise was seeing how far we could get by letting our tools figure out the problem for us.

If all you want to do is put code in a file and load it from another file, you're good to go. This is the high-level cheatsheet for what we did above: reference Rust code in a file fast by adding a `mod` declaration (or letting the quick-fix do it), making the item `pub` or `pub(crate)`, and optionally adding a `use` to shorten the path. If you use the above methodology, rust-analyzer will create or edit the module files for you.

There are two ways to load files from a directory. Before, I used the convention where a `.rs` file named after the directory loads all the files in that directory. That's technically the preferred way of loading files in a directory, but there's one other method, which is putting a `mod.rs` file in the directory you want to expose. In short, `<directory>/mod.rs` does the same thing as `<directory>.rs`. Please don't take my word for it: look at the current file structure, move the directory's `.rs` file to `mod.rs` inside the directory, and re-run the tests. They still pass! As far as Rust is concerned, this code is identical to what we had before.

Now that you've tasted our lovely IDE productivity cake, it's time to learn about each of the ingredients. In the above example, we saw that we can reference code via a path that starts with a keyword.
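Here is a sketch of the directory-based layout this second tutorial builds toward, again with hypothetical names (`jokes/`, `dad.rs`, `knock_knock.rs`) since the originals were lost.

```rust
// src/jokes.rs  (or equivalently src/jokes/mod.rs)
// `pub(crate)` lets the rest of this crate see the submodules.
pub(crate) mod dad;
pub(crate) mod knock_knock;
```

```rust
// src/jokes/knock_knock.rs
pub(crate) fn tell() -> String {
    "Knock knock. Who's there? Interrupting cow. Moo!".to_string()
}
```

```rust
// src/jokes/dad.rs
pub(crate) fn tell() -> String {
    // Calling a sibling module via an absolute path from the crate root:
    format!("A classic: {}", crate::jokes::knock_knock::tell())
}
```

```rust
// src/main.rs
mod jokes;

fn main() {
    println!("{}", jokes::knock_knock::tell());
    println!("{}", jokes::dad::tell());
}
```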
Here are the keywords you can start a code path with in Rust: `self`, `crate`, `super`, or nothing at all (an unqualified path).

Starting a path with `self` indicates that the code path is relative to the current module. In the earlier code, the item we called lives inside the current module, so this keyword is optional: you can remove it, and the code will behave exactly the same. Most published libraries do not use `self` because it's less typing to omit the word altogether. In the unqualified version, `self` is implied because the path doesn't start with `crate` or `super`. Using `self` can be helpful as a reminder that you're using a relative path when you're starting out. If you're struggling to correct a path, try mentally substituting the module's name for the `self` keyword as a litmus test to see if it still makes sense. While it might seem that `self` and unqualified paths are interchangeable, they are not: some code compiles without `self` but fails once you add it. In addition to being an implicit reference to `self`, an unqualified path also gives you access to elements that ship with Rust, like built-in macros or the `std` namespace; put `self::` in front of such a path and it would fail to compile. An unqualified path also gives you access to any crates you import via Cargo.toml, and again, putting `self::` in front of one of those would fail to compile.

A code path that starts with `crate` is like an absolute file path. This keyword maps to the crate root (`main.rs` or `lib.rs`). In our above example, `crate` pointed at the same place as `self`, since we were already in the crate root (`main.rs`), so we could have written the code either way; those two lines are identical. However, if you try to copy and paste them to another file, only the one that starts from `crate` will continue to work. You can use an absolute path in any code module as long as all the parts of that path are visible to the current module. For example, a file can mix absolute and relative paths, calling one function via its absolute path and another via a relative path.

The `super` path references the parent of the current module. In this case, the parent of that module is the crate root, so you can re-write the above code using `super` as a replacement for `crate`.

Skip this if: you know how to use `use` to rename modules. In Rust, there is a filesystem module, `std::fs`, but you don't have to `use` it to, well… use it. You can type out the full path every time. Instead of writing out `std::fs` everywhere, you could tell Rust that you want to map a shorthand and rename it. That code says, "Anytime you see `fs` in this file, what I mean is `std::fs`." This pattern is so common that you don't need the repetition of `as fs` at the end; since you're `use`-ing it under the same name, you can drop it. All three programs are exactly the same. The `use` statement does not load anything extra; it simply renames things, usually for convenience.

Beyond importing a namespace or a single item (such as an enum, struct, or function), you can import ALL the items in a namespace using a glob import. With a glob import, you can call a function without naming it in the `use` line, because it exists in the globbed namespace and gets imported along with all of its friends. This is generally discouraged because two imports may conflict: if that namespace one day introduces a new function whose name you're already using, there would be a conflict, and your code would fail to compile. While glob imports might save time typing, they increase the mental load while reading. They're not currently considered "idiomatic." The exception would be a prelude file within a crate.

Beyond renaming imports, `use` can change your code's behavior!
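Since the original snippets are missing, here is a single-file sketch (with invented module names) that exercises the path keywords and the `use` renaming described above; the inline modules stand in for files on disk.

```rust
mod jokes {
    pub(crate) mod dad {
        pub(crate) fn tell() -> String {
            // `super` is the parent module (`jokes`); `crate` is the crate root.
            // Both paths reach the same sibling module here:
            let a = super::knock_knock::tell();
            let b = crate::jokes::knock_knock::tell();
            assert_eq!(a, b);
            a
        }
    }
    pub(crate) mod knock_knock {
        pub(crate) fn tell() -> String {
            "Knock knock.".to_string()
        }
    }
}

// `use ... as` only renames; `use std::fs;` would be the usual shorthand.
use std::fs as filesystem;

fn main() {
    println!("{}", jokes::dad::tell());        // unqualified path, `self::` is implied
    println!("{}", crate::jokes::dad::tell()); // absolute path from the crate root

    // These name the same function, with and without the rename:
    let _ = std::fs::read_to_string("Cargo.toml");
    let _ = filesystem::read_to_string("Cargo.toml");
}
```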
In Rust, you can implement a trait on a struct that wasn't defined in the same library, through something known as an "extension trait". This might sound exotic and weird, but it's common; even Rust core code uses this feature. For example, some code will not compile until you bring a trait in via `use`. Why does that work? In this case, the `use` statement on the first line changes the code by bringing the trait into scope. Also, Rust was clever enough to realize that it could compile if we added a `use` to the code; it's right there in the compiler suggestion. Neat! This behavior change confused me for the longest time, as most documentation and tutorials just beat the "Rust does not have imports, only renaming" drum to the point that it's not helpful information. Confusingly enough, if you define the trait on your own struct, you'll have to `use` the trait everywhere you want that behavior too.

The various permutations of `pub`, `mod`, and `use` can be confusing, to say the least. I wrote this out as an exercise in understanding them; it's more useful as a reference. I've focused on mapping modules and files, but you can use modules without files. Rust By Example's modules chapter gives some examples, and you'll likely find modules used without filenames as you go through various docs on the module system. Aside from the file-loading conventions, most features map 1:1 whether or not your modules are backed by the file system. I really wanted to call this a "comprehensive" guide, but there's more to this rabbit hole. If you're a depth-first kind of learner, you can dive into some further reading:
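Here is a small sketch of that trait-in-scope behavior, using the standard library's `std::fmt::Write` trait as a stand-in for whatever example the original post showed.

```rust
// Calling `write_str` on a String only compiles because the `Write` trait is in scope.
// Remove the `use` line and rustc suggests importing the trait.
use std::fmt::Write;

fn main() {
    let mut out = String::new();
    out.write_str("traits must be in scope to call their methods").unwrap();
    println!("{out}");
}
```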

Schneems 2 years ago

What is github.com/zombocom and why are most of my Ruby libraries there?

The other day I got another question about the zombocom org on GitHub that prompted me to write this post. This org, github.com/zombocom , holds almost all of my popular libraries. Why put them in a custom GitHub org, and why name it zombocom? Let's find out.

If you're maintaining one or two libraries, keeping them in your GitHub user's namespace is easy enough. For me, this is https://github.com/schneems . The zombocom org has 18 libraries, 16 of which I created. I want to encourage people to contribute to my libraries, so I've taken to giving commit access to developers who land a successful PR. I still control releasing, so this permissive default is intended to allow developers with ambition to express themselves while giving me an opportunity to QA and chime in on changes. I wanted to take this a step further and give them access to ALL my libraries to make it even easier to contribute. This strategy is time-consuming if my libraries are all under github.com/schneems , so I moved them into a custom org and called it a day.

Creating a custom org is common for top-rated gems/libraries, for example github.com/puma or github.com/CodeTriage . My most popular library on zombocom has 52 million downloads, but an org named after that one library would be overly restrictive. Custom orgs are usually named around a common theme, but my libraries don't have a shared theme aside from being things I wrote that hopefully make your life easier. I didn't want 16 namespaces for 16 libraries. I wanted a place where anything was possible. So I named it after Zombo.com : Anything is possible at Zombocom. The infinite is possible at Zombocom. The unobtainable is unknown at Zombocom. Welcome to Zombocom.

Another org inspired this move: github.com/sparklemotion . This org was initially used for Nokogiri (the most popular XML/HTML parsing library for Ruby), and other libraries were added over time. The name is a joke from the movie Donnie Darko: "Sometimes I Doubt Your Commitment To Sparkle Motion." The Nokogiri library and the sparklemotion org were created by Aaron Patterson, aka tenderlove . I asked him about it at a conference once, and he said he made it for similar reasons.

Some ask if I also made Zombo.com , but the answer is no. I have nothing to do with that site and don't know who originally wrote it. The name is a tribute to a meme before memes. If the author ever wants to use my org name to host their website source, that would be fun. After all, anything is possible.

If you want to get started in open source, but need a helping hand, check out my (paid) book How to Open Source or (free) service CodeTriage

Schneems 2 years ago

Pairing on Open Source

I came to love pairing after I hurt my hands and couldn't type. I had to finish up the last 2 months of a graduate CS course without the ability to use a keyboard. I had never paired before but enlisted several other developers to type for me. After I got the hang of the workflow, I was surprised that even when coding in a language my pair had never written in (C or C++), they could spot bugs and problems as we went. Toward the end, I finished the assignments faster when I wasn't touching the keyboard than I did by myself. Talking aloud forced me to refine my thoughts before typing anything. It might be intimidating to try pairing for the first time, but as Ben puts it, "it's just a way of working together."

This Hacktoberfest, I started a Slack group for the ~350 early purchasers of my book How to Open Source . In the intake survey, they told me they wanted to learn more about pairing. When I think pairing, I think of Ben Orenstein, CEO of the pairing app tuple.app . I jumped on Twitter and asked if I could interview him for the group, and he agreed! Listen in as we discuss the intersection of pairing and open source contribution. We'll talk about how it's different from regular pairing (or not), how to find people to pair with, and the best way to ask for help from a potential mentor. In addition to talking about pairing in the group, we had developers who organized together to pair for Hacktoberfest. One made their first-ever contribution after their first pairing session. Then she wrote her first ever English blog post about the experience . (If you find a glaring problem with the transcript you can send me a PR to https://github.com/schneems/schneems.)

Ben: Yeah, let's rock.

Schneems: Well, welcome everyone. My name is Richard Schneeman, author of How to Open Source . Today I have Ben, the CEO. CEO or CTO?

Ben: CEO.

Schneems: All right. Yeah, the big guns of tuple.app . So Tuple is an application for pairing. It's my favorite application for pairing, coincidentally. So you wanna say hi, Ben?

Ben: Yeah, I'm stoked to be here. Pairing was a huge game changer in my career and so I'm stoked to talk about this topic that made a big difference for me professionally.

Schneems: Awesome, awesome. Well I am very happy to have you. I recently launched a book and in doing so, asked some of the people who bought it what they're interested in. A lot of 'em indicated that they are really interested in pairing, so hence inviting Ben. And so yeah, wanted to ask you a couple of questions. One of 'em that came up is just kind of high level: hey, are there differences pairing on open source software versus proprietary software?

Ben: Probably not in the actual act of pairing, I would assume. I can't think of any differences that would be there. I think it's probably, you might see some differences in what it takes to get people to pair with you. I'm not quite sure about the willingness of open source maintainers to pair with other contributors. You would sort of hope it was high, or maybe once you had earned a little bit of trust or shown a bit of promise, that could be a thing. That would be an easier pitch to make. If I were an open source maintainer and I sort of had all the responsibility of my project, I would want to train other people on that project. So it's to help share some of the burden, I think. And I haven't found a better thing than pairing for transmitting that sort of knowledge yet.
So I would think, whereas if you're at a company and you're all in the same team and you're all working on the same app, it's probably pretty easy to just DM somebody and say, Hey, can we pair on this thing?

Schneems: Totally, totally. Yeah. I think, and you kind of alluded to a little bit of it, as a maintainer you need to balance your, like, yes, you wanna train people up, but you also need to balance that with, well hey, am I ever gonna see this person again? So in order to pair you need to be able to find people, and it's a little bit different working in the open, working in open source. Have you ever seen any sort of pairing groups, or have you seen any sort of patterns with either people who say like, Hey, I have a thing I wanna share, or other people who come to the table and say, Oh, I have a problem, or I need help with this? Or tips really in general for people looking for someone to pair with, but maybe they don't necessarily have a pre-built pool?

Ben: It's kind of a bigger question. I think the question of how do you find someone to pair with is a little bit like the question of how do you find colleagues or mentors? Pairing is just a way of working together. It's not a magical practice. It's effective and I'm a big fan of it, but it's not different than working with other people really. It's just a type of working together. And so I think it really transposes to the question of how do I find great people to work with? And I think maybe a short answer to that is be worth working with. Be a friendly person, a productive person. Show some indications that you are going to be good to pair with; that is probably a good way to start. I think if you email someone and say, Hey, I'm looking for a mentor, will you be my mentor? No one wants to say yes to that. That's not a compelling pitch. But if you say, Hey, I've been a fan of your work. I loved this project and this project, you inspired me to do this thing and I ran into this problem, do you have any advice on how I might tackle this? Now you're starting a relationship with somebody. You've shown that you have done work, you've done some homework already, and you're showing that you are possibly worth mentoring. And I think that approach is probably likely to pay better dividends than just saying how do I find someone to pair with, who wants to pair with me? Well, in the open source context, try to get a couple PRs under your belt maybe, or ask if the maintainers have considered organizing a pairing day where maybe they connect a few people that are interested in becoming contributors or new contributors, something like that.

Schneems: Have you seen anything like that in the wild? And I ask because, it's like, hey, I'm trying to run this community and it's like I've got people who want to pair, I have people who wanna work on projects. It seemed...

Ben: It's a weird thing. I've seen a lot of failed efforts here. I think it's a pretty common programmer impulse to be like, I should build a site that matches people together that wanna pair program. Cuz you could sort of envision how the app would work. And so a lot of programmers write it, and I think it's not a matching problem really. Just because these two people wanna pair in the same time zone and in the same language doesn't mean they're really gonna actually pair together. I think there's more complex social dynamics at play than an app is going to solve.
I think the closest thing I've seen to a direct effort to cause more pairing that has worked well has been inside a company. So one of the customers of Tuple is Shopify. They have thousands of developers using our product and they ran an internal pairing contest. They ran a contest and it was like whoever pairs the most this week, or with the most people, or the most time becomes eligible for these prizes. And I think the prizes were honestly just t-shirts and badges and things like that. I think they were fairly simple things. But I remain continually surprised by how much developers will do for a t-shirt. So that might be a possible approach there.

Schneems: A hundred percent. Yeah. I mean we're definitely seeing that there's actually a lot of engagement within the group, and Hacktoberfest is just something to focus on, I feel like is a big thing. When you said that, I was like, Oh yeah, it's like a contest. It's like, yeah, who can pair the best? It's who can be the most friendly. Ah, I'm gonna crush it at being friendly.

Ben: Totally.

Schneems: That's interesting. Yeah. Pairing contest. Yeah. And even one of the people in the group specifically called out and mentioned Focusmate. Have you ever heard of Focusmate?

Ben: Yeah. That's the thing where you go and you work with somebody kind of simultaneously, but separately.

Schneems: Right? Yeah. So for just everybody else, you log into the site and you just say, Hey, I need to focus. I need basically somebody to keep me accountable. It pairs you up with someone, I mean not pairs, it matches you with someone, there we go. And then you basically just sit there for a block of time while you both do work. And the idea is if you start playing on your phone or something, the other person would see. So it's like working in a cafe. So yeah, good, you're familiar with it. Somebody specifically was asking like, Hey, have you ever considered integrating with Tuple? And it's back to the matching issue. Not super keen on that, like you said, as being a technology issue.

Ben: Yeah, I don't believe that you're gonna pair random strangers together and have it work that well. You might occasionally have success there; if you get people with similar values and goals, you have a higher success rate, I'm sure. I think if you want to pair strangers, you need to have a human in the loop and very motivated people. Otherwise it's kind of like pairing up workout buddies where if either person flakes, then the other person also can't work out. Now we're just tied together in this particular foot race. We have considered building some things within teams. I think what would work is saying Richard joined the team a week ago and hasn't paired with anyone, maybe someone should pair with him. Or here's a leaderboard: Richard actually pairs with the most people in a given month, I wonder if anyone could catch him, here's the second and third and fourth place, kind of thing. And sort of take a thing that is already mostly working, or is already an established social group, and kind of give it nudges. I think that would possibly work. But this idea that we should add matching into Tuple to have people find other people to pair with broadly throughout the internet and among strangers, I'm pretty skeptical of. I might be wrong, maybe it would be an amazing feature, but I have suspicions.

Schneems: Okay. Yeah, that makes sense.

Ben: Your group might be different, might be an exception. So we did do some.
So when I was running Upcase for thoughtbot, which is an educational developer training service, I think we did do some matching of pairs and there was some success. I wouldn't say it was a resounding success, but there was pairing happening. People reported that it actually occurred. So I think it is possible within a group that has kind of self-selected and identified, I wanna learn this thing, I'm willing to invest this effort to do this.

Schneems: For that, within matching people: of the times that you have mentioned pairing, it kind of sounds like you're mostly envisioning a different level. Senior, junior, I mean... actually, no labels and whatnot, or same level, or I don't know. Other than values and character traits and just connection, are there any other things to look for that might make a good pair, a good pairing setup, or a good pairing pair?

Ben: Pair programming is a more social endeavor than most other programming activities. It's live, real-time code review, a little bit. And so just like in a code review, you have to be a little careful about how you say things and you have to maybe use more emoji, like positive emoji, and err more on the side of politeness and friendliness and happiness than you might otherwise. I think pairing has a little bit of that as well. I think a good pair is someone that has empathy and friendliness and a decent amount of easygoingness. And so I would seek to pair with people that you enjoy spending time with. If they're nice humans and pleasant and enjoy collaborating on things, then you'll probably have a good time pairing with them. If they are the type to nerd-snipe you, or "well, actually" you all the time, or be condescending if you don't know something, then you will not have a good time pairing with that person. And it's not pairing's fault, it just exposes a thing that was there.

Schneems: Right. Yeah. There's just already a relationship. It's not gonna be literally a different person when they show up to the session.

Ben: Yeah.

Schneems: Yeah, that makes sense to me. So on the site you do have "why pairing," and there's sort of a five-second, five-minute version, and maybe that's not the delineation. One of the people in my group was asking if you know of any research in the world of pairing. They were trying to bring pairing to their company and they're basically just looking for more ammo, firepower, like, hey, we can say pairing has done X, Y, and Z. Do you know of either existing research or, I don't know, maybe ongoing research?

Ben: Yeah, if you Google scientific research into pair programming, the top result is an article I wrote. There's not much, is the short answer. So I wrote up what I found and it's about six or so plausible studies that you could maybe say make a decent case for pairing, I think. So hopefully that will help. This is a site that we made, learntopair.com, and that's one of the articles. There's lots of stuff there. Most of what I know about pairing, I tried to put in that site, learntopair.com.

Schneems: Got it.

Ben: Trying to justify this scientifically, it feels like coming up with scientific research into programming productivity measures seems very hard to me. I think coming up with a measure of how productive programmers are is tantalizing and so far, I think, has been fairly intractable. I'm not sure you could get something that would convince a lot of expert practitioners that you have a good metric that actually demonstrates whether a programmer is productive. It might be almost impossible.
And so to say here is some scientific research into the effectiveness of pair programming sort of requires some measurement, otherwise you're just doing surveys. Did you feel more productive? Did you enjoy this experience more? And the surveys there, people have done this research with surveys, and the surveys do show very strong positive results. People enjoyed pairing, they felt more productive, they felt more connected. I probably wouldn't take this track if I were trying to introduce pairing or trying to justify pairing as an activity. I think I would try to "no big deal" my way into it. Meaning, I think sometimes people think I need to convince my team, I should convince my team, my company, my management, that we should do pair programming, and so let me make the case. And it would be great if there were a study that showed that this is clearly a great idea. And I think what you probably should do instead is go to a coworker that you have a great relationship with, and who is warm and friendly and empathetic, and say, Hey, will you gimme a second set of eyes on this real quick, and fire up something like Tuple or some other screen sharing tool or whatever. Or have them pull up a chair next to you if you are working in person, plug in an extra keyboard, and just write some code together. And don't even call it pairing. Don't make a big deal of it. Just call it a second set of eyes. Cuz that's fundamentally what pairing is at the end of the day. It's two people looking at the same code at the same time. And all the rules or strategies or tactics about who types when and who has a mouse and who has a keyboard and where's the monitor, all this really is implementation details, but the fundamental activity is two programmers looking at the same code. And so you could do that in person, you could do that remote, and I would just casually start this practice and see how it goes. And if you like it and it works well, try to do it some more and maybe do it with some other people. And maybe when someone at a standup says, Oh, I'm still stuck on that, whatever, you say, Oh, do you want me to come take a look at that with you at some point? And then lo and behold you're pairing with them, or hey, maybe so-and-so and Mary should pair on that thing or should work on that thing together. I would go at it grassroots and subtly and not make a big deal about it. And pairing is not for every team, so it might not work, but I think that probably gives you a pretty good chance of success. Top-down "we will all pair" can work occasionally. There are some organizations that have done that and have become pairing organizations, so it can work, but that's not how I would introduce pairing at an existing organization that is not already pairing or doesn't have extreme buy-in from the top, who are willing to say, Yes, we're gonna make this the priority of the quarter, everyone's gonna pair, and we'll see how it goes.

Schneems: So I think the reason behind maybe why they were asking, it's like, Oh hey, how can I get this top-down buy-in? Maybe they were looking for a credit card, they're looking for the sign-off, they're looking for like, Oh hey, specifically, I heard there's this great app that I would like to get access to. Yeah, I don't know. It's like...

Ben: What are their options?

Schneems: Yeah, yeah. Well it's, you have a free trial period, right?

Ben: Yes, we do have a free trial. It doesn't require a credit card.
So we see lots of our customers sign up as entry-level developers or developers without a credit card. And then they try it and they like it and they ask someone high up the chain for permission to buy it and then become paid customers. But also, and relevant to this group I suspect, is that we give away the app for open source teams. So you can get a permanently free team if you are open source maintainers or working on open source; it's just tuple.app/oss, and fill out that form with what you're working on. And we grant this to probably almost everyone that fills it out. We leverage, we use a lot of open source software in our tool and our company is built on top of it. So we were very happy to give back to the OSS community in this way.

Schneems: And full disclosure, I am a recipient of this, a happy recipient of this program. And I think, so, both people don't need to have paid for it, is that correct? Yeah. Okay. So if you wanna pair with somebody who's never done it, they don't have to make this big investment. If one person has a license, you can invite somebody else to pair with you.

Ben: Right? Exactly. Yep.

Schneems: Okay, cool. I think that helps ease that transition and gets to the heart of the, it's like, research, maybe not, but it's really, how can we hit the ground running? How can we do more pairing? And I think that makes total sense to me. Have you ever heard of, I guess, pairing on non-technical tasks, or just almost like...

Ben: Yeah,

Schneems: Writing an RFC or research or, I guess you can speak to that.

Ben: I pair on non-technical tasks all the time, every week. And this actually happens a lot at our company. A lot of non-programming things happen on Tuple sessions as part of a pair. There's a lot of benefits of pairing that you get even if there's no code on the screen. So it's often less boring to work on something, or if something is boring, it's less likely to be boring, it's more engaging, if someone is there with you. It's harder to slack off if you're sharing your screen, so you tend to stay more focused and get the thing done faster with fewer accidental long Twitter breaks or email breaks or Slack breaks or things like that. It spreads knowledge around the team. If I watch someone do a process or write a doc or something, I'm learning that thing at the same time. So we are having all this rich communication happening. You can see how someone else works, how someone else thinks, which is really useful for just becoming more awesome yourself. You might see they have a wonderful Mac tool that you wanna steal, or Oh hey, I didn't know you could do that in our app with this shortcut, or how did you make that thing happen, not just while programming, but also while just using your computer. So there's all these benefits that are available to you. And I think it shines particularly well on coding because programming is so hard: it's just so easy to write bugs, and it's so hard to make a great design, a good coding design, that it feels extra worth having another person there, because bugs in production can be very costly. They're not fun to fix. They slow you down a lot. Bad design decisions are probably even worse. True for all those things, but even stronger probably. And so it shines a lot there.
But we have found quite a lot of value in just pairing on running payroll, or figuring out what's the policy on this thing, or what are our goals for next quarter, all those sorts of things.

Schneems: Okay, cool. How do you initiate those conversations? Or do you just say, Hey, I'm working on this, does somebody wanna do it with me?

Ben: Yeah, yeah. Same sort of casual way. I have standing pairing meetings, kinda like, on my calendar, with people that I work with a lot. That's probably true for a number of our employees as well. But yeah, I think there's also just a lot of, there's a fair amount of ad hoc pairing happening where someone just calls somebody else. I'm like, Oh, this is tricky. Let me just call this person. And they just get a notification that I'm calling, and there's my screen, and we start talking, and it happens fairly fluidly. And we've built a culture; our culture is sort of steeped in this, as you'd imagine, making a pairing app for a living. There's a lot of pairing happening. Everyone's expecting it. No one is shocked when you wanna do it with them. So that does a lot of work there.

Schneems: Cool. Very cool. Well yeah, I really appreciate you coming on board and answering some of the open source community's questions. So yeah, thank you, Ben from tuple.app.

Ben: Yeah, my pleasure.
