Posts in Swift (7 found)
baby steps 2 days ago

We need (at least) ergonomic, explicit handles

Continuing my discussion on Ergonomic RC, I want to focus on the core question: should users have to explicitly invoke handle/clone, or not? This whole "Ergonomic RC" work was originally proposed by Dioxus and their answer is simple: definitely not. For the kind of high-level GUI applications they are building, having to call `.clone()` on a ref-counted value is pure noise. For that matter, for a lot of Rust apps, even cloning a string or a vector is no big deal. On the other hand, for a lot of applications, the answer is definitely yes – knowing where handles are created can impact performance, memory usage, and even correctness (don't worry, I'll give examples later in the post). So how do we reconcile this?

This blog argues that we should make it ergonomic to be explicit. This wasn't always my position, but after an impactful conversation with Josh Triplett, I've come around. I think it aligns with what I once called the soul of Rust: we want to be ergonomic, yes, but we want to be ergonomic while giving control [1]. I like Tyler Mandry's "clarity of purpose" construction: "Great code brings only the important characteristics of your application to your attention". The key point is that there is great code in which cloning and handles are important characteristics, so we need to make that code possible to express nicely. This is particularly true since Rust is one of the very few languages that really targets that kind of low-level, foundational code.

This does not mean we cannot (later) support automatic clones and handles. It's inarguable that this would benefit clarity of purpose for a lot of Rust code. But I think we should focus first on the harder case, the case where explicitness is needed, and get that as nice as we can; then we can circle back and decide whether to also support something automatic. One of the questions for me, in fact, is whether we can get "fully explicit" to be nice enough that we don't really need the automatic version. There are benefits from having "one Rust", where all code follows roughly the same patterns, where those patterns are perfect some of the time, and don't suck too bad [2] when they're overkill.

I mentioned this blog post resulted from a long conversation with Josh Triplett [3]. The key phrase that stuck with me from that conversation was: Rust should not surprise you. The way I think of it is like this. Every programmer knows what it's like to have a marathon debugging session – to sit and stare at code for days and think, but… how is this even POSSIBLE? Those kinds of bug hunts can end in a few different ways. Occasionally you uncover a deeply satisfying, subtle bug in your logic. More often, you find a simple mistake – you wrote one thing when you meant another. And occasionally you find out that your language was doing something that you didn't expect: some simple-looking code concealed a subtle, complex interaction. People often call this kind of thing a footgun.

Overall, Rust is remarkably good at avoiding footguns [4]. And part of how we've achieved that is by making sure that things you might need to know are visible – like, explicit in the source. Every time you see a Rust match, you don't have to ask yourself "what cases might be missing here" – the compiler guarantees you that they are all there. And when you see a call to a Rust function, you don't have to ask yourself if it is fallible – you'll see a `?` if it is. [5]

So I guess the question is: would you ever have to know about a ref-count increment? The tricky part is that the answer here is application dependent.
For some low-level applications, definitely yes: an atomic reference count is a measurable cost. To be honest, I would wager that the set of applications where this is true is vanishingly small. And even in those applications, Rust already improves on the state of the art by giving you the ability to choose between `Rc` and `Arc` and then proving that you don't mess it up.

But there are other reasons you might want to track reference counts, and those are less easy to dismiss. One of them is memory leaks. Rust, unlike GC'd languages, has deterministic destruction. This is cool, because it means that you can leverage destructors to manage all kinds of resources, as Yehuda wrote about long ago in his classic ode-to-RAII entitled "Rust means never having to close a socket". But although the points where handles are created and destroyed are deterministic, the nature of reference counting can make it much harder to predict when the underlying resource will actually get freed. And if those increments are not visible in your code, it is that much harder to track them down.

Just recently, I was debugging Symposium, which is written in Swift. Somehow I had two instances when I only expected one, and each of them was responding to every IPC message, wreaking havoc. Poking around, I found stray references floating around in some surprising places, which was causing the problem. Would this bug have still occurred if I had to explicitly increment the ref count? Definitely, yes. Would it have been easier to find after the fact? Also yes. [6]

Josh gave me a similar example from the "bytes" crate. A `Bytes` value is a handle to a slice of some underlying memory buffer. When you clone that handle, it will keep the entire backing buffer around. Sometimes you might prefer to copy your slice out into a separate buffer so that the underlying buffer can be freed. It's not that hard for me to imagine trying to hunt down an errant handle that is keeping some large buffer alive and being very frustrated that I can't see explicitly in the source where those handles are created.

A similar case occurs with APIs like `Arc::into_inner` [7], which takes an `Arc` and, if the ref-count is 1, returns the inner value. This lets you take a shareable handle that you know is not actually being shared and recover uniqueness. This kind of API is not frequently used – but when you need it, it's so nice that it's there.
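To make that concrete, here is a minimal sketch of my own (not from the original post), using only standard-library calls: every new handle is a visible `Arc::clone` you can grep for, and `Arc::into_inner` only recovers ownership once the count is back down to 1.

```rust
use std::sync::Arc;

fn main() {
    // A shareable handle to a large buffer.
    let buf: Arc<Vec<u8>> = Arc::new(vec![0u8; 4096]);

    // Every explicit clone is a ref-count increment you can search for.
    let handle = Arc::clone(&buf);
    assert_eq!(Arc::strong_count(&buf), 2);

    // While the stray handle is alive, unique ownership cannot be recovered.
    assert!(Arc::into_inner(Arc::clone(&buf)).is_none());

    // Once the stray handle is gone, `Arc::into_inner` hands back the
    // underlying value, consuming the final handle.
    drop(handle);
    let owned: Vec<u8> = Arc::into_inner(buf).expect("no other handles remain");
    assert_eq!(owned.len(), 4096);
}
```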
Entering the conversation with Josh, I was leaning towards a design where you had some form of automated cloning of handles and an allow-by-default lint that would let crates which don't want that turn it off. But Josh convinced me that there is a significant class of applications that want handle creation to be ergonomic AND visible (i.e., explicit in the source). Low-level network services and even things like Rust For Linux likely fit this description, but plenty of other Rust applications might also.

And this reminded me of something Alex Crichton once said to me. Unlike the other quotes here, it wasn't in the context of ergonomic ref-counting, but rather when I was working on my first attempt at the "Rustacean Principles". Alex was saying that he loved how Rust was great for low-level code but also worked well for high-level stuff like CLI tools and simple scripts. I feel like you can interpret Alex's quote in two ways, depending on what you choose to emphasize. You could hear it as, "It's important that Rust is good for high-level use cases". That is true, and it is what leads us to ask whether we should even make handles visible at all.

But you can also read Alex's quote as, "It's important that there's one language that works well enough for both" – and I think that's true too. The "true Rust gestalt" is when we manage to simultaneously give you the low-level control that grungy code needs but wrapped in a high-level package. This is the promise of zero-cost abstractions, of course, and Rust (in its best moments) delivers.

Let's be honest. High-level GUI programming is not Rust's bread-and-butter, and it never will be; users will never confuse Rust for TypeScript. But then, TypeScript will never be in the Linux kernel. The goal of Rust is to be a single language that can, by and large, be "good enough" for both extremes. The goal is to make enough low-level details visible for kernel hackers but do so in a way that is usable enough for a GUI. It ain't easy, but it's the job. This isn't the first time that Josh has pulled me back to this realization. The last time was in the context of async fn in dyn traits, and it led to a blog post talking about the "soul of Rust" and a followup going into greater detail. I think the catchphrase "low-level enough for a kernel, usable enough for a GUI" kind of captures it.

There is a slight caveat I want to add. I think another part of Rust's soul is preferring nuance to artificial simplicity ("as simple as possible, but no simpler", as they say). And I think the reality is that there's a huge set of applications that make new handles left-and-right (particularly but not exclusively in async land [8]) and where explicitly creating new handles is noise, not signal. This is why, e.g., Swift [9] makes ref-count increments invisible – and they get a big lift out of that! [10] I'd wager most Swift users don't even realize that Swift is not garbage-collected [11]. But the key thing here is that even if we do add some way to make handle creation automatic, we ALSO want a mode where it is explicit and visible. So we might as well do that one first.

OK, I think I've made this point 3 ways from Sunday now, so I'll stop. The next few blog posts in the series will dive into (at least) two options for how we might make handle creation and closures more ergonomic while retaining explicitness.

[1] I see a potential candidate for a design axiom… rubs hands with an evil-sounding cackle and a look of glee
[2] It's an industry term.
[3] Actually, by the standards of the conversations Josh and I often have, it wasn't really all that long – an hour at most.
[4] Well, at least sync Rust is. I think async Rust has more than its share, particularly around cancellation, but that's a topic for another blog post.
[5] Modulo panics, of course – and no surprise that accounting for panics is a major pain point for some Rust users.
[6] In this particular case, it was fairly easy for me to find regardless, but this application is very simple. I can definitely imagine ripgrep'ing around a codebase to find all increments being useful, and that would be much harder to do without an explicit signal that they are occurring.
[7] Or `Arc::make_mut`, which is one of my favorite APIs. It takes an `&mut Arc<T>` and gives you back mutable (i.e., unique) access to the internals, always! How is that possible, given that the ref count may not be 1? Answer: if the ref-count is not 1, then it clones it. This is perfect for copy-on-write-style code. So beautiful. 😍
[8] My experience is that, due to language limitations we really should fix, many async constructs force you into `'static` bounds, which in turn force you into ref-counted, shared handles where you'd otherwise have been able to use plain references.
[9] I've been writing more Swift and digging it. I have to say, I love how they are not afraid to "go big". I admire the ambition I see in designs like SwiftUI and their approach to async. I don't think they bat 100, but it's cool they're swinging for the stands. I want Rust to dare to ask for more!
[10] Well, not only that. They also allow class fields to be assigned when aliased which, to avoid stale references and iterator invalidation, means you have to move everything into ref-counted boxes and adopt persistent collections, which in turn comes at a performance cost and makes Swift a harder sell for lower-level foundational systems (though by no means a non-starter, in my opinion).
[11] Though I'd also wager that many eventually find themselves scratching their heads about a ref-count cycle. I've not dug into how Swift handles those, but I see references to "weak handles" flying around, so I assume they've not (yet?) adopted a cycle collector. To be clear, you can get a ref-count cycle in Rust too! It's harder to do since we discourage interior mutability, but not that hard.

Cassidy Williams 1 month ago

Ductts Build Log

I built and released Ductts, an app for tracking how often you cry! I built it with React Native and Expo (both of which were new to me) and it was really fun (and challenging) putting it together. Yes! I should have anticipated just how many people would ask if I'm okay. I am! I just like data. Here's a silly video I made of the app so you can see it in action first!

The concept of Ductts came from my pile of domains, originally from November 2022 (according to my logs of app ideas, ha). I revisited the idea on and off pretty regularly since then, especially when I went through postpartum depression in 2023, and saw people on social media explain how they manually track when they cry in their notes apps for their therapists. I had a few different name ideas for the app, but more than anything I wanted it to have a clever logo, because it felt like there was a good opportunity for one. I called it crycry for a while, then CryTune, then TTears (because I liked the idea of the emoticon being embedded in the logo), and then my cousin suggested Ductts! With that name I could do the design idea, and I thought it might be a fun pun on tear ducts and maybe a duck mascot. Turns out ducks are hard to draw, so I just ended up with the wordmark.

I really wanted this app to be native so it would be easy to use on a phone! I poked around with actually using native Swift, but… admittedly the learning curve slowed me down every time I got into it and I would lose motivation. So, in a moment of yelling at myself to "just build SOMETHING, Cassidy," I thought it might be fun to try using AI to get me started with React Native! I tried a0 at first, and it was pretty decent at making screens that I thought looked nice, but at the time I tried it, the product was a bit too immature and wouldn't produce much that I could actually work with. But it was a good thing to see something that felt a bit real!

So, from there, I started a fresh Expo app from the starter template. I definitely stumbled through building the app at first, because I used the starter template and had to figure out which things I needed to remove, and probably removed a bit too much at first (more on that later). I got very familiar with the Expo docs, and GitHub Copilot was helpful too as I asked about how certain things worked. I implemented features one at a time, and peppered throughout all of that was a lot of styling, re-styling, debugging, context changes, design changes, all that jazz. That list feels so small when I think about all of the tiny adjustments it took to make drawers slide smoothly, make gestures move correctly, and test across screen sizes. There are a few notable libraries and packages that I used specifically to get everything where I wanted it.

I learned a lot about how Expo does magic with their Expo Go app for testing your apps. Expo software developer Kadi Kraman helped explain it to me best:

A React Native app consists of two parts: you have the JS bundle, and all the native code. Expo Go is a sandbox environment that gives you a lot of the native code you might need for learning and prototyping. So we include the native code for image, camera, push notifications and a whole bunch of libraries that are often used, but it's limited due to what is possible on the native platforms. So when you need to change things in native-land, you need to build the native code part yourself (like your own custom version of Expo Go, basically).
One of the things I really wanted to implement was an animated splash screen, and y'all… after building the app natively, properly, about a million times, I decided that I'm cool with it being a static image. But here's the animation I made anyway, for posterity.

So many things are funky when it comes to building things natively, for example how dependencies work and what all is included. There are a handful of libraries where I didn't read the README (I'm sorry!!!!) and just installed the package to keep moving forward, and then learned that the library would work fine in Expo Go but needed different packages installed to work natively. Phew. Expo Router is one of them: if I had just read the docs, I would have known that I shouldn't have removed certain packages when using it, because installing it properly pulls in several peer dependencies alongside the router itself. Kadi once again came in clutch with a great explanation:

The reason this sometimes happens is: Expo Go has a ton of native libraries pre-bundled for ease of development. So, even if you're not installing them in your project, Expo Go includes the native code for them. For a specific example, e.g. this QR code library requires react-native-svg as a peer dependency and they have it listed in the instructions. However if you were to ignore this and only install the QR code library, it would still work in Expo Go, because it has the native code from react-native-svg pre-bundled. But when you create a development build, preview build or a production build, we don't want to include all the unused code from Expo Go; it will be a completely clean build with only the libraries you've installed explicitly.

The Expo Doctor CLI tool saved my bacon a ton here as I stumbled through native builds, clearing caches, and reinstalling everything. Kadi and the Expo team actually made a PR to help check for peer dependencies after I asked them a bunch of questions, which was really awesome of them!

Y'all, shipping native apps is a horrible experience if you are used to web dev and just hitting "deploy" on your provider of choice. I love web development so much. It's beautiful. It's the way things should be. But anyway, App Store time. I decided to just do the iOS App Store at first, because installing the Android Simulator was the most wretched developer experience I've had in ages and it made me want to throw my laptop in the sea. Kadi (I love you Kadi) had a list of great resources for finalizing apps. TL;DR: build your app, make a developer account, get 3-5 screenshots on a phone and on a tablet, fill out a bunch of forms about how you use user data, make a privacy policy and support webpage, decide if you want it free or paid, and fill out more forms if it's paid.

Y'all… I'm grateful for the Expo team and for EAS existing. Their hand-holding was really patient, and their Discord community is awesome if you need help. Making the screenshots was easy with Expo Orbit, which lets you choose which device you want for each screenshot, and I used Affinity Designer to make the various logos, screenshots, and marketing images it needed. I decided to make the app just a one-time $0.99 purchase, which was pretty easy (you just click "paid" and enter the amount you want to sell it for), BUT if you want to sell it in the European Union, you need to have a public address and phone number for that. It took a few pieces of verification with a human to make that work.
I have an LLC through which I do consulting work and used the registered agent's information for that (that's allowed!), so that my personal contact info wouldn't be front-and-center in the App Store for all of Europe to see.

The website part was the least of my worries, honestly. I love web dev. I threw together an Astro website with a link to the App Store, a Support page, and a Privacy Policy page, and plopped it on my existing domain name, ductts.app. One thing I did dive deep on, which was unnecessary but fun, was an Import Helper page to help make a Ductts-compatible spreadsheet for those who might already track their tears in a note on their phone. Making a date converter and a sample CSV and instructions felt like one of those things that maybe 2 people in the world would ever use… but I'm glad I did it anyway.

Finally, after getting alllll of this done, it was just a matter of waiting a few days until the app was up on the App Store, almost anticlimactically! While I waited I made a Product Hunt launch page, which luckily used all the same copy and images as the App Store, and it was fun to see it get to the #4 Health & Fitness app of the day on Product Hunt, and #68 in Health & Fitness on the App Store!

I don't expect much from Ductts, really. It was a time-consuming side project that taught me a ton about Expo, React Native, and shipping native apps, and I'm grateful for the experience. …plus now I can have some data on how much I cry. I'm a parent! It happens! Download Ductts, log your tears, and see ya next time.


My agentic coding methodology of June 2025

I was chatting with some friends about how I'm using "AI" tools to write code. Like everyone else, my process has been evolving over the past few months. It seemed worthwhile to do a quick writeup of how I'm doing stuff today.

At the moment, I'm mostly living in Claude Code. My "planning methodology" is: "Let's talk through an idea I have. I'm going to describe it. Ask me lots of questions. When you understand it sufficiently, write out a draft plan." After that, I chat with the LLM for a bit. Then the LLM shows me the draft plan. I point out things I don't like in the plan and ask for changes. The LLM revises the plan. We do that a few times.

Once I'm happy with the plan, I say something along the lines of: "Great. Now write that out to a file as a series of prompts for an LLM coding agent. DRY YAGNI simple test-first clean clear good code." I check over the plan. Maybe I ask for edits. Maybe I don't. And then I clear the session to blow away the LLM's memory of this nice plan it just made.

"There's a plan for a feature in that file. Read it over. If you have questions, let me know. Otherwise, let's get to work." Invariably, there are (good) questions. It asks. I answer. "Before we get going, update the plan document based on the answers I just gave you." When the model has written out the updated plan, it usually asks me some variant of "can I please write some code now?"

"lfg"

And then the model starts burning tokens. (Claude totally understands "lfg". Qwen tends to overthink it.) I keep an eye on it while it runs, occasionally stopping it to redirect or critique something it's done, until it reports "Ok! Phase 1 is production ready." (I don't know why, but lately it's very big on telling me first-draft code is production ready.) Usually, I'll ask it if it's written and run tests. Usually, it actually has, which is awesome.

"Ok. Please commit these changes and update the planning doc with your current status."

Once the model has done that, I usually clear the context again to get a nice fresh window and tell it "Read the plan and do the next phase." And then we lather, rinse, and repeat until there's something resembling software.

This process is startlingly effective most of the time. Part of what makes it work well is the CLAUDE.md file that spells out my preferences and workflow. Part of it is that Anthropic's models are just well tuned for what I'm doing (which is mostly JavaScript, embedded C++, and Swift). Generally, I find that the size of spec that works is something the model can blaze through in less than a couple hours with a focused human paying attention, but really, the smaller and more focused the spec, the better.

If you've got a process that looks like mine (or is wildly different), I'd love to hear from you about it. Drop me a line at [email protected].

Xe Iaso 4 months ago

Apple just Sherlocked Docker

EDIT (2025-06-09 20:51 UTC): The containerization stuff they're using is open source on GitHub. Digging into it. Will post something else when I have something to say.

This year's WWDC keynote was cool. They announced a redesign of the OSes, unified the version numbers across the fleet, and found ways to hopefully make AI useful (I'm reserving my right to be a skeptic based on how bad Apple Intelligence currently is). However, the keynote slept on the biggest announcement for developers: they're bringing the ability to run Linux containers to macOS:

The Containerization framework enables developers to create, download, or run Linux container images directly on Mac. It's built on an open-source framework optimized for Apple silicon and provides secure isolation between container images.

This is an absolute game changer. One of the biggest pain points with my MacBook is that the battery life is great...until I start my Linux VM or run the Docker app. I don't even know where to begin to describe how cool this is and how it will make production deployments so much easier to access for the next generation of developers.

Maybe this could lead to Swift being a viable target for web applications. I've wanted to use Swift on the backend before, but Vapor and other frameworks just feel so frustratingly close to greatness. Combined with the Swift Static Linux SDK and some of the magic that powers Private Cloud Compute, you could get an invincible server-side development experience that rivals what Google engineers dream up, directly on your MacBook. I can't wait to see more. This may actually be what gets me to raw-dog beta macOS on my MacBook.

There are still things I'd really like to know, and I really wonder how Docker is feeling – I think they're getting Sherlocked. Either way, cool things are afoot and I can't wait to see more.


Posting through it

I'm posting this from a very, very rough cut at a bespoke blogging client I've been having my friend Claude build out over the past couple days. I've long suspected that "just edit text files on disk to make blog posts" is, to a certain kind of person, a great sounding idea...but not actually the way to get me to blog. The problem is that my blog is...a bunch of text files in a git repository that's compiled into a website by a tool called "Eleventy" that runs whenever I put a file in a certain directory of this git repository and push that up to GitHub. There's no API because there's no server. And I've never learned Swift/Cocoa/etc, so building macOS and iOS tooling to create a graphical blogging client has felt...not all that plausible. Over the past year or two, things have been changing pretty fast. We have AI agents that have been trained on...well, pretty much everything humans have ever written. And they're pretty good at stringing together software. So, on a whim, I asked Claude to whip me up a blogging client that talks to GitHub in just the right way. This is the very first post using that new tool, which I'm calling "Post Through It." Ok, technically, this is the fourth post. But it's the first one I've actually been able to add any content to.

baby steps 6 months ago

Dyn async traits, part 10: Box box box

This article is a slight divergence from my Rust in 2025 series. I wanted to share my latest thinking about how to support dyn dispatch for traits with async functions and, in particular, how to do so in a way that is compatible with the soul of Rust.

Supporting async functions in dyn traits is a tricky balancing act. The challenge is reconciling two key things people love about Rust: its ability to express high-level, productive code and its focus on revealing low-level details. When it comes to async functions in traits, these two things are in direct tension, as I explained in my first blog post in this series – written almost four years ago! (Geez.)

To see the challenge, consider a trait with an async method (a reconstructed sketch appears at the end of this section). In Rust today you can write a function that takes an `impl Trait` argument and invokes the async method, and everything feels pretty nice. But what if I want to write that same function using a `dyn Trait`? If I try, I get an error. Why is that? The answer is that the compiler needs to know what kind of future is going to be returned by the async method so that it can be awaited. At minimum it needs to know how big that future is so it can allocate space for it [1]. With `impl Trait`, the compiler knows exactly what type you have, so that's no problem; but with a `dyn Trait`, we don't, and hence we are stuck.

The most common solution to this problem is to box the future that results. The async-trait crate, for example, transforms the async functions in a trait to return something like a `Pin<Box<dyn Future>>`. But doing that at the trait level means that we add overhead even when you use static dispatch; it also rules out some applications of Rust async, like embedded or kernel development. So the name of the game is to find ways to let people use `dyn Trait` that are both convenient and flexible. And that turns out to be pretty hard! I've been digging back into the problem lately in a series of conversations with Michael Goulet (aka compiler-errors) and it's gotten me thinking about a fresh approach I call "box box box".

The "box box box" design starts with the call-site selection approach. In this approach, when you call an async function through a `dyn Trait`, the type you get back is a `dyn Future` – i.e., an unsized value. This can't be used directly. Instead, you have to allocate storage for it. The easiest and most common way to do that is to box it, which can be done with the new `box` operator. This approach is fairly straightforward to explain. When you call an async function through `dyn Trait`, it results in a `dyn Future`, which has to be stored somewhere before you can use it. The easiest option is to use the `box` operator to store it in a box; that gives you a `Box<dyn Future>`, and you can await that.

But this simple explanation belies two fairly fundamental changes to Rust. First, it changes the relationship of `dyn Trait` and the trait itself. Second, it introduces this `box` operator, which would be the first stable use of the `box` keyword [2]. It seems odd to introduce the keyword just for this one use – where else could it be used? As it happens, I think both of these fundamental changes could be very good things. The point of this post is to explain what doors they open up and where they might take us.

Let's start with the core proposal (call it Change 0): for every trait, we add inherent methods [3] to its `dyn` type reflecting the trait's methods. In fact, method dispatch already adds "pseudo" inherent methods to `dyn Trait`, so this wouldn't change anything in terms of which methods are resolved. The difference is that today `dyn Trait` is only allowed if all methods in the trait are dyn compatible, whereas under this proposal some non-dyn-compatible methods would be added with modified signatures. Change 0 only makes sense if it is possible to create a `dyn Trait` even though it contains some methods (e.g., async functions) that are not dyn compatible.
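Here is a hedged reconstruction of the motivating example and where the proposal comes in; the trait, method, and type names are my own, not necessarily the post's.

```rust
// A trait with an async method (stable since Rust 1.75).
trait Signal {
    async fn signal(&self);
}

struct Tick;

impl Signal for Tick {
    async fn signal(&self) {
        println!("tick");
    }
}

// With `impl Trait`, the concrete future type is known, so this compiles today:
async fn send(s: impl Signal) {
    s.signal().await;
}

// The `dyn` version is the problem: the returned future's size is unknown,
// so the trait is not dyn compatible and this is rejected today:
//
//     async fn send_dyn(s: &dyn Signal) {
//         s.signal().await;
//     }
//
// Under the post's proposal, the `dyn` call would instead hand back an
// unsized `dyn Future` that you explicitly store with the `box` operator
// before awaiting it (proposed, not valid Rust today).

fn main() {
    // Construct the future; driving it to completion would need an executor.
    let _future = send(Tick);
}
```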
This revisits RFC #255, in which we decided that the `dyn Trait` type should also implement the trait `Trait`. I was a big proponent of RFC #255 at the time, but I've since decided I was mistaken [5]. Let's discuss.

The fact that `dyn Trait` implements `Trait` is at times quite powerful. It means, for example, that I can write a single blanket impl and have it apply to every type that implements the trait, including dyn trait types. Neat.

Powerful as it is, the idea of `dyn Trait` implementing `Trait` doesn't quite live up to its promise. What you really want is that you could replace any generic `T: Trait` with `dyn Trait` and things would work. But that's just not true, because `dyn Trait` is not `Sized`. So actually you don't get a very "smooth experience". What's more, although the compiler gives you an impl of `Trait` for `dyn Trait`, it doesn't give you impls for references to `dyn Trait` – so, e.g., if I have a `&dyn Trait`, I can't give that to a function that takes an `impl Trait` argument. To make that work, somebody has to explicitly provide an impl of the trait for `&dyn Trait`, and people often don't.

However, the requirement that `dyn Trait` implement `Trait` can be limiting. Imagine a trait with two methods. One method is dyn-compatible, no problem. The other method takes a generic argument and is therefore generic itself, so it is not dyn-compatible [6] (well, at least not under today's rules, but I'll get to that). (The reason the generic method is not dyn compatible: we need to make distinct monomorphized copies tailored to the type of the argument. But the vtable has to be prepared in advance, so we don't know which monomorphized version to use.) And yet, just because one method is not dyn compatible doesn't mean that a `dyn Trait` would be useless. What if I only plan to call the dyn-compatible method? Rust's current rules rule out a function like this, but in practice this kind of scenario comes up quite a lot. In fact, it comes up so often that we added a language feature to accommodate it (at least kind of): you can add a `where Self: Sized` clause to a method to exempt it from dynamic dispatch. This is the reason that `Iterator` can be dyn compatible even when it has a bunch of generic helper methods like `map`.

Let me pause here, as I imagine some of you are wondering what all of this "dyn compatibility" stuff has to do with AFIDT (async functions in dyn traits). The bottom line is that the requirement that the `dyn Trait` type implements `Trait` means that we cannot put any kind of "special rules" on dyn dispatch, and that is not compatible with requiring a `box` operator when you call async functions through a trait. Recall that with our example trait, you could call the async method on an `impl Trait` value without any boxing, but when I called it on a `dyn Trait`, I had to write `box` to tell the compiler how to deal with the `dyn Future` that gets returned. Indeed, the fact that the trait declares a method returning an `impl Future` but the `dyn` version returns a `dyn Future` already demonstrates the problem: `impl Future` types are known to be `Sized` and `dyn Future` is not, so the type signature you get through `dyn` is not the same as the type signature declared in the trait.

Huh. Today I cannot write a `dyn` type without specifying the values of its associated types. To see why this restriction is needed, consider a generic function that takes such a `dyn` value: if you invoked it with a `dyn` type that did not specify the associated type, how could the function know the type of the values it is working with? We wouldn't have any idea how much space they need. But if you invoke it with a `dyn` type whose associated type is fully specified, there is no problem. We don't know which implementation is being called, but we know what it returns. And yet, just as we saw before, the requirement to list associated types can be limiting. If I have a `dyn` value and I only call methods that never touch the associated type, why do I need to know it? But I can't write code like this today; the sketch below shows both the code I would like to write and the generic fallback described next.
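A sketch of that limitation, using the standard `Iterator` trait as a stand-in example (the original post may have used a different trait):

```rust
// What you might like to write: accept "any iterator" by dyn reference and
// call only `size_hint`, which never mentions the `Item` associated type.
//
//     fn remaining(it: &dyn Iterator) -> usize {   // error today: the value
//         it.size_hint().0                         // of `Item` must be given
//     }
//
// What Rust requires instead: either spell out the associated type...
fn remaining_u32(it: &dyn Iterator<Item = u32>) -> usize {
    it.size_hint().0
}

// ...or fall back to generics, which gives up on `dyn` dispatch entirely.
fn remaining<I: Iterator + ?Sized>(it: &I) -> usize {
    it.size_hint().0
}

fn main() {
    let it = vec![1u32, 2, 3].into_iter();
    println!("{} {}", remaining_u32(&it), remaining(&it));
}
```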
Instead I have to make the function generic, which basically defeats the whole purpose of using `dyn`. If we dropped the requirement that every `dyn Trait` type implements `Trait`, we could be more selective, allowing you to invoke methods that don't use the associated type but disallowing those that do. So that brings us to the full proposal: permit `dyn Trait` even in cases where the trait is not fully dyn compatible.

"A lot of things get easier if you are willing to call malloc." – Josh Triplett, recently.

Rust has reserved the `box` keyword since 1.0, but we've never allowed it in stable Rust. The original intention was that the term "box" would be a generic term to refer to any "smart pointer"-like pattern, so an `Rc` would be a "reference counted box" and so forth. The keyword would then be a generic way to allocate boxed values of any type; unlike `Box::new`, it would do "emplacement", so that no intermediate values were allocated. With the passage of time I no longer think this is such a good idea. But I do see a lot of value in having a keyword to ask the compiler to automatically create boxes. In fact, I see a lot of places where that could be useful.

The first place is indeed a `box` operator that could be used to put a value into a box. Unlike `Box::new`, using `box` would allow the compiler to guarantee that no intermediate value is created, a property called emplacement. Consider an expression like `Box::new([0u8; 4096])`: Rust's semantics today require (1) allocating a 4KB buffer on the stack and zeroing it; (2) allocating a box in the heap; and then (3) copying memory from one to the other. This is a violation of our Zero Cost Abstraction promise: no C programmer would write code like that. But if you write `box [0u8; 4096]`, we can allocate the box up front and initialize it in place [9].

The same principle applies to calling functions that return an unsized type. This isn't allowed today, but we'll need some way to handle it if we want async methods called through `dyn Trait` to return a `dyn Future`. The reason we can't naively support it is that, in our existing ABI, the caller is responsible for allocating enough space to store the return value and for passing the address of that space into the callee, who then writes into it. But with an unsized return value, the caller can't know how much space to allocate. So they would have to do something else, like passing in a callback that, given the correct amount of space, performs the allocation. The most common case would be to just pass in something that boxes the value. The best ABI for unsized return values is unclear to me, but we don't have to solve that right now; the ABI can (and should) remain unstable. Whatever the final ABI becomes, when you call such a function in the context of a `box` expression, the result is that the callee creates a `Box` to store the result [10].

If you try to write an async function that calls itself today, you get an error: the problem is that we cannot determine statically how much stack space to allocate. The solution is to rewrite the function to box the future produced by the recursive call. This compiles because the compiler can now allocate new frames as needed. But wouldn't it be nice if we could request this directly?

A similar problem arises with recursive structs: if a struct contains itself directly, the compiler tells you it has infinite size. As it suggests, to work around this you can introduce a `Box`. This, though, is kind of weird, because now the head of the list is stored "inline" but future nodes are heap-allocated. I personally usually wind up with a pattern where the whole node lives behind a box. Now, however, I can't create values with ordinary struct literal syntax and I also can't do pattern matching. Annoying. (Both workarounds are sketched below.)
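A hedged sketch of those two workarounds, with type names of my own choosing:

```rust
// The direct definition has infinite size and is rejected:
//
//     struct List { value: u32, next: Option<List> }
//
// Workaround 1: box the tail. The head lives inline; later nodes are heap-allocated.
struct List {
    value: u32,
    next: Option<Box<List>>,
}

// Workaround 2: hide the box inside a wrapper so every node is allocated
// uniformly -- but plain struct literals and pattern matching on the payload
// no longer work directly; everything goes through the wrapper.
struct Node(Box<NodeData>);

struct NodeData {
    value: u32,
    next: Option<Node>,
}

fn main() {
    let list = List { value: 1, next: Some(Box::new(List { value: 2, next: None })) };
    println!("{}", list.value + list.next.map(|n| n.value).unwrap_or(0));

    let node = Node(Box::new(NodeData {
        value: 1,
        next: Some(Node(Box::new(NodeData { value: 2, next: None }))),
    }));
    println!("{}", node.0.value + node.0.next.as_ref().map(|n| n.0.value).unwrap_or(0));
}
```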
Wouldn't it be nice if the compiler could just suggest adding a `box` keyword when you declare the struct, and have the box be allocated for me automatically? The ideal is that the presence of a box becomes completely transparent, so I can use struct literals, pattern match, and so forth fully transparently.

Enums too cannot reference themselves. Being able to declare a recursive enum as a `box enum` would be really nice. In fact, I still remember when I used Swift for the first time. I wrote a similar enum and Xcode helpfully prompted me, "do you want to declare this enum as indirect?" I remember being quite jealous that it was such a simple edit.

However, there is another interesting thing about a `box enum`. The way I imagine it, creating an instance of the enum would always allocate a fresh box. This means that the enum cannot be changed from one variant to another without allocating fresh storage. This in turn means that you could allocate that box to exactly the size you need for that particular variant [11]. So not only could your enum be recursive, but when you allocate a value you only need space for the variant you are creating, and different variants could be different sizes. (We could even start to do "tagged pointer" tricks so that, e.g., a dataless variant is stored without any allocation at all.)

Another option would be to have particular enum variants that get boxed, but not the enum as a whole. This would be useful in cases where you do want to be able to overwrite one enum value with another without necessarily reallocating, but you have enum variants of widely varying size, or some variants that are recursive. A boxed variant would basically be desugared to a variant holding a `Box` of its payload (sketched after this section). clippy has a useful lint that aims to identify this case, but once the lint triggers, it's not able to offer an actionable suggestion. With the `box` keyword there'd be a trivial rewrite that requires zero code changes.

If we're enabling the use of `box` elsewhere, we ought to allow it in patterns too. Under my proposal, the `box` form would be the preferred way to allocate, since it would allow the compiler to do more optimization. And yes, that's unfortunate, given that there are 10 years of code using `Box::new`. Not really a big deal though. In most of the cases we accept today, it doesn't matter and/or LLVM already optimizes it. In the future I do think we should consider extensions to make `Box::new` (as well as other similar constructors) just as optimized as `box`, but I don't think those have to block this proposal.

Could this generalize to other smart pointers like `Rc` and `Arc`? Yes and no. On the one hand, I would like the ability to declare that a struct is always wrapped in an `Rc` or `Arc`; I find myself hand-writing that pattern all too often. On the other hand, `Box` is very special. It's kind of unique in that it represents full ownership of the contents, which means a `T` and a `Box<T>` are semantically equivalent – there is (almost) no place you can use a `T` where a `Box<T>` won't also work. This is not true for `Rc` and `Arc` or most other smart pointers. For myself, I think we should introduce `box` now but plan to generalize this concept to other pointers later – for example, declaring a struct that is always wrapped in some pointer type, where that pointer type implements a trait to permit allocating, deref'ing, and so forth.

The original plan for the `box` keyword was that it would be somehow type-overloaded. I've soured on this for two reasons. First, type overloads make inference more painful and I think are generally not great for the user experience; I think they are also confusing for new users. Second, I think we missed the boat on naming. Maybe if the standard smart pointers had "box" in their names, the idea of "box" as a general term would have percolated into Rust users' consciousness, but we didn't do that, and it hasn't.
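Returning to the boxed-variant idea above, here is a hedged sketch of the hand-written desugaring it would replace today (names are mine):

```rust
// One tiny variant and one with a large payload. Boxing the large payload
// keeps the enum itself small, which is exactly the rewrite clippy's
// large-variant lint nudges you toward.
enum Shape {
    Point,
    Polygon(Box<PolygonData>),
}

struct PolygonData {
    vertices: Vec<[f64; 2]>,
    cached_area: f64,
}

fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Point => 0.0,
        Shape::Polygon(data) => data.cached_area,
    }
}

fn main() {
    let small = Shape::Point;
    let big = Shape::Polygon(Box::new(PolygonData {
        vertices: vec![[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]],
        cached_area: 0.5,
    }));
    println!("{} {}", area(&small), area(&big));
    println!("{} vertices", match &big {
        Shape::Polygon(data) => data.vertices.len(),
        Shape::Point => 0,
    });
    // The enum stays pointer-sized no matter how large PolygonData grows.
    println!("{} bytes", std::mem::size_of::<Shape>());
}
```

With the imagined `box` variant syntax, the explicit `Box` would disappear from construction and matching while the layout stayed the same.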
I think the `box` keyword now ought to be very targeted to the `Box` type. In my "soul of Rust" blog post, I talked about the idea that one of the things that makes Rust Rust is having allocation be relatively explicit. I'm of mixed minds about this, to be honest, but I do think there's value in a property along those lines – like, if allocation is happening, there'll be a sign somewhere you can find. What I like about most of these proposals is that they move the `box` keyword to the declaration – e.g., on the struct/enum/etc – rather than the use. I think this is the right place for it. The major exception, of course, is the "marquee proposal", invoking async fns in dyn trait. That's not amazing. But then… see the next question for some early thoughts.

The way that Rust today detects automatically whether traits should be dyn compatible, versus having it be declared, is, I think, not great. It creates confusion for users and also permits quiet semver violations, where a new defaulted method makes a trait no longer dyn compatible. It's also been a source of a lot of soundness bugs over time. I want to move us towards a place where traits are not dyn compatible by default, meaning that `dyn Trait` does not implement `Trait`. We would always allow `dyn Trait` types, and we would allow individual items to be invoked so long as the item itself is dyn compatible. If you want to have `dyn Trait` implement `Trait`, you should declare it, perhaps with a keyword on the trait. This declaration would add various default impls, starting with the impl of `Trait` for `dyn Trait`. But also, if the methods have suitable signatures, it would include some of the impls you really ought to have to make a trait that is well-behaved with respect to dyn trait – the kind of boilerplate sketched below.

In fact, if you add in the ability to declare a trait as both boxed and dyn, things get very interesting. I'm not 100% sure how this should work, but what I imagine is that such a type would be pointer-sized and implicitly contain a `Box` behind the scenes. It would probably automatically box the results of async functions when invoked through `dyn`. I didn't include this in the main blog post, but I think together these ideas would go a long way towards addressing the usability gaps that plague `dyn Trait` today.
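For a sense of what those generated impls would replace, here is the boilerplate people hand-write today to make a trait pleasant to use with `dyn` (a minimal sketch; the trait and types are mine, not the post's):

```rust
trait Greet {
    fn greet(&self) -> String;
}

// Forward the trait through references and boxes, so `&dyn Greet` and
// `Box<dyn Greet>` can be used wherever `impl Greet` is expected. This is
// the kind of impl the post imagines a `dyn` declaration generating.
impl<'a, T: Greet + ?Sized> Greet for &'a T {
    fn greet(&self) -> String {
        T::greet(self)
    }
}

impl<T: Greet + ?Sized> Greet for Box<T> {
    fn greet(&self) -> String {
        T::greet(self)
    }
}

struct World;

impl Greet for World {
    fn greet(&self) -> String {
        "hello, world".to_string()
    }
}

fn announce(g: impl Greet) {
    println!("{}", g.greet());
}

fn main() {
    let boxed: Box<dyn Greet> = Box::new(World);
    announce(&boxed);  // &Box<dyn Greet> works via the blanket impls
    announce(&*boxed); // &dyn Greet works too
    announce(World);   // and, of course, the concrete type
}
```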
[1] Side note: one interesting thing about Rust's async functions is that their size must be known at compile time, so we can't permit alloca-like stack allocation.
[2] The box keyword is in fact reserved already, but it's never been used in stable Rust.
[3] Hat tip to Michael Goulet (compiler-errors) for pointing out to me that we can model the virtual dispatch as inherent methods on `dyn` types. Before, I thought we'd have to make a more invasive addition to MIR, which I wasn't excited about since it suggested the change was more far-reaching.
[4] In the future, I think we can expand this definition to include some limited functions that use `impl Trait` in argument position, but that's for a future blog post.
[5] I've noticed that many times when I favor a limited version of something to achieve some aesthetic principle, I wind up regretting it.
[6] At least, it is not compatible under today's rules. Conceivably it could be made to work, but more on that later.
[7] This part of the change is similar to what was proposed in RFC #2027, though that RFC was quite light on details (the requirements for RFCs in terms of precision have gone up over the years and I expect we wouldn't accept that RFC today in its current form).
[8] I actually want to change this last clause in a future edition. Instead of having dyn compatibility be determined automatically, traits would declare themselves dyn compatible, which would also come with a host of other impls. But that's worth a separate post all on its own.
[9] If you play with this on the playground, you'll see that the memcpy appears in the debug build but gets optimized away in this very simple case; that can be hard for LLVM to do in general, since it requires reordering the allocation of the box to occur earlier and so forth. The `box` operator could be guaranteed to work.
[10] I think it would be cool to also have some kind of unsafe intrinsic that permits calling the function with other storage strategies, e.g., allocating a known amount of stack space or what have you.
[11] We would thus finally bring Rust enums to "feature parity" with OO classes! I wrote a blog post, "Classes strike back", on this topic back in 2015 (!) as part of the whole "virtual structs" era of Rust design. Deep cut!

W. Jason Gilmore 10 months ago

Building Menubar Apps with AI

Some people collect baseball cards, others obsess over video games. I love menubar apps. No clue why, I just really like the convenience they offer, because they provide such an easy way to view and interact with information of all types. I've always wanted to build one, but never wanted to invest the time learning Swift, Objective-C, or ElectronJS. The emergence of AI coding tools, and particularly agents, has completely changed the game in terms of writing software, and so I've lately been wondering how feasible it is to not only create my first menubar app but actually create some sort of software factory that can churn out dozens if not hundreds of menubar-first applications. The first app is called TerraTime . It's a menubar app that shows the current time in a variety of timezones. TerraTime was built with Cursor in about 20 minutes. I spent another 75 minutes or so figuring out how to sign and notarize the app according to Apple requirements. The app is currently for sale on Gumroad , and will soon be available on the Mac App Store. To catalog what I hope will quickly become a collection of useful menubar apps, I've created a new site called Useful Menubar Apps . It was also built with AI, and is hosted on Netlify.
