Stone Tools 2 days ago

Bank Street Writer on the Apple II

Stop me if you've heard this one. In 1978, a young man wandered into a Tandy Radio Shack and found himself transfixed by the TRS-80 systems on display. He bought one just to play around with, and it wound up transforming his life from there on. As it went with so many, so too did it go with lawyer Doug Carlston. His brother, Gary, initially unimpressed, warmed up to the machine during a long Maine winter. The two, thus smitten, mused, "Can we make money off of this?" Together they formed a developer-sales relationship, with Doug developing Galactic Saga and third brother Don developing Tank Command. Gary's sales acumen brought early success and Broderbund was officially underway.

Meanwhile in New York, Richard Ruopp, president of Bank Street College of Education, a kind of research center for experimental and progressive education, was thinking about how emerging technology fit into the college's mission. Writing was an important part of their curriculum, but according to Ruopp, "We tested the available word processors and found we couldn't use any of them." So, experts from Bank Street College worked closely with consultant Franklin Smith and software development firm Intentional Educations Inc. to build a better word processor for kids. The fruit of that labor, Bank Street Writer, was published by Scholastic exclusively to schools at first, with Broderbund taking up the home distribution market a little later. Bank Street Writer would dominate home software sales charts for years and its name would live on as one of the sacred texts, like Lemonade Stand or The Oregon Trail. Let's see what lessons there are to learn from it yet.

- 1916: Founded by Lucy Sprague Mitchell, Wesley Mitchell, and Harriet Johnson as the "Bureau of Educational Experiments" (BEE), with the goal of understanding in what environment children best learn and develop, and of helping adults learn to cultivate that environment.
- 1930: BEE moves to 69 Bank Street. (It will move to 112th Street in 1971, for space reasons.)
- 1937: The Writer's Lab, which connects writers and students, is formed.
- 1950: BEE is renamed Bank Street College of Education.
- 1973: Minnesota Educational Computing Consortium (MECC) is founded. This group would later go on to produce The Oregon Trail.
- 1983: Bank Street Writer, developed by Intentional Educations Inc., published by Broderbund Software, and "thoroughly tested by the academics at Bank Street College of Education." Price: $70.
- 1985: Writer is a success! Time to capitalize! Bank Street Speller $50, Bank Street Filer $50, Bank Street Mailer $50, Bank Street Music Writer $50, Bank Street Prewriter (published by Scholastic) $60.
- 1986: Bank Street Writer Plus $100. Bank Street Writer III (published by Scholastic) $90. It's basically Plus with classroom-oriented additions, including a 20-column mode and additional teaching aids.
- 1987: Bank Street Storybook, $40.
- 1992: Bank Street Writer for the Macintosh (published by Scholastic) $130. Adds limited page layout options, HyperCard-style hypertext, clip art, a punctuation checker, image import with text wrap, full color, sound support, "Classroom Publishing" of fliers and pamphlets, and electronic mail.

With word processors, I want to give them a chance to present their best possible experience. I do put a little time into trying the baseline experience many would have had with the software during the height of its popularity. "Does the software still have utility today?" can only be fairly answered by giving the software a fighting chance. To that end, I've gifted myself a top-of-the-line (virtual) Apple //e running the last update to Writer, the Plus edition.

You probably already know how to use Bank Street Writer Plus. You don't know you know, but you do know, because you have familiarity with GUI menus and basic word processing skills.
All you're lacking is an understanding of the vagaries of data storage and retrieval as necessitated by the hardware of the time, but once armed with that knowledge you could start using this program without touching the manual again. It really is as easy as the makers claim. The simplicity is driven by a very subtle, forward-thinking user interface.

Of primary interest is the upper prompt area. The top 3 lines of the screen serve as an ever-present, contextual "here's the situation" helper. What's going on? What am I looking at? What options are available? How do I navigate this screen? How do I use this tool? Whatever you're doing, whatever menu option you've chosen, the prompt area is already displaying information about which actions are available right now in the current context. As the manual states, "When in doubt, look for instructions in the prompt area." The manual speaks truth.

For some, the constant on-screen prompting could be a touch overbearing, but I personally don't think it's so terrible to know that the program is paying attention to my actions and wants me to succeed. The assistance isn't front-loaded, like so many mobile apps, nor does it interrupt, like Clippy. I simply can't fault the good intentions, nor can I really think of anything in modern software that takes this approach to user-friendliness.

The remainder of the screen is devoted to your writing and works like any other word processor you've used. Just type, move the cursor with the arrow keys, and type some more. I think most writers will find it behaves "as expected." There are no Electric Pencil-style over-type surprises, nor VisiCalc-style arrow key manipulations. What seems to have happened is that in making a word processor that is easy for children to use, they accidentally made a word processor that is just plain easy. The basic functionality is drop-dead simple to pick up by just poking around, but there's quite a bit more to learn here.
To do so, we have a few options for getting to know Bank Street Writer in more detail.

There are two manuals, by virtue of the program's educational roots. Bank Street Writer was published by both Broderbund (for the home market) and Scholastic (for schools), and each tailored their own manual to their respective demographic. Broderbund's manual is cleanly designed, easy to understand, and gets right to the point. It is not as "child focused" as reviews at the time might have you believe. Scholastic's is more of a curriculum to teach word processing, part of the 80s push for "computers in the classroom." It's packed with student activities, pages that can be copied and distributed, and (tellingly) information for the teacher explaining "What is a word processor?"

Our other option for learning is on side 2 of the main program disk. Quite apart from the program proper, the disk contains an interactive tutorial. I love this commitment to the user's success, though I breezed through it in just a few minutes, being a cultured word processing pro of the 21st century. I am quite familiar with "menus," thank you very much.

As I mentioned at the top, the screen is split into two areas: prompt and writing. The prompt area is fixed, and can neither be hidden nor turned off. This means there's no "full screen" option, for example. The writing area runs in high-res graphics mode so as to bless us with the gift of an 80-character-wide display. Being a graphics display also means the developer could have put anything on screen, including a ruler, which would have been a nice formatting helper. Alas.

Bank Street offers limited preference settings; there's not much we can do to customize the program's display or functionality. The upshot is that as I gain confidence with the program, the program doesn't offer to match my ability. There is one notable trick, which I'll discuss later, but overall there is a missed opportunity here for adapting to a user's increasing skill.
Kids do grow up, after all.

As with Electric Pencil, I'm writing this entirely in Bank Street Writer. Unlike the keyboard/software troubles there, here in 128K Apple //e world I have Markdown luxuries like . The emulator's amber mode is soothing to the eyes and soul. Mouse control is turned on and works perfectly, though it's much easier and faster to navigate by keyboard, as God intended. This is an enjoyable writing experience.

Which is not to say the program is without quirks. Perhaps the most unfortunate one is how little writing space 128K RAM buys for a document. At this point in the write-up I'm at about 1,500 words and BSW's memory check function reports I'm already at 40% of capacity. So the largest document one could keep resident in memory at one time would run about 4,000 words max? Put bluntly, that ain't a lot. Splitting documents into multiple files is pretty much forced upon anyone wanting to write anything of length. Given floppy disk fragility, especially with children handling them, perhaps that's not such a bad idea. However, from an editing point of view, it is frustrating to recall which document I need to load to review any given piece of text.

Remember also, there's no copy/paste as we understand it today. Moving a block of text between documents is tricky, but possible. BSW can save a selected portion of text to its own file, which can then be "retrieved" (inserted) at the current cursor position in another file. In this way the diskette functions as a memory buffer for cross-document "copy/paste." Hey, at least there is some option available.

Flipping through old magazines of the time, it's interesting just how often Bank Street Writer comes up as the comparative reference point for home word processors over the years. If a new program had even the slightest whiff of trying to be "easy to use" it was invariably compared to Bank Street Writer.
Likewise, there were any number of writers and readers of those magazines talking about how they continued to use Bank Street Writer, even though so-called "better" options existed. I don't want to oversell its adoption by adults, but it most definitely was not a children-only word processor, by any stretch. I think the release of Plus embraced a more mature audience.

In schools it reigned supreme for years, including the Scholastic-branded version of Plus called Bank Street Writer III. There were add-on "packs" of teacher materials for use with it. There was also Bank Street Prewriter, a tool for helping to organize themes and thoughts before committing to the act of writing, including an outliner, as popularized by ThinkTank. (It's always interesting when influences ripple through the industry like this.)

Of course, the Scholastic approach was built around the idea of teachers having access to computers in the classroom. And THAT was built on the idea of teachers feeling comfortable enough with computers to seamlessly merge them into a lesson plan. Sure, the kids needed something simple to learn, but let's be honest, so did the adults.

There was a time when attaching a computer to anything meant a fundamental transformation of that thing was assured and imminent. For example, the "office of the future" (as discussed in the Superbase post) had a counterpart in the "classroom of tomorrow." In 1983, Popular Computing said, "Schools are in the grip of a computer mania." Steve Jobs took advantage of this, skating to where the puck would be, by donating Apple 2s to California schools.

In October 1983, Creative Computing did a little math on that plan. $20M in retail donations brought $4M in tax credits against $5M in gross donations. Apple could donate a computer to every elementary, middle, and high school in California for an outlay of only $1M.
Jobs lobbied Congress hard to pass a national version of the same "Kids Can't Wait" bill, which would have extended federal tax credits for such donations. That never made it to law, for various political reasons. But the California initiative certainly helped position Apple as the go-to system for computers in education. By 1985, Apple would dominate fully half of the education market. That would continue into the Macintosh era, though Apple's dominance diminished slowly as cheaper, "good enough" alternatives entered the market. Today, Apple is #3 in the education market, behind Windows and Chromebooks.

It is a fair question to ask, "How useful could a single donated computer be to a school?" Once it's in place, then what? Does it have function? Does anyone have a plan for it? Come to think of it, does anyone on staff even know how to use it?

When Apple put a computer into (almost) every school in California, they did require training. Well, let's say lip service was paid to the aspiration of training. One teacher from each school had to receive one day's worth of training to attain a certificate which allowed the school to receive the computer. That teacher was then tasked with training their coworkers. Wait, did I say "one day?" Sorry, I meant about one HOUR of training.

It's not too hard to see where Larry Cuban was coming from when he published Oversold & Underused: Computers in the Classroom in 2001. Even for schools with more than a single system, he notes, "Why, then, does a school's high access (to computers) yield limited use? Nationally and in our case studies, teachers... mentioned that training in relevant software and applications was seldom offered... (Teachers) felt that the generic training available was often irrelevant to their specific and immediate needs."

From my perspective, and I'm no historian, it seems to me there were four ways computers were introduced into the school setting.
The three most obvious were: I personally attended schools of all three types. What I can say the schools had in common was how little attention, if any, was given to the computer, and how little my teachers understood them. An impromptu poll of friends aligned with my own experience. Schools didn't integrate computers into classwork, except when classwork was explicitly about computers. I sincerely doubt my time playing Trillium's Shadowkeep during recess was anything close to Apple's vision of a "classroom of tomorrow."

The fourth approach to bringing computers into the classroom was significantly more ambitious. Apple tried an experiment in which five public school sites were chosen for a long-term research project. In 1986, the sites were given computers for every child in class and at home. They reasoned that for computers to truly make an impact on children, the computer couldn't just be a fun toy they occasionally interacted with. Rather, it required full integration into their lives.

Now, it is darkly funny to me that, having achieved this integration today through smartphones, adults work hard to remove computers from school. It is also interesting to me that Apple kind of led the way in making that happen, although in fairness they don't seem to consider the iPhone to be a computer.

America wasn't alone in trying to give its children a technological leg up. In England, the BBC spearheaded a major drive to get computers into classrooms via a countrywide computer literacy program. Even in the States, I remember watching episodes of BBC's The Computer Programme on PBS. Regardless of Apple's or the BBC's efforts, the long-term data on the effectiveness of computers in the classroom has been mixed, at best, or even an outright failure.
Apple's own assessment of their "Apple Classrooms of Tomorrow" (ACOT) program after a couple of years concluded, "Results showed that ACOT students maintained their performance levels on standard measures of educational achievement in basic skills, and they sustained positive attitudes as judged by measures addressing the traditional activities of schooling." Which is a "we continue to maintain the dream of selling more computers to schools" way of saying, "Nothing changed."

In 2001, the BBC reported, "England's schools are beginning to use computers more in teaching - but teachers are making 'slow progress' in learning about them." Then in 2015 the results were "disappointing": "Even where computers are used in the classroom, their impact on student performance is mixed at best."

Informatique pour tous, France 1985: Pedagogy, Industry and Politics by Clémence Cardon-Quint noted the French attempt at computers in the classroom as being "an operation that can be considered both as a milestone and a failure." Computers in the Classrooms of an Authoritarian Country: The Case of Soviet Latvia (1980s–1991) by Iveta Kestere and Katrina Elizabete Purina-Bieza shows the introduction of computers to have drawn stark power and social divides, while pushing prescribed gender roles of computers being "for boys." Teachers Translating and Circumventing the Computer in Lower and Upper Secondary Swedish Schools in the 1970s and 1980s by Rosalía Guerrero Cantarell noted, "the role of teachers as agents of change was crucial. But teachers also acted as opponents, hindering the diffusion of computer use in schools."

Now, I should be clear that things were different in the higher education market, as with PLATO in the universities. But in the primary and secondary markets, Bank Street Writer's primary demographic, nobody really knew what to do with the machines once they had them.
The most straightforwardly damning assessment is from Oversold & Underused, where Cuban, in the chapter "Are Computers in Schools Worth the Investment?", says, "Although promoters of new technologies often spout the rhetoric of fundamental change, few have pursued deep and comprehensive changes in the existing system of schooling." Throughout the book he notes how most teachers struggle to integrate computers into their lessons and teaching methodologies. The lack of guidance in developing new ways of teaching means computers will continue to be relegated to occasional auxiliary tools trotted out from time to time, not integral to the teaching process. "Should my conclusions and predictions be accurate, both champions and skeptics will be disappointed. They may conclude, as I have, that the investment of billions of dollars over the last decade has yet to produce worthy outcomes," he concludes.

Thanks to my sweet four-drive virtual machine, I can summon both the dictionary and thesaurus immediately. Put the cursor at the start of a word and hit or to get an instant spot check of spelling or synonyms. Without the reality of actual floppy disk access speed, word searches are fast. A spelling check can be performed on the full document, which does take noticeable time to finish.

One thing I really love is how cancelling an action or moving forward on the next step of a process is responsive and immediate. If you're growing bored of an action taking too long, just cancel it with ; it will stop immediately. The program feels robust and unbreakable in that way.

There is a word lookup, which accepts wildcards, for when you kinda-sorta know how to spell a word but need help. Attached to this function is an anagram checker, which benefits greatly from a virtual CPU boost. But it can only do its trick on single words, not phrases.

Earlier I mentioned how little the program offers a user who has gained confidence and skill.
That's not entirely accurate, thanks to its most surprising super power: macros. Yes, you read that right. This word processor designed for children includes macros. They are stored at the application level, not the document level, so do keep that in mind. Twenty can be defined, each consisting of up to 32 keystrokes. Running keystrokes in a macro is functionally identical to typing by hand. Because the program can be driven 100% by keyboard alone, macros can trigger menu selections and step through tedious parts of those commands. For example, to save our document periodically we need to do the following every time: That looks like a job for to me.

[Video: Defining a macro to save, with overwrite, the current file.]

After it is defined, I execute it, which happens very quickly in the emulator. Watch carefully. If you can perform an action through a series of discrete keyboard commands, you can make a macro from it. This is freeing, but also works to highlight what you cannot do with the program. For example, there is no concept of an active selection, so a word is the smallest unit you can directly manipulate due to keyboard control limitations. It's not nothin', but it's not quite enough.

I started setting up markdown macros, so I could wrap the current word in or for italic and bold. Doing the actions in the writing area and noting the minimal steps necessary to achieve the desired outcome translated into perfect macros. I was even able to make a kind of rudimentary "undo" for when I wrap something in italic but intended to use bold.

This reminded me that I haven't touched macro functionality in modern apps since my AppleScript days. Lemme check something real quick. I've popped open LibreOffice and feel immediately put off by its Macros function. It looks super powerful; a full dedicated code editor with watched variables for authoring in its scripting language. Or is it languages? Is it Macros or ScriptForge? What are "Gimmicks?" Just what is going on?
Google Docs is about the same, using Javascript for its "Apps Script" functionality. Here's a Stack Overflow post where someone wants to select text and set it to "blue and bold" with a keystroke and is presented with 32 lines of Javascript. Many programs seem to have taken a "make the simple things difficult, and the hard things possible" approach to macros.

Microsoft Word reportedly has a "record" function for creating macros, which will watch what you do and let you play back those actions in sequence (a la Adobe Photoshop's "actions"). This sounds like a nice evolution of the BSW method. I say "reportedly" because it is not available in the online version, and so I couldn't try it for myself without purchasing Microsoft 365.

I certainly don't doubt the sky's the limit with these modern macro systems. I'm sure amazing utilities can be created, with custom dialog boxes, internet data retrieval, and more. The flip side is that a lot of power has been stripped from the writer and handed over to the programmer, which I think is unfortunate. Bank Street Writer allows an author to use the same keyboard commands for creating a macro as for writing a document. There is a forgotten lesson in that. Yes, BSW's macros are limited compared to modern tools, but they are immediately accessible and intuitive. They leverage skills the user is already known to possess. The learning curve is a straight, flat line.

Like any good word processor, user-definable tab stops are possible. Bringing up the editor for tabs displays a ruler showing tab stops and their type (normal vs. decimal-aligned). Using the same tools as for writing, the ruler is similarly editable. Just type a or a anywhere along the ruler. So, the lack of a ruler I noted at the beginning is now doubly frustrating, because it exists! Perhaps it was determined to be too much visual clutter for younger users?
Again, this is where the Options screen could have allowed advanced users to toggle on features as they grow in comfort and ambition. From what I can tell in the product catalogs, the only major revision after this was for the Macintosh, which added a whole host of publishing features. If I think about my experience with BSW these past two weeks, and think about what my wish-list for a hypothetical update might be, "desktop publishing" has never crossed my mind.

Having said all of that, I've really enjoyed using it to write this post. It has been solid, snappy, and utterly crash-free. To be completely frank, when I switched over into LibreOffice, a predominantly native app for Windows, it felt laggy and sluggish. Bank Street Writer feels smooth and purpose-built, even in an emulator. Features are discoverable, and the UI always makes it clear what action can be taken next. I never feel lost, nor do I worry that an inadvertent action will have unknowable consequences. The impression of it being an assistant to my writing process is strong, probably more so than with many modern word processors. This is cleanly illustrated by the prompt area, which feels like a "good idea we forgot." (I also noted this in my ThinkTank examination.)

I cannot lavish such praise upon the original Bank Street Writer, only on this Plus revision. The original is 40 columns only; spell-checking is a completely separate program; there is no thesaurus, no macros, a kind of bizarre modal switch between writing/editing/transfer modes, no arrow key support, and other quirks of its time and target system (the original Apple 2). Plus is an incredibly smart update to that original, increasing its utility 10-fold without sacrificing ease of use. In fact, it's actually easier to use, in my opinion, than the original, and comes just shy of being something I could use on a regular basis. Bank Street Writer is very good! But it's not quite great.
Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible).

Emulation setup:
- AppleWin 32bit 1.31.0.0 on Windows 11
- Emulating an Enhanced Apple //e
- Authentic machine speed (enhanced disk access speed)
- Monochrome (amber) for clean 80-column display
- Disk II controller in slot 5 (enables four floppies, total)
- Mouse interface in slot 4
- Bank Street Writer Plus

The three most obvious ways computers entered schools:
- At the classroom level there are one or more computers.
- At the school level there is a "computer lab" with one or more systems.
- There were no computers.

Saving the current document, step by step:
1. Hit (open the File menu)
2. Hit (select Save File)
3. Hit three times (stepping through default confirmation dialogs)

I find that running at 300% CPU speed in AppleWin works great. No repeating key issues, and the program is well-behaved. Spell check works quickly enough to not be annoying, and I honestly enjoyed watching it work its way through the document. Sometimes there's something to be said for slowing the computer down to swift human-speed, to form a stronger sense of connection between your own work and the computer's work.

I did mention that I used a 4-disk setup, but in truth I never really touched the thesaurus. A 3-disk setup is probably sufficient. The application never crashed; the emulator was rock-solid.

CiderPress2 works perfectly for opening the files on an Apple ][ disk image. Files are of file extension, which CiderPress2 tries to open as disassembly, not text. Switch "Conversion" to "Plain Text" and you'll be fine.

This is a program that would benefit greatly from one more revision. It's very close to being enough for a "minimalist" crowd. There are four key pieces missing for completeness:
- Much longer document handling
- Smarter, expanded dictionary, with definitions
- Customizable UI, display/hide: prompts, ruler, word count, etc.
- Extra formatting options, like line spacing, visual centering, and so on.
For a modern writer using hyperlinks, this can trip up the spell-checker quite ferociously. It doesn't understand, nor can it be taught, pattern-matching against URLs so as to skip them.

baby steps 1 month ago

We need (at least) ergonomic, explicit handles

Continuing my discussion on Ergonomic RC, I want to focus on the core question: should users have to explicitly invoke handle/clone, or not? This whole "Ergonomic RC" work was originally proposed by Dioxus, and their answer is simple: definitely not . For the kind of high-level GUI applications they are building, having to call to clone a ref-counted value is pure noise. For that matter, for a lot of Rust apps, even cloning a string or a vector is no big deal. On the other hand, for a lot of applications, the answer is definitely yes – knowing where handles are created can impact performance, memory usage, and even correctness (don't worry, I'll give examples later in the post). So how do we reconcile this?

This blog argues that we should make it ergonomic to be explicit . This wasn't always my position, but after an impactful conversation with Josh Triplett, I've come around. I think it aligns with what I once called the soul of Rust : we want to be ergonomic, yes, but we want to be ergonomic while giving control 1 . I like Tyler Mandry's "Clarity of purpose" construction: "Great code brings only the important characteristics of your application to your attention" . The key point is that there is great code in which cloning and handles are important characteristics , so we need to make that code possible to express nicely. This is particularly true since Rust is one of the very few languages that really targets that kind of low-level, foundational code.

This does not mean we cannot (later) support automatic clones and handles. It's inarguable that this would benefit clarity of purpose for a lot of Rust code. But I think we should focus first on the harder case, the case where explicitness is needed, and get that as nice as we can ; then we can circle back and decide whether to also support something automatic. One of the questions for me, in fact, is whether we can get "fully explicit" to be nice enough that we don't really need the automatic version.
There are benefits from having "one Rust", where all code follows roughly the same patterns, where those patterns are perfect some of the time, and don't suck too bad 2 when they're overkill.

I mentioned this blog post resulted from a long conversation with Josh Triplett 3 . The key phrase that stuck with me from that conversation was: Rust should not surprise you . The way I think of it is like this. Every programmer knows what it's like to have a marathon debugging session – to sit and stare at code for days and think, but… how is this even POSSIBLE? Those kinds of bug hunts can end in a few different ways. Occasionally you uncover a deeply satisfying, subtle bug in your logic. More often, you find that you wrote and not . And occasionally you find out that your language was doing something that you didn't expect. That some simple-looking code concealed a subtle, complex interaction. People often call this kind of thing a footgun .

Overall, Rust is remarkably good at avoiding footguns 4 . And part of how we've achieved that is by making sure that things you might need to know are visible – like, explicit in the source. Every time you see a Rust match, you don't have to ask yourself "what cases might be missing here" – the compiler guarantees you they are all there. And when you see a call to a Rust function, you don't have to ask yourself if it is fallible – you'll see a if it is. 5

So I guess the question is: would you ever have to know about a ref-count increment ? The tricky part is that the answer here is application dependent. For some low-level applications, definitely yes: an atomic reference count is a measurable cost. To be honest, I would wager that the set of applications where this is true is vanishingly small. And even in those applications, Rust already improves on the state of the art by giving you the ability to choose between and and then proving that you don't mess it up .
But there are other reasons you might want to track reference counts, and those are less easy to dismiss. One of them is memory leaks. Rust, unlike GC'd languages, has deterministic destruction . This is cool, because it means that you can leverage destructors to manage all kinds of resources, as Yehuda wrote about long ago in his classic ode-to- RAII entitled "Rust means never having to close a socket" . But although the points where handles are created and destroyed are deterministic, the nature of reference-counting can make it much harder to predict when the underlying resource will actually get freed. And if those increments are not visible in your code, it is that much harder to track them down.

Just recently, I was debugging Symposium , which is written in Swift. Somehow I had two instances when I only expected one, and each of them was responding to every IPC message, wreaking havoc. Poking around I found stray references floating around in some surprising places, which was causing the problem. Would this bug have still occurred if I had to write explicitly to increment the ref count? Definitely, yes. Would it have been easier to find after the fact? Also yes. 6

Josh gave me a similar example from the "bytes" crate . A type is a handle to a slice of some underlying memory buffer. When you clone that handle, it will keep the entire backing buffer around. Sometimes you might prefer to copy your slice out into a separate buffer so that the underlying buffer can be freed. It's not that hard for me to imagine trying to hunt down an errant handle that is keeping some large buffer alive and being very frustrated that I can't see explicitly in the where those handles are created.

A similar case occurs with APIs like 7 . takes an and, if the ref-count is 1, returns an . This lets you take a shareable handle that you know is not actually being shared and recover uniqueness. This kind of API is not frequently used – but when you need it, it's so nice it's there.
Entering the conversation with Josh, I was leaning towards a design where you had some form of automated cloning of handles and an allow-by-default lint that would let crates which don’t want that turn it off. But Josh convinced me that there is a significant class of applications that want handle creation to be ergonomic AND visible (i.e., explicit in the source). Low-level network services and even things like Rust For Linux likely fit this description, but any Rust application that uses Rc or Arc might also. And this reminded me of something Alex Crichton once said to me. Unlike the other quotes here, it wasn’t in the context of ergonomic ref-counting, but rather when I was working on my first attempt at the “Rustacean Principles” . Alex was saying that he loved how Rust was great for low-level code but also worked well for high-level stuff like CLI tools and simple scripts. I feel like you can interpret Alex’s quote in two ways, depending on what you choose to emphasize. You could hear it as, “It’s important that Rust is good for high-level use cases”. That is true, and it is what leads us to ask whether we should even make handles visible at all. But you can also read Alex’s quote as, “It’s important that there’s one language that works well enough for both ” – and I think that’s true too. The “true Rust gestalt” is when we manage to simultaneously give you the low-level control that grungy code needs but wrapped in a high-level package. This is the promise of zero-cost abstractions, of course, and Rust (in its best moments) delivers. Let’s be honest. High-level GUI programming is not Rust’s bread-and-butter, and it never will be; users will never confuse Rust for TypeScript. But then, TypeScript will never be in the Linux kernel. The goal of Rust is to be a single language that can, by and large, be “good enough” for both extremes. The goal is to make enough low-level details visible for kernel hackers but do so in a way that is usable enough for a GUI.
It ain’t easy, but it’s the job. This isn’t the first time that Josh has pulled me back to this realization. The last time was in the context of async fn in dyn traits, and it led to a blog post talking about the “soul of Rust” and a followup going into greater detail . I think the catchphrase “low-level enough for a Kernel, usable enough for a GUI” kind of captures it. There is a slight caveat I want to add. I think another part of Rust’s soul is preferring nuance to artificial simplicity (“as simple as possible, but no simpler”, as they say). And I think the reality is that there’s a huge set of applications that make new handles left-and-right (particularly but not exclusively in async land 8 ) and where explicitly creating new handles is noise, not signal. This is why e.g. Swift 9 makes ref-count increments invisible – and they get a big lift out of that! 10 I’d wager most Swift users don’t even realize that Swift is not garbage-collected 11 . But the key thing here is that even if we do add some way to make handle creation automatic, we ALSO want a mode where it is explicit and visible. So we might as well do that one first. OK, I think I’ve made this point 3 ways from Sunday now, so I’ll stop. The next few blog posts in the series will dive into (at least) two options for how we might make handle creation and closures more ergonomic while retaining explicitness. I see a potential candidate for a design axiom… rubs hands with an evil-sounding cackle and a look of glee   ↩︎ It’s an industry term .  ↩︎ Actually, by the standards of the conversations Josh and I often have, it wasn’t really all that long – an hour at most.  ↩︎ Well, at least sync Rust is. I think async Rust has more than its share, particularly around cancellation, but that’s a topic for another blog post.  ↩︎ Modulo panics, of course – and no surprise that accounting for panics is a major pain point for some Rust users.
↩︎ In this particular case, it was fairly easy for me to find regardless, but this application is very simple. I can definitely imagine ripgrep’ing around a codebase to find all increments being useful, and that would be much harder to do without an explicit signal that they are occurring.  ↩︎ Or Arc::make_mut , which is one of my favorite APIs. It takes an &mut Arc<T> and gives you back mutable (i.e., unique) access to the internals, always! How is that possible, given that the ref count may not be 1? Answer: if the ref-count is not 1, then it clones it. This is perfect for copy-on-write-style code. So beautiful. 😍  ↩︎ My experience is that, due to language limitations we really should fix, many async constructs force you into 'static bounds which in turn force you into Rc and Arc where you’d otherwise have been able to use & references.  ↩︎ I’ve been writing more Swift and digging it. I have to say, I love how they are not afraid to “go big”. I admire the ambition I see in designs like SwiftUI and their approach to async. I don’t think they bat 100, but it’s cool they’re swinging for the stands. I want Rust to dare to ask for more !  ↩︎ Well, not only that. They also allow class fields to be assigned when aliased which, to avoid stale references and iterator invalidation, means you have to move everything into ref-counted boxes and adopt persistent collections, which in turn comes at a performance cost and makes Swift a harder sell for lower-level foundational systems (though by no means a non-starter, in my opinion).  ↩︎ Though I’d also wager that many eventually find themselves scratching their heads about a ref-count cycle. I’ve not dug into how Swift handles those, but I see references to “weak handles” flying around, so I assume they’ve not (yet?) adopted a cycle collector. To be clear, you can get a ref-count cycle in Rust too! It’s harder to do since we discourage interior mutability, but not that hard.
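The copy-on-write behavior described in the footnote above is what std’s `Arc::make_mut` provides: it only clones when the handle is actually shared. A minimal sketch (the `push_cow` helper is my own illustration):

```rust
use std::sync::Arc;

// Arc::make_mut clones the inner value only if the ref-count
// is greater than 1, then hands back unique `&mut` access.
fn push_cow(handle: &mut Arc<Vec<i32>>, value: i32) {
    Arc::make_mut(handle).push(value);
}

fn main() {
    let mut a = Arc::new(vec![1, 2]);
    let b = Arc::clone(&a);       // shared: count == 2
    push_cow(&mut a, 3);          // clones; `b` is untouched
    assert_eq!(*a, vec![1, 2, 3]);
    assert_eq!(*b, vec![1, 2]);
    push_cow(&mut a, 4);          // count == 1: mutates in place
    assert_eq!(*a, vec![1, 2, 3, 4]);
}
```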

NorikiTech 1 month ago

Rust struct field order

Part of the “ Rustober ” series. One of Rust’s quirks is that when initializing a struct, the named fields can be given in any order. In Swift, this is an error. However, looking at the rules for C initialization , it seems C behaves the same way: the feature is called “designated initializers” and has been available since C99. Possibly this also has to do with Rust’s struct update syntax, where you can initialize a struct based on another instance; in that case the set of field names would be incomplete, so their order does not really matter since they are named.
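The post’s code samples didn’t survive extraction, so here is a minimal sketch of both behaviors (the `Point` type and `with_x` helper are my own):

```rust
#[derive(Debug, PartialEq)]
struct Point { x: i32, y: i32 }

// Struct update syntax: take `x` from the argument, the rest from `base`.
fn with_x(base: Point, x: i32) -> Point {
    Point { x, ..base }
}

fn main() {
    // Fields in a struct literal may appear in any order.
    let a = Point { y: 2, x: 1 };
    assert_eq!(a, Point { x: 1, y: 2 });
    // With `..base`, only the names matter, not the declaration order.
    assert_eq!(with_x(a, 10), Point { x: 10, y: 2 });
}
```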

NorikiTech 1 month ago

Rust traits vs Swift protocols

Part of the “ Rustober ” series. As I said in the first post of the series , parts of Rust are remarkably similar to Swift. Let’s try to compare Rust traits to Swift protocols. I’m very new to Rust and I’m not aiming for completeness, so take it with a grain of salt. Looking at them both, Swift leans towards developer ergonomics (many things are implicit, with less strict rules around what can be defined where) and Rust leans towards compile-time guarantees: there’s less flexibility but also less ambiguity. For example, in Swift you can add multiple protocol conformances at once, and the compiler will pick up any types that are named the same as associated types. Even this short example shows how flexible Swift is — and we haven’t even seen generics yet. I’m convinced that generics in Rust traits are better designed than in Swift, partly because they are more granular. Whenever I tried to compose anything complicated out of Swift protocols, I always ran into problems either with “Self or associated type requirements” (when a protocol can only be used as a generic constraint) or with existential types. Here’s a real example where Swift couldn’t help me constrain an associated type on a protocol, so I had to leave it simply as an associated type without additional conformance. The idea is to have a service that can swap between multiple instances of concrete providers, all conforming to several different protocols and ultimately descending from one common ancestor. Here’s similar code in Rust, which does not have this problem. I’m looking forward to exploring the differences (and similarities) (and bashing my head on the wall) when I get to write some actual Rust code.
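The post’s Rust example was lost in extraction; here is a hedged sketch of the kind of constraint it describes — a bound placed directly on an associated type. The `Provider`/`Answer` names are my own invention, not from the post:

```rust
use std::fmt::Display;

// Rust lets you put a bound directly on an associated type,
// the constraint that was hard to express on a Swift protocol.
trait Provider {
    type Output: Display;
    fn provide(&self) -> Self::Output;
}

struct Answer;
impl Provider for Answer {
    type Output = i32;
    fn provide(&self) -> i32 { 42 }
}

// Generic code can rely on the bound without restating it.
fn describe<P: Provider>(p: &P) -> String {
    p.provide().to_string()
}

fn main() {
    assert_eq!(describe(&Answer), "42");
}
```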

Cassidy Williams 3 months ago

Ductts Build Log

I built and released Ductts , an app for tracking how often you cry! I built it with React Native and Expo (both of which were new to me) and it was really fun (and challenging) putting it together. Yes! I should have anticipated just how many people would ask if I’m okay. I am! I just like data. Here’s a silly video I made of the app so you can see it in action first! The concept of Ductts came from my pile of domains, originally from November 2022 (according to my logs of app ideas, ha). I revisited the idea on and off pretty regularly since then, especially when I went through postpartum depression in 2023, and saw people on social media explain how they manually track when they cry in their notes apps for their therapists. I had a few different name ideas for the app, but more than anything I wanted it to have a clever logo, because it felt like there was a good opportunity for one. I called it crycry for a while, CryTune, TTears (because I liked the idea of the emoticon being embedded in the logo), and then my cousin suggested Ductts! With that name I could do the design idea, and I thought it might be a fun pun on tear ducts and maybe a duck mascot. Turns out ducks are hard to draw, so I just ended up with the wordmark: I really wanted this app to be native so it would be easy to use on a phone! I poked around with actually using native Swift, but… admittedly the learning curve slowed me down every time I got into it and I would lose motivation. So, in a moment of yelling at myself to “just build SOMETHING, Cassidy” I thought it might be fun to try using AI to get me started with React Native! I tried a0 at first, and it was pretty decent at making screens that I thought looked nice, but at the time when I tried it, the product was a bit too immature and wouldn’t produce much that I could actually work with. But, it was a good thing to see something that felt a bit real! 
So, from there, I started a fresh Expo app with: I definitely stumbled through building the app at first because I used the starter template and had to figure out which things I needed to remove, and probably removed a bit too much at first (more on that later). I got very familiar with the Expo docs , and GitHub Copilot was helpful too as I asked about how certain things worked. In terms of the “order” in which I implemented features, it went like this: And peppered throughout all of this was a lot of styling, re-styling, debugging, context changes, design changes, all that jazz. This list feels so small when I think about all of the tiny adjustments it took to make drawers slide smoothly, gestures move correctly, and testing across screen sizes. There are a few notable libraries and packages that I used specifically to get everything where I wanted: I learned a lot about how Expo does magic with their Expo Go app for testing your apps. Expo software developer Kadi Kraman helped explain it to me best: A React Native app consists of two parts: you have the JS bundle, and all the native code. Expo Go is a sandbox environment that gives you a lot of the native code you might need for learning and prototyping. So we include the native code for image, camera, push notifications and a whole bunch of libraries that are often used, but it’s limited due to what is possible on the native platforms. So when you need to change things in the native-land, you need to build the native code part yourself (like your own custom version of Expo Go basically). One of the things I really wanted to implement was an animated splash screen, and y’all… after building the app natively, properly, about a million times, I decided that I’m cool with it being a static image. But, here’s the animation I made anyway, for posterity: So many things are funky when it comes to building things natively, for example, how dependencies work and what all is included.
There are a handful of libraries where I didn’t read the README (I’m sorry!!!!) and just installed the package to keep moving forward, and then learned that the library would work fine in Expo Go, but needed different packages installed to work natively. Phew. Expo Router is one of them, where again, if I had just read the docs, I could have known that I shouldn’t have removed certain packages when using . This is actually what you need to run if you want to install : Kadi once again came in clutch with a great explanation: The reason this sometimes happens is: Expo Go has a ton of native libraries pre-bundled for ease of development. So, even if you’re not installing them in your project, Expo Go includes the native code for them. For a specific example, e.g. this QR code library requires react-native-svg as a peer dependency and they have it listed in the instructions . However if you were to ignore this and only install the QR code library, it would still work in Expo Go, because it has the native code from pre-bundled. But when you create a development build, preview build or a production build, we don’t want to include all the unused code from Expo Go, it will be a completely clean build with only the libraries you’ve installed explicitly. The Expo Doctor CLI tool saved my bacon a ton here as I stumbled through native builds, clearing caches, and reinstalling everything. Kadi and the Expo team actually made a PR to help check for peer dependencies after I asked them a bunch of questions, which was really awesome of them! Y’all shipping native apps is a horrible experience if you are used to web dev and just hitting “deploy” on your provider of choice. I love web development so much. It’s beautiful. It’s the way things should be. But anyway, App Store time. I decided to just do the iOS App Store at first because installing the Android Simulator was the most wretched developer experience I’ve had in ages and it made me want to throw my laptop in the sea. 
Kadi (I love you Kadi) had a list of great resources for finalizing apps: TL;DR: Build your app, make a developer account, get 3-5 screenshots on a phone and on a tablet, fill out a bunch of forms about how you use user data, make a privacy policy and support webpage, decide if you want it free or paid, and fill out forms if it’s paid. Y’all… I’m grateful for the Expo team and for EAS existing. Their hand-holding was really patient, and their Discord community is awesome if you need help. Making the screenshots was easy with Expo Orbit , which lets you choose which device you want for each screenshot, and I used Affinity Designer to make the various logos, screenshots, and marketing images it needed. I decided to make the app just a one-time $0.99 purchase, which was pretty easy (you just click “paid” and the amount you want to sell it for), BUT if you want to sell it in the European Union, you need to have a public address and phone number for that. It took a few pieces of verification with a human to make that work. I have an LLC with which I do consulting work and used the registered agent’s information for that (that’s allowed!), so that my personal contact info wouldn’t be front-and-center in the App Store for all of Europe to see. The website part was the least of my worries, honestly. I love web dev. I threw together an Astro website with a link to the App Store, a Support page, and a Privacy Policy page, and plopped it onto my existing domain name ductts.app . One thing I did dive deep on, which was unnecessary but fun, was an Import Helper page to help make a Ductts-compatible spreadsheet for those who might already track their tears in a note on their phone. Making a date converter and a sample CSV and instructions felt like one of those things that maybe 2 people in the world would ever use… but I’m glad I did it anyway.
Finally, after getting alllll of this done, it was just waiting a few days until the app was finally up on the App Store, almost anticlimactically! While I waited I made a Product Hunt launch page , which luckily used all the same copy and images from the App Store, and it was fun to see it get to the #4 Health & Fitness app of the day on Product Hunt, and #68 in Health & Fitness on the App Store! I don’t expect much from Ductts, really. It was a time-consuming side project that taught me a ton about Expo, React Native, and shipping native apps, and I’m grateful for the experience. …plus now I can have some data on how much I cry. I’m a parent! It happens! Download Ductts , log your tears, and see ya next time.

Peter Steinberger 3 months ago

Poltergeist: The Ghost That Keeps Your Builds Fresh

Meet Poltergeist: an AI-friendly universal build watcher that auto-detects and rebuilds any project—Swift, Rust, Node.js, CMake, or anything else—the moment you save a file. Zero config, just haunting productivity.


My agentic coding methodology of June 2025

I was chatting with some friends about how I'm using "AI" tools to write code. Like everyone else, my process has been evolving over the past few months. It seemed worthwhile to do a quick writeup of how I'm doing stuff today. At the moment, I'm mostly living in Claude Code. My "planning methodology" is: "Let's talk through an idea I have. I'm going to describe it. Ask me lots of questions. When you understand it sufficiently, write out a draft plan." After that, I chat with the LLM for a bit. Then, the LLM shows me the draft plan. I point out things I don't like in the plan and ask for changes. The LLM revises the plan. We do that a few times. Once I'm happy with the plan, I say something along the lines of: "Great. now write that to as a series of prompts for an llm coding agent. DRY YAGNI simple test-first clean clear good code" I check over the plan. Maybe I ask for edits. Maybe I don't. And then I type to blow away the LLM's memory of this nice plan it just made. "There's a plan for a feature in . Read it over. If you have questions, let me know. Otherwise, let's get to work." Invariably, there are (good) questions. It asks. I answer. "Before we get going, update the plan document based on the answers I just gave you." When the model has written out the updated plan, it usually asks me some variant of "can I please write some code now?" "lfg" And then the model starts burning tokens. (Claude totally understands "lfg". Qwen tends to overthink it.) I keep an eye on it while it runs, occasionally stopping it to redirect or critique something it's done until it reports "Ok! Phase 1 is production ready." (I don't know why, but lately, it's very big on telling me first-draft code is production ready.) Usually, I'll ask it if it's written and run tests. Usually, it actually has, which is awesome. "Ok. please commit these changes and update the planning doc with your current status."
Once the model has done that, I usually clear it again to get a nice fresh context window and tell it "Read and do the next phase." And then we lather, rinse, and repeat until there's something resembling software. This process is startlingly effective most of the time. Part of what makes it work well is the CLAUDE.md file that spells out my preferences and workflow. Part of it is that Anthropic's models are just well tuned for what I'm doing (which is mostly JavaScript, embedded C++, and Swift.) Generally, I find that the size of spec that works is something the model can blaze through in less than a couple hours with a focused human paying attention, but really, the smaller and more focused the spec, the better. If you've got a process that looks like mine (or is wildly different), I'd love to hear from you about it. Drop me a line at [email protected].

Xe Iaso 5 months ago

Apple just Sherlocked Docker

EDIT(2025-06-09 20:51 UTC): The containerization stuff they're using is open source on GitHub . Digging into it. Will post something else when I have something to say. This year's WWDC keynote was cool. They announced a redesign of the OSes, unified the version numbers across the fleet, and found ways to hopefully make AI useful (I'm reserving my right to be a skeptic based on how bad Apple Intelligence currently is). However, the keynote slept on the biggest announcement for developers: they're bringing the ability to run Linux containers in macOS: The Containerization framework enables developers to create, download, or run Linux container images directly on Mac. It’s built on an open-source framework optimized for Apple silicon and provides secure isolation between container images. This is an absolute game changer. One of the biggest pain points with my MacBook is that the battery life is great...until I start my Linux VM or run the Docker app. I don't even know where to begin to describe how cool this is and how it will make production deployments so much easier to access for the next generation of developers. Maybe this could lead to Swift being a viable target for web applications. I've wanted to use Swift on the backend before but Vapor and other frameworks just feel so frustratingly close to greatness. Combined with the Swift Static Linux SDK and some of the magic that powers Private Cloud Compute , you could get an invincible server side development experience that rivals what Google engineers dream up directly on your MacBook. I can't wait to see more. This may actually be what gets me to raw-dog beta macOS on my MacBook. The things I'd really like to know: I really wonder how Docker is feeling, I think they're getting Sherlocked . Either way, cool things are afoot and I can't wait to see more.

Peter Steinberger 5 months ago

Migrating 700+ Tests to Swift Testing: A Real-World Experience

How I migrated over 700 tests from XCTest to Swift Testing across two projects, with AI assistance and systematic refinement

HeyDingus 6 months ago

7 Things This Week [#176]

A weekly list of interesting things I found on the internet, posted on Sundays. Sometimes themed, often not. 1️⃣ Nick Heer does the work in dismantling this sexist post written after Apple got its ass handed to it by Judge Gonzalez Rogers. [ 🔗 pxlnv.com ] 2️⃣ Whoa. Monty Python and the Holy Grail turned 50 this year! It still makes me laugh out loud every time I watch it (which you can do for free on YouTube). [ 🔗 kottke.org ] 3️⃣ I had no idea Taylor Swift was so web-forward right from the beginning. She had her music available to download from her website back in 2002 (when she was 13) and by 2003 had a ‘ Taylor Talk’ tab there — which I presume was an early blog before she had Tumblr. [ 🔗 webdesignmuseum.org ] 4️⃣ This restaurant is mind-blowing. It looks like a drawing inside! [ 🔗 kottke.org ] 5️⃣ The Baltimore Ravens went all out in their Severance -themed schedule reveal video. [ ▶️ youtube.com ] 6️⃣ BasicAppleGuy is trying a new approach to reader support in which all his wallpapers and other haberdashery remain free to everyone, but can also be purchased to easily download the files all at once. I like the idea and hope it’s successful for him! [ 🔗 basicappleguy.com ] 7️⃣ This overlapping version of “ Dear Theodosia” is beautiful. [ ▶️ youtube.com ] Thanks for reading 7 Things . If you enjoyed these links or have something neat to share, please let me know . And remember that you can get more links to internet nuggets that I’m finding every day by following me @jarrod on the social web. HeyDingus is a blog by Jarrod Blundy about technology, the great outdoors, and other musings. If you like what you see — the blog posts , shortcuts , wallpapers , scripts , or anything — please consider leaving a tip , checking out my store , or just sharing my work. Your support is much appreciated! I’m always happy to hear from you on social , or by good ol' email .

xenodium 6 months ago

Awesome Emacs on macOS

Update: Added macOS Trash integration. While GNU/Linux had been my operating system of choice for many years, these days I'm primarily on macOS. Lucky for me, I spend most of my time in Emacs itself (or a web browser), making the switch between operating systems a relatively painless task. I build iOS and macOS apps for a living, so naturally I've accumulated a handful of macOS-Emacs integrations and tweaks over time. Below are some of my favorites. For starters, I should mention I run Emacs on macOS via the excellent Emacs Plus homebrew recipe. These are the options I use: Valeriy Savchenko has created some wonderful macOS Emacs icons . These days, I use his curvy 3D rendered icon , which I get via Emacs Plus's option. It's been a long while since I've settled on using macOS's Command (⌘) as my Emacs Meta key. For that, you need: At the same time, I've disabled the ⌥ key to avoid inadvertent surprises. After setting ⌘ as Meta key, I discovered C-M-d is not available to Emacs for binding keys. There's a little workaround : You may have noticed the Emacs Plus option. I didn't like Emacs refocusing other frames when closing one, so I sent a tiny patch over to Emacs Plus , which gave us that option. I also prefer reusing existing frames whenever possible. Most of my visual tweaks have been documented in my Emacs eye candy post . For macOS-specific things, read on… It's been a while since I've added this, though vaguely remember needing it to fix mode line rendering artifacts. I like using a transparent title bar and these two settings gave me just that: I want a menu bar like other macOS apps, so I enable with: If you got a more recent Apple keyboard, you can press the 🌐 key to insert emojis from anywhere, including Emacs. If you haven't got this key, you can always , which launches the very same dialog. Also check out Charles Choi's macOS Native Emoji Picking in Emacs from the Edit Menu . 
If you prefer Apple's long-press approach to inserting accents or other special characters, I got an Emacs version of that . I wanted to rotate my monitor from the comfort of M-x, so I made Emacs do it . While there are different flavors of "open with default macOS app" commands out there (ie. crux-open-with as part of Bozhidar Batsov's crux ), I wanted one that let me choose a specific macOS app . Shifting from Emacs to Xcode via "Open with" is simple enough, but don't you want to also visit the very same line ? Apple offers SF Symbols on all their platforms, so why not enable Emacs to insert and render them? This is particularly handy if you do any sort of iOS/macOS development, enabling you to insert SF Symbols using your favorite completion framework. I happen to remain a faithful ivy user. Speaking of enabling SF Symbol rendering, you can also use them to spiff your Emacs up. Check out Charles Choi's Calle 24 for a great-looking Emacs toolbar. Also, Christian Tietze shows how to use SF Symbols as Emacs tab numbers . While macOS's Activity Monitor does a fine job killing processes, I wanted something a little speedier, so I went with a killing solution leveraging Emacs completions . Having learned how simple it was to enable Objective-C babel support , I figured I could do something a little more creative with SwiftUI, so I published ob-swiftui on MELPA. I found the nifty duti command-line tool to change default macOS applications super handy, but could never remember its name when I needed it. And so I decided to bring it into dwim-shell-command as part of my toolbox . I got a bunch of handy helpers in dwim-shell-commands.el (specially all the image/video helpers via ffmpeg and imagemagick). Go check dwim-shell-commands.el . There's loads in there, but here are my macOS-specific commands: Continuing on the family, I should also mention . While I hardly ever change my Emacs theme, I do toggle macOS dark mode from time to time to test macOS or web development.
One last … One that showcases toggling the macOS menu bar (autohide) . While this didn't quite stick for me, it was a fun experiment to add Emacs into the mix . This is just a little fun banner I see whenever I launch eshell . This is all you need: I wanted a quick way to record or take screenshots of macOS windows, so I now have my lazy way , leveraging macosrec , a recording command line utility I built. Invoked via of course. If you want any sort of code completion for your macOS projects, you'd be happy to know that eglot works out of the box. This is another experiment that didn't quite stick, but I played with controlling the Music app's playback . While I still purchase music via Apple's Music app, I now play directly from Emacs via Ready Player Mode . I'm fairly happy with this setup, having scratched that itch with my own package. By the way, those buttons also leverage SF Symbols on macOS. While there are plenty of solutions out there leveraging the command line tool to reveal files in macOS's Finder, I wanted one that revealed multiple files in one go. For that, I leveraged the awesome emacs-swift-module , also by Valeriy Savchenko . The macOS trash has saved my bacon on more than one occasion. Make Emacs aware of it . Also check out . While elisp wasn't in my top languages to learn back in the day, I sure am glad I finally bit the bullet and learned a thing or two. This opened many possibilities. I now see Emacs as a platform to build utilities and tools off of. A canvas of sorts , to be leveraged in and out of the editor. For example, you could build your own bookmark launcher and invoke from anywhere on macOS. Turns out you can also make Emacs your default email composer . While not exactly an Emacs tweak itself, I wanted to extend Emacs bindings into other macOS apps. In particular, I wanted more reliable Ctrl-n/p usage everywhere , which I achieved via Karabiner-Elements . I also mapped to , which really feels just great!
I can now cancel things, dismiss menus, dialogs, etc. everywhere. With my Emacs usage growing over time, it was only a matter of time until I discovered org mode. This blog is well over 11 years old now, yet still powered by the very same org file (beware, this file is big). With my org usage growing, I felt like I was missing org support outside of Emacs. And so I started building iOS apps revolving around my Emacs usage. Journelly is my latest iOS app, centered around note-taking and journaling. The app feels like tweeting, but for your eyes only, of course. It's powered by org markup, which can be synced with Emacs via iCloud. Org habits are handy for tracking daily habits. However, they weren't super practical for me, as I often wanted to check things off while on the go (away from Emacs). That led me to build Flat Habits. While these days I'm using Journelly to jot down just about anything, before that, I built and used Scratch as a scratch pad of sorts. No iCloud syncing, but needless to say, it's also powered by org markup. For more involved writing, nothing beats Emacs org mode. But what if I want quick access to my org files while on the go? Plain Org is my iOS solution for that. I'll keep looking for other macOS-related tips and update this post in the future.
In the meantime, consider ✨ sponsoring ✨ this content, my Emacs packages, buying my apps, or just taking care of your eyes ;)

dwim-shell-commands-macos-add-to-photos
dwim-shell-commands-macos-bin-plist-to-xml
dwim-shell-commands-macos-caffeinate
dwim-shell-commands-macos-convert-to-mp4
dwim-shell-commands-macos-empty-trash
dwim-shell-commands-macos-install-iphone-device-ipa
dwim-shell-commands-macos-make-finder-alias
dwim-shell-commands-macos-ocr-text-from-desktop-region
dwim-shell-commands-macos-ocr-text-from-image
dwim-shell-commands-macos-open-with
dwim-shell-commands-macos-open-with-firefox
dwim-shell-commands-macos-open-with-safari
dwim-shell-commands-macos-reveal-in-finder
dwim-shell-commands-macos-screenshot-window
dwim-shell-commands-macos-set-default-app
dwim-shell-commands-macos-share
dwim-shell-commands-macos-start-recording-window
dwim-shell-commands-macos-abort-recording-window
dwim-shell-commands-macos-end-recording-window
dwim-shell-commands-macos-toggle-bluetooth-device-connection
dwim-shell-commands-macos-toggle-dark-mode
dwim-shell-commands-macos-toggle-display-rotation
dwim-shell-commands-macos-toggle-menu-bar-autohide
dwim-shell-commands-macos-version-and-hardware-overview-info


Posting through it

I'm posting this from a very, very rough cut at a bespoke blogging client I've been having my friend Claude build out over the past couple of days. I've long suspected that "just edit text files on disk to make blog posts" is, to a certain kind of person, a great-sounding idea...but not actually the way to get me to blog. The problem is that my blog is...a bunch of text files in a git repository that's compiled into a website by a tool called "Eleventy" that runs whenever I put a file in a certain directory of this git repository and push that up to GitHub. There's no API because there's no server. And I've never learned Swift/Cocoa/etc, so building macOS and iOS tooling to create a graphical blogging client has felt...not all that plausible. Over the past year or two, things have been changing pretty fast. We have AI agents that have been trained on...well, pretty much everything humans have ever written. And they're pretty good at stringing together software. So, on a whim, I asked Claude to whip me up a blogging client that talks to GitHub in just the right way. This is the very first post using that new tool, which I'm calling "Post Through It." Ok, technically, this is the fourth post. But it's the first one I've actually been able to add any content to.

baby steps 8 months ago

Dyn async traits, part 10: Box box box

This article is a slight divergence from my Rust in 2025 series. I wanted to share my latest thinking about how to support async functions in dyn traits and, in particular, how to do so in a way that is compatible with the soul of Rust. Supporting async functions in dyn traits is a tricky balancing act. The challenge is reconciling two key things people love about Rust: its ability to express high-level, productive code and its focus on revealing low-level details. When it comes to async functions in traits, these two things are in direct tension, as I explained in my first blog post in this series – written almost four years ago! (Geez.) To see the challenge, consider this example trait: In Rust today you can write a function that takes an and invokes and everything feels pretty nice: But what if I want to write that same function using a ? If I write this… …I get an error. Why is that? The answer is that the compiler needs to know what kind of future is going to be returned by so that it can be awaited. At minimum it needs to know how big that future is so it can allocate space for it 1 . With an , the compiler knows exactly what type of signal you have, so that's no problem; but with a , it doesn't, and hence we are stuck. The most common solution to this problem is to box the future that results. The crate , for example, transforms to something like . But doing that at the trait level means that we add overhead even when you use ; it also rules out some applications of Rust async, like embedded or kernel development. So the name of the game is to find ways to let people use that are both convenient and flexible. And that turns out to be pretty hard! I've been digging back into the problem lately in a series of conversations with Michael Goulet (a.k.a. compiler-errors) and it's gotten me thinking about a fresh approach I call “box box box”. The “box box box” design starts with the call-site selection approach.
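As an aside, the boxing workaround described above can be sketched in today's stable Rust. The trait, type, and function names below are my own invention, not the post's, and the hand-rolled block_on is just enough executor to drive a future that never suspends:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Illustrative trait: instead of `async fn`, each method returns a boxed
// future, so every impl has the same statically-sized return type.
trait Signal {
    fn signal(&self) -> Pin<Box<dyn Future<Output = u32> + '_>>;
}

struct Two;

impl Signal for Two {
    fn signal(&self) -> Pin<Box<dyn Future<Output = u32> + '_>> {
        Box::pin(async { 2 })
    }
}

// Dynamic dispatch now works: the caller always gets a pinned box,
// at the cost of one heap allocation per call.
fn call_twice(s: &dyn Signal) -> u32 {
    block_on(s.signal()) + block_on(s.signal())
}

// Minimal executor: polls with a no-op waker, sufficient for futures
// that are ready immediately (like the ones above).
fn block_on<F: Future>(mut fut: F) -> F::Output {
    const RAW: RawWaker = RawWaker::new(std::ptr::null(), &VTABLE);
    const VTABLE: RawWakerVTable =
        RawWakerVTable::new(|_| RAW, |_| {}, |_| {}, |_| {});
    let waker = unsafe { Waker::from_raw(RAW) };
    let mut cx = Context::from_waker(&waker);
    // SAFETY: `fut` lives on this stack frame and is never moved again.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}
```

The per-call Box::pin allocation is exactly the overhead the post wants to avoid baking into every use of the trait.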
In this approach, when you call , the type you get back is a – i.e., an unsized value. This can't be used directly. Instead, you have to allocate storage for it. The easiest and most common way to do that is to box it, which can be done with the new box operator: This approach is fairly straightforward to explain. When you call an async function through , it results in a , which has to be stored somewhere before you can use it. The easiest option is to use the box operator to store it in a box; that gives you a , and you can await that. But this simple explanation belies two fairly fundamental changes to Rust. First, it changes the relationship of and . Second, it introduces this operator, which would be the first stable use of the box keyword 2 . It seems odd to introduce the keyword just for this one use – where else could it be used? As it happens, I think both of these fundamental changes could be very good things. The point of this post is to explain what doors they open up and where they might take us. Let's start with the core proposal. For every trait , we add inherent methods 3 to reflecting its methods: In fact, method dispatch already adds “pseudo” inherent methods to , so this wouldn't change anything in terms of which methods are resolved. The difference is that is only allowed if all methods in the trait are dyn compatible, whereas under this proposal some non-dyn-compatible methods would be added with modified signatures. Change 0 only makes sense if it is possible to create a even though it contains some methods (e.g., async functions) that are not dyn compatible. This revisits RFC #255 , in which we decided that the type should also implement the trait . I was a big proponent of RFC #255 at the time, but I've since decided I was mistaken 5 . Let's discuss. The two rules today that allow to implement are as follows: The fact that implements is at times quite powerful.
It means for example that I can write an implementation like this one: This impl makes implement for any type , including dyn trait types like . Neat. Powerful as it is, the idea of implementing doesn't quite live up to its promise. What you really want is that you could replace any with and things would work. But that's just not true because is . So actually you don't get a very “smooth experience”. What's more, although the compiler gives you a impl, it doesn't give you impls for references to – so e.g. given this trait: If I have a , I can't give that to a function that takes an . To make that work, somebody has to explicitly provide an impl like , and people often don't. However, the requirement that implement can be limiting. Imagine a trait like this: This trait has two methods. The method is dyn-compatible, no problem. The method has an argument and is therefore generic, so it is not dyn-compatible 6 (well, at least not under today's rules, but I'll get to that). (The reason is not dyn compatible: we need to make distinct monomorphized copies tailored to the type of the argument. But the vtable has to be prepared in advance, so we don't know which monomorphized version to use.) And yet, just because is not dyn compatible doesn't mean that a would be useless. What if I only plan to call , as in a function like this? Rust's current rules rule out a function like this, but in practice this kind of scenario comes up quite a lot. In fact, it comes up so often that we added a language feature to accommodate it (at least kind of): you can add a clause to your method to exempt it from dynamic dispatch. This is the reason that can be dyn compatible even when it has a bunch of generic helper methods like and . Let me pause here, as I imagine some of you are wondering what all of this “dyn compatibility” stuff has to do with AFIDT.
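That escape hatch can be sketched concretely; the trait and method names below are illustrative, not from the post. A generic method opts out of dynamic dispatch with where Self: Sized, leaving the rest of the trait usable through dyn:

```rust
// Illustrative names. `log_all` is generic, so it cannot live in a
// vtable; `where Self: Sized` exempts it from dynamic dispatch instead
// of making the whole trait dyn-incompatible.
trait Logger {
    fn log(&self, msg: &str) -> String;

    fn log_all<I: IntoIterator<Item = &'static str>>(&self, msgs: I) -> String
    where
        Self: Sized, // not callable through `dyn Logger`
    {
        msgs.into_iter().map(|m| self.log(m)).collect()
    }
}

struct Plain;

impl Logger for Plain {
    fn log(&self, msg: &str) -> String {
        format!("[{msg}]")
    }
}

// The "I only plan to call the dyn-compatible method" scenario:
// `dyn Logger` is allowed because `log_all` was opted out.
fn log_dyn(l: &dyn Logger, msg: &str) -> String {
    l.log(msg)
}
```

On a concrete Plain value, log_all still works, since Plain is Sized; only the dyn view loses it.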
The bottom line is that the requirement that type implements means that we cannot put any kind of “special rules” on dispatch, and that is not compatible with requiring a box operator when you call async functions through a trait. Recall that with our trait, you could call the method on an without any boxing: But when I called it on a , I had to write to tell the compiler how to deal with the that gets returned: Indeed, the fact that returns an but returns a already demonstrates the problem. All types are known to be and is not, so the type signature of is not the same as the type signature declared in the trait. Huh. Today I cannot write a type like without specifying the value of the associated type . To see why this restriction is needed, consider this generic function: If you invoked with an that did not specify , how would we know the type of ? We wouldn't have any idea how much space it needs. But if you invoke with , there is no problem. We don't know which method is being called, but we know it's returning a . And yet, just as we saw before, the requirement to list associated types can be limiting. If I have a and I only call , for example, then why do I need to know the type? But I can't write code like this today. Instead I have to make this function generic, which basically defeats the whole purpose of using : If we dropped the requirement that every type implements , we could be more selective, allowing you to invoke methods that don't use the associated type but disallowing those that do. So that brings us to the full proposal to permit in cases where the trait is not fully dyn compatible: A lot of things get easier if you are willing to call malloc. – Josh Triplett, recently. Rust has reserved the box keyword since 1.0, but we've never allowed it in stable Rust. The original intention was that the term box would be a generic term to refer to any “smart pointer”-like pattern, so would be a “reference counted box” and so forth.
The box keyword would then be a generic way to allocate boxed values of any type; unlike , it would do “emplacement”, so that no intermediate values were allocated. With the passage of time I no longer think this is such a good idea. But I do see a lot of value in having a keyword to ask the compiler to automatically create boxes . In fact, I see a lot of places where that could be useful. The first place is indeed the box operator that could be used to put a value into a box. Unlike , using box would allow the compiler to guarantee that no intermediate value is created, a property called emplacement . Consider this example: Rust's semantics today require (1) allocating a 4KB buffer on the stack and zeroing it; (2) allocating a box in the heap; and then (3) copying memory from one to the other. This is a violation of our Zero Cost Abstraction promise: no C programmer would write code like that. But if you write , we can allocate the box up front and initialize it in place. 9 The same principle applies when calling functions that return an unsized type. This isn't allowed today, but we'll need some way to handle it if we want to have return . The reason we can't naively support it is that, in our existing ABI, the caller is responsible for allocating enough space to store the return value and for passing the address of that space into the callee, who then writes into it. But with a return value, the caller can't know how much space to allocate. So they would have to do something else, like passing in a callback that, given the correct amount of space, performs the allocation. The most common case would be to just pass in . The best ABI for unsized return values is unclear to me, but we don't have to solve that right now; the ABI can (and should) remain unstable. But whatever the final ABI becomes, when you call such a function in the context of a expression, the result is that the callee creates a to store the result.
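The 4KB-buffer example can be made concrete (struct and field names here are mine). Today's Box::new semantically builds the value on the stack and then copies it into the heap allocation; the proposed box operator would guarantee in-place initialization instead:

```rust
// A large value: semantically, `Box::new` builds this 4 KB buffer on
// the stack, heap-allocates, then memcpys. LLVM often removes the copy,
// but nothing guarantees it, which is the post's emplacement complaint.
struct Page {
    buf: [u8; 4096],
}

fn make_page() -> Box<Page> {
    Box::new(Page { buf: [0; 4096] })
}
```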
10 If you try to write an async function that calls itself today, you get an error: The problem is that we cannot determine statically how much stack space to allocate. The solution is to rewrite it to use a boxed return value. This compiles because the compiler can allocate new stack frames as needed. But wouldn't it be nice if we could request this directly? A similar problem arises with recursive structs: The compiler tells you: As it suggests, to work around this you can introduce a : This, though, is kind of weird, because now the head of the list is stored “inline” but future nodes are heap-allocated. I personally usually wind up with a pattern more like this: Now however I can't create values with syntax and I also can't do pattern matching. Annoying. Wouldn't it be nice if the compiler could just suggest adding a keyword when you declare the struct: and have it automatically allocate the box for me? The ideal is that the presence of a box is now completely transparent, so I can pattern match and so forth fully transparently: Enums too cannot reference themselves. Being able to declare something like this would be really nice: In fact, I still remember when I used Swift for the first time. I wrote a similar enum and Xcode helpfully prompted me, “do you want to declare this enum as ?” I remember being quite jealous that it was such a simple edit. However, there is another interesting thing about a . The way I imagine it, creating an instance of the enum would always allocate a fresh box. This means that the enum cannot be changed from one variant to another without allocating fresh storage. This in turn means that you could allocate that box to exactly the size you need for that particular variant. 11 So, for your , not only could it be recursive, but when you allocate an you only need to allocate space for a , whereas a would be a different size. (We could even start to do “tagged pointer” tricks so that e.g. is stored without any allocation at all.)
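Here is the compiler-suggested shape in today's Rust, with illustrative names of my own: the recursive field is boxed explicitly, so both construction and matching have to mention the Box. Under the proposed declaration-site keyword, the same type would read as a plain recursive enum and the box would be invisible:

```rust
// Today's explicit version: the tail is boxed by hand, so `Box::new`
// shows up at every construction site and the head lives inline.
enum List {
    Cons(u32, Box<List>),
    Nil,
}

fn sum(list: &List) -> u32 {
    match list {
        // `tail` is a `&Box<List>`; deref coercion lets us recurse.
        List::Cons(head, tail) => head + sum(tail),
        List::Nil => 0,
    }
}
```

Building List::Cons(1, Box::new(List::Cons(2, Box::new(List::Nil)))) shows the Box::new noise that a declaration-site keyword would remove.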
Another option would be to have particular enum variants that get boxed but not the enum as a whole: This would be useful in cases where you do want to be able to overwrite one enum value with another without necessarily reallocating, but you have enum variants of widely varying size, or some variants that are recursive. A boxed variant would basically be desugared to something like the following: clippy has a useful lint that aims to identify this case, but once the lint triggers, it's not able to offer an actionable suggestion. With the box keyword there'd be a trivial rewrite that requires zero code changes. If we're enabling the use of elsewhere, we ought to allow it in patterns: Under my proposal, would be the preferred form, since it would allow the compiler to do more optimization. And yes, that's unfortunate, given that there are 10 years of code using . Not really a big deal though. In most of the cases we accept today, it doesn't matter and/or LLVM already optimizes it. In the future I do think we should consider extensions to make (as well as and other similar constructors) be just as optimized as , but I don't think those have to block this proposal. Yes and no. On the one hand, I would like the ability to declare that a struct is always wrapped in an or . I find myself doing things like the following all too often: On the other hand, is very special. It's kind of unique in that it represents full ownership of the contents, which means a and are semantically equivalent – there is no place you can use that a won't also work – unless . This is not true for and or most other smart pointers. For myself, I think we should introduce now but plan to generalize this concept to other pointers later. For example I'd like to be able to do something like this… …where the type would implement some trait to permit allocating, deref'ing, and so forth: The original plan for was that it would be somehow type-overloaded. I've soured on this for two reasons.
First, type overloads make inference more painful and I think are generally not great for the user experience; I think they are also confusing for new users. Second, I think we missed the boat on naming. Maybe if we had called something like , the idea of “box” as a general name would have percolated into Rust users' consciousness, but we didn't, and it hasn't. I think the box keyword now ought to be very targeted to the type. In my soul of Rust blog post, I talked about the idea that one of the things that make Rust Rust is having allocation be relatively explicit. I'm of mixed minds about this, to be honest, but I do think there's value in having a property similar to – like, if allocation is happening, there'll be a sign somewhere you can find. What I like about most of these proposals is that they move the box keyword to the declaration – e.g., on the struct/enum/etc – rather than the use . I think this is the right place for it. The major exception, of course, is the “marquee proposal”, invoking async fns in dyn trait. That's not amazing. But then… see the next question for some early thoughts. The way that Rust today automatically detects whether traits should be dyn compatible, versus having it be declared, is, I think, not great. It creates confusion for users and also permits quiet semver violations, where a new defaulted method makes a trait no longer dyn compatible. It's also a source of a lot of soundness bugs over time. I want to move us towards a place where traits are not dyn compatible by default, meaning that does not implement . We would always allow types and we would allow individual items to be invoked so long as the item itself is dyn compatible. If you want to have implement , you should declare it, perhaps with a keyword: This declaration would add various default impls.
This would start with the impl: But also, if the methods have suitable signatures, include some of the impls you really ought to have to make a trait that is well-behaved with respect to dyn trait: In fact, if you add in the ability to declare a trait as , things get very interesting: I'm not 100% sure how this should work but what I imagine is that would be pointer-sized and implicitly contain a behind the scenes. It would probably automatically box the results from when invoked through , so something like this: I didn't include this in the main blog post but I think together these ideas would go a long way towards addressing the usability gaps that plague today. Side note, one interesting thing about Rust's async functions is that their size must be known at compile time, so we can't permit alloca-like stack allocation.  ↩︎ The box keyword is in fact reserved already, but it's never been used in stable Rust.  ↩︎ Hat tip to Michael Goulet (compiler-errors) for pointing out to me that we can model the virtual dispatch as inherent methods on types. Before I thought we'd have to make a more invasive addition to MIR, which I wasn't excited about since it suggested the change was more far-reaching.  ↩︎ In the future, I think we can expand this definition to include some limited functions that use in argument position, but that's for a future blog post.  ↩︎ I've noticed that many times when I favor a limited version of something to achieve some aesthetic principle, I wind up regretting it.  ↩︎ At least, it is not compatible under today's rules. Conceivably it could be made to work, but more on that later.  ↩︎ This part of the change is similar to what was proposed in RFC #2027 , though that RFC was quite light on details (the requirements for RFCs in terms of precision have gone up over the years and I expect we wouldn't accept that RFC today in its current form).  ↩︎ I actually want to change this last clause in a future edition.
Instead of having dyn compatibility be determined automatically, traits would declare themselves dyn compatible, which would also come with a host of other impls. But that's worth a separate post all on its own.  ↩︎ If you play with this on the playground , you'll see that the memcpy appears in the debug build but gets optimized away in this very simple case, but that can be hard for LLVM to do, since it requires reordering an allocation of the box to occur earlier and so forth. The box operator could be guaranteed to work.  ↩︎ I think it would be cool to also have some kind of unsafe intrinsic that permits calling the function with other storage strategies, e.g., allocating a known amount of stack space or what have you.  ↩︎ We would thus finally bring Rust enums to “feature parity” with OO classes! I wrote a blog post, “Classes strike back”, on this topic back in 2015 (!) as part of the whole “virtual structs” era of Rust design. Deep cut!  ↩︎

seated.ro 8 months ago

If you don't tinker, you don't have taste

Growing up, I never stuck to a single thing, be it guitar lessons, art school, martial arts – I tried them all. When it came to programming, though, I never really tinkered. I was always amazed by video games and wondered how they were made, but I never pursued that curiosity. My tinkering habits picked up very late, and now I cannot go by without picking up new things in one form or another. It's how I learn. I wish I had done it sooner. It's a major part of my learning process now, and without it I would never be the programmer I am today. Have you ever spent hours tweaking the mouse sensitivity in your favorite FPS game? Have you ever installed a Linux distro, spent days configuring window managers, not because you had to, but purely because it gave you satisfaction and made your workflow exactly yours? Ever pulled apart your mechanical keyboard, swapped keycaps, tested switches, and lubed stabilizers just for more thock? That is what I mean. I have come to understand that there are two kinds of people: those who do things only if it helps them achieve a goal, and those who do things just because. The ideal, of course, is to be a mix of both. when you tinker and throw away, that's practice, and practice should inherently be ephemeral, exploratory, and be frequent - @ludwigABAP There are plenty of people who still use the VSCode terminal as their default terminal, do not know what vim bindings are, and use GitHub Desktop rather than the CLI (at the very least). I'm not saying these are bad things necessarily, just that this should be the minimum, not the median. This does not mean I spend every waking hour fiddling with my neovim config. In fact, the last meaningful change to my config was 6 months ago. Finding that balance is where most people fail. Over the years I have done so many things that in hindsight have made me appreciate programming more but were completely “unnecessary” in the strict sense.
In the past week I have, for the first time, written a GLSL fragment shader, a Rust procedural macro, C++ templates, a Swift app, furthered my hatred for Windows development (this is not new), and started using the Helix editor more (mainly for good defaults + speed). I didn't have to do these things, but I did, for fun! And I know more about these things now. No time spent learning is time wasted. Acquiring good taste comes through using various things, discarding the ones you don't like and keeping the ones you do. If you never try various things, you will not acquire good taste. And what I mean by taste here is simply the honed ability to distinguish mediocrity from excellence. This will be highly subjective, and not everyone's taste will be the same, but that is the point: you should NOT have the same taste as someone else. Question the status quo, experiment, break things, do this several times, do this every day and keep doing it.

maxdeviant.com 11 months ago

2024 in Review

In a rare turn of events, I'm writing this year-in-review in advance of the last few hours of the year. Normally I end up spending New Year's Eve writing it as I rush to publish by midnight. As I look back on this year and try to remember what all transpired—a process that is hampered by a frustrating lack of notekeeping on my part—I'm left feeling like there wasn't all that much. Of course, I know this not to be true. Plenty of things happened , but not many that make for tidy bullet points in an itemized record of the year. In many ways this year has felt like stasis, with not much to show in terms of outwardly-visible signs of progress. Internally, I've been constantly embroiled in battle with my inner thoughts and demons. This unending fight has taxed me both emotionally and physically, and has often left me with little left to give to my family, friends, and my work. Working on myself has taken up the vast majority of my time and energy this year. During one particularly rough bout I wrote: I can't think of a time I've been more exhausted than I have been this past week. Sure, there have been other times where I've felt downtrodden by my emotions and heavy thoughts, but there is something so tangibly exhausting about having to face them head-on. I suppose like pretty much everything else in life, forward progress takes work. It's easier to stay in one place—even if that place is miserable—than it is to take action and move forward. In the face of all this, I've tried to enjoy the little things when I can find them: I turned 30 this year and am still trying to determine how I feel about it. One recurring theme so far has been reflecting on what I want to do today so that I don't look back and wish I had started it today. I've found maintaining this future-oriented outlook to be quite difficult when dealing with a multitude of things in the moment. 
It reminds me of when I first started learning to drive and I was always looking just a car or two ahead of me (on account of being deathly afraid of hitting them). It wasn't until I took the Pennsylvania Motorcycle Safety Program 3 and was taught to look ahead towards your destination that I realized how much of a difference it makes in the awareness of your surroundings. For motorcycles, in particular, looking right in front of you is actually more detrimental than in a car. For instance, looking directly ahead of you when going into a curve instead of looking through the curve can actually negatively impact your ability to maintain your balance on the bike. Point being, when all your attention is focused on the here and now, it can be easy to forget to look ahead and see what adjustments need to be made for a better outcome down the road. This year marked ten years of this website being online in some shape or form. I had originally intended to write a "10 Years of maxdeviant.com" post, or something of that nature, but the aforementioned struggles of this year got the best of me. I did, however, ship a rebuild of my site this year. This site is now built by a bespoke static site generator, leveraging Razorbill , and I am excited by the possibilities this affords for the future. This was my first year using Rust in a professional capacity, and I could not be happier about it. It's been everything that I had hoped for, and more. I've observed that, for the first time in my career, the language I'm using largely fades away. I find that I can focus on the problem at hand without being abruptly pulled out of my flow state by reaching for a language feature that doesn't exist. This is something that has routinely frustrated me when working with other languages, and it's a welcome change to have the set of language features that I want at my disposal. A note on compile times: the rumors are true. Rust can be quite slow to compile once a codebase reaches a certain size. 
The Zed codebase, for instance, can be a real bear at times. For smaller projects, like my personal ones, I find that compile times are a non-issue. I do hope that further inroads can be made towards improving this, but I find that sacrificing a bit of compilation speed for all the other benefits Rust provides to be a no-brainer. Lastly, in September I attended RustConf 2024 along with the rest of the Zed team. I had a great time and I enjoyed getting to talk to so many fellow Rustaceans. It's hard to believe that we only open-sourced Zed in January of this year! That moment feels like forever ago, and so much has happened since then. Extension support —a feature I helped build and am deeply proud of—didn't even exist until February. Zed has come a long way this year. It's been a labor of love and tenacity by the entire team, all of whom I feel incredibly lucky to work with day-to-day. The level of talent and commitment to the craft embodied by my teammates is a sight to behold. There's still a lot to be done to make it possible for everyone to feel at home in Zed, but I'm confident that we're up to the task. For a look back at everything that happened in the Zediverse this year, check out the Zed 2024 Recap . As always, here's an assortment of stats from this year. I had an unbroken streak on GitHub of 193 days, from April 1st to October 11th. It would be even longer if I hadn't skipped that one day, but alas. I'm still quite pleased with my contribution chart: It's been a good year for me in the Zed repository as well: Sadly, GitHub no longer shows lines added/deleted once the commit count exceeds 10,000. This year I wrote 6,804 words across my various writings (not including this post). I'd like to bring this number up next year. My music listening was, once again, down from the previous year. 
I think this can be partly attributed to the change in work environment: we have a very pairing-heavy culture at Zed, and I can't listen to music while I'm pairing with someone. Here are the albums that I listened to the most this year: If there is one thing I am leaving 2024 with, it's a renewed desire for finding balance in my life. The pendulum continues to swing too far in either direction, dragging me with it from one extreme to another. I came into the year with a goal of "devising a system for sustaining my ideal lifestyle", and I have yet to achieve it. To all of you who have been there for me this year: thank you. I know I've been distant for much of it, so I deeply appreciate your steadfast camaraderie in spite of that. I look forward to what the new year will bring. The extended editions, naturally. It was 3 times in Fellowship, 4 times in Two Towers, and 3 more times in Return of the King. As much as I would love to claim the title of "motorcycle rider" in the hopes of sounding cool, I never did end up finishing the course. 
Having my siblings over for a Lord of the Rings 1 marathon and keeping notes on how many times I tear up or cry 2
Exchanging Strands and Connections results with Heather, and commiserating when the NYT makes them extra difficult
Taking walks around my neighborhood where I've lived for 7 years and have yet to fully explore
Hiking in the Great Smoky Mountains in Tennessee with nary a bar of cell phone service
Sitting in my darkened sunroom during a thunderstorm-induced power outage sipping a Fat Tire while the lightning strikes periodically illuminate the room
Hanging out in a Montreal coffee shop talking about Rust with some other engineers
Spending a Sunday afternoon setting up a bird feeder next to my deck
Watching from the kitchen window as the birds flit around said bird feeder

BRAT - Charli xcx
cold is the void - and all i can say is
Still as the Night, Cold as the Wind - Vital Spirit
Dance Fever (Complete Edition) - Florence + The Machine
Autumn Eternal - Panopticon
Wound - Despite Exile
ERRA - ERRA
Minecraft - Volume Beta - C418
THE TORTURED POETS DEPARTMENT : THE ANTHOLOGY - Taylor Swift
Cutting the Throat of God - Ulcerate
End of the World - Searows
Nature Morte - Penitence Onirique
Fiction - Syncatto
Illuminate - Harvs
Space Diver - Boris Brejcha
Every Sound Has A Color In The Valley Of Night - Night Verses
Of Mice & Men - Of Mice & Men
ONI//KIJO - Memorist
Love Exchange Failure - White Ward
Triade III : Nyx - Aara

W. Jason Gilmore 11 months ago

Building Menubar Apps with AI

Some people collect baseball cards, others obsess over video games. I love menubar apps. No clue why, I just really like the convenience they offer, because they provide such an easy way to view and interact with information of all types. I've always wanted to build one, but never wanted to invest the time learning Swift, Objective-C, or ElectronJS. The emergence of AI coding tools, and particularly agents, has completely changed the game in terms of writing software, and so I've lately been wondering how feasible it is to not only create my first menubar app but actually create some sort of software factory that can churn out dozens if not hundreds of menubar-first applications. The first app is called TerraTime . It's a menubar app that shows the current time in a variety of timezones. TerraTime was built with Cursor in about 20 minutes. I spent another 75 minutes or so figuring out how to sign and notarize the app according to Apple requirements. The app is currently for sale on Gumroad , and will soon be available on the Mac App Store. To catalog what I hope will quickly become a collection of useful menubar apps, I've created a new site called Useful Menubar Apps . It was also built with AI, and is hosted on Netlify.
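The post doesn't show TerraTime's source, but for a sense of how little code a menubar app like this can take, here is a hypothetical SwiftUI sketch using the MenuBarExtra scene (macOS 13+). All names, the zone list, and the time format are illustrative assumptions, not TerraTime's actual implementation:

```swift
import SwiftUI
import AppKit

@main
struct TerraTimeApp: App {
    // Hypothetical: a fixed set of zones; the real app's list isn't shown in the post
    let zones = ["UTC", "America/New_York", "Asia/Tokyo"]

    var body: some Scene {
        MenuBarExtra("TerraTime", systemImage: "globe") {
            ForEach(zones, id: \.self) { id in
                Text("\(id): \(Self.time(in: id))")
            }
            Divider()
            Button("Quit") { NSApplication.shared.terminate(nil) }
        }
    }

    // Format the current time for one timezone identifier
    static func time(in identifier: String) -> String {
        let formatter = DateFormatter()
        formatter.timeZone = TimeZone(identifier: identifier)
        formatter.dateFormat = "HH:mm"
        return formatter.string(from: Date())
    }
}
```

Most of the work in shipping something like this is, as the post notes, in the signing and notarization steps rather than the code itself.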

Pat Shaughnessy 3 years ago

LLVM IR: The Esperanto of Computer Languages

I empathize with people who have to learn English as a foreign language. English grammar is inconsistent, arbitrary and hard to master. English spelling is even worse. I sometimes find myself apologizing for my language’s shortcomings. But learning any foreign language as an adult is very difficult. Esperanto, an “artificial language,” is different. Invented by Ludwik Zamenhof in 1887, Esperanto has a vocabulary and grammar that are logical and consistent, designed to be easier to learn. Zamenhof intended Esperanto to become the universal second language. Computers have to learn foreign languages too. Every time you compile and run a program, your compiler translates your code into a foreign language: the native machine language that runs on your target platform. Compilers should have been called translators. And compilers struggle with the same things we do: inconsistent grammar and vocabulary, and other peculiarities of the target platform. Recently, however, more and more compilers translate your code to an artificial machine language first. They produce a simpler, more consistent, more powerful machine language that doesn’t actually run on any machine. This artificial machine language, LLVM IR, makes writing compilers simpler, and it makes reading the code compilers produce simpler too. LLVM IR is becoming the universal second language for compilers. The Low Level Virtual Machine (LLVM) project had the novel idea of inventing a virtual machine that was easy for compiler engineers to use as a target platform. The LLVM team designed a special instruction set called intermediate representation (IR). New, modern languages such as Rust, Swift, Clang-based versions of C and many others first translate your code to LLVM IR. Then they use the LLVM framework to convert the IR into actual machine language for any target platform LLVM supports. LLVM is great for compilers.
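To give a feel for this artificial machine language, here is a minimal, hand-written sketch (my own illustration, not taken from this article's example) of LLVM IR for a function that adds two 32-bit integers:

```llvm
; define a function @add taking two 32-bit integers and returning one
define i32 @add(i32 %a, i32 %b) {
entry:
  %sum = add i32 %a, %b   ; add the two arguments, saving the result in register %sum
  ret i32 %sum            ; return the sum
}
```

Compare this with the equivalent x86 assembly, which would have to worry about which physical registers and calling conventions the target CPU actually uses.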
Compiler engineers don’t have to worry about the detailed instruction set of each platform, and LLVM optimizes your code for whatever platform you choose automatically. And LLVM is also great for people like me who are interested in what machine language instructions look like and how CPUs execute them. LLVM instructions are much easier to follow than real machine instructions. Let’s take a look at one! Here’s a line of LLVM IR I generated from a simple Crystal program: Wait a minute! This isn’t simple or easy to follow at all! What am I talking about here? At first glance, this does look confusing. But as we’ll see, most of the confusing syntax is related to Crystal, not LLVM. Studying this line of code will reveal more about Crystal than it will about LLVM. The rest of this article will unpack and explain what this line of code means. It looks complex, but is actually quite simple. The instruction above is a function call in LLVM IR. To produce this code, I wrote a small Crystal program and then translated it using the command crystal build --emit llvm-ir array_example.cr. The --emit llvm-ir option directed Crystal to generate a file called array_example.ll, which contains the line above along with thousands of other lines. We’ll get to the Crystal code in a minute. But for now, how do I get started understanding what the LLVM code means? The LLVM Language Reference Manual has documentation for call and all of the other LLVM IR instructions. My example instruction doesn’t use many of the options the manual lists. Removing the unused options, I can see the actual, basic syntax of call. In order from left to right, these values are:

- which register to save the result in
- the type of the return value
- a pointer to the function to call
- the arguments to pass to that function

What does all of this mean, exactly? Let’s find out! Starting on the left and moving right, let’s step through the instruction. The token to the left of the equals sign tells LLVM where to save the return value of the function call that follows.
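For reference, here is a reconstruction of the instruction this article unpacks, pieced together from details that appear below (register number 57, the Array(Int32) structure type, the mangled function name, and the two i32 arguments). The method name unsafe_build and the exact argument values are my best guesses based on Crystal's standard library, not a verbatim quote:

```llvm
; the structure type behind the return value: four i32 fields, then a pointer to the elements
%"Array(Int32)" = type { i32, i32, i32, i32, i32* }

; basic call shape: <result> = call <return type> <function pointer>(<arguments>)
%57 = call %"Array(Int32)"* @"*Array(Int32)@Array(T)::unsafe_build<Int32>:Array(Int32)"(i32 610, i32 2)
```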
This isn’t a normal variable; %57 is an LLVM “register.” Registers are physical circuits located on microprocessor chips used to save intermediate values. Saving a value in a CPU register is much faster than saving a value in memory, since the register is located on the same chip as the rest of the microprocessor. Saving a value in RAM memory, on the other hand, requires transmitting that value from one chip to another and is much slower, relatively speaking. Unfortunately, each CPU has a limited number of registers available, and so compilers have to decide which values are used frequently enough to warrant saving in nearby registers, and which other values can be moved out to more distant memory. Unlike the limited number of registers available on a real CPU, the imaginary LLVM microprocessor has an infinite number of them. Because of this, compilers that target LLVM can simply save values to a register whenever they would like. There’s no need to find an available register, or to move an existing value out of a register first before using it for something else. That’s busy work that normal machine language code can’t avoid. In this program, the Crystal compiler had already saved 56 other values in “registers” and so for this line of LLVM IR, Crystal simply used the next register, number 57. Moving left to right, LLVM instructions next indicate the type of the function call’s return value. The name of this type is generated by the Crystal compiler, not by LLVM. That is, this is a type from my Crystal program. It could have been anything, and indeed other compilers that target LLVM will generate completely different type names. The example Crystal program I used to generate this LLVM code was a small program that builds an array literal of integers. When I compiled this program, Crystal generated the instruction above, which returns a pointer to the new array. Since the array contains integers, Crystal uses its generic Array type, instantiated as Array(Int32). Machine languages that target real machines only support the hardware types that machine supports.
For example, Intel x86 assembly language allows you to save integers of different widths, 16, 32 or 64 bits for example, and an Intel x86 CPU has registers designed to hold values of each of these sizes. LLVM IR is more powerful. It supports “structure types,” similar to a C structure or an object in a language like Crystal or Swift. Here the quoted name after the percent sign is the name of a structure type. And the asterisk which follows, like in C, indicates the type of the return value of my function call is a pointer to this structure. Structure types allow LLVM IR programs to create pointers to structures or objects, and to access any of the values inside each object. That makes writing a compiler much easier. In my example, the call instruction returns a pointer to an object which contains 4 32-bit integer values, followed by a pointer to other 32-bit integer values. But what are all of these integer values? Above I said this function call was returning a new array - how can that be the case? LLVM itself has no idea, and no opinion on the matter. To understand what these values are, and what they have to do with the array in my program, we need to learn more about the Crystal compiler that generated this LLVM IR code. Reading the Crystal standard library, we can see how Crystal implements arrays. The comments there are very illustrative and complete - the Crystal team took the time to document their standard library and explain not only how to use each class, like Array, but how they are implemented internally. In this case, we can see the four integer values inside the LLVM structure type hold the size and capacity of the array, among other things. And the final value is a pointer to the actual contents of the array. The target of the call instruction appears next, after the return type. This is quite a mouthful! What sort of function is this? There are two steps to understanding this. First, the syntax.
The @ character marks a global identifier in this LLVM program, so my instruction is just calling a global function. In LLVM programs, all functions are global; there is no concept of a class, module or similar groupings of code. But what in the world does that crazy identifier mean? LLVM ignores this complex name. For LLVM this is just a name, no different from any other function name. But for Crystal, the name has much more significance. Crystal encoded a lot of information into this one name. Crystal can do this because the LLVM code isn’t intended for anyone to read directly. Crystal has created a “mangled name,” meaning the original version of the function to call is there but it’s been mangled or rewritten in a confusing manner. Crystal rewrites function names to ensure they are unique. In Crystal, like in many other statically typed languages, functions with different argument types or return value types are actually different functions. So if I write two Crystal functions with the same name but different parameter types, I have two separate, different functions. The type of the parameter distinguishes one from the other. Crystal generates unique function names by encoding the arguments, return value and type of the receiver into the function name string, making it quite complex. Let’s break it down:

- the first part is the type of the receiver. That means the function is actually a method on the generic Array(T) class. And in this case, the receiver is an array holding 32-bit integers, the Array(Int32) class. Crystal includes both names in the mangled function name.
- next comes the function Crystal is calling.
- then the function’s parameter types. In this case, Crystal is passing in a single integer, so we just see one type.
- finally, the return value type, a new array containing integers.

As I discussed in my last post, the Crystal compiler internally rewrites my array literal expression into code that creates and initializes a new array object. In this expanded code, Crystal calls a class method, passing in the required capacity of the new array.
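As a sketch of that rewriting (the literal's actual element values aren't preserved here, and Array.unsafe_build is my assumption for the class method, based on Crystal's standard library):

```crystal
a = [12345, 67890]

# ...is rewritten by the compiler into roughly:
a = Array(Int32).unsafe_build(2) # 2 is the required capacity
buf = a.to_unsafe                # pointer to the array's contents
buf[0] = 12345
buf[1] = 67890
```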
And to distinguish this use of the function from other functions that might exist in my program, the compiler generated the mangled name we see above. Finally, after the function name, the LLVM IR instruction shows the arguments for the function call. LLVM IR uses parentheses, like most languages, to enclose the arguments to a function call. And a type precedes each value: both arguments here are 32-bit integers, marked i32. But wait a minute! We saw just above that the expanded Crystal code for generating the array literal passes a single value, the capacity, into the call. And looking at the mangled function name above, we also see there is a single parameter to the function call. But reading the LLVM IR code we can see a second value is also passed in: 610. What in the world does 610 mean? I don’t have 610 elements in my new array, and 610 is not one of the array elements. So what is going on here? Crystal is an object oriented language, meaning that each function is optionally associated with a class. In OOP parlance, we say that we are “sending a message” to a “receiver.” In this case, the function being called is the message, and the array class is the receiver. In fact, this function is really a class method. We are calling it on the class, not on an instance of one array. Regardless, LLVM IR doesn’t support classes or instance methods or class methods. In LLVM IR, we only have simple, global functions. And indeed, the LLVM virtual machine doesn’t care what these arguments are or what they mean. LLVM doesn’t encode the meaning or purpose of each argument; it just does what the Crystal compiler tells it to do. But Crystal, on the other hand, has to implement object oriented behavior somehow. Specifically, the function needs to behave differently depending on which class it was called for, depending on what the receiver is. For example, an array literal of two integers has to return an array of two integers, while an array literal of two strings has to return an array of two strings. How does this work in the LLVM IR code?
To implement object oriented behavior, Crystal passes the receiver as a hidden, special argument to the function call. This receiver argument is a reference or pointer to the receiver’s object, and is normally known as self. Here, 610 is a reference or tag corresponding to the receiver’s class, and the other argument, the capacity, is the actual argument to the method. Reading the LLVM IR code, we’ve learned that Crystal secretly passes a hidden argument to every method call on an object. Then inside each method, the code has access to self, the object instance that code is running for. Some languages, like Rust, require us to declare self explicitly in each method definition; in Crystal this behavior is automatic and hidden. LLVM IR is a simple language designed for compiler engineers. I think of it like a blank slate for them to write on. Most LLVM instructions are quite simple and easy to understand; as we saw above, understanding the basic syntax of the call instruction wasn’t hard at all. The hard part was understanding how the Crystal compiler, which targets LLVM IR, generates code. The LLVM syntax itself was easy to follow; it was the Crystal language’s implementation that was harder to understand. And this is the real reason to learn about LLVM IR syntax. If you take the time to learn how LLVM instructions work, then you can start to read the code your favorite language’s compiler generates. And once you can do that, you can learn more about how your favorite compiler works, and what your programs actually do when you run them.
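To see the receiver mechanism concretely, consider how two different array literals might each compile down to a call to a differently mangled global function. The names and the type tag 611 below are hypothetical, modeled on the mangling scheme described above:

```llvm
; an Int32 array literal calls one mangled function...
%1 = call %"Array(Int32)"* @"*Array(Int32)@Array(T)::unsafe_build<Int32>:Array(Int32)"(i32 610, i32 2)

; ...while a String array literal calls another (611 is an invented class tag)
%2 = call %"Array(String)"* @"*Array(String)@Array(T)::unsafe_build<String>:Array(String)"(i32 611, i32 2)
```

Each call still passes the hidden receiver tag first and the capacity second; only the mangled name and the tag differ.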
