Posts in Swift (20 found)
Simon Willison 4 days ago

I vibe coded my dream macOS presentation app

I gave a talk this weekend at Social Science FOO Camp in Mountain View. The event was a classic unconference format where anyone could present a talk without needing to propose it in advance. I grabbed a slot for a talk I titled "The State of LLMs, February 2026 edition", subtitle "It's all changed since November!". I vibe coded a custom macOS app for the presentation the night before.

I've written about the last twelve months of development in LLMs in December 2023, December 2024 and December 2025. I also presented The last six months in LLMs, illustrated by pelicans on bicycles at the AI Engineer World's Fair in June 2025. This was my first time dropping the time covered to just three months, which neatly illustrates how much the space keeps accelerating and felt appropriate given the November 2025 inflection point. (I further illustrated this acceleration by wearing a Gemini 3 sweater to the talk, which I was given a couple of weeks ago and is already out-of-date thanks to Gemini 3.1.)

I always like to have at least one gimmick in any talk I give, based on the STAR moment principle I learned at Stanford: include Something They'll Always Remember to try and help your talk stand out. For this talk I had two gimmicks. I built the first part of the talk around coding agent assisted data analysis of the Kākāpō breeding season (which meant I got to show off my mug), then did a quick tour of some new pelicans riding bicycles before ending with the reveal that the entire presentation had been presented using a new macOS app I had vibe coded in ~45 minutes the night before the talk.

The app is called Present - literally the first name I thought of. It's built using Swift and SwiftUI and weighs in at 355KB, or 76KB compressed. Swift apps are tiny! It may have been quick to build, but the combined set of features is something I've wanted for years. I usually use Keynote for presentations, but sometimes I like to mix things up by presenting using a sequence of web pages.
I do this by loading up a browser window with a tab for each page, then clicking through those tabs in turn while I talk. This works great, but comes with a very scary disadvantage: if the browser crashes I've just lost my entire deck! I always have the URLs in a notes file, so I can click back to that and launch them all manually if I need to, but it's not something I'd like to deal with in the middle of a talk.

This was my starting prompt:

Build a SwiftUI app for giving presentations where every slide is a URL. The app starts as a window with a webview on the right and a UI on the left for adding, removing and reordering the sequence of URLs. Then you click Play in a menu and the app goes full screen and the left and right keys switch between URLs

That produced a plan. You can see the transcript that implemented that plan here.

In Present a talk is an ordered sequence of URLs, with a sidebar UI for adding, removing and reordering those URLs. That's the entirety of the editing experience. When you select the "Play" option in the menu (or hit Cmd+Shift+P) the app switches to full screen mode. Left and right arrow keys navigate back and forth, and you can bump the font size up and down or scroll the page if you need to. Hit Escape when you're done.

Crucially, Present saves your URLs automatically any time you make a change. If the app crashes you can start it back up again and restore your presentation state. You can also save presentations as a file (literally a newline-delimited sequence of URLs) and load them back up again later.

Getting the initial app working took so little time that I decided to get more ambitious. It's neat having a remote control for a presentation...
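The core model described above - an ordered list of URLs, arrow-key navigation, and persistence as a newline-delimited text file - is simple enough to sketch. This is a hypothetical reconstruction, not the actual Present source; the type and method names are made up for illustration:

```swift
import Foundation

// A minimal sketch of the document model described in the post: a talk is
// an ordered list of URLs, persisted as a newline-delimited text file.
// "PresentationDeck" and its members are hypothetical names, not taken
// from the real Present codebase.
struct PresentationDeck {
    var urls: [URL] = []
    var currentIndex: Int = 0

    // Serialize as one URL per line, matching the file format in the post.
    func serialized() -> String {
        urls.map { $0.absoluteString }.joined(separator: "\n")
    }

    // Parse a newline-delimited file, skipping blank or invalid lines.
    static func parse(_ text: String) -> PresentationDeck {
        let urls = text.split(whereSeparator: \.isNewline)
            .compactMap { URL(string: String($0).trimmingCharacters(in: .whitespaces)) }
        return PresentationDeck(urls: urls)
    }

    // Left/right arrow navigation, clamped to the ends of the deck.
    mutating func next() { currentIndex = min(currentIndex + 1, max(urls.count - 1, 0)) }
    mutating func previous() { currentIndex = max(currentIndex - 1, 0) }
}
```

Because every change can be re-serialized and written to disk immediately, crash recovery falls out for free: relaunch, re-parse the file, and the deck is back.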
So I prompted:

Add a web server which listens on 0.0.0.0:9123 - the web server serves a single mobile-friendly page with prominent left and right buttons - clicking those buttons switches the slide left and right - there is also a button to start presentation mode or stop depending on the mode it is in.

I have Tailscale on my laptop and my phone, which means I don't have to worry about Wi-Fi networks blocking access between the two devices. My phone can connect directly from anywhere in the world and control the presentation running on my laptop.

It took a few more iterative prompts to get to the final interface, which looked like this: there's a slide indicator at the top, prev and next buttons, a nice big "Start" button and buttons for adjusting the font size. The most complex feature is that thin bar next to the start button. That's a touch-enabled scroll bar - you can slide your finger up and down on it to scroll the currently visible web page up and down on the screen. It's very clunky, but it works just well enough to solve the problem of a page loading with most interesting content below the fold.

I'd already pushed the code to GitHub (with a big "This app was vibe coded [...] I make no promises other than it worked on my machine!" disclaimer) when I realized I should probably take a look at the code. I used this as an opportunity to document a recent pattern I've been using: asking the model to present a linear walkthrough of the entire codebase. Here's the resulting Linear walkthroughs pattern in my ongoing Agentic Engineering Patterns guide, including the prompt I used. The resulting walkthrough document is genuinely useful.

It turns out Claude Code decided to implement the web server for the remote control feature using socket programming without a library, with a minimal hand-rolled HTTP parser handling the routing. Using GET requests for state changes like that opens up some fun CSRF vulnerabilities. For this particular application I don't really care.
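The parser itself isn't reproduced here, but the general shape of routing GET requests by hand - read the raw request line, split out the method and path, and switch on the path - can be sketched like this. This is a guess at the technique, not the code Claude Code actually generated, and the paths (/next, /prev, /toggle) are assumptions:

```swift
import Foundation

// Hypothetical sketch of a minimal hand-rolled HTTP request-line parser of
// the kind described above. The route paths here are invented for
// illustration; the real Present server may use different ones.
enum RemoteCommand: Equatable {
    case nextSlide, previousSlide, togglePresentation, serveControlPage, notFound
}

func route(rawRequest: String) -> RemoteCommand {
    // An HTTP request line looks like: "GET /next HTTP/1.1"
    guard let requestLine = rawRequest.split(whereSeparator: \.isNewline).first else {
        return .notFound
    }
    let parts = requestLine.split(separator: " ")
    guard parts.count >= 2, parts[0] == "GET" else { return .notFound }

    // Triggering state changes off GET paths like this is exactly what
    // makes the CSRF concern mentioned in the post possible.
    switch parts[1] {
    case "/": return .serveControlPage
    case "/next": return .nextSlide
    case "/prev": return .previousSlide
    case "/toggle": return .togglePresentation
    default: return .notFound
    }
}
```

A few dozen lines like this, plus a listening socket, is genuinely all a single-purpose remote control needs - which is presumably why the agent skipped pulling in a server framework.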
Vibe coding stories like this are ten a penny these days. I think this one is worth sharing for a few reasons:

Swift, a language I don't know, was absolutely the right choice here. I wanted a full screen app that embedded web content and could be controlled over the network. Swift had everything I needed. When I finally did look at the code it was simple, straightforward and did exactly what I needed and not an inch more.

This solved a real problem for me. I've always wanted a good way to serve a presentation as a sequence of pages, and now I have exactly that.

This doesn't mean native Mac developers are obsolete. I still used a whole bunch of my own accumulated technical knowledge (and the fact that I'd already installed Xcode and the like) to get this result, and someone who knew what they were doing could have built a far better solution in the same amount of time. It's a neat illustration of how those of us with software engineering experience can expand our horizons in fun and interesting directions. I'm no longer afraid of Swift! Next time I need a small, personal macOS app I know that it's achievable with our existing set of tools.

Stratechery 2 weeks ago

An Interview with Ben Thompson by John Collison on the Cheeky Pint Podcast

Good morning,

Today’s Stratechery Interview is with me! On January 27 I sat down with Stripe President John Collison in the Cheeky Pint pub in Stripe’s offices for an episode of the Cheeky Pint podcast. There is a YouTube video of the interview that you can watch here, or you can read the transcript below. In this interview we discuss life in Taiwan, ads in AI, and how Mark Zuckerberg’s obsession with being a platform has harmed Meta. Then we talk about the TikTok deal, the impact of AI agents on ads and e-commerce, and, a week before Wall Street’s meltdown, discuss whether or not software is dead. We also discuss the history of Stratechery, and why I’m skeptical about bundles in the future, as well as my concern about TSMC’s conservative approach to CapEx. As a reminder, all Stratechery content, including interviews, is available as a podcast; click the link at the top of this email to add Stratechery to your podcast player. On to the Interview:

This interview is lightly edited for clarity.

John Collison: Ben Thompson is the founder and author of Stratechery, the newsletter that everyone in tech reads to make sense of what’s happening. He’s also early to the premium newsletter model that’s become very popular in media nowadays. For many years, he ran Stratechery as a solo founder in Taiwan. Cheers. Good to see you.

Cheers.

JC: It feels like people in San Francisco have not properly discovered Taiwan as a tourist destination. Do you agree with that characterization? And what’s your recommendation?

People always ask me about Asia, and the way I always characterize Taiwan is, there’s lots of great places to visit in Asia, and I would put Japan top of the list. But I like to think I went to Japan before it was cool.

JC: Yeah. Nothing against Japan.

Well, the whole thing with Japan is going to Japan pre-smartphone was a completely different experience than going there post-smartphone.
Like you think, “Oh, the subway system’s amazing, the trains…” — try navigating that with no smartphone and nothing’s in English. Japan used to be very low on English; it’s still lower than places like Taiwan.

JC: It’s surprisingly low.

Yeah. And the way to visit Japan is you just walk, don’t go to set destinations. Whereas the way I would talk about this is, there are places to visit, but the best place to live is undoubtedly Taiwan. The one word everyone says for Taiwan sounds not that impressive, but the word is “convenient”, it is the most convenient place to live.

JC: 7-Eleven has really good food.

It’s actually downstream from the Japanese, because Taiwan was a Japanese colony for the first 50 years of the 20th century, and it’s laid out a lot like — why is it great to walk around in Tokyo? Because Tokyo is all mixed use, and that’s how Taipei is as well. You have these big blocks where the exterior will be commercial and the interior of these big blocks is all residential and the first floor is all like small shops or restaurants, things like that. So wherever you live, you basically have access to everything all around you. But I think the downside as a tourist is it’s kind of an ugly city. Taiwan’s kind of notorious for just these dumpy, dilapidated buildings, then you go inside and they’re palatial on the inside. Taipei is very, very rich. It’s in the top 10, I think, for the number of billionaires in the world or something like that, all downstream from building out China. It’s a very beautiful country. From Taipei, 30 minutes to the ocean, 30 minutes to the mountains, the East Coast is amazing.

JC: But if people listening to this are visiting, I feel like one thing they should do is — it’s a mistake to try and use Yelp or anything like that too much, because you should maybe just try and go to a night market and follow your belly and see what looks good, there’s a lot of excellent street food, and so that’d be one thing, don’t try to over-plan.
Well, here’s the problem though, where tech has made it worse, I would argue. When you’re living there, Taiwan is arguably the greatest Uber Eats market ever because there’s just amazing options. It’s all delivered by scooter, so it’s always like 10 minutes to get dinner. I think you were going to ask me about difficulties moving to the States, not having access to that is definitely one of them. But the problem is that it’s such a huge market now that I think there are fewer and fewer restaurants, in that a lot of these places actually just straight up close their storefronts are just ghost kitchens basically and all they do is just make Uber Eats orders all day. JC: I see. Famously, the restaurant economy and places like Taipei would have been really good, but it’s gotten worse because people are eating in more with Uber Eats and stuff like that. I think so. As far as walking around and just like stuff on — there’s still plenty of places, it’s still great. But there’s a number of restaurants that I used to always take people to, like holes in the wall that I knew were super good beef noodles or something, and I remember a couple times like, “Oh, you can’t actually go eat there anymore, but they’re still an Uber Eats”. JC: That’s a bummer, it’s like a separate problem. The San Francisco problem at restaurants is that nobody drinks anymore and so the restaurants lost a major revenue source. It’s so bad you had to get a pub in your own office! JC: Exactly, we’re trying just firsthand to fix it. Be the change you want to see in the world. JC: Should people visit places beyond Taipei? Oh, for sure. Yeah, Taipei is great, it’s great to walk around. Taipei 101, which is obviously very much in the news these days with the scaling . JC: But you can go up the elevator on the inside. You go up there because there’s a massive ball at the top. JC: The mass damper, yeah. Yes, which is amazing. If you’re into engineering, that’s actually a very underrated thing. 
National Palace Museum is amazing, but the East Coast in particular is incredible. There is a train, but driving, you’re driving on the coast.

JC: It’s like a lost coast of Hawaii, kind of.

Exactly. There’s an incredible gorge called Taroko Gorge that was really messed up by an earthquake a couple years ago, so I don’t know if it’s even reopened yet, but I used to take people to that all the time because it’s world class.

JC: Yeah. It is impressive that they said, “We’re going to build the tallest skyscraper in the world in a very frequent earthquake region”.

Yeah, it’s a beautiful skyscraper. It worked out well for Netflix.

JC: So you’ve talked a lot over the years about Aggregation Theory and really popularized this idea where pre-Internet, often power would live with the supply, whereas on the Internet, because of the different marginal cost dynamics and things like that, power will rest with the demand aggregators. And so Booking.com is a much bigger company than any hotel chain, something like that.

And Booking.com is a particularly interesting one because they aggregate all the hotels, but they are also aggregated by Google, so they’re like Google’s biggest customer. Even as they’re also on the other side.

JC: I feel like Booking.com is a very underappreciated success story in tech. They’re a European company, kind of much quieter in a lot of ways. But if you invested a dollar in Booking.com and a dollar in Google 20 years ago, you made much more money as a Booking.com shareholder, and I think people don’t appreciate that fact, it’s a very well run business. But where I was going with this is, how does Aggregation Theory apply to AI? How does one need to update the framework?

TBD, to a certain extent. This is part of the huge, probably one of the most angsty debates that I have internally generally, which is OpenAI’s welfare going forward.
I put forward a few years ago that actually OpenAI could stop making models and be one of the most valuable companies in the world just because of ChatGPT, that’s their most valuable asset, and part of the problem that they have is that was definitely the case in 2023 and 2024, but you have to actually build the business model around that. And I’ve, I think fairly famously, at least based on all the tweets that I got when they announced that they were going to launch ads, I’ve been losing my mind about this fact for a long time. And I think this is interesting, I’d actually be curious to hear your view of this, which is, there’s this mindset in the Valley of this skepticism of advertising and people have sort of internalized that it’s bad and evil. Do you sense that? Do you feel that?

JC: I agree there’s kind of a knee-jerk skepticism of ads. Look, I’m a YouTube Premium subscriber. When I see someone watch a video without Premium—

It’s horrifying.

JC: I gasp, it’s like, “What are you doing with your life?”, and so I get the knee-jerk reaction, and that Stripe at some level is kind of the anti-advertising company and we’re the opposite form of monetization. But I don’t know, I have no particular issue, I think it’s a very efficient form of monetization. It makes a lot of sense for certain products, and so I think it’s just different strokes.

I think you’re with Stripe, you’re on the skepticism side. I think ads are amazing, and I’m talking about my book a little bit. Stratechery has gotten tremendous traction just by not hating ads, even though I’m not an ad model myself.

JC: Exactly, but you’re a paid model.

Well, it’s funny. I actually think I got a lot of traction over the years by talking about ads when no one else was, despite the fact it’s the most important business model in tech. And I look back and all my early writing about ads was terrible, I had no idea what I — but just by virtue of talking about it, it was helpful.
The reality of advertising is, number one, people, if you are making a product in the world, like Stratechery is very fortunate — it is definitely a new model or a new Internet-native model in that I have subscribers in like 200 countries. Literally, the whole world is my market, and Stripe obviously helps make that possible.

JC: A few in the Vatican, they’re following along with that.

I could go check. I mean, I would bet the odds very high that I do, at least one subscriber in the Vatican. But where I benefited from is I was a massive beneficiary of social media, particularly Twitter, and that back in their early days, good days of Twitter, if you want to say whatever it was, there was currency in sharing smart links. And so I was a regular provider of links that people felt made them smart, so they would share them and talk about them and be sort of back-and-forth, so that solved my customer acquisition issue. The reality though is most, because the other thing about content — actually this is the point I’m interested to come back to, it’s something to talk about — it’s a commonality between us that we can both read the same thing, we can both have opinions on it, react to it. The sort of product that I buy on Instagram is not — I’m not going to post about it or talk about it, but it can be tremendously beneficial. They in theory have these small businesses or whatever they might be, or Chinese suppliers or whatever, they have the same opportunity, which is to sell to everyone in the world, they just need a way to tell people about it. And as someone who buys way too many products off of Instagram — and by the way, one of the great things about moving back to the US is that the Instagram ads are unbelievable, I thought we were pretty good in Taiwan, they’re so much better in the US. They’re like, “Oh, this is the best part of living in the richest country in the world”, it’s amazing — I’m like, “What’s this native content?
Give me more ads!” Which by the way, Facebook is very happy to do over the last six months, lots of ads these days, but I get stuff that I never would have thought of, I didn’t even know about, and it’s great, it’s amazing. And it’s a real benefit to me as a consumer who for sure subscribes to YouTube Premium and looks down on people who don’t, but finding things that I didn’t know about that make my life better. So as a user, I’m benefiting. As a rich user, I’m benefiting. And the world is nine billion people, most of whom do not have the disposable income that I have, much less you have, much less anyone else in San Francisco has, and they get the same experience I do. And something for AI, when you think about it, particularly when it’s so costly to provide, and the free product is so much worse than the paid product, of course it’s a win for them to be able to get access. So how do you have a mission, a belief that AI makes the world better, and not embrace ads?

JC: I agree with ads being an efficient form of monetization. What do you think is the right way for consumer AI apps to do ads? Like ChatGPT just announced that they’re doing ads.

They’re terrible.

JC: Well, no, they’re doing them as a very separate experience to the answer.

No, this is why it’s so bad. This is why I’m so frustrated with them. So what they’re doing is the bare minimum easiest solution.

JC: It’s like banner ads basically.

It’s banner ads, but it’s based on the context of the conversation. And the problem is that they released their ad principles, right? Which is, our ads do not influence your answer. If you’re using the easiest possible way to target ads, which is based on the context of the conversation, we’re going to show you a roughly relevant ad. Number one, your market’s way smaller because you have to hope someone starts a conversation that matches the inventory you have.
Number two, you’re getting into a “my T-shirt answers questions that my T-shirt is raising” sort of situation, where if the ad is clearly connected to the answer, you’re going to raise suspicion in the user’s mind about what the connection is, so I would prefer if the ads had nothing to do with the answer. The way you get there is you build a Meta-style understanding of the user and show them stuff that’s relevant to them, like in Instagram. The best Instagram ads don’t have anything to do with the stuff I’m surfing. It’s from Meta’s understanding of me broadly.

JC: So are you saying that AI ads should be more like Facebook ads than Google ads? And right now the focus is on doing targeted ads that are related to the prompt, whereas instead it should all be profiling the user and who this person is and what their interests are?

Yes, I think that would be better. It would present less conflict of interest, less uncertainty amongst the users, and it’s a model that I think — I’m not the world’s biggest fan of search ads, precisely for the reason why they work so well.

JC: Because of the confusion between organic and—

Because they’re cannibalization. Why is it that I have to buy my own name in search ads? Because someone else will go in there, and you’re harvesting a click on the ad that would have been there sort of organically, which is fine, it works. The search is providing a lot of value, but the challenge obviously is they only have one space for inventory, which is in ChatGPT.

JC: Well, sorry, isn’t the defense of Google ads that everyone complains about the branded search and yeah, you’re paying for cannibalization, but Google pays so much attention to search quality that the sponsored listings themselves have a ranking, like a relevance ranking, and so it’s really just like the Yellow Pages where you need to pay to be listed in the Yellow Pages?

Oh, it’s fine.
Again, I’m the ad lover here, I just think that Meta ads are more broadly valuable because they’re showing me stuff I didn’t know that I wanted.

JC: But if the AI apps are to generate a profile of you, does that profile include the content of all your conversations?

Well, so this is the thing. So [DeepMind CEO] Demis [Hassabis] is out there saying, “Wow, I can’t believe they’re adding ads, we’re not going to do that”. Which is hilarious because the entire Gemini DeepMind apparatus, what is it funded by?

JC: Sure, yeah. It’s a Google ad machine.

It’s funded by ads, and that actually is probably the ideal model. So it’s actually very funny, I was in New York City last year, I was meeting with someone in a shared office, and across the hallway was a hedge fund or someone and they came over and he’s like, “Oh, longtime reader, you’re responsible for our worst decision ever”, and I’m like, “What?”, he’s like, “Putting money in Twitter”, I’m like, “I’ve never said to put money in Twitter!”. That’s always been a terrible company, I’ve stopped covering them because it was such a bad business. And I’m like, “Oh no, I remember what it was,” it was when they bought MoPub. And my theory, my problem with Twitter advertising has always been that it was very textual, and I think text doesn’t work — all this applies to the chat clients. Text isn’t the best interface for ads, obviously visuals are generally better, and there’s also a posture. If I’m on Twitter, I’m like, “I’m ready to do battle, I’m locked in, or I’m searching for information”; if I’m on Instagram, the whole point of seeing an ad is I don’t really care what I’m seeing right now, I’m wasting time. You’re actually in a much better posture, I think, to absorb, just like TV, you’re sort of absorbing, can absorb the ad, and Twitter is bad for that.
But Twitter, because it’s an interest-based network, at least in theory, should be able to understand a lot about you, above and beyond theoretically having pixels and SDKs sort of all over the web, and so my theory with the MoPub acquisition — I thought it was a great acquisition because, “Oh, they can harness signal from Twitter and manifest it in other apps through this sort of MoPub network”. Now, Twitter was incompetent, so they did nothing with MoPub, gave it to AppLovin who’s now ridden MoPub to the top of the world, but that was my thesis and I think that could apply to AI as well. I think the ideal outcome for Google is they never put ads in Gemini, but they understand so much about you because of what you do in Gemini that they can then manifest that through ads on YouTube, through ads on Google, through ads on their other properties, and the challenge for OpenAI is they only have one place to put inventory, which is in ChatGPT.

JC: So you’re saying that Google could use Gemini to just improve the targeting of the ads across the Google properties, and then maybe if you want to have ads in Gemini—

I don’t think you need to ever put ads in Gemini.

JC: But just if you did, you would also have the profile that Google has of you from across the web and you can choose faster.

Yeah, and you don’t need to have ads that are making the user feel weird because, “Why are you showing me ads about what I’m asking about?”.

JC: Okay. But in the scenario you’re just describing for Google, wouldn’t that have the same — just like the Meta’s-listening-to-your-microphone conspiracy theories, where the targeting is too good, people get concerned. Wouldn’t you get similar issues?

I think that’s a made-up concern.

JC: No, it is, but sure, but people have it. And wouldn’t you have a similar issue where if you’re using the Gemini data to make better ads, wouldn’t ultimately the targeting be too good and people find it weird?
I think that that’s a bridge that every tech company would be happy to cross if they came to it.

JC: I see, it’s a thing that people say when you have very good targeting.

I think there’s a real stated-versus-revealed preference about a lot of this stuff. The reality is, you could pay for Facebook or you could show ads — people would rather see the ads, I think most people don’t care. A lot of tech, and this sort of ties into the skepticism of ads, it’s sort of an elite town, there’s elite regulators, everyone’s thinking about these very theoretical things.

JC: Isn’t that a bit of the challenge of banner blindness, where Instagram advertising works so well because it’s a picture feed and it’s showing you pictures and then some of the pictures are like commercials? Whereas with an AI app, you’re looking for an answer and you don’t want to look at the banner.

It’s a huge concern, and this is one of the great ironies of Meta/Facebook: the extent to which, of course, Mark and everyone hates Apple for lots of, I think, very justifiable reasons, but Apple saved Facebook from itself. Back in the day, remember Facebook Platform and there’s like Facebook Payments and all this sort of thing, and Mark has always wanted to build a platform, and if you’re just an app on a phone, you can’t build a platform. The problem is that I think being an advertising-based model is generally incompatible with being a platform; the whole point of a platform is you’re letting something else shine, something else come to the surface, you’re the support structure for something to take over. So an operating system is not about the — ideally, it’s the application on top of it that you’re using. When Facebook was forced to not be a platform, but just be an app, suddenly they could be fully leaning into being an advertising thing.
Think about a Facebook ad, even back in the day when it was a feed ad or a story ad: literally your entire device is all an ad, and somehow it’s not a banner, that’s a little thing on the edge. They literally have achieved permission from users to take over your entire device to show you a full screen ad every five seconds, it’s amazing, and they were forced into it by Apple.

JC: Okay, this reminds me of, and I want to come back to the AI dynamics, but this reminds me of a view I’ve had that I’m curious for your thoughts on, which is often when tech companies become really big, they become really big just because the core idea works better than even the founders could have realized. And so Meta’s a really big company because they have a feed and the feed got really big and they were very smart along the way where they bought Instagram and they’re like incredibly targeted.

It’s the feed.

JC: But it turns out people spent a lot of time, and many people, the P x Q of that with the feed, and they monetized it very well, and that’s what got really big. And same with Nvidia, it just turns out that the GPU market got really big and they sell a lot of GPUs. And so maybe founders, because they’re often like high powered individuals who want to have lots of new ideas, they’re often thinking about the next thing or like what the second act or the third act is and everyone wants to invent an AWS. But I’m curious what you would say to the idea that just generally it’s making the core thing really big, and there’s more orders of magnitude at the top than you thought.

Yes. I think that’s always the case and I think that sometimes people end up making something that they didn’t want to make and they continually push back. I think Meta is the perfect example. My impression is Mark’s not very interested in ads; he’s had very good people along the way that have helped him build these ad products.
I think Meta has suffered from that, because he has not been front and center fighting for, “Actually ads are good, they are a societal good, they are the driver of all the consumer surplus that tech throws off”. The President uses the same search engine as the guy on the street, or the same AI, or sort of whatever it might be — that’s because of ads.

JC: Probably not. The President probably uses a Palantir search engine or something.

Yeah, it’s probably worse. Google has slipped a lot to be fair, there’s so much junk online. Has Donald Trump ever searched? I don’t know, that’s a good question. But he’s not made that case, and I think Meta has suffered because of the failure to make that case. Then you get things like, “We’re going to do the Metaverse, we’re going to do X, Y, Z”, it’s always coming back to be a platform, be a platform, and Meta is an entertainment company. I wrote this years ago — it was simultaneously a good call and a bad call. Do you remember that Paul Krugman quote, that the Internet’s not going to be very big, or have more impact than the fax machine, because people don’t have anything interesting to say? I actually defend that quote because it’s actually true, most people don’t have that many interesting things to say, and I brought up that quote around 2015 by saying, “This is a fundamental limiter on Meta’s long-term potential”: as long as they think of themselves as a social media company, they’re going to run into a problem with their feeds becoming insufficiently interesting over time.

JC: The move from kind of peer content to—

Well, so that was, if I might say so myself, a very brilliant insight. The bad insight was my prescription, which was they needed to do more with professional content makers, like more funding of the BuzzFeeds of the world and share revenue, all that.

JC: It was actually user-generated content.

The actual answer is what TikTok did, which is that TikTok’s not a social network at all.
It is a harvesting machine, and YouTube is the same sort of idea — it’s like personalized TV. What actually matters, and this is a key thing, people get hung up on relative numbers, what matters is absolute numbers. So it is better to have 0.1% of your content be good if your content is in the billions or trillions, as opposed to, “Oh, 10% of our content is good”, but you only have a hundred pieces of content — that’s actually worse, even if you have a better hit rate. And so spurring lots of creation, writing the algorithms to capture the good stuff and put it up there, that actually solves the Paul Krugman fax machine problem, and Facebook was blindsided by that. They were so stuck on their identity of being a social network that they let TikTok take this huge chunk and it was their blind spot. JC: Speaking of TikTok, I feel like you don’t write about ByteDance that much. And I’m curious just what your thoughts are on ByteDance from here and the TikTok sale and everything. I mean, what a mess. I had to make a decision a long time ago. I wrote about Chinese companies more previously, and — number one, I have to decide what I’m going to be able to cover and what I’m not. I’m not in China, I was in Taiwan, it is a different Internet, and there was too much uncertainty and too many unknowns in general about a lot of Chinese companies. I would write about them occasionally in the context of US tech companies. So I wrote about WeChat and what it meant for the iPhone’s relative competitive position in China, how it’s different from other countries — I think that sort of held up pretty well. I wrote about TikTok in context, particularly this context of Meta. TikTok came up around the same time as Quibi, and Quibi was the example of that. Quibi was actually right that there was room for a mobile entertainment product; it was totally wrong about the content acquisition strategy.
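The absolute-versus-relative point above can be sketched with back-of-the-envelope arithmetic; the catalog sizes and hit rates here are invented for illustration:

```python
# Absolute vs. relative numbers: a huge catalog with a tiny hit rate can
# yield far more good content than a small catalog with a great hit rate.
# All figures below are hypothetical.

def good_items(catalog_size: int, hit_rate: float) -> int:
    """Absolute count of good pieces of content."""
    return round(catalog_size * hit_rate)

curated = good_items(100, 0.10)                    # 10% of 100 pieces
user_generated = good_items(1_000_000_000, 0.001)  # 0.1% of a billion pieces

print(curated, user_generated)  # 10 vs. 1000000 good pieces
```

This is why spurring massive creation and then writing ranking algorithms to surface the good fraction beats a small catalog with a better hit rate.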
So even if the hit rate was higher, their total volume was way too small. I follow them, but not super closely. It’s just a hard market to understand. JC: But TikTok’s very relevant to the US market. So I wrote The TikTok War, basically making the case that the problem with TikTok — and back then everyone was talking about user data — who cares? The whole user data thing, people have this view of the East German Stasi going through folders of people’s data; these are vector databases with numbers that no human can parse, it’s really quite anodyne, it’s just there to target ads, and I was very skeptical about that being a forcing function in terms of forcing divestiture or whatever it might be. The issue I had was the algorithm. And I noticed — I think it was when the Hong Kong protests happened and Daryl Morey, the then-GM of the Houston Rockets, tweeted “Free Hong Kong” or something like that, and there was a huge meltdown with NBA games being canceled — I noticed that on TikTok, and I tested this from Taiwan and via VPN from the US, if you searched for every single NBA team, you got NBA clips, except for the Rockets, where you got nothing. JC: Oh, that’s funny, the Houston Rockets got demonetized. There was a thumb on the scale here. And I started talking about it then, and I did support the ban of TikTok, or the forced divestiture from China, because it seems fairly insane to have a primary information source controlled by your chief geopolitical adversary. JC: Yeah. Same as there are rules over TV station ownership, it’s not wildly different. And so everything’s a trade-off. Of course, I’m pretty well known for being a pretty stark defender of free speech and against censorship, and my issue wasn’t TikTok per se.
The reality is the founder of ByteDance is long gone, because he got called on the carpet for ByteDance showing a little too much of what people liked, which is mostly hot girls dancing, and insufficiently showing the right things that the party wanted. The reality of China is that the price of doing business is the party is somewhere in the control structure — they can tell you what to do — and this just seemed like a very foolish thing to tolerate. Unfortunately, or fortunately, maybe the reality is that the US political process and system is such a mess, can anyone really, truly impact it over time? The way it shows up messily is we somehow did pass the law banning TikTok, and it didn’t get banned, and now it is sold, but China still controls the algorithm. So I think it’s a big disaster, and also, what can I say about it? I said my piece. We ended up in the worst possible case, which is we violated property rights and we did all this stuff that’s ridiculous and we probably bartered X, Y, Z for ABC, and we didn’t get the most important thing, which was control of the algorithm. JC: Has that not happened as part of the sale? No. ByteDance still controls the algorithm. JC: I didn’t know that. Yeah. Good job by us. JC: That does seem like it was the point of the spin out. Well, the data was always the most salient political point. So when I wrote about it, that was my point. It was like, “I don’t care about the data, the issue is the algorithm”. And unfortunately they did not care. Maybe I should have written about it more, but all the politics stuff, there was a period — I mean, thank God for AI.
When I wrote Aggregation Theory in 2015, a couple of weeks later I wrote something about regulation. I’m like, “This is going to drive a bunch of regulatory issues and antitrust things and all these bits and pieces”, and when that actually happened at the late end of the last decade, of course I was writing about it, I was watching congressional hearings, all this sort of thing, and that is the closest I came to quitting and burning out. I think burnout’s not a function of how much work you’re doing, it’s doing work you don’t enjoy. And at one point I’m like, “Either I quit or I stop covering congressional hearings”, so I decided to stop covering congressional hearings. I only wrote about antitrust stuff that was super prominent and I’ve been much happier ever since. And maybe part of the price of not writing about that is that I should have pushed on the TikTok thing more. JC: That’s interesting. I said my piece. JC: Is Stratechery very widely read in DC? It is. Sometimes it’s gratifying. It’s great when you get called and asked for your opinion or you get certain responses or you see impact. It’s less gratifying when you get yelled at and people are mad at you. But fortunately the key thing to succeeding on the Internet is something I have in spades, which is a very high level of disagreeableness. So you can yell at me all you want, I’m not going to change my mind. JC: Okay. But getting back to Aggregation Theory as it pertains to AI. A simplistic view you could have is that the AI apps are the new aggregators, and so a huge amount of economic value will accrue to them, and that’s it.
You could also say that that’s too simplistic in a bunch of ways because like we were saying Booking.com, you expect it to return new hotels that you should book, but you expect a little less of a commercial incentive from the AI apps and this is like a little more of an abstract technology where it’s actually not trivial to insert all of the commercial incentives in the right way. Anyway, you come up with various objections and so do you think— Well, I think that the ad model is probably the way to start, which is what I just talked about before, sort of the lean-in versus lean-back. Ads are very tied into human psychology and like what you’re sort of tapping into and people’s response to that and how do you make something creative? And in the short term, technology often makes old business models even more powerful before it kills them. So you have something like a newspaper, I used to be limited to my geographic area, now I can reach the whole world. And a few years later, everyone can reach the whole world, I mean pure competition, I’m screwed . That is certainly a concern about this model. If you get to a world of say agentic commerce and the agents are just buying the right thing and I think this is also something that has driven a lot of tech skepticism of ads. People in tech tend to be fairly nerdy, fairly obsessed, they’re doing a ton of research to find sort of the exact right thing. JC: Yes. Why didn’t you tell me what to buy when I’ve researched it for two hours? That’s right, and so ads have no effect on me. Well, what if that sort of obsessive deep dive approach is now trivially available to everyone because AI is the one actually doing it? Now where do ads function? 
I think this is definitely a bit of a “be careful what you wish for” scenario, because what this entails — more transparency, more details, more understanding — sounds good, but what it actually entails is perfect competition, which is a very brutal game that can just wipe out entire categories. That’s basically what happened to newspapers in many respects, so that’s number one. Number two, in this sort of world you’re by definition anchoring on whatever specifications, whatever can be measured and put down, and you had the old Steve Jobs adage about feeds and speeds versus the feel of something — the intersection of liberal arts and tech, what the fuck does that mean? Well, what it actually means is there are things that can’t be measured and that don’t go on an Excel spreadsheet, and everyone you talk to acknowledges this. They say yes, there are things that can’t be measured, but the way it actually plays out in practice is that only the things that are measured count. I think sports analytics is a great example of this problem: basketball is my favorite sport, and there’s a lot that goes into basketball and winning that is somewhat hard to wrap your arms around. JC: It’s not like baseball which is very measurable. Baseball’s very measurable. I do think there are aspects like clutchness that I don’t know are properly measured, but in basketball for sure, there’s the interaction and the way teams play together and how your effort on, or involvement in, offense can affect defense, back and forth, and you see it again and again. I like Daryl Morey, but I think there’s a reason his teams haven’t won: they’ve over-optimized at the expense of some of these other issues. And if you can’t measure them, they tend to get devalued.
In a world of AI-mediated everything, how many things that can’t be measured fall by the wayside, because we end up with very utilitarian goods that have no soul to them? It sounds like a silly thing to worry about in some respects, but I’m a human and I anticipate liking and preferring the humanity of things of all sorts in the long run. JC: But you could say that e-commerce aggregators like Amazon and lots of others have led to fairly anonymous manufacturers of lots of everyday goods, the kind of Amazon Basics type stuff, at a much lower price point than they were previously at, and still perfectly good quality. Isn’t that fine? So this is where you throw my ad argument in my face. Which is that it actually raises the base level for everyone — the access to items your basic consumer has. JC: There’s no soul in an Amazon Basics power adapter. And that’s fine. Everyone thinks back to, “Oh, my washing machine was so much better in the 1960s”, and it’s like, yes, that’s true, and also far fewer people had washing machines. So I’m now making the opposite argument, arguing with myself. JC: I will leave and you can have a one person play. I’ll just switch back and forth. JC: You can change sides of the booth. You mentioned agentic commerce, we obviously are big into that and had our announcement with OpenAI back in October. Where do you think that goes? How do you see agentic commerce playing out? The contrast between your own OpenAI announcement and Google’s announcement I think is pretty interesting and speaks to what the companies are driving for. OpenAI wants to be the place where you do everything; they want to be the aggregator. People compare them to Netscape; I think the better analogy, if you’re an OpenAI skeptic, would be AOL, where they want to be the interface for everything that you might do, and it all goes through their channels.
And Google, just as they were relative to AOL, is like, “Actually we want to equip everyone, knowing that if everyone is capable, we are the greatest beneficiaries, because we still marshal the front-end demand”. Now, how does that actually manifest in terms of commerce? The funny thing about tech is I don’t think it will manifest in terms of airplane tickets, which is everyone’s example — no one can ever think of a better example than that. But what is the AI going to buy? What is it going to get? I don’t know. I would like to think people will want to have agency in their buying decisions, but then again, we have assistants, whether for work or whatever it might be, and they make buying decisions that we’re not necessarily involved in, and I think that is a good precursor of what people will ideally — do I really need to know? Actually, I have very strong paper towel opinions. But once that’s set, can that be monitored and done? So I don’t know. I think this is a very unsatisfying answer, other than to say it has big implications for things like advertising: is that going to be a viable business model going forward? What margins are going to be available? Is there going to be perfect competition? Things along those lines. JC: Okay. Let me try this on you for agentic commerce and I’m curious to have you critique it, which is sort of how I see things playing out. I think some skepticism is triggered by people pitching a very far end state with a lot of agentic autonomy. And so it’s like, “Please book me a honeymoon in Japan and all the activities”, and no one actually does things that way. Whereas actually you should go from the bottom up with some very basic building blocks, where step one is just replacing filling out web forms — that’s an activity that sucks, no one likes it.
And so imagine you find the winter jacket you like and you copy the URL into ChatGPT and just say, “Please buy this for me”, and that’s a much better experience than going and clicking around a site you’ve never been to before. So there’s just the agent doing that kind of tool use on your behalf, and everyone can create that. Maybe it clarifies there are multiple colors, “Which one do you like?”, but it’s just replacing filling out form fields. This is, by the way, one thing that — a lot of people are skeptical of this, but I am very optimistic about it — I call it just-in-time UI. JC: Exactly, it’s a better UI. So level one is a better UI for doing the action you already want to do, then level two is better discovery and search. It is crazy that we’ve gotten this far in e-commerce with keyword-based search. Keyword-based search works really well when you’re buying a book that you know the name of. It’s like, “I want to go buy this particular title”. But for a winter jacket, it’s like, “I don’t know, I want — it’s like a puffer, what’s it called?”, and so instead you want to be able to say, “I’m looking for a jacket, I’m going to this place, it’s going to be this cold, I like these kinds of things”, whatever. And so level two is just better search, and the ability to search with parameters that no existing search UI lets you specify — the temperature of the place you’re going, so you actually get a jacket of appropriate warmth, which with a jacket is obviously one of the core things. And so better search UI is kind of level two from our point of view. Right, which I think is already sort of manifesting. Exactly. JC: We’re already seeing it in the early usage of the ChatGPT buying experience — I think that’s one of those super cool features. And then level three, which we haven’t really seen play out yet, is this idea again of a persistent profile of the user— That anticipates their needs. JC: Exactly.
It’s like, I want to be able to just pin things I like as I go along. Or maybe I can share my browser history, or maybe I can just share a Pinterest board: “These are some styles I like, give me a good winter jacket for the cold based on that, here are some photos of me”. Oh, I have an even better idea. Imagine if you were using ChatGPT and it’s circa October 1st and there’s an ad for a great winter jacket that is perfectly suited to me, because they’ve been understanding my interests and they understand the context of where I am. I’m not searching for winter jackets because I don’t plan well — it gets cold and then I’m searching for winter jackets — but what if it could anticipate that and show me an ad at the right time, when I need to see it? JC: Okay. Maybe that’s level four. That’s what I’ve been wanting them to build! This is my whole bit from before, this is why they’re so late, they should be shipping that this year. You’re only shipping that this year if you started your ad product two or three years ago. This is doable today, this is what Meta ads are. You need to watch more Reels. I’ve bought more ski equipment this year that I don’t need, just because it just shows up. I’m moving back to Wisconsin, so I’m buying stuff for the house, and I see all those ski hangers and think those would be great, that sounds very useful — they’re still in a box, I haven’t actually put them up. JC: Yeah, so there’s a limit to what kind of banner ad type experiences you can do, whereas I think the search thing is very powerful. But yeah, I’m curious what you think of level one, the very act of checking out, level two, better search, and then level three, defining your own embedding space of preferences. I completely agree with that approach, I just think you underrate the extent to which level three has already been built.
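As a rough sketch of what “defining your own embedding space of preferences” could mean in practice — the vectors, product names, and tiny three-dimensional space here are all invented for illustration — a profile vector aggregated from pinned items can be compared against product embeddings by cosine similarity:

```python
import math

# Hypothetical sketch: rank products by cosine similarity between a user
# preference embedding and per-product embeddings. Real systems would use
# learned, high-dimensional vectors; these toy values are made up.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

user_profile = [0.9, 0.1, 0.7]  # e.g. aggregated from pinned items

products = {
    "puffer_jacket": [0.8, 0.2, 0.6],
    "rain_shell":    [0.1, 0.9, 0.3],
}

# Highest-similarity products first.
ranked = sorted(products, key=lambda p: cosine(user_profile, products[p]), reverse=True)
print(ranked)  # ['puffer_jacket', 'rain_shell']
```

The design point is that the “pin things I like” and “share a Pinterest board” ideas above are just different ways of building the `user_profile` vector.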
Actually, one thing that Mark Zuckerberg said a couple of earnings calls ago that I thought was very astute is that we get hung up on technological definitions like, “What is an agent?”, and he’s like, “Actually the largest and most successful agent in the world today is Facebook advertising”, which is exactly right. With Facebook advertising, people have it in their head that you go in and you put in demographics and targeting and stuff. JC: It’s very autopilot. Yeah. What you actually do is go in and say, “Acquiring a customer for this is worth $10 to me, I’ll spend up to $10”, and they will deliver you a customer for $10. Their margin will actually increase because they’ll make sure they deliver it at exactly $10 even when they can do it for less, so they actually make more money, and you get exactly what you asked for. And I think people miss the extent to which this already works — they’re just stuck on “50% of my ads work, I don’t know which ones”. No, on Facebook, they all work. JC: I feel like a bunch of new, very big successful companies will be created in AI-powered e-commerce. It just feels like a different enough product space. You’re talking about retailers, merchants or agents? JC: I was just talking about discovery and kind of the demand side. Though also probably retailers. Yeah. Well, I certainly think the part that would be new, which you were maybe talking about, is this real anticipatory aspect. To go back to Meta ads, it helps merchants who have a very specialized product find customers they never would’ve found otherwise. But there’s the inverse: “I need a very specialized product, how do I find it?”, which I think you were referring to before. But to what extent can that not just be in-the-moment — I need this specific thing — I remember I needed a piece for a server rack to mount this router, because I didn’t want to buy a whole new thing or whatever; I had this extra router.
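The value-based bidding contract described above can be sketched with a few lines of arithmetic; all the numbers here are invented, and this is a simplification of how a real ad platform settles campaigns:

```python
# Hypothetical sketch of value-based ad bidding: the advertiser declares
# what a customer is worth (a target CPA), pays exactly that per delivered
# conversion, and the platform keeps the spread between that spend and its
# actual delivery cost.

def settle_campaign(target_cpa: float, conversions: int, delivery_cost: float):
    """Return (advertiser spend, platform margin) for a campaign."""
    advertiser_spend = target_cpa * conversions
    platform_margin = advertiser_spend - delivery_cost
    return advertiser_spend, platform_margin

# Advertiser asks for customers at $10 each; the platform finds 100 of
# them for $700 total, so its margin grows as its targeting improves.
spend, margin = settle_campaign(target_cpa=10.0, conversions=100, delivery_cost=700.0)
print(spend, margin)  # 1000.0 300.0
```

This is the sense in which the advertiser gets exactly what they asked for while the platform’s margin increases with its own efficiency.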
And of course there’s some guy in Australia who does 3D prints that perfectly matched this, on Etsy or something, and it was great. I found this random guy, and I’m sure he made a bunch of money selling me a $40 piece that cost him $2 to make — good for him. But an AI should be capable of anticipating that need. So it’s not, “Oh, I have a need, let me go find it”, it’s, “I know you’re going to need this, let me acquire it”, and that would be very powerful. JC: The public markets indicate as of January 2026 that SaaS is canceled. Are they right? I think it’s probably a mix. Part of the brilliance of American business is — actually, this is one of my theories about why the Europeans are so gung-ho about data privacy and regulation: it’s because they so often interact with European companies. So I was in Paris a couple of years ago, and of course going on a tourist trip, going to the Louvre, going to the Musée d’Orsay, just seeing a bunch of museums, and they all have their own homegrown registration systems and they’re collecting so much data. JC: And they’re wildly insecure. It’s like, yeah, what’s your age? What’s your pet? What color is — why do you need to know all this information? They’re all non-standard forms; this is where you need AI to fill all this in. And there’s this theoretical idea in their heads, “If we capture this data, it could be useful”, so they built these homegrown things in the 2000s that are horribly insecure, and I use them. I’m like, “Where’s the regulator? This is ridiculous”, so I get the mindset. US companies don’t do that. US companies are so good — I think one of the big strengths of US business culture is understanding this, and I think about this personally, it’s what I say when I give life advice. What’s the number one mistake people make when they’re young in particular? They focus on their weaknesses.
They’re like, “I have to ameliorate my weakness”, and I’m like, no, what you do is double down on your strength, you get richly rewarded for that, and then you hire someone to take care of your weaknesses. I’m a big believer in the Getting Things Done system. Great book, Getting Things Done. Even if you don’t use the system, the book is really good, lots of great insights, and there’s this whole thing with tickler files and all these sorts of things — it’s an amazing system. I’m completely incapable of managing the system on my own, so there’s a Mac app called OmniFocus that is completely built around this system. I don’t have a license for it; my assistant has a license, and I text him stuff and his job is to maintain my Getting Things Done file, because I can’t do it. What do I do? My life is very, very optimized: I write three pieces a week, I do an interview and I do three podcasts, and all my focus and energy needs to be on that. If I do that, that will make a lot of money, and I can pay to fix all my problems elsewhere. And I think American business does this very well: they don’t waste time and energy on stuff they’re not good at, they double down on what they’re good at, and they’re focused on the upside, not on their cost centers. JC: Probably a result of the very large market in the US. I think so, and just the competition of being in a very large common market. You go back to newspapers — they have lots of homegrown stuff. If you’re a publication online, if you’re like me on the Internet — I get paid to comment on the big tech companies, and it’s probably the most competitive market on earth, right? Lots of people have takes on the big tech companies. And so you have to be super focused. Given that, that speaks to the enduring value of just paying someone to manage these business functions from a software perspective. Now, there are a lot of SaaS applications, and I’m not sure they’re all strictly necessary and worth the price.
People talk about tech having a Big Five; I like to say there’s obviously a Big Six. The sixth was Silicon Valley Inc., which is basically the cookie-cutter playbook: this VC goes to this founder addressing this specific business case with the SaaS business model. Everyone gets to talk about changing the world, and yet it’s actually the most predictable thing — that’s why VC returns compressed, because it’s so predictable in terms of this engine going. A big problem there is they’re all seat-based, and anything seat-based is somewhat vestigial, because there are probably going to be fewer seats. And then the replacement is more small-scale — the Internet in general, and writing or content in particular, is a good example. It used to be that you wanted to be in the big pond, and everyone in the big pond ate. If you had a job at one of Condé Nast’s magazines, you lived life well writing for magazines. Today, if you want to be a writer — I give advice to people who want to be content producers all the time, and I’m like, look, you don’t want to be in a pond with me. Bill Simmons is like the first Internet sports writer, and you don’t want to be doing a Bill Simmons impression on the Internet, because he got there first. What you want to do is make your own pond; the Internet enables the creation of a million different ponds. So you get to define your own pond and be the biggest fish in that pond — that’s how you succeed. I think the upside case is AI making that possible for more than just content: for all sorts of businesses to be lots of smaller-scale individual entrepreneurs or small teams, none of whom really fit the Salesforce-driven, seat-based model of a lot of these companies. So there might be a big return to self-serve, or maybe they’ll just roll their own because their needs aren’t that large — that’s a larger structural change.
But the problem is, it’s fine to say these businesses will be okay as they are; eliminating the growth is the big problem, and I think that’s the biggest issue behind all the compression. JC: Via headcount growth? Just growth in general. If these are just stable businesses with astronomical stock-based compensation that was predicated on becoming very large, that’s a problem. JC: Yes. I can see two critiques you might have of the software space and why everything’s traded down. One is everyone’s just going to use Claude Code to rebuild their own version in house, and so the software moat is less. And the second is that many of these products price on a per-seat basis, and so if you’re growing headcount less — on the first— Or shrinking. JC: Exactly, yeah. On the first, Anthropic just installed Workday. So I don’t think we’re Claude-coding— Systems of record, that’s the category that is definitely safer. JC: We see this with Stripe Billing as well. I don’t think anyone’s Claude-coding one of those systems of record anytime soon. Do you use Workday? JC: Yeah, we use Workday. I don’t know what to make of the second criticism, but again, it just feels like for a very broad and deep system of record, it’s kind of hard to make the argument that the business is somehow impaired versus a year or two ago. Right. But that’s my point: people saying they’re going to zero are wrong, but if the assumption is you’re fine but you’re not going to be growing indefinitely, that shift from being thought of as a growth company to being a stable one — that’s a haircut, and again, it’s combined with these whole compensation structures. JC: Yeah, you’re now valued on EPS rather than revenue or something. So yeah. Can we talk about your business and Stratechery? JC: You were very early to the sovereign writer concept — I think you were one of the first premium newsletters? I think so. Well, there are two predecessors to talk about.
One is Wall Street in general, where there’s a long history of faxed-out newsletters and things like Grant’s— JC: All this research and all that stuff. Yeah. The difference there was that those were very expensive and had a very small addressable market. The difference for Stratechery is it’s much cheaper and the market’s much larger. The other person who deserves a call-out — I think he was the first person to do it before me, otherwise I think I was the first — was Andrew Sullivan. JC: I hadn’t realized he had a paid newsletter. He had a paywall for like a year. The problem is he did it all wrong. He would churn out like 50 posts a day, about a gazillion different things, and he totally burned out and all that sort of stuff. But that happened to be a great fit for the advertising model back in the day, because you would always go back there and there’d always be new stuff. And I’m sure he drove a gazillion impressions for The Atlantic, especially when he was with them. He went independent, he was pretty successful — I think he did around a million dollars or something like that — but it was this very leaky paywall. It was like, after 35 posts you’d hit a paywall, and it was very easy to get around. But he was actually very inspirational in how I thought about the model, in that he was hailed as a failure because he burnt out and then quit. But I’m like, “He made a million dollars, this is pretty good”. From the beginning, just thinking about the psychology of this, when I started Stratechery I had a gazillion ideas of things to write about, and I limited myself to writing a max of two times a week. The reason is I had the subscription model in mind, and when I added the model, I didn’t want it to be, “I’m taking stuff away and now you have to pay”; I wanted, “You like this so much, if you pay, you can get more”.
And so I always wanted that you’re-paying-to-get-more aspect, and I think that probably mattered more at the beginning, especially because the model was new. The metric I looked at was people who visited Stratechery on days I didn’t post, because they were people going there hoping I had posted that day and leaving disappointed. Usually a paywall disappoints people when they hit it; in this case, the paywall would alleviate their disappointment, because they could now get what they wanted. And so I’m like, “If I can capture X percentage of these visitors, it’ll be very good”. I had a one-day goal, a one-week goal, a one-month goal — I failed to reach all of them. What happened was, I actually thought it was not going to work, and I was going to have to go back to teaching English or something like that, but it grew and grew and grew. And at six months, I hit my one-year goal, which was a thousand subscribers — a thousand true fans. It was a $100,000 run rate. JC: It took you how long to get to a thousand subscribers? Six months. And I posted a little note saying, “Hey, the model works, my goal was a thousand in a year, I’ve already reached it”. And this is the only step change in subscribers I’ve ever had: in the next 24 hours, I got 250 new subscribers, a 25% increase. What they were were people I had identified who wanted to be subscribers; they just didn’t trust that it was going to work — that I was going to go out of business and take their money — and so once they realized I wasn’t going anywhere, they all signed up. So my metrics were right, but I didn’t properly account for the uncertainty, people’s fear of losing their money. So I’m very grateful that now people just sign up for stuff all the time. Of course, I’ll probably go to my grave being most well known for Stratechery, but I am equally proud of the model and that lots of people make a living doing this.
JC: How far do you think this model can go? Again, the defining characteristic to me seems like the unbundling, like maybe 30 years ago you’d have been writing for a publication, whereas now it’s unbundled, it’s the direct relationship with your subscribers, it’s direct monetization and generally paid. There might be some ad supported components as well as paid. Obviously, Substack has proven that this has very broad applicability, but how far do you think this goes versus traditional media bundling? I think there’s a couple interesting angles to this. Number one, I think people, including people in tech, seriously underrate how large the Internet is. Some of the biggest pushback I got when I announced the Stratechery paid product was from VCs, I won’t say who, it’s like, “Love you Ben, just not going to work on the Internet”. Actually, per my bit about ponds before, I don’t know that we’ve scratched the limit of how many ponds can be built in the world that you can sort of occupy. And the other part of this, the critical piece of this, and AI is actually an important factor here, is the key to the model is your costs. So just as technology enables you to reach everyone, you need to leverage technology to keep your costs very, very low. And so for the first several years, for Stratechery, it was just me. So as long as I could feed my family, I was fine. And this is the problem for the traditional media companies: their cost structures were not Internet cost structures, they were predicated on much higher revenue. It’s interesting to think and talk about this because a lot of it is not really applicable to me. Like before, I write about ads a lot and I’m not an ad business; in this case, I write about VC-backed, highly scalable companies, but my actual business is very boutique and small and artisan. And that’s right in this regard, and it’s a super important point: managing your costs.
If you manage your costs appropriately, then the possibilities are there, but that also means there are some things that don’t work with this model. Like your traditional classic investigative journalism, the six month sort of piece, it’s not well supported by this. What did support that was the bundle, having lots of different writers in one publication altogether. The thing I worry about, I wonder about, is that bundles are good for everyone involved and yet no one wants to be a part of them. So TV is the classic example. Why did we have a TV bundle? I think it started in Pennsylvania, so you have a television station in Philadelphia and you have the Allegheny Mountains and you have a bunch of towns there that want to get the signal from Philadelphia, but they can’t get a good signal. So they band together, they put up a big tower to get the signal, they run actual cable from that tower to all their houses, and all of cable television started in small town rural America to get TV from the big cities. And Ted Turner comes along and is like, “I could just broadcast directly to these towers, this would be amazing”, and suddenly you get the model. But you had a geographic forcing function, and you ended up with all these companies with the best business model in the world. Everyone paid them whether they watched or not, and they made a ton of money. And what happened the moment they could do something different? Everyone said, I could also go direct, I could stream directly. There’s just something about business where it’s almost like you have to be forced into the optimal game, from a game theory perspective. And the moment you can desert, everyone always deserts, even if it’s the best thing. JC: How does this apply to your world? Because in theory, there should be bundles.
Substack should be a bundle, you should be able to have one fee and get everything, but they started — I think a mistake Substack made, and I’m a huge Substack fan, just to be clear, I’ve made this disagreement before and they get mad about it, but they characterize themselves as being totally writer-friendly. And I think that was a mistake because it’s impossible for them to be ultimately writer-friendly, because the most writer-friendly setup is running open source software on your own server, then no one can do anything to you. JC: I thought you were going to say the most writer friendly thing is to have a humming consumer business. It would be. The problem is all their initial terms made bundles impossible: all the individual publishers owning their own subscribers, having their own Stripe account, and all these sorts of bits and pieces. JC: But hang on. I feel like there’s a well trodden path in tech here where OpenTable started entirely as on-prem, purely software for restaurants, and then they added the customer discovery layer on top of it. Shopify started as a solution just for merchants and then they added the Shop Pay kind of network layer for consumers on top. Even with those businesses though, the Shop Pay bit is nice, it’s not the driver of the business. JC: It’s a pretty cool part of the business. It’s cool. I like it, but at the end of the day, the vast majority of Shop interactions are: I see an ad on Instagram, I go there, and I get the Shop button, which is incredible, one of the greatest things. JC: No, but yeah, but then it makes the Shopify offering to merchants so much more compelling because you get the Shop Pay network. And it’s where I’m going with this— I’m skeptical that’s the driver of the business. I think it’s a nice to have.
JC: But can Substack just add a Substack Prime bundle on top of it and merchants can— The problem is the merchants who will make that bundle valuable have no incentive to join the bundle because they could make more just monetizing users directly. So imagine I’m on Substack, how much more revenue does Substack have to give me for me to trade $15 a month from my subscribers for a smaller amount from whoever’s part of this? And so the problem is they have to really pay me off to be a part of it. Meanwhile, everyone who doesn’t have any subscribers, of course they love to be in the bundle. JC: This feels solvable to me. I think it’s solvable at the beginning. JC: No, sorry. It feels solvable now. I think a counter example is something like Spotify. Spotify is arguably the best bundle on the Internet. But the reason why they were able to assemble the bundle is because they only need to negotiate with four entities and so it’s interesting because on one hand that limits Spotify’s upside because those entities are able to negotiate such a large share of Spotify’s revenue. On the other hand, that’s also why Spotify was possible because they only need to negotiate with four. If you’re trying to get every artist on earth, well, of course I got to get Taylor Swift. Okay, good luck with that. All the small fry will sign up, but the music’s unique in particular because music, the moment a song comes out, it’s now part of the back catalog. And actually people only ever listen to back catalogs so it’s a particularly unique industry in that regard. But that is a bundle that formed, but I think it’s because there’s only four players. JC: Okay. And how do you use AI in writing Stratechery these days? I think it probably replaces what I used to do a lot of on — it’s much more efficient Googling. 
The most gratifying articles I write are when I write about a topic that I usually don’t, and then someone from that industry is like, “Wow, that was good”, because you’re always worried about— JC: You have this imposter syndrome. No. I mean, fortunately I don’t really have imposter syndrome. JC: That’s why it all works. What’s the mechanism where you’re reading something about your area of expertise? JC: Gell-Mann Amnesia. Yeah, that’s right. It’s like, “This is totally wrong”, and then you trust everything else. I don’t want to trigger Gell-Mann Amnesia amongst anyone. So if people ask me, I hate the book question, like what books do you read? I read a lot of books, but they’re very targeted. I’m a very, very fast reader. So sometimes I’ll write an article and I know there’s a pertinent book and I will just read the whole book in the morning. But in general, I really want to make sure I fully understand a space, particularly if it’s new, that I’m writing about. This is partly why I have a big competitive advantage: I’ve been thinking about tech since I was in junior high school and I’ve been writing about tech for 13 years, so I’ve already done so much preparatory work that for anyone starting from scratch, it’s hard. But something I want to dive into, I’m one of the world’s greatest Googlers, I’d like to think, I know every sort of parameter and how to find — so I think I can say it pretty authoritatively, Google has gotten worse. And I don’t think it’s Google’s fault, I just think that it’s harder. One of Google’s faults is they got so biased towards recency, and so you have to be super diligent, but AI is so incredible for this. Just sort of getting background, making sure you understand an issue, the ins and outs of it, how things work. You can query stuff, dive deeper. So that is by far my number one use case. I do, not always, but I will sometimes ask it to — this is where I like ChatGPT.
I write in BBEdit, which has an integration. This is also why I’m very annoyed by and very sensitive to the cloying nature: “Oh, this is really great”, but no, that’s not what I’m asking for, I want you to actually go in and find stuff. So I do not use it to actually generate any exact content. JC: Okay. So targeted research and then critique. Yes, those would be the two biggest use cases. JC: You’ve written a lot about the TSMC Brake, this idea that the limiting factor on all AI expansion is basically the rate of TSMC capacity expansion, because all AI chips are fabbed at TSMC. It seems like, as you look at the AI space and everything interesting going on, we’re mostly chip constrained right now, which would not have to be the case, you could be power constrained and stuff. But if you’re chip constrained, there’s a population of people who want to expand very quickly, the AI labs, NVIDIA, people like that. And then there’s TSMC, which is famously more conservative in how it expands. Why is that? Why does the market signal not cause them to build out fab capacity faster? Because the risks for fabs are basically larger than for anyone else. You’re spending billions and billions of dollars on a fab, and if it’s not fully utilized, if you end up with too much capacity, number one, all your costs are locked in. Basically 99.9% of the cost for a fab is depreciation, and you’re paying that depreciation (you already paid it in cash, obviously) but it’s on your accounting statement no matter what. So the fabs can be extremely profitable, TSMC’s margins are higher than ever, but they can very quickly tip over into having a huge problem. And then once it’s already built, these fabs can run for a long time. So that excess capacity depresses prices for years to — I mean, we see this in memory all the time, memory famously goes through these cycles.
Like what’s going to happen, believe it or not, is we are going to have too much memory capacity in a few years because we have such a shortage right now. Micron just announced they’re building a huge new fab in Singapore and everyone’s going to do that, but why does it happen in memory? There’s three competitors in memory. If Micron doesn’t do it, SK Hynix will. If SK Hynix doesn’t do it, Samsung will, and so you have a dynamic where — a healthy dynamic — the fabs know better, but they can’t help themselves. And so they take on the risk and they build these fabs. The problem we have with logic is that TSMC doesn’t have that pressure, and so they’re actually behaving rationally. TSMC is giving up potential long-term revenue, but the downside for fabs in particular is so large that they don’t want to realize that downside. JC: Can they not pass the risk onto the customer where it’s like, “You are going to pay for the entire fab”? That’s probably what they need to get to. Apple famously did a lot of this sort of prepaying, particularly when TSMC was expanding hugely in the 2010s, and they maybe need to get even more explicit about that. But I think the better solution and the cheaper solution for the hyperscalers in the long run would be to do what is necessary for TSMC to have competitors; then you get it for free, you don’t need to prepay it. So there’s this risk that’s out there, this risk of overbuilding. Right now, TSMC is shifting all that risk to the hyperscalers, to Nvidia, to Apple. And the way it manifests, and the reason why they get away with it, is because the risk is foregone revenue, it’s money you don’t make, and worse than that, it’s money you don’t make four or five years down the road. What does every company say on their earnings call right now? “We could have made more, but we don’t have enough supply”. And if you think it’s bad, why is it bad right now? ChatGPT comes out, every hyperscaler starts investing like crazy.
What does TSMC do? They actually decreased their CapEx year over year, two years in a row. There was no market response from TSMC to the ChatGPT moment. Now they increased to 41 [$ billion] last year, they’re going up to like 60 this year, but even that increase to 60 is a smaller percentage increase than last year. I think we’re looking at a massive shortage of chips in 2029 or so. Particularly because, and this is the other thing, the compute density of AI is so much larger, right? If you have an agent out doing stuff, it’s doing so many more computations and lookups in a limited amount of time than me and my Googling could ever humanly do, and so we have a CPU shortage too. And Intel, Intel shut down some of their CPU lines, right? So the whole semiconductor situation, I just think it’s a big problem. For a long time the question was, how can we get an alternative to TSMC for geopolitical reasons? And the truth is this is kind of like the bundling thing. It’s really hard to get companies to buy insurance. Number one, everyone wants someone else to do it, right? Who’s going to be the one to go and make the sacrifice? But also it might not happen, China might not attack Taiwan. And also, as long as it doesn’t happen, it’s super suboptimal to go somewhere else because TSMC is better. And for their customers, it’s not just that their fabs are better: their customer service is better, they have all the IP blocks you need, they’ve done this before, and you have an existing relationship. And they’ll punish you, because they have control; they’re not going to fulfill all their orders right now because there’s so much demand. So they can pick and choose sort of who — and so people are scared, they don’t want to go anywhere else. So how are we going to solve this problem?
And I think I actually wrote on the front page of Stratechery this week, which is basically the same thing I wrote in an Update, but this was a — the hyperscalers in particular need to appreciate, I think, that a massive crunch is coming and it’s now on them to get Intel up to speed, to get Samsung up to speed, to get a credible alternative. Yes, in theory, you could pay the— JC: For geopolitical reasons? Or for shortage reasons? No, we’ll get the geopolitical reasons for free. I think there are massive economic reasons to do so, which is all the revenue you’re going to be foregoing in 2029 if you don’t do it now, and then we’ll happily get geopolitical insurance for free. JC: But if TSMC are the best, rather than stand up Intel, which seems hard, isn’t the answer to just, again, prepay for an extra fab build out? But this is like, how do we feel in tech about ongoing operational costs as opposed to putting in some money up front and fixing the problem permanently? The market structure is a problem. You’re dealing with a monopolist, and not like a mean monopolist. JC: Yeah, exactly. They’re very nice, right? And they arguably have not raised prices nearly as much as they should have, but the reality is there’s this market structure problem that is going to impact the hyperscalers and it behooves them, I think, to fix the structure. Otherwise, the costs of insuring against that or overcoming that are just going to be larger and larger. JC: This seems like the topic you have felt strongest about in the past year or two. I felt pretty strongly about the Apple Vision Pro! JC: Okay, fair. What was your take with Apple Vision Pro? They finally showed an NBA game and they kept changing cameras. They’re applying 2D television production techniques to an immersive technology, just let me stay courtside! JC: The TSMC Brake seems like a bigger deal. Oh, probably.
JC: I have some rapid fire questions, or I’m not going to say rapid fire necessarily, but more a collection of disconnected questions for you. I’ll connect them. That’s what I do. JC: Great. How should schools do homework now that AI exists? I think they should incorporate it and they should probably do in-person exams. I mean it’s silly to try to crush it out, I’m very opposed to these AI detectors because they don’t work. Probably I’m particularly sensitive to it because obviously a lot of my prose is in these models. My thing was I wasn’t an em-dash user, but I’m the world’s biggest semicolon user. JC: I was a big em dash user all along. Fortunately the models don’t seem to have really incorporated the semicolon. I haven’t been that influential, but yeah, no, you want kids to use it, because whoever can use AI most effectively in their jobs going forward is going to have a big advantage. So there’s probably some return to in-class work being more important. I think this is my view on content generally. I think there’s a world in which not all content, but some content is more valuable than ever, because AI is a perfectly individualized experience. What you read is not necessarily what I read, so stuff that we both read is actually compelling, and I’m very interested in figuring out how to leverage that to be beneficial to people in the long run, and what can you get from school that you can’t get elsewhere, right? I can read the notes, I can read X, Y, Z, but there’s being in class, having a discussion about it, actually interacting, being pushed on these sorts of things. All this is a beautiful theoretical depiction of what school might be that is probably very far removed from the reality, but identifying things that are common experiences is going to be more and more valuable: common content, common classroom time, live events, shared experiences, because anything that’s individualized is just going to be completely swallowed. JC: Yes.
Do sports teams become more valuable in an AI abundance future? Of course. Everything live becomes more valuable. That’s something I’m thinking a lot about as far as my business. There’s some aspect of tens of thousands of people reading the same thing every day that is actually really powerful. There’s something interesting there, the possibility of doing live events where people can come together. I think a lot about community. I think no one’s going to really solve community around content, like a message board or comments; you actually get very bad dynamics, there’s a few people that dominate it. JC: Totally. Yeah. But what is great is if we’re in a group chat and you share an interesting article and you have a discussion about that. So there’s a lot of stuff around that that I think is really interesting and that I’m thinking a lot about. JC: How do you think of what’s going on in crypto these days? What’s crypto? (laughing) No, I’ve always been a crypto defender, just because digital scarcity is fundamentally interesting. It’s probably even more interesting at this point in a world of infinite content; you thought we had infinite content before, now we have infinite content on steroids. Not just six billion humans typing away, but agents generating stuff sort of constantly. And in that world, I think crypto as an identifier of authenticity is going to be more and more important. At the end of the day, I want the original, I don’t want a reproduction, and I’m optimistic about humans’ ability to create value where it seemed impossible to ever exist. I’m literally a professional podcaster and content creator and get paid a lot of money to do it; imagine explaining that to someone on the farm worried about automation. JC: Speaking of that, you mentioned that a majority of Stratechery consumption is now in the audio form rather than the written form. As far as I can tell.
I don’t know exactly, but well more than half my subscribers are subscribed to— JC: I consume it in the audio form. Yeah, it’s quite interesting. This is actually where I started building my own software. I was begging everyone to support paid podcasts; there were dedicated podcast platforms and there were writing ones, and no one would do it. So of course I had to just hire engineers and build it myself, at which point it obviously was the right thing to do. Now everyone does it, whatever, that’s my fate in life, I guess. Yeah, people love it. The interesting thing is, I’m not sure it’s been good for my business. JC: Why? Oh, because people don’t share. The good news is, I think it drives retention, because with email people would build up a backlog, it feels like a lot of work, and they say, “Oh, I haven’t read this in ages, I can unsubscribe”, whereas with audio they just consume seven minutes or eight minutes. The problem is they don’t share; audio content is not shared. JC: Totally. I listen to it in the car on the way home from work and that’s great. And then I— Never think about it ever again. But it’s great for me because I can say the same thing the next day and you’re like, “Oh, that was a very insightful comment”, you didn’t even know I said it yesterday. JC: Yeah. If we reason about what sectors are going to be important down the road, for the AI build out, energy is going to be a big deal, and the ability to actually power the data centers that are coming online may be a bigger constraint going forward than even chips. Robotics are clearly going to be a big thing. It seems like China is doing better on energy and better on robotics and is catching up on chips, doing okay on the AI models, but does that mean China’s potentially very well positioned for the coming wave of tech trends? I think any country that is capable of actually building things is well positioned.
But then again, the counter argument, if I could sort of put a silver lining on it, is that the challenge, the trick going forward, to sort of defy the doomers as it were, is actually creating new sorts of value, new sources of value, in a way that humans are uniquely capable of, and that is by definition an innovation story. It’s about freeing up resources from things that can be done by machines to more productive uses; it’s having a consumer market that pulls out that sort of innovation, that makes it possible to write a newsletter or a podcast and actually get paid for it. And so there’s a scenario where China is well positioned to win the total commodification of everything, which doesn’t have much margin, while the actual value creation, what makes humans human, generates the value that I think people in AI are skeptical can be created, despite the fact that 90% of us used to work in agriculture and now like 1% do. For some reason, that’s not going to repeat. If you want to be optimistic, that’s the sort of thing that America has always done well. JC: What’s your Stripe feedback for us? Oh, where to start? I mean, it’s hard for me to write about Stripe because I’m biased because I was very early. I think you introduced the billing API in 2011, which was a direct spur for wanting to do Stratechery and thinking this was a business model that was possible. So very, very big thumbs up on that. Oh, you didn’t warn me about this, I should have thought about this. Actually, you have one huge issue that I was just dealing with. Oh yeah, ACH. Your ACH implementation is — someone can go in, and if I try to add on to an ACH plan, so say I have a team (because that’s where you use ACH, large companies) and they want to add someone on. If that add-on fails, the entire plan gets canceled, so we have to build a bunch of logic to handle that independently. That’s a very detailed, specific problem that we’re facing. JC: Buggy ACH subscription interactions. Okay, that’s a good one.
There’s definitely more, I’d have to go back and think about it. But I mean, we didn’t talk about stablecoins in this sort of area. I’ve always been a big skeptic of micro transactions, and the problem goes to the investigative reporting thing. You can’t build something sustainable if you’re only monetizing on the back end, and the only way to do that is to have a very large market, which is what YouTube is. YouTube is a bunch of speculative video makers hoping that they’ll get enough views that the ads will pay for it, and there’s such a large scale and they monetize their ads so effectively that it works. There’s no market like this for written content or podcast content. People are like, “Oh, let me pay for one article”, and I’m like, no, what you’re paying for when you pay for me is my ongoing production. I’m making a promise to you that I’m going to write something every day and you’re paying for that promise, you’re not paying for the actual content; the content is a byproduct of that. The question for AI and microtransactions is, you have all these labs paying people all over the world to generate data, and if you’re like a radiologist, you get paid $350 an hour, I saw some article about it, and all these sliding scales, and they’re all duplicating work because everyone feels the sense that they can get differentiation. What we clearly need is some sort of market mechanism for data generation that in the long run will replace what we’re getting from journalistic enterprises, which are even more doomed than ever before. So how do you generate that? People pay directly for content, then AIs can get it, and you can build a large market like YouTube where people will speculatively do it, trusting that they’ll get paid because the market is large enough. That’s what needs to happen. Like a lot of things, there’s this massive question of how do we get from here to there, but we’ll see.
I know Cloudflare is trying to push on that, so we’ll see what happens. JC: Last question. How would you rate the execution of the major tech companies? Like the Big Five? JC: Yeah, sure. Apple, traditionally very strong. Their manufacturing obviously remains amazing. Like the iPhone Air (the alarm on it just went off, and I made sure I turned the snooze off), the greatest smartphone ever made. JC: Really? Oh yeah. JC: I’ve never even seen one. Oh, it’s awesome. JC: Is the battery life good? Good enough. JC: It’s bad. What’s that? JC: It sounds like it’s bad then. No, it’s fine. I mean, I actually forgot my external battery and it’s doing okay now. And now that I’m back in Wisconsin I have to wear jeans because it’s cold and it slides right in. Actually, I love it. I’m very devastated to hear they might not keep making it. Obviously Apple’s software has gotten pretty rough. Their relationship with the — I mean, Apple is so interesting because the reality is, when it comes to platforms, the price of becoming a platform is making a great product. So Apple gets platforms because they make great products, and they’re terrible stewards of the platforms. Microsoft is a great platform steward, but they can’t make good products, so they never get the permission to sort of have big platforms, which is sort of a tragedy there. But Apple, it’s an old company driven by managers, not founders, and maybe that’s how Siri got as bad as it was, which is obviously really bad, but at the end of the day, we still need devices. They’re still better than anybody else, so they’ll probably be okay. Google, I’ve had the hardest time understanding Google, in part because I think Google does a lot of stuff suboptimally.
Almost everything I feel like they do suboptimally, but I think that lack of — Apple can be super-optimized, but I think it’s Google’s lack of optimization that actually makes them maybe the most resilient of all the tech companies, because they’re never doing exactly what they should be doing, they have all this extra fluff and a gazillion science projects, but because their core business model is so good and throws off so much cash, they can just sort of be very flexible, and I’ve come to appreciate that about them. Everything that frustrates me in analyzing them actually has this hidden benefit of resiliency and strength and adaptability. And they’re like the slime, and if they’re coming in your direction, you’re actually in big — it might take them a really long time to get there, but when they get there, you’re doomed. So Microsoft, I’ve gotten a lot of mileage writing about Microsoft. Everyone, especially in the SaaS era, all these companies are like, “Oh, Microsoft sucks, we’re going to make the best of breed product”, and guess what? Startups in Silicon Valley want to buy all the best of breed products and they have the ability to make them work together. Joe managing the tire shop doesn’t care about that, he just wants this crap to work and to work together, and if it’s all mediocre but it kind of works together, that’s better than best of breed. Microsoft is just squashing these companies that grow and boom, hit that Microsoft wall again and again. Is that going to persist in an AI world? It’s probably tied to the SaaS question from before, in some respects. Their distribution and power there remains substantial. Meta’s probably, in my experience, been the best at execution. I mean, you just see stuff like interacting with PR or executives, they just run such a tight show. JC: They’re really honest. Yeah. That’s always been very impressive to me. I think their ad model is underrated.
The trick with them is keeping engagement, that’s what makes the whole thing go, and they’ve done a decent job of that. Hours spent in ChatGPT are hours not spent on Instagram, and I think that’s an underrated area. And I think they’re kind of betting that, look, that’s all fine and well today, but in the long run, this is an infrastructure game: we have the cash flow to fund it and OpenAI doesn’t. I think OpenAI might be a bigger threat to Facebook than Google, something worth considering, but Facebook is obviously clearly spending to meet it. So Amazon. Amazon, there’s a lot of fab capacity and power being spent on Trainium that one wonders could be better spent on other chips, but we’ll see what happens. JC: Aren’t people happy with the Trainium chips? The degree to which Amazon optimized cloud computing, I think, is underappreciated. When you’re operating in a commodity market there’s two ways to succeed, right? You can have a differentiated product where you can charge a high margin, or you can have a lower cost structure, where the price floor is the market price but your cost structure is lower than your competitors’, so that’s where you make your margin. That was how Amazon dominated the cloud: their cloud was way more optimized than anyone else’s. The whole Nitro architecture, just the way they architect everything, doing a lot of their own chips, shifting to Graviton. I think the thing with Graviton, their Arm CPU, is — who’s the number one customer for Graviton? Amazon itself, and so they can move all their own loads to it, optimize it, build all the software libraries, and then start offering it on a cost-plus basis to others.
That’s the playbook that they’re trying to run with Trainium, where the number one customer of Trainium in the long run is Amazon, but then they develop all the capabilities around it for it to be attractive to other people at lower prices, and they have that structurally smaller cost structure. The problem is that that works when you’ve sort of leveled off in performance. Amazon executed this model between 2005 and 2025. Of course, processors got faster in that time, but it wasn’t like the ’80s or ’90s when every leap was massive. Does that work in a relatively new market where there are massive leaps being made generation on generation? And they have Nvidia servers; do they have as many as they could because they’re on this strategy? Probably not. JC: Ben, thank you.

Jimmy Miller 2 months ago

AI Has Made it Easy to Own Your Tools

I am a digital hoarder. Probably not an extreme one. I don't have some fancy complicated RAID system for endless terabytes of media. But when I find a link to or a PDF of some interesting article, I want to save it. But I almost never review it. That isn't to say I don't read articles. I read plenty. But I also collect them. But I've always had a problem: how do I organize them, classify them, keep them? For links, my answer was Pocket. Well, you can imagine how that went. My exported list is living in Raindrop for the time being. My PDFs had been living in Muse. A nice piece of software, but it has diverged from the minimalism I enjoyed about it. So I've been on the search for alternatives. But there has always been this nagging feeling that what I want is not just some different piece of software, but a custom, fully owned setup for myself. But where would I ever find the time to do that? I admit that I'm far from having the tools I want. The tools I've built so far are rudimentary. But they were built in minutes, and they were exactly what I asked for. And they did things that a few years ago would not have been possible. Local LLMs are getting better and better. Seeing that trend, I bought a Framework desktop. Using it, I was able to have Claude write a quite simple script that found every PDF on my machine, grabbed some of the initial text from each, and passed it to gpt-120b, asking, "Is this PDF about programming?" Now I can find all those programming PDFs. But I needed to sort them. Now that I had the initial collection of potential PDFs, how was I going to sort them? I didn't want a tagging system. There's a chance that later I will. But for now, I wanted discrete categories. But what categories? I'd only find out once I started organizing them. So I asked Claude and got an app specifically designed to let me categorize them. A simple tool for syncing PDFs by hash to S3. Some of the PDFs have nice metadata built in. 
So we can just go ahead and extract that. Hence this tool. For the rest, we pass the PDF to Qwen3-VL-30B to grab the title and author. A Swift application compiled for my Mac and iPad that lets me annotate PDFs. The app is far from fully featured yet. But the fact that it syncs between my Mac and iPad seamlessly is wonderful. I use this mainly for the podcast, so I haven't gotten to do a ton with it yet. But having it be something I can customize to my needs already has me excited. A page on my website that statically generates the archive for all to browse. We've yet to reach the point where local models can quite replace Claude for coding. But having a local model where I never had to worry about the cost of sending it experiments, one that ran not on my laptop, but in the background on a "server", was such an enabling feature. I would have been very hesitant to send all these PDFs to a hosted model. But with a local model, I had so much flexibility to use it wherever judgment could be a substitute for deterministic processing. Nothing here was groundbreaking. Nothing here is something I couldn't have made myself. But they are all things I put off. All things I would have never prioritized, but wanted made. They are all imperfect tools. Many of them are one-offs. There were actually countless other small tools made along the way: one for cleaning up titles, one that chose between the metadata title and the OCR one (the OCR ones were usually better). Any one of these little bottlenecks might have been enough for me to stop working on this project. I see lots of discussions about AI all having to do with "production code". I'm sure I'll write my thoughts about that at some point. But I also think it is important that we remember that this isn't the only code that exists. In this case, it's personal code. Code I enjoy having the ability to modify. Code that I am happy isn't robust; it doesn't try to handle every edge case. It doesn't need a complicated sync process. 
It doesn't need granular permissions. This is just the start of what I expect to be a continual evolution of my PDF (and eventually link) management software. But for the first time in my programming life (not career, not everything is a business transaction), I don't feel the weight of the maintenance I've created for myself. I feel a sense of freedom to build more without incurring such a heavy cost. This, to me, is one of the most exciting features of our new AI capabilities.
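The post doesn't include the scripts themselves, but the pipeline it describes (walk the disk for every PDF, hand a snippet of each to a local model that answers a yes/no question, and sync files to S3 keyed by content hash) is small enough to sketch. This is a minimal Python sketch, not the author's actual code: the function names are mine, and the `snippet_of` and `judge` callables stand in for the real text extraction and local-model prompt, which the post doesn't show.

```python
import hashlib
from pathlib import Path


def find_pdfs(root: Path):
    """Yield every PDF under a directory tree, like the post's first script."""
    yield from root.rglob("*.pdf")


def keep_programming(paths, snippet_of, judge):
    """Filter PDFs with a pluggable yes/no judge.

    `snippet_of` pulls some initial text from a PDF (e.g. via pypdf);
    `judge` is any classifier -- in the post, a prompt to a local model
    asking "Is this PDF about programming?".
    """
    return [p for p in paths if judge(snippet_of(p))]


def s3_key(path: Path, chunk_size: int = 1 << 20) -> str:
    """Content-addressed S3 key: the SHA-256 digest of the file's bytes.

    Hashing in chunks keeps memory flat even for very large PDFs.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return f"pdfs/{h.hexdigest()}.pdf"
```

Keying uploads by content hash means re-running the sync never duplicates an unchanged PDF, and renaming a file locally still maps it to the same remote object.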

マリウス 2 months ago

Domains as "Internet Handles"

A little while ago I came across a post by Dan Abramov, a name that until then didn’t ring a bell, but who appears to be a former Meta employee and member of the React core team. The post links to a website made by Abramov that addresses the issues of how, quote, every time you sign up for a new social app, you have to rush to claim your username, how, quote, if someone else got there first, too bad, and how, quote, that username only works on that one app anyway. The website goes on: This is silly. The internet has already solved this problem. There already exists a kind of handle that works anywhere on the internet—it’s called a domain. A domain is a name you can own on the internet, like or . Most creators on the internet today don’t own a domain. Why not? Until recently, you could only use a domain for a website or custom email. But personal websites have mostly fallen out of fashion, and each social app sports its own kind of handles. However, open social apps are starting to change that. These apps let you use any internet domain you own as a handle. Abramov highlights a familiar pain point: on every new platform, users must scramble to secure their preferred username, often discovering it was taken years ago. Domains, he suggests, solve this by offering a globally unique namespace. However, this solution introduces an even greater scarcity problem, amongst other more important issues. Short, meaningful domain names have been scarce for decades. Most desirable combinations of common words, short names, or initials were claimed long before modern social platforms even existed. For example, just like our author, I, too, would have loved to use or as my handle on e.g. Bluesky. Sadly, however, I’m more than two decades late for that, as the former seemingly belongs to a Russian company, and the latter to a namesake somewhere in Bavaria, Germany. 
Domain marketplaces and registries still list alternatives, but these often come with premium or recurring fees far exceeding what the average user is willing to pay. When platforms require domains as identity tokens, a user whose preferred domain is unavailable loses access to that identity everywhere, not just on a single platform. Unlike usernames, which can often be adapted with simple variations (e.g. adding punctuation), domains offer no such flexibility. TLD constraints mean that once a desirable domain is taken, there may be no practical semantic alternative. Domain scarcity does not solve the “handle availability” problem; it instead exacerbates it by moving contention from individual platforms to the internet’s global naming infrastructure. Usernames exist within individual platforms, and their loss, while inconvenient, usually has contained consequences. Losing a username typically means losing access to a single isolated data silo (platform). Domains, by contrast, are subject to a multilayered hierarchy of control involving domain registrars, TLD operators, ICANN-affiliated registries and the DNS root zone. By using a domain as a cross-platform handle, users tie their entire online identity to this centralized, multi-stakeholder governance structure. Misconduct, even just alleged, on one platform could result in escalations to a registrar or registry, potentially leading to domain suspension. A suspended domain invalidates not just a handle on one platform, but an entire online identity across all services using that identifier. The risks extend beyond platform moderation. A compromised mailbox, a malware incident on a web server, or an automated threat-intelligence flag from entities such as the internet’s favorite bully Spamhaus can lead to domain suspension. In such scenarios, users may face lengthy appeals processes involving opaque third-party entities that wield far more power than a typical platform operator. 
Domains were designed for hosting services, not for acting as the cornerstone of individual identity. Using them as universal handles places disproportionate power in the hands of infrastructure operators who were never intended to serve as arbiters of personal identity. If you’re a long-time reader of this website you probably already knew that privacy must come up at some point. Well, here it is: Traditional username-based systems allow users to separate their personal identity from their public persona. After all, not everyone might want others to know about their activity in the Taylor Swift forum of FanForum.com, and that’s fine. Domains, however, increasingly erode this layer of privacy. While privacy-respecting domain registrars still exist, the mainstream domain ecosystem overwhelmingly encourages or requires KYC, traceable payment methods and paid WHOIS privacy services to maintain the illusion of privacy. Most users will register domains using a credit card or similar traceable payment method through large commercial registrars. Even if WHOIS privacy is enabled, metadata leakage and billing records remain. In the context of social identities, this creates an environment where domain-based handles can be correlated with real-world identities far more easily than pseudonymous usernames. A user posting under a domain such as time-to-get-swifty.com could find their identity exposed not through any platform breach, but simply through the structural nature of domain registration. Usernames are free. Domains are not. Even the cheapest domains incur recurring costs. More desirable, short, memorable, or branded names often command high premiums or elevated renewal fees. While this financial burden may appear negligible to, let’s say, former well-paid Meta employees who consider their online presence a professional asset, the majority of internet users do not attach the same value to domain ownership. 
For many, especially outside tech-centric circles, the ROI of maintaining a personal domain is negligible or non-existent. A farmer participating in an agricultural forum is unlikely to find value in purchasing and renewing a domain like solely to participate in an online community. Any identity system that introduces ongoing financial requirements creates unfair barriers to participation and risks entrenching socioeconomic inequality in digital spaces. Abramov’s argument positions domains as a universal, user-controlled solution to fragmented identity systems. While his vision aligns with broader goals of data portability and user autonomy, domains introduce significant drawbacks that usernames do not suffer from: greater scarcity and reduced availability, centralized infrastructure vulnerabilities and governance risks, reduced privacy and increased traceability, and recurring financial burdens for users. With statements like “You don’t have to squat handles anymore. Own a domain, and you can log into any open social app”, the author makes it sound like domain names are less exclusive than simple usernames, when it’s clearly the other way around, and they fail to recognize that squatting is a far worse issue for domains than it is for simple usernames. Moreover, the reliance on conventional DNS infrastructure undermines the self-sovereignty that decentralized identifier systems aspire to. Without a complementary decentralized naming layer (e.g. Handshake), domain-based identities merely exchange one set of constraints and issues for another (vastly more dangerous and impactful) one. For these reasons, users and platform developers should think carefully before adopting domains as universal “internet handles”. Usernames, for all their imperfections, remain simpler, safer, more private, and more equitable for everyday identity on the web, at least until the truly decentralized future is here. 
While one might say that the handle is merely a representation of the underlying decentralized ID, a loss of the domain will nevertheless come with functional implications across every service that uses it. Luckily, platforms that implement domain handles continue to offer accounts under their own domains for the time being, so that at least for uninformed users nothing really changes (on the surface). Note: I have an account on a platform that supports domain handles, and I am using the feature in order to be able to make informed statements. The account is, however, nothing that is crucial to my existence on the internet. If my domain should spontaneously combust, that account would be the least of my worries. Instead, I’d be more troubled about this site and its related services, which is why I have a fallback domain. While I’m sure the author of internethandle.org didn’t intend to, some statements on the website “sound” somewhat out of touch, or at the very least tone-deaf, e.g.: Most creators on the internet today don’t own a domain. Why not? Until recently, you could only use a domain for a website or custom email. But personal websites have mostly fallen out of fashion […] Dan, personal websites haven’t fallen out of fashion, but have suffered under a World Wide Web altered (dare I say destroyed?) by the very companies you supported building as part of your previous roles and, to some extent, as part of the technologies you’re working with. Just because you, and the people you surround yourself with, seemingly don’t care about the small web, it doesn’t mean it has fallen out of fashion; if anything, personal websites are gaining popularity and are the weapon of choice against the enshittification of the web by companies like Meta and others.

Stone Tools 3 months ago

Bank Street Writer on the Apple II

Stop me if you've heard this one. In 1978, a young man wandered into a Tandy Radio Shack and found himself transfixed by the TRS-80 systems on display. He bought one just to play around with, and it wound up transforming his life from there on. As it went with so many, so too did it go with lawyer Doug Carlston. His brother, Gary, initially unimpressed, warmed up to the machine during a long Maine winter. The two thus smitten mused, "Can we make money off of this?" Together they formed a developer-sales relationship, with Doug developing Galactic Saga and third brother Don developing Tank Command. Gary's sales acumen brought early success and Broderbund was officially underway. Meanwhile in New York, Richard Ruopp, president of Bank Street College of Education, a kind of research center for experimental and progressive education, was thinking about how emerging technology fit into the college's mission. Writing was an important part of their curriculum, but according to Ruopp, "We tested the available word processors and found we couldn’t use any of them." So, experts from Bank Street College worked closely with consultant Franklin Smith and software development firm Intentional Educations Inc. to build a better word processor for kids. The fruit of that labor, Bank Street Writer, was published by Scholastic exclusively to schools at first, with Broderbund taking up the home distribution market a little later. Bank Street Writer would dominate home software sales charts for years and its name would live on as one of the sacred texts, like Lemonade Stand or The Oregon Trail. Let's see what lessons there are to learn from it yet.

1916: Founded by Lucy Sprague Mitchell, Wesley Mitchell, and Harriet Johnson as the “Bureau of Educational Experiments” (BEE), with the goal of understanding in what environment children best learn and develop, and of helping adults learn to cultivate that environment.
1930: BEE moves to 69 Bank Street. (Will move to 112th Street in 1971, for space reasons.)
1937: The Writer’s Lab, which connects writers and students, is formed.
1950: BEE is renamed to Bank Street College of Education.
1973: Minnesota Educational Computing Consortium (MECC) is founded. This group would later go on to produce The Oregon Trail.
1983: Bank Street Writer, developed by Intentional Educations Inc., published by Broderbund Software, and “thoroughly tested by the academics at Bank Street College of Education.” Price: $70.
1985: Writer is a success! Time to capitalize! Bank Street Speller $50, Bank Street Filer $50, Bank Street Mailer $50, Bank Street Music Writer $50, Bank Street Prewriter (published by Scholastic) $60.
1986: Bank Street Writer Plus $100. Bank Street Writer III (published by Scholastic) $90. It’s basically Plus with classroom-oriented additions, including a 20-column mode and additional teaching aids.
1987: Bank Street Storybook, $40.
1992: Bank Street Writer for the Macintosh (published by Scholastic) $130. Adds limited page layout options, Hypercard-style hypertext, clip art, punctuation checker, image import with text wrap, full color, sound support, “Classroom Publishing” of fliers and pamphlets, and electronic mail.

With word processors, I want to give them a chance to present their best possible experience. I do put a little time into trying the baseline experience many would have had with the software during the height of its popularity. "Does the software still have utility today?" can only be fairly answered by giving the software a fighting chance. To that end, I've gifted myself a top-of-the-line (virtual) Apple //e running the last update to Writer, the Plus edition. You probably already know how to use Bank Street Writer Plus. You don't know you know, but you do know because you have familiarity with GUI menus and basic word processing skills. 
All you're lacking is an understanding of the vagaries of data storage and retrieval as necessitated by the hardware of the time, but once armed with that knowledge you could start using this program without touching the manual again. It really is as easy as the makers claim. The simplicity is driven by a very subtle, forward-thinking user interface. Of primary interest is the upper prompt area. The top 3 lines of the screen serve as an ever-present, contextual "here's the situation" helper. What's going on? What am I looking at? What options are available? How do I navigate this screen? How do I use this tool? Whatever you're doing, whatever menu option you've chosen, the prompt area is already displaying information about which actions are available right now in the current context. As the manual states, "When in doubt, look for instructions in the prompt area." The manual speaks truth. For some, the constant on-screen prompting could be a touch overbearing, but I personally don't think it's so terrible to know that the program is paying attention to my actions and wants me to succeed. The assistance isn't front-loaded, like so many mobile apps, nor does it interrupt, like Clippy. I simply can't fault the good intentions, nor can I really think of anything in modern software that takes this approach to user-friendliness. The remainder of the screen is devoted to your writing and works like any other word processor you've used. Just type, move the cursor with the arrow keys, and type some more. I think most writers will find it behaves "as expected." There are no Electric Pencil -style over-type surprises, nor VisiCalc -style arrow key manipulations. What seems to have happened is that in making a word processor that is easy for children to use, they accidentally made a word processor that is just plain easy. The basic functionality is drop-dead simple to pick up by just poking around, but there's quite a bit more to learn here. 
To do so, we have a few options for getting to know Bank Street Writer in more detail. There are two manuals by virtue of the program's educational roots. Bank Street Writer was published by both Broderbund (for the home market) and Scholastic (for schools). Each tailored their own manual to their respective demographic. Broderbund's manual is cleanly designed, easy to understand, and gets right to the point. It is not as "child focused" as reviews at the time might have you believe. Scholastic's is more of a curriculum to teach word processing, part of the 80s push for "computers in the classroom." It's packed with student activities, pages that can be copied and distributed, and (tellingly) information for the teacher explaining "What is a word processor?" Our other option for learning is on side 2 of the main program disk. Quite apart from the program proper, the disk contains an interactive tutorial. I love this commitment to the user's success, though I breezed through it in just a few minutes, being a cultured word processing pro of the 21st century. I am quite familiar with "menus" thank you very much. As I mentioned at the top, the screen is split into two areas: prompt and writing. The prompt area is fixed, and can neither be hidden nor turned off. This means there's no "full screen" option, for example. The writing area runs in high-res graphics mode so as to bless us with the gift of an 80-character wide display. Being a graphics display also means the developer could have put anything on screen, including a ruler which would have been a nice formatting helper. Alas. Bank Street offers limited preference settings; there's not much we can do to customize the program's display or functionality. The upshot is that as I gain confidence with the program, the program doesn't offer to match my ability. There is one notable trick, which I'll discuss later, but overall there is a missed opportunity here for adapting to a user's increasing skill. 
Kids do grow up, after all. As with Electric Pencil , I'm writing this entirely in Bank Street Writer . Unlike the keyboard/software troubles there, here in 128K Apple //e world I have Markdown luxuries like . The emulator's amber mode is soothing to the eyes and soul. Mouse control is turned on and works perfectly, though it's much easier and faster to navigate by keyboard, as God intended. This is an enjoyable writing experience. Which is not to say the program is without quirks. Perhaps the most unfortunate one is how little writing space 128K RAM buys for a document. At this point in the write-up I'm at about 1,500 words and BSW's memory check function reports I'm already at 40% of capacity. So the largest document one could keep resident in memory at one time would run about 4,000 words max? Put bluntly, that ain't a lot. Splitting documents into multiple files is pretty much forced upon anyone wanting to write anything of length. Given floppy disk fragility, especially with children handling them, perhaps that's not such a bad idea. However, from an editing point of view, it is frustrating to recall which document I need to load to review any given piece of text. Remember also, there's no copy/paste as we understand it today. Moving a block of text between documents is tricky, but possible. BSW can save a selected portion of text to its own file, which can then be "retrieved" (inserted) at the current cursor position in another file. In this way the diskette functions as a memory buffer for cross-document "copy/paste." Hey, at least there is some option available. Flipping through old magazines of the time, it's interesting just how often Bank Street Writer comes up as the comparative reference point for home word processors over the years. If a new program had even the slightest whiff of trying to be "easy to use" it was invariably compared to Bank Street Writer . 
Likewise, there were any number of writers and readers of those magazines talking about how they continued to use Bank Street Writer, even though so-called "better" options existed. I don't want to oversell its adoption by adults, but it most definitely was not a children-only word processor, by any stretch. I think the release of Plus embraced a more mature audience. In schools it reigned supreme for years, including the Scholastic-branded version of Plus called Bank Street Writer III. There were add-on "packs" of teacher materials for use with it. There was also Bank Street Prewriter, a tool for helping to organize themes and thoughts before committing to the act of writing, including an outliner, as popularized by ThinkTank. (Always interesting when influences ripple through the industry like this.) Of course, the Scholastic approach was built around the idea of teachers having access to computers in the classroom. And THAT was built on the idea of teachers feeling comfortable enough with computers to seamlessly merge them into a lesson plan. Sure, the kids needed something simple to learn, but let's be honest, so did the adults. There was a time when attaching a computer to anything meant a fundamental transformation of that thing was assured and imminent. For example, the "office of the future" (as discussed in the Superbase post) had a counterpart in the "classroom of tomorrow." In 1983, Popular Computing said, "Schools are in the grip of a computer mania." Steve Jobs took advantage of this, skating to where the puck would be, by donating Apple 2s to California schools. In October 1983, Creative Computing did a little math on that plan. $20M in retail donations brought $4M in tax credits against $5M in gross donations. Apple could donate a computer to every elementary, middle, and high school in California for an outlay of only $1M. 
Jobs lobbied Congress hard to pass a national version of the same "Kids Can't Wait" bill, which would have extended federal tax credits for such donations. That never made it to law, for various political reasons. But the California initiative certainly helped position Apple as the go-to system for computers in education. By 1985, Apple would dominate fully half of the education market. That would continue into the Macintosh era, though Apple's dominance diminished slowly as cheaper, "good enough" alternatives entered the market. Today, Apple is #3 in the education market, behind Windows and Chromebooks. It is a fair question to ask, "How useful could a single donated computer be to a school?" Once it's in place, then what? Does it have function? Does anyone have a plan for it? Come to think of it, does anyone on staff even know how to use it? When Apple put a computer into (almost) every school in California, they did require training. Well, let's say lip-service was paid to the idea of the aspiration of training. One teacher from each school had to receive one day's worth of training to attain a certificate which allowed the school to receive the computer. That teacher was then tasked with training their coworkers. Wait, did I say "one day?" Sorry, I meant about one HOUR of training. It's not too hard to see where Larry Cuban was coming from when he published Oversold & Underused: Computers in the Classroom in 2001. Even at schools with more than a single system, he notes, "Why, then, does a school's high access (to computers) yield limited use? Nationally and in our case studies, teachers... mentioned that training in relevant software and applications was seldom offered... (Teachers) felt that the generic training available was often irrelevant to their specific and immediate needs." From my perspective, and I'm no historian, it seems to me there were four ways computers were introduced into the school setting. 
The three most obvious were: I personally attended schools of all three types. What I can say the schools had in common was how little attention, if any, was given to the computer and how little my teachers understood them. An impromptu poll of friends aligned with my own experience. Schools didn't integrate computers into classwork, except when classwork was explicitly about computers. I sincerely doubt my time playing Trillium's Shadowkeep during recess was anything close to Apple's vision of a "classroom of tomorrow." The fourth approach to bringing computers into the classroom was significantly more ambitious. Apple tried an experiment in which five public school sites were chosen for a long-term research project. In 1986, the sites were given computers for every child in class and at home. They reasoned that for computers to truly make an impact on children, the computer couldn't just be a fun toy they occasionally interacted with. Rather, it required full integration into their lives. Now, it is darkly funny to me that having achieved this integration today through smartphones, adults work hard to remove computers from school. It is also interesting to me that Apple kind of led the way in making that happen, although in fairness they don't seem to consider the iPhone to be a computer. America wasn't alone in trying to give its children a technological leg up. In England, the BBC spearheaded a major drive to get computers into classrooms via a countrywide computer literacy program. Even in the States, I remember watching episodes of BBC's The Computer Programme on PBS. Regardless of Apple's or the BBC's efforts, the long-term data on the effectiveness of computers in the classroom has been mixed, at best, or even an outright failure. 
Apple's own assessment of their "Apple Classrooms of Tomorrow" (ACOT) program after a couple of years concluded, "Results showed that ACOT students maintained their performance levels on standard measures of educational achievement in basic skills, and they sustained positive attitudes as judged by measures addressing the traditional activities of schooling." Which is a "we continue to maintain the dream of selling more computers to schools" way of saying, "Nothing changed." In 2001, the BBC reported, "England's schools are beginning to use computers more in teaching - but teachers are making "slow progress" in learning about them." Then in 2015 the results were "disappointing": "Even where computers are used in the classroom, their impact on student performance is mixed at best." Informatique pour tous, France 1985: Pedagogy, Industry and Politics by Clémence Cardon-Quint noted the French attempt at computers in the classroom as being, "an operation that can be considered both as a milestone and a failure." Computers in the Classrooms of an Authoritarian Country: The Case of Soviet Latvia (1980s–1991) by Iveta Kestere and Katrina Elizabete Purina-Bieza shows the introduction of computers to have drawn stark power and social divides, while pushing prescribed gender roles of computers being "for boys." Teachers Translating and Circumventing the Computer in Lower and Upper Secondary Swedish Schools in the 1970s and 1980s by Rosalía Guerrero Cantarell noted, "the role of teachers as agents of change was crucial. But teachers also acted as opponents, hindering the diffusion of computer use in schools." Now, I should be clear that things were different in the higher education market, as with PLATO in the universities. But in the primary and secondary markets, Bank Street Writer's primary demographic, nobody really knew what to do with the machines once they had them. 
The most straightforwardly damning assessment is from Oversold & Underused where Cuban says in the chapter "Are Computers in Schools Worth the Investment?", "Although promoters of new technologies often spout the rhetoric of fundamental change, few have pursued deep and comprehensive changes in the existing system of schooling." Throughout the book he notes how most teachers struggle to integrate computers into their lessons and teaching methodologies. The lack of guidance in developing new ways of teaching means computers will continue to be relegated to occasional auxiliary tools trotted out from time to time, not integral to the teaching process. "Should my conclusions and predictions be accurate, both champions and skeptics will be disappointed. They may conclude, as I have, that the investment of billions of dollars over the last decade has yet to produce worthy outcomes," he concludes. Thanks to my sweet four-drive virtual machine, I can summon both the dictionary and thesaurus immediately. Put the cursor at the start of a word and hit or to get an instant spot check of spelling or synonyms. Without the reality of actual floppy disk access speed, word searches are fast. A spelling check can be performed on the full document, which does take noticeable time to finish. One thing I really love is how cancelling an action or moving forward on the next step of a process is responsive and immediate. If you're growing bored of an action taking too long, just cancel it with ; it will stop immediately. The program feels robust and unbreakable in that way. There is a word lookup, which accepts wildcards, for when you kinda-sorta know how to spell a word but need help. Attached to this function is an anagram checker, which benefits greatly from a virtual CPU boost. But it can only do its trick on single words, not phrases. Earlier I mentioned how little the program offers a user who has gained confidence and skill.
That's not entirely accurate, thanks to its most surprising superpower: macros. Yes, you read that right. This word processor designed for children includes macros. They are stored at the application level, not the document level, so do keep that in mind. Twenty can be defined, each consisting of up to 32 keystrokes. Running keystrokes in a macro is functionally identical to typing by hand. Because the program can be driven 100% by keyboard alone, macros can trigger menu selections and step through tedious parts of those commands. For example, to save our document periodically we need to do the following every time: That looks like a job for to me. Defining a macro to save, with overwrite, the current file. After it is defined, I execute it, which happens very quickly in the emulator. Watch carefully. If you can perform an action through a series of discrete keyboard commands, you can make a macro from it. This is freeing, but also works to highlight what you cannot do with the program. For example, there is no concept of an active selection, so a word is the smallest unit you can directly manipulate due to keyboard control limitations. It's not nothin', but it's not quite enough. I started setting up markdown macros, so I could wrap the current word in or for italic and bold. Doing the actions in the writing area and noting the minimal steps necessary to achieve the desired outcome translated into perfect macros. I was even able to make a kind of rudimentary "undo" for when I wrap something in italic but intended to use bold. This reminded me that I haven't touched macro functionality in modern apps since my AppleScript days. Lemme check something real quick. I've popped open LibreOffice and feel immediately put off by its Macros function. It looks super powerful; a full dedicated code editor with watched variables for authoring in its scripting language. Or is it languages? Is it Macros or ScriptForge? What are "Gimmicks?" Just what is going on?
Google Docs is about the same, using JavaScript for its "Apps Script" functionality. Here's a Stack Overflow post where someone wants to select text and set it to "blue and bold" with a keystroke and is presented with 32 lines of JavaScript. Many programs seem to have taken a "make the simple things difficult, and the hard things possible" approach to macros. Microsoft Word reportedly has a "record" function for creating macros, which will watch what you do and let you play back those actions in sequence (à la Adobe Photoshop's "actions"). This sounds like a nice evolution of the BSW method. I say "reportedly" because it is not available in the online version and so I couldn't try it for myself without purchasing Microsoft 365. I certainly don't doubt the sky's the limit with these modern macro systems. I'm sure amazing utilities can be created, with custom dialog boxes, internet data retrieval, and more. The flip side is that a lot of power has been stripped from the writer and handed over to the programmer, which I think is unfortunate. Bank Street Writer allows an author to use the same keyboard commands for creating a macro as for writing a document. There is a forgotten lesson in that. Yes, BSW's macros are limited compared to modern tools, but they are immediately accessible and intuitive. They leverage skills the user is already known to possess. The learning curve is a straight, flat line. Like any good word processor, user-definable tab stops are possible. Bringing up the editor for tabs displays a ruler showing tab stops and their type (normal vs. decimal-aligned). Using the same tools for writing, the ruler is similarly editable. Just type a or a anywhere along the ruler. So, the lack of a ruler I noted at the beginning is now doubly frustrating, because it exists! Perhaps it was determined to be too much visual clutter for younger users?
Again, this is where the Options screen could have allowed advanced users to toggle on features as they grow in comfort and ambition. From what I can tell in the product catalogs, the only major revision after this was for the Macintosh, which added a whole host of publishing features. If I think about my experience with BSW these past two weeks, and think about what my wish-list for a hypothetical update might be, "desktop publishing" has never crossed my mind. Having said all of that, I've really enjoyed using it to write this post. It has been solid, snappy, and utterly crash-free. To be completely frank, when I switched over into LibreOffice, a predominantly native app for Windows, it felt laggy and sluggish. Bank Street Writer feels smooth and purpose-built, even in an emulator. Features are discoverable and the UI always makes it clear what action can be taken next. I never feel lost, nor do I worry that an inadvertent action will have unknowable consequences. The impression of it being an assistant to my writing process is strong, probably more so than with many modern word processors. This is cleanly illustrated by the prompt area, which feels like a "good idea we forgot." (I also noted this in my ThinkTank examination.) I cannot lavish such praise upon the original Bank Street Writer, only on this Plus revision. The original is 40 columns only, spell-checking is a completely separate program, there is no thesaurus, no macros, a kind of bizarre modal switch between writing/editing/transfer modes, no arrow key support, and other quirks of its time and target system (the original Apple II). Plus is an incredibly smart update to that original, increasing its utility 10-fold without sacrificing ease of use. In fact, it's actually easier to use, in my opinion, than the original and comes just shy of being something I could use on a regular basis. Bank Street Writer is very good! But it's not quite great.
Ways to improve the experience, notable deficiencies, workarounds, and notes about incorporating the software into modern workflows (if possible).

My emulator setup:

- AppleWin 32-bit 1.31.0.0 on Windows 11
- Emulating an Enhanced Apple //e
- Authentic machine speed (enhanced disk access speed)
- Monochrome (amber) for clean 80-column display
- Disk II controller in slot 5 (enables four floppies, total)
- Mouse interface in slot 4
- Bank Street Writer Plus

The three types of school computing:

- At the classroom level there are one or more computers.
- At the school level there is a "computer lab" with one or more systems.
- There were no computers.

The save sequence:

- Hit (open the File menu)
- Hit (select Save File)
- Hit three times (stepping through default confirmation dialogs)

I find that running at 300% CPU speed in AppleWin works great. No repeating key issues and the program is well-behaved. Spell check works quickly enough to not be annoying and I honestly enjoyed watching it work its way through the document. Sometimes there's something to be said about slowing the computer down to swift human-speed, to form a stronger sense of connection between your own work and the computer's work. I did mention that I used a 4-disk setup, but in truth I never really touched the thesaurus. A 3-disk setup is probably sufficient. The application never crashed; the emulator was rock-solid. CiderPress2 works perfectly for opening the files on an Apple ][ disk image. Files are of file extension, which CiderPress2 tries to open as disassembly, not text. Switch "Conversion" to "Plain Text" and you'll be fine. This is a program that would benefit greatly from one more revision. It's very close to being enough for a "minimalist" crowd. There are four key pieces missing for completeness:

- Much longer document handling
- Smarter, expanded dictionary, with definitions
- Customizable UI, display/hide: prompts, ruler, word count, etc.
- Extra formatting options, like line spacing, visual centering, and so on.
For a modern writer using hyperlinks, this can trip up the spell-checker quite ferociously. It doesn't understand, nor can it be taught, pattern-matching against URLs to skip them.

Alex White's Blog 4 months ago

Parsing GPX files with Swift

I've been playing around with a really fun experiment the past few days using Swift. I don't have much experience with macOS development, but have been very pleasantly surprised by Swift and SwiftUI! There's so much that can be accomplished out of the box that I haven't even looked into third-party packages. It's also been really nice to take a step away from web development (I'm working on htmlCMS as my other project). No servers, auth, deployments, databases, CSS, etc. It's so refreshing to have one way of doing it right, not a million. For this Swift/macOS experiment, I've been building a parser for GPX files. A GPX file is generated by a GPS as a log of coordinates for a path. In cycling, this corresponds to your ride. The file can also include metadata from sensors, such as speed, cadence, heart rate, elevation and air temperature. I find this data fascinating and love exploring my stats after a ride, but sadly the best way to do that on the market, Strava, is undergoing some rapid enshittification, locking features behind a paywall, introducing A.I. and actively making the experience worse. So I decided to build something for myself! My goal is to display your route on the map, along with "events" marked on the map. For example, instead of digging through charts you'll be able to look at the map to review your ride and see markers for things like "5% Grade Climb Start" -> "Zone 5 HR" -> "Climb Ended" -> "Max Speed" -> "Zone 4 HR". These markers let you see how quickly you completed the climb, how much it stressed your body, and how far along the route it took to recover. I'm finding this to be a lot more effective than Strava's method of outlining information. Here's a look at what I've accomplished in the past 2 days, more to come soon!

baby steps 4 months ago

We need (at least) ergonomic, explicit handles

Continuing my discussion on Ergonomic RC, I want to focus on the core question: should users have to explicitly invoke handle/clone, or not? This whole “Ergonomic RC” work was originally proposed by Dioxus and their answer is simple: definitely not. For the kind of high-level GUI applications they are building, having to call to clone a ref-counted value is pure noise. For that matter, for a lot of Rust apps, even cloning a string or a vector is no big deal. On the other hand, for a lot of applications, the answer is definitely yes – knowing where handles are created can impact performance, memory usage, and even correctness (don’t worry, I’ll give examples later in the post). So how do we reconcile this? This blog argues that we should make it ergonomic to be explicit. This wasn’t always my position, but after an impactful conversation with Josh Triplett, I’ve come around. I think it aligns with what I once called the soul of Rust: we want to be ergonomic, yes, but we want to be ergonomic while giving control 1. I like Tyler Mandry’s “Clarity of purpose” construction: “Great code brings only the important characteristics of your application to your attention”. The key point is that there is great code in which cloning and handles are important characteristics, so we need to make that code possible to express nicely. This is particularly true since Rust is one of the very few languages that really targets that kind of low-level, foundational code. This does not mean we cannot (later) support automatic clones and handles. It’s inarguable that this would benefit clarity of purpose for a lot of Rust code. But I think we should focus first on the harder case, the case where explicitness is needed, and get that as nice as we can; then we can circle back and decide whether to also support something automatic. One of the questions for me, in fact, is whether we can get “fully explicit” to be nice enough that we don’t really need the automatic version.
There are benefits from having “one Rust”, where all code follows roughly the same patterns, where those patterns are perfect some of the time, and don’t suck too bad 2 when they’re overkill. I mentioned this blog post resulted from a long conversation with Josh Triplett 3. The key phrase that stuck with me from that conversation was: Rust should not surprise you. The way I think of it is like this. Every programmer knows what it’s like to have a marathon debugging session – to sit and stare at code for days and think, but… how is this even POSSIBLE? Those kinds of bug hunts can end in a few different ways. Occasionally you uncover a deeply satisfying, subtle bug in your logic. More often, you find that you wrote and not . And occasionally you find out that your language was doing something that you didn’t expect. That some simple-looking code concealed a subtle, complex interaction. People often call this kind of thing a footgun. Overall, Rust is remarkably good at avoiding footguns 4. And part of how we’ve achieved that is by making sure that things you might need to know are visible – like, explicit in the source. Every time you see a Rust match, you don’t have to ask yourself “what cases might be missing here” – the compiler guarantees you they are all there. And when you see a call to a Rust function, you don’t have to ask yourself if it is fallible – you’ll see a if it is. 5 So I guess the question is: would you ever have to know about a ref-count increment? The tricky part is that the answer here is application dependent. For some low-level applications, definitely yes: an atomic reference count is a measurable cost. To be honest, I would wager that the set of applications where this is true is vanishingly small. And even in those applications, Rust already improves on the state of the art by giving you the ability to choose between and and then proving that you don’t mess it up.
But there are other reasons you might want to track reference counts, and those are less easy to dismiss. One of them is memory leaks. Rust, unlike GC’d languages, has deterministic destruction. This is cool, because it means that you can leverage destructors to manage all kinds of resources, as Yehuda wrote about long ago in his classic ode-to-RAII entitled “Rust means never having to close a socket”. But although the points where handles are created and destroyed are deterministic, the nature of reference-counting can make it much harder to predict when the underlying resource will actually get freed. And if those increments are not visible in your code, it is that much harder to track them down. Just recently, I was debugging Symposium, which is written in Swift. Somehow I had two instances when I only expected one, and each of them was responding to every IPC message, wreaking havoc. Poking around I found stray references floating around in some surprising places, which was causing the problem. Would this bug have still occurred if I had to write explicitly to increment the ref count? Definitely, yes. Would it have been easier to find after the fact? Also yes. 6 Josh gave me a similar example from the “bytes” crate. A type is a handle to a slice of some underlying memory buffer. When you clone that handle, it will keep the entire backing buffer around. Sometimes you might prefer to copy your slice out into a separate buffer so that the underlying buffer can be freed. It’s not that hard for me to imagine trying to hunt down an errant handle that is keeping some large buffer alive and being very frustrated that I can’t see explicitly in the where those handles are created. A similar case occurs with APIs like 7 . takes an and, if the ref-count is 1, returns an . This lets you take a shareable handle that you know is not actually being shared and recover uniqueness. This kind of API is not frequently used – but when you need it, it’s so nice it’s there.
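A minimal sketch of the uniqueness-recovery pattern described above. The API names were elided in the text; I'm assuming they are the standard library's `Arc::try_unwrap` and (from the footnotes) `Rc::make_mut`, which match the behavior the post describes:

```rust
use std::rc::Rc;
use std::sync::Arc;

fn main() {
    // Ref-count is 1, so try_unwrap recovers ownership of the value.
    assert_eq!(Arc::try_unwrap(Arc::new(vec![1, 2, 3])), Ok(vec![1, 2, 3]));

    // Ref-count is 2, so the Arc is handed back as the Err variant.
    let shared = Arc::new(5);
    let second = Arc::clone(&shared);
    assert!(Arc::try_unwrap(shared).is_err());
    drop(second);

    // make_mut always yields unique access, cloning only when shared
    // (copy-on-write): `a` keeps the old value, `b` gets a private copy.
    let a = Rc::new(10);
    let mut b = Rc::clone(&a);
    *Rc::make_mut(&mut b) += 1;
    assert_eq!((*a, *b), (10, 11));
}
```

In both cases the decision point is explicit in the source, which is exactly the kind of visibility being argued for.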
Entering the conversation with Josh, I was leaning towards a design where you had some form of automated cloning of handles and an allow-by-default lint that would let crates which don’t want that turn it off. But Josh convinced me that there is a significant class of applications that want handle creation to be ergonomic AND visible (i.e., explicit in the source). Low-level network services and even things like Rust for Linux likely fit this description, but any Rust application that uses or might also. And this reminded me of something Alex Crichton once said to me. Unlike the other quotes here, it wasn’t in the context of ergonomic ref-counting, but rather when I was working on my first attempt at the “Rustacean Principles”. Alex was saying that he loved how Rust was great for low-level code but also worked well for high-level stuff like CLI tools and simple scripts. I feel like you can interpret Alex’s quote in two ways, depending on what you choose to emphasize. You could hear it as, “It’s important that Rust is good for high-level use cases”. That is true, and it is what leads us to ask whether we should even make handles visible at all. But you can also read Alex’s quote as, “It’s important that there’s one language that works well enough for both” – and I think that’s true too. The “true Rust gestalt” is when we manage to simultaneously give you the low-level control that grungy code needs but wrapped in a high-level package. This is the promise of zero-cost abstractions, of course, and Rust (in its best moments) delivers. Let’s be honest. High-level GUI programming is not Rust’s bread-and-butter, and it never will be; users will never confuse Rust for TypeScript. But then, TypeScript will never be in the Linux kernel. The goal of Rust is to be a single language that can, by and large, be “good enough” for both extremes. The goal is to make enough low-level details visible for kernel hackers but do so in a way that is usable enough for a GUI.
It ain’t easy, but it’s the job. This isn’t the first time that Josh has pulled me back to this realization. The last time was in the context of async fn in dyn traits, and it led to a blog post talking about the “soul of Rust” and a followup going into greater detail. I think the catchphrase “low-level enough for a Kernel, usable enough for a GUI” kind of captures it. There is a slight caveat I want to add. I think another part of Rust’s soul is preferring nuance to artificial simplicity (“as simple as possible, but no simpler”, as they say). And I think the reality is that there’s a huge set of applications that make new handles left-and-right (particularly but not exclusively in async land 8) and where explicitly creating new handles is noise, not signal. This is why e.g. Swift 9 makes ref-count increments invisible – and they get a big lift out of that! 10 I’d wager most Swift users don’t even realize that Swift is not garbage-collected 11. But the key thing here is that even if we do add some way to make handle creation automatic, we ALSO want a mode where it is explicit and visible. So we might as well do that one first. OK, I think I’ve made this point 3 ways from Sunday now, so I’ll stop. The next few blog posts in the series will dive into (at least) two options for how we might make handle creation and closures more ergonomic while retaining explicitness. I see a potential candidate for a design axiom… rubs hands with an evil-sounding cackle and a look of glee   ↩︎ It’s an industry term.  ↩︎ Actually, by the standards of the conversations Josh and I often have, it wasn’t really all that long – an hour at most.  ↩︎ Well, at least sync Rust is. I think async Rust has more than its share, particularly around cancellation, but that’s a topic for another blog post.  ↩︎ Modulo panics, of course – and no surprise that accounting for panics is a major pain point for some Rust users.  
↩︎ In this particular case, it was fairly easy for me to find regardless, but this application is very simple. I can definitely imagine ripgrep’ing around a codebase to find all increments being useful, and that would be much harder to do without an explicit signal they are occurring.  ↩︎ Or , which is one of my favorite APIs. It takes an and gives you back mutable (i.e., unique) access to the internals, always! How is that possible, given that the ref count may not be 1? Answer: if the ref-count is not 1, then it clones it. This is perfect for copy-on-write-style code. So beautiful. 😍  ↩︎ My experience is that, due to language limitations we really should fix, many async constructs force you into bounds which in turn force you into and where you’d otherwise have been able to use .  ↩︎ I’ve been writing more Swift and digging it. I have to say, I love how they are not afraid to “go big”. I admire the ambition I see in designs like SwiftUI and their approach to async. I don’t think they bat 100, but it’s cool they’re swinging for the stands. I want Rust to dare to ask for more !  ↩︎ Well, not only that. They also allow class fields to be assigned when aliased which, to avoid stale references and iterator invalidation, means you have to move everything into ref-counted boxes and adopt persistent collections, which in turn comes at a performance cost and makes Swift a harder sell for lower-level foundational systems (though by no means a non-starter, in my opinion).  ↩︎ Though I’d also wager that many eventually find themselves scratching their heads about a ref-count cycle. I’ve not dug into how Swift handles those, but I see references to “weak handles” flying around, so I assume they’ve not (yet?) adopted a cycle collector. To be clear, you can get a ref-count cycle in Rust too! It’s harder to do since we discourage interior mutability, but not that hard.  
↩︎

NorikiTech 4 months ago

Rust struct field order

Part of the “Rustober” series. One of the Rust quirks is that when initializing a struct, the named fields can be in any order: In Swift, this is an error. However, looking at the rules for C initialization, it seems the C behavior is the same, called “designated initializers,” available since C99. Possibly, this also has to do with Rust’s struct update syntax, where you can initialize a struct based on another instance; in that case the set of field names would be incomplete, so their order does not really matter since they are named:
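Both behaviors can be sketched like this (the `Point` struct here is my own illustration, not the post's original snippet):

```rust
#[derive(Debug, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    // Named fields may be listed in any order.
    let a = Point { y: 2, x: 1 };

    // Struct update syntax: any field not named is taken from `a`,
    // so the listed fields are necessarily an incomplete, named set.
    let b = Point { y: 5, ..a };
    assert_eq!(b, Point { x: 1, y: 5 });
}
```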

NorikiTech 4 months ago

Rust traits vs Swift protocols

Part of the “Rustober” series. As I said in the first post of the series, parts of Rust are remarkably similar to Swift, such as and . Let’s try to compare Rust traits to Swift protocols. I’m very new to Rust and I’m not aiming for completeness, so take it with a grain of salt. Looking at them both, Swift leans towards developer ergonomics (many things are implicit, less strict rules around what can be defined where) and Rust leans towards compile-time guarantees: there’s less flexibility but also less ambiguity. For example, in Swift you can add multiple protocol conformances at once, and the compiler will pick up any types that are named the same as associated types: And in Rust: Even this short example shows how flexible Swift is — and we haven’t even seen generics yet. I’m convinced Rust generics in traits are better designed than Swift’s, partly because they are more granular. Whenever I tried to compose anything complicated out of Swift protocols, I always ran into problems either with “Self or associated type requirements” (when a protocol can only be used as a generic constraint) or existential types. Here’s a real example where Swift couldn’t help me constrain an associated type on a protocol, so I had to leave it simply as an associated type without additional conformance. The idea is to have a service that would be able to swap between multiple instances of concrete providers, all conforming to several different types and ultimately descending (in the sense, not sense) from one common ancestor. Here’s similar code in Rust which does not have this problem: I’m looking forward to exploring the differences (and similarities) (and bashing my head on the wall) when I get to write some actual Rust code.
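The Rust half of that comparison can be sketched as follows. The `Provider` trait and `describe` function are my own minimal example of constraining an associated type, not the post's original code:

```rust
use std::fmt::Display;

// A trait whose associated type carries its own bound: the kind of
// constraint the post found hard to express on a Swift protocol.
trait Provider {
    type Output: Display;
    fn provide(&self) -> Self::Output;
}

struct Meters;

impl Provider for Meters {
    type Output = f64;
    fn provide(&self) -> f64 {
        42.0
    }
}

// Generic code can rely on the bound: whatever a Provider yields
// is guaranteed to be displayable, with no existential-type issues.
fn describe<P: Provider>(p: &P) -> String {
    format!("value: {}", p.provide())
}

fn main() {
    assert_eq!(describe(&Meters), "value: 42");
}
```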

Cassidy Williams 6 months ago

Ductts Build Log

I built and released Ductts , an app for tracking how often you cry! I built it with React Native and Expo (both of which were new to me) and it was really fun (and challenging) putting it together. Yes! I should have anticipated just how many people would ask if I’m okay. I am! I just like data. Here’s a silly video I made of the app so you can see it in action first! The concept of Ductts came from my pile of domains, originally from November 2022 (according to my logs of app ideas, ha). I revisited the idea on and off pretty regularly since then, especially when I went through postpartum depression in 2023, and saw people on social media explain how they manually track when they cry in their notes apps for their therapists. I had a few different name ideas for the app, but more than anything I wanted it to have a clever logo, because it felt like there was a good opportunity for one. I called it crycry for a while, CryTune, TTears (because I liked the idea of the emoticon being embedded in the logo), and then my cousin suggested Ductts! With that name I could do the design idea, and I thought it might be a fun pun on tear ducts and maybe a duck mascot. Turns out ducks are hard to draw, so I just ended up with the wordmark: I really wanted this app to be native so it would be easy to use on a phone! I poked around with actually using native Swift, but… admittedly the learning curve slowed me down every time I got into it and I would lose motivation. So, in a moment of yelling at myself to “just build SOMETHING, Cassidy” I thought it might be fun to try using AI to get me started with React Native! I tried a0 at first, and it was pretty decent at making screens that I thought looked nice, but at the time when I tried it, the product was a bit too immature and wouldn’t produce much that I could actually work with. But, it was a good thing to see something that felt a bit real! 
So, from there, I started a fresh Expo app with: I definitely stumbled through building the app at first because I used the starter template and had to figure out which things I needed to remove, and probably removed a bit too much at first (more on that later). I got very familiar with the Expo docs , and GitHub Copilot was helpful too as I asked about how certain things worked. In terms of the “order” in which I implemented features, it went like this: And peppered throughout all of this was a lot of styling, re-styling, debugging, context changes, design changes, all that jazz. This list feels so small when I think about all of the tiny adjustments it took to make drawers slide smoothly, gestures move correctly, and testing across screen sizes. There’s a few notable libraries and packages that I used specifically to get everything where I wanted: I learned a lot about how Expo does magic with their Expo Go app for testing your apps. Expo software developer Kadi Kraman helped explain it to me best: A React Native app consists of two parts: you have the JS bundle, and all the native code. Expo Go is a sandbox environment that gives you a lot of the native code you might need for learning and prototyping. So we include the native code for image, camera, push notifications and a whole bunch of libraries that are often used, but it’s limited due to what is possible on the native platforms. So when you need to change things in the native-land, you need to build the native code part yourself (like your own custom version of Expo Go basically). One of the things I really wanted to implement was an animated splash screen, and y’all… after building the app natively, properly, about a million times, I decided that I’m cool with it being a static image. But, here’s the animation I made anyway, for posterity: So many things are funky when it comes to building things natively, for example, how dependencies work and what all is included. 
There are a handful of libraries where I didn’t read the README (I’m sorry!!!!) and just installed the package to keep moving forward, and then learned that the library would work fine in Expo Go, but needed different packages installed to work natively. Phew. Expo Router is one of them, where again, if I had just read the docs, I could have known that I shouldn’t have removed certain packages when using . This is actually what you need to run if you want to install : Kadi once again came in clutch with a great explanation: The reason this sometimes happens is: Expo Go has a ton of native libraries pre-bundled for ease of development. So, even if you’re not installing them in your project, Expo Go includes the native code for them. For a specific example, e.g. this QR code library requires react-native-svg as a peer dependency and they have it listed in the instructions . However if you were to ignore this and only install the QR code library, it would still work in Expo Go, because it has the native code from pre-bundled. But when you create a development build, preview build or a production build, we don’t want to include all the unused code from Expo Go, it will be a completely clean build with only the libraries you’ve installed explicitly. The Expo Doctor CLI tool saved my bacon a ton here as I stumbled through native builds, clearing caches, and reinstalling everything. Kadi and the Expo team actually made a PR to help check for peer dependencies after I asked them a bunch of questions, which was really awesome of them! Y’all shipping native apps is a horrible experience if you are used to web dev and just hitting “deploy” on your provider of choice. I love web development so much. It’s beautiful. It’s the way things should be. But anyway, App Store time. I decided to just do the iOS App Store at first because installing the Android Simulator was the most wretched developer experience I’ve had in ages and it made me want to throw my laptop in the sea. 
Kadi (I love you Kadi) had a list of great resources for finalizing apps: TL;DR: Build your app, make a developer account, get 3-5 screenshots on a phone and on a tablet, fill out a bunch of forms about how you use user data, make a privacy policy and support webpage, decide if you want it free or paid, and fill out forms if it’s paid. Y’all… I’m grateful for the Expo team and for EAS existing. Their hand-holding was really patient, and their Discord community is awesome if you need help. Making the screenshots was easy with Expo Orbit , which lets you choose which device you want for each screenshot, and I used Affinity Designer to make the various logos, screenshots, and marketing images it needed. I decided to make the app just a one-time $0.99 purchase, which was pretty easy (you just click “paid” and enter the amount you want to sell it for), BUT if you want to sell it in the European Union, you need to have a public address and phone number for that. It took a few pieces of verification with a human to make that work. I have an LLC with which I do consulting work and used the registered agent’s information for that (that’s allowed!), so that my personal contact info wouldn’t be front-and-center in the App Store for all of Europe to see. The website part was the least of my worries, honestly. I love web dev. I threw together an Astro website with a link to the App Store, a Support page, and a Privacy Policy page, and plopped it onto my existing domain name ductts.app . One thing I did dive deep on, which was unnecessary but fun, was an Import Helper page to help make a Ductts-compatible spreadsheet for those who might already track their tears in a note on their phone. Making a date converter and a sample CSV and instructions felt like one of those things that maybe 2 people in the world would ever use… but I’m glad I did it anyway.
Finally, after getting alllll of this done, it was just waiting a few days until the app was finally up on the App Store, almost anticlimactically! While I waited I made a Product Hunt launch page , which luckily used all the same copy and images from the App Store, and it was fun to see it get to the #4 Health & Fitness app of the day on Product Hunt, and #68 in Health & Fitness on the App Store! I don’t expect much from Ductts, really. It was a time-consuming side project that taught me a ton about Expo, React Native, and shipping native apps, and I’m grateful for the experience. …plus now I can have some data on how much I cry. I’m a parent! It happens! Download Ductts , log your tears, and see ya next time.

Peter Steinberger 6 months ago

Poltergeist: The Ghost That Keeps Your Builds Fresh

Meet Poltergeist: an AI-friendly universal build watcher that auto-detects and rebuilds any project—Swift, Rust, Node.js, CMake, or anything else—the moment you save a file. Zero config, just haunting productivity.


My agentic coding methodology of June 2025

I was chatting with some friends about how I'm using "AI" tools to write code. Like everyone else, my process has been evolving over the past few months. It seemed worthwhile to do a quick writeup of how I'm doing stuff today. At the moment, I'm mostly living in Claude Code. My "planning methodology" is: "Let's talk through an idea I have. I'm going to describe it. Ask me lots of questions. When you understand it sufficiently, write out a draft plan." After that, I chat with the LLM for a bit. Then, the LLM shows me the draft plan. I point out things I don't like in the plan and ask for changes. The LLM revises the plan. We do that a few times. Once I'm happy with the plan, I say something along the lines of: "Great. now write that to as a series of prompts for an llm coding agent. DRY YAGNI simple test-first clean clear good code" I check over the plan. Maybe I ask for edits. Maybe I don't. And then I type to blow away the LLM's memory of this nice plan it just made. "There's a plan for a feature in . Read it over. If you have questions, let me know. Otherwise, let's get to work." Invariably, there are (good) questions. It asks. I answer. "Before we get going, update the plan document based on the answers I just gave you." When the model has written out the updated plan, it usually asks me some variant of "can I please write some code now?" "lfg" And then the model starts burning tokens. (Claude totally understands "lfg". Qwen tends to overthink it.) I keep an eye on it while it runs, occasionally stopping it to redirect or critique something it's done until it reports "Ok! Phase 1 is production ready." (I don't know why, but lately, it's very big on telling me first-draft code is production ready.) Usually, I'll ask it if it's written and run tests. Usually, it actually has, which is awesome. "Ok. please commit these changes and update the planning doc with your current status."
Once the model has done that, I usually it again to get a nice fresh context window and tell it "Read and do the next phase." And then we lather, rinse, and repeat until there's something resembling software. This process is startlingly effective most of the time. Part of what makes it work well is the CLAUDE.md file that spells out my preferences and workflow. Part of it is that Anthropic's models are just well tuned for what I'm doing (which is mostly JavaScript, embedded C++, and Swift.) Generally, I find that the size of spec that works is something the model can blaze through in less than a couple hours with a focused human paying attention, but really, the smaller and more focused the spec, the better. If you've got a process that looks like mine (or is wildly different), I'd love to hear from you about it. Drop me a line at [email protected].

Xe Iaso 8 months ago

Apple just Sherlocked Docker

EDIT(2025-06-09 20:51 UTC): The containerization stuff they're using is open source on GitHub . Digging into it. Will post something else when I have something to say. This year's WWDC keynote was cool. They announced a redesign of the OSes, unified the version numbers across the fleet, and found ways to hopefully make AI useful (I'm reserving my right to be a skeptic based on how bad Apple Intelligence currently is). However, the keynote slept on the biggest announcement for developers: they're bringing the ability to run Linux containers in macOS: The Containerization framework enables developers to create, download, or run Linux container images directly on Mac. It’s built on an open-source framework optimized for Apple silicon and provides secure isolation between container images. This is an absolute game changer. One of the biggest pain points with my MacBook is that the battery life is great...until I start my Linux VM or run the Docker app. I don't even know where to begin to describe how cool this is and how it will make production deployments so much easier to access for the next generation of developers. Maybe this could lead to Swift being a viable target for web applications. I've wanted to use Swift on the backend before but Vapor and other frameworks just feel so frustratingly close to greatness. Combined with the Swift Static Linux SDK and some of the magic that powers Private Cloud Compute , you could get an invincible server side development experience that rivals what Google engineers dream up directly on your MacBook. I can't wait to see more. This may actually be what gets me to raw-dog beta macOS on my MacBook. The things I'd really like to know: I really wonder how Docker is feeling, I think they're getting Sherlocked . Either way, cool things are afoot and I can't wait to see more.

Peter Steinberger 8 months ago

Migrating 700+ Tests to Swift Testing: A Real-World Experience

How I migrated over 700 tests from XCTest to Swift Testing across two projects, with AI assistance and systematic refinement

HeyDingus 9 months ago

7 Things This Week [#176]

A weekly list of interesting things I found on the internet, posted on Sundays. Sometimes themed, often not. 1️⃣ Nick Heer does the work in dismantling this sexist post regarding Apple getting its ass handed to it by Judge Gonzalez Rogers. [ 🔗 pxlnv.com ] 2️⃣ Whoa. Monty Python and the Holy Grail turned 50 this year! It still makes me laugh out loud every time I watch it (which you can do for free on YouTube). [ 🔗 kottke.org ] 3️⃣ I had no idea Taylor Swift was so web-forward right from the beginning. She had her music available to download from her website back in 2002 (when she was 13) and by 2003 had a ‘ Taylor Talk’ tab there — which I presume was an early blog before she had Tumblr. [ 🔗 webdesignmuseum.org ] 4️⃣ This restaurant is mind-blowing. It looks like a drawing inside! [ 🔗 kottke.org ] 5️⃣ The Baltimore Ravens went all out in their Severance -themed schedule reveal video. [ ▶️ youtube.com ] 6️⃣ BasicAppleGuy is trying a new approach to reader support in which all his wallpapers and other haberdashery remain free to everyone, but can also be purchased to easily download the files all at once. I like the idea and hope it’s successful for him! [ 🔗 basicappleguy.com ] 7️⃣ This overlapping version of “ Dear Theodosia” is beautiful. [ ▶️ youtube.com ] Thanks for reading 7 Things . If you enjoyed these links or have something neat to share, please let me know . And remember that you can get more links to internet nuggets that I’m finding every day by following me @jarrod on the social web. HeyDingus is a blog by Jarrod Blundy about technology, the great outdoors, and other musings. If you like what you see — the blog posts , shortcuts , wallpapers , scripts , or anything — please consider leaving a tip , checking out my store , or just sharing my work. Your support is much appreciated! I’m always happy to hear from you on social , or by good ol' email .

xenodium 9 months ago

Awesome Emacs on macOS

Update: Added macOS Trash integration. While GNU/Linux had been my operating system of choice for many years, these days I'm primarily on macOS. Lucky for me, I spend most of my time in Emacs itself (or a web browser), making the switch between operating systems a relatively painless task. I build iOS and macOS apps for a living, so naturally I've accumulated a handful of macOS-Emacs integrations and tweaks over time. Below are some of my favorites. For starters, I should mention I run Emacs on macOS via the excellent Emacs Plus homebrew recipe. These are the options I use: Valeriy Savchenko has created some wonderful macOS Emacs icons . These days, I use his curvy 3D rendered icon , which I get via Emacs Plus's option. It's been a long while since I've settled on using macOS's Command (⌘) as my Emacs Meta key. For that, you need: At the same time, I've disabled the ⌥ key to avoid inadvertent surprises. After setting ⌘ as Meta key, I discovered C-M-d is not available to Emacs for binding keys. There's a little workaround : You may have noticed the Emacs Plus option. I didn't like Emacs refocusing other frames when closing one, so I sent a tiny patch over to Emacs Plus , which gave us that option. I also prefer reusing existing frames whenever possible. Most of my visual tweaks have been documented in my Emacs eye candy post . For macOS-specific things, read on… It's been a while since I've added this, though vaguely remember needing it to fix mode line rendering artifacts. I like using a transparent title bar and these two settings gave me just that: I want a menu bar like other macOS apps, so I enable with: If you got a more recent Apple keyboard, you can press the 🌐 key to insert emojis from anywhere, including Emacs. If you haven't got this key, you can always , which launches the very same dialog. Also check out Charles Choi's macOS Native Emoji Picking in Emacs from the Edit Menu . 
If you prefer Apple's long-press approach to inserting accents or other special characters, I got an Emacs version of that . I wanted to rotate my monitor from the comfort of M-x, so I made Emacs do it . While there are different flavors of "open with default macOS app" commands out there (ie. crux-open-with as part of Bozhidar Batsov's crux ), I wanted one that let me choose a specific macOS app . Shifting from Emacs to Xcode via "Open with" is simple enough, but don't you want to also visit the very same line ? Apple offers SF Symbols on all their platforms, so why not enable Emacs to insert and render them? This is particulary handy if you do any sort of iOS/macOS development, enabling you to insert SF Symbols using your favorite completion framework. I happen to remain a faithful ivy user. Speaking of enabling SF Symbol rendering, you can also use them to spiff your Emacs up. Check out Charles Choi's Calle 24 for a great-looking Emacs toolbar. Also, Christian Tietze shows how to use SF Symbols as Emacs tab numbers . While macOS's Activity Monitor does a fine job killing processes, I wanted something a little speedier, so I went with a killing solution leveraging Emacs completions . Having learned how simple it was to enable Objective-C babel support , I figured I could do something a little more creative with SwiftUI, so I published ob-swiftui on MELPA. I found the nifty duti command-line tool to change default macOS applications super handy, but could never remember its name when I needed it. And so I decided to bring it into dwim-shell-command as part of my toolbox . I got a bunch of handy helpers in dwim-shell-commands.el (specially all the image/video helpers via ffmpeg and imagemagick). Go check dwim-shell-commands.el . There's loads in there, but here are my macOS-specific commands: Continuing on the family, I should also mention . While I hardly ever change my Emacs theme, I do toggle macOS dark mode from time to time to test macOS or web development. 
One last … One that showcases toggling the macOS menu bar (autohide) . While this didn't quite stick for me, it was a fun experiment to add Emacs into the mix . This is just a little fun banner I see whenever I launch eshell . This is all you need: I wanted a quick way to record or take screenshots of macOS windows, so I now have my lazy way , leveraging macosrec , a recording command line utility I built. Invoked via of course. If you want any sort of code completion for your macOS projects, you'd be happy to know that eglot works out of the box. This is another experiment that didn't quite stick, but I played with controlling the Music app's playback . While I still purchase music via Apple's Music app, I now play directly from Emacs via Ready Player Mode . I'm fairly happy with this setup, having scratched that itch with my own package. By the way, those buttons also leverage SF Symbols on macOS. While there are plenty of solutions out there leveraging the command line tool to reveal files in macOS's Finder, I wanted one that revealed multiple files in one go. For that, I leveraged the awesome emacs-swift-module , also by Valeriy Savchenko . The macOS trash has saved my bacon on more than one occasion. Make Emacs aware of it . Also check out . While elisp wasn't in my top languages to learn back in the day, I sure am glad I finally bit the bullet and learned a thing or two. This opened many possibilities. I now see Emacs as a platform to build utilities and tools off of. A canvas of sorts , to be leveraged in and out of the editor. For example, you could build your own bookmark launcher and invoke from anywhere on macOS. Turns out you can also make Emacs your default email composer . While not exactly an Emacs tweak itself, I wanted to extend Emacs bindings into other macOS apps. In particular, I wanted more reliable Ctrl-n/p usage everywhere , which I achieved via Karabiner-Elements . I also mapped to , which really feels just great!
I can now cancel things, dismiss menus, dialogs, etc. everywhere. With my Emacs usage growing over time, it was a matter of time until I discovered org mode. This blog is well over 11 years old now, yet still powered by the very same org file (beware, this file is big). With my org usage growing, I felt like I was missing org support outside of Emacs. And so I started building iOS apps revolving around my Emacs usage. Journelly is my latest iOS app, centered around note-taking and journaling. The app feels like tweeting, but for your eyes only of course. It's powered by org markup, which can be synced with Emacs via iCloud. Org habits are handy for tracking daily habits. However, it wasn't super practical for me as I often wanted to check things off while on the go (away from Emacs). That led me to build Flat Habits . While these days I'm using Journelly to jot down just about anything, before that, I built and used Scratch as scratch pad of sorts. No iCloud syncing, but needless to say, it's also powered by org markup. For more involved writing, nothing beats Emacs org mode. But what if I want quick access to my org files while on the go? Plain Org is my iOS solution for that. I'll keep looking for other macOS-related tips and update this post in the future. 
In the meantime, consider ✨ sponsoring ✨ this content, my Emacs packages , buying my apps , or just taking care of your eyes ;) dwim-shell-commands-macos-add-to-photos dwim-shell-commands-macos-bin-plist-to-xml dwim-shell-commands-macos-caffeinate dwim-shell-commands-macos-convert-to-mp4 dwim-shell-commands-macos-empty-trash dwim-shell-commands-macos-install-iphone-device-ipa dwim-shell-commands-macos-make-finder-alias dwim-shell-commands-macos-ocr-text-from-desktop-region dwim-shell-commands-macos-ocr-text-from-image dwim-shell-commands-macos-open-with dwim-shell-commands-macos-open-with-firefox dwim-shell-commands-macos-open-with-safari dwim-shell-commands-macos-reveal-in-finder dwim-shell-commands-macos-screenshot-window dwim-shell-commands-macos-set-default-app dwim-shell-commands-macos-share dwim-shell-commands-macos-start-recording-window dwim-shell-commands-macos-abort-recording-window dwim-shell-commands-macos-end-recording-window dwim-shell-commands-macos-toggle-bluetooth-device-connection dwim-shell-commands-macos-toggle-dark-mode dwim-shell-commands-macos-toggle-display-rotation dwim-shell-commands-macos-toggle-menu-bar-autohide dwim-shell-commands-macos-version-and-hardware-overview-info


Posting through it

I'm posting this from a very, very rough cut at a bespoke blogging client I've been having my friend Claude build out over the past couple days. I've long suspected that "just edit text files on disk to make blog posts" is, to a certain kind of person, a great sounding idea...but not actually the way to get me to blog. The problem is that my blog is...a bunch of text files in a git repository that's compiled into a website by a tool called "Eleventy" that runs whenever I put a file in a certain directory of this git repository and push that up to GitHub. There's no API because there's no server. And I've never learned Swift/Cocoa/etc, so building macOS and iOS tooling to create a graphical blogging client has felt...not all that plausible. Over the past year or two, things have been changing pretty fast. We have AI agents that have been trained on...well, pretty much everything humans have ever written. And they're pretty good at stringing together software. So, on a whim, I asked Claude to whip me up a blogging client that talks to GitHub in just the right way. This is the very first post using that new tool, which I'm calling "Post Through It." Ok, technically, this is the fourth post. But it's the first one I've actually been able to add any content to.

baby steps 11 months ago

Dyn async traits, part 10: Box box box

This article is a slight divergence from my Rust in 2025 series. I wanted to share my latest thinking about how to support for traits with async functions and, in particular, how to do so in a way that is compatible with the soul of Rust . Supporting in dyn traits is a tricky balancing act. The challenge is reconciling two key things people love about Rust: its ability to express high-level, productive code and its focus on revealing low-level details. When it comes to async functions in traits, these two things are in direct tension, as I explained in my first blog post in this series – written almost four years ago! (Geez.) To see the challenge, consider this example trait: In Rust today you can write a function that takes an and invokes and everything feels pretty nice: But what if I want to write that same function using a ? If I write this… …I get an error. Why is that? The answer is that the compiler needs to know what kind of future is going to be returned by so that it can be awaited. At minimum it needs to know how big that future is so it can allocate space for it. With an , the compiler knows exactly what type of signal you have, so that’s no problem: but with a , we don’t, and hence we are stuck. The most common solution to this problem is to box the future that results. The crate , for example, transforms to something like . But doing that at the trait level means that we add overhead even when you use ; it also rules out some applications of Rust async, like embedded or kernel development. So the name of the game is to find ways to let people use that are both convenient and flexible. And that turns out to be pretty hard! I’ve been digging back into the problem lately in a series of conversations with Michael Goulet (aka compiler-errors) and it’s gotten me thinking about a fresh approach I call “box box box”. The “box box box” design starts with the call-site selection approach.
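To make the tension concrete, here is a sketch in today's stable Rust (1.75+). The `Signal`/`Light` names and the tiny `block_on` executor are illustrative inventions, not the post's actual code; the boxed variant mirrors the transformation the async-trait crate performs:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Illustrative trait with an async method (names are made up).
trait Signal {
    async fn signal(&self) -> u32;
}

struct Light;

impl Signal for Light {
    async fn signal(&self) -> u32 { 42 }
}

// Static dispatch works: the concrete future type (and size) is known.
async fn ring_static(s: &impl Signal) -> u32 {
    s.signal().await
}

// Dynamic dispatch does NOT compile today: the future returned through
// `dyn` has no statically known size.
// async fn ring_broken(s: &dyn Signal) -> u32 { s.signal().await }

// The async-trait-crate-style workaround: always box the future.
trait BoxedSignal {
    fn signal(&self) -> Pin<Box<dyn Future<Output = u32> + '_>>;
}

impl BoxedSignal for Light {
    fn signal(&self) -> Pin<Box<dyn Future<Output = u32> + '_>> {
        Box::pin(async { 42 })
    }
}

// Now `dyn` works, at the cost of one heap allocation per call,
// paid even by callers who never use `dyn`.
async fn ring_dyn(s: &dyn BoxedSignal) -> u32 {
    s.signal().await
}

// Minimal single-threaded executor, just enough to drive these futures.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker { raw() }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = std::pin::pin!(fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}
```

The boxed trait is exactly the "overhead even when you use static dispatch" trade-off the post describes.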
In this approach, when you call , the type you get back is a – i.e., an unsized value. This can’t be used directly. Instead, you have to allocate storage for it. The easiest and most common way to do that is to box it, which can be done with the new operator: This approach is fairly straightforward to explain. When you call an async function through , it results in a , which has to be stored somewhere before you can use it. The easiest option is to use the operator to store it in a box; that gives you a , and you can await that. But this simple explanation belies two fairly fundamental changes to Rust. First, it changes the relationship of and . Second, it introduces this operator, which would be the first stable use of the keyword. It seems odd to introduce the keyword just for this one use – where else could it be used? As it happens, I think both of these fundamental changes could be very good things. The point of this post is to explain what doors they open up and where they might take us. Let’s start with the core proposal. For every trait , we add inherent methods to reflecting its methods: In fact, method dispatch already adds “pseudo” inherent methods to , so this wouldn’t change anything in terms of which methods are resolved. The difference is that is only allowed if all methods in the trait are dyn compatible, whereas under this proposal some non-dyn-compatible methods would be added with modified signatures. Change 0 only makes sense if it is possible to create a even though it contains some methods (e.g., async functions) that are not dyn compatible. This revisits RFC #255 , in which we decided that the type should also implement the trait . I was a big proponent of RFC #255 at the time, but I’ve since decided I was mistaken. Let’s discuss. The two rules today that allow to implement are as follows: The fact that implements is at times quite powerful.
It means for example that I can write an implementation like this one: This impl makes implement for any type , including dyn trait types like . Neat. Powerful as it is, the idea of implementing doesn’t quite live up to its promise. What you really want is that you could replace any with and things would work. But that’s just not true because is . So actually you don’t get a very “smooth experience”. What’s more, although the compiler gives you a impl, it doesn’t give you impls for references to – so e.g. given this trait: If I have a , I can’t give that to a function that takes an . To make that work, somebody has to explicitly provide an impl like , and people often don’t. However, the requirement that implement can be limiting. Imagine a trait like this: This trait has two methods. The method is dyn-compatible, no problem. The method has an argument and is therefore generic, so it is not dyn-compatible (well, at least not under today’s rules, but I’ll get to that). (The reason is not dyn compatible: we need to make distinct monomorphized copies tailored to the type of the argument. But the vtable has to be prepared in advance, so we don’t know which monomorphized version to use.) And yet, just because is not dyn compatible doesn’t mean that a would be useless. What if I only plan to call , as in a function like this? Rust’s current rules rule out a function like this, but in practice this kind of scenario comes up quite a lot. In fact, it comes up so often that we added a language feature to accommodate it (at least kind of): you can add a clause to your method to exempt it from dynamic dispatch. This is the reason that can be dyn compatible even when it has a bunch of generic helper methods like and . Let me pause here, as I imagine some of you are wondering what all of this “dyn compatibility” stuff has to do with AFIDT.
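Before getting to that answer, the escape hatch just mentioned can be sketched in today's stable Rust (trait and method names here are made up for illustration):

```rust
trait Store {
    // Dyn-compatible: fixed signature, dispatchable through a vtable.
    fn len(&self) -> usize;

    // Generic, so there is no single vtable entry for it. The
    // `where Self: Sized` clause exempts it from dynamic dispatch,
    // which keeps the trait as a whole usable as `dyn Store`.
    fn insert_all<I: IntoIterator<Item = u32>>(&mut self, items: I)
    where
        Self: Sized;
}

struct VecStore(Vec<u32>);

impl Store for VecStore {
    fn len(&self) -> usize {
        self.0.len()
    }

    fn insert_all<I: IntoIterator<Item = u32>>(&mut self, items: I)
    where
        Self: Sized,
    {
        self.0.extend(items);
    }
}

// Only `len` is callable through the vtable; calling `insert_all` on
// `dyn Store` is a compile error.
fn report(s: &dyn Store) -> usize {
    s.len()
}
```

This is the same mechanism that lets Iterator stay dyn-compatible despite its many generic helper methods.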
The bottom line is that the requirement that type implements means that we cannot put any kind of “special rules” on dispatch, and that is not compatible with requiring a operator when you call async functions through a trait. Recall that with our trait, you could call the method on an without any boxing: But when I called it on a , I had to write to tell the compiler how to deal with the that gets returned: Indeed, the fact that returns an but returns a already demonstrates the problem. All types are known to be and is not, so the type signature of is not the same as the type signature declared in the trait. Huh. Today I cannot write a type like without specifying the value of the associated type . To see why this restriction is needed, consider this generic function: If you invoked with an that did not specify , how could we determine the type of ? We wouldn’t have any idea how much space it needs. But if you invoke with , there is no problem. We don’t know which method is being called, but we know it’s returning a . And yet, just as we saw before, the requirement to list associated types can be limiting. If I have a and I only call , for example, then why do I need to know the type? But I can’t write code like this today. Instead I have to make this function generic, which basically defeats the whole purpose of using : If we dropped the requirement that every type implements , we could be more selective, allowing you to invoke methods that don’t use the associated type but disallowing those that do. So that brings us to the full proposal to permit in cases where the trait is not fully dyn compatible: A lot of things get easier if you are willing to call malloc. – Josh Triplett, recently. Rust has reserved the keyword since 1.0, but we’ve never allowed it in stable Rust. The original intention was that the term box would be a generic term to refer to any “smart pointer”-like pattern, so would be a “reference counted box” and so forth.
The keyword would then be a generic way to allocate boxed values of any type; unlike , it would do “emplacement”, so that no intermediate values were allocated. With the passage of time I no longer think this is such a good idea. But I do see a lot of value in having a keyword to ask the compiler to automatically create boxes . In fact, I see a lot of places where that could be useful. The first place is indeed the operator that could be used to put a value into a box. Unlike , using would allow the compiler to guarantee that no intermediate value is created, a property called emplacement . Consider this example: Rust’s semantics today require (1) allocating a 4KB buffer on the stack and zeroing it; (2) allocating a box in the heap; and then (3) copying memory from one to the other. This is a violation of our Zero Cost Abstraction promise: no C programmer would write code like that. But if you write , we can allocate the box up front and initialize it in place. The same principle applies to calling functions that return an unsized type. This isn’t allowed today, but we’ll need some way to handle it if we want to have return . The reason we can’t naively support it is that, in our existing ABI, the caller is responsible for allocating enough space to store the return value and for passing the address of that space into the callee, who then writes into it. But with a return value, the caller can’t know how much space to allocate. So they would have to do something else, like passing in a callback that, given the correct amount of space, performs the allocation. The most common case would be to just pass in . The best ABI for unsized return values is unclear to me, but we don’t have to solve that right now; the ABI can (and should) remain unstable. But whatever the final ABI becomes, when you call such a function in the context of a expression, the result is that the callee creates a to store the result.
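The stack-then-copy pattern can be written out in today's syntax (the function name is made up; whether the copy actually survives to the final binary depends on the optimizer):

```rust
// `Box::new` receives an already-built value as an argument, so
// semantically the zeroed 4 KB array is constructed on the stack first
// and then copied into the heap allocation. LLVM often elides the
// copy, but nothing guarantees it; a `box` expression could guarantee
// construction directly in the heap allocation.
fn make_buffer() -> Box<[u8; 4096]> {
    Box::new([0u8; 4096])
}
```

In debug builds especially, the intermediate stack array is real, which is the zero-cost-abstraction violation the post describes.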
If you try to write an async function that calls itself today, you get an error: The problem is that we cannot determine statically how much stack space to allocate. The solution is to rewrite to a boxed return value. This compiles because the compiler can allocate new stack frames as needed. But wouldn’t it be nice if we could request this directly? A similar problem arises with recursive structs: The compiler tells you As it suggests, to work around this you can introduce a : This, though, is kind of weird because now the head of the list is stored “inline” but future nodes are heap-allocated. I personally usually wind up with a pattern more like this: Now, however, I can’t create values with syntax and I also can’t do pattern matching. Annoying. Wouldn’t it be nice if the compiler could just suggest adding a keyword when you declare the struct: and have automatically allocate the box for me? The ideal is that the presence of a box is now completely transparent, so I can pattern match and so forth fully transparently: Enums too cannot reference themselves. Being able to declare something like this would be really nice: In fact, I still remember when I used Swift for the first time. I wrote a similar enum and Xcode helpfully prompted me, “do you want to declare this enum as ?” I remember being quite jealous that it was such a simple edit. However, there is another interesting thing about a . The way I imagine it, creating an instance of the enum would always allocate a fresh box. This means that the enum cannot be changed from one variant to another without allocating fresh storage. This in turn means that you could allocate that box to exactly the size you need for that particular variant. So, for your , not only could it be recursive, but when you allocate an you only need to allocate space for a , whereas a would be a different size. (We could even start to do “tagged pointer” tricks so that e.g. is stored without any allocation at all.)
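Today's manual workaround for such a self-referential enum is the classic cons list with an explicit Box (a sketch; under the proposal the Box would vanish from user code):

```rust
// Without indirection this fails with error[E0072]:
// "recursive type `List` has infinite size".
// enum List { Cons(u32, List), Nil }

// Today's fix: box the recursive field by hand.
enum List {
    Cons(u32, Box<List>),
    Nil,
}

fn sum(list: &List) -> u32 {
    match list {
        // The Box shows up in construction but, thanks to deref
        // coercion on the recursive call, stays out of the way here.
        // A declaration-site `box` keyword would hide it entirely.
        List::Cons(head, tail) => head + sum(tail),
        List::Nil => 0,
    }
}
```

Constructing a value still exposes the indirection: `List::Cons(1, Box::new(List::Nil))` rather than `List::Cons(1, List::Nil)`, which is exactly the ergonomic gap the proposal targets.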
Another option would be to have particular enum variants that get boxed, but not the enum as a whole. This would be useful in cases where you do want to be able to overwrite one enum value with another without necessarily reallocating, but you have enum variants of widely varying size, or some variants that are recursive. A boxed variant would basically be desugared to a variant whose payload is stored behind a `Box`. Clippy has a useful lint that aims to identify this case, but once the lint triggers, it’s not able to offer an actionable suggestion. With the `box` keyword there’d be a trivial rewrite that requires zero code changes. If we’re enabling the use of `box` elsewhere, we ought to allow it in patterns too. Under my proposal, `box` would be the preferred form, since it would allow the compiler to do more optimization. And yes, that’s unfortunate, given that there are 10 years of code using `Box::new`. Not really a big deal, though. In most of the cases we accept today, it doesn’t matter and/or LLVM already optimizes it. In the future I do think we should consider extensions to make `Box::new` (as well as `Rc::new` and other similar constructors) just as optimized as `box`, but I don’t think those have to block this proposal. Should this extend to other smart pointers? Yes and no. On the one hand, I would like the ability to declare that a struct is always wrapped in an `Rc` or `Arc`; I find myself hand-rolling that pattern all too often. On the other hand, `Box` is very special. It’s kind of unique in that it represents full ownership of the contents, which means a `Box<T>` and a `T` are semantically equivalent – there is no place you can use a `T` where a `Box<T>` won’t also work – unless… This is not true for `Rc` and `Arc` or most other smart pointers. For myself, I think we should introduce `box` now but plan to generalize this concept to other pointers later. For example, I’d like to be able to declare a struct as wrapped in some user-chosen pointer type, where that type would implement some trait to permit allocating, deref’ing, and so forth. The original plan for `box` was that it would be somehow type-overloaded. I’ve soured on this for two reasons.
First, type overloads make inference more painful and I think are generally not great for the user experience; I think they are also confusing for new users. Second, I think we missed the boat on naming. Maybe if we had built other smart-pointer names around “box”, the idea of “box” as a general name would have percolated into Rust users’ consciousness, but we didn’t, and it hasn’t. I think the `box` keyword now ought to be very targeted to the `Box` type. In my [soul of Rust blog post], I talked about the idea that one of the things that makes Rust Rust is having allocation be relatively explicit. I’m of mixed minds about this, to be honest, but I do think there’s value in having a property similar to `unsafe` – like, if allocation is happening, there’ll be a sign somewhere you can find. What I like about most of these proposals is that they move the `box` keyword to the declaration – e.g., on the struct/enum/etc. – rather than the use. I think this is the right place for it. The major exception, of course, is the “marquee proposal”, invoking async fns in dyn trait. That’s not amazing. But then… see the next question for some early thoughts. The way that Rust today automatically detects whether traits are dyn compatible, versus having it be declared, is, I think, not great. It creates confusion for users and also permits quiet semver violations, where a new defaulted method makes a trait no longer be dyn compatible. It has also been a source of a lot of soundness bugs over time. I want to move us towards a place where traits are not dyn compatible by default, meaning that `dyn Trait` does not implement `Trait`. We would always allow `dyn Trait` types, and we would allow individual items to be invoked so long as the item itself is dyn compatible. If you want to have `dyn Trait` implement `Trait`, you should declare it, perhaps with a keyword. This declaration would add various default impls.
This would start with the impl of `Trait` for `dyn Trait`. But also, if the methods have suitable signatures, it would include some of the impls you really ought to have to make a trait that is well-behaved with respect to dyn trait. In fact, if you add in the ability to declare a trait as boxed, things get very interesting. I’m not 100% sure how this should work, but what I imagine is that such a `dyn Trait` would be pointer-sized and implicitly contain a `Box` behind the scenes. It would probably automatically box the results of async fns when invoked through `dyn`. I didn’t include this in the main blog post, but I think together these ideas would go a long way towards addressing the usability gaps that plague `dyn Trait` today. Side note: one interesting thing about Rust’s async functions is that their size must be known at compile time, so we can’t permit alloca-like stack allocation.  ↩︎ The `box` keyword is in fact reserved already, but it’s never been used in stable Rust.  ↩︎ Hat tip to Michael Goulet (compiler-errors) for pointing out to me that we can model the virtual dispatch as inherent methods on types. Before, I thought we’d have to make a more invasive addition to MIR, which I wasn’t excited about since it suggested the change was more far-reaching.  ↩︎ In the future, I think we can expand this definition to include some limited functions that use `impl Trait` in argument position, but that’s for a future blog post.  ↩︎ I’ve noticed that many times when I favor a limited version of something to achieve some aesthetic principle, I wind up regretting it.  ↩︎ At least, it is not compatible under today’s rules. Conceivably it could be made to work, but more on that later.  ↩︎ This part of the change is similar to what was proposed in RFC #2027, though that RFC was quite light on details (the requirements for RFCs in terms of precision have gone up over the years and I expect we wouldn’t accept that RFC today in its current form).  ↩︎ I actually want to change this last clause in a future edition.
Instead of having dyn compatibility be determined automatically, traits would declare themselves dyn compatible, which would also come with a host of other impls. But that’s worth a separate post all on its own.  ↩︎ If you play with this on the playground, you’ll see that the memcpy appears in the debug build but gets optimized away in this very simple case; that can be hard for LLVM to do in general, since it requires reordering the allocation of the box to occur earlier and so forth. The `box` operator could be guaranteed to work.  ↩︎ I think it would be cool to also have some kind of unsafe intrinsic that permits calling the function with other storage strategies, e.g., allocating a known amount of stack space or what have you.  ↩︎ We would thus finally bring Rust enums to “feature parity” with OO classes! I wrote a blog post, “Classes strike back”, on this topic back in 2015 (!) as part of the whole “virtual structs” era of Rust design. Deep cut!  ↩︎
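To make the dyn-compatibility discussion above concrete: one of the impls a dyn-declared trait could generate is the forwarding impl that makes boxed trait objects usable wherever the trait is expected, and you can already write that by hand today. This is a sketch using a hypothetical `Greet` trait, not code from the post:

```rust
trait Greet {
    fn greet(&self) -> String;
}

struct English;
impl Greet for English {
    fn greet(&self) -> String {
        "hello".to_string()
    }
}

// The forwarding impl that makes Box<dyn Greet> itself implement
// Greet. Today you must write it yourself for every trait; a
// dyn-declared trait could generate it automatically.
impl<T: Greet + ?Sized> Greet for Box<T> {
    fn greet(&self) -> String {
        (**self).greet()
    }
}

fn announce(g: impl Greet) -> String {
    g.greet()
}

fn main() {
    let boxed: Box<dyn Greet> = Box::new(English);
    // Without the forwarding impl, this call would not compile:
    assert_eq!(announce(boxed), "hello");
}
```

The `?Sized` bound is what lets the blanket impl cover `Box<dyn Greet>` as well as `Box<English>`, which is exactly the kind of boilerplate the proposal would absorb.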

seated.ro 11 months ago

If you don't tinker, you don't have taste

Growing up, I never stuck to a single thing, be it guitar lessons, art school, martial arts – I tried them all. When it came to programming, though, I never really tinkered. I was always amazed by video games and wondered how they were made, but I never pursued that curiosity. My tinkering habits picked up very late, and now I cannot go without picking up new things in one form or another. It’s how I learn. I wish I had started sooner: it’s a major part of my learning process now, and without it I would never have become the programmer I am today. Have you ever spent hours tweaking the mouse sensitivity in your favorite FPS game? Have you ever installed a Linux distro and spent days configuring window managers, not because you had to, but purely because it gave you satisfaction and made your workflow exactly yours? Ever pulled apart your mechanical keyboard, swapped keycaps, tested switches, and lubed stabilizers just for more thock? That is what I mean. I have come to understand that there are two kinds of people: those who do things only if it helps them achieve a goal, and those who do things just because. The ideal, of course, is to be a mix of both. “when you tinker and throw away, that’s practice, and practice should inherently be ephemeral, exploratory, and be frequent” – @ludwigABAP There are plenty of people who still use the VSCode terminal as their default terminal, do not know what vim bindings are, and use GitHub Desktop rather than the CLI (at the very least). I’m not saying these are necessarily bad things, just that this should be the minimum, not the median. This does not mean I spend every waking hour fiddling with my Neovim config. In fact, the last meaningful change to my config was 6 months ago. Finding that balance is where most people fail. Over the years I have done so many things that in hindsight have made me appreciate programming more but were completely “unnecessary” in the strict sense.
In the past week I have, for the first time, written a GLSL fragment shader, a Rust procedural macro, templated C++, and a Swift app; furthered my hatred for Windows development (this is not new); and started using the Helix editor more (mainly for its good defaults and speed). I didn’t have to do these things, but I did, for fun! And I know more about these things now. No time spent learning is time wasted. Acquiring good taste comes through trying various things, discarding the ones you don’t like and keeping the ones you do. If you never try various things, you will not acquire good taste. And what I mean by taste here is simply the honed ability to distinguish mediocrity from excellence. This will be highly subjective, and not everyone’s taste will be the same, but that is the point: you should NOT have the same taste as someone else. Question the status quo, experiment, break things, do this several times, do this every day and keep doing it.

maxdeviant.com 1 year ago

2024 in Review

In a rare turn of events, I'm writing this year-in-review in advance of the last few hours of the year. Normally I end up spending New Year's Eve writing it as I rush to publish by midnight. As I look back on this year and try to remember what all transpired—a process that is hampered by a frustrating lack of notekeeping on my part—I'm left feeling like there wasn't all that much. Of course, I know this not to be true. Plenty of things happened , but not many that make for tidy bullet points in an itemized record of the year. In many ways this year has felt like stasis, with not much to show in terms of outwardly-visible signs of progress. Internally, I've been constantly embroiled in battle with my inner thoughts and demons. This unending fight has taxed me both emotionally and physically, and has often left me with little left to give to my family, friends, and my work. Working on myself has taken up the vast majority of my time and energy this year. During one particularly rough bout I wrote: I can't think of a time I've been more exhausted than I have been this past week. Sure, there have been other times where I've felt downtrodden by my emotions and heavy thoughts, but there is something so tangibly exhausting about having to face them head-on. I suppose like pretty much everything else in life, forward progress takes work. It's easier to stay in one place—even if that place is miserable—than it is to take action and move forward. In the face of all this, I've tried to enjoy the little things when I can find them: I turned 30 this year and am still trying to determine how I feel about it. One recurring theme so far has been reflecting on what I want to do today so that I don't look back and wish I had started it today. I've found maintaining this future-oriented outlook to be quite difficult when dealing with a multitude of things in the moment. 
It reminds me of when I first started learning to drive and I was always looking just a car or two ahead of me (on account of being deathly afraid of hitting them). It wasn't until I took the Pennsylvania Motorcycle Safety Program 3 and was taught to look ahead towards your destination that I realized how much of a difference it makes in the awareness of your surroundings. For motorcycles, in particular, looking right in front of you is actually more detrimental than in a car. For instance, looking directly ahead of you when going into a curve instead of looking through the curve can actually negatively impact your ability to maintain your balance on the bike. Point being, when all your attention is focused on the here and now, it can be easy to forget to look ahead and see what adjustments need to be made for a better outcome down the road. This year marked ten years of this website being online in some shape or form. I had originally intended to write a "10 Years of maxdeviant.com" post, or something of that nature, but the aforementioned struggles of this year got the best of me. I did, however, ship a rebuild of my site this year. This site is now built by a bespoke static site generator, leveraging Razorbill , and I am excited by the possibilities this affords for the future. This was my first year using Rust in a professional capacity, and I could not be happier about it. It's been everything that I had hoped for, and more. I've observed that, for the first time in my career, the language I'm using largely fades away. I find that I can focus on the problem at hand without being abruptly pulled out of my flow state by reaching for a language feature that doesn't exist. This is something that has routinely frustrated me when working with other languages, and it's a welcome change to have the set of language features that I want at my disposal. A note on compile times: the rumors are true. Rust can be quite slow to compile once a codebase reaches a certain size. 
The Zed codebase, for instance, can be a real bear at times. For smaller projects, like my personal ones, I find that compile times are a non-issue. I do hope that further inroads can be made towards improving this, but I find that sacrificing a bit of compilation speed for all the other benefits Rust provides to be a no-brainer. Lastly, in September I attended RustConf 2024 along with the rest of the Zed team. I had a great time and I enjoyed getting to talk to so many fellow Rustaceans. It's hard to believe that we only open-sourced Zed in January of this year! That moment feels like forever ago, and so much has happened since then. Extension support —a feature I helped build and am deeply proud of—didn't even exist until February. Zed has come a long way this year. It's been a labor of love and tenacity by the entire team, all of whom I feel incredibly lucky to work with day-to-day. The level of talent and commitment to the craft embodied by my teammates is a sight to behold. There's still a lot to be done to make it possible for everyone to feel at home in Zed, but I'm confident that we're up to the task. For a look back at everything that happened in the Zediverse this year, check out the Zed 2024 Recap . As always, here's an assortment of stats from this year. I had an unbroken streak on GitHub of 193 days, from April 1st to October 11th. It would be even longer if I hadn't skipped that one day, but alas. I'm still quite pleased with my contribution chart: It's been a good year for me in the Zed repository as well: Sadly, GitHub no longer shows lines added/deleted once the commit count exceeds 10,000. This year I wrote 6,804 words across my various writings (not including this post). I'd like to bring this number up next year. My music listening was, once again, down from the previous year. 
I think this can be partly attributed to the change in work environment: we have a very pairing-heavy culture at Zed, and I can't listen to music while I'm pairing with someone. Here are the albums that I listened to the most this year: If there is one thing I am leaving 2024 with, it's a renewed desire for finding balance in my life. The pendulum continues to swing too far in either direction, dragging me with it from one extreme to another. I came into the year with a goal of "devising a system for sustaining my ideal lifestyle", and I have yet to achieve it. To all of you who have been there for me this year: thank you. I know I've been distant for much of it, so I deeply appreciate your steadfast camaraderie in spite of that. I look forward to what the new year will bring. The extended editions, naturally. It was 3 times in Fellowship, 4 times in Two Towers, and 3 more times in Return of the King. As much as I would love to claim the title of "motorcycle rider" in the hopes of sounding cool, I never did end up finishing the course. 
- Having my siblings over for a Lord of the Rings 1 marathon and keeping notes on how many times I tear up or cry 2
- Exchanging Strands and Connections results with Heather, and commiserating when the NYT makes them extra difficult
- Taking walks around my neighborhood where I've lived for 7 years and have yet to fully explore
- Hiking in the Great Smoky Mountains in Tennessee with nary a bar of cell phone service
- Sitting in my darkened sunroom during a thunderstorm-induced power outage sipping a Fat Tire while the lightning strikes periodically illuminate the room
- Hanging out in a Montreal coffee shop talking about Rust with some other engineers
- Spending a Sunday afternoon setting up a bird feeder next to my deck
- Watching from the kitchen window as the birds flit around said bird feeder

- BRAT - Charli xcx
- cold is the void - and all i can say is
- Still as the Night, Cold as the Wind - Vital Spirit
- Dance Fever (Complete Edition) - Florence + The Machine
- Autumn Eternal - Panopticon
- Wound - Despite Exile
- ERRA - ERRA
- Minecraft - Volume Beta - C418
- THE TORTURED POETS DEPARTMENT : THE ANTHOLOGY - Taylor Swift
- Cutting the Throat of God - Ulcerate
- End of the World - Searows
- Nature Morte - Penitence Onirique
- Fiction - Syncatto
- Illuminate - Harvs
- Space Diver - Boris Brejcha
- Every Sound Has A Color In The Valley Of Night - Night Verses
- Of Mice & Men - Of Mice & Men
- ONI//KIJO - Memorist
- Love Exchange Failure - White Ward
- Triade III : Nyx - Aara
