Latest Posts (20 found)

Orbital

Six people—four astronauts and two cosmonauts—circle the Earth. They may be among the last to do so, as the space station they live in is due to be dismantled. While they circle and observe, watching sunrise after sunset, seeing typhoons and dust storms wash across the surface below, another crew of astronauts takes off for the moon, passing them by. But their gaze remains stubbornly down, not out; down into the water and land and lights, into their own memories and histories, the deaths and lives that keep them tethered as certainly as gravity prevents them from falling away. A moving love letter to our one and only planet.


Ternus will succeed Cook as Apple’s CEO

I’m happy to see the rumors were true that John Ternus will succeed Tim Cook as Apple’s CEO in September:

Apple announced that Tim Cook will become executive chairman of Apple’s board of directors and John Ternus, senior vice president of Hardware Engineering, will become Apple’s next chief executive officer effective on September 1, 2026. The transition, which was approved unanimously by the Board of Directors, follows a thoughtful, long-term succession planning process.

While no one can know right now how his leadership will differ from Cook’s, Ternus appears to be a worthy candidate: product-focused, likable, and with a proven track record:

Ternus’s work on Mac has helped the category become more powerful and more popular globally than at any time in its 40-year history. That includes the recent introduction of MacBook Neo, an all-new laptop that makes the Mac experience even more accessible to more people around the world. This past fall, his team’s efforts were on full display with the introduction of a redefined iPhone lineup, including the incredibly powerful iPhone 17 Pro and Pro Max, the radically thin and durable iPhone Air, and the iPhone 17, which has been an incredible upgrade for users. Under his leadership, his team also drove advancements in AirPods to make them the world’s best in-ear headphones, with unprecedented active noise cancellation, as well as the capability to become an all-in-one hearing health system that can serve as over-the-counter hearing aids.

His personal quote in the press release is charming:

“I am profoundly grateful for this opportunity to carry Apple’s mission forward,” said Ternus. “Having spent almost my entire career at Apple, I have been lucky to have worked under Steve Jobs and to have had Tim Cook as my mentor. It has been a privilege to help shape the products and experiences that have changed so much of how we interact with the world and with one another. I am filled with optimism about what we can achieve in the years to come, and I am so happy to know that the most talented people on earth are here at Apple, determined to be part of something bigger than any one of us. I am humbled to step into this role, and I promise to lead with the values and vision that have come to define this special place for half a century.”

I’m on the record as being disappointed in Cook’s leadership of late, but he’s had a 15-year tenure — longer than any previous Apple CEO — with many ups and downs. His personal letter to the community (archived) is humanizing. I’m glad he wrote it:

This is not goodbye. But at this moment of transition, I wanted to take the opportunity to say thank you. Not on behalf of the company, this time, though there is a wellspring of gratitude for you that overflows inside our walls. But simply on behalf of me. Tim. A person who grew up in a rural place in a different time and, for these magical moments, got to be the CEO of the greatest company in the world. Thank you for the confidence and kindness you’ve shown me. Thank you for saying hi to me on the street and in our stores. Thank you for cheering alongside me when we unveiled a new product or service. Thank you, most of all, for believing in me to lead the company that has always put you at the center of our work. Every day we get up and think about what we can do to make your life a little bit better. And every day, you’ve made mine the best I could have asked for.
And let’s not forget Johny Srouji, who I would presume was on the short list of candidates for CEO, but has ended up as Chief Hardware Officer — a brand-new title made just for him:

Apple today announced that, effective immediately, Apple executive Johny Srouji will become chief hardware officer. Srouji, who most recently served as senior vice president of Hardware Technologies, will assume an expanded role leading Hardware Engineering, which John Ternus most recently oversaw, as well as the hardware technologies organization.

And some reactions from around the web:

- “I have been sitting on this title for years.” https://512pixels.net/2026/04/cook-out/
- “gotta catch ’em all” RE: https://techhub.social/@Techmeme/116438959705663471
- “…and now his watch has ended. Apple announces that John Ternus, senior VP of Hardware Engineering, will become Apple’s next CEO on September 1; Tim Cook will become executive chairman (Business Wire)” https://www.businesswire.com/news/home/20260420318241/en/ http://www.techmeme.com/260420/p24#a260420p24
- “Beginning in September, Apple will have had two CEOs named John, still behind the three going by Michael or Mike.”
- “Tim Cooked.”
- “We need fewer ‘Cooked’ puns and more ‘I’ll Ternus car right around’ puns.”
- “Mom says it’s my Ternus the CEO” https://www.apple.com/newsroom/2026/04/tim-cook-to-become-apple-executive-chairman-john-ternus-to-become-apple-ceo/
- “Assistant TO the regional CEO.”


Writing an LLM from scratch, part 32l -- Interventions: updated instruction fine-tuning results

I've been working on a GPT-2-small-style LLM based on Sebastian Raschka's book "Build a Large Language Model (from Scratch)", and have tried a bunch of different things to see if I could get it to approach the quality of the original OpenAI GPT-2-small, measured in terms of loss on a held-back test dataset. After working through them, in my last post, I managed to train one that was almost (if not quite) there.

Now, back before I started digging into these interventions, I was doing three evals for each model I built: a smoke test (to see if it could give a coherent completion to "Every effort moves you"), a test for that test set loss, and an instruction-following test that fine-tuned the model on the Alpaca dataset, got it to generate results for a test set of instructions, and then used an LLM as a judge to score them. The idea behind this was that the loss on the test set was an interesting technical measure of the quality of a model, but it didn't really tell us much about how useful it might be in reality.

Unfortunately, in January, I realised that my methodology was bad; because I was asking the LLM to score a model in isolation, the LLM's natural randomness would mean that results were not really comparable, at least for models that were reasonably close in quality. For example, if two models both gave the same (wrong but plausible) answer to a question, then one run of the instruction-following test might "find the judge LLM in a good mood" and get, say, 5% -- after all, the model tried to answer, and actually used a real person's name, even if the answer was totally wrong. But in another run, the judge might be in a "worse mood" and score it at 0%. My fix was to have two scripts:

- One that fine-tuned the model, got it to generate responses, then saved those responses in a file.
- One that took a bunch of files generated by the above, one for each of a set of different models, and presented them to the LLM together, so that it would (hopefully) be consistent in how it rated them relative to each other. (There's a rough sketch of this step below.)

The details are here. Because doing it that way was significantly more work, I've not been doing these tests as part of the interventions mini-series. I felt it would make more sense to wait until I'd tried a bunch of interventions and got a number of models to try. Now I have those, so let's give it a go!

At the end of the previous round of IFT tests, I had this table. It's sorted by the loss on the test set (shown to 3 decimal places), and has the score that the model got from an instruction fine-tuning run. There's a loose correlation where lower loss means a higher IFT score, with two weird exceptions: the two FineWeb-Edu training runs, which got much higher results than you'd expect from the loss. My working hypothesis was that there were two components that led to a model getting a good score:

- Its raw intelligence: lower-loss models were smarter, so they were better at instruction-following after the fine-tune.
- Its knowledge: all of the models -- mine and OpenAI's -- apart from the FineWeb-Edu ones were trained on what amounted to minimally-curated data from the Internet. But FineWeb-Edu is meant to be "the most educational" subset of FineWeb, so it presumably is more dense in useful facts.

So in those terms, the OpenAI models and Cloud FineWeb, 8x A100 40 GiB might be smart but not know very much, and the FineWeb-Edu ones might be dumb but knowledgeable. The ones in between, by contrast, could be relatively dumb too, but also not know very much. There was one other oddity: the Cloud FineWeb, 8x A100 40 GiB model seemed surprisingly good on the IFT results when considering its loss -- but perhaps there was some kind of step function, where as soon as a model got better than (say) 3.7 on the loss, it suddenly became smart in whatever way mattered. All very hand-wavy, of course, but it was a hypothesis of sorts. Would the new models fit that pattern? It was time to find out.

I didn't think it was worth adding all 14 models that I've trained in my intervention-testing to that table, so I decided to just add four of them:

- The baseline cloud-trained model for all of the interventions.
- The locally-trained version of the same -- the first model from this post.
- The best model we managed to get in the cloud.
- The best local model -- the second from this post.

Now, I already had files containing responses from fine-tuned versions of the other models, so I just needed to run the first of my two fine-tuning scripts against all four of the new models.
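For the curious, here's roughly what that second, comparative judging script might look like. This is a minimal sketch rather than the actual code: the JSON file format, the directory layout, and the scoring prompt are all assumptions, and the OpenAI Python client is used purely as an example judge.

```python
# Sketch: score several models' responses side by side, so that the judge
# rates them relative to each other rather than in isolation.
# Assumptions: one JSON file per model in responses/, each a list of
# {"instruction": ..., "response": ...} objects in the same order.
import json
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()

def judge_batch(instruction: str, answers: dict[str, str]) -> str:
    # Present every model's answer to one instruction in a single prompt.
    listing = "\n\n".join(f"Model {name}:\n{text}" for name, text in answers.items())
    prompt = (
        "You are grading language models on instruction following.\n"
        f"Instruction: {instruction}\n\n{listing}\n\n"
        "Score each model from 0-100, relative to the others, and reply "
        'as JSON: {"model name": score, ...}'
    )
    reply = client.chat.completions.create(
        model="gpt-5.4",  # the judge model used in this post
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

files = {p.stem: json.loads(p.read_text()) for p in Path("responses").glob("*.json")}
items = next(iter(files.values()))
for i, item in enumerate(items):
    answers = {name: rows[i]["response"] for name, rows in files.items()}
    print(judge_batch(item["instruction"], answers))
```

The important property is that all of the answers to a given instruction reach the judge in one prompt, so a "good mood" or a "bad mood" applies to every model equally.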
I did that, and then also tweaked the judge script so that instead of using GPT-5.1, it used GPT-5.4. If you run the script multiple times, each time will normally give you different scores anyway; hopefully the ranking will remain roughly the same. So given that I was going to have to re-run the script to get new aggregate results, and those would not really be comparable to the original ones anyway, this seemed like a reasonable price to pay for (hopefully) a smarter judge.

I ran that once, and got some results that surprised me -- so much that I decided to do three runs and see if the results stood up. They did; here's the new table, with scores for each run, the average, and the rank that each one got based on the average. You can see that relative rankings are fairly consistent across the IFT runs. But while in general the lower-loss runs get better IFT results, now there are even more exceptions to that trend than there were before. Let's look down the "IFT rank" column, which is based on the IFT average:

- The first surprise is a model with the fourth-best loss that turned out to be the worst of all of them on the instruction fine-tuning test! It was trained on exactly the same data as all of the others apart from the OpenAI ones and the FineWeb-Edu ones. Even more perplexingly, it was as close a match to its twin run as I could make it, but got completely different results. You might remember from that post that those two runs started with the same weights and had exactly the same training config; the only difference was that they were trained on different architectures, and one used DDP with a real global batch size of 96, while the other used gradient accumulation to get the same batch size.
- Another model also does much worse than you'd expect from its loss numbers; it's only a tiny bit worse than Cloud FineWeb, 8x A100 40 GiB in loss terms, but much worse on the IFT test. Again, this one is essentially a clone of another run -- the same training, but using DDP rather than gradient accumulation. The same problem, where one of a pair of closely-matched models has worse results on the IFT test -- but in this case, it's the gradient accumulation model that turned out bad.

That's a really odd situation. If the training runs using gradient accumulation rather than DDP had been consistently worse -- or vice versa -- then we could imagine some kind of connection. But in the first case, GA beat DDP, and in the second, it was the other way around.

Apart from that, we do still see that the two FineWeb-Edu models are doing much better than the others. And the remaining models are all pretty close together, both in terms of loss and in terms of their ranking, apart from the Local FineWeb train, which is bad in both. It is, however, interesting that Local FineWeb-Edu extended train, which was trained on twice as much data as Local FineWeb-Edu train, is consistently worse in terms of the IFT numbers. That wasn't the case in my tests previously.

All of this puzzled me. The "lots of knowledge makes a model better at this" idea seemed to be weakened by the relative ranks of the two FineWeb-Edu models (after all, if it was true, you'd expect the model trained on more data to be consistently better). And the "smart, low-loss models are better" side seemed to be contradicted by the bad results of the two surprise models above. What might be going on here?

Looking at the training code, one thing stood out to me. The process was:

- Fine-tune the model for a maximum of 100 epochs over the training set.
- If loss on a held-back validation set went above the result for the previous epoch, do an early exit and use the previous epoch's model for the generation of the responses.

(I've sketched this logic in code below.) In practice, the early-exit code always cut in pretty quickly. I'd noticed that during my original generation of the results for the new models: one of them took 6 epochs until validation loss started rising. I decided to regenerate responses for all of the models, and then run the new responses past the LLM judge again. But this time I would keep a record of how many epochs of training we got before the exit.

It was getting even harder to see any useful pattern! One thing that did stand out, though, was that the still oddly-high Cloud FineWeb, 8x A100 40 GiB model was being instruction-trained for seven epochs. It was also rather noticeable that the two FineWeb-Edu models had the same "advantage", if that's what it was. But the Local FineWeb train had seven epochs too and got a poor score, the OpenAI models only got two each and led the pack, and one of the surprise models got a pretty poor result given its six epochs of training.

Still, what would happen if we got rid of that confounder? I did yet another set of runs; this time, I changed the fine-tuning/generation script to always do four epochs -- no early exit. I chose four because it was the modal number in the previous trains -- no strong reason for it beyond that. Here's what came out at the end: still no obvious pattern.
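For concreteness, here's a sketch of that early-exit loop. It's my reconstruction of the process described above, not the actual training code; the data loaders and the model are assumed to be standard PyTorch objects, with the model returning logits of shape (batch, sequence, vocab):

```python
# Sketch: instruction fine-tuning with early exit. Train for up to 100 epochs;
# if validation loss rises versus the previous epoch, stop and roll back to
# the previous epoch's weights, recording how many epochs actually ran.
import copy
import torch
from torch import nn

MAX_EPOCHS = 100

def fine_tune_with_early_exit(model: nn.Module, train_loader, val_loader, lr=5e-5):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    prev_val = float("inf")
    prev_state = copy.deepcopy(model.state_dict())
    epochs_run = 0

    for epoch in range(MAX_EPOCHS):
        model.train()
        for inputs, targets in train_loader:
            optimizer.zero_grad()
            logits = model(inputs)  # (batch, seq_len, vocab_size)
            loss = loss_fn(logits.flatten(0, 1), targets.flatten())
            loss.backward()
            optimizer.step()

        model.eval()
        with torch.no_grad():
            val = sum(
                loss_fn(model(x).flatten(0, 1), y.flatten()).item()
                for x, y in val_loader
            ) / len(val_loader)

        if val > prev_val:
            break  # validation loss rose: early exit
        prev_val = val
        prev_state = copy.deepcopy(model.state_dict())
        epochs_run = epoch + 1

    model.load_state_dict(prev_state)  # use the last good epoch's model
    return model, epochs_run
```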
What if we try seven epochs of training for all of them, so that they all get as much "benefit" (if that's what it is) as the FineWeb-Edu models? Just as confused as ever... Here's a table with all of the ranks we got from these tests. It's hard to draw much sense out of this, but a few things are clear:

- Performance on this test is correlated with loss, but it's far from the only factor (see the sketch just below this list).
- The OpenAI weights consistently lead the pack.
- Of our own models, Cloud FineWeb, 8x A100 40 GiB and Local FineWeb-Edu train do pretty well.
- Strangely, Local FineWeb-Edu extended train, which is just Local FineWeb-Edu train trained on a further 3B tokens of the FineWeb-Edu dataset, is consistently worse than the model it was based on.
- The two surprise models are consistently bad, and Cloud FineWeb, 8x A100 80 GiB is also not great.
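One way to put the first of those bullets on a firmer footing would be a rank correlation between test-set loss and average IFT score. A minimal sketch; the numbers here are invented placeholders, not the values from the tables:

```python
# Sketch: quantify how well test-set loss predicts IFT score with a
# Spearman rank correlation. Placeholder numbers, not the post's data.
from scipy.stats import spearmanr

test_losses = [3.48, 3.61, 3.67, 3.70, 3.74, 3.81, 3.95]  # lower is better
ift_scores = [41.0, 30.5, 22.0, 12.5, 31.0, 33.5, 14.0]   # higher is better

rho, p_value = spearmanr(test_losses, ift_scores)
# A rho near -1 would mean lower loss reliably predicts a higher IFT score;
# values closer to 0 reflect the "other factors" discussed above.
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```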
On the one hand, training different models for different numbers of epochs feels wrong for an evaluation like this, as they're being "treated differently". On the other hand, if it's meant to be a good evaluation of model usefulness in the real world, then individual models would be fine-tuned for different amounts of time, depending on validation loss. So perhaps it is better? But the differing results are still quite a puzzle.

I figured that a modern AI could easily build me a data exploration interface, specifically for the original results and the seven-epoch ones, so I asked Claude and got this rather nice one. After poring over that, though, I couldn't find a smoking gun -- for example, some kind of systematic error that was consistently pulling one model's score down.

I think that the best -- albeit hand-wavy and incomplete -- mental model that I have right now is something like this. If we consider the loss landscape that these models are all in, they've all been trained to try to get to a place with as low loss as we could manage. When we do the instruction fine-tune on them, we're changing the landscape -- the objective of "be better at following instructions" is different to "be better at minimising loss". Now, those two landscapes could be completely different! You can imagine a task that we might set instead of instruction-following that could be completely uncorrelated with loss minimisation, or even inversely correlated. But instruction-following is relatively close; it at least shares features like "generate coherent text". So when we do the instruction fine-tuning, what we're trying to do is to move from the place where the model ended up after its pre-training to a place where performance on the new goal -- instruction-following -- is best.

Here's where I'm going to get more than a bit hand-wavy. You can easily imagine that at some places where the loss was low, there might be downhill slopes pointing towards good locations in the new instruction-following landscape. With instruction fine-tuning, you'd be able to get a good IFT model. But other places with low loss might not have that advantage; maybe they're at or near a poor "local minimum" in the IFT landscape -- that is, a place where there is no downhill route to a better place. So simple fine-tuning like this might never get a good result!

With this mindset, we might say that the OpenAI weights are pretty well-positioned, not just in the loss landscape but also in the IFT landscape. The FineWeb-Edu models happened to get lucky, and wound up in a place that (despite having poor loss) is well-positioned for the IFT objective. And by contrast, the two surprise models were just unlucky: they got to a place where the loss landscape was not well-correlated with the IFT landscape. This seems plausible enough for me to use it as my working model for now, and see if I can work out some way to test it. Keeping track of the validation loss during the instruction fine-tuning process would certainly be a good start; unfortunately I only realised that after doing all of the tests above, and re-doing them would be quite a lot of work.

One final thing is worth repeating. Our two "unlucky" models each had a twin: the former was the DDP-trained counterpart of a gradient-accumulated run, while the latter was the gradient-accumulated counterpart of a DDP run. So while something odd clearly happened, it doesn't look like DDP or gradient accumulation by themselves are the culprit.

I think that at this point, it's best for me to draw a line under this -- I have a bunch of other things I'd like to get to, and this is a bit of a side quest at this point. Still, I have one main takeaway: chasing lower loss is technically interesting, but it is not the only goal. In some cases, it seems likely that lower-loss models can be worse for actual use.

Coming up next: I'm going to wrap up this "interventions" mini-series, and move on to the final steps in my LLM from scratch journey. See you then!


My Everyday Carry

Thought it would be fun to do a simple everyday carry post! Here's what I typically have in my backpack.

This is a very recent addition and replaced the iPod Classic I was carrying. I'll be honest, I really like this little guy. It's an MP3/FLAC/AAC player themed like a tape player. It's made of metal, super lightweight, and has good battery life. While I love the iPod, I haven't had time to mod it beyond Rockbox, so it's on the original HDD and battery. The Echo gives me an SD card slot, USB-C charging, Bluetooth, and a fresh battery. The UI is clunkier than an iPod's, but it's good enough.

I added these to the cart when buying the Echo, not expecting much at the low price, but they have blown me away. Seriously, these sound way better than they should. They do feel super cheap and flimsy, reminiscent of the headphones we had in the computer lab during grade school, but that does mean they are very lightweight.

My current primary machine, recently replacing my MacBook Pro M1 as I've gone all in on Linux. I absolutely wouldn't recommend this laptop due to the horrible webcam/microphone/speakers, the nearly unusable trackpad (hence the next item), a USB-C port that barely works, and overall poor build quality. I did make it slightly more bearable by upgrading the screen. Specs below for those interested!

Old mouse that I've had for ages, but I love it. Feels great, the battery lasts a while, and it's the perfect substitute for my awful trackpad!

Got this wallet on Amazon; honestly I don't know the brand (probably one of those popup brands that disappears in a week). It's decent enough quality, has a money clip, and a feature where you push a lever and your cards fan out. The best feature, though, is the AirTag holder: no more searching the house for my wallet!

I've had this guy for a while, but it's just recently dethroned my heavily modded GBA SP thanks to Allium OS. Allium provides a simple UI that gets out of the way. The killer feature for me is the "Guides" functionality. While playing a game, you can quickly pop up a walkthrough on screen. It remembers where you've scrolled between play sessions, making it perfect for RPGs. The battery and AUX jack are nice enhancements over my GBA SP as well (who knew adding a bright display and underglow to the SP would kill battery life??).

I bought the Nomad on preorder before it released, and have loved it since. I use it to sketch designs, take notes, and read books. I love the design (I have the crystal clear one) and the fact that it's repairable, offline, and subscription-free.

I love my Palm Pilots, and the C is the one I most use as an everyday carry. The keyboard is absolutely a killer feature. I use this guy to track calories, store reminders, manage my calendar, and write a journal. The OS is fast and just works. Man, I wish they made modern Palm Pilots!

It's overkill and honestly I probably didn't need it, but I love the bigger design of the Ultra. I use it for swimming, running, and cycling.

I tried switching to Android for a while with a Pixel 9 Pro and I just hated it. To me, Android feels like KDE: super powerful, full of features, customizable, and ugly. I wish I could love it, but I care a lot about good, consistent design, and GNOME+iOS are clear winners in this category (in my opinion).

That's it for me! I'd love to know what everyone else deems worthy of carrying on their back.


Exclusive: Microsoft To Shift GitHub Copilot Users To Token-Based Billing, Tighten Rate Limits

Note: Microsoft has now confirmed some of these details in a blog post.

Leaked internal documents viewed by Where’s Your Ed At reveal that Microsoft intends to pause new signups for the student and paid individual tiers of its AI coding product GitHub Copilot, tighten rate limits, and eventually move users to “token-based billing,” charging them based on what their token burn actually costs. The document says that although token-based billing has been a top priority for Microsoft, it became more urgent in recent months, with the week-over-week cost of running GitHub Copilot nearly doubling since January.

The move to token-based billing will see GitHub users charged based on their usage of the platform, and how many tokens their prompts consume — and thus, how much compute they use. It’s unclear at this time when this will begin. This is a major move, reflecting the significant cost of running models on any AI product. Much like Anthropic, OpenAI, Cursor, and every other AI company, Microsoft has been subsidizing the cost of compute, allowing users to burn way, way more in tokens than their subscriptions cost.

The party appears to be ending for subsidized AI products, with Microsoft’s upcoming move following Anthropic’s (per The Information) recent changes shifting enterprise users to token-based billing as a means of reducing its costs.

GitHub Copilot currently has two tiers for individual developers — a $10-per-month package called GitHub Copilot Pro, and a $39-a-month subscription called GitHub Copilot Pro+. According to the leaked documents, both of these tiers will be impacted by the signup pause, as will the GitHub Copilot Student product, which is included within the free GitHub Education package.

According to the documents, Microsoft also intends to tighten rate limits on some Copilot Business and Enterprise plans, as well as on individual plans, where limits have already been squeezed, and plans to suspend trials of paid individual plans as it attempts to “fight abuse.” Although Microsoft has regularly tweaked the rate limits for individual GitHub Copilot accounts, most recently at the start of April, the document notes that these changes weren’t enough, and that more rate-limit changes are to come in the next few weeks.

As part of this cost-cutting exercise, Microsoft intends to remove Anthropic’s Opus family of AI models from the $10-per-month GitHub Copilot Pro package altogether. Microsoft most recently retired Opus 4.6 Fast at the start of April for GitHub Copilot Pro+ users, although this decision was framed as a way to “further improve service reliability” and “[streamline] our model offerings and focusing resources on the models our users use the most.”

Other Opus models — namely Opus 4.6 and Opus 4.5 — will be removed from the GitHub Copilot Pro+ tier in the coming weeks, as Microsoft transitions to Anthropic’s latest Opus 4.7 model. The move towards Opus 4.7 will likely see GitHub Copilot Pro+ users reach their usage limits faster. Microsoft is offering a 7.5x request multiplier until April 30 — although it’s unclear what the multiplier will be after this date. This might sound like a good thing, but it means that each request using Opus 4.7 counts as 7.5 requests. Redditors immediately worked that out and are a little bit worried.

Premium request multipliers allow GitHub to reflect the cost of compute for different models. LLMs that require the most compute have higher premium request multipliers than those that are comparatively lightweight. For example, the GPT-5.4 Mini model has a premium request multiplier of 0.33 — meaning that every prompt is treated as one-third of a premium request — whereas the now-retired Claude Opus 4.6 Fast had a 30x multiplier, meaning each request was treated as thirty of them. The standard version of Claude Opus 4.6 has a premium request multiplier of three — meaning that, even with the promotional 7.5x multiplier, Claude Opus 4.7 is 2.5 times as expensive to use.
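To make the multiplier arithmetic concrete, here’s a quick sketch. The 300-request monthly allowance is a hypothetical figure for illustration; the multipliers are the ones reported in this piece:

```python
# Sketch: how premium request multipliers translate into effective usage.
# MONTHLY_ALLOWANCE is hypothetical; the multipliers are those reported above.
MULTIPLIERS = {
    "GPT-5.4 Mini": 0.33,            # each prompt counts as a third of a request
    "Claude Opus 4.6": 3.0,          # each prompt counts as three requests
    "Claude Opus 4.6 Fast": 30.0,    # now retired
    "Claude Opus 4.7 (promo)": 7.5,  # promotional multiplier until April 30
}

MONTHLY_ALLOWANCE = 300  # hypothetical premium requests per month

for model, multiplier in MULTIPLIERS.items():
    prompts = MONTHLY_ALLOWANCE / multiplier
    print(f"{model:>24}: {prompts:7.1f} prompts before hitting the cap")

# Relative cost of promo-priced Opus 4.7 versus standard Opus 4.6:
print(f"Opus 4.7 promo vs Opus 4.6: {7.5 / 3.0:.1f}x as expensive per prompt")
```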
The announcements for all of these changes are scheduled to take place throughout the week.

If you liked this news hit and want to support my independent reporting and analysis, why not subscribe to my premium newsletter? It’s $70 a year, or $7 a month, and in return you get a weekly newsletter that’s usually anywhere from 5,000 to 18,000 words, including vast, detailed analyses of NVIDIA, Anthropic and OpenAI’s finances, and the AI bubble writ large. I recently put out the timely and important Hater’s Guide To The SaaSpocalypse, another on How AI Isn't Too Big To Fail, a deep (17,500 word) Hater’s Guide To OpenAI, and just last week put out the massive Hater’s Guide To Private Credit. Subscribing to premium is both great value and makes it possible to write these large, deeply-researched free pieces every week.

In summary:

- Internal documents reveal that Microsoft plans to temporarily suspend individual account signups to its GitHub Copilot coding product, as it transitions from requests (single interactions with Copilot) towards token-based billing.
- The documents reveal that the weekly cost of running GitHub Copilot has doubled since the start of the year.
- Microsoft also intends to tighten the rate limits on its individual and business accounts, and to remove access to certain models for those with the cheapest subscriptions.


Stories from Alaska Folk Fest 2026

[Folk Fest] is not an intellectual experience, it’s an emotional experience.

Visiting Alaska gives me the feeling that people are chasing after when they travel: a little taste of what it’s like to be a part of another world. To live another version of life. Not just looking at it or fantasizing about it (which are fun too), but getting to live it for a little while.

I’m lucky enough to have visited Juneau a number of times. My friend Justin Shoman lives there. President of the radio station. His deep connection with the community makes the trip more fun than it might be otherwise, as I get to sidecar on all that community goodness. Last year, I came up for the 50th annual Folk Fest, and it was a no-brainer to come back for the 51st.

The 50th was such a milestone that documentarian Paige Sparks took the opportunity to make a literal movie about it, “50 Years of Folk Fest”. I caught a screening of it at KTOO and got to briefly meet Paige, who did a wonderful job. The documentary was a brisk 50 minutes and managed to explain the history without being boring, like how the original bylaws of the organization require the event to be free. It spotlighted some long-timers with zinger quotes, like the one at the top of this blog post, then focused on some of the new faces of Folk Fest, like Taylor Dallas and Annie Bartholomew, giving it modern relevance and freshness. A great thread in the documentary featured an awkward fella struggling with his own musical abilities and belonging. He blossomed into performing a really lovely original folk song that couldn’t have fit in anywhere better than Folk Fest. OH, I’M ALSO IN IT. There is a quick moment from an old-time jam at Amalga Distillery where you can see the back of my head. I loved that jam dearly last year and was sad that Amalga didn’t do it this year. They had make-your-own peanut butter and jam sandwiches (get it). C’mon, that could have been a whole thing.

When I landed in Juneau and walked out of security, I was relieved to see that my favorite plaque is still there. Thanks, plaque. I can’t wait to check out those additional displays throughout the terminal.

I had some anxiety arriving. I didn’t get there until Thursday, DAYS LATE, so I had some FOMO — like I had already missed amazing opportunities. That feeling wore off quickly. I beelined it to Devil’s Club, where I had tons of great jams last year. There was a great jam going on as I got there, with Chaz from Ketchikan/Dude Mtn, Evan from Astoria/The Strongbacks, Rosemary from Fairbanks/Writing, and several others. Camaraderie was immediate.

My friends Amy, Roger, Dave, Dennis, and Laura were there, all from various cities in Oregon. I think it was a first for most of them. I haven’t talked to them since leaving, but Amy was dreaming of getting two hotel suites next year instead of just one. One morning, I jammed with them in their hotel suite. It was a weird jam in the key of E, with the fiddles in calico tuning, which is fairly unusual for Old Time. I was on guitar and loving it. Heidi from Fairbanks was there, whom I love because of her unabashed love of banjos. The more banjos the better in her world (there are plenty of situations where people like to keep it to one banjo). She’s also very good, so I learn a lot.

The book I read during the trip was an Alaska book I’ve been waiting to savor: Of Bears and Ballots. It delivers. It’s Heather Lende, of If You Lived Here, I’d Know Your Name fame.
I’ve read a lot of Alaska books, but nobody evokes the feeling you get there like Heather, even for a mere visitor like me. I also picked up The Tao of Raven, which I’ve only just started, but it opens with a lavishly wordy version of the fable where the Raven frees the sun, which I’m fond of. I have a version of the raven story that I typeset and letterpressed myself, and my mom watercolored over, in my guest bathroom at home.

Speaking of my banjo: I checked it on Alaska Airlines on the way up. I love my banjo, and it’s nice, but I’m not precious about it and don’t love schlepping things through airports. Some people gasp at the thought of checking an instrument. Well, here are some more points for their side. The peg for my 5th string must have loosened and straightened out, causing a buzz as the string went over the little mini nut. That’s not an acceptable state to leave the banjo in for Folk Fest, so I had Justin swing by a shop to grab some wood glue, then did emergency surgery on it. I yanked out the peg with channel locks, rotated it back correctly, then glued it up and hammered it back in. Not pretty, but it’s held up just fine since then.

A bar that doesn’t seem to officially participate in Folk Fest (but is at the heart of it anyway) is The Triangle. It ends up being kind of a home base, or where to go sit in lieu of any better idea. It’s a place that ends up generating memories for me. A drunk local buying us shots for listening to his life story. Two mandolin players trading fascinating chord-transition licks. A beautiful woman frantically trying to find her friends, only to be calmly distracted by the historical photos on the wall. I promised to tell her what I know of them when she came back, but alas.

One of the many cool things KTOO does, in addition to the studio-audience shows, documentary screening, and all that, is to put every main stage performance on the radio. Every second of it! Plus they stream it so people around the world can listen. Driving around, or if we happened to be at Justin’s spot, we’d usually have it on. One thing we caught that way was Sea of Heartbreak (feat. Katy Harris, Caroline Oakley, Reeb Willms, Ava Honey, Pharis Romero). Kind of a supergroup of old-time ladies. I only know exactly who it was because it was so good on the air that I looked it up on the official website.

One day, sitting at the Alaskan, I was chatting with the bartender, Morgan, who used to run the place. It seems people, bartenders especially, live in this palpable daze of excitement and exhaustion during Folk Fest. The next day, after a nice beach walk “up the road”, as they say, at Eagle Beach, we stopped into Squirez, a cozy little bar that overlooks Auke Bay. It was Morgan bartending again. There was an awful lot of bartender overlap like that. Just the night before, the day bartender at The Alaskan was working the door bar in the evening at The Crystal Saloon. Morgan is extra fun, though, as she travels a lot to interesting places and seems to be doing interesting things with her life, like starting a new gig at Uncruise. She also works at the Lucky Lady, although I didn’t see her there.

At Squirez, she did a little rave about what’s so great about Folk Fest. It’s the end of winter (this was a rough one up there), and it’s before the cruise ships come. So it’s a week that feels like a special treat just for the locals. A beautiful gift. Morgan was on the same flight out on Tuesday morning as I was.
It was nice to high-five our way out along with another friend (a board member of KTOO) I met at the corndog brunch, who had a daughter the same age as Ruby running around. That made me miss Ruby and think of my hope that Ruby and I get to share a love of music and community events one day.

One particularly fun live show was Raisin’ Holy Hell at The Crystal Saloon. There were a bunch of rowdy old-timers in the band (some faces I recognized from the documentary) who really got after it and made a ruckus of a show. They played classics like Angeline the Baker and Stickin’ to the Union, mixed with Sublime covers and modern shit like that to switch it up. They had a drummer and a solid bass player holding it all together and making it more than worthy of the killer night slot it had. The whole audience was super into it, and I was having a great time.

This feels weird to write, but one of the things that fed into the fun and the feeling of living a different life for a moment is that I’m essentially single now and approaching the point where I’d be ready to date (long story, private). Chatting with single strangers can have that hey, is this… something? feeling that can be exciting, if a little emotionally dangerous. In my real life, I’m a dad and a co-founder of a busy tech company, and I wouldn’t have it any other way. But once in a while, I can LARP as a freewheelin’ banjo-playin’ Alaskan.

Another day, I popped into The Alaskan only to be perfectly on time to catch The Strongbacks, a sea-shanty group of five dudes that I quite like, hosting a “vocal jam”. I was surprised at how many sea shanty enthusiasts showed up. Half the people in the audience were mouthing along to the songs. An Irish session in the back of the bar didn’t stop playing for them, which made me furious. I considered saying something, but ultimately chose not to, as somehow nobody else seemed to care. Not even the bartender? Perhaps, as this wasn’t an official show and the jam had just as much right to make sound, asking them to stop would have been an injustice in its own right. Whatever, I’m still mad about it. The beauty of unamplified harmonizing voices should always take precedence over a mediocre Irish session. Just move!

There is so much going on at Folk Fest, you’re definitely going to miss more than you catch, even if you shortlist stuff you’re especially interested in. Here’s my list of things I would have liked to do but just… didn’t get to:

- Lodestone library was hosting jams, and I peeked in and saw it, but I didn’t stop to jam, and should have.
- There is a new brewery in town, Harbor Mountain, that hosted stuff, but I never made it in there, even just to try a beer.
- I like the group Wool Pullers, who had a couple of shows, and I missed them both.
- I really wanted to see the band High Costa Living, featuring the exuberant powerhouse that is Collette Costa, but the line at the door for that show at The Red Dog Saloon was just insane (hundreds long?) seemingly the entire night.
- I missed the rad metal band Bards of Mendenhall.
- I missed The Red Hots (I should have gone to the live studio audience show at KTOO).
- I didn’t go to any dances. I’m dead scared of making a fool of myself at a dance, but I also want to get over it and do it.
- I didn’t do any workshops.
- I didn’t catch Caleb & Reeb, who had a LOT of shows. I saw them around a ton but didn’t see them play, other than Reeb’s Sea of Heartbreak thing. I’ve still never even met Caleb, who’s a bit of a hero to me. A little intimidating.
- I missed the Canadian tuxedo party.
- I missed the cosmic truckstop brunch thing.

That’s a big list. And yet: no regrets.

Bocca al Lupo hosts a Corndog Brunch at 11am on Saturday. I missed it last year, so I was glad to catch it this year. Arriving at 10:40am, there were already a few dozen people in line ahead of us. They passed out paper fliers detailing the gourmet corndogs that would be available. You were supposed to pass the paper back, but you could tell nobody wanted to actually be the one holding the paper. Way too much responsibility for a hungover Saturday morning. I had the elote and the honeybutter, both extraordinary, but I eyed up the pickle-style with envy. The cashier was drinking a Busch NA. It sounded good at the time, so I ordered one. She had brought it from home.

The band playing at the corndog brunch was The Heists (the last name of the lead couple), fleshed out by a great fiddler and bassist. Importantly: they replaced words in the songs with corndogs and corndog puns. Will the Circle Be a Corndog and the like. I would have liked to be consulted on this endeavor, as I like to think I could have gotten the corndog integration density even higher.
I recognized [Andrew] Heist from previous visits, as I think he played in the band Taking Care of Bluegrass, which I’d seen a couple of times and saw again on this trip, though he didn’t seem to be in it anymore. Possibly because he was in EVERY OTHER BAND. I saw them together again in The Boyfriend Girlfriend Bluegrass Band at the Alaskan. I saw him play with Raisin’ Holy Hell at The Crystal Saloon. I saw them in some very endearing moments in the documentary. I saw them play the main stage. I saw him out jamming. It’s a good thing they kick ass.

There were so many times I was doubled over with laughter on this trip. Maybe that, all things considered, was the best part. I’ve come to think that laughing is my #1 bucket filler. One night at dinner, there was an appetizer called “Bread and Bones” (which turned out to be a bone marrow thing), but we weren’t sure, so we just made silly guesses about what it might be, and I haven’t laughed that hard in a long time.

One day, sitting at Amalga (and I have absolutely no memory of how this came up), we opened up the Claude app on my phone and vibe-coded different trivia-style games. It competently crafted an “alive or dead” game with random celebrities, and we kept adding features and making variations. The new bar game is making your own.

Justin is seeing someone. It was lovely to meet her. We spent a lot of time all together as a group of three (plus dogs!). She was kind, endearing, funny, and up for anything. I’m glad to have made another friend. I think three can be a magic number. There are more personalities and things going on to play off of. I need to remember this more specifically for friend trips: 3-5 is a good number range.

Last year, for the 50th, the weather was shit. It was cold and rainy the entire time. That’s how it always is. I’m sure months of dark, wet weather generally have mental consequences for the locals, but it doesn’t seem to affect people’s moods during Folk Fest. There was a bit in the documentary where they are clear on the matter: it just doesn’t matter. Put on your coat.

That was put to the test this year in an interesting way. While there were still big piles of snow everywhere, it was kinda nice out. Twice! Blue skies; warm sun. I was curious whether people would take to the streets, with outside jams, impromptu parties, and such. There was a little of that. I saw a couple of jams move chairs outside or play on the concrete outside the Sealaska Heritage Museum. It was kinda fun, but it wasn’t some transformative thing for the festival. Again, the weather just doesn’t seem to matter much.

One of those nice days I popped into Devil’s Club to find the jam was Irish. Which is fine, but I’m not skilled enough in Irish to contribute much, and there’s usually enough going on that I don’t need to force it. There was another fella sitting there, I noticed, who had a fiddle case, and we got to talking; it turned out he played old-time like me. So we found a little stoop over by Deckhand Dave’s, he flipped over an old, dirty bucket, and we played old-time duets for a couple of hours. Didn’t even catch his name.

I only went to the main stage once this year. The very last night. There’s just so much to do, it’s not even weird to miss most of the main stage stuff.
One way to engage with Folk Fest is to hang out at the main stage primarily, and I’m sure a ton of people do that, but the musician types are always seeking out gigs and jams, and the younger crowd (and people who just don’t care that much about folk music) take the opportunity to enjoy all the great human energy downtown, bar hopping and seeing the many non-folk shows and such.

I’m so glad I went that last night, though. RO Shapiro had a powerful voice, sang beautiful songs alone on stage, and reminded us how important it is to support musicians. He had a wonderful song about how they all pass the same $20 bill around. I was stoked to see Riley Baugus, a banjo hero of mine. He was charming and funny and interesting in a way I definitely did not expect, and he managed to keep the huge audience captivated entirely alone with a banjo. He was there with The Red Hots, who I unfortunately missed.

Willi Carlisle closed it up, playing with a couple of multi-instrumentalists (one of whom I got to jam a little with, incredibly). Willi is a monster with a huge voice, huge personality, and huge opinions. He’s got a kind of old-timey way of speaking and choosing words. He felt like a modern embodiment of folk, blending instruments and styles that are quite different while carrying a consistent air of quality. He opened with a monster vocal-only The Ballad of Penny Evans, a Steve Goodman song about Penny, whose husband dies in Vietnam and who is none too happy about that. A song called Critterland brought out a surprise friend in a giant possum costume to wander the audience (gave me big Northern Exposure feels). My favorite was Big Butt Billy, an extra-folky guitar number about a kinda gender-neutral waiter at a diner with an ass so incredible Willi breaks down into exasperated spoken word in the middle of the song, finding different wild-eyed words to praise the ass.

One day in the afternoon, I was sitting in The Alaskan having a pint and waiting for Justin to get off work. There was a band setting up I’d never heard of: Big Sissy. Sisters from Connecticut. They played well and harmonized beautifully. I remember a First Aid Kit cover perfectly done. Fifteen minutes after their set was over, we had walked over to Griz Bar, and they all walked in. I got a chance to say hi and thank them for their amazing and unexpected set. It was a warm moment.

Another day, sitting on a stool at Griz Bar, there was a woman playing guitar really well and singing a Tom Waits cover. Rosemary was sitting in, putting in little fiddle fills. They came over to the bar, and I got to buy them a drink, and the world felt warm again for another moment. She then played another Tom Waits cover.

Yet another day at Griz, Dude Mountain was playing an acoustic set. It was packed, even in the drizzle. There was a large man dressed up as a kind of cartoon wizard. He didn’t look like he left the house much, honestly, but he was out now, and he brought his cat, which kinda crawled around on his shoulders. Then someone brought like a dozen Domino’s pizzas and passed them out for free.

I’d say food isn’t particularly notable in Juneau. I had a steak dinner at SALT one night. The service was good. We laughed our asses off at stupid jokes. The steak was good, but everything else was fairly poor, honestly. They put this huge dollop of horseradish on my plate, camouflaged next to the au gratin potatoes, and I accidentally ate the entire thing. It was a real mouth problem for a minute there.
My bad, I guess, but like, isn’t this a plating UX issue? I had a Pickle Rick at The Hanger. The Cubano at Devil’s Club. The Taco Bell replica Crunchwrap Supreme at the Imperial (regrettable but necessary). Pizza at the Island Pub over on Douglas was good, but gave me heartburn that was hard to kick. One night, we had a decent Indian spread at Spice. The vibes are a little sleepy; they didn’t seem to book any musicians this year, and the naan was a bit dry. The Mexican food at Mar y Sol is fine, but they are a dry restaurant, and no margs with Mexican is rough. Amy and crew had dinner there, and I got a text from her that they started a jam there, and honestly, that was really fun. Kinda brought Folk Fest to another area of town that doesn’t normally get it. The noon latte at Coppa was a 10.

What you want out of a culinary experience in Juneau is to go out to Sand Bar in the valley and get the fried halibut. It’s literally all they do. The halibut comes from fishermen right in Juneau. Even as a totally non-fish guy, I love it. I was sad to miss it this year.

On my last full day there, I wanted to do some gift shopping. I called it Power Shopping because it was something I wanted to do but wasn’t super in the mood for, so the plan was hot’n’fast. I ended up getting:

- A book from the Sealaska Heritage Store. They had a Trickster basketball that was freakin’ art, but I just couldn’t justify traveling with it.
- Some postcards and a book from Kindred Post.
- A comic book and art supplies from Alaska Robotics (which had an incredible display of paintings of hikes in Juneau).
- T-shirts from Treetop.
- Obligatory shirts from Devil’s Club and The Alaskan.

While Folk Fest officially ends on Sunday, and I imagine a lot of folks need to take off on Sunday or Monday, I scheduled my flight out on Tuesday on purpose because Monday is reserved for an all-day jam at The Imperial. The Imperial is right at the heart of downtown Juneau, but doesn’t seem to be an active participant in Folk Fest. Until Monday, when it’s absolutely taken over. All the stragglers show up there and all the musical styles represent. I listened to an alt-old-time jam singing Reeltime Travelers, a classic old-time jam, a country jam, and a monster cajun jam. It took me a while to get the nerve up to get my banjo and get in on it (my confidence ebbs and flows). Honestly, a couple of beers always helps, which I don’t love, but it is what it is. I ended up playing with Heidi again for a while, bookending the trip nicely, and then another group of lovely folks before feeling good about retiring the banjo for the trip.

iDiallo Today

Advice from a millionaire

#storytime

"Is this seat taken?" asked a man in a black suit, coffee in hand. He was already halfway into the chair when he said it. I was at the adjacent table when I heard it. He wasn't asking me. He was asking the woman who looked up, one hand holding a paper cup, the other trying to keep a small boy from sliding off his seat. A second child sat beside her, quietly peeling the label off a juice bottle.

"Michael, no!" she yelled at the kid. But that didn't deter the man; he was now settled beside her, sipping on his latte. I noticed him because he didn't belong at the table. He had given himself permission to be part of this story.

At first I could only hear fragments of the conversation, carried between the hiss of the espresso machine and the scrape of chairs.

"…people always ask me…"

"…not about luck…"

"…mindset is everything…"

He spoke with the rhythm of someone used to being listened to. Not pausing for responses, just leaving enough space to suggest one might exist. The woman nodded when it seemed appropriate. Not because she agreed, I think, but because her attention kept breaking apart. The younger child had dropped something. The older one was tugging at her shirt, asking a question that went unanswered.

He leaned forward, elbows on the table, lowering his voice as if sharing something confidential. I leaned in.

"You have to recognize opportunity," he said. I caught that part clearly. "Most people don't. They're not trained to see it."

The woman murmured something. Agreement, maybe. Or just acknowledgement. She clearly wanted to listen, to hear him. But her eyes drifted to the door when it opened. Then to the counter when a name was called that wasn't hers. He didn't seem to notice.

"People often ask me how I made my first million dollars, like what the turning point was," he continued. "And I tell them, it's never just one moment. It's discipline. Consistency. Character."

One of the kids tugged at her sleeve. She bent down, whispered something, brushed hair out of the child's face. The man waited, but not really. More like he paused until the interruption stopped existing.

"Early in my career," he said, picking up exactly where he left off, "I joined a small company. Nobody had heard of it." He smiled, like this was the part that mattered most. "But I saw something."

The phrase hung there. I had the sense he liked the way it sounded.

"They always ask me, 'How did you know?'" he said, shaking his head lightly. "And the truth is, I had prepared for this every single day of my life. So when the moment comes, you just know."

The older child had started tapping the table with a plastic lid. A soft, repetitive sound. The woman placed her hand over it gently, stopping the rhythm without looking away from the man. He kept going.

"We were a small team. Took risks. Worked hard. No guarantees." He gestured vaguely, as if summing up effort itself. "That's what people don't understand."

The woman nodded again, looking at the counter to see if her name was ever going to be called.

"They gave us stock," he added. "Didn't mean much back then." He said it casually, like it wasn't the point. Like it was just part of the scenery. "And then we got acquired." He leaned back slightly, watching her reaction. I don't think she gave him one.

"A bigger company came in," he said. "That's what happens when you build something valuable."

Behind the counter, milk steamed loudly. Someone laughed. A chair fell over and was quickly set back upright.

"At that point," he continued, "those shares… well."
He made a small lifting motion with his hand. The woman followed the movement with her eyes, just for a second.

"That's the strategy," he said. "Recognize opportunity. Take risks. Build character." He delivered it like a conclusion. Something that could be written down.

The younger child had climbed halfway out of the chair now. She pulled them back gently, whispering again. This time more urgently.

He checked his watch. Then, as if remembering he was not alone in the conversation, he asked, "So what do you do?"

The question landed awkwardly, like it had been taken from a different script. She hesitated. This whole time she had been made to listen. Now her answer was needed.

"I'm… figuring things out right now," she said. It was the kind of answer that usually ends a line of questioning.

He nodded, but it didn't slow him down. "That's good," he said. "You have to stay open. That's how opportunities find you."

One of the kids started crying. Not loudly, but enough. She stood halfway, then sat back down, unsure which problem to solve first. He smiled, patient in a way that suggested he believed he was being generous with his time.

"Anyway," he said, standing up and adjusting his jacket, "that's how I did it." He placed a business card on the table. It slid slightly, stopping near the peeled labels. "Come find me when you're ready to talk about becoming a millionaire."

She nodded, because there was nothing else to do. He left without looking back at her. But he glanced in my direction and noticed me. He stopped. Walked over, and shook my hand with both of his. "I've read everything you've written," he said. He stood there a moment longer, as if hoping I might say something he could write down. I didn't. He left.

I went to the counter and asked for hot water in a cup. The barista made it available without question. From my coat pocket I produced a small paper envelope, mint and garlic, blended to a ratio I had refined over many years. I placed it in the cup and let it steep. I never leave the house without it. It is the first thing I take in the morning and the last thing I take at night. There is a clarity it produces that I have not found elsewhere.

I walked to the woman's table. She looked up. I sat down, and moved her drink to one side. "This will serve you better," I said, and placed the cup in front of her. She looked at it. One of the children leaned over to smell it and made a face. I didn't acknowledge this.

"The mind," I said, "cannot find opportunity in a state of agitation. I learned this early."

She wrapped both hands around the cup, the way people do when they don't know what else to do with them. I placed my card on the table. It was a solid thing, matte black with beveled edges. It covered the millionaire's card entirely.

"Come find me," I said, "when you're ready." I didn't say for what. I didn't need to. She could tell I was a billionaire.

A barista in a coffee shop told me this story. Not verbatim, but it was funny. Two "rich" guys trying to give advice to a woman and her two kids who live in a van. They felt like they had done her a great service. One offered useless advice, the other offered hot smelly water. Neither of the men helped her. I thought it would make a perfect LinkedIn story.

0 views

TSMC Earnings, New N3 Fabs, The Nvidia Ramp

TSMC's earnings suggest that the company's leadership is not truly bought into the AI growth story.

0 views
annie's blog Yesterday

GOOSE IT UP

I’m in school again. I’m going back to school because my work, my entire career, for my entire adult life, has been writing things for the Internet. That’s going away, at least as a livable career option. By livable, I mean an option I can live with.

When I started writing for the Internet, early 2000s, I could find decent-paying gigs on Craigslist. A quarter a word wasn’t uncommon. It wasn’t easy — I spent a lot of time searching and researching and answering inane qualifiers and writing samples for zero money. So we’re not talking about a pot of gold at the end of the freelance writing rainbow. But you could gather enough gold thru your efforts to make it worthwhile. I wasn’t pleased when SEO became a thing I had to do to keep working. I am less pleased with AI. I have been lucky and somewhat insulated for the last year or two, but things change, and I can see the trend. I still have a job with a great team, but already the work is shifting in a direction I do not want to go.

So, I am not going. I am making a different choice. I am choosing a different direction. I am goosing it up, baby.

I have started over several times in my life. New places, new communities, new jobs, new scenarios, new perspectives. I feel, at this point, that I have lived a few completely different lifetimes already. That’s kinda cool, even if it’s not always by choice. Starting over requires a lot of energy but it is also a relief. Every time I start over I establish a new baseline. I get to reset. I get to peruse my space, both exterior and interior, and declutter: throw out old junk, worn-out habits, misplaced loyalties, dusty grievances, faded beliefs. Starting over, at any scale, always means leaving things behind. You do some grieving, releasing, mud-scraping. You definitely light up the bullshit cabinet (there’s no better time, really). Hopefully you also do a lot of self care. Then you take the next step. And the next. Along the way you decide who you get to be now.

0 views

My 1/2 Marathon Strategy as a Slow Starter

This Saturday (April 25, 2026) I'm running the OhioHealth half marathon here in Columbus, and unlike past races where I was merely trying to survive, this time I actually have a goal of finishing in approximately two hours and 10 minutes. I'm feeling reasonably confident about meeting this goal despite a rain forecast because I've been training pretty diligently for the past few months and know I can achieve the required pace. However, I've noticed that anytime I'm out on a run I'm just a very slow starter. For the first maybe 2 to 3 miles my stride tends to be very short, I don't seem to be able to catch my breath, and generally speaking I just feel very uncomfortable. Naturally I've been fretting about this in light of the ambitious desired finish time. Even just a few weeks ago I was on an 8 1/2 mile run with my training partner Charlie, and once again during those first 2 to 3 miles I just did not feel good at all. But starting around mile 4 both my pace and comfort level increased significantly, to the point that my pace towards the end of the run far exceeded my pace at the beginning!

So yesterday I spent some time talking to our good friend ChatGPT, and it actually came up with a pretty good suggestion that's apparently called a negative split strategy. In a nutshell, ChatGPT said I can't hide the fact that I'm a slow starter, and so I should just incorporate it into the plan. For the first 3 miles I'm going to run a 10:30/mile pace, then starting at mile 4 I'm going to run a 10:05 pace until I complete mile 8. Starting at mile 9 I'll increase pace again, accelerating to a 9:50/mile pace, and maintain it through the end of mile 11. Naturally at this point all bets are off, but my guess is that starting at mile 12 I'll be able to accelerate one final time to a 9:30/mile pace through to the finish line. In summary my race pace will be:

Miles 0-3: 10:30/mile pace
Miles 4-8: 10:05/mile pace
Miles 9-11: 9:50/mile pace
Miles 12-13.1: 9:30/mile pace

If I can meet this pace then my finish time will be approximately 2:11:22 (10:02/mile average), which would be a new PR (by far) for me.

These days I'm also paying much closer attention to my diet, and want to be particularly diligent in the days ahead of the race. So 48 hours before the race I'm going to focus on eating grilled chicken, rice, vegetables, yogurt, granola, and fruit. The day before the race I'll have oatmeal, a banana, and toast and peanut butter. For lunch I'll have a turkey sandwich and fruit, and then for dinner I'll have some pasta and chicken. On Saturday the race starts at 8am. Around 6am I plan on having oatmeal, a banana, and coffee. At 7am I'll have another banana. During the race I plan on consuming a gel pack at the following times:

First pack: start of mile 4
Second pack: start of mile 8
Third pack: start of mile 10

My gel pack brand of choice is the GU pack, specifically the salted caramel flavor. In past races I've worn a fanny pack to carry my phone and gel packs and absolutely hated it, so this time around I bought a vest from Amazon. Nothing fancy; it's light and has a few pockets where I can store my phone and GU packs.
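For anyone who wants to double-check the plan, here's a quick Python sketch (my reading of the segments: 3 + 5 + 3 + 2.1 miles) that reproduces the projected 2:11:22 finish and the 10:02/mile average:

```python
# Sanity-check the negative-split plan: total time is the sum of
# each segment's distance times its pace.

def pace(minutes, seconds):
    """Pace in seconds per mile."""
    return minutes * 60 + seconds

segments = [                 # (miles, pace in sec/mile)
    (3.0, pace(10, 30)),     # miles 0-3
    (5.0, pace(10, 5)),      # miles 4-8
    (3.0, pace(9, 50)),      # miles 9-11
    (2.1, pace(9, 30)),      # miles 12-13.1
]

total = sum(miles * p for miles, p in segments)      # total seconds
avg = total / sum(miles for miles, _ in segments)    # seconds per mile

hours, rem = divmod(round(total), 3600)
minutes, seconds = divmod(rem, 60)
print(f"Projected finish: {hours}:{minutes:02d}:{seconds:02d}")      # 2:11:22
print(f"Average pace: {int(avg // 60)}:{round(avg % 60):02d}/mile")  # 10:02/mile
```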

0 views

Emacs is my browser

In my ever-increasing desire to use emacs as my sole computing environment, I have started to take browsing the web inside it far more seriously. Where I previously had thought EWW to be a nicety but far from capable, after using it for a few days it seems to be usable for about 85-90% of use cases - even on the JavaScript-riddled hellhole that is the internet.

I have seen in my own use of the modern browser (chromium/firefox) that it is far too easy to get distracted - too easy to get off track and fall down rabbit holes that take over my day. There are infinite suggestions as to how I should spend my time, uncountable shiny objects that take my eye off the prize that is Creativity and Depth. This has been almost entirely negated with EWW (or any terminal browser, lynx or browsh for the non-emacs users).

In addition to this, my belief is that we should make strides toward leaving behind Layer 8 of the internet - the limiting frontends of social platforms and locked-away corners of the net that limit actual discourse (Discord, I'm looking at you). We have given up far too much to big tech platforms, and gotten nothing of value in return, to the point that many now think the internet is dying. The internet is just a delivery mechanism, and for people that see it for what it really is, the internet has never been more alive. I truly recommend using the internet as if it was 1999. Even on my phone, I've stopped using javascript frontends and embraced using eww in emacs (in termux), only falling back to fennec for about 5-10% of use cases. I find this way of using the internet far higher in signal than any other method, allowing me to look up information, read documentation, and produce more.

For the uninitiated, emacs ships with EWW (Emacs Web Wowser), permitting you to browse the web with image and gif (when did gifs die, I almost never see them these days!) support directly inside Emacs. I have some sane defaults that permit ease of use, such as bindings for going back and for yanking the url at point, plus a few built-in functions. Invoking eww-readable gives you something similar to reader mode in firefox, removing headers and footers and focusing in on the text on the page. Useful. You can send the page to your default browser with eww-browse-with-external-browser. If a page doesn't render nicely, this is a good fallback. There are also commands to download images locally (eww-download), go back a page (eww-back-url), and save bookmarks (eww-add-bookmark). All of this makes browsing in eww truly enjoyable. I have also set .pdf to open in emacs, .mp4 and youtube/video hosting links to open in mpv, and gopher/gemini links to open in elpher. Elpher is the EWW for the gopher and gemini protocols, the smolweb that is all signal and no noise. EWW uses shr to render html, so you will see some callouts to that in my configuration. You can see exactly how I have configured EWW in my Emacs configuration.

My default search engine is my own Searx instance, a privacy-respecting frontend that amalgamates all other search engines. You will not be watching youtube videos or reading Tweets (x.com is adversarial to non-JS browsers). You will not be using social media, nor will you be logging into any platforms. You will not be doing your online banking, filling in government forms, or viewing client portals. The use case is quick web searches, documentation, and reading blogs generally. Once more, invoking eww-browse-with-external-browser on any web page will bring up your default browser to continue your session in chromium or firefox when you do run into pages that don't work well in EWW.
I have reverted all my url functions to default to browsing in EWW, so any web interaction must first go through EWW. This has encouraged me to deeply consider what I am doing online first and foremost, and to fall back to a modern browser only when needed. I have been pleasantly surprised with how much I am able to do inside emacs, and continue to move toward using it as my computing environment in perpetuity. As always, God bless, and until next time. If you enjoyed this post, consider Supporting my work, Checking out my book, Working with me, or sending me an Email to tell me what you think.

0 views
HeyDingus Yesterday

7 Things This Week [#186]

A weekly list of interesting things I found on the internet, posted on Sundays. Sometimes themed, often not.

1️⃣ I think you’ll like this picture of the world’s biggest and smallest Macs (and an original Macintosh) that Scott Knaster shared. [ 🔗 scottknaster.substack.com ]

2️⃣ Robert Birming made a really cool calendar view for his Bear blog, so you can browse posts month-by-month. [ 🔗 robertbirming.com ]

3️⃣ So, uh, someone made a compass that points to the Olive Garden in Times Square. And that’s all it does. And I don’t hate it. [ 🔗 theverge.com ]

4️⃣ The Am Dash is a new punctuation mark, introduced in two typefaces, designed to signal that some text was written by a human — not em dash-happy AI. [ 🔗 theamdash.com ]

5️⃣ Lynn Fisher has a handy mnemonic for remembering Markdown’s link and image syntax. [ 🔗 lynnandtonic.com ]

6️⃣ This 14-year-old won a research prize for his origami prowess, which he thinks — based on the incredible strength-to-weight ratio of the Miura-ori fold — could be used for disaster relief. Incredible stuff. (Via The Good News Podcast) [ 🔗 businessinsider.com ]

7️⃣ Louie Mantia makes an impassioned argument for processed American cheese — certainly the first I’ve heard in favor of it. It’s a convincing one, too. [ 🔗 burgerdigest.com ]

Thanks for reading 7 Things. If you enjoyed these links or have something neat to share, please let me know. And remember that you can get more links to internet nuggets that I’m finding every day by following me @jarrod on the social web. HeyDingus is a blog by Jarrod Blundy about technology, the great outdoors, and other musings. If you like what you see — the blog posts, shortcuts, wallpapers, scripts, or anything — please consider leaving a tip, checking out my store, or just sharing my work. Your support is much appreciated! I’m always happy to hear from you on social, or by good ol' email.

0 views
Jim Nielsen Yesterday

Hook It Up to the Machine

In the early 2000s, my parents took us on a road trip to Glacier National Park in Montana. We made the journey in our new (used) family van: a green Dodge Caravan whose reputation was soon to become “a lemon”. I was a teenager and didn’t pay a lot of attention to the details of what was happening around me, but I do remember how the van kept overheating. It ran fine on the interstate, but anything under 40MPH had the car’s temperature gauge rising into unsafe zones.

I remember stopping in some small town in Montana to get it checked out by a mechanic. He checked it out, took it for a test drive, etc., and told my Dad the reason the car was overheating was that the idling fan wasn’t turning on. At higher speeds, like on the interstate, that was fine because there was enough airflow to keep the engine cool, but at lower speeds the car would overheat. The mechanic said he didn’t know why the fan wasn’t turning on. There was nothing wrong mechanically from what he could see. But he couldn’t fix it. He told my Dad that this was one of those increasingly common “computerized” cars that you have to hook up to another computer to diagnose the source of the issue. And he didn’t have one of those computers.

So we continued on our way. The rest of the trip required my Dad taking “the long way around”, like back roads where he could keep up his speed in order to avoid the car overheating. It was all very amusing to us as kids, almost thrilling, because Dad had a legitimate excuse to drive fast (suffice it to say, Mom did not like this). Once the trip was over and we returned home, my Dad was able to get the car in to a dealer, where they hooked up the car’s computer to another computer to diagnose and fix the issue. I don’t really remember the specifics, but the issue was seemingly some failed digital sensor that prevented the idling fan from turning on. Once the sensor was replaced, things worked again. Computers talking to computers.

Growing up in an era that shifted so many things from analog to digital, mechanical to electronic, I’ve thought about this trip a lot. And I’m thinking about it again in this new era of building software with LLMs. I think about that mechanic. This guy grew up around mechanical cars that could be physically inspected, diagnosed, and repaired. So much of his experience and knowledge was unusable in the face of a computerized car. You can tell with your eyes when a mechanical switch has failed, but not a digital one. You need a computer to help you understand the computer.

Will this be my future? If a codebase was made with the assistance of an LLM, will its complexity and bugs only be inspectable, understandable, diagnosable, and fixable with an LLM? “Hey, can you help me, there’s a problem with my codebase?” “Ok, I can confirm the issue, but I can’t fix it without hooking your codebase up to an LLM.” Reply via: Email · Mastodon · Bluesky

0 views
Brain Baking Yesterday

The Strange Heterogeneity of Hiking Signs Part II

In 2022, I wrote about our encounter with the strange heterogeneity of hiking signs during A Short Hike (that’s also a video game, but not the thing we were doing). The photo shared then depicted a signpost with arrows on top of specific shapes (i.e. a blue diamond, a yellow cross, …) identifying different—and in most cases, much longer—routes. It turns out that these symbols never represent the same distance.

When I meet my friend from another province, we usually go hiking somewhere near his home. There, the weirdly shaped signs are nowhere to be seen. Instead, the remarkably clear numbered “knooppunten” (nodes) let you plan your own route. It’s in fact exactly like the bigger blue node signs we’re accustomed to when biking ( https://www.fietsknooppunt.be/ becomes https://www.wandelknooppunt.be/ ). Last year, I noticed our province finally adopting the same system, which also features a virtual map where you can select which numbered nodes to follow. Finally some consistency! Except that of course the existing plaques didn’t move. Instead, various governmental instances only added signs to the poles. The confusing heterogeneity was back with a vengeance. We found out that the best way to battle this is to simply ignore all the rest and follow the “standardised” numbered nodes.

Last week we were on another short trip just to get out of the house. Unfortunately, the misery of having small kids seems to follow you around if you take them with you. It also makes packing for just a few nights a literal and figurative nightmare, but I digress. On Another Short Hike (the hopefully-to-be-announced video game sequel), we encountered a very insightful pole depicting the same junction-style number system: Hiking Node 58 in the province of Antwerpen. And god knows what else. I mean, really? Let’s tackle them from top to bottom.

Biggest plaque on top: node 58, with directions to nodes 69, 89, 57, and 51. Remember, these numbers are local as well, so the 69 here won’t be the 69 say 20 kilometres away. We also encountered a big map highlighting these numbers, so it’s fairly easy to follow them. If you’ve got a smartphone, you can always look up which direction to go.

Second from the top, yellow/green with a black arrow to the left: a very local Mol Om sign indicating the long-distance path created by the local walking club to celebrate the municipality of Mol. The site discusses its funny history of pragmatism that might cause trouble: “Trail markers at that time did not always make it a habit to request permission from the landowner or manager before marking the trails. That practice, combined with mistrust, led to conflicts more often than it does today. Sabotage by scratching out or removing markers was commonplace at Mol-Om, to such an extent that for the first official trail walk [1974] the Mol Sports Council would only apply the markers the day before each event, for fear that they would otherwise disappear too quickly again.” The sign was (and still is) very obtuse: we only found out about it now by looking up what “Mol Om” means. No indication of it on local maps either. I presume their clandestine markings, since turned tolerable, predate the numbered nodes.

Third from the top: a Santiago de Compostella pilgrim route. The iconic yellow scallop on a blue background, the Camino de Santiago. The Flemish Compostella Society lists all pilgrim routes going through Belgium; the one we found is part of the Via Monastica. I’m fascinated by these routes. If the kids are old enough… Who knows.
Fourth from the top, an orange arrow to the right: who knows? This is not part of the usual symbols indicating hiking paths, like the orange circle and blue diamonds on the right side of the same pole. The other, fatter arrow, of course in the same orange colour but pointing the other direction, is possibly another route? The way to the restrooms?

The last one, which looks like a triangle with legs: an initiative of Sport Vlaanderen, a governmental instance promoting walking as a sport. Why they couldn’t reconcile with the numbered node network beats me. Maybe they were first?

No geocaches to be found along the way, but plenty of hidden boxes that used to be there. I’ll save some meat for another post. Meatwhile, let’s get hiking. Related topics: / hiking / signs / By Wouter Groeneveld on 19 April 2026.  Reply via email.

0 views

Fits on a Floppy

I stumbled across Matt's work via a post on Tildes. His "manifesto for small software" describes how he has targeted building applications that could fit on a 1.44 MB floppy disk. Most of his apps are either Mac OS or iOS, and it honestly shocked me that you could bundle apps for those platforms at such a small size.

All this has got me thinking, and when I start thinking I typically end up changing my opinion on things. You see, I agree with Matt: "software has lost its way". My recent post on using Palm OS for weight tracking proves that. The extremely powerful database software I talk about in that post is 758KB; heck, you could almost fit two copies on a floppy. For comparison, Numbers (Apple's spreadsheet software on iOS) is 617.2MB. You could fit 833 copies of the Palm OS app in that amount of space!

Here's the thing: it's going to get worse. Much worse. When everything is vibe coded and built on the backs of bloated frameworks, the size of applications will continue to grow. Optimization is an art of the past, and LLM-driven development will further solidify it in carbonite. Instead of optimizing software to better utilize our hardware, we've turned to constantly scaling hardware to fit the software. Buy, buy, buy! At the same time, the price of hardware is skyrocketing, which means it will become increasingly difficult for most to run increasingly bloated software. I'm sure Microsoft will be happy to rent a cloud server running Copilot OS to you though... for a monthly fee, of course.

All that said, I've changed my mind (again) on using AI. Admittedly I had started to give in due to it being used heavily at work. What I've come to realize is that I don't want to make software that way; it's not meaningful to me. As @eniko said on Mastodon, it's taking the artistry out of coding. The artistry of a well-optimized system, of meaningful decisions, of reusability and composition. I've been reading "Microinteractions: Designing with Details" by Dan Saffer and it's had me thinking a lot about the details that are getting missed in modern "software development". When you stop optimizing and internalizing every piece of an application, how could you possibly focus on the microinteractions that compose it? The only thing that matters at that point is the list of features used in a sales pitch. The actual experience of using the app is left to Claude to figure out.

Heck, the industry is rushing headfirst into letting AI take over everything human in the UX of applications. Teams use AI to write the requirements documents. Then they use AI to create work tickets. AI is brought in to build the design and user experience. AI writes the code and submits the PR. AI reviews the PR and tests the functionality. What's the point? You end up using software that had near-zero human involvement. Sure, some engineers were needed to drive the AI and keep it on track, and they probably did a cursory glance at the PRs and some level of QA. Maybe. But when so many of the decisions are automated by the machine, what you've created is not something built for users.

So yeah, I'm done letting Claude create anything for me personally. I'll still occasionally use these tools to solve issues; after all, they are pattern-matching engines, which has advantages over simple web searches. But for coding, my opinion is now the same as what I stated in my post on using AI for writing: when you take the human out of the process you're not producing art. And code is art.
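A quick back-of-the-envelope check of the sizes quoted above, assuming binary units (1 MB = 1024 KB) and a 1,440 KB floppy:

```python
# Size comparison from the post, in binary KB.
FLOPPY_KB = 1440             # a 1.44 MB floppy
palm_app_kb = 758            # the Palm OS database app
numbers_kb = 617.2 * 1024    # Apple Numbers for iOS (617.2 MB)

print(FLOPPY_KB / palm_app_kb)        # ~1.9, i.e. "almost two copies" per floppy
print(int(numbers_kb / palm_app_kb))  # 833 Palm apps in Numbers' footprint
```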

0 views
Justin Duke 2 days ago

Masters of Doom

One way to approach writing about Masters of Doom is to talk about its outsized influence. Just off the top of my head: two pretty meaningful pieces of art about technology — BlackBerry and Halt and Catch Fire — both crib heavily from its narrative and its depictions of the early-90s technology zeitgeist. On the private-sector side, the founders of Reddit and Oculus both cite it as a core text that inspired them to start their companies. While in 2026 some of its narratives and ideas sound a little dated or pat, it manages to be both hagiographic and educational. Kushner does a good job balancing the personality cult (though I found the cloying early chapters about the various protagonists' childhoods to be unrewarding) and the legitimate technology breakthroughs that brought id its success and fame.

This is perhaps the strongest thesis espoused by the book, and it goes something like this: id Software was successful because it had a maniacal engineer single-mindedly focused on technological breakthroughs, and creative designers in his orbit who could leverage those breakthroughs into games beloved by millions. Everything else is incidental and auxiliary, and the alchemy of Doom and Quake's success hinged on the chimeric bond between the two Johns, neither of whom was able to replicate it independently.

In the twenty years that followed, of course, the narrative becomes a bit messier. We leave the book before Doom 3 was released, and while Kushner suggests that Doom 3 may be a middling title and that Carmack is no longer interested in engineering, he manages to both hit and miss the mark. Doom 3 was another smashing success, but id Software faded into irrelevance shortly thereafter, and the realm of first-person shooters became dominated by the antithesis of id Software: very large tech companies with embedded game studios, treating the production line like a factory floor rather than a monastery. Romero's career after Ion Storm is hallmarked by a series of downwardly mobile steps — a fate that, if I may borrow some of Kushner's psychoanalytic inquiry, must seem a little worse than death: having achieved fame and fortune, but not peace, and having burned through two more wives and four more studios since the book's publication.

For all the duality that Kushner tries to imbue into the narrative, this is really Carmack's story, and Carmack's arc after the book is less depressing, but more surprising. Despite vowing to never sell, id Software sold to ZeniMax in 2009, having achieved nothing notable since Doom 3's launch six years prior. Four years after that sale — and with nothing more to show for it besides perhaps a larger checking account — Carmack left to go work on Oculus as CTO, which is both a confirmation of the book's espousal of Carmack's love of VR and, objectively, a bit of a failure. Oculus never achieved anything close to mainstream success, and ten years after he joined as CTO, Carmack left Meta to work in his own personal AGI lab.

Carmack is an interesting character, and I think some of the stickiness that Kushner deploys when describing him — the autistic mannerisms, the obsession with pizza and Diet Coke — belies what is truly great. Carmack is relentlessly charitable with intellectual property. He is also, as the book describes him, a sociopath willing to give away his cat when it starts bothering him, and to cut his friends out of a company in order to meet his ends.
We know of technical sociopaths through many media portrayals, and we generally associate them with greed and vanity. Carmack is not one of those people. He seems earnest and driven, and also, during the book's events, a 20-year-old who is in way over his head.

I started off this book really not liking it, and then by the end — the power of the narrative, the slow progression into the world I remembered from my youth, having never played Quake but knowing most of the personalities and zeitgeists depicted, including a US populace that was obsessed with the concept of video game violence (a concept which now seems alien) — my esteem of it kept ticking up and up, until it became a book I would generally recommend, and have done so already. Kushner's reportage is impressive. He moved to Texas for five years to embed himself in the history and the scene, and this is not the airport book it feels like at first glance. It is not barbarians-at-the-gate, but it is something quite close.

0 views

Figma's woes compound with Claude Design

I think Figma is increasingly becoming a go-to case study among the victims of the so-called "SaaSpocalypse". And Claude Design's launch last week just adds a whole new dimension of pain.

Firstly, I should say that I love(d?) the Figma product. It's hard to understand now what a big deal Figma's initial product was when it launched in the mid-2010s. It ushered in a whole new category of SaaS - using the nascent WebGL and asm.js technologies to allow designers to design entirely in the browser. It used to be a running joke that an app like Photoshop could ever run in the browser, but Figma proved the joke wrong. It quickly overtook Sketch as the de facto design tool in the market - first for UI/UX wireframing and prototyping, but increasingly for everything in graphic design. As it was based in the browser, it was a revelation from the developer side to be able to open UI/UX files if you weren't on a Mac (Sketch is Mac only). It was also brilliant to be able to leave comments on the design and collaborate with the designer(s) to iterate on designs really quickly. The collaborative features (without requiring anyone to download any software) quickly meant it got adoption outside of pure design roles - PMs and executives could finally collaborate in real time on the product they were building, without having to (at best) send back revisions and notes from badly screenshotted files that tended to be out of date by the time they were received. I'll skip over the rest of the history, including a no doubt distracting takeover attempt by Adobe that was later blocked on competition grounds.

But (of course) LLMs happened, and suddenly one of the most forward-looking SaaS companies became very vulnerable to disruption itself. One completely unexpected development that I and others noticed (and wrote up a few months ago at How to make great looking reports with Claude Code) was that LLMs started to get fairly "good" at design. By good I do not mean as good as a talented designer - clearly it's nowhere near that, currently. But like many things, not everything requires a great designer. Even if you use a great design team to build out your core product experience (and many do not), there's an awful lot of design 'resource' required for auxiliary parts of the product: reports, proposals, etc. It's not stuff that tends to get designers excited, but it can sap an awful lot of time going back and forth on a pitch deck.

And this is exactly why I think Figma is almost uniquely vulnerable. The way it managed to expand into organisations by getting uptake with non-designers becomes a liability if those non-designers can get an AI agent to do the design for them. Figma's S1 (which is somewhat out of date by now, but is the only reported breakdown I can find) corroborates this potential weakness. Only 33% of Figma's userbase in Q1 2025 was designers, with developers making up 30% and other non-design roles making up 37%. A lot of Figma's continued expansion depended on this part of their userbase. A lot of their recent product development has been to enable further expansion in organisations - "Dev Mode" for developers (which now looks incredibly quaint against LLMs), Slides (to compete against PowerPoint and other presentation tools) and Sites (a WebFlow-esque site builder) are all about expanding their TAM out of "pure" design.

The real surprise for me, though, was how basic their "flagship" AI design product Figma Make is.
It really does feel like something someone put together in an internal AI hackathon one weekend, and that never progressed beyond that. Given how much Figma managed to push the envelope on web technology, I found this surprising - perhaps they were caught off guard by how quickly LLMs' design prowess improved, or there were internal disagreements about the role AI should or will play in design. Regardless, it's an incredibly underwhelming product as it stands.

As if things weren't bad enough, Anthropic themselves launched Claude Design, which is a pretty direct competitor to Figma in many ways. While it's nowhere near functional or polished enough to replace Figma's core design product, I expect it will get significant traction outside of that. The ability for it to grab a design system from your existing assets in one click is very powerful - it allows you to then pull together prototypes, presentations or reports in your corporate design style that look and feel far better than anything a non-designer could do themselves. And I thought it was extremely telling that, unlike a lot of the other Anthropic product launches that have touched design, Figma did not provide a testimonial on it (understandably). Canva did, which I found extremely odd (they are in my eyes even more vulnerable to this product than Figma).

I think this really underlines two major weaknesses in many SaaS companies' AI strategies.

Firstly, it's very difficult to compete on AI against the company that is providing your AI inference. A quick check on Figma Make suggests that Figma (at least on my account) is indeed using Sonnet 4.5 for its inference - though I have seen it use Gemini in the past. At this point Figma is effectively funding a competitor - the more AI usage Figma has, the more money it sends over to Anthropic for the tokens it uses. Even worse, Sonnet 4.5 is miles behind what Anthropic uses on Claude Design (Opus 4.7, which has vastly improved vision capabilities [1] ), so the results a user gets on Make vs Claude Design are almost certainly going to underwhelm. Also, unlike most/all SaaS costs, inference (especially with these frontier models) is expensive. As Cursor found out, the frontier labs can charge end users a lot less than they charge API customers like Figma. When you are potentially looking at a shrinking userbase, it's far from ideal to have very expensive variable costs that start pulling your profitability down.

Secondly, it really underlines how efficiently, headcount-wise, companies can now build products. Figma has close to 2,000 employees - not all working on product engineering, of course. I really doubt Anthropic even needed 10 people to build Claude Design. Indeed, the entirety of Anthropic is around 2,500 people.

It's also worth noting that a lot of the things that would traditionally lock a company like Figma in stop working as well in an agent-first world. Multiplayer matters less when your collaborator is an agent iterating on a prompt. Plugin ecosystems matter less when you can just ask for the functionality directly. Design system tooling is the whole point of Claude Design. Enterprise SSO - Claude already has that. Most of the moats that protect a mature SaaS company are moats against other SaaS companies, not against the thing providing their inference.

I might be wrong about how bad this gets for Figma specifically.
Companies with strong brands, great distribution and genuinely talented teams can often adapt faster than outsiders expect, and I'd rather be long Figma than most of its competitors. But the structural point is harder to wriggle out of. Figma has ~2,000 employees. Anthropic has ~2,500 total, and I doubt Claude Design took more than a handful to build. Figma now needs to out-execute a competitor whose inference is ~free to them, whose marginal cost to ship is roughly zero, and who employs fewer people on the competing product than Figma has on a single pod. That's a very hard position to pivot out of.

This feels like a preview of where SaaS economics are heading. The companies that built big orgs on the assumption of steady seat expansion are going to find themselves competing with products built by tiny teams inside the frontier labs. Figma just happens to be the first big public name where one of their primary inference suppliers has started competing against them.

[1] Both GPT 5.4 and Opus 4.7 can now "see" screenshots at much higher resolution - Opus 4.7 jumped from 1568px / 1.15MP to 2576px / 3.75MP. Resolution isn't the whole story (scaffolding and post-training matter a lot too) but it meaningfully helps with small-element detection and layout judgement. If you've ever pasted a screenshot of something broken and the model told you it looks great, the previous lack of resolution is one of the reasons why. ↩︎

0 views
A Smart Bear 2 days ago

How to hire people who are better than you

If you don't hire people better than you, the organization gets bigger, not better. But how do you hire for something you don't understand?

0 views

How to Install a Specific Version of a Homebrew Package with brew extract

I previously wrote about how to install older versions of Homebrew packages. That method involves installing a package from a Ruby file, but it's outdated and doesn't always work. There's a better way with brew extract, although it still comes with caveats. I'll be using hugo as an example. Let's say I wanted to install v0.145.0 because v0.146.0 introduced breaking changes that broke my theme. To install hugo v0.145.0:

1. Create a local tap with brew tap-new
2. Tap homebrew/core, which is a 1.3GB clone at the time of writing
3. Extract the formula with brew extract
4. Patch the formula (this isn't needed for every formula)
5. Install as usual

Note that this process will point your hugo command to the older version, but you can switch between versions with brew unlink and brew link.

First, create the local tap with brew tap-new. It will enable developer mode. This is normal and safe.

Next, run brew tap homebrew/core. At the time of writing, it's a 1.3GB download. This is necessary to get this working because Homebrew no longer keeps homebrew-core cloned locally, and brew extract needs the full git history to search for older versions.

Now we can use brew extract. This command will find a commit where the formula was at the version we want and copy it into our local tap as hugo@0.145.0. In this case we want Hugo v0.145.0, so we run brew extract with --version=0.145.0.

Next comes the patch. This isn't needed for every formula and is something I ran into specifically with Hugo; without it, you'll run into errors. After running brew extract, edit the extracted formula file and change the problematic line. The reason we need to patch this file is a mismatch between the path Homebrew expects and the path that is created when building Hugo; without the change, installation fails with an error.

Now that Hugo is extracted and patched, we can install it with brew install. Hugo v0.145.0 is now installed. There's a warning with long output due to the normal Hugo package already being installed, but that is expected. Homebrew is now pointing the hugo binary to v0.145.0 instead of the latest version (v0.160.1 at the time of writing). We can verify with hugo version, and we can see that Hugo v0.145.0 is installed alongside the latest version with brew list --versions.

Currently the hugo command is pointing to v0.145.0. To have it point back to the regular version, run brew unlink on the old formula and then brew link on the regular one; to point back to the old version, do the reverse. At first I expected brew link to work right off the bat, but running both brew unlink and brew link is necessary to switch between versions properly. This is because Homebrew tracks linked formulas and actual symlinks on disk separately. To help Homebrew track things properly, we need to run brew unlink first to clean the records, then brew link to write the new symlinks.

There's no need to use brew pin to prevent the older version of Hugo from updating. Since this is a local copy, there is no remote repository that would be updated and in turn update our local version. You can even try running brew upgrade on it to see the warning message.

If you no longer need Hugo v0.145.0, you can run brew uninstall. If you don't have any other packages you extracted with brew extract, you can also remove your local tap with brew untap. Finally, if you don't plan on using brew extract again in the future, you can remove the local clone of homebrew-core with brew untap homebrew/core; this will clean up the 1.3GB of files that was downloaded. Then re-link to the latest version with brew link hugo.

https://docs.brew.sh/Manpage
https://github.com/orgs/Homebrew/discussions/2941
https://emmer.dev/blog/installing-old-homebrew-formula-versions/
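If you end up switching between versions a lot, the unlink-then-link dance is easy to script. Here's a minimal Python sketch, assuming both formulas from this post (hugo and the extracted hugo@0.145.0) are already installed; it just shells out to brew:

```python
#!/usr/bin/env python3
"""Toggle the active hugo between the extracted old formula and the
regular one. Both unlink and link are needed because Homebrew tracks
link records and on-disk symlinks separately."""
import subprocess
import sys

OLD = "hugo@0.145.0"  # formula created by brew extract (assumed installed)
NEW = "hugo"          # regular homebrew-core formula (assumed installed)

def brew(*args):
    """Run a brew subcommand, raising if it fails."""
    subprocess.run(["brew", *args], check=True)

def switch(target, other):
    brew("unlink", other)   # clean Homebrew's link records first
    brew("link", target)    # then write the new symlinks

if __name__ == "__main__":
    # usage (script name is hypothetical): switch-hugo.py old
    #   "old" points hugo at v0.145.0; anything else points it back
    switch(OLD, NEW) if sys.argv[1:] == ["old"] else switch(NEW, OLD)
```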

0 views