Latest Posts (20 found)
Jim Nielsen 20 days ago

You Might Debate It — If You Could See It

Imagine I’m the design leader at your org and I present the following guidelines I want us to adopt as a team for doing design work:

- Typography: Use expressive, purposeful fonts and avoid default stacks (Inter, Roboto, Arial, system).
- Motion: Use a few meaningful animations (page-load, staggered reveals) instead of generic micro-motions.
- Background: Don't rely on flat, single-color backgrounds; use gradients, shapes, or subtle patterns to build atmosphere.
- Overall: Avoid boilerplate layouts and interchangeable UI patterns. Vary themes, type families, and visual languages.

How do you think that conversation would go? I can easily imagine a spirited debate where some folks disagree with any or all of my points, arguing that they should be struck as guidelines from our collective ethos of craft. Perhaps some are boring, or too opinionated, or too reliant on trends. There are lots of valid, defensible reasons. I can easily see this discussion being an exercise in frustration, where we debate for hours and get nowhere — “I suppose we can all agree to disagree”.

And yet — thanks to a link to Codex’s front-end tool guidelines in Simon Willison’s article about how coding agents work — I see that these are exactly the kind of guidelines that are tucked away inside an LLM that’s generating output for many teams. It’s like a Trojan Horse of craft: guidelines you might never agree to explicitly are guiding LLM outputs, which means you are agreeing to them implicitly. It’s a good reminder about the opacity of the instructions baked into generative tools. We would debate an open set of guidelines for hours, but if they’re opaquely baked into a tool without our knowledge, does anybody even care? When you offload your thinking, you might be on-loading someone else’s you’d never agree to — personally or collectively.

Reply via: Email · Mastodon · Bluesky


using AI to inflate your ego

Personally, I’m open to retrying AI use cases every now and then. I’ve written about it before, and I freely share the fails and wins in chats I’m part of. In my case, it’s no use endorsing it based on online hype, nor trying it once and keeping my opinion fixed on that experience. I’m expected to engage with it somewhat at work, and independently, I want to know what I can expect from these tools so I can make better decisions and write better when it comes to the data protection impact these tools have. No use sticking my head in the sand when my desired career path is touched so heavily by it.

What bewilders me is how many people seem to use the tool (and the topic in general) to inflate their ego. I don’t just mean the literal sycophancy displayed in the model outputs, but also in the conversation around its use. There’s a group of people saying they do very important, difficult and smart work every day thanks to AI, at a pace and in a way humans just can’t. The gist of it is: “I am better than you because I use AI, and have more productive output than you, and do more difficult work. The fact I need AI to do it means my work is very demanding, very admirable and at the bleeding edge, and humans could never do it like this, or have an output this fast. The fact that you don't want or need to use AI for your work must mean it's low-value.” Often, they remain very vague about what that work even is, so it's hard to verify from the outside.

On the other hand, there are also people who do the inverse: they don’t plainly say that AI performed badly in their use cases, so it’s not useful for them; instead, it becomes a way to prove that their work is so difficult and demanding that AI could just never do it. Something like: “Behold, I am god’s gift to research and problem solving, and the machine cannot beat my perfect brain. The fact that you are able to use AI for your work must mean you are stupid and your work is easy, since AI can, at best, only do stupid and easy work.”

Both of these groups then make sweeping generalizations about what other people should do. The former group tends to warn that “you’ll get left behind!”. It’s such a pathetic cope. It looks like people who were never the top at any skill in their environment now think they can finally have an edge by adopting early and shitting out as much as possible in the quest to "learn", hit some kind of jackpot, or attract the right eyes. They have to cling to the fantasy that the nay-sayers will somehow be at a disadvantage just so they can feel justified and special. But tell me, were you left behind when you started using Excel late? Was it bad that you only learned office stuff when you needed it? Were you not able to catch up? Chances are, it makes no difference, and with some effort and a workshop or a few YouTube videos, you can use the tools equally well.

In the case of LLMs, using them has never been easier. You can just use plain, natural language! No submenus, settings, buttons, search operators and the like to remember. It’s designed to be easy. Prompt engineering is and always was a scam. There are no secret incantations you only learn in a 500 Euro class. Anyone can use the tool, learn, and refine. It’s embarrassing to pretend otherwise. Coworkers who have trouble with basic Outlook and Word do surprisingly well with ChatGPT. And why wouldn't they?
They have spoken a natural language all their life and have probably trained multiple other new employees in their career; they know how to explain standards, expectations, and tasks, whether to a human or a tool.

The other group I mentioned is so weirdly dismissive based on their attempts at very niche, still-unstable use cases. I understand criticism that’s aimed at directly advertised claims by the companies that aren’t fulfilled, or at commonly seen use cases online that just don’t actually work reliably; I wrote about the same thing in the past, and about how the free models available are not capable enough to do many of the advertised things we are inundated with. What I don’t understand is thinking: “The LLM couldn’t generate a PDF with all my branding included and a table with this and that and accurate graphs and footnotes with sources. That means it’s not even useful to create an email draft, or for grandma’s grocery shopping list, and you shouldn’t use it for a motivational letter.” Why can’t there be nuance? It obviously sucks at some complex stuff, but it really hits the corporate bullshit text creation just right. Don’t tell me I don’t get it: I recently tried out what it would recommend for a business card, and it said I should use a transparent plastic card to signal transparency in my work. Of course I see how stupid it can be, even for some simple stuff. I get how it could royally screw up grandma's shopping list.

But to me, both of the groups identified above also ignore that most people simply aren’t in these high-stakes positions, interested in these hobbies, or working these jobs. Many have no need or interest to vibecode some custom solution for their smart home, or a family app that automatically rewards the kids’ homework time with gaming time, just to sell it to VCs or make a SaaS out of it, and they aren’t researchers or problem solvers coding complicated stuff or writing the next bleeding-edge paper in the field. They aren't hustlers scared of being outpaced by competition. Many people on this planet are taxi and bus drivers, nurses, kindergarten teachers, cleaners, cashiers, baristas, warehouse workers, construction workers, and the like. Or they’re doing a boring secretary job that is about writing e-mails and sending out meeting details via buttons, using templates or pre-generated e-mails. They’re boomers or part-time parents who aren’t that good with tech, or don’t need much of it, and pass office time clicking a couple of buttons. What are you optimizing for, when you realistically only work like four of your eight hours a day and it’s the easiest work ever, just following protocol? They sure as hell aren’t interested in automating themselves out of a job, and they don’t wanna work anything else or do something more demanding. They wanna earn money with the least amount of effort and the least amount of change to their workflow, and they don’t particularly care for computers or hustle. But if they can get out of some annoying text-based stuff like some e-mail aspect, maybe they’ll use it. And that's fine! They shouldn't be told by AI fans that not letting AI take over everything makes them a redundant NPC with nothing to offer, or told by AI haters that doing easy work that AI can actually somewhat do means they're doing worthless work.
The funny thing is: their jobs are often just easy enough that it’s faster and more foolproof to do the work themselves than to attempt a vibecoded or generated solution, while also involving many of the use cases that work most reliably at this point and can actually be recommended. For example: writing a short email thanking your boss for something is done faster yourself than typing the prompt; but asking an LLM to make your angry email disagreeing with your superior sound nicer and more diplomatic works. My coworker can’t vibecode a solution to let AI enter text fields in the database automatically, but she can ask ChatGPT how to hide cells in Excel (never mind that a search engine could also do this). I definitely am in that boat of “no use, better done quickly myself” with the core part of my job.

So I just don’t understand why so many people need to brag that they’re moving the needle so much in their daily work either by using or not using AI, while subtly also shitting on people whose jobs are either replaceable with AI or aren’t fit for AI use, which I allege many fall into! It can’t be that everyone has such an unusual, high-impact knowledge worker job where AI is either the magic enabler or not capable enough. I mean, for fuck’s sake, it seems like most of the people posting this stuff are students, trainees, junior devs, or in some vague office job. It’s like people use this controversial topic to present themselves as less expendable and more important than they actually are.

There’s also a group of people who won’t intellectually engage with the topic at all because they just do what everyone else does. Their personal podcast idols rave about AI? Better give it all your data, have it put together a self-improvement plan, and let it talk you through some journaling prompts. They don’t wanna discuss the bad sides because the people they admire love it. In my experience, they’re also very easily impressed with shoddy work just because it’s written in a charismatic way. “This was groundbreaking”, and it’s something a Tumblr girl would have posted at age 14. All your friends hate AI? Better not touch it, out of fear of social repercussions. They can’t talk with you about the bullshit it did last time they tried it, or about ethical, privacy, or environmental concerns, because they never actually cared to develop an opinion beyond not wanting to be hated by their circle. That’s boring, people-pleaser behavior. I think you look silly if you have no deeper reason not to use something, no interesting arguments.

A tangent about arguments: I no longer care about whether images created by AI are good or bad, and I don’t care about water or electricity usage. That is because the capabilities as well as the resource usage can and likely will improve, and to me, they are more representative of missing regulation and a shitty government than of the tech itself; that debate is better had in a context more removed from the actual core of the tool, about how the industry needs to be regulated. I want to be more precise about what is actually the fault of the tool vs. the fault of the region many of these services are located in, and its political problems. If you claim to hate the tool only because it makes soulless images, what happens when it starts making better ones? I'm sure you won't suddenly have no concerns! You usually hate the tech for other reasons than that, so we should focus on those better arguments instead.
I think it’s much more interesting to debate whether it is art or not, or about responsibility in war or accidents, or to focus on the privacy aspect, the intellectual theft, the e-waste, job market effects, and so on. Additionally, if we truly focus on electricity and water use (irrespective of regulation, placement, and other factors behind the droughts and rising prices around data centers), I think we would quickly have to argue against the terabytes of useless bullshit we all hurl onto the net, to be stored for ages, taking up space, and giving us another reason for more data centers and increased use of our devices. Even your well-meaning blog post about enjoying a good sandwich counts, or your favorite cat video. I don’t want to discuss an intellectual bar or importance metric that online content has to clear before it can be uploaded for the sake of our precious resources, because it would hit most of us, and it would hit art and marginalized voices. If we haven’t ever seriously discussed looking critically at each search engine use, each video we watch, etc. as something potentially excessive that uses too many resources compared to how useful it was, I don’t know if this is the right place to start. I think for many, it’s only okay to start that conversation because it’s about something they don’t (yet?) use. It’s hypocritical, as many are not ready to give up their other online consumption habits for resource reasons either; they don’t even cease them when their mental health and privacy are harmed. 🤡

Lastly, there is some weird ego stuff going on about talking or not talking about AI. “You hate AI, yet you talk about it. Curious!” “Why do you wanna focus on something negative?” “The more you talk about it, the more you speak it into existence.” I don’t need to speak it into existence; billion-dollar industries funnel money into the bubble and force it into every device and software and ad. Don’t be disingenuous. Each industry, or art form (if you believe AI art is art), needs its critics. And as AI fans love to bring up, every new invention has had its moral panic, so if that also applies here, why are you mad? On the other side: boohooo, you avoid the word "AI" to “not give it more power”; fine, have fun self-censoring for virtue-signalling reasons, Mr. I-did-it-with-all-ten-fingers-and-a-few-braincells.

I will keep writing about it, because everywhere I look, I just see people exploiting both ends of the spectrum for views and money, making extreme claims to get engagement. Whoever screams the loudest and makes the most absolute judgments is seen as more correct, after all. I will write about the AI Act, about labeling requirements, and about more of the spectacular failures and okay-ish results I’ve had, and I will have to name the beast. And I don't care to read the weird ego-boosting shit swirling around elsewhere.

Reply via email · Published 28 Mar, 2026

Kev Quirk Yesterday

Slowing Down on Pure Blog

I'm really proud of what Pure Blog has become, and honestly a little overwhelmed by the interest it's received since launch. The feedback and enthusiasm from the community has been genuinely lovely, so thank you. That said, since announcing it a couple of months ago, I've spent pretty much every spare minute working on it, and most of that time building things I didn't personally need. I said a few weeks ago that Pure Blog was feature complete...ish. The "ish" turned out to be a mistake, because it left the door open and I smashed right through it. 🙃

So as of today, with the release of version 2.2.0, I'm dropping the asterisk. Pure Blog is feature complete. It does everything I want it to do, so it's time to actually use it. I've worked through and closed all the open issues and PRs. From here, if you request a feature, thank you, but it's unlikely to be implemented. But there are some things I will continue to accept:

- Bug reports — if something's broken, I want to know.
- Translation improvements — some non-English translations are AI-generated and could use native speaker review.
- New translations — if you want to add a new language, submit a PR using an existing translation as a reference.

If there's a feature you want to implement, Pure Blog is open source, so fork it and build what you need - it's a tonne of fun! As for me, I'm going to get back to actually writing on this lovely little platform I've built. That was the whole point, after all. 😊

Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email, or leave a comment.


Premium: How Much Of The AI Bubble Is Real?

I’m turning 40 in a month or so, and at 40 years young, I’m old enough to remember as far back as December 11, 2025, when Disney and OpenAI “reached an agreement” to “bring beloved characters from across Disney’s brands to Sora.” As part of the deal, Disney would “become a major customer of OpenAI,” use its API “to build new products, tools and experiences (as well as showing Sora videos in Disney+),” and “deploy ChatGPT for its employees,” as well as making a $1 billion equity investment in OpenAI.

Just one small detail: none of this appears to have actually happened. Despite an alleged $1 billion equity investment, neither Disney’s FY2025 annual report nor its February 2, 2026 Q1 FY2026 report mentions OpenAI or any kind of equity investment. Disney+ does not show any Sora videos, and searching for “Sora” brings up “So Random,” a musical comedy sketch show from 2011 with a remarkably long Wikipedia page that spun off from another show called “Sonny With A Chance” after Demi Lovato went into rehab.

It doesn’t appear that investment ever happened, likely because — as was reported earlier this week by The Information and the Wall Street Journal — OpenAI is killing Sora. Shortly after the news was reported, The Hollywood Reporter confirmed that the deal with Disney was also dead. Per The Journal, emphasis mine:

Oh, okay! The app that CNBC said was “challenging Hollywood” and “freaking out the movie industry,” that The Hollywood Reporter suggested could somehow challenge Pixar and showed Sam Altman successfully “playing Hollywood,” that The Ankler said was OpenAI “going to war with Hollywood” as it “shook the industry,” that Deadline said made Hollywood “sore,” that Boardroom said was in a standoff with Hollywood, that the LA Times said was “deepening a battle between Hollywood and OpenAI” and “igniting a firestorm in Hollywood,” that Puck said had “Hollywood panicking,” that TechnoLlama said was “the end of copyright as we know it,” and that Slate said was a case of AI “crushing Hollywood as we’ve known it” is completely dead a little more than five months after everybody claimed it was changing everything.

It’s almost as if everybody making these proclamations was instinctually printing whatever marketing copy had been imagined by the AI labs to promote compute-intensive vaporware, and absolutely nobody is going to apologize to the people working in the entertainment industry for scaring the fuck out of them with ghost stories! Every single person who blindly repeated that Sora existed and was changing everything should be forced to apologize to their readers!

I cannot express the sheer amount of panic that spread through every single part of the entertainment industry as a result of these specious, poorly-founded mythologies spread by people who didn’t give enough of a shit to understand what was actually going on. Sora 2 was always an act of desperation — an attempt to create a marketing cycle to prop up a tool that burned as much as $15 million a day, which most of the mainstream media bought into because they believe everything OpenAI says and are willing to extrapolate the destruction of an entire industry from a fucking facade. Thanks to everyone who participated in this grotesque scare campaign, everybody I know in the film industry has been freaking out, because every third headline about Sora 2 said that it would quickly replace actors and directors.
The majority of coverage of Sora 2 acted as if we were mere minutes from it replacing all entertainment and all video-based social media, even though the videos themselves were only a few seconds long and looked like shit! Sora 2 was never “challenging Hollywood” or “a threat to actors and directors”; it was a way to barf out videos that looked very much like Sora 2’s training data, and the reason you could only generate a few seconds at a time was that these models started hallucinating very quickly, because that’s what Large Language Models do.

Yet this is what the AI bubble is — poorly-substantiated, media-driven hype cycles that exploit a total lack of awareness or willingness to scrutinize the powerful. Sora 2 was always a dog, it always looked like shit, it never challenged Hollywood, and it never actually threatened the livelihoods of actors or directors or DPs or screenwriters outside of the tiny brains of studio executives who don’t watch or care about movies. Anybody who published a scary story about the power of Sora 2 helped needlessly spread panic through the performing arts, and should feel deep, unbridled shame. You have genuinely harmed people I know and love, and need to wise up and do your fucking job. I know, I know, you’re going to say you were “just reporting what was happening,” and that “OpenAI seemed unstoppable,” but none of that was ever true other than in your mind and the minds of venture capitalists and AI boosters. No, Sora 2 was never actually replacing anyone; that’s just not true, you made it up or had it made up for you.

But that, my friends, is the AI bubble. Five months can pass and an app can go from The End of Hollywood that apparently raised $1 billion to “discontinued via Twitter post that reads exactly like the collapse of a failed social network from 2013” and “didn’t actually raise anything.” It doesn’t matter if stuff actually exists, because it’ll be reported as if it does as long as a company says it’ll happen. Perhaps I sound a little deranged, but isn’t anybody more concerned that a billion dollars that was meant to move from one company to another simply didn’t move? Or, for that matter, that this keeps happening, again and again and again? I’m serious! As I discussed in last year’s Enshittifinancial Crisis, OpenAI has had multiple deals that seem to be entirely fictional:

- Its supposed $100 billion investment from NVIDIA (that was always a “letter of intent”), which went from OpenAI allegedly buying billions of GPUs from NVIDIA in October 2025 to “only a commitment” in February 2026, a mere four months later.
- A “letter of intent” between SK Hynix and Samsung to supply 900,000 wafers of RAM a month that was reported as representing 40% of the global supply of DRAM and never resulted in anybody buying or selling any fucking RAM.
- A supposed “definitive agreement” with AMD from October 2025 that would involve OpenAI using AMD’s GPUs to power its “next-generation AI infrastructure,” except AMD didn’t change guidance and does not appear to have any revenue from OpenAI, despite the first gigawatt of data center capacity being due by the end of this year. Part of the deal also involved OpenAI being able to buy 10% of AMD’s stock, but that was so stupid I can’t even bring myself to write it up. When asked about this on its latest earnings call, AMD CEO Lisa Su said that “the ramp is on schedule to start in the second half of the year,” repeating that the deal existed while not increasing guidance to account for a gigawatt of chips (which would work out to somewhere in the region of $20 billion to $30 billion of sales, against weak guidance of $9.8 billion for the next quarter), sending the stock tumbling as a result. Isn’t it also weird that Meta signed a near-identical deal on February 24, 2026, and nobody seemed to notice that guidance wasn’t changing and AMD was apparently also going to install a gigawatt of GPUs with Meta by the end of 2026? Is everybody drunk? What’s going on?
- A “strategic collaboration” with Broadcom “...to deploy 10 gigawatts of OpenAI-designed AI accelerators” by the end of 2029 that has resulted in no sales of any kind and no increase in guidance to match, with no mention of OpenAI in its latest quarterly earnings report. On its most recent earnings call, Broadcom CEO Hock Tan added that it expected OpenAI to “deploy in volume their first-generation XPU in 2027 at over 1 gigawatt of capacity,” but did not raise guidance or, when asked directly, say how it would deploy 10GW by the end of 2029. I’ll also add that there isn’t a chance in hell OpenAI deploys a gigawatt of these chips in that timeframe, and Broadcom has yet to show any proof that these chips are going to be made.

That’s just the AI bubble, baby! We don’t need actual stuff to happen! Just announce it and we’ll write it up! No problem, man! It doesn’t matter that one of the largest entertainment companies in the world simply didn’t give the most-notable startup in the world one billion dollars, much as it’s not a big deal that the entire media flew like Yogi Bear lured with a delicious pie toward every single talking point about OpenAI destroying Hollywood, much like it’s not a problem that Broadcom, AMD, SK Hynix, and Samsung have all misled their investors and the media about deals that range from threadbare to theoretical.

Except it is a problem, man! As I covered in this week’s free newsletter, I estimate that only around 3GW of actual IT load (so around 3.9GW of power) came online last year, and as Sightline reported, only 5GW of data center construction is actually in progress globally at this time, despite somewhere between 190GW and 240GW supposedly being in progress. In reality, data centers take forever to build (and obtaining the power takes even longer than that), but nobody needs to harsh their flow by looking into what’s actually happening.
In reality, the AI industry is pumped full of theoretical deals, obfuscations of revenues, promises that never lead anywhere, and mysterious hundreds of millions or billions of dollars that never seem to appear. Beneath the surface, very little actual economic value is being created by AI, other than the single most annoying conversations in history, pushed by people who will believe and repeat literally anything they are told by a startup or public company. No, really. The two largest consumers of AI compute have made — at most, and I have serious questions about OpenAI — a combined $25 billion since the beginning of the AI bubble, and beneath them lies a labyrinth of different companies trying to use annualized revenues to obfuscate their meager cashflow and brutal burn rate.

To make matters worse, almost every single data center announcement you’ve read for the last four years is effectively theoretical, these nigh-on-conceptual “AI buildouts” laundered through major media outlets to give the appearance of activity where little actually exists. The AI industry is grifting the finance and media industries, exploiting a global intelligence crisis in which the people with some of the largest audiences and pocketbooks have fundamentally disconnected themselves from reality. I don’t like being misled, and I don’t like seeing others get rich doing so. It’s time to get to the bottom of this. Let’s rock.

Stratechery Yesterday

2026.13: So Long to Sora

Welcome back to This Week in Stratechery! As a reminder, each week, every Friday, we’re sending out this overview of content in the Stratechery bundle; highlighted links are free for everyone. Additionally, you have complete control over what we send to you. If you don’t want to receive This Week in Stratechery emails (there is no podcast), please uncheck the box in your delivery settings. On that note, here were a few of our favorites this week. This week’s Stratechery video is on Agents Over Bubbles.

R.I.P. Sora, 2025-2026. AI Sam came, AI Sam saw, and AI Sam stole those GPUs. We’ll always have the memories. Unfortunately, it turns out that Sam would rather have the GPUs, so on Sharp Tech this week, Ben and I eulogized the app that took over the world for about two weeks last year. That included thoughts on the copyright battles that may have sealed its fate, why Ben’s reluctant to be too critical, and more signs that OpenAI is serious about its enterprise pivot. Come for that conversation, and then stay for a rollicking spring mailbag that includes a great take on search advertising, F1 venting, the Vision Pro and my wife, kids and phones, and more. — Andrew Sharp

The 2026 Bullseye List. The NBA Playoffs are only a few weeks away, which means Ben Golliver and I are already in preparation mode, including a delightful episode today running through a “Bullseye List” of superstars who will be under pressure this spring. We discuss everyone from Kevin Durant and Alperen Sengun to Jalen Brunson, Chet Holmgren, and Victor Wembanyama, a debatable inclusion, but undeniably the most magnetic star in the league right now. And yes, given my Luka takes in January, and Luka looking incredible throughout March, I did take accountability and add myself to the Bullseye List. — AS

Arm’s Big Shift. If you wanted more evidence that AI is changing everything, look no further than Arm: the company was famous for its high-margin IP-licensing business model, but this week announced that instead of (just) facilitating other companies’ chipmaking, it would start making and selling chips itself. Naturally, its first offering is explicitly focused on AI data centers. I explained Arm’s motivations in Wednesday Update, and interviewed Arm CEO Rene Haas to get his point of view on Thursday. — Ben Thompson

- Arm Launches Own CPU, Arm’s Motivation, Constraints and Systems — Arm is selling its own chips, not just licensing IP. It’s a big change compared to Arm’s history, but not surprising given how computing is evolving.
- An Interview with Arm CEO Rene Haas About Selling Chips — An interview with Arm CEO Rene Haas about the company’s decision to not just license IP but make its own chips.
- Tilting at Windmills — As the Iran war continues, let’s take a look at the Democratic Party, institutional media, and offshore wind farms.
- John Ternus and Responsible Individuals
- Sora and Mac Pro Dead
- Singapore’s Sound Card Hero
- A Giant Mess with Super Micro; Completely Correct
- Xiong’an Progress; The PRC’s Balancing Act on Iran; Manus, Apple and Router News
- The Intrigue(?) in the East, Peterson and Acuff On Center Stage, Revisiting Draft Kevin Durant
- The BULLSEYE List in 2026: Playoff Questions for Ant, Chet, Tatum, Mitchell, Wemby, and Beyond
- A Spring Break Mailbag: RIP Sora, Ads and Surplus, F1 Going in Reverse, Elon Inc., Smartphone Parenting, and More

Farid Zakaria Yesterday

Does anyone actually use the large code-model?

I have been focused lately on trying to resolve relocation overflows when compiling large binaries with the small & medium code-models. Often when talking to others about the problem, they are quick to offer the idea of using the large code-model. Despite the performance downsides of the instructions the large code-model generates, it’s true that its intent was to support arbitrarily large binaries. However, does anyone actually use it?

Turns out that large binaries do not only affect the instructions generated in the .text section but may also have effects on other sections within the ELF file, such as .eh_frame (exception handling information), .eh_frame_hdr (an optimized binary search table for .eh_frame), and others.

Let’s take .eh_frame and .eh_frame_hdr as an example. They specifically allow various encodings for the data within them (DW_EH_PE_sdata4 or DW_EH_PE_sdata8 for 4 bytes and 8 bytes respectively) irrespective of the code-model used. However, it looks like the userland has terrible support for them! If we look at the .eh_frame_hdr format, we can see how these encodings are applied in practice: the header’s encoding fields resolve to specific DWARF exception header encoding formats, which dictate the byte size and format of the fields that follow. For example, if the FDE count’s encoding field is set to DW_EH_PE_sdata4, the count will be processed as a signed 4-byte value.

Up until very recently ( pull#179089 ), LLVM’s linker would crash if it tried to link exception data beyond 2GiB into the binary search table (.eh_frame_hdr). This section is always generated to help stack searching algorithms avoid linear search. Once we fix that, though, it looks like other linkers ( gcc-patch@ and pull#964 ) explicitly either crash on it or avoid the binary search table completely, reverting back to linear search. How devastating is linear search here? If you have a lot of exceptions, which you theoretically might with the large code-model, it matters: I had benchmarks that started at ~13s improve to ~18ms, a ~700x speedup. Other fun failure modes exist as well. Note: don’t let the name confuse you; it’s actually 32-bit.

It seems like the large code-model “exists”, but no one is using it for its intended purpose, which was to build large binaries. I am working to make massive binaries possible without the large code-model while retaining much of the performance characteristics of the small code-model. You can read more about it in the x86-64-abi Google group, where I have also posted an RFC.
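For readers who haven't stared at this table before, here is a minimal C sketch of the .eh_frame_hdr layout and the lookup it enables. The prefix fields and encoding constants follow the Linux Standard Base description of the section; the search function is an illustrative assumption of how a consumer walks the table, not code from the post.

```c
#include <stddef.h>
#include <stdint.h>

/* DWARF exception-header encodings (subset): the encoding byte
   selects the size/format of the field it describes. */
#define DW_EH_PE_sdata4 0x0b /* signed 4-byte value */
#define DW_EH_PE_sdata8 0x0c /* signed 8-byte value */

/* Fixed prefix of .eh_frame_hdr per the LSB. The fields that follow
   it (eh_frame_ptr, fde_count, and the search table itself) are
   decoded according to the *_enc bytes below. */
typedef struct {
    uint8_t version;          /* currently 1 */
    uint8_t eh_frame_ptr_enc; /* encoding of the pointer to .eh_frame */
    uint8_t fde_count_enc;    /* encoding of the FDE count */
    uint8_t table_enc;        /* encoding of the search-table entries */
} eh_frame_hdr_prefix;

/* One search-table entry, assuming table_enc == DW_EH_PE_sdata4:
   signed 4-byte offsets relative to the start of .eh_frame_hdr.
   This is where a >2GiB .eh_frame hurts: offsets stop fitting in
   an int32_t unless the producer switches to an 8-byte encoding. */
typedef struct {
    int32_t initial_loc; /* start PC of a function, as an offset */
    int32_t fde_addr;    /* offset of the FDE describing it */
} search_entry;

/* The point of the table: find the FDE covering `pc` in O(log n)
   rather than linearly scanning all of .eh_frame. Returns the last
   entry whose initial location is <= pc, or NULL if none. */
static const search_entry *find_fde(const search_entry *table,
                                    size_t fde_count, int32_t pc) {
    size_t lo = 0, hi = fde_count;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (table[mid].initial_loc <= pc)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo ? &table[lo - 1] : NULL;
}
```

The ~13s-to-~18ms gap above is exactly this O(n) versus O(log n) difference: when a linker punts on the binary search table, every thrown exception pays for a linear walk of the whole FDE list.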

ava's blog Yesterday

art feelings

Inspired by Vaudeville Ghost’s make bad art. I’ve always felt a resistance towards learning how to do art “properly”. Over the course of my life so far, I did occasionally look at short tutorials for some things, or booked one-time art workshops, but I just couldn’t find anything in there that made me want to stay at it and perfect it, and I don’t think I kept any technique long term. I also felt restricted by art class in school (despite great grades), which just wanted me to reproduce a style as closely as possible.

I know most people in art progress by emulating others and learning the rules before producing stunning art in their own style. They grind practice sessions and drawing exercises and use palettes that have all the right values and complement each other, and they set all the shadows and highlights just right; their use of color underlines the piece. The result is something really amazing and kind to the eyes, but it’s also very technical and mechanical at times. Some of it treats art like this thing you can win, that can be graded finely and put neatly into boxes, something you perform for others. That if you follow the rules to a T, the result is always good art. And they’re mostly right about that. But I’ve never had the drive to optimize my art this hard, for it to be checking off a list. I never could see it as a challenge to master and learn specific techniques (aside from some oil stuff I tried). For me, the more I look at what others do in art, the more my creativity and style disappear, and I want to protect them instead. I don’t want to feel limited by having to think about whether I’m doing something right. I know some limits and rules can set others’ art free, or polish the piece, but not mine; too much time spent looking elsewhere and I just emulate others, too many limits and I just stop.

Some things in art also sound too serious to me, mathematical and snobbish. Like colors are formulas you are only allowed to calculate in a specific way, or a language whose vocabulary lists you learn by heart. I was never good at math, and my difficulties with my mental eye have forced me to be more experimental and see where I end up. Color theory is a law to others and an optional guide for me; I think the rules are not bent and questioned enough. For a lot of things, I think, “This only ‘looks right/better’ to you because we are inundated with this style or use of color everywhere.” I hate that people’s style is called wrong due to weird dimensions, weird use of color, or not respecting the rules of the medium, but if they stick with it enough and it becomes popular, suddenly it’s “allowed” and taken seriously, analyzed and retroactively given reasons and interpretations. It only gets legitimized when it’s close enough to an existing style, or palatable enough, or following some made-up rules.

I think if I really tried, my art would be so much better objectively, and it would be nicer for others to see, but simultaneously, it would ruin the experience for me. It would introduce guardrails I don’t wanna have. This used to be a point of shame for me, like I’m choosing to stay uneducated, ignorant, and with unused potential. Then years ago, I read a post by an artist whose art I really like who said they ignored all the advice that’s usually given for the medium (you aren’t allowed to do this, only xyz is the proper way to do it!) but now they’re successful with their style.
I know people will say a good study will teach you in a few weeks what could otherwise take you years to learn on your own (if at all), but I am fine with that. I don’t want to become a professional artist, and I don’t wanna become good at this hobby; I just wanna do it when I feel like it. This also protects me from the effects of perfectionism. Some hobby artists seem like they’re only allowing themselves to enjoy and engage with this hobby if they’re aiming for a specific standard and pretending they’re gonna have to pass an exam about it, because free time has to be productive as well, and they cannot bear to spend time on something that isn’t useful or earning admiration from others. Time is scarce; why throw paint on the paper for fun, if you can follow a YouTube guide in earnest tension and afterwards say you have studied a technique? So much more worth your time in today’s metrics.

A while ago I was obsessed with drawing butterflies; currently it’s circles and gradients and colorful waves. Nothing impressive. I would like to draw more pixel art of rooms, more nature landscapes with gouache again, and - surprisingly, after writing all this - I’d love to try the jelly art style. Probably the closest I’ve ever come to wanting to submit to a set of rules, because obviously I’d need to adhere to them to nail the style. We’ll see. I like my mixed stuff the most. I have a canvas where I mixed acrylics, gouache and makeup. I have another that has acrylic paint and some crystals stitched onto it.

Honestly, looking back on it all, I think there have also been too many times where making art for others was a net negative for me, or my style wasn’t understood or respected and people didn’t go about feedback in a respectful way. Like, if their character’s feature is big, it’s “stylized”; when I do it, it’s “awkward”. And I know it’s because one works within the established rules and one doesn’t, so one is seen as skill and one as an accident or a lack of skill. People will always see a person as more skilled if they make art that’s more harmonious to look at, and it seems I just don’t consistently create art that looks harmonious to anyone else but me; makes sense, with so many mediums and months or years of not making art. If I wanted to make better art, I’d have to draw more often, and draw like others do. I remember a time a teacher scribbled over my art, and I never want to experience that again. So I released that expectation, and I make “bad” art for me.

Reply via email · Published 27 Mar, 2026

Stone Tools Yesterday

Aldus PageMaker on the Apple Macintosh

In life, there are love affairs and there are marriages. Deluxe Paint was (and is) an amazing, beautiful piece of software. It taught me so much about color, texture, and painting with light, but more specifically it opened my eyes to the possibilities of digital art as a medium. Yet, as much fun as I had, I never became a digital painter, I don't really do any pixel art these days, and over time the passion faded, never truly gone, but certainly diminished. A love affair.

In college, with a declared major in electrical engineering, I took a chance at writing for the school newspaper at the urging of my English professor. I was hooked from the jump, caught the reporting bug, and learned the ins and outs of journalism. Over the next four years, I became adept at Aldus PageMaker, the heart of our student media production process, fascinated by its ability to amplify the written word. It was because of PageMaker that I switched majors to graphic design; we stuck together well into my professional career. A marriage.

Sometimes these retrospectives are fun peeks into the past, opportunities to understand computing history a little better. Other times, I'm revisiting a condemned old house I used to live in, in a town I abandoned, finding and dusting off a forgotten jewelry box, inside which sits a tarnished wedding ring. What we had was beautiful, once. With this exploration, I'm honestly not expecting to rekindle any deep love for PageMaker. It taught me much in my youth, lessons I've taken to heart over the years and carry with me to this day. Still, you never know, maybe there's something yet to learn. Only one way to find out.

This was the last version released under the Aldus label. Soon thereafter, Aldus merged with Adobe, and it was re-released as Adobe PageMaker 5.0a.

I have a very specific project in mind this time around. No, "project" is not quite the right word. It's not a project, it's a calling. Many years ago, one Mr. Robert Charles Joseph Edward Sabatini Guccione had a dream. That dream? To compete against Hugh Hefner's Playboy magazine for dominance in the adult erotica print landscape. His dream expanded into a hotel staffed by Penthouse Pets and visited by Saddam Hussein. The dream grew further still into an X-rated box-office bomb starring Malcolm McDowell, Helen Mirren, and Peter O'Toole. Good times.

Guccione's eventual wife, Kathy Keeton, had her own dream: a kind of Penthouse magazine, but for the mind. It was to be a heady packaging of art, literature, and investigations into science and the paranormal, presented on high-gloss paper, with a design sensibility that promised intellectual value well beyond the $2.00 cover price. Heck, the liberal use of spot-color metallic ink in every issue was itself worth the $2.00. Edited by Frank Kendig, with Art Direction by Frank Devino, issue one of OMNI Magazine hit newsstands with the October 1978 issue, around the time of the debuts of the Speak & Spell, Intel's 8086 processor, and Space Invaders. As Bob Guccione wrote in the premiere issue's "First Word" publisher's column, "This then, is the editorial promise of OMNI - an original if not controversial mixture of science fact, fiction, fantasy, and the paranormal." The first issue set the table neatly. A story about scientific advances in age-defiance, fiction from Isaac Asimov, an interview with Freeman Dyson (he of the Dyson Sphere joke), and artistic photography of soap bubbles all combined to take the reader on a magical journey of enlightenment.
It worked on me, at any rate. OMNI's print run ended in 1995, with flaccid attempts to reanimate its corpse over the years. A new issue appeared on newsstands in 2017 with a cover design I will charitably call, "I guess they tried." That was a cheap shot, and I need to be cautious throwing stones here, as I am about to make my own attempt at designing OMNI Magazine. My name isn't Christopher Hubris Drum for nothing.

Launching into PageMaker brings back a tidal wave of memories. Good lord, the volume of Diet Coke I drank during long production nights back in the 90s! Even now, to place an image is as reflexive as breathing. If I mentally prod at the inner crevices of my brain matter, though, the rest of the expert knowledge appears to be long gone. Considering PageMaker from a digital native's perspective, its tools and way of thinking can be quite anachronistic. Today, we enjoy a kind of fluidity in page design, worrying about things like "responsive design" with flexible, auto-adjusting layouts. That was not such a concern to early desktop publishers, except in coarse-grained measures. What they really needed was a bridge for the mental divide between manual paste-up and the new digital hotness. Consider the tools of the trade at the time: X-Acto blades, point tape, light tables, non-reproducible pens, rubylith/amberlith, vellum, wax machines, PMT machines, photo typesetters, and just so much paper. All but the paper were replaced with a mouse.

A fantastic video showing someone doing manual paste-up, just to give you a sense of the dramatic sea change desktop publishing introduced.

PageMaker provides a digital equivalence to a physical pasteboard. Much like other software I've looked at, especially in the word processing arena, the "Don't worry! Yes, it's on a computer, but we've reproduced a metaphor you understand" approach informs a lot of decisions behind its interface and tool-set. Everything you do is "manually digital", if that makes any sense? User actions are similar to the pre-digital workflow; it just happens on a screen instead of a light table. For example, there are no tools to assist with positioning elements relative to one another. There is no real concept of "layers." There is no such thing as "grouping." If you want text in columns, you must manually lay those down, one at a time.

So here's our blank page in PageMaker and the palettes which are ready to assist. We have a Letter-sized page, outlined in black, with default margins in pink and purple. In the bottom left are icons for the left and right "master pages" (elements that will be included on every left or right page, respectively) and a little page icon showing that this is a 1-page document and we are on page 1. I really love the cute little pages in the scroll area; it's so easy to immediately jump to a specific page or spread. It's all so simple, it's kind of impossible to forget how to use it. The Toolbox does what it says, offering the selection arrow, lines, text, object rotation, boxes, circles, and image cropping. Styles has pre-built paragraph styles, which can be modified to meet your design spec, and which can "cascade" by basing styles on other styles. Colors are of your own mixing, or can be pulled from licensed libraries, like Pantone, Toyo, Trumatch, and others. There is also a Library palette, for storing reusable objects in your publication, which utterly failed me in my tests. It might be hard to wrap a modern mind around it, but that's basically it for the palettes.
They don't dock with one another (that would come in the Adobe era). There are no hidden sub-palettes. There's just what you see: tools, styles, colors, and the one at the bottom, a context-sensitive "control palette." This is either a merciful culling of modern palette madness, or a frustrating barrier to artistic expression, depending on which side of 2000 you were born on. Interestingly, during my early poking around in the tools, it isn't so much that I find myself wishing for more palettes as that I just want a refinement of these. As a simple example, notice what is not inside the palettes? There's no method for creating new colors or styles, for example. Those are separate options under the Element and Type menus, respectively. A little button would be nice.

While re-familiarizing myself with the forgotten contours of the program, I'm remembering how great the control palette is. It debuted with PageMaker 4 and completely changed the usability of the program. In the image above you can see how the control palette morphs itself to show a core set of commonly used functions specific to the currently selected tool, represented by the left-most icon: box, text, line, and image (top to bottom). The palette gives live stats and mathematically precise control over most aspects of each tool. I find the control palette so adept at handling 90% of what I need to do, I basically don't touch the menus of the program. If color selection could be worked into the palette in some fashion, that would handle another 9.9% of what I need.

The controls for numeric positioning are not just useful, they're basically required. Trying to position anything with precision by hand is futile, which reveals a letdown. The palette shows in real-time a dragged item's position on the page. When dragging out a guideline from the ruler, we can see where that guideline will fall when released. However, I just said that positioning by hand is futile, and guidelines need high precision. Yet guidelines in PageMaker are special, delicate creatures treated with unique rules. Unlike everything else, guidelines are not page objects, and so they cannot be selected for editing. We can grab them and move them around, but we cannot just click-select one. If we can't select it, we can't fine-tune it with the control palette. It's the one thing we need precision for, but it's the one we're denied.

This blank page is driving me crazy. Let's get a masthead on there, so I can at least pretend like I'm a real designer for a moment.

Using the control palette to set the masthead. (measurements derived from here)

Continuing to think of PageMaker as just a big area for building collages out of raw material, this means it also doesn't have any concept of layers. Things are layered, but to find something in a stack means sifting through the stack item by item to reach the desired element. This is PageMaker's biggest flaw, and it proves frustrating time and again. To be completely fair, when Aldus PageMaker 5 released in 1993, Adobe Photoshop was at version 2.5 and didn't have layers either. Photoshop wouldn't get layers until version 3, in 1994. It's particularly frustrating because the simple act of clicking on elements can bring them to the front automatically, as with the main cover image. Once I have it in place, I often find it is obscuring the masthead. So I have to send it to the back over and over and over again, with every accidental click.
That happens a lot, because PageMaker misunderstands my click intent quite frequently, clicking "through" my desired object into the background image. That jumps the image to the front, and here we go again. This brings up another issue, which is that there is no way to "lock" objects into position. Everything is loosey-goosey and free-form, again mimicking old-school paste-up methodology (I recall dropping paste-up boards and losing a carefully arranged layout or two back in the day). That adherence to the old ways makes sense to me for PageMaker 1 and 2. By version 3, I think the digital nature could have been better explored. By version 4, it absolutely should have been. By version 5, it feels like weaponized incompetence. There is a clear reason QuarkXPress enjoyed a reported 90%+ market dominance in desktop publishing by the time PageMaker 6 came around. Simply put, they embraced the future of layout, not the past. Until they didn't, but that's a story for another day.

Now to flesh out this cover a bit more. One thing that takes getting used to is how much of the design occurs in our imaginations. The screen is simply too small and too low-resolution to know with 100% certainty that what we see is what we want. That's one reason the control palette is so invaluable: we can know with mathematical certainty that an object is where we intend, despite what we see on screen when zoomed out.

Like EA developing the IFF file format for the Amiga community, Aldus likewise developed TIFF (tagged image file format) to unify image handling on the Macintosh. TIFF was the image standard for continuous-tone images in publishing on the Mac, bar none. Of course, images fit for print were pretty heavy objects for the RAM-restricted Macintoshes of old. Lightweight 72dpi images might have been fine for the screen, but 300dpi was needed for output to Linotype for final camera-ready artwork. Here's a dpi vs. lpi explanation, in case a digital-only workflow has shielded you from learning of it. The cover will need a 9" x 11.5" image at 300dpi in CMYK. Using LZW, the only compression method PageMaker recognizes, that's a 20MB file, and PageMaker itself only requires 3MB to run. It's efficient at what it does, but something's gotta give.

In PageMaker we can link to TIFF files (embedding is also an option), with three on-screen preview options: greyed out, normal, and high resolution. Your choice will depend on your system and the complexity of the layout. If things are chugging too hard, step down. Turn on high to get it right, then turn back to grey to avoid the ulcers of slow screen redraw on your Mac SE. This may still be taxing to early systems, but we have another option. A common practice in the day was to use FPOs, "For Position Only" images. Those were low-resolution proxies, good enough for a designer to marry text and graphics with some degree of confidence without stressing her computer. After delivering digital files to the printer (oftentimes literally handing over floppies, SyQuest, or Zip disks in person), a process for swapping FPOs with print-ready high-resolution versions of the same images was available to the prepress team. Design in low-rez, output in high-rez. For OMNI, the only printer I have available is the coin-operated color laser copier at the convenience store, so I'm not overly concerning myself with "press ready" on this. However, I don't want to make things artificially easy on myself either. There is no art without pain, as they say.
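A quick back-of-the-envelope check on that 20MB figure; the arithmetic here is mine, not the post's, and the LZW ratio is an assumption:

```c
#include <stdio.h>

/* Uncompressed size of the OMNI cover scan:
   9" x 11.5" at 300dpi, CMYK = 4 bytes per pixel. */
int main(void) {
    long width_px  = (long)(9.0 * 300);  /* 2700 px */
    long height_px = (long)(11.5 * 300); /* 3450 px */
    long bytes     = width_px * height_px * 4L;
    printf("%.1f MB uncompressed\n", bytes / 1e6); /* ~37.3 MB */
    return 0;
}
```

Roughly 37MB raw, so LZW getting a continuous-tone scan down to about 20MB on disk is plausible, and either number dwarfs the 3MB PageMaker needs to run.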
In reading about the origins of PageMaker, and interviews with and about Aldus's founding by Paul Brainerd, it seems he was a real stickler for typography. Of note, he pushed hard for things like typographer's quotes (curly vs. straight), and so within PageMaker there are quite a few options for setting type "just so." Typographer's quotes can be toggled on a document as the default. Text tracking, leading, baseline shift, and kerning are all settable in precise increments by the control palette. Letter-by-letter kerning is also easily achievable, for setting nice, tight TA pairs (a little Guccione callback joke for you there). Despite Brainerd's self-professed love for good type, the Quark crowd lamented PageMaker's typographical controls.

One area in which QuarkXPress and Ventura Publisher had innovated were the tools for laying down columns of text. Those used a "text box" methodology, which is pretty much the standard today. A box could be drawn, delineating an area of the page which should hold text. That box could then be set up to contain columns, gutters, insets, a frame, and so on, and the text would flow within accordingly. Move the box and the internal formatting moves with it. It makes too much sense, and so PageMaker doesn't do that. PageMaker kicks it old-school, forcing us to put down guidelines on the page that show where columns of text should fall, nay, where they could fall if one were so inclined. They're mere suggestions, really, and it is up to the designer to place the text within those guidelines, or not. This kind of adheres to the concept of using a grid structure for a page, where the grid can be used rigidly or fluidly, as the designer may choose.

Using an OMNI scan for measurements, I've set up a template for two-page spreads. Notice how column guides fill the page top to bottom. Left and right pages can have different column counts, but a single page cannot. With the box layout methods of Quark and company, if we want to split the layout into 4 columns on top and 3 on the bottom, we can draw two text boxes and assign respective column counts. To do the same thing in PageMaker, we have to set the page to 4 columns, lay out the 4 columns, then change to 3 columns, and lay those out. This kind of futzing about is the drum-beat of using PageMaker, a rhythm of "set a value, do a thing, change that value, do the next thing, reset the value, do another thing" which my muscles have remembered long before my brain does.

PageMaker offers a few tools for wrangling long-form publications. The story editor is a lightweight, built-in word processor, with spell check and find-and-replace. Styles can be applied in its stripped-down text view; they aren't visible until exiting the story editor, but are annotated in the margin. It's nice not having to jump out of PageMaker just to do a quick edit. Throwing everything together into a monolithic document can be unwieldy. It's a far sight better to break the publication into separate documents for simultaneous work by various contributors. A book list will let us link multiple individual documents into one larger, logical construct. Select a set of files, reorder them into their book order, and away you go. Once those document relations are set, we have a number of tools for helping our reader navigate the tome. A table of contents can be auto-generated, thanks to paragraph styles. Turn on the "Include in table of contents" flag for any given style to get table of contents coalescing for free.
The formatting options will probably get you about 60% of the way toward a final layout. Setting a "next style" for each paragraph style lets me simply type and automatically receive a perfect column header. This exists in page layout software even today; see, we weren't completely hopeless back then! I can't one-shot an "OMNI perfect" table of contents, but it's a good starting point and saves me from annoying minutiae, like laying down 1-point rules. Automatic page numbering is also available, by positioning page number placeholders on our master pages. When collated, each document will receive the appropriate page numbering relative to that document's position in the complete book. How about a nifty end-of-book index? It, too, can be auto-generated, though it requires good planning and forethought. Highlight a piece of text and promote it to an index entry; PageMaker gives you an opportunity to tweak the data which drives the index layout, and will generate a text block containing a neatly formatted index. Such an index might need alphabetical ordering, or perhaps some kind of topical ordering, and both are possible. Tools for setting up index topics, and the rules for PageMaker to follow when extracting that data, are available to ease the pain. It takes some playing around, testing the waters, to really get how the pieces fit together into a final index, but it proves to be a fairly robust, data-driven solution to a logistical nightmare. PageMaker accepts a wide variety of content types. Various word processor formats, graphic formats in both raster and vector (in EPS format), and even Lotus 1-2-3 and dBase data can be imported, for those worried I wouldn't tie things neatly back into previous posts. With all of the various pieces on the page, we need to be able to make sure they're linked to the right source documents and stay up to date as our team makes changes. For a while, there was a publish/subscribe mechanism on the Mac, which danced on the edge of OpenDoc ideas (but was not related, to my knowledge). PageMaker supports this, functioning as a "subscriber," and it is up to other applications to function as data "publishers." If you know OLE on Windows, you know what I'm describing here. Once subscribed to a component, which could be as hyper-specific as a single word from a Microsoft Word document (I tried it before I claimed it!), PageMaker will sense changes to the source data and prompt the designer to keep it up to date. This won't help if the change alters the length of the text and forces a reflow, or if the shape and dimensions of the graphic require a new text wrap. Also, any styling previously applied to the subscribed element will be lost, and it will need to be re-styled to match, after an update. "Technically, this all works," he said with a shrug. Honestly, I find it annoying, both to set up and to utilize. PageMaker interrupts right in the middle of working on something else to announce updates to subscribed elements. The Links panel already lists everything placed into the project with each link's update status. It also lets me one-click update all links globally, on my own time, at my own pace, when I'm ready. It's unobtrusive and puts the control back into my hands. Publish/subscribe? More like PUNISH/subscribe, am I right folks?
the audience boos, pelting me with unopened copies of Microsoft BOB I don't need to dig into this too much, because it's very much a "going to press" feature, and a vigorous interrogation of pre-press technologies falls far outside the scope of this article. In the context of the desktop publishing wars, it is important to note PageMaker's constant catch-up to Quark in the professional arena. Where Aldus was initially content to appeal to a "making flyers at home for church fund-raising bake sales" kind of crowd, Quark had gone for the professional jugular. Generating separations, the cyan, magenta, yellow, and black layers that, when combined in ink on paper, build our final image, was a major missing piece of a robust publishing strategy. Or at least that was true until QuarkXPress 2 in 1989, itself contemporaneous with PageMaker 3. Around 1992, Aldus attempted to staunch the bleeding of users to Quark. PageMaker 4.2 came bundled with a standalone application for generating color separations, called Aldus PrePrint. As one review said, "It does the job." I can hear the yawn that accompanied the sentiment. Finally, four years after QuarkXPress 2, PageMaker 5 integrated color separation generation into the application proper. CMYK and spot color plates can intermingle, plate order can be assigned, colors can be set to overprint/knockout, and line screen/angle are all adjustable. It's all pretty coarse-grained though. For example, adjusting for dot gain on uncoated paper stock, removing color cast, grey component removal, adjusting plate levels and curves to compensate for a finicky press; situations like those are far more suited to sophisticated tools like Letraset ColorStudio or even Adobe Photoshop, after it gained CMYK control. For simple, basic, day-to-day separation needs, especially for those on a budget, PageMaker 5 does a fine job; an assertion I can illustrate with a clever video. I brought the PDF into Affinity 3, tinted each separation and set those to "multiply". Dragging them together simulates the final printing effect. I'd say those separations look accurate. Having made the effort to catch up to QuarkXPress with its color separation utilities, Aldus had further catching up to do with Quark's plug-in architecture. What Lotus 1-2-3 "add-ins" did for spreadsheets, Quark XTensions did for desktop publishing. Aldus had to keep their ball in play, and so introduced Aldus Additions with version 4, expanding the breadth of bundled tools in version 5. In practice, using the tools reveals how weak Aldus's retort to Quark was. For example, PageMaker 5 adds the ability to "group items" through an Addition. Hooray! "PS Group It" and "PS Ungroup It" kind of do what they state, except all selected items must be completely contained within the current page boundaries. If anything sticks off into the pasteboard, it cannot be grouped. Additions are, put simply, a mess of a solution to a real problem. The Macworld PageMaker 5 Bible concurs. Where Aldus kind of dropped the ball, third-party Additions didn't do much to pick up the slack. The biggest package, and one I remember using, was Extensis PageTools for about $100 in 1994. A visually heavy, kinda Microsoft Word 5-esque toolbar with lots of geegaws and whoozits, character-level styles (PageMaker only did paragraph-level), find-and-replace for colors, a visual thumbnail document navigator, and more formed a grab bag of solutions to a variety of random PageMaker annoyances. It's not nothin'.
While I was researching the history of desktop publishing, one word came up again and again: democratization. Desktop computers would ostensibly package formerly specialized skills into tools so simple anyone could use them. This would drive down production costs, opening print publishing to a wider audience. It occurred to me to do a Google Ngram search, and this graph in particular got me thinking about democratization a little more. I wasn't doubting the truth of it all, per se, but the chart gave me a "this needs further investigation" itch I needed to scratch. Searching for "social impact of desktop publishing" turns up surprisingly little, at least in the way that I mean it. There is a good amount of information on the technical side of the discussion, extolling the virtues of PostScript and the cost/time savings gained by the new desktop tools. But we see in the chart that talk of the "digital divide" followed desktop publishing's hype cycle. Those two didn't seem to get a lot of time to chat with one another. The birth of desktop publishing, the rise of personal laser printers, and the rapidly lowering costs of powerful personal computers all converged to lower the barrier to entry into publishing. There's no denying that. There are many stories talking about production times being cut in half, or typesetting costs being cut by up to 90%. PageMaker: Desktop Publishing on the Macintosh, by Kevin Strehlo, noted that traditional typesetting could run up to US$400/page in 1989, about US$1,000/page in 2026 dollars. 90% off ain't 50% bad. "Cheaper than ever before" doesn't necessarily mean "cheap." In 1985, a Macintosh 512K ($3,195; $9,700 in 2026) + LaserWriter ($6,995; $21,000) + PageMaker v1.2 (w/PostScript printer font support, $495; $1,500) cost over US$30,000, in 2026 money. Even without the LaserWriter, that's still over $11K. I can appreciate the dramatic reduction in costs, but personally I would still be priced out of joining that revolution, even if I "acquired" certain tools through "alternative means." Everything I read about the impact on publishing seems to be from the point of view of publishing elites, and the CEOs of the companies involved. Brainerd would often recount a story about a church that was able to do print runs of 600,000 units thanks to PageMaker. Dan Putnam, Adobe employee #2, called out a risqué lesbian newsletter and a fundamentalist Christian newsletter as examples representing the breadth of materials PostScript helped enable. If we're talking about empowerment and democratization, I don't particularly want to get that information secondhand from corporate execs. Join me then, won't you, on a small audit of desktop publishing's impact on the rest of us, and let's try to get a sense of how the "revolution" was seen by those who fought in the streets. Sometimes, the revolutionary street fighting was literal. In 1991, Communist Party of the Soviet Union hardliners attempted to wrest control of the country away from Mikhail Gorbachev and newly-elected president Boris Yeltsin. During the coup attempt, Gorbachev was spirited away and newspaper presses were locked down. According to Brainerd's obituary in GeekWire, Aldus PageMaker played a role in defanging the "Gang of Eight," the core hardliners who staged the coup. As an alternative way to get the pro-democracy word out, flyers carrying Yeltsin's message were created in PageMaker (the story goes) and photocopied for mass distribution. "During the coup in Moscow all the presses had been shut down.
Boris Yeltsin commandeered an HP printer, a PC, and a copy machine. There were pictures of Yeltsin surrounded by people with their hands outreached, trying to get copies of documents that were all produced in PageMaker. It's really a powerful image. It made me very proud," Brainerd said in Inside the Publishing Revolution: The Adobe Story, by Pamela Pfiffner. Brainerd's obituary states that Aldus later ran an ad with the tagline "We helped create a revolution." That ad ran in the... in the... huh, where did that ad run, anyway? Its existence is corroborated by ex-Aldus employee Gabi Clayton in a Facebook post after Brainerd's death. She recalls having a copy by her desk, but doesn't have it any longer. I looked high and low for the tagline and couldn't find it in archive.org, Google Books, or the internet at large. My best current guess is that it ran in Aldus Magazine, whose digital archives are almost non-existent. Twenty years ago, the Computer History Museum did an interview with Brainerd in which he mentioned neither the event nor the ad whatsoever, which strikes me as a strange omission. At the end of the interview, he's asked if he has materials to donate to the CHM, which he affirmed. I checked the CHM online archive and found nothing, so I reached out to see if Brainerd ever followed through on that donation. CHM responded saying that he had indeed done so and they would scan the materials at my request. I have no idea if the ad is amongst those items, and am waiting for the results. I will update this article if new information comes to light. In the meantime, I thought I'd look through print materials associated with the coup, looking for some tell-tale sign of desktop publishing's involvement in the production of revolutionary materials. That would be a quite literal "democratization" artifact; democracy was precisely what they were fighting for! Harvard keeps a small selection of coup-related materials online for perusal; you can check that stuff out here. I may have found something that matches the story. I cannot say "this was done in PageMaker." However, if translation tools are to be trusted, this is a celebration of the failure of the coup attempt, including a mocking piece about "How not to stage a coup d'etat." If you've ever tried to do manual text wrap, you'll know that what we see in that sample could only be done digitally. Full text justification with hyphenation, and the slightly staggered baselines (probably shifted due to subhead leading), feels very PageMaker, especially since I encountered the same issue in my OMNI project. It also appears to be a photocopied handout, which matches the publishing methodology of the resistance. It doesn't prove PageMaker, per se, but it definitely sets my spider senses tingling. Sometimes it's very easy to see the before and after of desktop publishing on a publication. A typical layout in smaller publications was a literal typewritten page published as-is, like this example from an early issue of Azania Worker. Columns? Bah, who needs stupid columns! (please don't make us manually type out columns!) Later, Azania Worker explicitly called out their transition to desktop publishing for tightening up the layouts of their anti-apartheid publication. Bob Symes is credited with handling that, and a hallmark of early struggles with digital typesetting tools, an over-trust of "forced justification," is evident.
I remember distinctly playing around with kerning and leading to make articles fit into given spaces in the student newspaper. If an article were a few lines too short, that was nothing an increase in font size or leading by 0.1 points couldn't fix. The overly-tight tracking in the example suggests that cutting text was not a consideration. They were determined to fit every important word into the limited space available, evenifitmeantmakingeverythingruntogether. A desire to join the desktop publishing revolution was expressed across a few publications I peeked through. As I suggested earlier, pricing still shut some groups out of enjoying the new tools of the trade. It would take more time yet for prices to fall enough to open the doors wider and let more people join in the fun. Lesbian Connection struggled to figure out how to afford to give the people what they wanted, nay demanded: COLUMNS!!! The author then lays out the costs and is excited to deliver. Let's look at the next issue and see those beautiful, highly-demanded columns at work. Oh, well, maybe next issue? I won't drag this gag on any longer. Over the next two years no transition occurred. The reason for this is explained to their column-desiring audience: it was still too expensive. Even budgeting for a PC over a Mac, and with laser printer costs having been cut in half or more over the years, the savings still weren't enough. This publication continues to this day, and as they're on the web they clearly made the transition to digital production. But when? Online archives of the print edition stop before that happened. I need closure on this story! I reached out to the editors and tried to make a case for helping me learn when the transition occurred. It seems to me that it would have come with a big hullabaloo, something like, "You demanded it for 20 years, so we're proud to bring you columns!" Unfortunately, my journalistic persuasion skills seem to have atrophied, and I didn't get a response. Maybe someday I'll find out how and when their readers received the columnar layouts they so craved, nay deserved. Zines, "blogs in print form" I suppose I'd call them today, were and still are an interesting subculture of the publishing scene. Unapologetically hand-crafted, sometimes constructed as literal collage on the kitchen floor, topics ranged from personal ramblings to the adventures of a man who wanted to wash dishes in every state. I defy you to tell me that story wouldn't trend on Hacker News today. In Notes From Underground: Zines & the Politics of Alternative Culture, author Stephen Duncombe noted a tension for zine makers trying to incorporate desktop publishing into their workflows. The editor of William Wants a Doll, Arielle Greenberg, struggled to use desktop publishing "in a way that didn't dehumanize her zine." Lizzard Amazon, editor of Slut Utopia, wrote, "it is not so hard to use pagemaker," but, "i am still going to write all over this thing in pen at the last minute." And apparently she did. The zine ethos is one steeped in anti-establishment, rejecting utterly the trappings of mass-produced media. It is supposed to be an antidote, a vaccine against anything that smacks of corporate influence. Desktop publishing offers democratization of professional layout tools, yet the author suggests that very democratization runs counter to the zine-culture ethos. When the tools are democratized, and by extension homogenized, maintaining the expression of authenticity becomes harder, if not impossible.
Duncombe concludes that the internet and web publishing, more so than any of the print desktop publishing tools before, actually fulfilled the original promise of democratization. But, at what cost? "In the zine scene we preach the ethics of DIY and democratic creation but the experience of self-publishing on the Internet demonstrates that when everyone begins to express themselves then there isn't the scale or coherence that encourages the formation of an alternative world-view." Every technology has its naysayers. Some, like the anti-generative-AI crowd, are right, and just, and 100% correct to fight the dumb AI companies and not let them turn everything we love into room-temperature mayonnaise like the flat-out wrong information that keeps turning up in search results when I'm just a guy trying to do his best to inform his readership about ancient publishing practices and the history of those technologies and is it so terrible to want real information and.... Ahem, excuse me. Let's start again. In the HyperCard article I noted Sheldon Leemon's reactionary stance to all things hyperlinked, "Do we really want to give hypertext to young school children, who already have plenty of distractions?" Similar naysaying naturally accompanied the advent of desktop publishing. Even those who acknowledged the benefits still felt some sense of loss. As the editor of Tradeswomen said, "we don't have nearly as much fun." It is hard to impress upon a digital-native, remote-only workforce just how fun physical production was. The late nights, the mishaps, the heartaches, the triumphs of a team united around putting an issue to bed, all felt earned. In the end, when real newspapers hit the newsstands and students and faculty were reading them over lunch, every person on staff could point to something specific in every tangible artifact and state, "I did that." Before I close, it's important to acknowledge font handling vis-a-vis desktop publishing back in the day. Font management, printing, and on-screen rendering could be a real struggle at times, so it needs at least a little discussion. I will do this by way of confession. Woz forgive me, for I have sinned: I cheated throughout this post. I used... ah, I'm almost too embarrassed to admit this... I used TrueType fonts. Hold your comments until I've made my case! John Warnock, don't pout! Fonts for the original Macintosh started life as font "suitcases," a special folder which held system resources and a collection of hand-drawn bitmap fonts at various sizes. Susan Kare kept it real. If a font wasn't explicitly drawn at the size you wanted, it would scale to match your desire, which could result in ugly, chunky, pixelated on-screen text. PostScript fonts could, at the very least, print nicely even when the on-screen representations were ugly. PostScript had two commonly-used font types: Type 1 and Type 3. Type 1 was Adobe's crown jewel, the font standard that included what we know today as font hinting, as well as a coveted secret recipe which Adobe refused to share at the time (see the timeline for more details). Type 3 was a more open but inferior standard, and didn't include Type 1's secret sauce. For a time, it was the only option for font vendors who didn't want a licensing agreement with Adobe. Put simply, Type 1 fonts looked better in print. The gulf between on-screen representations and printer output was vast, and TrueType promised to fix that.
Announced at Seybold 1989, TrueType's core selling point was a single font file that could provide both a clean on-screen representation at any size and sharp printer output. Inside the Publishing Revolution says, "Gates claimed that TrueType's quadratic splines were far superior to PostScript's Bezier curves." Warnock was beside himself, calling it on stage "the biggest bunch of garbage and mumbo jumbo," and, on the verge of tears, declaring, "What those people are selling you is snake oil!" Adobe's immediate response was two-fold: one, opening up their proprietary Type 1 spec for all to use, license-free; and two, developing Adobe Type Manager, a system control panel that used PostScript Type 1 font definitions to generate crisp, clean on-screen representations. Once more from Inside the Publishing Revolution: David Lemon recalls the "manic" pace of ATM development after the Seybold shock. "They'd look at me and say, 'It's not life or death if we get this out. It's only the future of the company.'" Working at a breakneck pace, Adobe brought ATM to market at least a year before any TrueType fonts from Apple or Microsoft appeared. "If we hadn't gotten ATM out then, we would be living in an all-TrueType world now." Their beachhead fortified, ATM became the must-have extension for every Macintosh I ever touched; TrueType fonts were kind of snubbed by the Mac design community, as I recall those times. All of that said, when sending modern fonts back in time onto older Macs, TrueType has proven to be the path of least resistance by far. I feel irrational shame for using TrueType in this project, eschewing ATM. Forgive me for taking the coward's path! Alright, it's time to make this OMNI dream a reality, and get these ideas out of the computer and onto paper. I'm excited! Everything you see was generated as PDFs by Adobe Acrobat Distiller 3.1 from PostScript generated by Aldus PageMaker 5.0a. I copied those PDFs over, untouched and unedited, and printed them as-is on the convenience store copier. First, I need to explain the chill that ran down my spine when I held those prints in my hands for the first time. Here was something tangible, something I crafted myself made physically manifest. I have done this hundreds of times in the past, but I'd forgotten the rush. It was a great feeling. I think this makes the case that design work can be done with PageMaker. Of course it can. It was used in the past, so why wouldn't it be able to continue to do what it was built to do? With Acrobat Distiller, we can generate PDFs that print perfectly on modern systems. Done and done. Would I choose to use it today? No way. The text workflow is too much of a PITA to do anything longer than a few pages; I almost can't believe I used to lay out 80-page magazines in it. The Additions are a fumbling mess. Guideline management is bumpy, although an Extensis Addition can smooth that a little. While I love the Control palette, the palettes in general need yet more refinement to become truly useful, time-saving features. Clicking on images and having them automatically pop to the top of the stack, without having any control over layering, is one annoyance too many. This is a case where you really can't go home again. I mean you can, but you're going to wonder, "Were the walls always this greasy? Did the toilet always back up like this?" PageMaker literally altered the course of my life, steering me from electrical engineering into graphic design.
It was fun at the time, being new and exciting, but offers little today except as an exploration of the opposing forces at work during the "desktop publishing revolution." I am struck by one curiosity, however. I've been working as a professional software engineer for 20 years. With one exception, everything I've built professionally is gone. The companies folded, the apps were discontinued, contracts ended, the products the apps promoted were killed, and so on. There are any number of reasons, but they all converge at the same result: my professional digital legacy has been, and will be, erased. Everything I published with PageMaker still exists. It's physically in the archives at UNC-Charlotte. It's framed on the wall of a business owner who was featured on the cover of Business Leader Magazine. It's sitting in a box in someone's attic waiting to be rediscovered. It is often said that what goes on the internet is forever. Yet every digital work I produce lives in someone else's infrastructure, subject to someone else's decisions about what is worth keeping. The work I produced on "bird cage liner" remains free to this day, and no popped stock bubble, no digital decay, no coup d'etat, can stop those ideas from propagating, once let loose in the world. Looks like PageMaker had one more lesson to teach me, after all. Thanks for reading all the way through. I have a reward for your effort. You may have noticed the OMNI font I used in the layouts, Continuum. It's possible the web crawlers have found it by now, but most likely you won't find it easily until then. It is, in fact, my gift to you. Before I started this blog, I built it from scratch in Affinity and FontForge, using OMNI Magazine as the sole source of truth for all shapes, default leading, and kerning pairs. It was just a for-fun project, to learn how fonts are created and to see if I could get it working on the machines of my youth. There's no point to my gatekeeping it any longer; it's time to set it free. You can grab Continuum on my personal GitHub. I accept bug reports and pull requests, so long as they are backed by real, in-print proof that a change is warranted. Be aware: the goal is not to make a font "inspired by" OMNI; it is to be the OMNI font, full stop. Maybe you can help me get it there. https://github.com/christopherdrum/continuum PageMaker native files are not compatible with anything that exists these days. However, the PostScript PageMaker generates works fine with Distiller on classic Mac and Ghostscript on modern systems. The resultant PDF files in either case printed perfectly on a Sharp MX-3631DS color copier. There may be a conversion path by opening PageMaker 5 files in PageMaker 7, then finding a copy of InDesign CS6 or prior. CS6 should be able to open the PM7 document, thereby converting it to InDesign format. It should technically be possible to open that converted file in a more modern copy of InDesign. This setup requires access to software I simply don't have, so this is my best, educated guess. Affinity 3 could open the PageMaker PDFs as well, but exhibited a text rendering bug that didn't appear in any other PDF viewer, modern or classic, nor in the final print. I have reported it to the developers.
The setup: Basilisk II v1.1 on Windows 11; Mac IIci w/68040 CPU, 64MB RAM; 1024 x 768, 24-bit color; Macintosh System 7.5.5; StickyClick v1.2; Suitcase 3.0; Adobe Acrobat 3.01; Microsoft Word 5.1a; StuffIt Deluxe 5.0; GraphicConverter 2.2; TTConverter 1.5; Aldus PageMaker 5.0a. The literal copier I used is inexplicably viewable on Google Maps.
Sharpening the Stone: Emulator improvements. The Basilisk II emulator itself is solid and I don't have any real issues with it, once I had it set up following the Emaculation instructions precisely. Getting the emulator set up this time around was quite frustrating. Some of it was inadvertently a quagmire of my own creation. Some was just easy to overlook. Some was just plain craziness. Unless you really understand Classic Macintosh systems and how they work, I would recommend building a new VM hard drive from scratch for your DTP work. My disk image carried over from the HyperCard article, and I was rather cavalier with my installs on top of that. This caused nothing but pain, including crashing apps, odd PostScript generation, and more. A full reinstall of System 7.5.5 was step one. That gave me a base system, which I backed up as a "pristine" starting point for the future. Then, installing apps one by one with testing at each phase helped establish clean "checkpoint" images I could use as starting points for future projects. If you go for a PageMaker 5.0a installation, be absolutely certain to install the "RSRC patch" files. They are easy to overlook, but critical. They fixed my PostScript rendering offset bugs. I don't recommend installing Distiller 3.01 on top of 3.0. I did that and something went wrong, resulting in a flaky application. A fresh install of Distiller 3.01 worked great. The biggest frustration with Basilisk II on Windows (apparently other platforms don't have this issue): once installed, PageMaker wouldn't copy anything. I could copy from any other application, and I could paste into every application, including PageMaker. But I couldn't copy anything while in PageMaker. The helpful experts at the Macintosh Garden forums got me straightened out. It seems that on Windows, the system clipboard must be flushed for copy/paste to work properly in Basilisk II. You'll have to do this again and again while using the program, if you jump out of Basilisk II into Windows and back again. Very annoying.

0 views
(think) Yesterday

fsharp-ts-mode: A Modern Emacs Mode for F#

I’m pretty much done with the focused development push on neocaml – it’s reached a point where I’m genuinely happy using it daily and the remaining work is mostly incremental polish. So naturally, instead of taking a break I decided it was time to start another project that’s been living in the back of my head for a while: a proper Tree-sitter-based F# mode for Emacs. Meet fsharp-ts-mode. I’ve written before about my fondness for the ML family of languages, and while OCaml gets most of my attention, last year I developed a soft spot for F#. In some ways I like it even a bit more than OCaml – the tooling is excellent, the .NET ecosystem is massive, and computation expressions are one of the most elegant abstractions I’ve seen in any language. F# manages to feel both practical and beautiful, which is a rare combination. The problem is that Emacs has never been particularly popular with F# programmers – or .NET programmers in general. The existing fsharp-mode works, but it’s showing its age: regex-based highlighting, SMIE indentation with quirks, and some legacy code dating back to the caml-mode days. I needed a good F# mode for Emacs, and that’s enough of a reason to build one in my book. I’ll be honest – I spent quite a bit of time trying to come up with a clever name. 1 Some candidates that didn’t make the cut are listed at the end of this post. In the end none of my fun ideas stuck, so I went with the boring-but-obvious fsharp-ts-mode. Sometimes the straightforward choice is the right one. At least nobody will have trouble finding it. 2 I modeled fsharp-ts-mode directly after neocaml, and the two packages share a lot of structural similarities – which shouldn’t be surprising given how much OCaml and F# have in common. The same architecture (base mode + language-specific derived modes), the same approach to font-locking (shared + grammar-specific rules), the same REPL integration pattern (with tree-sitter input highlighting), the same build system interaction pattern (minor mode wrapping CLI commands). This also meant I could get the basics in place really quickly. Having already solved problems like trailing comment indentation, hybrid navigation, and Imenu with qualified names in neocaml, porting those solutions to F# was mostly mechanical. The initial release covers all the essentials (see the feature list at the end of this post). If you’re currently using fsharp-mode, switching is straightforward. The main thing fsharp-ts-mode doesn’t have yet is automatic LSP server installation (fsharp-mode’s Eglot integration does this for you). You’ll need to install FsAutoComplete yourself, typically as a global dotnet tool. After that, Eglot is all you need. See the migration guide in the README for a detailed comparison. Working with the ionide/tree-sitter-fsharp grammar surfaced some interesting challenges compared to the OCaml grammar: Unlike OCaml, where indentation is purely cosmetic, F# uses significant whitespace (the “offside rule”). The tree-sitter grammar needs correct indentation to parse correctly, which creates a chicken-and-egg problem: you need a correct parse tree to indent, but you need correct indentation to parse. If you paste an unindented block, the parser can’t tell which expression is the body of which binding, or which branch an expression belongs to – it produces ERROR nodes everywhere, and the indentation engine has nothing useful to work with. But if you’re typing the code line by line, the parser always has enough context from preceding lines to indent the current line correctly. This is a fundamental limitation of any indentation-sensitive grammar. OCaml’s tree-sitter-ocaml-interface grammar inherits from the base grammar, so you can share queries freely.
F#’s fsharp and fsharp_signature grammars are independent, with different node types and field names for equivalent concepts. For instance, a binding maps to a different node type in each grammar. Type names use a dedicated field in one grammar but not the other. Even some keyword tokens that work fine as query matches in one grammar fail at runtime in the other. This forced me to split font-lock rules into shared and grammar-specific sets – more code, more testing, more edge cases. F# script (.fsx) files without a module declaration can mix bindings with bare top-level expressions. The grammar doesn’t expect a declaration after a bare expression at the top level, so it chains everything into nested nodes: each subsequent declaration ends up one level deeper, causing progressive indentation. I worked around this with a heuristic that detects declarations whose ancestor chain leads back to the top of the file through these misparented nodes and forces them to column 0. Shebang lines required a different trick – excluding the first line from the parser’s range entirely. I’ve filed issues upstream for the grammar pain points – hopefully they’ll improve over time. Let me be upfront: this is a 0.1.0 release and it’s probably quite buggy. I’ve tested it against a reasonable set of F# code, but there are certainly indentation edge cases, font-lock gaps, and interactions I haven’t encountered yet. If you try it and something looks wrong, please open an issue – the built-in bug report command will collect the environment details for you. The package can currently be installed only from GitHub (via package-vc-install or manually). I’ve filed a PR with MELPA and I hope it will get merged soon. I really need to take a break from building Tree-sitter major modes at this point. Between clojure-ts-mode, neocaml, and now fsharp-ts-mode, I’ve spent a lot of time staring at tree-sitter node types and indent rules. 3 It’s been fun, but I think I’ve earned a vacation from tree-sitter. I really wanted to do something nice for the (admittedly small) F#-on-Emacs community, and a modern major mode seemed like the most meaningful contribution I could make. I hope some of you find it useful! That’s all from me, folks! Keep hacking! Way more time than I needed to actually implement the mode.  ↩︎ Many people pointed out they thought neocaml was some package for Neovim. Go figure why!  ↩︎ I’ve also been helping a bit with erlang-ts-mode recently.  ↩︎ The name candidates: fsharpe-mode (fsharp(evolved/enhanced)-mode); Fa Dièse (French for F sharp – because after spending time with OCaml you start thinking in French, apparently); fluoride (a play on Ionide, the popular F# IDE extension). The feature list: syntax highlighting via Tree-sitter with 4 customizable levels, supporting .fs, .fsx, and .fsi files; indentation via Tree-sitter indent rules; Imenu with fully-qualified names; navigation commands; F# Interactive (REPL) integration with tree-sitter highlighting for input; dotnet CLI integration – build, test, run, clean, format, restore, with watch mode support; .NET API documentation lookup at point; Eglot integration for FsAutoComplete; compilation error parsing; shift region left/right, auto-detect indent offset, prettify symbols, outline mode, and more.

0 views
Jeff Geerling Yesterday

Bring back MiniDV with this Raspberry Pi FireWire HAT

In my last post, I showed you how to use FireWire on a Raspberry Pi with a PCI Express IEEE 1394 adapter. Now I'll show you how I'm using a new FireWire HAT and a PiSugar3 Plus battery to make a portable MRU, or 'Memory Recording Unit', to replace tape in older FireWire/i.Link/DV cameras. The alternative is an old used MRU like Sony's HVR-MRC1, which runs around $300 on eBay 1.

0 views

Nikhil Anand

This week on the People and Blogs series we have an interview with Nikhil Anand, whose blog can be found at nikhil.io. Tired of RSS? Read this in your browser or sign up for the newsletter. People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. Hi, I'm Nikhil! I grew up in the UAE and came to the United States for college, graduating with a degree in biomedical engineering. I worked in academia and industry for about 15 years before deciding to turn my attention and energies towards problems in healthcare. I'm now a graduate student at Columbia University's Medical Center, studying clinical informatics and loving the magnificent beehive that is New York City. With the time I have, I love going to art museums, practicing calligraphy, reading short stories and graphic novels, and watching every suspense/mystery show or movie I can (huge fan of the genre; for example I've watched all of Columbo at least three times). I'm also trying to learn CAD and have 3D printed several small abominations. I started blogging around 2003 after discovering blogs like Kottke.org, Jeffrey Zeldman's blog, Greg Storey's Airbag.ca, and Todd Dominey's WhatDoIKnow.org. My first blog was at freeorange.net, which I now use as a placeholder for my tiny LLC's future site. I lived in Ames, Iowa at the time and decided to blog what I knew about stuff going on in the town: gossip, lectures and shows I'd attended, photos of random scenes and events, and so on. That last part proved to be great: I'd hear from quite a few alumni or former residents who'd have photo requests for nostalgia, and I'd gladly oblige, especially since I was super excited to use my first digital camera, a whopping 5 megapixel Sony DSC-F717 😊 I then stopped blogging for about 10 or so years and resumed in 2018. My current blog is essentially a freeform dump: just this mélange of stuff I find interesting and/or may want to reference later. There's really no audience in mind. I use a lot of tags on my posts and am often delighted by exploring them a while later. I moved all my bookmarks over from Pinboard (an excellent service) and am trying to get off Instagram. I'm also trying to be better about making and sharing things (photos, calligraphy, art) no matter how terrible they are, and not just consuming them. As for the name, I really wanted a domain hack, nikh.il, but this sadly required permission from the Israeli government that I was pretty sure I wouldn't get 😅 So I went with the shortest and 'coolest' TLD I could find and ended up with nikhil.io. I also have nikhil.fish as an alias for no reason. I think my site's half a tumblelog. As for the other half, I have a Markdown file in my iCloud Drive that I dump inchoate thoughts into (it's at about half a meg right now). I also use the excellent Things app on my phone to save blog posts, names, recommendations, articles, and media of interest to peruse later. When I have time, I look at these two sources to post and comment on something I think is beautiful, interesting, or funny. All professional creatives I know personally have a dedicated space where they do their work, and they have told me that this matters immensely to them. In my case, I have a setup I've used reliably over many years and love it. I especially love my sit-to-stand desk (on wheels), giant display, and clickity-clack keyboard.
I always listen to ambient music or white noise while working on anything (Loscil's works are a favorite). I've found that I just cannot focus in coffeehouses or libraries. And I absolutely cannot work or think in harsh "cool white" lighting (3000K or lower; if you need me to divulge secrets, just put me in a room with two tubelights for thirty seconds). I know a lot of people (like my wife, a writer) who can work anywhere, and I may be a bit envious. I am also in the habit of pacing around and muttering things to myself while working, and these are not nice things to do at coffeehouses or libraries. I write all my posts in Markdown and use an old and heavily modded version of 11ty.js with several Markdown-it plugins, supported by quite a few Node scripts to generate the HTML pages. Images are processed with Sharp. The blog theme is a mess of TSX and SASS files. All posts and code are in Git and on GitHub. I build everything on my laptop and sync all the files to an S3 bucket that serves my blog through CloudFront. Not really. I've spent enough time monkeying with the design/structure and code that my setup fits my needs like a bespoke suit. You can always nerd out over tooling, and it's a lot of fun, but I've suspended that in favor of using the tools. For the time being at least 😅 Now if my wife or a friend were starting a blog, I would absolutely recommend a platform like Bear. Anything simple, hosted, not creepy, and not run by greedy and/or awful people. It costs ~$5 a month. A giant part of that cost is the domain name. Zero revenue. No plans on 'growing' it or whatever; it's just my little garden on the internet. I have no problem with people monetising their blogs as long as the strategy they employ is respectful of visitors' privacy and unobtrusive to their experience. Patronage/memberships aside, The Deck comes to mind as an ad platform that achieved both these things very well. I do have my problems with platforms like Substack and might write a blog post about this later. Please interview Chris Glass! His lovely and popular blog is a huge inspiration for mine, in both layout and content, and he's been at it since at least 2003 IIRC. Another old favorite is Witold Riedel's log. I'm also really digging this blog I discovered recently. I just put up a small project I've wanted to do for a while, my own little curated digital gallery of art I've loved over the years. It was mostly a design exercise, but I thought I might use some LLM to discover some themes in why I love these works (or maybe you just love looking at things and don't really need to understand why). Other than that, I am so happy with what feels to me like a resurgence in personal blogging (here's a recent index of personal blogs from readers of Hacker News). Thank you for having me in your beautiful space and featuring several other lovely and interesting people! This is a fantastic project, Manu 🤗 Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 134 interviews. People and Blogs is possible because kind people support it.

0 views
iDiallo Yesterday

Sharing a Name

My bank card never arrived. I called the bank and, after being redirected through several departments, was assured that it had been mailed. Then we argued a bit about what "7 to 10 business days" meant; we were already on day 14. We ended the call by agreeing to disagree. Eventually, I did get my card. But it wasn't the mailman who delivered it. Instead, it was my neighbor from two streets down. On the envelope, my address had been crossed out, and the word "incorrect" was handwritten beside it. Why? Because the mailman had done it. You see, I had just moved into the apartment complex, and my name looked familiar to him. Of course he knew who Ibrahima Diallo was; he had been delivering his mail for years. So he corrected it. In the US, both my first and last name are uncommon (or so I thought). They're often a source of confusion when my Starbucks order gets called out. As it turns out, one of my neighbors shares the exact same name. And on top of that, he uses the same West African spelling: Ibrahima. The mailman, trying to be helpful, had redirected my mail to what he thought was the right address. My neighbor and I laughed about it. Then I immediately cancelled the card and requested a new one... Some years ago, I dated a woman from Bulgaria. She grew up in a small city where everyone knew each other. In their town, there was a single Black family. You probably know where this is going, but pretend you don't and follow along. It was so unusual to have an outsider in this town that the man and his family became local fixtures. Wherever they went, people stopped to take pictures with them. They were like minor celebrities. So naturally, when she pulled out a photo from her childhood, there he was, posing cheerfully with the neighbors. She turned the photo over to read the names written on the back. She stopped. She burst out laughing. I looked at the name. I can't read Cyrillic, but I know exactly how to spell my name in Bulgarian. His name read: Ibrahima Diallo. When I was hired at AT&T many years ago, there was a week of confusion. I didn't receive my welcome kit. My manager swore that he had carefully selected my name, and sent it to my Texas address... As you may have guessed, I do not have a Texas address. I lived in Los Angeles, and the office where we worked in person was in Los Angeles. Somewhere in Texas, a long-time employee must have been confused by this new welcome kit showing up in the mail. Back when I was featured on the BBC, a wave of people reached out. Even though my picture was prominently displayed in the article, several people emailed me as if they already knew me, picking up conversations we had apparently started at work, signing off with "see you tomorrow." According to my inbox, I had met quite a few people in London. The only problem was, well, I've never been to London. As it turned out, my neighbor's uncle had called him to say that some journalists were trying to reach his nephew through him. You'll never guess the uncle's name. Yes, it's Ibrahima Diallo. I eventually met this uncle. We had a long conversation and discovered that he knew my father from back home. In fact, he had gone to school with one of my uncles and spoke fondly of him, saying he was a brilliant student. What's my uncle's name, you ask? Of course it's Ibrahima Diallo. Growing up, I assumed my name was uniquely mine. But as I've made my way through the world, I've found that I share it with a surprisingly large number of people. I already snagged ibrahimdiallo.com.
I'm keeping an eye on ibrahimadiallo.com, hoping it expires this June so I can claim that one too. If it does become available, I'll gather an army of Ibrahimas, and we will... Well, I'm not entirely sure what we'll do yet. But it will definitely be fun. Anyway, that's a story about my name. A postscript worth mentioning: both of my older brothers share the same first and last name as each other. You can imagine the fun they have. This is what happens in West African families when you name your children after their grandparents, and the grandparents happen to share the same name. One brother does have a middle name, intended as a differentiator. But middle names are rarely included in US mailing addresses, so that doesn't help much either.

0 views
neilzone Yesterday

I am a cis man

A friend asked: Have you thought about your gender? What would it be like to not be your current gender? Until 2017, no, I had not thought about my gender. This might not be quite the turn of phrase that I want here, but I had no reason to think about my gender. I grew up as a boy, and I never disliked, or doubted, that I was a boy. As I turned into a man, it never crossed my mind that I was not a man. I never had any reason or motivation - internal or otherwise - to think about it. I have had no sense of gender dysphoria, or of not feeling comfortable in my own body shape or appearance, and such like. So what changed in 2017? What changed was a book. Sarah Jamie Lewis’s edited book, “Queer Privacy”, was eye-opening for me. Not only was it thoroughly fascinating from the perspective of privacy, it also showed my ignorance: I did not know what some of the terms meant. So I think that it was 2017 when I learned that I was “cis”, in the sense of learning that there is a term which described what I was: someone whose gender identity matches their assigned sex at birth. When I joined the fediverse, and started spending more time there from 2018 onwards, I got to rub virtual shoulders with a whole load of amazing people, with all sorts of gender identities and no gender identities. This was a new experience for me. I’d grown up with gay friends, but not, as far as I know (appreciating that gender identity is about what someone is, rather than how someone looks etc.), any trans, non-binary, or agender friends. Over the last few years, yes, I do occasionally think about my own gender identity, generally stimulated by conversations on the fediverse with others. And, so far at least, the conclusion has always been the same: I am a cis man. It might be interesting to experience being something other than a cis man, but I have no longing to be so, or a feeling that, actually, that is me.

0 views
Robin Moffatt Yesterday

Look Ma, I made a JAR! (Building a connector for Kafka Connect without knowing Java)

As a non-Java coder, for the last ten years I’ve stumbled my way through the JVM-centric world of "big data" (as it was called then), relying on my wits with SQL and config files to just about muddle through. One of the things that drew me to Kafka Connect was that I could build integrations between Kafka and other systems without needing to write Java, and the same again for ksqlDB and Flink SQL—now stream processing was available to mere RDBMS mortals and not just the Java Adonises. One thing defeated me though: if a connector didn’t exist for Kafka Connect, then I was stuck. I’d resort to cobbled-together pipelines leaning heavily on kafkacat (now kcat), such as I did in this blog post. I built some cool analytics on top of maritime AIS data about ships' locations, but the foundations were shaky at best: no failure logic, no schema handling, no bueno. What I really needed was a connector for Kafka Connect. However, for that, you need Java. I don’t write Java. But Claude can write Java.

0 views
HeyDingus Yesterday

RIP Mac Pro

The Mac Pro is no longer a product in Apple’s lineup. For a computer that has caused so much consternation over the years, its story can be told very succinctly. Stephen Hackett captured it all in six sentences: The Mac Pro was introduced way back in 2006 as a replacement for the outgoing Power Mac G5. It had a good few years, then languished until the 2013 model was announced. That machine was a dud, and it languished until the 2019 model was announced. It came out in December 2019, which was less than a year before Apple silicon was announced and the M1 shipped. The Mac Pro got one last update in June 2023, when Apple dropped the Intel version for one with an M2 Ultra inside. It’s been languishing again ever since. (Or, for the long version, read this retrospective by Joe Rossignol on MacRumors.) Definitely sad to see the Mac Pro, and its amazingly-still-modern-looking-even-seven-years-later chassis, head to the farm upstate. I’d held out hope for a new screamer of a machine with an ‘Extreme’ M-series chip, but alas. It seems that Apple was waiting for permission from John Siracusa, the world’s preeminent Mac Pro believer, to kill the product. Here he is in the latest episode of the Accidental Tech Podcast, recorded just last night: @marcoarment @siracusa if you sell it, I will buy it and wear it to WWDC @marcoarment @siracusa The Mac Pro dies twice: first, when Apple discontinues it, second, when its name is spoken by John for the last time. Exciting that both “Believe” shirts were resolved this month. ✅ Upgrade AirPods Max Believe ☠️ ATP Mac Pro Believe There’s something poetic about the Mac Pro being discontinued as the MacBook Neo takes off like a rocket. HeyDingus is a blog by Jarrod Blundy about technology, the great outdoors, and other musings. If you like what you see — the blog posts, shortcuts, wallpapers, scripts, or anything — please consider leaving a tip, checking out my store, or just sharing my work. Your support is much appreciated! I’m always happy to hear from you on social, or by good ol' email.

Max Bernstein Yesterday

Using Perfetto in ZJIT

Originally published on Rails At Scale. Look! A trace of slow events in a benchmark! Now read on to see what the slow events are and how we got this pretty picture. The first rule of just-in-time compilers is: you stay in JIT code. The second rule of JIT is: you STAY in JIT code! When control leaves the compiled code to run in the interpreter—what the ZJIT team calls either a “side-exit” or a “deopt”, depending on who you talk to—things slow down. In a well-tuned system, this should happen pretty rarely. Right now, because we’re still bringing up the compiler and runtime system, it happens more than we would like. We’re reducing the number of exits over time. We can track our side-exit reduction progress with , which, on process exit, prints out a tidy summary of the counters for all of the bad stuff we track. It’s got side-exits. It’s got calls to C code. It’s got calls to slow-path runtime helpers. It’s got everything. Here is a chopped-up sample of stats output for the Lobsters benchmark, which is a large Rails app: (I’ve cut out significant chunks of the stats output and replaced them with because it’s overwhelming the first time you see it.) The first thing you might note is that the thing I just described as terrible for performance is happening over twelve million times. The second thing you might notice is that despite this, we’re staying in JIT code seemingly a high percentage of the time. Or are we? Is 80% high? Is a 4.5% class guard miss ratio high? What about 11% for shapes? It’s hard to say. The counters are great because they’re quick and they’re reasonably stable proxies for performance. There’s no substitute for painstaking measurements on a quiet machine, but if the counter for Bad Slow Thing goes down (and others do not go up), we’re probably doing a good job. But they’re not great for building intuition. For intuition, we want more tangible-feeling numbers. We want to see things. The third thing is that you might ask yourself “self, where are these exits coming from?” Unfortunately, counters cannot tell you that. For that, we want stack traces. These let us know where in the guest (Ruby) code an exit is triggered. Ideally, we would also want some notion of time: we would want to know not just where these events happen but also when. Are the exits happening early, at application boot? At warmup? Even during what should be steady-state application time? Hard to say. So we need more tools. Thankfully, Perfetto exists. Perfetto is a system for visualizing and analyzing traces and profiles that your application generates. It has both a web UI and a command-line UI. We can emit traces for Perfetto and visualize them there. Take a look at this sample ZJIT Perfetto trace generated by running Ruby with 1. What do you see? I see a couple arrows on the left. Arrows indicate “instant” point-in-time events. Then I see a mess of purple to the right of that until the end of the trace. Hover over an arrow. Find out that each arrow is a side-exit. Scream silently. But it’s a friendly arrow. It tells you what the side-exit reason is. If you click it, it even tells you the stack trace in the pop-up panel on the bottom. If we click a couple of them, maybe we can learn more. We can also zoom by mousing over the track, holding Ctrl, and scrolling. That will let us look closer. But there are so many… Fortunately, Perfetto also provides a SQL interface to the traces.
We can write a query to aggregate all of the side exit events from the table and line them up with the topmost method from the backtrace arguments in the table: This pulls up a query box at the bottom showing us that there are a couple big hotspots: It even has a helpful option to export the results as a Markdown table so I can paste (an edited version) into this blog post: Looks like we should figure out why we’re having shape misses so much; fixing that will clear up a lot of exits. (Hint: it’s because once we make our first guess about what we think the object shape will be, we don’t re-assess… yet.) This has been a taste of Perfetto. There’s probably a lot more to explore. Please join the ZJIT Zulip and let us know if you have any cool tracing or exploring tricks. Now I’ll explain how you too can use Perfetto from your system. Adding support to ZJIT was pretty straightforward. The first thing is that you’ll need some way to get trace data out of your system. We write to a file with a well-known location ( ), but you could do any number of things. Perhaps you can stream events over a socket to another process, or to a server that aggregates them, or store them internally and expose a webserver that serves them over the internet, or… anything, really. Once you have that, you need a couple lines of code to emit the data. Perfetto accepts a number of formats. For example, in his excellent blog post, Tristan Hume opens with such a simple snippet of code for logging Chromium Trace JSON-formatted events (lightly modified by me): This snippet is great. It shows, end-to-end, writing a stream of one event. It is a complete (X) event, as opposed to either two discrete timestamped begin (B) and end (E) events that book-end something, an instant (i) event that has no duration, or a couple other event types in the Chromium Trace Event Format doc (a minimal sketch of writing one appears at the end of this post). It was enough to get me started. Since it’s JSON, and we have a lot of side exits, the trace quickly ballooned to 8GB for a several-second benchmark. Not great. Now, part of this is our fault—we should side exit less—and part of it is just the verbosity of JSON. Thankfully, Perfetto ingests more compact binary formats, such as the Fuchsia trace format. In addition to being more compact, FXT even supports string interning. After modifying the tracer to emit FXT, we ended up with closer to 100MB for the same benchmark. We can reduce further by sampling—not writing every exit to the trace, but instead every K exits (for some (probably prime) K). This is why we provide the option. Check out the trace writer implementation as of when this article was written. We could trace: when methods get compiled, how big the generated code is, how long each compile phase takes, when (and where) invalidation events happen, when (and where) allocations happen from JITed code, and garbage collection events. Visualizations are awesome. Get your data in the right format so you can ask the right questions easily. Thanks for Perfetto! Also, looks like visualizations are now available in Perfetto canary. Time to go make some fun histograms… This is also sampled/strobed, so not every exit is in there. This is just 1/K of them for some K that I don’t remember.  ↩
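To make the trace format above concrete, here is a minimal sketch of an emitter for complete (X) events in Chromium Trace Event JSON, with the 1-in-K sampling described earlier. It's written in Java purely for illustration; it is not the snippet from Tristan Hume's post, and the class name, fields, and file path are invented:

```java
// Hypothetical sketch: write complete ("X") Chromium Trace JSON events,
// keeping only every K-th event to hold the file size down.
import java.io.FileNotFoundException;
import java.io.PrintWriter;

public class JsonTraceWriter implements AutoCloseable {
    private final PrintWriter out;
    private final int k;        // sampling interval (the post suggests a prime)
    private long seen = 0;
    private boolean first = true;

    public JsonTraceWriter(String path, int k) throws FileNotFoundException {
        this.out = new PrintWriter(path);
        this.k = k;
        out.print("[");         // JSON Array Format; the closing "]" is optional
    }

    // One complete (X) event: name, plus timestamp and duration in microseconds.
    // B/E pairs and instant (i) events are the alternatives named above.
    public void complete(String name, long tsUs, long durUs) {
        if (seen++ % k != 0) return;   // sample 1 in K
        if (!first) out.print(",");
        first = false;
        out.printf("{\"name\":\"%s\",\"ph\":\"X\",\"ts\":%d,\"dur\":%d,\"pid\":1,\"tid\":1}%n",
                name, tsUs, durUs);
    }

    @Override public void close() {
        out.println("]");
        out.close();
    }
}
```

Load the resulting file into the Perfetto UI and each event shows up as a slice on the timeline; the side-exit arrows described earlier are instant (i) events rather than complete ones, but the mechanics are the same.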

Anton Sten Yesterday

Taste isn't a screenshot

I keep seeing designers share their "taste libraries." Folders full of screenshots. Apps they admire, interfaces that inspired them, UI details they want to remember. It's a lovely habit. I've done versions of it myself. But I've started to wonder if we've confused collecting taste with having it. There's a difference between recognizing that something is good and understanding why it's good. And an even bigger difference between that and knowing what to leave out. Steve Jobs said it better than I could: "People think focus means saying yes to the thing you've got to focus on. But that's not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I'm actually as proud of the things we haven't done as the things I have done. Innovation is saying no to 1,000 things." A screenshot folder is a yes list. Taste — real taste — is mostly no's.

## The constraint was always judgment

Alfred Lin from Sequoia recently wrote something that stuck with me. In [AI Adoption vs. AI Advantage](https://outlierspath.com/2026/03/23/ai-adoption-vs-ai-advantage/), he argues that for two decades, the binding constraints in software were hiring engineers, writing code, and shipping products. Capital flowed around those bottlenecks. Competitive advantage was often just about who could attract talent and move fast. AI is dissolving those constraints. Code gets generated. Prototypes are instant. Iteration is nearly free. Which means the question is no longer "can we build this?" It's "should we?" Lin's point is that when execution constraints disappear, what's left is judgment. The ability to distinguish signal from noise, to say no to good ideas in favor of great ones, to hold conviction when the data is ambiguous. That's what compounds. Clear thinking compounds. Confused thinking unravels. The gap between good judgment and bad judgment doesn't close when the tools get better. It widens.

## What this looks like in practice

Right now, there are people vibe coding their own todo app, adding it to their todo list, and using it to remind themselves to vibe code a new one. That's not a critique — it's genuinely how you learn to build. But it does make Things an interesting thing to look at. If you've used it, you know it feels different from other task managers. It's not just that it looks good — though it does. It's that it feels considered. Every interaction has been thought about. Nothing is there by accident. That didn't happen because the team had a good Dribbble board. It happened because someone said no to hundreds of features that would have made Things more powerful on paper and worse in practice. The craft is visible. But the restraint is what makes the craft matter. That kind of restraint is hard. It requires conviction. You have to believe — without always being able to prove it — that the thing you're not building is the right call.

## Who this is actually about

I want to be careful here, because I'm not talking about people who are new to building. The weekend builders, the indie hackers spinning up their fifth productivity app — they're learning something genuinely valuable. They're developing intuition by doing. That's how it works. What I'm less sure about is experienced professionals — designers, product people — who've started measuring their output by volume. Fifteen apps shipped. Twenty experiments running. A new launch every week. Shipping a lot isn't the same as building well.
And in a world where anyone can ship anything, the signal you're sending with volume isn't "I have great judgment." It's "I haven't figured out what I actually want to say yet." I wrote recently about how [AI will happily design the wrong thing for you](https://www.antonsten.com/articles/ai-will-happily-design-the-wrong-thing-for-you/). The tools are neutral. They amplify whatever you point them at. Strong judgment gets faster and more focused. Weak judgment gets noisier. The tools don't fix the underlying problem. They just make it more visible.

## The harder skill

So what does it actually take to develop judgment? Not a bigger screenshot folder. Not more launches. Not faster iteration for its own sake. It takes slowing down enough to ask whether the thing you're building is worth building at all. Whether the feature you're adding is solving a real problem or just filling a roadmap. Whether the app you're designing needs to exist, or whether you're building it because you can. That question — *should we?* — is harder than it sounds. It requires understanding users well enough to know what they actually need, not just what they say they want. It requires understanding the business well enough to know what moves the needle. It requires enough confidence in your own judgment to say no even when someone is excited about the idea. Taste, in the sense that actually matters, is the accumulation of those decisions. Not the screenshots you've saved. The calls you've made — especially the ones where you chose not to build something. The tools have never been more capable. That's real, and it's exciting. But capability without judgment is just a faster way to build the wrong thing. The ceiling has gone up. That's good news — for people who already know what matters.

iDiallo Yesterday

How we get radicalized in America

Be healthy, be young, fall ill. You have a great job, of course; you have insurance. It would be OK if the worst thing about health insurance in America was that it's hard to navigate. No! The actual problem is that your insurance is incentivized not to cover you at your most vulnerable moment. You pay them every month. That's money that goes from your paycheck into their pockets. Now if they cover you, that's money that leaves their pocket and goes into your treatment. There are two ways they can make money: 1. You continue paying every month, and never fall ill. 2. You fall ill, and they deny you care. Only the second option is an active one. Health insurance is a scam that we have normalized in the United States. It helps no one, it makes healthcare unaffordable, and you have to fight tooth and nail to get any sort of care. When Luigi was in the headlines, and news anchors were asking how such a young man could get radicalized, I shook my head. In America, it is our tradition to get two jobs. It is our tradition to live paycheck to paycheck. And it is our tradition to get radicalized the moment we get sick. When you get sick, the healthcare industry tries to charge as much as it can get away with, and the insurance industry tries to deny as much as it can.


"Tokenmaxxing" Is Not Productivity

Everyone wants to be productive. As managers, we don’t produce much ourselves, so our productivity is the vicarious productivity of our whole team. It’s an important part of how we measure ourselves and how we are measured. If there’s a secret unlock that will increase the return on our investment in people, processes, tools, etc. - of course we’re going to want to use it. And heaven forbid someone else figures it out before you and you end up looking like a doofus trying to keep up with race cars speeding along the road while you’re plodding along riding a donkey. I've been trying to avoid writing too much that's directly about AI and how it pertains to the software development world and the management thereof. The "AI is good/bad" discourse is already everywhere and maybe you'd like to read about something else once in a while. That said, the fact that it's having an impact on the industry is unavoidable. I can't stop thinking about this piece in the New York Times (gift link). Specifically this bit: If you care about the practice of management and the goals of your team, this hopefully has provoked a similar reaction in you as it has in me: 🤦‍♂️ Set aside for a second whether or not you think using LLMs for software development is a good idea in the first place; that's not what this is about. This is about managing outcomes. Obviously (well, let's say 'hopefully'), nobody's getting promoted for using a lot of AI while shipping nothing. Shipping working software is the point. That's the thing that matters. How you get there is incidental and certainly not something to give more than a passing glance at in a performance review. Imagine two developers. One uses a full-featured IDE and one uses a basic text editor and command line tools. You might be curious about why the text editor user doesn't prefer the IDE. You might even inquire. You might not even agree with their reasoning, but come performance review time, if they got their work done and the sum total of their contributions to the team was on par with or better than everyone else's, who cares? They've found a way to work that works for them, so don't mess with it! If your assessment of that person is "did a great job, but stubbornly refuses to do things the way I would so I'm scoring them 'does not meet expectations'" you need to adjust your expectations. Your goals for your team should focus on outcomes, which boil down to shipped software, personal growth, and team health. The specifics of how you get there are going to vary by person, team, circumstance, and other factors, and should not be goals in and of themselves. Managers who negatively judge people for doing things differently than they themselves would are suffering from a counter-productive form of narcissism that will inevitably lead to worse overall outcomes. Either people will get tired of it and quit, or they'll acquiesce and be less efficient than they otherwise would be doing things in a way that works for them. In either case, the overall outcomes will be worse. This is engineering management 101. If you make a game, people will play it. I've touched on this before, but it bears repeating. Counting lines of code, tickets closed, or now, AI token use are all things that a person incentivized to increase will find a way to do so, even if the overall outcomes you actually care about (completed features, more sales, increased profits) don't change.
Maximizing LLM token use for its own sake is trivially simple for any capable programmer to do. Additionally, tokens cost money. Financially, this is the equivalent of incentivizing a delivery driver to burn more gasoline on the presumption that more gas used -> more deliveries made. Metrics are useful as measures, but they're only a small part of the performance picture and should never be turned into goals. The output of knowledge work is not truly countable. It has countable aspects, but those aspects are not the outcome. A good novel has a countable number of words and pages, and those impact the cost of printing the book itself, but the number is not what makes a novel good or bad or profitable to publish. The costs of building a house are knowable, as is the bill of materials needed to build it, but they don't tell you the whole story of whether or not the house is well designed and delightful to live in. Unless you're a machine screwing tops on jars, most productivity is not directly measurable. You feel it when you have a good day: you were focused, you got a lot done. Maybe it was one thing off your list or twenty, but you know it was a 'productive' day. When it's your team's productivity, you rely on what you can observe, but the set of observables is limited, which produces anxiety. Our productivity-through-others might be sub-optimal! That's why it's tempting to focus on the measurable things like lines of code or tokens consumed; if the number goes up, it gives us comfort and something tangible we can point to that indicates we're in the race-car lane, not the donkey-riding lane. But our feelings can fool us. Some studies have apparently shown that LLM use can make you feel faster while actually slowing you down, but, again, that presumes an objective measure of productivity exists and that one can successfully control for LLM use alone. Also, everything is changing daily, so what was true last month might not be anymore. Nevertheless, it's certainly true that we, as humans, are capable of feeling in a way that doesn't match reality, and that's doubly true when those feelings are about other people. Don't mistake the trappings of productivity for the real thing. Be mindful of whether you're optimizing for true productivity or just the feeling of productivity. Focus on outcomes, trust your team’s methods, and let the chaos of creativity drive your success.
