Latest Posts (20 found)

The Blandness of Systematic Rules vs. The Delight of Localized Sensitivity

Marcin Wichary brings attention to this lovely dialog in ClarisWorks from 1997: it breaks the rule that button copy should be fully comprehensible without having to read the surrounding strings first, perhaps best known as the “avoid «click here»” rule. “Never Register / Register Later / Register Now” would solve that problem, but wouldn’t look so neat.

This got me thinking: how do you judge when an interface should bend to fit systematic rules versus assert its own peculiarities and context? The trade-off Marcin points out is real: “Never Register / Register Later / Register Now” is fully self-describing and satisfies the «click here» rule. However, it kills the elegant terseness that makes that dialog so delightful. “Now / Later / Never” is three words with no filler and a perfect parallel structure.

It feels like one of those cases where the rule is sound as a guideline, but a thoughtful design supersedes the baseline value the rule provides. Rules, in a way, are useful structures for when you don’t want to think more. But more thinking can result in delightful exceptions that prove better than the outcome any rule can provide.

I suppose it really is trade-offs everywhere: When you choose to make decisions on a case-by-case basis, the result can be highly tailored to the specific context of the problem at hand. However, within a larger system, you can start to lose consistency and coherence across similar UX decision points. When you choose to make system rules override the sensitivities of individual cases, you can lose the magic and delight of finding waypoints tailored exclusively to their peculiarities.

As software moves towards “scale”, I can’t help but think that systematic rules swallow all decision making, because localized exceptions become points of friction — “We can’t require an experienced human give thought and care to the design of every single dialog box.” What scale wants is automated decision making that doesn’t require skill or expertise, because those things, by definition, don’t scale. Then again, when you manufacture upon inhuman lines, how can you expect humane outcomes?


Browsing the web with JavaScript turned off

Some time ago, I tried to use my web browser with JavaScript turned off by default. The experiment didn’t last long, and my attempt at a privacy-protecting, pain-free web experience failed. Too many websites rely on JavaScript, which made this type of web browsing rather uncomfortable. I’ve kept a Safari extension like StopTheScript around, on top of a content blocker like Wipr, just in case I needed to really “trim the fat” of the occasional problematic webpage. * 1

Recently, I’ve given this setup a new chance to shine, and even described it in a post. The results are in: the experiment failed yet again. But I’m not done. Even if this exact setup isn’t the one I currently rely on, JavaScript-blocking is nevertheless still at the heart of my web browsing hygiene on the Mac today.

For context, this need for fine-tuning comes from the fact that my dear old MacBook Air from early 2020, rocking an Intel chip, starts to show its age. Sure, it already felt like a 10-year-old computer the moment the M1 MacBook Air was released, merely six months after I bought it, but let’s just say that a lot of webpages make this laptop choke. My goal of making this computer last one more year can only be reached if I manage not to throw the laptop through the window every time I want to open more than three tabs.

On my Mac, JavaScript is now blocked by default on all pages via StopTheScript. Leaving JavaScript on, meaning giving websites a chance, sort of defeated the purpose of my setup (performance and privacy). Having JS turned off effectively blocks 99% of ads and trackers (I think, don’t quote me on that) and makes browsing the web a very enjoyable experience. The fan barely activates, and everything is as snappy and junk-free as expected. For websites that require JavaScript — meaning frequently visited sites like YouTube, or sites where I need to be logged in, like LanguageTool — I turn off StopTheScript permanently via the Websites > Extensions menu in the Safari Settings.
I try to keep this list to a bare minimum, even if this means I have to accept a few annoyances, like not having access to embedded video players or comments on some websites. For instance, I visit the Guardian multiple times daily, yet I won’t add it to the exception list, even if I’m a subscriber and therefore not exposed to the numerous “please subscribe” modals. I can no longer hide some categories on the home page, nor watch embedded videos: a small price to pay for a quick and responsive experience, and a minimal list of exceptions.

For the few times when I actually need to watch a video on the Guardian, comment on a blog post, or for the occasional site that needs JavaScript simply to appear on my screen (more on that later), what I do is quickly open the URL in a new private window. There, StopTheScript is disabled by default (so that JavaScript is enabled: sorry, I know this is confusing). Having to reopen a page in a different browser window is an annoying process, yes. Even after a few weeks it still feels like a chore, but it seems to be the quickest way on the Mac to get a site to work without having to mess around with permissions and exceptions, which can be even more annoying on Safari. Again, a small price to pay to make this setup work. * 2

Another perk of that private browsing method is that the ephemeral session doesn’t save cookies, and the main tracking IDs disappear when I close the window. I think.

The problem I had at first was that these sessions tended to display the webpages as intended by the website owners: loaded with JavaScript, ads, modals, banners, trackers, &c. Most of the time, it is a terrible mess. Really, no one should ever experience the general web without any sort of blocker. To solve this weakness of my setup, I switched from Quad9 to Mullvad DNS to block a good chunk of ads and trackers (using the “All” profile).
Now, the private window only lets through the functional part of the JavaScript, plus a few cookie banners and Google login prompt annoyances, but at least I am not welcomed by privacy-invading and CPU-consuming ads and trackers every time my JS-free attempt fails. I know I could use a regular content blocker instead of a DNS resolver, but keeping it active all the time when JS is turned off feels a bit redundant and too much of an extension overlap. More importantly, I don’t want to be tempted to manage yet another exception list on top of the StopTheScript one (been there, done that, didn’t work). Also, with Safari I don’t think it’s possible to activate an extension in Private Mode only.

John Gruber, in a follow-up reaction to The 49MB Web Page article from Shubham Bose, which highlights the disproportionate weight of webpages relative to their content, wrote:

One of the most controversial opinions I’ve long espoused, and believe today more than ever, is that it was a terrible mistake for web browsers to support JavaScript. Not that they should have picked a different language, but that they supported scripting at all. That decision turned web pages — which were originally intended as documents — into embedded computer programs. There would be no 49 MB web pages without scripting. There would be no surveillance tracking industrial complex. The text on a page is visible. The images and video embedded on a page are visible. You see them. JavaScript is invisible. That makes it seem OK to do things that are not OK at all.

Amen to that. But if JavaScript is indeed mostly used for this “invisible” stuff, why are some websites built to use it for the most basic stuff? Video streaming services, online stores, social media platforms, I get it: JavaScript makes sense. But text-based sites? Blogs? Why? The other day I wanted to read this article, and only the website header showed up in my browser. Even Reader Mode didn’t make the article appear.
When I opened the link in a private window, where StopTheScript is disabled, lo and behold, the article finally appeared. For some obscure reason, on that website (and others) JavaScript is needed to load text on a freaking web page. Even if you want your website to have a special behaviour regarding loading speeds, design subtleties, or whatever you use JavaScript for, please, use a <noscript> tag, either to display the article in its most basic form, or at least to show a message saying “JavaScript needed for no apparent reason at all. Sorry.” * 3

1. This is what I do on my phone, as managing Safari extensions on iOS is a painful process. Quiche Browser is a neat solution and a great way for me to have the “turn off JavaScript” menu handy, but without a way to sync bookmarks, history or open tabs with the Mac, I still prefer to stick to Safari, at least for now.
2. I still wish StopTheScript had a one-touch feature to quickly reload a page with JavaScript turned on until the next refresh or for an hour or so, but it doesn’t.
3. This is what I do for this site’s search engine, where PageFind requires JavaScript to operate. Speaking of search engines, DuckDuckGo works fine in HTML-only mode (the only main search engine to offer this, I believe).
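One way to provide such a fallback is HTML's noscript element, which browsers render only when scripting is disabled. A minimal sketch (the wording is the post's joke, not any real site's markup):

```html
<!-- Rendered only when JavaScript is off: either ship the article
     itself as plain HTML, or at least admit why it's missing. -->
<noscript>
  <p>JavaScript needed for no apparent reason at all. Sorry.</p>
</noscript>
```

Better still, of course, is serving the article text as plain HTML in the first place, so no fallback is needed at all.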


Fragments: April 2

As we see LLMs churn out scads of code, folks have increasingly turned to Cognitive Debt as a metaphor for capturing how a team can lose understanding of what a system does. Margaret-Anne Storey thinks a good way of thinking about these problems is to consider three layers of system health:

Technical debt lives in code. It accumulates when implementation decisions compromise future changeability. It limits how systems can change. Cognitive debt lives in people. It accumulates when shared understanding of the system erodes faster than it is replenished. It limits how teams can reason about change. Intent debt lives in artifacts. It accumulates when the goals and constraints that should guide the system are poorly captured or maintained. It limits whether the system continues to reflect what we meant to build, and it limits how humans and AI agents can continue to evolve the system effectively.

While I’m getting a bit bemused by debt metaphor proliferation, this way of thinking does make a fair bit of sense. The article includes useful sections to diagnose and mitigate each kind of debt. The three interact with each other, and the article outlines some general activities teams should do to keep it all under control.

❄ ❄

In the article she references a recent paper by Shaw and Nave at the Wharton School that adds LLMs to Kahneman’s two-system model of thinking. Kahneman’s book, “Thinking Fast and Slow”, is one of my favorite books. Its central idea is that humans have two systems of cognition. System 1 (intuition) makes rapid decisions, often barely consciously. System 2 (deliberation) is when we apply deliberate thinking to a problem. He observed that to save energy we default to intuition, and that sometimes gets us into trouble when we overlook things that we would have spotted had we applied deliberation to the problem. Shaw and Nave consider AI as System 3:

A consequence of System 3 is the introduction of cognitive surrender, characterized by uncritical reliance on externally generated artificial reasoning, bypassing System 2. Crucially, we distinguish cognitive surrender, marked by passive trust and uncritical evaluation of external information, from cognitive offloading, which involves strategic delegation of cognition during deliberation.

It’s a long paper that goes into detail on this “Tri-System theory of cognition” and reports on several experiments they’ve done to test how well this theory can predict behavior (at least within a lab).

❄ ❄ ❄ ❄ ❄

I’ve seen a few illustrations recently that use the symbols “< >” as part of an icon to illustrate code. That strikes me as rather odd; I can’t think of any programming language that uses “< >” to surround program elements. Why that and not, say, “{ }”? Obviously the reason is that they are thinking of HTML (or maybe XML), which is even more obvious when they use “</>” in their icons. But programmers don’t program in HTML.

❄ ❄ ❄ ❄ ❄

Ajey Gore asks: if coding agents make coding free, what becomes the expensive thing? His answer is verification.

What does “correct” mean for an ETA algorithm in Jakarta traffic versus Ho Chi Minh City? What does a “successful” driver allocation look like when you’re balancing earnings fairness, customer wait time, and fleet utilisation simultaneously? When hundreds of engineers are shipping into ~900 microservices around the clock, “correct” isn’t one definition — it’s thousands of definitions, all shifting, all context-dependent. These aren’t edge cases. They’re the entire job. And they’re precisely the kind of judgment that agents cannot perform for you.

Increasingly I’m seeing a view that agents do really well when they have good, preferably automated, verification for their work. This encourages such things as Test Driven Development. That’s still a lot of verification to do, which suggests we should see more effort to find ways to make it easier for humans to comprehend larger ranges of tests. While I agree with most of what Ajey writes here, I do have a quibble with his view of legacy migration. He thinks it’s a delusion that “agentic coding will finally crack legacy modernisation”. I agree with him that agentic coding is overrated in a legacy context, but I have seen compelling evidence that LLMs help a great deal in understanding what legacy code is doing.

The big consequence of Ajey’s assessment is that we’ll need to reorganize around verification rather than writing code:

If agents handle execution, the human job becomes designing verification systems, defining quality, and handling the ambiguous cases agents can’t resolve. Your org chart should reflect this. Practically, this means your Monday morning standup changes. Instead of “what did we ship?” the question becomes “what did we validate?” Instead of tracking output, you’re tracking whether the output was right. The team that used to have ten engineers building features now has three engineers and seven people defining acceptance criteria, designing test harnesses, and monitoring outcomes. That’s the reorganisation. It’s uncomfortable because it demotes the act of building and promotes the act of judging. Most engineering cultures resist this. The ones that don’t will win.

❄ ❄ ❄ ❄ ❄

One of the questions that comes up when we think of LLMs-as-programmers is whether there is a future for source code. David Cassel on The New Stack has an article summarizing several views of the future of code. Some folks are experimenting with entirely new languages built with the LLM in mind; others think that existing languages, especially strictly typed languages like TypeScript and Rust, will be the best fit for LLMs. It’s an overview article, one that has lots of quotations but not much analysis in itself - but it’s worth a read as a good overview of the discussion. I’m interested to see how all this will play out. I do think there’s still a role for humans to work with LLMs to build useful abstractions in which to talk about what the code does - essentially the DDD notion of Ubiquitous Language. Last year Unmesh and I talked about growing a language with LLMs. As Unmesh put it:

Programming isn’t just typing coding syntax that computers can understand and execute; it’s shaping a solution. We slice the problem into focused pieces, bind related data and behaviour together, and—crucially—choose names that expose intent. Good names cut through complexity and turn code into a schematic everyone can follow. The most creative act is this continual weaving of names that reveal the structure of the solution that maps clearly to the problem we are trying to solve.


CSS subgrid is super good

I’m all aboard the CSS subgrid train. Now I’m seeing subgrid everywhere. Seriously, what was I doing before subgrid? I feel like I was bashing rocks together.

Consider the following HTML: The content could be simple headings and paragraphs. It could also be complex HTML patterns from a Content Management System (CMS) like the WordPress block editor, or ACF flexible content (a personal favourite). Typically when working with CMS output, the main content will be restricted to a maximum width for readable line lengths. We could use a CSS grid to achieve such a layout. Below is a visual example using the Chromium dev tools to highlight grid lines.

This example uses five columns with no gap, resulting in six grid lines. The two outermost columns are flexible, meaning they can expand to fill space or collapse to zero width. The two inner columns act as a margin. The centre column is the smallest of two values: either a fixed readable width, or the full viewport width (minus the margins). Counting grid lines correctly requires embarrassing finger math and pointing at the screen. Thankfully we can name the lines. I set a default column for all child elements. Of course, we could have done this the old-fashioned way. But grid has so much more potential to unlock!

What if a fancy CMS wraps a paragraph in a block that is expected to magically extend a background to the full width of the viewport, like the example below? This used to be a nightmare to code, but with CSS subgrid it’s a piece of cake. We break out of the centre column by moving the block to the outermost grid lines. We then inherit the parent grid using the subgrid template. Finally, the nested children are moved back to the centre column. A low-specificity selector allows a single class to override the default column. CSS subgrid isn’t restricted to one level. We could keep nesting blocks inside each other and they would all break containment.
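As a rough sketch of the layout described above (the line names full and center, the class name, and the widths are placeholders of mine, not the author's originals), the pattern might look like:

```css
/* Five columns, six lines: flexible outers, fixed inner margins,
   and a readable centre column. Names and widths are illustrative. */
.content {
  display: grid;
  grid-template-columns:
    [full-start] minmax(0, 1fr)           /* expands to fill, or collapses */
    2rem                                  /* inner margin */
    [center-start] min(65ch, 100% - 4rem) /* the smaller of two widths */
    [center-end] 2rem
    minmax(0, 1fr) [full-end];
}

/* Default: every child sits in the readable centre column. */
.content > * {
  grid-column: center;
}

/* A full-width block breaks out to the outermost lines, then
   re-establishes the parent's tracks for its children via subgrid
   (subgrid also inherits the parent's line names). */
.content > .is-full-width {
  grid-column: full;
  display: grid;
  grid-template-columns: subgrid;
}

.content > .is-full-width > * {
  grid-column: center;
}
```

Because subgridded tracks pass the parent's line names through, nested blocks can keep re-aligning their children to center at any depth.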
If we wanted to create a “boxed” style, we can simply span the inner margin lines instead of the outermost ones. This is why I put the margins inside. In hindsight my grid line names are probably confusing, but I don’t have time to edit the examples, so go paint your own bikeshed :) On smaller viewports, the outermost columns collapse to zero width and the “boxed” style looks exactly like the full-width style.

This approach is not restricted to one centred column. See my CodePen example and the screenshot below. I split the main content in half to achieve a two-column block where the text edge still aligns, but the image covers the available space.

CSS subgrid is perfect for WordPress and other CMS content that is spat out as a giant blob of HTML. We basically have to centre the content wrapper for top-level prose to look presentable. With the technique I’ve shown we can break out more complex block patterns and then use subgrid to align their contents back inside. It only takes a single class to start! Here’s the CodePen link again if you missed it. Look how clean that HTML is! Subgrid helps us avoid repetitive nested wrappers. Not to mention any negative margin shenanigans. Powerful stuff, right?

Browser support? Yes. Good enough that I’ve not had any complaints. Your mileage may vary, I am not a lawyer. Don’t subgrid and drive.

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.


Harness engineering for coding agent users

Last month Birgitta Böckeler wrote some initial thoughts about the recently developed notion of Harness Engineering. She's been researching and thinking more about this in the weeks since and has now written a thoughtful mental model for understanding harness engineering that we think will help people to drive coding agents more effectively.


An Interview with Asymco’s Horace Dediu About Apple at 50

An interview with Asymco's Horace Dediu about his career in tech, Apple's first 50 years, and the prospects for the next 50, particularly in the face of AI.

HeyDingus Yesterday

Apple at 50: A Dent in the Universe

A lot has been said about Apple’s 50th anniversary. Stories shared. Favorites ranked. Contributions celebrated. But as I reflect on why we even care that a computer company has been around for five decades, I keep coming back to the fabled challenge that Steve Jobs gave to John Sculley as he tried to woo him into becoming Apple’s CEO:

Do you want to sell sugar water for the rest of your life, or do you want to come with me and change the world?

Somehow — I sure couldn’t have — Sculley turned him down, at least at first. But eventually he, and thousands of other people — developers, engineers, marketers, retail staff, artists — answered that call to put a dent in the universe. Through their contributions as employees, app developers, evangelists, and executives, they’ve made some wonderful things. Products that have changed the world. That help us connect and build, that democratize access to information and to privacy, that entertain and watch out for us.

Apple’s not a perfect company. I’ve been less enthused by some of its actions and inactions in recent years. But as a whole, I still find myself inspired by the products they make. No, actually, that’s not quite right. I’m not inspired by the products. I’m inspired by the attention to detail, the exquisite taste, the enormous effort, and the giving a damn by the people who make them.

Sure, they just make computers. Hardware, software, and services melded together into computers of different shapes and sizes. But what attracts me to Apple’s computers is that they — unlike the computers from nearly every other company in the market — carry with them the spirit, or DNA as Jobs would say, of the people that built them. From the iPod nano, to the iMac and macOS, to the iPhone, the iPad, the Apple Watch and AirPods, and, yes, the Vision Pro. There’s something about each of these products that ignited curiosity in me. What could I do with them?
I sit here, typing these words on a MacBook in my car while traveling across a lake on a ferry, connected to the internet through Wi-Fi (which Apple helped birth) tethered to my iPad. I’ll publish it to the World Wide Web (invented on Jobs’ NeXTSTEP, which would serve as the foundation for Mac OS X) on a website themed and named to pay tribute to Apple. I spent my youth expanding my taste with an iPod and iTunes. I took notes and studied in college with an iPad. I launched my business and keep it running with a Mac. I track my runs and pay for almost everything using my Apple Watch. My favorite TV shows are the ones that Apple produces. If Apple made shitty things, I would look elsewhere. But, so far, they keep making wonderful things.

It’s been fun to look back at how far Apple has come from two guys selling 50 computers to the local Byte Shop to one of the largest and most successful companies in the world. But now I’m most excited to see what they’ll do next. I’ll update this post with quotes from other articles and retrospectives that make me smile as I come across them. Hope you enjoy them too.

HeyDingus is a blog by Jarrod Blundy about technology, the great outdoors, and other musings. If you like what you see — the blog posts, shortcuts, wallpapers, scripts, or anything — please consider leaving a tip, checking out my store, or just sharing my work. Your support is much appreciated! I’m always happy to hear from you on social, or by good ol’ email.

iDiallo Yesterday

13th Year of Blogging

Of all the days to start a blog, I chose April Fools' Day. It wasn't intentional, maybe more of a reflection of my mindset. When I decide to do something, I shut off my brain and just do it. This was a commitment I made without thinking about the long-term effects. I knew writing was hard, but I didn't know how hard. I knew that maintaining a server was hard, but I didn't know the stress it would cause. Especially that first time I went viral. Seeing traffic pour in, reading back the article, and realizing it was littered with errors. I was scrambling to fix those errors while users hammered my server. I tried restarting it to relieve the load and update the content, but to no avail. It was a stressful experience. One I wouldn't trade for anything in the world.

13 years later, it feels like the longest debugging session I've ever run. Random people message me pointing out bugs. Some of it is complete nonsense. But others... well, I actually sent payment to a user who sent me a proof of concept showing how to compromise the entire server. I thought he'd done some serious hacking, but when I responded, he pointed me to one of my own articles where I had accidentally revealed a vulnerability in my framework. The amount you learn from running your own blog can't be replicated by any other means.

Unlike other side projects that come and go, the blog has to remain. Part of its value is its longevity. No matter what, I need to make sure it stays online. In the age of AI, it feels like anyone can spin up a blog and fill it with LLM-generated content to rival any established one. But there's something no LLM can replicate: longevity. No matter what technology we come up with, no tool can create a 50-year-old oak tree. The only way to have one is to plant a seed and give it the time it needs to grow. Your very first blog post may not be entirely relevant years later, but it's that seed. Over time, you develop a voice, a process, a personality.
Even when your blog has an audience of one, it becomes a reflection of every hurdle you cleared. For me, it's the friction in my career, the lessons I learned, the friends I made along the way. And luckily, it's also the audience that keeps me honest and stops me from spewing nonsense. Nothing brings a barrage of emails faster than being wrong. Maybe that's why I subconsciously published it on April Fools' Day. Maybe that's the joke. I'm going to keep adding rings to my tree, audience or no audience, I'm building longevity. Thank you for being part of this journey.

Extra: Some articles I wrote on April Fools' Day:
- So you've been blogging for 2 years
- Quietly waiting for Overnight Success
- Happy 5th Anniversary
- Count the number of words with MySQL
- How to self-publish a book in 7 years
- The Art of Absurd Commitment
- Happy 12th Birthday Blog
- What is Copilot exactly?

Jeff Geerling Yesterday

DRAM pricing is killing the hobbyist SBC market

Today Raspberry Pi announced more price increases for all Pis with LPDDR4 RAM, alongside a 'right-sized' 3GB RAM Pi 4 for $83.75. The price increases bring the 16GB Pi 5 up to $299.99. Despite today's date, this is not a joke. I published a video going over the state of the hobbyist 'high end SBC' market (4/8/16 GB models in the current generation), which I'll embed below. But if you'd like the tl;dr:

Brain Baking Yesterday

Favourites of March 2026

Our daughter turned three. We’re beyond exhausted, but a ripgrep search in this repository yields five more instances of the word exhausted in combination with parenting, so I’ll shut up. I guess we also celebrate that after three years of pure chaos, we’re… still alive? Previous month: February 2026.

I am just two levels short of finishing Gobliins 6 before deciding to throw in the towel. Thanks to the increased presence of moon logic, the entire adventure was more frustrating than relaxing. As a big Gobliins fan, I have to admit: the game left me a bit disappointed. It’s all right; I’ll just replay Gob3 again. As it left me wanting more, I went back to the original Gobliiins game that I somehow missed, as back in the day my dad bought Gobliins 2 and we just continued with 3 without looking back. It’s still worth exploring but very basic, and the presence of the life bar is a very strange (and bad!) design choice that fortunately was abandoned in the sequels. I charged the Analogue Pocket and hope to get in some good ol’ Game Boy (Color) games in the coming month.

I read a depressing amount of personal genAI tales; more than enough to fill another blog post. I’ll try to keep these out of here as much as possible. My wife bumped into a hacker called Un Kyu Lee crafting his own micro journal hardware. The result looks very cool, including a hinge to hang it on the door as a physical reminder. I’d rather keep on journaling with my fountain pens, but still, very cool!

Related topics: / metapost / By Wouter Groeneveld on 1 April 2026.

Michael vibe-code-ported an X11 window manager into Wayland; an interesting Claude experiment to see how agentic development works. Greg Newman hosted the Emacs Blog Post Carnival 2025-07 on writing experiences and summarised the participating links. Lots of little gems in there. Rijksmuseum writes about the discovery of the new Rembrandt painting.
Well, “new”—it’s been in a private collection for years and only recently resurfaced. Peter Bridger shares his experience at the retro happening SWAG February 2026. I wish we had something similar nearby! Chuck Jordan shares SimCity vibes. As one of the original programmers involved in the project, he would know. (Via The Virtual Moose) The 1MB Club has an interesting (older) article I read last month: consider disabling HTTPS auto redirects. I can’t remember why I turned this back on: I want my old WinXP machine to be able to reach my own site as well without the extra TLS overhead. Funny though: they mention “You can freely view this website on both HTTPS and HTTP.” I remove the “s” in the protocol, press Enter, and get redirected. Whoops. PolyWolf has been thinking about blazing fast static site generators. This is a goldmine, as I have a wild idea to write my own generator in Clojure. When the exhaustion and brain fog go away, that is. According to Rishi Baldawa, the reviewer isn’t the bottleneck. This one’s a bit AI flavoured, so beware if you’re coming down with an AI cold. (I know I have. Handkerchiefs full.) Marcin Wichary’s keyboard grandmastery again shines through in his Apple Fn endgame article. I wish his keyboard book wasn’t sold out. Wordsmith writes about the underrated simplicity of the original Harvest Moon (1996) video game. Dale Mellor defends using a dynamically-produced blog site, which is a nice change given the static site generator craziness. I’m still on Hugo and have little need for the points he brings up, but still, some others might. Tazjin tries out Guix as a Nixer. I was eyeing Guix as a budding Lisp fanboy, but both options still can’t seem to fit in my head. I’ll let it stew for a little while longer. Homo Ludditus announces distro hopping time. The conclusion? “The madhouse could be a valid destination. But I’m still looking for better alternatives.” So much for 2026 as the year of the Linux desktop, huh.
The Digital Antiquarian writes about the year of peak Might & Magic, when New World Computing was still on top of the world. Here’s an interesting thought experiment by Andrey Listopadov: what if structural editing was a mistake? In this 2020 post by Vincent Bernat, photos of a bunch of cool vintage PC expansion cards are shared, in conjunction with period-correct software that made great use of them. Gabor Torok switched to KDE Plasma, an interesting read because we both switched to OSX for reasons and are trying to crawl out of the Apple hole. I don’t know if I’m quite ready yet. Did you know there’s a relation between knitting and programming? Abbey Perini does. Mykal Machon shares some insightful guiding principles to lead a fuller life. Judging by the principles, I don’t think Mykal has any young kids. I’m using this as a checklist to find out if I missed essential albums: Hip Hop Golden Age’s Top 40 Hip Hop Albums of 1998. Here’s another GitHub “awesome” list; this time public APIs. Could be useful. Already used for my courses. It doesn’t hurt to link to the 2007 Slow Code manifesto. FontCrafter is a cool way to generate a real font based on your handwriting. WireTap is an open source Ngrok alternative. The Stump Window Manager is the only WM (except the obvious EXWM) I could find that’s written in Common Lisp. I should look into Ulauncher if I ever want to make the switch to Linux to replace Alfred. Christoph Frick shares a cool GitHub Gist showcasing that you can write your AwesomeWM config in Fennel instead of Lua. Yazi looks like an Emacs Dired inside a shell?

0 views
Taranis Yesterday

Go has some tricks up its logging sleeve

Since it's more or less TDOV (IYKYK...), I'm going to talk about logging instead. Logging isn't exactly the most shiny or in-your-face thing that coders tend to think about, but it really can make or break large systems. Throwing in a few print statements (or fmt.Printf, or whatever) only scratches the surface. I'm mostly talking about my own logging library here. If there's interest, I'd consider releasing it as open source, but it's currently a bit of a moving target. Feel free to comment if you think you'd find it useful, and I'll try to find the time to split it out from the Euravox codebase and put it on GitHub. The Go programming language ships with logging capabilities in the standard library, found in the log package. If you don't have any better alternatives, using that package rather than raw fmt.Printf is far preferable. My own logging package is a bit nicer. It's not my first – one of my first jobs working in financial markets data systems back in the 90s was the logging subsystem for the Reuter Workstation, and there is some influence from that 30-odd years later in my library. One of the first things I always recommend is breaking out log messages by log level. I currently define the following: It's possible to set a configuration parameter that limits logging at a particular level. This makes it possible to crank logging all the way up for tests, but dial it down for production without changing the code or having to introduce if/then guards around the logging. It was a finding back in the 90s that systems would sometimes break when you took the logging out – this isn't something that's normally a problem with Go, because idiomatic code doesn't tend to have too many side-effects, but it was quite noticeable with C++. Of course, the library doesn't do the string formatting if the level is disabled, but any parameters are still evaluated, which tends to be a less risky approach. It's common to send log messages to stdout or stderr. 
There's nothing fundamentally wrong with this, but I find it useful to have deeper capabilities than this. My own library has three options, which can be used together (and with different log levels).

Any good logging solution should be able to include file name and line number information in log output. Using an IDE like VS Code, this allows control/command-clicking a log entry and immediately seeing the code that generated it. C and C++ support this via some fancy #define stunts. Go lacks this kind of preprocessor, but actually has something far better: the runtime.Caller() library function. This makes it possible to pull back the file name and line number (and program counter if you care) anywhere up the call stack. This code fragment comes from my logging function. The argument to Caller is typically 2, because this code is called from one of many convenience functions for syntactic sugar. The logging library will automatically pick up the file paths and line numbers where the log commands are located. However, this isn't always useful, and sometimes can be a complete nightmare. If a log call lives inside a shared helper function, the file name and line number that get logged will be where that one log command is located. This can be absolutely maddening if the helper has many call sites, because they will all look exactly the same in the log. My logging library has a small tweak that I've not seen elsewhere – I'm not claiming invention or ownership, because it's so obviously useful that I'd be shocked if nobody else has ever done it. It's just I've not personally seen it. The tweaked variant works like the ordinary logging call, but takes an extra parameter at the start which says how many extra stack frames to look through to find the filename and line number. With it, the thing that makes its way into the log is the location of the helper's call sites, not the logging calls themselves.
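A minimal Go sketch of the runtime.Caller() approach plus the extra-frames tweak described above — callerLabel, Logf, LogfUp and saveRecord are my own illustrative names, not the author's library:

```go
package main

import (
	"fmt"
	"path/filepath"
	"runtime"
)

// callerLabel returns "file:line" for the frame `extra` levels above the
// immediate caller of the convenience wrapper. The +2 mirrors the post's
// note that the argument to Caller is typically 2: one frame for
// callerLabel itself, one for the wrapper that called it.
func callerLabel(extra int) string {
	_, file, line, ok := runtime.Caller(extra + 2)
	if !ok {
		return "???:0"
	}
	return fmt.Sprintf("%s:%d", filepath.Base(file), line)
}

// Logf tags the message with its own call site.
func Logf(msg string) {
	fmt.Printf("%s %s\n", callerLabel(0), msg)
}

// LogfUp tags the message with a call site further up the stack, so a
// shared helper can log the location of *its* callers, not itself.
func LogfUp(extra int, msg string) {
	fmt.Printf("%s %s\n", callerLabel(extra), msg)
}

func saveRecord(msg string) {
	// Without the extra skip, every call site of saveRecord would log
	// this same line; with it, the log shows who called saveRecord.
	LogfUp(1, msg)
}

func main() {
	Logf("tagged with this line")
	saveRecord("tagged with the caller of saveRecord")
}
```

This is only a sketch of the skip mechanics; a real library would fold it into the levelled logging calls.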
This might seem to be a subtle difference, but the practical consequences are huge – get this right, and logs become useful traces of activity that make it possible to look backwards in time to see when particular data items have been acted upon, and exactly by what code. Almost as good as single-stepping with a debugger, but it can be done after the fact. Anyway, in conclusion, trans women are women, trans men are men, nonbinary and all other variant identities are valid. And fuck fascism.

The log levels:

- SPM -- Spam messages. Very verbose logging, not something you'd normally use, but the kind of thing that makes all the difference when doing detailed debugging.
- INF -- Information messages. These are intended to be low volume, used to help trace what systems are doing, but not actually representing an error (i.e., they explicitly are used to log normal behaviour).
- WRN -- Warning messages. What it says on the tin. Something is possibly wonky, but not bad enough to be an actual error. Real production systems should have as close to zero of these as possible -- something should either be normal (INF) or an actual error (ERR).
- ERR -- Error messages. This represents recoverable errors. Something bad happened, but the code can keep running without risk.
- FTL -- Fatal errors. These errors show that something very bad has happened, and that the code must abort immediately. There are two cases where this is appropriate. One is when something catastrophic has happened -- the system has run out of handles, the process is OOMing, etc. The second is where a serious logic bug has been detected. Though in some cases ERR can be OK for this, aborting makes it easier to spot that processes in production are badly broken (e.g., after a bad push) and need to be rolled back.

The output options:

- stdout. Nothing special here, but I do have the option to send colour control codes for terminals that support it, which makes logs much more readable.
- Files. This is similar to piping the process through the tee command, but has the advantage that things like log rotation can be built in. I need to get around to supporting log rotation, but file output works now.
- Circular buffer. This is the one you don't see often. The idea here is you maintain an in-RAM circular buffer of N lines (say about 5000), which can be exposed via code. I use this to provide an HTTP/HTML interface that makes it possible to watch log output on a process via a web browser. This is a godsend when you have a large number of processes running across multiple VMs and/or physical machines.
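The circular-buffer sink is easy to sketch in Go. This is my own guess at the mechanics (ringLog, Append and Snapshot are illustrative names, not the author's API):

```go
package main

import (
	"fmt"
	"sync"
)

// ringLog keeps the most recent `capacity` lines in memory — a minimal
// in-RAM circular buffer like the one described in the post.
type ringLog struct {
	mu    sync.Mutex
	lines []string
	next  int  // index of the slot to overwrite next
	full  bool // true once the buffer has wrapped at least once
}

func newRingLog(capacity int) *ringLog {
	return &ringLog{lines: make([]string, capacity)}
}

// Append stores a line, overwriting the oldest one when the buffer is full.
func (r *ringLog) Append(line string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.lines[r.next] = line
	r.next = (r.next + 1) % len(r.lines)
	if r.next == 0 {
		r.full = true
	}
}

// Snapshot returns the buffered lines, oldest first — this is what an
// HTTP handler could render for browser-based log watching.
func (r *ringLog) Snapshot() []string {
	r.mu.Lock()
	defer r.mu.Unlock()
	if !r.full {
		return append([]string(nil), r.lines[:r.next]...)
	}
	return append(append([]string(nil), r.lines[r.next:]...), r.lines[:r.next]...)
}

func main() {
	rl := newRingLog(3)
	for _, s := range []string{"a", "b", "c", "d"} {
		rl.Append(s)
	}
	fmt.Println(rl.Snapshot()) // prints [b c d] — "a" was overwritten
}
```

Exposing Snapshot via an HTTP handler is then a few lines of net/http; the mutex keeps the buffer safe to read while other goroutines are still logging.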

0 views
Andy Bell Yesterday

I want an alarm clock

Nothing fancy is needed here and certainly nothing “smart”, but my one actual use for an Apple Watch — as a chill alarm clock — is silly really. I’m so fed up of my Apple Watch, so has anyone got a recommendation for an alarm clock that:

- Is chill with the sounds. I don’t need to be yelled awake, thanks.
- Allows me to set a different alarm time — or no alarm — for different days
- Is not smart and never connects to the internet
- Doesn’t tick

2 views
David Bushell Yesterday

I quit. The clankers won.

… is what I’m reading far too often! Some of you are losing faith! A growing sentiment amongst my peers — those who haven’t already resigned themselves to an NPC career path † — is that blogging is over. Coding is cooked. What’s the point of sharing insights and expertise when the Cognitive Dark Forest will feed on our humanity? Before I’m dismissed as an ill-informed hater, please note: I’ve done my research. † To be fair it’s a valid choice in this economy. Clock in, slop around, clock out. Why not? It’s never been more important to blog. There has never been a better time to blog. I will tell you why. We’re being starved for human conversation and authentic voices. What’s more: everyone is trying to take your voice away. Do not opt out of using it yourself. First let’s accept the realities. The giant plagiarism machines have already stolen everything. Copyright is dead. Licenses are washed away in clean rooms. Mass surveillance and tracking are a feature, privacy is a bug. Everything is an “algorithm” optimised to exploit. How can we possibly combat that? From a purely selfish perspective it’s never been easier to stand out and assert yourself as an authority. When everyone is deferring to the big bullshitter in the cloud, your original thoughts are invaluable. Your brain is your biggest asset. Share it with others for mutual benefit. I find writing stuff down improves my memory and hardens my resolve. I bet that’s true for you too. It’s part rote learning, part rubberducking †. Writing publicly in blog form forces me to question assumptions. Even when research fails me, Cunningham’s Law saves me. † Some will claim writing into a predictive chat box helps too, and sure, they’re absolutely right! Blogging makes you a better professional. No matter how small your audience, someone will eventually stumble upon your blog and it will unblock their path. Don’t accept a fate being forced upon you.
The AI industry is 99% hype; a billion-dollar industrial complex to put a price tag on creation. At this point if you believe AI is ‘just a tool’ you’re wilfully ignoring the harm. (Regardless, why do I keep being told it’s an ‘extreme’ stance if I decide not to buy something?) The 1% utility AI has is overshadowed by the overwhelming mediocracy it regurgitates. We’re saying goodbye to Sora. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing. @soraofficialapp - XCancel Is there anything, in the entire recorded history of human creation, that could have possibly mattered less than the flatulence Sora produced? NFTs had more value. I’m not protective over the word “art”. Generative AI is art. It’s irredeemably shit art; end of conversation. A child’s crayon doodle is also lacking refined artistry but we hang it on our fridge because a human made it and that matters. We care, and caring has a positive effect on our lives. When you pass human creativity through the slop wringer, or just prompt an incantation, the result is continvoucly morged; a vapid mockery of the input. The garbage out no longer matters, nobody cares, nobody benefits. I forgot where I was going with this… oh right: don’t resign yourself to the deskilling of our craft. You should keep blogging! Take pride in your ability and unique voice. But please don’t desecrate yourself with slop. The only winning move is not to play. WarGames (1983) We’ve gotten too comfortable with the convenience of Big Tech. We do not have to continue playing their game. Don’t buy the narratives they’re selling. The AI industry is built on the predatory business model of casinos. Except they’ve forgotten the house is supposed to win. One upside of this looming economic and intellectual depression is that the media is beginning to recognise gatekeepers are no longer the hand that feeds them. Big Tech is not the web.
You don’t have to use it or support it. Blog for the old web, the open web, the indie web — the web you want to see. And if you think I’m being dramatic and I’ve upset your new toys, you’re welcome to be left behind in the miasmatic dystopia these technofascists are racing to build. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

0 views

RTSpMSpM: Harnessing Ray Tracing for Efficient Sparse Matrix Computations

RTSpMSpM: Harnessing Ray Tracing for Efficient Sparse Matrix Computations Hongrui Zhang, Yunan Zhang, and Hung-Wei Tseng ISCA'25 I recall a couple of decades ago when Pat Hanrahan said something like “all hardware wants to be programmable”. You can find a similar sentiment here: With most SGI machines, if you opened one up and looked at what was actually in there—processing vertexes in particular, but for some machines, processing the fragments—it was a programmable engine. It’s just that it was not programmable by you; it was programmable by me. And now, twenty years later, GPU companies have bucked the programmability trend and added dedicated ray tracing hardware to their chips. Little did they know, users would find a way to utilize this hardware for applications that have nothing to do with graphics. The task at hand is multiplying two (very) sparse matrices, A and B. Each matrix can be partitioned into a 2D grid, where most cells in the grid contain all 0’s. Cells in A with non-zero entries must be multiplied by specific cells in B with non-zero entries (using a dense matrix multiplication for each product of two cells). The core idea is elegantly simple, and is illustrated in Fig. 5: Source: https://dl.acm.org/doi/full/10.1145/3695053.3731072 The steps are:

1. Build a ray tracing acceleration structure corresponding to the non-zero cells in B
2. For each non-zero cell in A, trace a ray through B to determine if there are any non-zero cells in B that need to be multiplied by the current cell in A

In fig. 5 the coordinates of the non-zero cells in matrix A are: [(2, 1) (2, 3) (3, 3) (7, 1)]. The figure shows rays overlaid on top of the result matrix, but I find it easier to think of the rays traced through matrix B. The ray corresponding to the cell in A at (2, 1) has a column index of 1, so the algorithm traces a ray horizontally through B at row 1. The ray tracing hardware will find that this ray intersects with the cell from B at coordinate (1, 4).
So, these cells are multiplied together to determine their contribution to the result. Fig. 7 has benchmark results. All results are normalized to the performance of the baseline library (i.e., values greater than one represent a speedup); the baseline is the Intel MKL library running on a Core i7 14700K processor. The “w/o RT cores” bars show results from the same algorithm with ray tracing implemented in general CUDA code rather than using the ray tracing accelerators. It is amazing that this beats the MKL baseline across the board. Source: https://dl.acm.org/doi/full/10.1145/3695053.3731072 Dangling Pointers It seems like the core problem to be solved here is pointer-chasing. I wonder if a more general-purpose processor that is located closer to off-chip memory could provide similar benefits.
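The traversal described above can be mimicked in plain software to see the shape of the algorithm: a map from B's row index to its non-zero tiles stands in for the acceleration structure, and each "ray" becomes a lookup. This is my own Go sketch (coord and multiplyTiled are invented names, and the dense per-tile products are elided to a hit count), not the paper's implementation:

```go
package main

import "fmt"

// coord identifies a non-zero tile by its (row, col) position in the grid.
type coord struct{ row, col int }

// multiplyTiled finds, for each non-zero tile (i, k) of A, the non-zero
// tiles (k, j) of B it must be multiplied with, and counts one tile-product
// hit per contribution to result tile (i, j).
func multiplyTiled(aTiles, bTiles []coord) map[coord]int {
	// Index B's tiles by row — the stand-in for the ray tracing
	// acceleration structure built over B's non-zero cells.
	bByRow := map[int][]coord{}
	for _, b := range bTiles {
		bByRow[b.row] = append(bByRow[b.row], b)
	}
	hits := map[coord]int{}
	for _, a := range aTiles {
		// "Trace a ray" horizontally through B at row a.col.
		for _, b := range bByRow[a.col] {
			hits[coord{a.row, b.col}]++ // (i,k)*(k,j) contributes to (i,j)
		}
	}
	return hits
}

func main() {
	// The Fig. 5 example: A's tiles (2,1) and (7,1) both hit B's tile (1,4).
	a := []coord{{2, 1}, {2, 3}, {3, 3}, {7, 1}}
	b := []coord{{1, 4}}
	fmt.Println(multiplyTiled(a, b))
}
```

The point of the paper is of course that the RT cores do this intersection search in hardware; the sketch only shows which tile pairs the rays are meant to discover.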

0 views
iDiallo Yesterday

What is Copilot exactly?

A coworker of mine told me that he uses Microsoft Copilot frequently. In fact, he said "I don't know how I did my work without it." That came as a surprise to me. I can't stand Copilot. This is a very productive employee, one of those 10x engineers you can throw any problem at and he'll find a solution. Obviously, if he found a use for Copilot, then I was probably holding it wrong. So I decided to give it a shot. I put all my prejudice aside and embraced the tool fully. AI is the future, and it shouldn't be hard to find a way to integrate it into my everyday workflow. I decided to give it a week, meaning I wouldn't complain even when I didn't get the result I wanted. Instead, for every frustration, I would use Copilot to help me turn that frown into a smile. The result? I created a workflow. I automated a lot of the things I find super annoying: scrum ceremonies, BRD reviews, email writing. All the things I feel like I must do only for someone else to tick a box in their own workflow. After the first week, I decided to extend my trial for a full sprint. By embracing this tool, I felt like I had eliminated my manager's job. Instead of having him check boxes on his end, I could just present my reports at the end of the week. I created a template prompt where I could dump information throughout the day, and at the end of the day it would generate a report in whatever format I wanted. I was so proud of my template that I shared it with my 10x coworker. He didn't respond with the enthusiasm I was expecting. He didn't understand what I was trying to do. In fact, he told me he had never used Copilot before. That was in direct contradiction of what he'd told me earlier. He was the only reason I gave this tool a shot, and here he was pretending we'd never had that conversation. Well, he clarified: "I meant Copilot on VS Code." Now, can you guess which Copilot I was using? Whatever Copilot is offered through Teams. 
And I say "whatever" because I genuinely don't know which one that is. Is it the same as accessing Copilot on the web? I wouldn't know. Our corporate firewall blocks that one. Teams seems to be the only approved method. Anyway, what is Copilot exactly? Is it just a white-labeled ChatGPT? When I asked it directly, it said: "It's Microsoft's AI companion, powered by advanced models (including OpenAI's), but shaped by Microsoft's ecosystem, design philosophy, and capabilities. If ChatGPT is a powerful engine, Copilot is the full car built around it — with Microsoft's dashboard, safety systems, and features." But where did the name come from? I'm sure I first heard it in the context of GitHub. The first AI code assistant shipped with VS Code. Even though they're both Microsoft products, they're two distinct products. If you use GitHub Copilot, your data isn't siphoned back to your Microsoft account (for now). What I was using in Teams is Copilot for Microsoft 365 , which is apparently different from Microsoft Copilot . The 365 version lives inside Microsoft 365 apps (that's Microsoft Office's new name, for those not keeping up). The key difference is that the 365 version can work with your emails, documents, OneDrive, and so on. But if you have a Windows device, you also have Windows Copilot , distinct from the one in Microsoft 365. This one is your AI assistant inside the OS, meant to help you launch apps, summarize what's on your screen, and handle everyday tasks. In my experience, I couldn't get it to do any of those things. Apparently, I don't have a Copilot+ PC. Reading through Microsoft's docs, I also found something called Copilot Chat . It's not quite a distinct product, but I'm not sure how else to classify it. Microsoft describes it as a general-purpose reasoning tool for writing, brainstorming, and coding. You can find it in M365 apps, and also within GitHub Copilot. That's the part that explains code, suggests fixes, and helps with debugging. 
I asked Copilot Chat via GitHub Copilot to explain the difference between all the offerings. It summarized it neatly: "Same family, different jobs." I'm only scratching the surface of what Copilot is supposed to be, and I'm already tired. I felt inspired by a developer to explore it, only to find that he was touching just a small slice of this ecosystem. I still think it's worth encouraging teammates to embrace a tool that everyone else is losing sleep over. I should have stopped there, but I wanted to learn more about his workflow. I'm a developer after all, and whatever he's doing would be worth implementing with my team. So I asked him. "What is your developer workflow using Copilot?" I was not prepared for the answer he gave me: "Actually, I made a mistake. I meant Cursor." And there it was. He wasn't talking about Copilot at all. Not the Teams one, not the GitHub one, not any of them. He had used "Copilot" the way most people use "Kleenex". To him, any AI code assistant was just a copilot. I had spent a whole sprint, struggling through this tool, inspired by someone who couldn't have cared less about Microsoft's ecosystem. There's a lesson there, I'm sure. I just didn't learn anything.

0 views
Stratechery Yesterday

Axios Supply Chain Attack, Claude Code Code Leaked, AI and Security

AI is going to be bad for security in the short-term, but much better than humans in the long-term.

0 views
Manuel Moreale 2 days ago

Slash AI

I’ve seen /ai pages popping up here and there on other people’s blogs. The idea for these pages is, and I quote, to «promote trust and transparency». Trust, in the context of the 2026 internet—and society in general—is quite the complex topic. Dishing out trust willy-nilly is no longer a reasonable thing to do, and I also think we’re getting to the point where the “benefit of the doubt” is no longer worth considering. If I were to write on this /ai page that I don’t let these tools touch anything I post on this blog, would you trust me? Would that change the perception you have of me? And if you did trust me, why are you doing it? After all, you have no way to actually know for sure. But that is precisely what trust is, isn’t it? Trust is not based on knowledge, but on instinct, on intuitions, on feelings, and on prior experience. Personally, I couldn’t care less what you write on your /ai page. The same way I couldn’t care less if you use em-dashes. Words are cheap, easy to write, and they mean less and less. But your history, all the baggage you carry with you, all you have written and said, that is harder to fake; building it is time-consuming, but destroying it takes a second. If you start posting AI slop, my trust in you is gone in an instant, and no matter how you’ll try to justify it, that trust will not come back. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs

0 views
ava's blog 2 days ago

offer: blogmaxxing class

Looksmaxxing is all the rage nowadays, but what about your blog? Look no further! I am easily one of the bloggers ever, and I have compiled everything I have learned in the years on this platform. And you guys get it first, for 50% off!

✍️ For only 67.67 Euro, you'll get course material covering:

- High-impact writing and leveling up your Word/Memorability Ratio.
- Striking the balance between Jestermaxxing and Corporatemogging.
- Sharp sentence structure for a chiseled outline!

✨ For a steal of 69.99 Euro, you unlock access to everything about:

- Lessons learned from beating your header with a hammer.
- Smoothing out your CSS wrinkles with hardcore AI Sculpting™.
- How the optimal font-weight changed my life!

🚀 The final lessons are yours for 42.00 Euro:

- The art of biohacking Cortisol and Dopamine spikes that turns readers into fans.
- FOMO Widgets: “15 people are reading this now,” and other social proof hacks that build core community moments!
- The undeniable magic of using OpenClaw to auto-respond to reader mails and letting it clean your Inbox for you :)

Your blog deserves more than mediocrity. It deserves at least 50 upvotes. With this, you’ll unlock the secret 3-step system top bloggers use to dominate the Trending page while looking effortlessly perfect. ⏳ WARNING: Only 17 spots left for VIP access, and only available until 01.04.2026 23:59:59 CET! Reply via email Published 01 Apr, 2026

0 views
./techtipsy 2 days ago

Improving my focus by giving up my big monitor

Keeping my focus has been challenging. It’s not a new phenomenon, and I suspect that there are contributing factors that have led to the unfocused state dominating. For example, I’ve been that guy who wants to be on top of things, to be in the loop, to respond to urgent issues. It feels fantastic to be in that firefighter role as it gives me the feeling of having an impact, but it results in me being drained at the end of the day and often over-caffeinated. One day I was doing work on my laptop on a couch because hitting 30 apparently means that sleeping slightly incorrectly results in debilitating back pain. During that session, I was working on a larger task and making tons of tiny little changes that needed to be done in order to release a new feature. I was finally in the zone again, and it felt fantastic! That’s when I decided to start an experiment: can I improve my focus by giving up my big monitor? I’ve done this type of “experiment” a few times in the past when the power has gone out and my super duper ergonomic setup has become useless. No power, no USB-C dock, no monitor. It wasn’t that fun and my eyes hated reading text off of a laptop screen. A few things have changed since then, though. Almost a month in, I’ve had a pleasant experience with this experiment. I feel more focused. Yeah, that’s it. Whether I’m actually more focused is up for debate, as I’m not sure how to measure it objectively. 1 Working off of a single screen forces me to focus on what’s at hand. Alt-tabbing to a different app is quick, but just enough of a hurdle to deter me from doing it in meetings or other focused tasks. In my personal free time 2 , this has also resulted in computer use becoming more intentional. On a 34" ultrawide monitor, it was too easy to put YouTube running on the left side, and whatever else on the right. It was distracting and resulted in time being wasted doing nothing.
Interestingly enough, making computer use more intentional was a trick that I tried when recovering from burnout, and it helped a lot. As a side effect, the power consumption of my whole home office setup is significantly smaller, as I don’t have to power my ultrawide monitor. That made up most of the power consumption, with peaks of up to 100W. I also don’t have to fight with my dock killing my whole network, because there is no dock. If you’re just cleaning up your desk and plopping your laptop on there, you will likely have a bad time. The posture will be off, and depending on your laptop, the keyboard and touchpad combination can prove to be an ergonomic nightmare. At the very least, you should put your laptop up somewhere higher. Ideally, it should be using a stand that allows you to use your favourite wireless keyboard and mouse below it. A simple laptop stand could get you most of the way there, but the ideal solution is a freely adjustable monitor arm combined with a VESA-mounted laptop holder. This gives you the freedom to place the laptop exactly as you’d like while leaving the desk free for your peripherals. Most monitor arm laptop holders have side arms that keep it in place, but I found them to be extremely annoying, so I removed them by disassembling the holder and yanking out the side arms and springs. You may still need them if you are using a very aggressive vertical angle, but I hated having to give up one USB-A port and blocking about 25% of the exhaust fan also didn’t seem like a good idea. Mounting the laptop with the springy side arms was also awkward. If you’re using a desktop and have a big display, then intentionally using a smaller and cheaper one for a while may prove to be just as effective. 
If you’re using a laptop with a horrible display with poor viewing angles, glare and crappy resolution (which a lot of older ThinkPads have), then you can still try this out, but I suspect that you’ll not have a very good experience with it due to this reason alone. I still prefer to do my gaming sessions on a big screen. It’s more immersive, and I can make out tiny details better, such as spotting a car in the distance while driving in the oncoming lane in Need for Speed Most Wanted. I’m happy with this setup. That’s all I ever needed.

What’s changed since those earlier forced experiments:

- GNOME has working fractional scaling that you can simply enable in display settings
- ThinkPad displays have gotten better, with the picture being quite cromulent, and the 16:10 aspect ratio helps fit more on the screen
- the nature of my work has changed and will keep changing in the near future

1. go ahead, try to measure developer productivity objectively. Good luck! ↩︎
2. that’s what I call the time window between putting my son to sleep and midnight. ↩︎

0 views

Summary of reading: January - March 2026

"Intellectuals and Society" by Thomas Sowell - a collection of essays in which Sowell criticizes "intellectuals", by which he mostly means left-leaning thinkers and opinions. Interesting, though certainly very biased. This book is from 2009 and focuses mostly on early and mid 20th century; yes, history certainly rhymes. "The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics" by Ben Buchanan - a pretty good overview of some of the major cyber-attacks done by states in the past 15 years. It doesn't go very deep because it's likely just based on the bits and pieces that leaked to the press; for the same reason, the coverage is probably very partial. Still, it's an interesting and well-researched book overall. "A Primate's Memoir: A Neuroscientist’s Unconventional Life Among the Baboons" by Robert Sapolsky - an account of the author's years spent researching baboons in Kenya. Only about a quarter of the book is really about baboons, though; mostly, it's about the author's adventures in Africa (some of them surely inspired by an intense death wish) and his interaction with the local peoples. I really liked this book overall - it's engaging, educational and funny. Should try more books by this author. "Seeing Like a State" by James C. Scott - the author attempts to link various events in history to discuss "Why do well-intentioned plans for improving the human condition go tragically awry?"; discussing large state plans like scientific forest management, building pre-planned cities and mono-culture agriculture. Some of the chapters are interesting, but overall I'm not sure I'm sold on the thesis. Specifically, the author mixes in private enterprises (like industrial agriculture in the West) with state-driven initiatives in puzzling ways. "Karate-Do: My Way of Life" by Gichin Funakoshi - short autobiography from the founder of modern Shotokan Karate.
It's really interesting to find out how recent it all is - prior to WWII, Karate was an obscure art practiced mostly in Okinawa and a bit in other parts of Japan. The author played a critical role in popularizing Karate and spreading it out of Okinawa in the first half of the 20th century. The writing is flowing and succinct - I really liked this book. "A Tale of a Ring" by Ilan Sheinfeld (read in Hebrew) - a multi-generational fictional saga of two families who moved from Danzig (today Gdansk in Poland) to Buenos Aires in late 19th century, with a touch of magic. Didn't like this one very much. "The Wide Wide Sea: Imperial Ambition, First Contact and the Fateful Final Voyage of Captain James Cook" by Hampton Sides - a very interesting account of Captain Cook's last voyage (the one tasked with finding a northwest passage around Canada). The book has a strong focus on his interaction with Polynesian peoples along the way, especially on Hawaii (which he was the first European to visit). "The Suitcase" by Sergei Dovlatov - (read in Russian) a collection of short stories in Dovlatov's typical humorist style. Very nice little book. "The Second Chance Convenience Store" by Kim Ho-Yeon - a collection of connected stories centered around a convenience store in Seoul, and an unusual new employee that began working night shifts there. Short and sweet fiction, I enjoyed it. "A History of the Bible: The Story of the World's Most Influential Book" by John Barton - a very detailed history of the Bible, covering both the old and new testaments in many aspects. Some parts of the book are quite tedious; it's not an easy read. Even though the author tries to maintain a very objective and scientific approach, it's apparent (at least for an atheist) that he skirts as close as possible to declaring it all nonsense, given that he's a priest! "Rust Atomics and Locks: Low-Level Concurrency in Practice" by Mara Bos - an overview of low-level concurrency topics using Rust. 
It's a decent book for people not too familiar with the subject; I personally didn't find it too captivating, but I do see the possibility of referring to it in the future if I get to do some lower-level Rust hacking. A comment on the code samples: it would be nice if the accompanying repository had test harnesses to observe how the code behaves, and some benchmarks. Without these, many claims made in the book feel empty, with no real data to back them up, and it's challenging to play with the code and see it perform in real life. "Hot Chocolate on Thursday" by Michiko Aoyama - a bit similar to "What You Are Looking for Is in the Library" by the same author: connected short stories about ordinary people living their life in Japan (with one detour to Australia). Slightly worse than the previous book, but still pretty good. "The Silmarillion" by J.R.R. Tolkien - even though I'm a big LOTR fan, I've never gotten myself to read this one, due to its reputation for being difficult. What changed things eventually (25 years after my first read-through of LOTR) is my kids! They liked LOTR so much that they went straight ahead to Silmarillion and burned through it as well, so I couldn't stay behind. What can I say, this book is pretty amazing. The amazing thing is how a book can be both epic and borderline unreadable at the same time :) Tolkien really let himself go with the names here (3-4 new names introduced per page, on average), names for characters, names for natural features like forests and rivers, names for all kinds of magical paraphernalia; names that change in time, different names given to the same thing by different peoples, and on and on. The edition I was reading has a name index at the end (42 pages long!) which was very helpful, but it still made the task only marginally easier. Names aside though, the book is undoubtedly monumental; the language is outstanding.
It's a whole new mythology, Bible-like in scope, all somehow more-or-less consistent (if you remember who is who, of course); it's an injustice to see this just as a prelude to the LOTR books. Compared to the scope of the Silmarillion, LOTR is just a small speck of a quest told in detail; The Silmarillion - among other things - includes brief tellings of at least a dozen stories of similar scope. Many modern book (or TV) series build whole "universes" with their own rules, history and aesthetic. The Silmarillion must be considered the OG of this. "Travels with Charley in Search of America" by John Steinbeck "Deep Work" by Cal Newport "The Philadelphia Chromosome" by Jessica Wapner "The Price of Privilege" by Madeline Levine

0 views