Latest Posts (20 found)
Jim Nielsen · 28 days ago

You Might Debate It — If You Could See It

Imagine I’m the design leader at your org and I present the following guidelines I want us to adopt as a team for doing design work:

- Typography: Use expressive, purposeful fonts and avoid default stacks (Inter, Roboto, Arial, system).
- Motion: Use a few meaningful animations (page-load, staggered reveals) instead of generic micro-motions.
- Background: Don't rely on flat, single-color backgrounds; use gradients, shapes, or subtle patterns to build atmosphere.
- Overall: Avoid boilerplate layouts and interchangeable UI patterns. Vary themes, type families, and visual languages.

How do you think that conversation would go? I can easily imagine a spirited debate where some folks disagree with any or all of my points, arguing that they should be struck as guidelines from our collective ethos of craft. Perhaps some are boring, or too opinionated, or too reliant on trends. There are lots of valid, defensible reasons. I can easily see this discussion being an exercise in frustration, where we debate for hours and get nowhere: “I suppose we can all agree to disagree”.

And yet, thanks to a link to Codex’s front-end tool guidelines in Simon Willison’s article about how coding agents work, I see that these are exactly the kind of guidelines that are tucked away inside an LLM that’s generating output for many teams. It’s like a Trojan Horse of craft: guidelines you might never agree to explicitly are guiding LLM outputs, which means you are agreeing to them implicitly.

It’s a good reminder about the opacity of the instructions baked into generative tools. We would debate an open set of guidelines for hours, but if they’re opaquely baked into a tool without our knowledge, does anybody even care? When you offload your thinking, you might be on-loading someone else’s you’d never agree to, personally or collectively.

Reply via: Email · Mastodon · Bluesky


A Satisfied Customer Review Of The Yogurtia

And now for something completely different. For years, we’ve been happy users of the Yogurtia , a Japanese “fermented food maker”. That alone should sound enticing enough to warrant this small review! What’s a fermented food maker? I’m glad you asked. It’s a maker for food to ferment. Next question.

In case that wasn’t crystal clear, here’s a common way we employ our Yogurtia: to make yoghurt. Shocking, given the name, right? There are plenty of mundane-looking kitchen appliances out there that can “make yoghurt”, so why should you import a Japanese device instead? While researching yoghurt making machines, we often encounter contraptions you can put multiple small containers in that will be heated to 40 degrees Celsius for eight to twelve hours. Once it’s done, you pull out the containers and voilà: your very own yoghurt pots.

The Yogurtia doesn’t do this. Instead, there’s one giant container where you pour in milk and remnants of your previous yoghurt. That means you can make much more in one go—but that also means you can more easily put in other stuff. The biggest reason for buying the Yogurtia is the capability to precisely configure the temperature and time it needs to ferment. Most basic yoghurt makers just come with an on/off switch. We can set it to 60 degrees instead of the usual 40 if we want to more easily ferment other stuff.

Preparing breakfast with a freshly made yoghurt container thanks to the Yogurtia maker.

Perhaps I should elaborate on the “other stuff”. While the Yogurtia obviously markets itself in the west towards yoghurt lovers, the real purpose of this neat little contraption is to make amazake and nattō . I’ve had great success with the former. To make amazake, you’ll first need to grow a specific mold called koji on rice. Activating that koji is done at 60 degrees, which is too hot for most small fermentation chambers/yoghurt makers. I produce koji-fied rice in my fridge-hacked inoculation room .
A rice cooker that can be properly configured might be another option, but cheaper machines often have trouble maintaining the temperature, requiring you to add some cold water. If the temperature is too high, the koji will be killed off, resulting in a less sweet beverage, as the mold is responsible for breaking down the carbs of the rice into simple sugars.

In a previous employer’s canteen, I was known as the amazake guy. I brought the smelly stuff to work for interested colleagues to try out and to enthuse them to get started on fermenting stuff themselves. The result was met with mixed success: most people said yuck! , I got the label “the amazake guy”, and one time I forgot to take the canister out of the fridge at work. Or maybe the order is reversed here; that would certainly make more sense. I tried once more, spamming everyone to go out and buy Sandor Katz’s The Art of Fermentation bible. Then I tried bringing pickled stuff to work. More yuck! and what strange colour does that radish have?

The one thing I didn’t try, which I’m making up for by writing this satisfied customer review, is convincing them to buy a Yogurtia. Maybe I should have done that instead. In Belgium, yoghurt is one of the few “fresh” fermented products almost everyone eats regularly (we’ll ignore cheese; sausages; wine; olives; and yes, even chocolate ; … for now). Did you know you can use a spoonful of sourdough starter to jump-start the yoghurt making process? Did you know you can jump-start the bread rising process by using a spoonful of yoghurt? Food for thoug—no, a new blog post.

A+++. Would buy again. (And did buy again. Never connect a Japanese electronic device, which assumes a 100 V supply, directly to the 230 V European power grid. Ouch. That plastic did melt good.)

Related topics: fermentation. By Wouter Groeneveld on 20 March 2026. Reply via email .


Melanie Richards

This week on the People and Blogs series we have an interview with Melanie Richards, whose blog can be found at melanie-richards.com/blog . Tired of RSS? Read this in your browser or sign up for the newsletter . People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month.

I’m a Group Product Manager co-leading the core product at Webflow, i.e. helping teams visually design and build websites. My personal mission is to empower people to make inspiring, impactful, and inclusive things on the web. That’s been the through line of my career so far: I started out as a designer at a full-service agency called Fuzzco, moved to the web platform at Microsoft Edge, continued building for developers at Netlify, and am now aiming to make web creation even more democratic with the Webflow platform. I transitioned from design to product management while at Microsoft Edge. I wanted to take part in steering the future of the web platform, instead of remaining downstream of those decisions. I feel so lucky to have worked on new features in HTML, ARIA, CSS, and JavaScript with other PMs and developers in the W3C and WHATWG. I’m a builder at heart, so I love to work on webby side projects as well as a whole bevy of analog hobbies: knitting, sewing, weaving, sketchbooking, and journaling.

I have a couple of primary blogs right now. From 2013–2016 I also had a blog and directory called Badass Lady Creatives (wish I had spent more than five minutes on the name, haha). This featured women who were doing cool things in various “creative” industries. At the time it seemed like every panel, conference lineup, and group project featured all or mostly dudes. The blog was a way to push back on that a little bit and highlight people who were potentially overlooked. Since then gender representation (for one) seems to have gotten a bit better in these industries.
But the work and joy of celebrating diverse, inspiring talent is never done! Big “yeet to production” vibes for me! I use Obsidian to scribble down my thoughts and write an initial draft. Obsidian creates Markdown files, so I copy and paste those into Visual Studio Code (my code editor), add some images and make some tweaks, and then push to production. I really try not to overthink it too much. However, I will admit that I have a ton of drafts in Obsidian that never see the light of day. It can be cathartic enough just to scribble it down, even if I never publish the thought. For my Learning Log posts, I use a Readwise => Obsidian workflow I describe in this blog post . Reader by Readwise is the app where I store and read all my RSS feeds and newsletter forwards.

“Parallel play” is the biggest, most joyful boon to my creativity. I love to be in the company of others as we independently work on our own projects side by side. There’s a delicate balance when it comes to working on creative projects socially. For example, my mom, my aunt, and I often have Sew Day over FaceTime on Sundays. Everyone’s pretty committed to what they’re working on, so it’s easy to sew and talk and sing (badly 😂) at the same time. I also used to go to a local craft night that very sadly disbanded when the host shop changed hands. Writing or coding takes a bit more mental focus for me. I started a Discord server with a few friends, which is dedicated to working on blog posts and side projects. We meet up once a month to talk about our projects (and shoot the breeze, usually about web accessibility and/or the goodness of dogs). Then we all log off the voice channel to go do the thing!

Both of these blogs use Eleventy and plain ol’ Markdown, and are hosted on Netlify. Some of my other side projects use a content management system (CMS) like Webflow’s CMS, or Contentful + Eleventy. Again, Webflow is my current employer.
I use a Netlify form for comments on my “Making” blog, and Webmentions for my main blog. I will probably pull out Webmentions from that code base: conceptually they’ve never really “landed” for me, and it would be nice to delete a ton of code. I generally like my setup, though sometimes I think about migrating my “Making” blog onto a CMS. As far as CMSes go, I quite like Webflow’s: it’s straightforward and has that Goldilocks level of functionality for me. Some other CMSes I’ve tried have felt bloated yet seemed to miss obvious functionality out of the box. I have a Bookshop.org affiliate link and it took me several years to meet the $20 minimum payout so…yeah I’ve never truly monetized my blogging! I find there’s freedom in giving away your thoughts for free. As far as costs go, I have pretty low overhead: just paying for the domain name. I’m fine with other folks monetizing personal blogs, though of course there’s a classy and not-classy way to do so. If monetizing is what keeps bloggers’ work on the open web, on sites they own and control, I prefer that over monetizing through walled gardens. Related: Substack makes it easy to monetize but there are some very compelling reasons to consider alternatives. This is highly topical: I’m currently scheming about a directory site listing “maker” blogs! So many communities in the visual arts and crafts are stuck on social media platforms they don’t even enjoy, beholden to the whims of an algorithm. I’d like to connect makers in a more organic way. If you’re a crafter who would like to be part of this, feel free to fill out this Google form ! Now that you're done reading the interview, go check the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 133 interviews . People and Blogs is possible because kind people support it. melanie-richards.com/blog, simply the blog that lives at my main website. 
I post here about the web, design, development, accessibility, product management, etc. One practice I’ve been keeping for a few years now is my monthly Learning Log. These posts are a compendium of what I’ve been shipping or making, what I’ve been learning, side quests, neat links around the internet, and articles I’ve been reading. When I’m in a particularly busy period (as was the case in 2025; my first child was born in September), this series is my most consistent blogging practice. making.melanie-richards.com : this is the blog where I post about my aforementioned analog projects. Quite a lot of sewing over the past year!

Mandy Brown , Oliver Burkeman (technically a newsletter with a “view on web” equivalent), and Ethan Marcotte ’s writing have been helping to fill my spiritual cup over the last couple of years. Anh and Katherine Yang are doing neat things on their sites. What Claudia Wore is a nostalgic pick; I’d love to recreate some of these outfits sometime. Thank you Kim for keeping the blog up!

Sarah Higley would be a great next interview. She blogs less frequently, but always with great depth and thoughtfulness on web accessibility. Web developers can learn quite a lot about more involved controls and interactions from Sarah.


Three Men Tried To Steal My Motorbike While I Was On It!

Spring is in the air here in North Wales, so I decided to take one of my motorbikes to the office yesterday. On the way home, not too far from where I live, I was sat at traffic lights when all of a sudden three men on off-road bikes surrounded me. One left, one right, one right up to my back wheel. And they were really close, like, inches from me kinda close. I immediately felt uneasy, like something was about to happen. I think we as a species have a sense for this kinda thing.

Anyway, seconds later the guy on my left reached over to, I assume, grab the keys for the bike, but I was on the BMW, which has a keyless ignition, luckily. I clocked what the guy was trying to do, panicked, and kicked the side of his bike as hard as I could. Which, thankfully, was enough to put him off balance, causing him to topple over. Then I clobbered the guy to the right around the head - he was wearing a helmet so it wouldn't have hurt him, but I suppose I figured it would be enough to shock him and buy me a couple of seconds. I dunno, I was basically shitting my pants at this point.

As soon as I'd hit the guy to my right, I took off like the absolute clappers, running a red light in the process (thank goodness nothing was coming the other way). My BMW is a fast bike, at 1000cc and over 170BHP. They were on dirt bikes, which are nowhere near as quick as mine. I also had knowledge of the local roads, which I hoped they didn't. As I flew off, they gave chase but quickly dropped back. A brief glance at my speedo showed I was doing over 120MPH, but it was working.

In my panic I didn't know what to do - shall I go home? What if they see me pull in and find out where I live? Should I go somewhere else? But it's rush hour and if I get caught in traffic they could catch up to me again - my bike is a lot quicker on the open road, but in traffic they would have the advantage. I decided to floor it and get home as quick as possible.
There's a straight road that leads to my village, so I figured if I couldn't see them behind me I'd quickly swing the bike in and hide behind the garage (which can't be seen from the road). If I couldn't, I'd just carry on and continue trying to lose them. I'm nearing my drive now, so I glance in the mirrors and see nothing; I decide to risk it and swing in, going up our gravel drive as quickly as I dare, while simultaneously hoping the kids aren't playing in the drive. They aren't. I dive in behind the garage and wait...5 seconds...10 seconds...I hear bikes getting closer. They fly right past my drive, going way too fast for our single track village road.

My wife later asked the owner of the village pub if he caught anything on his CCTV. Here's what he found for us: It looks like only one of them had a number plate, and it's pretty much parallel with the road, so impossible to identify from the video. We've passed it on to the police and we're waiting to hear back from their forensics dept. to see if they can pick up any prints from my bike. I don't remember if they had gloves on, though, and I'm not very confident it will come to anything.

I'm fine now, but it shook me up. I just hope they were opportunist idiots, rather than something more sinister. I've already bought myself a camera for the garage. Stay safe out there, folks. Thanks for reading this post via RSS. RSS is ace, and so are you. ❤️ You can reply to this post by email , or leave a comment .


Podsync - I finally built my podcast track syncer

I host and edit a podcast 1 . When recording remotely, we each record our own audio locally (I on my end, my co-host on his). The service we use (Adobe Podcast, Zoom, Skype-RIP) captures everyone together as a master track. But the quality doesn’t match what each person records locally with their own microphone. So we use that master as a reference point and stitch the individual local tracks together. This is what the industry calls a “ double-ender ”. Add a guest and it becomes a “triple-ender”.

But this gets hairy during editing. Each person starts their recording at a slightly different moment — everyone hits record at a different time. Before I can edit, I need to line everything up. Drop all the tracks into a DAW, play the master alongside each individual track, nudge by ear until the speech aligns. Add a guest and it gets tedious fast. 10–15 minutes of fiddly, ear-straining alignment before I’ve even started editing. There’s also drift. Each machine’s audio clock runs at a slightly different rate, so two tracks that are perfectly aligned at minute one might be 200ms apart by minute sixty.

So I built Podsync 2 . I first heard of a similar technique from Marco Arment — back in ATP episode 25 . He had a new app for aligning double-ender tracks and was already thinking about whether something so niche was even worth releasing publicly. I don’t think he ever released it. Being a Kotlin developer at the time, I figured I’d build my own. Java was mature. Surely there were audio processing libraries that could handle this. There weren’t 😅. At least not in any clean, usable form. Getting the right signal processing pieces together in JVM-land was awkward enough that my interest fizzled, so I kept doing it by hand.

When I revamped Fragmented , I finally came back to this. I used Claude to help me build it — in Rust, no less. 3 But before you chalk this up to another vibecoded project, hear me out. The interesting part here wasn’t just that AI made it easier.
It was thinking through the actual algorithm:

- Voice activity detection ( VAD ) to find speech regions.
- MFCC features to fingerprint the audio.
- Cross-correlation to find where the tracks match.

Some real signal processing techniques, not just prompt engineering. Now, could I have prompted my way to a solution? Probably. But I like to think years of manually aligning tracks — and some sound engineering intuition — helped me steer the AI towards a better solution.

Working on this felt refreshing. In an era where half the conversation is about AI replacing engineering work, here’s a problem where the hard part is still the problem itself — understanding the domain, picking the right approach, knowing what “correct” sounds like. It gives me confidence that solving real problems well still has its place. I like how Dax ( thdxr on twitter ) put it: I really don’t care about using AI to ship more stuff. It’s really hard to come up with stuff worth shipping.

The core idea: take a chunk of speech from a participant track, compare it against the master recording, find where they match best. That position is the time offset. The trick is picking which chunk of speech to use. Rather than betting on a single region, Podsync finds a few strong candidates per track (longer contiguous speech blocks preferred) and tries each one against the master. For long candidates, it samples from the start, middle, and end. The highest-confidence match wins; if a second independent region agrees on the same offset, that corroboration factors in as a tie-breaker.

After finding the offset, Podsync pads or trims each track to align with the master and match its length (and outputs some info on the offset). Drop the output into my DAW at 0:00. Done. I even wrote an agent skill you can just point your agent harness to and it will take care of all the steps for you. What used to be 10–15 minutes of alignment per episode is now a single command.
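To make the correlation step concrete, here’s a minimal sketch. This is my own illustration, not Podsync’s actual code, and all names are hypothetical: it stands in for real MFCC fingerprints with crude short-time energy envelopes, then cross-correlates them to estimate the offset between a local track and the master.

```python
import numpy as np

def estimate_offset(master, track, sr, frame=1024):
    """Estimate the head offset (in seconds) of `track` relative to `master`
    by cross-correlating short-time energy envelopes. A positive result means
    the track has that much extra audio at its head (trim it); a negative
    result means it started late (pad it)."""
    def envelope(x):
        # RMS energy per frame -- a crude stand-in for MFCC fingerprints.
        n = len(x) // frame
        return np.array([np.sqrt(np.mean(x[i * frame:(i + 1) * frame] ** 2))
                         for i in range(n)])

    em, et = envelope(master), envelope(track)
    # Mean-removed cross-correlation; the peak marks the best alignment.
    corr = np.correlate(em - em.mean(), et - et.mean(), mode="full")
    lag = np.argmax(corr) - (len(et) - 1)
    return -lag * frame / sr
```

A real implementation needs proper VAD and MFCC features to survive noise and level differences, and this envelope trick quantizes the result to whole frames, but the overall shape (fingerprint, cross-correlate, pick the peak) is the same.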
Marco, if you ever read this, I would still love to see your implementation! His solution (as I understand it) is aimed more at correcting the drift vs getting the offset right. In practice, I haven’t found drift to be much of a problem. It exists but stays minor, and I’m typically editing every second of the podcast anyway so it’s easy enough to handle by hand. I even had a branch that corrected drift by splicing at silence points, but it complicated things more than it helped.

It’s a podcast on AI development but we strive to make it high signal. None of that masturbatory AI discourse .  ↩︎

See also Phone-sync .  ↩︎

I chose Rust (it’s what interests me these days ), and a CLI tool with no runtime dependency is more pleasant to distribute.  ↩︎


My heuristics are wrong. What now?

More words. More meaning? Some people who ask me for advice at work get a lot of words in reply. Sometimes, those responses aren’t specific to my particular workplace, and so I share them here. In the past, I’ve written about echo chambers , writing , writing for an audience , time management , and getting big things done .

Do you remember Cool Runnings ? In the movie, John Candy is a retired bobsled champion who uses his experience, connections, and lovable curmudgeon character to turn a rag-tag group of sprinters into an Olympic bobsled team. A lot of principal engineer types think of themselves this way: they used to bobsled, they don’t bobsled, but they still know the skills and the people and the equipment. And that worked well enough, while we were still bobsledding. But we’re not bobsledding anymore.

Many of the heuristics that we’ve developed over our careers as software engineers are no longer correct. Not all of them. But many. What it means for a system to be maintainable. How much it costs to write code versus integrate libraries versus take service dependencies. What it means for an API to be well designed, or ergonomic, or usable. What it means to understand code. Where service boundaries should be. Where security and data integrity should be enforced. What’s easy. What’s hard.

We’ve seen this play out in small ways before. Over the last decade, I’ve frequently been frustrated by experienced folks who didn’t update their system design heuristics to match the cloud, to match SSDs, to match 100Gb/s networks, and so on. But this is the biggest change I’ve seen in my career by far. An extinction-level event for rules of thumb. But you’re a tech leader, and you need to lead, and leading is heavily based on using your experience to help people and teams be more effective. What now?

The victorious man in the day of crisis is the man who has the serenity to accept what he cannot help and the courage to change what must be altered. 1

Let me assume that you want to continue to be a valuable tech leader. You want your teams and organizations to succeed. That you’re willing to sound less smart and less sure, in the interests of being right and helpful. In that case, and I hope that is the case, your job has changed. Your job, for the foreseeable future, is to have the humility to accept that many of your heuristics are wrong, the courage to believe some are still right, and the curiosity to actively learn the difference.

You can’t throw out everything you know. Your taste, your high standards, your understanding of your business and customers and the deep technical trade-offs in your area are more valuable than ever before. This is like that fantasy people have of going back to middle school knowing all the things they know now 2 . You’re ahead of the pack in many ways. But you also need to really deeply question the things you know, and the things you assume. Before you share one of your rules of thumb, you need to deeply examine whether it’s still right.

And the way you’re going to know that, right now, is by getting back on the ice. Build. Own. Get your hands dirty and use the tools. Build something real. Build a prototype. Build a thousand little experiments in an afternoon. Challenge yourself to try to do something you previously would have assumed is impossible, or infeasible, or unaffordable. Find one of the ways that you’re worried the new tools are going to lead to trouble, and actively fix it. Then examine the things you’re learning. Update your constants.

Over the next couple of years, the most valuable people to have on a software team are going to be experienced folks who’re actively working to keep their heuristics fresh. Who can combine curiosity with experience. Among the least valuable people to have on a software team are experienced folks who aren’t willing to change their thinking. Beyond that, it’s hard to see. This is going to be hard for some folks.
It’s hard to admit where you’re wrong. It’s hard to go back to being a beginner. It’s easy to stick your fingers in your ears and say “No, it’s the children who are wrong”. My advice is to not be that guy. The good news? It’s as fun as hell. Get building, get learning, make something exist that you couldn’t imagine before.

1. Winnifred Crane Wygal, paraphrasing Reinhold Niebuhr.
2. A fantasy I have never understood. Being 13 once was enough for a lifetime, thank you very much.


SQLAlchemy 2 In Practice - Chapter 1 - Database Setup

Welcome! This is the start of a journey which I hope will provide you with many new tricks to improve how you work with relational databases in your Python applications. Given that this is a hands-on book, this first chapter is dedicated to helping you set up your system with a database, so that you can run all the examples and exercises. This is the first chapter of my SQLAlchemy 2 in Practice book. If you'd like to support my work, I encourage you to buy this book, either directly from my store or on Amazon . Thank you!
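As a quick smoke test of that setup (my own sketch, not an excerpt from the book, which may use a different database), the following uses SQLAlchemy 2's 2.0-style API against an in-memory SQLite database, which needs no server installation:

```python
from sqlalchemy import create_engine, text

# An in-memory SQLite database: handy for verifying your SQLAlchemy 2
# installation before configuring a real database server.
engine = create_engine("sqlite:///:memory:")

with engine.connect() as conn:
    # In SQLAlchemy 2, textual SQL must be wrapped in text().
    value = conn.execute(text("SELECT 'hello, sqlalchemy'")).scalar()
    print(value)  # hello, sqlalchemy
```

If this prints the greeting, your Python environment and SQLAlchemy install are ready for the chapter's exercises.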


Thoughts on OpenAI acquiring Astral and uv/ruff/ty

The big news this morning: Astral to join OpenAI (on the Astral blog) and OpenAI to acquire Astral (the OpenAI announcement). Astral are the company behind uv , ruff , and ty - three increasingly load-bearing open source projects in the Python ecosystem. I have thoughts!

The Astral team will become part of the Codex team at OpenAI. Charlie Marsh has this to say :

Open source is at the heart of that impact and the heart of that story; it sits at the center of everything we do. In line with our philosophy and OpenAI's own announcement , OpenAI will continue supporting our open source tools after the deal closes. We'll keep building in the open, alongside our community -- and for the broader Python ecosystem -- just as we have from the start. [...] After joining the Codex team, we'll continue building our open source tools, explore ways they can work more seamlessly with Codex, and expand our reach to think more broadly about the future of software development.

OpenAI's message has a slightly different focus (highlights mine):

As part of our developer-first philosophy, after closing OpenAI plans to support Astral’s open source products. By bringing Astral’s tooling and engineering expertise to OpenAI, we will accelerate our work on Codex and expand what AI can do across the software development lifecycle.

This is a slightly confusing message. The Codex CLI is a Rust application, and Astral have some of the best Rust engineers in the industry - BurntSushi alone ( Rust regex , ripgrep , jiff ) may be worth the price of acquisition! So is this about the talent or about the product? I expect both, but I know from past experience that a product+talent acquisition can turn into a talent-only acquisition later on.

Of Astral's projects the most impactful by far is uv . If you're not familiar with it, uv is by far the most convincing solution to Python's environment management problems, best illustrated by this classic XKCD : Switch to uv and most of these problems go away.
I've been using it extensively for the past couple of years and it's become an essential part of my workflow. I'm not alone in this. According to PyPI Stats uv was downloaded more than 126 million times last month! Since its release in February 2024 - just two years ago - it's become one of the most popular tools for running Python code. Astral's two other big projects are ruff - a Python linter and formatter - and ty - a fast Python type checker. These are popular tools that provide a great developer experience but they aren't load-bearing in the same way that is. They do however resonate well with coding agent tools like Codex - giving an agent access to fast linting and type checking tools can help improve the quality of the code they generate. I'm not convinced that integrating them into the coding agent itself as opposed to telling it when to run them will make a meaningful difference, but I may just not be imaginative enough here. Ever since started to gain traction the Python community has been worrying about the strategic risk of a single VC-backed company owning a key piece of Python infrastructure. I wrote about one of those conversations in detail back in September 2024. The conversation back then focused on what Astral's business plan could be, which started to take form in August 2025 when they announced pyx , their private PyPI-style package registry for organizations. I'm less convinced that pyx makes sense within OpenAI, and it's notably absent from both the Astral and OpenAI announcement posts. An interesting aspect of this deal is how it might impact the competition between Anthropic and OpenAI. Both companies spent most of 2025 focused on improving the coding ability of their models, resulting in the November 2025 inflection point when coding agents went from often-useful to almost-indispensable tools for software development. The competition between Anthropic's Claude Code and OpenAI's Codex is fierce . 
Those $200/month subscriptions add up to billions of dollars a year in revenue, for companies that very much need that money. Anthropic acquired the Bun JavaScript runtime in December 2025, an acquisition that looks somewhat similar in shape to Astral. Bun was already a core component of Claude Code and that acquisition looked to mainly be about ensuring that a crucial dependency stayed actively maintained. Claude Code's performance has increased significantly since then thanks to the efforts of Bun's Jarred Sumner. One bad version of this deal would be if OpenAI start using their ownership of uv as leverage in their competition with Anthropic.

One detail that caught my eye from Astral's announcement, in the section thanking the team, investors, and community:

Second, to our investors, especially Casey Aylward from Accel, who led our Seed and Series A, and Jennifer Li from Andreessen Horowitz, who led our Series B. As a first-time, technical, solo founder, you showed far more belief in me than I ever showed in myself, and I will never forget that.

As far as I can tell neither the Series A nor the Series B were previously announced - I've only been able to find coverage of the original seed round from April 2023 . Those investors presumably now get to exchange their stake in Astral for a piece of OpenAI. I wonder how much influence they had on Astral's decision to sell.

Armin Ronacher built Rye , which was later taken over by Astral and effectively merged with uv. In August 2024 he wrote about the risk involved in a VC-backed company owning a key piece of open source infrastructure and said the following (highlight mine):

However having seen the code and what uv is doing, even in the worst possible future this is a very forkable and maintainable thing . I believe that even in case Astral shuts down or were to do something incredibly dodgy licensing wise, the community would be better off than before uv existed.
Astral's own Douglas Creager emphasized this angle on Hacker News today: All I can say is that right now, we're committed to maintaining our open-source tools with the same level of effort, care, and attention to detail as before. That does not change with this acquisition. No one can guarantee how motives, incentives, and decisions might change years down the line. But that's why we bake optionality into it with the tools being permissively licensed. That makes the worst-case scenarios have the shape of "fork and move on", and not "software disappears forever". I like and trust the Astral team and I'm optimistic that their projects will be well-maintained in their new home. OpenAI don't yet have much of a track record with respect to acquiring and maintaining open source projects. They've been on a bit of an acquisition spree over the past three months though, snapping up Promptfoo and OpenClaw (sort-of, they hired creator Peter Steinberger and are spinning OpenClaw off to a foundation), plus closed source LaTeX platform Crixet (now Prism). If things do go south for uv and the other Astral projects, we'll get to see how credible the forking exit strategy turns out to be.

0 views
Martin Fowler Yesterday

Fragments: March 19

David Poll points out the flawed premise of the argument that code review is a bottleneck: To be fair, finding defects has always been listed as a goal of code review – Wikipedia will tell you as much. And sure, reviewers do catch bugs. But I think that framing dramatically overstates the bug-catching role and understates everything else code review does. If your review process is primarily a bug-finding mechanism, you’re leaving most of the value on the table. Code review answers: “Should this be part of my product?” That’s close to how I think about it. I think of code review as primarily about keeping the code base healthy. And although many people think of code review as pre-integration review done on pull requests, I look at code review as a broader activity both done earlier (Pair Programming) and later (Refinement Code Review). At Firebase, I spent 5.5 years running an API council… The most valuable feedback from that council was never “you have a bug in this spec.” It was “this API implies a mental model that contradicts what you shipped last quarter” or “this deprecation strategy will cost more trust than the improvement is worth” or simply “a developer encountering this for the first time won’t understand what it does.” Those are judgment calls about whether something should be part of the product – the same fundamental question that code review answers at a different altitude. No amount of production observability surfaces them, because the system can work perfectly and still be the wrong thing to have built. His overall point is that code review is all about applying judgment, steering the code in a good direction. AI raises the level of that judgment, focusing review on more important things. I agree that we shouldn’t be thinking of review as a bug-catching mechanism, and that it’s about steering the code base. In addition I’d also add that it’s about communication between people, enabling multiple perspectives on the development of the product.
This is true both for code review, and for pair programming. ❄                ❄                ❄                ❄                ❄ Charity Majors is unhappy with me and the rest of the folks that attended the Thoughtworks Future of Software Development Retreat. But the longer I sit with this recap, the more troubled I am by what it doesn’t say. I worry that the most respected minds in software are unintentionally replicating a serious blind spot that has haunted software engineering for decades: relegating production to the realm of bugs and incidents. There are lots of things we didn’t discuss in that day-and-a-half, and it’s understandable that a topic that matters so deeply to her is visible by its absence. I’m certainly not speaking for anyone else who was there, but I’ll take the opportunity to share some of my thoughts on this. I consider observability to be a key tool in working with our AI future. As she points out, observability isn’t really about finding bugs - although I’ve long been a supporter of the notion of QA in Production. Observability is about revealing what the system actually does, when in the hands of its actual users. Test cases help you deal with the known paths, but reality has a habit of taking you into the unknowns: not just the unknowns of the software’s behavior in unforeseen places, but also the unknowns of how the software affects the broader human and organizational systems it’s embedded into. By watching how software is used, we can learn about what users really want to achieve; these observed requirements are often things that never popped up in interviews and focus groups. If these unknown territories are true in systems written line-by-line in deterministic code, it’s even more true when code is written in a world of supervisory engineering where humans are no longer expected to look over every semi-colon.
Certainly harness engineering and humans in the loop help, and I’m as much a fan as ever of the importance of tests as a way to both explain and evaluate the code. But these unknowns will inevitably raise the importance of observability and its role in understanding what the system thinks it does. I think it’s likely we’ll see a future where much of a developer’s effort is figuring out what a system is doing and why it’s behaving that way, where observability tools are the IDE. In this I ponder the lesson of AI playing Go. AlphaGo defeated the best humans a decade ago, and since then humans study AI to become better players and maybe discover some broader principles. I’m intrigued by how humans can learn from AI systems to improve in other fields, where success is less deterministically defined. ❄                ❄                ❄                ❄                ❄ Tim Requarth questions the portrayal of AI as an amplifier for human cognition. He considers the different way we navigate with GPS compared to maps. If you unfold a paper map, you study the streets, trace a route, convert the bird’s-eye abstraction into the first-person POV of actually walking—and by the time you arrived, you’d have a nascent mental model of how the city fits together. Or you could fire up Google Maps: A blue dot, an optimal line from A to B, a reassuring robotic voice telling you when to turn. You follow, you arrive, you have no idea, really, where you are. A paper map demands something from you, and that demand leaves you with knowledge. GPS requires nothing, and leaves you with nothing. A paper map and GPS are tools with the same purpose, but opposite cognitive consequences. He introduces some attractive metaphors here. Steve Jobs called computers “bicycles for the mind”; Satya Nadella said with the launch of ChatGPT that “we went from the bicycle to the steam engine”. Like another 19th-century invention, the steam locomotive, the bicycle was a technological revolution.
But a train traveler sat back and enjoyed the ride, while a cyclist still had to put in effort. With a bicycle, “you are traveling,” wrote a cycling enthusiast in 1878, “not being traveled.” In both examples, there’s a difference between tools that extend capability and tools that replace it. The question is: what do we lose when we are passive in the journey? He argues that Silicon Valley executives are too focused on the goal, and ignoring what happens to the humans being traveled. Much of this depends, I think, on whether we care about what we are losing. I struggle with mental arithmetic, so I value calculators, whether on my phone or elsewhere. I don’t think I lose anything when I let the machine handle the toil of calculation. I share missing the sense of place when using a GPS over a map, but am happy that I can now drive through Lynn without getting lost. And when it comes to writing, I have no desire to let an LLM write this page.


Social media reimagined

We’re all familiar with social media: the Facebooks, the Twitters, the TikToks of this silly digital world. They have invaded our lives and taken over our time and attention. We have spent the past decade posting, snapping, tweeting, reeling (?), tiktoking (??). We fall asleep youtubing, only to wake up with our “for you” page completely fucked up because the algorithm lives a life of its own and has decided to profile us as someone who loves sheep herding and carpet cleaning (and, you know, maybe it's right). But imagine for a second if someone managed to reinvent social media. Imagine if there was a new product out there on the internet. A product so revolutionary, so original, so refreshingly different, that it would completely transform the way you feel and interact with other people online. Can you feel the excitement building? Well, I’m sorry—not sorry—to disappoint you, because that product is not here. What is here, though (blame Kevin), is a silly little experiment: the Dealgorithm IRC server. I was thinking about setting an IRC server up just for fun, and he took the idea, ran with it, and the server is now live. Now, contrary to the fools at Digg, I know how the web works, and there’s no chance in hell I’d leave this server open to the internet so that every weirdo out there could join. Which is why, if you’re interested in joining, you need to apply by filling out this form. I’m not going to request a copy of your ID…for now. The server is currently set up to retain up to 2000 messages per channel for up to 48 hours. We might play with these settings, but I don’t want this to be a place for content to stick around. The idea is to have a space where a bunch of people can hang out in a very casual way and talk about anything they find interesting. We may or may not permanently ban you if you profess your love for AI. You’ve been warned. Thank you for keeping RSS alive. You're awesome.
Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs


Nexus Machine: An Energy-Efficient Active Message Inspired Reconfigurable Architecture

Nexus Machine: An Energy-Efficient Active Message Inspired Reconfigurable Architecture Rohan Juneja, Pranav Dangi, Thilini Kaushalya Bandara, Tulika Mitra, and Li-Shiuan Peh MICRO'25 This paper presents an implementation of the Active Message (AM) architecture, as an alternative to FPGA/CGRA architectures. AM architectures have been studied for a while; this was my first exposure. An accelerator implemented on an FPGA or CGRA typically uses a spatial computing paradigm. Each “instruction” in the algorithm is pinned to a physical location on the chip, and data flows between the instructions. I prefer to think of the data in motion as the local variables associated with threads that also move (using a specialized memory consistency model). The active message architecture flips that script around. Data structures are pinned, while instructions move to the relevant data. Fig. 5 shows two processing elements (PEs), each of which contains two active messages (AMs). An active message looks a lot like an instruction: it contains an opcode, source operands, and a result operand. Throughout the computation, AMs move between PEs. PEs have a local ALU and local memory. Source: https://dl.acm.org/doi/10.1145/3725843.3756091 The AM at the top of the figure carries a load opcode along with an operand that is being carried around for future use. This AM will make its way through the chip until it arrives at the PE which contains the data to be loaded. At this point, the load operation will execute, and a new AM will be created. In the figure above, the new AM is the one at the bottom of PE0. Op1 is forwarded unchanged from the predecessor AM, and the other source operand holds the value that was loaded from memory. The new opcode was obtained from the config memory, which contains a description of the program that is being executed. The next step to be performed is a multiplication.
One might expect PE0 to perform the multiplication, but in the figure above the AM is routed to PE1, which performs the multiplication. A reason why you would want to do this is in a situation where there are many AMs queued to access the data memory associated with PE0, but few AMs queued to access the data memory associated with PE1. In this situation, it is better to let PE0 perform loads for other AMs (because PE0 is the only PE that can fulfill that task) and find a PE that is currently idle to perform the multiplication (any PE can perform the multiplication). Now the question you should be asking is: what real-world applications exhibit load imbalances between PEs like this? If a data structure were split between all PEs evenly, you would think that load would be spread nicely across the PEs. The answer is: irregular workloads like sparse matrix-vector multiplication. Fig. 6 shows how a source matrix, source vector, and result vector could be partitioned across 4 PEs. You can imagine how the sparsity of the tensors being operated on would cause load imbalance between the PEs. Source: https://dl.acm.org/doi/10.1145/3725843.3756091 Fig. 11 compares the Nexus Machine against other architectures (each design has the same number of ALUs). Fig. 12 shows performance-per-watt. Source: https://dl.acm.org/doi/10.1145/3725843.3756091 Dangling Pointers I imagine that AM architectures work best for algorithms that are insensitive to the order in which AMs are executed. That would be the case for matrix/vector multiplication (assuming addition is associative). It seems like there is a large design space here related to PE capabilities. Data structures could be replicated across PEs to enable memory access AMs to be serviced by multiple PEs, or the ALUs inside of each PE could be heterogeneous (e.g., some PEs can do division, others cannot).
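The routing policy described in this post can be sketched in a few lines of Python. This is my own toy model: the PE class, AM dictionaries, and shortest-queue heuristic are illustrative assumptions, not the paper's actual microarchitecture.

```python
# Toy model of active-message routing: data structures are pinned to
# PEs, and AMs travel to where they can execute. A load AM must visit
# the PE that owns its address; a compute AM can run anywhere, so we
# send it to the least-loaded PE to absorb imbalance.
class PE:
    def __init__(self, pe_id, memory):
        self.pe_id = pe_id
        self.memory = memory  # addresses this PE owns -> values
        self.queue = []       # AMs waiting to execute at this PE

def route(am, pes):
    if am["op"] == "load":
        # only the owner of the address can service a load
        target = next(pe for pe in pes if am["addr"] in pe.memory)
    else:
        # any ALU will do: pick the PE with the shortest queue
        target = min(pes, key=lambda pe: len(pe.queue))
    target.queue.append(am)
    return target
```

With a load already queued at PE0, a subsequent multiply AM lands on the idler PE1, which is the imbalance-absorbing behaviour the paper motivates with sparse workloads.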

Oya Studio Yesterday

When MultiChildLayoutDelegate is not enough

MultiChildLayoutDelegate is great for custom layouts, but it can't size the parent from its children. SlottedMultiChildRenderObjectWidget gives you that power — here's how to use it.

Stratechery Yesterday

Spring Break

Stratechery is on a bit of a disjointed Spring Break, as my usual week off will be spread out: I will return to my usual posting schedule on Tuesday, March 31. All other Stratechery Plus content, including my podcasts, will stay on schedule. There will be no Update on Thursday, March 19. There will be no Update on Monday and Tuesday, March 23–24; there will be an Update and Interview on Wednesday and Thursday, March 25–26. There will be no Update on Monday, March 30.

Herman's blog Yesterday

On becoming a day person

I was recently asked on a podcast what my biggest game-changer was, whether it be a habit, way of thinking, purchase, or change of context. I didn't need to fish around for an answer, since I already know my biggest game-changer : becoming a day person. By this I mean I operate within daylight hours, getting up early, making good coffee and watching the sunrise with Emma. There’s something grounding about witnessing both the start and the end of the light; it makes me feel in tune with this natural cycle 1 . I used to be someone who stayed up late and slept through most of the morning. It's only been the last 5 years that I've consistently gotten out of bed early. I wake up naturally around 6am, hand grind some coffee while I'm still a bit muzzy and then, once the pour-over is blooming, wake Emma up to watch the sun rise over Cape Town while the air is still crisp and cool, and cars haven't ruined the soundscape and air quality. We sit and enjoy the coffee and view, generally in silence at first then check in with each other, ask about the day, and just enjoy the quality time together. Having the mornings available is delightful since most people aren't awake yet, which makes it feel like a secret, special pocket in which to operate. I like to take my time getting into the day. I don't need to rush and instead have a gentle start, which puts me in a good mood. I think rushing in the morning is one of the more stressful things that I'm happy to leave behind. It takes me about an hour from waking up to leaving for the gym or a trail run—living in Cape Town comes with mountain perks you see. I like to exercise in the morning because there are fewer commitments and plans that can derail me. The morning belongs to me, and I can do with it as I please. After exercise I shower, make a tasty breakfast, clean the kitchen, then get into work for the morning. 
I tend to not open emails until after lunch so that my morning can be used for focussed work, one task at a time, no distractions. After lunch (and usually a nap) I dig into emails, admin, and other tasks that need tending to. This causes the rest of the day to get quite messy and unfocussed, but that's okay because if my morning goes right (and it usually does) then all the important things are already done. I usually close my laptop around 3 or 4 and enjoy the rest of the afternoon in whichever way I see fit. Conveniently, around 8:30 or 9 I start getting tired since I've been awake for 15 hours already. I don't have any bright overhead lights on in the evenings, and the apartment has a nice warm glow which signals to my body that it's time to start winding down. And because I keep "regular business hours" my mind isn't overactive in the evening (it helps that I'm not on my phone ). We're generally in bed by 9:15 and after about half an hour of reading (currently Monstrous Regiment by Terry Pratchett ) I'm fast asleep. This sounds early to some, but the tradeoff is worth it. Generally the activities past 10pm involve watching series or going to a bar, neither of which I'm particularly attached to. I know Europeans like to eat dinner late at night, but luckily that's not the culture here, with South Africans having the earliest bedtimes in the world 2 . That isn't to say that I don't stay up late on occasion. I like to socialise over late dinners, go to music festivals, the cinema, and also get dragged to the theatre on occasion. It's just that these are exceptions, with the downside being that even when I'm out until 1am I still wake up naturally at 6. This is what naps are made for. I'm not suggesting everyone make the switch to being daytime people (I like having them to myself, thank you very much). Experiment and do what feels best for you. 
This is just something that had an outsized positive impact on me, and I suspect there are many other people who would enjoy mornings if they gave them a proper chance. Opinion: Research about "morning larks and night owls" tends to be a bit muddy and suggests that people can't make the switch due to genetics. In a research setting I'm sure it's pretty difficult to make the switch in X number of weeks, but the research tends to ignore that people make the switch all the time. It also ignores that historically humans have by-and-large been day-time creatures, since artificial lighting (including fire) is a fairly recent invention in evolutionary time, and we have pretty terrible night vision. All of the great apes being diurnal too suggests that we are too. ↩ Here's a neat ranking of sleep and wake times globally ↩

matklad Yesterday

Consensus Board Game

I have an early-adulthood trauma from struggling to understand consensus amidst a myriad of poor explanations. I am overcompensating for that by adding my own attempts to the fray. Today, I want to draw a series of pictures which could be helpful. You can see this post as a set of missing illustrations for Notes on Paxos, or, alternatively, you can view that post as a more formal narrative counterpart for the present one. The idea comes from my mathematics of consensus lecture, with versions in English and Russian. I am going to aggressively hand-wave the details away; please refer to Notes for filling in the blanks. And, before we begin, I want to stress again that here I am focusing strictly on the mathematics behind the algorithm, on the logical structure of the universe that makes some things impossible, and others doable. Consensus is but a small part of the engineering behind real data management systems, and I might do something about the pragmatics of consensus at some point, just not today ;) There’s a committee of five members that tries to choose a color for a bike shed, but the committee members are not entirely reliable. We want to arrive at a decision even if some members of the committee are absent. The fundamental idea underpinning consensus is simple majority vote. If R0, … R4 are the five committee members, we can use the following board to record the votes: A successful vote looks like this: Here, red collected 3 out of 5 votes and wins. Note that R4 hasn’t voted yet. It might, or might not, do so eventually, but that won’t affect the outcome. The problem with voting is that it can get stuck like this: Here, we have two votes for red, two votes for blue, but the potential tie-breaker, R4, voted for green, the rascal! To solve the split vote, we are going to designate R0 as the committee’s leader, make it choose the color, and allow others only to approve.
Note that meaningful voting still takes place, as someone might abstain from voting — you need at least 50% turnout for the vote to be complete: Here, R0, the leader (marked with a yellow napoleonic bicorne), chose red, R2 and R3 acquiesced, so red “won”, even as R1 and R4 abstained (x signifies absence of a vote). The problem with this is that our designated leader might be unavailable itself: Which brings us to the central illustration that I wanted to share. What we are going to do now is multiply our voting. Instead of conducting just one vote with a designated leader, the committee will conduct a series of concurrent votes, where the leaders rotate in a round-robin pattern. This gives rise to the following half-infinite 2D board on which the game of consensus is played: Each column plays independently. If you are a leader in a column, and your cell is blank, you can choose whatever color. If you are a follower, you need to wait for the column’s leader’s decision, and then you can either fill in the same color, or you can abstain. After several rounds the board might end up looking like this: The benefit of our 2D setup is that, if any committee member is unavailable, their columns might get stuck, but, as long as the majority is available, some column somewhere might still complete. The drawback is that, while an individual column’s decision is clear and unambiguous, the outcome of the board as a whole is undefined. In the above example, there’s a column where red wins, and a column where blue wins. So what we are going to do is to scrap the above board as invalid, and instead require that any two columns that achieved majorities must agree on the color. In other words, the outcome of the entire board is the outcome of any of its columns, whichever finishes first, and the safety condition is that no two colors can reach majorities in different columns.
Let’s take a few steps back to when the board wasn’t yet hosed, and try to think about the choice of the next move from the perspective of R3: As R3, the leader of your column, you need to pick a color which won’t conflict with any past or future decisions in other columns. Given that there are some greens and blues already, it feels like maybe you shouldn’t pick red… But it could be the case that the three partially filled columns won’t move anywhere in the future, and the first column gets a solid red line! Tough choices! You need to worry about the future and the infinite number of columns to your right! Luckily, the problem can be made much easier if we assume that everyone plays by the same rules, in which case it’s enough to only worry about the columns to your left. Suppose that you, and everyone else, are carefully choosing moves that don’t conflict with the columns to the left. Then, if you choose red, your column wins, and subsequently some buffoon on the right chooses green, it’s their problem, because you are to their left. So let’s just focus on the left part of the board. Again, it seems like blue or green might be good bets, as they are already present on the board, but there’s a chance that the first column will eventually vote for red. To prevent that, what we are going to do is to collect a majority of participants (R0, R2, R3) and require them to commit to not voting in the first column. Actually, for that matter, let’s prevent them from voting in any column to the left: Here, you asked R0, R2 and R3 to abstain from casting further votes in the first three columns, signified by black x. With this picture, we can now be sure that red cannot win in the first column — no color can win there, because only two out of the five votes are available there! Still, we have the choice between green and blue; which one should we pick? The answer is: the rightmost.
R2, the participant that picked blue in the column to our immediate left, was executing the same algorithm. If they picked blue, they did it because they knew for certain that the second column can’t eventually vote for green. R2 got a different majority of participants to abstain from voting in the second column, and, while we, as R3, don’t know which majority that was, we know that it exists, because we know that R2 did pick blue, and we assume fair play. That’s all for today: that’s the trick that makes consensus click, in the abstract. In a full distributed system the situation is more complicated. Each participant only sees its own row; the board as a whole remains concealed. Participants can learn something about others’ state by communicating, but the knowledge isn’t strongly anchored in time. By the time a response is received, the answer could well be obsolete. And yet, the above bird’s-eye view can be implemented in a few exchanges of messages. Please see the Notes for further details.
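The "pick the rightmost" rule is exactly the value-selection rule of the Paxos proposer. Here is a minimal sketch, with a promise format of my own invention (each responding member reports the rightmost column to our left in which it voted, plus the color it voted for, or (None, None) if it never voted there):

```python
# Board-game form of the Paxos proposer rule: once a majority has
# promised to abstain from every column to our left, adopt the color
# of the rightmost vote any of them reported; only if none of them
# ever voted are we free to pick our own color.
def choose_color(promises, preferred):
    voted = [(col, color) for col, color in promises if col is not None]
    if not voted:
        return preferred  # no column to the left can complete anymore
    return max(voted)[1]  # color from the rightmost reported vote
```

In the running example, R3 hears about blue in a nearer column than green, so it must pick blue.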

neilzone Yesterday

Musings on 'digital sovereignty'

I’ve heard a lot about “digital sovereignty” recently. I’ve heard it mostly in connection with USA-based tech companies, big ones in particular. I am not aware of a clear, agreed, definition, but it seems to boil down to wanting control over (all? some of?) one’s digital systems. Or, at least, not depending on technologies which are controlled by people/organisations in other countries. But I wonder how far the notion of “digital sovereignty” goes. Take me, for instance. I use almost exclusively Free software, which I run locally on my own hardware. No-one can - short of hacking my systems - remove or limit the software that I use. No-one can lock me out, or delete my data. Does that make me “digitally sovereign”? If it does, that seems like a very shallow concept of sovereignty. Sure, it is better than being subject to the whims of a SaaS provider. But I am still dependent on a whole range of other people, whose software I benefit from using. And the people who maintain that software. And the people who package that software. And the people who distribute that software. And so on. I, personally, could not expect to have control over anything but a tiny, tiny part of that. Perhaps I can never, realistically, be “digitally sovereign”? These wonderful, generous people could be anywhere in the world. They are - most likely - all over the world. So while I might have control over the software that I have already installed, I have no (realistic) control over updates, security patches, and the like. And while I might host everything myself, I have to get that software from somewhere . Sometimes - often - it is from Debian’s repositories. Sometimes, that is from people’s own code forges. And sometimes it is from Github. My Mastodon (glitch-soc) instance, for example. Were Github to stop hosting that code, or to stop me from accessing it, I’d either need to find another way to obtain it (to maintain patching/updates), or cease to run it. 
Let’s Encrypt is a USA-based organisation, so perhaps I should find another ACME TLS certificate provider… Perhaps viewing this from the perspective of me - just one person - is fundamentally flawed? Because of course I am dependent on others - if I chose not to be so, I, and the vast majority of the population, would not be “digitally sovereign”, but rather digitally neutered. But individuals are indeed vulnerable to the whims of third parties, just as much as governments or big businesses. In fact, perhaps more so, based on the number of software providers that I’ve seen switch from on-machine software to SaaS, and then proceed to screw over their customers with increasingly expensive subscriptions and lock-ins. I wonder to what extent geographic borders are relevant. Does “digital sovereignty” require that a nation (or company? Or individual? not sure…) can support all its own software, hardware, routing, hosting requirements etc. solely by or with people and companies from within its own geographic borders? Does it extend beyond supporting software, into only running software which is created within its regions? If it does, then that sounds incredibly inefficient, with each country needing to develop its own operating system, its own applications etc. What a waste of effort, competing rather than collaborating. From an individual point of view, sure, placing my trust in a company in another country may not be a great idea, but is placing my trust in a company within my own country’s borders significantly better? I self-host for a reason. I could have the rug pulled out from under my feet by a domestic provider, with just as great an impact as a foreign provider. I question if I can be “sovereign” at all, if I am reliant on someone else. If this is true, is geography-based “digital sovereignty” little more than digital xenophobia? Perhaps the principle of “digital sovereignty” only relates to governments, and others who have significant bargaining power.
I’ve yet to see a good, solid indication of how “digital sovereignty” is to be funded. Yes, sure, an organisation might be spending a small fortune on Microsoft’s services. They could indeed channel that money into a Free software alternative, and associated training. But are they going to do so? I’ve seen press releases about “savings”, which suggests money not being spent, rather than money being spent elsewhere. I imagine that, in reality, “digital sovereignty” would be a remarkably expensive undertaking. Perhaps more expensive than buying commodity services from overseas third parties. Digital sovereignty may come at premium pricing, rather than being a cheaper alternative, and that money needs to come from somewhere. And, beyond money, and beyond tech, there might be issues of incentivising local development (boosting local employment), removing tax breaks available to behemoth organisations, making laws comprehensible and applicable for small organisations without a cadre of lawyers and lobbyists, and so on. Digital sovereignty might be grounded in considerations of technology, but likely requires far, far broader thinking.

Chris Coyier 2 days ago

Meets Style Sheets

I’ve accepted an invitation to speak at Smashing’s (Online) Conference Meets Style Sheets. It’s free on Wednesday, May 6th. I named my talk In-N-Out Styling. Long-time CSS evangelist Chris Coyier will talk about how you can style things on their way into view on a webpage, and on their way out. Of course, with Chris being Chris, there will be plenty of things which are food for discussion, as well as plenty of practical take-aways. I’ve been preparing for it. I’ve got like 35 minutes or so, and the concept of modern “entry” and “exit” styles is plenty for that time. It’s kinda complicated in my opinion, involving multiple ways of doing things, modern syntax with weird names, and specificity footguns. I think we can all come out of it with an understanding of what’s possible.

iDiallo 2 days ago

Communication Is Surveillance by Design

In the very last scene of The Bourne Supremacy, Jason Bourne calls the CIA from what they presume is a public phone. Landy, who answers the call, instructs her team to trace it. Bourne says he wants to come in and asks for someone specific to meet with him. Landy stalls for time while her team tries to triangulate his exact location, so she asks how she can find the person he's referring to. That's when Bourne drops his famous line, "It's easy. She's standing right next to you," revealing that he's right in their vicinity. He hangs up seconds before the team could have located him. That's one badass ending. (֊⎚-⎚)

It's not the only film where the protagonist, or antagonist, is clever enough to know exactly when to hang up before being pinpointed. There seems to be this universal piece of software that all law enforcement agencies use to triangulate calls in movies. It's some application built in the '90s, operating at modem speed, that just needs a little more time. A countdown clock. Tense music. Cut to black.

What is that software actually doing? "Triangulate" implies three points, maybe three cell towers sending a ping and measuring the response time from each, then using the difference to calculate distance. Computers, even old ones, are very good at math. So why would that take a full minute? Well, mostly it doesn't. That's fiction.

The moment your phone connects to a cell tower, it generates a Call Detail Record (CDR). This record includes who you're calling (the network needs to know in order to route the call), how long the call lasts, and which specific tower and sector handled it. Location data is captured and stored automatically from the instant the call begins. In other words, the moment Jason Bourne hits send, he's already been logged. When you connect to a single tower, location accuracy can still be within several hundred meters. But phones typically connect to multiple towers simultaneously, and triangulation narrows that down to tens of meters.
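The math the countdown clock is dramatizing is small. Given three towers at known positions and a distance estimate from each, subtracting the circle equations pairwise leaves two linear equations in the caller's coordinates. Here's a toy sketch in Python, with invented tower positions and distances:

```python
import math

def trilaterate(towers, dists):
    """Locate a point from three known tower positions and measured distances."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = dists
    # Subtracting pairs of circle equations cancels the squared unknowns,
    # leaving two linear equations: a*x + b*y = c and d*x + e*y = f.
    a, b = 2 * (x2 - x1), 2 * (y2 - y1)
    c = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d, e = 2 * (x3 - x2), 2 * (y3 - y2)
    f = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    # Solve the 2x2 linear system by Cramer's rule.
    x = (c * e - f * b) / (e * a - b * d)
    y = (c * d - a * f) / (b * d - a * e)
    return x, y

# Invented example: three towers, distances consistent with a caller at (3, 4).
towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [5.0, math.sqrt(65), math.sqrt(45)]
x, y = trilaterate(towers, dists)
print(round(x, 3), round(y, 3))  # → 3.0 4.0
```

With real signals the distance estimates are noisy, so practical systems fit many measurements in a least-squares sense rather than intersecting three exact circles; either way, the arithmetic is instantaneous, not a minute-long countdown.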
If you're calling from a payphone, there's no triangulation needed at all. The address of each payphone is already on record. The one advantage the protagonist realistically has is that CDR data isn't usually available in real time. Law enforcement needs to contact the telecom provider, obtain a court order, and wade through all the bureaucracy that entails. If there's a clock ticking, it should count the number of days it takes to gather that data, not how long the triangulation software takes to calculate.

The moment you accessed this page, you left a trail. Your device asked your Internet Service Provider (ISP) to connect you to my website. That request generated a log, the digital equivalent of a CDR, recording that your IP address requested a connection to mine. When your ISP routed you to my server, it handed over your IP address so I'd know where to send the data back. From that IP address alone, I can make a rough guess at your location, usually accurate to your city or region. Your ISP, however, knows exactly where you are. They assigned you that IP address and are actively providing your connection.

This is where HTTPS comes in. You've probably noticed the padlock icon in your browser. When you connect to a website over HTTPS, the content of your communication is encrypted in transit. Your ISP (or anyone listening on the network) can see which site you connected to, but they cannot read what you sent or received. The data looks like noise to them. The main distinction is that HTTPS hides the content, not the connection. Your ISP still sees the domain you visited. They still have a timestamp. They still have your IP address. The metadata is fully visible, even if the message itself is not.

Using HTTPS wasn't something most people worried about until 2013, when Edward Snowden's leaked documents revealed that the NSA had been running programs like PRISM that compelled major technology companies to hand over user data.
They tapped directly into the fiber-optic cables connecting Google's and Yahoo's data centers. At those interception points, traffic that hadn't yet been encrypted internally was flowing in the open. The NSA could read emails, messages, and files, not by breaking encryption, but by scooping up data before encryption was ever applied, or by accessing it at a point where it had already been decrypted. The content was exposed.

You can partially obscure your activity from your ISP by using a VPN. A VPN tunnels your traffic through a third-party server, so your ISP sees only that you connected to the VPN, not where you went from there. But now the VPN provider holds that information instead. You haven't entirely eliminated the trail; you've relocated it. One way or another, when you use any electronic means of communication, you leave breadcrumbs. The connection is always recorded somewhere.

That's why end-to-end encryption (E2EE) is important. Unlike HTTPS, which encrypts data in transit but means the server itself can read your messages, with end-to-end encryption only the sender and recipient can read the content. The service provider in the middle never holds the keys. In practice, when you send a message through an E2EE app like Signal, your device encrypts the message using your recipient's public key before it ever leaves your phone. The encrypted message travels through Signal's servers, but Signal cannot read it, because they don't have the private key needed to decrypt it. Only your recipient's device holds that key. Even if Signal were compelled by a government order to hand over your messages, all they could produce is scrambled data that's meaningless without the key.

This is a meaningful protection. But it doesn't change the underlying reality: Signal still knows that your device contacted another device, at what time, and how often. The content is hidden. The connection is not. We cannot make communication invisible. We can only make it unreadable.
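The key idea behind E2EE can be sketched in a few lines of Python. This is a deliberately toy illustration, not Signal's actual protocol: real apps use X25519 key agreement plus a ratcheting scheme and an authenticated cipher, none of which this stand-in implements. What it does show is the core property: the two endpoints derive a shared key from public values, so a relay server in the middle only ever sees ciphertext and metadata.

```python
import hashlib
import secrets

# Toy finite-field Diffie-Hellman. Parameters chosen for illustration only.
P = 2**255 - 19  # a large prime modulus
G = 5            # generator

def keypair():
    """Generate a private exponent and the matching public value."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    """Both ends derive the same 32-byte secret from public values alone."""
    return hashlib.sha256(str(pow(their_pub, my_priv, P)).encode()).digest()

def xor_cipher(key, data):
    """Stand-in for a real authenticated cipher: XOR with a hash keystream."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Alice encrypts with the shared key; the relay sees only ciphertext.
ciphertext = xor_cipher(shared_key(alice_priv, bob_pub), b"meet at noon")

# Bob derives the same key from Alice's public value and decrypts.
print(xor_cipher(shared_key(bob_priv, alice_pub), ciphertext))  # → b'meet at noon'
```

Even in this sketch, note what the middleman still learns: which two public keys talked, when, and how many bytes passed. The encryption hides the message, not the fact of the exchange.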
In the real world, the only thing keeping Jason Bourne two steps ahead of law enforcement is the bureaucracy and legal delay involved in retrieving CDR data. It's not his cleverness, not the speed of triangulation software, not technology at all.


Reformed

We are, once again and inexplicably, seeing a conversation unfold about reforming the military force in our streets, with body cameras and training standing in for a moral reckoning about the kind of world we want to live in, the kind of world that is livable for more than the wealthy few. We know what such “reforms” accomplish, because we’ve seen this many times before: an armed, unaccountable force with body cameras is no less deadly or immoral than an armed, unaccountable force without. A trained secret police is still the secret police.

A short walk from where I write this is the old Walnut Street Jail, the first penitentiary built in the US, a precursor to the more infamous Eastern State Penitentiary, which was designed and operated by the Quakers. The Quakers advocated for reforms to the old prison systems, in which deprivation and corporal punishment were the norm, arguing that solitude, cleanliness, and discipline were better methods for rehabilitation. More than 200 years after those “reforms,” our prisons remain locations of intense deprivation, physical violence, coerced labor, and, frequently, inhumane solitary confinement—the “penitence” the Quakers were after still in short supply. Reports from the detention centers built today to house people pulled from the streets without due process show that even those minimal standards are anything but: inedible food, overcrowding, lights on twenty-four hours a day, refusal of medical care, rape, and murder are all regular occurrences in these new prisons.

This is the process that reform takes: the system is modified around the edges, often in ways that seem to cushion or obscure its real purpose, but the underlying conditions that maintain it remain unchanged. The old ways resurface, eventually. But if not reform, then what? What else can we do?
André Gorz proposes a concept of “non-reformist reforms,” reforms which “bring the future into the present…[that] make power tangible now by means of actions which demonstrate to the workers their positive strength.” For Gorz, a reform is non-reformist if it both exercises the power and agency of workers acting together and foreshadows the future world in the present. That is, a non-reformist reform requires both concrete, bottom-up action and the reflection of a different world within that action, the way a small fractal prefigures the large.

Body cameras promise increased surveillance with no attendant increase in accountability, while training maintains the distribution of money and resources away from care and towards cops and prisons; both reforms represent business as usual, not a remade world. Only abolitionist demands—to defund militarized police forces in all their many forms, to invest instead in schools, libraries, homes, healthcare, childcare, and more—can both exercise that power and foreshadow a world where care overcomes criminalization.

To put this another way: a reform maintains the old world, often under cover, while a non-reformist reform demands that we build a new world, one in which all humans and the more-than-human world can thrive. We must take small steps towards the future we want; there is no other way. But each step must point the way toward that future, a drop of water that heralds the wave.

View this post on the web, subscribe to the newsletter, or reply via email.

The Coder Cafe 2 days ago

3 Bullets and a Call to Action

☕ Welcome to The Coder Cafe! Today, we discuss an efficient communication method presented in the Debugging Teams book called 3 bullets and a call to action. I’ve been using it extensively over the past months, and I can confirm its efficiency. Get cozy, grab a coffee, and let’s begin!

At Google, I recently switched to a new domain: Google Distributed Cloud Connected 1. Here, all the teams are very busy, and finding an efficient way to communicate over email or chat can be challenging, especially when asking someone to do something. Recently, I came across a simple technique: three bullets and one call to action. The idea is the following:

- Add three bullet points explaining the key context
- Follow with one clear call to action

Let’s look at a concrete example. Suppose you receive the following email:

I recently wrote a design doc on how to save storage in the context of X, where I describe the current problem and the approach we could take to address it. In the document, I go through the main trade-offs involved and explain why the proposal focuses on solution Y in particular. I also included several open questions related to the deployment strategy and some areas where feedback would be especially helpful. It would be great if you could take a look at the document and leave comments by Friday.

Quite a mouthful. It requires a non-trivial amount of brain time to understand both the context and what the person is actually asking for. Now let’s apply the three bullets and a call to action strategy:

- I recently wrote a design doc on how to save storage in the context of X.
- It highlights the main trade-offs and focuses on solution Y.
- I’ve added open questions around the deployment strategy.

Could you please have a look and leave comments by Friday?

Much better, right? The call to action is clear, and the context is structured around short and easy-to-scan sentences. Why does it work?
When communicating via email or chat, people prefer short, memorable messages that do not require too much cognitive effort to process. Bullet points help break information into smaller chunks, which makes the message easier to scan quickly. Ideally, the bullet points and the call to action should be as short as possible.

Another aspect is that 3 is often a magic number in communication. With 2 items, you often get a contrast. With 3 items, you start to get a small structure or rhythm that is easier for the mind to process. That is one of the reasons why the rule of three appears so often in writing, storytelling, and presentations, where it helps make ideas more engaging and convincing.

Remember: to improve your chances of getting an answer to your request, use 3 short bullets and an efficient call to action.

Missing direction in your tech career? At The Coder Cafe, we serve timeless concepts with your coffee to help you master the fundamentals. Written by a Google SWE and trusted by thousands of readers, we support your growth as an engineer, one coffee at a time.

- 10 Rules I Learned About Technical Writing
- The XY Problem
- Don’t Forget About Your Mental Health
- Rule of three — Thinking Insights
- Rule of three (writing) — Wikipedia

Only 2, sorry about that.

1 That partially explains why I wasn’t so active with The Coder Cafe these days. It will get better, I promise.
