Latest Posts (20 found)

Pidgin Markup For Writing, or How Much Can HTML Sustain?

Read on the website: HTML is flexible and was shaped by generations of web practitioners. It has enough tricks up its sleeve to actually be nice to author. Here are some.

0 views

Meta Compute, The Meta-OpenAI Battle, The Reality Labs Sacrifice

Mark Zuckerberg announced Meta Compute, a bet that winning in AI means winning with infrastructure; this, however, means retreating from Reality Labs.

0 views

Running cheap and crappy USB hard drives in RAID0 is indeed a very terrible idea

Some of my dumb experiments result in interesting findings and unexpected successes. Some end up with very predictable failures. What happens when you have two crappy USB hard drives running in RAID0 mode? Nothing, until something goes wrong on one of the drives. Here’s what it looks like:

But in a way, this setup worked exactly as expected. If you want to have a lot of storage on the cheap, or simply care about performance, or both, then running disks in RAID0 mode is a very sensible thing to do. I used it mainly as a place where I could temporarily store a bunch of data, such as full disk images or data that I can easily replace. Now I can test that theory out!

I feel like I need to point out that this is not the fault of the file system. When you instruct a file system to provide zero redundancy, then that is what you will get. ↩︎

0 views
Maurycy Today

How to write your own website:

I recently wrote an essay on why you should set up a personal website rather than using social media. Doing so lets you own your space on the internet, customize it, and free your readers from constant advertising and algorithmic feeds designed to keep them stuck doomscrolling all day. However, despite how much time we spend using it, creating something for the internet is seen as arcane wizardry by most people. This is a fairly accessible guide to getting started.

You’ll need a text editor (any will do) and a browser (you already have one). All pages are written in HTML, which is a simple text-based format. To try it, just create a text file with a ".html" extension and open it in your favorite browser. Do this now: experimenting is the best way to learn how everything works.

Plain text is boring, so let’s add some formatting. The angle-bracket things are tags: "<b>" is an opening tag, and "</b>" is the matching closing tag. The word inside the brackets ("b") is the tag name, which tells the browser what to do: in this case, bolding the enclosed text. The other formatting tags are <em> (emphasis), <u> (underline), <sub> (subscript), <sup> (superscript), <small> (small text), <mark> (highlight) and <del> (deleted). You don’t have to memorize this list, but go and try a few out.

There’s also <br/> (break), which adds a line break. It’s special because there’s no closing tag: it is always immediately closed and can’t contain any text. I like to add a slash after the tag name to indicate this.

A big wall of text can get quite ugly, so it’s good to break it up with <p> (paragraph) tags. Each paragraph will be visually separated from other content on the page. Together, the matching tags and their contents form an element.
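A minimal page using just these basics might look like this (the page text is made up for illustration):

```html
<!-- Plain text plus a few formatting tags is already a valid page -->
Check out my <b>new</b> site!

<p>I have <em>many</em> epic things here.</p>
<p>A line break<br/>without a new paragraph.</p>
```

Save it as something like page.html and open it in a browser to see the result.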
Elements can contain other elements, but it’s important that they are closed in the correct order. Browsers will attempt to render invalid HTML, but the results may not be what you intended: it’s best to make it easy for them. On that topic, it’s good practice to put all your content inside a <body> element, which is itself inside an <html> element. This isn’t mandatory, but it helps browsers render your page correctly: in the case of an old browser, you don’t want metadata (we’ll add some later) getting confused for page content.

Ok, back to text-wall avoidance: the <ul> and <ol> (unordered/ordered list) tags create, well, lists. Each item should be wrapped in <li> (list item) tags.

You can add angle brackets to a page with &gt; (>), &lt; (<) and &amp; (&). These entities will render as the corresponding character, but won’t form tags.

Headings use <h1> (heading 1) through <h5> (heading 5), with larger numbers using smaller font sizes. To do: figure out how to add links. About that. Links are just <a> (anchor) tags, but they have something new: an attribute after the tag name but before the bracket. The "href=" attribute sets where the link points to. A lot of other tags can also have attributes: for example, ordered lists with the "reversed" attribute count backwards. The URL in "href=" can be relative, which saves typing when linking up multiple pages on the same site.

Images work similarly to links, except that they are self-closing elements like <br/>. (If you don’t have a URL for your image, skip to the hosting section to set one up.)

That’s all the essentials, but there are a lot of other useful tags.
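Putting nesting, lists, entities, links and images together might look like this (the file names about.html and nebula.jpg are made-up placeholders):

```html
<html>
  <body>
    <h1>My site</h1>
    <p>Tags close in <b><em>reverse</em></b> order of opening.</p>
    <ul>
      <li>It has epic things</li>
      <li>... and is handwritten HTML</li>
    </ul>
    <p>Literal angle brackets: &lt;b&gt; &amp; friends.</p>
    <p>Read more on <a href="about.html">my about page</a>.</p>
    <img src="nebula.jpg"/>
  </body>
</html>
```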
For example, <details> creates a dropdown with just HTML. It works well with browser features (ctrl-f, fragment identifiers, screen readers, etc.) by default: better usability than 99% of commercial sites! But I can’t cover everything without writing a whole book. (The Mozilla docs are a fantastic reference.)

At this point, you should have a simple page. Let’s start by giving it a machine-readable title. Like with <body>, the <head> tag isn’t required, but it is good to include it: otherwise, any metadata that the browser doesn’t understand might be mistaken for content.

The page still looks kinda bad: text extending to the edges of the page isn’t exactly easy to read. It’s not too bad when crammed into my blog, but longer paragraphs will look terrible on large monitors. To fix this, we need to add some style and layout information using the <style> tag. Unlike other tags, the contents of <style> isn’t HTML but CSS: a whole other language embedded within the file.

CSS is composed of blocks, each beginning with a selector to control what gets affected. Here, this is just the name of a tag: "body". The selector is followed by a series of declarations wrapped in curly braces. My example only has one: "max-width: 30em;". This caps the width of the element at 30 times the font size.

The page is looking rather asymmetrical, so let’s center the column. For fixed-width elements, this can be done using the "margin" property. (For variable-width elements, use flexbox for centering and other fancy layouts.
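The title, width cap, and centered column described above might be combined like so (the page text is illustrative):

```html
<html>
  <head>
    <title>My Site</title>
    <style>
      body {
        max-width: 30em;     /* cap line length at 30x the font size */
        margin: 0 auto;      /* equal side margins center a fixed-width element */
      }
    </style>
  </head>
  <body>
    <p>I made this site to write about things I do.</p>
  </body>
</html>
```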
A single line of text can be centered with "text-align: center".)

Personally, I like dark-themed sites, so let’s change some of the colors. The "color" style will carry over to every element inside the styled tag, so there’s no need to individually change the text color of every element. However, the links do need to be changed, because they override the color by default.

That’s it: everything you need to replicate my blog, minus a few small bits like the sans-serif font, navigation box, etc. Of course, your website can and should be different: it’s yours. I highly recommend you read some documentation and play around with CSS. There’s also way more to it than I can possibly cover here. Every website you see was created with it, and it even supports animations and basic interactivity. Also, check out your browser’s devtools (ctrl-shift-i): it has a nice GUI for editing which shows you the result in real time and shows you what’s going on under the hood.

If you ever run out of tags, you can just make up your own and style them as needed. As long as the name includes a hyphen, it’s guaranteed not to be included in any future version of HTML. The specification even lists <math-α> and <emotion-😍> as allowed custom element names. I’ve used this heavily on this page: all the example websites aren’t screenshots, they are <fake-frame> elements styled up to look like a browser window. Custom tags are also very handy for styling text.

At this point you should have a reasonably nice page ready to put up on the internet. The easiest way to do this is to use a static file hosting service like GitHub Pages or Cloudflare Pages. Both of these have generous free tiers that should last a very long time. If you don’t like big companies, there are plenty of similar, smaller services. These can be more limited: the popular Neocities charges $5/mo to use a custom domain.
Another option is to rent a server ($3-$5/mo) or, if you have good internet, run one yourself. This is by far the most fiddly option: I would not recommend it unless you like playing with computers. All of these (except a server) will give you a subdomain by default: for example, GitHub Pages will give you your-username.github.io. However, I do recommend setting up a custom domain: this will let you switch providers seamlessly should anything happen.

All of these will work in a similar way: upload a file with some name, and it will be given a URL with that same name. The one exception is that files called "index.html" will be viewable at the root of the folder they are in. You should put an index.html in the root of your site to serve as the homepage, but apart from that, the organization is up to you.

0 views

Porting MiniJinja to Go With an Agent

Turns out you can just port things now. I already attempted this experiment in the summer, but it turned out to be a bit too much for what I had time for. However, things have advanced since. Yesterday I ported MiniJinja (a Rust Jinja2 template engine) to native Go, and I used an agent to do pretty much all of the work. In fact, I barely did anything beyond giving some high-level guidance on how I thought it could be accomplished. In total I probably spent around 45 minutes actively with it. It worked for around 3 hours while I was watching, then another 7 hours alone. This post is a recollection of what happened and what I learned from it. All prompting was done by voice using pi, starting with Opus 4.5 and switching to GPT-5.2 Codex for the long tail of test fixing.

MiniJinja is a re-implementation of Jinja2 for Rust. I originally wrote it because I wanted to do an infrastructure automation project in Rust and Jinja was popular for that. The original project didn’t go anywhere, but MiniJinja itself continued being useful for both me and other users. The way MiniJinja is tested is with snapshot tests: inputs and expected outputs, using insta to verify they match. These snapshot tests were what I wanted to use to validate the Go port.

My initial prompt asked the agent to figure out how to validate the port. Through that conversation, the agent and I aligned on a path: reuse the existing Rust snapshot tests and port incrementally (lexer -> parser -> runtime). This meant the agent built Go-side tooling to:

Parse Rust’s test input files (which embed settings as JSON headers).
Parse the reference insta snapshots and compare output.
Maintain a skip-list to temporarily opt out of failing tests.

This resulted in a pretty good harness with a tight feedback loop. The agent had a clear goal (make everything pass) and a progression (lexer -> parser -> runtime). The tight feedback loop mattered particularly at the end, where it was about getting details right. Every missing behavior had one or more failing snapshots.

I used Pi’s branching feature to structure the session into phases. I rewound back to earlier parts of the session and used the branch-switch feature to inform the agent automatically of what it had already done. This is similar to compaction, but Pi shows me what it puts into the context. When Pi switches branches it does two things:

It stays in the same session so I can navigate around, but it makes a new branch off an earlier message.
When switching, it adds a summary of what it did as a priming message into where it branched off.

I found this quite helpful to avoid the agent doing vision quests from scratch to figure out how far it had already gotten. Without switching branches, I would probably just make new sessions and have more plan files lying around, or use something like Amp’s handoff feature, which also allows the agent to consult earlier conversations if it needs more information.

What was interesting is that the agent went from literal porting to behavioral porting quite quickly. I didn’t steer it away from this as long as the behavior aligned. I let it do this for a few reasons. First, the code base isn’t that large, so I felt I could make adjustments at the end if needed. Letting the agent continue with what was already working felt like the right strategy. Second, it was aligning to idiomatic Go much better this way. For instance, on the runtime it implemented a tree-walking interpreter (not a bytecode interpreter like Rust) and it decided to use Go’s reflection for the value type. I didn’t tell it to do either of these things, but they made more sense than replicating my Rust interpreter design, which was partly motivated by not having a garbage collector or runtime type information.

On the other hand, the agent made some changes while making tests pass that I disagreed with. It completely gave up on all the “must fail” tests because the error messages were impossible to replicate perfectly given the runtime differences. So I had to steer it towards fuzzy matching instead. It also wanted to regress behavior I wanted to retain (e.g., exact HTML escaping semantics, or that certain operations must return an iterator). I think if I hadn’t steered it there, it might not have made it to completion without going down problematic paths, or I would have lost confidence in the result.

Once the major semantic mismatches were fixed, the remaining work was filling in all the missing pieces: missing filters and test functions, loop extras, macros, call blocks, etc. Since I wanted to go to bed, I switched to GPT-5.2 Codex and queued up a few “continue making all tests pass if they are not passing yet” prompts, then let it work through compaction. I felt confident enough that the agent could make the rest of the tests pass without guidance once it had the basics covered. This phase ran without supervision overnight.

After functional convergence, I asked the agent to document internal functions and reorganize (like moving filters to a separate file). I also asked it to document all functions and filters like in the Rust code base. This was also when I set up CI, release processes, and talked through what was created to come up with some finalizing touches before merging.

There are a few things I find interesting here. First: these types of ports are possible now. I know porting was already possible for many months, but it required much more attention. This changes some dynamics. I feel less like technology choices are constrained by ecosystem lock-in. Sure, porting NumPy to Go would be a more involved undertaking, and getting it competitive even more so (years of optimizations in there). But still, it feels like many more libraries can be used now.

Second: for me, the value is shifting from the code to the tests and documentation. A good test suite might actually be worth more than the code. That said, this isn’t an argument for keeping tests secret — generating tests with good coverage is also getting easier. However, for keeping code bases in different languages in sync, you need to agree on shared tests, otherwise divergence is inevitable.

Lastly, there’s the social dynamic. Once, having people port your code to other languages was something to take pride in. It was a sign of accomplishment — a project was “cool enough” that someone put time into making it available elsewhere. With agents, it doesn’t invoke the same feelings. Will McGugan also called out this change.

Finally, some boring stats for the main session (not counting the addition of doc strings and smaller fixups):

Agent run duration: 10 hours (3 hours supervised)
Active human time: ~45 minutes
Total messages: 2,698
My prompts: 34
Tool calls: 1,386
Raw API token cost: $60
Total tokens: 2.2 million
Models: Opus 4.5, and GPT-5.2 Codex for the unattended overnight run

Pi session transcript
Narrated video of the porting session
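The snapshot-test harness described above can be sketched roughly like this. All names, the file format details, and the skip-list entry are my own assumptions for illustration, not the actual port's code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// TestInput is a hypothetical view of one snapshot test input:
// an optional JSON settings header on the first line, then the template.
type TestInput struct {
	Settings map[string]any
	Template string
}

// parseTestInput splits an input file into its JSON settings header
// (if the first line looks like a JSON object) and the template source.
func parseTestInput(raw string) TestInput {
	in := TestInput{Settings: map[string]any{}}
	first, rest, found := strings.Cut(raw, "\n")
	if found && strings.HasPrefix(strings.TrimSpace(first), "{") {
		if err := json.Unmarshal([]byte(first), &in.Settings); err == nil {
			in.Template = rest
			return in
		}
	}
	in.Template = raw // no header: the whole file is the template
	return in
}

// skipList temporarily opts known-failing tests out of the run.
var skipList = map[string]bool{
	"vm@some_known_failure.txt": true, // hypothetical entry
}

func shouldRun(name string) bool { return !skipList[name] }

func main() {
	in := parseTestInput("{\"trim_blocks\": true}\nHello {{ name }}!")
	fmt.Println(in.Settings, in.Template, shouldRun("vm@basic.txt"))
}
```

The reference insta snapshots would then be parsed the same way and diffed against the Go engine's output, giving the agent the tight pass/fail loop described in the post.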

0 views
Kaushik Gopal Yesterday

AI model choices 2026-01

Which AI model do I use? This is a common question I get asked, but models evolve so rapidly that I never felt like I could give an answer that would stay relevant for more than a month or two. This year, I finally feel like I have a stable set of model choices that consistently give me good results. I’m jotting it down here to share more broadly and to trace how my own choices evolve over time.

GPT 5.2 (High) for planning and writing, including plans
Opus 4.5 for anything coding, task automation, and tool calling
Gemini’s range of models for everything else:
  Gemini 3 (Thinking) for learning and understanding concepts (underrated)
  Gemini 3 (Flash) for quick-fire questions
  Nano Banana (obv) for image generation
NVIDIA’s Parakeet for voice transcription

0 views

LoopFrog: In-Core Hint-Based Loop Parallelization

LoopFrog: In-Core Hint-Based Loop Parallelization. Marton Erdos, Utpal Bora, Akshay Bhosale, Bob Lytton, Ali M. Zaidi, Alexandra W. Chadwick, Yuxin Guo, Giacomo Gabrielli, and Timothy M. Jones. MICRO'25.

To my Kanagawa pals: I think hardware like this would make a great target for Kanagawa, what do you think?

The message of this paper is that there is plenty of loop-level parallelism available which superscalar cores are not yet harvesting. Fig. 1 illustrates the classic motivation for multi-core processors: scaling the processor width by 4x yields only a 2x IPC improvement. In general, wider cores are heavily underutilized.

Source: https://dl.acm.org/doi/10.1145/3725843.3756051

The main idea behind LoopFrog is to add hints to the ISA which allow a wide core to exploit more loop-level parallelism in sequential code. If you understand Fig. 2, then you understand LoopFrog; the rest is just details.

Source: https://dl.acm.org/doi/10.1145/3725843.3756051

The compiler emits instructions which the processor can use to understand the structure of a loop. Processors are free to ignore the hints. A loop which can be optimized by LoopFrog comprises three sections:

A header, which launches each loop iteration.
A body, which accepts values from the header.
A continuation, which computes values needed for the next loop iteration (e.g., the value of induction variables).

Each execution of the header launches two threadlets. A threadlet is like a thread but is only ever executed on the core which launched it. One threadlet launched by the header executes the body of the loop. The other is the continuation, which computes values needed for the next loop iteration. Register loop-carried dependencies are allowed between the header and continuation, but not between body invocations. That is the key which allows multiple bodies to execute in parallel (see Fig. 2c above). At any one time, there is one architectural threadlet (the oldest one), which can update architectural state.
All other threadlets are speculative. Once the architectural threadlet for loop iteration i completes, it hands the baton over to the threadlet executing iteration i+1, which becomes architectural. Dependencies through memory are handled by the speculative state buffer (SSB). When a speculative threadlet executes a memory store, the data is stored in the SSB and actually written to memory later on (i.e., after that threadlet is no longer speculative). Memory loads read from both the L1 cache and the SSB, and then disambiguation hardware determines which data to use and which to ignore. The hardware implementation evaluated by the paper does not support nested parallelization; it simply ignores hints inside of nested loops.

Fig. 6 shows simulated performance results for an 8-wide core. A core which supports 4 threadlets is compared against a baseline which does not implement LoopFrog.

Source: https://dl.acm.org/doi/10.1145/3725843.3756051

LoopFrog can improve performance by about 10%. Fig. 1 at the top shows that an 8-wide core achieves only about 25% utilization, so there may be more fruit left to pick.
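As a software analogy (not the paper's actual ISA hints), the header/body/continuation split can be sketched like this: the continuation owns the loop-carried induction state, while each body invocation depends only on the values handed to it by the header, so bodies can overlap.

```go
package main

import (
	"fmt"
	"sync"
)

// squareAll mimics the LoopFrog decomposition in software.
// Each iteration's body receives its inputs from the "header" and has no
// loop-carried dependencies, so body invocations may run concurrently;
// only the continuation advances the induction variable.
func squareAll(data []int) []int {
	out := make([]int, len(data))
	var wg sync.WaitGroup
	i := 0 // induction variable, owned by header/continuation
	for i < len(data) {
		idx, val := i, data[i] // header: hand this iteration's values to the body
		wg.Add(1)
		go func() { // body threadlet: independent across iterations
			defer wg.Done()
			out[idx] = val * val
		}()
		i++ // continuation: compute state for the next iteration
	}
	wg.Wait()
	return out
}

func main() {
	fmt.Println(squareAll([]int{3, 1, 4, 1, 5})) // prints [9 1 16 1 25]
}
```

In the hardware version, of course, the overlap comes from threadlets inside one core plus the SSB for memory speculation, not goroutines.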

0 views

The Insecure Evangelism of LLM Maximalists

I just can't help feeling these training wheels are getting in the way of my bicycle commute.

0 views
Stratechery Yesterday

Apple and Gemini, Foundation vs. Aggregation, Universal Commerce Protocol

The deal to put Gemini at the heart of Siri is official, and it makes sense for both sides; then Google runs its classic playbook with Universal Commerce Protocol.

0 views

How You Read My Content

A week ago, after chatting with Kev about his own findings, I created a similar survey (which is still open if you want to answer it) to collect a second set of data, because why the heck not. Kev’s data showed that 84.5% of responses picked RSS, Fediverse was second at 7.6%, direct visits to the site were third at 5.4%, and email was last at 2.4%. My survey has a slightly different set of options and allows for multiple choices—which is why the % don’t add up to 100—but the results are very similar:

80.1% read the content inside their RSS apps
23.8% use RSS to get notified, but then read in the browser
10.7% visit the site directly
4.9% read in their inbox

This is the bulk of the data, but then there’s a bunch of custom, random answers, some of which were very entertaining to read:

1 person said they follow on Mastodon, and I am not on Mastodon, so 🤷‍♂️
1 person left a very useful message in German, a language I don’t speak, which was quite amusing
1 person lives in my house and looks over my shoulder when I write
A couple of people mentioned that they read via RSS but check the site every now and again because they like the website

So the takeaway is: people still love and use RSS. Which makes sense, RSS is fucking awesome, and more people should use it.

Since we’re talking data, I’m gonna share some more information about the numbers I have available, related to this blog and how people follow it. I don’t have analytics, and these numbers are very rough, so my advice is not to give them too much weight. 31 people in the survey said they read content in their inbox, but there are currently 103 people who are subscribed to my blog-to-inbox automated newsletter.

RSS is a black box for the most part, and finding out how many people are subscribed to a feed is basically impossible. That said, some services do expose the number of people who are subscribed, and so there are ways to get at least an estimate of how big that number is. I just grabbed the latest log from my server and cleaned the data as best as I could to eliminate duplicates, and also entries that feel like duplicates. In some cases, it’s obvious that two entries are the same service, and at some point one more person has signed up for the RSS. In other cases, all the IDs are different, and what should I do then? Do I keep them all? Who knows. Anyway, after cleaning up everything, keeping only requests for the main RSS feed, I’m left with 1975 subscribers, whatever that means.

Are these actual people? Who knows. Running the exact same log file (it’s the NGINX access log from Jan 10th to Jan 13th at ~10AM) through GoAccess, with all the RSS entries removed, tells me the server received ~50k requests from ~8000 unique IPs. 33% of those hits are from tools whose UA is marked as “Unknown” by GoAccess. Same story when it comes to reported OS: 35% is marked as “Unknown”. Another 15% on both of those tables is “Crawlers”, which to me suggests that at least half of the traffic hitting the website directly is bots.

In conclusion, is it still worth serving content via RSS? Yes. Is the web overrun by bots? Also yes. Is somebody watching me type these words? Maybe. If you have a site and are going to run a similar experiment, let me know about it, and I’ll be happy to link it here. Also, if you want some more data from my logs, let me know. Thank you for keeping RSS alive. You’re awesome.
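The kind of log crunching described here can be sketched with standard tools. The log path, feed URL, and combined log format are illustrative assumptions, not my actual setup:

```shell
# Count unique client IPs that requested the RSS feed in an NGINX access log
# (assumes the default combined format, where the IP is the first field).
grep 'GET /feed.xml' access.log \
  | awk '{print $1}' \
  | sort -u \
  | wc -l
```

This gives a raw upper bound; deduplicating aggregator services that report subscriber counts in their user agent is the messy manual part.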

0 views
Kev Quirk Yesterday

Linux in the Air

Sal talks about how Linux is going through somewhat of a revival at the moment, as well as some of his own thoughts on the whole Mac vs Windows vs Linux debacle. Read Post →

I think a lot of this Linux revival is thanks to a perfect storm going on in the OS space, namely:

Microsoft forcing many users to buy new hardware because of arbitrary hardware requirements, as well as forcing users to have an online account.
Apple completely screwing up macOS Tahoe with their Liquid Glass update.

I’ve been back on Linux (specifically Ubuntu) since I bought my Framework 13, and I’ve been very happy. The only issues I’ve really had are with some apps being blurry under Wayland, but I’ve been able to easily work around them. Sal has had some similar problems with Wayland, but has also managed to work around them.

My son also runs Linux on his iMac, and I’m about to replace Windows 10 on my wife’s X1 Carbon with Ubuntu too. So we’re going to be a Linux household very soon. And you know what? It’s fine. My son doesn’t know (or care) that he’s running Linux. My wife will be in the same boat - as long as she can check her emails, browse the web, and manage our finances in a spreadsheet, she’s good.

Linux-based operating systems are great, and I’m thrilled they’re going through this revival. If you’re thinking about switching, I’d implore you to do so - remember you can always try before you “buy” with a live USB, so there’s no commitment required. If you do switch, please remember to donate to your distro of choice. ❤

Thanks for reading this post via RSS. RSS is great, and you're great for using it. ❤️ You can reply to this post by email, or leave a comment.

0 views
Thomasorus Yesterday

2025

Despite everything happening in the world, 2025 was one of the happiest years of my recent life. I feel that a lot of my efforts paid off, making my life richer, more interesting, and allowing me to learn more about myself.

My partner's disability was recognized by the state. She receives a temporary pension so we don't have to stress about money as much. She had surgery, recovered swiftly, and can walk again. Therefore, my role as a caregiver was reduced immensely, a relief as we enter our 20th year together! As for my own health, it's getting better. Pain in my back, shoulders, hands and arms diminished thanks to exercise. I was able to rule out a few concerns I had (among them tachycardia and nerve issues) thanks to exams, and others are ongoing.

Despite a lot of troubles at work, ranging from management issues to toxic clients and people quitting, I'm still feeling OK. I am actually detached and it feels great. I still care enough to work at the best of my abilities, but once I close my laptop, work disappears. I am getting very bored by frontend web development though, and asked if I could evolve into project management. I had a positive response, but we'll have to discuss it further. I won't do it if the company doesn't accept a best-effort obligation. Taking up the role without having the means to do it is a trap I refuse to fall into.

And finally, my non-monogamous life stabilized. I now understand what I am looking for, and I am able to explain it properly. It led to long-term lover-friend relationships, with people who are all emotionally mature, caring, and overall amazing and interesting human beings.

A thing that didn't stabilize is my brain. I had several moments that felt not right. Among them:

The urge for something, anything, to happen, right now. I crave adventure and novelty, but also stress when outside a familiar environment.
Feeling overwhelmed and overstimulated by the number of things in my environment, mostly objects but also sounds, lights, etc.
An inability to find, use and sometimes think about things if they are not directly visible. For example, messy storage becomes just a mass of indistinct things I cannot manage, and I give up.
The feeling of never doing enough while never defining what is enough. Happened especially in social and professional settings.
An increasing difficulty to focus on tasks that aren't interesting to me, to the point of forgetting about them.

My theory, as for why it happens now and more often, is that I am slowly getting rid of a performative social attitude I built years ago. For the last 20, maybe even 30 years, I spent a lot of energy on pleasing others, doing things right, being a reliable friend, colleague and spouse. To the detriment of my own well-being and sometimes, health. I am unsure why I work like this. Could be anxiety, past experiences, education, role models. Who knows? Almost everyone around me, including psychiatrists, thinks I have ADHD. It led to a huge mental breakdown. I'll be seeing a neuropsychiatrist in a few weeks to try to find out what's going on.

This year marked a radical shift: I am actually listening to my needs, wants and obsessions. Allowing myself to be... well, myself, has made me extremely happy and fulfilled this year. I regained a lot of freedom to do new things, made a lot of encounters that led to new activities, experiences and discoveries. It can go from little things like wearing earrings, buying new clothes or getting new piercings; to more fundamental shifts like trying out stressful social gatherings, asking others to do things for my own benefit, or abandoning moral positions that actually led me to infantilize or disrespect the individuality of others. Seeing my loved ones respond positively to these changes lifted a huge amount of stress from my shoulders.

For all these reasons, I can say 2025 was among the best years of my life. I haven't felt this alive, this myself, for years, and I hope it will continue for a long time.

Some things that happened this year:

Stopped my 15-year-old podcast and website dedicated to fighting games.
Stopped my 7-year-old freelance shop.
David Lynch died and it affected me more than I expected (here's an interview of him I enjoyed).
Exhibited and sold my art for the first time and had a blast (full blog post).
Tried to code with AI and became stupid and lazy.
Migrated this website from a static site generator to my own CMS built in Go.
I started documenting the steps for some drawings I did. You can see the creative process behind Drool and Mandala in their respective pages.
Failed Inktober/Drawtober due to sickness.
I focused on a fantasy setting I had in mind for a few years, both writing and drawing elements of this world.

And for next year:

Keep being a little more me every day.
Learn to let go for real.
Continue to balance equally taking care of myself and others.
Nurture my relationships with others.
Participate in an art fair with a broader range of art than last year.
Write more on my website about various things.
Finish my CMS.
Go on vacation alone again, and with my partner.
Refurnish and redecorate more rooms in my apartment.
Keep exploring fashion to find my own style.

0 views
devansh Yesterday

HonoJS JWT/JWKS Algorithm Confusion

After spending some time looking for security issues in JS/TS frameworks, I moved on to Hono - fast, clean, and popular enough that small auth footguns can become "big internet problems". This post is about two issues I found in Hono's JWT/JWKS verification path:

- a default algorithm footgun in the JWT middleware that can lead to forged tokens if an app is misconfigured
- a JWK/JWKS algorithm selection bug where verification could fall back to an untrusted value

Both were fixed in hono 4.11.4, and GitHub Security Advisories were published on January 13, 2026.

If you already have experience with JWT stuff, you can skip this:

- A JWT is a signed token whose header includes the signing algorithm.
- A JWK is a JSON representation of a key (e.g. an RSA public key).
- A JWKS is a set of JWKs, usually hosted at a well-known URL.

The key point here is that algorithm choice must not be attacker-controlled.

The first issue: Hono's JWT helper documents that the algorithm option is optional - and defaults to HS256. That sounds harmless until you combine it with a very common real-world setup:

- the app expects RS256 (asymmetric)
- the developer passes an RSA public key string
- but they don't explicitly set the algorithm

In that case, the verification path defaults to HS256, treating that public key string as an HMAC secret, and that becomes forgeable because public keys are, well… public. If an attacker can generate a token that passes verification, they can mint whatever claims the application trusts and walk straight into protected routes. This is the "algorithm confusion" class of bugs, where you think you're doing asymmetric verification, but you're actually doing symmetric verification with a key the attacker knows.

Who is affected? This is configuration-dependent. The dangerous case is:

- you use the JWT middleware with an asymmetric public key
- and you don't pin the algorithm

The core issue is that Hono defaults to HS256, so a public key string can accidentally be used as an HMAC secret, allowing forged tokens and auth bypass. Advisory: GHSA-f67f-6cw9-8mq4. This was classified as High (CVSS 8.2) and maps to CWE-347 (Improper Verification of Cryptographic Signature). Affected versions are listed in the advisory; patched version: 4.11.4.

The second issue: in the JWK/JWKS verification middleware, Hono could pick the verification algorithm like this:

- use the algorithm from the matching JWK, if present
- otherwise, fall back to the one from the JWT (unverified input)

GitHub's advisory spells it out: when the selected JWK doesn't explicitly define an algorithm, the middleware falls back to using the one from the unverified JWT header - and since the algorithm field in a JWK is optional and commonly omitted, this becomes a real-world issue. If the matching JWKS key lacks an algorithm, verification falls back to the token-controlled one, enabling algorithm confusion / downgrade attacks. Trusting the token's own header is basically letting the attacker influence how you verify the signature. Depending on surrounding constraints (allowed algorithms, how keys are selected, and how the app uses claims), this can lead to forged tokens being accepted and authz/authn bypass. Advisory: GHSA-3vhc-576x-3qv4. This was classified as High (CVSS 8.2), also CWE-347, and patched in 4.11.4.

Both advisories took the same philosophical stance: make the algorithm explicit. Don't infer it from attacker-controlled input. The JWT middleware now requires an explicit algorithm option, a breaking change that forces callers to pin the algorithm instead of relying on defaults (example configuration shown in the advisory). The JWK/JWKS middleware now requires an explicit allowlist of asymmetric algorithms, and it no longer derives the algorithm from untrusted JWT header values. It also explicitly rejects symmetric HS* algorithms in this context (example configuration shown in the advisory).

Disclosure timeline:

- Discovery: 9 Dec 2025
- First response: 9 Dec 2025
- Patched in: hono 4.11.4
- Advisories published: 13 Jan 2026 (GHSA-f67f-6cw9-8mq4, GHSA-3vhc-576x-3qv4)
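To make the HS256-default failure mode concrete, here is a toy, self-contained sketch of the attack primitive using Node's crypto module. This is my own illustrative code, not Hono's implementation: a verifier that ends up doing HMAC verification with a public-key PEM as the "secret" will accept tokens forged by anyone who holds that public key.

```typescript
import { createHmac, generateKeyPairSync } from "node:crypto";

// Toy illustration of JWT algorithm confusion (NOT Hono's code): if a verifier
// defaults to HS256 and is handed an RSA *public* key as its "secret", anyone
// with that public key can mint tokens that pass verification.
const b64url = (s: string): string => Buffer.from(s).toString("base64url");

function forgeToken(publicPem: string, claims: Record<string, unknown>): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const payload = b64url(JSON.stringify(claims));
  const signature = createHmac("sha256", publicPem)
    .update(`${header}.${payload}`)
    .digest("base64url");
  return `${header}.${payload}.${signature}`;
}

// A misconfigured verifier: HS256 with the public key string as the HMAC key.
function naiveVerify(token: string, secret: string): boolean {
  const [header, payload, signature] = token.split(".");
  const expected = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  return signature === expected;
}

const { publicKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });
const pem = publicKey.export({ type: "spki", format: "pem" }) as string;

// The "attacker" only ever touches the public key, yet the token verifies:
console.log(naiveVerify(forgeToken(pem, { sub: "admin" }), pem)); // true
```

Pinning the algorithm to RS256, as the patched middleware now forces callers to do, kills this primitive: an HS256 header can no longer select HMAC verification.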

0 views
Simon Willison 2 days ago

First impressions of Claude Cowork, Anthropic's general agent

New from Anthropic today is Claude Cowork, a "research preview" that they describe as "Claude Code for the rest of your work". It's currently available only to Max subscribers ($100 or $200 per month plans) as part of the updated Claude Desktop macOS application. I've been saying for a while now that Claude Code is a "general agent" disguised as a developer tool. It can help you with any computer task that can be achieved by executing code or running terminal commands... which covers almost anything, provided you know what you're doing with it! What it really needs is a UI that doesn't involve the terminal and a name that doesn't scare away non-developers. "Cowork" is a pretty solid choice on the name front! The interface for Cowork is a new tab in the Claude desktop app, called Cowork. It sits next to the existing Chat and Code tabs. It looks very similar to the desktop interface for regular Claude Code. You start with a prompt, optionally attaching a folder of files. It then starts work. I tried it out against my perpetually growing "blog-drafts" folder with the following prompt: Look at my drafts that were started within the last three months and then check that I didn't publish them on simonwillison.net using a search against content on that site and then suggest the ones that are most close to being ready. It started by running this command: That path instantly caught my eye. Anthropic say that Cowork can only access files you grant it access to - it looks to me like they're mounting those files into a containerized environment, which should mean we can trust Cowork not to be able to access anything outside of that sandbox. It turns out I have 46 draft files from the past three months. Claude then went to work with its search tool, running 44 individual searches to figure out which of my drafts had already been published.
Here's the eventual reply: Based on my analysis, here are your unpublished drafts that appear closest to being ready for publication: 🔥 Most Ready to Publish (substantial content, not yet published)

- "Frequently Argued Questions about LLMs" (22,602 bytes). This is a meaty piece documenting common arguments about LLMs with your counterpoints. Well-structured with a TL;DR and multiple sections. No matching published article found on your site. Very close to ready - just needs a final review pass.
- "Claude Code Timeline and Codex Timeline" (3,075 bytes). About viewing JSONL session logs from Claude Code and Codex. You published on Dec 25, but this appears to be a different/earlier piece about timeline viewing tools. Shorter but seems complete.
- Plugin Upgrade Guide (3,147 bytes). Technical guide for plugin authors. You published the main 1.0a20 announcement but this companion upgrade guide appears unpublished. Would be valuable for plugin maintainers.

That's a good response! It found exactly what I needed to see, although those upgrade instructions are actually published elsewhere now (in the Datasette docs) and weren't actually intended for my blog. Just for fun, and because I really like artifacts, I asked for a follow-up: Make me an artifact with exciting animated encouragements to get me to do it. Here's what I got: I couldn't figure out how to close the right sidebar so the artifact ended up cramped into a thin column but it did work. I expect Anthropic will fix that display bug pretty quickly.

I've seen a few people ask what the difference between this and regular Claude Code is. The answer is not a lot. As far as I can tell Claude Cowork is regular Claude Code wrapped in a less intimidating default interface and with a filesystem sandbox configured for you without you needing to know what a "filesystem sandbox" is. Update: It's more than just a filesystem sandbox - I had Claude Code reverse engineer the Claude app and it found out that Claude uses VZVirtualMachine - the Apple Virtualization Framework - and downloads and boots a custom Linux root filesystem. I think that's a really smart product. Claude Code has an enormous amount of value that hasn't yet been unlocked for a general audience, and this seems like a pragmatic approach.

With a feature like this, my first thought always jumps straight to security. How big is the risk that someone using this might be hit by hidden malicious instructions somewhere that break their computer or steal their data? Anthropic touch on that directly in the announcement: You should also be aware of the risk of "prompt injections": attempts by attackers to alter Claude's plans through content it might encounter on the internet. We've built sophisticated defenses against prompt injections, but agent safety - that is, the task of securing Claude's real-world actions - is still an active area of development in the industry. These risks aren't new with Cowork, but it might be the first time you're using a more advanced tool that moves beyond a simple conversation. We recommend taking precautions, particularly while you learn how it works. We provide more detail in our Help Center.

That help page includes the following tips to minimize risks:

- Avoid granting access to local files with sensitive information, like financial documents.
- When using the Claude in Chrome extension, limit access to trusted sites.
- If you choose to extend Claude's default internet access settings, be careful to only extend internet access to sites you trust.
- Monitor Claude for suspicious actions that may indicate prompt injection.

I do not think it is fair to tell regular non-programmer users to watch out for "suspicious actions that may indicate prompt injection"! I'm sure they have some impressive mitigations going on behind the scenes. I recently learned that the summarization applied by the WebFetch function in Claude Code and now in Cowork is partly intended as a prompt injection protection layer via this tweet from Claude Code creator Boris Cherny: Summarization is one thing we do to reduce prompt injection risk. Are you running into specific issues with it? But Anthropic are being honest here with their warnings: they can attempt to filter out potential attacks all they like but the one thing they can't provide is guarantees that no future attack will be found that sneaks through their defenses and steals your data (see the lethal trifecta for more on this.) The problem with prompt injection remains that until there's a high profile incident it's really hard to get people to take it seriously. I myself have all sorts of Claude Code usage that could cause havoc if a malicious injection got in. Cowork does at least run in a filesystem sandbox by default, which is more than can be said for my habit! I wrote more about this in my 2025 round-up: The year of YOLO and the Normalization of Deviance.

Security worries aside, Cowork represents something really interesting. This is a general agent that looks well positioned to bring the wildly powerful capabilities of Claude Code to a wider audience. I would be very surprised if Gemini and OpenAI don't follow suit with their own offerings in this category. I imagine OpenAI are already regretting burning the name "ChatGPT Agent" on their janky, experimental and mostly forgotten browser automation tool back in August! bashtoni on Hacker News: Simple suggestion: logo should be a cow and an orc to match how I originally read the product name. I couldn't resist throwing that one at Nano Banana.

1 view

2026-2: Week Notes

This week felt like a slow, slightly awkward return to routine. I worked from home , which I’m grateful for, but with the kids home (summer holidays) and my mum visiting, it took a surprising amount of energy to focus and do anything at all. Not productive necessarily. Just not completely stagnant. I noticed how easily I slip into managing everyone’s time and behavior when I’m physically around. It also made me notice, again, where most of my mental energy actually goes outside of work. One big chunk goes into managing my food and weight (as much as I hate to admit it). The second big energy drain is navigating the kids and electronics. (I am just mentioning it here, but I plan to write about it some more later). A bright spot was spending time creating my 2026 direction. I realised I don’t really want achievement-style goals right now. I want a way of being. My central theme is “Let myself be happier.” With gentler yoga goals, I managed to do yoga every day this week (15–20 minutes). I can already feel the difference. I went for almost two weeks without it and could feel myself getting stiffer. It doesn’t take long at this age. On the fun side, I’ve been watching Dark Matter and thinking about regret and the paths we don’t take. I’ve always enjoyed Blake Crouch’s work. It’s slightly terrifying and bordering on hard sci-fi. I also discovered (and loved!) Pluribus . If you’ve watched it, do the Others remind you of ChatGPT or other GenAI? (to save from spoiling it for anyone, I won’t say why). Family movie nights were dominated by Avatar rewatches and finally seeing the latest one in the cinema last night. It’s three and a half hours long, which honestly felt offensive. I kept thinking, who does James Cameron think he is, taking that much of my life? It was beautiful and fine, but not three-and-a-half-hours good. I would have happily traded that time for three more episodes of Pluribus. That said, the kids loved it, especially my (almost sixteen year old) son. 
My husband had a terrible cough, so I ended up sleeping on a mattress on the floor in my daughter’s room so everyone (maybe not him) could get some sleep, especially with my mum in the guest room. It reminded me (again) how much I care about furniture being practical and multi-use. I still regret not insisting on couches you can properly sleep on. Where I come from, all couches can become beds. It just makes sense to me. I don’t like furniture that only serves one purpose, no matter how pretty it may be. This also nudged me back toward the idea of doing another round of simplifying at home, not because the house is cluttered, but because less always feels lighter to me (makes me feel lighter, I guess). I will make a plan. Maybe start in February or so. Socially, I’m moving toward my 2026 direction of hosting gatherings and bringing people together. Drinks with a neighbour, lunches with my mum and the kids, and long phone calls with friends overseas. The first gathering of neighbours for 2026 is booked for next Saturday (granted, my husband organised that one, but nevertheless). I’ve been thinking more about how many social catch-ups become pure life recaps and updates rather than shared experiences. Life itself is lived somewhere else, not inside the friendship. I’d like to experiment with hosting and gatherings that create something memorable together, not just conversation. That idea has been sitting with me, and I’m feeling more drawn to creating gatherings that have some kind of purpose or shared experience. I’m reading The Life Impossible by Matt Haig. I usually enjoy his books. The lessons and themes tend to be obvious, a bit like Paulo Coelho, but that’s part of the appeal and probably why they’re so popular. And also, I have no idea where this book is taking me. It’s also nice to see an older protagonist. The main character is 72.
I also just finished Better Than Happiness: The True Antidote to Discontent by Gregory P. Smith, a memoir I picked up from the library intending to skim, but it fascinated me enough to read the whole thing. There were some really nice insights around acceptance, self-acceptance, anger, and learning how to actually live in the present moment. “In some ways, it’s a paradox. To change something we first have to accept it for what it is. Only through accepting my perceived flaws and limitations could I see that there were pathways to improvement. The same applied when it came to learning to accept one of the biggest conundrums in my life, the man in the mirror. Self-acceptance is the main reason I’m not only here today, but able to look at myself in the mirror.” Overall, the week felt reflective. I’m noticing how hard I still am on myself and trying to soften that. Self-acceptance! If this year really is about letting myself be happier, then noticing these small choices and energy leaks feels like the right place to start. PREVIOUS WEEK: 2026-1: Week Notes

0 views
Jeff Geerling 2 days ago

Raspberry Pi Pico Mini Rack GPS Clock

I wanted to have the most accurate timepiece possible mounted in my mini rack. Therefore I built this: This is a GPS-based clock running on a Raspberry Pi Pico in a custom 1U 10" rack faceplate. The clock displays time based on a GPS input, and will not display time until a GPS timing lock has been acquired. For full details on designing and building this clock, see:

- When you turn on the Pico, the display reads
- Upon 3D fix, you get a time on the clock, and the colon starts blinking
- If the 3D fix is lost, the colon goes solid
- When the 3D fix is regained, the colon starts blinking again
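The display behavior described above can be modeled as a tiny state function. This is an illustrative sketch of that logic only, not the actual Pico firmware (which the post documents separately):

```typescript
// Illustrative model of the clock's display rules (not the real Pico firmware).
// hasEverFixed: a 3D GPS timing lock has been acquired at least once.
// hasFixNow: the 3D fix currently holds.
type Display = { showTime: boolean; colonBlinking: boolean };

function displayState(hasEverFixed: boolean, hasFixNow: boolean): Display {
  if (!hasEverFixed) {
    // No time is shown until the first GPS timing lock.
    return { showTime: false, colonBlinking: false };
  }
  // Once a fix has been seen, the time stays on; a blinking colon signals the
  // 3D fix is currently held, a solid colon signals it has been lost.
  return { showTime: true, colonBlinking: hasFixNow };
}
```

The nice property of this design is that the colon doubles as a health indicator: you can tell at a glance whether the displayed time is still being disciplined by GPS.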

0 views
Manuel Moreale 2 days ago

A moment with tea

Learning to appreciate different flavors is something that comes very hard for me. And yet, for some reason, tea is one of those things that no matter how hard it is for my tastebuds, I’ll constantly come back to.

0 views
iDiallo 2 days ago

Boredom is the Gatekeeper

That first Monday of my holiday break, I made a promise to myself. No work emails, no side projects, not even glancing at my blog. This time was for family, for Netflix queues, for rereading dog-eared novels. One thing I was really looking forward to was learning something new, a new skill. Not for utility, but purely for curiosity. I wanted to learn about batteries. They power our world, yet they're a complete mystery to me. I only vaguely remember what I learned in high school decades ago. This would be the perfect subject for me. I went straight to a website I had bookmarked years ago in a fit of intellectual ambition: BatteryUniversity.com. I started with the chemistry of lead acid batteries. I was ready to be enlightened. Twenty minutes later, I was three paragraphs in, my mind adrift. The text was dense, packed with terms like "lead-antimony" and "acid-starved." My finger twitched. Then I read this: the sealed lead acid battery is designed with a low over-voltage potential to prohibit the battery from reaching its gas-generating potential during charge. I thought, wouldn't this be easier to understand as a YouTube video? A nice animation? I clicked away. It seemed like I had just met the gatekeeper, and it had turned me away. I was bored. We talk about boredom as if it's the absence of stimulation. Having nothing to do. But in our hyperconnected world, where information is constantly flowing and distractions are a finger tap away, true emptiness is rare. Modern boredom isn't having nothing to do. I had plenty of material to go over. Instead, it's the friction of deep focus. It's the resistance you feel when you move from consuming information to building those neural connections in your brain. Learning feels slow and hard, and it is ungratifying compared to dopamine-induced YouTube videos. Have you ever watched a pretty good video on YouTube and learned nothing from it? This reaction to learning the hard way, masquerading as boredom, is the gatekeeper. 
And almost every important skill in life lives on the other side of that gate. When I started working for an AI startup, I was fascinated by what we were able to accomplish with a team of just two engineers. It looked like magic to me at first. You feed the AI some customer's message, and it tells you exactly what this person needs. So, to be an effective employee, I decided to learn the subject in depth. Moving from just a consumer of an API to a model creator made the process look un-magical. It started with spreadsheets where we cleaned data. There was a loss function that stubbornly refused to budge for hours. There was staring at a single Python error that said the tensor dimensions don't align. The boring part was the meticulous engineering upon which the magic is built. I find it fascinating now, but it was frustrating at the time, and I had to force myself to learn it. Like most developers, video games inspired me to become a programmer. I wanted to code my own game from scratch. I remember playing Devil May Cry and thinking about how I would program those boss battles. But when I sat with a keyboard and the cursor on my terminal flashed before me, I struggled to move a gray box on the screen using SDL. For some reason, when I pressed arrow keys, the box jittered instead of following a straight line. I would spend the whole day reading OpenGL and SDL documentation only to fix a single bug. Boredom was going through all this documentation, painfully, only to make small incremental progress. When you start a business, the gatekeeper shows its face. It stares back at you when you open that blank document and write a single line of text in it: My idea. For indie developers, it's the feeling you get when you build the entire application and feel compelled to start over rather than ship what you've built. This boredom is the feeling of creation from nothing, which is always harder than passive consumption.
We've conflated "interesting" with "easy to consume." The most interesting things in the world, like building software, writing a book, mastering a craft, understanding a concept, are never easy to produce. Their initial stages are pure effort. Gamification tries to trick us past the gatekeeper with points and badges, but that's just putting a costume on it. The real work remains. There is no way around it. You can't eliminate that feeling. Instead, you have to recognize it for what it is and push through. When you feel that itchy tug toward a distracting tab, that's the gatekeeper shaking its keys. It's telling you that what you're doing is really hard, and it would be easier to just passively consume it. You might even enjoy the process without ever learning anything. Instead, whenever you feel it, set a timer for 25 minutes. Agree to wrestle with the battery chemistry, the Python error, or the empty page. Just for that short time span. There is no dopamine hit waiting on the other side of boredom like you get from passive consumption. Instead, the focus, the struggle, the sustained attention, that's the process of learning. The gatekeeper ensures only those willing to engage in the hard, quiet work of thinking get to the good stuff. I did not become a battery expert over the holidays. But at least I learned to recognize the gatekeeper's face. Now, when I feel that familiar, restless boredom descend as I'm trying to learn something hard, I smile a little. I know I'm at the threshold. And instead of turning back, I take a deep breath, set my timer to 25 minutes, and I power through the gate.

2 views