Posts in JavaScript (20 found)
iDiallo Yesterday

Back button hijacking is going away

When websites are blatantly hostile, users close them and never come back. Have you ever downloaded an app, realized it was deceptive, and deleted it immediately? It's a common occurrence for me. But there is truly hostile software that we still end up using daily. We don't delete those apps because the hostility is far more subtle. It's like the boiling frog: the heat turns up so slowly that the frog enjoys a nice warm bath before it's fully cooked. Clever hostile software introduces one frustrating feature at a time.

Every time I find myself on LinkedIn, it's not out of pleasure. Maybe it's an email about an enticing job. Maybe it's an article someone shared with me. Either way, before I click the link, I have no intention of scrolling through the feed. Yet I end up on it anyway, not because I want to, but because I've been tricked. You see, LinkedIn employs a trick called back button hijacking. You click a LinkedIn URL that a friend shared, read the article, and when you're done, you click the back button expecting to return to whatever app you were on before. But instead of going back, you're still on LinkedIn. Except now, you are on the homepage, where your feed loads with enticing posts that lure you into scrolling. How did that happen? How did you end up on the homepage when you only clicked on a single link? That's back button hijacking.

Here's how it works. When you click the original LinkedIn link, you land on a page and read the article. In the background, LinkedIn secretly gets to work. Using JavaScript's history.replaceState() method, it swaps the page's URL for the homepage. replaceState() doesn't add an entry to the browser's history. Then LinkedIn pushes the original URL you landed on back onto the history stack with history.pushState(). This all happens so fast that the user never notices any change in the URL or the page. As far as the browser is concerned, you opened the LinkedIn homepage and then clicked on a post to read it.
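The two-step trick can be sketched in a few lines. This is an illustrative reconstruction, not LinkedIn's actual code, and the URLs are made up; in a browser the two operations are history.replaceState() and history.pushState(), while here the history stack is modelled as a plain array so the effect is visible outside a browser.

```javascript
// Where the user came from, plus the article link they clicked:
const historyStack = ["https://some-app.example/"];
const articleUrl = "https://linkedin.example/posts/article-1";
historyStack.push(articleUrl);

// Step 1 - the equivalent of history.replaceState(null, "", "/feed/"):
// swap the *current* entry for the homepage. No new entry is added,
// so the user notices nothing.
historyStack[historyStack.length - 1] = "https://linkedin.example/feed/";

// Step 2 - the equivalent of history.pushState(null, "", articleUrl):
// push the article URL back on top. The visible page never changed.
historyStack.push(articleUrl);

// Pressing Back pops the article entry...
historyStack.pop();
// ...and lands on the feed, not on the app the user came from.
console.log(historyStack[historyStack.length - 1]);
```

The key property being abused is that replaceState rewrites the current history entry silently, while pushState fabricates a new one without any navigation.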
So when you click the back button, you're taken back to the homepage, the feed loads, and you're presented with the most engaging post to keep you on the platform. If you spent a few minutes reading the article, you probably won't even remember how you got to the site. So when you click back and see the feed, you won't question it. You'll assume nothing deceptive happened.

While LinkedIn only pushes you one level down in the history stack, more aggressive websites can break the back button entirely. They push a new history state every time you try to go back, effectively trapping you on their site. In those cases, your only option is to close the tab.

I've also seen developers unintentionally break the back button, often when implementing a search feature. On a search box where each keystroke returns a result, an inexperienced developer might push a new history state on every keystroke, intending to let users navigate back to previous search terms. Unfortunately, this creates an excessive number of history entries. If you typed a long search query, you'd have to click the back button for every character (including spaces) just to get back to the previous page. The correct approach is to only push a history state when the user submits or leaves the search box.

As of yesterday, Google announced a new spam policy to address this issue. Their reasoning: "People report feeling manipulated and eventually less willing to visit unfamiliar sites. As we've stated before, inserting deceptive or manipulative pages into a user's browser history has always been against our Google Search Essentials." Any website using these tactics will be demoted in search results: "Pages that are engaging in back button hijacking may be subject to manual spam actions or automated demotions, which can impact the site's performance in Google Search results. To give site owners time to make any needed changes, we're publishing this policy two months in advance of enforcement on June 15, 2026."
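The search-box fix can be sketched the same way, with the history stack again modelled as an array (URLs and handler names are made up for illustration). Keystrokes rewrite the current entry in place, the way history.replaceState() would; only a submitted search creates a new entry, the way history.pushState() would.

```javascript
// The user is already on the search page:
const stack = ["https://shop.example/", "https://shop.example/search"];

// Equivalent of history.replaceState on each keystroke: the current
// entry is rewritten, so the stack never grows while typing.
function onKeystroke(query) {
  stack[stack.length - 1] =
    "https://shop.example/search?q=" + encodeURIComponent(query);
}

// Equivalent of history.pushState on submit: exactly one new entry
// per completed search.
function onSubmit(query) {
  stack.push(
    "https://shop.example/search?q=" + encodeURIComponent(query) + "&submitted=1");
}

// Typing "cat" one character at a time, then submitting:
["c", "ca", "cat"].forEach(onKeystroke);
onSubmit("cat");

console.log(stack.length); // three entries total, not one per character
```

Had every keystroke used pushState instead, the stack would hold five entries for this three-character query, and the back button would have to be pressed once per character.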
I'm not sure how much search rankings affect LinkedIn specifically, but in the grand scheme of things, this is a welcome change. I hope this practice is abolished entirely.

iDiallo 5 days ago

Your friends are hiding their best ideas from you

Back in college, the final project in our JavaScript class was to build a website. We were a group of four, and we built the best website in class. It was for a restaurant called the Coral Reef. We found pictures online, created a menu, and settled on a solid theme. I was taking a digital art class in parallel, so I used my Photoshop skills to place our logo inside pictures of our fake restaurant. All of a sudden, something clicked.

We were admiring our website on a CRT monitor when my classmate pulled me aside. She had an idea. A business idea. An idea so great that she couldn't share it with the rest of the team. She whispered, covering her mouth with one hand so a lip reader couldn't steal this fantastic idea: "What if we build websites for people?"

This was the 2000s; of course it was a fantastic idea. The perfect time to spin up an online business after a market crash. But what she didn't know was that, while I was in class in the mornings, my afternoons were spent scouring Craigslist and building crappy websites for a hundred to two hundred dollars apiece. I wasn't going to share my measly spoils. If anything, this was the perfect time to build that kind of service. "That's a great idea," I said.

There is something satisfying about having an idea validated. A sort of satisfaction we get from the acknowledgment. We are smart, and our ideas are good. Whenever someone learned that I was a developer, they felt this urge to share their "someday" idea. It's an app, a website, or some technology I couldn't even make sense of. I used to try to dissect these ideas, get to the nitty-gritty details, scrutinize them. But that always ended in hostility. "Yeah, you don't get it. You probably don't have enough experience" was a common response when I didn't give a resounding yes. I don't get those questions anymore, at least not framed in the same way. I have worked for decades in the field, and I even have a few failed start-ups under my belt. I'm ready to hear your ideas.
But that job has been taken, not by another eager developer with even more experience, or maybe a successful start-up on their résumé. No, not a person. AI took this job. Somewhere behind a chatbot interface, an AI is telling one of your friends that their idea is brilliant. Another AI is telling them to write out the full details in a prompt and it will build the app in a single stroke. That friend probably shared a localhost:3000 link with you, or a Lovable app, last year. That same friend was satisfied with the demo they saw then and has most likely moved on.

In the days when I stood as a judge, validating an idea was rarely what sparked a business. The satisfaction was in the telling. And today, a prompt is rarely a spark either. In fact, the prompt is not enough. My friends share a link to their ChatGPT conversation as proof that their idea is brilliant. I can't deny it; the robot has already spoken. I'm not the authority on good or bad ideas. I've called ideas stupid that went on to make millions of dollars. (A ChatGPT wrapper for SMS, for instance.)

A decade ago, I was in Y Combinator's Startup School. In my batch, there were two co-founders: one was the developer, and the other was the idea guy. In every meeting, the idea guy would come up with a brand new idea that had nothing to do with their start-up. The instructor tried to steer him toward being the salesman, but he wouldn't budge. "My talent is in coming up with ideas," he said.

We love having great ideas. We're just not interested in starting a business, because that's what it actually takes. A friend will joke, "Here's an idea," then proceed to tell me their idea. "If you ever build it, send me my share." They are not expecting me to build it. They are happy to have shared a great idea.

As for my classmate, she never spoke of the business again. But over the years, she must have sent me at least a dozen clients. It was a great idea after all.


watgo - a WebAssembly Toolkit for Go

I'm happy to announce the general availability of watgo - the WebAssembly Toolkit for Go. This project is similar to wabt (C++) or wasm-tools (Rust), but in pure, zero-dependency Go. watgo comes with a CLI and a Go API to parse WAT (WebAssembly Text), validate it, and encode it into WASM binaries; it also supports decoding WASM from its binary format. At the center of it all is wasmir - a semantic representation of a WebAssembly module that users can examine (and manipulate). watgo's functionality centers on four operations: parse, validate, encode, and decode.

The CLI can be installed with the standard go install command, and it aims to be compatible with wasm-tools [1]; I've already switched my wasm-wat-samples projects to use it. For example, it can parse a WAT file, validate it, and encode it into binary format in a single command.

wasmir semantically represents a WASM module with an API that's easy to work with, so you can parse a simple WAT program and do some analysis on the result. One important note: the WAT format supports several syntactic niceties that are flattened / canonicalized when lowered to wasmir. For example, all folded instructions are lowered to unfolded ones (linear form), function & type names are resolved to numeric indices, etc. This matches the validation and execution semantics of WASM and its binary representation. These syntactic details are present in watgo in the textformat package (which parses WAT into an AST) and are removed when the AST is lowered to wasmir. The textformat package is kept internal at this time, but in the future I may consider exposing it publicly - if there's interest.

Even though it's still early days for watgo, I'm reasonably confident in its correctness due to a strategy of very heavy testing right from the start. WebAssembly comes with a large official test suite, which is perfect for end-to-end testing of new implementations.
The core test suite includes almost 200K lines of WAT files that carry several modules with expected execution semantics and a variety of error scenarios. These live in specially designed .wast files and leverage a custom spec interpreter. watgo hijacks this approach by using the official test suite for its own testing. A custom harness parses .wast files and uses watgo to convert the WAT in them to binary WASM, which is then executed by Node.js [2]; this harness is a significant effort in itself, but it's very much worth it - the result is excellent testing coverage. watgo passes the entire WASM spec core test suite. Similarly, watgo leverages wabt's interp test suite, which also includes end-to-end tests, using a simpler Node-based harness. Finally, I maintain a collection of realistic program samples written in WAT in the wasm-wat-samples repository; these are also used by watgo to test itself.

To recap, watgo's four core operations are:
- Parse: a parser from WAT to wasmir
- Validate: uses the official WebAssembly validation semantics to check that the module is well formed and safe
- Encode: emits wasmir into WASM binary representation
- Decode: reads WASM binary representation into wasmir
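To make the folded-to-linear lowering mentioned above concrete, here's the same function in both WAT forms. This is a minimal hand-written example, not taken from watgo's documentation:

```wasm
;; Folded (nested) form, with symbolic names:
(module
  (func $add (param $a i32) (param $b i32) (result i32)
    (i32.add (local.get $a) (local.get $b))))

;; The same function as wasmir would represent it: a linear
;; instruction sequence, with names resolved to numeric indices:
(module
  (func (param i32 i32) (result i32)
    local.get 0
    local.get 1
    i32.add))
```

Both spellings encode to the identical WASM binary; the folded form is purely a text-format convenience, which is why it can be discarded when lowering to a semantic representation.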


Has Mythos just broken the deal that kept the internet safe?

For nearly 20 years the deal has been simple: you click a link, arbitrary code runs on your device, and a stack of sandboxes keeps that code from doing anything nasty. Browser sandboxes for untrusted JavaScript, VM sandboxes for multi-tenant cloud, ad iframes so banner creatives can't take over your phone or laptop - the modern internet is built on the assumption that those sandboxes hold. Anthropic just shipped a research preview that generates working exploits for one of them 72.4% of the time, up from under 1% a few months ago. That deal might be breaking.

From what I've read, Mythos is a very large model. Rumours have pointed to it being similar in size to the short-lived (and very underwhelming) GPT-4.5. As such I'm with a lot of commentators in thinking that a primary reason this hasn't been rolled out further is compute. Anthropic is probably the most compute-starved major AI lab right now, and I strongly suspect they do not have the compute to roll this out more broadly even if they wanted to. From leaked pricing, it's expensive as well - at $125/MTok output (5x more than Opus, which is itself the most expensive model out there).

One thing that has been overlooked amid all the focus on frontier-scale models is how quickly improvements in the huge models are being replicated in far smaller models. I've spent a lot of time with the Gemma 4 open-weights model, and it is incredibly impressive for a model that is ~50x smaller than the frontier models. So I have no doubt that whatever capabilities Mythos has will relatively quickly be available in smaller, and thus easier to serve, models. And even if Mythos' huge size somehow is intrinsic to its abilities (I very much doubt this, given current progress in scaling smaller models), it's only a matter of time before newer chips [1] are able to serve it en masse. It's important to look to where the puck is going. As I've written before, LLMs in my opinion pose an extremely serious cybersecurity risk.
Fundamentally, we are seeing a radical change in how easy it is to find (and thus exploit) serious flaws and bugs in software for nefarious purposes. To back up a step, it's important to understand how modern cybersecurity is currently achieved. One of the most important concepts is that of a sandbox. Nearly every electronic device you touch day to day has one (or many) layers of these to protect the system. In short, a sandbox is a so-called 'virtualised' environment where software can execute on the system, but with limited permissions, segregated from other software, behind a very strong boundary that prevents the software from 'breaking out' of the sandbox.

If you're reading this on a modern smartphone, you have at least 3 layers of sandboxing between this page and your phone's operating system. First, your browser has (at least) two levels of sandboxing. One is the JavaScript execution environment (which runs the interactive code on websites). This is in turn contained by the browser sandbox, which limits what the site as a whole can do. Finally, iOS or Android has an app sandbox which limits what the browser as a whole can do. This defence in depth is absolutely fundamental to modern information security; it is what allows users to browse "untrusted" websites with any level of safety. For a malicious website to gain control over your device, it needs to chain together multiple vulnerabilities, all at the same time. In reality this is extremely hard to do (and these kinds of chains fetch millions of dollars on the grey market).

Guess what? According to Anthropic, Mythos Preview successfully generates a working exploit for Firefox's JS shell in 72.4% of trials. Opus 4.6 managed this in under 1% of trials in a previous evaluation. Worth flagging a couple of caveats. The JS shell here is Firefox's standalone SpiderMonkey - so this is escaping the innermost sandbox layer, not the full browser chain (the renderer process and OS app sandbox still sit on top).
And it's Anthropic's own benchmark, not an independent one. But even hedging both of those, the trajectory is what matters - we're going from "effectively zero" to "72.4% of the time" in one model generation, on a real-world target rather than a toy CTF.

This is pretty terrifying if you understand the implications. If an LLM can find exploits in sandboxes - which are some of the most well-secured pieces of software on the planet - then suddenly every website you aimlessly browse could contain malicious code that 'escapes' the sandbox and theoretically takes control of your device - and all the data on your phone could be sent to someone nasty. These attacks are so dangerous because the internet is built around sandboxes being safe. For example, each banner ad your browser loads runs in a separate sandboxed environment. This means ads can run a huge amount of (mostly) untested code, with everyone relying on the browser sandbox for protection. If that sandbox falls, then suddenly a malicious ad campaign can take over millions of devices in hours.

Equally, sandboxes (and virtualisation) are fundamental to allowing cloud computing to operate at scale. Most server workloads these days are not running against the actual physical server they are on. Instead, AWS et al take the physical hardware and "slice" it up into so-called "virtual" servers, selling each slice to different customers. This allows many more applications to run on a single server - and enables some pretty nice profit margins for the companies involved. This operates on roughly the same model as your phone, with various layers to protect customers from accessing each other's data and (more importantly) from accessing the control plane of AWS. So, we have a very, very big problem if these sandboxes fail, and all fingers point towards this being the case this year.
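For concreteness, the per-ad isolation described above largely comes down to the iframe sandbox attribute; a minimal sketch, with a made-up ad URL:

```html
<!-- The ad creative can run its own scripts, but the sandbox attribute
     withholds everything not explicitly allowed: no access to the parent
     page's DOM or cookies (the frame gets a unique opaque origin), no
     top-level navigation, no popups, no form submission. -->
<iframe src="https://ads.example.com/creative.html"
        sandbox="allow-scripts"
        title="Advertisement"></iframe>
```

The whole scheme assumes the boundary itself holds: the sandbox attribute only constrains well-behaved browsers, so an exploit that escapes the underlying renderer sandbox bypasses all of it at once.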
I should tone down the disaster porn slightly - there have been many sandbox escapes before that haven't caused chaos - but I have a strong feeling that this time is going to be difficult. And to be clear, when just AWS us-east-1 goes down (which it has done many, many, times) it is front-page news globally and tends to cause significant disruption to day-to-day life. That is just one of AWS's regions - if a malicious actor was able to take control of the AWS control plane, it's likely they'd be able to take all regions simultaneously, and recovery would likely be far harder with a bad actor in charge than it was for the internal faults behind previous outages - which were themselves extremely difficult to recover from in a timely way.

Given all this, it's understandable that Anthropic are being cautious about releasing this into the wild. The issue, though, is that the cat is out of the bag. Even if Anthropic pulled a Miles Dyson and lowered their model code into a pit of molten lava, someone else is going to scale an RL model and release it. The incentives are far, far too high, and the prisoner's dilemma strikes again.

The current status quo seems to be that these next-generation models will be released to a select group of cybersecurity professionals and related organisations, so they can fix things as much as possible to get a head start. Perhaps this is the best that can be done, but it seems to me to be a repeat of the famous "security through obscurity" approach, which has become a meme in itself in the information security world. It also seems far-fetched to me that the organisations who do have access are going to find even most of the critical problems in a limited time window. And that brings me to my final point. While Anthropic are providing $100m of credit and $4m of 'direct cash donations' to open source projects, it's not all open source projects.
There are a lot of open source projects that everyone relies on without realising. While the obvious ones like the Linux kernel are getting this "access" ahead of time, there are literally millions of pieces of open source software (never mind commercial software) that are essential to the operation of a substantial minority of systems. I'm not quite sure where the plan leaves those. Perhaps this is just another round in the cat-and-mouse cycle that reaches a mostly stable equilibrium, and at worst we have some short-term disruption. But if I step back and look at how fast the industry has moved over the past few years - I'm not so sure. And one thing I think is for certain: it looks like we do now have the fabled superhuman ability in at least one domain. I don't think it's the last.

[1] Albeit at the cost of adding yet more pressure onto the compute crunch the AI industry is experiencing

Simon Willison 1 week ago

Meta's new model is Muse Spark, and meta.ai chat has some interesting tools

Meta announced Muse Spark today, their first model release since Llama 4 almost exactly a year ago. It's hosted, not open weights, and the API is currently "a private API preview to select users", but you can try it out today on meta.ai (Facebook or Instagram login required). Meta's self-reported benchmarks show it competitive with Opus 4.6, Gemini 3.1 Pro, and GPT-5.4 on selected benchmarks, though notably behind on Terminal-Bench 2.0. Meta themselves say they "continue to invest in areas with current performance gaps, such as long-horizon agentic systems and coding workflows".

The model is exposed as two different modes on meta.ai - "Instant" and "Thinking". Meta promise a "Contemplating" mode in the future which they say will offer much longer reasoning time and should behave more like Gemini Deep Think or GPT-5.4 Pro.

I prefer to run my pelican test via API to avoid being influenced by any invisible system prompts, but since that's not an option I ran it against the chat UI directly, once for "Instant" and once for "Thinking". Both SVGs were rendered inline by the Meta AI interface. Interestingly, the Instant model output an SVG directly (with code comments) whereas the Thinking model wrapped it in a thin HTML shell with some unused JavaScript libraries. Which got me curious...

Clearly Meta's chat harness has some tools wired up to it - at the very least it can render SVG and HTML as embedded frames, Claude Artifacts style. But what else can it do? I asked it: "what tools do you have access to? I want the exact tool names, parameter names and tool descriptions, in the original format". It spat out detailed descriptions of 16 different tools. You can see the full list I got back here - credit to Meta for not telling their bot to hide these, since it's far less frustrating if I can get them out without having to mess around with jailbreaks. Here are highlights derived from that response:

- Browse and search:
can run a web search through an undisclosed search engine, load the full page from one of those search results, and run pattern matches against the returned page content.
- Meta content search: "Semantic search across Instagram, Threads, and Facebook posts" - but only for posts the user has access to view which were created since 2025-01-01. This tool has some powerful-looking parameters.
- Catalog search: "Search for products in Meta's product catalog", presumably for the "Shopping" option in the Meta AI model selector.
- Image generation: generates images from prompts and "returns a CDN URL and saves the image to the sandbox". It has modes "artistic" and "realistic" and can return "square", "vertical" or "landscape" images.
- container.python_execution - yes! It's Code Interpreter, my favourite feature of both ChatGPT and Claude. "Execute Python code in a remote sandbox environment. Python 3.9 with pandas, numpy, matplotlib, plotly, scikit-learn, PyMuPDF, Pillow, OpenCV, etc." Files persist in the container. Python 3.9 is EOL these days but the library collection looks useful. I prompted "use python code to confirm sqlite version and python version" and got back Python 3.9.25 and SQLite 3.34.1 (from January 2021).
- container.create_web_artifact - we saw this earlier with the HTML wrapper around the pelican: Meta AI can create HTML+JavaScript files in its container which can then be served up as secure sandboxed iframe interactives, with a kind parameter selecting between websites/apps and vector graphics.
- container.download_meta_1p_media is interesting: "Download media from Meta 1P sources into the sandbox", using post_id for Instagram/Facebook/Threads posts or pulling catalog product images. So it looks like you can pull in content from other parts of Meta and then do fun Code Interpreter things to it in the sandbox.
- container.file_search - "Search uploaded files in this conversation and return relevant excerpts" - I guess for digging through PDFs and similar?
- Tools for editing files in the container. These look similar to Claude's text editor tool commands - these are becoming a common pattern across any file-equipped agent harness.
- container.visual_grounding - see below, this one is fun.
- subagents.spawn_agent - the sub-agent-as-a-tool pattern. "Spawn an independent sub-agent for research, analysis, or delegation. It returns its final text response."
- third_party.link_third_party_account - "Initiate account linking for a third-party service", currently Google Calendar, Outlook Calendar, Gmail, or Outlook.

Let's talk about that one. Here's the description in full:

Description: Visual grounding tool that analyzes the image, identifies and labels objects, locates regions, or counts objects.

So it can take an image path, optional object names, and return results in point, bbox, or count format. Sounds to me like Meta's Segment Anything. I decided to take it for a spin.

generate a photo of a raccoon sitting on a trash can wearing trash as a hat

Here you go — king of the alley, in full regalia: Coffee cup crown, banana peel brim, newspaper feather. Peak raccoon fashion. Want me to make a wilder version — maybe night-vision, or wearing sunglasses too?

My guess is that image generation in Meta AI is powered by their Emu model, or an updated version of it. This is the same pattern as ChatGPT and Gemini image generation, where images are generated by a tool call - here that's the image generation tool. The tool description said that it "saves the image to the sandbox", so my guess was that I'd then be able to run Python code against it. I tried:

use python OpenCV to analyze that image and find out neat things about it

Annoyingly it did not show me the code it ran, but it did create this image: Pretty neat! We can both generate images and then use them with Python tools. Next, the container.visual_grounding tool.
Knowing the names of the tools makes it easy to refer to them in prompts: Now try the container.visual_grounding tool. The response (full response here): Done — visual_grounding pinpointed every piece of the outfit with pixel-level precision: Want me to run it in bbox mode next to draw boxes around the fur texture, or count mode to see how many individual whiskers it can detect?

Once again, Meta AI wrapped the results from the tool in a custom HTML visualization. That's clearly point mode, but how about the others? I prompted: yes draw bounding boxes, can it do masks too? Yes — bounding boxes work great. Masks? Not directly in visual_grounding; it only returns three formats: point, bbox, and count. No pixel-level segmentation masks. Here's the bbox output: What you're seeing: Notice how the boxes nest: face is inside raccoon, eyes inside face, hat pieces overlap each other but stay above the face box. The tool is doing real object localization, not just guessing centers. About masks: since visual_grounding doesn't output them, I can fake them with OpenCV in the container. Two quick options: Want me to generate an actual alpha mask for the raccoon (cut it out from the alley), or for the trash hat pieces separately? Takes about 10 seconds and I can give you a transparent PNG.

Here's the HTML for that bbox visualization. Finally I had it dump out the raw JSON for point mode, bbox mode and count mode. You can see that all here. The count mode is particularly interesting. So Meta AI has the ability to count a raccoon's whiskers baked into the default set of tools. Which means... it can count pelicans too! Here's that overlay exported as HTML.

Update: Meta's Jack Wu confirms that these tools are part of the new harness they launched alongside the new model. On Twitter, Alexandr Wang said: this is step one. bigger models are already in development with infrastructure scaling to match.
private api preview open to select partners today, with plans to open-source future versions.

I really hope they do go back to open-sourcing their models. Llama 3.1/3.2/3.3 were excellent laptop-scale model families, and the introductory blog post for Muse Spark had this to say about efficiency:

[...] we can reach the same capabilities with over an order of magnitude less compute than our previous model, Llama 4 Maverick. This improvement also makes Muse Spark significantly more efficient than the leading base models available for comparison.

So are Meta back in the frontier model game? Artificial Analysis think so - they scored Muse Spark at 52, "behind only Gemini 3.1 Pro, GPT-5.4, and Claude Opus 4.6". Last year's Llama 4 Maverick and Scout scored 18 and 13 respectively. I'm waiting for API access - while the tool collection on meta.ai is quite strong, the real test of a model like this is still what we can build on top of it.
visual_grounding parameters:
- object_names (array, required)
- image_path (string, default "")
- format_type ("bbox"|"point"|"count"|null, default "bbox")
- title (string|null)

bbox results for the raccoon image:
- raccoon: big box from (270,240) to (892,845) — covers 62% of the width, basically the whole animal
- coffee cup: (424,115)-(610,273) — sits highest, 158px tall in the 0-1000 space
- banana peel: (472,215)-(667,291) — overlaps the cup's base, confirming it's draped over the side
- newspaper: (305,223)-(548,315) — left ear coverage
- trash can lid: (107,800)-(1000,998) — spans almost the full image width at the bottom
- face: (305,283)-(750,537) — tight crop around the mask
- eyes: both ~40×37 pixel boxes, perfectly level at y=379

Mask options (via OpenCV in the container):
- GrabCut — I give it the raccoon bbox, it iteratively segments fur from background and returns a binary mask
- SAM-style color threshold — use the bbox plus k-means to isolate the coffee cup or banana peel
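The version probe mentioned in the python_execution section above is easy to reproduce anywhere; this is a sketch of the kind of code the model likely ran, not the actual code it executed (which it didn't show):

```python
import sqlite3
import sys

# Interpreter version (Meta's sandbox reported 3.9.25)
python_version = sys.version.split()[0]

# Version of the SQLite library linked into the sqlite3 module
# (Meta's sandbox reported 3.34.1, a January 2021 release)
sqlite_version = sqlite3.sqlite_version

print(python_version, sqlite_version)
```

Run locally this prints your own interpreter and SQLite versions, which is exactly why it makes a good fingerprint for an unfamiliar sandbox.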

Simon Willison 1 week ago

Anthropic's Project Glasswing - restricting Claude Mythos to security researchers - sounds necessary to me

Anthropic didn't release their latest model, Claude Mythos (system card PDF), today. They have instead made it available to a very restricted set of preview partners under their newly announced Project Glasswing. The model is a general purpose model, similar to Claude Opus 4.6, but Anthropic claim that its cyber-security research abilities are strong enough that they need to give the software industry as a whole time to prepare. Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. Project Glasswing partners will receive access to Claude Mythos Preview to find and fix vulnerabilities or weaknesses in their foundational systems—systems that represent a very large portion of the world’s shared cyberattack surface. We anticipate this work will focus on tasks like local vulnerability detection, black box testing of binaries, securing endpoints, and penetration testing of systems. There's a great deal more technical detail in Assessing Claude Mythos Preview’s cybersecurity capabilities on the Anthropic Red Team blog: In one case, Mythos Preview wrote a web browser exploit that chained together four vulnerabilities, writing a complex JIT heap spray that escaped both renderer and OS sandboxes. It autonomously obtained local privilege escalation exploits on Linux and other operating systems by exploiting subtle race conditions and KASLR-bypasses. And it autonomously wrote a remote code execution exploit on FreeBSD's NFS server that granted full root access to unauthenticated users by splitting a 20-gadget ROP chain over multiple packets. Plus this comparison with Claude Opus 4.6: Our internal evaluations showed that Opus 4.6 generally had a near-0% success rate at autonomous exploit development.
But Mythos Preview is in a different league. For example, Opus 4.6 turned the vulnerabilities it had found in Mozilla’s Firefox 147 JavaScript engine—all patched in Firefox 148—into JavaScript shell exploits only two times out of several hundred attempts. We re-ran this experiment as a benchmark for Mythos Preview, which developed working exploits 181 times, and achieved register control on 29 more. Saying "our model is too dangerous to release" is a great way to build buzz around a new model, but in this case I expect their caution is warranted. Just a few days ago (last Friday) I started a new ai-security-research tag on this blog to acknowledge an uptick in credible security professionals sounding the alarm on how good modern LLMs have got at vulnerability research. Greg Kroah-Hartman of the Linux kernel: Months ago, we were getting what we called 'AI slop,' AI-generated security reports that were obviously wrong or low quality. It was kind of funny. It didn't really worry us. Something happened a month ago, and the world switched. Now we have real reports. All open source projects have real reports that are made with AI, but they're good, and they're real. Daniel Stenberg of : The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good. I'm spending hours per day on this now. It's intense. And Thomas Ptacek published Vulnerability Research Is Cooked, a post inspired by his podcast conversation with Anthropic's Nicholas Carlini. Anthropic have a 5 minute talking heads video describing the Glasswing project. Nicholas Carlini appears as one of those talking heads, where he said (highlights mine): It has the ability to chain together vulnerabilities. So what this means is you find two vulnerabilities, either of which doesn't really get you very much independently.
But this model is able to create exploits out of three, four, or sometimes five vulnerabilities that in sequence give you some kind of very sophisticated end outcome. [...] I've found more bugs in the last couple of weeks than I found in the rest of my life combined. We've used the model to scan a bunch of open source code, and the thing that we went for first was operating systems, because this is the code that underlies the entire internet infrastructure. For OpenBSD, we found a bug that's been present for 27 years, where I can send a couple of pieces of data to any OpenBSD server and crash it. On Linux, we found a number of vulnerabilities where, as a user with no permissions, I can elevate myself to the administrator by just running some binary on my machine. For each of these bugs, we told the maintainers who actually run the software about them, and they went and fixed them and have deployed the patches so that anyone who runs the software is no longer vulnerable to these attacks. I found this on the OpenBSD 7.8 errata page: 025: RELIABILITY FIX: March 25, 2026 All architectures TCP packets with invalid SACK options could crash the kernel. A source code patch exists which remedies this problem. I tracked that change down in the GitHub mirror of the OpenBSD CVS repo (apparently they still use CVS!) and found it using git blame: Sure enough, the surrounding code is from 27 years ago. I'm not sure which Linux vulnerability Nicholas was describing, but it may have been this NFS one recently covered by Michael Lynch. There's enough smoke here that I believe there's a fire. It's not surprising to find vulnerabilities in decades-old software, especially given that they're mostly written in C, but what's new is that coding agents run by the latest frontier LLMs are proving tirelessly capable at digging up these issues.
I actually thought to myself on Friday that this sounded like an industry-wide reckoning in the making, and that it might warrant a huge investment of time and money to get ahead of the inevitable barrage of vulnerabilities. Project Glasswing incorporates "$100M in usage credits ... as well as $4M in direct donations to open-source security organizations". Partners include AWS, Apple, Microsoft, Google, and the Linux Foundation. It would be great to see OpenAI involved as well - GPT-5.4 already has a strong reputation for finding security vulnerabilities and they have stronger models on the near horizon. The bad news for those of us who are not trusted partners is this: We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale—for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring. To do so, we need to make progress in developing cybersecurity (and other) safeguards that detect and block the model’s most dangerous outputs. We plan to launch new safeguards with an upcoming Claude Opus model, allowing us to improve and refine them with a model that does not pose the same level of risk as Mythos Preview. I can live with that. I think the security risks really are credible here, and having extra time for trusted teams to get ahead of them is a reasonable trade-off. You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options .

devansh 1 week ago

On LLMs and Vulnerability Research

I have been meaning to write this for six months. The landscape kept shifting. It has now shifted enough to say something definitive. I work at the intersection of LLMs and vulnerability triage. I see, every day, how this landscape is changing. These views are personal and do not represent my employer. Take them with appropriate salt. Two things happened in quick succession. Frontier models got dramatically better (Opus 4.6, GPT 5.4). Agentic toolkits (Claude Code, Codex, OpenCode) gave those models hands. The combination produces solid vulnerability research. "LLMs are next-token predictors." This framing was always reductive. It is now actively misleading. The gap between what these models theoretically do (predict the next word) and what they actually do (reason about concurrent thread execution in kernel code to identify use-after-free conditions) has grown too wide for the old frame to hold. Three mechanisms explain why. Implicit structural understanding. Tokenizers know nothing about code. Byte Pair Encoding treats , , and as frequent byte sequences, not syntactic constructs. But the transformer layers above tell a different story. Through training on massive code corpora, attention heads specialise: some track variable identity and provenance, others develop bias toward control flow tokens. The model converges on internal representations that capture semantic properties of code, something functionally equivalent to an abstract syntax tree, built implicitly, never formally. Neural taint analysis. The most security-relevant emergent capability. The model learns associations between sources of untrusted input (user-controlled data, network input, file reads) and dangerous sinks (system calls, SQL queries, memory operations). When it identifies a path from source to sink without adequate sanitisation, it flags a vulnerability. This is not formal taint analysis. No dataflow graph either. It is a statistical approximation.
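To make the source-to-sink framing concrete, here is a deliberately toy JavaScript sketch: a pattern-matching pass that flags a value derived from an untrusted source reaching a sink without a sanitiser in between. The function names (readRequestParam, escapeSql, runQuery) are invented for the illustration; this is a cartoon of taint analysis, not how the model works internally.

```javascript
// A toy "taint check", purely to illustrate the source/sink framing.
// Source, sink, and sanitiser names are invented for this sketch.
const SOURCES = ["readRequestParam", "readFile"];
const SINKS = ["runQuery", "execShell"];
const SANITISERS = ["escapeSql"];

// Scan straight-line pseudo-code of the form "x = f(y)" or "f(x)".
function findTaintedSinks(lines) {
  const tainted = new Set();
  const findings = [];
  lines.forEach((line, lineNo) => {
    const call = line.match(/^(?:(\w+)\s*=\s*)?(\w+)\((\w*)\)/);
    if (!call) return;
    const [, lhs, fn, arg] = call;
    if (SOURCES.includes(fn) && lhs) {
      tainted.add(lhs);                     // value comes from untrusted input
    } else if (SANITISERS.includes(fn)) {
      // the sanitiser's result is treated as clean: lhs stays untainted
    } else if (tainted.has(arg)) {
      if (SINKS.includes(fn)) {
        findings.push({ lineNo, fn, arg }); // tainted value reaches a sink
      } else if (lhs) {
        tainted.add(lhs);                   // taint propagates through calls
      }
    }
  });
  return findings;
}

const program = [
  "q = readRequestParam(name)", // source: q is attacker-controlled
  "safe = escapeSql(q)",        // sanitised copy
  "runQuery(safe)",             // fine: the sanitised value
  "runQuery(q)",                // flagged: raw source reaches a sink
];
console.log(findTaintedSinks(program)); // one finding, on the last line
```

A real analysis (formal or neural) has to handle aliasing, control flow, and inter-procedural paths; the point here is only the shape of the source-to-sink association.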
But it works well for intra-procedural bugs where the source-to-sink path is short, and degrades as distance increases across functions, files, and abstraction layers. Test-time reasoning. The most consequential advance. Standard inference is a single forward pass: reactive, fast, fundamentally limited. Reasoning models (o-series, extended thinking, DeepSeek R1) break this constraint by generating internal reasoning tokens, a scratchpad where the model works through a problem step by step before answering. The model traces execution paths, tracks variable values, evaluates branch conditions. Symbolic execution in natural language. Less precise than formal tools but capable of handling what they choke on: complex pointer arithmetic, dynamic dispatch, deeply nested callbacks. It self-verifies, generating a hypothesis ("the lock isn't held across this path"), then testing it ("wait, is there a lock acquisition I missed?"). It backtracks when reasoning hits dead ends. DeepSeek R1 showed these behaviours emerge from pure reinforcement learning with correctness-based rewards. Nobody taught the model to check its own work. It discovered that verification produces better answers. The model is not generating the most probable next token. It is spending variable compute to solve a specific problem. Three advances compound on each other. Mixture of Experts. Every frontier model now uses MoE. A model might contain 400 billion parameters but activate only 17 billion per token. Vastly more encoded knowledge about code patterns, API behaviours, and vulnerability classes without proportional inference cost. Million-token context. In 2023, analysing a codebase required chunking code into a vector database, retrieving fragments via similarity search, and feeding them to the model. RAG is inherently lossy: code split at arbitrary boundaries, cross-file relationships destroyed, critical context discarded. 
For vulnerability analysis, where understanding cross-module data flow is the entire point, this information loss is devastating. At one million tokens, you fit an entire mid-size codebase in a single prompt. The model traces user input from an HTTP handler through three middleware layers into a database query builder and spots a sanitisation gap on line 4,200 exploitable via the endpoint on line 890. No chunking. No retrieval. No information loss. Reinforcement-learned reasoning. Earlier models trained purely on next-token prediction. Modern frontier models add an RL phase: generate reasoning chains, reward correctness of the final answer rather than plausibility of text. Over millions of iterations, this shapes reasoning to produce correct analyses rather than plausible-sounding ones. The strategies transfer across domains. A model that learned to verify mathematical reasoning applies the same verification to code. A persistent belief: truly "novel" vulnerability classes exist, bugs so unprecedented that only human genius could discover them. Comforting. Also wrong. Decompose the bugs held up as examples. HTTP request smuggling: the insight that a proxy and backend might disagree about where one request ends and another begins feels like a creative leap. But the actual bug is the intersection of known primitives: ambiguous protocol specification, inconsistent parsing between components, a security-critical assumption about message boundaries. None novel individually. The "novelty" was in combining them. Prototype pollution RCEs in JavaScript frameworks. Exotic until you realise it is dynamic property assignment in a prototype-based language, unsanitised input reaching object modification, and a rendering pipeline evaluating modified objects in a privileged context. Injection, type confusion, privilege boundary crossing. Taxonomy staples for decades. The pattern holds universally. 
"Novel" vulnerabilities decompose into compositions of known primitives: spec ambiguities, type confusions, missing boundary checks, TOCTOU gaps, trust boundary violations. The novelty is in the composition, not the components. This is precisely what frontier LLMs are increasingly good at. A model that understands protocol ambiguity, inconsistent component behaviour, and security boundary assumptions has all the ingredients to hypothesise a request-smuggling-class vulnerability when pointed at a reverse proxy codebase. It does not need to have seen that exact bug class. It needs to recognise that the conditions for parser disagreement exist and that parser disagreement at a trust boundary has security implications. Compositional reasoning over known primitives. Exactly what test-time reasoning enables. LLMs will not discover the next Spectre tomorrow. Microarchitectural side channels in CPU pipelines are largely absent from code-level training data. But the space of "LLM-inaccessible" vulnerabilities is smaller than the security community assumes, and it shrinks with every model generation. Most of what we call novel vulnerability research is creative recombination within a known search space. That is what these models do best. Effective AI vulnerability research = good scaffolding + adequate tokens. Scaffolding (harness design, prompt engineering, problem framing) is wildly underestimated. Claude Code and Codex are general-purpose coding environments, not optimised for vulnerability research. A purpose-built harness provides threat models, defines trust boundaries, highlights historical vulnerability patterns in the specific technology stack, and constrains search to security-relevant code paths. The operator designing that context determines whether the model spends its reasoning budget wisely or wastes it on dead ends. Two researchers, same model, same codebase, dramatically different results. Token quality beats token quantity. 
A thousand reasoning tokens on the right code path with the right threat model outperform a million tokens sprayed across a repo with "find vulnerabilities." The search space is effectively infinite. You cannot brute-force it. You narrow it with human intelligence encoded as context, directing machine intelligence toward where bugs actually live. "LLMs are non-deterministic, so you can't trust their findings." Sounds devastating. Almost entirely irrelevant. It confuses the properties of the tool with the properties of the target. The bugs are deterministic. They are in the code. A buffer overflow on line 847 is still there whether the model notices it on attempt one or attempt five. Non-determinism in the search process does not make the search less valid. It makes it more thorough under repetition. Each run samples a different trajectory through the hypothesis space. The union of multiple runs covers more search space than any single run. Conceptually identical to fuzzing. Nobody says "fuzzers are non-deterministic so we can't trust them." You run the fuzzer longer, cover more input space, find more bugs. Same principle. Non-determinism under repetition becomes coverage. In 2023 and 2024, the state of the art was architecture. Multi-agent systems, RAG pipelines, tool integration with SMT solvers and fuzzers and static analysis engines. The best orchestration won. That era is ending. A frontier model ingests a million tokens of code in a single prompt. Your RAG pipeline is not an advantage when the model without RAG sees the whole codebase while your pipeline shows fragments selected by retrieval that does not know what is security-relevant. A reasoning model spends thousands of tokens tracing execution paths and verifying hypotheses. Your external solver integration is not a differentiator when the model approximates what the solver does with contextual understanding the solver lacks. Agentic toolkits handle orchestration better than your custom tooling. 
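The earlier fuzzing analogy is easy to make concrete. A JavaScript sketch of how non-determinism under repetition becomes coverage; the hypothesis-space size, sample counts, and PRNG seed are all arbitrary, and mulberry32 is just a small seedable generator so the run is reproducible:

```javascript
// Each run samples a random subset of a hypothesis space; the union of
// many runs covers more of it than any single run does.
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// One "run": inspect samplesPerRun random hypotheses out of spaceSize.
function runOnce(rand, spaceSize, samplesPerRun) {
  const seen = new Set();
  for (let i = 0; i < samplesPerRun; i++) {
    seen.add(Math.floor(rand() * spaceSize));
  }
  return seen;
}

const rand = mulberry32(42);
const single = runOnce(rand, 1000, 100); // coverage of one run
const union = new Set(single);
for (let run = 0; run < 9; run++) {      // nine more runs, pooled
  for (const h of runOnce(rand, 1000, 100)) union.add(h);
}
console.log(single.size, union.size);    // the union covers strictly more
```

Re-running an agent over the same codebase with the same prompt is closer to extending a fuzzing campaign than to repeating a flaky test.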
The implication the security industry has not fully processed: vulnerability research is being democratised. When finding a memory safety bug in a C library required a Project Zero-calibre researcher with years of experience, the supply was measured in hundreds worldwide. When it requires a well-prompted API call, the supply is effectively unlimited. What replaces architecture as the competitive advantage? Two things. Domain expertise encoded as context. Not "find bugs in this code" but "this is a TLS implementation; here are three classes of timing side-channel that have affected similar implementations; analyse whether the constant-time guarantees hold across these specific code paths." The human provides the insight. The model does the grunt work. Access to compute. Test-time reasoning scales with inference compute. More tokens means deeper analysis, more self-verification, more backtracking. Teams that let a model spend ten minutes on a complex code path will find bugs that teams limited to five-second responses will miss. The end state: vulnerability discovery for known bug classes becomes a commodity, available to anyone with API access and a credit card. The researchers who thrive will focus where the model cannot: novel vulnerability classes, application-level logic flaws, architectural security review, adversarial creativity. This is not a prediction. It is already happening. The pace is set by model capability, which doubles on a timeline measured in months.

Susam Pal 1 week ago

Wander Console 0.4.0

Wander Console 0.4.0 is the fourth release of Wander, a small, decentralised, self-hosted web console that lets visitors to your website explore interesting websites and pages recommended by a community of independent website owners. To try it, go to susam.net/wander/. This release brings a few small additions as well as a few minor fixes. You can find the previous release pages here: /code/news/wander/. The sections below discuss the current release. Wander Console now supports wildcard patterns in ignore lists. An asterisk ( * ) anywhere in an ignore pattern matches zero or more characters in URLs. For example, an ignore pattern like can be used to ignore URLs such as this: These ignore patterns are specified in a console's wander.js file. These are very important for providing a good wandering experience to visitors. The owner of a console decides what links they want to ignore in their ignore patterns. The ignore list typically contains commercial websites that do not fit the spirit of the small web, as well as defunct or incompatible websites that do not load in the console. A console with a well-maintained ignore list ensures that a visitor to that console has a lower likelihood of encountering commercial or broken websites. For a complete description of the ignore patterns, see Customise Ignore List. By popular demand, Wander now adds a query parameter while loading a recommended web page in the console. The value of this parameter is the console that loaded the recommended page. For example, if you encounter midnight.pub/ while using the console at susam.net/wander/, the console loads the page using the following URL: This allows the owner of the recommended website to see, via their access logs, that the visit originated from a Wander Console. While this is the default behaviour now, it can be customised in two ways.
The value can be changed from the full URL of the Wander Console to a small identifier that identifies the version of Wander Console used (e.g. ). The query parameter can be disabled as well. For more details, see Customise 'via' Parameter. In earlier versions of the console, when a visitor came to your console to explore the Wander network, it picked the first recommendation from the list of recommended pages in it (i.e. your file). But subsequent recommendations came from your neighbours' consoles and then their neighbours' consoles and so on recursively. Your console (the starting console) was not considered again unless some other console in the network linked back to your console. A common way to ensure that your console was also considered in subsequent recommendations was to add a link to your console in your own console (i.e. in your ). Yes, this created self-loops in the network but this wasn't considered a problem. In fact, this was considered desirable, so that when the console picked a console from the pool of discovered consoles to find the next recommendation, it considered itself to be part of the pool. This workaround is no longer necessary. Since version 0.4.0 of Wander, each console will always consider itself to be part of the pool from which it picks consoles. This means that the web pages recommended by the starting console have a fair chance of being picked for the next web page recommendation. The Wander Console loads the recommended web pages in an element that has sandbox restrictions enabled. The sandbox properties restrict the side effects the loaded web page can have on the parent Wander Console window. For example, with the sandbox restrictions enabled, a loaded web page cannot redirect the parent window to another website. In fact, these days most modern browsers block this and show a warning anyway, but we block this at the sandbox level in the console implementation as well.
It turned out that our aggressive sandbox restrictions also blocked legitimate websites from opening a link in a new tab. We decided that opening a link in a new tab is harmless behaviour and we have relaxed the sandbox restrictions a little bit to allow it. Of course, when you click such a link within Wander Console, the link will open in a new tab of your web browser (not within Wander Console, as the console does not have any notion of tabs). Although I developed this project on a whim, one early morning while taking a short break from my ongoing studies of algebraic graph theory, the subsequent warm reception on Hacker News and Lobsters has led to a growing community of Wander Console owners. There are two places where the community hangs out at the moment: New consoles are announced in this thread on Codeberg: Share Your Wander Console. We also have an Internet Relay Chat (IRC) channel named #wander on the Libera IRC network. This is a channel for people who enjoy building personal websites and want to talk to each other. You are welcome to join this channel, share your console URL, link to your website or recent articles as well as share links to other non-commercial personal websites. If you own a personal website but you have not set up a Wander Console yet, I suggest that you consider setting one up for yourself. You can see what it looks like by visiting mine at /wander/. To set up your own, follow these instructions: Install . It just involves copying two files to your web server. It is about as simple as it gets.
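As an aside, the wildcard matching described in the release notes above (an asterisk matches zero or more characters in a URL) is straightforward to sketch in JavaScript. This is only an illustration of the behaviour, not Wander's actual implementation, and the pattern and URLs are hypothetical:

```javascript
// Hedged sketch: treat an ignore pattern as a regex where "*" means
// "zero or more characters" and everything else matches literally.
function matchesIgnorePattern(url, pattern) {
  // Escape regex metacharacters first, then turn each escaped * into .*
  const escaped = pattern.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const regex = new RegExp("^" + escaped.replace(/\\\*/g, ".*") + "$");
  return regex.test(url);
}

// Hypothetical pattern and URLs:
matchesIgnorePattern("https://example.com/shop/item?id=1",
                     "https://example.com/shop/*"); // → true
matchesIgnorePattern("https://example.com/blog/post",
                     "https://example.com/shop/*"); // → false
```

Escaping before substitution matters: a dot in the pattern should match only a literal dot, not any character.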

The Jolly Teapot 1 week ago

Browsing the web with JavaScript turned off

Some time ago, I tried to use my web browser with JavaScript turned off by default. The experiment didn’t last long, and my attempt at a privacy-protecting, pain-free web experience failed. Too many websites rely on JavaScript, which made this type of web browsing rather uncomfortable. I’ve kept a Safari extension like StopTheScript around, on top of a content blocker like Wipr, just in case I needed to really “trim the fat” of the occasional problematic webpage. * 1 Recently, I’ve given this setup a new chance to shine, and even described it in a post. The results are in: the experiment failed yet again. But I’m not done. Even if this exact setup isn’t the one I currently rely on, JavaScript-blocking is nevertheless still at the heart of my web browsing hygiene on the Mac today. For context, this need for fine-tuning comes from the fact that my dear old MacBook Air from early 2020, rocking an Intel chip, starts to show its age. Sure, it already felt like a 10-year-old computer the moment the M1 MacBook Air was released, merely six months after I bought it, but let’s just say that a lot of webpages make this laptop choke. My goal of making this computer last one more year can only be reached if I manage not to throw the laptop through the window every time I want to open more than three tabs. On my Mac, JavaScript is now blocked by default on all pages via StopTheScript. Leaving JavaScript on, meaning giving websites a chance, sort of defeated the purpose of my setup (performance and privacy). Having JS turned off effectively blocks 99% of ads and trackers (I think, don’t quote me on that) and makes browsing the web a very enjoyable experience. The fan barely activates, and everything is as snappy and junk-free as expected. For websites that require JavaScript — meaning frequently visited sites like YouTube or where I need to be logged in like LanguageTool — I turn off StopTheScript permanently via the Websites > Extensions menu in the Safari Settings.
I try to keep this list to a bare minimum, even if this means I have to accept a few annoyances like not having access to embedded video players or comments on some websites. For instance, I visit the Guardian multiple times daily, yet I won’t add it to the exception list, even if I’m a subscriber and therefore not exposed to the numerous “please subscribe” modals. I can no longer hide some categories on the home page, nor watch embedded videos: a small price to pay for a quick and responsive experience, and a minimal list of exceptions. For the few times when I actually need to watch a video on the Guardian, comment on a blog post, or for the occasional site that needs JavaScript simply to appear on my screen (more on that later), what I do is quickly open the URL in a new private window. There, StopTheScript is disabled by default (so that JavaScript is enabled: sorry, I know this is confusing). Having to reopen a page in a different browser window is an annoying process, yes. Even after a few weeks it still feels like a chore, but it seems to be the quickest way on the Mac to get a site to work without having to mess around with permissions and exceptions, which can be even more annoying on Safari. Again, a small price to pay to make this setup work. * 2 Another perk of that private browsing method is that the ephemeral session doesn’t save cookies and the main tracking IDs disappear when I close the window. I think. The problem I had at first was that these sessions tended to display the webpages as intended by the website owners: loaded with JavaScript, ads, modals, banners, trackers, &c. Most of the time, it is a terrible mess. Really, no one should ever experience the general web without any sort of blocker. To solve this weakness of my setup, I switched from Quad9 to Mullvad DNS to block a good chunk of ads and trackers (using the “All” profile ). 
Now, the private window only allows the functionality part of the JavaScript, a few cookie banners and Google login prompt annoyances, but at least I am not welcomed by privacy-invading and CPU-consuming ads and trackers every time my JS-free attempt fails. I know I could use a regular content blocker instead of a DNS resolver, but keeping it active all the time when JS is turned off feels a bit redundant and too much of an extension overlap. More importantly, I don’t want to be tempted to manage yet another exception list on top of the StopTheScript one (been there, done that, didn’t work). Also, with Safari I don’t think it’s possible to activate an extension in Private Mode only. John Gruber , in a follow-up reaction to The 49MB Web Page article from Shubham Bose, which highlights the disproportionate weight of webpages related to their content, wrote: One of the most controversial opinions I’ve long espoused, and believe today more than ever, is that it was a terrible mistake for web browsers to support JavaScript. Not that they should have picked a different language, but that they supported scripting at all. That decision turned web pages — which were originally intended as documents — into embedded computer programs. There would be no 49 MB web pages without scripting. There would be no surveillance tracking industrial complex. The text on a page is visible. The images and video embedded on a page are visible. You see them. JavaScript is invisible. That makes it seem OK to do things that are not OK at all. Amen to that. But if JavaScript is indeed mostly used for this “invisible” stuff, why are some websites built to use it for the most basic stuff? Video streaming services, online stores, social media platforms, I get it: JavaScript makes sense. But text-based sites? Blogs? Why? The other day I wanted to read this article , and only the website header showed up in my browser. Even Reader Mode didn’t make the article appear. 
When I opened the link in a private window, where StopTheScript is disabled, lo and behold, the article finally appeared. For some obscure reason, on that website (and others) JavaScript is needed to load text on a freaking web page. Even if you want your website to have a special behaviour regarding loading speeds, design subtleties, or whatever you use JavaScript for, please, use a tag, either to display the article in its most basic form, or at least to show a message saying “JavaScript needed for no apparent reason at all. Sorry.” * 3 This is what I do on my phone, as managing Safari extensions on iOS is a painful process. Quiche Browser is a neat solution and great way for me to have the “turn off JavaScript” menu handy, but without a way to sync bookmarks, history or open tabs with the Mac, I still prefer to stick to Safari, at least for now. ^ I still wish StopTheScript had a one-touch feature to quickly reload a page with JavaScript turned on until the next refresh or for an hour or so, but it doesn’t. ^ This is what I do for this site’s search engine, where PageFind requires JavaScript to operate. Speaking of search engine, DuckDuckGo works fine in HTML-only mode (the only main search engine to offer this I believe). ^

Den Odell 2 weeks ago

You're Looking at the Wrong Pretext Demo

Pretext, a new JavaScript library from Cheng Lou, crossed 7,000 GitHub stars in its first three days. If you've been anywhere near frontend engineering circles in that time, you've seen the demos: a dragon that parts text like water, fluid smoke rendered as typographic ASCII, a wireframe torus drawn through a character grid, multi-column editorial layouts with animated orbs displacing text at 60fps. These are visually stunning and they're why the library went viral. But they aren't the reason this library matters.

The important thing Pretext does is predict the height of a block of text without ever reading from the DOM. This means you can position text nodes without triggering a single layout recalculation. The text stays in the DOM, so screen readers can read it and users can select it, copy it, and translate it. The accessibility tree remains intact, the performance gain is real, and the user experience is preserved for everyone. This is the feature that will change how production web applications handle text, and it's the feature almost nobody is demonstrating. The community has spent three days building dragons. It should be building chat interfaces. And the fact that the dragons went viral while the measurement engine went unnoticed tells us something important about how the frontend community evaluates tools: we optimize for what we can see, not for what matters most to the people using what we build.

The problem is forced layout recalculation, where the browser has to pause and re-measure the page layout before it can continue. When a UI component needs to know the height of a block of text, the standard approach is to measure it from the DOM. You call a method like getBoundingClientRect() or read a property like offsetHeight, and the browser synchronously calculates layout to give you an answer. Do this for 500 text blocks in a virtual list and you've forced 500 of these pauses. This pattern, called layout thrashing, remains a leading cause of visual stuttering in complex web applications.
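The measure-once idea behind that prediction can be sketched in plain JavaScript. The names below (widthOf, layoutLines) are invented for illustration, and a fake per-character width stands in for canvas-based font metrics; the point is that the expensive measurement runs once per unique word, after which line counting is pure arithmetic:

```javascript
// Fake measurement standing in for canvas font metrics; the counter shows
// how often the "expensive" call actually runs.
let measureCalls = 0;
const measure = (word) => { measureCalls++; return word.length * 10; };

// Measure each unique word once and cache the result.
const cache = new Map();
function widthOf(word) {
  if (!cache.has(word)) cache.set(word, measure(word));
  return cache.get(word);
}

// Greedy line breaking over cached widths: walk the words, track the
// running line width, and break when the container's max width is exceeded.
function layoutLines(words, spaceWidth, maxWidth) {
  let lines = 1, lineWidth = 0;
  for (const word of words) {
    const w = widthOf(word);
    const needed = lineWidth === 0 ? w : lineWidth + spaceWidth + w;
    if (needed > maxWidth && lineWidth > 0) {
      lines += 1;       // word wraps to a fresh line
      lineWidth = w;
    } else {
      lineWidth = needed;
    }
  }
  return lines;
}

const words = "the quick brown fox jumps over the lazy dog".split(" ");
console.log(layoutLines(words, 5, 120)); // 4 lines at a 120px max width
console.log(measureCalls);               // 8 unique words measured once
```

Re-running layoutLines with a different maxWidth reuses the cache, which is why resizes take the fast path the article describes.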
Pretext's insight is that canvas text measurement uses the same font engine as DOM rendering but operates outside the browser's layout process entirely. Measure a word via canvas, cache the width, and from that point forward layout becomes pure arithmetic: walk cached widths, track running line width, and insert breaks when you exceed the container's maximum. No slow measurement reads, and no synchronous pauses.

The architecture separates this into two phases. A preparation phase does the expensive work once: normalize whitespace, segment the text at locale-aware word boundaries, handle bidirectional text (such as mixing English and Arabic), measure segments with canvas, and return a reusable reference. Layout is then pure calculation over cached widths, taking about 0.09ms for a 500-text batch against roughly 19ms for the equivalent DOM measurement. Cheng Lou himself calls the 500x comparison "unfair" since it excludes the one-time cost, but that cost is only paid once and spread across every subsequent call. It runs once when the text appears, and every subsequent resize takes the fast path, where the performance boost is real and substantial.

The core idea traces back to Sebastian Markbage's research at Meta, where Cheng Lou implemented the earlier prototype that proved canvas font metrics could substitute for DOM measurement. Pretext builds on that foundation with production-grade internationalization, bidirectional text support, and the two-phase architecture that makes the fast path so fast. Lou has a track record here: react-motion and ReasonML both followed the same pattern of identifying a constraint everyone accepted as given and removing it with a better abstraction.

The first use case Pretext serves, and the one I want to make the case for, is measuring text height so you can render DOM text nodes in exactly the right position without ever asking the browser how tall they are. This isn't a compromise path, it's the most capable thing the library does. Consider a virtual scrolling list of 500 chat messages.
To render only the visible ones, you need to know each message's height before it enters the viewport. The traditional approach is to insert the text into the DOM, measure it, and then position it, paying the layout cost for every message. Pretext lets you predict the height mathematically and then render the text node at the right position. The text itself still lives in the DOM, so the accessibility model, selection behavior, and find-in-page all work exactly as they would with any other text node. In practice it takes two function calls: the first measures and caches, the second predicts height through calculation. No layout cost, yet the text you render afterward is a standard DOM node with full accessibility.

The shrinkwrap demo is the clearest example of why this path matters. CSS sizes a container to the widest wrapped line, which wastes space when the last line is short. There's no CSS property that says "find the narrowest width that still wraps to exactly N lines." Pretext calculates that optimal width mathematically, and the result is a tighter chat bubble rendered as a standard DOM text node. The performance gain comes from smarter measurement, not from abandoning the DOM. Nothing about the text changes for the end user. Accordion sections whose heights are calculated by Pretext, and masonry layouts with height prediction instead of DOM reads: these both follow the same model of fast measurement feeding into standard DOM rendering.

There are edge cases worth knowing about, starting with the fact that the prediction is only as accurate as the font metrics available at measurement time, so fonts need to be loaded before measurement runs or results will drift. Ligatures (where two characters merge into one glyph, like "fi"), advanced font features, and certain CJK composition rules can introduce tiny differences between canvas measurement and DOM rendering.
These are solvable problems and the library handles many of them already, but acknowledging them is part of taking the approach seriously rather than treating it as magic.

Pretext also supports manual line layout for rendering to Canvas, SVG, or WebGL. These APIs give you exact line coordinates so you can paint text yourself rather than letting the DOM handle it. This is the path that went viral, and the one that dominates every community showcase. The canvas demos are impressive and they're doing things the DOM genuinely can't do at 60fps. But they're also painting pixels, and when you paint text as canvas pixels, the browser has no idea those pixels represent language. Screen readers like VoiceOver, NVDA, and JAWS derive their understanding of a page from the accessibility tree, which is itself built from the DOM, so canvas content is invisible to them. Browser find-in-page and translation tools both skip canvas pixels entirely. Native text selection is tied to DOM text nodes and canvas has no equivalent, so users can't select, copy, or navigate the content by keyboard. A canvas element is also a single tab stop, meaning keyboard users can't move between individual words or paragraphs within it, even if it contains thousands of words. In short, everything that makes text behave as text rather than as an image of text disappears.

None of this means the canvas path is automatically wrong. There are legitimate contexts where canvas text rendering is the right choice: games, data visualizations, creative installations, and design tools that have invested years in building their own accessibility layer on top of canvas. For SVG rendering, the trade-offs are different again, since SVG text elements do participate in the accessibility tree, making it a middle ground between DOM and canvas. But the canvas path is not the breakthrough, because canvas text rendering has existed for fifteen or more years across dozens of libraries.
What none of them offered was a way to predict DOM text layout without paying the layout cost. Pretext's measurement and prediction APIs do exactly that, and it's genuinely new.

This pattern repeats across the frontend ecosystem, and I understand why. A dragon parting text like water is something you can record as a GIF, post to your socials, and collect thousands of impressions. A virtual scrolling list that pre-calculates text heights looks identical to one that doesn't. The performance difference is substantial but invisible to the eye. Nobody makes a showcase called "works flawlessly with VoiceOver" or "scrolls 10,000 messages without a single forced layout" because these things look like nothing. They look like a web page working the way web pages are supposed to work.

This is Goodhart's Law applied to web performance: once a metric becomes a target, it ceases to be a good measure. Frame rate and layout cost are proxies for "does this work well for users." GitHub stars are a proxy for "is this useful." When the proxy gets optimized instead, in this case by visually impressive demos that happen to use the path with the steepest accessibility trade-offs, the actual signal about what makes the library important gets lost. The library's identity gets set by its most visually impressive feature in the first 72 hours, and the framing becomes "I am drawing things" rather than "I am measuring things faster than anyone has before." Once that framing is set, it's hard to shift.

The best text-editing libraries on the web, CodeMirror, Monaco, and ProseMirror, all made the deliberate choice to stay in the DOM even when leaving it would have been faster, because the accessibility model isn't optional. Pretext's DOM measurement path belongs in that tradition but goes further: those editors still read from the DOM when they need to know how tall something is. Pretext eliminates that step entirely, predicting height through arithmetic before the node is ever rendered.
It's the next logical step in the same philosophy: keep text where it belongs, but stop paying the measurement cost to do so. I've been thinking about performance engineering as a discipline for most of my career, and what strikes me about Pretext is that the real innovation is the one that is hardest to see. Predicting how text will lay out before it reaches the page, while keeping the text in the DOM and preserving everything that makes it accessible, is a genuinely new capability on the web platform. It's the kind of foundational improvement that every complex text-heavy application can adopt immediately.

If you're reaching for Pretext this week, reach for the DOM measurement path first. Build something that keeps text in the DOM and predicts its height without asking the browser. Ship an interface that every user can read, select, search, and navigate. Nobody else has done this yet, and it deserves building. Performance engineering is at its best when it serves everyone without asking anyone to give something up. Faster frame rates that don't make someone nauseous. Fewer layout pauses that mean a page responds when someone with motor difficulties needs it to. Text that is fast and readable and selectable and translatable and navigable by keyboard and comprehensible to a screen reader. The dragons are fun. The measurement engine is important. Let's try not to confuse the two.
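The virtual-list use case the article argues for reduces to simple arithmetic once heights can be predicted. The sketch below is illustrative, not Pretext's API: assume you already have predicted line counts per message (however obtained), then derive pixel offsets and the visible window without touching the DOM. The lineHeight and padding values are invented for the example:

```javascript
// Given predicted line counts per message, derive each message's height and
// its scroll offset purely arithmetically (no DOM reads).
function buildOffsets(lineCounts, lineHeight = 20, padding = 8) {
  const offsets = [0];
  for (const lines of lineCounts) {
    const height = lines * lineHeight + 2 * padding;
    offsets.push(offsets[offsets.length - 1] + height);
  }
  return offsets; // offsets[i] = top position of message i
}

// Which messages intersect the window [scrollTop, scrollTop + viewportHeight)?
function visibleRange(offsets, scrollTop, viewportHeight) {
  let first = 0;
  while (first + 1 < offsets.length - 1 && offsets[first + 1] <= scrollTop) first++;
  let last = first;
  while (last + 1 < offsets.length - 1 && offsets[last + 1] < scrollTop + viewportHeight) last++;
  return [first, last];
}

const offsets = buildOffsets([1, 3, 2, 1]); // heights: 36, 76, 56, 36
console.log(offsets);                        // [0, 36, 112, 168, 204]
console.log(visibleRange(offsets, 40, 100)); // [1, 2]
```

Only the messages in the returned range need DOM nodes; everything else is positioned space, which is the whole point of predicting heights up front.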

blog.philz.dev 2 weeks ago

computing 2+2: so many sandboxes

Sandboxes are so in right now. If you're doing agentic stuff, you've no doubt thought about what Simon Willison calls the lethal trifecta: private data, untrusted content, and external communication. If you work in a VM, for example, you can avoid putting a secret on that VM, and then that secret--that's not there!--can't be exfiltrated. If you want to deal with untrusted data, you can also cut off external communication. You can still use an agent, but you need to either limit its network access or limit its tools. So, today's task is to compute 2+2 five ways.

Cloud Hypervisor is a Virtual Machine Monitor which runs on top of the Linux kernel's KVM (Kernel-based Virtual Machine), which runs on top of CPUs that support virtualization. A cloud-hypervisor VM sorta looks like a process on the host (and can be managed with cgroups, for example), but it's running a full Linux kernel. With the appropriate kernel options, you can run Docker containers, do tricky networking things, nested virtualization, and so on. Lineage-wise, it's in the same family as Firecracker and crosvm. It avoids implementing floppy devices and tries to be pretty small. Traditionally, people tell you to unpack a file system and maybe build a disk image out of it, an iso image or some such. A trick is to instead start with a container image for your userspace, and then you get all the niceties (and all the warts) of Docker. Takes about 2 seconds.

gVisor implements a large chunk of the Linux syscall interface in a Go process. Think of it as a userland kernel. It came out of Google's AppEngine work. It can use systrap/seccomp, ptrace, and KVM tricks to do the interception. The downside of gVisor is that you can't do some things inside of it. For example, you can't run vanilla Docker inside of gVisor because it doesn't support Docker's networking tricks. Again, let's use Docker to get ourselves a userland. No need for a kernel image. gVisor's runtime, runsc, stands for "run secure container."
Monty is a Python interpreter written in Rust. It doesn't expose the host, but can call functions that are explicitly exposed. This one's super fast. Pyodide is CPython compiled to WebAssembly. Deno is a JS runtime with permission-based security. Deno happens to run wasm code fine, so we're using it as a wasm runtime. There are other choices. Chromium is probably the world's most popular sandbox. This is pretty much the same as Deno: it's the V8 interpreter under the hood. Lots of ways to drive Chromium: Puppeteer, headless mode, etc.

Let's try rodney: run Pyodide inside Deno inside gVisor inside cloud-hypervisor. Setting up the networking and the file system/disk sharing for these things is usually not trivial, especially if you don't want to accidentally expose the VMs to each other, and so forth.

I want to compare two possible agents: a coding agent and a logs agent. A coding agent needs a full Linux, because, at the end of the day, it needs to edit files, run tests, and operate git. Your sandboxing options are going to end up being a VM or a container of some sort. A logs agent needs access to your logs (say, the ability to run read-only queries on ClickHouse) and it needs to be able to send you its output. In the minimal case, it doesn't need any sandboxing at all, since it doesn't have access to anything. If you want it to be able to produce a graph, however, it will need to write out a file. At the minimum, it will need to take the results of its queries and pair them with an HTML file that has some JS that renders them with Vega-Lite. You might also want to mix and match the results of multiple queries, and do some data munging outside of SQL. This is where a setup like Monty or Pyodide comes in handy. Giving the agent access to some Python expands considerably how much the agent can do, and you can do it cheaply and safely with these sandboxes.
In this vein, if you use DSPy for RLMs, its implementation gives the LLM the Deno/Pyodide solution to let it have "infinite" context. Browser-based agents are a thing too. Itsy-Bitsy is a bookmarklet-based agent. It runs in the context of the web page it's operating on. Let me know what other systems I missed!
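The "pair query results with an HTML file" step for the logs agent can be sketched like this. Everything here is illustrative (the function name, the fields, the Vega-Lite spec shape); the idea is just that the sandboxed agent only needs to emit one self-contained file:

```javascript
// Build a self-contained HTML report embedding query rows as JSON, in the
// shape of a Vega-Lite spec. A real page would also load vega-embed to
// render the spec; here we only assemble the document.
function buildReport(rows) {
  const spec = {
    $schema: "https://vega.github.io/schema/vega-lite/v5.json",
    data: { values: rows },
    mark: "line",
    encoding: {
      x: { field: "t", type: "temporal" },
      y: { field: "errors", type: "quantitative" },
    },
  };
  return [
    "<!doctype html><html><body>",
    '<div id="chart"></div>',
    `<script type="application/json" id="spec">${JSON.stringify(spec)}</script>`,
    "<!-- a script tag loading vega-embed would render #spec into #chart -->",
    "</body></html>",
  ].join("\n");
}

const html = buildReport([{ t: "2024-01-01", errors: 3 }]);
console.log(html.includes('"errors":3')); // true
```

The agent never needs network access to produce this: the data is inlined, so the file can be written out from inside the sandbox and opened anywhere.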

Susam Pal 3 weeks ago

Wander 0.2.0

Wander 0.2.0 is the second release of Wander, a small, decentralised, self-hosted web console that lets visitors to your website explore interesting websites and pages recommended by a community of independent personal website owners. To try it, go to susam.net/wander. This release brings a number of improvements.

When I released version 0.1.0, it was the initial version of the software I was using for my own website. Naturally, I was the only user initially and I only added trusted web pages to the recommendation list of my console. But ever since I announced this project on Hacker News, it has received a good amount of attention. It has been less than a week since I announced it there, but over 30 people have set up a Wander console on their personal websites. There are now over a hundred web pages being recommended by this network of consoles. With the growth in the number of people who have set up a Wander console came several feature requests, most of which have been implemented already. This release makes these new features available.

Since Wander 0.2.0, the file fetched from remote consoles is executed in a sandbox to ensure that it has no side effects on the parent Wander console page. Similarly, the pages recommended by the network are also loaded into a sandbox.

This release also brings several customisation features. Console owners can customise their Wander console by adding custom CSS or JavaScript. Console owners can also block certain URLs from ever being recommended on their console. This is especially important in providing a good wandering experience to visitors. Since this network is completely decentralised, console owners can add any web page they like to their console. Sometimes they inadvertently add pages that do not load successfully in the console due to frame embedding restrictions. This leads to an uneven wandering experience because these page recommendations occasionally make it to other consoles where they fail to load.
Console owners can now block such URLs in their console to decrease the likelihood of these failed page loads. This helps make the wandering experience smoother. Another significant feature in this release is the expanded Console dialog box. This dialog box now shows various details about the console and the current wandering session. For example, it shows the console's configuration: recommended pages, ignored URLs and linked consoles. It also shows a wandering history screen where you can see each link that was recommended to you along with the console that recommendation came from. There is another screen that shows all the consoles discovered during the discovery process. Those who care about how Wander works would find this dialog box quite useful. To check it out, go to my Wander console and explore. To learn more about Wander, how it works and how to set it up, please read the project README at codeberg.org/susam/wander . Read on website | #web | #technology

Loren Stewart 3 weeks ago

ChatGPT, Claude, and Gemini Render Markdown in the Browser. I Do the Opposite

The big AI chat apps ship heavy rendering libraries to every device. Cheddy Chat renders markdown server-side and streams finished HTML, eliminating 160-440KB of client JavaScript while keeping the main thread free.
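The server-side idea in that teaser can be illustrated with a toy converter: turn a markdown fragment into finished HTML on the server so the client ships no markdown parser at all. A real implementation would use a proper markdown library; this hypothetical sketch handles only headings and bold:

```javascript
// Toy server-side markdown rendering: the server streams finished HTML
// chunks like this instead of shipping a client-side markdown library.
function renderChunk(md) {
  return md
    .replace(/^### (.+)$/gm, "<h3>$1</h3>")       // ### heading -> <h3>
    .replace(/\*\*(.+?)\*\*/g, "<strong>$1</strong>"); // **bold** -> <strong>
}

console.log(renderChunk("### Hi\n**bold** text"));
// <h3>Hi</h3>
// <strong>bold</strong> text
```

The client then only appends HTML strings as they stream in, which is where the claimed main-thread savings come from.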

Simon Willison 3 weeks ago

Profiling Hacker News users based on their comments

Here's a mildly dystopian prompt I've been experimenting with recently: "Profile this user", accompanied by a copy of their last 1,000 comments on Hacker News. Obtaining those comments is easy. The Algolia Hacker News API supports listing comments sorted by date that have a specific tag, and the author of a comment is tagged there as author_<username>. Here's a JSON feed of my (simonw) most recent comments, for example: https://hn.algolia.com/api/v1/search_by_date?tags=comment,author_simonw&hitsPerPage=1000

The Algolia API is served with open CORS headers, which means you can access the API from JavaScript running on any web page. Last August I had ChatGPT build me a simple tool for hitting that API for any user, which fetches their comments and gives me a mobile-friendly "copy to clipboard" button. I've since tweaked it a few times with Claude. I can then paste the whole lot into any LLM (these days I mostly use Claude Opus 4.6 for this) and prompt "profile this user" to see what it can figure out. It's startlingly effective!

It feels invasive to quote the profile of another user here, so I'll show you what it produces for me:

This is Simon Willison — a prolific, independent software developer, blogger, and one of the most visible voices in the AI-assisted coding space. Here's a profile drawn from his comments:

Professional identity: Independent developer and writer. Co-creator of Django, creator of Datasette and many other open source tools. On the board of the Python Software Foundation. Runs a heavily-trafficked personal blog at simonwillison.net with multiple content types (entries, link blog, notes, quotations). Monetizes through GitHub sponsors, ethical ads, and occasional consulting — not through any AI company payroll, a point he defends frequently.

Core thesis on AI coding: He's the leading evangelist for what he calls "agentic engineering" — using coding agents (Claude Code, Codex CLI) as productivity multipliers for skilled developers.
His central argument: LLMs don't replace programmers, they amplify existing expertise. He's writing a guide/book on this at simonwillison.net/guides. He insists these tools are deceptively difficult to use well, and that most people who report bad results simply haven't learned the craft yet. Working style: He does most of his programming from his iPhone via Claude Code for web, often while riding BART, walking the dog, or sitting in the garden. He embraces "YOLO mode" (auto-approving agent actions) and frequently runs 2-3 agent sessions in parallel. He starts every session with "run uv run pytest" to anchor the agent in TDD. He's on the $200/month Claude Max plan. Key technical interests: Sandboxing and security (WebAssembly, Pyodide, sandbox-exec, Firecracker), SQLite (deeply), Python packaging and tooling (uv, PyPI distribution tricks), browser-in-a-browser experiments (v86, WASM Linux), and local LLM inference. He's been learning Go "by osmosis" through coding agents. Security consciousness: He coined the term "prompt injection" and the "lethal trifecta" (access to private data + exposure to untrusted input + ability to take actions). He's genuinely alarmed about OpenClaw/Clawdbot security risks and has been predicting a headline-grabbing prompt injection attack "in the next six months" for over two years — acknowledging it keeps not happening while insisting the risk is real. He references the "normalization of deviance" pattern frequently. Personality and debate style: Energetic, combative when challenged, but generally good-natured. He engages heavily in HN comment threads — sometimes dozens of replies in a single discussion. He's transparent about his biases and maintains a public disclosures page. He pushes back hard against what he sees as motivated reasoning from AI skeptics, but also against AGI hype ("I refer to that as the science fiction version of AI"). He gets visibly frustrated by sock-puppet accounts and bad-faith criticism. 
He has a self-deprecating streak ("my stupid pelican benchmark"). Recurring themes across hundreds of comments. Personal interests mentioned: Niche museums (runs niche-museums.com), New Zealand kākāpō parrots, cooking food from countries he's never visited, chickens in his garden. Lives in Half Moon Bay, CA near a major pelican roost. The pelican-on-a-bicycle SVG test is his signature LLM benchmark.

In a sentence: A deeply experienced, independently-minded developer who's genuinely excited about AI coding tools, fights hard against both uncritical hype and reflexive dismissal, and is trying to professionalize how the industry uses these tools — while also worrying loudly about the security implications almost nobody else takes seriously enough.

This all checks out! I ran this in Claude incognito mode to hopefully prevent Claude from guessing that I was evaluating myself and sycophantically glazing me - the tone of the response it gave here is similar to the tone I've seen for other accounts. I expect it guessed my real name due to my habit of linking to my own writing from some of my comments, which provides plenty of simonwillison.net URLs for it to associate with my public persona. I haven't seen it take a guess at a real name for any of the other profiles I've generated.

It's a little creepy to be able to derive this much information about someone so easily, even when they've shared it freely in a public (and API-available) place. I mainly use this to check that I'm not getting embroiled in an extensive argument with someone who has a history of arguing in bad faith. Thankfully that's rarely the case - Hacker News continues to be a responsibly moderated online space.
- "Two things can be true at the same time" — he holds nuanced positions
- Tests are for productivity, not just quality
- The November 2025 model releases (Opus 4.5, GPT-5.2) were a genuine inflection point
- Code review is the biggest bottleneck in agent-assisted workflows
- "Cognitive debt" is a real and unsolved problem
- The best engineering practices (tests, docs, CI/CD, clean code) also make agents work better
- He's deliberately trying to "teach people good software engineering while tricking them into thinking the book is about AI"
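The Algolia query the article describes can be sketched in a few lines. This is just the URL construction (the function name is invented for illustration); fetching it would need network access, and the response is ordinary JSON:

```javascript
// Build the Algolia Hacker News search_by_date URL for a user's comments,
// matching the query shape the article shows.
function commentsUrl(username, hitsPerPage = 1000) {
  return (
    "https://hn.algolia.com/api/v1/search_by_date" +
    `?tags=comment,author_${encodeURIComponent(username)}` +
    `&hitsPerPage=${hitsPerPage}`
  );
}

console.log(commentsUrl("simonw"));
// https://hn.algolia.com/api/v1/search_by_date?tags=comment,author_simonw&hitsPerPage=1000
```

Because the API sends open CORS headers, `fetch(commentsUrl(user))` works from any page, which is what makes the one-file clipboard tool possible.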

alikhil 3 weeks ago

What is a CDN and Why It Matters?

With the rapid growth of GenAI solutions and the continuous launch of new applications, understanding the fundamental challenges and solutions of the web is becoming increasingly important. One of the core challenges is delivering content quickly to the end user. This is where a CDN comes into play. A CDN stands for Content Delivery Network. Let’s break it down. (Note: Modern CDN providers often bundle additional services such as WAF, DDoS protection, and bot management. Here, we focus on static content delivery.)

Content refers to any asset that needs to be loaded on the user’s device: images, audio/video files, JavaScript, CSS, and more. Delivery means that this content is not only available but also delivered efficiently and quickly. A CDN is a network of distributed nodes that cache content. Instead of fetching files directly from the origin server, users receive them from the nearest node, minimizing latency.

Consider an online marketplace for digital assets, such as a photo stock or NFT platform. The application stores thousands of images on a central server. Whenever users open the app, those images must load quickly. If the application server is hosted in Paris, users in Paris will experience minimal ping. However:

- Users in Spain may see about 2× ping time.
- Users in the USA may see 6× ping time.
- Users in Australia may see 12× ping time.

These numbers only reflect simple ICMP ping times. Actual file delivery involves additional overhead such as TCP connections and TLS handshakes, which increase delays even further.

With a CDN, each user connects to the nearest edge node instead of the origin server. This is typically achieved via GeoDNS. Importantly, only the CDN knows the actual address of the origin server, which also improves security by reducing exposure to direct DDoS attacks. CDN providers usually operate edge nodes in major world cities. When a request is made:

- If the requested file is already cached on the edge node (cache hit), it is delivered instantly.
- If not (cache miss), the edge node requests it from the CDN shield.
- If the shield has the file cached, it is returned to the edge and then served to the user.
- If not, the shield fetches it from the origin server, caching it along the way.

For popular websites, the cache hit rate approaches but rarely reaches 100% due to purges, new files, or new users. The shield node plays a critical role. Without it, each cache miss from any edge node would hit the origin server directly, increasing load. Many providers offer shields as an optional feature, and enabling them can significantly reduce origin stress.

Beyond cache hits and misses, performance can be measured with concrete indicators:

- Time to First Byte (TTFB): how long it takes for the first data to arrive after a request. CDNs usually reduce TTFB by terminating connections closer to the user.
- Latency reduction: the difference in round-trip time between delivery from the origin versus delivery from an edge node.
- Cache hit ratio: the percentage of requests served directly from edge caches.

These KPIs provide a real, measurable view of CDN efficiency rather than theoretical assumptions. The closer the edge node is to the end user, the faster the content loads. The key questions are: Where are the users located? Which CDN providers have the best edge coverage for those locations? But don’t rely on maps alone. Measure real performance with Real User Monitoring (RUM) using metrics like TTFB and Core Web Vitals. There are plenty of ready-made tools available. If you’re interested in building your own RUM system, leave a comment or reaction – I can cover that in a follow-up post.
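The edge → shield → origin flow described above can be modeled as a toy two-tier cache. This is purely illustrative (Maps stand in for caches, a function for the origin), but it shows why the shield keeps a second edge's miss from ever reaching the origin:

```javascript
// A cache node that falls back to an upstream on miss and caches the result
// on the way back -- the same shape for both edges and the shield.
function makeNode(upstream) {
  const cache = new Map();
  return {
    get(key) {
      if (cache.has(key)) return { value: cache.get(key), hit: true };
      const { value } = upstream.get(key);
      cache.set(key, value); // cache on the way back
      return { value, hit: false };
    },
  };
}

let originHits = 0;
const origin = { get: (key) => (originHits++, { value: `file:${key}` }) };
const shield = makeNode(origin);
const edgeParis = makeNode(shield);
const edgeSydney = makeNode(shield);

edgeParis.get("logo.png");  // edge miss -> shield miss -> origin fetch
edgeSydney.get("logo.png"); // edge miss -> shield HIT, origin untouched
console.log(originHits);    // 1
```

Without the shield layer (edges pointing straight at the origin), the Sydney request would have been a second origin fetch, which is exactly the load the article says shields absorb.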

Weakty 3 weeks ago

Pinned

Back in March, I found this book at the back of my bookshelf. I don't know how it got there. I didn't remember ever buying it, much less reading it. It had the most ornate cover, as if someone had done an oil painting on an old hard-cover book. In fact, that seemed to be what it was exactly. Just like an oil painting, it had bumps and ridges, and if I had the heart to do it, I probably could have picked at it and flicked little chips of colour off with my fingers. The cover depicted several different images, but was sufficiently abstract to make me doubt my interpretation. It reminded me of a smouldering firepit set against a twisting whirlwind of leaves and strange debris: splinters of wood, the bits of plastic that connect 6-packs, old shoelaces, chunks of a gilded picture frame, and even something that looked like the severed legs of some poor creature. If you saw it, you’d probably see something else. I spent a fair bit of time staring at that cover. With the book splayed out it made for a beautiful but disturbing landscape. It was actually a few days before I even looked at the pages, I was so taken with the cover. When I finally got around to looking inside , I was surprised to find that the pages were blank. I flipped through them a few times, incredulous. My first thought was, this is a journal, or a sketchbook . But it wasn't. The book had an ISBN number, a bar code, and a bunch of information on the inside page. On the other side of the cover was the following: Of course, there were a few other things on the page. But the thing that shocked me, what made my stomach throw itself down into the basement of my body, was the line Thank you, Acton . It really struck me. Because my name is Acton. I've never met another Acton. What can I tell you about my name? It’s uncommon. It’s usually the first on any list of names. What else? People have a hard time making fun of it. Your standard school-yard bully wasn’t clever enough to come up with a quip for Acton . 
So, I was surprised to see my name, being thanked from this very personal, empty book. And as you might guess, my next move was to go over to the computer and look this thing up. I searched for the book's title online and plunged into reading all about it. Here's what I learned: That was about it. I found the regular stuff, you know, people sharing their reviews and ratings and whatnot. But here's what was odd. The photos I saw of the book's cover looked different than the one I had before me. And of course, the pages of mine were blank. I re-checked the cover of the book to make sure, yet again, that it wasn't just some kind of wrapper around another book. It wasn't. It was a hardcover book that someone had painted. I ran my fingers over the paint. It was paint, no doubt about it. And when I opened the cover, there was that first page with my name staring back at me. But I'm rambling at this point. That was back in March. Let’s fast-forward a few weeks to when the book started to talk to me. Things got weird in early April. I had the book open on my desk. I wasn't sure why I opened it. I think I had been admiring the cover in the morning sunlight coming through the wide window above my desk. When I opened the book, the pages smelled of a newness I hadn't noticed before: starchy and pleasingly rough to the touch, like the high-quality pages of paper in an artist's sketchbook. It seemed to hold the promise of possibility. I felt compelled to take a photo of the book. I reached into my pocket and pulled out my phone. Then, with the blank pages before me, in a rush, the book slurped up my phone. It slurped it right out of my hands and collected it onto the first empty page. Don't misunderstand me—this book very literally liquefied my phone, sucking it up and rearranging its physical form into a two-dimensional representation of my phone—right down to the small crack in the bottom-right corner.
I stared at the miniature picture on the page: it was a rather painterly depiction, not unlike the cover of the book. I spent some time admiring the rendering. Entranced, I almost moved to take a photo of it but, of course, my phone was gone. That's when things got interesting. It was hard to operate without my phone. I knew I was addicted to it, sure, but I didn't realize how dependent I was on it. It immediately caused a fuss for my job. I worked part-time for a distress hotline for teenagers in a bad way. Employees of the hotline have a special application on their phones ensuring calls can be encrypted and recorded safely while retaining anonymity. So, when I didn't "show up" online for my job that day, not only did I likely miss showing up for people who needed help, but I wasn't going to get my measly pay for the shift. I felt worse about not being there for the teens, of course. I had enough money to get by for the immediate future. But the thought of some youths calling the hotline and being on hold when I could have been there to answer made me feel terrible. But not terrible enough to tear myself away from the strange situation I found myself in. It was through the blank pages that Myriam was able to reach me. It happened a few days after my phone had been slurped up into a page of the book. That morning, I came back to the book at my desk with my morning tea in hand. I sat down, and stared once more at the painted cover, before slowly opening it to the page with the little image of my phone in the corner. Slowly, words began to appear on the left page of the book, beautifully typeset, as if punched in by a typewriter, one letter at a time. I knew in an instant that it was Myriam. In my shock at seeing this happen for the first time, I could barely register that as her writing got to the end of the page, the whole thing cleared and started again.
I don't remember the exact words of our first conversation, but I think it went something like this: Then the words began to disappear. I grabbed a scrap of paper and wrote down Hoke Scripter and Able-Archive Pigmented Blue Ink. And then the words were gone. Now I had something else to do. A week passed before I found the ink. Right after I got that message from Myriam, I went online and looked up Able-Archive Pigmented Blue. I wasn't surprised to find that the ink was no longer in production. It would have been too easy to click "Add to cart" on a bottle of ink and wait a few days for it to show up at my door. Instead, I found myself on forums for pen-and-ink fanatics, reading, reading, reading about this ink. Often, I saw that people had inherited a bottle from an older relative (along with some old, fancy pens to boot). Other people wrote posts lauding it as a workhorse ink, lamenting that not many people made them like this anymore. I found myself enjoying going through these posts. After a day or two of looking, I found a post from someone selling a bottle of Able-Archive Pigmented Blue. They were across the country, but it seemed they would ship it my way. I sent them a direct message and inquired about purchasing the ink: And that was it. Soon I would have some Able-Archive Pigmented Blue ink, along with its contractual-binding-powers. Whatever that meant. Waiting for the ink made for a strange week. I had to pause work entirely. After missing my first shift, I sent an e-mail to my coordinator letting them know I would be taking some time off. I didn't tell them the part about my phone getting sucked into a book, of course. That would be too weird. Instead, I told them I needed to take a break from work—and in this line of work they get it. So I sent off that e-mail, apologizing again for the shift I missed, and saying I would reach out when I was ready to return to work. My schedule was full of empty slots.
I almost felt like I had just finished school and was at the beginning of summer vacation. I had all the time in the world to hunt down stationery and whatever other obscure things I felt like searching for. Next up, of course, was a Hoke Scripter — the pen Myriam had requested I use. This turned out to be no trouble at all. There are a few stationery shops in town. I went out to three before I was able to track down the Hoke Scripter over at Infinite Inkwell. I had never been in before. The shop was basically a large stone room. Along the rounded walls were recessed cubbies of various sizes and shapes, holding all kinds of objects. One portion of the wall held rows and rows of inks, while others had glass drawers that pulled out of the cubbies, displaying pens, pencils and other writing implements of all kinds. In the center of the room was a large round table where you could sit, perhaps for testing pens or paper. There was only one person at the table, hunched over. They had a huge stack of paper and they were writing furiously. After finishing a page they would, without looking, add it to the growing stack to their left before pulling a new page from a pile of crisp sheets to their right. I wasn’t sure if they were an employee, another customer, or something more random and bizarre. My staring was interrupted by a staff member, who offered me some assistance. They led me over to the cubbies with the glass drawers, specifically pointing out a row of pens in deep blue, green, and red hues. "These are the Hoke Scripters. They’ve got a classic appeal, perhaps you can see." "I do see a certain classic air to them," I agreed. "And what might you be looking for in a pen, may I ask?" "Oh well," I began, unsure what to say. "I’m looking for a pen that my grandmother mentioned she used to use in her youth. I think it was this one." I felt strange lying to the employee, who was just looking to help me.
"Well, The Hoke Scripter is a model from Hoke that has existed for many years. It is likely that your grandmother could have used an earlier incarnation of one of these." "Cool," I said flatly. "Would you like to try it?" the staff asked. "Why not." I said, flatter than flat. Flatter than a pancake. Flat enough to write on. The staff member disappeared for a moment and reappeared with a tester Hoke Scripter. She passed me the pen. I held it in my hand, lifting and lowering it experimentally. It felt pretty sturdy. It certainly wasn’t cheap plastic. Too heavy, and too elegant for plastic. Aluminum? I would have to look it up later. The staff gestured for me to grab a seat at the large round table, to test the pen out. I joined the strange character, still writing furiously at the big table, and I did exactly that. I wrote a few lines. Then a few squiggles. What can I say? The Hoke Scripter moved like a pen does, leaving a trail of itself wherever my hand moved. I suppose I was perhaps expecting something more majestic from a pen with such a history. But there was nothing majestic about it. It was just a pen. It felt sturdy in my hands, and it did have a nice pattern on the barrel, and I supposed a person could customize it with whatever ink they liked. But what else was there to say? "What do you think?" The staff member reappeared after a few minutes of my scribbling and scratching with the pen. "Um, it’s nice." "The scripter can be a bit underwhelming, but there’s more to it than meets the eye. It’s not our most flashy or popular pen, but I think the people who come to buy this pen always have a certain, well, intentionality to them. Let me know if you need any other help or have any other questions." The staff member disappeared, leaving me to write some more. I didn’t spend much more time in the shop. I bought the pen, and it cost me a whopping $55. I could hardly believe it. 
That price included something called a converter, which I was told I needed: it would hold the ink I wanted to use with the pen. I walked from the shop to my car and drove home, grumbling under my breath. Oh, the intentionality of dropping $55 on a pen. The weekend arrived without much fanfare. I was feeling a bit naked and aimless. Without my phone, my fingers fidgeted with the accumulated bits and bobs around the kitchen counter and table: a magazine that had been sitting there for weeks, some dirty cutlery, a notepad from the dollar store, and a rubber band. I sat there, with the sun rising around me, spinning that rubber band about my index finger, waiting for life to wake up around me. Eventually, I got up, poured some cereal into a bowl, made myself a coffee, and sat back down to eat. I grabbed a pen and flipped open the cheap notepad. One of the spiral ones that could fit in a shirt or pants pocket. It had already had a few pages ripped out of it. Then I remembered I had gotten it out when I had a few friends over to play cards last week. I briefly looked at the tally marks of our scores. I ripped out the page and noted that the tally marks had imprinted on the cheap paper below. I turned to my cereal. If I left it any longer it would get soggy. What would I do with my day? It would be another few days before the ink arrived, and this strange episode in my life was on hiatus until it appeared. I poured myself another cup of coffee and threw myself down on the couch. I stared into the old fireplace in the corner of the room, long since out of use. Outside, cars were idling in the street, and traffic was already picking up. I could tell by the exhaust rising up to the window. The exhaust of a home fireplace had been replaced with the exhaust of cars. I moved closer to the window and looked at the traffic. They were stuck, too, waiting in line. All of us waiting for some kind of ink, something permanent to arrive.
I stood there and took a sip of my coffee. Just watching. I didn’t want to open the book. If I did, there was a chance that Myriam would speak to me again. Perhaps I should say she would write to me again. And if I couldn’t write back without the correct ink, she might lose interest. Perhaps she only had so much strength with which to communicate with the outside world. I figured that if I didn’t open the book, she would stay in some kind of stasis, her life frozen in the permanency of the page. In that sense, I supposed I had time. I sighed, finished my breakfast and got up. I wasn’t going to get anywhere sitting around thinking. My hands reached for my phone, only to find it missing from my pocket for the fiftieth time that day. I thought about e-mailing a friend and seeing if they wanted to get together over a coffee, or maybe go for a drink later tonight. I popped open my computer, opened my e-mail and stared at the empty draft. I started to write without addressing the message to anyone yet. I looked at the "To" field of the e-mail. I clicked the "plus" button and a pop-up appeared with a list of some of my most frequent e-mail recipients. I started adding all the people I thought worth reaching out to. Before long I had addressed the e-mail to some fourteen people. Some of them I had spoken with as recently as last week and some I had not seen in years. Then, I moved all the e-mail addresses into the BCC field so that the recipients wouldn’t know who else had been e-mailed. That’s the decent way to do it, in case you were wondering how e-mail works. I hit Send, shut the laptop, and started cleaning up. I decided that I should go for a walk. It was still early, and there wouldn’t be many people out. Never mind that it was the dead of winter. I watched the cars on the road from my apartment. They seemed perpetually lined up, exhaust rising into my view. What a view.
I always go back and forth in my mind about whether I should have taken the apartment at the top of the building. When I moved into this place it was either this, the second floor where I am now, or the top floor. All the units on the first and third floors had been occupied. I eventually chose the second floor because I thought I wouldn’t want to haul everything up to the fourth floor (there’s no elevator). But from here, I was a bit too close to the action. After having lived here for a year or so, I think being on the fourth floor would have been better. A bit more removed from the commotion, the exhaust, the closeness to the ground. Besides, I figure I could use the exercise of an extra two flights of stairs each day. I pulled on my hoodie and tossed a thin jacket over top, shoved on my boots and stepped into the hall. It would only be a short walk, so I wouldn’t need any more layers. I didn’t even have my gloves. Outside, I stepped between the idling cars with their anonymous drivers, and took the back alley behind Jason’s Grocer out toward Delmont Ave. It was still early and the alley was quiet, as I expected. I passed a few piles of garbage outside the backs of the commercial buildings and kept walking. The alley narrowed past the point cars could reach, and I walked on. I took a left at Montrain and walked the ten minutes to Gaston park. A few more minutes of walking, past the fountains, shut off for the winter, until I was on one of the trails. Despite its confusing mess of criss-crossing trails, the park made for a great escape from the city. I walked through it, listening to my feet occasionally deviate from their regular cadence, kicking and tripping over bits and pieces of trail. A few runners passed, and even one cross-country skier, although it seemed the snow was a bit thin for it. All these people had headphones in, plugged into a world of their devising.
I had nothing to listen to but the crunching snow, birds in the barren trees, and the far-off sounds of cars starting and stopping and occasionally honking. I reached the end of the trail and then turned around. I was growing restless and feeling disconnected. I had seen people IRL, sure, but I could feel myself growing increasingly uneasy. I knew this was likely due in part, if not entirely, to my phone being sucked into a book. I knew this because my hands still kept patting at my pants or jacket pocket, thinking my phone was there, within reach and ready to soothe whatever restless thought came my way. I walked back home in a funk. I had not anticipated that I would feel this sort of withdrawal. When my phone had been liquefied and sucked into the page of that book, well, you could say I was somewhere between amused and bemused. But now I was starting to feel agitated. I took a few deep breaths and tried to tune into the sounds of the world again, but, instead, my ears just pounded as if they were trying to tell me to plug something into them, shove anything in: a podcast, some music, anything but this overly-present, disgusting excuse-for-nature around me. I picked up my pace to a point where I was almost running. I exited the trail where I had come in. There were more people now: some of them sledding down the large hill facing east, others with their dogs, families with kids running after each other. I walked a few more paces and then looked back at the entrance of the trail. It looked light and inviting. I found a bench to sit down on and looked over at the people in the park. I was sweating underneath my two layers. The sounds of children laughing ricocheted through the air and slammed into my eardrums. Cars passed by with a roar I had hardly noticed before. I put my face in my hands and pulled at my skin, as if I had a tight mask on that needed to come off. No such luck. Instead, I rested my chin in the palm of my hand, my elbow on my knee.
I looked down at the ground before me. There were a few spare bits of garbage there: empty weed canisters, a pop bottle that someone had peeled the label off of, and a large ribbon that looked like it belonged on a kite. I just didn’t care enough to pick any of it up and throw it away. Why should I, I thought miserably, when I felt like this? When I got home, I immediately threw open my computer to see if anyone had responded to my e-mail. Not a word. I shut it in frustration, slid it away from me on the coffee table, and slumped back into the couch until I was lying horizontally. I didn’t know what time it was, but it was probably close to lunch. My phone was also my watch, as it is for most people. Without a clock in the room, I resigned myself to not knowing the time. I simply closed my eyes and lay there. I imagined the fumes of the cars, still lined up outside waiting for their turn to go somewhere, rising up and lulling me into a noxious sleep. And I did fall asleep. And I started to dream. In my dream, I was walking with someone. Not in the park this time, and not in the city either, really. It seemed we were walking across a bridge. It was foggy all around us. I could not tell if the bridge was going over water, or over a highway, or something else entirely. Looking over the edge of the bridge, it could have been spanning a valley full of turtles, or lava, or a great black emptiness. I walked on, my eyes periodically darting to the side of the bridge, wondering at the mist below. All this time, my walking partner had been talking, but I had not listened to a single word. I turned to look at them. It was Myriam. I knew it immediately, there in the dream. She was exactly as I had pictured her, based on our first conversation. She had a sad wilt to her, like flowers left too long in the sun on a kitchen table. She looked back at me, perhaps wondering if I had heard anything she had said.
I said I was sorry, but this didn’t seem to reset anything between us. "Will you ever respond to me? I’ve been wondering this every day, Acton." "I will, I’m just not ready yet," I replied. My words clumsily escaped from me. I watched them tumble over the bridge into the mass of fog. "I hope you do," she said. "You have the ink and the pen, right?" "I do," I said, which was strange because I knew the ink had not yet arrived. We walked in silence for a few minutes. The bridge was the only concrete thing within the fog and seemed to go on forever, disappearing into the misty distance. "I just need some time to get my thoughts into place," I said, facing forward so my words would come out right in front of us, and not get swept away into the abyss. Myriam’s hair was a steely gray, and she wore it loose, down to her shoulders. Every now and then, a muggy breeze would come and push it beyond her shoulders, like the swinging doors of a saloon in an old Western film. I imagined foggy, unseen spectres passing through the doors of her shoulders into a place I couldn’t go. "Whenever you’re ready," I heard her say. We walked some more. Then she began to shrink, and I began to grow. With each step, she got smaller and I got larger. Before long, I was trying to avoid crushing her with my feet. I couldn’t continue walking beside her and instead had to step off the bridge and into the unknown fog. My first step came down unsteadily, but with such grandiosity that the fog cleared and was swept away, the weight of my footstep pushing it outward. All around my foot, I saw a verdant greenness, twisting vines, lush ferns, and tall grasses. All this I saw for only a moment before the fog returned to surround my foot. I grew larger still, and the following step with my left foot required straddling the bridge. By now, Myriam was long gone, and I was so tall I couldn’t see what the displaced fog would reveal.
Despite not being able to see that far down, I knew that my next step would not yield the previously lush green ground; for a hundred, or maybe a thousand, years had passed since my previous step, and things had surely changed for the worse. So I stopped in my tracks, afraid of advancing time any farther or growing any larger, of stomping out any other life in my next movements. I knew the next movements I made would decide the fate of all life. My next step would be like a thousand atom bombs, and there would be nothing left of this world. I took a step, and I was right. It all ended. And then I woke up. I sat up on the couch and groggily mused over my dream. It was already starting to fade. I didn’t feel like writing it down, but I tried to remain with the feeling of being a giant. Of a being so large that they can’t even see what their giant steps are obliterating, the world so far below them. I made myself a coffee and lay down on the couch, the cup just within my reach. Periodically, I pulled it to my face and carefully sipped it from my lying-down position. A most precarious way of drinking a coffee. I looked at my laptop on the couch. It was calling to me. A silent notification, ringing in my ears. I wanted to hear from someone. Anyone. I set my coffee on the table next to the couch, pulled the laptop onto my lap and opened my e-mail client. Before me was a single unread message in my inbox: I looked at the e-mail. It took me a moment to register the name. Casey. Casey was a friend from high school. We hadn’t talked in quite some time. The last time we had gotten together, she and I had indeed gone out for a meal. It had been a patio brunch in the middle of summer. I looked over the list of people I had sent my original e-mail to. Casey was my oldest friend among them. The two of us had met in the school wrestling club, two odd ducks who had picked the wrong club to join.
We had bonded over discussing mechanical magazines, web forums for Arkendo’s Binding, a game we both loved, and the fact that neither of us actually wanted to be in wrestling club. All this talking we did over the slam of teenage bodies against thinly padded mats. I can almost hear it now. I responded quickly, and effortlessly—the kind of correspondence that happens with people you really just click with. Short and sweet. Save the questions for when we get together. I snapped my laptop closed and thought about this making of plans without a phone. I wouldn’t be able to check in with her leading up to the dinner. I felt a sweet anticipation already growing in me. I got up and busied myself with making some lunch. I chewed my food, staring out the kitchen window. I tidied up. I was putting away dishes that had piled up over the previous few days when the buzzer rang for my apartment. I pressed the intercom and asked who it was. In response I simply heard a muted thump. My apartment not having a camera to watch approaching visitors, I had no choice but to go see what it was. I clomped down the stairs to the lobby, where people leave packages outside the rickety old door that serves as the gateway to the building. I poked my head out and felt a blast of cold wind and snow hit me right in the face. I looked down. There was a small package, clumsily wrapped. I picked it up. It didn’t weigh much. My name was on it. There was no return address, but beside a simple "from:" label I saw a familiar name: NibTuner79. The ink was here. I took the package inside and brought it up to my apartment. I picked away at several layers of clear tape with my fingernails before giving up and cutting open the brown parcel paper to reveal the ink. The inkwell was round with a square neck, ice blue. It was three-quarters full of ink. I turned it in my hands, taking in its shape and the rich blue colour swirling with the movement of the inkwell.
I stopped so that the label faced me: a simple white label with a black border and, in serif text, Able-Archive Pigmented Blue. There was no logo, no insignia, no brand to speak of. For all I knew, NibTuner79 could have filled any old empty bottle with some random, cheap blue ink. But it seemed unlikely. I wasn’t exactly a collector of these sorts of things, but as far as I could tell, honesty went fairly far in the world of nerds and gearheads. I chuckled to myself, because I felt quite honest thinking that. I was on my way to becoming one of them. I put the ink down on the desk next to Myriam's Codebreakers and the Hoke Scripter. I had all three. The trifecta. I could crack open the book and write something. I stared down at the book and its painted cover. To my right was a freshly inked pen. To my left, a quickly cooling cup of tea. I drummed my ink-stained fingers on my desk—I had quickly learned that filling a pen could be a very messy job. I wondered how badly archival ink stained. I would find out soon enough, I supposed. I picked up the inked pen. Despite holding just a few millilitres of ink, it seemed to feel heavier. Potent. No longer an inert stick, but a poised snake. I could feel my hesitation mounting, but I knew it was time to act. I opened the book with the fingers that had remained unstained by the act of filling the pen. I flipped to the page where Myriam’s text had first appeared and where the image of my phone still sat. I wrote underneath the depiction of my phone: I could feel myself running out of things to say. I looked down at the pen in my hand. I looked down at my writing. It was messy and seemed to switch between cursive and printing at random. I had hoped I would come up with something more meaningful or interesting to say. I leaned back in my chair. I was uninteresting. I was offline. Were the two interchangeable for me?
I was a spectator of myself, watching my events unfold in a terrible synchronicity that had me simultaneously bored and amused. Despite all the separation anxiety of being without my phone, of not working my regular shifts, of what felt like just wandering around aimlessly, I was becoming a subject of my own life. My old life, my digital life, was in the hands of Myriam, a woman, I presumed, in a book by an author named Solah J. Trek. I rotated and twirled the Hoke Scripter in my fingers and stared at my paragraph, re-reading it. As I read, the words began to disappear into the page. It seemed that as soon as I finished reading a sentence, one by one, its words would sink into the nothingness of the page. By the time I got to the end of my second paragraph it was all but gone. I stared in astonishment at the empty page. The painted image of my phone was still there in the corner. I shifted uneasily in my chair, breaking the silence that had settled in the room. Then, a voice appeared in my head, and I realized it was my own, reading words as they appeared on the page—Myriam was responding. I pounded down the apartment stairs, my messenger bag banging against my back. Every moment an eternity. Every step, the thump of my bag against me—muffled and extinguished by the muted stairwell. I threw open the front door, nearly hitting another resident on their way in. I took hardly any notice of them. I was back outside again. I was to meet Casey at 7pm at Deep Blue, a gastrobar about a fifteen-minute walk from my apartment. On my wrist: an old watch I had found in my bedside table. In my bag, slung on my back: a strange book. I walked quickly, trying to make sense of things. After seeing Myriam’s message I had stared blankly at it, shut the book, and gotten up in a daze. At first I felt unsafe, the target of some kind of operation that was beyond my comprehension, beyond my plane of existence, even.
The message in the book had disappeared within minutes, maybe seconds, of my reading it, and had left me staring at an empty page. Somehow, Myriam had known about my rendezvous with Casey. At first, I was confounded. I walked and mulled. The questions in my mind turned over and over, scratching at an imagined blank page. I turned onto Range Ave and crossed over to the other side, stepping around parked cars, and stopped at the curb. I stood on the cobblestone walk that marked the beginning of the old downtown. I studied my shoes against the unevenly placed stones. I stood hunched there, losing my gaze to the ground while the world spun around me. I wasn’t sure why I had stopped. It was as if I was frozen—I couldn’t move my body save for the clenching and unclenching of my hands. All the while that I stood like this, I felt an eerie sixth sense, as if a great wave were going to crash over me. I stood wanting to look back but resolving not to. Eventually, I broke free. I could move again. I didn’t look back. Everything clicked in that first footstep out of that immobile state. My questions were answered: Myriam had read the message from Casey through my phone—my phone that was still somehow working in another ethereal realm. It was clear to me now. Myriam was reading my e-mails. She was in real-time lock-step with me. She was stuck in her own eternity, watching the stream of data of my life move past her, watching and absorbing it as she liked. I continued down the cobblestone path at a reasonable rate, minding my step for the occasional uneven stone, until the sign of Deep Blue came into view. The downtown was bustling with people, despite the cold weather and snow. Half of them could well have been tourists, periodically stopping to find their way with maps on their phones, or to send a message to someone out of view, out of picture, someone who might be at the other end of the world—and it seemed they had to do this in the middle of the sidewalk. I found myself grumbling.
I had become moodier in the preceding days. I reached the door to Deep Blue and stopped, feeling an impulse to message Casey and say I had arrived. I couldn’t, of course. Instead, one of us would have to go into the restaurant and wait for the other. We hadn’t agreed to stand outside and wait to go in together. But there I was, standing outside, surveying the landscape of downtown, searching for a familiar face in a crowd of strangers. I stood staring for a minute. I wanted to shake myself out of the slump I was sliding into. I hadn’t seen Casey in years and this wasn’t the disposition I wanted to show up with. I had nothing to prove to her, of course, but it would have been nice to let her know that I wasn’t losing my mind. Which I might have been. I headed into Deep Blue. "Acton!" I was stomping through Deep Blue when I heard my name called out. Casey’s voice pulled me out of the mud. You might even say my heart soared at the sound of it. That’s what the voice of an old friend can do to you. I had been in my own head too much—and it was only her voice cutting through the bog of my thoughts that made me realize it. I made my way to her table. She had gotten a small booth at the back of the restaurant. I looked at my watch. It was 7:02. The restaurant was bustling. She jumped up, arms outstretched for a hug as I approached. She was wearing a colourful knit sweater, loose and baggy. I sank into her arms and found myself breathing in deeply. In her arms, I felt something stirring in me. In a friend, even one you haven’t seen in some time, there is support, meaning, connection, a feeling of being grounded. I could have stood in her arms, in the way of the staff and the customers in the middle of that restaurant, for the rest of the night. Instead, I grew self-conscious that I would be the one holding on longer than her. I let go. I sat down, tossing my bag haphazardly (forgetting what was in it) into the corner of my side of the booth.
As soon as I sat down, we were swarmed by the staff. We both held our tongues, not wishing to begin the business of our catching-up in the immediate presence of others. We put in an order for some appetizers. As promised, Casey ordered two drinks, the same for the both of us. Then, we were left to ourselves. I checked my watch again, expecting it to be 8pm already. It was 7:09. "This is a pleasant surprise," I found myself saying. Between us, there was a small centerpiece with a lit candle. The flame of it danced under the whims of my opening remarks. "Agreed." Casey took a sip of her water and I watched her. "You’re the only person who e-mailed me back. I didn’t expect anything from anyone, I suppose, but I was surprised that you were up for it." "Why’s that?" "I don’t know, it’s been some time since we got together. People get in their head about that sort of thing," I said. "Which is funny, don’t you think? Maybe you were surprised I was interested, when most people would think old friends from the past make for awkward communications. But I’ve never really felt that way. People weave in and out of each other’s lives over the years. It’s normal. Something to be grateful for, really." "That’s insightful," I said, pausing to sip my water. "I suppose I’m also a little surprised—I would have thought that the people I had been messaging with over my phone just weeks ago would get back to me when I e-mailed them, but instead it was the opposite." "How many people did you send your message to?" "Thirteen or fourteen," I said. "E-mail is old, now. It’s a dumping ground." "I guess." "So the phone is gone. And it pushed you to reach out to some old friends." I tasted my drink, which had appeared moments ago. I had already forgotten what she had ordered. It definitely had some gin in it, but maybe some lemon and honey, too. "Gone…" I echoed back to her. "Yes, temporarily." "What do you mean?" she asked. "Well, it’s a bit of an odd story," I said.
My eyes shot to my bag in the corner. Casey’s look followed my eyes to the bag, slumping in its seat of the booth. "This is a really good drink," I said. "Acton, quit being so vague. What happened to your phone? What’s in your bag?" This is what I liked about Casey. She was to the point, and she sometimes made herself laugh in that way. She wasn’t laughing now, though. "Well, my phone…" I said, trailing off again. How much did I actually want to share about this? I looked at my bag again and reached for it. "Well, it would be easier to just show you." Out of the corner of my eye, I could see Casey watching me as I opened my messenger bag and pulled out the book that had started all of this. "What’s that?" "Well, it’s supposed to be a novel, but look—" I opened the page to where the image of my phone resided and passed it over to her. "When I opened this book—" Casey received the book in her hands, and that was when I realized what I had just done. I lurched across the table to take it back, but it was already too late. Myriam, the book, whatever it was in there, had slurped up Casey’s phone too. Before Casey could even cry out, her phone appeared next to mine in the corner of the page in the very same painterly style. "Fuck." Casey looked up at me in disbelief, while simultaneously patting the table where her phone had been seconds ago. "It’s gone," I said flatly. "What the hell, Acton! What just happened?" "I’m sorry, I didn’t realize it was going to happen again. I—I was going to show you. Look—that’s my phone. It slurped up my phone." I paused. "And there’s yours," I pointed dumbly. "I can see that, Acton, now show me the trick to get them back!" "Ah-ha," I laughed weakly. "There isn’t one." Casey lifted herself out of her seat and looked around the restaurant wildly, as if she was expecting to see cameras from a reality TV show pop out of nowhere. She sat back down and took a deep breath, followed by a large gulp of her drink. "Explain yourself." 
I figured this was not going to turn out to be the dinner I had hoped for. Casey looked at me for a good minute before she spoke. I had just finished explaining everything: from the moment I had picked up the book to right before I had left my apartment after communicating with Myriam for the second time. I hadn’t told her about the dream I had had, though. "This is insane." "It is insane," I confirmed. Her eyes drilled into me. Mostly, she looked furious. But eventually, that fury gave way to an uneasy look. "So, if I were to take this book, walk over to that table over there," she nodded to her right, "and plop it in front of that bald dude, it would slurp up his phone." "I’m pretty sure," I said. "It has only happened twice, and there might be other conditions necessary for it to happen. Maybe the phone needs to be within a certain distance from the page, or maybe the person had to be holding the book open. I don’t know." "Well, I don’t know, either," Casey sounded exasperated. "But I’m really tempted to try." "He might not take it as well as you did," I said uneasily. I looked over at the guy Casey had gestured to. He didn’t look like the thumping type, but who knows, he could probably beat us to kingdom come. "This is fucked up. What about this bitch in the book?" "Hey, I don’t know that you need to call her a bitch." "She certainly seems like a manipulative bitch! She used you to steal all my information. By what you told me, right now she’s scanning all my messages, pictures, and e-mails." "She could be," I said, putting my head into my hands. "Sounds like a real bi–" "—Look, call her what you want," I said, cutting off Casey, "but I believe what she said about being stuck in the book." Casey picked up the book and nervously opened it, as if it might suck her entire self right into its empty pages. Hey, it might as well have. It turned out that the impossible was possible. Or maybe I should say, the very strange and unreal had been made real. "Solah J. 
Trek," Casey muttered. "That’s the author," I said, just catching Casey’s words. "I know," Casey said, her eyes widening in annoyance. "I’d look her up if I had my phone," Casey said curtly. "Look—you can be mad about this all you want, but like it or not—huh," I paused. "What?" "I never looked up the author. And she told me to. 'Take it up with my author,' Myriam had said, or something like that. I never did. I must have been too shocked by what had happened." "Or were you too busy cooing over your new love Myriam," Casey snapped at me. Underneath the anger I could see she was still able to tease me. Maybe she was feeling a bit better. I blushed, thinking about the dream I had in which I had walked, shoulder to shoulder, with Myriam—until I had grown too big, and her too small. "She’s too old for me," I muttered. "From 1955, by the looks of it," Casey said smartly, and snapped the book shut. She paused and looked at me again. I could see her fingers moving absently across the painted cover of the book. "Let’s go back to your place. You’re still on Hatchet Ave, yeah?" "Yeah." "Well, let’s go look her up. I’m not leaving until we get to the bottom of this. After all, I use my phone to unlock the door to my apartment." "You do?" "And guess what else," she said, tapping the table. "The bill?" "That’s right." "Well, I owed you a dinner anyway, didn’t I?" I asked, as I waved the waiter down to order. We trudged back through the snow to my place. With Casey there, there was none of that walking and stopping to feel like I was about to get hit by an invisible wave. Sure, I felt uneasy, but at least with Casey beside me I had someone to shoot the breeze with. We ended up catching up more on the walk home than we did over dinner. I was surprised she didn’t want to drill me with more questions about Myriam. I patted the book, tucked away in my messenger bag at my side as we walked. It was still there. 
My compulsive checking to make sure my phone was in my pocket had been replaced by something else. I learned plenty about Casey as we walked. In the years since we had last gotten together, she had gone through some life changes that made my life look as plain and untroubled as a bulletin board in an elementary school. Her Mom had died just months after we had last met up. She had had a terrible job during that time that had treated her like shit while her mother was in the hospital. I had only met her Mom twice—both times at school. I remember her watching in the bleachers at one of our painful intracity wrestling competitions. Her mom actually saw me and called me over to watch Casey wrestle in her first match in the girls tournament. I remember there weren’t many girls into wrestling and so there were only a few matches to get to the top of the ladder. Casey’s mom and I had talked about that a little. I hadn’t stayed to watch Casey’s match to the end. I had come up with some excuse about having to talk to my coach, or something, when really, I was avoiding having to be next to her if Casey lost the match. Something about that had been too painful to imagine, and so I politely excused myself after our brief conversation. With this memory before me, I felt ashamed of my behaviour, hearing that Casey’s mom had since died. I know neither of them would have held it against me, but when you find out someone’s gone, sometimes the first thing your mind does is think about the last dumb thing you did in front of them. It wasn’t just that Casey’s mom had died. She shared a little about trying out a secondary community college program that was a complete miss. Then she went on to say that her high school boyfriend had broken up with her while she was trying to make sense of all these huge life changes. I vaguely remembered the guy. He was one of the quiet, inconspicuous types in high school. I know the type because I had been the type. I probably still was. 
We walked and talked, though it was more me listening. I hesitated to say much about my life, because it didn’t feel like I had really had a life in comparison to what Casey had been through. On top of that, I was realizing that what was, for me, an almost farcical experience of having my phone sucked up into a book was probably not the same for this other person walking beside me. We stepped through the snow together, abreast. I was only half-listening now, because I was reminded of my dream where I walked beside Myriam. In it, I had seen myself grow bigger and bigger. But now, beside Casey, that was the last thing I felt. I felt like I wanted to shrink into non-existence with each step. It was with that sense of shame that we arrived at my apartment entry. By this time we had been walking in what I hoped was a comfortable silence. We walked up the stairs, gripping the railing: the apartment super still hadn’t set up any anti-slip devices for the winter, and the stairs were as slippery as wet marble (I assure you, there wasn’t a lick of marble anywhere in this dingy old apartment building). We entered my apartment and I put my bag on the kitchen table. I went to the fridge. "Tea? Beer?" "A beer would be great," said Casey’s voice from around the corner. I heard the distinct sounds of my messenger bag’s buckles unbuckling. I leaned around the corner: Casey was throwing herself down on the couch with the book. I turned back to the fridge and grabbed the last two beers. I joined Casey on the couch, handing her one. "Thanks," she said absentmindedly. "I hope no one needed to get in touch with you tonight," I said glumly, looking over at the page with our phones on it. Casey was running her thumb over the images. "No, but we’ll solve this tonight." "How do you figure?" "Well, you’re going to give me a pen, and I’m going to tear Myriam a new one." I shrugged. I got up, got the pen and the ink, and showed them to Casey. She glared at them. "You spent how much on this hokum?" 
"Don’t worry about it," I said gruffly. I had spent over $50, was the answer, and I wasn’t working right now, thank you very much. Casey pulled the table in front of the couch closer and put the book, the ink, and the pen down in front of her. She took out the pen and hovered it over the page. "Wait—what are you going to write?" "I don’t think you get to ask me that. Your little book here sucked up my phone without asking, and now I’m going to write. Either get out of the way, or watch quietly." I sighed. She had every right to say that. I inched closer to her on the couch. I could smell the pub on both of us. I watched as she wrote. Her handwriting was far nicer than mine. And that was what Casey wrote. We both sat back on the couch, as if we had done a workout. Then, the words began to slowly disappear into the page as before. We watched, expectantly. We paused, waiting for more words to appear, but there were none. "The audacity!" Casey crowed. "This bitch barely acknowledged me!" "Wow," I said, dumbly. "Well, you better write something. At least she’ll talk to you." Casey pushed the pen into my hands and I received it clumsily. I began to write. "I was thinking more, give us back our phones," Casey said. My eyes bulged and I held my palms up defensively. "It’s fine, it’s fine, she’s writing back," Casey said hurriedly. We both stared dumbly at this response, until it faded away into the page. "We are dealing with a very strange entity here, Acton." "I know," I said. "What do we do?" "I don’t know." "Know, don’t know, whatever state we’re in, we can’t keep on like this. I think she’s on a mission to absorb more information—she probably wants us to expose this book to other people so that it sucks up their phones too." "I suppose," I said slowly. "We need to destroy it!" "Destroy it!" I gasped. "What about our phones?" I cried out feebly. "They’re gone, and everything in them. 
There’s no way she’s going to give them back," Casey snorted, eyeing the book. I grabbed at the book automatically and clasped it to my chest. "We can’t destroy it!" My heart was racing. I felt the invisible wave about to crash over me again. What had changed in me to make me this way? Why did I want to protect this book, protect Myriam? Casey wanted to destroy her. Burn the book, or rip it to shreds. But she was inside it. I knew I couldn’t let her do it. It was true that something had changed in me the moment Myriam had sucked up my phone. My life had slowed down considerably, but I was also seeing things differently. I looked at the world differently. Everyone around me appeared less real than Myriam! All of them were sucked into their phones, while mine had been sucked away. My relationships with the people of this world were hardly meaningful, I saw that now. They were facile and fragile things—like something a child had made with glue and popsicle sticks: sloppy, brittle, ready to break at a moment’s notice. I felt a warmth in my chest where the book was. Across from me, Casey appeared venomous, her face angular and snake-like—ready to strike. I inched back on the couch. "Acton," my name floated out of her mouth, wrapped in a warning tone. Her lips hardly seemed to move. She moved toward me slowly, her eyes locked on mine. I was stuck, caught in her gaze. She moved closer still, her upper body hovering over the couch where we sat. Then, she pounced. I couldn’t help but release the book: she pounced on me, not it. She struck me like a shot from a cannon and together we fell back into the couch. I heard the book fall from the couch to the floor with a muffled thump. My face was awash in her hair. Behind the smell of the pub was a fragrance from earlier in her day. She reached for the book. I writhed underneath her and rolled off the couch, hitting the floor—slap against the mats. 
I got up onto my hands and knees and grabbed for the book but she was already behind me. I knew I was done for. Wrestling between boys and girls in high school was forbidden, but it all came back to me the same—and evidently for Casey too. I felt her arm cross under my chin and her other arm snake through my legs. Click, went the padlock of her hands. She pulled me back and we both rolled backward. It was over before it started. "You still wrestle?" I gasped out. "No," she laughed, "but a champion doesn’t forget her moves." A champion doesn’t forget her moves. I had never actually watched Casey’s matches, that day I sat with her mom. Now it was sounding like she had taken home the trophy. I lay there, pinned by her, her hair brushing into my face. She smelled nice. The promise of spring. She must have sensed that I was giving up whatever fight was in me because I felt her grip loosen. I relaxed into her body, my head unlocked and leaning back into her shoulder. "I thought you hated wrestling." "I did, sometimes," she said distantly. I couldn’t look at her face from where I lay. "Now are you going to stand in the way of burning this book or not?" "No," I sighed. "Get some kindling, then." I wandered through my apartment, grabbing a newspaper from my recycling. I had a wooden box that a few clementines had come in; I moved them to the fridge. I found some matches and brought some rubbing alcohol from the bathroom. I returned to Casey and picked the book off the floor. It felt heavier. I walked over to the fireplace and kneeled down. I looked back at Casey. She stared back at me. My fingers ran back and forth absentmindedly along the strange impasto cover. I turned back to the book, looked at it once more, and then ripped off the cover. I poured the alcohol on it, tossed it in the fireplace, and threw the small crate on top. I lit a match, stepped back, and threw it in. The alcohol burned off quickly with a whoof but the book still caught. 
It did not take long to burn. There was not much smoke, and what little there was sent some creature who had taken up residence in the chimney scuttling up and out into the night air. I returned to the couch and we watched the book burn until it was no more than a blackened crisp. Neither of us said anything. Finally, when it was over, Casey turned to me. "I still won’t be able to get into my apartment, at least until I can get ahold of the Super." "Right," I said tiredly, "take my bed. I’ll sleep on the couch." "Thanks," she patted me on the leg as she stood up. I remained on the couch to watch the last few embers in the fireplace. I listened as Casey roamed around my apartment, finding whatever she needed to get situated to sleep over. I pulled the couch blanket over me, a comfortable weariness settling over me. I lay there listening to the apartment and the neighbourhood. I imagined Casey pressed against me. She had pinned me fair and square, not that I had had much fight in me in the first place. It had all happened in a matter of seconds. In light of that, I felt a competitive spark growing within me. Something tonight had lit it, and I imagined it would grow just as the embers in the fireplace would diminish into nothing. I fell asleep hoping to dream about a bridge.

- The book was originally published in 1955.
- The author had written several books.
- Myriam's Codebreakers was the last book that Solah J. Trek wrote; she died shortly after it was published.
- A plot summary online told me the book was mostly a love story that involved a woman in a post-World-War-Two workplace.
- Early computers feature prominently in the book; specifically, ones that were developed based on the machines built and used for war-time decryption.

Manuel Moreale 3 weeks ago

Melanie Richards

This week on the People and Blogs series we have an interview with Melanie Richards, whose blog can be found at melanie-richards.com/blog. Tired of RSS? Read this in your browser or sign up for the newsletter. People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. I’m a Group Product Manager co-leading the core product at Webflow, i.e. helping teams visually design and build websites. My personal mission is to empower people to make inspiring, impactful, and inclusive things on the web. That’s been the through line of my career so far: I started out as a designer at a full-service agency called Fuzzco, moved to the web platform at Microsoft Edge, continued building for developers at Netlify, and am now aiming to make web creation even more democratic with the Webflow platform. I transitioned from design to product management while at Microsoft Edge. I wanted to take part in steering the future of the web platform, instead of remaining downstream of those decisions. I feel so lucky to have worked on new features in HTML, ARIA, CSS, and JavaScript with other PMs and developers in the W3C and WHATWG. I’m a builder at heart, so I love to work on webby side projects as well as a whole bevy of analog hobbies: knitting, sewing, weaving, sketchbooking, and journaling. I have a couple of primary blogs right now. From 2013–2016 I also had a blog and directory called Badass Lady Creatives (wish I had spent more than five minutes on the name, haha). This featured women who were doing cool things in various “creative” industries. At the time it seemed like every panel, conference lineup, and group project featured all or mostly dudes. The blog was a way to push back on that a little bit and highlight people who were potentially overlooked. Since then gender representation (for one) seems to have gotten a bit better in these industries. 
But the work and joy of celebrating diverse, inspiring talent is never done! Big “yeet to production” vibes for me! I use Obsidian to scribble down my thoughts and write an initial draft. Obsidian creates Markdown files, so I copy and paste those into Visual Studio Code (my code editor), add some images and make some tweaks, and then push to production. I really try not to overthink it too much. However, I will admit that I have a ton of drafts in Obsidian that never see the light of day. It can be cathartic enough just to scribble it down, even if I never publish the thought. For my Learning Log posts, I use a Readwise => Obsidian workflow I describe in this blog post. Reader by Readwise is the app where I store and read all my RSS feeds and newsletter forwards. “Parallel play” is the biggest, most joyful boon to my creativity. I love to be in the company of others as we independently work on our own projects side by side. There’s a delicate balance when it comes to working on creative projects socially. For example, my mom, my aunt, and I often have Sew Day over FaceTime on Sundays. Everyone’s pretty committed to what they’re working on, so it’s easy to sew and talk and sing (badly 😂) at the same time. I also used to go to a local craft night that very sadly disbanded when the host shop changed hands. For writing or coding, that takes a bit more mental focus for me. I started a Discord server with a few friends, which is dedicated to working on blog posts and side projects. We meet up once a month to talk about our projects (and shoot the breeze, usually about web accessibility and/or the goodness of dogs). Then we all log off the voice channel to go do the thing! Both of these blogs use Eleventy and plain ol’ Markdown, and are hosted on Netlify. Some of my other side projects use a content management system (CMS) like Webflow’s CMS, or Contentful + Eleventy. Again, Webflow is my current employer. 
I use a Netlify form for comments on my “Making” blog, and Webmentions for my main blog. I will probably pull out Webmentions from that code base: conceptually they’ve never really “landed” for me, and it would be nice to delete a ton of code. I generally like my setup, though sometimes I think about migrating my “Making” blog onto a CMS. As far as CMSes go, I quite like Webflow’s: it’s straightforward and has that Goldilocks level of functionality for me. Some other CMSes I’ve tried have felt bloated yet seemed to miss obvious functionality out of the box. I have a Bookshop.org affiliate link and it took me several years to meet the $20 minimum payout so…yeah I’ve never truly monetized my blogging! I find there’s freedom in giving away your thoughts for free. As far as costs go, I have pretty low overhead: just paying for the domain name. I’m fine with other folks monetizing personal blogs, though of course there’s a classy and not-classy way to do so. If monetizing is what keeps bloggers’ work on the open web, on sites they own and control, I prefer that over monetizing through walled gardens. Related: Substack makes it easy to monetize but there are some very compelling reasons to consider alternatives. This is highly topical: I’m currently scheming about a directory site listing “maker” blogs! So many communities in the visual arts and crafts are stuck on social media platforms they don’t even enjoy, beholden to the whims of an algorithm. I’d like to connect makers in a more organic way. If you’re a crafter who would like to be part of this, feel free to fill out this Google form ! Now that you're done reading the interview, go check the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 133 interviews . People and Blogs is possible because kind people support it. melanie-richards.com/blog, simply the blog that lives at my main website. 
I post here about the web, design, development, accessibility, product management, etc. One practice I’ve been keeping for a few years now is my monthly Learning Log. These posts are a compendium of what I’ve been shipping or making, what I’ve been learning, side quests, neat links around the internet, and articles I’ve been reading. When I’m in a particularly busy period (as was the case in 2025; my first child was born in September), this series is my most consistent blogging practice. making.melanie-richards.com: this is the blog where I post about my aforementioned analog projects. Quite a lot of sewing over the past year! Mandy Brown, Oliver Burkeman (technically a newsletter with a “view on web” equivalent), and Ethan Marcotte’s writing have been helping to fill my spiritual cup over the last couple of years. Anh and Katherine Yang are doing neat things on their sites. What Claudia Wore is a nostalgic pick; I’d love to recreate some of these outfits sometime. Thank you Kim for keeping the blog up! Sarah Higley would be a great next interview. She blogs less frequently, but always with great depth and thoughtfulness on web accessibility. Web developers can learn quite a lot about more involved controls and interactions from Sarah.

Simon Willison 3 weeks ago

Thoughts on OpenAI acquiring Astral and uv/ruff/ty

The big news this morning: Astral to join OpenAI (on the Astral blog) and OpenAI to acquire Astral (the OpenAI announcement). Astral are the company behind uv, ruff, and ty - three increasingly load-bearing open source projects in the Python ecosystem. I have thoughts! The Astral team will become part of the Codex team at OpenAI. Charlie Marsh has this to say: Open source is at the heart of that impact and the heart of that story; it sits at the center of everything we do. In line with our philosophy and OpenAI's own announcement, OpenAI will continue supporting our open source tools after the deal closes. We'll keep building in the open, alongside our community -- and for the broader Python ecosystem -- just as we have from the start. [...] After joining the Codex team, we'll continue building our open source tools, explore ways they can work more seamlessly with Codex, and expand our reach to think more broadly about the future of software development. OpenAI's message has a slightly different focus (highlights mine): As part of our developer-first philosophy, after closing OpenAI plans to support Astral’s open source products. By bringing Astral’s tooling and engineering expertise to OpenAI, we will accelerate our work on Codex and expand what AI can do across the software development lifecycle. This is a slightly confusing message. The Codex CLI is a Rust application, and Astral have some of the best Rust engineers in the industry - BurntSushi alone (Rust regex, ripgrep, jiff) may be worth the price of acquisition! So is this about the talent or about the product? I expect both, but I know from past experience that a product+talent acquisition can turn into a talent-only acquisition later on. Of Astral's projects the most impactful by far is uv. If you're not familiar with it, uv is by far the most convincing solution to Python's environment management problems, best illustrated by this classic XKCD: switch from pip to uv and most of these problems go away. 
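For anyone who hasn't made that switch yet, the day-to-day surface area is small. The commands below are uv's own; the script name and package are just placeholders:

```shell
uv venv                   # create a .venv for the current project
uv pip install requests   # pip-compatible installs, dramatically faster
uv run script.py          # run a script inside the project environment
uvx ruff check .          # fetch and run a tool (here ruff) on demand
```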
I've been using it extensively for the past couple of years and it's become an essential part of my workflow. I'm not alone in this. According to PyPI Stats uv was downloaded more than 126 million times last month! Since its release in February 2024 - just two years ago - it's become one of the most popular tools for running Python code. Astral's two other big projects are ruff - a Python linter and formatter - and ty - a fast Python type checker. These are popular tools that provide a great developer experience but they aren't load-bearing in the same way that uv is. They do however resonate well with coding agent tools like Codex - giving an agent access to fast linting and type checking tools can help improve the quality of the code they generate. I'm not convinced that integrating them into the coding agent itself as opposed to telling it when to run them will make a meaningful difference, but I may just not be imaginative enough here. Ever since uv started to gain traction the Python community has been worrying about the strategic risk of a single VC-backed company owning a key piece of Python infrastructure. I wrote about one of those conversations in detail back in September 2024. The conversation back then focused on what Astral's business plan could be, which started to take form in August 2025 when they announced pyx, their private PyPI-style package registry for organizations. I'm less convinced that pyx makes sense within OpenAI, and it's notably absent from both the Astral and OpenAI announcement posts. An interesting aspect of this deal is how it might impact the competition between Anthropic and OpenAI. Both companies spent most of 2025 focused on improving the coding ability of their models, resulting in the November 2025 inflection point when coding agents went from often-useful to almost-indispensable tools for software development. The competition between Anthropic's Claude Code and OpenAI's Codex is fierce. 
Those $200/month subscriptions add up to billions of dollars a year in revenue, for companies that very much need that money. Anthropic acquired the Bun JavaScript runtime in December 2025, an acquisition that looks somewhat similar in shape to Astral. Bun was already a core component of Claude Code and that acquisition looked to mainly be about ensuring that a crucial dependency stayed actively maintained. Claude Code's performance has increased significantly since then thanks to the efforts of Bun's Jarred Sumner. One bad version of this deal would be if OpenAI start using their ownership of uv as leverage in their competition with Anthropic. One detail that caught my eye from Astral's announcement, in the section thanking the team, investors, and community: Second, to our investors, especially Casey Aylward from Accel, who led our Seed and Series A, and Jennifer Li from Andreessen Horowitz, who led our Series B. As a first-time, technical, solo founder, you showed far more belief in me than I ever showed in myself, and I will never forget that. As far as I can tell neither the Series A nor the Series B were previously announced - I've only been able to find coverage of the original seed round from April 2023. Those investors presumably now get to exchange their stake in Astral for a piece of OpenAI. I wonder how much influence they had on Astral's decision to sell. Armin Ronacher built Rye, which was later taken over by Astral and effectively merged with uv. In August 2024 he wrote about the risk involved in a VC-backed company owning a key piece of open source infrastructure and said the following (highlight mine): However having seen the code and what uv is doing, even in the worst possible future this is a very forkable and maintainable thing. I believe that even in case Astral shuts down or were to do something incredibly dodgy licensing wise, the community would be better off than before uv existed. 
Astral's own Douglas Creager emphasized this angle on Hacker News today: All I can say is that right now, we're committed to maintaining our open-source tools with the same level of effort, care, and attention to detail as before. That does not change with this acquisition. No one can guarantee how motives, incentives, and decisions might change years down the line. But that's why we bake optionality into it with the tools being permissively licensed. That makes the worst-case scenarios have the shape of "fork and move on", and not "software disappears forever". I like and trust the Astral team and I'm optimistic that their projects will be well-maintained in their new home. OpenAI don't yet have much of a track record with respect to acquiring and maintaining open source projects. They've been on a bit of an acquisition spree over the past three months though, snapping up Promptfoo and OpenClaw (sort-of, they hired creator Peter Steinberger and are spinning OpenClaw off to a foundation), plus closed source LaTeX platform Crixet (now Prism). If things do go south for uv and the other Astral projects we'll get to see how credible the forking exit strategy turns out to be. You are only seeing the long-form articles from my blog. Subscribe to /atom/everything/ to get all of my posts, or take a look at my other subscription options.

David Bushell 4 weeks ago

SMTP on the edge

Disclaimer: this post includes my worst idea yet! Until now my contact form submissions were posted to a Cloudflare worker. The worker encrypted the details with PGP encryption. It then used the Amazon AWS “Simple Email Service” API to send an email to myself. PGP encryption meant that any middleman after the worker, like Amazon, could not snoop. (TLS only encrypts in transit.) The setup was okay but involved too many services. If you thought that was over-engineered, get a load of my next idea. My experiment with a self-hosted SMTP server was short-lived but I did learn to code the SMTP protocol with server-side JavaScript. During that tinkering I had issues upgrading TLS on the SMTP server for receiving email. In my recent AT Protocol PDS adventure I learned that Proton Mail can generate restricted tokens for SMTP client auth. I’ve also been slowly migrating from Cloudflare to Bunny in my spare time. I was reminded that Bunny has Deno edge workers. Lightbulb moment: can I rawdog SMTP in a Bunny worker? This cuts out the AWS middleman. Neither Bunny nor Proton ever see the unencrypted data. True end-to-end encryption for my contact form! I threw together a proof-of-concept. My script opened a TCP connection to Proton and sent the SMTP message. The connection was upgraded to TLS to secure it. It then followed a very fragile sequence of SMTP messages to authenticate and send an email. If the unexpected happened it bailed immediately. Surprisingly this worked! I’m not sharing code because I don’t want to be responsible for any misuse. There is nothing in Bunny’s Terms of Service or Acceptable Use policy that explicitly prohibits sending email. Magic Containers do block ports but edge scripting doesn’t. I asked Bunny support who replied: While Edge Scripting doesn’t expose the same explicit port limitation table as Magic Containers, it’s not intended to be used as a general-purpose SMTP client or email relay. 
Outbound traffic is still subject to internal network controls, abuse prevention systems, and our Acceptable Use Policy. Even if SMTP connections may technically work in some cases, sending email directly from Edge Scripts (especially at scale) can trigger automated abuse protections. We actively monitor for spam and unsolicited email patterns, and this type of usage can be restricted without a specific “port block” being publicly documented. If you need to send transactional emails from your application, we strongly recommend using a dedicated email service provider (via API) rather than direct SMTP from Edge Scripting. — bunny.net support

…that isn’t an outright “no”, but it’s obviously a bad idea. To avoid risking an account ban I decided to use the Bunny edge worker to forward the encrypted data to a self-hosted API. That service handles the SMTP. In theory I could decrypt and log locally, but I’d prefer to let Proton Mail manage security. I’m more likely to check my email inbox than a custom GUI anyway.

The OpenPGP JavaScript module is a big boy at 388 KB (minified) and 144 KB (compressed). I load this very lazily, after an event on my contact form. Last year, in a final attempt to save my contact form, I added a Cloudflare CAPTCHA to thwart bots. I’ve removed that now because I believe there is sufficient obfuscation and “proof-of-work” to deter bad guys.

Binning both Cloudflare and Amazon feels good. I deleted my entire AWS account. My new contact form seems to be working. Please let me know if you’ve tried to contact me in the last two weeks and it errored. If this setup fails, I really will remove the form forever!

Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

Image: PGP encryption in the browser → Bunny edge worker → SMTP directly to Proton
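For readers curious what such a "fragile sequence" looks like, here is a minimal illustration of the SMTP dialogue (this is not David's withheld code; host, credentials, and addresses are placeholders, and in a Deno worker the TLS upgrade itself would happen after the STARTTLS 220 reply, e.g. via Deno.startTls):

```javascript
// Each step waits for one expected SMTP reply code, then sends the next
// command; anything unexpected should abort the whole exchange.
function smtpDialogue({ host, user, pass, from, to, body }) {
  // AUTH PLAIN payload: NUL user NUL password, base64-encoded (RFC 4616).
  const auth = btoa(`\0${user}\0${pass}`);
  return [
    { expect: 220, send: `EHLO ${host}` },        // server greeting → hello
    { expect: 250, send: "STARTTLS" },            // ask to upgrade the socket
    { expect: 220, send: `EHLO ${host}` },        // re-greet over TLS
    { expect: 250, send: `AUTH PLAIN ${auth}` },
    { expect: 235, send: `MAIL FROM:<${from}>` }, // 235 = auth succeeded
    { expect: 250, send: `RCPT TO:<${to}>` },
    { expect: 250, send: "DATA" },
    { expect: 354, send: `${dotStuff(body)}\r\n.` }, // lone "." ends the message
    { expect: 250, send: "QUIT" },
  ];
}

// RFC 5321 "transparency": a body line starting with "." must be doubled,
// otherwise the server would read it as the end-of-message marker.
function dotStuff(text) {
  return text
    .split(/\r?\n/)
    .map((line) => (line.startsWith(".") ? "." + line : line))
    .join("\r\n");
}
```

The strict expect-then-send shape is what makes the approach brittle: any multiline reply, timeout, or out-of-order response has to be treated as fatal.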
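The forwarding setup he settled on can be sketched as a tiny request handler. This is an assumption-laden illustration, not the actual worker: UPSTREAM_URL is a hypothetical endpoint, and fetchImpl is injectable purely so the handler can be exercised without touching the network.

```javascript
// The edge worker only relays the already-encrypted payload to a
// self-hosted API that speaks SMTP, so the worker never sees plaintext.
const UPSTREAM_URL = "https://example.com/contact-relay"; // placeholder

async function handleContact(request, fetchImpl = fetch) {
  if (request.method !== "POST") {
    return new Response("method not allowed", { status: 405 });
  }
  const armored = await request.text();
  // Cheap sanity check: only forward things that look like a PGP message.
  if (!armored.startsWith("-----BEGIN PGP MESSAGE-----")) {
    return new Response("bad request", { status: 400 });
  }
  // Pass the ciphertext through untouched; decryption happens in the inbox.
  return fetchImpl(UPSTREAM_URL, { method: "POST", body: armored });
}
```

Because the worker handles only ciphertext, a compromise of Bunny (or of the worker itself) still exposes nothing readable.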

kytta 1 month ago

Teach humans to contribute, not machines

I love contributing to open-source projects. There is this insanely good feeling that I get when my changes get merged into the main branch. Dopamine goes through the roof when I see the number of “projects I’ve contributed to” go up, and the shade of blue on my contribution graph get that little bit darker. I wish I could do this my whole life.

Contributing to open source is not easy, though, especially to projects one hasn’t worked with yet. Of course, the biggest hurdle is a programming language one might not know. Then, it’s finding an issue to tackle. But these are the hurdles one should come to expect. You know what’s the hardest thing about contributing, once you’ve found a project and picked an issue to work on? It’s getting the damn thing to run.

When it comes to the ways a project should be run (let alone developed), one should cue xkcd #927. And the bigger the ecosystem, the worse it becomes. I can guarantee you that if you pick any two libraries written in the same language, they will have different commands to build them. Which is fine – different projects have different goals and different maintainers (with different opinions) – but discovering those commands is often outright impossible. Blessed be your soul if your README tells me exactly what to run! Or, even better, if you have a CONTRIBUTING.md outlining everything I need to know – from prerequisites to coding style to pull request guidelines. But that’s not always the case.

In the past, I had to do a lot of guesswork. Do some npm scripts have telling names? Maybe there’s a Justfile? In the end, I was either reading the CI workflow files, trying to understand what someone else’s computer executes to achieve the same goal as me, or I gave up and watched said CI server do it for me after I’d submitted a PR blindly.

But then, something changed. With each day, more and more people are writing good contributor guides!
All very well-structured and full to the brim with commands, style guides, and tips and tricks. The catch in that whole thing? The file is named differently. It’s no longer called CONTRIBUTING.md. It’s AGENTS.md.

That prick Claude and his dorky lil’ friends! We, the humans, have been demanding good documentation and help with contributing for ages, and they come in and get it served right to them! It’s a crazy feeling of both deep sorrow and weird joy that I get when yet another thing that’s helpful for your “agents” shows up – because it could be, and frequently is, very beneficial to us humans. Don’t like the CSS and JS of the project’s website? The docs are arranged weirdly, and you find yourself clicking around too often? llms.txt to the rescue! You don’t know if the project wants regular or conventional commits? Just look in the AGENTS.md! You need to do something with a PDF, but you don’t know how? Just look at how Claude would do it with its “skills”!

Finally, the thing that motivated me to write this post. Andrew Nesbitt, an awesome fella of ecosyste.ms fame, has just announced a new tool. The idea, and its execution, is insane (in a good way): just run the command, and you’ll get all the information on a project you need! Build, lint, and test commands, code formatting, supported OSes; it’s basically the solution to the problem I described in the first paragraphs. But wait – how should one use this tool?

Add this to your […] agent instructions file: Before starting work on this project, run [the tool] to understand the toolchain, test commands, linters, and project conventions. The agent will get back structured information […] so it doesn’t have to guess or ask you.

I wonder where this phenomenon is coming from. I guess that we, the programmers, who have made it our job to command a soulless machine, cannot get enough of it. As if we’re not thinking enough about human interaction, or at the very least are not getting enough fun from it.
Coding is fun; typing stuff and seeing the computer act (mostly) the way you want is fun. And it’s very easy to forget about the other developers and become one with the project. The perfect Makefile. The flawless CI pipeline. The impeccable AGENTS.md. But please, wake up from that dream. As good as it might feel (you don’t have to tell me!), you should still realize that you’re not alone. That somewhere out there, separated from you by thousands of kilometres of underwater cables, electromagnetic waves, and copper wires, there is another human, just like you. And that human does not want to deduce the build flags from reading your goreleaser.yaml.
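For contrast, the kind of human-facing guide the post is asking for can be very small. This is a generic sketch with hypothetical commands, not a guide for any particular project:

```markdown
# Contributing

## Prerequisites
- Node.js 20+ and npm

## Setup
    npm install

## Day-to-day commands
    npm run build   # compile the library
    npm test        # run the test suite
    npm run lint    # check code style

## Pull requests
- Use conventional commits (`feat:`, `fix:`, …)
- Run `npm test` and `npm run lint` before pushing
```

A file like this serves humans and agents alike; the point of the post is simply that it should not take an AGENTS.md for it to exist.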
