Posts in CSS (20 found)
Manuel Moreale 2 days ago

Karen

This week on the People and Blogs series we have an interview with Karen, whose blog can be found at chronosaur.us. Tired of RSS? Read this in your browser or sign up for the newsletter. The People and Blogs series is supported by Pete Millspaugh and the other 127 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month.

Hello! My name is Karen. I work in IT support for a large company’s legal department, and am currently working on my Bachelor’s in Cybersecurity and Information Assurance. I live near New Orleans, Louisiana, with my husband and two dogs: Daisy, a Pembroke Welsh Corgi, and Mia, a Chihuahua. Daisy is The Most Serious Corgi Ever (tm), and Mia has the personality of an old lady who chain smokes, plays Bingo every week at the rec center, and still records her soap operas on VHS daily. My husband is an avid maker (woodworking and 3D printing, mostly), video gamer, and has an extensive collection of board games that takes up the entire back wall of our living room. As for me, outside of work, I’m a huge camera nerd and photographer. I love film photography, and recently learned how to develop my own negatives at home! I also do digital: I will never turn my nose up at one versus the other. I’ve always been into assorted fandoms, and used to volunteer at local sci-fi/fantasy/comic conventions up until a few years ago. I got into K-Pop back in 2022, and am now an active participant in the local New Orleans fan community, providing Instax photo booth services for events. I’ve also hosted K-Pop events here in NOLA. My ult K-Pop group is ATEEZ, but I’m a proud multi fan and listen to whatever groups or music catch my attention, including Stray Kids, SHINee, and Mamamoo. I also love 80s and 90s alternative, mainly Depeche Mode, Nine Inch Nails, and Garbage. And yes, I may be named Karen, but I refuse to BE a “Karen”. I don’t get upset when people use the term; I find it hilarious.
So I have been blogging off and on since 2001 or so - back when they were still called “weblogs” and “online journals”. Originally, I was using LiveJournal, but even with a paid account, I wanted to learn more about customization and make a site that was truly my own. My husband - then boyfriend - had his own server, and gave me some space on it. I started out creating sites in Microsoft FrontPage and Dreamweaver (BEFORE Adobe owned them!), and moved to using the Greymatter blog software, which I loved and miss dearly. I moved to WordPress in - 2004 maybe? - and used that for all my personal sites until 2024. I’d been reading more and more about the IndieWeb for a while and found Bear, and I loved the simplicity. I’ve had sites ranging from a basic daily online journal, to a fashion blog, to a food blog, to a K-Pop and fandom-centric blog, to what it is today: my online space for everything and anything I like.

I taught myself HTML and CSS in order to customize and create my sites. No classes, no courses, no books, no certifications, just Google and looking at other people’s sites to see what I liked and how they did it. At my previous job before this one, I was a web administrator for a local marketing company that built sites using DNN and WordPress, and I’m proud to say that I got that job and my current one with my self-developed skills and a willingness to learn and grow. I would not be where I am today, professionally, if it wasn’t for blogging.

I’ll be totally honest - I don’t have a writing process. I get inspiration from random thoughts, seeing things online, wanting to share the day-to-day of my life. I don’t draft or have someone proofread, I just type out what I feel like writing. When I had blogs focusing on specific things - plus size fashion and K-Pop, respectively - I kept a list of topics and ideas to refer back to when I was stuck for ideas.
That was when I was really focused on playing the SEO and search engine algorithm game, though, when I was trying to stick to the “two-three posts a week” rule in an attempt to boost my search engine results. I don’t do that now. I do still have a list of ideas on my phone, but it’s nothing I feel FORCED to stick to. It’s more that I had an idea while I was out and wanted to note it so I don’t forget. Memory is a fickle thing in your late 40s, LOL.

My space absolutely influences my mindset for writing. I prefer to write in the early morning, because my brain operates best then. (I know I am an exception to the rule by being an early bird.) I love weekend mornings when I can get up really early and settle into my recliner with my laptop and coffee, listen to some lofi music, and just feel topics and ideas out. I also made my office/guest bedroom into a cozy little space, with a daybed full of soft blankets and fluffy pillows and cushions, and a lap desk. In all honesty, though, my preferred place to write is a coffeeshop first thing in the morning. I love sitting tucked in a booth with a coffee and muffin, headphones on and listening to music, when the sun is just on the cusp of rising and the shop is still a little too chilly. That’s when the creative ideas light up the brightest and the synapses are firing on all cylinders.

Currently, my site is hosted on Bear. I used to be a self-hosted WordPress devotee, but in mid-to-late 2024, I got really tired of how bloated the apps had become. To use it efficiently, I had to install entirely too many plugins to make it “simpler”. (Shout-out to the IndieWeb WordPress team, though - they work so hard on those plugins!) Of course, the more plugins you have, the less secure your site… My domain is registered through Hostinger. To write my posts, I use the Bear Markdown notes app. I heard about this program after seeing a few others talking about using it for drafts, notes, etc.
I honestly don’t think I’d change much! I really love using Bear Blog. It reminds me of the very old-school LiveJournal days, or when I used Greymatter. It takes me back to when the web was simpler, more straightforward, more fun. I also like Bear’s manifesto, and that he built the service for longevity. I would probably structure my site differently, especially after seeing some personal sites set up with more of a “digital garden” format. I will eventually adjust my site at some point, but for now, I’m fine with it. (That, and between school and work, it’s kind of low on the priority list.)

I purchased a lifetime subscription to Bear after a week of using it, which ran around $200 - I don’t remember exactly. I knew that I was going to be using the service for a while and thought I should invest in a place that I believed in. My Hostinger domain renewals run around $8.99 annually. My blog is just my personal site - I don’t generate any revenue or monetize in any way. I don’t mind when people monetize their site - it’s their site and they can do what they choose. As long as it’s not invading others’ privacy or harmful, I have absolutely no issue. Make that money however you like.

Ooooh, I have three really good suggestions for both checking out and interviewing! Binary Digit - B is kind of an influence for me to play with my site again. They have just this super cool early-2000s vibe and style that I really love. Their site reminds me of me when I first started blogging, when I was learning new things and implementing what I thought was cool on my site, joining fanlistings, making new online friends. Kevin Spencer - I love Kevin’s writing and especially his photography. Not only that, he has fantastic taste in music. I’ve left many a comment on his site about 80s and 90s synthpop and industrial music. A Parenthetical Departure - Sylvia was one of the first sites I started reading when I started looking up info on Bear Blog.
They are EXTREMELY talented and have an excellent knack for playing with design, and for showing others how it works. One of my side projects is Burn Like A Flame, which is my local K-pop and fandom photography site. I actually just started a project there that is more than slightly based on People and Blogs: The Fandom Story Project. I’m interviewing local fans to talk about what they love and how they feel about fandom culture now, and I’m accompanying that with a photoshoot with that person. It’s a way to introduce people to each other within the community. Two of my favorite YouTube channels that I have recently been watching are focused on fashion discussion and history: Bliss Foster and understitch. If you like learning and listening to information on fashion, I highly recommend these creators.

I know a TON of people have now seen K-Pop Demon Hunters (which I love, and the movie has a great message for not only kids, but adults). If you’ve seen it and are interested in getting into K-Pop, I suggest checking out my favorite group, ATEEZ. If you think that most K-Pop is all chirpy, bubbly, cutesy songs, let me suggest two by this group that aren’t what you’d expect: Guerrilla and Turbulence. I strongly suggest watching without the translations, and then watching again with them. Their lyrics are the thing that really drew me into this group, and had me learning more about the deeper meaning behind a lot of K-Pop songs.

And finally, THANK YOU to Manu for People and Blogs! I always find some really great new sites to check out after reading these interviews, and I am truly honored to be asked to join this list of great bloggers. It’s inspiring me to work harder on my blog and to post more often. Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 117 interviews.
Make sure to also say thank you to Benny and the other 127 supporters for making this series possible.

Josh Comeau 5 days ago

Brand New Layouts with CSS Subgrid

Subgrid allows us to extend a grid template down through the DOM tree, so that deeply-nested elements can participate in the same grid layout. At first glance, I thought this would be a helpful convenience, but it turns out that it’s so much more. Subgrid unlocks exciting new layout possibilities, stuff we couldn’t do until now. ✨
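As a minimal sketch of the idea (the class names here are illustrative, not from the article): a parent grid defines the column tracks, and a nested element adopts them with `grid-template-columns: subgrid`:

```css
/* Parent defines the column tracks */
.cards {
  display: grid;
  grid-template-columns: auto 1fr auto;
  gap: 16px;
}

/* Each card is nested one level down the DOM,
   yet its children align to the parent's columns */
.card {
  display: grid;
  grid-column: 1 / -1;            /* span all three tracks */
  grid-template-columns: subgrid; /* inherit them from .cards */
}
```

Because every `.card` shares the same inherited tracks, icons, titles, and buttons line up across sibling cards without hard-coded widths, even though they live deeper in the DOM tree.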

iDiallo 1 week ago

How Do You Send an Email?

It's been over a year since I last received a single notification email from my web server. That could mean that my $6 VPS is amazing and hasn't gone down once this past year. Or it could mean that my health check service has gone down. Well, this year I did receive emails, from readers, telling me my website was down. After doing some digging, I discovered that my health checker works just fine, but every email it sends is being rejected by Gmail. Unless you use a third-party service, you have little to no chance of sending an email that gets delivered.

Every year, email services seem to become a tad more expensive. When I first started this website, sending emails to my subscribers was free on Mailchimp. Now it costs $45 a month. On Buttondown, as of this writing, it costs $29 a month. What are they doing that costs so much? It seems like sending email is impossibly hard, something you can almost never do yourself. You have to rely on established services if you want any guarantee that your email will be delivered. But is it really that complicated?

Emails, just like websites, use a basic communication protocol to function. For you to land on this website, your browser communicated with my web server, did some negotiating, and then my server sent HTML data that your browser rendered on the page. But what about email? Is the process any different? The short answer is no. Email and the web work in remarkably similar fashion. Here's the short version: in order to send me an email, your email client takes the email address you provide, connects to my server, does some negotiating, and then my server accepts the email content you intended to send and saves it. My email client will then take that saved content and notify me that I have a new message from you. That's it. That's how email works. So what's the big fuss about? Why are email services charging $45 just to send ~1,500 emails?
Why is it so expensive, while I can serve millions of requests a day on my web server for a fraction of the cost? The short answer is spam. But before we get to spam, let's get into the details I've omitted from the examples above: the negotiations, and how similar email and web traffic really are.

When you type a URL into your browser and hit enter, here's what happens:

1. DNS Lookup: Your browser asks a DNS server, "What's the IP address for this domain?" The DNS server responds with an IP address.
2. Connection: Your browser establishes a TCP connection with that IP address on port 80 (HTTP) or port 443 (HTTPS).
3. Request: Your browser sends an HTTP request: "GET /blog-post HTTP/1.1"
4. Response: My web server processes the request and sends back the HTML, CSS, and JavaScript that make up the page.
5. Rendering: Your browser receives this data and renders it on your screen.

The entire exchange is direct, simple, and happens in milliseconds. Now let's look at email. The process is similar:

1. DNS Lookup: Your email client takes my email address and asks a DNS server, "What's the mail server for example.com?" The DNS server responds with an MX (Mail Exchange) record pointing to my mail server's address.
2. Connection: Your email client (or your email provider's server) establishes a TCP connection with my mail server on port 25 (SMTP) or port 587 (for authenticated SMTP).
3. Negotiation (SMTP): Your server says "HELO, I have a message for [email protected]." My server responds: "OK, send it."
4. Transfer: Your server sends the email content (headers, body, attachments) using the Simple Mail Transfer Protocol (SMTP).
5. Storage: My mail server accepts the message and stores it in my mailbox, which can be a simple text file on the server.
6. Retrieval: Later, when I open my email client, it connects to my server using IMAP (port 993) or POP3 (port 110) and asks, "Any new messages?" My server responds with your email, and my client displays it.

Both HTTP and email use DNS to find servers, establish TCP connections, exchange data using text-based protocols, and deliver content to the end user. They're built on the same fundamental internet technologies. So if email is just as simple as serving a website, why does it cost so much more? The answer lies in a problem that both systems share but handle very differently: unwanted third-party writes.

Both web servers and email servers allow outside parties to send them data. Web servers accept form submissions, comments, API requests, and user-generated content. Email servers accept messages from any other email server on the internet. In both cases, this openness creates an opportunity for abuse. Spam isn't unique to email; it's everywhere. My blog used to get around 6,000 spam comments on a daily basis. On the greater internet, you will see spam comments on blogs, spam account registrations, spam API calls, spam form submissions, and yes, spam emails.

The main difference is visibility. When spam protection works well, it's invisible. You visit websites every day without realizing that, behind the scenes, CAPTCHAs are blocking bot submissions, rate limiters are rejecting suspicious traffic, and content filters are catching spam comments before they're published. You don't get to see the thousands of spam attempts that happen every day on my blog, because of some filtering I've implemented. On a well-run web server, the work is invisible. The same is true for email. A well-run email server silently:

- Checks sender reputation against blacklists
- Validates SPF, DKIM, and DMARC records
- Scans message content for spam signatures
- Filters out malicious attachments
- Quarantines suspicious senders

There is a massive amount of spam. In fact, spam accounts for roughly 45-50% of all email traffic globally. But when the system works, you simply don't see it. If we can combat spam on the web without charging exorbitant fees, email spam shouldn't be that different. The technical challenges are very similar:

- Both require reputation systems
- Both need content filtering
- Both face distributed abuse
- Both require infrastructure to handle high volume

Yet a basic web server on a $5/month VPS can handle millions of requests with minimal spam-fighting overhead. Meanwhile, sending 1,500 emails costs $29-45 per month through commercial services. The difference isn't purely technical. It's about reputation, deliverability networks, and the ecosystem that has evolved around email. Email providers have created a cartel-like system where your ability to reach inboxes depends on your server's reputation, which is nearly impossible to establish as a newcomer. They've turned a technical problem (spam) into a business moat. And we're all paying for it.

Email isn't inherently more complex or expensive than web hosting. Both the protocols and the infrastructure are similar, and the spam problem exists in both domains. The cost difference is mostly artificial. It's the result of an ecosystem that has consolidated around a few major providers who control deliverability. It doesn't help that Intuit owns Mailchimp now. Understanding this doesn't necessarily change the fact that you'll probably still need to pay for email services if you want reliable delivery. But it should make you question whether that $45 monthly bill is really justified by the technical costs involved. Or whether it's just the price of admission to a gatekept system.
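The SMTP steps above can be sketched with Python's standard library. This builds the message that gets handed over during the Transfer step, with the actual network calls commented out since they need a real mail server; all example.com addresses are placeholders:

```python
from email.message import EmailMessage
# import smtplib  # uncomment to actually send

# Build the message: the headers and body that get sent
# during the SMTP "Transfer" step.
msg = EmailMessage()
msg["From"] = "healthcheck@example.com"
msg["To"] = "me@example.com"
msg["Subject"] = "Server is down!"
msg.set_content("The health check failed at 03:00 UTC.")

# The Connection, Negotiation, and Transfer steps in three lines:
# with smtplib.SMTP("mail.example.com", 587) as smtp:
#     smtp.starttls()          # upgrade to TLS on port 587
#     smtp.send_message(msg)   # HELO / MAIL FROM / RCPT TO / DATA happen here

print(msg["Subject"])
```

Note that sending is the easy part; without SPF, DKIM, and DMARC DNS records for the sending domain and an established sender reputation, Gmail will likely reject a message like this anyway, which is exactly the problem described above.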

Max Woolf 2 weeks ago

Nano Banana can be prompt engineered for extremely nuanced AI image generation

You may not have heard as much about new AI image generation models lately, but that doesn’t mean that innovation in the field has stagnated: quite the opposite. FLUX.1-dev immediately overshadowed the famous Stable Diffusion line of image generation models, while leading AI labs have released models such as Seedream, Ideogram, and Qwen-Image. Google also joined the action with Imagen 4. But all of those image models were vastly overshadowed by ChatGPT’s free image generation support, launched in March 2025. After going organically viral on social media, ChatGPT became the new benchmark for how most people perceive AI-generated images, for better or for worse. The model has its own image “style” for common use cases, which makes it easy to identify that ChatGPT made it. Two sample generations from ChatGPT. ChatGPT image generations often have a yellow hue in their images. Additionally, cartoons and text often have the same linework and typography. Of note, the underlying image generation model is autoregressive. While most image generation models are diffusion-based to reduce the amount of compute needed to train and generate from them, ChatGPT’s model works by generating tokens the same way ChatGPT generates the next text token, then decoding them into an image. It’s extremely slow, at about 30 seconds per image at the highest quality (the default in ChatGPT), but it’s hard for most people to argue with free. In August 2025, a mysterious new text-to-image model appeared on LMArena: a model code-named “nano-banana”. This model was eventually publicly released by Google as Gemini 2.5 Flash Image, an image generation model that works natively with their Gemini 2.5 Flash model. Unlike Imagen 4, it is indeed autoregressive, generating 1,290 tokens per image.
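Conceptually, an autoregressive image model produces those 1,290 tokens one at a time, each conditioned on the prompt plus every token generated so far, and a separate decoder then turns the token sequence into pixels. A toy sketch of that loop (all names are hypothetical; this is not any real model's API):

```python
class ToyModel:
    """Stand-in for a real autoregressive image model."""

    def sample_next(self, tokens):
        # A real model would run a transformer forward pass and
        # sample from the predicted token distribution; we fake it.
        return len(tokens) % 7


def generate_image_tokens(model, prompt_tokens, n_image_tokens=1290):
    """Sample image tokens one at a time, autoregressively."""
    context = list(prompt_tokens)
    image_tokens = []
    for _ in range(n_image_tokens):
        tok = model.sample_next(context)  # conditioned on everything so far
        context.append(tok)
        image_tokens.append(tok)
    return image_tokens  # a decoder (not shown) maps these to pixels


tokens = generate_image_tokens(ToyModel(), [101, 102], n_image_tokens=5)
print(tokens)
```

This strictly sequential loop is why autoregressive generation is slow compared to diffusion models, which refine the whole image in parallel over a fixed number of denoising steps.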
After Nano Banana’s popularity pushed the Gemini app to the top of the mobile app stores, Google eventually made Nano Banana the colloquial name for the model, as it’s definitely catchier than “Gemini 2.5 Flash Image”. The first screenshot on the iOS App Store for the Gemini app. Personally, I care little about which image generation AI the leaderboards say looks best. What I do care about is how well the AI adheres to the prompt I provide: if the model can’t follow the requirements I desire for the image (my requirements are often specific), then the model is a nonstarter for my use cases. At the least, if the model does have strong prompt adherence, any “looking bad” aspect can be fixed with prompt engineering and/or traditional image editing pipelines. After running Nano Banana through its paces with my comically complex prompts, I can confirm that, thanks to its robust text encoder, Nano Banana’s prompt adherence is so strong that Google has understated how well it works. Like ChatGPT, Google offers ways to generate images from Nano Banana for free. The most popular method is through Gemini itself, either on the web or in a mobile app, by selecting the “Create Image 🍌” tool. Alternatively, Google also offers free generation in Google AI Studio when Nano Banana is selected in the right sidebar, which also allows for setting generation parameters such as image aspect ratio and is therefore my recommendation. In both cases, the generated images have a visible watermark in the bottom-right corner of the image. For developers who want to build apps that programmatically generate images from Nano Banana, Google offers an endpoint on the Gemini API. Each generated image costs roughly $0.04 for a 1-megapixel image (e.g. 1024x1024 for a 1:1 square): on par with most modern popular diffusion models despite being autoregressive, and much cheaper than the $0.17/image of ChatGPT’s underlying model.
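As a sketch of what a raw call looks like, the request is just JSON POSTed to a `generateContent` endpoint. The endpoint shape and model name below follow Google's public Gemini API documentation, not this article, so treat them as assumptions; the API key and prompt are placeholders:

```python
# Hypothetical sketch of a raw Gemini API image-generation request.
API_KEY = "YOUR_GEMINI_API_KEY"  # placeholder
MODEL = "gemini-2.5-flash-image"  # assumed model identifier

url = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent"
)

# The prompt travels as plain text inside a "contents" list.
payload = {
    "contents": [
        {"parts": [{"text": "A skull-shaped pancake with blueberries."}]}
    ],
    "generationConfig": {"responseModalities": ["IMAGE"]},
}

# To actually send it (requires a valid key and network access):
# import json, urllib.request
# req = urllib.request.Request(
#     url,
#     data=json.dumps(payload).encode(),
#     headers={"x-goog-api-key": API_KEY, "Content-Type": "application/json"},
# )
# resp = json.loads(urllib.request.urlopen(req).read())
# The image comes back base64-encoded inside the response parts.

print(url)
```

The response's base64 decoding and file handling is exactly the boilerplate that wrapper libraries exist to hide.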
Working with the Gemini API directly is a pain and requires annoying image encoding/decoding boilerplate, so I wrote and open-sourced a Python package: gemimg, a lightweight wrapper around the Gemini API’s Nano Banana endpoint that lets you generate images with a simple prompt, in addition to handling cases such as image input along with text prompts. I chose to use the Gemini API directly, despite protests from my wallet, for three reasons: a) web UIs for LLMs often have system prompts that interfere with user inputs and can give inconsistent output, b) using the API will not put a visible watermark in the generated image, and c) I have some prompts in mind that are…inconvenient to put into a typical image generation UI. Let’s test Nano Banana out, but since we want to test prompt adherence specifically, we’ll start with more unusual prompts. My go-to test case is: I like this prompt because not only is it an absurd prompt that gives the image generation model room to be creative, but the AI model also has to handle the maple syrup and how it would logically drip down from the top of the skull pancake and adhere to the bony breakfast. The result: That is indeed in the shape of a skull and is indeed made out of pancake batter, blueberries are indeed present on top, and the maple syrup does indeed drip down from the top of the pancake while still adhering to its unusual shape, although some trails of syrup disappear and reappear. It’s one of the best results I’ve seen for this particular test, and one that doesn’t have obvious signs of “AI slop” aside from the ridiculous premise. Now we can try another of Nano Banana’s touted features: editing. Image editing, where the prompt targets specific areas of the image while leaving everything else as unchanged as possible, had been difficult with diffusion-based models until very recently with Flux Kontext.
Autoregressive models should in theory have an easier time doing so, as they have a better understanding of tweaking the specific tokens that correspond to areas of the image. While most image editing approaches encourage using a single edit command, I want to challenge Nano Banana. Therefore, I gave Nano Banana the generated skull pancake along with five edit commands simultaneously: All five of the edits are implemented correctly, with only the necessary aspects changed, such as removing the blueberries on top to make room for the mint garnish, and the pooling of the maple syrup on the new cookie-plate is adjusted. I’m legit impressed. Now we can test more difficult instances of prompt engineering. One of the most compelling-but-underdiscussed use cases of modern image generation models is the ability to put the subject of an input image into another scene. For open-weights image generation models, it’s possible to “train” the model to learn a specific subject or person, even one not notable enough to be in the original training dataset, using a technique such as finetuning the model with a LoRA on only a few sample images of your desired subject. Training a LoRA is not only very computationally intensive and expensive, but it also requires care and precision and is not guaranteed to work (speaking from experience). Meanwhile, if Nano Banana can achieve the same subject consistency without requiring a LoRA, that opens up many fun opportunities. Way back in 2022, I tested a technique that predated LoRAs, known as textual inversion, on the original Stable Diffusion in order to add a very important concept to the model: Ugly Sonic, from the initial trailer for the Sonic the Hedgehog movie back in 2019. One of the things I really wanted Ugly Sonic to do was shake hands with former U.S. President Barack Obama, but that didn’t quite work out as expected. 2022 was a now-unrecognizable time when absurd errors in AI were celebrated.
Can the real Ugly Sonic finally shake Obama’s hand? Of note, I chose this test case to assess image generation prompt adherence because image models may assume I’m prompting for the original Sonic the Hedgehog and ignore the aspects of Ugly Sonic that are distinct to him alone. Specifically, I’m looking for: I also confirmed that Ugly Sonic is not surfaced by Nano Banana; prompting as such just makes a Sonic that is ugly, purchasing a back-alley chili dog. I gave Gemini the two images of Ugly Sonic above (a close-up of his face and a full-body shot to establish relative proportions) and this prompt: That’s definitely Obama shaking hands with Ugly Sonic! That said, there are still issues: the color grading/background blur is too “aesthetic” and less photorealistic, Ugly Sonic has gloves, and Ugly Sonic is insufficiently lanky. Back in the days of Stable Diffusion, the use of prompt engineering buzzwords to generate “better” images in light of weak prompt text encoders was very controversial, because it was difficult, both subjectively and intuitively, to determine whether they actually generated better pictures. Obama shaking Ugly Sonic’s hand would be a historic event. What would happen if it were covered by The New York Times? I added to the previous prompt: So there’s a few notable things going on here: That said, I only wanted the image of Obama and Ugly Sonic, not the entire New York Times A1. Can I just append to the previous prompt and have that be enough to generate only the image while maintaining the compositional bonuses? I can! The gloves are gone and his chest is white, although Ugly Sonic looks out-of-place in the unintentional sense. As an experiment, instead of feeding only two images of Ugly Sonic, I fed Nano Banana all the images of Ugly Sonic I had (seventeen in total), along with the previous prompt. This is an improvement over the previous generated image: no eyebrows, white hands, and a genuinely uncanny vibe.
Again, there aren’t many obvious signs of AI generation here: Ugly Sonic clearly has five fingers! That’s enough Ugly Sonic for now, but let’s recall what we’ve observed so far. There are two noteworthy things in the prior two examples: the use of a Markdown dashed list to indicate rules when editing, and the fact that specifying a buzzword did indeed improve the composition of the output image. Many don’t know how image generation models actually encode text. In the case of the original Stable Diffusion, it used CLIP, whose text encoder, open-sourced by OpenAI in 2021, unexpectedly paved the way for modern AI image generation. It is extremely primitive by modern standards for transformer-based text encoding, and only has a context limit of 77 tokens: a couple of sentences, which is sufficient for the image captions it was trained on but not for nuanced input. Some modern image generators use T5, an even older experimental text encoder released by Google that supports 512 tokens. Although modern image models can compensate for the age of these text encoders through robust data annotation when training the underlying image models, the text encoders cannot compensate for highly nuanced text inputs that fall outside the domain of general image captions. A marquee feature of Gemini 2.5 Flash is its support for agentic coding pipelines; to accomplish this, the model must be trained on extensive amounts of Markdown (which defines code repository documentation and agentic behavior files) and JSON (which is used for structured output/function calling/MCP routing). Additionally, Gemini 2.5 Flash was also explicitly trained to understand objects within images, giving it the ability to create nuanced segmentation masks. Nano Banana’s multimodal encoder, as an extension of Gemini 2.5 Flash, should in theory be able to leverage these properties to handle prompts beyond typical image-caption-esque prompts.
That’s not to mention the vast annotated image training datasets Google owns as a byproduct of Google Images, which it likely trained Nano Banana on, and which should allow it to semantically differentiate between an image that matches a given buzzword and one that doesn’t. Let’s give Nano Banana a relatively large and complex prompt, drawing from the learnings above, and see how well it adheres to the nuanced rules it specifies: This prompt has everything: specific composition and descriptions of different entities, the use of hex colors instead of natural language colors, a heterochromia constraint which requires the model to deduce the colors of each corresponding kitten’s eye from earlier in the prompt, and a typo of “San Francisco” that is definitely intentional. Each and every rule specified is followed. For comparison, I gave the same command to ChatGPT (which in theory has similar text encoding advantages to Nano Banana) and the results are worse both compositionally and aesthetically, with more tells of AI generation. 1 The yellow hue certainly makes the quality differential more noticeable. Additionally, no negative space is utilized, and only the middle cat has heterochromia, but with the incorrect colors. Another thing about the text encoder is how the model generated unique, relevant text in the image without being given that text within the prompt itself: we should test this further. If the base text encoder is indeed trained for agentic purposes, it should at minimum be able to generate an image of code. Let’s say we want to generate an image of a minimal recursive Fibonacci sequence implementation in Python. I gave Nano Banana a prompt to that effect: it tried to generate the correct corresponding code, but the syntax highlighting/indentation didn’t quite work, so I’ll give it a pass. Nano Banana is definitely generating code, and was able to maintain the other compositional requirements.
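For reference, a minimal recursive Fibonacci implementation in Python (my reconstruction of the kind of snippet described above, not the article's exact code) looks like this:

```python
def fib(n: int) -> int:
    """Return the n-th Fibonacci number, recursively."""
    if n < 2:
        return n  # base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2)

print([fib(n) for n in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

This is the code the image model would have to reproduce, character for character and indent for indent, inside the generated picture.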
For posterity, I gave the same prompt to ChatGPT: It made a similar attempt at the code, which indicates that code generation is indeed a fun quirk of multimodal autoregressive models. I don’t think I need to comment on the quality difference between the two images. An alternative explanation for text-in-image generation in Nano Banana would be the presence of prompt augmentation or a prompt rewriter, both of which are used to orient a prompt toward generating more aligned images. Tampering with the user prompt is common with image generation APIs and isn’t an issue unless done poorly (which caused a PR debacle for Gemini last year), but it can be very annoying for testing. One way to verify whether it’s present is to use adversarial prompt injection to get the model to output the prompt itself: e.g., if the prompt is being rewritten, asking the model to generate the text “before” the prompt should get it to output the original prompt. That’s, uh, not the original prompt. Did I just leak Nano Banana’s system prompt completely by accident? The image is hard to read, but if it is the system prompt (the use of section headers implies it’s formatted in Markdown), then I can surgically extract parts of it to see just how the model ticks: These seem to track, but I want to learn more about those buzzwords in point #3: Huh, there’s a guard specifically against buzzwords? That seems unnecessary: my guess is that this rule is a hack intended to avoid the perception of model collapse by avoiding the generation of 2022-era AI images, which would have been annotated with those buzzwords. As an aside, you may have noticed the ALL CAPS text in this section, along with a command. There is a reason I have been sporadically capitalizing in previous prompts: caps do indeed ensure better adherence to the prompt (both for text and image generation), 2 and threats do tend to improve adherence.
Some have called it sociopathic, but this generation is proof that this brand of sociopathy is approved by Google’s top AI engineers.

Tangent aside, since “previous” text didn’t reveal the prompt, we should check the “current” text: That worked, with one peculiar problem: the text “image” is flat-out missing, which raises further questions. Is “image” parsed as a special token? Maybe prompting “generate an image” to a generative image AI is a mistake. I tried the last logical prompt in the sequence: …which always raises an error: not surprising if there is no text after the original prompt.

This section turned out unexpectedly long, but it’s enough to conclude that Nano Banana definitely shows indications of benefiting from being trained on more than just image captions. Some aspects of Nano Banana’s system prompt imply the presence of a prompt rewriter, but if there is indeed a rewriter, I am skeptical it is triggering in this scenario, which implies that Nano Banana’s text generation is indeed linked to its strong base text encoder.

But just how large and complex can we make these prompts and have Nano Banana adhere to them? Nano Banana supports a context window of 32,768 tokens: orders of magnitude above T5’s 512 tokens and CLIP’s 77 tokens. The intent of this large context window for Nano Banana is for multiturn conversations in Gemini where you can chat back-and-forth with the LLM on image edits. Given Nano Banana’s prompt adherence on small complex prompts, how well does the model handle larger-but-still-complex prompts? Can Nano Banana render a webpage accurately?

I used an LLM to generate a bespoke single-page HTML file representing a Counter app, available here . The web page uses only vanilla HTML, CSS, and JavaScript, meaning that Nano Banana would need to figure out how they all relate in order to render the web page correctly. For example, the web page uses CSS Flexbox to set the ratio of the sidebar to the body in a 1/3 and 2/3 ratio respectively.
Feeding this prompt to Nano Banana: That’s honestly better than expected, and the prompt cost 916 tokens. It got the overall layout and colors correct: the issues are more in the text typography, leaked classes/styles/JavaScript variables, and the sidebar:body ratio. No, there’s no practical use for having a generative AI render a webpage, but it’s a fun demo.

A similar approach that does have a practical use is providing structured, extremely granular descriptions of objects for Nano Banana to render. What if we provided Nano Banana a JSON description of a person with extremely specific details, such as hair volume, fingernail length, and calf size? As with prompt buzzwords, JSON prompting AI models is a very controversial topic since images are not typically captioned with JSON, but there’s only one way to find out.

I wrote a prompt augmentation pipeline of my own that takes in a user-input description of a quirky human character, e.g. , and outputs a very long and detailed JSON object representing that character with a strong emphasis on unique character design. 3 But generating a Mage is boring, so I asked my script to generate a male character that is an equal combination of a Paladin, a Pirate, and a Starbucks Barista: the resulting JSON is here . The prompt I gave to Nano Banana to generate a photorealistic character was: Beforehand, I admit I didn’t know what a Paladin/Pirate/Starbucks Barista would look like, but he is definitely a Paladin/Pirate/Starbucks Barista.

Let’s compare against the input JSON, taking elements from all areas of the JSON object (about 2,600 tokens total) to see how well Nano Banana parsed it: Checking the JSON field-by-field, the generation also fits most of the smaller details noted. However, he is not photorealistic, which is what I was going for.
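To make “extremely granular JSON descriptions” concrete, here is a small sketch of what such an object might contain. All field names and values below are invented for illustration; the author’s actual script uses its own (much longer) schema, linked above:

```json
{
  "identity": {
    "archetypes": ["Paladin", "Pirate", "Starbucks Barista"],
    "gender": "male"
  },
  "physical": {
    "hair_volume": "thick and windswept",
    "fingernail_length": "short and clean",
    "calf_size": "muscular from deck work"
  },
  "outfit": {
    "breastplate": "polished steel, reflective",
    "accessories": ["green barista apron", "cutlass", "holy sigil pendant"]
  }
}
```

The point of this structure is that every leaf value becomes an individually checkable constraint when comparing the generated image field-by-field against the input.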
One curious behavior I found is that any approach of generating an image of a high fantasy character in this manner has a very high probability of resulting in a digital illustration, even after changing the target publication and adding “do not generate a digital illustration” to the prompt. The solution requires a more clever approach to prompt engineering: add phrases and compositional constraints that imply a heavy physicality to the image, such that a digital illustration would have more difficulty satisfying all of the specified conditions than a photorealistic generation:

The image style is definitely closer to Vanity Fair (the photographer is reflected in his breastplate!), and most of the attributes in the previous illustration also apply—the hands/cutlass issue is also fixed. Several elements such as the shoulderplates are different, but not in a manner that contradicts the JSON field descriptions: perhaps that’s a sign that these JSON fields can be prompt engineered to be even more nuanced. Yes, prompting image generation models with HTML and JSON is silly, but “it’s not silly if it works” describes most of modern AI engineering.

Nano Banana allows for very strong generation control, but there are several issues. Let’s go back to the original example that made ChatGPT’s image generation go viral: . I ran that exact prompt through Nano Banana on a mirror selfie of myself: …I’m not giving Nano Banana a pass this time. Surprisingly, Nano Banana is terrible at style transfer even with prompt engineering shenanigans, which is not the case with any other modern image editing model. I suspect that the autoregressive properties that allow Nano Banana’s excellent text editing make it too resistant to changing styles. That said, creating a new image does in fact work as expected, and creating a new image using the character provided in the input image with the specified style (as opposed to a style transfer ) has occasional success.
Speaking of that, Nano Banana has essentially no restrictions on intellectual property, as the examples throughout this blog post have made evident. Not only will it not refuse to generate images from popular IP like ChatGPT now does, you can have many different IPs in a single image. Normally, Optimus Prime is the designated driver. I am not a lawyer, so I cannot litigate the legalities of training/generating IP in this manner or whether intentionally specifying an IP in a prompt but also stating “do not include any watermarks” is a legal issue: my only goal is to demonstrate what is currently possible with Nano Banana. I suspect that if precedent is set from existing IP lawsuits against OpenAI and Midjourney , Google will be in line to be sued.

Another note is moderation of generated images, particularly around NSFW content, which is always important to check if your application uses untrusted user input. As with most image generation APIs, moderation is done against both the text prompt and the raw generated image. That said, while running my standard test suite for new image generation models, I found that Nano Banana is surprisingly one of the more lenient AI APIs. With some deliberate prompts, I can confirm that it is possible to generate NSFW images through Nano Banana—obviously I cannot provide examples.

I’ve spent a very large amount of time overall with Nano Banana, and although it has a lot of promise, some may ask why I am writing about how to use it to create highly-specific, high-quality images during a time when generative AI has threatened creative jobs. The reason is that the information asymmetry between what generative image AI can and can’t do has only grown in recent months: many still think that ChatGPT is the only way to generate images and that all AI-generated images are wavy AI slop with a piss yellow filter. The only way to counter this perception is through evidence and reproducibility.
That is why not only am I releasing Jupyter Notebooks detailing the image generation pipeline for each image in this blog post, but why I also included the prompts in this blog post proper; I apologize that it padded the length of the post to 26 minutes, but it’s important to show that these image generations are as advertised and not the result of AI boosterism. You can copy these prompts and paste them into AI Studio and get similar results, or even hack and iterate on them to find new things. Most of the prompting techniques in this blog post are already well-known by AI engineers far more skilled than myself, and turning a blind eye won’t stop people from using generative image AI in this manner.

I didn’t go into this blog post expecting it to be a journey, but sometimes the unexpected journeys are the best journeys. There are many cool tricks with Nano Banana I cut from this blog post due to length, such as providing an image to specify character positions, and also investigations of styles such as pixel art that most image generation models struggle with but Nano Banana now nails. These prompt engineering shenanigans are only the tip of the iceberg. Jupyter Notebooks for the generations used in this post are split between the gemimg repository and a second testing repository .

I would have preferred to compare the generations directly from the endpoint for an apples-to-apples comparison, but OpenAI requires organization verification to access it, and I am not giving OpenAI my legal ID.  ↩︎

Note that ALL CAPS will not work with CLIP-based image generation models at a technical level, as CLIP’s text encoder is uncased.  ↩︎

Although normally I open-source every script I write for my blog posts, I cannot open-source the character generation script due to extensive testing showing it may lean too heavily into stereotypes.
Although adding guardrails successfully reduces the presence of said stereotypes and makes the output more interesting, there may be unexpected negative externalities if open-sourced.  ↩︎

(Image annotations from figures earlier in the post:)
- A lanky build, as opposed to the real Sonic’s chubby build.
- A white chest, as opposed to the real Sonic’s beige chest.
- Blue arms with white hands, as opposed to the real Sonic’s beige arms with white gloves.
- Small pasted-on-his-head eyes with no eyebrows, as opposed to the real Sonic’s large recessed eyes and eyebrows.
- That is the most cleanly-rendered New York Times logo I’ve ever seen. It’s safe to say that Nano Banana trained on the New York Times in some form.
- Nano Banana is still as bad at rendering text perfectly/without typos as most image generation models. However, the expanded text is peculiar: it does follow from the prompt, although “Blue Blur” is a nickname for the normal Sonic the Hedgehog. How does an image generating model generate logical text unprompted anyways?
- Ugly Sonic is even more like normal Sonic in this iteration: I suspect the “Blue Blur” may have anchored the autoregressive generation to be more Sonic-like. The image itself does appear to be more professional, and notably has the distinct composition of a photo from a professional news photographer: adherence to the “rule of thirds”, good use of negative space, and better color balance.
- , mostly check. (the hands are transposed and the cutlass disappears)

Den Odell 3 weeks ago

Escape Velocity: Break Free from Framework Gravity

Frameworks were supposed to free us from the messy parts of the web. For a while they did, until their gravity started drawing everything else into orbit. Every framework brought with it real progress. React, Vue, Angular, Svelte, and others all gave structure, composability, and predictability to frontend work.

But now, after a decade of React dominance, something else has happened. We haven’t just built apps with React, we’ve built an entire ecosystem around it—hiring pipelines, design systems, even companies—all bound to its way of thinking. The problem isn’t React itself, nor any other framework for that matter. The problem is the inertia that sets in once any framework becomes infrastructure. By that point, it’s “too important to fail,” and everything nearby turns out to be just fragile enough to prove it.

React is no longer just a library. It’s a full ecosystem that defines how frontend developers are allowed to think. Its success has created its own kind of gravity, and the more we’ve built within it, the harder it’s become to break free. Teams standardize on it because it’s safe: it’s been proven to work at massive scale, the talent pool is large, and the tooling is mature. That’s a rational choice, but it also means React exerts institutional gravity. Moving off it stops being an engineering decision and becomes an organizational risk instead. Solutions to problems tend to be found within its orbit, because stepping outside it feels like drifting into deep space.

We saw this cycle with jQuery in the past, and we’re seeing it again now with React. We’ll see it with whatever comes next. Success breeds standardization, standardization breeds inertia, and inertia convinces us that progress can wait. It’s the pattern itself that’s the problem, not any single framework. But right now, React sits at the center of this dynamic, and the stakes are far higher than they ever were with jQuery.
Entire product lines, architectural decisions, and career paths now depend on React-shaped assumptions. We’ve even started defining developers by their framework: many job listings ask for “React developers” instead of frontend engineers. Even AI coding agents default to React when asked to start a new frontend project, unless deliberately steered elsewhere. Perhaps the only thing harder than building on a framework is admitting you might need to build without one.

React’s evolution captures this tension perfectly. Recent milestones include the creation of the React Foundation , the React Compiler reaching v1.0 , and new additions in React 19.2 such as the `<Activity />` component and Fragment Refs. These updates represent tangible improvements. Especially the compiler, which brings automatic memoization at build time, eliminating the need for manual `useMemo` and `useCallback` optimization. Production deployments show real performance wins using it: apps in the Meta Quest Store saw up to 2.5x faster interactions as a direct result. This kind of automatic optimization is genuinely valuable work that pushes the entire ecosystem forward.

But here’s the thing: the web platform has been quietly heading in the same direction for years, building many of the same capabilities frameworks have been racing to add. Browsers now ship View Transitions, Container Queries, and smarter scheduling primitives. The platform keeps evolving at a fair pace, but most teams won’t touch these capabilities until React officially wraps them in a hook or they show up in Next.js docs. Innovation keeps happening right across the ecosystem, but for many it only becomes “real” once React validates the approach. Which is fine, assuming you enjoy waiting for permission to use the platform you’re already building on.

The React Foundation represents an important milestone for governance and sustainability.
This new foundation is a part of the Linux Foundation, and founding members include Meta, Vercel, Microsoft, Amazon, Expo, Callstack, and Software Mansion. This is genuinely good for React’s long-term health, providing better governance and removing the risk of being owned by a single company. It ensures React can outlive any one organization’s priorities.

But it doesn’t fundamentally change the development dynamic of the framework. Yet. The engineers who actually build React still work at companies like Meta and Vercel. The research still happens at that scale, driven by those performance needs. The roadmap still reflects the priorities of the companies that fund full-time development.

And to be fair, React operates at a scale most frameworks will never encounter. Meta serves billions of users through frontends that run on constrained mobile devices around the world, so it needs performance at a level that justifies dedicated research teams. The innovations they produce, including compiler-driven optimization, concurrent rendering, and increasingly fine-grained performance tooling, solve real problems that exist only at that kind of massive scale. But those priorities aren’t necessarily your priorities, and that’s the tension. React’s innovations are shaped by the problems faced by companies running apps at billions-of-users scale, not necessarily the problems faced by teams building for thousands or millions.

React’s internal research reveals the team’s awareness of current architectural limitations. Experimental projects like Forest explore signal-like lazy computation graphs; essentially fine-grained reactivity instead of React’s coarse re-render model. Another project, Fir , investigates incremental rendering techniques. These aren’t roadmap items; they’re just research prototypes happening inside Meta. They may never ship publicly.
But they do reveal something important: React’s team knows the virtual DOM model has performance ceilings, and they’re actively exploring what comes after it. This is good research, but it also illustrates the same dynamic at play again: that these explorations happen behind the walls of Big Tech, on timelines set by corporate priorities and resource availability. Meanwhile, frameworks like Solid and Qwik have been shipping production-ready fine-grained reactivity for years. Svelte 5 shipped runes in 2024, bringing signals to mainstream adoption. The gap isn’t technical capability, but rather when the industry feels permission to adopt it. For many teams, that permission only comes once React validates the approach. This is true regardless of who governs the project or what else exists in the ecosystem.

I don’t want this critique to take away from what React has achieved over the past twelve years. React popularized declarative UIs and made component-based architecture mainstream, which was a huge deal in itself. It proved that developer experience matters as much as runtime performance and introduced the idea that UI could be a pure function of input props and state. That shift made complex interfaces far easier to reason about. Later additions like hooks solved the earlier class component mess elegantly, and concurrent rendering opened new possibilities for truly responsive UIs. The React team’s research into compiler optimization, server components, and fine-grained rendering pushes the entire ecosystem forward. This is true even when other frameworks ship similar ideas first. There’s value in seeing how these patterns work at Meta’s scale.

The critique isn’t that React is bad, but that treating any single framework as infrastructure creates blind spots in how we think and build.
When React becomes the lens through which we see the web, we stop noticing what the platform itself can already do, and we stop reaching for native solutions because we’re waiting for the framework-approved version to show up first. And crucially, switching to Solid, Svelte, or Vue wouldn’t eliminate this dynamic; it would only shift its center of gravity. Every framework creates its own orbit of tools, patterns, and dependencies. The goal isn’t to find the “right” framework, but to build applications resilient enough to survive migration to any framework, including those that haven’t been invented yet.

This inertia isn’t about laziness; it’s about logistics. Switching stacks is expensive and disruptive. Retraining developers, rebuilding component libraries, and retooling CI pipelines all take time and money, and the payoff is rarely immediate. It’s high risk, high cost, and hard to justify, so most companies stay put, and honestly, who can blame them?

But while we stay put, the platform keeps moving. The browser can stream and hydrate progressively, animate transitions natively, and coordinate rendering work without a framework. Yet most development teams won’t touch those capabilities until they’re built in or officially blessed by the ecosystem. That isn’t an engineering limitation; it’s a cultural one. We’ve somehow made “works in all browsers” feel riskier than “works in our framework.” Better governance doesn’t solve this. The problem isn’t React’s organizational structure; it’s our relationship to it. Too many teams wait for React to package and approve platform capabilities before adopting them, even when those same features already exist in browsers today.

React 19.2’s `<Activity />` component captures this pattern perfectly. It serves as a boundary that hides UI while preserving component state and unmounting effects. When set to `mode="hidden"`, it pauses subscriptions, timers, and network requests while keeping form inputs and scroll positions intact.
When revealed again by setting `mode="visible"`, those effects remount cleanly. It’s a genuinely useful feature. Tabbed interfaces, modals, and progressive rendering all benefit from it, and the same idea extends to cases where you want to pre-render content in the background or preserve state as users navigate between views. It integrates smoothly with React’s lifecycle and `<Suspense>` boundaries, enabling selective hydration and smarter rendering strategies.

But it also draws an important line between formalization and innovation . The core concept isn’t new; it’s simply about pausing side effects while maintaining state. Similar behavior can already be built with visibility observers, effect cleanup, and careful state management patterns. The web platform even provides the primitives for it through tools like , DOM state preservation, and manual effect control. What React adds is the formalization. Yet it also exposes how dependent our thinking has become on frameworks. We wait for React to formalize platform behaviors instead of reaching for them directly. This isn’t a criticism of `<Activity />` itself; it’s a well-designed API that solves a real problem. But it serves as a reminder that we’ve grown comfortable waiting for framework solutions to problems the platform already lets us solve.

After orbiting React for so long, we’ve forgotten what it feels like to build without its pull. The answer isn’t necessarily to abandon your framework, but to remember that it runs inside the web, not the other way around. I’ve written before about building the web in islands as one way to rediscover platform capabilities we already have. Even within React’s constraints, you can still think platform first: These aren’t anti-React practices, they’re portable practices that make your web app more resilient. They let you adopt new browser capabilities as soon as they ship, not months later when they’re wrapped in a hook. They make framework migration feasible rather than catastrophic.
When you build this way, React becomes a rendering library that happens to be excellent at its job, not the foundation everything else has to depend on. A React app that respects the platform can outlast React itself. When you treat React as an implementation detail instead of an identity, your architecture becomes portable. When you embrace progressive enhancement and web semantics, your ideas survive the next framework wave.

The recent wave of changes, including the React Foundation, React Compiler v1.0, the `<Activity />` component, and internal research into alternative architectures, all represent genuine progress. The React team is doing thoughtful work, but these updates also serve as reminders of how tightly the industry has become coupled to a single ecosystem’s timeline. That timeline is still dictated by the engineering priorities of large corporations, and that remains true regardless of who governs the project.

If your team’s evolution depends on a single framework’s roadmap, you are not steering your product; you are waiting for permission to move. That is true whether you are using React, Vue, Angular, or Svelte. The framework does not matter; the dependency does. It is ironic that we spent years escaping jQuery’s gravity, only to end up caught in another orbit. React was once the radical idea that changed how we build for the web. Every successful framework reaches this point eventually, when it shifts from innovation to institution, from tool to assumption. jQuery did it, React did it, and something else will do it next.

The React Foundation is a positive step for the project’s long-term sustainability, but the next real leap forward will not come from better governance. It will not come from React finally adopting signals either, and it will not come from any single framework “getting it right.” Progress will come from developers who remember that frameworks are implementation details, not identities. Build for the platform first. Choose frameworks second.
The web isn’t React’s, it isn’t Vue’s, and it isn’t Svelte’s. It belongs to no one. If we remember that, it will stay free to evolve at its own pace, drawing the best ideas from everywhere rather than from whichever framework happens to hold the cultural high ground. Frameworks are scaffolding, not the building. Escaping their gravity does not mean abandoning progress; it means finding enough momentum to keep moving. Reaching escape velocity, one project at a time.

- Use native forms and form submissions to a server, then enhance with client-side logic
- Prefer semantic HTML and ARIA before reaching for component libraries
- Try View Transitions directly with minimal React wrappers instead of waiting for an official API
- Use Web Components for self-contained widgets that could survive a framework migration
- Keep business logic framework-agnostic, plain TypeScript modules rather than hooks, and aim to keep your hooks short by pulling logic from outside React
- Profile performance using browser DevTools first and React DevTools second
- Try native CSS features like , , scroll snap , , and before adding JavaScript solutions
- Use , , and instead of framework-specific alternatives wherever possible
- Experiment with the History API ( , ) directly before reaching for React Router
- Structure code so routing, data fetching, and state management can be swapped out independently of React
- Test against real browser APIs and behaviors, not just framework abstractions
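As a sketch of the “framework-agnostic business logic” idea, here is a plain TypeScript module (all names are my own, not from the post) that holds state and notifies subscribers. React could consume it through `useSyncExternalStore`, but a Svelte, Vue, or vanilla DOM layer could subscribe to it unchanged:

```typescript
// counterStore.ts — no framework imports; pure, portable business logic.
type Listener = (count: number) => void;

export function createCounterStore(initial = 0) {
  let count = initial;
  const listeners = new Set<Listener>();

  return {
    // Read the current value (usable as getSnapshot for useSyncExternalStore).
    get: () => count,
    // Mutate state and notify every subscriber with the new value.
    increment() {
      count += 1;
      listeners.forEach((listener) => listener(count));
    },
    // Any UI layer can subscribe; returns an unsubscribe function.
    subscribe(listener: Listener) {
      listeners.add(listener);
      return () => {
        listeners.delete(listener);
      };
    },
  };
}
```

Swapping the rendering layer then only means rewriting the subscription glue, not the logic itself.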

Jim Nielsen 4 weeks ago

Browser APIs: The Web’s Free SaaS

Authentication on the web is a complicated problem. If you’re going to do it yourself, there’s a lot you have to take into consideration. But odds are, you’re building an app whose core offering has nothing to do with auth. You don’t care about auth. It’s an implementation detail. So rather than spend your precious time solving the problem of auth, you pay someone else to solve it. That’s the value of SaaS. What would be the point of paying for an authentication service, like WorkOS, then re-implementing auth on your own? They have dedicated teams working on that problem. It’s unlikely you’re going to do it better than them and still deliver on the product you’re building.

There’s a parallel here, I think, to building stuff in the browser. Browsers provide lots of features to help you deliver good websites fast to an incredibly broad and diverse audience. Browser makers have teams of people who, day in and day out, spend lots of time developing and optimizing their offerings. So if you leverage what they offer you, that gives you an advantage because you don’t have to build it yourself.

You could build it yourself. You could say “No thanks, I don’t want what you have. I’ll make my own.” But you don’t have to. And odds are, whatever you do build yourself is not likely to be as fast as the highly-optimized subsystems you can tie together in the browser .

And the best part? Unlike SaaS, you don’t have to pay for what the browser offers you. And because you’re not paying, it can’t be turned off if you stop paying. , for example, is a free API that’ll work forever. That’s a great deal. Are you taking advantage?

Reply via: Email · Mastodon · Bluesky

David Bushell 4 weeks ago

Better Alt Text

It’s been a rare week where I was able to (mostly) ignore client comms and do whatever I wanted! That means perusing my “todo” list, scoffing at past me for believing I’d ever do half of it, and plucking out a gem. One of those gems was a link to “Developing an alt text button for images on [James’ Coffee Blog]” . I like this feature. I want it on my blog!

My blog wraps images and videos in a element with an optional caption. Reduced markup example below. How to add visible alt text? I decided to use declarative popover . I used popover for my glossary web component but that implementation required JavaScript. This new feature can be done script-free! Below is an example of the end result. Click the “ALT” button to reveal the text popover (unless you’re in RSS land, in which case visit the example , and if you’re not in Chrome, see below).

To implement this I appended an extra and element with the declarative popover attributes after the image. I generate unique popover and anchor names in my build script. I can’t define them as inline custom properties because of my locked down content security policy . Instead I use the attribute function in CSS. Anchor positioning allows me to place these elements over the image. I could have used absolute positioning inside the if not for the caption extending the parent block. Sadly using means only one thing… My visible alt text feature is Chrome-only! I’ll pray for Interop 2026 salvation and call it progressive enhancement for now.

To position the popover I first tried but that sits the popover around/outside the image. Instead I need to sit inside/above the image. The allows that. The button is positioned in a similar way. Aside from being Chrome-only I think this is a cool feature. Last time I tried to use anchor positioning I almost cried in frustration… so this was a success! It will force me to write better alt text. How do I write alt text good? Advice is welcome.

Thanks for reading! Follow me on Mastodon and Bluesky .
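The general pattern described above — a declarative popover anchored over an image — can be sketched like this. The element names, IDs, anchor names, and positioning details are my assumptions for illustration, not the blog’s actual build output:

```html
<figure>
  <img src="daisy.jpg" alt="A corgi lying on a rug, looking serious">
  <!-- Declarative popover pair: no JavaScript required -->
  <button popovertarget="alt-1" class="alt-button">ALT</button>
  <div id="alt-1" popover>A corgi lying on a rug, looking serious</div>
  <figcaption>Daisy, The Most Serious Corgi</figcaption>
</figure>

<style>
  /* Name the image as the anchor (CSS anchor positioning, Chrome-only) */
  figure img { anchor-name: --img-1; }

  /* Pin the popover to the image's top-left, inside it rather than around it */
  #alt-1[popover] {
    position: fixed; /* popovers render in the top layer */
    position-anchor: --img-1;
    top: anchor(top);
    left: anchor(left);
    margin: 0;
  }

  /* Place the ALT button over the image's bottom-left corner */
  .alt-button {
    position: fixed;
    position-anchor: --img-1;
    bottom: anchor(bottom);
    left: anchor(left);
  }
</style>
```

Using `anchor()` insets rather than `position-area` is what lets the popover sit over the image instead of being flowed around its outside.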
Subscribe to my Blog and Notes or Combined feeds.

The Jolly Teapot 1 month ago

October 2025 blend of links

Some links don’t call for a full blog post, but sometimes I still want to share some of the good stuff I encounter on the web.

Why it is so hard to tax the super-rich ・Very interesting and informative video, to the point that I wish it were a full series. Who knew I would one day be so fascinated by the topic of… checks notes … economics?

jsfree.org ・Yes, a thousand times yes to this collection of sites that work without needing any JavaScript. I don’t know if it’s the season or what, but these days I’m blocking JS every chance I get. I even use DuckDuckGo again as a search engine because other search engines often require JavaScript to work.

Elon Musk’s Grokipedia contains copied Wikipedia pages ・Just to be safe, I’ve immediately added a redirection in StopTheMadness so that the grokipedia domain is replaced by wikipedia.org (even if Wikipedia has its problems, especially in French). Also, what’s up with this shitty name? Why not Grokpedia? I still wouldn’t care, but at least it wouldn’t sound as silly.

POP Phone ・I don’t know for whom yet, but I will definitely put one of these under the Christmas tree this winter. (Via Kottke)

PolyCapture ・The app nerd in me is looking at these screenshots like a kid looks at a miniature train. (Via Daring Fireball)

Bari Weiss And The Tyranny Of False Balance ・“You don’t need to close newspapers when you can convince editors that ‘balance’ means giving equal weight to demonstrable lies and documented facts.”

light-dark() ・Neat and elegant new CSS function that made me bring back the dark mode on this site, just to have an excuse to use it in the CSS.

Eunoia: Words that Don't Translate ・Another link to add to your bookmark folder named “conversation starters.” (Via Dense Discovery)

Why Taste Matters More ・“Taste gives you vision. It’s the lens through which you decide what matters, and just as importantly, what doesn’t. Without taste, design drifts into decoration or efficiency for efficiency’s sake. Devoid of feeling.”

Tiga – Bugatti ・I recently realised that this truly fantastic song is already more than 10 years old, and I still can’t wrap my head around that fact. The video, just like the song, hasn’t aged one bit; I had forgotten how creative and fun it is.

More “Blend of links” posts here Blend of links archive
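For anyone curious, here is a minimal sketch of what using light-dark() looks like (the colors and selectors are illustrative, not taken from the site's actual stylesheet):

```css
/* Opt the page into both color schemes so light-dark() can resolve. */
:root {
  color-scheme: light dark;
}

body {
  /* First value applies in light mode, second in dark mode. */
  color: light-dark(#222222, #eeeeee);
  background-color: light-dark(#ffffff, #111111);
}
```

With this in place, the browser picks the right value for the user's preference with no duplicated @media (prefers-color-scheme) blocks.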

0 views
Josh Comeau 1 month ago

Springs and Bounces in Native CSS

The “linear()” timing function is a game-changer; it allows us to model physics-based motion right in vanilla CSS! That said, there are some limitations and quirks to be aware of. I’ve been experimenting with this API for a while now, and in this post, I’ll share all of the tips and tricks I’ve learned for using it effectively. ✨
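As a rough sketch of the idea (the stop values below are eyeballed for illustration, not taken from the post), linear() lets you trace a bounce by sampling the motion curve at a series of points:

```css
/* Each entry is an output value, optionally pinned to an input
   percentage; the browser interpolates linearly between stops. */
.ball {
  animation: drop 1s linear(
    0, 0.06, 0.25, 0.56, 1 36%,   /* fall and first impact */
    0.69 45%, 0.61 54%, 1 72%,    /* first rebound */
    0.89 77%, 0.87 81%, 1 90%,    /* a smaller rebound */
    0.98 95%, 1                   /* settle */
  );
}

@keyframes drop {
  from { transform: translateY(-200px); }
  to   { transform: translateY(0); }
}
```

With enough stops, the output of any physics simulation can be baked into a timing function this way.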

0 views
Thomasorus 1 month ago

List of JavaScript frameworks

The way the web was initially envisioned was through separation of concerns: HTML is for structure, CSS for styles and JavaScript for interactivity. For a long time the server sent HTML to the browser/client through templates populated with data from the server. The page then downloaded CSS and JavaScript. Those two "attached" themselves to the structure and acted on it through HTML attributes, and could then change its looks, call for more data, create interactivity. Each time a visitor clicked a link, this whole process would start again, downloading the new page and its dependencies and rendering it in the browser. The categories of frameworks roughly break down like this:

- Using the history API and Ajax requests to fetch the HTML of the next page and replace the current body with it. Basically emulating the look and feel of single page applications in multi-page applications.
- Event handling/reactivity/DOM manipulation via HTML attributes. Development happens client side, without writing JavaScript.
- Static HTML gets updated via web sockets or Ajax calls on the fly with small snippets rendered on the server. Development happens server side, without writing JavaScript. Most of the time a plugin or feature of an existing server-side framework.
- A client-side, JavaScript component-based (mixing HTML, CSS and JavaScript in a single file) framework or library gets data through API calls (REST or GraphQL) and generates HTML blocks on the fly directly in the browser. Long initial load time, then fast page transitions, but a lot of features normally managed by the browser or the server need to be re-implemented. Either the framework or library is loaded alongside the SPA code, or it compiles to the SPA and disappears.
- A single page application library gets extended to render or generate static "dry" pages as HTML on the server to avoid the long initial loading time, which is detrimental to SEO. Often comes with opinionated choices like routing, file structure, compilation improvements.
- After the initial page load, the single page application code is loaded and attaches itself to the whole page to make it interactive, effectively downloading and rendering the website twice ("hydration").
- After the initial page load, the single page application code is loaded and attaches itself only to certain elements that need interactivity, partially avoiding the double download and rendering ("partial hydration", "islands architecture").
- A server-side component-based framework or library gets data through API calls (REST or GraphQL) and serves HTML that gets its interactivity without hydration, for example by loading the interactive code as an interaction happens.
- Using existing frontend and backend stacks in an opinionated way to offer a fullstack solution in full JavaScript.
- A client-side, component-based application (Vue, React, etc.) gets its state from pre-rendered JSON.

Examples: Stimulus JS, Livewire (PHP), Stimulus Reflex (Ruby), Phoenix Liveview (Elixir), Blazor (C#), Unicorn (Python), Angular with AOT, Next (React), Nuxt (Vue), Sveltekit (Svelte), Astro (React, Vue, Svelte, Solid, etc.), Solid Start (Solid)
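The original server-rendered flow described at the top, where templates are populated with data before any HTML reaches the client, can be sketched in a few lines (the function and data shapes here are made up for illustration):

```javascript
// A server-side template: data goes in, a finished HTML string
// comes out; CSS and JS then attach to this structure client-side.
function renderPage(posts) {
  const items = posts
    .map((p) => `<li><a href="${p.url}">${p.title}</a></li>`)
    .join("");
  return `<!doctype html><html><body><ul>${items}</ul></body></html>`;
}

// The server sends this string as the response body; clicking a
// link repeats the whole cycle with a fresh page.
const html = renderPage([{ url: "/a", title: "First post" }]);
```

Every framework category in the list above is, in one way or another, a different answer to where and when this rendering step happens.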

0 views
Thomasorus 1 month ago

Cross platform web app solutions

A discovery list of technical solutions to produce a desktop and/or mobile app using web technologies. Apparently (all this is quite new to me) most solutions embed NodeJS inside them, making executing JavaScript the easiest part of the problem. Real trouble comes when talking about the UI, since each OS has different ways of rendering UI. Several solutions exist to make the programs multiplatform. Those solutions package webkit (Chromium) and NodeJS inside the app and make it work as a fake app on the desktop. This works well but comes with a lot of bloat and heavy RAM consumption. Overall both are in the same family, but there are differences between NW.js and Electron . Since bringing Chromium makes the program very big, there are solutions to bridge between web apps and existing, lighter UI frameworks. Most of the time the framework is used to create a bridge between HTML/CSS and the existing framework's components, modules or UI API. Since all OSes can render a webview, it's possible to ask for one at the OS level by providing a bridge. The problem with this solution might be that if the OS has an outdated webview engine, modern HTML/CSS/JS features might not work? Except for Neutralino, most projects of this type tend to use webview , a C/C++/Go library. Several binding libraries for other languages already exist. Electron is made by GitHub. NW.js is made by Intel. NodeGui provides a bridge between HTML/CSS and Qt. Deno webview. Sciter is a binary that can be used to create web apps, but under the hood it's using a superset of JavaScript called TIScript. Sciter JS seems to be Sciter but with common JavaScript, using the QuickJS engine.

0 views
Jim Nielsen 1 month ago

Write Code That Runs in the Browser, or Write Code the Browser Runs

I’ve been thinking about a note from Alex Russell where he says: any time you're running JS on the main thread, you're at risk of being left behind by progress. The zen of web development is to spend a little time in your own code, and instead to glue the big C++/Rust subsystems together, then get out of the bloody way. In his thread on Bluesky, Alex continues : How do we do this? Using the declarative systems that connect to those big piles of C++/Rust: CSS for the compositor (including scrolling & animations), HTML parser to build DOM, and for various media, dishing off to the high-level systems in ways that don't call back into your JS. I keep thinking about this difference: There’s a big difference between A) making suggestions for the browser, and B) being its micromanager. Hence the title: you can write code that will run in the browser, or you can write code the browser runs for you. So what are the browser ‘subsystems’ I can glue together? What are some examples of things I can ask the browser to do rather than doing them myself? A few examples come to mind, a progression of handing more over to the browser: requestAnimationFrame → document.startViewTransition → @view-transition. The trick is to let go of your need for control. Say to yourself, “If I don’t micromanage the browser on this task and am willing to let go of control, in return it will choose how to do this itself with lower-level APIs that are more performant than anything I can write.” For example, here are some approaches to animating transitions on the web where each step moves more responsibility from your JavaScript code on the main thread to the browser’s rendering engine. It’s a scale from “I want the most control, and in exchange I’ll worry about performance” to “I don’t need control, and in exchange you’ll worry about performance.” I don’t know about you, but I’d much rather hand over performance, accessibility, localization, and a whole host of issues to the experts who build browsers.
Building on the web is a set of choices: Anytime you choose to do something yourself, you’re choosing to make a trade-off. Often that increase in control comes at the cost of a degradation in performance. Why do it yourself? Often it’s because you want a specific amount of control over the experience you’re creating. That may be perfectly ok! But it should be a deliberate choice, not because you didn’t consider (or know) that the browser offers you an alternative. Maybe it does! So instead of asking yourself, “How can I write code that does what I want?” consider asking yourself, “Can I write code that ties together things the browser already does to accomplish what I want (or close enough to it)?” Building this way will likely improve your performance dramatically — not to mention decrease your maintenance burden dramatically! Reply via: Email · Mastodon · Bluesky

The two framings side by side: “I need to write code that does X” vs. “I need to write code that calls a browser API to do X.” Some examples of the latter:

- View Transitions API (instead of JS DOM diffing and manual animation).
- CSS transitions and animations (GPU-accelerated) vs. manual JS updates.
- Smooth scrolling in CSS vs. JS scroll logic.
- CSS grid or flexbox vs. JS layout engines (e.g., Masonry clones).
- Native video and audio elements with native decoding and hardware acceleration vs. JS media players.
- Responsive image markup vs. manual image swapping logic in JS.
- Built-in form state and validation vs. JS-based state, tracking, and validation logic.
- Native elements which provide built-in keyboard and accessibility behavior vs. custom ARIA-heavy components.

And a rough scale for animating transitions, from “do it yourself” to “let the browser do it”:

- Manual JS animation: JS timers, DOM manipulation, browser repaints when it can. Dropped frames.
- requestAnimationFrame: syncs to the browser repaint cycle. Smooth, but you gotta handle a lot yourself (diffing, cleanup, etc.).
- View Transitions in JS: JS triggers, browser snapshots and animates. Native performance, but requires custom choreography on your part. Somewhere in between.
- View Transitions in CSS: declare what you expect broadly, then let the browser take over.
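As a hedged sketch of the fully declarative end of that spectrum, cross-document View Transitions can be enabled with a couple of CSS rules (the class name here is illustrative):

```css
/* Opt both the outgoing and incoming page into a transition;
   the browser snapshots and animates the navigation itself. */
@view-transition {
  navigation: auto;
}

/* Optionally tag an element so it morphs between pages
   instead of cross-fading with the rest. */
.site-header {
  view-transition-name: site-header;
}
```

No JavaScript runs on the main thread for any of this; the compositor does the work.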

0 views

When it comes to MCPs, everything we know about API design is wrong

TL;DR: I built a lightweight Chrome MCP. Scroll to the end to learn how to install it. Read the whole post to learn a little bit about the Zen of MCP design. Claude Code has built-in tools to fetch web pages and to search the web – they actually run through Anthropic's servers, if I recall correctly. They do clever things to carefully manage context and to return information in a format that's easy for Claude to digest. These tools work really well. Right up to the point where they completely fall apart. An uncoached testimonial from the only customer who matters. Last week, I somehow got it into my head that I should update my custom blogging client to use Apple's new Liquid Glass look and feel. The first issue I ran into was that Claude was absolutely sure that macOS 26 wasn't out yet. (Amusingly, when asked to review a draft of this post, one of the things it flagged was: ' Inconsistent model naming - You refer to "macOS 26" but I believe you mean "macOS 15" (Sequoia). macOS 26 would be way in the future.') Claude was, however, happy to speculate about what a "Liquid Glass" UI might look like. Once I reminded the model that it had memory issues and Apple had indeed released the new version of their operating system, it was ready to get to work. I told it to go read Apple's Human Interface Guidelines and make a plan. This is what Claude saw: It turns out that Apple no longer offers a downloadable version of the HIG. And the online version requires JavaScript. After a bit of flailing, Claude reached for the industry-standard Playwright MCP from Microsoft. The Playwright MCP is a collection of 21 tools covering all aspects of driving a browser and debugging webapps. Just having the Playwright MCP available costs 13,678 tokens (7% of the whole context window) in every single session, even if you never use it. (Yes, the Google Chrome team has their own Chrome MCP. Its API surface is even bigger.) And once you do start using it, things get worse.
Some of its tools return the entire DOM of the webpage you're working with. This means that simple requests fail because they return more tokens than Claude can handle in a response. It's frustrating to see a coding agent trying over and over to use a tool the way it's supposed to and having that tool just fail to return useful data. After hearing me complain about this a few times, Dan Grigsby commented that he'd had success just asking Claude to teach itself a skill: Using the raw Dev Tools remote control protocol to drive Chrome. This seemed like a neat trick, so I asked my Claude to take a swing at it. Claude was only too happy to try to speak raw JSON-RPC to Chrome on port 9292. It...just worked. But it was also very clunky and wasteful feeling. Claude was writing raw JSON-RPC command lines for each and every interaction. It was very verbose and required the LLM to get a whole bunch of details right on every single command invocation. It was time to make a proper Skill. After thinking about it for a moment, I asked Claude to write a little zero-dependency command-line tool that it could run with the Bash tool to control Chrome, as well as a new file explaining how to use that script. The script encapsulated the complexity and made Chrome easily scriptable from the command line. The skill sets up the basics of web browsing with its tools and uses progressive disclosure to tell Claude how to get more information, but only when it has a need to know. For example, these examples of how to use the tool. Claude didn't always reach for the skill, so it wasn't aware of its new command-line tool, but once I pointed it in the right direction, it worked surprisingly well. This setup was incredibly token efficient – nothing in the context window at startup other than a skill in the system prompt. What was a little frustrating for me was that any time Claude wanted to do anything with the browser, it had to run a custom Bash command that I had to approve. Every click.
Every navigation. Every javascript expression. It got old really, really fast. There's no real way to fix that without creating a custom MCP. But that would put us right back where we were with the official Playwright MCP, right? Nearly two dozen tools and 13k tokens spilled on the floor every time we started a session. Even trimming things down to only the dozen most important commands is still a bunch of tools, most of which Claude won't use in a given session. If you've ever done API design, you probably know how important it is to name your methods well. You know that every method should do one thing and only one thing. You know that you really need to type (and validate) all your parameters to make sure your callers can tell what they're supposed to be passing in and to make bad method calls fail as soon as possible. It would be absolutely unhinged to have a single method whose main parameter was itself a method dispatcher, alongside a couple of loosely typed catch-all parameters. You'd have to be crazy to think that it's acceptable API design to have an optional, untyped field with nothing but a free-text description. And yet. That is exactly how I designed it. And it's just great. The high-level tool description does the heavy lifting. At session startup, the whole MCP config weighs in at just 947 tokens. I'm pretty sure I can shave at least 30-40 more. It's optimized to make Claude's life as easy as possible. Rather than having a method to start the browser, the MCP...just does it when it needs to. Same with opening a new tab if there wasn't one waiting. The tool description tells Claude what to do and where to read up when it needs more help. At least so far, it works just great for me. One of the mistakes I made while developing the MCP was to instruct Claude to cut down the API surface by only accepting CSS selectors, rather than accepting CSS or XPath. It seemed natural to me that a smaller, simpler API would be easier for Claude to work with and reason about.
Right up until I saw the MCP tool description containing multiple admonitions. The whole thing just...worked better when I let the selector fields accept either CSS or XPath. Another thing that Claude got not-quite-right when it first implemented the MCP was that it included detailed human-readable text for all the method parameters. Because LLMs that are using MCPs can see both the description and the actual JSON schema, you don't need to repeat things like lists of values for an enum or type validations. One trick you can use is to ask your agent to tell you exactly what it can see about how to use an API. One of the weirdest realizations I had while building it is this: I have no doubt that there are a dozen similar tools out there, but it was literally faster and easier to build the tool that I thought should exist than to test out a dozen tools to see if any of them work the way I think they should. Over the last couple of decades, the common wisdom has become that Postel's Law (aka the robustness principle) is dated and wrong and that APIs should be rigid and rigorous. That's the wrong choice when you're designing for use by LLMs. This might be a hard lesson to hear, but tools you build for LLMs are going to work much, much better if you think of your end-user as a "person" rather than a computer. Build your tools like they're a set of scripts you're handing to that undertrained kid who just got hired in the NOC. They are going to page you at 2AM when they can't figure out what's going on or when they misuse the tools in a way they can't unwind. Names and method descriptions matter far more than they ever have before. Automatic recovery is hugely important. Designing for error recovery rather than failing fast will make the whole system more reliable and less expensive to operate. When errors are unavoidable, your error messages should tell the user how to fix or work around the problem in plain English.
If you can't give the user exactly what they asked for, but you can give them a partial answer or related information, do that. Claude absolutely does not care about the architectural purity of your API. It just wants to help you get work done with the limited resources at its disposal. This new MCP and skill for Claude Code is called superpowers-chrome . You can install it like this: if you're already using Superpowers , you can just type /plugin, navigate to 'Install plugins', pick 'superpowers-marketplace' and then you should see it. I'd love to hear from you if you find it helpful. I'd also love patches and pull requests.
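The "one loosely typed dispatcher" shape the post describes can be sketched like this (all names are hypothetical; this is not the actual superpowers-chrome API):

```javascript
// One tool instead of two dozen: a command parameter that is itself
// a method dispatcher, with an untyped catch-all argument object.
const handlers = {
  navigate: ({ url }) => `navigated to ${url}`,
  click: ({ selector }) => `clicked ${selector}`,
};

function chromeTool(command, args = {}) {
  const handler = handlers[command];
  if (!handler) {
    // Recover instead of failing fast: tell the caller how to fix it.
    return `Unknown command "${command}". Try one of: ` +
      Object.keys(handlers).join(", ") + ".";
  }
  return handler(args);
}
```

The error message doubles as documentation, which is the Postel-style leniency the post argues works better for LLM callers.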

0 views
Manuel Moreale 1 month ago

Alice

This week on the People and Blogs series we have an interview with Alice, whose blog can be found at thewallflowerdigest.co.uk . Tired of RSS? Read this in your browser or sign up for the newsletter . The People and Blogs series is supported by Winnie Lim and the other 122 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. I'm Alice, I'm currently 37, I'm from the East Midlands in the UK, and have lived in the region all my life. I live with my husband (whom I married in June) and our two cats. They are the best cats. At university, I studied English Literature because I never had any idea what I wanted to do for a career! I really enjoyed my time at university. Looking back, it was such a luxury to have all the time dedicated to reading books and thinking deeply about them (even if I was always too shy to contribute much in seminars!). I can't say an English Lit. degree has ever been beneficial in a practical sense, but I'm happy that I've started to dust off some of the cobwebs on it with my book blog! My work and my blog are separate, but I think the fact that it exists at all is a result of the way my career went, or rather didn't go! I got a Master's degree in Information and Library Management, but failed to ever get a proper professional job. Plan A was University Librarian, but I didn't get the graduate trainee placement I needed, and, with that, I was forever locked out of university libraries. I never saw a job posting that didn't require "at least 5 years of experience in an equivalent role", and social anxiety hampered the development of networking skills. I was a library assistant at a university for a while, where a good portion of my colleagues were in the same boat as me! Plan B was a School Librarian, purely because it was the only job I got offered. 
I was ill-suited to it, never really enjoyed it, and the school I was in had little interest in supporting the Library or developing a reading culture. I did that for about 4 years, the whole time trying to come up with an alternative plan. Eventually, Plan C presented itself, and I ended up in a little niche of library management systems, where I worked on data migrations for special libraries, and eventually moved into archives and museums. This is a job that really suits me. It turns out my true love all along was actually databases, information retrieval and the challenge of solving all the puzzles that involved! If I could go back in time to 18-21 years old, knowing myself as I do now, I would make different decisions! But, for now, I am happy where I ended up, and I'm still making a little contribution to the cultural sector! The Wallflower Digest was born in 2022 because my previous job had stopped offering me stimulating challenges, and I was feeling overlooked, bored and trapped by a lack of opportunities! My self-esteem was taking a real hit, and I just needed something to give me a goal and focus. I have had blogs in the past when bored at work! My library assistant job in my twenties was in a tiny, quiet campus library that involved some lone working evening shifts where there would be nothing to do but sit on the enquiries desk for hours! That was how my first blog started; it was mostly a TV blog called Between Screens. That was hosted by WordPress.com (I did have custom domains, though!) and is now long deleted. I used to write recaps and reviews of my favourite TV shows, movies and video games. This was mostly Made in Chelsea, Game of Thrones, Veronica Mars and Mass Effect ! I also played around briefly with a fiction blog and a sewing blog, but those were short-lived. This time around, I wanted my blog to be somewhere to exercise my writing skills and have a chance to play around with CSS and maybe other website bits if I wanted.
When I picked the name, I wasn't sure what the blog was going to be, but I think I managed to nail it. I wanted something that felt like me. I've always been very shy, but I was painfully so as a child, and someone (probably a teacher) referred to me as a 'wallflower', and that term got stuck in my young brain. I don't know if the meaning will translate for those who aren't native English speakers, so as a definition, a "wallflower" can mean someone with an introverted personality type (or social anxiety) who will usually distance themselves from the crowd and actively avoid being in the limelight. Plus, I like flowers! I recently planted some wallflowers (Erysimum) in my garden! And then a ‘digest’ is a compilation or summary of information, and my blog is a mess of different topics. I share as I digest the things I read, learn and experience in my life. When I got started, I spun my wheels for a bit in the mud of terrible advice for new bloggers. You know, this strange idea that a blog has to make money, and therefore has to solve problems for an audience! This is why some of my oldest posts have a recognisable Content formatting of SEO friendly headings and keywords! But I eventually realised the fun and mental stimulation I needed came from just doing whatever I wanted, and that an Audience wasn't important to me (actually, I fear that!)! And, more importantly, the blogs I was finding that I enjoyed the most were messy little personal blogs where people shared snippets of their lives. These days, I remind myself that I can do what I want. I see it now as a loosely defined project to help me distil the things that resonate, and help me to understand myself a little better. I share whatever I want to, which currently is book reviews, updates on my life, occasionally progress with my garden (though I've been too busy this year!), and my embroidery or other craft projects. 
Lately - trying to be less of a wallflower - I've been taking part in more blogging community linkups and tag memes, which have been a lot of fun, both for answering prompts and for "blog hopping" and seeing who else is out there! I'm hoping to branch out from the book-based ones to other topics and blog hop beyond the borders of the book community, or the more tech-focused folks I found on Mastodon. I am toying with the idea of creating my own if I can't find an existing one that feels right! Life has been very busy recently, so the blog has really been ticking over on book reviews and joining in with the book blogger community's Top Ten Tuesday weekly link-up (currently hosted by ArtsyReaderGirl ). It's hard to find the time for more "creative" posts at the moment, but I do try to put real effort into my TTT and try to find something to say about the books I choose to list. Sometimes I get struck by inspiration - usually a topic that keeps recurring in my life somehow - and I'll start a draft, or just jot some thoughts into a note and eventually find the time to work it into something that makes sense! That is the biggest challenge when I work 40 hours a week and have to do all the other responsibilities of life, relationships and health things that come with being an adult. I mean, I've been trying to find the time and mental bandwidth to write a full review with my analysis of the book Rouge by Mona Awad since January (I loved it, and I'm still thinking about it)! But it's still in drafts, and I think I need to read it a third time now. It's like a running joke that I'll forever talk about it and never get it posted! I also post life updates semi-regularly. Those posts are just a catch-up on whatever is going on - how my walking/move more challenges are going, TV or movies, anything else I feel like! I love to read that kind of 'slice of life' content from other people. Now and again, I'll share something about my social anxiety struggles.
I'm always battling this, and I find writing out my experiences and feelings helps to work it out of my system. As for the process, my drafts usually get entered straight into the Jetpack app on my phone. I use Obsidian as my digital notes app for general thoughts and inspiration, and all my book reviews and ebook highlights get synced into there, too. What I've got going on with Obsidian is its own little project (essentially as my own personal database!). Most of the time, I post whenever I've finished writing because time is too short to proofread, and that's why my blog is full of typos and errors! I do re-read things later on and correct mistakes I spot, but that's as far as it goes! I also love to use Canva to create graphics. Every book review gets a little graphic with a summary; those originated in my short-lived attempt to get involved with Bookstagram, and I enjoyed making them so much that I've kept them for the blog. I am also a visual person, so it is important to me that I like the look of my website! I think my creativity relies more on my mental state than my physical space! Definitely, my menstrual cycle comes with days where I'm buzzing with ideas and writing is easier, and I wake up with ideas first thing in the morning before the responsibilities of the day have taken over. I do need quiet, though. I can't think with background chatter, I have no idea how people manage to work in noisy cafes! They make me instantly tired, and my brain shuts down. Writing is easiest when I am on my PC with a full keyboard and dual monitors, but because I work from home full-time at the same desk, I don't like to be pinned in the same spot in my evenings, shut away from my husband, so PC time only really happens on the weekend. More often, I write on my phone; I also have an iPad, but if I'm typing on mobile, I'm faster on my phone. I am hosted by Hostinger, which has been fine and easy to use for a non-techie like me.
My CMS is WordPress, it came installed and I find it familiar and easy to use with a big community. I find there is usually a plugin to solve most problems! I have no problem with the block editor, and I love that I can hook my blog up to the wider WordPress.com world to more easily connect with other bloggers. I use the Jetpack app for quick editing and posting, as well as my RSS feed, and to explore and discover new blogs through tags. I honestly think Jetpack gets underrated as a discovery tool! I don't think I would change anything about my blog. With hindsight, I do wish I'd wasted less time down the SEO rabbit hole and removed the pre-installed AISEO plugin earlier! I could also have figured out how I connect my blog to WordPress/Jetpack sooner to find other bloggers. I would not have made my thoughts on Atomic Habits so SEO friendly... it got caught in Google's net and now I regret how well it does in search results. There is a crowd of James Clear fans who get upset when you don't praise it as the life-changing work of a genius they hold it up to be. Every few months, I get something that makes me consider turning off the comments. I got a New Year deal with Hostinger for 4 years of hosting at a ridiculous discount, so I paid that all upfront, and I think it worked out about £3 a month. I'm going to have to work out what to do when that's up for renewal! I think my domain is £8.99 a year. That is all the cost; I don't make any money from my blog, nor do I plan to. This is just a hobby, and hobbies (just like my embroidery and gardening) often cost money! Monetising would immediately make it stressful for me and take the fun out of it. I don't mind if other people want to monetise as long as it's not obnoxious. I don't like newsletters where they put some things behind a paywall but not everything, or they put half of it behind the paywall. Those are annoying when they come through my RSS feed, and usually I end up unsubscribing.
I've occasionally done a "buy me a coffee" kind of one-off donation to bloggers, or the pay-what-you-like subscription model, where I can just do a couple of quid a month to show support. Or if they're an artist and they have a shop, I buy something small if the postage to the UK is reasonable. My favourite blogs are the ones where I can feel the person writing it, and their personality and passions come through. I want to read human thoughts, not Content! I like details about people's lives with the things they love (books, TV shows, comics, flowers, whatever!), or might share that they're having a hard time with something and how they're coping. Michael at My Comic Relief writes wonderful, passionate and compassionate posts about his favourite TV shows, movies and comic books. When Doctor Who and The Acolyte were on, I was watching my RSS feed for these thoughts every week! I always find his perspective interesting and his enthusiasm infectious. Dragon Rambles is a mix of personal posts and book reviews written by Nic in New Zealand. I think she's been blogging for many years. I really love it when she shares new books she finds for her collection of retro science fiction and fantasy! I have no interest in ever reading any of them myself, but I love to read about them and her collection! I also enjoy reading Elizabeth Tai . She is based in Malaysia and was one of the first bloggers I found on Mastodon in my super early days, and it was through her that I learned popular Indie Web concepts like digital gardens and POSSE. I enjoy the fact that she writes about all kinds of things! I am actually surprised she's not been featured yet! I think Michael, Nic or Liz would be great to interview. Michael and Nic, I found in the land of WordPress, and may not even be aware of this project! My other 3 favourites you've already featured, but I'll mention them because I think they're great! Veronique has been a favourite for a long time! 
Her writing always feels intimate, and I love the little snippets she shares from her life, her artwork and her passion for zines. She also mentioned my blog in her interview, and I can't tell you how thrilled I was! I had to try to explain the whole thing to my husband, who does not read blogs! Winnie Lim is another long-time favourite of mine. Her blog is also very intimate and thoughtful, and I am always eager to read about her life and little adventures. And also Tracy Durnell's Mind Garden is like what I think I'd like my blog to be, if I had the time and inclination to properly organise myself! I know she's also had a P&B feature because that's how I found her. I love her weekly notes. I don't know why I enjoy reading what music she listened to and what meals she had that week, but I do! This one is a silly one, and maybe a bit of a blast from the past because I used to follow Cake Wrecks way back in the day (like 15 years ago!), and when I was collecting RSS feeds of blogs again a couple of years ago, I was so happy it was still around! Unlike Regretsy, RIP (and RIP to what Etsy used to be!). Anyway, there is something about badly decorated cakes that I find deeply hilarious (and bad art in general), and these collections of wonky cakes made by so-called professional bakers are a regular source of joy. I don't have anything in particular to share. I am just so excited to have been asked to take part! I hope everyone keeps on doing what they love and blogging about it in the way that they want! I am thankful to have found that the 'blogosphere' is still alive and well, and for me, it's such a peaceful refuge away from the overwhelming noise of social media. I am also hugely appreciative of projects like this that make it easier for bloggers to find each other, so thank you, Manu! Now that you're done reading the interview, go check the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 111 interviews .
Make sure to also say thank you to Annie Mueller and the other 122 supporters for making this series possible.


What Dynamic Typing Is For

Unplanned Obsolescence is a blog about writing maintainable, long-lasting software. It also frequently touts—or is, at the very least, not inherently hostile to—writing software in dynamically-typed programming languages. These two positions are somewhat at odds. Dynamically-typed languages encode less information. That’s a problem for the person reading the code and trying to figure out what it does. This is a simplified version of an authentication middleware that I include in most of my web services: it checks an HTTP request to see if it corresponds to a logged-in user’s session. Pretty straightforward stuff. The function gets a cookie from the HTTP request, checks the database to see if that token corresponds to a user, and then returns the user if it does. Line 2 fetches the cookie from the request, line 3 gets the user from the database, and the rest either returns the user or throws an error. There are, however, some problems with this. What happens if there’s no cookie included in the HTTP request? Will it return or an empty string? Will even exist if there are no cookies at all? There’s no way to know without looking at the implementation (or, less reliably, the documentation). That doesn’t mean there isn’t an answer! A request with no cookie will return . That results in a call, which returns (the function checks for that). is a falsy value in JavaScript, so the conditional evaluates to false and throws an . The code works and it’s very readable, but you have to do a fair amount of digging to ensure that it works reliably. That’s a cost that gets paid in the future, anytime the “missing token” code path needs to be understood or modified. That cost reduces the maintainability of the service. Unsurprisingly, the equivalent Rust code is much more explicit. In Rust, the tooling can answer a lot more questions for me. What type is ? A simple hover in any code editor with an LSP tells me, definitively, that it’s .
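For concreteness, here is a rough sketch of the kind of JavaScript middleware described above. This is not the post's actual code: the helper names (`getUserByToken`, `UnauthorizedError`) are illustrative stand-ins, and the database lookup is stubbed so the example runs.

```javascript
// Hypothetical sketch of the auth middleware described in the post.
class UnauthorizedError extends Error {}

// Stub database lookup: resolves to the user for a known token, null otherwise.
async function getUserByToken(token) {
  const sessions = { abc123: { id: 1, name: "alice" } };
  return sessions[token] ?? null;
}

async function authenticate(req) {
  const token = req.cookies["session-token"]; // undefined if the cookie is absent
  const user = await getUserByToken(token);   // null when no session matches
  if (!user) {
    throw new UnauthorizedError("no valid session");
  }
  return user;
}
```

Note how the missing-cookie path works only because `undefined` flows quietly into the lookup, comes back as `null`, and fails the falsy check; that implicit chain is exactly the digging the post is talking about.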
Because it’s Rust, you have to explicitly check if the token exists; ditto for whether the user exists. That’s better for the reader too: they don’t have to wonder whether certain edge cases are handled. Rust is not the only language with strict, static typing. At every place I’ve ever worked, the longest-running web services have all been written in Java. Java is not as good as Rust at forcing you to show your work and handle edge cases, but it’s much better than JavaScript. Putting aside the question of which one I prefer to write, if I find myself in charge of a production web service that someone else wrote, I would much prefer it to be in Java or Rust than JavaScript or Python. Conceding that, ceteris paribus, static typing is good for software maintainability, one of the reasons that I like dynamically-typed languages is that they encourage a style I find important for web services in particular: writing to the DSL. A DSL (domain-specific language) is a programming language that’s designed for a specific problem area. This is in contrast to what we typically call “general-purpose programming languages” (e.g. Java, JavaScript, Python, Rust), which can reasonably be applied to most programming tasks. Most web services have to contend with at least three DSLs: HTML, CSS, and SQL. A web service with a JavaScript backend has to interface with, at a minimum, four programming languages: one general-purpose and three DSLs. If you have the audacity to use something other than JavaScript on the server, then that number goes up to five, because you still need JavaScript to augment HTML. That’s a lot of languages! How are we supposed to find developers who can do all this stuff? The answer that a big chunk of the industry settled on is to build APIs so that the domains of the DSLs can be described in the general-purpose programming language. Instead of writing HTML… …you can write JSX, a JavaScript syntax extension that supports tags.
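As an illustration (not the post's original snippet), plain markup like `<ul><li>…</li></ul>` becomes, in JSX, a component whose tags can embed arbitrary JavaScript expressions; the component and prop names here are hypothetical:

```jsx
// Illustrative JSX: HTML-like tags with embedded JavaScript expressions.
function TodoList({ items }) {
  return (
    <ul>
      {items.map((item) => (
        <li key={item.id}>{item.title}</li>
      ))}
    </ul>
  );
}
```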
This has the important advantage of allowing you to include dynamic JavaScript expressions in your markup. And now we don’t have to kick out to another DSL to write web pages. Can we start abstracting away CSS too? Sure can! This example uses styled-components. This is a tactic I call “expanding the bounds” of the programming language. In an effort to reduce complexity, you try to make one language express everything about the project. In theory, this reduces the number of languages that one needs to learn to work on it. The problem is that it usually doesn’t work. Expressing DSLs in general-purpose programming syntax does not free you from having to understand the DSL—you can’t actually use styled-components without understanding CSS. So now a prospective developer has to understand both CSS and a new CSS syntax that only applies to the styled-components library. Not to mention, it is almost always a worse syntax. CSS is designed to make expressing declarative styles very easy, because that’s the only thing CSS has to do. Expressing this in JavaScript is naturally way clunkier. Plus, you’ve also tossed the web’s backwards compatibility guarantees. I picked styled-components because it’s very popular. If you built a website with styled-components in 2019, didn’t think about the styles for a couple years, and then tried to upgrade it in 2023, you would be two major versions behind. Good luck with the migration guide. CSS files, on the other hand, are evergreen. Of course, one of the reasons for introducing JSX or CSS-in-JS is that they add functionality, like dynamic population of values. That’s an important problem, but I prefer a different solution. Instead of expanding the bounds of the general-purpose language so that it can express everything, another strategy is to build strong and simple API boundaries between the DSLs. Some benefits of this approach are listed in the notes at the end of this post. The following example uses a JavaScript backend.
A lot of enthusiasm for htmx (the software library I co-maintain) is driven by communities like Django and Spring Boot developers, who are thrilled to no longer be bolting on a JavaScript frontend to their website; that’s a core value proposition for hypermedia-driven development. I happen to like JavaScript though, and sometimes write services in NodeJS, so, at least in theory, I could still use JSX if I wanted to. What I prefer, and what I encourage hypermedia-curious NodeJS developers to do, is use a template engine. This bit of production code I wrote for an events company uses Nunjucks, a template engine I once (fondly!) called “abandonware” on stage. Other libraries that support Jinja-like syntax are available in pretty much any programming language. This is just HTML with basic loops ( ) and data access ( ). I get very frustrated when something that is easy in HTML is hard to do because I’m using some wrapper with inferior semantics; with templates, I can dynamically build content for HTML without abstracting it away. Populating this template in JavaScript is so easy. You just give it a JavaScript object with an field. That’s not particularly special on its own—many languages support serialized key-value pairs. This strategy really shines when you start stringing it together with SQL. Let’s replace that database function call with an actual query, using an interface similar to . I know the above code is not everybody’s taste, but I think it’s marvelous. You get to write all parts of the application in the language best suited to each: HTML for the frontend and SQL for the queries. And if you need to do any additional logic between the database and the template, JavaScript is still right there. One result of this style is that it increases the percentage of your service that is specified declaratively. The database schema and query are declarative, as is the HTML template.
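The production snippet isn't reproduced here, but a Jinja/Nunjucks-style template with a basic loop and data access looks roughly like this (the `events` variable and its fields are assumed names, not from the post):

```html
<!-- Hypothetical Nunjucks/Jinja-style template -->
<ul>
  {% for event in events %}
    <li>{{ event.name }} ({{ event.date }})</li>
  {% endfor %}
</ul>
```

Rendering it from JavaScript is a single call in most engines; in Nunjucks that's roughly `nunjucks.render("events.html", { events })`.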
The only imperative code in the function is the glue that moves that query result into the template: two statements in total. Debugging is also dramatically easier. I typically do two quick things to narrow down the location of the bug; they’re listed in the notes at the end of this post. Those two steps are easy, can be done in production with no deployments, and provide excellent signal on the location of the error. Fundamentally, what’s happening here is a quick check at the two hard boundaries of the system: the one between the server and the client, and the one between the client and the database. Similar tools are available to you if you abstract over those layers, but they are less useful. Every web service has network requests that can be inspected, but putting most frontend logic in the template means that the HTTP response’s data (“does the date ever get sent to the frontend?”) and functionality (“does the date get displayed in the right HTML element?”) can be inspected in one place, with one keystroke. Every database can be queried, but using the database’s native query language in your server means you can validate both the stored data (“did the value get saved?”) and the query (“does the code ask for the right value?”) independent of the application. By pushing so much of the business logic outside the general-purpose programming language, you reduce the likelihood that a bug will exist in the place where it is hardest to track down—runtime server logic. You’d rather the bug be a malformed SQL query or HTML template, because those are easy to find and easy to fix. When combined with the router-driven style described in Building The Hundred-Year Web Service, you get simple and debuggable web systems. Each HTTP request is a relatively isolated function call: it takes some parameters, runs an SQL query, and returns some HTML.
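A minimal runnable sketch of that shape, with the database and template engine stubbed out: the `db.all` interface and `renderTemplate` helper are illustrative assumptions, not the post's code, standing in for a real SQLite driver and template engine.

```javascript
// Stub: pretend synchronous query interface (a real service would use a DB driver).
const db = {
  all: (sql) => [{ name: "Open Mic", date: "2024-05-01" }],
};

// Stub: trivial renderer standing in for a template engine.
function renderTemplate(name, data) {
  return `<ul>${data.events.map((e) => `<li>${e.name}</li>`).join("")}</ul>`;
}

// The route handler itself: one query, one render. Two imperative statements.
function getEvents(req) {
  const events = db.all("SELECT name, date FROM events ORDER BY date");
  return renderTemplate("events.html", { events });
}
```

The handler body is just the two statements described above: run the query, feed the rows to the template; everything else is declarative SQL and HTML.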
In essence, dynamically-typed languages help you write the least amount of server code possible, leaning heavily on the DSLs that define web programming while validating small amounts of server code via means other than static type checking. To finish, let’s take a look at the equivalent code in Rust, using rusqlite, minijinja, and a quasi-hypothetical server implementation: I am again obfuscating some implementation details (Are we storing human-readable dates in the database? What’s that universal result type?). The important part is that this blows. Most of the complexity comes from the need to tell Rust exactly how to unpack that SQL result into a typed data structure, and then into an HTML template. The struct is declared so that Rust knows to expect a for . The derive macros create a representation that minijinja knows how to serialize. It’s tedious. Worse, after all that work, the compiler still doesn’t do the most useful thing: check whether is the correct type for . If it turns out that can’t be represented as a (maybe it’s a blob), the query will compile correctly and then fail at runtime. From a safety standpoint, we’re not really in a much better spot than we were with JavaScript: we don’t know if it works until we run the code. Speaking of JavaScript, remember that code? That was great! Now we have no idea what any of these types are, but if we run the code and we see some output, it’s probably fine. By writing the JavaScript version, you are banking that you’ve made the code so highly auditable by hand that the compile-time checks become less necessary. In the long run, this is always a bad bet, but at least I’m not writing 150% more code for 10% more compile-time safety. The “expand the bounds” solution to this is to pull everything into the language’s type system: the database schema, the template engine, everything. Many have trod that path; I believe it leads to madness (and toolchain lock-in). Is there a better one? I believe there is.
The compiler should understand the DSLs I’m writing and automatically map them to types it understands. If it needs more information—like a database schema—to figure that out, that information can be provided. Queries correspond to columns with known types—the programming language can infer that is of type . HTML has context-dependent escaping rules—the programming language can validate that is being used in a valid element and escape it correctly. With this functionality in the compiler, if I make a database migration that would render my usage of a dependent variable in my HTML template invalid, the compiler will show an error. All without losing the advantages of writing the expressive, interoperable, and backwards-compatible DSLs that comprise web development. Dynamically-typed languages show us how easy web development can be when we ditch the unnecessary abstractions. Now we need tooling to make it just as easy in statically-typed languages too. Thanks to Meghan Denny for her feedback on a draft of this blog.

The benefits of strong API boundaries between DSLs, mentioned above:
- DSLs are better at expressing their domain, resulting in simpler code
- It aids debugging by segmenting bugs into natural categories
- The skills gained by writing DSLs are more transferable

The two debugging steps:
- CMD+U to View Source - If the missing data is in the HTML, it’s a frontend problem
- Run the query in the database - If the missing data is in the SQL, it’s a problem with the GET route

Language extensions that just translate the syntax are alright by me, like generating HTML with s-expressions, ocaml functions, or zig comptime functions. I tend to end up just using templates, but language-native HTML syntax can be done tastefully, and they are probably helpful on the road to achieving the DX I’m describing; I’ve never seen them done well for SQL. Sqlx and sqlc seem to have the right idea, but I haven’t used either because I prefer to stick to SQLite-specific libraries to avoid async database calls.
I don’t know as much about compilers as I’d like to, so I have no idea what kind of infrastructure would be required to make this work with existing languages in an extensible way. I assume it would be hard.

David Bushell 1 month ago

What is a Linux?

Do you build websites like me? Maybe you’re an Apple user, by fandom, or the fact that macOS is not Windows. You’ve probably heard about this Linux thing. But what is it? In this post I try to explain what is a Linux and how Linux does what it be. ⚠️ This will be a blog post where the minutest of details is well actually-ied by orange site dwelling vultures. I’ll do my best to remain factual. At a high level Linux is best described as an OS (operating system) like Windows or macOS. Where Linux differs is that its components are all open source. Open source refers to the source code. Linux code is freely available. “Free” can mean gratis ; without payment. But open source licenses like GPL and MIT explicitly allow the sale of software. “Free” can also mean libre ; unrestricted, allowing users to modify and redistribute the code. Linux software is typically both free and free. You may see acronyms like OSS (open source software), and FOSS/FLOSS (free/libre and open source software), emphasising a more liberal ideology. Some believe that non-free JavaScript is nothing short of malware forced upon users. Think about the sins you’ve committed with JavaScript and ask yourself: are they wrong? Linux and OSS is a wonderful can of worms with polarising opinions. We can break down Linux usage into three categories. Linux can be “headless” meaning there is no desktop GUI. Headless systems are operated via the command line and keyboard (except for the occasional web control panel). This is the backbone of the Internet. The vast majority of web servers are headless Linux. “Desktop Linux” refers to the less nerdy experience of using a GUI with a mouse. Linux has never done well in this category. Depending on whom you ask, Windows (with a capital W) dominates. Steam survey puts Windows at 95% for gaming. Other sources are more favourable towards macOS reporting upwards of 15%. Linux is niche for desktop. Some will claim success for Linux in the guise of Android OS . 
Although technically based on Linux, much of Android and Google’s success is antithetical to FOSS principles. SteamOS from Valve is a gaming Linux distro making moves in this category. Embedded systems are things like factory robots, orbital satellites, smart fridges, fast food kiosks, etc. There’s a good chance these devices run Linux. If it’s Windows you’ll know by the blue screen and horrendous input latency. That was four categories, sorry. Linux is not one operating system but many serving different requirements. If Bill Gates created Windows and Steve Jobs oversaw macOS, who’s the Linux mastermind? Linux is named after Linus Torvalds, who is still the lead developer of the Linux kernel. But there is no Microsoft or Apple of Linux. Due to its open source nature, Linux is more like a collection of interchangeable pieces working together. There is no default Linux install. You must choose a distribution like a starter Pokémon. Linux distros differ in their choice of core pieces, from the kernel build and package manager to the init system and desktop experience. The Linux kernel includes the low-level services common to all Linux systems. The kernel also has drivers for hardware and file systems. Each distro typically compiles its own kernel, which means hardware support can vary out of the box. It’s possible to recompile the kernel to include modules specific to your needs. Linux distros can exist for niche and specialised use cases. OpenWrt is a distro for network devices like wireless routers. DietPi is a lightweight choice for single board computers (a favourite of mine). Distros exist for seasoned nerds. Gentoo Linux is compiled from source with highly specific optimisation flags. NixOS provides an immutable base with declarative builds. If no distro meets your requirements, why not build Linux from scratch? You can find all sorts of weird and wonderful distros on Distro Watch. If you consult the distro timeline on Wikipedia you can see an extensive hierarchy. It’s overwhelming!
Know that most are hobbyist projects not maintained for long. They’re nothing more than pre-installed software, opinionated settings, and a wallpaper. Distros like Debian and Arch Linux offer a more generalised OS. They provide the base for most commonly used distros. RHEL (Red Hat Enterprise Linux) also exists for the corporate world. From Debian comes Ubuntu and Raspberry Pi OS . Ubuntu desktop is by far the most popular distro for day-to-day use. Ubuntu makes significant changes to Debian and provides its own downstream package repository. Where should you start? You’ll get some crazy bad answers. Just try Ubuntu. It has the “network effect” and you’re more likely to find support online. This advice is likely to elicit the most comments! Desktop Linux can look wildly different across distros. There is no universal desktop GUI like you’d find on Windows or macOS. KDE offers the classic Windows-like experience. Gnome is more akin to macOS. XFCE is a lightweight option. Hyprland strips back the GUI using a tiled window presentation. There is a shortage of design and accessibility expertise within FOSS. Linux can be ugly and inaccessible at times. If you like design perfection Linux can make your eye twitch. On the plus side, you’re not stuck with a vendor-locked experience. Desktop environments provide hundreds of dials to customise their appearance. Want a start menu? You can add one! Hate the dock? Remove it! Some parts of Linux are even styled and scripted with CSS and JavaScript. Distros come with a package manager (think NPM). This is the main source of system updates and software. On Debian-based systems you’ll find commands. Arch-based systems use . Distros may include a custom GUI and auto-update feature for those scared of the command line. Linux has multiple upstream package repositories. If you run on Debian you’ll get an old version (politely referred to as: “stable”). 
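As a concrete illustration of day-to-day package management: Debian-family systems use `apt`, Arch-family systems use `pacman`. The package name `htop` below is just an example, and the commands need root.

```shell
# Typical package-manager usage (illustrative; run with care):
#   Debian/Ubuntu:  sudo apt update && sudo apt install htop
#   Arch:           sudo pacman -Syu htop
#
# A tiny runnable check for which family this machine belongs to:
detect_family() {
  if command -v apt >/dev/null 2>&1; then
    echo "debian-like"
  elif command -v pacman >/dev/null 2>&1; then
    echo "arch-like"
  else
    echo "unknown"
  fi
}
detect_family
```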
In comparison, running on Arch gives you the cutting edge, likely compiled from GitHub last night. Remember that almost everything around Linux is open source. You’re free to compile and install software from anywhere. Software maintainers often provide an install script. See Node.js for example: To download and immediately execute a script from the Internet is insanely insecure! You’re supposed to vet the code first but nobody does. Every Linux system is different, so software support can be tricky. Containerised software has become a popular distribution method to solve compatibility issues. Flatpak is the leading choice and Flathub is a bountiful app store. AppImage is a similar project. Ubuntu is trying to make Snaps happen in this space. Hopefully I’ve explained what Linux is! But is it for you? Linux can be a great OS if you’re a web developer writing code. All the familiar tools should be available. If you like to tinker, Linux will be a never-ending source of weekend projects. Linux has unrivalled backwards compatibility and avoids the comparable bloat of Windows and macOS. Older hardware can feel surprisingly fresh under Linux. If you require access to proprietary design software like the Adobe suite, you’re out of luck. This is why I’m stuck on macOS for my day job. Clients love to deliver vendor lock-in with their designs. There are often 3rd-party workarounds for apps like Figma. Unofficial apps are always buggy and prone to breakage. Both the best and worst part about Linux is too much choice. Everything can be modified, replaced, improved, and broken. I’ll end before this turns into a book. Let me know if you found this informative! Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.

The core pieces that differ between distros:
- The Linux kernel
- A package manager
- A boot and init system
- Network utilities
- Desktop experience

Manuel Moreale 1 month ago

Linda Ma

This week on the People and Blogs series we have an interview with Linda Ma, whose blog can be found at midnightpond.com . Tired of RSS? Read this in your browser or sign up for the newsletter . The People and Blogs series is supported by Aleem Ali and the other 120 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. Hey, I’m Linda. I grew up in Budapest in a Chinese family of four, heavily influenced by the 2000s internet. I was very interested in leaving home and ended up in the United Kingdom—all over, but with the most time spent in Edinburgh, Scotland. I got into design, sociology, and working in tech and startups. Then, I had enough of being a designer, working in startups, and living in the UK, so I left. I moved to Berlin and started building a life that fits me more authentically. My interests change a lot, but the persistent ones have been: journaling with a fountain pen, being horizontal in nature, breathwork, and ambient music. I was struck by a sudden need to write in public last year. I’d been writing in private but never felt the need to put anything online because I have this thing about wanting to remain mysterious. At least, that’s the story I was telling myself. In hindsight, the 'sudden need' was more of a 'wanting to feel safe to be seen.' I also wanted to find more people who were like-minded. Not necessarily interested in the same things as me, but thinking in similar ways. Through writing, I discovered that articulating your internal world with clarity takes time and that I was contributing to my own problems because I wasn't good at expressing myself. I write about these kinds of realizations in my blog. It’s like turning blurriness and stories into clarity and facts. I also do the opposite sometimes, where I reframe experiences and feelings into semi-fictional stories as a way to release them. 
I enjoy playing in this space between self-understanding through reality and self-soothing through fantasy. I also just enjoy the process of writing and the feeling of hammering on the keyboard. I wanted the blog to be formless and open-ended, so it didn’t have a name to begin with, and it was hanging out on my personal website. The name just kinda happened. I like the sound of the word “pond” and the feeling I get when I think of a pond. Then I thought: if I were a pond, what kind of pond would I be? A midnight pond. It reflects me, my writing, and the kind of impression I’d like to leave. It’s taken on a life of its own now, and I’m curious to see how it evolves. Nowadays, it seems I’m interested in writing shorter pieces and poems. I get a lot of inspiration from introspection, often catalyzed by conversations with people, paragraphs from books, music, or moments from everyday life. In terms of the writing process, the longer blogposts grow into being like this: I'll have fleeting thoughts and ideas that come to me pretty randomly. I try to put them all in one place (a folder in Obsidian or a board in Muse ). I organically return to certain thoughts and notes over time, and I observe which ones make me feel excited. Typically, I'll switch to iA Writer to do the actual writing — something about switching into another environment helps me get into the right mindset. Sometimes the posts are finished easily and quickly, sometimes I get stuck. When I get stuck, I take the entire piece and make it into a pile of mess in Muse. Sometimes the mess transforms into a coherent piece, sometimes it gets abandoned. When I finish something and feel really good about it, I let it sit for a couple days and look at it again once the post-completion high has faded. This is advice from the editors of the Modern Love column , and it’s very good advice. I occasionally ask a friend to read something to gauge clarity and meaning. I like the idea of having more thinking buddies. 
Please feel free to reach out if you think we could be good thinking buddies. Yes, I do believe the physical space influences my creativity. And it’s not just the immediate environment (the room or desk I'm writing at) but also the thing or tool I'm writing with (apps and notebook) as well as the broader environment (where I am geographically). There’s a brilliant book by Vivian Gornick called The Situation and the Story: The Art of Personal Narrative and a quote in it: “If you don’t leave home you suffocate, if you go too far you lose oxygen.” It’s her comment on one of the example pieces she discusses. This writer was talking about how he couldn’t write when he was too close or too far from home. It’s an interesting perspective to consider, and I find it very relatable. Though I wouldn’t have arrived at this conclusion had I not experienced both extremes. My ideal creative environment is a relatively quiet space where I can see some trees or a body of water when I look up. The tools I mentioned before and my physical journal are also essential to me. My site is built with Astro, the code is on GitHub, and it all deploys through Netlify. The site/blog is really just a bunch of .md and .mdx files with some HTML and CSS. I code in VS Code. I wouldn’t change anything about the content or the name. Maybe I would give the tech stack or platform more thought if I started it now? In moments of frustration with Astro or code, I’ve often wondered if I should just accept that I’m not a techie and use something simpler. It’s been an interesting journey figuring things out though. Too deep into it, can’t back out now. The only running cost I have at the moment is the domain, which is around $10 a year. iA Writer was a one-time purchase of $49.99. My blog doesn’t generate revenue. I don’t like the idea of turning personal blogs into moneymaking machines because it will most likely influence what and how you write.
But — I am supportive of creatives wanting to be valued for what they create and share from an authentic place. I like voluntary, support-based systems like buymeacoffee.com or ko-fi.com. I also like the spirit behind platforms like Kickstarter or Metalabel. I started a Substack earlier this year where I share the longer posts from my blog. I’m not sure how I feel about this subscription thing, but I now use the paywall to protect posts that are more personal than others. I’ve come across a lot of writing I enjoy, though, and connected with others through writing. Here are a few I’ve been introduced to or stumbled upon (including some interesting, no-longer-active blogs):

- Lu’s Wikiblogardenite — Very real and entertaining blog of a "slightly-surreal videos maker and coder".
- Romina’s Journal — Journal of a graphic designer and visual artist.
- Maggie Appleton’s Garden — Big fan of Maggie’s visual essays on programming, design, and anthropology.
- Elliott’s memory site — This memory site gives me a cozy feeling.
- Where are Kay and Phil? — Friends documenting their bike tours and recipes.
- brr — Blog of an IT professional who was deployed to Antarctica during 2022-2023.
- The late Ursula K. Le Guin’s blog — She started this at the age of 81 in 2010.

I like coming across sites that surprise me. Here’s one that boggles my mind, and here’s writer Catherine Lacey’s website. There’s also this online documentary and experience of The Garden of Earthly Delights by Jheronimous Bosch that I share all the time, and Spencer Chang’s website is pretty cool. Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 110 interviews. Make sure to also say thank you to Ben Werdmuller and the other 120 supporters for making this series possible.

Preah's Website 1 month ago

(Guide) Intro To Social Blogging

Social networks have rapidly become so vital to many people's lives on the internet. People want to see what their friends are doing, where they are, and photos of what they're doing. They also want to share these same things with their friends, all without having to go through the manual and sometimes awkward process of messaging them directly and saying "Hey, how're you doing?" Developers and companies have complied with this desire for instant connection. We see the launch of Friendster in 2002, MySpace and a job-centered one we all know, LinkedIn , in 2003. Famously, Facebook in 2004, YouTube in 2005, Twitter (now X) in 2006. Followed by Instagram , Snapchat , Google+ (RIP), TikTok , and Discord . People were extremely excited about this. We are more connected than ever. But we are losing in several ways. These companies that own these platforms want to make maximum profit, leading them to offer subscription-based services in some cases, or more distressing, sell their users' data to advertisers. They use algorithms to serve cherry-picked content that creates dangerous echo-chambers, and instill the need for users to remain on their device for sometimes hours just to see what's new, exacerbating feelings of FOMO and wasting precious time. Facebook has been found to conduct experiments on its users to fuel rage and misinformation for the purpose of engagement. 1 2 When did socializing online with friends and family become arguing with strangers, spreading misinformation, and experiencing panic attacks because of the constant feed of political and social unrest? I don't expect anyone to drop their social media. Plenty of people use it in healthy ways. We even have decentralized social media, such as the fediverse (think Mastodon) and the AT Protocol (think Bluesky) to reduce the problem of one person or company owning everything. I think this helps, and seeing a feed of your friends' short thoughts or posts occasionally is nice if you're not endlessly scrolling. 
I also think it's vital to many people to be able to explore recommendations frequently to get out of their bubble and experience variety. There is another option, one I am personally more fond of. It can sit nicely alongside your existing social media or replace it. It serves a different purpose than something like Twitter (X) or Instagram. It's meant to be a slower, more nuanced form of socializing and communicating, inspired by the pre-social media era, or at least the early one. For the purposes of this guide, I will refer to this as "Blog Feeds." A quick one-page intro can be found at blogfeeds.net , 3 which includes an aggregation of blogs to follow, essentially creating a network of people similar to a webring. 4 This will help you explore new blogs you find interesting and create a tighter group. Another gem is ooh.directory , which sorts blogs by category and interest, allows you to flip through random blogs, and visit the most recently-updated blogs for ideas of who to follow. Basically, a blog feed involves making a personal blog, which can have literally whatever you want on it, and following other people. The "following" aspect can be done through RSS (most common), or email newsletter if their site supports it. If the blog is part of the AT Protocol, you may be able to follow it using a Bluesky account. More about that later. Making a blog sounds scary and technical, but it doesn't have to be. If you know web development or want to learn, you can customize a site to be whatever your heart desires. If you're not into that, there are many services that make it incredibly easy to get going. You can post about your day, about traveling, about gaming, theme it a specific way, or post short thoughts on nothing much at all if you want. All I ask is that you do this because you want to, not solely because you might make a profit off of your audience. Also, please reconsider using AI to write posts if you are thinking of doing that!
It's fully up to you, but in my opinion, why should I read something no one bothered to write? Hosted Services: Bear Blog: In the creator's own words, "A privacy-first, no-nonsense, super-fast blogging platform." Sign up, select a pre-made theme if you want and modify it to your liking, make post templates, and connect a custom domain if desired. Comes with ready-to-go RSS, and pretty popular among bloggers currently. This site runs on it. Pika: “An editor that makes you want to write, designed to get out of your way and perfectly match what readers will see.” With Pika you can sign up, choose a theme, customize without code, write posts in a clean editor, export your content, and connect your own domain, with a focus on privacy and design. You can start for free (up to ~50 posts) and upgrade later if you want unlimited posts, newsletter subscribers, analytics, etc. Substack: You might have seen this around before, it's quite popular. It's a platform built for people to publish posts and sometimes make money doing it. You can start a newsletter or blog, choose what’s free and what’s paid, send posts (and even podcasts or video) to subscribers’ inboxes, build a community, and access basic analytics. It’s simple and user-friendly, with a 10% fee if you monetize. This may not be the most loved option by other small bloggers due to its association with newsletter-signup popups and making a profit. It is also the most similar to other social media among blogging options . Ghost: An open-source platform focused on publishing and monetization. Ghost provides an editor (with live previews, Markdown + embeds, and an admin UI), built-in SEO, newsletter tools, membership & subscription support, custom themes, and control over your domain and data. You can self-host (free, for full flexibility) or use their managed Ghost(Pro) hosting, and benefit from faster performance, email delivery, and extensible APIs. 
Wordpress: The world’s most popular website and blogging platform, powering over 40% of the web. WordPress lets you create a simple blog or a business site using free and premium themes and plugins. You can host it yourself with full control, or use their hosted service (WordPress.com) for convenience. It supports custom domains, rich media, SEO tools, and extensibility through code or plugins. Squarespace: You might have heard of this on your favorite YouTuber's channel during a sponsorship (you don't sit through those, do you?). It is a platform for building websites, blogs, and online stores with no coding required. Squarespace offers templates, a drag-and-drop editor, built-in SEO, analytics, and e-commerce tools under a subscription. You can connect a custom domain, publish blog posts, and manage newsletters. Self-hosted, if you're more technical: Astro: A modern web framework built for speed, content, and flexibility. Astro lets you build blogs, portfolios, and full sites using any UI framework, or none at all, with zero JavaScript by default. 5 It supports Markdown, MDX, and server-side rendering, plus integrations for CMSs, themes, and deployment platforms. Hugo: An open-source static site generator built for efficiency and flexibility. It lets you create blogs and websites using Markdown, shortcodes, and templates. It supports themes, taxonomies, custom content types, and control over site structure without needing a database. Zola: Another open-source static site generator. Zola uses Markdown for content, Tera templates for layouts, and comes with built-in features like taxonomies, RSS feeds, and syntax highlighting. It requires no database, and is easy to configure. 11ty: Pronounced Eleventy. A flexible static site generator that lets you build content-focused websites using plain HTML, Markdown, or templating languages like Nunjucks, Liquid, and others. 
11ty requires no database, supports custom data structures, and gives full control over your site's output. Jekyll: A popular static site generator that transforms plain text into self-hosted websites and blogs. Jekyll uses Markdown, Liquid templates, and simple configuration to generate content without a database. It supports themes, plugins, and custom layouts, and integrates seamlessly with GitHub Pages for free hosting. Honorable mention: Neocities: This is a modern continuation of Geocities, mainly focused on hand-coding HTML and CSS to create a custom site from scratch. Not ideal for blogging, but cool for showcasing a site and learning web development. It's free and open-source, and you can choose to pay for custom domains and more bandwidth, with no ads or data selling. You can see my silly site I made using Neocities for a D&D campaign I'm a part of at thepub.neocities.org. Wow, that's a lot of options! Don't get overwhelmed. Here are the basics for a simple site like Bear Blog or a static site generator. You write a post. This post tends to be in Markdown, which is a markup language (like HTML) for creating formatted text. It's actually not too far from something like Microsoft Word. In this case, if you want a header, you can put a pound symbol in front of your header text to tell your site that it should be formatted as one. Same with quotation blocks, bolding, italics, and all that. Here is a simple Markdown cheatsheet provided by Bear Blog. Some other blogging platforms have even more options for formatting, like informational or warning boxes. While you're writing, you might want to use a live preview to make sure you're formatting it how you intend, and you can usually preview the whole post before publishing. After posting, people can go read your post and possibly interact with it in some ways if you want that. I'm not going to attempt to describe the AT Protocol when there is another post that does an excellent job. But what I am going to mention, briefly, is using this protocol to follow blogs via Bluesky or another AT Protocol handle. Using something like leaflet.pub, you can create a blog there and follow other similar blogs. Here is an example of a blog on leaflet, and if you have Bluesky, go ahead and test subscribing using it. They also support comments and RSS. You don't have to memorize what RSS stands for (Really Simple Syndication, if you're curious). 
This is basically how you create a feed, like a Twitter (X) timeline or a Facebook homepage. When you subscribe to someone's blog, 6 you get a simple, consolidated aggregation of new posts. At this point, RSS is pretty old, but it still works exactly as intended, and most sites have RSS feeds. What you need to start is a newsreader app. There are a lot of options, so it depends on what you value most. When you subscribe to a website, you add its feed to your newsreader app, which fetches the content and displays it for you, among other neat features. Usually these apps include nice themes, no ads to bother you, and folder or tag organization. You may have to find a site's feed and copy the link, or your reader app may be able to find it automatically from a browser shortcut or from pasting in the normal link for the website. To learn more about adding a new subscription, see my feeds page. Here are some suggestions. Feel free to explore multiple and see what sticks: Feedly: A cloud-based, freemium RSS aggregator with apps and a browser interface. You can use a free account that supports a limited number of sources (about 100 feeds) and basic folders, but many advanced features—such as hiding ads, notes/highlights, power search, integration with Evernote/Pocket, and “Leo” AI filtering—require paid tiers. It supports iOS, Android, and the web (desktop browsers). Feedly is not open source; it is a commercial service. Inoreader: Also a freemium service, available via the web and on iOS and Android, with synchronization of your reading across devices. The free plan includes many of the core features (RSS subscriptions, folders, basic filtering), while more powerful features (such as advanced rules, full-text search, premium support, and higher feed limits) are gated behind paid tiers. Inoreader is not open source; it is a proprietary service with a freemium model. NetNewsWire: A native, free, and open-source RSS reader for Apple platforms (macOS, iPhone, iPad). 
It offers a clean, native experience and tracks your read/unread status locally or via syncing. Because it’s open source (MIT-licensed), you can inspect or contribute to its code. Its main limitation is platform support, since it’s focused on Apple devices. It's also not very visually flashy, if you care about that. feeeed (with four Es): An iOS/iPadOS (and recently macOS) app that emphasizes a private, on-device reading experience without requiring servers or accounts. It is free (no ads or in-app purchases) and supports RSS subscriptions, YouTube, Reddit, and other sources, plus some AI summarization. Because it is designed to run entirely on-device, there is no paid subscription for backend features, and it is private by design. It is not open source. One personal note from me: I use this as my daily driver, and it has some minor bugs you may notice. It's developed by one person, so it happens. Reeder: A client app (primarily for Apple platforms: iOS, iPadOS, macOS) that fetches feed data from external services, such as Feedly, Inoreader, or local RSS sources. The new Reeder version supports a unified timeline, filters, and media integration. It is not itself a feed-hosting service but a front end; thus, many of its features (such as sync or advanced filtering) depend on which backend you use. It uses iCloud to sync subscription and timeline state between devices. Reeder is proprietary (closed source) and uses a paid model or in-app purchases for more advanced versions. Unread: Another client app for Apple platforms, with a focus on elegant reading. It relies on external feed services for syncing (you provide your own RSS or use a service like Feedly). Because Unread is a reader app, its features are more about presentation and gesture support; many of the syncing, feed-limit, or premium capabilities depend on your chosen backend service. 
I would say Unread is my favorite so far, as it offers a lot for being free, has great syncing, tag organization, and a pleasing interface. It also fetches entire website content to get around certain limitations of some websites' feeds, allowing you to read everything in the app without visiting the website directly. FreshRSS: A self-hostable, open-source RSS/Atom aggregator that you run on your own server (for example, via Docker) and that supports reading through its own web interface or via third-party client apps. It allows full control over feed limits, filtering, theming, and extensions, and it can generate feeds by scraping or filtering content. Because it is open source, there is no paid tier in the software itself (though you may incur hosting costs). Many client apps can connect to a FreshRSS instance for mobile or desktop reading. If you're curious what you can do with its API, check out Steve's post about automating feeds with FreshRSS. Click this for a more detailed breakdown of available RSS newsreaders. Additional resource on RSS and Feeds. Okay, soooo... I have a blog, I have RSS stuff, now what do I subscribe to, and how do I make this social? I'll let blogfeeds.net describe this: This takes us to our final point: Feeds. You can probably get away with just the first two items and then sharing it with people you already know, but what about meeting or talking to people you don't know? That's where Feeds come in. The idea is to create another page on your blog that has all the RSS feeds you're subscribed to. By keeping this public and always up to date, someone can visit your page, find someone new, and follow them. Perhaps that person also has a feeds page, and the cycle continues until there is a natural and organic network of people all sharing with each other. So if you have a blog, consider making a feeds page and sharing it! 
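Stepping back to the newsreaders for a moment: under the hood, they all do the same basic job once you've subscribed — fetch the feed's XML and pull out each entry to display. Here is a toy sketch of that step in JavaScript. The sample feed is made up, and the regex "parsing" is only for illustration; real readers use proper XML parsers.

```javascript
// A made-up RSS 2.0 feed, the kind of XML a reader downloads for each subscription.
const sampleFeed = `
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>Hello, world</title><link>https://example.com/hello</link></item>
    <item><title>My cool rock</title><link>https://example.com/rock</link></item>
  </channel>
</rss>`;

// Extract each item's title and link. Regex is fine for a toy demo,
// but a real newsreader would use an actual XML parser.
function parseItems(xml) {
  const items = [];
  for (const [, body] of xml.matchAll(/<item>(.*?)<\/item>/gs)) {
    const title = (body.match(/<title>(.*?)<\/title>/) || [])[1];
    const link = (body.match(/<link>(.*?)<\/link>/) || [])[1];
    items.push({ title, link });
  }
  return items;
}

console.log(parseItems(sampleFeed).map((i) => i.title));
// → [ 'Hello, world', 'My cool rock' ]
```

Everything else a reader offers — folders, tags, themes, read/unread state — is built on top of this one fetch-and-parse loop.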
If your RSS reader supports OPML file exports and imports, you can share that file as well to make it easier to pass your feeds along. Steve has an example of a feeds page, and blogfeeds.net has an aggregation of known blogs using feeds pages, creating a centralized place to follow blogs with this same mindset. Once you make a feeds page, you can submit it to the site to get added. Then people can find your blog! There is debate on the best method for interacting with others via blogs. You have a few options. Email: Share an email address people can contact you at, and when someone has something to say, they can email you about it. This allows for intentional, nuanced discussion. I use a button template at the end of every post to facilitate this (totally stolen from Steve, again), along with the accompanying CSS, 7 which Bear Blog lets you edit. For each post, I manually change the subject line (Re: {{post_title}}) to whatever the post title is. That way, someone can click the button and open their mail client with a subject line already pertaining to the post they want to talk about. Change the color values to whatever matches your site! See the bottom of this post to see what it looks like. Next: Comments: Comments are a tricky one. They're looked down on by some because of their lack of nuance and the stress of moderation, which is why Bear Blog doesn't natively have them. There are various ways to do comments, and it heavily depends on what blogging platform you choose, so here is Bear Blog's stance on it and some recommendations for setting up comments if you want. Guestbooks: This is an old form of website interaction that dates back to at least Geocities. The concept is that visitors to your site can leave a quick thought, their name, and optionally their own website to let you know they visited. You can see an example on my website, and my recommended service for a free guestbook is Guestbooks. You can choose a default theme and edit it if you want to match the rest of your site, implement spam protection, and access a dashboard for managing and deleting comments if needed. 
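For illustration, a minimal email-reply button like the one described above might look like this as an HTML snippet with accompanying CSS. This is a hypothetical sketch, not the exact template from the post: the address, label, and colors are placeholders to swap for your own.

```html
<!-- Hypothetical reconstruction; address and colors are placeholders. -->
<a class="email-button" href="mailto:you@example.com?subject=Re: {{post_title}}">
  Reply via email
</a>
<style>
  .email-button {
    display: inline-block;
    padding: 0.5em 1em;
    border-radius: 6px;
    background: #ffb6c1; /* swap for your site's accent color */
    color: #222;         /* swap for your site's text color */
    text-decoration: none;
  }
</style>
```

The mailto: link's subject parameter is what pre-fills the reader's mail client, so updating it per post is the only manual step.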
Here are some ideas to get you started and inspired: Add new pages, like a link to your other social media or music listening platforms, or a page dedicated to your pet. Email a random person on a blog to give your thoughts on a post of theirs or simply tell them their site is cool. Create an email address just for this and for your website, for privacy and separation, if desired. Add a Now page. It's a section specifically to tell others what you are focused on at this point in your life. Read more about it at nownownow.com. See an example on Clint McMahon's blog. A /now page shares what you’d tell a friend you hadn’t seen in a year. Write a post about a cool rock you found in your yard, or something similarly asinine. Revel in the lack of effort. Or make a post containing only 1-3 sentences. Join a webring. Make a page called Reviews, to review movies, books, TV shows, games, kitchen appliances, etc. That's all from me for now. Subscribe to my RSS feed, email me using the button at the bottom to tell me this post sucks, or that it's great, or to suggest an edit, and bring back the old web. Subscribe via email or RSS. Washington Post – Five points for anger, one for a ‘like’: How Facebook’s formula fostered rage and misinformation. Link. • Unpaywalled. ↩ The Guardian – Facebook reveals news feed experiment to control emotions. Link. ↩ This website was created by Steve, who has their own Bear Blog. Read Resurrect the Old Web, which inspired this post. ↩ A webring is a collection of websites linked together in a circular structure, organized around a specific theme. Each site has navigation links to the next and previous members, forming a ring. A central site usually lists all members to prevent breaking the ring if someone's site goes offline. ↩ Take a look at this Reddit discussion on why less JavaScript can be better. ↩ Or a news site, podcast, or supported social media platform like Bluesky, and even subreddits. 
↩ If you don't know what HTML and CSS are, basically, the first snippet of code I shared is HTML, used for the basic text and formatting of a website; CSS is used to apply fancy styles and color, among other things. ↩

Ahmad Alfy 1 month ago

How Functional Programming Shaped (and Twisted) Frontend Development

A friend called me last week. Someone who’d built web applications for a long time before moving exclusively to backend and infra work. He’d just opened a modern React codebase for the first time in over a decade. “What the hell is this?” he asked. “What are all these generated class names? Did we just… cancel the cascade? Who made the web work this way?” I laughed, but his confusion cut deeper than he realized. He remembered a web where CSS cascaded naturally, where the DOM was something you worked with, where the browser handled routing, forms, and events without twenty abstractions in between. To him, our modern frontend stack looked like we’d declared war on the platform itself. He asked me to explain how we got here. That conversation became this essay. A disclaimer before we begin: This is one perspective, shaped by having lived through the first browser war. I applied hacks to make 24-bit PNGs work in IE6. I debugged hasLayout bugs at 2 AM. I wrote JavaScript when you couldn’t trust it to work the same way across browsers. I watched jQuery become necessary, then indispensable, then legacy. I might be wrong about some of this. My perspective is biased for sure, but it also comes with the memory that the web didn’t need constant reinvention to be useful. There’s a strange irony at the heart of modern web development. The web was born from documents, hyperlinks, and a cascading stylesheet language. It was always messy, mutable, and gloriously side-effectful. Yet over the past decade, our most influential frontend tools have been shaped by engineers chasing functional programming purity: immutability, determinism, and the elimination of side effects. This pursuit gave us powerful abstractions. React taught us to think in components. Redux made state changes traceable. TypeScript brought compile-time safety to a dynamic language. But it also led us down a strange path. One where we fought against the platform instead of embracing it. 
We rebuilt the browser’s native capabilities in JavaScript, added layers of indirection to “protect” ourselves from the DOM, and convinced ourselves that the web’s inherent messiness was a problem to solve rather than a feature to understand. The question isn’t whether functional programming principles have value. They do. The question is whether applying them dogmatically to the web (a platform designed around mutability, global scope, and user-driven chaos) made our work better, or just more complex. To understand why functional programming ideals clash with web development, we need to acknowledge what the web actually is. The web is fundamentally side-effectful. CSS cascades globally by design. Styles defined in one place affect elements everywhere, creating emergent patterns through specificity and inheritance. The DOM is a giant mutable tree that browsers optimize obsessively; changing it directly is fast and predictable. User interactions arrive asynchronously and unpredictably: clicks, scrolls, form submissions, network requests, resize events. There’s no pure function that captures “user intent.” This messiness is not accidental. It’s how the web scales across billions of devices, remains backwards-compatible across decades, and allows disparate systems to interoperate. The browser is an open platform with escape hatches everywhere. You can style anything, hook into any event, manipulate any node. That flexibility and that refusal to enforce rigid abstractions is the web’s superpower. When we approach the web with functional programming instincts, we see this flexibility as chaos. We see globals as dangerous. We see mutation as unpredictable. We see side effects as bugs waiting to happen. And so we build walls. Functional programming revolves around a few core principles: functions should be pure (same inputs → same outputs, no side effects), data should be immutable, and state changes should be explicit and traceable. 
These ideas produce code that’s easier to reason about, test, and parallelize (in the right context, of course). These principles had been creeping into JavaScript long before React. Underscore.js (2009) brought map, reduce, and filter to the masses. Lodash and Ramda followed with deeper FP toolkits including currying, composition, and immutability helpers. The ideas were in the air: avoid mutation, compose small functions, treat data transformations as pipelines. React itself started with class components, hardly pure FP. But the conceptual foundation was there: treat UI as a function of state, make rendering deterministic, isolate side effects. Then came Elm, a purely functional language created by Evan Czaplicki that codified the “Model-View-Update” architecture. When Dan Abramov created Redux, he explicitly cited Elm as inspiration. Redux’s reducers, pure functions of the shape (state, action) => newState, are directly modeled on Elm’s update functions. Redux formalized what had been emerging patterns. Combined with React Hooks (which replaced stateful classes with functional composition), the ecosystem shifted decisively toward FP. Immutability became non-negotiable. Pure components became the ideal. Side effects were corralled into useEffect. Through this convergence (library patterns, Elm’s rigor, and React’s evolution), Haskell-derived ideas about purity became mainstream JavaScript practice. In the early 2010s, as JavaScript applications grew more complex, developers looked to FP for salvation. jQuery spaghetti had become unmaintainable. Backbone’s two-way binding caused cascading updates (ironically, Backbone’s documentation explicitly advised against two-way binding, saying “it doesn’t tend to be terribly useful in your real-world app,” yet many developers implemented it through plugins). The community wanted discipline, and FP offered it: treat your UI as a pure function of state. Make data flow in one direction. Eliminate shared mutable state. React’s arrival in 2013 crystallized these ideals. 
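The reducer pattern at the center of this convergence is small enough to show directly. A minimal Redux-style reducer in plain JavaScript (a sketch with made-up action names, not code from any particular app):

```javascript
// A Redux-style reducer: (state, action) => newState.
// Pure: no mutation, no side effects; the same inputs always give the same output.
function counterReducer(state = { count: 0 }, action) {
  switch (action.type) {
    case 'increment':
      // Return a *new* object rather than mutating the old one.
      return { ...state, count: state.count + 1 };
    case 'decrement':
      return { ...state, count: state.count - 1 };
    default:
      return state; // unknown actions leave state untouched
  }
}

const s0 = { count: 0 };
const s1 = counterReducer(s0, { type: 'increment' });
console.log(s1.count); // → 1
console.log(s0.count); // → 0 (the previous state object was never mutated)
```

Because every state transition is an explicit, pure function call, changes are trivially traceable and replayable — the property Redux inherited from Elm's update functions.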
It promised a world where `UI = f(state)`: give it data, get back a component tree, re-render when data changes. No manual DOM manipulation. No implicit side effects. Just pure, predictable transformations. This was seductive. And in many ways, it worked. But it also set us on a path toward rebuilding the web in JavaScript’s image, rather than JavaScript in the web’s image.

CSS was designed to be global. Styles cascade, inherit, and compose across boundaries. This enables tiny stylesheets to control huge documents, and lets teams share design systems across applications. But to functional programmers, global scope is dangerous. It creates implicit dependencies and unpredictable outcomes.

Enter CSS-in-JS: styled-components, Emotion, JSS. The promise was component isolation. Styles scoped to components, no cascading surprises, no naming collisions. Styles become data, passed through JavaScript, predictably bound to elements.

But this came at a cost. CSS-in-JS libraries generate styles at runtime, injecting them into `<style>` tags as components mount. This adds JavaScript execution to the critical rendering path. Server-side rendering becomes complicated: you need to extract styles during the render, serialize them, and rehydrate them on the client. Debugging involves cryptic runtime-generated class names. And you lose the cascade; the very feature that made CSS powerful in the first place.

Worse, you’ve moved a browser-optimized declarative language into JavaScript, a single-threaded runtime. The browser can parse and apply CSS in parallel, off the main thread. Your styled-components bundle? That’s main-thread work, blocking interactivity. The web had a solution. It’s called a stylesheet. But it wasn’t pure enough.

The industry eventually recognized these problems and pivoted to Tailwind CSS. Instead of runtime CSS generation, use utility classes. Instead of styled-components, compose classes in JSX. This was better, at least it’s compile-time, not runtime.
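To see what “styles become data” means in practice, here is a toy sketch of the core of a CSS-in-JS library (heavily simplified; the hash function and `css-` naming scheme are made up for illustration, and real libraries like styled-components add caching, vendor prefixing, and theming):

```javascript
// Deterministic string hash so the same style object yields the same class.
function hashString(str) {
  let h = 0;
  for (let i = 0; i < str.length; i++) {
    h = (h * 31 + str.charCodeAt(i)) | 0;
  }
  return Math.abs(h).toString(36);
}

// Turn a style object into a generated class name plus a CSS rule.
function css(styles) {
  const body = Object.entries(styles)
    // camelCase -> kebab-case, e.g. fontSize -> font-size
    .map(([prop, value]) => `${prop.replace(/[A-Z]/g, (c) => '-' + c.toLowerCase())}: ${value};`)
    .join(' ');
  const className = 'css-' + hashString(body);
  // A real library would inject this rule into a <style> tag on mount:
  // main-thread work on the critical rendering path. We just return it.
  const rule = `.${className} { ${body} }`;
  return { className, rule };
}
```

Every call like this runs as JavaScript on the main thread, which is the cost the paragraph above describes.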
No more blocking the main thread to inject styles. No more hydration complexity. But Tailwind still fights the cascade. Instead of writing a `.button` rule once and letting it cascade to all buttons, you repeat the same stack of utility classes on every single button element. You’ve traded runtime overhead for a different set of problems: class soup in your markup, massive HTML payloads, and losing the cascade’s ability to make sweeping design changes in one place.

And here’s where it gets truly revealing: when Tailwind added support for nested selectors (a feature that would let developers write more cascade-like styles), parts of the community revolted. David Khourshid (creator of XState) shared examples of using nested selectors in Tailwind, and the backlash was immediate. Developers argued this defeated the purpose of Tailwind, that it brought back the “problems” of traditional CSS, that it violated the utility-first philosophy.

Think about what this means. The platform has cascade. CSS-in-JS tried to eliminate it and failed. Tailwind tried to work around it with utilities. And when Tailwind cautiously reintroduced a cascade-like feature, developers who were trained by years of anti-cascade ideology rejected it. We’ve spent so long teaching people that the cascade is dangerous that even when their own tools try to reintroduce platform capabilities, they don’t want them. We’re not just ignorant of the platform anymore. We’re ideologically opposed to it.

React introduced synthetic events to normalize browser inconsistencies and integrate events into its rendering lifecycle. Instead of attaching listeners directly to DOM nodes, React uses event delegation: it listens at the root, then routes events to handlers through its own system. This feels elegant from a functional perspective. Events become data flowing through your component tree. You don’t touch the DOM directly. Everything stays inside React’s controlled universe. But native browser events already work.
They bubble, they capture, they’re well-specified. The browser has spent decades optimizing event dispatch. By wrapping them in a synthetic layer, React adds indirection: memory overhead for event objects, translation logic for every interaction, and debugging friction when something behaves differently than the native API.

Worse, it trains developers to avoid the platform. Developers learn React’s event system, not the web’s. When they need to work with third-party libraries or custom elements, they hit impedance mismatches. `addEventListener` becomes a foreign API in their own codebase. Again: the web had this. The browser’s event system is fast, flexible, and well-understood. But it wasn’t controlled enough for the FP ideal of a closed system.

The logical extreme of “UI as a pure function of state” is client-side rendering: the server sends an empty HTML shell, JavaScript boots up, and the app renders entirely in the browser. From a functional perspective, this is clean. Your app is a deterministic function that takes initial state and produces a DOM tree. From a web perspective, it’s a disaster. The browser sits idle while JavaScript parses, executes, and manually constructs the DOM. Users see blank screens. Screen readers get empty documents. Search engines see nothing. Progressive rendering, one of the browser’s most powerful features, goes unused.

The industry noticed. Server-side rendering came back. But because the mental model was still “JavaScript owns the DOM,” we got hydration: the server renders HTML, the client renders the same tree in JavaScript, then React walks both and attaches event handlers. During hydration, the page is visible but inert. Clicks do nothing, forms don’t submit. This is architecturally absurd. The browser already rendered the page. It already knows how to handle clicks.
But because the framework wants to own all interactions through its synthetic event system, it must re-create the entire component tree in JavaScript before anything works.

The absurdity extends beyond the client. Infrastructure teams watch in confusion as every user makes double the number of requests: the server renders the page and fetches data, then the client boots up and fetches the exact same data again to reconstruct the component tree for hydration. Why? Because the framework can’t trust the HTML it just generated. It needs to rebuild its internal representation of the UI in JavaScript to attach event handlers and manage state. This isn’t just wasteful, it’s expensive. Database queries run twice. API calls run twice. Cache layers get hit twice. CDN costs double. And for what? So the framework can maintain its pure functional model where all state lives in JavaScript. The browser had the data. The HTML had the data. But that data wasn’t in the right shape. It wasn’t a JavaScript object tree, so we throw it away and fetch it again.

Hydration is what happens when you treat the web like a blank canvas instead of a platform with capabilities. The web gave us streaming HTML, progressive enhancement, and instant interactivity. We replaced it with JSON, JavaScript bundles, duplicate network requests, and “please wait while we reconstruct reality.”

Consider the humble modal dialog. The web has `<dialog>`, a native element with built-in functionality: it manages focus trapping, handles Escape key dismissal, provides a backdrop, controls scroll-locking on the body, and integrates with the accessibility tree. It exists in the DOM but remains hidden until opened. No JavaScript mounting required. It’s fast, accessible, and battle-tested by browser vendors.

Now observe what gets taught in tutorials, bootcamps, and popular React courses: build a modal with `<div>` elements. Conditionally render it when `isOpen` is true. Manually attach a click-outside handler.
Write an effect to listen for the Escape key. Add another effect for focus trapping. Implement your own scroll-lock logic. Remember to add ARIA attributes. Oh, and make sure to clean up those event listeners, or you’ll have memory leaks. You’ve just written 100+ lines of JavaScript to poorly recreate what the browser gives you for free.

Worse, you’ve trained developers to not even look for native solutions. The platform becomes invisible. When someone asks “how do I build a modal?”, the answer is “install a library” or “here’s my custom hook,” never “use `<dialog>`.”

The teaching is the problem. When influential tutorial authors and bootcamp curricula skip native APIs in favor of React patterns, they’re not just showing an alternative approach. They’re actively teaching malpractice. A generation of developers learns to build inaccessible `<div>` soup because that’s what fits the framework’s reactivity model, never knowing the platform already solved these problems.

And it’s not just bootcamps. Even the most popular component libraries make the same choice: shadcn/ui builds its Dialog component on Radix UI primitives, which use `<div>`s instead of the native `<dialog>` element. There are open GitHub issues requesting native `<dialog>` support, but the implicit message is clear: it’s easier to reimplement the browser than to work with it.

The problem runs deeper than ignorance or inertia. The frameworks themselves increasingly struggle to work with the platform’s evolution. Not because the platform features are bad, but because the framework’s architectural assumptions can’t accommodate them. Consider why component libraries like Radix UI choose `<div>`s over `<dialog>`. The native element manages its own state: it knows when it’s open, it handles its own visibility, it controls focus internally. But React’s reactivity model expects all state to live in JavaScript, flowing unidirectionally into the DOM. When a native element manages its own state, React’s mental model breaks down.
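For comparison, here is a minimal sketch of the native approach (the id and copy are illustrative). The element manages its own open state, focus, backdrop, and Escape handling:

```html
<!-- The browser supplies focus trapping, Escape dismissal, and ::backdrop. -->
<dialog id="confirm-dialog">
  <p>Delete this item?</p>
  <!-- method="dialog" closes the dialog on submit; no JavaScript needed -->
  <form method="dialog">
    <button value="cancel">Cancel</button>
    <button value="confirm">Delete</button>
  </form>
</dialog>

<button onclick="document.getElementById('confirm-dialog').showModal()">
  Delete…
</button>
```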
Keeping an `isOpen` flag in your React state synchronized with the element’s actual open/closed state becomes a nightmare of refs, effects, and imperative `showModal()`/`close()` calls. Precisely what React was supposed to eliminate. Rather than adapt their patterns to work with stateful native elements, library authors reimplement the entire behavior in a way that fits the framework. It’s architecturally easier to build a fake dialog in JavaScript than to integrate with the platform’s real one.

But the conflict extends beyond architectural preferences. Even when the platform adds features that developers desperately want, frameworks can’t always use them. Accordions? The web has `<details>` and `<summary>`. Tooltips? There’s the `title` attribute and the emerging Popover API. Date pickers? `<input type="date">`. Custom dropdowns? The web now supports styling `<select>` elements with the new `::picker(select)` and `::checkmark` pseudo-elements. You can even put `<option>` elements with images inside `<select>` elements now. It eliminates the need for the countless JavaScript select libraries that exist solely because designers wanted custom styling.

Frameworks encourage conditional rendering and component state, so these elements don’t get rendered until JavaScript decides they should exist. The mental model is “UI appears when state changes,” not “UI exists, state controls visibility.” Even when the platform adds the exact features developers have been rebuilding in JavaScript for years, the ecosystem momentum means most developers never learn these features exist.

And here’s the truly absurd part: even when developers do know about these new platform features, the frameworks themselves can’t handle them. MDN’s documentation for customizable `<select>` elements includes this warning: “Some JavaScript frameworks block these features; in others, they cause hydration failures when Server-Side Rendering (SSR) is enabled.” The platform evolved. The HTML parser now allows richer content inside `<option>` elements. But React’s JSX parser and hydration system weren’t designed for this. They expect `<option>` to contain only text.
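A sketch of the customizable `<select>` markup in question, per the emerging spec (the exact opt-in syntax is still settling and browser support is partial):

```html
<style>
  /* Opt in to the customizable rendering path for the select and its picker */
  select,
  ::picker(select) {
    appearance: base-select;
  }
</style>

<select name="pet">
  <!-- Rich content inside <option> is exactly what JSX parsers
       and hydration were never designed to expect -->
  <option value="cat"><img src="cat.svg" alt=""> Cat</option>
  <option value="dog"><img src="dog.svg" alt=""> Dog</option>
</select>
```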
Updating the framework to accommodate the platform’s evolution takes time, coordination, and breaking changes that teams are reluctant to make. The web platform added features that eliminate entire categories of JavaScript libraries, but the dominant frameworks can’t use those features without causing hydration errors. The stack that was supposed to make development easier now lags behind the platform it’s built on.

The browser has native routing: `<a>` tags, the History API, forward/back buttons. It has native forms: `<form>` elements, validation attributes, submit events. These work without JavaScript. They’re accessible by default. They’re fast. Modern frameworks threw them out. React Router, Next.js’s router, Vue Router; they intercept link clicks, prevent browser navigation, and handle routing in JavaScript. Why? Because client-side routing feels like a pure state transition: URL changes, state updates, component re-renders. No page reload. No “lost” JavaScript state.

But you’ve now made navigation depend on JavaScript. Ctrl+click to open in a new tab? Broken, unless you carefully re-implement it. Right-click to copy link? The URL might not match what’s rendered. Accessibility tools that rely on standard navigation patterns? Confused.

Forms got the same treatment. Instead of letting the browser handle submission, validation, and accessibility, frameworks encourage JavaScript-controlled forms. Formik, React Hook Form, uncontrolled vs. controlled inputs; entire libraries exist to manage what `<form>` already does. The browser can validate `required` fields, email formats, and patterns instantly, with no JavaScript. But that’s not reactive enough, so we rebuild validation in JavaScript, ship it to the client, and hope we got the logic right. The web had these primitives. We rejected them because they didn’t fit our FP-inspired mental model of “state flows through JavaScript.”

Progressive enhancement used to be a best practice: start with working HTML, layer on CSS for style, add JavaScript for interactivity.
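The built-in validation mentioned above, in sketch form (the `/signup` endpoint is hypothetical). The browser blocks submission and shows localized error messages for these constraints with no JavaScript, which is progressive enhancement in action:

```html
<!-- Native constraint validation: the browser refuses to submit until
     the email is well-formed and required fields are filled. -->
<form action="/signup" method="post">
  <label>
    Email
    <input type="email" name="email" required>
  </label>
  <label>
    Username (letters only)
    <input type="text" name="username" pattern="[A-Za-z]+" required>
  </label>
  <button>Sign up</button>
</form>
```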
The page works at every level. Now, we start with JavaScript and work backwards, trying to squeeze HTML out of our component trees and hoping hydration doesn’t break.

We lost built-in accessibility. Native HTML elements have roles, labels, and keyboard support by default. Custom JavaScript widgets require ARIA attributes, focus management, and keyboard handlers. All easy to forget or misconfigure.

We lost performance. The browser’s streaming parser can render HTML as it arrives. Modern frameworks send JavaScript, parse JavaScript, execute JavaScript, then finally render. That’s slower. The browser can cache CSS and HTML aggressively. JavaScript bundles invalidate on every deploy.

We lost simplicity. `<a href>` is eight characters. A client-side router is a dependency, a config file, and a mental model. `<input type="email" required>` is self-documenting. A controlled form with validation is dozens of lines of state management.

And we lost alignment with the platform. The browser vendors spend millions optimizing HTML parsing, CSS rendering, and event dispatch. We spend thousands of developer-hours rebuilding those features in JavaScript, slower.

This isn’t a story of incompetence. Smart people built these tools for real reasons. By the early 2010s, JavaScript applications had become unmaintainable. jQuery spaghetti sprawled across codebases. Two-way data binding caused cascading updates that were impossible to debug. Teams needed discipline, and functional programming offered it: pure components, immutable state, unidirectional data flow. For complex, stateful applications (like dashboards with hundreds of interactive components, real-time collaboration tools, data visualization platforms) React’s model was genuinely better than manually wiring up event handlers and tracking mutations. The FP purists weren’t wrong that unpredictable mutation causes bugs. They were wrong that the solution was avoiding the platform’s mutation-friendly APIs instead of learning to use them well.
But in the chaos of 2013, that distinction didn’t matter. React worked. It scaled. And Facebook was using it in production.

Then came the hype cycle. React dominated the conversation. Every conference had React talks. Every tutorial assumed React as the starting point. CSS-in-JS became “modern.” Client-side rendering became the default. When big companies like Facebook, Airbnb, and Netflix adopted these patterns, they became industry standards. Bootcamps taught React exclusively. Job postings required React experience. The narrative solidified: this is how you build for the web now.

The ecosystem became self-reinforcing through its own momentum. Once React dominated hiring pipelines and Stack Overflow answers, alternatives faced an uphill battle. Teams that had already invested in React (training developers, building component libraries, establishing patterns) faced enormous switching costs. New developers learned React because that’s what jobs required. Jobs required React because that’s what developers knew. The cycle fed itself, independent of whether React was the best tool for any particular job.

This is where we lost the plot. Somewhere in the transition from “React solves complex application problems” to “React is how you build websites,” we stopped asking whether the problems we were solving actually needed these solutions. I’ve watched developers build personal blogs with Next.js, sites that are 95% static content with maybe a contact form, because that’s what they learned in bootcamp. I’ve seen companies choose React for marketing sites with zero interactivity, not because it’s appropriate, but because they can’t hire developers who know anything else. The tool designed for complex, stateful applications became the default for everything, including problems the web solved in 1995 with HTML and CSS. A generation of developers never learned that most websites don’t need a framework at all.
The question stopped being “does this problem need React?” and became “which React pattern should I use?” The platform’s native capabilities, like progressive rendering, semantic HTML, the cascade, and instant navigation, are now considered “old-fashioned.” Reinventing them in JavaScript became “best practices.” We chased functional purity on a platform that was never designed for it. And we built complexity to paper over the mismatch.

The good news: we’re learning. The industry is rediscovering the platform. HTMX embraces HTML as the medium of exchange: the server sends HTML, the browser renders it, no hydration needed. Qwik’s resumable architecture avoids hydration entirely, serializing only what’s needed. Astro defaults to server-rendered HTML with minimal JavaScript. Remix and SvelteKit lean into web standards: forms that work without JS, progressive enhancement, leveraging the browser’s cache. These tools acknowledge what the web is: a document-based platform with powerful native capabilities. Instead of fighting it, they work with it.

This doesn’t mean abandoning components or reactivity. It means recognizing that UI as a function of state is a useful model inside your framework, not a justification to rebuild the entire browser stack. It means using CSS for styling, native events for interactions, and HTML for structure, and then reaching for JavaScript when you need interactivity beyond what the platform provides. The best frameworks of the next decade will be the ones that feel like the web, not ones that succeed in spite of it.

In chasing functional purity, we built a frontend stack that is more complex, more fragile, and less aligned with the platform it runs on. We recreated CSS in JavaScript, events in synthetic wrappers, rendering in hydration layers, and routing in client-side state machines. We did this because we wanted predictability, control, and clean abstractions. But the web was never meant to be pure.
It’s a sprawling, messy, miraculous platform built on decades of emergent behavior, pragmatic compromises, and radical openness. Its mutability isn’t a bug. It’s the reason a document written in 1995 still renders in 2025. Its global scope isn’t dangerous. It’s what lets billions of pages share a design language. Maybe the web didn’t need to be purified. Maybe it just needed to be understood. I want to thank my friend Ihab Khattab for reviewing this piece and providing invaluable feedback.

Manuel Moreale 1 month ago

Blake Watson

This week on the People and Blogs series we have an interview with Blake Watson, whose blog can be found at blakewatson.com . Tired of RSS? Read this in your browser or sign up for the newsletter . The People and Blogs series is supported by Jaga Santagostino and the other 121 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. Sure! I’m Blake. I live in a small city near Jackson, Mississippi, USA. I work for MRI Technologies as a frontend engineer, building bespoke web apps for NASA. Previously I worked at an ad agency as an interactive designer. I have a neuromuscular condition called spinal muscular atrophy (SMA). It's a progressive condition that causes my muscles to become weaker over time. Because of that, I use a power wheelchair and a whole host of assistive technologies big and small. I rely on caregivers for most daily activities like taking a shower, getting dressed, and eating—just to name a few. I am able to use a computer on my own. I knew from almost the first time I used one that it was going to be important in my life. I studied Business Information Systems in college as a way to take computer-related courses without all the math of computer science (which scared me at the time). When I graduated, I had a tough time finding a job making websites. I did a bit of freelance work and volunteer work to build up a portfolio, but was otherwise unemployed for several years. I finally got my foot in the door and I recently celebrated a milestone of being employed for a decade . When I'm not working, I'm probably tinkering on side projects . I'm somewhat of a side project and home-cooked app enthusiast. I just really enjoy making and using my own tools. Over the last 10 years, I've gotten into playing Dungeons and Dragons and a lot of my side projects have been related to D&D. I enjoy design, typography, strategy games, storytelling, writing, programming, gamedev, and music. 
I got hooked on making websites in high school and college in the early 2000s. A friend of mine in high school had a sports news website. I want to say it was made with the Homestead site builder or something similar. I started writing for it and helping with it. I couldn’t get enough so I started making my own websites using WYSIWYG page builders. But I became increasingly frustrated with the limitations of page builders. Designing sites felt clunky and I couldn’t get elements to do exactly what I wanted them to do. I had a few blogs on other services over the years. Xanga was maybe the first one. Then I had one on Blogger for a while. In 2005, I took a course called Advanced Languages 1. It turned out to be JavaScript. Learning JavaScript necessitated learning HTML. Throughout the course I became obsessed with learning HTML, CSS, and JavaScript. Eventually, in August of 2005— twenty years ago —I purchased the domain blakewatson.com . I iterated on it multiple times a year at first. It morphed from quirky design to quirkier design as I learned more CSS. It was a personal homepage, but I blogged at other services. Thanks to RSS, I could list my recent blog posts on my website. When I graduated from college, my personal website became more of a web designer's portfolio, a professional site that I would use to attract clients and describe my services. But around that time I was learning how to use WordPress and I started a self-hosted WordPress blog called I hate stairs . It was an extremely personal disability-related and life journaling type of blog that I ran for several years. When I got my first full-time position and didn't need to freelance any longer, I converted blakewatson.com back into a personal website. But this time, primarily a blog. I discontinued I hate stairs (though I maintain an archive and all the original URLs work ). I had always looked up to various web designers in the 2000s who had web development related blogs. 
People like Jeffrey Zeldman, Andy Clarke, Jason Santa Maria, and Tina Roth Eisenberg. For the past decade, I've blogged about web design, disability, and assistive tech—with the odd random topic here or there. I used to blog only when inspiration struck me hard enough to jolt my lazy ass out of whatever else I was doing. That strategy left me writing three or four articles a year (I don’t know why, but I think of my blog posts as articles in a minor publication, and this hasn’t helped me do anything but self-edit—I need to snap out of it and just post). In March 2023, however, I noticed that I had written an article every month so far that year. I decided to keep up the streak. And ever since then, I've posted at least one article a month on my blog. I realize that isn't very frequent for some people, but I enjoy that pacing, although I wouldn't mind producing a handful more per year. Since I'm purposefully posting more, I've started keeping a list of ideas in my notes just so I have something to look through when it's time to write. I use Obsidian mostly for that kind of thing. The writing itself almost always happens in iA Writer. This app is critical to my process because I am someone who likes to tinker with settings and fonts and pretty much anything I can configure. If I want to get actual writing done, I need constraints. iA Writer is perfect because it looks and works great by default and has very few formatting options. I think I paid $10 for this app one time ten or more years ago. That has to be the best deal I've ever gotten on anything. I usually draft in Writer and then preview it on my site locally to proofread. I have to proofread on the website, in the design where it will live. If I proofread in the editor I will miss all kinds of typos. So I pop back and forth between the browser and the editor fixing things as I go. I can no longer type on a physical keyboard. I use a mix of onscreen keyboard and dictation when writing prose.
Typing is a chore and part of the reason I don’t blog more often. It usually takes me several hours to draft, proofread, and publish a post. I mostly need to be at my desk because I have my necessary assistive tech equipment set up there. I romanticize the idea of writing in a comfy nook or at a cozy coffee shop. I've tried packing up my setup and taking it to a coffee shop, but in practice I get precious little writing done that way. What I usually do to get into a good flow state is put on my AirPods Pro, turn on noise cancellation, maybe have some ambient background noise or music , and just write. Preferably while sipping coffee or soda. But if I could have any environment I wanted, I would be sitting in a small room by a window a few stories up in a quaint little building from the game Townscaper , clacking away on an old typewriter or scribbling in a journal with a Parker Jotter . I've bounced around a bit in terms of tech stack, but in 2024, I migrated from a self-hosted WordPress site to a generated static site with Eleventy . My site is hosted on NearlyFreeSpeech.NET (NFSN)—a shared hosting service I love for its simplistic homemade admin system, and powerful VPS-like capabilities. My domain is registered with them as well, although I’m letting Cloudflare handle my DNS for now. I used Eleventy for the first time in 2020 and became a huge fan. I was stoked to migrate blakewatson.com . The source code is in a private repo on GitHub. Whenever I push to the main branch, DeployHQ picks it up and deploys it to my server. I also have a somewhat convoluted setup that checks for social media posts and displays them on my website by rebuilding and deploying automatically whenever I post. It's more just a way for me to have an archive of my posts on Mastodon than anything. Because my website is so old, I have some files not in my repo that live on my server. 
It is somewhat of a sprawling living organism at this point, with various small apps and tools (and even games !) deployed to sub-directories. I have a weekly scheduled task that runs and saves the entire site to Backblaze B2 . You know, I'm happy to say that I'd mostly do the same thing. I think everyone should have their own website. I would still choose to blog at my own domain name. Probably still a static website. I might structure things a bit differently. If I were designing it now, I might make more allowances for title-less, short posts (technically I can do this now, but they get lumped into my social feed, which I'm calling my Microblog, and kind of get lost). I might design it to be a little weirder rather than buttoned up as it is now. And hey, it's my website. I still might do that. Tinkering with your personal website is one of life's great joys. If you can't think of anything to do with your website, here are a hundred ideas for you . I don't make money from my website directly, but having a website was critical in getting my first job and getting clients before that. So, in a way, all the money I've made working could be attributed to having a personal website. I have a lot of websites and a lot of domains, so it's a little hard to figure out exactly what blakewatson.com itself costs. NFSN is a pay-as-you-go service. I'm currently hosting 13 websites of varying sizes and complexity, and my monthly cost aside from domains is about $23.49. $5 of that is an optional support membership. I could probably get the cost down further by putting the smaller sites together on a single shared server. I pay about $14 per year for the domain these days. I pay $10.50 per month for DeployHQ , but I use it for multiple sites including a for-profit side project, so it doesn’t really cost anything to use it for my blog (this is the type of mental gymnastics I like to do). I pay $15 per month for Fathom Analytics . 
In my mind, this is also subsidized by my for-profit side project. I mentioned that I backup my website to Backblaze B2. It's extremely affordable, and I think I'm paying below 50 cents per month currently for the amount of storage I'm using (and that also includes several websites). If you also throw in the cost of tools like Tower and Sketch , then there's another $200 worth of costs per year. But I use those programs for many things other than my blog. When you get down to it, blogs are fairly inexpensive to run when they are small and personal like mine. I could probably get the price down to free, save for the domain name, if I wanted to use something like Cloudflare Pages to host it—or maybe a free blogging service. I don't mind people monetizing their blogs at all. I mean if it's obnoxious then I'm probably not going to stay on your website very long. But if it's done tastefully with respect to the readers then good for you. I also don't mind paying to support bloggers in some cases. I have a number of subscriptions for various people to support their writing or other creative output. Here are some blogs I'm impressed with in no particular order. Many of these people have been featured in this series before. I'd like to take this opportunity to mention Anne Sturdivant . She was interviewed here on People & Blogs. When I first discovered this series, I put her blog in the suggestion box. I was impressed with her personal website empire and the amount of content she produced. Sadly, Anne passed away earlier this year. We were internet buddies and I miss her. 💜 I'd like to share a handful of my side projects for anyone who might be interested. Now that you're done reading the interview, go check the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 109 interviews . Make sure to also say thank you to Bomburache and the other 121 supporters for making this series possible. Chris Coyier . 
" Mediocre ideas, showing up, and persistence. " <3 Jim Nielsen . Continually produces smart content. Don't know how he does it. Nicole Kinzel . She has posted nearly daily for over two years capturing human struggles and life with SMA through free verse poetry. Dave Rupert . I enjoy the balance of tech and personal stuff and the honesty of the writing. Tyler Sticka . His blog is so clean you could eat off of it. A good mix of tech and personal topics. Delightful animations. Maciej Cegłowski . Infrequent and longform. Usually interesting regardless of whether I agree or disagree. Brianna Albers . I’m cheating because this is a column and not a blog per se. But her writing reads like a blog—it's personal, contemplative, and compelling. There are so very few representations of life with SMA online that I'd be remiss not to mention her. Daring Fireball . A classic blog I’ve read for years. Good for Apple news but also interesting finds in typography and design. Robb Knight . To me, Robb’s website is the epitome of the modern indieweb homepage. It’s quirky, fun, and full of content of all kinds. And that font. :chefskiss: Katherine Yang . A relatively new blog. Beautiful site design. Katherine's site feels fresh and experimental and exudes humanity. HTML for People . I wrote this web book for anyone who is interested in learning HTML to make websites. I wrote this to be radically beginner-friendly. The focus is on what you can accomplish with HTML rather than dwelling on a lot of technical information. A Fine Start . This is the for-profit side project I mentioned. It is a new tab page replacement for your web browser. I originally made it for myself because I wanted all of my favorite links to be easily clickable from every new tab. I decided to turn it into a product. The vast majority of the features are free. You only pay if you want to automatically sync your links with other browsers and devices. Minimal Character Sheet . 
I mentioned enjoying Dungeons and Dragons. This is a web app for managing a D&D 5th edition character. I made it to be a freeform digital character sheet. It's similar to using a form fillable PDF, except that you have a lot more room to write. It doesn't force many particular limitations on your character since you can write whatever you want. Totally free.
