Manuel Moreale 5 days ago

Linda Ma

This week on the People and Blogs series we have an interview with Linda Ma, whose blog can be found at midnightpond.com. Tired of RSS? Read this in your browser or sign up for the newsletter.

The People and Blogs series is supported by Aleem Ali and the other 120 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month.

Hey, I’m Linda. I grew up in Budapest in a Chinese family of four, heavily influenced by the 2000s internet. I was very interested in leaving home and ended up in the United Kingdom—all over, but with the most time spent in Edinburgh, Scotland. I got into design, sociology, and working in tech and startups. Then, I had enough of being a designer, working in startups, and living in the UK, so I left. I moved to Berlin and started building a life that fits me more authentically. My interests change a lot, but the persistent ones have been: journaling with a fountain pen, being horizontal in nature, breathwork, and ambient music.

I was struck by a sudden need to write in public last year. I’d been writing in private but never felt the need to put anything online because I have this thing about wanting to remain mysterious. At least, that’s the story I was telling myself. In hindsight, the 'sudden need' was more of a 'wanting to feel safe to be seen.' I also wanted to find more people who were like-minded. Not necessarily interested in the same things as me, but thinking in similar ways.

Through writing, I discovered that articulating your internal world with clarity takes time and that I was contributing to my own problems because I wasn't good at expressing myself. I write about these kinds of realizations in my blog. It’s like turning blurriness and stories into clarity and facts. I also do the opposite sometimes, where I reframe experiences and feelings into semi-fictional stories as a way to release them.
I enjoy playing in this space between self-understanding through reality and self-soothing through fantasy. I also just enjoy the process of writing and the feeling of hammering on the keyboard.

I wanted the blog to be formless and open-ended, so it didn’t have a name to begin with, and it was hanging out on my personal website. The name just kinda happened. I like the sound of the word “pond” and the feeling I get when I think of a pond. Then I thought: if I were a pond, what kind of pond would I be? A midnight pond. It reflects me, my writing, and the kind of impression I’d like to leave. It’s taken on a life of its own now, and I’m curious to see how it evolves. Nowadays, it seems I’m interested in writing shorter pieces and poems. I get a lot of inspiration from introspection, often catalyzed by conversations with people, paragraphs from books, music, or moments from everyday life.

In terms of the writing process, the longer blogposts grow into being like this: I'll have fleeting thoughts and ideas that come to me pretty randomly. I try to put them all in one place (a folder in Obsidian or a board in Muse). I organically return to certain thoughts and notes over time, and I observe which ones make me feel excited. Typically, I'll switch to iA Writer to do the actual writing — something about switching into another environment helps me get into the right mindset.

Sometimes the posts are finished easily and quickly, sometimes I get stuck. When I get stuck, I take the entire piece and make it into a pile of mess in Muse. Sometimes the mess transforms into a coherent piece, sometimes it gets abandoned.

When I finish something and feel really good about it, I let it sit for a couple days and look at it again once the post-completion high has faded. This is advice from the editors of the Modern Love column, and it’s very good advice. I occasionally ask a friend to read something to gauge clarity and meaning. I like the idea of having more thinking buddies.
Please feel free to reach out if you think we could be good thinking buddies.

Yes, I do believe the physical space influences my creativity. And it’s not just the immediate environment (the room or desk I'm writing at) but also the thing or tool I'm writing with (apps and notebook) as well as the broader environment (where I am geographically). There’s a brilliant book by Vivian Gornick called The Situation and the Story: The Art of Personal Narrative, and a quote in it: “If you don’t leave home you suffocate, if you go too far you lose oxygen.” It’s her comment on one of the example pieces she discusses, in which the writer talks about how he couldn’t write when he was too close to or too far from home. It’s an interesting perspective to consider, and I find it very relatable. Though I wouldn’t have arrived at this conclusion had I not experienced both extremes.

My ideal creative environment is a relatively quiet space where I can see some trees or a body of water when I look up. The tools I mentioned before and my physical journal are also essential to me.

My site is built with Astro, the code is on GitHub, and everything deploys through Netlify. The site/blog is really just a bunch of .md and .mdx files with some HTML and CSS. I code in VS Code.

I wouldn’t change anything about the content or the name. Maybe I would give the tech stack or platform more thought if I started it now? In moments of frustration with Astro or code, I’ve often wondered if I should just accept that I’m not a techie and use something simpler. It’s been an interesting journey figuring things out though. Too deep into it, can’t back out now.

The only running cost I have at the moment is the domain, which is around $10 a year. iA Writer was a one-time purchase of $49.99. My blog doesn’t generate revenue. I don’t like the idea of turning personal blogs into moneymaking machines because it will most likely influence what and how you write.
But — I am supportive of creatives wanting to be valued for what they create and share from an authentic place. I like voluntary, support-based systems like buymeacoffee.com or ko-fi.com. I also like the spirit behind platforms like Kickstarter or Metalabel.

I started a Substack earlier this year where I share the longer posts from my blog. I’m not sure how I feel about this subscription thing, but I now use the paywall to protect posts that are more personal than others.

I’ve come across a lot of writing I enjoy and connected with others through writing. Here are a few blogs I’ve been introduced to or stumbled upon:

Lu’s Wikiblogardenite — Very real and entertaining blog of a "slightly-surreal videos maker and coder".

Romina’s Journal — Journal of a graphic designer and visual artist.

Maggie Appleton’s Garden — Big fan of Maggie’s visual essays on programming, design, and anthropology.

Elliott’s memory site — This memory site gives me a cozy feeling.

Where are Kay and Phil? — Friends documenting their bike tours and recipes.

Interesting, no-longer-active blogs:

brr — Blog of an IT professional who was deployed to Antarctica during 2022-2023.

The late Ursula K. Le Guin’s blog — She started this in 2010, at the age of 81.

I like coming across sites that surprise me. Here’s one that boggles my mind, and here’s writer Catherine Lacey’s website. There’s also this online documentary and experience of The Garden of Earthly Delights by Jheronimus Bosch that I share all the time, and Spencer Chang’s website is pretty cool.

Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 110 interviews. Make sure to also say thank you to Ben Werdmuller and the other 120 supporters for making this series possible.

Preah's Website 1 week ago

(Guide) Intro To Social Blogging

Social networks have rapidly become vital to many people's lives on the internet. People want to see what their friends are doing, where they are, and photos of it all. They also want to share these same things with their friends, all without having to go through the manual and sometimes awkward process of messaging them directly and saying "Hey, how're you doing?"

Developers and companies have complied with this desire for instant connection. We saw the launch of Friendster in 2002; MySpace and the job-centered one we all know, LinkedIn, in 2003. Famously, Facebook in 2004, YouTube in 2005, and Twitter (now X) in 2006. They were followed by Instagram, Snapchat, Google+ (RIP), TikTok, and Discord.

People were extremely excited about this. We are more connected than ever. But we are losing in several ways. The companies that own these platforms want to make maximum profit, leading them to offer subscription-based services in some cases or, more distressing, to sell their users' data to advertisers. They use algorithms to serve cherry-picked content that creates dangerous echo chambers, and they instill the need for users to remain on their devices for sometimes hours just to see what's new, exacerbating feelings of FOMO and wasting precious time. Facebook has been found to conduct experiments on its users to fuel rage and misinformation for the purpose of engagement. [1] [2]

When did socializing online with friends and family become arguing with strangers, spreading misinformation, and experiencing panic attacks because of the constant feed of political and social unrest?

I don't expect anyone to drop their social media. Plenty of people use it in healthy ways. We even have decentralized social media, such as the fediverse (think Mastodon) and the AT Protocol (think Bluesky), to reduce the problem of one person or company owning everything. I think this helps, and seeing a feed of your friends' short thoughts or posts occasionally is nice if you're not endlessly scrolling.
I also think it's vital for many people to be able to explore recommendations frequently, to get out of their bubble and experience variety.

There is another option, one I am personally more fond of. It can sit nicely alongside your existing social media or replace it. It serves a different purpose than something like Twitter (X) or Instagram. It's meant to be a slower, more nuanced form of socializing and communicating, inspired by the pre-social-media era, or at least the early one. For the purposes of this guide, I will refer to this as "Blog Feeds."

blogfeeds.net [3] explains the idea in one page, and it includes an aggregation of blogs to follow, essentially creating a network of people similar to a webring. [4] This will help you explore new blogs you find interesting and create a tighter group. Another gem is ooh.directory, which sorts blogs by category and interest, lets you flip through random blogs, and lists the most recently updated blogs for ideas of who to follow.

Basically, a blog feed involves making a personal blog, which can have literally whatever you want on it, and following other people. The "following" aspect can be done through RSS (most common), or an email newsletter if the site supports it. If the blog is part of the AT Protocol, you may be able to follow it using a Bluesky account. More about that later.

Making a blog sounds scary and technical, but it doesn't have to be. If you know web development or want to learn, you can customize a site to be whatever your heart desires. If you're not into that, there are many services that make it incredibly easy to get going. You can post about your day, about traveling, about gaming, theme it a specific way, or post short thoughts on nothing much at all if you want. All I ask is that you do this because you want to, not solely because you might make a profit off of your audience. Also, please reconsider using AI to write posts if you are thinking of doing that!
It's fully up to you, but in my opinion, why should I read something no one bothered to write?

Hosted Services:

Bear Blog: In the creator's own words, "A privacy-first, no-nonsense, super-fast blogging platform." Sign up, select a pre-made theme if you want and modify it to your liking, make post templates, and connect a custom domain if desired. Comes with ready-to-go RSS, and it's pretty popular among bloggers currently. This site runs on it.

Pika: “An editor that makes you want to write, designed to get out of your way and perfectly match what readers will see.” With Pika you can sign up, choose a theme, customize without code, write posts in a clean editor, export your content, and connect your own domain, with a focus on privacy and design. You can start for free (up to ~50 posts) and upgrade later if you want unlimited posts, newsletter subscribers, analytics, etc.

Substack: You might have seen this around before; it's quite popular. It's a platform built for people to publish posts and sometimes make money doing it. You can start a newsletter or blog, choose what’s free and what’s paid, send posts (and even podcasts or video) to subscribers’ inboxes, build a community, and access basic analytics. It’s simple and user-friendly, with a 10% fee if you monetize. This may not be the most loved option among other small bloggers due to its association with newsletter-signup popups and making a profit. It is also the most similar to other social media among blogging options.

Ghost: An open-source platform focused on publishing and monetization. Ghost provides an editor (with live previews, Markdown + embeds, and an admin UI), built-in SEO, newsletter tools, membership & subscription support, custom themes, and control over your domain and data. You can self-host (free, for full flexibility) or use their managed Ghost(Pro) hosting, and benefit from faster performance, email delivery, and extensible APIs.
WordPress: The world’s most popular website and blogging platform, powering over 40% of the web. WordPress lets you create a simple blog or a business site using free and premium themes and plugins. You can host it yourself with full control, or use their hosted service (WordPress.com) for convenience. It supports custom domains, rich media, SEO tools, and extensibility through code or plugins.

Squarespace: You might have heard of this on your favorite YouTuber's channel during a sponsorship (you don't sit through those, do you?). It is a platform for building websites, blogs, and online stores with no coding required. Squarespace offers templates, a drag-and-drop editor, built-in SEO, analytics, and e-commerce tools under a subscription. You can connect a custom domain, publish blog posts, and manage newsletters.

Self-hosted, if you're more technical:

Astro: A modern web framework built for speed, content, and flexibility. Astro lets you build blogs, portfolios, and full sites using any UI framework, or none at all, with zero JavaScript by default. [5] It supports Markdown, MDX, and server-side rendering, plus integrations for CMSs, themes, and deployment platforms.

Hugo: An open-source static site generator built for efficiency and flexibility. It lets you create blogs and websites using Markdown, shortcodes, and templates. It supports themes, taxonomies, custom content types, and control over site structure without needing a database.

Zola: Another open-source static site generator. Zola uses Markdown for content, Tera templates for layouts, and comes with built-in features like taxonomies, RSS feeds, and syntax highlighting. It requires no database and is easy to configure.

11ty: Pronounced Eleventy. A flexible static site generator that lets you build content-focused websites using plain HTML, Markdown, or templating languages like Nunjucks, Liquid, and others.
11ty requires no database, supports custom data structures, and gives full control over your site’s output.

Jekyll: A popular static site generator that transforms plain text into self-hosted websites and blogs. Jekyll uses Markdown, Liquid templates, and simple configuration to generate content without a database. It supports themes, plugins, and custom layouts, and integrates seamlessly with GitHub Pages for free hosting.

Honorable mention:

Neocities: A modern continuation of Geocities, mainly focused on hand-coding HTML and CSS to create a custom site from scratch. Not ideal for blogging, but cool for showcasing a site and learning web development. It's free and open-source, and you can choose to pay for custom domains and more bandwidth, with no ads or data selling. You can see the silly site I made using Neocities for a D&D campaign I'm a part of at thepub.neocities.org.

Wow, that's a lot of options! Don't get overwhelmed. Here are the basics for a simple site like Bear Blog or a static site generator. You write a post. This post tends to be in Markdown, which is a markup language (like HTML) for creating formatted text. It's actually not too far from something like Microsoft Word. In this case, if you want a header, you can put a pound symbol in front of your header text to tell your site that it should be formatted as one. Same with quotation blocks, bolding, italics, and all that. Here is a simple Markdown cheatsheet provided by Bear Blog. Some other blogging platforms have even more options for formatting, like informational or warning boxes. After you've written a post, you can usually preview it before publishing. While you're writing, you might want to use a live preview to make sure you're formatting it how you intend. After posting, people can go read your post and possibly interact with it in some ways, if you want that.

I'm not going to attempt to describe the AT Protocol when there is another post that does an excellent job. But what I am going to mention, briefly, is using this protocol to follow blogs via Bluesky or another AT Protocol handle. Using something like leaflet.pub, you can create a blog there and follow other similar blogs. Here is an example of a blog on leaflet, and if you have Bluesky, go ahead and test subscribing with it. They also support comments and RSS.

You don't have to memorize what RSS stands for (Really Simple Syndication, if you're curious).
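For the curious, an RSS feed is just an XML file that your site publishes and reader apps poll for new entries. A minimal sketch of one, with entirely made-up placeholder titles and URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <!-- The channel describes the blog itself -->
    <title>Example Blog</title>
    <link>https://example.com</link>
    <description>Posts from an example blog</description>
    <!-- Each item is one post; readers show new items as they appear -->
    <item>
      <title>A Cool Rock I Found</title>
      <link>https://example.com/cool-rock</link>
      <pubDate>Mon, 06 Jan 2025 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>
```

Every platform listed above generates a file like this for you automatically, so you never have to write it by hand.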
RSS is basically how you create a feed, like a Twitter (X) timeline or a Facebook homepage. When you subscribe to someone's blog, [6] you can get a simple, consolidated aggregation of new posts. At this point, RSS is pretty old, but it still works exactly as intended, and most sites have RSS feeds.

What you need to start is a newsreader app. There are a lot of options, so it depends on what you value most. When you subscribe to a website, you put it into your newsreader app, and the app fetches the content and displays it for you, among other neat features. Usually they include nice themes, no ads to bother you, and folder or tag organization. You may have to find a site's feed and copy its link, or your reader app may be able to find it automatically from a browser shortcut or from pasting in the normal link for the website. To learn more about adding a new subscription, see my feeds page.

Here are some suggestions. Feel free to explore multiple and see what sticks:

Feedly: A cloud-based, freemium RSS aggregator with apps and a browser interface. You can use a free account that supports a limited number of sources (about 100 feeds) and basic folders, but many advanced features—such as hiding ads, notes/highlights, power search, integration with Evernote/Pocket, and “Leo” AI filtering—require paid tiers. It supports iOS, Android, and the web (desktop browsers). Feedly is not open source; it is a commercial service.

Inoreader: Also a freemium service, available via the web and on iOS and Android, with synchronization of your reading across devices. The free plan includes many of the core features (RSS subscription, folders, basic filtering), while more powerful features (such as advanced rules, full-text search, premium support, higher feed limits) are gated behind paid tiers. Inoreader is not open source; it is a proprietary service with a freemium model.

NetNewsWire: A native, free, and open-source RSS reader for Apple platforms (macOS, iPhone, iPad).
It offers a clean, native experience and tracks your read/unread status locally or via syncing. Because it’s open source (MIT-licensed), you can inspect or contribute to its code. Its main limitation is platform, since it’s focused on Apple devices. It's also not very visually flashy, if you care about that.

feeeed (with four Es): An iOS/iPadOS (and recently macOS) app that emphasizes a private, on-device reading experience without requiring servers or accounts. It is free (no ads or in-app purchases) and supports RSS subscriptions, YouTube, Reddit, and other sources, plus some AI summarization. Because it is designed to run entirely on device, there is no paid subscription for backend features, and it is private by design. It is not open-source. One personal note from me: I use this as my daily driver, and it has some minor bugs you may notice. It's developed by one person, so it happens.

Reeder: A client app (primarily for Apple platforms: iOS, iPadOS, macOS) that fetches feed data from external services, such as Feedly, Inoreader, or local RSS sources. The new Reeder version supports a unified timeline, filters, and media integration. It is not itself a feed-hosting service but a front end; thus, many of its features (such as sync or advanced filtering) depend on which backend you use. It uses iCloud to sync subscription and timeline state between devices. Reeder is proprietary (closed source) and uses a paid model or in-app purchases for more advanced versions.

Unread: Another client app for Apple platforms, with a focus on elegant reading. It relies on external feed services for syncing (you provide your own RSS or use a service like Feedly). Because Unread is a reader app, its features are more about presentation and gesture support; many of the syncing, feed limits, or premium capabilities depend on your chosen backend service.
I would say Unread is my favorite so far, as it offers a lot for being free and has great syncing, tag organization, and a pleasing interface. It also fetches entire website content to get around certain limitations with some websites' feeds, allowing you to read everything in the app without visiting the website directly.

FreshRSS: A self-hostable, open-source RSS/Atom aggregator that you run on your own server (for example via Docker) and that supports reading through its own web interface or via third-party client apps. It allows full control over feed limits, filtering, theming, and extensions, and it can generate feeds by scraping or filtering content. Because it is open source, there is no paid tier in the software itself (though you may incur hosting costs). Many client apps can connect to a FreshRSS instance for mobile or desktop reading. If you're interested in something interesting you can do with its API, check out Steve's post about automating feeds with FreshRSS.

Click this for a more detailed breakdown of available RSS newsreaders. Additional resource on RSS and Feeds.

Okay, soooo... I have a blog, I have RSS stuff, now what do I subscribe to, and how do I make this social? I'll let blogfeeds.net describe this:

"This takes us to our final point: Feeds. You can probably get away with just the first two items and then sharing it with people you already know, but what about meeting or talking to people you don't know? That's where Feeds come in. The idea is to create another page on your blog that has all the RSS feeds you're subscribed to. By keeping this public and always up to date, someone can visit your page, find someone new, and follow them. Perhaps that person also has a feeds page, and the cycle continues until there is a natural and organic network of people all sharing with each other."

So if you have a blog, consider making a feeds page and sharing it!
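A feeds page like the one described above doesn't need to be anything fancy. On a hand-coded site, it could be a plain HTML list (the blog names and feed URLs below are made up for illustration):

```html
<!-- feeds.html: a public list of the blogs I follow -->
<h1>Feeds I follow</h1>
<ul>
  <li><a href="https://example-blog-one.com/feed.xml">Example Blog One</a></li>
  <li><a href="https://example-blog-two.net/rss/">Example Blog Two</a></li>
</ul>
```

On Bear Blog or a static site generator, the same idea works as a simple Markdown list of links on a dedicated page.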
If your RSS reader supports OPML file exports and imports, perhaps you can share that file as well to make your feeds easier to pass along. Steve has an example of a feeds page, and blogfeeds.net has an aggregation of known blogs using feeds pages, creating a centralized place to follow blogs with this same mindset. Once you make a feeds page, you can submit it to the site to get added. Then people can find your blog!

There is debate on the best method for interaction with others via blogs. You have a few options.

Email: Share an email address people can contact you at, and when someone has something to say, they can email you about it. This allows for intentional, nuanced discussion. At the end of every post I use a button template to facilitate this (totally stolen from Steve, again), along with the accompanying CSS, [7] which Bear Blog lets you edit. For each post, I manually change the subject line (Re: {{post_title}}) to whatever the post title is. That way, someone can click the button and their mail client opens ready to go, with a subject line pertaining to the post they want to talk about. Change the color values to whatever you want to match your site! See the bottom of this post to see what it looks like.

Comments: Comments are a tricky one. They're looked down on by some because of the lack of nuance and the moderation stress, which is why Bear Blog doesn't natively have them. There are various ways to do comments, and it heavily depends on what blogging platform you choose, so here is Bear Blog's stance on it and some recommendations for setting up comments if you want.

Guestbooks: This is an old form of website interaction that dates back to at least Geocities. The concept is that visitors to your site can leave a quick thought, their name, and optionally their own website to let you know they visited. You can see an example on my website, and my recommended service for a free guestbook is Guestbooks. You can choose a default theme and edit it if you want to match the rest of your site, implement spam protection, and access a dashboard for managing and deleting comments if needed.
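The original email-button snippet and its CSS didn't survive the trip into this page. A minimal sketch of the same idea, where the address, subject, class name, and colors are placeholders rather than the original values:

```html
<!-- "Reply by email" button; swap in your own address and the post's title -->
<style>
  .email-button {
    display: inline-block;
    padding: 0.5em 1em;
    border-radius: 4px;
    background-color: #2a2a2a; /* change to match your site */
    color: #ffffff;            /* change to match your site */
    text-decoration: none;
  }
</style>
<a class="email-button"
   href="mailto:you@example.com?subject=Re:%20Post%20Title">
  Reply by email
</a>
```

Note that the subject text in the mailto: link has to be URL-encoded (spaces become %20), which is why editing it per post by hand, as described above, works fine.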
Here are some ideas to get you started and inspired:

Add new pages, like a link to your other social media or music listening platforms, or a page dedicated to your pet.

Email a random person on a blog to give your thoughts on a post of theirs or simply tell them their site is cool. Create an email just for this and for your website for privacy and separation, if desired.

Add a Now page. It's a section specifically to tell others what you are focused on at this point of your life; a /now page shares what you’d tell a friend you hadn’t seen in a year. Read more about it at nownownow.com, and see an example on Clint McMahon's blog.

Write a post about a cool rock you found in your yard, or something similarly asinine. Revel in the lack of effort. Or, make a post containing 1-3 sentences only.

Join a webring.

Make a page called Reviews, to review movies, books, TV shows, games, kitchen appliances, etc.

That's all from me for now. Subscribe to my RSS feed, email me using the button at the bottom to tell me this post sucks, or that it's great, or that you have something to suggest to edit it, and bring back the old web.

Subscribe via email or RSS

[1] Washington Post – Five points for anger, one for a ‘like’: How Facebook’s formula fostered rage and misinformation. Link. Unpaywalled.

[2] The Guardian – Facebook reveals news feed experiment to control emotions. Link.

[3] This website was created by Steve, who has their own Bear Blog. Read Resurrect the Old Web, which inspired this post.

[4] A webring is a collection of websites linked together in a circular structure, organized around a specific theme. Each site has navigation links to the next and previous members, forming a ring. A central site usually lists all members to prevent breaking the ring if someone's site goes offline.

[5] Take a look at this Reddit discussion on why less JavaScript can be better.

[6] Or a news site, podcast, or supported social media platform like Bluesky, and even subreddits.
[7] If you don't know what HTML and CSS are: basically, the first snippet of code I shared is HTML, used for the basic text and formatting of a website; CSS is used to apply fancy styles and color, among other things.
NetNewsWire: A native, free, and open-source RSS reader for Apple platforms (macOS, iPhone, iPad). It offers a clean, native experience and tracks your read/unread status locally or via syncing. Because it’s open source (MIT-licensed), you can inspect or contribute to its code. Its main limitation is platform support, since it’s focused on Apple devices. It's also not very visually flashy, if you care about that.

feeeed (with four Es): An iOS/iPadOS (and recently macOS) app that emphasizes a private, on-device reading experience without requiring servers or accounts. It is free (no ads or in-app purchases) and supports RSS subscriptions, YouTube, Reddit, and other sources, plus some AI summarization. Because it is designed to run entirely on device, there is no paid subscription for backend features, and it is private by design. It is not open source. One personal note from me: I use this as my daily driver, and it has some minor bugs you may notice. It's developed by one person, so it happens.

Reeder: A client app (primarily for Apple platforms: iOS, iPadOS, macOS) that fetches feed data from external services, such as Feedly, Inoreader, or local RSS sources. The new Reeder version supports a unified timeline, filters, and media integration. It is not itself a feed-hosting service but a front end; thus, many of its features (such as sync or advanced filtering) depend on which backend you use. It uses iCloud to sync subscription and timeline state between devices. Reeder is proprietary (closed source) and uses a paid model or in-app purchases for more advanced versions.

Unread: Another client app for Apple platforms with a focus on elegant reading. It relies on external feed services for syncing (you provide your own RSS or use a service like Feedly). Because Unread is a reader app, its features are more about presentation and gesture support; many of the syncing, feed limits, or premium capabilities depend on your chosen backend service.
I would say Unread is my favorite so far, as it offers a lot for being free, has great syncing, tag organization, and a pleasing interface. It also fetches entire website content to get around certain limitations with some websites' feeds, allowing you to read everything in the app without visiting the website directly.

FreshRSS: A self-hostable, open-source RSS/Atom aggregator that you run on your own server (e.g. via Docker) and which supports reading through its own web interface or via third-party client apps. It allows full control over feed limits, filtering, theming, and extensions, and it can generate feeds by scraping or filtering content. Because it is open source, there is no paid tier in the software itself (though you may incur hosting costs). Many client apps can connect to a FreshRSS instance for mobile or desktop reading. If you're interested in something interesting you can do with its API, check out Steve's post about automating feeds with FreshRSS.

Email: Share an email people can contact you at, and when someone has something to say, they can email you about it. This allows for intentional, nuanced discussion. Here is a template I use at the end of every post to facilitate this (totally stolen from Steve, again):

Comments: Comments are a tricky one. They're looked down on by some for their lack of nuance and the moderation stress they bring, which is why Bear Blog doesn't natively have them. There are various ways to do comments, and it heavily depends on what blogging platform you choose, so here is Bear Blog's stance on it and some recommendations for setting up comments if you want.

Guestbooks: This is an old form of website interaction that dates back to at least Geocities. The concept is that visitors to your site can leave a quick thought, their name, and optionally their own website to let you know they visited. You can see an example on my website, and my recommended service for a free guestbook is Guestbooks.
You can choose a default theme and edit it if you want to match the rest of your site, implement spam protection, and access a dashboard for managing and deleting comments if needed.

Add new pages, like a link to your other social media or music listening platforms, or a page dedicated to your pet.

Email a random person on a blog to give your thoughts on a post of theirs or simply tell them their site is cool. Create an email just for this and for your website for privacy and separation, if desired.

Add a Now page. It's a section specifically to tell others what you are focused on at this point of your life. Read more about it at nownownow.com. See an example on Clint McMahon's blog.

Write a post about a cool rock you found in your yard, or something similarly asinine. Revel in the lack of effort. Or make a post containing 1-3 sentences only.

Join a webring.

Make a page called Reviews, to review movies, books, TV shows, games, kitchen appliances, etc.

Washington Post – Five points for anger, one for a ‘like’: How Facebook’s formula fostered rage and misinformation. Link. • Unpaywalled. ↩

The Guardian – Facebook reveals news feed experiment to control emotions. Link. ↩

This website was created by Steve, who has their own Bear Blog. Read Resurrect the Old Web, which inspired this post. ↩

A webring is a collection of websites linked together in a circular structure, organized around a specific theme. Each site has navigation links to the next and previous members, forming a ring. A central site usually lists all members to prevent breaking the ring if someone's site goes offline. ↩

Take a look at this Reddit discussion on why less JavaScript can be better. ↩

Or news site, podcast, or supported social media platform like Bluesky, and even subreddits.
↩ If you don't know what HTML and CSS are: basically, the first snippet of code I shared is HTML, used for the basic text and formatting of a website; CSS is used to apply fancy styles and color, among other things. ↩

Manuel Moreale 1 week ago

Blake Watson

This week on the People and Blogs series we have an interview with Blake Watson, whose blog can be found at blakewatson.com . Tired of RSS? Read this in your browser or sign up for the newsletter . The People and Blogs series is supported by Jaga Santagostino and the other 121 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. Sure! I’m Blake. I live in a small city near Jackson, Mississippi, USA. I work for MRI Technologies as a frontend engineer, building bespoke web apps for NASA. Previously I worked at an ad agency as an interactive designer. I have a neuromuscular condition called spinal muscular atrophy (SMA). It's a progressive condition that causes my muscles to become weaker over time. Because of that, I use a power wheelchair and a whole host of assistive technologies big and small. I rely on caregivers for most daily activities like taking a shower, getting dressed, and eating—just to name a few. I am able to use a computer on my own. I knew from almost the first time I used one that it was going to be important in my life. I studied Business Information Systems in college as a way to take computer-related courses without all the math of computer science (which scared me at the time). When I graduated, I had a tough time finding a job making websites. I did a bit of freelance work and volunteer work to build up a portfolio, but was otherwise unemployed for several years. I finally got my foot in the door and I recently celebrated a milestone of being employed for a decade . When I'm not working, I'm probably tinkering on side projects . I'm somewhat of a side project and home-cooked app enthusiast. I just really enjoy making and using my own tools. Over the last 10 years, I've gotten into playing Dungeons and Dragons and a lot of my side projects have been related to D&D. I enjoy design, typography, strategy games, storytelling, writing, programming, gamedev, and music. 
I got hooked on making websites in high school and college in the early 2000s. A friend of mine in high school had a sports news website. I want to say it was made with the Homestead site builder or something similar. I started writing for it and helping with it. I couldn’t get enough so I started making my own websites using WYSIWYG page builders. But I became increasingly frustrated with the limitations of page builders. Designing sites felt clunky and I couldn’t get elements to do exactly what I wanted them to do. I had a few blogs on other services over the years. Xanga was maybe the first one. Then I had one on Blogger for a while. In 2005, I took a course called Advanced Languages 1. It turned out to be JavaScript. Learning JavaScript necessitated learning HTML. Throughout the course I became obsessed with learning HTML, CSS, and JavaScript. Eventually, in August of 2005— twenty years ago —I purchased the domain blakewatson.com . I iterated on it multiple times a year at first. It morphed from quirky design to quirkier design as I learned more CSS. It was a personal homepage, but I blogged at other services. Thanks to RSS, I could list my recent blog posts on my website. When I graduated from college, my personal website became more of a web designer's portfolio, a professional site that I would use to attract clients and describe my services. But around that time I was learning how to use WordPress and I started a self-hosted WordPress blog called I hate stairs . It was an extremely personal disability-related and life journaling type of blog that I ran for several years. When I got my first full-time position and didn't need to freelance any longer, I converted blakewatson.com back into a personal website. But this time, primarily a blog. I discontinued I hate stairs (though I maintain an archive and all the original URLs work ). I had always looked up to various web designers in the 2000s who had web development related blogs. 
People like Jeffrey Zeldman, Andy Clarke, Jason Santa Maria, and Tina Roth Eisenberg. For the past decade, I've blogged about web design, disability, and assistive tech—with the odd random topic here or there. I used to blog only when inspiration struck me hard enough to jolt my lazy ass out of whatever else I was doing. That strategy left me writing three or four articles a year (I don’t know why, but I think of my blog posts as articles in a minor publication, and this hasn’t helped me do anything but self-edit—I need to snap out of it and just post). In March 2023, however, I noticed that I had written an article every month so far that year. I decided to keep up the streak. And ever since then, I've posted at least one article a month on my blog. I realize that isn't very frequent for some people, but I enjoy that pacing, although I wouldn't mind producing a handful more per year. Since I'm purposefully posting more, I've started keeping a list of ideas in my notes just so I have something to look through when it's time to write. I use Obsidian mostly for that kind of thing. The writing itself almost always happens in iA Writer. This app is critical to my process because I am someone who likes to tinker with settings and fonts and pretty much anything I can configure. If I want to get actual writing done, I need constraints. iA Writer is perfect because it looks and works great by default and has very few formatting options. I think I paid $10 for this app one time ten or more years ago. That has to be the best deal I've ever gotten on anything. I usually draft in Writer and then preview it on my site locally to proofread. I have to proofread on the website, in the design where it will live. If I proofread in the editor I will miss all kinds of typos. So I pop back and forth between the browser and the editor fixing things as I go. I can no longer type on a physical keyboard. I use a mix of onscreen keyboard and dictation when writing prose.
Typing is a chore and part of the reason I don’t blog more often. It usually takes me several hours to draft, proofread, and publish a post. I mostly need to be at my desk because I have my necessary assistive tech equipment set up there. I romanticize the idea of writing in a comfy nook or at a cozy coffee shop. I've tried packing up my setup and taking it to a coffee shop, but in practice I get precious little writing done that way. What I usually do to get into a good flow state is put on my AirPods Pro, turn on noise cancellation, maybe have some ambient background noise or music , and just write. Preferably while sipping coffee or soda. But if I could have any environment I wanted, I would be sitting in a small room by a window a few stories up in a quaint little building from the game Townscaper , clacking away on an old typewriter or scribbling in a journal with a Parker Jotter . I've bounced around a bit in terms of tech stack, but in 2024, I migrated from a self-hosted WordPress site to a generated static site with Eleventy . My site is hosted on NearlyFreeSpeech.NET (NFSN)—a shared hosting service I love for its simplistic homemade admin system, and powerful VPS-like capabilities. My domain is registered with them as well, although I’m letting Cloudflare handle my DNS for now. I used Eleventy for the first time in 2020 and became a huge fan. I was stoked to migrate blakewatson.com . The source code is in a private repo on GitHub. Whenever I push to the main branch, DeployHQ picks it up and deploys it to my server. I also have a somewhat convoluted setup that checks for social media posts and displays them on my website by rebuilding and deploying automatically whenever I post. It's more just a way for me to have an archive of my posts on Mastodon than anything. Because my website is so old, I have some files not in my repo that live on my server. 
It is somewhat of a sprawling living organism at this point, with various small apps and tools (and even games !) deployed to sub-directories. I have a weekly scheduled task that runs and saves the entire site to Backblaze B2 . You know, I'm happy to say that I'd mostly do the same thing. I think everyone should have their own website. I would still choose to blog at my own domain name. Probably still a static website. I might structure things a bit differently. If I were designing it now, I might make more allowances for title-less, short posts (technically I can do this now, but they get lumped into my social feed, which I'm calling my Microblog, and kind of get lost). I might design it to be a little weirder rather than buttoned up as it is now. And hey, it's my website. I still might do that. Tinkering with your personal website is one of life's great joys. If you can't think of anything to do with your website, here are a hundred ideas for you . I don't make money from my website directly, but having a website was critical in getting my first job and getting clients before that. So, in a way, all the money I've made working could be attributed to having a personal website. I have a lot of websites and a lot of domains, so it's a little hard to figure out exactly what blakewatson.com itself costs. NFSN is a pay-as-you-go service. I'm currently hosting 13 websites of varying sizes and complexity, and my monthly cost aside from domains is about $23.49. $5 of that is an optional support membership. I could probably get the cost down further by putting the smaller sites together on a single shared server. I pay about $14 per year for the domain these days. I pay $10.50 per month for DeployHQ , but I use it for multiple sites including a for-profit side project, so it doesn’t really cost anything to use it for my blog (this is the type of mental gymnastics I like to do). I pay $15 per month for Fathom Analytics . 
In my mind, this is also subsidized by my for-profit side project. I mentioned that I back up my website to Backblaze B2. It's extremely affordable, and I think I'm paying below 50 cents per month currently for the amount of storage I'm using (and that also includes several websites). If you also throw in the cost of tools like Tower and Sketch, then there's another $200 worth of costs per year. But I use those programs for many things other than my blog. When you get down to it, blogs are fairly inexpensive to run when they are small and personal like mine. I could probably get the price down to free, save for the domain name, if I wanted to use something like Cloudflare Pages to host it—or maybe a free blogging service. I don't mind people monetizing their blogs at all. I mean if it's obnoxious then I'm probably not going to stay on your website very long. But if it's done tastefully with respect to the readers then good for you. I also don't mind paying to support bloggers in some cases. I have a number of subscriptions for various people to support their writing or other creative output. Here are some blogs I'm impressed with in no particular order. Many of these people have been featured in this series before. I'd like to take this opportunity to mention Anne Sturdivant. She was interviewed here on People & Blogs. When I first discovered this series, I put her blog in the suggestion box. I was impressed with her personal website empire and the amount of content she produced. Sadly, Anne passed away earlier this year. We were internet buddies and I miss her. 💜 I'd like to share a handful of my side projects for anyone who might be interested. Now that you're done reading the interview, go check the blog and subscribe to the RSS feed. If you're looking for more content, go read one of the previous 109 interviews. Make sure to also say thank you to Bomburache and the other 121 supporters for making this series possible. Chris Coyier.
" Mediocre ideas, showing up, and persistence. " <3 Jim Nielsen . Continually produces smart content. Don't know how he does it. Nicole Kinzel . She has posted nearly daily for over two years capturing human struggles and life with SMA through free verse poetry. Dave Rupert . I enjoy the balance of tech and personal stuff and the honesty of the writing. Tyler Sticka . His blog is so clean you could eat off of it. A good mix of tech and personal topics. Delightful animations. Maciej Cegłowski . Infrequent and longform. Usually interesting regardless of whether I agree or disagree. Brianna Albers . I’m cheating because this is a column and not a blog per se. But her writing reads like a blog—it's personal, contemplative, and compelling. There are so very few representations of life with SMA online that I'd be remiss not to mention her. Daring Fireball . A classic blog I’ve read for years. Good for Apple news but also interesting finds in typography and design. Robb Knight . To me, Robb’s website is the epitome of the modern indieweb homepage. It’s quirky, fun, and full of content of all kinds. And that font. :chefskiss: Katherine Yang . A relatively new blog. Beautiful site design. Katherine's site feels fresh and experimental and exudes humanity. HTML for People . I wrote this web book for anyone who is interested in learning HTML to make websites. I wrote this to be radically beginner-friendly. The focus is on what you can accomplish with HTML rather than dwelling on a lot of technical information. A Fine Start . This is the for-profit side project I mentioned. It is a new tab page replacement for your web browser. I originally made it for myself because I wanted all of my favorite links to be easily clickable from every new tab. I decided to turn it into a product. The vast majority of the features are free. You only pay if you want to automatically sync your links with other browsers and devices. Minimal Character Sheet . 
I mentioned enjoying Dungeons and Dragons. This is a web app for managing a D&D 5th edition character. I made it to be a freeform digital character sheet. It's similar to using a form fillable PDF, except that you have a lot more room to write. It doesn't force many particular limitations on your character since you can write whatever you want. Totally free.

Lea Verou 2 weeks ago

In the economy of user effort, be a bargain, not a scam

“Simple things should be simple, complex things should be possible.” — Alan Kay [source]

One of my favorite product design principles is Alan Kay’s “Simple things should be simple, complex things should be possible”. [1] I had been saying it almost verbatim long before I encountered Kay’s quote. Kay’s maxim is deceptively simple, but its implications run deep. It isn’t just a design ideal — it’s a call to continually balance friction, scope, and tradeoffs in service of the people using our products. This philosophy played a big part in Prism’s success back in 2012, helping it become the web’s de facto syntax highlighter for years, with over 2 billion npm downloads. Highlighting code on a page required including just two files, with no markup changes. Styling used readable CSS class names. Even adding new languages — the most common “complex” use case — required far less knowledge and effort than alternatives. At the same time, Prism exposed a deep extensibility model so plugin authors could patch internals and dramatically alter behavior. These choices are rarely free. The friendly styling API increased clash risk, and deep extensibility reduced encapsulation. These were conscious tradeoffs, and they weren’t easy. Simple refers to use cases that are simple from the user’s perspective, i.e. the most common use cases. They may be hard to implement, and interface simplicity is often inversely correlated with implementation simplicity. And which things are complex depends on product scope. Instagram’s complex cases are vastly different than Photoshop’s complex cases, but as long as there is a range, Kay’s principle still applies. Since Alan Kay was a computer scientist, his quote is typically framed as a PL or API design principle, but that sells it short. It applies to a much, much broader class of interfaces. This class hinges on the distribution of use cases. Products often cut scope by identifying the ~20% of use cases that drive ~80% of usage — aka the Pareto Principle.
Some products, however, have such diverse use cases that Pareto doesn’t meaningfully apply to the product as a whole. There are common use cases and niche use cases, but no clean 20-80 split. The long tail of niche use cases is so numerous, it becomes significant in aggregate. For lack of a better term, I’ll call these long-tail UIs. Nearly all creative tools are long-tail UIs. That’s why Kay’s maxim works so well for programming languages and APIs — both are types of creative interfaces. But so are graphics editors, word processors, spreadsheets, and countless other interfaces that help humans create artifacts — even some you would never describe as creative. Yes, programming languages and APIs are user interfaces. If this surprises you, watch my DotJS 2024 talk titled “API Design is UI Design”. It’s only 20 minutes, but covers a lot of ground, including some of the ideas in this post. I include both code and GUI examples to underscore this point; if the API examples aren’t your thing, skip them and the post will still make sense. You wouldn’t describe Google Calendar as a creative tool, but it is a tool that helps humans create artifacts (calendar events). It is also a long-tail product: there is a set of common, conceptually simple cases (one-off events at a specific time and date), and a long tail of complex use cases (recurring events, guests, multiple calendars, timezones, etc.). Indeed, Kay’s maxim has clearly been used in its design. The simple case has been so optimized that you can literally add a one-hour calendar event with a single click (using a placeholder title). A different duration can be set after that first click through dragging [2]. But almost every edge case is also catered to — with additional user effort. Google Calendar is also an example of an interface that digitally encodes real life, demonstrating that complex use cases are not always power user use cases. Often, the complexity is driven by life events. E.g.
your taxes may be complex without you being a power user of tax software, and your family situation may be unusual without you being a power user of every form that asks about it. The Pareto Principle is still useful for individual features , as they tend to be more narrowly defined. E.g. there is a set of spreadsheet formulas (actually much smaller than 20%) that drives >80% of formula usage. While creative tools are the poster child of long-tail UIs, there are long-tail components in many transactional interfaces such as e-commerce or meal delivery (e.g. result filtering & sorting, product personalization interfaces, etc.). Filtering UIs are another big category of long-tail UIs, and they involve so many tradeoffs and tough design decisions you could literally write a book about just them. Airbnb’s filtering UI here is definitely making an effort to make simple things easy with (personalized! 😍) shortcuts and complex things possible via more granular controls. Picture a plane with two axes: the horizontal axis being the complexity of the desired task (again from the user’s perspective, nothing to do with implementation complexity), and the vertical axis the cognitive and/or physical effort users need to expend to accomplish their task using a given interface. Following Kay’s maxim guarantees these two points: But even if we get these two points — what about all the points in between? There are a ton of different ways to connect them, and they produce vastly different overall user experiences. How does your interface fare when a use case is only slightly more complex? Are users yeeted into the deep end of interface complexity (bad), or do they only need to invest a proportional, incremental amount of effort to achieve their goal (good)? Meet the complexity-to-effort curve , the most important usability metric you’ve never heard of. 
For delightful user experiences, making simple things easy and complex things possible is not enough — the transition between the two should also be smooth. You see, simple use cases are the spherical cows in space of product design . They work great for prototypes to convince stakeholders, or in marketing demos, but the real world is messy . Most artifacts that users need to create to achieve their real-life goals rarely fit into your “simple” flows completely, no matter how well you’ve done your homework. They are mostly simple — with a liiiiitle wart here and there. For a long-tail interface to serve user needs well in practice , we also need to design the curve, not just its endpoints . A model with surprising predictive power is to treat user effort as a currency that users are spending to buy solutions to their problems. Nobody likes paying it; in an ideal world software would read our mind and execute perfectly with zero user effort. Since we don’t live in such a world, users are typically willing to pay more in effort when they feel their use case warrants it. Just like regular pricing, actual user experience often depends more on the relationship between cost and expectation (budget) than on the absolute cost itself. If you pay more than you expected, you feel ripped off. You may still pay it because you need the product in the moment, but you’ll be looking for a better deal in the future. And if you pay less than you expected, you feel like you got a bargain, with all the delight and loyalty that entails. Incremental user effort cost should be proportional to incremental value gained. Suppose you’re ordering pizza. You want a simple cheese pizza with ham and mushrooms. You use the online ordering system, and you notice that adding ham to your pizza triples its price. We’re not talking some kind of fancy ham where the pigs were fed on caviar and bathed in champagne, just a regular run-of-the-mill pizza topping. 
You may still order it if you’re starving and no other options are available, but how does it make you feel? It’s not that different when the currency is user effort. The all too familiar “But I just wanted to _________, why is it so hard?”. When a slight increase in complexity results in a significant increase in user effort cost, we have a usability cliff. Usability cliffs make users feel resentful, just like the customers of our fictitious pizza shop. A usability cliff is when a small increase in use case complexity requires a large increase in user effort. Usability cliffs are very common in products that make simple things easy and complex things possible through entirely separate flows with no integration between them: a super high-level one that caters to the most common use case with little or no flexibility, and a very low-level one that is an escape hatch: it lets users do whatever, but they have to recreate the solution to the simple use case from scratch before they can tweak it. Simple things are certainly easy: all we need to get a video with a nice sleek set of controls that work well on every device is a single attribute: controls. We just slap it on our video element and we’re done with a single line of HTML: <video src="…" controls></video>. Now suppose use case complexity increases just a little. Maybe I want to add buttons to jump 10 seconds back or forwards. Or a language picker for subtitles. Or just to hide the volume control on a video that has no audio track. None of these are particularly niche, but the default controls are all-or-nothing: the only way to change them is to reimplement the whole toolbar from scratch, which takes hundreds of lines of code to do well. Simple things are easy and complex things are possible. But once use case complexity crosses a certain (low) threshold, user effort abruptly shoots up. That’s a usability cliff.
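To make the all-or-nothing cliff concrete, here is a hedged sketch in JavaScript. The option objects are hypothetical (no such API exists on the platform); the point is that HTML gives you one boolean, and the very first step beyond it is rebuilding the whole toolbar by hand:

```javascript
// Hypothetical sketch of the cliff — these option shapes do not exist.
// What HTML actually offers is a single boolean attribute:
const whatYouGet = { controls: true };

// What a smoother complexity-to-effort curve might look like
// (hypothetical: opting out of just one control):
const whatYouWish = { controls: { volume: false } };

// In reality, hiding one control means rebuilding the entire toolbar
// yourself — every remaining button, by hand:
const customToolbar = ["play", "seek", "subtitles", "fullscreen"]
  .map(name => `<button data-action="${name}">${name}</button>`)
  .join("");

console.log(customToolbar.includes("volume")); // false — gone, but at the cost of rebuilding everything
```

The jump from `controls: true` to a hand-rolled toolbar (plus all the event wiring, keyboard support, and styling a real one needs) is the abrupt effort spike the curve is describing.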
For Instagram’s photo editor, the simple use case is canned filters, whereas the complex ones are those requiring tweaking through individual low-level controls. However, they are implemented as separate flows: you can tweak the filter’s intensity , but you can’t see or adjust the primitives it’s built from. You can layer both types of edits on the same image, but they are additive, which doesn’t work well. Ideally, the two panels would be integrated, so that selecting a filter would adjust the low-level controls accordingly, which would facilitate incremental tweaking AND would serve as a teaching aid for how filters work. My favorite end-user facing product that gets this right is Coda , a cross between a document editor, a spreadsheet, and a database. All over its UI, it supports entering formulas instead of raw values, which makes complex things possible. To make simple things easy, it also provides the GUI you’d expect even without a formula language. But here’s the twist: these presets generate formulas behind the scenes that users can tweak ! Whenever users need to go a little beyond what the UI provides, they can switch to the formula editor and adjust what was generated — far easier than writing it from scratch. Another nice touch: “And” is not just communicating how multiple filters are combined, but is also a control that lets users edit the logic. Defining high-level abstractions in terms of low-level primitives is a great way to achieve a smooth complexity-to-effort curve, as it allows you to expose tweaking at various intermediate levels and scopes. The downside is that it can sometimes constrain the types of high-level solutions that can be implemented. Whether the tradeoff is worth it depends on the product and use cases. If you like eating out, this may be a familiar scenario: — I would like the rib-eye please, medium-rare. — Thank you sir. How would you like your steak cooked? 
Keep user effort close to the minimum necessary to declare intent.

Annoying, right? And yet, this is how many user interfaces work; expecting users to communicate the same intent multiple times in slightly different ways. If incremental value should require incremental user effort, an obvious corollary is that things that produce no value should not require user effort. Using the currency model makes this obvious: who likes paying without getting anything in return? Respect user effort. Treat it as a scarce resource — just like regular currency — and keep it close to the minimum necessary to declare intent. Do not require users to do work that confers them no benefit, and could have been handled by the UI. If it can be derived from other input, it should be derived from other input.

Source: NNGroup (adapted).

A once ubiquitous example that is thankfully going away is the credit card form which asks for the type of credit card in a separate dropdown. Credit card numbers are designed so that the type of credit card can be determined from the first four digits. There is zero reason to ask for it separately. Beyond wasting user effort, duplicating input that can be derived introduces an unnecessary error condition that you now need to handle: what happens when the entered type is not consistent with the entered number? User actions that meaningfully communicate intent to the interface are signal. Any other step users need to take to accomplish their goal is noise. This includes communicating the same input more than once, providing input separately that could be derived from other input with complete or high certainty, transforming input from their mental model to the interface’s mental model, and any other demand for user effort that does not serve to communicate new information about the user’s goal. Some noise is unavoidable. The only way to have 100% signal-to-noise ratio would be if the interface could mind read.
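The credit-card point can be sketched in a few lines: derive the network from digits the user already typed instead of asking again. The prefix rules below are deliberately simplified for illustration; real issuer identification ranges are longer and more nuanced.

```javascript
// Infer the card network from the number itself, instead of a separate
// dropdown. Prefix rules simplified for illustration only.
function cardType(number) {
  const digits = number.replace(/\D/g, ""); // ignore spaces and dashes
  if (/^4/.test(digits)) return "visa";
  if (/^3[47]/.test(digits)) return "amex";
  if (/^5[1-5]/.test(digits)) return "mastercard";
  return "unknown";
}

console.log(cardType("4111 1111 1111 1111")); // "visa"
```

The form can still display the detected type as feedback, which turns a redundant question into confirmation and eliminates the "type does not match number" error state entirely.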
But too much noise increases friction and obfuscates signal. A short yet demonstrative example is the web platform’s methods for programmatically removing an element from the page. To signal intent in this case, the user needs to communicate two things: (a) what they want to do (remove an element), and (b) which element to remove. Anything beyond that is noise. The modern DOM method, element.remove(), has an extremely high signal-to-noise ratio. It’s hard to imagine a more concise way to signal intent. However, the older method that it replaced had much worse ergonomics. It required two parameters: the element to remove, and its parent. But the parent is not a separate source of truth — it would always be the child node’s parent! As a result, its actual usage involved boilerplate, where developers had to write the much noisier element.parentNode.removeChild(element) [3]. Boilerplate is repetitive code that users need to include without thought, because it does not actually communicate intent. It’s the software version of red tape: hoops you need to jump through to accomplish your goal that serve no obvious purpose in furthering said goal, except for the fact that they are required. In this case, the amount of boilerplate may seem small, but when viewed as a percentage of the total amount of code, the difference is staggering. The exact ratio (81% vs 20% here) varies based on specifics such as variable names, but when the difference is meaningful, it transcends these types of low-level details. Of course, the boilerplate was usually encapsulated in utility functions, which provided a similar signal-to-noise ratio to the modern method. However, user-defined abstractions don’t come for free; there is an effort (and learnability) tax there, too. This is also why the front-end web industry gravitated towards component architectures: they increase signal-to-noise ratio by encapsulating boilerplate.
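The ergonomics gap can be made runnable outside a browser with a tiny mock node. The mock objects below are hypothetical, but the two call shapes match the real removeChild and remove DOM methods:

```javascript
// Minimal stand-in for a DOM parent/child pair, so the example runs in Node.
const parent = {
  children: [],
  removeChild(node) {
    this.children = this.children.filter((c) => c !== node);
    node.parentNode = null;
    return node;
  },
};

const el = {
  parentNode: parent,
  // Modern shape: "what" is the method name, "which" is the receiver.
  remove() {
    this.parentNode.removeChild(this);
  },
};
parent.children.push(el);

// Old shape — the caller restates derivable information (the parent):
//   el.parentNode.removeChild(el);
// Modern shape — exactly the two pieces of intent, nothing more:
el.remove();

console.log(parent.children.length); // → 0
```

Notice that the modern method can be defined entirely in terms of the old one; the extra parameter never carried any information the element didn't already have.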
As an exercise for the reader, try to calculate the signal-to-noise ratio of a Bootstrap accordion (or any other complex Bootstrap component). When pointing out friction issues in design reviews, I have sometimes heard “users have not complained about this”. This reveals a fundamental misunderstanding about the psychology of user feedback: users are much more vocal about things not being possible than about things being hard. The reason becomes clear if we look at the neuroscience of each. Friction is transient in working memory (prefrontal cortex). After completing a task, details fade. The negative emotion persists and accumulates, but filing a complaint requires prefrontal engagement that is brief or absent. Users often can’t articulate why the software feels unpleasant: the specifics vanish; the feeling remains. Hard limitations, on the other hand, persist as conscious appraisals. The trigger doesn’t go away, since there is no workaround, so it’s far more likely to surface in explicit user feedback. Both types of pain points cause negative emotions, but friction is primarily processed by the limbic system (emotion), whereas hard limitations remain in the prefrontal cortex (reasoning). This also means that when users finally do reach the breaking point and complain about friction, you had better listen. Second, user complaints are filed when there is a mismatch in expectations: things are not possible but the user feels they should be, or interactions cost more user effort than the user had budgeted, e.g. because they know that a competing product offers the same feature for less (work). Often, users have been conditioned to expect poor user experiences, either because all options in the category are high friction, or because the user is too novice to know better [4].
So they begrudgingly pay the price, and don’t think they have the right to complain, because it’s just how things are. You might ask, “If all competitors are equally high-friction, how does this hurt us?” An unmet need is a standing invitation to disruption that a competitor can exploit at any time. Because you’re not only competing within a category; you’re competing with all alternatives — including nonconsumption (see Jobs-to-be-Done). Even for retention, users can defect to a different category altogether (e.g., building native apps instead of web apps). Historical examples abound. When it comes to actual currency, a familiar example is Airbnb: until it came along, nobody would complain that a hotel of average price was expensive — it was just the price of hotels. If you couldn’t afford it, you just couldn’t afford to travel, period. But once Airbnb showed there was a cheaper alternative to hotels as a whole, tons of people jumped ship. It’s no different when the currency is user effort. Stripe took the payment API market by storm when it demonstrated that payment APIs did not have to be so high-friction. The iPhone disrupted the smartphone market when it demonstrated that no, you did not have to be highly technical to use a smartphone. The list goes on. Unfortunately, friction is hard to instrument. With good telemetry you can detect specific issues (e.g., dead clicks), but there is no KPI that measures friction as a whole. And no, NPS isn’t it — and you’re probably using it wrong anyway. Instead, the emotional residue from friction quietly drags many metrics down (churn, conversion, task completion), sending teams in circles like blind men touching an elephant. That’s why dashboards must be paired with product vision and proactive, first-principles product leadership.
Steve Jobs exemplified this posture: proactively, aggressively eliminating friction that had been presented as “inevitable.” He challenged unnecessary choices, delays, and jargon, without waiting for KPIs to grant permission. Do mice really need multiple buttons? Does installing software really need multiple steps? Do smartphones really need a stylus? Of course, this worked because he had the authority to protect the vision; most orgs need explicit trust to avoid diluting it. So, if there is no metric for friction, how do you identify it? Reducing friction rarely comes for free, just because someone had a good idea. These cases do exist, and they are great, but it usually takes sacrifices. And without it being an organizational priority, it’s very hard to steer these tradeoffs in the right direction. The most common tradeoff is implementation complexity. Simplifying user experience is usually a process of driving complexity inwards and encapsulating it in the implementation. Explicit, low-level interfaces are far easier to implement, which is why there are so many of them. Especially as deadlines loom, engineers will often push towards externalizing complexity into the user interface, so that they can ship faster. And if Product leans more data-driven than data-informed, it’s easy to look at customer feedback and conclude that what users need is more features (it’s not). Faucets are a classic illustration. The first design is a thin abstraction: it exposes the underlying implementation directly (one tap per water supply), passing the complexity on to users, who now need to do their own translation of temperature and pressure into amounts of hot and cold water. It prioritizes implementation simplicity at the expense of wasting user effort. The second design prioritizes user needs and abstracts the underlying implementation to support the user’s mental model: it provides controls to adjust the water temperature and pressure independently, and internally translates them into amounts of hot and cold water.
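The mixer faucet's internal translation can be sketched as a small function. This is a hypothetical linear model, just to show where the driven-inward complexity lives: temp of 0 means all cold, 1 means all hot, and pressure is total flow from 0 to 1:

```javascript
// The abstraction the second faucet hides: translate the user's mental
// model (temperature + pressure) into the implementation's primitives
// (amounts of hot and cold water). Hypothetical linear mixing model.
function mixerToValves(temp, pressure) {
  return {
    hot: pressure * temp,        // hot valve opening, 0..1
    cold: pressure * (1 - temp), // cold valve opening, 0..1
  };
}

const valves = mixerToValves(0.5, 0.8); // lukewarm, strong flow
console.log(valves); // → { hot: 0.4, cold: 0.4 }
```

The thin-abstraction faucet is this function's inverse, performed in the user's head every morning.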
This interface sacrifices some implementation simplicity to minimize user effort. This is why I’m skeptical of blanket calls for “simplicity”: they are platitudes. Everyone agrees that, all else being equal, simpler is better. It’s the tradeoffs between different types of simplicity that are tough. In some cases, reducing friction even carries tangible financial risks, which makes leadership buy-in crucial. This kind of tradeoff cannot be made by individual designers — it requires usability as a priority to trickle down from the top of the org chart. The Oslo airport train ticket machine is the epitome of a high signal-to-noise interface. You simply swipe your credit card to enter, and you swipe it again as you leave the station at your destination. That’s it. No choices to make. No buttons to press. No ticket. You just swipe your card and you get on the train. Today this may not seem radical, but back in 2003, it was groundbreaking. To provide such a frictionless user experience, they had to make a financial tradeoff: the machine does not ask for a PIN code, which means the company needs to simply absorb the financial losses from fraudulent charges (stolen credit cards, etc.). When user needs are prioritized at the top, it helps to cement that priority as an organizational design principle to point to when these tradeoffs come along in the day-to-day. Having a design principle in place will not instantly resolve all conflict, but it helps turn conflict about priorities into conflict about whether an exception is warranted, or whether the principle is applied correctly, both of which are generally easier to resolve. Of course, for that to work, everyone needs to be on board with the principle. But here’s the thing with design principles (and most principles in general): they often seem obvious in the abstract, so it’s easy to get alignment in the abstract. It’s when the abstract becomes concrete that it gets tough.
The Web Platform has its own version of this principle, called the Priority of Constituencies: “User needs come before the needs of web page authors, which come before the needs of user agent implementors, which come before the needs of specification writers, which come before theoretical purity.” This highlights another key distinction. It’s more nuanced than users over developers; a better framing is consumers over producers. Developers are just one type of producer, and the web platform has multiple tiers of them. Even within the same tier there are producer vs consumer dynamics. When it comes to web development libraries, the web developers who write them are producers and the web developers who use them are consumers. This distinction also comes up in extensible software, where plugin authors are still consumers when it comes to the software itself, but producers when it comes to their own plugins. It also comes up in dual-sided marketplace products (e.g. Airbnb, Uber, etc.), where buyer needs are generally higher priority than seller needs. In the economy of user effort, the antithesis of overpriced interfaces that make users feel ripped off are those where every bit of user effort required feels meaningful and produces tangible value for them. The interface is on the user’s side, gently helping them along with every step, instead of treating their time and energy as disposable. The user feels like they’re getting a bargain: they get to spend less than they had budgeted for! And we all know how motivating a good bargain is. User effort bargains don’t have to be radical innovations; don’t underestimate the power of small touches.
A zip code input that auto-fills city and state, a web component that automatically adapts to its context without additional configuration, a pasted link that automatically defaults to the website title (or the selected text, if any), a freeform date that is correctly parsed into structured data, a login UI that remembers whether you have an account and which service you’ve used to log in before, an authentication flow that takes you back to the page you were on before. Sometimes many small things can collectively make a big difference. In some ways, it’s the polar opposite of death by a thousand paper cuts: life by a thousand sprinkles of delight! 😀 In the end, “simple things simple, complex things possible” is table stakes. The key differentiator is the shape of the curve between those points. Products win when user effort scales smoothly with use case complexity, cliffs are engineered out, and every interaction declares a meaningful piece of user intent. That doesn’t just happen by itself. It involves hard tradeoffs, saying no a lot, and prioritizing user needs at the organizational level. Treating user effort like real money forces you to design with restraint. A rule of thumb: place the pain where it’s best absorbed by prioritizing consumers over producers. Do this consistently, and the interface feels delightful in a way that sticks. Delight turns into trust. Trust into loyalty. Loyalty into product-market fit. Kay himself replied on Quora and provided background on this quote. Don’t you just love the internet? ↩︎ Yes, typing can be faster than dragging, but minimizing homing between input devices improves efficiency more; see KLM. ↩︎ Yes, today it would have been element.parentNode?.removeChild(element), which is a little less noisy, but this was before the optional chaining operator. ↩︎ When I was running user studies at MIT, I often had users exclaim “I can’t believe it! I tried to do the obvious simple thing and it actually worked!” ↩︎

1 views
Manuel Moreale 2 weeks ago

New site, kinda

If you’re reading this blog using RSS or via email (when I remember to send the content via email), you likely didn’t notice it. And if you’re reading my blog in the browser but are not a sharp observer, chances are, you also didn’t notice it. A new version of my site is live. At first glance, not much has changed. The typeface is still the same—love you, Iowan—the layout is still the same, the colours are still the same. For the most part, the site should still feel pretty much the same. So what has changed? A lot, especially under the hood. For example: I have rewritten the entire CSS, and I’m no longer using SASS since it’s no longer needed; interviews are now separate from regular content at the backend level and have their own dedicated URL structure (old URLs should still work, though); the site is now better structured to be expanded into something more akin to a digital garden than “just” a blog. Since I had to rewrite all the frontend code, I took this opportunity to tweak a few things here and there: quotes have a new style, the guestbook has been redesigned (go sign it if you haven’t already), typography has been slightly tweaked in a couple of places, and the site should now scale much better on very big screens. More importantly, though, P&B interviews now have a more unique design—and a new colour scheme—something that makes me very happy. There are so many things I want to do for this series, but I just don’t have the time to dedicate to this, so I’m happy to have at least managed to give them a more unique identity here on the site. This space is still a work in progress. It will always be a work in progress, so expect things to change over time as I fine-tune minor details here and there. Thank you for keeping RSS alive. You're awesome. Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs

2 views
Josh Comeau 3 weeks ago

The Big Gotcha With @starting-style

CSS has been on fire lately, with tons of great new features. @starting-style is an interesting one; it allows us to use CSS transitions for enter animations, something previously reserved for CSS keyframe animations. But is the juice worth the squeeze?

0 views
Josh Comeau 1 month ago

Color Shifting in CSS

A little while ago, I was trying to animate an element’s background color, so that it cycled through the rainbow. Seems easy, but it turns out, browsers have a surprisingly big limitation when it comes to color processing! In this tutorial, we’ll dig into the issue, and I’ll share a couple of strategies you can use to work around this limitation.

0 views
Thomasorus 1 month ago

Moyo

Moyō 模様 (pattern) is a collection of CSS patterns I use for my website. Most of them were created as the months passed and I tweaked them for my time tracker tool. Moyo comes as a single SVG file containing all the patterns. If you are not familiar with SVG, an SVG file can use a defs tag to store graphical objects that you will use later. Each pattern is defined inside the defs tag and each one has an id attribute. Those ids can then be used inside other SVGs. Copy-paste the content of this file inside your template and use the desired pattern. Colors are automatically defined by two CSS rules: the color value of the text (aka currentColor) and the background value on the div containing them. Two of the patterns default to a white background, which can be changed by using a CSS custom property.
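The defs-and-reuse mechanism described above looks roughly like this. The pattern id and geometry here are made up for illustration; Moyo's actual pattern ids differ:

```html
<!-- Stored once in <defs>, rendered nowhere by itself -->
<svg width="0" height="0" style="position:absolute" aria-hidden="true">
  <defs>
    <pattern id="dots" width="8" height="8" patternUnits="userSpaceOnUse">
      <!-- fill="currentColor" picks up the surrounding text color -->
      <circle cx="2" cy="2" r="1.5" fill="currentColor" />
    </pattern>
  </defs>
</svg>

<!-- Referenced later by id, anywhere in the page -->
<svg width="120" height="40">
  <rect width="120" height="40" fill="url(#dots)" />
</svg>
```

Because the pattern uses currentColor, changing the CSS color of the containing element restyles the pattern with no extra configuration.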

0 views
Den Odell 2 months ago

We Keep Reinventing CSS, but Styling Was Never the Problem

We’ve been building for the web for decades. CSS has had time to grow up, and in many ways, it has. We’ve got scoped styles, design tokens, cascade layers, even utility-first frameworks that promise to eliminate bikeshedding entirely. And yet, somehow, every new project still begins with a shrug and the same old question: “So… how are we styling things this time?” It’s not that we lack options. It’s that every option comes with trade-offs. None of them quite fit. We keep reinventing CSS as if it’s the root cause. It isn’t. It’s easy to forget what CSS was originally designed for: documents. You’d write some HTML, style a few headings and paragraphs, maybe float an image to the left, and call it a day. In that world, global styles made sense. The cascade was helpful. Inheritance was elegant. Fast-forward a couple of decades and we’re building highly interactive, component-based, state-driven, design-system-heavy applications, still with a language meant to style a résumé in the early 2000s. CSS wasn’t built for encapsulated components. It wasn’t built for dynamic theming or runtime configuration or hydration mismatches. So we’ve spent years bolting on strategies to make it work. What we have now is a landscape of trade-offs. Each approach solves something. None solve everything. Yet we keep framing them as silver bullets, not as trade-off tools. Here’s the uncomfortable truth: most of our styling pain doesn’t come from CSS itself. It comes from trying to shoehorn CSS into frontend architectures that weren’t designed to support it. React, Vue, Svelte. They all put components at the core. Scoped logic. Scoped templates. Scoped state. Then we hand them a stylesheet that’s global, cascading, and inherited by default. We’ve spent the last decade asking CSS to behave like a module system. It isn’t one. This isn’t just a tooling choice. It’s a question of which trade-offs you’re prepared to live with. There’s no single solution. Just strategies.
Just context. Styling the web isn’t solved. It may never be. But it gets easier when we stop pretending there’s a perfect answer just one abstraction away. Be clear about what matters, and deliberate about what you’re willing to trade. Because at the end of the day, no one writes perfect CSS. Just CSS that’s good enough to ship.

Each approach is a trade-off:

- BEM gives you naming predictability, and very verbose selectors.
- CSS Modules give you scoping, unless you need runtime theming.
- Utility-first CSS (like Tailwind) enables fast iteration, but clutters your markup.
- CSS-in-JS offers colocation and flexibility, at the cost of runtime performance and complexity.
- Cascade Layers give you more control, if your team is ready to learn them.

Pick by what you need:

- Scoped styles with minimal tooling? Use CSS Modules and accept limited runtime flexibility.
- Predictability and no cascade? Use utility-first CSS and brace for cluttered markup.
- Dynamic styles colocated with logic? Use CSS-in-JS and monitor your bundle size closely.

0 views
Cassidy Williams 2 months ago

Making a faded text effect in (mostly) CSS

I watched a video recently that had text fading away. I thought it’d be cool to recreate that type of effect in CSS! The final output: See the Pen Fading away text effect by Cassidy (@cassidoo) on CodePen. I initially thought I might do this very manually with selectors, which I think works well if you hard-code tags around characters in a string. That would be a more “pure” CSS solution, but not nearly as flexible as what I would like! Another way would be to use background-clip and a gradient, which looks cooler for paragraphs but not quite what I was going for: this makes the text transparent so the background shows through, then adds a gradient background to the text that goes from opaque to transparent, then clips the background to the text. Again, looks cool, but I wanted a per-letter effect. The CodePen embedded above describes what’s happening, but for posterity, here’s the same information with some more details: the JavaScript function splits all of the characters in an element that has the target class. Then, it wraps each character in a span. Each of those spans has a class that assigns it a CSS variable based on its index. Then, with the power of CSS, it applies a blur filter and opacity based on those index variables! This was fun, hope you liked it!
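The split-and-wrap step can be sketched in a few lines. The function and property names here are illustrative, not the ones used in the original CodePen:

```javascript
// Wrap each character of a string in a <span> that exposes its index as a
// CSS custom property (--index), for per-letter styling in CSS.
// (Hypothetical names, for illustration.)
function explodeToSpans(text) {
  return [...text]
    .map((ch, i) => `<span style="--index:${i}">${ch}</span>`)
    .join("");
}

console.log(explodeToSpans("hi"));
// → <span style="--index:0">h</span><span style="--index:1">i</span>

// The matching CSS could then fade and blur each letter by its index:
//   .fade-away span {
//     opacity: calc(1 - var(--index) * 0.1);
//     filter: blur(calc(var(--index) * 0.3px));
//   }
```

Because the per-letter variable lives in the markup, all of the actual visual logic stays in CSS, which is what makes the effect easy to tweak.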

0 views
Grumpy Gamer 2 months ago

Death By Scrolling Part 3

We just got Death By Scrolling running on the Switch. We had to do some optimization because we were doing stupid stuff with rendering the tile map. Modern PCs are so fast these days that you can do a lot of stupid stuff and it doesn’t matter. Back in the day (cue angry old man) I had to count CPU cycles and every byte of memory was precious. Nowadays memory is basically infinite. But it’s different for consoles: they do have a limited amount of memory and you do have to pay attention to performance. One thing that really bothers me when I play Switch games ported from PC or other consoles is the lack of care over font size. Things that look good on a big monitor or TV are unreadable on the Switch (and the Steam Deck). We’ve taken great care to make everything big enough to be readable on handhelds, but it’s a real pain. As a designer you want to cram enough information on each screen, especially for RPG-ish games. People get used to html/css rendering and how good it is at flowing to fill space and around images. Games often don’t have text rendering engines that are that complex (using a html/css engine internally is overkill). One of the big upgrades I want to do to my engine is better text flow rendering. It will never be as good as html/css in the browser, but it could be a lot better/easier. It’s one of the things I envy about Godot.

0 views
Chris Coyier 2 months ago

Impact of AI on Tech Content Creators

Wes on Syntax : I write content. That content is consumed by people. But a lot of it has been used to train AIs for people to get a very quick answer. You can see the amount of bots visiting websites has been going up significantly. You ask a question about JavaScript and they go suck in 40 pages and it distills it down. From a user perspective I love it. I don’t want to read your life story, I want to get straight to the answer. How often do you just read the Google Summary and just close the tab? For those people that [create content] for money, that business is going to be significantly disrupted. What happens to the people that rely on that money? There’s no shortage of people putting content on the internet right now. Will that stop? I don’t know. It’ll stop when there is no longer any incentive to do so. Those incentives are various! Money is one. That was a partial motivator for me. Being able to support myself and my family partially through advertising on content was important. But I also found it to be fun and mentally rewarding, like it gave my life purpose. Even if the money wanes, those incentives may endure for many. Promoting other things can be another incentive. If you can still manage to get eyes on you, the value doesn’t have to come from traditional advertising. It could be because you’re working for a company, and there is value in the DevRel. You could be taking pre-orders for your next book. You could offer a training course for sale. Your superfans can pay for superchats and supermemberships on your Twitch or whatever. As long as there are incentives left to create technical content, people will. And AIs will continue to train on it. That does frame it as an adversarial relationship, which is a bummer (something something capitalism). Wes specifically wondered about me! He spent a good chunk of his life. His legacy was putting out very helpful information. CSS-Tricks is a huge swath of information. 
He was able to do that because he was able to make money. The next CSS-Tricks isn’t going to be able to make that much money. The AIs are just going to gobble it up and contribute to our brain rot. I certainly wrote a lot of content for that site. And so did a ton of other authors, who still do to this day. And AIs have slurped, and increasingly reslurp, it up. My main concerns with the AI-slurp-age are: I’m slightly less concerned that AI slurpage will disincentivize all content creation. Humans love other humans, and we’ll always want to connect with each other. We want to learn together and laugh together and play together, and, as weird as it is, sharing technical content with each other is a niche in there. Wes wondered if I “got out” at the right time. I sorta think I did. It was not premeditated, though. At the time, I was much more focused on advertising. For years leading up to the sale, I invested more and more money into the site with the goal of growth, only to see traffic stay flat. It wasn’t perfectly correlated, but flat traffic doesn’t help advertising revenue. Ramping up the amount of work for the same traffic and same money wasn’t feeling great. At the time, I assumed it was just a temporary slump, but now with enough distance, it kinda wasn’t. Fortunately, Digital Ocean didn’t really need the advertising, which is why I thought it was a perfect buyer. They had other incentives. I have no idea what they think of the purchase now, but I would hope it’s quite positive. There was a weird slump, but with Geoff still over there, I think they are doing awesome. I feel compelled to mention that my content creation career is far from over and takes many forms: So. Much. Content. I still think it’s fun and has value and plan to continue doing it, even if the incentives around doing it are constantly being battered down. I’ll need to continue to evolve how I get value out of it.
I do enjoy it so much I’ll probably be explaining tricks into the dirt with a stick after the apocalypse. Wes also said of AI: As a user, I love it. I feel that, too. AI companies do slimy shit, and they know it . But I don’t wanna just take my ball and go home. There is some real user benefit coming out of AI products right now. It’s fun to be a part of. I’m experiencing genuine productivity boosts from using AI for coding work. The evolution of user interfaces around it is fascinating. Perhaps ironically, there is an awful lot of user content about AI — lolz.

0 views
Josh Comeau 2 months ago

A Friendly Introduction to SVG

SVGs are one of the most remarkable technologies we have access to on the web. They’re first-class citizens, fully addressable with CSS and JavaScript. In this tutorial, I’ll cover all of the most important fundamentals, and show you some of the ridiculously-cool things we can do with this massively underrated tool. ✨

0 views
matklad 3 months ago

font-size-adjust Is Useful

In this article, I will describe a recent addition to CSS, the font-size-adjust property. I am also making a bold claim that everyone in the world misunderstands the usefulness of this property, including Google, MDN, and the CSS Specification itself. (Just to clarify, no, I am not a web designer and I have no idea what I am talking about.) Let’s start with an oversimplified and incorrect explanation of font-size (see https://tonsky.me/blog/font-size/ for details). Let’s say you specified font-size: 96px. What does that mean? First, draw a square 96 pixels high. Then, draw a letter “m” somewhere inside this box. This doesn’t make sense? I haven’t told you how large the letter m should be? Tiny? Huge? Well, sorry, but that’s really how font size works. It’s the size of the box around the glyph, not the size of the glyph. And there isn’t really much consistency between fonts as to how large the glyph itself should be. Here’s a small “x” in the three fonts used on my blog at 48px font size: they are quite different! And this is where font-size-adjust comes in. If I specify font-size-adjust: 0.5, I ask the browser to scale the font such that the letter “x” is exactly half of the box. This makes the fonts comparable. Now, the part where I foolishly disagree with the world! The way this property is described in MDN and elsewhere is as if it only matters for font fallback. That is, if you have font-family: Futura, sans-serif, one potential problem could be that the fallback sans-serif font on the user’s machine will have a very different size from Futura. So, the page could look very different depending on whether fallback kicks in or not (and fallback can kick in temporarily, while the font is being loaded). So, the official guideline is, roughly: when using font fallback, find a value of font-size-adjust that makes no change for the first font of the fallback stack. I don’t find this to be a particularly compelling use-case! Make sure to vendor the fonts used, specify them inline in a style tag inside the head to avoid extra round trips, and FOUC is solved for most people.
Otherwise, you might want to stick to system fonts. A use-case for font-size-adjust I find much more compelling is that you are probably going to use several fonts on a web page. And you also might change fonts in the future. And they will have different intrinsic sizes, because that’s how things are. Part of the mess is avoidable by pinning the meaning of font size. So, the guideline I’d use is: stick font-size-adjust into your CSS reset, right next to your other global defaults. Which value? Something around 0.5 (the invariant ratio for Helvetica), but any number in that vicinity should work! While font-size-adjust fixes the size of the glyph relative to the em square, it doesn’t fix the position of the glyph. This can create line-height problems. Consider these two paragraphs that are styled with line-height: 24px, but where the second paragraph uses a monospace font for the word “coreutils”: You are supposed to use coreutils to solve this problem. You are supposed to use coreutils to solve this problem. You are supposed to use coreutils to solve this problem. You are supposed to use coreutils to solve this problem. You are supposed to use coreutils to solve this problem. You are supposed to use coreutils to solve this problem. You are supposed to use coreutils to solve this problem. You are supposed to use coreutils to solve this problem. In the first paragraph, each line is indeed 24 pixels high, but in the second paragraph each line is slightly larger, despite the line-height being set to 24px explicitly. How can this be? The full answer is in https://iamvdo.me/en/blog/css-font-metrics-line-height-and-vertical-align. The TL;DR is that line-height doesn’t actually set the height of the line (who would have thought!). Instead, it sets the height of each individual span of text on the line. So, both “supposed” and “coreutils” have a 24-pixel-high box around them. But because the relative position of glyphs inside the em-box is different between the two fonts, the boxes are shifted relative to each other to align the baselines. You can see that by adding a background to each span:
You are supposed to use coreutils to solve this problem. You are supposed to use coreutils to solve this problem. If we align the boxes, then the baselines are not aligned. It follows that when we align the baselines, the boxes are misaligned, and the line height ends up larger than the height of any box, because the boxes stick out! I don’t know a great solution here. A hack is to say something like line-height: 0 for the monospace spans, such that text set in a monospace font doesn’t affect the line-height calculation. Counter-intuitively, this will work even if the line is entirely monospace (see the strut in the above-linked article).
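Putting the two suggestions together, a stylesheet sketch might look like this. The 0.5 ratio is illustrative; pick the ratio of your own primary font:

```css
/* Pin glyph size across all fonts: make the x-height half the em square.
   (0.5 is an illustrative value in Helvetica's vicinity.) */
html {
  font-size-adjust: 0.5;
}

/* Keep inline monospace spans from inflating the line box: with
   line-height: 0 they contribute nothing to the line-height calculation,
   while the parent's strut still guarantees the minimum line height. */
code {
  line-height: 0;
}
```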

Chris Coyier 3 months ago

CSS Day Videos & Scope

[Image: three video thumbnails for a series on CSS by Chris Coyier, titled "DOM Blasters", "Proximity Scope", and "Donut Scope", each with a smiling photo of Chris.]

Lambda Land 3 months ago

Programmers and Their Monospace Blogs

Many developers seem to have a fanatic obsession with monospace fonts and using them to make their blogs look “cool”. I won’t call out anyone’s blog specifically, but you don’t have to look too hard to find some. As an example of a theme using a monospace font by default, look at hugo-theme-terminal , which has over 2,400 stars on GitHub. If you have a blog or are thinking about starting one, and you are writing mostly prose (you probably are), I have one suggestion for you about fonts: do not use monospace fonts for prose. Please use a nice proportionally-spaced font instead. It will be nicer for your readers. I assume you care about your readers. Maybe there’s a slim fraction of a percent of developer/writers out there who are writing just so that they have a big portfolio of “content” with some metrics they can use to show off or brag about on LinkedIn. Whatever—those blokes probably don’t read much anyway, so I think I’m safe to ignore them. Even worse are the people who use an LLM to generate content for LinkedIn. Like, how banal can you get? Also, note that I’m using the word content in a slightly pejorative sense: LinkedIn addicts talk about content as just some stuff you need to generate to fill a space. It’s substance that matters. I want to read something that is trying to say something—not something that’s just taking up space on a (web)page. If you care about your readers, you should make the reading experience pleasant for them. If I open a website that makes it hard or uncomfortable for me to read/scan/whatever, my patience for whatever is on that site drops and I close the page. What makes a website uncomfortable to read? Here are a few things: setting light-gray text on a slightly-less-gray background is a great way to make people squint at your site in frustration. Not centering your text and/or having lines that are way too long also infuriates me.
I want to read your article centered in my field of vision, where I don’t have to turn my head. I’ve got a big monitor. Just slap a max-width and margin: auto on the body element in your CSS and you’re good to go. It’s not hard. Monospace fonts are a bad choice for prose because—news flash—we’re not used to reading monospace fonts! Every book on my shelf—from deeply technical texts like Types and Programming Languages by Pierce to high fantasy like The Way of Kings by Sanderson—is set in a proportional font. Monospace fonts are a holdover from the typewriter age. Thankfully, our technology is well past the limitations of that machine, and good typography can rule again. If you still don’t believe me that monospace fonts are bad for prose, maybe professional typographer-cum-programmer Matthew Butterick can change your mind . If you look around on my blog, you will find plenty of code set in a monospace font . Code should be set in a monospace font. I actually use a monospace font when I write! (Partly because I’m used to it, and partly because I haven’t bothered to set up my editor to switch to something else when I’m writing.) So please don’t take this post as a tirade against monospace fonts in all contexts. Yeah, I do use my browser’s reader mode frequently when I want to read some long-form text. It’s a nice way to get a decent-looking view of the text on a page. In some cases it hides distracting elements, making it infinitely superior to an ad-riddled page. But if reader mode is your suggested workaround for your bad monospaced font, isn’t that an admission of failure? I would think that you would want your blog to be nice enough that people don’t even think about using reader mode, because your website is pleasing to read as-is. And if you include source code and images in your post, reader mode sometimes mangles those. I won’t advise you exactly what style you should set your blog/website in. That’s up to you and is in large part a matter of taste.
But this part is not subjective: prose is meant to be set in a proportional font. Please stop using monospace for prose.
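The fixes the post asks for amount to a handful of declarations. A minimal sketch — the specific width, colors, and font stack here are illustrative choices, not the author's:

```css
/* Readable prose: a centered column, a sane line length, real
   contrast, and a proportional font for body text. */
body {
  max-width: 65ch;      /* keep lines from running too long */
  margin: 0 auto;       /* center the column in the viewport */
  padding: 0 1rem;
  font-family: Georgia, "Times New Roman", serif;
  line-height: 1.5;
  color: #1a1a1a;       /* near-black text...                  */
  background: #fdfdfd;  /* ...on near-white, not gray-on-gray  */
}

/* Monospace belongs to code, not prose. */
pre,
code {
  font-family: ui-monospace, Consolas, monospace;
}
```

The 65ch cap is one common choice for a comfortable measure; anything in the 60–80 character range serves the same purpose.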

Xe Iaso 3 months ago

Rolling the ladder up behind us

Cloth is one of the most important goods a society can produce. Clothing is instrumental for culture, expression, and for protecting one's modesty. Historically, cloth was one of the most expensive items on the market. People bought one or two outfits at most and then wore them repeatedly for the rest of their lives. Clothing was treasured and passed down between generations the same way we pass down jewelry. This cloth was made in factories by highly skilled weavers. These weavers had done the equivalent of PhD studies in weaving cloth and used state-of-the-art hardware to do it. As factories started to emerge, they were able to make cloth much more cheaply than skilled weavers ever could, thanks to inventions like the power loom. Power looms didn't require skilled workers to operate them. You could even staff them with war orphans, of which there was an abundance thanks to all the wars. The quality of the cloth was absolutely terrible in comparison, but there was so much more of it, made so much more quickly. This let the price of cloth plummet, which meant that the wages the artisans made fell from six shillings a day to six shillings per week over a period in which the price of food doubled. Mind you, the weavers didn't just reject technological progress for the sake of rejecting it. They tried to work with the ownership class and their power looms to produce the same cloth faster and cheaper than they had before. For a time it did work out, but the powers that be didn't want that. They wanted more money at any cost. At some point, someone had enough and decided to do something about it. Taking up the name Ned, he led a movement that resulted in riots and destroyed factory equipment; some riots got so bad that the army had to be called in to break them up. Townspeople local to those factory towns were in full support of Ned's followers.
Heck, even the soldiers sent to stop the riots ended up seeing the points behind what Ned's followers were doing and joined in themselves. The ownership class destroyed the livelihood of the skilled workers so that they could make untold sums of money producing terrible cloth, turning people's one-time purchase of clothing into a de facto subscription they had to renew every time their clothing wore out. Now we have fast fashion and don't expect our clothing to last more than a few years. I have a hoodie from AWS Re:Invent in 2022 that I'm going to have to throw out and replace because the sleeves are dying. We only remember these events as riots because the rioters' actions affected those in power. This movement was known as the Luddites, the followers of Ned Ludd. The word "luddite" has shifted meaning over time and is now understood as "someone who is against technological development". The Luddites were not against technology, as the propaganda from the ownership class would have you believe; they fought against how it was implemented and the consequences of its rollout. They were skeptical that the shitty cloth the power loom produced would be a net benefit to society, because it meant that customers would inevitably have to buy their clothes over and over again, turning a one-time purchase into a subscription. Would that really benefit consumers, or would it really benefit the owners of the factories? Nowadays the Heritage Crafts Association of the United Kingdom lists many forms of weaving as Endangered or Critically Endangered crafts , meaning that those skills are either at critical risk of dying out without any "fresh blood" learning how to do them, or the last generation of artisans who know the craft are no longer teaching new apprentices. All that remains of that expertise is now contained in the R&D departments of the companies that produce the next generations of power looms, and in whatever heritage crafts practitioners remain.
Remember the Apollo program that let us travel to the moon? It was mostly powered by the Rocketdyne F-1 engine. We have all of the technical specifications to build that rocket engine. We know all the parts you need, all the machining you have to do, and roughly understand how it would be done, but we can't build another Rocketdyne F-1, because all of the finesse that had been built up around manufacturing it no longer exists. Society has moved on, and we don't have expertise in the tools they used to make it happen. What are we losing in the process? We won't know until it's gone. As I've worked through my career in computering, I've noticed a paradox that's made me uneasy, and I haven't really been able to figure out why it keeps showing up: the industry only ever seems to want to hire people with the word Senior in their title. It almost never wants to create people with the word Senior in their title. This is kinda concerning for me. People get old and no longer want to, or are able to, work. People get sick and become disabled. Accidental deaths happen and remove people from the workforce. A meme based on the format where the dog wants to fetch the ball but doesn't want to give the ball to the human to throw it, but with the text saying 'Senior?', 'Train Junior?', and 'No train junior, only hire senior'. If the industry at large isn't actively creating more people with the word Senior in their title, we are eventually going to run out of them. This is something that I want to address with Techaro at some point, but I'm not sure how to do that yet. I'll figure it out eventually. The non-conspiratorial angle for why this is happening is that money isn't free anymore and R&D salaries are no longer fully deductible business expenses in the US, so software jobs that don't "produce significant value" are riskier for the company. So of course they'd steal from the future to save today. Sounds familiar, doesn't it?
Is this how we end up losing the craft of making high-quality code the same way we lost the craft of weaving high-quality cloth? However, there's another big trend in the industry that concerns me: companies releasing products that replace expertise with generative AI agents that just inscrutably do the thing for you. This started out innocently enough - it was just better ways to fill in the blanks in your code. But it has ballooned from better autocomplete to the point where you can just assign issues to GitHub Copilot and have the issue magically get solved for you in a pull request. Ask the AI model for an essay and get a passable result in 15 minutes. At some level, this is really cool. Like, think about it. This reduces toil and drudgery to waiting for half an hour at most. In a better world I would really enjoy having a tool like this to help deal with the toil work that I need to do but don't really have the energy to. Do you know how many more of these essays would get finished if I could offload some of the drudgery of my writing process to a machine? We are not in such a better world. We are in a world where I get transphobic hate sent to the Techaro sales email. We are in a world where people like me are intentionally not making a lot of noise so that we can slide under the radar and avoid attention from those who would seek to destroy us. We are in a world where these AI tools are being pitched as the next Industrial Revolution , one where foisting our expertise away into language models is somehow being framed as a good thing for society. There's just one small problem: who is going to be paid and reap the benefits from this change as the expectations of the ownership class change? A lot of the ownership class only really experiences the work products of what we do with computers. They don't know the struggles involved in designing things such as the user getting an email on their birthday .
They don't want to get pushback on things being difficult or to hear that people want to improve the quality of the code. They want their sparkle emoji buttons to magically make the line go up, and they want them yesterday. We deserve better than cheaply made, mass-produced slop that incidentally does what people want; we deserve high-quality products that are crafted to be exactly what people need, even if they don't know they need it. Additionally, if this is such a transformational technology, why are key figures promoting it by talking down to people? Why wouldn't they be using this to lift people up ? Isn't that marketing? Fear sells a lot better than hope ever will. Amygdala responses are pretty strong, right? So aren't a lot of your fears of the technology really feeding into the hype and promoting the technology by accident? I don't fear the power loom. I fear the profit expectations of the factory owners. As a technical educator, one of the things that I want to imprint onto people is that programming is a skill you can gain, and that you too can both program things and learn how to program things. I want there to be more programmers out there. What I am about to say is not an attempt to gatekeep the skill and craft of computering; however, the ways that proponents of vibe coding are going about it are simply not the way forward to a sustainable future. About a year ago, Cognition teased an AI product named Devin , a completely automated software engineer. You'd assign Devin tasks in Slack or Jira and then it would spin up a VM and plod its way through fixing whatever you asked it to. This demo deeply terrified me, as it was nearly identical to a story I wrote for the Techaro lore: Protos .
The original source of that satire was experience working at a larger company that shall remain unnamed, where the product team seemed to operate under the assumption that the development team had a secret "just implement that feature" button and that we as developers were going out of our way to NOT push it. Devin was that "implement that feature" button, the same way Protos mythically was. From what I've seen with companies that actually use Devin, it's nowhere near actually being useful and usually needs a lot of hand-holding to do anything remotely complicated, thank God. The thing that really makes me worried is that the ownership class' expectations about the process of developing software are changing. People are being put on PIPs for not wanting to install Copilot. Deadlines come faster because "the AI can write the code for you, right?" Twitter and Reddit contain myriad stories of "idea guys" using Cursor or Windsurf to generate their dream app's backend and then making posts like "some users claim they can see other people's stuff, what kind of developer do I need to hire for this?" Follow-up posts include gems such as "lol why do coders charge so much???" By saving money in the short term by producing shitty software that doesn't last, are we actually spending more money over time re-buying nearly identical software after it evaporates from light use? This is the kind of thing that makes Canada not allow us to self-identify as Engineers, and I can't agree with their point more. Vibe coding is a distraction. It's a meme. It will come. It will go. Everyone will abandon the vibe coding tools eventually. My guess is that a lot of the startups propping up vibe coding tools are trying to get people into monthly subscriptions as soon as possible so that they can mine passive income as their more casual users slowly give up on coding and just forget about the subscription. I'm not gonna lie though, the UX of vibe coding tools is top-notch.
From a design standpoint it's aiming for that subtle brilliance where it seems to read your mind and then fill in the blanks you didn't even know you needed filled in. This is a huge part of how you can avoid the terror of the empty canvas. If you know what you are doing, an empty canvas represents infinite possibilities. There's nothing there to limit you from being able to do it. You have total power to shape everything. In my opinion, this is a really effective tool to help you get past that fear of having no ground to stand on. This helps you get past executive dysfunction and just ship things already. That part is a good thing. I genuinely want people to create more things with technology that are focused on the problems that they have. This is the core of how you learn to do new things. You solve small problems that can be applied to bigger circumstances. You gradually increase the scope of the problem as you solve individual parts of it. I want more people to be able to do software development. I think that it's a travesty that we don't have basic computer literacy classes in every stage of education so that people know how the machines that control their lives work and how to use them to their advantage. Sure it's not as dopaminergic as TikTok or other social media apps, but there's a unique sense of victory that you get when things just work. Sometimes that feeling you get when things Just Work™ is the main thing that keeps me going. Especially in anno dominium two thousand and twenty five. The main thing I'm afraid of is people becoming addicted to the vibe coding tools and letting their innate programming skills atrophy. I don't know how to suggest people combat this. I've been combating it by removing all of the automatic AI assistance from my editor (IE: I'll use a language server, but I won't have my editor do fill-in-the-middle autocomplete for me), but this isn't something that works for everyone. 
I've found myself more productive without it there, asking a model for the missing square peg to round hole when I inevitably need some toil code made. I ended up not shipping that due to other requirements, but you get what I'm getting at. The biggest arguments I have against vibe coding and all of the tools behind it boil down to one major point: these tools have a security foundation of sand . Most of the time when you install and configure a Model Context Protocol (MCP) server, you add some information to a JSON file that your editor uses to know what tools it can dispatch, along with all of your configuration and API tokens. These MCP servers run as normal OS processes with absolutely no limit on what they can do. They can easily delete all the files on your system, install malware into your autostart, or exfiltrate all your secrets without any oversight. Oh, by the way, that whole "it's all in one JSON file with all your secrets" problem? That's now seen as a load-bearing feature so that scripts can automatically install MCP servers for you. You don't even need to gain expertise in how the tools work! There's an MCP server installer MCP server, so you can say "Hey torment nexus, install GitHub integration for me please" and it'll just do it with no human oversight or review of what you're actually installing. Seems safe to me! What could possibly go wrong? If this is seriously the future of our industry, I wish that the people involved would take one trillionth of an iota of care about the security of the implementation. This is the poster child for something like the WebAssembly Component Model . That would let you define your MCP servers with strongly typed interfaces to the outside world that can be granted or denied permissions by users with strong capabilities. Combined with the concept of server resources , this could let you expand functionality however you wanted.
Running in WebAssembly means that no MCP server can just read and exfiltrate your SSH key. Running in WebAssembly means that it can't just connect to a remote host and then evaluate JavaScript code with user-level permissions on the fly. We shouldn't have to be telling developers "oh, just run it all in Docker". We should have designed this to be fundamentally secure from the get-go. Personally, I only run MCP ecosystem things when contractually required to. Even then, I run them in a virtual machine that I've already marked as known-compromised, and I use separate credentials not tied to me. Do with this information as you will. I had a lot of respect for Anthropic before they released this feculent bile that is the Model Context Protocol spec and its initial implementations to the public. It just feels so half-baked and barely functional. Sure, I don't think they expected it to become the Next Big Meme™, but I thought they were trying to do things ethically and above board. Everything I had seen from Anthropic before had such a high level of craft and quality, and this was such a huge standout. We shouldn't be hand-waving away fundamental concerns like secret management or sandboxing as things the user opts into. Users are not going to do it, and we're going to keep having incidents where Cursor goes rogue and nukes your home folder, until someone cares enough about the craft of the industry to do it the right way. I have a unique view into a lot of the impact that AI companies have had across society. I'm the CEO of Techaro , a small one-person startup that develops Anubis , a Web AI Firewall Utility that helps mitigate the load of automated mass scraping so that open source infrastructure can stay online. I've had sales calls with libraries and universities that are just being swamped by the load. There are stories of GitLab servers eating up 64 cores of high-wattage server hardware due to all of the repeated scraping, over and over, in a loop.
I swear a lot of this scraping has to be some kind of dataset arbitrage or something, that's the only thing that makes sense at this point. And then in the news the AI companies claim "oh no we're just poor little victorian era orphans, we can't possibly afford to fairly compensate the people that made the things that make our generative AI models as great as they are". When the US copyright office tried to make AI training not a fair use , the head of that office suddenly found themselves jobless. Why must these companies be allowed to take everything without recourse or payment to the people that created the works that fundamentally power the models? The actual answer to this is going to sound a bit out there, but stay with me: they believe that we're on the verge of creating artificial superintelligence; something that will be such a benevolent force of good that any strife in the short term will ultimately be cancelled out by the good that is created as a result. These people unironically believe that a machine god will arise and we'd be able to delegate all of our human problems to it and we'll all be fine forever. All under the thumb of the people that bought the GPUs with dollars to run that machine god. As someone that grew up in a repressed environment full of evangelical christianity, I recognize this story instantly: it's the second coming of Christ wrapped in technology. Whenever I ask the true believers entirely sensible questions like "but if you can buy GPUs with dollars, doesn't that mean that whoever controls the artificial superintelligence thus controls everyone, even if the AI is fundamentally benevolent?" The responses I get are illuminating. They sound like the kinds of responses that evangelicals give when you question their faith. Honestly though, the biggest impact I've seen across my friends has been what's happened to art commissions. I'm using these as an indicator for how the programming industry is going to trend. 
Software development is an art in the same vein as visual/creative arts, but a lot of the craft and process that goes into visual art is harder to notice because it gets presented as a flat single-dimensional medium. Sometimes it can take days to get something right for a drawing. But most of the time people just see the results of the work, not the process that goes into it. This makes things like prompting "draw my Final Fantasy 14 character in Breath of the Wild" with images as references and getting a result in seconds look more impressive. If you commissioned a human to get a painting like this: An AI-generated illustration of my Final Fantasy 14 character composited into a screenshot of Breath of the Wild. Generated by GPT-4o through the ChatGPT interface. Inputs were a screenshot of Breath of the Wild and reference photos of my character. It'd probably take at least a week or two as the artist worked through their commission queue and sent you in-progress works before they got the final results. By my estimates between the artists I prefer commissioning, this would cost somewhere between 150 USD and 500 EUR at minimum. Probably more when you account for delays in the artistic process and making sure the artist is properly paid for their time. It'd be a masterpiece that I'd probably get printed and framed, but it would take a nonzero amount of time. If you only really enjoy the products of work and don't understand/respect any of the craftsmanship that goes into making it happen, you'd probably be okay with that instantly generated result. Sure the sun position in that image doesn't make sense, the fingers have weird definition, her tail is the wrong shape, it pokes out of the dress in a nonsensical way (to be fair, the reference photos have that too), the dress has nonsensical shading, and the layering of the armor isn't like the reference pictures, but you got the result in a minute! A friend of mine runs an image board for furry art . 
He thought that people would use generative AI tools as a part of their workflows to make better works of art faster. He was wrong, it just led to people flooding the site with the results of "wolf girl with absolutely massive milkers showing her feet paws" from their favourite image generation tool in every fur color imaginable, then with different characters, then with different anatomical features. There was no artistic direction or study there. Just an endless flood of slop that was passable at best. Sure, you can make high quality art with generative AI. There's several comic series where things are incredibly temporally consistent because the artist trained their own models and took the time to genuinely gain expertise with the tools. They filter out the hallucination marks. They take the time to use it as a tool to accelerate their work instead of replacing their work. The boards they post it to go out of their way to excise the endless flood of slop and by controlling how the tools work they actually get a better result than they got by hand, much like how the skilled weavers were able to produce high quality cloth faster and cheaper with the power looms. We are at the point where the artists want to go and destroy the generative image power looms. Sadly, they can't even though they desperately want to. These looms are locked in datacentres that are biometrically authenticated. All human interaction is done by a small set of trusted staff or done remotely by true believers. I'm afraid of this kind of thing happening to the programming industry. A lot of what I'm seeing with vibe coding leading to short term gains at the cost of long term toil is lining up with this. Sure you get a decent result now, but long-term you have to go back and revise the work. 
This is a great deal if you are producing the software though; because that means you have turned one-time purchases into repeat customers as the shitty software you sold them inevitably breaks, forcing the customer to purchase fixes. The one-time purchase inevitably becomes a subscription. We deserve more in our lives than good enough. Look, CEOs, I'm one of you so I get it. We've seen the data teams suck up billions for decades and this is the only time that they can look like they're making a huge return on the investment. Cut it out with shoving the sparkle emoji buttons in my face. If the AI-aided product flows are so good then the fact that they are using generative artificial intelligence should be irrelevant . You should be able to replace generative artificial intelligence with another technology and then the product will still be as great as it was before. When I pick up my phone and try to contact someone I care about, I want to know that I am communicating with them and not a simulacrum of them. I can't have that same feeling anymore due to the fact that people that don't natively speak English are much more likely to filter things through ChatGPT to "sound professional". I want your bad English. I want your bad art. I want to see the raw unfiltered expressions of humanity. I want to see your soul in action. I want to communicate with you, not a simulacrum that stochastically behaves like you would by accident. And if I want to use an LLM, I'll use an LLM. Now go away with your sparkle emoji buttons and stop changing their CSS class names so that my uBlock filters keep working. This year has been a year full of despair and hurt for me and those close to me. I'm currently afraid to travel to the country I have citizenship in because the border police are run under a regime that is dead set on either elimination or legislating us out of existence. In this age of generative AI, I just feel so replaceable at my dayjob. 
My main work product is writing text that convinces people to use globally distributed object storage in a market where people don't realize that's something they actually need. Sure, this means that my path forward is simple: show them what they're missing out on. But I am just so tired. I hate this feeling of utter replaceability because you can get 80% as good of a result that I can produce with a single invocation of OpenAI's Deep Research. Recently a decree came from above: our docs and blogposts need to be optimized for AI models as well as humans. I have domain expertise in generative AI, I know exactly how to write SEO tables and other things that the AI models can hook into seamlessly. The language that you have to use for that is nearly identical to what the cult leader used that one time I was roped into a cult. Is that really the future of marketing? Cult programming? I don't want this to be the case, but when you look out at everything out there, you can't help but see the signs. Aspirationally, I write for humans. Mostly I write for the version of myself that was struggling a decade ago, unable to get or retain employment. I create things to create the environment where there are more like me, and I can't do that if I'm selling to soulless automatons instead of humans. If the artificial intelligence tools were…well…intelligent, they should be able to derive meaning from unaltered writing instead of me having to change how I write to make them hook better into it. If the biggest thing they're sold for is summarizing text and they can't even do that without author cooperation, what are we doing as a society? Actually, what are we going to do when everyone that cares about the craft of software ages out, burns out, or escapes the industry because of the ownership class setting unrealistic expectations on people? Are the burnt out developers just going to stop teaching people the right ways to make software? 
Is society as a whole going to be right when they look back on the good old days and think that software used to be more reliable?

Frank Herbert's Dune universe had superintelligent machines at one point. They led to a galactic war that humanity barely survived. As a result, all thinking machines were banned, humanity was set back technologically, and a rule was created: Thou shalt not make a machine in the likeness of a human mind. For a very long time, I thought this was very strange. After all, in a fantasy scifi world like Dune, thinking machines could automate so much of the toil that humans had to process. They had entire subspecies of humans that were functionally supercomputers with feelings, used to calculate the impossibly complicated stellar drift equations so that faster-than-light travel didn't result in the ship zipping into a black hole, star, moon, asteroid, or planet. After seeing the impact across humanity in late 2024 and into 2025, I completely understand the point Frank Herbert was making.

It makes me wish I could leave this industry, but this is the only thing that pays enough for me to afford life in a world where my husband gets casually laid off after six and a half years at the same company because some number in a spreadsheet put him on the shitlist. Food and rent keep going up here, but wages don't. I'm incredibly privileged to be able to work in this industry as it is (I make enough to survive, don't worry), but I'm afraid that we're rolling the ladder up behind us so that future generations won't be able to get off the ground.

Maybe the problem isn't the AI tools, but the way they are deployed, who benefits from them, and what those benefits really are.
Maybe the problem isn't the rampant scraping, but the culture of taking without giving anything back, which ends up with groups providing critical infrastructure like FFmpeg, GNOME, Gitea, FreeBSD, NetBSD, and the United Nations having to resort to increasingly desperate measures to maintain uptime. Maybe the problem really is winner-take-all capitalism.

The deployment of generative artificial intelligence tools has been a disaster for the human race. They have allowed a select few to gain "higher productivity"; but they have destabilized society, have made work transactional, have subjected artists to indignities, have led to widespread psychological suffering for the hackers that build the tools AI companies rely on, and have inflicted severe damage on the natural world. The continued development of this technology will worsen this situation. It will certainly subject human beings to greater indignities and inflict greater damage on the natural world, it will probably lead to greater social disruption and psychological suffering, and it may lead to increased physical suffering even in "advanced" countries.

For other works in a similar vein, read these:

Special thanks to the following people that read and reviewed this before release:

Cassidy Williams 4 months ago

Generating open graph images in Astro

Something that always bugged me about this blog is that the open graph/social sharing images used the same image for every single post. I had made myself a blank SVG template (of just the rainbow-colored pattern) literally years ago, but didn’t want to manually create an image per blog post. There are different solutions out there for this, like the Satori library, or using a service like Cloudinary, but they didn’t fit exactly how I wanted to build the images, and I clearly have a problem with control. So, I built my own solution!

Last year, I made a small demo for Cosynd with Puppeteer that screenshotted websites and put them into a PDF for our website copyright offering, aptly named screenshot-demo. I liked how simple that script was, and thought I could follow a similar strategy for generating images. My idea was to: And then from there, I’d do this for every blog title I’ve written. Seemed simple enough? Reader, it was not. BUT it worked out in the end!

Initially, I set up a fairly simple Astro page with HTML and CSS. With this, I was able to work out what size and positioning I wanted my text to be, and how I wanted it to adjust based on the length of the blog post title (both in spacing and in size). I used some dummy strings to do this pretty manually (like how I wanted it to change ever so slightly for titles that were 4 lines tall, etc.).

Amusingly, this kind of particular design work is really fun for me, and basically impossible for AI tools to get right. They do not have my eyes nor my opinions! I liked feeling artistic as I scooted each individual pixel around (for probably too much time) and made it feel “perfect” to me (and moved things in a way that probably 0 other people will ever notice).

Once I was happy with the dummy design I had going, I added a function to generate an HTML page for every post, so that Puppeteer could make a screenshot for each of them. With that strategy, everything worked well.
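The original post includes the template's code, which didn't survive extraction here. A first version of such a page might look something like this (a reconstruction; the markup, sizes, and class names are my assumptions, not Cassidy's actual code):

```html
<!-- A 1200x630 card with a dummy title, for eyeballing text size and position -->
<html>
  <head>
    <style>
      body { margin: 0; }
      .card {
        width: 1200px;
        height: 630px;
        display: flex;
        align-items: center;
        padding: 60px;
        box-sizing: border-box;
        background: url("/og-template.svg"); /* the blank rainbow template */
      }
      .card h1 { font-size: 72px; line-height: 1.1; }
      /* nudge the size down for long titles that wrap to ~4 lines */
      .card h1.long { font-size: 56px; }
    </style>
  </head>
  <body>
    <div class="card"><h1>Dummy blog post title goes here</h1></div>
  </body>
</html>
```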
But my build times were somewhat long, because altogether the build was generating an HTML page per post (for people to read), a second HTML page per post (to be screenshotted), and then a screenshot image from that second HTML page. It was a bit too much. So, before I get into the Puppeteer script part with you, I’ll skip to the part where I changed up my strategy (as the kids say) to use a single page template that accepted the blog post title as a query parameter.

The Astro page I showed you before is almost exactly the same, except for a new script, which I put at the bottom of the page in a script tag so it would run client-side. (That function is an interesting trick I learned a while back: textarea tags treat their content as plaintext, which avoids accidental or dangerous script execution, and their value property gives you decoded text without any HTML tags. I had some blog post titles that had quotes and other special characters in them, and this small function kept them from breaking in the rendered image!)

Now, if you want to see a blog post image pre-screenshot, you can go to the open graph route here on my website and see the rendered card!

In my folder, I have a script that takes the template, launches a browser, navigates to the template page, loops through each post, sizes it to the standard Open Graph size (1200x630px), and saves the screenshot to my designated output folder. From here, I added the script to my package.json, so I can run it to render the images, or have them render right after a build!

This is a GitHub Gist of the actual full code for both the script and the template! There was a lot of trial and error with this method, but I’m happy with it. I learned a bunch, and I can finally share my own blog posts without thinking, “gosh, I should eventually make those open graph images” (which I did literally every time I shared a post).

If you need more resources on this strategy in general: I hope this is helpful for ya!
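The steps described above can be sketched roughly like this (a sketch of the approach, not Cassidy's actual code; file names, routes, and the `posts` list are assumptions — her full version is in the linked Gist):

```javascript
// Decode HTML entities in a title. In the browser, a <textarea> does the
// full decoding (the trick mentioned in the post); the small entity map
// below is a fallback so the function also runs outside the DOM.
function decodeHtml(html) {
  if (typeof document !== "undefined") {
    const textarea = document.createElement("textarea");
    textarea.innerHTML = html;
    return textarea.value;
  }
  return html
    .replace(/&quot;/g, '"')
    .replace(/&#39;/g, "'")
    .replace(/&lt;/g, "<")
    .replace(/&gt;/g, ">")
    .replace(/&amp;/g, "&"); // last, to avoid double-decoding
}

// Build the template URL for one post title (the query-parameter strategy).
function ogUrl(base, title) {
  return `${base}/og?title=${encodeURIComponent(title)}`;
}

// Loop through posts, size the page to the Open Graph standard 1200x630,
// and save a screenshot per post.
async function generateImages(posts, base, outDir) {
  // required lazily so the helpers above work without puppeteer installed
  const puppeteer = require("puppeteer");
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 1200, height: 630 });
  for (const post of posts) {
    await page.goto(ogUrl(base, post.title), { waitUntil: "networkidle0" });
    await page.screenshot({ path: `${outDir}/${post.slug}.png` });
  }
  await browser.close();
}
```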

Den Odell 4 months ago

Hacking Layout Before CSS Even Existed

Before flexbox, before grid, even before floats, we still had to lay out web pages. Not just basic scaffolding: full designs. Carefully crafted interfaces with precise alignment, overlapping layers, and brand-driven visuals. But in the early days of the web, HTML wasn’t built for layout. CSS was either brand-new or barely supported. Positioning was unreliable. Browser behavior was inconsistent. And yet, somehow, we made it work.

So how did we lay out the web? With tables. Yup. Tables. Not the kind used for tabular data. These were layout tables: deeply nested, stretched and tweaked, often stuffed with invisible spacer GIFs to push elements into place. Text, links, and buttons were dropped into cells and floated among a scaffolding of invisible structure. If you were building websites in the late ’90s or early 2000s, this will sound familiar. If not, consider this a quick trip back to one of the more creatively chaotic eras of frontend development, and what it can still teach us today.

HTML began as a way to mark up academic documents. Headings, paragraphs, links, lists: that was about it. There was no real concept of “layout” in the design sense. Early browsers had little support for positioning or styling beyond font tweaks and basic alignment. But developers still wanted structure. And clients, especially as the web grew more mainstream, wanted their brands to show up consistently. They expected full visual treatments: custom type, precise alignment, multi-column layouts, and logos in exactly the right place. Designs weren’t just documents; they were meant to look like something. Often like print. Often pixel-perfect. So we did what developers always do: we got creative.

HTML tables gave us something no other element did at the time: control. You could create rows and columns. You could define cell widths and heights. You could nest tables inside tables to carve up the page into zones. That control was intoxicating. It wasn’t elegant. It definitely wasn’t semantic. But it worked.
Spacer GIFs like the one above were a standard trick. You’d create a 1×1 pixel transparent image, then stretch it using width and height attributes to force the browser to reserve space. There were entire toolkits built to generate spacer-driven layouts automatically. If you wanted padding, you’d nest another table. For alignment, you’d add empty cells or tweak the align attribute. And when that wasn’t enough, you’d resort to comment-tag hacks or browser-specific rendering quirks just to make things behave.

At agencies like AKQA, where I worked at the time, the designs weren’t simple page frames. They were fully realized compositions, often with custom art direction, background textures, and layered effects. We’d receive static visuals, usually Photoshop files, and break them apart manually into dozens of individual image slices. Some slices were background textures. Some were visual foreground elements: shadows, corners, borders, custom typography before @font-face existed. Then we’d reassemble everything with HTML tables, mixing sliced images with live HTML (real text, buttons, form inputs) to recreate the original design as closely as browsers would allow. It was part engineering, part pixel-pushing, part dark art.

It’s easy to laugh now, but back then layout tables gave us something CSS didn’t: predictability. CSS support was spotty. Browsers implemented it inconsistently. You could spend hours tweaking styles, only to have them break in IE5.5. Tables weren’t perfect, but they rendered the same almost everywhere. WYSIWYG tools like Dreamweaver leaned hard into the table model. You’d drag content into cells and it would spit out layers of nested HTML you weren’t really meant to touch. Was it bloated? Yes. Fragile? Absolutely. But it shipped.

CSS1 arrived in 1996. CSS2 in 1998 brought positioning, z-index, and media types. But it took years for browsers to catch up, and even longer for developers to trust it.
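A typical layout table of that era looked something like this (a reconstruction from memory of the pattern, not markup from the original article):

```html
<!-- A late-'90s two-column page carved out of table cells, with a
     stretched 1x1 transparent GIF forcing a fixed-width gutter. -->
<table width="600" border="0" cellpadding="0" cellspacing="0">
  <tr>
    <td width="150" valign="top" bgcolor="#003366">
      <!-- navigation column, built from sliced images -->
      <a href="home.html"><img src="nav_home.gif" border="0"></a>
    </td>
    <td width="20"><img src="spacer.gif" width="20" height="1"></td>
    <td width="430" valign="top">
      <!-- content column; padding came from yet another nested table -->
      <table cellpadding="5"><tr><td>Welcome to our site!</td></tr></table>
    </td>
  </tr>
</table>
```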
The table era didn’t really end until the mid-2000s, when modern browsers matured and CSS layout finally became viable. Even then, it took time for the idea of separation of concerns to take hold: structure in HTML, style in CSS, behavior in JavaScript.

Now we have flexbox and grid. We can align elements without nesting. We can reorder content for accessibility. We can build responsive layouts without a single spacer GIF in sight. What used to take 100 lines of messy table markup now takes 10 lines of clean, declarative CSS. It’s better for developers, and for users, especially those using assistive tech that struggled to parse table-based scaffolding.

A few lessons from the layout table era still hold true:

Cross-browser consistency matters. Even now, not everything renders the same. Test broadly.

You’ll always work with constraints. Back then it was no CSS. Today it might be legacy code, team skills, or framework limitations. Creativity under constraint is part of the job.

Understand the tools you’re misusing. Tables weren’t designed for layout, but we understood them deeply. That same mindset helps today when bending modern tools to fit the real world.

Table-based layouts were a workaround. But they also reflect something constant about web development: we’re always adapting. Always hacking. Always building better experiences with the tools we have, until the next set of tools comes along. So next time you float a div or write a neat little grid template, give a small nod to the table layouts that walked so Flexbox could run.
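As a rough illustration of the contrast (class names are mine), the kind of two-column frame that once needed nested tables and spacer GIFs is now a handful of declarative lines:

```css
/* Two columns and a 20px gutter: no nested tables, no spacer GIFs.
   The markup is just a wrapper with a nav element and a main element. */
.page {
  display: grid;
  grid-template-columns: 150px 1fr;
  gap: 20px;
}
```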

Josh Comeau 4 months ago

Partial Keyframes

CSS keyframe animations are so much more powerful than most developers realize. In this tutorial, I’ll show you something that completely blew my mind: a technique that makes our keyframe animations so much more reusable and dynamic! 🤯
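For a taste of the idea (my own sketch, not code from Josh's tutorial): when a @keyframes rule omits its from step, the browser fills it in from each element's current styles, so one animation adapts to every element it's applied to.

```css
/* No `from` block: each element animates from wherever it currently is,
   so the same keyframes serve differently-positioned elements. */
@keyframes slide-home {
  to {
    transform: translateX(0);
  }
}

.card {
  transform: translateX(-100px);
  animation: slide-home 500ms ease-out;
}
.badge {
  transform: translateX(40px);
  animation: slide-home 500ms ease-out; /* same animation, different start */
}
```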
