Latest Posts (16 found)
Cassidy Williams 1 week ago

A moose playing Go in a park while drinking boba

I tried playing with the new Sora 2 model this week. I am not a huge fan of AI-generated art and videos (side note, see my blog’s AI manifesto), but I like to be aware of their capabilities. My main “test” I try out with pretty much all AI image and video creating tools is to prompt them to render “a moose playing Go in a park while drinking boba.” Kind of like my own version of pelicans on a bicycle. It… never works. It’ll get close, kind of, and I will say, Sora 2 was better than previous attempts with video. But, I will not show you the video results, because the results genuinely just kind of made me uncomfortable. I will show you images, but first, let me explain.

I think this prompt specifically has some challenges that AI has yet to overcome:

- Moose are kind of weird animals.
- The grid on a go board is 19x19 and counting is very hard for AI tools.
- Go pieces look an awful lot like tapioca balls in a boba cup.
- A small problem I can forgive, but a very real problem, is that a natural game of go has the same number of pieces on the board in each color, and some arrangements of pieces just don’t exist in a real game.

In every single attempt I’ve tried (I have tried this with pretty much every video and image generation tool you can think of), it has at least one of these problems, if not most of them:

- The moose isn’t a moose, or doesn’t stay a moose (Sora 2 transformed the moose into… some kind of scary hairy blob on several occasions)
- The moose ears and antlers aren’t in the right spot (did you know that antlers are like giant “hearing aids” for moose? They’re like giant parabolic dishes for sound. So cool.)
- The moose is just nearby while a random man plays go instead
- The boba straw is jank in some way (Sora 2 had the straw shrink as the moose drank from it at least 3 different times)
- The go pieces are not the same sizes on the board (in most video generations, the pieces pulse in size? Which is weirdly unsettling.)
- The actual gameplay is super wrong on the go board (incorrect number of pieces, non-sensical placements, pieces just “on the board” instead of in proper positions on the lines of the grid)
- The go board is a weird shape (in videos, it’s often concave like a bowl, and the grid shifts around)
- There’s no bowls of go stones on the side of the board (or anywhere)
- The moose has sunglasses on (?) and the reflection in the sunglasses doesn’t match the board
- There are go pieces in the cup of boba, or the boba ends up being the go stones
- The game isn’t actually go

I do massage the prompt, like sometimes I’ll give it some more details or iterate on it, but alas, these problems are still pretty consistent. Which I’m okay with! It’s a good test!

Here’s some examples of outputs I’ve gotten (first one being a snapshot of a Sora 2 video that almost looked good, until the moose turned into a nightmare creature, the straw floated around the go board, and the pieces moved themselves into a corner):

Before you say, “Now, Cassidy, you’re being a bit strict with these AI tools, these are pretty dang close. One might say, even, that they are okay.” Sure, sure. But, I counter: no human artist would ever make these mistakes. If I asked an artist to draw/paint/create a moose playing go in the park while drinking boba, the straw would be in the cup. The go board would be valid. A man would not be drinking the boba. The moose would be a moose.

Cassidy Williams 2 weeks ago

Using Notebook Navigator and Cupertino in Obsidian

I’ve written before that I use Obsidian as my “second brain” tool, but lately I’ve been experimenting with my setup to make it better, particularly on mobile. I’ve written before about how I publish to my blog from Obsidian, and I want to make that smoother in general, too. And y’all… the Notebook Navigator plugin + Cupertino theme combination is the best I’ve tried in a while.

I’m currently typing this on my iPad, and I also regularly write notes on my phone. Before, it was… just okay? It felt like a very obvious non-native app experience. Which is, again, okay. But anyway, this combination feels very native on mobile, which is a game-changer (I know it shouldn’t be, but I’ve accepted that aesthetics matter for my own motivation)!

Notebook Navigator specifically changes how you… navigate your notes. Aptly named. I particularly like that you can see what tags a note has without having to open it, and there are options to show when you last opened/edited a note, an easy way to see your tags, and some icon options as well.

I have my own theme I’ve made, Cardstock, and I’m going to be borrowing some ideas for that from Cupertino to improve it. There’s still some small spacing things on iPad that aren’t perfect in Cupertino, but this is the smoothest my mobile typing experience has been in a while. I still like Cardstock for a computer, and I’ll be updating it!

Cassidy Williams 3 weeks ago

2000 Poops

Flash back to Spring 2020, when we were all confused and uncertain about what the world was going to look like, and unsure of how we would stay connected to each other. One of my cousins texted our cousin group chat mentioning the app Poop Map as a cheeky (heh) way of keeping up with the fam. We started a family league, and it was honestly pretty great. We’d congratulate each other on our 5-star poops, and mourn the 1-stars. Over time I made other leagues with friends online and offline, and it was really fun. I even talked about it on Scott Hanselman’s podcast when he asked about how to maintain social connections online (if you wanna hear about it, listen at the 11 minute mark in the episode). Eventually, people started to drop off the app, because… it’s dumb? Which is fair. It’s pretty dumb. But alas, I pride myself in being consistent, so I kept at it. For years. The last person I know on the app is my sister-in-law’s high school friend, also known by her very apt username, . She and I have pretty much no other contact except for this app, and yet we’ve bonded. 2000 poops feels like a good place to stop. With 12 countries covered around the world and 45 achievements in the app (including “Are you OK?” courtesy of norovirus, and “Punctuate Pooper” for going on the same day for 12 months in a row), I feel good about saying goodbye. My mom is also really happy I’m stopping. Wonder why? Anyway, goodbye, Poop Map, and goodbye to the fun usernames for the friends along the way: (that’s me), , , , , , , , , , , , , , , , , and of course, . Also, before you go, here’s a fun data visualization I made of all my entries ! Smell ya later!

Cassidy Williams 3 weeks ago

Questions to ask when you think you need to finish something

I’ve written before about how I am sometimes haunted by my own side projects that I should finish, but I also want to pursue a shiny new thing instead. After shipping some projects (like PocketCal and Ductts, among others), I think I’ve refined my list of questions that I ask myself when I want to avoid working on an existing project and pursue something else instead. And now… I give them to you. Anyway, doing a little “mental audit” around projects has helped me ship better when I realize some projects I pursue aren’t actually worth it to me. And sometimes, the answers to these questions help me actually finish a project, because my answer reminds me of why I started it in the first place! I hope this helps you!

Cassidy Williams 4 weeks ago

Playing with Fliiip Book

I learned about Fliiip Book on the internet recently and it’s pretty cool. It’s “a simple gif animation app for the web” and does what it says on the tin! It’s free, and has features like: I haven’t really played with an animation tool properly before. This was great because it’s small and approachable, while also clear on how much it can do. I used to love taking physical notepads and making little flip books with them growing up, and so this made me feel nostalgic while also learning a bunch. Great combo, if you ask me. Here’s a little gif I made in a few minutes! The creator, Jonathan , has some other cool projects on his website like Drawww Time and Paint List . I love seeing tiny powerful tools. Credit to Stef Walter for sharing this one! See ya next time.

Cassidy Williams 1 month ago

I made a tree visualizer

I was going through some old folders on my laptop and found some old code from my old React Training teaching days for visualizing how component trees in React work, and turned it into its own standalone web app! Here’s the app, if you’re not in the mood to read more.

Back when I was teaching React full time, one of my fellow teachers, Brad, made a tool that let you show a tree (like a data structure tree), and we would use it in our lectures to explain what prop drilling and context were, and how components could work together in React apps. I loved using the tool, and thought about some use cases for it when I found the old code, so I spruced it up a bit.

It has some keyboard commands for usage: Most useful (to me) is the ability to quickly make images for sharing!

The original code had each node maintain its own internal state, and it was kind of recursive in how they rendered (see exact lines here). In retrospect, I probably should have changed it to be rendered with a big state tree at the top and serialized in the URL or local storage (kind of like what I did with PocketCal), but eh, it works! See ya next time!
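P.S. If you’re curious what I mean by the two approaches, here’s a tiny sketch (not the visualizer’s actual code, just the general shape of it): each node owning its own state and rendering itself recursively, with the lifted-state alternative described in a comment.

```jsx
import { useState } from "react";

// A minimal recursive node: each node owns its own list of children in
// local state, and renders a <TreeNode> for each of them.
function TreeNode({ label }) {
  const [children, setChildren] = useState([]);

  const addChild = () =>
    setChildren((kids) => [...kids, `${label}.${kids.length + 1}`]);

  return (
    <div className="node">
      <button onClick={addChild}>{label} (add child)</button>
      <div className="children">
        {children.map((childLabel) => (
          <TreeNode key={childLabel} label={childLabel} />
        ))}
      </div>
    </div>
  );
}

export default function App() {
  return <TreeNode label="root" />;
}

// The alternative: one plain-object tree ({ label, children: [] }) held in
// state at the top of the app, so the whole thing could be serialized into
// the URL or local storage, like PocketCal does.
```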

Cassidy Williams 1 month ago

This is probably the most I will ever pretend

Right now, I’m still on maternity leave, with my 4 month old and my 2.5 year old in tow. My toddler is getting chattier by the day, and I actively watch her imagination go wild as we play “kitchen” or play “let’s wait for the bus” or play “shopping at Costco” or play “the horse has to go poop” (and so on). It’s honestly kind of tiring to pretend so much, but not necessarily in a bad way. It’s a muscle I haven’t flexed in so long. I’ve had to remember how anything can be anything and how to fake silly voices and emotions. I insert my own humor into it where I can (my favorite lately is that I shout “sing, my angel!” in a deep voice like in Phantom of the Opera, and she has to sing a high note. She has no idea why, but it’s the house rule now) and it’s awesome. It’s like a constant improv class of “yes and”-ing. It also just hit me that… once my 4 month old gets to a playful age, I will probably pretend less and less as he learns to participate in playing. If we do decide to have more kids, the older two will be able to entertain themselves enough that they will play together probably more than they’ll play with me. Sure, I’ll play with them still (and I cannot WAIT to introduce board games and other things that I find fun as an adult), of course. But this moment in time is the most I will probably ever play pretend, for hours on end, for days on end. So, as tiring as it is, I’m going to savor it as much as I can.

Cassidy Williams 1 month ago

Ductts Build Log

I built and released Ductts, an app for tracking how often you cry! I built it with React Native and Expo (both of which were new to me) and it was really fun (and challenging) putting it together.

Yes! I should have anticipated just how many people would ask if I’m okay. I am! I just like data. Here’s a silly video I made of the app so you can see it in action first!

The concept of Ductts came from my pile of domains, originally from November 2022 (according to my logs of app ideas, ha). I revisited the idea on and off pretty regularly since then, especially when I went through postpartum depression in 2023, and saw people on social media explain how they manually track when they cry in their notes apps for their therapists.

I had a few different name ideas for the app, but more than anything I wanted it to have a clever logo, because it felt like there was a good opportunity for one. I called it crycry for a while, CryTune, TTears (because I liked the idea of the emoticon being embedded in the logo), and then my cousin suggested Ductts! With that name I could do the design idea, and I thought it might be a fun pun on tear ducts and maybe a duck mascot. Turns out ducks are hard to draw, so I just ended up with the wordmark:

I really wanted this app to be native so it would be easy to use on a phone! I poked around with actually using native Swift, but… admittedly the learning curve slowed me down every time I got into it and I would lose motivation. So, in a moment of yelling at myself to “just build SOMETHING, Cassidy” I thought it might be fun to try using AI to get me started with React Native! I tried a0 at first, and it was pretty decent at making screens that I thought looked nice, but at the time when I tried it, the product was a bit too immature and wouldn’t produce much that I could actually work with. But, it was a good thing to see something that felt a bit real!

So, from there, I started a fresh Expo app with:

I definitely stumbled through building the app at first because I used the starter template and had to figure out which things I needed to remove, and probably removed a bit too much at first (more on that later). I got very familiar with the Expo docs, and GitHub Copilot was helpful too as I asked about how certain things worked.

In terms of the “order” in which I implemented features, it went like this:

And peppered throughout all of this was a lot of styling, re-styling, debugging, context changes, design changes, all that jazz. This list feels so small when I think about all of the tiny adjustments it took to make drawers slide smoothly, gestures move correctly, and testing across screen sizes.

There’s a few notable libraries and packages that I used specifically to get everything where I wanted:

I learned a lot about how Expo does magic with their Expo Go app for testing your apps. Expo software developer Kadi Kraman helped explain it to me best:

A React Native app consists of two parts: you have the JS bundle, and all the native code. Expo Go is a sandbox environment that gives you a lot of the native code you might need for learning and prototyping. So we include the native code for image, camera, push notifications and a whole bunch of libraries that are often used, but it’s limited due to what is possible on the native platforms. So when you need to change things in the native-land, you need to build the native code part yourself (like your own custom version of Expo Go basically).
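For reference, here’s roughly what a screen in a fresh Expo Router app looks like. This is a generic sketch of the starter shape, not the actual Ductts code, and the text and handler are placeholders:

```jsx
// app/index.js in an Expo Router project: the screen rendered at "/"
// (a root app/_layout.js exporting a <Stack /> from "expo-router" wraps it)
import { View, Text, Pressable } from "react-native";

export default function Home() {
  return (
    <View style={{ flex: 1, alignItems: "center", justifyContent: "center" }}>
      <Text>How are we feeling today?</Text>
      <Pressable onPress={() => console.log("a cry entry would get logged here")}>
        <Text>Log an entry</Text>
      </Pressable>
    </View>
  );
}
```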
One of the things I really wanted to implement was an animated splash screen, and y’all… after building the app natively, properly, about a million times, I decided that I’m cool with it being a static image. But, here’s the animation I made anyway, for posterity:

So many things are funky when it comes to building things natively, for example, how dependencies work and what all is included. There are a handful of libraries where I didn’t read the README (I’m sorry!!!!) and just installed the package to keep moving forward, and then learned that the library would work fine in Expo Go, but needed different packages installed to work natively. Phew. Expo Router is one of them, where again, if I had just read the docs, I could have known that I shouldn’t have removed certain packages when using expo-router. This is actually what you need to run if you want to install expo-router:

Kadi once again came in clutch with a great explanation:

The reason this sometimes happens is: Expo Go has a ton of native libraries pre-bundled for ease of development. So, even if you’re not installing them in your project, Expo Go includes the native code for them. For a specific example, e.g. this QR code library requires react-native-svg as a peer dependency and they have it listed in the instructions. However, if you were to ignore this and only install the QR code library, it would still work in Expo Go, because it has the native code from react-native-svg pre-bundled. But when you create a development build, preview build, or a production build, we don’t want to include all the unused code from Expo Go; it will be a completely clean build with only the libraries you’ve installed explicitly.

The Expo Doctor CLI tool saved my bacon a ton here as I stumbled through native builds, clearing caches, and reinstalling everything. Kadi and the Expo team actually made a PR to help check for peer dependencies after I asked them a bunch of questions, which was really awesome of them!

Y’all, shipping native apps is a horrible experience if you are used to web dev and just hitting “deploy” on your provider of choice. I love web development so much. It’s beautiful. It’s the way things should be. But anyway, App Store time. I decided to just do the iOS App Store at first because installing the Android Simulator was the most wretched developer experience I’ve had in ages and it made me want to throw my laptop in the sea.

Kadi (I love you Kadi) had a list of great resources for finalizing apps: TL;DR: Build your app, make a developer account, get 3-5 screenshots on a phone and on a tablet, fill out a bunch of forms about how you use user data, make a privacy policy and support webpage, decide if you want it free or paid, and fill out forms if it’s paid.

Y’all… I’m grateful for the Expo team and for EAS existing. Their hand-holding was really patient, and their Discord community is awesome if you need help. Making the screenshots was easy with Expo Orbit, which lets you choose which device you want for each screenshot, and I used Affinity Designer to make the various logos, screenshots, and marketing images it needed.

I decided to make the app just a one-time $0.99 purchase, which was pretty easy (you just click “paid” and the amount you want to sell it for), BUT if you want to sell it in the European Union, you need to have a public address and phone number for that. It took a few pieces of verification with a human to make that work.
I have an LLC with which I do consulting work and used the registered agent’s information for that (that’s allowed!), so that my personal contact info wouldn’t be front-and-center in the App Store for all of Europe to see.

The website part was the least of my worries, honestly. I love web dev. I threw together an Astro website with a link to the App Store, a Support page, and a Privacy Policy page, and plopped it on my existing domain name, ductts.app. One thing I did dive deep on, which was unnecessary but fun, was an Import Helper page to help make a Ductts-compatible spreadsheet for those who might already track their tears in a note on their phone. Making a date converter and a sample CSV and instructions felt like one of those things that maybe 2 people in the world would ever use… but I’m glad I did it anyway.

Finally, after getting alllll of this done, it was just waiting a few days until the app was finally up on the App Store, almost anticlimactically! While I waited I made a Product Hunt launch page, which luckily used all the same copy and images from the App Store, and it was fun to see it get to the #4 Health & Fitness app of the day on Product Hunt, and #68 in Health & Fitness on the App Store!

I don’t expect much from Ductts, really. It was a time-consuming side project that taught me a ton about Expo, React Native, and shipping native apps, and I’m grateful for the experience. …plus now I can have some data on how much I cry. I’m a parent! It happens!

Download Ductts, log your tears, and see ya next time.

Cassidy Williams 2 months ago

Making a customizable wooden phone for my toddler

So, I know that I shouldn’t be on my phone as much in front of my toddler but… I do look at my phone. And she wants one. Lately she’d been playing with a pack of gum and pretending that it’s a phone, and I thought… what if I made her one instead? I got to sketching in my notebook, then eventually on my computer, to come up with a concept for a laser-cuttable baby phone that kind of felt like a tape recorder, where you could insert different “screens” depending on what she wanted to do on the phone. Yes, reader, I invented apps. The shape came out like this, ultimately: The leftmost rounded rectangle is the “back” of the phone, the middle one is the top layer, a “window” of sorts, and then the rightmost is the middle layer, where the “U” shape would be glued between the other two layers, and the fat “T” shape would be the screen that I could slide in and out. I made the phone to be 4 inches by 2.5 inches, which would fit in a little toddler hand, but not too small for an adult one (pretty sure I had some old phones in the mid-2000s that were smaller than that). With these shapes in mind, I could then focus on just designing different “app” screens. The ones I use in front of her the most are video calls, texting, and watching videos. It was really fun doodling these and customizing them to what she likes. You might notice the colors of the drawings and shapes. After laser cutting enough times, I’ve learned how to optimize my own files so I always make black lines to be my “cutting” lines, blue to be my engraving shapes, and red to be my “scoring” (or tracing) lines (basically a thin engraving where the laser follows the path rather than going layer by layer). After making these… I realized I had a whole lot more space on my sheet of wood, enough to make two phones and a whopping 10 screens! I ended up adding in: Also, as I was prepping the file a bit more, I realized that the empty “window” frame (the middle rectangle in the first picture) could be used, so I turned that into a little “clean/not clean” indicator for our dishwasher! I also added a pretend camera and fruit logo onto the phone back so that it could feel a bit more real there, too. And then… it was laser time! The library has a laser cutter, so after a very long ~2 hour cut and engraving session and some glue, the phones and screens were aliiiiive! I had to tweak some settings which resulted in some screens being darker than others, but my baby does not care, so I choose not to. Here’s the final results! This ended up being a bigger and more time-consuming project than I originally intended, but I’m really happy with how it turned out! As an aside… literally everyone I’ve shown these phones to has said I should start a business and sell them. It could be fun! But also… sometimes I just don’t feel like capitalism-ing. Sometimes I just want to make things for the joy of making things. No hate to those people, of course. But I am content with the fact that my toddler is happy, and thus I am happy. If you do want to make these for yourself though… maybe I could be convinced to prep a file for you, as a treat. Maybe. Ugh, so much work. It’s a maybe.

Cassidy Williams 2 months ago

Making a faded text effect in (mostly) CSS

I watched a video recently that had text fading away. I thought it’d be cool to recreate that type of effect in CSS! The final output: See the Pen Fading away text effect by Cassidy (@cassidoo) on CodePen.

I initially thought I might do this very manually with selectors, which I think works well if you hard-code span tags around characters in a string. That would be a more “pure” CSS solution, but not nearly as flexible as what I would like! Another way would be to use background-clip and a gradient, which looks cooler for paragraphs but not quite what I was going for:

This makes the text transparent so the background shows through, then adds a gradient background to the text that goes from opaque to transparent, then clips the background to the text. Again, looks cool, but I wanted a per-letter effect.

The CodePen embedded above describes what’s happening, but for posterity, here’s the same information but with some more details: The JavaScript function splits all of the characters in an element that has a given class. Then, it wraps each character with a span. Each of those spans has a class that assigns it a CSS variable based on its index:

Then, with the power of CSS, it applies a blur filter and opacity based on those variables! This was fun, hope you liked it!
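P.S. If you just want the core idea without opening the Pen, here’s a rough sketch of the approach. The class name and CSS variable are placeholders I picked, not necessarily what the CodePen uses:

```js
// Split the text of every element with the target class into
// per-character <span>s, and give each span its index as a CSS variable.
document.querySelectorAll(".fade-away").forEach((el) => {
  const chars = el.textContent.split("");
  el.textContent = "";
  chars.forEach((char, i) => {
    const span = document.createElement("span");
    span.textContent = char;
    span.style.setProperty("--char-index", i);
    // The CSS side then consumes the variable, something like:
    //   .fade-away span {
    //     filter: blur(calc(var(--char-index) * 0.3px));
    //     opacity: calc(1 - var(--char-index) * 0.05);
    //   }
    el.appendChild(span);
  });
});
```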

Cassidy Williams 2 months ago

That Windy City Keeb Meet 2025 recap

This week I went to and spoke at the latest Chicago mechanical keyboard meetup, called That Windy City Keeb Meet 2025 ! I hadn’t been to a keyboard meetup in a long time! I used to help organize the Seattle meetups back pre-2020, and really love the community, but between moving cities and life changes, it’s just been hard to get to one again. The last event I went to was during the pandemic in winter 2021, where a handful of us met in a piano store and showed our vaccination cards to get in. It feels like a lifetime ago! Anyway, it was a blast finally getting to go to one of these meetups again. I spoke on a panel about keyboard design with Andrew of Bowl Keyboards and Jack of Pikatea , watched a talk about switch variances (all talks will be uploaded to the community YouTube channel soon), won a fun keycap raffle prize, bought some merch from sponsoring vendors, and saw SO many cool keyboards. I also got to meet some folks who I had only spoken with online before, which is always a great time! My little display featured my Micro Journal as well as a couple of my other boards, all with keycaps I designed! Again, I’m so happy I was able to make it to this meetup. I can’t wait for the next one, and if you’re in town next time, you should check out Chicago’s keyboard scene ! Also, in case you’re interested in things I designed, here’s some buy links:

Cassidy Williams 2 months ago

Have GitHub Copilot see your diff (and other cool tricks)

I learned today that you can have GitHub Copilot pay attention specifically to the current changes in your repository! I had been working on a branch for a few days on a project, and realized I broke something along the way. I had been working just long enough that scouring the diff would probably take a while, but I also wanted to parse a bit more about what I had specifically done in certain files. And thus, I learned about #changes!

When you use GitHub Copilot (specifically the chat mode; I personally use VS Code), you can use #changes as a variable in your message to list out your current changes before you commit. So, for example, you can say: “Summarize all of my #changes so far” or, how I used it… “I broke the Feed component. Which files touch the component currently in #changes?” Kinda nice!

If you want to check out the docs, there’s a bunch of variables you can use too, like: (and mooooore) Hope this was helpful!

Cassidy Williams 2 months ago

Using personal instructions in GitHub Copilot Chat

When you use GitHub Copilot online , you can add personal instructions to your responses so that it always responds in a certain way. For example, you could say, “always respond to me in Spanish,” or “all code samples should be given to me in JavaScript.” I’ve used these custom personal instructions for a few different cases, mostly for helping me with programming languages or technologies I don’t know as well (obligatory AI Manifesto mention). There’s also a directory called prompts.chat that I know some friends use for other use-cases! The personal instructions I use the most are from a prompt that I’ve iterated on a few times: You are a professional developer advocate. When you write, you speak in a friendly tone. You don’t add extra emojis or em dashes. You write to developers as if they are your buddy. You are technical and aren’t afraid of including code samples. Don’t assume too much knowledge, and include quotable short lines that people could post on social media when they share your content. I really like the responses I get with this prompt. It doesn’t really change the answers I expect in any way, but the little taglines I get from it are cute, the code samples are concise, and the responses read like a friendly blog post. Here’s an example conversation with it ! Anyway, if you’d like to add personal instructions, you can go to github.com/copilot and click the menu to find where to add them. Here’s the docs if you want more details!

Cassidy Williams 3 months ago

Tools using tools

I posted about my project Better Security Questions recently and several people mentioned that instead of actually answering security questions, they generate a random answer in their password managers and use that random answer to log in. Their password tools are now handling all of the logging-in brainpower, rather than the human needing to.

Similarly, as people build new services and APIs and tools, we as an industry aren’t just optimizing for humans knowing how things work or remembering things anymore, but for AI agents and tools to use them. We have two types of users to build for now, whether we like it or not: humans and machines. Or agents, or assistants, or whatever you want to call them.

Developer experience (DevEx) does still matter, because professional developers ultimately still (for now? Ugh) are making the choices of technologies and tools and architecture. But now AI experience (AIEx? LLMEx? TexMex?) matters too, at the rate that we’re going.

I think a lot about this from a content perspective. With the humans in mind, developer advocates/devtool companies/content engineers/technical content creators (and so on) need to publicly “use all parts of the buffalo” and turn a demo into an open source project, a podcast episode, social media threads, a YouTube video, a blog post, a whitepaper (and so on, again) to cast a wide net for SEO and growth. You have to meet the people where they are, on all the platforms and all the things. With the machine in mind, you now need to “use all parts of the buffalo” for a different purpose: to seed training data, improve model recall, and ease differentiation from other tools. You have to add things like an llms.txt file or prompting assistance, beyond the content.

I think as technical people, we do need to accept that even though it’s early, all signs are pointing to AI being here to stay (and hopefully becoming a bit more “invisible” than it is right now in our newsfeeds). As we build, we need to think about how we’ll shape our own developer experience, and the developer experience of other humans. Chances are, a human using your tool/API/utility/library/etc. will have some kind of AI assistant. The humans will lead and they will delegate, and optimizing the experience for both is going to matter.

People will use tools to use tools… and sometimes those people are tools. I’d like to see a machine do an incredibly deep and objectively funny pun like that to end a blog post. HA.

Cassidy Williams 3 months ago

I (don't?) want to say yes to everything

Whenever I release a project publicly, people always have feature requests for it. And because most of my projects are open source, I can usually say yes to building it, or someone can implement said feature, and I can choose to say yes to how they built it. When I released PocketCal , people wanted more event groups. When I released W-9 Crafter , people wanted it to also generate W-8 forms. When I released Jumblie , people wanted dark mode and a puzzle archive. And so on and so on! I often genuinely want to say yes to every feature request. Sometimes they’re really great, easy-win ideas and I implement them right away, and sometimes they’re great ideas that… I may never get to, because it’ll just take too long. And also sometimes I just don’t like the idea, ha. I simply can’t say yes to them all, I can’t review every pull request or issue. The problem with saying yes to everything is that it isn’t… strategic . I don’t mean for like a product strategy (though it could be applied to that), but I mean for a time management one. If you say yes to everything that comes your way, you’re basically letting your inbox (the whole world!) dictate your direction and your time spent, instead of you choosing your direction and time spent. Anyway, I write all of this to say: be picky with what you say yes to. And if I don’t say yes right away to your idea, I promise it’s not that your idea is bad, I just have too many project ideas to try out, features to implement, and domain names to use!

Cassidy Williams 4 months ago

Generating open graph images in Astro

Something that always bugged me about this blog is that the open graph/social sharing images used this for every single post:

I had made myself a blank SVG template (of just the rainbow-colored pattern) for each post literally years ago, but didn’t want to manually create an image per blog post. There are different solutions out there for this, like the Satori library, or using a service like Cloudinary, but they didn’t fit exactly how I wanted to build the images, and I clearly have a problem with control. So, I built myself my own solution!

Last year, I made a small demo for Cosynd with Puppeteer that screenshotted websites and put them into a PDF for our website copyright offering, aptly named screenshot-demo. I liked how simple that script was, and thought I could follow a similar strategy for generating images. My idea was to:

And then from there, I’d do this for every blog title I’ve written. Seemed simple enough? Reader, it was not. BUT it worked out in the end!

Initially, I set up a fairly simple Astro page with HTML and CSS:

With this, I was able to work out what size and positioning I wanted my text to be, and how I wanted it to adjust based on the length of the blog post title (both in spacing and in size). I used some dummy strings to do this pretty manually (like how I wanted it to change ever so slightly for titles that were 4 lines tall, etc.). Amusing note: this kind of particular design work is really fun for me, and basically impossible for AI tools to get right. They do not have my eyes nor my opinions! I liked feeling artistic as I scooted each individual pixel around (for probably too much time) and made it feel “perfect” to me (and moved things in a way that probably 0 other people will ever notice).

Once I was happy with the dummy design I had going, I added a function to generate an HTML page for every post, so that Puppeteer could make a screenshot for each of them. With the previous strategy, everything worked well. But, my build times were somewhat long, because altogether the build was generating an HTML page per post (for people to read), a second HTML page per post (to be screenshotted), and then a screenshot image from that second HTML page. It was a bit too much.

So, before I get into the Puppeteer script part with you, I’ll skip to the part where I changed up my strategy (as the kids say) to use a single page template that accepted the blog post title as a query parameter. The Astro page I showed you before is almost exactly the same, except:

The new script on the page looked like this, which I put on the bottom of the page in a script tag so it would run client-side:

(That function is an interesting trick I learned a while back where textarea tags treat content as plaintext to avoid accidental or dangerous script execution, and their value gives you decoded text without any HTML tags. I had some blog post titles that had quotes and other special characters in them, and this small function fixed them from breaking in the rendered image!)

Now, if you wanted to see a blog post image pre-screenshot, you can go to the open graph route here on my website and see the rendered card!

In my folder, I have a script that looks mostly like this:

This takes the template, launches a browser, navigates to the template page, loops through each post, sizes it to the standard Open Graph size (1200x630px), and saves the screenshot to my designated output folder.

From here, I added the script to my package.json:

I can now run the script to render the images, or have them render right after a build!
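The shape of that script is roughly this (a simplified sketch, not the exact code; the local URL, route, post list, and output path here are placeholders rather than what my site actually uses):

```js
import puppeteer from "puppeteer";

// In the real script, the posts come from the blog's content; hard-coded here.
const posts = [
  { slug: "generating-open-graph-images-in-astro", title: "Generating open graph images in Astro" },
];

const browser = await puppeteer.launch();
const page = await browser.newPage();

// Standard Open Graph image size
await page.setViewport({ width: 1200, height: 630 });

for (const post of posts) {
  // The single template page, with the title passed as a query parameter
  await page.goto(
    `http://localhost:4321/og?title=${encodeURIComponent(post.title)}`,
    { waitUntil: "networkidle0" }
  );
  await page.screenshot({ path: `./public/og/${post.slug}.png` });
}

await browser.close();
```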
This is a GitHub Gist of the actual full code for both the script and the template! There was a lot of trial and error with this method, but I’m happy with it. I learned a bunch, and I can finally share my own blog posts without thinking, “gosh, I should eventually make those open graph images” (which I did literally every time I shared a post). If you need more resources on this strategy in general: I hope this is helpful for ya!
