Posts in Web (20 found)
./techtipsy 5 days ago

Why Nextcloud feels slow to use

Nextcloud. I really want to like it, but it’s making that really difficult. I like what Nextcloud offers with its feature set and how easily it replaces a bunch of services under one roof (files, calendar, contacts, notes, to-do lists, photos, etc.), but no matter how hard I try and how much I optimize its resources on my home server, it feels slow to use, even on hardware ranging from decent to good. Then I opened developer tools and found the culprit: it’s the JavaScript. On a clean page load, you will be downloading about 15-20 MB of JavaScript, which does compress down to about 4-5 MB in transit, but that is still a huge amount. For context, I consider 1 MB of JavaScript to be on the heavy side for a web page or app. Yes, that JavaScript will be cached in the browser for a while, but you will still be executing all of it on each visit to your Nextcloud instance, and that takes a long time due to the sheer amount of code your browser now has to run on the page. A significant contributor to this heft seems to be a shared bundle which, based on its name, provides common functionality used across the different Nextcloud apps one can install; it comes in at 4.71 MB at the time of writing. Then you want notifications, right? The notifications script covers you, at 1.06 MB. Then there are the app-specific views. The Calendar app takes up 5.94 MB to show a basic calendar view. The Files app includes a bunch of individual scripts (1.77 MB, 1.17 MB, 1.09 MB, 0.9 MB for a feature I’ve never used!) and many smaller ones. The Notes app with its basic bare-bones editor? 4.36 MB. This means that even on an iPhone 13 mini, opening the Tasks app (the to-do list) takes a ridiculously long time. Imagine opening your shopping list at the store and having to wait 5-10 seconds before you see anything, even with a solid 5G connection. Sounds extremely annoying, right? I suspect that a lot of this is due to how Nextcloud is architected. There are bound to be some hefty common libraries and tools that allow app developers to provide a unified experience, but even then there is something seriously wrong with the end result: the functionality-to-bundle-size ratio is way off. As a result, I’ve started moving some things out of Nextcloud, such as replacing the Tasks app with a private Vikunja instance and Photos with a private Immich instance. Vikunja is not perfect, but its 1.5 MB of JavaScript is an order of magnitude smaller than Nextcloud’s, making it feel incredibly fast in comparison. For other functionality, however, I have to admit that the convenience of Nextcloud is enough to dissuade me from replacing it, because its feature set compares well to the alternatives. I’m sure there are legitimate reasons behind the current state, and overworked development teams and volunteers are unfortunately the norm in the industry, but that doesn’t take away from the fact that the user experience and accessibility suffer as a result. I’d like to thank Alex Russell for writing about web performance and why it matters, with supporting evidence and actionable advice; it has changed how I view websites and web apps and has pushed me to be better in my own work. I highly suggest reading his content, starting with the performance inequality gap series. It’s educational, insightful, and incredibly irritating once you learn how crap most things are and how careless a lot of development teams are about performance and accessibility.
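To see how a page stacks up against that rough 1 MB budget, here is a minimal sketch (not from the original post) that can be pasted into the browser console; it uses the Resource Timing API to total the script payload on the current page. Note that cross-origin scripts report a transferSize of 0 unless the server sends Timing-Allow-Origin.

```ts
// Sum up JavaScript weight on the current page via the Resource Timing API.
const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
const scripts = entries.filter((e) => e.initiatorType === "script" || e.name.endsWith(".js"));

// Bytes over the wire (compressed) vs. bytes the browser has to parse and execute.
const transferred = scripts.reduce((sum, e) => sum + e.transferSize, 0);
const decoded = scripts.reduce((sum, e) => sum + e.decodedBodySize, 0);

console.log(
  `${scripts.length} scripts: ${(transferred / 1e6).toFixed(1)} MB transferred, ` +
  `${(decoded / 1e6).toFixed(1)} MB decoded`
);
```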

0 views
David Bushell 1 week ago

Better Alt Text

It’s been a rare week where I was able to (mostly) ignore client comms and do whatever I wanted! That means perusing my “todo” list, scoffing at past me for believing I’d ever do half of it, and plucking out a gem. One of those gems was a link to “Developing an alt text button for images on [James’ Coffee Blog]”. I like this feature. I want it on my blog! My blog wraps images and videos in a figure element with an optional caption. Reduced markup example below. How to add visible alt text? I decided to use declarative popover. I used popover for my glossary web component but that implementation required JavaScript. This new feature can be done script-free! Below is an example of the end result. Click the “ALT” button to reveal the text popover (unless you’re in RSS land, in which case visit the example, and if you’re not in Chrome, see below). To implement this I appended an extra button and popover element with the declarative popover attributes after the image. I generate unique popover and anchor names in my build script. I can’t define them as inline custom properties because of my locked-down content security policy. Instead I use the attr() function in CSS. Anchor positioning allows me to place these elements over the image. I could have used absolute positioning inside the figure if not for the caption extending the parent block. Sadly, using anchor positioning means only one thing… My visible alt text feature is Chrome-only! I’ll pray for Interop 2026 salvation and call it progressive enhancement for now. To position the popover I first tried one placement value, but that sits the popover around/outside the image; instead I need it to sit inside/above the image, which a different placement allows. The button is positioned in a similar way. Aside from being Chrome-only I think this is a cool feature. Last time I tried to use anchor positioning I almost cried in frustration… so this was a success! It will force me to write better alt text. How do I write alt text good? Advice is welcome. Thanks for reading! Follow me on Mastodon and Bluesky. Subscribe to my Blog and Notes or Combined feeds.
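For a concrete picture of the markup shape described above, here is a build-script-style sketch in TypeScript; the helper name, generated ids, and attribute layout are my assumptions rather than David’s actual code, and the CSS anchor positioning that overlays the button and panel on the image is omitted.

```ts
// Hypothetical helper: wrap an image in a figure and append a script-free
// "ALT" button plus a popover panel exposing the alt text.
let counter = 0;

export function figureWithAltPopover(src: string, alt: string, caption?: string): string {
  const id = `alt-popover-${++counter}`; // unique name per image, generated at build time
  return [
    `<figure>`,
    `  <img src="${src}" alt="${alt}">`,
    caption ? `  <figcaption>${caption}</figcaption>` : ``,
    `  <button popovertarget="${id}">ALT</button>`,
    `  <div id="${id}" popover>${alt}</div>`,
    `</figure>`,
  ].filter(Boolean).join("\n");
}

console.log(figureWithAltPopover("/images/cat.avif", "A cat asleep on a keyboard"));
```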

0 views
A Working Library 1 month ago

We, the Heartbroken

“Heartbreak is the heart of all revolutionary consciousness. How can it not be? Who can imagine another world unless they have already been broken apart by the world we are in?” Gargi Bhattacharyya sees our grief for a broken world as the tool we use to weave a new one. View this post on the web , subscribe to the newsletter , or reply via email .

0 views
Loren Stewart 3 months ago

A Progressive Complexity Manifesto

A manifesto for progressive web complexity. Reject the false binary between static sites and SPAs. Embrace the powerful middle ground with server-rendered HTML, HTMX, and intentional complexity escalation.

0 views
Cassidy Williams 4 months ago

Generating open graph images in Astro

Something that always bugged me about this blog is that the open graph/social sharing images used this for every single post: I had made myself a blank SVG template (of just the rainbow-colored pattern) for each post literally years ago, but didn’t want to manually create an image per blog post. There are different solutions out there for this, like the Satori library, or using a service like Cloudinary, but they didn’t fit exactly how I wanted to build the images, and I clearly have a problem with control. So, I built myself my own solution! Last year, I made a small demo for Cosynd with Puppeteer that screenshotted websites and put them into a PDF for our website copyright offering, aptly named screenshot-demo. I liked how simple that script was, and thought I could follow a similar strategy for generating images. My idea was to render a card page for each post and screenshot it, and then from there, I’d do this for every blog title I’ve written. Seemed simple enough? Reader, it was not. BUT it worked out in the end! Initially, I set up a fairly simple Astro page with HTML and CSS. With this, I was able to work out what size and positioning I wanted my text to be, and how I wanted it to adjust based on the length of the blog post title (both in spacing and in size). I used some dummy strings to do this pretty manually (like how I wanted it to change ever so slightly for titles that were 4 lines tall, etc.). Amusing note: this kind of particular design work is really fun for me, and basically impossible for AI tools to get right. They do not have my eyes nor my opinions! I liked feeling artistic as I scooted each individual pixel around (for probably too much time) and made it feel “perfect” to me (and moved things in a way that probably 0 other people will ever notice). Once I was happy with the dummy design I had going, I added a function to generate an HTML page for every post, so that Puppeteer could make a screenshot for each of them. With that strategy, everything worked well. But my build times were somewhat long, because altogether the build was generating an HTML page per post (for people to read), a second HTML page per post (to be screenshotted), and then a screenshot image from that second HTML page. It was a bit too much. So, before I get into the Puppeteer script part with you, I’ll skip to the part where I changed up my strategy (as the kids say) to use a single page template that accepted the blog post title as a query parameter. The Astro page I showed you before is almost exactly the same, except that it now reads the title from that query parameter. The new script that handles this goes at the bottom of the page in a script tag so it runs client-side. (Part of it is an interesting trick I learned a while back where textarea elements treat content as plaintext to avoid accidental or dangerous script execution, and their value gives you decoded text without any HTML tags. I had some blog post titles that had quotes and other special characters in them, and this small function fixed them from breaking in the rendered image!) Now, if you want to see a blog post image pre-screenshot, you can go to the open graph route here on my website and see the rendered card! I also have a script that takes the template, launches a browser, navigates to the template page, loops through each post, sizes it to the standard Open Graph size (1200x630px), and saves the screenshot to my designated output folder. From here, I added the script to my package scripts, so I can run it to render the images, or have them render right after the build!
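The entity-decoding trick mentioned above could look something like this sketch (the helper name is mine, not from Cassidy’s gist):

```ts
// A detached <textarea> parses HTML entities but never executes markup,
// so reading .value back yields decoded plain text.
function decodeHtmlEntities(input: string): string {
  const textarea = document.createElement("textarea");
  textarea.innerHTML = input;
  return textarea.value;
}

decodeHtmlEntities("Cassidy&rsquo;s &quot;perfect&quot; titles");
// → Cassidy’s "perfect" titles
```

And the screenshot loop she describes could be sketched roughly like this with Puppeteer; the local URL, query parameter name, output path, and posts array are illustrative assumptions, not her actual code:

```ts
import puppeteer from "puppeteer";

// Hypothetical post titles; the real list comes from the blog's content.
const posts = ["Generating open graph images in Astro", "Another post title"];

async function renderOgImages(): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 1200, height: 630 }); // standard Open Graph size

  for (const title of posts) {
    // The single template page reads the title from a query parameter.
    const url = `http://localhost:4321/og?title=${encodeURIComponent(title)}`;
    await page.goto(url, { waitUntil: "networkidle0" });
    const slug = title.toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/(^-|-$)/g, "");
    await page.screenshot({ path: `./public/og/${slug}.png` });
  }

  await browser.close();
}

renderOgImages();
```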
This is a GitHub Gist of the actual full code for both the script and the template! There was a lot of trial and error with this method, but I’m happy with it. I learned a bunch, and I can finally share my own blog posts without thinking, “gosh, I should eventually make those open graph images” (which I did literally every time I shared a post). If you need more resources on this strategy in general: I hope this is helpful for ya!

0 views

Lost Computation

Read on the website: We keep losing context and computation when running programs. But we don't have to. Let’s see how this lost compute can be avoided.

0 views
Evan Schwartz 7 months ago

Building a fast website with the MASH stack in Rust

I'm building Scour, a personalized content feed that sifts through noisy feeds like Hacker News Newest, subreddits, and blogs to find great content for you. It works pretty well -- and it's fast. Scour is written in Rust, and if you're building a website or service in Rust, you should consider using this "stack". After evaluating various frameworks and libraries, I settled on a couple of key ones and then discovered that someone had written it up as a stack. Shantanu Mishra described the same set of libraries I landed on as the "mash 🥔 stack" and gave it the tagline "as simple as potatoes". This stack is fast and nice to work with, so I wanted to write up my experience building with it to help spread the word. TL;DR: The stack is made up of Maud, Axum, SQLx, and HTMX and, if you want, you can skip down to where I talk about synergies between these libraries. (Also, Scour is free to use and I'd love it if you tried it out and posted feedback on the suggestions board!) Scour uses server-side rendered HTML, as opposed to a JavaScript or WebAssembly frontend framework. Why? First, browsers are fast at rendering HTML. Really fast. Second, Scour doesn't need a ton of fancy interactivity and I've tried to apply the "You aren't gonna need it" principle while building it. Holding off on adding new tools helps me understand the tools I do use better. I've also tried to take some inspiration from Herman from BearBlog's approach to "Building software to last forever". HTML templating is simple, reliable, and fast. Since I wanted server-side rendered HTML, I needed a templating library, and Rust has plenty to choose from. The main two decisions to make were whether templates live in separate files or inline in your Rust code, and whether they are evaluated at compile time or at runtime; popular template engines fall at various points along these two axes. I initially picked a compile-time engine because of its popularity, performance, and type safety. (I quickly passed on all of the runtime-evaluated options because I couldn't imagine going back to a world of runtime type errors. Part of the reason I'm writing Rust in the first place is compile-time type safety!) After two months of using it, however, I got frustrated with its developer experience. Every addition to a page required editing both the Rust struct and the corresponding HTML template. Furthermore, extending a base template for the page header and footer was surprisingly tedious. Its templates can inherit from other templates; however, any values passed to the base template (such as whether a user is logged in) must be included in every page's Rust struct, which led to a lot of duplication. This experience sent me looking for alternatives. Maud is a macro for writing fast, type-safe HTML templates right in your Rust source code. The format is concise and makes it easy to include values from Rust code. The Hello World example shows how you can write HTML tags, classes, and attributes without the visual noise of angle brackets and closing tags. Rust values can be easily spliced into templates (HTML special characters are automatically escaped), control structures like conditionals, loops, and matches are very straightforward, and partial templates are easy to reuse by turning them into small functions that return markup. All in all, Maud provides a pleasant way to write HTML components and pages. It also ties in nicely with the rest of the stack (more on that later). Axum is a popular web framework built by the Tokio team. The framework uses handler functions with extractors to declaratively parse HTTP requests.
The Hello World example illustrates building a router with multiple routes, including one that handles a POST request with a JSON body and returns a JSON response. Axum extractors make it easy to parse values from HTTP bodies, paths, and query parameters and turn them into well-defined Rust structs. And, as we'll see later, it plays nicely with the rest of this stack. Every named stack needs a persistence layer. SQLx is a library for working with SQLite, Postgres, and MySQL from async Rust. There are a number of different ways of working with SQLx, but to give a flavor of how I use it: you can derive a trait for structs to map between database rows and your Rust types, and you can derive both that trait and serde's Serialize and Deserialize on the same structs to use them all the way from your database to the Axum layer. However, in practice I've often found that it is useful to separate the database types from those used in the server API -- but it's easy to define conversions to map between them. The last part of the stack is HTMX. It is a library that enables you to build fairly interactive websites using a handful of HTML attributes that control sending HTTP requests and handling their responses. While HTMX itself is a JavaScript library, websites built with it often avoid needing to use custom JavaScript directly. For example, a button annotated with a couple of HTMX attributes can mean "when a user clicks on this button, issue an AJAX request to /clicked, and replace the entire button with the HTML response". Notably, such a snippet replaces just the button with the HTML returned from that endpoint, rather than the whole page like a plain HTML form would. HTMX has been having a moment, in part due to essays like The future of HTMX where they talked about "Stability as a Feature" and "No New Features as a Feature". This obviously stands in stark contrast to the churn that the world of frontend JavaScript frameworks is known for. There is a lot that can and has been written about HTMX, but the logic clicked for me after watching an interview with its creator. The elegance of HTMX -- and the part that makes its promise of stability credible -- is that it was built from first principles to generalize the behavior already present in HTML forms and links. Specifically, (1) HTML forms and links (2) submit GET or POST HTTP requests (3) when you click a Submit button and (4) replace the entire screen with the response. HTMX asks why it should only be forms and links, only GET and POST, only on submit, and only the entire screen. By generalizing these behaviors, HTMX makes it possible to build more interactive websites without writing custom JavaScript -- and it plays nicely with backends written in other languages like Rust. Since we're talking about Rust and building fast websites, it's worth emphasizing that while HTMX is a JavaScript library, it only needs to be loaded once. Updating your code or website behavior will have no effect on the HTMX libraries, so you can use long-lived cache headers to tell browsers or other caches to indefinitely store the specific versions of HTMX and any extensions you're using. The first visit loads the HTMX script alongside the HTML, but subsequent visits only need to load the HTML. This makes for even faster page loads for return users. Overall, I've had a good experience building with this stack, but I wanted to highlight a couple of places where the various components complemented one another in nice ways. Earlier, I mentioned my frustration with my first templating engine, specifically around reusing a base template that includes different top navigation bar items based on whether a user is logged in or not.
I was wondering how to do this with Maud when I came across this Reddit question: Users of maud (and axum): how do you handle partials/layouting? David Pedersen, the developer of Axum, had responded with a gist. In short, you can make a page layout struct that is an Axum extractor and provides a method that returns the rendered markup. When you use the extractor in your page handler functions, the base template automatically has access to the components it needs from the request. This approach makes it easy to reuse the base page template without needing to explicitly pass it any request data it might need. (Thanks David Pedersen for the write-up -- and for your work on Axum!) This is somewhat table stakes for HTML templating libraries, but it is a nice convenience that Maud has an Axum integration that enables returning Maud markup directly from Axum routes. HTMX has a number of very useful extensions, including the Preload extension. It preloads HTML pages and fragments into the browser's cache when users hover or start clicking on elements, such that the transitions happen nearly instantly. The Preload extension sends a header with every request it initiates, which pairs nicely with middleware that sets the cache response headers. (Of course, this same approach can be implemented with any HTTP framework, not just Axum.) Update: after writing this post, u/PwnMasterGeno on Reddit pointed out a helper crate to me. This library includes Axum extractors and responders for all of the headers that HTMX uses. For example, you can use the request headers to determine if you need to send the full page or just the body content. It also has a nice feature for cache management: a layer that automatically sets the Vary component of the HTTP cache headers based on the request headers you use, which ensures the browser correctly resends the request when it changes in a meaningful way. While I've overall been happy building with the MASH stack, here are the things that I've found to be less than ideal. I would be remiss talking up this stack without mentioning one of the top complaints about most Rust development: compile times. When building purely backend services, I've generally found that Rust Analyzer does the trick well enough that I don't need to recompile in my normal development flow. However, with frontend changes, you want to see the effects of your edits right away. During development, I use Bacon for recompiling and rerunning my code, plus a live-reload setup to have the frontend automatically refresh. Using some of Corrode's Tips For Faster Rust Compile Times, I've gotten it down to around 2.5 seconds from save to page reload. I'd love it if it were faster, but it's not a deal-breaker for me. For anyone building with the MASH stack, I would highly recommend splitting your code into smaller crates so that the compiler only has to recompile the code you actually changed. Also, there's an unmerged PR for Maud to enable updating templates without recompiling, but I'm not sure if that will end up being merged. If you have any other suggestions for bringing down compile times, I'd love to hear them! HTMX's focus on building interactivity through swapping HTML chunks sent from the backend sometimes feels overly clunky. For example, the Click To Edit example is a common pattern involving replacing an Edit button with a form to update some information such as a user's contact details.
The stock HTMX way of doing this is fetching the form component from the backend when the user clicks the button and swapping out the button for the form. This feels inelegant because all of the necessary information is already present on the page, save for the actual form layout. It seems like some users of HTMX combine it with Alpine.js, Web Components, or a little custom JavaScript to handle this. For the moment, I've opted for the pattern lifted from the HTMX docs, but I don't love it. If you're building a website and using Rust, give the MASH stack a try! Maud is a pleasure to use. Axum and SQLx are excellent. And HTMX provides a refreshing rethink of web frontends. That said, I'm not yet sure if I would recommend this stack to everyone doing web development. If I were building a startup making a normal web app, there's a good chance that TypeScript is still your best bet. But if you are working on a solo project or have other reasons that you're already using Rust, give this stack a shot! If you're already building with these libraries, what do you think? I'd love to hear about others' experiences. Thanks to Alex Kesling for feedback on a draft of this post! Discuss on r/rust, r/htmx or Hacker News. If you haven't already signed up for Scour, give it a try and let me know what you think!

0 views
Andre Garzia 8 months ago

The Web Should Be A Conversation

For a very long time, I've argued that the Web should be a conversation, a two-way street instead of a chute just pushing content into us. The Web is the only mass medium we currently have where most people can have a voice. I'm not saying all these voices have the same loudness, nor that every single person on our beautiful planet and space stations can actually post to the Web, just that it is the one place where everyone has the potential to be a part of it. Contrast it with streaming services, radio, or even the traditional publishing industry and you'll see that a person alone with an idea has a lot more obstacles in their way than when considering just starting a blog. For the last couple of years, there has been a colossal push by Silicon Valley companies towards generative AI. Not only are bots going crazy gobbling up all the content they can see, regardless of whether they have the rights to do so, but content farms have been pushing drivel generated by such machines onto the wider Web. I have seen a horrible decline in the quality of my search results, and the social platforms that I'm a part of — the ones with algorithmic timelines such as Instagram and YouTube — have been pushing terrible content towards me, the kind that tries to get a rise out of you. They do this to "toxically foster" engagement, trying to get you so mad that you dive deeper into either an echo chamber or a flame war. The enshittification of the Web is real, but it is happening at a surface level. All the content you love and want is still there. It is just harder to discover because FAANG companies have got a nuclear-powered shit firehose spraying bullshit all over the place. There are many ways to fight this, and in this blog post I'll outline what I am doing and try to convince you to do the same. Yes, this post has an agenda; a biased human wrote it. TL;DR: We need to get back into blogging. We need to put care and effort into the Blogosphere. A human-centric Web, in my own opinion, is one that is made by people to be browsed by people. The fine folks at the IndieWeb have been hammering at this for a very long time: On social networks such as Facebook or YouTube, you don't own your platform. You're just feeding a machine that will decide whether or not to show your content to people, depending on how much their shareholders can make out of your work and passion. Your content is yours: When you post something on the web, it should belong to you, not a corporation. Too many companies have gone out of business and lost all of their users’ data. By joining the IndieWeb, your content stays yours and in your control. You are better connected: Your articles and status messages can be distributed to any platform, not just one, allowing you to engage with everyone. Replies and likes on other services can come back to your site so they’re all in one place. You are in control: You can post anything you want, in any format you want, with no one monitoring you. In addition, you share simple readable links such as example.com/ideas. These links are permanent and will always work. — Source: IndieWeb I'm not advocating for you to stop using these bad social networks. You do whatever you want to do. I'm urging you to also own your own little corner of the Web by making a little blog. What will you post into it? Well, whatever you want. The same stuff you post elsewhere. A blog doesn't need to be anything more complicated than your random scribblings and things you want to share with the world.
I know there are many people who treat it as a portfolio to highlight their best self and promote themselves; if that is you too, go forward and do it! If that is not you, you can still have a blog and have fun. There are thousands of ways to start a blog; let me list some that I think are a good way to go. Micro.Blog: a simple and powerful blogging platform by people who actually love blogs. You need a subscription for it, but it can be as cheap as 1 buck per month. Jekyll using GitHub Pages: if you're a developer and already know a bit about Git, you can quickly spin up a blog using Jekyll and GitHub Pages, which allows you to start a blog for free. Wordpress: it pains me to write this one. I don't like Wordpress but I understand it is an easy way to start blogging for free. Blogger: Blogger still exists! A simple way to create a blog. These are just some ways to do it. There are many more. When you start your own blog, you're joining the conversation. You don't need the blessing of a social network to post your own content online. You certainly don't need to play their algorithm game. Join the conversation as you are and not as these companies want you to be. The Web becomes better when you are your authentic self online. Post about all the things that interest you. It doesn't matter if you're mixing food recipes with development tips. You contain multitudes. Share the blog posts and content creators that you like. Talk about your shared passions on your blog. Create connections. The way to avoid doomscrolling and horrible algorithmic timelines is to curate your own feed subscriptions. Instead of relying on social networks and search engines to surface content for you, you can subscribe to the websites you want to check often. Many websites offer feeds in RSS or Atom formats and you can use a feed reader to keep track of them. There are many feed readers out there (heck, even I made one, more about it later). Let me show you some cool ones. Feedly: a SaaS that is liked by many; create an account and subscribe to your blogs from any Web device you've got. NetNewsWire: a polished macOS app that has been the gold standard for feed readers for more than a decade; it is FOSS. Akregator: from our friends at KDE, a FOSS desktop feed reader for Linux and Windows. Miniflux: a minimalist feed reader; you can join their SaaS or self-host it. Rad Reader: a minimalist desktop reader for macOS, Linux, and Windows. BlogCat: yep, I made this one (more about it later); it is an add-on for Firefox that adds blogging features to the browser. Once you're in control of your own feed, you step away from algorithmic timelines. You can use feed readers to subscribe not only to blogs, but to your favourite creators on YouTube and other platforms too. If the website you want to subscribe to does not offer a feed, check out services like rss.app and others to try to convert it into a feed you can use in your feed reader of choice. With time, you'll collect many subscriptions and your Web experience will be filled with people instead of bots. Use OPML exporting and importing from your feed reader to share interesting blogs with your friends and readers. Word of mouth and grassroots connections between people in the blogosphere is how we step out of this shit. Learn a bit of HTML to add a blogroll link to your template. Sharing is caring. As I mentioned before, I have been thinking about this for a long time. I suspect I might have created one of the first blogging clients on MacOS 8 (yeah, the screenshot is from MacOS 9). I have no idea how many times I implemented a feed reader, a blogging client, or a little blogging CMS. Even this blog you're reading right now is a home-grown Lua-based blogging CMS I made in an afternoon. BlogCat is my latest experiment. It is an add-on for Firefox that adds blogging features to the browser. It aims to reduce the friction between blogging and web browsing by making weblogs a first-class citizen inside your user agent. You can subscribe to websites, and import and export OPML, all from inside the browser. You can have a calm experience checking the latest posts from the websites you follow. Being a part of the conversation is also easy because BlogCat supports posting to Micropub-enabled sites and also microblogging to Bluesky and Mastodon. It uses a handy sidebar so you can compose your post while browsing the web. I've been using it for a couple of weeks now and am enjoying it a lot. Maybe you will enjoy it too. Anyway, this is not a post about BlogCat, but this post is what originally inspired BlogCat.
As I drafted this post weeks ago and mused about the Web I want and the features I want in web browsers, I realised I knew how to make them. Instead of simply shouting about it, I decided to build it myself. You too can be a part of the conversation. You too can help build the Web you want. Let's walk away from the enshittification of the Web by linking hands across the blogosphere.

0 views

CSS-only Syntax Highlighting

Read on the website: I hate JS. (No, not really.) I don't want to have even a line of JS on my website. Especially for something as simple as syntax highlighting. I should be able to do that with some CSS and minor preprocessing, right?

0 views

Why Insist on a Word

A central concept to HTML, and hypertext theory more generally, is something called Representational State Transfer, a.k.a. REST. Over at htmx, a lot of the writing we do is based on REST theory. REST is a widely misunderstood term , and if you point that out to people, you will be told, repeatedly and sometimes quite irately: who cares? REST has a new meaning now—use words the way people understand them and spare us the lecture.

0 views
Phil Eaton 11 months ago

1 million page views

I was delighted to notice this morning that this site has recently passed 1M page views. And since Murat wrote about his 1M page view accomplishment at the time, I felt compelled to now too. I started regularly blogging in 2018. For some reason I decided to write a blog post every month. And while I have definitely skipped a month or two here or there, on average I've written 2 posts per month. Since at least 2018 this site has been built with a static site generator. I might have used a 3rd-party generator at one point, but for as long as I can remember most of this site has been built with a little Python script I wrote. I used to get so pissed when static site generators would pointlessly change their APIs and I'd have to make pointless changes. I have not had to make any significant changes to my build code in many years. I hosted the site itself on GitHub Pages for many years. But I wanted more flexibility with subdomains (ultimately not something I liked) and the ability to view server-side logs (ultimately not something I ever do). I think this site is hosted on an OVH machine now. But at this point it is inertia keeping me there. If you have no strong feelings otherwise, GitHub Pages is perfect. I used to use Google Analytics but then they shut down the old version. The new version was incredibly confusing to use. I could not find some very basic information. So I moved to Fathom, which has been great. I used to track all subscribers in a Google Form and bcc them, but this eventually became untenable after 1000 subscribers due to Gmail rate limits. I currently use MailerLite for subscriptions and sending email about new posts. But this is an absolutely terrible service. They proxy all links behind a domain that adblockers hate and they also visually shorten the URL so you can't copy the text of the URL. I just want a service that has a hosted form for collecting subscribers and a way to dump raw HTML and send it as an email to my subscribers. No branding, no watermarks, no link proxying. This apparently doesn't exist. I am too lazy to figure out Amazon SES, so I stick with MailerLite for now. In the beginning I talked about little interpreters in JavaScript, about programming languages, about Scheme. I was into functional programming. Over time I moved into little emulators and bytecode VMs. And for the last four years I became obsessed with databases and distributed systems. I have almost always written about little projects to teach myself a concept. Writing a bytecode VM in Rust, emulating a subset of x86 in Go, implementing Raft in Go, implementing MVCC isolation levels in Go, and so on. So many times when I tried to learn a concept I would find blog posts with only partial code. The post would link to a GitHub repo that, by the time I got to the post, had evolved significantly beyond what was described in the post. The repo code had by then become too complex for me to follow. So I was motivated to write minimal implementations and walk through the code in its entirety. I have also had a blast writing survey posts such as how various databases execute expressions, analyzing non-V8 JavaScript implementations, how various programming language implementations parse code, and how various database systems build on top of key-value databases. The last two posts have even each been cited in a research paper (here and here). In terms of quality, my single greatest trick is to read the post out loud. Multiple times.
Notice parts that are awkward or unclear and rewrite them. My second greatest trick is to ask friends for review. Some posts like an intuition for distributed consensus and a write-ahead log is not a universal part of durability would simply not have been correct or credible without my fantastic reviewers. And I'm proud to have played that part a few times in turn. We also have a fantastic #writing-and-drafts channel on the Software Internals Discord where folks (myself occasionally included) come for post review. I've lost count of the total number of times that these posts have been on the front page of Hacker News or that a tweet announcing a post has reached triple-digit likes. I think I've had 9 posts on the front of HN this year. I do know that my single best year for HN was the 12 months between 2022-2023 when 20 of my posts or projects were on the front page. Every time a post does well there's a part of me that worries that I've peaked. But the way to deal with this has been to ignore that little voice and to just keep learning new things. I haven't stopped finding things confusing yet, and confusion is a phenomenal muse. And also to, like, go out and meet friends for dinner, run meetups, run book clubs, chat with you fascinating internet strangers, play volleyball, and so on. It's always been about cultivating healthy obsessions. In parting, I'll remind you: it is definitely worth writing about, whatever "it" is; you're not writing enough; and I have some ideas for posts I want to hear about if you write about them.

0 views

The Messy Pile

A couple months ago I was sitting next to Ivy Wong and I saw them working on a dropdown menu so cute that I immediately asked how they did it.

0 views
Gabriel Garrido 1 year ago

Caching HTML in CDNs

Content delivery networks do not cache HTML out of the box. This is expected; otherwise their support lines would flood with the same issue: “why is my site not updating?” I want to cache my site fully. It’s completely static and I post infrequently. With some exceptions, most pages remain unchanged once published. Caching at the edge means that requests to this site travel shorter distances across the globe. Keeping pages and assets small means fewer bytes to send along. This is the way. Most CDNs have the ability to configure which files to cache and for how long. I use Bunny, which does not cache HTML out of the box. What follows should translate to other CDNs, but you may have to adapt a setting or two. I want the CDN to hold on to HTML files for a year. To that end, I define an edge rule that overrides the cache time: if my origin responds to a request with an HTML or feed content type 1, then the CDN will cache it. I don’t want the CDN to cache requests for pages that do not exist, so I configure it to check that the origin also returns a 200 status code. I also need to make sure that the CDN instructs browsers not to cache the page. Why? If I publish an update to a page, I can invalidate the cache in the CDN but not in someone’s browser. So I create a second edge rule that sets a response header: the Cache-Control header is used to instruct the browser on how to cache the resource, and the directive I use marks the response as stale and ensures the browser validates the page against the CDN. Suppose someone has loaded my page once. If they return to this page, the browser will verify with the CDN whether the page has been modified since it was requested. If it hasn’t, the CDN will respond with a 304 Not Modified without transmitting back the contents of the requested page. The header is set only if the requested resource ends in the feed extension, as feeds do, or does not have an extension at all (as HTML pages do) 2. Lastly, I need to tell the CDN to clear some of its cache when I publish an update to the site: when I edit a page’s content, when I publish a new post, or when I update the site’s CSS 3. I’ll admit that invalidating the cache is a reason why someone may just not bother with caching HTML. Following the list above, there are different scenarios to consider. If I edit a given post I may be tempted to think that only that page’s cache must be invalidated. However, in my site an update to a single page can manifest in changes to other pages: a change in the title requires updating the index, a new tag requires updating the tag index, a new internal backlink requires updating the referenced page. Lest you’re down to play whack-a-mole, the feasibility of this endeavour rests on the ability to invalidate the cache correctly and automatically. In my case I check every new site build against the live build to purge the cache automatically. 1: I’m also caching feed files, as RSS readers are probing my site’s feeds frequently. 2: These are the affordances that I can use in my CDN’s edge rules. Other CDNs may provide something more explicit. 3: The CSS file for this site has a hash in its filename. Updating the CSS means a new hash, which means all pages update their CSS reference.
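As a rough sketch of the same caching intent, here is what the two rules could look like if expressed as origin response headers rather than Bunny edge rules; the header values, port, and extensionless-path check are my assumptions for illustration, not the post’s configuration. Cache-Control’s s-maxage applies to shared caches such as a CDN, while no-cache forces browsers to revalidate on every visit.

```ts
import { createServer } from "node:http";

const server = createServer((req, res) => {
  const path = req.url ?? "/";
  const isHtml = !path.split("/").pop()?.includes("."); // extensionless paths serve HTML pages

  if (isHtml) {
    // Browsers must revalidate every time (no-cache); shared caches may keep it for a year (s-maxage).
    res.setHeader("Cache-Control", "no-cache, s-maxage=31536000");
    res.setHeader("Content-Type", "text/html; charset=utf-8");
    res.end("<!doctype html><title>Hello</title><p>Cached at the edge, revalidated by browsers.</p>");
  } else {
    res.statusCode = 404; // non-200 responses should not be cached by the CDN
    res.setHeader("Cache-Control", "no-store");
    res.end("not found");
  }
});

server.listen(8080);
```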

0 views
Gabriel Garrido 1 year ago

Implementing internal backlinks in Hugo

As of today, some pages on this site surface any other internal page where they are cross-referenced. If you’ve used a tool like Obsidian, you’ll know how useful backlinks are for navigating related content. The joys of hypertext! Hugo does not generate or expose backlinks out of the box. Instead, you must compute the references on each page as it is being built. This approach considers the following constraints: only links in the markdown content are eligible; all content pages are eligible; links use Hugo’s ref and relref shortcodes; no explicit front matter is required; anchors within the same page are not considered backlinks; and multiple languages are not considered. For example, you can go to my /about page and see the various pages that reference it, including this one. When this page is built, all other pages are inspected; if another page references this page in its content, it will be matched. Create a partial in your theme’s directory with the backlink markup, then instantiate it in any template where you want to show backlinks. I did some light testing, and references made with ref, relative references made with relref, and references to page bundles are all supported. I wish Hugo had better affordances for accomplishing this type of task. Until then, you must bear with adding logic to your templates and O(n^2).

0 views
Gabriel Garrido 1 year ago

Read your Ghost subscriptions using RSS

I follow a handful of Ghost-powered sites that don’t serve their articles fully in their RSS feed. Most of the time this is because an article requires a paid or free subscription to be read. This turns out to be a nuisance as I prefer to read things using an RSS reader. For example, 404 Media’s recent article on celebrity AI scam ads on YouTube ends abruptly in my RSS reader. If I want to read the entire article then I am expected to visit their site, log in, and read it there. Update: In March 2024, 404 Media introduced full-text RSS feeds for paid subscribers. If you’re not a paid subscriber, this note is still relevant if you wish to read free articles in full text via RSS for any Ghost blog out there. Miniflux is a minimal RSS reader and client that I self-host. I like to use it with NetNewsWire on my iPad so that I can process and read articles in batches while offline. One of my favorite features in Miniflux is the ability to set cookies for any feed that you’re subscribed to. You can use this to have Miniflux pull the feed content as if you were authenticated 1 on websites that use cookies to maintain the user’s session. Ghost uses cookies to keep users authenticated for at least six months. A Ghost-powered site will respond with a different RSS feed depending on who is making the request. If you’re logged in and your subscription is valid for the article at hand, you get the entire article. If you’re not logged in or don’t have the appropriate subscription, you get the abbreviated article. This is great! I can continue to support the publisher that I’m subscribed to while retaining control over my reading experience. Only the cookies are necessary. To set this up: open your browser, visit the Ghost-powered site that you’re following, and log in. Open your browser’s developer tools and head to the storage section (under “Application” in Chromium-based browsers, “Storage” in Firefox and Safari). Look for the cookies section, locate the Ghost-powered site, and find the Ghost session cookies. Back in Miniflux, head to the Feeds page, press Add feed, and enter the site’s URL. Toggle the Advanced Options dropdown, look for the Set Cookies field, and add the cookie string, replacing each placeholder with the corresponding value that you see in the browser’s cookie jar for each cookie. Press Find a feed. At this point Miniflux should find the RSS feed automatically and configure it accordingly. If you’ve already added the feed before, you don’t need to remove and add the feed again. Instead, go to the feed settings page, add the cookie, and click Refresh to force Miniflux to re-fetch the feed. With that, the article renders fully in the RSS reader. A note on cookie expiration: I didn’t read enough of Ghost’s code to verify whether they refresh the authentication cookies every once in a while. That said, the cookie’s expiration time is long enough that I’d be fine with having to replace them once every six months if necessary. 1: Anyone with access to these cookies can impersonate your account on the corresponding website. I self-host Miniflux so no one has access to the Miniflux database but me. If you pay for Miniflux then you’ll want to make sure you feel comfortable trusting them with your account cookies. I wouldn’t be too worried, but you should be aware of this fact.

0 views
nathan.rs 1 year ago

How to Fix Hugo's iOS Code-Block Text-Size Rendering Issue

Lately, I’ve been coming across many blogs that have weird font-size rendering issues for code blocks on iOS. Basically, in a code snippet, the text-size would sometimes be much larger for some lines than others. Below is a screenshot of the issue from a website where I’ve seen this occur. As you can see, the text-size isn’t uniform across code block lines. I’ve seen this issue across many blogs that compile markdown files to HTML such as sites built using Hugo, Jekyll, or even custom md-to-html shell scripts .

0 views
Dominik Weber 1 year ago

Refactoring an entire NextJs application to server components

Next.js 13 introduced the app directory and React Server Components. On Star Wars Day (May 4th) a couple of months later, React Server Components were marked as stable. Server components split page rendering, with some parts rendered on the server and others on the client. The key difference is that server components are always rendered on the server, not prerendered on the server and hydrated on the client. Server-side rendering existed before, but now, for the first time, it’s possible to mix it with client-side rendering on the same page.
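To make that distinction concrete, here is a minimal sketch of a server component embedding a client component in the Next.js app directory; the file names, route, and fetch URL are illustrative, not taken from the article.

```tsx
// app/posts/page.tsx — a server component by default in the app directory.
// It can fetch data directly and is never hydrated on the client.
import LikeButton from "./like-button";

export default async function PostsPage() {
  const posts: { id: number; title: string }[] = await fetch("https://example.com/api/posts")
    .then((res) => res.json());

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>
          {post.title}
          {/* A client component embedded in the server-rendered tree */}
          <LikeButton postId={post.id} />
        </li>
      ))}
    </ul>
  );
}
```

The interactive part opts into client-side rendering and hydration with the "use client" directive:

```tsx
// app/posts/like-button.tsx — client component so it can hold state and handle events.
"use client";

import { useState } from "react";

export default function LikeButton({ postId }: { postId: number }) {
  const [likes, setLikes] = useState(0);
  return <button onClick={() => setLikes(likes + 1)}>♥ {likes} (post {postId})</button>;
}
```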

1 view