Posts in Web (20 found)
A Working Library 1 month ago

We, the Heartbroken

“Heartbreak is the heart of all revolutionary consciousness. How can it not be? Who can imagine another world unless they have already been broken apart by the world we are in?” Gargi Bhattacharyya sees our grief for a broken world as the tool we use to weave a new one.

0 views
Loren Stewart 2 months ago

A Progressive Complexity Manifesto

A manifesto for progressive web complexity. Reject the false binary between static sites and SPAs. Embrace the powerful middle ground with server-rendered HTML, HTMX, and intentional complexity escalation.

0 views
Cassidy Williams 4 months ago

Generating open graph images in Astro

Something that always bugged me about this blog is that the open graph/social sharing images used this for every single post: I had made myself a blank SVG template (of just the rainbow-colored pattern) for each post literally years ago, but didn’t want to manually create an image per blog post. There are different solutions out there for this, like the Satori library, or using a service like Cloudinary , but they didn’t fit exactly how I wanted to build the images, and I clearly have a problem with control. So, I built myself my own solution! Last year, I made a small demo for Cosynd with Puppeteer that screenshotted websites and put it into a PDF for our website copyright offering, aptly named screenshot-demo . I liked how simple that script was, and thought I could follow a similar strategy for generating images. My idea was to: And then from there, I’d do this for every blog title I’ve written. Seemed simple enough? Reader, it was not. BUT it worked out in the end! Initially, I set up a fairly simple Astro page with HTML and CSS: With this, I was able to work out what size and positioning I wanted my text to be, and how I wanted it to adjust based on the length of the blog post title (both in spacing and in size). I used some dummy strings to do this pretty manually (like how I wanted it to change ever so slightly for titles that were 4 lines tall, etc.). Amusing note, this kind of particular design work is really fun for me, and basically impossible for AI tools to get right. They do not have my eyes nor my opinions! I liked feeling artistic as I scooted each individual pixel around (for probably too much time) and made it feel “perfect” to me (and moved things in a way that probably 0 other people will ever notice). Once I was happy with the dummy design I had going, I added a function to generate an HTML page for every post, so that Puppeteer could make a screenshot for each of them. With the previous strategy, everything worked well. But, my build times were somewhat long, because altogether the build was generating an HTML page per post (for people to read), a second HTML page per post (to be screenshotted), and then a screenshot image from that second HTML page. It was a bit too much. So, before I get into the Puppeteer script part with you, I’ll skip to the part where I changed up my strategy (as the kids say) to use a single page template that accepted the blog post title as a query parameter. The Astro page I showed you before is almost exactly the same, except: The new script on the page looked like this, which I put on the bottom of the page in a tag so it would run client-side: (That function is an interesting trick I learned a while back where tags treat content as plaintext to avoid accidental or dangerous script execution, and their gives you decoded text without any HTML tags. I had some blog post titles that had quotes and other special characters in them, and this small function fixed them from breaking in the rendered image!) Now, if you wanted to see a blog post image pre-screenshot, you can go to the open graph route here on my website and see the rendered card! In my folder, I have a script that looks mostly like this: This takes the template ( ), launches a browser, navigates to the template page, loops through each post, sizes it to the standard Open Graph size (1200x630px), and saves the screenshot to my designated output folder. From here, I added the script to my : I can now run to render the images, or have them render right after ! 
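For a rough idea of what such a script can look like, here is a hedged sketch (not Cassidy's actual code from the Gist linked below; the dev-server port, the /og/ route, the post list, and the output folder are all assumptions for illustration):

```js
// Illustrative sketch: screenshot an Open Graph card per post title.
const puppeteer = require('puppeteer');

// Placeholder post titles; the real script would read these from the blog's content.
const posts = ['Generating open graph images in Astro', 'Another post title'];

// Tiny slug helper so filenames line up with post slugs (illustrative only).
function slugify(title) {
  return title.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/(^-|-$)/g, '');
}

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Standard Open Graph image size.
  await page.setViewport({ width: 1200, height: 630 });

  for (const title of posts) {
    // Assumed template route that accepts the title as a query parameter.
    const url = `http://localhost:4321/og/?title=${encodeURIComponent(title)}`;
    await page.goto(url, { waitUntil: 'networkidle0' });
    await page.screenshot({ path: `./public/og/${slugify(title)}.png` });
  }

  await browser.close();
})();
```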
This is a GitHub Gist of the actual full code for both the script and the template! There was a lot of trial and error with this method, but I’m happy with it. I learned a bunch, and I can finally share my own blog posts without thinking, “gosh, I should eventually make those open graph images” (which I did literally every time I shared a post). If you need more resources on this strategy in general: I hope this is helpful for ya!

0 views

Lost Computation

Read on the website: We keep losing context and computation when running programs. But we don't have to. Let’s see how this lost compute can be avoided.

0 views
Evan Schwartz 6 months ago

Building a fast website with the MASH stack in Rust

I'm building Scour, a personalized content feed that sifts through noisy feeds like Hacker News Newest, subreddits, and blogs to find great content for you. It works pretty well -- and it's fast. Scour is written in Rust and if you're building a website or service in Rust, you should consider using this "stack". After evaluating various frameworks and libraries, I settled on a couple of key ones and then discovered that someone had written it up as a stack. Shantanu Mishra described the same set of libraries I landed on as the "mash 🥔 stack" and gave it the tagline "as simple as potatoes". This stack is fast and nice to work with, so I wanted to write up my experience building with it to help spread the word. TL;DR: The stack is made up of Maud, Axum, SQLx, and HTMX and, if you want, you can skip down to where I talk about synergies between these libraries. (Also, Scour is free to use and I'd love it if you tried it out and posted feedback on the suggestions board!) Scour uses server-side rendered HTML, as opposed to a Javascript or WebAssembly frontend framework. Why? First, browsers are fast at rendering HTML. Really fast. Second, Scour doesn't need a ton of fancy interactivity and I've tried to apply the "You aren't gonna need it" principle while building it. Holding off on adding new tools helps me understand the tools I do use better. I've also tried to take some inspiration from Herman from BearBlog's approach to "Building software to last forever". HTML templating is simple, reliable, and fast. Since I wanted server-side rendered HTML, I needed a templating library and Rust has plenty to choose from. The main two decisions to make were: Here is a non-exhaustive list of popular template engines and where they fall on these two axes: I initially picked because of its popularity, performance, and type safety. (I quickly passed on all of the runtime-evaluated options because I couldn't imagine going back to a world of runtime type errors. Part of the reason I'm writing Rust in the first place is compile-time type safety!) After two months of using , however, I got frustrated with its developer experience. Every addition to a page required editing both the Rust struct and the corresponding HTML template. Furthermore, extending a base template for the page header and footer was surprisingly tedious. templates can inherit from other templates. However, any values passed to the base template (such as whether a user is logged in) must be included in every page's Rust struct, which led to a lot of duplication. This experience sent me looking for alternatives. Maud is a macro for writing fast, type-safe HTML templates right in your Rust source code. The format is concise and makes it easy to include values from Rust code. The Hello World example shows how you can write HTML tags, classes, and attributes without the visual noise of angle brackets and closing tags: Rust values can be easily spliced into templates (HTML special characters are automatically escaped): Control structures like , , , , and are also very straightforward: Partial templates are also easy to reuse by turning them into small functions that return : All in all, Maud provides a pleasant way to write HTML components and pages. It also ties in nicely with the rest of the stack (more on that later). Axum is a popular web framework built by the Tokio team. The framework uses functions with extractors to declaratively parse HTTP requests.
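(Before the Axum examples, here is a rough illustration of the Maud style described above. It is a hedged sketch based on Maud's documented macro syntax, not Scour's actual templates.)

```rust
use maud::{html, Markup};

// A small partial: a plain function that returns Markup, reusable from any page.
fn nav(logged_in: bool) -> Markup {
    html! {
        nav {
            a href="/" { "Home" }
            @if logged_in {
                a href="/feed" { "Your feed" }
            } @else {
                a href="/login" { "Log in" }
            }
        }
    }
}

// Splicing Rust values into a template; HTML special characters are escaped automatically.
fn post_list(titles: &[&str]) -> Markup {
    html! {
        h1 { "Latest posts" }
        ul {
            @for title in titles {
                li { (title) }
            }
        }
    }
}
```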
The Hello World example illustrates building a router with multiple routes, including one that handles a POST request with a JSON body and returns a JSON response: Axum extractors make it easy to parse values from HTTP bodies, paths, and query parameters and turn them into well-defined Rust structs. And, as we'll see later, it plays nicely with the rest of this stack. Every named stack needs a persistence layer. SQLx is a library for working with SQLite, Postgres, and MySQL from async Rust. SQLx has a number of different ways of working with it, but I'll show one that gives a flavor of how I use it: You can derive the trait for structs to map between the database row and your Rust types. Note that you can derive both and 's and on the same structs to use them all the way from your database to the Axum layer. However, in practice I've often found that it is useful to separate the database types from those used in the server API -- but it's easy to define implementations to map between them. The last part of the stack is HTMX . It is a library that enables you to build fairly interactive websites using a handful of HTML attributes that control sending HTTP requests and handling their responses. While HTMX itself is a Javascript library, websites built with it often avoid needing to use custom Javascript directly. For example, this button means "When a user clicks on this button, issue an AJAX request to /clicked, and replace the entire button with the HTML response". Notably, this snippet will replace just this button with the HTML returned from , rather than the whole page like a plain HTML form would. HTMX has been having a moment, in part due to essays like The future of HTMX where they talked about "Stability as a Feature" and "No New Features as a Feature". This obviously stands in stark contrast to the churn that the world of frontend Javascript frameworks is known for. There is a lot that can and has been written about HTMX, but the logic clicked for me after watching this interview with the creator of it. The elegance of HTMX -- and the part that makes its promise of stability credible -- is that it was built from first principles to generalize the behavior already present in HTML forms and links . Specifically, (1) HTML forms and links (2) submit GET or POST HTTP requests (3) when you click a Submit button and (4) replace the entire screen with the response. HTMX asks and answers the questions: By generalizing these behaviors, HTMX makes it possible to build more interactive websites without writing custom Javascript -- and it plays nicely with backends written in other languages like Rust. Since we're talking about Rust and building fast websites, it's worth emphasizing that while HTMX is a Javascript library, it only needs to be loaded once. Updating your code or website behavior will have no effect on the HTMX libraries, so you can use the directive to tell browsers or other caches to indefinitely store the specific versions of HTMX and any extensions you're using. The first visit might look like this: But subsequent visits only need to load the HTML: This makes for even faster page loads for return users. Overall, I've had a good experience building with this stack, but I wanted to highlight a couple of places where the various components complemented one another in nice ways. Earlier, I mentioned my frustration with , specifically around reusing a base template that includes different top navigation bar items based on whether a user is logged in or not. 
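To make the Axum and SQLx pieces above concrete before returning to that base-template problem, here is a minimal hedged sketch; the route, table, and field names are illustrative rather than Scour's actual schema:

```rust
use axum::{extract::State, routing::get, Json, Router};
use serde::Serialize;
use sqlx::{FromRow, SqlitePool};

// Deriving both FromRow and Serialize lets the same struct travel from a
// database row all the way out through an Axum response. Illustrative only.
#[derive(FromRow, Serialize)]
struct Feed {
    id: i64,
    url: String,
}

// Extractors (here, State carrying the connection pool) are plain function arguments.
async fn list_feeds(State(pool): State<SqlitePool>) -> Json<Vec<Feed>> {
    let feeds = sqlx::query_as::<_, Feed>("SELECT id, url FROM feeds ORDER BY id")
        .fetch_all(&pool)
        .await
        .expect("query failed");
    Json(feeds)
}

fn app(pool: SqlitePool) -> Router {
    Router::new().route("/feeds", get(list_feeds)).with_state(pool)
}

#[tokio::main]
async fn main() {
    // Assumes a database with a matching `feeds` table already exists.
    let pool = SqlitePool::connect("sqlite::memory:").await.unwrap();
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app(pool)).await.unwrap();
}
```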
I was wondering how to do this with Maud, when I came across this Reddit question: Users of maud (and axum): how do you handle partials/layouting? David Pedersen, the developer of Axum, had responded with this gist. In short, you can make a page layout struct that is an Axum extractor and provides a method that returns : When you use the extractor in your page handler functions, the base template automatically has access to the components it needs from the request: This approach makes it easy to reuse the base page template without needing to explicitly pass it any request data it might need. (Thanks David Pedersen for the write-up -- and for your work on Axum!) This is somewhat table stakes for HTML templating libraries, but it is a nice convenience that Maud has an Axum integration that enables directly returning from Axum routes (as seen in the examples just above). HTMX has a number of very useful extensions, including the Preload extension. It preloads HTML pages and fragments into the browser's cache when users hover or start clicking on elements, such that the transitions happen nearly instantly. The Preload extension sends the header with every request it initiates, which pairs nicely with middleware that sets the cache response headers: (Of course, this same approach can be implemented with any HTTP framework, not just Axum.) Update: after writing this post, u/PwnMasterGeno on Reddit pointed out the crate to me. This library includes Axum extractors and responders for all of the headers that HTMX uses. For example, you can use the header to determine if you need to send the full page or just the body content. also has a nice feature for cache management. It has a that automatically sets the component of the HTTP cache headers based on the request headers you use, which will ensure the browser correctly resends the request when the request changes in a meaningful way. While I've overall been happy building with the MASH stack, here are the things that I've found to be less than ideal. I would be remiss talking up this stack without mentioning one of the top complaints about most Rust development: compile times. When building purely backend services, I've generally found that Rust Analyzer does the trick well enough that I don't need to recompile in my normal development flow. However, with frontend changes, you want to see the effects of your edits right away. During development, I use Bacon for recompiling and rerunning my code and I use to have the frontend automatically refresh. Using some of Corrode's Tips For Faster Rust Compile Times, I've gotten it down to around 2.5 seconds from save to page reload. I'd love it if it were faster, but it's not a deal-breaker for me. For anyone building with the MASH stack, I would highly recommend splitting your code into smaller crates so that the compiler only has to recompile the code you actually changed. Also, there's an unmerged PR for Maud to enable updating templates without recompiling, but I'm not sure if that will end up being merged. If you have any other suggestions for bringing down compile times, I'd love to hear them! HTMX's focus on building interactivity through swapping HTML chunks sent from the backend sometimes feels overly clunky. For example, the Click To Edit example is a common pattern involving replacing an Edit button with a form to update some information such as a user's contact details.
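Sketched below, and described in the next paragraph, is the stock approach; the markup is adapted from the click-to-edit example in the HTMX docs with placeholder endpoints, not Scour's code:

```html
<!-- Illustrative only: /contact/1/edit returns an HTML <form> fragment,
     which replaces this whole div when the button is clicked. -->
<div hx-target="this" hx-swap="outerHTML">
  <div>Email: joe@example.com</div>
  <button hx-get="/contact/1/edit">Edit</button>
</div>
```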
The stock HTMX way of doing this is fetching the form component from the backend when the user clicks the button and swapping out the button for the form. This feels inelegant because all of the necessary information is already present on the page, save for the actual form layout. It seems like some users of HTMX combine it with Alpine.js , Web Components, or a little custom Javascript to handle this. For the moment, I've opted for the pattern lifted from the HTMX docs but I don't love it. If you're building a website and using Rust, give the MASH stack a try! Maud is a pleasure to use. Axum and SQLx are excellent. And HTMX provides a refreshing rethink of web frontends. That said, I'm not yet sure if I would recommend this stack to everyone doing web development. If I were building a startup making a normal web app, there's a good chance that TypeScript is still your best bet. But if you are working on a solo project or have other reasons that you're already using Rust, give this stack a shot! If you're already building with these libraries, what do you think? I'd love to hear about others' experiences. Thanks to Alex Kesling for feedback on a draft of this post! Discuss on r/rust , r/htmx or Hacker News . If you haven't already signed up for Scour, give it a try and let me know what you think !

0 views
Andre Garzia 7 months ago

The Web Should Be A Conversation

For a very long time, I've defended the idea that the Web should be a conversation, a two-way street instead of a chute just pushing content into us. The Web is the only mass media we currently have where most people can have a voice. I'm not saying all these voices have the same loudness nor that every single person on our beautiful planet and space stations can actually post to the Web, just that it is the one place where everyone has the potential to be a part of it. Contrast it with streaming services, radio, or even the traditional publishing industry and you'll see that a person alone with an idea has a lot more obstacles in their way than when just starting a blog. For the last couple of years, there has been a colossal push by Silicon Valley companies towards generative AI. Not only are bots going crazy gobbling up all the content they can see, regardless of whether they have the rights to do so, but content farms have been pushing drivel generated by such machines into the wider Web. I have seen a horrible decline in the quality of my search results, and the social platforms that I'm a part of — the ones with algorithmic timelines such as Instagram and YouTube — have been pushing terrible content towards me, the kind that tries to get a rise out of you. They do this to "toxically foster" engagement, trying to get you so mad that you dive deeper into either an echo chamber or a flame war. The enshittification of the Web is real, but it is happening at a surface level. All the content you love and want is still there. It is just harder to discover because FAANG companies have a nuclear-powered shit firehose spraying bullshit all over the place. There are many ways to fight this, and in this blog post I'll outline what I am doing and try to convince you to do the same. Yes, this post has an agenda; a biased human wrote it. TL;DR: We need to get back into blogging. We need to put care and effort into the Blogosphere. A human-centric Web, in my own opinion, is one that is made by people to be browsed by people. The fine folks at the IndieWeb have been hammering at this for a very long time: On Social Networks such as Facebook or YouTube, you don't own your platform. You're just feeding a machine that will decide whether or not to show your content to people, depending on how much their shareholders can make out of your work and passion. Your content is yours: When you post something on the web, it should belong to you, not a corporation. Too many companies have gone out of business and lost all of their users' data. By joining the IndieWeb, your content stays yours and in your control. You are better connected: Your articles and status messages can be distributed to any platform, not just one, allowing you to engage with everyone. Replies and likes on other services can come back to your site so they're all in one place. You are in control: You can post anything you want, in any format you want, with no one monitoring you. In addition, you share simple readable links such as example.com/ideas. These links are permanent and will always work. — Source: IndieWeb I'm not advocating for you to stop using these bad social networks. You do whatever you want to do. I'm urging you to also own your own little corner of the Web by making a little blog. What will you post into it? Well, whatever you want. The same stuff you post elsewhere. A blog doesn't need to be anything more complicated than your random scribblings and things you want to share with the world.
I know there are many people that treat it as a portfolio to highlight their best self and promote themselves. If that is you too, go forward and do it! If that is not you, you can still have a blog and have fun. There are thousands of ways to start a blog; let me list some that I think are good ways to go:
Micro.Blog: A simple and powerful blogging platform by people who actually love blogs. You need a subscription for it, but it can be as cheap as 1 buck per month.
Jekyll using Github Pages: If you're a developer and already know a bit about Git, you can quickly spin up a blog using Jekyll and Github Pages. That allows you to start a blog for free.
Wordpress: It pains me to write this one. I don't like Wordpress but I understand it is an easy way to start blogging for free.
Blogger: Blogger still exists! A simple way to create a blog.
These are just some ways to do it. There are many more. When you start your own blog, you're joining the conversation. You don't need the blessing of a social network to post your own content online. You certainly don't need to play their algorithm game. Join the conversation as you are and not as these companies want you to be. The Web becomes better when you are your authentic self online. Post about all the things that interest you. It doesn't matter if you're mixing food recipes with development tips. You contain multitudes. Share the blog posts and content creators that you like. Talk about your shared passions on your blog. Create connections. The way to avoid doomscrolling and horrible algorithmic timelines is to curate your own feed subscriptions. Instead of relying on social networks and search engines to surface content for you, you can subscribe to the websites you want to check often. Many websites offer feeds in RSS or Atom formats and you can use a feed reader to keep track of them. There are many feed readers out there (heck, even I made one, more about it later). Let me show you some cool ones:
Feedly: A SaaS that is liked by many. Create an account and subscribe to your blogs from any Web device you've got.
NetNewsWire: Polished macOS app that has been the gold standard for feed readers for more than a decade. It is FOSS.
Akregator: From our friends at KDE, a FOSS desktop feed reader for Linux and Windows.
Miniflux: A minimalist feed reader. You can join their SaaS or self-host it.
Rad Reader: A minimalist desktop reader for macOS, Linux, and Windows.
BlogCat: Yep, I made this. More about this later. It is an add-on for Firefox that adds blogging features to the browser.
Once you're in control of your own feed, you step away from algorithmic timelines. You can use feed readers to subscribe not only to blogs, but your favourite creators on YouTube and other platforms too. If the website you want to subscribe to does not offer a feed, check out services like rss.app and others to try to convert it into a feed you can use on your feed reader of choice. With time, you'll collect many subscriptions and your Web experience will be filled with people instead of bots. Use OPML exporting and importing from your feed reader to share interesting blogs with your friends and readers. Word of mouth and grassroots connections between people in the blogosphere is how we step out of this shit. Learn a bit of HTML to add a blogroll link to your template. Sharing is caring. As I mentioned before, I have been thinking about this for a long time. I suspect I might have created one of the first blogging clients on MacOS 8 (yeah, the screenshot is from MacOS 9). I have no idea how many times I implemented a feed reader, a blogging client, or a little blogging CMS. Even this blog you're reading right now is a home-grown Lua-based blogging CMS I made in an afternoon. BlogCat is my latest experiment. It is an add-on for Firefox that adds blogging features to the browser. It aims to reduce the friction between blogging and Web Browsing by making weblogs a first-class citizen inside your user agent. You can subscribe to websites, import and export OPML, all from inside the browser. You can have a calm experience checking the latest posts from the websites you follow. Being a part of the conversation is also easy because BlogCat supports posting to Micropub-enabled sites and also microblogging to Bluesky and Mastodon. It uses a handy sidebar so you can compose your post while browsing the web. I've been using it for a couple of weeks now and am enjoying it a lot. Maybe you will enjoy it too. Anyway, this is not a post about BlogCat, but this post is what originally inspired BlogCat.
As I drafted this post weeks ago and mused about the Web I want and the features I want on Web Browsers, I realised I knew how to make them. Instead of simply shouting about it, I decided to build it myself. You too can be a part of the conversation. You too can help build the Web you want. Let's walk away from the enshittification of the Web by linking hands across the blogosphere.

0 views

CSS-only Syntax Highlighting

Read on the website: I hate JS. (No, not really.) I don't want to have even a line of JS on my website. Especially for something as simple as syntax highlighting. I should be able to do that with some CSS and minor preprocessing, right?

0 views
Phil Eaton 10 months ago

1 million page views

I was delighted to notice this morning that this site has recently passed 1M page views. And since Murat wrote about his 1M page view accomplishment at the time, I felt compelled to now too. I started regularly blogging in 2018. For some reason I decided to write a blog post every month. And while I have definitely skipped a month or two here or there, on average I've written 2 posts per month. Since at least 2018 this site has been built with a static site generator. I might have used a 3rd-party generator at one point, but for as long as I can remember most of this site has been built with a little Python script I wrote. I used to get so pissed when static site generators would pointlessly change their APIs and I'd have to make pointless changes. I have not had to make any significant changes to my build code in many years. I hosted the site itself on GitHub Pages for many years. But I wanted more flexibility with subdomains (ultimately not something I liked) and the ability to view server-side logs (ultimately not something I ever do). I think this site is hosted on an OVH machine now. But at this point it is inertia keeping me there. If you have no strong feelings otherwise, GitHub Pages is perfect. I used to use Google Analytics but then they shut down the old version. The new version was incredibly confusing to use. I could not find some very basic information. So I moved to Fathom which has been great. I used to track all subscribers in a Google Form and bcc them but this became untenable eventually after 1000 subscribers due to GMail rate limits. I currently use MailerLite for subscriptions and sending email about new posts. But this is an absolutely terrible service. They proxy all links behind a domain that adblockers hate and they also visually shorten the URL so you can't copy the text of the URL. I just want a service that has a hosted form for collecting subscribers and a that lets me dump raw HTML and send that as an email to my subscribers. No branding, no watermarks, no link proxying. This apparently doesn't exist. I am too lazy to figure out Amazon SES so I stick with MailerLite for now. In the beginning I talked about little interpreters in JavaScript, about programming languages, about Scheme. I was into functional programming. Over time I moved into little emulators and bytecode VMs. And for the last four years I became obsessed with databases and distributed systems. I have almost always written about little projects to teach myself a concept. Writing a bytecode VM in Rust , emulating a subset of x86 in Go , implementing Raft in Go , implementing MVCC isolation levels in Go , and so on. So many times when I tried to learn a concept I would find blog posts with only partial code. The post would link to a GitHub repo that, by the time I got to the post, had evolved significantly beyond what was described in the post. The repo code had by then become too complex for me to follow. So I was motivated to write minimal implementations and walk through the code in its entirety. I have also had a blast writing survey posts such as how various databases execute expressions , analyzing non-V8 JavaScript implementations , how various programming language implementations parse code , and how various database systems build on top of key-value databases . The last two posts have even each been cited in a research paper ( here and here ). In terms of quality, my single greatest trick is to read the post out loud. Multiple times. 
Notice parts that are awkward or unclear and rewrite them. My second greatest trick is to ask friends for review. Some posts like an intuition for distributed consensus and a write-ahead log is not a universal part of durability would simply not have been correct or credible without my fantastic reviewers. And I'm proud to have played that part a few times in turn. We also have a fantastic #writing-and-drafts channel on the Software Internals Discord where folks (myself occasionally included) come for post review. I've lost count of the total number of times that these posts have been on the front page of Hacker News or that a tweet announcing a post has reached triple-digit likes. I think I've had 9 posts on the front of HN this year. I do know that my single best year for HN was 12 months between 2022-2023 where 20 of my posts or projects were on the front page. Every time a post does well there's a part of me that worries that I've peaked. But the way to deal with this has been to ignore that little voice and to just keep learning new things. I haven't stopped finding things confusing yet, and confusion is a phenomenal muse. And also to, like, go out and meet friends for dinner, run meetups, run book clubs, chat with you fascinating internet strangers, play volleyball, and so on. It's always been about cultivating healthy obsessions. ("I wrote a little reflection on writing after noticing I passed 1M page views this morning." https://t.co/eIlMDVHNht pic.twitter.com/EKSiiDUz5G) In parting, I'll remind you: it is definitely worth writing about, whatever "it" is; you're not writing enough; and here are some ideas for posts I want to hear about, if you write about them.

0 views
Gabriel Garrido 1 year ago

Caching HTML in CDNs

Content delivery networks do not cache HTML out of the box. This is expected, otherwise their support lines would flood with the same issue: “why is my site not updating?” I want to cache my site fully. It’s completely static and I post infrequently. With some exceptions, most pages remain unchanged once published. Caching at the edge means that requests to this site travel shorter distances across the globe. Keeping pages and assets small means fewer bytes to send along. This is the way. Most CDNs have the ability to configure which files to cache and for how long. I use Bunny, which does not cache HTML out of the box. What follows should translate to other CDNs, but you may have to adapt a setting or two. I want the CDN to hold on to HTML files for a year. To that end, I define an edge rule that looks like this: If my origin responds to a request with a content type of or 1 then the CDN will cache it. I don’t want the CDN to cache requests for pages that do not exist, so I configure it to check that the origin also returns a status code of . I need to make sure that the CDN instructs browsers to not cache the page. Why? If I publish an update to a page, I can invalidate the cache in the CDN but not in someone’s browser. I create a second edge rule that looks like this: The header is used to instruct the browser on how to cache the resource. marks the response as stale and ensures it validates the page against the CDN. Suppose someone has loaded my page once. If they return to this page, the browser will verify with the CDN whether the page has been modified since it was requested. If it hasn’t, the CDN will send a without transmitting back the contents of the requested page. The header is set only if the requested resource ends in , as feeds do, or it does not have an extension at all (as html pages do) 2 . Lastly, I need to tell the CDN to clear some of its cache when I publish an update to the site. For example: I edit a page’s content, I publish a new post, or I update the site’s CSS 3 . I’ll admit that invalidating the cache is a reason why someone may just not bother with caching HTML. Following the list above, there are different scenarios to consider. If I edit a given post I may be tempted to think that only that page’s cache must be invalidated. However, in my site an update to a single page can manifest in changes to other pages: A change in the title requires updating the index, a new tag requires updating the tag index, a new internal backlink requires updating the referenced page. Lest you’re down to play whack-a-mole, the feasibility of this endeavour rests in the ability to invalidate the cache correctly and automatically. In my case I check every new site build against the live build to purge the cache automatically. I’m also caching files, as RSS readers are probing my site’s feeds frequently. ↩︎ These are the affordances that I can use in my CDN’s edge rules. Other CDNs may provide something more explicit. ↩︎ The CSS file for this site has a hash in its filename. Updating the CSS means a new hash which means all pages update their CSS reference. ↩︎
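Putting the two rules together, the intended exchange looks roughly like this (header values are illustrative, not Bunny's exact configuration):

```http
# First visit: the CDN holds on to the cached HTML, but the response tells the
# browser it must revalidate before reusing its own copy.
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: no-cache

# Return visit: the browser revalidates with the CDN (e.g. via If-Modified-Since
# or ETag) and, if the page has not changed, gets back only:
HTTP/1.1 304 Not Modified
```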

0 views
Gabriel Garrido 1 year ago

Implementing internal backlinks in Hugo

As of today, some pages on this site surface any other internal page where they are cross-referenced. If you’ve used a tool like Obsidian, you’ll know how useful backlinks are to navigate related content. The joys of hypertext! Hugo does not generate or expose backlinks out of the box. Instead, you must compute the references on each page as it is being built. This approach considers the following constraints: only links in the markdown content are eligible; all content pages are eligible; links use Hugo’s ref and relref shortcodes; no explicit front-matter is required; anchors within the same page are not considered backlinks; and multiple languages are not considered. For example, you can go to my /about page and see the various pages that reference it, including this one. When this page is built, all other pages are inspected. If a page has in its content, it will be matched. Create a file in your theme’s directory with the following markup. Then, instantiate it in any template where you want to show backlinks using . I did some light testing and the following references are supported using : For relative references using , supposing I’m a page: For page bundles , these work: I wish Hugo had better affordances to accomplish this type of task. Until then, you must bear with adding logic to your templates and O(n^2).
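A sketch of what such a partial might look like follows. This is my own illustrative reconstruction, not the author's exact markup; the file name and the matching heuristic (looking for the page's file path in other pages' raw Markdown) are assumptions:

```go-html-template
{{/* layouts/partials/backlinks.html (illustrative). For the current page, scan
     every other content page and keep those whose raw Markdown mentions this
     page's file path, i.e. the likely target of a ref/relref shortcode.
     This is the O(n^2) pass mentioned above. */}}
{{ $current := . }}
{{ $backlinks := slice }}
{{ range site.RegularPages }}
  {{ if ne .Permalink $current.Permalink }}
    {{ if in .RawContent $current.File.Path }}
      {{ $backlinks = $backlinks | append . }}
    {{ end }}
  {{ end }}
{{ end }}
{{ with $backlinks }}
<aside class="backlinks">
  <h2>Backlinks</h2>
  <ul>
    {{ range . }}
    <li><a href="{{ .RelPermalink }}">{{ .Title }}</a></li>
    {{ end }}
  </ul>
</aside>
{{ end }}
```

It would then be instantiated from a single-page template with something like {{ partial "backlinks.html" . }}.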

0 views
Gabriel Garrido 1 year ago

Read your Ghost subscriptions using RSS

I follow a handful of Ghost-powered sites that don’t serve their articles fully in their RSS feed. Most of the time this is because an article requires a paid or free subscription to be read. This turns out to be a nuisance as I prefer to read things using an RSS reader. For example, 404 Media’s recent article on celebrity AI scam ads in Youtube ends abruptly in my RSS reader. If I want to read the entire article then I am expected to visit their site, log in, and read it there. Article in the RSS reader cut short Update: In March 2024, 404 Media introduced full text RSS feeds for paid subscribers. If you’re not a paid subscriber, this note is still relevant if you wish to read free articles in full-text via RSS for any Ghost blog out there. Miniflux is a minimal RSS reader and client that I self-host. I like to use it with NetNewsWire on my iPad so that I can process and read articles in batches while offline. One of my favorite features in Miniflux is the ability to set cookies for any feed that you’re subscribed to. You can use this to have Miniflux pull the feed content as if you were authenticated 1 in websites that use cookies to maintain the user’s session. Ghost uses cookies to keep users authenticated for at least six months. A Ghost-powered site will respond with a different RSS feed depending on who is making the request. If you’re logged in and your subscription is valid for the article at hand, you get the entire article. If you’re not logged in or if you don’t have the appropriate subscription, you get the abbreviated article. This is great! I can continue to support the publisher that I’m subscribed to while retaining control over my reading experience. Only the cookies are necessary. Open your browser, visit the Ghost-powered site that you’re following, and log in. Open your browser’s developer tools and head to the storage section (under “Application” in Chromium-based browsers, “Storage” in Firefox and Safari). Look for the cookies section and locate the Ghost-powered site. Look for the and cookies. Back in Miniflux, head to the Feeds page, press Add feed and enter the site’s URL. Toggle the Advanced Options dropdown, look for the Set Cookies field and add the following string: . Replace with the corresponding value that you see in the browser’s cookie jar for each cookie. Press Find a feed. At this point Miniflux should find the RSS feed automatically and configure it accordingly. If you’ve already added the feed before you don’t need to remove and add the feed again. Instead, go to the feed settings page, add the cookie, and click Refresh to force Miniflux to re-fetch the feed. Article in the RSS reader rendered fully Cookie expiration: I didn’t read enough of Ghost’s code to verify whether they refresh the authentication cookies every once in a while. That said, the cookie’s expiration time is long enough that I’d be fine with having to replace them once every six months if necessary. Anyone with access to these cookies can impersonate your account in the corresponding website. I self-host Miniflux so no one has access to the Miniflux database but me. If you pay for Miniflux then you’ll want to make sure you feel comfortable trusting them with your account cookies. I wouldn’t be too worried but you should be aware of this fact. ↩︎
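For illustration only: the exact cookie names did not survive in this excerpt, and the names below are my assumption of Ghost's member-session cookies, so verify them against your own browser's cookie jar. The Set Cookies string then looks something like:

```
ghost-members-ssr=<value from your browser>; ghost-members-ssr.sig=<value from your browser>
```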

0 views
Dominik Weber 1 year ago

Refactoring an entire NextJs application to server components

Next.js 13 introduced the app directory and React Server Components. On Star Wars day (May 4th) a couple months later, React Server Components was marked as stable. Server components separate page rendering, with some parts rendered on the server and others on the client. The key difference is that server components are always rendered on the server, not prerendered on the server and hydrated on the client. Server-side rendering existed before, but now for the first time it’s possible to mix it with client-side rendering on the same page.
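A minimal sketch of what that mixing looks like in the app directory (file names, types, and the fetch URL are illustrative, not from the original post):

```tsx
// app/posts/page.tsx: a server component. It only ever runs on the server,
// so it can fetch data directly and ships no component JS to the client.
import LikeButton from "./like-button";

export default async function PostsPage() {
  const posts: { id: number; title: string }[] = await fetch(
    "https://example.com/api/posts"
  ).then((r) => r.json());

  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>
          {post.title} <LikeButton postId={post.id} />
        </li>
      ))}
    </ul>
  );
}

// app/posts/like-button.tsx: a client component. The "use client" directive
// opts this subtree into hydration so it can use state and event handlers.
// ---
// "use client";
// import { useState } from "react";
//
// export default function LikeButton({ postId }: { postId: number }) {
//   const [likes, setLikes] = useState(0);
//   return <button onClick={() => setLikes(likes + 1)}>Likes: {likes}</button>;
// }
```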

1 view
Kartik Agaram 1 year ago

crosstable.love

crosstable.love is a little app I whipped up for tracking standings during the Cricket World Cup, just to avoid the drudgery of resorting rows as new results come in. video; 20 seconds

0 views
kytta 2 years ago

I spent one week with Zola

You already know this story: I have been tired of not having a proper blog, so I started one. I absolutely didn’t want to try out anything with a CMS (sorry, WordPress folks!), but rather stick to a minimalistic statically generated site. When it came to choosing a generator, I had a few options I could consider:
Jekyll. I like that it’s native to GitHub Pages while also being easy to deploy virtually everywhere. Its Jinja2-like Liquid templates are a very good thing, and there are a bazillion plugins for it, too. Yet, I dislike Ruby a lot because of the slow speed and my inability to make it run properly on my computer.
11ty, Gatsby, Hexo, Next.js, and other JavaScript-based frameworks are off the table for me. I’ve never worked in an environment as fragile as Node.js. If one comes back to a project a year later, one discovers that nothing works any more. The build speeds aren’t the fastest, the template engines are not to my liking, and I really don’t want to ship any JavaScript to my readers. 11ty ticks the most boxes for me, but I couldn’t really get a hang of it.
Hugo is a go-to choice for many. I mean, half of the blogs I read are Hugo-based. Being non-extensible (as it often is with compiled languages), it is the most mature and feature-rich SSG out there. I have used Hugo to build the previous incarnation of my website, and I’ll talk about it a bit more further down this article.
Pelican, Nikola, Cactus, Hyde are all Python-based, which suits me well. However, most of them aren’t as feature-complete as the ones I mentioned before. Some do not have any documentation, others don’t really have any plugins. I’d be okay with writing some plugins myself, but that would mean I spend less time writing and more time coding.
Zola is a relatively new static site generator. Its idea is similar to Hugo, but it has some differences. It is written with Rust 🦀, which means it’s blazing ⚡️ fast 🚀 and memory 🧠 safe 🥽. It also uses its own template engine, Tera, which is pretty much another flavour of Jinja2 / Liquid / Twig / etc.
From this list, I’d go with Hugo, as I like its speed and feature completeness. However, I really dislike Go Templates. I find them quite confusing to use, and I still haven’t found an editor with proper support for them. So, I chose Zola. Setting up a Zola project is a very pleasing experience. Run , then , and the website is running. Nothing irritating here. All the pages in Zola live under . Every page should be a Markdown file with a preamble, which needs to have the defined. Pages can be organized into sections, and each section can have its settings for the pages: their sorting, pagination, RSS feed generation, etc. Upon creating the first page, Zola will scream at you for not having a template. Not very friendly, yet understandable. I wish SSGs generated some default templates to start from, but since most people use themes with their blogs, it doesn’t matter that much. After one has defined their templates and settings, one can start writing posts! Zola includes a very good preview server: it is fast at rebuilding pages and includes livereload for the browser. Zola is a very minimalistic SSG. Unlike Hugo, it has only a few options and a lot of sane defaults. As mentioned before, I wish there were some example templates for the HTML pages themselves, but they’re not too hard to write. Zola’s template engine, Tera, is basically Jinja2, which I wholeheartedly love. It includes all important Jinja2 features: filters, functions, includes, extends, and macros. Unlike Hugo, Zola doesn’t enforce any specific folder structure or naming for basic templates other than , , and , which means I can organize my templates in a very clean manner. Some of Zola’s own functions for Tera are also incredibly cool. It took me under 15 minutes to add comments to my blog that are based on the replies I get to the post on Mastodon. Zola makes a request to the Fosstodon API, grabs the replies, and passes each of them to a macro that returns the DOM element. All of this is happening inside templates, which is very cool and somewhat frightening at the same time :D I like Zola’s documentation, but it has its quirks. For example, some concepts that I’d put inside their own documentation pages are hidden away, like template filters being hidden inside ‘Templates/Overview’. Zola’s docs are supposed to have a search function, but it doesn’t work at the moment. Other than that, it is very clearly written, and I had a better time reading it than I had when reading Hugo documentation. I didn’t know what to name this section; in it, I talk about things that aren’t implemented in Zola (unlike Hugo or Jekyll) but which I don’t care about. One of those things is date-based ordering of pages. For example, a blog post from 3rd of January 2023 would be accessible under (or any other combination of pages). Vincent, the core developer of Zola, doesn’t like these ‘archive-style’ URLs and will not implement those. I have no problems with either URL style, and I am happy to keep my URLs clean, so I don’t really miss this. Yet I understand how critical this may become for someone migrating from Hugo or Jekyll. Zola also doesn’t have any Git integration. In Hugo, one can use Git commit dates to determine the and properties of a blog post or a sitemap entry. There also isn’t a feature request for it, so it may be added in the future, but I don’t care about it, so I won’t bother asking for it. Zola is by no means a finished project (heck, even Hugo isn’t), so there are a lot of things that I am missing from it. The thing that irritates me the most is poor footnote management. There is an issue , but it’s not Zola’s fault, but rather one of pulldown-cmark , the CommonMark parser that Zola uses. Footnotes as they are now look pretty ugly and do not play nicely with RSS readers, which is why I can’t post some old posts of mine for the moment. Another thing I’d really like to have is CSS post-processing. I know, I could run PostCSS over the generated content after running , but this would not fix the problem for the preview server, which means I am limited to a very small subset of PostCSS plugins. It would also mean that I would need to regenerate hashes for the SRI, which complicates the process even further. It’s not that I write very complicated CSS full of Stage 3 features and Modules and whatnot, but I’d still appreciate being able to use Autoprefixer and CSSO. Lastly, Zola can’t generate both RSS 2.0 and Atom feeds — you have to pick one. I don’t think any modern RSS reader would have an issue with Atom feeds, yet I really don’t want to give up on compatibility with some clients. There are workarounds, but there are no plans to implement it officially. For a short time, my website was hosted with Cloudflare Pages. I liked it for a few reasons: GitHub integration allowed me to push my code and have it be built automatically, and it supported IPv6 (unlike Vercel). I didn’t like having my whole DNS hosted there, but I wasn’t ready to switch somewhere else at the moment. When I migrated to Zola, I tried building the site on Cloudflare, and it kinda worked, but the fun ended there quickly. As it turns out, Cloudflare’s OS images are so old that they do not support new Zola versions. Here, ‘new’ means ‘any version released after August 2021’. It’s embarrassing beyond belief, and this is why I quickly abandoned Cloudflare for both my DNS (I switched back to deSEC) and my hosting (I migrated to GitHub Pages). So far, working with Zola has been great. The issue with footnotes is quite annoying, so I might have to learn Rust to fix this myself. Some days I think about migrating to Hugo or even writing my own SSG, but every time I get those thoughts I just re-read my first post on this blog and this calms me down :) This is post 004 of #100DaysToOffload.
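As an aside on the Mastodon-powered comments mentioned above, the template-side fetch can be sketched roughly like this. It is my own hedged reconstruction using Tera's load_data; the front-matter field, the Fosstodon status ID handling, and the macro are assumptions, not kytta's actual template:

```jinja2
{# Illustrative sketch: pull the replies to a post's Mastodon thread at build
   time and hand each one to a macro that renders the comment markup. #}
{% import "macros/comments.html" as comments %}

{% set status_id = page.extra.mastodon_id | default(value="") %}
{% if status_id %}
  {% set thread = load_data(url="https://fosstodon.org/api/v1/statuses/" ~ status_id ~ "/context", format="json") %}
  <section class="comments">
    {% for reply in thread.descendants %}
      {{ comments::comment(reply=reply) }}
    {% endfor %}
  </section>
{% endif %}
```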

0 views
codedge 4 years ago

Render images in Statamics Bard

The Bard fieldtype is a beautiful idea to create long texts containing images, code samples - basically any sort of content. While I was creating my blog I was not sure how to extract images from the Bard field. Thanks to the Glide tag you can simply pass the field of your image and it automatically outputs the proper URL. My image field is a set called image. In your Antlers template for the images just write For generating responsive images you can use the excellent Statamic Responsive Images addon provided by Spatie. With this the above snippet changes to this: This generates a tag with to render images for different breakpoints.
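A hedged sketch of what that can look like in a template (the Bard field handle, the set check, and the width are illustrative, not codedge's exact snippet):

```html
{{# Illustrative: render the "image" set while looping a Bard field assumed to be called "article" #}}
{{ article }}
    {{ if type == "image" }}
        <img src="{{ glide:image width='1280' }}" alt="{{ alt }}">
    {{ /if }}
{{ /article }}

{{# With Spatie's Responsive Images addon, the img tag above can be replaced by: #}}
{{ responsive:image }}
```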

0 views
Lea Verou 16 years ago

20 things you should know when not using a JS library

You might just dislike JavaScript libraries and the trend around them, or the project you’re currently working on might be too small for a JavaScript library. In both cases, I understand, and after all, who am I to judge you? I don’t use a library myself either (at least not one that you could’ve heard about ;) ), even though I admire the ingenuity and code quality of some. However, when you take such a brave decision, it’s up to you to take care of those problems that JavaScript libraries carefully hide from your way. A JavaScript library’s purpose isn’t only to provide shortcuts to tedious tasks and allow you to easily add cool animations and Ajax functionality as many people (even library users) seem to think. Of course these are things that they are bound to offer if they want to succeed, but not the only ones. JavaScript libraries also have to work around browser differences and bugs and this is the toughest part, since they have to constantly keep up with browser releases and their respective bugs and judge which ones are common enough to deserve a workaround and which ones are so rare that they would bloat the library without being worth it. Sometimes I think that nowadays, how good of a JavaScript developer you are doesn’t really depend on how well you know the language, but rather on how many browser bugs you’ve heard/read/know/found out. :P The purpose of this post is to let you know about the browser bugs and incompatibilities that you are most likely to face when deciding against the use of a JavaScript library. Knowledge is power, and only if you know about them beforehand can you work around them without spending countless debugging hours wondering “WHAT THE…”. And even if you do use a JavaScript library, you will learn to appreciate the hard work that has been put in it even more. Some of the things mentioned below might seem elementary to many of you. However, I wanted this article to be fairly complete and contain as many common problems as possible, without making assumptions about the knowledge of my readers (as someone said, “assumption is the mother of all fuck-ups” :P ). After all, it does no harm if you read something that you already know, but it does if you remain ignorant about something you ought to know. I hope that even the most experienced among you will find at least one thing they didn’t know very well or had misunderstood (unless I’m honoured to have library authors reading this blog, in which case you probably know all the facts mentioned below :P ). If you think that something is missing from the list, feel free to suggest it in the comments, but bear in mind that I consciously omitted many things because I didn’t consider them common enough. John Resig (of jQuery fame) recently posted a great presentation, which summarized some browser bugs related to DOM functions. A few of the bugs/inconsistencies mentioned above are derived from that presentation. The operator is almost useless: Use Object.prototype.toString instead. Never, EVER use a browser detect to solve the problems mentioned above. They can all be solved with feature/object detection, simple one-time tests or defensive coding. I have done it myself (and so did most libraries nowadays I think) so I know it’s possible. I will not post all of these solutions to avoid bloating this post even more. You can ask me about particular ones in the comments, or read the uncompressed source code of any library that advertises itself as “not using browser detects”.
JavaScript Libraries are a much more interesting read than literature anyway. :P I’m not really sure to be honest, it depends on how you count them. I thought that if I put a nice round number in the title, it would be more catchy :P
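The one concrete tip that survives in this excerpt is to prefer Object.prototype.toString for type checks. A small sketch of that technique (my own illustration, not code from the article):

```js
// Reliable type checks via Object.prototype.toString, which keeps working
// across frames and in edge cases where simpler operator-based checks fail.
function classOf(value) {
  return Object.prototype.toString.call(value).slice(8, -1); // e.g. "Array"
}

classOf([]);         // "Array"
classOf(new Date()); // "Date"
classOf(/x/);        // "RegExp"
classOf(null);       // "Null"
```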

0 views
Lea Verou 16 years ago

JS library detector

Just drag it to your bookmarks toolbar and it’s ready. And here is the human-readable code: Am I nuts? Certainly. Has it been useful to me? Absolutely.
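The bookmarklet code itself is not included in this excerpt; as a hedged sketch of the idea (not Lea's actual code), a detector can simply probe for the globals that popular libraries expose:

```js
// Illustrative library detector: check for well-known globals and report
// whatever the current page appears to be using.
(function () {
  var libs = [];
  if (window.jQuery)    libs.push('jQuery ' + jQuery.fn.jquery);
  if (window.Prototype) libs.push('Prototype ' + Prototype.Version);
  if (window.MooTools)  libs.push('MooTools ' + MooTools.version);
  if (window.YAHOO)     libs.push('YUI ' + YAHOO.VERSION);
  if (window.dojo)      libs.push('Dojo ' + dojo.version);
  alert(libs.length ? libs.join('\n') : 'No known JavaScript library detected');
})();
```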

0 views