Latest Posts (20 found)
usher.dev 6 months ago

AI Can't Replace Us - We Know Too Much

AI will give you a boost. It'll speed up development, often take entire problems off your hands, and point out awkward phrasing in my writing (which I may choose to ignore...). It's like pairing with an overconfident intern who read a blog post and now wants to rewrite everything in Rust.

But developers don't just write code; we also know how everything actually works. Which systems are held together with wishes and crossed fingers. We're deeply embedded in how companies operate, and more often than not, we're what's keeping the lights on.

The AI doesn't know:

- Which of the seven Google Docs actually describes the current infrastructure (and which one was just hopes & dreams).
- That we're halfway through migrating from Jira to Notion to Linear, and will probably be back to Jira by the time that's done, just with more grey hair.
- Why you've already ruled out that vendor and don't need another two-hour rambling demo call and five-day implementation consultation.
- Which Slack thread from six months ago contains the final decision about the logging library. (Not the one in the spec, or the pinned message - Brian didn't like that one.)
- That one server on a non-standard cloud provider, running a lonely cron job, is the only thing holding your data stack together.
- That restarting the prod server breaks the firewall config unless you remember to run and pray.
- That you have to divide that metric by two to get the actual value. No one remembers why.
- How that one undocumented supplier API works, and why it only returns 14 pages of data unless you pass the flag.
- That one special customer with a custom setup, maintained in a fork of a fork in a separate GitHub org. Don't forget to update it.
- That the contractor who built the weird glue layer between two ancient vendor systems is gone. And so is their documentation. And maybe their LinkedIn.
- That orders over £100 get free shipping unless they contain an item in the Fresh Produce category, or involve anyone with the first name Angela.
- Why RG needs to FFL the TNO on Fridays or the RT will get PO'd. No, the other TNO.

But developers aren't just coders. We're archives of institutional knowledge, tape dispensers holding together years of compromises, and the experience that holds the messy, mismatched parts of reality in place. If you're not already embedded in that mess - well, it might be time.

Oh and yeah, sure - we should document this stuff. But let's be honest: we won't. Not properly. Which is exactly why we're sticking around.

0 views
usher.dev 8 months ago

Here's how I use LLMs to help me write code

The king of LLM blogging, Simon Willison, has a great overview on using LLMs to write code. Using LLMs to write code is difficult and unintuitive. It takes significant effort to figure out the sharp and soft edges of using them in this way, and there’s precious little guidance to help people figure out how best to apply them. Even if you're not sold on how LLMs can boost your development, learning to manipulate them to your benefit is a skill worth the investment.

usher.dev 8 months ago

Kill your Feeds - Stop letting algorithms dictate how you think

We used to control apps like Facebook and Instagram with our own choices. They became daily comforts, making the world seem a little bit smaller and closer by bringing the people we cared about together into one place. But from the perspective of these companies, that’s a problem. Our personal worlds - our friends, family, and connections - are finite. Once we’ve caught up, we put the app down. That’s bad for business. Social media companies need us flicking through their apps for as long as they can keep us there. More eyes on ads is more money. So they play the system a bit. You've lingered on enough photos of cute puppies; they know what you like. Before long, those feeds of finite content are replaced by infinite algorithmic content pulled from millions of users trying to optimise their posts to be picked up by the omnipotent algorithms. Algorithms which are completely opaque to us. Sci-fi imagines megacorporations controlling our minds with brain implants. Some worry that companies are already listening in. But they don’t have to - they already control our eyes. The creators of TikTok, Instagram etc. have gained control over exactly what we see. What we see strongly influences how we think. They know that their feeds make us angry, they know the negative effects on our mental health (particularly that of teens), and they know that they have an influence on our opinion. With the power to shape what we see comes the power to shape what we believe. Whether through deliberate manipulation or the slow creep of algorithmic recommendations, engagement is fueled by outrage, and outrage breeds extremism. The result is a feedback loop that isolates users, reinforces beliefs, and deprioritises opposing viewpoints. We live in times where being able to form our own opinion is more important than ever. Where knowing how to source and identify truthful information is a critical skill.
Our reliance on being spoon-fed ideas is destroying those abilities. Alec of Technology Connections calls this algorithmic complacency, referencing our increasing inability to look outside our algorithmically created bubble. The social media companies don't care; the only person who has any interest in fixing this is you. It's time to take back control of how we think. We've identified the problem, now it's time to take action. We don't all have the freedom, interest or willpower to delete social media from our lives entirely. It's still where our friends are, an occasional distraction from reality and a source of entertainment. You don't have to become a digital outcast to hold back this influence. So what can we do? The internet should serve you, not the other way around. Take back control. Kill your feeds before they kill your ability to think independently.

- Go directly to the source - if you like a particular TikTok creator, Facebook page or YouTube channel, skip the feed and go directly to their pages. Consider bookmarking their profiles individually.
- Learn to find information and entertainment without a feed - try to find a creator making videos or writing about a topic of interest without having to stumble across them in a feed.
- Use platforms and platform features that let you control your experience - Instagram's 'Following' feed, YouTube's Subscriptions page, Bluesky, Mastodon and RSS feeds.
- Be mindful of engagement traps - recognise how algorithmic feeds are designed to keep you engaged and scrolling. Take a breath and stop the cycle.
- Talk about it - if you're reading this, you already know this is a problem. Your friends and family may not be aware of how their feeds are manipulating their attention and beliefs.

Without intervention, the radicalisation of opinions, and the consequences we're already seeing, will only escalate.

usher.dev 9 months ago

Confessions about my smart home

Frenck (lead engineer of Home Assistant) breaks down the state of his smart home. While you might assume the lead engineer of a home automation platform has everything figured out, it's fun to hear that's far from the truth. "...my dashboards are so bad that if I turn on the "Coffee machine" switch, it actually powers on my 3D Printer. 🤦‍♂️" I keep seeing examples of this - lead developers and experts in their field often don't have the perfect environment. They're focused on the things they want to do, and the pursuit of the ideal environment often falls by the wayside. This is something I need to remind myself of when I go on a yak-shaving adventure, tweaking my editor setup or hunting for the perfect tool for a job.

usher.dev 9 months ago

When Imperfect Systems are Good, Actually: Bluesky's Lossy Timelines

I love reading technical deep-dives into system design and scaling. This post from Jaz (infrastructure at Bluesky) is a great read on how they balanced user needs with performance by finding where they could be 'imperfect'. "By specifying the limits of reasonable user behavior and embracing imperfection for users who go beyond it, we can continue to provide service that meets the expectations of users without sacrificing scalability of the system." There are always sacrifices to be made in system design - I like the idea of defining a 'reasonable limit' for any one user and building optimisations around that limit.

usher.dev 9 months ago

Minimal GitOps-like Deployment Tool

However, it's easy to forget that there's a class of service/business where this doesn't make sense. Many small businesses (if they're not using a PaaS like Fly) don't have the resources to run a Kubernetes cluster, and 95% of the time just need an app to run on a single VPS. Sometimes - maybe during an event or a promotional period - it might make sense to add another server to the mix, load balancing traffic somewhere (DigitalOcean load balancers are pretty effective). Assuming we have some nice way to spin up more servers running the app (a nice Ansible Playbook perhaps?), and assuming we have some nice CD pipeline to deploy the app, we need some way to make sure all the servers are running the latest version. One pattern I like for this is a simplified GitOps pull model (think ArgoCD or Flux but held together with duct tape) where we:

- Have a repo which indicates what app releases should currently be deployed.
- Get your servers to query this repo to determine what they should be running, and automatically update the app on the server to match.
- Make your deployment pipeline update the repo with a reference to the new release.

This way, your deployment pipeline doesn't need to know anything about the servers it's deploying to - it just needs to know how to update the repo with a reference to the new release. In one case, I have a repo called 'ops' which, along with all the Ansible playbooks, has a set of files, one for each app, which contain (along with other settings) a line like: I can then run a script like the following on each server, which pulls down the repo, checks each app and updates the systemd service file to point to the latest release. For cases where Kubernetes and a full GitOps solution is overkill, this is a nice way to get a simple deployment pipeline set up.
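As a rough illustration of that pull loop - not the actual script, which didn't survive extraction - here's a minimal sketch. The `release=...` file format, file names and deployment steps are all invented for the example; a cron job would `git pull` the ops repo and then run something like this for each app:

```python
import re
import subprocess
from pathlib import Path
from typing import Optional


def desired_release(settings_text: str) -> Optional[str]:
    """Extract the target release ref from an app's settings file.

    Expects a line like 'release=v1.2.3' (a hypothetical format)."""
    match = re.search(r"^release=(\S+)$", settings_text, re.MULTILINE)
    return match.group(1) if match else None


def sync_app(name: str, ops_dir: Path) -> bool:
    """Bring one app in line with the release the ops repo asks for.

    Returns True if an update was performed."""
    target = desired_release((ops_dir / f"{name}.conf").read_text())
    marker = ops_dir / f"{name}.current"  # last release this server deployed
    current = marker.read_text().strip() if marker.exists() else None
    if target is None or target == current:
        return False
    # Hypothetical deployment steps: fetch the release artifact, rewrite the
    # systemd unit to point at it, then restart the service.
    subprocess.run(["systemctl", "restart", name], check=True)
    marker.write_text(target)
    return True
```

The deployment pipeline only ever commits a new `release=` line to the ops repo; the servers converge on it themselves.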

usher.dev 9 months ago

Why Blog if Nobody Reads It?

Andy Hawthorne writes a lovely summary of why it's worth blogging: When you write, you think better. When you think better, you create better. I've just added Simon Willison's link blog pattern to this site, hoping it will encourage me to write more, even if it's just fleeting thoughts on a link like this one.

usher.dev 3 years ago

Django on Fly.io with Litestream/LiteFS

One of the neat things that has come out of Fly is a renewed interest across the dev world in SQLite - an embedded database that doesn't need any special servers or delicate configuration. Some part of this interest comes from the thought that if you had an SQLite database that sat right next to your application, in the same VM, with no network latency, that's probably going to be pretty quick and pretty easy to deploy. Although in some ways it feels like this idea comes full circle back to the days of running a MySQL server alongside our PHP application on a single VPS, we're also in an era where we need to deal with things like geographic distribution, ephemeral filesystems and scale-to-zero. So we want to run our apps in a nice PaaS, and also quite like the idea of our database being local to our application code, but there's a few conflicts here:

- PaaS tools like Heroku/Fly tend to offer ephemeral storage, or no guarantees on the safety of storage. Trying to keep an SQLite database around on this sort of storage just won't work out.
- A common approach to scaling is to "scale out" - start up more instances of your application and load balance between them. How would that work with SQLite? Even if you could access the same database file from each instance, we're re-introducing latency, and as SQLite can't be written to by multiple processes at once, we're probably slowing our app down too.

Thankfully, Fly have been funding the development of some interesting tools, Litestream and LiteFS, which aim to solve this. The difference between these tools is not particularly obvious, so to summarise:

Litestream was Ben Johnson's first attempt at solving this problem, and is now focused primarily on disaster recovery. It's a tool to stream all the changes made to your SQLite database to some remote storage, like S3, and then recover from it when you need to. This is great, and it nicely solves our first conflict. Our application can be configured to restore the database from remote storage when it starts, and we can be safe knowing that any changes are being backed up as our application runs. Unfortunately, it doesn't solve our second problem: replicating our databases to other instances of our app if we decide to scale out. While there were plans (and an initial implementation) for this in Litestream, live replication was instead moved to the second project, LiteFS.

LiteFS does some magic with FUSE to allow it to intercept SQLite transactions and then replicate to multiple instances of your application. It's a little more complicated, as you need additional tools like Consul so that it knows where to find the primary instance (where it will direct queries that write to the database), but it solves our second conflict! Alas, our first conflict isn't yet solved by LiteFS - if all your nodes go away, there's nowhere to replicate your database from, so it too will disappear. S3 replication like in Litestream is on the roadmap however, so it seems LiteFS is set to solve all our problems!

So we know what these tools do - let's experiment with getting our Django applications running with them on Fly.io.

For Litestream, we'll need:

- An S3-compatible storage bucket and access keys
- Our Django app, ideally configured with for convenience
- The binary available to our application. I have: in my Dockerfile.

Then:

- Prepare your Fly application with (we don't need a Postgres database if it asks).
- Set all the environment variables we're going to need by creating a new file (call it something like ): will be the directory where the database is replicated to, is the path where Django and can find your database file, and is the path to your S3-compatible bucket.
- Run to import these values into your Fly environment.
- Create a : Replace your section with whatever you normally run to start your web server. Litestream will do its stuff and conveniently run our own application, exiting when our server exits.
- Create a script, , that will run on application start to make sure all our directories are created. This:
  - Checks important environment variables are set.
  - Creates a database directory and makes sure it's open enough for the app to read/write to it (you might choose to tighten this up if appropriate).
  - Restores the database using litestream if it doesn't already exist.
  - Runs migrate to make sure the database is up to date (or creates it if there wasn't anything to restore).
  - Runs which will in turn run the command in the litestream config, starting the application.
- Update your Docker to run this .

Once deployed with , Litestream will start backing up your database. Careful: if you try to scale out by adding more instances, at best you'll see out-of-sync data, at worst you'll end up with a corrupt database.

For LiteFS, we'll need:

- Our Django app, ideally configured with for convenience
- The binary available to our application. I have: in my Dockerfile (alternatively, copy the binary from the image ).
- Some way to make sure our write requests only end up with the primary (we'll come back to this).

Then:

- Prepare your Fly application with (we don't need a Postgres database if it asks).
- Set all the environment variables we're going to need by creating a new file (call it something like ): will be the directory where the database is replicated to and is the path where Django and can find your database file.
- Run to import these values into your Fly environment.
- In your , add: This gives us access to the shared Fly.io-managed Consul instance.
- Create a : Replace your section with whatever you normally run to start your web server. The is where LiteFS will create its filesystem (where the database will live), and is where it keeps files it needs for replication. The and blocks tell LiteFS how to talk to each other and where to find the Fly.io-managed Consul instance.
- Create the that is started by LiteFS. We need things like migrations to run after LiteFS has set up its filesystem, so we do those in this script:
- Create a script, , that will run on application start to make sure all our directories are created:
- Update your Docker to run this .

We're not there yet. We need to make sure database writes only go to our primary. To do this, we'll register a database which intercepts any write queries. I've got this in my app's (heavily based on Adam Johnson's ): This will raise an exception if the query will write to the database, and if the file created by LiteFS exists (meaning this is not the primary). We need something to intercept this exception, so add some middleware: and register it in your settings. This catches the exception raised by the previously registered , finds out where the primary database is hosted, and returns a header telling Fly.io: "Sorry, I can't handle this request, please replay it to this database primary". Once deployed with , LiteFS will start replicating your database!

These are fun tools to play with for now, but there's clearly a lot of work to get them working with our normal apps. I'm excited about how they could make getting a Django/Wagtail app deployed much more accessible, easier and cheaper, but there's still some work to be done to make that a reality. The LiteFS roadmap includes things like S3 replication (so we get similar backup features to Litestream) and write forwarding (so writes to read-replicas will automatically be forwarded to the primary). There's a lot of promise there and I can't wait to make more use of it!
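To make the write-interception idea concrete, here's a minimal sketch of the same mechanism - not the post's original (elided) code, and all names are illustrative. It relies on two behaviours: LiteFS creates a `.primary` file (containing the primary's hostname) only on replica nodes, and Fly.io honours a `fly-replay` response header to replay a request elsewhere:

```python
import re
from pathlib import Path

# Queries that modify the database (a simplistic check for the sketch).
WRITE_SQL = re.compile(
    r"^\s*(INSERT|UPDATE|DELETE|REPLACE|CREATE|DROP|ALTER)\b", re.IGNORECASE
)


class NotPrimaryError(Exception):
    """Raised when a write query is attempted on a read-only replica."""


def check_query(sql: str, primary_file: Path) -> None:
    """Block write queries unless this node is the LiteFS primary.

    The '.primary' file exists only on replicas, so its presence means
    writes must be refused; its contents name the primary node."""
    if WRITE_SQL.match(sql) and primary_file.exists():
        raise NotPrimaryError(primary_file.read_text().strip())


def replay_headers(exc: NotPrimaryError) -> dict:
    """Middleware side: ask Fly.io to replay the request on the primary."""
    return {"fly-replay": f"instance={exc.args[0]}"}
```

In a real app, `check_query` would hang off Django's database instrumentation and `replay_headers` would populate the response inside an exception-catching middleware.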

usher.dev 3 years ago

Deploying a Wagtail site to Fly.io Part 5: Deploying

To start, we'll need to install the Fly command line tool, . Follow steps 1-3 over on the Fly.io docs and then come back. I'll wait. Great, so now you have a Fly.io account and the tool. Everything on Fly can be managed with this tool (not quite the nice GUI experience you get with Heroku, but it's getting there). Next up, let's kick off the application on Fly: (You'll need to come up with your own alternative to , as app names need to be unique.) You'll be prompted for a region - pick your preference. You'll also be asked whether you want to set up a Postgresql database; type and select the 'Development' configuration. Finally, you'll be asked whether you want to deploy your application. Type to say no for now. In one command, Fly's done a lot of work for you here:

- An empty app has been added to your Fly account, ready to receive a deployment.
- A Postgres database has been configured and attached to your app, so that the secret (and therefore environment variable) contains all the information your app needs to connect to the database.
- A file has been added with all the details Fly needs to deploy and manage your app.

One problem: we need to make sure your database is populated with the application models when it's first deployed (and when migrations are added in the future). To do this, edit the and add: Now whenever Fly creates a new release, it will run Django's command. While we're editing this file, you'll also want to change the line from to , as this is the port the we're working with serves the application on. We're also going to need to set a few other environment variables to make sure everything works as we expect. In Fly, we set these using the 'Secrets' feature and the related commands. I find the easiest way to do this is to prepare a file called with all the environment variables you want to set. It'll look something like this: The value of should be the name you gave your Fly app, followed by . This will create each line from your file as a Fly secret attached to your application. The application will be able to access each secret as an environment variable when it's running. Finally, we can deploy our app: Visiting will now reveal your site running on Fly!

You'll probably want to set yourself up with a superuser. We can do this by accessing the server running your application and running the management command: You're also probably expecting a bit more than just a boring 'Welcome to your new Wagtail site!'. We need to get to generate its content, so let's do that. In a session, run to prepare our content. Refresh the site and you should have a fully functional bakery! Congratulations! You've got a Wagtail site running on Fly!

There's a few things to consider now we're here:

- You might want to point your own domain at Fly, which will generate SSL certificates for you. Take a look at the custom domains docs, and don't forget to update your .
- There's currently no super-easy way to run scheduled tasks on Fly - things like Wagtail's or Django's . This is something I'd love to see Fly add, but there are workarounds that are a topic for a future blog post.
- Fly's Postgres service is very handy but lacks managed features like backups, rollbacks, disaster recovery and support. If you need these things, consider hosting your database somewhere else like Supabase, CrunchyData, or one of the big cloud providers' database services (AWS RDS, Google Cloud SQL).
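For illustration, the fly.toml edits described above might look something like this. This is a sketch, not the post's exact file: the app name is hypothetical, and the port assumes gunicorn serving on 8000 (Fly's generated config defaults to 8080):

```toml
app = "my-wagtail-site"  # hypothetical app name

[deploy]
  # Run migrations in Fly's release phase, before the new version serves traffic
  release_command = "python manage.py migrate --noinput"

[[services]]
  # Match the port your application server actually listens on
  internal_port = 8000
  protocol = "tcp"
```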

usher.dev 3 years ago

Deploying a Wagtail site to Fly.io Part 4: Static & Media Files

We'll use a couple of very useful Django packages to handle these:

- Static files - things that are in our codebase like CSS and JS and don't change unless we deploy a new version of the site.
- Media files - images and documents uploaded by our site users and content editors.

As covered in Part 1 (but that was too many words ago), Django has no production-ready approach to serving static files. Thankfully, WhiteNoise exists and offers a nice simple way to get the job done without any additional servers or services. The Using WhiteNoise with Django docs already serve as a perfect starting point, so I won't repeat them here. I recommend you only follow the setup up to step 3 for now - everything following may be useful in some situations but likely won't be for this process. You'll need to add to your and update your settings to include something like: We've now told WhiteNoise to get involved whenever we ask for one of our 'static' files.

Media files are a little more complicated. By default, Django's going to want to store any user-uploaded files on the local filesystem. With Fly.io (and many other similar services like Heroku), the filesystem given to your application is ephemeral - it will vanish whenever your application is shut down or re-deployed. That's not great when you're dealing with files your users are expecting to stick around. Unless you have a nice way to replicate your media across volumes, an external provider like we use here is recommended. To solve this, we're going to want to:

- Store those files somewhere a bit more permanent.
- Tell Django to write and read files from that place.

The magic combination here is django-storages and Amazon S3. Django Storages adds a set of custom 'storage backends' which give Django's s the ability to store and retrieve files from external locations. While it supports options like Google Cloud Storage and Azure Storage, we'll be using Amazon S3 here.

First up, you'll need an AWS account . If your account is new, you'll get some 5GB of free storage for 12 months. If it's not, 5GB is going to cost around $0.12 a month. Once your account is set up and logged in, click your name in the top right and go to Security Credentials . Go to the Access keys section and click Create new access key . Take note of the generated Access Key ID and Secret Key in your password manager of choice.

A place to put files in S3 terminology is called a 'bucket'. We'll need to create a new one such that:

- Our site is allowed to put files into it.
- Anyone on the internet can read files from it.

Consider a package like wagtail-storages if you'd like a bit more control. Setting this up can be a little daunting if you're not familiar with AWS/S3, so we'll take a little shortcut and use the handy s3-credentials tool, which you can in to your dev environment. Once installed, we can get everything we need in one command: (as S3 bucket names must be unique, you'll need to replace with a more unique name here). This will go ahead and create:

- A bucket which is publicly accessible.
- An AWS user which has write access to the bucket.

...and give you an output like: You'll want to note down your bucket name, and . Next, you'll need to add to your requirements and update your settings to use the S3 backend: We're making sure to use environment variables here so that this config only applies if you specify an variable. That was a big one. Let's move on...
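As an illustration of those settings, here's a sketch - not the post's exact snippet, which didn't survive extraction. The setting names are the documented django-storages ones; the conditional-on-environment structure and fallbacks are my own assumptions:

```python
# A sketch of S3 media settings driven by environment variables.
# Returns {} when no bucket is configured, so local dev keeps using
# Django's default filesystem storage.
def media_storage_settings(env):
    bucket = env.get("AWS_STORAGE_BUCKET_NAME")
    if not bucket:
        return {}
    return {
        "DEFAULT_FILE_STORAGE": "storages.backends.s3boto3.S3Boto3Storage",
        "AWS_STORAGE_BUCKET_NAME": bucket,
        "AWS_ACCESS_KEY_ID": env["AWS_ACCESS_KEY_ID"],
        "AWS_SECRET_ACCESS_KEY": env["AWS_SECRET_ACCESS_KEY"],
        "AWS_QUERYSTRING_AUTH": False,  # public bucket: serve plain URLs
    }


# In settings.py:
#   import os
#   globals().update(media_storage_settings(os.environ))
```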

usher.dev 3 years ago

Deploying a Wagtail site to Fly.io Part 3: Dockerising & Requirements

However we bundle the app, we need to tell the platform:

- How to acquire all our project's dependencies
- What application server to use and how to configure it
- What files need to be made available to the platform

Fly currently supports 3 ways of doing this: Dockerfiles, Buildpacks and Nixpacks. The latter two are a great way to get started if you're not familiar with Docker - they provide a pre-built way for the platform to build and run your application without the developer having to do much extra work. If you don't have a Dockerfile, grab a copy of the Wagtail template file , place it in the root of your project and replace any occurrences of with the name of the folder your is found in.

Looking at this , you'll probably notice a big comment at the bottom, before the line that starts with . This line is what Fly will execute when it starts running our application, and the warning is telling us that running at this point is not best practice. Because Fly gives us a phase in which we can run migrations (which we'll come back to), we can simplify this line to: Now the only thing that Fly will do when your application launches is start the application server.

Because we'll be using PostgreSQL for our site, we need to make sure the package that Django uses to talk to PostgreSQL databases will be installed. To do this, add to your file. Next up, what to do with all those static/media files...
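For illustration, the simplified final line - with the migrate step removed - might look something like this, assuming a project named `mysite` served by gunicorn (both assumptions; the post's actual snippet was elided):

```dockerfile
# Just start the application server; migrations now run in Fly's release phase.
CMD ["gunicorn", "mysite.wsgi:application", "--bind", "0.0.0.0:8000"]
```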

usher.dev 3 years ago

Deploying a Wagtail site to Fly.io Part 2: Configuration

Alternatively, grab a checkout of , or find your own existing project.

We'll start by getting your application into a state where it is easily configurable with environment variables. When building an application for any platform, following the Twelve-Factor App methodology is highly recommended, for reasons that are well explained on the linked site. In particular, factor 3 ('Config') is about storing configuration that is likely to vary in the environment of the host, rather than in your codebase. I'll refer to this as a 'host' for simplicity here, even though that could be a local dev setup, CI, or something else.

If you've seen a few Django projects, you've probably seen projects with multiple configuration files like , , etc. This can create a tight coupling between your codebase and the host it runs on. Rather than your host being able to inform the application about things like database credentials, domains and secrets, the codebase needs to be 'aware' of the details of the host it's going to run on before it gets there! Wagtail's default starter template is guilty of this, but we can focus on the , treating it as our only settings file and varying any settings that might change with environment variables.

While there are a lot of nice fully-featured packages that help you do this ( django-environ , Dynaconf , python-decouple ), we'll start simple by pulling values directly from the environment using Python's module. There are a few settings we'll want to convert to environment variables (or add in if they don't exist). I'll usually end up with something like: This way, the host's environment variables specify:

- Whether debug mode is enabled
- The Django secret key
- A comma-separated list of values

Other things that vary can include:

- API keys and other secrets
- Locations of external resources like S3 buckets (we'll come back to this)

Most of these values can be easily parsed from the simple strings we are limited to with environment variables. On to the complicated bit...

The other important thing that we need to pull from the environment are the database credentials. In 12-factor apps, these are often expressed in a URL format (e.g. ). This can take a bit of parsing to get the separate values Django expects, so a package like dj-database-url is very useful here. Configuring Django to use an environment variable using this package can be done by replacing your settings value with:
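To make all this concrete, here's a minimal sketch of what those settings can look like when pulled from the environment. The variable names and defaults are my own illustrative assumptions, and the hand-rolled URL parsing at the end is just to show the kind of work a package like dj-database-url does for you:

```python
# settings.py (sketch) - pulling configuration from environment variables.
# Variable names and defaults here are illustrative assumptions.
import os
from urllib.parse import urlsplit

# Debug mode stays off unless the host explicitly opts in.
DEBUG = os.environ.get("DJANGO_DEBUG", "false").lower() == "true"

# The secret key should always come from the host in production.
SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY", "dev-only-insecure-key")

# A comma-separated list, e.g. DJANGO_ALLOWED_HOSTS=example.com,www.example.com
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "localhost").split(",")

# Parse a DATABASE_URL (e.g. postgres://user:pass@host:5432/dbname) by hand,
# to illustrate the parsing that dj-database-url handles for you.
_db = urlsplit(os.environ.get("DATABASE_URL", "postgres://localhost:5432/app"))
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": _db.path.lstrip("/"),
        "USER": _db.username or "",
        "PASSWORD": _db.password or "",
        "HOST": _db.hostname or "",
        "PORT": _db.port or "",
    }
}
```

In a real project you'd swap the manual parsing block for dj-database-url's `config()` helper, which also copes with edge cases like query-string options and special characters in passwords.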

usher.dev 3 years ago

Deploying a Wagtail site to Fly.io Part 1: The Plan

Migrating from Heroku or similar? Your app is probably ready to go. Jump to part 5 . Need a hand? I offer Wagtail, hosting and DevOps consulting services - drop me an email and we'll have a chat.

Yes, I'm drawn to the exciting tech , the fascinating blog posts , the attractive pricing and generous free tier (for now...), but more importantly, Fly are one of the few new breed 'PaaS' tools that realise what Heroku got right with its excellent developer experience, and are on a fast-track to replicating and eventually exceeding what Heroku offers. While Fly is quickly establishing itself as my go-to hosting platform for personal/hobby projects (even before the "sunsetting" of Heroku's free tier), Wagtail has been my go-to platform for almost all the sites I build for quite a while. In this series, I'll run through how to get a Wagtail site running on Fly.io.

We'll need to sort out a few things before our application is ready to be served from Fly:

- How do we configure our application so it's easily ported between different environments? (dev, staging, production, etc.)
- How are we going to bundle up our application ready for pushing to Fly?
- How are we going to serve our 'static' files (CSS, JavaScript, etc.)?
- Where are our media files (or Wagtail Images/Documents) going to be stored?
- Where's our database going to live?

There are, as always, many answers to these questions but we'll pick a few answers and run with them.

For configuration (and for everything else for that matter), we'll be following the Twelve-Factor App methodology. This means storing any configuration that might vary in the environment - Fly's Secrets in this case.

We'll bundle up the application using a - while there are certainly merits to Fly's other option, Buildpacks , I consider having a good Docker set up to be super useful for portability and a great local development experience.

Django has no built-in way (outside of development) to serve static files - it expects you to figure out how to serve these yourself with a separate web server or from a separate cloud service. We'll use the very useful WhiteNoise package to give Django the ability to easily and safely serve its own static files.

When Fly runs your application, it gets a bit of the host's file system to write to, but whatever you write to it is 'ephemeral' (fancy for 'not gonna last'). When that application stops, your data is gone. That's no good when you want to store things like images or documents uploaded by users.

Our database is going to live on Fly too! We'll use their built-in Postgres support for this project, but do be aware it is largely 'unmanaged' - meaning you don't get backups, disaster recovery and the other niceties you'd get with a managed service.

Before we kick off, a few final things:

- We're going to be preparing and deploying a fresh Wagtail site in this series. If you have your own site ready to go, these steps should all be transferable to whatever you're building.
- This series focuses on preparing and deploying a Wagtail site - I won't be going into detail about setting up your development environment, installing dependencies or potential differences when working with different operating systems. Some knowledge of , Wagtail/Django, managing files and general command line usage is expected.
- Any -specific changes or notes will be in these 🍞-boxes.

Let's get started in Part 2.
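The WhiteNoise wiring mentioned above is brief enough to sketch here. The middleware placement follows WhiteNoise's documented convention of sitting directly after `SecurityMiddleware`; the rest of the middleware list is illustrative, not a complete Django configuration:

```python
# settings.py (sketch) - serving static files from Django with WhiteNoise.
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    # WhiteNoise goes directly after SecurityMiddleware.
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ... the rest of your middleware ...
]

# Compress static files and serve them with far-future cache headers.
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
```

With this in place, `collectstatic` output is served by the Django process itself - no separate web server or CDN required to get started.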

usher.dev 4 years ago

August 2021 Braindump

🎲 We're slowly getting back into weekly game nights with a smaller group. It's been great to dive into some things that have been sitting around waiting to be played; Oath , Anno 1800 , Hallertau and the lovely The Search for Planet X

🎲 We released tickets for Bastion , a small (around 100 people) board game meet-up I run with some friends. They sold out in 2 days - clearly everyone else is as eager as we are to get back to it after missing a year!

🏡 The new raised beds are finally finished. Next up, filling them. We're aiming for a hügelkultur -inspired approach to minimise the cost and labour required to fill up the deeper beds. We'll see how that goes.

{% image "src/posts/2021-08-27-august-braindump/raised_beds.jpg", "Finished raised beds" %}

📹 I wrote a blog post about my adventures in background blurring my webcam on Linux . It's been fun to play around with BodyPix - now I have far too many fun ideas on what to do with this in conjunction with Home Assistant.

usher.dev 4 years ago

Make Your Webcam Look Slightly More Professional (mainly on Linux)

Being annoyed with Zoom's lack of smart-background blurring on Linux led me down a rabbit-hole out of which I have only just emerged, covered in who-knows-what and left with a sense that I should probably have left the poor rabbits alone. Do you want to make this questionably improved transformation to your cheap webcam: With nothing but a browser, broadcasting software, a reasonable understanding of launch flags, a high tolerance for frustration and a decent chunk of your valuable time? Well have I got the blog post for you! The most complicated part of this process is getting blurring working. There are plenty of options out there (such as OBS Background Removal - if you fancy compiling your own OBS plugin), but none were quite as easy to use as I'd like. For a while, I used the very useful VDO.ninja , which lets you broadcast your camera from one browser instance to any other browser, applying filters such as background removal on the way. By opening the 'viewing' URL using OBS' embedded browser source, you can pull together multiple video streams, ideal for composing group calls inside OBS. Handily, you can also use this for a single stream, but it does mean leaving the 'server' browser tab open and active. Taking heavy inspiration from VDO.ninja, I scraped together a single-page application, Lazy Webcam Blur , which uses some of the same methods of blurring ( BodyPix + TensorflowJS ) as VDO.ninja, but doesn't do anything else - no UI and no interactions needed. This page can be added as a 'Browser' source in OBS to bring in a pre-blurred webcam video stream. Unfortunately, out of the box, this probably won't work. The embedded browser is waiting for you to confirm permissions to use your webcam, but there's no UI to let you do that. To solve this, we're going to need to add the flag when launching OBS.
How this happens will depend on your distribution/how you've installed OBS, but you have two main options:

- Launch OBS manually with the flag appended.
- Update/make a copy of the file used to launch OBS and append the flag to the value.

With OBS relaunched, add the blurred page as a source:

1. In OBS, under Sources, click the '+' and select 'Browser'. <Image src={import("./obs_add_source.png")} alt="Adding a browser source" />
2. Give your source a useful name and click OK.
3. Enter the in the URL and set your target resolution - I use 1280x720. <Image src={import("./obs_browser_source.png")} alt="Browser source settings" />
4. After adding the source, stretch it to fill the scene.

Now your slightly-nicer-looking camera is added as a source in OBS, we can let other applications use it using OBS' built-in 'Virtual Camera'. As long as you're using a recent release of OBS, this should be a click of a button:

<Image src={import("./obs_start_virtual_camera.png")} alt="Start Virtual Camera button" />

You'll then be able to select "OBS Virtual Camera" as a webcam in your video conferencing tool-of-choice to get whatever OBS is outputting.

OBS gives us tools to do all sorts of post-processing on sources, but the one I've found most useful for improving webcam video is the "Apply LUT" filter. A LUT is a colour-mapping preset generally used for colour correction - but in this case we're going to take the non-scientific approach of 'just making it look nice'.

1. Grab some LUT files - I use this pack from OBS user Jordan Wages.
2. Right click your browser source in OBS and select 'Filters'.
3. Under 'Effect Filters', click the '+' symbol and select 'Apply LUT'.
4. Browse to where you have downloaded/extracted your LUT files and experiment with them to find the one you like the best. I like from Jordan's pack.

Success! A slightly improved webcam image with far too much effort, relying on an entire browser engine running in the background. Yes, it's not perfect - I'd love to see simpler tools for this process pop up in the future - do give me a shout if you come across anything!

usher.dev 4 years ago

July 2021 Braindump

🪶 Wagtail , the open-source CMS on which I've built my livelihood, recently started a performance working group . Already some great ideas as to how we can improve the CMS in the long-term - very much looking forward to seeing what comes out of this.

🎲 Attended the UK Games Expo 2021 show. It's been great to work with the team here building their new site and infrastructure over the past couple of years. Even with the major setback of having to build a 'Virtual' version of the Expo in 2020, I feel like we've made some big improvements. It was great to see this all come together at the show. While definitely scaled back from the past few years, there's still something about being amongst other board gamers that reminds you why the hobby is great.

🎮 There was a lot of buzz around Final Fantasy XIV (the MMO) recently, initially after the poor reception to the latest WoW patches, and then after the awful stuff that came out of Activision/Blizzard. Despite not having committed to an MMO in many years (only dropping in to the WoW expansions when they release), I still have a soft spot for the genre, so I thought I'd give it a go. Pretty pleased I did right now - very much enjoying the slower pace, the story elements and just exploring a different world.

🏡 Had the exciting experience of hiring a digger and going to town on our garden with it. Not entirely for entertainment purposes (although my son may have thought otherwise). We're replacing our slowly rotting raised vegetable beds with new, fresh, sleeper-based ones.

{% image "src/posts/2021-08-06-july-braindump/digger.jpg", "Building raised beds" %}

Despite the painful cost of wood at the moment, I think these will be a huge improvement. Always surprising how much you can learn from a "simple" project like this:

- Designing a wood structure in Fusion 360 (I like to be able to visualise these things).
- How to drive a digger, and how not to tip it over when working on slopes.
- You can buy really long timber screws.

usher.dev 4 years ago

May 2021 Braindump

🎮 Been enjoying Disco Elysium quite a lot - a really engaging world, story and characters - although it does take the right frame of mind to pick up. Satisfactory has been this month's mindless 'go-to'.

📺 Found Wintergatan , who is building a music-playing marble machine. A fascinating, constantly evolving piece of engineering.

⌨️ I finally built my first mechanical keyboard. A GMMK Tenkeyless with Gateron yellows, if you care. It's nice, but it's definitely taking some getting used to.

{% image "src/posts/2021-05-18-may-braindump/keyboard.jpg", "The fancy new keyboard" %}

🎲 Boardgaming has sadly taken a backseat recently (I suppose there's a good reason), but I was excited to receive my copy of Oath: Chronicles of Empire and Exile - the new release from Cole Wehrle, the designer of Root (one of my favourites in recent years). Oath promises to chronicle the history of its world over repeated plays, each game causing consequences for future games. Looking forward to diving in.

usher.dev 4 years ago

Exposing Home Assistant using Cloudflare Tunnel

Nothing on my home network can be reached from the outside world without a VPN. Unfortunately, that presents a few issues with Home Assistant:

- Accessing the Home Assistant UI from out-and-about is a pain.
- The Home Assistant app can't report useful information such as location data unless the device is connected to the VPN.
- There are a number of integrations which use webhooks or similar to communicate data to your HA instance.

So far, I've been living with these problems. Exposing my entire HA instance to the world isn't something I'm comfortable with. If you're not comfortable with your networking and security knowledge, stop here and go ahead and subscribe to Home Assistant Cloud. It's very good and a great way to support Home Assistant. If you're interested in managing a solution for this yourself, read on.

Cloudflare's 'Argo Tunnel' product has been around for a while, providing a tool to create a secure tunnel from any network into the Cloudflare network, but they've recently rebranded it to Cloudflare Tunnel and made it free to everyone . Here's how I set it up to expose my Home Assistant instance.

1. On your home server, use the utility to log in to Cloudflare and download a certificate. I use the docker container, so to do this: Create a folder for your configuration to live, I use on the host. Run: replacing with a user/group ID that has access to read and write from your directory. This will provide you with a link to follow to authorise with Cloudflare and to choose a domain to authorise. Once that's done, will download the generated certificate and place it in your mounted volume at .
2. Create your tunnel: This will create a new tunnel named and drop a config file for it in your configuration directory.
3. Create a configuration file to route your tunnel to your Home Assistant instance. In : replacing the ID and with a reference to the config file you got from the previous step, and replacing the with the URL for your Home Assistant instance.
4. In the Cloudflare DNS panel, add a new record from the subdomain you want your instance to be accessible at, to - where the ID in the target is the same as the tunnel ID you created previously.

<Image src={import("./cloudflare-tunnel-ui.png")} alt="Cloudflare DNS UI" />

5. You'll need some way to start your tunnel and keep it running - I'm doing this using , with a that looks a bit like:
6. Run to bring up the tunnel.

The first thing we need to do is give Cloudflare a way to authenticate you so we can make sure access is restricted.

1. Head over to the Cloudflare Teams Dashboard to start configuring access to your tunnel.
2. Start at Configuration -> Authentication.
3. Click '+ Add' next to Login methods to add your first login method. The easiest to get started with here is 'One-time PIN', so choose and enable that.

Next up, we need to configure the tunnel to use this login provider:

1. Go to Access -> Applications on the left.
2. Click 'Add an application' and choose 'Self-hosted' from the options.
3. Give your application a name and provide the domain you set up previously.
4. In the next step, create a rule for 'Emails' which includes your email address:

<Image src={import("./cloudflare-rule-ui.png")} alt="Cloudflare Application Rule UI" />

5. Leave the 'setup' settings as they are and finalise setup.

Once this is done, you should be able to visit the domain you've set up, where you'll be prompted to follow the One-time PIN sign in process. If the entered email matches the one you provided in your rule, you'll have remote access to your Home Assistant instance!

Many Home Assistant integrations expose a webhook URL to allow external applications (and mobile apps) to update sensors. These applications won't be able to negotiate through the Cloudflare Access authentication process, so to work around this we'll add a bypass rule specifically for webhooks.

1. Create another application as above, but when prompted for the application domain, enter in the path:

<Image src={import("./cloudflare-webhook-path.png")} alt="Setting the webhook path in the Cloudflare UI" />

2. When setting rules, create a rule with the 'Rule action' set to 'Bypass' and an 'Include' rule set to 'Everyone'. This will allow anonymous users to bypass authentication.

<Image src={import("./cloudflare-rule-include-everyone.png")} alt="Setting a rule to bypass everyone in the Cloudflare UI" />

To set up your Home Assistant mobile app to route sensor data through the tunnel, you'll need to set up a separate URL for external and internal use. On Android, this is done by setting the Home Assistant URL setting to the external/tunnel URL, and the Internal Connection URL to the URL you use while connected to the networks listed in Home Network WiFi SSID :

<Image src={import("./android-ui.png")} alt="Setting up an external URL in the Home Assistant android app" />

I'm still experimenting with this so this solution isn't entirely complete. It works to help limit the exposure of your Home Assistant instance, but it isn't perfect:

- Some integrations don't use webhooks as a means to communicate with HA, so you may find you need to expose different URLs - this isn't typically well documented so you'll need to dive into the code to figure out what you need to configure.
- You're still exposing part of your Home Assistant instance to the world - if there's a vulnerability exploitable through the webhook endpoint, this won't help you.
- The dashboard in the Home Assistant app won't work with Cloudflare Access in front of it.

Home Assistant Cloud is better, easier, and super cheap. Go and sign up.
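For reference, the tunnel configuration file that routes traffic to Home Assistant generally looks something like the sketch below. The tunnel ID, credentials path, hostname and Home Assistant address here are all placeholders - substitute your own:

```yaml
# config.yml - sketch of a cloudflared tunnel configuration.
# Tunnel ID, credentials path, hostname and HA address are placeholders.
tunnel: 6ff42ae2-765d-4adf-8112-31c55c1551ef
credentials-file: /etc/cloudflared/6ff42ae2-765d-4adf-8112-31c55c1551ef.json

ingress:
  # Route your chosen subdomain to the Home Assistant instance on your LAN.
  - hostname: ha.example.com
    service: http://192.168.1.10:8123
  # Required catch-all rule: anything else gets a 404.
  - service: http_status:404
```

The ingress rules are matched top to bottom, and cloudflared refuses to start without the final catch-all rule.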

usher.dev 4 years ago

April 2021 Braindump

🎮 Finished NieR: Automata - when this first came out I was intrigued but not convinced I'd be able to put the 40-odd hours in to get through it. When it appeared on Game Pass I thought I'd give it a chance, and I'm very glad I did. An excellent combination of thought-provoking story, fun characters, engaging combat and fantastic music, sprinkled with a few tropey JRPG bits and a slight lack of respect for the player's time.

🎲 We've been playing quite a bit of MicroMacro: Crime City , a game that presents you with a large city map and a set of crimes you need to solve by spotting clues sprinkled throughout the city. A lovely unique concept that I'm excited to see more of.

📺 Discovered That Van Jolene via GeoWizard . A well-presented and entertaining YouTube series on a couple's adventures in converting a van into a campervan. Now I'm tempted to buy a van.

📖 Currently reading Skunk Works: A Personal Memoir of My Years at Lockheed . I'm not usually interested in aerospace or the military, but this book's story of the engineering effort that goes into building one of these machines is amazing. There's a few interesting lessons on security here too!

🏠 I bought a Coral Edge TPU and set it up with Frigate on our home server to do real-time object detection on my cameras. I've done a lot of experimenting with various self-hosted camera monitoring tools ( MotionEye , Shinobi , ZoneMinder , Blue Iris ) - while they all have their advantages, they all fall short on motion/object detection. Offloading this to an Edge TPU with Frigate is working out great.

🌻 Our greenhouse has been in a state of disrepair for a while - so we finally got around to replacing all the broken glass panels with corrugated PVC sheets (not quite as pretty, but way more rugged for our windy garden). We also installed fancy automatic window openers to control the temperature in there. These are such clever, simple pieces of engineering made up of a piston filled with wax which expands and contracts based on the temperature. Magic.

💻 I discovered and switched to which replaces tools like , and with a single tool. Very happy with it so far.

usher.dev 4 years ago

GPT Board Game Titles

So here we are. When we were building out Virtually Expo (UK Games Expo 'gone virtual' for a pandemic-stricken world), we built a Discord bot that let visitors join one of UKGEs famed queues. One of the things this bot did was to give you a random, AI generated board game when you reached the 'front' of the queue. These were generated in advance by training a GPT-2 model (with the aid of Max Woolf's fantastic gpt-2-simple ) on the top 2,000 board games from BoardGameGeek . First up; if you're not familiar with the board game world - there's a lot of games named after places. From Carcassonne , through Orléans and Mombasa , to Bruges and Snowdonia . The model seemed to pick up on this trend and continued the theme with ideas such as: Then there's the expansion trend. Carcassonne has somewhere between 10 and 134 expansions, depending on how you're counting. If the publisher ever runs out of ideas, this model has them covered: along with positive vibes from the appropriately titled: There's the expansions that promise to change the game quite significantly: I am tickled by the idea of Zooloretto , a game in which you carefully plan a zoo filled with cute animals to attract visitors, turned on its head by the addition of the 'Run!' expansion. There's the philosophical: I checked, 'bluffling' isn't a word, but I can definitely agree that something big is indeed bluffling the Earth. There's the games I'd play in an instant: and the ones that are perhaps for a more niche audience: The 'creative use of an IP' award goes to: It seems to have a slightly confusing tendency to generate fish-themed games: of which, 'My Fish is Fish' raises questions I do not have answers for. Then there's the titles that cause genuine concern about the future of AI: Yep. We should probably stop doing this before it's too late. 
Finally, a few I liked:

- GPT-3 tries pickup lines
- Max Woolf's Buzzfeed YouTube video titles
- Monmouth Tycoon
- High Wycombe
- Choose Your Own Adventure: Argentina
- Turning Point: Swansea
- Crisis in Northumberland
- Carcassonne: Yard Edition
- Carcassonne: New York
- Carcassonne: Vale of the Fallen
- Dragons of War
- Carcassonne: Abyss – The Lost Chronicles
- Carcassonne: Asteroid Derby
- Carcassonne: The Cards are on the Rise
- Pandemic: Alive!
- Ticket to Ride: The Quest for the Galaxy
- Rallyman Gettysburg
- Animal Upon Animal: The Camelot Campaign
- Unlock! The Impacts of Crime
- Rory's Story Cubism
- Smash Up: First Person Shooter
- Power Grid: The Next Phase of Electronic Warfare
- Zooloretto: Run!
- Ticket to Ride: The Official Route
- Field Commander: Cthulhu
- Catan: All the Pretty Things
- Who Will Win?
- Modern Magic: The Gathering
- Where Did All the Money Go?
- If There Were No Dragon...
- Something Big Is Bluffling the Earth
- Mythic Tales: All the President's Cards Are Spell Cards!
- Long Story Short: You Can't Always Get What You Want
- Valley of the Uzis
- I Cast a Spell: The Card Game
- Accursed Gratuitous Sarcasm
- Einstein: Son of a crab and his business partner embark on a daring mission
- Battle Masters: The Fate of Meddle and Sump-Hole goats
- Cottage Cheese
- My Wife Is Alive!
- Quirky Fate: You Win!
- War of Entrails
- 10 Days, No Jumpers
- Pyromania Board Game
- Nuns on the Underground
- Rent a Bear In The Night
- Warhammer 40,000: The Futon War
- Bob Ross: Guitar Hero
- Dice Throne: Season Two – Star Wars
- Battle for Aragorn
- Hannibal: Family Fish
- Prince Piranha Detective
- My Fish is Fish
- Flying Carp
- Rolling Fishes
- Death to the Lobster
- EXPLOSION: A New World
- Our Souls Are Marketable
- Unconditional Surrender-era
- No Peace Without Fire
- The Road to Wales
- Zombie and Screamin' Zombies
- Explosive Substances
- Control 4: Harry the Accuser
- Capital Letters
- Get Off My Lawn!
- Unfairly Impeded
- Do Something Crazy
- Microsoft Word
- Fabio's Garage
- There's Something Shifting In Paris
- Escape Goat & Cat with Guest Cat
- Band-Aid Man
- Billionaire Bus Driver
- Bulk Spaghetti
- The Captain Is, But He's Bigger
- Gravy Lobby
- Whoopi Goldberg
- Tesla and the Other Robots
