Latest Posts (12 found)
codedge 3 weeks ago

Managing secrets with SOPS in your homelab

Sealed Secrets, Ansible Vault, 1Password or SOPS - there are multiple ways and places to store your secrets. I went with SOPS and age in my ArgoCD GitOps environment.

Managing secrets in your homelab, be it within a Kubernetes cluster or while deploying systems and tooling with Ansible, is a topic that arises with almost 100% certainty. In general, you need to decide whether you want secrets to be held and managed externally or internally:

- Externally managed: a self-hosted or externally hosted secrets solution such as AWS KMS, a password manager like 1Password, or similar
- Internally managed: solutions where your secrets live next to your code and no external service is needed

One important advantage I see in internally managed solutions is that I do not need an extra service: no extra costs and connections, and no chicken-and-egg problem from hosting your passwords inside your own Kubernetes cluster but not being able to reach them when the cluster is down. Therefore I went with SOPS for both the secrets for my Ansible scripts and the secrets I need to set in my K8s cluster. While SOPS can be used with PGP, GnuPG and more, I settled on age for encryption. With SOPS your secrets live, encrypted, inside your repository and can be en-/decrypted on the fly whenever needed. The private key should, of course, never be committed to your git repository or made available to untrusted sources.

First, we need to install SOPS and age and generate an age key. SOPS is available for all common operating systems via the package manager; I use either Mac or Arch Linux. Now we need to generate an age key and link it to SOPS as the default key to encrypt with. Our age key will live in . Now we tell SOPS where to find our age key; I put the next line in my . The last thing to do is to put a in the folder from which you want to encrypt your files. This file acts as a configuration for the age recipient (key) and how the data should be encrypted. My config file looks like this:

- Two separate rules, depending on the folder where the encrypted files are located
- Files ending with are targeted
- The age key that should be used for en-/decryption is specified

You might wonder about the first rule with . I will just quote the KSOPS docs here: "To make encrypted secrets more readable, we suggest using the following encryption regex to only encrypt data and stringData values. This leaves non-sensitive fields, like the secret's name, unencrypted and human readable." All the configuration options can be found in the SOPS docs.

Let's now look into the specifics of using our new setup with either Ansible or Kubernetes. Ansible can automatically process (decrypt) SOPS-encrypted files with the Community SOPS Collection. Additionally, in my I enabled this plugin ( see docs ). Now, taken from the official Ansible docs: after the plugin is enabled, correctly named group and host vars files will be transparently decrypted with SOPS. The files must end with one of these extensions: .sops.yaml, .sops.yml, .sops.json. That's it - you can now encrypt your group or host vars files and Ansible will automatically decrypt them.

SOPS can be used with Kubernetes via the KSOPS Kustomize plugin. The configuration is already prepared; we only need to apply KSOPS to our cluster. I use the following manifest - see more examples in my homelab repository.
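The key generation and configuration steps above can be sketched as follows. The recipient string in the `.sops.yaml` is a placeholder you must replace with your own public key; the key path is SOPS's default age key location, and the `age-keygen` call is guarded so the snippet also runs on machines without age installed:

```shell
# Sketch of the SOPS + age setup.
KEYDIR="${KEYDIR:-$HOME/.config/sops/age}"   # SOPS's default age key location
mkdir -p "$KEYDIR"
if command -v age-keygen >/dev/null 2>&1; then
  # Prints the public key ("recipient") you put into .sops.yaml
  [ -f "$KEYDIR/keys.txt" ] || age-keygen -o "$KEYDIR/keys.txt"
fi

# Tell SOPS where the private key lives (put this line in your shell rc)
export SOPS_AGE_KEY_FILE="$KEYDIR/keys.txt"

# Minimal .sops.yaml; encrypted_regex keeps everything except
# data/stringData human readable, as the KSOPS docs suggest.
cat > .sops.yaml <<'EOF'
creation_rules:
  - path_regex: .*\.sops\.ya?ml$
    encrypted_regex: ^(data|stringData)$
    age: age1replace-me-with-your-public-key
EOF
```

With this in place, `sops -e -i secret.sops.yaml` encrypts only the `data`/`stringData` values of a Kubernetes Secret manifest, leaving the rest readable.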

codedge 1 month ago

Random wallpaper with swaybg

Setting a wallpaper in Sway, with swaybg, is easy. Unfortunately, there is no way to set a random wallpaper automatically out of the box. Here is a little helper script to do that. The script is based on a post by Sylvain Durand 1 with some slight modifications. I just linked the script in my sway config instead of setting a background there. Sway config : The script spawns a new instance, changes the wallpaper, and kills the old instance. With this approach there is no flickering of the background when changing. An always up-to-date version can be found in my dotfiles . Original script by Sylvain Durand: https://sylvaindurand.org/dynamic-wallpapers-with-sway/   ↩︎
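The core idea of the script can be sketched like this. The wallpaper directory is an assumption, and the swaybg handling is guarded so the sketch is safe to run outside a Sway session:

```shell
#!/bin/sh
# Pick a random image, start a new swaybg for it, then kill the old
# instance -- starting new-before-old is what avoids the flicker.
WALLDIR="${WALLDIR:-$HOME/Pictures/wallpapers}"   # assumed location
WALL=$(find "$WALLDIR" -type f \( -name '*.jpg' -o -name '*.png' \) 2>/dev/null | shuf -n 1)

if [ -n "$WALL" ] && command -v swaybg >/dev/null 2>&1; then
  OLD_PIDS=$(pgrep -x swaybg)
  swaybg -i "$WALL" -m fill &          # spawn the new instance first
  sleep 1                              # give it time to draw
  if [ -n "$OLD_PIDS" ]; then
    kill $OLD_PIDS                     # then remove the old one
  fi
fi
echo "picked: ${WALL:-none}"
```

Bound to a Sway keybinding or a timer, each invocation swaps in a fresh random background without a visible gap.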

codedge 2 months ago

Modern messaging: Running your own XMPP server

For years we have known, or might have suspected, that our chats are listened in on, our uploaded files are sold for advertising or whatever other purpose, and that the chance our social messengers leak our private data is incredibly high. It is about time to work against this. For three years the European Commission has been working on a plan to automatically monitor all chat, email and messenger conversations. 1 2 If this passes, and I strongly hope it will not, the European Union is moving in a direction we know from states suppressing freedom of speech. I went for setting up my own XMPP server, as it does not have any big resource requirements and still supports clustering (for high-availability purposes), encryption via OMEMO and file sharing, and runs on many platforms and operating systems. The ecosystem of clients has also evolved over the years to provide rock-solid software and solutions for multi-user chats or even audio and video calls. All steps and settings are bundled in a repository containing Ansible roles: https://codeberg.org/codedge/chat All code snippets written below work on either Debian or Raspberry Pi OS. The connection from your client to the XMPP server is encrypted, so we need certificates for our server. The first thing to do is set up our domains and point them to the IP - both IPv4 and IPv6 are supported and we can specify both later in our configuration. I assume the server is going to run under and that all the following domains have been set up. Fill in the IPv6 addresses accordingly. ejabberd is robust server software that is included in most Linux distributions. Install from the ProcessOne repository: I discovered that ProcessOne, the company behind ejabberd , also provides a Debian repository . Install from Github: to get the most recent version, I use the packages offered in their code repository .
To install version 25.07, just download the asset from the release. Make sure the following ports are opened in your firewall, taken from the ejabberd firewall settings . Port , used for MQTT, is also mentioned in the ejabberd docs, but we do not use it in our setup, so this port stays closed. Depending on how you installed ejabberd, the config file is either at or . The configuration is a 70:30 balance between having a privacy-focused setup for your users and meeting most of the suggestions of the XMPP compliance test . That means settings that protect the privacy of the users are rated higher, even where that means not passing parts of the test. The notable privacy and security settings are listed below. The configuration file is in YAML format - keep an eye on indentation. Let's start digging into the configuration. Set the domain of your server. Set the database type: instead of using the default type, we opt for , better said . Generate DH params: generate a fresh set of parameters for the DH key exchange. In your terminal run and link the new file in the ejabberd configuration. Ensure TLS for server-to-server connections: use TLS for server-to-server (s2s) connections. The listeners: the listeners inside the config, especially for , , and , are important. All of them listen on port . Only one request handler is attached to port , the . For administration of ejabberd we need a user with admin rights and properly set up ACLs and access rules. There is a separate section for ACLs inside the config in which we set up an admin user named . The name of the user is important for later, when we actually create this user. The should already be set up; just confirm that you have a correct entry for the action. Now the new user needs to be created by running this command on the console. Watch out to put in the correct domain. Another user can be registered with the same command. We set as the admin user in the config previously - that is how ejabberd knows which user has admin permissions.
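As a sketch, the ACL section granting admin rights might look like the following; the user name and domain are placeholders for your own values:

```yaml
acl:
  admin:
    user:
      - "admin@chat.example.org"   # placeholder JID, use your own admin user
access_rules:
  configure:
    allow: admin
```

After creating that user on the console, any session logged in with this JID gets admin permissions in the web admin.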
Enabling file uploads is done with . First, create a folder where the uploads should be stored, then update the ejabberd configuration like this: The allowed file upload size is defined in the param and is set to 10MB. Make sure to delete uploaded files after a reasonable amount of time via a cronjob. This is an example of a cronjob that deletes files older than 1 week. Registration in ejabberd is done via and can be enabled with these entries in the config file: If you want to enable registration for your server, make sure you enable a captcha for it; otherwise you will get a lot of spam and fake registrations. ejabberd provides a working captcha script that you can copy to your server and link in your configuration. You will need and installed on your system. In the config file, ejabberd can provision TLS certificates on its own - no need to install certbot . To avoid exposing ejabberd directly to the internet, is put in front of the XMPP server. Instead of nginx , any other web server (caddy, …) or proxy can be used as well. Here is a sample config for nginx : The nginx vhost offers two files, and , for indicating which other connection methods (BOSH, WS) your server offers. The details can be read in the XEP-0156 extension. In contrast to the examples in the XEP, our server offers no BOSH, only a websocket connection, so the BOSH part is removed from the config file. host-meta.json Put that file in a folder your nginx serves. Have a look at the path and URL where it is expected to be, see . Clients I can recommend are Profanity , an easy-to-use command-line client, and Monal for macOS and iOS. A good overview of clients can be found on the official XMPP website .
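The cleanup job could be sketched like this; the upload directory path is an assumption, so point it at wherever your upload module actually stores files:

```shell
# Delete uploaded files older than 7 days. UPLOAD_DIR is an assumed
# path -- adjust it to your ejabberd upload folder.
UPLOAD_DIR="${UPLOAD_DIR:-/var/lib/ejabberd/upload}"
find "$UPLOAD_DIR" -type f -mtime +7 -delete 2>/dev/null || true

# Corresponding crontab entry (runs nightly at 03:00):
# 0 3 * * * find /var/lib/ejabberd/upload -type f -mtime +7 -delete
```

`-mtime +7` matches files whose last modification is more than seven 24-hour periods ago, so recent uploads survive.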
- 5222 : Jabber/XMPP client connections, plain or STARTTLS
- 5223 : Jabber client connections, using the old SSL method
- 5269 : Jabber/XMPP incoming server connections
- 5280/5443 : HTTP/HTTPS for Web Admin and many more
- 7777 : SOCKS5 file transfer proxy
- 3478/5349 : STUN+TURN/STUNS+TURNS service

- XMPP over HTTP is disabled ( mod_bosh )
- Discovering when a user last accessed the server is disabled ( mod_last )
- Uploaded files are deleted on a regular basis (see upload config )
- Registering an account via a web page is disabled ( mod_register_web )
- In-band registration can be enabled, default off, captcha secured ( mod_register , see registration config )

Citizen-led initiative collecting information about Chat Control https://fightchatcontrol.eu   ↩︎ Explanation by Patrick Breyer, former member of the European Parliament https://www.patrick-breyer.de/en/posts/chat-control/   ↩︎

codedge 3 months ago

Proton Authenticator: Don't you want diversification?

When Proton released its new authenticator in July 2025, I had mixed feelings about it. A new authenticator app, even though there is a bunch of viable, secure, privacy-respecting and well-maintained alternatives . When did you, as an ordinary user, last trust all your secrets - mail, VPN, passwords, 2FA and so on - to one company, and what happened next? Just upfront: I do not have anything personal or technical against Proton Authenticator. It is way newer and better maintained than the old Twilio Authy some people are still using. But there is something more to this. There is an English saying: Don't put all your eggs in one basket . That applies to investments, aka diversification , as well as to relying solely on one company with your communication and secrets. The last time most of us trusted one company with all our digital life belongings, it was probably Google. We had mail there, our calendars, we shared private stuff on Google+ - Google+ , can you remember? - then there is Google Authenticator and Search, Youtube, etc. Effectively, your digital life was on Google. You created a vendor lock-in yourself. And now we do the vendor lock-in again and think it will be different? There might be people saying that this time things are different: Proton is based in Europe. Argh, no, they are not. Proton is based in Switzerland, and that is not inside the European Union. Furthermore, everything is encrypted and Proton knows nothing about what is inside your communication, people say. Well, in 2021 it turned out Proton's no-logging promise had its limits 1 , and IMHO their CEO has issues with staying politically neutral 2 . You trust your digital guts to Proton - I mean, using a VPN for privacy reasons, or encrypted email, or a password manager - and then they hand over logs, metadata and more. Not the nicest move. Let us just keep it at that. Wouldn't you be better off diversifying your tools across different providers, different countries and jurisdictions, and maybe also self-hosting some of them?
You do not want to create a SPOF (single point of failure) for your communication and secure data, so do not do that by using just one provider for everything. French climate activist was arrested after Swiss authorities obtained Proton logs https://arstechnica.com/information-technology/2021/09/privacy-focused-protonmail-provided-a-users-ip-address-to-authorities/   ↩︎ Proton Mail Faces Backlash Over Claims of Political Neutrality Amid CEO's Praise for Republican Party https://techstory.in/proton-mail-faces-backlash-over-claims-of-political-neutrality-amid-ceos-praise-for-republican-party/   ↩︎

codedge 5 months ago

Hugo with Codeberg and BunnyCDN

While enshittification is rolling across a lot of US-based services, let's try to host our static Hugo page with EU-based services only. Domain, deployment, CDN: we're going back to the roots - or into a more modern tech era for European internet services. Disclaimer : I never had technical problems with Github or Cloudflare. The switch to providers based inside the European Union is a reaction to a bunch of actions by US-based internet companies, or the US government in general 1 . I do not want to support them anymore. The services I recommend here are tested and trusted by me. Links to these services use the affiliate program of the particular service to support me. I moved my website from Github + Cloudflare Pages to a new setup using Codeberg and Bunny.net CDN. During the last months I had already read about tools and providers others are using or recommending, and I have settled on three new providers: Domain Offensive (domains), Codeberg (code hosting) and Bunny.net (CDN). I just need a workflow that consists of three steps: write content, deploy, enjoy the website. I can see that the following setup is going to last for a long time, as it provides a basic but extensible architecture with the things you would expect and probably already enjoy. I have been with Domain Offensive for several years and never had any issues. The domains work; their UI is rather functional than too much clickly-clicky. I like! You just register with them; payment can be done via credit card, PayPal, SEPA and more. They even give you a little credit in case something goes wrong with the selected payment method. In any case, you will keep your domains at all times and not get kicked immediately. Their support was always very friendly and helpful. Codeberg is a non-profit source code hoster from Germany. Compared to Github and others, there is no intention to monetize your code or knowledge 2 . The service runs on Forgejo, a rewritten fork of Gitea 3 .
They are still small compared to the big players Github and Gitlab, but already provide all the functionality most of us need to run a software project: code hosting, Codeberg Actions - which is almost the same as Github Actions - an issue tracker, packages, releases, user management and more. If you cannot find Actions in your repository, make sure you enable it first: in the repository settings, visit “Units” and check the “Enable Actions” checkbox. To deploy your code, create a file in the root folder of your project. This is a 1-to-1 copy of how you structure workflows in Github. Inside your you can use the following code. I created a 2-step deploy process: first build the files, second deploy to Bunny. When using the workflow from above, have a look at the following variables and adjust them to your needs. Important: and are two separate keys. The deploy key you can get in your push zone in the FTP & API Access tab. The general API key you need to copy from your Account Settings page. As you can see, we use duck.sh 4 for the deployment, which is done via FTP. duck.sh : I was not able to find a more lightweight CLI FTP tool to transfer multiple files. Probably some shell wizard could write a script utilizing curl - I'd be happy if you ping or mention me in the Fediverse if you find an alternative. At the end of the deploy step the pull zones are purged. This invalidates Bunny's cache so it fetches the new version we just pushed.

The most “work” needs to be done on Bunny's side. What we need are a push zone and a pull zone. Let's create our push zone by clicking Storage on the left side and Add Storage Zone . The name for the push zone can be freely chosen. The storage tier can probably be left at Standard ; only if you require response times between 1-5ms choose Edge (SSD) , which comes at almost 3 times the price of the Standard tier. Now set up the geo replication, check the pricing per GB, and you are done with the first part. I am currently at $0.025/GB with geo replication in Frankfurt, Singapore and New York. You can attach the pull zone by clicking Connect Pull Zone . Enter the name of the pull zone and, again, set the storage tier to Standard . Adjust the Pricing Zones to whatever suits your needs and wallet. When creating the pull zone, enter the domain of your site as the custom hostname. I entered , as this should be the domain for my page. You can also go with an ANAME record, like as an example for my site, but this is not recommended 5 . Set Force SSL to true and you are good to go.

Cost comparison is always a bit tricky. If we compare the costs of this setup with self-hosting your site on a VPS, most people would argue that the VPS can also be used for other things than just hosting a website. That is why I will just put the costs side by side - and in my opinion the price for this setup is very, very competitive. I used Hetzner for hosting and a general pricing of 1 EUR/month for the domain. Bunny charges a minimum of 1 EUR/month up to a certain level of traffic; have a look at their website's calculator to get a better indication. For me, the amount of traffic served stays within the 1 EUR/month. I hope you now enjoy your website being served from the EU :-)

Enshittification of the internet: and it continues , recent examples   ↩︎ What is Codeberg e.V., see their mission   ↩︎ Official Forgejo statement about soft and hard fork: https://forgejo.org/2024-02-forking-forward/   ↩︎ https://duck.sh/   ↩︎ How ANAME DNS records affect routing   ↩︎

- Domain Offensive : domain provider
- Codeberg : code hosting/development platform and deployment tool
- Bunny.net : CDN for hosting the website

The workflow: write content, deploy (run through various steps I can control and modify) and enjoy your website. What you get: insane speed, automatic TLS certificates and a deploy process you as the owner control mostly yourself.
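The two-step build-and-deploy workflow described above might look roughly like this as a Codeberg/Forgejo Actions file. The file path, runner label, secret names and zone name are assumptions for illustration, not the exact workflow from the repository:

```yaml
# .forgejo/workflows/deploy.yaml -- sketch; adjust names to your setup
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: docker            # runner label depends on your Codeberg runner
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: hugo --minify
      - name: Deploy via FTP (duck.sh)
        run: |
          duck --username my-storage-zone --password "$BUNNY_DEPLOY_KEY" \
               --upload ftp://storage.bunnycdn.com/ public/
        env:
          BUNNY_DEPLOY_KEY: ${{ secrets.BUNNY_DEPLOY_KEY }}
      - name: Purge pull zone
        run: |
          curl -X POST "https://api.bunny.net/pullzone/$PULL_ZONE_ID/purgeCache" \
               -H "AccessKey: $BUNNY_API_KEY"
        env:
          PULL_ZONE_ID: ${{ secrets.BUNNY_PULL_ZONE_ID }}
          BUNNY_API_KEY: ${{ secrets.BUNNY_API_KEY }}
```

Note how the two keys show up in different steps: the deploy (storage zone) key authenticates the FTP upload, while the general API key authorizes the cache purge.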
- base uri: set this to the URL your page should run at
- : set this to the ID of your pull zone at Bunny; we'll come to that in a moment
- : the deploy key, with which you deploy your files to Bunny
- : the general API key, which is used for purging the pull zone

- push zone : the primary region your content will be uploaded to. Select a place close to where you live, and also select at least one geo-replication location to have your content replicated.
- pull zone : one or more zones through which your content is served. I set this up with Frankfurt (EU), Singapore (Asia) and New York (North America); additionally, you can select places in Africa and South America.

codedge 6 months ago

Moving away from Authy

One year ago, Authy's desktop app was shut down. Their mobile app still works, but is effectively a dead end. It was about time to move to an alternative - and there are not plenty. 2FA/multifactor authentication has become a common thing over the last couple of years; even non-technical people understand its importance. While hardware keys such as the Yubikey (U2F, Universal Second Factor) are the better option 1 , apps that generate TOTP 2 codes are widely used. With this in mind, I thought there must be plenty of alternatives on the market. While looking around on Reddit and Google and checking out Privacy Guides 3 for recommendations, I stumbled upon the same recurring lack of features in the 2FA apps out there. Features I expect from the new 2FA app: The last point is funny, as Authy was closed source. I decided not to go this way again and to switch to a more transparent solution. If at this point you think developing your own TOTP solution would be possible - it is. There is a nicely written article by Hendrik Erz 4 on how to generate the codes. During the research process, I discovered that Authy codes cannot easily be transferred to a different application, meaning there is no export functionality. WTF?! Although I do not have hundreds of accounts, the migration is going to take some time, as all accounts have to be changed manually. But it is what it is - there is no way around it. I had a quick look at these apps and in the end went for ente auth . 2FAS does also have all the features I wanted to see in a multifactor app, but… they are not listed on Privacy Guides and they launched some crypto token 5 . Neither point inspired trust - especially not the second one. ente auth has all the features I expect from a 2FA solution, and the team has a long track record of providing stable, free and open source software. I have now been with ente auth for almost half a year and all I can say is: it just works.
Syncing has never been a problem, the apps on the individual platforms work flawlessly and get updates from time to time. All good, pretty satisfied. The new Proton Authenticator made big waves and surely adds a missing piece to Proton's whole security suite. The main thing that bugs me with Proton is that the more data, apps, you name it, you move into their sphere, the more you depend on them as a company. I have written up my concerns about this in a separate post .

Features I expected (see above):
- Desktop (macOS, Linux) and mobile app (iOS)
- Sync with multiple devices, end-to-end encrypted
- Must not require an internet connection
- Open source would be a big plus
- is mandatory

Proton Authenticator

Universal Second Factor (U2F) - Advantages and disadvantages  ↩︎ Time-based one-time password (TOTP)   ↩︎ Privacy Guides   ↩︎ Implement TOTP yourself   ↩︎ Why does a two-factor authenticator app need an NFT? - https://nft.2fas.com/   ↩︎

codedge 2 years ago

Nuxt 3: Good to know

When switching my personal website to Nuxt 3, I found myself researching all the small things you need to tackle, besides the content, to make it fly: sitemap, robots.txt, proper page titles, including 3rd-party JS and so on. While a lot is written about the general setup of Nuxt, most blogs almost never talk about these small things. You can find all of these tips in the repository codedge/codedge.de of this site. Most websites use a similar page title pattern, for example: Let's create that. Prerequisites: In your you configure the central title content using the configuration: This adds the general part, in my case , to every title for each page. To add the specific title for a page, open one of your files in the directory and add to it. This builds a title like . You might want to load scripts, e.g. tracking scripts, depending on your environment. I wanted to include the Plausible tracking script only in the production environment. In your you can do this: Make sure you set your correctly according to your site. While including images in your repository might be a fast and straightforward solution, you might want to consider using an image service like Cloudinary or Imgix as a central place for your images. I chose Cloudinary, which is supported by the NuxtImage plugin. After setting things up you might want to use this in your Markdown files. Using or does unfortunately not work there; wrapping it inside a Markdown Component (MDC) helps. Create a file in and add With that in place you can use NuxtImage in your Markdown files by writing You want to serve a custom 404 page, properly styled, whenever a visitor hits an unknown sub-page of your site. Instead of customizing the file, as suggested by other people, I went for writing a custom component which only deals with the 404 error. Add a file and add the following: In addition to the content itself, we modify the page title and return a proper 404 HTTP status code.
I use this component in my file ( see here ). So whenever a request is matched by this catch-all route, the 404 page is delivered, as probably no other content was found. Currently, neither Nuxt 3 nor the Content module provides a way to generate a sitemap in the default installation. Luckily, this is a quick win to set up yourself. First, install the NPM package sitemap by running . Second, create a file with the content Don't forget to update the hostname in the code. You can now open your sitemap at .

Prerequisites (from above):
- You use the pages feature
- You have a central
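As a sketch, the central title configuration described above could look like this; the suffix string stands in for your own site name:

```ts
// nuxt.config.ts -- central titleTemplate; "%s" is replaced by the
// per-page title each page sets via useHead().
export default defineNuxtConfig({
  app: {
    head: {
      titleTemplate: '%s - codedge.de',
    },
  },
})
```

A page then calls `useHead({ title: 'Nuxt 3: Good to know' })` and ends up with the combined title "Nuxt 3: Good to know - codedge.de".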

codedge 4 years ago

Auto-generate a social media image in Statamic

When writing blog posts you likely want to share your post on Twitter or any other social network to give your posts an audience. Let's stay with Twitter for this post. While writing the post itself might be the main task - the title and content are clear - there is often an image used and pulled by social media. Instead of searching the web for meaningful backgrounds, why not just put the title of your post and the name of your blog on a single-color background? For creating this image we need to hook into Statamic's event system to generate the image after saving the blog post. To create a new listener that actually runs the image generation we run which creates a new listener at . Next, the listener needs to run when an entry is saved. For this the EntrySaved event comes in handy. So let's register our listener by putting in it . Now, whenever an entry is saved, our listener is called and can do its magic, generating a pixel image for Twitter. For creating the images I am going to use the Intervention Image library . Back in our listener file we need to load the Intervention Image library and generate the image. Here is the complete file: The requirement for Intervention is PHP's extension; alternatively you can use gd too. I restricted the generation to:

- posts in the collections
- only published items

I suspect there is more than one library to use for this use case, but I am pretty happy with Intervention. I never used it before, but it seems to be stable and does exactly what it should do. Imagick or GD? First I started with but it turned out I could not properly render text. When using gd you can only specify the built-in fonts 1 to 5, which wasn't my intention - I wanted to use my own font. With I was able to render text properly with a custom file.

codedge 4 years ago

Run Statamic on Vercel

When Tailwind CSS 2 was released, I decided to give my blog an update and also write about some more topics I had in mind. The blog ran on Hugo, hosted on Netlify. While I never had issues with Netlify, for some reason Hugo just wasn't made for me. It took me a fair amount of time to get Tailwind CSS 2 up and running there, then I found some issues with generating my content… all in all, not a refreshing experience when you just want to write some new blog posts. In the meantime I fell in love with Statamic, as I had used it in another freelance project, so I thought: why not give it a try for my own blog? There are some very good blog posts by Rias about running Statamic 2 and 3 on Netlify that I took as a base. When converting my blog from Hugo to Statamic and changing settings on Netlify, I ran into bigger problems getting all the PHP Composer packages installed on Netlify. The error gave me a bad day, and I eventually found others had issues with this as well ( Github issue ). While it seemed to be a problem with Composer 2, in my case I just wasn't able to get it fixed; even an update from Composer 2.0.7 to 2.0.8 did not help. Unfortunately, you cannot specify the version of Composer used when Netlify builds your blog. Vercel , the former ZEIT , is an equal opponent to Netlify for building JAMstack applications. There are already extensive comparisons between the two ( Google ), so I won't bother you with that. You need to use the Statamic SSG package to publish your site on Vercel. There is a separate section describing which steps to take, including a build script, and what settings you need. Just fantastic. Additionally, I found some more settings can be set using a ( see mine ) in your root project folder: When switching to Statamic I found a couple of things missing: Create a feed: create a new route in your under which your feed should be reachable. In my case the feed can be found under : Next you need to set up a template in which the feed is generated.
So I created a file at ( see on Github ). To let the SSG package of Statamic know that it needs to generate your feed route, specify this in in the section. Create a sitemap: for generating a sitemap I use Spatie's laravel-sitemap package. All you need to do is set up a command that is run when deploying your site. To build the sitemap on deployment, create a script in your and have run when deploying to Vercel. This should already be included in the build script from Statamic's SSG package. Vercel, by default, checks for inside the output folder of the application. So the easiest solution is to just create a in your folder and have this file copied to the final static folder by setting it in your config file. The bad thing about this is that you cannot use the already existing layout, as it just skips the whole Statamic magic. I hope the open issue in the SSG repository gets fixed more or less soon, so that Statamic automatically emits a .

Settings in the root config file:
- Setting the correct headers for your sitemap and feed
- Deny any iframe embedding of your content
- Set cache controls for your assets
- Disable the now bot comments on Github for your commits

Things I found missing when switching to Statamic:
- an atom feed
- a 404 page
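As an illustration, such a root config file for Vercel could cover those settings like this; the routes and header values are assumptions for a typical static Statamic site, not copied from the actual repository:

```json
{
  "github": { "silent": true },
  "headers": [
    {
      "source": "/feed/atom",
      "headers": [{ "key": "Content-Type", "value": "application/atom+xml" }]
    },
    {
      "source": "/(.*)",
      "headers": [
        { "key": "X-Frame-Options", "value": "DENY" },
        { "key": "Cache-Control", "value": "public, max-age=3600" }
      ]
    }
  ]
}
```

The `"github": { "silent": true }` entry is what stops the bot from commenting on every commit; the two header rules handle the feed content type, iframe denial and caching.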

codedge 5 years ago

Render images in Statamics Bard

The Bard fieldtype is a beautiful idea to create long texts containing images, code samples - basically any sort of content. While I was creating my blog I was not sure how to extract images from the Bard field. Thanks to the Glide tag you can simply pass the field of your image and it automatically outputs the proper URL. My image field is a set called image. In your Antlers template for the images just write For generating responsive images you can use the excellent Statamic Responsive Images addon provided by Spatie . With this the above snippet changes to that: This generates a tag with to render images for different breakpoints.
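A minimal Antlers sketch of both variants, assuming the Bard set exposes an asset field with the handle image (the width value is just an example):

```
<!-- Glide: resize the asset and output the generated URL -->
<img src="{{ glide:image width='800' }}" alt="">

<!-- Responsive Images addon: outputs an img tag with a srcset
     covering multiple widths/breakpoints -->
{{ responsive:image }}
```

The first line uses Statamic's built-in Glide tag shorthand; the second relies on the tag provided by the spatie/statamic-responsive-images addon.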

codedge 5 years ago

Create proper database dumps for Magento 2

When developing a Magento 2 shop you might need database dumps from the production system from time to time. Normally you would dump the complete database - which, to me, brings a lot of negative side effects:

- Size : Depending on the shop and the logging settings, the tables can grow up to multiple gigabytes.
- Sensitive (customer) data : Getting all the (hashed) passwords, addresses and names of customers does not feel like a good idea. Depending on the company's compliance guidelines you may not even be allowed to have them.
- Old/outdated data : Tables with logs or reports that are not necessary for your local development.

Instead of writing your own script to dump only the tables you need and exclude those you don't want, just use the fantastic n98-magerun2 . To dump a database without logs, sessions, trade data, admin users and index tables use: Check the extended documentation for further groups and how to create your own groups.
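With the strip groups this is a one-liner. A sketch of the dump command described above, using the table group names from the n98-magerun2 documentation (the output filename is a placeholder):

```
# Dump the DB while replacing log, session, trade/quote,
# admin user and index table contents with empty data.
n98-magerun2 db:dump --strip="@log @sessions @trade @admin @idx" dump.sql
```

Stripped tables are still created in the dump, just without their rows, so the schema stays importable locally.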

codedge 5 years ago

Run a PHP application on AWS Fargate

Following the trend of serverless, all that hype (or not?), I was looking through the AWS services offered and stumbled upon AWS Fargate , a service that lets you run containerized applications on either Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS) . For the tooling (development and deployment) of our PHP application I'd like to stick to probably the most widely adopted tools, like: Of course, you're not bound to this tooling. If you want to use another external database hosted somewhere else than on Amazon, feel free to do so. The same applies to logging. If you've got a working Graylog up and running, you're free to use that. These things are not mandatory for AWS Fargate. I just decided to use them for convenience reasons. If you want to know about the pricing of this setup by now, let's leave it with what advocates usually say: It depends! 😉 To get a general feeling, have a look at the costs section at the end of this article. A lot of the (further) costs depend on your actual usage. For this project/experiment I use Laravel 7. Of course, you can use any framework you'd like to run this application. Just make sure you adjust certain paths for building the Docker images. Grab the code : All files can be found in this Github repository . Feel free to open issues and PRs if you spot anything wrong. The final directory layout: Let's get started! To provide the complete application to AWS Fargate, we'll split it into three containers: For the database we don't create a separate container as we need a stateful solution. We'll rely on a multi-stage build as well as on Alpine-based images to keep the image sizes extremely small. You can find some comparisons and further reading on Alpine images in this blog post or on the Alpine Docker page. Here is an example just to demonstrate the basic usage for our multi-stage build. You can find the full Dockerfile in the repository .
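A trimmed multi-stage sketch of what such a Dockerfile can look like. Stage names, base image tags and paths are illustrative, not the exact file from the repository:

```dockerfile
# Stage 1: install PHP dependencies with Composer
FROM composer:2 AS vendor
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader --no-scripts

# Stage 2: small Alpine-based runtime image, only the files we need
FROM php:7.4-fpm-alpine AS app
WORKDIR /var/www/html
COPY . .
COPY --from=vendor /app/vendor ./vendor
```

Because only the second stage ends up in the final image, the Composer binary and build caches never reach production, which is what keeps the image small.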
Due to the fact that we only keep the files needed for the containers, the image sizes stay relatively small. For the php image, the size mainly depends on the modules needed, as we have to include the build and development files for creating the extensions. This might increase our image size. For my setup I ended up with these sizes: Although included in the project and supported by AWS Fargate, Docker Compose is not going to be used in our AWS deployment. For the orchestration of the containers we are going to use Amazon's ECS task definition, in which you can link containers. I will come to that a bit later when talking about the task definition. You can use the docker-compose file for local testing of the containers. In there you can see how we address the different stages of the Docker image - using , although we have only one Dockerfile. Through the item the containers are linked. Do not use links . Using links in your docker-compose.yml is considered deprecated and will be removed more or less soon. Either use or create a user-defined network . For local testing you just need to run . Afterwards you'll find the three images, linked, up and running. The push to the repository is part of our deployment pipeline, which is our next topic. For deployment we are going to use Github Actions to create the Docker images, push them to a Docker registry (ECR in our case), and create a deployment task for ECS to pick up the images and spin up a new service that runs our three containers on ECS. Most of you are already familiar with , so we are going to step right into our workflow file at . To make the workflow work you need to register secrets in Github for your AWS key, AWS secret key and AWS region. Furthermore you need to adjust the names (3 in total) according to the repositories you have created in AWS. For that, we quickly switch to our ECR console and create three repositories. Tag immutability has been supported since mid-2019 and prevents tags from being overwritten.
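Creating the three repositories can also be done from the CLI instead of the console. A sketch, with placeholder repository names:

```
# One ECR repository per image; scan-on-push flags vulnerabilities
# in pushed layers. Add --image-tag-mutability IMMUTABLE if you
# want to prevent tags from being overwritten.
for repo in myapp/nginx myapp/php-fpm myapp/nodejs; do
  aws ecr create-repository \
    --repository-name "$repo" \
    --image-scanning-configuration scanOnPush=true
done
```

The same settings can be toggled later with `aws ecr put-image-tag-mutability` and `put-image-scanning-configuration`.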
As we're tagging the images with and want this tag to reflect the latest changes, we set this to . Image scanning helps identify software vulnerabilities in your container images. We are going to use it here. You can certainly disable it, too, if you don't like it for whatever reason. After you have created the secrets in your Github repository, set up the repositories on AWS ECR and adjusted the names for the repositories in your workflow, let's quickly have a look at each of the workflow steps: In ECS, the basic unit of a deployment is a task, a logical construct that models one or more containers. This means that the ECS APIs operate on tasks rather than individual containers. In ECS, you can't run a container: rather, you run a task, which, in turn, runs your container(s). A task contains one or more containers. In our workflow the steps 7, 8 and 9 are responsible for adjusting the file. This file can be compared to the or any other orchestration file you use to connect your Docker containers. One special thing here is the following line appearing in each of steps 7, 8 and 9: What it does in steps 8 and 9 is take the former output and exchange the field inside. In step 7 it uses the as it is. This is needed to iteratively insert the image used from our Docker registry and finally push the complete task definition in step 10. Moving on to the task-definition.json file itself. There is a whole documentation section on this file on the AWS docs page. I'll continue with the parts that are relevant for us here. Let's go through the file, but starting at the end: All containers have set to . This means that if one container fails or stops for any reason, all other containers that are part of the task are stopped. Next comes all the configuration about AWS Fargate and all the connected services we are going to use. AWS Fargate cannot be configured directly, as it is more of an underlying technology to run serverless applications on Amazon AWS.
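The chaining of the render steps (7-9) into the deploy step (10) can be sketched with Amazon's official actions. Step ids, container names, and the service/cluster names are examples:

```yaml
- name: "Render task definition: nginx"
  id: render-nginx
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: task-definition.json
    container-name: nginx
    image: ${{ steps.build-nginx.outputs.image }}

- name: "Render task definition: php-fpm"
  id: render-php-fpm
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    # Take the OUTPUT of the previous render step so the image
    # replacements accumulate in one rendered file.
    task-definition: ${{ steps.render-nginx.outputs.task-definition }}
    container-name: php-fpm
    image: ${{ steps.build-php-fpm.outputs.image }}

- name: Deploy Amazon ECS task definition
  uses: aws-actions/amazon-ecs-deploy-task-definition@v1
  with:
    task-definition: ${{ steps.render-php-fpm.outputs.task-definition }}
    service: my-service
    cluster: my-cluster
```

Each render step only swaps the image of one named container, which is why the output of one step feeds the next.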
In the next chapters we are going to step into each part that needs configuration to get the whole Laravel application running. Talking about security on AWS could fill an entire series of posts. I'll keep this to a minimum that I personally think is a reasonable way to operate an application. Separate user and group for your application In your Identity and Access Management (IAM) create a new user, called , that is the one allowed to run all tasks around your application. Assign and manage permissions via a group, f. ex. . Make the user a member of this group and then assign permissions to this group instead of directly to the user. Roles for ECR and ECS By default the newly created user has no rights to execute any operation on ECR or ECS. In our case, this user needs to be able to push images to ECR and to run services on ECS to execute our task to spin up the containers. For that we need to attach two policies to our group: Surely we will need to tighten the permissions a bit later on. I'll get to that in a separate post. Execution role From the AWS docs: The Amazon ECS container agent, and the Fargate agent for your Fargate tasks, make calls to the Amazon ECS API on your behalf. The agent requires an IAM role for the service to know that the agent belongs to you. This IAM role is referred to as a task execution IAM role. So then, create a new role called and attach the following policies: The name of this role needs to match the value in our workflow. Head over to your AWS Management Console, open Services, type ECS and click on Elastic Container Service. In the menu on the left click on and hit the Create Cluster button. Launch type: Fargate Task definition: This is prepopulated with the name in our . Service name: This should match the service name from our workflow file. Number of tasks: Leave that at for now. We will not run more than one instance of this service.
Create service: Step 2 Cluster VPC : Make sure you select the correct Virtual Private Cloud (VPC) that was created together with your cluster. It should be selected automatically. Subnets : Select the subnets that are unselected, normally two. Load balancers : We will go with None for the moment and come back later to add a load balancer to our service. Service discovery : Disable service discovery as we won't use Amazon Route 53 for this project. If needed you can add this later on, of course. Auto-scaling : Skip that. We won't use it. Review all your settings and hit Create service . Managing secrets and/or environment variables on AWS can be done with either AWS Secrets Manager or AWS Systems Manager Parameter Store . I decided to go with the parameter store for one main reason: it is (almost) free of charge. For those who want to read more about the differences and the pros and cons of both solutions, have a look at this blog post comparing the two. Whether it is a configuration value like or an actual secret like the : both are going to be stored in the parameter store and injected into the container in our . Store your as instead of type . A SecureString parameter is any sensitive data that needs to be stored and referenced in a secure manner. If you have data that you don't want users to alter or reference in plain text, such as passwords or license keys, create those parameters using the SecureString datatype. Although I used the normal type, you can do better for the above-mentioned reasons. To quickly check if you can reach your site, navigate to your cluster and check the task that is currently running. There you will find your public IP address that directly points to port of your app. When you enter that page you should be welcomed by the Laravel landing page from our fresh install. Note : We will handle neither HTTPS nor Amazon Route 53 here. After adding the load balancer you can point your domain to the CNAME of the ELB.
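Circling back to the parameter store: putting a value there from the CLI looks like this (the parameter name and value are placeholders):

```
# Store the Laravel APP_KEY encrypted at rest; use --type String
# for non-sensitive configuration values instead.
aws ssm put-parameter \
  --name "/myapp/APP_KEY" \
  --type SecureString \
  --value "base64:..."
```

The parameter name (or its full ARN) is what you later reference from the task definition to inject the value into the container.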
The ELB is going to direct all traffic to port of our application, where our nginx is listening. A load balancer can be created in the EC2 service. On the left select Load Balancers and hit the Create Load Balancer button. Next select Application Load Balancer . In the first step make sure you . Additionally add the availability zones for your different subnets. Next would be step 2, which we are going to skip, as it is only about HTTPS. We create a new security group with only one rule that allows traffic coming from of type to reach our instance via on port . Here we create a new target group with target type . Our load balancer routes requests to the targets in this target group using the protocol and port that you specify. The Register Targets step can be skipped, and in step 6 please check all the settings again. If everything looks good, you can save the configuration of the ELB. Now when pushing changes to your Github repository, the deploy workflow starts, builds the images, pushes them to the ECR Docker registry and creates the task that is picked up by the service to run your application. So, what's the price you have to pay for this setup? As I mentioned earlier, this is hard to say, as it depends on usage, dimensioning and the region. Here is a nice overview of how costs vary between the different AWS regions. To make it short, the top 5: Use the AWS pricing calculator to get a proper pricing. If you're in the first phase of your project you are probably eligible for the AWS Free Usage Tier . This will surely give you a lot of room to play around and test. Here is a list of the usage for the services I created and played around with for this post. As you can see, the most important item for now is the space for our Docker registry on ECR. So keeping our images small basically saves us money. Just a quick rundown of improvements and further ideas.
Provide HTTPS access, refine the groups and policies in IAM to tighten access and strengthen security, use the AWS SDK for Laravel to make handling AWS in Laravel easier … and probably many many more things :-)

The tooling mentioned at the beginning:

- Deployment : Github including Github Actions
- Infrastructure : Elastic Container Service (ECS) to run our containers on, Docker containers hosted on Amazon Elastic Container Registry (ECR)
- Database : we'll be using Amazon RDS
- Logging : Amazon Cloudwatch

The workflow steps:

- Checkout : Checkout your repository from Github
- Configure AWS credentials : Configure AWS credential environment variables for use in other GitHub Actions
- Login to Amazon ECR : Log into Amazon ECR with the local Docker client
- Build, tag, push image: nginx : Build the Docker image for nginx and push it to Amazon ECR
- Build, tag, push image: php-fpm : Build the Docker image for php-fpm and push it to Amazon ECR
- Build, tag, push image: nodejs : Build the Docker image for nodejs and push it to Amazon ECR
- Render task definition: nginx : Renders the final repository URL including the name of the repository into the task-definition.json file. We'll come to that in the next topic.
- Render task definition: php-fpm : Renders the final repository URL including the name of the repository into the task-definition.json file.
- Render task definition: nodejs : Renders the final repository URL including the name of the repository into the task-definition.json file.
- Deploy Amazon ECS task definition : A task definition is required to run Docker containers in Amazon ECS. You define your containers, their hardware resources, inter-container connections as well as host connections, where to send logs to, and much more. See the next section about this topic. But first let's check two important settings, and . is the name you are going to choose in Amazon ECS. is the name for the service that picks up the task and deploys it into the cluster.
- Logout of Amazon ECR : Log out from Amazon ECR and erase any credentials connected with it

The relevant settings in the task-definition.json:

- requiresCompatibilities : This needs to be set to FARGATE, otherwise ECS won't recognize it properly.
- networkMode : This is set to awsvpc, so every task that is launched from this task definition gets its own elastic network interface (ENI) and a primary private IP address. That makes it possible to call services and applications as if they were in one system (not in distributed containers). Example for nginx calling php-fpm: If we orchestrated with Docker Compose we would normally call a container by its name, so the above statement would probably be .
- cpu : The CPU value can be expressed in CPU units or vCPUs in a task definition but is converted to an integer indicating the CPU units when the task definition is registered.
- memory : The memory value can be expressed in MiB or GB in a task definition but is converted to an integer indicating the MiB when the task definition is registered. Both values, and , can be defined for each container separately or for the complete task. In this sample application I defined the values for the complete task. This runs just fine. And remember, you can change (scale) this as you need. It is just a point to start from.
- executionRoleArn : This is connected to permissions on AWS. We leave this at the value . I'll come to this in the section about configuring AWS Fargate and its roles.
- family : This is the name for our task and can be freely chosen, so you can deploy multiple ones if you like and do balancing and stuff. It is just a common name for a set of containers, in our case three.
- containerDefinitions : Here comes the fun part… the containers.
I am going to summarize the important things you'll encounter inside the :

- : Opens port 80 to the host to be accessible from outside
- : Opens port 9000 only for inter-container communication, to be accessible from nginx
- Secrets and environment variables like keys, settings for debugging and the env to be used are properly set
- : nothing special

The two policies for our group:

- AmazonEC2ContainerRegistryFullAccess
- AmazonECS_FullAccess

The policies for the execution role:

- AmazonECSTaskExecutionRolePolicy
- AmazonSSMReadOnlyAccess : we are going to need that to read our environment variables from AWS Systems Manager Parameter Store

Pricing of the two secret stores:

- Parameter store : Free of charge, limit of 10,000 parameters per account
- Secrets manager : $0.40 per secret stored and an additional $0.05 per 10,000 API calls

For the load balancer, in the first step make sure you

- have a listener for HTTP on port
- have the correct Virtual Private Cloud (VPC) selected.

Top of the regions list by cost: N. Virginia
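Pulling those pieces together, a stripped-down containerDefinitions fragment showing the two port mappings and an SSM-backed secret. Names and the ARN are placeholders:

```json
"containerDefinitions": [
  {
    "name": "nginx",
    "essential": true,
    "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
  },
  {
    "name": "php-fpm",
    "essential": true,
    "portMappings": [{ "containerPort": 9000, "protocol": "tcp" }],
    "secrets": [
      {
        "name": "APP_KEY",
        "valueFrom": "arn:aws:ssm:eu-central-1:123456789012:parameter/myapp/APP_KEY"
      }
    ]
  }
]
```

ECS resolves each valueFrom at task start and injects the decrypted value as an environment variable, which is why the execution role needs SSM read access.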
