Posts in Docker (11 found)
Kix Panganiban 2 weeks ago

Cutting the cord on TV and movie subscriptions

In 2025, there's no longer a single subscription you can pay for to watch any new movie or TV show that comes out. Netflix, Disney, HBO, and even Apple now push you to pay a separate subscription just to watch that one new show everyone's talking about -- and I'm sick of it.

Thanks to a friend of mine, I recently got intrigued by the idea of seedboxing again. In a nutshell, instead of spending money on five different streaming services, you pay a single fee to have someone in an area with lax torrenting laws host a VPS for you -- where you can run a torrent client and a Plex server, download content, and stream it to your devices. I tried a few seedbox services, but the pricing didn't really work for me. And since I'm in the Philippines, many of them suffer from high latency, and even raw download speeds can be spotty.

So I put my work hat on and decided to try spinning up my own media server, and I chose this stack: https://github.com/Rick45/quick-arr-Stack

For people just getting into home media servers like myself, this stack can essentially be run with a single docker compose up, with a few modifications to the env values as necessary (see the sketch at the end of this post). (For Windows users running this on WSL like me, you'll need to switch any containers using host networking to bridge networking instead, and expose all ports one by one. Most of them only need one port, except for the Plex container, which needs several.) Once it's up, you get:

- Deluge -- the torrent client
- Plex Media Server -- this should be obvious unless you don't know what Plex is; it hosts all your downloaded content and lets you access it via the Plex apps or through a web browser
- Radarr -- a tool for searching for movies, presented in a much nicer interface than manually hunting for individual torrents
- Sonarr -- a tool for searching for TV shows, and as I understand it, a fork of Radarr
- Prowlarr -- converts search requests from Radarr and Sonarr into torrent downloads in Deluge
- Bazarr -- automatically downloads subtitles for your downloaded media

The quick-arr Stack repo has a much longer and more thorough explanation of each component, as well as how to configure them. Once it's all up and running, you have access to any TV show or movie you want, without paying ridiculous subscription fees to all those streaming apps!

A few caveats:

- Torrenting is illegal. That should be obvious. Check your local laws to make sure you're not breaking any. The stack includes an optional VPN client, which you can use if you want to be less detectable.
- You'll need to configure the right torrent trackers in Prowlarr. Some are great for movies, some for TV shows, and there are different ones for anime. There doesn't seem to be a single tracker that does it all. Even then, some trackers might not work -- for example, 1337x's Cloudflare firewall blocks Prowlarr. Not all movies and TV shows will be easy to find, so if you're looking for obscure media, you might need to go with a Usenet indexer.
- This setup requires a pretty stable internet connection (with headroom for both your torrenting and your regular use) and tons of storage. Depending on how much media you're downloading, you'll probably need to delete watched series consistently or use extremely large drives.
- Diagnosing issues (Prowlarr can't see Sonarr! Plex isn't updating! Downloads aren't appearing in Deluge!) requires some understanding of Docker containers, Linux, and a bit of command-line work. It's certainly not impossible, but it might be off-putting for beginners.
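For reference, bringing a stack like this up is mostly a clone-and-compose exercise. A minimal sketch, assuming the repo ships a Compose file and an env file to adjust (the file names and variable names here are assumptions, not taken from the repo):

```sh
# Sketch only: spin up the quick-arr stack with Docker Compose.
git clone https://github.com/Rick45/quick-arr-Stack.git
cd quick-arr-Stack

# Assumption: adjust media paths, PUID/PGID, and timezone in the env file first.
nano .env

docker compose up -d   # starts Deluge, Plex, Radarr, Sonarr, Prowlarr, Bazarr
docker compose ps      # verify the containers are up
```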

0 views

A secure & efficient Node/npm in Docker setup for frontend development

I’ve been searching for a secure and efficient Node in Docker setup for a little while now. And I think I’ve found it! A couple of years ago, I’d install the latest LTS version of Node/npm on my Windows machine and be done with it. In my day job, I could be working on one of 4 different front-end projects, and have my own front-end projects I’d work on in my spare time.

0 views
//pauls dev blog 6 months ago

How To Secure Your Docker Environment By Using a Docker Socket Proxy

In my previous article, I talked about Docker security tips that are often overlooked when working with Docker. One critical flaw was mounting the Docker Socket directly into containers, which can be a major vulnerability. I recommended using a Docker Socket Proxy, a simple but powerful solution. In this article, we will dive deeper into this topic: we will explore the Docker Socket Proxy, learn why exposing the socket directly to a container is a dangerous practice, and use the proxy to harden our container infrastructure. Additionally, we will make use of the newly deployed Docker Socket Proxy in a containerized application (Portainer) to communicate with the Docker API in a safe way. Let's make our containers smarter and safer.

In any Docker environment, the Docker Socket (/var/run/docker.sock) refers to a special file within the Linux OS that allows direct communication with the Docker daemon. The Docker daemon is the core process that manages every container on the host system. This means that every application (or container) that has access to the Docker Socket can fully manage containers (e.g., create, stop, delete, or modify them), which results in full control over the entire Docker environment. If the Docker Socket is mounted inside a Docker container (which many containers like Portainer want for convenience), we are giving that container full control over the Docker daemon. Now, if an attacker can gain access to our container, they can:

- Delete and modify every container in our Docker environment
- Create new, compromised Docker containers that run harmful software
- Escape from the container to take over the host

This also works if the Docker socket is mounted as read-only, because attackers can often find ways to execute commands through it.

In contrast to the plain Docker Socket, the Docker Socket Proxy acts as middleware between our containers and the Docker Socket, so the socket is no longer exposed directly to all our containers. With the Docker Socket Proxy, we can:

- Limit the actions a container can perform by granting only the permissions that are necessary
- Allow only specific containers to use the proxy instead of exposing the full socket to everyone
- Run a small, well-defined proxy container that is controlled and can be used to access the Docker API

Using a Docker Socket Proxy in our Docker environment introduces the following benefits:

- It minimizes risk by restricting which actions applications can perform.
- It prevents direct access to the Docker daemon, which reduces the attack surface in our environment.
- It allows controlled and secure automation while still maintaining security.

To set up our own Docker Socket Proxy, we will use the LinuxServer.io version of the Docker Socket Proxy to restrict API access. To set it up, we first need to create a Compose file for the proxy. In this Compose file, we use several environment variables that permit or allow the Docker Socket Proxy to do certain tasks. If we want to allow or permit other functionality, we can find the setting in this list and add it to our environment variables. Before starting the Docker Socket Proxy, we have to create an external network, because it is specified as one in the Compose file; we can do this with the docker network create command. Now that we have the network created, we can start the Docker Socket Proxy by bringing the Compose file up.

Assuming the Docker Socket Proxy is running, we can configure and deploy Portainer against the Docker Socket Proxy instead of letting it access the Docker Socket directly. To do this, we create a second Compose file for Portainer. In this Compose file, there are two very important settings that differ from a normal Portainer deployment with direct access to the Docker Socket: first, the command uses a TCP connection to the socket proxy container; second, the network that is used is the previously created external network. To start the Portainer container, we bring its Compose file up as well. After a few seconds, the command should finish and Portainer is started.
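The Compose files themselves aren't reproduced in this excerpt. As a rough sketch of the two deployments described above, condensed into one file for brevity -- the service names, network name, published port, and environment values are assumptions, not the author's exact configuration:

```yaml
# Sketch only: LinuxServer.io socket proxy plus Portainer talking to it over TCP.
services:
  socket-proxy:
    image: lscr.io/linuxserver/socket-proxy:latest
    environment:
      - CONTAINERS=1        # allow read access to the container endpoints
      - POST=0              # block state-changing requests (deny-by-default posture)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - socket-proxy

  portainer:
    image: portainer/portainer-ce:latest
    # Key difference: talk to the proxy over TCP instead of mounting the socket.
    command: -H tcp://socket-proxy:2375
    ports:
      - "9000:9000"
    networks:
      - socket-proxy

networks:
  socket-proxy:
    external: true   # create it first: docker network create socket-proxy
```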
To access Portainer, we can switch to our browser and open the URL we published for it. In this very short tutorial, we learned how to set up the Docker Socket Proxy container, use it within a Portainer instance to securely access the Docker API, and allow only minimal exposure. This should add an extra layer of trust and security. But why is this secure?

✅ It prevents full Docker socket exposure. This restricts API access, so Portainer (or other containers) only gets the permissions it needs.
✅ It reduces the attack surface, because no direct socket access is possible and sensitive API actions are blocked.
✅ It works in Swarm mode, because it uses an overlay network for secure communication between containers.

So, in conclusion, we can say that using a Docker Socket Proxy is a much safer alternative to the normal approach of exposing the Docker Socket directly. Additionally, it is very easy to do, and we should take the extra step to secure our environment. Let's all make our environments more secure 😊🚀!

I hope you find this tutorial helpful and can and will use the Docker Socket Proxy in your Docker environment. Also, I would love to hear your ideas and thoughts about the Docker Socket Proxy. If you can provide more information or have other important settings to share, don't hesitate to comment here. If you have any questions, please write them down below. I will try to answer them if possible. Feel free to connect with me on Medium, LinkedIn, Twitter, BlueSky, and GitHub.

0 views
//pauls dev blog 7 months ago

How To Self-Host FreshRSS Reader Using Docker

Have you ever thought about self-hosting your own RSS reader? Yes? That's good -- you're in the right place. In this guide, we will walk through everything you need to know to self-host your own instance of FreshRSS, a self-hosted, lightweight, easy-to-work-with, powerful, and customizable RSS reader.

RSS stands for Really Simple Syndication and is a tool that helps us aggregate and read content from multiple websites in one place, updated whenever new posts are published on any of the websites added as a feed. RSS readers can be used to stay updated and get the latest posts from blogs (also mine), news sites, or podcasts without visiting each site and without an algorithm filtering those sites "for our interests"; the news and articles are simply ordered chronologically. This can save time because all the content we want is available in one place, and feeds are often usable offline, so you can read without an active internet connection. Another important feature of an RSS reader is privacy, because the reader pulls the content directly from the website, which avoids tracking and ads. Unfortunately, not every website publishes an RSS feed containing the complete article, but for those that do, we can avoid being tracked.

Nowadays, many different kinds of RSS readers exist, but most of them are hosted by external providers:

- NewsBlur: NewsBlur is a "free" RSS reader that can be used on the web, iPad/iPhone, and Android and is able to follow 64 feeds. Like many others, it has a premium account that adds more "features".
- InnoReader: InnoReader is another "free" RSS reader that can be used to follow websites and read articles on their website. If you have an active subscription, you can also use their iOS or Android apps. With InnoReader you can follow 150 feeds.
- Feedly: Feedly is also a "free" RSS reader which offers a free plan that can be used on the web and iOS/Android and allows you to follow up to 100 feeds and organize them into 3 folders.

As usual with externally hosted services, much of the main content is locked behind a paywall. Because of that, I decided to host my very own RSS reader. My choice was FreshRSS because it is lightweight, easy to work with, very powerful, and can be customized to my needs. Additionally, it is a multi-user application where I can invite my colleagues/friends or share lists for anonymous reading. Another cool feature is that there is an API for clients, including mobile apps! Internally, FreshRSS natively supports basic web scraping based on XPath and JSON documents for websites that do not publish any RSS (Atom) feed.

To follow every step in this tutorial and have a running FreshRSS service at the end, we need a running Docker or Docker Swarm environment with a configured load balancer like Traefik. If we want to use Docker in Swarm mode to host our services, we can follow this article to set it up. After using that article to set up our Docker Swarm, we should install and configure a Traefik load balancer (or something similar: NGINX, Caddy, etc.), which will grant our services Let's Encrypt SSL certificates and forward requests to our services within the Docker Swarm environment. To learn about this (and other important services), we can follow this tutorial. If we don't want to host our own Docker Swarm but still want to use Docker and Traefik to host FreshRSS, I have created a tutorial which shows how to set up a Traefik proxy that grants Let's Encrypt SSL certificates to services running in Docker containers.

Before installing FreshRSS, we have to create an environment file which will store every environment variable that is used (for our specific use case) in our Docker container. For more information about the settings in this file, see this GitHub reference. Now that we have everything properly set up, we can install FreshRSS on our server using either a simple deployment on one server or a Docker Swarm deployment. FreshRSS will be installed with Docker Compose. The Compose file contains the service name and all mandatory/important settings for Traefik so the service gets a unique URL and an SSL certificate.
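The Compose file itself isn't included in this excerpt. As a rough orientation for the Swarm deployment described next, a sketch might look like the following -- the image tag, env file name, network and volume names, and environment values are assumptions, and the line numbers discussed below refer to the author's original file, not to this sketch:

```yaml
# Sketch only -- not the author's original Compose file.
services:
  freshrss:
    image: freshrss/freshrss:edge   # assumed rolling-release tag
    logging:
      driver: json-file
      options:
        max-size: 10m     # rotate logs at 10 MB
        max-file: "3"     # keep at most three log files
    volumes:
      - freshrss_data:/var/www/FreshRSS/data
      - freshrss_extensions:/var/www/FreshRSS/extensions
    networks:
      - traefik-public
    env_file: freshrss.env
    environment:
      - TZ=Europe/Berlin
      - CRON_MIN=*/20     # feed refresh cronjob
    deploy:
      placement:
        constraints:
          - node.labels.freshrss == true   # only nodes labeled for FreshRSS
      labels:
        - traefik.enable=true
        - traefik.http.routers.freshrss.rule=Host(`rss.paulsblog.dev`)
        - traefik.http.routers.freshrss.tls.certresolver=letsencrypt
        - traefik.http.services.freshrss.loadbalancer.server.port=80

volumes:
  freshrss_data:
  freshrss_extensions:

networks:
  traefik-public:
    external: true
```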
To install FreshRSS within your Docker Swarm, you can paste the Swarm Compose file into your docker-compose.yml; its most important lines are explained below.

- Line 4: The rolling release, which has the same features as the official git branch, is used.
- Lines 5 - 8: Log rotation is set up for the container, with 10 megabytes as the maximum size of a log file and at most three log files.
- Lines 9 - 11: To persist all container data, two persistent volumes are used (data/extensions).
- Lines 12 - 13: The used network is set to my Traefik network.
- Lines 15 - 17: The service will only be deployed to a Docker Swarm node if the corresponding node label is true. This can be achieved by labeling the node before deploying the docker-compose.yml to the stack.
- Lines 18 - 29: A standard configuration for a service deployed on port 80 in a Docker Swarm with Traefik and Let's Encrypt certificates. In lines 22 and 25 a URL is registered for this service: rss.paulsblog.dev
- Line 30: Enables the environment file.
- Lines 31 - 34: Environment variables used by FreshRSS are configured, such as the timezone (line 32), the update cronjob (line 33), and the production version (line 34).
- Lines 36 - 46: Sets the environment variables which automatically pass arguments to the FreshRSS command line. These are only evaluated at the very first run. This means that if you make changes, you have to delete the FreshRSS service and the volume, and restart the service.
- Lines 47 - 54: The used volumes and networks are defined, because this has to be done in Compose files.

As we will deploy this service into our Docker Swarm using an environment file, we have to prepend the docker-compose config command, which generates a new Compose file with all variables from the environment file resolved. To keep the command simple, we pipe the output into the docker stack deploy command; this combined command deploys the service into our Docker Swarm.

If you do not have a running Docker Swarm, you can use a plain Docker Compose file instead. There are only two differences between it and the Docker Swarm Compose file. The first is in lines 5 - 7, where a hostname, a container name, and the restart policy are set. The other change is that the labels are moved out of the deploy keyword to a higher level within the Compose file, because deploy is only used in a Docker Swarm environment, while labels can also be used in a plain Docker environment. To start the container, simply bring the Compose file up.

After deploying our own instance of FreshRSS, we can switch to our favorite web browser, open https://rss.paulsblog.dev (or your own domain), and log in with the credentials specified within the environment file (admin:freshrss123456). We should then see the FreshRSS start page.

The first thing we have to do is configure the FreshRSS instance. To do this, we press the cogwheel in the top right corner to open the settings. Now we have to do two things:

- Change the password for our admin. This is done in Account -> Profile
- Adjust the theme we want to use for our instance. This is done in Configuration -> Display

On the main page, you can click the small + icon next to Subscription Management to open the subscription page. Before adding a feed, we should create a category (e.g. Blogs). Then we can add an RSS feed for our most visited/interesting blog (-> https://www.paulsblog.dev/rss ). Pressing Add will open the settings page of the RSS feed, where we can configure it even further: we can add or change the description, check whether the feed is working, and set or update the category.
If we press Submit, we can go back to the article overview and will see all articles that were fetched from the RSS feed. Clicking on an article will show the complete article to read (if the author of the website has enabled full articles in the RSS feed; otherwise there is only a small preview). For example, on my page, the last article in the RSS feed before publishing this one could be read completely in FreshRSS.

As mentioned earlier, we can access our self-hosted FreshRSS service from a mobile device. For Android users, I would recommend Readrops ( https://github.com/readrops/Readrops ) to access our FreshRSS. Readrops can be found in the Google Play Store and is completely free to use: https://play.google.com/store/apps/details?id=com.readrops.app As I am not an iOS user, I do not know which app is best, so you will have to pick one from this wonderful list in the FreshRSS GitHub.

Congratulations to us, as we should now have our own FreshRSS instance installed (at least I do). If you want to improve the user experience or do more with your FreshRSS instance, you can:

- Add my blog to your FreshRSS instance: https://www.paulsblog.dev/rss
- Add Krebs On Security
- Add LinuxServer.io
- Add BleepingComputer
- Invite your friends to share it

This is the end of this tutorial. Once you finish setting up your personal FreshRSS instance, share your experience with me. I would love to read it! Also, if you want to give feedback about this tutorial or have any alternative self-hosted RSS readers, please comment here and explain the differences. And as always, if you have any questions, please ask them in the comments. I will answer them if possible. Feel free to connect with me on Medium, LinkedIn, Twitter, and GitHub.

0 views
//pauls dev blog 8 months ago

How To Move Docker's Data Directory To Free Up Disk Space

Working with services like Docker, which store large amounts of data in locations that often do not have enough space, can be a very frustrating experience, especially if we are running out of disk space on our Linux server. Recently, I had a case where the root partition on a host using Docker was running very low on disk space. This time, I had deployed so many containers on the host that the storage on the root partition (~15GB) was nearly full. Luckily, I had an additional disk mounted with plenty of disk space. So I was searching for a simple and effective solution to my problem, which was moving Docker's default storage location to another directory on that disk, to ensure that all Docker services can operate without problems and to prevent data loss. In this article, I want to explain all the steps I took to safely relocate Docker's data directory, so that everyone can do this without breaking their containerized applications if this problem ever occurs.

When Docker is installed, its default behavior is that container images, volumes, and other runtime data (logs) are stored in /var/lib/docker, which will grow significantly over time because of:

- Log files and container data
- Persistent volumes containing application data (databases, files, etc.)
- Docker images and layers

When hosting Linux servers, the root partition often has limited space because it should only contain the filesystem and configuration files. This results in Docker using up all remaining space very quickly, leading to slowdowns, crashes, or failures when launching new Docker containers. The solution is to move Docker's data directory to avoid these problems.

Before we can move Docker's data directory, we should stop the Docker service to prevent any data loss while moving the files. Stopping the service ensures that no active processes are using Docker's current data directory while we move it. Unfortunately, this will stop all services running on our host; if we do this in production, we have to plan it in a maintenance window!

After the Docker service is stopped, we can add (or set) the data-root value in the Docker daemon configuration file, which is normally located at /etc/docker/daemon.json. If we cannot find it there, we should create it and add the configuration pointing at the new location.

Now, to ensure that every already created container, all images, and all volumes keep working, we have to copy the contents of the old path to the new location. For the copy, the archive flag preserves permissions, symbolic links, and all corresponding metadata, the one-filesystem flag avoids copying data from other mounted filesystems, and the trailing slash on the source path copies the directory contents without creating an extra subdirectory.

The next step is restarting the Docker service, which should instruct Docker to use the new storage location we defined in daemon.json. Finally, we should confirm that Docker is using the new directory and is working correctly by querying the Docker daemon for its root directory; it should return the new directory path we set. Then, we should test that our containers, images, and volumes are working by simply listing them. As a last step, we can check the containers by accessing them; for example, if we host a website, we can simply open it in the browser.
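The individual commands are not included in this excerpt; a condensed sketch of the relocation steps described above could look like this (the target path /mnt/docker-data is an assumption, and the daemon.json snippet assumes there is no existing configuration to merge):

```sh
# Sketch only: relocate Docker's data directory to an assumed path /mnt/docker-data.
sudo systemctl stop docker docker.socket

# Point the daemon at the new data root (overwrites daemon.json -- merge manually if one exists).
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "data-root": "/mnt/docker-data"
}
EOF

# -a preserves permissions, symlinks, and metadata; -x stays on one filesystem;
# the trailing slash copies the contents without creating an extra subdirectory.
sudo rsync -ax /var/lib/docker/ /mnt/docker-data

sudo systemctl start docker
docker info | grep "Docker Root Dir"      # should print the new path
docker ps -a && docker images && docker volume ls
```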
In some cases, moving Docker's data directory is not possible, and we can use different techniques to free up disk space instead. Docker provides built-in prune commands to remove unused data and reclaim space. The system prune command removes every stopped container, each unused network, the dangling images, and all build caches (it won't delete the volumes!). We should use it with caution, because with the -a flag it will also delete every image that is not linked to a running Docker container! If that is too dangerous, we can remove the individual parts separately: there are separate prune commands for unused networks, unused volumes, and dangling images.

As an alternative to moving the entire Docker directory, we can use Linux bind mounts to selectively move certain large parts of the Docker directory, such as images or volumes. To do this, we simply create a Linux bind mount using the new path as the source and the Docker directory as the destination. This solution is very flexible because we can manage the storage without affecting the Docker runtime.

Not having enough disk space in a Docker environment can lead to serious problems, as I already learned in this "incident". By freeing up disk space regularly and moving Docker's data directory to a separate, large partition, it is possible to keep the system and our containerized applications running. I hope this article gave you an easy step-by-step guide to apply to your Docker environment if you find that your Docker data directory is consuming too much space. But remember, keeping an eye on your storage needs and proactively managing Docker's data, like enabling logrotate, will help you prevent these unexpected problems. As a best practice, I would recommend cleaning up unused Docker resources regularly and monitoring disk usage on a daily basis to avoid running into problems! To do this, we could use Grafana, Netdata, Checkmk (Nagios), Zabbix, MinMon, or simply a cronjob.

However, I would love to hear your feedback about this tutorial. Furthermore, if you have any alternative approaches, please comment here and explain what you have done. Also, if you have any questions, please ask them in the comments. I try to answer them if possible. Feel free to connect with me on Medium, LinkedIn, Twitter, BlueSky, and GitHub.
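For reference, a sketch of the cleanup commands and the bind-mount alternative mentioned above (the paths in the bind-mount example are assumptions):

```sh
# Built-in cleanup commands (run individually if a full prune is too aggressive).
docker system prune      # stopped containers, unused networks, dangling images, build cache
docker network prune     # unused networks only
docker volume prune      # unused volumes only
docker image prune       # dangling images only

# Bind-mount alternative: move only one large subdirectory (assumed paths).
sudo systemctl stop docker
sudo rsync -ax /var/lib/docker/volumes/ /mnt/docker-volumes
sudo mount --bind /mnt/docker-volumes /var/lib/docker/volumes
sudo systemctl start docker
```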

0 views
//pauls dev blog 1 year ago

How To Setup Docker Registry In Kubernetes Using Traefik v2

More than a year ago I created a tutorial on How To Install A Private Docker Container Registry In Kubernetes. In that tutorial, I used Traefik to expose the Docker Registry, which allows accessing the registry over HTTPS with a proper TLS certificate. Unfortunately, the tutorial does not work anymore, because the apiVersion of the Traefik Kubernetes CRDs has changed: instead of traefik.containo.us/v1alpha1, all IngressRoutes now have to use traefik.io/v1alpha1. Because of this, I recreated the Docker Container Registry setup in a simplified way. For more information or explanation, switch to my previous tutorial.

First, we create a namespace that we use for our registry. To add the PVC, we create a manifest file and deploy it to our Kubernetes cluster with kubectl.

Before deploying our Docker Container Registry, we should create a secret that we use to authenticate when pushing/pulling. To simplify this step, I created a script for this purpose: create a new script file, then run it to generate the credentials. This will create two files in your destination folder with the needed strings.

Now, to install and deploy the registry, we will use Helm, the Kubernetes package manager. First, we create a values file which will contain our specific settings, then we add and update the Helm repository, and finally we install the Docker Registry chart.

Now we can add the Traefik IngressRoute which will expose the Docker Container Registry. To do this, we create a manifest file for the IngressRoute and apply it to our Kubernetes cluster with kubectl.

The last step is to test whether the Docker Container Registry is working. To check this, we can simply download any Docker image and push it to our newly set up Container Registry. First, we pull an image from Docker Hub. Then we tag the image with a custom name, adding our Docker Registry domain name as a prefix. Then we log in to our Docker Container Registry. Now, we can try to push our personal NGINX image to our private Container Registry. If there are no errors, our Kubernetes Docker Container Registry is working and we can start using it.

This simple update to my previously written tutorial, "How To Install A Private Docker Container Registry In Kubernetes", should help you deploy a private Docker Container Registry in your Kubernetes cluster with a newer version of Traefik. If you need further information on how to apply this tutorial or have any questions, please ask them in the comments. I will try to answer them if possible. Feel free to connect with me on Medium, LinkedIn, Twitter, and GitHub.
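The manifests themselves aren't included in this excerpt; as an illustration of the central change (the new apiVersion), a minimal IngressRoute sketch might look like this -- the namespace, host, entrypoint, service name, port, and cert resolver are assumptions:

```yaml
# Sketch only: Traefik IngressRoute for the registry with the new apiVersion.
apiVersion: traefik.io/v1alpha1          # previously traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: docker-registry
  namespace: registry
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`registry.example.com`)
      kind: Rule
      services:
        - name: docker-registry
          port: 5000
  tls:
    certResolver: letsencrypt
```

Applied with kubectl apply -f, followed by the usual docker pull / tag / login / push round-trip to verify that the registry accepts images.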

0 views
fasterthanli.me 2 years ago

Extra credit

We’ve achieved our goals already with this series: we have a web service written in Rust, built into a Docker image with nix, with a nice dev shell, that we can deploy to fly.io. But there’s always room for improvement, and so I wanted to talk about a few things we didn’t bother doing in the previous chapters. When we ran our app locally, we signed up for a MaxMind account, and had to download and unpack the “Country” GeoLite2 database manually.

0 views
fasterthanli.me 2 years ago

Generating a docker image with nix

There it is. The final installment. Over the course of this series, we’ve built a very useful Rust web service that shows us colored ASCII art cats, and we’ve packaged it with docker, and deployed it to https://fly.io. We did all that without using nix at all, and then in the last few chapters, we’ve learned to use nix, and now it’s time to say goodbye to this whole-ass Dockerfile:

0 views
Can ELMA 4 years ago

Using ngrok in Docker

You can run multiple services as a single app with Docker Compose. If you are using ngrok in your development environment, you may also need to make it a Docker service. Should you? Probably not. But if you want to, you will see how to do it in this article. First of all, create a folder named ngrok. You should then create a symbolic link in that folder that references the actual ngrok executable. In the ngrok folder, you now have a symbolic link to the ngrok executable. Next, you should create a configuration file for ngrok. In the same ngrok folder, create a file like the one below: Then define your ngrok service in the Compose file: Did you notice that section in the above file? You can find more about it in this article. You can access the ngrok web interface on localhost:4040. You may also need to access the ngrok service from other services. Moreover, you may want to get the current ngrok address dynamically. Below is a sample Python snippet I wrote for this: This will return the public URL of the running ngrok instance according to the HTTP protocol you specified.
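The author's original snippet is not included in this excerpt; a sketch of how this is typically done against ngrok's local inspection API (the hostname "ngrok" assumes a Compose service of that name; use "localhost" outside the Compose network):

```python
# Sketch only -- not the article's original code.
import requests


def get_ngrok_url(proto: str = "https", api: str = "http://ngrok:4040") -> str | None:
    """Return the public URL of the first tunnel matching the given protocol."""
    tunnels = requests.get(f"{api}/api/tunnels", timeout=5).json()["tunnels"]
    for tunnel in tunnels:
        if tunnel["proto"] == proto:
            return tunnel["public_url"]
    return None


if __name__ == "__main__":
    print(get_ngrok_url("https"))
```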

0 views
Can ELMA 4 years ago

Docker profiles: Scenario based services

Sometimes you need some services only in certain scenarios. Docker Compose has a handy feature to achieve this: profiles. Here's a sample file for the examples in the following sections (a sketch is shown below). Profiles are assigned to two of the services in the example, each to its own profile. The other services do not have profiles, so they always start. The two profiled services will only be started when you activate their profiles. Running docker compose up by default starts only the services without profiles. Enabling a profile with the --profile flag also starts the service assigned to that profile along with the default ones. You can enable multiple profiles at once to start all of their services besides the default ones. You can also use the COMPOSE_PROFILES environment variable to specify which profiles to enable.
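The original Compose file isn't included in this excerpt; a sketch with hypothetical service and profile names illustrates the behavior described above (the images, service names, and profile names are assumptions):

```yaml
# Sketch only -- hypothetical services and profiles, not the article's originals.
services:
  web:
    image: nginx:alpine          # no profile: always starts
  db:
    image: postgres:16-alpine    # no profile: always starts
  adminer:
    image: adminer
    profiles: ["debug"]          # starts only when the "debug" profile is enabled
  mailcatcher:
    image: sj26/mailcatcher
    profiles: ["mail"]           # starts only when the "mail" profile is enabled

# docker compose up                                  -> web + db
# docker compose --profile debug up                  -> web + db + adminer
# docker compose --profile debug --profile mail up   -> all four services
# COMPOSE_PROFILES=debug,mail docker compose up      -> all four services
```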

0 views
ptrchm 4 years ago

Docker on macOS Without Performance Problems

The app I’m currently working on runs on Docker. It’s a medium-sized Rails monolith with a bunch of resource-heavy dependencies (Postgres, PostGIS, ElasticSearch, Redis, Next.js and a few more). Docker helps us make sure the whole stack works for everyone, every time. For those using Linux, it works without noticeable downsides. But on macOS, despite every possible performance tweak, Docker had been a huge pain: a MacBook running Docker will always run hot, the battery will drain in less than an hour, the fan spins fast enough for the laptop to take off, and I need an external SSD to fit all the images.

0 views