Latest Posts (15 found)
//pauls dev blog 6 months ago

How To Secure Your Docker Environment By Using a Docker Socket Proxy

In my previous article, I talked about Docker security tips that are often overlooked while working with Docker. One critical flaw was mounting the Docker Socket directly into containers, which can be a major vulnerability. I recommended using a Docker Socket Proxy, a simple but powerful solution. In this article, we will dive deeper into this topic. We will explore the Docker Socket Proxy, learn why exposing the socket directly to a container is a dangerous practice, and use the proxy to harden our container infrastructure. Additionally, we will make use of the newly deployed Docker Socket Proxy in a containerized application (Portainer) to communicate with the Docker API in a safe way. Let's make our containers smarter and safer.

In any Docker environment, the Docker Socket (/var/run/docker.sock) refers to a special file within the Linux OS that allows direct communication with the Docker daemon. The Docker daemon is the core process that manages every container on the host system. This means that every application (or container) that has access to the Docker Socket can fully manage containers (e.g., create, stop, delete, or modify them), which results in full control over the entire Docker environment.

If the Docker Socket is mounted inside a Docker container (which many containers like Portainer want for convenience), we are giving that container full control over the Docker daemon. Now, if an attacker can gain access to our container, they can:

- Delete and modify every container in our Docker environment
- Create new compromised Docker containers that run harmful software
- Escape from the container to take over the host

This also works if the Docker socket is mounted as read-only, because attackers can often find ways to execute commands through it.

In contrast to the plain Docker Socket, the Docker Socket Proxy acts as a middleware between our containers and the Docker Socket, so the socket is no longer exposed directly to all our containers. With the Docker Socket Proxy, we can:

- Limit all actions that a container can do by granting only the permissions that are necessary
- Allow only specific containers to use the proxy instead of exposing the full socket to everyone
- Run a small, well-defined proxy container that is controlled and can be used to access the Docker API

Using a Docker Socket Proxy in our Docker environment will introduce the following benefits:

- It minimizes the risk by restricting which actions can be done by applications.
- It prevents direct access to the Docker daemon, which reduces the attack surface of our environment.
- It allows controlled and secure automation while still maintaining security.

To set up our own Docker Socket Proxy, we will use the LinuxServer.io version of the Docker Socket Proxy to restrict the API access. To set it up, we first need to create a Compose file for the proxy (a consolidated sketch of both Compose files and the commands follows at the end of this section). In this Compose file, we use several environment variables that permit or allow the Docker Socket Proxy to do certain tasks. If we want to allow/permit other functionalities, we can find the setting in this list and add it to our environment variables.

Before starting the Docker Socket Proxy, we have to create an external network because it was specified as one in the Compose file. Once the network is created, we can start the Compose file with the Docker Socket Proxy.

Assuming that the Docker Socket Proxy is running, we can start to configure and deploy Portainer with the Docker Socket Proxy instead of directly accessing the Docker Socket. To do this, we create a second Compose file for Portainer. In this Compose file, there are two very important settings that differ from a normal Portainer deployment with direct access to the Docker Socket:

1. The command uses a TCP connection to the socket proxy container instead of mounting the socket.
2. The network that is used is the previously created external network.

To start the Portainer Docker container, we execute the deploy command shown in the sketch below. After a few seconds, the command should be finished and Portainer is started.
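The Compose files and commands referenced above did not survive this excerpt, so here is a minimal sketch of what the setup could look like. It assumes the lscr.io/linuxserver/socket-proxy image, an external network named socket-proxy, and an example Portainer service; the permission variables, names, and ports are illustrative and should be adjusted to your environment (the article splits this into two separate Compose files).

```yaml
services:
  socket-proxy:
    image: lscr.io/linuxserver/socket-proxy:latest
    environment:
      # Example permissions only - check the image documentation for the full list
      - CONTAINERS=1   # allow access to container endpoints
      - IMAGES=1
      - INFO=1
      - NETWORKS=1
      - SERVICES=1
      - TASKS=1
      - VOLUMES=1
      - POST=1         # Portainer needs write access; disable if not required
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro  # only the proxy sees the socket
    read_only: true
    tmpfs:
      - /run
    networks:
      - socket-proxy
    restart: unless-stopped

  portainer:
    image: portainer/portainer-ce:latest
    command: -H tcp://socket-proxy:2375   # talk to the proxy, not to docker.sock
    volumes:
      - portainer_data:/data              # note: no Docker socket mount here
    ports:
      - "9000:9000"
    networks:
      - socket-proxy
    restart: unless-stopped

volumes:
  portainer_data:

networks:
  socket-proxy:
    external: true    # created beforehand so several stacks can share it
```

```bash
# Create the external network first (use "--driver overlay --attachable" in Swarm mode)
docker network create socket-proxy

# Start the stack(s)
docker compose up -d
```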
To access Portainer, we can switch to our browser and open the configured URL. In this very short tutorial, we learned how to set up the Docker Socket Proxy container, use it within a Portainer instance to securely access the Docker API, and keep the exposure minimal. This should add an extra layer of trust and security. But why is this secure?

✅ It prevents full Docker socket exposure and restricts API access, so Portainer (or any other container) only gets the permissions it needs.
✅ It reduces the attack surface because no direct socket access is possible and sensitive API actions are blocked.
✅ It works in Swarm mode because it uses an overlay network for secure communication between containers.

So, in conclusion, we can say that using a Docker Socket Proxy is a much safer alternative to the normal approach where we expose the Docker socket directly. Additionally, it is very easy to do, and we should take the extra step to secure our environment. Let's all make our environments more secure 😊🚀!

I hope you find this tutorial helpful and can and will use the Docker Socket Proxy in your Docker environment. Also, I would love to hear your ideas and thoughts about the Docker Socket Proxy. If you can provide more information or have other important settings to set, don't hesitate to comment here. If you have any questions, please write them down below. I try to answer them if possible. Feel free to connect with me on Medium, LinkedIn, Twitter, BlueSky, and GitHub.

//pauls dev blog 7 months ago

How To Secure Your Docker Containers: The Most Essential Steps

Often, I get asked by developers or system administrators how I run Docker containers securely and which best practices I follow. I know ensuring container security is very important because misconfigurations and vulnerabilities can expose our system to attacks, and that is not what we want. Because of this, I wanted to create this article explaining possible attack vectors and then cover the most essential steps to secure Docker environments and prevent attackers from exploiting them. Have fun reading!

Before we can dive into security measures, we should be able to recognize the most common attack vectors that threaten our Docker environments:

- Insecure Exposure: One of the most frequent risks occurs when users expose the Web UI of applications without implementing proper security controls, making them an easy target for attackers.
- Unverified or Malicious Images: If we run container images from untrusted sources, such as third-party application catalogs that have little to no transparency about their creators or source code, we could introduce security risks.
- Supply Chain Attacks: Even if we use a trusted image, some dependencies within it may contain malicious code, which could lead to a potential compromise.
- Lateral Movement: If a device or a container within our network gets infected, an attacker can exploit it to move from it to other devices and gain access to your server through vulnerable applications.
- Vulnerabilities and Bugs: Any containerized application that we deploy can have security flaws. This means there could be exploitable vulnerabilities, leading to serious consequences.

To minimize the security risks for our Docker containers we can simply follow these best practices:

✔ Carefully Vet Image Sources: We should always use official or well-reviewed repositories and verify the integrity of the image before deployment.
✔ Keep Containers Updated, But Avoid Auto-Updates: We have to update our containers regularly to help patch vulnerabilities. However, we should avoid automated updates because they can introduce breaking changes.
✔ Enforce the Least Privilege Principle: We should limit permissions for our containers, only permitting what is necessary and preventing unnecessary access.
✔ Restrict Container Resources: CPU, memory, and storage limits should be used to prevent resource exhaustion attacks.
✔ Do Regular Backups: If a breach, a failure, or anything else happens, it is very important that we have recent backups to ensure a quick recovery without losing too much data.

All of the mentioned security practices should offer us a strong foundation for protecting our Docker environment. Although this list doesn't cover every measure, it should highlight the most critical parts that are easy to implement and very effective. Normally, these steps can also be implemented by people with limited technical expertise. To keep it simple and, more importantly, practical, I have created a list of the most important security practices which are very easy to implement. This list is primarily focused on helping the less technical Docker enthusiasts strengthen their container security without getting frustrated by all the complex configurations. As mentioned above, the list is not complete but should include the most important steps.

If we are running container images from unverified or unknown sources we introduce a significant security risk, because many contain malware, outdated software, or vulnerabilities that could be exploited by attackers. To avoid this, we should:

- Always pull images from official repositories (like Docker Hub) or well-maintained sources with a strong reputation.
- Check the image's source code, update history, and overall trust level.
- Check factors such as popularity (stars, downloads), contributor activity, and release frequency.
- Avoid running images that lack transparency about their origin or maintenance status.

An updated system is very important! Regular updates for our containers help us protect against newly discovered vulnerabilities. BUT, we should never blindly update to the newest version, because the new version could introduce bugs or breaking changes. We should always:

- Frequently update our host operating system, Docker engine, and container images to benefit from the latest security patches.
- Avoid automatic updates for containers. They can unintentionally pull compromised or broken images before issues are identified. Also, new images can introduce breaking changes like database upgrades (switching from SQLite to MySQL), changes in communications, or changes in base functionality (removing SHA1 support for SSH RSA signing - Gitea some time ago...).
- Verify the stability and security of the new version and update manually.

The Docker socket (/var/run/docker.sock) is a very critical part of our Docker environment that grants full control over the Docker daemon. Exposing it without caution can lead to severe security breaches. We should therefore:

- Never mount the Docker socket directly into our containers. If we do, we could allow attackers inside a compromised container to control our entire host system.
- Remember that there is a common misconception that mounting the socket as read-only improves security, but it does not. Attackers are still able to execute commands and escalate privileges.
- Use a Docker socket proxy with minimal privileges. Using this approach, only a designated, well-audited container can interact with the Docker API, reducing overall exposure.

By default, processes in many containers can use setuid and setgid binaries, which could allow them to gain additional privileges and increase the risk of exploitation. We should prevent privilege escalation inside any container by adding the no-new-privileges security option to the Compose file, as sketched below. This setting prevents processes from gaining new privileges through setuid/setgid binaries or file capabilities, which reduces the risk of privilege-related attacks. However, it could be that some images require these privileges to work properly.
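A minimal sketch of this option in a Compose file, using a hypothetical service name and image:

```yaml
services:
  myapp:                          # hypothetical service
    image: myorg/myapp:1.2.3      # pin a vetted tag instead of :latest
    security_opt:
      - no-new-privileges:true    # processes cannot gain extra privileges (e.g. via setuid binaries)
```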
This means that we have to test this setting beforehand.

Docker containers normally run with a limited set of capabilities that control what they can and cannot do on the host system. Granting more privileges increases the security risk. Because of this, we should:

- Avoid using the --privileged flag! It gives our container almost the same level of access as the host itself, and using it will grant an attacker nearly full control over our host system.
- Avoid granting elevated capabilities such as SYS_ADMIN to our Docker containers. This effectively allows a container to bypass most security restrictions and take over the host.
- Always use only the minimal set of privileges the container needs to function correctly.

To limit the potential damage an attacker can do if any container is compromised, we should run all containers as non-root users. This means that instead of granting root privileges we should use the image's built-in unprivileged user or explicitly define a user in our Compose file to reduce the risk.

If not correctly managed, Docker containers can monopolize system resources and cause performance issues, for example due to memory leaks. To prevent this, we should enforce resource constraints in our Docker Compose file. In the following Compose file sketch we can see how Docker Compose anchors are used to define a base configuration and then override the values as needed for individual containers (explained in the comments).
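Since the original example did not make it into this excerpt, here is a minimal sketch of such a Compose file. The anchor name, service names, images, and limit values are all hypothetical; note that deploy.resources limits are primarily honored in Swarm mode and recent docker compose versions.

```yaml
# Base configuration shared via a YAML anchor
x-defaults: &defaults
  security_opt:
    - no-new-privileges:true
  user: "1000:1000"              # run as a non-root user
  logging:
    driver: json-file
    options:
      max-size: "10m"            # log rotation: at most 10 MB per file...
      max-file: "3"              # ...and at most three files
  deploy:
    resources:
      limits:
        cpus: "0.50"             # default CPU limit
        memory: 256M             # default memory limit

services:
  smallapp:                      # hypothetical service that keeps the defaults
    <<: *defaults
    image: myorg/smallapp:1.0.0

  bigapp:                        # hypothetical service that overrides the limits
    <<: *defaults
    image: myorg/bigapp:2.1.0
    deploy:
      resources:
        limits:
          cpus: "2.00"           # this service is allowed more CPU
          memory: 1G             # ...and more memory
```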
If we set these limits in our containers we can ensure that they cannot consume excessive CPU, memory, or storage, which will prevent potential system crashes or slowdowns. Since I once encountered a container that created such an enormous amount of logs that my cluster broke, I always use log rotation. You can read about the problem in this article:

Backups are very important and are essential for recovering after security breaches, data loss, or a system failure. If something like this happens it is a big problem, and not having backups could really hurt us. Because of this, we should:

- Back up our Docker Compose files and container data volumes regularly (ideally versioned in Git).
- Use application-provided backup tools (e.g. for PostgreSQL and MariaDB).
- Use a backup solution like Velero, Duplicati, rsync, etc.
- Be aware that some applications cannot be backed up while running, so we have to check their documentation for best practices.

Using a good backup strategy will ensure that we can recover quickly from an incident, which will minimize the downtime and prevent data loss. As you might know, I learned the hard way that backups are very important. You can read about my incident here:

Now that we have reached the end of the article, I want to highlight a very important point: security isn't just about following a simple checklist. It requires understanding our threat model and continually improving our defenses. My personal tips for everyone are that we should:

- Stay informed about new security vulnerabilities affecting Docker and all applications we use. I use FreshRSS to subscribe to popular security-related websites and read all important articles.
- Regularly review and refine our Docker security practices.
- Accept that no system is 100% secure, but take proactive steps to significantly reduce risks.

By doing this, we can keep our Docker environment more secure against possible threats. I hope you find these Docker security tips helpful and can easily adopt these best practices for yourself. Also, I would love to hear your ideas and thoughts about these tips. If you can provide more best practices to implement, don't hesitate to comment here. If you have any questions, please write them down below. I try to answer them if possible. Feel free to connect with me on Medium, LinkedIn, Twitter, BlueSky, and GitHub.

//pauls dev blog 7 months ago

5+2 Phenomenal FREE Platforms To Learn Coding

Some time ago, I was asked by new colleagues how they could learn to code without following the "old path" of visiting a university or doing an apprenticeship. If you are also new to the world of coding, it makes sense to start by teaching yourself using one of the many free resources found online. By taking advantage of these free resources, you can learn what you want and not waste money upfront. Once you've gone through enough free coding platforms, you will know what you want and can specialize in that field of coding.

One of the best websites to improve your coding is freeCodeCamp. It contains numerous exercises based on different topics and languages for practice. Also, this website provides a means to get certified based on your skills by taking their test. I would highly suggest having a look at their website here and starting to learn.

Exercism is a platform that exists to help as many people as possible attain fluency in ANY programming language. It provides many concept exercises to improve your coding skills. The best thing about this website is that it publishes all information and all tutorials for free. Also, you are able to keep track of your progress. Additionally, you can opt in as a mentor and share your knowledge with other people. Exercism contains tutorials/exercises for 55 different languages that you might want to master: Python, JavaScript, TypeScript, Go, and many more. Here is a link to the Exercism website.

The Odin Project is a platform where everyone can learn coding in Ruby on Rails and JavaScript. The platform is designed for people who could not attend an intensive coding school or did not have access to a good computer science education. The Odin Project follows these beliefs:

- Education should be free and accessible
- You learn best by actually building
- Motivation is fueled by working with others
- Open source is best

I personally think that this project is mandatory for every person who wants to learn any of the two programming languages. Here is a link to the website.

Codecademy was started with the goal to give anyone in the world the ability to learn the skills they need to succeed in the 21st century. To achieve this goal, Codecademy provides ~15 courses in different programming languages. Many of them are free, but some are only included within the Pro version that costs $18 a month. At the moment, many free courses are available within the catalog. You can start a quiz here to find out which course is best suited for you.

To be clear, this isn't a coding platform itself, but it's a great resource for community-curated programming courses. You can simply search for the programming language you want to learn, and you'll get a list of the best courses, tutorials, and books that are recommended by coders and available online. Here is a link to the website.

WarriorJS is a learning platform that teaches you JavaScript while you are playing a game. This game is designed for new or advanced JavaScript programmers and will put your skills to the test! Here is a link to the website.

Elevator Saga is a game where you have to use JavaScript to transport people with an elevator in an efficient manner. While progressing through the different stages you have to complete ever more difficult challenges. Only efficient programs will be able to complete all challenges. Here is a link to the website.

In this article, I showed five cool coding platforms (plus two coding games) that you can use to start coding. Taking advantage of any of the free coding resources out there is definitely the way to go when you are just starting.
If you want a gamification approach to learning to code, you should check out this article: Also, if you want to have a full path to follow, you can follow my guide to becoming a Full Stack developer: If JavaScript is not your thing and you maybe want to learn Python, I have gathered some resources (videos, books, and websites) and listed them in this article:

I hope you find any of these coding platforms helpful and find a suitable platform to start your learning. I would love to hear your ideas and thoughts. If you can provide another free coding platform, don't hesitate to comment here. Also, if you have any questions, please jot them down below. I try to answer them if possible. Feel free to connect with me on Medium, LinkedIn, Twitter, BlueSky, and GitHub.

//pauls dev blog 7 months ago

How To Self-Host FreshRSS Reader Using Docker

Have you ever thought about self-hosting your own RSS reader? Yes? That's good, you're in the right place. In this guide, we will walk through everything you need to know to self-host your own version of FreshRSS, a self-hosted, lightweight, easy-to-work-with, powerful, and customizable RSS reader.

RSS stands for Really Simple Syndication and is a tool that can help us aggregate and read content from multiple websites in one place, updated whenever new posts are published on any of the websites added as RSS feeds. RSS readers can be used to stay updated and get the latest posts from blogs (also mine), news sites, or podcasts without visiting each site and without an algorithm filtering these sites "for our interests". The news/articles are simply ordered chronologically. This can save time because all the content we want is available in one place, and often it is usable offline so you can read without an active internet connection. Another important feature of an RSS reader is privacy, because the RSS reader pulls the content directly from the website, which avoids tracking and ads. Unfortunately, not every website publishes RSS feeds containing the complete article, but we can avoid being tracked by those that do.

Nowadays, many different kinds of RSS readers exist, but most of them are hosted by different providers:

- NewsBlur: NewsBlur is a "free" RSS reader that can be used on the web, iPad/iPhone, and Android and is able to follow 64 feeds. Like many others, it has a premium account that adds more "features".
- Inoreader: Inoreader is another "free" RSS reader that can be used to follow websites and read articles on their website. If you have an active subscription, you can also use their iOS or Android apps. With Inoreader you can follow 150 feeds.
- Feedly: Feedly is also a "free" RSS reader which offers a free plan that can be used on the web and iOS/Android and allows you to follow up to 100 feeds and organize them into 3 folders.

As usual with externally hosted services, much of the main content is blocked behind a paywall. Because of that, I decided to host my very own RSS reader. My choice was FreshRSS because it is lightweight, easy to work with, very powerful, and can be customized to my needs. Additionally, it is a multi-user application where I can invite my colleagues/friends or share lists for anonymous reading. Another cool feature is that there is an API for clients, including mobile apps! Internally, FreshRSS natively supports basic web scraping based on XPath and JSON documents for websites that do not publish any RSS (Atom) feed.

To follow every step within this tutorial and have a running FreshRSS service in the end, we need a running Docker or Docker Swarm environment with a configured load balancer like Traefik. If we want to use Docker in Swarm mode to host our services, we can follow this article to set it up: After using that article to set up our Docker Swarm, we should install and configure a Traefik load balancer (or something similar: NGINX, Caddy, etc.) which will grant our services Let's Encrypt SSL certificates and forward requests to them within our Docker Swarm environment. To learn about this (and other important services) we could follow this tutorial: If we don't want to host our own Docker Swarm but still want to use Docker and Traefik to host FreshRSS, I have created the following tutorial which shows how to set up a Traefik proxy that can be used to grant Let's Encrypt SSL certificates to services running in Docker containers:

Before installing FreshRSS we have to create an environment file which will store every environment variable that is used (for our specific use case) in our Docker container. For more information about the available settings see this GitHub reference. Now that we have everything properly set up, we can install FreshRSS on our server using either a simple deployment on one server or a Docker Swarm deployment. FreshRSS will be installed with Docker Compose. The Compose file contains the service name and all mandatory/important settings for Traefik so the service gets a unique URL and an SSL certificate.
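The environment file itself was not part of this excerpt; a purely illustrative sketch (all variable names and values here are assumptions, including the admin credentials mentioned later) could look like this:

```bash
# .env - example values only, adjust to your setup
DOMAIN=rss.paulsblog.dev
TZ=Europe/Berlin
ADMIN_USER=admin
ADMIN_PASSWORD=freshrss123456
ADMIN_EMAIL=admin@example.com
BASE_URL=https://rss.paulsblog.dev
```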
To install FreshRSS within your Docker Swarm you can use a Compose file like the sketch shown after the following line-by-line explanation.

Line 4: The rolling release, which has the same features as the official git branch, is used.
Line 5 - 8: This sets up log rotation for the container, with 10 megabytes as the maximum size of a log file and a maximum of three log files.
Line 9 - 11: To persist all container data, two persistent volumes are used (data/extensions).
Line 12 - 13: Set the used network to my Traefik network.
Line 15 - 17: The service will only be deployed to a Docker Swarm node if a specific node label is set to true. This can be achieved by labeling the node before deploying the Compose file to the stack.
Line 18 - 29: Set up a standard configuration for a service deployed on port 80 in a Docker Swarm with Traefik and Let's Encrypt certificates. In lines 22 and 25 a URL is registered for this service: rss.paulsblog.dev.
Line 30: Enables using the environment file.
Line 31 - 34: Configure environment variables used by FreshRSS like the timezone (line 32), the cronjob to update feeds (line 33), and the production version (line 34).
Line 36 - 46: Sets the FRESHRSS_INSTALL and FRESHRSS_USER environment variables, which automatically pass arguments to the FreshRSS command line interface. They are only evaluated at the very first run. This means that if you change them, you have to delete the FreshRSS service and the volume and then restart the service.
Line 47 - 54: The used volumes and networks are defined, because this has to be done in Compose files.

As we will deploy this service into our Docker Swarm using an environment file, we have to run the Compose file through docker-compose config first, which generates a new Compose file with all variables from the environment file resolved. To keep the command simple, we pipe the output to the docker stack deploy command. The resulting command to deploy the service in our Docker Swarm is shown in the sketch below.

If you do not have a running Docker Swarm, you can use a plain Docker Compose file instead. There are only two differences between this and the Docker Swarm Compose file. The first is in lines 5 - 7, where a hostname, a container name, and the restart policy are set. The other change is that the labels are moved from the deploy keyword to a higher level within the Compose file. This is done because deploy is only used in a Docker Swarm environment, but labels can also be used in a simple Docker environment. To start the container, simply use the usual docker compose up command.
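The original Compose file and commands were lost in this excerpt. The following sketch approximates the Swarm variant described above (its line numbers will not match the explanation exactly); the image tag, label names, network name, and CLI arguments are assumptions based on the FreshRSS and Traefik documentation and should be double-checked.

```yaml
version: "3.8"
services:
  freshrss:
    image: freshrss/freshrss:edge          # rolling release
    logging:
      driver: json-file
      options:
        max-size: 10m
        max-file: "3"
    volumes:
      - data:/var/www/FreshRSS/data
      - extensions:/var/www/FreshRSS/extensions
    networks:
      - traefik-public
    deploy:
      placement:
        constraints:
          - node.labels.freshrss == true   # example label name
      labels:
        - traefik.enable=true
        - traefik.http.routers.freshrss.rule=Host(`rss.paulsblog.dev`)
        - traefik.http.routers.freshrss.entrypoints=websecure
        - traefik.http.routers.freshrss.tls.certresolver=letsencrypt
        - traefik.http.services.freshrss.loadbalancer.server.port=80
    env_file: .env
    environment:
      TZ: ${TZ}
      CRON_MIN: "1,31"                     # refresh feeds twice per hour
      FRESHRSS_ENV: production
      # Passed to the FreshRSS CLI on the very first run only (example arguments)
      FRESHRSS_INSTALL: |-
        --default_user ${ADMIN_USER}
        --base_url ${BASE_URL}
      FRESHRSS_USER: |-
        --user ${ADMIN_USER}
        --password ${ADMIN_PASSWORD}
        --email ${ADMIN_EMAIL}

volumes:
  data:
  extensions:

networks:
  traefik-public:
    external: true
```

```bash
# Label the target node, then resolve the .env variables and deploy the stack
docker node update --label-add freshrss=true <node-name>
docker-compose -f docker-compose.yml config | docker stack deploy -c - freshrss
```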
After deploying our own instance of FreshRSS we can switch to our favorite web browser, open https://rss.paulsblog.dev (or your own domain), and log in with the credentials specified within the environment file (admin:freshrss123456). We should see something similar to this:

The first thing we have to do is configure the FreshRSS instance. To do this, we should press the cogwheel in the top right corner to open up the settings. Now we have to do two things:

- Change the password for our admin. This is done in Account -> Profile.
- Adjust the theme we want to use for our instance. This is done in Configuration -> Display.

On the main page, you can click the small + icon next to Subscription Management to open up the following page: Before adding a feed we should create a category (e.g. Blogs). Then we can add an RSS feed for our most visited/interesting blog (-> https://www.paulsblog.dev/rss ). Pressing Add will open up the settings page of the RSS feed and we are able to configure it even further: On this page, we can add or change the description, check if the feed is working, and also set or update the category.

If we submit, we can go back to the article overview and will see all articles that were fetched from the RSS feed: Clicking on an article will show us the complete article to read (if the author of the website has enabled full articles in the RSS feed; otherwise there is only a small preview). For example, on my page, the last article in the RSS feed before publishing this one could be read completely in FreshRSS:

As mentioned earlier, we can access our self-hosted FreshRSS service from our mobile devices. For Android users, I would recommend Readrops ( https://github.com/readrops/Readrops ) to access our FreshRSS. Readrops can be found in the Google Play Store and is completely free to use: https://play.google.com/store/apps/details?id=com.readrops.app As I am not an iOS user, I do not know which app is best, so you will have to find one from this wonderful list in the FreshRSS GitHub.

Congratulations to us, as we should now have installed our own FreshRSS instance (at least I have). If you want to improve the user experience or do more with your FreshRSS instance you can:

- Add my blog to your FreshRSS instance: https://www.paulsblog.dev/rss
- Add Krebs On Security
- Add LinuxServer.io
- Add BleepingComputer
- Invite your friends to share it

This is the end of this tutorial. Once you finish setting up your personal FreshRSS instance, share your experience with me. I would love to read it! Also, if you want to give feedback about this tutorial or have any alternative self-hosted RSS readers, please comment here and explain the differences. And as always, if you have any questions, please ask them in the comments. I will answer them if possible. Feel free to connect with me on Medium, LinkedIn, Twitter, and GitHub.

//pauls dev blog 8 months ago

How To Move Docker's Data Directory To Free Up Disk Space

Working with services like Docker, which store large amounts of data in locations that often do not have enough space, can be a very frustrating experience, especially if we are running out of disk space on our Linux server. Recently, I had a case where the root partition on a host using Docker was running very low on disk space. This time, I had deployed so many containers on the host that the storage on the root partition (~15GB) was nearly full. Luckily, I had an additional disk mounted with plenty of disk space. So I was searching for a simple and effective solution to my problem, which was moving Docker's default storage location to another directory on that disk to ensure that all Docker services can operate without problems and to prevent data loss. In this article, I want to explain all the steps I took to safely relocate Docker's data directory so that everyone can do this without breaking their containerized applications if this problem ever occurs.

When Docker is installed, its default behavior is that container images, volumes, and other runtime data (logs) are stored in /var/lib/docker, which will grow significantly over time because of:

- Log files and container data
- Persistent volumes containing application data (databases, files, etc.)
- Docker images and layers

When hosting Linux servers, the root partition often has limited space because it should only contain the filesystem and configuration files. This results in Docker using up all the remaining space very quickly, leading to slowdowns, crashes, or failures when launching new Docker containers. The solution is to move Docker's data directory to avoid these problems.

Before we can move Docker's data directory we should stop the Docker service to prevent any data loss while moving the files. Stopping the service ensures that no active processes are using Docker's current data directory while we move it. Unfortunately, this will stop all services running on our host. If we do this in production we have to plan it in a maintenance window!

After the Docker service is stopped we can add (or set) the data-root setting in the daemon configuration file, which is normally located at /etc/docker/daemon.json. If we cannot find it there, we should create it and add the new storage location.

Now, to ensure that every already created container, all images, and all volumes keep working, we have to copy the contents of /var/lib/docker (the old path) to the new location. In the copy command, the archive flag preserves the permissions, symbolic links, and all corresponding metadata, the one-file-system option makes sure that data from other mounted filesystems is not copied, and the way the source path is written copies the directory contents directly instead of creating an extra subdirectory.

The next step is restarting the Docker service. This instructs Docker to use the new storage location which we defined in daemon.json. Finally, we should confirm that Docker is using the new directory and is working correctly. To check this, we can look at the Docker Root Dir reported by docker info, which should return the new directory path we set. Then, we should test that our containers, images, and volumes are working by simply listing them. As a last step, we can check the containers by accessing them. For example, if we host a website we can simply access it with the browser. A sketch of all these commands follows below.
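The individual commands did not survive this excerpt, so here is a minimal sketch of the whole procedure. It assumes a systemd-based host, /mnt/data/docker as the example target directory, and rsync as the copy tool; adjust the paths to your setup and merge the daemon.json snippet with any existing settings instead of overwriting them.

```bash
# 1. Stop the Docker service (plan a maintenance window in production!)
sudo systemctl stop docker

# 2. Point the daemon at the new data directory (create /etc/docker/daemon.json if missing)
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "data-root": "/mnt/data/docker"
}
EOF

# 3. Copy the existing data: -a preserves permissions/links/metadata,
#    -x stays on one filesystem, and the trailing slash copies the
#    contents directly instead of creating a subdirectory
sudo rsync -ax /var/lib/docker/ /mnt/data/docker

# 4. Restart the Docker service
sudo systemctl start docker

# 5. Verify the new location and that everything is still there
docker info | grep "Docker Root Dir"     # should print /mnt/data/docker
docker ps -a && docker images && docker volume ls
```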
In some cases moving Docker's data directory is not possible, and we can use different techniques to free up disk space instead. The most important command for freeing up disk space while using Docker is docker system prune, a built-in command that removes unused data and reclaims space. It will remove every stopped container, each unused network, the dangling images, and all build caches (it won't delete the volumes!). We should still use it with caution, as with the -a flag it will even delete every image that is not used by an existing container!

If docker system prune is too dangerous, we can remove the individual parts separately: unused networks, unused volumes, and dangling images (see the sketch below).

As another approach, instead of moving the entire Docker directory, we could use Linux bind mounts to selectively move certain large parts of the Docker directory, like images or volumes. To do this, we simply create a Linux bind mount using the new path as the source and the Docker subdirectory as the destination. This solution is very flexible because we are able to manage the storage without affecting the Docker runtime.

Not having enough disk space in a Docker environment can lead to severe problems, as I already learned in this "incident". By freeing up disk space regularly and moving Docker's data directory to a separate, large partition it is possible to keep the system and our containerized applications running. I hope this article gave you an easy step-by-step guide to apply this to your Docker environment if you find that your Docker data directory is consuming too much space. But remember, keeping an eye on your storage needs and proactively managing Docker's data, for example by enabling log rotation, will help you prevent these unexpected problems. As a best practice, I would recommend cleaning up unused Docker resources regularly and monitoring disk usage on a daily basis to avoid running into problems! To do this, we could use Grafana, Netdata, Checkmk (Nagios), Zabbix, MinMon, or simply a cron job.

However, I would love to hear your feedback about this tutorial. Furthermore, if you have any alternative approaches, please comment here and explain what you have done. Also, if you have any questions, please ask them in the comments. I try to answer them if possible. Feel free to connect with me on Medium, LinkedIn, Twitter, BlueSky, and GitHub.
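For reference, here are hedged examples of the cleanup commands and of the bind-mount approach described above; the paths and the fstab entry are assumptions for illustration.

```bash
# Remove stopped containers, unused networks, dangling images and build cache
docker system prune            # add -a only if you accept losing all unused images

# Or prune the individual parts separately
docker network prune           # unused networks
docker volume prune            # unused volumes
docker image prune             # dangling images

# Bind-mount approach: relocate only the volumes directory to a larger disk
sudo systemctl stop docker
sudo rsync -ax /var/lib/docker/volumes/ /mnt/data/docker-volumes
sudo mount --bind /mnt/data/docker-volumes /var/lib/docker/volumes
sudo systemctl start docker

# Make the bind mount permanent across reboots
echo '/mnt/data/docker-volumes /var/lib/docker/volumes none bind 0 0' | sudo tee -a /etc/fstab
```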

//pauls dev blog 8 months ago

The Dark Side Of Software: Anti-Patterns (and How To Fix Them)

An Anti-Pattern is a proven way to "shoot yourself in the foot". The term Anti-Pattern was coined by Andrew Koenig, inspired by the book "Design Patterns: Elements of Reusable Object-Oriented Software", published in 1994, and it's pretty entertaining to read about it. Koenig defines an Anti-Pattern as a "commonly-used process, structure or pattern of action that, despite initially appearing to be an appropriate and effective response to a problem, has more bad consequences than good ones." In 1998 the term "Anti-Pattern" became popular thanks to the book "AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis". It defined Anti-Patterns as "specific repeated practices in software architecture, software design, and software project management that initially appear to be beneficial, but ultimately result in bad consequences that outweigh hoped-for advantages."

In the following article, we will learn about the reasons why Anti-Patterns are used, the most common Anti-Patterns, and possible solutions that can be used to avoid them. An Anti-Pattern, similar to a design pattern, is a literary form and simplifies the communication and description of common problems. Often an Anti-Pattern is a design pattern applied in the wrong context. In software development, the following factors are often the main causes of Anti-Patterns:

- Pressure of time
- Disinterest
- Closed-mindedness

To counter these factors, software design and development must take the following fundamental forces into account when making decisions about the project:

- Functionality Management: Ensure that the product delivers the features while balancing between scope and feasibility.
- Complexity Management: Simplify design and solutions to reduce unnecessary complexity of the software. Also, improve maintainability.
- Performance Management: Address non-functional requirements like development speed, reliability, and scalability to improve quality.
- Change Management: Adopt a flexible strategy to handle new requirements and unexpected changes without restarting the project.
- IT Resource Management: Allocate and utilize all technical resources efficiently.
- Technology Transfer Management: Plan the adoption of new technologies and ensure smooth integration with existing systems.

In the previously mentioned book "AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis" the Anti-Patterns are categorized into three different domains which cover most of the common areas where Anti-Patterns can be seen. However, nowadays additional perspectives and domains have been identified which can also have their own Anti-Patterns:

- Software Development: This domain focuses on coding and implementation practices; its Anti-Patterns lead to poor software structure, which makes it harder to maintain and extend.
- Software Architecture: This domain is about system- and enterprise-level design, where Anti-Patterns often lead to faulty or rigid architectures in the system. This prevents scalability and adaptability for further development.
- Software Project Management: This is all about team coordination, resource allocation, and timelines. Anti-Patterns here are often brought about by poor communication and bad leadership, resulting in chaos and inefficiency.
- Software Operations and Deployment: Anti-Patterns in this domain involve the deployment, monitoring, and scaling of software in production environments.
- User Experience (UX) and Design: This domain describes the usability and accessibility of software, in which ignoring UX often leads to not meeting user expectations.
- Team Dynamics and Culture: This domain focuses on the human side of software development, including collaboration and morale. If not maintained, it creates a toxic culture and poor team dynamics which can fail even well-structured projects.
- Security and Compliance: In this domain, security vulnerabilities and adherence to regulatory requirements are addressed, because ignoring best practices or compliance requirements can result in security breaches or fines for the company.
The God Object (or God Class) is a commonly seen Anti-Pattern which occurs when a single class/object has too many responsibilities, becoming the central part of the system and handling completely different functionality that should normally be split across multiple specialized classes. Additionally, many other classes often depend on the God class, which creates high coupling. How to solve: Apply the Single Responsibility Principle and break the God object's methods and properties down into smaller classes/functions.

If the code is unstructured and not organized, you have created Spaghetti Code. Software developers cannot read and maintain it properly because it does not follow a straight line: it jumps between methods with many if conditions and has multiple criss-crossing dependencies. How to solve: Adopt coding standards and start using meaningful names for classes. Avoid circular dependencies and start modularizing functionality.

Code is reused by simply copying other fragments within the project, which leads to redundant code fragments and often results in problems while maintaining these parts. How to solve: Extract functionality into abstract/reusable functions, classes, or libraries. Use a static analyzer (e.g. SonarQube) to find the redundancies.

A specific tool, framework, or pattern (often the singleton pattern) is used to "fix" every problem even if it does not fit. How to solve: Identify the problem context and use the correct tool/pattern. Done too late, this often leads to an enormous amount of refactoring.

Within the codebase, multiple numbers and strings are used without any explanation, often resulting in uncertainty about how to understand them. How to solve: Replace magic numbers (or strings) with well-named constants to improve the readability of the code.

The codebase uses fixed configuration values from a variable instead of retrieving them from a configuration file or through environment settings. This leads to the problem that the app has to be rebuilt whenever such a setting changes. How to solve: Use configuration files for different versions, environment variables for server configurations, and constants that load the data.

For simple solutions/problems, very complex designs are used that add unnecessary layers and features that aren't needed. In short, a codebase or a design that solves problems that you do not have. How to solve: Focus on the current problem, develop lightweight and extensible solutions, and add features or layers only if necessary.

If small modifications needed to fix a bug or add a feature often result in a very big change touching multiple unrelated parts of the app, something is wrong. How to solve: Refactor the codebase to improve cohesion and reduce coupling by combining related functionality into a single class or module.

When old, undocumented or poorly documented code stays in the project because developers find it too risky to remove, the Lava Flow Anti-Pattern is present. Normally, this happens if a project is very old and has many legacy functionalities that no one understands and no one is able to learn. Often this results in a code state where many parts are nearly impossible to maintain and debugging or refactoring is very complicated. How to solve: Analyze the legacy code, identify dependencies and interactions between classes/functions, and add documentation step by step. Write tests to verify that the legacy part does what it should do and check whether it is really used by changing it and verifying with the implemented tests.
Then remove unnecessary parts from the project and ensure that no new bugs are introduced. This is much easier if tests have been developed beforehand.

In contrast to Lava Flow, the Dead Code Anti-Pattern describes code that is never executed or used in the app. Often, this results from refactoring or changing requirements, where the new code is implemented but the developer doesn't remove the unused parts. These dead code areas increase the complexity of classes and create confusion for every developer. How to solve: Run static code analysis tools like SonarQube to identify unused code. Also, modern IDEs like IntelliJ have built-in functionality to find unused parts (imports/functions/classes) in the project. After finding them, start refactoring/removing the unused parts. Additionally, integrate the code analysis tool into a CI/CD environment to prevent creating pull requests/features with dead code.

Sometimes features are added to projects because they may be needed in the future. This adds clutter and increases the complexity without adding any value. Features shouldn't be in the codebase unless they are needed. How to solve: Focus on the requirements of the initial version and don't implement features that could become important in the future.

Many developers tend to optimize their algorithms before a real performance bottleneck is identified. This leads to high complexity in simple functions and sometimes introduces bottlenecks that were not present before optimizing. How to solve: Before refactoring, identify critical issues and profile the performance. Also, prioritize developing clean, understandable, and maintainable code before optimizing old functions.

The Silver Bullet Syndrome describes the overuse of a single tool, technology, or methodology as a universal solution to every problem, regardless of the context. How to solve: Each problem should be evaluated individually to find the correct approach for every context. Additionally, the different kinds of technologies, tools, and methodologies should be researched to know when to choose which one.

The Not Invented Here Syndrome happens when a project team avoids using any third-party tools, frameworks, or solutions simply because they were not invented by the team itself. The software will have many self-created solutions for common problems because architects struggle against software that is not developed by them. How to solve: Keep your mind open and embrace external solutions when they are well-documented, tested, and fit the problem, to avoid "reinventing the wheel". Also, this will save resources and money.

Having a system with no clear architecture and chaotic, unmaintainable code that has no layers, no modularity, and no documentation is a "Big Ball of Mud". This normally results in non-maintainable software that has several bugs and performance issues. How to solve: Introduce a clear architecture by refactoring the system into smaller, cohesive modules and enforce standards like a layered architecture, or adopt a microservice approach.

If a software project uses a technology or a tool tied to a specific vendor and becomes completely dependent on their implementation, the "Vendor Lock-In" Anti-Pattern is present. If the vendor makes software changes or expected product features are delayed, the project will be delayed or unable to complete. How to solve: Either use open standards from the beginning or create an isolation layer on top of the vendor's software to separate the project from the vendor's implementation.
Later, the vendor can be exchanged and the isolation layer is the only part that has to be changed. Also, the isolation layer can combine multiple vendors to provide all the features that are needed for the project.

Anchors Aweigh occurs in a project when progress is hindered by outdated or initial design decisions, tools, or technologies. The Anti-Pattern's name refers to a ship that is unable to sail because the anchor is still set. During software development this anchor could be a prototypal wireframe or a missing dependency that is not delivered. Additionally, it could be a specific part of the app that is kept despite being unused and generating multiple bugs. How to solve: Plan early, review architectural decisions periodically (and early), and encourage the team to refactor features incrementally. Also, invest in skill development so that developers can use newer technologies and reduce resistance toward these technologies/tools.

If developers start to use code or patterns simply because "it's a best practice", without fully understanding why or how the patterns work, they have established Cargo Cult Programming. This often results in overcomplication, where unnecessary libraries, tools, or design patterns are used for problems that could be solved much more easily without them. How to solve: Developers should invest more time in understanding the code, the tool, the pattern, or the framework before applying it to the app. Additionally, the company should encourage learning, collaboration, and advancement to create a culture where developers tend to ask "why". Furthermore, code reviews should be mandatory, in which multiple developers have to review the solution before it is applied to the main code.

If new code is developed to encapsulate existing poorly structured code instead of refactoring or optimizing the architecture, "The Onion" Anti-Pattern is present. This results in a layered architecture structured like an onion, where the core functionality becomes hard to access or change due to the wrapping. Also, adding new features often introduces abstraction layers instead of addressing the main issues. In the end, the app's performance will suffer and it will become unmaintainable. How to solve: Refactor or replace the core parts of the system and move them into modules. Instead of wrapping layers into new layers, start refactoring and treat this refactoring as part of the roadmap instead of putting it into the "nice-to-have" section.

The Broken Windows Theory describes the acceptance of small code issues like poor naming, some unused code, or bad practices (copy/paste). The problem is that these small issues accumulate over time, resulting in a chaotic, unmaintainable codebase. Often this problem arises from the project manager who decides "not to do this now" because other features are more important and the customer wants them. How to solve: Instead of accepting small issues, developers have to fix them immediately, either by themselves or by using automated tools like static code analysis (SonarQube). Furthermore, clean code standards have to be established so that many of these issues can be addressed. Code reviews in which these guidelines are checked should be done by all developers.

The Death March Anti-Pattern describes projects that are doomed from the start due to unrealistic requirements or deadlines.
This normally includes over-optimistic planning and poor risk management, resulting in a lack of trust among developers and pressure from stakeholders or management. Also, the company culture is based on control instead of trust, and opinions and warnings from experts are ignored. How to solve: Use empirical evidence to prove why deadlines or requirements cannot be fulfilled. Communicate early with stakeholders and set clear, understandable expectations where unrealistic features are stopped at the start. As the project manager, encourage the team to talk openly about risk and also listen to their concerns about the project.

If too much time is spent by decision-makers analyzing, discussing, and planning, an "Analysis Paralysis" occurs which prevents progress. Often, endless meetings are conducted without concrete solutions because everyone fears deciding the wrong thing. In these meetings simple problems get too much focus and very unlikely scenarios are discussed all the time instead of prioritising core functionality. The decision-makers try to find the perfect solution to the problem instead of creating a practical one that works. This leads to missed deadlines, wasted resources, and a demotivated team of engineers. How to solve: The most important step is setting a deadline for the decision, which limits the time for discussions. If not already done, the project team should adopt agile principles and start with a small, quick release and iterate based on feedback instead of planning the perfect solution before the start. This means the team should focus on an MVP (minimum viable product).

Scope Creep happens when the project requirements change all the time, by extending the features or adding changes after the project plan or concept was agreed on. Additionally, these new changes don't adjust the time, budget, and staffing of the project, which leads to missed deadlines, an exceeded budget, overworked teams, and, more importantly, a lower-quality software product. How to solve: Define a detailed project scope and document all requirements that are approved by all stakeholders. If changes are needed, use a change request process and evaluate each change regarding budget, timelines, and resources. If adding features is still needed, communicate early with the client and explain that these additional features will increase costs and push deadlines.

Brooks' Law states that adding manpower to a late software project makes it later. In a project, this means that if the features take too much time, adding a new developer does not save time; it instead causes further delays because onboarding has to take place, where the new team members need to be taught the codebase, the tools, and everything project-related. Additionally, the code quality will suffer because the team will rush to integrate the new developer, which will lead to less understanding of the codebase and therefore introduce more bugs and inconsistencies. Also, the experienced developers have to interrupt their tasks to train newcomers, which will lower their productivity. How to solve: To avoid Brooks' Law, the project team has to improve their planning and estimation to avoid missing deadlines. If this still fails, the software teams should scale gradually by adding software developers early and onboarding them properly, not near the end of the project. Start by using pair programming early to integrate them.
Another possibility would be to break the task down into independent units, where adding people to work on specific parts will not interfere because the parts can be done in parallel without dependencies.

Focus: deployment and runtime considerations. In software operations, a Snowflake Server is a special, unique server that was manually configured by someone and has no documentation, which makes it very difficult to reproduce the configuration, scale the server, or recover from an incident. As everything was done manually for the server, another server often does not have the exact same configuration, which leads to inconsistencies and problems when deploying software. This manual configuration also increases the risk of downtime, because if something happens, the engineer has to find out what to do to recover. How to solve: To face this problem, tools like Terraform or Ansible (Infrastructure as Code) should be used to guarantee that every server is the same, e.g. has the same versions, packages, and tools installed. Additionally, all server changes (or migrations) should be kept in a version-controlled repository (like Git) and everything should be applied using a CI/CD pipeline to automatically update/deploy software. To be fully flexible, the infrastructure should be immutable, which means that containerization like Docker or Kubernetes is used to host services.

Manual Deployment describes the steps that are done by hand to release new software packages. These steps often involve manually copying files, running database scripts by hand, and connecting to the server infrastructure through SSH. This process is usually very time-consuming and vulnerable to human error, because a typo can cause severe failures. Also, there can be inconsistencies between different environments, such as Dev/Staging/Demo/Release. Sometimes, if done manually, there is no backup strategy implemented, which makes it really difficult to roll back to a working version if the deployment fails or breaks something. How to solve: When making changes to multiple (or even single) systems, automated deployments should be used. To do this, implement a proper CI/CD tool such as Jenkins, GitHub Actions, GitLab CI, ArgoCD, or Woodpecker (that's what I use). Using this kind of tool will guarantee that each deployment uses the same steps, and normally there should be no human error. Additionally, a rollback strategy can be implemented in the CI/CD pipeline to have production-ready disaster recovery without manual interaction.

The Mystery Meat Navigation Anti-Pattern comes from the book "Web Pages That Suck" by Vincent Flanders and describes an interface that is confusing because it has several buttons, links, or other elements that don't have informative labels, forcing the user to hover and click to find out how it works. This increases the time users waste on the web page (or app) and leads to frustration because of the bad experience, resulting in very high bounce rates and lower engagement. How to solve: Instead of having unlabeled icons for navigation, use clear labels with descriptive text or a tooltip to show the user what happens if the specified link is clicked. Furthermore, apply state-of-the-art UI guidelines and conventions that are recognizable and recurring across multiple apps (e.g. Material Design for apps). Additionally, every developer should prioritize web accessibility, which will ensure that screen readers can use the app/website without any problems.
Dark Patterns describe deceptive or misleading UI/UX practices used to manipulate users into taking actions they don't want to take, such as subscribing to a newsletter, sharing their data, or spending money. Luckily, many countries have introduced laws against these shady tactics (e.g. the EU Digital Services Act). How to solve: This can be solved by being transparent and clearly communicating pricing and terms. Communicate with the user and optimize the opt-out mechanism by providing easy one-click links that unsubscribe from the subscription. Also, explain all fees and offer the ability to cancel early without making the user feel guilty for opting out (e.g. don't give them bad feelings for canceling).

In a team with multiple developers, the Hero Syndrome occurs if a single team member thinks he is "the expert" for everything and the only one who can solve the most complex problems. Although that person is highly skilled, their behavior prevents collaboration and the improvement of other developers. This creates knowledge silos, because only the hero knows about all kinds of problems/projects, and if he leaves or gets ill this knowledge is gone. Furthermore, this leads to the problem that knowledge is not shared among others and non-hero developers don't want to take ownership of tasks. Also, the hero often works more than he has to, which could lead to burnout. How to solve: The most important thing is to share the knowledge between all developers by doing pair programming and adding detailed documentation for every task. Also, promote collaboration by holding discussions, collecting decisions, and evaluating all of them prior to implementation. Develop a mentorship program in which the senior/principal developer takes the time to mentor junior or new developers. Keep in mind that this process only works if the hero wants to do it.

Everyone makes mistakes, but in a Blame Culture these mistakes are used to punish the developer, which means that if a mistake is made, other developers point fingers at the one who did it wrong. This results in a toxic environment where developers don't take the initiative for new tasks and maybe hide mistakes out of fear. The problem is that this hinders innovation, damages the morale of the team, and increases stress. How to solve: Establish a learning culture where mistakes are a natural part of growth and use retrospectives to focus on solutions and improvements instead of blame. Create an environment where people feel safe and can make mistakes without fear. Promote taking responsibility for tasks with the support of all team members and avoid punishing failure.

Security Through Obscurity is an Anti-Pattern where a system is secured by a hidden design or infrastructure that is not publicly documented. This means that the source code is hidden, the password scheme is unbelievably complex, and the details of the security system are confidential. The reason to do this is to discourage attackers from exploiting a system that they don't know. However, this is not a security mechanism, because nowadays attackers are constantly looking for new ways to exploit apps or websites. And if they break the obscurity, it is often very easy to exploit the weakness. How to solve: Instead of relying on obscurity, rely on best-practice security guidelines to implement strong authentication/encryption. On a yearly (or monthly) basis, conduct audits and penetration tests to actively check the system for vulnerabilities.
Manually review code and use appropriate techniques to secure the app. At the very least, observe the OWASP Top Ten vulnerabilities/risks and avoid introducing them into your app.

If a company only focuses on meeting the minimum regulatory requirements to secure an app rather than addressing all kinds of risks, the Checkbox Compliance Anti-Pattern occurs. The name Checkbox Compliance is used because security risks and compliance items are often noted down in a list, and before releasing, this list has to be "checked" to pass the audits. Unfortunately, attackers do not care about these guidelines, and if the app still has major security issues, it remains vulnerable to them. How to solve: Establish a mindset in which teams prioritize security as a fundamental part of the software they develop. Observe the OWASP Top Ten and go beyond company regulations to focus on the real threats that are relevant to the app. Integrate penetration testing, code reviews, and code analysis tools into the creation of software, and educate developers on security best practices by investing in cybersecurity training.

Recognizing and understanding Anti-Patterns in multiple areas of software development is critical for creating efficient, maintainable, secure, and extensible systems. Each domain mentioned in this article brings its own set of issues, but the fundamental problem is the same: poor practices in development, architecture, management, operations, user experience, team dynamics, and security. All of them can have long-term negative consequences for software quality and organizational success. Identifying these typical problems allows teams to take proactive actions to prevent them, cultivating a culture of continuous improvement, collaboration, and best practices. The key to conquering Anti-Patterns lies in:

Additionally, everyone should understand that no project is created without any Anti-Pattern. But with an open mind toward software development and non-toxic management, it is possible to identify Anti-Patterns, refactor the software, and optimize the process, which leads to better software and a friendly work environment for everyone. I hope this article gave you a quick and neat overview of Anti-Patterns. I would love to hear your feedback about this list. Furthermore, if you have already encountered Anti-Patterns, or found and refactored some, please comment here and explain what you have done. Also, if you have any questions, please ask them in the comments. I try to answer them if possible. Feel free to connect with me on  Medium ,  LinkedIn ,  Twitter , BlueSky , and  GitHub .

Pressure of time
Disinterest
Closed-Mindedness

Functionality Management: Ensure that the product delivers the features while balancing scope and feasibility.
Complexity Management: Simplify design and solutions to reduce unnecessary complexity of the software. Also, improve maintainability.
Performance Management: Address non-functional requirements like development speed, reliability, and scalability to improve quality.
Change Management: Adopt a flexible strategy to handle new requirements and unexpected changes without restarting the project.
IT Resource Management: Allocate and utilize all technical resources efficiently.
Technology Transfer Management: Plan the adoption of new technologies and ensure smooth integration with existing systems.

Awareness: recognizing the indications and symptoms of Anti-Patterns early on.
Proactive prevention: putting strategies, tools, and processes in place to reduce risks before they become ingrained.
Adaptability: being open to change, feedback, and incremental improvement.
Strong Leadership and Culture: promoting teamwork, information sharing, and accountability.

0 views
//pauls dev blog 1 year ago

AppImage On Linux Explained: How to Run Apps Without Installing

In every Linux distribution there are several ways to install software. One of the most common methods is downloading a or file and simply double-clicking it to install. But for some years now, you might have seen applications with a extension and wondered what these files are. In this short tutorial I want to explain AppImage and show how to use it on Linux to run applications. I also add some important tips to keep in mind when working with AppImage, and I provide a simple guide to add any AppImage to your Linux application launcher.

AppImage is a relatively new (compared to deb/rpm) packaging format (developed in 2004, originally named klik) that allows you to run applications on Linux with just a simple click. Unlike traditional packages like DEB or RPM, AppImages are compatible across all Linux distributions, making them available to all Linux users. While DEB and RPM offer a convenient way for users to install software, they are challenging for developers, who have to create different packages for each Linux distribution. This is exactly where AppImage steps in.

AppImage is a universal software packaging format that bundles software in a single file which works on all modern Linux distributions. In contrast to other packaging formats, AppImage does not install software in the traditional way by placing files in various system locations (and requiring root permissions). In fact, it does not really "install" the software at all, because it is a compressed image that includes all necessary dependencies and libraries to run the application. This means that when you run an AppImage file, the software starts immediately without any extraction or installation process. To delete an AppImage it is sufficient to simply delete the file.

Key Features of AppImage

On Linux any AppImage can be used by following three simple steps:

After downloading and running your AppImage file, it won't appear in your application launcher. This means that you cannot add it to your panels or launch it by opening the launcher. Making the AppImage software available in the Linux application launcher can be done by simply creating a desktop entry for the app. In Linux, a desktop entry is a configuration file that tells your desktop environment how an application is handled. Desktop entries can be created in the following two folders:

While the first folder is used globally by every user (and requires root), the second folder should be used. Switch to , create a new file with the extension and add the following code:

In this file, replace Appname with the name of the app which will be displayed in the application launcher. Also, replace with the full path to an icon file which is used for the app, if you have one. If you don't have an icon, you can omit this line, but having an icon is recommended for better integration. Lastly, replace with the full path to your AppImage file. After saving and closing this file, your system should automatically detect the change and your app can be found in the application launcher.

AppImage offers a simple and effective solution for running applications across different Linux distributions without the complexity of traditional installation methods. By using this universal, portable format, it makes software access easier for both users and developers. Also, AppImage applications don't install themselves on your Linux system, and deleting them is as easy as deleting the file.
In this simple tutorial, I tried to show how every Linux user can use AppImage applications and integrate them into their launcher for a better experience. Hopefully, this article was easy to understand and you learned something about AppImage applications. Do you have any questions regarding AppImage? Or do you have any feedback? I would love to hear your thoughts and answer all your questions. Please share everything in the comments. Feel free to connect with me on Medium , LinkedIn , Twitter , and GitHub .

No installation or compilation: Simply click and run the application.
No root permissions required: It doesn't modify system files.
Portable: Can be run from anywhere, including live USB environments.
Read-only apps: The software operates in read-only mode.
Can be removed easily: Simply deleting the AppImage file removes the complete software.
No sandboxing by default: AppImages are not sandboxed unless configured otherwise.
Not distribution dependent: Works across various Linux distributions.

Download the AppImage file
Make it executable (chmod +x )
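Since the original code snippets are not preserved in this listing, here is a hedged sketch of the steps and the desktop entry described above; the file name, icon path, and home directory are placeholders you will need to adapt.

```bash
# Make the downloaded AppImage executable and run it (placeholder file name)
chmod +x ~/Downloads/MyApp.AppImage
~/Downloads/MyApp.AppImage

# Create a per-user desktop entry so the app shows up in the launcher
cat > ~/.local/share/applications/myapp.desktop << 'EOF'
[Desktop Entry]
Type=Application
Name=Appname
Icon=/home/user/icons/appname.png
Exec=/home/user/Downloads/MyApp.AppImage
Terminal=false
Categories=Utility;
EOF
```

After saving the desktop entry, most desktop environments pick it up automatically; otherwise logging out and back in refreshes the launcher.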

0 views
//pauls dev blog 1 year ago

How To Setup Docker Registry In Kubernetes Using Traefik v2

More than a year ago I created a tutorial on How To Install A Private Docker Container Registry In Kubernetes. In that tutorial, I was using Traefik to expose the Docker Registry, which allows accessing the registry through HTTPS with a proper TLS certificate. Unfortunately, the tutorial no longer works because the API version of the Kubernetes custom resources used in the Traefik files has changed: all IngressRoutes now have to use the new API version. Because of this, I recreated the Docker Container Registry setup tutorial in a simplified way. For more information or explanation, switch to my previous tutorial .

First, we create a namespace that we use for our registry:

To add the PVC we create a file and add:

Then let's deploy the file to our Kubernetes cluster using:

Before deploying our Docker Container Registry, we should create a secret that we use to authenticate when pushing/pulling. To simplify this step, I created a script that can be used for this purpose. Create a new file and add:

To generate the files execute:

This will create two files ( and ) in your destination folder with the needed strings.

Now, to install and deploy the registry we will use Helm, the Kubernetes package manager. First, we will add the Helm repository and create a which will contain our specific data.

1. Create :
2. Add and update the Helm repository
3. Install the Docker Registry

Now we can add the Traefik IngressRoute which will expose the Docker Container Registry. To do this, we have to create a file called and add:

This file can then be easily applied to our Kubernetes cluster by executing:

The last step is to test whether the Docker Container Registry is working. To check this, we can simply download any Docker image and push it to our newly set up Container Registry. First, we pull an image from Docker Hub by executing:

Then we tag the image with a custom name and add our Docker Registry domain name as a prefix:

Then we have to log in to our Docker Container Registry:

Now, we can try to push our personal NGINX image to our private Container Registry by executing:

If there are no errors, our Kubernetes Docker Container Registry is working and we can start using it.

This simple update to my previously written tutorial, " How To Install A Private Docker Container Registry In Kubernetes ", should help you deploy a private Docker Container Registry in your Kubernetes cluster with a newer version of Traefik. If you need further information on how to apply this tutorial or have any questions, please ask them in the comments. I will try to answer them if possible. Feel free to connect with me on  Medium ,  LinkedIn ,  Twitter , and  GitHub .
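Because the manifests were stripped from this listing, here is a hedged sketch of what the Traefik IngressRoute for the registry might look like. The apiVersion shown (traefik.io/v1alpha1, the newer Traefik API group), the namespace, host, service name, and port are my assumptions rather than the article's original values.

```yaml
apiVersion: traefik.io/v1alpha1        # newer Traefik API group (assumption)
kind: IngressRoute
metadata:
  name: docker-registry
  namespace: registry                  # placeholder namespace
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`registry.example.com`)  # placeholder domain
      kind: Rule
      services:
        - name: docker-registry            # Helm release service name (assumption)
          port: 5000
  tls:
    certResolver: letsencrypt              # placeholder certificate resolver
```

Applying this with kubectl in the registry namespace should expose the service over HTTPS, assuming the entry point and cert resolver names match your Traefik installation.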

0 views
//pauls dev blog 1 year ago

How To Deploy Portainer in Kubernetes With Traefik Ingress Controller

This tutorial will show how to deploy Portainer (Business Edition) with Traefik as an Ingress Controller in Kubernetes (or k8s) to manage the installed services. To follow this tutorial you need the following:

Helm is the package manager we will primarily use for our Kubernetes cluster. We can use the official Helm installer script to automatically install the latest version. To download the script and execute it locally, we run the following command:

To access our Kubernetes cluster we have to use  and supply a file which we can download from our provider. Then we can store our kubeconfig in the environment variable to enable the configuration for all following commands:

Alternatively, we can install "Lens - The Kubernetes IDE" from https://k8slens.dev/ . I would recommend working with Lens!

Running Portainer on Kubernetes needs data persistence to store user data and other important information. During installation with Helm, Portainer will automatically use the default storage class of our Kubernetes cluster. To list all storage classes in our Kubernetes cluster and identify the default, we execute:

The default storage class is marked with after its name. As I (or we) don't want to use the current default, I switch the default storage class by executing the following command:

It is also possible to use the following parameter while installing with Helm:

We will deploy Portainer in our Kubernetes cluster with Helm. To install with Helm we have to add the Portainer Helm repository:

After the update finishes, we install Portainer and expose it via NodePort because we utilize Traefik to proxy requests to a URL and generate an SSL certificate:

With this command, Portainer will be installed with default values. After some seconds we should see the following output:

If you are using Lens, you can now select the Pod and scroll down to the Ports section to forward the port to your local machine: Press Forward and enter a port to access the Portainer instance in your browser to test Portainer before creating the Deployment: Press Start and a new browser window will open showing the initial registration screen for Portainer, in which we can create the first user: After the form is filled out and the Create user button is pressed, we have successfully created our administrator user and Portainer is ready to use. If you have installed the Business Edition, you should now insert your license key, which we got from following the registration process on the Portainer website .

To make Portainer available with a URL and an SSL certificate on the web, we have to add an IngressRoute for Traefik. The IngressRoute will contain the service name, the port Portainer is using, and the URL on which Portainer can be accessed. We should save (and maybe adjust) this code snippet as and apply it to our Kubernetes cluster by executing:

Congratulations! We have reached the end of this short tutorial! I hope this article gave you a quick and neat overview of how to set up Portainer in your Kubernetes cluster using Traefik Proxy as an Ingress Controller. I would love to hear your feedback about this tutorial. Furthermore, if you already use Portainer in Kubernetes with Traefik and use a different approach, please comment here and explain what you have done differently. Also, if you have any questions, please ask them in the comments. I try to answer them if possible. Feel free to connect with me on  Medium ,  LinkedIn ,  Twitter , and  GitHub . Thank you for reading, and  happy deploying!
A running Kubernetes cluster or a Managed Kubernetes running a Traefik Ingress Controller ( see this tutorial )
A PRIMARY_DOMAIN
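The original commands are not included in this listing, so here is a hedged sketch of the Helm installation described above. The repository URL, release name, and namespace follow Portainer's public documentation but should be treated as assumptions rather than the article's exact values.

```bash
# Add the Portainer Helm repository and install with a NodePort service
helm repo add portainer https://portainer.github.io/k8s/
helm repo update
helm install portainer portainer/portainer \
  --namespace portainer --create-namespace \
  --set service.type=NodePort
```

The Traefik IngressRoute then simply points at the resulting portainer service (HTTP port 9000, or 9443 for HTTPS) and your PRIMARY_DOMAIN, following the same pattern as the other IngressRoutes on this blog.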

0 views
//pauls dev blog 1 year ago

How To Create An SSH-Enabled User With Public Key Authentication On Linux

If you are reading this, you are struggling to create an SSH-enabled user on your Linux system and are looking for a way to solve this. Don't search any longer, you are in the right place! In this tutorial, I will show you how to set up a user on your Linux system and enable SSH login using public key authentication.

To add an SSH-enabled user, the user needs an SSH key pair, which can be generated with the following procedure on a Linux system: Run the command and provide the type of the key by appending and the length of the key by appending . To create an RSA key with a length of 2048 bits use this:

After running this command, the CLI will prompt you to enter a path to a file in which the key will be saved. It defaults to . Adjust the path to your needs and hit Enter. In the next step, you will be prompted to enter a passphrase, which is not mandatory but should be used to protect the private key against unauthorized use. Enter your passphrase and hit Enter to generate the key. After the process has finished you can find the SSH key pair in your selected folder:

Once the new user has an SSH key pair, you can log into your Linux system and start adding the user to the server. The process to create a new user is straightforward:

Enabling sudo can be done in two different ways: using visudo or adding the user to the sudo group. To enable the command for the previously created user, you have to edit the file using the command. To do this, execute (as root) and search for the following line: . Then append

If you want to avoid entering the password every time you run a command with sudo, you can instead append

To close visudo, press and save your changes by pressing .

To add the user to the sudo group, you have to use the command, which should be executed as root (or with sudo): Executing this command will add new_user to the sudo group, granting full sudo privileges.

Normally, if you followed this guide, you should be able to log in as the new user by executing:

If for any reason this does not work, try any or all of the following steps:

On Debian/Ubuntu: On RedHat/CentOS:

If you cannot connect, check the for the following values:

If you see this message within the SSH logs, it means that the user account is locked because it was disabled or locked due to an administrative policy.

Having SSH access with public key authentication is essential when managing a Linux server, and knowing how to set it up yourself is therefore a vital skill. By following my guide, you should have learned how to configure SSH access with public key authentication, which enhances security and improves ease of use. Hopefully, this article was easy to understand and you learned how to set up SSH public key authentication for your server. Do you have any questions regarding SSH and public key authentication? Or do you have any feedback? I would love to hear your thoughts and answer all your questions. Please share everything in the comments. Feel free to connect with me on Medium , LinkedIn , Twitter , and GitHub .

Become the user by typing: and providing the root password.
Create the new user with the useradd command:
Now set a password for the user (otherwise it stays locked):
Create a directory within the new user's home directory:
Create a file called within the directory and save the public key of the new user.
Change the owner/group of the home directory to the new user:
Set the correct permissions for the .ssh folder:
Set the correct permissions for the authorized_keys file:
Restart the SSH daemon on your server:
The or directives should not block the new_user name and, if used, should explicitly allow the new user.
First, check the account status by using: . If locked, the result should show , where the indicates it is locked.
Unlock the user account:
Check the account status (again):
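As the commands themselves were stripped from this listing, here is a hedged sketch of the whole procedure with placeholder names (new_user, the key file); adapt paths and names to your environment.

```bash
# Generate an RSA key pair (2048 bits) on the client
ssh-keygen -t rsa -b 2048 -f ~/.ssh/new_user_key

# On the server (as root): create the user and set a password
useradd -m -s /bin/bash new_user
passwd new_user

# Install the public key (copy new_user_key.pub to the server first)
mkdir -p /home/new_user/.ssh
cat new_user_key.pub >> /home/new_user/.ssh/authorized_keys
chown -R new_user:new_user /home/new_user/.ssh
chmod 700 /home/new_user/.ssh
chmod 600 /home/new_user/.ssh/authorized_keys

# Optionally grant sudo and restart the SSH daemon
usermod -aG sudo new_user
systemctl restart sshd    # the unit is called "ssh" on Debian/Ubuntu
```

Afterwards a login with `ssh -i ~/.ssh/new_user_key new_user@your-server` should succeed without a password prompt (apart from the key's passphrase).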

0 views
//pauls dev blog 1 year ago

How To Unhide Titlebars on Maximised Windows in KDE Plasma 6

Since I updated my Garuda Linux a few days ago, it switched from Plasma 5 to Plasma 6, which removed a widget that I was using in my top panel: "Application Menu". This led to a major problem (for me) when I maximized my applications, because instead of an application menu the Global Menu widget was used. This can be seen in the following screenshot while maximizing the Brave browser:

The problem was that the Global Menu widget missed some adjustments that I needed:

Normally, this wouldn't be a problem because every application has its own title bar. Unfortunately, the default behavior of my Garuda Linux installation was configured so that maximizing any window would hide the title bar, which meant that for some applications I couldn't minimize, move, or close them through the top bar. See the marked spots on the following screenshot:

Fortunately, I found out that there is a hidden setting in KWin that can be used to enable or disable title bars and window borders when the application window is maximized. To do this, I had to open the following file:

In this file, I searched for the section and changed the setting of to false so that it looked like this:

After logging out and back in, the title bars were available all the time and I could maximize with a double click and move the window without any problems.

As this had been bugging me for the last few days and I spent quite some time searching for a fix, I decided to create a small post for everyone who will ever face this problem. Also, if you are one of those people who do not want title bars at all, you can use this setting to remove them. So I guess it is important for everyone :)

Extend the menu to use the whole panel
Enable double click to reset full screen
Enable click and drag to move the window to another screen, maybe
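For reference, here is a hedged sketch of the KWin tweak described above. The file path (~/.config/kwinrc) and the key name reflect how this setting is commonly stored in Plasma and are my assumption of what the stripped snippet contained.

```ini
# ~/.config/kwinrc (assumed path)
[Windows]
BorderlessMaximizedWindows=false
```

Setting the value to true instead removes title bars and borders on maximized windows, which is the opposite behavior for those who prefer a cleaner look.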

0 views
//pauls dev blog 1 year ago

How To Boost Linux Server Performance By Enabling Swap

A very practical strategy for preemptively avoiding out-of-memory errors in all kinds of applications is increasing the Swap space on your Linux server. By allocating Swap space, you effectively create a special area for the operating system to unload applications from RAM to prevent memory exhaustion. In this article, we will learn how to create a Linux Swap file to boost our server performance. With detailed instructions and practical examples, you will understand the implementation process and be ready to optimize your server. If you want to skip everything and just try out my recommendations, please jump to the section " Script To Do Everything Automatically ".

On a Linux system, Swap is a special area on your HDD that is designed to temporarily store RAM data to extend the "memory" with some limitations. Because the Swap file is on the HDD, it will be significantly slower than your RAM. Therefore, it will only be used as the RAM gets full, to keep data from running applications in "memory" instead of throwing it away. So the Swap file is a useful utility if your RAM runs full, and a kind of safety net against out-of-memory exceptions on low-RAM systems. To learn more about the concept and get more insights into the process of "swapping" you should read this Wikipedia page where "Memory Paging" is explained in detail:

After reading (or skipping) the Wikipedia page you can now follow this guide by applying all of the commands explained below.

The first step is to check if our server already has a Swap file registered. To list all configured Swap files we have to type:

This command does not produce any output if the system does not have any Swap file available. Additionally, we can use the utility on Linux to display information about the RAM, which will also display information about the Swap file:

In this snippet, we see that our server has a total of 8G of RAM and no Swap file configured.

Assuming we have enough free HDD space left, we can now create a new Swap file within our Linux filesystem. To do this we will create a file named in our root ( ) directory with a utility tool called (it should be installed on every Linux machine). As our server has 8G of RAM we should create a Swap file that is at least the same size. If you have a different amount of RAM, please adjust the following command to your needs:

Before we turn the allocated space into a Swap file, we should lock down the permissions so that no user except root can access it. If we don't do this, every user on the server can access the file and read its contents. This would be a major security issue and has to be avoided at all costs. To make the file accessible only by root we use :

Now, to check if the file exists and has the correct permissions, use:

If your output looks the same, you can mark the Swap file as Swap space by executing:

The command should output the following:

The last step is enabling the Swap file by executing:

After executing all the previously shown commands, we can use and to validate that the Swap file is now present on the Linux server:

Unfortunately, everything done in the previous chapters will be gone after we reboot our server. Luckily, we can change this by adding the Swap file to the file. This can be done in two ways:

1. Use echo and tee:
2. Manually add it at the end of the file with your preferred editor.

When using a Swap file there are two settings that I personally think are important to adjust.
First, there is , which controls the tendency of the kernel to reclaim the memory used for caching inode and directory objects. The second setting is , which controls how much the kernel prefers Swap over RAM. Both settings should be adjusted to optimize Swap file usage on our server. For more settings just read the Linux kernel documentation.

Before we change the value of we should check the current value by executing:

On my server the initial value was set to , which is really high and will remove inode information too quickly. Because of this, we should set the value to 50 by using the command:

As with the Swap file, this setting is only valid until we reboot our server. To persist the setting we have to adjust the within the folder. Open with your preferred editor (I use Vim) and add the following line at the bottom:

As before, the first command we execute is used to check the current value of the Swappiness:

On my server, this setting was 60, which is a good default for a desktop computer that often runs a GUI and many different programs. But when used as a server, the Swappiness should be much lower to boost Swap and RAM performance. You can play around with the settings, but I recommend something between 10 and 20. As done before, we have to use to set the Swappiness on our server:

To persist the setting, we open with our preferred editor and add the following line at the bottom:

If you trust me ( obviously you do ) and just want to enable the Swap file, set the Swappiness, and adjust the Cache Pressure, you can simply copy the following script: Then adjust your Swap file size (remember it should align with your RAM) and paste it into the CLI connected to your server.

Managing the Swap space on Linux servers is important for preventing memory errors and boosting performance. If you followed this guide, you should now have set up and optimized your Swap space, which should give your server a boost. Hopefully, this article was easy to understand and your server is faster than ever and will not run into any out-of-memory errors or reboots (like mine did before I set this up). Do you have any questions regarding Swap? I would love to hear your feedback and your thoughts and answer all your questions. Please share everything in the comments. Feel free to connect with me on Medium , LinkedIn , Twitter , and GitHub .
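The "do everything automatically" script mentioned above was not preserved in this listing, so the following is a hedged reconstruction based on the values discussed in the article (an 8G swap file, a swappiness of 10, and a cache pressure of 50); the /swapfile path is an assumption, and you should adjust the size to match your RAM before running it.

```bash
#!/bin/bash
set -e

# Create and secure an 8G swap file (adjust the size to your RAM)
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist the swap file across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Tune and persist cache pressure and swappiness
sudo sysctl vm.vfs_cache_pressure=50
sudo sysctl vm.swappiness=10
echo 'vm.vfs_cache_pressure=50' | sudo tee -a /etc/sysctl.conf
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf

# Verify that the swap space is active
sudo swapon --show
free -h
```

Running the script once is enough; a second run would append duplicate entries to /etc/fstab and /etc/sysctl.conf, so check those files before re-running it.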

0 views
//pauls dev blog 1 year ago

Self-Host Umami Analytics With Docker Compose

Umami is an open-source, privacy-centric, and lightweight web analytics service built with JavaScript (Next.js) running in a Node.js environment. It offers a fantastic alternative for those who want to be free from conventional analytics platforms that track your data and, more importantly, your visitors. Another thing that makes Umami really special is its user-friendly design, making it an ideal self-hosted alternative to Google Analytics.

In this tutorial, you will learn how to unleash Umami's potential in a straightforward setup using Docker Compose. There is no need for complicated configurations or complex processes because Umami is designed around simplicity to ensure an easy self-hosted experience. Furthermore, Umami is open-source and grants you full visibility and control, allowing you to customize it according to your unique requirements. If you think about analytics you are often concerned about privacy, aren't you? Luckily, Umami has got your back, prioritizing the protection of user data while delivering the analytics insights you desire. Now, get ready to dive into this tutorial and learn how to set up Umami with Docker Compose. If you follow this and deploy it in your own cluster, you will not only have a privacy-focused analytics solution in your toolkit but also the satisfaction of being the master of your data. Let's hope this tutorial empowers you to elevate your web analytics game with Umami – the go-to open-source option for privacy-conscious developers like yourself.

This tutorial will deploy Umami either on a server running Docker (with Docker Compose) or within a server cluster utilizing Docker Swarm. The tutorial assumes that we already:

Unfortunately, this tutorial will not show how to configure a domain with a TLS certificate because I normally use Traefik to automatically create SSL-secured domains based on the configuration in the Compose file. Luckily, I have created several tutorials about how to set up your own Traefik Proxy on your server or your server cluster: On your single-instance server using Docker: On your server cluster using Docker Swarm:

The first step is to create a folder that will contain all files needed to deploy Umami either on a single server or on a server cluster. The structure should look like this, containing two files and . Switch to the folder and create a new file containing the following snippet:

In this snippet, we define the which Umami will use. As we can see, it uses the same values as , , and . Keep this in mind if you want to change it. Additionally, you should generate a random salt and put it into the variable. To generate something, just hit your keyboard or use this command:

Now, we can create our Docker Compose file which will be used to deploy the service. As a starting point for our configuration, we can download the Compose file from Umami's GitHub repository, which will get some modifications: The resulting Compose file to deploy a single Docker instance using Traefik Proxy looks like this:

Some important information:

After all files are created, we have to define our environment variable by exporting it: Then, we can start the container by executing: Switch to https://umami.PRIMARY_DOMAIN and log in with the default credentials: To stop the container at any time, we have to switch to the umami folder and execute the following:

To set up Umami within a Docker Swarm environment, we have to adjust the previously explained Docker Compose file by adding the deploy keyword.
Additionally, we have to add placement constraints for the database service to guarantee that it is deployed on the correct server within the Docker Swarm cluster. The resulting Docker Compose file will be:

In this file, we have moved all below the keyword to enable Traefik configuration while utilizing a Docker Swarm environment. Furthermore, we have added the deploy keyword containing placement constraints in the service: This constraint ensures that the db service is always deployed on the worker node within the Docker Swarm which has the corresponding label. To set the label (if not already done), we have to execute the following command on the Docker Swarm manager node:

To deploy the service in our Docker Swarm, we have to export the by using in our CLI and then execute: This command uses the Compose command to load the file settings into the Compose file before using to deploy the stack in our Docker Swarm environment. Change to an appropriate name like . To stop the stack, use

Now that our Umami instance is running, we should log in and immediately change our Umami credentials from the defaults to something more secure. Then we use the Add Site button in the top right corner to add the website we want to track. Now, we can switch to Tracking code in the tab layout. The tracking code looks like this: I would recommend adding the keyword to it and then inserting it into the header section of your website. Afterward, tracking of our website will start and we can instantly see our visitors in the real-time view in Umami.

To sum up, Umami really impresses me with its straightforward approach to website analytics, particularly when compared with other self-hosted alternatives like Countly, which I had previously used and found to be overly bloated. Furthermore, one of Umami's standout features is its straightforward setup process using Docker, making it accessible even to developers who are less familiar with advanced technical concepts. Another important aspect of Umami is that it is a trustworthy option, prioritizing user data protection without sacrificing functionality. As software developers, it is paramount to select analytics tools that not only provide valuable insights but also prioritize the security and privacy of user information. By choosing Umami, we can ensure that our analytics remain both effective and ethically sound in today's digital landscape.

Do you have any questions regarding this tutorial? I would love to hear your thoughts and answer your questions. Please share everything in the comments. Feel free to connect with me on Medium , LinkedIn , Twitter , and GitHub . Thank you for reading, and happy analyzing your data!

have any kind of website to monitor
have Docker and Docker Compose installed
optionally have a server cluster utilizing Docker Swarm
have a domain to publish the deployed Umami instance

We will remove the environment variables for the database because we have already extracted them into an file.
We add Traefik configuration to use automatic TLS certificate generation for a URL.
We add to avoid getting blocked by ad blockers.

In this Compose file, we added several which are needed to deploy the service using a Traefik Proxy that is installed within our Docker environment, as described in this tutorial:

By adding we can change the filename of the tracking script. This allows us to avoid being blocked by many ad blockers. I just called it to have a very generic name. This setting is totally optional, but I recommend doing it.
See this GitHub discussion for more information.

The database container is not accessible from the public network and only communicates with the umami web container through the network.
The web container is part of the external network (see previous post) and the network used to communicate with the container.
All data is persisted within a named volume called .
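Since the Compose snippets were stripped from this listing, here is a hedged sketch of the Swarm-specific pieces described above; the label name, node name, and stack name are placeholders, not the article's exact values.

```yaml
# Excerpt: Swarm-specific additions for the database service (placeholder label)
services:
  db:
    deploy:
      placement:
        constraints:
          - node.labels.umami-db == true
```

```bash
# Label the worker node that should hold the database volume (run on a manager node)
docker node update --label-add umami-db=true worker-node-1

# Interpolate the .env values and deploy the stack; docker stack deploy does not
# read .env files itself, so the Compose config is rendered first
export $(grep -v '^#' .env | xargs)
docker stack deploy -c <(docker compose config) umami
```

The rendered-config trick is one common way to get environment substitution into a Swarm deployment; with the older docker-compose binary, `docker-compose config` serves the same purpose.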

0 views
//pauls dev blog 1 year ago

Gratitude and Exciting Updates for the New Year 2024 🚀

As we kick off 2024, I wanted to take a moment to express my sincere gratitude to you for being an integral part of pauls dev blog . Your unwavering support, engagement, and enthusiasm for the content shared on my blog mean the world to me. Reflecting on the past year, I am humbled and thrilled to see the incredible growth of our community. With nearly 500 email newsletter subscribers, I am inspired by the collective enthusiasm for the diverse content covering topics like Docker, Java, Kotlin, JavaScript, TypeScript, productivity, and personal growth.

To my paid subscribers, I want to extend a special thank you. Your commitment and support have played a pivotal role in enabling me to continue creating high-quality tutorials and articles. Your contributions are not just subscriptions; they are investments in the growth and sustainability of this community. Without your generosity and commitment, my blog wouldn't be what it is today. I'd also like to acknowledge and thank those who have supported the blog through Buy Me a Coffee . Your thoughtful gestures, like buying a virtual coffee, have made a significant impact, and I'm genuinely appreciative of your kindness.

As we step into this new year, I am thrilled to share that I've been hard at work crafting content that I believe will resonate with you. In the early weeks of 2024 (and at the end of 2023), three articles were created, each aimed at providing insights and value, although not targeted at my usual subscribers: These articles are just the beginning of what's coming for pauls dev blog this year. I am committed to providing content that not only meets but exceeds your expectations. Your feedback is invaluable, and I encourage you to share your thoughts and suggestions as we navigate this exciting journey together. You can help me shape the direction of this blog.

For those who wish to take their support to the next level, I invite you to consider becoming a paid subscriber . Your contributions will help me sustain the blog and cover the costs that I have. If you prefer a more casual way to support, feel free to visit my Buy Me a Coffee page to show your appreciation with a virtual coffee.

Thank you once again for being an essential part of my blog. Your engagement fuels my passion for knowledge-sharing, and I'm genuinely looking forward to the meaningful interactions and exciting content that await us in the coming months. Wishing you a fantastic year filled with growth, learning, and success! And if you do not already, feel free to connect with me on  Medium ,  LinkedIn ,  Twitter , and  GitHub . Warm regards, Paul from pauls dev blog

"How Proper Tidying Up Makes You More Productive": If you carry too much baggage with you, you can hinder success at work.
"How To Learn From Mistakes: Navigating The Complex Journey To Personal Growth": Unraveling the Challenges, Cultivating a Positive Mistake Culture, and Empowering Self-Improvement in a Judgment-Free Environment
"How To Boost Your Career With Games": Unlocking Cognitive Skills, Mental Health Benefits, and Strategic Advantages for Career Growth

0 views
//pauls dev blog 1 year ago

How To Boost Your Career With Games

Better than their reputation: video games are supposed to promote intelligence and health, and they can even be a career booster. How does that work? We have the results of several studies.

Gamers have to deal with prejudices: energy drinks, gaming all night long, living in a virtual world. Honestly, some clichés don't sound that unrealistic, and addictive gaming has disadvantages, which is why it should not be underestimated. But anyone who has to repeatedly defend their enthusiasm for video games can now breathe a sigh of relief: gaming skills help us to be successful at work, support our health, and train skills, according to researchers. Several scientific studies show that even children who start playing at an early age go on to develop strong cognitive skills. Sounds too good to be true? The most important results are summarized in the next chapters.

In a research study from 2020 , neuroscientists were able to demonstrate the positive effect of gaming. Children in particular who were actively involved in video games before puberty later showed improved cognitive abilities in tests, including after playing the classic "Super Mario 64" for several hours. The scientists emphasize that study participants process 3D objects better overall and that their memory is also stronger because they retain information better. Other scientific tests have also shown that players have good spatial thinking skills. The neuroscientists Jürgen Gallinat and Simone Kühn were able to show, among other things, that in gamers the gray matter in the brain increases with growing gaming experience, which strengthens the ability to think spatially. The research can be found here .

Additionally, Kühn and Gallinat come to another conclusion: regular video gaming not only trains our cognitive skills but can also combat depression and loneliness. One reason for this is the sense of community when gamers share experiences and successes with one another. The brain releases happiness hormones that create connection and satisfaction. Topics such as fear of failure, solidarity, and challenges that are overcome in everyday life, and that play a particular role in psychological problems, are also present and can contribute to a positive experience. Research shows that regular play can help in many ways and that the effect is often underestimated.

Professional life is by no means linear. We experience this at the latest when we realize that we have ended up in the wrong profession, drop out of our studies, have to change jobs, or are fired. Anyone who plays games already knows the problem but has the opportunity to take creative approaches. Failure does not mean giving up completely, because it can mark the start of a new life or a new possibility. Gamers sometimes have to take unusual paths until they defeat the final boss - and it is the same in real life. A wrong decision in career planning does not mean the end . Creativity and decision-making are important gaming skills that can help us make smart decisions outside of the virtual world.

Another fascinating research study comes to the conclusion that gamers can learn particularly quickly. The fact that quick comprehension is required in many professions is not new. In this area, passionate gamers benefit because, according to the study, they can categorize new knowledge much better than non-gamers. The authors emphasize their belief that video games actively stimulate specific brain regions and thus contribute to learning.

The rapidly changing world of work requires a strong ability to learn in many professions and the ability to adapt quickly. Video gamers therefore have a significant advantage if we go by current studies. In general, a growth mindset, the attitude of learning from mistakes, and the willingness to courageously take new steps again and again are becoming more important.

Gamers always have a goal in mind: improve, build a team, or reach a new level. That's why people who are actively involved in gaming associate their hobby with meaning. Career planning and everyday work also require us to think about our goals, set them, and pursue them, because without a concrete goal the "why" is lost. In particular, more and more young employees and young talents are prioritizing careers that are in line with their values and that offer meaning - because without them they are often unwilling to commit. The gaming ability to prioritize and pursue your goals can therefore also help you stay steadfast and true to your principles.

Video games not only require creativity but also demand that you think and act tactically. This is something you want to aim for in your everyday working life and in your career, and it is important to demonstrate this ability every day. Whether you proceed analytically or imagine something abstract: tactical thinking describes how you achieve your goals by developing concrete plans and implementing them - especially in a way that is adapted to the situation.

The world of work is becoming more international and companies are becoming increasingly cosmopolitan. It's no wonder that knowledge of the English language is not only in demand but also an advantage in many industries. Online gaming offers the opportunity to improve your own language skills. The learning effect is based primarily on voluntary action: you don't speak and write in a foreign language because you have to - but because you really want to be part of a community. English skills can therefore be improved by playing video games.

Gaming can also improve the feeling of togetherness. Especially for people who have difficulty showing communication skills, approaching others, or sharing feedback and criticism, gaming offers a kind of learning opportunity to develop a better sense of team spirit. So it can't hurt to join others, open up, and celebrate successes and failures together - because all of this awaits us in our careers.

Whether you're alone in the gaming world or not, critical thinking is becoming an important decision-making tool for many gamers. Even at a younger age, passionate gamers develop a sensitivity for details and question what they see and experience. Thinking independently also helps at work: decisions are not made dogmatically but are self-reflected.

The world of gaming offers more than just entertainment because it provides us with a unique set of skills and advantages that can significantly impact our careers. However, the stigma surrounding gaming is gradually fading away as researchers consistently highlight the positive effects on cognitive abilities, mental health, and professional development. From improved cognitive skills to enhanced decisiveness and strong learning abilities, gamers are equipped with many valuable traits that are applicable in the real world. The ability to set and pursue goals, sharpen tactical thinking, improve English skills, and strengthen teamwork are just a few examples of how gaming contributes to personal and professional growth.

While there are downsides, such as addiction and the need for moderation, the studies presented show several benefits that gaming can bring us. As we navigate the rapidly changing, complex world of work, the findings suggest that embracing a growth mindset, a willingness to learn from mistakes, and the ability to adapt quickly are becoming increasingly vital for everyone. So, gaming is not just a pastime; it's a training ground for skills that extend beyond the screen. Remember this the next time you find yourself immersed in a virtual world: you might be learning the skills that will push you forward in your career and contribute to your overall success. Embrace the positive aspects of gaming, and continue to learn and grow. Let your gaming journey be a catalyst for excellence in both your virtual and professional endeavors - it will take you to the next level .

What do you think about gaming? Are you a gamer yourself? Did you encounter something at work that you could handle better because of gaming? Is there anything else you want to add? Also, do you have any questions? I would love to hear your feedback and your thoughts and answer all your questions. Please share everything in the comments. Feel free to connect with me on  Medium ,  LinkedIn ,  Twitter , and  GitHub . Thank you for reading, and Now Start Playing Games! ⛔🖱️⛔🖱️

0 views