Posts in Linux (20 found)
neilzone 3 days ago

How I interact with PDFs using Free software and Linux in 2025

This blogpost contains some brief thoughts on how I interact with PDFs. It is not an exhaustive list of Free software PDF tools for Linux. I know that there are other options (including Okular, and LibreOffice's Draw); some I have tried and some I have not.

I use Gnome's default tool, evince, also known as "Document Viewer". The function for leaving notes on a PDF is pretty useful, although I don't know how well it works if I wanted to share that file-with-notes with someone else, especially someone else using different software.

When I review a contract or an article, I still like to do so with a pen in my hand. I don't know why. Since I am using a ThinkPad with a touchscreen, and support for a pen, this is no problem. I convert the contract/article into PDF, most often using LibreOffice Writer's export function. I then open the resulting PDF in Xournal++, which is a superbly useful piece of software.

I like writing advice notes in Markdown, but I tend to convert them to PDF before sharing them as a "final" version. I am using pandoc and typst to do this, and it works really well, resulting in a nicely-formatted PDF. If I am converting a document into PDF for ease of scribbling, I use LibreOffice Writer's export function.

For scanning documents to PDF and OCRing them, I use paperless-ngx. I don't appear to have blogged about it, but the gist is that I have configured our Brother MFC L2750DW to scan to a directory on a server as PDF, and paperless-ngx watches that directory and then ingests the resulting files. It works well, although one day I should move it from a Raspberry Pi 4 to something a little beefier.

I use three main tools for "doing stuff" to PDFs. PDFArranger makes adding pages, deleting pages, splitting files, rotating pages, and moving pages around very easy. I use it quite a lot. I discovered Stirling-PDF more recently, and I've self-hosted an instance of Stirling-PDF for just under a year now. It is a web interface for a range of tools which offer quite a lot of PDF-related functions, and it works well on both desktop and mobile browsers. For batch changes to PDFs, I like PDFtk; apparently, there is a GUI version, but I use the command line tool. I find it especially useful for rotating all the pages in a file because someone has scanned a document poorly.

If someone needs me to sign a PDF, in the sense of adding my name to it, I tend to use Xournal++. Rarely do I need a digital representation of my scribbled signature, but when I do, I also use Xournal++, either scribbling on the document or adding an image. For an electronic signature, I've experimented with LibreOffice's functionality, but I didn't get as far as I wanted. I should look into it some more. I ran an instance of DocuSeal for a while, but frankly I just did not use it enough to justify keeping it going. I was impressed with it and, had I needed its functionality more often, I imagine that I would have kept it.

This is, fortunately, not something that I need to do day to day. I have a clunky-but-seems-to-work approach, for those very rare cases when I need it. If there is a better approach to doing this using Free software on Linux, I'd love to hear about it. I haven't had to do this in a while, but I have a note in my wiki that says that this ghostscript incantation has worked for me in the past:
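The wiki's exact incantation isn't reproduced in this excerpt. Purely as an illustration, and not necessarily the same task, Ghostscript is often used for this kind of clunky flatten-everything job by re-rendering a PDF to page images (the file names here are placeholders):

    gs -sDEVICE=pdfimage24 -r300 -o flattened.pdf input.pdf

The pdfimage24 device rasterises every page as a 24-bit image at 300 dpi, so the output PDF contains no remaining text layer or interactive content.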

Jeff Geerling 5 days ago

Resizeable BAR support on the Raspberry Pi

While not an absolute requirement for modern graphics card support on Linux, Resizable BAR support makes GPUs go faster by allowing them to throw data back and forth on the PCIe bus in chunks larger than 256 MB. In January, I opened an issue in the Raspberry Pi Linux repo, Resizable BAR support on Pi 5.

マリウス 6 days ago

Alpine Linux on a Bare Metal Server

When I began work on 📨🚕 ( MSG.TAXI ), I kept things deliberately low-key, since I didn't want it turning into a playground for architecture astronauts. For the web console's tech stack, I went with the most boring yet easy-to-master CRUD stack I could find that doesn't depend on JavaScript. And while deploying Rails in a sane way (without resorting to cOnTaInErS) is a massive PITA, thanks to the Rails author's cargo-cult mentality and his followers latching onto every half-baked wannabe-revolutionary idea, like Kamal and more recently Omarchy, as if it were a philosopher's stone, from a development perspective it's still the most effective getting-shit-done framework I've used to date. Best of all, it doesn't rely on JavaScript (aside from actions, which can be avoided with a little extra effort).

Similarly, on the infrastructure side, I wanted a foundation that was as lightweight as possible and wouldn't get in my way. And while I'm absolutely the type of person who would run a Gentoo server, I ultimately went with Alpine Linux due to its easier installation, relatively sane defaults (with a few exceptions, more on that later), and its preference for straightforward, no-nonsense tooling that doesn't try to hide magic behind the scenes.

"Why not NixOS?" you might ask. Since I'm deploying a lightweight, home-brewed Ruby/Rails setup alongside a few other components, I didn't see the point of wrapping everything as Nix packages just to gain the theoretical benefits of NixOS. In particular, the CI would have taken significantly longer, while the actual payoff in my case would have been negligible.

Since I'm paying for 📨🚕 out of my own pocket, I wanted infrastructure that's cheap yet reliable. With plenty of people on the internet praising Hetzner, I ended up renting AMD hardware in one of their Finnish datacenters. Hetzner doesn't offer as many Linux deployment options as cloud providers like Vultr, so I had to set up Alpine myself, which was pretty straightforward.

To kickstart an Alpine installation on a Hetzner system, you just need to access the server's iPXE console, either by renting a Hetzner KVM for an hour or by using their free vKVM feature. From there, you can launch the Alpine Linux installer by initializing the network interface and chain-loading the netboot script (a sketch follows below). From that point on, setup should be easy thanks to Alpine's installer routine.

If you're using Hetzner's vKVM feature to install Alpine, this chapter is for you. Otherwise, feel free to skip ahead. vKVM is a somewhat hacky yet ingenious trick Hetzner came up with, and it deserves a round of applause. If you're curious about how it works under the hood, rent a real KVM once and reboot your server into vKVM mode. What you'll see is that after enabling vKVM in Hetzner's Robot, iPXE loads a network image, which boots a custom Linux OS. Within that OS, Hetzner launches a QEMU VM that uses your server's drives to boot whatever you have installed. It's basically Inception at the OS level. As long as vKVM is active (meaning the iPXE image stays loaded), your server is actually running inside this virtualized environment, with display output forwarded to your browser. List your hardware while in vKVM mode and you'll see, for example, your NIC showing up as a VirtIO device.

Here's the catch: When you install Alpine through this virtualized KVM environment, it won't generate the initramfs your physical server actually needs. For instance, if your server uses an NVMe drive, you may discover that the initramfs doesn't include the nvme module, causing the OS to fail on boot (a mkinitfs sketch follows below).
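A minimal sketch of the iPXE chain-load mentioned above; dhcp and chain are standard iPXE commands, and the URL is a placeholder for wherever an Alpine netboot iPXE script lives:

    # in the iPXE console:
    dhcp net0                               # bring the NIC up via DHCP
    chain https://example.com/alpine.ipxe   # placeholder: an Alpine netboot iPXE script

The script itself would then load Alpine's netboot kernel and initramfs and drop you into the installer.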
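And a sketch of the fix, using Alpine's own tooling; the features list is illustrative, yours may differ:

    # /etc/mkinitfs/mkinitfs.conf -- make sure the nvme feature is listed
    features="base ext4 keymap kms nvme raid scsi usb virtio"

    mkinitfs    # regenerate the initramfs for the running kernel

Running mkinitfs with no arguments rebuilds the initramfs for the currently installed kernel; double-check the result before rebooting.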
Hetzner's documentation doesn't mention this, and it can easily bite you later. Tl;dr: If you installed your system via vKVM, make sure your initramfs includes all necessary modules. After updating /etc/mkinitfs/mkinitfs.conf, regenerate the initramfs. There are several ways to do this, but I prefer calling mkinitfs directly. Always double-check that the regenerated initramfs really contains everything you need. Unfortunately, Alpine doesn't provide tools for this, so here's a .tar.gz with Debian's lsinitramfs and unmkinitramfs. Extract it, and note that you may need to tweak the scripts for them to work properly, due to Alpine's somewhat annoying defaults (more on that later). Finally, after rebooting, make sure you've actually left the vKVM session; you can double-check by inspecting the hardware devices. If the session is still active (default: 1h), your system may have just booted back into the VM, which you can identify by its Virt-devices.

As soon as your Alpine Linux system is up and running, there are a couple of things that I found important to change right off the bat.

Alpine's default boot timeout is just 1 second. If you ever need to debug a boot-related issue over a high-latency KVM connection, you will dread that 1-second window. I recommend increasing it to 5 seconds in the bootloader configuration and regenerating that configuration to apply the change. In practice, you hopefully won't be rebooting the server that often, so the extra four seconds won't matter day-to-day.

Alpine uses the classic /etc/network/interfaces file to configure network settings. On Hetzner's dedicated servers, you can either continue using DHCP for IPv4 or set the assigned IP address statically. For IPv6, you'll be given a subnet from which you can choose your own address; keep in mind which address counts as the first usable IPv6 on Hetzner's dedicated servers.

Amongst the first things you do should be disabling root login and password authentication via SSH (see the sshd_config sketch below). Apart from that, you might want to limit the key exchange methods and algorithms that your SSH server allows, depending on the type of keys that you're using. Security by obscurity: move your SSH server from its default port (22) to something higher up and more random to make it harder for port-scanners to hit it. Finicky but more secure: implement port knocking and use a handy client to open the SSH port for you only, for a limited time only. Secure: set up a small cloud instance to act as a Wireguard peer and configure your server's SSH port to only accept connections from the cloud instance using a firewall rule. Use Tailscale if a dedicated Wireguard instance is beyond your expertise.

You will likely want to have proper (GNU) tools around, over the defaults that Alpine comes with (see below), plus a handful of convenience tools.

Monitoring is a tricky part because everyone's setup looks different. However, there are a few things that make sense in general. Regardless of what you do with your logs, it's generally a good idea to switch from BusyBox to something that allows for more advanced configurations, like syslog-ng (sketch below). You probably should have an overview of how your hardware is doing; depending on what type of hard drives your server has, you might want to install the relevant S.M.A.R.T./NVMe monitoring packages. UFW is generally considered an uncomplicated way to implement firewalling without having to complete a CCNP Security certification beforehand (sketch below). Depending on your SSH setup and whether you are running any other services that could benefit from it, installing Fail2Ban might make sense: its configuration files are located at /etc/fail2ban, and you should normally only create/edit the .local files (sketch below).
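A minimal sketch of the SSH hardening step above, using standard OpenSSH directives:

    # /etc/ssh/sshd_config
    PermitRootLogin no
    PasswordAuthentication no

    rc-service sshd restart    # apply via OpenRC on Alpine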
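For the syslog-ng switch, a sketch of the Alpine/OpenRC steps; the BusyBox service name is an assumption based on Alpine's packaging:

    apk add syslog-ng
    rc-update del syslog boot         # assumed name of the BusyBox syslogd service
    rc-update add syslog-ng default
    rc-service syslog-ng start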
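A sketch of a basic UFW setup as referenced above (standard ufw commands; adjust the port to your SSH setup):

    apk add ufw
    ufw default deny incoming
    ufw default allow outgoing
    ufw allow 22/tcp                  # or your custom SSH port
    ufw enable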
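And a minimal Fail2Ban sketch following the .local convention mentioned above; the sshd jail is standard, though log backend details can vary on Alpine:

    # /etc/fail2ban/jail.local  (create this; leave jail.conf untouched)
    [sshd]
    enabled = true
    port    = 22                      # match your sshd port

    rc-service fail2ban start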
The easiest way to back up all the changes that you've made to the general configuration is by using lbu, the integrated Alpine local backup solution that was originally intended as a tool to manage diskless mode installations. I would, however, recommend manually backing up the list of installed packages (/etc/apk/world) and using Restic for the rest of the system, including configuration files and important data (a sketch follows below). However, backups depend on the data that your system produces and your desired backup target. If you're looking for an easy-to-use, hosted, but not-too-expensive one-off option, then Tarsnap might be for you.

You should as well look into topics like local mail delivery, system integrity checks (e.g. AIDE) and intrusion detection/prevention (e.g. CrowdSec). Also, if you would like to get notified for various server events, check 📨🚕 ( MSG.TAXI )! :-)

One of the biggest annoyances with Alpine is BusyBox: You need SSH? That's BusyBox. The logs? Yeah, BusyBox. Mail? That's BusyBox, too. You want to untar an archive? BusyBox. What? It's gzipped? Guess what, you son of a gun, gzip is also BusyBox. I understand why Alpine chose BusyBox for pretty much everything, given the context that Alpine is most often used in (cOnTaInErS). Unfortunately, most BusyBox implementations are incomplete or incompatible with their full GNU counterparts, leaving you wondering why something that worked flawlessly on your desktop Linux fails on the Alpine box. By the time I finished setting up the server, there was barely any BusyBox tooling left. However, I occasionally had to resort to some odd trickery to get things working.

You now have a good basis to set up whatever it is that you're planning to use the machine for. Have fun!

Footnote: The artwork was generated using AI and further botched by me using the greatest image manipulation program.
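A minimal Restic sketch for the backup step above; the repository target and paths are placeholders:

    apk add restic
    restic -r sftp:backup@backuphost:/srv/restic init        # one-time repository setup
    restic -r sftp:backup@backuphost:/srv/restic backup /etc /root /var/lib
    cp /etc/apk/world /root/world.backup                     # the installed-packages list

The -r flag selects the repository; sftp is one of several backends Restic supports.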

vkoskiv 1 week ago

My First Contribution to Linux

I've been spending more of my spare time in recent years studying the Linux source tree to try to build a deeper understanding of how computers work. As a result, I've started accumulating patches that fix issues with hardware I own. I decided to try upstreaming one of these patches to familiarize myself with the kernel development process.

Jeff Geerling 2 weeks ago

Not all OCuLink eGPU docks are created equal

I recently tried using the Minisforum DEG1 GPU Dock with a Raspberry Pi 500+, using an M.2 to OCuLink adapter, and this chenyang SFF-8611 Cable. After figuring out there's a power button on the DEG1 (which needs to be turned on), and after fiddling around with the switches on the PCB (hidden under the large metal plate on the bottom; TGX to OFF was the most important setting), I was able to get the Raspberry Pi's PCIe bus to at least tell the graphics card installed in the eGPU dock to spin up its fans and initialize. But I wasn't able to get any output from the card (using this Linux kernel patch), and lspci did not show it. (Nor were there any logs showing errors in dmesg.)

DHH 2 months ago

Omarchy micro-forks Chromium

You can just change things. That's the power of open source. But for a lot of people, it might seem like a theoretical power. Can you really change, say, Chrome? Well, yes. We've made a micro-fork of Chromium for Omarchy (our new 37signals Linux distribution), just to add one feature needed for live theming.

underlap 2 months ago

Arch linux take two

After an SSD failure [1], I have the pleasure of installing arch linux for the second time. [2] Last time was over two years ago (in other words, I remember almost nothing of what was involved) and since then I've been enjoying frequent rolling upgrades (only a couple of which wouldn't boot and needed repairing).

While waiting for the new SSD to be delivered, I burned a USB stick with the latest arch ISO in readiness. I followed the instructions to check the ISO signature using gpg (sketch below). This looked plausible, but to be on the safe side, I also checked that the sha256 sum of the ISO matched that on the arch website.

My previous arch installation ran out of space in the boot partition, so I ended up fiddling with the configuration to avoid keeping a backup copy of the kernel. This time, I have double the size of SSD, so I could (at least) double the size of the boot partition. But what is a reasonable default size for the boot partition? According to the installation guide, a boot partition isn't necessary. In fact, I only really need a root ( / ) partition since my machine has a BIOS (rather than UEFI). Since there seem to be no particular downsides to using a single partition, I'll probably go with that. Then I don't need to choose the size of a boot partition.

The partitioning guide states: If you are installing on older hardware, especially on old laptops, consider choosing MBR because its BIOS might not support GPT. If you are partitioning a disk that is larger than 2 TiB (≈2.2 TB), you need to use GPT. My system BIOS was dated 2011 [3] and the new SSD has 2 TB capacity, so I decided to use a BIOS/MBR layout, especially since this worked fine last time.

Here are the steps I took after installing the new SSD. Boot from the USB stick containing the arch ISO. Check ethernet is connected using ping. It was already up to date. Launch the installer and set the various options. I then chose the Install option. It complained that there was no boot partition, so I went back and added a 2 GB fat32 boot partition. Chose the install option again. The installation began by formatting and partitioning the SSD. Twelve minutes later, I took the option to reboot the system after installation completed.

After Linux booted (with the slow-painting grub menu, which I'll need to switch to text), I was presented with a graphical login for i3. After I logged in, it offered to create an i3 config for me, which I accepted. Reconfigured i3 based on the contents of my dotfiles git repository. Installed my cloud provider CLI in order to access restic/rclone backups from the previous arch installation. At this point I feel I have a usable arch installation and it's simply a matter of setting up the tools I need and restoring data from backups. I wanted to start dropbox automatically on startup, and Dropbox as a systemd service was just the ticket (a unit sketch follows the footnotes).

[1] The failed SSD had an endurance of 180 TBW and lasted 5 years. The new SSD has an endurance of 720 TBW, so I hope it will last longer, although 20 years (5 × 720/180) seems unlikely. ↩︎

[2] I was being ironic: it was quite painful the first time around. But this time I know how great arch is, so I'll be more patient installing it. Also, I have a backup and a git repo containing my dot files, so I won't be starting from scratch. ↩︎

[3] There was a BIOS update available to fix an Intel advisory about a side-channel attack. However, I couldn't confirm that my specific hardware was compatible with the update, so it seemed too risky to apply the update. Also, browsers now mitigate the side-channel attack. In addition, creating a bootable DOS USB drive seems to involve either downloading an untrusted DOS ISO or attempting to create a bootable Windows drive (for Windows 10 or 11, which may require a license key), neither of which I relish. ↩︎
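A sketch of the signature check mentioned above, following the Arch installation guide's approach (file names vary by release):

    gpg --keyserver-options auto-key-retrieve --verify archlinux-x86_64.iso.sig
    sha256sum archlinux-x86_64.iso    # compare against the checksum on the arch website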
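And a hedged sketch of a Dropbox user unit of the kind the linked article describes; the daemon path assumes Dropbox's stock bundled location:

    # ~/.config/systemd/user/dropbox.service
    [Unit]
    Description=Dropbox daemon

    [Service]
    ExecStart=%h/.dropbox-dist/dropboxd
    Restart=on-failure

    [Install]
    WantedBy=default.target

Enable it with: systemctl --user enable --now dropbox.service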

DHH 3 months ago

Linux crosses magic market share threshold in US

According to Statcounter, Linux has claimed 5% market share of desktop computing in the US. That's double where it was just three years ago. Really impressive. Windows is still dominant at 63%, and Apple sits at 26%. But for the latter, it's quite a drop from their peak of 33% in June 2023.

DHH 3 months ago

Get in losers, we're moving to Linux!

I've never seen so many developers curious about leaving the Mac and giving Linux a go. Something has really changed in the last few years. Maybe Linux just got better. Maybe powerful mini PCs made it easier. Maybe Apple just fumbled their relationship with developers one too many times. Maybe it's all of it. But whatever the reason, the vibe shift is noticeable

DHH 3 months ago

Omarchy is out

My latest love letter to Linux has been published. It's called Omarchy, and it's an opinionated setup of the Arch Linux distribution and the Hyprland tiling window manager. With everything configured out-of-the-box to give you exactly the same setup that I now run every day. My Platonic ideal of what a developer environment should look like. It's not for everyone, though

Uros Popovic 4 months ago

Linux VM without VM software - User Mode Linux

Quick demonstration of using User Mode Linux (UML) to run a Linux VM inside Linux's userspace, without additional VM software or even root permissions
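For flavour, a minimal UML sketch under stated assumptions: the kernel is built from the Linux source tree with ARCH=um, and rootfs.img is a root filesystem image you prepare yourself:

    # build a UML kernel binary (named "linux") from the source tree
    make ARCH=um defconfig && make ARCH=um -j"$(nproc)"
    # boot it entirely in userspace, no root permissions needed
    ./linux ubd0=rootfs.img root=/dev/ubda mem=512M

ubd0= attaches the image as UML's first block device, which appears inside the VM as /dev/ubda.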

ryansouthgate.com 7 months ago

Cronjob vs systemd timers - how to create a systemd timer

Intro: I've been using cronjobs for a long time. More recently I've been testing systemd timers, as they allow dependency management, so I don't have to check (in the script I'm running) to see if certain services are up/running before my scripts can do their work. In this post I'm going to demonstrate how to create a simple systemd timer service which runs a script on a schedule (a sketch follows below). A quick comparison with Cron: I'm all for learning to use new tools.
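A minimal sketch of the pattern the post describes: a oneshot service plus a timer that schedules it. Unit names and the script path are placeholders:

    # /etc/systemd/system/myscript.service
    [Unit]
    Description=Run my script once
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/myscript.sh

    # /etc/systemd/system/myscript.timer
    [Unit]
    Description=Schedule myscript.service daily

    [Timer]
    OnCalendar=daily
    Persistent=true

    [Install]
    WantedBy=timers.target

Enable it with: systemctl enable --now myscript.timer. The Wants=/After= lines are the dependency-management advantage over cron mentioned above.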

//pauls dev blog 8 months ago

How To Move Docker's Data Directory To Free Up Disk Space

Working with services like Docker, which store large amounts of data in locations that often do not have enough space, can be a very frustrating experience, especially if we are running out of disk space on our Linux server. Recently, I had a case where the partition on a host using docker was running very low on disk space. This time, I deployed so many containers on the host that the storage on the root partition (~15GB) was nearly full. Luckily, I had an additional disk mounted with plenty of disk space. So I was searching for a simple and effective solution to my problem, which was moving Docker's default storage location to another directory on that disk, to ensure that all docker services can operate without problems and prevent data loss. In this article, I want to explain all the steps I did to safely relocate Docker's data directory so that everyone can do this without breaking their containerized applications if this problem ever occurs.

If Docker is installed, its default behavior is that container images, volumes, and other runtime data (logs) will be stored in /var/lib/docker, which will grow significantly over time because of:

Log files and container data
Persistent volumes containing application data (databases, files, etc)
Docker images and layers

When hosting Linux servers, the root partition often has limited space because it should only contain the filesystem and configuration files. This results in Docker using all remaining space very fast, causing slowdowns, crashes, or failures when launching new Docker containers. The solution is to move Docker's data directory to avoid these problems.

Before we can move Docker's data directory, we should stop the Docker service (systemctl stop docker) to prevent any data loss while moving the files. This ensures that no active processes are using Docker's current data directory while we move it. Unfortunately, this will stop all services running on our host. If we do this in production, we have to plan it in a maintenance window!

After the Docker service is stopped, we can add (or set) a setting value in the configuration file, which normally is located at /etc/docker/daemon.json. If we cannot find it there, we should create it and add the configuration shown in the first sketch below.

Now, to ensure that every already created container, all images, and all volumes keep working, we have to copy the contents of /var/lib/docker (the old path) to the new location (second sketch below). In this command, the -a flag preserves the permissions, symbolic links, and all corresponding metadata. The -x flag is used to not copy data from other mounted filesystems, and -T is used to simply copy the directory and not create an extra subdirectory.

The next step is restarting the Docker service (systemctl restart docker). This should instruct Docker to use the new storage location which we defined in the daemon.json. Finally, we should confirm that Docker is using the new directory and is working correctly; the third sketch below shows the check, which should return the new directory path we set. Then, we should test that our containers, images, and volumes are working by simply listing them (docker ps -a, docker images, docker volume ls). As a last step, we can check the containers by accessing them. For example, if we host a website, we can simply access it with the browser.

In some cases moving Docker's data directory is not possible, and we could use different techniques to free up disk space and move files. The most important command for freeing up disk space while using Docker is docker system prune, Docker's built-in command to remove unused data and reclaim space. It will remove every stopped container, each unused network, the dangling images, and all build caches (it won't delete the volumes!). We should keep in mind to use it with caution, as with the -a flag it will delete every image that is not linked to a running Docker container! If docker system prune is too dangerous, we can remove the individual parts separately. Remove unused networks: docker network prune. Remove unused volumes: docker volume prune. Remove dangling images: docker image prune.

As an alternative to moving the entire Docker directory, we could use Linux bind mounts to selectively move certain large parts of the Docker directory, like images or volumes. To do this, we simply create a Linux bind mount using the new path as the source and the docker directory as the destination (last sketch below). This solution is very flexible because we are able to manage the storage without affecting the Docker runtime.

Having not enough disk space in a Docker environment can lead to heavy problems, as I already learned in this "incident". By freeing up disk space regularly and moving Docker's data directory to a separate large partition, it is possible to keep the system and our containerized applications running. I hope this article gave you an easy step-by-step guide to apply this to your Docker environment if you encounter that your Docker data directory is consuming too much space. But remember, keeping an eye on your storage needs and proactively managing Docker's data, like enabling logrotate, will help you prevent these unexpected problems. As a best practice, I would recommend cleaning up unused Docker resources regularly and monitoring disk usage on a daily basis to avoid running into problems! To do this, we could use Grafana, Netdata, Checkmk (Nagios), Zabbix, MinMon, or simply a CronJob.

However, I would love to hear your feedback about this tutorial. Furthermore, if you have any alternative approaches, please comment here and explain what you have done. Also, if you have any questions, please ask them in the comments. I try to answer them if possible. Feel free to connect with me on Medium, LinkedIn, Twitter, BlueSky, and GitHub.
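First sketch: the configuration referenced above. "data-root" is Docker's documented daemon.json key; the target path is a placeholder:

    /etc/docker/daemon.json:
    {
      "data-root": "/mnt/data/docker"
    }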
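Second sketch: copying the old data directory. The flag description in the post matches GNU cp's -a, -x, and -T options (an assumption; rsync would work too), and the destination path is a placeholder:

    sudo cp -axT /var/lib/docker /mnt/data/docker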
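Third sketch: confirming the new location after the restart; DockerRootDir is a real field in docker info's output:

    docker info --format '{{ .DockerRootDir }}'    # should print /mnt/data/docker
    docker ps -a && docker images && docker volume ls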
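Last sketch: a bind mount that relocates only the volumes subdirectory; both paths are placeholders, and an /etc/fstab entry makes it survive reboots:

    sudo mount --bind /mnt/data/docker-volumes /var/lib/docker/volumes
    # /etc/fstab entry for persistence:
    # /mnt/data/docker-volumes  /var/lib/docker/volumes  none  bind  0  0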

ryansouthgate.com 11 months ago

Fixing 'cups-pki-invalid' printing error in Linux Mint/Ubuntu

Today on my Linux Mint install (21.2) I got the error cups-pki-invalid when trying to print a document. In this short blog post, we’re going to fix it, without having to remove/re-install printers. A quick Google search shows that this is likely due to an expired certificate, but we knew that anyway with “pki” in the error code, didn’t we? Why this certificate doesn’t re-gen (or have a longer expiry) is unknown to me, but it is what it is, and we’ll fix the error.
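The post's exact steps aren't shown in this excerpt. As a hedged sketch, the commonly documented fix for cups-pki-invalid is to delete the expired self-signed certificate so CUPS regenerates it on restart (file names under /etc/cups/ssl vary with your hostname):

    sudo rm /etc/cups/ssl/<hostname>.crt /etc/cups/ssl/<hostname>.key
    sudo systemctl restart cups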

alikhil 11 months ago

Forwarding SMS to Telegram

After extensive travel, I've accumulated several mobile numbers and, naturally, physical SIM cards. Switching them out each time became tedious, even after buying a basic Nokia with two SIM slots, which only helped temporarily. When a friend asked if I could set up a Spanish number for account registrations, I realized it was time to automate the process.

If you're dealing with multiple SIM cards and want to receive SMS in Telegram, I have a straightforward approach. You'll need about $10 plus:

A physical SIM card
A USB modem that's supported by the Gammu library
A Telegram bot token, chat or channel ID
A Linux machine with a free USB port, always online and connected to the internet
Docker and Docker Compose installed

If you have a USB modem at home, check if it's supported by Gammu. For our purposes, we don't need an expensive 4G modem with advanced features. Any basic 2G/3G modem will work, and these are easy to find at a discounted price on sites like eBay or Wallapop. Search for "Huawei USB modem," sort by price, and look for unlocked options or ones with compatible firmware. Next, go to the Gammu website and look up the device. Make sure it appears on the list and that "SMS" is included in the "Supported features" column. If the device meets these requirements, it's good to go!

Before starting the setup, it's best to connect the modem, with the SIM card already inserted, to your PC and check that it's functioning properly. List the serial devices that appear when the modem is plugged in and note the device path; in my case it was the modem's first USB serial port.

Using Docker Compose, set up your configuration (a sketch follows below), save it to a file, and run it. If everything is set up correctly, the logs will show the daemon starting up. To test SMS reception, you can use free online SMS-sending services (search for "send SMS for free") or try logging into Telegram, your bank account, etc.

The Gammu library provides a unified interface for working with phones and modems from various manufacturers. On top of that, there's the Gammu SMS Daemon, which receives SMS messages and triggers a custom script, in our case a script that sends the messages to Telegram. Thanks to @kutovoys for the idea and Docker image! This is a simple, affordable, and scalable solution, especially if you're into self-hosting. This post was originally written for vas3k.club.
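A sketch of the Docker Compose configuration described above. The image name, device path, and credentials are all placeholders; substitute the gammu-smsd-to-Telegram image you actually use (e.g. the one by @kutovoys):

    # docker-compose.yml
    services:
      sms-to-telegram:
        image: example/gammu-sms-to-telegram    # placeholder image
        devices:
          - /dev/ttyUSB0:/dev/ttyUSB0           # your modem's device path
        environment:
          TELEGRAM_BOT_TOKEN: "123456:ABC..."   # your bot token
          TELEGRAM_CHAT_ID: "-100123456789"     # target chat or channel ID
        restart: unless-stopped

Start it with: docker compose up -d, then follow the logs with docker compose logs -f.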

//pauls dev blog 1 year ago

AppImage On Linux Explained: How to Run Apps Without Installing

In every Linux distribution there are several ways to install software. One of the most used methods is downloading a .deb or .rpm file and simply double-clicking it to install. But for some years now, you might have seen applications with an .AppImage extension and wondered what these files are. In this short tutorial I want to explain AppImage and show how to use it on Linux to run applications. I also add some important tips to think about when working with AppImage, and I provide a simple tutorial to add any AppImage to your Linux application launcher.

AppImage is a relatively new (compared to deb/rpm) packaging format (developed in 2004, named klik) that allows you to run applications on Linux with just a simple click. Unlike traditional packages like DEB or RPM, AppImages are compatible across all Linux distributions, making them available to all Linux users. While DEB and RPM offer a convenient way for users to install software, they are challenging for developers, who have to create different packages for each Linux distribution. This is exactly where AppImage steps in. AppImage is a universal software packaging format that bundles software in a single file which works on all modern Linux distributions. In contrast to those formats, AppImage does not install software in the traditional way by placing files in various system locations (and requiring root permissions). It does not really "install" the software at all, because it is a compressed image that includes all necessary dependencies and libraries to run the application. This means that if you run an AppImage file, the software starts immediately without any extraction or installation process. To delete an AppImage, it is sufficient to simply delete the file.

Key Features of AppImage:

No installation or compilation: Simply click and run the application.
No root permissions required: It doesn't modify system files.
Portable: Can be run from anywhere, including live USB environments.
Read-only apps: The software operates in read-only mode.
Can be removed easily: Simply deleting the AppImage file will remove the complete software.
No sandboxing by default: AppImages are not sandboxed unless configured otherwise.
Not distribution dependent: Works across various Linux distributions.

On Linux any AppImage can be used by following three simple steps:

Download the AppImage file
Make it executable (chmod +x)
Run it

After downloading and running your AppImage file, it won't appear in your application launcher. This means that you cannot add it to your panels or launch it by opening the launcher. Getting the AppImage software into the Linux application launcher can be done by simply creating a desktop entry for the app. In Linux, a desktop entry is a configuration file that tells your desktop environment how an application is handled. Desktop entries can be created in the following two folders: /usr/share/applications and ~/.local/share/applications. While the first folder is globally used by every user (and you need root for it), the second folder is the one you should use. Switch to ~/.local/share/applications, create a new file with the .desktop extension, and add the code shown in the sketch below. In this file, replace Appname with the name of the app which will be displayed in the application launcher. Also, replace the Icon value with the full path to an icon file which is used for the app, if you have one. If you don't have an icon, you can omit this line, but having an icon is recommended for better integration. Lastly, replace the Exec value with the full path to your AppImage file. After saving and closing this file, your system should automatically detect the changes and your app can be found in the application launcher.

AppImage offers a simple and effective solution for running applications across different Linux distributions without the complexity of traditional installation methods. By using this universal, portable format, it makes software access easier for both users and developers. Also, AppImage applications don't install themselves on your Linux system, and deleting them is very easy: simply delete the file.

In this simple tutorial, I tried to show how every Linux user can use AppImage applications and integrate them in their launcher for a better experience. Hopefully, this article was easy to understand and you learned about AppImage applications. Do you have any questions regarding AppImage? Or do you have any feedback? I would love to hear your thoughts and answer all your questions. Please share everything in the comments. Feel free to connect with me on Medium, LinkedIn, Twitter, and GitHub.
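The desktop entry referenced above, as a minimal sketch using the standard freedesktop.org format (the paths are placeholders):

    # ~/.local/share/applications/appname.desktop
    [Desktop Entry]
    Type=Application
    Name=Appname
    Exec=/home/user/Applications/Appname.AppImage
    Icon=/home/user/Applications/appname.png
    Categories=Utility;

Type, Name, Exec, Icon, and Categories are standard .desktop keys; only Type, Name, and Exec are strictly required.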


2024 - My year of the Linux desktop

I've been a Windows user since first using a mouse. Windows 10 will be the last version of Windows I'll daily-drive, and here's why… Introduction: My first operating system was Windows 3.1, when my uncle gave us a computer he no longer needed. From that point on, he helped me keep up to date with the latest versions of Windows (with hardware upgrades to match). Over the years, Windows has been a constant in my life; I've used pretty much every desktop version of Windows, and most Windows Server OSes too.


Creating Right-Click (Context Menu) actions in Linux Mint - Nemo

This is a short and sweet post about creating Nemo Actions in Linux Mint. Since moving from Windows 10 to daily-driving Linux Mint, there have been few things I've missed. However, one small feature of the Windows file explorer I have been missing is the "Copy Full Path" action shown when you select a file, hold shift, and right-click. This small feature copies the full path of the file to your clipboard, and I've just come to realise how much I miss it.
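For flavour, a hedged sketch of the kind of Nemo Action the post describes, assuming xclip is installed and saved under ~/.local/share/nemo/actions/ (not necessarily the author's exact file):

    # copy-full-path.nemo_action
    [Nemo Action]
    Active=true
    Name=Copy Full Path
    Comment=Copy the full path of %N to the clipboard
    Exec=bash -c "echo -n %F | xclip -selection clipboard"
    Selection=s
    Extensions=any;

%F expands to the selected file's path and %N to its name; Selection=s restricts the action to a single selected file.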
