マリウス 6 days ago

Be Your Own Privacy-Respecting Google, Bing & Brave

Search engines have long been a hot topic of debate, particularly among the tinfoil-hat-wearing circles on the internet. After all, these platforms are in a unique position to collect vast amounts of user data and identify individuals with unsettling precision. However, with the shift from traditional web search, driven by search queries and result lists, to an LLM-powered question-and-answer flow across major platforms, concerns have grown, and it's no longer just about privacy: Today, there's increasing skepticism about the accuracy of the results. Not only has it become harder to discover new information online, but verifying the accuracy of these AI-generated answers has become a growing challenge.

As with any industry upended by new technology, a flood of alternatives is hitting the market, promising to be the antidote to the established players. However, as history has shown, many of these newcomers are unlikely to live up to their initial hype in the long run. Meanwhile, traditional search services are either adopting the same LLM-driven approach or shutting down entirely.

However, as long as major search engines still allow software to tap into their vast databases without depending too heavily on their internal algorithms and AI-generated answers, there's some hope: We can take advantage of these indexes and create our own privacy-respecting search engines that prioritize the content we actually want to see. Let's look at how to do so using the popular metasearch engine SearXNG on OpenBSD!

SearXNG is a free and open-source metasearch engine, initially forked from Searx after its discontinuation, that can pull search results from over 70 different search engines.

Note: SearXNG is not a search engine but a metasearch engine, which means that it does not maintain its own index but instead uses existing indexes from e.g. Google, Brave, Bing, Mojeek, and others.
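To make the metasearch idea concrete: which upstream indexes SearXNG queries is controlled per engine in its settings.yml. A minimal sketch follows — the engine selection shown is illustrative, and the exact keys should be checked against the default settings file shipped with SearXNG:

```yaml
# Illustrative settings.yml fragment: toggle upstream engines.
engines:
  - name: google
    disabled: false
  - name: brave
    disabled: false
  - name: bing
    disabled: true    # example: exclude Bing results entirely
```

Each enabled engine is queried in parallel for every search, and the merged results are what SearXNG then ranks and filters.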
What SearXNG does is run your search query through all of the search engines that you have enabled on your instance, to which it then applies custom prioritization and removal rules in an effort to tailor the results to your taste.

SearXNG is not particularly resource-intensive and doesn't require significant storage space, as it does not maintain its own search index. However, depending on your performance requirements, you may need to choose between slightly longer wait times or higher costs, especially for cloud instances. I tested SearXNG on a Vultr instance with 1 vCPU and 1GB of RAM, and it performed adequately. That said, for higher traffic or more demanding usage, you'll need to allocate more CPU and RAM to ensure optimal performance.

Let's start by setting up the base system. This guide assumes you're using the latest version of OpenBSD (7.8, at the time of writing) and that you've already configured and secured SSH access. Additionally, your firewall should be set up to allow traffic on ports 22, 80, and 443. Ideally, you should also have implemented preventive measures against flooding and brute-force attacks, such as PF's built-in rate limiting.

Note: I'm going to use as domain for this specific setup, as well as as hostname for the SearXNG instance. Make sure to replace these values with your domain/preferred hostname in the configuration files below.

First, let's install the dependencies that we need: The default configuration of redis works just fine for now, so we can enable and start the service right away: Next, we create a dedicated user for SearXNG: With the newly created user we clone the SearXNG repository from GitHub and set up a Python virtual environment:

Next, we copy the default configuration from the repository to ; make sure to beforehand: While the default settings will work just fine, it's advisable to configure the according to your requirements.
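The base-system steps above can be sketched as follows. The package names, the `_searxng` user name, and the repository layout are my assumptions, not verbatim from the post, so double-check them against your OpenBSD release and the SearXNG documentation:

```shell
# Sketch only: package and user names are assumptions.
doas pkg_add git redis py3-pip                # install dependencies
doas rcctl enable redis                       # default redis config is fine
doas rcctl start redis

doas useradd -m -s /sbin/nologin _searxng     # dedicated user (assumed name)

# As the new user: clone SearXNG and set up a Python virtual environment.
# (Assumes a doas.conf rule permitting "doas -u _searxng".)
doas -u _searxng sh -c '
  cd && git clone https://github.com/searxng/searxng.git
  cd searxng
  python3 -m venv venv
  . venv/bin/activate
  pip install -U pip setuptools wheel
  pip install -e .
'
```

Running the application out of a virtual environment keeps its Python dependencies isolated from the base system's packages.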
One key element that will make or break your experience with SearXNG is the plugin and its configuration. Make sure to enable the plugin: … and make sure to properly configure it:

The configuration tells SearXNG to rewrite specific URLs. This is especially useful if you're not running LibRedirect but would still like results from e.g. X.com to open on Xcancel.com instead. The configuration contains URLs that you want SearXNG to completely remove from your search results, e.g. Pinterest, Facebook or LinkedIn (unless you need those for OSINT). The configuration lists URLs that SearXNG should de-prioritize in your search results. The setting, on the other hand, does the exact opposite: It instructs SearXNG to prioritize results from the listed URLs. If you need examples for those files, feel free to check the lycos.lol repository.

PS: Definitely make sure to change the !

We're going to run SearXNG using uWSGI, a popular Python web application server. To do so, we create the file with the following content: Next, we create the file with the following content: This way we can use to enable and run uWSGI by issuing the following commands:

Info: In case the startup should fail, it is always possible to and start uWSGI manually to see what the issue might be:

For serving the Python web application we use Nginx. Therefore, we create with the following content: We include this file in our main configuration:

Note: I'm not going to dive into the repetitive SSL setup, but you can find plenty of other write-ups on this site that explain how to configure it on OpenBSD.

Next, we enable Nginx and start it: You should be able to access your SearXNG instance by navigating to in a browser.

In case you encounter issues with the semaphores required for interprocess communication within uWSGI, make sure to check the settings and increase specifically the parameter, e.g.
by adding the following line to :

As you can see, setting up a SearXNG instance on OpenBSD is fairly easy and doesn't require much work. However, configuring it to your liking so that you get the search results you're interested in is going to require more effort and time. The plugin configuration in particular is likely to evolve over time, the more you use the search engine. At this point, however, you're ready to enjoy your self-hosted, privacy-respecting metasearch engine based on SearXNG! :-)

I had registered the domain for this closed-access SearXNG instance. However, a day after the domain became active, NIC.LOL set the domain status to . I asked Njalla, my registrar, whether they knew more, and their reply was:

Right now the domain in question has the status code "serverHold". serverHold is a status code set by the registry (the one that manage the whole TLD) and that means they have suspended the domain name because the domain violated their terms or rules.

Upon further investigation, it became clear that the domain was falsely flagged by everyone's favorite tax-haven-based internet bully, Spamhaus. After all, when the domain was dropped globally, the only thing visible on the domain's Nginx was an empty page. The domain also didn't have (and still hasn't) any MX records configured. I reached out to Spamhaus, who replied with the following message:

Thank you for contacting the Spamhaus Ticketing system, It appears that this ticket was submitted using a disposable or temporary email address; because of this, we cannot confirm its authority. To ensure that we can help you, please do not use a temporary email address (this includes freemails such as gmail.com, hotmail.com, etc) and ensure that the ticket contains the following: When these issues have been resolved, another ticket may be opened to request removal.
– Regards, Marvin Adams, The Spamhaus Project

Spamhaus flagged the domain I had just purchased, which I could have used for sending email. Upon contacting them, they then closed my ticket because I was using a temporary email address instead of, let's say, my own lycos.lol domain. And even though I had sent the email from a free or temporary address, I thought it was my domain registrar's responsibility to handle KYC, not Spamhaus's. I've always known that Spamhaus is an incompetent and corrupt organization, but I didn't fully realize just how clueless they are until now. Also, shoutout to NIC.LOL for happily taking my cash without providing any support in this matter whatsoever.

This serves as a harsh reminder that the once fun place we called the internet is dead, and that everything these days is controlled by corporations at whose mercy you always are. It also highlights how misleading and inaccurate some popular posts on sites like Hacker News can be, e.g. "Become unbannable from your email". They're not just lacking in detail; they're obviously wrong about the unbannable part.

After some back-and-forth, I managed to get back online and set up the SearXNG instance. The instance will be available to members of the community channel. Additionally, I've taken further steps to protect this website from future hostility by Spamhaus: Say hello to ! More on that in a future status update.

Footnote: The artwork was generated using AI and further botched by me using the greatest image manipulation program. Learn why.

For reference, the items Spamhaus requires a removal ticket to contain:
- Information that makes clear the requestor's authority over the domain or IP
- Details on how the issue(s) have been addressed
- Reference to any other Spamhaus removal ticket numbers related to this case

マリウス 2 weeks ago

On Generative AI Imagery

With a growing readership on this very niche website of mine, the amount of reader feedback I receive, primarily via email but also through the community channel, has noticeably increased. This is something that brings me joy, and I'm happy to respond to everyone who reaches out, whether that's with replies to questions, help on specific topics, or just a simple "thank you" message.

However, for the past year, I've been receiving an increasing number of comments about my use of generative AI imagery in some of the posts on this website. While all the comments have been in good spirit, they share one thing in common: a dislike for such graphics, along with well-intentioned suggestions to avoid future use of generative AI for cover art or inline "artwork". Because of the repeated feedback on this specific topic, I decided to write this post to explain myself and the situation I'm facing with this website.

This website has been around for over half a decade now, during which I've dedicated considerable effort to producing original writing, photography, and sometimes graphics for roughly 130 articles (not counting regular pages). Out of this content, only around 10 posts (~7%) feature imagery produced by generative AI, which I always disclose, usually in the article's footer, and sometimes with slightly sarcastic remarks about generative AI. However, despite my focus on original work, I've never received feedback explicitly appreciating the artwork featured in the remaining 93% of the posts. While the purpose of good artwork is to blend in with the writing and thus become one with it, it is nevertheless disheartening to see that as soon as I introduced generated images, I received immediate feedback, despite the fact that these images blended in better than any of my original amateur photography or artwork ever could.
Unfortunately, many readers don't fully realize the extensive work involved in creating the written content, the accompanying graphics, and sometimes even videos for each post. As someone who isn't a professional artist and who faces the challenge of finding new subjects for niche topics like "The Small Web 101" or "Installing Alpine Linux on a Bare Metal Server", I turned to generative AI.

To give you a sense of the effort that goes into just the writing, let's take the aforementioned article as an example: Researching, drafting the idea, expanding on it, refining rough edges, proofreading it repeatedly, and ultimately running it through a grammar and spell checker usually takes me about 30 hours for a post like that. I don't use things like dictation and speech-to-text conversion. While new technologies might speed up the process, they would also partially take away the joy I find in it, and quite possibly produce a result that is not as thoughtful as it might otherwise have been. Besides, I wouldn't want to lose the ability to do these things on my own by slowly offloading more and more tasks to computer programs. The fact that I still need to use tools for grammar and spell correction after all these years is frustrating enough.

However, with my process still being predominantly based on blood, sweat, and tears, an article that requires deeper investigative work easily doubles the aforementioned number from start to finish. Keep in mind, these numbers don't include any work on graphics! Even shorter articles, like the "Tabs vs. Spaces" one, end up taking an absurd number of hours to complete. Manually searching through every language's official and unofficial developer guidelines to determine whether tabs or spaces are preferred, and what indentation size should be used, takes a lot of time. I'm not using "AI" tools to automate these tasks because they simply can't be trusted to produce accurate data.
Especially with an article like that, accuracy is key, and it's the only reason anyone would find value in it.

Long story short, let me be clear: I'm not a fan of generative AI either. After all, my snarky comments in the article disclaimers are there for a reason. However, after years of effort, and with limited funds from donations that don't even cover basic infrastructure costs, let alone the purchase of real artwork, I had to find a way to balance the time, effort, and, to some extent, costs that go into maintaining this website. Sadly, generative AI seems to be the only way I, as someone with little artistic talent, can afford more sophisticated graphics that support the written word and are at least somewhat pleasing (or at least okay) to the readers' eyes.

I hope this clarifies why someone with a website like mine, one that is very outspoken against much of what is commonly considered modern technology, sometimes employs imagery produced by generative AI. I also hope that, despite your personal opinion on generative AI, the stolen artwork I occasionally use won't deter you from diving into the actual written content. Thank you for being here.

マリウス 3 weeks ago

Cameras, Cameras Everywhere!

We live in an age when a single walk down the street can put you inside at least a dozen different recording ecosystems at once: fixed municipal CCTV, a passing police cruiser's cameras or body-cam feeds, the license-plate cameras on light poles, the dash-, cabin-, and exterior cameras of nearby cloud-connected vehicles, the Ring and Nest doorbells of residences you might pass by, and the phones and wearables of other pedestrians, quietly recording audio and/or video. Each of those systems was justified as a modest safety, convenience, or product feature, yet when stitched together they form a surveillance fabric that reaches far beyond its original intent.

Instead of only looking at the big picture all these individual systems paint, let's focus on each individual area and uncover some of the actors complicit in the making of this very surveillance machinery that they profit immensely from.

Note: The lists below only mention a few of the most prominent enablers and profiteers.

CCTV is not new, but it's booming. Market reports show the global video-surveillance/CCTV market measured in tens of billions of dollars and growing rapidly as governments and businesses deploy these solutions. Continued double-digit market growth is expected over the next several years. Yet cameras haven't been reliably proven to reduce crime at scale, and the combination of live feeds, long-term storage, and automated analytics (including behavior detection and face matching) enables discriminatory policing and concentrates a huge trove of intimate data without adequate oversight. Civil liberties groups and scholars argue CCTV expansion is often implemented with weak limits on access, retention, and third-party sharing.
In addition, whenever tragedy strikes, it seems like "more video surveillance, now powered by AI" is always the first response:

More CCTV to be installed in train stations after knife attack: Heidi Alexander has announced that the Government will invest in "improved" CCTV systems across the network, and that facial recognition could be introduced in stations following Saturday's attack. "We are investing in improved CCTV in stations and the Home Office will soon be launching a consultation on more facial recognition technology which could be deployed in stations as well. So we take the safety of the travelling public incredibly seriously."

Automatic license-plate readers (ALPRs) used to be a tool for parking enforcement and specific investigations, but firms like Flock Safety have taken ALPRs into a new phase by offering cloud-hosted, networked plate-reading systems to neighborhoods, municipalities, and private groups. The result is a searchable movement history for any car observed by the network. Supporters point to solved car thefts and missing-person leads. Clearly, however, these systems amount to distributed mass surveillance, with weak governance and potential for mission creep (including law-enforcement or immigration enforcement access). The ACLU and other groups have documented this tension and pressed for limits.

Additionally, there has been plenty of media frenzy around Flock Safety's products specifically and their reliability:

A retired veteran named Lee Schmidt wanted to know how often Norfolk, Virginia's 176 Flock Safety automated license-plate-reader cameras were tracking him. The answer, according to a U.S. District Court lawsuit filed in September, was more than four times a day, or 526 times from mid-February to early July. No, there's no warrant out for Schmidt's arrest, nor is there a warrant for Schmidt's co-plaintiff, Crystal Arrington, whom the system tagged 849 times in roughly the same period.
(via Jalopnik)

Police departments now carry many more mobile recording tools than a decade ago, allowing a city's static CCTV to be extended dynamically: vehicle dash cameras, body-worn cameras (BWCs), and in some places live-streaming CCTV or automated alerts pushed to officers' phones. Bodycams were originally promoted as accountability tools, and they have provided useful evidence, but they also create new data flows that can be fused with other systems (license-plate databases, facial-recognition engines, location logs), multiplying privacy and misuse risks. Many researchers, advocacy groups, and watchdogs warn that pairing BWCs with facial recognition or AI analytics can make ubiquitous identification possible, and that policies and safeguards are lagging. Recent reporting has uncovered operations where real-time facial-recognition systems were used in ways not disclosed to local legislatures or the public, demonstrating how rapidly policy gets outpaced by deployment. One of many recent examples is an extended secret live face-matching program in New Orleans that led to arrests and subsequent controversy about legality and oversight.

Drones and aerial systems add another layer. Airborne or rooftop cameras can rapidly expand coverage areas and make "seeing everything" more practical, with similar debates about oversight, warranting, and civil-liberties protections.

Modern cars increasingly ship with external and internal cameras, radar, microphones, and cloud connections. Tesla specifically has been a headline example, where in-car and exterior cameras record for features like Sentry Mode, Autopilot/FSD development, and safety investigations. Reporting has shown that internal videos captured by cars have, on multiple occasions, been accessed by company personnel and shared outside expected channels, sparking alarm about how that sensitive footage is handled.
Videos of private interiors, garages, and accidents have leaked, and workers have admitted to circulating clips. Regulators, privacy groups, and media have flagged the risks of always-on vehicle cameras whose footage can be used beyond owners' expectations. Automakers and suppliers are rapidly adding cameras for driver monitoring, ADAS (advanced driver-assistance systems), and event recording, which raises questions about consent when cars record passengers or passers-by, or are subject to remote access by manufacturers, insurers, or law enforcement, especially with cloud-connected vehicles.

Ring doorbells and other cloud-connected home security cameras have created an informal, semi-public surveillance layer: millions of privately owned cameras facing streets and porches that can be searched, shared, and, in many jurisdictions, accessed by police via relationships or tools. Amazon's Ring drew intense scrutiny for police partnerships and for security practices that at times exposed footage to unauthorized access. A private company mediates a vast public-facing camera network, and incentives push toward more sharing, not less. Another recent example of creeping features, Ring's "Search Party" AI pet-finder feature (enabled by default), also raised fresh concerns about consent and the expansion of automated scanning of users' cloud footage.

While smartphones don't (yet) record video all by themselves, the idea that our phones and earbuds "listen" only when we ask them to has been punctured repeatedly. Investigations disclosed that contractors for Apple, Google, and Amazon listened to small samples of voice-assistant recordings, often including accidentally captured private conversations, to train and improve models. There have also been appalling edge cases, like smart speakers accidentally sending recordings to contacts, or assistants waking and recording without clear triggers.
These incidents underline how easily ambient audio can become recorded, labeled, and routed into human or machine review. With AI assistants (Siri, Gemini, etc.) integrated on phones and wearables, for which processing often requires sending audio or text to the cloud, new features make it even harder for users to keep control of what's retained, analyzed, or used to personalize models.

A recent crop of AI wearables, like Humane's AI Pin, the Friend AI pendants, and similar always-listening companions, aim to deliver an AI interface that's untethered from a phone. They typically depend on continuous audio capture and sometimes even outward-facing cameras for vision features. These devices sparked two predictable controversies: Humane's AI Pin drew mixed reviews, questions about "trust lights" and bystander notice, and eventually a shutdown/asset sale that stranded some buyers, which is yet another example of how the technology and business models create risks for both privacy and consumers. Independent wearables like Friend have also raised alarm among reviewers about always-listening behavior without clear opt-out tools. Even though these devices might not necessarily have cameras (yet) to record video footage, they usually come with always-on microphones and can, at the very least, scan for nearby Bluetooth and WiFi devices to collect valuable insights on the user's surroundings and, more precisely, other users in close proximity.

A device category that banks primarily on its video-recording capabilities is smart glasses. Unlike the glassholes from a decade ago, this time it seems fashionable and socially accepted to wear the latest cloud-connected glasses. Faced with the very same issues mentioned previously for different device types, smart glasses, too, create immense risks for privacy, with little to no policy in place to protect bystanders.
There are several satellite constellations in orbit housing advanced imaging satellites capable of capturing high-resolution, close-up images of Earth's surface, sometimes referred to as "spy satellites". These satellites provide a range of services, from military reconnaissance to commercial imagery. Notable constellations by private companies include GeoEye's GeoEye-1, Maxar's WorldView, Airbus' Pléiades, Spot Image's SPOT, and Planet Labs' RapidEye, Dove, and SkySat.

Surveillance tech frequently arrives with a compelling use case, like deterring car theft, finding a missing child, automating a customer queue, or making life easier with audio and visual interactions. But it also tends to become infrastructural and persistent. When private corporations, local governments, and individual citizens all accumulate recordings, we end up with a mosaic of surveillance that's hard to govern because it's distributed across actors with different incentives. In addition, surveillance technologies rarely affect everyone equally. Studies and analyses show disproportionate impacts on already-targeted communities, with increased policing, mistaken identifications from biased models, and chilling effects on protest, religion, or free association. These systems entrench existing power imbalances and are primarily beneficial to the people in charge of watching rather than the majority that's being watched.

Ultimately, surveillance not only makes us more visible; we're also more persistently recorded, indexed, and analyzable than ever before. Each camera, microphone, and AI assistant may be framed as a single, sensible feature. Taken together, however, they form a dense information layer about who we are, where we go, and how we behave. The public debate now needs to shift from "Can we build this?" to "Do we really want this?".
For that, we need an informed public that understands the impact of all these individual technologies and what it's being asked to give up in exchange for the perceived sense of safety these systems offer.

CCTV: Avigilon (Motorola Solutions), Axis Communications, Bosch Security Systems, Sony Professional

ALPR: Axis Communications, Bosch Security Systems, Flock Safety, Kapsch TrafficCom, Motorola Solutions (WatchGuard), PlateSmart Technologies

Dash and body cameras: Digital Ally, Kustom Signals, Motorola Solutions (WatchGuard), Transcend Information

Drones and aerial systems: Flock Safety, Lockheed Martin (Procerus Technologies), Quantum Systems

Connected vehicles: Mercedes-Benz

Smart doorbells: Eufy Security, Nest Hello (Google), Ring (Amazon), SkyBell (Honeywell)

Open questions: Bystander privacy (how do you notify people they're being recorded?); vendor and lifecycle risk (cloud dependence, subscription models, and what happens to device functionality or stored data if a startup folds)

Smart glasses: Gentle Monster, Gucci (+ Snap), Oakley (+ Meta), Ray-Ban (+ Meta), Spectacles (Snap)

Imaging satellites: BAE Systems, General Dynamics (SATCOM), Thales Alenia Space

マリウス 1 month ago

Zeit v1

Zeit began nearly five years ago as a pet project. I needed a lightweight, user-friendly tool to track time, with the added capability to export data for integration with other, mostly home-brewed software used for accounting and invoicing. At the time, I had only a basic set of features in mind and no clear long-term plan for the tool. Little did I know that I seemingly wasn't alone in my need for a time tracker that stays out of the way and doesn't come with an attached (paid) cloud service.

Whenever users requested new features or options, I either implemented them myself or accepted their pull requests without much hesitation. My guiding principle was simple: If a small enhancement could make the software more useful to even one other person, I was happy to introduce it. Nearly five years after its initial release, Zeit has stood the test of time (hah) quite well and continues to grow in popularity on GitHub. What began as a minimal command-line time-tracking utility has evolved into a more feature-rich program. Depending on your perspective, you might say it now has a few too many features.

Fast forward to today, and the first version of Zeit (referred to as zeit v0) has strayed far from its original goal of being a clean, minimal command-line tool. Instead, it has grown into an unwieldy user experience, cluttered with features that are neither intuitive nor well thought out. From a code standpoint, many of the decisions that made sense a few years ago now seem suboptimal, especially as we look to the future. While I could have sifted through the original v0 codebase to clean it up and remove features that were added by contributors who eventually stopped maintaining them, I opted to rewrite Zeit from scratch. The new version is built on more modern dependencies, with a cleaner, more streamlined codebase, and is free of the "one-off" features added for individual users who no longer appear to use Zeit.
Over the past five years, I've learned a great deal from user feedback. With Zeit v1, I've implemented the most practical and useful feature requests in a way that feels cohesive and polished, rather than like an afterthought.

Starting with the database, Zeit v1 replaces the old storage engine, BuntDB, with BadgerDB v4. BadgerDB is an embeddable, persistent, and fast key-value (KV) database written in pure Go. This new database not only stores time entries but also user-specific configurations, eliminating the need for a separate config file and reducing external dependencies. This shift addresses past issues with unnecessary dependencies, such as Viper, that eventually caused more headaches than they benefited the project.

"Why not use SQLite?" you might ask. The answer is simple: cross-compiling. Unfortunately, there is no fully compliant SQLite implementation written in pure Go. Using the official SQLite implementation requires , which complicates cross-compilation for various platforms. Additionally, the data Zeit generates fits well into a key-value store and doesn't require the complexity of a relational database on the client side.

Next, I cleaned up the overall project structure and organized it into distinct areas for the database layer, internal business logic, the command-line interface (CLI), and output. Unlike v0, Zeit now uses Charm's lipgloss v2 library to render CLI output, taking advantage of the terminal's default theme for colors. This results in a more seamless integration of Zeit into your terminal user interface (TUI) aesthetics.

Additionally, most Zeit commands (e.g., , , etc.) now support JSON output alongside the standard CLI output. This makes it easier to integrate Zeit with other tools. For example, to build your own project/task picker, you can leverage the JSON output of the command and use to create a list of project/task entries.
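As an illustration of such a picker pipeline: the actual Zeit subcommand name and its JSON schema are elided above, so `zeit list --json` and the `project`/`task` field names here are assumptions to be checked against the real tool.

```shell
# Hypothetical pipeline: turn Zeit's JSON output into unique
# "project/task" lines and hand them to a dmenu-style picker.
zeit list --json \
  | jq -r '.[] | "\(.project)/\(.task)"' \
  | sort -u \
  | dmenu -p 'track:'
```

The jq filter does the interesting work: it maps each JSON entry to a single selectable line, which any dmenu launcher can then present as a menu.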
You can then feed that list into your favorite dmenu launcher, simplifying the process of managing your time-tracking data:

One change that will break compatibility with existing integrations is the new command-line interface, which adopts a similar approach to many of my newer tools, such as whats. In the past, Zeit users had to learn and memorize command-line flags like , , , and even less intuitive ones like or . While Zeit v1 still supports similar flags, its primary focus now shifts to a more natural way of using command-line arguments:

As demonstrated by this otherwise complex example, which tracks a new block of time with a note on the personal project and knowledge task, starting four hours ago and ending ten minutes ago, the more natural approach to command-line arguments significantly enhances a user's understanding of the command. However, because Zeit still supports flags, the same command can also be executed using those:

The structure is kept (almost) identical across the various commands and can hence be used for filters as well: This command lists all tracked time blocks for the personal project and knowledge task, from last week (at this time) until two hours ago today. As shown, the need for a detailed explanation is minimal, as the command's purpose is easily understood just by looking at it. Similarly, as demonstrated in the previous example, the same flags can also be used with the command:

If you use Zeit daily, you may find the natural-arguments interface more intuitive and enjoyable than working with flags. However, if you're building a tool that interacts with to inject or extract data, you'll likely prefer sticking to the more programmatically robust flags.

With the complete rewrite of Zeit, one major change is its license. Historically, all of my software projects on GitHub have been published under the GNU GPL v3 license, allowing anyone to use the software under conditions deemed appropriate by the FSF and the OSI.
However, as I explained in a previous status update here , these organizations were founded in a different era and, in my view, have failed to adapt to the realities of today. One glaring example of this is their incoherent stance on freedom , particularly when it comes to freedom of speech. It’s curious that many advocates of the GNU/OSI philosophies call for limitations on free speech while insisting that software must be usable without restriction in order to qualify as free and open source . To put it simply, Zeit v1 is no longer published under the GNU GPL or any of the OSI-approved licenses. Instead, it is now licensed under a partially modified HL3 license, which I’ve dubbed the SEGV license . This is not an open source license in the traditional (and, in my opinion, flawed) sense, but rather a source-available license. That said, I reject the taxonomy imposed by the FSF and the OSI and will continue to call my software open source , as the license change won’t have any practical impact for the average user. However, it is designed to ideally prevent certain groups whose goals I consider morally wrong from using the software. I’ve completed the first release of Zeit v1 , marking the official debut of this complete rewrite, now with version number v1.0.0 . Along with the new version, Zeit also has an official website: zeit.observer While the site currently serves as a simple landing page, it will grow in functionality over time, as indicated by the features listed as coming “soon” . Please note, however, that this new version is a full rewrite and not compatible with existing Zeit v0 databases. If you’re currently using Zeit v0 , worry not: You can export your entries using , and then import them into v1 with the new command. Just make sure you first export the database using Zeit v0 and only then upgrade to Zeit v1 and run the import command. 
If you’re looking for a command-line utility for time tracking, especially if you’re already using another tracker, I’d love for you to give Zeit v1 a try and share your thoughts . Let me know your top three missing features and which platforms you typically use for time tracking.

マリウス 1 month ago

A Word on Omarchy

Pro tip: If you’ve arrived here via a link aggregator, feel free to skip ahead to the Summary for a conveniently digestible tl;dr that spares you all the tedious details, yet still provides enough ammunition to trash-talk this post in the comments of whatever platform you stumbled upon it on. In recent months, there has been a noticeable shift away from the Windows desktop, as well as from macOS , to Linux, driven by various frustrations, such as the Windows 11 Recall feature. While there have historically been more than enough Linux distributions to choose from, for each skill level and amount of desired pain, a recent Arch -based configuration has seemingly made strides across the Linux landscape: Omarchy . This pre-configured Arch system is the brainchild of David Heinemeier Hansson , a Danish web developer and entrepreneur known as one of the co-founders of 37signals and for developing the Ruby on Rails framework. The name Omarchy appears to be a portmanteau of Arch , the Linux distribution that Hansson ’s configuration is based upon, and お任せ, which translates to omakase and means to leave something up to someone else (任せる, makaseru, to entrust ). When ordering omakase in a restaurant, you’re leaving it up to the chef to serve you whatever they think is best. Oma(kase) + (A)rch + y is supposedly where the name comes from. It’s important to note that, contrary to what Hansson says in the introduction video , Omarchy is not an actual Linux distribution . Instead, it’s an opinionated installation of Arch Linux that aims to make it easy to set up and run an Arch desktop, seemingly with as much TUI-hacker-esque aesthetic as possible. Omarchy comes bundled with Hyprland , a tiling window manager that focuses on customizability and graphic effects, but apparently not as much on code quality and safety . However, the sudden hype around Omarchy , which at this point has attracted attention and seemingly even funding from companies like Framework (Computer Inc.)
( attention ) and Cloudflare ( attention and seemingly funding ), made me want to take a closer look at the supposed cool kid on the block to understand what it was all about. Omarchy is a pre-configured installation of the Arch distribution that comes with a TUI installer on a 6.2GB ISO. It ships with a collection of shell scripts that use existing FOSS software (e.g. walker ) to implement individual features. The project is based on the work that the FOSS community, especially the Arch Linux maintainers, have done over the years, and ties together individual components to offer a supposed ready-to-use desktop experience. Omarchy also adds some links to different websites, disguised as “Apps” , but more on that later. This, however, seems to be enough to spark an avalanche of attention and, more importantly, financial support for the project. Anyway, let’s give Omarchy an actual try, and see what chef Hansson recommended to us. The Omarchy installer is a simple text user interface that tries to replicate what Charm has pioneered with their TUI libraries: A smooth command-line interface that preserves the simplicity of the good old days , yet enhances the experience with playful colors, emojis, and animations for the younger, future generation of users. Unlike mature installers, Omarchy ’s installer script doesn’t allow for much customization, which is probably to be expected with an “Opinionated Arch/Hyprland Setup” . Info: Omarchy uses gum , a Charm tool, under the hood. One of the first things that struck me as unexpected was the fact that I was able to use as my user password, an easy-to-guess word that Omarchy will also use for the drive encryption, without any resistance from the installer. Most modern Linux distributions actively prevent users from setting easily guessable or brute-forceable passwords. 
Moreover, taking into account that the system relies heavily on sudo (instead of the more modern doas ), and also considering that the default installation configures the maximum number of password retries to 10 (instead of the more cautious limit of three), it raises an important question: Does Omarchy care about security? Let’s take a look at the Omarchy manual to find out: Omarchy takes security extremely seriously. This is meant to be an operating system that you can use to do Real Work in the Real World . Where losing a laptop can’t lead to a security emergency. According to the manual, taking security extremely seriously means enabling full-disk encryption (but without rejecting simple keys), blocking all ports except for 22 (SSH, on a desktop) and 53317 (LocalSend), continuously running (even though staying bleeding-edge has repeatedly proven to be an insufficient security measure in the past) and maintaining a Cloudflare-protected package mirror. That’s seemingly all. Hm. Proceeding with the installation, the TUI prompts for an email address, which makes the whole process feel a bit like the Windows setup routine. While one might assume Omarchy is simply trying to accommodate its new user base, the actual reason appears to be much simpler: . If, however, you were expecting Omarchy to set up GPG with proper defaults, configure SSH with equally secure defaults, and perhaps offer an option to create new GPG/SSH keys or import existing ones, in order to enable proper commit and push signing for Git, you will be left disappointed. Unfortunately, none of this is the case. The Git config doesn’t enable commit or push signing, neither the GPG nor the SSH client configurations set secure defaults, and the user isn’t offered a way to import existing keys or create new ones. Given that Hansson himself usually does not sign his commits, it seems that these aspects are not particularly high on the project’s list of priorities.
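For what it’s worth, reverting to stricter limits is a tiny change. A sketch, with keys taken from the standard pam_faillock and sudoers documentation and values of my own choosing (the files are written to the working directory here; the real targets are /etc/security/faillock.conf and a visudo-managed file under /etc/sudoers.d/):

```shell
# Sketch: stricter lockout/retry limits than Omarchy's defaults.
cat > faillock.conf <<'EOF'
# lock the account after three failed attempts instead of ten
deny = 3
# keep it locked for ten minutes before allowing another try
unlock_time = 600
EOF

cat > sudoers-limits <<'EOF'
Defaults passwd_tries=3
EOF
```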
The rest of the installer routine is fairly straightforward and offers little customization, so I won’t bore you with the details, but you can check the screenshots below. After initially downloading the official ISO file, the first boot of the system greets you with a terminal window informing you that it needs to update a few packages . And by “a few” it means another 1.8GB. I’m still not entirely sure why the v3.0.2 ISO is a hefty 6.2GB, or why it requires downloading an additional 1.8GB after installation on a system with internet access. For comparison, the official Arch installer image is just 1.4GB in size . While downloading the updates (which took over an hour for me), and with over 15GB of storage consumed on my hard drive, I set out to experience the full Omarchy goodness! After hovering over a few icons on the Waybar , I discovered the menu button on the very left. It’s not a traditional menu, but rather a shortcut to the aforementioned walker launcher tool, which contains a few submenus: The menu reads: Apps, Learn, Trigger, Style, Setup, Install, Remove, Update, About, System; It feels like a random assortment of categories, settings, package manager subcommands, and actions. From a UX perspective, this main menu doesn’t make much sense to me. But I’m feeling lucky, so let’s just go ahead and type “Browser” ! Hm, nothing. “Firefox” , maybe? Nope. “Chrome” ? Nah. “Chromium” ? No. Unfortunately the search in the menu is not universal and requires you to first click into the Apps category. The Apps category seems to list all available GUI (and some TUI) applications. 
Let’s take a look at the default apps that Omarchy comes with: The bundled “apps” are: 1Password, Alacritty, Basecamp, Bluetooth, Calculator, ChatGPT, Chromium, Discord, Disk Usage, Docker, Document Viewer, Electron 37, Figma, Files, GitHub, Google Contacts, Google Messages, Google Photos, HEY, Image Viewer, Kdenlive, LibreOffice, LibreOffice Base, LibreOffice Calc, LibreOffice Draw, LibreOffice Impress, LibreOffice Math, LibreOffice Writer, Limine-snapper-restore, LocalSend, Media Player, Neovim, OBS Studio, Obsidian, OpenJDK Java 25 Console, OpenJDK Java 25 Shell, Pinta, Print Settings, Signal, Spotify, Typora, WhatsApp, X, Xournal++, YouTube, Zoom; Aside from the fact that nearly a third of the apps are essentially just browser windows pointing to websites , which leaves me wondering where the 15GB of used storage went, the selection of apps is also… well, let’s call it opinionated , for now at least. Starting with the browser, Omarchy comes with Chromium by default, specifically version 141.0.7390.107 in my case, which, unlike, for example, ungoogled-chromium , has disabled support for manifest v2 and thus doesn’t include extensions like uBlock Origin or any other advanced add-ons. In fact, the browser is completely vanilla, with no decent configuration. The only extension it includes is the copy-url extension, which serves a rather obscure purpose: Providing a non-intuitive way to copy the current page’s URL to your clipboard using an even less intuitive shortcut ( ) while using any of the “Apps” that are essentially just browser windows without browser controls. Other than that, it’s pretty much stock Chromium. It allows all third-party cookies, doesn’t send “Do Not Track” requests, sends browsing data to Google Safe Browsing , but doesn’t enforce HTTPS. It has JavaScript optimization enabled for all websites, which increases the attack surface, and it uses Google as the default search engine. 
There’s not a single opinionated setting in the configuration of the default browser on Omarchy , let alone in the choice of browser itself. And the fact that the only extension installed and active by default is an obscure workaround for the lack of URL bars in “App” windows doesn’t exactly make this first impression of what is likely one of the most important components for the typical Omarchy user very appealing. Alright, let’s have a look at what is probably the second most important app after the browser for many people in the target audience: Basecamp ! Just kidding. Obviously, it’s the terminal. Omarchy comes with Alacritty by default, which is a bit of an odd choice in 2025, especially for a desktop that seemingly prioritizes form over function, given the ultra-conservative approach the Alacritty developers take toward anything related to form and sometimes even function. I would have rather expected Kitty , WezTerm , or Ghostty . That said, Alacritty works and is fairly configurable. Unfortunately, like the browser and various other tools such as Git, there’s little to no opinionated configuration happening, especially one that would enhance integration with the Omarchy ecosystem. Omarchy seemingly highlights the availability of NeoVim by default, yet doesn’t explicitly configure Alacritty’s vi mode , leaving it at its factory defaults . In fact, aside from the keybinding for full-screen mode, which is a less-than-ideal shortcut for anyone with a keyboard smaller than 100% (unless specifically mapped), the Alacritty config doesn’t define any other shortcuts to integrate the terminal more seamlessly into the supposed opinionated workflow. Not even the desktop’s key-repeat rate is configured to a reasonable value, as it takes about a second for it to kick in. Fun fact: When you leave your computer idling on your desk, the screensaver you’ll encounter isn’t an actual hyprlock that locks your desktop and uses PAM authentication to prevent unauthorized access. 
Instead, it’s a shell script that launches a full-screen Alacritty window to display a CPU-intensive ASCII animation. While Omarchy does use hyprlock , its timeout is set longer than that of the screensaver. Because you can’t dismiss the screensaver with your mouse (only with your keyboard) it might give inexperienced users a false sense of security. This is yet another example of prioritizing gimmicky animations over actual functionality and, to some degree, security. Like the browser and the terminal emulator, the default shell configuration is a pretty basic B….ash , and useful extensions like Starship are barely configured. For example, I ed into a boilerplate Python project directory, activated its venv , and expected Starship to display some useful information, like the virtual environment name or the Python version. However, none of these details appeared in my prompt. “Surely if I do the same in a Ruby on Rails project, Starship will show me some useful info!” I thought, and ed into a Rails boilerplate project. Nope. In fact… Omarchy doesn’t come with Rails pre-installed. I assume Hansson ’s target audience doesn’t primarily consist of Rails developers, despite the unconditional , but let’s not get ahead of ourselves. It is nevertheless puzzling that Omarchy doesn’t come with at least Ruby pre-installed. I find it a bit odd that the person who literally built the most successful Ruby framework on earth is pre-installing “Apps” like HEY , Spotify , and X , but not his own FOSS creation or even just the Ruby interpreter. If you want Rails , you have to navigate through the menu to “Install” , then “Development” , and finally select “Ruby on Rails” to make RoR available on your system. Not just Ruby , though. And even going the extra mile to do so still won’t make Starship display any additional useful info when inside a Rails project folder.
PS: The script that installs these development tools bypasses the system’s default package manager and repository, opting instead to use mise to install interpreters and compilers. This is yet another example of security not being taken quite as seriously as it should be. At the very least, the script should inform the user that this is about to happen and offer the option to use the package manager instead, if the distributed version meets the user’s needs. Fun fact: At the time of writing, mise installed Ruby 3.4.7. The latest package available through the package manager is – you guessed it – 3.4.7. As mentioned earlier, Omarchy is built entirely using Bash scripts, and there’s nothing inherently wrong with that. When done correctly and kept at a sane limit, Bash scripts are powerful and relatively easy to maintain. However, the scripts in Omarchy are unfortunately riddled with little oversights that can cause issues. Those scripts are also used in places in which a proper software implementation would have made more sense. Take the theme scripts, for example. If you go ahead and create a new theme under and name it , and then run a couple of times until the tool hits your new theme, you can see one effect of these oversights. Nothing catastrophic happened, except now won’t work anymore. If you’d want to annoy an unsuspecting Omarchy user, you could do this: While this is such a tiny detail to complain about, it is an equally low-hanging fruit to write scripts in a way in which this won’t happen. Apart from the numerous places where globbing and word splitting can occur, there are other instances of code that could have also been written a little bit more elegantly. Take this line , for example: To drop and from the , you don’t have to call and pipe to . 
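A sketch of the alternative, with made-up values since the original snippet isn’t reproduced here:

```shell
# Idiom: Bash's built-in regex matching instead of spawning grep/sed.
version="ruby-3.4.7"
if [[ "$version" =~ ^ruby-([0-9.]+)$ ]]; then
  echo "bare version: ${BASH_REMATCH[1]}"   # captured group, no subshells needed
fi
```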
Instead, you can simply use Bash’s built-in regex matching to do so: Similarly, in this line there’s no need to test for a successful exit code with a dedicated check, when you can simply make the call from within the condition: And frankly, I have no idea what this line is supposed to be: What are you doing, Hansson? Are you alright? Make no mistake: the remarks made above are not the only issues with Hansson ’s scripts in Omarchy . While these specific examples are nitpicks, they paint a picture that only gets less colorful the more we look into the details. We can continue to gauge the quality of the scripts by looking beyond just syntax. Take, for example, the migration : This script runs five commands in sequence within an condition: first , followed by two invocations, then again, and finally . While this might work as expected “on a sunny day” , the first command could fail for various reasons. If it does, the subsequent commands may encounter issues that the script doesn’t account for, and the outcome of this migration will differ from what the author anticipated. For experienced users, the impact in such a case may be minimal, but for others, it may present a more significant hurdle. Furthermore, as can be seen here , the invoking process cannot detect if only one of the five commands failed. As a result, the entire migration might be marked as skipped , despite changes being made to the system. But we’ll look into the migrations specifically in just a moment. The real concern here, however, is the widespread absence of exception handling, either through status code checks for previously executed commands or via dependent executions (e.g., ). In most scripts, there is no validation to ensure that actions have the desired effect and that the current state actually represents the desired outcome.
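For illustration, here is roughly what such handling could look like, with generic stand-in commands rather than Omarchy’s real ones:

```shell
# Sketch: ways a migration step could actually handle failure.
step_one() { true; }                  # stand-ins for real migration commands
step_two() { touch ./desired-state; }

# a) dependent execution: step_two only runs if step_one succeeded
step_one && step_two

# b) explicit status check with a useful error path
if ! step_one; then
  echo "step_one failed, aborting migration" >&2
  exit 1
fi

# c) validate the outcome instead of trusting the command's exit code
if ! [ -e ./desired-state ]; then
  echo "expected state was not reached" >&2
  exit 1
fi
echo "migration ok"
```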
Almost all sequentially executed commands depend upon one another, yet the author doesn’t make sure that if fails the script won’t just blindly run . Note: Although sets , which would cause a script like the one presented above to fail when the first command fails, the migrations are invoked by sourcing the script. This script, in turn, invokes the script using the helper function . However, this function executes the script in the following way: In this case, the options are not inherited by the actual migration , meaning it won’t stop immediately when an error occurs. This behavior makes sense, as abruptly stopping the installation would leave the system in an undefined state. But even if we ignored that and assumed that migrations would stop when the first command fails, it still wouldn’t actually handle the exception, but merely stop the following commands from performing actions on an unexpected state. To understand the broader issue and its impact on security, we need to dive deeper into the system’s functioning, and especially into migrations . This helps illustrate how the fragile nature of Omarchy could take a dangerous turn, especially considering the lack of tests, let alone any dedicated testing infrastructure. Let’s start by adding some context and examining how configurations are applied in Omarchy . Inspired by his work as a web developer, Hansson has attempted to bring concepts from his web projects into the scripts that shape his Linux setup. In Omarchy , configuration changes are handled through migration scripts, as we just saw, which are in principle similar to the database migrations you might recall from Rails projects. However, unlike SQL or the Ruby DSL used in Active Record Migrations , these Bash scripts do not merely contain a structured query language; they execute actual system commands during installation. More importantly: They are not idempotent by default!
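To illustrate the idempotency point with a toy config file (the key and file names are made up; this is not an actual Omarchy migration):

```shell
# A migration that is safe to run any number of times: delete whatever value
# the key currently has, then append the desired one. The naive alternative,
# a bare append, would leave duplicate lines behind on every re-run.
conf=./demo.conf
printf 'feature = off\n' > "$conf"           # pretend this is the shipped state

sed -i '/^feature[[:space:]]*=/d' "$conf"    # drop any existing assignment
echo 'feature = on' >> "$conf"               # write the desired state

sed -i '/^feature[[:space:]]*=/d' "$conf"    # second run: same result,
echo 'feature = on' >> "$conf"               # no duplicated lines
```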
While the idea of migrations isn’t inherently problematic, in this case, it can introduce (and has introduced) issues that went unnoticed by the Omarchy maintainers for extended periods, but more on that in a second. The migration files in Omarchy are a collection of ambiguously named scripts, each containing a set of changes to the system. These changes aren’t confined to specific configuration files or components. They can be entirely arbitrary, depending on what the migration is attempting to implement at the time it is written. To modify a configuration file, these migrations typically rely on the command. For instance, the first migration intended to change from to might execute something like . The migration following it would then have to account for the previous change: . Another common approach involves removing a specific line with and appending the new settings via . However, since multiple migrations are executed sequentially, often touching the same files and running the same commands, determining the final state of a configuration file can become a tedious process. There is no clear indication of which migration modifies which file, nor any specific keywords (e.g., ) to grep for and help identify the relevant migration(s) when searching through the code. Moreover, because migrations rely on fixed paths and vary in their commands, it’s impossible to test them against mock files/folders to predict their outcome. These scripts can invoke anything from sourcing other scripts to running commands, with no restrictions on what they can or cannot do. There’s no “framework” or API within which these scripts operate. To understand what I mean by that, let’s take a quick look at a fairly widely used pile of scripts that is of similar importance to a system’s functionality: OpenRC .
While the init.d scripts in OpenRC are also just that, namely scripts, they follow a relatively well-defined API : Note: I’m not claiming that OpenRC ’s implementation is flawless or the ultimate solution, far from it. However, given the current state of the Omarchy project, it’s fair to say that OpenRC is significantly better within its existing constraints. Omarchy , however, does not use any sort of API for that matter. Instead, scripts can basically do whatever they want, in whichever way they deem adequate. Without such well-defined interfaces , it is hard to understand the effects that migrations will have, especially when changes to individual services are split across a number of different migration scripts. Here’s a fun challenge: Try to figure out how your folder looks after installation by only inspecting the migration files. To make matters worse, other scripts (outside the migration folder) may also modify configurations that were previously altered by migrations , at runtime, such as . Note: To the disappointment of every NixOS user, unlike database migrations in Rails , the migrations in Omarchy don’t support rollbacks and, judging by their current structure, are unlikely to do so moving forward. The only chance Omarchy users have in case a migration should ever brick their existing system is to make use of the available snapshots . All of this (the lack of interfaces , the missing exception handling and checks for desired outcomes, the overlapping modifications, etc.) creates a chaotic environment that is hard to oversee and maintain, which can severely compromise system integrity and, by extension, security. Want an example? On my fresh installation, I wanted to validate the following claim from the manual : Firewall is enabled by default: All incoming traffic by default except for port 22 for ssh and port 53317 for LocalSend.
We even lock down Docker access using the ufw-docker setup to prevent that your containers are accidentally exposed to the world. What I discovered upon closer inspection, however, is that Omarchy ’s firewall doesn’t actually run, despite its pre-configured ruleset . Yes, you read that right: everyone installing the v3.0.2 ISO (and presumably earlier versions) of Omarchy is left with a system that doesn’t block any of the ports that individual software might open during runtime. Please bear in mind that apart from the full-disk encryption, the firewall is the only security measure that Omarchy puts in place. And it’s off by default. Only once I manually enabled and started it using / did it activate the rules mentioned in the handbook. As highlighted in the original issue , it appears that, amid the chaos of the migration- , preflight- and first-run scripts, no one ever realized that you need to tell to explicitly enable a service for it to actually run. And because it’s all made up of Bash scripts that can do whatever they want, you cannot easily test these things to notice that the state that was expected for a specific service was not reached. Unlike in Rails , where you can initialize your (test) database and run each migration manually if necessary to make sure that the schema reaches the desired state and that the database is seeded correctly, this agglomeration of Bash scripts is not structured data. Hence, applying the same principle to something as arbitrary as a Bash script is not as easily possible, at least not without clearly defined structures and interfaces . As a user who trusted Omarchy to secure their installation, I would be upset, to say the least. The system failed to keep users safe and, more importantly, nobody noticed for a long time. There was no hotfix ISO issued, nor even a heads-up to existing users alongside the implemented fix ( e.g. ). While mistakes happen, simply brushing them under the rug feels like rather negligent behavior.
Looking into the future, the mess that is the Bash scripts certainly won’t decrease in complexity, making me doubt that things like these won’t happen again. Note: The firewall fix was listed in v2.1.1. However, on my installation of v3.0.2 the firewall would still not come up automatically. I double-checked this by running the installation of v3.0.2 twice, and both times the firewall would not autostart after the second reboot. While writing this post, v3.1.0 ( update: v3.1.1 ) was released and I also checked the issue there. v3.1.0 appears to have finally fixed the firewall issue. That said, it shows how much of a mess the whole system is when things that were identified and supposedly fixed multiple versions ago still don’t work in newer releases weeks later. Tl;dr: v3.1.0 appears to be the first release to actually fix the firewall issue, even though it was identified and presumably fixed in v2.1.1, according to the changelog. With the firewall active, it becomes apparent that Omarchy ’s configuration does indeed leave port 22 (SSH) open, even though the SSH daemon is not running by default. While I couldn’t find a clear explanation for why this port is left open on a desktop system without an active SSH server, my assumption is that it’s intended to allow the user to remotely access their workstation should they ever need to. It’s important to note that the file in Omarchy , like many other system files, remains unchanged. Users might reasonably assume that, since Omarchy intentionally leaves the SSH port open, it must have also configured the SSH server with sensible defaults. Unfortunately, this is not the case. In a typical Arch installation, users would eventually come across the “Protection” section on the OpenSSH wiki page, where they would learn about the crucial settings that should be adjusted for security reasons.
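Those adjustments boil down to a handful of sshd_config directives. A sketch with common hardening values (my suggestions, not quoted from the wiki; written to the working directory here rather than to a drop-in under /etc/ssh/sshd_config.d/):

```shell
# Sketch: a hardening drop-in for OpenSSH's sshd_config.
cat > 10-hardening.conf <<'EOF'
# no direct root logins, key-based authentication only
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
# three attempts per connection instead of the default six
MaxAuthTries 3
EOF
```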
However, when using a system like Omarchy , which is marketed as an opinionated setup that takes security seriously , users might expect these considerations to be handled for them, making it all the more troubling that no sensible configuration is in place, despite the deliberate decision to leave the SSH port open for future use. Hansson seemingly struggles to get even basics like right. The fact that there’s so little oversight, that users are allowed to set a weak password for both their account and drive encryption, and that the only other security measure put in place, the firewall, simply hasn’t been working, does not speak in favor of Omarchy . Info: ufw is an abstraction layer that simplifies managing the powerful / firewall, and its name stands for “uncomplicated firewall”. Going into this review I wasn’t expecting a hardened Linux installation with SELinux , intrusion detection mechanisms, and all these things. But Hansson is repeatedly addressing, as a target audience, users of Windows and macOS (operating systems with working firewalls and notably more security measures in place) who are frustrated with their OS. At this point, however, Omarchy is a significantly worse option for those users. Not only does Omarchy give a hard pass on Linux Security Modules , linux-hardened , musl , hardened_malloc , or tools like OpenSnitch , and fails to properly address security-related topics like SSH, GPG or maybe even AGE and AGE/Yubikey , but it in fact weakens system security with changes like the increase of and login password retries and the decrease of faillock timeouts . Omarchy appears to be undoing security measures that were put in place by the software and Arch developers, while the basis it uses for building the system does not appear to be reliable enough to protect its users from future mishaps.
Then there is the big picture of Omarchy that Hansson tries to curate, which is that of a TUI-centered, hacker -esque desktop that promises productivity and so on. He even goes as far as calling it “a pro system” . However, as we clearly see from the implementation, configuration and the project’s approach to security, this is unlike anything you would expect from a pro system . The entire image of a TUI-centered productivity environment is further contradicted in many different places, primarily by the lack of opinions and configuration . If the focus is supposed to be on “pro” usage, and especially the command-line, then… The configuration doesn’t live up to its sales pitch, and there are many aspects that either don’t make sense or aren’t truly opinionated , meaning they’re no different from a standard Arch Linux installation. In fact, I would go as far as to say that Omarchy is barely a ready-to-use system at all out of the box and requires a lot of in-depth configuration of the underlying Arch distribution for it to become actually useful. Let’s look at only a few details. There are some fairly basic things you’ll miss on the “lightweight” 15GB installation of Omarchy : With the attention Omarchy is receiving, particularly from Framework (Computer Inc.) , it is surprising that there is no option to install the system on RAID1 hardware: I would argue that RAID1 is a fairly common use case, especially with Framework (Computer Inc.) 16" laptops, which support a secondary storage device. Considering that Omarchy is positioning itself to compete against e.g. macOS with TimeMachine , yet it does not include an automated off-drive backup solution for user data by default – which by the way is just another notable shortcoming we could discuss – and given that configuring a RAID1 root with encryption is notoriously tedious on Linux, even for advanced users, the absence of this option is especially disappointing for the intended audience. 
Even more so when neither the installer nor the post-installation process provides any means to utilize the additional storage device, leaving inexperienced users seemingly stuck with the command. Omarchy does not come with a dedicated swap partition, leaving me even more puzzled about its use of 15GB of disk space. I won’t talk through why having a dedicated swap partition, ideally encrypted using the same mechanisms already in place, is a good idea. This topic has been thoroughly discussed and written about countless times. However, if you, like seemingly the Omarchy author, are unfamiliar with the benefits of having swap on Linux, I highly recommend reading this insightful write-up to get a better understanding. What I will note, however, is that the current configuration does not appear to support hibernation via the command through the use of a dynamic swap file. This leads me to believe that hibernation may not function on Omarchy. Given the ongoing battery drain issues, especially with Framework (Computer Inc.) laptops while in suspend mode, it’s clear that hibernation is an essential feature for many Linux laptop users. Additionally, it’s hard to believe that Hansson, a former Apple evangelist, wouldn’t be accustomed to the simple act of closing the lid on his laptop and expecting it to enter a light sleep mode, eventually transitioning into deep sleep to preserve battery life. If he had ever used Omarchy day-to-day on a laptop in the same way most people use their MacBooks, he would almost certainly have noticed the absence of these features. This further reinforces the impression that Omarchy is a project designed to appear robust at first glance, but reveals a surprisingly hollow foundation upon closer inspection. Let’s keep our focus on laptop use. We’ve seen Hansson showcasing his Framework (Computer Inc.) laptop on camera, so it’s reasonable to assume he’s using Omarchy on a laptop.
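To put the MacBook-style lid behavior in concrete terms: on a systemd-based system this is plain configuration, not magic. Assuming hibernation itself has been set up first (a swap area the kernel can resume from, which Omarchy’s layout does not appear to provide), a plausible laptop policy would be:

```ini
# /etc/systemd/logind.conf — illustrative sketch, NOT Omarchy's configuration:
# suspend on lid close, fall back to hibernation later
[Login]
HandleLidSwitch=suspend-then-hibernate
HandleLidSwitchExternalPower=suspend

# /etc/systemd/sleep.conf — stay suspended for 45 minutes, then hibernate
[Sleep]
HibernateDelaySec=45min
```

Note that suspend-then-hibernate only works if the kernel can actually resume from swap, which is exactly the piece that seems to be missing here.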
It’s also safe to say that many users who might genuinely want to try Omarchy will likely do so on a laptop as well. That said, as we’ve established before, closing the laptop lid doesn’t seem to trigger hibernate mode in Omarchy . But if you close the lid and slip the laptop into your backpack, surely it would activate some power-saving measures, right? At the very least, it should blank the screen, switch the CPU governor to powersaving , or perhaps even initiate suspend to RAM ? Well… Of course, I can’t test these scenarios firsthand, as I’m evaluating Omarchy within a securely confined virtual machine, where any unintended consequences are contained. Still, based on the system’s configuration, or more accurately the lack thereof, it seems unlikely that an Omarchy laptop will behave as expected. The system might switch power profiles due to the power-profiles-daemon when not plugged in, yet its functionality is not comparable to a properly configured or similar. It seems improbable that it will enter suspend to RAM or hibernate mode, and it’s doubtful any other power-saving measures (like temporarily halting non-essential background processes) will be employed to conserve battery life. Although the configuration comes with an “app” for mail, namely HEY , that platform does not support standard mail protocols . I don’t think it’s a hot take to say that probably 99% of Omarchy ’s potential users will need to work with an email system that does support IMAP and SMTP, however. Yet, the base system offers zero tools for that. I’m not even asking for anything “fancy” like ; Omarchy unfortunately doesn’t even come with the most basic tools like the command out of the box. Whether you want to send email through your provider, get a simple summary for a scheduled Cron job delivered to your local mailbox, or just debug some mail-related issue, the command is relatively essential, even on a desktop system, but it is nowhere to be found on Omarchy . 
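Speaking of scheduled jobs: on a cron-less, systemd-only system, a periodic task boils down to two small unit files. A minimal sketch for a daily backup (the unit names and the script path are hypothetical, not something Omarchy ships):

```ini
# /etc/systemd/system/backup.service — what to run
[Unit]
Description=Back up files to remote storage

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# /etc/systemd/system/backup.timer — when to run it
[Unit]
Description=Run the backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Activate it with systemctl enable --now backup.timer and inspect schedules with systemctl list-timers. Not rocket science, but also not something Omarchy explains anywhere.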
Speaking of which: Cron jobs? Not a thing on Omarchy . Want to automate backing up some files to remote storage? Get ready to dive into the wonderful world of timers , where you’ll spend hours figuring out where to create the necessary files, what they need to contain, and how to activate them. Omarchy could’ve easily included a Cron daemon or at least for the sake of convenience. But I guess this is a pro system , and if the user needs periodic jobs, they will have to figure out . Omarchy is, after all, -based … … and that’s why it makes perfect sense for it to use rootless Podman containers instead of Docker. That way, users can take advantage of quadlets and all the glorious integration. Unfortunately, Omarchy doesn’t actually use Podman . It uses plain ol’ Docker instead. Like most things in Omarchy , power monitoring and alerting are handled through a script , which is executed every 30 seconds via a timer. That’s your crash course on timers right there, Omarchy users! This script queries and then uses to parse the battery percentage and state. It’s almost comical how hacky the implementation is. Given that the system is already using UPower , which transmits power data via D-Bus , there’s a much cleaner and more efficient way to handle things. You could simply use a piece of software that connects to D-Bus to continuously monitor the power info UPower sends. Since it’s already dealing with D-Bus , it can also send a desktop notification directly to whatever notification service you’re using (like in Omarchy ’s case). No need for , , or a periodic Bash script triggered by a timer. “But where could I possibly find such a piece of software?” , you might ask. Worry not, Hr. Hansson , I have just the thing you need ! That said, I can understand that you, Hr. 
Hansson, might be somewhat reluctant to place your trust in software created by someone who is actively delving into the intricacies of your project, rather than merely offering a superficial YouTube interview to casually navigate the Hyprland UI for half an hour. Of course, Hr. Hansson, you could have always taken the initiative to develop a more robust solution yourself, in a proper, lower-level language, and neatly integrated it into your Omarchy repository. But we will explore why this likely hasn’t been a priority for you, Hr. Hansson, in just a moment. While the author’s previous attempt at a developer setup still came with Zellij, this time his opinions seemingly changed and Omarchy doesn’t include Zellij, Tmux, or even screen anymore. And nope, picocom isn’t there either, so good luck reading that Arduino output from . That moment when you realize that you’ve spent hours figuring out timers, only to find out that you can’t actually back up those files to remote storage because there’s no , let alone or . At least there is the command. :-) Unfortunately not, but Omarchy comes with and by default. I could go on and on, and scavenge through the rest of the unconfigured system and the scripts, like for example the one, where Omarchy once again seems to prefer -ing random scripts from the internet (or anyone man-in-the-middle-ing it) rather than using the system package manager to install Tailscale. But, for the sake of both your sanity and mine, I’ll stop here. As we’ve seen, Omarchy is more unconfigured than it is opinionated. Can you simply install all the missing bits and pieces and configure them yourself? Sure! But then what is the point of this supposed “perfect developer setup” or “pro system” to begin with? In terms of the “opinionated” buzzword, most actual opinions I’ve come across so far are mainly about colors, themes, and security measures.
I won’t dare to judge the former two, but as for the latter, well, unfortunately they’re the wrong opinions. In terms of implementation: Omarchy is just scripts, scripts, and more scripts, with no proper structure or (CI) tests. BTW: A quick shout-out to your favorite tech influencer, who probably has at least one video reviewing the Omarchy project without mentioning anything along these lines. It is unfortunate that these influential people barely scratch the surface on a topic like this, and it is even more saddening that recording a 30-minute video of someone clicking around on a UI seemingly counts as a legitimate “review” these days. The primary focus for many of these people is seemingly on pumping out content and generating hype for views and attention rather than providing a thoughtful, thorough analysis. ( Alright, we’re almost there. Stick with me, we’re in the home stretch. ) The Omarchy manual: the ultimate repository of Omarchy wisdom, all packed into 33 pages, clocking in at a little over 10,000 words. For context, this post on Omarchy alone is almost 10,000 words long. As is the case with the rest of the system, the documentation also adheres to Hansson’s form-over-function approach. I’ve mentioned this before, but it bears repeating: Omarchy doesn’t offer any built-in for its scripts, let alone auto-completion, nor does it come with traditional pages. The documentation is tucked away in yet another SaaS product from Hansson’s company ( Writebook ) and its focus is predominantly on themes, more themes, creating your own themes, and of course, the ever-evolving hotkeys. Beyond that, the manual mostly covers how to locate configuration files for individual UI components and offers guidance on how to configure Hyprland for a range of what feels like outrageously expensive peripherals. For the truly informative content, look no further than the shell function guide, with gems such as this one: Format an entire disk with a single ext4 partition. Be careful!
Wow, thanks, Professor Oak, I will be! :-) On a more serious note, though, the documentation leaves much to be desired, as evidenced by the user questions over on the GitHub discussions page . Take this question , which unintentionally sums up the Omarchy experience for probably many inexperienced users: I installed this from github without knowing what I was getting into (the page is very minimal for a project of this size, and I forgot there was a link in the footnotes). Please tell me there’s a way to remove Omarchy without wiping my entire computer. I lost my flashdrive, and don’t have a way to back up all my important files anymore. While this may seem comical on the surface, it’s a sad testament to how Omarchy appears to have a knack for luring in unsuspecting users with flashy visuals and so called “reviews” on YouTube, only to leave them stranded without adequate documentation. The only recourse? Relying on the solid Arch docs, which is an abrupt plunge into the deep end, given that Arch assumes you’re at least familiar with its very basics and that you know how you set up your own system. Maybe GitHub isn’t the most representative forum for the project’s support; I haven’t tried Discord, for example. But no matter where the community is, users should be able to fend for themselves with proper documentation, turning to others only as a last resort. It’s difficult to compile a list of things that could have made Omarchy a reasonable setup for people to consider, mainly because, in my opinion, the core of the setup – scripts doing things they shouldn’t or that should have been handled by other means (e.g., the package manager) – is fundamentally flawed. That said, I do think it’s worth mentioning a few improvements that, if implemented, could have made Omarchy a less bad option. Configuration files should not be altered through loose migration scripts. 
Instead, updated configuration files should be provided directly (ideally via packages, see below) and applied as patches using a mechanism similar to etc-update or dpkg. This approach ensures clarity and reduces confusion, preserves user modifications, and aligns with established best practices. Improve on the user experience where necessary and maybe even contribute improvements back. Use proper software implementations where appropriate. Want a fancy screensaver? Extend Hyprlock instead of awkwardly repurposing a fullscreen terminal window to mimic one. Need to display power status notifications without relying on GNOME or KDE components? Develop a lightweight solution that integrates cleanly with the desktop environment, or extend the existing Waybar battery widget to send notifications. Don’t like existing Linux “App Store” options? Build your own, rather than diverting a launcher from its intended use only to run Bash scripts that install packages from third-party sources on a system that has a perfectly good package manager in place. Arguably the most crucial improvement: Package the required software and install it via the system’s package manager. Avoid relying on brittle scripts, third-party tools like mise, or worse, piping scripts directly into a shell. I understand that the author is coming from an operating system where it’s sort of fine to do this and to use dedicated software to manage individual Ruby versions. However, we have to take into consideration that macOS specifically has a significantly more advanced security architecture in place than (unfortunately) most out-of-the-box Linux installations have, let alone Omarchy. On Hansson’s setup the approach is neither sensible nor advisable, especially given that it’s ultimately a system that is built around a proper package manager. If you want multiple versions of Ruby, package them and use slotting (or the equivalent of it on the distribution that you’re using, e.g.
installation to version-specific directories on Arch ). Much of what the migrations and other scripts attempt to do could, and should, have been achieved through well-maintained packages and the proven mechanisms of a package manager. Whether it’s Gentoo, NixOS, or Ubuntu, each distribution operates in its own unique way, offering users a distinct set of tools and defaults. Yet, they all share one common trait: A set of strong, well-defined opinions that shape the system. Omarchy, in contrast, feels like little more than a glorified collection of Hyprland configurations atop an unopinionated, barebones foundation. If you’re going to have opinions, don’t limit them to just nice colors and cute little wallpapers. Form opinions on the tools that truly matter, on how those tools should be configured, and on the more intricate, challenging aspects of the system, not just the surface-level, easy choices. Have opinions on the really sticky and complicated stuff, like power-saving modes, redundant storage, critical system functionality, and security. Above all, cultivate reasonable opinions, ones that others can get behind, and build a system that reflects those. Comprehensive documentation is essential to help users understand how the system works. Currently, there’s no clear explanation for the myriad Bash scripts, nor is there any user-facing guidance on how global system updates affect individual configuration files. ( finally… ) Omarchy feels like a project created by a Linux newcomer, utterly captivated by all the cool things that Linux can do, but lacking the architectural knowledge to get the basics right, and the experience to give each tool a thoughtful review. Instead of carefully selecting software and ensuring that everything works as promised, the approach seems to be more about throwing everything that somehow looks cool into a pile.
There’s no attention to sensible defaults, no real quality control, and certainly no verification that the setup won’t end up causing harm or, at the very least, frustration for the user. The primary focus seems to be on creating a visually appealing but otherwise hollow product . Moreover, the entire Omarchy ecosystem is held together by often poorly written Bash scripts that lack any structure, let alone properly defined interfaces . Software packages are being installed via or similar mechanisms, rather than provided as properly packaged solutions via a package manager. Hansson is quick to label Omarchy a Linux distribution , yet he seems reluctant to engage with the foundational work that defines a true distribution: The development and proper packaging (“distribution”) of software . Whenever Hansson seeks a software (or software version) that is unavailable in the Arch package repositories, he bypasses the proper process of packaging it for the system. Instead, he resorts to running arbitrary scripts or tools that download the required software from third-party sources, rather than offering the desired versions through a more standardized package repository. Hansson also appears to avoid using lower-level programming languages to implement features in a more robust and maintainable manner at all costs , often opting instead for makeshift solutions, such as executing “hacky” Bash scripts through timers. A closer look at his GitHub profile and Basecamp’s repositories reveals that Hansson has seemingly worked exclusively with Ruby and JavaScript , with most contributions to more complex projects, like or , coming from other developers. This observation is not meant to diminish the author’s profession and accomplishments as a web developer, but it highlights the lack of experience in areas such as systems programming, which are crucial for the type of work required to build and maintain a proper Linux distribution. 
Speaking of packages, the system gobbles up 15GB of storage on a basic install, yet fails to deliver truly useful or high-quality software. It includes a hodgepodge of packages, like OpenJDK and websites of paid services in “App” disguise, but lacks any real optimization for specific use cases. Despite Omarchy claiming to be opinionated, most of the included software is left at its default settings, straight from the developers. Given Hansson’s famously strong opinions on everything, it makes me wonder if the Omarchy author simply hasn’t yet gained the experience necessary to develop clear, informed stances on individual configurations. Moreover, his prioritization of his paid products like Basecamp and HEY over his own free software like Rails leaves a distinctly bitter aftertaste when considering Omarchy. What’s even more baffling is that seemingly no one at Framework (Computer Inc.) or Cloudflare appears to have properly vetted the project they’re directing attention (and sometimes financial support) to. I find it hard to believe that knowledgeable people at either company have looked at Omarchy and thought, “Out of all the Linux distributions out there, this barely configured stack of poorly written Bash scripts on top of Arch is clearly the best choice for us to support!” In fact, I would go as far as to call it a slap in the face to each and every proper distro maintainer and FOSS developer. Furthermore, I fail to see the supposed gap Omarchy is trying to fill. A fresh installation of Arch Linux, or any of its established derivatives like Manjaro, is by no means more complicated or time-consuming than Omarchy. In fact, it is Omarchy that complicates things further down the line, by including a number of unnecessary components and workarounds, especially when it comes to its chosen desktop environment.
The moment an inexperienced user wants or needs to change anything, they’ll be confronted with a jumbled mess that’s difficult to understand and even harder to manage. If you want Arch but are too lazy to read through its fantastic Wiki, then look at Manjaro; it’ll take care of you. If that’s still not to your liking, maybe explore something completely different. On the other hand, if you’re just looking to tweak your existing desktop, check out other people’s dotfiles and dive into the unixporn communities for inspiration. As boring as Fedora Workstation or Ubuntu Desktop might sound, these are solid choices for anyone who doesn’t want to waste time endlessly configuring their OS and, more importantly, wants something that works right out of the box and actually keeps them safe. Fedora Workstation comes with SELinux enabled in “enforcing” mode by default, and Ubuntu Desktop utilizes AppArmor out of the box. Note: Yes, I hear you loud and clear, SuSE fans. The moment your favorite distro gets its act together with regard to the AppArmor-SELinux transition and actually enables SELinux in enforcing mode across all its different products and versions, I will include it here as well. Omarchy is essentially an installation routine for someone else’s dotfiles slapped on top of an otherwise barebones Linux desktop. Although you could simply run its installation scripts on your existing, fully configured Arch system, it doesn’t seem to make much sense and it’s definitely not the author’s primary objective. If this were just Hansson’s personal laptop setup, nobody, including myself, would care about the oversights or eccentricities, but it is not. In fact, this project is clearly marketed to the broader, less experienced user base, with Hansson repeatedly misrepresenting Omarchy as being “for developers or anyone interested in a pro system”.
I emphasize marketed here, because Hansson is using his reach and influence in every possible way to advertise and seemingly monetize Omarchy ; Apart from the corporate financial support, the project even has its own merch that people can spend money on. Given that numerous YouTubers have been heavily promoting the project over the past few weeks, often in the same breath with Framework (Computer Inc.) , it wouldn’t be surprising to see the company soon offering it as a pre-installation option on their hardware. If you’re serious about Linux, you’re unlikely to fall for the Omarchy sales pitch. However, if you’re an inexperienced user who’s heard about Omarchy from a tech-influencer raving about it, I strongly recommend starting your Linux journey elsewhere, with a distribution that actually prioritizes your security and system integrity, and is built and maintained by people who live and breathe systems, and especially Linux. Alright, that’s it. Why don’t any of the Bash scripts and functions provide a flag or maybe even autocompletions? Why are there no Omarchy -related pages? Why does the system come with GNOME Files , which requires several gvfs processes running in the background, yet it lacks basic command-line file managers like or ? Why would you define as an for unconditionally, but not install Rails by default? Why bother shipping tools like and but fail to provide aliases for , , etc to make use of these tools by default? Why wouldn’t you set up an O.G. alias like in your defaults ? Why ship the GNOME Calculator but not include any command-line calculators (e.g., , ), forcing users to rely on basics like ? Why ship the full suite of LibreOffice, but not a single useful terminal tool like , , , etc.? Why define functions like with and without an option to enable encryption, when the rest of the system uses and ? 
And if it’s intended for use by inexperienced users primarily for things like USB sticks, why not make it instead of so the drive works across most operating systems? Why not define actually useful functions like or / ? Why doesn’t your Bash configuration include history- and command-flag-based auto-suggestions? Or a terminal-independent vi mode ? Or at least more consistent Emacs-style shortcuts? Why don’t you include some quality-of-life tools like or some other command-line community favorites? If you had to squeeze in ChatGPT , why not have Crush available by default? Why does the base install with a single running Alacritty window occupy over 2.2GB of RAM right after booting? For comparison: My Gentoo system with a single instance of Ghostty ends up at around half of that. Why set up NeoVim but not define as an alias for , or even create a symlink? And speaking of NeoVim , why does the supposedly opinionated config make NeoVim feel slower than VSCode ?
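To make the exFAT point concrete, here is a hypothetical replacement helper. The name format_usb and its behavior are my own sketch, not an Omarchy function: it formats a removable drive as exFAT for cross-OS compatibility and refuses to run without an explicit confirmation.

```shell
# Hypothetical helper, NOT part of Omarchy: format a removable drive as exFAT
# so it works on Linux, macOS, and Windows alike.
format_usb() {
  local dev="$1"
  # Refuse anything that is not an actual block device.
  if [ ! -b "$dev" ]; then
    echo "error: '$dev' is not a block device" >&2
    return 1
  fi
  # Demand explicit confirmation before doing anything destructive.
  printf 'This will ERASE %s. Type yes to continue: ' "$dev"
  read -r answer
  [ "$answer" = "yes" ] || { echo "aborted" >&2; return 1; }
  # wipefs clears old filesystem signatures before the new filesystem is created.
  wipefs -a "$dev" && mkfs.exfat "$dev"
}
```

Two small opinions — a portable filesystem and a confirmation prompt — are exactly the kind of thing an “opinionated” setup could have shipped.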

マリウス 1 month ago

The Small Web 101

Info: This is a living document. It will be updated in the future and I am happy to receive recommendations to add to it! Disclaimer: I am intentionally not featuring products – let’s say, a paid search engine run by a company that supposedly lets you find things on the small web – or platforms like IndieWeb, that publish articles like “Set up an Indie Website using Known on Amazon Web Services”, “Set up an Indie Website using Blogger” (a Google service), and “Get Started on WordPress”, because that is exactly what the small web is not about. I will not feature <insert your favorite webring here> because while webrings are great for discovering new pages there’s no point in recommending specific ones, as this depends on personal preference. Also, please don’t tell me about Gemini. Footnote: The artwork was generated using AI and further botched by me using the greatest image manipulation program.
Searchmysite
68k.news (Note: Uses Google News)
Low-Tech Magazine
OpenBSD Webzine
SoylentNews
Spike.News (Note: Aggregates different sources)
IndieBlogPage
PersonalSit.es
Ye Olde Blogroll
The Geocities Gallery
ooh.directory
BUKMARK.CLUB
neocities (browse)
Bring Back Blogging
Gossip’s Web
The Useless Web
32-Bit Cafe
Internet Artifacts

マリウス 1 month ago

Alpine Linux on a Bare Metal Server

When I began work on 📨🚕 ( MSG.TAXI ) , I kept things deliberately low-key, since I didn’t want it turning into a playground for architecture astronauts . For the web console’s tech stack, I went with the most boring yet easy-to-master CRUD stack I could find , that doesn’t depend on JavaScript . And while deploying Rails in a sane way (without resorting to cOnTaInErS ) is a massive PITA, thanks to the Rails author’s cargo-cult mentality and his followers latching onto every half-baked wannabe-revolutionary idea, like Kamal and more recently Omarchy , as if it were a philosopher’s stone, from a development perspective it’s still the most effective getting-shit-done framework I’ve used to date. Best of all, it doesn’t rely on JavaScript (aside from actions, which can be avoided with a little extra effort). Similarly, on the infrastructure side, I wanted a foundation that was as lightweight as possible and wouldn’t get in my way. And while I’m absolutely the type of person who would run a Gentoo server, I ultimately went with Alpine Linux due to its easier installation, relatively sane defaults (with a few exceptions, more on that later ), and its preference for straightforward, no-nonsense tooling that doesn’t try to hide magic behind the scenes. “Why not NixOS?” you might ask. Since I’m deploying a lightweight, home-brewed Ruby/Rails setup alongside a few other components, I didn’t see the point of wrapping everything as Nix packages just to gain the theoretical benefits of NixOS. In particular, the CI would have taken significantly longer, while the actual payoff in my case would have been negligible. Since I’m paying for 📨🚕 out of my own pocket, I wanted infrastructure that’s cheap yet reliable. With plenty of people on the internet praising Hetzner , I ended up renting AMD hardware in one of their Finnish datacenters. 
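Getting Alpine onto such a box means network-booting it from the iPXE console, which boils down to a handful of commands. The release number, mirror URLs, and kernel flavor below are illustrative and should be checked against Alpine’s current netboot directory before use:

```text
# Entered at the iPXE prompt (URLs and version are examples — verify first):
dhcp net0
set base https://dl-cdn.alpinelinux.org/alpine/v3.22/releases/x86_64/netboot
kernel ${base}/vmlinuz-lts initrd=initramfs-lts alpine_repo=https://dl-cdn.alpinelinux.org/alpine/v3.22/main modloop=${base}/modloop-lts
initrd ${base}/initramfs-lts
boot
```

Once the netboot environment is up, you log in as root and run the regular Alpine installer.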
Hetzner doesn’t offer as many Linux deployment options as cloud providers like Vultr, so I had to set up Alpine myself, which was pretty straightforward. To kickstart an Alpine installation on a Hetzner system, you just need to access the server’s iPXE console, either by renting a Hetzner KVM for an hour or by using their free vKVM feature. From there, you can launch Alpine Linux by initializing the network interface and chain-loading the file: From that point on, setup should be easy thanks to Alpine’s installer routine. If you’re using Hetzner’s vKVM feature to install Alpine, this chapter is for you. Otherwise, feel free to skip ahead. vKVM is a somewhat hacky yet ingenious trick Hetzner came up with, and it deserves a round of applause. If you’re curious about how it works under the hood, rent a real KVM once and reboot your server into vKVM mode. What you’ll see is that after enabling vKVM in Hetzner’s Robot, iPXE loads a network image, which boots a custom Linux OS. Within that OS, Hetzner launches a QEMU VM that uses your server’s drives to boot whatever you have installed. It’s basically Inception at the OS level. As long as vKVM is active (meaning the iPXE image stays loaded), your server is actually running inside this virtualized environment, with display output forwarded to your browser. Run while in vKVM mode and you’ll see, for example, your NIC showing up as a VirtIO device. Here’s the catch: When you install Alpine through this virtualized KVM environment, the generated initramfs won’t necessarily contain everything your physical server actually needs. For instance, if your server uses an NVMe drive, you may discover that the required module is missing, causing the OS to fail on boot. Hetzner’s documentation doesn’t mention this, and it can easily bite you later. Tl;dr: If you installed your system via vKVM, make sure your initramfs includes all necessary modules. After updating the configuration, regenerate it.
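One plausible version of that fix on Alpine looks like this (a sketch assuming an NVMe system and the lts kernel — adjust the features list and paths to your hardware):

```shell
# Sketch: ensure the initramfs built inside vKVM contains the modules the
# physical server needs (e.g. nvme). Run as root on the installed system.

# 1. Add the missing feature to the features="..." line, e.g.:
#    features="ata base ide scsi usb virtio ext4 nvme"
vi /etc/mkinitfs/mkinitfs.conf

# 2. Regenerate the initramfs for the installed kernel:
mkinitfs "$(ls /lib/modules)"

# 3. Verify the module actually made it in (BusyBox tools suffice here):
zcat /boot/initramfs-lts | cpio -t | grep nvme
```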
Always double-check that the regenerated really contains everything you need. Unfortunately Alpine doesn’t provide tools for this, so here’s a .tar.gz with Debian’s and . Extract it into , and note that you may need to for them to work properly, due to Alpine’s somewhat annoying defaults (more on that later ). Finally, after rebooting, make sure you’ve actually left the vKVM session. You can double check by running . If the session is still active (default: 1h), your system may have just booted back into the VM, which you can identify by its Virt-devices. As soon as your Alpine Linux system is up and running there are a couple of things that I found important to change right off the bat. Alpine’s default boot timeout is just 1 second, set in ( ). If you ever need to debug a boot-related issue over a high-latency KVM connection, you will dread that 1-second window. I recommend increasing it to 5 seconds and running to apply the change. In practice, you hopefully won’t be rebooting the server that often, so the extra four seconds won’t matter day-to-day. Alpine uses the classic to configure network settings. On Hetzner’s dedicated servers, you can either continue using DHCP for IPv4 or set the assigned IP address statically. For IPv6, you’ll be given a subnet from which you can choose your own address. Keep in mind that the first usable IPv6 on Hetzner’s dedicated servers is : Amongst the first things you do should be disabling root login and password authentication via SSH: Apart from that you might want to limit the type of key exchange methods and algorithms that your SSH server allows, depending on the type of keys that you’re using. Security by obscurity: Move your SSH server from its default port (22) to something higher up and more random to make it harder for port-scanners to hit it. Finicky but more secure: Implement port knocking and use a handy client to open the SSH port for you only, for a limited time only. 
Secure: Set up a small cloud instance to act as a Wireguard peer and configure your server’s SSH port to only accept connections from the cloud instance using a firewall rule. Use Tailscale if a dedicated Wireguard instance is beyond your expertise. You will likely want to have proper (GNU) tools around, over the defaults that Alpine comes with ( see below ). Some of the obvious choices include the following: In addition, I also like to keep a handful of convenience tools around: This is a tricky part because everyone’s monitoring setup looks different. However, there are a few things that make sense in general. Regardless of what you do with your logs, it’s generally a good idea to switch from BusyBox to something that allows for more advanced configurations, like syslog-ng: You probably should have an overview of how your hardware is doing. Depending on what type of hard drives your server has, you might want to install the corresponding monitoring packages. UFW is generally considered an uncomplicated way to implement firewalling without having to complete a CCNP Security certification beforehand: Depending on your SSH setup and whether you are running any other services that could benefit from it, installing Fail2Ban might make sense: The configuration files are located at and you should normally only create/edit the files. The easiest way to back up all the changes that you’ve made to the general configuration is by using the integrated Alpine local backup solution that was originally intended as a tool to manage diskless mode installations. I would, however, recommend manually backing up installed packages ( ) and using Restic for the rest of the system, including configuration files and important data, e.g.: However, backups depend on the data that your system produces and your desired backup target. If you’re looking for an easy-to-use, hosted but not-too-expensive one-off option, then Tarsnap might be for you.
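The Restic approach mentioned above can be sketched roughly as follows. The repository location and paths are examples; /etc/apk/world (Alpine’s list of explicitly installed packages, covered here via /etc) is what makes reinstalling easy later:

```shell
# Illustrative Restic backup of configuration and data to an SFTP target.
export RESTIC_PASSWORD_FILE=/root/.restic-password

# One-time repository setup:
restic -r sftp:backup@backup.example.com:/srv/backups init

# Recurring backup of configs (incl. /etc/apk/world) and data:
restic -r sftp:backup@backup.example.com:/srv/backups backup \
    /etc /root /var/lib --exclude /var/lib/docker

# Prune old snapshots, keeping 7 daily and 4 weekly:
restic -r sftp:backup@backup.example.com:/srv/backups forget --prune \
    --keep-daily 7 --keep-weekly 4
```

Wired into a timer or cron job, this covers the “configuration files and important data” part; application data may need dumps (e.g. databases) before the Restic run.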
You should also look into topics like local mail delivery, system integrity checks (e.g. AIDE ) and intrusion detection/prevention (e.g. CrowdSec ). Also, if you would like to get notified of various server events, check 📨🚕 ( MSG.TAXI )! :-)

One of the biggest annoyances with Alpine is BusyBox : You need SSH? That’s BusyBox. The logs? Yeah, BusyBox. Mail? That’s BusyBox, too. You want to untar an archive? BusyBox. What? It’s gzipped? Guess what, you son of a gun, gzip is also BusyBox. I understand why Alpine chose BusyBox for pretty much everything, given the context that Alpine is most often used in ( cOnTaInErS ). Unfortunately, most BusyBox implementations are incomplete or incompatible with their full GNU counterparts, leaving you wondering why something that worked flawlessly on your desktop Linux fails on the Alpine box. By the time I finished setting up the server, there was barely any BusyBox tooling left. However, I occasionally had to resort to some odd trickery to get things working.

You now have a good basis to set up whatever it is that you’re planning to use the machine for. Have fun!

Footnote: The artwork was generated using AI and further botched by me using the greatest image manipulation program .

マリウス 2 months ago

Updates 2025/Q3

This post includes personal updates and some open source project updates. Q3 has been somewhat turbulent, marked by a few unexpected turns. Chief among them, changes to my home base . In mid-Q3, the owner of the apartment I was renting abruptly decided to sell the property, effectively giving me two weeks to vacate. Thanks to my lightweight lifestyle , moving out wasn’t a major ordeal. However, in a global housing landscape distorted by corporations, wealthy boomers, and trust-fund heirs, securing a new place on such short notice proved nearly impossible. Embracing the fact that I am the master of my own time and destiny, and guided by the Taoist principle of Wu Wei , I chose not to force a solution. Instead, I placed all my belongings (including my beloved desk ) into storage and set off for a well-earned break from both the chaos and the gloom of the wet season.

Note: If you ever feel you’re not being treated with the respect you deserve, the wisest response is often to simply walk away. Go where you’re treated best. Give yourself the space to reflect, to regain clarity, and most importantly, to reconnect with your sense of self. Resist the urge to look back or second-guess your instincts. Never make life-altering decisions in haste; make them on your terms, not someone else’s. And remember, when life gives you onions, make onionade .

On a different note, my coffee equipment has been extended with a new addition: the Bookoo Themis Mini Coffee Scale , a super lightweight (~140g) and portable (8cm x 8cm x 1.5cm) coffee scale that allows me to precisely measure the amount of coffee that I’m grinding and brewing up . So far I’m very happy with the device. I don’t use its Bluetooth features at all, but when I initially tried them out of curiosity, their Android app didn’t really work.

Speaking of brewing: Unfortunately, at the end of Q3 my 9barista espresso maker seemingly broke down .
While there are no electronics or mechanics that can actually break, I suspect that during my last descaling procedure enough limestone was removed for the boiler O-ring to not properly seal the water chamber any longer. I took the 9barista apart and couldn’t visually see anything else that could make it misbehave. I have hence ordered a repair kit from the manufacturer’s online store and am waiting for it to be delivered before I can continue enjoying self-made, awesome cups of coffee.

Europe is continuing to build its surveillance machinery under claims of online safety , with the UK enforcing online age verification for major platforms, followed by the EU piloting similar acts in several member states . Even though the changes don’t affect me, I find this trend frightening, especially considering the looming threat to online privacy that is Chat Control . Even presumed safe havens from censorship and surveillance like Matrix have rolled over and implemented age verification on the Matrix.org homeserver. The CEO of Element ( New Vector Limited ) gave the following explanation for it :

Now, the Online Safety Act is a very sore point. The fact is that Matrix.org or Element is based in the UK, but even if we weren’t we would still be required to comply to the Online Safety Act for users who are in the UK, because it is British law.

That statement is not quite accurate, however. If the Matrix.org homeserver were run by an entity in a non-cooperative jurisdiction, it wouldn’t need to implement any of this. This is important, because people need to understand that despite all the globalism being talked about, not every place on earth partakes in implementing these mindless laws, even if your government would like you to think that it’s the norm. Obviously it’s not exactly easy to run a platform from within an otherwise (at least partially) sanctioned country, especially when user data is at stake.
However, with regulations like these becoming more and more widespread, my pirate mind imagines a future where such setups become viable options, given that the countries in question are scrambling for income streams and would be more than happy to gain leverage over other countries. We’ve already seen this in several instances (e.g. Yandex, Alibaba, ByteDance ( TikTok ), Telegram, DiDi, Tencent ( WeChat ), …) and given the global political climate I can imagine more services heading towards jurisdictions that allow them to avoid requesting IDs from their users or breaking security measures only so government agencies can siphon out data at will.

However, a different future outcome might be an increased focus on decentralization (or at least federation ), which would also be a welcome change. As Matrix correctly pointed out, individually run homeservers are not affected by any of this. Similarly, I haven’t heard of any instances of XMPP service operators being approached by UK officials. Unlike on centralized platforms like Discord, and wannabe-decentralized platforms like Bluesky, enforcing something like age verification on an actual federated/decentralized network is near impossible, especially with services that are being hosted outside of the jurisdiction’s reach. In the future, federated protocols, as well as peer-to-peer projects, are going to become more important than ever to counter the mindless policies enacted by the people in power. Looking at this mess from the bright side, with major big tech platforms requiring users to present IDs, we can hope for a growing number of people to cut ties with those platforms, driving them, and their perpetrators , into the ground in the long run. If you are looking for decentralized alternatives to centralized services, here is a non-exhaustive list:

Since I began publishing code online, I’ve typically used the GPL or MIT license for my projects.
However, given the current global climate and the direction the world seems to be heading, I’ve grown increasingly skeptical of these traditional licenses. While I still want to offer free and open source software to people , I find myself more and more reluctant to grant unrestricted use, particularly to organizations whose values or missions I fundamentally disagree with. Unfortunately, exclusions or prohibitions were never part of the vision behind the GNU or OSI frameworks, making most conventional open source licenses unsuitable for this kind of selective restriction. Recently, however, I came across the Hippocratic License , which is designed to address exactly these concerns. In fact, the HL3 already includes three of the four exclusions I would like to enforce: Mass surveillance, military activities, and law enforcement. The fourth, government revenue services, could likely be added in a similar manner. That said, HL3 does overreach in some areas, extending into domains where I don’t believe a software license should have jurisdiction, such as:

3.2. The Licensee SHALL:
3.2.1. Provide equal pay for equal work where the performance of such work requires equal skill, effort, and responsibility, and which are performed under similar working conditions, except where such payment is made pursuant to:
3.2.1.1. A seniority system;
3.2.1.2. A merit system;
3.2.1.3. A system which measures earnings by quantity or quality of production; or
3.2.1.4. A differential based on any other factor other than sex, gender, sexual orientation, race, ethnicity, nationality, religion, caste, age, medical disability or impairment, and/or any other like circumstances (See 29 U.S.C.A. § 206(d)(1); Article 23, United Nations Universal Declaration of Human Rights; Article 7, International Covenant on Economic, Social and Cultural Rights; Article 26, International Covenant on Civil and Political Rights); and
3.2.2. Allow for reasonable limitation of working hours and periodic holidays with pay (See Article 24, United Nations Universal Declaration of Human Rights; Article 7, International Covenant on Economic, Social and Cultural Rights).

These aspects of the Hippocratic License have already drawn significant criticism, and I would personally remove them in any variation I choose to adopt. However, a far greater concern lies with the license’s stewardship, the Organization for Ethical Source ( OES ). While supporting a good cause is typically straightforward, the organization’s founder and current president has unfortunately earned a reputation for unprofessional conduct , particularly in addressing the very issues the organization was created to confront. I’m reluctant to have my projects associated with the kind of “drama” that seems to follow the organization’s leadership. For this reason, I would likely need to distance any variation of the license as far as possible from its heritage, to avoid direct association with the OES and the leadership’s behavior.

Hence, I’m still on the lookout for alternative licenses, specifically ones that maintain the permissiveness of something like the GPL, but allow for clearly defined, legally enforceable exceptions. If you have experience in working with such licenses, I would very much appreciate your input.

PS: I’m fully aware that adopting such a license would render my software non-free in the eyes of organizations like GNU or the OSI. However, those organizations were founded in a different era and have, in my view, failed to adapt to the realities of today’s world. It’s curious how many advocates of GNU/OSI philosophies call for limitations on freedom of speech, yet insist on software being usable without restriction in order to qualify as free and open source .
This site has received what some might consider a useless or even comical update, which, however, is meant to further the goal of raising awareness about the role JavaScript plays in browsers. I got the inspiration for this from this post by sizeof.cat , a site I discovered thanks to the friendly folks in the VT100 community room . While sizeof.cat uses this feature purely for the lulz , I believe it can serve as an effective way to encourage people to disable JavaScript in their browsers by default, and to be very selective about which websites they enable it for. As a result, this website now features a similar (but edgier ) option, which you can test by enabling JavaScript for this domain and then sending this tab to the background. Go ahead, I’ll wait. :-) Like sizeof.cat ’s original implementation, this site will randomly alternate between different services . However, unlike the original, you’ll see an overlay when you return to the site, explicitly encouraging you to disable JavaScript in your browser.

After having used neomutt for several years, I grew tired of the many cogs ( notmuch , mbsync , w3m , reader , etc.) I had to maintain for the setup to function the way I expected it to, especially when my primary requirement is to not leave e-mails on the server for longer than really needed. Eventually I got fed up with my e-mail client breaking whenever I needed it most, and with having to deal with HTML e-mail on the command line, thinking that if I used an actual GUI things would be much simpler. Little did I know. I moved to Mozilla Thunderbird as my primary e-mail client a while ago.
I set up all my IMAP accounts, and I created a “Local Folder” that Mozilla sold me as maildir : Fast-forward to today and I’m stuck with a setup where I cannot access my “Local Folder” maildir with any other maildir -compliant software besides Thunderbird, because even though Mozilla called it maildir , it is not an actual maildir format :

Note this is NOT full maildir in the sense that most people, particularly linux users or mail administrators, know as maildir.

On top of that, my OpenSnitch database is overflowing with deny rules for Mozilla’s supposed “privacy respecting” software. At this point I’m not even wondering what the heck is wrong with this company anymore. Mozilla has lost it, with Firefox , and seemingly also with other software they maintain. With my e-mails now locked into something that Mozilla titles maildir even though it is not, I am looking forward to going back to where I came from. I might however replace the overly complex neomutt setup with a more modern and hopefully lightweight aerc configuration. Unfortunately, I have used Thunderbird ’s “Local Folder” account for too long and I’ll have to figure out a way to get those e-mails into an actual maildir format before I can leave Mozilla’s ecosystem once and for all.

Note on Firefox: I don’t care what your benchmark says, in everyday use Firefox is annoyingly slow despite all its wRiTtEn In RuSt components. For reasons that I didn’t care to investigate, it also seemingly hangs and waits for connections made by its extensions (e.g. password managers) and meanwhile prevents websites from loading. The amount of obscure problems that I’ve encountered with Firefox over the past years is unmatched by any other browser. Not to mention the effort that goes into checking the configuration editor with every new release and disabling all the privacy invasive bs that Mozilla keeps adding. At this point I’m not supporting Firefox any longer, despite the world’s need for a Chromium alternative.
Firefox is sucking out the air in the room, and with it dead, hopefully more effort will be put into alternatives.

I had to bring my Anker A1257 power bank to a “recycling” facility, due to it being recalled by Anker : There’s an interesting post by lumafield if you want to know the details. However, what Anker calls a recall is effectively a throw it away and we give you a voucher , because apparently we’re too stupid as societies to demand that manufacturers take back their broken junk and recycle it properly . I tried to be better by not tossing the device into the trash but bringing it to a dedicated “recycling” facility, even when I know for sure that they won’t actually recycle it or even dispose of it in a proper way. But that’s pretty much all I, as a consumer, can do in this case.

While I, too, got a lousy voucher from Anker, none of their current options fit the specifications of the A1257. I therefore decided to look for alternatives and found the Sharge Pouch Mini P2. I needed something that is lightweight, has a relatively small form factor and doesn’t rely on a single integrated charging cable which would render the device useless the moment it broke. Given how bad Anker USB-C cables usually are in terms of longevity, I would never buy into a power bank from Anker that comes with an integrated USB cable, especially when it’s the only option to charge the power bank. While the Sharge also has a fixed USB cable, it is nevertheless possible to use the USB-C port for re-charging the device. If the integrated red cable ever breaks, I can still continue using the device. As I have zero experience with this manufacturer, it remains to be seen how this 213g-heavy power bank will perform long-term. So far the power bank appears sufficient.
While charging, it barely gets warm, and even though the device lacks a display for indicating the charge level, the LED ring around the power button is sliced into four segments that make it easy to guesstimate the remaining charge. Charging it fully takes around an hour. One thing that is slightly annoying is the USB-C port, which won’t fit significantly thicker cable-heads. The maximum that I could fit were my Cable Matters USB4 cables.

The situation with GrapheneOS devices (and Android in general) mentioned in my previous updates has prompted me to revive my dormant Pinephone Pro . Trying to do so, however, I found that the original Pinephone battery was pretty much dead. Hence, I ordered a new battery that is compatible with the Samsung Galaxy J7 (models / ) – primarily because Pine64 doesn’t appear to be selling batteries for the Pinephone Pro anymore; Update: Pine64 officially discontinued the Pinephone Pro – and gave the latest version of postmarketOS (with KDE Plasma Mobile) a go.

While Pinephone Pro support has become better over the years, with at least the selfie-camera finally “working” , the Pinephone hardware unfortunately remains a dead-end. Even with a new battery the phone discharges within a few hours (with sleep enabled). In fact, it even discharges over night when turned off completely. I don’t know whether newer versions of the Pine64 hardware have fixed the hardware bugs, but judging by the search results that I’m getting, I doubt it. The UI has certainly become more usable with hardware acceleration seemingly working fine now, however the Pinephone is still too weak for most use cases. Retrospectively, the Pinephone Pro was a bad investment, as it’s effectively a wire-bound device with an integrated UPS at most, that I would argue isn’t even suitable as a development platform with all its hardware glitches ( Hello 0% battery boot loop! , just to name one).
It is in fact so bad that you cannot even boot the device without a battery in it, to use it as a regular SBC with integrated display. This is sad because the Pinephone hardware tarnishes the reputation of Linux on mobile, given that it is one of the most prominent options. If you’re considering giving Linux on mobile a try, I do not recommend the Pinephone, and I am somewhat happy that Pine64 decided to discontinue it. They have not discontinued the original Pinephone yet, however.

That said, I have been following the progress that Luca Weiss ( Z3ntu ) made with running pmOS on the Fairphone 6 and I have to admit that I’m intrigued. While it’s still a long way to go , it is nice to see a Fairphone engineer actively working on bringing mobile Linux to the device. I don’t know whether his efforts are partially funded by his employer, or whether it’s his personal interest, but I truly hope for the former.

The Fairphone is an odd value proposition for the average Android user. The native Fairphone Android experience seems average , and their Murena /e/OS partnership is highly questionable at best and might tarnish their reputation in the long run. However, I feel like they could gain a pretty large nerd-following by officially supporting mobile Linux, and actively pushing for it. At least in my books, having full-fledged postmarketOS support on their phones would be an instant money-magnet for the tech sphere, especially with the current bar being as low as the Pinephone. I will keep an eye on the progress, because I would be more than happy to give it a go once UFS support, 3D acceleration and WiFi connectivity issues are fixed.

Alternatively, it appears that the OnePlus 6T is among the best supported postmarketOS devices at this point, and from several videos I came across on YouTube it appears that performance is significantly better than the Pinephone.
However, a 7-year-old phone battery is probably cooked, and replacing it requires removal of the glued back cover. At an average price (on eBay) of around $100, plus another $30 for the replacement battery, the phone is not a particularly attractive option from a hardware standpoint.

I invested quite some time in pursuing my open source projects in the past quarter, hence there are a few updates to share.

With 📨🚕 going live, Overpush has received a lot of updates over the past months, most of which are as beneficial for self-hosted versions as they are for the hosted service. You can find an overview of the changes on the releases page.

zpoweralertd 0.0.2 was released with compatibility for Zig 0.15.1. Apart from the adjustments to compile with the latest Zig release, no new things were introduced.

Nearly five years after its initial release, zeit has weathered the test of time ( hah ) relatively well and continues to grow in popularity on GitHub . What started as a minimal command-line time-tracking utility has evolved into a program packed with a wide range of features and options. Depending on your preferences, you might, however, say that it now has one feature too many these days. zeit began as a personal pet project, with no clear long-term plan. Whenever users requested a new feature or option, I either implemented it myself or accepted their pull requests without much second thought. My mantra was simple: If a small enhancement made the software more useful to even one other person, I was happy to introduce it. Fast forward to today, and the very first version of zeit (dubbed zeit v0 ) has strayed far from its roots as a minimal and clean command-line tool. Instead, it has grown into a somewhat unwieldy UX experience, cluttered with features that are neither intuitive nor well thought out. From a code perspective, some of the decisions that made sense a few years ago now seem less ideal, particularly as we look ahead.
While I could have sifted through the original v0 codebase to clean it up and remove features that were added by contributors who ultimately didn’t maintain them long-term, I chose instead to rewrite zeit from the ground up. This new version will be based on more modern dependencies and, hopefully, will be cleaner, more streamlined, and free of the “one-off” features that were added for single users who eventually stopped using zeit altogether. That said, I’ve learned a lot from the feature requests submitted over the past five years. With this new version, I’m working to implement the most useful and practical requests in a way that feels more cohesive and polished from a UX perspective, and less like an afterthought.

I’m nearing the release of the first version of this complete rewrite, which will be called zeit v1 and carry the version number v1.0.0 . This new version will not be compatible with your existing zeit v0 database. However, if you’re currently using zeit v0 , you can export your entries using , and then import them into v1 with the new command. If you’re interested in a command-line utility for time tracking, especially if you’re already using a different tracker, I’d love to hear from you . Let me know your top three feature requests for a tool like zeit and which platform(s) you currently use or would like to switch from.

Footnote: The artwork was generated using AI and further botched by me using the greatest image manipulation program .

Twitter/X: Mastodon
Facebook: See Twitter/X.
Reddit: Lemmy
Instagram: Pixelfed
YouTube: PeerTube
Spotify: Funkwhale , or simply host your own Jellyfin server
WhatsApp: plenty to choose from
The Fed, ECB, etc.: Monero , Bitcoin , et al.

マリウス 2 months ago

Thoughts on Cloudflare

As many of you know, I am skeptical of the concept of relying on someone else’s computer , especially when a service grows to the point where it becomes an oligopoly, or worse, a monopoly. Cloudflare is, in my view, on track to becoming precisely that. As a result, I would argue they are a net negative for the internet and society at large.

Besides the frustration they cause to VPN and Tor users through incessant captchas, Cloudflare’s infamous one more step pages have dulled users' vigilance, making them more vulnerable to even the most blatant malware attacks . Moreover, under the guise of iNnOvAtIvE cLoUd InFrAsTrUcTuRe , Cloudflare enables phishermen to phish and tunnelers to tunnel . Ironically, the very security measures they sell can be bypassed by bad actors using Cloudflare itself . It’s a similar irony that their systems, designed to shield clients from threats, sometimes struggle to defend their own infrastructure . Incidents like these highlight not only weaknesses in Cloudflare’s offerings but a broader issue: Cloudflare has become a highly attractive target for state-sponsored attacks , suffering from recurring breaches . Their sheer scale, considering that they are serving a substantial portion of the internet, means that an outage or compromise could have widespread, costly consequences.

Another major concern is that, in many cases, Cloudflare acts as a man-in-the-middle SSL-terminating proxy between users and websites. They have visibility into everything users do on these sites, from browsing habits to the sensitive personal information they submit. This makes Cloudflare a prime target for any actor seeking to harvest massive amounts of data. The Cloudbleed incident clearly demonstrated the risks:

Tavis Ormandy posted the issue on his team’s issue tracker and said that he informed Cloudflare of the problem on February 17.
In his own proof-of-concept attack he got a Cloudflare server to return “private messages from major dating sites, full messages from a well-known chat service, online password manager data, frames from adult video sites, hotel bookings. We’re talking full https requests, client IP addresses, full responses, cookies, passwords, keys, data, everything.”

I stand with Hugo in considering Cloudflare harmful and recommend that websites avoid relying on it whenever possible. Cloudflare’s origins in Project Honeypot , and its early ties to the US Department of Homeland Security, are troubling to say the least:

Five years later Mr Prince was doing a Master of Business Administration (MBA) at Harvard Business School, and the project was far from his mind, when he got an unexpected phone call from the US Department of Homeland Security asking him about the information he had gathered on attacks. Mr Prince recalls: “They said ‘do you have any idea how valuable the data you have is? Is there any way you would sell us that data?’. “I added up the cost of running it, multiplied it by ten, and said ‘how about $20,000 (£15,000)?’. “It felt like a lot of money. That cheque showed up so fast.” Mr Prince, who has a degree in computer science, adds: “I was telling the story to Michelle Zatlyn, one of my classmates, and she said, ‘if they’ll pay for it, other people will pay for it’.” Source: BBC

Furthermore, Cloudflare has been criticized as an employer , reportedly fostering a hire-and-fire culture among its sales staff . Even its CEO has attracted controversy, such as suing neighbors over their dogs following objections to his plans to build an 11,300-square-foot estate, plans that required lobbying to overcome local zoning laws . Given all this, it is time to reconsider Cloudflare’s dominant market position , controlling over 20% of the internet .
Cloudflare has shown a pattern of equivocating on politically sensitive issues , perhaps to maintain its status as the world’s largest botnet operator , and they appear to defend “free speech” when it is profitable , but not when it isn’t . Cloudflare has also been accused of providing services to terrorists and drug traffickers while skirting international sanctions . Meanwhile, open-source developers have been harshly punished for less. Despite the brilliance of many engineers at Cloudflare, they are not infallible. They, too, experience recurring downtime and preventable mistakes . Cloudflare, like any other company, puts its pants on one leg at a time . There is no reason it should be treated as the default, or sole, solution for content delivery.

If running your own Varnish instances isn’t feasible, and you need a global CDN, consider these alternatives to support competition and balance the scales:

Info: Some hosting services might use Cloudflare without disclosing it openly/obviously, e.g. Render . Make sure to check whether whatever hosting service you’re using employs Cloudflare’s infrastructure in the background.

If you currently have domains registered with Cloudflare, move them elsewhere immediately. As a general rule, never allow your CDN or hosting provider to also hold your domain registrations. Should the hosting provider cut you off, you’ll want the freedom to quickly redirect your domains to another provider without disruption. For more info, visit the cloud and domains sections of the infrastructure page.

If, however, you’re running Cloudflare’s more advanced service offers, like Cloudflare Workers, you will likely have a harder time moving away. While some frameworks support different providers, like Vercel, Fastly, AWS, Azure, or Akamai, it is likely that most simple implementations will be heavily reliant on Cloudflare’s architecture.
There’s unfortunately no easy path out of this, other than rewriting the specific components and infrastructure deployment configuration to support a different provider.

If you wish to identify or avoid websites that make use of Cloudflare, you can use this browser extension for Firefox and Chrome (ironically created by Cloudflare). Beware that these extensions might transfer information about your browsing behavior to Cloudflare. Configure them to be active only when manually clicked on specific websites that you want to investigate. There are third-party alternatives like this and this , as well as older/unmaintained extensions like this and this . PS: Decentraleyes is a solid option to enhance browsing privacy; check the browser section for other helpful extensions.

All that said, you might think “Come on, Cloudflare isn’t that bad!” , and you’d be right: Every now and then, they do some good . *smirk* Still, we have to recognize that Cloudflare has grown into a cornerstone of modern digital infrastructure, which is a role that could eventually render it too big to fail , to borrow a term from the financial world.

Footnote: The artwork was generated using AI and further botched by me using the greatest image manipulation program .

マリウス 2 months ago

📨🚕

📨🚕 ( MSG.TAXI ) is a multi-protocol push notification router. You post to it via a webhook URL and it flings that data to your configured targets . It’s the missing glue between your code and your notification channels, whether that’s your smart home, your CI pipeline, your RPG guild’s Matrix room, or just your phone at 3AM when your server falls over (again). Push notifications from anything, to anything. Intro Updates Website
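As a quick illustration of the webhook flow described above, a push could look roughly like this; the endpoint path, token, and payload fields are made-up placeholders and not taken from the actual MSG.TAXI documentation:

```shell
# Hypothetical sketch: post a JSON payload to a MSG.TAXI webhook URL,
# which then routes it to the configured targets. The URL and field
# names are illustrative placeholders.
curl -X POST "https://msg.taxi/webhook/YOUR-TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"title": "Disk almost full", "body": "/dev/sda1 is at 95%"}'
```

Anything that can issue an HTTP POST, from a cron job to a CI pipeline, can act as a sender this way.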

マリウス 2 months ago

Njalla Has Silently Changed: A Word of Caution

I’ve been using Njalla as my primary domain service for the past few years, and I’ve had nothing but good things to say about them. Their website is simple yet functional, their support is quick and efficient, and the company offers its services in a way that should be the global standard when it comes to data and privacy protection.

Njalla made sense for me on many different levels: They’re a domain provider headquartered on a former pirate island nation in the Caribbean that is home to countless offshore trust funds, and they register your domain in their own name, so none of your personal information appears in the mandatory ICANN registration data. All of this is offered without any KYC requirements, and with the option to pay using Monero. And if that’s not enough, Njalla sends every email encrypted with your GPG public key and can even forego email entirely in favor of XMPP notifications with OMEMO encryption. Yes, Njalla also provides an API (with access tokens that can be configured with granular permissions) which works seamlessly with tools like Certbot , Lego , and others to request Let’s Encrypt certificates via DNS validation. Heck, there are even Terraform providers that support it.

And if that still weren’t enough reasons to like Njalla , the Njalla blog offered unrivaled transparency and entertainment for everyone, giving people the chance to see with their own eyes how Njalla was fighting for the little guys . On top of that, I’ve always sympathized with brokep , Njalla ’s founder, and his work and many of his views. If you’re unfamiliar with him or his history, I recommend the (relatively new) series The Pirate Bay , which premiered on Sveriges Television at the end of 2024.

Over the past few years, I’ve been quite vocal in my praise for Njalla . In fact, if you’re a regular reader of this site or have come across me on other platforms, you’ve probably seen me plug Njalla the same way Jensen Huang plugs AI .
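To illustrate the Lego integration mentioned above: a DNS-01 certificate request against Njalla’s API could be sketched roughly as follows. The token and domains are placeholders, and the provider name and environment variable should be verified against Lego’s own documentation:

```shell
# Hypothetical sketch: request a Let's Encrypt certificate via DNS-01
# validation using Lego's Njalla DNS provider. Token, email address,
# and domains are illustrative placeholders.
export NJALLA_TOKEN="your-api-token"

lego --email admin@example.com \
  --dns njalla \
  --domains example.com \
  --domains '*.example.com' \
  run
```

Because validation happens via DNS records, this also works for wildcard certificates and for hosts that aren’t reachable over HTTP.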
However, a recent interaction in the VT100 community channel prompted me to do what I periodically do with every service I use: Check what’s new. This time, it was Njalla’s turn. While browsing through various pages on their website, I came across the About page and was surprised to find the following statement: Njalla is run by njalla.srl based in Costa Rica. Curious, I checked their terms of service and confirmed that Njalla does indeed appear to have relocated its operations from Nevis to Costa Rica. I searched through my email history to see if there had been any announcement about this change, but found nothing. Wanting to know when this happened, I checked Njalla’s Mastodon and Bluesky profiles, but again found no mention of the move. I even went as far as looking at brokep’s social profiles, only to find that they were either deleted or inactive. At that point, I started to get a bad feeling. Had Njalla been sold to someone else? Before jumping to conclusions, I decided to contact Njalla support to clarify the situation. Subject: 1337 Services LLC -> Njalla SRL? I just stumbled upon the fact that Njalla seemingly changed hands without any notice, and I would like to understand what happened to 1337 Services LLC on Nevis and who the new owner Njalla SRL is. I would appreciate further insights into this topic. Kind regards! The support replied promptly: Internal restructuring. Nothing to worry about. However, while it was a response, it wasn’t particularly satisfying, so I decided to be the PITA that I am somewhat known to be and ask again: Thank you for your reply and your reassurance. I’d like to apologize in advance for being a PITA, but with brokep seemingly having disabled his social media profiles (Bsky, Mastodon, X), discovering this change felt “off”. Also, as someone who has a relatively decent understanding of offshore jurisdictions and their governing laws, I am wondering about the motivation for this move.
Costa Rica’s offshore landscape appears to have changed over the past years, and their SRL/SA seemingly requires company books to reflect share ownership, which in turn lists the owners’ names and ID numbers with the Central Bank of Costa Rica via the Registry of Transparency and Final Beneficiaries (RTBF). While UBO info is not publicly available unless explicitly listed as initial shareholders in the national registry, the information is still accessible and shareable by government entities and could make it easier for foreign entities to pressure the owners (and hence Njalla) into doing things it would otherwise not do. In addition, foreign court orders appear to be somewhat more easily enforceable in Costa Rica’s jurisdiction, as opposed to Nevis, where foreign entities would in theory have to deal with local courts to obtain a local court order. While the trend towards transparency, information sharing and absurd KYC hasn’t passed Nevis/St. Kitts by either (especially in terms of banking infrastructure), it appears that jurisdictions like Nevis or the Seychelles still seem to be “better” choices for operating a service like Njalla. I have been very vocal in recommending Njalla on different platforms, and I would like to update my recommendation based on this new reality. I would hence be curious to understand the rationale behind the change, if you wouldn’t mind sharing a few insights. If preferable, you’re welcome to reach out via email to xxx (pubkey attached) or via XMPP/OMEMO to xxx. Thank you kindly and best regards! While I wasn’t expecting Njalla to offer in-depth strategic reasoning for this move, I nevertheless hoped for them to provide solid arguments as to why they believe Costa Rica is a good option for their operations, and maybe even an explanation of why they haven’t notified their customers of the change.
However, the reply that I got back was disappointing, to put it mildly: We do understand your concerns, but the reasoning or insights is not something we share. If you feel you can’t recommend our services any more, then of course you shouldn’t. That is totally up to you. Kind regards, Njalla It’s clear that Njalla is operating on a take-it-or-leave-it basis here. While they are entirely within their rights to do so, one could argue that using Njalla inherently requires a significant degree of trust. After all, they are the ones who legally own your domain. Given that, I think it’s fair to say that customers deserve at least some level of transparency in return. At least enough to feel confident that Njalla isn’t working against their interests. Note: There are many reasons why moving the company might make sense. While we can draw up as many conspiracy theories as we’d like, the most banal explanation might have to do with brokep being a Swedish citizen and supposedly still residing in the EU. Doing business within a place like Nevis, which the EU considers a tax haven and which is grey-listed as a non-cooperative tax jurisdiction every once in a while, can be a bit of a PITA. Not only is it unlikely for brokep to benefit from the low-/no-tax advantages that the jurisdiction offers, it is on the contrary quite possible that EU CFC rules are intentionally hurting him, especially with a digital (low substance) business like Njalla, especially when dealing with cryptocurrency, especially with Monero, to discourage the average Joe Peter from doing business in jurisdictions that have historically been reserved for the bloc’s politicians and other elites. After all, the democratization of tax havens is certainly not something world leaders are in favor of. Costa Rica has managed to escape the bloc’s Annex I (in 2023) and Annex II (in 2024, approved 2025) and is not considered a low-tax country or tax haven.
With CR joining the OECD’s CRS, it has certainly become easier for EU residents to do business in the Latin American country, despite its territorial tax regime. As boring as this sounds, it might just be that brokep got sick of dealing with the EU charade around offshore tax havens – btw, hey, EU, how are things in Luxembourg, Cyprus and Monaco going? – and chose a more viable solution. I dug up Njalla’s Terms of Service on the Internet Archive and found that the change seems to have occurred sometime between October 2, 2024, and December 16, 2024. Whether or not it’s legally permissible for a company to change something as fundamental as its jurisdiction or corporate registration without informing existing customers, I found the lack of communication troubling. What concerned me even more was that, after I pointed out the change, Njalla didn’t seem interested in offering any further explanation. On the contrary, their responses came across as an attempt to quietly brush the matter aside and move on. While the service continues to function as it always has, and I haven’t encountered any issues, I’m honestly uncertain about how to interpret the situation. As I’ve mentioned before, I deeply admire the work that brokep is doing, and I’m a strong supporter of Njalla’s mission. I’ve been recommending their service for years, and I likely will continue to do so, although with reservations. That said, this situation has somewhat tarnished my perception of Njalla. Not only has their blog become less insightful over the years, but it also appears that they are actively concealing information from those who trust them: Their customers. Given Njalla’s lack of transparency and unsatisfactory responses, I’d assume that if you have a normal domain with Njalla, there’s probably little to worry about, provided the company hasn’t been sold to a new owner.
The service seems to be operating as usual, and I haven’t heard of any malicious intent regarding domain ownership. That said, if you’re considering registering a domain to poke fun at a logistics provider or other international entities that might take issue with it, I wouldn’t be so confident that Njalla will still have Batman handling the situation. As long as you don’t provide your PII and use untraceable payment methods, however, the worst-case scenario is that Njalla shuts down your domain and won’t return it to you. I continue to hold several domains with Njalla . While I could migrate to another provider, I’m willing to wait, observe, and give Njalla the benefit of the doubt for now. That said, I will certainly be more cautious moving forward and think twice before registering any new domains with them. Frankly, there aren’t many trustworthy and reliable alternatives, especially ones backed by prominent figures with (for the most part) agreeable values. If you’re seeking services based in offshore jurisdictions, there’s a non-exhaustive list in the domains section of the infrastructure page. It’s important to note that when you allow someone else to register a domain on your behalf, you’re effectively entrusting them with ownership of the domain , meaning they could ultimately do whatever they wish with it. Therefore, trustworthiness is a critical factor when evaluating these services. Footnote: The artwork was generated using AI and further botched by me using the greatest image manipulation program .

マリウス 2 months ago

Mass-Surveillance History & Trivia

Note: This post focuses mostly on Wikipedia-documented programs/acts and major events. Many additional local or classified efforts exist but are omitted if lacking a solid Wikipedia entry. Info: Years denote program start, reveal, or key legislative milestone. Organized chronologically with brief context and trivia.

- VENONA (USA/UK/AUS) [1943–1980] SIGINT program decrypting Soviet communications; Not domestic surveillance per se but foundational to Cold War signals intelligence.
- Project SHAMROCK (USA) [1945–1975] NSA predecessor harvested copies of most international telegraphs entering/leaving the U.S.; Ended after the Church Committee.
- UKUSA Agreement (from which Five Eyes derives) [1946] Signals intelligence alliance formalized publicly later; Underpins many joint programs.
- NSA founded (USA) [1952] Creation of the National Security Agency institutionalized large-scale SIGINT capabilities.
- COINTELPRO (USA) [1956–1971] FBI’s domestic counterintelligence program targeting civil rights groups, anti-war activists, and others; Involved infiltration and surveillance.
- Project MINARET (USA) [1967–1973] NSA watch-list program surveilling U.S. citizens (including MLK Jr.) without warrants; Exposed in 1975–76.
- ECHELON (Five Eyes) [Late 1960s onward] Global signals interception network (NSA, GCHQ, ASD, CSE, GCSB) monitoring satellite/microwave communications.
- Church & Pike Committees (USA) [1975–1976] Congressional inquiries exposing illegal domestic surveillance; Led to FISA (1978) and the FISC court.
- FISA & FISC (USA) [1978] Legal framework for foreign intelligence surveillance with secret court orders.
- BLARNEY / FAIRVIEW / STORMBREW / OAKSTAR (USA) [1978 onward] NSA “corporate partner” upstream collection families at backbone chokepoints.
- Clipper Chip & Skipjack (USA) [1993–1996] Government-proposed key-escrow encryption standard; Abandoned after public backlash and cryptanalytic concerns.
- SORM launched (Russia) [1995] “System for Operative Investigative Activities” requiring ISPs to install FSB access; Later expanded to SORM-2 (internet) and SORM-3 (deep metadata).
- Carnivore / DCS1000 (USA) [1997–2001] FBI packet-sniffing system for ISP-side interception.
- NSAKEY (USA) [1999 (alleged)] Reported Microsoft Windows cryptographic key controversy; Raised concerns about a possible NSA backdoor.
- Onyx interception system (Switzerland) [2000] Satellite communications interception sites at Zimmerwald, Heimenschwand, Leuk; First publicized mid-2000s.
- Interception Modernisation Programme (UK) [2000–2006] (via RIPA) Ambitious plan to expand traffic data retention and interception; Later morphed into follow-on initiatives.
- STELLAR WIND (USA) [2001–2007] Post-9/11 warrantless surveillance (content + metadata); Aspects later routed into FISA processes.
- Data Retention beginnings (EU) [2001] Post-9/11 debates culminated in the 2006 EU Data Retention Directive mandating telco retention (later invalidated in 2014, see below).
- SITEL lawful interception system (Spain) [2001] National police interception platform for phone/internet data.
- Total Information Awareness (USA) [2002–2003] DARPA’s Information Awareness Office sought vast data-integration for pattern analysis; Defunded amid civil-liberties outcry.
- ThinThread (USA) [2002–2008] (later Trailblazer) Competing NSA programs; ThinThread emphasized privacy protections; Trailblazer pursued broader data analysis, ultimately cancelled after overruns/criticism.
- AT&T Room 641A (USA) [2003 (installed) / 2006 (revealed)] Fiber-optic splitter room in San Francisco (Narus gear) enabling backbone interception under NSA partnerships.
- Operation EIKONAL (Germany/NSA) [2004–2005] BND with NSA tapped Deutsche Telekom Frankfurt switch; Filters proved leaky; Later parliamentary inquiry.
- Golden Shield (China) [2006] (subsystem Great Firewall) “Golden Shield Project” integrates policing with internet control; Surveillance and filtering co-develop.
- EU Data Retention Directive (EU) [2006] Mandated retention of telecom metadata across member states (up to 24 months); Struck down by CJEU in 2014.
- Hemisphere Project (USA) [2007] AT&T call-records database queried by law enforcement with parallel-construction concerns; Data reaches back decades.
- BULLRUN / EDGEHILL (USA/UK) [2007] Efforts to defeat encryption standards and implementations via covert influence and exploits.
- PRISM (USA) [2007 (begins) / 2013 (revealed)] NSA program collecting data from U.S. internet companies under FISA §702 orders.
- FRA-lagen (Sweden) [2008] Law enabling the National Defence Radio Establishment (FRA) to intercept cross-border cable communications; Amended for more oversight (2009).
- XKEYSCORE [2008] Distributed search/analysis system for captured internet data; Used by NSA, GCHQ, and partners including BND under agreements.
- Optic Nerve (UK) [2008] GCHQ bulk-captured Yahoo webcam images (including non-targets) for facial-recognition research.
- Karma Police (UK) [2008] GCHQ project building web-browsing profiles tied to IP addresses for “behavioural detection.”
- Tempora build-out (UK) [2008–2011] GCHQ buffer-records fiber traffic at landing stations; Integrated with Five Eyes analytics.
- GCSB law changes (New Zealand) [2009] Legal reforms later enabled broader assistance to domestic agencies; Subsequent controversies and oversight changes.
- Mastering the Internet & Global Telecoms Exploitation (UK) [2009] GCHQ capstone initiatives to scale cable tapping and data analytics.
- MYSTIC (USA) [2009–2014] NSA voice interception of entire countries’ phone calls; Sub-program SOMALGET stored full audio in places like the Bahamas.
- Royal Concierge (UK) [2010] GCHQ monitoring of hotel booking systems to track diplomats for potential ops.
- Boundless Informant (USA) [2013] NSA global metadata visualization tool tallying collection by country/source.
- MUSCULAR (USA/UK) [2013] NSA & GCHQ tapped private Google/Yahoo data-center links overseas; Exploited unencrypted inter-DC traffic (later encrypted by the companies).
- 2010s Global Surveillance Disclosures (worldwide) [2013] Wave of revelations across Five Eyes and partners prompting legislative reforms.
- CJEU invalidates EU Data Retention Directive (EU) [2014] Digital Rights Ireland decision strikes down blanket retention; Many national laws revised or challenged.
- Project SPEARGUN allegations (New Zealand) [2014] Reports that GCSB sought to tap a trans-Pacific cable and enable bulk metadata flows to the NSA.
- Intelligence Act (France) [2015] Legalized wide surveillance powers (including algorithmic “black boxes” at ISPs) after terror attacks; Provisions for international communications interception.
- Data Retention Law (Australia) [2015] Mandatory ISP retention of metadata (2 years) for law-enforcement access.
- China’s Sky Net expansion & early “Sharp Eyes” pilots (China) [2015] Nationwide CCTV with facial recognition, integrating public and private cameras; “Sharp Eyes” pushes village-level coverage.
- Investigatory Powers Act a.k.a. “Snoopers’ Charter” (UK) [2016] Consolidated surveillance authorities (bulk powers, equipment interference), data retention & ISP logging (“internet connection records”).
- Yarovaya Law (Russia) [2016] Counter-terrorism package mandating retention and decryption assistance by telecoms/online services; Strengthens the SORM ecosystem.
- German BND law reform (Germany) [2016] Legalizes/stipulates foreign-to-foreign cable tapping, introduces oversight changes following the EIKONAL fallout.
- Investigatory Powers Act comes into force (UK) [2016] Bulk powers framework operational with codes of practice.
- Wiv 2017 (“Sleepwet”) & 2018 referendum (Netherlands) [2017] Intelligence and Security Services Act expanded bulk interception; A 2018 advisory referendum rejected it, prompting tweaks before implementation.
- China’s Intelligence Law [2017] Compels organizations and citizens to support state intelligence work; Implications for tech firms and data.
- Assistance and Access Act (Australia) [2018] Technical capability notices and voluntary/compulsory assistance powers targeting encrypted services and devices.
- Five Eyes Plus [2018] Five Eyes agreements with France, Germany, and Japan to introduce an information-sharing framework to counter China and Russia.
- CSE Act (Canada) via Bill C-59 [2019] Statutory basis for CSE’s active cyber operations and foreign intelligence, with new oversight/review mechanisms.
- IJOP & surveillance in Xinjiang (China) [2020s] Integrated Joint Operations Platform aggregates data (checkpoints, apps, biometrics) for risk scoring of Uyghurs and others.
- SORM-3 (Russia) [Ongoing] Deeper DPI, social media, and traffic metadata capture with localization requirements for providers.
- Five Eyes / Nine Eyes / Fourteen Eyes / SSEUR bulk collection [Ongoing] Continued §702 reauthorizations (USA) and partner bulk-powers regimes (UK, others) with periodic court/oversight modifications.
- Chat Control (Europe) [Ongoing] Proposal aimed at combating CSAM by allowing law enforcement to scan private messages, photos, and files, even when content is end-to-end-encrypted; Automatic scanning without consent or suspicion, imposed on all 450 million citizens of the European Union.
- Domain Awareness System (USA/NYC): Billed as one of the largest digital surveillance systems in the world, part of the Lower Manhattan Security Initiative in partnership between the New York Police Department and Microsoft to monitor New York City.
- IMSI catchers / Stingrays: Portable cell-site simulators widely used by police and intel services for location/metadata capture.
- Mail Isolation Control and Tracking (MICT) (USA): USPS photographs the exterior of all mail for investigative use.
- Jingwang Weishi (China/Xinjiang): Mandatory phone-scanning app developed by Shanghai Landasoft Data Technology Inc. reported to extract/report content/signatures.
- GhostNet (origin linked to China): Operation exposed in 2009 compromising targets in 100+ countries; Espionage network notable in the surveillance context.

Footnote: The artwork was generated using AI and further botched by me using the greatest image manipulation program.

マリウス 3 months ago

Doubting Your Favorite Web Search Engine

Info: Many links in this post reference specific text segments on the linked website. If your browser is unable to handle links to text fragments, it might appear as if the linked page is irrelevant to the text in this post. The links should work in newer Firefox and Chromium desktop versions, however. Kagi was founded in 2018 by Vladimir Prelovac, a Yugoslavia-born entrepreneur whose first venture, ManageWP, a platform for managing multiple WordPress instances, was acquired by GoDaddy. He later served as the company’s VP of Product before setting out to build a search engine of his own. The company has followed an interesting playbook. They started small and focused on a niche audience of tech aficionados, which is also why they seemingly can’t stop trying to position themselves as the search engine for the small web. To resonate with this community’s nostalgia for the good old days, they created a modern version of Rover (the Windows XP search dog) as their mascot. They chose the Japanese word for key (鍵) as their name, and they made the platform lightweight, largely JavaScript-free, and an overall pleasant web experience in a world full of bloated React and Next.js soy. Oh, and did I already mention that cute dog mascot? Really, what better idea for a search engine logo than a dog? Almost as inspired as the mascot’s name: Doggo. Because, well, calling it Kagi the dog would have been far too simple. ;-) On a side note: While it’s never made explicit what breed Doggo is, it feels like a missed opportunity not to have gone with a Golden Retriever, though perhaps that would have been a little too on Lycos’ nose. A Beagle, a Collie, a St. Bernard, or really any other search-and-rescue dog might have fit the theme nicely as well. Still, I’ll give Kagi the benefit of the doubt and assume it’s meant as an artistic take on a Basset Hound, a breed originally developed to track small game.
In that sense, Doggo is perfectly suited to Kagi’s mission: finding the small things… like, well, the small web. Anyway, I first heard about Kagi in early 2023 during an email exchange with a reader of this site. Until then, the Palo Alto–based company hadn’t been on my radar. After taking a quick look at Kagi’s product, I remember replying something along these lines: I personally wouldn’t use a search engine that requires me to log in with an account – even with an explanation on why it’s needed and a privacy policy – however I’m likely not the target audience in the first place. Nevertheless, I could imagine accountless subscriptions to be something that could improve user confidence. But more on that in a moment. First, let’s look at the promises Kagi is making. The service markets itself as a privacy-respecting search engine, with no incentive to track what individual users are searching for. And despite the account requirement, it does appear to be doing everything right to live up to those claims: To be fair, Kagi’s marketing material is convincing. So much so that if I weren’t the kind of person who insists on rummaging through closets to look for skeletons, I’d probably be sold already. And as much as I genuinely want Kagi to be exactly what it claims to be, I can’t shake my doubts. Grab your tinfoil hats, buckle up, and hit play on Pacific Coast Highway, because we’re heading straight into the twilight zone. I’ve been giving Kagi a try every few months, mostly because its fan club never seems to tire of boasting about how brilliant the search engine is. Meanwhile, I’m constantly reminded that I’m not one of the cool kids with my SearXNG, my Leta, and the rest of my oddball setup. So, every now and then, I cave and spin up a fresh account just to see how Kagi is coming along. Naturally, I did the same thing in preparation for writing this post.
For testing, I like to reuse real search queries I’ve run in the past and compare how often the results I prefer actually show up. One of these queries is “rama kara mt3” . On Google, Brave, DuckDuckGo, Bing, and of course SearXNG, that search reliably brings up either my website or a video from my YouTube channel within the first page of results. Only Mojeek and Yandex seem to snub me, either by down-ranking the content or flat-out refusing to index the site, even though I can see their crawlers in my analytics constantly. On Kagi, however, I couldn’t find either my site or the YouTube channel anywhere within the first three pages. I’m not pointing this out because I’m salty about Kagi skipping my site on the first page, but rather because, for whatever reason, there simply don’t seem to be many websites featuring RAMA KARA keyboards with MT3 keycaps. That scarcity is likely why most mainstream search engines consistently prioritize my website and YouTube channel for those particular queries. Given how often the phrase “small web” appears across Kagi’s website, I would have expected them to prioritize a site like mine over, say, generic e-commerce listings. In practice, though, that doesn’t seem to happen. I tried a handful of other oddly specific search terms for various sites that normally surface at the very top of mainstream search engines. With Kagi, however, I often had to wade through pages of results before finding what I was looking for. Why there’s such a gap, especially when it comes to niche organic content that Kagi claims to champion, is unclear to me. That said, search results are inherently subjective, and this alone isn’t enough reason for me to dismiss the service outright. The first thing that I found particularly interesting was the use of Privacy Pass . With this, Kagi can (in the future) offer searches which theoretically won’t require accounts. 
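As an aside, the core cryptographic trick that makes such account-free passes possible, blind signing, can be sketched with textbook RSA and toy numbers. This is purely illustrative: the Privacy Pass RFCs specify different primitives, the parameters below are far too small to be secure, and none of this reflects Kagi's actual code.

```python
# Textbook RSA blind signature with toy parameters (illustration only).
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ modular inverse)

m = 65                              # the "pass" the client wants signed
r = 7                               # client's secret blinding factor, gcd(r, n) == 1

blinded = (m * pow(r, e, n)) % n        # client blinds the pass
blind_sig = pow(blinded, d, n)          # issuer signs without ever seeing m
sig = (blind_sig * pow(r, -1, n)) % n   # client removes the blinding factor

assert pow(sig, e, n) == m              # anyone can verify sig against m
```

The issuer only ever sees `blinded` and `blind_sig`, so even with full logs it cannot later match a redeemed `sig` back to the signing request.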
Privacy Pass promises blind signing: the issuer signs passes for the user without seeing the user’s actual identity. Another feature is anonymous redemption, which lets users redeem signed passes for their searches, all while preventing both the signing party and the redeeming party from linking a user to a pass. However, the way Kagi implemented Privacy Pass, in which they are the origin, the attester and the issuer, is not ideal from a privacy perspective, as, quote, attestation mechanisms that can uniquely identify a Client, e.g., requiring that Clients authenticate with some type of application-layer account, are not appropriate, as they could lead to unlinkability violations. Ideally, with Privacy Pass, you would want someone else as the issuer-attester, whose issued tokens Kagi would then accept. By assuming every role, Kagi is ignoring the warning mentioned in the RFC. As it is not viable (yet) to use the search engine without an account, the current Privacy Pass implementation is a marginal benefit at best, for which users have to take Kagi’s word that, by assuming every role in the chain, they do not secretly link Privacy Pass searches to user accounts. Speaking of searches: While discussing Kagi in the VT100 community channel, I brought up the “rama kara mt3” example search. Interestingly, a long-time Kagi user reported seeing this website and/or the YouTube channel on the first page of their results. My hypothesis is that the user has likely been using their account for extended periods, allowing Kagi to learn their preferences and adjust results accordingly. However, according to Kagi’s marketing materials, this is not something they actively do: Searches are anonymous and private to you. Kagi does not log and associate searches with an account. … To ensure your privacy and security, we don’t monitor, log or store your queries or associate them with your account.
So, the learning argument probably isn’t why other users are seeing different results in their accounts compared to the one I just created. However, there’s a catch: The privacy policy I reviewed was from 2024. What does the current version say? Unlike most search engines, we do not track which search result you choose to click. … We may store web requests made by user browser temporarily, with strict retention periods, for debugging purposes, and in a manner that they are not linked to an account. Wait, what? That’s it? Kagi does not track which search result you click , but there’s no mention anymore of storing the search query itself. What happened to the original claims that searches were anonymous and not tied to an account? Surely, the changelog mentions this removal: … 2025-02-13 Added Privacy pass and Tor options Added request debugging information Added applemap and mapkit cookie information Changed fair use for AI to be bound to actual token cost Simplified and clarified the language 2024-05-29 Added section on Browser-Extension 2023-09-21 Increased Fair Use limits for AI tools (300 to 500) … Hrmm. Maybe I’m missing something, but to me it looks like the privacy policy was revised between 2024 and 2025, with some important statements removed. None of which seem to be noted in the changelog. I don’t know if customers received further details via email, so it’s hard to say exactly what happened to the claim about searches being anonymous. This is probably just a slip-up, and anonymity is still key to Kagi , right? We did not say we maintain anonmity, but privacy, which are two different things. For example. your parents may know everything about you, yet still respect your privacy. This statement was made by the founder of Kagi on Reddit, two years before the claim about searches being anonymous was removed. 
Nevertheless, the current privacy policy still states: Maximizing anonymity with Kagi We strive to give our customers the possibility to maximize their anonymity . Users who want provable anonymity guarantees may access our service by: … So, in fact, Kagi has stated that they maintain anonymity , and they continue to make that claim. Given that the CEO clearly understands the distinction between privacy and anonymity , this doesn’t seem like just another slip-up. But… wait, is Vladimir suggesting that Kagi is like our parents , in the sense that the service knows everything about you, yet respects your privacy ? :-S Okay, let’s climb out of the policy rabbit hole before I start sweating under my tinfoil hat. Instead, let’s focus on something that seems to come up every time Kagi is mentioned: Google. Almost every post I’ve come across talks about how the author abandoned Google and switched to Kagi , claiming their life instantly improved thanks to Kagi’s presumably superior search results. As I mentioned earlier, search results are highly subjective, and while Kagi didn’t work for me personally, it may be exactly what others have been looking for. However, one detail that’s often glossed over is that Kagi primarily functions as a middleman between you and other search engines, like Google . In other words, it’s what we call a metasearch engine . Similar to the policy change mentioned earlier, Kagi also scrubbed their documentation of the search sources they used to list : Google, Brave, Mojeek and Yandex, specialized search engines like Marginalia, and sources of vertical information like Apple, Wikipedia, Open Meteo, Yelp, TripAdvisor and other APIs. Note: The change occurred sometime between April 25, 2024, and May 2, 2024. One possible reason is that in February 2024, following political and legal pressure, Yandex sold its Russian operations to an investment fund with close ties to the Kremlin. 
To avoid backlash from its user base over still using Yandex as a source, Kagi may have simply removed all mentions of it. If that was indeed the reason, it didn’t work . The only index that Kagi appears to be building themselves is for the “small web” , using Teclis , the aforementioned side project of Kagi’s founder, and another called TinyGem , which seems to function as a news index. Beyond that, Kagi relies entirely on existing indexes from Google, Brave, and others. In that sense, Kagi is (technically) much like SearXNG . In fact, it could be SearXNG, with a custom frontend. But circling back to the “I dropped Google Search because it’s terrible and Kagi is so much better” conversation: While I still acknowledge that search results are highly subjective and that different people phrase the same query in different ways, it’s important to note that because Kagi relies on the indexes of major search engines like Google, their results won’t look vastly different from what the original sources provide. Kagi does adjust the ranking, as we’ve seen in my earlier example, but it cannot magically surface content that isn’t already available elsewhere. What it can do, however, is make content less visible or harder to find, based on the criteria it uses to filter and rank results from its sources. With this in mind, it’s worth remembering that most search engines already perform a significant amount of censorsh… err, filtering . Adding another layer of filtering (via Kagi) will merely re-order results, or, at worst, further reduce the content that is easily accessible. As a power user , I don’t want opaque filters layered on top of opaque filters . What I want are better search algorithms that rank signal up and noise down, applied to the largest possible dataset, so I can locate the content I’m looking for more efficiently. 
Note: One amusing twist in the “I dropped Google for Kagi” conversation is that not only does Kagi use Google’s search index (among others), it also relies on Google’s cloud infrastructure . While leveraging cloud infrastructure is standard practice, even among other search engines, competitors like DuckDuckGo and Brave at least opt for AWS instead of their primary rival’s platform. I’m sure that when Kagi eventually captures a significant slice of the search market, we’ll definitely not see their GCP account suspended for arbitrary reasons… because that never happens to anyone on GCP, right? Putting the Google topic aside for a moment, while browsing the Kagi docs I noticed multiple pages mentioning AI and LLMs. Even though Kagi appears to cater heavily to the “small web” and its predominantly low-tech-loving audience, the service was actually intended to be an AI startup back in 2018: Kagi has long heritage in AI, in fact we started as kagi.ai in 2018 and we’ve previously published products, research and even a sci-fi story about AI. … The emergence of generative AI will enable a new paradigm in search that can unlock a whole new category of previously impossible searches. … Kagi is thrilled to introduce next-generation AI into our product offering: … While Kagi tries to simplify the privacy picture by stating, “When you use the Assistant by Kagi, your data is never used to train AI models (not by us or by the LLM providers), and no account information is shared with the LLM providers. By default, threads are deleted after 24 hours of inactivity.” , the devil is in the details : When our Azure OpenAI Service API quota is saturated, the request is sent to the fallback provider: OpenAI’s own API. OpenAI is currently required by court order to retain all chat logs. 
Also, while Kagi claims that threads are deleted after 24 hours, which we have to take at face value since there’s no way to verify it, the LLM provider APIs have their own data retention policies, and we must similarly trust that they adhere to them. So why would a service that’s relatively straightforward and easy to like from a privacy perspective muddy the waters by opening the can of worms that is externally run LLMs? If we look back at earlier blog posts, we can actually find some clues as to why Kagi might have taken this route: In this future, instead of everyone sharing the same Siri, you’ll have your completely individual Mike or Julia or Jarvis - the AI. The more you tell your assistant, the better it can help you, so when you ask it to recommend a good restaurant nearby, it’ll provide options based on what you like to eat and how far you want to drive. Ask it for a good coffee maker, and it’ll recommend choices within your budget from your favorite brands. Instead of being scared to share information with it, you will chose what data you want it to have and volunteer your data only after knowing its incentives align with yours. These posts come from Kagi’s official blog. How could all of this function without storing each user’s search history and linking it to their accounts? More importantly, how do these claims align with the mission statement displayed on their landing page: We envision a friendly internet worthy of human potential - where exploration leads to discovery, not distraction, where knowledge flows freely, unbound by algorithms or advertising. Our mission is to humanize the web.
To return the web to its rightful owners: the people who use it, we’re building tools that serve humans first, creating an ethical, and truly personal internet. We are driven by the purpose to inform and educate and unlock the true promise of the digital age: universal access to human knowledge, delivered with clarity and protected with integrity. Knowledge unbound by algorithms? Humanize the web? I fail to see how AI and LLMs align with this mission statement. What I do understand is that Kagi, like seemingly everyone else these days, feels compelled to sprinkle in the buzzwords du jour to appeal to investors. Speaking of which… Kagi is a Silicon Valley startup, and they’re probably not just showing off charts – nope, not those charts – because charts are fun. With SEO-gaming posts like “What ROI can we expect from Kagi?” on their website, and impressive growth, from an estimated $410k–$2M in annual revenue two years ago to somewhere between $3.3M and $16.5M today, the 40-person company appears to be a solid investment opportunity for Silicon Valley. Although I would like to give Kagi the benefit of the doubt and assume they genuinely aren’t going to be evil, history shows that over a long enough timeline, every proprietary service eventually gets corrupted and – I can’t hear it anymore – enshittified. Even if the red flags I’ve highlighted aren’t immediate concerns, and even if we assume Kagi is well-intentioned and committed to not selling out its users, I think it’s safe to assume that once they reach a certain threshold, things probably won’t operate the same way anymore and revenue streams will have to be grown and diversified to keep investors happy. Also, we have to consider that, with the rapid pace at which the human internet is disappearing, Kagi may soon find itself with little valuable content to offer its predominantly tech-savvy users, eventually forcing a pivot in other directions.
Kagi wasn’t originally started as a search engine, and based on the information I’ve gathered, I don’t believe it will continue as one in the long run. Search appears to be an intermediary step in a broader vision that the founder has had from the very beginning. In all honesty, much of the fluff and user-pampering Kagi has done seems to be a means to an end for that larger vision. When you piece together all the small details, the bigger picture doesn’t look nearly as rosy as Kagi paints it today. In fact, it’s not fundamentally different from what other search providers are already doing. It seems that internet search, as we know it, is a dying breed. The early wave of the LLM craze, with its often idiotic crawler implementations, has already forced website operators into employing tactics that might not only hinder AI bots but also traditional search engine crawlers from scraping content. This, in turn, reduces the amount of useful data available for search engines, degrading the quality of results. Kagi won’t solve this problem. At present, it’s merely a filter attempting to improve the signal-to-noise ratio. But if there’s barely any signal left, removing the noise won’t accomplish much. If you’re a paying Kagi customer today, I encourage you to try SearXNG . Test different instances, as some may perform better than others. If you’re particularly adventurous, consider hosting your own private instance, which can provide even more reliable search results. While SearXNG is also just a metasearch engine , the long-term solution lies in building community-driven search indexes, with SearXNG potentially acting as an API client in the future. If these topics pique your interest, check out the following projects: If we truly want the web, especially the small web to survive, we need to stop funneling money to corporations that actively erode it. Every Kagi subscription you pay, whether you care or not, still lines Google’s pockets. 
Switching to Kagi doesn’t mean you’ve left Google behind; it only means you’ve changed the way Google profits from you. To preserve the richness and diversity of the web, we must support alternatives that empower communities, foster independent content, and keep the small web alive, not proprietary platforms that extract value from it to sell it for a monthly subscription. I am doubtful that Kagi is going to evolve into the search engine that everyone would love it to be, but I would be happy to be proven wrong. Time will tell, but for now I will stick to alternatives that don’t require an account and that I can ideally run on my own. Footnote: The artwork was generated using AI and further botched by me using the greatest image manipulation program.

マリウス 3 months ago

Anonymous (Paid) E-Mail Account

Alright, it’s time to dust off your tinfoil hats again and dive into the paranoid world of online privacy with me! Today we’ll be looking at e-mail services and walking through the steps of opening a paid e-mail account without having any personal information directly attached to it. Why would we want to do this in the first place? First and foremost, because we cannot trust any online service to safeguard our data anymore. According to NordLayer, 2024 has been one of the worst years yet on the cybersecurity front: 2024 has been another banner year for data breaches, with cybercriminals accelerating their efforts to steal and monetize confidential information. The stats below show that data theft is commonplace, and organizations face a challenging data security environment: Having even just your credit card with your full name attached to that [email protected] e-mail address of yours could put you in an uncomfortable situation if either the e-mail provider or any of the services the account is used on ever gets hacked and the data is leaked online. It is safe to say that these days you cannot trust any service with your real information anymore. While accounts with services that don’t require payment can usually be created using fake data, those services typically monetize themselves by selling the data they generate, even when they say they don’t. With a paid e-mail service, there is at least a slight chance that they will remain true to their mission and not sell any data generated by their paid accounts, as they already make money from those accounts and would risk losing them if they were to misbehave. This is not the case for paid services like Google Workspace, Microsoft Office 365 and similar big tech offerings. Even though you pay them money, they will still make you the product and use your data to spy on you, train AI and potentially sell it to third parties.
The paid e-mail services referred to here are small businesses that truly depend on the income generated through their paid offerings. So what is the best way to create an e-mail account with a paid service provider without giving them any PII? Ideally you could simply mail a letter with the yearly fee in cash or precious metals to an address, but unfortunately that’s not always possible. Decentralized currencies, however, allow us to do just that, only digitally. If you are only concerned about having your PII directly attached to the e-mail account, you can use any regulated DEX to buy Bitcoin that you can afterwards pay the service provider with. Even though your PII is not available to the mail service provider, the transaction will be traceable and your DEX will have your PII. If you’re concerned about the possibility that the DEX might get hacked and leak account data (and through that your PII), which would in turn make it possible to trace your payment to the mail service provider, and at the very least reveal what e-mail provider you are using, then it definitely makes sense to find a DEX that allows you to purchase untraceable cryptocurrency like Monero. If your paranoia level is above that, your best bet is to purchase cryptocurrency on a peer-to-peer marketplace with cash offers, or to find cryptocurrency ATMs around you that do not require KYC and accept cash. At that point it doesn’t really matter which cryptocurrency you purchase, as you can use platforms like fixedfloat and changenow later on to convert the currency into one accepted by the mail service provider (most likely Bitcoin). Ideally, however, you would want to have at least one conversion hop in XMR to break the transaction chain and make it harder to trace the full transaction. While you could use a tumbler, I would not recommend it, as such transactions are immediate red flags that might raise eyebrows and, in the worst case, put your wallet(s) on a list.
Most services these days accept payments in cryptocurrency, regardless of what their public landing pages might say. Once you have identified a trustworthy service that you would like to use, it’s worth contacting their support via e-mail to ask about the options for paying with cryptocurrency. At this point we’re encountering a chicken-and-egg problem, where you need an e-mail account to write to support about opening an e-mail account. However, for this short interaction a (free) throwaway account that won’t require PII can be used. Remember that every interaction with a cryptocurrency wallet, a DEX or any e-mail provider should ideally be done from an IP address that is not linked to your PII. A coffee shop with WiFi is the easiest and cheapest way. Tor is another possibility; however, you might encounter obstacles creating accounts via the Onion network. Another alternative is a VPN for which you are able to purchase scratch cards. When contacting support, be brief but state why you would like to open an account using cryptocurrency instead of conventional payment methods. Explain that you don’t feel comfortable having your PII attached to online accounts. The more reasonable your request is, the more likely it is for the service to agree. While working on this post I have tested this approach with a handful of different services and was able to ultimately open e-mail accounts with all of them. Mail services these days suffer a lot from malicious use (fraudsters, spammers, etc.) and might hence be cautious about opening e-mail accounts paid for with cryptocurrency. This is especially true for the ones that don’t openly advertise accepting crypto payments. Hence, don’t be irritated by follow-up questions from support. In most cases support will send a wallet address for you to transfer the fee to. Keep in mind that if crypto payments are being offered, it’s usually only for yearly plans and in many cases refunds won’t be possible.
Make sure to investigate the desired mail provider thoroughly beforehand, including its privacy policy and terms of service. Ideally you can make use of a trial phase under a different account, unless a payment option is required for that. If you’re, e.g., a political activist who’s looking to open an e-mail account this way, make sure the provider in question is as far away from your own jurisdiction as possible. Ideally you’d want the e-mail service to be operated under a jurisdiction that is cumbersome to deal with, especially for the one you’re located in. Example: If you are a French climate activist, do not use a service based right at your doorstep (e.g. Switzerland). Find the jurisdiction that is likely to be the least cooperative towards your own and use a service that is based there and ideally runs its servers there as well. It is important to remember that the yearly fee won’t be charged automatically and that failure to pay on time will result in the account getting shut down. Set up a payment reminder so that you can contact support and communicate future payments when due. If you are likely to stick with the service for longer, ask for multi-year payment options. You are now likely the owner of an e-mail account that has no PII attached to it. Keep in mind that just because you paid for it, it doesn’t mean that the service won’t boot you off the platform, especially if you’re planning to do dumb things with it. Don’t be a d!ck and enjoy the little privacy we have left responsibly. Correct, most services let the payment processor (e.g. Stripe) handle the PII involved with payments, to a certain degree. Services are usually still obliged by law to store invoicing data (name, address) on their end, usually for multiple years even after you might have already closed your account with them.
Even if no PII is stored in the provider’s database, your account is usually still linked to it through a common identifier used by the provider and their payment processor. Usually it isn’t, although conversion/transfer fees might accumulate, depending on how elaborate the flow gets. However, some services might even offer a discount when paying via cryptocurrency. Some services use crypto processors to handle crypto payments. Oftentimes the processor will require you to create an account, which in turn might be subject to KYC procedures. If you can’t pay without creating an account, ask the e-mail provider to send you a simple wallet address. If they refuse, find another e-mail service provider. If you’re referring to encryption at rest, sure, it won’t hurt. If, however, you’re talking about end-to-end (or “zero-knowledge”) encryption of the mailbox, no. Services that offer this feature usually require you to use non-standard ways to access your e-mails (with some exceptions), introducing additional software (telemetry) and hence potential security issues into an otherwise battle-tested setup. If you encrypt your mails using GPG, and the people you’re communicating with do the same, your e-mail content is effectively E2EE. Any service promising you “zero-knowledge” is trying to sell you smoke, as the real issue with e-mail isn’t necessarily the content, but the metadata. (see below) No. GPG encrypts the content of your messages and maybe the subject line (depending on your client), but there’s more to anonymity than e-mail content. E-mail will unfortunately never be fully anonymous because it leaks metadata on so many different levels, regardless of what the marketing departments of paid services try to sell you. If you’re curious about PGP/GPG and its issues, refer to this excellent write-up.
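To make the GPG point concrete, here’s a minimal sketch of that flow. The key, name and message are throwaway demonstration values; in practice you’d use your own key pair and your contact’s public key:

```shell
# Create a throwaway key in an isolated keyring, purely for demonstration
export GNUPGHOME="$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo <demo@example.org>" default default never

# Encrypt a message for the recipient; --armor yields text-safe output
# that can be pasted into a regular mail body
echo 'meet at the usual place' > message.txt
gpg --batch --armor --recipient demo@example.org --encrypt message.txt

# The recipient decrypts with their private key
gpg --batch --pinentry-mode loopback --passphrase '' --decrypt message.txt.asc
```

Keep in mind that this protects only the content; the metadata problems described above remain.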
Even though I don’t agree with the author’s verdict to not encrypt e-mails at all, he is nevertheless right in his criticisms of PGP/GPG as a whole, and in the suggested alternatives for various scenarios. Because not everyone wants to LARP as a secret agent, and because the main goal of this exercise is to limit your own exposure to data breaches. As most online services these days require an e-mail address to create an account, it makes sense to have an e-mail account that has no PII linked to it. Find a state actor that is willing to sponsor you a passport that lists your name as Christopher Condent, Edward Teach, Charles Johnson, Alaric Arabel, or similar, and use e-mail like everyone else.

マリウス 3 months ago

📨🚕

Disclaimer: This service is in alpha state, see below. Please do not share this post, or 📨🚕 on link aggregation platforms just yet. The server infrastructure will not be able to handle a sudden traffic spike and besides, 📨🚕 is way too undercooked for the average HN user/lobster/redditor/etc. atm. Thank you for your understanding. For a long time, I’ve been using Pushover for various tasks, such as infrastructure monitoring, and I even contributed to its ecosystem. However, as I transitioned towards more freedom-respecting software, my needs changed, and Pushover no longer met them. I ended up building my own near drop-in replacement and migrated to it. This allowed me to use the clients I preferred to receive notifications, rather than being locked into a closed-source app that doesn’t work outside of Apple’s and Google’s (GSF) walled gardens. I’ve been running Overpush for myself for quite some time now, and it’s grown into a reliable open-source tool that anyone can easily set up on their own server. One challenge, though, has been the integration of target platforms. For example, if someone wanted Overpush to send notifications to XMPP, they’d have to find a reliable host, create a dedicated account, configure it in their configuration file, and set up their target user account. While not overly complicated, it’s a tedious process that becomes more cumbersome with multiple platforms. Since I’ve been successfully hosting Overpush for myself, I thought I could turn it into a service for others. The service is based on a single, relatively lightweight Go binary, which doesn’t require heavy hardware. However, to support new users and dynamically configure their applications (where an application defines an endpoint-to-target-service route), I needed to extend Overpush to use a database, replacing the simple TOML file. I also had to build a web interface for user sign-ups and account management.
Ideally, I wanted a clean landing page with essential documentation to help people get started. A few months ago, I began working on these updates, and I’m excited to announce that the first version of the hosted Overpush service is now live! Although I originally named the open-source project Overpush, I wanted to move away from its legacy of being a mere replacement for Pushover and focus on the future. That’s why I’m calling the hosted service 📨🚕, or MSG.TAXI, which is now available at the domain: https://msg.taxi As described in the introductory post on the 📨🚕 blog, MSG.TAXI is a multi-protocol push notification router. You send data to it via a webhook URL, and it routes that data to your configured targets (e.g. XMPP, Matrix, Telegram, and more coming soon). It’s the missing link between your code and your notification channels, whether that’s your smart home, CI pipeline, RPG guild’s Matrix room, or just your phone at 3 AM when your server crashes (again). 📨🚕 is push notifications from anything, to anything. :-) You can check out Overpush for a detailed breakdown of how the service works, but in short, 📨🚕 lets you sign up and create what are called applications. Each application is assigned a custom webhook URL and can currently route the data it receives to a single target platform (for example, XMPP). Depending on the target platform you choose for an application, you’ll need to configure a destination, typically your user ID or username on that platform. For XMPP, this would be your JID. Once that’s set up, you can start posting webhooks to the application-specific URL, and 📨🚕 will process and forward them to your target. As of right now 📨🚕 supports the following target platforms: I’m working to make more target platforms available on 📨🚕. If you’re interested in using the service, let me know which platforms you need! One thing that’s particularly important to me is privacy, which is often overlooked in hosted services.
With 📨🚕, I’m striving to do better. Currently, none of the target platforms support native E2EE implementations. For example, notifications routed via XMPP won’t be encrypted with OMEMO. There are a few reasons for this: Even if I were to implement native E2EE for each platform, it would be a half-hearted solution. The content of your notifications would be sent in plain text within 📨🚕 and encrypted only when it leaves the service. This would mean you’d have to trust 📨🚕 to not inspect your data. I don’t want you to have to trust 📨🚕. So instead of dealing with this overhead, my focus is on the “ends” in “end-to-end” . My long-term goal is to offer E2EE that’s modern and independent of the target platform. To achieve this, I’m developing a lightweight and portable 📨🚕 CLI client that allows you to post encrypted webhooks to the service. On the receiving end, the challenge is greater, as a client would need to be available on various platforms, including mobile. For now, there’s a manual method for implementing E2EE, at least on Android. I briefly documented it here . Essentially, you would use GPG to encrypt the content before submitting it to 📨🚕, and then use OpenKeychain on your Android device to decrypt the messages. You’d forward the encrypted message from e.g. Conversations , Element , Telegram , or whichever target platform’s client you’re using, via the system’s share dialog. (usually by long-tapping the received message, tapping “Share with” and choosing OpenKeychain ) I recommend using this method for confidential information. If for some reason you can’t encrypt the message at the source, 📨🚕 natively integrates age encryption. However, this isn’t true E2EE, as encryption happens after the webhook is accepted by 📨🚕 and processed by an internal worker . One future improvement is to move this encryption step to the moment the post hits the webhook, so the content remains encrypted throughout 📨🚕’s internal systems. 
I will also be adding integrated GPG support soon. Another key privacy concern is data storage. 📨🚕 does not store copies of the notifications you route through the service, with one exception: Each application allows you to debug the payload it receives via webhooks. Think of it like a debug output for the notifications. If you enable this debugging feature, 📨🚕 will store a copy of the latest webhook content in its database so you can view it in the web console. However, once a new webhook hits the endpoint, the previously stored content is overwritten. 📨🚕 does not maintain a historical timeline of your webhooks. That said, if you keep debugging enabled, it’s important to note that backups are taken periodically to prevent data loss in case of server failure. These backups will contain the latest webhook received at the time of backup. This means that if debugging is left on, the data might end up being stored in backups, even if it’s not part of the active database. In summary, if privacy is important to you, I recommend disabling webhook logging once you’ve verified that your application is working as expected. As mentioned in the blog post, the current version of 📨🚕 is more like an alpha release. While it should work smoothly, occasional hiccups are possible. I’ll do my best to ensure a seamless experience, but please bear with me if things don’t work perfectly right away. If you run into any issues, feel free to reach out . If you think 📨🚕 could be useful to your workflow, I invite you to give it a try. You’re also welcome to share it with anyone who might find it useful, but I kindly ask that you do not share this post, or 📨🚕 on link aggregation platforms just yet. The server infrastructure will not be able to handle a sudden traffic spike and besides, 📨🚕 is way too undercooked for the average HN user/lobster/Redditor atm. 
I’d prefer the platform to grow slowly and organically so I can actively engage with users, resolve issues, fix bugs, and eventually make it a really great service. I’m looking forward to hearing from you if you decide to give 📨🚕 a try!
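And if you do give it a try: the whole integration boils down to a single HTTP POST against your application’s webhook URL. A rough sketch, where the URL path and the message field name are placeholders of mine, modeled on Pushover-style payloads; copy the real webhook URL for your application from the web console:

```shell
# Placeholder URL and field name: copy the real webhook URL for your
# application from the web console before sending anything
HOOK_URL="https://msg.taxi/webhook/<your-application-id>"
PAYLOAD="message=disk usage on host alpha above 90%"

# Dry run: print the request; swap the echo for the commented curl to send it
echo "POST $HOOK_URL [$PAYLOAD]"
# curl -X POST "$HOOK_URL" --data-urlencode "$PAYLOAD"
```

If the content is sensitive, you can GPG-encrypt the message value before posting it, as described earlier.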

マリウス 3 months ago

High Quality Offline Music

After having an on-and-off relationship with streaming platforms for several years, in 2023 I decided to let my Spotify Premium subscription lapse and instead go back to a traditional local (offline) music catalog. My primary motivation was the lack of proper internet infrastructure in various places, which made it increasingly hard to stream from an online source. In addition, Spotify in particular became more and more of an annoyance, logging me out of my devices every once in a while, because they thought they had detected unauthorized access, or because I was streaming from a source IP that the service didn’t seem to like. On top of that, the increasing privacy concerns, the lack of proper high-quality sound formats and the controversies around the streaming price models and Spotify’s founder became major red flags as well. Streaming high-bitrate audio heavily relies on a solid network infrastructure. In rural areas, buffering, dropouts, or forced quality downgrades are common and particularly annoying when trying to enjoy background music while focused on a task. On top of that, streaming services require periodic subscription validation. Platforms like Spotify make their client software phone home regularly to verify that the subscription is still active. If you’re offline for too long, for example during prolonged travel without connectivity (think sailing), your ability to play music might stop at any moment, despite having cached songs locally. Beyond these technical hurdles, privacy is a big concern. Every song you stream from an online service is logged, timestamped, and linked directly to your account, which, in turn, is linked to your PayPal or credit card, and hence your real identity. That data doesn’t just sit there. It can be sold to or enriched by data brokers, to build comprehensive psychographic profiles. Your listening habits can reveal your mood, emotional state, and even potential mental health fluctuations.
It’s not far-fetched to imagine a world where insurers or employers purchase this data to make inferences about “mental stability” , sexual orientation or political alignment, and adjust rates or base hiring decisions on this. Streaming services are, in themselves, inherently privacy invasive and present a quietly dystopian future unfolding in the background. On top of that, there’s what I’d like to call subscription fatigue . You’re billed every month, regardless of how often you use the service or whether you’re stuck listening to the same songs over and over again. The recurring charge doesn’t go away, but unlike when buying music, you don’t actually own anything at the end of the day. To add insult to injury, those payments rarely benefit the artists you love and hope to support. Revenue in the streaming industry is a complex topic that could warrant an entire write-up on its own. The bottom line is, however, that unless you’re a megastar, you’re barely seeing any money from millions of listens, and you might find other income streams to be significantly more lucrative . “But without streaming services how do you discover new music that you like?” , you might ask. And yes, you’re right, streaming services introduced us to discovery playlists and algorithmic suggestions as a convenient way to find new music. However, personalized playlists have increasingly become a gateway for record labels to push their songs, regardless of what you want to hear. Many users report hearing the same new singles across Spotify despite different tastes, a sign of homogenized playlist algorithms and paid content. Academic studies support the theory that algorithms favor popular tracks, leading to lower diversity and promoting the same songs to all users, regardless of their unique preferences. This results in playlists that sound alike across different accounts, containing fewer and fewer truly personalized suggestions, and instead more promotional pieces. 
Hint: YouTube’s play next feature is roughly equivalent to most platforms’ discovery queues. Simply listen to a song that you like and have YouTube continue playing similar content to discover titles that you might not have heard of. A troubling trend in the streaming world is the quiet surge of AI-generated music that is being pushed onto playlists, more often than not without any clear disclosure to the user. These tracks are optimized for duration, emotionally vague tones, and melodies, and are often designed to fit seamlessly into background playlists for “focus”, “sleep” or other moods. These songs aren’t made by humans (in the classical sense), yet they appear under fabricated artist names and stock album covers, blending into your recommended mix as if they were legitimate indie releases. Streaming platforms benefit immensely from this technology, due to the lack of licensing fees and royalties. It’s a subtle but dangerous erosion of music as a form of human expression and it’s replacing artistry with algorithmically generated filler to keep the users listening and the services profiting. And worst of all, people are probably already listening to AI tracks without realizing it. For instance, the band Velvet Sundown, which amassed over a million streams on Spotify before being revealed as entirely AI-created, is emblematic of this trend. Platforms like Deezer report that roughly 18% of newly uploaded tracks are fully AI-generated, and up to 70% of their streams are fraudulent. These are only some of the technical and cultural issues that music streaming has brought us. Hence, for the past two years I’ve slowly returned to the roots of digital music consumption, namely storing my carefully curated library locally, ideally in a lossless, high-quality format (e.g. FLAC), and having my own streaming service by hosting the music on my trusty NAS (a.k.a. my Ultra‑Portable Data Center).
These days, my Jellyfin rocks a library of over 2,000 songs that I can stream on every computer, phone, and tablet I own. With the music stored and served this way, there’s no third-party tracking, no usage analytics, and no opaque algorithms deciding what I hear. It works completely offline, anytime and anywhere, without worrying about internet outages or validation check-ins. Heck, even during power outages I’m still able to keep my music playing, thanks to uninterruptible power supplies and laptop / phone batteries. Also, it’s a one-time investment that persists virtually until the end of time, meaning no endless subscription fees just to listen to the same songs. And with Jellyfin supporting remote control features, I’m able to connect to any device from any other and choose what it’ll play – without any internet uplink or proprietary protocols (e.g. AirPlay).

Of course, this setup isn’t without its trade-offs. The lack of curated discovery with truly taste-based algorithmic recommendations means that I have to seek out new music on my own, which takes more time and intention. However, given the enshittification – a word so overused that I’m truly starting to hate it – of most platforms’ recommendations, I don’t feel like I’m missing out here. If you’re considering the switch to Jellyfin, be prepared to stumble upon a few things that you would think are absolutely basic to any music player and platform, but are simply nowhere to be found in Jellyfin. Luckily, there are third-party clients for Jellyfin that implement at least some of those features. Also, there’s the self-hosting aspect: Running a Jellyfin server means handling updates, backups, and the occasional LAN hiccup – which more often than not is DNS – myself. Everything is manually synced, meaning that adding new albums or tracks requires me to upload them to the NAS and refresh the library, which, while not hard, does add friction compared to the instant gratification of streaming platforms.
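That manual sync step can at least be partially scripted. A small sketch like the following compares a local staging folder against the NAS library and lists the albums that still need uploading; both paths are hypothetical placeholders for your own setup:

```python
# Sketch: list albums present in a local staging folder but missing on
# the NAS library, i.e. what still needs uploading before the next
# library refresh. The staging and NAS paths below are assumptions.
from pathlib import Path

def missing_albums(staging: str, library: str) -> set[str]:
    """Compare 'Artist/Album' folder pairs between two trees and
    return the ones that only exist in the staging tree."""
    def albums(root: str) -> set[str]:
        base = Path(root)
        return {
            f"{artist.name}/{album.name}"
            for artist in base.iterdir() if artist.is_dir()
            for album in artist.iterdir() if album.is_dir()
        }
    return albums(staging) - albums(library)

if __name__ == "__main__":
    staging = Path("~/music-staging").expanduser()  # hypothetical path
    library = Path("/nas/music")                    # hypothetical mount
    if staging.is_dir() and library.is_dir():
        for album in sorted(missing_albums(str(staging), str(library))):
            print("needs upload:", album)
```

Once the files are copied over, the rescan itself doesn’t require clicking through the dashboard either: Jellyfin exposes a REST endpoint (a `POST` to `/Library/Refresh`, authenticated via an API key) that can be triggered from the same script.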
Note: I have documented how to set up Jellyfin in my post about the Ultra-Portable Data Center (part two). If you’re curious about how to set up your own Jellyfin instance, go check it out!

I recently picked up the iFi hip-dac3, a compact USB DAC and headphone amp designed specifically for audiophiles. While I’m as far from being an audiophile as it gets, the device nevertheless solves a couple of issues that I had with Bluetooth connections. First of all, it connects easily to my phone via USB-C and handles high-res audio with support for PCM up to 384kHz. The sound, even with my modest current setup, is noticeably cleaner, fuller, and more detailed, especially in tracks with subtle instrumental layering. I haven’t yet paired it with a pair of serious audiophile headphones, so I’m not fully unlocking its potential just yet. But even now, the difference is striking, and the build quality, portability, and battery life are top-notch. Hoarding gigabytes of FLACs has finally paid off, heh.

Another benefit is that I don’t have to deal with Bluetooth issues, especially with lower-end headphones and Android devices. Periodic disconnects and audio glitches have been problems that I’ve experienced in the past, especially when trying to use LDAC. I also have a healthy distrust towards Bluetooth security in general, which is why I use a wired keyboard, for example. On top of that, I also don’t feel like blasting my head with Bluetooth for several hours every day.

Streaming may have convenience, but the real costs are hidden: Loss of privacy, weak artist compensation, algorithmic manipulation, and ongoing payments. By returning to locally stored files and a self-hosted Jellyfin system, I’ve reclaimed control, quality, and peace of mind. Sure, I lose autoplay surprises, but I gain a music experience that’s truly mine, and I experience new music more intentionally rather than as part of an endless consumption queue.
P.S.: If you need more reasons to quit Spotify in particular, I invite you to perform a web search using the terms “Daniel Ek Prima Materia Helsing”.

P.P.S.: Tidal is now majority-owned by Block, Inc., a company helmed by a CEO whose results are more often than not outpaced by his ego, which, in turn, is only rivaled by the uncritical fervor of his die-hard crypto-lemmings.
