Latest Posts (20 found)

Network-Wide Ad Blocking with Tailscale and AdGuard Home

One of the frustrations with traditional network-wide ad blocking is that it only works when you’re at home. The moment you leave your network, you’re back to seeing ads and trackers on every device. But if you’re already running Tailscale, there’s a simple fix: run AdGuard Home on a device in your tailnet and point all your devices at it. The result? Every device on your Tailscale network gets full ad blocking and secure DNS resolution, whether you’re at home, in a coffee shop, or on the other side of the world.

I’ve been taking digital privacy more seriously in recent years. I prefer encrypted email via PGP, block ads and trackers wherever possible, and generally try to minimise the data I leak online. I’ve been running Pi-hole for years, but it always felt like a half-measure. It worked great at home, but my phone and laptop were unprotected the moment I stepped outside. I could have set up a VPN back to my home network, but that felt clunky.

With Tailscale, the solution is elegant. Every device is already connected to my tailnet, so all I need is a DNS server that’s accessible from anywhere on that network. AdGuard Home fits the bill perfectly. It’s lighter than Pi-hole, has a cleaner interface, and supports DNS-over-HTTPS out of the box for upstream queries. The other benefit is that this setup preserves Tailscale’s MagicDNS: I can still access my tailnet devices by name, while all other DNS queries go through AdGuard for secure resolution and ad blocking.

Here’s what you need:

- A device on your Tailscale network that’s always on (a small home server, Raspberry Pi, or even an old laptop)
- AdGuard Home installed on that device
- Access to your Tailscale admin console

SSH into your always-on device and run the official installer, which installs AdGuard Home and sets it up as a systemd service. Once installed, open the setup wizard in your browser on port 3000. During setup:

- Set the DNS listen address to your device’s Tailscale IP
- Set the admin interface to the same Tailscale IP on port 3000
- Create an admin username and password

The key here is binding to your Tailscale IP rather than all interfaces. This ensures AdGuard only listens on your tailnet, not on your local network or the public internet.

By default, AdGuard will use your system’s DNS servers for upstream queries. That’s not ideal. We want encrypted DNS all the way through. In AdGuard Home, go to Settings → DNS settings → Upstream DNS servers and replace the defaults with Quad9’s DNS-over-HTTPS and DNS-over-TLS endpoints. Quad9 is a privacy-focused resolver that also blocks known malicious domains. For the Bootstrap DNS servers (used to resolve the upstream hostnames), add Quad9’s plain resolvers.
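As a sketch, Quad9’s published endpoints and bootstrap addresses look like this; verify them against Quad9’s own documentation before saving:

```text
# Upstream DNS servers (Settings → DNS settings)
https://dns.quad9.net/dns-query   # DNS-over-HTTPS
tls://dns.quad9.net               # DNS-over-TLS

# Bootstrap DNS servers (plain resolvers used to look up the hostnames above)
9.9.9.9
149.112.112.112
```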
I’d also recommend enabling DNSSEC validation and Optimistic caching in the same settings page for better security and performance.

Now the easy part. Open your Tailscale admin console and:

- Add your device’s Tailscale IP as a Global nameserver
- Enable Override local DNS

That’s it. Every device on your tailnet will now use your AdGuard instance for DNS resolution. This setup gives you:

- Ad and tracker blocking everywhere, not just at home
- Encrypted DNS queries, so your ISP can’t see what domains you’re resolving
- Malware protection via Quad9, which blocks known malicious domains at the DNS level
- A single dashboard to view query logs and statistics for all your devices in one place
- No client configuration, since Tailscale pushes the DNS settings automatically

If you do keep logging enabled, the query logs can be useful for identifying apps that are phoning home or misbehaving. But there’s a trade-off here. By default, AdGuard Home logs every DNS query from every device. That’s useful for debugging, but it felt uncomfortable to me. The majority of my family use my tailnet, and I have no interest in knowing what sites they’re visiting. I also don’t need my own traffic logged if it isn’t necessary. I’ve turned off query logging entirely in Settings → General settings → Query log configuration, and disabled statistics as well. Ad blocking still works without any of this data being stored.

Since all your devices depend on this DNS server, you’ll want to make sure it’s reliable. If the device running AdGuard goes offline, DNS resolution will fail for your entire tailnet. A few options to mitigate this:

- Run AdGuard on a device that’s always on (a dedicated home server or cloud VPS)
- Add a fallback DNS server in Tailscale (though this bypasses AdGuard when your server is down)
- Run a second AdGuard instance on another device and add both as nameservers

For my setup, I’m running it on a small Intel NUC that’s always on anyway. It’s been rock solid so far.

This is one of those setups that takes ten minutes and then quietly improves your life. Every device on my tailnet now gets ad blocking and secure DNS without any per-device configuration. The combination of Tailscale’s networking and AdGuard’s filtering is genuinely elegant. If you’re already running Tailscale, this is worth the effort.
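A quick sanity check from any device on the tailnet is to resolve a known ad domain and see what comes back. The 100.101.102.103 address below is only a placeholder for your AdGuard box’s Tailscale IP:

```bash
# Ask whatever DNS server Tailscale has pushed to this device:
nslookup doubleclick.net
# Or query the AdGuard instance directly via its Tailscale IP (placeholder shown):
nslookup doubleclick.net 100.101.102.103
# With AdGuard’s default blocking mode, a blocked domain resolves to 0.0.0.0.
```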

Danny McClelland 1 week ago

Using Proton Pass CLI to Keep Linux Scripts Secure

If you manage dotfiles in a public Git repository, you’ve probably faced the dilemma of how to handle secrets. API keys, passwords, and tokens need to live somewhere, but committing them to version control is a security risk. Proton has recently released a CLI tool for Proton Pass that solves this elegantly. Instead of storing secrets in files, you fetch them at runtime from your encrypted Proton Pass vault.

The CLI is currently in beta. Once it’s installed and you’ve authenticated (which opens a browser for Proton authentication), you’re ready to use the CLI. From there you can list your vaults, view an item, fetch a specific field, or get JSON output (useful for parsing multiple fields).

I have several tools that need API credentials. Rather than storing these in config files, I created wrapper scripts that fetch credentials from Proton Pass at runtime, such as a wrapper for a TUI application that needs API credentials. The key insight: fetching JSON once and parsing it locally is faster than making separate API calls for each field.

The Proton Pass API call takes a few seconds. For frequently-used tools, this adds noticeable latency. The solution is to cache credentials in the Linux kernel keyring. With caching:

- First run: ~5-6 seconds (fetches from Proton Pass)
- Subsequent runs: ~0.01 seconds (from the kernel keyring)

The cache expires after one hour, or when you log out, and it can also be cleared manually.

The CLI also has built-in commands for secret injection: one passes secrets as environment variables, another processes template files, and both use a URI syntax to reference secrets. For applications that read credentials from config files (like WeeChat’s), the wrapper can update the file before launching. The CLI can also act as an SSH agent, loading keys stored in Proton Pass, which is useful if you store SSH private keys in your vault.

This approach keeps secrets out of your dotfiles repository entirely. The wrapper scripts reference Proton Pass item names, not actual credentials. Your secrets remain encrypted in Proton’s infrastructure and are only decrypted locally when needed. The kernel keyring cache is per-user and lives only in memory. It’s cleared on logout or reboot, and the TTL ensures credentials don’t persist indefinitely. For public dotfiles repositories, this is a clean solution: commit your wrapper scripts freely, keep your secrets in Proton Pass.
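As a minimal sketch of that caching pattern, assuming a hypothetical proton-pass-cli invocation (the CLI command, item name, field, and environment variable below are all placeholders, not the tool’s documented interface), a wrapper looks something like this:

```bash
#!/usr/bin/env bash
# Sketch: cache a Proton Pass secret in the Linux kernel keyring.
# NOTE: "proton-pass-cli item get ..." is a placeholder invocation;
# substitute the real CLI command for fetching a field from your vault.
set -euo pipefail

KEY_NAME="myapp-api-token"   # hypothetical keyring entry name
CACHE_TTL=3600               # one hour

# Fast path: reuse the secret if it is already in the per-user keyring.
if key_id=$(keyctl search @u user "$KEY_NAME" 2>/dev/null); then
    secret=$(keyctl pipe "$key_id")
else
    # Cache miss: fetch from Proton Pass (slow, a few seconds)...
    secret=$(proton-pass-cli item get "My App" --field api-token)
    # ...then store it in the keyring with a one-hour expiry.
    key_id=$(printf '%s' "$secret" | keyctl padd user "$KEY_NAME" @u)
    keyctl timeout "$key_id" "$CACHE_TTL"
fi

# Hand the credential to the real program via the environment.
MYAPP_TOKEN="$secret" exec myapp "$@"
```

The keyctl timeout call gives the one-hour expiry, and clearing the cache by hand is a keyctl purge user myapp-api-token away.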

Danny McClelland 1 week ago

Scheduled Deploys for Future Posts

One of the small joys of running a static blog is scheduling posts in advance. Write a few pieces when inspiration strikes, set future dates, and let them publish themselves while you’re busy with other things. There’s just one problem: static sites don’t work that way out of the box.

With a dynamic CMS like WordPress, scheduling is built in. The server checks the current time, compares it to your post’s publish date, and serves it up when the moment arrives. Simple. Static site generators like Hugo work differently. When you build the site, Hugo looks at all your content, checks which posts have dates in the past, and generates HTML for those. Future-dated posts get skipped entirely. They don’t exist in the built output.

This means if you write a post today with tomorrow’s date, it won’t appear until you rebuild the site tomorrow. And if you’re using Netlify’s automatic deploys from Git, that rebuild only happens when you push a commit. No commit, no deploy, no post. I could set a reminder to push an empty commit every morning. But that defeats the purpose of scheduling posts in the first place.

The fix is straightforward: trigger a Netlify build automatically every day, whether or not there’s new code to deploy. Netlify provides build hooks for exactly this purpose. A build hook is a unique URL that triggers a new deploy when you send a POST request to it. All you need is something to call that URL on a schedule. GitHub Actions handles the scheduling side. A simple workflow with a cron trigger runs every day at midnight UK time and pings the build hook. Netlify does the rest.

First, create a build hook in Netlify:

- Go to your site’s dashboard
- Navigate to Site settings → Build & deploy → Build hooks
- Click Add build hook, give it a name, and select your production branch
- Copy the generated URL

Next, add that URL as a secret in your GitHub repository:

- Go to Settings → Secrets and variables → Actions
- Create a new repository secret
- Paste the build hook URL as the value

Finally, create a workflow file with a scheduled trigger (a sketch is included at the end of this post). The dual cron schedule handles UK daylight saving time. During winter (GMT), the first schedule fires at midnight. During summer (BST), the second one does. There’s a brief overlap during the DST transitions where both might run, but an extra deploy is harmless. The manual trigger is optional but handy. It adds a “Run workflow” button in the GitHub Actions UI, letting you trigger a deploy manually without pushing a commit.

Now every morning at 00:01, GitHub Actions wakes up, pokes the Netlify build hook, and a fresh deploy rolls out. Any posts with today’s date appear automatically. No manual intervention required. It’s a small piece of automation, but it removes just enough friction to make scheduling posts actually practical. Write when you want, publish when you planned.
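As a sketch of that workflow, assuming the file lives under .github/workflows/ and the secret was named NETLIFY_BUILD_HOOK (substitute whatever name you actually created), it looks something like this:

```yaml
name: Scheduled Netlify deploy

on:
  schedule:
    - cron: '1 0 * * *'    # 00:01 UTC, i.e. midnight UK time during GMT
    - cron: '1 23 * * *'   # 23:01 UTC, i.e. midnight UK time during BST
  workflow_dispatch:        # adds the manual "Run workflow" button

jobs:
  trigger-deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Ping the Netlify build hook
        run: curl -s -X POST -d '{}' "${{ secrets.NETLIFY_BUILD_HOOK }}"
```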

Danny McClelland 2 weeks ago

Leaving Spotify for Self-Hosted Audio

I’ve been a Spotify subscriber for years. It’s convenient, the catalogue is vast, and the recommendations used to be genuinely useful. But lately, I’ve found myself increasingly uncomfortable with the direction the platform is heading. It’s hard to pin down exactly when Spotify stopped feeling like a music service and started feeling like something else entirely. A few things have been gnawing at me:

Artist compensation is broken. The per-stream payout is famously tiny, and the model actively discourages the kind of music I actually want to support. Albums that reward repeated listening lose out to background playlist fodder designed to rack up streams. In 2024, Spotify stopped paying royalties entirely for any track under 1,000 streams, demonetising an estimated 86% of music on the platform.

The interface is hostile. Every update seems to prioritise podcasts, audiobooks, and algorithmically-generated content over letting me play my own playlists. The homepage is a mess of things I didn’t ask for.

AI-generated music is creeping in. There’s been a wave of low-effort AI tracks flooding the platform, often mimicking real artists or filling ambient playlists. Spotify removed over 75 million spam tracks in 2024 alone. It feels like the beginning of a race to the bottom, where quantity beats quality and genuine artists get drowned out.

I don’t own anything. After years of subscription payments, I have nothing to show for it. If Spotify disappears tomorrow, or removes an album I love, it’s just gone.

The company’s direction feels off. Beyond the platform itself, there’s the question of what Spotify’s leadership prioritises. CEO Daniel Ek has been investing heavily in European defence technology. That’s his prerogative, of course, but it underlines that my subscription money flows to a company whose priorities don’t align with mine.

Spotify Wrapped was a wake-up call. In previous years, Wrapped felt like a fun novelty. This year it was a reminder that I don’t actually listen to that many artists. The ones I enjoy, I play on repeat. So why am I paying a monthly subscription to listen to the same songs over and over? The artists I love aren’t seeing much from those streams, and I’m essentially renting music at an increasingly high cost. The family plan price keeps creeping up, and for what? The privilege of temporarily accessing albums I could just buy outright?

The obvious response is “just switch to another service.” But the alternatives have their own problems. YouTube Music / Google shares many of Spotify’s issues, with the added concern that both platforms profit from advertising revenue that flows from some less than savoury sources. When your business model depends on engagement at any cost, the incentives get murky fast. Apple Music locks you further into an ecosystem and has its own history of prioritising platform control over user freedom. Tidal is perhaps the current outlier. Better artist payouts, lossless audio as standard, and seemingly fewer of the dark patterns plaguing the others. But streaming services have a habit of starting idealistic and drifting toward the mean once growth becomes the priority. How long until Tidal follows the same path? I’d rather not find out by having my library disappear when they pivot.

The fundamental problem isn’t any single company. It’s the streaming model itself. When you rent access instead of owning files, you’re always at the mercy of corporate decisions you have no control over.
When I thought about what I wanted from music, the list was simple:

- Ownership - Files that live on my hardware, that I control
- Quality - Lossless audio, not compressed streams
- No algorithms - I’ll decide what to listen to, thanks
- Supporting artists - Buying albums directly puts more money in their pockets than years of streaming

I’ve landed on a self-hosted Plex library for my music collection, served up with Plexamp on all my devices. Plexamp is genuinely excellent. It’s a dedicated music player built by Plex, and it feels like it was designed by people who actually care about listening to music rather than optimising engagement metrics. Clean interface, proper gapless playback, and features like sonic exploration that help with discovery without feeling algorithmic. The client availability sealed the deal. Plexamp runs on iOS, Android, macOS, Windows, and Linux. The only gap is native car integration, but Bluetooth fills that role with minimal friction. Connect, play, done.

The server side is just Plex running on my existing home server. Music files live on local storage, backed up properly, under my control. No subscription required for basic playback, though Plex Pass unlocks some Plexamp features.

One of the benefits of owning your music files is choosing the quality. My entire library is FLAC: lossless audio that preserves every detail from the original recording. To be honest, I can’t reliably tell the difference between Spotify’s high-quality streams and lossless audio on my current setup. Most people can’t. But that’s not really the point. Audio technology keeps improving. Better headphones, better DACs, better speakers. The music I’m collecting now might be played on equipment that doesn’t exist yet. By storing everything in lossless, I’m preserving the highest possible quality for whatever the future brings. I’d rather have more data than I need today than wish I’d kept it later. With streaming, you get whatever quality the service decides to give you. With my own files, the choice is mine.

Bandcamp is the obvious choice for buying digital music directly. Artists get a better cut, you get lossless files, and there’s a strong community around it. In theory, it’s perfect. In practice, I find the search experience frustrating. Getting to the specific artist and album I want feels slower than it should. Maybe I’m spoiled by years of Spotify’s instant search, but the friction is noticeable. For now, I’m putting up with it because the alternatives are worse, but I’m constantly searching for something better. If you know of a good source for purchasing lossless music with a decent search experience, I’d love to hear about it.

I won’t pretend this is all upside. Spotify’s discovery features, when they worked, introduced me to artists I genuinely love. The convenience of having everything available instantly is hard to replicate. And sharing music with friends becomes more complicated when you can’t just send a link. But those trade-offs feel worth it. I’d rather have a smaller collection of music I actually own than endless access to a library that’s increasingly polluted with content designed to game the algorithm rather than move the listener.

I won’t sugarcoat it: the friction to switch has been fairly high. Ripping, cataloguing, and transferring content is one thing. The curated playlists from years gone by are another. Those playlists represent hours of listening, discovering, and refining. Losing them felt like losing a part of my music history. Soundiiz came in handy here, automatically copying playlists across to Plex. It worked well for most of the heavy lifting. But invariably there’s a song on a crucial playlist that I just don’t own yet, leaving a gap. Until I fill those gaps, the migration doesn’t feel complete.
It’s a slow process. Every missing track is a reminder that I’m rebuilding something that took years to accumulate. But each album I add is mine now, permanently, and that makes the effort feel worthwhile.

Danny McClelland 2 weeks ago

Omarchy Hardening

A few weeks ago, I came across A Word on Omarchy, which highlighted some security gaps in Omarchy’s default configuration. Things like LLMNR being enabled, UFW configured but not actually running, and relaxed login attempt limits. The post resonated with me. Omarchy is a fantastic opinionated setup for Arch Linux with Hyprland, but like any distribution that prioritises convenience, some security defaults get loosened in the process. That’s not necessarily wrong, it’s a trade-off, but it’s worth knowing about.

So I built Omarchy Hardening. It’s an interactive terminal script that walks you through five hardening options:

- Disable LLMNR - Prevents name poisoning attacks on local networks
- Enable UFW Firewall - For earlier Omarchy versions where UFW wasn’t enabled by default
- Tailscale-only SSH - Restricts SSH to your Tailscale network, making it invisible to the public internet
- Limit Login Attempts - Reduces failed attempts from 10 back to 3 before lockout
- Configure Git Signing - Enables SSH commit signing for verified commits

Each option shows exactly what will change before you confirm. Nothing is selected by default.

The script opens with a warning, and I’ll repeat it here: you should not rely on automation to secure your system. The best approach is to understand your distribution and make these changes yourself. Read the source code. Run the commands manually. This builds knowledge you’ll need when things go wrong. The tool exists to demonstrate what these changes look like and to make them easier to apply consistently. But it’s not a substitute for understanding.

If you’re curious about going further, the README includes a section on additional hardening steps. OpenSnitch is worth particular attention. It’s an application-level firewall that prompts you whenever a program tries to make a network connection. Educational and practical.

The code is on GitHub: dannymcc/omarchy-hardening
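To make the “run the commands manually” advice concrete, here is roughly what the LLMNR, UFW, and login-attempt changes look like done by hand on a systemd-based Arch install. Treat it as a sketch rather than the script’s actual code, and check the paths and values against your own system first:

```bash
# Disable LLMNR in systemd-resolved, then restart the resolver.
sudo mkdir -p /etc/systemd/resolved.conf.d
printf '[Resolve]\nLLMNR=no\n' | sudo tee /etc/systemd/resolved.conf.d/10-disable-llmnr.conf
sudo systemctl restart systemd-resolved

# Enable UFW with a default-deny inbound policy.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw enable

# Reduce failed login attempts before lockout to 3 (pam_faillock).
# Assumes an existing (possibly commented) 'deny = ...' line in the file.
sudo sed -i 's/^#\? *deny *=.*/deny = 3/' /etc/security/faillock.conf
```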

Danny McClelland 2 weeks ago

ZeroNet: The Web Without Servers

I’ve been exploring ZeroNet recently, a peer-to-peer web platform that’s been around since 2015 but still feels like a glimpse of what the internet could be. It’s not mainstream, and it’s not trying to be. But for anyone who cares about decentralisation and censorship-resistance, it’s worth understanding.

ZeroNet is a decentralised network where websites exist without traditional servers. Instead of requesting a page from a server somewhere, your browser downloads it from other users who already have it. Think BitTorrent, but for websites. Once you’ve visited a site, you become a host for it too. The more people visit, the more resilient the site becomes. There’s no company to take to court. No single point of failure. No domain registrar that can be pressured into pulling the plug.

The technical bits are surprisingly elegant. ZeroNet uses Bitcoin cryptography for identity. Each site has a unique address derived from a public/private key pair. The site owner signs updates with their private key, and everyone can verify those signatures. This means content can be updated, but only by whoever holds the key. No passwords, no accounts, no centralised authentication. Content is distributed using BitTorrent’s protocol. When you visit a ZeroNet site, you’re downloading it from peers and simultaneously seeding it to others. Sites are essentially signed archives that propagate across the network. For privacy, ZeroNet can route traffic through Tor. It’s optional, but turning it on means your IP address isn’t visible to other peers. Combined with the fact that there’s no central server logging requests, the privacy properties are genuinely interesting.

My interest in ZeroNet ties directly into my broader views on privacy. I’m not naive about the limitations of decentralised systems, or the fact that censorship resistance can protect content that probably shouldn’t be protected. But there’s something valuable in understanding how these networks function. The centralised web has become remarkably fragile. A handful of companies control most of the infrastructure, and they’re increasingly subject to political and legal pressure. That’s sometimes appropriate. Nobody wants to defend genuinely harmful content. But the tools of control, once built, don’t stay confined to their intended purpose.

ZeroNet represents a different architecture entirely. It’s not about evading accountability, it’s about distributing it. Instead of trusting a company to host your content and hoping they don’t change their terms of service, you trust mathematics. The trade-offs are real: slower access, no search engines worth mentioning, and a user experience that assumes technical competence. But those are engineering problems, not fundamental limitations.

I’m not suggesting everyone should abandon the normal web for ZeroNet. That would be impractical and unnecessary. But understanding how decentralised alternatives work feels increasingly important. The architecture of the tools we use shapes what’s possible, and diversity in that architecture is probably healthy. For now, I’m treating ZeroNet as an experiment. Something to explore and learn from rather than rely on. But in a world where digital infrastructure is more contested than ever, it’s useful to know that alternatives exist.

Thanks to ポテト for pointing me towards ZeroNet.

Danny McClelland 1 month ago

Value

I recently passed my advanced motorcycle test with the IAM. A F1RST, no less. The highest grade. And within hours of getting the result, I’d already started telling myself it wasn’t that impressive. This happens every time. The thing I’ve been working toward, the qualification, the goal, the milestone, suddenly feels smaller the moment I reach it. Not worthless, exactly. Just… less. As though the act of achieving it somehow deflated the whole thing.

Danny McClelland 7 months ago

The Endless Hunt for Productivity Nirvana

I’ve been chasing the perfect productivity setup for longer than I care to admit. The signs are all there: a Downloads folder cluttered with productivity apps, browser bookmarks organised by system acronyms, and that familiar feeling of starting fresh with yet another note-taking tool, convinced that this time will be different. My digital graveyard is extensive. NotePlan, Microsoft OneNote, Apple Notes, Google Keep, Notion, Airtable, Logseq, Google Docs, Obsidian, Simple Notes — I’ve installed them all, configured them meticulously, and abandoned them with the same predictable rhythm.

Danny McClelland 7 months ago

2025 Privacy Reboot: Six Month Check-In

Six months ago, I wrote about my privacy reboot — a gradual shift toward tools that take both privacy and security seriously. It was never about perfection or digital purity, but about intentionality. About understanding which tools serve me, rather than the other way around. Here’s how it’s actually gone. The wins: Ente continues to impress. The family photo migration is complete, and the service has been rock solid. The facial recognition quirks I mentioned on Android have largely sorted themselves out, and the peace of mind knowing our family memories aren’t feeding Google’s advertising machine feels worth the subscription cost.

Danny McClelland 8 months ago

Focus

We’ve all seen them: those productivity YouTubers with perfectly lit home offices explaining how they maintain “deep work” for 12+ hours a day. They sit there, looking impossibly serene, selling us a vision of superhuman concentration that I’ve come to believe is complete nonsense. I used to buy into this. I’d feel like a failure when my brain checked out after three solid hours of work. I’d push myself to match these claimed productivity marathons, only to end up exhausted and wondering what was wrong with me.

Danny McClelland 8 months ago

Trust

We like to believe we’re in control. That privacy is something we can protect if we just check the right boxes, read the fine print, toggle the right settings. But that belief is crumbling. In 2025, privacy isn’t something we manage — it’s something we quietly surrender, one tap, click, and scroll at a time. Lately, I’ve been thinking about how much I rely on Google. Not in an abstract way, but in a daily, tangible, everything-I-do-is-somehow-Google-enabled kind of way.

Danny McClelland 8 months ago

Balance

Tucked away in a parenting book I read nearly two decades ago — title and author long lost to time — was a metaphor that lodged itself in my brain and never left. “Life is a balance, or rather, a juggle of balls. Some are glass. Some are plastic.” The idea is simple but enduring: drop a plastic ball, and it bounces. Drop a glass one, and it shatters. The trick — the real tightrope act — is knowing which is which.

Danny McClelland 8 months ago

100 Days of Writing

Is there some magic in writing every day for 100 days? Maybe. Maybe not. But that’s not quite the right question. A better one might be: What would I hope to get out of writing every day for 100 days? For starters, I’d get better at clarity — saying what I mean without losing the thread halfway through. I’d build speed: less dithering, more straight-from-brain-to-fingers. And maybe, just maybe, I’d find a rhythm.

Danny McClelland 8 months ago

2025: My Privacy Reboot

Six Month Update: Curious how this privacy reboot actually worked out? I wrote a detailed follow-up after six months of living with these changes — covering what worked, what didn't, and the pragmatic compromises along the way. Read the Six Month Check-In →

The line between privacy and security isn’t always clear — and in tech, it’s often treated like they’re the same thing. But they’re not. Even the broader question of when to trust digital services with our data has become increasingly complex.

Danny McClelland 1 year ago

Privacy

I believe privacy is a fundamental right, and I’ve designed this blog to respect yours.

What I Track: This blog uses Umami Analytics to collect minimal, anonymous page view data. I track this information solely to understand which content resonates with readers, helping me focus my design and writing efforts on what’s genuinely valuable to my audience.

What I collect:
- Page views and basic navigation patterns
- General geographic regions (country level only)
- Referrer information (which site led you here)
- Device type (desktop, mobile, tablet)

What I don’t collect:

Danny McClelland 1 year ago

Replacing Google Photos with Immich

I have, for a long time, been looking for a better alternative to Google Photos. Although Google Photos does exactly what I want, and isn’t that expensive, I do often consider the fact that all of my photos are in Google’s hands. I did move to Synology Photos a few years ago. The move itself was straightforward enough, but the user experience leaves quite a lot to be desired.

Danny McClelland 2 years ago

2024 macOS Dotfiles

It is the time of year again when I decide to update my local computer configuration, as well as any remote Linux server(s) that I am maintaining. I really appreciate having a familiar prompt and alias setup whenever I log in to any of my servers/workstations. As per usual, I cannot remember which specific packages and plugins I use, so I’m using this post for future me to discover how I actually configured my environments.

Danny McClelland 2 years ago

Running a PowerShell Script as an Elevated User

When running a PowerShell script, I often find I need to run the script in an elevated prompt. The nature of my job is that often these scripts will be run by people who don’t really know what PowerShell is. I have found it quite useful to first create a bash script that the user executes, which in turn calls the actual PowerShell script as an elevated user. To keep this handy, I’m posting it here for future me.

Danny McClelland 2 years ago

Extending Unraid VM Storage

More and more I find myself quickly spinning up a new Windows VM on my Unraid server. It is always a ‘temporary’ VM which, after setting it up exactly how I like, I invariably wish I’d given a much larger virtual disk. The standard VM disk size is 30G, and at creation time that always seems like enough. Fast forward an hour or two and I really wish I had set something more realistic.

Danny McClelland 5 years ago

Tmux: exit current session, not Tmux itself

When accessing remote servers that I am responsible for, I always initiate a tmux session along with the SSH session. This means I am always in a tmux session and never forget to start one manually. There is something particularly frustrating about starting a process on a remote server only to realise that I forgot to start a tmux session and the process is going to take > 1 hour.
