Latest Posts (16 found)
matduggan.com 2 weeks ago

The Year of the 3D Printed Miniature (And Other Lies We Tell Ourselves)

One amusing thing about following tech news is how often the tech community makes a bold prediction or assertion, only to ultimately be completely wrong. This isn't amusing in a "ha ha, we all make mistakes" kind of way. It's amusing in the way that watching someone confidently stride into a glass door is amusing. You feel bad, but also, they really should have seen that coming. Be it VR headsets that would definitely replace reality by 2018, or self-driving cars in every driveway "within five years" (a prediction that has been made every five years since 2012), we have a remarkable talent for making assumptions about what consumers will like and value without having spent a single goddamn minute listening to those same consumers. It's like a restaurant critic reviewing a steakhouse based entirely on the menu font. So when a friend asked me what I thought about "insert new revolutionary technology that will change everything" this week, my brain immediately jumped to "it'll be like 3D printers and Warhammer." This comparison made sense in the moment, as we were currently playing a game of Warhammer 40,000, surrounded by tiny plastic soldiers and the faint musk of regret. But I think, after considering it later, it might make sense for more people as well—a useful exercise in tech enthusiasm versus real user wants and needs. Or, put another way: a cautionary tale about people who have never touched grass telling grass-touchers how grass will work in the future. One long-held belief among tech bros has been the absolute confidence that 3D printers would, at some point, disrupt . Exactly what they would disrupt wasn't 100% clear. Disruption, in Silicon Valley parlance, is less a specific outcome and more a vibe—a feeling that something old and profitable will soon be replaced by something new and unprofitable that will somehow make everyone rich. A common example trotted out was one of my favorite hobbies: tabletop wargaming. More specifically, the titan of the industry, Warhammer 40,000. Every time a new 3D printer startup graced the front page of Hacker News, this proclamation would echo from the comments section like a prophecy from a very boring oracle: "This will destroy Games Workshop." Reader, it has not destroyed Games Workshop. Games Workshop is doing fine. Games Workshop will be selling overpriced plastic crack to emotionally vulnerable adults long after the sun has consumed the Earth. For those who had friends in high school—and I'm not being glib here, this is a genuine demographic distinction—40k is a game where two or more players invest roughly $1,000 to build an army of small plastic figures. You then trim excess plastic with a craft knife (cutting yourself at least twice, this is mandatory), prime them, paint them over the course of several months, and then carefully transport them to an LGS (local game shop) in foam-lined cases that cost more than some people's luggage. Another fellow dork will then play you on a game board roughly the size of a door, covered in fake terrain that someone spent 40 hours making to look like a bombed-out cathedral. You will both have rulebooks with you containing as many pages as the Bible and roughly as open to interpretation. Wars have been started over less contentious texts. To put 40k in some sort of nerd hierarchy, imagine a game shop. At the ground level of this imaginary shop are Magic: The Gathering and Pokémon TCG games. Yes, these things are nerdy, but it's not that deep into the swamp. It's more of a gentle wade. 
You start with Pokémon at age 10, burn your first Tool CD at 14, and then sell your binder of 'mons to fund your Magic habit. This is the natural order of things. Deeper into the depths, maybe only playing at night like creatures who have evolved beyond the need for vitamin D, are your TTRPGs (tabletop RPGs). The titan of the industry is Dungeons & Dragons, but there is always some new hotness nipping at its heels, designed by someone who thought D&D wasn't quite complicated enough. TTRPGs are cheap to attempt to disrupt—you basically need "a book"—so there are always people trying. These are the folks with thick binders, sacks of fancy dice made from materials that should not be made into dice, and opinions about "narrative agency." Near the bottom, almost always in the literal basement of said shop, are the wargame community. We are the Morlocks of this particular H.G. Wells situation. I, like a lot of people, discovered 40k at a dark time in my life. My college girlfriend had cheated on me, and I had decided to have a complete mental breakdown over this failed relationship that was doomed well before this event. The cheating was less a cause and more a symptom, like finding mold on bread that was already stale. Honestly, in retrospect, hard to blame her. I was being difficult. I was the kind of difficult where your friends start sentences with "Look, I love you, but..." Late at night, I happened to be driving my lime green Ford Probe past my local game shop. The Ford Probe, for those unfamiliar, was a car designed by someone who had heard of cars but had never actually seen one. It was the automotive equivalent of a transitional fossil. I loved it the way you love something that confirms your worst suspicions about yourself. There, through the shop window, I saw people hauling some of the strangest items out of their trunks. Half-destroyed buildings. Thousands of tiny little figures. Giant robots the size of a small cat with skulls for heads. One man was carrying what appeared to be a ruined spaceship made entirely of foam and spite. I pulled over immediately. The owner, who knew me from playing Magic, seemed neither surprised nor pleased to see me. This was his default state. Running a game shop for 20 years will do that to a person. "They're in the basement," he said, in the mostly dark game shop, the way someone might say "the body's in the basement" in a very different kind of establishment. I descended the rickety wooden stairs to a large basement lit by three naked bulbs hanging from cords. The aesthetic was "serial killer's workspace" meets "your uncle's unfinished renovation project." It was perfect. Before me were maybe a dozen tables littered with plastic. Some armies had many bug-like things, chitinous and horrible. Others featured little skeletons or robots. There were tape measures everywhere and people throwing literal handfuls of small six-sided dice at the table with the intensity of gamblers who had nothing left to lose. Arguments broke out over millimeters. Someone was consulting a rulebook with the desperation of a lawyer looking for a loophole. I was hooked immediately. 40k is the monster of wargaming specifically because of a few genius decisions by Games Workshop, the creators—a British company that has somehow figured out how to print money by selling plastic and lore about a fascist theocracy in space. It's a remarkable business model. Since the beginning of the game, 40k casual games have allowed proxies. 
Proxies are stand-ins for specific units that you need for an army but don't have. Why don't you have them? Excellent question. Let me tell you about Games Workshop's relationship with its customers. Games Workshop has always played a lot of games with inventory. Often releases will have limited supply, or there are weird games with not fulfilling the entire order that a game shop might make. Even when they switched from metal to plastic miniatures, the issues persisted. This has been the source of conspiracy theories since the very beginning—whispers of artificial scarcity, of deliberate shortages designed to create FOMO among people who were already deeply susceptible to FOMO because they collect tiny plastic soldiers. Whether the conspiracy theories are true is almost beside the point. The feeling of scarcity is real, and feelings, as any therapist will tell you, are valid. Even the stupid ones. So players had proxies. Anything from a Coke can to another unit entirely. Basically, if it had the same size base and roughly the same height, most people would consider it allowable. "This empty Red Bull can is my Dreadnought." Sure. Fine. We've all been there. This is where I first started to see 3D-printed miniatures enter the scene. Similar to most early tech products, the first FDM 3D-printed miniatures I saw were horrible. The thick, rough edges and visible layer lines were not really comparable to the professional product, even from arm's length. They looked like someone had described a Space Marine to a printer that was also drunk. But they were totally usable as a proxy and better than a Coke can. The bar, as they say, was low. But the technology continued to get better and cheaper and, as predicted by tech people, I started to notice more and more interest in 3D printing among people at the game stores. When I first encountered a resin 3D-printed army at the table, I'll admit I was intrigued. This person had basically fabricated $3,000 worth of hard-to-get miniatures out of thin air and spite. This was supposed to be the big jumping-off point. The inflection moment. There were a lot of discussions at the table about how soon we wouldn't even have game shops with inventory! They'd be banks of 3D printers that we would all effortlessly use to make all the minis we wanted! The future was here, and it smelled like resin fumes! Printing a bunch of miniatures off a resin 3D printer quickly proved to have a lot of cracks in this utopian plan. Even a normal-sized mini took hours to print. That wouldn't be so bad, except these printers couldn't just live anywhere in your apartment. They're not like a Keurig. You can't just put them on your kitchen counter and forget about them. When I was invited to watch someone print off minis with a resin 3D printer, it reminded me a lot of the meth labs in my home state of Ohio. And I don't mean that as hyperbole. I mean there were chemicals, ventilation hoods, rubber gloves, and a general atmosphere of "if something goes wrong here, it's going to go very wrong." The guy giving me the tour had safety goggles pushed up on his forehead. He was wearing an apron. At one point, he said the phrase "you really don't want to get this on your skin" with the casual tone of someone who had definitely gotten it on his skin. In practice, the effort to get the STL files, add supports, wash off the models with isopropyl alcohol, remove supports without snapping off tiny arms, and finally cure the mini in UV lights was exponentially more effort than I'm willing to invest. 
And I say this as someone who has painted individual eyeballs on figures smaller than my thumb. I have a high tolerance for tedious bullshit. This exceeded it. Before I start, I first want to say I don't dislike the 3D printing community. I think it's great they're supporting smaller artists. I love that they found a hobby inside of a hobby, like those Russian nesting dolls but for people who were already too deep into something. I will gladly play against their proxy armies any day of the week. But people outside of the hobby proclaiming that this is the "future" are a classic example of how they don't understand why we're doing the activity in the first place. It's like watching someone who has never cooked explain how meal replacement shakes will eliminate restaurants. You're not wrong that it's technically more efficient. You're just missing the entire point of the experience. The reason why Games Workshop continues to have a great year after year—despite prices that would make a luxury goods executive blush, despite inventory issues, despite a rulebook that changes often enough to require a subscription service—is because of this fundamental misunderstanding. Players invest a lot of time and energy into an army. You paint them. You decorate the plastic bases with fake grass and tiny skulls. You learn their specific rules and how to use them. You develop opinions about which units are "good" and which are "trash" and you will defend these opinions with the fervor of a religious convert. Despite the eternal complaints about the availability of inventory, the practical reality is that most people can only keep a pipeline of one or maybe two armies going at once. The bottleneck isn't acquiring plastic. The bottleneck is everything else . So let's do the math on this. You buy a resin 3D printer. All the supplies. You get a spot in your house where you can safely operate it—which means either a garage, a well-ventilated spare room, or a relationship-ending negotiation with whoever you live with. You find or buy all the STLs you need. Let's say they all have supports in the files, so you just need to print them off. Best-case scenario. Let's say we break even around 50-75 infantry and a few larger models. This is over the raw cost of materials, but we need to factor in the space in your house it takes up, plus there's a learning curve with figuring out how to do it. You also need to invest a lot of time getting these files for printing and finding the good ones. For the sake of keeping this simple, let's just assume the actual printing process goes awesome. No failed prints. No supports that fuse to the model. No discovering that your file was corrupted after six hours of printing. Fantasy land. Here's the thing: getting the raw plastic minis is not the time-consuming part. First, you need to paint them. I take about two hours to paint each model, and I'm far from the best painter out there. I'm solidly in the "looks good from three feet away" category, which is also how I'd describe my general appearance. Vehicles take longer because they're bigger—maybe 10-20 hours for one of those. We're talking somewhere in the ballpark of 150 hours to paint everything that you need to paint for a standard army. Now don't get me wrong, I love painting. But I'm a 38-year-old with a child and a full-time job. Finding 150 hours for anything that isn't work, childcare, or sleep requires the kind of calendar Tetris that would make a project manager weep. 
It is a massive investment of time to get an army on the table, even if you remove the financial element of buying the minis entirely. Frankly, the money I pay to Games Workshop is the easiest part of the entire process. Often the box will be lovingly stacked on top of other sealed mini boxes—a pile of shame, we call it—until I can start the process of even hoping to catch up. I have boxes I bought during the Obama administration. They're still sealed. They judge me. But okay, let's say we get them all painted. What's next? Next comes "learn how the army works." There is a ton of flexibility to each army in 40k and how they work and operate. It takes a bit of research and time to figure out what they all do, which is something you are 100% expected to know cover to cover when you show up to play. It's not my job to know what your army can and cannot do. If you show up not knowing your own rules, you will be eaten alive, and you will deserve it. So what I saw with the 3D printing crowd felt a lot like the "Year of the Linux Desktop" crowd. Every year they would proclaim that soon we'd all get on board with their vision. They would print off an incredibly impressive army with all the hard-to-find minis that were sold once at a convention in 1997. They'd get the army "painted" to some definition of painted—and I'm using those quotation marks with malice—get on the table, and then play effectively that one army the same as the rest of us. The printer didn't give them more time. It didn't give them more skill. It just gave them more unpainted plastic, which, brother, I have plenty of already. For those in the 3D printing crowd who weren't big into playing, just painting, part of the point is showing off your incredible work to everyone else. Except nobody wants to see a 3D-printed forgery of an official model. It's like showing up to a car show with a kit car that looks like a Ferrari. Sure, it's impressive in its own way, but it's not really a Ferrari, and everyone knows it, and now we're all standing around pretending we don't know it, and it's uncomfortable for everyone. Once someone figured out one of your minis was 3D printed, shops generally wouldn't feature it in their display cases. So there was no reason for people who were going to put in 10+ hours per model to skip paying for the official real models. If you're going to invest that much time, you want the real thing. You want the little Games Workshop logo on the base. You want to be able to say "yes, I paid $60 for this single figure" with the quiet dignity of someone who has made peace with their choices. "Well then the shops can just sell the STLs and do the printing there!" This shows me you haven't spent a lot of time in these shops. Game shops need to carry a ton of inventory all the time, and a lot of their sales are impulse purchases. I see a mini I wouldn't typically be interested in, but it's done and ready, and I'm weak, and now I own it. That's the business model. They also operate on relatively thin margins—these aren't Apple Stores, they're labors of love run by people who got into this because they loved games and are now slowly being crushed by commercial rent and distributor minimums. It's just not feasible for them to print minis on demand and have enough staff to keep an eye on all the printing. Plus, tabletop wargaming isn't their major revenue generator anyway—it's card games like Pokémon and Magic. The wargamers in the basement are a bonus, not the main attraction. 
We're the weird cousins who show up to Thanksgiving and everyone tolerates us because we're family. At the end of the day, the 3D printing proclamation that it would disrupt my hobby ended up being a whole lot of nothing. A series of reasonable mistakes were made by people enthusiastic about the technology, resulting in the current situation where every year is the year that all of this will get disrupted. Any day now. Just you wait. They looked at the price of miniatures and saw inefficiency. They looked at the scarcity and saw opportunity. What they didn't see was that the price and the scarcity were almost beside the point. The hobby isn't about acquiring plastic. The hobby is about what you do with the plastic after you acquire it. The hobby is about the 150 hours of painting. The hobby is about the arguments over rules interpretations. The hobby is about descending into a basement lit by three naked bulbs and finding your people. You can't 3D print that. So the next time someone tells you that some new technology is going to "disrupt" something you love, ask yourself: do they actually understand why people love it? Do they understand the irrational, inefficient, deeply human reasons people engage with this thing? Or are they just looking at a spreadsheet and seeing numbers that don't make sense to them? Because if it's the latter, you can probably ignore them. They'll be wrong. They're almost always wrong. In the meantime, you can find me in the basement, losing match after match, surrounded by tiny plastic soldiers I've spent hundreds of hours painting, playing a game that makes no sense to anyone who hasn't given themselves over to it completely. It's not efficient. It's not optimized. It's not disrupting anything. The game looks more complicated to play than it is. Especially now, in the 10th edition, the core rules don't take long to learn. However, there is a lot of depth to the individual options available to each army that take a while to master. So it hits that sweet spot of being fast to onboard someone onto while still providing frightening amounts of depth if you're the kind of person who finds "frightening amounts of depth" appealing rather than exhausting. I am that kind of person. This explains a lot about my life. The community is incredible. When I moved from Chicago to Denmark, it took me less than three days to find a local 40k game. Same thing when I moved from Michigan to Chicago. The age and popularity of the game means it is a built-in community that follows you basically around the world. Few other properties have this kind of stickiness. It's like being a Deadhead, except instead of following a band, you're following a shared delusion that tiny plastic men matter. They do matter. Shut up. Cool miniatures. They look nice. They're fun to paint and put together. They're complicated without being too annoying. This is the part that 3D printers are supposed to help with.

matduggan.com 1 months ago

SQLite for a REST API Database?

When I wrote the backend for my Firefox time-wasting extension ( here ), I assumed I was going to be setting up Postgres. My setup is boilerplate and pretty boring, with everything running in Docker Compose for personal projects and then persistence happening in volumes. However, when I was working with it locally, I obviously used SQLite since that's always the local option that I use. It's very easy to work with, nice to back up and move around, and in general a pleasure to use. As I was setting up the launch, I realized I really didn't want to set up a database. There's nothing wrong with having a Postgres container running, but I'd like to skip it if it's possible.

My limited understanding of SQLite before I started this was "you can have one writer and many readers". I had vaguely heard of SQLite "WAL", but my understanding of WAL is more in the context of shipping WAL between database servers: you have one primary and many readers, you ship WAL from the primary to the readers, and then you can promote a reader to the primary position once it has caught up on WAL. My first attempt at setting up SQLite for a REST API died immediately in exactly this way. So by default SQLite:

- Only one writer at a time
- Writers block readers during transactions

This seems to be caused by SQLite having a rollback journal and using strict locking, which makes perfect sense for the use-case that SQLite is typically used for, but I want to abuse that setup for something it is not typically used for. So after doing some Googling I ended up with these as the sort of "best recommended" options. I'm 95% sure I copy/pasted the entire block (a reconstructed sketch of it is at the end of this post). What is this configuration doing?

- journal_mode = WAL: switches SQLite from the rollback journal to Write-Ahead Logging (WAL). Default behavior is write -> copy original data to journal -> modify database -> delete journal. WAL mode is write -> append changes to WAL file -> periodically checkpoint to main DB.
- synchronous = NORMAL: here you have 4 options to toggle for how often SQLite syncs to disk. OFF means SQLite lets the OS handle it. NORMAL means the SQLite engine still syncs, but less often than FULL; WAL mode is typically safe from corruption with NORMAL. FULL uses the xSync method of the VFS (don't feel bad, I've never heard of it before either: https://sqlite.org/vfs.html ) to ensure everything is written to disk before moving forward. EXTRA: I'm not 100% sure what this exactly does but it sounds extra. "EXTRA synchronous is like FULL with the addition that the directory containing a rollback journal is synced after that journal is unlinked to commit a transaction in DELETE mode. EXTRA provides additional durability if the commit is followed closely by a power loss. Without EXTRA, depending on the underlying filesystem, it is possible that a single transaction that commits right before a power loss might get rolled back upon reboot. The database will not go corrupt. But the last transaction might go missing, thus violating durability, if EXTRA is not set."
- busy_timeout = 60000: please wait up to 60 seconds for a lock before giving up.
- cache_size = -64000: this one threw me for a loop. Why is it a negative number? If you set it to a positive number, you mean pages. SQLite page size is 4kb by default, so 2000 = 8MB. A negative number means KB, which is easier to reason about than pages. I don't really know what a "good" cache_size is here. 64MB feels right given the kind of data I'm throwing around and how small it is, but this is guess work.
- temp_store = MEMORY: write temporary tables and indices to memory, not disk. Makes sense for speed.

However my results from load testing sucked. Now this is under heavy load (simulating 1000 active users making a lot of requests at the same time, which is more than I've seen), but still this is pretty bad. The cause of it was, of course, my fault. My "blacklist" is mostly just sites that publish a ton of dead links. However I had the setup wrong and was making a database query per website to see if it matched the blacklist. Stupid mistake. Once I fixed that: great! Or at least "good enough from an unstable home internet connection with some artificial packet loss randomly inserted".

So should you use SQLite as the backend database for a FastAPI setup? Well, it depends on how many users you are planning on having. Right now I can handle between 1000 and 2000 requests per second if they're mostly reads, which is exponentially more than I will need for years of running the service. If at some point in the future that no longer works, it's thankfully very easy to migrate off of SQLite onto something else. So yeah, overall I'm pretty happy with it as a design.

matduggan.com 1 months ago

Making RSS More Fun

I don't like RSS readers. I know, this is blasphemous, especially on a website where I'm actively encouraging you to subscribe through RSS. As someone writing stuff, RSS is great for me. I don't have to think about it, the requests are pretty lightweight, and I don't need to think about your personal data or what client you are using. So as a protocol RSS is great, no notes. However, as something I'm going to consume, it's frankly a giant chore. I feel pressured by RSS readers, where there is this endlessly growing backlog of things I haven't read. I rarely want to read all of a website's content from beginning to end; instead I like to jump between them. I also don't really care if the content is chronological: an old post about something interesting isn't less compelling to me than a newer post.

What I want, as a user experience, is something akin to TikTok. The whole appeal of TikTok, for those who haven't wasted hours of their lives on it, is that I get served content based on an algorithm that determines what I might think is useful or fun. However, what I would like is to go through content from random small websites. I want to sit somewhere and passively consume random small creators' content, then upvote some of that content, and the service should show that more often to other users. That's it. No advertising, no collecting tons of user data about me, just a very simple "I have 15 minutes to kill before the next meeting, show me some random stuff." In this case the "algorithm" is pretty simple: if more people like a thing, more people see it. But with Google on its way to replacing search results with LLM generated content, I just wanted to have something that let me play around with the small web the way that I used to. There actually used to be a service like this called StumbleUpon, which was more focused on pushing users towards popular sites. It has been taken down, presumably because there was no money in a browser plugin that sent users to other websites whose advertising you didn't control.

You can go download the Firefox extension now and try this out and skip the rest of this if you want. https://timewasterpro.xyz/ If you hate it or find problems, let me know on Mastodon. https://c.im/@matdevdug

So I wanted to do something pretty basic. You hit a button, get served a new website. If you like the website, upvote it, otherwise downvote it. If you think it has objectionable content then hit report. You have to make an account (because I couldn't think of another way to do it) and then if you submit links and other people like them, you climb a Leaderboard. On the backend I want to (very slowly, so I don't cost anyone a bunch of money) crawl a bunch of RSS feeds, stick the pages in a database and then serve them up to users. Then I want to track what sites get upvotes and return those more often to other users so that "high quality" content shows up more often. "High quality" would be defined by the community, or just me if I'm the only user. It's pretty basic stuff, most of it copied from tutorials scattered around the Internet. However I really want to drive home to users that this is not a Serious Thing. I'm not a company, this isn't a new social media network, there are no plans to "grow" this concept beyond the original idea unless people smarter than me ping me with ideas.
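To make the "more likes, more often" idea concrete, here's a minimal sketch of upvote-weighted random selection. The table and column names are assumptions for illustration, not the extension's real schema.

```python
import random
import sqlite3

def pick_page(conn: sqlite3.Connection) -> str:
    """Serve a random indexed page, weighted so well-liked sites come up more often.
    Table and column names are illustrative assumptions, not the real schema."""
    rows = conn.execute(
        "SELECT url, upvotes, downvotes FROM pages WHERE reported = 0"
    ).fetchall()
    # Every page keeps a small base weight so brand-new content still gets served;
    # net upvotes raise the weight, heavy downvotes sink it toward (but never to) zero.
    weights = [max(1 + up - down, 0.1) for _, up, down in rows]
    url, _, _ = random.choices(rows, weights=weights, k=1)[0]
    return url
```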
So I found this amazing CSS library: https://sakofchit.github.io/system.css/ Apple's System OS design from the late-80s to the early 90s was one of my personal favorites, and I think it sends a strong signal to a user that this is not a professional, modern service. Great, the basic layout works. Let's move on!

I ended up doing FastAPI because it's very easy to write. I didn't want to spend a ton of time writing the API because I doubt I nailed the API design on the first round. I use sqlalchemy for the database. The basic API layout is as follows:

- admin - mostly just generating read-only reports of like "how many websites are there"
- leaderboard - this is my first attempt at trying to get users involved. Submit a website that other people like? Get points, climb the leaderboard.

The source for the RSS feeds came from the (very cool) Kagi small web Github. https://github.com/kagisearch/smallweb . Basically I assume that websites that have submitted their RSS feeds here are cool with me (very rarely) checking for new posts and adding them to my database. If you want the same thing as this does, but as an iFrame, that's the Kagi small web service.

The scraping work is straightforward. We make a background worker; it grabs 5 feeds every 600 seconds, checks for new content on each feed and then waits until the 600 seconds have elapsed to grab 5 more from the smallweb list of RSS feeds. Since we have a lot of feeds, this ends up looking like we're checking for new content less than once a day, which is the interval that I want. Then we write it out to a sqlite database and basically track "has this URL been reported" (if so, put it into a review queue) and how many times this URL has been liked or disliked. I considered a "real" database, but honestly sqlite is getting more and more scalable every day and it's impossible to beat the immediate start up and functionality. Plus it's very easy to back up to encrypted object storage, which is super nice for a hobby project where you might wipe the prod database at any moment.

In terms of user onboarding I ended up doing the "make an account with an email, I send a link to verify the email". I actually hate this flow and I don't really want to know a user's email. I never need to contact you and there's not a lot associated with your account, which makes this especially silly. I have a ton of email addresses and no real "purpose" in having them. I'd switch to Login with Apple, which is great from a security perspective, but not everybody has an Apple ID. I also did a passkey version, which worked fine, but the OSS passkey handling was pretty rough still and most people seem to be using a commercial service that handles the "do you have the passkey? Great, if not, fall back to email" flow. I don't really want to do a big commercial login service for a hobby application. Auth is a JWT, which actually was a pain and I regret doing it. I don't know why I keep reaching for JWTs, they're a bad user experience and I should stop.

I'm more than happy to release the source code once I feel like the product is in a somewhat stable shape. I'm still ripping down and rewriting relatively large chunks of it as I find weird behavior I don't like or just decide to do things a different way. In the end it does seem to do what's on the label. We have over 600,000 individual pages indexed. Honestly I've been pretty pleased. But there are some problems:

- I couldn't find a reliable way of switching the keyboard shortcuts to be Mac/Windows specific. I found some options for querying platform but they didn't seem to work, so I ended up just hardcoding them as Alt, which is not great.
- When you are making an extension, you spend a long time working with these manifest.json files. There was one specific part I really wasn't sure about; I'm not entirely sure if that's all I'm doing? I think so from reading the docs.
- I need to sort stuff into categories so that you get more stuff in genres you like. I don't 100% know how to do that, maybe there is a way to scan a website to determine the "types" of content that is on there with machine learning? I'm still looking into it.
- There's a lot of junk in there. I think if we reach a certain number of downvotes I might put it into a special "queue". I want to ensure new users see the "best stuff" early on, but there isn't enough data to determine "best vs worst".
- Non-technical beta testers get overwhelmed by technical content.
- I wish there were more independent photography and science websites. Also more crafts. That's not really a "future thing", just me putting a hope out into the universe.

Anyway, I built this mostly for me. I have no idea if anybody else will enjoy it. But if you are bored I encourage you to give it a try. It should be pretty lightweight and straightforward if you crack open the extension and look at it. I'm not loading any analytics into the extension, so basically until people complain about it, I don't really know if it's going well or not.
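For the curious, here is a minimal sketch of the polling loop described above: five feeds per batch, 600 seconds between batches. The function and variable names (and the use of feedparser) are my own assumptions for illustration, not the extension's actual code.

```python
import time

import feedparser  # assumed library choice for RSS parsing, not necessarily what the project uses

POLL_INTERVAL = 600  # seconds between batches
BATCH_SIZE = 5       # feeds checked per batch

def crawl_forever(feed_urls: list[str], save_entry) -> None:
    """Cycle slowly through the feed list: check 5 feeds, sleep out the rest of the window, repeat."""
    index = 0
    while True:
        started = time.monotonic()
        for url in feed_urls[index:index + BATCH_SIZE]:
            parsed = feedparser.parse(url)
            for entry in parsed.entries:
                save_entry(url, entry)  # caller decides how to dedupe and store (e.g. in SQLite)
        index = (index + BATCH_SIZE) % max(len(feed_urls), 1)
        # Wait until the full 600-second window has elapsed before grabbing the next batch.
        elapsed = time.monotonic() - started
        time.sleep(max(POLL_INTERVAL - elapsed, 0))
```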

matduggan.com 3 months ago

I broke and fixed my Ghost blog

Once a month I will pull down the latest docker images for this server and update the site. The Ghost CMS team updates things at a pretty regular pace, so I try to not let an update sit for too long. With this last round I suddenly found myself locked out of my Ghost admin panel. I was pretty confident that I hadn't forgotten my password, and when I was looking at the logs, I saw this pretty spooky error. I was surprised by this sudden error, especially when I dumped out the database and confirmed that the hashed password for my Ghost user matched the password I was giving it. If you want to try that, this is the guide I followed: https://hostarmada.com/tutorials/blog-cms/ghost/how-to-change-the-admin-password-of-your-ghost-blog-if-you-get-locked-out/ Ghost is a good CMS, but it can be a little bit slow under load from automated scraping by RSS readers, so I want to cache everything that I can with Nginx and use it to store a lot of that junk. My configuration is not too terribly clever and has worked up to this point. The basic point is to get caching on the public content and then definitely NOT cache the Ghost admin panel. After some testing, I confirmed this all seemed to work. But I was still locked out. Alright, so I still couldn't figure out what was going on, so I went through the docs. Then I found this seemingly new addition: https://docs.ghost.org/config#security Now I have transactional email set up, but just looking at the error it seemed related. So I added a line to my docker-compose file to disable this new feature and then, blamo, it suddenly works fine. So if you are locked out of your Ghost admin panel, disable this (temporarily, hopefully, because it's a good feature) to let you continue to log in, debug your transactional email, and then turn it back on. Hope that helps.
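For anyone hitting the same wall: based on the security section of the linked config docs, my best guess is that the feature involved is Ghost's staff device verification, which emails a login code and therefore locks you out when transactional email is broken. The exact key below is an assumption on my part, not something confirmed by this post; verify it against the Ghost config docs before relying on it.

```yaml
# docker-compose.yml excerpt (hedged sketch; the config key is an assumption, check the Ghost docs)
services:
  ghost:
    image: ghost:latest
    environment:
      # Ghost maps double-underscore env vars to nested config keys,
      # so this corresponds to security.staffDeviceVerification in config.
      security__staffDeviceVerification: "false"
```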

matduggan.com 3 months ago

Greenland is a beautiful nightmare

Greenland is a complicated topic here in Denmark. The former colony that is still treated a bit like a colony is something that inspires a lot of emotions. Greenland has been subjected to a lot of unethical experiments by Denmark, from taking their kids away to wild experiments in criminal justice. But there is also a genuine pride a lot of people have here for the place, and you run into Danes who grew up there more often than I would have guessed. When the idea of going to Greenland was introduced to me, I was curious. Having lived in Denmark for a while, you hear a lot about the former colony and its 55,000 residents. We were invited by a Danish family that my wife was close with growing up. They wanted to take their father back to see the place he had spent some time in during his 20s and that had left quite an impression on him. A few drinks in, I said "absolutely let's do it", not realizing we had already committed to going and I had missed the text message chain. A few weeks before I went, I realized "I don't know anything about Greenland" and started to watch some YouTube videos. It was about this time that I started to get a pit in my stomach, the "oh god I think I've made a huge mistake" feeling I'm painfully familiar with after a career in tech. Greenland appeared to have roughly 9 people living there and maybe 5 things to look at. Even professional travel personalities seemed to be scraping the bottom of the barrel. "There's the grocery store again!" they would point out as they slipped down the snowy roads. I couldn't tell any difference between different towns in the country. It reminded me a lot of driving through Indiana. For those not in the US, Indiana is a state famous mostly for being one you must drive through in order to get somewhere better. If you live in Michigan, a good state, and want to go to Illinois, another good state, you must pass through Indiana, a blank state. Because of that little strip of geography, you often find yourself passing through the place. Driving through Indiana isn't bad, it's just an empty void. It's like a time machine back to the 90s when people still smoked in restaurants, but also there's nothing that sticks out about it. There is nothing distinct about Indiana, it's just a place full of people who got too tired on their way to somewhere better and decided "this is good enough". The difference is that Greenland is very hard to get to, as I was about to learn. Finally the day arrived. Me, my wife, daughter, 4 other children and 6 other adults all came to the Copenhagen Airport and held up a gate agent for what felt like an hour to slowly process all of our documents. Meanwhile, I nursed a creeping paranoia that I'd be treated as some sort of American spy, given my government's recent hobby of threatening to purchase entire countries like they're vintage motorcycles on Craigslist. The 5-hour flight is uneventful, the children are beautifully behaved and I begin to think "well this seems ok!" like the idiot I am. Just as I can look down and see the airport, the pilot comes on and informs us that there is too much fog to land safely. Surely fog cannot stop a modern aircraft full of all these dials and screens, I think, foolishly. We are informed there is enough fuel to circle the airport for 5 hours to wait for the fog to lift. What followed was three hours of flying in lazy circles, like a very expensive, very slow merry-go-round. After the allotted time, we are informed that we must fly to Iceland to refuel and then we will be returning to Denmark.
After a total of 15 hours in the air we will be going back to exactly where we started, to do the entire thing again. We were obviously upset at this turn of events, but the native Greenlanders on board seemed utterly unsurprised, displaying the kind of resigned familiarity that suggested this was Tuesday for them. As I later learned, this happens all the time. I began wondering if I could just pretend Iceland was Greenland—surely my family wouldn't notice the difference? But the pilot, apparently reading my mind, announced that no one would be disembarking in Iceland. It felt oddly authoritarian, like being grounded by an airline, as if they knew we'd all just wander off into Reykjavik and call it close enough. We crash out in an airport hotel 20 minutes from our apartment after 15 hours in the air and tons of CO2 emissions, only to wake up the next day to start again. This time, I notice that all of the people are asking for (and receiving) free beer from the crew, which they are stashing in their bags. It turns out soda and beer, really anything that needs to be imported, is pretty expensive in Greenland. The complimentary drinks are there to be kept for later. Finally we land. The first thing you notice when you land in Greenland is there are no trees or grass. There is snow and then there is exposed rock. The exterior of the airport is metal but the inside is wood, which is strange because again there are no trees. This would end up being a theme, where buildings representing Denmark were made out of lots of wood, almost to ensure that you understood they weren't from here. We ended up piling all of our stuff into a bus and heading for the hotel in Nuuk. Nuuk is the capital of Greenland and your introduction to the incredible calm of the Greenlandic people. I have never met a less stressed out group of humans in my life. Nobody is really rushing anywhere, it's all pretty quiet and calm. The air is cold and crisp with lots of kids playing outside and just generally enjoying life. The city itself sits in a landscape so dramatically inhospitable it makes the surface of Mars look cozy. Walking through the local mall, half the shops sell gear designed to help you survive what appears to be the apocalypse. Yet somehow, there's traffic. Actual traffic jams in a place where you can walk from one end to the other in twenty minutes. It's like being stuck behind a school bus in your own driveway. To put the scale into some perspective, it is only six kilometers from the sorta furthest tip to the airport. But riding the bus around Nuuk was a peaceful experience that lets you see pretty much the entire city without needing to book a tour or spend a lot of money. We went to Katuaq, a cultural center with a movie theater and a cafe with absolutely delicious food. But even riding the bus around, it is impossible to escape the feeling that this is a place fundamentally hostile to human life. The sun is bright and during the summer it's pretty hot, with my skin feeling like it was starting to burn pretty much the second it was exposed to the light. It's hard to even dress for, with layers of sunscreen, bug spray and then something warm on top in case you suddenly get cold. The sun, meanwhile, has apparently forgotten how to set, turning our hotel rooms into solar ovens. You wake up in a pool of your own sweat, crack a window for relief, and immediately get hit with air so cold it feels personal.
It's like being trapped in a meteorological mood swing. So after a night here, we went back to the airport again and flew to our final destination, Ilulissat. The flight revealed Greenland's true nature: endless, empty hills stretching toward infinity, punctuated by ice formations that look like nature's sculpture garden. Landing in Ilulissat felt like victory—we'd made it to the actual destination, not just another waypoint in our Arctic odyssey. Walking through the tiny airport, past Danish military recruitment posters (apparently someone, somewhere, thought this place needed defending), I felt genuinely optimistic for the first time in days. Well, you can sleep easy, Danish military, because Ilulissat is completely protected from invasion. The second I stepped outside I was set upon by a flood of mosquitos like I have never experienced before. I have been to the jungles of Vietnam, the swamps of Florida and the Canadian countryside. This was beyond anything I've ever experienced. There are bugs in my mouth, ears, eyes and nose almost immediately. The photo below is not me being dramatic, it is actually what is required to keep them off of me. In fact, what you need to purchase in order to walk around this area at all are basically bug nets for your face. They're effectively plastic mesh bags that you put on. Our hotel, charming in that "remote Arctic outpost" way, sat adjacent to what I can only describe as a canine correctional facility. Dozens of sled dogs were chained to rocks like some sort of prehistoric parking lot, each with a tiny house they could retreat to when the existential weight of their circumstances became too much. Now, I'd always imagined sled dogs living their best life—running through snow, tongues lolling, living the Disney version of Arctic life. I'd never really considered their downtime, assuming they frolicked in meadows or something equally wholesome. The reality was more "minimum security prison with a view." The dogs are visited roughly twice a day by the person who owns and feeds them, which was quite the party for the dogs, who lost their minds whenever the car pulled up. Soon the kids really looked forward to dog feeding time. The fish scraps the dogs lived on came out of a chest freezer that was left exposed up on the rock face without electricity, and you could smell it from 50 yards away when it opened. During one such performance, a fellow parent leaned over and whispered with the casual tone of someone commenting on the weather, "I think that one is dead." Before I could process this information, the frozen canine was unceremoniously launched over a small cliff like a furry discus. A second doggy popsicle followed shortly after, right in front of our assembled children, who watched with the kind of wide-eyed fascination usually reserved for magic shows. We stopped making dog feeding time a group activity after that and had to distract the kids from ravens flying away with tufts of dog fur. Obviously a big part of Greenland is the nature, specifically the icebergs. Icebergs are incredible, and during the week we spent up there, I enjoyed watching them every morning. It's like watching a mountain slowly moving while you sit still. The visual contrast of the ice and the exposed stone is beautiful and peaceful. Finding our tour operator proved to be an exercise in small-town efficiency.
The man who gave me directions was the same person who picked us up from the airport, who was also our tour guide, who probably doubled as the mayor and local meteorologist. It was like a one-man civic operation disguised as multiple businesses—the ultimate small-town gig economy. The sea around Greenland is calmer than anything I've ever been on before, perfectly calm and serene. All around us whales emerged, thrilling my daughter. However, the biggest hit of the entire tour, maybe the entire trip, was a member of the crew who handed each of the kids a giant rock of glacier ice to eat. I had to pull my daughter away to observe the natural beauty as she ate glacier ice like it was ice cream. "LOOK AT MY ICE" she was yelling as they slipped and slid around the deck of this boat. So if you've ever wondered "what is a glacier", let me tell you. Greenland has a lot of ice and it pushes out from the land it covers into the sea. When that happens, a lot of it breaks off. This sounds more exciting than it is. On TV in 4K it looks incredible, giant mountains of ice falling into the ocean. Honestly you can go read the same thing I did here. However, that doesn't happen very often. So in order for us tourists to be able to see anything, we had to go to a very productive glacier. This means there are constantly small chunks breaking off and falling into the sea. Practically though, it kinda looks like you are a boat in a slushee. It's beautiful and something to see, but also depressing to see along the rock face how much more ice there used to be. Back in town, we hopped on the "bus". Now the bus here is clearly a retrofitted party van, complete with blue LED lights. The payment system is zip tied to a desk chair that is, itself, wedged in the front. However, the bus works well and does get you around. The confusing part is that you will, once again, sometimes encounter a lot of traffic. People are driving pretty quickly and really seem to have somewhere to go. You also see a lot of fancy cars parked outside of houses here. Which begs a pretty basic question: if there was almost nowhere to drive to in Nuuk, where in the hell are these people driving? The distance between the end of the road and the beginning of the road is less than 6 km. Also, the process to make a road here is beyond anything you've ever seen. Everything requires a giant pile of explosives. Where did these vehicles even come from? Why does one ship a BMW to a place accessible only by plane and boat? More importantly, where was everyone going with such determination? It was like watching a very expensive version of bumper cars, except everyone was committed to the illusion that they had somewhere important to be. Everyone had dings and scrapes like crashes were common. Anyway, as I dodged speeding cars filled with people heading nowhere, I decided to hop off the bus and head to the grocery store. Inside was less a store and more the idea of a store. There was a lot of alcohol, chips, candy and shelf-stable foods, which all makes sense to me. What was strange was there wasn't a lot else, including meat. Locals couldn't be eating at the local restaurants, where the prices were as high as Berlin or Copenhagen. So what were they eating? When I asked one of my bus drivers, he told me that it was pretty unusual to buy meat. They purchased a lot of whale and seal meat. I had sorta heard this before, but when we stopped the bus he pointed out a group of men hauling guns out into a small boat to go shoot seals.
The guns were held together with a surprising amount of duct tape, which is not something I associate with the wild. I had assumed, based on my casual reading of the news, that we were mostly done killing whales. As it turns out, I was wrong. They eat a lot of whale and it is, in fact, not hard to find. If you are curious, whale does not taste fishy. It tastes a little bit like if you cooked reindeer in a pot of seaweed. I wouldn't go out of my way for it, but it's not terrible. The argument I've always heard for why people still kill whales is because it's part of their culture and also because it's an important source of protein. When I heard the phrase "part of their culture", I always imagined something like traditional boats going out with spears. What I didn't imagine was industrial fishing boats and an industrial crane that lifts the dead whale out of the water for "processing". Some of the illusion is broken when your boat tour guide points out the metal warehouse with the word "whale" on the side. "Yeah, the water here was red with blood for a week," the guide said, counting the cigarettes left in the pack he had. It's a wild place unlike anywhere I've ever been. It is the closest I have ever felt to living a sci-fi type experience. The people of Greenland are amazing, tough, calm and kind. I have nothing but positive experiences to recount from the many people I met there, Danish and Greenlandic, who patiently sat through my millions of questions. However it is, by far, the place least hospitable to human life I've ever been to. The folks who live there have adapted to the situation in, frankly, genius ways. If that's your idea of a good time, Greenland is perfect for you. Maybe don't get emotionally attached to the sled dogs though. Or the whales.

matduggan.com 4 months ago

FYI: Broadcom is ruining Bitnami containers

For a long time Bitnami containers and Helm charts have been widely considered the easiest and fastest way to get reliable, latest versions of popular applications built following container best practices. They also have some of the better docs on the internet for figuring out how to configure all this stuff. However Broadcom, in their infinite capacity for short-term gain over long-term relationships, has decided to bring that to a close. On July 16th they informed their users that the platform was changing. Originally they were going to break a ton of workflows with only 43 days of warning, but have expanded that out to a generous 75 days. It's impossible to read these timelines as anything other than Broadcom knowing that enterprise customers won't be able to switch off in 43 or 75 days and using that to extort people into paying the rumored $50,000 a year to keep using the images. You can read the entire announcement here: https://github.com/bitnami/containers/issues/83267 Here is my summary though:

TL;DR: Bitnami is significantly reducing their free container image offerings and moving most existing images to a legacy repository with no future updates.

Free Community Tier (Severely Limited):
- Only a small subset of hardened images will remain free
- Available only with "latest" tags (no version pinning)
- Intended for development use only
- Find the limited selection at: https://hub.docker.com/u/bitnamisecure

Your Existing Images:
- All current Bitnami images (including versioned tags) move to a legacy repository
- No updates, patches, or support for legacy images
- Use the legacy repo only as a temporary migration solution

Production Users:
- Need to subscribe to "Bitnami Secure Images" for continued support
- Includes security patches, LTS branches, and the full version catalog

Before September 29th:
- Audit your deployments - check which Bitnami images you're using
- Update CI/CD pipelines - remove dependencies on deprecated images
- Choose your path:
  - Development only: migrate to the limited free tier (latest tags only)
  - Production: subscribe to Bitnami Secure Images or find alternatives
  - Temporary fix: update image references to the legacy repository (not recommended long-term)

Helm Charts:
- Source code remains open source on GitHub
- Existing OCI charts won't receive updates
- Charts will fail unless you override image repositories

If you're using Bitnami for anything beyond basic development with latest tags, you'll need to either pay for Bitnami Secure Images or migrate to alternative container images before September 29th.
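If you end up on the "override image repositories" path, most Bitnami charts expose the image location through values. A hedged sketch of what that override usually looks like (key names follow the common Bitnami convention, but verify against the specific chart's values.yaml; the registry and repository below are placeholders, not real locations):

```yaml
# values.yaml override (sketch) -- key names follow the usual Bitnami image convention,
# but check the chart you actually use; registry/repository here are placeholders.
image:
  registry: registry.example.com   # a mirror you control
  repository: myorg/postgresql     # wherever you've copied or rebuilt the image
  tag: "16.4.0"                    # pin explicitly, since "latest" is all the free tier offers
```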

matduggan.com 6 months ago

What Does a Post-Google Internet Look Like

With the rise of the internet came the need to find information more quickly. The concept of search engines came into this space to fill this need, with a relatively basic initial design. This is the basis of the giant megacorp Google, whose claim to fame was that they made the best one of these. Into this stack they injected ads, both inside the sites themselves and by turning the search results themselves into ads. As time went on, what we understood to be "Google search" was actually a pretty sophisticated machine that effectively determined what websites lived or died. It was the only portal that niche websites had to get traffic. Google had the only userbase large enough for a website dedicated to retro gaming or VR headsets or whatever to get enough clicks to pay their bills. Despite the complexity, the basic premise remained. Google steers traffic towards your site, the user gets the answer from your site and then everyone is happy. Google showed some ads, you showed some ads, everyone showed everyone on Earth ads. This incredibly lucrative setup was not enough, however, to drive endless continuous growth, which is now the new expectation of all tech companies. It is not enough to be fabulously profitable, you must become Weyland-Yutani. So now Google is going to break this long-standing agreement with the internet and move everything we understand to be "internet search" inside their silo. In March 2024 Google moved to embed LLM answers in their search results ( source ). The AI Overview takes the first 100 results from your search query, combines their answers and then returns what it thinks is the best answer. As expected, websites across the internet saw a drop in traffic from Google. You started to see a flood of smaller websites launch panic membership programs, sell off their sites, etc. It became clear that Google has decided to abandon the previous concept of how internet search worked, likely in the face of what it considers to be an existential threat from OpenAI. Maybe the plan was always to bring the entire search process in-house, maybe not, but OpenAI and its rise to fame seems to have forced Google's hand in this space. This is not a new thing, Google has been moving in this direction for years. It was a trend people noticed going back to 2019. It appears the future of Google Search is going to be a closed loop that looks like the following:

- Google's LLM takes the information from the results it has already ingested to respond to most questions.
- Companies will at some point pay for their product or service to be "the answer" in different categories. Maybe this gets disclosed, maybe not, maybe there's just a little i in the corner that says "these answers may be influenced by marketing partners" or something.
- Google will attempt to reassure strategic partners that they aren't going to kill them, while at the same time turning to their relationship with Reddit to supply their "new data".

This is all backed up by data from outside the Google ecosystem confirming that the ratio of scrapes to clicks is going up. Basically it's costing more for these services to make their content available to LLMs and they're getting less traffic from them. This new global strategy makes sense, especially in the context of the frequent Google layoffs. Previously it made strategic sense to hold onto all the talent they could, now it doesn't matter because the gates are closing. Even if you had all the ex-Google engineers money could buy, you can't make a better search engine because the concept is obsolete. Google has taken everything it needs from the internet; it no longer requires the cooperation or goodwill of the people who produce that content. So the source of traffic for the internet is going to go away. My guess is there will be some effort to prevent this, some sort of alternative Google search either embraced or pushed by people. This is going to fail, because Google is an unregulated monopoly.
Effectively because the US government is so bad at regulating companies and so corrupt with legalized bribery in the form of lobbying, you couldn't stop Google at this point even if you wanted to. While the US Department of Justice has finally decided to do something, it's almost too late to make a difference. https://www.justice.gov/opa/pr/department-justice-prevails-landmark-antitrust-case-against-google

Even if you wanted to and had a lot of money to throw at the problem, it's too late. If Apple made their own search engine and pointed iOS to it as the default and paid Firefox to make it the default, it still wouldn't matter. The AI Overview is a good enough answer for most questions, and so convincing consumers to switch platforms and go back to a two/three/four-step process compared to a one-step process is a waste of time.

I'm confident there will still be sites doing web searching, but I suspect given the explosion in AI-generated slop it's going to be impossible to use them even if you wanted to. We're quickly reaching a point where it would be possible to generate a web page on demand, meaning the capacity of slop generation exceeds the capacity of humans to fight it. Because we didn't regulate the internet, we're going to end up with an unbreakable monopoly on all human knowledge held by Microsoft and Google. Then because we didn't learn anything, we're going to end up with a system that can produce false data on demand and make it impossible to fact-check anything that the LLM companies return. Paid services like Kagi will be the only search engines worth trying.

So I think you are going to see a rush of shutdowns and paywalls like you've never seen before. In some respects, it is going to be a return to the pre-Google internet, where it will once again be important that consumers know your domain name and go directly to your site. It's going to be a massive consolidation of the internet, and I think the ad-based economy of the modern web will collapse. Google was the ad broker, but now they're going to operate like Meta and keep the entire cycle inside their system.

My prediction is that this is going to basically destroy any small or medium sized business that attempts to survive with the model of "produce content, get paid per visitor through ads". Everything instead is going to get moved behind aggressive paywalls, blocking archive.org. You'll also see prices go way up for memberships. Access to raw, human-produced information is going to be a premium product, not something for everyday people. Fake information will be free.

Anyone attempting to make an online store is gonna get a mob-style shakedown. You can either pay Amazon to let consumers see your product, or you can pay Google to have their LLM recommend your product, or you can (eventually) pay OpenAI/Microsoft to do it. I also think these companies will use this opportunity to dramatically reprice their advertising offerings. I don't think it'll be cheap to get the AI Summary to recommend your frying pan.

I suspect there will be a brief spike in other forms of marketing spend, like podcasts, billboards, etc. When companies see the sticker shock from Google they're going to explore other avenues like social media spend, influencers, etc. But all those channels are going to be eaten by the LLM snake at the same time. If consumers are willing to engage with an LLM-generated influencer, that'll be the direction companies go in, because they'll be cheaper and more reliable.
Podcast search results are gonna be flooded with LLM-generated shows, and my guess is that they're going to take more of the market share than anyone wants to admit. Twitch streaming has already moved from seeing the person to seeing an anime-style virtual overlay where you don't see the person's face. There won't be a reason for an actual human to be involved in that process.

My prediction is that a lot of the places that employ technical people are going to disappear. FAANG isn't going to be hiring at anywhere near the same rate they were before, because they won't need to. You don't need 10,000 people maintaining relationships with ad sellers and ad buyers, or any of the staff involved in the maintenance or improvement of those systems.

The internet is going to return to more of its original roots, which are niche fan websites you largely find through social media or word of mouth. These sites aren't going to be ad driven, they'll be membership driven. Very few of them are going to survive. Subscription fatigue is a real thing and the math of "it costs a lot of money to pay people to write high quality content" isn't going to go away. In a relatively short period of time, it will go from "very difficult" to absolutely impossible to launch a new commercially viable website and have users organically discover that website. You'll have to block LLM scrapers and you'll need a tremendous amount of money to get a new site bootstrapped.

Welcome to the future, where asking a question costs $4.99 and you'll never be able to find out if the answer is right or not.

matduggan.com 6 months ago

What Would a Kubernetes 2.0 Look Like

Around 2012-2013 I started to hear a lot in the sysadmin community about a technology called "Borg". It was (apparently) some sort of Linux container system inside of Google that ran all of their stuff. The terminology was a bit baffling, with something called a "Borglet" inside of clusters with "cells", but the basics started to leak out. There was a concept of "services" and a concept of "jobs", where applications could use services to respond to user requests and then jobs to complete batch work that ran for much longer periods of time.

Then on June 7th, 2014, we got our first commit of Kubernetes. The Greek word for "helmsman" that absolutely no one could pronounce correctly for the first three years. (Is it koo-ber-NET-ees? koo-ber-NEET-ees? Just give up and call it k8s like the rest of us.) Microsoft, RedHat, IBM and Docker joined the Kubernetes community pretty quickly after this, which raised Kubernetes from an interesting Google thing to "maybe this is a real product?" On July 21st, 2015 we got the v1.0 release as well as the creation of the CNCF.

In the ten years since that initial commit, Kubernetes has become a large part of my professional life. I use it at home, at work, on side projects—anywhere it makes sense. It's a tool with a steep learning curve, but it's also a massive force multiplier. We no longer "manage infrastructure" at the server level; everything is declarative, scalable, recoverable and (if you're lucky) self-healing. But the journey hasn't been without problems. Some common trends have emerged, where mistakes or misconfiguration arise from places where Kubernetes isn't opinionated enough. Even ten years on, we're still seeing a lot of churn inside of the ecosystem and people stepping on well-documented landmines. So, knowing what we know now, what could we do differently to make this great tool even more applicable to more people and problems?

Let's start with the positive stuff. Why are we still talking about this platform now?

Containers at scale

Containers as a tool for software development make perfect sense. Ditch the confusion of individual laptop configuration and have one standard, disposable concept that works across the entire stack. While tools like Docker Compose allowed for some deployments of containers, they were clunky and still required you as the admin to manage a lot of the steps. I set up a Compose stack with a deployment script that would remove the instance from the load balancer, pull the new containers, make sure they started and then re-add it to the LB, as did lots of folks. K8s allowed for this concept to scale out, meaning it was possible to take a container from your laptop and deploy an identical container across thousands of servers. This flexibility allowed organizations to revisit their entire design strategy, dropping monoliths and adopting more flexible (and often more complicated) micro-service designs.

Low-Maintenance

If you think of the history of Operations as a sort of "naming timeline from pets to cattle", we started with what I affectionately call the "Simpsons" era. Servers were bare metal boxes set up by teams, they often had one-off names that became slang inside of teams and everything was a snowflake. The longer a server ran, the more cruft it picked up until it became a scary operation to even reboot them, much less attempt to rebuild them. I call it the "Simpsons" era because among the jobs I was working at the time, naming them after Simpsons characters was surprisingly common.
Nothing fixed itself, everything was a manual operation. Then we transition into the "01 Era". Tools like Puppet and Ansible had become commonplace, servers are more disposable and you start to see things like bastion hosts and other access control systems become the norm. Servers aren't all facing the internet, they're behind a load balancer and we've dropped the cute names for stuff like "app01" or "vpn02". Organizations designed it so they could lose some of their servers some of the time. However, failures still weren't self-healing: someone still had to SSH in to see what broke, write up a fix in the tooling and then deploy it across the entire fleet. OS upgrades were still complicated affairs.

We're now in the "UUID Era". Servers exist to run containers; they are entirely disposable concepts. Nobody cares about how long a particular version of the OS is supported for, you just bake a new AMI and replace the entire machine. K8s wasn't the only technology enabling this, but it was the one that accelerated it. Now the idea of a bastion server with SSH keys where I go to the underlying server to fix problems is seen as more of a "break-glass" solution. Almost all solutions are "destroy that Node, let k8s reorganize things as needed, make a new Node". A lot of the Linux skills that were critical to my career are largely nice to have now, not need to have. You can be happy or sad about that, I certainly switch between the two emotions on a regular basis, but it's just the truth.

Running Jobs

The k8s jobs system isn't perfect, but it's so much better than the "snowflake cron01 box" that was an extremely common sight at jobs for years. Running on a cron schedule or running from a message queue, it was now possible to reliably put jobs into a queue, have them get run, have them restart if they didn't work and then move on with your life. Not only does this free up humans from a time-consuming and boring task, but it's also simply a more efficient use of resources. You are still spinning up a pod for every item in the queue, but your teams have a lot of flexibility inside of the "pod" concept for what they need to run and how they want to run it. This has really been a quality of life improvement for a lot of people, myself included, who just need to be able to easily background tasks and not think about them again.

Service Discoverability and Load Balancing

Hard-coded IP addresses that lived inside of applications as the template for where requests should be routed have been a curse following me around for years. If you were lucky, these dependencies weren't based on IP address but were actually DNS entries, and you could change the thing behind the DNS entry without coordinating a deployment of a million applications. K8s allowed for simple DNS names to call other services. It removed an entire category of errors and hassle and simplified the entire thing down. With the Service API you had a stable, long-lived IP and hostname that you could just point things towards and not think about any of the underlying concepts. You even have concepts like ExternalName that allow you to treat external services like they're in the cluster.

YAML was appealing because it wasn't JSON or XML, which is like saying your new car is great because it's neither a horse nor a unicycle. It demos nicer for k8s, looks nicer sitting in a repo and has the illusion of being a simple file format. In reality, YAML is just too much for what we're trying to do with k8s and it's not a safe enough format.
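Here's a quick taste of that unsafety before the complaints below, a minimal sketch using Python's PyYAML; the values are made up for illustration:

```python
import yaml  # PyYAML, which resolves YAML 1.1-style booleans

doc = """
country: NO      # meant as the string "NO" (Norway)
debug: off       # meant as the string "off"
version: 1.20    # meant as the string "1.20"
"""

print(yaml.safe_load(doc))
# {'country': False, 'debug': False, 'version': 1.2}
```

A typed configuration language would force you to say whether you meant a string, a boolean or a number, and would fail loudly on a mismatch instead of silently coercing.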
Indentation is error-prone, the files don't scale well (you really don't want a super long YAML file), and debugging can be annoying. YAML has so many subtle behaviors outlined in its spec. I still remember not believing what I was seeing the first time I saw the Norway Problem. For those lucky enough to not have dealt with it, the Norway Problem in YAML is when "NO" gets interpreted as false. Imagine explaining to your Norwegian colleagues that their entire country evaluates to false in your configuration files. Add in accidental numbers from lack of quotes, and the list goes on and on. There are much better posts on why YAML is crazy than I'm capable of writing: https://ruudvanasseldonk.com/2023/01/11/the-yaml-document-from-hell

HCL is already the format for Terraform, so at least we'd only have to hate one configuration language instead of two. It's strongly typed with explicit types. There are already good validation mechanisms. It is specifically designed to do the job that we are asking YAML to do, and it's not much harder to read. It has built-in functions people are already using that would allow us to remove some of the third-party tooling from the YAML workflow. I would wager 30% of Kubernetes clusters today are already being managed with HCL via Terraform. We don't need the Terraform part to get a lot of the benefits of a superior configuration language. The only downsides are that HCL is slightly more verbose than YAML, and its Mozilla Public License 2.0 (MPL-2.0) would require careful legal review for integration into an Apache 2.0 project like Kubernetes. However, for the quality-of-life improvements it offers, these are hurdles worth clearing.

Why HCL is better

Let's take a simple YAML file. Even in the most basic example, there are footguns everywhere. HCL and the type system would catch all of these problems. Take a YAML file like the ones you probably have 6,000 of in your k8s repo, then look at HCL without needing external tooling. Here are all the pros you get with this move:

- Type Safety: Preventing type-related errors before deployment
- Variables and References: Reducing duplication and improving maintainability
- Functions and Expressions: Enabling dynamic configuration generation
- Conditional Logic: Supporting environment-specific configurations
- Loops and Iteration: Simplifying repetitive configurations
- Better Comments: Improving documentation and readability
- Error Handling: Making errors easier to identify and fix
- Modularity: Enabling reuse of configuration components
- Validation: Preventing invalid configurations
- Data Transformations: Supporting complex data manipulations

I know, I'm the 10,000th person to write this. Etcd has done a fine job, but it's a little crazy that it is the only tool for the job. For smaller clusters or smaller hardware configurations, it's a large use of resources in a cluster type where you will never hit the node count where it pays off. It's also a strange relationship between k8s and etcd now, where k8s is basically the only etcd customer left. What I'm suggesting is taking the work of kine and making it official. It makes sense for the long-term health of the project to have the ability to plug in more backends; adding this abstraction means it (should) be easier to swap in new/different backends in the future, and it also allows for more specific tuning depending on the hardware I'm putting out there. What I suspect this would end up looking like is much like this: https://github.com/canonical/k8s-dqlite . Distributed SQLite in-memory with Raft consensus and almost zero upgrade work required, which would allow cluster operators to have more flexibility with the persistence layer of their k8s installations. If you have a conventional server setup in a datacenter and etcd resource usage is not a problem, great! But this allows for lower-end k8s to be a nicer experience and (hopefully) reduces dependence on the etcd project.

Helm is a perfect example of a temporary hack that has grown to be a permanent dependency. I'm grateful to the maintainers of Helm for all of their hard work, growing what was originally a hackathon project into the de-facto way to install software into k8s clusters. It has done as good a job as something could in fulfilling that role without having a deeper integration into k8s. All that said, Helm is a nightmare to use. The Go templates are tricky to debug, often containing complex logic that results in really confusing error scenarios. The error messages you get from those scenarios are often gibberish. Helm isn't a very good package system because it fails at some of the basic tasks you need a package system to do, which are transitive dependencies and resolving conflicts between dependencies.

What do I mean? Tell me what this conditional logic is trying to do:

Or if I provide multiple values files to my chart, which one wins:

Ok, what if I want to manage my application and all the application dependencies with a Helm chart? This makes sense: I have an application that itself has dependencies on other stuff, so I want to put them all together. So I define my sub-charts or umbrella charts inside of my Chart.yaml. But assuming I have multiple applications, it's entirely possible that I have two services both with a dependency on nginx or whatever like this:

Helm doesn't handle this situation gracefully, because template names are global and the templates are loaded alphabetically. Basically you need to:

- Not declare a dependency on the same chart more than once (hard to do for a lot of microservices)
- If you do have the same chart declared multiple times, use the exact same version everywhere

The list of issues goes on and on:

- Cross-namespace installation stinks
- The chart verification process is a pain and nobody uses it
- No metadata in chart searching. You can only search by name and description, not by features, capabilities, or other metadata.
- Helm doesn't strictly enforce semantic versioning
- If you uninstall and reinstall a chart with CRDs, it might delete resources created by those CRDs. This one has screwed me multiple times and is crazy unsafe.

Let's just go to the front page of artifacthub: I'll grab elasticsearch cause that seems important. Seems pretty bad for the Official Elastic helm chart. Surely it will be right, it's an absolutely critical dependency for the entire industry. Nope. Also, how is the maintainer of the chart "Kubernetes" and it's still not marked as verified? Like Christ, how much more verified does it get. I could keep writing for another 5,000 words and still wouldn't have outlined all the problems. There isn't a way to make Helm good enough for the task of "package manager for all the critical infrastructure on the planet".

Let's call our hypothetical package system KubePkg, because if there's one thing the Kubernetes ecosystem needs, it's another abbreviated name with a "K" in it. We would try to copy as much of the existing work inside the Linux ecosystem as possible while taking advantage of the CRD power of k8s. My idea looks something like this: the packages are bundles like a Linux package. There's a definition file that accounts for as many of the real scenarios as possible that you actually encounter when installing a thing. There's a real signing process that would be required and that would allow you more control over the process. Like, how great would it be to have something where I could automatically update packages without needing to do anything on my side? What k8s needs is a system that meets the following requirements:

- True Kubernetes Native: Everything is a Kubernetes resource with proper status and events
- First-Class State Management: Built-in support for stateful applications
- Enhanced Security: Robust signing, verification, and security scanning
- Declarative Configuration: No templates, just structured configuration with schemas
- Lifecycle Management: Comprehensive lifecycle hooks and upgrade strategies
- Dependency Resolution: Linux-like dependency management with semantic versioning
- Audit Trail: Complete history of changes with who, what, and when, not what Helm currently provides
- Policy Enforcement: Support for organizational policies and compliance
- Simplified User Experience: Familiar Linux-like package management commands

It seems wild that we're trying to go a different direction from the package systems that have worked for decades.

Try to imagine, across the entire globe, how much time and energy has been invested in trying to solve any one of the following three problems:

- I need this pod in this cluster to talk to that pod in that cluster.
- There is a problem happening somewhere in the NAT traversal process and I need to solve it.
- I have run out of IP addresses in my cluster because I didn't account for how many it uses. Remember: a company starting with a /20 subnet (4,096 addresses) deploys 40 nodes with 30 pods each and suddenly realizes they're approaching their IP limit. Not that many nodes!

I am not suggesting the entire internet switches over to IPv6, and right now k8s happily supports IPv6-only if you want, as well as a dual-stack approach. But I'm saying now is the time to flip the default and just go IPv6. You eliminate a huge collection of problems all at once:

- Flatter, less complicated network topology inside of the cluster.
- The distinction between multiple clusters becomes something organizations can choose to ignore if they want to get public IPs.
- Easier to understand exactly the flow of traffic inside of your stack.
- Built-in IPSec.

It has nothing to do with driving IPv6 adoption across the entire globe; it's just an acknowledgement that we no longer live in a world where you have to accept the weird limitations of IPv4 in a universe where you may need 10,000 IPs suddenly with very little warning. The benefits for organizations with public IPv6 addresses are pretty obvious, but there's enough value there for cloud providers and users that even the corporate overlords might get behind it. AWS never needs to try and scrounge up more private IPv4 space inside of a VPC. That's gotta be worth something.

The common rebuttal to these ideas is, "Kubernetes is an open platform, so the community can build these solutions." While true, this argument misses a crucial point: defaults are the most powerful force in technology. The "happy path" defined by the core project dictates how 90% of users will interact with it. If the system defaults to expecting signed packages and provides a robust, native way to manage them, that is what the ecosystem will adopt.

This is an ambitious list, I know. But if we're going to dream, let's dream big. After all, we're the industry that thought naming a technology "Kubernetes" would catch on, and somehow it did! We see this all the time in other areas like mobile development and web development, where platforms assess their situation and make radical jumps forward. Not all of these are necessarily projects that the maintainers or companies would take on, but I think they're all ideas that someone should at least revisit and think, "is it worth doing now that we're this nontrivial percentage of all datacenter operations on the planet?"

Questions/feedback/got something wrong? Find me here: https://c.im/@matdevdug

matduggan.com 7 months ago

Simple Python Script for FTP Uploads

So a while ago I purchased a TP-Link AX3000 wireless router as a temporary same-day fix to a dying AP. Of course, like all temporary fixes, this one ended up being super permanent. It's a fine wireless router, nothing interesting to report, but one of the features I stumbled upon when I was clicking around the web UI seemed like a great solution for a place to stick random files. Inside of Advanced Settings there's a pane for the attached USB drive, with a few options for how to expose it. I actually didn't find the SMB option to work that well in my testing, seemingly disconnecting all the time. But FTP works pretty well. So that's what I ended up using, which was fine except that seemingly randomly files were getting corrupted when I moved them over.

Looking at the failing files, I realized they were all over 4 GB and thought "there's no way in 2025 they are formatting this external drive in FAT32, right?" To be clear, I didn't partition this drive; the router offered to wipe it when I plugged it in and I said sure. However, that is exactly what they are doing, which means we have a file size limit of 4 GB per file. This explained the transfer problems and, while still annoying, is not a complicated thing to work around.

Now I have an easy script that I can use to periodically offload directories full of files that I don't know if I want to delete yet, but don't want on my laptop. A few notes on it (see the sketch below):

- LOCAL_DIR will obviously need to get changed.
- FTP_HOST has a different IP range than the default router range because of specific stuff for me. You'll need to check that.
- FTP_PASS required an email address format. I don't know why.
- The directory of "G" was assigned to me by the router, so I assume this is a common convention with these routers. I don't know why it puts a directory inside of the drive instead of writing out to the root of the drive. Presumably some Windows convention.
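Here's a minimal sketch of what such a script can look like using Python's built-in ftplib. The variable names match the notes above, but the concrete values are placeholders, and FTP_USER is an assumption (your router may use its admin account or no username at all):

```python
import os
from ftplib import FTP

LOCAL_DIR = "/path/to/offload"      # directory of files to push to the router
FTP_HOST = "192.168.0.1"            # your router's IP, not necessarily the default
FTP_USER = "admin"                  # assumption: check what your router expects
FTP_PASS = "me@example.com"         # mine only accepted an email-address format
REMOTE_DIR = "G"                    # the folder the router created on the drive

FAT32_LIMIT = 4 * 1024 ** 3         # FAT32 tops out at 4 GB per file

ftp = FTP()
ftp.connect(FTP_HOST, 21)
ftp.login(FTP_USER, FTP_PASS)
ftp.cwd(REMOTE_DIR)

for name in sorted(os.listdir(LOCAL_DIR)):
    path = os.path.join(LOCAL_DIR, name)
    if not os.path.isfile(path):
        continue
    if os.path.getsize(path) >= FAT32_LIMIT:
        print(f"Skipping {name}: over the 4 GB FAT32 limit")
        continue
    with open(path, "rb") as f:
        ftp.storbinary(f"STOR {name}", f)
        print(f"Uploaded {name}")

ftp.quit()
```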

matduggan.com 7 months ago

Write your own Ghost Theme

In general, Ghost CMS has been a good tool for me. I've been pleased by the speed and reliability of the platform, with the few problems I have run into being fixed by the Ghost team pretty quickly. From the very beginning though, I've struggled with the basic approach of the Ghost platform. At its core, the Ghost CMS tool is a newsletter platform. This makes sense, it's how small content creators actually generate revenue. But I don't need any of that functionality, as I don't want to capture a bunch of users' email addresses. I'm lucky enough to not need the $10 a month it costs to host this website on my own and I'd rather not have to think about who I would need to notify if my database got breached. But it means that most of the themes for Ghost are completely cluttered with junk I don't need.

I started working on my own CMS, but other than the more simplistic layout, I couldn't think of anything my CMS did that was better than Ghost or WordPress. There was less code, but it was code I was going to have to maintain. After going through the source for a bunch of Ghost themes, I realized I could probably get where I wanted to go through the theme work alone. I didn't find a ton of resources on how to actually crank out a theme, so I figured I would write up the base outline I sketched out as I worked.

So Ghost uses the Handlebars library to make templates. Here's the basic layout, and what each piece does:

- package.json (required): The theme's "ID card." This JSON file contains metadata like the theme's name, version, author, and crucial configuration settings such as the number of posts per page.
- default.hbs (optional but probably required): The main base template. Think of it as the master "frame" for your site. It typically contains the top-level HTML tags, your site-wide header and footer, and the crucial {{ghost_head}} and {{ghost_foot}} helpers. All other templates are injected into the {{{body}}} tag of this file.
- index.hbs (required): The main template for listing your posts. It's used for your homepage by default and will also be used for tag and author archives if tag.hbs and author.hbs don't exist. It uses the {{#foreach posts}} helper to loop through and display your articles.
- post.hbs (required): The template for a single post. When a visitor clicks on a post title from your index.hbs page, Ghost renders the content using this file. It uses the {{#post}} block helper to access all the post's data (title, content, feature image, etc.).
- /partials/ (directory): This folder holds reusable snippets of template code, known as partials. It's perfect for elements that appear on multiple pages, like your site header, footer, sidebar, or a newsletter sign-up form. You include them in other files using {{> filename}}.
- /assets/ (directory): This is where you store all your static assets. It's organized into sub-folders for your CSS stylesheets, JavaScript files, fonts, and images used in the theme's design. You link to these assets using the {{asset}} helper (e.g., {{asset "css/screen.css"}}).
- page.hbs (optional): A template specifically for static pages (like an "About" or "Contact" page). If this file doesn't exist, Ghost will use post.hbs to render static pages instead.
- tag.hbs (optional): A dedicated template for tag archive pages. When a user clicks on a tag, this template will be used to list all posts with that tag. If it's not present, Ghost falls back to index.hbs.
- author.hbs (optional): A dedicated template for author archive pages. This lists all posts by a specific author. If it's not present, Ghost falls back to index.hbs.

Ghost uses a logical hierarchy to decide which template to render for a given URL. This allows you to create specific designs for different parts of your site while having sensible defaults:

- The Request: A visitor goes to a URL on your site (e.g., your homepage, a post, or a tag archive).
- Context is Key: Ghost determines the "context" of the URL. Is it the homepage? A single post? A list of posts by an author?
- Find the Template: Ghost looks for the most specific template file for that context. Visiting a tag archive? Ghost looks for tag.hbs. If it doesn't find it, it uses index.hbs. Visiting a static page like an "About" page? Ghost looks for page.hbs. If it's not there, it uses post.hbs.
- Inject into the Frame: Once the correct template is found, Ghost renders it and injects the resulting HTML into the {{{body}}} helper inside your default.hbs file.

This system provides a clean separation of concerns, making your theme easy to manage and update. You can start with just the three required files (package.json, index.hbs, post.hbs) and add more specific templates as your design requires them.

You are more than welcome to use this theme as a starting point. The only part that was complex was the "Share with Mastodon" button that you see, which frankly I'm still not thrilled with. I wish there was a less annoying way to do it than prompting the user for their server, but I can't think of anything.

Ghost actually has an amazing checking tool for seeing if your theme will work, available here: https://gscan.ghost.org/ . It tells you all the problems and missing pieces in your theme and really helped me iterate quickly on the design. Just zip up the theme, upload it, and you'll get back a nicely formatted list of problems.

Anyway, I found the process of writing my own theme to be surprisingly fun. Hopefully folks like how it looks, but if you hate it I'm still curious to hear why.

matduggan.com 8 months ago

TIL Simple Merge of two CSVs with Python

I generate a lot of CSVs for my jobs, mostly as a temporary storage mechanism for data. So I make report A about this thing, I make report B for that thing, and then I produce some sort of consumable report for the organization at large. Part of this is merging the CSVs so I don't need to overload each script to do all the pieces. For a long time I've done this in Excel/LibreOffice, which totally works. But I recently sat down with the library and I had no idea how easy it is to use for this particular use case. Turns out this is a pretty idiot-proof way to do the same thing without needing to deal with the nightmare that is Excel. Just a super easy to hook up script that has saved me a ton of time from having to muck around with Excel. A few notes (see the sketch below):

- Make sure Python is installed.
- Change the first file name to the first file you want to consider. Same with the second.
- The most important thing to consider here: I only want a row in the output if the value in the join column is in BOTH files. If you want all the output from the first file and then enrich it with values from the second where they're present, change the merge type instead.
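Here's a minimal sketch of the idea, assuming the library in question is pandas; the file names and the join column "id" are placeholders since the original script isn't reproduced here:

```python
import pandas as pd

report_a = pd.read_csv("report_a.csv")   # placeholder file names
report_b = pd.read_csv("report_b.csv")

# how="inner" keeps a row only when the "id" value appears in BOTH files
merged = report_a.merge(report_b, on="id", how="inner")

# Swap to how="left" to keep everything from report_a and only enrich it
# with report_b's columns where a matching "id" exists
merged.to_csv("merged.csv", index=False)
```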

matduggan.com 8 months ago

GitHub Copilot for Vim Review

The impact of Large Language Models (LLMs) on the field of software development is arguably one of the most debated topics in developer circles today, sparking discussions at meetups, in lunchrooms, and even during casual chats among friends. I won't attempt to settle that debate definitively in this post, largely because I lack the foresight required. My track record for predicting the long-term success or failure of new technologies is, frankly, about as accurate as a coin flip. In fact, if I personally dislike a technology, it seems destined to become an industry standard. However, I do believe I'm well-positioned to weigh in on a much more specific question: Is GitHub Copilot beneficial for me within my primary work environment, Vim?

I've used Vim extensively as my main development tool for well over a decade, spending roughly 4-5 hours in it daily, depending on my meeting schedule. My work within Vim involves a variety of technologies, including significant amounts of Python, Golang, Terraform, and YAML. Therefore, while I can't provide a universal answer to whether an LLM is right for you, I can offer concrete opinions based on my direct experience with GitHub Copilot as a dedicated Vim user today. So just to prove I really set it up: it's a real test, I've been using it every day for this time period. I have it set up in what I believe to be the "default configuration". The Vim plugin I'm using is the official one located here: https://github.com/github/copilot.vim

The plugin uses Vimscript to capture the current state of the editor. That includes stuff like:

- The entire content of the current buffer (the file being edited).
- The current cursor position within the buffer.
- The file type or programming language of the current buffer.

The Node.js language server receives the request from the Vim/Neovim plugin. It processes the provided context and constructs a request to the actual GitHub Copilot API running on GitHub's servers. This request includes the code context and other relevant information needed by the Copilot AI model to generate suggestions. The plugin receives the suggestions from the language server. It then integrates these suggestions into the Vim or Neovim interface, typically displaying them as "ghost text" inline with the user's code or in a separate completion window, depending on the plugin's configuration and the editor's capabilities. As you can tell, the plugin is actually pretty performant and doesn't add notable time to my initial launch.

In terms of normal usage, it works like it says on the box. You start typing and it shows the next line it thinks you might be writing. The suggestions don't do much on their own. Basically the tool isn't smart enough to even keep track of what it has already suggested, so in this case I've just tab-completed and taken all the suggestions and you can tell it immediately gets stuck in a loop.

Now you can use it to "vibe code" inside of Vim. That works by writing a comment describing what you want to do and then just tab-accepting the whole block of code. So for example I wrote a comment asking for a function and it produced the following. I made a somewhat misleading comment on purpose: I was trying to get it to write a function to see if a JWT was actually a JWE. Now this Python code is (obviously) wrong. The code assumes the token will always have exactly three parts separated by dots. This is the structure of a standard JSON Web Token (JWT). However, a JSON Web Encryption (JWE), which is what a wrapped encrypted JWT is, has five parts:

- Protected Header
- Encrypted Key
- Initialization Vector
- Ciphertext
- Authentication Tag

So this gives you a rough idea of the quality of the code snippets it produces.
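For contrast, a correct version of that check is tiny. A minimal sketch (my code, not Copilot's output):

```python
def looks_like_jwe(token: str) -> bool:
    # Compact JWE serialization has five dot-separated parts;
    # a compact JWS/JWT has three.
    return len(token.strip().split(".")) == 5
```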
If you are writing something dead simple, the autogenerate will often work and can save you time. However, go even a little bit off the golden path and, while Copilot will always give it a shot, the quality is all over the place. Reviewing a product like this is extremely hard because it does everything all the time and changes daily with no notice. I've had weeks where it seems like the Copilot intelligence gets cranked way up and weeks where it's completely brain dead. However, I will go through some common tasks I have to do all the time and rank it on how well it does.

Parsing JSON is probably the thing Copilot is best at. You have a JSON that you are getting from some API and then Copilot helps you fill in the parsing for that so you don't need to type the whole thing out. Just by filling in my imports it already has a good idea of what I'm thinking about here. So in this example I write the comment with the example JSON object and then it fills in the rest. This code is....ok. However, I'd like it to probably check the json_data to see if it matches the expectation before it parses. Changing the comment, however, changes the code. This is very useful for me as someone who often needs to consume JSONs from source A and then send JSONs on to target B. Saves a lot of time and I think the quality looks totally acceptable to me. Some notes though:

- Python types greatly improve the quality of the suggestions.
- You need to check to make sure it doesn't truncate the list. Sometimes Copilot will "give up" like 80% of the way through writing out all the items. It doesn't often make up ones, which is nice, but you do need to make sure everything you expected to be there ends up getting listed.

I work a lot with databases, like everyone on Earth does. Copilot definitely understands the concepts of databases, but your experience can vary wildly depending on what you write and the mood it is in. I mean, this is sort of borderline making fun of me. Obviously I don't want to just check if a file with that name exists? This is better, but it's still not good. If there is a file sitting there with the right name that isn't a database, sqlite3.connect will just make it. That part is super shitty. Obviously that's not what I want to do. I probably want to at least log something? Let me show another example: I started a table definition, typed user_ID UUID, and let it fill in the rest. Not great. What it ended up making was even worse. We're missing error handling, there are no try/finally blocks with the connection cursor, etc. This is pretty shitty code. My experience is it doesn't get much better the more you use it. Some tips: if you write out the SQL in the comments, you will have a way better time. Just that alone seems to make it a lot happier. Still not amazing, but at least closer to correct.

Not much to report with Terraform. So why the 70/100? I've had a lot of frustrations with Copilot hallucinations with Terraform where it will simply insert arguments that don't exist. I can't reliably reproduce it, but this is something that can really burn a lot of time when you hit it. My advice with Terraform is to run something like terrascan afterwards, which will often catch weird stuff it inserts: https://github.com/tenable/terrascan Also make sure you give it the surrounding context; that seems to ground the LLM with the rest of the code and allows it to detect things like "what is the cloud account you are using". However, I will admit it saves me a lot of time, especially when writing stuff that is mind-numbing like 1000 DNS entries. So easily worth the risk on this one.

This is a good summary of my experience with Copilot and Golang. I don't know why, but it will work fine for a while and then at some point, roughly when the Golang file hits around 300-400 lines, it seems to just lose it. Maybe there's another plugin I have that's causing a problem with Copilot and Golang, maybe I'm holding it wrong, I have no idea. There's nothing in the logs I can find that would explain why it seems to break on Golang. I'm not going to file a bug report because I don't consider this my job to fix.

Is Copilot worth $10 a month?
I think that really depends on what your day looks like. If you are someone who:

- Writes microservices where the total LoC rarely exceeds 1,000 per microservice
- Spends a lot of time consuming and producing JSONs for other services to receive
- Is capable of checking SQL queries and confirming how they need to be fixed
- Has good or great test coverage

then I think this tool might be worth the money. However, if your day looks more like this:

- Spends most of the day inside of a monolith or large codebase, carefully adding new features or slightly modifying old features
- Doesn't have any (or good) test coverage
- Doesn't have a good database migration strategy

then I'd say stay far away from Copilot for Vim. It's going to end up causing you serious problems that are going to be hard to catch.

matduggan.com 10 months ago

Slack: The Art of Being Busy Without Getting Anything Done

My first formal IT helpdesk role was basically "resetting stuff". I would get a ticket, an email or a phone call and would take the troubleshooting as far as I could go. Reset the password, check the network connection, confirm the clock time was right, ensure the issue persisted past a reboot, check the logs and see if I could find the failure event, then I would package the entire thing up as a ticket and escalate it up the chain. It was effectively on-the-job training. We were all trying to get better at troubleshooting to get a shot at one of the coveted SysAdmin jobs. Moving up from broken laptops and desktops to broken servers was about as big as 22-year-old me dreamed.

Sometimes people would (rightfully) observe that they were spending a lot of time interacting with us, while the more senior IT people were working quietly behind us and could probably fix the issue immediately. We would explain that, while that was true, our time was less valuable than theirs. Our role was to eliminate all of the most common causes of failure and then give them the best possible information to take the issue and continue looking at it.

There are people who understand waiting in a line and there are people who make a career around skipping lines. These VIPs encountered this flow in their various engineering organizations and decided that a shorter line between their genius and the cogs making the product was actually the "secret sauce" they needed. Thus, Slack was born: a tool pitched to the rank and file as a nicer chat tool and to the leadership as an all-seeing eye that allowed them to plug directly into the nervous system of the business and get instant answers from the exact right person regardless of where they were or what they were doing.

At first Slack-style chat seemed great. Email was slow and the signal-to-noise ratio was off, while other chat systems I had used before at work either didn't preserve state, so whatever conversation happened while you were offline didn't get pushed to you, or they didn't scale up to large conversations well. Both XMPP and IRC had the same issue, which is that if you were there when the conversation was happening you had context, but otherwise there was no message history for you. There were attempts to resolve this ( https://xmpp.org/extensions/xep-0313.html ) but support among clients was all over the place. The clients just weren't very good and were constantly going through cycles of intense development only to be abandoned. It felt like when an old hippie would tell you about Woodstock. "You had to be there, man."

Slack brought channels, and channels brought a level of almost voyeurism into what other teams were doing. I knew exactly what everyone was doing all the time, down to knowing where the marketing team liked to go for lunch. Responsiveness became the new corporate religion and I was a true believer. I would stop walking in the hallway to respond to a DM or answer a question I knew the answer to, ignoring the sighs of frustration as people walked around my hoodie-clad roadblock of a body.

Sounds great, so what's the catch? Well, I first noticed it on the train. My daily commute home through the Chicago snowy twilight used to be a sacred ritual of mental decompression. A time to sift through the day's triumphs and (more often) the screw-ups. What needed fixing tomorrow? What problem had I pushed off maybe one day too long? But as I got further and further into Slack, I realized I was coming home utterly drained yet strangely...hollow.
I hadn't done any actual work that day. My days had become a never-ending performance of "work". I was constantly talking about the work, planning the work, discussing the requirements of the work, and then in a truly Sisyphean twist, linking new people to old conversations where we had already discussed the work to get them up to speed on our conversation. All the while diligently monitoring my channels, a digital sentry ensuring no question went unanswered, no emoji not +1'd. That was it, that was the entire job. Show up, spend eight hours orchestrating the idea of work, and then go home feeling like I'd tried to make a sandcastle on the beach and getting upset when the tide did what it always does. I wasn't making anything, I certainly wasn't helping our users or selling the product. I was project managing, but poorly, like a toddler with a spreadsheet. And for the senior engineers? Forget about it. Why bother formulating a coherent question for a team channel when you could just DM the poor bastard who wrote the damn code in the first place? Sure, they could push back occasionally, feigning busyness or pointing to some obscure corporate policy about proper channel etiquette. But let's be real. If the person asking was important enough (read: had a title that could sign off on their next project), they were answering. Immediately. So, you had your most productive people spending their days explaining why they weren't going to answer questions they already knew the answer to, unless they absolutely had to. It's the digital equivalent of stopping a concert pianist to teach you "Twinkle Twinkle Little Star" 6 times a day. And don't even get me started on the junior folks. Slack was actively robbing them of the chance to learn. Those small, less urgent issues? That's where the real education happens. You get to poke around in the systems, see how the gears grind, understand the delicate dance of interconnectedness. But why bother troubleshooting when Jessica, the architect of the entire damn stack, could just drop the answer into a DM in 30 seconds? People quickly figured out the pecking order. Why wait four hours for a potentially wrong answer when the Oracle of Code was just a direct message away? You think you are too good to answer questions??? Au contraire! I genuinely enjoy feeling connected to the organizational pulse. I like helping people. But that, my friends, is the digital guillotine. The nice guys (and gals) finish last in this notification-driven dystopia. The jerks? They thrive. They simply ignore the incoming tide of questions, their digital silence mistaken for deep focus. And guess what? People eventually figure out who will respond and only bother those poor souls. Humans are remarkably adept at finding the path of least resistance, even if it leads directly to someone else's burnout. Then comes review time. The jerk, bless his oblivious heart, has been cranking out code, uninterrupted by the incessant digital demands. He has tangible projects to point to, gleaming monuments to his uninterrupted focus. The nice person, the one everyone loves, the one who spent half their day answering everyone else's questions? Their accomplishments are harder to quantify. "Well, they were really helpful in Slack..." doesn't quite have the same ring as "Shipped the entire new authentication system." It's the same problem with being the amazing pull request reviewer. Your team appreciates you, your code quality goes up, you’re contributing meaningfully. 
But how do you put a number on "prevented three critical bugs from going into production"? You can't. So, you get a pat on the back and maybe a gift certificate to a mediocre pizza place. Time marches on, and suddenly, email is the digital equivalent of that dusty corner in your attic where you throw things you don't know what to do with. It's a wasteland of automated notifications from systems nobody cares about. But Slack? There’s no rhyme or reason to it. Can I message you after hours with the implicit understanding you'll ignore it until morning? Should I schedule the message for later, like some passive-aggressive digital time bomb? And the threads! Oh, the glorious, nested chaos of threads. Should I respond in a thread to keep the main channel clean? Or should I keep it top-level so that if there's a misunderstanding, the whole damn team can pile on and offer their unsolicited opinions? What about DMs? Is there a secret protocol there? Or is it just a free-for-all of late-night "u up?" style queries about production outages? It felt like every meeting had a pre-meeting in Slack to discuss the agenda, followed by an actual meeting on some other platform to rehash the same points, and then a post-meeting discussion in a private channel to dissect the meeting itself. And inevitably, someone who missed the memo would then ask about the meeting in the public channel, triggering a meta-post-meeting discussion about the pre-meeting, the meeting, and the initial post-meeting discussion. The only way I could actually get any work done was to actively ignore messages. But then, of course, I was completely out of the loop. The expectation became this impossible ideal of perfect knowledge, of being constantly aware of every initiative across the entire company. It was like trying to play a gameshow and write a paper at the same time. To be seen as "on it", I needed to hit the buzzer and answer the question, but come review time none of those points mattered and the scoring was made up. I was constantly forced to choose: stay informed or actually do something. If I chose the latter, I risked building the wrong thing or working with outdated information because some crucial decision had been made in a Slack channel I hadn't dared to open for fear of being sucked into the notification vortex. It started to feel like those brief moments when you come up for air after being underwater for too long. I'd go dark on Slack for a few weeks, actually accomplish something, and then spend the next week frantically trying to catch up on the digital deluge I'd missed. One of the hardest lessons for anyone to learn is the profound value of human attention. Slack is a fantastic tool for those who organize and monitor work. It lets you bypass the pesky hierarchy, see who's online, and ensure your urgent request doesn't languish in some digital abyss. As an executive, you can even cut out middle management and go straight to the poor souls actually doing the work. It's digital micromanagement on steroids. But if you're leading a team that's supposed to be building something, I'd argue that Slack and its ilk are a complete and utter disaster. Your team's precious cognitive resources are constantly being bled dry by a relentless stream of random distractions from every corner of the company. There are no real controls over who can interrupt you or how often. It's the digital equivalent of having your office door ripped off its hinges and replaced with glass like a zoo. 
Visitors can come and peer in on what your team is up to. Turns out, the lack of history in tools like XMPP and IRC wasn't a bug, it was a feature. If something important needed to be preserved, you had to consciously move it to a more permanent medium. These tools facilitated casual conversation without fostering the expectation of constant, searchable digital omniscience. Go look at the Slack for any large open-source project. It's pure, unadulterated noise. A cacophony of voices shouting into the void. Developers are forced to tune out, otherwise it's all they'd do all day. Users have a terrible experience because it's just a random stream of consciousness, people asking questions to other people who are also just asking questions. It's like replacing a structured technical support system with a giant conference call where everyone is on hold and told to figure it out amongst themselves. So, what do I even want here? I know, I know, it's a fool's errand. We're all drowning in Slack clones now. You can't stop this productivity-killing juggernaut. It's like trying to un-ring a bell, or perhaps more accurately, trying to silence a thousand incessantly pinging notifications. But I disagree. I still think it's not too late to have a serious conversation about how many hours a day it's actually useful for someone to spend on Slack. What do you, as a team, even want out of a chat client? For many teams, especially smaller ones, it makes far more sense to focus your efforts where there's a real payoff. Pick one tool, one central place for conversations, and then just…turn off the rest. Everyone will be happier, even if the tool you pick has limitations, because humans actually thrive within reasonable constraints. Unlimited choice, as it turns out, is just another form of digital torture. Try to get away with the most basic, barebones thing you can for as long as you can. I knew a (surprisingly productive) team that did most of their conversation on an honest-to-god phpBB internal forum. Another just lived and died in GitHub with Issues. Just because it's a tool a lot of people talk about doesn't make it a good tool and just because it's old, doesn't make it useless. As for me? I'll be here, with my Slack and Teams and Discord open trying to see if anything has happened in any of the places I'm responsible for seeing if something has happened. I will consume gigs of RAM on what, even ten years ago, would have been an impossibly powerful computer to watch basically random forum posts stream in live.

matduggan.com 10 months ago

How to get a DID on iOS easily

So one of the more annoying things about moving from the US to Europe is how much of the American communication infrastructure is still built around the idea that you have a US phone number to receive text messages from. While some (a vanishingly small) percentage of them allow me to add actual 2FA and bypass the insane phone number requirement, it's a constant problem to need to get these text messages. There are services like Google Voice, but they're impossible to set up abroad. So you need to already have it set up when you land. Then if you forgot to use it, you'll lose the number and start the entire nightmare over again. Also, increasingly, services won't let you add a Google Voice number to get these text messages, I assume because of fraud. I finally got tired of it and did what I should have done when I first moved here, which is just buy a DID number.

So I'm going to use voip.ms for this because it seems to work the best of the ones I tried.

1. Make an account at https://voip.ms/ and log in.
2. Add funds. I added $100 so I don't lose the number for a long time.
3. Buy a number. Select the Per Minute Plan unless you plan on using it for a lot of phone calls, and select the International DID POP.
4. Critical step: configure the DID. This enables forwarding text messages as emails, which I assume you want, but feel free to turn it off. MAKE SURE YOU SET THE CALLERID NUMBER, otherwise nothing else will work. Also set a password.
5. Critical step: in Account settings, get your account ID.
6. Get Acrobits Softphone: https://apps.apple.com/dk/app/acrobits-softphone/id314192799?l=en
7. Set up the account as follows: the password is the account password we set before, and the username is the 6-digit number from the account information screen.

Ta-dah! We're all done. Make sure you can call a test number: 1-202-762-1401 returns the time. Text messages should now be coming in super easily and it will cost you less than $1 a month to keep this phone number forever.

Questions/comments/concerns: https://c.im/@matdevdug

matduggan.com 10 months ago

Help Me Help You, Maintainers

"at one point i questioned my desire to help people get into open source" (image unrelated)

Anybody who has worked in a tech stack of nearly any complexity beyond Hello World is aware of the problems with the current state of the open-source world. Open-source projects, created by individuals or small teams to satisfy a specific desire they have or problem they want to solve, are adopted en masse by large organizations whose primary interest in consuming them is saving time and/or money. These organizations rarely contribute back to these projects, creating a chain of critical dependencies that are maintained inconsistently. Similar to if your general contractor got cement from a guy whose hobby was mixing cement, the results are (understandably) all over the place. Sometimes the maintainer does a great job for a while, then gets bored or burned out and leaves. Sometimes the project becomes important enough that a vanishingly small percentage of the profit generated by the project is redirected back towards it and a person can eke out a meager existence keeping everything working. Often they're left in a sort of limbo state, being pushed forward by one or two people while the community exists in a primarily consumption role. Whatever stuff these two want to add or PRs they want to merge is what gets pushed in.

In the greater tech community, we have a lot of conversations about how we can help maintainers. Since a lot of the OSS community trends towards libertarian, the vibe is more "how can we encourage more voluntary non-mandated assistance towards these independent free agents for whom we bear no responsibility and who have no responsibility towards us". These conversations go nowhere because the idea of a widespread, equal distribution of resources based on value without an enforcement mechanism is a pipe dream. The basic diagram looks like this: [diagram omitted].

So we know all this. But as someone who uses a lot of OSS and (tries) to provide meaningful feedback and refinements back to the stuff I use, I'd like to talk about a different problem: how hard it is to render assistance to maintainers. Despite endless hours of people talking about how we should "help maintainers more", it's never been less clear what that actually means. I, as a person, have a finite amount of time on this Earth. I want to help you, but I need the process of helping you to make some sort of sense. It also has to show some sort of consideration for my time and effort. So I'd like to propose a few things I've run into over the last few years that I'd love maintainers to do, just to help me be of service to you.

I don't want this to read like "I, an entitled brat, believe that maintainers owe me". You provide an amazing service and I want to help. But part of helping is that I need to understand what it is you would like me to do. Because the open-source community doesn't adopt any sort of consistent cross-project set of guidelines (see weird libertarian bent), it is up to each project to tell me how they'd like me to assist them. But I don't want to waste a lot of time waiting for a perfect centralized solution to this problem to manifest. It's your project, you are welcome to do with it whatever you want (including destroy it), but if you want outside help then you need to sit down and just walk through the question of "what does help look like". Tell me what I can do, even if the only thing I can do is "pay you money".

If you don't want PRs, just say that. It's fine, but the number of times I have come across projects with a ton of good PRs just sitting there is alarming. Just say "we don't merge non-maintainer PRs" and move on.

Don't automatically close bug reports. You are under zero ethical obligation to respond to or solve my bug report. But at the very least, don't close it because nobody does anything with it for 30 days. Time passing doesn't make it less real. There's no penalty for having a lot of open bug reports.

If you want me to help, don't make me go to seven systems. The number of times I've opened an issue on GitHub only to then have to discuss it on Discord or Slack and then follow up with someone via email is just a little maddening. If your stuff is on GitHub, do everything there. If you want to have a chat community, fine I guess, but I don't want to join your tech support chat channel.

Archive when you are done. You don't need to explain why you are doing this to anyone on Earth, but if you are done with a project, archive it and move on. You aren't doing any favors by letting it sit forever collecting bug reports and PRs. Archiving it says "if you wanna fork this and take it over, great, but I don't want anything to do with it anymore".

Provide an example of how you want me to contribute. Don't say "we prefer PRs with tests". Find a good one, one that did it the right way, and give me the link to it. Or make it yourselves. I'm totally willing to jump through a lot of hoops for the first part, but it's so frustrating when I'm trying to help and the response is "well actually what we meant by tests is we like things like this".

If you have some sort of vision of what the product is or isn't, tell me about it. This comes up a lot when you go to add a feature that seems pretty obvious only to have the person close it with an exhausted response of "we've already been over this a hundred times". I understand this is old news to you, but I just got here. If you have stuff that comes up a lot that you don't want people to bother you with, mention it in the README. I promise I'll read it and I won't bother you!

If what you want is money, say that. I actually prefer when a maintainer says something like "donors' bug reports go to the front of the line" or something to that effect. If you are a maintainer who feels unappreciated and overwhelmed, I get that and I want to work with you. If the solution is "my organization pays you to look at the bug report first", that's totally ethically acceptable. For some reason this seems icky to the community ethos in general, but to me it just makes sense. Just make it clear how it works.

If there are tasks you think are worth doing but don't want to do, flag them. I absolutely love when maintainers do this. "Hey, this is a good idea, it's worth doing, but it's a lot of work and we don't want to do it right now." It's the perfect place for someone to start and it hits that sweet spot of high return on effort.

matduggan.com 11 months ago

My Very Own Rival

I’ve always marveled at people who are motivated purely by the love of what they’re doing. There’s something so wholesome about that approach to life—where winning and losing don’t matter. They’re simply there to revel in the experience and enjoy the activity for its own sake. Unfortunately, I am not that kind of person. I’m a worse kind of person. For much of my childhood, I invented fake rivalries to motivate myself. A popular boy at school who was occasionally rude would be transformed into my arch-nemesis. “I see I did better on the test this week, John,” I’d whisper to myself, as John lived his life blissfully unaware of my scheming. Instead of accepting my lot in life—namely that no one in my peer group cared at all about what I was doing—I transformed everything into a poorly written soap opera. This seemed harmless until high school. For four years, I convinced myself I was locked in an epic struggle with Alex, a much more popular and frankly nicer person than me. We were in nearly all the same classes, and I obsessed over everything he said. Once, he leaned over and whispered, “Good job,” then waited a half beat before smiling. I spent the rest of the day growing increasingly furious over what he probably meant by that. “I think you need to calm down,” advised Sarah, the daughter of a coworker at Sears who read magazines in our break room. “I think you need to stay out of this, Sarah,” I fumed, furiously throwing broken tools into the warranty barrel—the official way Sears handled broken Craftsman tools: tossing them into an oil drum. The full extent of my delusion didn’t become clear until junior year of college, when I ran into Alex at a bar in my small hometown in Ohio. Confidently, I strode up to him, intent on proving I was a much cooler person now. “I’m sorry, did we go to school together?” he asked. Initially, I thought it was a joke—a deliberate jab to throw me off. But then it dawned on me: he was serious. He asked a series of questions to narrow down who exactly I was. “Were you in the marching band? Because I spent four years on the football team, and I didn’t get to know a lot of those kids. It looked fun, though.” That moment taught me a valuable lesson: no more fake rivals. So imagine my surprise when a teenage grocery store checkout clerk emerges in my 30s to become my greatest enemy—a cunning and devious foe who forced me to rethink everything about myself. Odense, Denmark, is a medium-sized town with about 200,000 people. It boasts a mall, an IKEA, a charming downtown, and a couple of beautiful parks. It also has a Chinese-themed casino with a statue of H.C. Andersen out front and an H.C. Andersen museum, since Odense is where the famous author was born. Amusingly, Andersen hated Odense—the place where he had been exposed to the horrors of poverty. Yet now the city has formed its entire identity around him. I moved here from Chicago, lured by the promise of a low cost of living and easy proximity to Copenhagen Airport (just a 90-minute train ride away). I had grand dreams of effortlessly exploring Europe. Then COVID hit, and my world shrank dramatically. For the next 12 months, I rarely ventured beyond a three-block radius—except for long dog walks and occasional supply runs to one of the larger stores. One such store was a LokalBrugsen, roughly the size of a gas station. I’d never shopped there before COVID since it had almost no selection. But desperate times called for desperate measures, and its emptiness made it the better option. 
My first visit greeted me with a disturbing poster taped to the door. The Danish word for hoarding is hamstre, a charming reference to stuffing your cheeks like a hamster. Apparently, during World War II, people were warned against hoarding food. The small grocery store had decided to resurrect this message—unfortunately using a German wartime poster, complete with Nazi imagery. I got the point, but still. Inside, two Danish women frantically threw bread-making supplies into their cart, hamstering away. They had about 40 packets of yeast, which seemed sufficient to weather the apocalypse. Surely, at a certain point, two people have enough bread. It was during this surreal period that I met my rival: Aden. Before COVID, the store had been staffed by a mix of Danes and non-Danes. But during the pandemic, the Danes seemingly wanted nothing to do with the poorly ventilated shop, leaving it staffed entirely by non-Danes. Aden was in his early 20s, tall and lean, with a penchant for sitting with his arms crossed, staring at nothing in particular, and directing random comments at whoever happened to be nearby. The first thing I noticed about him was his impressive language skills. He could argue with a Frenchman, switch seamlessly to Danish for the next dispute, and insult me in near-perfect California-accented English. My first encounter with him came when I tried to buy Panodil from behind the counter. In my best Danish, I asked, “Må jeg bede om Panodil?” (which literally translates to “May I pray for Panodil?” since Danish doesn’t have a word for “please”). Aden laughed. “Right words, but your accent’s way off. Try again, bro.” He stared at me expectantly. So I tried again. “Yeah, still not right. You gotta get lower on the bede. ” The line behind me grew as Aden, seemingly with nothing but time on his hands, made me repeat myself. Eventually, I snapped. “You understand me. Just give me the medicine.” He handed it over with a grin. “We’ll practice again later,” he said as I walked out. As my sense of time dissolved and my sleep became increasingly erratic, this feud became the only thing happening in my life. Each visit to the store turned into a linguistic duel. Aden would ask me increasingly bizarre questions in Danish. “Do you think the Queen’s speech captured the mood of the nation in this time of uncertainty?” It would take me several long seconds to process what he’d said. Then I’d retaliate with the most complex English sentence I could muster. “It’s kismet that a paragon of virtue such as this Queen rules and not a leader who acts obsequiously in the face of struggle. Why are you lollygagging around anyway?” Aden visibly bristled at my use of obscure American slang like lollygag, bumfuzzle, cattywampus, and malarkey. Naturally, I made it my mission to memorize every regionalism I could find. My wife shook her head as I scrolled through websites with titles like “Most Unusual Slang in the Deep South.” As weeks turned into months, my life settled into a bizarrely predictable pattern. After logging into my work laptop and finding nothing to do, I’d take my dog on a three-to-four-hour walk. His favorite spot was a stone embankment where H.C. Andersen’s mother supposedly washed clothes—a fact so boring it seems fabricated, yet somehow true. If I was lucky, I’d witness the police breaking up gatherings of too many people. The fancy houses along the river were home to richer Danes who simply couldn’t follow the maximum group size rule. 
I delighted in watching officers disperse elderly tea parties. My incredibly fit Corgi, whose fur barely contained his muscles after daily multi-hour walks, and I would eventually head home, where I wasted time until the "workday" ended. Then it was time for wine, news, and my trip to the store. On the way, I passed the Turkish Club—a one-room social club filled with patio furniture and a cooler full of beer no one seemed to drink. It reminded me of a low-rent version of the butcher shop from The Sopranos, complete with men smoking all the cigarettes in the world. Then I'd turn the corner and peek around to see if Aden was there. He usually was.

As the pandemic wore on, even the impeccably dressed Danes began to look unhinged, with home haircuts and questionable outfits. The store itself devolved into a chaotic mix of token fruits and vegetables, along with soda, beer, and wine bearing dubious labels like "Highest Quality White Wine." People had stopped hamstering, but it had been replaced with daytime drinking. Sadly, Aden had become somewhat diminished too. His reign of terror ended when a very tough-looking Danish man verbally dismantled him in front of everyone. I was genuinely worried for my petite rival, who was clearly outmatched. Aden had said something about the guy buying "too many beers today," which had set him off. In Aden's defense, it was a lot of beers, but still, probably not his place.

Our last conversation didn't take place in the store but at a bus stop. I asked him where he'd learned English, as it was remarkably good. "The show Friends. I had the DVDs," he said, staring forward. He seemed uncomfortable seeing me outside his domain, which wasn't helped by my bowl haircut and general confusion about what day it was. Then, on the bus, something heartwarming happened. The driver, also seemingly from Somalia, said something to Aden that I didn't understand. Aden's response was clearly ruder than expected, prompting the driver to turn around and start a heated argument. It wasn't just me—everyone hated him. In this crazy, mixed-up world, some things can bring people together across language and cultural barriers. Teenage boys being rude might just be the secret to world peace.
