Latest Posts (20 found)
iDiallo 2 days ago

Boredom is the Gatekeeper

That first Monday of my holiday break, I made a promise to myself. No work emails, no side projects, not even glancing at my blog. This time was for family, for Netflix queues, for rereading dog-eared novels. One thing I was really looking forward to was learning something new, a new skill. Not for utility, but purely out of curiosity. I wanted to learn about batteries. They power our world, yet they're a complete mystery to me. I only vaguely remember what I learned in high school decades ago. This would be the perfect subject for me.

I went straight to a website I had bookmarked years ago in a fit of intellectual ambition: BatteryUniversity.com. I started with the chemistry of lead acid batteries. I was ready to be enlightened. Twenty minutes later, I was three paragraphs in, my mind adrift. The text was dense, packed with terms like "lead-antimony" and "acid-starved." My finger twitched. Then I read this: "the sealed lead acid battery is designed with a low over-voltage potential to prohibit the battery from reaching its gas-generating potential during charge." I thought, wouldn't this be easier to understand as a YouTube video? A nice animation? I clicked away. It seemed like I had just met the gatekeeper, and it had turned me away. I was bored.

We talk about boredom as if it's the absence of stimulation. Having nothing to do. But in our hyperconnected world, where information is constantly flowing and distractions are a finger tap away, true emptiness is rare. Modern boredom isn't having nothing to do. I had plenty of material to go over. Instead, it's the friction of deep focus. It's the resistance you feel when you move from consuming information to building those neural connections in your brain. Learning feels slow and hard, and it is ungratifying compared to dopamine-inducing YouTube videos. Have you ever watched a pretty good video on YouTube and learned nothing from it? This reaction to learning the hard way, masquerading as boredom, is the gatekeeper. And almost every important skill in life lives on the other side of that gate.

When I started working at an AI startup, I was fascinated by what we were able to accomplish with a team of just two engineers. It looked like magic to me at first. You feed the AI some customer's message, and it tells you exactly what this person needs. So, to be an effective employee, I decided to learn the subject deeply. Moving from a mere consumer of an API to a model creator made the process look un-magical. It started with spreadsheets where we cleaned data. There was a loss function that stubbornly refused to budge for hours. There was staring at a single Python error that said the tensor dimensions don't align. The boring part was the meticulous engineering upon which the magic is built. I find it fascinating now, but it was frustrating at the time, and I had to force myself to learn it.

Like most developers, I was inspired to become a programmer by video games. I wanted to code my own game from scratch. I remember playing Devil May Cry and thinking about how I would program those boss battles. But when I sat at a keyboard and the cursor on my terminal flashed before me, I struggled to move a gray box on the screen using SDL. For some reason, when I pressed the arrow keys, the box jittered instead of following a straight line. I would spend the whole day reading OpenGL and SDL documentation only to fix a single bug. Boredom was going through all this documentation, painfully, only to make small incremental progress.
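A quick aside for anyone fighting their own gray box: a classic cause of exactly that jitter is moving the box on discrete key-down events, which are throttled by the operating system's key-repeat delay, instead of polling the keyboard state every frame. I'm not claiming this was my exact bug, but here is a minimal sketch of the polling approach using pygame, a Python wrapper around SDL (window size and speed are arbitrary):

    import pygame

    # Poll the keyboard state every frame instead of reacting to discrete
    # KEYDOWN events; OS key-repeat timing is what makes event-driven
    # movement stutter.
    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()
    x, y = 320, 240

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        keys = pygame.key.get_pressed()  # continuous state, not events
        x += (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * 4
        y += (keys[pygame.K_DOWN] - keys[pygame.K_UP]) * 4

        screen.fill((0, 0, 0))
        pygame.draw.rect(screen, (128, 128, 128), pygame.Rect(x, y, 32, 32))
        pygame.display.flip()
        clock.tick(60)  # a fixed frame rate keeps the speed consistent

    pygame.quit()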
When you start a business, the gatekeeper shows its face. It stares back at you when you open that blank document and write a single line of text in it: My idea. For indie developers, it's the feeling you get when you build the entire application and feel compelled to start over rather than ship what you've built. This boredom is the feeling of creating from nothing, which is always harder than passive consumption.

We've conflated "interesting" with "easy to consume." The most interesting things in the world, like building software, writing a book, mastering a craft, or understanding a concept, are never easy to produce. Their initial stages are pure effort. Gamification tries to trick us past the gatekeeper with points and badges, but that's just putting a costume on it. The real work remains. There is no way around it.

You can't eliminate that feeling. Instead, you have to recognize it for what it is and push through. When you feel that itchy tug toward a distracting tab, that's the gatekeeper shaking its keys. It's telling you that what you're doing is really hard, and that it would be easier to just passively consume something instead. You might even enjoy the process without ever learning anything. Instead, whenever you feel it, set a timer for 25 minutes. Agree to wrestle with the battery chemistry, the Python error, or the empty page. Just for that short time span.

There is no dopamine hit waiting on the other side of boredom like you get from passive consumption. Instead, the focus, the struggle, the sustained attention, that's the process of learning. The gatekeeper ensures only those willing to engage in the hard, quiet work of thinking get to the good stuff.

I did not become a battery expert over the holidays. But at least I learned to recognize the gatekeeper's face. Now, when I feel that familiar, restless boredom descend as I'm trying to learn something hard, I smile a little. I know I'm at the threshold. And instead of turning back, I take a deep breath, set my timer for 25 minutes, and power through the gate.

2 views
iDiallo 5 days ago

What should you write about on your blog?

Whenever I hear someone express a thought really well, like they make a sharp observation, tell a funny story, or just have a moment of clarity, I ask the same question: "Why don't you have a blog?" Their answer is almost always a variation of, "I don't know what to write about." They fail to see that the idea they just shared is the thing worth writing about. In that moment, they weren't worried about credentials or impact; they were just sharing something interesting. That's the entire point.

Sure, blogging isn't the mainstream medium for sharing ideas it used to be. The generation raised on polished social media often sees writing as having a higher, more intimidating bar. You need to be an expert. You need perfect prose. You need a universe-altering thesis. I'm here to tell you that you don't need any of that. Your blog, before being anything else, is your own property. It's a public notebook. It belongs to you. There are no rules. You can write about anything, in any format you like.

But I understand the hesitation. We are taught early on that ideas worth recording must be monumental. I remember the first year of middle school, when we started learning real history. Our textbook began with prehistory: Cro-Magnons, Neanderthals, those mysterious paintings in dark caves. Then it swept through Egypt's pyramids, the Roman legions, the rise of empires and religions, finally landing in the modern technological age of the '80s. I can't remember the book's name, but I've never forgotten the question it planted in me: Who decided what needed to be recorded? For every grand pharaoh or world war that made the cut, countless stories were left out. The everyday lives, the small discoveries, the personal triumphs and failures of millions of people, all erased by the sheer selectivity of history. We only see the peaks, never the vast, rich landscape that lies between them. And because I went to school in several countries, those peaks often varied depending on which country you were in.

When people say they don't know what to write about, they're thinking like that history textbook editor. They believe every blog post needs to be a chapter on pyramids. It needs to be concise, complete, and deeply impactful. It needs to be a civilization-defining monument. But real life isn't like that. It's not all Marvel movies, where the fate of the universe hangs in the balance. It's more like those quiet, beautiful indie films where the stakes are relatable. A protagonist mustering the courage to ask a girl out, a person navigating a frustrating commute, some kids on a quest to get to a White Castle burger joint. The universe isn't at stake; just a tiny, precious piece of a single human heart. And that is more than enough. You don't need cosmic stakes to have a compelling story. You just need a story that is true to you.

It's a lot like creative nonfiction. You're basically telling a true, factually accurate story, focusing on real-life experiences but filtered through your personal perspective for deeper meaning. Writing it down in your blog is a way of preserving it. It's how you structure that fleeting thought or experience into something you can revisit, and that others can understand and, hopefully, relate to. It's your own cave painting.

When I get someone excited about having their own blog, there is a second question that always follows: "What if people don't like it?" Another variation is: "What if my blog post is wrong?" I have an answer to that as well: "Good news, nobody is going to read it."
At least not right away. Really. That's a good thing: your initial blog is your personal training ground. It's where you get to be bad, be wrong, and learn to be better, all without the pressure of an imagined audience. It gives you the crucial time and space you need to write enough, refine your craft, and eventually produce something you genuinely want people to read. It's your rehearsal space. It allows you to write without the pressure of an audience, to find your voice in the quiet.

When I find myself worrying that my life is too boring to write about, I remember these words from the story "Bored" by Regie Gibson: "The most interesting people you will ever meet are also the most interested." In your rehearsal, you will learn to see the interest in your own mundane. You don't have to invent dramatic parts. Just observe your life from a slightly different angle. Find the universal thread in your specific experience. The minor frustration, the small joy, the momentary confusion. These are the building blocks of human connection.

So, what should you write about? Write about the question that popped into your head in middle school. Write about the first time you rode in an electric car. Write about the thing you just explained so well to a friend. Don't write for the history books. Write for your notebook. Paint your cave.

Trevor Noah, the comedian, recounts being stressed out about being the opening comic before Dave Chappelle. He asked Dave for advice, and this is the answer he got: "You are not here because you are funny. [...] You are here because you are interesting."

1 view
iDiallo 1 week ago

This is NOT the worst LLM you'll ever use

How fast is a horse? I was kinda baffled when I got the answer. The average horse, one grazing in nature or on a ranch, can go between 20 and 30 miles per hour. Doesn't that feel slow? What about race horses? I don't have a horse to clock it myself, so I'm relying on petmd.com. The website tells me that the English Thoroughbred can run at up to 44 miles per hour. That's fast, alright. But I was going just as fast on my short commute to the office, and it didn't feel like I was racing at all. Yes, cars have replaced horses. The animal has earned its retirement and now poses as a majestic novelty we feast on with our eyes. We don't use horses for transportation anymore.

So, how fast is a car? More importantly, how much faster can a car go? And what does it take to increase the maximum speed? I ask this question because I believe there is a parallel with the growth trajectory of large language models. I've often heard people say: "This model you are using today is the worst version of an LLM you will ever use." The implication is that it will only ever continue to improve.

So let's go back to cars for a second. When the first gasoline-powered cars were manufactured in the late 1800s, the top speed was around 10 miles per hour. The Benz Motorwagen had a single-cylinder engine capable of producing 0.75 horsepower (hp). That's not even a full horse. I'm sure the general opinion at the time was that a car would never outperform a well-bred horse. Yes, it was an impressive technology, but horses were not about to be replaced by this inefficient hunk of metal.

When OpenAI released their paper introducing GPT-1 in 2018, it didn't take the world by storm either. It was a good model with modest improvements over the existing natural language processors of the time. An improvement nevertheless, but we were not about to fire workers over the technology. The model was effective at some narrow tasks, but its output was worse than human-written text in general.

But car performance didn't remain steady. Cars kept improving. The successor to the Benz Motorwagen, the Benz Velo (1894), had a 3 hp engine and a top speed of 12 miles per hour. A small improvement. By 1901, the Mercedes had a 35 horsepower engine and a top speed of 47 mph. Not only was it faster than a horse, it could carry more than one horse ever could, and for a longer distance. In 1927, an American company, Duesenberg, came up with the Model J, which sported a massive 265 horsepower engine with a 119 mph top speed. By 1967, we had the Lamborghini reaching 171 mph. In 1993, we had supercars like the McLaren F1 going 240 mph. We had unlocked the secret. More power = more speed.

[Chart: 150 years of performance upgrades]

GPT-3 was not just a transformer, it was transformative. Suddenly, "less is more" didn't make sense. Just like the car getting more horsepower improved the speed, LLMs drastically benefited from more data and more compute. ChatGPT, the product initially running GPT-3.5, became mainstream. It wasn't reserved for research anymore; the general public had a field day with it. AI was everything.

But just like the car, the promised exponential growth could only last for so long. As of 2026, the fastest verified speed for a car came from the Yangwang U9 Xtreme, with a recorded speed of 308.22 mph. Going from 240 mph to 308 mph required insane, specialized engineering for minuscule real-world gain. The U9 has a 3,000 hp engine.
The carmaker Koenigsegg has "simulated" a theoretical max speed of 330 mph for their next car in production, the Jesko Absolut.

Large language models are hitting that same wall. Scaling laws show diminishing returns: adding 10x more compute might yield only a few percentage points on benchmarks. The "horsepower" (compute) is available in theory, but the "fuel" (quality data) is scarce and expensive.
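To put rough numbers on "diminishing returns": the best-known published estimate, the Chinchilla fit from Hoffmann et al. (2022), models the training loss of an LLM with N parameters trained on D tokens as a power law. The exponents below are that paper's reported fits, so treat them as one lab's estimate rather than gospel:

    L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
    \qquad \alpha \approx 0.34, \quad \beta \approx 0.28

Because the exponents are so small, multiplying the parameter count N by 10 only shrinks the A/N^α term by a factor of 10^{0.34} ≈ 2.2, and the irreducible term E never moves at all. Each additional benchmark point costs an order of magnitude more engine.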
And then there are speed limits. My car's 130 mph capability is irrelevant. Physics, safety, and the law limit me to 65-90 mph. It is a great car for taking my kids to school, going on trips, or just my daily commute. Adding 1,000 horsepower wouldn't improve my commute; it would just make any failure more catastrophic.

LLMs are approaching their utility speed limit. For most practical applications, GPT-4 level capability is "fast enough." They are great for writing an email, summarizing a document, and answering a customer's query. They're even great at helping me write code. But no amount of additional data will improve that courtesy email I wrote. The next breakthrough won't come from making models faster or more fluent. Instead, it will come from making them more reliable, truthful, efficient, and affordable. It's better brakes, airbags, and fuel economy. It's about obeying the speed limit of user trust.

In roughly 150 years, cars went from 10 mph to 330 mph. And note that the unit of horsepower does not correlate to the power exerted by one horse. The horse in horsepower is honorary at best. Now, what if I told you that the car you are driving today is the slowest car you'll ever have to drive? Does that sound realistic? That's the argument we are making with large language models today. We are adding a lot more horsepower for diminishing results. It would be irresponsible for me to ever drive at more than 90 mph, even on a perfect road, with perfect tires, in perfect weather. More horsepower won't get me to my destination any faster; it will only make any failure worse.

Just as a car needs fuel and a bigger engine to go faster, the LLM recipe called for two ingredients: data (fuel) and compute (horsepower). We have run out of fuel. Companies have already scraped the available public human-generated data from the Internet. Books, blog text, news articles, video data, audio data. They have it all, including copyrighted and private data. What remains is synthetic data: content generated by AI models and fed back into training. The internet is increasingly polluted with LLM-generated text. It's like trying to power a car by siphoning its own exhaust.

And getting more horsepower is exponentially more expensive. Building a 3,000 hp engine (a 100-trillion parameter model) is theoretically possible, but the energy cost is astronomical. OpenAI reported that they need increasing energy to power future models. They have partnered with other tech companies to build 20 gigawatts of data center capacity. That's the equivalent of 20 nuclear power plants. Note that in the US it takes 15 to 20 years to build a single nuclear power plant, so that's not even an option for their 2030 goal. They would have to build 20 nuclear power plants at the same time, at an unprecedented speed, to achieve it. But even if this is achieved, we go back to the first problem: we don't have any more quality data. More power and bigger models won't improve the quality of the output.

When Yann LeCun says that LLMs are a dead end, this is what he means. Not that the technology doesn't work, but that we have already optimized the metrics, and more horsepower is not going to improve the models significantly. We are seeing this in effect as LLMs become practically interchangeable. For most tasks, it doesn't matter which model is winning the benchmarks; they will all summarize a document in an almost indistinguishable manner.

If the problem with car speed were just horsepower, we could theoretically attach a 10,000 hp engine to a car. Just like we could theoretically cure any disease by killing it with fire. LLMs suffer the same problem: we can add more synthetic data, but that won't improve the model. We could provide more power, but that will just help us fail faster, at the cost of our non-renewable resources.

When we say "this is the worst version of an LLM you will ever use," we assume we are still in the exponential growth phase that new technology often benefits from. Going from 10 mph to 47 mph was game-changing. But going from 120 mph to 170 mph hasn't improved our commute when the speed limit is 65 mph. The quest for Artificial General Intelligence (AGI) has been the "closed track" driving the obsession with raw scale. But unfettered speed is useless on public roads.

The car didn't stop evolving when it maxed out its top speed. It evolved around it. It became safer, more efficient, more comfortable, and connected. The engineering shifted from pure power to practicality. The race for sheer size is over. Now comes the real engineering: building models that reason reliably, verify their claims, and operate sustainably. Maybe even teach them to say "I don't know" from time to time. We don't need faster cars. We need better transportation. We need models that operate in the real world.

0 views
iDiallo 1 week ago

How I Taught My Neighbor to Keep the Volume Down

When I moved to a new apartment with my family, the cable company we were used to wasn't available. We had to settle for Dish Network. I wasn't too happy about making that switch, but something on their website caught my attention: for an additional $5 a month, I could have access to a DVR. I switched immediately. This was 2007. DVR was not new, but it wasn't commonly bundled with set-top boxes. TiVo was still the popular way to record, pause, and rewind live TV.

We received two set-top boxes, one for each room with a TV, and three remotes. Two remotes had IR (infrared) blasters and, surprisingly, one was an RF (radio frequency) remote. After using the RF remote, I wondered: why would anyone ever use an IR remote again? You didn't need a direct line of sight with the device you were controlling. I could actually stand in the kitchen and control the TV. It was amazing. But with the convenience of RF came a problem that IR users never had to worry about: interference.

After several months of enjoying my service, one of my neighbors, the loudest in the building, also switched to Dish Network. And he also got the RF remote. This was the type of neighbor who would leave the house with the TV on, volume blasting. One day, I was in the living room watching TV when the channel just flipped. I must have accidentally hit a button, so I changed it back. But a few seconds later, the channel changed again. Then the volume went up. I figured my sister must have had the RF remote and was messing with me. But no, the remote was in my hand. I assumed something was wrong with it. The whole time I was watching TV, the channels kept randomly switching. I banged the remote on the table a couple of times, but it still switched. I removed the batteries from the remote; it still switched. I unplugged the device for a few minutes, plugged it back in, and... it still switched. Frustrated, I went through the device settings and disabled the RF remote. That's when it finally stopped. I wasn't happy with this solution, but it allowed me to watch TV until I figured something out.

One evening, when everyone was asleep and the neighbor was watching a loud TV show, I decided to diagnose the issue. The moment I pressed the power button on the RF remote, my TV and set-top box turned on, and the neighbor's TV went silent. "Fuck!" I heard someone say. I was confused. Did I just do that? The TV turned back on, the volume went up. I walked to the window, armed with the remote. I counted to three, then pressed the power button. My neighbor's TV went silent. He growled. I am the captain now.

Every time he turned the TV on, I pressed the power button again and his device went off. Well, what do you know? We had interference. Our remotes were set up to operate on the same frequency, so each remote controlled both devices. But I'm not that kind of neighbor. I wasn't going to keep messing with him. Instead, I decided I would pay him a visit in the morning and explain that our remotes were tuned to the same frequency. I would bring the RF remote with me just to give him a demo. I was going to be a good neighbor.

In the morning, I went downstairs, remote in hand. I knocked on the door, and a gentleman in his forties answered. I had rehearsed my speech and presentation. This would be a good opportunity to build a good rapport and have a shared story. Maybe he would tell me how he felt when the TV went off. How he thought there was a ghost in the house or something. But that's not what happened. "Hi, I'm Ibrahim.
Your upstairs neighbor..." I started, and was interrupted almost immediately. "Whatever you are selling," he yelled, "I'm not buying." And he closed the door in my face. I knocked a second time, because obviously there was a misunderstanding. He never answered. Instead, the TV turned on and a movie played at high volume. So much for my prepared speech.

The RF settings on my set-top box remained turned off. My family never discovered their benefit anyway; they always pointed at the box when pressing the buttons. It wasn't much of an inconvenience. In fact, I later found in the manual that you could reprogram the device and remote to use a different frequency. I did not reprogram my remote. Instead, my family used the two IR remotes, and I brought the RF remote into my bedroom, where it permanently remained on my nightstand.

Why the bedroom? Because I decided to teach my neighbor some good manners. Whenever he turned up his volume, I would simply turn off his device. I would hear his frustration and his attempts at solving the problem. Like a circus animal trainer, I remained consistent. If the volume of his TV went above what I imagined to be 15 to 20, I would press the power button. It became a routine for me for weeks. Some nights were difficult; I would keep the remote under my pillow, battling my stubborn neighbor all night.

One day, I noticed that I hadn't pressed the button in days. I opened the window, and I could still hear the faint sound of his TV. Through trial and error, he had learned the lesson. If the volume remained under my arbitrary threshold, the TV would stay on. But as soon as he passed that threshold, the device would turn off. Sometimes he would have company, and there would be noise coming out of his apartment. I used the one tool in my toolbox to send him a message: turn off the TV. All of a sudden, my neighbor and his guests would be reminded of the unspoken rules and become mindful of their neighbors.

Maybe somewhere on the web, in some obscure forum, someone asked the question: "Why does my set-top box turn off when I increase the volume?" Well, it might be 18 years too late, but there's your answer. There is a man out there who religiously sets his volume to 18. He doesn't quite know why. That's Pavlovian conditioning at its best.

0 views
iDiallo 1 week ago

Is Blogging Dead?

When I started 2025, I set myself a simple challenge: write consistently and see if I could reclaim some of the audience this blog once had. In 2024, I had published just 4 posts and had only a handful of RSS subscribers. It felt like shouting into the void. By the end of 2025, I had published 177 articles and 24 "byte-sized" pieces, those shower thoughts I write and release without extensive research. The blog received 9,158,823 views from all sources, bots and humans alike. The spikes in the traffic graph represent when an article goes viral. I've created a visualisation for when an article spiked in February.

Five articles stood out this year:

- I use Zip Bombs to Protect my Server (April 17th)
- Do not download the app, use the website (July 2nd)
- How to Lead in a Room Full of Experts (September 24th)
- Why Companies Don't Fix Bugs (April 7th)
- Users Only Care About 20% of Your Application (September 26th)

They were all prominently featured on Hacker News and Reddit, with the leadership one appearing on Google Discover, which I didn't know was a thing. Some of my "byte-sized" rants also made a lot of noise. Microsoft should take note:

- No I don't want to turn on Windows Backup with One Drive (September 11th)
- I can't upgrade to Windows 11, now leave me alone (December 21st)

I initially published every other day at 7am UTC. It was consistent, but I noticed a pattern: people were sharing my links on Reddit and Hacker News around that time, right when traffic was lowest. My posts were getting buried. So I adjusted. I gradually shifted my publication time to 12pm UTC, giving my articles a better shot at visibility during peak hours. It's a small tactical change, but it made a difference.

RSS doesn't give me precise reader counts, and that's intentional. I publish full articles in my feed, not snippets, because I want readers to own their reading experience. The growth here tells its own story. At the start of the year, I received 889 daily pings from around 56 RSS bots and 149 unique IP addresses. By year's end, that had climbed to 4,711 daily pings from roughly 131 bots and 563 unique IPs, more than a fivefold increase. Many of these bots are self-hosted readers like Tiny Tiny RSS, living on personal devices and pinging sporadically. IP addresses change constantly, making it impossible to track individual users, which is exactly how it should be. The most popular reader among my audience is Feeder (appearing in my logs as "SpaceCowboys Android RSS reader"). It's open source, ad-free, and collects no user data. Feedly also showed up consistently, pinging from 3 unique IP addresses. I do want to point out that there is no consistent way of identifying an RSS reader. The user agents vary widely. You can read more about my attempt to classify all my RSS readers here.

While my RSS readership grew steadily, my Google traffic nosedived. I've written before about AI Overviews eating through blog traffic, and I watched it happen in real time. Search impressions increased steadily with my publishing schedule, until September, when everything flattened. Then I discovered another problem: I had become a spam vector. Once I fixed that in October, traffic started recovering.

I experimented with AI to improve my writing throughout the year, and I have mixed feelings worth a dedicated post. Here's the short version: AI is an impressive time-saver. You can accomplish a lot with it quickly. But the problem comes when you realize everything written with AI assistance sounds the same. No matter how much you tweak the prompts, there's a sameness to the voice, a flatness that strips away individuality. It's not just your own writing, but that of every website. My conclusion: AI isn't a good tool if you're trying to develop a unique voice. It strips away individuality. And that unique voice is what you need to stand out today. If you want people to bypass an AI summary and actually read your blog, your voice has to be compelling and distinctly human.
I did find some uses that boosted my productivity without robbing me of the creative process. More on that in a future post.

Yet another podcast... I know. But my goal was simple: provide an easier way to consume my blog content and allow for more free-flowing discussion around subjects I care about. For now, it's just me rambling and finding my footing. I've recorded 70 episodes on Spotify and syndicated them to Apple Podcasts and Amazon Music. Soon I'll make them available directly on the blog so you don't have to sign up for yet another service. Going from zero to one was already a milestone. I'm grateful to everyone who has subscribed, and especially to those who listen without subscribing. Your time means everything.

The most important part of this entire journey has been the emails from casual readers. The internet is full of trolls, but every single email I received this year was both encouraging and filled with practical feedback. Many readers quoted my work on their own blogs, offering honest takes that pushed my thinking further. This is what makes it worthwhile, real conversations with real people. I hope we can keep this going.

In 2025, I built the habit of showing up consistently and producing work I'm proud of. In 2026, my goal is to steer this ship toward something truly meaningful. If you've been part of this journey, thank you. And if you're just finding this blog now, welcome. Let's see where this goes together.

0 views
iDiallo 3 weeks ago

Poor man's productivity trick

Have you ever joined a large organization? One with a quarter million employees? Their process is fascinating. You receive hundreds of emails to set up different software. You get a machine mailed to you. Your badge comes from FedEx. The onboarding process is long and tedious. When you finally get an account set up, they direct you to Jira, where several other steps need to be completed before you can start working. And when you start working for this Fortune 10 company, you'll notice that there is specialized software for everything. There are several vendors that maintain different applications and provide enterprise support. Some even have an office in the building to provide quick responses. There are also internal teams that build in-house tools to support other teams. I worked on such a team. My customers were my coworkers. It's fascinating.

So when you move to a small startup, which I did after I was accidentally shown the door, you have to lower your expectations. The startup I joined had 4 employees, including me. I brought in my own laptop, and we crammed into a 4-person cubicle in a sea of startups all trying to make it. Here we didn't have those fancy tools. We were lean and frugal.

If you have side projects, then I'm sure you've developed a way of tracking your own work. At a startup with two developers (including me), a CEO, and a salesman, you have the luxury of trying your own thing. Instead of Jira, Asana, or the myriad of productivity tools developers use, I came to embrace my own method. I started using what I call a Rolling Daily Work Tracker, or RDWT. Ok, let's just call it a rolling tracker.

A rolling tracker is just a text file. One ending in .txt, nothing fancy. Over time, I developed two versions of this method.

In version 1, every morning, as I started my day, I would open the file and write the current date on top. Then, throughout the day, as clients complained, I would write a single bullet-point task that I needed to accomplish. For example, a client would say they wanted us to add FedEx support to our ecommerce checkout, so that became a bullet. I would then go ahead and start working on that task. If I completed it, I would mark it as done. This being a startup, I rarely had a single task on a given day, so the file filled up quickly.

This just looks like a text file where I keep track of my tasks. Why do I call it a rolling tracker? Because of what I did the next day. The next day, I would open the file. Right on top I would add the current date, and all tasks that were not completed the previous day would roll over to the current day. As the day progressed, I would keep adding tasks to the top section. Everything that didn't get completed rolled over to the next day. It became easy to search for which tasks were completed. If anyone mentioned a problem, I could quickly search whether I had encountered it in the past. As we started growing, this became a good way to discuss what I was working on in our daily standup.

After a year, our little team grew and grew. My file started to become too big to manage. So I came up with version two. In version 2.0, I still followed the process of rolling over, but instead of keeping everything in one file, I would keep each date in its own separate file. The only difference is that it looked less cluttered, and since all the files resided in the same folder, search was still easy. I used a search command to parse through all the files to find what I was looking for. A reconstructed sketch of both versions follows.
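Here is what a version 1 file might look like after one rollover. The exact markers below are mine, not a prescription; any consistent convention works as long as you can search for it:

    2019-04-16
    - Add FedEx as a shipping option in checkout
    x Fix rounding error in cart totals

    2019-04-15
    x Update the client onboarding email
    - Add FedEx as a shipping option in checkout

The FedEx task wasn't finished on the 15th, so it rolled over, unchanged, under the 16th. In version 2.0, each date becomes its own file in one folder, and finding old work is a one-liner (assuming a standard Unix setup with grep):

    grep -rn "FedEx" worklog/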
I was able to work in this manner for 2 whole years, keeping a record of my tasks and teaching new developers the same process. Some team members used Slack to keep track of their work, and found out the hard way that Slack caps your message history on the free plan. Of course we were on the free plan, so you couldn't search for anything more than 10,000 messages in the past. Yes, that used to be a thing.

Our company went from 4 employees to 35. We grew. New processes were put in place. Devs moved to Asana for tracking their work. I left the company. When I work on my own side projects, I still use this method. It's a good way to keep granular details of my day-to-day work and see how it evolves over time. It might not work for you, but it is a good way to organize your work with the tools you have at your disposal. Yes, there are modern tools that integrate with AI, that can generate insights and metrics and whatnot. But sometimes, all we need is a text editor and a simple format to keep track of our work.

0 views
iDiallo 4 weeks ago

Paying for the rides I took 8 years ago

What does it mean when we say that investors are subsidizing the price of a service? We often hear that ChatGPT is not profitable, despite some users paying $20 a month, or others up to $200 a month. The business is still losing money despite everything we're paying. To stay afloat, OpenAI and other AI companies have to use money from their investors to cover operations until they find a way to generate sustainable income. Will these AI companies capture enough market share and attract enough paying customers to become profitable? Will they find the right formula or cheap enough hardware to be sustainable? Lucky for us, we have the benefit of hindsight. Not for AI companies, but for an adjacent company that relied entirely on investor funds to capture market share and survive: Uber.

Uber is now a publicly traded company on the NASDAQ. They first became profitable in 2023, with a net income of $1.89 billion. In 2024, they generated $9.86 billion in profit. If you're wondering what their numbers looked like in 2022, it was a net loss of $9.14 billion. When they were losing money, that was investor money. They were doing everything in their power to crush the competition and remain the only player in town. Once they captured enough market share, they pulled a switcheroo. Their prices went from extremely affordable to just another taxi company's.

I took my first Uber ride in 2016. I had car troubles, and taking the bus to work would have turned a 20-minute drive into three bus rides and an hour and 20 minutes of commuting. Instead, I downloaded Uber. Within minutes, my ride was outside waiting for me. I walked to the front passenger side and opened the door, only to find a contraption I wasn't familiar with. The driver politely asked me to sit in the back. He was paraplegic. On the ride, we had a good conversation until he dropped me off at work. A notification appeared on my phone with the price: $3.00. That's how much it cost for a 5-mile drive. For reference, taking the bus would have cost $1.50 per ride. A day pass was $5.00 at the time. But with Uber, it was $3.00, and it saved me a whole lot of time. I didn't even have to think about parking once I got to work. I didn't question it because, well, it was cheap and convenient.

Throughout my time at that job, I took these rides to work. When I opened the app one day and the price was suddenly $10, I didn't even flinch. I closed the app and opened Lyft as an alternative. At most, I would pay $6. If it was too expensive, I would just spend another 20 minutes at work, wait for the surge to end, and prices would go back down. This felt like a cheat code to life. At that point, I questioned whether it was even worth owning a car. Mind you, I live in Los Angeles, a city where you can't do much without a car, and our transit system is nothing to brag about.

From time to time, I would wonder: if I'm paying these measly prices for transportation, how much is the driver making? Obviously, if Uber took its cut from the $3 ride, there wouldn't be much left for the driver. But my answer came from the drivers themselves. They loved Uber. Some of them said they could make up to $80,000 a year just driving. How many $3 rides does it take? You see, there were bonuses and goals they could reach. If they completed 100 rides in a given timespan, they would qualify for a bonus, something like an extra $500. If they did 300 rides, they could double the bonus. The whole thing was gamified.
In the end, Uber was happy, the driver was happy, and the rider was happy. It was the same for Lyft. There were incentives everywhere. Nobody made money, but everybody got paid. This is what it looks like when investors subsidize the cost.

So what does it look like when they stop subsidizing the cost? Well, in 2022, I took those same rides, from my old apartment to that job. Instead of $3, it cost around $24. That's an 8x increase. Ridesharing is the norm these days. People hardly take taxis anymore. The Ubers and Lyfts of the world have dominated the industry by making rides so cheap that they decimated the old guard. Now that they're the only players in town, they've jacked up the prices, and hardly anyone complains. We've already changed our habits. We've forgotten what the alternative looks like.

This should serve as a preview for subsidized technologies like AI. Right now, everyone is offering it for free or at unsustainable prices. Companies are in a race to capture users, train us to integrate AI into our workflows, and make us dependent on their platforms. While I can see someone paying $20, $30, or even $60 for a rideshare in an emergency, I don't see average people paying $200 for a ChatGPT subscription. Even that is sold at a net loss. But that's exactly the point. Right now, it doesn't matter what we pay for these subscriptions. The goal for these companies is for AI to become essential to how we work, create, and think. Once they capture enough market share and eliminate the alternatives, they'll have the same leverage Uber gained. They'll start with a modest price increase, maybe $25 becomes $40. Then $60. Then tiered pricing for different levels of capability. Before long, what feels optional today will feel mandatory, and we'll pay whatever they ask because we'll have built our lives around it. Imagine a future where completing a legal document requires access to agentic AI. Like you literally cannot do it unless you shell out for a subscription to Gemini Ultra Pro Max Turbo.

The subsidy era never lasts forever. Right now, whenever I have no choice but to take Uber, I'm paying back the remaining $21 from those rides I took eight years ago. Today, venture capitalists are paying for your AI queries just like they paid for my rides. But it's not a charity. Enjoy it while it lasts, but don't forget that someone, eventually, will have to pay the real price. And that someone will be us.

0 views
iDiallo 1 month ago

We have all we need to make mass surveillance a reality

I was watching a movie when I got a random notification from Google Maps on my phone. I never get notifications from this app unless I'm doing turn-by-turn navigation. This one was titled "Timeline," and Google was asking if I wanted to turn on backups for this feature. This section of Google Maps, which I had never visited, drew a timeline of every place I've ever been: home, work, grocery store, etc. All this without me explicitly asking it to track these things. Now I can see where I go to lunch every day; I can see where I walk, drive, shop, and everything in between. It got me thinking: all the tools for mass surveillance are in place. And they are not going away.

LLMs are expensive, they are inefficient, and they might just bankrupt the US economy. If the AI bubble pops, are we going back to a pre-AI world? Is ChatGPT gonna disappear? Will Google retire their AI Overview from search? No. Most likely not. Now that we have the tech, as wasteful as it is, it's not going anywhere. Recently, I read about a software developer who bought a Grace Hopper GH200 system on Reddit for just €7,500 (about $8,000). That's a dual H100 GPU enterprise server that normally sells for over $100,000. After some extensive modifications, including converting it back to water cooling, fixing sensors reporting 16-million-degree temperatures, and freehand-soldering damaged components under a microscope, the developer had a working AI powerhouse for under $10,000 total. It could run 235-billion parameter AI models from a desktop at home. When the AI bubble pops, this hardware won't disappear. It'll flood the secondary market. Enthusiasts, researchers, and small companies will be able to buy enterprise-grade AI hardware for pennies on the dollar. As I wrote in my previous blog post, if the bubble pops, everyone gets a free graphics card. Once a technology exists, it's not going back in the box.

Now, in the same vein, we have all the tools we need to make mass surveillance (MS) a reality. Compared to AI data centers, MS data centers are relatively cheap. Mass surveillance may require a massive amount of data, but the compute is much cheaper. In fact, we can use AI, with much less demanding models than the GPTs, to enhance mass surveillance. Think about what's already deployed:

That Google Maps Timeline notification I received wasn't a bug; it was a feature I never asked for. Google had been silently tracking every coffee shop, every grocery run, every deviation from my routine. The data was already there, neatly categorized, timestamped, and ready for analysis. And Google isn't alone. Apple has similar features. So do countless other apps on our phones that request location permissions for "enhanced functionality." Every ride-sharing app, food delivery service, weather app, and social media platform with location tagging creates another data stream. Your phone company tracks which cell towers you're near. Your car's GPS system logs where you drive. Fitness trackers record your jogging routes. The infrastructure for comprehensive location surveillance doesn't need to be built; it already exists in your pocket.

Flock Safety cameras are installed at neighborhood entrances across America, capturing license plates and vehicle descriptions 24/7. These aren't just recording. They're actively identifying, categorizing, and creating searchable databases of vehicle movements. Ring doorbells record our comings and goings from every front porch. Stores use facial recognition to track "repeat shoppers" and identify suspected shoplifters.
Cities install cameras at intersections, in parks, on public transportation. The hardware costs keep dropping while the resolution keeps improving. Security camera systems that cost thousands a decade ago are now available for a few hundred dollars. 4K resolution is standard. Night vision is common. And they're all networked, all recording, all creating permanent digital records.

Amazon Alexa, Google Home, Siri: they are always listening for their wake words. How much else are they hearing? Smart TVs with built-in microphones. Apps requesting microphone permissions for no reason. Your phone's voice assistant that somehow serves you ads for things you only mentioned in conversation, never searched for, never typed. Baby monitors, smart doorbells with two-way audio, security systems with audio recording, even smart refrigerators with voice controls. We've voluntarily placed listening devices throughout our most private spaces, connected them to the internet, and trusted corporations and cloud services with the data they collect.

Credit card transactions track every purchase, creating detailed profiles of individual consumer behavior. Website cookies follow you across the internet. Social media check-ins announce your location publicly. Fitness trackers log your heart rate, sleep patterns, and daily activity levels. Smart home devices track when you turn on lights, adjust thermostats, or open your fridge. Email scanning for "better targeted ads" reads your private correspondence. Search history reveals your interests, concerns, and questions. YouTube watch history shows what captures your attention. Your smart TV knows what you watch and when. Your streaming services know your viewing habits in exhaustive detail. Your e-reader knows which books you read, which passages you highlight, and where you stop reading. Your social media platforms know who you interact with, what content you engage with, and how long you spend on each post. Every app login, every website visit, every online transaction, every connected device creates another entry in your digital profile. The breadcrumb trail we leave behind is so detailed that it's possible to reconstruct someone's daily life with accuracy, without ever conducting physical surveillance.

You don't need the GPTs or LLMs in massive, resource-guzzling data centers to run AI that enhances mass surveillance. With much simpler models that run on a common laptop, you can match faces across camera feeds. You can identify patterns in location data, transcribe and analyze audio, predict behavior, flag anomalies, generate alerts. These models run on older, cheaper hardware. They're already being used by law enforcement agencies, marketing companies, and governments worldwide. The technology that seemed like science fiction a decade ago is now commodity software available on GitHub.
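To be concrete about how low the bar is, here is a minimal sketch of cross-camera face matching using the open-source face_recognition library (a popular Python wrapper around dlib). The filenames are placeholders, and a real deployment would be far more involved, but the core matching step really is this short:

    import face_recognition

    # Build a "watchlist" encoding from a single reference photo.
    reference = face_recognition.load_image_file("person_of_interest.jpg")
    reference_encoding = face_recognition.face_encodings(reference)[0]

    # Compare every face found in a captured camera frame against it.
    frame = face_recognition.load_image_file("camera_frame.jpg")
    for encoding in face_recognition.face_encodings(frame):
        if face_recognition.compare_faces([reference_encoding], encoding)[0]:
            print("Match found in this frame")

This runs on a laptop CPU. No data center required.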
And remember that Grace Hopper system bought for $8,000? That level of computing power, capable of processing millions of surveillance data points in real time, will become accessible as enterprise hardware floods the secondary market. You don't need a billion-dollar budget to run sophisticated surveillance systems anymore. You just need to wait for the next market correction.

The narrative around surveillance often suggests we're approaching some threshold, some point where we need to be vigilant about preventing mass surveillance from becoming reality. But that framing is outdated. The infrastructure for mass surveillance isn't coming; it's already deployed, running quietly in the background, waiting for someone to fully exploit it. Every camera, every microphone, every GPS chip, every networked device is a node in a surveillance network that already spans the globe. The data is being collected. The hardware is in place. The AI models to analyze it are available. The connectivity exists. The storage is cheap. The compute is getting cheaper by the day.

We're not at some decision point where we can choose whether to build mass surveillance infrastructure. That choice was made incrementally, one convenience at a time, one app permission at a time, one smart device at a time. We already made it.

1 view
iDiallo 1 month ago

Do I Leave the Laptop at the Office?

A few years back, I worked at an AI startup as the first hired engineer. All of us could fit in a four-space cubicle, sharing an office with multiple startups. As you can imagine, when you're trying to get a startup off the ground, you have to put in the hours. Every day I would drive to the beautiful city of Venice Beach, California, cram into our little space, and type as much code as I could fit in a day. Then I would pack the laptop back into my bag and drive right back home. By the way, this was my personal laptop.

The software engineer role is performed almost entirely on a laptop. A mobile device you can pack in your bag and travel anywhere with. In fact, when the second engineer was hired, we couldn't all fit in our cubicle any longer. We suddenly realized we could disperse anywhere within the office. There was no reason for the proximity. One day, I went to lunch with my new coworker. We bought food at a Mexican restaurant nearby, then walked to the beach. Right there, before the sand started, was a nice patch of grass where we sat, ate our food, and talked about work. All of a sudden, my coworker stopped. Mesmerized by the view, he said, "Wouldn't it be nice to work right here at the beach?" There was no reason not to. We wore noise-cancelling headphones. We had internet access. We operated over a VPN. And the weather was nice. So my coworker and I decided that going forward, we would take our laptops with us, have lunch, and then work right there at the beach in the afternoon. It was a nice change of pace. We did it a few times, and it was great. But then the pandemic hit.

Whether it's at the beach, at home, or at a Starbucks, there is nothing in the software engineer role that requires working inside a designated building. You can work remotely. Your application probably runs on a cloud service, which is already remote from you. What difference does it make where you connect from? Many companies are now demanding a return to the office. A company I used to work for made the creepiest video of them all, thinking they were actually being funny. Thinking about my old job, I would spend 30 minutes in the morning commuting to the office. In the evening, with traffic, I would spend an hour driving back. That's an hour and a half on top of the nine hours I spent at work every day. Not only could I have used that time, but my employer could have benefited from it too.

But as software engineers are being forced back to the office, I have one lingering question: when we go home in the evening, do we leave the laptop at the office? If the home and the workplace are two distinct places, and one shouldn't bleed into the other, should the laptop remain in the workplace? Does that mean not being on call anymore? Does that mean you can ignore the Slack messages after hours? Every software engineering job provides employees with work laptops these days, never desktops. I believe that if we embrace the return to office, maybe we should go back to desktops for software engineering roles. At least then we would have a guarantee that work only happens at work.

But we all know that won't happen. Companies want it both ways. The control and visibility of having employees in the office, and the flexibility to reach them anytime, anywhere. The laptop comes home every night because they need us available. The Slack messages don't stop at 5 PM. The on-call rotation doesn't pause because we're no longer at our desks. If the work truly requires us to be in a specific building, then it should stay in that building.
If it doesn't, and for most software engineering work it clearly doesn't, then why are we pretending it does? That day at the beach, my coworker and I weren't shirking our responsibilities. We were doing the same work, just with sand beneath our feet instead of carpet. The code compiled the same. The bugs got fixed. The features shipped. What changed was only the scenery, and maybe, for a brief moment, our quality of life. The return to office isn't about productivity or collaboration. It's about control and culture, about maintaining systems that were designed for a different era of work. And until companies are honest about that, the laptop will keep coming home with us, a daily reminder of the contradiction we're all living with.

1 view
iDiallo 1 month ago

The Natural Path to Gamblification

I had a 383-day streak on Duolingo. Three hundred and eighty-three days of that green owl peeking through my notifications, reminding me that my streak was in danger. I wrote about how I never actually learned Spanish from Duo, but I kept coming back. Not for the language, but for the streak. The number itself became the point. Then one day, I forgot. Life happened. The streak died.

Now imagine this. Right when that happens, right when you see that devastating zero where your 383 used to be, a popup appears. It's colorful, animated, and encouraging. "Spin to Save Your Streak!" it says. You can use 50 gems (which you've accumulated, or could buy) for a chance to restore everything. The wheel spins with satisfying sounds, the same dopamine-triggering clicks and whirs you've been conditioned to love from every other part of the app. One in ten chance. Maybe one in twenty. The odds don't really matter, because you're not thinking about odds. You're thinking about 383 days. About the sunk cost. About how stupid you'd feel if you didn't at least try.

Does this feel out of place? Not at all. It feels like the natural next step. Because Duolingo already has gems, streak freezes, legendary levels, and a shop where you spend currency you've earned or purchased. The infrastructure for gambling is already there. If they flip the switch, you're not even going to notice.

The point I'm trying to make is that now that we have gamified everything, gambling becomes the logical next step. It doesn't even disrupt the UX. It feels like a natural progression. We've spent the last decade training ourselves to accept game mechanics in every aspect of life. Points, levels, streaks, loot boxes, random rewards, daily bonuses, competitive leaderboards. We've normalized the psychological hooks that make slot machines addictive, then sprinkled them across education, fitness, productivity, and social interaction. So when gambling shows up, actual gambling with money and uncertain outcomes, it doesn't register as a violation. It just feels like... more of the same. A natural evolution of features we've already accepted.

"There is substantial evidence from educational psychology and neuroscience that fun and enjoyment significantly enhance the educational process and lead to improved learning outcomes." That's the quote I get from asking several LLMs if there is any evidence that fun learning is more effective. Here are some of the cited sources:

- The impact of enjoyment on learning retention
- Scientific evidence confirms learning must be fun
- Why fun matters more than you realize
- Gamification and its effectiveness in schools (note: they make gamified learning apps)

But the more I read them, the more a pattern emerges. The claim is that dopamine helps us learn better. Why does dopamine matter? It's the chemical most commonly associated with pleasure and reward. When we experience something enjoyable, our brains reward us with good feelings, as if to say, "Hey, that felt good. Remember to do that again." Not only does dopamine make us feel good, it helps with attention, motivation, and memory. So naturally, we need to make sure we turn every learning interaction into a dopamine-inducing activity.

Imagine for a second that your physics class in college is gamified. For every concept you learn, your progress bar increases. You level up and gain reputation points. If I know anything about video games, it's that the actual learning will become secondary to the meta-game of point accumulation. Students will start asking "How many points is this worth?" before thinking about what they are supposed to learn.
The motivation to understand something is replaced by an experience-points reward that feels suspiciously like what we feel when playing World of Warcraft. That's how I was able to complete the Japanese course on Duolingo without ever learning to count to ten. I got the dopamine alright, but not the knowledge.

When my kids started talking about Labubu, I thought they were mispronouncing a French word. But it turns out it's a strange, weird-looking doll that took the world by storm. The toy itself serves no purpose. Kids aren't collecting them to play with them, to create stories, to build worlds. Instead, both kids and grown-ups are playing the game of finding a rare one. And when they finally do, they play the next game of collecting the next rare one. It's blind boxes all the way down. The thrill isn't in the object itself, it's in the uncertainty, the odds, the chase. You don't buy a Labubu because you want that specific Labubu. You buy it for the slot-machine pull of not knowing what's inside. For the rush when you get a rare variant. For the social currency of having what others don't. This is just loot boxes in physical form. And we've normalized it completely.

We don't need to make everything fun and engaging. The evidence presented for "fun" things being more educational is weak. We could just as well say that taking a good dose of cocaine before class will help increase your dopamine. If gamification truly enhanced learning, we'd see it in outcomes. Instead, we see increased engagement metrics. Which is what the platforms measure, because that's what drives retention and revenue. If fun equals learning, then sure, let's add a dopamine bar next to your physics lesson. For every concept you learn, you progress and your character grows stronger. Defeat the boss battle at the end of each unit. Earn legendary skins for your scientific calculator. What actually happens is that students learn to game the system. They optimize for points, not understanding. They chase streaks, not knowledge. They complete tasks for rewards, not because they're curious. And once you've built that system, where you've trained users to respond to these game mechanics, adding gambling doesn't feel like a betrayal. It feels like an upgrade. A premium feature.

The price to restore my streak was $13.99. Did I pay for it? No... but I wanted to! If there was an option to spin a wheel for a chance at half price, I would probably have done it. And that's exactly the problem. The infrastructure is already there. We are already participating. The path from gamification to gambling isn't a leap anymore. It's a small step that millions of users have already been primed to take. Everything is gamified. Now let me spin the wheel on this month's rent.

1 views
iDiallo 1 months ago

Let users zoom in on mobile devices

This is a bit of a rant. Maybe my eyes are not as good as they used to be. When I read an article that has pictures in it, I like to zoom in to see the details. You might think this makes no sense: I just have to pinch the screen to zoom in. You would be right, but some websites intentionally prevent you from zooming in.

Here is an example, the straw that broke the camel's back, so to speak. I was reading an interesting article on Substack about kids who ran away in the 60s, and it has these pictures of letters from those kids. Handwritten letters that complement the story and that I really wanted to read. But have you tried reading text from a picture in an article on a phone? Again, it could just be what happens when you spend 35 years in front of screens.

CSS alone is not enough to make a page responsive on a mobile device. The browser needs to know how we want to size the viewport. For that we have the viewport meta tag, which gives the browser a hint on how to size the page. Since we started making pages responsive yesteryear, I've relied on a single configuration and have rarely ever found a reason to change it:

<meta name="viewport" content="width=device-width, initial-scale=1">

The width is set to the current device's width; mobile or desktop, it doesn't matter. The initial-scale is set to 1. The documentation is a bit confusing; I consider the scale to just be the initial zoom level. That's really all you need to know about the viewport if you are building a webpage and want it to display properly on a mobile device.

But of course, the article I'm complaining about has different settings. Here is what they have:

<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">

The properties I'm complaining about are user-scalable=no and maximum-scale=1. The first one says users can't zoom in, period. Why would you prevent users from zooming in? This is such a terrible setting that you can configure your browser to ignore it. But for good measure, they added maximum-scale=1, which means even if you are allowed to zoom, the maximum zoom level is one... which means you can't zoom. Yes, I disabled zoom to make a point.

It's a terrible experience all the way around. When I read articles that have pictures, I can't zoom in! I can't properly look at the pictures. There are a few platforms that I've noticed have these settings. Substack and Medium are the most annoying. Now, when I know an article is from those platforms, I just ignore them. The only time you ever need to override users' zooming is if it's a web game. Other than that, it's just plain annoying.

0 views
iDiallo 1 months ago

We Should Call Them Macroservices

I love the idea of microservices. When there's a problem on your website, you don't need to fix and redeploy your entire codebase. If the issue only affects your authentication service, you can deploy just that one component and call it a day. You've isolated the authentication feature into an independent microservice that can be managed and maintained on its own. That's the theory. The reality is often different.

Microservices are a software architecture style where an application is built as a collection of small, independent, and loosely coupled services that communicate with each other. The "micro" in the name implies they should be small, and they usually start that way. When you first adopt this philosophy, all services are genuinely small and build incredibly fast. At this stage, you start questioning why you ever thought working on a monolith was a good idea. I love working on applications where the time between pushing a change and seeing its effect is minimal. The feedback loop is tight, deployments are quick, and each service feels manageable. But I've worked long enough in companies adopting this style to watch the transformation. Small becomes complex. Fast becomes extremely slow. Cheap becomes resource-intensive.

Microservices start small, then they grow. And grow. And the benefits you once enjoyed start to vanish. For example, your authentication service starts with just login and logout. Then you add password reset. Then OAuth integration. Then multi-factor authentication. Then session management improvements. Then API key handling. Before you know it, your "micro" service has ballooned to thousands of lines of code, multiple database tables, and complex business logic. When you find yourself increasing the memory allocation on your Lambda functions by 2x or 3x, you've reached this stage. The service that once spun up in milliseconds now takes seconds to cold start. The deployment that took 30 seconds now takes 5 minutes.

If speed were the only issue, I could live with it. But as services grow and get used, they start to depend on one another. When using microservices, we typically need an orchestration layer that consumes those services. Not only does this layer grow over time, but it's common for the microservices themselves to accumulate application logic that isn't easy to externalize. A service that was supposed to be a simple data accessor now contains validation rules, business logic, and workflow coordination. Imagine you're building an e-commerce checkout flow. You might have:

- An inventory service to check stock
- A pricing service to calculate totals
- A payment service to process transactions
- A shipping service to calculate delivery options
- A notification service to send confirmations

Where does the logic live that says "only charge the customer if all items are in stock"? Or "apply the discount before calculating shipping"? This orchestration logic has to live somewhere, and it often ends up scattered across multiple services or duplicated in various places.

As microservices grow, it's inevitable that they grow teams around them. A team specializes in managing a service and becomes the domain expert. Not a bad thing on its own, but it becomes an issue when someone debugging a client-side problem discovers the root cause lies in a service only another team understands. A problem that could have been solved by one person now requires coordination, meetings, and permissions to identify and resolve. For example, a customer reports that they're not receiving password reset emails. The frontend developer investigates and confirms the request is being sent correctly. The issue could be:

- The account service isn't triggering the email request properly
- The email service is failing to send messages
- The email service is sending to the wrong queue
- The notification preferences service has the user marked as opted-out
- The rate limiting service is blocking the request

Each of these components is owned by a different team.
What should be a 30-minute investigation becomes a day-long exercise in coordination. The feature spans several microservices, but domain experts only understand how their specific service works. There's a disconnect between how a feature functions end-to-end and the teams that build its components.

When each microservice call requires an actual HTTP request (or message queue interaction), things get slower. Loading a page that requires data from several dependent services, each taking 50-100 milliseconds, means those latencies quickly compound. Imagine for a second you are displaying a user profile page. Here is the data that's being loaded:

- User account details (Account Service: 50ms)
- Recent orders (Order Service: 80ms)
- Saved payment methods (Payment Service: 60ms)
- Personalized recommendations (Recommendation Service: 120ms)
- Notification preferences (Settings Service: 40ms)

If these calls happen sequentially, you're looking at 350ms just for service-to-service communication, before any actual processing happens. Even with parallelization, you're paying the network tax multiple times over (the sketch at the end of this post puts rough numbers on this). In a monolith, this would be a few database queries totaling perhaps 50ms.

There are some real benefits to microservices, especially when you have good observability in place. When a bug is identified via distributed tracing, the team that owns the affected service can take over the resolution process. Independent deployment means that a critical security patch to your authentication service doesn't require redeploying your entire application. Different services can use different technology stacks suited to their specific needs. These address real pain points, and they're why we are attracted to this architecture in the first place.

But microservices are not a solution to every architectural problem. I always say if everybody is "holding it wrong," then they're not the problem; the design is. Microservices have their advantages, but they're just one option among many architectural patterns. To build a good system, we don't have to exclusively follow one style. Maybe what many organizations actually need isn't microservices at all, but what I'd call "macroservices". Larger, more cohesive service boundaries that group related functionality together. Instead of separate services for user accounts, authentication, and authorization, combine them into an identity service. Instead of splitting notification into separate services for email, SMS, and push notifications, keep them together where the shared logic and coordination naturally lives. The goal should be to draw service boundaries around business capabilities and team ownership, not around technical functions. Make your services large enough that a feature can live primarily within one service, but small enough that a team can own and understand the entire thing.

Microservices promised us speed and independence. What many of us got instead were distributed monoliths: all the complexity of a distributed system with all the coupling of a monolith.
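To put rough numbers on that network tax, here is a minimal Python sketch that simulates the five profile-page calls above as latencies and compares sequential and parallel fan-out. The service names and timings are just the hypothetical ones from the example, not real APIs.

import asyncio
import time

# Hypothetical latencies from the profile-page example above, in seconds.
SERVICES = {
    "account": 0.050,
    "orders": 0.080,
    "payments": 0.060,
    "recommendations": 0.120,
    "settings": 0.040,
}

async def call_service(name: str, latency: float) -> str:
    # Stand-in for an HTTP request to another microservice.
    await asyncio.sleep(latency)
    return name

async def sequential() -> None:
    # One call after another: latencies add up.
    for name, latency in SERVICES.items():
        await call_service(name, latency)

async def parallel() -> None:
    # All calls at once: you still pay for the slowest one.
    await asyncio.gather(*(call_service(n, l) for n, l in SERVICES.items()))

for variant in (sequential, parallel):
    start = time.perf_counter()
    asyncio.run(variant())
    print(f"{variant.__name__}: {time.perf_counter() - start:.3f}s")
# Prints roughly: sequential: 0.350s, parallel: 0.120s.

Even in the best case, the parallel version still eats the slowest service's latency on every page load, while the monolith's in-process calls would cost close to nothing.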

0 views
iDiallo 1 months ago

Why my Redirect rules from 2013 still work and yours don't

Here is something that makes me proud of my blog. The redirect rule I wrote for my very first article 12 years ago still works!

This blog was an experiment. When I designed it, my intention was to try everything possible and not care if it broke. In fact, I often said that if anything broke, it would be an opportunity for me to face a new challenge and learn. I designed the website as best as I could, hoping that it would break so I could fix it. What I didn't take into account was that some things are much harder to fix than others. More specifically: URLs.

Originally, the URL was just the article title right off the root, something like this:

/my-first-article

You can blame Derek Sivers for that format. But then I thought, what if I wanted to add pages that weren't articles? It would be hard to differentiate a blog entry from anything else. So I switched to the more common blog format:

/blog/2013/05/my-first-article

Perfect. But should the month have a leading zero? I went with the leading zero. But then I introduced a bug:

/blog/2013/5/my-first-article

Yes, I squashed the leading zero from the months. This meant that there were now two distinct URLs that pointed to the same content, and Google doesn't like duplicate content in its search results. Of course, that same year, I wrote an article that went super viral. Yes, my server crashed. But more importantly, people bookmarked and shared several articles from my blog everywhere. Once your links are shared, they become permanent. They may get an entry in the Wayback Machine, they will be shared in forums, someone will make a point and cite you as a source. I could no longer afford to change the URLs or break them in any way. If I fixed the leading zero bug now, one of the URLs would lead to a 404. I had to implement a more complex solution. So in my .htaccess file, I added a new redirect rule that kept the leading zero intact and redirected all URLs with a missing zero back to the version with a leading zero. Problem solved.

Note that my .htaccess was growing out of control, and there was always the temptation to edit it live. When I write articles, sometimes I come up with a title, then later change my mind. For example, my most popular article was titled "Fired by a machine" (fired-by-a-machine). But a couple of days after writing it, I renamed it to "When the machine fired me" (when-the-machine-fired-me). Should the old URL remain intact despite the new title? Should the URL match the new title? What about the old URL? Should it lead to a 404 or redirect to the new one?

In 2014, after reading some Patrick McKenzie, I had this great idea of removing the month and year from the URL. This is what the URL would look like:

/blog/my-first-article

Okay, no problem. All I needed was one more redirect rule. I don't like losing links, especially after Google indexes them. So my rule has always been to redirect old URLs to new ones and never lose anything. But my .htaccess file was growing and becoming more complex. I'd also edited it multiple times on my server, and it was becoming hard to sync it with the different versions I had on different machines. So I ditched it. I created a new .conf file with all the redirect rules in place. This version is always committed to my repo and has been consistently updated since. When I deploy new code to my server, the conf file is included in my apache.conf and my rules remain persistent. And the redirectrules.conf file looks something like this:

# redirectrules.conf, committed to the repo and pulled in from apache.conf
RedirectMatch 301 ^/blog/([0-9]{4})/([1-9])/(.+)$ /blog/$1/0$2/$3
RedirectMatch 301 ^/blog/[0-9]{4}/[0-9]{2}/(.+)$ /blog/$1
RedirectPermanent /blog/fired-by-a-machine /blog/when-the-machine-fired-me

I've rewritten my framework from scratch and gone through multiple designs. Whenever I look through my logs, I'm happy to see that links from 12 years ago are properly redirecting to their correct destinations.
URLs are forever, but your infrastructure doesn't have to be fragile. The reason my redirect rules still work after more than a decade isn't because I got everything right the first time. I still don't get it right! It's because I treated URL management as a first-class problem that deserved its own solution. Having an .htaccess file living only on your server? It's a ticking time bomb. The moment I moved my redirect rules into a .conf file and committed it to my repo, I gained the ability to deploy with confidence. My redirects became code, not configuration magic that might vanish during a server migration.

Every URL you publish is a promise. Someone bookmarked it, shared it, or linked to it. Breaking that promise because you changed your mind about a title or URL structure is not an option. Redirect rules are cheap and easy, but you can never recover lost traffic. I've changed URL formats three times and renamed countless articles. Each time, I added redirects rather than replacing them. Maybe it's just my paranoia, but the web has a long memory, and you never know which old link will suddenly matter.

Your redirect rules from last year might not work because they're scattered across multiple .htaccess files, edited directly on production servers, and never version controlled. Mine still work because they travel with my code, surviving framework rewrites, server migrations, and a decade of second thoughts about URL design. The Internet never forgets... as long as the redirect rules are in place.

0 views
iDiallo 1 months ago

How I Became a Spam Vector

There are several reasons for Google to downrank a website in its search results. My first experience with downranking was on my very first day at a job in 2011. The day I walked into the building, Google released its first Panda update. My new employer, being a "content creator," disappeared from search results. This was a multi-million dollar company with teams of writers and a portfolio of websites. They depended on Google, and not appearing in search meant we went on code red that first day.

But it's not just large companies. Just this year, as AI Overviews have come to dominate the search page, I've seen traffic to this blog falter. At one point, the number of impressions was increasing, yet the number of clicks declined. I mostly blamed it on AI Overviews, but it didn't take long before impressions also dropped. It wasn't such a big deal to me, since the majority of my readers now come through RSS. Looking through my server logs, I noticed that web crawlers had been accessing my search page at an alarming rate. And the search terms were text promoting spammy websites: crypto, gambling, and even some phishing sites. That seemed odd to me. What's the point of searching for those terms on my website if it's not going to return anything?

In fact, there was a bug on my search page. If you entered Unicode characters, the page returned a 500 error. I don't like errors, so I decided to fix it. You can now search for Unicode on my search page. Yay! But it didn't take long for traffic to my website to drop even further. I didn't immediately make the connection; I continued to blame AI Overviews. That was until I saw the burst of bot traffic to the search page. What I didn't take into account was that now that my search page was working, any spammy search term you entered was prominently displayed on the page and in the page title. What I failed to see was that this was a vector for spammers to post links to my website. Even if those weren't actual anchor tags on the page, they were still URLs to spam websites. Looking through my logs, I can trace the sharp decline of traffic to this blog back to when I fixed the search page by adding support for Unicode.

I didn't want to delete my search page, even though it primarily serves me for finding old posts. Instead, I added a single meta tag to fix the issue:

<meta name="robots" content="noindex">

What this means is that crawlers, like Google's indexing crawler, will not index the search page. Since the page is not indexed, the spammy content will not be used as part of the website's ranking. The result is that traffic has started to pick up once more. Now, I cannot say with complete certainty that this was the problem and the solution to the traffic change. I don't have data from Google. However, I can see the direct effect, and I can see through Google Search Console that the spammy search pages are being added to the "no index" issues section.

If you are experiencing something similar with your blog, it's worth taking a look through your logs, specifically for search pages, to see if spammy content is being indirectly added. I started my career watching a content empire crumble under Google's algorithm changes, and here I am years later, accidentally turning my own blog into a spam vector while trying to improve it. The tools and tactics may have evolved, but some things never change. Google's search rankings are a delicate ecosystem, and even well-intentioned changes can have serious consequences. I often read about bloggers who never look past the content they write.
Meaning, they don't care if you read it or not. But the problem comes when someone else takes advantage of your website's flaws. If you want to maintain control over your website, you have to monitor your traffic patterns and investigate anomalies. AI Overviews are most likely responsible for the original traffic drop, and I don't have much control over that. But they were also a convenient scapegoat to blame everything on, and an excuse not to look deeper. I'm glad, at least, that my fix was something simple that anyone can implement.
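If you want to do that log check yourself, here is a small Python sketch. It assumes the common Apache/Nginx combined log format and a /search?q= style endpoint; both the file name and the URL pattern are assumptions you'd adjust to your own setup.

import re
from collections import Counter
from urllib.parse import unquote_plus

# Matches requests to the search page and captures the query string.
# The path and parameter name are hypothetical; change them to match your site.
SEARCH = re.compile(r'"GET /search\?q=([^ "]+)')

counts = Counter()
with open("access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        match = SEARCH.search(line)
        if match:
            counts[unquote_plus(match.group(1))] += 1

# The top entries make spammy campaigns obvious at a glance.
for query, hits in counts.most_common(20):
    print(f"{hits:6d}  {query}")

A sudden pile of crypto, gambling, or phishing terms at the top of that list is the signal that your search page has become a spam vector.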

1 views
iDiallo 1 months ago

Demerdez-vous: A response to Enshittification

There is an RSS reader that I used often in the past and have become very reliant on. I would share the name with you, but as they grew more popular, they decided to follow the enshittification route. They've changed their UI, hidden several popular links behind multilayered menus, and revamped their API. Features that I used to rely on have disappeared, and the API is close to useless. My first instinct was to find a new app that would satisfy my needs. But being so familiar with this reader, I decided to test a few things in the API first. Even though their documentation doesn't mention older versions anymore, I discovered that the old API is still active. All I had to do was add a version number to the URL. It's been over 10 years, and that API is still very much active. I'm sorry I won't share it here, but this has served as a lesson for me when it comes to software that becomes worse over time. Don't let them screw you, unscrew yourself!

We talk a lot about "enshittification" these days. I've even written about it a couple of times. It's about how platforms start great, get greedy, and slowly turn into user-hostile sludge. But what we rarely talk about is the alternative. What do you do when the product you rely on rots from the inside? The French have a phrase for this: Demerdez-vous. The literal translation is "unshit yourself". What it actually means is to find a way, even if no one is helping you.

When a company becomes too big to fail, or simply becomes dominant in its market, drip by drip, it starts to become worse. You don't even notice it at first. It changes in ways that most people tolerate because the cost of switching is high, and the vendor knows it. But before you despair, before you give up, before you let the system drag you into its pit, try to unscrew yourself with the tools available. If the UI changes, try to find the old UI. Patch the inconvenience. Disable the bullshit. Bend the app back into something humane. It might sound impossible at first, but the tools to accomplish this exist and are widely used. Sometimes the escape hatch is sitting right there, buried under three layers of "Advanced" menus.

On the web, I hate auto-playing videos, I don't want to receive twelve notifications a day from an app, and I don't care about personalization. But for the most part, these can be disabled. When I download an app, I actually spend time going through settings. If I care enough to download an app, or if I'm forced to, I'll spend the extra time to ensure that the app works to my advantage, not the other way around. When that RSS reader removed features from the UI, but not from their code, I was still able to continue using them. Another example of this is Reddit. Their new UI is riddled with dark patterns, infinite scroll, and popups. But go to old.reddit.com, and you are greeted with the old UI. It may not look fancy, but it was designed with the user in mind, not the company's metrics.

There's also the time YouTube removed the dislike button. While it might be hurtful for content creators to see the number of dislikes, as a consumer, this piece of data served as a filter for lots of spam content. For that, of course, there is the "Return YouTube Dislike" browser extension. Extensions can often help you regain control when popular websites remove functionality that is useful to users but that the service no longer wants to support. There are several tools that enhance YouTube, fix Twitter, and of course there's uBlock. It's not always possible to combat enshittification.
Sometimes the developer actively enforces their new annoying features and prevents anyone from removing them. In cases like these, there is still something users can do. They can walk away. You don't have to stay in an abusive relationship. You are allowed to leave. When you do, you'll discover that there was an open-source alternative. Or that a small independent app survived quietly in a corner of the internet. Or even, sometimes, you'll find that you don't need the app at all. You break your addiction. In the end, "Demerdez-vous" is a reminder that we still have agency in a world designed to take it away. Enshittification may be inevitable, but surrender isn't. There's always a switch to flip, a setting to tweak, a backdoor to exploit, or a path to walk away entirely. Companies may keep trying to box us in, but as long as we can still think, poke, and tinker, we don't have to live with the shit they shovel. At the end of the day, "On se demerde."

0 views
iDiallo 1 months ago

We Don't Fix Bugs, We Build Features

As a developer, bugs consume me. When I discover one, it's all I can think about. I can't focus on other work. I can't relax. I dream about it. The urge to fix it is overwhelming. I'll keep working until midnight even when my day should have ended at 6pm. I simply cannot leave a bug unfixed. And yet, when I look at my work backlog, I see a few dozen of them. A graveyard of known issues, each one catalogued, prioritized, and promptly ignored. How did we get here? How does a profession full of people who are pathologically driven to fix problems end up swimming in unfixed problems? For that, you have to ask yourself: what is the opposite of a bug? No, it's not "no bugs". It's features.

"I apologize for such a long letter - I didn't have time to write a short one."

As projects mature and companies scale, something changes. You may start with a team of developers solving problems, but then, slowly, they become part of an organization that needs processes, measurements, and quarterly planning. Then one day, you are presented with this new term: Roadmap. It's a beautiful, color-coded timeline of features that will delight users and move business metrics. The roadmap is where bugs go to die.

Here's how it happens. A developer discovers a bug and brings it to the team. The product manager asks the only question that matters in their world: "Will this affect our roadmap?" Unless the bug is actively preventing a feature launch or causing significant user churn, the answer is almost always no. The bug gets a ticket, the ticket gets tagged as "tech debt," and it joins the hundreds of other tickets in the backlog hotel, where it will remain indefinitely. (see Rockstar) This isn't a jab at product managers. They're operating within a system that leaves them no choice.

Agile was supposed to liberate us. The manifesto promised flexibility, collaboration, and responsiveness to change. But somewhere along the way, agile stopped being a philosophy and became a measurement system. There are staunch supporters of agile who swear by it and blame any flaws on the particular implementation: "You guys are not doing true agile." But when everyone is doing it wrong, you don't blame everyone, you blame the system. We can't all be holding agile wrong! The agile principles are to deliver working software frequently, welcome changing requirements, and maintain technical excellence. But principles don't fit in spreadsheets. Metrics do. And so we got story points. Velocity. Sprint completion rates. Feature delivery counts. Suddenly, every standup and retrospective fed into dashboards that executives reviewed quarterly. And where there are metrics, there are managers trying to make some numbers go up and others go down.

Features are easy to measure. They're discrete, they're visible, and they can be tied to revenue. "We shipped 47 features this quarter, leading to a 12% increase in user engagement." That's a bullet point in your record that gets you promoted. Bugs are invisible in this equation. Sure, they appear on the same Jira board, but their contribution is ephemeral. How do you quantify the value of something that doesn't go wrong? How do you celebrate the absence of a problem? You can't put "prevented 0 crashes by fixing a race condition" on a slide deck. The system doesn't just deprioritize bugs, it actively ignores them. A team that spends a sprint fixing bugs has nothing to show for it on the roadmap. Their velocity looks identical, but they've "accomplished" nothing that the executives care about.
Meanwhile, the team that plows ahead with features, moving fast and breaking things, bugs be damned? They look productive.

Developers want to prioritize bug fixes, performance improvements, and technical debt. These are the things that make software maintainable, reliable, and pleasant to work with. Most developers got into programming because they wanted to fix things, to make systems better. The business prioritizes features that impact revenue. New capabilities that can be sold, marketed, and demonstrated. Things that exist, not things that don't break. Teams are often faced with a choice: do we fix what's broken, or do we build what's new? And because the metrics, the incentives, and the roadmap all point in one direction, the choice is made for them. This is how you end up with production systems riddled with known bugs that could probably be fixed but won't be tackled. Not because they're not important. Not because developers don't care. But because they're not on the roadmap.

"I apologize for so many bugs - I only had time to build features."

Writing concisely takes more time and thought than rambling. Fixing bugs takes more discipline than shipping features. Building maintainable systems takes more effort than building fast. We've become so busy building that we have no time to maintain what we've built. We're so focused on shipping new things that we can't fix the old things. The roadmap is too full to accommodate quality. Reaching our metric goals is the priority. It's not that we don't know better. It's not even that we don't care. It's that we've built systems, product roadmaps, velocity tracking, and the rest, that make the wrong choice the only rational choice.

I've worked with teams that tried a statistical approach to presenting bugs in the roadmap. Basically, you analyze existing projects, look at bug counts when each feature was built, then calculate the probability of bugs. That number then appears in the roadmap as a color-coded metric. It sounds and looks good in theory, and you can even attach an ROI to bug fixes. But bugs don't work like that. They can be introduced by mistake, by misunderstanding, or sometimes even intentionally when the business logic itself is flawed. No statistical model will predict the developer who misread the requirements, or the edge case that appears only in production, or the architectural decision that made sense five years ago but creates problems today. Bugs are human problems in human systems. You can't spreadsheet your way out of them. You have to actually fix them.

When developers are forced to choose between what they know is right and what the metrics reward, we've built the wrong system. When "I fixed a critical race condition" is less valuable than "I shipped a feature," we've optimized for the wrong things. Maybe the first step is simply acknowledging the problem. We don't fix bugs because our systems don't let us. We don't fix bugs because we only had time to build features. And just like that overly long letter, the result is messier, longer, and ultimately harder to deal with than if we'd taken the time to do it right from the start.

0 views
iDiallo 1 months ago

Self-Help Means Help Yourself

For a moment in my life, you couldn't see me without a book in hand. A self-help book, to be precise. I felt like the world was moving, changing, and I was being left behind. Raised to look in the mirror before blaming others, I decided that if there was something to improve, it was my very own self. I picked up Dale Carnegie's How to Win Friends and Influence People. Now I can admit it: I never finished reading the book. But I read plenty of others. I devoured all of Robert Kiyosaki's books and felt inspired. If only I had a rich dad. I read the one he wrote with Donald Trump. I was pumped. I was still learning English; I may have misunderstood the whole thing (I can assure you, none of the authors mentioned were involved in writing the book). I joined a club where we would get a new self-help book every month and discuss it. I was in love with the genre. But one thing I noticed in retrospect is that I enjoyed reading more than actually doing anything the books taught.

Here's the thing about self-help books: they're necessarily abstract. If they gave specific examples, those examples wouldn't apply to most people. So they give general advice, more inspiring than practical. And inspiration, while it feels good in the moment, doesn't build anything on its own. Over the years, I learned that advice by itself is useless. Imagine getting writing advice from a pro when you've never written anything. No writing advice can be applied to a blank piece of paper. You can't edit what doesn't exist. You can't improve a sentence you haven't written. What you actually need is to start something, anything, and reevaluate every so often. That's it.

I think about Bob Nystrom, who wrote Crafting Interpreters, a book about building programming languages. What I love about his story isn't just the book itself, but how he wrote it. He did so publicly, chapter by chapter, responding to feedback as he went. And when he completed the book, he published a reflection on the process, titled Crafting "Crafting Interpreters". He wrote through some of the worst years of his life. His mother was diagnosed with cancer. Loved ones died. The world around him felt like it was falling apart. But he kept writing anyway. Not because he was superhuman or exceptionally disciplined. He kept writing because it was the one thing he could control when so much else was spiraling beyond his grasp. Finishing the book became proof that he could make it through everything else. Skipping a day would have meant the chaos won. Writing became his anchor.

We can always find reasons not to start. The conditions are never perfect. We're still learning. We don't have the right resources. We haven't read enough books yet. But self-help isn't meant to be inspiration porn, something we consume to feel good without changing anything. It's a method for helping yourself. The books, the advice, the strategies, they're all pointing toward the same message: you have to be the one to do it. Nobody can help you get started. Nobody can give you advice that works on a blank page. The only thing that transforms nothing into something is you, sitting down and beginning. Self-help means helping yourself, not someday, not when you're ready, but now. Start messy. Start imperfect. Start without knowing how it ends. Because the secret isn't in the next book or the next piece of advice. The secret is that you already know what you need to do. You just need to help yourself do it.

1 views
iDiallo 1 months ago

The real cost of Compute

Somewhere along the way, we stopped talking about servers. The word felt clunky, industrial, too tied to physical reality. Instead, we started saying "the cloud". It sounds weightless, infinite, almost magical. Your photos live in the cloud. Your documents sync through the cloud. Your company's entire infrastructure runs in the cloud. I hated the term cloud. I wasn't alone: someone actually created a "cloud to butt" browser extension that was pretty fun and popular. But the world has adopted the term, and I had no choice but to oblige.

So what is the actual cloud? Why is it hiding behind this abstraction? Well, the cloud is rows upon rows of industrial machines, stacked in massive data centers, consuming electricity at a scale most of us can't even imagine. The cloud isn't floating above us. It's bolted to concrete floors, surrounded by cooling systems, and plugged into power grids that strain under its appetite.

I'm old enough to remember the crypto boom and the backlash that followed. Critics loved to point out that Bitcoin mining consumed as much electricity as entire countries. Argentina, the Netherlands, and many other nations were picked for comparison. But I was not outraged by it at all. My reaction at the time was simpler. Why does it matter if they pay their electric bill? If you use electricity and compensate for it, isn't that just... how markets work? Turns out, I was missing the bigger picture. And the AI boom has made it impossible to ignore.

When new data centers arrive in a region, everyone's electric bill goes up, even if your personal consumption stays exactly the same. It has nothing to do with fairness and free markets. Infrastructure is not free. The power grids weren't designed for the sudden addition of facilities that consume megawatts continuously. When demand surges beyond existing capacity, utilities pass those infrastructure costs onto everyone. New power plants get built, transmission lines get upgraded, and residential customers help foot the bill through rate increases. The person who never touches AI, never mines crypto, never even knows what a data center does, this person is now subsidizing the infrastructure boom through their monthly utility payment. The cloud, it turns out, has a very terrestrial impact on your wallet.

We've abstracted computing into its purest conceptual form: "compute." I have to admit, it's my favorite term in tech. "Let's buy more compute." "We need to scale our compute." It sounds frictionless, almost mathematical. Like adjusting a variable in an equation. Compute feels like a slider you can move up and down in your favorite cloud provider's interface. Need more? Click a button. Need less? Drag it down. The interface is clean, the metaphor is seamless, and completely disconnected from the physical reality. But in the real world, "buying more compute" means someone is installing physical hardware in a physical building. It means racks of servers being assembled, hard drives being mounted, cables being routed. The demand has become so intense that some data center employees have one job and one job only: installing racks of new hard drives, day in and day out. It's like an industrial assembly line. Every gigabyte of "cloud storage" occupies literal space. Every AI query runs on actual processors that generate actual heat. The abstraction is beautiful, but the reality is concrete and steel.

The cloud metaphor served its purpose. It helped us think about computing as a utility.
It's always available, scalable, detached from the messy details of hardware management. But metaphors shape how we think, and this one has obscured too much for too long. Servers are coming out of their shells. The foggy cloud is lifting, and we're starting to see the machinery underneath: vast data centers claiming real estate, consuming real water for cooling, and drawing real power from grids shared with homes, schools, and hospitals. This isn't an argument against cloud computing or AI. There's nothing to go back to. But we need to acknowledge their physical footprint. The cloud isn't a magical thing in the sky. It's industry. And like all industry, it needs land, resources, and infrastructure that we all share.

0 views
iDiallo 1 months ago

Making a quiet stand with your privacy settings

After one of the largest refactors of our application, one that was several months in the making, we tackled some of our biggest challenges. We paid down technical debt, upgraded legacy software, fortified security, and even made the application faster. After all that, we deployed the application and held our breath, waiting for the user feedback to roll in. Well, nothing came in. There were no celebratory messages about the improved speed, no complaints about broken features, no comments at all. The deployment was so smooth it was invisible. To the business team, it initially seemed like we had spent vast resources for no visible return. But we knew the underlying truth. Sometimes, the greatest success is defined not by what happens, but by what doesn't happen. The server that doesn't crash. The data breach that doesn't occur. The user who never notices a problem. This is the power of a quiet, proactive defense. In this digital world, where everything we do leaves a data point, it's not easy to recognize success. When it comes to privacy, taking a stand isn't dramatic. In fact, its greatest strength is its silence.

We're conditioned to believe that taking a stand should feel significant. We imagine a public declaration, a bold button that flashes "USER REBELLION INITIATED!" when pressed. Just think about people publicly announcing they are leaving a social media platform. But the reality of any effective digital self-defense is far more mundane. When I disagree with a website's data collection, I simply click "Reject All." No fanfare. No message telling the company, "This user is privacy-conscious!" My resistance is registered as a non-action. A void in their data stream. When I read that my Vizio Smart TV was collecting viewing data, I navigated through a labyrinth of menus to find the "Data Collection" setting and turned it off. The TV kept working just fine. Nothing happened, except that my private viewing habits were no longer becoming a product to be sold. They didn't add a little icon in the top corner that signifies "privacy-conscious." Right now, many large language models like ChatGPT have "private conversation" settings turned off by default. When I go into the settings and enable the option that says, "Do not use my data for training," there's no confirmation, no sense of victory. It feels like I've done nothing. But I have. This is what proactive inaction looks like.

Forming a new habit is typically about adding an action. Going for a run every morning, drinking a glass of water first thing, reading ten pages a night. But what about the habit of not doing? When you try to simply "not eat sugar," you're asking your brain to form a habit around an absence. There's no visible behavior to reinforce, no immediate sensory feedback to register success, and no clear routine to slot into the habit loop. Instead, you're relying purely on willpower. A finite resource that depletes throughout the day, making evening lapses almost inevitable. Your brain literally doesn't know what to practice when the practice is "nothing." It's like trying to build muscle by not lifting weights. The absence of action creates an absence of reinforcement, leaving you stuck in a constant battle of conscious resistance rather than unconscious automation. Similarly, the habit of not accepting default settings is a habit of inaction. You are actively choosing not to participate in a system designed to exploit your data. It's hard because it lacks the dopamine hit of a checked box.
There's no visible progress bar for "Privacy Secured." But the impact is real. This quiet practice is our primary defense against what tech writer Cory Doctorow calls "enshittification": the process where platforms decay by first exploiting users, then business customers, until they become useless, ad-filled pages with content sprinkled around. It's also our shield against hostile software that prioritizes its own goals over yours. Not to blame the victims, but I like to remind people that they have agency over the software and tools they use. And your agency includes the ultimate power to walk away. If a tool's settings are too hostile, if it refuses to respect your "no," then your most powerful setting is the "uninstall" button. Choosing not to use a disrespectful app is the ultimate, and again, very quiet, stand.

So, I challenge everyone to embrace the quiet. See the "Reject All" button not as a passive refusal, but as an active shield. See the hidden privacy toggle not as a boring setting, but as something worth actively searching for. The next time you download a new app or create a new account, take five minutes. Go into the settings. Look for "Privacy," "Data Sharing," "Personalization," or "Permissions." Turn off what you don't need. Nothing will happen. Your feed won't change, the app won't run slower, and no one will send you a congratulatory email. And that's the whole point. You will have succeeded in the same way our refactor succeeded: by ensuring something unwanted doesn't happen. You've strengthened your digital walls, silently and without drama, and in doing so, you've taken one of the most meaningful stands available to us today.

0 views
iDiallo 1 months ago

How Do You Send an Email?

It's been over a year, and I haven't received a single notification email from my web server. It could either mean that my $6 VPS is amazing and hasn't gone down once this past year, or it could mean that my health check service has gone down. Well, this year I received emails from readers telling me my website was down. After doing some digging, I discovered that my health checker works just fine, but all the emails it sends are being rejected by Gmail. Unless you use a third-party service, you have little to no chance of sending an email that gets delivered.

Every year, email services seem to become a tad more expensive. When I first started this website, sending emails to my subscribers was free on Mailchimp. Now it costs $45 a month. On Buttondown, as of this writing, it costs $29 a month. What are they doing that costs so much? It seems like sending emails is impossibly hard, something you can almost never do yourself. You have to rely on established services if you want any guarantee that your email will be delivered. But is it really that complicated?

Emails, just like websites, use a basic communication protocol to function. For you to land on this website, your browser somehow communicated with my web server, did some negotiating, and then my server sent HTML data that your browser rendered on the page. But what about email? Is the process any different? The short answer is no. Email and the web work in remarkably similar fashion. Here's the short version: in order to send me an email, your email client takes the email address you provide, connects to my server, does some negotiating, and then my server accepts the email content you intended to send and saves it. My email client will then take that saved content and notify me that I have a new message from you. That's it. That's how email works.

So what's the big fuss about? Why are email services charging $45 just to send ~1,500 emails? Why is it so expensive, while I can serve millions of requests a day on my web server for a fraction of the cost? The short answer is spam. But before we get to spam, let's get into the details I've omitted from the examples above. The negotiations. How similar are email and web traffic, really?

When you type a URL into your browser and hit enter, here's what happens:

- DNS Lookup: Your browser asks a DNS server, "What's the IP address for this domain?" The DNS server responds with something like 93.184.216.34.
- Connection: Your browser establishes a TCP connection with that IP address on port 80 (HTTP) or port 443 (HTTPS).
- Request: Your browser sends an HTTP request: "GET /blog-post HTTP/1.1"
- Response: My web server processes the request and sends back the HTML, CSS, and JavaScript that make up the page.
- Rendering: Your browser receives this data and renders it on your screen.

The entire exchange is direct, simple, and happens in milliseconds. Now let's look at email. The process is similar:

- DNS Lookup: Your email client takes my email address ([email protected]) and asks a DNS server, "What's the mail server for example.com?" The DNS server responds with an MX (Mail Exchange) record pointing to my mail server's address.
- Connection: Your email client (or your email provider's server) establishes a TCP connection with my mail server on port 25 (SMTP) or port 587 (for authenticated SMTP).
- Negotiation (SMTP): Your server says "HELO, I have a message for [email protected]." My server responds: "OK, send it."
- Transfer: Your server sends the email content, headers, body, attachments, using the Simple Mail Transfer Protocol (SMTP).
- Storage: My mail server accepts the message and stores it in my mailbox, which can be a simple text file on the server.
- Retrieval: Later, when I open my email client, it connects to my server using IMAP (port 993) or POP3 (port 110) and asks, "Any new messages?" My server responds with your email, and my client displays it.

Both HTTP and email use DNS to find servers, establish TCP connections, exchange data using text-based protocols, and deliver content to the end user. They're built on the same fundamental internet technologies. So if email is just as simple as serving a website, why does it cost so much more? The answer lies in a problem that both systems share but handle very differently: unwanted third-party writes.

Both web servers and email servers allow outside parties to send them data. Web servers accept form submissions, comments, API requests, and user-generated content. Email servers accept messages from any other email server on the internet. In both cases, this openness creates an opportunity for abuse. Spam isn't unique to email; it's everywhere. My blog used to get around 6,000 spam comments on a daily basis. On the greater internet, you will see spam comments on blogs, spam account registrations, spam API calls, spam form submissions, and yes, spam emails.

The main difference is visibility. When spam protection works well, it's invisible. You visit websites every day without realizing that, behind the scenes, CAPTCHAs are blocking bot submissions, rate limiters are rejecting suspicious traffic, and content filters are catching spam comments before they're published. You don't get to see the thousands of spam attempts that happen every day on my blog, because of some filtering I've implemented. On a well-run web server, the work is invisible.

The same is true for email. A well-run email server silently:

- Checks sender reputation against blacklists
- Validates SPF, DKIM, and DMARC records
- Scans message content for spam signatures
- Filters out malicious attachments
- Quarantines suspicious senders

There is a massive amount of spam. In fact, spam accounts for roughly 45-50% of all email traffic globally. But when the system works, you simply don't see it.

If we can combat spam on the web without charging exorbitant fees, email spam shouldn't be that different. The technical challenges are very similar:

- Both require reputation systems
- Both need content filtering
- Both face distributed abuse
- Both require infrastructure to handle high volume

Yet a basic web server on a $5/month VPS can handle millions of requests with minimal spam-fighting overhead. Meanwhile, sending 1,500 emails costs $29-45 per month through commercial services. The difference isn't purely technical. It's about reputation, deliverability networks, and the ecosystem that has evolved around email. Email providers have created a cartel-like system where your ability to reach inboxes depends on your server's reputation, which is nearly impossible to establish as a newcomer. They've turned a technical problem (spam) into a business moat. And we're all paying for it.

Email isn't inherently more complex or expensive than web hosting. Both the protocols and the infrastructure are similar, and the spam problem exists in both domains. The cost difference is mostly artificial. It's the result of an ecosystem that has consolidated around a few major providers who control deliverability. It doesn't help that Intuit owns Mailchimp now. Understanding this doesn't necessarily change the fact that you'll probably still need to pay for email services if you want reliable delivery. But it should make you question whether that $45 monthly bill is really justified by the technical costs involved, or whether it's just the price of admission to a gatekept system.
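To make the "it really is that simple" point concrete, here's a minimal Python sketch of the whole negotiation using only the standard library. The mail server name and credentials are hypothetical placeholders; whether the message lands in an inbox is, as the rest of this post argues, a reputation problem, not a code problem.

import smtplib
from email.message import EmailMessage

# Build the message: the headers and body that get sent over SMTP.
msg = EmailMessage()
msg["From"] = "[email protected]"        # hypothetical sender
msg["To"] = "[email protected]"          # hypothetical recipient
msg["Subject"] = "Hello"
msg.set_content("Sent straight from a $6 VPS.")

# Connect, negotiate, transfer: the HELO/MAIL/RCPT/DATA dance happens here.
with smtplib.SMTP("mail.example.com", 587) as server:  # hypothetical mail server
    server.starttls()                                  # upgrade the connection to TLS
    server.login("me", "app-password")                 # hypothetical credentials
    server.send_message(msg)

A dozen lines to send; SPF, DKIM, DMARC, and years of sender reputation to actually get delivered.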

0 views