Latest Posts (20 found)
iDiallo Yesterday

Back button hijacking is going away

When websites are blatantly hostile, users close them and never come back. Have you ever downloaded an app, realized it was deceptive, and deleted it immediately? It's a common occurrence for me. But there is truly hostile software that we still end up using daily. We don't delete those apps, because the hostility is far more subtle. It's like the boiling frog: the heat turns up so slowly that the frog enjoys a nice warm bath before it's fully cooked. Clever hostile software introduces one frustrating feature at a time.

Every time I find myself on LinkedIn, it's not out of pleasure. Maybe it's an email about an enticing job. Maybe it's an article someone shared with me. Either way, before I click the link, I have no intention of scrolling through the feed. Yet I end up on it anyway, not because I want to, but because I've been tricked. You see, LinkedIn employs a trick called back button hijacking. You click a LinkedIn URL that a friend shared, read the article, and when you're done, you click the back button expecting to return to whatever app you were in before. But instead of going back, you're still on LinkedIn. Except now you are on the homepage, where your feed loads with enticing posts that lure you into scrolling. How did that happen? How did you end up on the homepage when you only clicked on a single link? That's back button hijacking.

Here's how it works. When you click the original LinkedIn link, you land on a page and read the article. In the background, LinkedIn secretly gets to work. Using the JavaScript history.replaceState() method, it swaps the page's URL to the homepage's URL. That method doesn't add an entry to the browser's history. Then LinkedIn manually pushes the original URL you landed on back onto the history stack with history.pushState(). This all happens so fast that the user never notices any change in the URL or the page. As far as the browser is concerned, you opened the LinkedIn homepage and then clicked on a post to read it. So when you click the back button, you're taken back to the homepage, the feed loads, and you're presented with the most engaging posts to keep you on the platform. If you spent a few minutes reading the article, you probably won't even remember how you got to the site. So when you click back and see the feed, you won't question it. You'll assume nothing deceptive happened.

While LinkedIn only pushes you one level down in the history stack, more aggressive websites can break the back button entirely. They push a new history state every time you try to go back, effectively trapping you on their site. In those cases, your only option is to close the tab.

I've also seen developers unintentionally break the back button, often when implementing a search feature. On a search box where each keystroke returns a result, an inexperienced developer might push a new history state on every keystroke, intending to let users navigate back to previous search terms. Unfortunately, this creates an excessive number of history entries. If you typed a long search query, you'd have to click the back button for every character (including spaces) just to get back to the previous page. The correct approach is to only push a history state when the user submits or leaves the search box.
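Both behaviors come down to the same two History API calls; the only difference is intent. A minimal sketch (selectors and URLs are illustrative, not LinkedIn's actual code):

```javascript
// The hijack, run quietly while you read the article:
const articleUrl = location.href;          // the URL you actually clicked
history.replaceState(null, '', '/feed');   // rewrite the current entry in place; adds no entry
history.pushState(null, '', articleUrl);   // push the article back on top
// The browser now believes you visited /feed first, then the article.
// Pressing Back lands on /feed, where a popstate handler renders the feed.

// The honest version of the same API, for a live search box:
const form = document.querySelector('#search-form');
const input = document.querySelector('#search-input');

input.addEventListener('input', () => {
  // keep the URL in sync on every keystroke WITHOUT flooding history
  history.replaceState(null, '', '?q=' + encodeURIComponent(input.value));
});

form.addEventListener('submit', () => {
  // one entry per submitted query, so Back steps through searches, not characters
  history.pushState(null, '', '?q=' + encodeURIComponent(input.value));
});
```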
As of yesterday, Google announced a new spam policy to address this issue. Their reasoning: people report feeling manipulated and eventually less willing to visit unfamiliar sites.

"As we've stated before, inserting deceptive or manipulative pages into a user's browser history has always been against our Google Search Essentials. Any website using these tactics will be demoted in search results: Pages that are engaging in back button hijacking may be subject to manual spam actions or automated demotions, which can impact the site's performance in Google Search results. To give site owners time to make any needed changes, we're publishing this policy two months in advance of enforcement on June 15, 2026."

I'm not sure how much search rankings affect LinkedIn specifically, but in the grand scheme of things, this is a welcome change. I hope this practice is abolished entirely.

0 views
iDiallo 2 days ago

You paid for it, you should be comfortable in it

A friend of mine bought a Tesla Roadster back in the early 2010s. At the time, spotting a Tesla on the road was a rare event. Maybe even occasion enough to stop and take a picture. I never got the chance to photograph one, let alone drive one, until I met this new friend recently. This was my chance to experience the car firsthand.

We walked to the parking structure to see it. As soon as he opened the door, something looked... off. On the outside, it was a pristine, six-figure roadster. But the inside looked completely custom. Not "custom" in the sense of a professional shop install, but more like the driver himself had grabbed a hammer and chisel and made it his own. First, the driver's seat had been altered. It sat much lower than usual and didn't match the passenger seat. My friend stands 6'7", and the Roadster is a tiny car. He physically couldn't fit, so he modified the seat rails to lower it. But that fix created a new problem: the door armrest now dug into his hip. So he took a file to the interior panel, shaved it down, and 3D printed a smaller, ergonomic armrest. He even 3D printed a cup holder for the passenger side so his coffee was within reach.

To me, the idea of taking a Dremel or a file to a $100,000+ car was unimaginable. You'd have to be crazy to do it. He caught the look on my face and shrugged. "Hey, it's my car. I paid for it. I intend to be comfortable in it." I had never thought of it like that. That sentiment stuck with me.

Recently, when I read an article by Kent Walters about filing down the corners of his MacBook, those same feelings resurfaced. My work MacBook has edges so sharp that I've often felt like I was slicing my wrist on the chassis. I treated this as a design flaw I had to endure. But not Kent. He treated it as an obstacle to be removed. He literally filed down the corners of his laptop to make sure the machine he uses every day was comfortable.

I may not have the guts to file my work-issued MacBook, but I'm no stranger to customization... in software. I modify my tools constantly. I spend days tweaking my IDE, remapping keyboard shortcuts, and writing custom scripts until the software is unrecognizable to anyone else on my team. I don't think twice about rewriting a config file to make the tool fit my brain. When I was a kid, I always had a screwdriver around, fixing a device that wasn't really broken. On the home computer, I modified everything. I once deleted all the files to improve performance. It didn't work, but it led to a fruitful career.

But somehow, when it comes to expensive hardware now, I freeze. I treat the physical object as a museum piece to be preserved. I bought a docking station to banish the laptop to a shelf, using an external mouse and keyboard to avoid touching the sharp chassis. I built a complex workaround to accommodate the tool, rather than performing the simple, brutal act of modifying the tool to accommodate me.

We treat our physical tools as if they are on loan from the manufacturer. You'll see a musician buy a vintage guitar but refuse to adjust the action, terrified of ruining the "collector's value." Meanwhile, the working guitarist has sanded down the neck and covered it in stickers because it feels better in their hand. The software engineer accepts the default keybindings to avoid "bad habits," while the power user creates a layout that doubles their speed. If you own a tool, whether it's a car, a computer, or a line of code, you own the right to change it.
The manufacturer designed it for the "average" user, but you are a specific human with specific needs. Remember grandma's couch in the living room? It had that plastic cover on it. It was so uncomfortable, but no one dared to remove it. The plastic was there to preserve the sofa. No one got to enjoy it; instead, everyone accommodated the couch, all to preserve a value that no one ever benefited from. Don't let the perceived value of an object stop you from making it truly yours. A tool with battle scars is a tool that is loved.

0 views
iDiallo 3 days ago

Your AWS Certificate Makes You an AWS Salesman

I must have been the last developer still confused by the AWS interface. I knew how to access DynamoDB; that was the only tool I needed for my daily work. But everything else was a mystery. How do I access web hosting? If I needed a small server to host a static website, what service would I use? Searching for "web hosting" inside the AWS console yielded nothing. After digging through the web, I found the answer: an Elastic Compute Cloud instance, better known as EC2. I learned that I could use it under the "Free Tier." Amazon offers free tiers for many services, but figuring out the actual cost beyond that introductory period requires elaborate calculation tools. In fact, I've often seen independent developers build tools specifically to help people decipher AWS pricing.

If you want to use AWS effectively, it seems the only path is to get certified. Companies send employees to conferences and courses to learn the platform. I took some of those courses, and they taught me how to navigate the interface and build very specific things. But that skill isn't transferable. In the course, I wasn't exactly learning a new engineering skill. Instead, I was learning Amazon.

Amazon has created a complex suite of tools that has become the industry standard. Hidden within its moat of confusion, we are trained to believe it is the only option. Its complexity justifies the high cost, and the Free Tier lures in new users who settle into the idea that this is just "the way" to do web development. When you are presented with a simple interface like DigitalOcean or Linode and a much cheaper price tag, you tend to think that something is missing. Surely a cheaper, simpler service must lack half the features, right? The reality is, you don't need half the stuff AWS offers.

Where other companies create tutorials to help you build, Amazon offers certificates. That's a powerful signal for enterprise legitimacy, but for most developers, it is overkill. This isn't to say AWS is "bad," but it obscures the reality of running a web service. It is much easier than it seems. There are hundreds of alternatives for hosting. You can run your services reliably on a VPS without ever breaking the bank. Most web programming is free, or at the very least, affordable.

0 views
iDiallo 5 days ago

Your friends are hiding their best ideas from you

Back in college, the final project in our JavaScript class was to build a website. We were a group of four, and we built the best website in the class. It was for a restaurant called the Coral Reef. We found pictures online, created a menu, and settled on a solid theme. I was taking a digital art class in parallel, so I used my Photoshop skills to place our logo inside pictures of our fake restaurant. All of a sudden, something clicked. We were admiring our website on a CRT monitor when my classmate pulled me aside. She had an idea. A business idea. An idea so great that she couldn't share it with the rest of the team. She whispered, covering her mouth with one hand so a lip reader couldn't steal this fantastic idea: "What if we build websites for people?"

This was the 2000s; of course it was a fantastic idea. The perfect time to spin up an online business after a market crash. But what she didn't know was that, while I was in class in the mornings, my afternoons were spent scouring Craigslist and building crappy websites for a hundred to two hundred dollars apiece. I wasn't going to share my measly spoils. If anything, this was the perfect time to build that kind of service. "That's a great idea," I said.

There is something satisfying about having an idea validated. A sort of satisfaction we get from the acknowledgment. We are smart, and our ideas are good. Whenever someone learned that I was a developer, they felt this urge to share their "someday" idea. It's an app, a website, or some technology I couldn't even make sense of. I used to try to dissect these ideas, get to the nitty-gritty details, scrutinize them. But that always ended in hostility. "Yeah, you don't get it. You probably don't have enough experience" was a common response when I didn't give a resounding yes.

I don't get those questions anymore, at least not framed in the same way. I have worked for decades in the field, and I even have a few failed start-ups under my belt. I'm ready to hear your ideas. But that job has been taken, and not by another eager developer with even more experience or a successful start-up on their résumé. No, not a person. AI took this job. Somewhere behind a chatbot interface, an AI is telling one of your friends that their idea is brilliant. Another AI is telling them to write out the full details in a prompt and it will build the app in a single stroke. That friend probably shared a localhost:3000 link with you, or a Lovable app, last year. That same friend was satisfied with the demo they saw then and has most likely moved on.

In the days when I stood as a judge, validating an idea was rarely what sparked a business. The satisfaction was in the telling. And today, a prompt is rarely a spark either. In fact, the prompt is not enough. My friends share a link to their ChatGPT conversation as proof that their idea is brilliant. I can't deny it; the robot has already spoken. I'm not the authority on good or bad ideas. I've called ideas stupid that went on to make millions of dollars. (A ChatGPT wrapper for SMS, for instance.)

A decade ago, I was in Y Combinator's Startup School. In my batch, there were two co-founders: one was the developer, and the other was the idea guy. In every meeting, the idea guy would come up with a brand new idea that had nothing to do with their start-up. The instructor tried to steer him toward being the salesman, but he wouldn't budge. "My talent is in coming up with ideas," he said. We love having great ideas.
We're just not interested in starting a business, because that's what it actually takes. A friend will joke, "Here's an idea," then proceed to tell me their idea. "If you ever build it, send me my share." They are not expecting me to build it. They are happy to have shared a great idea. As for my classmate, she never spoke of the business again. But over the years, she must have sent me at least a dozen clients. It was a great idea after all.

0 views
iDiallo 1 week ago

AI Did It in 12 Minutes. It Took Me 10 Hours to Fix It

I've been working on personal projects since the 2000s. One thing I've always been adamant about is understanding the code I write. Even when Stack Overflow came along, I was that annoying guy who told people not to copy and paste code into their repos. Instead, they should read it and adapt it to their specific case. On personal projects, I've applied this to a fault. Projects never get done because I'm reading and editing code to make it work exactly as I want. I am by no means trying to convince you that my code is high quality. Every day, I regret the design choices I made for this very blog. But at the very least, I like to understand the code that powers my projects.

So you can imagine how I struggle with the reviewing part when AI writes a large chunk of our daily work. Large language models are just so verbose, and they often produce large blocks of code that don't even get used. I don't want to attribute it to malice (wasting your tokens) when I know this is an emergent technology we are all still adapting to. But it doesn't help that there is just so much code to review. What I tell myself when I review an AI-generated PR is: if I don't have a mental model of how the application works, how can I be of any use when it fails?

This weekend, I decided to tackle a project I've been postponing since I created this blog over a decade ago. I needed a nice interface to upload assets, such as images, to go with each blog post. According to my git history, I started work on it 11 years ago but never finished. Yet I've still been uploading images regularly to go with blog posts. How? Using FTP and FileZilla. Like a caveman. Whenever I finish writing a blog post, I create images in Photoshop or Photopea, fire up FileZilla to connect to my file server, and drag and drop my assets. Then, of course, I have to log back into the blog's admin panel and provide the correct URLs for those assets. It's as primitive as it gets.

This was a perfect project to tackle with AI. The good news is that I was able to complete it in a single day. The bad news? Well, I read through close to 5,000 lines of code. This is the first time I've used an LLM to write PHP code. I used the GLM-5 model from z.ai, which I've often played with for generating graphs from data. I'll have to try it on a few more PHP projects before I commit to this opinion, but it consistently wrote spaghetti code. Spaghetti code, to me, is anything where logic is embedded directly in a template. HTML, CSS, and database queries all crammed into the same file. That's really hard to maintain.

I did realize, though, that my instructions were in direct contradiction with the agent's system prompt. It was explicit in its thinking process: "This is NOT a Next.js project - the user explicitly wants a PHP app. So I should NOT use the Next.js project that's already initialized. I should create a PHP application. Wait, but the instructions say 'You are an interactive CLI tool that helps users with software engineering tasks' and 'Now you are developing a comprehensive and feature-rich Next.js project from scratch.' But the user is explicitly asking for a PHP app. I should follow the user's request." It did generate a bunch of Node.js code, which I was able to remove manually. Luckily, it kept the PHP project in its own folder. If you're wondering how 12 files contain ~5,000 lines of code, I wondered the same. But that's what spaghetti code does. I set it up locally, ran the install and setup scripts, and a few more files and folders were generated.
When I finally ran the application, it didn't work. I spent a few hours working through permissions, updating the install script, and modifying the SQLite setup. I thought Stack Overflow was dead, but I don't think I would have gotten SQLite working without it. One error, for example, was that SQLite kept throwing a warning that it was running in read-only mode. Apparently, you have to make the parent folder writable (not just the database file) to enable write mode, because SQLite needs to create its journal files alongside the database.

It had been a long time since I'd manually require'd files in PHP. I normally use namespaces and an autoloader. Since this project was generated from scratch, I had to hunt down various require statements that all had incorrect paths. Once I sorted those out, I had to deal with authentication. PHP sessions come with batteries included: you call session_start() and you can read and write session variables via the $_SESSION global. But I couldn't figure out why it kept failing. When I created a standalone test file, sessions worked fine. But when loaded through the application, values weren't being saved. I spent a good while debugging before I found that the session initialization was missing from the login success flow. When I logged in, the page redirected to the dashboard, but every subsequent action that required authentication immediately kicked me out.
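If it's been a while since you've written vanilla PHP, here is roughly what that flow looks like. A minimal sketch, with credentials_are_valid() as a made-up placeholder:

```php
<?php
// --- login.php (sketch) ---
session_start();                  // must run before touching $_SESSION;
                                  // without it, the write below is never persisted
if (credentials_are_valid($_POST['user'] ?? '', $_POST['pass'] ?? '')) { // placeholder check
    $_SESSION['user_id'] = 42;    // mark the user as authenticated
    header('Location: /dashboard.php');
    exit;
}

// --- dashboard.php, or any page behind authentication (sketch) ---
session_start();
if (empty($_SESSION['user_id'])) {
    // if the session was never saved, every protected action lands here:
    // the "immediately kicked out" behavior described above
    header('Location: /login.php');
    exit;
}
```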
Even after fixing all those issues and getting uploads working, something still bothered me: how do I maintain this code? How do I add new pages to manage uploaded assets? Do I add meatballs directly to the spaghetti? Or do I just trust the AI agent to know where to put new features? Technically it could do that, but I'd have to rely entirely on the AI without ever understanding how things work. So I did the only sane thing: I rewrote a large part of the code and restructured the project. Maybe I should have started there, but I didn't know what I wanted until I saw it. Which is probably why I had been dragging this project along for 11 years. Yes, now I have 22 files, almost double the original count. But the code is also much simpler at just 1,254 lines. There's far less cognitive load when it comes to fixing bugs. There's still a lot to improve, but it's a much leaner foundation.

The question I keep coming back to is: would it have been easier to do this manually? Well, the timeline speaks for itself. I had been neglecting this project for years. Without AI, I probably never would have finished it. That said, it would have been easier to build on my existing framework. My blog's framework has been tested for years and has accumulated a lot of useful features: a template engine, a working router, an auth system, and more. All things I had to re-engineer from scratch here. If I'd taken the time to work within my own framework, it probably would have taken less time overall. But AI gave me the illusion that the work could be done much faster. Z.ai generated the whole thing in just 12 minutes. It took an additional 10 hours to clean it up and get it working the way I wanted.

This reminds me of several non-technical friends who built (vibe-coded) apps last year. The initial results looked impressive. Most of them don't have a working app anymore, because they realized that the cleanup is just as important as the generation if you want something that actually holds together. I can only imagine what "vibe-debugging" looks like. I'm glad I have a working app, but I'm not sure I can honestly call this vibe-coded. Most, if not all, of the files have been rewritten.

When companies claim that a significant percentage of their code is AI-generated, do their developers agree? For me, it's unthinkable to deploy code I haven't vetted and understood. But I'm not the benchmark. In the meantime, I think I've earned the right to say this the next time I ship an AI-assisted app: "I apologize for so many lines of code - I didn't have time to write a shorter app."

0 views
iDiallo 1 week ago

It's not that deep

I have these Sunday evenings where I find myself sitting alone at the kitchen table, thinking about my life and how I got here. Usually, these sessions end with an inspiring idea that makes me want to get up and build something. I remember the old days when I couldn't even sleep because I had all these ideas bubbling in my head, and I could just get up and act on them because I had no familial responsibilities. I still have that flair in me, but I don't always give in to those ideas. Instead, sometimes I choose to do something much simpler. I read. Sometimes it's a book, sometimes it's a blog post. But always, it's something that stimulates my mind more than any startup idea or tech disruption.

There is a blog I follow; I'm not even sure how I stumbled upon it. I don't think it has a newsletter, and it's not in my RSS feed. But it's in my mind. It's as if I can just feel it when the author posts something new. Right there at the kitchen table, I load it up, and I get a glimpse into someone else's life. I don't know much about this person, but reading her writing is soothing. It's not commercialized; the most I can say about it is, well, it is human.

When I read something that is written, anything that's written, I expect to hear the voice of the person behind it. Whether it is a struggle, a victory, or just a small remark. It only makes sense when there is a person behind it. Sometimes people write, and in order to sound professional, they remove their voice from it. It becomes like reading a Corporate Memphis blog. Devoid of any humanity.

It's weird how I have these names in my head. Keenen Charles. I check his blog on Sunday evenings as well. In fact, here are a few I read recently, in no particular order: Keenen Charles, Jerry (I particularly liked this specific article), and the blog whose author's name I don't know.

This isn't to tell you that you need to read those articles or you will be left behind. It's not that deep. You can find things you like, and enjoy them at your own comfort. They don't have to be world-changing, they don't have to turn you into a millionaire, they just have to make you smile or nod for a moment. The world is constantly trying to remind us that we are at the edge of destruction. But you, the person sitting there, reading a random blog post from this random Guinean guy, yes, you. Take it easy for the rest of the day.

0 views
iDiallo 1 week ago

Zipbombs are not as effective as they used to be

Last year, I wrote about my server setup and how I use zipbombs to mitigate attacks from rogue bots. It was an effective method that helped my blog survive for 10 years. I usually hesitate to write these types of articles, especially since it means revealing the inner workings of my own servers. This blog runs on a basic DigitalOcean droplet, a modest setup that can handle the usual traffic spikes without breaking a sweat. But lately, things have started to change. My zipbomb strategy doesn't seem to be as effective as it used to be.

TL;DR: What I learned... and won't tell you.

Here is the code I shared last year:
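The gist of it, reconstructed as a sketch (the file name and sizes are illustrative): serve a small pre-compressed gzip file and let the client inflate it.

```php
<?php
// Serve a small pre-compressed file that expands enormously on the client.
// '10G.gzip' is a few MB on disk but gigabytes once inflated.
header('Content-Encoding: gzip');                   // the client inflates it transparently
header('Content-Length: ' . filesize('10G.gzip')); // advertise only the on-disk size
if (ob_get_level()) {
    ob_end_clean();                                 // turn off output buffering
}
readfile('10G.gzip');                               // stream the bomb and stop
exit;
```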
I deliberately didn't reveal what one function in that snippet does in the background. But that wasn't really the secret sauce bots needed to know to avoid my trap. In fact, I mentioned it casually: "One more thing, a zip bomb is not foolproof. It can be easily detected and circumvented. You could partially read the content after all. But for unsophisticated bots that are blindly crawling the web disrupting servers, this is a good enough tool for protecting your server."

One way to test whether my zipbomb was working was to place an abusive IP address in my blacklist and serve it a bomb. Those bots would typically access hundreds of URLs per second. But the moment they hit my trap, all requests from that IP would cease immediately. They don't wave a white flag or signal that they'll stop the abuse. They simply disappear on my end, and I imagine they crash on theirs.

For a lean server like mine, serving 10 MB per request at a rate of a couple per second is manageable. But serving 10 MB per request at a rate of hundreds per second takes a serious toll. Serving large static files had already been a pain through Apache2, which is why I moved static files to a separate nginx server to reduce the load. Now, bots that ingest my bombs, detect them, and continue requesting without ever crashing have turned my defense into a double-edged sword. Whenever there's an attack, my server becomes unresponsive, requests are dropped, and my monthly bandwidth gets eaten up. Worst of all, I'm left with a database full of spam. Thousands of fake emails in my newsletter and an overwhelmed comment section. After combing through the logs, I found a pattern and fixed the issue.

AI-driven bots, or simply bots that do more than scrape or spam, are far more sophisticated than their dumber counterparts. When a request fails, they keep trying. And in doing so, I serve multiple zipbombs and end up effectively DDoS-ing my own server. Looking at my web server settings: I run 2 instances of Apache, each with a minimum of 25 workers and a maximum of 75. Each worker consumes around 2 MB for a regular request, so I can technically handle 150 concurrent requests before the next one is queued. That's 300 MB of memory on my 1 GB RAM server, which should be plenty. The problem is that Apache is not efficient at serving large files, especially when they pass through a PHP instance. Instead of consuming just 2 MB per worker, serving 10 MB zipbombs pushes usage to around 1.5 GB of RAM to handle those requests. In the worst case, this sends the server into a panic and triggers an automatic restart. Meaning that during a bot swarm, my server becomes completely unresponsive.

And yet, here I am complaining, while you're reading this without experiencing any hiccups. So what did I do? For one, I turned off the zipbomb defense entirely. As for spam, I've found another way to deal with it. I still get the occasional hit when individuals try to game my system manually, but as for my broader defense mechanism, I'm keeping my mouth shut. I've learned my lesson. I've spent countless evenings reading through spam and bot patterns to arrive at a solution. I wish I could share it, but I don't want to go back to the drawing board. Until the world collectively arrives at a reliable way to handle LLM-driven bots, my secret stays with me.

0 views
iDiallo 2 weeks ago

13th Year of Blogging

Of all the days to start a blog, I chose April Fools' Day. It wasn't intentional; it was maybe more a reflection of my mindset. When I decide to do something, I shut off my brain and just do it. This was a commitment I made without thinking about the long-term effects. I knew writing was hard, but I didn't know how hard. I knew that maintaining a server was hard, but I didn't know the stress it would cause. Especially that first time I went viral. Seeing traffic pour in, reading back the article, and realizing it was littered with errors. I was scrambling to fix those errors while users hammered my server. I tried restarting it to relieve the load and update the content, but to no avail. It was a stressful experience. One I wouldn't trade for anything in the world.

13 years later, it feels like the longest debugging session I've ever run. Random people message me pointing out bugs. Some of it is complete nonsense. But others... well, I actually sent payment to a user who sent me a proof of concept showing how to compromise the entire server. I thought he'd done some serious hacking, but when I responded, he pointed me to one of my own articles where I had accidentally revealed a vulnerability in my framework.

The amount you learn from running your own blog can't be replicated by any other means. Unlike other side projects that come and go, the blog has to remain. Part of its value is its longevity. No matter what, I need to make sure it stays online. In the age of AI, it feels like anyone can spin up a blog and fill it with LLM-generated content to rival any established one. But there's something no LLM can replicate: longevity. No matter what technology we come up with, no tool can create a 50-year-old oak tree. The only way to have one is to plant a seed and give it the time it needs to grow.

Your very first blog post may not be entirely relevant years later, but it's that seed. Over time, you develop a voice, a process, a personality. Even when your blog has an audience of one, it becomes a reflection of every hurdle you cleared. For me, it's the friction in my career, the lessons I learned, the friends I made along the way. And luckily, it's also the audience that keeps me honest and stops me from spewing nonsense. Nothing brings a barrage of emails faster than being wrong.

Maybe that's why I subconsciously published it on April Fools' Day. Maybe that's the joke. I'm going to keep adding rings to my tree, audience or no audience; I'm building longevity. Thank you for being part of this journey.

Extra: Some articles I wrote on April Fools' Day:
So you've been blogging for 2 years
Quietly waiting for Overnight Success
Happy 5th Anniversary
Count the number of words with MySQL
How to self-publish a book in 7 years
The Art of Absurd Commitment
Happy 12th Birthday Blog
What is Copilot exactly?

0 views
iDiallo 2 weeks ago

What is Copilot exactly?

A coworker of mine told me that he uses Microsoft Copilot frequently. In fact, he said, "I don't know how I did my work without it." That came as a surprise to me. I can't stand Copilot. But this is a very productive employee, one of those 10x engineers you can throw any problem at and he'll find a solution. Obviously, if he found a use for Copilot, then I was probably holding it wrong. So I decided to give it a shot.

I put all my prejudice aside and embraced the tool fully. AI is the future, and it shouldn't be hard to find a way to integrate it into my everyday workflow. I decided to give it a week, meaning I wouldn't complain even when I didn't get the result I wanted. Instead, for every frustration, I would use Copilot to help me turn that frown into a smile. The result? I created a workflow. I automated a lot of the things I find super annoying: scrum ceremonies, BRD reviews, email writing. All the things I feel like I must do only for someone else to tick a box in their own workflow. After the first week, I decided to extend my trial for a full sprint. By embracing this tool, I felt like I had eliminated my manager's job. Instead of having him check boxes on his end, I could just present my reports at the end of the week. I created a template prompt where I could dump information throughout the day, and at the end of the day it would generate a report in whatever format I wanted.

I was so proud of my template that I shared it with my 10x coworker. He didn't respond with the enthusiasm I was expecting. He didn't understand what I was trying to do. In fact, he told me he had never used Copilot before. That was in direct contradiction with what he'd told me earlier. He was the only reason I gave this tool a shot, and here he was pretending we'd never had that conversation. Well, he clarified: "I meant Copilot on VS Code."

Now, can you guess which Copilot I was using? Whatever Copilot is offered through Teams. And I say "whatever" because I genuinely don't know which one that is. Is it the same as accessing Copilot on the web? I wouldn't know. Our corporate firewall blocks that one. Teams seems to be the only approved method.

Anyway, what is Copilot exactly? Is it just a white-labeled ChatGPT? When I asked it directly, it said: "It's Microsoft's AI companion, powered by advanced models (including OpenAI's), but shaped by Microsoft's ecosystem, design philosophy, and capabilities. If ChatGPT is a powerful engine, Copilot is the full car built around it — with Microsoft's dashboard, safety systems, and features."

But where did the name come from? I'm sure I first heard it in the context of GitHub. The first AI code assistant shipped with VS Code. Even though they're both Microsoft products, they're two distinct products. If you use GitHub Copilot, your data isn't siphoned back to your Microsoft account (for now). What I was using in Teams is Copilot for Microsoft 365, which is apparently different from Microsoft Copilot. The 365 version lives inside Microsoft 365 apps (that's Microsoft Office's new name, for those not keeping up). The key difference is that the 365 version can work with your emails, documents, OneDrive, and so on. But if you have a Windows device, you also have Windows Copilot, distinct from the one in Microsoft 365. This one is your AI assistant inside the OS, meant to help you launch apps, summarize what's on your screen, and handle everyday tasks. In my experience, I couldn't get it to do any of those things. Apparently, I don't have a Copilot+ PC.
Reading through Microsoft's docs, I also found something called Copilot Chat. It's not quite a distinct product, but I'm not sure how else to classify it. Microsoft describes it as a general-purpose reasoning tool for writing, brainstorming, and coding. You can find it in M365 apps, and also within GitHub Copilot. That's the part that explains code, suggests fixes, and helps with debugging. I asked Copilot Chat, via GitHub Copilot, to explain the difference between all the offerings. It summarized it neatly: "Same family, different jobs."

I'm only scratching the surface of what Copilot is supposed to be, and I'm already tired. I felt inspired by a developer to explore it, only to find that he was touching just a small slice of this ecosystem. I still think it's worth encouraging teammates to embrace a tool that everyone else is losing sleep over. I should have stopped there, but I wanted to learn more about his workflow. I'm a developer after all, and whatever he was doing would be worth implementing with my team. So I asked him: "What is your developer workflow using Copilot?" I was not prepared for the answer he gave me: "Actually, I made a mistake. I meant Cursor."

And there it was. He wasn't talking about Copilot at all. Not the Teams one, not the GitHub one, not any of them. He had used "Copilot" the way most people use "Kleenex." To him, any AI code assistant was just a copilot. I had spent a whole sprint struggling through this tool, inspired by someone who couldn't have cared less about Microsoft's ecosystem. There's a lesson there, I'm sure. I just didn't learn anything.

0 views
iDiallo 2 weeks ago

How Do We Get Developers to Read the Docs

When I reviewed this PR, I had tears in my eyes. We had done it. We had finally created the perfect API. To top it off, the senior developer who worked on it had written documentation to match. No stone was left unturned. I had the code open in one window and the doc in the other. The moment I felt hesitation in the code, the documentation reassured me. Why do we make two calls to get the... "We are fetching two types of orders to support legacy subscribers..." the documentation answered before I completed my question. This was standard number 15. The one to rule them all.

But I still had one question. As the owner of the API, I read the documentation. Will other developers ever think to read it? How do I get people to want to read the documentation before they use this API? Because in my experience, nobody reads the documentation. Not to say that documentation is useless, but my mistake was thinking that the people who want to implement the API are interested in documentation at all.

For every API ever built, there are two audiences to cater to, and confusing them is where most documentation goes wrong. The first group is the consumers of the API. The only thing they want to know is: do the endpoints do what I need, and what parameters do they take? They are not reading your documentation like a book. They are scanning it like a menu. They want to find the thing they need, copy the example, and move on. The second group is the maintainers of the API. The people who need to understand the why behind every decision. Why are there two calls? Why does this endpoint behave differently for legacy users? Why is this field nullable? These are the people who will be debugging at 2am, and they need the full picture. The worst thing you can do is write one document that tries to serve both audiences equally. You end up with something that's too deep for the first group to skim, and not structured enough for the second group to find useful.

For the first audience, the API should speak for itself. The best documentation you can provide is not text to read through, but a well-designed API. Follow clear, repeatable patterns where the user can anticipate, or even assume, the available features. If you have an endpoint that lists orders, the assumption should be that a sibling endpoint returns a specific order by its ID. If you add a new resource, there should probably be a matching pair for it too. When the pattern is consistent, the consumer doesn't need to read anything; they just guess correctly.
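For instance, with illustrative endpoint names, a consumer who has used the first pair never needs documentation for the second:

```
GET /orders          -> list the user's orders
GET /orders/{id}     -> one specific order
GET /invoices        -> list invoices
GET /invoices/{id}   -> one specific invoice (never read about, just guessed)
```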
When you do write documentation for this audience, resist the urge to explain your internals. They don't need to know that you're fetching from two different database tables to support legacy subscribers. What they need to know is: "Returns all of a user's orders, including legacy subscriptions." One sentence. Done. I like this idiom: "Too much information and no information accomplish the same goal."

This is the mistake I see most often. It's a painful one because it comes from a good place. The writer of the documentation, usually the person who built the thing, feels a sense of responsibility. They want to be thorough. They want no one to be confused. So they write everything down. The result is a documentation page that looks like this:

"This endpoint retrieves orders for a given user. It was introduced in v2.3 of the API following the migration from the legacy order management system (OMS) in Q3 2021. Internally, the resolver makes two sequential calls (one to the new orders table and one to the legacy_orders table) and merges the results using the order ID as a deduplication key. Note that legacy orders may be missing fields that were not captured before 2019. If you are building a UI, you should account for this possibility. The endpoint also supports cursor-based pagination, though offset-based pagination is available for backward compatibility with clients built before v2.1. Additionally, orders in certain states may not appear immediately..."

A developer scanning this page will read the first sentence, close the tab, and think about designing API standard number 16. They'll go look at the codebase instead, or ping a teammate, or just guess. The documentation existed; it just didn't get read. Which means it accomplished exactly the same thing as having no documentation at all. The same way you don't write a comment to explain every line of code, documentation doesn't benefit from too much information.

My go-to solution isn't to omit information, but to write it in layers. Collapsible sections are one of the most underrated tools in documentation design (see the sketch at the end of this post). They let the consumer skim the surface: endpoint name, what it returns, a working example. And they let the maintainer dive deeper into the implementation notes, the edge cases, and the historical context. The same principle applies to how you order information. Lead with what the API does. Follow with how to use it. Bury the why at the bottom, behind a toggle or a "Details" section, available to those who need it, invisible to those who don't. Think of it like a well-designed error message. A good error message tells you what went wrong in plain language. A great error message also includes an expandable stack trace, but it doesn't show you the stack trace first. Your documentation has the same job. Give people the answer they're looking for, and then offer the depth to those willing to dig.

The second audience, the maintainers, do need the full picture. The two database calls, the deduplication logic, the historical reason the field is sometimes null. This is the documentation that prevents a future developer from "fixing" something that wasn't broken, or removing what looks like redundant code. But this documentation doesn't have to live on the same page as the quick-start guide. Deep implementation notes belong in inline code comments or a separate internal wiki. The public-facing API reference should stay clean. When you separate operational documentation (for consumers) from institutional documentation (for maintainers), both documents get better. The consumer doc gets shorter and clearer. The maintainer doc gets deeper because it's no longer trying to also be beginner-friendly.

The goal of documentation isn't completeness. Completeness is what you write for yourself, to feel like you've done your job. The goal of documentation is to transfer the right information into the right person's head at the right moment. That senior developer who wrote the documentation I cried over understood this. She didn't write everything she knew. She wrote exactly what someone reading the code would need to know, at the exact moment they'd need it. And the API design allowed anyone consuming it to make correct assumptions (intuitive design) about how it works. Both groups are happy.
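As promised, the collapsible layering is easy to try: plain HTML's details element (supported natively by browsers and by most Markdown renderers) is enough. A sketch, reusing the orders example above:

```html
<h3>GET /orders</h3>
<p>Returns all of a user's orders, including legacy subscriptions.</p>

<details>
  <summary>Implementation notes (for maintainers)</summary>
  <p>Two sequential calls (orders and legacy_orders), merged and
  deduplicated by order ID. Legacy rows may be missing pre-2019 fields.</p>
</details>
```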

0 views
iDiallo 2 weeks ago

Sharing a Name

My bank card never arrived. I called the bank and, after being redirected through several departments, was assured that it had been mailed. Then we argued a bit about what "7 to 10 business days" meant; we were already on day 14. We ended the call by agreeing to disagree. Eventually, I did get my card. But it wasn't the mailman who delivered it. Instead, it was my neighbor from two streets down. On the envelope, my address had been crossed out, and the word "incorrect" was handwritten beside it. Why? Because the mailman himself had crossed it out. You see, I had just moved into the apartment complex, and my name looked familiar to him. Of course he knew who Ibrahima Diallo was; he had been delivering his mail for years. So he corrected it.

In the US, both my first and last name are uncommon (or so I thought). They're often a source of confusion when my Starbucks order gets called out. As it turns out, one of my neighbors shares the exact same name. And on top of that, he uses the same West African spelling: Ibrahima. The mailman, trying to be helpful, had redirected my mail to what he thought was the right address. My neighbor and I laughed about it. Then I immediately cancelled the card and requested a new one...

Some years ago, I dated a woman from Bulgaria. She grew up in a small city where everyone knew each other. In their town, there was a single Black family. You probably know where this is going, but pretend you don't and follow along. It was so unusual to have an outsider in this town that the man and his family became local fixtures. Wherever they went, people stopped to take pictures with them. They were like minor celebrities. So naturally, when she pulled out a photo from her childhood, there he was, posing cheerfully with the neighbors. She turned the photo over to read the names written on the back. She stopped. She burst out laughing. I looked at the name. I can't read Cyrillic, but I know exactly how to spell my name in Bulgarian. His name read: Ibrahima Diallo.

When I was hired at AT&T many years ago, there was a week of confusion at first. I didn't receive my welcome kit. My manager swore that he had carefully selected my name and sent it to my Texas address... As you may have guessed, I do not have a Texas address. I lived in Los Angeles, and the office where we worked in person was in Los Angeles. Somewhere in Texas, a long-time employee must have been confused by this new welcome kit showing up in the mail.

Back when I was featured on the BBC, a wave of people reached out. Even though my picture was prominently displayed in the article, several people emailed me as if they already knew me, picking up conversations we had apparently started at work, signing off with "see you tomorrow." According to my inbox, I had met quite a few people in London. The only problem was, well, I've never been to London. As it turned out, my neighbor's uncle had called him to say that some journalists were trying to reach his nephew through him. You'll never guess the uncle's name. Yes, it's Ibrahima Diallo. I eventually met this uncle. We had a long conversation and discovered that he knew my father from back home. In fact, he had gone to school with one of my uncles and spoke fondly of him, saying he was a brilliant student. What's my uncle's name, you ask? Of course it's Ibrahima Diallo.

Growing up, I assumed my name was uniquely mine. But as I've made my way through the world, I've found that I share it with a surprisingly large number of people. I already snagged ibrahimdiallo.com.
I'm keeping an eye on ibrahimadiallo.com, hoping it expires this June so I can claim that one too. If it does become available, I'll gather an army of Ibrahimas, and we will... Well, I'm not entirely sure what we'll do yet. But it will definitely be fun. Anyway, that's a story about my name.

A postscript worth mentioning: both of my older brothers share the same first and last name. You can imagine the fun they have. This is what happens in West African families when you name your children after their grandparents, and the grandparents happen to share the same name. One brother does have a middle name, intended as a differentiator. But middle names are rarely included in US mailing addresses, so that doesn't help much either.

0 views
iDiallo 2 weeks ago

How we get radicalized in America

Be healthy, be young, fall ill. You have a great job, of course; you have insurance. It would be OK if the worst thing about health insurance in America was that it's hard to navigate. No! The actual problem is that your insurer is incentivized not to cover you at your most vulnerable moment.

You pay them every month. That's money that goes from your paycheck into their pockets. If they cover you, that's money that leaves their pockets and goes into your treatment. There are two ways they can make money:

1. You continue paying every month and never fall ill.
2. You fall ill, and they deny you care.

Only the second option is an active one. Health insurance is a scam that we have normalized in the United States. It helps no one, it makes healthcare unaffordable, and you have to fight tooth and nail to get any sort of care.

When Luigi was in the headlines and news anchors were asking how such a young man could get radicalized, I shook my head. In America, it is our tradition to work two jobs. It is our tradition to live paycheck to paycheck. And it is our tradition to get radicalized the moment we get sick. When you get sick, the healthcare industry tries to charge as much as it can get away with, and the insurance industry tries to deny as much as it can.

1 view
iDiallo 3 weeks ago

The nth War of the Decade

This is a blog where I talk mostly about programming and the workplace. These past few years, the subject has often been AI, because it affects everything, from the hiring process to the very code we type. AI might just replace me mid-sentence... So when a subject that affects us all dominates the world, I want to give you my perspective. I may not be your source of political perspective, but here goes.

Right now, we are at war. At least the United States of America is. It turns out congressional rules are a lot like HTML standards: merely a suggestion you can choose to adopt or ignore. First, I want to say this firmly: you don't need to be an expert to talk about war. It affects us all on some level. That trope, that only experts should weigh in, is often used by people who want to control a narrative. But this time, laypeople in every corner of the world will get involved in shaping the story.

One of my earliest memories of what was called "the news" was footage of children throwing rocks at tanks rumbling through buildings. I didn't understand if it was courage or just a game. I was just a kid, after all. In hindsight, those were Palestinian children in a devastated city, throwing rocks at their unmatched adversary, the Israeli army. Some years later, I remember my brothers fitting me with an oversized gas mask while we played tag. I had to constantly readjust it so I could see what was in front of me, and also breathe! Those masks, along with other supplies, had been provided by the Saudi government to all diplomats in embassies in case of a chemical attack. This was during the Gulf War.

The wars in Kosovo and Chechnya became background noise in our diplomatic household. My parents would rush us to our room and unplug our Famicom to make space for the news again. I didn't understand much about what we saw on TV. Who were the good guys? Who were the bad guys? It was nothing like the Rambo or Commando movies we watched. I remember learning in school that Yugoslavia was no longer a country. In that same history book was a photograph of people waving the Yugoslav flag. That made no sense to me. Imagine carrying national pride, waving your flag, especially during a war, and then turning around to find a different country in its place. Whatever you thought you were had been swept out from under you.

We had moved to Egypt when the attacks of September 11th occurred. Every channel, local and international, interrupted its programming to show footage of the towers being hit. My brother told me those were the towers from the Home Alone movie. I was more surprised that buildings that tall could even exist. We were all shocked to hear that the US was going to war with Iraq, especially since they had blamed the attacks on Saudi Arabia. After basketball games, dozens of us would sit on the court and debate. Some said it had something to do with Kuwait; others said it was about oil. I remember one guy insisting that the WMDs were real. His reasoning? Well, the US had the receipts; they sold them in the first place. While we were having our little debates, it is estimated that the US caused the deaths of at least one million people in Iraq.

Are we supposed to ignore the war? Is it only relevant when we are economically affected? Or do we only take it seriously when American lives are lost? Do we yell "stop the war" or "we want lower gas prices"? How do we even follow along with what is happening when AI and realistic video game footage are flooding social media feeds? Which is true?
Which is misinformation? Is this an illegal war, as opposed to a legal one? Was the Iraq War legal? If its premise was the existence of WMDs that were never found (despite the insistence of that boy), does that make it illegal? Is war legal for one party but not the other? How do we classify the war in Ukraine? Legal on Ukraine's side, illegal on Russia's? Is war legal when it is retaliatory? The US retaliated against the Taliban in Afghanistan. Is Iran's retaliation against the US similarly justified? And what role does the UN play in a war? The International Court of Justice? Who do they hold accountable? If they had been founded earlier, I imagine they would have sent Hitler a strongly worded letter.

Not a single decade of my life has been free of conflict. Millions have suffered around the world; many have been killed. But never did I think that the killing of women and children would be normalized. War is chaos. We pretend there are rules to it, but every new conflict reveals how blurred the edges become. Killing is acceptable when it is "precise" or "targeted," until your own group is killed the same way. War is acceptable when it happens in a faraway land, until you realize your land is someone else's faraway land.

Are we living in our 200-year war? Is the result inevitable? Do we have to destroy everything and then lose all the material to learn anything from it? Do we become the One State? A regime based on absolute mathematical logic and the suppression of individuality, designed to prevent such a war by brutally oppressing its own people. In movies, to end the war you kill the top villain. But it has never worked that way in our world. The only way to stop war is to stop it. Stop bombing. Stop killing. It's not like the movies. The UN is not going to do anything about it; maybe it can't. War, by its nature, cannot resolve a conflict. It only creates the fuel for the next one.

0 views
iDiallo 3 weeks ago

Why Is Everyone Supposed to Die If Machines Can Think?

If you only listen to spokespersons for AI companies, you'll have a skewed view of how AI is actually being integrated into the workplace. You probably don't need to convince a developer to include it in their workflow, but you also can't dictate how they do so. Whenever I sit next to another developer during pair programming, I can't help but feel frustrated by their setup. But I don't complain, because they'd be just as annoyed with mine. The beauty of dev work is that all that matters is the output. If you use a boilerplate generator, few will complain. If you use AI to generate the same code, as long as it works, no one will complain either. If the code is crafted with your own wetware, no one will be the wiser. Developers will use any tool at their disposal to increase their own productivity.

But what happens when that thousand-dollar-per-developer-per-month subscription starts to feel expensive? What happens when managers expect a tenfold return on investment, yet sprint velocity doesn't budge? On one hand, new metrics are created to track developers' use of the tool. Metrics which, in my experience, are highly inaccurate and vary wildly. On the other, companies are using AI as justification for laying off workers. So which metric is to be trusted?

AI isn't simply a solution in search of a problem. It's quite useful. One person will tell you it's great for writing tests, another will praise it for writing utility functions, and another will use it to better understand a requirement. Each is a valid use case. But the question managers keep asking is: "Can we use AI instead of hiring another dev?" I'm not sure what is supposed to happen if we achieve so-called AGI. Does it mean I no longer have to do code reviews? Is it AGI when the AI stops hallucinating? My shower-thought answer: AGI is an AI that can say "I don't know" when it doesn't know the answer. But I don't think Sam Altman sees that as a selling point.

Why are we supposed to die if a machine can think? Every time someone raises this argument, I think of Thanos. In the Avengers saga, he kills half of all living beings in the universe. It's an act so total and irreversible that the writers had to bend time itself to undo it. And still, fifteen movies later, the franchise keeps going. Each new antagonist has to threaten something, but nothing lands the same way. You already saw the worst. The scale is broken. The villain is a terrorist from an unnamed country? Gimme a break.

That's what the AI extinction narrative has done to the conversation about AI. By opening with the end of the world, it made every practical concern feel small by comparison. Who wants to talk about sprint velocity and hallucinated function calls when we're supposedly staring down an existential threat? So we don't. We argue about the apocalypse instead. Meanwhile, I am debugging a production incident at 2am, in a codebase that has never once tried to kill me, but has absolutely tried to ruin my weekend.

The reality is quite different from the drama that unfolds online. The longer this AI craze continues, the less I believe we're headed for a dramatic bubble pop. Instead, I think the major players will try to bully their way out of one. And that bullying is already happening on at least three fronts: language, narrative, and money. Microsoft is leading the language crackdown. They are rounding up critics in their own Copilot Discord servers, banning users who use the now-deemed-derogatory term "Microslop."
Nvidia is publicly asking people to stop using the phrase "AI slop." These aren't isolated incidents of corporate thin skin. They are coordinated attempts to police the vocabulary we use to criticize the technology. Control the language, and you go a long way toward controlling the conversation. When you can't call a thing what it is, it becomes harder to argue that the thing exists at all.

On the narrative front, we are told every day that AI is good, innovative, and inevitable. Then we're told it's going to take our jobs. And at the same time, we're told it's an existential threat that could wipe us off the planet. It is simultaneously the best thing that could ever happen to humanity and the worst. I'm reminded of George Orwell's "War is peace, freedom is slavery, ignorance is strength." It's a cognitive trap. When a technology is framed as both savior and apocalypse, the questions regular people ask are dismissed as mundane. We can't ask: "Does it work? Is it worth the cost? Are we actually benefiting from this?" Instead, we spend our energy arguing about the end of the world, and the companies keep burning through cash while the narrative burns through our attention.

On the money front, we all witnessed it firsthand with the fiasco involving Anthropic, OpenAI, and the Department of Defense. People were quick to sort the players into the good guys, the bad guys, and the ugly. But to me, it looked like a dispute designed to obscure the problem that has plagued AI companies from the very beginning: they need to make money. It doesn't matter if a company generates $20 billion a year when its operating costs double annually. They're still in the red.

Anthropic was grandstanding, positioning itself as the principled actor fighting against the US war machine. At the same time, they had no issue working with Palantir, a company that makes no secret of its commitment to mass surveillance and its role in powering the machinery of war. Meanwhile, OpenAI is struggling with its own financial stability. They've just launched ads on their platform, something Sam Altman once described as a last resort. When you're in the red and a customer is willing to pay, principles become a luxury you can do without. Given their history of bending copyright law and converting to a for-profit entity, it's naive to assume there are principles they wouldn't bend as well. They quickly jumped into the DoD deal, scooping up a $200 million contract to replenish their coffers.

There was one detail in Anthropic's statement that deserved more attention than it got: "We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values." In other words: surveilling citizens is immoral. If you're a non-citizen or a foreigner, you're on your own.

So right now, AI companies are hemorrhaging money, policing the words we use to criticize them, manufacturing existential dread to crowd out any skepticism, and taking defense contracts while performing ethical restraint. And somewhere in the middle of this, we're supposed to believe that only they can save us. When you're losing money but need to maintain the illusion of infinite growth, you don't wait for the market to correct you. You make the bubble burst feel not just unlikely, but unthinkable. You bully the language, inflate the stakes, and monetize the fear.

As individuals, what are we supposed to do with the useful part of the technology?
It helps me write tests. It helps my colleagues parse requirements. Used without hype and within realistic expectations, it is actually a good tool. But "a good tool" doesn't justify the valuations, the layoffs-as-euphemism, the defense contracts, or the Discord bans. It doesn't sustain the mythology that has been built around it. That gap between the tool that exists and the revolution that was promised is precisely what the bullying is designed to keep you from looking at too closely.

I still struggle to answer managers who ask me to justify the team's use of the tool. I never had to justify my IDE, or my secret love affair with tmux, before. For now, all I can tell them is: "It's useful, within limits, and that should be enough." It won't be what they want to hear. But it's more than the industry has managed to say about itself.

0 views
iDiallo 4 weeks ago

Communication Is Surveillance by Design

In the very last scene of The Bourne Supremacy, Jason Bourne calls the CIA from what they presume is a public phone. Landy, who answers the call, instructs her team to trace it. Bourne says he wants to come in and asks for someone specific to meet with him. Landy stalls for time while her team tries to triangulate his exact location, so she asks how she can find the person he's referring to. That's when Bourne drops his famous line, "It's easy. She's standing right next to you," revealing that he's right in their vicinity. He hangs up seconds before the team could have located him. That's one badass ending. (֊⎚-⎚)

It's not the only film where the protagonist, or antagonist, is clever enough to know exactly when to hang up before being pinpointed. There seems to be this universal piece of software that all law enforcement agencies use to triangulate calls in movies. It's some application built in the '90s, operating at modem speed, that just needs a little more time. A countdown clock. Tense music. Cut to black.

What is that software actually doing? "Triangulate" implies three points: maybe three cell towers sending a ping and measuring the response time from each, then using the difference to calculate distance. Computers, even old ones, are very good at math. So why would that take a full minute? Well, mostly it doesn't. That's fiction.

The moment your phone connects to a cell tower, it generates a Call Detail Record (CDR). This record includes who you're calling (the network needs to know in order to route the call), how long the call lasts, and which specific tower and sector handled it. Location data is captured and stored automatically from the instant the call begins. In other words, the moment Jason Bourne hits send, he's already been logged. When you connect to a single tower, location accuracy can still be within several hundred meters. But phones typically connect to multiple towers simultaneously, and triangulation narrows that down to tens of meters. If you're calling from a payphone, there's no triangulation needed at all. The address of each payphone is already on record.

The one advantage the protagonist realistically has is that CDR data isn't usually available in real time. Law enforcement needs to contact the telecom provider, obtain a court order, and wade through all the bureaucracy that entails. If there's a clock ticking, it should count the days it takes to gather that data, not the seconds the triangulation software takes to calculate.
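The math itself is trivial by computer standards. Once you have distance estimates from three towers, finding the position is a small linear system. Here's a toy sketch with made-up coordinates and clean distances (real networks work from messier signals like timing advance and signal strength, not perfect circles):

```python
# Toy trilateration: three towers, three measured distances, one unknown
# caller position. Subtracting the circle equations pairwise leaves a
# 2x2 linear system. This is microseconds of work, not a minute of
# tense music.
towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # tower coordinates (km)
dists = [5.0, 8.0623, 6.7082]                    # measured distances (km)

(x1, y1), (x2, y2), (x3, y3) = towers
d1, d2, d3 = dists

# Subtract circle 1 from circles 2 and 3 to get two linear equations.
a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2

# Solve the 2x2 system by Cramer's rule.
det = a1 * b2 - a2 * b1
x, y = (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det
print(f"caller is near ({x:.2f}, {y:.2f})")  # -> (3.00, 4.00)
```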
The moment you accessed this page, you left a trail. Your device asked your Internet Service Provider (ISP) to connect you to my website. That request generated a log, the digital equivalent of a CDR, recording that your IP address requested a connection to mine. When your ISP routed you to my server, it handed over your IP address so I'd know where to send the data back. From that IP address alone, I can make a rough guess at your location, usually accurate to your city or region. Your ISP, however, knows exactly where you are. They assigned you that IP address and are actively providing your connection.

This is where HTTPS comes in. You've probably noticed the padlock icon in your browser. When you connect to a website over HTTPS, the content of your communication is encrypted in transit. Your ISP (or anyone listening on the network) can see that you connected to a particular domain, but they cannot read what you sent or received. The data looks like noise to them.

The main distinction is that HTTPS hides the content, not the connection. Your ISP still sees the domain you visited. They still have a timestamp. They still have your IP address. The metadata is fully visible, even if the message itself is not.

Using HTTPS wasn't something most people worried about until 2013, when Edward Snowden's leaked documents revealed that the NSA had been running programs like PRISM that compelled major technology companies to hand over user data. They tapped directly into the fiber-optic cables connecting Google and Yahoo's data centers. At those interception points, traffic that hadn't yet been encrypted internally was flowing in the open. The NSA could read emails, messages, and files, not by breaking encryption, but by scooping up data before encryption was ever applied, or by accessing it at a point where it had already been decrypted. The content was exposed.

You can partially obscure your activity from your ISP by using a VPN. A VPN tunnels your traffic through a third-party server, so your ISP sees only that you connected to the VPN, not where you went from there. But now the VPN provider holds that information instead. You haven't entirely eliminated the trail, you've relocated it. One way or another, when you use any electronic means of communication, you leave breadcrumbs. The connection is always recorded somewhere.

That's why end-to-end encryption (E2EE) is important. Unlike HTTPS, which encrypts data in transit but means the server itself can read your messages, with end-to-end encryption only the sender and recipient can read the content. The service provider in the middle never holds the keys. In practice, when you send a message through an E2EE app like Signal, your device encrypts the message using your recipient's public key before it ever leaves your phone. The encrypted message travels through Signal's servers, but Signal cannot read it, because they don't have the private key needed to decrypt it. Only your recipient's device holds that key. Even if Signal were compelled by a government order to hand over your messages, all they could produce is scrambled data that's meaningless without the key.
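To make that concrete, here's a minimal sketch of the public-key idea using PyNaCl. This illustrates only the general principle; Signal's actual protocol (the Double Ratchet) is considerably more elaborate:

```python
from nacl.public import PrivateKey, SealedBox

# The recipient generates a keypair; the private key never leaves
# their device.
recipient_key = PrivateKey.generate()

# The sender encrypts with the recipient's *public* key before the
# message leaves their phone. Any server in the middle relays only
# ciphertext.
ciphertext = SealedBox(recipient_key.public_key).encrypt(b"it's easy")

# Only the holder of the private key can decrypt. A provider compelled
# to hand over traffic can produce nothing but the scrambled blob.
plaintext = SealedBox(recipient_key).decrypt(ciphertext)
assert plaintext == b"it's easy"
```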

This is a meaningful protection. But it doesn't change the underlying reality: Signal still knows that your device contacted another device, at what time, and how often. The content is hidden. The connection is not. We cannot make communication invisible. We can only make it unreadable.

In the real world, the only thing keeping Jason Bourne two steps ahead of law enforcement is the bureaucracy and legal delay involved in retrieving CDR data. It's not his cleverness, it's not the speed of the triangulation software, it's not technology.

0 views
iDiallo 1 month ago

Shower Thought: Git Teleportation

In many sci-fi shows, spaceships have a teleportation mechanism on board. They can teleport from inside their ship to somewhere on a planet. This way, the ship can remain in orbit while its crew explores the surface. But then people started asking: how does the teleportation device actually work? When a subject stands on the device and activates it, does it disassemble all the atoms of the person and reconstruct them at the destination? Or does it scan the person, kill them, and then replicate them at the destination? This debate has been ongoing for as long as I can remember. Since teleportation machines exist only in fiction, we can never get a true answer, only the one that resonates the most.

That's why I thought of Diff Teleportation. Basically, this is a Git workflow applied to teleportation. When you step onto the device, the machine checks out a new mission branch (the commands I have in mind are sketched below), then suspends all activity on the master branch. This will make merging the branch much simpler in the future.
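A minimal sketch of the full round trip, as I imagine it (the exact commands and the branch name are my own guesses):

```bash
# Step onto the device: branch the away-team copy off of master.
git checkout -b mission-123

# ...the clone explores planet XYZ, committing findings as it goes...

# What did the mission change, relative to the ship's copy?
git diff master..mission-123

# Teleport back: fold the findings into master, then pulverize the clone.
git checkout master
git merge mission-123
git branch -d mission-123
```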

Now the person who has been teleported can explore the planet and go about mission 123. While they are doing their job, we have time to study the flags the diff command supports. When the mission is completed, they can be teleported back. Well, not the whole person, otherwise we end up with a clone. We could analyze the new data and remove any unwanted additions. For example, we could clean up any contamination at this point. But for the sake of time, I'll explore that another day. As an exercise, run the diff yourself for your own curiosity. For now, all we are interested in is the information that the teleportee has gathered from the planet, which we will merge back into master. I imagine that in science fiction there is an automated way to review PRs that is more reliable than an LLM. Once that process is completed, we can merge to master and run some cleanup code in the build pipeline.

Somewhere down on planet XYZ, a clone stepped onto the teleportation device. He saw a beam of light scan his body from head to toe. Then, for a moment, he wondered if the teleportation had worked. But right before he stepped off, the command ran, and he was pulverized. Back in the spaceship, a brand-new clone, named after the mission branch, appeared at the teleportation station. He was quickly sanitized, diff'd, and reviewed. But before he could gather his thoughts, the cleanup command ran, and he was pulverized. Not a second later, the original subject was reanimated, with brand-new information about "his" exploration of planet XYZ.

Teleportation is an achievable technology. We just have to come to terms with the fact that at least two clones are killed for every successful teleportation session. In fact, if we are a bit more daring, we might not even need to suspend the first subject. We can create multiple clones, or agents, and have them all explore different things. When their tasks are complete, we can wrestle a bit with merge conflicts, run a couple of commands, and the original subject is blessed with new knowledge. OK, I'm getting out of this shower.

4 views
iDiallo 1 month ago

You Digg?

For me, being part of an online community started with Digg. Digg was the precursor to Reddit and the place to be on the internet. I never got a MySpace account, I was late to the Facebook game, but I was on Digg. When Digg redesigned their website (V4), it felt like a slap in the face. We didn't like the new design, but the community had no say in the direction. To make it worse, they removed the bury button. It's interesting how many social websites remove the ability to downvote. There must be a study somewhere that makes a sound argument for it, because it makes no sense to me.

Anyway, when Digg announced they were back in January 2026, I quickly requested an invite. It was nostalgic to log in once more and see an active community building back up right where we left off. But then, just today, I read that they are shutting down. I had a single post in the technology sub. It was starting to garner some interest and then, boom! Digg is gone once more.

The CEO said that one major reason was that they faced "an unprecedented bot problem." This is our new reality. Bots are now powered by AI and they are more disruptive than ever. They quickly circumvent bot detection schemes and flood every conversation with senseless text. It seems like there are very few places left where people can have a real conversation online. This is not the future I was looking for. I'll quietly write on my blog and ignore future communities that form. Rest in peace, Digg.

0 views
iDiallo 1 month ago

It's Work that taught me how to think

On the first day of my college CS class, the professor walked in holding a Texas Instruments calculator above his head like Steve Jobs unveiling the first iPhone. The students sighed. They had expected computer science to involve little math. The professor told us he had helped build that calculator in the eighties, then spent a few minutes talking about his career and the process behind it. Then he plugged the device into his computer, opened a terminal on the projector, and pushed some code onto it. A couple of minutes later, he unplugged the cable, powered on the calculator, and sure enough, Snake was running on it.

A student raised his hand. The professor leaned forward, eager for the first question of the semester. "Um... is this going to be on the test?"

While the professor was showing us what it actually means to build something, to push code onto hardware and watch it come alive, his students were already thinking about the grade. About the exit. The experience meant nothing unless it converted into points.

That was college for me. Everyone was chasing a passing grade to get to the next class. Learning was mostly incidental. The professors tried, but our incentives were completely misaligned. Talk of higher education becoming obsolete was already in the air, especially in CS. As enthusiastic as I had been when I started, that enthusiasm got chipped away one class at a time until the whole thing felt mechanical. Something I just had to get through. I dropped out shortly after the C++ class, which had taught me almost nothing about programming anyway. I was broke and could only pay for so many courses out of pocket. So I took my skills, such as they were, to a furniture store warehouse. My day job.

When customers bought furniture, we pulled their merchandise from the back and loaded it into their trucks. They signed a receipt, we kept a copy, and those copies went into boxes labeled by month and date. At the end of the year, the boxes went onto a pallet, the pallet got shrink-wrapped, and a forklift tucked it away in a high storage compartment. Whenever an accountant called requesting a signed copy, usually because a customer was disputing a charge, the whole process ran in reverse. Someone licensed on the forklift had to retrieve the pallet, we cut the shrink-wrap, found the right box, and sifted through hundreds of receipts until we found the one we needed. The process took hours.

One day I decided enough was enough. After my shift, I grabbed the day's signed receipts and fed them into a scanner. For each one, I created two images: a full copy and a cropped version showing just the top of the receipt, where the order number was printed. I found a pirated OCR application, then used VBScript and a lot of Googling to write a script that read the order number and renamed each image file to match it. I wrote my first Excel macros, in VBScript as well. When everything was wired together, I had a working system. Each evening, I would enter the day's order numbers, scan the receipts, and let the script match them up, with a preview attached. When the OCR failed to read a number, the file was renamed "unknown" with an incrementing number so I could verify those manually. From then on, when an accountant called, I could find and email them the receipt in under a minute, without ever leaving my desk.

When I left that warehouse, I was ready to call myself a programmer. That one month building that system taught me more than two years of school ever had.
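For the curious, the matching step looks roughly like this in modern Python, with pytesseract standing in for the pirated OCR app (the file layout and order-number pattern here are my own illustration, not the original VBScript):

```python
import re
from pathlib import Path

import pytesseract  # requires the Tesseract OCR binary to be installed
from PIL import Image

unknown = 0
for crop in sorted(Path("scans/cropped").glob("*.png")):
    # Read the top-of-receipt crop and fish out the order number.
    text = pytesseract.image_to_string(Image.open(crop))
    match = re.search(r"\b\d{6,10}\b", text)  # illustrative number format
    full_copy = Path("scans/full") / crop.name
    if match:
        full_copy.rename(full_copy.with_name(f"{match.group()}.png"))
    else:
        # OCR misfires get an incrementing name for manual review.
        unknown += 1
        full_copy.rename(full_copy.with_name(f"unknown-{unknown}.png"))
```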
But the education didn't stop there. Years later, now considering myself an experienced developer, a manager handed me what looked like a giant power strip. It had a dozen outlets and was built for stress-testing set-top boxes in a datacenter. "Can you set this up?" he asked.

A few years earlier, I would have panicked. I would have gone looking for someone who already knew the answer, or waited until the problem solved itself. But something had changed in me since the warehouse. Unfamiliar problems no longer felt like walls. They felt like the first receipt I ever fed into a scanner: just something to pull apart until it made sense. I had never worked with hardware. I had no idea where to start. But I didn't need to know where to start. I just needed to start.

I brought the device to my desk and inspected every inch of it. I wasn't looking for the answer exactly. Instead, I was looking for the first question. And I found one: an RJ45 port on one end. Not exactly the programming interface you'd expect, but it was there for a reason. I looked up the model number of the device, downloaded the manual, and before long I was connected via Telnet, sending commands and reading output in the terminal. Problem solved. Not because I knew anything about hardware going in, but because I had learned to spend time with unfamiliar problems.

None of this was in the syllabus. Nobody graded me on it. There was no partial credit for getting halfway there. That's the difference between school and work. School optimizes for the test, like that student who couldn't look past the grade to see what was actually being shown to him. School teaches you the shape of a problem and gives you a method to solve it. Work, on the other hand, doesn't care about the test. Work hands you something broken, or inefficient, or completely unfamiliar, and simply waits. Often, there are no right answers at work. You just have to build your own solution that satisfies the requirement. You figure things out, not because you memorized the right answer, but because you thought your way through it. Then something changes in how you approach every problem after that. You don't flinch at the next problem. You understand that facing unfamiliar problems is the job.

9 views
iDiallo 1 month ago

Where did you think the training data was coming from?

When the news broke that Meta's smart glasses were feeding data directly into their Facebook servers, I wondered what all the fuss was about. Who expected glasses that can secretly record people to keep that footage private? Then again, I've grown cynical over the years.

The camera on your laptop is pointed at you right now. When activated, it can record everything you do. When Zuckerberg posted a selfie with his laptop visible in the background, people were quick to notice that both the webcam and the microphone had black tape over them. If the CEO of one of the largest tech companies in the world doesn't trust his own device, what are the rest of us supposed to do?

On my Windows 7 machine, I could at least assume the default behavior wasn't to secretly spy on me. With good security hygiene, my computer would stay safe. For Windows 10 and beyond, that assumption may no longer hold. Microsoft's incentives have shifted. They now require users to create an online account, which comes with pages of terms to agree to, and they are in the business of collecting data: "As part of our efforts to improve and develop our products, we may use your data to develop and train our AI models." That's your local data being uploaded to their servers for their benefit. Under their licensing agreement (because you don't buy Windows, you only license it), you are contractually required to allow certain information to be sent back to Microsoft: "By accepting this agreement or using the software, you agree to all of these terms, and consent to the transmission of certain information during activation and during your use of the software as per the privacy statement described in Section 3. If you do not accept and comply with these terms, you may not use the software or its features." The data transmitted covers telemetry, personalization, AI improvement, and advertising features.

On a Chromebook, there was never an option to use the device without a Google account. Google is in the advertising business, and reading their terms of service, even partially, makes clear that it all revolves around data collection. Your data is used to build a profile, both for advertising and for AI training. None of this is a secret. It's public information, buried in those terms of service agreements we blindly click through. Even Apple, which touts itself as privacy-first in every ad, was caught using user data without consent. Tesla employees were found sharing videos recorded inside customers' private homes.

While some treat the Ray-Ban glasses story as an isolated incident, here is Yann LeCun, Meta's former chief AI scientist, describing transfer learning using billions of user images: "We do this at Facebook in production, right? We train large convolutional nets to predict hashtags that people type on Instagram, and we train on literally billions of images. Then we chop off the last layer and fine-tune on whatever task we want. That works really well." That was seven years ago, and he was talking about pictures and videos people upload to Instagram.
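The technique LeCun describes is standard transfer learning, and it's only a few lines in a framework like PyTorch. This is a generic sketch, not Meta's actual pipeline (a model pretrained on ImageNet stands in for their hashtag-prediction net):

```python
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on one task (ImageNet classification
# here, hashtag prediction in LeCun's example).
model = models.resnet50(weights="IMAGENET1K_V2")

# Freeze the pretrained backbone so its weights stay fixed.
for param in model.parameters():
    param.requires_grad = False

# Chop off the last layer and bolt on a new head for the task you
# actually care about, then fine-tune only that head.
model.fc = nn.Linear(model.fc.in_features, 10)  # e.g. 10 target classes
```

The expensive knowledge lives in the pretrained weights; whatever data they were trained on, the fine-tuning step afterward is comparatively cheap.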
When you put your data on someone else's server, all you can do is trust that they use it as intended. Privacy policies are kept deliberately vague for exactly this reason. Today, Meta calls itself AI-first, meaning it's collecting even more to train its models. Meta's incentive to collect data exceeds even that of Google or Microsoft. Advertising is their primary revenue source. Last year, it accounted for 98% of their forecasted $189 billion in revenue.

Yes, Meta glasses record you in moments you expect to be private, and their workers process those videos at their discretion. We shouldn't expect privacy from a camera, a microphone, or any internet-connected device that we don't control. That's the reality we have to accept. AI is not a magical technology that simply happens to know a great deal about us. It is trained on a pipeline of people's information: video, audio, text. That's how it works. If you buy the device, it will monitor you.

0 views
iDiallo 1 month ago

The Server Older than my Kids!

This blog runs on two servers. One is the main PHP blog engine that handles the logic and the database, while the other serves all static files. Many years ago, an article I wrote reached the top position on both Hacker News and Reddit. My server couldn't handle the traffic. I literally had a terminal window open, monitoring the CPU and restarting the server every couple of minutes. But I learned a lot from it.

The page receiving all the traffic had a total of 17 assets. So in addition to the database getting hammered, my server was spending most of its time serving images, CSS, and JavaScript files. I decided to set up additional servers to act as a sort of CDN to spread the load. I added multiple servers around the world and used MaxMindDB to determine a user's location and serve files from the closest server. But it was overkill for a small blog like mine. I quickly downgraded back to just one server for the application and one for static files. Ever since I set up this configuration, my server has never failed due to a traffic spike. In fact, in 2018, right after I upgraded the servers to Ubuntu 18.04, one of my articles went viral like nothing I had seen before. Millions of requests hammered my server. The machine handled the traffic just fine.

It's been 7 years now. I've procrastinated long enough. An upgrade was long overdue. What kept me from upgrading to Ubuntu 24.04 LTS was that I had customized the server heavily over the years and never documented any of it. Provisioning a new server means setting up accounts, dealing with permissions, and transferring files. All of this should have been straightforward with a formal process. Instead, uploading blog post assets has been a very manual affair. I only partially completed the upload interface, so I've been using SFTP and SCP from time to time to upload files.

It's only now that I've finally created a provisioning script for my asset server. I mostly used AI to generate it, then used a configuration file to set values such as email, username, SSH keys, and so on. With the click of a button, and 30 minutes of waiting for DNS to update, I now have a brand new server running Ubuntu 24.04, serving my files via Nginx.
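The serving side of a static asset box stays small. A minimal sketch of the kind of Nginx server block involved (the domain and paths are placeholders, not my actual config):

```nginx
# Minimal static asset server (placeholder domain and paths).
server {
    listen 80;
    server_name assets.example.com;

    root /var/www/assets;

    location / {
        try_files $uri =404;         # serve the file or 404, nothing dynamic
        expires 30d;                 # let browsers cache assets aggressively
        add_header Cache-Control "public";
    }
}
```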

Yes, next month Ubuntu 26.04 LTS comes out, and I can migrate to it by running the same script. I also built an interface for uploading content without relying on SFTP or SSH, which I'll be publishing on GitHub soon.

It's been 7 years running this server. It's older than my kids. Somehow, I feel a pang of emotion thinking about turning it off. I'll do it tonight... But while I'm at it, I need to do something about the 9-year-old and 11-year-old servers that still run some crucial applications.

0 views