Latest Posts (20 found)
iDiallo Yesterday

Démerdez-vous: A Response to Enshittification

There is an RSS reader that I used often in the past and have become very reliant on. I would share the name with you, but as they grew more popular, they decided to follow the enshittification route. They've changed their UI, hidden several popular links behind multilayered menus, and revamped their API. Features that I used to rely on have disappeared, and the API is close to useless. My first instinct was to find a new app that would satisfy my needs. But being so familiar with this reader, I decided to test a few things in the API first. Even though their documentation no longer mentions older versions, I discovered that the old API is still active. All I had to do was add a version number to the URL. It's been over 10 years, and that API is still very much alive. I'm sorry I won't share it here, but this served as a lesson for me about software that becomes worse over time. Don't let them screw you, unscrew yourself!

We talk a lot about "enshittification" these days. I've even written about it a couple of times. It's about how platforms start great, get greedy, and slowly turn into user-hostile sludge. But what we rarely talk about is the alternative. What do you do when the product you rely on rots from the inside?

The French have a phrase for this: Démerdez-vous. The literal translation is "unshit yourself". What it actually means is to find a way, even if no one is helping you. When a company becomes too big to fail, or simply becomes dominant in its market, it starts to get worse, drip by drip. You don't even notice it at first. It changes in ways that most people tolerate because the cost of switching is high, and the vendor knows it. But before you despair, before you give up, before you let the system drag you into its pit, try to unscrew yourself with the tools available. If the UI changes, try to find the old UI. Patch the inconvenience. Disable the bullshit. Bend the app back into something humane.
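The versioning trick described above is easy to script. Here is a minimal sketch, with a hypothetical hostname, path, and version string standing in for the reader's real (undisclosed) endpoint; pinning an old API version is often just a matter of rewriting the request URL:

```python
from urllib.parse import urlsplit, urlunsplit

def pin_api_version(url: str, version: str = "v1") -> str:
    """Prefix the URL path with an explicit API version segment.

    Hypothetical helper: the actual old endpoint and version token
    will differ per service; this only illustrates the URL rewrite.
    """
    scheme, netloc, path, query, fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, f"/{version}{path}", query, fragment))

print(pin_api_version("https://reader.example.com/feeds/unread"))
# → https://reader.example.com/v1/feeds/unread
```

Whether a given versioned URL still answers is something only the service in question can tell you; the point is that the old routes sometimes remain live long after the documentation forgets them.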
It might sound impossible at first, but the tools to accomplish this exist and are widely used. Sometimes the escape hatch is sitting right there, buried under three layers of "Advanced" menus. On the web, I hate auto-playing videos, I don't want to receive twelve notifications a day from an app, and I don't care about personalization. But for the most part, these can be disabled. When I download an app, I actually spend time going through its settings. If I care enough to download an app, or if I'm forced to, I'll spend the extra time to ensure that it works to my advantage, not the other way around. When that RSS reader removed features from the UI, but not from their code, I was still able to keep using them.

Another example of this is Reddit. Their new UI is riddled with dark patterns, infinite scroll, and popups. But go to the old interface at old.reddit.com, and you are greeted with a UI that may not look fancy, but was designed with the user in mind, not the company's metrics.

YouTube, likewise, removed the dislike button. While hiding the number of dislikes might spare content creators some hurt, as a consumer, this piece of data served as a filter for lots of spam content. For that, of course, there is the "Return YouTube Dislike" browser extension. Extensions can often help you regain control when popular websites remove functionality that is useful to users but that the service no longer wants to support. There are several tools that enhance YouTube, fix Twitter, and of course uBlock.

It's not always possible to combat enshittification. Sometimes the developer actively enforces their new annoying features and prevents anyone from removing them. In cases like these, there is still something users can do. They can walk away. You don't have to stay in an abusive relationship. You are allowed to leave. When you do, you'll discover that there is an open-source alternative. Or that a small independent app has survived quietly in a corner of the internet.
Or sometimes you'll find that you don't need the app at all. You break your addiction. In the end, "Démerdez-vous" is a reminder that we still have agency in a world designed to take it away. Enshittification may be inevitable, but surrender isn't. There's always a switch to flip, a setting to tweak, a backdoor to exploit, or a path to walk away entirely. Companies may keep trying to box us in, but as long as we can still think, poke, and tinker, we don't have to live with the shit they shovel. At the end of the day, "On se démerde."

iDiallo 3 days ago

We Don't Fix Bugs, We Build Features

As a developer, bugs consume me. When I discover one, it's all I can think about. I can't focus on other work. I can't relax. I dream about it. The urge to fix it is overwhelming. I'll keep working until midnight even when my day should have ended at 6pm. I simply cannot leave a bug unfixed. And yet, when I look at my work backlog, I see a few dozen of them. A graveyard of known issues, each one catalogued, prioritized, and promptly ignored.

How did we get here? How does a profession full of people who are pathologically driven to fix problems end up swimming in unfixed problems? For that, you have to ask yourself: what is the opposite of a bug? No, it's not "no bugs". It's features.

"I apologize for such a long letter - I didn't have time to write a short one."

As projects mature and companies scale, something changes. You may start with a team of developers solving problems, but slowly they become part of an organization that needs processes, measurements, and quarterly planning. Then one day, you are presented with a new term: Roadmap. It's a beautiful, color-coded timeline of features that will delight users and move business metrics. The roadmap is where bugs go to die.

Here's how it happens. A developer discovers a bug and brings it to the team. The product manager asks the only question that matters in their world: "Will this affect our roadmap?" Unless the bug is actively preventing a feature launch or causing significant user churn, the answer is almost always no. The bug gets a ticket, the ticket gets tagged as "tech debt," and it joins the hundreds of other tickets in the backlog hotel, where it will remain indefinitely. (See Rockstar.)

This isn't a jab at product managers. They're operating within a system that leaves them no choice. Agile was supposed to liberate us. The manifesto promised flexibility, collaboration, and responsiveness to change. But somewhere along the way, agile stopped being a philosophy and became a measurement system.
There are staunch supporters of agile who swear by it and blame any flaws on the particular implementation. "You guys are not doing true agile." But when everyone is doing it wrong, you don't blame everyone, you blame the system. We can't all be holding agile wrong! The agile principles are to deliver working software frequently, welcome changing requirements, and maintain technical excellence. But principles don't fit in spreadsheets. Metrics do. And so we got story points. Velocity. Sprint completion rates. Feature delivery counts. Suddenly, every standup and retrospective fed into dashboards that executives reviewed quarterly. And where there are metrics, there are managers trying to make some numbers go up and others go down.

Features are easy to measure. They're discrete, they're visible, and they can be tied to revenue. "We shipped 47 features this quarter, leading to a 12% increase in user engagement." That's a bullet point in your record that gets you promoted. Bugs are invisible in this equation. Sure, they appear on the same Jira board, but their contribution is ephemeral. How do you quantify the value of something that doesn't go wrong? How do you celebrate the absence of a problem? You can't put "prevented 0 crashes by fixing a race condition" on a slide deck.

The system doesn't just deprioritize bugs, it actively ignores them. A team that spends a sprint fixing bugs has nothing to show for it on the roadmap. Their velocity looks identical, but they've "accomplished" nothing that the executives care about. Meanwhile, the team that plows ahead with features, moves fast and breaks things, bugs be damned? They look productive.

Developers want to prioritize bug fixes, performance improvements, and technical debt. These are the things that make software maintainable, reliable, and pleasant to work with. Most developers got into programming because they wanted to fix things, to make systems better. The business prioritizes features that impact revenue.
New capabilities that can be sold, marketed, and demonstrated. Things that exist, not things that don't break. Teams are often faced with a choice: do we fix what's broken, or do we build what's new? And because the metrics, the incentives, and the roadmap all point in one direction, the choice is made for them. This is how you end up with production systems riddled with known bugs that could probably be fixed but won't be tackled. Not because they're not important. Not because developers don't care. But because they're not on the roadmap.

"I apologize for so many bugs. I only had time to build features."

Writing concisely takes more time and thought than rambling. Fixing bugs takes more discipline than shipping features. Building maintainable systems takes more effort than building fast. We've become so busy building that we have no time to maintain what we've built. We're so focused on shipping new things that we can't fix the old things. The roadmap is too full to accommodate quality. Reaching our metric goals is the priority. It's not that we don't know better. It's not even that we don't care. It's that we've built systems, product roadmaps, velocity tracking, and the rest, that make the wrong choice the only rational choice.

I've worked with teams that tried a statistical approach to representing bugs on the roadmap. Basically, you analyze existing projects, look at the bug counts recorded when each feature was built, then calculate the probability of bugs in new work. That number then appears on the roadmap as a color-coded metric. It sounds and looks good in theory, and you can even attach an ROI to bug fixes. But bugs don't work like that. They can be introduced by mistake, by misunderstanding, or sometimes even intentionally, when the business logic itself is flawed.
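The statistical approach described here might look something like the following sketch, with made-up historical counts standing in for what a real team would pull from its issue tracker:

```python
from collections import defaultdict

# Hypothetical historical data: (feature_area, bugs filed after launch).
history = [
    ("checkout", 12), ("checkout", 9),
    ("search", 3), ("search", 5),
    ("auth", 1),
]

def expected_bugs_by_area(records):
    """Average post-launch bug count per feature area.

    A sketch of the 'statistical roadmap' idea: past bug counts become
    a predicted bug load for new features in the same area.
    """
    totals, counts = defaultdict(int), defaultdict(int)
    for area, bugs in records:
        totals[area] += bugs
        counts[area] += 1
    return {area: totals[area] / counts[area] for area in totals}

print(expected_bugs_by_area(history))
# → {'checkout': 10.5, 'search': 4.0, 'auth': 1.0}
```

Which is exactly where the model breaks down: an average tells you nothing about the bug that comes from a misread requirement or a flawed business rule rather than a historical pattern.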
No statistical model will predict the developer who misread the requirements, or the edge case that appears only in production, or the architectural decision that made sense five years ago but creates problems today. Bugs are human problems in human systems. You can't spreadsheet your way out of them. You have to actually fix them. When developers are forced to choose between what they know is right and what the metrics reward, we've built the wrong system. When "I fixed a critical race condition" is less valuable than "I shipped a feature," we've optimized for the wrong things. Maybe the first step is simply acknowledging the problem. We don't fix bugs because our systems don't let us. We don't fix bugs because we only had time to build features. And just like that overly long letter, the result is messier, longer, and ultimately harder to deal with than if we'd taken the time to do it right from the start.

iDiallo 5 days ago

Self-Help Means Help Yourself

For a moment in my life, you couldn't see me without a book in hand. A self-help book, to be precise. I felt like the world was moving, changing, and I was being left behind. Having been raised to look in the mirror before blaming others, I decided that if there was something to improve, it was my very own self. I picked up Dale Carnegie's How to Win Friends and Influence People. Now I can admit it: I never finished reading the book. But I read plenty of others. I devoured all of Robert Kiyosaki's books and felt inspired. If only I had a rich dad. I read the one he wrote with Donald Trump. I was pumped. I was still learning English; I may have misunderstood the whole thing (I can assure you, none of the authors mentioned were involved in writing the book). I joined a club where we would get a new self-help book every month and discuss it. I was in love with the genre. But one thing I noticed in retrospect is that I enjoyed reading more than actually doing anything the books taught.

Here's the thing about self-help books: they're necessarily abstract. If they gave specific examples, those examples wouldn't apply to most people. So they give general advice, more inspiring than practical. And inspiration, while it feels good in the moment, doesn't build anything on its own. Over the years, I learned that advice by itself is useless. Imagine getting writing advice from a pro when you've never written anything. No writing advice can be applied to a blank piece of paper. You can't edit what doesn't exist. You can't improve a sentence you haven't written. What you actually need is to start something, anything, and reevaluate every so often. That's it.

I think about Bob Nystrom, who wrote Crafting Interpreters, a book about building programming languages. What I love about his story isn't just the book itself, but how he wrote it. He did so publicly, chapter by chapter, responding to feedback as he went.
And when he completed the book, he published a reflection on the process, titled Crafting "Crafting Interpreters". He wrote through some of the worst years of his life. His mother was diagnosed with cancer. Loved ones died. The world around him felt like it was falling apart. But he kept writing anyway. Not because he was superhuman or exceptionally disciplined. He kept writing because it was the one thing he could control when so much else was spiraling beyond his grasp. Finishing the book became proof that he could make it through everything else. Skipping a day would have meant the chaos won. Writing became his anchor.

We can always find reasons not to start. The conditions are never perfect. We're still learning. We don't have the right resources. We haven't read enough books yet. But self-help isn't meant to be inspiration porn, something we consume to feel good without changing anything. It's a method for helping yourself. The books, the advice, the strategies, they're all pointing toward the same message: you have to be the one to do it. Nobody can help you get started. Nobody can give you advice that works on a blank page. The only thing that transforms nothing into something is you, sitting down and beginning. Self-help means helping yourself, not someday, not when you're ready, but now. Start messy. Start imperfect. Start without knowing how it ends. Because the secret isn't in the next book or the next piece of advice. The secret is that you already know what you need to do. You just need to help yourself do it.

iDiallo 1 week ago

The Real Cost of Compute

Somewhere along the way, we stopped talking about servers. The word felt clunky, industrial, too tied to physical reality. Instead, we started saying "the cloud". It sounds weightless, infinite, almost magical. Your photos live in the cloud. Your documents sync through the cloud. Your company's entire infrastructure runs in the cloud. I hated the term cloud. I wasn't alone; someone actually created a "cloud to butt" browser extension that was pretty fun and popular. But the world adopted the term, and I had no choice but to go along.

So what is the actual cloud? Why is it hiding behind this abstraction? Well, the cloud is rows upon rows of industrial machines, stacked in massive data centers, consuming electricity at a scale most of us can't even imagine. The cloud isn't floating above us. It's bolted to concrete floors, surrounded by cooling systems, and plugged into power grids that strain under its appetite.

I'm old enough to remember the crypto boom and the backlash that followed. Critics loved to point out that Bitcoin mining consumed as much electricity as entire countries. Argentina, the Netherlands, and many other nations were picked for comparison. But I was not outraged by it at all. My reaction at the time was simpler. Why does it matter, if they pay their electric bill? If you use electricity and compensate for it, isn't that just... how markets work? Turns out, I was missing the bigger picture. And the AI boom has made it impossible to ignore.

When new data centers arrive in a region, everyone's electric bill goes up, even if your personal consumption stays exactly the same. It has nothing to do with fairness and free markets. Infrastructure is not free. The power grids weren't designed for the sudden addition of facilities that consume megawatts continuously. When demand surges beyond existing capacity, utilities pass those infrastructure costs onto everyone.
New power plants get built, transmission lines get upgraded, and residential customers help foot the bill through rate increases. The person who never touches AI, never mines crypto, never even knows what a data center does, that person is now subsidizing the infrastructure boom through their monthly utility payment. The cloud, it turns out, has a very terrestrial impact on your wallet.

We've abstracted computing into its purest conceptual form: "compute." I have to admit, it's my favorite term in tech. "Let's buy more compute." "We need to scale our compute." It sounds frictionless, almost mathematical. Like adjusting a variable in an equation. Compute feels like a slider you can move up and down in your favorite cloud provider's interface. Need more? Click a button. Need less? Drag it down. The interface is clean, the metaphor is seamless, and completely disconnected from the physical reality.

But in the real world, "buying more compute" means someone is installing physical hardware in a physical building. It means racks of servers being assembled, hard drives being mounted, cables being routed. The demand has become so intense that some data center employees have one job and one job only: installing racks of new hard drives, day in and day out, like an industrial assembly line. Every gigabyte of "cloud storage" occupies literal space. Every AI query runs on actual processors that generate actual heat. The abstraction is beautiful, but the reality is concrete and steel.

The cloud metaphor served its purpose. It helped us think about computing as a utility: always available, scalable, detached from the messy details of hardware management. But metaphors shape how we think, and this one has obscured too much for too long. Servers are coming out of their shells.
The foggy cloud is lifting, and we're starting to see the machinery underneath: vast data centers claiming real estate, consuming real water for cooling, and drawing real power from grids shared with homes, schools, and hospitals. This isn't an argument against cloud computing or AI. There's nothing to go back to. But we need to acknowledge their physical footprint. The cloud isn't a magical thing in the sky. It's industry. And like all industry, it needs land, resources, and infrastructure that we all share.
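To put "consuming megawatts continuously" in perspective, here is a back-of-the-envelope calculation. The 100 MW capacity and the 10 MWh-per-household-per-year figure are round illustrative assumptions, not measurements of any particular facility:

```python
# Back-of-the-envelope energy math for a hypothetical 100 MW data center.
# All numbers are illustrative assumptions, not measurements.
capacity_mw = 100
hours_per_year = 24 * 365                      # 8,760 hours
mwh_per_year = capacity_mw * hours_per_year    # energy drawn if run flat out

# Assume a household uses roughly 10 MWh/year (a round figure).
household_mwh = 10
equivalent_households = mwh_per_year / household_mwh

print(f"{mwh_per_year:,} MWh/year ≈ {equivalent_households:,.0f} households")
# → 876,000 MWh/year ≈ 87,600 households
```

A single facility on that scale draws as much energy as a mid-sized city's worth of homes, which is why its arrival reshapes the grid for everyone around it.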

iDiallo 1 week ago

Making a quiet stand with your privacy settings

We recently completed one of the largest refactors of our application, several months in the making, in which we tackled some of our biggest challenges. We paid down technical debt, upgraded legacy software, fortified security, and even made the application faster. After all that, we deployed the application and held our breath, waiting for the user feedback to roll in.

Nothing came. There were no celebratory messages about the improved speed, no complaints about broken features, no comments at all. The deployment was so smooth it was invisible. To the business team, it initially seemed like we had spent vast resources for no visible return. But we knew the underlying truth. Sometimes, the greatest success is defined not by what happens, but by what doesn't happen. The server that doesn't crash. The data breach that doesn't occur. The user who never notices a problem. This is the power of a quiet, proactive defense.

In this digital world, where everything we do leaves a data point, it's not easy to recognize success. When it comes to privacy, taking a stand isn't dramatic. In fact, its greatest strength is its silence. We're conditioned to believe that taking a stand should feel significant. We imagine a public declaration, a bold button that flashes "USER REBELLION INITIATED!" when pressed. Just think about people publicly announcing they are leaving a social media platform. But the reality of any effective digital self-defense is far more mundane.

When I disagree with a website's data collection, I simply click "Reject All." No fanfare. No message telling the company, "This user is privacy-conscious!" My resistance is registered as a non-action. A void in their data stream. When I read that my Vizio smart TV was collecting viewing data, I navigated through a labyrinth of menus to find the "Data Collection" setting and turned it off. The TV kept working just fine.
Nothing happened, except that my private viewing habits were no longer a product to be sold. No little icon appeared in the top corner to signify "privacy-conscious." Right now, many large language model services like ChatGPT ship with "private conversation" settings turned off by default. When I go into the settings and enable the option that says, "Do not use my data for training," there's no confirmation, no sense of victory. It feels like I've done nothing. But I have. This is what proactive inaction looks like.

Forming a new habit is typically about adding an action. Going for a run every morning, drinking a glass of water first thing, reading ten pages a night. But what about the habit of not doing? When you try to simply "not eat sugar," you're asking your brain to form a habit around an absence. There's no visible behavior to reinforce, no immediate sensory feedback to register success, and no clear routine to slot into the habit loop. Instead, you're relying purely on willpower, a finite resource that depletes throughout the day, making evening lapses almost inevitable. Your brain literally doesn't know what to practice when the practice is "nothing." It's like trying to build muscle by not lifting weights. The absence of action creates an absence of reinforcement, leaving you stuck in a constant battle of conscious resistance rather than unconscious automation.

Similarly, the habit of not accepting default settings is a habit of inaction. You are actively choosing not to participate in a system designed to exploit your data. It's hard because it lacks the dopamine hit of a checked box. There's no visible progress bar for "Privacy Secured." But the impact is real. This quiet practice is our primary defense against what tech writer Cory Doctorow calls "enshittification": the process by which platforms decay, first exploiting users, then business customers, until they become useless, ad-filled pages with content sprinkled around.
It's also our shield against hostile software that prioritizes its own goals over yours. Not to blame the victims, but I like to remind people that they have agency over the software and tools they use. And your agency includes the ultimate power to walk away. If a tool's settings are too hostile, if it refuses to respect your "no," then your most powerful setting is the "uninstall" button. Choosing not to use a disrespectful app is the ultimate, and again very quiet, stand.

So I challenge everyone to embrace the quiet. See the "Reject All" button not as a passive refusal, but as an active shield. See the hidden privacy toggle not as a boring setting, but as something worth actively searching for. The next time you download a new app or create a new account, take five minutes. Go into the settings. Look for "Privacy," "Data Sharing," "Personalization," or "Permissions." Turn off what you don't need. Nothing will happen. Your feed won't change, the app won't run slower, and no one will send you a congratulatory email. And that's the whole point. You will have succeeded in the same way our refactor succeeded: by ensuring something unwanted doesn't happen. You've strengthened your digital walls, silently and without drama, and in doing so, you've taken one of the most meaningful stands available to us today.

iDiallo 1 week ago

How Do You Send an Email?

It's been over a year, and I haven't received a single notification email from my web server. It could mean that my $6 VPS is amazing and hasn't gone down once this past year. Or it could mean that my health check service has gone down. Well, this year I received emails from readers telling me my website was down. After some digging, I discovered that my health checker works just fine, but every email it sends is being rejected by Gmail. Unless you use a third-party service, you have little to no chance of getting an email delivered.

Every year, email services seem to become a tad more expensive. When I first started this website, sending emails to my subscribers was free on Mailchimp. Now it costs $45 a month. On Buttondown, as of this writing, it costs $29 a month. What are they doing that costs so much? It seems like sending email is impossibly hard, something you can almost never do yourself. You have to rely on established services if you want any guarantee that your email will be delivered. But is it really that complicated?

Email, just like the web, runs on a basic communication protocol. For you to land on this website, your browser communicated with my web server, did some negotiating, and then my server sent HTML data that your browser rendered on the page. But what about email? Is the process any different? The short answer is no. Email and the web work in remarkably similar fashion. Here's the short version: in order to send me an email, your email client takes the email address you provide, connects to my server, does some negotiating, and then my server accepts the email content you intended to send and saves it. My email client will then take that saved content and notify me that I have a new message from you. That's it. That's how email works. So what's the big fuss about? Why are email services charging $45 just to send ~1,500 emails?
Why is it so expensive, when I can serve millions of requests a day on my web server for a fraction of the cost? The short answer is spam. But before we get to spam, let's fill in the details I've omitted from the examples above: the negotiations. Just how similar are email and web traffic, really?

When you type a URL into your browser and hit enter, here's what happens:

- DNS Lookup: Your browser asks a DNS server, "What's the IP address for this domain?" The DNS server responds with the server's IP address.
- Connection: Your browser establishes a TCP connection with that IP address on port 80 (HTTP) or port 443 (HTTPS).
- Request: Your browser sends an HTTP request: "GET /blog-post HTTP/1.1"
- Response: My web server processes the request and sends back the HTML, CSS, and JavaScript that make up the page.
- Rendering: Your browser receives this data and renders it on your screen.

The entire exchange is direct, simple, and happens in milliseconds. Now let's look at email. The process is similar:

- DNS Lookup: Your email client takes my email address and asks a DNS server, "What's the mail server for example.com?" The DNS server responds with an MX (Mail Exchange) record pointing to my mail server's address.
- Connection: Your email client (or your email provider's server) establishes a TCP connection with my mail server on port 25 (SMTP) or port 587 (for authenticated SMTP).
- Negotiation (SMTP): Your server says, "HELO, I have a message for [email protected]." My server responds: "OK, send it."
- Transfer: Your server sends the email content, headers, body, and attachments using the Simple Mail Transfer Protocol (SMTP).
- Storage: My mail server accepts the message and stores it in my mailbox, which can be a simple text file on the server.
- Retrieval: Later, when I open my email client, it connects to my server using IMAP (port 993) or POP3 (port 110) and asks, "Any new messages?" My server responds with your email, and my client displays it.

Both HTTP and email use DNS to find servers, establish TCP connections, exchange data using text-based protocols, and deliver content to the end user. They're built on the same fundamental internet technologies. So if email is just as simple as serving a website, why does it cost so much more? The answer lies in a problem that both systems share but handle very differently: unwanted third-party writes.

Both web servers and email servers allow outside parties to send them data. Web servers accept form submissions, comments, API requests, and user-generated content. Email servers accept messages from any other email server on the internet. In both cases, this openness creates an opportunity for abuse. Spam isn't unique to email; it's everywhere. My blog used to get around 6,000 spam comments a day. Across the internet, you'll see spam comments on blogs, spam account registrations, spam API calls, spam form submissions, and yes, spam emails.

The main difference is visibility. When spam protection works well, it's invisible. You visit websites every day without realizing that, behind the scenes, CAPTCHAs are blocking bot submissions, rate limiters are rejecting suspicious traffic, and content filters are catching spam comments before they're published. You don't get to see the thousands of spam attempts that hit my blog every day, because of the filtering I've implemented. On a well-run web server, the work is invisible. The same is true for email. A well-run email server silently:

- Checks sender reputation against blacklists
- Validates SPF, DKIM, and DMARC records
- Scans message content for spam signatures
- Filters out malicious attachments
- Quarantines suspicious senders

There is a massive amount of spam. In fact, spam accounts for roughly 45-50% of all email traffic globally. But when the system works, you simply don't see it. If we can combat spam on the web without charging exorbitant fees, email spam shouldn't be that different. The technical challenges are very similar:

- Both require reputation systems
- Both need content filtering
- Both face distributed abuse
- Both require infrastructure to handle high volume

Yet a basic web server on a $5/month VPS can handle millions of requests with minimal spam-fighting overhead. Meanwhile, sending 1,500 emails costs $29-45 per month through commercial services. The difference isn't purely technical. It's about reputation, deliverability networks, and the ecosystem that has evolved around email. Email providers have created a cartel-like system where your ability to reach inboxes depends on your server's reputation, which is nearly impossible to establish as a newcomer. They've turned a technical problem (spam) into a business moat. And we're all paying for it.

Email isn't inherently more complex or expensive than web hosting. Both the protocols and the infrastructure are similar, and the spam problem exists in both domains. The cost difference is mostly artificial. It's the result of an ecosystem that has consolidated around a few major providers who control deliverability. It doesn't help that Intuit owns Mailchimp now. Understanding this doesn't change the fact that you'll probably still need to pay for email services if you want reliable delivery. But it should make you question whether that $45 monthly bill is really justified by the technical costs involved, or whether it's just the price of admission to a gatekept system.

iDiallo 1 week ago

Is 30% of Microsoft's Code Really AI-Generated?

A few months back, news outlets were buzzing with reports that Satya Nadella claimed 30% of the code in Microsoft's repositories was AI-generated. This fueled the hype around tools like Copilot and Cursor. The implication seemed clear: if Microsoft's developers were now "vibe coding," everyone should embrace the method. I have to admit, for a moment I felt like I was being left behind. When it comes to adopting new technology, I typically choose the slow and careful approach. But suddenly, it seemed like the world was moving on without me.

Here's the thing, though: I use Copilot. I use Cursor at work as well. But I can't honestly claim that 30% of my code is AI-generated. For every function an AI generates for me, I spend enough time tweaking and adapting it to our specific use case that I might as well claim authorship. Is that what Microsoft employees are doing? Or are they simply writing prompts or a set of instructions, then letting the LLM write the code, generate the tests, and make the commits entirely on its own?

So I went back to reread what Satya actually said: "I'd say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software." Fair enough. But then I watched the video where he actually said it. Interestingly, it was Zuckerberg who asked the question. What you hear in the interview is a whole lot of "maybe," "probably," and "something like". Not the confidence portrayed in the written headlines.

But here's what I really want to know: how are they tracking this? Are developers labeling all AI-generated code as such? Is there some distinct signature that marks it? How can you even tell when code is AI-generated? Unlike a written article, where we can identify clear patterns, telltale phrasing, word choices that deviate from an author's typical style, code doesn't come with obvious fingerprints. For example, there's no way to tell when a senior developer on my team uses AI. Why?
Because they don't commit code they haven't thoroughly reviewed and understood. They treat AI suggestions like rough drafts, useful starting points that require human judgment and refinement. With junior developers, you might occasionally see a utility function defined for absolutely no reason, or overly generic variable names, or unnecessarily verbose implementations that scream "AI-generated." But these issues rarely make it past the code review process, where more experienced eyes catch and correct them before they reach production. Before LLMs entered the picture, what we worried about was developers copying and pasting code from Stack Overflow without understanding or modifying it. These snippets weren't easy to identify either, unless they broke the logic or introduced bugs that revealed their origin. You couldn't reliably identify copy-pasted code back then, so what makes it any easier to identify AI-generated code now? Both scenarios involve code that works (at least initially) and follows conventional patterns, making attribution nearly impossible without explicit tracking mechanisms. The line between "AI-generated" and "human-written" code has become blurrier than the headlines suggest. And maybe that's the point. When AI becomes just another tool in the development workflow, like syntax highlighting or auto-complete, measuring its contribution as a simple percentage might not be meaningful at all.

iDiallo 2 weeks ago

The App Developer's Attachment Issues

When browsing the web, I still follow rabbit holes. For example, I will click on a link, read an article, find another link in the body, follow that one as well, and keep on going until I get lost in the weeds and appear in wonderland. When I'm reading on my phone, I often have to go back to the browser history to see the trail of websites that led me to my destination. But sometimes, I just can't find my way back. Why? Because somehow, I wasn't reading through the web browser. I was browsing through a webview.

When you are on Instagram and click on a link shared by a friend, the page loads instantly, but something feels off. You are browsing the web, yet you don't see the familiar browser tabs or address bar. You are in a webview. Why a webview and not your favorite browser? Well, this is what I call app attachment issues. App developers don't want you to leave, and the webview is the invisible fence they use to keep you tethered.

When an application loads content within an in-app browser (a webview), you are, technically, using the web. It's running the same rendering engine as a dedicated browser. But the app's sole purpose for doing this is to silo you. They want to maintain control over your experience, ensuring you are never truly free to roam the open internet. The benefit for the developer is that no matter what page you browse, you are perpetually one button click away from being back in their app. It's a mechanism for user retention, a digital leash. Every company, from social media giants to news aggregators, is trying to fit you into their specific bucket, convinced that if they let you leave, you might not come back. They want to maintain that control over your experience, even when you are outside their reach.

On Android, this is super annoying. You might be able to click links and navigate from the initial website to a completely different, unrelated one, but you often cannot manually change the URL.
You are trapped in the current browsing flow, unable to jump to a new destination without first leaving the app or performing a dedicated search. Why are you still under the app's thumb if you're surfing the public web? The answer is always control.

The web is a dangerous place. What if you click on the wrong link and your device gets compromised? We can't protect you in that case. At least, that's what it feels like when clicking on external links on some websites. For example, on LinkedIn, when you click an external link, you are often greeted with a warning message like this: "This link will take you to a page that's not on LinkedIn. Because this is an external link, we're unable to verify it for safety."

On the surface, it appears to be a helpful security measure. The platform is protecting you from the big, bad internet. But the only thing they are truly protecting you from is leaving their app. If the link was already shared by a contact or surfaced on their platform, the implicit due diligence should have been done. Serving up a blanket safety warning for any external link, even those to major news organizations or well-known websites, is just a friction point to discourage you from leaving. It's a psychological barrier designed to make you hesitate, keep you inside the known confines of their platform, and reinforce their control. This security warning is nothing more than the final, passive-aggressive plea in the app's campaign against your freedom.

If the in-app silo were just the web, but within the app, I wouldn't complain. But while developers are focused on retention, the user experience suffers in some infuriating ways. The webview is a fundamentally broken browsing experience for a few core reasons.

The most frustrating drawback is the lack of permanence. Your browsing history is at the mercy of the developer. They can choose to record it, or not record it.
And you will be none the wiser until you are trying to find that article you read just this morning. With my rabbit-hole style of browsing the web, I often stumble upon great articles, helpful tools, or even products that I mean to return to. But if any of those pages were viewed in a webview, they vanish without a trace.

Related to the missing history is the risk of accidental loss. You might be deep into an article, hit the back button to navigate one step back on the site, and instead, the entire webview collapses, dumping you unceremoniously back into the main app feed. Because no history was recorded, there is no way to return to the page you were just on. The article is simply gone.

There is a common counterargument that says, "Most apps have a setting to disable webview and open links directly in your full browser." But there are two problems with that. First, most people never change the default settings. Second, why is this even an option? If the webview uses the browser engine anyway, why should the default setting be the one that compromises the user's web experience? Users do not dive into granular settings menus. The path of least resistance is the path most taken. By defaulting to webview, developers are prioritizing their retention goals over basic utility.

The entire architecture of the web is built on freedom, open access, and a unified browsing experience. By forcing a dedicated web environment, developers are fragmenting the internet and making our lives slightly harder. I'm sure there are some metrics out there that say "using in-app webview increases engagement by x%." But for n=1, aka me, it only increases my disengagement.

All I can say to developers is: it's okay to let go. The remedy for your attachment issues is user freedom. When I click a link, I expect to be in a full browser, with a permanent history, a functional address bar, and true control over my destination.
It's time for applications to trust users, respect the open web, and stop trapping us in the confines of their digital cages. For users: next time you click a link, look for that small icon, often a compass, an arrow, or an ellipsis, then choose to open in browser. It's your internet. It's okay to leave the app. Or even better, never download the apps.
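For what it's worth, a page can often tell when it is being rendered inside an Android webview rather than a full browser: Android WebView user agent strings carry a "wv" token that full Chrome does not. A minimal sketch (the function name is my own, and user agent sniffing is always best-effort):

```typescript
// Minimal sketch: spotting an Android WebView from its user agent string.
// Android WebViews include a "wv" token alongside "Android"; full Chrome does not.
function looksLikeAndroidWebView(userAgent: string): boolean {
  return /Android/.test(userAgent) && /\bwv\b/.test(userAgent);
}

// A site could use this to surface its own "open in your browser" hint,
// since users rarely find the app's buried escape hatch on their own.
```

A site that detects this could, for example, render a banner linking to itself, which most webviews will hand off to the system browser.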

iDiallo 2 weeks ago

What Actually Defines a Stable Software Version?

As a developer, you'll hear these terms often: "stable software," "stable release," or "stable version." Intuitively, it just means you can rely on it. That's not entirely wrong, but when I was new to programming, I didn't truly grasp the technical meaning. For anyone learning, the initial, simple definition of "it works reliably" is a great starting point. But if you're building systems for the long haul, that definition is incomplete.

The intuitive definition: a stable version of software is one that works and that you can rely on not to crash. The technical definition: a stable version of software is one whose API will not change unexpectedly in future updates. A stable version is essentially a guarantee from the developers that the core interface, such as the functions, class names, data structures, and overall architecture you interact with, will remain consistent throughout that version's lifecycle. This means that if your code works with version 1.0.0, it should also work flawlessly with version 1.0.1, 1.0.2, and 1.1.0. Future updates will focus on bug fixes, security patches, and performance improvements, not on introducing breaking changes that force you to rewrite your existing code.

My initial misunderstanding was thinking stability was about whether the software was bug-free or not, similar to how we expect bugs to be present in a beta version. But there was still an upside to this confusion. It helped me avoid the hype cycle, especially with certain JavaScript frameworks. I remember being hesitant to commit to new versions of certain tools (like early versions of React and Angular, though this is true of many fast-moving frameworks and SDKs). Paradigms would shift rapidly from one version to the next. A key concept I'd mastered one month would be deprecated or replaced the next. While those frameworks sit at the cutting edge of innovation, they can also be the antithesis of stability. Stability is about long-term commitment.
Rapid shifts force users to constantly evolve with the framework, making it difficult to stay on a single version without continual, large-scale upgrades. A truly stable software version is one you can commit to for a significant amount of time. The classic example of stability is Python 2. Yes, I know many wanted it to die by fire, but it was first released in 2000 and remained active, receiving support and maintenance until its final update in 2020. That's two decades of stability! I really enjoyed being able to pick up old scripts and run them without any fuss. While I'm not advocating that every tool should last that long, I do think that when we're building APIs or stable software, we should adopt the mindset that this is the last version we'll ever make. This forces us to carefully consider the long-term design of our software. Whenever I see LTS (Long-Term Support) next to an application, I know that the maintainers have committed to supporting, maintaining, and keeping it backward compatible for a defined, extended period. That's when I know I'm working with both reliable and stable software.
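The guarantee described above maps directly onto semantic versioning. As a rough sketch (my own illustration of the convention, not any official algorithm), "stable" means code built against 1.0.0 keeps working on any later 1.x release:

```typescript
// Sketch of the semantic-versioning compatibility promise:
// same major version, and nothing older than what you built against.
type Version = [major: number, minor: number, patch: number];

function isCompatible(installed: Version, builtAgainst: Version): boolean {
  const [maj, min, pat] = installed;
  const [bMaj, bMin, bPat] = builtAgainst;
  if (maj !== bMaj) return false;      // major bump = breaking changes allowed
  if (min !== bMin) return min > bMin; // newer minor adds features, never removes
  return pat >= bPat;                  // patches only fix bugs
}
```

So 1.0.1, 1.0.2, and 1.1.0 all pass against code built on 1.0.0, while 2.0.0 does not, which is exactly the contract a "stable version" promises.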

iDiallo 2 weeks ago

What a Disappointing Blog

Have you ever read a blog post here and thought: Meh? Some articles I write are ideas I've been working on for over a year. I think about them often, then add them to my little note app. Sometimes I'm driving and think of something clever, so I dictate it to my notes app while the kids are fighting in the background. Then, in the middle of the night, I take time away from sleep and start putting the ideas together. All because I challenged myself to publish every other day for an entire year. I do all this, hit the publish button, and... well, and then nothing.

OK, not just nothing. Worse than nothing. A week later, I come back to revisit the article and discover a typo in the very first sentence. I read the entire thing, and it doesn't even make sense. What point was I trying to make? Why did I use that word? Why does it make me want to fall asleep? Why do I do this to myself? For God's sake, I wrote an entire book! When I read some older articles, I'm just as disappointed. Why didn't I add a counterpoint to balance the whole thing? I hope nobody I know ever reads this.

It's weird how I get this feeling when reading my own writing. But I can assure you that when I'm writing, I'm pretty excited about it. I enjoy writing on my blog. These are my words, this is my work, this is how I express the ideas in my mind. For example, I had a blast reading, discovering, and writing about timekeeping in the Star Wars universe. But I had to re-edit it a few dozen times after publishing it.

In fact, I like the process so much that I decided maybe I needed to do more. I should also make recordings of these articles, maybe a podcast-style discussion. That would be amazing. Of course, now that I've started and committed to three recordings a week for all of 2025, listening to any episode is dreadful. My voice cracks, I regret the background music, and some episodes are just painful to listen to. Did I use too much noise canceling? I sound like a robot!
Why can't I say the word "perspective"? Again, the process of turning an article into a script is fun. I went from using my phone as a recording device to a proper microphone. I went from using the microphone backwards (trust me, it's confusing) to finally understanding the settings. I try different recording areas and experiment with different sound presets. The process is fun. The result is frustrating to me. But for some people, those few who send me encouraging emails, who somehow enjoy the content, who challenge my ideas, this ends up being for them. They make it all worth it.

This doubt I have every time I look at things I make, every time I spot the mistakes, is, according to Ira Glass, the result of "the gap." In an old video titled "The Gap," he explains that we go into any creative endeavor because we have taste. Good taste. But whatever we create ends up being a disappointment because it doesn't live up to that taste. This is normal. The only way forward is to keep creating and keep improving. The more we do it, the narrower that gap becomes.

Yes, I might be frustrated with everything I make today, but what I wrote yesterday is a whole lot better than what I did 10 years ago. The creative expressions, the art, they are all improving. But so is the taste. Eventually, I'll be satisfied with my work, or at least accept it. This disappointment isn't the end of it all; it just means there's still room for improvement.

You might find yourself in a similar situation. One where you feel like everything you do sucks, and everyone else is better than you. It's not them, it's you. You just happen to have good taste, and you are trying to live up to it. Keep working, keep improving; it's the only way to narrow that gap. Once you close it, you might just look back and enjoy the fruit of your labor.

iDiallo 3 weeks ago

How We're Trying to Solve Vibe-Coded PRs

When companies start embracing AI, it's only a matter of time before it reaches the engineering teams. For competent developers, AI makes their lives easier. The benefits of tools like Cursor or Copilot are often invisible because developers use them as tools to accelerate their workflow, not replace it. It's confusing when companies claim a specific percentage of their code is "AI-generated," since these tools function as assistants. With that logic in mind, could we say a certain percentage of code was "Stack Overflow copy-pasted"?

But every now and then, someone starts using AI to completely take over their position. They write a prompt to generate code that fixes a task, test that the task is resolved, and then commit the code. Sometimes the code is committed without any further review. These commits typically involve a large number of lines changed, a coding style that differs from the team's conventions, and changes that sometimes make no sense at all. Many developers take PR review comments as personal criticism, so pointing out these problems can feel harsh or rude. As a result, people hold back and let nonsensical code get merged.

To avoid these issues and the politics of "AI vs. Anti-AI," we started implementing a process that helps us address vibe-coded PRs without the criticism. I asked my most senior developer to vibe-code a solution to a relatively simple ticket. After a couple of people approved it, I scheduled a video call (including known vibe-coders) where the senior developer had to explain the PR. Since this was a staged review, I asked detailed questions: Why were certain choices made? Why did the coding style change? Why create a new endpoint instead of adding functionality to existing code? We scrutinized every part of the code. Changes were made, we reviewed again, and the team began to understand what the bar is for our work.

Why go through this dance instead of simply saying "don't vibe code" or "review your code thoroughly"? Because people use LLMs to save time.
If they don't have time to write the code, they certainly won't spend time reading it. What they do is generate code, test it, and if the functionality works, move it forward. It's rare for any vibe-coder to actually read the code they've generated. But seeing the scrutiny placed on these PRs forces developers to spend more time with their code. They realize they need to understand what they're submitting. It's one thing to quickly create features when building an MVP, but the bar is much higher when contributing to existing software. When you write code, part of the process is thinking about your future self and how other developers will read and extend your work. You need to be consistent with the team's style, even if it's not always the optimal choice. The goal is for any developer to read the codebase as if it were written by one person. Just a few days ago, I wrote about how when we use LLMs, we tend not to read the results before passing them to the next person down the chain . Putting a system in place that forces you to understand your work helps both developers and reviewers contribute meaningfully. This is an experiment, and so far, I think it's working. But the world of LLMs is ever-changing, and we haven't settled on the rules yet. Maybe six months from now, vibe-coding will be reliable enough. But until we get there, we need to find ways to ensure we're still producing high-quality code that teams can collectively understand and maintain.

iDiallo 3 weeks ago

The NEO Robot

You've probably seen the NEO home robot by now, from the company 1X. It's a friendly humanoid with a plush-toy face that can work around your house: cleaning, making beds, folding laundry, even picking up after meals. Most importantly, there's the way it looks. Unlike Tesla's "Optimus," which resembles an industrial robot, NEO looks friendly. It has a cute, plush face with round eyes. Something you could let your children play with.

But after watching their launch video, I only had one thing on my mind: battery life. And that's how you know I was tricked. Battery life is four hours on a full charge according to the company, but that's the wrong thing to focus on. Remember when Tesla first announced Optimus? Elon Musk made sure to emphasize one statement: they purposely capped the robot's speed to 5 miles per hour. Then he joked that "you can just outrun it and most likely overpower it." This steered the conversation toward safety in AI and robots, a masterful bit of misdirection from the fact that there was no robot whatsoever at the time. Not even a prototype. Just a person in a suit doing a silly dance.

With NEO, we saw a lot more. The robot loaded laundry into the machine, tidied up the home, did the dishes. Real demonstrations with real hardware. But what they failed to emphasize was just as important: all actions in the video were entirely remote controlled.

Here are the assumptions I was making while watching their video. Once you turn on this robot, it would first need to understand your home. Since it operates as a housekeeper, it would map your space using the dual cameras on its head, saving this information to some internal drive. It would need to recognize you both visually and through your voice; you'd register your face and voice like Face ID. They stated it can charge itself, so the dexterity of its hands must be precise enough to plug itself in autonomously. All reasonable assumptions for a $20,000 "AI home robot," right?
But these are just assumptions. Then the founder mentions you can "teach it new tasks," overseen by one of their experts that you can book at specific times. Since we're not seeing the robot do anything autonomously, I'm left wondering: what does "teaching the robot a skill" even mean?

The NEO is indeed a humanoid robot. But it's not an autonomous AI robot. It's a teleoperated robot that lives in your home. A remote operator from 1X views through its cameras and controls its movements when it needs to perform a task. If that's what they're building, it should be crystal clear. People need to understand what they're buying and the implications that come with it. You're allowing someone from a company to work in your home remotely, using a humanoid robot as their avatar, seeing everything the robot sees. Looking at the videos published by outlets like the Wall Street Journal, even the teleoperated functionality appears limited. MKBHD also offers an excellent analysis that's worth watching.

1X positions this teleoperation as a training mechanism, the "Expert Mode" that generates data to eventually make the robot autonomous. It's a reasonable approach, similar to how Tesla gathered data for Full Self-Driving. But there's a difference: your car's camera feeds helped train a system; NEO's cameras invite a stranger into your most private spaces. The company says it has implemented privacy controls: scheduled sessions, no-go zones, visual indicators when someone's watching, face-blurring technology, etc. These are necessary safeguards, but they don't change the fundamental problem. This is not an autonomous robot. Also, you are acting as a data provider for the company while paying $20,000 for the hardware.

2026 is just around the corner. I expect the autonomous capabilities to be quietly de-emphasized in marketing as we approach the release date. I also expect delays attributed to "high demand" and "ensuring safety standards." I don't expect this robot to deliver in 2026.
If it does, it will be a teleoperated humanoid. Given my privacy concerns, I will probably not be an early or even a late adopter. But I'll happily sit on the sidelines and watch the chaos unfold. A teleoperated humanoid sounds like the next logical step for an Uber or DoorDash. The company should just be clear about what they are building.

iDiallo 3 weeks ago

Why I Remain a Skeptic Despite Working in Tech

One thing that often surprises my friends and family is how tech-avoidant I am. I don't have the latest gadget, I talk about dumb TVs, and Siri isn't activated on my iPhone. The only thing left is to go to the kitchen, take a sheet of tin foil, and mold it into a hat. To put it simply, I avoid tech when I can. The main reason for my skepticism is that I don't like tracking technology. I can't stop it, I can't avoid it entirely, but I will try as much as I can.

Take electric cars, for example. I get excited to see new models rolling out. But over-the-air updates freak me out. Why? Because I'm not the one in control of them. Modern cars now receive software updates wirelessly, similar to smartphones. These over-the-air updates can modify everything from infotainment systems to critical driving functions like powertrain systems, brakes, and advanced driver assistance systems. While this technology offers convenience, it also introduces security concerns: hackers could potentially gain remote access to vehicle systems. The possibility of a hostile takeover went from 0 to 1.

I buy things from Amazon. It's extremely convenient. But I don't feel comfortable having a microphone constantly listening. They may say that they don't listen to conversations, but you can't respond to a command without listening. The device does use trigger words to activate, but it still occasionally activates by accident and starts recording. Amazon acknowledges that it employs thousands of people worldwide to listen to Alexa voice recordings and transcribe them to improve the AI's capabilities. In 2023, the FTC fined Amazon $31 million for violating children's privacy laws by keeping kids' Alexa voice recordings indefinitely and undermining parents' deletion requests. The same goes for Siri. Apple likes to brag about their privacy features, but they still paid $95 million in a Siri eavesdropping settlement.
Vizio TVs took screenshots from 11 million smart TVs and sold viewing data to third parties without users' knowledge or consent. The data was bundled with personal information including sex, age, income, marital status, household size, education level, and home value, then sold to advertisers. The FTC fined Vizio $2.2 million in 2017, but by then the damage was done. This technology, known as automatic content recognition (ACR), isn't limited to Vizio; most smart TV manufacturers use similar tracking. ACR can analyze exactly what's on your screen regardless of source, meaning your TV knows when you're playing video games, watching Blu-rays, or even casting home movies from your phone.

In 2023, Tesla faced a class action lawsuit after reports revealed that employees shared private photos and videos from customer vehicle cameras between 2019 and 2022. The content included private footage from inside customers' garages. One video that circulated among employees showed a Tesla hitting a child on a bike. Tesla's privacy notice states that "camera recordings remain anonymous and are not linked to you or your vehicle," yet employees clearly had access to identify and share specific footage.

Amazon links every Alexa interaction to your account and uses the data to profile you for targeted advertising. While Vizio was ordered to delete the data it collected, the court couldn't force third parties who purchased the data to delete it. Once your data is out there, you've lost control of it forever.

For me, a technological device that I own should belong to me, and me only. But for some reason, as soon as we add the internet to any device, it stops belonging to us. The promise of smart technology is convenience and innovation. The reality is surveillance and monetization. Our viewing habits, conversations, and driving patterns are products being sold without our meaningful consent. I love tech, and I love solving problems. But as long as I don't have control of the devices I use, I'll remain a tech skeptic.
One who works from the inside, hoping to build better solutions. The industry needs people who question these practices, who push back against normalized surveillance, and who remember that technology should serve users, not exploit them. Until then, I'll keep my TV dumb, my Siri disabled, and remain the annoying family member who won't join your Facebook group.

iDiallo 3 weeks ago

None of Us Read the Specs

After using large language models extensively, the same questions keep resurfacing. Why didn't the lawyer who used ChatGPT to draft legal briefs verify the case citations before presenting them to a judge? Why are developers raising issues on projects like cURL using LLMs, but not verifying the generated code before pushing a pull request? Why are students using AI to write their essays, yet submitting the result without a single read-through? The reason is simple: if you didn't have time to write it, you certainly won't spend time reading it. They are all using LLMs as their time-saving strategy. In reality, the work remains undone because they are merely shifting the burden of verification and debugging to the next person in the chain.

AI companies promise that LLMs can transform us all into 10x developers. You can produce far more output, more lines of code, more draft documents, more specifications, than ever before. The core problem is that this initial time saved is almost always spent by someone else to review and validate your output. At my day job, the developers who use AI to generate large swathes of code are generally lost when we ask questions during PR reviews. They can't explain the logic or the trade-offs because they didn't write it, and they didn't truly read it. Reading and understanding generated code defeats the initial purpose of using AI for speed.

Unfortunately, there is a fix for that as well. If PR reviews or verification slow the process down, then the clever reviewer can also use an LLM to review the code at 10x speed. Now, everyone has saved time. The code gets deployed faster. The metrics for velocity look fantastic. But then, a problem arises. A user experiences a critical issue. At this point, you face a technical catastrophe: the developer is unfamiliar with the code, and the reviewer is also unfamiliar with the code.
You are now completely at the mercy of another LLM to diagnose the issue and create a fix, because the essential human domain knowledge required to debug a problem has been bypassed by both parties.

This issue isn't restricted to writing code. I've seen the same dangerous pattern when architects use LLMs to write technical specifications for projects. As an architect whose job is to produce a document that developers can use as a blueprint, using an LLM exponentially improves speed. Where it once took a day to go through notes and produce specs, an LLM can generate a draft in minutes. As far as metrics are concerned, the architect is producing more. Maybe they can even generate three or four documents a day now. As an individual contributor, they are more productive. But that output is someone else's input, and the next person's work depends entirely on the quality of the document.

Just because we produce more doesn't mean we are doing a better job. Plus, our tendency is to not thoroughly vet the LLM's output because it always looks good enough, until someone has to scrutinize it. The developer implementing a feature, following that blueprint, will now have to do the extra work of figuring out if the specs even make sense. If the document contains logical flaws, missing context, or outright hallucinations, the developer must spend time reviewing and reconciling the logic. The worst-case scenario? They decide to save time, too. They use an LLM to "read" the flawed specs and build the product, incorporating and inheriting all the mistakes, and simply passing the technical debt along.

LLMs are powerful tools for augmentation, but we treat them as tools for abdication. They are fantastic at getting us to a first draft, but they cannot replace the critical human function of scrutiny, verification, and ultimate ownership. When everyone is using a tool the wrong way, you can't just say they are holding it wrong.
But I don't see how we can make verification a sustainable part of the process when the whole point of using an LLM is to save time. For now at least, we have to deliberately consider all LLM outputs incorrect until vetted. If we fail to do this, we're not just creating more work for others; we're actively eroding our work, making life harder for our future selves.

iDiallo 4 weeks ago

Why should I accept all cookies?

Around 2013, my team and I finally embarked on upgrading our company's internal software to version 2.0. We had a large backlog of user complaints that we were finally addressing, with security at the top of the list. At the very top was moving away from plain-text passwords. From the outside, the system looked secure. We never emailed passwords, we never displayed them, and we had strict protocols for password rotation and management. But this was a carefully staged performance. The truth was, an attacker with access to our codebase could have downloaded the entire user table in minutes. All our security measures were pure theater, designed to look robust while a fundamental vulnerability sat in plain sight.

After seeing the plain-text password table, I remember thinking about a story that was unfolding around the same time: a 9-year-old boy flew from Minneapolis to Las Vegas without a boarding pass. This was in an era where we removed our shoes and belts for TSA agents to humiliate us. Yet this child was able, without even trying, to bypass all the theater that was built around the security measures. How did he get past TSA? How did he get through the gate without a boarding pass? How was he assigned a seat on the plane? How did he... there are just so many questions. Just like the security measures on our website, it was all a performance, an illusion.

I can't help but see the same script playing out today, not in airports or codebases, but in the cookie consent banners that pop up on nearly every website I visit. It's always a variation of "This website uses cookies to enhance your experience. [Accept All] or [Customize]." Rarely is there a bold, equally prominent "Reject All" button. And when there is, the reject-all button will open a popup where you have to tweak some settings. This is not an accident; it's a dark pattern.
It's the digital equivalent of a TSA agent asking, "Would you like to take the express lane or would you like to go through a more complicated screening process?" Your third option is to turn back and go home, which isn't really an option if you made it all the way to the airport. A few weeks back, I was exploring not just dark patterns but hostile software. Because you don't own the device you paid for, the OS can enforce decisions by never giving you any options. You don't have a choice. Any option you choose will lead you down the same funnel that benefits the company, while giving you the illusion of agency.

So, let's return to the cookie banner. As a user, what is my tangible incentive to click "Accept All"? The answer is: there is none. "Required" cookies are, by definition, non-negotiable for basic site function. Accepting the additional "performance," "analytics," or "marketing" cookies does not unlock a premium feature for me. It doesn't load the website faster or give me a cleaner layout. It does not improve my experience. My only "reward" for accepting all is that the banner disappears quickly. The incentive is the cessation of annoyance, a small dopamine hit for compliance. In exchange, I grant the website permission to track my behavior, build an advertising profile, and share my data with a shadowy network of third parties.

The entire interaction is a rigged game. Whenever I click on the "Customize" option, I'm overwhelmed by a labyrinth of toggles and sub-menus designed to make rejection so tedious that "Accept All" becomes the path of least resistance. My default reaction is to reject everything. It doesn't matter if you use dark patterns; my eyes are trained to read the fine print in a split second. But when that option is hidden, I've resorted to opening my browser's developer tools and deleting the banner element from the page altogether. It's a desperate workaround for a system that refuses to offer a legitimate "no."
Lately, I don't even bother clicking "Reject All." I just delete the elements altogether. Like I said, there is no incentive for me to interact with the menu.

We eventually plugged that security vulnerability in our old application. We hashed the passwords and closed the backdoor, moving from security theater to actual security. The fix wasn't glamorous, but it was a real improvement. The current implementation of "choice" is largely privacy theater. It's a performance designed to comply with the letter of regulations like GDPR while violating their spirit. It makes users feel in control while systematically herding them toward the option that serves corporate surveillance. There is never an incentive for cookie tracking on the user's end. So this theater has to be created to justify selling our data and turning us into products of each website we visit. But if you are like me, don't forget you can always use the developer tools to make the banner disappear. Or use uBlock.

This pattern isn't limited to cookies. On Windows or Google Drive: "Get started" or "Remind me later." Where is "Never show this again"? On Twitter: "See less often" is the only option for an unwanted notification, never "Stop these entirely."
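The post doesn't show how we actually hashed the passwords, so here is a hedged sketch of the general technique: a salted, deliberately slow hash such as PBKDF2, using only Python's standard library. The function names and iteration count are my own choices for illustration, not the original code.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # deliberately slow; tune to your hardware (my assumption)

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest). Store both; never store the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest for the candidate password, compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

With a table like this, an attacker who downloads the user table gets salts and digests rather than passwords, and every guess costs hundreds of thousands of hash iterations.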

iDiallo 1 month ago

Galactic Timekeeping

Yes, I loved Andor. It was such a breath of fresh air in the Star Wars universe. The kind of storytelling that made me feel like a kid again, waiting impatiently for my father to bring home VHS tapes of Episodes 5 and 6. I wouldn't call myself a die-hard fan, but I've always appreciated the original trilogy. After binging both seasons of Andor, I immediately rewatched Rogue One, which of course meant I had to revisit A New Hope again. And through it all, one thing kept nagging at me. One question I had. What time is it?

In A New Hope, Han Solo, piloting the Millennium Falcon through hyperspace, casually mentions: "We should be at Alderaan about 0200 hours." And then it's on to the next scene with R2-D2. Except I'm like, wait a minute. What does "0200 hours" actually mean in an intergalactic civilization? When you're traveling through hyperspace between star systems, each with their own planets spinning at different rates around different suns, what does "2:00 AM" even refer to?

Bear with me, I'm serious. Time is fundamentally local. Here on Earth, we define a "day" by our planet's rotation relative to the Sun. One complete spin gives us 24 hours. A "year" is one orbit around our star. These measurements are essentially tied to our specific solar neighborhood. So how does time work when you're hopping between solar systems as casually as we hop between time zones?

Before we go any further into a galaxy far, far away, let's look at how we're handling timekeeping right now as we begin exploring our own solar system. NASA mission controllers for the Curiosity rover famously lived on "Mars Time" during their missions. A Martian day, called a "sol", is around 24 hours and 40 minutes long. To stay synchronized with the rover's daylight operations, mission control teams had their work shifts start 40 minutes later each Earth day. They wore special watches that displayed time in Mars sols instead of Earth hours.
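The 40-minute daily slip compounds quickly. The published sol length is about 24 h 39 m 35 s; everything else below is my own back-of-the-envelope arithmetic showing how fast that slip walks a "Mars time" shift around the Earth clock.

```python
# A Martian sol is roughly 24 h 39 m 35 s long (published figure).
SOL_MINUTES = 24 * 60 + 39 + 35 / 60        # ~1479.58 minutes
EARTH_DAY_MINUTES = 24 * 60                 # 1440 minutes

# Each Earth day, a shift pinned to Mars time starts this much later:
slip_per_day = SOL_MINUTES - EARTH_DAY_MINUTES   # ~39.6 minutes

def shift_start_offset(n_days: int) -> float:
    """Minutes past the original start time after n Earth days, mod 24 h."""
    return (n_days * slip_per_day) % EARTH_DAY_MINUTES

# The start time sweeps through the entire Earth clock in about 36 days:
days_to_lap = EARTH_DAY_MINUTES / slip_per_day   # ~36.4 Earth days
```

In other words, a "fixed" Mars shift visits every hour of the Earth day roughly once a month.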
Engineers would arrive at work in California at what felt like 3:00 AM one week, then noon the next, then evening, then back to the middle of the night. All while technically working the "same" shift on Mars. Families were disrupted. Sleep schedules were destroyed. And of course, "babysitters don't work on Mars time." And this was just for one other planet in our own solar system. One team member described it as living "perpetually jet-lagged." After several months, NASA had to abandon pure Mars time because it was simply unsustainable for human biology. Our circadian rhythms can only be stretched so much.

With the Artemis missions planning to establish a continuous human presence on the Moon, NASA and international space agencies are now trying to define an even more complicated system: Lunar Standard Time. A lunar "day", from one sunrise to the next, lasts about 29.5 Earth days. That's roughly 14 Earth days of continuous sunlight followed by 14 Earth days of darkness. You obviously can't work for two weeks straight and then hibernate for two more. But that's not all. On the Moon, time itself moves differently. Because of the Moon's weaker gravity and different velocity relative to Earth, clocks on the Moon tick at a slightly different rate than clocks on Earth. It's a microscopic difference (about 56 microseconds per day), but for precision navigation, communication satellites, and coordinated operations, it matters. NASA is actively working to create a unified timekeeping framework that accounts for these relativistic effects while still allowing coordination between lunar operations and Earth-based mission control.

And again, this is all within our tiny Earth-Moon system, sharing the same star. If we're struggling to coordinate time between two bodies in the same gravitational system, how would an entire galaxy manage it?
In Star Wars, the solution, according to the expanded universe lore, is this: "A standard year, also known more simply as a year or formally as Galactic Standard Year, was a standard measurement of time in the galaxy. The term year often referred to a single revolution of a planet around its star, the duration of which varied between planets; the standard year was specifically a Coruscant year, which was the galactic standard. The Coruscant solar cycle was 368 days long with a day consisting of 24 standard hours."

So the galaxy has standardized on Coruscant, the political and cultural capital, as the reference point for time. We can think of it as Galactic Greenwich Mean Time, with Coruscant serving as the Prime Meridian of the galaxy. This makes a certain amount of political and practical sense. Just as we arbitrarily chose a line through Greenwich, England, as the zero point for our time zones, a galactic civilization would need to pick some reference frame. Coruscant, as the seat of government for millennia, is a logical choice. But I'm still not convinced that it is this simple. Are those "24 standard hours" actually standard everywhere, or just on Coruscant?

Let's think through what Galactic Standard Time would actually require. Tatooine has a different rotation period than Coruscant. Hoth probably has a different day length than Bespin. Some planets might have extremely long days (like Venus, which takes 243 Earth days to rotate once). Some might rotate so fast that "days" are meaningless. Gas giants like Bespin might not have a clear surface to even define rotation against. For local populations who never leave their planet, this is fine. They just live by their star's rhythm. But the moment you have interplanetary travel, trade, and military coordination, you need a common reference frame. This was too complicated for me to fully grasp, but here is how I understood it.
The theory of relativity tells us that time passes at different rates depending on your velocity and the strength of the gravitational field you're in. We see this in our own GPS satellites. They experience time about 38 microseconds faster per day than clocks on Earth's surface because they're in a weaker gravitational field, even though they're also moving quickly (which slows time down). Both effects must be constantly corrected, or GPS coordinates would drift by kilometers each day.

Now imagine you're the Empire trying to coordinate an attack. One Star Destroyer has been orbiting a high-gravity planet. Another has been traveling at relativistic speeds through deep space. A third has been in hyperspace. When they all rendezvous, their clocks will have drifted. How much? Well, we don't really know the physics of hyperspace or the precise gravitational fields involved, so we can't say. But it wouldn't be trivial.

Even if you had perfectly synchronized clocks, there's still the problem of knowing what time it is elsewhere. Light takes time to travel. A lot of time. Earth is about 8 light-minutes from the Sun, meaning that if the Sun exploded right now, we wouldn't know for 8 minutes. Voyager 1, humanity's most distant spacecraft, is currently over 23 light-hours away. A signal from there takes nearly a full Earth day to reach us. The Star Wars galaxy is approximately 120,000 light-years in diameter (according to the lore, again). Even with the HoloNet (their faster-than-light communication system), there would still be transmission delays, signal degradation, and the fundamental question of "which moment in time are we synchronizing to?" If Coruscant sends out a time signal, and a planet on the Outer Rim receives it three days later, whose "now" are they synchronizing to? In relativity, there is no universal "now." Time is not an absolute, objective thing that ticks uniformly throughout the universe. It's relative to your frame of reference.
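To put a number on the GPS example above: multiplying the commonly cited 38 microseconds/day net offset by the speed of light gives the often-quoted kilometers-per-day of ranging error. This is a rough sketch; real receivers deal with a more complicated error budget.

```python
# Net relativistic clock offset for GPS satellites: about +38 microseconds/day
# (roughly +45 us/day from weaker gravity, -7 us/day from orbital velocity).
SPEED_OF_LIGHT_KM_PER_S = 299_792.458
drift_s_per_day = 38e-6

# An uncorrected clock error becomes a ranging error at the speed of light:
error_km_per_day = drift_s_per_day * SPEED_OF_LIGHT_KM_PER_S  # ~11.4 km/day
```

So a clock error you could never notice on a wristwatch would push your position fix off by more than ten kilometers after a single day.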
On Earth, we all roughly share the same frame of reference, so we can agree on UTC and time zones. But in a galaxy with millions of worlds, each moving at a different velocity, each sitting in a different gravitational field, with ships constantly jumping through hyperspace, which frame of reference do you pick? You could arbitrarily declare "Coruscant's reference frame is the standard," but that doesn't make the physics go away:

- A clock on a planet with stronger gravity runs slower than one on a planet with weaker gravity
- A clock on a fast-moving ship runs slower than one on a stationary planet
- Hyperspace travel, which somehow exceeds the speed of light, would create all kinds of relativistic artifacts

A ship traveling at near-light-speed would still experience time differently. Any rebel operation requiring split-second timing would fall apart. Despite all this complexity, the characters in Star Wars behave as if time is simple and universal. They seem to use a dual-time system.

Galactic Standard Time (GST) would be for official, galaxy-wide coordination:

- Military operations ("All fighters, attack formation at 0430 GST")
- Senate sessions and government business
- Hyperspace travel schedules
- Banking and financial markets
- HoloNet news broadcasts

When Mon Mothma coordinates with Rebel cells across the galaxy in Andor, they're almost certainly using GST. When an X-Wing pilot gets a mission briefing, the launch time is in GST so the entire fleet stays synchronized.

Local Planetary Time (LPT) would be for daily life:

- Work schedules
- Sleep cycles
- Business hours
- Social conventions ("let's meet for lunch")

The workday on Ferrix follows Ferrix's sun. A cantina on Tatooine opens when Tatooine's twin suns rise. A farmer on Aldhani plants crops according to Aldhani's seasons. A traveler would need to track both, like we carry smartphones with clocks showing both home time and local time. An X-Wing pilot might wake up at 0600 LPT (local dawn on Yavin 4) for a mission launching at 1430 GST (coordinated across the fleet).

This is something I couldn't let go of when watching the show. In Andor, Cassian often references "night" and "day", saying things like "we'll leave in the morning" or "it's the middle of the night." When someone on a spaceship says "it's the middle of the night," or even "yesterday," what do they mean? There's no day-night cycle in space. They're not experiencing a sunset. The most logical explanation is that they've internalized the 24-hour Coruscant cycle as their personal rhythm.

"Night" means the GST clock reads 0200, and the ship's lights are probably dimmed to simulate a diurnal cycle, helping regulate circadian rhythms. "Morning" means 0800 GST, and the lights brighten. Space travelers have essentially become Coruscant-native in terms of their biological and cultural clock, regardless of where they actually are. It's an artificial rhythm, separate from any natural cycle, but necessary for maintaining order and sanity in an artificial environment.

I really wanted to present this in a way that makes sense. But the truth is, realistic galactic timekeeping would be mind-numbingly complex. You'd somehow need:

- Relativistic corrections for every inhabited world's gravitational field
- Constant recalibration for ships entering and exiting hyperspace
- A faster-than-light communication network that somehow maintains causality
- Atomic clock networks distributed across the galaxy, all quantum-entangled or connected through some exotic physics
- Sophisticated algorithms running continuously to keep everything synchronized
- Probably a dedicated branch of the Imperial bureaucracy just to maintain the Galactic Time Standard

It would make our International Telecommunication Union's work on UTC look like child's play. But Star Wars isn't hard science fiction. It's a fairy tale set in space, a story about heroes, empires, and rebellions. The starfighters make noise in the vacuum of space. The ships bank and turn like WWII fighters despite having no air resistance. Gravity works the same everywhere regardless of planet size. So when Han Solo says "0200 hours," just pretend he is in Kansas.

We accept that somewhere, somehow, the galaxy has solved this complex problem. Maybe some genius inventor in the Old Republic created a MacGuffin that uses hyperspace itself as a universal reference frame, keeping every clock in the galaxy in perfect sync through some exotic quantum effect. Maybe the most impressive piece of technology in the Star Wars universe isn't the Death Star, which blows up, or the hyperdrive, which seems to fail half the time. The true technological and bureaucratic marvel is the invisible, unbelievably complex clock network that must be running flawlessly, constantly, behind the scenes across 120,000 light-years. It suggests deep-seated control, stability, and sheer organizational power for the Empire. That might be the real foundation of galactic power, hidden right there in plain sight. ... or maybe the Force did it!

Maybe I took this a bit too seriously. But along the way, I was having too much fun reading about how NASA deals with time and the deep lore behind Star Wars. I'm almost starting to understand why the Empire is trying to keep those pesky rebels at bay. I enjoyed watching Andor. Remember, Syril is a villain. Yes, you are on his side sometimes, and they made him look human, but he is still a bad guy. There, I said it. They can't make a third season because Rogue One is what comes next. But I think I've earned the right to just enjoy watching Cassian Andor glance at his chrono and say "We leave at dawn," wherever and whenever that is.

iDiallo 1 month ago

Is RSS Still Relevant?

I'd like to believe that RSS is still relevant and remains one of the most important technologies we've created. The moment I built this blog, I made sure my feed was working properly. Back in 2013, the web was already starting to move away from RSS. Every few months, an article would go viral declaring that RSS was dying or dead. Fast forward to 2025, and those articles are nonexistent; most people don't even know what RSS is.

One of the main advantages of an RSS feed is that it allows me to read news and articles without worrying about an algorithm controlling how I discover them. I have a list of blogs I'm subscribed to, and I consume their content chronologically. When someone writes an article I'm not interested in, I can simply skip it. I don't need to train an AI to detect and understand the type of content I don't like. Who knows, the author might write something similar in the future that I do enjoy. I reserve the agency to judge for myself. The fact that RSS links aren't prominently featured on blogs anymore isn't really a problem for me. I have the necessary tools to find them and subscribe on my own. In general, people who care about RSS already know how to subscribe.

Since I have this blog and have been posting regularly this year, I can actually look at my server logs and see who's checking my feed. From January 1st to September 1st, 2025, there were a total of 537,541 requests to my RSS feed. RSS readers often check websites at timed intervals to detect when a new article is published. Some are very aggressive and check every 10 minutes throughout the day, while others have somehow figured out my publishing schedule and only check a couple of times daily. RSS readers, or feed parsers, don't always identify themselves. The most annoying name I've seen is just , probably a Node.js script running on someone's local machine. However, I do see other prominent readers like Feedly, NewsBlur, and Inoreader.
Here's what they look like in my logs:

There are two types of readers: those from cloud services like Feedly that have consistent IP addresses you can track over time, and those running on user devices. I can identify the latter as user devices because users often click on links and visit my blog with the same IP address. So far this year, I've seen 1,225 unique reader names. It's hard to confirm whether they're truly unique, since some are the same application with different versions. For example, Tiny Tiny RSS has accessed the website with 14 different versions, from version 22.08 to 25.10. I've written a script to extract as many identifiable readers as possible while ignoring the generic ones that just use common browser user agents. Here's the list of RSS readers and feed parsers that have accessed my blog: Raw list of RSS user agents here

RSS might be irrelevant on social media, but that doesn't really matter. The technology is simple enough that anyone who cares can implement it on their platform. It's just a fancy XML file. It comes installed and enabled by default on several blogging platforms. It doesn't have to be the de facto standard on the web, just a good way for people who are aware of it to share articles without being at the mercy of dominant platforms.
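As an aside, the extraction script mentioned above isn't published in the post. A minimal sketch of the idea might look like the following; the combined log format, the path filter, and the "generic browser" heuristic are all my assumptions, not the actual script.

```python
import re
from collections import Counter

# The quoted user-agent field at the end of a combined-format access log line.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

# Plain browser user agents tell us nothing about the reader behind them.
GENERIC_PREFIXES = ("Mozilla/5.0", "Opera/")

def count_feed_readers(log_lines):
    """Tally identifiable feed readers, collapsing versions into one name."""
    counts = Counter()
    for line in log_lines:
        if "/rss" not in line and "/feed" not in line:
            continue  # only count hits on the feed itself
        match = UA_PATTERN.search(line)
        if not match:
            continue
        ua = match.group(1)
        if not ua or ua.startswith(GENERIC_PREFIXES):
            continue  # skip empty and generic browser agents
        # "Tiny Tiny RSS/22.08" and "Tiny Tiny RSS/25.10" become one reader.
        counts[ua.split("/")[0].strip()] += 1
    return counts
```

Run over a year of access logs, something along these lines would produce the kind of reader tally described above.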

iDiallo 1 month ago

The TikTok Model is the Future of the Web

I hate to say it, but when I wake up in the morning, the very first thing I do is check my phone. First I turn off my alarm; I've made it a habit to wake up before it goes off. Then I scroll through a handful of websites. Yahoo Finance first, because the market is crazy. Hacker News, where I skim titles to see if AWS suffered an outage while I was sleeping. And then I put my phone down before I'm tempted to check my Twitter feed. I've managed to stay away from TikTok, but the TikTok model is finding its way onto every user's phone whether we like it or not.

On TikTok, you don't surf the web. You don't think of an idea and then research it. Instead, based entirely on your activity in the app, their proprietary algorithm decides what content will best suit you. For their users, this is the best thing since sliced bread. For the tech world, this is the best way to influence your users. Now, the TikTok model is no longer reserved for TikTok but has spread to all social media. What worries me is that it's also going to infect the entire World Wide Web.

Imagine this for a second: You open your web browser. Instead of a search bar or a list of bookmarks, you're greeted by an endless, vertically scrolling stream of content. Short videos, news snippets, product listings, and interactive demos. You don't type anything; you just swipe what you don't like and tap what you do. The algorithm learns, and soon it feels like the web is reading your mind. You're served exactly what you didn't know you wanted. Everything is effortless, because the content you see feels like something you would have searched for yourself. With AI integrations like Google's Gemini being baked directly into the browser, this TikTok-ification of the entire web is the logical next step. We're shifting from a model of surfing the web to one where the web is served to us. This looks like peak convenience.
If these algorithms can figure out what you want to consume without you having to search for it, what's the big deal? The web is full of noise, and any tool that can cut through the clutter and surface the gems should be a powerful discovery tool. But reality doesn't work this way. There's something that always gets in the way: incentives. More accurately, company incentives.

When I log into my Yahoo Mail (yes, I still have one), the first bolded email on top isn't actually an email. It's an ad disguised as an email. When I open the Chrome browser, I'm presented with "Sponsored content" I might be interested in. Note that Google Discover is supposed to be the ultimate tool for discovering content, but their incentives are clear: they're showing you sponsored content first. The model for content that's directly served to you is designed to get you addicted. It isn't designed for education or fulfillment; it's optimized for engagement. The goal is to provide small, constant dopamine hits, keeping you in a state of perpetual consumption without ever feeling finished. It's browsing as a slot machine, not a library.

What happens when we all consume a unique, algorithmically-generated web? We lose our shared cultural space. After the last episode of Breaking Bad aired, I texted my coworkers: "Speechless." The reply was, "Best TV show in history." We didn't need more context to understand what we were all talking about. With personalized content, this shared culture is vanishing. The core problem isn't algorithmic curation itself, but who it serves. The algorithms are designed to benefit the company that made them, not the user. And as the laws of "enshittification" dictate, any platform that locks in its users will eventually turn the screws, making the algorithm worse for you to better serve its advertisers or bottom line. Algorithmic solutions often fix problems that shouldn't exist in the first place. Think about your email.
The idea of "algorithmically sorted email" only makes sense if your inbox is flooded with spam, newsletters you never wanted, and automated notifications. You need a powerful AI to find the real human messages buried in the noise. But here's the trick: your email shouldn't be flooded with that junk to begin with. If we had better norms, stricter regulations, and more respectful systems, your inbox would contain only meaningful correspondence. In that world, you wouldn't want an algorithm deciding what's important. You'd just read your emails.

The same is true for the web. The "noise" the TikTok model promises to solve (the SEO spam, the clickbait, the low-value content) is largely a product of an ad-driven attention economy. Instead of fixing that root problem, the algorithmic model just builds a new, even more captivating layer on top of it. It doesn't clean up the web; it just gives you a more personalized and addictive filter bubble to live inside.

The TikTok model of the web is convenient, addictive, and increasingly inevitable. But it's not the only future. It's the path of least resistance for platforms seeking growth and engagement at all costs. There is an alternative, though. No, you don't have to demand more from these platforms. You don't have to vote for a politician. You don't even have to do much. The very first thing to do is remember your own agency. You are in control of the web you see and use. Change the default settings on your device. Delete the apps that are taking advantage of you. Use an ad blocker. If you find creators making things you like, look for ways to support them directly. Be the primary curator of your digital life. It requires some effort, of course. But it's worth it, because the alternative is letting someone else decide what you see, what you think about, and how you spend your time. The web can still be a tool for discovery and connection rather than a slot machine optimized for your attention.
You just have to choose to make it that way.

iDiallo 1 month ago

No Satisfaction Guaranteed

I use Apple products mostly for work. When it comes to iPhone vs Android, I need access to my file system, so I choose Android any day. But the last thing I'd say is that Apple products suck. Whether it's the UI, screen quality, laptops, or tablets, Apple has done an amazing job. Yet every time there's a new iteration, someone will write about how much Apple sucks right now. The same happens with new Android phones too. There's no way to satisfy all users. No matter what you do, someone will complain that your product is now worse than ever.

This isn't entirely the fault of users. There's a system in place that conditions us to have high expectations as we critique. We're taught that a company's ultimate goal is to create a perfect, successful product, which I believe Apple has succeeded in doing. But what happens after success? I'd argue that this moment of peak success is also the beginning of a crisis.

Imagine you have an idea. The best idea. You turn it into a product. A masterpiece. It's durable, high-quality, and cleverly solves a problem. You launch it, and everyone buys it. The world is happy. You've won. So, what now? Logically, you should be able to enjoy the spoils. You've made your money and delivered genuine value. Your work here is done. But the modern economy doesn't work that way. A company isn't a one-hit wonder; it's an entity that must survive, and survival is defined by one thing: infinite growth. Once you've sold your perfect product to everyone who wants it, you hit a wall. Your success becomes your ceiling. To survive, you must now convince your satisfied customers to buy again. This is where the machine starts to break us. You can't just sell the same thing. People already have it! So you're forced to create something new. You leverage the trust from your first success and slap the same branding on a follow-up. But this new product must be different. It must be "better," or at least appear that way.
It needs new features, a new design, a new reason for people to open their wallets. If you fail, those who are watching, your competitors, learn from your mistake: "Never give your best on the first try." Even if you have the knowledge and ability to create the "perfect," lifelong product, you're discouraged from doing so. Instead, you release a dumbed-down version. You hold back the best features for "Version 2.0" or the "Pro" model. You design for planned obsolescence, either in function or in fashion. You're not building a solution anymore; you're building a stepping stone to the next product. Suddenly, you're making Marvel movies. In the end, the good guys defeat the enemy. They save the city. But right before the credits roll, a new problem is introduced. You can't leave satisfied. You must watch the next installment. The best smartphones slow down, important software becomes unsupported, last year's model suddenly looks outdated. This isn't always an accident; it's often a feature of the system. The goal is to keep us on a treadmill. A perpetual motion machine of consumption. Our satisfaction is a problem for business. A truly satisfied customer is a dead end. A customer who is almost satisfied but sees a brighter, shinier solution on the horizon is the engine of growth. We've built an economic system where success is no longer measured by problems solved, but by the ability to manufacture new desires. Companies aren't rewarded for creating lasting value. They're rewarded for creating lasting dependency. The better a company gets at solving our problems, the more desperately it must invent new ones. This cycle doesn't just affect products; it shapes how we think about satisfaction itself. We've internalized the idea that contentment is stagnation, that last year's perfectly functional device is somehow insufficient. We've learned to mistake novelty for progress and updates for improvement. 
Instead of always waiting for the next version that doesn't suck, we should start saying: "This is enough. This works. You don't need to buy again." In a system that forgot how to breathe, maybe the next big innovation will be learning how to slow down.

iDiallo 1 month ago

Why We Don't Have Flying Cars

Imagine this: You walk up to your driveway where your car is parked. You reach for the handle that automatically senses your presence, confirms your identity, and opens to welcome you in. You sit down, the controls appear in front of you, and your seatbelt secures itself around your waist. Instead of driving forward onto the pavement, you take off. You soar into the skies like an eagle and fly to your destination. This is what technology promises: freedom, power, and something undeniably cool. The part we fail to imagine is what happens when your engine sputters before takeoff. What happens when you reach the sky and there are thousands of other vehicles in the air, all trying to remain in those artificial lanes? How do we deal with traffic? Which directions are we safely allowed to go? And how high? We have flying cars today. They're called helicopters. In understanding the helicopter, we understand why our dream remains a dream. There's nothing romantic about helicopters. They're deafeningly loud and incredibly expensive to buy and maintain. They require highly skilled pilots, are dangerously vulnerable to engine failure, and present a logistical nightmare of three-dimensional traffic control. I can't even picture what a million of them buzzing between skyscrapers would look like. Chaos, noise pollution, and a new form of gridlock in the sky. Even with smaller drones, as the technology evolves and becomes familiar, cities are creating regulations around them, sucking all the fun and freedom out in favor of safety and security. This leads me to believe that the whole idea of flying cars and drones is more about freedom than practicality. And unregulated freedom is impossible. This isn't limited to flying cars. The initial, pure idea is always intoxicating. But the moment we build a prototype, we're forced to confront the messy reality. In 1993, a Japanese man brought a video phone to demo for my father as a new tech to adopt in our embassy. 
I was only a child, but I remember the screen lighting up with a video feed of the man sitting right next to my father. I could only imagine the possibilities. It was something I thought only existed in sci-fi movies. If this was possible, teleportation couldn't be too far away. In my imagined future, we'd sit at a table with life-like projections of colleagues from across the globe, feeling as if we were in the same room. It would be the end of business travel, a world without borders.

But now that the technology is ubiquitous, the term "Zoom fatigue" is trending. It's ironic when I get on a call and see that 95% of my colleagues have their cameras turned off. In movies, communication was spontaneous. You press a button, your colleague appears as a hologram, and you converse. In reality, there's a calendar invite, a link, and the awkward "you're on mute!" dance. It's a scheduled performance, not an organic interaction. And then there are people who have perfect lighting, high-speed internet, and a quiet home office. And those who don't. Video calls have made us realize the importance of physical space and connection. Facebook's metaverse didn't resolve this.

Imagine having a device that holds all of human knowledge at the click of a button. For generations, this was the ultimate dream of librarians and educators. It would create a society of enlightened, informed citizens. And we got the smartphone. Despite being a marvel of technology, the library of the world at your fingertips, it hasn't ushered us into utopia. The attention economy it brought along has turned it into a slot machine designed to hijack our dopamine cycles. You may have Wikipedia open in one tab, but right next to it is TikTok. The medium has reshaped the message from "seek knowledge" to "consume content." While you have access to information, misinformation is just as rampant.
The constant stimulation kills moments of quiet reflection, which are often the birthplace of creativity and deep thought.

In The Machine Stops by E.M. Forster, every desire, whether it's food, a device, or toilet paper, can be satisfied by pulling a lever. The machine delivers everything. With Amazon, we've created a pretty similar scenario. I ordered replacement wheels for my trash bin one evening, expecting them to arrive after a couple of days. The very next morning, they were waiting at my doorstep. Amazing.

But this isn't magical. Behind it are real human workers who labor without benefits, job security, or predictable income. They have an algorithmic boss that can be more demanding than a human one. That promise of instant delivery has created a shadow workforce of people dealing with traffic, poor weather, and difficult customers, all while racing against a timer. The convenience for the user is built on the stress of the driver. The dream of a meal from anywhere didn't account for the reality of our cities now being clogged with double-parked delivery scooters and a constant stream of gig workers.

Every technological dream follows the same pattern. The initial vision is pure, focusing only on the benefit: the freedom, the convenience, the power. But reality is always a compromise, a negotiation with physics, economics, and most importantly, human psychology and society. We wanted flying cars. We understood the problems. And we got helicopters with a mountain of regulations instead. That's probably for the best.

The lesson isn't to stop dreaming or stop innovating. It's to dream with our eyes open. When we imagine the future, we need to ask not just "what will this enable?" but also "what will this cost?" Not in dollars, but in human terms. In stress, inequality, unintended consequences, and the things we'll lose along the way. We're great at imagining benefits and terrible at predicting costs.
And until we get better at the second part, every flying car we build will remain grounded by the weight of what we failed to consider.