Latest Posts (20 found)
iDiallo 3 days ago

Let users zoom in on mobile devices

This is a bit of a rant. Maybe my eyes are not as good as they used to be. When I read an article that has pictures in it, I like to zoom in to see the details. You might think this makes no sense, since I just have to pinch the screen to zoom in. You would be right, but some websites intentionally prevent you from zooming in.

Here is an example, the straw that broke the camel's back so to speak. I was reading an interesting article on Substack about kids who ran away in the 60s, and it has these pictures of letters from those kids. Handwritten letters that complement the story and that I really wanted to read. But have you tried reading text from a picture in an article on a phone? Again, it could just be what happens when you spend 35 years in front of screens.

CSS alone is not enough to properly make a page responsive on a mobile device. The browser needs to know how we want to size the viewport. For that we have the viewport meta tag, which gives the browser a hint on how to size the page. Since we started making pages responsive yesteryear, I've relied on a single configuration and have rarely ever found a reason to change it: the width is set to the current device's width, mobile or desktop, it doesn't matter, and the initial-scale is set to 1. The documentation is a bit confusing; I consider the scale to just be the initial zoom level. That's really all you need to know about the viewport if you are building a webpage and want to make it display properly on a mobile device.

But of course, the article I'm complaining about has different settings. The properties I'm complaining about are user-scalable=no and maximum-scale=1. The first one says users can't zoom in, period. Why would you prevent users from zooming in? This is such a terrible setting that you can configure your browser to ignore it. But for good measure, they added maximum-scale=1, which means even if you are allowed to zoom, the maximum zoom level is one... which means you can't zoom.
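For reference, here is the viewport configuration I rely on next to the zoom-blocking variant described above (reconstructed from the description; the offending site's exact markup may differ):

```html
<!-- The configuration I've always used: size to the device, start at 1x, let users pinch-zoom -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- The anti-pattern: zooming disabled, and capped at 1x for good measure -->
<meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no, maximum-scale=1">
```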
Yes, I disabled zoom to make a point.

It's a terrible experience all the way around. When I read articles that have pictures, I can't zoom in! I can't properly look at the pictures. There are a few platforms that I've noticed have these settings; Substack and Medium are the most annoying. Now, when I know an article is from those platforms, I just ignore it. The only time you ever need to override users' zooming is if it's a web game. Other than that, it's just plain annoying.

iDiallo 5 days ago

We Should Call Them Macroservices

I love the idea of microservices. When there's a problem on your website, you don't need to fix and redeploy your entire codebase. If the issue only affects your authentication service, you can deploy just that one component and call it a day. You've isolated the authentication feature into an independent microservice that can be managed and maintained on its own. That's the theory. The reality is often different.

Microservices are a software architecture style where an application is built as a collection of small, independent, and loosely coupled services that communicate with each other. The "micro" in the name implies they should be small, and they usually start that way. When you first adopt this philosophy, all services are genuinely small and build incredibly fast. At this stage, you start questioning why you ever thought working on a monolith was a good idea. I love working on applications where the time between pushing a change and seeing its effect is minimal. The feedback loop is tight, deployments are quick, and each service feels manageable.

But I've worked long enough in companies adopting this style to watch the transformation. Small becomes complex. Fast becomes extremely slow. Cheap becomes resource-intensive. Microservices start small, then they grow. And grow. And the benefits you once enjoyed start to vanish.

For example, your authentication service starts with just login and logout. Then you add password reset. Then OAuth integration. Then multi-factor authentication. Then session management improvements. Then API key handling. Before you know it, your "micro" service has ballooned to thousands of lines of code, multiple database tables, and complex business logic. When you find yourself increasing the memory allocation on your Lambda functions by 2x or 3x, you've reached this stage. The service that once spun up in milliseconds now takes seconds to cold start. The deployment that took 30 seconds now takes 5 minutes.
If speed were the only issue, I could live with it. But as services grow and get used, they start to depend on one another. When using microservices, we typically need an orchestration layer that consumes those services. Not only does this layer grow over time, but it's common for the microservices themselves to accumulate application logic that isn't easy to externalize. A service that was supposed to be a simple data accessor now contains validation rules, business logic, and workflow coordination.

Imagine you're building an e-commerce checkout flow. You might have:

- An inventory service to check stock
- A pricing service to calculate totals
- A payment service to process transactions
- A shipping service to calculate delivery options
- A notification service to send confirmations

Where does the logic live that says "only charge the customer if all items are in stock"? Or "apply the discount before calculating shipping"? This orchestration logic has to live somewhere, and it often ends up scattered across multiple services or duplicated in various places.

As microservices grow, it's inevitable that they grow teams around them. A team specializes in managing a service and becomes the domain expert. Not a bad thing on its own, but it becomes an issue when someone debugging a client-side problem discovers the root cause lies in a service only another team understands. A problem that could have been solved by one person now requires coordination, meetings, and permissions to identify and resolve.

For example, a customer reports that they're not receiving password reset emails. The frontend developer investigates and confirms the request is being sent correctly. The issue could be:

- The account service isn't triggering the email request properly
- The email service is failing to send messages
- The email service is sending to the wrong queue
- The notification preferences service has the user marked as opted-out
- The rate limiting service is blocking the request

Each of these components is owned by a different team. What should be a 30-minute investigation becomes a day-long exercise in coordination. The feature spans several microservices, but domain experts only understand how their specific service works. There's a disconnect between how a feature functions end-to-end and the teams that build its components.

When each microservice call requires an actual HTTP request (or message queue interaction), things get slower. Loading a page that requires data from several dependent services, each taking 50-100 milliseconds, means those latencies quickly compound. Imagine for a second you are displaying a user profile page. Here is the data that's being loaded:

- User account details (Account Service: 50ms)
- Recent orders (Order Service: 80ms)
- Saved payment methods (Payment Service: 60ms)
- Personalized recommendations (Recommendation Service: 120ms)
- Notification preferences (Settings Service: 40ms)

If these calls happen sequentially, you're looking at 350ms just for service-to-service communication, before any actual processing happens. Even with parallelization, you're paying the network tax multiple times over. In a monolith, this would be a few database queries totaling perhaps 50ms.

There are some real benefits to microservices, especially when you have good observability in place. When a bug is identified via distributed tracing, the team that owns the affected service can take over the resolution process. Independent deployment means that a critical security patch to your authentication service doesn't require redeploying your entire application. Different services can use different technology stacks suited to their specific needs. These address real pain points, and that's why we are attracted to this architecture in the first place.

But microservices are not a solution to every architectural problem. I always say if everybody is "holding it wrong," then they're not the problem; the design is. Microservices have their advantages, but they're just one option among many architectural patterns. To build a good system, we don't have to exclusively follow one style.

Maybe what many organizations actually need isn't microservices at all, but what I'd call "macroservices": larger, more cohesive service boundaries that group related functionality together. Instead of separate services for user accounts, authentication, and authorization, combine them into an identity service. Instead of splitting notification into separate services for email, SMS, and push notifications, keep them together where the shared logic and coordination naturally live.

The goal should be to draw service boundaries around business capabilities and team ownership, not around technical functions. Make your services large enough that a feature can live primarily within one service, but small enough that a team can own and understand the entire thing. Microservices promised us speed and independence. What many of us got instead were distributed monoliths: all the complexity of a distributed system with all the coupling of a monolith.
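The latency math is easy to sketch. The numbers below are the illustrative figures from the profile-page example, not measurements from a real system:

```python
# Illustrative per-service latencies for the profile-page example (ms).
LATENCIES_MS = {
    "account": 50,
    "orders": 80,
    "payments": 60,
    "recommendations": 120,
    "settings": 40,
}

# Calling services one after another: latencies add up.
sequential_ms = sum(LATENCIES_MS.values())

# Best case with full parallelization: you still wait for the slowest call.
parallel_ms = max(LATENCIES_MS.values())

print(sequential_ms)  # 350
print(parallel_ms)    # 120
```

Even the parallel best case (120ms) is more than double the ~50ms a monolith's local queries might take, and real systems rarely achieve perfect parallelism because calls often depend on each other's results.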

iDiallo 1 week ago

Why my Redirect rules from 2013 still work and yours don't

Here is something that makes me proud of my blog. The redirect rule I wrote for my very first article 12 years ago still works!

This blog was an experiment. When I designed it, my intention was to try everything possible and not care if it broke. In fact, I often said that if anything broke, it would be an opportunity for me to face a new challenge and learn. I designed the website as best as I could, hoping that it would break so I could fix it. What I didn't take into account was that some things are much harder to fix than others. More specifically: URLs.

Originally, the URL followed a format you can blame Derek Sivers for. But then I thought, what if I wanted to add pages that weren't articles? It would be hard to differentiate a blog entry from anything else. So I switched to the more common blog format, with the year and month in the path. Perfect. But should the month have a leading zero? I went with the leading zero. But then I introduced a bug: yes, I squashed the leading zero from the months. This meant that there were now two distinct URLs that pointed to the same content, and Google doesn't like duplicate content in its search results.

Of course, that same year, I wrote an article that went super viral. Yes, my server crashed. But more importantly, people bookmarked and shared several articles from my blog everywhere. Once your links are shared, they become permanent. They may get an entry in the Wayback Machine, they will be shared in forums, someone will make a point and cite you as a source. I could no longer afford to change the URLs or break them in any way. If I fixed the leading zero bug now, one of the URLs would lead to a 404. I had to implement a more complex solution.

So in my .htaccess file, I added a new redirect rule that kept the leading zero intact and redirected all URLs with a missing zero back to the version with a leading zero. Problem solved. Note that my .htaccess was growing out of control, and there was always the temptation to edit it live.
When I write articles, sometimes I come up with a title, then later change my mind. For example, my most popular article was titled "Fired by a machine" (fired-by-a-machine). But a couple of days after writing it, I renamed it to "When the machine fired me" (when-the-machine-fired-me). Should the old URL remain intact despite the new title? Should the URL match the new title? What about the old URL? Should it lead to a 404 or redirect to the new one?

In 2014, after reading some Patrick McKenzie, I had this great idea of removing the month and year from the URL. Okay, no problem. All I needed was one more redirect rule. I don't like losing links, especially after Google indexes them. So my rule has always been to redirect old URLs to new ones and never lose anything.

But my .htaccess file was growing and becoming more complex. I'd also edited it multiple times on my server, and it was becoming hard to sync it with the different versions I had on different machines. So I ditched it. I created a new .conf file with all the redirect rules in place. This version is always committed into my repo and has been consistently updated since. When I deploy new code to my server, the conf file is included in my apache.conf and my rules remain persistent.

I've rewritten my framework from scratch and gone through multiple designs. Whenever I look through my logs, I'm happy to see that links from 12 years ago are properly redirecting to their correct destinations. URLs are forever, but your infrastructure doesn't have to be fragile. The reason my redirect rules still work after more than a decade isn't because I got everything right the first time. I still don't get it right! It's because I treated URL management as a first-class problem that deserved its own solution. Having a file living only on your server? It's a ticking time bomb.
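A rules file of the kind described might look something like this (a hypothetical sketch with invented URL patterns, assuming Apache's mod_alias; the actual file surely differs):

```apache
# Months missing the leading zero redirect to the zero-padded form
RedirectMatch 301 ^/blog/(\d{4})/([1-9])/(.+)$ /blog/$1/0$2/$3

# A later format change dropped the year and month entirely
RedirectMatch 301 ^/blog/\d{4}/\d{2}/(.+)$ /blog/$1

# Renamed articles get their own permanent redirects
Redirect 301 /blog/fired-by-a-machine /blog/when-the-machine-fired-me
```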
The moment I moved my redirect rules into a .conf file and committed it to my repo, I gained the ability to deploy with confidence. My redirects became code, not configuration magic that might vanish during a server migration.

Every URL you publish is a promise. Someone bookmarked it, shared it, or linked to it. Breaking that promise because you changed your mind about a title or URL structure is not an option. Redirect rules are cheap and easy. But you can never recover lost traffic. I've changed URL formats three times and renamed countless articles. Each time, I added redirects rather than replacing them. Maybe it's just my paranoia, but the web has a long memory, and you never know which old link will suddenly matter.

Your redirect rules from last year might not work because they're scattered across multiple .htaccess files, edited directly on production servers, and never version controlled. Mine still work because they travel with my code, surviving framework rewrites, server migrations, and a decade of second thoughts about URL design. The Internet never forgets... as long as the redirect rules are in place.

iDiallo 1 week ago

How I Became a Spam Vector

There are several reasons for Google to downrank a website in their search results. My first experience with downranking was on my very first day at a job in 2011. The day I walked into the building, Google released their first Panda update. My new employer, being a "content creator," disappeared from search results. This was a multi-million dollar company that had teams of writers and a portfolio of websites. They depended on Google, and not appearing in search meant we went on code red that first day.

But it's not just large companies. Just this year, as AI Overviews have dominated the search page, I've seen traffic to this blog falter. At one point, the number of impressions was increasing, yet the number of clicks declined. I mostly blamed it on AI Overviews, but it didn't take long before impressions also dropped. It wasn't such a big deal to me since the majority of my readers now come through RSS.

Looking through my server logs, I noticed that web crawlers had been accessing my search page at an alarming rate. And the search terms were text promoting spammy websites: crypto, gambling, and even some phishing sites. That seemed odd to me. What's the point of searching for those terms on my website if it's not going to return anything?

In fact, there was a bug on my search page. If you entered Unicode characters, the page returned a 500 error. I don't like errors, so I decided to fix it. You can now search for Unicode on my search page. Yay! But it didn't take long for traffic to my website to drop even further. I didn't immediately make the connection; I continued to blame AI Overviews. That was until I saw the burst of bot traffic to the search page.

What I didn't take into account was that now that my search page was working, when you entered a spammy search term, it was prominently displayed on the page and in the page title. What I failed to see was that this was a vector for spammers to post links to my website.
Even if those weren't actual anchor tags on the page, they were still URLs to spam websites. Looking through my logs, I can trace the sharp decline of traffic to this blog back to when I fixed the search page by adding support for Unicode.

I didn't want to delete my search page, even though it primarily serves me for finding old posts. Instead, I added a single meta tag to fix the issue: <meta name="robots" content="noindex">. What this means is that crawlers, like Google's indexing crawler, will not index the search page. Since the page is not indexed, the spammy content will not be used as part of the website's ranking. The result is that traffic has started to pick up once more.

Now, I cannot say with complete certainty that this was the problem and the solution to the traffic change. I don't have data from Google. However, I can see the direct effect, and I can see through Google Search Console that the spammy search pages are being added to the "no index" issues section. If you are experiencing something similar with your blog, it's worth taking a look through your logs, specifically search pages, to see if spammy content is being indirectly added.

I started my career watching a content empire crumble under Google's algorithm changes, and here I am years later, accidentally turning my own blog into a spam vector while trying to improve it. The tools and tactics may have evolved, but some things never change. Google's search rankings are a delicate ecosystem, and even well-intentioned changes can have serious consequences.

I often read about bloggers who never look past the content they write. Meaning, they don't care if you read it or not. But the problem comes when someone else takes advantage of your website's flaws. If you want to maintain control over your website, you have to monitor your traffic patterns and investigate anomalies. AI Overviews are most likely responsible for the original traffic drop, and I don't have much control over that.
But it was also a convenient scapegoat to blame everything on, and an excuse not to look deeper. I'm glad, at least, that my fix was something simple that anyone can implement.

iDiallo 1 week ago

Demerdez-vous: A response to Enshittification

There is an RSS reader that I used often in the past and had become very reliant on. I would share the name with you, but as they grew more popular, they decided to follow the enshittification route. They've changed their UI, hidden several popular links behind multilayered menus, and revamped their API. Features that I used to rely on have disappeared, and the API is close to useless.

My first instinct was to find a new app that would satisfy my needs. But being so familiar with this reader, I decided to test a few things in the API first. Even though their documentation doesn't mention older versions anymore, I discovered that the old API is still active. All I had to do was add a version number to the URL. It's been over 10 years, and that API is still very much active. I'm sorry, I won't share it here, but this has served as a lesson for me when it comes to software that becomes worse over time. Don't let them screw you, unscrew yourself!

We talk a lot about "enshittification" these days. I've even written about it a couple of times. It's about how platforms start great, get greedy, and slowly turn into user-hostile sludge. But what we rarely talk about is the alternative. What do you do when the product you rely on rots from the inside?

The French have a phrase for this: Demerdez-vous. The literal translation is "unshit yourself". What it actually means is to find a way, even if no one is helping you.

When a company becomes too big to fail, or simply becomes dominant in its market, drip by drip, it starts to become worse. You don't even notice it at first. It changes in ways that most people tolerate because the cost of switching is high, and the vendor knows it. But before you despair, before you give up, before you let the system drag you into its pit, try to unscrew yourself with the tools available. If the UI changes, try to find the old UI. Patch the inconvenience. Disable the bullshit. Bend the app back into something humane.
It might sound impossible at first, but the tools to accomplish this exist and are widely used. Sometimes the escape hatch is sitting right there, buried under three layers of "Advanced" menus. On the web, I hate auto-playing videos, I don't want to receive twelve notifications a day from an app, and I don't care about personalization. But for the most part, these can be disabled. When I download an app, I actually spend time going through settings. If I care enough to download an app, or if I'm forced to, I'll spend the extra time to ensure that the app works to my advantage, not the other way around.

When that RSS reader removed features from the UI, but not from their code, I was still able to continue using them. Another example of this is Reddit. Their new UI is riddled with dark patterns, infinite scroll, and popups. But go to old.reddit.com, and you are greeted with that old UI that may not look fancy, but was designed with the user in mind, not the company's metrics.

Another example: YouTube removed the dislike button. While it might be hurtful to content creators to see the number of dislikes, as a consumer, this piece of data served as a filter for lots of spam content. For that, of course, there is the "Return YouTube Dislike" browser extension. Extensions can often help you regain control when popular websites remove functionality that is useful to users, but that the service no longer wants to support. There are several tools that enhance YouTube, fix Twitter, and of course uBlock.

It's not always possible to combat enshittification. Sometimes the developer actively enforces their new annoying features and prevents anyone from removing them. In cases like these, there is still something that users can do. They can walk away. You don't have to stay in an abusive relationship. You are allowed to leave. When you do, you'll discover that there was an open-source alternative. Or that a small independent app survived quietly in a corner of the internet.
Or even sometimes, you'll find that you don't need the app at all. You break your addiction.

In the end, "Demerdez-vous" is a reminder that we still have agency in a world designed to take it away. Enshittification may be inevitable, but surrender isn't. There's always a switch to flip, a setting to tweak, a backdoor to exploit, or a path to walk away entirely. Companies may keep trying to box us in, but as long as we can still think, poke, and tinker, we don't have to live with the shit they shovel. At the end of the day, "On se demerde."

iDiallo 1 week ago

We Don't Fix Bugs, We Build Features

As a developer, bugs consume me. When I discover one, it's all I can think about. I can't focus on other work. I can't relax. I dream about it. The urge to fix it is overwhelming. I'll keep working until midnight even when my day should have ended at 6pm. I simply cannot leave a bug unfixed.

And yet, when I look at my work backlog, I see a few dozen of them. A graveyard of known issues, each one catalogued, prioritized, and promptly ignored. How did we get here? How does a profession full of people who are pathologically driven to fix problems end up swimming in unfixed problems? For that, you have to ask yourself: what is the opposite of a bug? No, it's not "no bugs". It's features.

"I apologize for such a long letter - I didn't have time to write a short one."

As projects mature and companies scale, something changes. You may start with a team of developers solving problems, but then they slowly become part of an organization that needs processes, measurements, and quarterly planning. Then one day, you are presented with this new term: Roadmap. It's a beautiful, color-coded timeline of features that will delight users and move business metrics. The roadmap is where bugs go to die.

Here's how it happens. A developer discovers a bug and brings it to the team. The product manager asks the only question that matters in their world: "Will this affect our roadmap?" Unless the bug is actively preventing a feature launch or causing significant user churn, the answer is almost always no. The bug gets a ticket, the ticket gets tagged as "tech debt," and it joins the hundreds of other tickets in the backlog hotel, where it will remain indefinitely (see Rockstar).

This isn't a jab at product managers. They're operating within a system that leaves them no choice. Agile was supposed to liberate us. The manifesto promised flexibility, collaboration, and responsiveness to change. But somewhere along the way, agile stopped being a philosophy and became a measurement system.
There are staunch supporters of agile who swear by it, and blame any flaws on the particular implementation. "You guys are not doing true agile." But when everyone is doing it wrong, you don't blame everyone, you blame the system. We can't all be holding agile wrong!

The agile principle is to deliver working software frequently, welcome changing requirements, and maintain technical excellence. But principles don't fit in spreadsheets. Metrics do. And so we got story points. Velocity. Sprint completion rates. Feature delivery counts. Suddenly, every standup and retrospective fed into dashboards that executives reviewed quarterly. And where there are metrics, there are managers trying to make some numbers go up and others go down.

Features are easy to measure. They're discrete, they're visible, and they can be tied to revenue. "We shipped 47 features this quarter, leading to a 12% increase in user engagement." That's a bullet point in your record that gets you promoted. Bugs are invisible in this equation. Sure, they appear on the same Jira board, but their contribution is ephemeral. How do you quantify the value of something that doesn't go wrong? How do you celebrate the absence of a problem? You can't put "prevented 0 crashes by fixing a race condition" on a slide deck.

The system doesn't just deprioritize bugs, it actively ignores them. A team that spends a sprint fixing bugs has nothing to show for it on the roadmap. Their velocity looks identical, but they've "accomplished" nothing that the executives care about. Meanwhile, the team that plows ahead with features, moves fast and breaks things, bugs be damned? They look productive.

Developers want to prioritize bug fixes, performance improvements, and technical debt. These are the things that make software maintainable, reliable, and pleasant to work with. Most developers got into programming because they wanted to fix things, to make systems better. The business prioritizes features that impact revenue.
New capabilities that can be sold, marketed, and demonstrated. Things that exist, not things that don't break. Teams are often faced with a choice: do we fix what's broken, or do we build what's new? And because the metrics, the incentives, and the roadmap all point in one direction, the choice is made for them. This is how you end up with production systems riddled with known bugs that could probably be fixed but won't be tackled. Not because they're not important. Not because developers don't care. But because they're not on the roadmap.

"I apologize for so many bugs - I only had time to build features."

Writing concisely takes more time and thought than rambling. Fixing bugs takes more discipline than shipping features. Building maintainable systems takes more effort than building fast. We've become so busy building that we have no time to maintain what we've built. We're so focused on shipping new things that we can't fix the old things. The roadmap is too full to accommodate quality. Reaching our metric goals is the priority. It's not that we don't know better. It's not even that we don't care. It's that we've built systems, like product roadmaps and velocity tracking, that make the wrong choice the only rational choice.

I've worked with teams that tried a statistical approach to presenting bugs in the roadmap. Basically, you analyze existing projects, look at bug counts when each feature was built, then calculate the probability of bugs. This number then appears in the roadmap as a color-coded metric. It sounds and looks good in theory, and you can even attach an ROI to bug fixes. But bugs don't work like that. They can be introduced by mistake, by misunderstanding, or sometimes even intentionally when the business logic itself is flawed.
No statistical model will predict the developer who misread the requirements, or the edge case that appears only in production, or the architectural decision that made sense five years ago but creates problems today. Bugs are human problems in human systems. You can't spreadsheet your way out of them. You have to actually fix them. When developers are forced to choose between what they know is right and what the metrics reward, we've built the wrong system. When "I fixed a critical race condition" is less valuable than "I shipped a feature," we've optimized for the wrong things. Maybe the first step is simply acknowledging the problem. We don't fix bugs because our systems don't let us. We don't fix bugs because we only had time to build features. And just like that overly long letter, the result is messier, longer, and ultimately harder to deal with than if we'd taken the time to do it right from the start.

iDiallo 2 weeks ago

Self-Help Means Help Yourself

For a moment in my life, you couldn't see me without a book in hand. A self-help book, to be precise. I felt like the world was moving, changing, and I was being left behind. Being raised to look in the mirror before I blame others, I decided that if there was something to improve, it was my very own self.

I picked up Dale Carnegie's How to Win Friends and Influence People. Now I can admit it: I never finished reading the book. But I read plenty of others. I devoured all of Robert Kiyosaki's books and felt inspired. If only I had a rich dad. I read the one he wrote with Donald Trump. I was pumped. I was still learning English; I may have misunderstood the whole thing (I can assure you, none of the authors mentioned were involved in writing the book). I joined a club where we would get a new self-help book every month and discuss it. I was in love with the genre.

But one thing I noticed in retrospect is that I enjoyed reading more than actually doing anything the books taught. Here's the thing about self-help books: they're necessarily abstract. If they gave specific examples, those examples wouldn't apply to most people. So they give general advice, more inspiring than practical. And inspiration, while it feels good in the moment, doesn't build anything on its own.

Over the years, I learned that advice by itself is useless. Imagine getting writing advice from a pro, but you've never written anything. No writing advice can be applied to a blank piece of paper. You can't edit what doesn't exist. You can't improve a sentence you haven't written. What you actually need is to start something, anything, and reevaluate every so often. That's it.

I think about Bob Nystrom, who wrote Crafting Interpreters, a book about building programming languages. What I love about his story isn't just the book itself, but how he wrote it. He did so publicly, chapter by chapter, responding to feedback as he went.
And when he completed the book, he published a reflection of the process he titled Crafting "Crafting Interpreters" . He wrote through some of the worst years of his life. His mother was diagnosed with cancer. Loved ones died. The world around him felt like it was falling apart. But he kept writing anyway. Not because he was superhuman or exceptionally disciplined. He kept writing because it was the one thing he could control when so much else was spiraling beyond his grasp. Finishing the book became proof that he could make it through everything else. Skipping a day would have meant the chaos won. Writing became his anchor. We can always find reasons not to start. The conditions are never perfect. We're still learning. We don't have the right resources. We haven't read enough books yet. But self-help isn't meant to be inspiration porn, something we consume to feel good without changing anything. It's a method for helping yourself. The books, the advice, the strategies, they're all pointing toward the same message. You have to be the one to do it. Nobody can help you get started. Nobody can give you advice that works on a blank page. The only thing that transforms nothing into something is you, sitting down and beginning. Self-help means helping yourself, not someday, not when you're ready, but now. Start messy. Start imperfect. Start without knowing how it ends. Because the secret isn't in the next book or the next piece of advice. The secret is that you already know what you need to do. You just need to help yourself do it.

1 views
iDiallo 2 weeks ago

The real cost of Compute

Somewhere along the way, we stopped talking about servers. The word felt clunky, industrial, too tied to physical reality. Instead, we started saying "the cloud". It sounds weightless, infinite, almost magical. Your photos live in the cloud. Your documents sync through the cloud. Your company's entire infrastructure runs in the cloud. I hated the term cloud. I wasn't alone; someone actually created a "cloud to butt" browser extension that was pretty fun and popular. But the world has adopted the term, and I had no choice but to go along. So what is the actual cloud? Why is it hiding behind this abstraction? Well, the cloud is rows upon rows of industrial machines, stacked in massive data centers, consuming electricity at a scale most of us can't even imagine. The cloud isn't floating above us. It's bolted to concrete floors, surrounded by cooling systems, and plugged into power grids that strain under its appetite. I'm old enough to remember the crypto boom and the backlash that followed. Critics loved to point out that Bitcoin mining consumed as much electricity as entire countries. Argentina, the Netherlands, and many other nations were picked for comparison. But I was not outraged by it at all. My reaction at the time was simpler. Why does it matter if they pay their electric bill? If you use electricity and compensate for it, isn't that just... how markets work? Turns out, I was missing the bigger picture. And the AI boom has made it impossible to ignore. When new data centers arrive in a region, everyone's electric bill goes up. Even if your personal consumption stays exactly the same. It has nothing to do with fairness or free markets. Infrastructure is not free. The power grids weren't designed for the sudden addition of facilities that consume megawatts continuously. When demand surges beyond existing capacity, utilities pass those infrastructure costs onto everyone. 
New power plants get built, transmission lines get upgraded, and residential customers help foot the bill through rate increases. The person who never touches AI, never mines crypto, never even knows what a data center does, this person is now subsidizing the infrastructure boom through their monthly utility payment. The cloud, it turns out, has a very terrestrial impact on your wallet. We've abstracted computing into its purest conceptual form: "compute." I have to admit, it's my favorite term in tech. "Let's buy more compute." "We need to scale our compute." It sounds frictionless, almost mathematical. Like adjusting a variable in an equation. Compute feels like a slider you can move up and down in your favorite cloud provider's interface. Need more? Click a button. Need less? Drag it down. The interface is clean, the metaphor is seamless, and completely disconnected from the physical reality. But in the real world, "buying more compute" means someone is installing physical hardware in a physical building. It means racks of servers being assembled, hard drives being mounted, cables being routed. The demand has become so intense that some data center employees have one job and one job only: installing racks of new hard drives, day in and day out. It's like an industrial assembly line. Every gigabyte of "cloud storage" occupies literal space. Every AI query runs on actual processors that generate actual heat. The abstraction is beautiful, but the reality is concrete and steel. The cloud metaphor served its purpose. It helped us think about computing as a utility. It's always available, scalable, detached from the messy details of hardware management. But metaphors shape how we think, and this one has obscured too much for too long. Servers are coming out of their shells. 
The foggy cloud is lifting, and we're starting to see the machinery underneath: vast data centers claiming real estate, consuming real water for cooling, and drawing real power from grids shared with homes, schools, and hospitals. This isn't an argument against cloud computing or AI. There's nothing to go back to. But we need to acknowledge their physical footprint. The cloud isn't a magical thing in the sky. It's industry. And like all industry, it needs land, resources, and infrastructure that we all share.

0 views
iDiallo 2 weeks ago

Making a quiet stand with your privacy settings

We had just completed one of the largest refactors of our application, several months in the making, in which we tackled some of our biggest challenges: we paid down technical debt, upgraded legacy software, fortified security, and even made the application faster. After all that, we deployed the application and held our breath, waiting for the user feedback to roll in. Well, nothing came in. There were no celebratory messages about the improved speed, no complaints about broken features, no comments at all. The deployment was so smooth it was invisible. To the business team, it initially seemed like we had spent vast resources for no visible return. But we knew the underlying truth. Sometimes, the greatest success is defined not by what happens, but by what doesn't happen. The server that doesn't crash. The data breach that doesn't occur. The user who never notices a problem. This is the power of a quiet, proactive defense. In this digital world, where everything we do leaves a data point, it's not easy to recognize success. When it comes to privacy, taking a stand isn't dramatic. In fact, its greatest strength is its silence. We're conditioned to believe that taking a stand should feel significant. We imagine a public declaration, a bold button that flashes "USER REBELLION INITIATED!" when pressed. Just think about people publicly announcing they are leaving a social media platform. But the reality of any effective digital self-defense is far more mundane. When I disagree with a website's data collection, I simply click "Reject All." No fanfare. No message telling the company, "This user is privacy-conscious!" My resistance is registered as a non-action. A void in their data stream. When I read that my Vizio Smart TV was collecting viewing data, I navigated through a labyrinth of menus to find the "Data Collection" setting and turned it off. The TV kept working just fine. 
Nothing happened, except that my private viewing habits were no longer a product to be sold. They didn't add a little icon on the top corner that signifies "privacy-conscious." Right now, many large language models like ChatGPT have "private conversation" settings turned off by default. When I go into the settings and enable the option that says, "Do not use my data for training," there's no confirmation, no sense of victory. It feels like I've done nothing. But I have. This is what proactive inaction looks like. Forming a new habit is typically about adding an action. Going for a run every morning, drinking a glass of water first thing, reading ten pages a night. But what about the habit of not doing ? When you try to simply "not eat sugar," you're asking your brain to form a habit around an absence. There's no visible behavior to reinforce, no immediate sensory feedback to register success, and no clear routine to slot into the habit loop. Instead, you're relying purely on willpower. A finite resource that depletes throughout the day, making evening lapses almost inevitable. Your brain literally doesn't know what to practice when the practice is "nothing." It's like trying to build muscle by not lifting weights. The absence of action creates an absence of reinforcement, leaving you stuck in a constant battle of conscious resistance rather than unconscious automation. Similarly, the habit of not accepting default settings is a habit of inaction. You are actively choosing to not participate in a system designed to exploit your data. It's hard because it lacks the dopamine hit of a checked box. There's no visible progress bar for "Privacy Secured." But the impact is real. This quiet practice is our primary defense against what tech writer Cory Doctorow calls "enshittification". That's the process where platforms decay by first exploiting users, then business customers, until they become useless, ad-filled pages with content sprinkled around. 
It's also our shield against hostile software that prioritizes its own goals over yours. Not to blame the victims, but I like to remind people that they have agency over the software and tools they use. And your agency includes the ultimate power to walk away. If a tool's settings are too hostile, if it refuses to respect your "no," then your most powerful setting is the "uninstall" button. Choosing not to use a disrespectful app is the ultimate, and again, very quiet, stand. So, I challenge everyone to embrace the quiet. See the "Reject All" button not as a passive refusal, but as an active shield. See the hidden privacy toggle not as a boring setting, but as a toggle that you actively search for. The next time you download a new app or create a new account, take five minutes. Go into the settings. Look for "Privacy," "Data Sharing," "Personalization," or "Permissions." Turn off what you don't need. Nothing will happen. Your feed won't change, the app won't run slower, and no one will send you a congratulatory email. And that's the whole point. You will have succeeded in the same way our refactor succeeded: by ensuring something unwanted doesn't happen. You've strengthened your digital walls, silently and without drama, and in doing so, you've taken one of the most meaningful stands available to us today.

0 views
iDiallo 3 weeks ago

How Do You Send an Email?

It's been over a year, and I haven't received a single notification email from my web server. It could either mean that my $6 VPS is amazing and hasn't gone down once this past year. Or it could mean that my health check service has gone down. Well, this year I received emails from readers telling me my website was down. So after doing some digging, I discovered that my health checker works just fine, but all the emails it sends are being rejected by Gmail. Unless you use a third-party service, you have little to no chance of sending an email that gets delivered. Every year, email services seem to become a tad bit more expensive. When I first started this website, sending emails to my subscribers was free on Mailchimp. Now it costs $45 a month. On Buttondown, as of this writing, it costs $29 a month. What are they doing that costs so much? It seems like sending emails is impossibly hard, something you can almost never do yourself. You have to rely on established services if you want any guarantee that your email will be delivered. But is it really that complicated? Emails, just like websites, use a basic communication protocol to function. For you to land on this website, your browser somehow communicated with my web server, did some negotiating, and then my server sent HTML data that your browser rendered on the page. But what about email? Is the process any different? The short answer is no. Email and the web work in remarkably similar fashion. Here's the short version: In order to send me an email, your email client takes the email address you provide, connects to my server, does some negotiating, and then my server accepts the email content you intended to send and saves it. My email client will then take that saved content and notify me that I have a new message from you. That's it. That's how email works. So what's the big fuss about? Why are email services charging $45 just to send ~1,500 emails? 
Why is it so expensive, while I can serve millions of requests a day on my web server for a fraction of the cost? The short answer is spam . But before we get to spam, let's get into the details I've omitted from the examples above. The negotiations. How similar are email and web traffic, really? When you type a URL into your browser and hit enter, here's what happens:

- DNS Lookup: Your browser asks a DNS server, "What's the IP address for this domain?" The DNS server responds with something like .
- Connection: Your browser establishes a TCP connection with that IP address on port 80 (HTTP) or port 443 (HTTPS).
- Request: Your browser sends an HTTP request: "GET /blog-post HTTP/1.1"
- Response: My web server processes the request and sends back the HTML, CSS, and JavaScript that make up the page.
- Rendering: Your browser receives this data and renders it on your screen.

The entire exchange is direct, simple, and happens in milliseconds. Now let's look at email. The process is similar:

- DNS Lookup: Your email client takes my email address ( ) and asks a DNS server, "What's the mail server for example.com?" The DNS server responds with an MX (Mail Exchange) record pointing to my mail server's address.
- Connection: Your email client (or your email provider's server) establishes a TCP connection with my mail server on port 25 (SMTP) or port 587 (for authenticated SMTP).
- Negotiation (SMTP): Your server says "HELO, I have a message for [email protected]." My server responds: "OK, send it."
- Transfer: Your server sends the email content, headers, body, attachments, using the Simple Mail Transfer Protocol (SMTP).
- Storage: My mail server accepts the message and stores it in my mailbox, which can be a simple text file on the server.
- Retrieval: Later, when I open my email client, it connects to my server using IMAP (port 993) or POP3 (port 110) and asks, "Any new messages?" My server responds with your email, and my client displays it.

Both HTTP and email use DNS to find servers, establish TCP connections, exchange data using text-based protocols, and deliver content to the end user. They're built on the same fundamental internet technologies. So if email is just as simple as serving a website, why does it cost so much more? The answer lies in a problem that both systems share but handle very differently: unwanted third-party writes. Both web servers and email servers allow outside parties to send them data. Web servers accept form submissions, comments, API requests, and user-generated content. Email servers accept messages from any other email server on the internet. In both cases, this openness creates an opportunity for abuse. Spam isn't unique to email, it's everywhere. My blog used to get around 6,000 spam comments on a daily basis. On the greater internet, you will see spam comments on blogs, spam account registrations, spam API calls, spam form submissions, and yes, spam emails. The main difference is visibility. When spam protection works well, it's invisible. You visit websites every day without realizing that, behind the scenes, CAPTCHAs are blocking bot submissions, rate limiters are rejecting suspicious traffic, and content filters are catching spam comments before they're published. You don't get to see the thousands of spam attempts that happen every day on my blog, because of some filtering I've implemented. On a well-run web server, the work is invisible. The same is true for email. A well-run email server silently:

- Checks sender reputation against blacklists
- Validates SPF, DKIM, and DMARC records
- Scans message content for spam signatures
- Filters out malicious attachments
- Quarantines suspicious senders

There is a massive amount of spam. In fact, spam accounts for roughly 45-50% of all email traffic globally. But when the system works, you simply don't see it. If we can combat spam on the web without charging exorbitant fees, email spam shouldn't be that different. The technical challenges are very similar:

- Both require reputation systems
- Both need content filtering
- Both face distributed abuse
- Both require infrastructure to handle high volume

Yet a basic web server on a $5/month VPS can handle millions of requests with minimal spam-fighting overhead. Meanwhile, sending 1,500 emails costs $29-45 per month through commercial services. The difference isn't purely technical. It's about reputation, deliverability networks, and the ecosystem that has evolved around email. Email providers have created a cartel-like system where your ability to reach inboxes depends on your server's reputation, which is nearly impossible to establish as a newcomer. They've turned a technical problem (spam) into a business moat. And we're all paying for it. Email isn't inherently more complex or expensive than web hosting. Both the protocols and the infrastructure are similar, and the spam problem exists in both domains. The cost difference is mostly artificial. It's the result of an ecosystem that has consolidated around a few major providers who control deliverability. It doesn't help that Intuit owns Mailchimp now. Understanding this doesn't necessarily change the fact that you'll probably still need to pay for email services if you want reliable delivery. But it should make you question whether that $45 monthly bill is really justified by the technical costs involved. Or whether it's just the price of admission to a gatekept system.
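The whole exchange described above is simple enough to sketch with Python's standard library. This is a minimal illustration, not a recipe for reliable delivery: the addresses and server name are hypothetical placeholders, and a message handed off this way from an unknown VPS would very likely be rejected, which is the article's whole point.

```python
import smtplib  # handles the SMTP negotiation (HELO, transfer) described above
from email.message import EmailMessage

def build_message(sender: str, recipient: str, subject: str, body: str) -> EmailMessage:
    """Build a minimal RFC 5322 message: headers plus a plain-text body."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

# Hypothetical health-check alert; these addresses don't exist.
msg = build_message(
    "healthcheck@example.com",
    "me@example.com",
    "Server down!",
    "The web server stopped responding at 03:12 UTC.",
)

# Delivery is the Negotiation + Transfer steps above. Commented out because
# it needs a real mail server with an established reputation to succeed:
# with smtplib.SMTP("mail.example.com", 587) as server:
#     server.starttls()                # upgrade to an encrypted connection
#     server.login("healthcheck@example.com", "app-password")
#     server.send_message(msg)        # MAIL FROM / RCPT TO / DATA

print(msg["Subject"])  # → Server down!
```

The protocol part really is that small. Everything expensive about email lives outside this snippet: SPF and DKIM records, IP reputation, and the deliverability gatekeeping discussed above.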

0 views
iDiallo 3 weeks ago

Is 30% of Microsoft's Code Really AI-Generated?

A few months back, news outlets were buzzing with reports that Satya Nadella claimed 30% of the code in Microsoft's repositories was AI-generated. This fueled the hype around tools like Copilot and Cursor. The implication seemed clear: if Microsoft's developers were now "vibe coding," everyone should embrace the method. I have to admit, for a moment I felt like I was being left behind. When it comes to adopting new technology, I typically choose the slow and careful approach. But suddenly, it seemed like the world was moving on without me. Here's the thing though, I use Copilot. I use Cursor at work as well. But I can't honestly claim that 30% of my code is AI-generated. For every function an AI generates for me, I spend enough time tweaking and adapting it to our specific use case that I might as well claim authorship. Is that what Microsoft employees are doing? Or are they simply writing prompts or a set of instructions, then letting the LLM write the code, generate the tests, and make the commits entirely on its own? So I went back to reread what Satya actually said : I'd say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software. Fair enough. But then I watched the video where he actually said it . Interestingly, it was Zuckerberg who asked the question. What you hear in the interview is a whole lot of "maybe," "probably," "something like". Not the confidence portrayed in the written headlines. But here's what I really want to know: how are they tracking this? Are developers labeling all AI-generated code as such? Is there some distinct signature that marks it? How can you even tell when code is AI-generated? Unlike a written article where we can identify clear patterns, telltale phrasing, word choices that deviate from an author's typical style, code doesn't come with obvious fingerprints. For example, there's no way to tell when a senior developer on my team uses AI. Why? 
Because they don't commit code they haven't thoroughly reviewed and understood. They treat AI suggestions like rough drafts, useful starting points that require human judgment and refinement. With junior developers, you might occasionally see a utility function defined for absolutely no reason, or overly generic variable names, or unnecessarily verbose implementations that scream "AI-generated." But these issues rarely make it past the code review process, where more experienced eyes catch and correct them before they reach production. Before LLMs entered the picture, what we worried about was developers copying and pasting code from Stack Overflow without understanding or modifying it. These snippets weren't easy to identify either, unless they broke the logic or introduced bugs that revealed their origin. You couldn't reliably identify copy-pasted code back then, so what makes it any easier to identify AI-generated code now? Both scenarios involve code that works (at least initially) and follows conventional patterns, making attribution nearly impossible without explicit tracking mechanisms. The line between "AI-generated" and "human-written" code has become blurrier than the headlines suggest. And maybe that's the point. When AI becomes just another tool in the development workflow, like syntax highlighting or auto-complete, measuring its contribution as a simple percentage might not be meaningful at all.

0 views
iDiallo 3 weeks ago

The App Developer's Attachment Issues

When browsing the web, I still follow rabbit holes. For example, I will click on a link, read an article, find another link in the body, follow that one as well, and keep on going until I get lost in the weeds and appear in wonderland. When I'm reading through my phone, I often have to go back to the browser history to see the trail of websites that led me to my destination. But sometimes, I just can't find my way back. Why? Because somehow, I wasn't reading through the web browser. I was browsing through webview. So when you are on Instagram and click on a link shared by a friend, the page loads instantly, but something feels off. You are browsing the web, yet you don't see the familiar browser tabs or address bar. You are in a webview . Why webview and not your favorite browser? Well, this is what I call App attachment issues. App developers don't want you to leave. And webview is the invisible fence they use to keep you tethered. When an application loads content within an in-app browser (a webview) you are, technically, using the web. It's running the same rendering engine as a dedicated browser. But the app's sole purpose for doing this is to silo you. They want to maintain control over your experience, ensuring you are never truly free to roam the open internet. The benefit for the developer is that no matter what page you browse, you are perpetually one button click away from being back in their app. It's a mechanism for user retention, a digital leash. Every company, from social media giants to news aggregators, is trying to fit you into their specific bucket, convinced that if they let you leave, you might not come back. They want to maintain that control over your experience, even when you are outside their reach. On Android, this is super annoying. You might be able to click links and navigate from the initial website to a completely different, unrelated one, but you often cannot manually change the URL. 
You are trapped in the current browsing flow, unable to jump to a new destination without first leaving the app or performing a dedicated search. Why are you still under the app's thumb if you're surfing the public web? The answer is always control. The web is a dangerous place. What if you click on the wrong link and your device gets compromised? We can't protect you in this case. At least that's what it feels like when clicking on external links on some websites. For example, on LinkedIn when you click an external link, you are often greeted with a warning message like this: This link will take you to a page that's not on LinkedIn Because this is an external link, we're unable to verify it for safety. On the surface, it appears to be a helpful security measure. The platform is protecting you from the big, bad internet. But the only thing they are truly protecting you from is leaving their app. If the link was already shared by a contact or surfaced on their platform, the implicit due diligence should have been done. Serving up a blanket safety warning for any external link, even those to major news organizations or well-known websites, is just a friction point to discourage you from leaving. It's a psychological barrier designed to make you hesitate, keep you inside the known confines of their platform, and reinforce their control. This security warning is nothing more than the final, passive-aggressive plea in the app's campaign against your freedom. If the in-app silo was just the web, but within the app, I wouldn't complain. But while developers are focused on retention, the user experience suffers in some infuriating ways. The webview is a fundamentally broken browsing experience for three core reasons: The most frustrating drawback is the lack of permanence. Your browsing history is at the mercy of the developer. They can choose to record it, or not record it. 
And you will be none the wiser until you are trying to find that article you read just this morning. With my rabbit hole style of browsing the web, I often stumble upon great articles, helpful tools, or even products that I mean to return to. But if any of those pages were viewed under a webview, they vanish without a trace. Related to the missing history is the risk of accidental loss. You might be deep into an article, hit the back button to navigate one step back on the site, and instead, the entire webview collapses, dumping you unceremoniously back into the main app feed. Because no history was recorded, there is no way to return to the page you were just on. The article is simply gone. There is a common counterargument that says, "Most apps have a setting to disable webview and open links directly in your full browser." But two points to this. 1. Most people don't ever change the default settings. 2. Why is this even an option to select? If the webview uses the browser engine anyway, why should the default setting be the one that compromises the user's web experience? Users do not dive into granular settings menus. The path of least resistance is the path most taken. By defaulting to webview, developers are prioritizing their retention goals over basic utility. The entire architecture of the web is built on freedom, open access, and a unified browsing experience. By forcing a dedicated web environment, developers are fragmenting the internet and making our lives slightly harder. I'm sure there are some metrics out there that say “using in-app webview increases engagement by x%.” But for n=1, aka me, it only increases my disengagement. All I can say to developers is: It's okay to let go. The remedy for your attachment issues is user freedom. When I click a link, I expect to be in a full browser, with a permanent history, a functional address bar, and true control over my destination. 
It's time for applications to trust users, respect the open web, and stop trapping us in the confines of their digital cages. For users, next time you click a link, look for that small icon, often a compass, an arrow, or an ellipsis, then choose to open in browser. It's your internet. It's okay to leave the app. Or even better, never download the apps .

0 views
iDiallo 3 weeks ago

What Actually Defines a Stable Software Version?

As a developer, you'll hear these terms often: "stable software," "stable release," or "stable version." Intuitively, it just means you can rely on it. That's not entirely wrong, but when I was new to programming, I didn't truly grasp the technical meaning. For anyone learning, the initial, simple definition of "it works reliably" is a great starting point. But if you're building systems for the long haul, that definition is incomplete. The intuitive definition: a stable version is software that works and that you can rely on not to crash. The technical definition: a stable version is software whose API will not change unexpectedly in future updates. A stable version is essentially a guarantee from the developers that the core interface, such as the functions, class names, data structures, and overall architecture you interact with, will remain consistent throughout that version's lifecycle. This means that if your code works with version 1.0.0, it should also work flawlessly with version 1.0.1, 1.0.2, and 1.1.0. Future updates will focus on bug fixes, security patches, and performance improvements, not on introducing breaking changes that force you to rewrite your existing code. My initial misunderstanding was thinking stability was about whether the software was bug-free or not. Similar to how we expect bugs to be present in a beta version. But there was still an upside to this confusion. It helped me avoid the hype cycle, especially with certain JavaScript frameworks. I remember being hesitant to commit to new versions of certain tools (like early versions of React, Angular, though this is true of many fast-moving frameworks and SDKs). Paradigms would shift rapidly from one version to the next. A key concept I'd mastered one month would be deprecated or replaced the next. While those frameworks sit at the cutting edge of innovation, they can also be the antithesis of stability. Stability is about long-term commitment. 
Rapid shifts force users to constantly evolve with the framework, making it difficult to stay on a single version without continual, large-scale upgrades. A truly stable software version is one you can commit to for a significant amount of time. The classic example of stability is Python 2. Yes, I know many wanted it to die by fire, but it was first released in 2000 and remained active, receiving support and maintenance until its final update in 2020. That's two decades of stability! I really enjoyed being able to pick up old scripts and run them without any fuss. While I'm not advocating that every tool should last that long, I do think that when we're building APIs or stable software, we should adopt the mindset that this is the last version we'll ever make. This forces us to carefully consider the long-term design of our software. Whenever I see LTS (Long-Term Support) next to an application, I know that the maintainers have committed to supporting, maintaining, and keeping it backward compatible for a defined, extended period. That's when I know I'm working with both reliable and stable software.
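The guarantee described above, where code written against 1.0.0 keeps working on 1.0.1, 1.0.2, and 1.1.0 but not necessarily on 2.0.0, is the rule of thumb behind semantic versioning. A minimal sketch of that rule, my own illustration rather than any particular package manager's logic:

```python
def is_compatible(built_against: str, installed: str) -> bool:
    """Semantic-versioning rule of thumb: an installed version is
    API-compatible if it shares the major version with the version
    the code was built against, and is not older than it."""
    def parse(version: str) -> tuple[int, int, int]:
        major, minor, patch = (int(part) for part in version.split("."))
        return major, minor, patch

    b, i = parse(built_against), parse(installed)
    # Same major version, and at least as new: minor/patch bumps only add.
    return i[0] == b[0] and i >= b

print(is_compatible("1.0.0", "1.0.1"))  # True: patch release, bug fixes only
print(is_compatible("1.0.0", "1.1.0"))  # True: minor release adds, never breaks
print(is_compatible("1.0.0", "2.0.0"))  # False: major bump may break your code
```

Real version schemes have extra wrinkles (pre-release tags, and the convention that 0.x versions promise nothing), but this is the contract a "stable" label is making.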

0 views
iDiallo 4 weeks ago

What a Disappointing Blog

Have you ever read a blog post here and thought: Meh ? Some articles I write are ideas I've been working on for over a year. I think about them often, then add them to my little note app. Sometimes I'm driving and think of something clever, so I dictate it to my notes app while the kids are fighting in the background. Then, in the middle of the night, I take time away from sleep and start putting the ideas together. All because I challenged myself to publish every other day for an entire year. I do all this, hit the publish button, and... well, and then nothing. OK, not just nothing. Worse than nothing. A week later, I come back to revisit the article and discover a typo in the very first sentence. I read the entire thing, and it doesn't even make sense. What point was I trying to make? Why did I use that word? Why does it make me want to fall asleep? Why do I do this to myself? For God's sake, I wrote an entire book! When I read some older articles, I'm just as disappointed. Why didn't I add a counterpoint to balance the whole thing? I hope nobody I know ever reads this. It's weird how I get this feeling when reading my own writing. But I can assure you that when I'm writing, I'm pretty excited about it. I enjoy writing on my blog. These are my words, this is my work, this is how I express the ideas in my mind. For example, I had a blast reading, discovering and writing about timekeeping in the Star Wars universe . But, I had to re-edit it a few dozen times after publishing it. In fact, I like the process so much that I decided maybe I needed to do more. I should also make recordings of these articles, maybe a podcast-style discussion. That would be amazing. Of course, now that I've started and committed to three recordings a week for all of 2025 , listening to any episode is dreadful. My voice cracks, I regret the background music, and some episodes are just painful to listen to. Did I use too much noise canceling? I sound like a robot! 
Why can't I say the word "perspective"? Again, the process of turning an article into a script is fun. I went from using my phone as a recording device to a proper microphone. I went from using the microphone backwards (trust me, it's confusing) to finally understanding the settings. I try different recording areas and experiment with different sound presets. The process is fun. The result is frustrating to me. But for some people, those few who send me encouraging emails, who somehow enjoy the content, who challenge my ideas, this ends up being for them. They make it all worth it.

This doubt I feel every time I look at something I've made, every time I spot the mistakes, is, according to Ira Glass, the result of "the gap." In an old video titled "The Gap", he explains that we go into any creative endeavor because we have taste. Good taste. But whatever we create ends up being a disappointment because it doesn't live up to that taste. This is normal. The only way forward is to keep creating and keep improving. The more we do it, the narrower that gap becomes.

Yes, I might be frustrated with everything I make today, but what I wrote yesterday is a whole lot better than what I did 10 years ago. The creative expressions, the art, they are all improving. But so is the taste. Eventually, I'll be satisfied with my work, or at least accept it. This disappointment isn't the end of it all; it just means there's still room for improvement.

You might find yourself in a similar situation. One where you feel like everything you do sucks, and everyone else is better than you. It's not them, it's you. You just happen to have good taste, and you are trying to live up to it. Keep working, keep improving, it's the only way to narrow that gap. Once you close it, you might just look back and enjoy the fruit of your labor.

0 views
iDiallo 1 month ago

How We're Trying to Solve Vibe-Coded PRs

When companies start embracing AI, it's only a matter of time before it reaches the engineering teams. For competent developers, AI makes their lives easier. The benefits of tools like Cursor or Copilot are often invisible because developers use them as tools to accelerate their workflow, not replace it. It's confusing when companies claim a specific percentage of their code is "AI-generated," since these tools function as assistants. With that logic in mind, could we say a certain percentage of code was "StackOverflow copy-pasted"?

But every now and then, someone starts using AI to completely take over their position. They write a prompt to generate code that fixes a task, test that the task is resolved, and then commit the code. Sometimes the code is committed without any further review. These commits typically involve a large number of lines changed, a coding style that differs from the team's conventions, and changes that sometimes make no sense at all. Many developers treat PR review comments as personal criticism, so pointing out problems can feel harsh or rude. As a result, people hold back and let nonsensical code get merged.

To avoid these issues and the politics of "AI vs. Anti-AI", we started implementing a process that helps us address vibe-coded PRs without the criticism. I asked my most senior developer to vibe-code a solution to a relatively simple ticket. After a couple of people approved it, I scheduled a video call (including known vibe-coders) where the senior developer had to explain the PR. Since this was a staged review, I asked detailed questions: Why were certain choices made? Why did the coding style change? Why create a new endpoint instead of adding functionality to existing code? We scrutinized every part of the code. Changes were made, we reviewed again, and the team began to understand what the bar is for our work.

Why go through this dance instead of simply saying "don't vibe code" or "review your code thoroughly"? Because people use LLMs to save time.
If they don't have time to write the code, they certainly won't spend time reading it. What they do is generate code, test it, and if the functionality works, move it forward. It's rare for any vibe-coder to actually read the code they've generated. But seeing the scrutiny placed on these PRs forces developers to spend more time with their code. They realize they need to understand what they're submitting.

It's one thing to quickly create features when building an MVP, but the bar is much higher when contributing to existing software. When you write code, part of the process is thinking about your future self and how other developers will read and extend your work. You need to be consistent with the team's style, even if it's not always the optimal choice. The goal is for any developer to read the codebase as if it were written by one person.

Just a few days ago, I wrote about how when we use LLMs, we tend not to read the results before passing them to the next person down the chain. Putting a system in place that forces you to understand your work helps both developers and reviewers contribute meaningfully.

This is an experiment, and so far, I think it's working. But the world of LLMs is ever-changing, and we haven't settled on the rules yet. Maybe six months from now, vibe-coding will be reliable enough. But until we get there, we need to find ways to ensure we're still producing high-quality code that teams can collectively understand and maintain.

17 views
iDiallo 1 month ago

The NEO Robot

You've probably seen the NEO home robot by now, from the company 1X. It's a friendly humanoid with a plush-toy face that can work around your house. Cleaning, making beds, folding laundry, even picking up after meals. Most importantly, there's the way it looks. Unlike Tesla's "Optimus," which resembles an industrial robot, NEO looks friendly. It has a cute, plush face with round eyes. Something you could let your children play with. But after watching their launch video, I only had one thing on my mind: battery life. And that's how you know I was tricked.

Battery life is four hours after a full charge according to the company, but that's the wrong thing to focus on. Remember when Tesla first announced Optimus? Elon Musk made sure to emphasize one statement: they purposely capped the robot's speed to 5 miles per hour. Then he joked that "you can just outrun it and most likely overpower it." This steered the conversation toward safety in AI and robots, a masterful bit of misdirection from the fact that there was no robot whatsoever at the time. Not even a prototype. Just a person in a suit doing a silly dance.

With NEO, we saw a lot more. The robot loaded laundry into the machine, tidied up the home, did the dishes. Real demonstrations with real hardware. But what they failed to emphasize was just as important. All actions in the video were entirely remote controlled.

Here are the assumptions I was making while watching their video. Once you turn on this robot, it would first need to understand your home. Since it operates as a housekeeper, it would map your space using the dual cameras on its head, saving this information to some internal drive. It would need to recognize you both visually and through your voice. You'd register your face and voice like Face ID. They stated it can charge itself, so the dexterity of its hands must be precise enough to plug itself in autonomously. All reasonable assumptions for a $20,000 "AI home robot," right?
But these are just assumptions. Then the founder mentions you can "teach it new tasks," overseen by one of their experts that you can book at specific times. Since we're not seeing the robot do anything autonomously, I'm left wondering: what does "teaching the robot a skill" even mean?

The NEO is indeed a humanoid robot. But it's not an autonomous AI robot. It's a teleoperated robot that lives in your home. A remote operator from 1X views through its cameras and controls its movements when it needs to perform a task. If that's what they're building, it should be crystal clear. People need to understand what they're buying and the implications that come with it. You're allowing someone from a company to work in your home remotely, using a humanoid robot as their avatar, seeing everything the robot sees. Looking at the videos published by outlets like the Wall Street Journal, even the teleoperated functionality appears limited. MKBHD also offers an excellent analysis that's worth watching.

1X positions this teleoperation as a training mechanism. The "Expert Mode" that generates data to eventually make the robot autonomous. It's a reasonable approach, similar to how Tesla gathered data for Full Self-Driving. But the difference is your car camera feeds helped train a system; NEO's cameras invite a stranger into your most private spaces. The company says it has implemented privacy controls, scheduled sessions, no-go zones, visual indicators when someone's watching, face-blurring technology, etc. These are necessary safeguards, but they don't change the fundamental problem. This is not an autonomous robot. Also, you are acting as a data provider for the company while paying $20,000 for the hardware.

2026 is just around the corner. I expect the autonomous capabilities to be quietly de-emphasized in marketing as we approach the release date. I also expect delays attributed to "high demand" and "ensuring safety standards." I don't expect this robot to deliver in 2026.
If it does, it will be a teleoperated humanoid. With my privacy concerns, I will probably not be an early or late adopter. But I'll happily sit on the sidelines and watch the chaos unfold. A teleoperated humanoid sounds like the next logical step for an Uber or DoorDash. The company should just be clear about what they are building.

0 views
iDiallo 1 month ago

Why I Remain a Skeptic Despite Working in Tech

One thing that often surprises my friends and family is how tech-avoidant I am. I don't have the latest gadget, I talk about dumb TVs, and Siri isn't activated on my iPhone. The only thing left is to go to the kitchen, take a sheet of tin foil, and mold it into a hat. To put it simply, I avoid tech when I can. The main reason for my skepticism is that I don't like tracking technology. I can't stop it, I can't avoid it entirely, but I will try as much as I can.

Take electric cars, for example. I get excited to see new models rolling out. But over-the-air updates freak me out. Why? Because I'm not the one in control of them. Modern cars now receive software updates wirelessly, similar to smartphones. These over-the-air updates can modify everything from infotainment systems to critical driving functions like powertrain systems, brakes, and advanced driver assistance systems. While this technology offers convenience, it also introduces security concerns: hackers could potentially gain remote access to vehicle systems. The possibility of a hostile takeover went from 0 to 1.

I buy things from Amazon. It's extremely convenient. But I don't feel comfortable having a microphone constantly listening. They may say that they don't listen to conversations, but you can't respond to a command without listening. It does use some trigger words to activate, but they still occasionally accidentally activate and start recording. Amazon acknowledges that it employs thousands of people worldwide to listen to Alexa voice recordings and transcribe them to improve the AI's capabilities. In 2023, the FTC fined Amazon $31 million for violating children's privacy laws by keeping kids' Alexa voice recordings indefinitely and undermining parents' deletion requests. The same is true of Siri. Apple likes to brag about their privacy features, but they still paid $95 million in a Siri eavesdropping settlement.
Vizio TVs took screenshots from 11 million smart TVs and sold viewing data to third parties without users' knowledge or consent. The data was bundled with personal information including sex, age, income, marital status, household size, education level, and home value, then sold to advertisers. The FTC fined Vizio $2.2 million in 2017, but by then the damage was done. This technology isn't limited to Vizio. Most smart TV manufacturers use similar tracking, known as automatic content recognition (ACR). ACR can analyze exactly what's on your screen regardless of source, meaning your TV knows when you're playing video games, watching Blu-rays, or even casting home movies from your phone.

In 2023, Tesla faced a class action lawsuit after reports revealed that employees shared private photos and videos from customer vehicle cameras between 2019 and 2022. The content included private footage from inside customers' garages. One video that circulated among employees showed a Tesla hitting a child on a bike. Tesla's privacy notice states that "camera recordings remain anonymous and are not linked to you or your vehicle," yet employees clearly had access to identify and share specific footage.

Amazon links every Alexa interaction to your account and uses the data to profile you for targeted advertising. While Vizio was ordered to delete the data it collected, the court couldn't force third parties who purchased the data to delete it. Once your data is out there, you've lost control of it forever.

For me, a technological device that I own should belong to me, and me only. But for some reason, as soon as we add the internet to any device, it stops belonging to us. The promise of smart technology is convenience and innovation. The reality is surveillance and monetization. Our viewing habits, conversations, and driving patterns are products being sold without our meaningful consent. I love tech, and I love solving problems. But as long as I don't have control of the devices I use, I'll remain a tech skeptic.
One who works from the inside, hoping to build better solutions. The industry needs people who question these practices, who push back against normalized surveillance, and who remember that technology should serve users, not exploit them. Until then, I'll keep my TV dumb, my Siri disabled, and be the annoying family member who won't join your Facebook group.

4 views
iDiallo 1 month ago

None of us Read the specs

After using Large Language Models extensively, the same questions keep resurfacing. Why didn't the lawyer who used ChatGPT to draft legal briefs verify the case citations before presenting them to a judge? Why are developers raising issues on projects like cURL using LLMs, but not verifying the generated code before pushing a Pull Request? Why are students using AI to write their essays, yet submitting the result without a single read-through?

The reason is simple. If you didn't have time to write it, you certainly won't spend time reading it. They are all using LLMs as their time-saving strategy. In reality, the work remains undone because they are merely shifting the burden of verification and debugging to the next person in the chain.

AI companies promise that LLMs can transform us all into 10x developers. You can produce far more output than ever before: more lines of code, more draft documents, more specifications. The core problem is that this initial time saved is almost always spent by someone else to review and validate your output. At my day job, the developers who use AI to generate large swathes of code are generally lost when we ask questions during PR reviews. They can't explain the logic or the trade-offs because they didn't write it, and they didn't truly read it. Reading and understanding generated code defeats the initial purpose of using AI for speed.

Unfortunately, there is a fix for that as well. If PR reviews or verification slow the process down, then the clever reviewer can also use an LLM to review the code at 10x speed. Now, everyone has saved time. The code gets deployed faster. The metrics for velocity look fantastic. But then, a problem arises. A user experiences a critical issue. At this point, you face a technical catastrophe: the developer is unfamiliar with the code, and the reviewer is also unfamiliar with the code.
You are now completely at the mercy of another LLM to diagnose the issue and create a fix, because the essential human domain knowledge required to debug a problem has been bypassed by both parties.

This issue isn't restricted to writing code. I've seen the same dangerous pattern when architects use LLMs to write technical specifications for projects. As an architect whose job is to produce a document that developers can use as a blueprint, using an LLM exponentially improves speed. Where it once took a day to go through notes and produce specs, an LLM can generate a draft in minutes. As far as metrics are concerned, the architect is producing more. Maybe they can even generate three or four documents a day now. As an individual contributor, they are more productive. But that output is someone else's input, and the next person's work depends entirely on the quality of the document.

Just because we produce more doesn't mean we are doing a better job. Plus, our tendency is to not thoroughly vet the LLM's output because it always looks good enough, until someone has to scrutinize it. The developer implementing a feature, following that blueprint, will now have to do the extra work of figuring out if the specs even make sense. If the document contains logical flaws, missing context, or outright hallucinations, the developer must spend time reviewing and reconciling the logic. The worst-case scenario? They decide to save time, too. They use an LLM to "read" the flawed specs and build the product, incorporating and inheriting all the mistakes, and simply passing the technical debt along.

LLMs are powerful tools for augmentation, but we treat them as tools for abdication. They are fantastic at getting us to a first draft, but they cannot replace the critical human function of scrutiny, verification, and ultimate ownership. When everyone is using a tool the wrong way, you can't just say they are holding it wrong.
But I don't see how we can make verification a sustainable part of the process when the whole point of using an LLM is to save time. For now at least, we have to deliberately consider all LLM outputs incorrect until vetted. If we fail to do this, we're not just creating more work for others; we're actively eroding our work, making life harder for our future selves.

1 view
iDiallo 1 month ago

Why should I accept all cookies?

Around 2013, my team and I finally embarked on upgrading our company's internal software to version 2.0. We had a large backlog of user complaints that we were finally addressing, with security at the top of the list. At the very top was moving away from plain text passwords. From the outside, the system looked secure. We never emailed passwords, we never displayed them, we had strict protocols for password rotation and management. But this was a carefully staged performance. The truth was, an attacker with access to our codebase could have downloaded the entire user table in minutes. All our security measures were pure theater, designed to look robust while a fundamental vulnerability sat in plain sight.

After seeing the plain text password table, I remember thinking about a story that was also happening around the same time. A 9-year-old boy who flew from Minneapolis to Las Vegas without a boarding pass. This was in an era where we removed our shoes and belts for TSA agents to humiliate us. Yet, this child was able, without even trying, to bypass all the theater that was built around the security measures. How did he get past TSA? How did he get through the gate without a boarding pass? How was he assigned a seat in the plane? How did he... there are just so many questions. Just like the security measures on our website, it was all a performance, an illusion.

I can't help but see the same script playing out today, not in airports or codebases, but in the cookie consent banners that pop up on nearly every website I visit. It's always a variation of "This website uses cookies to enhance your experience. [Accept All] or [Customize]." Rarely is there a bold, equally prominent "Reject All" button. And when there is, the reject-all button will open a popup where you have to tweak some settings. This is not an accident; it's a dark pattern.
It's the digital equivalent of a TSA agent asking, "Would you like to take the express lane or would you like to go through a more complicated screening process?" Your third option is to turn back and go home, which isn't really an option if you made it all the way to the airport.

A few weeks back, I was exploring not just dark patterns but hostile software. Because you don't own the device you paid for, the OS can enforce decisions by never giving you any options. You don't have a choice. Any option you choose will lead you down the same funnel that benefits the company, while giving you the illusion of agency.

So, let's return to the cookie banner. As a user, what is my tangible incentive to click "Accept All"? The answer is: there is none. "Required" cookies are, by definition, non-negotiable for basic site function. Accepting the additional "performance," "analytics," or "marketing" cookies does not unlock a premium feature for me. It doesn't load the website faster or give me a cleaner layout. It does not improve my experience. My only "reward" for accepting all is that the banner disappears quickly. The incentive is the cessation of annoyance, a small dopamine hit for compliance. In exchange, I grant the website permission to track my behavior, build an advertising profile, and share my data with a shadowy network of third parties.

The entire interaction is a rigged game. Whenever I click on the "Customize" option, I'm overwhelmed with a labyrinth of toggles and sub-menus designed to make rejection so tedious that "Accept All" becomes the path of least resistance. My default reaction is to reject everything. It doesn't matter if you use dark patterns; my eyes are trained to read the fine print in a split second. But when that option is hidden, I've resorted to opening my browser's developer tools and deleting the banner element from the page altogether. It's a desperate workaround for a system that refuses to offer a legitimate "no."
Lately, I don't even bother clicking on reject all. I just delete the elements altogether. Like I said, there are no incentives for me to interact with the menu.

We eventually plugged that security vulnerability in our old application. We hashed the passwords and closed the backdoor, moving from security theater to actual security. The fix wasn't glamorous, but it was a real improvement. The current implementation of "choice" is largely privacy theater. It's a performance designed to comply with the letter of regulations like GDPR while violating their spirit. It makes users feel in control while systematically herding them toward the option that serves corporate surveillance.

And this pattern extends well beyond cookies. On Windows or Google Drive: "Get started" or "Remind me later." Where is "Never show this again"? On Twitter: "See less often" is the only option for an unwanted notification, never "Stop these entirely."

There is never an incentive for cookie tracking on the user's end. So this theater has to be created to justify selling our data and turning us into products of each website we visit. But if you are like me, don't forget you can always use the developer tools to make the banner disappear. Or use uBlock.

0 views
iDiallo 1 month ago

Galactic Timekeeping

Yes, I loved Andor. It was such a breath of fresh air in the Star Wars universe. The kind of storytelling that made me feel like a kid again, waiting impatiently for my father to bring home VHS tapes of Episodes 5 and 6. I wouldn't call myself a die-hard fan, but I've always appreciated the original trilogy. After binging both seasons of Andor, I immediately rewatched Rogue One, which of course meant I had to revisit A New Hope again. And through it all, one thing kept nagging at me. One question I had. What time is it?

In A New Hope, Han Solo, piloting the Millennium Falcon through hyperspace, casually mentions: "We should be at Alderaan about 0200 hours." And they are onto the next scene with R2D2. Except I'm like, wait a minute. What does "0200 hours" actually mean in an intergalactic civilization? When you're travelling through hyperspace between star systems, each with their own planets spinning at different rates around different suns, what does "2:00 AM" even refer to? Bear with me, I'm serious.

Time is fundamentally local. Here on Earth, we define a "day" by our planet's rotation relative to the Sun. One complete spin gives us 24 hours. A "year" is one orbit around our star. These measurements are essentially tied to our specific solar neighborhood. So how does time work when you're hopping between solar systems as casually as we hop between time zones?

Before we go any further into a galaxy far, far away, let's look at how we're handling timekeeping right now as we begin exploring our own solar system. NASA mission controllers for the Curiosity rover famously lived on "Mars Time" during their missions. A Martian day, called a "sol", is around 24 hours and 40 minutes long. To stay synchronized with the rover's daylight operations, mission control teams had their work shifts start 40 minutes later each Earth day. They wore special watches that displayed time in Mars sols instead of Earth hours.
Engineers would arrive at work in California at what felt like 3:00 AM one week, then noon the next, then evening, then back to the middle of the night. All while technically working the "same" shift on Mars. Families were disrupted. Sleep schedules were destroyed. And of course, "Baby sitters don't work on Mars time." And this was just for one other planet in our own solar system. One team member described it as living "perpetually jet-lagged." After several months, NASA had to abandon pure Mars time because it was simply unsustainable for human biology. Our circadian rhythms can only be stretched so much.

With the Artemis missions planning to establish a continuous human presence on the Moon, NASA and international space agencies are now trying to define an even more complicated system: Lunar Standard Time. A lunar "day", from one sunrise to the next, lasts about 29.5 Earth days. That's roughly 14 Earth days of continuous sunlight followed by 14 Earth days of darkness. You obviously can't work for two weeks straight and then hibernate for two more.

But that's not all. On the moon, time itself moves differently. Because of the moon's weaker gravity and different velocity relative to Earth, clocks on the Moon tick at a slightly different rate than clocks on Earth. It's a microscopic difference (about 56 microseconds per day), but for precision navigation, communication satellites, and coordinated operations, it matters. NASA is actively working to create a unified timekeeping framework that accounts for these relativistic effects while still allowing coordination between lunar operations and Earth-based mission control. And again, this is all within our tiny Earth-Moon system, sharing the same star. If we're struggling to coordinate time between two bodies in the same gravitational system, how would an entire galaxy manage it?
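The Mars-time drift described above is easy to see for yourself. A sol is about 24 h 39 m 35 s of Earth time, so a shift pinned to the same Mars solar time every sol slips roughly 40 minutes later on the Earth clock each day. A quick sketch (the anchor date and 09:00 start time are made up for illustration):

```python
from datetime import datetime, timedelta

# One Martian sol is roughly 24 h 39 m 35 s of Earth time.
SOL = timedelta(hours=24, minutes=39, seconds=35)

# Hypothetical shift pinned to the same Mars solar time every sol.
start = datetime(2012, 8, 6, 9, 0)

for n in range(5):
    shift_start = start + n * SOL
    print(shift_start.strftime("%Y-%m-%d %H:%M:%S"))
# Each start lands ~39.6 minutes later on the Earth clock than the
# previous one; after about 36 sols it has lapped the entire Earth day.
```

Run it and the start time crawls from 09:00 toward lunchtime within a week, which is exactly why the engineers' schedules cycled through the whole Earth day.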
In Star Wars the solution, according to the expanded universe lore , is this: "A standard year, also known more simply as a year or formally as Galactic Standard Year, was a standard measurement of time in the galaxy. The term year often referred to a single revolution of a planet around its star, the duration of which varied between planets; the standard year was specifically a Coruscant year, which was the galactic standard. The Coruscant solar cycle was 368 days long with a day consisting of 24 standard hours." So the galaxy has standardized on Coruscant, the political and cultural capital, as the reference point for time. We can think of it as Galactic Greenwich Mean Time, with Coruscant serving as the Prime Meridian of the galaxy. This makes a certain amount of political and practical sense. Just as we arbitrarily chose a line through Greenwich, England, as the zero point for our time zones, a galactic civilization would need to pick some reference frame. Coruscant, as the seat of government for millennia, is a logical choice. But I'm still not convinced that it is this simple. Are those "24 standard hours" actually standard everywhere, or just on Coruscant? Let's think through what Galactic Standard Time would actually require: Tatooine has a different rotation period than Coruscant. Hoth probably has a different day length than Bespin. Some planets might have extremely long days (like Venus, which takes 243 Earth days to rotate once). Some might rotate so fast that "days" are meaningless. Gas giants like Bespin might not have a clear surface to even define rotation against. For local populations who never leave their planet, this is fine. They just live by their star's rhythm. But the moment you have interplanetary travel, trade, and military coordination, you need a common reference frame. This was too complicated for me to fully grasp, but here is how I understood it. 
The theory of relativity tells us that time passes at different rates depending on your velocity and the strength of the gravitational field you're in. We see this in our own GPS satellites. They experience time about 38 microseconds faster per day than clocks on Earth's surface because they're in a weaker gravitational field, even though they're also moving quickly (which slows time down). Both effects must be constantly corrected or GPS coordinates would drift by kilometers each day. Now imagine you're the Empire trying to coordinate an attack. One Star Destroyer has been orbiting a high-gravity planet. Another has been traveling at relativistic speeds through deep space. A third has been in hyperspace. When they all rendezvous, their clocks will have drifted. How much? Well, we don't really know the physics of hyperspace or the precise gravitational fields involved, so we can't say. But it wouldn't be trivial. Even if you had perfectly synchronized clocks, there's still the problem of knowing what time it is elsewhere. Light takes time to travel. A lot of time. Earth is about 8 light-minutes from the Sun. Meaning if the Sun exploded right now, we wouldn't know for 8 minutes. Voyager 1, humanity's most distant spacecraft, is currently over 23 light-hours away. A signal from there takes nearly a full Earth day to reach us. The Star Wars galaxy is approximately 120,000 light-years in diameter (according to the lore again). Even with the HoloNet (their faster-than-light communication system), there would still be transmission delays, signal degradation, and the fundamental question of "which moment in time are we synchronizing to?" If Coruscant sends out a time signal, and a planet on the Outer Rim receives it three days later, whose "now" are they synchronizing to? In relativity, there is no universal "now." Time is not an absolute, objective thing that ticks uniformly throughout the universe. It's relative to your frame of reference. 
On Earth, we all roughly share the same frame of reference, so we can agree on UTC and time zones. But in a galaxy with millions of worlds, each moving at a different velocity relative to the others, each in a different gravitational field, with ships constantly jumping through hyperspace, which frame of reference do you pick? You could arbitrarily say "Coruscant's reference frame is the standard," but that doesn't make the physics go away:

- A clock on a planet with stronger gravity runs slower than one on a planet with weaker gravity
- A clock on a fast-moving ship runs slower than one on a stationary planet
- Hyperspace travel, which somehow exceeds the speed of light, would create all kinds of relativistic artifacts

A ship traveling at near-light-speed would still experience time differently. Any rebel operation requiring split-second timing would fall apart.

Despite all this complexity, the characters in Star Wars behave as if time is simple and universal. They "seem" to use a dual-time system.

Galactic Standard Time (GST). This would be for official, galaxy-wide coordination:

- Military operations ("All fighters, attack formation at 0430 GST")
- Senate sessions and government business
- Hyperspace travel schedules
- Banking and financial markets
- HoloNet news broadcasts

When Mon Mothma coordinates with Rebel cells across the galaxy in Andor, they're almost certainly using GST. When an X-Wing pilot gets a mission briefing, the launch time is in GST so the entire fleet stays synchronized.

Local Planetary Time (LPT). This is for daily life:

- Work schedules
- Sleep cycles
- Business hours
- Social conventions ("let's meet for lunch")

The workday on Ferrix follows Ferrix's sun. A cantina on Tatooine opens when Tatooine's twin suns rise. A farmer on Aldhani plants crops according to Aldhani's seasons.

A traveler would need to track both, just as we carry smartphones with clocks showing both home time and local time. An X-Wing pilot might wake up at 0600 LPT (local dawn on Yavin 4) for a mission launching at 1430 GST (coordinated across the fleet).

This is something I couldn't let go of when watching the show. In Andor, Cassian often references "night" and "day," saying things like "we'll leave in the morning" or "it's the middle of the night." When someone on a spaceship says "it's the middle of the night," or even "yesterday," what do they mean? There's no day-night cycle in space. They're not experiencing a sunset. The most logical explanation is that they've internalized the 24-hour Coruscant cycle as their personal rhythm.

"Night" means the GST clock reads 0200, and the ship's lights are probably dimmed to simulate a diurnal cycle, helping regulate circadian rhythms. "Morning" means 0800 GST, and the lights brighten. Space travelers have essentially become Coruscant-native in terms of their biological and cultural clock, regardless of where they actually are. It's an artificial rhythm, separate from any natural cycle, but necessary for maintaining order and sanity in an artificial environment.

I really wanted to present this in a way that makes sense. But the truth is, realistic galactic timekeeping would be mind-numbingly complex. You'd somehow need:

- Relativistic corrections for every inhabited world's gravitational field
- Constant recalibration for ships entering and exiting hyperspace
- A faster-than-light communication network that somehow maintains causality
- Atomic clock networks distributed across the galaxy, all quantum-entangled or connected through some exotic physics
- Sophisticated algorithms running continuously to keep everything synchronized
- Probably a dedicated branch of the Imperial bureaucracy just to maintain the Galactic Time Standard

It would make our International Telecommunication Union's work on UTC look like child's play.

But Star Wars isn't hard science fiction. It's a fairy tale set in space. A story about heroes, empires, and rebellions. The starfighters make noise in the vacuum of space. The ships bank and turn like WWII fighters despite having no air resistance. Gravity works the same everywhere regardless of planet size. So when Han Solo says "0200 hours," just pretend he is in Kansas. We accept that somewhere, somehow, the galaxy has solved this complex problem.

Maybe some genius inventor in the Old Republic created a MacGuffin that uses hyperspace itself as a universal reference frame, keeping every clock in the galaxy in perfect sync through some exotic quantum effect. Maybe the most impressive piece of technology in the Star Wars universe isn't the Death Star, which blows up, or the hyperdrive, which seems to fail half the time. The true technological and bureaucratic marvel is the invisible, unbelievably complex clock network that must be running flawlessly, constantly, behind the scenes, across 120,000 light-years. It suggests deep-seated control, stability, and sheer organizational power for the Empire. That might be the real foundation of galactic power, hidden right there in plain sight.

... or maybe the Force did it!

Maybe I took this a bit too seriously. But along the way, I was having too much fun reading about how NASA deals with time, and the deep lore behind Star Wars. I'm almost starting to understand why the Empire is trying to keep those pesky rebels at bay.

I enjoyed watching Andor. Remember, Syril is a villain. Yes, you are on his side sometimes; they made him look human, but he is still a bad guy. There, I said it. They can't make a third season because Rogue One is what comes next. But I think I've earned the right to just enjoy watching Cassian Andor glance at his chrono and say "We leave at dawn," wherever and whenever that is.
