Latest Posts (20 found)
iDiallo Yesterday

The NEO Robot

You've probably seen the NEO home robot by now, from the company 1X. It's a friendly humanoid with a plush-toy face that can work around your house: cleaning, making beds, folding laundry, even picking up after meals. Most importantly, there's the way it looks. Unlike Tesla's "Optimus," which resembles an industrial robot, NEO looks friendly. It has a cute, plush face with round eyes. Something you could let your children play with.

But after watching their launch video, I only had one thing on my mind: battery life. And that's how you know I was tricked. Battery life is four hours after a full charge according to the company, but that's the wrong thing to focus on.

Remember when Tesla first announced Optimus? Elon Musk made sure to emphasize one point: they purposely capped the robot's speed at 5 miles per hour. Then he joked that "you can just outrun it and most likely overpower it." This steered the conversation toward safety in AI and robots, a masterful bit of misdirection from the fact that there was no robot whatsoever at the time. Not even a prototype. Just a person in a suit doing a silly dance.

With NEO, we saw a lot more. The robot loaded laundry into the machine, tidied up the home, did the dishes. Real demonstrations with real hardware. But what they failed to emphasize was just as important: all the actions in the video were entirely remote controlled.

Here are the assumptions I was making while watching their video. Once you turn on this robot, it would first need to understand your home. Since it operates as a housekeeper, it would map your space using the dual cameras on its head, saving this information to some internal drive. It would need to recognize you both visually and through your voice; you'd register your face and voice like Face ID. They stated it can charge itself, so the dexterity of its hands must be precise enough to plug itself in autonomously. All reasonable assumptions for a $20,000 "AI home robot," right? But these are just assumptions.

Then the founder mentions you can "teach it new tasks," overseen by one of their experts that you can book at specific times. Since we're not seeing the robot do anything autonomously, I'm left wondering: what does "teaching the robot a skill" even mean?

The NEO is indeed a humanoid robot. But it's not an autonomous AI robot. It's a teleoperated robot that lives in your home. A remote operator from 1X views through its cameras and controls its movements when it needs to perform a task. If that's what they're building, it should be crystal clear. People need to understand what they're buying and the implications that come with it. You're allowing someone from a company to work in your home remotely, using a humanoid robot as their avatar, seeing everything the robot sees.

Looking at the videos published by outlets like the Wall Street Journal, even the teleoperated functionality appears limited. MKBHD also offers an excellent analysis that's worth watching.

1X positions this teleoperation as a training mechanism: the "Expert Mode" that generates data to eventually make the robot autonomous. It's a reasonable approach, similar to how Tesla gathered data for Full Self-Driving. But there's a difference: your car's camera feeds helped train a system, while NEO's cameras invite a stranger into your most private spaces. The company says it has implemented privacy controls: scheduled sessions, no-go zones, visual indicators when someone's watching, face-blurring technology, etc.
These are necessary safeguards, but they don't change the fundamental problem. This is not an autonomous robot. And you are acting as a data provider for the company while paying $20,000 for the hardware.

2026 is just around the corner. I expect the autonomous capabilities to be quietly de-emphasized in marketing as we approach the release date. I also expect delays attributed to "high demand" and "ensuring safety standards." I don't expect this robot to deliver in 2026. If it does, it will be a teleoperated humanoid. Given my privacy concerns, I will probably not be an early adopter, or a late one. But I'll happily sit on the sidelines and watch the chaos unfold. A teleoperated humanoid sounds like the next logical step for a company like Uber or DoorDash. The company should just be clear about what they are building.

iDiallo 3 days ago

Why I Remain a Skeptic Despite Working in Tech

One thing that often surprises my friends and family is how tech-avoidant I am. I don't have the latest gadget, I talk about dumb TVs, and Siri isn't activated on my iPhone. The only thing left is to go to the kitchen, take a sheet of tin foil, and mold it into a hat. To put it simply, I avoid tech when I can. The main reason for my skepticism is that I don't like tracking technology. I can't stop it, I can't avoid it entirely, but I will try as much as I can.

Take electric cars, for example. I get excited to see new models rolling out. But over-the-air updates freak me out. Why? Because I'm not the one in control of them. Modern cars now receive software updates wirelessly, similar to smartphones. These over-the-air updates can modify everything from infotainment systems to critical driving functions like powertrains, brakes, and advanced driver assistance systems. While this technology offers convenience, it also introduces security concerns: hackers could potentially gain remote access to vehicle systems. The possibility of a hostile takeover went from 0 to 1.

I buy things from Amazon. It's extremely convenient. But I don't feel comfortable having a microphone constantly listening. They may say that they don't listen to conversations, but you can't respond to a command without listening. It does use trigger words to activate, but these devices still occasionally activate by accident and start recording. Amazon acknowledges that it employs thousands of people worldwide to listen to Alexa voice recordings and transcribe them to improve the AI's capabilities. In 2023, the FTC fined Amazon $31 million for violating children's privacy laws by keeping kids' Alexa voice recordings indefinitely and undermining parents' deletion requests. The same goes for Siri. Apple likes to brag about its privacy features, but it still paid $95 million in a Siri eavesdropping settlement.

Vizio took screenshots from 11 million smart TVs and sold viewing data to third parties without users' knowledge or consent. The data was bundled with personal information including sex, age, income, marital status, household size, education level, and home value, then sold to advertisers. The FTC fined Vizio $2.2 million in 2017, but by then the damage was done. This practice isn't limited to Vizio; most smart TV manufacturers use similar tracking. Automatic content recognition (ACR) can analyze exactly what's on your screen regardless of source, meaning your TV knows when you're playing video games, watching Blu-rays, or even casting home movies from your phone.

In 2023, Tesla faced a class action lawsuit after reports revealed that employees shared private photos and videos from customer vehicle cameras between 2019 and 2022. The content included private footage from inside customers' garages. One video that circulated among employees showed a Tesla hitting a child on a bike. Tesla's privacy notice states that "camera recordings remain anonymous and are not linked to you or your vehicle," yet employees clearly had access to identify and share specific footage.

Amazon links every Alexa interaction to your account and uses the data to profile you for targeted advertising. While Vizio was ordered to delete the data it collected, the court couldn't force third parties who purchased the data to delete it. Once your data is out there, you've lost control of it forever.

For me, a technological device that I own should belong to me, and me only. But for some reason, as soon as we add the internet to any device, it stops belonging to us.
The promise of smart technology is convenience and innovation. The reality is surveillance and monetization. Our viewing habits, conversations, and driving patterns are products being sold without our meaningful consent.

I love tech, and I love solving problems. But as long as I don't have control of the devices I use, I'll remain a tech skeptic. One who works from the inside, hoping to build better solutions. The industry needs people who question these practices, who push back against normalized surveillance, and who remember that technology should serve users, not exploit them. Until then, I'll keep my TV dumb, my Siri disabled, and remain the annoying family member who won't join your Facebook group.

iDiallo 5 days ago

None of Us Read the Specs

After using Large Language Models extensively, the same questions keep resurfacing. Why didn't the lawyer who used ChatGPT to draft legal briefs verify the case citations before presenting them to a judge? Why are developers raising issues on projects like cURL using LLMs, but not verifying the generated code before pushing a pull request? Why are students using AI to write their essays, yet submitting the result without a single read-through?

The reason is simple. If you didn't have time to write it, you certainly won't spend time reading it. They are all using LLMs as their time-saving strategy. In reality, the work remains undone, because they are merely shifting the burden of verification and debugging to the next person in the chain.

AI companies promise that LLMs can transform us all into 10x developers. You can produce far more output, more lines of code, more draft documents, more specifications, than ever before. The core problem is that this initial time saved is almost always spent by someone else to review and validate your output. At my day job, the developers who use AI to generate large swathes of code are generally lost when we ask questions during PR reviews. They can't explain the logic or the trade-offs because they didn't write it, and they didn't truly read it. Reading and understanding generated code defeats the initial purpose of using AI for speed.

Unfortunately, there is a fix for that as well. If PR reviews or verification slow the process down, then the clever reviewer can also use an LLM to review the code at 10x speed. Now everyone has saved time. The code gets deployed faster. The metrics for velocity look fantastic. But then a problem arises. A user experiences a critical issue. At this point, you face a technical catastrophe: the developer is unfamiliar with the code, and the reviewer is also unfamiliar with the code. You are now completely at the mercy of another LLM to diagnose the issue and create a fix, because the essential human domain knowledge required to debug the problem has been bypassed by both parties.

This issue isn't restricted to writing code. I've seen the same dangerous pattern when architects use LLMs to write technical specifications for projects. As an architect whose job is to produce a document that developers can use as a blueprint, using an LLM exponentially improves speed. Where it once took a day to go through notes and produce specs, an LLM can generate a draft in minutes. As far as metrics are concerned, the architect is producing more. Maybe they can even generate three or four documents a day now. As an individual contributor, they are more productive. But that output is someone else's input, and the next person's work depends entirely on the quality of the document.

Just because we produce more doesn't mean we are doing a better job. Plus, our tendency is to not thoroughly vet the LLM's output, because it always looks good enough, until someone has to scrutinize it. The developer implementing a feature, following that blueprint, will now have to do the extra work of figuring out if the specs even make sense. If the document contains logical flaws, missing context, or outright hallucinations, the developer must spend time reviewing and reconciling the logic. The worst-case scenario? They decide to save time, too. They use an LLM to "read" the flawed specs and build the product, incorporating and inheriting all the mistakes, and simply passing the technical debt along.
LLMs are powerful tools for augmentation, but we treat them as tools for abdication. They are fantastic at getting us to a first draft, but they cannot replace the critical human function of scrutiny, verification, and ultimate ownership. When everyone is using a tool the wrong way, you can't just say they are holding it wrong. But I don't see how we can make verification a sustainable part of the process when the whole point of using an LLM is to save time. For now at least, we have to deliberately treat all LLM outputs as incorrect until vetted. If we fail to do this, we're not just creating more work for others; we're actively eroding our own work and making life harder for our future selves.

iDiallo 1 week ago

Why should I accept all cookies?

Around 2013, my team and I finally embarked on upgrading our company's internal software to version 2.0. We had a large backlog of user complaints that we were finally addressing, with security at the top of the list. At the very top was moving away from plain text passwords. From the outside, the system looked secure. We never emailed passwords, we never displayed them, and we had strict protocols for password rotation and management. But this was a carefully staged performance. The truth was, an attacker with access to our codebase could have downloaded the entire user table in minutes. All our security measures were pure theater, designed to look robust while a fundamental vulnerability sat in plain sight.

After seeing the plain text password table, I remember thinking about a story that was happening around the same time: a 9-year-old boy flew from Minneapolis to Las Vegas without a boarding pass. This was in an era where we removed our shoes and belts for TSA agents to humiliate us. Yet this child was able, without even trying, to bypass all the theater that was built around the security measures. How did he get past TSA? How did he get through the gate without a boarding pass? How was he assigned a seat on the plane? How did he... there are just so many questions. Just like the security measures on our website, it was all a performance, an illusion.

I can't help but see the same script playing out today, not in airports or codebases, but in the cookie consent banners that pop up on nearly every website I visit. It's always a variation of "This website uses cookies to enhance your experience. [Accept All] or [Customize]." Rarely is there a bold, equally prominent "Reject All" button. And when there is, the reject-all button will open a popup where you have to tweak some settings. This is not an accident; it's a dark pattern. It's the digital equivalent of a TSA agent asking, "Would you like to take the express lane, or would you like to go through a more complicated screening process?" Your third option is to turn back and go home, which isn't really an option if you made it all the way to the airport.

A few weeks back, I was exploring not just dark patterns but hostile software. Because you don't own the device you paid for, the OS can enforce decisions by never giving you any options. You don't have a choice. Any option you choose will lead you down the same funnel that benefits the company, while giving you the illusion of agency.

So, let's return to the cookie banner. As a user, what is my tangible incentive to click "Accept All"? The answer is: there is none. "Required" cookies are, by definition, non-negotiable for basic site function. Accepting the additional "performance," "analytics," or "marketing" cookies does not unlock a premium feature for me. It doesn't load the website faster or give me a cleaner layout. It does not improve my experience. My only "reward" for accepting all is that the banner disappears quickly. The incentive is the cessation of annoyance, a small dopamine hit for compliance. In exchange, I grant the website permission to track my behavior, build an advertising profile, and share my data with a shadowy network of third parties.

The entire interaction is a rigged game. Whenever I click on the "Customize" option, I'm overwhelmed with a labyrinth of toggles and sub-menus designed to make rejection so tedious that "Accept All" becomes the path of least resistance. My default reaction is to reject everything.
It doesn't matter if you use dark patterns; my eyes are trained to read the fine print in a split second. But when that option is hidden, I've resorted to opening my browser's developer tools and deleting the banner element from the page altogether. It's a desperate workaround for a system that refuses to offer a legitimate "no." Lately, I don't even bother clicking on "Reject All." I just delete the elements altogether. Like I said, there is no incentive for me to interact with the menu.

We eventually plugged that security vulnerability in our old application. We hashed the passwords and closed the backdoor, moving from security theater to actual security. The fix wasn't glamorous, but it was a real improvement. The current implementation of "choice" is largely privacy theater. It's a performance designed to comply with the letter of regulations like GDPR while violating their spirit. It makes users feel in control while systematically herding them toward the option that serves corporate surveillance. There is never an incentive for cookie tracking on the user's end. So this theater has to be created to justify selling our data and turning us into products of each website we visit.

And this non-choice isn't limited to cookie banners. On Windows or Google Drive, it's "Get started" or "Remind me later." Where is "Never show this again"? On Twitter, "See less often" is the only option for an unwanted notification, never "Stop these entirely."

But if you are like me, don't forget you can always use the developer tools to make the banner disappear. Or use uBlock.
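If you go the developer-tools route, you don't even need to hunt through the element inspector every time. A couple of lines pasted into the browser console usually does it. This is a rough sketch, not a universal fix: the selectors below are guesses (every site names its banner differently), and some sites also lock scrolling with a class on the body that you'll have to remove.

```typescript
// Paste into the browser console (plain JavaScript, so it runs as-is).
// The selectors are hypothetical examples; inspect the page to find the real ones.
['#cookie-banner', '.cookie-consent', '[id*="consent"]', '[class*="cookie"]']
  .forEach((selector) =>
    document.querySelectorAll(selector).forEach((el) => el.remove())
  );

// Many sites also freeze the page behind the banner; undo the usual suspects.
document.documentElement.style.overflow = 'auto';
document.body.style.overflow = 'auto';
document.body.classList.remove('no-scroll', 'modal-open'); // hypothetical class names
```

uBlock Origin's "Annoyances" filter lists automate essentially the same idea, hiding most consent banners before they ever render.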

iDiallo 1 week ago

Galactic Timekeeping

Yes, I loved Andor. It was such a breath of fresh air in the Star Wars universe. The kind of storytelling that made me feel like a kid again, waiting impatiently for my father to bring home VHS tapes of Episodes 5 and 6. I wouldn't call myself a die-hard fan, but I've always appreciated the original trilogy. After binging both seasons of Andor, I immediately rewatched Rogue One, which of course meant I had to revisit A New Hope again. And through it all, one thing kept nagging at me. One question: what time is it?

In A New Hope, Han Solo, piloting the Millennium Falcon through hyperspace, casually mentions: "We should be at Alderaan about 0200 hours." And they're on to the next scene with R2-D2. Except I'm like, wait a minute. What does "0200 hours" actually mean in an intergalactic civilization? When you're travelling through hyperspace between star systems, each with their own planets spinning at different rates around different suns, what does "2:00 AM" even refer to?

Bear with me, I'm serious. Time is fundamentally local. Here on Earth, we define a "day" by our planet's rotation relative to the Sun. One complete spin gives us 24 hours. A "year" is one orbit around our star. These measurements are essentially tied to our specific solar neighborhood. So how does time work when you're hopping between solar systems as casually as we hop between time zones?

Before we go any further into a galaxy far, far away, let's look at how we're handling timekeeping right now as we begin exploring our own solar system. NASA mission controllers for the Curiosity rover famously lived on "Mars time" during their mission. A Martian day, called a "sol," is around 24 hours and 40 minutes long. To stay synchronized with the rover's daylight operations, mission control teams had their work shifts start 40 minutes later each Earth day. They wore special watches that displayed time in Mars sols instead of Earth hours. Engineers would arrive at work in California at what felt like 3:00 AM one week, then noon the next, then evening, then back to the middle of the night. All while technically working the "same" shift on Mars. Families were disrupted. Sleep schedules were destroyed. And of course, "babysitters don't work on Mars time." And this was just for one other planet in our own solar system. One team member described it as living "perpetually jet-lagged." After several months, NASA had to abandon pure Mars time because it was simply unsustainable for human biology. Our circadian rhythms can only be stretched so much.

With the Artemis missions planning to establish a continuous human presence on the Moon, NASA and international space agencies are now trying to define an even more complicated system: Lunar Standard Time. A lunar "day," from one sunrise to the next, lasts about 29.5 Earth days. That's roughly 14 Earth days of continuous sunlight followed by 14 Earth days of darkness. You obviously can't work for two weeks straight and then hibernate for two more. But that's not all. On the Moon, time itself moves differently. Because of the Moon's weaker gravity and different velocity relative to Earth, clocks on the Moon tick at a slightly different rate than clocks on Earth. It's a microscopic difference (about 56 microseconds per day), but for precision navigation, communication satellites, and coordinated operations, it matters.
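Those microseconds aren't magic; they fall out of two textbook effects, gravity and speed, and you can sanity-check the numbers yourself. Here's a rough sketch (my own back-of-the-envelope, using weak-field approximations and ignoring details like orbital eccentricity and Earth's rotation) that reproduces the famous GPS figure I'll mention below: a clock in a GPS orbit gains about 38 microseconds per day relative to one on the ground.

```typescript
// Back-of-the-envelope relativistic clock drift for a GPS-like orbit.
// Weak-field approximation; ignores Earth's rotation and orbital eccentricity.
const GM = 3.986004418e14;     // Earth's gravitational parameter, m^3/s^2
const C = 2.99792458e8;        // speed of light, m/s
const R_EARTH = 6.371e6;       // mean Earth radius, m
const R_ORBIT = 2.6571e7;      // GPS orbit radius (~20,200 km altitude), m
const SECONDS_PER_DAY = 86400;

// Gravitational term: the orbiting clock sits higher in the potential well, so it runs faster.
const gravitational = (GM / C ** 2) * (1 / R_EARTH - 1 / R_ORBIT);

// Velocity term: orbital speed slows the clock down (special relativity).
const orbitalSpeed = Math.sqrt(GM / R_ORBIT); // ≈ 3.9 km/s
const velocity = -(orbitalSpeed ** 2) / (2 * C ** 2);

const driftMicrosecondsPerDay = (gravitational + velocity) * SECONDS_PER_DAY * 1e6;
console.log(driftMicrosecondsPerDay.toFixed(1)); // ≈ +38.5, i.e. the satellite clock runs fast
```

The Moon's roughly 56 microseconds per day comes from the same two effects, just with different masses, distances, and velocities plugged in.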
NASA is actively working to create a unified timekeeping framework that accounts for these relativistic effects while still allowing coordination between lunar operations and Earth-based mission control. And again, this is all within our tiny Earth-Moon system, sharing the same star. If we're struggling to coordinate time between two bodies in the same gravitational system, how would an entire galaxy manage it?

In Star Wars, the solution, according to the expanded universe lore, is this: "A standard year, also known more simply as a year or formally as Galactic Standard Year, was a standard measurement of time in the galaxy. The term year often referred to a single revolution of a planet around its star, the duration of which varied between planets; the standard year was specifically a Coruscant year, which was the galactic standard. The Coruscant solar cycle was 368 days long with a day consisting of 24 standard hours."

So the galaxy has standardized on Coruscant, the political and cultural capital, as the reference point for time. We can think of it as Galactic Greenwich Mean Time, with Coruscant serving as the Prime Meridian of the galaxy. This makes a certain amount of political and practical sense. Just as we arbitrarily chose a line through Greenwich, England, as the zero point for our time zones, a galactic civilization would need to pick some reference frame. Coruscant, as the seat of government for millennia, is a logical choice. But I'm still not convinced that it is this simple. Are those "24 standard hours" actually standard everywhere, or just on Coruscant?

Let's think through what Galactic Standard Time would actually require. Tatooine has a different rotation period than Coruscant. Hoth probably has a different day length than Bespin. Some planets might have extremely long days (like Venus, which takes 243 Earth days to rotate once). Some might rotate so fast that "days" are meaningless. Gas giants like Bespin might not have a clear surface to even define rotation against. For local populations who never leave their planet, this is fine. They just live by their star's rhythm. But the moment you have interplanetary travel, trade, and military coordination, you need a common reference frame.

This was too complicated for me to fully grasp, but here is how I understood it:

- A clock on a planet with stronger gravity runs slower than one on a planet with weaker gravity.
- A clock on a fast-moving ship runs slower than one on a stationary planet.
- Hyperspace travel, which somehow exceeds the speed of light, would create all kinds of relativistic artifacts.

The theory of relativity tells us that time passes at different rates depending on your velocity and the strength of the gravitational field you're in. We see this in our own GPS satellites. They experience time about 38 microseconds faster per day than clocks on Earth's surface because they're in a weaker gravitational field, even though they're also moving quickly (which slows time down). Both effects must be constantly corrected, or GPS coordinates would drift by kilometers each day.

Now imagine you're the Empire trying to coordinate an attack. One Star Destroyer has been orbiting a high-gravity planet. Another has been traveling at relativistic speeds through deep space. A third has been in hyperspace. When they all rendezvous, their clocks will have drifted. How much? Well, we don't really know the physics of hyperspace or the precise gravitational fields involved, so we can't say. But it wouldn't be trivial.

Even if you had perfectly synchronized clocks, there's still the problem of knowing what time it is elsewhere. Light takes time to travel. A lot of time. Earth is about 8 light-minutes from the Sun, meaning if the Sun exploded right now, we wouldn't know for 8 minutes.
Voyager 1, humanity's most distant spacecraft, is currently over 23 light-hours away. A signal from there takes nearly a full Earth day to reach us. The Star Wars galaxy is approximately 120,000 light-years in diameter (according to the lore, again). Even with the HoloNet (their faster-than-light communication system), there would still be transmission delays, signal degradation, and the fundamental question of "which moment in time are we synchronizing to?" If Coruscant sends out a time signal, and a planet on the Outer Rim receives it three days later, whose "now" are they synchronizing to?

In relativity, there is no universal "now." Time is not an absolute, objective thing that ticks uniformly throughout the universe. It's relative to your frame of reference. On Earth, we all roughly share the same frame of reference, so we can agree on UTC and time zones. But in a galaxy with millions of worlds, each moving at different velocities relative to each other, each in different gravitational fields, with ships constantly jumping through hyperspace, which frame of reference do you pick? You could arbitrarily say "Coruscant's reference frame is the standard," but that doesn't make the physics go away. A ship traveling at near-light speed would still experience time differently. Any rebel operation requiring split-second timing would fall apart.

Despite all this complexity, the characters in Star Wars behave as if time is simple and universal. They "seem" to use a dual-time system.

Galactic Standard Time (GST) would be for official, galaxy-wide coordination:

- Military operations ("All fighters, attack formation at 0430 GST")
- Senate sessions and government business
- Hyperspace travel schedules
- Banking and financial markets
- HoloNet news broadcasts

When Mon Mothma coordinates with Rebel cells across the galaxy in Andor, they're almost certainly using GST. When an X-Wing pilot gets a mission briefing, the launch time is in GST so the entire fleet stays synchronized.

Local Planetary Time (LPT) would be for daily life:

- Work schedules
- Sleep cycles
- Business hours
- Social conventions ("let's meet for lunch")

The workday on Ferrix follows Ferrix's sun. A cantina on Tatooine opens when Tatooine's twin suns rise. A farmer on Aldhani plants crops according to Aldhani's seasons. A traveler would need to track both, like we carry smartphones with clocks showing both home time and local time. An X-Wing pilot might wake up at 0600 LPT (local dawn on Yavin 4) for a mission launching at 1430 GST (coordinated across the fleet).

This is something I couldn't let go of while watching the show. In Andor, Cassian often references "night" and "day," saying things like "we'll leave in the morning" or "it's the middle of the night." When someone on a spaceship says "it's the middle of the night," or even "yesterday," what do they mean? There's no day-night cycle in space. They're not experiencing a sunset. The most logical explanation is that they've internalized the 24-hour Coruscant cycle as their personal rhythm. "Night" means the GST clock reads 0200, and the ship's lights are probably dimmed to simulate a diurnal cycle, helping regulate circadian rhythms. "Morning" means 0800 GST, and the lights brighten. Space travelers have essentially become Coruscant-native in terms of their biological and cultural clock, regardless of where they actually are. It's an artificial rhythm, separate from any natural cycle, but necessary for maintaining order and sanity in an artificial environment.

I really wanted to present this in a way that makes sense. But the truth is, realistic galactic timekeeping would be mind-numbingly complex. You'd somehow need:

- Relativistic corrections for every inhabited world's gravitational field
- Constant recalibration for ships entering and exiting hyperspace
- A faster-than-light communication network that somehow maintains causality
- Atomic clock networks distributed across the galaxy, all quantum-entangled or connected through some exotic physics
- Sophisticated algorithms running continuously to keep everything synchronized
- Probably a dedicated branch of the Imperial bureaucracy just to maintain the Galactic Time Standard

It would make our International Telecommunication Union's work on UTC look like child's play. But Star Wars isn't hard science fiction. It's a fairy tale set in space.
A story about heroes, empires, and rebellions. The starfighters make noise in the vacuum of space. The ships bank and turn like WWII fighters despite having no air resistance. Gravity works the same everywhere regardless of planet size. So when Han Solo says "0200 hours," just pretend he's in Kansas. We accept that somewhere, somehow, the galaxy has solved this complex problem. Maybe some genius inventor in the Old Republic created a MacGuffin that uses hyperspace itself as a universal reference frame, keeping every clock in the galaxy in perfect sync through some exotic quantum effect.

Maybe the most impressive piece of technology in the Star Wars universe isn't the Death Star, which blows up. Or the hyperdrive, which seems to fail half the time. The true technological and bureaucratic marvel is the invisible, unbelievably complex clock network that must be running flawlessly, constantly, behind the scenes across 120,000 light-years. It suggests deep-seated control, stability, and sheer organizational power for the Empire. That might be the real foundation of galactic power, hidden right there in plain sight.

...or maybe the Force did it!

Maybe I took this a bit too seriously. But along the way, I was having too much fun reading about how NASA deals with time, and the deep lore behind Star Wars. I'm almost starting to understand why the Empire is trying to keep those pesky rebels at bay. I enjoyed watching Andor. Remember, Syril is a villain. Yes, you are on his side sometimes, and they made him look human, but he is still a bad guy. There, I said it. They can't make a third season because Rogue One is what comes next. But I think I've earned the right to just enjoy watching Cassian Andor glance at his chrono and say "We leave at dawn," wherever and whenever that is.

iDiallo 1 week ago

Is RSS Still Relevant?

I'd like to believe that RSS is still relevant and remains one of the most important technologies we've created. The moment I built this blog, I made sure my feed was working properly. Back in 2013, the web was already starting to move away from RSS. Every few months, an article would go viral declaring that RSS was dying or dead. Fast forward to 2025: those articles are nonexistent, and most people don't even know what RSS is.

One of the main advantages of an RSS feed is that it allows me to read news and articles without worrying about an algorithm controlling how I discover them. I have a list of blogs I'm subscribed to, and I consume their content chronologically. When someone writes an article I'm not interested in, I can simply skip it. I don't need to train an AI to detect and understand the type of content I don't like. Who knows, the author might write something similar in the future that I do enjoy. I reserve that agency to judge for myself.

The fact that RSS links aren't prominently featured on blogs anymore isn't really a problem for me. I have the necessary tools to find them and subscribe on my own. In general, people who care about RSS are already aware of how to subscribe.

Since I have this blog and have been posting regularly this year, I can actually look at my server logs and see who's checking my feed. From January 1st to September 1st, 2025, there were a total of 537,541 requests to my RSS feed. RSS readers often check websites at timed intervals to detect when a new article is published. Some are very aggressive and check every 10 minutes throughout the day, while others have somehow figured out my publishing schedule and only check a couple of times daily.

RSS readers, or feed parsers, don't always identify themselves. The most annoying name I've seen is just , probably a Node.js script running on someone's local machine. However, I do see other prominent readers like Feedly, NewsBlur, and Inoreader.

There are two types of readers: those from cloud services like Feedly that have consistent IP addresses you can track over time, and those running on user devices. I can identify the latter as user devices because users often click on links and visit my blog with the same IP address. So far this year, I've seen 1,225 unique reader names. It's hard to confirm if they're truly unique, since some are the same application with different versions. For example, Tiny Tiny RSS has accessed the website with 14 different versions, from version 22.08 to 25.10. I've written a script to extract as many identifiable readers as possible while ignoring the generic ones that just use common browser user agents (a simplified sketch of the approach is at the end of this post). Here's the list of RSS readers and feed parsers that have accessed my blog: Raw list of RSS user agents here

RSS might be irrelevant on social media, but that doesn't really matter. The technology is simple enough that anyone who cares can implement it on their platform. It's just a fancy XML file. It comes installed and enabled by default on several blogging platforms. It doesn't have to be the de facto standard on the web, just a good way for people who are aware of it to share articles without being at the mercy of dominant platforms.
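For the curious, here's roughly what that kind of script looks like. This is a simplified sketch, not my exact code: it assumes a combined-format access log, placeholder file and feed paths, and a crude skip-list for generic browser user agents.

```typescript
// Simplified sketch: pull feed-reader user agents out of an access log.
import * as fs from 'fs';
import * as readline from 'readline';

const FEED_PATH = '/rss';                        // placeholder: your feed's URL path
const GENERIC = /Mozilla|Chrome|Safari|Firefox/; // ordinary browsers to ignore
const readers = new Map<string, number>();

const rl = readline.createInterface({
  input: fs.createReadStream('access.log'),      // placeholder log file
});

rl.on('line', (line) => {
  if (!line.includes(`GET ${FEED_PATH}`)) return;
  // In combined log format the user agent is the last quoted field.
  const match = line.match(/"([^"]*)"$/);
  if (!match) return;
  const agent = match[1];
  if (GENERIC.test(agent)) return;               // skip generic browser agents
  readers.set(agent, (readers.get(agent) ?? 0) + 1);
});

rl.on('close', () => {
  [...readers.entries()]
    .sort((a, b) => b[1] - a[1])
    .forEach(([agent, hits]) => console.log(`${hits}\t${agent}`));
});
```

The real version needs a longer skip-list, since plenty of legitimate feed readers also embed "Mozilla/5.0" in their user agent string.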

iDiallo 2 weeks ago

The TikTok Model is the Future of the Web

I hate to say it, but when I wake up in the morning, the very first thing I do is check my phone. First I turn off my alarm; I've made it a habit to wake up before it goes off. Then I scroll through a handful of websites. Yahoo Finance first, because the market is crazy. Hacker News, where I skim titles to see if AWS suffered an outage while I was sleeping. And then I put my phone down before I'm tempted to check my Twitter feed. I've managed to stay away from TikTok, but the TikTok model is finding its way to every user's phone whether we like it or not.

On TikTok, you don't surf the web. You don't think of an idea and then research it. Instead, based entirely on your activity in the app, their proprietary algorithm decides what content will best suit you. For users, this is the best thing since sliced bread. For the tech world, this is the best way to influence those users. Now the TikTok model is no longer reserved for TikTok but has spread to all social media. What worries me is that it's also going to infect the entire World Wide Web.

Imagine this for a second: You open your web browser. Instead of a search bar or a list of bookmarks, you're greeted by an endless, vertically scrolling stream of content. Short videos, news snippets, product listings, and interactive demos. You don't type anything; you just swipe what you don't like and tap what you do. The algorithm learns, and soon it feels like the web is reading your mind. You're served exactly what you didn't know you wanted. Everything is effortless, because the content you see feels like something you would have searched for yourself. With AI integrations like Google's Gemini being baked directly into the browser, this TikTok-ification of the entire web is the logical next step. We're shifting from a model of surfing the web to one where the web is served to us.

This looks like peak convenience. If these algorithms can figure out what you want to consume without you having to search for it, what's the big deal? The web is full of noise, and any tool that can cut through the clutter and help surface the gems should be a powerful discovery tool. But the reality doesn't entirely work this way. There's something that always gets in the way: incentives. More accurately, company incentives.

When I log into my Yahoo Mail (yes, I still have one), the first bolded email on top isn't actually an email. It's an ad disguised as an email. When I open the Chrome browser, I'm presented with "sponsored content" I might be interested in. Note that Google Discover is supposed to be the ultimate tool for discovering content, but their incentives are clear: they're showing you sponsored content first. The model for content that's directly served to you is designed to get you addicted. It isn't designed for education or fulfillment; it's optimized for engagement. The goal is to provide small, constant dopamine hits, keeping you in a state of perpetual consumption without ever feeling finished. It's browsing as a slot machine, not a library.

What happens when we all consume a unique, algorithmically generated web? We lose our shared cultural space. After the last episode of Breaking Bad aired, I texted my coworkers: "Speechless." The reply was, "Best TV show in history." We didn't need more context to understand what we were all talking about. With personalized content, this shared culture is vanishing.

The core problem isn't algorithmic curation itself, but who it serves.
The algorithms are designed to benefit the company that made them, not the user. And as the laws of "enshittification" dictate, any platform that locks in its users will eventually turn the screws, making the algorithm worse for you to better serve its advertisers or bottom line.

Algorithmic solutions often fix problems that shouldn't exist in the first place. Think about your email. The idea of "algorithmically sorted email" only makes sense if your inbox is flooded with spam, newsletters you never wanted, and automated notifications. You need a powerful AI to find the real human messages buried in the noise. But here's the trick: your email shouldn't be flooded with that junk to begin with. If we had better norms, stricter regulations, and more respectful systems, your inbox would contain only meaningful correspondence. In that world, you wouldn't want an algorithm deciding what's important. You'd just read your emails.

The same is true for the web. The "noise" the TikTok model promises to solve, the SEO spam, the clickbait, the low-value content, is largely a product of an ad-driven attention economy. Instead of fixing that root problem, the algorithmic model just builds a new, even more captivating layer on top of it. It doesn't clean up the web; it just gives you a more personalized and addictive filter bubble to live inside.

The TikTok model of the web is convenient, addictive, and increasingly inevitable. But it's not the only future. It's the path of least resistance for platforms seeking growth and engagement at all costs. There is an alternative, though. No, you don't have to demand more from these platforms. You don't have to vote for a politician. You don't even have to do much. The very first thing to do is remember your own agency. You are in control of the web you see and use. Change the default settings on your device. Delete the apps that are taking advantage of you. Use an ad blocker. If you find creators making things you like, look for ways to support them directly. Be the primary curator of your digital life.

It requires some effort, of course. But it's worth it, because the alternative is letting someone else decide what you see, what you think about, and how you spend your time. The web can still be a tool for discovery and connection rather than a slot machine optimized for your attention. You just have to choose to make it that way.

iDiallo 2 weeks ago

No Satisfaction Guaranteed

I use Apple products mostly for work. When it comes to iPhone vs Android, I need access to my file system, so I choose Android any day. But the last thing I'd say is that Apple products suck. Whether it's the UI, screen quality, laptops, or tablets, Apple has done an amazing job. Yet every time there's a new iteration, someone will write about how much Apple sucks right now. The same happens with new Android phones too. There's no way of satisfying all users. No matter what you do, someone will complain that your product is now worse than ever.

This isn't entirely the fault of users. There's a system in place that conditions us to have high expectations as we critique. We're taught that a company's ultimate goal is to create a perfect, successful product, which I believe Apple has succeeded in doing. But what happens after success? I'd argue that this moment of peak success is also the beginning of a crisis.

Imagine you have an idea. The best idea. You turn it into a product. A masterpiece. It's durable, high-quality, and cleverly solves a problem. You launch it, and everyone buys it. The world is happy. You've won. So, what now? Logically, you should be able to enjoy the spoils. You've made your money and delivered genuine value. Your work here is done. But the modern economy doesn't work that way. A company isn't a one-hit wonder; it's an entity that must survive, and survival is defined by one thing: infinite growth. Once you've sold your perfect product to everyone who wants it, you hit a wall. Your success becomes your ceiling. To survive, you must now convince your satisfied customers to buy again.

This is where the machine starts to break us. You can't just sell the same thing. People already have it! So you're forced to create something new. You leverage the trust from your first success and slap the same branding on a follow-up. But this new product must be different. It must be "better," or at least appear that way. It needs new features, a new design, a new reason for people to open their wallets. If you fail, those who are watching, your competitors, learn from your mistake: "Never give your best on the first try."

Even if you have the knowledge and ability to create the "perfect," lifelong product, you're discouraged from doing so. Instead, you release a dumbed-down version. You hold back the best features for "Version 2.0" or the "Pro" model. You design for planned obsolescence, either in function or in fashion. You're not building a solution anymore; you're building a stepping stone to the next product. Suddenly, you're making Marvel movies. In the end, the good guys defeat the enemy. They save the city. But right before the credits roll, a new problem is introduced. You can't leave satisfied. You must watch the next installment.

The best smartphones slow down, important software becomes unsupported, last year's model suddenly looks outdated. This isn't always an accident; it's often a feature of the system. The goal is to keep us on a treadmill. A perpetual motion machine of consumption. Our satisfaction is a problem for business. A truly satisfied customer is a dead end. A customer who is almost satisfied but sees a brighter, shinier solution on the horizon is the engine of growth. We've built an economic system where success is no longer measured by problems solved, but by the ability to manufacture new desires. Companies aren't rewarded for creating lasting value. They're rewarded for creating lasting dependency.
The better a company gets at solving our problems, the more desperately it must invent new ones. This cycle doesn't just affect products; it shapes how we think about satisfaction itself. We've internalized the idea that contentment is stagnation, that last year's perfectly functional device is somehow insufficient. We've learned to mistake novelty for progress and updates for improvement. Instead of always waiting for the next version that doesn't suck, we should start saying: "This is enough. This works. You don't need to buy again." In a system that forgot how to breathe, maybe the next big innovation will be learning how to slow down.

iDiallo 2 weeks ago

Why We Don't Have Flying Cars

Imagine this: You walk up to your driveway where your car is parked. You reach for the handle, which automatically senses your presence, confirms your identity, and opens to welcome you in. You sit down, the controls appear in front of you, and your seatbelt secures itself around your waist. Instead of driving forward onto the pavement, you take off. You soar into the skies like an eagle and fly to your destination. This is what technology promises: freedom, power, and something undeniably cool.

The part we fail to imagine is what happens when your engine sputters before takeoff. What happens when you reach the sky and there are thousands of other vehicles in the air, all trying to stay in their artificial lanes? How do we deal with traffic? Which directions are we safely allowed to go? And how high?

We have flying cars today. They're called helicopters. In understanding the helicopter, we understand why our dream remains a dream. There's nothing romantic about helicopters. They're deafeningly loud and incredibly expensive to buy and maintain. They require highly skilled pilots, are dangerously vulnerable to engine failure, and present a logistical nightmare of three-dimensional traffic control. I can't even picture what a million of them buzzing between skyscrapers would look like. Chaos, noise pollution, and a new form of gridlock in the sky. Even with smaller drones, as the technology evolves and becomes familiar, cities are creating regulations around them, sucking all the fun and freedom out in favor of safety and security. This leads me to believe that the whole idea of flying cars and drones is more about freedom than practicality. And unregulated freedom is impossible.

This isn't limited to flying cars. The initial, pure idea is always intoxicating. But the moment we build a prototype, we're forced to confront the messy reality.

In 1993, a Japanese man brought a video phone to demo for my father as a new technology to adopt in our embassy. I was only a child, but I remember the screen lighting up with a video feed of the man sitting right next to my father. I could only imagine the possibilities. It was something I thought only existed in sci-fi movies. If this was possible, teleportation couldn't be too far away. In my imagined future, we'd sit at a table with life-like projections of colleagues from across the globe, feeling as if we were in the same room. It would be the end of business travel, a world without borders.

But now that the technology is ubiquitous, the term "Zoom fatigue" is trending. It's ironic when I get on a call and see that 95% of my colleagues have their cameras turned off. In movies, communication was spontaneous. You press a button, your colleague appears as a hologram, and you converse. In reality, there's a calendar invite, a link, and the awkward "you're on mute!" dance. It's a scheduled performance, not an organic interaction. And then there are people who have perfect lighting, high-speed internet, and a quiet home office. And those who don't. Video calls have made us realize the importance of physical space and connection. Facebook's metaverse didn't resolve this.

Imagine having a device that holds all of human knowledge at the click of a button. For generations, this was the ultimate dream of librarians and educators. It would create a society of enlightened, informed citizens. And we got the smartphone. Despite being a marvel of technology, the library of the world at your fingertips, it hasn't ushered us into utopia.
The attention economy it brought along has turned it into a slot machine designed to hijack our dopamine cycles. You may have Wikipedia open in one tab, but right next to it is TikTok. The medium has reshaped the message from "seek knowledge" to "consume content." While you have access to information, misinformation is just as rampant. The constant stimulation kills moments of quiet reflection, which are often the birthplace of creativity and deep thought.

In The Machine Stops by E.M. Forster, every desire, whether it's food, a device, or toilet paper, can be delivered by pulling a lever on the machine. The machine delivers everything. With Amazon, we've created a pretty similar scenario. I ordered replacement wheels for my trash bin one evening, expecting them to arrive after a couple of days. The very next morning, they were waiting at my doorstep. Amazing. But this isn't magical. Behind it are real human workers who labor without benefits, job security, or predictable income. They have an algorithmic boss that can be more demanding than a human one. That promise of instant delivery has created a shadow workforce of people dealing with traffic, poor weather, and difficult customers, all while racing against a timer. The convenience for the user is built on the stress of the driver. The dream of a meal from anywhere didn't account for the reality of our cities now being clogged with double-parked delivery scooters and a constant stream of gig workers.

Every technological dream follows the same pattern. The initial vision is pure, focusing only on the benefit. The freedom, the convenience, the power. But reality is always a compromise, a negotiation with physics, economics, and most importantly, human psychology and society. We wanted flying cars. We understood the problems. And we got helicopters with a mountain of regulations instead. That's probably for the best.

The lesson isn't to stop dreaming or to stop innovating. It's to dream with our eyes open. When we imagine the future, we need to ask not just "what will this enable?" but also "what will this cost?" Not in dollars, but in human terms. In stress, inequality, unintended consequences, and the things we'll lose along the way. We're great at imagining benefits and terrible at predicting costs. And until we get better at the second part, every flying car we build will remain grounded by the weight of what we failed to consider.

iDiallo 3 weeks ago

5 Years Away

AGI has been "5 years away" for the past decade. The Tesla Roadster? Five years away since 2014. Tesla's Level 5 self-driving? Promised by 2017, then quietly pushed into the perpetual five-year window. If you've been paying attention, you've probably noticed this pattern extends far beyond Silicon Valley. Why do we keep landing on this specific timeframe?

Psychologically, five years is close enough to feel relevant. We can easily imagine ourselves five years from now, still affected by these innovations. Yet it's distant enough to seem plausible for ambitious goals. More importantly, it's far enough away that by the time five years passes, people have often moved on or forgotten the original prediction.

This isn't limited to consumer electronics. Medical breakthroughs regularly make headlines with the same promise: a revolutionary cancer treatment, five years away. Carbon nanotubes will transform renewable energy, five years. Solid-state batteries will solve range anxiety... five years.

Andrew Ng, in his course "AI for Everyone," offered the most honest perspective on AGI I've ever read. He suggested it might be decades, hundreds, or even thousands of years away. Why? Before we can meaningfully predict a timeline, we need several fundamental technological breakthroughs that we haven't achieved yet. Without those foundational advances, there's nothing to build on top of.

All the "5 years away" predictions assume linear progress. But the reality is that transformative technologies require non-linear leaps. They require discoveries we can't yet foresee, solving problems we don't yet fully understand. If we want our predictions to mean something, we need a clearer framework. At a minimum, a technology can be labeled "5 years away" only if we have a demonstrated proof of concept, even if it is at a small scale. We need to have identified the major engineering challenges remaining. There should be reasonable pathways to overcome those challenges with existing knowledge. And finally, there needs to be a semblance of economic viability on the horizon. Anything less than this is speculation dressed up as prediction. If I say "we've built a prototype that works in the lab, now we need to scale manufacturing," this may in fact be five years away. But if I say "we need multiple fundamental breakthroughs in physics before this is even possible," I'm on a science fiction timeline.

Inflated predictions aren't harmless. Government policy may be planned around them, they can distort investment decisions, and worse, they give the public false expectations. When we promise self-driving cars by 2017 and fail to deliver, it erodes trust not just in that company, but in the entire field. When every medical breakthrough is "5 years away," people become cynical about real advances.

The "5 years away" framing can also make us complacent. If fusion power is always just around the corner, why invest heavily in less glamorous but available renewable technologies today? If AGI will solve everything soon, why worry about the limitations and harms of current AI systems? It's not the most pressing problem in the world, but wouldn't it be better to have more realistic predictions? When reading news articles about any technology, try to distinguish between engineering challenges and scientific unknowns. A realistic prediction will be explicit, saying things like "This will be ready in 5 years, assuming we solve X, Y, and Z."
The public needs to learn to celebrate incremental progress as well. When all you read about is moonshots, you dismiss the important work being done to improve our everyday lives. And of course, the public should also learn to ignore engagement bait. Real innovation is hard enough without pretending we can see further into the future than we actually can. Five years is a number. What matters is the foundation beneath it. Without that foundation, we're not counting down to anything. We're just repeating a comfortable fiction that lets us feel like the future is closer than it really is. The most honest answer to "When will this technology arrive?" is often the least satisfying: "We don't know yet, but here's what needs to happen first." That answer respects both the complexity of innovation and the intelligence of the audience. Maybe it's time we used it more often.

iDiallo 3 weeks ago

Free Graphics Cards for Everyone

I was too young to lose money in the Dot-Com crash of 2000. I didn't own any tech stock. In fact, I didn't even know there was a bubble to pop. My interaction with the "Internet" was a dial-up modem sputtering to life and the simple, joyful ritual of visiting a handful of websites I had discovered. When the NASDAQ plummeted and titans of vaporware like Pets.com vanished, nothing in my world seemed to change. The handful of websites I knew still worked. The internet still existed. Yet we all became beneficiaries of that crash. We inherited fast and reliable internet infrastructure that would enable the likes of YouTube and Netflix. This didn't come out of nowhere. It was the enormous, tangible gift left behind by an era of spectacular overvaluation.

The Dot-Com era was built on a fantasy: growth at any cost. If you had a great domain name and a slide deck, that was all you needed to secure backing from a venture capitalist. Profit was someone else's problem, somewhere down the line. To support this vision of a trillion-dollar digital economy, billions were poured into physical infrastructure. Telecom companies laid thousands of miles of fiber optic cables, convinced they would soon be transmitting massive amounts of data to every home and business. Warehouses filled with servers were erected to handle the unimaginable traffic that would surely materialize any day now.

But the demand didn't materialize on their timetable. Most of the companies that financed this build-out collapsed, having over-delivered on infrastructure and under-delivered on sustainable business models. As a result, the market was flooded with world-class infrastructure operating at a fraction of capacity. Companies like Global Crossing and WorldCom went bankrupt, leaving behind vast networks of "dark fibre": cables that had been laid but never lit up with traffic. This excess capacity eventually drove down the cost of bandwidth dramatically, making fast, "always-on" broadband affordable and accessible for ordinary consumers. We, the users, inherited a massive, over-engineered highway system that failed companies had paid billions to build.

By the mid-2000s, this infrastructure became the foundation for an entirely new generation of internet services. YouTube launched in 2005, relying on cheap bandwidth to stream videos to millions. Netflix pivoted from DVDs to streaming in 2007, a business model that would have been economically impossible without the crash-induced glut of fiber capacity. The bubble's collapse didn't destroy the internet. It subsidized its golden age.

In the early 1980s, the video game market was flooded with poorly made games and too many competing consoles. Store shelves were packed with rushed, derivative titles that diluted the market. The industry collapsed in 1983, with revenues plummeting by 97% and threatening to kill gaming as a commercial medium entirely. Consumer trust wouldn't recover until 1985 with the release of the Nintendo Entertainment System (NES). But Nintendo didn't just relaunch the industry. It fundamentally restructured it. The company introduced a "Seal of Quality" that enforced higher standards for games and established a new, more disciplined relationship between console makers and third-party developers. Nintendo limited how many games publishers could release per year, ensuring that only more polished titles reached consumers. The crash didn't kill gaming; it served as a brutal, necessary editor that cleared away the junk and forced the industry to evolve.
More importantly, Nintendo was at least partially responsible for bringing advanced computing into the living room as a mainstream product. As a result, it created the foundational experience that would shape how millions of people interacted with technology. The crash made room for excellence. Nvidia is investing $100 billion in OpenAI. OpenAI is committing $300 billion to Oracle. Oracle makes a $40 billion deal with Nvidia. Meanwhile, OpenAI is projected to generate just $13 billion in revenue in 2025. As kids say today, the math isn't mathing. I believe we're in a bubble, though I'm not sure which stage we're at. Is this late 1999, with the peak still ahead? Or early 2000, with the foundation already cracking beneath us? In the meantime, we're building. Specifically, we're building enormous quantities of High-Performance Computing (HPC) hardware, the GPUs needed to train and run large language models. The most sought-after is Nvidia's H100 GPU, a piece of silicon that costs tens of thousands of dollars and remains in short supply. Data centers are being constructed at breakneck speed, each one packed with thousands of these chips. If the AI bubble pops, and hundreds of generative AI startups fail to find viable paths to profit before burning through their venture capital, we will once again be left with an enormous, tangible gift: graphics cards. Lots and lots of them. Not free, perhaps, but certainly cheap. Millions of high-end GPUs currently locked away in underutilized data centers could flood the secondary market, creating a massive surplus of computing power available at a fraction of their original cost. What would you do with a couple of H100s sitting on your desk? (First, get a bigger desk.) During the Dot-Com crash, I was too young to understand what was happening. But if the AI crash comes, and cheap hardware floods the market, it will enable a new generation of innovators to experiment without the gatekeeping of cloud providers and API rate limits. I've written before about how AI is currently just a proxy for subscription companies: you rent access to models through monthly fees, with your data passing through corporate servers. But when you and I can actually get the hardware in our hands, we can truly start to understand and reshape the power of AI on our own terms. Imagine running the most powerful models entirely on local, specialized hardware, with complete privacy and unlimited customization. No data leaving your machine. No usage caps. No content filters imposed by nervous legal teams. Imagine scientific research being conducted by individual researchers or small university labs, no longer priced out by cloud computing costs. A graduate student studying protein folding or climate modeling could have the same computational resources as a major corporation, all purchased secondhand for a fraction of the original price. Imagine independent developers building entirely new applications we haven't conceived yet. Applications that would be economically impossible at current GPU prices but become viable when hardware costs collapse. If the wheel of history repeats itself, we can expect decades of hardware advancement to land directly in the hands of the public. The infrastructure built for a bubble becomes the foundation for what comes next. I, for one, am waiting for my free GPU. Or at least my heavily discounted one. The crash won't be the end of AI. It will be the moment AI becomes truly democratized.
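To make the local-hardware scenario concrete, here is a minimal sketch of what running an open-weights model on your own GPU already looks like with the Hugging Face transformers library. This is an illustration, not a recommendation: the model path is a placeholder for whatever checkpoint you have downloaded, and the accelerate package is assumed for the automatic device placement.

```python
# A minimal local-inference sketch: no API keys, no usage caps,
# nothing leaves the machine. Assumes the transformers, torch, and
# accelerate packages are installed and that an open-weights model
# has already been downloaded to MODEL_PATH (placeholder).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "/models/my-open-weights-model"  # hypothetical local checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    device_map="auto",   # spread layers across whatever GPUs are present
    torch_dtype="auto",  # use the checkpoint's native precision
)

prompt = "Explain why surplus GPUs after a bubble could matter for researchers."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Nothing here requires an H100 specifically; the point is simply that once the weights and the card are yours, there is no meter running.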

0 views
iDiallo 3 weeks ago

Beyond Enshittification: Hostile

The computer is not just working less well. Instead, it is actively trying to undermine you. And there is nothing you can do about it. When Windows wants to update, you don't get to say "no." You get "Update now" or "Remind me later." When Twitter shows you notifications from people you don't follow, you can't dismiss them, only "see less often." When LinkedIn changes your email preferences, you'll reset them, only to find they've reverted a few months later. These aren't bugs. They aren't oversights. They're deliberate design choices that remove your ability to say no. It's not dark patterns anymore. It's not even enshittification. It's pure hostility . As developers, there are two types of users we find extremely annoying. The first is the user who refuses to get on the latest version of the app. They're not taking advantage of the latest bug fixes we've developed. We're forced to maintain the old API because this user doesn't want to update. They're stubborn, they're stuck in their ways, and they're holding everyone back. The second type of user is the one who's clueless about updates. It's not that they don't want to update, they don't even know there is such a thing as an update. They can be annoying because they'll eventually start complaining that the app doesn't work. But they'll do everything short of actually updating it. Well, I fall into the first category. I understand it's annoying, but I also know that developers will often change the app in ways that don't suit me. I download an app when it's brand new and has no ads, when the developer is still passionate about the project, pouring their heart and soul into it, making sure the user experience is a priority. That's the version I like. Because shortly after, as the metrics settle in and they want to monetize, the focus switches from being user-centric to business-centric. In Cory Doctorow's words, this is where "enshittification" starts. Now, I'm not against a developer trying to make a buck, or millions for that matter. But I am against degrading the user experience to maximize profit. Companies have figured out how to eliminate the first type of user entirely. They've weaponized updates to force compliance. Apps that won't launch without updating. Operating systems that update despite your settings. Games that require online connection to play single-player campaigns. Software that stops working if you don't agree to new terms of service. The philosophy of "if it ain't broke, don't fix it" is dead. They killed it. And they can get away with it because of the network effect. We are trapped in it. You use Windows because your workplace uses Windows. You use Excel because your colleagues use Excel. You use Slack because your team uses Slack. You use WhatsApp because your family uses WhatsApp. When Windows suddenly requires you to have a Microsoft account (an online account) just to log into your local computer, what are your options? Switch to Apple? After twenty years of Windows shortcuts, file systems, and muscle memory? Switch to Linux? When you need to share files with colleagues who use proprietary Microsoft formats? You can't. And they know you can't. They're not competing on quality anymore. They're leveraging your professional dependency, your colleagues' software choices, your decade of learned workflows. You're not a customer who might leave if the product gets worse. You're a captive audience. This is why the hostility is possible. This is why they can get away with it. 
Enshittification, as Doctorow describes it, is a process of degradation. First, platforms are good to users to build market share. Then they abuse users to favor business customers. Finally, they abuse those business customers to claw back all the value for themselves. But what we're seeing now is different. This isn't neglect or the natural decay of a profit-maximizing business. This is the deliberate, systematic removal of user agency. You are presented with the illusion of choice. You can update now or update later, but you cannot choose to never update. You can see less often, but you cannot choose to never see it. You can accept all cookies instantly, or you can navigate through a deliberately complex maze of toggles and submenus to reject them one by one. They borrow ransomware patterns. Notifications you can't dismiss, only snooze. Warnings that your system is "at risk" if you don't update immediately. Except once you update, the computer restarts and you are presented with new terms you have to agree to before you can access your computer. Every Windows update that turns Bing back on and forces all links to open with Edge. Every app update that re-enables notifications you turned off. Every platform that opts you back into marketing emails and makes you opt out again. Updates are now scary because they can take you from a version that serves your interests to a version that serves the company's. The update that adds telemetry. The update that removes features you relied on. The update that makes the app slower, more bloated, more aggressive about upselling you. These aren't accidents. They're not the result of developers who don't care or designers who don't know better. They're the result of product meetings where someone said "users are rejecting this, how do we force them to accept it?" and someone else said "remove the 'no' button." As a developer, and someone who has been using computers since I was 5 years old, I don't really care about the operating system. I can use them interchangeably. In fact, I don't care about Twitter, or any of these platforms. When I log into my computer, it's to write a document. When I use my mobile device, it's to talk to my friends or family. When I access my dev machine, it's to do my job. The operating systems or the platforms are secondary to the task at hand. The software is supposed to be the tool, not the obstacle. But now the tool demands tribute. It demands your data, your attention, your compliance with whatever new terms it has decided to impose. You can't switch because switching costs everything. Your time, your muscle memory, your compatibility with everyone else who's also trapped. The network effect isn't just about other people using the same platform. It's about your own accumulated investment in learning, customization, and integration. So when they add hostile features, when they remove your ability to say no, when they force you to have an online account for offline work, when they interrupt you with notifications you can't dismiss, when they change interfaces you've spent years mastering, you can only accept it. Not because you want to. Not because it's better. Because you have no choice. And that's not enshittification. That's hostility.

0 views
iDiallo 3 weeks ago

Stop Trying to Promote My Best Engineers

There has always been a disconnect between the hiring process and finding the best engineers. But when we somehow find them, the career ladder ensures that they don't remain in that position of strength. An incompetent company might create the conditions for engineers to leave for better jobs. A generous company will apply the Peter Principle and promote engineers to their level of incompetence. Either way, the best engineers never remain in that position of strength. How do you recognize a great engineer? Is it someone who aces all the leetcode during the interview process? Is it someone who is a great communicator? Or is it someone who went to an elite university? The processes we currently have in place can only determine so much. Candidates have limited time to audition for the role they're applying for. Over the span of a few interviews, they're supposed to convey the experience from all their past work, show that they know how to do the job, and also talk about their greatest weakness. It's a performance that some people know how to game. AI-powered hiring tools haven't changed this problem. They don't magically give you better candidates . You're still sifting through the same pool, just with fancier filters. The disconnect between interview performance and actual job performance remains. A few years back, I interviewed someone I'll call the greatest communicator I've ever seen. It was for a web engineer position on another team. He seemed to understand the front end, the backend, and the jargon of the job. But what impressed me most was how he broke down each problem I posed into small parts and thoroughly resolved each one. It was as if he was creating Jira tickets in real time and writing documentation along the way before the task was even completed. I gave the thumbs up and he was hired. A couple of months later, I remembered him. I searched for his name in the directory and learned that he was let go. "Why?" I asked around. The answer was "he was pretty bad, couldn't complete a single task." Yet he was able to pass the job interview. The inverse also happens. You take a chance on someone who seemed merely adequate during interviews, and somehow they turn into one of your best engineers. I've often found myself in teams where I have zero doubts about the ability of my teammates. But then, as the end of the year approaches, the inevitable discussion turns to promotion. It's actually much easier to identify a great engineer on the job than in an interview. You see their work, their growth, their impact. And when you finally have that clarity, when you know without a doubt that this person excels at what they do, the system insists you move them away from it. When you are good at your job, the logical step for a manager is to reward you with a promotion, moving you away from the job you are actually good at. That's the Peter Principle in action. Managers believe their only tool for compensation is moving you up and down the ladder. A great developer gets promoted to senior developer, then to tech lead, then to manager. At each step, we strip away more of what made them valuable in the first place. The underlying assumption is that a great engineer will nurture a team into great engineers. But teaching and applying a skill are two distinct occupations. You may be great at one, but terrible at the other. My instinct is to help great engineers continue to grow in their expertise, not switch them to a role where they're no longer competent. 
It's important not to throw away all their knowledge and put them in a position of authority where they can't exercise their skill. Yet many employees themselves don't know what the next step up should be. They see "senior" or "lead" or "manager" as the only path forward, not because they want those responsibilities, but because that's the only way to get recognition and compensation. What if we stopped thinking about career advancement as climbing a ladder? What if the goal wasn't always upward, but deeper? The traditional career ladder assumes that everyone wants to eventually stop doing technical work. It assumes that the best reward for mastering a craft is to stop practicing it. But some of the best engineers I've worked with have no interest in management. They want to write code, solve hard problems, and mentor others without taking on hiring, performance reviews, and budget planning. We need to normalize horizontal growth. This means creating paths where engineers can gain expertise, take on more complex challenges, and yes, earn more money, without leaving their position of strength. It means recognizing that a senior engineer who has been writing excellent code for ten years is not "stuck" or "lacking ambition." They're mastering their craft. It also means changing how we structure compensation. If the only way to give someone a significant raise is to promote them, then we've built a system that punishes expertise. Companies should be able to pay top-tier compensation for top-tier individual contributors, not just managers. The irony is that we struggle to identify great engineers in interviews, yet when we finally find them on the job, we immediately try to change what they do. We should be asking ourselves, if this person is exceptional at their current role, why is our first instinct to move them? Maybe the answer isn't to promote them out of their position of strength, but to let them get even better at what they already do exceptionally well. After all, if interviews can't reliably identify great engineers, shouldn't we do everything possible to keep them exactly where they are when we finally find them?

0 views
iDiallo 4 weeks ago

Designing Behavior with Music

A few years back, I had a ritual. I'd walk to the nearest Starbucks, get a coffee, and bury myself in work. I came so often that I knew all the baristas and their schedules. I also started noticing the music. There were songs I loved but never managed to catch the name of, always playing at the most inconvenient times for me to Shazam them. It felt random, but I began to wonder: Was this playlist really on shuffle? Or was there a method to the music? I never got a definitive answer from the baristas, but I started to observe a pattern. During the morning rush, around 8:30 AM when I'd desperately need to take a call, the music was always higher-tempo and noticeably louder. The kind of volume that made phone conversations nearly impossible. By mid-day, the vibe shifted to something more relaxed, almost lofi. The perfect backdrop for a deep, focused coding session when the cafe had thinned out and I could actually hear myself think. Then, after 5 PM, the "social hour" began. The music became familiar pop, at a volume that allowed for easy conversation, making the buzz of surrounding tables feel part of the atmosphere rather than a distraction. The songs changed daily, but the strategy was consistent. The music was subtly, or not so subtly, encouraging different behaviors at different times of day. It wasn't just background noise; it was a tool. And as it turns out, my coffee-fueled hypothesis was correct. This isn't just a Starbucks quirk; it's a science-backed strategy used across the hospitality industry. The music isn't random. It's designed to influence you. Research shows that we can broadly group cafe patrons into three archetypes, each responding differently to the sonic environment. Let's break them down. This is you and me, with a laptop, hoping to grind through a few hours of work. Our goal is focus, and the cafe's goal is often to prevent us from camping out all day on a single coffee. What the Research Says: A recent field experiment confirmed that fast-tempo music leads to patrons leaving more quickly. Those exposed to fast-tempo tracks spent significantly less time in the establishment than those who heard slow-tempo music or no music at all. For the solo worker, loud or complex music creates a higher "cognitive load," making sustained concentration difficult. That upbeat, intrusive morning music isn't an accident; it's a gentle nudge to keep the line moving. When you're trying to write code or draft an email and the music suddenly shifts to something with a driving beat and prominent vocals, your brain has to work harder to filter it out. Every decision, from what variable to name to which sentence structure to use, becomes just a little more taxing. I'm trying to write a function and a song is stuck in my head. "I just wanna use your love tonight!" After an hour or two of this cognitive friction, packing up and heading somewhere quieter starts to feel like a relief rather than an inconvenience. This pair is there for conversation. You meet up with a friend you haven't seen in some time. You want to catch up, and the music acts as a double-edged sword. What the Research Says: The key here is volume. Very loud music can shorten a visit because it makes conversing difficult. You have to lean in, raise your voice, and constantly ask "What?" Research on acoustic comfort in cafes highlights another side: music at a moderate level acts as a "sonic privacy blanket." 
It masks their conversation from neighboring tables better than silence, making the pair feel more comfortable and less self-conscious. I've experienced this myself. When catching up with a friend over coffee, there's an awkward awareness in a silent cafe that everyone can hear your conversation. Are you talking too loud about that work drama? Can the person at the next table hear you discussing your dating life? But add a layer of moderate background music, and suddenly you feel like you're in your own bubble. You can speak freely without constantly monitoring your volume or censoring yourself. The relaxed, mid-day tempo isn't just for solo workers. It's also giving pairs the acoustic privacy to linger over a second latte, perhaps order a pastry, and feel comfortable enough to stay for another thirty minutes. The group of three or more is there for the vibe. Their primary goal is to connect with each other, and the music is part of the experience. What the Research Says: Studies on background music and consumer behavior show that for social groups, louder, more upbeat music increases physiological arousal, which translates into a sense of excitement and fun. This positive state is directly linked to impulse purchases, and a longer stay. "Let's get another round!" The music effectively masks the group's own noise, allowing them to be loud without feeling disruptive. The familiar pop tunes of the evening are an invitation to relax, stay, and spend. That energy translates into staying longer, ordering another drink, maybe splitting some appetizers. The music gives permission for the group to match its volume and enthusiasm. If the cafe is already vibrating with sound, your group's laughter doesn't feel excessive, it feels appropriate. The music is not random, it's calculated. I have a private office in a coworking space. What I find interesting is that whenever I go to the common area, where most people work, there's always music blasting. Not just playing. Blasting . You couldn't possibly get on a meeting call in the common area, even though this is basically a place of work. For that, there are private rooms that you can rent by the minute. Let that sink in for a moment. In a place of work, it's hard to justify music playing in the background loud enough to disrupt actual work. Unless it serves a very specific purpose: getting you to rent a private room. The economics makes sense. I did a quick count on my floor. The common area has thirty desks but only eight private rooms. If everyone could take calls at their desks, those private rooms would sit empty. But crank up the music to 75 decibels, throw in some upbeat electronic tracks with prominent basslines, and suddenly those private rooms are booked solid at $5 per 15 minutes. That's $20 per hour, per room, eight rooms, potentially running 10 hours a day. The music isn't there to help people focus. It's a $1,600 daily revenue stream disguised as ambiance. And the best, or worse, part is that nobody complains. Because nobody wants to be the person who admits they need silence to think. We've all internalized the idea that professionals should be able to work anywhere, under any conditions. So we grimace, throw on noise-canceling headphones, and when we inevitably need to take a Zoom call, we sheepishly book a room and swipe our credit card. Until now, this process has been relatively manual. A manager chooses a playlist or subscribes to a service (like Spotify's "Coffee House" or "Lofi Beats") and hopes it has the desired effect. 
It's a best guess based on time of day and general principles. But what if a cafe could move from curating playlists to engineering soundscapes in real time? This is where generative AI will play a part. Imagine a system where simple sensors count the number of customers in the establishment and feed real-time information to an AI, while point-of-sale data shows the average ticket per customer and table turnover rates. The AI receives a constant stream: "It's 2:30 PM. The cafe is 40% full, primarily with solo workers on laptops. Table turnover is slowing down; average stay time is now 97 minutes, up from the target of 75 minutes." An AI composer, trained on psychoacoustic principles and the cafe's own historical data, generates a unique, endless piece of music. It doesn't select from a library. It is created in real time. The manager has set a goal: "Gently increase turnover without driving people away." The AI responds by subtly shifting the generated music to a slightly faster BPM, maybe from 98 to 112 beats per minute. It introduces more repetitive, less engrossing melodies. Nothing jarring, nothing that would make someone consciously think "this music is annoying," but enough to make that coding session feel just a little more effortful. The feedback loop measures the result. Did the solo workers start packing up 15 minutes sooner on average? Did they look annoyed when they left, or did they seem natural? Did anyone complain to staff? The AI learns and refines its model for next time, adjusting its parameters. Maybe 112 BPM was too aggressive; next time it tries 106 BPM with slightly less complex instrumentation. This isn't science fiction. The technology exists today. We already have generative AI that can create music in any style (MusicLM, MusicGen), computer vision that can anonymously track occupancy and behavior, point-of-sale systems that track every metric in real time, and machine learning systems that can optimize for complex, multi-variable outcomes. Any day now, you'll see a startup providing this service, where the ambiance of a space is not just curated but designed. A cafe could have a "High Turnover Morning" mode, a "Linger-Friendly Afternoon" mode, and a "High-Spend Social Evening" mode, with the AI seamlessly transitioning between them by generating the perfect, adaptive soundtrack. One thing I find frustrating about AI is that once we switch to these types of systems, you'll never know it's happening. The music would always feel appropriate, never obviously manipulative. It would be perfectly calibrated to nudge you in the desired direction while remaining just below the threshold of conscious awareness. A sonic environment optimized not for your experience, but for the business's metrics. When does ambiance become manipulation? There's a difference between playing pleasant background music and deploying an AI system that continuously analyzes your behavior and adjusts the environment to influence your decisions. One is hospitality; the other is something closer to behavioral engineering. And unlike targeted ads online, which we're at least somewhat aware of and can block, this kind of environmental manipulation is invisible, unavoidable, and operates on a subconscious level. You can't install an ad blocker for the physical world. I don't have answers here, only questions. Should businesses be required to disclose when they're using AI to manipulate ambiance? Is there a meaningful difference between a human selecting a playlist to achieve certain outcomes and an AI doing the same thing more effectively? Does it matter if the result is that you leave a cafe five minutes sooner than you otherwise would have? These are conversations we need to have as consumers, as business owners, as a society.
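The feedback loop described above is simple enough to sketch in a few lines of Python. Every name and number here (the sensor snapshot, the 75-minute target, the BPM bounds and step size) is a hypothetical illustration of the idea, not a real product or API.

```python
from dataclasses import dataclass

@dataclass
class CafeSnapshot:
    occupancy_pct: float      # how full the room is (0-100)
    avg_stay_minutes: float   # rolling average stay time
    solo_worker_share: float  # fraction of patrons working alone (0-1)

class SoundtrackController:
    """Toy controller: nudge tempo up when people linger past the target,
    ease it back down when turnover is on track. Purely illustrative."""

    def __init__(self, target_stay_minutes=75, min_bpm=90, max_bpm=120, step=4):
        self.target = target_stay_minutes
        self.min_bpm = min_bpm
        self.max_bpm = max_bpm
        self.step = step
        self.bpm = 98  # starting tempo

    def update(self, snapshot: CafeSnapshot) -> int:
        overstay = snapshot.avg_stay_minutes - self.target
        if overstay > 10 and snapshot.solo_worker_share > 0.5:
            self.bpm += self.step   # gently push turnover
        elif overstay < -10:
            self.bpm -= self.step   # let people linger (and order more)
        self.bpm = max(self.min_bpm, min(self.max_bpm, self.bpm))
        return self.bpm

# Example: the 2:30 PM scenario from the post
controller = SoundtrackController()
snapshot = CafeSnapshot(occupancy_pct=40, avg_stay_minutes=97, solo_worker_share=0.8)
print(controller.update(snapshot))  # -> 102: a subtle nudge, not a jolt
```

The unsettling part is not the loop itself, which is trivial, but that the reward signal is your stay time rather than your comfort.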
Now we know that the quiet background music in your local cafe has never been just music. It's a powerful, invisible architect of behavior. And it's about to get a whole lot smarter.

1 view
iDiallo 1 month ago

Keeping the Candle Lit

On my first day at a furniture store, my boss pointed to a warehouse full of boxes and said, "Unpack that one and build it." Simple enough. I found a large, heavy box, sliced it open, and laid out an array of wooden slats, metal screws, and chains. It was a love seat swing. Clearly a two or three person job. But I didn't know that. If my boss asked me to build it, I figured, it must be possible. So I just started. There is this feeling I often get when I have a brand new exciting idea. What follows goes something like this. You buy the domain. You sketch the idea. You draft the first chapter. The rush of beginning something new floods your system with dopamine and possibility. This initial excitement is a fantastic fuel. It gets you moving. The candle of motivation always burns fastest at the start. But then you get past the first easy steps, and the flame sputters. The wax pool of complexity begins to form. Doubt seeps in. You start to realize the true scale of what you've undertaken. Suddenly, exhilaration starts to feel like exhaustion. Most projects die right here, in the soggy middle. If you are not careful, you might even start a new project just to feel that rush again. The trick isn't to avoid this burnout. It's inevitable. The trick is learning how to reignite the flame, or better yet, to build a different kind of fire entirely. Standing in that warehouse, I had an advantage I didn't recognize at the time. I had no idea how hard this was supposed to be. If my boss had said, "This is a complex, multi-person assembly job that typically takes experienced workers two hours," I would have been paralyzed. I'd have looked for help. I'd have doubted my ability. I'd have found seventeen reasons to do something else first. This is why every monumental piece of software, every world-changing company, every impossible creative work was started by someone who didn't fully grasp the mountain they were about to climb. If Jeff Bezos had started by trying to solve for a global fleet of delivery vans, AWS cloud infrastructure, and same-day delivery logistics, he'd never have sold his first book. If the Wright Brothers had tried to understand all of aeronautical engineering before attempting flight, they'd still be on the ground. Amazon's magic trick was to start selling books before you try to build the empire. Start with the bicycle shop before you revolutionize transportation. Start tightening one bolt before you build the swing. The most dangerous thing you can do with a big project is understand it fully before you begin. An hour into the swing assembly, my initial energy was completely gone. I was alone with this massive, complicated puzzle. My hands hurt. The instruction diagram might as well have been written in ancient Egyptian. The 'let's impress the boss!' fuel had evaporated, replaced by the reality of a hundred confusing parts and no clear path forward. But I had to complete the job. So I stopped thinking of it as 'building a love seat swing' and started thinking of it as a series of small, repeatable tasks. Find two pieces that fit. Align the holes. Insert bolt A into slot B. Tighten with wrench C. Repeat. I wasn't building anything. I was just completing a pattern. Over and over. This wasn't a creative problem, the instructions were written clearly on the paper. So I turned it into repetitive motion. When a task feels like it requires 100% pure creativity all the time, you will burn out. Creative energy is finite. Decision-making is exhausting. But rhythm? Rhythm is renewable. 
I entered a flow state not through inspiration, but through repetition. The goal shifted from "finish the impossible thing" to "complete the next simple step." This is how books get written. Not through sustained creative genius, but through showing up to the same chair at the same time and adding 500 words to yesterday's 500 words. This is how companies get built. Not through visionary breakthroughs every day, but through making the same sales calls, fixing the same bugs, having the same customer conversations until patterns emerge and systems develop. The secret is to find the smallest unit of meaningful progress and make it so frictionless that it's easier to do it than to avoid it. I've written about my trick to learning anything new before. I might as well start calling it the "100 Times Rule." The rule is simple: You can't do the big impossible thing once. But you can do the tiny component action 100 times. You can't write 100 novels. But you can write 200 words, 100 days in a row. You can't launch 100 companies, but you can have 100 conversations with potential customers. You can't master piano, but you can practice scales for 100 sessions. You can't "get in shape," but you can do 100 workouts. The power isn't in the number 100 specifically, it's in the reframing of the problem into manageable bites. When you commit to doing something 100 times, three things happen: It becomes a small repeatable task. One presentation? Easy. One workout? Done. One paragraph? Please. You're not trying to build a business, you're just making today's call. You make room for being bad at it. Nobody expects call #3 to be perfect. You're learning. You're iterating. You have 97 more chances to figure it out. You build the rhythm that replaces motivation. By the time you hit repetition #30 or #40, you're no longer running on inspiration. You're running on momentum, on identity, on the simple fact that this is what you do now. The swing didn't get built because I had sustained enthusiasm for furniture assembly. It got built because I found a repeatable motion and executed it dozens of times until the thing was done. A few hours later, my boss walked by, did a double-take, and stared at the fully assembled love seat swing, gently swaying in the warehouse. "Wait. You built this? By yourself?" he asked. I just nodded, my hands raw, my shoulders aching, but my self-confidence boosted. What I didn't tell him was that I succeeded not because I was an expert, not because I had some special talent for furniture assembly, not because I stayed motivated the entire time. I succeeded because I started before I knew the challenge, and I kept going by finding a rhythm within it. The candle of motivation will burn out, that's guaranteed. But you're not building a swing. You're just tightening this one bolt. Then the next. And then the next. Before you know it, you'll look up and find the impossible thing complete, gently swaying before you. Built not by inspiration but by the simple, persistent act of showing up and doing the smallest next thing.

0 views
iDiallo 1 month ago

How to Get Started Programming: Build a Blog

The moment I learned how to program, I wanted to experiment with my new superpowers. Building a BMI calculator in the command line wouldn't cut it. I didn't want to read another book, or follow any other tutorial. What I wanted was to experience chaos. Controlled, beautiful, instructive chaos that comes from building something real and watching it spectacularly fail. That's why whenever someone asks me how they can practice their newfound skill, I suggest something that might sound old-fashioned in our framework-obsessed world. Build your own blog from scratch. Not with WordPress. Not with Next.js or Gatsby or whatever the cool kids are using this week. I mean actually build it. Write every messy, imperfect line of code. A blog is deceptively simple. On the surface, it's just text on a page. But underneath? It's a complete web application in miniature. It accepts input (your writing). It stores data (your posts). It processes logic (routing, formatting, displaying). It generates output (the pages people read). When I was in college, I found myself increasingly frustrated with the abstract nature of what we were learning. We'd implement different sorting algorithms, and I'd think: "Okay, but when does this actually matter?" We'd study data structures in isolation, divorced from any practical purpose. It all felt theoretical, like memorizing chess moves without ever playing a game. Building a blog changed that completely. Suddenly, a data structure wasn't just an abstract concept floating in a textbook. It was the actual list of blog posts I needed to sort by date. A database wasn't a theoretical collection of tables; it was the real place where my article drafts lived, where I could accidentally delete something important at 2 AM and learn about backups the hard way. This is what makes a blog such a powerful learning tool. You can deploy it. Share it. Watch people actually read the words your code is serving up. It's real. That feedback loop, the connection between your code and something tangible in the world, is irreplaceable. So how do you start? I'm not going to give you a step-by-step tutorial. You've probably already done a dozen of those. You follow along, copy the code, everything works perfectly, and then... you close the browser tab and realize you've learned almost nothing. The code evaporates from your memory because you never truly owned it. Instead, I'm giving you permission to experiment. To fumble. To build something weird and uniquely yours. You can start with a single file. Maybe it's an index.php that clumsily echoes "Hello World" onto a blank page. Or perhaps you're feeling adventurous and fire up a Node.js server with an index.js that doesn't use Express to handle a simple GET request. Pick any language you are familiar with and make it respond to a web request. That's your seed. Everything else grows from there. Once you have that first file responding, the questions start arriving. Not abstract homework questions, but real problems that need solving. Where do your blog posts live? Will you store them as simple Markdown or JSON files in a folder? Or will you take the plunge into databases, setting up MySQL or PostgreSQL and learning SQL to INSERT and SELECT your articles? I started my first blog with flat files. There's something beautiful about the simplicity. Each post is just a text file you can open in any editor. But then I wanted tags, and search, and suddenly I was reinventing databases poorly. That's when I learned why databases exist.
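Since the post says to pick any language you're familiar with, here is one possible version of that seed file, sketched in Python using only the standard library; the file name and port are arbitrary choices.

```python
# seed.py - the smallest possible "blog": one file, one response.
# Run it, open http://localhost:8000, and you have a web page you own.
from http.server import BaseHTTPRequestHandler, HTTPServer

class BlogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = "<h1>Hello World</h1><p>My blog starts here.</p>".encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), BlogHandler).serve_forever()
```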
Not from a lecture, but from feeling the pain of their absence. You write your first post. Great! You write your second post. Cool! On the third post, you realize you're copying and pasting the same HTML header and footer, and you remember learning something about DRY (don't repeat yourself) in class. This is where you'll inevitably invent your own primitive templating system. Maybe you start with simple PHP includes, pulling in a shared header at the top of each page. Maybe you write a JavaScript function that stitches together HTML strings. Maybe you create your own bizarre templating syntax. It will feel like magic when it works. It will feel like a nightmare when you need to change something and it breaks everywhere. And that's the moment you'll understand why templating engines exist. I had a few blog posts written down on my computer when I started thinking about this next problem: How do you write a new post? Do you SSH into your server and directly edit a file with vim? Do you build a crude, password-protected page with a textarea that writes to your flat files? Do you create a whole separate submission form? This is where you'll grapple with forms, authentication (or a hilariously insecure makeshift version of it), file permissions, and the difference between GET and POST requests. You'll probably build something that would make a security professional weep, and that's okay. You'll learn by making it better. It's one thing to write code in a sandbox, but a blog needs to be accessible on the Internet. That means getting a domain name (ten bucks a year). Finding a cheap VPS (five bucks a month). Learning to SSH into that server. Wrestling with Nginx or Apache to actually serve your files. Discovering what "port 80" means, why your site isn't loading, why DNS takes forever to propagate, and why everything works on your laptop but breaks in production. These aren't inconveniences; they're the entire point. This is the knowledge that separates someone who can write code from someone who can ship code. Your blog won't use battle-tested frameworks or well-documented libraries. It will use your solutions. Your weird routing system. Your questionable caching mechanism. Your creative interpretation of MVC architecture. Your homemade caching will fail spectacularly under traffic (what traffic?!). Your clever URL routing will throw mysterious 404 errors. You'll accidentally delete a post and discover your backup system doesn't work. You'll misspell a variable name and spend three hours debugging before you spot it. You'll introduce a security vulnerability so obvious that even you'll laugh when you finally notice it. None of this is failure. This is the entire point. When your blog breaks, you'll be forced to understand the why behind everything. Why do frameworks exist? Because you just spent six hours solving a problem that Express handles in three lines. Why do ORMs exist? Because you just wrote 200 lines of SQL validation logic that Sequelize does automatically. Why do people use TypeScript? Because you just had a bug caused by accidentally treating a string like a number. You'll emerge from this experience not just as someone who can use tools, but as someone who understands what problems those tools were built to solve. That understanding is what transforms a code-copier into a developer. Building your own blogging engine used to be a rite of passage. Before Medium and WordPress and Ghost, before React and Vue and Svelte, developers learned by building exactly this. A simple CMS. A place to write.
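As an illustration of the primitive templating stage described above, here is a deliberately crude sketch in Python. The header.html and footer.html files are assumed placeholders, and the {{title}} token is exactly the kind of made-up "syntax" that later breaks everywhere and teaches you why real template engines exist.

```python
# render.py - a deliberately primitive templating system:
# wrap each post's HTML in a shared header and footer so you stop
# copy-pasting them into every page. File names are placeholders.
from pathlib import Path

def render_page(title: str, body_html: str) -> str:
    header = Path("header.html").read_text(encoding="utf-8")
    footer = Path("footer.html").read_text(encoding="utf-8")
    # A homemade "template syntax": replace a magic token with the title.
    header = header.replace("{{title}}", title)
    return header + body_html + footer

if __name__ == "__main__":
    html = render_page("My third post", "<p>No more copy-pasted headers.</p>")
    Path("post-3.html").write_text(html, encoding="utf-8")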
Something that was theirs. We've lost a bit of that spirit. Now everyone's already decided they'll use React on the frontend and Node on the backend before they even know why. The tools have become the default, not the solution. Your blog is your chance to recover that exploratory mindset. It's your sandbox. Nobody's judging. Nobody's watching. You're not optimizing for scale or maintainability or impressing your coworkers. You're learning, deeply and permanently, by building something that matters to you. So here's my challenge: Stop reading. Stop planning. Stop researching the "best" way to do this. Create a folder. Create a file. Pick a language and make it print "Hello World" in a browser. Then ask yourself: "How do I make this show a blog post?" And then: "How do I make it show two blog posts?" And then: "How do I make it show the most recent one first?" Build something uniquely, personally, wonderfully yours. Make it ugly. Make it weird. Make it work, then break it, then fix it again. Embrace the technical chaos. This is how you learn. Not by following instructions, but by discovering problems, attempting solutions, failing, iterating, and eventually (accidentally) building something real. Your blog won't be perfect. It will probably be kind of a mess. But it will be yours, and you will understand every line of code in it, and that understanding is worth more than any tutorial completion certificate. If you don't know what that first blog post will be, I have an idea. Document your process of building your very own blog from scratch. The blog you build to learn programming becomes the perfect place to share what programming taught you. Welcome to development. The real kind, where things break and you figure out why. You're going to love it.
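If you want a starting point for the three questions in that challenge, here is one possible sketch: flat text files in a posts/ folder, listed newest first. The folder layout and date-prefixed file names are assumptions for the example, not a prescription.

```python
# posts.py - answer the challenge's three questions in order:
# show one post, show all posts, show the newest first.
# Assumes a posts/ folder of .txt files whose first line is the title
# and whose file names start with a date, e.g. 2025-01-31-hello.txt.
from pathlib import Path

def load_posts(folder="posts"):
    posts = []
    # Date-prefixed names sort chronologically, so reverse = newest first.
    for path in sorted(Path(folder).glob("*.txt"), reverse=True):
        lines = path.read_text(encoding="utf-8").splitlines()
        posts.append({"slug": path.stem, "title": lines[0], "body": "\n".join(lines[1:])})
    return posts

if __name__ == "__main__":
    for post in load_posts():
        print(f"{post['slug']}: {post['title']}")
```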

0 views
iDiallo 1 month ago

Why You Can't Be an Asshole in the Middle

On the first day on the job, the manager introduced me to the team, made a couple of jokes, then threatened to fire someone. At first, I thought it was just his sense of humor, that it was something I would understand once I worked long enough on the team. But no one else laughed. The air in the meeting room became stiff as he rambled about issues we had. The next Monday morning, he did it again. Now I was confused. Was I being hazed? No. Because he did it again the following Monday. He was an asshole. But he wasn't just any asshole. He thought he was Steve Jobs. Steve Jobs was a difficult person to work with. He was brutally honest, he could bend wills, shatter egos. Yet, he was also enshrined as one of the greatest business leaders of our time. He was the visionary who resurrected Apple and gave us the iPhone. My manager wasn't alone in his delusion. Like many professionals who find themselves in a people manager's position, they look at Jobs and think that being a brilliant jerk is a viable path to success. "The results speak for themselves. Maybe I need to be tougher, more demanding, less concerned with feelings." What they fail to see is that they are not Steve Jobs. And unless you're the CEO at the helm, acting like him is not a superpower. When you're a mid-level manager, you're not the Captain. You're a member of the crew. The difference between being an asshole at the top versus being an asshole in the middle comes down to authority, autonomy, and consequences. Jobs was the Captain. As the founder and CEO, he was the ultimate source of authority and vision. His difficult personality was inseparable from the company's mission. People tolerated his behavior because they bought into his vision of the future. He had the final say on hiring, firing, and strategy. His presence was the gravitational force around which the entire company orbited. When the captain is an asshole, the crew might stay for the voyage. When a fellow crewmate is an asshole, they get thrown overboard. A mid-level manager is a key member of the crew, but you are not the ultimate authority. Your colleagues in engineering, marketing, and sales don't report to you out of reverence for your world-changing vision; they collaborate with you to achieve shared company goals. Your power is not absolute; it's influence-based. And that changes everything. For Steve Jobs, it's not that being an asshole was his secret sauce. It's that his unique position allowed him to survive the downsides of his personality. He was building his vision of the future. For every person he drove away, another was drawn to the mission. It was impossible to fire him (a second time). He could fire people, and he could make them millionaires with stock options. The potential upside made the toxicity tolerable. The part of the story that often get omitted is that Jobs had a cleanup crew. Behind his grandiose ideas and abrasive personality, there were people who handled the operations and relationship-focused work he didn't have time for. That's what Tim Cook was for. Tim Cook smoothed over the conflicts, built the partnerships, and kept the machine running while Jobs played visionary. As a mid-level manager, you don't have a Tim Cook, do you? As a mid-level manager, your "because I said so" doesn't have the same weight. Anyone one level above your position can contradict you. When the CEO is harsh and demanding, it gets labeled as visionary leadership. 
The same behavior from a mid-level manager is seen for what it is: poor communication and a lack of respect. Your influence is much smaller than the person at the helm. You need favors from other departments, buy-in from your peers, and discretionary effort from your team. Being difficult burns bridges, creates resentment, and ensures that when you need help, no one will be in a hurry to give it. Your "brilliant" idea dies in a meeting room because you've alienated the very people needed to execute it. Your tools are limited. You can't promise life-changing wealth, and while you can influence promotions or terminations, the process is often layered with HR policies and approvals. Using fear as your primary tool without having ultimate control just creates a culture of anxiety and quiet quitting, not breakthrough innovation. Collaboration is your strength, and you're actively undermining it. When we had layoffs at my company, my manager was first on the list to get the boot. I can't say that his "assholery" was what put him on the list, but it certainly didn't help. No one went to bat for him. No one argued that he was indispensable. The bridges he'd burned came back to haunt him. Your success as a mid-level manager depends on your ability to influence, inspire, and collaborate. You can't demand greatness; you have to cultivate it. And you can't do that from behind a wall of arrogance and fear. In the real world, building bridges will always get you further than burning them. At work, be the leader people actually want to follow .

1 view
iDiallo 1 month ago

Can You Build a TikTok Alternative?

Whenever a major platform announces changes, the internet's response is predictable: "Let's just build our own." I remember the uproar when Facebook introduced Timeline. Users threatened boycotts and vowed to create alternatives. The same pattern emerged with Stack Overflow. There were countless weekend-clone attempts that promised to be "better." Back then, building an alternative felt possible, even if most attempts fizzled out. Now, with TikTok's American operations being sold to Oracle and inevitable changes on the horizon, I find myself asking one question: is it actually possible to build a TikTok alternative today? The answer depends entirely on who's asking. A well-resourced tech company? Absolutely. We've already seen Facebook, YouTube, and others roll out their own short-form video features in months. But a scrappy startup or weekend project? That's a different story entirely. As someone who doesn't even use TikTok, I'm exploring this purely for the technical and strategic challenge. So let's approach this like a mid-level manager tasked with researching what it would actually take. It's interesting to think about cost or technology stack, but I think the most critical part of TikTok isn't its code at all. On the surface, TikTok does two things: it lets you record a video, then share it with other users. That's it. You could argue that Facebook, YouTube, and Instagram do the same thing. And you'd be right. This surface-level replication is exactly why every major platform (Reels, Shorts, etc.) launched their own versions within months of TikTok's explosion. Creating a platform that records and shares videos is straightforward for a large company. The technical pattern is well-established. But that surface simplicity is deceiving. Because video, at scale, is one of the hardest technical problems in consumer tech. Let me put video complexity in perspective. All the text content on my blog compiled over 12 years totals about 10 MB. That's the size of a single photo from my smartphone. A single TikTok video, depending on length and resolution, easily exceeds that. Now multiply that by millions of uploads per day. Building an app with TikTok's core features requires significant upfront investment. Development costs: vibe coding won't cut it; you need to hire people. Team requirements: experienced teams that can build and optimize for each app ecosystem, with frontend and backend developers, UI/UX designers, and QA engineers. Mandatory features: video recording and editing with effects and filters, an AI-powered recommendation engine, live streaming, duets and stitches, a social graph and sharing, and content moderation systems. These aren't optional. The format is established, the bar is set. You can't launch a "minimum viable" short-form video app in 2025. Users expect the full feature set from day one. Video processing is not as simple as it seems. You could build wrappers around FFmpeg, but building fast and reliable encoding, streaming, and formatting demands more than just a wrapper. In my previous exploration of building a YouTube alternative, I concluded it was essentially impossible for two reasons: it's expensive to host videos at scale, and it's even more expensive to deal with copyright issues. TikTok operates at a smaller scale than YouTube, but those fundamental challenges remain. You need serious capital to even start. You can build the platform, but you can't build the phenomenon. TikTok's true competitive advantage has nothing to do with its codebase. It's technically a Snapchat clone. What makes TikTok impossible to displace is its cultural gravity. TikTok isn't just a video app. It's the most powerful music discovery platform. It turned Lil Nas X's "Old Town Road" into a global phenomenon and resurrected Fleetwood Mac's "Dreams" 43 years after release. Artists now strategically release "sped-up" versions specifically formatted for TikTok trends. Record labels monitor the platform more closely than radio. Your alternative app might have better video processing, but it won't make hits.
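To illustrate why "a wrapper around FFmpeg" is only the first step, here is a minimal sketch of that naive wrapper in Python: one upload in, one 720p HLS rendition out. The paths are placeholders and ffmpeg is assumed to be installed; everything a real platform needs beyond this (parallel renditions, job queues, retries, thumbnails, moderation, copyright scanning) is where the cost actually lives.

```python
# transcode.py - the "naive FFmpeg wrapper" stage of a video platform:
# turn one uploaded file into a 720p HLS stream. Real platforms add
# job queues, multiple renditions, thumbnails, moderation, and retries.
import subprocess
from pathlib import Path

def transcode_to_hls(upload: str, out_dir: str) -> str:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    playlist = out / "index.m3u8"
    cmd = [
        "ffmpeg", "-y", "-i", upload,
        "-vf", "scale=-2:720",            # 720p, keep aspect ratio
        "-c:v", "libx264", "-preset", "veryfast", "-crf", "23",
        "-c:a", "aac", "-b:a", "128k",
        "-f", "hls", "-hls_time", "4", "-hls_playlist_type", "vod",
        str(playlist),
    ]
    subprocess.run(cmd, check=True)       # raises if ffmpeg fails
    return str(playlist)

if __name__ == "__main__":
    print(transcode_to_hls("upload.mp4", "streams/upload"))
```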
For younger users, TikTok has replaced Google for everything from recipe searches to news discovery. But it's more radical than that. Google evolved from a search engine to an answer engine, attempting to provide direct answers rather than just links. TikTok takes this evolution further by becoming a serve engine. You don't find content, content finds you. You open the app and scroll. No search queries, no browsing, no active seeking. The algorithm serves you exactly what it thinks you want to see, refining its understanding with every swipe. Users aren't searching for vibes and aesthetics; they're being served them in an endless, personalized stream. Your alternative can't replicate this with a better algorithm alone. You need millions of users generating behavioral data to train on. On TikTok, "microtrends" emerge, peak, and die within weeks, fueling entire industries. Restaurant chains now add viral menu items to permanent offerings. Fast fashion brands monitor TikTok trends in real time. Your alternative might have a great feed algorithm, but it won't move markets. On TikTok, you can watch three seconds of a video and instantly identify it as TikTok content before seeing any logo. The vertical format, the quick cuts, the trending sounds, the text overlays. It's a distinct design that users have internalized. I'm not interested in creating TikTok content, but the more important truth is that TikTok isn't interested in the content I would create. The platform has defined what it is, and users know exactly what they're getting. Any alternative must either copy this completely (making it pointless) or define something new (requiring the same years-long cultural adoption TikTok achieved). Technical replication of TikTok is expensive but achievable for a well-resourced company. But the insurmountable barrier isn't the code; it's the immense cultural inertia. To compete, you wouldn't just be building a video app. You'd need to simultaneously displace TikTok as a music discovery platform, a search engine for Gen Z, a trendsetter driving consumer behavior, and a community hub with established creator ecosystems. You're not building a better mousetrap. You're trying to convince an entire ecosystem to migrate to an empty platform with no culture, no creators, and no communities. For a genuine alternative to emerge, the strategy can't be "TikTok but slightly different." It must be "TikTok completely neglected this specific use case, and we're going to own it entirely." Alternatively, people may react negatively to the acquisition by Oracle. As a developer, I can't say Oracle's software inspires me. I hope that serves as inspiration to build a better alternative, not just an expensive ghost town with excellent video processing.

0 views
iDiallo 1 month ago

AI Video Overview

Google is creating a bigger and wider chasm between users and the source of data. Currently, my blog's traffic from Google searches has dropped significantly since AI Overviews launched. Where users once clicked through to read my articles, they now get their answers directly from Google's AI summary and never visit the source. Before, it was straightforward: you searched for something, Google showed the website that had the information, and you clicked on it. Now, when you search for information, you're presented with an AI Overview that tries to answer your search query. This is fine from a user's standpoint. You had a question, now you have an answer. But who answered your question? Google crawls the web, finds websites that have the information you need, then summarizes them neatly for end-users. The problem is, with AI summaries, you never get to see the source information. Sure, there's a small link behind a collapsible menu, but it now means you rarely click on links anymore. Links , the very thing that made the web hyperconnected, take a back seat. Long term, since users aren't clicking on links, there are fewer incentives for anyone to create content. And Google will eventually have to find a way to source content from somewhere. But before we get there, I want to put my cards on the table. The next frontier for Google Search is video. And the technology is already here. For videos, Google often presents a YouTube video in the search results and highlights the part that's relevant to your search query. You still watch the video, and there are still incentives for the person who created the instructional video to continue doing so. The creator gets views, ad revenue, subscribers, etc. The ecosystem still works. When you search for "how to fix a leaky faucet," Google shows you a YouTube video and jumps to the 2:30 mark where the actual fix is demonstrated. You watch that person's content, maybe subscribe, maybe watch their other videos. They directly benefit. But this is just the stepping stone to something much bigger. What happens when Google starts showing AI Video Overviews? A few years back, I wrote about how YouTube uses machine learning to predict the most likely video you will want to watch . Their goal is to keep you on the platform for as long as possible. Based on your history, and that of millions of people sharing the same watch pattern, they keep you watching by recommending the most appealing next videos. Earlier this year, I wrote that Google (through YouTube) has all the ingredients to create the perfect video for you. In my article "The Perfect YouTube Video" , I explored how YouTube tracks every aspect of how you watch their video and react to it. Using the different data points you generate, they could prompt Veo (Google's video generator) to create the perfect video for you. A video so enticing that you'd have a hard time skipping it. This might not have been possible when I wrote that article, but at the rate AI is progressing, I wouldn't be surprised if in a couple of years Veo creates video in real time. Now, Google has Genie 3 , an impressive world-building model that creates a world you can navigate in real time. It operates at 720p resolution and 24 frames per second. Combine this with Veo's video generation capabilities, and you have all the ingredients needed to create real-time AI Overview videos. Here is what Google's AI can extract from videos right now: And then here is what they can generate: Let's walk through a scenario. 
Let's walk through a scenario. You have some free time today, and you finally want to try your hand at baking cookies. You search for a recipe online, and Google gives you the ingredients from an Old Family Recipe, the link buried somewhere below. Now you go to the store, buy the ingredients, and soon you're in your kitchen wearing your apron and chef's hat, ready to bake.

This time you Google "how to bake cookies." You're presented with a wall of text from the AI Overview listing the same ingredients you already bought. But you're not much of a chef, or much of a reader. You want to see how the cookies should look, because you're a visual learner. What's that in the top right corner? A new Google feature? It says "AI Video Overview."

You click the button and a new window appears. It loads for just 15 seconds, and you're presented with a hyper-realistic kitchen, an AI-generated avatar narrating the steps with perfect lip-sync, and text overlays listing the ingredients. The video is just 30 seconds long, cutting all the fluff usually found on cooking channels. In a 30-second clip that you can scrub through, you see every step for baking your cookies. Of course, at the end of the video a card appears where you can click through to the source videos Google used to generate the clip. But who clicks on that? Zero-click searches are already on the rise.

This will be extremely convenient for users. Why sit through all the fluff and a NordVPN sponsorship when all you need is the steps to bake? But here is what will remain unseen:

- The cooking channels that spent hours creating detailed tutorials become invisible training data.
- Their personality, expertise, and hard work get synthesized into a "perfect" but soulless AI version.
- Users get their answer without ever engaging with the original source. No views, no ad revenue, no subscribers for the people who actually created the knowledge.

This isn't science fiction. It doesn't exist just yet, but it's the logical next step in Google's evolution from search engine to answer engine. Just as my blog now gets fewer clicks because people read the AI Overview instead of visiting my site, video creators will soon face the same reality.

The old value-exchange model of the internet is breaking down. We were used to Google sending traffic our way when we created high-quality information that helped users. As a reward, we got views, revenue, and a following. With the new model: Google uses our content as training data → AI generates competing content → users get their information → we get nothing. Sure, there will be attribution buried in a menu somewhere, just like there is for text overviews now. But when was the last time you clicked on those source links after reading an AI summary?

The chasm between users and creators isn't just widening. It's becoming a canyon. And unlike text, where you might still want to read the original article for its depth or personality, AI video overviews will be so polished and efficient that there will be even less reason to click through to the source. For video creators, what is your value when an AI can synthesize your expertise, replicate your techniques, and present them more efficiently than you ever could? The future may lie in what AI cannot easily replicate: live interaction, community building, unique personality, and the kind of deep, original insight that goes beyond answering simple informational queries.

I understand that it might take another leap in efficiency before these videos can be generated in real time, but the work is being done. All the major AI players are investing heavily in data centers and research to improve their products. But first, we need to acknowledge what's happening. Google is building a world where your content fuels their answers, but your audience never finds you.

iDiallo 1 month ago

The Internet Is Powered by Generosity

When I arrived in the US, one of the first things I looked for was an Internet Cafe. I wanted to chat with my family and friends, read about my neighborhood and school, and keep up with the world I'd left behind. But there was one thing that always bothered me in these public spaces: the counter in the bottom right corner of the screen, counting down to let me know how much time I had left.

Today, that world has vanished. Internet access is widely available, and we don't need Internet Cafes anymore. The internet has become invisible, so seamlessly integrated into our lives that we forget how the whole system actually works. When you type a question into your phone, it feels like the entire transaction happens right there on your device. But that answer isn't a product of your phone; it emerges from an invisible ecosystem built on the generosity of countless people you'll never meet.

Imagine for a second that every time you visited a website, a tiny meter started ticking. Not for the content you're viewing, but for the very software the site runs on. A "Windows Server Tax" here, an "Oracle Database Fee" there. The vibrant, chaotic, creative web we know simply wouldn't exist. The number of blogs, small businesses, niche communities, and personal projects would shrink dramatically. The internet would become a sterile mall of well-funded corporations. (AOL?)

But that's not our reality. Instead, the internet runs on something we often overlook: radical, uncompensated generosity. Most servers, cloud instances, and even Android phones run on Linux, which is freely given to the world. The software that delivers web pages to your browser? Apache and NGINX, both open source. Those AI-generated summaries you see in Google? They often draw from Wikipedia, edited and maintained by volunteers. OpenSSL, as its name suggests, is open source and protects your private data from prying eyes. And when you're troubleshooting a coding problem at 2 AM, you're probably reading a blog post written by a developer who shared their solution simply to help others.

This generosity isn't just about getting things for free; it's about freedom itself. When software is "free as in speech," it means you're not the product, your data isn't being harvested, and you have the liberty to use, study, modify, and share these tools. This is the essence of Linux, Wikipedia, and the core protocols that make the internet possible. People contribute to these projects not primarily for money, but out of passion, the desire to build recognition, and the genuine wish to help others and contribute to the commons. It's a gift economy that creates abundance rather than scarcity.

This generous foundation is what allows the commercial web to flourish on top of it. A startup doesn't need to spend millions on operating system licenses before writing its first line of code. It can build on Linux, use MySQL for its database, and leverage countless other open-source tools, focusing its capital and energy on its unique idea. Building a website isn't a massive financial decision; it's a creative one. The barrier to entry is nearly zero, and that's a direct result of open-source generosity.

But this entire system rests on something even more fundamental: trust. When you visit my website, you trust me. You trust that the HTTPS lock icon means your data is safe, thanks to the open-source OpenSSL library. You trust that I'm not hosting malware.
When you read a Wikipedia article, you trust (with healthy skepticism) that volunteers are aiming for accuracy, not pushing an agenda. As a developer, I trust that the open-source tools I use are reliable and secure, and that the community will help me when I'm stuck. This trust is the currency that keeps the open web functioning. (Obligatory: Clay Shirky's video on Love, Internet Style.)

So what does this mean for you and me? We can continue the tradition of generosity that built the foundation we all rely on. The next time you solve a tricky problem, consider writing a short blog post about it; your generosity might save someone else hours of frustration. When Wikipedia helps you research that obscure topic, consider making a small donation. It's a tiny price for access to a Library of Alexandria. If your company uses open-source software, consider contributing code back or sponsoring a developer. Help maintain the engine you depend on.

The internet is a miracle of collaboration, a testament to the idea that when we give freely, we don't deplete our resources. Instead, we create an ecosystem where everyone can build, learn, and connect. It runs on generosity. The least we can do is acknowledge it and, wherever possible, add our own contribution to the commons.
