Latest Posts (18 found)
iDiallo Today

Beyond Enshittification: Hostile

The computer is not just working less well. Instead, it is actively trying to undermine you. And there is nothing you can do about it. When Windows wants to update, you don't get to say "no." You get "Update now" or "Remind me later." When Twitter shows you notifications from people you don't follow, you can't dismiss them, only "see less often." When LinkedIn changes your email preferences, you'll reset them, only to find they've reverted a few months later. These aren't bugs. They aren't oversights. They're deliberate design choices that remove your ability to say no. It's not dark patterns anymore. It's not even enshittification. It's pure hostility.

As developers, there are two types of users we find extremely annoying. The first is the user who refuses to get on the latest version of the app. They're not taking advantage of the latest bug fixes we've developed. We're forced to maintain the old API because this user doesn't want to update. They're stubborn, they're stuck in their ways, and they're holding everyone back. The second type of user is the one who's clueless about updates. It's not that they don't want to update, they don't even know there is such a thing as an update. They can be annoying because they'll eventually start complaining that the app doesn't work. But they'll do everything short of actually updating it.

Well, I fall into the first category. I understand it's annoying, but I also know that developers will often change the app in ways that don't suit me. I download an app when it's brand new and has no ads, when the developer is still passionate about the project, pouring their heart and soul into it, making sure the user experience is a priority. That's the version I like. Because shortly after, as the metrics settle in and they want to monetize, the focus switches from being user-centric to business-centric. In Cory Doctorow's words, this is where "enshittification" starts. Now, I'm not against a developer trying to make a buck, or millions for that matter. But I am against degrading the user experience to maximize profit.

Companies have figured out how to eliminate the first type of user entirely. They've weaponized updates to force compliance. Apps that won't launch without updating. Operating systems that update despite your settings. Games that require an online connection to play single-player campaigns. Software that stops working if you don't agree to new terms of service. The philosophy of "if it ain't broke, don't fix it" is dead. They killed it.

And they can get away with it because of the network effect. We are trapped in it. You use Windows because your workplace uses Windows. You use Excel because your colleagues use Excel. You use Slack because your team uses Slack. You use WhatsApp because your family uses WhatsApp.

When Windows suddenly requires you to have a Microsoft account (an online account) just to log into your local computer, what are your options? Switch to Apple? After twenty years of Windows shortcuts, file systems, and muscle memory? Switch to Linux? When you need to share files with colleagues who use proprietary Microsoft formats? You can't. And they know you can't. They're not competing on quality anymore. They're leveraging your professional dependency, your colleagues' software choices, your decade of learned workflows. You're not a customer who might leave if the product gets worse. You're a captive audience. This is why the hostility is possible. This is why they can get away with it.
Enshittification, as Doctorow describes it, is a process of degradation. First, platforms are good to users to build market share. Then they abuse users to favor business customers. Finally, they abuse those business customers to claw back all the value for themselves. But what we're seeing now is different. This isn't neglect or the natural decay of a profit-maximizing business. This is the deliberate, systematic removal of user agency.

You are presented with the illusion of choice. You can update now or update later, but you cannot choose to never update. You can see less often, but you cannot choose to never see it. You can accept all cookies instantly, or you can navigate through a deliberately complex maze of toggles and submenus to reject them one by one.

They borrow ransomware patterns. Notifications you can't dismiss, only snooze. Warnings that your system is "at risk" if you don't update immediately. Except once you update, the computer is restarted and you are presented with new terms you have to agree to in order to access your computer. Every Windows update that turns Bing back on and forces all links to open with Edge. Every app update that re-enables notifications you turned off. Every platform that opts you back into marketing emails and makes you opt out again.

Updates are now scary because they can take you from a version that serves your interest to a version that serves the company's. The update that adds telemetry. The update that removes features you relied on. The update that makes the app slower, more bloated, more aggressive about upselling you. These aren't accidents. They're not the result of developers who don't care or designers who don't know better. They're the result of product meetings where someone said "users are rejecting this, how do we force them to accept it?" and someone else said "remove the 'no' button."

As a developer, and someone who has been using computers since I was 5 years old, I don't really care about the operating system. I can use them interchangeably. In fact, I don't care about Twitter, or any of these platforms. When I log into my computer it's to write a document. When I use my mobile device, it's to talk to my friends or family. When I access my dev machine, it's to do my job. The operating systems or the platforms are secondary to the task at hand. The software is supposed to be the tool, not the obstacle.

But now the tool demands tribute. It demands your data, your attention, your compliance with whatever new terms it has decided to impose. You can't switch because switching costs everything. Your time, your muscle memory, your compatibility with everyone else who's also trapped. The network effect isn't just about other people using the same platform. It's about your own accumulated investment in learning, customization, and integration.

So when they add hostile features, when they remove your ability to say no, when they force you to have an online account for offline work, when they interrupt you with notifications you can't dismiss, when they change interfaces you've spent years mastering, you can only accept it. Not because you want to. Not because it's better. Because you have no choice. And that's not enshittification. That's hostility.

iDiallo 2 days ago

Stop Trying to Promote My Best Engineers

There has always been a disconnect between the hiring process and finding the best engineers. But when we somehow find them, the career ladder ensures that they don't remain in that position of strength. An incompetent company might create the conditions for engineers to leave for better jobs. A generous company will apply the Peter Principle and promote engineers to their level of incompetence. Either way, the best engineers never remain in that position of strength.

How do you recognize a great engineer? Is it someone who aces all the leetcode during the interview process? Is it someone who is a great communicator? Or is it someone who went to an elite university? The processes we currently have in place can only determine so much. Candidates have limited time to audition for the role they're applying for. Over the span of a few interviews, they're supposed to convey the experience from all their past work, show that they know how to do the job, and also talk about their greatest weakness. It's a performance that some people know how to game.

AI-powered hiring tools haven't changed this problem. They don't magically give you better candidates. You're still sifting through the same pool, just with fancier filters. The disconnect between interview performance and actual job performance remains.

A few years back, I interviewed someone I'll call the greatest communicator I've ever seen. It was for a web engineer position on another team. He seemed to understand the front end, the backend, and the jargon of the job. But what impressed me most was how he broke down each problem I posed into small parts and thoroughly resolved each one. It was as if he was creating Jira tickets in real time and writing documentation along the way before the task was even completed. I gave the thumbs up and he was hired. A couple of months later, I remembered him. I searched for his name in the directory and learned that he was let go. "Why?" I asked around. The answer was "he was pretty bad, couldn't complete a single task." Yet he was able to pass the job interview. The inverse also happens. You take a chance on someone who seemed merely adequate during interviews, and somehow they turn into one of your best engineers.

I've often found myself in teams where I have zero doubts about the ability of my teammates. But then, as the end of the year approaches, the inevitable discussion turns to promotion. It's actually much easier to identify a great engineer on the job than in an interview. You see their work, their growth, their impact. And when you finally have that clarity, when you know without a doubt that this person excels at what they do, the system insists you move them away from it.

When you are good at your job, the logical step for a manager is to reward you with a promotion, moving you away from the job you are actually good at. That's the Peter Principle in action. Managers believe their only tool for compensation is moving you up and down the ladder. A great developer gets promoted to senior developer, then to tech lead, then to manager. At each step, we strip away more of what made them valuable in the first place.

The underlying assumption is that a great engineer will nurture a team into great engineers. But teaching and applying a skill are two distinct occupations. You may be great at one, but terrible at the other. My instinct is to help great engineers continue to grow in their expertise, not switch them to a role where they're no longer competent.
It's important not to throw away all their knowledge and put them in a position of authority where they can't exercise their skill. Yet many employees themselves don't know what the next step up should be. They see "senior" or "lead" or "manager" as the only path forward, not because they want those responsibilities, but because that's the only way to get recognition and compensation. What if we stopped thinking about career advancement as climbing a ladder? What if the goal wasn't always upward, but deeper? The traditional career ladder assumes that everyone wants to eventually stop doing technical work. It assumes that the best reward for mastering a craft is to stop practicing it. But some of the best engineers I've worked with have no interest in management. They want to write code, solve hard problems, and mentor others without taking on hiring, performance reviews, and budget planning. We need to normalize horizontal growth. This means creating paths where engineers can gain expertise, take on more complex challenges, and yes, earn more money, without leaving their position of strength. It means recognizing that a senior engineer who has been writing excellent code for ten years is not "stuck" or "lacking ambition." They're mastering their craft. It also means changing how we structure compensation. If the only way to give someone a significant raise is to promote them, then we've built a system that punishes expertise. Companies should be able to pay top-tier compensation for top-tier individual contributors, not just managers. The irony is that we struggle to identify great engineers in interviews, yet when we finally find them on the job, we immediately try to change what they do. We should be asking ourselves, if this person is exceptional at their current role, why is our first instinct to move them? Maybe the answer isn't to promote them out of their position of strength, but to let them get even better at what they already do exceptionally well. After all, if interviews can't reliably identify great engineers, shouldn't we do everything possible to keep them exactly where they are when we finally find them?

iDiallo 4 days ago

Designing Behavior with Music

A few years back, I had a ritual. I'd walk to the nearest Starbucks, get a coffee, and bury myself in work. I came so often that I knew all the baristas and their schedules. I also started noticing the music. There were songs I loved but never managed to catch the name of, always playing at the most inconvenient times for me to Shazam them. It felt random, but I began to wonder: Was this playlist really on shuffle? Or was there a method to the music?

I never got a definitive answer from the baristas, but I started to observe a pattern. During the morning rush, around 8:30 AM when I'd desperately need to take a call, the music was always higher-tempo and noticeably louder. The kind of volume that made phone conversations nearly impossible. By mid-day, the vibe shifted to something more relaxed, almost lofi. The perfect backdrop for a deep, focused coding session when the cafe had thinned out and I could actually hear myself think. Then, after 5 PM, the "social hour" began. The music became familiar pop, at a volume that allowed for easy conversation, making the buzz of surrounding tables feel part of the atmosphere rather than a distraction.

The songs changed daily, but the strategy was consistent. The music was subtly, or not so subtly, encouraging different behaviors at different times of day. It wasn't just background noise; it was a tool. And as it turns out, my coffee-fueled hypothesis was correct. This isn't just a Starbucks quirk; it's a science-backed strategy used across the hospitality industry. The music isn't random. It's designed to influence you.

Research shows that we can broadly group cafe patrons into three archetypes, each responding differently to the sonic environment. Let's break them down.

The first archetype is you and me, with a laptop, hoping to grind through a few hours of work. Our goal is focus, and the cafe's goal is often to prevent us from camping out all day on a single coffee.

What the Research Says: A recent field experiment confirmed that fast-tempo music leads to patrons leaving more quickly. Those exposed to fast-tempo tracks spent significantly less time in the establishment than those who heard slow-tempo music or no music at all. For the solo worker, loud or complex music creates a higher "cognitive load," making sustained concentration difficult. That upbeat, intrusive morning music isn't an accident; it's a gentle nudge to keep the line moving.

When you're trying to write code or draft an email and the music suddenly shifts to something with a driving beat and prominent vocals, your brain has to work harder to filter it out. Every decision, from what variable to name to which sentence structure to use, becomes just a little more taxing. I'm trying to write a function and a song is stuck in my head. "I just wanna use your love tonight!" After an hour or two of this cognitive friction, packing up and heading somewhere quieter starts to feel like a relief rather than an inconvenience.

The second archetype is the pair who are there for conversation. You meet up with a friend you haven't seen in some time. You want to catch up, and the music acts as a double-edged sword.

What the Research Says: The key here is volume. Very loud music can shorten a visit because it makes conversing difficult. You have to lean in, raise your voice, and constantly ask "What?" Research on acoustic comfort in cafes highlights another side: music at a moderate level acts as a "sonic privacy blanket."
It masks their conversation from neighboring tables better than silence, making the pair feel more comfortable and less self-conscious. I've experienced this myself. When catching up with a friend over coffee, there's an awkward awareness in a silent cafe that everyone can hear your conversation. Are you talking too loud about that work drama? Can the person at the next table hear you discussing your dating life? But add a layer of moderate background music, and suddenly you feel like you're in your own bubble. You can speak freely without constantly monitoring your volume or censoring yourself.

The relaxed, mid-day tempo isn't just for solo workers. It's also giving pairs the acoustic privacy to linger over a second latte, perhaps order a pastry, and feel comfortable enough to stay for another thirty minutes.

The third archetype is the group of three or more, there for the vibe. Their primary goal is to connect with each other, and the music is part of the experience.

What the Research Says: Studies on background music and consumer behavior show that for social groups, louder, more upbeat music increases physiological arousal, which translates into a sense of excitement and fun. This positive state is directly linked to impulse purchases, and a longer stay. "Let's get another round!"

The music effectively masks the group's own noise, allowing them to be loud without feeling disruptive. The familiar pop tunes of the evening are an invitation to relax, stay, and spend. That energy translates into staying longer, ordering another drink, maybe splitting some appetizers. The music gives permission for the group to match its volume and enthusiasm. If the cafe is already vibrating with sound, your group's laughter doesn't feel excessive, it feels appropriate. The music is not random, it's calculated.

I have a private office in a coworking space. What I find interesting is that whenever I go to the common area, where most people work, there's always music blasting. Not just playing. Blasting. You couldn't possibly get on a meeting call in the common area, even though this is basically a place of work. For that, there are private rooms that you can rent by the minute.

Let that sink in for a moment. In a place of work, it's hard to justify music playing in the background loud enough to disrupt actual work. Unless it serves a very specific purpose: getting you to rent a private room.

The economics make sense. I did a quick count on my floor. The common area has thirty desks but only eight private rooms. If everyone could take calls at their desks, those private rooms would sit empty. But crank up the music to 75 decibels, throw in some upbeat electronic tracks with prominent basslines, and suddenly those private rooms are booked solid at $5 per 15 minutes. That's $20 per hour, per room, eight rooms, potentially running 10 hours a day. The music isn't there to help people focus. It's a $1,600 daily revenue stream disguised as ambiance.

And the best, or worst, part is that nobody complains. Because nobody wants to be the person who admits they need silence to think. We've all internalized the idea that professionals should be able to work anywhere, under any conditions. So we grimace, throw on noise-canceling headphones, and when we inevitably need to take a Zoom call, we sheepishly book a room and swipe our credit card.
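As a quick sanity check on that figure, here is the arithmetic spelled out (a minimal sketch; the rates, room count, and hours are the ones quoted above, and full occupancy is an assumption, not a measurement):

```python
# Back-of-the-envelope revenue from the private rooms described above.
# Assumes every room is booked solid for the full day (the best case).
rate_per_15_min = 5          # dollars per 15-minute block
blocks_per_hour = 4          # 60 minutes / 15-minute blocks
rooms = 8                    # private rooms on the floor
hours_per_day = 10           # potential operating hours

hourly_rate = rate_per_15_min * blocks_per_hour      # $20 per hour, per room
daily_revenue = hourly_rate * rooms * hours_per_day  # $1,600 per day

print(f"${hourly_rate}/hour per room, ${daily_revenue}/day across {rooms} rooms")
```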
Until now, this process of shaping ambiance has been relatively manual. A manager chooses a playlist or subscribes to a service (like Spotify's "Coffee House" or "Lofi Beats") and hopes it has the desired effect. It's a best guess based on time of day and general principles. But what if a cafe could move from curating playlists to engineering soundscapes in real-time? This is where generative AI will play a part. Imagine a system where:

- Simple sensors can count the number of customers in the establishment and feed real-time information to an AI. Point-of-sale data shows the average ticket per customer and table turnover rates.
- The AI receives a constant stream: "It's 2:30 PM. The cafe is 40% full, primarily with solo workers on laptops. Table turnover is slowing down, average stay time is now 97 minutes, up from the target of 75 minutes."
- An AI composer, trained on psychoacoustic principles and the cafe's own historical data, generates a unique, endless piece of music. It doesn't select from a library. It is created in real time.
- The manager has set a goal: "Gently increase turnover without driving people away." The AI responds by subtly shifting the generated music to a slightly faster BPM, maybe from 98 to 112 beats per minute. It introduces more repetitive, less engrossing melodies. Nothing jarring, nothing that would make someone consciously think "this music is annoying," but enough to make that coding session feel just a little more effortful.
- The feedback loop measures the result. Did the solo workers start packing up 15 minutes sooner on average? Did they look annoyed when they left, or did they seem natural? Did anyone complain to staff? The AI learns and refines its model for next time, adjusting its parameters. Maybe 112 BPM was too aggressive; next time it tries 106 BPM with slightly less complex instrumentation.

This isn't science fiction. The technology exists today. We already have:

- Generative AI that can create music in any style (MusicLM, MusicGen)
- Computer vision that can anonymously track occupancy and behavior
- Point-of-sale systems that track every metric in real-time
- Machine learning systems that can optimize for complex, multi-variable outcomes

Any day now, you'll see a startup providing this service, where the ambiance of a space is not just curated, but designed. A cafe could have a "High Turnover Morning" mode, a "Linger-Friendly Afternoon" mode, and a "High-Spend Social Evening" mode, with the AI seamlessly transitioning between them by generating the perfect, adaptive soundtrack.

One thing that I find frustrating with AI is that when we switch to these types of systems, you never know it's happening. The music would always feel appropriate, never obviously manipulative. It would be perfectly calibrated to nudge you in the desired direction while remaining just below the threshold of conscious awareness. A sonic environment optimized not for your experience, but for the business's metrics.

When does ambiance become manipulation? There's a difference between playing pleasant background music and deploying an AI system that continuously analyzes your behavior and adjusts the environment to influence your decisions. One is hospitality; the other is something closer to behavioral engineering. And unlike targeted ads online, which we're at least somewhat aware of and can block, this kind of environmental manipulation is invisible, unavoidable, and operates on a subconscious level. You can't install an ad blocker for the physical world.

I don't have answers here, only questions. Should businesses be required to disclose when they're using AI to manipulate ambiance? Is there a meaningful difference between a human selecting a playlist to achieve certain outcomes and an AI doing the same thing more effectively? Does it matter if the result is that you leave a cafe five minutes sooner than you otherwise would have? These are conversations we need to have as consumers, as business owners, as a society.
Now we know that the quiet background music in your local cafe has never been just music. It's a powerful, invisible architect of behavior. And it's about to get a whole lot smarter.
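To make that feedback loop concrete, here is a minimal sketch of the control logic. Everything in it is hypothetical: the sensor feed, the stay-time metric, and the music_engine generation call are stand-ins, and the adjustment rule is just the simple BPM nudge described above, not any real product's algorithm.

```python
from dataclasses import dataclass

@dataclass
class CafeSnapshot:
    occupancy: float          # fraction of seats taken, e.g. 0.4
    solo_worker_share: float  # fraction of patrons working alone
    avg_stay_minutes: float   # rolling average stay time

# Hypothetical tuning knobs for the "gently increase turnover" goal.
TARGET_STAY_MINUTES = 75
BASE_BPM = 98
MAX_BPM = 112
BPM_STEP = 4

def choose_bpm(snapshot: CafeSnapshot, current_bpm: int) -> int:
    """Nudge the tempo up when patrons linger past the target, ease off otherwise."""
    if snapshot.avg_stay_minutes > TARGET_STAY_MINUTES and snapshot.solo_worker_share > 0.5:
        return min(current_bpm + BPM_STEP, MAX_BPM)
    return max(current_bpm - BPM_STEP, BASE_BPM)

def control_loop(sensor_feed, music_engine, current_bpm=BASE_BPM):
    """sensor_feed yields CafeSnapshots; music_engine.generate() stands in for a
    generative-music API that accepts a target tempo."""
    for snapshot in sensor_feed:
        current_bpm = choose_bpm(snapshot, current_bpm)
        music_engine.generate(bpm=current_bpm)  # regenerate the soundscape at the new tempo
    return current_bpm
```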

iDiallo 6 days ago

Keeping the Candle Lit

On my first day at a furniture store, my boss pointed to a warehouse full of boxes and said, "Unpack that one and build it." Simple enough. I found a large, heavy box, sliced it open, and laid out an array of wooden slats, metal screws, and chains. It was a love seat swing. Clearly a two or three person job. But I didn't know that. If my boss asked me to build it, I figured, it must be possible. So I just started. There is this feeling I often get when I have a brand new exciting idea. What follows goes something like this. You buy the domain. You sketch the idea. You draft the first chapter. The rush of beginning something new floods your system with dopamine and possibility. This initial excitement is a fantastic fuel. It gets you moving. The candle of motivation always burns fastest at the start. But then you get past the first easy steps, and the flame sputters. The wax pool of complexity begins to form. Doubt seeps in. You start to realize the true scale of what you've undertaken. Suddenly, exhilaration starts to feel like exhaustion. Most projects die right here, in the soggy middle. If you are not careful, you might even start a new project just to feel that rush again. The trick isn't to avoid this burnout. It's inevitable. The trick is learning how to reignite the flame, or better yet, to build a different kind of fire entirely. Standing in that warehouse, I had an advantage I didn't recognize at the time. I had no idea how hard this was supposed to be. If my boss had said, "This is a complex, multi-person assembly job that typically takes experienced workers two hours," I would have been paralyzed. I'd have looked for help. I'd have doubted my ability. I'd have found seventeen reasons to do something else first. This is why every monumental piece of software, every world-changing company, every impossible creative work was started by someone who didn't fully grasp the mountain they were about to climb. If Jeff Bezos had started by trying to solve for a global fleet of delivery vans, AWS cloud infrastructure, and same-day delivery logistics, he'd never have sold his first book. If the Wright Brothers had tried to understand all of aeronautical engineering before attempting flight, they'd still be on the ground. Amazon's magic trick was to start selling books before you try to build the empire. Start with the bicycle shop before you revolutionize transportation. Start tightening one bolt before you build the swing. The most dangerous thing you can do with a big project is understand it fully before you begin. An hour into the swing assembly, my initial energy was completely gone. I was alone with this massive, complicated puzzle. My hands hurt. The instruction diagram might as well have been written in ancient Egyptian. The 'let's impress the boss!' fuel had evaporated, replaced by the reality of a hundred confusing parts and no clear path forward. But I had to complete the job. So I stopped thinking of it as 'building a love seat swing' and started thinking of it as a series of small, repeatable tasks. Find two pieces that fit. Align the holes. Insert bolt A into slot B. Tighten with wrench C. Repeat. I wasn't building anything. I was just completing a pattern. Over and over. This wasn't a creative problem, the instructions were written clearly on the paper. So I turned it into repetitive motion. When a task feels like it requires 100% pure creativity all the time, you will burn out. Creative energy is finite. Decision-making is exhausting. But rhythm? Rhythm is renewable. 
I entered a flow state not through inspiration, but through repetition. The goal shifted from "finish the impossible thing" to "complete the next simple step."

This is how books get written. Not through sustained creative genius, but through showing up to the same chair at the same time and adding 500 words to yesterday's 500 words. This is how companies get built. Not through visionary breakthroughs every day, but through making the same sales calls, fixing the same bugs, having the same customer conversations until patterns emerge and systems develop. The secret is to find the smallest unit of meaningful progress and make it so frictionless that it's easier to do it than to avoid it.

I've written about my trick to learning anything new before. I might as well start calling it the "100 Times Rule." The rule is simple: You can't do the big impossible thing once. But you can do the tiny component action 100 times. You can't write 100 novels. But you can write 200 words, 100 days in a row. You can't launch 100 companies, but you can have 100 conversations with potential customers. You can't master piano, but you can practice scales for 100 sessions. You can't "get in shape," but you can do 100 workouts.

The power isn't in the number 100 specifically, it's in the reframing of the problem into manageable bites. When you commit to doing something 100 times, three things happen:

- It becomes a small repeatable task. One presentation? Easy. One workout? Done. One paragraph? Please. You're not trying to build a business, you're just making today's call.
- You make room for being bad at it. Nobody expects call #3 to be perfect. You're learning. You're iterating. You have 97 more chances to figure it out.
- You build the rhythm that replaces motivation. By the time you hit repetition #30 or #40, you're no longer running on inspiration. You're running on momentum, on identity, on the simple fact that this is what you do now.

The swing didn't get built because I had sustained enthusiasm for furniture assembly. It got built because I found a repeatable motion and executed it dozens of times until the thing was done.

A few hours later, my boss walked by, did a double-take, and stared at the fully assembled love seat swing, gently swaying in the warehouse. "Wait. You built this? By yourself?" he asked. I just nodded, my hands raw, my shoulders aching, but my self-confidence boosted.

What I didn't tell him was that I succeeded not because I was an expert, not because I had some special talent for furniture assembly, not because I stayed motivated the entire time. I succeeded because I started before I knew the challenge, and I kept going by finding a rhythm within it.

The candle of motivation will burn out, that's guaranteed. But you're not building a swing. You're just tightening this one bolt. Then the next. And then the next. Before you know it, you'll look up and find the impossible thing complete, gently swaying before you. Built not by inspiration but by the simple, persistent act of showing up and doing the smallest next thing.

iDiallo 1 week ago

How to Get Started Programming: Build a Blog

The moment I learned how to program, I wanted to experiment with my new superpowers. Building a BMI calculator in the command line wouldn't cut it. I didn't want to read another book, or follow any other tutorial. What I wanted was to experience chaos. Controlled, beautiful, instructive chaos that comes from building something real and watching it spectacularly fail.

That's why whenever someone asks me how they can practice their newfound skill, I suggest something that might sound old-fashioned in our framework-obsessed world. Build your own blog from scratch. Not with WordPress. Not with Next.js or Gatsby or whatever the cool kids are using this week. I mean actually build it. Write every messy, imperfect line of code.

A blog is deceptively simple. On the surface, it's just text on a page. But underneath? It's a complete web application in miniature. It accepts input (your writing). It stores data (your posts). It processes logic (routing, formatting, displaying). It generates output (the pages people read).

When I was in college, I found myself increasingly frustrated with the abstract nature of what we were learning. We'd implement different sorting algorithms, and I'd think: "Okay, but when does this actually matter?" We'd study data structures in isolation, divorced from any practical purpose. It all felt theoretical, like memorizing chess moves without ever playing a game.

Building a blog changed that completely. Suddenly, a data structure wasn't just an abstract concept floating in a textbook. It was the actual list of blog posts I needed to sort by date. A database wasn't a theoretical collection of tables; it was the real place where my article drafts lived, where I could accidentally delete something important at 2 AM and learn about backups the hard way.

This is what makes a blog such a powerful learning tool. You can deploy it. Share it. Watch people actually read the words your code is serving up. It's real. That feedback loop, the connection between your code and something tangible in the world, is irreplaceable.

So how do you start? I'm not going to give you a step-by-step tutorial. You've probably already done a dozen of those. You follow along, copy the code, everything works perfectly, and then... you close the browser tab and realize you've learned almost nothing. The code evaporates from your memory because you never truly owned it. Instead, I'm giving you permission to experiment. To fumble. To build something weird and uniquely yours.

You can start with a single file. Maybe it's a PHP file that clumsily echoes "Hello World" onto a blank page. Or perhaps you're feeling adventurous and fire up a Node.js server, without Express, to handle a simple GET request. Pick any language you are familiar with and make it respond to a web request. That's your seed. Everything else grows from there.
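If Python happens to be the language you pick, that seed file can be as small as this (a minimal sketch using only the standard library; the equivalent PHP or Node.js file would be just as short):

```python
# blog.py - the smallest possible seed: one file, one route, no framework.
from http.server import BaseHTTPRequestHandler, HTTPServer

class BlogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"<h1>Hello World</h1><p>My blog starts here.</p>")

if __name__ == "__main__":
    # Visit http://localhost:8000 in a browser.
    HTTPServer(("", 8000), BlogHandler).serve_forever()
```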
Once you have that first file responding, the questions start arriving. Not abstract homework questions, but real problems that need solving. Where do your blog posts live? Will you store them as simple Markdown or JSON files in a folder? Or will you take the plunge into databases, setting up MySQL or PostgreSQL and learning SQL to store and retrieve your articles?

I started my first blog with flat files. There's something beautiful about the simplicity. Each post is just a text file you can open in any editor. But then I wanted tags, and search, and suddenly I was reinventing databases poorly. That's when I learned why databases exist. Not from a lecture, but from feeling the pain of their absence.

You write your first post. Great! You write your second post. Cool! On the third post, you realize you're copying and pasting the same HTML header and footer, and you remember learning something about DRY (don't repeat yourself) in class. This is where you'll inevitably invent your own primitive templating system. Maybe you start with simple PHP includes at the top of each page. Maybe you write a JavaScript function that stitches together HTML strings. Maybe you create your own bizarre templating syntax. It will feel like magic when it works. It will feel like a nightmare when you need to change something and it breaks everywhere. And that's the moment you'll understand why templating engines exist.

I had a few blog posts written down on my computer when I started thinking about this next problem: How do you write a new post? Do you SSH into your server and directly edit a file with vim? Do you build a crude, password-protected page with a textarea that writes to your flat files? Do you create a whole separate submission form? This is where you'll grapple with forms, authentication (or a hilariously insecure makeshift version of it), file permissions, and the difference between GET and POST requests. You'll probably build something that would make a security professional weep, and that's okay. You'll learn by making it better.

It's one thing to write code in a sandbox, but a blog needs to be accessible on the Internet. That means getting a domain name (ten bucks a year). Finding a cheap VPS (five bucks a month). Learning to SSH into that server. Wrestling with Nginx or Apache to actually serve your files. Discovering what "port 80" means, why your site isn't loading, why DNS takes forever to propagate, and why everything works on your laptop but breaks in production. These aren't inconveniences, they're the entire point. This is the knowledge that separates someone who can write code from someone who can ship code.

Your blog won't use battle-tested frameworks or well-documented libraries. It will use your solutions. Your weird routing system. Your questionable caching mechanism. Your creative interpretation of MVC architecture. Your homemade caching will fail spectacularly under traffic (what traffic?!). Your clever URL routing will throw mysterious 404 errors. You'll accidentally delete a post and discover your backup system doesn't work. You'll misspell a variable name and spend three hours debugging before you spot it. You'll introduce a security vulnerability so obvious that even you'll laugh when you finally notice it.

None of this is failure. This is the entire point. When your blog breaks, you'll be forced to understand the why behind everything. Why do frameworks exist? Because you just spent six hours solving a problem that Express handles in three lines. Why do ORMs exist? Because you just wrote 200 lines of SQL validation logic that Sequelize does automatically. Why do people use TypeScript? Because you just had a bug caused by accidentally treating a string like a number.

You'll emerge from this experience not just as someone who can use tools, but as someone who understands what problems those tools were built to solve. That understanding is what transforms a code-copier into a developer.

Building your own blogging engine used to be a rite of passage. Before Medium and WordPress and Ghost, before React and Vue and Svelte, developers learned by building exactly this. A simple CMS. A place to write.
Something that was theirs. We've lost a bit of that spirit. Now everyone's already decided they'll use React on the frontend and Node on the backend before they even know why. The tools have become the default, not the solution. Your blog is your chance to recover that exploratory mindset. It's your sandbox. Nobody's judging. Nobody's watching. You're not optimizing for scale or maintainability or impressing your coworkers. You're learning, deeply and permanently, by building something that matters to you. So here's my challenge: Stop reading. Stop planning. Stop researching the "best" way to do this. Create a folder. Create a file. Pick a language and make it print "Hello World" in a browser. Then ask yourself: "How do I make this show a blog post?" And then: "How do I make it show two blog posts?" And then: "How do I make it show the most recent one first?" Build something uniquely, personally, wonderfully yours. Make it ugly. Make it weird. Make it work, then break it, then fix it again. Embrace the technical chaos. This is how you learn. Not by following instructions, but by discovering problems, attempting solutions, failing, iterating, and eventually (accidentally) building something real. Your blog won't be perfect. It will probably be kind of a mess. But it will be yours, and you will understand every line of code in it, and that understanding is worth more than any tutorial completion certificate. If you don't know what that first blog post will be, I have an idea. Document your process of building your very own blog from scratch. The blog you build to learn programming becomes the perfect place to share what programming taught you. Welcome to development. The real kind, where things break and you figure out why. You're going to love it.
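To make the flat-file storage and homemade-templating steps above concrete, here is one possible shape for that first iteration (a minimal sketch, assuming posts live as plain text files in a hypothetical posts/ folder with the title on the first line; the file names and layout are illustrative, not a prescription):

```python
# render.py - list posts from flat files, newest first, with a crude shared layout.
from pathlib import Path

LAYOUT = "<html><body><header>My Blog</header>{content}<footer>Built by hand</footer></body></html>"

def load_posts(folder="posts"):
    posts = []
    for path in Path(folder).glob("*.txt"):
        title, _, body = path.read_text(encoding="utf-8").partition("\n")
        posts.append({"title": title, "body": body, "modified": path.stat().st_mtime})
    # Most recent post first: the "sort by date" problem from the data structures class.
    return sorted(posts, key=lambda p: p["modified"], reverse=True)

def render_index():
    articles = "".join(f"<article><h2>{p['title']}</h2><p>{p['body']}</p></article>"
                       for p in load_posts())
    return LAYOUT.format(content=articles)

if __name__ == "__main__":
    Path("index.html").write_text(render_index(), encoding="utf-8")
```

It works until you want tags, search, or two posts published the same minute, which is exactly when the databases and templating engines the post mentions start to make sense.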

iDiallo 1 week ago

Why You Can't Be an Asshole in the Middle

On the first day on the job, the manager introduced me to the team, made a couple of jokes, then threatened to fire someone. At first, I thought it was just his sense of humor, that it was something I would understand once I worked long enough on the team. But no one else laughed. The air in the meeting room became stiff as he rambled about issues we had. The next Monday morning, he did it again. Now I was confused. Was I being hazed? No. Because he did it again the following Monday. He was an asshole. But he wasn't just any asshole. He thought he was Steve Jobs.

Steve Jobs was a difficult person to work with. He was brutally honest, he could bend wills, shatter egos. Yet, he was also enshrined as one of the greatest business leaders of our time. He was the visionary who resurrected Apple and gave us the iPhone. My manager wasn't alone in his delusion. Like many professionals who find themselves in a people manager's position, they look at Jobs and think that being a brilliant jerk is a viable path to success. "The results speak for themselves. Maybe I need to be tougher, more demanding, less concerned with feelings." What they fail to see is that they are not Steve Jobs. And unless you're the CEO at the helm, acting like him is not a superpower.

When you're a mid-level manager, you're not the Captain. You're a member of the crew. The difference between being an asshole at the top versus being an asshole in the middle comes down to authority, autonomy, and consequences.

Jobs was the Captain. As the founder and CEO, he was the ultimate source of authority and vision. His difficult personality was inseparable from the company's mission. People tolerated his behavior because they bought into his vision of the future. He had the final say on hiring, firing, and strategy. His presence was the gravitational force around which the entire company orbited. When the captain is an asshole, the crew might stay for the voyage. When a fellow crewmate is an asshole, they get thrown overboard.

A mid-level manager is a key member of the crew, but you are not the ultimate authority. Your colleagues in engineering, marketing, and sales don't report to you out of reverence for your world-changing vision; they collaborate with you to achieve shared company goals. Your power is not absolute; it's influence-based. And that changes everything.

For Steve Jobs, it's not that being an asshole was his secret sauce. It's that his unique position allowed him to survive the downsides of his personality. He was building his vision of the future. For every person he drove away, another was drawn to the mission. It was impossible to fire him (a second time). He could fire people, and he could make them millionaires with stock options. The potential upside made the toxicity tolerable.

The part of the story that often gets omitted is that Jobs had a cleanup crew. Behind his grandiose ideas and abrasive personality, there were people who handled the operations and relationship-focused work he didn't have time for. That's what Tim Cook was for. Tim Cook smoothed over the conflicts, built the partnerships, and kept the machine running while Jobs played visionary. As a mid-level manager, you don't have a Tim Cook, do you?

As a mid-level manager, your "because I said so" doesn't have the same weight. Anyone one level above your position can contradict you. When the CEO is harsh and demanding, it gets labeled as visionary leadership.
The same behavior from a mid-level manager is seen for what it is: poor communication and a lack of respect. Your influence is much smaller than that of the person at the helm. You need favors from other departments, buy-in from your peers, and discretionary effort from your team. Being difficult burns bridges, creates resentment, and ensures that when you need help, no one will be in a hurry to give it. Your "brilliant" idea dies in a meeting room because you've alienated the very people needed to execute it.

Your tools are limited. You can't promise life-changing wealth, and while you can influence promotions or terminations, the process is often layered with HR policies and approvals. Using fear as your primary tool without having ultimate control just creates a culture of anxiety and quiet quitting, not breakthrough innovation. Collaboration is your strength, and you're actively undermining it.

When we had layoffs at my company, my manager was first on the list to get the boot. I can't say that his "assholery" was what put him on the list, but it certainly didn't help. No one went to bat for him. No one argued that he was indispensable. The bridges he'd burned came back to haunt him.

Your success as a mid-level manager depends on your ability to influence, inspire, and collaborate. You can't demand greatness; you have to cultivate it. And you can't do that from behind a wall of arrogance and fear. In the real world, building bridges will always get you further than burning them. At work, be the leader people actually want to follow.

iDiallo 1 week ago

Can You Build a TikTok Alternative?

Whenever a major platform announces changes, the internet's response is predictable: "Let's just build our own." I remember the uproar when Facebook introduced Timeline. Users threatened boycotts and vowed to create alternatives. The same pattern emerged with Stack Overflow. There were countless weekend-clone attempts that promised to be "better." Back then, building an alternative felt possible, even if most attempts fizzled out.

Now, with TikTok's American operations being sold to Oracle and inevitable changes on the horizon, I find myself asking one question. Is it actually possible to build a TikTok alternative today?

The answer depends entirely on who's asking. A well-resourced tech company? Absolutely. We've already seen Facebook, YouTube, and others roll out their own short-form video features in months. But a scrappy startup or weekend project? That's a different story entirely. As someone who doesn't even use TikTok, I'm exploring this purely for the technical and strategic challenge. So let's approach this like a mid-level manager tasked with researching what it would actually take. It's interesting to think about cost or technology stack, but I think the most critical part of TikTok isn't its code at all.

On the surface, TikTok does two things: it lets you record a video, then shares it with other users. That's it. You could argue that Facebook, YouTube, and Instagram do the same thing. And you'd be right. This surface-level replication is exactly why every major platform (Reels, Shorts, etc.) launched their own versions within months of TikTok's explosion. Creating a platform that records and shares videos is straightforward for a large company. The technical pattern is well-established. But that surface simplicity is deceiving. Because video, at scale, is one of the hardest technical problems in consumer tech.

Let me put video complexity in perspective. All the text content on my blog compiled over 12 years totals about 10 MB. That's the size of a single photo from my smartphone. A single TikTok video, depending on length and resolution, easily exceeds that. Now multiply that by millions of uploads per day.

Building an app with TikTok's core features requires significant upfront investment:

- Development costs: Vibe coding won't cut it. You need to hire people.
- Team requirements: You'll need experienced teams that can build and optimize for each app ecosystem. Frontend and backend developers, UI/UX designers, QA engineers.
- Mandatory features: Video recording/editing with effects and filters, AI-powered recommendation engine, live streaming, duets/stitches, social graph and sharing, content moderation systems.

These aren't optional. The format is established, the bar is set. You can't launch a "minimum viable" short-form video app in 2025. Users expect the full feature set from day one.

Video processing is not as simple as it seems. You could build wrappers around FFmpeg, but building fast and reliable encoding, streaming, and formatting demands more than just a wrapper. In my previous exploration of building a YouTube alternative, I concluded it was essentially impossible for two reasons:

- It's expensive to host videos at scale.
- It's even more expensive to deal with copyright issues.

TikTok operates at a smaller scale than YouTube, but those fundamental challenges remain. You need serious capital to even start.

You can build the platform, but you can't build the phenomenon. TikTok's true competitive advantage has nothing to do with its codebase. It's technically a Snapchat clone. What makes TikTok impossible to displace is its cultural gravity.

TikTok isn't just a video app. It's the most powerful music discovery platform. It turned Lil Nas X's "Old Town Road" into a global phenomenon and resurrected Fleetwood Mac's "Dreams" 43 years after release. Artists now strategically release "sped-up" versions specifically formatted for TikTok trends. Record labels monitor the platform more closely than radio. Your alternative app might have better video processing, but it won't make hits.

For younger users, TikTok has replaced Google for everything from recipe searches to news discovery. But it's more radical than that. Google evolved from a search engine to an answer engine, attempting to provide direct answers rather than just links. TikTok takes this evolution further by becoming a serve engine. You don't find content, content finds you. You open the app and scroll. No search queries, no browsing, no active seeking. The algorithm serves you exactly what it thinks you want to see, refining its understanding with every swipe. Users aren't searching for vibes and aesthetics; they're being served in an endless, personalized stream. Your alternative can't replicate this with a better algorithm alone. You need millions of users generating behavioral data to train on.

On TikTok, "microtrends" emerge, peak, and die within weeks, fueling entire industries. Restaurant chains now add viral menu items to permanent offerings. Fast fashion brands monitor TikTok trends in real-time. Your alternative might have a great feed algorithm, but it won't move markets.

On TikTok, you can watch three seconds of a video and instantly identify it as TikTok content before seeing any logo. The vertical format, the quick cuts, the trending sounds, the text overlays. It's a distinct design that users have internalized. I'm not interested in creating TikTok content, but the more important truth is that TikTok isn't interested in the content I would create. The platform has defined what it is, and users know exactly what they're getting. Any alternative must either copy this completely (making it pointless) or define something new (requiring the same years-long cultural adoption TikTok achieved).

Technical replication of TikTok is expensive but achievable for a well-resourced company. But the insurmountable barrier isn't the code; it's the immense cultural inertia. To compete, you wouldn't just be building a video app. You'd need to simultaneously displace TikTok as:

- A music discovery platform
- A search engine for Gen Z
- A trendsetter driving consumer behavior
- A community hub with established creator ecosystems

You're not building a better mousetrap. You're trying to convince an entire ecosystem to migrate to an empty platform with no culture, no creators, and no communities.

For a genuine alternative to emerge, the strategy can't be "TikTok but slightly different." It must be "TikTok completely neglected this specific use case, and we're going to own it entirely." Or alternatively, people may react negatively to the acquisition by Oracle. As a developer, no Oracle software inspires me. I hope this will serve as inspiration to build a better alternative. Not just an expensive ghost town with excellent video processing.
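As a footnote to the video-processing point above, the naive "wrapper around FFmpeg" looks roughly like this (a minimal sketch assuming the ffmpeg binary is installed; the resolution, codec, and HLS settings are illustrative, and the hard part, doing this reliably for millions of uploads a day, is exactly what it leaves out):

```python
# transcode.py - naive short-form video pipeline: normalize to vertical 720x1280 H.264,
# then package as HLS for streaming. The easy part; scale is the hard part.
import subprocess
from pathlib import Path

def transcode(src: str, out_dir: str) -> Path:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    mp4 = out / "video.mp4"
    playlist = out / "index.m3u8"

    # 1. Re-encode to a consistent vertical format and codec.
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-vf", "scale=720:1280:force_original_aspect_ratio=decrease,pad=720:1280:(ow-iw)/2:(oh-ih)/2",
        "-c:v", "libx264", "-preset", "fast", "-c:a", "aac",
        str(mp4),
    ], check=True)

    # 2. Segment into HLS chunks so playback can start before the whole file arrives.
    subprocess.run([
        "ffmpeg", "-y", "-i", str(mp4),
        "-c", "copy", "-f", "hls", "-hls_time", "4", "-hls_playlist_type", "vod",
        str(playlist),
    ], check=True)
    return playlist
```

A real pipeline adds upload queues, retries, multiple renditions per video, thumbnail extraction, moderation hooks, and CDN distribution, which is where the cost lives.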

iDiallo 2 weeks ago

AI Video Overview

Google is creating a bigger and wider chasm between users and the source of data. Currently, my blog's traffic from Google searches has dropped significantly since AI Overviews launched. Where users once clicked through to read my articles, they now get their answers directly from Google's AI summary and never visit the source.

Before, it was straightforward: you searched for something, Google showed the website that had the information, and you clicked on it. Now, when you search for information, you're presented with an AI Overview that tries to answer your search query. This is fine from a user's standpoint. You had a question, now you have an answer. But who answered your question? Google crawls the web, finds websites that have the information you need, then summarizes them neatly for end-users. The problem is, with AI summaries, you never get to see the source information. Sure, there's a small link behind a collapsible menu, but it now means you rarely click on links anymore. Links, the very thing that made the web hyperconnected, take a back seat.

Long term, since users aren't clicking on links, there are fewer incentives for anyone to create content. And Google will eventually have to find a way to source content from somewhere. But before we get there, I want to put my cards on the table. The next frontier for Google Search is video. And the technology is already here.

For videos, Google often presents a YouTube video in the search results and highlights the part that's relevant to your search query. You still watch the video, and there are still incentives for the person who created the instructional video to continue doing so. The creator gets views, ad revenue, subscribers, etc. The ecosystem still works. When you search for "how to fix a leaky faucet," Google shows you a YouTube video and jumps to the 2:30 mark where the actual fix is demonstrated. You watch that person's content, maybe subscribe, maybe watch their other videos. They directly benefit. But this is just the stepping stone to something much bigger. What happens when Google starts showing AI Video Overviews?

A few years back, I wrote about how YouTube uses machine learning to predict the most likely video you will want to watch. Their goal is to keep you on the platform for as long as possible. Based on your history, and that of millions of people sharing the same watch pattern, they keep you watching by recommending the most appealing next videos. Earlier this year, I wrote that Google (through YouTube) has all the ingredients to create the perfect video for you. In my article "The Perfect YouTube Video", I explored how YouTube tracks every aspect of how you watch their video and react to it. Using the different data points you generate, they could prompt Veo (Google's video generator) to create the perfect video for you. A video so enticing that you'd have a hard time skipping it.

This might not have been possible when I wrote that article, but at the rate AI is progressing, I wouldn't be surprised if in a couple of years Veo creates video in real time. Now, Google has Genie 3, an impressive world-building model that creates a world you can navigate in real time. It operates at 720p resolution and 24 frames per second. Combine this with Veo's video generation capabilities, and you have all the ingredients needed to create real-time AI Overview videos.

Here is what Google's AI can extract from videos right now:

- Visual elements: Objects, tools, ingredients, environments, lighting setups
- Actions and sequences: Step-by-step processes, timing, hand movements
- Audio content: Narration, background music, sound effects
- Text overlays: Ingredient lists, measurements, temperature settings
- Style and presentation: Camera angles, pacing, editing choices

And then here is what they can generate:

- Realistic environments: Through Genie 3's world modeling
- Human avatars: With perfect lip-sync and natural movements
- Coherent narratives: Combining information from multiple sources
- Optimal pacing: Based on user engagement data

Let's walk through a scenario. You have some free time today, and you finally want to try your hand at baking cookies. You search for a recipe online, and Google gives you the ingredients from an Old Family Recipe, the link buried somewhere below. Now, you go to the store, buy the ingredients, and you're in your kitchen wearing your apron and chef's hat, ready to bake some cookies. This time you Google "how to bake cookies." You're presented with a wall of text from the AI Overview listing those same ingredients you bought before. But you're not much of a chef or a reader. Instead, you want to see how the cookies will look because you're a visual learner.

What's that in the top right corner? A new Google feature? It says "AI Video Overview." You click the button and a new window appears. It loads for just 15 seconds, and you're presented with a hyper-realistic kitchen, with an AI-generated avatar narrating the steps with perfect lip-sync and text overlays listing ingredients. The video is just 30 seconds, cutting all the fluff usually found on cooking channels. In a 30-second video that you can scrub through, you can see all the steps for baking your cookies. Of course, at the end of the video there's a card that appears where you can click and see the source videos Google used to generate this 30-second clip. But who clicks on that? There is a rise in zero-click searches.

This will be extremely convenient for users. Why waste time hearing all the fluff and a Nord VPN sponsorship when all you need is the steps to bake? But here is what will remain unseen:

- The cooking channels that spent hours creating detailed tutorials become invisible training data.
- Their personality, expertise, and hard work get synthesized into a "perfect" but soulless AI version.
- Users get their answer without ever engaging with the original source. No views, no ad revenue, no subscribers for the people who actually created the knowledge.

This isn't science fiction. Yes, it doesn't exist just yet. But it's the logical next step in Google's evolution from search engine to answer engine. Just as my blog now gets fewer clicks because people read the AI Overview instead of visiting my site, video creators will soon face the same reality.

The old value exchange model of the internet is breaking down. We were used to Google sending traffic our way when we created high-quality information that helped users. As a reward, we got views, revenue, and built a following. With the new model: Google uses our content as training data → AI generates competing content → Users get information → We get nothing. Sure, there will be attribution buried in a menu somewhere, just like there is for text overviews now. But when was the last time you clicked on those source links after reading an AI summary?

The chasm between users and creators isn't just widening. It's becoming a canyon. And unlike text, where you might still want to read the original article for depth or personality, AI video overviews will be so polished and efficient that there will be even less reason to click through to the source. For video creators, what's your value when an AI can synthesize your expertise, replicate your techniques, and present them more efficiently than you ever could? The future may lie in what AI cannot easily replicate. Like live interaction, community building, unique personality, and the kind of deep, original insight that goes beyond answering simple informational queries.

I understand that it might take another leap in efficiency before these videos can be generated in real time, but the work is being done. All the major AI players are heavily investing in more data centers and research to improve their products. But first, we need to acknowledge what's happening. Google is building a world where your content fuels their answers, but your audience never finds you.

0 views
iDiallo 2 weeks ago

The Internet Is Powered by Generosity

When I arrived in the US, one of the first things I looked for was an Internet Cafe. I wanted to chat with my family and friends, read about my neighborhood and school, and keep up with the world I'd left behind. But there was one thing that always bothered me in these public spaces: the counter in the bottom right corner of the screen, counting down to let me know how much time I had left. Today, that world has vanished. Internet access is widely available, and we don't need Internet Cafes anymore. The internet has become invisible, so seamlessly integrated into our lives that we forget how this whole system actually works. When you type a question into your phone, it feels like the entire transaction happens right there in your device. But that answer isn't a product of your phone; it emerges from an invisible ecosystem built on the generosity of countless people you'll never meet. Imagine for a second that every time you visited a website, a tiny meter started ticking. Not for the content you're viewing, but for the very software the site runs on. A "Windows Server Tax" here, an "Oracle Database Fee" there. The vibrant, chaotic, creative web we know simply wouldn't exist. The number of blogs, small businesses, niche communities, and personal projects would shrink dramatically. The internet would become a sterile mall of well-funded corporations. (AOL?) But that's not our reality. Instead, the internet runs on something we often overlook: radical, uncompensated generosity. Most servers, cloud instances, and even Android phones run on Linux. Linux is freely given to the world. The software that delivers web pages to your browser? Apache and NGINX, both open source. Those AI-generated summaries you see in Google? They often draw from Wikipedia, edited and maintained by volunteers. OpenSSL, as its name suggests, is open source and protects your private data from prying eyes. When you're troubleshooting that coding problem at 2 AM, you're probably reading a blog post written by a developer who shared their solution simply to help others. This generosity isn't just about getting things for free, it's about freedom itself. When software is "free as in speech," it means you're not the product, your data isn't being harvested, and you have the liberty to use, study, modify, and share these tools. This is the essence of Linux, Wikipedia, and the core protocols that make the internet possible. People contribute to these projects not primarily for money, but out of passion, the desire to build recognition, and the genuine wish to help others and contribute to the commons. It's a gift economy that creates abundance rather than scarcity. This generous foundation is what allows the commercial web to flourish on top of it. A startup doesn't need to spend millions on operating system licenses before writing their first line of code. They can build on Linux, use MySQL for their database, and leverage countless other open-source tools, focusing their capital and energy on their unique idea. Building a website isn't a massive financial decision. It's a creative one. The barrier to entry is nearly zero, and that's a direct result of open-source generosity. But this entire system rests on something even more fundamental: trust. When you visit my website, you trust me. You trust that the HTTPS lock icon means your data is safe, thanks to the open-source OpenSSL library. You trust that I'm not hosting malware. 
When you read a Wikipedia article, you trust (with healthy skepticism) that volunteers are aiming for accuracy, not pushing an agenda. As a developer, I trust that the open-source tools I use are reliable and secure. I trust that the community will help me when I'm stuck. This trust is the currency that keeps the open web functioning. Obligatory Clay Shirky video: Love, Internet Style.

So what does this mean for you and me? We can continue this tradition of generosity that built the foundation we all rely on. The next time you solve a tricky problem, consider writing a short blog post about it. Your generosity might save someone else hours of frustration. When Wikipedia helps you research that obscure topic, consider making a small donation. It's a tiny price for access to a Library of Alexandria. If your company uses open-source software, consider contributing code back or sponsoring a developer. Help maintain the engine you depend on.

The internet is a miracle of collaboration, a testament to the idea that when we give freely, we don't deplete our resources. Instead, we create an ecosystem where everyone can build, learn, and connect. It runs on generosity. The least we can do is acknowledge it and, wherever possible, add our own contribution to the commons.

1 views
iDiallo 2 weeks ago

Users Only Care About 20% of Your Application

I often destroyed our home computer when I was a kid. Armed with only 2GB of storage, I'd constantly hunt for files to delete to save space. But I learned the hard way that files are actually important. After the computer failed to boot, I would have to reinstall Windows and Office 97. My father spent countless hours in the Office Suite and always reminded me to make sure I installed MS Excel. I didn't understand what it was for. The interface looked very confusing to me.

But then one day, I was writing some gibberish in Word and wanted to add a table. I didn't know how to add a table in Word. I asked my father, and he didn't know how to do it in Word either, but he had a trick: if you open Microsoft Excel, you can copy tables from Excel and paste them into Word. So in my mind, the only reason Excel existed was to copy and paste tables into Word. Excel has a million and one features, but that was the one that mattered to me.

I told this story to someone recently, and he said he didn't even know you could do that with Excel. He used it for tracking personal expenses. Everyone who uses Excel has their own needs that may or may not overlap with those of other users. There is a broader truth about software usage that follows something like the 80/20 principle: most users will only ever use about 20% of your application's features, but each user uses a different 20%. The writer uses Word for drafting but never touches mail merge. The analyst uses Excel for pivot tables but never for scripting. The PowerPoint user never animates a single object. They are all using a different slice of the same monolithic suite, and each thinks their slice is the most important part.

When Microsoft releases new updates to their Office suite, many people get annoyed that their application is now bloated or that their personal workflow is now broken. Why is the application slower? Why are there so many new features that no one cares about? Why does it consume so much memory? It's not just that users don't use the other 80%, they may actively resent it for getting in the way of their 20%.

This isn't something unique to Microsoft. I often get frustrated with Google Search. Sometimes I want to search by exact keywords, but Google will try to find "related words" even when I use double quotes. I understand that for the majority of use cases, people find what they need on Google, or Google wouldn't dominate search like it does today. But that doesn't make my experience any less frustrating. Complaints like mine often get dismissed as requests from the "1% of users" who want features that won't move the business needle. But 1% of a billion users is still ten million people.

Recently, I read a comment from Vlad Prelovac, the CEO of Kagi (a paid search engine focused on quality results without ads or tracking). He realized that Google's small percentage of dissatisfied users (power users tired of SEO spam, privacy-conscious individuals, researchers needing precise results) represented a large, untapped, and potentially profitable market. Kagi didn't need to beat Google for everyone; they just needed to serve that specific slice of users perfectly. They focused on being the ideal 20% for people whose 20% Google was ignoring.

Several disruptive companies were born this way. They identify a segment that a giant is ignoring and cater to them so well that they build a loyal, often profitable, user base. Figma didn't need to replace all of Adobe's creative tools, they just needed to nail collaborative design better than Adobe.
Notion didn't need to be the best word processor or the best database, they just needed to be the best hybrid tool for teams that needed both. As successful software inevitably accumulates features and complexity, it creates gaps. Users whose specific 20% gets buried under layers of "improvements" meant for other users start looking for alternatives. This is where opportunities emerge.

The ability of open-source software to be optimized for specific use cases is one of its biggest strengths here. A developer can take FFmpeg or Blender and create a custom build that strips out everything except the tools needed for a particular workflow, like a version of Blender optimized solely for architectural visualization.

You can see this philosophy working beautifully in VS Code. No two people use VS Code the same way. It starts as a simple text editor, so everyone's core 20% is roughly the same basic functionality. Then, through extensions, each developer builds their own perfect environment. The base stays lean, but everyone can customize their personal 20% exactly how they want it. Slack works similarly with its integrations. Discord does this with bots and servers. The platform provides the foundation, but users craft their own experience on top of it. You can't predict exactly which 20% each user will need, but you can build systems that let them find and enhance their own slice.

The goal isn't to build a product that does everything for everyone. That's how you end up with bloated software that frustrates more users than it delights™. The goal is to build a product that does the right thing for each user, even if that means accepting that they'll ignore most of what you've built.

This might seem wasteful, but it's actually liberating. Instead of trying to make every feature appeal to every user, you can focus on making sure each feature works well for the users who actually need it. Instead of fighting feature bloat, you can embrace the idea that your software will be used in ways you never imagined. By accepting that everyone only partially cares about your software, you can stop trying to make them care about all of it. You can finally start building the part they'll truly love. Even if you're not sure which part that will be.

1 views
iDiallo 2 weeks ago

How to Lead in a Room Full of Experts

Here is a realization I came to recently. I'm sitting in a room full of smart people. On one side are developers who understand the ins and outs of our microservice architecture. On the other are the front-end developers who can debug React in their sleep. In front of me is the product team that has memorized every possible user path that exists on our website. And then, there is me. The lead developer. I don't have the deepest expertise on any single technology. So what exactly is my role when I'm surrounded by experts?

Well, that's easy. I have all the answers.

OK. Technically, I don't have all the answers. But I know exactly where to find them and how to connect the pieces together. When the backend team explains why a new authentication service would take three weeks to build, I'm not thinking about the OAuth flows or JWT token validation. Instead, I think about how I can communicate it to the product team who expects it done "sometime this week." When the product team requests a "simple" feature, I'm thinking about the 3 teams that need to be involved to update the necessary microservices. Leadership in technical environments isn't about being the smartest person in the room. It's about being the most effective translator.

I often get eye rolls when I say this to developers: you are not going to convince anyone with facts. In a room full of experts, your technical credibility gets you a seat at the table, but your social skills determine whether anything productive happens once you're there. Ideally, you'd provide documentation that everyone can read and understand; in reality, you need to talk to people to get them to understand. People can get animated when it comes to the tools they use. When the database team and the API team are talking past each other about response times, your role isn't to lay down the facts. Instead it's to read the room and find a way to address technical constraints and unclear requirements. It means knowing when to let a heated technical debate continue because it's productive, and when to intervene because it's become personal.

When you are an expert in your field, you love to dive deep. It's what makes you an expert. But someone needs to keep one eye on the forest while everyone else is examining the trees. I've sat through countless meetings where engineers debated the merits of different caching strategies while the real issue was that we hadn't clearly defined what "fast enough" meant for the user experience. The technical discussion was fascinating, but it wasn't moving us toward shipping. As a leader, your job isn't to have sophisticated technical opinions. It's to ask how this "discussion" can move us closer to solving our actual problem.

When you understand a problem, and you have a room full of experts, the solution often emerges from the discussion. But someone needs to clearly articulate what problem we're actually trying to solve. When a product team says customers are reporting the app is too slow, that's not a clear problem. It's a symptom. It might be that users are not noticing when the shopping cart has loaded, or that an event is not being triggered at the right time. Or maybe the app feels sluggish during peak hours. Each of those problems has different solutions, different priorities, and different trade-offs. Each expert might be looking at the problem through their own lens, and may miss the real underlying problem.
Your role as a leader is to make sure the problem is translated in a way the team can clearly understand. By definition, leading is knowing the way forward. But in reality, in a room full of experts, pretending to know everything makes you look like an idiot. Instead, "I don't know, but let's figure it out" becomes a superpower. It gives your experts permission to share uncertainty. It models intellectual humility. And it keeps the focus on moving forward rather than defending ego.

It's also an opportunity to let your experts shine. Nothing is more annoying than a lead who needs to be the smartest person in every conversation. Your database expert spent years learning how to optimize queries; let them be the hero when performance issues arise. Your security specialist knows threat models better than you; give them the floor when discussing architecture decisions. Make room for productive discussion. When two experts disagree about implementation approaches, your job isn't to pick the "right" answer. It's to help frame the decision in terms of trade-offs, timeline, and user impact. Your value isn't in having all the expertise. It's in recognizing which expertise is needed when, and creating space for the right people to contribute their best work.

There was a fun blog post I read recently about how non-developers read tutorials written by developers. What sounds natural to you can be complete gibberish to someone else. As a lead, you constantly need to think about your audience. You need to learn multiple languages to communicate the same thing:

Developer language: "The authentication service has a dependency on the user service, and if we don't implement proper circuit breakers, we'll have cascading failures during high load."

Product language: "If our login system goes down, it could take the entire app with it. We need to build in some safeguards, which will add about a week to the timeline but prevent potential outages."

Executive language: "We're prioritizing system reliability over feature velocity for this sprint. This reduces the risk of user-facing downtime that could impact revenue."

All three statements describe the same technical decision, but each is crafted for its audience. Your experts shouldn't have to learn product speak, and your product team shouldn't need to understand circuit breaker patterns. But someone needs to bridge that gap.

"I'm the lead, and we are going to do it this way." That's probably the worst way to make a decision. It might work in the short term, but it erodes trust and kills the collaborative culture that makes expert teams thrive. Instead, treat your teams like adults and communicate the reasoning behind your decisions.

The more comfortable you become with not being the expert, the more effective you become as a leader. When you stop trying to out-expert the experts, you can focus on what expert teams actually need. Your role isn't to have all the answers. It's to make sure the right questions get asked, the right people get heard, and the right decisions get made for the right reasons.

Technical leadership in expert environments is less about command and control, and more about connection and context. You're not the conductor trying to play every instrument. You're the one helping the orchestra understand what song they're playing together. That's a much more interesting challenge than trying to be the smartest person in the room.

0 views
iDiallo 3 weeks ago

The Great AI Filter

Google just released the future of their Chrome browser. To put it simply, it's AI everything. Meta also released their new smart glasses, complete with a "neural" wristband for input. It too is AI everything. The more I watched these product launches, with their proclamations about the future, the more I was reminded of this observation from Catch-22: "While none of the work we do is very important, it is important that we do a great deal of it."

Everyone wants to use AI to do things for me that I actually enjoy doing. They also want to automate the tasks I don't like doing, things I wouldn't bother with in the first place. Chrome's new Agentic browsing promises to, well, browse the web for me. But wait. I want to browse the web. That's the point. I want to read and discover the web. I want to stumble upon rabbit holes, I want to understand and connect ideas. When I search for something, I'm not just seeking an answer; I'm seeking the journey to that answer. What Chrome is really offering isn't convenience. Chrome will wrap the web into a Google product and serve it to me algorithmically. It's a TikTokification of the web. On TikTok, you don't choose what to watch; the algorithm serves you what it believes you should consume next. When Chrome browses for me, it will surface what it wants me to see, filtered through corporate priorities, advertising relationships, and engagement metrics. Over time, through a kind of digital Stockholm syndrome, I'll start believing these curated choices reflect my authentic preferences. I'll mistake algorithmic manipulation for personal agency.

Most AI startups aren't much different. They're building what I call "wrapper products": sleek interfaces layered over existing AI models like ChatGPT or Claude, promising specialized functionality that I could probably achieve on my own with a well-crafted prompt. A $50/month "AI writing assistant" that essentially runs your text through GPT with a custom system prompt. An "AI research tool" that performs Google searches and summarizes results. An "AI productivity app" that schedules your tasks using the same reasoning capabilities you could access directly. (I'll sketch below just how thin these wrappers can be.) These products often solve problems their creators assume you have, rather than problems you actually experience. They're solutions in search of problems, built by teams who've mistaken the impressive capabilities of foundation models for proof that any application built on top of them will be equally impressive.

Side note: I know every demo shows how AI can help you cook or follow a recipe. But to tell you the truth, following a recipe is not a challenge most people have. The solution is in the name: follow the recipe!

As much as I hate participating in the hype cycle, I've come to believe that these wrappers, vaporware, and unrealistic promises serve an important function. Not because their individual features matter, but because collectively, they help us filter out what AI is not supposed to be. We were promised the most transformative technology since electricity. Instead, we've been presented with an avalanche of shoddy AI tools that automate the wrong things, complicate simple processes, and solve problems that were better left unsolved. I'm looking at you, Copilot. Yet they're useful as a collective learning experience. They illuminate the vast chasm between hyped-up novelty acts and genuine, transformative applications. Every failed AI gadget teaches us something about the difference between technological capability and actual utility.
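To make the "wrapper product" point concrete, here is a minimal sketch of what many of these tools amount to under the hood. This is an illustration, not any particular product: it assumes the official openai Node.js client (v4+), an OPENAI_API_KEY in your environment, and a placeholder model name and system prompt.

```js
// A minimal "AI writing assistant" wrapper: one API call, one custom system prompt.
// Assumes `npm install openai` and an OPENAI_API_KEY environment variable.
import OpenAI from "openai";

const client = new OpenAI();

async function improveMyWriting(text) {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    messages: [
      {
        role: "system",
        content:
          "You are a professional writing assistant. Rewrite the user's text for clarity and tone.",
      },
      { role: "user", content: text },
    ],
  });
  return response.choices[0].message.content;
}

improveMyWriting("teh quick brown fox jumps over the lazy dog").then(console.log);
```

That is more or less the entire product. Everything else is a landing page and a pricing tier.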
When the MIT study came out, the one that found 95% of AI initiatives across 300 companies failed to be useful, all I could think about was: what were the successful use cases in the remaining 5%? Looking through the study, it seems like what they did was automate repetitive and tedious tasks. They used a bottom-up approach, where frontline employees drove the adoption of tools to make their jobs easier. They partnered with vendors who integrated AI with their existing tools, like call summarization and routing for customer service. In other words: when the AI bubble eventually deflates (and it will), these are the applications that will remain standing. Useful tools won't always look impressive in a product demo or generate viral social media posts. Their success stories will be small percentage improvements in efficiency that add up over time.

The concept of filters in technology isn't new. The dot-com boom gave us Pets.com alongside Amazon. The mobile app gold rush produced millions of forgotten apps alongside Uber and WhatsApp. Each cycle of technological hype includes a massive filtering process where the market eventually separates genuine innovation from opportunistic noise. AI is undergoing the same process, but at unprecedented scale and speed. The current proliferation of AI products represents our collective attempt to understand what this technology is actually good for. Every overengineered AI feature that gets quietly deprecated teaches the entire ecosystem something valuable. Every startup that discovers their AI wrapper provides no real value helps establish boundaries around what constitutes genuine utility. Every user who grows frustrated with intrusive AI assistance helps calibrate our understanding of when algorithmic help becomes algorithmic hindrance.

So in a way, none of the work being done by the 95% may be individually important, but it's important that a great deal of it is being attempted. It's a necessary waste. It's the noise that allows genuine signals to emerge. The real breakthroughs won't come from the companies promising to revolutionize everything. They'll come from the teams quietly solving specific problems with appropriate tools, building on solid foundations rather than chasing the latest trend.

The great AI filter is working exactly as it should. Most of what's being built will fail, fade, or pivot into irrelevance. But in that process of collective experimentation and failure, the signal will emerge from the noise. It always does. And when it does, it will be worth the wait.

The study: State of AI in Business 2025

0 views
iDiallo 3 weeks ago

Building a Habit for Inaction

After walking up a hill and finding myself completely out of breath, I knew it was time for a change. Now, the moment I get home from work, I lace up my running shoes and hit the pavement. After a few weeks, my body started to crave that movement; I couldn't stop even if I wanted to. It became automatic. When you set your bedtime to 10 PM and follow it religiously, as soon as 9:45 hits, your eyelids grow heavy. When lunch is at noon, your stomach starts rumbling at 11:58. That's the result of forming a habit. Repetitive actions carve neural pathways in your brain until they become second nature.

But what if what you're trying to do is abstain? To form a habit of not doing something? Does inaction build muscle memory? I don't want to eat sugar. But how do I form the habit of not eating it? How do you train your brain to automatically avoid something when avoidance itself feels like... nothing?

"Not doing" is exponentially harder to habituate than "doing" because our brains are wired to reinforce actions through repetition, cues, and rewards. Inaction doesn't give the brain the same tangible loop to latch onto. The solution isn't to fight this wiring, it's to work with it by turning abstaining into an active process. When you try to simply "not eat sugar," you're asking your brain to form a habit around an absence. There's no visible behavior to reinforce, no immediate sensory feedback to register success, and no clear routine to slot into the habit loop. Instead, you're relying purely on willpower. A finite resource that depletes throughout the day, making evening lapses almost inevitable. Your brain literally doesn't know what to practice when the practice is "nothing." It's like trying to build muscle by not lifting weights. The absence of action creates an absence of reinforcement, leaving you stuck in a constant battle of conscious resistance rather than unconscious automation.

Why did you have to leave your plate on the table unattended?

What I've started doing is stopping the depletion of my willpower and instead building a replacement action to latch onto. Instead of telling myself "don't eat sugar," I respond to the craving by getting up from my work desk, walking to the water fountain at the end of the building, and refilling my bottle. What I'm doing is designing a new routine around my "sugar" trigger. Donuts are brought to the office around 3 PM, so my body expects them to be there. Where a sugary donut once gave me that nice reward and quick energy, my walk to the water station now gives me the satisfaction of getting 400 steps closer to my daily step goal while keeping me hydrated.

Vague intentions like "eat something healthy instead" don't work for me. Your brain needs clear, repeatable steps: walk to the kitchen, fill the kettle, choose chamomile tea, wash six blueberries, sit down for three minutes while it steeps. Successful abstinence can feel invisible, so you have to create external markers of success. Maybe a calendar where you mark sugar-free days, a jar where you add a marble for each craving you redirected, or a simple count on your phone. This visibility transforms the intangible achievement of "not doing" into something your brain can recognize and celebrate. In my case, I made a small web app called "100" where I keep a list of routines and abstinences with a single button to increment and decrement them. At the end of the day, I can see my successes and what needs improvement. (I'll share it in the near future.) You can only have so much willpower, so conserve it.
Designing your environment by removing the things that tempt you into the habit you're trying to avoid is crucial. Don't keep a bag of snacks where it's convenient. Instead, keep the things that help you avoid temptation close by. Plan your day around the things you want to avoid. Systematically replace any temptation with activities you can form habits around.

The beauty of this approach is that it can be applied to virtually any "don't" habit you want to develop. Don't scroll social media mindlessly? Replace it with reading three pages of a book. Don't stay up too late? Create a wind-down routine that starts at 9 PM with dimmed lights and herbal tea. Don't interrupt people in conversations? Practice asking a specific follow-up question instead. Each unwanted behavior is an opportunity to design a better replacement behavior.

The process isn't about perfection, it's about progress. Some days you'll nail your new routine; others, you'll fall back on old patterns. The goal isn't to never fail but to fail forward, learning which triggers are strongest and which replacement behaviors provide the most satisfying rewards. The habits that feel hardest to change are often the ones most worth changing. By transforming the task of habitually doing nothing into the achievable challenge of habitually doing something better, you're not just breaking bad patterns. You're building the foundation for the person you want to become.

0 views
iDiallo 3 weeks ago

When Being Incorrect Became the Internet's Best Business Model

When I saw new questions on Stack Overflow, I followed three simple steps: first I'd read the question, then ask clarifying questions in the comment section, and then start writing a comprehensive answer. But one time, someone asked a question and I eagerly jumped into the comments to make a bold statement: "They are absolutely wrong! Don't believe what people say." When I think about it now, I wish I hadn't been so eager to get those virtual points.

The question, now deleted as irrelevant, was someone trying to settle an argument about MySQL. He believed that two particular statements were equivalent, and he added that his colleagues told him he was incorrect. I jumped in and said he was right, and they were wrong. The problem was, well, I was wrong. Usually a question like this gets closed pretty quickly. But because I had agreed with him, a wave of comments followed and turned the page into a flamewar. Clouded by anger at the comments directed at me, I added my own set of CAPITAL WORDS. Only after the question was closed and the emotions cooled down did I finally realize my error: there was an edge-case value for which the two statements didn't behave the same at all.

My interaction with the web is usually quiet and uneventful. But the fastest way to get people's attention is to be spectacularly, obviously wrong. There was this meme back in the day that came in multiple variations: a Yahoo Answers post where someone is earnestly trying to contact YouTube to come film him because he "has some really funny stuff but they won't come." There are the mobile game ads showing someone making the most frustratingly incorrect moves, practically screaming at you to download the app and "do it right." The political influencer making a statement so factually wrong that your fingers itch to correct them in the comments.

This isn't accidental. It's strategic. Being wrong triggers something primal in us: the compulsion to correct, to engage, to prove our superior knowledge. It's the digital equivalent of someone saying "actually" at a dinner party, except now that dinner party has millions of attendees, and every correction feeds an algorithm designed to amplify the conversation. Each angry comment, each correction, each share driven by outrage tells the platform's algorithm that this content is "engaging," regardless of whether that engagement is positive or negative.

Today, this is weaponized to generate as much revenue as possible. Every Twitter engagement is an opportunity to enrage. People post controversial media without context, waiting to see which direction the comments lean. Then they take the opposite position to enrage whoever showed up. Public discourse is drowned in rage bait.

The Simpsons diagnosed this problem decades ago in "Attack of the 50-Foot Eyesores." When giant advertising mascots came to life and terrorized Springfield, the solution wasn't to fight them. It was to ignore them completely. "Just don't look," the townspeople sang, and the attention-hungry monsters withered away.

Attack of the 50ft Eyesores

They understood that attention itself is the fuel that powers these systems of manipulation. But ignoring isn't enough anymore. The platforms have evolved beyond our individual ability to simply look away. The algorithms ensure that engaging content finds us, whether we seek it out or not. This is my attempt to suggest conscious digital literacy. Whenever you feel enraged online, take a moment and reflect: "Was this video designed to enrage me?"
Learn to identify content designed to provoke rather than inform. Ask yourself why you feel compelled to respond. Decide whether your participation serves your interests or someone else's agenda. Direct your attention toward content that enriches rather than exploits; it's the only way to starve the system.

CGP Grey published a video about this 10 years ago.

This attention economy built on wrongness comes with real costs. It degrades public discourse, rewards bad actors, and trains us to be more reactive and less thoughtful. It turns every platform into a gladiatorial arena where nuance goes to die and outrage reigns supreme. More insidiously, it changes us. When we're constantly primed to spot and correct errors, to feel outraged and respond impulsively, we become less capable of the patient, careful thinking that complex problems require. I should have reflected before engaging with that question. But enragement is a primal response and usually kicks in before we can think. Knowing that it's a pattern, though, can help us catch it before we respond.

Every click is a vote, every share is a signal, every comment is a contribution to the kind of digital world we're creating. We have more power than we realize. The wrongness economy only works because we feed it our attention. The moment we stop rewarding strategic ignorance with engagement, we start building something better. The giant billboards of Springfield fell when the townspeople simply stopped looking. Our digital monsters will follow the same fate, but only if we're disciplined enough to turn away.

0 views
iDiallo 4 weeks ago

The Modern Trap

Every problem, every limitation, every frustrating debug session seemed to have the same answer: use a modern solution. Modern encryption algorithms. Modern deployment pipelines. Modern database solutions. The word "modern" has become the cure-all, promising to solve not just our immediate problems, but somehow prevent future ones entirely.

I remember upgrading an app from PHP 5.3 to 7.1. It felt like cutting edge. But years later, 7.1 was also outdated. The application had a bug, and the immediate suggestion was to use a modern version of PHP to avoid this nonsense. But being stubborn, I dug deeper and found that the deprecated function I was using had had an alternative available since PHP 5.3. A quick fix prevented months of work rewriting our application.

The word "modern" doesn't mean what we think it means. Modern encryption algorithms are secure. Modern banking is safe. Modern frameworks are robust. Modern infrastructure is reliable. We read statements like these every day in tech blogs, marketing copy, and casual Slack conversations. But if we pause for just a second, we realize they are utterly meaningless. The word "modern" is a temporal label, not a quality certificate. It tells us when something was made, not how well it was made. Everything made today is, by definition, modern. But let's remember: MD5 was once the modern cryptographic hash. Adobe Flash was the modern way to deliver rich web content. Internet Explorer 6 was a modern browser. The Ford Pinto was a modern car. "Modern" is a snapshot in time, and time has a cruel way of revealing the flaws that our initial enthusiasm blinded us to.

Why do we fall for this? "Modern" is psychologically tied to "progress." We're hardwired to believe the new thing solves the problems of the old thing. And sometimes, it does! But this creates a dangerous illusion: that newness itself is the solution. I've watched teams chase the modern framework because the last one had limitations, not realizing they were trading known bugs for unknown ones. I've seen companies implement modern SaaS platforms to replace "legacy" systems, only to create new single points of failure and fresh sets of subscription fees. We become so busy fleeing the ghosts of past failures that we don't look critically at the path we're actually on. "Modern" is often just "unproven" wearing a better suit.

I've embraced modern before, living on the very edge of technology. But that meant I had to keep up to date with the tools I use. Developers spend more time learning new frameworks than mastering existing ones, not because the new tools are objectively better, but because they're newer, and thus perceived as better. We sacrifice stability and deep expertise at the altar of novelty. That modern library you imported last week? It's sleek, it's fast, it has great documentation and a beautiful logo. It also has a critical zero-day vulnerability that won't be discovered until next year, or a breaking API change coming in the next major version. "Legacy" codebases have their problems, but they often have the supreme advantage of having already been battle-tested. Their bugs are known, documented, and patched.

In the rush to modernize, we discard systems that are stable, efficient, and perfectly suited to their task. I've seen reliable jQuery implementations replaced by over-engineered React applications that do the same job worse, with more overhead and complexity. The goal becomes "be modern" instead of "be effective."
But this illusion of "modern" doesn't just lead us toward bad choices; it can bring progress to a halt entirely. When we sanctify something as "modern," we subtly suggest we've arrived at the final answer. Think about modern medicine. While medical advances are remarkable, embedded in that phrase is a dangerous connotation: that we've reached the complete, final word on human health. This framing can make it difficult to question established practices or explore alternative approaches. Modern medicine once didn't think it was important for doctors to wash their hands.

The same happens in software development. When we declare a framework or architectural pattern "modern," we leave little room for the "next." We forget that today's groundbreaking solution is merely tomorrow's foundation, or tomorrow's technical debt. Instead of "modern," I prefer the terms "robust" or "stable." The most modern thing you can do is to look at any solution and ask: "How will this look obsolete in ten years?" Because everything we call "modern" today will eventually be someone else's legacy system. And that's not a bug, it's a feature. It's how progress actually works.

0 views
iDiallo 1 month ago

Which LLM Should I Use as a Developer?

Early in my career, I worked alongside a seasoned C programmer who had finally embraced web development. The company had acquired a successful website built in Perl by someone who, while brilliant, wasn't entirely technical. The codebase was a fascinating mess: Perl commands interwoven with HTML, complex business logic, database calls, and conditional statements all tangled together in ways that would make you want to restart from scratch!

Then came the JavaScript requirements. Whenever frontend interactions needed to be added (animations, form validations, dynamic content), I became the go-to person. Despite his deep understanding of programming fundamentals, JavaScript simply didn't click for this experienced developer. The event-driven nature, the prototype-based inheritance, the asynchronous callbacks? They made no sense to him. I was young when I picked up JavaScript, still malleable enough to wrap my head around its quirks. Looking back, I sometimes wonder whether, if my introduction to programming had been through React or Vue rather than simple loops and conditionals, I would have chosen a different career path entirely. Fast-forward to today, and that same experienced programmer would likely sail through JavaScript challenges without breaking a sweat.

Large Language Models have fundamentally changed the game for developers working with unfamiliar languages and frameworks. LLMs can scaffold entire functions, components, or even small applications in languages you've never touched. More importantly, if you're not lazy about it, you can read through the generated code and learn the patterns and idioms of that language. When you inevitably write buggy code in an unfamiliar syntax, LLMs excel at debugging and explaining what went wrong. They're like having a patient mentor who never gets tired of explaining the same concept in different ways.

For example, I'm not very good with the awk command-line tool, but I can write what I want in JavaScript. So I often describe how I want to parse content in JavaScript and ask an LLM to convert it to awk. Let's say I want to extract just the user agent from Apache log files. In JavaScript, I think of it as splitting each line on the quote character and taking the second-to-last piece. The LLM converts this into an awk one-liner that uses the quote character as a field separator and prints the second-to-last field (since the field after the final quote is empty). Both are sketched below. This kind of translation between mental models is where LLMs truly shine.

Until recently, new developers would approach me with the question: "What's the best programming language to learn?" My response was always pragmatic: "It depends on what you're trying to build. It doesn't really matter." But they'd follow up with their real opinion: "I think Python is the best." The concept of a "best programming language" never entirely made sense to me. I might love Python's elegance, but I still need JavaScript for frontend work, SQL for databases, and bash for deployment scripts. The job dictates the tool, not the other way around.

Today, something fascinating is happening. Developers aren't asking about the best programming language anymore. Instead, I hear: "Which is the best LLM for coding?" My answer remains the same: it doesn't really matter. The internet is awash with benchmarks, coding challenges, and elaborate metrics attempting to crown the ultimate coding LLM. These tests typically focus on "vibe coding": generating complete solutions to isolated problems from scratch. If that's your primary use case, fine.
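To make the log-parsing example above concrete, here is roughly what the two versions might look like. This is a sketch, assuming the standard Apache combined log format (where the user agent is the last quoted field) and a file named access.log:

```js
// How I think about it in JavaScript: split each log line on double quotes
// and take the second-to-last piece, which is the user agent.
const fs = require("fs");

for (const line of fs.readFileSync("access.log", "utf8").split("\n")) {
  if (!line.trim()) continue;             // skip empty lines
  const fields = line.split('"');
  console.log(fields[fields.length - 2]); // the user agent
}
```

And the awk one-liner an LLM hands back, doing the same thing with the quote character as the field separator:

```bash
awk -F'"' '{print $(NF-1)}' access.log
```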
But in my experience building real applications that serve actual users, I've rarely found opportunities to generate entire projects. Instead, I find myself asking questions about old pieces of code to figure out why the original developer decided to implement a function one way. I generate util functions with an LLM, and I ask it to generate those extremely annoying TypeScript interfaces (life is too short to write those out manually). To my knowledge, all LLMs can perform these tasks at an acceptable level that I can immediately test and validate. You don't need AGI for this.

After programming for over three decades, I've learned that picking up new languages isn't particularly challenging anymore. The fundamental goal, making computers do useful things, remains constant. What changes is syntax, idioms, and the philosophical approaches different language communities embrace. But when you're starting out, it genuinely feels like there must be one "optimal" language to learn. This illusion persists because beginners conflate syntax familiarity with programming competence. The truth is more nuanced. Even expert C programmers can struggle with JavaScript, not because they lack skill, but because each language embodies different mental models.

The barriers between languages are dissolving, and the cost of experimenting with new technologies is approaching zero. The LLM you choose matters far less than developing the judgment to evaluate the code it produces. Pick any reputable LLM, learn to prompt it effectively, and focus on building things that matter. The rest is just syntax.

Don't start with a language; start with a problem to be solved. — Matt Mullenweg

0 views
iDiallo 1 month ago

Why AI should be a form, not a conversation

Maybe it was a decade ago when it seemed like every single game developer started believing that all games should be open world. Infinite possibilities, player freedom, emergent storytelling, who wouldn't want that? But open world design introduced new problems that linear games never faced. Players would get lost in meaningless side quests while the main story waited forgotten. Developers would pile on features that didn't serve the core experience. Worst of all, players often couldn't tell when they'd actually "won." I sank an unhealthy amount of time into Metal Gear Solid V when it came out, wandering its open world long after I'd completed every meaningful objective, never quite sure if I was done or just... tired. The problem with open world design is scope creep disguised as feature richness. When everything is possible, nothing feels special. Today, with this new wave of AI, we're making the exact same mistake. We're taking the "open world" approach to problem-solving, and the results are just as messy. When the chatbot craze was at its peak, I worked for a startup where we automated customer service. Many of our competitors were building what I now recognize as "open world chatbots". AI systems with infinite conversational possibilities. These chatbots would greet you, ask about your day, make jokes, and try to make you forget that you had a very specific reason for starting the session. Mind you, this was before ChatGPT, when LLMs weren't widely available. Each competitor's demo showcased the chatbot's vast capabilities through highly choreographed scenarios. Like game developers showing off sprawling landscapes and complex skill trees, they were proud of the sheer scope of what their AI could theoretically handle. The problem was identical to open world games: the moment real users engaged with these systems, everything collapsed. Customers, like players, are unpredictable. They don't follow the golden path you've designed. Our approach was fundamentally different. We built what I'd call a "linear gameplay" AI. Our chatbot didn't try to converse with you. It didn't greet you, didn't chat about the weather, didn't pretend to be your friend. It appeared as a form or email address, a clear single point of entry with one mission: solve your customer service problem. When you sent your fully formed message, our application took over with purposeful, sequential steps. It read the message, classified it, analyzed sentiment, then moved through our integrations (Zendesk, Shopify, Magento, USPS) like a player progressing through carefully designed levels. When it retrieved your tracking information or order details, it crafted a response that directly answered your questions. If at any point it couldn't complete the mission, it executed a clean handoff to a human agent who received a comprehensive summary. Like a save file with all the relevant progress data. This agent was designed for one specific quest: resolve customer service problems . Nothing else. Just like how the best linear games focus relentlessly on their core mechanics rather than trying to be everything to everyone. A lot of our "open world" competitors have either pivoted or gone out of business since then. Which makes it even more surprising to watch Taco Bell's drive-thru AI failing in exactly the same way. Just a few weeks back, Taco Bell went through a PR nightmare when a customer ordered 18,000 cups of water, and the system dutifully added every single cup to the order. 
This wasn't malicious hacking; this was a classic open world design failure. The system was built to handle "anything," so it handled everything, including completely absurd requests that broke the entire experience. Taco Bell uses a service called Omilia to power their AI drive-thru. On their website, they explicitly describe their open world approach:

Taco Bell recognized the importance of providing a seamless, human-like experience for customers, so Omilia's Voice AI Solution was meticulously tuned to provide clear, accurate responses, delivering a conversational experience that mirrors human interaction and improves the customer's ordering experience.

Notice the language: "human-like experience," "conversational," "mirrors human interaction." They built an open world when they needed linear gameplay. The conversational nature invites exactly the kind of scope creep that breaks systems. Regular customers report their perfectly normal orders failing spectacularly, with the AI getting stuck in loops, asking "Would you like a drink with that?" even after drinks were clearly specified.

I couldn't find a Taco Bell with an AI drive-thru in my neighborhood, but I did find a Rally's. They use a different company called HI Auto, and to my surprise, it worked flawlessly. The experience felt like a well-designed level progression: clear prompts, sequential steps, defined objectives. "What would you like to drink?" "What size?" Next level unlocked. It wasn't a conversation; it was a voice-powered form. No philosophical debates, no jokes, no attempts at charm. Just a transaction with guided rails and clear success criteria. The software knew exactly what it was supposed to accomplish, and users knew exactly what was expected of them.

This is the difference between open world chaos and linear focus. A drive-thru isn't a space for exploration and emergent dialogue. It's a hyper-linear experience where success is defined as "correct order, minimal time." Any system that invites deviation from this core mission is poorly designed for its context. You could theoretically plug ChatGPT into a drive-thru experience, just like you could theoretically add an open world to any game. But you'd be guaranteed to receive unexpected inputs that break the core experience. Instead, treat AI applications like carefully designed HTML forms with proper validation, clear fields, and predictable outcomes. The goal is always the solution (a correct order, a resolved support ticket, a completed task), not the conversational medium. In fact, I think voice itself is not the most optimal medium for these linear experiences. The most efficient drive-thru model might be a QR code at the menu board that lets you complete the entire "form" on your phone before reaching the speaker.

Choose linear when story and progression matter, open world when exploration serves the core purpose. The best AI applications should match their interaction model to their mission. Maybe OpenAI's goal is to build an AI that can do everything. But your drive-thru AI is perfectly fine being specialized. Start building AI systems that do one thing exceptionally well, with clear boundaries, defined success criteria, and linear paths to completion. Your users will thank you, and you won't end up in viral videos featuring 18,000 cups of water.
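To sketch what a "voice-powered form" can look like behind the speaker, here's a minimal JavaScript example. The menu, prices, and quantity guardrail are invented for illustration, and it assumes an upstream speech model has already produced structured { item, quantity } pairs. The point is that every field is validated and every failure has a defined handoff.

```js
// A "form with validation", not a conversation: fixed fields, hard limits,
// and a clean handoff to a human when anything falls outside the rails.
const MENU = { taco: 2.49, burrito: 5.99, water: 0.0 }; // hypothetical menu
const MAX_QTY = 10; // guardrail: reject absurd quantities instead of "handling anything"

function validateLine({ item, quantity }) {
  if (!(item in MENU)) return { ok: false, reason: `unknown item: ${item}` };
  if (!Number.isInteger(quantity) || quantity < 1 || quantity > MAX_QTY) {
    return { ok: false, reason: `quantity out of range: ${quantity}` };
  }
  return { ok: true, line: { item, quantity, total: MENU[item] * quantity } };
}

function handleOrder(requestedLines) {
  const lines = [];
  for (const requested of requestedLines) {
    const result = validateLine(requested);
    if (!result.ok) {
      // The "save file": a human agent gets the partial order plus the reason we stopped.
      return { status: "handoff", lines, reason: result.reason };
    }
    lines.push(result.line);
  }
  return { status: "confirmed", lines };
}

console.log(handleOrder([{ item: "taco", quantity: 2 }, { item: "water", quantity: 18000 }]));
// Prints a handoff with reason "quantity out of range: 18000" instead of 18,000 cups of water.
```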

0 views
iDiallo 1 month ago

You are not going to turn into Google eventually

A few years back, I was running a CI/CD pipeline for a codebase that just kept failing. It pulled the code successfully, the tests passed, the Docker image was built, but then it would fail. Each run took around 15 minutes to fail, meaning I had to wait at least 15 minutes after every change before I knew whether it worked. Of course, it failed multiple times before I figured out a solution. When I was done, I wasn't frustrated with the small mistake I had made; I was frustrated by the time it took to get any sort of feedback.

The codebase itself was trivial. It was a microservice with a handful of endpoints that was only occasionally used. The amount of time it took to build was not proportional to the importance of the service. It took so long to build because of dependencies. Not the dependencies it actually used, but the dependencies it might use one day. The ones required because the entire build system was engineered for a fantasy future where every service, no matter how small, had to be pre-optimized to handle millions of users.

This is the direct cost of building for a scale you will never reach. It's the architectural version of buying a Formula 1 car to do your grocery shopping. It's not just overkill, it actively makes the simple task harder, slower, and infinitely more frustrating. We operate under a dangerous assumption that our companies are inevitably on a path to become the next Google or Meta. So we build like they do, grafting their solutions onto our problems, hoping it will future-proof us. It won't. It just present-proofs us. It saddles us with complexity where none is needed, creating a drag that actually prevents the growth we're trying to engineer for.

Here is why I like microservices. The concept is beautiful. Isolate a single task into a discrete, independent service. It's the Unix philosophy applied to the web: do one thing and do it well. When a problem occurs, you should, in theory, be able to pinpoint the exact failing service, fix it, and deploy it without disrupting the rest of your application. If this sounds exactly like how a simple PHP include or a modular library works… you're exactly right. (There's a small sketch of that comparison at the end of this post.)

And here is why I hate them. In practice, without Google-scale resources, microservices often create the very problems they promise to solve. You don't end up with a few neat services; you end up with hundreds of them. You're not in charge of maintaining all of them, and neither is anyone else. Suddenly, "pinpointing the error" is no longer a simple task. It's a pilgrimage. You journey through logging systems, trace IDs, and distributed dashboards, hoping for an epiphany. You often return a changed man: older, wiser, and empty-handed.

This is not to say you should avoid microservices at all costs; it's to say you should focus on the problems you have at hand instead of writing code for a future that may never come. Don't architect for a hypothetical future of billions of users. Architect for the reality of your talented small team. Build something simple, robust, and effective. Grow first, then add complexity only where and when it is absolutely necessary.

When you're small, your greatest asset is agility. You can adapt quickly, pivot on a dime, and iterate rapidly. Excessive process stifles this inherent flexibility. It introduces bureaucracy, slows down decision-making, and creates unnecessary friction. Instead of adopting the heavy, restrictive frameworks of large enterprises, small teams should embrace a more ad-hoc, organic approach.
Focus on clear communication, shared understanding, and direct collaboration. Let your processes evolve naturally as your team and challenges grow, rather than forcing a square peg into a round hole.
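To put the "PHP include vs. microservice" comparison above in code, here's a minimal JavaScript sketch contrasting an in-process module call with the same logic behind a network boundary. The service URL and the data are hypothetical, and it assumes Node 18+ for the built-in fetch; the point is how much extra machinery the networked version needs before it does anything useful.

```js
// In-process "module" call: the do-one-thing-well version.
// One failure mode: the function throws, and the stack trace points at the line.
function getUserById(id) {
  const users = { 42: { id: 42, name: "Ada" } }; // stand-in for a real data layer
  return users[id] ?? null;
}

// The same logic as a microservice call: now it also involves the network,
// timeouts, serialization, service discovery, and a second thing to deploy.
// The hostname below is made up for the example.
async function getUserFromService(id) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 2000); // fail fast after 2 seconds
  try {
    const res = await fetch(`http://users-service.internal/users/${id}`, {
      signal: controller.signal,
    });
    if (!res.ok) throw new Error(`users-service returned ${res.status}`);
    return await res.json();
  } finally {
    clearTimeout(timer);
  }
}

console.log(getUserById(42)); // works offline, in-process, in microseconds
```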

0 views