Latest Posts (20 found)
iDiallo Today

You Digg?

For me, being part of an online community started with Digg. Digg was the precursor to Reddit and the place to be on the internet. I never got a MySpace account, I was late to the Facebook game, but I was on Digg. When Digg redesigned their website (V4), it felt like a slap in the face. We didn't like the new design, but the community had no say in the direction. To make it worse, they removed the bury button. It's interesting how many social websites remove the ability to downvote. There must be a study somewhere that makes a sound argument for it, because it makes no sense to me. Anyway, when Digg announced they were back in January 2026, I quickly requested an invite. It was nostalgic to log in once more and see an active community building back up right where we left off. But then, just today, I read that they are shutting down. I had a single post in the technology sub. It was starting to garner some interest and then, boom! Digg is gone once more. The CEO said that one major reason was that they faced "an unprecedented bot problem." This is our new reality. Bots are now powered by AI and they are more disruptive than ever. They quickly circumvent bot detection schemes and flood every conversation with senseless text. It seems like there are very few places left where people can have a real conversation online. This is not the future I was looking for. I'll quietly write on my blog and ignore future communities that form. Rest in peace, Digg.

0 views
iDiallo Yesterday

It's Work that taught me how to think

On the first day of my college CS class, the professor walked in holding a Texas Instruments calculator above his head like Steve Jobs unveiling the first iPhone. The students sighed. They had expected computer science to involve little math. The professor told us he had helped build that calculator in the eighties, then spent a few minutes talking about his career and the process behind it. Then he plugged the device into his computer, opened a terminal on the projector, and pushed some code onto it. A couple of minutes later, he unplugged the cable, powered on the calculator, and sure enough, Snake was running on it. A student raised his hand. The professor leaned forward, eager for the first question of the semester. "Um... is this going to be on the test?" While the professor was showing us what it actually means to build something, to push code onto hardware and watch it come alive, his students were already thinking about the grade. About the exit. The experience meant nothing unless it converted into points. That was college for me. Everyone was chasing a passing grade to get to the next class. Learning was mostly incidental. The professors tried, but our incentives were completely misaligned. Talk of higher education becoming obsolete was already in the air, especially in CS. As enthusiastic as I had been when I started, that enthusiasm got chipped away one class at a time until the whole thing felt mechanical. Something I just had to get through. I dropped out shortly after the C++ class, which had taught me almost nothing about programming anyway. I was broke and could only pay for so many courses out of pocket. So I took my skills, such as they were, to a furniture store warehouse. My day job. When customers bought furniture, we pulled their merchandise from the back and loaded it into their trucks. They signed a receipt, we kept a copy, and those copies went into boxes labeled by month and date. At the end of the year, the boxes went onto a pallet, the pallet got shrink-wrapped, and a forklift tucked it away in a high storage compartment. Whenever an accountant called requesting a signed copy, usually because a customer was disputing a charge, the whole process ran in reverse. Someone licensed on the forklift had to retrieve the pallet, we cut the shrink-wrap, found the right box, and sifted through hundreds of receipts until we found the one we needed. The process took hours. One day I decided enough was enough. After my shift, I grabbed the day's signed receipts and fed them into a scanner. For each one, I created two images: a full copy and a cropped version showing just the top of the receipt where the order number was printed. I found a pirated OCR application, then used VBScript and a lot of Googling to write a script that read the order number and renamed each image file to match it. I also wrote my first Excel macros, also in VBScript. When everything was wired together, I had a working system. Each evening, I would enter the day's order numbers, scan the receipts, and let the script match them up with a preview attached. When the OCR failed to read a number, the file was renamed "unknown" with an incrementing number so I could verify those manually. From then on, when an accountant called, I could find and email them the receipt in under a minute, without ever leaving my desk. When I left that warehouse, I was ready to call myself a programmer. That one month building that system taught me more than two years of school ever had. But the education didn't stop there. Years later, now considering myself an experienced developer, a manager handed me what looked like a giant power strip. It had a dozen outlets, and was built for stress-testing set-top boxes in a datacenter. "Can you set this up?" he asked. A few years earlier, I would have panicked. I would have gone looking for someone who already knew the answer, or waited until the problem solved itself. But something had changed in me since the warehouse. Unfamiliar problems no longer felt like walls. They felt like the first receipt I ever fed into a scanner. It was just something to pull apart until it made sense. I had never worked with hardware. I had no idea where to start. But I didn't need to know where to start. I just needed to start. I brought the device to my desk and inspected every inch of it. I wasn't looking for the answer exactly. Instead, I was looking for the first question. And I found one: an RJ45 port on one end. Not exactly the programming interface you'd expect, but it was there for a reason. I looked up the model number of the device, downloaded the manual, and before long I was connected via Telnet, sending commands and reading output in the terminal. Problem solved. Not because I knew anything about hardware going in, but because I had learned to spend time with unfamiliar problems. None of this was in the syllabus. Nobody graded me on it. There was no partial credit for getting halfway there. That's the difference between school and work. School optimizes for the test, like that student who couldn't look past the grade to see what was actually being shown to him. School teaches you the shape of a problem and gives you a method to solve it. Work, on the other hand, doesn't care about the test. Work hands you something broken, or inefficient, or completely unfamiliar, and simply waits. Often, there are no right answers at work. You just have to build your own solution that satisfies the requirement. You figure things out, not because you memorized the right answer, but because you thought your way through it. Then something changes in how you approach every problem after that. You don't flinch at the next problem. You understand that facing unfamiliar problems is the job.

0 views
iDiallo 3 days ago

Where did you think the training data was coming from?

When the news broke that Meta's smart glasses were feeding data directly into their Facebook servers , I wondered what all the fuss was about. Who thought AI glasses used to secretly record people would be private? Then again, I've grown cynical over the years . The camera on your laptop is pointed at you right now. When activated, it can record everything you do. When Zuckerberg posted a selfie with his laptop visible in the background, people were quick to notice that both the webcam and the microphone had black tape over them. If the CEO of one of the largest tech companies in the world doesn't trust his own device, what are the rest of us supposed to do? On my Windows 7 machine, I could at least assume the default behavior wasn't to secretly spy on me. With good security hygiene, my computer would stay safe. For Windows 10 and beyond, that assumption may no longer hold. Microsoft's incentives have shifted. They now require users to create an online account, which comes with pages of terms to agree to, and they are in the business of collecting data . As part of our efforts to improve and develop our products, we may use your data to develop and train our AI models. That's your local data being uploaded to their servers for their benefit. Under their licensing agreement (because you don't buy Windows, you only license it) you are contractually required to allow certain information to be sent back to Microsoft: By accepting this agreement or using the software, you agree to all of these terms, and consent to the transmission of certain information during activation and during your use of the software as per the privacy statement described in Section 3. If you do not accept and comply with these terms, you may not use the software or its features. The data transmitted includes telemetry, personalization, AI improvement, and advertising features. On a Chromebook, there was never an option to use the device without a Google account. Google is in the advertising business, and reading their terms of service, even partially, it all revolves around data collection. Your data is used to build a profile both for advertising and AI training. None of this is a secret. It's public information, buried in those terms of service agreements we blindly click through. Even Apple, which touts itself as privacy-first in every ad, was caught using user data without consent . Tesla employees were found sharing videos recorded inside customers' private homes . While some treat the Ray-Ban glasses story as an isolated incident, here is Yann LeCun, Meta's former chief AI scientist, describing transfer learning using billions of user images: We do this at Facebook in production, right? We train large convolutional nets to predict hashtags that people type on Instagram, and we train on literally billions of images. Then we chop off the last layer and fine-tune on whatever task we want. That works really well. That was seven years ago, and he was talking about pictures and videos people upload to Instagram. When you put your data on someone else's server, all you can do is trust that they use it as intended. Privacy policies are kept deliberately vague for exactly this reason. Today, Meta calls itself AI-first, meaning it's collecting even more to train its models. Meta's incentive to collect data exceeds even that of Google or Microsoft. Advertising is their primary revenue source. Last year, it accounted for 98% of their forecasted $189 billion in revenue . Yes, Meta glasses record you in moments you expect to be private, and their workers process those videos at their discretion. We shouldn't expect privacy from a camera or a microphone, or any internet-connected device, that we don't control. That's the reality we have to accept. AI is not a magical technology that simply happens to know a great deal about us. It is trained on a pipeline of people's information: video, audio, text. That's how it works. If you buy the device, it will monitor you.

0 views
iDiallo 4 days ago

The Server Older than my Kids!

This blog runs on two servers. One is the main PHP blog engine that handles the logic and the database, while the other serves all static files. Many years ago, an article I wrote reached the top position on both Hacker News and Reddit. My server couldn't handle the traffic . I literally had a terminal window open, monitoring the CPU and restarting the server every couple of minutes. But I learned a lot from it. The page receiving all the traffic had a total of 17 assets. So in addition to the database getting hammered, my server was spending most of its time serving images, CSS and JavaScript files. So I decided to set up additional servers to act as a sort of CDN to spread the load. I added multiple servers around the world and used MaxMindDB to determine a user's location to serve files from the closest server . But it was overkill for a small blog like mine. I quickly downgraded back to just one server for the application and one for static files. Ever since I set up this configuration, my server never failed due to a traffic spike. In fact, in 2018, right after I upgraded the servers to Ubuntu 18.04, one of my articles went viral like nothing I had seen before . Millions of requests hammered my server. The machine handled the traffic just fine. It's been 7 years now. I've procrastinated long enough. An upgrade was long overdue. What kept me from upgrading to Ubuntu 24.04 LTS was that I had customized the server heavily over the years, and never documented any of it. Provisioning a new server means setting up accounts, dealing with permissions, and transferring files. All of this should have been straightforward with a formal process. Instead, uploading blog post assets has been a very manual affair. I only partially completed the upload interface, so I've been using SFTP and SCP from time to time to upload files. It's only now that I've finally created a provisioning script for my asset server. I mostly used AI to generate it, then used a configuration file to set values such as email, username, SSH keys, and so on. With the click of a button, and 30 minutes of waiting for DNS to update, I now have a brand new server running Ubuntu 24.04, serving my files via Nginx. Yes, next months Ubuntu 26.04 LTS comes out, and I can migrate it by running the same script. I also built an interface for uploading content without relying on SFTP or SSH, which I'll be publishing on GitHub soon. It's been 7 years running this server. It's older than my kids. Somehow, I feel a pang of emotion thinking about turning it off. I'll do it tonight... But while I'm at it, I need to do something about the 9-year-old and 11-year-old servers that still run some crucial applications.

0 views
iDiallo 4 days ago

I'm Not Lying, I'm Hallucinating

Andrej Karpathy has a gift for coining terms that quickly go mainstream. When I heard "vibe coding," it just made sense. It perfectly captured the experience of programming without really engaging with the code. You just vibe until the application does what you want. Then there's "hallucination." He didn't exactly invent it. The term has existed since the 1970s. In one early instance, it was used to describe a text summarization program's failure to accurately summarize its source material. But Karpathy's revival of the term brought it back into the mainstream, and subtly shifted its meaning, from "prediction error" to something closer to a dream or a vision. Now, large language models don't throw errors. They hallucinate. When they invent facts or bend the truth, they're not lying. They're hallucinating. And with every new model that comes out and promises to stay clean off drugs, it still hallucinates. An LLM can do no wrong when all its failures are framed as neurological disorder. For my part, I hope there's a real effort to teach these models to simply say "I don't know." But in the meantime, I'll adopt the term for myself. If you ever suspect I'm lying, or catch me red-handed, just know that it's not my fault. I'm just hallucinating .

0 views
iDiallo 5 days ago

Why Am I Paranoid, You Say?

Technology has advanced to a point I could only have dreamed of as a child. Have you seen the graphics in video games lately? Zero to 60 miles per hour in under two seconds? Communicating with anyone around the world at the touch of a button? It's incredible, to say the least. But every time I grab the TV remote and decline the terms of service, my family watches in confusion. I don't usually have the words to explain my paranoia to them, but let me try. I would love to have all the features enabled on all my devices. I would love to have Siri on my phone. I would love to have Alexa control the lighting in my house and play music on command. I would love to own an electric car with over-the-air updates. I would love to log in with my Google account everywhere. I would love to sign up for your newsletter. I would love to try the free trial. I would love to load all my credit cards onto my phone. I would love all of that. But I can't. I don't get to do these things because I have control over none of them. When I was a kid, I imagined that behind the wild technologies of the future there would be software and hardware, pure and simple. Now that we have the tech, I can say that what I failed to see was that behind every product, there is a company. And these companies are salivating for data. If you're like me, you have dozens of apps on your phone. You can't fit them all on the home screen, so you use a launcher to find the ones you don't open every day. Sometimes, because I have so many, I scroll up and down and still can't find what I'm looking for. Luckily, on most Android phones, there's a search bar at the top to help. But the moment I tap it, a notification pops up asking me to agree to terms and conditions just to use the search. Of course I won't do that. Most people have Siri enabled on their iPhone and never think twice about it. Apple has run several ads touting its privacy-first approach. Yet Apple settled a class action lawsuit last year claiming that Siri had violated users' privacy, to the tune of $95 million . I can't trust any of these companies with my information. They will lose it, or they will sell it. Using Alexa or Google Assistant is no different from using Siri. It's having a microphone in your home that's controlled by a third party. As enthusiastic as I am about electric cars, I didn't see the always-connected aspect coming. I've always assumed that when I pay for something, it belongs to me. But when an automaker can make decisions about your car while it sits in your garage, I'd rather have a dumb car. Unfortunately, it's no longer limited to electric vehicles. Nearly all modern cars now push some form of subscription service on their customers. Have you ever been locked out of your Google account? One day I picked up my phone and, for some reason, my location was set to Vietnam. A few minutes later, I lost access to my Google account. It's one thing to lose access to your email or files in Drive. But when you've used Google to log in to other websites, you're suddenly locked out of those too. Effectively, you're locked out of the internet. I was lucky my account was restored the same day, apparently there were several login attempts from Vietnam. But my account was back in service just in time for me to mark another Stack Overflow question as a duplicate. I don't sign up for services with my real email just to try a free trial, because even when I decide not to continue, the emails keep coming. When my sons were just a few months old, I received a letter in the mail addressed to the baby. It stated that his personal information (name, address, and Social Security number) had been breached. He was still an infant. I had never heard of the company responsible or done any business with them, yet somehow they had managed to lose my child's information. I would love to not worry about any of this, but it's a constant inconvenience. Whenever I grab the TV remote, I accidentally hit the voice button, and the terms of service remind me that my voice may be shared with third parties . Technology is amazing when you have some control over it. But when the terms of service can change out from under you without warning, I'll politely decline and keep my tin hat close by. I have so much to hide .

0 views
iDiallo 1 weeks ago

Interruption-Driven Development

I have a hard time listening to music while working. I know a lot of people do it, but whenever I need to focus on a problem, I have to hunt down the tab playing music and pause it. And yet I still wear my headphones. Not to listen to anything, but to signal to whoever is approaching my desk that I am working. It doesn't deter everyone, but it buys me the time I need to stay focused a little longer. I don't mind having a conversation with coworkers. What I mind is the interruption itself, especially when I'm in the middle of a task. Sometimes I'm debugging an issue in a legacy application, building a mental model of the workflow, reading a comment that describes an exception, following a function declaration, right when I'm on the verge of the next clue, I hear a voice: "Hey! What's going on? I haven't seen you in a while. What have you been up to?" The conversation is never long. But when it's over, my thoughts are gone. Where was I? Right, the function declaration. But where was it being called? What was that exception the comment described? Where did I even see that comment? I have to retrace every step just to rebuild the mental state I was in before I can move forward again. Working remotely helps, to a point. Interruptions via Slack can be muted until I'm ready to respond. But remote work isn't immune. You're still expected to be in meetings. As a lead, I'm frequently pulled into calls because "everything is on fire." Often, my presence isn't to put out the fire, it's to hold someone's hand. An hour later, I can barely remember what I was working on. The cost of interruption falls entirely on the person being interrupted. You lose your place, your focus, and eventually your ability to finish anything on time. For the person doing the interrupting, though, it's often a positive experience. The manager who constantly pulls the team into status updates feels productive. They're in the loop, they're present, they're on top of things. They schedule daily standups, attend every scrum ceremony, and expect developers to translate their work-in-progress into business-friendly language on demand. Meanwhile, the developer is spending their day sitting in calls, reassuring, explaining, and planning, but never actually building anything. When they push back, the manager doesn't cancel the meetings. Instead, he trims them from 30 minutes to 15. It feels like progress. But the length of the meeting was never the problem. Three meetings a day means three interruptions, regardless of how short they are. Being constantly interrupted at work reminds me of being in a hospital. Doctors prescribe rest, but hospitals are among the worst places to actually get any. Before our kids were born, my wife spent close to a month in the hospital. I had a small corner of the room, a chair and a desk, where I'd work on my laptop by her side. Every 20 minutes, the door would swing open, a nurse would bustle in and out, and the door would be left wide open behind her. It didn't matter that the doctor had ordered rest. Her sleep was interrupted every single time. That's what interruption-driven development looks like in practice. The work requires uninterrupted effort to actually happen. You can have the right tools, the right team, the right intentions, and still produce nothing. The work environment itself is working against you. My headphones might keep those eager to converse at bay. But what we really need is time to get work done without the constant interruption. It should be part of the software development lifecycle.

0 views
iDiallo 1 weeks ago

Why we feel an aversion towards LLM articles

Last year, I pushed myself to write and publish every other day for the whole year. I had accumulated a large number of subjects over the years, and I was ready to start blogging again. After writing a dozen or so articles, I couldn't keep up. What was I thinking? 180 articles in a year is too much. I barely wrote 4 articles in 2024. But there was this new emerging technology that people wouldn't stop talking about. What if I used it to help me achieve my goal? Have you ever heard of Mo Samuels? You probably haven't. But you must have heard of Seth Godin , right? Seth Godin is the author of several bestsellers. He is an icon in the world of marketing, and at one point he nudged me just enough to quit an old job. This is someone I deeply respected, and I bought his book All Marketers are Liars with great anticipation. I was several chapters in when he dropped this statement: I didn't write this book. What does he mean by that? His name is on the cover. These are the familiar words I often heard in his seminars. What is he trying to say? What I mean is that Seth Godin didn't write this book. It was written by a freelancer for hire named Mo Samuels. Godin hired me to write it based on a skimpy three-page outline. What? Mo Samuels? Who is Mo Samuels? If that name were on the cover, I wouldn't have bought the book in the first place. Does that bum you out? Does it change the way you feel about the ideas in this book? Does the fact that Seth paid me $10,000 and kept the rest of the advance money make the book less valuable? Well, yeah. It doesn't change the ideas in the book. But it is deceptive. I bought it specifically to read his words. Not someone else's. Why should it matter who wrote a book? The words don't change, after all. Yet I'm betting that you care a lot that someone named Mo wrote this book instead of the guy on the dust jacket. In fact, you're probably pretty angry. Well, if you've made it this far, you realize that there is no Mo Samuels, and in fact, I was pulling your leg. I (Seth Godin) wrote every word of this book. Imagine he hadn't added that last line. I never return a book after purchase, but this would have been a first. We don't just buy random books, a name carries value. I bought this book specifically because I wanted insight from this author. Anything less would have been a betrayal. Well, that's how people feel when they read an LLM-generated article. I wouldn't have noticed if I hadn't used LLMs to write articles on this very blog. The first time, I wrote a draft that had all the elements I wanted to present. The problem was the structure didn't entirely make sense. The story arc didn't really pay off, and the pacing was off. DeepSeek was just making the rounds, releasing open weights and open source code. I decided to use it to help me structure the article. The result was impressive. Not only had it fixed the pacing, it restructured the article in a way that made much more sense. Where I had dense blocks of information, DeepSeek turned them into convenient bullet points that were much easier to read. I was satisfied with the result and immediately published it. What I failed to notice, or maybe was too mesmerized to notice, was that the sentence structure had also been rewritten. I didn't use LLMs every time I wrote, but throughout the year I had at least a dozen AI-enhanced articles. When publishing, they sounded just fine. The problem started when I wanted to reference one of those articles in a new post. Reading through the AI-enhanced post felt strange. A paragraph I vaguely remembered and wanted to quote didn't sound like what I remembered. The articles were bloated with words I would never use. They had quips that seemed clever at the time but didn't sound like me at all. I ended up rewriting sections of those posts before quoting them. The second problem appeared whenever I landed on someone else's blog. I noticed the same patterns. The same voice. The same quips. "It's not just X, but Y." "Here's the part I find disturbing." "The irony is not lost on me." "It is a stark reminder." These and many more writing tropes were uniformly distributed across my LLM-assisted articles and countless others across the web. It felt like Mo Samuels was a guest writer on all of our blogs. And here's the kicker: (another famous thrope) I'm not singling out DeepSeek here. ChatGPT, Claude, Gemini, they all seem to have taken the same "Writing with Mo Samuels" Master class. It feels like this voice, no matter what personality you try to prompt it with, is the average of all the English language on the web. I wouldn't say readers of this blog are here for my distinct voice or writing style. I'm not famous or anything. But I know they can spot Mo from a mile away. My goal is not to trick readers. I want the stories and work experiences I share here to come from me, and I want to give readers that same assurance. So here is what I did. Since my goals are more modest this year, I've rewritten several of those lazy articles. I spend more time writing, and I try to hold onto this idea that's gaining traction among bloggers: "If you didn't bother writing, why should anyone bother reading?" I want to share my thoughts, even if no one reads them. When I come back to rediscover my own writing, I want to recognize my own voice in it. But if you do read this blog, if it sucks, if you disagree, if you have an opinion to share, you should know that I wrote it. Not Mo Samuels.

0 views
iDiallo 2 weeks ago

“How old are you?” Asked the OS

A new law passed in California to require every operating system to collect the user's age at account creation time. The law is AB-1043 . And it was passed in October of 2025. How does it work? Does it apply to offline systems? When I set up my Raspberry Pi at home, is this enforced? What if I give an incorrect age, am I breaking the law now? What if I set my account correctly, but then my kids use the device? What happens? There is no way to enforce this law, but I suspect that's not the point. It's similar to statements you find in IRS documents. The IRS requires you to report all income from illegal activities, such as bribes and scams. Obviously, if you are getting a bribe, you wouldn't report it, but by not reporting it you are breaking additional laws that can be used to get you prosecuted. When you don't report your age to your OS whether it's a windows device or a Tamagotchi, you are breaking the law. It's not enforced of course, but when you are suspected of any other crime, you can be arrested for the age violation first, then prosecuted for something else. What a world we live in.

0 views
iDiallo 2 weeks ago

That's it, I'm cancelling my ChatGPT

Just like everyone, I read Sam Altman's tweet about joining the so-called Department of War, to use ChatGPT on DoW classified networks. As others have pointed out, this is the entry point for mass surveillance and using the technology for weapons deployment. I wrote before that we had the infrastructure for mass surveillance in place already, we just needed an enabler. This is the enabler. This comes right after Anthropic's CEO wrote a public letter stating their refusal to work with the DoW under their current terms. Now Anthropic has been declared a public risk by the President and banned from every government system. Large language models have become ubiquitous. You can't say you don't use them because they power every tech imaginable. If you search the web, they write a summary for you. If you watch YouTube, one appears right below the video. There's a Gemini button on Chrome, there's Copilot on Edge and every Microsoft product. There it is in your IDE, in Notepad, in MS Paint. You can't escape it. Switching from one LLM to the next makes minimal to no difference for everyday use. If you have a question you want answered or a document to summarize, your local Llama will do the job just fine. If you want to compose an email or proofread your writing, there's no need to reach for the state of the art, any model will do. For reviewing code, DeepSeek will do as fine a job as any other model. A good use of ChatGPT's image generator. All this to say, ChatGPT doesn't have a moat. If it's your go-to tool, switching away from it wouldn't make much of a difference. At this point, I think the difference is psychological. For example, my wife once told me she only ever uses Google and can't stand any other search engine. What she didn't know was that she had been using Bing on her device for years. She had never noticed, because it was the default. When I read the news about OpenAI, I was ready to close my account. The only problem is, well, I never use ChatGPT. I haven't used it in years. My personal account lay dormant. My work account has a single test query despite my employer trying its hardest to get us to use it. But I think none of that matters when OpenAI caters to a government agency with a near-infinite budget. For every public account that gets closed, OpenAI will make up for it with deeper integration into classified networks. Not even 24 hours later, the US is at war with Iran. So while we're at it, here is a nice little link to help you close your OpenAI account .

0 views
iDiallo 2 weeks ago

We Need Process, But Process Gets in the Way

How do you manage a company with 50,000 employees? You need processes that give you visibility and control across every function such as technology, logistics, operations, and more. But the moment you try to create a single process to govern everyone, it stops working for anyone. One system can't cater to every team, every workflow, every context. When implemented you start seeing in-fighting, projects missing deadlines, people quitting. Compromises get made, and in my experience, it almost always becomes overwhelming. The first time I was part of a merger, I was naïve about how it would go. The narrative we were sold was reassuring. The larger company was acquiring us because we were successful. The last thing they'd want to do was get in the way of that success. But that's not how it went. It doesn't matter what made you successful before you join a larger organization. The principles and processes of the acquiring company are what will dominate. Your past success is acknowledged, maybe even celebrated, but it doesn't protect you from assimilation. One of the first things we had to adopt was Scrum. It may be standard practice now, but at the time it was still making its way through the industry. Our team, developers and product managers, already had a process that worked. We knew how to communicate, how to prioritize, how to ship. Adopting this new set of ceremonies felt counterproductive. It didn't make us faster. It didn't improve communication. What it did do was increase administrative overhead. Standups, sprints, retrospectives, layer after layer of structure added on top of work that was already getting done. But there was no going back. We were never going to return to being that nimble, ad hoc team that could resolve issues quickly and move on. We had to adopt methods that got in the way. Eventually, we adapted. We adopted the process. And in doing so, we became less efficient at the local level. A lot of people, frustrated by the slowdown, left for other opportunities. But as far as the larger company was concerned, that was acceptable. Our product was just one of many in their portfolio. Slowing down one team to get everyone aligned was a price they were willing to pay. It wasn't efficient, but it was manageable from their perspective. The math made sense at the organizational level, even if it felt like a loss from where we were standing. I understand that logic. I just don't think it's the best way forward. Think about how a computer works. A CPU doesn't concern itself with how a hard drive retrieves data. Whether it's spinning magnetic disks or a solid state drive, the internal mechanics are irrelevant to the CPU. All it knows is that it can make a request, and the response will come back in the expected format. If the CPU had to get involved in the actual process of fetching data, it would waste enormous processing power on something that isn't its concern. Organizations can work the same way. Rather than imposing a single process across every team, a company can treat its departments as independent components. You make a request, the department delivers an output. How they produce that output like what tools they use, how they run their meetings, how they structure their work, that shouldn't be a concern, as long as the result meets the requirement. There are places where unified processes make sense. Legal and compliance, for example, probably need to be consistent across the whole organization. But for how individual teams operate day to day, autonomy is often the better choice. Will every team's process be perfectly aligned with every other team's? No. But they'll actually work. And the people doing the work will be far less likely to walk out the door. Sometimes in large organizations, it's important to identify which process works, and which team is better left alone.

0 views
iDiallo 2 weeks ago

When access to knowledge is no longer the limitation

Let's do this thought experiment together. I have a little box. I'll place the box on the table. Now I'll open the little box and put all the arguments against large language models in it. I'll put all the arguments, including my own. Now, I'll close the box and leave it on the table. Now that that is out of the way, we are left with all the positives. All the good things that come from having the world's information at our fingertips. I can ask any question and get an answer almost instantly. Well, not all questions. The East has its sensitivities around a certain square, and the West about a certain island, but I digress. I can learn any subject I want to learn. I can take the work of any philosopher and ELI5 it. I can finally understand "The World as Will and Representation" by Schopenhauer. A friend gifted me a copy when I was still in my twenties, it's been steadily collecting dust ever since. But now I can turn to the book and ask questions until I thoroughly understand it. No need to read it cover to cover. In fact, last year I decided I wanted to learn about batteries. I first went to the Battery University website and started to read lesson by lesson. But I had questions. How was I going to get them answered? The StackExchange network is not what it used to be , so I turned to ChatGPT. It had all the answers. I learned and read so much about batteries that I am tempted to start a battery company. My twin boys are at that age where they suffer from the infinite WHYs. Why does it rain? Why does the earth spin? why does California still use the Highway Gothic font on some freeway signs? I do not have answers to these questions off the top of my head, but I have access to the infinite knowledge machine, so of course my kids know the answers now. Just the other day, I had a shower-thought about cars. "Are cars just a slab of metal on wheels?" And now I learned that the answer is "essentially yes." But then I kept reading on the subject and learned about all those little devices and pieces of mechanical technologies that exist that I had never heard of. For example, the sway bar link. Did you know about it? Did you know that it reduces body roll and maintains stability during turns? Fascinating. Ever since LLMs made their public debut in 2022, we've been gifted with this knowledge base that we can interact with on demand, day and night, at work or at home. The possibilities seem endless. I can learn or understand any codebase without being familiar with the programming language. And yet it feels like something is missing. The more I access this knowledge, the more I feel the little box on my table is starting to open. Now this is just my opinion, but I'm starting to believe that the sum of all parts is still just one. Let me explain. In 2022, the Japanese Prime Minister Shinzo Abe was shot and killed. It came as a shock to me, Japan is not a country known for gun violence. So in December of that year, I decided to learn more about him, about Japan, and about their stance on guns. With the holiday season and the rolling code freeze at work, I spent a good amount of time just reading through Wikipedia, some translated Japanese forums, and some official documents. A whole lot of material. Long story short, I still don't have a definitive answer as to why exactly he was killed, but I came away with a richer understanding of the story and the perspectives of the people around him. Reading more material is not going to give me a definitive answer, but it helps paint a richer picture of the event. I spent enough time with the subject to appreciate the knowledge I gathered over those weeks. When you ask ChatGPT why Shinzo Abe was shot, it will give you a satisfying answer. It will be correct, it will include some of the nuance, and will probably ask you if you want to learn more. The answer satisfies your curiosity and you move on... to your next question. It could be the chat interface. Even though the words on the page clearly ask you "if you want to know more," somehow you are more keen on starting a new subject. And rare are the times we go back and re-read the material we have been provided with. With the books I've "read" through an LLM by asking multiple questions, I can hardly tell you that I understand them. Yes, I know the gist of it but it doesn't replace the knowledge you build by reading a book at a steady pace. You save a whole bunch of time by using an LLM, but the knowledge is fleeting. Reading original sources is slow, but you get to better immerse yourself in the subject. It seems like reading through an LLM removes the friction of learning, but in doing so it makes knowledge shallow and disposable. The problem is the way we process information as humans. We don't become experts by learning from summaries. The effort of learning is part of the process. Those endless questions my children have, there is a snack-like quality to the answers I give them. Because the answers are so easy to get, we treat them like a social media feed. I scroll through and one post is about batteries, the next is about sway bars, and somehow I land on California highways. Having the world's information at your fingertips is a gift, but knowing the gist of everything is not the same as understanding something deeply. We do not form character by reading the gist of it. Instead, character comes from the hunt for information. The limitation of a manual process forces us to focus, to dwell on a subject, until we truly internalize it. You can hardly spot a hallucination unless it concerns material that you already have knowledge in. Wait a minute. What's happening here. Ah! I see. The box has crept back open.

0 views
iDiallo 2 weeks ago

The Little Red Dot

Sometimes, I have 50 tabs open. Looking for a single piece of information ends up being a rapid click on each tab until I find what I'm looking for. Somehow, every time I get to that LinkedIn tab, I pause for a second. I just have to click on the little red dot in the top right corner, see that there is nothing new, then resume my clicking. Why is that? Why can't I ignore the red notification badge? When you sign up for LinkedIn for the first time, it's right there. A little red dot in the top right corner with a number in it. It stands out against the muted grays and blues of the interface. Click on it, and you'll discover you have a notification. It's not from someone you know; this is a fresh new account, after all. But the dot was there anyway. Add a few connections, give it some time, and come back. Refresh the page, and you'll have new notifications waiting. If your LinkedIn account is like mine, a ghost town, you still get the little red dot. My connections and I usually keep a few recruiters in our networks, an insurance policy in case we need to find work quickly. But we rarely, if ever, post anything. Yet whenever I log in, there's a new notification. Sometimes it's even a message, but not from anyone in my connections list. It's from LinkedIn itself. The little red dot isn't exclusive to LinkedIn. My Facebook account has been dormant for years, yet those few times annually when I log in, the notifications are right there waiting for me. I've even visited news websites where the little red dot appeared for reasons I couldn't understand. I didn't have an account, so what exactly were they notifying me about? That little red dot is a sophisticated psychological trigger designed to exploit the brain. It activates the brain's Salience Network . Think of it as a circuit breaker that alerts us to immediate threats. When triggered, it signals that the brain should redirect its resources to something new. The color red is not chosen by accident either. On my Twitter app, the notification is a blue dot, which I hardly ever notice (don't tell them that). But red triggers our brain to perceive urgency. We feel compelled to address it immediately. The little red dot fools us into believing that something trivial is actually urgent. Check your phone and you'll notice all the app icons with a little red dot in their top right corner. Most, if not all, social media alerts function as false alarms, and they gradually compromise our ability to focus on what matters. Whenever you spot the little red dot, you feel compelled to click it. It promises a new connection, a message, a validation of some sort. It doesn't matter that you are almost always disappointed afterward, because you will be presented with content that keeps you scrolling, never remembering how you got there. Facebook used to show the little red dot in their email notifications. When there is activity on your account, say you were tagged in a photo, Facebook sends you an email and in the top right corner, they draw a little red dot on the bell icon. Obviously, you have to click it so you don't miss out. There was a Netflix documentary released a few years ago called The Social Dilemma , an inside look at how social media manipulates its users. Whether intentional or not, their website featured a bell icon with a little red dot on it. You visit the site for the first time, and it shows that you have one notification. There's no way around it, you are psychologically enticed to click. A notification is supposed to be a tool, and a tool patiently waits for someone to use it. But the little red dot seduces you because it wants something from you. It's all part of habit-forming technology: the engagement loop. The engagement loop follows three steps: a cue (the notification), a routine (an action such as scrolling), and a reward (likes, a dopamine hit). From the social media platform's perspective, this is a tool for boosting retention. From the user's perspective, it's Pavlovian conditioning. For every possible event, LinkedIn will send you a notification. Someone wants to join your network. Someone has endorsed your skills. A group is discussing a topic. Each notification generates a red dot on your mobile device, pulling you back into actions that benefit LinkedIn's system. In the documentary, they show that this pattern is just the tip of the iceberg. Beneath the surface lies a data-driven, manipulative machine that feeds on our behavior and engineers the next trick to bring us back to the platform. For my part, I've disabled notifications from all non-essential apps. No Instagram updates, no Robinhood alerts, no WhatsApp group messages. I receive messages from people I know. That's pretty much it. For everything else, I have to deliberately seek out information. That said, I did see another approach in the wild. Some people simply don't care about notifications. Every app on their phone has a little red dot with the number "99" on it. They haven't read their messages and aren't planning to. You're lucky if they ever answer your call. I'm not sure whether this is a good or bad thing... but it's a thing. That little red dot represents something larger than a notification system. It's the visible tip of an infrastructure built to capture and commodify human attention. The addictiveness of social media isn't an unfortunate byproduct of connecting the world. Right now it's the most profitable business model. The more addictive the platform, the more you engage; the more you engage, the more advertisements you see. This addiction shapes behavior, consumes time, and affects mental wellbeing, all while companies profit from it.

0 views
iDiallo 3 weeks ago

Nvidia was only invited to invest

Nvidia was only invited to invest. That is one reversal of commitment. Remember that graph that has been circling around for some time now? The one that shows the circular investment from AI companies: Basically Nvidia will invest $100 billion in OpenAI. OpenAI will then invest $300 billion in Oracle, then Oracle invests back into Nvidia. Now, Jensen Huang, the Nvidia CEO, is back tracking and saying he never made that commitment . “It was never a commitment. They invited us to invest up to $100 billion and of course, we were, we were very happy and honored that they invited us, but we will invest one step at a time.” So he never committed? Did we make up all these graphs in our head? Was it a misquote from a journalist somewhere that sparkled all this frenzy? Well, you can take a look in OpenAI press release in September of 2025 . They wrote: NVIDIA intends to invest up to $100 billion in OpenAI as the new NVIDIA systems are deployed. In fact, Jensen Huang went on to say: “NVIDIA and OpenAI have pushed each other for a decade, from the first DGX supercomputer to the breakthrough of ChatGPT. This investment and infrastructure partnership mark the next leap forward—deploying 10 gigawatts to power the next era of intelligence.” It sounds like Jensen is distancing himself from that $100 billion commitment. Did he take a peak inside OpenAI and change his mind? At the same time, OpenAI is experimenting with ads. Sam Altman stated before that they would only ever use ads as a last resort. It sounds like we are in the phase.

0 views
iDiallo 3 weeks ago

Teleoperation is Always the Butt of the Joke

A few years back, the term "AI" took an unexpected turn when it was redefined as "Actual Indian". As in, a person in India operating the machine remotely. I first heard the term when Amazon was boasting about their cashierless grocery stores. There was a big sign in the store that said "Just Walk Out," meaning you grab your items, walk out, and get charged the correct amount automatically. How did they do it? According to Amazon, they used AI. What kind of AI exactly, nobody was quite sure. But customers started reporting something odd. They weren't charged immediately after leaving the store. Some said it took several days for a charge to appear on their account. It eventually came out that the technology was sophisticated tracking performed by Amazon's team in India. Workers would manually review footage of each customer's visit and charge them accordingly. What's fascinating is that this operation was impressive. Coordinating thousands of store visits, matching items to customers across multiple camera angles, and doing it accurately enough that most people never noticed the delay. But because it was buried under the "AI" label, the moment the truth came out, the whole thing became a punchline. In 2024, Tesla held their "We, Robot" event, where Optimus robots operated a bar. They were serving drinks, dancing, and mingling with guests. It was a pretty impressive display. The robots moved fluidly, held conversations, and handed off drinks without fumbling. Elon Musk claimed they were AI-driven , fully autonomous. People were genuinely impressed by the interactions, and for good reason. Fluid, bipedal locomotion in a crowded social environment is an extraordinarily hard robotics problem. The moment it came out that the robots were teleoperated, the sentiment flipped entirely. It didn't matter how dexterous or natural the movement was. It felt like a magic trick exposed. But think about what was actually being demonstrated. Humanoid robots walking through a crowd, responding in real time to a human operator's inputs, without tripping over guests or spilling drinks. That's not nothing. Slapping "AI" on it turned an engineering achievement into a scandal. More recently, the company 1X unveiled a friendly humanoid robot available for purchase at $20,000. The demo looks genuinely impressive. The robot can perform domestic tasks like doing laundry, folding clothes, and navigating a home environment. And if it doesn't know how to do something, it can be taught. You can authorize a remote worker to take control, demonstrate the task, and the robot learns from that demonstration, adding it to its growing repertoire. That's a legitimately interesting approach to machine learning through human guidance. What got glossed over is how much of the current capability relies on that remote worker. Right after the unveiling, the Wall Street Journal was invited to test the robots. In their video, the robot is being operated entirely by a person sitting in the next room. To be fair, the smoothness of that teleoperation is itself a technical achievement. Real-time control of a bipedal robot performing fine motor tasks, like folding a shirt, requires low-latency communication, precise motor control, and a well-designed interface for the operator. That's years of engineering work. But because teleoperation isn't the product being sold, AI is,that achievement gets treated as evidence of fraud rather than progress. We've built an environment where "teleoperated" has become a slur, and anything short of full autonomy is seen as cheating. Even Waymo, whose self-driving cars have logged millions of autonomous miles, feels compelled to publicly defend themselves against accusations of secretly using remote operators. As if any human involvement would invalidate everything they've built. I think teleoperation is pretty impressive. It's a valuable technology in its own right. Surgeons use it to operate across continents. Industrial operators use it to work in places no human could safely go. In all of these cases, having a human-in-the-loop is the point. Every "AI" product that turns out to have a person behind the curtain makes the public more skeptical. In a parallel universe, there is a version of the tech industry that celebrates teleoperation as a stepping stone. Where we are building tools to make collaboration easier through teleoperation, and it's not viewed as an embarrassing secret.

0 views
iDiallo 3 weeks ago

Taking Our Minds for Granted

How did we do it before ChatGPT? How did we write full sentences, connect ideas into a coherent arc, solve problems that had no obvious answer? We thought. That's it. We simply sat with discomfort long enough for something to emerge. I find this fascinating. You have a problem, so you sit down and think until you find a solution. Sometimes you're not even sitting down. You go for a walk, and your mind quietly wrestles with the idea while your feet carry you nowhere in particular. A solution emerges not because you forced it, but because you thought it through. What happened in that moment is remarkable: new information was created from the collision of existing ideas inside your head. No prompt. No query. Just you. I remember the hours I used to spend debugging a particularly stubborn problem at work. I would stare at the screen, type a few keystrokes, then delete them. I'd meet with our lead engineer and we would talk in circles. At home, I would lie in bed still turning the problem over. And then one night, somewhere around 3 a.m., I dreamt I was running the compiler, making a small change, watching it build, and suddenly it worked. I woke up knowing the answer before I had even tested it. I had to wait until morning to confirm what my sleeping mind had already solved. That's the mind doing what it was built to do. Writers know this feeling too. A sentence that won't cooperate in the afternoon sometimes writes itself during a morning shower. Scientists have described waking up with the solution to a problem they fell asleep wrestling with. Mendeleev wrote in his dairy that he saw the periodic table in a dream . The mind that keeps working when we stop forcing it. The mind can generate new ideas from its own reflection, something we routinely accuse large language models of being incapable of. LLMs recombine what already exists; the human mind makes unexpected leaps. But increasingly, it feels as though we are outsourcing those leaps before we ever attempt them. Why sit with a half-formed thought when you can just ask? Why let an idea marinate when a tool can hand you something polished in seconds? The risk isn't that AI makes us lazy. It's that we slowly forget what it felt like to think hard, and stop believing we're capable of it. It's like forgetting how to do long division because you've always had a calculator in your pocket. The mind is like any muscle. Leave it unstrained and it weakens. Push it and it grows. The best ideas you will ever have are still inside you, waiting for the particular silence that only comes when you stop reaching for your phone. In the age of AI, the most radical thing you can do might simply be to think.

0 views
iDiallo 3 weeks ago

Programming is free

A college student on his spring break contacted me for a meeting. At the time, I had my own startup and was navigating the world of startup school with Y Combinator and the publicity from TechCrunch. This student wanted to meet with me to gain insight on the project he was working on. We met in a cafe, and he went straight to business. He opened his MacBook Pro, and I glimpsed at the website he and his partner had created. It was a marketplace for college students. You could sell your items to other students in your dorm. I figured this was a real problem he'd experienced and wanted to solve. But after his presentation, I only had one question in mind, about something he had casually dropped into his pitch without missing a beat. He was paying $200 a month for a website with little to no functionality. To add to it, the website was slow. In fact, it was so slow that he reassured me the performance problems should disappear once they upgraded to the next tier. Let's back up for a minute. When I was getting started, I bought a laptop for $60. A defective PowerBook G4 that was destined for the landfill. I downloaded BBEdit, installed MAMP, and in little to no time I had clients on Craigslist. That laptop paid for itself at least 500 times over. Then a friend gave me her old laptop, a Dell Inspiron e1505. That one paved the way to a professional career that landed me jobs in Fortune 10 companies. I owe it all not only to the cheap devices I used to propel my career and make a living, but also to the free tools that were available. My IDE was Vim. My language was PHP, a language that ran on almost every server for the price of a shared hosting plan that cost less than a pizza. My cloud was a folder on that server. My AI pair programmer was a search engine and a hope that someone, somewhere, had the same problem I did and had posted the solution on a forum. The only barrier to entry was the desire to learn. Fast forward to today, every beginner is buying equipment that can simulate the universe. Before they start their first line of code, they have subscriptions to multiple paid services. It's not because the free tools have vanished, but because the entire narrative around how to get started is now dominated by paid tools and a new kind of gatekeeper: the influencer. When you get started with programming today, the question is "which tool do I need to buy?" The simple LAMP stack (Linux, Apache, MySQL, PHP) that launched my career and that of thousands of developers is now considered quaint. Now, beginners start with AWS. Some get the certification before they write a single line of code. Every class and bootcamp sells them on the cloud. It's AWS, it's Vercel, it's a dozen other platforms with complex pricing models designed for scale, not for someone building their first "Hello, World!" app. Want to build something modern? You'll need an API key for this service, a paid tier for that database, and a hosting plan that charges by the request. Even the code editor, once a simple download, is now often a SaaS product with a subscription. Are you going to use an IDE without an AI assistant? Are you a dinosaur? To be a productive programmer, you need a subscription to an AI. It may be a fruitless attempt, but I'll say it anyway. You don't need any paid tools to start learning programming and building your first side project. You never did. The free tools are still there. Git, VS Code (which is still free and excellent!), Python, JavaScript, Node.js, a million static site generators. They are all still completely, utterly free. New developers are not gravitating towards paid tools by accident. Other than code bootcamps selling them on the idea, the main culprit is their medium of learning. The attention economy. As a beginner, you're probably lost. When I was lost, I read documentation until my eyes bled. It was slow, frustrating, and boring. But it was active. I was engaging with the code, wrestling with it line by line. Today, when a learner is lost, they go to YouTube. A question I am often asked is: Do you know [YouTuber Name]? He makes some pretty good videos. And they're right. The YouTuber is great. They're charismatic, they break down complex topics, and they make it look easy. In between, they promote Hostinger or whichever paid tool is sponsoring them today. But the medium is the message, and the message of YouTube is passive consumption . You watch, you nod along, you feel like you're learning. And then the video ends. An algorithm, designed to keep you watching, instantly serves you the next shiny tutorial . You click. You watch. You never actually practice. Now instead of just paying money for the recommended tool, you are also paying an invisible cost. You are paying with your time and your focus. You're trading the deep, frustrating, but essential work of building for the shallow, easy dopamine hit of watching someone else build. The influencer's goal is to keep you watching. The platform's goal is to keep you scrolling. Your goal should be to stop watching and start typing. These goals are at odds. I told that student he was paying a high cost for his hobby project. A website with a dozen products and images shouldn't cost more than a $30 Shopify subscription. If you feel more daring and want to do the work yourself, a $5 VPS is a good start. You can install MySQL, Rails, Postgres, PHP, Python, Node, or whatever you want on your server. If your project gains popularity, scaling it wouldn't be too bad. If it fails, the financial cost is a drop in a bucket. His story stuck with me because it wasn't unique. It's the default path now: spend first, learn second. But it doesn't have to be. You don't need an AI subscription. You don't need a YouTuber. You need a text editor (free), a language runtime (free), and a problem you want to solve. You need to get bored enough to open a terminal and start tinkering. The greatest gift you can give yourself as a new programmer isn't a $20/month AI tool or a library of tutorial playlists. It's the willingness to stare at a blinking cursor and a cryptic error message until you figure it out yourself. Remember, my $60 defective laptop launched a career. That student's $200/month website taught him to wait for someone else to fix his problems. The only difference between us was our approach. The tools for learning are, and have always been, free. Don't let anyone convince you otherwise.

0 views
iDiallo 4 weeks ago

Factional Drift: We cluster into factions online

Whenever one of my articles reaches some popularity, I tend not to participate in the discussion. A few weeks back, I told a story about me, my neighbor and an UHF remote . The story took on a life of its own on Hackernews before I could answer any questions. But reading through the comment section, I noticed a pattern on how comments form. People were not necessarily talking about my article. They had turned into factions. This isn't a complaint about the community. Instead it's an observation that I've made many years ago but didn't have the words to describe it. Now I have the articles to explore the idea. The article asked this question: is it okay to use a shared RF remote to silence a loud neighbor ? The comment section on hackernews split into two teams. Team Justice, who believed I was right to teach my neighbor a lesson. And then Team Boundaries, who believed I was “a real dick”. But within hours, the thread stopped being about that question. People self-sorted into tribes, not by opinion on the neighbor, but by identity. The tinkerers joined the conversation. If you only looked through the comment section without reading the article, you'd think it was a DIY thread on how to create an UHF remote. They turned the story into one about gadget showcasing. TV-B-Gone, Flipper Zeros, IR blasters on old phones, a guy using an HP-48G calculator as a universal remote. They didn't care about the neighbor. They cared about the hack. Then came the apartment warriors. They bonded over their shared suffering experienced when living in an apartment. Bad soundproofing, cheap landlords, one person even proposed a tool that doesn't exist yet, a "spirit level for soundproofing". The story was just a mirror for their own pain. The diplomats quietly pushed back on the whole premise. They talked about having shared WhatsApp groups, politely asking, and collective norms. A minority voice, but a distinct one. Why hack someone when you can have a conversation? The Nostalgics drifted into memories of old tech. HAM radios, Magnavox TVs, the first time a remote replaced a channel dial. Generational gravity. Back in my days... Nobody decided to join these factions. They just replied to the comment that felt like their world, and the algorithm and thread structure did the rest. Give people any prompt, even a lighthearted one, and they will self-sort. Not into "right" and "wrong," but into identity clusters. Morning people find morning people. Hackers find hackers. The frustrated find the frustrated. You discover your faction. And once you're in one, the comments from your own tribe just feel more natural to upvote. This pattern might be true for this article, but what about others? I have another article that has gone viral twice . On this one the question was: Is it ethical to bill $18k for a static HTML page? Team Justice and Team Boundaries quickly showed up. "You pay for time, not lines of code." the defenders argued. "Silence while the clock runs is not transparent." the others criticized. But then the factions formed. People self-sorted into identity clusters, each cluster developed its own vocabulary and gravity, and the original question became irrelevant to most of the conversation. Stories about money and professional life pull people downward into frameworks and philosophy. The pricing philosophers exploded into a deep rabbit hole on Veblen goods, price discrimination, status signaling, and perceived value. Referenced books, studies, and the "I'm Rich" iPhone app. This was the longest thread. The corporate cynics shared war stories about use-it-or-lose-it budgets, contractors paid to do nothing, and organizational dysfunction. Veered into a full government-vs-corporations debate that lasted dozens of comments. The professional freelancers dispensed practical advice. Invoice periodically, set scope boundaries, charge what you're worth. They drew from personal contractor experience. The ethicists genuinely wrestled with whether I did the right thing. Not just "was it legal" but "was it honest." They were ignored. The psychology undergrads were fascinated by the story. Why do people Google during a repair job and get fired? Why does price change how you perceive quality? Referenced Cialdini's "Influence" and ran with it. Long story short, a jeweler was trying to move some turquoise and told an assistant to sell them at half price while she was gone. The assistant accidentally doubled the price, but the stones still sold immediately. The kind of drift between the two articles was different. The remote thread drifted laterally: people sorted by life experience and hobby (gadget lovers found gadget lovers, apartment sufferers found apartment sufferers). The $18k thread drifted deep: people sorted by intellectual framework (economists found economists, ethicists found ethicists, corporate cynics found corporate cynics). The $18k thread even spawned nested debates within subfactions. The Corporate Cynics thread turned into a full government-vs-corporations philosophical argument that had nothing to do with me or the article. But was all this something that just happens with my articles? I needed an answer. So I picked a recent article I enjoyed by Mitchell Hashimoto . And it was about AI, so this was perfect to test if these patterns exist here as well. Now here is a respected developer who went from AI skeptic to someone who runs agents constantly. Without hype, without declaring victory, just documenting what worked. The question becomes: Is AI useful for coding, or is it hype? The result wasn't entirely binary. I spotted 3 groups at first. Those in favor said: "It's a tool. Learn to use it well." Those against it said: "It's slop. I'm not buying it." But then a third group. The fence-sitters (I'm in this group): "Show me the data. What does it cost?" And then the factions appeared. The workflow optimizers used the article as a premise to share their own agent strategy. Form an intuition on what the agent is good at, frame and scope the task so that it is hard for the AI to screw up, small diffs for faster human verification. The defenders of the craft dropped full on manifestos. “AI weakens the mind” then references The Matrix. "I derive satisfaction from doing something hard." This group isn't arguing AI doesn't work. They're arguing it shouldn't work, because the work itself has intrinsic value. The history buffs joined the conversation. There was a riff on early aircraft being unreliable until the DC-3, then the 747. Architects moving from paper to CAD. They were framing AI adoption as just another tool transition in a long history of tool transitions. They're making AI feel inevitable, normal, obvious. The Appeal-to-Mitchell crowd stated that Mitchell is a better developer than you. If he gets value out of these tools you should think about why you can't. The flamewar kicked in! Someone joked: "Why can't you be more like your brother Mitchell?" The Vibe-code-haters added to the conversation. The term 'vibe coding' became a battleground. Some using it mockingly, some trying to redefine it. There was an argument that noted the split between this thread (pragmatic, honest) and LinkedIn (hyperbolic, unrealistic). A new variable from this thread was the author's credibility, plus he was replying in the threads. Unlike with my articles, the readers came to this thread with preconceived notions. If I claimed that I am now a full time vibe-coder, the community wouldn't care much. But not so with Mitchell. The quiet ones lose. The Accountants, the Fence-Sitters, they asked real questions and got minimal traction. "How much does it cost?" silence. "Which tool should I use?" minimal engagement. The thread's energy went to the factions that told a better story. One thing to note is that the Workflow Optimizers weren't arguing with the Skeptics. The Craft Defenders weren't engaging with the Accountants. Each faction found its own angle and stayed there. Just like the previous threads. Three threads. Three completely different subjects: a TV remote story, an invoice story, an AI adoption guide. Every single one produced the same underlying architecture. A binary forms. Sub-factions drift orthogonally. The quiet ones get ignored. The entertaining factions win. The type of drift changes based on the article. Personal anecdotes (TV remote) pull people sideways into shared experience. Professional stories ($18k invoice) pull people down into frameworks. Prescriptive guides (AI adoption) pull people into tactics and philosophy. But the pattern, like the way people self-sort, the way factions ignore each other, the way the thread fractures, this remained the same. The details of the articles are not entirely relevant. Give any open-ended prompt to a comment section and watch the factions emerge. They're not coordinated. They're not conscious. They just... happen. For example, the Vibe-Code Haters faction emerged around a single term "vibe coding." The semantic battle became its own sub-thread. Language itself became a faction trigger. Now that you spotted the pattern, you can't unsee it. That's factional drift.

0 views
iDiallo 1 months ago

Markdown.exe

I've been spending time looking through "skills" for LLMs, and I feel like I'm the only one panicking. Nobody else seems to care. Agent skills are supposed to be a way to teach your LLM how to handle specific tasks. For example, if you have a particular method for adding tasks to your calendar, you write a skill file with step-by-step instructions on how to retrieve a task from an email and export it. Once the agent reads the file, it knows exactly what to do, rather than guessing. This can be incredibly useful. But when people download and share skills from the internet, it becomes a massive attack vector. Whether it's a repository or a marketplace, there is ample room for attackers to introduce malicious instructions that users never bother to vet. It is happening . We are effectively back to the era of downloading files from the internet and running them without a second thought.

0 views
iDiallo 1 months ago

Last year, all my non-programmer friends built apps

Last year, all my non-programmer friends were building apps. Yet today, those apps are nowhere to be found. Everyone followed the ads. They signed up for Lovable and all the fancy app-building services that exist. My LinkedIn feed was filled with PMs who had discovered new powers. Some posted bullet-point lists of "things to do to be successful with AI." "Don't work hard, work smart," they said, as if it were a deep insight. I must admit, I was a bit jealous. With a full-time job, I don't get to work on my cool side project, which has collected enough dust to turn into a dune. There's probably a little mouse living inside. I'll call him Muad'Dib. What was I talking about? Right. The apps. Today, my friends are silent. I still see the occasional post on LinkedIn, but they don't garner the engagement they used to. The app-building AI services still exist, but their customers have paused their subscriptions. Here's a conversation I had recently. A friend had "vibe-coded" an Android app. A platform for building communities around common interests. Biking enthusiasts could start a biking community. Cooking fans could gather around recipes. It was a neat idea. While using the app on his phone, swiping through different pages and watching the slick animations, I felt a bit jealous. Then I asked: "So where is the data stored?" "It's stored on the app," he replied. "I mean, all the user data," I pressed. "Do you use a database on AWS, or any service like that?" We went back and forth while I tried to clarify my question. His vibe-knowing started to show its limits. I felt some relief, my job was safe for now. Joking aside, we talked about servers, app architecture, and even GDPR compliance. These weren't things the AI builder had prepared him for. This conversation happens often now when I check in on friends who vibe-coded their way into developing an app or website. They felt on top of the world when they were getting started. But then they got stuck. An error message they couldn't debug. The service generating gibberish. Requests the AI couldn't understand. How do you build the backend of an app when you don't know what a backend is? And when the tool asks you to sign up for Google Cloud and start paying monthly fees, what are you supposed to do? Another friend wanted to build a newsletter. Right now, ChatGPT told him to set up WordPress and learn about SMTP. These are all good things to learn, but the "S" in SMTP is a lie. It's not that simple. I've been trying to explain to him why the email he is sending from the command line is not reaching his gmail. The AI services that promise to build applications are great at making a storefront you don't want to modify. The moment you start customizing, you run into problems. That's why all Lovable websites look exactly the same. These services continue to exist. The marketing is still effective. But few people end up with a product that actually solves their problems. My friends spent money on these services. They were excited to see a polished brochure. The problem is, they didn't know what it takes to actually run an app. The AI tools are amazing at generating the visible 20% of an app. But the remaining invisible 80% is where the actual work is. The infrastructure, the security, maintenance, scaling issues, and then the actual cost. The free tier on AWS doesn't last forever. And neither does your enthusiasm when you start paying $200/month for a hobby project. My friends' experiments weren't failures. They learned something valuable. Some now understand why developers get paid what they do. Some even started taking programming bootcamp. But the rest have moved on. Their app sits dormant in an abandoned github repo. Their domain will probably expire this year. They're back to their day jobs, a little wiser about the difference between a demo and a product. Their LinkedIn profiles are quieter now, they have stopped posting about "working smart, not hard." As for me, I should probably check on Muad'Dib. That side project isn't going to build itself. AI or no AI.

1 views