A Commentary On GenAI Inspected Through Different Lenses
The number of concerning reports related to generative AI is rising at an alarming rate, yet all we do is make ourselves more dependent on the brand new technology. Why? It’s not just that we’re lazy—we are!—there are many more variables involved. As part of my quest to understand what the heck is going on and what is becoming of one of my prime professional fields, software engineering, I read and read and read. And then I read and read and read. And then I became disappointed and depressed. I see colleagues jumping the gun, others being more prudent. I see industry discovering there’s yet another buck to be made. I see students forgoing learning altogether.

I wanted to form my own judgement of genAI in its modern form by looking at it from four different viewpoints: that of the software engineer, that of the teacher, that of the creativity researcher, and that of the concerned civilian living in this capitalist world. References can be found at the end of this article.

Does anyone remember Dan North’s Programming is not a craft post from 2011? I do, and I often think about it. With the advent of genAI, North’s post might be even more polarising. Well, congrats to you, you’ve won the lottery: here’s a tool that can immediately add customer value. If you don’t care about inner code quality, you can have genAI generate (slop) code faster than you can think. If you love the impact of software itself, you’ll love Claude Code et al. Are you perhaps an enterprise software engineer? In that case you’ll be able to scaffold and generate CRUD crap even faster, hooray!

But wait a minute. You obviously won’t take true ownership of this code: you’ll want to impress your clients with the results, but keep the lid closed at all times. The less ownership and feeling of responsibility, the easier it becomes to completely let go of the brakes and just accept any future changes without code reviewing at all.
People who are now claiming they will keep themselves in the loop as an architectural reviewer shouldn’t fool themselves. After the nth time pressing the green button, and as the technology further evolves, you’ll eventually wind up accepting the slop anyway. Verification burnout will pop up next: because it’s not your own code you’re attempting to so carefully review, it actually takes more instead of less effort, increasing your stress level instead of reducing it!

Does code quality really matter if all clients see is the end product? As a gamer, I just want the game to run smoothly; I don’t care about the spaghetti. Or do I? I do, implicitly: the more spaghetti, the less smoothly it’ll run. The more holes, the more soft locks and crashes. So programming might or might not be a craft, but as Cal Newport and Robert M. Pirsig say: the concept of Quality is important!

Maybe it’s time to become a goose farmer instead. The only thing left for you to do is move to a depressing quality control position instead of crafting something yourself. No more “I built this”, but “I managed its orchestration”. Depending on how you view this, it’s either a promotion or a demotion. I tend to agree with the latter. Why? Because we humans are the Homo Faber, the ones who like to control their fate and environment with the use of tools. Yes, genAI certainly is a tool, but it’s a tool that takes away all other tools. Instead of kneading dough by hand, feeling it, knowing when to ferment and when to bake, we’re forced to oversee the industrial Wonder Bread production process. Instead of manipulating leather to create a pair of shoes, we’re employed by Nike to watch shoes being made by machines. This somehow reminds me of David Graeber’s bullshit jobs, where useless paper pushing is prevalent but is also called a “revolution” when it comes to professional purpose. I beg to differ. Humans want to make things. They want to be proud of the things they made.
The fact that the open source community rejects this slop code is a telling sign: if you’re programming in the open, your peers who also think highly of software development will keep you in check. But when it’s “for (enterprise) work”, we don’t care: generate away, I’m not the true owner anyway. If programming is a craft, then the recently leaked Claude Code CLI source code will be a big joke to you: constructs are endlessly repeated, and spaghetti is topped with more spaghetti. Generated code doesn’t even seem to be made to be (re)read: how, then, are we expecting to maintain it, or guarantee its security? By letting the agent maintain it and guarantee its security, I hear you say?

What is there left to say? I’ve already asserted that genAI tools are worse than Stack Overflow. Sure, mindless copy-pasting existed long before this AI storm, but not on this scale. GenAI is able to provide a working solution to an assignment faster than I can come up with the assignment itself. Suddenly, all our traditional evaluation systems and grading workflows have become useless: scoring high on a checklist is just a matter of pasting the requirements into Claude. We try to adapt by requiring oral defences, having students explain what they did and why, and asking them to walk us through a small imaginative change. The result is a spectacular fall in grades compared to previous years: they are simply unable (1) to explain the code they did not make but generated, and (2) to make small adjustments, as they skipped the hard part: the learning and understanding. Yet in the hallways, I hear lots of students bragging to each other about how they let ChatGPT do their homework. Congrats. We’ll see each other again in September for your second try.

We often forget something else very important: peer pressure. About a year ago, on the train, I overheard a few girls on their way to a university lecture chatting about their homework.
One of them complained: “I put in all that hard work, but all the others are just using ChatGPT to do it. Next time, I’m not doing all that, I’m also just using AI, that’s not fair!” I should have gotten up to congratulate her: the only one actively learning is the one putting in the hard work! There is no shortcut to becoming proficient. There is only hard work. Sure, the more you prompt your way through your curriculum, the more proficient you’ll become with the tool, but ask yourself: did you learn what you wanted to learn, or did you learn to prompt?

When I was an undergraduate, I used to fill A4 pages with summaries of courses to help me study. Just before the exams, I could quickly glance over these pages to remember the core concepts. Some students sold their summaries to others. Now, genAI can generate summaries for you. But smart students know you would only be fooling yourself: the purpose of the summaries is to make them, to study and gradually fill the pages. Not to acquire a summary. The journey is the destination. When my summaries were done, I could just as well throw them away: they were just a tool to help with the hard work. Yet it’s next to impossible to explain this to a student who only sees how easy it is to jump to an outcome by leveraging AI. Maybe legislation will help here? (Not really; see below.) In case all this is not clear: students are becoming dumber, yet the programming projects they hand in are better than ever.

As the inventor of the framework presented in The Creative Programmer, I thought it would be interesting to take a look at the seven domains and how genAI fits into them.
In The Creative Programmer, I present seven distinct but heavily intertwined themes that define the way we are creative when solving a programming problem; they are listed, with my genAI commentary, near the end of this article. I might be overly focusing on the negative here and have to recognise the possible advantages of having genAI as a tool available in our creative toolbox—but only when we learn to wield it properly and with moderation, which is not exactly what we are doing lately, is it?

In an interesting systematic literature review (2025), with lots of references to other academic material if that’s what you’re looking for, Holzner et al. conclude: “[…] human-GenAI collaboration shows small but consistent gains in creative output across tasks and contexts. However, collaboration with GenAI reduces the diversity of ideas, indicating a risk of creative outputs that could become more homogeneous.” More sameness; exactly what we need when it comes to creativity, right? The more we use genAI, the more creatively we will be able to prompt, but the less creative we will be in actually applying a solution to the problem. We no longer create: we generate.

We know that genAI will do everything in its power to keep you locked within that chat box. Its tendency to tell you what you want to hear, agree with your statements, and serve you whatever you ask for creates biases and dependencies. It’s not unlike a drug that slowly but surely diminishes your critical thinking, and thus, your creativity.

This is where the true nature of humans is revealed: when it comes to earning something for themselves, ethics suddenly becomes a very malleable subject. On morality, ethics, and privacy, everyone agrees that genAI is what Ron Gilbert calls a train wreck.
This bears no further explanation from me: Microsoft slurped all GitHub repositories dry without taking any licenses into account; the book that I painstakingly produced over almost two years was ingested into OpenAI’s systems in about two seconds; … Yet at the same time, everyone also consistently ignores all these topics in favour of their own self-interest. Why, I wonder?

Everyone knows they should eat less meat. Yet almost nobody does. Everyone knows Microsoft (and probably other big tech companies) powers genocide, yet the adoption rate of Windows as an OS is still 95%. Why? Everyone knows the climate is going to shit, yet we happily turn the other way and take the plane on a weekend trip to sip some wine and do some shopping in Italy. As Greta Thunberg said: knowing is not enough. For genAI, similar patterns emerge. We know it’s bad for us, yet we happily close our eyes and use it anyway. Why, I wonder? The power of a drug, the pull, the ease with which something can be done without breaking too much of a sweat?

Here’s a possible answer I suggested before: because humans are inherently lazy. As long as Belgian supermarkets keep stocking apples from both New Zealand and Belgium, most people won’t care and will just pick up whatever. As long as we keep handing out company cars and gearing infrastructure towards car drivers, most people will be driving to work instead of biking. A possible answer to the problem then might be governmental legislation to protect people living in a society from making the wrong choices. And I’m 100% sure that will work! Yet legislation is always (1) happening way too late; or (2) minimised or manipulated by the people who wield the power, because they have bought out key politicians to prevent laws like this from happening. Hence my depression. In the case of genAI, a technology that evolves at lightning speed and is taking the world by storm, legislation will be way too late.
To prove my point: in an attempt to modernise, many Belgian governmental instances have already “embraced” the technology and made many blunders in doing so. The EU is currently evaluating the options. Meanwhile, the San Francisco bros are laughing.

Prompt engineering is the most degenerative thing that ever happened to engineering. It’s a capitalist’s way to minimise the cost of the human. Yet I don’t see genAI disappearing any time soon. Companies and decision makers have smelled the green and won’t let go. I don’t understand how capitalism works, but I know it’s been growing in power ever since we centralised cane sugar plantations with the help of slavery. GenAI is evidently yet another product of capitalism. The companies I’ve worked for wanted more and more profit each year: even when they were satisfied with last year’s profit, the target for the next year was always increased, no matter what. GenAI is already responsible for thousands of layoffs in an attempt to push profit up even more aggressively. To what end, I wonder? Why? To our own detriment. It seems that our cognition is for sale, and the sale has already been made. You know what they say: no returns accepted.

Peer pressure to use genAI on the job is already prevalent, as it “gets things done faster” and so, quite logically, also brings in money faster. Let’s worry about durability and maintenance later, shall we. Also, I’ve seen colleagues fall into the trap of obsessive agent babysitting. Whether at work, on the lunch break, or in the very late evenings: you’ve got to keep those agents spinning! Squeeze the maximum out of your tokens, because they squeeze the maximum out of you. There goes our work-life balance, courtesy of the tools that are supposed to take over our work so we can focus more on the life part.

So as long as I remain in a position to choose whether to put in the work myself for my (hobby) programming projects, I will.
As long as I am in a position to bike instead of drive, to be a vegetarian instead of a meat-eater, or, in short, to be a concerned civilian, I will. And so should you. Even though that won’t stop this devolution from happening at all. Sure, I will occasionally consult Gemini et al. to ask a specific question about a broken config file that has me scratching my head. But I treat these queries as specialised internet searches, not as a way to evade the hard work completely. I’ve become Albert Camus’s pessimist. I’m genuinely afraid of how our kids will turn out if we don’t act quickly to save our youth. Yet I won’t stop being an activist.

The seven domains of The Creative Programmer, revisited with genAI in mind:

- Technical Knowledge—if we don’t have any knowledge, we won’t have the creative ability to combine it. Guess what: genAI is actively deskilling us. The more you generate, the less you actively learn, harming your creative ability to solve problems. Creativity requires a rich mental toolbox to draw from. By prompting, you’re not exactly filling that toolbox.
- Communication—I see both a good and a bad thing here: if your colleagues aren’t immediately available, rubber ducking with an AI agent might help identify the problem. On the other hand, it’s also awfully easy to stay locked inside that comfortable genAI chatbox. Why ask anyone when it tells you what you want to hear?
- Constraints—if you manage to constrain yourself (ha!) to only ask AI for ten possible ways to approach a problem you don’t know how to approach, without having it solve the problem for you, this might help you learn how to operate in certain heavily constrained environments. Unfortunately, it’s very easy to just have it generate the solution as well, rendering a possible learning path useless.
- Critical Thinking—the more we use genAI, the less critical we are and the more likely we are to accept whatever comes out of it. Validating the source material outside of that chatbox suddenly requires a lot of willpower. I’ve even heard of people changing their entire preferred technology stack to something more popular because genAI is better at it. That’s very sad.
- Curiosity—judge for yourself. What does reliance on genAI tell you about your curiosity to discover other things?
- Creative State of Mind—without Cal Newport’s “Deep Work”, there won’t be an “aha!” moment. The 90% perspiration, 10% inspiration is suddenly turned on its head: Claude is the one sweating for us, even at night, while all we do is press the green button and write “LGTM!”. Maybe we should take the time to read Newport’s new book Slow Productivity.
- Creative Techniques—genAI itself as a technique might belong in this section; but the question is: are we the ones wielding the tool, or is the tool wielding us?

Reading List

I’d rather link to personal blog posts than academic publications here, as we’re dealing with something that impacts us on a personal level, and by the time the relevant 2026 studies are published, the landscape will have changed yet again. The following folks expressed their experience and opinion on genAI:

- Nolan Lawson; How I use AI agents to write code. A clear conflicted state: it’s okay to generate away at work, but “I also don’t use AI for my open-source work, because it just feels… ick. The code is ‘mine’ in some sense, but ultimately, I don’t feel true ownership over it, because I didn’t write it”.
- John Allsopp; The Structure of Engineering Revolutions
- Dave Gauer; A programmer’s loss of social identity
- Cory Zue; Software got weird
- Doug Belshaw; Claude’s Constitution and the trap of corporate AI ethics
- Tom Hall; Towards a Slow Code Manifesto
- Rishi Baldawa; The Reviewer Isn’t the Bottleneck
- Information/superhighway.net; On The Need For Understanding
- Antoine Leblanc; Chatbot psychosis (Mastodon). “this is the main reason why i believe that chatbot addiction / chatbot psychosis is a LOT more widespread than we realise: people with a clear understanding of the ethical issues try claude once, it does a thing correctly enough, they get one-shot, and they start posting like if sephiroth was on linked-in, ethical concerns be damned. it keeps happening.” Exactly.
- Sean Boots; Generative AI vegetarianism
- Simon Willison; Perhaps not Boring Technology after all
- Sophie from Localghost; Stop Generating Start Thinking
- Michael Harley; AI Stance
- Lauren Woolsey; AI Sucks And You Shouldn’t Use It
- Ron Gilbert; My Dinner With AI
- Matthew Lamont; Generative AI is an Evil Technology
- Arne Brasseur; The AI Divide (Mastodon)
- Zach Manson; CoPilot Edited an Ad Into My PR
- Michael Taggart; I Used AI. It Worked. I Hated It.
- Bob Nystrom; The Value of Things. GenAI can have utility but not meaning.
- Jonny; Dismantling Claude Code source (Mastodon). Another train wreck, as expected.
- Cal Newport; In Defense of Thinking
- Hamilton Greene; Why I’m moving from F# to C#
- Senator Bernie Sanders vs. Claude (YouTube)
- Joel Chrono; Not having to work would be nice (but not like this)

By Wouter Groeneveld on 8 April 2026.