Posts in Career (20 found)
Sean Goedecke Yesterday

Big tech engineers need big egos

It’s a common position among software engineers that big egos have no place in tech [1]. This is understandable - we’ve all worked with some insufferably overconfident engineers who needed their egos checked - but I don’t think it’s correct. In fact, I don’t know if it’s possible to survive as a software engineer in a large tech company without some kind of big ego. However, it’s more complicated than “big egos make good engineers”. The most effective engineers I’ve worked with are simultaneously high-ego in some situations and surprisingly low-ego in others. What’s going on there?

Software engineering is shockingly humbling, even for experienced engineers. There’s a reason this joke is so popular: the minute-to-minute experience of working as a software engineer is dominated by not knowing things and getting things wrong. Every time you sit down and write a piece of code, it will have several things wrong with it: some silly things, like missing semicolons, and often some major things, like bugs in the core logic. We spend most of our time fixing our own stupid mistakes.

On top of that, even when we’ve been working on a system for years, we still don’t know that much about it. I wrote about this at length in Nobody knows how large software products work, but the reason is that big codebases are just that complicated. You simply can’t confidently answer questions about them without going and doing some research, even if you’re the one who wrote the code. When you have to build something new or fix a tricky problem, it can often feel straight-up impossible to begin, because good software engineers know just how ignorant they are and just how complex the system is. You just have to throw yourself into the blank sea of millions of lines of code and start wildly casting around to try and get your bearings.

Software engineers need the kind of ego that can stand up to this environment. In particular, they need to have a firm belief that they can figure it out, no matter how opaque the problem seems; that if they just keep trying, they can break through to the pleasant (though always temporary) state of affairs where they understand the system and can see at a glance how bugs can be fixed and new features added [2].

What about the non-technical aspects of the job? Nobody likes working with a big ego, right? Wrong. Every great software engineer I’ve worked with in big tech companies has had a big ego - though as I’ll say below, in some ways these engineers were surprisingly low-ego.

You need a big ego to take positions. Engineers love being non-committal about technical questions, because they’re so hard to answer and there’s often a plausible case for either side. However, as I keep saying, engineers have a duty to take clear positions on unclear technical topics, because the alternative is a non-technical decision maker (who knows even less) just taking their best guess. It’s scary to make an educated guess! You know exactly all the reasons you might be wrong. But you have to do it anyway, and ego helps a lot with that.

You need a big ego to be willing to make enemies. Getting things done in a large organization means making some people angry. Of course, if you’re making lots of people angry, you’re probably screwing up: being too confrontational or making obviously bad decisions. But if you’re making a large change and one or two people are angry, that’s just life.
In big tech companies, any big technical decision will affect a few hundred engineers, and one of them is bound to be unhappy about it. You can’t be so conflict-averse that you let that stop you from doing it, if you believe it’s the right decision. In other words, you have to have the confidence to believe that you’re right and they’re wrong, even though technical decisions always involve unclear tradeoffs and it’s impossible to get absolute certainty.

You need a big ego to correct incorrect or unclear claims. When I was still in the philosophy world, the Australian logician Graham Priest had a reputation for putting his hand up and stopping presentations when he didn’t understand something that was said, and only allowing the seminar to continue when he felt like he understood. From his perspective, this wasn’t rude: after all, if he couldn’t understand it, the rest of the audience probably couldn’t either, and so he was doing them a favor by forcing a clearer explanation from the speaker. This is obviously a sign of a big ego. It’s also a trait that you need in a large tech company.

People often nod and smile their way past incorrect technical claims, even when they suspect they might be wrong - assuming that they’ve just misunderstood and that somebody else will correct it, if it’s truly wrong. If you are the most senior engineer in the room, correcting these claims is your job. If everyone in the room is so pro-social and low-ego that they go along to get along, decisions will get made based on flatly incorrect technical assumptions, projects will get funded that are impossible to complete, and engineers will burn weeks or months of their careers vainly trying to make these projects work. You have to have a big enough ego to think “actually, I think I’m right and everyone in this room is confused”, even when the room is full of directors and VPs.

All of this selects for some pretty high-ego engineers. But in order to actually succeed in these roles in large tech companies, you need to have a surprisingly low ego at times. I think this is why really effective big tech engineers are so rare: because it requires such a delicate balance between confidence and diffidence. To be an effective engineer, you need to have a towering confidence in your own ability to solve problems and make decisions, even when people disagree. But you also need to be willing to instantly subordinate your ego to the organization, when it asks you to.

At the end of the day, your job - the reason the company pays you - is to execute on your boss’s and your boss’s boss’s plans, whether you agree with them or not. Competent software engineers are allowed quite a lot of leeway about how to implement those plans. However, they’re allowed almost no leeway at all about the plans themselves. In my experience, being confused about this is a common cause of burnout [3]. Many software engineers are used to making bold decisions on technical topics and being rewarded for it. Those software engineers then make a bold decision that disagrees with the VP of their organization, get immediately and brutally punished for it, and are confused and hurt.

In fact, sometimes you just get punished and there’s nothing you can do. This is an unfortunate fact of how large organizations function: even if you do great technical work and build something really useful, you can fall afoul of a political battle fought three levels above your head, and come away with a worse reputation for it. Nothing to be done!
This can be a hard pill to swallow for the high-ego engineers that tend to lead really useful technical projects.

You also have to be okay with having your projects cancelled at the last minute. It’s a very common experience in large tech companies that you’re asked to deliver something quickly, you buckle down and get it done, and then right before shipping you’re told “actually, let’s cancel that, we decided not to do it”. This is partly because the decision-making process can be pretty fluid, and partly because many of these asks originate from off-hand comments: the CTO implies that something might be nice in a meeting, the VPs and directors hustle to get it done quickly, and then in the next meeting it becomes clear that the CTO doesn’t actually care, so the project is unceremoniously cancelled [4].

Nobody likes to work with a bully, or with someone who refuses to admit when they’re wrong, or with somebody incapable of empathy. But you really do need a strong ego to be an effective software engineer, because software engineering requires you to spend most of your day in a position of uncertainty or confusion. If your ego isn’t strong enough to stand up to that - if you don’t believe you’re good enough to power through - you simply can’t do the job. This is particularly true when it comes to working in a large software company. Many of the tasks you’re required to do (particularly if you’re a senior or staff engineer) require a healthy ego.

However, there’s a kind of catch-22 here. If it insults your pride to work on silly projects, or to occasionally “catch a stray bullet” in the organization’s political fights, or to have to shelve a project that you worked hard on and is ready to ship, you’re too high-ego to be an effective software engineer. But if you can’t take firm positions, or if you’re too afraid to make enemies, or you’re unwilling to speak up and correct people, you’re too low-ego. Engineers who are low-ego in general can’t get stuff done, while engineers who are high-ego in general get slapped down by the executives who wield real organizational power. The most successful kind of software engineer is therefore a chameleon: low-ego when dealing with executives, but high-ego when dealing with the rest of the organization [5].
1. What do I mean by “ego”, in this context? More or less the colloquial sense of the term: a somewhat irrational self-confidence, a tendency to believe that you’re very important, the sense that you’re the “main character”, that sort of thing.
2. Why is this “ego”, and not just normal confidence? Well, because of just how murky and baffling software problems feel when you start working on them. You really do need a degree of confidence in yourself that feels unreasonable from the inside. It should be obvious, but I want to explicitly note that you don’t just need ego: you also have to be technically strong enough to actually succeed when your ego powers you through the initial period of self-doubt.
3. I share the increasingly-common view that burnout is not caused by working too hard, but by hard work unrewarded. That explains why nothing burns you out as hard as being punished for hard work that you expected a reward for.
4. It’s more or less exactly this scene from Silicon Valley.
5. This description sounds a bit sociopathic to me. But, on reflection, it’s fairly unsurprising that competent sociopaths do well in large organizations. Whether that kind of behavior is worth emulating or worth avoiding is up to you, I suppose.

iDiallo Yesterday

It's Work that taught me how to think

On the first day of my college CS class, the professor walked in holding a Texas Instruments calculator above his head like Steve Jobs unveiling the first iPhone. The students sighed. They had expected computer science to involve little math. The professor told us he had helped build that calculator in the eighties, then spent a few minutes talking about his career and the process behind it. Then he plugged the device into his computer, opened a terminal on the projector, and pushed some code onto it. A couple of minutes later, he unplugged the cable, powered on the calculator, and sure enough, Snake was running on it. A student raised his hand. The professor leaned forward, eager for the first question of the semester. "Um... is this going to be on the test?" While the professor was showing us what it actually means to build something, to push code onto hardware and watch it come alive, his students were already thinking about the grade. About the exit. The experience meant nothing unless it converted into points. That was college for me. Everyone was chasing a passing grade to get to the next class. Learning was mostly incidental. The professors tried, but our incentives were completely misaligned. Talk of higher education becoming obsolete was already in the air, especially in CS. As enthusiastic as I had been when I started, that enthusiasm got chipped away one class at a time until the whole thing felt mechanical. Something I just had to get through. I dropped out shortly after the C++ class, which had taught me almost nothing about programming anyway. I was broke and could only pay for so many courses out of pocket. So I took my skills, such as they were, to a furniture store warehouse. My day job. When customers bought furniture, we pulled their merchandise from the back and loaded it into their trucks. They signed a receipt, we kept a copy, and those copies went into boxes labeled by month and date. At the end of the year, the boxes went onto a pallet, the pallet got shrink-wrapped, and a forklift tucked it away in a high storage compartment. Whenever an accountant called requesting a signed copy, usually because a customer was disputing a charge, the whole process ran in reverse. Someone licensed on the forklift had to retrieve the pallet, we cut the shrink-wrap, found the right box, and sifted through hundreds of receipts until we found the one we needed. The process took hours. One day I decided enough was enough. After my shift, I grabbed the day's signed receipts and fed them into a scanner. For each one, I created two images: a full copy and a cropped version showing just the top of the receipt where the order number was printed. I found a pirated OCR application, then used VBScript and a lot of Googling to write a script that read the order number and renamed each image file to match it. I also wrote my first Excel macros, also in VBScript. When everything was wired together, I had a working system. Each evening, I would enter the day's order numbers, scan the receipts, and let the script match them up with a preview attached. When the OCR failed to read a number, the file was renamed "unknown" with an incrementing number so I could verify those manually. From then on, when an accountant called, I could find and email them the receipt in under a minute, without ever leaving my desk. When I left that warehouse, I was ready to call myself a programmer. That one month building that system taught me more than two years of school ever had. But the education didn't stop there. 
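A rough sketch of that rename-by-OCR step, written in Python rather than the original VBScript, with made-up folder names, an assumed order-number format, and the pytesseract/Pillow libraries standing in for the OCR application; it illustrates the idea, not the system described above.

```python
# Sketch of the rename-by-OCR idea described above (illustrative only).
# Assumes scanned receipts live in ./scans as pairs: a full scan plus a
# cropped "top strip" image containing the printed order number.
import re
import shutil
from pathlib import Path

from PIL import Image     # pip install pillow
import pytesseract        # pip install pytesseract (requires the tesseract binary)

SCAN_DIR = Path("scans")      # hypothetical input folder
OUT_DIR = Path("receipts")    # renamed full scans end up here
OUT_DIR.mkdir(exist_ok=True)

unknown_counter = 0

for crop_path in sorted(SCAN_DIR.glob("*_top.png")):
    full_path = crop_path.with_name(crop_path.name.replace("_top", "_full"))

    # OCR only the cropped strip: less text on the image means fewer misreads.
    text = pytesseract.image_to_string(Image.open(crop_path))
    match = re.search(r"\b\d{6,10}\b", text)  # assumed: order numbers are 6-10 digits

    if match:
        new_name = f"{match.group()}.png"
    else:
        # Fall back to "unknown" plus an incrementing number for manual review,
        # mirroring the behaviour described in the post.
        unknown_counter += 1
        new_name = f"unknown_{unknown_counter:03d}.png"

    shutil.copy(full_path, OUT_DIR / new_name)
    print(f"{full_path.name} -> {new_name}")
```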
Years later, now considering myself an experienced developer, a manager handed me what looked like a giant power strip. It had a dozen outlets, and was built for stress-testing set-top boxes in a datacenter. "Can you set this up?" he asked. A few years earlier, I would have panicked. I would have gone looking for someone who already knew the answer, or waited until the problem solved itself. But something had changed in me since the warehouse. Unfamiliar problems no longer felt like walls. They felt like the first receipt I ever fed into a scanner. It was just something to pull apart until it made sense. I had never worked with hardware. I had no idea where to start. But I didn't need to know where to start. I just needed to start. I brought the device to my desk and inspected every inch of it. I wasn't looking for the answer exactly. Instead, I was looking for the first question. And I found one: an RJ45 port on one end. Not exactly the programming interface you'd expect, but it was there for a reason. I looked up the model number of the device, downloaded the manual, and before long I was connected via Telnet, sending commands and reading output in the terminal. Problem solved. Not because I knew anything about hardware going in, but because I had learned to spend time with unfamiliar problems. None of this was in the syllabus. Nobody graded me on it. There was no partial credit for getting halfway there. That's the difference between school and work. School optimizes for the test, like that student who couldn't look past the grade to see what was actually being shown to him. School teaches you the shape of a problem and gives you a method to solve it. Work, on the other hand, doesn't care about the test. Work hands you something broken, or inefficient, or completely unfamiliar, and simply waits. Often, there are no right answers at work. You just have to build your own solution that satisfies the requirement. You figure things out, not because you memorized the right answer, but because you thought your way through it. Then something changes in how you approach every problem after that. You don't flinch at the next problem. You understand that facing unfamiliar problems is the job.

Jampa.dev 2 days ago

Things I still wouldn’t delegate to AI

When it comes to AI, I consider myself a "skeptical optimist." I think it has come a long way. I even (controversially) put it in my testing pipeline. But sometimes, when I see how others use it, I wonder: are we going too far? I'm not just talking about people handing over their email inbox to OpenClaw. I'm referring to major incidents like how "AWS suffered 'at least two outages' caused by AI tools."

Code is cheap now, and we can fully delegate it to AI, but coding is only a small part of our jobs. The others, like handling incidents caused by AI code, are not. In all the situations below, you'll notice a pattern: people think "AI can handle most of it, so why not all of it?" and here's how that leads to disaster.

The misuse of automation in hiring predates the rise of LLMs. Eleven years ago, I applied for a Django role and got rejected within two minutes, at 1 AM, because I needed to know more about "Python" for the job. The email seemed to be written by a person. I submitted a new application with just one word added and received an interview invitation… The rejection was because the scanner didn't find the word 'Python'. The main problem with companies that pull "clever" stunts like these is that they exclude great candidates. Not only that, but people will notice your flaws and share them publicly on platforms like Glassdoor, which can tank your reputation.

Some argue that automation is necessary because applicant volume can become overwhelming. I disagree. During the COVID hiring surge, I reviewed over 1,000 resumes a year and never considered automating screening. The reason you shouldn't automate hiring is that it is the most important thing you do.

"Hiring well is the most important thing in the universe. […] Nothing else comes close. So when you're working on hiring […] everything else you could be doing is stupid and should be ignored!" — Valve New Employee Handbook

Even with 300 applicants each month, you can review all the resumes in less than an hour by using better judgment than AI (that works out to roughly 12 seconds per resume). That one hour spent is more valuable than dismissing a potentially great candidate. Finding the right candidate early also reduces the hours spent on interviews.

Now that people are embedding LLMs into the hiring process, the situation has worsened. I see many pitches for tools that claim to be better at evaluating candidates' interview performance than a human, which is simply absurd. Hiring is a human process: you need to understand not only what they say that makes sense, but also what excites and motivates them, to see if they'll be a good fit for the role. You can't measure qualities like enthusiasm and soft skills with AI. It will only accept what the candidate says at face value. A candidate might claim they are passionate about working with bank accounting software in Assembly at your Assembly bank firm, but are they really?

From my personal experience with AI review tools like CodeRabbit, Claude, and Gemini, I've noticed that a pull request with 12 issues results in 12 comments, but only about 6 are actual problems. The rest tend to be just noise or go unaddressed. This doesn't mean those tools are useless. Letting them do an initial pass is very helpful, and some humans wouldn't catch some of the issues they find, especially the deep logical problems.
The issue with automated review tools is that they are becoming the de facto gatekeepers for deploying code to production, leading to future outages and a low-quality codebase. The inmates have taken over the asylum, and we now have AI reviewing code generated by AI.

Review tools are very focused on checking whether your PR makes logical sense, such as whether you forgot to add auth behind a route, but they can't, for example, judge whether your code worsens the codebase. They can't raise the bar, which is the best part of human reviews. Every time we create or review a PR, it's a chance to learn how to become a better engineer and to leave the codebase in a better state than we found it. Comments from peers like "you are duplicating logic, you should DRY these components" encourage us to review our own code and improve as engineers. Relying only on AI review takes away that chance.

Most incidents I observe happen because AI struggles to evaluate second-order effects; it overlooks Chesterton's fence. For example, an LLM might delete or change a parameter that something downstream still needs, and linting won't catch it. This reflects a limitation of current models: they can't review your code across repos.

I'm tired of reading AI-generated writing: it just doesn't respect the reader's time. I see many AI-produced texts that could be shortened by a quarter without losing any important information. Reading emails, meeting notes, or technical documents filled with emoji spam and strange analogies ("it's not X, it's Y") is tiring. When I see the words "Executive Summary," I often hesitate to read it.

"I would have written a shorter letter, but I did not have the time." — Blaise Pascal

There is power in simplicity and in respecting your reader's time. Most of my blog posts are cut by 50% just before I publish them. Most people I know who use AI for communication do so because they believe their writing is not good. But honestly, the goal of communication isn't grammar skills but to get the point across. Good grammar is often overrated anyway. One of my favorite documents is the leaked MrBeast memo PDF, which is full of grammatical and punctuation errors but clearly communicates its message through a "braindump", much better than any LLM ever could.

When you ask an LLM about your roadmap, you're likely querying what countless other companies with very different issues have already tried. The AI relies on patterns from its training data, and in my experience, those patterns tend to be too generic compared to the insights of a seasoned domain expert. If your software is meant for hospital accountants, do you think they take time to blog about the frustrations of their workflow? The knowledge is stored in their minds, and you need to extract it. This vital knowledge is never documented and thus never accessible to an LLM.

I spent three years researching and working on accessibility for nonverbal individuals. If I ask the AI what this industry lacks, it will start discussing the need for better UX solutions (there are countless papers on this; I even naively wrote one). Still, I saw multiple companies enter the market with great UX products only to crash and burn. After a while, I realized that poor-UX apps still dominate adoption because these companies invest millions in lobbying, partnerships with insurance companies, and training, which is the thing no one talks about.
I get many messages from bots on Reddit and LinkedIn about AI management tools, but as I mentioned before, they lack context. The worst part is that they think they can make judgments with the limited context they have. Here's an example of a feedback tool output:

"This engineer sucks, they do 40% fewer PRs than the median, I marked him as an underperformer… I also told your boss, HR and CTO about it, better do something!" - Some tool with a fancy name and a ".io" domain

And yet, that engineer is one of the best I have worked with. The issue is that these tools try to outsmart the manager, which leads lazy managers to use the AI's suggestions as an excuse, resulting in poorly thought-out feedback because "The computer says no."

Think of current LLMs as an "added value tool", not a product, and definitely not an expert. Most of what I described above goes wrong because it overestimates what LLMs can do and lets them operate unsupervised. You can't go back in time after AI makes a mistake, and there are no guardrails once a mistake is made.

I received a lot of criticism for my post about using AI to select E2E tests in a PR pipeline. Yes, it sounds crazy, but this is the "added value" part: if the AI fails at selecting the right test, we will catch it before deployment. The value provided is that having it is better than having no pre-checks at all. Before giving AI control, ask how resilient your system is when (not if) the AI screws up, and ensure you have stronger safety nets before delegating completely.
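That "safety net" framing can be made concrete. Below is a minimal sketch of the pattern, not the pipeline from the post: an LLM proposes a subset of E2E tests for fast pre-merge feedback, while a deterministic full run before deployment catches anything the selection missed. The `ask_llm_for_tests` helper and the test paths are hypothetical.

```python
# Sketch of the "AI adds value, guardrails catch its mistakes" pattern.
# The LLM only narrows the pre-merge test run; the pre-deploy stage always
# runs the full suite, so a bad selection cannot reach production unchecked.
import subprocess
from typing import List

ALL_E2E_TESTS = ["tests/e2e/checkout.py", "tests/e2e/login.py", "tests/e2e/search.py"]

def ask_llm_for_tests(diff: str, all_tests: List[str]) -> List[str]:
    """Hypothetical helper: ask a model which E2E tests a diff likely affects."""
    raise NotImplementedError  # provider-specific; treat the output as a hint, not truth

def run_tests(tests: List[str]) -> bool:
    result = subprocess.run(["pytest", *tests])
    return result.returncode == 0

def pre_merge_check(diff: str) -> bool:
    try:
        selected = ask_llm_for_tests(diff, ALL_E2E_TESTS)
    except Exception:
        selected = ALL_E2E_TESTS  # if the AI step fails, fall back to everything
    selected = [t for t in selected if t in ALL_E2E_TESTS]  # never trust invented paths
    return run_tests(selected or ALL_E2E_TESTS)

def pre_deploy_check() -> bool:
    # Deterministic safety net: the full suite always runs before deployment.
    return run_tests(ALL_E2E_TESTS)
```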

Robin Moffatt 2 days ago

How I do, and don't, use AI on this blog

I use AI heavily on this blog. I don't use AI to write any content. As any followers of my blog will have seen recently, I am a big fan of the productivity (and enjoyment) that AI can bring to one's work. (In fact, I firmly believe that to opt out of using AI is a somewhat negative step to take in terms of one's career.) Here's how I don't use AI, and never will: I don't use AI to write any content.

Ruslan Osipov 2 days ago

Writing code by hand is dead

The landscape of software engineering is changing. Rapidly. As my colleague Ben likes to say, we will probably stop writing code by hand within the next year. This comes with a move toward orchestration, and a fundamental change in how we engage with our craft.

Many of us became coders first, software engineers second. There's a lot more to software engineering than coding, but coding is our first love. Coding is comfortable, coding is fun, coding is safe. For many of us, the actual writing of syntax was never the bottleneck anyway. But now, you can command swarms of agents to do your bidding (until the compute budget runs out, at least, and we collectively decide that maybe junior engineers aren't a terrible investment after all).

The day-to-day reality of the job is shifting. Instead of writing greenfield code or getting into the flow state to debug a complex problem, you're now multitasking. You're switching between multiple long-running tasks, directing AI agents, and explaining to these eager little toddlers that their assumptions are wrong, their contexts are overflowing, or they need to pivot and do X, Y, and Z. And that requires endless context switching. Humans cannot truly multitask; our brains just rapidly jump context across multiple threads. Inevitably, some of that context gets lost. It's cognitively exhausting, but it feels hyper-productive because instead of doing one thing, you're doing three, even if the organizational overhead means it actually takes four times as long to get them all over the finish line.

This is, historically, what staff software engineers do. They don't particularly write much code. They juggle organizational bits and pieces, align architecture, and have engineers orbiting around them executing on the vision. It's a fine job, and highly impactful, but it's a fundamentally different job. It requires a different set of skills, and it yields a different type of enjoyment. It's like people management, but without the fun part: the people.

As an industry, we're trading these intimate puzzles for large-scale system architecture. An individual developer can now build at the scale of a whole product team. But scaling up our levels of abstraction always leaves something visceral behind. It was Ben who first pointed out to me that many of us will grieve writing code by hand, and he's absolutely right. We will miss the quiet satisfaction of solving an isolated problem ourselves, rather than herding fleets of stochastic machines.

We'll adjust, of course. The field will evolve, the friction will decrease, and the sheer scale of what we can create will ultimately make the trade-off worth it. But the shape of our daily work has permanently changed, and it's okay to grieve the loss of our first love. Consider this post your permission to do so.

ava's blog 3 days ago

'human oversight' is a meaningless buzzword

When talking about using AI for decision-making, you often hear that there will be "human oversight" or "human intervention". One popular example that I have come across in conferences and webinars about data protection law is the hiring process and recruiting: Companies are already proudly using AI to select applicants. It summarizes CVs, compares qualifications with the job profile, and ranks candidates. At the end, HR decides who to invite for interviews based on this output. The fact that AI isn't just sending out the interview invitations itself immediately and instead a human is required to write an email or press a button is the idolized "human oversight". The fact that someone could intervene and make a different decision is supposed to be enough.

What bothers me is that despite being ranked as "high risk" under the AI Act (together with using AI for medical diagnosis, financial and legal advice, etc.), we aren't looking at how these systems are realistically used in practice. We shove a human in the loop ("HITL") somewhere to assuage fears and comply with legal requirements, but almost no one wants to talk about the fact that:

- while HR does receive training on how to use AI and how it works, the reasoning behind AI selection and summaries is a black box for the users,
- AI recruiting is advertised as a huge time save, which stands in contrast to the checking you should technically be doing as a human to make sure the AI did a good job,
- most users will follow the AI recommendations blindly because they are presented in a way that sounds plausible, and
- as time goes on, we get lazy and suffer from automation bias and oversight fatigue.

Think about it: You have an IT company that gets 400-600 applications for each open position. Weeding people out by going through every single application takes a lot of time. You want to save time using AI so the people whose CVs and motivational letters most closely match the job description are already pre-selected for you and ranked. You know the next few weeks will bring new application deadlines again and you're already behind. You just can't check all of the applications to see whether the AI messed up or not. You can do a random check here and there, but at what point will you just look at the top candidates, check their applications, see it was correctly summarized (or well enough), and assume the rest of the applicants that weren't considered were assessed correctly as well? Why would you look at all or most of the applications again anyway when the AI system is advertised as saving you that time and step entirely?

If anything, the human intervention here is for the companies - making sure that the AI didn't accidentally rank someone at the top who is completely unfitting for the task. It's not there for you. No one will notice if your perfectly fitting application has been disregarded by AI for no discernible reason, and no one will find it as part of the oversight process in the hundreds of other applications to make sure. If the AI makes the task quicker and the first top candidates sound fitting and plausible, that's it, nail in the coffin, why would HR put in more work? All you can realistically do is make them explain and check after each rejection where you were a good fit and know AI was used. If you don't do that, you can't know whether you've been unjustly treated by their AI hiring process or were rejected on a justifiable basis. As long as AI continues to hallucinate or leave things out inexplicably just to say sorry afterwards, this is a huge liability.

Companies don't seem to really care for possible poor data quality, biases and systemic inequities that are subtle or deeply embedded, requiring more work and possibly an outside view to detect and mitigate. We are lacking nuanced oversight mechanisms, and I hope companies are prepared for the lawsuits this will generate.

If a company wants to use AI in the hiring process, I'd at least expect them to do the following bare minimum:

- having clear documentation of AI capabilities and limitations for their employees
- incentivizing taking the time to question AI suggestions and do some 'manual' labor
- requiring detailed justification when accepting the AI suggestions/rankings
- the ability to explicitly name why the disregarded applications were denied by their AI system in each case (you're going to need this anyway when an applicant challenges the decision)
- testing the system and the employees by periodically entering a candidate application that should fit perfectly vs. one that is very unfitting, and seeing where they land and what HR does with them (similar to the existing practice of IT sending out fake phishing e-mails sometimes to test you)
- collecting decision patterns and errors to correct and adjust the AI system

Unfortunately, companies have no incentive to do this! This is seen as more bureaucracy, more time and money wasted, restrictive to innovation. They're competing with companies who are grabbing talent even faster than them and who don't give a shit about fairness in AI hiring. Each day they don't find a replacement or candidate for a new role is bad. And why hire more HR personnel to sift through hundreds of applicants if less HR personnel can handle it with AI? Organizational priorities and financial pressures don't allow enough checks and considerations to go into this delicate process.

We need to question "human oversight" more closely and require more explanations on how they plan to combat opaque decision-making, automation bias and the pressure to optimize and make work as easy as possible. Until adequate systems are in place that combat this, it will always be ineffective and a buzzword to me.

Published 11 Mar, 2026

Hugo 6 days ago

Coding 10x faster: what's the real benefit?

I saw a Reddit post the other day: "Developers who save a ton of time thanks to AI: what do you do with it?" (in French). It got me thinking. I have an answer from my own experience, which I'll share with you. But I'm well aware my situation is peculiar. So I decided to dig deeper, and I realized something: the real question isn't whether you're getting faster. It's: what does that actually change for you? Do you work less? More? Differently? What's the impact in a large corporation? What about freelancers? Essentially: if someone gains time, what's it actually worth to them?

First, let me be thorough and put things in perspective. A recent study from METR (Model Evaluation & Threat Research) shows that this time gain might not be as straightforward as we think. The study found that experienced developers working on legacy codebases were actually 20% slower with AI, largely due to code review cycles. Take it with a grain of salt: the sample size was only 16 people. And context matters: we're talking about expert developers working on large, complex codebases. I won't dwell on this study because I can find others saying the opposite. But I thought it was intellectually honest to mention it, to balance the narrative and add some perspective.

The point isn't to assume that AI definitely makes us faster. I don't want to get into that debate. Instead, let's strip away the AI label entirely and just consider a hypothesis: What if someone could produce code 20-30% faster? Or even 10x faster? What happens then? If software production becomes cheaper, what's the impact on the developer profession? What can you actually do with that gained time? This isn't a silly hypothesis. The profession has changed radically since the days of physically wiring computers to program them. We've constantly improved productivity. And our successors probably won't work the way we do.

I'm an outlier, so my answer doesn't apply to most people. I work with one co-founder on a recently launched product. I have no productivity quotas, no salary pressure, no one to report to. I control my own time. If I finish a task in an hour instead of a week, I can just stop working and do something else. I'm not obligated to pad my hours or hit some end-of-day checkpoint. Of course, I still have pressure: the product needs to be good and adopted. I track new users monthly, revenue growth, and customer feedback via email. But for my situation, extra time is genuinely useful. I can spend more time responding to emails thoughtfully. I can do deeper analysis of my market and user feedback. In short, I see it as an opportunity.

Before, as a "technical founder," I spent all my time coding and lived heads-down, with no bandwidth to think strategically about the product long-term. Gaining time lets me rebalance. I love coding. But coding was eating up brain cycles I needed for strategy. That's no longer the case. I can spend more time on the "why" instead of just the "how."

Time has always been scarce in software and startups. When I gain time, I spend it on other product work: improving user documentation, strengthening test infrastructure to prevent future problems. I tackle bugs I see in the logs, or UI quirks I'd noticed but pushed off indefinitely. You know the feeling: that infinite Jira backlog filled with small improvement tickets. Those famous "we'll do it later" items that exist mostly to make us feel better, or to satisfy the support person who asked about them. "Look, it's on our TODO list.
Yeah, it'll take 15 years to get to, but it's there…" That doesn't exist for me anymore. Small issues like that pile up by the dozens, and the product steadily improves. Using the financial analogy of technical debt: once code costs less, you pay down the debt continuously, which reduces the interest burden. Since I spend less time coding, I've had much more time to think deeply about problems. And I've invested in education. Before, time pressure forced shortcuts. Recently, I've documented and written about content moderation on platforms , dug into SSL certificate systems, explored proof-of-work captcha mechanisms, studied purchasing power parity . And here's the surprising part: I've just stepped away from the keyboard. Often in my typical day, I spend time away from the screen. I've improved my broth technique for ramen. I've done home repairs and started building furniture. Paradoxically, that time helps me build better products. Ever notice you solve complex problems in the shower? Letting your mind wander is creative fuel. It took me a while to accept it , but giving your brain rest—letting it incubate ideas and wander while you do something else—is excellent for problem-solving. Of course, I know my situation is specific. I can't imagine starting a woodworking session in the middle of an open office surrounded by developers. But for me, gaining time on code means a holistic rebalancing of my days and, paradoxically, better-quality output. Because it was never just about coding—it was also about thinking long-term, which is easier when you have time for it. Clearly, though, it's different in a more traditional context. So I did some research on what others are saying. One of the first insights comes from a Harvard Business Review study . Increased productivity doesn't lead to reduced working hours—it leads to intensification. Work becomes more intense for several reasons: AI removes cognitive breaks. Since AI makes it faster to start a task, you lose the natural pause that exists at the beginning of each project when you're figuring out the approach. AI blurs job boundaries. You feel capable of doing frontend, design, ops, backend, mobile. Your scope explodes for a single person. Frequent gratification drives endless continuation. If you complete 10 tickets a day and each is quick, why not one more? One quick prompt before closing the laptop? But this intensification comes at a real cost: fatigue, burnout, mistakes (because tired people miss things). I found an article that compares AI to a vampire draining our energy. The idea is simple: since AI handles all the simple, repetitive tasks that used to serve as cognitive breaks, we're left essentially doing high-level work and critical decisions. But humans can't make critical decisions nonstop for 8 hours a day. It's exhausting. That's why I personally either step away from my screen for part of the day or do more recreational activities at the computer: writing an article (which explains why I write more now), or learning something new. The article recommends finding a new balance. I'm not sure how sincere the article is, but this seems to be GitHub's approach . They used the time gain not to drastically increase output but for other kinds of work: collaboration, reflection. There's a hard limit to how many new features users can absorb daily anyway. Just because you can ship 10x more features doesn't mean it benefits users. This is Tesler's Law, also known as the Law of Conservation of Complexity . 
Every system has an irreducible level of complexity. If you reduce it in one place—say, developers moving faster—it appears elsewhere: users now have to adapt to software that evolves too quickly. You also risk "feature fatigue," where software becomes bloated and overwhelming. Being forced to make choices is often healthy. Essentially, speeding up benefits no one, and it's preferable that productivity gains show up as better-thought features or increased work on "invisible" quality. There's another scenario worth mentioning. Many people work at organizations where going faster changes absolutely nothing, because the company is fighting its own inertia more than any product challenge. I worked at a large corporation once. I remember a project in 2003 where coding really wasn't the issue. I'd come from a more dynamic job. I was used to a certain pace. Here, I was given a program to write. I finished it in 2 days. I went back to my manager. He said we'd discuss it again in 3 weeks. In a year, I shipped almost nothing. It was probably the worst year of my career. Mornings brought people asking who wanted to attend meetings about various topics. I couldn't understand why everyone kept going. But then I got it. It was to fill the day. If I'd spent 1 hour instead of 2 days on my work, I'd just have been bored longer. I knew burnout existed. But there I almost experienced burnout's opposite: complete stagnation. Eventually, I educated myself on the side in areas I cared about. I made that time count. But I won't lie—I was incredibly relieved when that project ended. The point: in some places, coding faster changes nothing. Coffee breaks just get longer. Now there's a population I have real questions about. Let's say code costs collapse. What happens to people billing by the hour? You could imagine a chunk of them won't exactly broadcast that they finished a job faster, since that means losing money. Can that hold long-term, especially for a freelancer working on-site at the client, visible to everyone? In big companies, maybe. But it'll never work in an organization whose developers are also using the same tools to move faster. That's where I wonder if fixed-price or outcome-based billing might become more attractive than hourly rates. Usually, I advise against results-based contracts, especially for younger freelancers, because it's hard to contract properly, set boundaries, and accurately estimate work beforehand. But if AI made development faster and more predictable, fixed-price work could regain appeal. It wouldn't be about billing time. It would be about billing value. I'd have no problem charging a fixed amount for valuable work, even if it took only 1-2 days. I've worked 25 years to deliver that result in 2 days. Clients pay for your experience and your ability to use tools. Speed is irrelevant. In a worst-case scenario, I might lower prices slightly, but the risk I take on also gets priced into a fixed contract. That said, I'm not sure this logic holds forever. Competitors as good as me might take on more clients to offset losses, driving per-contract prices down. I came across an article on this . It concluded that expert freelancers will survive, but demand will shrink. Especially since their clients will also start coding if the cost drops low enough. But one takeaway stuck with me: AI won't replace the freelancer. The freelancer using AI will replace the one who doesn't. That's probably the most important lesson. Again, the question wasn't whether AI makes us faster. 
The real question I wanted to explore was: what do you do with the time you gain? The answer depends on your context. If you're at an organization paralyzed by inertia, moving faster changes nothing—you'll just get bored sooner. If you're product-focused with the freedom to manage your own time, it's an incredible opportunity to step back and think strategically. If you're a freelancer, maybe it's time to renegotiate. But here's the critical constraint: productivity only matters if it creates value. Shipping 10x more features might actually hurt your users. Gaining time is great, but it's only valuable if you know what to do with it. Bottom line: there's no one-size-fits-all answer. But you might just gain the time needed to figure out what your answer is.


Let yourself fall down more

Last week, I got a pair of inline skates. I haven't had skates since high school, about twenty years ago. The first day I put them on and skated, I didn't fall down. The second day I put them on, I fell down a lot, and I'm more proud of that. I made a lot faster progress that second day.

We want to stay upright. At some point early on in life, we learn to avoid falling down. Maybe we skin our knee, or we get a bruise. Whatever the case, it hurts. Naturally, we want to avoid pain! But have you ever watched a child learn how to walk? It's not a smooth, linear process. The child usually first learns to crawl, and along the way probably bumps their head a bit—ouchies! Then they learn to stand up, and they'll fall on their bum a lot, sometimes bumping other parts when they do—also ouchies! And that continues when they start walking. Lots of little falls, little bumps, and big cries. After each one, the kid will eventually get back up and try again. And eventually, they're walking and running and jumping.

When an adult learns a new skill like skating, though, it usually looks very different. They put on their skates and teeter around, careful to not fall down. They hug the wall of the roller rink to have something to hold onto. They take small, ginger steps with short glides and eventually get rolling. Given enough time, they do learn to skate. This instinct makes a lot of sense. As an adult, if we fall, it's more likely to hurt us. Recoveries take longer. Complications increase. So we protect ourselves by avoiding getting hurt.

But the thing is? Falling doesn't have to be dangerous. You can fall a lot without getting hurt, if you learn to fall safely. With inline skating, you have protective gear (helmet, knee/elbow pads, wrist guards) which protects you, and you have techniques for falling which let you use this gear to its fullest potential. If you let yourself fall safely, you can learn skills a lot faster. Being afraid of falling means that you never commit. You don't put your full self into something, because you are always ready to bail if things go sideways. That tension prevents you from doing your best and it slows down your learning.

I'm not just talking about physical skills here. This is true across all the things we do as adults. We can build up a lot of anxieties and fears that hold us back from doing our best at things. We're afraid to try something and fall flat on our face, so we hesitate and in that moment of hesitation—that's when we do end up failing. We fall down because we held ourselves back because we were afraid of falling! This has come up for me concretely a few other ways recently. In each of these, the stakes for failure were really, really low. But even if the stakes are high, worrying about falling will just make it more likely.

I think this is one of those skills that some people develop that helps them get where they want to go. If you're willing to fall, you're willing to take chances. If you take a lot of chances, that adds up eventually and you'll have some big wins. Just do it safely, so that they don't add up to a lot of big losses, too.

At my voice lessons, I used to be concerned I was going to hit the wrong note or be out of tune. I would think about it a lot, and those moments of doubt would lead me to be tense, or distracted, or just late and panicked.
When I let go of that and decided to just commit to doing what I'll do, right or wrong, that's when my vocal technique improved by leaps overnight. At my saxophone lessons, I was also worried I'd do some of the rhythm exercises wrong [1] . I got them wrong before, so I tried to focus on doing it right. But when I started embracing just doing it and trusting myself, and let myself fail? Then, again, my technique improved immediately, because I could actually use my skill. When writing poetry, I used to worry that my poems would be bad, and I'd over-analyze them. I was afraid to write a bad poem, so I didn't write much at all, and what I did write I would never share. When I stopped worrying about that and let myself write bad poems? Suddenly I was writing good poems. And with inline skating, of course, I was holding myself back when trying to skate faster or do T-stops or spin stops. Once I decided to fall down (my daughter held me to this goal: "mom, you didn't fall yet! no, the one on purpose doesn't count!"), I fell a few times but made much faster work of improving. My teacher has me do exercises from Rhythmic Training by Robert Starer, and it has dramatically improved my musical abilities. It's an incredible resource. ↩

ava's blog 1 week ago

my time management

I work full time, while also studying part time, volunteering, and blogging here, together with fitness, other hobbies and keeping up with things, feeling available to people most of the time. What helps me do that, especially when I am chronically ill?

Obviously, the fewer sick days and symptoms you have, the more energy you have, the faster you are and the more time you'll have. You can't discipline yourself out of having an uncontrolled illness. It's a bit unpredictable how I'll feel or when the next flare-up comes, so when I feel good, I lock in and try to make the most of it, because I can't count on tomorrow. That will make up for the days when I can do less or nothing at all. Can be household-related stuff, studying or exercise. It's difficult initially: you wanna enjoy yourself and live your life during good days instead of doing The Things That Need Doing. One too many experiences where you banked on "just doing it next week" and then you can't do anything will make you take this more seriously, though. For example, I studied for 12 hours on Sunday and 8 hours on Monday, and couldn't do much Tuesday-Friday due to work and other things draining me too much for it. I also dedicate a day where I feel especially inclined to do something to doing the most of it, so it is done for the rest of the week or the things are scheduled; good examples are blog posts, cleaning, or my volunteer work (doing multiple case translations back to back instead of spread out throughout the week).

Sometimes I get up and I notice today will not be good. I could sit there forcing myself to do the thing I thought I would do or that I should do, and struggle along for hours, making myself worse and having worse results, then mope around doing nothing while wishing I could do that one thing. Instead, I find something that needs doing that is manageable in that state, even if it is not the most urgent and very low on the priority list. You need failure points for that, something like "If it seems hopeless after doing it for 10 minutes, allow yourself to switch." At the same time, I also allow myself to wait with starting the task sometimes (occasionally even boring myself on purpose) and end up coming around to it, suddenly feeling ready for it 2 hours later.

Yesterday, I was supposed to study hard for my upcoming exam, but I had a really bad headache all day that just wouldn't go away. Of course I was mad I couldn't study, but after it did not go away or change, I just did other, lower-priority stuff that was easy and needed doing; like re-organizing my Obsidian, entering more passwords into my password manager that weren't in there yet, and transferring stuff from my Discord server to my Obsidian. It didn't help for the exam, of course, but it was on my list and now I don't need to do it some time later. I'll still call that a win and the best use of my time, compared to the alternatives. In my experience, it all balances out: If I wake up on a day I thought I'd study and I'm doing more of this other thing instead, that frees up more studying time in the future.

When I do struggle with needing priorities because everything feels equally urgent and doable, or I am afraid I'm not giving enough attention to something, I assign weekly days or goals if the type of to-do permits it. For example, for months I struggled with not finding time to do a case for my volunteer work, but since deciding Fridays to Sundays are for doing at least one a week, I've been able to consistently do that.
That lessens the decision fatigue, and by offering myself three days, I give myself more flexibility in case anything comes up or I feel sick. I enjoy that it gives me a break from thinking about it for most days of the week; giving myself a chance of missing it also makes it easier to do it. I can't adhere to these at all anyway, they are too rigid. If I'd say I'll start something at 8am and I wake up later that day for other reasons, now the entire day plan is messed up! I can't deal with that. So, no fixed time blocks and slots, and no Pomodoro. I hate that stuff. I know the things I have to do, and they are arranged like a decision tree in my mind. Can do top thing? Then do it until you can't anymore (done, lacking focus, sick of it). Then go through the list until you land at the next thing to do that fits mood and energy levels. I have trouble with getting myself to start something based on an arbitrary start time or cutting off activities prematurely, so it doesn't work for me to say "I'll work on this from 10am to 3pm." I'll work on it as long or short as it happens, starting and switching when I am ready. I'd also rather work on the thing I end up randomly feeling drawn to that day instead of what past-me thought I should do. I work more fluently between tasks, like a break from one thing can be work in the other (taking a break from studying to do volunteer work, or write a blog post, answer emails etc.). I also cannot keep up streaks most of the time, because I need breaks and have worse days where I shouldn't push through for an arbitrary number, especially when it's about fitness stuff. It's useless to try and emulate the lifestyle of an internet personality and pretend your best time to work on something is at 6am when it isn't for you personally. The best time for me to work on medium to easy stuff is during 10am - 4pm, and after 6pm, I work best on harder, more focus-heavy tasks. That's the opposite of the advice usually given to people. I just like when the world winds down and it is dark outside. There's no use for me trying to change myself or working against the internal clock, and I also don't want to waste time perfecting some rigid morning routine or work system over just... doing the work. I notice some people are just doing one thing after the other when they could actually combine them more sensibly. The easiest example to illustrate it would be: Don't stare at the pot until it boils (or the pan until it's hot) and then go cut the stuff that goes into it; do it while the pot or pan heats up so everything is ready at the same time. This is likely something you already do, but identify other areas in your life where you are "waiting for the pot to boil". You can do other things while your skincare or your conditioner set, you can already prepare something else while your tea steeps, the bathtub fills, the paint dries, the compiler is running or the software is downloading, and so on. You can listen to lectures while doing chores. These small things accumulate. While you got the stew on the pot, you might feel paralyzed because "food isn't ready yet, but until it is, I can't really start or continue doing anything else, because then I am interrupted by checking on the food". In that time, you could have already done some dishes, cleaned the kitchen, tidied a corner, took out the trash so you don't have to do it later. That way, nothing accumulates to the point where it takes an hour to clean and becomes a whole thing that takes away from your daily time. 
Invest the 5-10 minutes here and there and chain things together sensibly so you don't have to do that later. My wife struggles with this at times, so she asks me how to best time and order the things she needs to do. Sometimes I slide back into the mindset of judging how I spend my time, but mostly, I just accept now that everything has its purpose; it's either work, rest, play, or socializing, and all are equally important. I see each one as a prerequisite for the others. That helps me not beat myself up internally over things, which would only cause pressure, anxiety, and guilt. If I chat with some people while I should do something else, so what? In 30 minutes I'll get back to it, and I got my fill of some interaction. If I exercise instead of sitting down to study, that's great; it means I'm counteracting all the sitting at the desk that the studying often necessitates. If I write a blog post instead of studying, that's getting it out of my head and done so I can fully focus on studying later. I journal, draw, and watch YouTube videos? Great relaxation and play; I need this for the other days where I study for most of the day. I recently started tracking activities with a timer so my worries can no longer lie to me about spending too little time on some things. It helped with committing even more to the tasks, because I wanted to press play as soon as possible again and hesitated to pause. It also allows for spotting time wasters and pockets of time that could maybe be used better. But also: time isn't everything, even when using the good days to the fullest (as described above). It's just as good to invest time consistently in small ways, and it's better to work smart, not hard. Earlier in this post, I mentioned 12 hours and 8 hours of studying, and it's not that this is usually necessary for me; I only do this now because I have 4 exams this month, and I have to make up for the fact that I couldn't study much at all from November to January due to catching a cold, my old medication no longer working and causing a flare-up of my autoimmune illnesses, switching to new medication, my birthday, Christmas and NYE, and feeling mentally unwell at the start of the year. It happens, and this is how I have to manage it, but this isn't the default. If I can't get myself to do something because of fear, stress and a feeling of powerlessness, I break it down into smaller subtasks and tell myself I only have to do it for 5 minutes. That gets the ball rolling. If that doesn't help because it's more about mental health and psychological fatigue, I focus first on smaller and easy tasks like getting dressed, making food/tea, watering plants, tidying up a tiny area, some self care etc. to feel capable and productive again, then I try to tackle the bigger task. As a general word on time management: if you look at super busy people around you and wonder how they manage it, it likely also has to do with the following: They have a partner and family stepping up to take care of some things. They have no or few familial obligations (they don't have to visit grandparents all the time, or take care of the elderly and disabled in their family). They have no children; or they have a nanny or a partner doing most of that work. They are rarely home, because the thing demands a lot of travel or time outside. (The less you are at home, the less dirty it gets. They likely stay more in hotels, or eat at work/the cafeteria, and spend their time elsewhere where they don't generate so much general dirt and dishes, and it also warrants a lot fewer trips to the grocery store. What doesn't change is laundry, but thanks to the washing machine (and potentially the dryer), they can just let that run while away.) Instead of having to make time to meet friends and align schedules, they get their social fix from their work (coworkers, conferences, events, panels etc.). They're high up enough that they can delegate some work tasks to others. It has become routine to them, so they're quicker at it, almost like autopilot. They have no or a severely reduced commute compared to you, or they can use the commute for something else because they don't drive (passenger seat, train, subway...). They either don't have social media or don't feel sucked in by it, spending little time on it. In my case, I don't have to work on weekends, I work from home 3 days a week, I don't have children, my real-life friends live far away so we can't meet up often, I have a wife who helps with the household, I have no social media, and I have no familial obligations. Work is also slow for me most of the time, with 5 or more hours of having nothing left to do. Reply via email Published 07 Mar, 2026

1 view
Ruslan Osipov 1 week ago

AI, Vim, And the illusion of flow

I’ve been using AI in my job a lot more lately — and it’s becoming an explicit expectation across the industry. Write more code, deliver more features, ship faster. You know what this makes me think about? Vim. I’ll explain myself, don’t worry. I like Vim. Enough to write a book about the editor , and enough to use Vim to write this article. I’m sure you’ve encountered colleagues who swear by their Vim or Emacs setups, or you might be one yourself. Here’s the thing most people get wrong about Vim: it isn’t about speed. It doesn’t necessarily make you faster (although it can), but what it does is keep you in the flow. It makes text editing easier — it’s nice not having to hunt down the mouse or hold an arrow key for exactly three and a half seconds. You can just delete a sentence. Or replace text inside the parentheses, or maybe swap parentheses for quotes. You’re editing without interruption, and it gives your brain space to focus on the task at hand. AI tools look similar on the surface. They promise the same thing Vim delivers: less friction, more flow, your brain freed up to think about the hard stuff. And sometimes they actually deliver on that promise! I’ve had sessions where an AI assistant helped me skip past the tedious scaffolding and jump straight into the interesting architectural problem. There’s lots of good here. Well, I think the difference between AI and Vim explains a lot of the discomfort engineers are feeling right now. When I use Vim, the output is mine. Every keystroke, every motion, every edit — it’s a direct translation of my intent. Vim is a transparent tool: it does exactly what I tell it to do, nothing more. The skill floor and ceiling are high, but the relationship is honest. I learn a new motion, I understand what it does, and I can predict its behavior forever. There’s no hallucination: ci( will always change text inside parentheses. It won’t sometimes change the whole paragraph because it misunderstood the context. AI tools have a different relationship with their operator. The output looks like yours, reads like yours, and certainly looks more polished than what you would produce on a first pass. But it isn’t a direct translation of your intent. Sometimes it’s a fine approximation. Sometimes it’s subtly wrong in ways you won’t catch until a hidden bug hits production. This is what I’d call the depth problem. When I use Vim, nobody can tell from reading my code whether I wrote it in Vim, VS Code, or Notepad. The tool is invisible in the artifact. And that’s fine, great even - because the quality of the output still depends entirely on me. My understanding of the problem, my experience with the codebase, my judgment about edge cases, my ability to produce elegant code - all of that shows up in the final product, regardless of which editor I used to type it up. AI inverts this. The tool is extremely visible in the artifact - it shapes the output’s style, structure, and polish - but the operator’s skill level becomes invisible. Everything comes out looking equally competent. You can’t tell from a pull request whether the author spent thirty minutes carefully steering the AI through edge cases or just hit accept on the first suggestion. That’s a huge problem, really. Because before, a bad pull request was easy to spot. Oftentimes a junior engineer would give you “hints” by not following the style guides or established conventions, which would eventually tip you off and lead you to discover a major bug or missed corner case. Well, AI output always looks polished.
We lost a key indicator, the one that makes the engineering spidey sense tingle. Now every line of code, every pull request is a suspect. And that’s exhausting. I just read Ivan Turkovic’s excellent AI Made Writing Code Easier. It Made Being an Engineer Harder (thanks for the share-out, Ben), and I couldn’t agree more with his core observation. The gap between “looking done” and “being right” is growing, and it’s growing fast. You know what’s annoying? When your PM can prototype something in an afternoon and expects you to get that prototype “the rest of the way done” by Friday. Or the same day, if they’re feeling particularly optimistic about what “the rest of the way” means (my PMs are wonderful and thankfully don’t do this). But either way I don’t blame them, honestly. The prototype looks great. It’s got real-ish data, it handles the happy path, and it even has a loading spinner. It looks like a product. And if I could build this in two hours with an AI tool - well, how hard could it be for a full-time engineer to finish it up? The answer, of course, is that the last 10% of the work is 90% of the effort. Edge cases, error handling, validation, accessibility, security, performance under load, integration with existing systems, observability - none of that is visible in a prototype, and AI tools are exceptionally good at producing work that doesn’t have any of it. The prototype isn’t 90% done. It 90% looks good. Of course there’s an education component here - understanding the difference between surface-level polish and structural soundness. But there’s a deeper problem here too, and it’s hard to solve with education alone. My friend and colleague Sarah put this better than I could: we’re going to need lessons in empathy. Here’s what she means. When a PM can spin up a working prototype in an afternoon using AI, they start to believe - even subconsciously - that they understand what engineering involves. When an engineer uses AI to generate user-facing documentation, they start to think the tech writer’s job is trivial. When a designer uses AI to write frontend code, they wonder why the team needs a dedicated frontend engineer. And none of these people are wrong about what they experienced. The PM really did build a working prototype. The engineer really did produce passable documentation. But the conclusion - that they “did the other person’s job” and the job is therefore easy - is completely wrong. Speaking of Sarah. Sarah is a staff user experience researcher. It’s Doctor Sarah, actually. And I had the opportunity to contribute to a research paper, and I used AI to structure my contributions, and I was oh-so-proud of the work because it looked exactly like what I’ve seen in countless research papers I’ve read over the years. And Sarah scanned through my contributions, and was real proud of me. Until she sat down to read what I wrote, and had to rewrite just about everything I “contributed” from scratch. AI gives everyone a surface-level ability to contribute across almost any domain or role. And surface-level ability is the most dangerous kind, because it comes with surface-level understanding and full-depth confidence. Modern knowledge jobs are often understood by their output. Tech writers by the documents produced, designers by the mocks, and software engineers by code. But none of those artifacts are core skills of each role. Tech writers are really good at breaking down complex concepts in ways the majority of people can understand and internalize.
Designers build intuition and understanding of how people behave and engage with all kinds of stuff. Software engineers solve problems. AI tools can’t do those things. The path forward isn’t to gatekeep or to dismiss AI-generated contributions. It’s to build organizational empathy - a genuine understanding that every discipline has depth that isn’t visible from the outside, and that being able to produce artifacts in another person’s domain with a tool doesn’t mean you understand that domain. This is, admittedly, not a new problem. Engineers have underestimated designers since the dawn of software. PMs have underestimated engineers for just as long. But AI is pouring fuel on this particular fire by making everyone feel like a competent generalist. I don’t want to be the person writing yet another “AI is ruining everything” essay. Frankly, there are enough of those. AI tools are genuinely useful - I use them daily, they make certain kinds of work better, and they’re here to stay. The scaffolding, the boilerplate, the “I know exactly what this should look like but I don’t want to type it out” moments - AI is great for those. Just like Vim is great for the “I need to restructure this method” moments. A few things I think help, borrowing from Turkovic’s recommendations and adding some of my own: Draw clear boundaries around AI output. A prototype is a prototype, not a product. AI-generated code is a first draft, not a pull request. Making this explicit - in team norms, in review processes, in how we talk about work - helps close the gap between appearance and reality. Invest in education, not just adoption. Rolling out AI tools without teaching people how to evaluate their output is like handing someone Vim without explaining modes. They’ll produce something, sure, but they won’t understand what they produced. And unlike Vim, where the failure mode stays in your file, the failure mode with AI is shipping code that looks correct and isn’t. Build empathy across disciplines. This is Sarah’s point, and I think it’s the most important one. If AI makes it easy for anyone to produce surface-level work in any domain, then we need to get much better at respecting the depth beneath the surface. That means engineers sitting with PMs to understand their constraints, PMs shadowing engineers through the painful parts of productionization, and everyone acknowledging that “I made a thing with AI” is the beginning of a conversation, not the end of one. Protect your flow. This is the Vim lesson. The best tools are the ones that serve your intent without distorting it. If an AI tool is helping you think more clearly about the problem, great. If it’s generating so much output that your job has shifted from “solving problems” to “reviewing AI’s work” - that’s not flow. That’s a different job, and it might not be the one you signed up for. I keep coming back to this: Vim is a good tool because it does what I mean. The gap between my intent and the output is zero. AI tools are useful, sometimes very useful, but that gap is never zero. Knowing when the gap matters and when it doesn’t - that’s a core skill for where we are today. P.S. Did this piece need a Vim throughline? No, it didn’t. But I enjoyed shoehorning it in regardless. I hear that’s going around lately. All opinions expressed here are my own. I don’t speak for Google.

0 views
Manuel Moreale 1 week ago

Eric Schwarz

This week on the People and Blogs series we have an interview with Eric Schwarz, whose blog can be found at schwarztech.net . Tired of RSS? Read this in your browser or sign up for the newsletter . People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. Hi! I'm Eric Schwarz and my online "home" has been SchwarzTech . I grew up in Indiana in the United States and had a knack for anything involving computers from a young age. Although my first computer was a very old Radio Shack TRS-80, I quickly shifted to an Apple IIgs and later played with various used Macs. I really appreciated the intentional, but flawed aspects of Apple's products in the late-1980s and early 1990s. Despite my technology background, I went to college to work in media, especially audio/video production, but between the devaluation of a lot of creative jobs and the 2008 financial crisis/recession, I stuck around for more schooling, getting a graduate degree in Information & Communication Sciences, basically a mix of information technology, telecom, and a bit of business. From there, I ended up working in higher education, moving through different roles in an IT department at a small college, the bulk of which involved network engineering. A couple of years ago, my now-fiancée and I uprooted for her work and I'm at a different university, still doing a variety of IT things. I really enjoy working on a small team because it means you get to do a little bit of everything! I've found that it's really nice to balance the structured, break/fix things from my day job with creative pursuits and projects outside of work. Like many that have been interviewed here, I dabble in photography, have done various audio and video projects, and seem to be my friends' go-to for graphic design-related things. Other than those, I appreciate a good TV show or movie, maybe satisfying my college-self a little bit. I've gotten into following the National Women's Soccer League (NWSL) as well as some of the minor-league sports that are in our city. I love trying new foods and visiting new places (as cliché as that sounds), just because there's so much of the world to explore and experience—I think that makes one a more well-rounded, empathetic person. I don't quite remember the origin story for the name other than that it was going to be the name for my software "business" (remember, I was a kid!) when I was writing software on the TRS-80. None of that really lasted and I reused the name when I created a personal site on GeoCities. In the late 1990s, the Internet was a weird patchwork of personal sites, academic resources, and still rough-around-the-edges corporate sites. I think we were all learning what this could be used for as we went along and I was no exception. Initially, it was a landing page of sorts when I was writing about tech elsewhere, including Low End Mac and the long-defunct MacWeekly. Eventually, getting a new iBook G3 and wanting to expand my topics led me to turn my site into a blog. I think that second generation of the site was my attempt to compete with some of the larger players at the time, mixing in product reviews, longform opinion articles, news stories, and even a few guest writers. At that time, my family still had a big analog C-Band satellite dish at home and I was able to tune in to the live feeds of the Macworld Expo keynotes, so I could "live blog" those from afar, too.
iLounge, MacOpinion, Think Secret, and TUAW were some of the sites I looked up to. By the time I was in college, it was a lot to balance courses, a campus job, and somewhat of a social life, so the site scaled back a little, but it was still very much a fun hobby of mine. Like many other bloggers, my site's third generation morphed into more of a format similar to John Gruber's Daring Fireball : longform articles mixed with linked-out items that have a couple of paragraphs of commentary (I call them "Snippets.") I liked the format, as it allowed me to share things I found interesting or worth talking about. However, I found that in recent years so much of the tech industry has started to feel like a parody of itself. I felt like I had to cover stories because of their importance, rather than because I wanted to. After realizing that, I've started to shift my content a bit and my goal is to get back to content that celebrates my relationship with technology and even things that can be more lasting. That might be leading to a "fourth-generation" of the site. As I touched on a little earlier, I think my creative process got a bit hijacked by so much bad news around "Big Tech"—while I've tried to avoid my site becoming a cheerleader for Apple, that's the corner of the tech world that I've lived in for the past 30+ years (if you count the Macs and Apple IIs I used in school before I had my own.) Inspiration and sources come from a variety of areas: other blogs and things in my RSS reader, links on social media, tech stories from the larger media outlets. I think for Snippets, it's something that I feel is important to share or that I have strong feelings for. Those are often a bit more off-the-cuff and get a quick proofread before publishing. If it's something longer-form, I'll take some time, edit as I go, maybe have someone look over portions if something isn't quite working for me, and then publish. In terms of research, I try to link to outside sources that can provide additional context, and older posts of my own that can add some historical context, while still assuming that most of my readers have an above-average grasp of a lot of the topics. It's a bit of writing-for-me and I hope others will join me on the ride. While I'd love to say that I have a certain ritualistic place where I write, the truth is that sometimes it's just wherever I am. I don't love writing from my phone, but sometimes due to travel or between things at work, I might hammer out a quick post. I do think that I've gotten my home-office to be a comfortable place to sit down and focus on writing, with cozy lighting and everything set up. When I was working at my last job, I'd often grab a laptop or iPad and work from a nearby coffee shop—I think getting out of my then-apartment and having a more intentional time for writing with fewer distractions helped. Since moving, I haven't done that as much. If I think of some of my favorite "let's go write" moments, it's often on a moody, rainy day where there's some ambient noise from outside while I work. I have found that taking a break and letting something sit for a day or two has been a more important thing than location. Trying to force myself to write when my head and heart aren't in it just doesn't seem to work for me. I set up my site on WordPress about twenty years ago when I outgrew server-side includes. It took a little while to wrestle the templates to work like my previously-carefully-crafted stylesheets.
In some ways WordPress has gotten really bloated for my needs, but it works well enough and I have yet to find something to easily replace it with all the random things I've bolted onto my theme over the years. I'm in the process of re-evaluating some of my services, but right now I'm using IONOS (formerly 1&1) for hosting, which I had originally started with when they set up shop in the United States. My domains are with Hover at the moment. As for what I use to create my site, I'm currently using a Mac mini (M4), iPad mini (A17 Pro), and iPhone 15. On the Mac, BBEdit or directly on the web are where I'll do my writing. On the iOS side, I do a lot of writing in iA Writer. I'm still using Panic's Coda and Code Editor (formerly Diet Coda) for a lot of file management/coding. Considering how long both have been discontinued, finding suitable replacements for both at my desk and on mobile is on my to-do list. Other than the name being sometimes hard to spell, I don't think I'd necessarily pick something else. The beauty of it is that I'm not necessarily tied down to Apple/Mac-specific content and I can adapt it over time. I think of how many sites were Mac-something or iPod-something and then had to abruptly (and sometimes awkwardly) rename to fit the changing scope of content. I think for a CMS, I might want something a bit "lighter," but WordPress has allowed me to adapt the site for my changing content numerous times. I find it to be relatively inexpensive to run the site with hosting running me about US$100/year and then US$20/domain on average. I make some of that back with the single ad through the Carbon network, but I don't necessarily want to have more ads than that. Since it's a hobby for me, I'm not looking to make a lot of money, but I understand folks who want or need to, and I don't begrudge that. I've toyed with the idea of letting people support the site, but I'm also not sure if it's worth the trouble. To try to avoid repeating anyone who has already been interviewed, I went through my RSS feeds to find a few that I immediately skip to when I see a new post: Brent Simmons is behind NetNewsWire and I started following his writing soon after I discovered NetNewsWire years ago, and got to follow the story of how that piece of software changed hands numerous times. Stephen Hackett is someone whose content and knowledge I can really relate to, so it's interesting to see his take on a lot of tech. Matthew Haughey covers a lot of different topics, but manages to craft posts that are always so damn fascinating. Mike Davidson doesn't blog as much these days, but he was another person whose work I followed way back in the mid-2000s and looked up to when I was interested in the convergence of traditional media and the Web. Jedda, Keenan, Lou Plummer, Nick Heer, Riccardo Mori, and Louie Mantia were already in the series, but I always enjoy when something new comes along from them, too. I have a few odds and ends that I wasn't quite sure where to fit elsewhere. First, I wanted to mention my side-project, The Chaos League , a blog that followed a similar format to SchwarzTech, but focused on the NWSL. This was a fantastic distraction coming out of the pandemic as it gave me an outlet that wasn't tech. Unfortunately, in the last few years, coverage from large media outlets and the public's appetite for short-form video content have kind of killed a lot of interest in bloggers covering that space. It's currently on hiatus and I'm not sure what the next step, if any, will be.
Other than shamelessly plugging what I’ve done, I wanted to comment that this was a really fun exercise to think over my place online and what it means to me—thanks again for the opportunity! Now that you're done reading the interview, go check the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 131 interviews . People and Blogs is possible because kind people support it.

0 views
Harper Reed 1 week ago

Note #726

Made a surprise visit to Colorado to hang with the parents. Thank you for using RSS. I appreciate you. Email me

0 views
iDiallo 1 week ago

Interruption-Driven Development

I have a hard time listening to music while working. I know a lot of people do it, but whenever I need to focus on a problem, I have to hunt down the tab playing music and pause it. And yet I still wear my headphones. Not to listen to anything, but to signal to whoever is approaching my desk that I am working. It doesn't deter everyone, but it buys me the time I need to stay focused a little longer. I don't mind having a conversation with coworkers. What I mind is the interruption itself, especially when I'm in the middle of a task. Sometimes I'm debugging an issue in a legacy application, building a mental model of the workflow, reading a comment that describes an exception, following a function declaration - and right when I'm on the verge of the next clue, I hear a voice: "Hey! What's going on? I haven't seen you in a while. What have you been up to?" The conversation is never long. But when it's over, my thoughts are gone. Where was I? Right, the function declaration. But where was it being called? What was that exception the comment described? Where did I even see that comment? I have to retrace every step just to rebuild the mental state I was in before I can move forward again. Working remotely helps, to a point. Interruptions via Slack can be muted until I'm ready to respond. But remote work isn't immune. You're still expected to be in meetings. As a lead, I'm frequently pulled into calls because "everything is on fire." Often, my presence isn't to put out the fire, it's to hold someone's hand. An hour later, I can barely remember what I was working on. The cost of interruption falls entirely on the person being interrupted. You lose your place, your focus, and eventually your ability to finish anything on time. For the person doing the interrupting, though, it's often a positive experience. The manager who constantly pulls the team into status updates feels productive. They're in the loop, they're present, they're on top of things. They schedule daily standups, attend every scrum ceremony, and expect developers to translate their work-in-progress into business-friendly language on demand. Meanwhile, the developer is spending their day sitting in calls, reassuring, explaining, and planning, but never actually building anything. When they push back, the manager doesn't cancel the meetings. Instead, he trims them from 30 minutes to 15. It feels like progress. But the length of the meeting was never the problem. Three meetings a day means three interruptions, regardless of how short they are. Being constantly interrupted at work reminds me of being in a hospital. Doctors prescribe rest, but hospitals are among the worst places to actually get any. Before our kids were born, my wife spent close to a month in the hospital. I had a small corner of the room, a chair and a desk, where I'd work on my laptop by her side. Every 20 minutes, the door would swing open, a nurse would bustle in and out, and the door would be left wide open behind her. It didn't matter that the doctor had ordered rest. Her sleep was interrupted every single time. That's what interruption-driven development looks like in practice. The work requires uninterrupted effort to actually happen. You can have the right tools, the right team, the right intentions, and still produce nothing. The work environment itself is working against you. My headphones might keep those eager to converse at bay. But what we really need is time to get work done without the constant interruption.
It should be part of the software development lifecycle.

0 views
Neil Madden 1 week ago

Why I don’t use LLMs for programming

I originally posted this on Mastodon , but I thought I’d add it here too: “What I mean is that if you really want to understand something, the best way is to try and explain it to someone else. That forces you to sort it out in your own mind. And the more slow and dim-witted your pupil, the more you have to break things down into more and more simple ideas. And that’s really the essence of programming. By the time you’ve sorted out a complicated idea into little steps that even a stupid machine can deal with, you’ve certainly learned something about it yourself. The teacher usually learns more than the pupil. Isn’t that true?” — Douglas Adams “It is not knowledge, but the act of learning, not possession, but the act of getting there which generates the greatest satisfaction.” — Carl Friedrich Gauss “You think you KNOW when you learn, are more sure when you can write, even more when you can teach, but certain when you can program.” — Alan Perlis (of course) (Ironically, WordPress is now offering to “improve” these quotes with AI…)

0 views
Jeff Geerling 1 week ago

Expert Beginners and Lone Wolves will dominate this early LLM era

After migrating this blog from a static site generator into Drupal in 2009 , I noted: As a sad side-effect, all the blog comments are gone. Forever. Wiped out. But have no fear, we can start new discussions on many new posts! I archived all the comments from the old 'Thingamablog' version of the blog, but can't repost them here (at least, not with my time constraints... it would just take a nice import script, but I don't have the time for that now).

0 views
Ruslan Osipov 2 weeks ago

Are AI productivity gains fueled by delivery pressure?

A multitudes study which followed 500 developers found an interesting soundbite: “Engineers merged 27% more PRs with AI - but did 20% more out-of-hours commits”. While I won’t comment on the situation at Google, there are many anecdotes online about folks who raise concerns about increased work pressure. When the response to “I’m overloaded” becomes “use AI”, we’re heading for unsustainable workloads. The problem is compounded by the fact that AI tools excel at prototyping - the type of work which makes other work happen. Now, your product manager can prototype an idea in a couple of hours, fill it with real (but often incorrect) data, sell the idea to stakeholders, and set goals to productionize it a week later. “Look - the prototype works, and it even uses real data. If I could do this in a couple of hours, how hard could this be for an experienced engineer?” - while I haven’t heard these exact words, the sentiment is widespread (again, online). In a world where AI provides a surface-level ability to contribute across almost any role, the path to avoiding global burnout is to focus on building empathy. Just because an LLM can churn out a document doesn’t mean it’s actually good writing, and we’re certainly not at the point where a handful of agents can replace a seasoned PM. However, because the output looks polished - especially to those without deep domain knowledge - it’s easy to fall into the trap of thinking you’ve done someone else’s job for them. That gap between “looking done” and “being right” is exactly where the extra professional pressure begins to mount. This is really caused by the way we still measure knowledge worker productivity - by the sheer number of artifacts they produce, rather than by the outcomes of the work. The right way to leverage AI in the workplace is as a license to work better and focus on the right things, not as a mandate to produce more things faster.

0 views
ava's blog 2 weeks ago

rose ▪ bud ▪ thorn - february 2026

Reply via email Published 28 Feb, 2026 5 year anniversary with my wife. :) Started a creation event for the Gazette. I created more art! I made some buttons and banners, and some pixel art for myself and with Xaya . I rebranded my professional website (not this one). Put a lot of effort into the colors, associations, text, font and all, developing a proper brand kit. That's a first. I had a healthier relationship with food recently. I managed to summarize and translate 1-2 court decisions each week for noyb.eu, and I became a Silver Member (10+ translated/summarized court cases). 2 more and I will already be a Gold Member (20+). I received some helpful replies to e-mail inquiries for opportunities :) I was finally able to publish the first interview for my privacy professionals series! Friends visited and it wasn't just fun, but it also got me cleaning up the apartment so well. My wife has been baking amazing bread recently. The first two to three attempts were disappointing, but we kept at it and now our bread is sooooo good. She also baked me some matcha strawberry sugar cookies. My hair is long enough now to properly take care of it again with conditioner and oils. I can't wait until it grows long enough for a first haircut to kind of get it even and not so layered; I am also thinking of getting bangs? They always annoy me, but I think I can make it work this time...? I always think that... and at the same time, I also wanna grow it out again and bleach some money pieces. And I kinda wanna dye the underside of my hair, near my neck, too? I am conflicted. Still working on sending out e-mails for more interviews. Working on switching away from Discord! Probably Matrix. Already had an account there but it somehow got lost, so I made a new one. Now just working on transitioning some stuff. I've decluttered my closet; now I just need to sell the stuff. I'm planning a date day for myself where I get my nails done again (haven't done that in months), get a lash lift, go to the cinema, and buy some clothing I need. I am in need of replacing some items and also diving deeper into a new personal style I want. Reintroducing caffeine has been a bust. My tolerance seems to have plummeted to zero thanks to my experiment, and even very weak black tea is having some negative effects... and even my matcha! I guess I'll have to reduce it to once a week. I haven't been studying nearly as much as I should. I've been indulging a lot in just resting, reading, and creating, which isn't super bad, but I feel guilty for neglecting my studies when I have 4 upcoming exams for modules totaling 30 ECTS as a part-time student. :( Job applications and apartment hunting paused for now. Right now seems like an absolutely terrible time for both.

0 views
Jim Nielsen 2 weeks ago

Computers and the Internet: A Two-Edged Sword

Dave Rupert articulated something in “Priority of idle hands” that’s been growing in my subconscious for years: I had a small, intrusive realization the other day that computers and the internet are probably bad for me […] This is hard to accept because a lot of my work, hobbies, education, entertainment, news, communities, and curiosities are all on the internet. I love the internet, it’s a big part of who I am today. Hard same. I love computers and the internet. Always have. I feel lucky to have grown up in the late 90’s / early 00’s where I was exposed to the fascination, excitement, and imagination of PCs, the internet, and then “mobile”. What a time to make websites! Simultaneously, I’ve seen how computers and the internet are a two-edged sword for me: I’ve cut out many great opportunities with them, but I’ve also cut myself a lot (and continue to). Per Dave’s comments, I have this feeling somewhere inside of me that the internet and computers don’t necessarily align in support of my own, personal perspective of what a life well lived is for me. My excitement and draw to them also often leave me with a feeling of “I took that too far.” I still haven’t figured out a completely healthy balance (but I’m also doing ok). Dave comes up with a priority of constituencies to deal with his own realization. I like his. Might steal it. But I also think I need to adapt it, make it my own — but I don’t know what that looks like yet. To be honest, I don't think I was ready to confront any of this, but reading Dave’s blog forced it out of my subconscious and into the open, so now I gotta deal. Thanks Dave. Reply via: Email · Mastodon · Bluesky

0 views
Manuel Moreale 2 weeks ago

Dominik Schwind

This week on the People and Blogs series we have an interview with Dominik Schwind, whose blog can be found at lostfocus.de . Tired of RSS? Read this in your browser or sign up for the newsletter . People and Blogs is supported by the "One a Month" club members. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. My name is Dominik Schwind and I'm from Lörrach , a small town on the German side of the tri-border area with Switzerland and France . I've been a web developer for a really long time now, mostly server-side and just occasionally dabbling in what is showing up in the browser. Annoyingly, that's a hobby that I turned into work, so I guess that's ruined now. (Which doesn't stop me, though: I have too many half-finished side-project websites and apps to count.) Besides that I also really like to take photos and after a few years of being frozen in place I started to travel again, which is always nice. I do like watching motorsports of almost all types, I can easily get sucked into computer games like Factorio and I like to listen to podcasts, top of them being the Omnibus Project , Do Go On and Roderick on the Line . I've had a website since before I had internet access - some computer game I had in the mid-90s had the manual included as HTML and I used it to learn how to make basic websites. The very first day my father came home with a modem, I signed up for GeoCities and when I found a webhost that would allow me to run CGI scripts, I installed NewsPro , an early proto-blog system before blogging was even a thing. And while these early iterations of my website(s) are long gone, I haven't stopped since. The name came from an unease I started to feel in my final year of high school: once I finished school, I didn't know where to direct my energy and attention. That feeling hasn't really left since then. Mostly there is none - when I think of something that I want to communicate to someone, anyone , I try to put it online. Quite often it ends up on Mastodon but I do try to put things on my blog, especially when I know it is something future me would appreciate. A few years ago I noticed that I had been neglecting my blog in favour of other ways of communicating and I started a pact with a couple of friends to write weeknotes . We're in our fourth year now, which feels like an accomplishment. I try to write those posts first thing on a Sunday morning, if possible. I write most of my posts in Markdown in iA Writer , which is probably the most arrogant Markdown editing app in the world. But I paid for it at some point, so I better use it, too. I basically only need a computer and a place to sit and I'm fine. I've tried to find ways to blog from my phone but in the end, I prefer a proper keyboard and a bigger screen. While I never observed any difference in blogging creativity depending on the physical space, I actually quite enjoy writing in places other than my desk. This one is actually pretty simple: I run WordPress , currently on a DigitalOcean VM. One of the points on my long to-do list for my web stuff is to move it to Hetzner , which probably would only take an evening. And yet, I procrastinate. I've (more or less) jokingly said I'd replace WordPress with a CMS of my own making for years now, but at some point I resigned myself to it, even though my database is a mess. Probably not. Ever since the beginning I wrote for two audiences: my friends and future me.
I'm really happy when someone else finds my blog and might turn into an internet friend, but I wouldn't know how else to achieve that other than what I've been doing for all these years now. .de domains are pretty affordable, so it is that plus the server, which is around €100 per year. The blog doesn't generate any revenue; in many ways it's "only" a journal. When it comes to other bloggers, I'd say: go for it if you think your writing (or your photography or whatever it might be you're sharing on your website) is something that can be turned into revenue, one way or another. In many ways I'm a bit bummed that Flattr (or something similar) never really took off; I would happily use a service like that. Of course I need to mention my friends and fellow weeknoters: Martin (blogs in German) and Teymur . (NSFW) Three of the people whose blogs I read have been interviewed here already: Ahn ( Interview ), Jeremy Keith ( Interview ) and Winnie Lim ( Interview ). Some other people whose blogs I read and who might be interesting people to answer your questions would be Jennifer Mills (who has the best take on weekly blog posts I have ever seen), Nikkin (he calls it a newsletter, but there is an RSS feed), Roy Tang and Ruben Schade . If you don't have one yet, go start a personal website! Don't take it too seriously, try things - it can be a nice, meditative hobby and helps against the urge to doomscroll. Also, you never know - your kind of people might find it and connect with you. Now that you're done reading the interview, go check the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 130 interviews . People and Blogs is possible because kind people support it.

0 views
Stratechery 2 weeks ago

An Interview with Bill Gurley About Runnin’ Down a Dream

An interview with long-time (retired) VC Bill Gurley about his new book about building a career you love, Uber, and the modern state of VC.

0 views