Posts in Career (20 found)

On screwing up

The most shameful thing I did in the workplace was lie to a colleague. It was about ten years ago. I was a fresh-faced intern, and in the rush to deliver something I'd skipped the step of testing my work in staging[1]. It did not work. When deployed to production, it didn't work there either. No big deal, in general terms: the page we were working on wasn't yet customer-facing. But my colleague asked me over his desk whether this had worked when I'd tested it, and I said something like "it sure did, no idea what happened". I bet he forgot about it immediately: maybe he figured I'd just messed up the testing (for instance, by accidentally running different code than the code I pushed), or he knew I'd probably lied and didn't really care. I haven't forgotten about it. Even a decade later, I'm still ashamed to write it down.

Of course, I'm not ashamed about the mistake. I was sloppy not to test my work, but I've cut corners since then when I felt it was necessary, and I stand by that decision. I'm ashamed about how I handled it. But even that I understand. I was a kid, trying to learn quickly and prove I belonged in tech. The last thing I wanted to do was dwell on the way I'd screwed up. If I were in my colleague's shoes now, I'd have brushed it off too[2].

How do I try to handle mistakes now? The most important thing is to control your emotions. If you're anything like me, your strongest emotional reactions at work will be reserved for the times you've screwed up. There are usually two countervailing emotions at play here: the desire to defend yourself, find excuses, and minimize the consequences; and the desire to confess your guilt, abase yourself, and beg for forgiveness. Both of these are traps. Obviously making excuses for yourself (or flat-out denying the mistake, like I did) is bad. But going in the other direction and publicly beating yourself up about it is just as bad. It's bad for a few reasons.
First, you're effectively asking the people around you to take the time and effort to reassure you, when they should be focused on the problem. Second, you're taking yourself out of the group of people who are focused on the problem, when you're often the best situated to figure out what to do: since it's your mistake, you have the most context. Third, it's just not professional.

So what should you do? For the first little while, do nothing. Emotional reactions fade over time. Try to just ride out the initial jolt of realizing you screwed up, and the impulse to leap into action to fix it. Most of the worst reactions to screwing up happen in the immediate aftermath, so if you can simply do nothing during that period, you're already off to a good start. For me, this takes about thirty seconds. How much time you'll need depends on you, but hopefully it's under ten minutes. More than that and you might need to grit your teeth and work through it.

Once you're confident you're under control, the next step is to tell people what happened. Typically you want to tell your manager, but depending on the problem it could also be a colleague or someone else. It's really important here to be matter-of-fact about it, or you risk falling into the "I'm so terrible, please reassure me" trap I discussed above. You often don't even need to explicitly say "I made a mistake" if it's obvious from context. Just say "I deployed a change and it's broken X feature" (or whatever the problem is).

You should do this before you've come up with a solution. It's tempting to try to conceal your mistake and just quietly solve it. But for user-facing mistakes, concealment is impossible - somebody will raise a ticket eventually - and if you don't communicate the issue, you risk someone else discovering it and independently raising it. In the worst case, while you're quietly working on a fix, you'll discover that somebody else has declared an incident.
Of course, you understand the problem perfectly (since you caused it), and you know that it was caused by a bad deploy and is easily fixable. But the other people on the incident call don't know all that. They're thinking about the worst-case scenarios, wondering if it's database- or network-related, paging in all kinds of teams, causing all kinds of hassle. All of that could have been avoided if you had reported the issue immediately.

In my experience, tech company managers will forgive mistakes[3], but they won't forgive being made to look like a fool. In particular, they won't forgive being deprived of critical information. If they're asked to explain the incident by their boss, and they have to flounder around because they lack the context that you had all along, that may harm your relationship with them for good. On the other hand, if you give them a clear summary of the problem right away, and they're able to seem on top of things to their manager, you might even earn credit for the situation (despite having caused it with your initial mistake).

However, you probably won't earn credit. This is where I diverge from the popular software engineering wisdom that incidents are always the fault of systems, never of individuals. Of course incidents are caused by the interactions of complex systems. Everything in the universe is caused by the interactions of complex systems! But one cause in that chain is often somebody screwing up[4]. If you're a manager of an engineering organization, and you want a project to succeed, you probably have a mental shortlist of the engineers in your org who can reliably lead projects[5]. If an engineer screws up repeatedly, they're likely to drop off that list (or at least get an asterisk next to their name). It doesn't really matter if you had a good technical reason to make the mistake, or if it's excusable.
Managers don't care about that stuff, because they simply don't have the technical context to know if it's true or if you're just trying to talk your way out of it. What managers do have the context to evaluate is results, so that's what they judge you on. That means some failures are acceptable, so long as you've got enough successes to balance them out.

Being a strong engineer is about finding a balance between always being right and taking risks. If you prioritize always being right, you can probably avoid making mistakes, but you won't be able to lead projects (since that always requires taking risks). Therefore, the optimal amount of mistakes at work is not zero. Unless you're working in a few select industries[6], you should expect to make mistakes now and then; otherwise you're likely working far too slowly.

[1] From memory, I think I had tested an earlier version of the code, but then I made some tweaks and skipped the step where I tested that it still worked with those tweaks.
[2] Though I would have made a mental note (and if someone more senior had done this, I would have been a bit less forgiving).
[3] Though they may not forget them. More on that later.
[4] It's probably not that comforting to replace "you screwed up by being incompetent" with "it's not your fault, it's the system's fault for hiring an engineer as incompetent as you".
[5] For more on that, see How I ship projects at large tech companies.
[6] The classic examples are pacemakers and the Space Shuttle (should that now be Starship/New Glenn?).

ava's blog 2 days ago

privacy professionals: working at a messaging/social media platform

Welcome to a little series I'm starting, where I ask people working in the privacy field 7 questions about their work! This includes Data Protection Officers, Managers and Consultants, and other members of Privacy & Compliance teams. I find career advice and more specific information about the field to be lacking online, so I want to change that and host it myself :)

First up is an employee from the privacy team at a social media/messaging platform! I messaged them via their support platform, asking the questions and asking for consent to publish the answers, and received this response from one of the employees. Note: an earlier version of this post mentioned the company name; they have since asked me to anonymize it.

1. Can you describe your career path and what led you to become a Data Protection Officer (or similar role)?

I started as a lawyer and then transitioned into the corporate world, leveraging my law degree in a major corporation in their emerging privacy program. Another one of our teammates actually spent 25 years in teaching, took her CIPP/US and transitioned careers. In privacy specifically you will see many backgrounds and stories of people "falling into" this career. Our DPO has experience across multiple companies and years of experience to make it to where he is now as a leader in the company.

2. What drew you specifically to data protection law and privacy as a profession?

I loved the legal aspect of it and the ability to leverage my law degree. Fascinating intersection where humanity meets privacy.

3. What does a typical day in your role look like?

Our team works with customer-facing requests and internal team meetings, discussing ways we can continue to serve our customers and also lead with excellence in compliance and communication. Compliance, legal regulations, new laws etc. are all things we spend time working on, studying, and implementing within our platform.

4. What aspects of your work do you find most rewarding or challenging?
Every day comes with a new opportunity. With the ever-changing privacy landscape, the team is always learning, growing, and adapting. It's a very dynamic atmosphere. Love the challenge!

5. Which skills, qualities, or experiences do you consider essential for someone in such a role?

Being a good listener, as number one! A background in privacy law and certifications such as CIPP/US, AI etc. A well-rounded approach to both the legal aspects and the human impact, which can come through experience, reading and working in the industry.

6. How do you keep up with the rapidly changing landscape of data protection regulations?

Reading, conferences, webinars, IAPP, and associations. Once you immerse yourself in understanding privacy, you will find it touches virtually every part of our human existence: the marketplace, health, education, housing, finance etc. It is truly a fascinating industry.

7. If you could give advice to someone aspiring to enter this role, what would it be?

It's a great career with growing impact across all industries. I would say consume content that makes you better: books, podcasts, articles. Check out the IAPP website, which has lots of resources. Stay up to date on different laws and regulations being passed. Finally, keep reaching out to industry leaders, and think about how you want to show up, whether through certification, law school etc. It is always a bonus to get internships or equivalent. In the end, though, I would say: no matter what you do, work on your character through the decisions that you make in your day-to-day life now. Integrity, honesty, work ethic, humility, and curiosity will take you far in whatever you do!

Thank you to this employee for the reply! I'm still reaching out to other companies, but if you know some who would be interested, or know of people working in the privacy field who would like to answer these questions, please shoot me a message! :)

Reply via email Published 08 Feb, 2026

Sean Goedecke 3 days ago

Large tech companies don't need heroes

Large tech companies operate via systems. What that means is that the main outcomes - up to and including the overall success or failure of the company - are driven by a complex network of processes and incentives. These systems are outside the control of any particular person. Like the parts of a large codebase, they have accumulated and co-evolved over time, instead of being designed from scratch. Some of these processes and incentives are "legible", like OKRs or promotion criteria. Others are "illegible", like the backchannel conversations that usually precede a formal consensus on decisions[1]. But either way, it is these processes and incentives that determine what happens, not any individual heroics.

This state of affairs is not efficient at producing good software. In large tech companies, good software often seems like it is produced by accident, as a by-product of individual people responding to their incentives. However, that's just the way it has to be. A shared belief in the mission can cause a small group of people to prioritize good software over their individual benefit, for a little while. But thousands of engineers can't do that for decades. Past a certain point of scale[2], companies must depend on the strength of their systems.

Individual engineers often react to this fact with horror. After all, they want to produce high-quality software. Why is everyone around them just cynically[3] focused on their own careers? On top of that, many software engineers got into the industry because they are internally compelled[4] to make systems more efficient. For these people, it is viscerally uncomfortable being employed in an inefficient company. They are thus prepared to do whatever it takes to patch up their system's local inefficiencies. Of course, making your team more effective does not always require heroics.
Some amount of fixing inefficiencies - improving process, writing tests, cleaning up old code - is just part of the job, and will get engineers rewarded and promoted just like any other kind of engineering work. But there's a line. Past a certain point, working on efficiency-related stuff instead of your actual projects will get you punished, not rewarded. To go over that line requires someone willing to sacrifice their own career progression in the name of good engineering. In other words, it requires a hero.

You can sacrifice your promotions and bonuses to make one tiny corner of the company hum along nicely for a while. However, like I said above, the overall trajectory of the company is almost never determined by one person. It doesn't really matter how efficient you made some corner of the Google Wave team if the whole product was doomed. And even poorly-run software teams can often win, so long as they're targeting some niche that the company is set up to support (think about the quality of most profitable enterprise software).

On top of that, heroism makes it difficult for real change to happen. If a company is set up to reward bad work and punish good work, having some hero step up to do good work anyway and be punished will only insulate the company from the consequences of its own systems. Far better to let the company be punished for its failings, so it can (slowly, slowly) adjust, or be replaced by companies that operate better.

Large tech companies don't benefit long-term from heroes, but there's still a role for heroes. That role is to be exploited. There is no shortage of predators who will happily recruit a hero for some short-term advantage. Some product managers keep a mental list of engineers in other teams who are "easy targets": who can be convinced to do extra work on projects that benefit the product manager (but not that engineer).
During high-intensity periods, such as the lead-up to a major launch, there is sometimes a kind of cold war between different product organizations, as they try to extract behind-the-scenes help from the engineers in each other's camps while jealously guarding their own engineering resources. Likewise, some managers have no problem letting one of their engineers spend all their time on glue work. Much of that work would otherwise be the manager's responsibility, so it makes the manager's job easier. Of course, when it comes time for promotions, the engineer will be punished for not doing their real work.

This is why it's important for engineers to pay attention to their actual rewards. Promotions, bonuses and raises are the hard currency of software companies. Giving those out shows what the company really values. Predators don't control those things (if they did, they wouldn't be predators). As a substitute, they attempt to appeal to a hero's internal compulsion to be useful or to clean up inefficiencies.
Large tech companies are structurally set up to encourage software engineers to engage in heroics. This is largely accidental, and doesn't really benefit those tech companies in the long term, since large tech companies are just too large to be meaningfully moved by individual heroics. However, individual managers and product managers inside these tech companies have learned to exploit this surplus heroism for their individual ends.

As a software engineer, you should resist the urge to heroically patch some obvious inefficiency you see in the organization. Unless that work is explicitly rewarded by the company, all your efforts will do is delay the point at which the company has to change its processes. A background level of inefficiency is just part of the landscape of large tech companies. It's the price they pay to be so large (and in return reap the benefits of scale and legibility). The more you can learn to live with it, the more you'll be able to use your energy tactically for your own benefit.

[1] I write about this point at length in Seeing like a software company.
[2] Why do companies need to scale, if it means they become less efficient? The best piece on this is Dan Luu's "I could build that in a weekend!": in short, because the value of marginal features in a successful software product is surprisingly high, and you need a lot of developers to capture all the marginal features.
[3] For a post on why this is not actually that cynical, see my Software engineers should be a little bit cynical.
[4] I write about these internal compulsions in I'm addicted to being useful.

Karboosx 4 days ago

Tech documentation is pointless (mostly)

Do you really trust documentation for your evolving codebase? Probably not fully! So why do we even write documentation or constantly complain about lack of it? Let's talk about that :D

Jim Nielsen 4 days ago

Study Finds Obvious Truth Everybody Knows

Researchers at Anthropic published their findings around how AI assistance impacts the formation of coding skills:

"We found that using AI assistance led to a statistically significant decrease in mastery [...] Using AI sped up the task slightly, but this didn't reach the threshold of statistical significance."

Wait, what? Let me read that again: using AI assistance led to a statistically significant decrease in mastery.

Honestly, the entire article reads like those pieces you find on the internet with titles such as "Study Finds Exercise Is Good for Your Health" or "Being Kind to Others Makes People Happier". Here's another headline for you: Study Finds Doing Hard Things Leads to Mastery.

"Cognitive effort—and even getting painfully stuck—is likely important for fostering mastery."

We already know this. Do we really need a study for this? So what are their recommendations? Here's one:

"Managers should think intentionally about how to deploy AI tools at scale"

Lol, yeah, that's gonna happen. You know what's gonna happen instead? What always happens when organizational pressures and incentives are aligned to deskill workers. Oh wait, they already came to that conclusion in the article:

"Given time constraints and organizational pressures, junior developers or other professionals may rely on AI to complete tasks as fast as possible at the cost of skill development"

AI is like a creditor: it gives you a bunch of money and doesn't talk about the trade-offs, just the fact that you'll be more "rich" after it gets involved. Or maybe a better analogy is Rumpelstiltskin: the promise is gold, but beware, the hidden cost might be your first-born child.

Reply via: Email · Mastodon · Bluesky

Ankur Sethi 5 days ago

Write quickly, edit lightly, prefer rewrites, publish with flaws

Over two years of consistent writing and publishing, I've internalized a few lessons for producing satisfying - if not necessarily "good" - work. I covered similar ground previously in Writing without a plan; this post builds on the same idea.

If I want to see the shape of the idea I'm trying to communicate in my writing, I must get it down on paper as quickly as possible. This is similar to how painters lay down underdrawings on canvas before applying paint. I can't judge the quality of my idea unless I finish this underdrawing. Without this basic sketch to guide me, I might end up writing the wrong thing altogether. More than once, I've slaved away at a long blog post for days, only to realize that my core thesis was bunk. Writing quickly allows me to see the idea in its entirety before I waste time and energy refining it. How do I define quickly? For blog posts like this one, I try to produce a first draft in about 45 minutes. For longer pieces, I take about the same time but work in broad strokes and make heavy use of placeholders.

It's easy to edit the life and vitality out of a piece by over-editing it. I've done it many times. I'm prone to spending hours upon hours polishing the same few paragraphs in a work: complicating my sentences by attaching a hundred sub-clauses, burying important ideas under mountains of caveats, turning direct writing into purple prose, and inflating my word counts to planetary proportions. Light edits to a first draft improve my writing. If I keep going, I reach a point of diminishing returns where every new edit feels like busywork. And then, if I keep going some more, I start making the writing worse rather than better. Spending too much time editing puts me in a mental state that's similar to semantic satiation, but at the scale of a full essay or story. The words in front of my eyes begin to lose their meaning, ideas become muddled, and I can no longer tell if anything I've written makes sense at all.
At that point, I have no choice but to walk away from the work and come back to it another day. It's no fun. I try to spend a little more time editing than I do writing, but only a little. I've learned to recognize that if editing a draft takes me significantly longer than it took me to write it, there's probably something wrong with the piece. If editing takes too long, it's better to throw it away and redo from start.

If it's taking too long to edit, rewrite. By writing quickly, I've convinced my brain that rewriting something wholesale is cheap and easy. It's profitable and practical for me to write out a single idea multiple times, exploring it from different angles, finding new insight and depth every time I take a fresh stab at it. If writing a first draft takes 45 minutes, making multiple attempts at the same idea is no big deal. If it takes four hours, I'm more likely to go with my first attempt. Spending too much time on first drafts is a good way for me to get married to bad ideas. I wrote this very blog post three times because I couldn't quite capture what I wanted to say in the first two drafts. The content of the post changed entirely with every new attempt, but the core ideas remained the same.

No piece of writing is ever perfect. If I keep looking, I can find flaws in every single piece of writing I've ever published. I find it a waste of time to keep refining my work once it reaches the good-enough stage. If I've communicated my ideas clearly and haven't misrepresented any facts, I can allow a few clumsy sentences or a bad opening paragraph to slide. Even as I publish imperfect work, I try to look back at my past writing, notice the mistakes I keep repeating, and try to do better next time. I find that publishing a lot of bad work and learning from each mistake is a better way to learn and grow compared to writing a small number of "perfect" pieces.
By working quickly, I've been able to produce a lot of bad-to-mediocre writing, but I feel satisfied. As I keep saying, finding joy in the work I do is more important to me than producing something extraordinary. I'd rather write a hundred bad essays with gleeful abandon than slave over a single perfect manuscript. There's joy in finishing something, closing the book on it, calling it a day, and moving on. There's joy in trying out different styles, voices, subjects, ideas, personalities. There's joy in knowing that there will always be a next thing to write, and the next, and the next. When I'm stuck writing something that's not fun to work on, I find a certain consolation in knowing that I'll be done soon. That my sloppy writing process means I'm allowed to finish my piece quickly, put it out into the world, and move on to something more enjoyable.

Now you've reached the end of this post, and I don't quite know how to leave you with a solid kicker. Instead of doing a good job, I'll end with this Ray Bradbury quote that I copied off somebody's blog:

"Don't think. Thinking is the enemy of creativity. It's self-conscious and anything self-conscious is lousy. You can't 'try' to do things. You simply 'must' do things."

Perfect. I've never liked thinking anyway.

Write quickly. Edit lightly. Prefer rewriting to editing. Publish with flaws.

Susam Pal 5 days ago

Stories From 25 Years of Computing

Last year, I completed 20 years in professional software development. I wanted to write a post to mark the occasion back then, but couldn't find the time. This post is my attempt to make up for that omission. In fact, I have been involved in software development for a little longer than 20 years. Although I had my first taste of computer programming as a child, it was only when I entered university about 25 years ago that I seriously got into software development. So I'll start my stories from there. These stories are less about software and more about people. Unlike many posts of this kind, this one offers no wisdom or lessons. It only offers a collection of stories. I hope you'll like at least a few of them.

The first story takes place in 2001, shortly after I joined university. One evening, I went to the university computer laboratory to browse the Web. Out of curiosity, I typed susam.com into the address bar to see what kind of website existed there. I ended up on this home page: susam.com. I remember that the text and the banner looked much larger back then. Since display resolutions were lower, the text and banner covered almost half the screen. I knew very little about the Internet then and I was just trying to make sense of it. I remember wondering what it would take to create my own website, perhaps at susam.com. That's when an older student who had been watching me browse over my shoulder approached and asked if I had created the website. I told him I hadn't and that I had no idea how websites were made. He asked me to move aside, took my seat and clicked View > Source in Internet Explorer. He then explained how websites are made of HTML pages and how those pages are simply text instructions. Next, he opened Notepad and wrote a simple HTML page. He then opened the page in a web browser and showed how it rendered.
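The page he typed would have been something along these lines (a minimal sketch from memory, not the exact markup he wrote):

```html
<html>
  <head>
    <title>My First Page</title>
  </head>
  <body>
    <h1>Hello, World!</h1>
    <p>This page was written in Notepad.</p>
  </body>
</html>
```

Even a bare page like this, with no doctype or styling, renders happily in a browser, which is what made the ten-minute demonstration so effective.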
After that, he demonstrated a few more features such as changing the font face and size, centring the text and altering the page's background colour. Although the tutorial lasted only about ten minutes, it made the World Wide Web feel far less mysterious and much more fascinating. That person had an ulterior motive, though. After the tutorial, he never returned the seat to me. He just continued browsing the Web and waited for me to leave. I was too timid to ask for my seat back. Seats were limited, so I returned to my dorm room both disappointed that I couldn't continue browsing that day and excited about all the websites I might create with this newfound knowledge. I could never register susam.com for myself, though. That domain was always used by some business selling Turkish cuisine. Eventually, I managed to get the next best thing: a domain of my own. That brief encounter in the university laboratory set me on a lifelong path of creating and maintaining personal websites.

The second story also comes from my university days. I was hanging out with my mates in the computer laboratory, in front of an MS-DOS machine powered by an Intel 8086 microprocessor. I was writing a lift control program in assembly. In those days, it was considered important to deliberately practise solving made-up problems as a way of honing our programming skills. As I worked on my program, my mind drifted to a small detail about the 8086 microprocessor that we had recently learned in a lecture. Our professor had explained that, when the 8086 microprocessor is reset, execution begins with CS:IP set to FFFF:0000. So I murmured to anyone who cared to listen, 'I wonder if the system will reboot if I jump to FFFF:0000.' I then opened a debugger and jumped to that address. The machine rebooted instantly. One of my friends, who topped the class every semester, had been watching over my shoulder. As soon as the machine restarted, he exclaimed, 'How did you do that?'
I explained that the reset vector is located at physical address FFFF0 and that the CS:IP value FFFF:0000 maps to that address in real mode. After that, I went back to working on my lift control program and didn't think much more about the incident.

About a week later, the same friend came to my dorm room. He sat down with a grave look on his face and asked, 'How did you know to do that? How did it occur to you to jump to the reset vector?' I must have said something like, 'It just occurred to me. I remembered that detail from the lecture and wanted to try it out.' He then said, 'I want to be able to think like that. I come top of the class every year, but I don't think the way you do. I would never have thought of taking a small detail like that and testing it myself.' I replied that I was just curious to see whether what we had learnt actually worked in practice. He responded, 'And that's exactly it. It would never occur to me to try something like that. I feel disappointed that I keep coming top of the class, yet I am not curious in the same way you are. I've decided I don't want to top the class anymore. I just want to explore and experiment with what we learn, the way you do.' That was all he said before getting up and heading back to his dorm room.

I didn't take it very seriously at the time. I couldn't imagine why someone would willingly give up the accomplishment of coming first every year. But he kept his word. He never topped the class again. He still ranked highly, often within the top ten, but he kept his promise of never finishing first again. To this day, I feel a mix of embarrassment and pride whenever I recall that incident. With a single jump to the processor's reset entry point, I had somehow inspired someone to step back from academic competition in order to have more fun with learning. Of course, there is no reason one cannot do both. But in the end, that was his decision, not mine.
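For the curious, the address arithmetic behind that trick is easy to check for yourself: in real mode, the 8086 forms a 20-bit physical address as segment × 16 + offset, so CS:IP = FFFF:0000 maps to physical address FFFF0, the reset vector. A quick sketch (Python used purely for illustration):

```python
def real_mode_physical(segment: int, offset: int) -> int:
    """Map a real-mode segment:offset pair to a 20-bit physical address."""
    # The 8086 shifts the segment left by 4 bits (i.e. multiplies by 16),
    # adds the offset, and keeps only 20 address bits.
    return ((segment << 4) + offset) & 0xFFFFF

# The reset vector: execution after reset begins at CS:IP = FFFF:0000.
assert real_mode_physical(0xFFFF, 0x0000) == 0xFFFF0
print(hex(real_mode_physical(0xFFFF, 0x0000)))  # 0xffff0
```

Note that many segment:offset pairs alias the same physical address, and addresses past FFFFF wrap around to zero on a genuine 8086 with its 20 address lines.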
In my first job after university, I was assigned to work on the installer for a specific component of an e-banking product. The installer was written in Python and was quite fragile. During my first week on the project, I spent much of my time stabilising the installer and writing a user guide with step-by-step instructions on how to use it. The result was well received and appreciated by both my seniors and management. To my surprise, my user guide was praised more than my improvements to the installer. While the first few weeks were enjoyable, I soon realised I would not find the work fulfilling for very long. I wrote to management a few times to ask whether I could transfer to a team where I could work on something more substantial. My emails were initially met with resistance. After several rounds of discussion, however, someone who had heard about my situation reached out and suggested a team whose manager might be interested in interviewing me. The team was based in a different city. I was young and willing to relocate wherever I could find good work, so I immediately agreed to the interview. This was in 2006, when video conferencing was not yet common. On the day of the interview, the hiring manager called me on my desk phone. He began by introducing the team, which called itself Archie , short for architecture . The team developed and maintained the web framework and core architectural components on which the entire e-banking product was built. The product had existed long before open source frameworks such as Spring or Django became popular, so features such as API routing, authentication and authorisation layers, cookie management and similar capabilities were all implemented in-house by this specialised team. Because the software was used in banking environments, it also had to pass strict security testing and audits to minimise the risk of serious flaws. The interview began well. 
He asked several questions related to software security, such as what SQL injection is and how it can be prevented, or how one might design a web framework that mitigates cross-site scripting attacks. He also asked programming questions, most of which I answered pretty well. Towards the end, however, he asked how we could prevent MITM attacks. I had never heard the term, so I admitted that I did not know what MITM meant. He then asked, 'Man in the middle?' but I still had no idea what that meant or whether it was even a software engineering concept. Finally, he said, 'Learn everything you can about PKI and MITM. We need to build a digital signatures feature for one of our corporate banking products. That's the first thing we'll work on.' Over the next few weeks, I studied RFCs and documentation related to public key infrastructure, public key cryptography standards and related topics. At first, the material felt intimidating, but after spending time each evening reading whatever relevant literature I could find, things gradually began to make sense. Concepts that initially seemed complex and overwhelming eventually felt intuitive and elegant. I relocated to the new city a few weeks later and delivered the digital signatures feature about a month after joining the team. We used the open source Bouncy Castle library to implement digital signatures. After that project, I worked on other parts of the product too. The most rewarding part was knowing that the code I was writing became part of a mature product used by hundreds of banks and millions of users. It was especially satisfying to see the work pass security testing and audits and be considered ready for release. That was my first real engineering job. My manager also turned out to be an excellent mentor. Working with him helped me develop new skills and his encouragement gave me confidence that stayed with me for years. Nearly two decades have passed since then, yet the product is still in use.
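The core of a digital signature scheme like the one described above is easy to sketch: hash the message, transform the hash with the private key, and let anyone holding the public key check the result. The following is only a toy illustration in Python with a textbook-sized RSA key; the parameter values and function names are mine, and real systems use padded, full-size keys:

```python
import hashlib

# Textbook RSA parameters (far too small for real use): p = 61, q = 53.
N = 3233   # p * q
E = 17     # public exponent
D = 2753   # private exponent: E * D ≡ 1 (mod lcm(p-1, q-1))

def _digest(message: bytes) -> int:
    # Hash the message and reduce it into the RSA modulus range (toy only).
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

def sign(message: bytes) -> int:
    # Transform the hash with the private exponent.
    return pow(_digest(message), D, N)

def verify(message: bytes, signature: int) -> bool:
    # Recover the hash with the public exponent and compare.
    return pow(signature, E, N) == _digest(message)

sig = sign(b"transfer 100 EUR to account 42")
assert verify(b"transfer 100 EUR to account 42", sig)
assert not verify(b"transfer 100 EUR to account 42", (sig + 1) % N)  # tampered signature fails
```

A verifier who trusts the public key can detect tampering with the message or the signature; binding that public key to an identity is what PKI certificates add, and that binding is what ultimately defeats man-in-the-middle attacks. The production feature in the product, of course, relied on Bouncy Castle's audited implementations rather than anything like this.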
In fact, in my current phase of life I sometimes encounter it as a customer. Occasionally, I open the browser's developer tools to view the page source where I can still see traces of the HTML generated by code I wrote almost twenty years ago. Around 2007 or 2008, I began working on a proof of concept for developing widgets for an OpenTV set-top box. The work involved writing code in a heavily trimmed-down version of C. One afternoon, while making good progress on a few widgets, I noticed that they would occasionally crash at random. I tried tracking down the bugs, but I was finding it surprisingly difficult to understand my own code. I had managed to produce some truly spaghetti code full of dubious pointer operations that were almost certainly responsible for the crashes, yet I could not pinpoint where exactly things were going wrong. Ours was a small team of four people, each working on an independent proof of concept. The most senior person on the team acted as our lead and architect. Later that afternoon, I showed him my progress and explained that I was still trying to hunt down the bugs causing the widgets to crash. He asked whether he could look at the code. After going through it briefly and probably realising that it was a bit of a mess, he asked me to send him the code as a tarball, which I promptly did. He then went back to his desk to study the code. I remember thinking that there was no way he was going to find the problem anytime soon. I had been debugging it for hours and barely understood what I had written myself; it was the worst spaghetti code I had ever produced. With little hope of a quick solution, I went back to debugging on my own. Barely five minutes later, he came back to my desk and asked me to open a specific file. He then showed me exactly where the pointer bug was. It had taken him only a few minutes not only to read my tangled code but also to understand it well enough to identify the fault and point it out. 
As soon as I fixed that line, the crashes disappeared. I was genuinely in awe of his skill. I have always loved computing and programming, so I had assumed I was already fairly good at it. That incident, however, made me realise how much further I still had to go before I could consider myself a good software developer. I did improve significantly in the years that followed and today I am far better at managing software complexity than I was back then. In another project from that period, we worked on another set-top box platform that supported Java Micro Edition (Java ME) for widget development. One day, the same architect from the previous story asked whether I could add animations to the widgets. I told him that I believed it should be possible, though I'd need to test it to be sure. Before continuing with the story, I need to explain how the different stakeholders in the project were organised. Our small team effectively played the role of the software vendor. The final product going to market would carry the brand of a major telecom carrier, offering direct-to-home (DTH) television services, with the set-top box being one of the products sold to customers. The set-top box was manufactured by another company. So the project was a partnership between three parties: our company as the software vendor, the telecom carrier and the set-top box manufacturer. The telecom carrier wanted to know whether widgets could be animated on screen with smooth slide-in and slide-out effects. That was why the architect approached me to ask whether it could be done. I began working on animating the widgets. Meanwhile, the architect and a few senior colleagues attended a business meeting with all the partners present. During the meeting, he explained that we were evaluating whether widget animations could be supported. The set-top box manufacturer immediately dismissed the idea, saying, 'That's impossible. Our set-top box does not support animation.'
When the architect returned and shared this with us, I replied, 'I do not understand. If I can draw a widget, I can animate it too. All it takes is clearing the widget and redrawing it at slightly different positions repeatedly. In fact, I already have a working version.' I then showed a demo of the animated widgets running on the emulator. The following week, the architect attended another partners' meeting where he shared updates about our animated widgets. I was not personally present, so what follows is second-hand information passed on by those who were there. I learnt that the set-top box company reacted angrily. For some reason, they were unhappy that we had managed to achieve results using their set-top box and APIs that they had officially described as impossible. They demanded that we stop work on animation immediately, arguing that our work could not be allowed to contradict their official position. At that point, the telecom carrier's representative intervened and bluntly told the set-top box representative to just shut up. If the set-top box guy was furious, the telecom guy was even more so: 'You guys told us animation was not possible and these people are showing that it is! You manufacture the set-top box. How can you not know what it is capable of?' Meanwhile, I continued working and completed my proof-of-concept implementation. It worked very well in the emulator, but I did not yet have access to the actual hardware. The device was still in the process of being shipped to us, so all my early proofs of concept ran on the emulator. The following week, the architect planned to travel to the set-top box company's office to test my widgets on the real hardware. At the time, I was quite proud of demonstrating results that even the hardware maker believed were impossible. When the architect eventually travelled to test the widgets on the actual device, a problem emerged.
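The clear-and-redraw technique described above is the essence of sprite-style animation on platforms with no animation API. A minimal sketch of a slide-in effect follows; the `widget.clear()` and `widget.draw_at()` calls are hypothetical stand-ins (the real work used the platform's Java ME drawing APIs), and only the position interpolation is concrete:

```python
def slide_in_positions(start_x: int, end_x: int, frames: int) -> list[int]:
    """Interpolate the x positions a widget occupies during a slide-in.

    Each frame, the widget is cleared at its old position and redrawn
    slightly closer to its target, which the eye perceives as motion.
    """
    step = (end_x - start_x) / frames
    return [round(start_x + step * i) for i in range(1, frames + 1)]

def animate(widget, start_x: int, end_x: int, frames: int) -> None:
    # widget.clear()/draw_at() are hypothetical; real code used the
    # platform's graphics context to erase and repaint the widget.
    for x in slide_in_positions(start_x, end_x, frames):
        widget.clear()
        widget.draw_at(x)

# A 10-frame slide from off-screen (-100) to the left edge (0)
# ends exactly at the target position.
assert slide_in_positions(-100, 0, 10)[-1] == 0
```

Every extra frame costs a clear and a repaint, which is cheap on a desktop emulator but not necessarily on modest embedded hardware.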
What looked like buttery smooth animation on the emulator appeared noticeably choppy on a real television. Over the next few weeks, I experimented with frame rates, buffering strategies and optimising the computation done in the rendering loop. Each week, the architect travelled for testing and returned with the same report: the animation had improved somewhat, but it still remained choppy. The modest embedded hardware simply could not keep up with the required computation and rendering. In the end, the telecom carrier decided that no animation was better than poor animation and dropped the idea altogether. So the set-top box developers turned out to be correct after all. Back in 2009, after completing about a year at RSA Security, I began looking for work that felt more intellectually stimulating, especially projects involving mathematics and algorithms. I spoke with a few senior leaders about this, but nothing materialised for some time. Then one day, Dr Burt Kaliski, Chief Scientist at RSA Laboratories, asked to meet me to discuss my career aspirations. I have written about this in more detail in another post here: Good Blessings. I will summarise what followed. Dr Kaliski met me and offered a few suggestions about the kinds of teams I might approach to find more interesting work. I followed his advice and eventually joined a team that turned out to be an excellent fit. I remained with that team for the next six years. During that time, I worked on parser generators, formal language specification and implementation, as well as indexing and querying components of a petabyte-scale database. I learnt something new almost every day during those six years. It remains one of the most enjoyable periods of my career. I have especially fond memories of working on parser generators alongside remarkably skilled engineers from whom I learnt a lot. Years later, I reflected on how that brief meeting with Dr Kaliski had altered the trajectory of my career.
I realised I was not sure whether I had properly expressed my gratitude to him for the role he had played in shaping my path. So I wrote to thank him and explain how much that single conversation had influenced my life. A few days later, Dr Kaliski replied, saying he was glad to know that the steps I took afterwards had worked out well. Before ending his message, he wrote a heart-warming note. This story comes from 2019. By then, I was no longer a twenty-something engineer just starting out. I was now a middle-aged staff engineer with years of experience building both low-level networking systems and database systems. Most of my work up to that point had been in C and C++. I was now entering a new phase where I would be developing microservices professionally in languages such as Go and Python. None of this was unfamiliar territory. Like many people in this profession, I have long counted computing among my favourite hobbies. So although my professional work for the previous decade had focused on C and C++, I had plenty of hobby projects in other languages, including Python and Go. As a result, switching gears from systems programming to application development was a smooth transition for me. I cannot even say that I missed working in C and C++. After all, who wants to spend their days occasionally chasing memory bugs in core dumps when they could be building features and delivering real value to customers? In October 2019, during Cybersecurity Awareness Month, a Capture the Flag (CTF) event was organised at our office. The contest featured all kinds of puzzles, ranging from SQL injection challenges to insecure cryptography problems. Some challenges also involved reversing binaries and exploiting stack overflow issues. I am usually rather intimidated by such contests. The whole idea of competitive problem-solving under time pressure tends to make me nervous. But one of my colleagues persuaded me to participate in the CTF.
And, somewhat to my surprise, I turned out to be rather good at it. Within about eight hours, I had solved roughly 90% of the puzzles. I finished at the top of the scoreboard. In my younger days, I was generally known to be a good problem solver. I was often consulted when thorny problems needed solving and I usually managed to deliver results. I also enjoyed solving puzzles. I had a knack for them and happily spent hours, sometimes days, working through obscure mathematical or technical puzzles and sharing detailed write-ups with friends of the nerd variety. Seen in that light, my performance at the CTF probably should not have surprised me. Still, I was very pleased. It was reassuring to know that I could still rely on my systems programming experience to solve obscure challenges. During the course of the contest, my performance became something of a talking point in the office. Colleagues occasionally stopped by my desk to appreciate my progress in the CTF. Two much younger colleagues, both engineers I admired for their skill and professionalism, were discussing the results nearby. They were speaking softly, but I could still overhear parts of their conversation. Curious, I leaned slightly and listened a bit more carefully. I wanted to know what these two people, whom I admired a lot, thought about my performance. One of them remarked on how well I was doing in the contest. The other replied, 'Of course he is doing well. He has more than ten years of experience in C.' At that moment, I realised that no matter how well I solved those puzzles, the result would naturally be credited to experience. In my younger days, when I solved tricky problems, people would sometimes call me smart. Now it was expected. Not that I particularly care for such labels anyway, but it did make me realise how things had changed. I was now simply the person with many years of experience. 
Solving technical puzzles that involved disassembling binaries, tracing execution paths and reconstructing program logic was expected rather than remarkable. I continue to sharpen my technical skills to this day. While my technical results may now simply be attributed to experience, I hope I can continue to make a good impression through my professionalism, ethics and kindness towards the people I work with. If those leave a lasting impression, that is good enough for me.

daniel.haxx.se 1 week ago

A third medal

In January 2025 I received the European Open Source Achievement Award. The physical manifestation of that prize was a trophy made of translucent acrylic (or something similar). The blog post I linked above has a short video where I show it off. In the year that has passed since, we have established an organization for how to do the awards going forward in the European Open Source Academy and we have arranged the creation of actual medals for the awardees. That was the medal we gave the award winners last week at the award ceremony where I handed Greg his prize. I was however not prepared for it, but as a direct consequence I was handed a medal this year, in recognition of the award I got last year, because now there is a medal. A retroactive medal if you wish. It felt almost like getting the award again. An honor. The medal is made of a shiny metal, roughly 50mm in diameter. In the middle of it is a modern version (with details inspired by PCB looks) of the Yggdrasil tree from old Norse mythology – the “World Tree”. A source of life, a sacred meeting place for gods. In a circle around the tree are twelve stars, to visualize the EU and European connection. On the backside, the year and the name are engraved above an EU flag, and the same circle of twelve stars is used there as a margin too, like on the front side. The medal has a blue and white ribbon, to enable it to be draped over the head and hung from the neck. The box is a sturdy thing in a dark blue velvet-like covering with European Open Source Academy printed on it next to the academy’s logo. The same motif is also on the inside of the top part of the box. I do feel overwhelmed and I acknowledge that I have received many medals by now. I still want to document them and show them in detail to you, dear reader. To show appreciation; not to boast.

iDiallo 1 week ago

The Shoe on The Other Foot

Ten years ago, I was in a dark season. My first startup had cratered. Confidence, gone. I would walk for hours to clear my head, often through parts of the city we typically hurry past. One Tuesday, I saw a man sitting outside a boarded-up storefront. He was weathered, his eyes holding a quiet dignity. But I was fixated on a problem to solve. He only had one shoe. The right foot was wrapped in a frayed plastic bag. I approached, offering to buy him a pair. He smiled, a surprising, warm thing. "Kind of you," he said. "But this one's enough." I was baffled. Enough? It was objectively not enough. It was a problem to be fixed. I insisted. He listened patiently, then said something that changed my perspective. "You see a missing shoe. I see a reminder. Every step I take, I feel the world. The cold, the grit, the wet. It keeps me awake. It tells me I'm moving. The day I get too comfortable is the day I stop feeling the road." I sat with him. I listened. Let's call him David. He spoke not of lack, but of acute awareness. Of a raw, unfiltered connection to his own journey. He was a conscious observer of his circumstance, not a victim. A gentle rain sprinkled from the sky. He looked up, closed his eyes and embraced every single raindrop. I didn't buy him shoes that day. Instead, I bought us both coffee. We talked for an hour. I told him of my failure. He offered no platitudes, just the quiet acknowledgment that "the road is rough before it smooths." As I left, a wild, impulsive thought hit me. I took off my own right shoe and left it on the bench. "A trade," I said. "For the perspective." He laughed, a rich, full sound. I walked back to my empty office in a bespoke suit and one bare foot. The feeling was electric. The vulnerability was terrifying. The concrete was real. That night, I made two decisions. First, I hired David for a simple, dignified role at the new company I was mustering the courage to build.
His insight, his grounded clarity, became a secret weapon in our strategy sessions. He saw through pretense instantly. Second, I never wore a pair of shoes to work again. That's right. From that day forward, before I go to a meeting, a negotiation, or any board presentation, I remove my right shoe, place it under my desk and perform my task. The right foot always remains bare. It is my compass. It grounds me (literally) in the humility of new beginnings. It is a perpetual reminder of the David Principle. True awareness comes from embracing the uncomfortable feel of the road. It forces authenticity. When you negotiate a nine-figure deal with one foot on a cold marble floor, you remember who you are and where you came from. My team understands. My clients, once startled, now respect it. "There goes the One-Shoe CEO," they say. It's our culture. We don't just solve problems; we feel them. David has been with the company for a decade now. He's a cherished advisor and friend. We never speak of that first day. The lesson is lived, not referenced. Why am I sharing this? Because leadership isn't about having all the answers. It's about having the courage to feel the missing piece. To embrace a productive discomfort. To seek wisdom in the most unexpected places and have the conviction to let it alter your path, down to the very shoes you won't wear. The man with one shoe taught me everything. Because, as it turns out… I was the shoe on the other foot. </LinkedIn> Sorry, I'm not sorry!

ava's blog 1 week ago

small thoughts part 7

In ‘small thoughts’ posts, I’m posting a collection of short thoughts and opinions that don’t warrant their own post. :) Seeing parallels between my mother and me. She used to throw herself into work, regardless of anything. Didn’t wanna call in sick, still usually doesn’t. Used to pride herself on how much she works and that she’s even driving while crying because her wrists hurt so much from her rheumatoid arthritis. It’s irresponsible. I always looked at her like: If I acted like you, I’d be sick too. She was bottling everything up, having no healthy coping mechanisms, pushing herself too far, not giving herself proper rest, angry about all kinds of stuff. It can’t be healthy. Your body is telling you to stop. I’m already doing loads of things better than my mum. More boundaries, healthier coping mechanisms, more rest, not afraid to say no, better nutrition, more exercise, earlier treatment, less stressful job, supportive partner. But I see we are the same in that we never feel like we’re doing too much. We always think that we’re doing too little, that there’s always room for more, and that we’re probably slacking and being lazy. But turns out we do more than many healthy people do, while chronically ill. I understand my mum better now in that regard. Illnesses like that can of course be exacerbated by a bad lifestyle, but just lying around more doesn’t make it go away. It can even make it worse mentally. At least work offers distraction and a way to farm praise and feel good about oneself instead of feeling like just a sicko who should die. No one wants to feel like a burden, and at the same time, chronic illness makes it so obvious that you’re fragile and have limited time in life.
So there’s this push to get everything done and reach new heights as soon as possible because who knows when it’ll get worse, who knows when it’ll hospitalize me again, who knows how long I have left, and a push to say: I might be super sick, but I’m not a bummer, not a liability, not a waste of money, see how productive and fun I can be regardless. I can serve as inspiration porn for healthy people! I’m not like those sick people who are just sitting at home, so pick me! So there’s this pressure and drive to go twice or thrice as hard. Something making me uncomfortable for a while now is: I feel like lots of things online that should be unmonetizable, cozy, intimate, authentic etc. still get twisted to benefit someone financially or career-wise in an overt way. It makes me want to retreat to the offline at times. It is highly unlikely that you’ll attend a private, casual party in real life and someone else will advertise it online to get as many people to join it as well, just to include in their CV that they’ve held events with this inflated number of attendees that they brought in, because it would benefit their career in event management. People will do that online though. They’ll make a casual retro website, and a year later include it as reference for coding knowledge or a web design side hustle. They’ll make videos for fun, find an audience and suddenly get a sponsor, have a management team and a content strategy. They grow a forum or channel, then retroactively use it to bolster their CV in social media management. Code a project for a small group as a hobby, then suddenly promote it and intend to monetize it. There’s a need for some people to grow anything into something professional because they’ve internalized growth is good, and more people in a space they control means more opportunities and potential sources of income or influence. More eyeballs means more sales. I fear they learned that from influencers and it leaves a weird taste.
The mindset: Growth to stroke the ego, growth so more people may take interest in their online presence and give sponsorships, growth for the career and side hustle, growth because in the rare event they’ll write a book or start a podcast or a YT channel or a blog, more people are willing to consume it, … I’m tired of always somehow being on someone’s turf that starts to turn into a monetization object, friend turned potential future customer or follower, or power trip. Reminds me of a Discord server I used to be on ages ago where the owner was basically never active anymore in the server but promoted it elsewhere, and one reason why he didn’t wanna give admin to someone else was the clout a big Discord server brings and the vague feeling that you could somehow leverage this one day, just like accounts always wrestle with the idea of whether they have to “use this opportunity” when they blow up. He had no interest in it or the members, but was attached to the numbers. I don’t wanna be where someone opportunistic thinks “This could come in handy one day”, “I should be rewarded for building this up” or “More is always better”, or who uses an admin position as something to feel important about. It ruins a space. I’ve seen online spaces with 100 members still feel like a casual chat room where no one elevates themselves as anything but another chatter, while I’ve also seen ones with 15 members already feel like a forced space where someone “runs the show” and has a clear path they follow and you are just mere numbers to fill spots. You won’t have a photo album, a highlight reel of your life online once you’re old. No great aesthetic shots, quotes, candid beautiful videos with great music in a huge backlog. Until then, the services you use will have significantly changed. They have already changed a couple times in this little time and shown that they don’t give a damn about your content.
Some old content is already missing sound; other content is muted due to copyright and licensing issues with the chosen songs. The format and ratio changes. Features get removed. When you’re old, your oldest content will be 60 years old or older, unavailable, lost due to deletion of the service, looking ugly, muted, erroring out, unplayable. Please don’t delude yourself that you are building something that lasts on these platforms. Reply via email Published 02 Feb, 2026

ava's blog 1 week ago

rose ▪ bud ▪ thorn - january 2026

New blog format for the year! I wanna share my joys and successes (rose), things I look forward to or am working on/could be improved (bud) and the challenges I've faced (thorn) each month. :) Reply via email Published 31 Jan, 2026 Enjoying so much snow :) has been amazing to see. It hasn't snowed this often or much in the last few years, if not a decade; at least that's what it feels like to me. Going on weekly walks with my wife has been fun and gives room for so many amazing talks we'd otherwise not have had like that. Having fun starting to journal in my notebook again, with little drawings and lots of stickers. I hit over 10 cases translated and summarized for GDPRhub! Great feedback at work from my mentor. He's very happy with the GDPR-compliant deletion process document I wrote, and will adapt it to serve as a template for other departments. I got my first e-mail response for a new blog project :) I made some pixel buttons (and the art for this post!); hopefully the beginning of more pixel art again, after so long. If you've never seen it: The background of my website is pixel art I made myself. Applying to jobs! Another volunteering opportunity; looking forward to the first time I can contribute. Sending out e-mails for the blog project. The intended recipients are really hard to reach directly, and I'll refine the way I reach out. Studying for four exams in March - Special Law of Obligations, Legal English, European Law II, and an economics module. Not drinking anything caffeinated for a month is more difficult than I thought it would be. Matcha and black tea are my comfort drinks, and I miss them a lot. I almost caved a couple times, especially when I was really sad towards the end of the month. I got a rejection for one of my job applications this year. I felt discouraged at my workplace because the ways I want to help and grow get denied or ignored. I participated in an MtG draft and got last place (as usual).

ava's blog 1 week ago

re: a response to my ai class divide post

Came across a response by Randomly Short today. Appreciate the reply, and I thought I'd go more in-depth about some of it. The author writes: " Unfortunately that type of work is not what free plans are designed to do. That type of work would honestly be considered something a professional would be doing. Pay for a month of a pro-level plan and get the work done. It’s not like it’s going to disappear afterwards. " In my post, I acknowledge that paying for services that are better and more resource-intensive is a no-brainer. That's obvious. The problem is also not having a short phase of a use case and being too stingy to pay for a month; this use case is ongoing. I'm not only studying for a month, I am studying all year. My volunteer work also involves court cases. " For students in the US Google offers Gemini AI Pro for free for a year. " Irrelevant point; most of the world is not in the US, other providers offer free plans as well, and as the author also said, this type of work is not what free plans are designed to do. So what gives? " She also talks about a class divide. Are working students working with many lengthy court cases really part of a class that is being priced out of all of this? If I were still a student I’d not bat an eye at paying for a year of any of the pro plans from the big three if it was going to save me that much time. As it turns out she just admits she doesn’t want to pay. Well…sorry then you deal with the result of that and it has nothing to do with a class divide. " The author is insinuating that law students are part of the upper class. This tends to be true; however, it is not true for everyone. I come from a poor background, most of my family has not attended university, and I do it part time next to a full time job. I can also do this because public university in Germany is a lot less expensive than in the US. It misses the overall point of the post though: It isn't about law students being too poor to pay for AI.
It's about how different groups and areas of the world have different economic power and access to these systems, and both seminars at work as well as official ads and viral posts online show LLMs doing things that are not feasible with a free plan, but are presented as if they are default possible for everyone. It is about how people deliberately tend to hide how much they pay for AI to be able to do the things they boast about online. People not wanting to pay could be because they cannot justify the costs. I know US people are used to being thousands in debt, other places aren't. If I was 30k in debt, what more are expenses of 250 dollars a month added to the credit card? Nothing, I guess. But this isn't how other places in the world work, and people very much need to choose between rent, groceries, and shit like this. There is a tension between rising costs of living, and the fantasy online where people pretend a vibecoded app can get you passive income if only you invest into an AI subscription. This doesn't even take into account that many of these subscriptions don't have regional pricing and are a lot more expensive in poorer countries than elsewhere. It also shouldn't be news that the more money you have, the more a subscription seems like peanuts for the "value" you get, and the more risks you can take. You can just try it, and if you don't like it, cancel - you won't feel like much money was wasted. Meanwhile others in less fortunate positions have to really weigh the costs and benefits. They also recognize that once they're in it, their projects and workflows rely on it, so this will be a constant recurring cost they might have to shoulder. Many of these people are not even in a position where they can see saved time as saved money; they're not entrepreneurs who, if they take less time for a task, they can accept one more opportunity that brings in more money. Instead, the actual real money on the bank account counts, not time savings. 
If they're not making rent that month, it doesn't matter how much time it saved, because they cannot as easily turn time into money. But this especially proves my point: "If I were still a student I’d not bat an eye at paying for a year of any of the pro plans from the big three if it was going to save me that much time." Seems like paying more for better models saves more time, which in turn can generate better results that might be profitable, or create more time to earn money through other opportunities or gain more qualifications, or learn faster, which all adds up to more of an advantage. So what happens if you cannot afford that? What happens when your friend on a subscription gets the same thing done three times as fast as you on a free plan, while you keep having to correct the output and wait out cooldowns? What about the likely future in which the free tier enshittifies and the best paid tiers are almost unrecognizably better than the others? What about the people who are subscribed now, but will be priced out of it soon as the prices keep increasing? Any tool that does not work equally for everyone due to financial constraints furthers a class divide. That applies to many, many more things than AI, but it applies here too. The response reads as if it were coming from someone who cannot even conceive that poor people, or any sort of inequality, exist, and it is baffling. Reply via email Published 31 Jan, 2026

Ankur Sethi 1 week ago

Generative AI and the era of increased gatekeeping

Generative AI models can create text, images, code, and music faster and in greater quantities than we can absorb them. Before ChatGPT was introduced to the world in November 2022, producing a piece of media took longer than consuming it. In 2026, the equation has been turned on its head. If your job—thus far—involved curating or evaluating the work of other humans, this is a problem. Generative AI is bad news for teachers and professors, editors at magazines and publishing companies, maintainers of open-source projects, academics doing peer review or replicating studies, and anyone else who must review the work of their peers in order to give them feedback or as quality control. If you have a job that fits this description, then you’ve probably been inundated with low-quality AI-generated content in the past few months. It only takes a few minutes for somebody to “write” a short story using an LLM; it takes a human hours to read and evaluate it.

In response to the increasing burden on curators, organizations are tightening the rules around how they handle submissions. Some are taking the moderate stance of asking that AI-generated submissions be identified and cleaned up prior to submission, but many are banning outside contributions altogether. For example:

- Curl ending bug bounty program after flood of AI slop reports
- Sci-fi magazine ‘Clarkesworld’ stops submissions after a rush of AI-made stories : NPR (although this was a temporary measure)
- Flood of AI-Written Fiction Shuts Down Clarkesworld Submissions – Black Gate (this, too, was a temporary measure)

Other organizations are placing strict restrictions on the number of submissions and making submission rules more stringent:

- arXiv Changes Rules After Getting Spammed With AI-Generated ‘Research’ Papers
- CVPR 2025 Changes
- ICLR 2026: Submissions, LLM Disclosures, and the Peer Review Shuffle
- Fearful of AI-generated grant proposals, NIH limits scientists to six applications per year

This is a net negative for society. Organizations lose out on potentially good contributions, people early in their careers lose out on a chance to get feedback from experienced professionals, and the rest of us lose because fewer good works make their way into publications and the commons. I see three possible futures ahead of us.

First: the novelty of using ChatGPT to produce work and throw it over the wall without reading it wears off. It becomes a social faux pas to submit AI-generated work for publication without extensively vetting and editing it. Enough people are named and shamed that new social norms around the use of generative AI emerge. Our societies adapt so that putting your name on a work without verifying its quality is an act that destroys your reputation.

Second: we come up with methods to prove that you have in fact done the work you claim to have done. Like proof of work in cryptography 1 , but for humans. Submitting anything without proof of work becomes an automatic rejection. I can’t imagine what this would look like, though. More importantly, I can’t imagine that we will collectively agree to put ourselves through the indignity of being judged by an algorithm. But hey (points to everything), look at the world we’ve made. Society has a high tolerance for algorithmically inflicted indignities.

Third: we enter a new era of gatekeeping, in which most of us can no longer fix a bug in our favorite open-source projects, submit stories to literary magazines, apply for public job postings, or get peer review on our papers. Unless you’re a well-known name, or you know somebody who knows somebody, or you can get somebody to vouch for the veracity of your work, you’re considered a nonentity. An era of eroding trust, where anything created by a stranger you don’t personally know is considered suspect. An era of increased gatekeeping in which some of us publish, and the rest of us perish.

Personally, I think we’ll land on a combination of the three possible outcomes. Some organizations will name and shame, some will ask for proof of work, and yet others will step up their gatekeeping. And who knows, there’s probably a secret fourth option that I haven’t thought of. I’ve never been great at predicting the future. That said, I remain optimistic 2 about our ability to handle this situation. I believe people are generally nice and just want to help, even the ones sending 5,000-line vibecoded pull requests to open-source projects. Our societies are still adjusting to a strange new technology, and the social norms around its use have not been written yet. Until we collectively figure out how to behave reasonably, we might see slightly increased gatekeeping, but my hunch is that it’ll be temporary. I believe we’ll eventually get to a point where we all learn to be editors and reviewers and slush-pile readers of our own AI-generated work. That’s an interesting future to consider: one in which generative AI has turned us all into more discerning readers.

1. Cryptography, not cryptocurrency. The crypto-bros have given perfectly reasonable mathematical techniques a bad name, so I feel it’s important to mention that here. ↩
2. Sloptimistic? Ha. ↩
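For readers unfamiliar with the cryptographic analogy above: in hashcash-style proof of work, producing a valid token is deliberately expensive, while checking it costs a single hash. A minimal illustrative sketch in Python (purely to explain the concept; no publisher actually uses anything like this, and the function names are my own):

```python
import hashlib
import itertools

def prove(message: str, difficulty: int = 4) -> int:
    """Find a nonce so sha256(message + nonce) starts with `difficulty` zero hex digits.

    This is the expensive part: on average it takes 16**difficulty hash attempts.
    """
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{message}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(message: str, nonce: int, difficulty: int = 4) -> bool:
    """Checking a claimed proof is cheap: one hash, regardless of difficulty."""
    digest = hashlib.sha256(f"{message}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# The submitter burns CPU time once; the curator verifies instantly.
nonce = prove("my short story submission")
assert verify("my short story submission", nonce)
```

The asymmetry is the whole point: the curator's verification cost stays constant while the submitter's cost can be tuned upward, which is exactly the property that is hard to replicate for human creative work.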

Manuel Moreale 1 week ago

Nikita Prokopov

This week on the People and Blogs series we have an interview with Nikita Prokopov, whose blog can be found at tonsky.me . Tired of RSS? Read this in your browser or sign up for the newsletter . The People and Blogs series is supported by Eleonora and the other 122 members of my "One a Month" club. If you enjoy P&B, consider becoming one for as little as 1 dollar a month. I am from Siberia. I studied CS there, got my first job in IT, and moved to Germany in 2018. Apart from programming, I am passionate about movies and filmmaking and UI design, have experimented with standup, and play badminton. I started writing on LiveJournal when I was still in uni and found a very nice Russian-speaking FP community there. Had a lot of eye-opening and often very heated discussions. I experimented with publishing on collaborative blogs (Habr, approximately the Russian dev.to) but felt that the author’s identity gets lost there. A personal blog was my attempt at reaching a wider English-speaking community. LiveJournal was already dying by then, and I was smart (lucky?) enough not to choose Medium (TBH, it looked very promising in 2014). I am pretty happy with that decision. The older you get, the less you believe any startup has your best interests at heart. This leads to the only possible conclusion: self-hosting. It is hard to start, but once you get your core audience there’s no limit to your growth. I usually collect ideas for a while (pictures, phrases, links, thoughts). This happens in the background and can take years. Once it reaches critical mass, I sit down to organize it all into a coherent whole. I don’t do separate drafts; it’s more like a pile of ideas — first pass — reflection — reorganization/cleanup — review — publish. A mandatory part of the reflection phase is questioning myself: why am I writing this, nobody is going to read it, this is stupid/silly/trivial/too complicated. That’s how you know you are writing something truly great.
I usually ask a friend or two for feedback, and use Grammarly/ChatGPT/built-in Apple AI for proofreading. I can only write in Sublime Text because it’s a tool I use daily for coding and it has become second nature to me. I feel very uncomfortable in any other tool when some minor detail behaves slightly differently from what I am used to. iA Writer is fantastic and I tried to reproduce it as closely as possible, its only downside being that it is not Sublime Text. I recently bought a NuPhy keyboard (Air60 v2 Cowberry) for my PC because of its compact size and cute looks, but was surprised that it sounds amazing, and now I am addicted to typing on it. Apart from that, no: any place, any time, any device. No sounds, no music, as I find both distracting. I used to use GitHub Pages but got tired of the Ruby/Jekyll local installation breaking on macOS every year or so. I don’t blog often, so it’s the worst: you come back to your blog once every few months, completely without context, and you need to spend hours just restoring it to the status quo. I wrote my own engine in Clojure and have been happy ever since. For some reason I didn’t go the static-generator route. I do a good old CGI-style approach, with an actual server rendering your pages. It’s more fun that way, and allows for more interactivity, although I haven’t explored it much yet. No, I am totally happy with where I am. Server costs €35/mo, but I co-host a lot of other projects there. The domain is €25/year. I used to have a Patreon, but it was not just for the blog, also for my open-source projects. I never tried monetizing writing, not sure how well that would go, but I have nothing against it. Off the top of my RSS feed:

- Jamie Brandon: https://www.scattered-thoughts.net/log/
- Rakhim Davletkaliyev: https://rakhim.exotext.com/
- Marcin Wichary: https://aresluna.org/
- Ilya Birman: https://ilyabirman.net/meanwhile/

Fira Code is a nice programming font you might like. Now that you're done reading the interview, go check out the blog and subscribe to the RSS feed . If you're looking for more content, go read one of the previous 126 interviews . Make sure to also say thank you to Ken Zinser and the other 122 supporters for making this series possible.

A Room of My Own 1 week ago

On Leaving, Starting Over, and Not Living in Fear

When I recorded my message for my son’s 16th birthday (family from all over the world sent short messages, memories, and bits of advice), I surprised myself with what came out. My advice wasn’t about happiness, success, or working hard. It was this: if your world ever starts to feel small, boxed in by a job, a situation, or a relationship - remember that walking away and starting over is always an option. Your world doesn’t have to stay small. Basically, my advice was to live, and to live free. That came after telling him to take care of family relationships, because in the end, family is what stays. I’ve lived in different countries, moved for work, built and rebuilt communities, and started over more times than I can count. And yet, the thing I didn’t always do well was letting go. I stayed too long in situations because I was afraid of change, even when walking away would have been kinder to myself. There’s a fine line between quitting and knowing when you need a change. I know it’s almost a cliché now, but many years ago I attended a work training called Who Moved My Cheese . It happened, coincidentally, just a few months before a really stressful situation where we had no choice but to accept change…or perish. I’ll always be grateful to that trainer for preparing me to be Sniff and Scurry… well, maybe Haw. Not everyone has the same choices, of course, and I only have my own experience to draw from. But I do believe that simplicity helps. When you don’t need much, when you don’t build a life that traps you, you keep your freedom. That’s what I want him to know: protect your family relationships, live simply, and never forget that you can choose a different life. I told him not to live in fear. Life is big. The world is large. And sometimes the bravest, healthiest thing you can do is to walk away and start again, even if that means leaving everything behind. I wanted him to know that no place, no group, no situation has to be forever if it stops being right for him.
Simplicity (I value it, I strive for it) is another form of freedom. When we release the “connoisseur lifestyle” and shed unnecessary attachments, we create space for what truly matters. We live at the Level of F… You. I love being there. As I was saying this to him, I was reminded of something I said to my daughter several years ago. She must have been six or seven at the time. She was stuck in a painful little triangle of a friendship. Three girls, one of them passive, the other two (my daughter being one of them) competing to dominate her. My daughter came home in tears more than once, confused and hurt and trying to work out what she’d done wrong. It was painfully familiar. I had lived that exact dynamic as a child. I remembered my own mother sitting with me, trying to help me come up with plans to fix it, to say the right thing, to stay in it and make it work. I never really escaped my triangle, and it followed me through much of primary school (that’s almost eight years of suffering!). So this time, I said something different to my daughter. I told her that we could analyse it all we liked and come up with strategies, but I wanted to tell her something I wished someone had told me back then. Leave them both to it. There are billions of people in the world. Don’t get hung up on two who make you unhappy. Let them be. Find someone else in your class. Start again. I wasn’t sure she’d take it in. But she did. She made new friends. Three years later, those two girls are an afterthought. Watching her do that, so simply and without drama, was… healing. I think what traps us, as children and as adults, is a kind of scarcity thinking. We believe this is it. These are the only people. This is the only place. If we leave, we’ll lose everything. But the world is not small, even when our corner of it feels that way. Walking away isn’t failure. It isn’t giving up. Sometimes it’s choosing yourself.
This applies to friendships, places, jobs, relationships, and entire chapters of life. Not everything needs fixing. Not everything needs endurance. Some things simply need leaving behind. If there’s one thing I hope my kids carry with them, it’s this: don’t live small out of fear. If something makes your world shrink, you’re allowed to make it big again. Even if that means walking away.

- Scarcity thinking keeps us trapped
- We cling because we think options are limited
- Walking away isn’t failure, it’s choosing yourself/choosing life
- This applies to friendships, places, jobs, and whole chapters of life

Living at the Level of F* You
Recognizing the Scarcity Mentality
Don't Become a Connoisseur by JA Westenberg
You’re Always Choosing How You Live

ava's blog 1 week ago

my journey into data protection, part one

Growing up, I wished for more radical honesty and openness around careers and opportunities online. How others achieved anything was beyond me. I was simply missing the experience and maturity to even guess how others' successes came to be, so it was often a big mystery to me. Many people are ashamed of freely sharing what it actually took. I guess some feel that if they reveal it, people will nitpick what could have been done better; some guard their connections; others don't want to put out there that they're actually a 'nepo baby'. People are embarrassed about the failures along the way and would prefer to make it all seem effortless and instant on the outside. I want to do my part and be as open as is sensible about my path of trying to work in data protection/privacy: my challenges and my failures, the reasons for doing what I did, and my thoughts during some of the difficult moments and choices. I originally wanted to keep updating this for years until I hit a specific milestone and then release it, but even just writing down everything that happened until now was a lot . So I guess there will be two parts, with the second part coming one day :) I actually used to think I was too stupid for law. I admired law students and was secretly jealous, because I was intrigued but thought I could never do it. In 2017-2018, right as the GDPR was about to come into effect, I saw lots of ads about Data Protection Officers, as they'd be needed soon. Offers for companies to send their employees to 2-week crash courses, or companies emerging whose lawyers could be your external officers. I saw these ads and thought: "Wow, this would be so cool. But no shot that I could actually do it." After all, I was probably bad at law, and I was only working as a trainee at the time. Only the established IT guys would be sent to these courses, right?
I was already very interested in online privacy at that point, but more focused on reducing unnecessary tracking and deleting social media. In the school part of my traineeship, I actually did have to learn some law, and I was surprisingly good at it. Soon after, I found out you don't always need to become a full-on lawyer via the two Staatsexamen (state exams) in Germany - you can also do a Bachelor of Laws (LL.B) . That would fit me more! I chose to enroll in a distance learning university in 2022 to do the degree part-time, as I had finished my traineeship and begun a full-time position at the same place in 2021. At that point, I did it just for me, with no goals of making it relevant for my career in any way. That enabled me to take it slow, with no pressure to finish as quickly as possible or with the best grades. Still, I started my studies with great grades. The degree had some elective courses, and one of them was data protection law. That made me even more interested (daring to dream, and all), and soon after, I decided to take that elective in the Winter semester of 2024/2025 to make sure the field really suited me and to learn more. It went great and cemented my interest and passion for the field. Months prior, I had seen that the same university offered a 1.5-year Advanced Studies degree that would also certify me as a Data Protection Consultant upon completion and enable work as a Data Protection Officer. The problem: to qualify, you needed a finished Bachelor's degree or more. I was almost halfway through mine, but it would still take years, and who knew if I'd truly be able to finish it successfully? So I looked more deeply into it, and on one sub-page, they had sneakily added an exception for people who had no degree but whose work involved data protection law concerns. To prove that, they required a CV signed by you, and a document from your employer confirming it. I shot my shot and asked my (very nice and supportive) boss, and she was on board.
Admittedly, we exaggerated some parts of the work and tried to focus hard on the few things that would fit, like the way I managed the user accounts in our database. I applied for the program, and I got accepted in November 2024! I couldn't be happier. It cost close to 3k, and I paid it off in 10 months (March - December 2025), no interest. It's meant to be completed as two exams a semester, but I ended up grinding hard and finishing it early, taking all 6 exams in one semester (6 months instead of the 1.5y), all while continuing full time work and my part time LL.B. During that program, I was looking to gain more experience and network connections in the field, so I messaged the Data Protection Officer at my place of work. I said I have questions about the field in general and the job itself, and wanted his advice on what people looking to enter the field should do or bring to the table. He accepted gladly, and we kept meeting up like every other month for over a year to discuss things, like questions I had about a specific paragraphs and principles, or how to implement something in practice. He shared a lot of practical tips with me as well as how our workplace had implemented Microsoft or OpenAI products. He also took a lot of interest in the exams I wrote! In July 2025 started volunteering for noyb.eu . I became a Country Reporter first for Germany, later also for Austria. That helped me in multiple ways: I also chose to attend the Beschäftigendatenschutztag 2025 in Munich at the end of October 2025 (just as I had finished the cert) for a similar reason: I wanted to learn from others, show presence and get a feel for how the professionals discuss. I sadly couldn't attend the Datenschutzkonferenz that year. These events are super ultra expensive. Usually, companies cover their employees' fees when sending them there, but... no one was sending me. 
I wasn't hired in any role that would make my employer send me there and cover the costs, so I had to do it on my own. I got a student discount of 50% , which brought it down to a little over 500 Euro. Still expensive for me, though. I didn't live anywhere near Munich, so my lovely wife suggested we stay at my in-laws place near Nuremberg for the week and I take the train to Munich for the two event days. I was ready to put myself out there. I wanted to put all of my experience and credentials so far to good use and learn more, so I started asking for more data protection related tasks at work. Our DPO had become my mentor by that point and would have loved to work with me and could use the help, so I requested that. Unfortunately, leadership was neither wanting to create a new role in his team, nor interested in allowing me to internally transfer there. They saw data protection as an annoying topic and did not want more people working in it. That gave me my first taste on how employers would see me... My boss, meanwhile, tried to involve me in new projects that were tangentially related as best as she could as I kept asking for that, but most things fell through or were not giving me enough to chew on, through no fault of any of us. Unfortunately, my employer struggled with a lot of budget cuts at the time, which didn't help. My job involves a lot of pharmaceutical health data, and it's a part I love about it. So, I decided that as best as I could, I would like to specialize in data protection around health data. A promising niche, as AI and projects combining health databases all around the world could lead to amazing breakthroughs, but needed a lot of safety and oversight. Hopefully, I could combine my experience working for my employer with my extra qualifications. Luckily they had just built up a research data center that would focus on health data, and were searching for a Data Protection Consultant for it. 
Their wishlist was intense: A finished degree in law, IT, and related fields, ideally the state exams. I knew though that our place always gets a very little amount of applications and such lists in job postings are always the ideal candidate, and many places usually have to settle for less. I easily suited the rest of the requirements and the tasks for the job. And why not just try? The worst that could happen is a no. So I applied. First, they extended the application deadline. I already knew it was hopeless by then. Then they rejected me, and re-posted it externally. Even 5 months later, that spot doesn't seem to have been filled, but the listing is gone as well. They'd rather hire no one, than hire me, because of a missing Bachelor/Masters degree or StEx, despite I assume I was already filtered out by HR, because HR doesn't know how to properly judge the qualifications in that field. For many people, law is when you are a full lawyer with both state exams, and that's it (like I used to think!). They don't know much about the LL.B or LL.M, and they sure as hell don't know the extra degree you can get on top of a Bachelor's degree (or on my case, as an exception, if your work qualifies you). What should have qualified me to at least get an interview got me thrown out of the (non-existent) pile even though I was very likely the only applicant. That taught me my first important lesson: Until I clear the arbitrary lines they draw in the sand, I have to bypass HR . A while later, I read a tweet thread by user gabriel1 that was saying the same; especially: "never compete when applying for jobs, there are hundreds of applicants with better grades and universities than you. [...]" "straight to managers, ceo, ppl with incentive for the company to go well. HR people play losers game, they just don't want to make mistakes. if you are bad but are from harvard they can just say "oh he was supposed to be good" and they have an excuse. 
so they'll dislike you" I'd take this to heart. I was also rejected when I applied to another company. I was sure I could at least get an interview. Their requirements were loose and low and I could meet everything perfectly. Instead, I was rejected a week after I submitted my application, so I didn't even make it to further consideration. Reason for that was possibly the fact that I admitted to volunteering at noyb.eu in the motivational letter. That taught me my second valuable lesson: Unless employers directly approach me first and already know about it, I should rethink mentioning my volunteering. I thought it would show passion and knowledge in the field, but what it instead communicated was that I was being an activist . Noyb pushes to hold private companies and corporations accountable, and this is the opposite of what companies hiring privacy professionals actually want. What they want is someone who can make anything happen with a good legal justification. They aren't interested in ethics; they want new tech, especially AI, at any cost as long as their Compliance department finds ways to make the processing compliant with as little costs, obstacles and delay as possible. They were scared I was going to be someone who would delay and veto things. This complaint doesn't make much sense, as DPOs (here in Germany at least) do not have authority to issue any instructions or decide the course of action; all they can do is advise and document what they have advised, so if leadership goes against recommendations, the DPO can't do anything about it. If a DPO does anything else, they're overstepping. What they can do is prove they have said otherwise, and aren't liable for anything. I guess that is either not something leadership knows, or they want the DPOs who wanna enable anything to be their fall guys. It sucks, but I guess in the overall hellscape we are in, it makes sense. 
Gabriel also talked about a personal demo being better than a simple CV and motivational letter, and I thought long and hard about how I'd apply that to my field (as it was much easier to do that with any field that values portfolios, like tech). I couldn't develop a demo of anything in that way; no use recording a video introducing myself and... showing a Data Protection Impact Assessment for Microsoft Teams? It doesn't work like that in my field. What I could come up with instead was: I knew the last point would be slim chances, but I didn't realize how slim until I tried it. The e-mail addresses for DPO's of the companies I messaged were mostly automated and strictly for access requests only. It wasn't for human exchanges. I didn't receive a reply back for the first two I messaged and knew I had to change my approach for the rest of my list; probably find out via LinkedIn or other means who exactly is hired in their Privacy Compliance teams and messaging them directly. I also recognized my disadvantage: I'm not "big" already. No podcast personality, not a panel speaker, not a known author, not a big blogger in that space. My blog isn't hosted on Substack and the interview wouldn't be posted on LinkedIn. All of these things could give the people I reached out to some reassurance about who I am and that they will be featured somewhere "reputable". I still continue trying to make that happen. Other things I have been doing to bruteforce my way in somehow: 1. I submitted an idea to our Idea Management team about implementing data protection coordinators ( "Datenschutzkoordinatoren" ). This is standard practice for other companies, very common, but we don't have them despite our DPO/my mentor approving of it. Leadership doesn't want to, and had rejected the idea 5 years prior. But I had better ammo than the old idea submitter, and with AI in the workplace now, things have shifted massively, warranting a reevaluation of the idea. 
I expect it to be rejected, but at least I tried. This could open the door to me being the data protection coordinator for my department, at least. 2. I indeed created a deletion concept for my team. My mentor/DPO was very happy with it overall when I briefly showed my work in a meeting, and I've sent it to him for more in-depth feedback soon. Once that is done, I will move on to making one for our sub-department, and then maybe one day, the whole department. No one is asking me for this, but I have a lot of unused time at work, want to show my skills, and help fix a severe compliance error my workplace has been in for years now. 3. We had an internal seminar on the " Data Analysis and Real World Interrogation Network " ( DARWIN EU ), which is an EU initiative coordinated by the European Medicines Agency to generate and utilize real world evidence data (RWE) to support the evaluation and supervision of medicines and treatments and enhance decision-making in regulation by drawing on anonymized data from routine healthcare appointments. Many countries' health databases exchanging data, and possibly in the future using AI for better insight, was totally my jam. We got the contact info of the initiative coordinators in case we have questions and ideas, and I sent an e-mail basically asking how to get involved as a privacy professional in the project. No answer so far. 4. We have an AI Coordinators group at work that always welcome new ideas, input and help. During one of their presentations showing the current progress of AI adoption in-house and how well it works (not at all!), I sent in a question asking how employees can get involved in the project in terms of privacy compliance. Didn't receive an answer until the next day, which was worded very nicely, but also showcased our internal rigidity again. 
In other workplaces, employees can be used more fluidly and assigned across departments if it makes sense, but in our case, they sadly had to be very insistent on not being able to get deeply involved in the actual work if not part of that team as official role, aside from submitting ideas. And obviously, the compliance needs were already covered by our DPO/my mentor. What they suggested instead was that I could try developing an internal GPT model focused on privacy compliance. That made me a little mad! I want to work . I want to think . I don't want to train my replacement for a job I don't yet have, but want. I want you to ask me one day! And for now, the way LLMs are, I cannot recommend asking it for legal advice, and I can't train it to be better; the hallucinations are a current fundamental flaw I cannot solve. That's the point where I arrived at another lesson: I while I keep my options open, I'll likely never work for a private company, and instead am better suited for regulatory bodies, NGOs, research, and academia . I have much more fun genuinely diving deep into the law and ethics, writing opinion pieces, maybe even proposals, help with research and papers, etc. than playing doormat for IT guys who want a new toy. 5. Made it a goal to do more case translations and summaries for noyb this year, with at least one case each week on Saturday. I've hit 10 done cases total a few days ago. 6. I have applied to and been accepted into the volunteer pool of The Midas Project , a watchdog nonprofit working to ensure that AI technology is safe and helpful to everyone. They lead strategic initiatives to monitor tech companies and counter corporate propaganda. Their releases have been very informative and have also drawn the attention of OpenAI, who are challenging them legally. You read that right - I sort of doubled down on the volunteering, despite the very real negative consequences. 
I'm not sure yet if I will stay; they only offer Fellowships (= opportunities to volunteer on a project) every couple of months. I'm also noticing a bit of a weird vibe compared to noyb, and I actually have quite a bone to pick with Effective Altruism, which is a big influence on the organization and the people in the space. But I hope I can learn valuable lessons in AI governance, and I'm praying that it is not dominated by people with very grandiose conspiracy theories about AGI. That marks my progress as of the end of January 2026, almost 2 years since I fully plunged into data protection law. Writing all of this out, I realize how much I have managed in that time. It feels simultaneously long and short. I'll have to remember that when I get sad about handing in my Bachelor thesis in 2028 :) Not gonna lie, I have felt crushed and discouraged lately. It sucks when you feel like your true interests, skills and passions don't matter, or are a flaw in others' eyes. The praise I get cannot move the mountains that are seemingly in my way. But it's the year of rejection, so I'll take it. If you have made it this far, thank you! And happy Data Protection Day, which was yesterday. You should read 5 myths about data protection debunked! Reply via email Published 29 Jan, 2026

- I kept up-to-date with current legislation, cases and problems via their newsletters, blog posts, and internal communication, like their interesting presentations during the Country Reporter meetings.
- It gave me a space to connect with like-minded people.
- I was practicing reading case law and writing English legal jargon.
- I could build up a reputation in the space.
- Me being halfway done with the Bachelor degree,
- Having the Advanced Studies Degree,
- Already having worked there for years: no onboarding needed, knowing the organization and its processes well,
- Both my boss and my mentor (our DPO), who would have sung my praises and were named as references in my application,
- Having a document proving I had already attended events in the industry, and
- My volunteer work showing I am passionate, hardworking and always up-to-date on the field.

- Developing missing compliance documents and concepts for my workplace. We didn't have any internal GDPR-compliant deletion concepts ("Löschkonzepte") at all (not house-wide, not department-wide, not even in sub-departments and teams). I cannot show these in a demo to other companies, but they would at least be a sort of portfolio/demo internally.
- Continuing my volunteer work, showcasing it with a list of all my contributions, and making it into noyb.eu's newsletter with a newly translated and summarized court case (as they highlight new decisions there with attribution). That would only attract people who are okay with it.
- Continuing to write about data protection law on the blog, and in a different, more professional way on my other, work-friendly website as well.
- Being open about searching for work online, so people working for fitting companies who read my blog could stumble across it as well.
- Potentially making a LinkedIn, though I have preferred not to so far.
- Pursuing a blog project I had loosely thought about, more seriously: inviting DPOs and other privacy professionals to answer questions in an interview I'd post on my blog.

Andy Bell 1 week ago

It really is the year of the website

I keep talking about it, so I’m finally doing it. You might be looking at my website right now, thinking “this looks a bit basic m8” and you’d be right. That’s because I’m building this website in iterations. The version you see now is the “wireframe” shell version, and there are lots more versions to come. Today, I’ve published the first post of a series on Piccalilli where I redesign and re-build this thing in the open. The hope is that it inspires you to build and maintain your own corner of the internet. I’ve also been (borderline desperately) trying to think of something to write about in 2026. Doing more practical building stuff is the direction I’ve landed on. It links back to what I was talking about in my end-of-year wrap-up, in the Be human and improve your own skills section: There’s been a bit of a culture of “I don’t need to bother doing that because of AI” and let me tell you, from someone who has been doing this stuff for nearly 20 years, that is a dangerous position to put yourself in. No single technology has surpassed the need for personal development and genuine human intelligence. You should always be getting incrementally better at what you do. Now, what I am not saying is that you should be doing work work out of hours. You are not paid enough and, frankly, the industry does not value you enough. Value yourself by investing your time in skills that make you happy and fulfilled. In that section, I also say “make yourself, and maintain a personal website”. I’ve had a website for a long time, but I couldn’t really maintain it anymore because, frankly, I built it with my elbows. The previous iteration served me well, sure, but I want something to learn the new stuff with, to enjoy working on and to embrace the art. Me writing about that as I go is just the cherry on top. I hope you’ll follow along as I do that! You can read the first post in the series here.

iDiallo 1 week ago

Everyone's okay with their AI, just not yours

There's a strange contradiction happening in tech right now. Companies are forcing employees to integrate AI into their workflows, celebrating productivity gains and AI-assisted everything. Yet when job candidates use AI during interviews, they're treated like they've committed career suicide. Every colleague I talk to has a story. The candidate's eyes darting left and right, reading an answer as it generates in real time. The awkward "could you repeat that?" while they discreetly type the question. The unnatural pauses as they wait for ChatGPT to spit out a response over their bandwidth-choked connection. "Wait, are you using AI?" There's no good answer. The jig is up. The interviewer ends the session, logs into Slack to share the story. "Can you believe the nerve of this guy?" Then opens Cursor to check if the AI has finished writing their unit tests. Everyone seems to have their own personal definition of acceptable AI use. If you vibe-code an entire app, it's because you are lazy and unskilled. But use AI for code review and writing tests? You are smart and efficient. If you use AI to remove photo backgrounds or clean up artifacts, that's just good editing. But generating an image for your blog post? You are stealing from hardworking artists. You are a fraud! Use AI as a writing assistant and you're a monster, but using it to generate documentation from your code is indispensable. We're all drawing lines in the sand, conveniently placing ourselves on the "legitimate use" side while everyone else is being lazy or dishonest. People actively block AI agents from scraping their websites while simultaneously training their own models on similar data. Developers praise LLMs for making them 10x more productive, then scoff at candidates who might use the same tools to prepare for, or even respond during, an interview. When it comes to job interviews, here is my take: using AI in an interview is an attempt at deception.
An interview is supposed to assess your capabilities, not ChatGPT's. If you ace the interview with AI assistance, why would we hire you when we could just subscribe to that LLM for a fraction of the cost? Regular AI use can atrophy your thinking skills. You become like an npm package that depends on left-pad: when that dependency disappears or becomes unavailable, you're useless. The job market isn't favoring new graduates right now. But this is an opportunity to differentiate yourself with real cognitive skills. The ability to think, reason, and solve problems without a crutch is becoming increasingly rare and valuable. It's funny how we've created a work culture where AI dependence is encouraged post-hire but penalized pre-hire. I call it JAI: Job-Augmented Intelligence, where the job itself shapes which uses of AI are acceptable. We have to make up our minds. Either AI assistance is cheating, or it's a legitimate tool on the job. We can't have it both ways. We can't celebrate our own AI shortcuts while condemning others for theirs. Until we figure that out, we're stuck in this weird middle ground where everyone is okay with their own particular use of AI because they're "not really cheating." But somehow, everyone else is.
