Posts in Agile (10 found)
A Working Library 1 month ago

Everything

It’s common to talk about Taylorism—the practice of so-called “scientific management” that’s most known for its reviled use of stopwatches—as if it were a thing of the past, as if we had somehow moved beyond it. But like a lot of coercive practices, Taylorism didn’t so much retire as rebrand. As workers rebelled against oppressive bureaucracies, the postindustrial work ethic shifted from work as a moral imperative to work as self-realization in a process that Nikil Saval grimly calls “self-Taylorization.” In essence, the timekeeper was internalized.

Whereas, for Taylorism, the self-organization, ingenuity, and creativity of the workers were to be combated as the source of all dangers of rebellion and disorder, for Toyotism these things were a resource to be developed and exploited. The total and entirely repressive domination of the worker’s personality was to be replaced by the total mobilization of that personality.

Toyotism—contrasted with Fordism, which adopted Taylor’s model—involved a practice where small teams of people would manage a limited amount of work-in-progress through communication with teams up- and downstream of their work. Versions of it were subsequently adopted in “agile” software development and have become so engrained in product organizations that they are often barely remarked upon; it’s just how things are done. But as with most just-so stories, it’s worth considering how it came to be—and who benefits from the way things are.

[The head of training at Volkswagen] first explains that “transferring entrepreneurial skills to the shopfloor” makes it possible “largely to eliminate the antagonisms between labor and capital…If the work teams have great independence to plan, carry out, and monitor processes, material flows, staffing, and skills…then you have a large enterprise made up of independent small entrepreneurs, and that constitutes a cultural revolution.”

That is, by offering “elite” status to some workers, and building a system in which they monitored their own work in excruciating detail, Toyota could keep the administrators in their offices while remaining confident that the same surveillance, operational focus, and company-first perspective would be maintained—this time by the workers themselves. Giving some workers permission to perform as entrepreneurs just meant they worked harder for the company even as they became convinced they were working for themselves. Men in stopwatches are unnecessary when the worker’s own conscience will do the job. And of course, that “elite” status is, by definition, scarce. It depends on other workers continuing to toil in the old, Taylorist ways, performatively monitored and repressed. (Gorz points out that at the time he was writing about Toyota, the workers organized under the entrepreneurial model represented a mere 10-15% of the workforce; the rest were subcontractors, who were “increasingly Taylorized” as they moved down the ladder.) And, more to the point, it depends on a system in which fewer and fewer people are employed at all.

It could hardly be more clearly stated that the workers taken in by the big companies are a small “elite,” not because they have higher levels of skill, but because they have been chosen from a mass of equally able individuals in such a way as to perpetuate the work ethic in an economic context in which work is objectively losing its “centrality”: the economy has less and less need of it.
The passion for, devotion to, and identification with work would be diminishing if everyone were able to work less and less. It is economically more advantageous to concentrate the small amount of necessary work in the hands of a few, who will be imbued with the sense of being a deservedly privileged elite by virtue of the eagerness which distinguishes them from the “losers.” Technically, there really is nothing to prevent the firm from sharing out the work between a larger number of people who would work only 20 hours a week. But then those people would not have the “correct” attitude to work which consists in regarding themselves as small entrepreneurs turning their knowledge capital to good effect. So the firm “largely…eliminates the antagonisms between work and labor” for the stable core of its elite workers and shifts those antagonisms outside its field of vision, to the peripheral, insecure, or unemployed workers. Post-Fordism produces its elite by producing unemployment; the latter is the precondition for the former. The “social utility” of the elite cannot, for that reason, be assessed solely from the angle of the use-value of its production or the “service rendered to users.” Its members can no longer believe themselves useful in a general way, since they produce wealth and unemployment in the self-same act. The greater their productivity and eagerness for work, the greater also will be unemployment, poverty, inequality, social marginalization, and the rate of profit. The more they identify with work and with their company’s successes, the more they contribute to producing and reproducing the conditions of their own subjection, to intensifying the competition between firms, and hence to making the battle for productivity the more lethal, the threat to everyone’s employment—including their own—the more menacing, and the domination of capital over workers and society the more irresistible.

That is, the existence of an elite workforce—whether it’s workers managing a kanban process in a Toyota factory, or workers driving agile development at a product company—is predicated on an underclass of people who either work in less sustainable conditions or else are proscribed from work at all. The former has come into some awareness in recent years, as workers at Google and elsewhere have organized not only well-paid engineers and designers but also support staff and contractors who are paid in a year what an engineer makes in a month. Those very highly-paid engineering roles simply couldn’t exist without the people toiling in the support mines or tagging text and images for AI training—often dreadful work that’s barely remunerated at all. But what Gorz is calling out here is that it isn’t only bad work that the elite work depends on—it’s also the absence of work. The “disruption” that the tech industry has so long prided itself on is just another word for “unemployment.” But there’s also a gesture here towards another way: the less that elite identifies with their work and with their companies’ successes, the more they admit of their own insecurity and of their collaboration in creating it, the less menacing that threat becomes, the more space is opened up for different futures.

I am not saying, however, that post-Fordist workers cannot or ought not to identify with what they do.
I am saying that what they do cannot and should not be reduced solely to the immediately productive work they accomplish, irrespective of the consequences and mediate effects which it engenders in the social environment. I say, therefore, that they must identify with everything they do, that they must make their work their own and assume responsibility for it as subjects, not excluding from this the consequences it produces in the social field. I say that they ought to be the subjects of—and also the actors in—the abolition of work, the abolition of employment, the abolition of wage labor, instead of abandoning all these macroeconomic and macro-social dimensions of their productive activity to market forces and capital. They ought, therefore, to make the redistribution of work, the diminution of its intensity, the reduction of working hours, the self-management of the hours and pace of work, and the guarantee of purchasing power demands inherent in the meaning of their work.

Abolition is both destruction and reconstruction; in abolishing work, you become able to create it anew. For too long, “work” has been synonymous with waged work, with the work we long for an escape from. And everything else becomes the “life” that stands in opposition to work, as if work were somehow an equal to the life it sucks dry. But what if work was all the change we make in the world, with all the people we make that change with—colleagues and comrades, neighbors and friends, kin in all the kingdoms? What if work wasn’t only what we do at work, but all the ways that work moves out into the world, and all the work we do elsewhere—whether in our homes or in our streets? What if our work is all the things we give a fuck about? What becomes possible then?


‘Labs’ teams, Skunkworks, and Special Projects: Beware

In a previous post , I talked about balancing ‘creating work’ and ‘destroying work’ such that the backlog does not become a huge mental burden on everyone to the point that it gives the impression that “the dev team is slow” or “we’re not making enough progress”. These are common themes at most of the places I’ve worked. One typical reaction to that is to pick a particular project and try to run it differently than “business as usual”. The projects chosen are often something other than just an incremental feature for the existing project. It might be an idea for an entirely new product or business unit or some grand re-imagining of the existing product. Often a founder is nostalgic for the days when they were coding entire features in a matter of days. Unencumbered by meetings, customers, existing code, internal stakeholders, compliance concerns, and everything else that comes along for the ride, they truly could crank out product in a way they’ve never seen all these expensive devs do since. To try to recapture some of that velocity that was ‘lost’ and get back to ‘the scrappy start-up days’, someone will inevitably propose cordoning off a special squad that can behave like they used to ‘back in the basement/garage’. They go by many names: Skunkworks , Tiger-Team, Startup-within-a-Startup, “Project Mayhem” and the like. I’m here to rain on the parade and tell you why this is almost always going to go poorly. If you’re an engineering manager or product manager asked to participate you should almost always push back against this type of idea. There is one exception though, which I will get to. The typical setup for a team like this:

underlap 1 month ago

Software convergence

The fact that such limits turn out to be members of the semantic domain is one of the pleasing results of denotational semantics. That kind of convergence is all very well, but it’s not what I had in mind. I was more interested in code which converges, to some kind of limit, as it is developed over time. The limit could be a specification of some kind, probably formal. But how would we measure the distance of code from the specification? How about the number of tests passing? This seems to make two assumptions:

1. Each test really does reflect part of the specification.
2. The more distinct tests there are, the more closely the whole set of tests reflects the specification.

The second assumption, as stated, is clearly false unless the notion of “distinct tests” is firmed up. Perhaps we could define two tests to be distinct if it is possible to write a piece of code which passes one of the tests, but not the other. There’s still a gap: it’s possible to write many tests, but still not test some part of the specification. Let’s assume we can always discover untested gaps and fill them in with more tests.

With this notion of a potentially growing series of tests, how would we actually go about developing convergent software? The key is deciding which tests should pass. This can be done en masse, a classic example being when there is a Compliance Test Suite (CTS) that needs to pass. In that case, the number/percentage of tests of the CTS passing is a good measure of the convergence of the code to the CTS requirements. But often, especially with an agile development process, the full set of tests is not known ahead of time. So the approach there is to spot an untested gap, write some (failing) tests to cover the gap, make those (and any previously existing) tests pass, and then look for another gap, and so on. The number of passing tests should increase monotonically, but unfortunately, there is no concept of “done”, like there is when a CTS is available. Essentially, with an agile process, there could be many possible specifications, and the process of making more tests pass simply reduces the number of possible specifications remaining.

I’m still mulling over the notion of software convergence. I’m interested in any ideas you may have. One nice property of convergent software should be that releases are backward compatible. Or, I suppose, if tests are changed so that backward-incompatible behaviour is introduced, that’s the time to bump the major version of the next release and warn the users.

I’m grateful to some good friends for giving me tips on LaTeX markup.[2] In particular, \varepsilon produces a “curly” epsilon: ε.

[1] Tom M. Apostol, Mathematical Analysis, 2nd ed., Addison-Wesley, 1977.
[2] I’m actually using KaTeX, but it’s very similar to LaTeX.
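A minimal sketch of the kind of measure described above (mine, not the post's): treat the fraction of a fixed compliance test suite (CTS) that passes as the code's distance from the specification, and check that passing tests only accumulate across releases. The test IDs and release history are invented.

```python
# Minimal sketch: convergence of code toward a Compliance Test Suite (CTS).
# Each release is represented by the set of CTS test IDs it passes; the
# "distance" from the specification is the fraction of the suite still failing.

def distance_from_spec(passing: set[str], cts: set[str]) -> float:
    """1.0 means nothing in the CTS passes; 0.0 means the full CTS passes."""
    return 1.0 - len(passing & cts) / len(cts)

def is_converging(history: list[set[str]], cts: set[str]) -> bool:
    """True if the count of passing CTS tests never decreases across releases."""
    counts = [len(release & cts) for release in history]
    return all(a <= b for a, b in zip(counts, counts[1:]))

# Invented example: three releases measured against a five-test CTS.
cts = {"t1", "t2", "t3", "t4", "t5"}
history = [{"t1"}, {"t1", "t2", "t3"}, {"t1", "t2", "t3", "t4"}]
print([distance_from_spec(release, cts) for release in history])  # [0.8, 0.4, 0.2]
print(is_converging(history, cts))  # True
```

The agile case in the post corresponds to the CTS not being fixed up front: the suite itself grows as gaps are found, so the distance has no final zero to converge to, only a shrinking set of specifications still consistent with the passing tests.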

Martin Fowler 2 months ago

Team OKRs in Action

OKRs have become a popular way to connect strategy with execution in large organizations. But when they are set in a top‑down cascade, they often lose their meaning. Teams receive objectives they didn’t help create, and the result is weak commitment and little real change. Paulo Caroli describes how high‑performing teams can work in another way: they define their own objectives, in an organization that uses a collaborative process to align the team’s OKRs with the broader strategy. With these Team OKRs in place, teams gain a shared purpose, and the OKRs become the basis for a regular cycle of planning, check‑ins, and retrospectives.

Martin Fowler 2 months ago

Impact Intelligence, addressing common objections

Sriram Narayan concludes his article on impact intelligence by addressing five common objections to the activity, including slowing down, lack of agility and collaboration, and the unpredictability of innovation.

André Arko 3 months ago

You should delete tests

We’ve had decades of thought leadership around testing, especially coming from holistic development philosophies like Agile, TDD, and BDD. After all that time and several supposedly superseding movements, the developers I talk to seem to have developed a folk wisdom around tests. That consensus seems to boil down to simple but mostly helpful axioms, like “include tests for your changes” and “write a new test when you fix a bug to prevent regressions”. Unfortunately, one of those consensus beliefs seems to be “it is blasphemy to delete a test”, and that belief is not just wrong but actively harmful. Let’s talk about why you should delete tests. To know why we should delete tests, let’s start with why we write tests in the first place. Why do we write tests? At the surface level, it’s to see if our program works the way we expect. But that doesn’t explain why we would write automated tests rather than simply run our program and observe if it works. If you’ve ever tried to work on a project with no tests, I’m sure you’ve experienced the sinking sensation of backing yourself into a corner over time. The longer the project runs, the worse it gets, and eventually every possible change includes stressfully wondering if you broke something, wondering what you missed, and frantically deploying fix after revert after fix after revert as fast as possible because each frantic fix broke something else.
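As a concrete illustration of one of those folk axioms ("write a new test when you fix a bug to prevent regressions"), here is a minimal, invented sketch; the function and the bug are hypothetical, not from the post:

```python
# Minimal sketch of a regression test: after fixing a hypothetical bug where
# discounts over 100% produced negative prices, pin the fix with a test so the
# bug can't silently come back.
import unittest

def discounted_price(price: float, discount_pct: float) -> float:
    """Apply a percentage discount, clamping it to the 0-100% range (the fix)."""
    discount_pct = max(0.0, min(discount_pct, 100.0))
    return price * (1 - discount_pct / 100)

class DiscountRegressionTest(unittest.TestCase):
    def test_discount_over_100_percent_does_not_go_negative(self):
        # Regression test for the hypothetical bug: a 120% discount used to
        # return a negative price instead of clamping to zero.
        self.assertEqual(discounted_price(50.0, 120.0), 0.0)

if __name__ == "__main__":
    unittest.main()
```

The essay's larger point is about what happens after tests like this accumulate, which is exactly where the taboo against deleting them starts to hurt.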

Jefferson Heard 4 months ago

An unbroken chain of little waterfalls

The Buffalo River, nestled in the Boston Mountains of Arkansas, is one of the best floats in the country. It's so iconic that it's been protected as a National River, the first of its kind. Clear water, tinting to green-blue in the depths. White limestone rocks and pebbles studded with the fossil remnants of a shallow Paleozoic sea. Gentle rapids that cascade your boat downriver, each one a little waterfall that is so smooth your canoe never feels out of your control. A float from the put-in at Tyler Bend to the Gilbert General Store takes about 4 hours if you're looking to enjoy yourself along the way. But this isn't a post about floating down a river. It's a post about Agile, Waterfall, and the challenge of estimating time and complexity as a Product and Engineering leader. But it's also a little bit about rivers, journeys, and Flow.

There are countless methods software and product teams use to estimate how long it will take them to ship a feature or complete a project, precisely because all of them are so bad. I suppose Point Poker sells those silly card decks, so it makes someone money. But Fibonacci points, T-shirt sizes, point poker, time estimates, and all the other idiosyncratic things people resort to under pressure to perform better at estimation than the last late shipment are, well... pointless. If you ship consistently on time, when has your point-poker (or whatever) exercise ever told you something you didn't already guess intuitively? If you ship consistently late or early, and you go over your estimations every time, when do you get anything but "reasonable excuses that we couldn't have known better" for why the estimate was so far off?

There's a book my old colleague and mentor James Coffos at Teamworks gave me when I was trying to ship our re-engineered product stack on time: Actionable Agile Metrics for Predictability: An Introduction, by Daniel S. Vacanti. Despite its longwinded and dull title, this is probably the best, shortest book I've ever read on figuring out how to recognize speedups and slowdowns, how to estimate correctly, and how to intervene when the float down the river from concept to feature snags on the shoreline.

First off, humans are tremendously bad at estimating time and complexity, and there's no evidence they can get much better when the approach is "narrative guessing," i.e. reading a description of a feature or ticket and giving an estimation based on your understanding of what's written. It's far better to estimate completion of a task based on past performance. Start with a well-written epic of fixed scope. Break it out into tickets with engineers and allow those tickets to be broken into subtasks. (I'll tell you in a minute how to do all that.) Then, at the end of each sprint, measure: By "epic of fixed scope" I don't mean that the tickets are static. They can be added to and designs can be reworked, but the outcome should remain steady. Over time you're going to build a picture of what a good project looks like and what a troubled one looks like. From these measurements above you want to understand how fast on average your team moves through tickets vs. the amount of "scope creep" and "unexpected complexity" they discover per sprint. You won't believe me until you measure it for a while, but regardless of how they estimate it, your teams are going to move through tickets at roughly the same rate every month.
The canoe-crushing boulders on your project are not velocity, but scope change, creep, and undiscovered complexity. There's some wiggle room on these rules, but: If this doesn't happen then the feature definition is incomplete or the devs lack clarity on how to build what's being asked. Further attempts to develop against it without refinement will result in wasted work. Schedule a retro and re-evaluate the scope and tasks before going forward again. If this sounds like Waterfall to you, understand that you cannot deliver reliably if you don't know what you're building . I'm not saying that you build a whole new product with waterfall process. I am saying that the most agile way to develop is to navigate a chain of tiny waterfalls that become clearer as they're approached. Rapids, if you will. An epic at a time. This of course puts a hard outline around how an epic can be constructed. It has to describe a precisely known modification. It covers maybe six weeks, not six months of work. It also can't be a title and a bunch of stories which themselves are title-only. It has to have written, fixed(-ish) requirements and end-goals when it's accepted as dev-ready. If you're building something bigger, it's more of an "initiative" and covers multiple well-understood modifications to achieve a larger goal in an agile way. A well written epic distinguishes between the Product function and the Engineering and Design functions. Typically, I think of product concerns as user stories, acceptance requirements, stretch goals, milestones, and releases. Organize what's being built in the most declarative (as opposed to procedural) terms possible. You want to give your engineers and designers freedom to come up with a solution that fits the experience of using the product, and you can't do that if you're telling them how to do it. I'm going to go with an example. I don't want to rehash how to do basic product research but your decently written epic is going to focus on a class of users and a problem they want to solve. This isn't the only way to write an epic, but it mirrors how entrepreneurs think. There are need-to-haves and nice-to-haves, and follow-ons that they deem natural, but aren't included in the ACs. Each bulleted "user story" or AC is written as an outcome that a class of user needs to achieve using your software. Designers and Engineers should talk with the person guiding the product definition (could be a PM, could be the CEO, could be the Founding Engineer) and clarify the requirements and fill in gaps before going off and creating designs and engineering artifacts. For example, missing in the above but probably necessary is " PF Coaches need to be able to cancel or reschedule an appointment individually or for a block of time / days." and "PF Coaches need to be notified if a conflict arises with a client from an incoming appointment on their calendar." A good designer or engineer will catch that requirement as missing and add it. Designers will take these ACs and turn them into full fledged user stories with screen or dialog designs. Engineers will say "oh yeah, we already have a recurrence dialog, so don't design a new one" and debate with the designers on how to get into the scheduler in a natural way. Then they come back with the person guiding the product definition and go over their approach. That approach should be complete in the sense that it covers the project from breaking ground through delivery. 
It's not just the first sprint's worth of tasks but an outline of how the project should go. Sure, more will get added along the way, but Design and Engineering should know how they're getting there. Also, if the product person takes the design to a customer call or two, the lead designer and engineer on the team should be present on that call, because they're the ones that need to get better over time at intuiting how a customer wants to use their software. Once everyone agrees that the solution is workable, it's tasked out: And so on. If you're thinking "This looks like a lot of work before engineers start building my product," then one of two things is true: I cannot tell you the number of times I've seen "seat-of-our-pants Agile" result in dead features or weeks or months of being stuck at "80% done" on a project. If the person doing the product definition is incentivized to commit to the work they're pushing towards engineering and product, and they're held accountable for late changes, that person is going to get better at their job quickly. If the engineering and design functions are given creative control over a solution, then they're ultimately held accountable for coming up with a good solution for customers, making them more market and customer focused. When the above is practiced well, it encourages Flow in your team. Product is incentivized correctly to keep the process moving. Design and Engineering are given the ability to work to their strengths and become better at understanding your market and customers.

Here are some "anti-patterns" I've seen that cause projects to drag out.

The One Liner: This makes the design and engineering functions guess at what the product is. The worst part of a one-liner is that it's usually deceptively straightforward. It implies that the product person thinks that the result is obvious enough that engineering and design should just know what to build. Realistically they're going to think of about 40% of the actual ACs. The "stories" that get added will be a mishmash of engineering tasks, design artifacts, and guesses about what the user wants. The result will be that the PM goes back and forth with customers over the half-finished feature, coming back to the building team with "we actually need this change." If it was a 6-week project done right, it's now a 12-week project that creates tension and bad blood between engineers, designers, product, and customers.

Product Negotiates Against Itself: If the product manager is not also the engineer and the designer, then they do not know what constitutes complexity for those functions. If they don't understand this limitation, the temptation is to "offer" engineering and design "concessions" that actually fail to reduce complexity at all and in many cases increase it and make for a worse customer experience at the same time. For our above example, these kinds of ACs get added by product management before the feature ever reaches an engineer or designer: From the product manager's perspective, they've reduced the number of days and the time window that need to be considered, made it so you don't have to handle anonymous scheduling, and you've reduced the number of channels a message has to go through.
From Design and Engineering's point of view, these are requirements, not concessions, and well, now: Instead of cutting scope, the product manager just doubled it and created a series of follow-on tickets that will get "mopped up" after the initial release when the coaches complain that they can't do things like schedule evening appointments, and when clients demand a "lite" experience that doesn't require downloading the app.

Prescriptive instead of Descriptive: Here we have no user-stories. We have a prescription by Product of what they want built. It might come in the form of a slide deck or Figma they sketched designs on. It might come in the form of aberrant "user stories" that are actually written as ersatz engineering and design tasks. But however the work is presented, the product manager has done Engineering and Design's jobs for them, albeit badly, setting up a chain of painful and slow negotiations where Design and Engineering suggest changes without understanding the underlying desires of the user they're building for. Now your builders are building for the product manager rather than the customer. The end-product will inevitably be incomplete and buggy, because the builders are playing Telephone with product management and customers.

Focus your teams on laying out the course for an epic in the first sprint of dev work. A particular story may be unworkable or a design might need to be completely re-thought. Ideally that would've been found before the canoe was in the water, but the next best time is the present. You want to reduce the number of items that are reworked or added each day as quickly as possible, because hitting that fixed scope is going to be what accelerates you to the finish line. For the product manager, an MVP contains a minimum complete set of user-stories that improve a user's experience over their baseline. "Cutting scope" at the product level is an illusion based on misunderstanding the product definition vs. its implementation. Engineers and designers will outline their approach and may ask for concessions that reduce the implementation time, but the actual scope of work remains the same.

Not every team is going to go through tickets at the same speed. Not every team is going to dial in the work to be done at the same speed or add engineering tasks at the same rate. Each major feature in your products is at a different level of maturity and complexity, approaching a different problem for a different set of users. They're going to have different profiles and that's fine. The goal with your measurements is to establish a baseline per team or per product, not to measure the delta between your team and an ideal or compare them to each other. If a team suddenly starts doing worse than it has before on fixing the scope or it starts adding engineering subtasks at a late stage in the game, you have a way to say "This isn't normal, what's wrong?" and improve.

For my money, the best unit of work to measure is the epic, but there are things that don't fit in epics. Drop-everything bugs come in. Vendors you integrate with change their products. Tooling and library dependencies age. Marketing updates branding. Sales is pushing into a new market and needs a new language pack. These tasks make up a sort of base cost of operations and maintenance. You can categorize them differently and measure bug rates and so forth, but in the end what you want to know is how much of an average month or sprint or quarter is taken up by O&M work vs. developmental work.
I've divided this up various ways over the years, but I've found it really doesn't matter how you divide it up. Over-measuring doesn't clear up the signal. If there is an uptick in how much time is being spent on this by a team or on a particular product, dig in and figure out why and whether there's an opportunity to bring it down to or below baseline. It could be tech debt. It could be that a vendor has become unreliable. It could be a communications or employee engagement breakdown.

Once you have baselines you can make a prediction for new work. For our example above: We just scoped out giving personal fitness coaches a way to schedule with their clients, and we have 32 tasks. That means that at the end of the first sprint we should have 51 total tasks, and we should expect about 65-70 tasks in the epic by the time it's completed and shipped. That's about 6 weeks' worth of work to get it to UAT using the lower numbers, and accounting for the O&M overhead. You can use that in your estimates to others, or you can build in some wiggle room, but keep in mind that projects usually take as long as you give them. You can even use statistical methods to build that wiggle room, and the more data you have, the better those estimates will be.

I know that's a lot, so I want to summarize the main points and give you a Start-Stop-Continue. Predicting completion based on past performance. Organize work into epics. Epics are populated with user stories that describe a good user outcome. Then engineering and design work with the people defining the product (including customers where appropriate) to determine an approach and an initial set of development tasks that goes from breaking ground to shipped product. Once it's agreed upon, it goes into development and while the development tasks may change, the outcome should not. Once you have a baseline it's easy to provide completion estimates to people outside the team. It's also easy to figure out if a project is on or off track, to proactively communicate that, including how far off track it is. And you have the information to dig in with the right people on the team if that happens. Stop doing point-poker and all forms of "narrative guessing" in creating development estimates. Stop letting development start on an idea that's not ready. Stop writing user-stories that are actually development tasks. Stop product management from "people pleasing" and pre-emptively negotiating against itself with the development team. Continue to provide completion estimates, but better ones.

The number of tickets and subtasks completed. The number of changes to the scope of a story or epic e.g. the end-goal, requirements, and user-stories. The number of new tickets and subtasks added. Changes to the end-goal or core requirements of a story or epic should be done by the time devs accept it and begin work. From there, tickets and subtasks should stop being added within a sprint or two. You're a tiny shop where everyone knows what they need to build and this is too heavy-handed a practice. This article can wait until you're bigger. Consider this up-front cost vs. the cost of wasting engineering and design days and weeks needing to redo work based on unforeseen changes. We can't use our existing calendar components because of the restricted schedule. The sign-up flow has to account for a new user sign up happening during the scheduling workflow. Redirects and navigation need to be updated in the mobile and web apps.
The user's experience is made significantly worse because they have to complete an irrelevant task in the course of doing the thing they came to do. Our message provider Twilio already provides SMS, push, and email as a single notification package, so now we write code into the system that allows us to ignore the user's notification preferences and only send SMS. Now the user is irritated because every other message from your app comes according to their preference but this one, new, "buggy" feature they used.

The calendar team clears 15-25 tasks per week. The number of implementation tasks will grow from the initial dev-ready set by an average of 60% in the first sprint, 20% in the second sprint, and then fall off. The calendar team spends 20% of its task clearance on O&M.

Changes to the outcome (user stories or overall goal) made after development starts. The rate of development and design tasks added over time. The rate development tasks are cleared overall. The average number of "overhead" tasks within that.
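As a rough illustration of the arithmetic in the example above, using the calendar-team baselines just listed (15-25 tasks cleared per week, roughly 60% and then 20% task growth after the dev-ready set, about 20% of clearance spent on O&M), here is a minimal sketch; the function name and rounding are mine, not the post's:

```python
# Minimal sketch of baseline-driven epic forecasting, using the calendar-team
# baselines from the post: 15-25 tasks cleared per week, ~60% then ~20% growth
# in task count after the dev-ready set, and ~20% of clearance spent on O&M.

def forecast_epic(initial_tasks: int, growth_rates: list[float],
                  weekly_clearance: tuple[int, int], om_share: float) -> dict:
    # Project the total task count as scope grows sprint over sprint.
    total = float(initial_tasks)
    for rate in growth_rates:
        total *= 1 + rate
    # Only the non-O&M share of weekly clearance goes to the epic's tasks.
    slow, fast = weekly_clearance
    dev_slow, dev_fast = slow * (1 - om_share), fast * (1 - om_share)
    return {
        "projected_tasks": round(total),
        "weeks_at_slow_end": round(total / dev_slow, 1),
        "weeks_at_fast_end": round(total / dev_fast, 1),
    }

# The post's example: 32 dev-ready tasks, 60% growth in sprint one, 20% in sprint two.
print(forecast_epic(32, [0.60, 0.20], (15, 25), 0.20))
# {'projected_tasks': 61, 'weeks_at_slow_end': 5.1, 'weeks_at_fast_end': 3.1}
```

That lands in the same ballpark as the post's figures (51 tasks after the first sprint, 65-70 by ship, about 6 weeks to UAT at the slow end) once you allow for a trickle of additions after the second sprint.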

Jason Fried 6 months ago

Doing what you think, not what you thought

Whenever I talk about working in real-time, making decisions as you go, figuring things out now rather than before, I get a question like this: "If you don't have a backlog, or deep sets of prioritized, ranked items, how do you decide what to do next?" My answer: The same way you did when you made your list. You make decisions.

Jason Fried 1 year ago

Appetites instead of estimates

The problem with software estimates is that they're both entirely right and entirely wrong. Yes, there's a 3 week version of something. And a 6 week version. And a 4 month version. And a 12 month version. That's correct. Yet, you'll almost always be wrong whichever you pick. Because estimates aren't walls — they're windows. Too easy to open and climb through to the next one.

Ludicity 1 year ago

Tossed Salads And Scrumbled Eggs

With the decision to focus on our consultancy full-time in 2025, my time as an employee draws to a close. My attitude has become that of an entomologist observing a strange insect which can miraculously diagnose issues, but only if the diagnosis is "you weren't Agile enough". My rage has quickly morphed into relief because this is, broadly speaking, the competition. 'A beetle clock?' she said. She had turned away from the glass dome. 'Oh, er, yes... The Hershebian lawyer beetle has a very consistent daily routine,' said Jeremy. 'I, er, only keep it for, um, interest.' While our team is now blessedly free of both the madness of corporate dysfunction and the grotesque world of VC-funded enterprise, we must still interface with such organizations, ideally in as pleasant a manner as is possible for both parties. But there is still one question from my soon-to-be old life that bears pondering, which I must understand if I am to overcome its terrible implications. What the fuck is going on with all those dweebs talking about Scrum all day? I would rather cover myself in paper cuts and jump into a pool of lemon juice than attend one more standup where Amateur Alice and Blundering Bob pat each other on the back for doing absolutely fucking nothing... except using the learning stipend to get "Scrum Master Certified" on LinkedIn or whatever. — A reader You may be surprised to hear, given my previous writing , that I am more-or-less ambivalent about the specifics of Scrum. The inevitable protestations that "Scrum works well for my team" whenever it comes up are both tedious and very much beside the point. The reason that these protestations are tedious is that, in the environment we find ourselves navigating, these are meaningless statements without context. Most people in management roles are executing the Agile vision in totally incompetent fashion, usually conflating Scrum with Agile. They also think that it's going just swimmingly, when a more accurate characterization would be drowningly. Given that you do not know who you are speaking to over the internet, whether they are competent engineers or self-proclaimed thought leaders, any statement about Agile "working for my team" does not convey much information, in the same way that someone proclaiming that they are totally not guilty is generally an insufficient defense in court. The reason that they are beside the point is that the specifics of Scrum are much, much less interesting than what we can infer from the malformed version of the practice that we see spreading throughout the industry. I believe there are issues with Scrum, but those issues simply do not explain the breathtaking dysfunction in the industry writ large. Instead I believe that Scrum and the assorted mutations that it has acquired simply reflect a broader lack of understanding of the systems that drive knowledge work, and the industry has simply adopted the methodology that slots most neatly into our most widely-held misconceptions. As you can see, I am not a data engineer like yourself, but we share the deep belief that Scrum is complete and utter bullcrap. — A reader When I first entered the industry, I was at a large institution that had recently decided to become more Agile. It is worth taking the time to explain what this means for the non-technicians in the audience, both so that they can follow what is going on, and so that the technicians here can develop an appreciation for how fucking nuts this all sounds when you explain it to someone with some distance. 
Most of us are so deeply immersed in this lunacy that we no longer have full context on how bizarre this all is. To begin with, "Agile" is a term for a massive industry around productivity related to software engineering. Astute adults, with no further context, will see the phrase "industry around productivity" and become appropriately alarmed. The industry is replete with Agile consultants, Agile coaches, Agile gurus, and Agile thought leaders. Note that these are all the same thing at different points on the narcissism axis. Agile is actually a philosophy with no concrete implementation details, so there are management methodologies that claim to be inspired by that broader philosophy. The most popular one is called Scrum. As with any project management methodology, the dream goal with Scrum is for teams to work more quickly, to respond to changes in the business more rapidly, and to provide reliable estimates so that projects do not end up in dependency hell. This is typically accomplished through Jira. All-powerful Jira! All-knowing Jira! What is this miraculous Jira? It's a website that simulates a board of sticky notes! That's it, that's the whole thing. When something needs to be done, you put it in there and communicate on the card. Well, all right, that doesn't sound so bad yet. In any case, this is paired with a meeting that runs every morning, called a Stand-Up. It is supposed to run for approximately ten minutes, as one would expect for a meeting that's going to happen every day . Instead, every team I've seen running Scrum has this meeting go on for an hour. Yikes, yes, daily one hour meeting. And since orthodoxy in the modern business world is that a "silo" is bad 1 , many people work on more than one team, so they attend two one hour meetings per day . That is a full 25% of an organization's total attention dedicated to the same meeting every day . What on earth are you doing in daily one hour meetings? Well, we discuss the cards. Wait, I thought the whole point of Jira was so that all your notes are on the electronic cards? You're asking too many questions, heretic . Guards, seize them! Of course, while this is usually enough to provoke complete confusion when explained to people with enough distance from the field to retain their grasp on common sense, it gets worse. Prepare yourself for a brain-frying. You typically don't just do work as it needs doing. In an effort to keep track of the team's commitments as time goes on, the team commits to Sprints , which basically means that you commit about two weeks worth of cards to the board, then only work on those cards on pain of haranguing. Sprints are usually arranged back-to-back with no breaks, and "sprinting" nonstop throughout the year is obviously a totally healthy choice of words which has definitely never driven anyone to burnout. But to keep track of how much work is in each card, there is usually another meeting called Backlog Grooming, where the team sits around and estimates how much time each card is going to take. This is done by assigning Story Points to cards. What is a Story Point? Why, it's a number that is meant to represent complexity rather than time, because we know in the software world that time estimates are notoriously unreliable. 
To make things even simpler , most teams actually still use them to mean time, enough so that there are all sorts of articles out here where people desperately try to explain to professionals in the industry that they shouldn't use the phrase "Story Points" incorrectly, even though knowing what one of the core phrases in the methodology means should be a given. Okay, you're with me so far, right? Scrum is a project management methodology based on Agile, where you run daily Stand-Ups to reflect on how your Sprint is going, and the progress of your Sprint is measured by the number of Story Points you've completed in your cards, which may or may not be hours. Fuck, wait, did I say cards? There are no cards, there are Stories and Tasks, and a long sequence of Stories and Tasks contributes to an Epic... wait, did I not explain what an Epic is? An Epic usually translates to some broader commitment to the business. Sorry, sorry, we'll try again. So you do the Stand-Ups to evaluate how many Story Points you've completed in your Sprint — ah shit, wait, wait, I forgot. Okay, so the number of Story Points you've done in a Sprint is Velocity. Yeah, right, so you want your Velocity to stay high. So you run Backlog Grooming to produce Story Points for each of our Stories and Tasks, which are not time estimates except when they are, and then we try to estimate how many Story Points we can fit into a Sprint, which is explicitly a timespan of two weeks, again keeping in mind that Story Points are not time estimates , okay? If we do a good job, we'll have a high Velocity. And then we put that all into Jira, and you write down everything you're doing but then I also ask you about it every morning while I simultaneously try not to turn this into a "justify your last eight hours of work" ceremony. Damn it, wait, wait, I forgot to tell you, these aren't meetings, okay? They're called Ceremonies, and the fact that I am demanding large swathes of people attend ceremonies against their will does not make this a cult. Phew. Okay. Now, with all of that in mind, how many Story Points do you think it'll take you to update that API, Fred? Fred, you dumb motherfucker, Story Points have to adhere to the Fibonacci sequence 2 , you stupid idiot. Four? You mud-soaked yokel. You've disrespected me, this team, and most of all, yourself. Christ, Fred, you're better than this. Fucking four ? I wouldn't let that garbage-ass number near my business. I don't understand why you're struggling with th— As someone whose part of their job was to write end-user documentation at REDACTED about these exact things, you have my wholehearted, eye-twitching encouragement from this section alone. — A reader reluctantly working in Scrum advocacy I'm going to stop there for a second. You may understand now why, when first confronted by Jira and the Agile Cult, I elected not to read anything about it for about a year. It was, after all, an organization-wide transformation being pushed by many people with big ol' beards and very serious names like Deloitte in their work histories. Repeated references were made to the Agile "manifesto" by non-engineers, which caused me to avoid reading anything about it. It was a whole manifesto , for which the only frames of reference I have are voluminous works by Marx and Kaczynski which I haven't read either. Surely these people had been employed because they had some formidable skillset that I was missing. 
Imagine my befuddlement when I realized that this is the entirety of the manifesto:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Congratulations! You're all Agile certified! I'm so proud of all of us. This certificate is going right on the fridge, alongside the prettiest macaroni art. A few things here are striking. The first is that I don't see any Proper Nouns at all. So all that Scrum-Velocity-Story-Point-Epic-Fibonacci stuff is very much some sort of philosophy emerging from a weird nerd-aligned religious schism. The second thing is that the text is actually reasonable, and able to provoke meaningful discourse. For example, should individuals be contrasted with processes? Is taking the individual into account necessarily at odds with process? In some ways yes, in some ways no, but the authors are merely stating a rough preference for when the two come into conflict. And the final sentence walks back the preceding four, so this is hardly the incendiary foundation for the monolith of bullshit I just described. So where on earth is all this Scrum stuff coming from? There's nothing in the original manifesto that would suggest that this is sensible, nor is there anything that would even begin to send people down this strange pathway. For all the flaws with Scrum, you will find no support for this stark madness anywhere within an authoritative source on the topic. Yes, it comes with a thousand confusing names and questionable value, but it doesn't actually suggest people dedicate up to half their time to meetings. In fact, Scrum is mostly embraced in a manner which implies some of its most fervent advocates have failed to spend even a few minutes reading about their primary job functions. How else can we explain the prevalence of Story Points as time estimates, and the one hour meetings every morning? And this is exactly why I don't view Scrum itself as particularly problematic. The fundamental issue, the one that is only moved by small degrees by project management methodologies, is that many, many people simply have totally unsophisticated ideas around how knowledge work functions.

Last week, someone was trying to bully me into estimating something. It took two hours of me saying "I can't estimate that, it has no end point, you don't understand what you're asking" for it to finally devolve into "Ok, you can't estimate it, I understand, but if you had to estimate it what would it be?" I said fuck it, 8 hours for me to investigate and come back with an answer on how long the work would take... and they were happy with that. Nobody paused to consider it took 25% of my estimated time to have a meeting about why it was a dumb question. — A reader

Work at large companies has a tendency to blow up, run far behind schedule, then ultimately limp past the finish line in a maimed state. One of my friends talks about how, when faced by his first failed project on a team, a management consultant responded to all critical self-reflection with "But you'd say that, overall, this was a success?" in a desperate bid to generate a misleading quote to put into a presentation to the board. The core of this issue lies in the simple fact that time estimates will, with varying frequency based on domain and team skill, explode in spectacular fashion.
We are not even talking about a task taking twice as long as initially estimated. I'm talking about missing deadlines by years . The software I mention in this blog post is now over ten years overdue. I am fairly certain that the majority of software projects collapse in this fashion, for reasons that would only fit into a post about Sturgeon's Law . It is in the shadow of this threat that the Scrum Master lives. Yes, that's right, there are still exciting Important Words that we haven't introduced. The Scrum Master, who I usually call Scrum Lords because it's funnier, is some sort of weird role that's specialized entirely in managing the Jira board, providing Agile coaching, and generally doing ad hoc work for an engineering team. Keeping with the theme of Scrum being bad but people being even worse at implementing it as prescribed, they usually end up being the project manager as well. Atlassian's definition of a Scrum Master notes that one of their core roles is "Remove blockers: superhero time", which makes me want to passionately make out with a double-barreled shotgun. I can only assume that Scrum Masters feel deeply infantilized by this, and I am offended on their behalf. They are generally very sad and stressed out, while simultaneously pissing off everyone around them. I can just punch any software YouTuber's name into the search bar along with "Scrum Master" and be assured that I can find someone sneering . Putting that brief meanness aside, I am actually very sympathetic. They are, after all, people, and I take the bold stance that I'd prefer people be happy and self-actualized. All of this, with the boards and the Stories and the Epics, they're all mechanisms for trying to construct some terrible fractal of estimation that will mystically transmute the act of software engineering into the act of bricklaying. And I'm guessing that bricklaying is also way more complicated than it looks, so this still wouldn't improve matters much even if it worked. This is further complicated by the fact that most Scrum Masters have either no understanding of the work under consideration, or have learned enough merely to be dangerous 4 . This puts them into an impossible position. If companies are going to pay outsized compensation to perform a job that simply requires a degree and a willingness to endure tedium, I can hardly fault someone for taking that deal. Even the Atlassian definition of a Scrum Master notes that technical knowledge is "not mandatory" , so who can blame them for not having technical knowledge? And once you're in that position, you have now become the shrieking avatar of the latent anxiety in the business. All projects are default dead barring exceptional talent, but this level of realism would fail to extract funding from the business, even if cool analysis reveals that the failure chance is still worth the risk. The Scrum Master is thus reduced to a tragic figure. They worry about losing their overpaid role, are not developing skills that are easily packaged when pitching themselves to other businesses, and feel responsible for far too much inside a project. Yet they do not have the knowledge or the power to debug the machine that is the team, even if they are well-intentioned and otherwise talented. Bad actors can more-or-less get away with saying anything to avoid doing work, because the truth is that only an engineer can tell when another engineer is making things up, which is precisely why we all live in fear of sketchy mechanics overcharging us for vehicle repairs. 
Even if someone is suspected of malingering, the Scrum Master is unable to initiate termination procedures, and will probably have to trust their gut to a degree that is unpalatable for most people if they want to escalate issues. If the project is running late, they have no recourse other than to ask the engineers to re-prioritize work, then perform what I think of as "slow failure", which is normally the demesne of the project manager. When a project is failing, the typical step is not to pull the plug or take drastic action, it is to gradually raise a series of delays while everyone pretends not to notice the broader trend. By slowly failing, and at no point presenting anyone else in the business with a clear point where they should pull the plug, you can ultimately deliver nothing while tricking other people into implicitly accepting responsibility. The Scrum Master is generally not malicious, they are just failing to see the broader trend, and simply hoping for the sake of personal anxiety regulation that this task will indeed be accomplished by the next sprint. When I run into someone in this position, I have very little trouble with my disdain when they're enjoying harassing everyone, but I mostly run into people who are actually struggling to be happy with 40 hours of their week. I know of Scrum Masters who have broken down crying when they hear that people are leaving teams — not due to a deep emotional connection with the person leaving, but because that anxiety is lying right below the surface, and almost any disruption can set it off. It is not unusual to hear people in this role flip between intensely rededicating themselves to "fixing the issues" and then despairing about their value to society, something that I personally went through on my first corporate team. It sucks. I suspect that the impact of the organization manifesting their anxiety in one person in this way, then giving that person control of meetings and the ability to deliver that anxiety to their teams, is perhaps one of the most counter-productive configurations possible if you assume that the median Scrum Master is not a bastion of self-regulation. These people exist, but I wouldn't bet on being able to hire a half dozen of them at an affordable rate. For most of us, including me, attaining this level of equanimity is very much a lifelong work-in-progress. But even this is not a problem with Scrum, it's a much more serious problem — that organizations run default-dead projects and have cultures where people have to hide this while executives loot the treasury — that is simply made slightly worse by Scrum configurations.

Most engineers just go through the Scrum/Agile motions, finding clever ways to make that burndown chart progress at the right slant without questioning what they’re doing, and it’s nice to read someone articulating the negative thoughts that some of us have had for such a long time. Believe me when I say it’s been this way pretty much since the inception of this fad in the mid 90s. — A reader

I have previously joked (I was actually dead serious) that the symbolic representation of the work, the card on the Jira board, is taken to be such a literal manifestation of the work that you can just move the pointless tasks to "Done" without actually doing anything and the business will largely not notice. If the card is in "Done", then the work is Done. The actual impact of the work is so low that no one notices, in a way that my electrician would never be able to get away with.
Readers have written in to say that they have done exactly this, and nothing untoward has happened. This conflation of management artefacts with the actual reality of the organization is widespread, and also not Scrum specific, but it is my contention that producing these artefacts is core to these methodologies' appeal. A phrase I love is that "the map is not the territory", which more or less translates to the idea that maps merely contain abstract symbols of the territory they represent, and that while we may never have access to a perfect view of the whole territory, it is important to understand that we aren't looking at the real territory. That little doodle of a mountain is not what the mountain actually looks like. Despite the scribble of a sleeping dragon, Smaug may be awake when you get there.

The harsh truth is that, as with anything complicated enough in life, you cannot realistically de-risk it. We go through our days with complex five-year plans, have them utterly blown apart every year by Covid, assassinations, coups, and if you're super lucky the best you can hope for is the dreadful experience of watching other people getting cancer rather than yourself. Then, because this is terrifying, we immediately go back to pretending that the most important event in the next five years will also be predictable. And the other thing that we do because risk is usually terrifying (it's actually quite fun when you learn to expose yourself to good rare events — say, writing in public), is we immediately cling to things that smooth this away. Software engineers do not like engaging with the business partially because they trend towards being nerds, but mostly because interfacing with true economic reality is confronting. And non-programmers seem like they're interfacing with the reality of the business, but frequently they are interfacing with Reality As PowerPoint, which is closer to the territory but still not the territory.

True reality is never accessible because no one has perfect information. We do not know whether our competitor's latest product is going to be far behind schedule or utterly obliterate us. We do not know if a pandemic is going to shut down the state for a year. To make matters worse, reality that is accessible is usually not accessible from a high vantage point. From a bird's eye view, you have no way of knowing that 80% of a specific team's output is from Sarah, and Sarah's son just broke his arm playing soccer so that project is about to collapse as she scrambles to cope. This is totally visible to some people at the business, but is not going to be shared with the person making promises to the board. We could build a complex systems-thinking approach about our work, but that is very hard and will have obvious fuzziness. Many of the mediocre executives I meet, particularly those I meet in the data governance space 5 , love their PowerPoints and Jira boards because while they are nonsense, they are nonsense that looks non-fuzzy and you will only have to deal with their inaccuracy once every few years, at which point so many people signed off on the clear-but-wrong vision of reality that it's hard to tell who is ultimately accountable for the failure. A more effective management methodology, one which accurately portrays the degree to which no one knows what is going on because life is chaotic, only makes sense for an entirely privately owned business where the owner needs to turn a profit rather than impress his employers or the markets.
This mode of non-fuzzy being is only available to those who are salaried to "run the business", which means that they are not accountable to the territory, much like a hedge fund manager who receives bonuses for the good years and a simple firing (keeping their ill-gotten gains) during a bad year, which lets them run strategies with massively negative expected returns as long as the losses only materialize during rare events. This is in stark contrast to the reality of the bootstrapped business founder, such as the barber down the road, who will simply be on the hook for those losses.

If you're looking for results rather than precision, you want the symbols for your work to look exactly as fuzzy as your actual uncertainty, rather than offering pseudoclarity. I want the rope bridges on my map to exist in a superposition of being intact and destroyed in the last big storm. Of course, pseudoclarity is exactly what expensive PowerPoint reports and Jira provide, for as long as you're willing to fork over enterprise license money.

The version of reality where you can simply calculate how many Story Points you're completing per month, compare that to the number of Story Points in the project, then calculate that the project will be finished on time is very, very tempting, but the ability to do this is dictated by factors that are almost totally unrelated to Scrum itself. A team that can work this smoothly has probably already won, whatever you decide to do. Some people have just lived with these symbols for so long that they think drawing a box on a PowerPoint slide that says "Secure Personally Identifiable Data" is the same thing as actually making that happen, as if one could conjure a forest into existence by drawing some trees.
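To make that Story Point arithmetic concrete, here is a minimal sketch of the projection that makes the Jira version of reality so tempting. Every number in it is invented, and nothing about the calculation is specific to Scrum, which is rather the point.

```python
# A minimal sketch of the "pseudoclarity" arithmetic described above.
# All of the numbers are made up; what matters is how authoritative a
# single figure looks once you divide one guess by another.

remaining_story_points = 120   # sum of the cards left on the board
velocity_per_sprint = 23       # average points marked "Done" per sprint
sprint_length_weeks = 2

sprints_left = remaining_story_points / velocity_per_sprint
print(f"Projected finish: {sprints_left:.1f} sprints "
      f"(~{sprints_left * sprint_length_weeks:.0f} weeks)")
# Projected finish: 5.2 sprints (~10 weeks)

# Nothing in this division knows that Sarah's son just broke his arm,
# that a third-party API is a wretched abomination, or that some of the
# "Done" cards were dragged there without the work ever existing.
```

The division isn't wrong, exactly; it just quietly assumes that the next five sprints will look like the last five, which is the one thing the board cannot tell you.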
Funny thing is that, every time my company tried to introduce OKRs and make them work, it was clear to me that nobody had read the book or understood the final goal. Like Agile or Scrum. People try to implement them as dogma, and they always backfire because of that. I guess it is always easier to be a cook and follow recipes than to be a chef and adjust based on current circumstances/ingredients.

— A reader

If there is a specific problem with Scrum, something that I genuinely think makes it stand out as uniquely bad rather than just reflecting baseline organizational pathology, it is meetings. People are not good at running meetings, and mandating that they hold more meetings does not merely reflect our weaknesses as a society, it greatly amplifies the consequences of that weakness. So many meetings. So, so many meetings. Suffice it to say that anyone who runs a one-hour Stand-Up with any consistency should be immediately terminated if they are primarily Agile practitioners.

There is very little to say here, save that people are so terrible at running meetings that, on average, the sanest thing for most businesses to do is pick a framework that minimizes the default number of them. I will appeal to authority on just how rare meeting-running skill is and simply quote Luke Kanies, who has given meetings more thought than I have:

So a manager's day is built around meetings, and there is a new crop of tools to help with them. What's not to love? Well. The tools are built by and for people who hate meetings, and often who aren't very good at them. Instead, I want tools for people whose job is built around meetings, and who know they must be excellent at them.

In the absence of people who treat running meetings as seriously as we treat system design, try not to run many meetings.

If this sounds unpalatable then get good, nerds. I'm turning the tattered remnants of my humility module off to say that the team at our consultancy runs a meeting at 8PM every Thursday, after most of the team has just worked their day job and struggled to send kids off to bed, and we actually look forward to it. This is attainable, though even then we constantly reflect on whether the meeting needs to keep existing before it wears out its welcome. I ask people very frequently, possibly too frequently, whether they're still having fun.

I currently believe that meeting-heavy methodologies are preferred because they feel like productivity if you aren't mindful enough to notice the difference between Talking About Things and Doing Things. Even some of the worst Agile consultants I know have periodically produced real output, but I suspect they can no longer differentiate between the things they do that have value and the things that do not.

A while ago, I wrote a short story about Scrum being a Lovecraftian plot designed to steal human souls. It ended with this quote:

In the future, historians may look back on human progress and draw a sharp line designating "before Scrum" and "after Scrum." Scrum is that ground-breaking. [...] If you've ever been startled by how fast the world is changing, Scrum is one of the reasons why. Productivity gains of as much as 1200% have been recorded. In this book you'll journey to Scrum's front lines where Jeff's system of deep accountability, team interaction, and constant iterative improvement is, among other feats, bringing the FBI into the 21st century, perfecting the design of an affordable 140 mile per hour/100 mile per gallon car, helping NPR report fast-moving action in the Middle East, changing the way pharmacists interact with patients, reducing poverty in the Third World, and even helping people plan their weddings and accomplish weekend chores.

This is so unhinged that readers thought it was something I made up. 1200% productivity improvements? You can use Scrum to report on wars and accomplish your weekend chores? This looks like I asked ChatGPT to produce erotica for terminally online LinkedIn power users. I wish it was. That's the blurb from one of Jeff Sutherland's books, one of the Main Agile Guys. The subtitle of the book is "doing twice the work in half the time", so this absolute weirdo is proposing that Scrum makes you four times faster than not doing Scrum. Jeff has also gone on record with amazing pearls of wisdom like this:

If Scrum team is ten times faster than a dysfunctional team and AI makes teams go four times faster, then when both use AI the Scrum team with still be be ten times faster than the dysfunctional AI team. Actually, what I am teaching today is not only for developers to use AI to generate 80% of the code and be five times faster. This will make each individual team member 5 times as productive and the whole team five times faster. But it you make the AI a developer on the team you will get a synergistic effect, potentially making the team 25 times faster.

Jeff, what the fuck are you saying? This is incomprehensible nonsense. You are throwing random numbers out faster than Ben Shapiro when he's flinging anti-trans rhetoric, a formidable accomplishment in and of itself, and they're all totally unsubstantiated. This is insane. This is demented. Scrum is ten times faster than a dysfunctional team? Are all non-Scrum teams dysfunctional? AI makes teams go four times faster?
But you're teaching people to use AI to be five times faster? Then if you put AI on the team as a developer there's a synergistic effect and they're 25 times faster? Fucking what? What mystical AI do you have access to that can replace a developer on a team today, and why aren't you worth a trillion dollars? And if we throw Scrum into the mix for that sweet, sweet 10x speedup, can I get my team to be 250 times faster? Can our team of six people perform the work of, let me do some maths, 1500 normal developers? How have we defaulted to a methodology that has this raving fanatic at the helm?

Contracting has exposed me to a variety of technical challenges and domains, ranging from work I remember with fondness and pride, to the kinds of unbearable, interminable corporate Scrum nightmares you describe so eloquently in your blog, which seemed to be cooked up in a lab intent on undermining and punishing any sign of genuine ambition towards the improvement of human life.

— A reader

The reality is that teams are messy and filled with emotion, and this is further compounded by the fact that our work requires a great deal of emotional well-being to deliver consistently. I once worked on the management team at a Southeast Asian startup, and while I was terribly depressed at that job, I was able to get my work done with some degree of consistency. Now that I am in IT, I basically cannot program when I am in a negative headspace because I cannot think clearly, and this effect dwarfs most of the productivity gains I see in a typical engineering team. Poor sleep, low psychological safety, and a thousand other little levers in the brain can disrupt functioning. There is no real shortcut for this.

With that said, I do have thoughts on how to do better, and you're going to get them no matter how annoying that is! Behold, the unbridled power of not relying on advertising revenue!

Let's start very, very simply. Names matter. Agile is popular because the word Agile has connotations of speed, and that is genuinely as sophisticated as many people are when designing their entire company's culture. Sprints are popular because the word Sprint has connotations of speed. The fact that they are called Sprints has probably genuinely killed a few people when you aggregate the harm of being told you are Sprinting every week across a few million anxious people. Don't give things idiotic names.

All methodologies should compete against a baseline of a bunch of sticky notes on a whiteboard, and you should question your soundness of mind every time you feel the need to introduce a Proper Noun, okay? Just have a big list of things to do, order it by what you want done first, and then do that. Just think about how much you'll save on onboarding and consultants for the exact same outcome in almost all cases. There are plenty of methods superior to this, but there are far more that are worse. If you have a meeting, just call it a meeting and prepare an agenda. I swear to God, if you invent a Proper Noun and someone asks me to learn it for no reason, sweet merciful Jesus, I will find you and —

A spectacular amount of the design that goes into these methodologies is based around avoiding cognitive biases around estimation, though they frequently fall short because there is no easy fix for a mind that craves only easy fixes. That one sentence describes 90% of dysfunction in all fields.

Consider the Fibonacci restriction on estimates, which means that four Story Points can't exist (and which is hilarious when adopted by teams using Story Points as time, because now four days can't exist). The generous reasoning behind this is that the variance of a highly-complex or time-intensive task is higher than that of a simple task, so it makes sense to force people into increasingly coarse buckets rather than have them stress about a single point here or there. In reality, this is fucking silly, and if someone suggested it ex nihilo, without the background of Scrum, we'd be absolutely baffled. But hey, an attempt was made.
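For anyone who hasn't had the pleasure, here is a toy sketch of what that restriction amounts to in practice. The "round up to the next allowed bucket" rule is just one common convention, assumed here purely for illustration.

```python
# A toy illustration of the Fibonacci estimate restriction described above.
# Rounding up to the next allowed bucket is an assumption for illustration;
# teams argue about the exact rule more than you would hope.

ALLOWED_POINTS = [1, 2, 3, 5, 8, 13, 21]

def to_story_points(raw_estimate: float) -> int:
    """Force a raw estimate into the nearest allowed bucket, rounding up."""
    for bucket in ALLOWED_POINTS:
        if raw_estimate <= bucket:
            return bucket
    return ALLOWED_POINTS[-1]  # anything bigger is simply "big"

print(to_story_points(4))   # 5  -- four points cannot exist
print(to_story_points(9))   # 13 -- buckets get coarser as the work gets murkier
```

The widening gaps are the entire argument: past a certain size, an estimate is assumed to be too uncertain to be worth haggling over a point or two.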
Most of the suggestions that follow are different models for handling these biases. I'll indicate which of them I have actually tried. For the most part, I have found that the typical organization is completely unwilling to try them, and will only consider them when I am presenting in my Consultant Mode, not my Employee Mode. Even though I am more deeply embedded in the workplace culture as an employee, most people can't stop seeing ICs as too low-status to take seriously. In those contexts, I've just gone rogue and experimented without management buy-in.

Basecamp has a free book out online that describes a methodology they call Shape Up. It's quite good despite my general disdain for business books. In it, they explicitly deal with the failure mode of tasks stretching well beyond their value to the business, existing in the perpetual zone of "almost done".

We combine this uninterrupted time with a tough but extremely powerful policy. Teams have to ship the work within the amount of time that we bet. If they don't finish, by default the project doesn't get an extension. We intentionally create a risk that the project—as pitched—won't happen. This sounds severe but it's extremely helpful for everyone involved.

First, it eliminates the risk of runaway projects. We defined our appetite at the start when the project was shaped and pitched. If the project was only worth six weeks, it would be foolish to spend two, three or ten times that. [...]

Second, if a project doesn't finish in the six weeks, it means we did something wrong in the shaping. Instead of investing more time in a bad approach, the circuit breaker pushes us to reframe the problem. We can use the shaping track on the next six weeks to come up with a new or better solution that avoids whatever rabbit hole we fell into on the first try. Then we'll review the new pitch at the betting table to see if it really changes our odds of success before dedicating another six weeks to it.

All this does is turn off the sunk cost fallacy, forcibly. It's very smart. I ran this for a while with the other engineer mentioned in this blog post. Despite the broad horror of that story, it was the most productive work period of my life. It's also worth noting that the other engineer went on to become one of my co-founders, and that we both studied psychology together before getting into IT. A lot of our effectiveness came down to ruthless self-analysis and paying attention to failure modes.

I like the advice given by P. Fagg, an experienced hardware engineer, "Take no small slips." That is, allow enough time in the new schedule to ensure that the work can be carefully and thoroughly done, and that rescheduling will not have to be done again.

— Fred Brooks, The Mythical Man-Month

This is the smartest and most practicable thing that I'm ever going to write on this blog.
Unsubscribe after this because it's all downhill from here.

There is a rather strange phenomenon that arises around project lateness. When we estimate that something is going to be completed in a month, the natural temptation is to think that, when you are one day overdue, you are almost done; i.e., it will be finished in one month and five days. In reality, each day past the deadline pushes the expected completion date further out. Prepare yourself for one of my patented doodles, a thing for which I am bullied relentlessly at work, and enjoy the dreadful simulated experience of being one of my colleagues enduring an interminable lecture about some abstract concept that no one cares about.

This is the distribution that people think they are sampling from. As you move past the one-month mark, you edge towards the sad possibility that it'll take two months, but under this distribution the likelihood of that is astonishingly low. At each point, the odds say it's the next sprint where you'll deliver. The truth is that if you keep missing deadlines (or even miss one deadline), reality is gently, and eventually not-so-gently, informing you that you are not drawing from the distribution you thought you were. Instead, with each passing day, it is increasingly likely that you are drawing from some super cursed distribution that will ruin your project forever. Each delay represents the accumulation of evidence that you are more likely to be drawing from the blue instead of the red (that is, from the cursed distribution rather than the one you planned around), or from something even worse than the blue.
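If you would rather not take my doodles on faith, here is a small Monte Carlo sketch of the same argument. Every parameter is invented purely for illustration: a project's true duration is drawn either from the well-behaved distribution we planned around, which finishes near the one-month mark, or occasionally from a cursed heavy-tailed one, and we then ask what the expected total duration looks like given that we are still not done after a certain number of days.

```python
# A small, illustrative Monte Carlo model of the argument above. The mixture
# weights and distribution parameters are invented; only the shape of the
# conclusion matters.
import random

random.seed(0)

def sample_duration() -> float:
    """Draw a project duration in days from a mixture of two worlds."""
    if random.random() < 0.8:
        # The distribution we think we're sampling from: roughly a month.
        return max(1.0, random.gauss(30, 5))
    # The cursed, heavy-tailed distribution (median around 55 days).
    return random.lognormvariate(4.0, 0.6)

samples = [sample_duration() for _ in range(200_000)]

for days_elapsed in (30, 31, 40, 60, 90):
    still_running = [d for d in samples if d > days_elapsed]
    expected_total = sum(still_running) / len(still_running)
    print(f"still unfinished at day {days_elapsed:>2}: "
          f"expected total duration ~ {expected_total:5.1f} days")
```

On runs like this, being a single day past the one-month estimate already drags the expected total out by a couple of days, and every further day of silence pushes it out faster, because surviving past the estimate is mostly evidence that you were drawing from the cursed distribution all along.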
These days, when something important is late by one day, I immediately escalate to the highest alert level possible. This is unpalatable for political reasons, but it is the only appropriate response. It works. I've tried this out, and I have never regretted it. I also once warned an executive about a server that was two weeks late in being provisioned. I failed to adequately explain the idea because it was complicated, they were impatient, and I hadn't practiced the explanation enough times... and I think it's too counter-intuitive for non-statisticians to actually act on. Doing unusual things is a genuinely hard skill, even if you absolutely believe that the unusual thing is better. That server was provisioned a year behind schedule. There are no small delays in my world, only early delivery and absolute catastrophes.

Our consultancy doesn't do deadlines. This was a strange idea when I first came across it because it is so different from the corporate norm, but it's a much better model when you have trust with the parties involved. If you don't have trust, guess what, nothing else matters. We pair this with fixed price billing, but the core is that we try to only take on projects where there's no real risk of a few weeks here or there affecting our client adversely. The fixed price billing means that we aren't rewarded for running late, and have a higher effective hourly rate if we deliver something the client is happy with in less time. It also means that clients don't feel bad when we do things like document comprehensively or improve test suites. This is tricky, because it runs totally counter to how a large business operates, but there would also be very little to enjoy in starting a business only to do what everyone else is doing.

There's common wisdom in the savvier parts of the IT world that you should limit the number of weird things you do with regard to your technology stack. Even that is really an argument about bias (people drastically underestimate the difficulties of using weird technology and overestimate the value) — but it's also terrible advice if you actually know what you're doing. You want to maximize the amount of weird stuff you're doing across the business to generate asymmetry with your competitors, with the admittedly serious caveat that the pathway to this particular ancient ruin is littered with skulls. Pay attention to the skulls.

This isn't very useful for a larger business, as they have usually been architected to rely on conventional mechanisms, but it's worth knowing that working this way is possible at all. Authors like Jonathan Stark have made it part of their normal practice. You can choose to build a system that is not predicated on the idea of work flowing linearly through a system like a factory floor. In fact, one of the most famous books on IT operations is The Phoenix Project, but The Phoenix Project is self-admittedly just a reskinned version of The Goal, a totally different book that is explicitly about factory floors. This is also totally beside the point, but the audiobook version of The Goal has a romantic subplot in a book about factory operations, and the editors included smooth saxophone when the protagonist finally frees up enough time at work to attend to his ailing marriage, which caused me to exhale tea through my nose.

Finally, here is a boring disclaimer that some industries simply can't get away with experimenting along these dimensions. Microchip manufacturers need to deliver the product in time for the next iPhone to ship or Apple cancels the contract. C'est la vie.

This is a point taken from a private conversation with Jesse Alford, but obviously tasks can be estimated accurately, it's just expensive. I've done it before, to a higher accuracy than I've seen on any Scrum team, with minimal practice, just by taking the time to have deep conversations with another engineer about the work. Unfortunately, it takes a non-trivial amount of engineering effort to do, and frequently has to be paired with actual work.

Once again going to Basecamp, whose kool-aid I swear I only drink sparingly even though it's delicious and refreshing: they have a specific chart on their platform called a hill chart. I love hill charts. Simply put, they reflect the reality that there is a phase of a project where scope increases as you run into new cases during implementation, and then a phase where you actually have a good idea of how long something is going to take. For example, if I were going to integrate with a third-party application, the period of horror where I learn about the Informatica API (a wretched abomination whose developers should be crucified) is the left side of the chart, as I learn things like "the security logs don't tell you if a login attempt was successful, just that someone clicked the login button". The right side of the chart, once I have painfully hiked up a hillside littered with caltrops, is where I say "this is still absolute torment, but I am now confident that there are only three days of pain left".
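To make the hill-chart idea concrete, here is a minimal sketch of the data it captures, assuming the usual Basecamp convention of a 0 to 100 position where the first half means "still figuring out the unknowns" and the second half means "known work left to execute". The scopes and positions are invented for illustration.

```python
# A minimal sketch of a hill chart: each scope gets a position from 0 to 100,
# where anything under 50 is still uphill (unknowns remain, scope may grow)
# and anything over 50 is downhill (known work left to execute).
# The scopes and positions below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Scope:
    name: str
    position: int  # 0 to 100

    @property
    def phase(self) -> str:
        return "uphill: unknowns remain" if self.position < 50 else "downhill: just work left"

scopes = [
    Scope("Third-party login audit", 20),   # still discovering what the API hides
    Scope("Export to CSV", 65),
    Scope("Permissions cleanup", 90),
]

for scope in sorted(scopes, key=lambda s: s.position):
    bar = "#" * (scope.position // 10)
    print(f"{scope.name:<25} {bar:<10} {scope.phase}")
```

The useful part isn't the bar; it's that the chart distinguishes "I don't know what's left" from "I know what's left and it's tedious", which is exactly the distinction a single percent-done number erases.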
People can have their gigantic Jira board, I guess, if they're willing to put that much time into something that isn't the work itself. And of course, it wouldn't be that great anyway: it's still possible to miss board deadlines, rework can appear, other teams you depend on can screw up, people leave the team or become demotivated, and so on.

For the most part, businesses are best served by doing really, really simple things that have outsized value. Even within the consultancy, the only things we've bothered setting internal deadlines for are those we've been procrastinating on. Even our deadlines are deployed towards psychological ends. This will change over time, but it has been totally fine until now.

Nothing crushes software engineering productivity faster than low morale. Generally speaking, software engineers in the first world are well-paid enough, and their work is hard enough to measure, that you cannot intimidate them into working faster by standing behind them whilst demanding they flip burgers faster. Nor would I want to, on account of at least trying not to be a bastard.

Scrum teams, and any team that does not pay extremely close attention to the symbolic framework the team is operating in, will collapse over the course of approximately a year. A core issue with Scrum specifically is that many of the ceremonies symbolize extant organizational issues much too clearly. One of the other important meetings, the Retrospective (I know, there's another word!), is where you sit down and evaluate how a sprint went and what the team can change. I love the idea of the Retrospective. It is a fantastic idea that any team aspiring to greatness should adopt. But if everyone in the Retrospective requires organizational permission to change things, or is on a generic team where changes are not pursued with violent purpose, the Retrospective becomes emblematic of everything that makes people feel disrespected. On my current conventional-employment team, only about 20% of the team still attends Retrospectives, which is the outcome I've gradually observed everywhere I've seen Scrum.

Again, this isn't a problem with Scrum — it's that Retrospectives interact positively with good cultures and negatively with bad cultures, and since most cultures are bad, it follows that Scrum is actively harmful to deploy into a random environment. It's only appropriate to roll it out when the organization is ready to actually make changes, and the continuance of the process should be immediately interrogated, and possibly terminated, the moment an engineer reports that they feel it's a waste of time. You can bring it back after figuring out how it all went so wrong the first time.

There's probably some broad lesson in here about thoughtfulness, constantly evolving how you approach your craft and the structures that surround it, being suspicious of people selling 1200% improvements in productivity, and accepting that there's no substitute for reading and thinking deeply about your problems. There is no management methodology that will make up for having team members who proselytize full-time for a philosophy that is five lines long without having read those five lines 6 . But that sounds really tiring! Someone has announced Agile 2.0, baby! Nothing can go wrong! Finally, all our problems are solved! Goodnight, everybody!

1. This frequently boils down to professional responsibility cosplayers hoping that by repeating the phrase "break down silos" every day, they will be able to network every employee at the company into one gigantic Borg cube, somehow achieving all the upsides of specialization and none of the downsides. There is a nuanced middle ground between these two points, and 90% of people are accordingly busy not acknowledging it.
2. I had to look this up, despite seeing it in practice many times, because it's so weird that I felt like I was just making things up.

3. The etymology of the word business reveals that it originally stemmed from something approximately meaning "anxiety". It is claimed that this sense of the word is now obsolete... but I think you and I both know that there's a sliver of it still in there.

4. This is a sexy phrase in business and software because people like the idea of being "dangerous" when they are in fact being extremely boring. News flash: things being dangerous is bad, you mad bastards. Programming is not skydiving and that is okay. Learn enough to be safe instead. Imagine learning enough surgery or cybersecurity to be dangerous and then bragging about it. Even Michael Hartl's Learn Enough To Be Dangerous courses actually make you exceedingly safe.

5. This is where grifters in my field are flocking, because it sounds responsible and the industry has decided it doesn't require technical ability. I attended a meeting a few weeks ago where someone had written a slide saying that "robotics" is part of our five-year plan. My brother in Christ, I'm a glorified database administrator, why do I need robots?

6. The wildest thing about all this is that I actually like the manifesto and I still think most of this is bullshit. I like the principles enough that our consultancy is currently leaning heavily towards adopting Extreme Programming practices.
