[HN Gopher] Useful engineering metrics and why velocity is not o...
       ___________________________________________________________________
        
       Useful engineering metrics and why velocity is not one of them
        
       Author : lucasfcosta
       Score  : 109 points
       Date   : 2022-09-05 05:12 UTC (17 hours ago)
        
 (HTM) web link (lucasfcosta.com)
 (TXT) w3m dump (lucasfcosta.com)
        
       | orangesite wrote:
       | Fantastic work Lucas, beautifully explained!
       | 
       | Is this currently being applied at your employer? Are you hiring?
       | :-D
        
       | dieselgate wrote:
       | Not sure if this is mentioned in other comments but in my work it
       | seems relevant to "keep an eye" on velocity but not worship it.
        | Hopefully it stays pretty consistent in general, but if there are
        | big swings, maybe it's a reason to check out other process
        | items.
        
       | vcryan wrote:
       | People take velocity too seriously. Generally speaking, it helps
       | you identify how much work the team typically does so you can
       | ensure you have at least that much refined for planning the next
       | sprint. That's pretty useful... depending on the circumstances
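        | 
        | A minimal sketch of that use (toy numbers, Python):
        | 
        |     # points completed in the last few sprints
        |     completed = [21, 18, 24, 20]
        |     velocity = sum(completed) / len(completed)   # ~20.8
        | 
        |     refined_points = 18   # points refined and ready for planning
        |     if refined_points < velocity:
        |         print("refine more work before the next sprint planning")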
        
       | mkl95 wrote:
       | Velocity metrics are good when treated as a symptom of other
       | issues. They are particularly good at detecting engineers who
       | don't know what _scope_ means.
        
       | OrvalWintermute wrote:
       | Maybe I am biased because I've attended spacecraft & sensor
       | engineering design reviews for much of my career.
       | 
        | But this does not look like engineering; it looks more like
        | statistical process control applied to software dev.
       | 
       | Not engineering!
        
         | q7xvh97o2pDhNrh wrote:
         | You are technically correct, which is (technically) the best
         | kind of correct.
         | 
         | For better or for worse, though, most of the modern software
         | industry has converged on "software engineering" as the name
         | for what they do, while terms like "web developer" usually
          | evoke spooky prehistoric (20th-century) connotations of
         | companies that treat software as "part of the IT department"
         | and a "cost center."
        
           | vsareto wrote:
            | Let's just go back to Webmaster, but we get to have Appmaster
            | (desktop/mobile) and Chipmaster (embedded) too.
        
           | spiffytech wrote:
           | I enjoyed this series of interviews with engineers from other
           | fields who switched to software development. Their broad
           | conclusion is that software engineers are either already
           | engineers, or are pretty close.
           | 
           | https://www.hillelwayne.com/post/are-we-really-engineers/
        
           | tamrix wrote:
            | It's just title inflation. Previously, a systems engineer used
            | to work at NASA; now they configure computer networks.
           | 
           | Now everyone is a tech/dev advocate because they have 500+
           | linkedin connections and repost the same cloud news everyone
           | has already read.
           | 
           | I'm just going to coin the next stupid title now, 'cloud
           | astronaut', they're so good they're beyond the cloud. That
           | will fill the blank void that is your ego. You're welcome.
        
             | jayd16 wrote:
             | Cloud astronaut already has a connotation. An "astronaut"
              | is like a loftier, more out-of-touch cowboy. The cowboy will
             | get work done (their own way) even if they're off the
             | beaten path. The astronaut is about the journey of
             | exploration into the unknown. Forget bike shedding. We're
             | talking research into moon bases.
        
         | FeistySkink wrote:
         | Do you have any write ups about what those reviews are like?
         | Sounds quite a bit more interesting than the usual agile stuff.
        
         | zanellato19 wrote:
          | Engineering applies statistical process control to something to
          | understand it and better it all the time. It is engineering as
          | I've studied and practiced it.
        
           | mym1990 wrote:
           | Therefore it must be the only kind of engineering out there?
        
         | lucasfcosta wrote:
         | Thanks so much for this brilliant comment. Indeed, this is
         | statistical process control. What I meant by "engineering" was
         | that it applies to software engineering teams.
         | 
          | I'll consider adapting the title, but I'm not sure whether folks
          | would grasp my intent as quickly.
        
           | 8note wrote:
           | Swapping "engineering" to "software management" would be
           | clearer.
           | 
           | Velocity has a strong definition in engineering that has been
           | borrowed and misused in software management.
           | 
           | I expected the article to be about accelerometers and how the
           | ill effects of vibration have been mitigated
        
             | archibaldJ wrote:
             | > has been borrowed and misused in software management.
             | 
              | As a software engineer, I really appreciate this.
             | 
             | > about accelerometers and how the ill effects of vibration
             | have been mitigated
             | 
             | I really hope I can mitigate vibrations of mood, esp the
             | procrastinatory kinds. But then I don't want to
             | over-"engineer" my behaviours or it would start to become
             | existential. Like sometimes I don't even know what is
             | meaningful anymore (despite working on my dream job), or if
             | I can truly humanly love myself. Being human is tough.
        
               | q7xvh97o2pDhNrh wrote:
               | An old mentor used to talk about "installing shock
               | absorbers" (in the metaphorical sense) for this sort of
               | thing.
               | 
               | Hope you're doing okay, dude. Meaning doesn't mean much,
               | but learning to love and be happy is (from what I can
               | gather) far more important than the zeroes and ones we
               | tinker with. I'm trying to figure it out myself and
               | waiting for the good times to come around again, too.
        
               | pgorczak wrote:
               | Maybe you don't need to completely mitigate those
               | vibrations and they're just a part of moving through life
               | as a human. A very related image that stuck with me is
               | "run on the engine, not the pistons".
        
             | nicwilson wrote:
             | I think velocity of development is more like velocity of
             | money than velocity of a car.
        
         | mandeepj wrote:
         | > But this does not look like engineering; looks more like
         | statistic process control applied to software dev
         | 
         | Of course, it's not engineering; it's all about metrics, which
         | are statistical
        
       | opmelogy wrote:
       | I've had the exact opposite experience. Velocity has helped my
       | teams multiple times. It gives a sense of how much productivity
       | drops when we add new members and if we don't recover within an
       | expected time then we know something is up. It's also used as a
       | rough early indicator that projects are running late and that
        | there's no way we can hit expected timelines. Teams escalate and
       | get alignment across the larger org on how to handle it (move
       | dates, get additional help, etc.).
       | 
       | > Velocity metrics are as loathsome as the horoscope because
       | neither provides any insight on why something went wrong or how
       | to fix it.
       | 
       | This is such a weird statement to me. Of course it doesn't tell
       | you why, it's just data. It's up to anyone that works with data
       | to figure out what's going on.
        
         | fnordpiglet wrote:
         | I've found talking to my developers regularly tells me when
         | there are issues and what they are, and we can solve the issues
         | together. While I could have seen the trend in a velocity chart
          | the act of collecting velocity data is time-consuming and
          | tedious, and feels like micromanagement to those being
         | measured. I have literally never seen a velocity chart tell me
         | something more easily observed by taking team and individual
         | temperature consistently. I've found managers who depend on
         | metrics to be terrible at empathetic listening, and I avoid
         | hiring them.
        
       | maerF0x0 wrote:
        | ok, that was a really long and involved article; I will admit I
        | didn't read the whole thing. If the punchline is the inverse of
        | the title, then the joke's on me.
        | 
        | afaik most agile systems are pretty clear that cross-team
        | velocity is non-comparable (if teams are shuffled, the velocity
        | will change).
        | 
        | But the one thing it's meant to be used for is estimates. After
        | sufficient iterations of the loop, a team should have both
        | increasing velocity (due to knowledge of the codebase) and
        | increasing accuracy of estimates (both from dialing in sprint
        | point / effort tuning and from knowledge of the codebase).
        | 
        | afaik velocity was never meant to tell you how to improve
        | itself. That's more for retros (retrospectives).
        
         | mandeepj wrote:
         | > After sufficient iterations of the loop a team should both
         | have increasing velocity
         | 
         | The velocity cannot keep on increasing forever :-)
        
         | aprdm wrote:
          | That's only true if the team is working on the same sort of
          | problems.
        
       | awinter-py wrote:
       | velocity is a vector and by rights should include angle as well
       | as magnitude
       | 
       | if your measure of velocity doesn't rate direction (i.e. 'quality
       | of goal') as well as magnitude ('how quickly you are getting
       | there'), at minimum it needs to be renamed
        
       | fmajid wrote:
        | Perhaps that's true of micromanagement software tracking velocity
        | metrics, but the concept of velocity, and the practice of getting
        | rid of obstacles to getting things done, is an essential part of
        | engineering management. It involves identifying and replacing
        | poor tooling (easy) and fixing broken culture or processes (very
        | hard), especially at the interface with product management and
        | support.
       | 
       | This is a qualitative process, though, and using velocity metrics
       | from your project management tools is like looking for your lost
       | keys under the streetlamp because that's where the light is
       | brightest.
       | 
       | It is also important to recognize the importance of keeping slack
       | in a system and not over-optimizing it, as explained in Tom
       | DeMarco's excellent book _Slack: Getting Past Burnout, Busywork,
       | and the Myth of Total Efficiency_.
        
         | pierrebai wrote:
         | > velocity metrics from your project management tools is like
         | looking for your lost keys under the streetlamp because that's
         | where the light is brightest
         | 
          | That's not just a poor analogy but wrong. I can't even start to
          | identify which aspect of velocity is the keys, the street lamp,
          | or the searching. Velocity is a tool to assess the amount of work
          | you do per cycle and to help plan how much work to take on in a
          | given cycle.
          | 
          | It has nothing to do with losing keys, searching, or only
          | looking where there is light.
          | 
          | As usual, for any process, haters will just trade jabs and nod
          | appreciatively about how much it all sucks. Assessing how much
          | work your team does every cycle is a useful metric, and people
          | higher up like to have at least a small glimpse of how much
          | work and how much time it will take to finish a feature.
          | 
          | One useless metric, though, is snark.
        
         | jdgoesmarching wrote:
         | I think it's still a useful input into planning and decision
         | making if it is understood to be lower quality data that can't
         | account for all variables in software projects. Unfortunately,
         | MBA-types refuse to interpret estimates this way and can't keep
         | their grubby paws off using them as bad performance measures.
        
         | geekjock wrote:
         | "Using velocity metrics is like looking for your lost keys
         | under the streetlamp because that's where the light is
         | brightest."
         | 
         | ^ 100% agree.
         | 
         | Tracking output metrics (including LOC, # of PRs, velocity
         | points) doesn't help you identify or fix poor tooling, culture,
         | or processes.
         | 
         | Metrics like cycle time get a little bit closer, but still fail
         | to account for team context or surface root causes.
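          | 
          | Cycle time is at least cheap to compute from ticket timestamps;
          | a rough sketch (hypothetical dates, Python):
          | 
          |     from datetime import date
          |     from statistics import median
          | 
          |     # started / finished dates per ticket (made-up data)
          |     tickets = [("2022-08-01", "2022-08-03"),
          |                ("2022-08-02", "2022-08-09"),
          |                ("2022-08-05", "2022-08-06")]
          |     days = [(date.fromisoformat(d) - date.fromisoformat(s)).days
          |             for s, d in tickets]
          |     print("median cycle time:", median(days), "days")
          |     # tells you *that* delivery is slow, not *why*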
        
       | burlesona wrote:
       | OK, this is actually a really good and insightful article hidden
       | behind a provocative title.
       | 
       | I will say that "velocity" is an extremely helpful concept at the
        | level of the individual and small teams. With my teams I
       | challenge each engineer to try and batch work into 2-4 hour
       | chunks, with each chunk being labeled "1 point." Thus if we have
       | an ideal week there are theoretically 10 points of "velocity." We
       | also try to batch up meetings etc. and discount the coordination
       | overhead from the plausible output, so maybe the theoretical
       | ideal is lowered to 8-9 points. We all discuss and agree on that
       | based on whatever overhead we can't eliminate.
       | 
       | Then each engineer does their best to "call their shot" every
       | week and say how much work they'll finish. At the end of the week
       | we compare their prediction against reality, using a burn-up
       | chart as a visual aid to the flow of work. There's no penalty for
       | getting it wrong, just applause for nailing it, and always a
       | discussion of what worked and what could have gone better.
       | 
       | Following this habit, I have consistently seen teams get much
       | better at forecasting work into the future accurately. This
       | ability to have a steady velocity and make reasonably accurate
       | forecasts about how long things may take is extremely valuable
       | and helps us align and coordinate efforts across many teams.
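        | 
        | A rough sketch of that weekly compare step (made-up numbers,
        | Python):
        | 
        |     from itertools import accumulate
        | 
        |     # points each engineer "called" vs actually finished this week
        |     called = {"alice": 8, "bob": 9, "carol": 7}
        |     finished = {"alice": 8, "bob": 6, "carol": 7}
        |     for name in called:
        |         if called[name] == finished[name]:
        |             print(name, "nailed it")
        |         else:
        |             print(name, "called", called[name],
        |                   "finished", finished[name])
        | 
        |     # burn-up chart data: cumulative points done vs total scope
        |     weekly_done = [22, 21, 24]
        |     print("burn-up:", list(accumulate(weekly_done)), "of", 120)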
        
         | welshwelsh wrote:
         | A 2-4 hour chunk of work is 1 day of work. By that measure,
         | ideally each engineer would hit 5 points per week.
         | 
         | Managers like to think, if an engineer can do 1 point in 4
         | hours, then they should be able to do 2 points in 8 hours. But
         | that's not how it works.
         | 
         | The average office worker is only productive for about [3 hours
         | a day](https://www.vouchercloud.com/resources/office-worker-
         | product...). The upper bound is [4-5 hours a day](https://www.w
         | ashingtonpost.com/lifestyle/wellness/productivi...). More than
         | that is unsustainable and leads to burnout.
         | 
         | This is why people prefer remote work. Seeing coworkers in
         | person is nice, for sure. But management still assumes people
         | work 8 hours a day, meaning office workers spend most of their
         | day trapped in the office pretending to work, which is a huge
         | waste of time.
        
           | burlesona wrote:
           | I've found this isn't true for myself and for many others. If
           | you are getting good, deep focus time then you can generally
           | get one productive stretch in the morning, take a good long
           | break, and then do one more productive stretch in the
           | afternoon. We can be pedantic about it and say "well the sum
           | total of truly deep focus time was probably not more than 4-5
           | hours" and that may be true, but it's also not a realistic
           | take on the context switching and ramp up time that it takes
           | to get _into_ that flow in the first place.
           | 
           | Also don't underestimate how many shallow tasks there are
           | that somebody has to do to keep the lights on. We try and
           | eliminate as many of these as we can, but it's just not cost
           | effective to eliminate the entire long tail. So it's pretty
           | common to have the morning focus session be true "deep work,"
           | and the afternoon session be a batch of minor stuff like
           | small bug fixes, updating dependencies, running some tests,
           | etc.
        
         | q7xvh97o2pDhNrh wrote:
         | I'm generally extremely allergic to time-based micro-estimation
         | like this, but it sounds like you've got the rare healthy
         | implementation of it. (Or as healthy as it can get, at least.)
         | 
         | Two questions, if you don't mind:
         | 
         | 1. One of the biggest dangers of time-based estimation is the
         | cultural danger around the "penalty for getting it wrong." It's
         | got quite a lot of gravity to it, in my experience. Once there
         | _are_ time-based estimates for tiny bite-sized amounts of work,
          | it's a hop-skip-jump to the toxicity of "holding people
         | accountable for their commitments."
         | 
         | It sounds like you, as the team manager, are the one preventing
         | this backslide. How do you think about maintaining a healthy
         | culture around this when you go on vacation, get distracted,
         | get promoted, etc?
         | 
         | 2. What if a new discovery happens mid-week? Specifically, what
         | if it's the kind of discovery that invalidates the current
         | plan, so that finishing your "committed" work would be
         | worthless (or actively negative) for the business?
        
           | switchbak wrote:
           | I echo your concerns. On teams I've been on (exclusively at
           | large, boring organizations), I've seen a tendency to create
           | lots of really trivial tasks, and also to game things via
           | pretending these tasks are more complex than they are.
           | 
           | It's easy to nail your estimations when you're operating at
           | 1/3 your capacity, and you only ever meet but don't exceed
           | the planned velocity.
           | 
            | I wonder if this also disincentivizes learning and innovation
           | in the solution domain - because that would throw off all
           | those estimates.
        
             | burlesona wrote:
             | This is addressed by batching into same-sized chunks - the
             | engineers discuss the work plan as a group and if you give
             | yourself ten tickets that everyone knows are tiny and
             | should probably be done in an afternoon, the group will ask
             | you to group it into a parent ticket to make it a "half-
             | day" chunk of work.
        
           | burlesona wrote:
           | (1) This is part of blameless culture, and it can't work
           | without everyone being bought in, so once it is established
           | and working culturally it doesn't matter if one person on the
           | team changes. In the same way that we "hold people
           | accountable for incidents" by reviewing them and discussing
           | what we could learn, we do "hold people accountable for their
           | estimates" by asking what happened and reflecting on whether
           | we could have seen that coming. But we're never perfect and
           | we don't expect to be, we just try to see if we can get
           | better over time.
           | 
           | (2) When something unexpected comes in you flag it and report
           | it to everyone so everyone knows the expectations are
           | invalidated. This is super important! I can't even tell you
           | how much easier it is for a PM or other stakeholder to handle
           | learning on Tuesday that the plan is not going to work than
           | showing up for a check-in the next Monday expecting to see
            | some deliverable and only _then_ learning that it's late.
           | The goal is to communicate expectation changes as close to
           | real-time as possible, which builds trust and also helps the
           | non developers see how volatile and unpredictable development
           | can be so they learn to believe you when you say "the range
           | is 10 to 30 weeks, and no I can't narrow it down any more
           | than that."
        
       | __t__ wrote:
       | I disagree that velocity is a terrible metric. I think a more
       | comprehensive suggestion from the article could've been to use
       | velocity in conjunction with other metrics. Perhaps with the ones
       | the author points out in the article.
       | 
       | Measuring things is hard. Most of the time when you measure
       | something there will be a loss of information. Velocity, in the
        | context of software engineering, is no exception. However, this
       | doesn't mean that it can't be a helpful signal.
        
       | tootie wrote:
       | This is a really misguided opinion but possibly based on working
       | with misguided managers. Velocity in traditional scrum-style
        | agile is a metric of product completion. If you're doing point
       | estimates or the like and you've sized your MVP at 128 points and
       | you've completed 64 points with 180 developer-days then you are
       | 50% done and can project completion in another 180 developer-
       | days. That's what it's predictive of. Saying it doesn't explain
       | why your velocity is your velocity is a big duh. It's not
       | supposed to. It's the primary indicator of progress. If velocity
       | is too slow then you go figure out why or what can be done. If
       | you're on time or early, then great.
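        | 
        | In code, that projection is just a ratio (numbers from above,
        | Python):
        | 
        |     scope_points = 128      # sized MVP
        |     done_points = 64        # completed so far
        |     spent_dev_days = 180
        | 
        |     rate = done_points / spent_dev_days     # points per dev-day
        |     remaining = scope_points - done_points
        |     print("fraction done:", done_points / scope_points)   # 0.5
        |     print("dev-days to finish:", remaining / rate)        # 180.0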
        
         | spaetzleesser wrote:
         | The problem is that these estimates and projections often get
         | converted into hard commitments by management.
        
       | kcexn wrote:
        | With all measures that are applied in management contexts, I find
        | that the mistake people tend to make is assuming that there is a
        | _linear_ relationship between the measure of work and the financial
        | measures (revenue, profit, margin, etc.).
       | 
        | Velocity seems like a simpler measure than a network queue, and
        | both seem like valid measures, even though neither will have a
        | linear relationship with the bottom line of the company.
       | 
       | The goal of the entire business is to tune all the parts of the
       | business and seek out an efficient solution.
       | 
       | The theory of constraints as it is quoted in this article isn't
       | advocating for rate matching everywhere in the business
       | specifically. Rather it is making the statement that some inputs
       | and processes in the business have a disproportionately large
       | effect on the business outcomes. And you should weight management
       | priority to the processes that affect your outcomes the most.
       | 
        | If you have a process (say deploys to production) that
        | dramatically affects your business outcomes, in the sense that
        | small changes to this process lead to large changes, or even most
        | of the changes, in outcomes (e.g. revenue), then most of the
        | resources should be allocated to optimising that part of the
        | business. Resources allocated anywhere else will be wasted, since
        | the magnitude of their effects (even assuming they'll still be
        | positive) will be dwarfed by your most sensitive inputs.
       | 
       | The theory of constraints preaches that after you have chosen
       | your most important business outcomes (a strategic decision), you
       | should measure how much the different inputs to your business
       | affect the outputs of the business, and allocate resources to the
       | most sensitive inputs.
       | 
       | Most big businesses do this very well. It's why you can find
       | yourself in a team where doing more work or less work doesn't
       | seem to matter, even though your direct manager is up your ass
       | all the time.
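        | 
        | A toy illustration of that sensitivity weighting (made-up
        | numbers, Python):
        | 
        |     # estimated change in the chosen outcome per unit of effort
        |     # put into each input (hypothetical sensitivities)
        |     sensitivity = {"deploys_to_production": 9.0,
        |                    "code_review_latency": 2.5,
        |                    "ticket_grooming": 0.4}
        |     ranked = sorted(sensitivity, key=sensitivity.get, reverse=True)
        |     print("allocate attention in this order:", ranked)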
        
       | rufflez wrote:
       | In my experience, metrics are a giant waste of time for engineers
       | and engineering managers. Project managers and folks who depend
       | on the fruits of engineering labor on the other hand love these
       | metrics. It is simpler (and wrongheaded) because who wants
       | complications after all? Look at the trivialization in this
       | article for instance - "tasks come in, and software comes out".
       | Oh really? I had no idea!
       | 
        | Tasks vary drastically from project to project, including due to
        | varying requirements, the ambient software environment, technological
        | changes, and a host of other factors. Metrics are a way for
       | project managers to pass the buck on to the engineering teams and
       | nothing more.
        
         | bergenty wrote:
         | The hard truth is metrics work. I can quantify someone's
         | performance and if they're not doing well put pressure on them
         | to do better or hire someone else. Velocity is a pretty good
         | measure of someone's performance especially if the estimating
         | is done in a sprint planning session with the entire team.
        
           | geekjock wrote:
           | The hard truth is that using velocity to measure performance
           | will guarantee that developers bias their estimates to ensure
           | they make themselves look good, thereby making your
           | "estimates" useless.
           | 
           | Anyone who's built software knows that estimates are guesses
           | and are bound to change once the work actually begins (not to
           | mention if scope changes midway through).
           | 
           | Additionally, velocity does not "count" work happening
           | outside of tickets, e.g., helping with a customer support
           | issue, assisting another developer with legacy code,
           | reviewing other people's work, spending time on feature
           | planning.
        
             | mandeepj wrote:
             | > Additionally, velocity does not "count" work happening
             | outside of tickets, e.g., helping with a customer support
             | issue, assisting another developer with legacy code,
             | reviewing other people's work, spending time on feature
             | planning.
             | 
              | Your capacity is adjusted accordingly. In my team, an
              | engineer who's going to work on production support would
              | not have his capacity counted. Similarly, if an engineer
              | has a lot of cross-team work going on, then we'll reduce his
              | capacity as well, and so on.
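              | 
              | Roughly what that adjustment looks like (hypothetical
              | numbers, Python):
              | 
              |     nominal = {"ann": 10, "raj": 10, "wei": 10}  # points
              |     on_support = {"ann"}       # full sprint on prod support
              |     cross_team = {"raj": 0.4}  # fraction of time elsewhere
              | 
              |     capacity = 0
              |     for eng, pts in nominal.items():
              |         if eng in on_support:
              |             continue           # not counted at all
              |         capacity += pts * (1 - cross_team.get(eng, 0))
              |     print("sprint capacity:", capacity)   # 16.0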
        
               | bosswipe wrote:
               | Yuck, I'd quit your team. That level of micromanagement
               | is demoralizing and it hurts the company. "Sorry support,
               | I have zero autonomy. I can't talk to you unless my
               | manager adjusts some number in some spreadsheet"
        
           | fnordpiglet wrote:
           | I'm glad I don't work on anything you're responsible for.
           | I've made a great career out of hiring the people folks like
           | you burn out and frustrate.
           | 
           | I'll bet $5 that people conform to your model by working
           | longer than necessary and your "velocity" is considerably
           | lower than it could be, despite being measurably consistent.
        
           | jdlshore wrote:
           | The harder truth is that metrics only _appear_ to work. As
           | soon as you use metrics to judge performance, people will
           | start gaming them. They'll do whatever it takes to get a good
           | score, regardless of whether it's good for the company or its
           | customers.
           | 
           | The metrics "work," in that they'll go up, but the things you
           | don't measure will get worse, often (eventually)
           | catastrophically.
           | 
           | In the case of velocity, you'll get people taking shortcuts
           | and sacrificing quality, both internal and external, so they
           | can juice their numbers. The outcome is technical debt and
           | arguments about what it means for something to be done,
           | resulting in slower progress overall.
           | 
           | Source: I've been consulting in this space for a few decades
           | now, and have seen the consequences of using velocity as a
           | performance measure time and time again.
           | 
           | (Other metrics are just as bad. See Robert Austin, _Measuring
           | and Managing Performance in Organizations_ , for an
           | explanation of why knowledge work like software development
           | is incompletely measurable and thus subject to measurement
           | dysfunction.)
        
         | tra3 wrote:
         | You gotta have some way of answering the "how long is it going
         | to take" question. There's no better way of predicting things
         | than looking at how long things have taken in the past.
         | 
         | Unfortunately what everyone forgets is that historical
         | information only applies if you are doing "a thing" that is
         | similar to what you've done before.
        
           | tadfisher wrote:
           | One way to address this problem is to not ask the question,
           | but to work with your teams to set goals for a longer period
           | of time such as a quarter. Then structure your organization
           | such that it doesn't depend on the exact timing of work
           | completion.
        
           | djbusby wrote:
           | Q: how long will this take?
           | 
           | A: what is your budget? What other work will you sacrifice?
        
           | switchbak wrote:
           | It really comes down to: can you provide the value we need in
           | this area. That value might be "validate experimental
           | research is implementable", or "perform a spike to de-risk a
           | future effort".
           | 
           | There's a fog of uncertainty that gets impenetrable past a
           | certain time horizon. We need businesses that are capable of
           | forging ahead into the unknown, being honest about the level
           | of confidence given our knowledge at the time. I don't see
           | that approach coming out of MBA schools, and I think Elon's
           | approach is in marked contrast to that.
           | 
           | In many ways old school grassroots agile has penetrated the
           | lower levels of the org (to some degree), but the upper
           | levels haven't changed in decades, and this impedance
           | mismatch is most visible in the rigidity of planning (which
           | is also forced on them by outside pressures).
        
       | vannevar wrote:
       | Velocity is not intended as an engineering management metric,
       | it's intended for use internally by the team as a guide for
       | sprint planning. If you're using sprints and not kanban, the
       | "cycle time" is fixed, and only the amount varies (approximated
       | inexactly by story points).
       | 
       | This statement FTA is completely antithetical to agile
       | development:
       | 
       | "A manager who wishes to make their team's cycle times more
       | uniform can try matching the rate at which tasks enter the system
       | to the rate at which they leave."
       | 
       | On an agile team, "the manager" doesn't moderate the work rate;
       | rather, the team measures their own work rate (approximately) via
       | velocity and manages themselves accordingly.
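        | 
        | For reference, the mechanism behind the quoted sentence is the
        | usual queueing relationship (cycle time ~ WIP / throughput): if
        | tasks enter faster than they leave, WIP and cycle times grow. A
        | toy sketch (made-up rates, Python):
        | 
        |     arrival_rate = 6     # tasks entering the system per week
        |     departure_rate = 5   # tasks finished per week
        |     wip = 10
        |     for week in range(4):
        |         wip += arrival_rate - departure_rate
        |         cycle_time = wip / departure_rate     # Little's Law
        |         print("week", week, "wip", wip,
        |               "cycle time ~", round(cycle_time, 1), "weeks")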
        
         | hbarka wrote:
         | Isn't kanban a tool within sprints?
        
           | spaetzleesser wrote:
            | Kanban is basically a big to-do list (aka backlog) and you just
           | work through it. I find it way easier to manage than scrum
           | with sprints and sprint planning.
        
           | vannevar wrote:
           | Kanban is an alternative to sprints, for contexts in which
           | the team is persistent across projects and it makes sense to
           | let them manage their work apart from the projects that they
           | contribute to---e.g., an ops team that supports the
           | organization via a help desk and ticketing system.
           | 
           | You could certainly think of the sprint board as a kanban
           | board with a finite lifespan, but I don't think there's any
           | benefit to doing so.
        
             | seadan83 wrote:
              | There would be a benefit to thinking that way. Instead of
              | doing a ridiculous sprint planning, just keep pulling from
              | the backlog. Rather than predicting what you will get done
              | (and invariably failing to predict well, or using developers'
              | own predictions as their accountability and a way to pressure
              | overtime), perhaps just see what you
             | get done instead. Then instead of planning, actual work
             | could get done. Anyone needing estimates can look back
             | historically and compute number of tasks getting done and
             | extrapolate against the backlog.
             | 
             | By keeping the backlog prioritized at all times, and
             | pulling from the backlog instead of a 'sprint backlog' - it
             | avoids priority inversions. Planning what will be your
             | highest priority, that possibly changes mid-sprint, is a
             | priority inversion.
             | 
             | Kanban works well in all situations where sprints work
              | well. Instead of trying to predict in a planning session, you
              | just do the work. Any predictions are still the same
             | based on looking at historical data and comparing to the
             | backlog, just less waste involved and less subterfuge to
             | try and get developers to work overtime (I'm not aware of
             | any sprint planning that really takes into account error
             | bars, so the prediction/planning is BS & management
             | planning porn, nothing more)
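              | 
              | A rough sketch of that historical extrapolation, with crude
              | error bars (made-up numbers, Python):
              | 
              |     finished_per_week = [4, 6, 3, 5, 5]   # recent history
              |     backlog = 40                          # tasks remaining
              | 
              |     avg = sum(finished_per_week) / len(finished_per_week)
              |     best = backlog / max(finished_per_week)
              |     worst = backlog / min(finished_per_week)
              |     print("typical:", round(backlog / avg, 1), "weeks,",
              |           "range:", round(best, 1), "-", round(worst, 1))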
        
               | vannevar wrote:
               | >There would be a benefit to thinking that way. Instead
               | of doing a ridiculous sprint planning, instead just keep
               | pulling from the backlog.
               | 
               | The purpose of sprints is not to divide up the work---
               | this is a common misunderstanding. The purpose of the
               | sprint is to set incremental goals _based on incremental
               | learning._ So instead of setting a goal to create the
               | complete software system from the beginning, you set a
               | goal that can be accomplished within a fixed timeframe
               | that people can make reasonable estimations in and that
               | will test some of your key assumptions about the
               | software.
               | 
               | Simply throwing everything into a massive backlog and
               | grinding through it indefinitely is a recipe for failure,
               | since there is no opportunity to reflect on the larger
               | picture: the team is too busy focusing on the assembly
               | line and not paying attention to the product. And of
               | course you can still have "priority inversions" in
               | kanban, but unlike the finite sprint, you may go months
               | before you realize what you've done wrong.
               | 
               | As I said, there are appropriate times to do kanban, but
               | the situation you describe is not one of them.
        
               | seadan83 wrote:
               | That incremental goal is not mutually exclusive to
               | kanban. The difference is whether you are stating at the
               | beginning what goal you will hit, vs just working towards
               | that goal. It's the same thing, one creates pressure to
               | hit inaccurate estimates, the other just marches towards
               | the goal.
               | 
               | > Simply throwing everything into a massive backlog and
               | grinding through it indefinitely is a recipe for failure
               | 
               | Why is that? Scrum has the same backlog.
               | 
               | > since there is no opportunity to reflect on the larger
               | picture
               | 
               | Why is this the case? Kanban often says that you talk
               | about retrospective items any time they come up. You then
               | even can re-prioritize such items right when needed,
               | meaning you solve todays problem today, not tomorrow.
               | Retrospectives in scrum can often come too late (and
               | personally I've never really seen them work well, too
               | little, too late, and often compete against 'business'
               | priorities that always win out).
               | 
               | > And of course you can still have "priority inversions"
               | in kanban, but unlike the finite sprint, you may go
               | months before you realize what you've done wrong.
               | 
               | Kanban is less susceptible to priority inversion. You
               | keep the highest priority at top, it is evaluated more
               | often, and you pull from the top. Why does this mean that
               | months can go by? Why is that any more the case than say
               | scrum where you are encouraged to fix priority for weeks
               | at a time?
               | 
               | > As I said, there are appropriate times to do kanban,
               | but the situation you describe is not one of them.
               | 
               | I think the burden of proof is on you to state places
               | where scrum uniquely bests kanban. The statement that
               | kanban is best for places where ad-hoc work comes in
               | constantly is a case where scrum spectacularly fails. For
               | regular project work, kanban can be a great fit (I'd say
               | universally better than scrum, since planning is just
               | complete BS. Kanban is often little more than scrum
               | without the planning sessions and with far more frequent
               | prioritization, and you get to keep all of the same
               | retrospective & estimation tools)
        
               | vannevar wrote:
               | General reply to those who say, "but you can have
               | planning and retrospective in kanban!":
               | 
               | Sure, you can. But it requires discipline to actually do
               | it. It's very easy to get wrapped up in a task that runs
               | over, and when you multiply that tendency by everyone on
               | the team, the idea that it will actually get done
               | regularly in practice is optimistic, to say the least.
               | And if you are doing it regularly, together as a team, is
               | that really so different from a sprint?
               | 
               | Conversely, the notion that you can't change course mid-
               | sprint is simply wrong---you absolutely can. The concept
               | of sprint commitment is an acknowledgment that there is
               | value in completing something workable, even if it's the
               | wrong thing, as opposed to staying constantly in a state
               | of WIP. But if it's obvious you're doing something no
               | longer needed or technically feasible, nothing in scrum
               | says you have to complete the sprint as planned.
        
               | seadan83 wrote:
               | Piling on:
               | 
               | > So instead of setting a goal to create the complete
               | software system from the beginning, you set a goal that
               | can be accomplished within a fixed timeframe that people
               | can make reasonable estimations in and that will test
               | some of your key assumptions about the software.
               | 
               | In Kanban you can do all of that, except faster. Instead
               | of waiting for the arbitrary sprint boundary, you can
               | immediately evaluate an item that has landed in
               | production and immediately re-prioritize the backlog
               | based on any learnings. That would be compared to: we
               | will deploy this in sprint A, then in B we will see the
               | results, and then in sprint C we will work on our updated
               | items. The latter is a many week delay compared to
               | assigning tasks in strict sequence of
               | "ship/measure/adjust".
               | 
               | In the worst case, you ship at the beginning of a sprint
               | and then instead of working on the next priority thing
                | (that you learn from having shipped), you keep
               | working on the other sprint items. At that point, it
               | could be waterfall, working on items that you now know
               | are pointless, but you still have to anyway because it
               | was planned some time prior to do so.
        
               | jrib wrote:
               | Another perspective:
               | 
               | Kanban doesn't forbid regular retrospectives. You can
               | still pause frequently, reflect, and update your process
               | accordingly.
               | 
               | The purpose of sprints is to try to estimate the
                | completion of a release -- and, as progress is made, the
                | drift from that original prediction. Sprints in theory
                | let a manager estimate a team velocity. Together with
                | estimates for the tasks in a release, that, in theory,
               | gives one the ability to predict when a set of features
               | (a release) will be complete. I have yet to see this
               | actually work, but it could just be reflective of where I
               | have worked.
        
       ___________________________________________________________________
       (page generated 2022-09-05 23:00 UTC)