[HN Gopher] A useful productivity measure?
       ___________________________________________________________________
        
       A useful productivity measure?
        
       Author : rglullis
       Score  : 152 points
       Date   : 2024-05-06 07:39 UTC (15 hours ago)
        
 (HTM) web link (www.jamesshore.com)
 (TXT) w3m dump (www.jamesshore.com)
        
       | vouaobrasil wrote:
       | > In other words, in the absence of RoI measures, the percent of
       | engineering time spent on value-add activities is a pretty good
       | proxy for productivity.
       | 
       | What about using self-rated satisfaction of your workers in terms
       | of how happy they are with their engineering solutions? I find
       | that when I solve a really nice problem and add something that is
       | cool, or fix a very annoying bug, I feel very satisfied with the
        | result. Whereas, in the past (when I worked as a programmer), I
        | have produced things that seemed to make management happy but
        | that I wasn't satisfied with, and it seemed to me, at least,
        | that the results weren't as useful in the end.
        
         | yen223 wrote:
          | To put it bluntly, Goodhart's law will really kick your ass if
          | you go with that.
        
         | mjr00 wrote:
         | This is way too subjective; I've met way too many developers
         | throughout my career who loved to spend their time crafting
         | "perfect" code rather than deliver customer-facing value for
         | this to work as a measure that aligns with business objectives.
        
         | jimbokun wrote:
         | Useful to know, but different from productivity.
        
       | Archelaos wrote:
       | > * The value it's estimated to generate
       | 
       | > * The amount we're willing to bet
       | 
       | A reasonable bet should correspond to the (estimated) probability
       | of its outcome. So the amount someone is willing to bet should
       | directly reflect the value it is estimated to generate. Betting
       | higher or lower makes no sense.
        
         | datadrivenangel wrote:
         | It depends on your risk tolerance!
         | 
         | We may want to bet more for more likely outcomes, even if the
         | expected return is lower, if we value predictability and the
         | cost of a bad portfolio outcome is high.
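          | 
          | A tiny illustration with made-up numbers: the "long shot" below
          | has the higher expected value, but its outcome swings far more
          | widely, which a risk-averse portfolio may reasonably avoid.
          | 
          |   bets = {"safe bet": (120_000, 0.9), "long shot": (400_000, 0.3)}
          |   for name, (payoff, p) in bets.items():
          |       ev = p * payoff
          |       sd = payoff * (p * (1 - p)) ** 0.5
          |       print(f"{name}: EV=${ev:,.0f}, std dev=${sd:,.0f}")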
        
       | lifeisstillgood wrote:
        | I think there are two answers here. The first is treating
        | software development as a process - and using some form of
        | statistical process control to "manage" the process, reduce
        | waste, etc. This, it turns out, is exactly what Agile / Scrum
        | was / is all about. Remember those "retrospectives" we are
        | supposed to do - that's the point at which we whip out a diagram
        | and see if we are trending out of band.
        | 
        | It's fine. It works, really, and it does help reduce waste - and
        | in any sane world reduced waste is a proxy for productivity.
       | 
       | But no matter what that is only "doing things right". It is not
       | "doing the right thing".
       | 
       | At this point we can also say we have tactical and operational
       | parts under control (or at least monitored). The "doing the right
       | thing" (or strategy) is next and this means something most
       | business refuse to accept - that their grand strategy, their
       | Vision, might be total bollocks.
       | 
        | Searching for productivity in software teams is the new version
        | of blaming lazy workers on the factory floor - we are all
        | software companies now, and if you are not succeeding you can
        | either blame your crappy product market fit, your strategy, or
        | you can whip the employees harder. Guess which is easier.
        | 
        | So if you and the CEO need to make fucking appointments to have
        | an honest chat about strategy, something has gone terribly wrong
        | with your company's ability to respond quickly and strategically
        | - if you have a problem, it's not the employees that need to run
        | faster.
       | 
       | edit: yeah I started ranting there, sorry. The whole "takes ages
       | to get face to face time" is a red flag, but you can get around
       | it. As for the rest, I really respect the attempt to treat the
       | whole company as a system - not just the software team - the
        | product bet idea clearly sets out goals and places emphasis on
        | the whole org. That's really great, but as you say, "executives
        | prefer if other people are held accountable". Still, if they can
        | learn that a company can be programmable, then they might learn
        | to start programming.
        
         | robertlagrant wrote:
         | > this means something most business refuse to accept - that
         | their grand strategy, their Vision, might be total bollocks
         | 
         | Actually this is often talked about, but only internally to the
         | senior team. You generally don't want to be saying "actually
         | widgets might be a bad idea" which might horribly demoralise
         | your widget factory workers, when you might well conclude after
         | discussion and analysis that actually they're still a good
         | idea.
        
       | didgeoridoo wrote:
       | The "fatal flaw" of skipping maintenance to max out value-add
       | work seems like it could be addressed by ensuring that any
       | accumulated tech debt be properly accounted for, rather than
       | swept under the carpet.
       | 
       | You'd consider "productivity" then to be value-add work _minus_
       | identified tech debt. Since calling out tech debt would hurt
       | leadership's own metrics, you'd need a mechanism to allow
       | individual engineers to blamelessly (and probably anonymously)
       | report, validate, and attach a cost metric to this tech debt.
       | 
       | The org would then be incentivized to balance work that delivers
       | immediate value with work that represents an investment in making
       | the team more efficient and effective at delivering that value in
       | the long run.
        
         | datadrivenangel wrote:
         | That's the thing though! Quality :tm: is very hard to account
         | for, especially when the quality is in the system, tooling, and
         | process to build the quality end product.
         | 
          | And oftentimes technical debt isn't actually something you can
         | put on the balance sheet or bug tracker. It's all the little
         | investments in the future that are deferred or skipped. That
         | one code change is so minor you can phone in the review, you'll
         | write better documentation for that new feature in a few weeks
         | when you have some time, etc.
        
           | didgeoridoo wrote:
           | It's a good practice to note things for later and why they
           | should be done though, right? Even if it is _never_ intended
           | to be worked on, noting that you would write more docs for
           | this class but you don't have time is an important indicator
           | of productive capacity for leadership. If I start seeing a
           | lot of that as an executive, I should start to worry if we're
           | building our value-add on a foundation of sand.
        
           | duderific wrote:
           | > you'll write better documentation for that new feature in a
           | few weeks when you have some time
           | 
           | The trick here is to make a ticket for it. At least then, it
           | can be prioritized appropriately, instead of disappearing
           | into a void.
        
         | jimbokun wrote:
         | He mentions this in the article.
         | 
         | For important but non revenue producing aspects like security,
         | there are actually insurance markets now for breaches. The
         | insurance companies lower your premiums based on their
         | assessment of overall risk, making your exposure more
         | quantifiable.
        
           | didgeoridoo wrote:
           | He mentions this in the article to say that it's a "fatal
           | flaw" that might cause this methodology to not work in your
           | org. However, he also fatalistically assumes that skipping
           | _muda_ is a failure case, rather than just a realistic
           | response to balancing short vs long term considerations.
           | 
           | I suggest above that _muda_ should either 1) be worked on, or
           | 2) the fact that it's being deferred should be explicitly
           | captured. And, since there are competing interests
           | (leadership is accountable to net productivity while ICs are
           | not) the deferral capture needs to be anonymous to prevent
           | top-down pressure in the direction of ignoring tech debt
           | accumulation.
        
         | kayo_20211030 wrote:
         | Tech debt is categorically unquantifiable. Most of the time
         | it's more of a feeling than a number, and it's not an
         | accessible number ever. What's the ROI on paying down debt?
         | Hold as is, renegotiate, or extinguish? It's the same
         | calculation that goes into deciding which of the five thousand
         | value-add proposals to prioritize. The piece is unconvincing
         | for that reason. There's an assumption that TD is known, and an
         | implicit assumption that the ROI on its remediation is known.
         | Neither of those is true. Systems evolve to where they are,
         | with all their TD warts, because value-add was prioritized....
         | and then we have TD. I reckon we should just live with that
         | uncertainty, move the product forward, calculate ROI using the
         | same bogus productivity metrics we always have, and stop
         | inventing "better" systems which are just another form of
         | magic, but manage to suck up time and resources not required by
         | accepting on faith the old bogus metrics.
        
           | haskellandchill wrote:
           | It's really not unquantifiable. I read "How to Measure
           | Anything in Cybersecurity Risk" and it was an eye opener.
            | Using a table of risks and outcomes with associated
            | probabilities and 90% confidence intervals of dollar impacts,
            | we can quantify categories of technical debt.
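            | 
            | A toy version of such a table, with invented numbers, might
            | be crunched like this (crude point estimates; the book goes
            | further with Monte Carlo):
            | 
            |   # Hypothetical risk register: name -> (annual probability,
            |   # low and high of a 90% CI for dollar impact).
            |   risks = {
            |       "unpatched dependency": (0.10, 20_000, 500_000),
            |       "flaky deploy script": (0.40, 1_000, 30_000),
            |   }
            |   for name, (p, low, high) in risks.items():
            |       expected_loss = p * (low + high) / 2
            |       print(f"{name}: ~${expected_loss:,.0f}/year expected")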
        
             | kayo_20211030 wrote:
             | If "Cybersecurity Risk" were the only form of technical
             | debt, we'd be just fine(?). Or, at least, we'd have some
             | sort of metric. It wouldn't be a good one, but it'd be
             | there. Chance of a breach: 1%. Existential or not? Probably
             | not. Cost of mitigation? Probably small. Worth addressing?
             | Mostly no, unless you're a regulated entity; then it's
             | mandatory. Quantifiable, for this narrow case, but what of
             | the rest?
        
               | esafak wrote:
               | Apply the same mentality to other things. If the
               | cybersecurity folks can quantify risk so can you. Are you
               | keeping track of your supply chain? How modular is your
               | code? How easy to refactor is your code? You could think
               | of reasonable metrics to measure various aspects of
               | technical debt. It won't be perfect but it's better than
               | nothing.
        
         | lelanthran wrote:
         | > The "fatal flaw" of skipping maintenance to max out value-add
         | work seems like it could be addressed
         | 
         | Who says it's a flaw? And even if it is, who says it needs to
         | be addressed?
         | 
         | It's all contextual: tech debt _used_ to be a flaw that could
          | destroy a product, but nowadays I'm seeing teams rewriting
         | components of their products every 18 months in whatever new
         | fad seems to come along.
         | 
         | Why care about debt when it's going to be written off in the
         | future?
         | 
         | And even if it isn't, the person who accumulated the debt did
          | so by adding features - he's the man that _delivers_, so he
         | gets to go up the ladder
         | 
         | It's not a fair world: anyone who actually cares about
         | bugcount, product quality, customer satisfaction and
         | sustainable velocity just isn't going to get recognised for the
         | fires they prevented.
        
           | didgeoridoo wrote:
           | > Who says it's a flaw?
           | 
           | The author of the article.
           | 
           | > who says it needs to be addressed?
           | 
           | I think you've misread my post. What is being "addressed"
           | isn't technical debt itself, but rather the author's proposed
           | failure mode of totally ignoring _muda_ to focus on overly-
           | incentivized "value add", which he correctly forecasts will
           | slowly destroy the product and company.
           | 
           | I'm saying that this doesn't have to be a failure mode, so
           | long as you acknowledge and record when _muda_ has been
           | skipped, and take that into consideration when holding
           | leadership accountable to productivity metrics.
        
       | roenxi wrote:
       | I really like this article so I am sad to be cynical about it.
       | Nevertheless. If you're engaging in C-suite politics as described
       | in this article, what on earth would be the motivation for
        | writing all this in public? I doubt the situation will evolve
        | favourably for Mr Shore if the leadership team starts reading
        | this.
       | 
       | Full agreement with the perspective though. CEOs (& consultant
       | friends) generally have some extremely reasonable expectations
       | about software engineer productivity based on experiences in
       | manufacturing, primary industries, healthcare, service and public
       | sectors. Surely we need accountability and productivity metrics!
        | It is a long and painful process that may never end, watching
        | them gather the evidence needed to learn that software engineer
        | productivity is different. A sad history repeating over and over
       | again like clockwork.
       | 
       | This CTO seems to be finding a golden path for how to manage
       | expectations without beating that over people's heads and I'm
       | taking notes. I don't think it'll work out exactly the way he
       | hopes but I love the attempt at making his team productive.
        
         | simonw wrote:
         | If I was a CEO with a VP who wrote something this realistic and
         | honest, my response would be to pressure the rest of my execs
         | to give me that same level of quality information. Breath of
         | fresh air!
        
         | AtlasBarfed wrote:
         | One of my long standing beefs with C-suites and their MBAs and
         | whatever meat factory grinds them out is the continued lack of
         | comprehension of basic software and IT management.
         | 
          | Back in the early 2000s, this could be saliently characterized
          | by the exuberance with which many CEOs bragged about how IT-
          | ignorant they were. I saw "my secretary prints out my emails"
          | written proudly in print many a time in CEO-worship magazines
          | like Fortune and the like.
         | 
          | Every decade of continued IT penetration and performance
          | benefits in business reinforces the principle: the health of
          | your business is strongly tied to its IT health.
         | 
          | BUT MY COMPANY JUST STAMPS WIDGETS. Ok, look. MBAs long ago
         | accepted that every business, regardless of what it does, needs
         | two things: accounting and finance. So every MBA gets a decent
         | education on those.
         | 
         | Well, hate to tell you, MBAs need to know the fundamentals of
         | IT.
         | 
          | Basically, C-suites not understanding that productivity
          | measures in IT are hard is stupid not only because it is a
          | basic aspect of IT management, but it's a bit more disturbing
          | than that:
         | 
         | YOU CAN'T MEASURE MANAGER PRODUCTIVITY EITHER. How does
         | traditional MBA measure middle managers and those types of
         | orgs? That is rife with all the problems of measuring
         | developers, not the least of it being the "juke the stats"
         | universal phenomenon. That's why the most important single
         | indicator of manager ability is: headcount. Headcount headcount
         | headcount. The second is size of budget: bigger bigger bigger.
         | 
          | Of course everyone that has worked in companies knows the
         | hilarity of what that produces: bloated orgs, pointless
         | spending to use up a budget otherwise it gets cut, building
         | empires, etc.
         | 
          | The essence of developers that C-suites need to understand is
          | that they are not factory workers: they are in fact MANAGERS.
          | They just don't manage people; they manage virtual
          | employees/subordinates (software). Go ahead, ask any software
         | dev in an org what they do. Generally, somewhere in there is a
         | bunch of acronyms and the fact that they ... wait for it ...
         | MANAGE those acronyms. Krazam's microservices is a perfect
         | example of this. What is that long diatribe essentially about?
         | MANAGING all those services.
         | 
         | That gets to another major undercurrent of IT. Managers want to
         | place IT employees in the "worker bee" category. They don't
         | want IT workers to be managers, and entitled to the great
          | benefits accorded the management class. The long running IT
         | salary advantage in the US, and the constant ebb and flow
         | between management and IT "working" to push that down is a
         | major social undercurrent here.
        
           | 6510 wrote:
           | I like to compare it to illiteracy.
        
             | OJFord wrote:
             | Yeah, that's quite common? 'computer literate' etc.? It's
             | even quite dated as a term because it's so uncontroversial
             | and accepted.
        
               | 6510 wrote:
                | Right, but I think that used to refer to using
               | applications. Developing them is much more like real
               | illiteracy. You can look at a bug tracker but the todo
               | list might as well be in Chinese.
        
           | vundercind wrote:
           | > YOU CAN'T MEASURE MANAGER PRODUCTIVITY EITHER.
           | 
           | This is so much the case that on a Freakonomics episode I
           | caught on NPR the other day about economists attempting to
           | prove whether the Peter Principle is true (TL;DR probably
           | yes, and likely more so than you'd even guess--but also this
           | is all very hard so maybe don't take it too seriously) it
           | seemed that this is something so difficult that they _barely
           | even try to do it_ and when they do the measurements are so
           | narrowly-focused and come with so many caveats that one
           | hesitates to call them _useful_.
           | 
           | Like best case you manage to find that you can measure one of
           | several plausible performance indicators and if you've picked
           | the exact right kind of job where _worker_ productivity can
           | kinda-meaningfully be measured (it often cannot) then maybe
           | you can conclude some things with a large enough dataset, but
           | then connecting that to any particular behavior on the part
           | of the managers is _another_ big hurdle to overcome before
           | you can try to, say, develop effective scientifically-sound
           | training, and at that point you've likely wandered into "just
           | give up, this is too messy" territory.
           | 
           | [edit] and this:
           | 
           | > That gets to another major undercurrent of IT. Managers
           | want to place IT employees in the "worker bee" category. They
           | don't want IT workers to be managers, and entitled to the
            | great benefits accorded the management class. The long
           | running IT salary advantage in the US, and the constant ebb
           | and flow between management and IT "working" to push that
           | down is a major social undercurrent here.
           | 
           | Nail. Head.
           | 
           | Managers of developers act piss-pants afraid of us properly
           | joining the (socially speaking) upper-middle (cf Fussell) or
           | professional class, which the MBA set are busy trying to
           | eliminate everywhere else (lawyers, doctors, college
           | professors) so they're the last ones standing.
           | 
           | This shit is why micromanagement PM frameworks and packing us
           | into loud visually-messy sardine can open offices is so
           | popular. Dropping that stuff would be table-stakes for our
           | moving up the social status ladder. Adopting it pushes us
           | _way_ down the pecking order.
        
           | robertlagrant wrote:
           | > Well, hate to tell you
           | 
           | Why?
        
           | 6510 wrote:
            | Wondering where the real problem lives, I can't help but get
            | back to the disconnect between education and employment.
            | Isn't it that most curricula have a bunch of borderline
            | nonsense and a bunch of things you end up applying every day?
           | 
           | Companies pay taxes, those go towards education even if it is
           | indirect by organizing life sufficiently to make training
           | possible. What formula can be had by which companies may rate
           | chunks of education or nudge it towards more useful
           | knowledge? Do they even know what they need?
           | 
           | Oddly schools measure productivity all the time. It seems a
           | hilarious puzzle.
        
         | hinkley wrote:
         | Blog posts, like long showers, are often for arguments you've
         | already lost.
        
         | hn_throwaway_99 wrote:
         | > If you're engaging in C-suite politics as described in this
         | article, what on earth would be the motivation for writing all
         | this in public? I doubt the situation will evolve favourably to
         | Mr Shore if the leadership team starts reading this.
         | 
          | I didn't get that impression _at all_ from this blog post. I
          | didn't see any "C-suite politics" really at all in this
          | article:
         | 
         | 1. At the end of the day, the C-suite is responsible for making
         | the business successful. So it's reasonable to ask for _some_
         | level of productivity measure, no matter how much engineering
         | objects.
         | 
         | 2. I thought the author was extremely insightful about (a)
         | trying to get at the root of what execs really wanted, (b)
         | acknowledging that calculating engineering productivity is
         | notoriously difficult, but then (c) coming up with a best,
         | honest estimate, while still highlighting the potential future
         | pitfalls and the dangers of Goodhart's Law.
         | 
         | In short, I read this article as exactly the kind of
         | engineering leadership a good C-suite would want to keep around
         | and promote.
        
         | suprjami wrote:
         | I was also cynical at first but it ended up as a triumph over
         | C-suite politics.
         | 
         | This person took a nonsense buzzword "productivity" and turned
         | it into actual measurable numbers which make sure the company
         | is moving forward, and linked those numbers to enjoyable
         | developer activities.
         | 
         | This seems like a good outcome to me. The C-suite actively
         | don't want the developers spending time fixing bugs and doing
         | drudge work. Developers are incentivised to avoid bugs with
         | good code and testing. They can knock back bad implementations
         | because those will lead to more bugs which the C-suite have
         | said they don't want.
         | 
         | Seems like a win to me?
        
       | thcipriani wrote:
       | Seems like the author of this is aware of Goodhart's law, but it
       | bears repeating: both of the proposed real measures (RoI and
       | value-add capacity) could come at the expense of uptime and
       | stability. John Doerr talks about balancing OKRs, so you could
        | have your product bets and... something about stability: hitting
       | SLAs, whatever.
       | 
       | And the term "product bets" makes the leadership team sound like
       | Mortimer and Randolph Duke.
        
       | haskellandchill wrote:
       | What about the Bayesian methods shown in "How to Measure
       | Anything"? They have been applied to Cybersecurity ("How to
       | Measure Anything in Cybersecurity Risk" in a very thorough and
       | convincing manner. It looks like the business around it is trying
       | to apply it to product management
       | (https://hubbardresearch.com/shop/measure-anything-project-
       | ma...). Basically the idea is when things are hard to measure we
       | should not abandon quantitative scales and use qualitative ones
       | (like t-shirt sizes) but instead use probabilities to quantify
       | our uncertainty and leverage techniques like bayesian updates,
       | confidence intervals, and monte carlo simulations.
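        | 
        | A minimal sketch of the Monte Carlo half of that, with my own
        | toy numbers and assuming a lognormal fitted to the 90% CI:
        | 
        |   import numpy as np
        | 
        |   rng = np.random.default_rng(42)
        | 
        |   def annual_loss(p_event, ci_low, ci_high, n=100_000):
        |       # Lognormal whose 5th/95th percentiles match the 90% CI.
        |       mu = (np.log(ci_low) + np.log(ci_high)) / 2
        |       sigma = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.645)
        |       impact = rng.lognormal(mu, sigma, n)
        |       occurred = rng.random(n) < p_event
        |       return impact * occurred
        | 
        |   # Hypothetical risks: (probability/year, 90% CI of impact $).
        |   risks = [(0.05, 50_000, 2_000_000), (0.20, 10_000, 250_000)]
        |   total = sum(annual_loss(p, lo, hi) for p, lo, hi in risks)
        |   print(f"mean annual loss ${total.mean():,.0f}, "
        |         f"95th pct ${np.percentile(total, 95):,.0f}")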
        
         | apwheele wrote:
         | This is not inconsistent with How to Measure Anything IMO (I
         | like that book as well). The biggest issue to me is that he
         | does not define actual follow ups on ROI -- it is all estimated
          | in this framework. So while it is all good to define how to
          | prioritize, it is not helpful retrospectively for seeing
          | whether people are making good estimates.
         | 
         | My work very rarely is a nicely isolated new thing -- I am
         | building something on-top of an already existing product. In
         | these scenarios ROI is more difficult -- you need some sort of
         | counterfactual profit absent the upgrade. Most people just take
         | total profit for some line, which is very misleading in the
         | case of incremental improvements.
        
           | haskellandchill wrote:
            | The problem is that the muda should have expected values
            | associated with it. Bugs and security vulnerabilities do
            | cost money; these are the 90% confidence intervals of dollar
            | impact from How to Measure.
        
         | dentemple wrote:
          | Do you have any links to back up your information that
          | _doesn't_ cost $17,250 to view?
        
           | haskellandchill wrote:
           | Here is a blog post: https://www.linkedin.com/pulse/three-
           | project-management-meas...
        
             | dentemple wrote:
             | thanks
        
       | jacknews wrote:
       | "turns out"
       | 
       | It turns out that 'leadership' are often just a bunch of clueless
       | 'bullies', as in just the top of a dominance hierarchy.
       | 
       | Summoned to the CEO's house to discuss the issue, seriously? An
       | obvious attempt to intimidate, imho.
        
         | jimbokun wrote:
         | So CEOs should not ask their direct reports questions about the
         | work their teams are doing?
        
           | jacknews wrote:
           | "It's the way you tell 'em"
        
           | karaterobot wrote:
            | The weird power move is that he made his direct report fly
            | out to his home. The article makes it clear this was not
            | optional.
        
             | jdlshore wrote:
             | You're making a lot of assumptions. The reality is that I
             | was going to be in that city for other reasons, and the
             | visit to his house--which I appreciated and wanted--was
             | tacked on to that trip, and followed by a party the CEO
             | hosted for everyone else who was also in town.
        
               | jacknews wrote:
               | even so, was it optional, or no?
        
               | karaterobot wrote:
               | Well, okay, I'm not going to contradict your personal
               | experience. But here's what you said in your article:
               | 
               | > It started half a year ago, in September 2023. My CEO
               | asked me how I was measuring productivity. I told him it
               | wasn't possible. He told me I was wrong. I took offense.
               | It got heated.
               | 
               | > After things cooled off, he invited me to his house to
               | talk things over in person. (We're a fully remote
               | company, in different parts of the country, so face time
               | takes some arranging.) I knew I couldn't just blow off
               | the request, so I decided to approach the question from
               | the standpoint of accountability. How could I demonstrate
               | to my CEO that I was being accountable to the org?
               | 
               | So what I knew is that you said you didn't want to do
               | something, you had an argument, he asked you to come to
               | his house, which is not in your home town, to talk things
               | over, and you knew you couldn't just blow off the
               | request, and had to demonstrate your accountability to
               | the organization. True or not, can you see how one might
               | draw the conclusion that you were forced to present
               | yourself to the CEO in his house in order to show that
               | you were going to toe the line?
        
             | tbihl wrote:
             | Maybe I'm assuming levels of high functioning and healthy
             | relationships that are unwarranted, but I read this article
             | totally differently.
             | 
             | You're misunderstanding leadership roles, where there's
             | very little 'work-life separation', only 'work-life
             | balance'. You have immense freedom on your calendar, but
             | you also frequently think about work when at home and
             | understand that work can call you back at any hour.
             | Depending on the type of org, you're reasonably likely to
             | spend weekend time at meals or other activities with your
             | coworkers, often with wives and kids along for the ride.
             | Your coworkers are broadly agreeable but, if you had a
             | serious personality clash, you probably wouldn't stick
             | around for that long because such a living situation would
             | become untenable.
             | 
             | In this context (and the remote work situation), his house
             | is presumably a nice place for hosting, and 'come to my
             | house to work on it' is the good-boss version of, 'well
             | unfuck yourself and get back to me in a week with an
             | answer, and it better be good.' It's inviting collaboration
             | and focused time, and the CEO's responsible, agreeable
             | coworker (author of the article) accepted that generous
             | offer and made use of it.
        
               | jdlshore wrote:
               | Yes, that's exactly right. I appreciated the CEO's offer
               | and was happy to accept. He added on an offer to take me
               | boating on the lake near his house. The actual meeting
               | waited until a bunch of us were in town for an unrelated
               | meeting, and the offer of boating turned into a party for
               | all employees who were in that town.
        
           | AnimalMuppet wrote:
           | False dichotomy. (And I hate the trope that your comment
           | embodied.)
           | 
           | But a CEO probably should know enough to know that certain
           | things are not directly measurable, and should not push their
           | direct reports into trying to directly measure those things.
           | Otherwise the CEO gets meaningless measurements, and then
           | tries to use the meaningless measurements to steer the
           | business.
        
             | jimbokun wrote:
              | Google, among other companies, has had great success
              | quantifying things that most other companies assumed were
              | not quantifiable.
        
               | jacknews wrote:
               | do you have any data supporting that their success
               | derives from quantifying everything, rather than just
               | being in the right place at the right time?
               | 
               | anecdotally they seem to have had quite a few failures
               | resulting from 'blind measurement'.
        
               | jimbokun wrote:
               | Sorry, I stand corrected. All of Google's success is due
               | to luck, and none of it due to the talent and skill of
               | their people or the decisions they made.
        
       | exabrial wrote:
        | I have a simple proposal for why SEs are eternally strapped for
        | time: upgrades.
       | 
       | No not security patches. Upgrades. Like 'oh the newest best
       | version of React was released yesterday and I've just GOT to have
       | those and management doesn't understand how productive I'll be
       | and they shorted us on our last upgrade by not letting us go all
       | in'
       | 
        | This endless treadmill of half-baked framework features and UI
        | refreshes streaming in is the root problem.
       | 
       | I really think a team that can close the gate and seal the seams
       | from the problem would be the most effective.
       | 
       | That would mean your most important KPIs would be something like:
       | 
       | * number of new features imported from frameworks (close to zero
       | is ideal). These are expensive, like millions of dollars each.
       | 
       | * number of security patches applied/response time/etc (maximize
       | these)
       | 
        | * eliminate things with no or short LTS; measure weighted LTS:
        | (LOC using the framework multiplied by the framework's LTS)
       | 
        | Now this in and of itself isn't sufficient to measure an
        | organization's productivity, but these would keep it on the
        | straight and narrow so other measures would become effective.
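        | 
        | A back-of-the-envelope version of that weighted-LTS number, with
        | made-up frameworks and figures:
        | 
        |   # LOC depending on each framework, and years of LTS remaining.
        |   frameworks = {
        |       "framework_a": (120_000, 3.0),
        |       "framework_b": (15_000, 0.5),   # short LTS drags it down
        |       "in_house_lib": (40_000, 10.0),
        |   }
        |   total_loc = sum(loc for loc, _ in frameworks.values())
        |   weighted = sum(loc * lts for loc, lts in frameworks.values())
        |   print(f"LOC-weighted LTS: {weighted / total_loc:.1f} years")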
        
         | mjr00 wrote:
         | This is far too narrow of a scope to be broadly applicable. The
         | industry in general does not spend nearly as much time
         | upgrading packages as you imply, and your metrics don't make
         | sense either. How do you measure the number of "features"
         | imported from a framework?
        
         | simonw wrote:
         | I'm with Charity Majors on this one:
         | https://twitter.com/mipsytipsy/status/1778534529298489428
         | 
         | "Migrations are not something you can do rarely, or put off, or
         | avoid; not if you are a growing company. Migrations are an
         | ordinary fact of life.
         | 
         | Doing them swiftly, efficiently, and -- most of all --
         | _completely_ is one of the most critical skills you can develop
         | as a team. "
         | 
         | See also this, one of my favourite essays on software
         | engineering: https://lethain.com/migrations/
        
           | Aqueous wrote:
           | "Doing them swiftly, efficiently, and -- most of all --
           | completely is one of the most critical skills you can develop
           | as a team."
           | 
           | That all sounds great. However, I'd like to understand what
           | teams are actually able to do this, because it seems like a
            | complete fantasy. _Nobody_ I've seen is doing migrations
           | swiftly and efficiently. They are giant time-sucks for every
           | company I've ever worked for and any company anyone I know
           | has ever worked for.
        
             | simonw wrote:
             | That's why it's celebrated as a valuable skill - it's hard!
             | 
             | Did you read this? https://lethain.com/migrations/
             | 
             | I have decades of experience and I'm just about reaching
             | the point now where a migration like this doesn't
             | intimidate me.
        
       | nine_zeros wrote:
       | Not this again.
       | 
       | Maybe one day they should measure the amount of time they spend
       | on this and improve their own productivity by not wasting so much
       | time chasing a fool's errand.
        
       | karol wrote:
       | Lines of code. Case closed!
        
         | ang_cire wrote:
         | I really hope this is sarcastic.
         | 
         | Is a developer that takes 2,000 LoC to fix an issue that
         | another takes 200 to fix 10x more productive, because I'm
         | pretty sure it's the opposite?
         | 
         | Productivity has to include "efficiency" somewhere in it, and
         | LoC doesn't capture efficiency at all.
        
           | khazhoux wrote:
           | Of course if you just take point samples, it fails as you
           | say.
           | 
           | But I don't think people really understand what LOC stats
           | look like in practice for engineering teams. The pattern I've
           | consistently seen --across several companies and many teams--
           | is that most low-performers simply don't commit very much
           | code every month/quarter/year/whatever. I'm not talking about
           | the senior engineers whose job is architectural design and
           | advising and code review, and so on, who contribute a ton to
           | the project without a line of code. I'm talking here only
           | about junior- or mid-level software engineers whose job is
           | supposed to be hands-on-keyboard code. You will always find a
           | number of them who simply push very little code of any
           | significance, compared to their peers. This will be visible
           | in their commits.
           | 
            | So while it's true that there's no function
            | _productivity=f(LOC)_, there is still information to be
            | gleaned from reviewing the commits (in detail) of everyone on
            | the team, and often one will see a correlation: the people
            | who are not delivering much value to their team (they fix
            | only a few bugs, or they take a very long time to implement
            | things, etc.) more often than not have very low commit stats
            | overall.
        
             | OJFord wrote:
             | If you insisted on good git hygiene, then I'd have a lot
             | more time for commit count as a productivity metric, and
             | even LoC to a lesser extent.
        
             | ang_cire wrote:
             | Sure, this is useful for the outliers who just straight
             | don't work as much, but much less useful for telling the
             | difference between a busy person and a productive person.
             | 
             | I have seen bad devs who make daily commits with tens or
             | hundreds of LoC, but whose tools never actually convert to
             | production use because they are mired in development hell,
             | until someone pulls the project away from them and hands it
             | to someone else.
             | 
             | Devs who are so bad that they legitimately spend weeks
             | trying to make a tool that I then built in 2 days.
             | 
             | I think LoC irks me so much, because if I finish someone's
             | "2-month" project in a week, LoC optimization effectively
             | penalizes me for it.
        
       | jimbokun wrote:
       | I feel like it's easier to measure for large scale back end
       | systems. Where you can look at things like throughput,
       | scalability, failure rates, latencies, uptime, etc. So to an
       | extent developer productivity can be thought of as the derivative
       | of improvement in those metrics.
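        | 
        | A crude sketch of that "derivative" idea with toy numbers,
        | comparing one quarter's operational metrics to the previous
        | quarter's:
        | 
        |   last_q = {"p99_ms": 480, "error_rate": 0.012, "uptime": 0.9990}
        |   this_q = {"p99_ms": 390, "error_rate": 0.008, "uptime": 0.9995}
        |   for metric in last_q:
        |       # Lower is better for latency and errors, so flip the sign.
        |       sign = 1 if metric == "uptime" else -1
        |       delta = this_q[metric] - last_q[metric]
        |       print(f"{metric}: {sign * delta / last_q[metric]:+.1%}")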
       | 
       | For front end development, you need to look at user satisfaction
       | measures. Where it's a bit more difficult to quantify which
       | contributions are most responsible.
        
         | senorrib wrote:
         | I feel like _satisfaction_ is a product measure, not
         | engineering. You can measure the exact same things on frontend
         | and backend.
        
       | talkingtab wrote:
       | Understanding software development is simple:
       | 
       | 1. I want you to build me a borgizaf. How long will it take you
       | and how can I measure your productivity? Now I want you to change
       | the following line of code to print "bye" and answer the same two
       | questions. `console.log("hello");`
       | 
       | Any answer to the first question has a certainty of 0%. The
       | answer to the second is almost 100% certain. The certainty is
       | directly dependent on whether you know what you are building AND
       | whether you have direct experience having completed the same
       | tasks.
       | 
       | 2. Given some task that is to some extent unknown (see above),
       | how do we proceed? We reduce the uncertainty/degree. This leads
       | directly to "hello world". You start with a program that _shows_
       | what a borgizaf can do and show it to the people who want it. If
       | it is wrong, evolve it. Iterate.
       | 
       | The problem here is that often the people who want the borgizaf
       | can't really tell you what it is. _Especially_ if it has not been
       | made before. But they can tell you what it is not. (See Notes on
       | the Synthesis of Form). Even the people who want a borgizaf may
       | not know what it is. And it may _change_.
       | 
       | 3. In order to increase "productivity" we often have teams of
       | people working on aspects of the same problem. After years of
       | labor each part is done and one day the thing is assembled.
       | Unfortunately, the drive team did not tell anyone they were using
       | front wheel drive, and everyone else thought it was rear wheel
        | drive. Left and right driver position, etc. You don't believe
        | this is possible? One word: "Ingres".
       | 
        | Iteration must be on the whole. Unless you are iterating on the
        | whole, your certainty will _never_ be 100%. Because the pieces
        | might not fit together.
       | 
        | 4. The speed at which you develop a borgizaf is dependent on the
        | speed of iterations and the degree to which each iteration
        | reduces uncertainty.
       | 
       | Really, I think that is all there is to it.
        
         | beretguy wrote:
         | Borgizaf. Nice. I'm borrowing it.
        
         | AnimalMuppet wrote:
         | It's a bit worse than that. The borgizaf is often defined by
         | business development, or somebody like that. They decide they
         | need one after a bunch of customer interviews. But the
         | customers don't actually know what they need. So the business
         | development people are trying to describe something that the
         | users need but don't know that they need, and that doesn't yet
         | exist.
         | 
         | Yeah, the main problem is uncertainty, not lines of code to
         | write. You measure progress mostly by how much you reduce
         | uncertainty.
        
           | talkingtab wrote:
           | Yes. Often the need is best understood through iteration, but
           | certainly having "eyes on" by the ultimate customer helps.
           | Once I was told by marketing that security was not important.
           | The customers got a prototype and screamed bloody murder. I
           | think the moral of the story is to view the development of a
            | product as a collaboration between all parties. _All_,
            | including customers.
           | 
           | If there are silos of concerns then you can end up with a
           | beer distribution game.
           | 
           | And this: "You measure progress mostly by how much you reduce
           | uncertainty." is very succinct
        
         | Rustwerks wrote:
         | Software engineering is search.
         | 
         | The iterative approach described here to finding a 'good'
         | borgizaf is simulated annealing. Make an initial guess to the
         | solution, then start with changes that are large early on and
         | which increasingly refine as you get closer to your goal.
         | 
         | Metropolis-Hastings algorithm works better for situations for
         | which you know even less about what a good borgizaf looks like.
         | Perform many experiments (code changes) and toss out all of the
          | ones that don't work. Science works this way. An automated LLM
          | coder would probably also go this route, as trying many things
          | would be more heavily automated.
         | 
         | Large systems tend to look more like the genetic algorithm. The
         | space is diced up into individual components and then each
         | 'gene' is optimized in parallel. For example if you were trying
         | to build a Linux distribution you'd have several hundred
         | packages and then for each release the packages could improve
         | or be entirely replaced with better versions (or swap back and
         | forth as they competed).
         | 
         | Of course there are other search strategies that can be
         | employed. Search is still an important area of research.
         | 
         | https://en.wikipedia.org/wiki/Simulated_annealing
         | https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_al...
         | https://en.wikipedia.org/wiki/Genetic_algorithm
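          | 
          | To make the simulated-annealing analogy concrete, here is a toy
          | sketch (a made-up 1-D "design space", not a claim about real
          | software search):
          | 
          |   import math, random
          | 
          |   def cost(x):
          |       # Stand-in for "how bad is this design": many local minima.
          |       return x**2 + 10 * math.sin(3 * x)
          | 
          |   x, temp = random.uniform(-10, 10), 10.0
          |   while temp > 1e-3:
          |       # Big jumps early, smaller refinements as we cool down.
          |       candidate = x + random.gauss(0, temp)
          |       delta = cost(candidate) - cost(x)
          |       # Always accept improvements; sometimes accept regressions.
          |       if delta < 0 or random.random() < math.exp(-delta / temp):
          |           x = candidate
          |       temp *= 0.999
          |   print(f"settled near x={x:.2f}, cost={cost(x):.2f}")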
        
           | Thrymr wrote:
           | Sure, everything is an optimization problem, the hard part is
           | defining your cost function, especially if the borgizaf is
           | trying to solve an ill-formed business problem.
        
       | MP_1729 wrote:
       | Beware Goodheart's Law: "when a measure becomes a target, it
        | ceases to be a good measure". If your goal is to stop wasting
        | time solving bugs, I'm sure you're going to be able to do that.
       | 
        | You should have an important counter-metric to check that you're
        | not messing up the software. It could be the number of reported
        | bugs, crashes in production, etc.
        
         | littlestymaar wrote:
         | Nitpick: It's Goodhart (without the "e").
        
         | PaulKeeble wrote:
          | Then it becomes the Challenger scenario. Various pieces are
          | failing but the whole mission succeeds, so everyone ignores the
         | known risks because management is interested in their return on
         | investment. That works right up until the rocket explodes and
         | suddenly there are lots of external people asking serious
         | questions. Boeing is going through the same thing having
         | optimised for ROI as well and its planes are now falling apart
         | on a daily basis.
         | 
         | Who always gets in trouble for this? More often than not the
         | developers and operators who in a high pressure environment
         | optimised what they were told to optimise and gamed the metrics
         | a little so they weren't fired or held back in their careers.
        
         | Izkata wrote:
         | Naming it "muda" helps push it that way, too: If any of those
         | higher-ups decide to look up the word, they'll see that you're
         | calling bugfixing "pointless work".
        
         | hinkley wrote:
         | Professional athletes have a lot of telemetry on them. But some
         | of that telemetry makes sense during training, and maybe makes
         | more sense for a brief period of time while they work on
         | technique.
         | 
         | You focus on something intensely for a little while, get used
         | to how it feels, then you work on something else. If your other
         | numbers look wrong, or if it's been too long, we look at it
         | again.
        
       | qznc wrote:
       | First, specify more clearly what you mean by "productivity" to
       | avoid misunderstandings:
       | https://dl.acm.org/doi/10.1145/3454122.3454124
       | 
       | It can be about satisfaction & well being, performance, activity,
       | collaboration & communication, or efficiency & flow. These are
       | all different things.
        
       | ChrisMarshallNY wrote:
       | I'm of the opinion that it isn't really possible to even _define_
       | "productivity," let alone _measure_ it.
       | 
       | Different layers of the org, have different expectations for
       | "productivity," and the problems generally occur, when
       | conflicting definitions collide.
       | 
        | I have come to learn that there's really no substitute for hiring
        | really good, experienced engineers, and then managing them with
        | Respect. I know that the SV "Holy Grail" is to come up with The
       | Perfect Process, whereby we can add a monkey, pay them bananas,
       | treat them like circus animals, and come out with perfect code,
       | but I haven't seen anything even close to that, succeed.
        
       | karmakaze wrote:
       | My TL;DR - estimate (sustainable) Value-add capacity.
       | 
       | The current/instantaneous value-add capacity is what the
       | team/department can deliver in a short-time without consideration
       | of hidden costs (tech debt). The 'sustainable' modifier adjusts
       | that to be a long-term average.
       | 
       | This is something developers and team managers will certainly
       | know, but is good to see framed from the top-down (though I don't
       | know how many C-level execs know tech debt 'in the gut'). Usually
       | it's ignored until it gets so bad a rewrite is required.
        
       | javier_e06 wrote:
        | The article reminds me of an ad on the inside back page of Dr.
        | Dobb's Magazine. It was a Microsoft ad with two pictures in it.
        | 
        | In the first picture there are two developers in a conference
        | room late at night. There is an open pizza box with half a pizza
        | and a board with a software design that looks like spaghetti. The
        | developers' hair is messy. They looked stressed.
        | 
        | The second picture is of a group of people walking into the same
        | conference room with a birthday cake on a bright sunny day. The
        | two developers look happy. The manager is somewhere in the
        | picture, I think. It's an MS .NET ad.
        | 
        | I could not find the ad on the Wayback Machine.
       | 
       | Companies attach yearly bonus to production/profits meeting
       | goals/targets to increase productivity. Whatever productivity is.
        
       | barfbagginus wrote:
       | Would it help to demand that your non-technical leadership knows
       | category theory, queuing theory, version control, and
       | infrastructure automation?
       | 
       | One way that helps me is that I have nobody on the C suite but
       | me. Because when some lazy business jerk with no knowledge of
       | even a basic functor starts telling me what I should do to be
       | accountable to them, I give them a big fat boot.
       | 
       | Obviously a downside is that scaling my organization to more than
       | one head count is going to be a little difficult!
        
       | llmblockchain wrote:
       | I have a method. Remove your headphones and listen to your
       | teammates. I've always used the "fucks per-hour" as a measurement
       | of productivity.
       | 
        | For the uninitiated, the more fucks, the more productivity...
        | i.e., they are looking at the code and trying to do something.
       | 
       | True story-- at my first job the fucks were so bad they moved
       | some of us into our own offices.
        
         | scruple wrote:
         | I love this. When they RTO'd us at work, I ended up sitting
         | next to someone who isn't on our team and who isn't an
         | engineer. He complained to his manager that I curse a lot and
         | when they asked my manager to have me tone it down they moved
         | him instead.
         | 
         | I'm not being loud or particularly vulgar, just lots of "what
         | the fuck?" and "what is this shit?" and "how the hell does this
         | even work?" being involuntarily mumbled to myself.
        
           | philk10 wrote:
           | https://www.osnews.com/story/19266/wtfsm/
        
       | OutOfHere wrote:
       | A proto-measure is whether the software has active users, whether
       | as a library or application or service, etc., and whether
       | internal or external. That's after a substantial initial seed
       | investment. If it still has no active users, it's time to change
        | focus. If it has active users, then, assuming it ultimately rolls
       | up into a revenue generator, it must continue receiving funding.
       | This is not a full measure, but it's a foundation.
        
       | asdasdsddd wrote:
       | I feel like this doesn't account for the fact that not doing muda
       | makes value-add work take longer
        
       | Aurornis wrote:
       | Kudos to the author for being honest about the flaw in this
       | metric:
       | 
       | > It's ridiculously easy to cheat this metric. Even if you
       | correctly categorize your muda--it's very tempting to let edge
       | cases slide--all you have to do is stop fixing bugs, defer some
       | needed upgrades, ignore a security vulnerability... and poof!
       | Happy numbers. At a horrible cost.
       | 
       | > Actually, that's the root of my org's current capacity
       | problems. They weren't cheating a metric, but they were under
       | pressure to deliver as much as possible. So they deferred a bunch
       | of maintenance and took some questionable engineering shortcuts.
       | Now they're paying the price.
       | 
       | > Unfortunately, you can get away with cheating this metric for a
       | long time. Years, really. It's not like you cut quality one month
       | and then the truth comes out the next month. This is a metric
       | that only works when people are scrupulously honest, including
       | with themselves.
       | 
       | I've had the same experience with well-meaning productivity
       | metrics collection: Even the execs who were really trying to do
       | the right thing would accidentally invent a new metric that
       | looked fantastic for a couple years, then later collapsed when
       | everything else caught up. By then it might be someone else's job
       | to clean up the mess.
       | 
        | Like the author said, the difficulty in this problem is that it
        | can go on for years. If you have an executive team that actually
        | knows how to balance the metrics with the things that are harder
        | to track, it might not be a problem. However, as soon as you
       | get an executive looking to game the system for a couple years
       | before jumping to another job, it becomes a doomed venture.
        
         | godelski wrote:
         | I think most people forget that most metrics are easy to hack
         | and all metrics are hackable (yes, even if you use math &
         | science). So the only rational thing to do is be aware and
         | acknowledge the limitations of your metric and be open about
         | that and what context they operate under. If you aren't aware
         | of the underlying assumptions, your metric is borderline
         | useless. It's like the old saying "there are two types of code:
         | those with bugs and those that no one uses." All systems are
         | buggy and if you aren't looking for them you'll get bit.
         | 
         | And in great irony, it is why metrics are the downfall of any
         | meritocracy.
        
           | pixl97 wrote:
           | All models are wrong, some models are useful.
        
             | godelski wrote:
             | There's a corollary I use with respect to ML: you don't
             | need math to train good models; you do need math to know
             | why your models are wrong.
        
         | bravura wrote:
         | I'm surprised the author, given the thoughtfulness of much of
         | the post, didn't decide to model tech debt, since then
         | accumulated debt would be quantified frequently (if
         | inaccurately).
         | 
         | Especially since they are optimizing for value-add work versus
         | paying down tech debt, and the author acknowledges that deferred
         | maintenance is what got the engineering org into the tech-debt
         | hole it is currently mired in, dug before the author's tenure.
         | For this reason, accumulated tech debt should be modeled in
         | their bar charts too.
         | 
         | This is far preferable to letting tech debt grow and fester for
         | several years. In a way, I find this possibility almost a
         | regression by the author and possibly irresponsible. I.e., they
         | didn't patch the underlying core issue (the rest of the org not
         | understanding or assessing tech debt as a liability), and
         | instead pushed a quantitative measure that actively encourages
         | taking on unmeasured tech debt.
        
           | robertlagrant wrote:
           | Isn't that the "muda"?
        
           | suprjami wrote:
           | How do you quantify technical debt?
           | 
           | It doesn't seem possible to track percentage of code as debt.
           | You can't put a comment on code and call this block either
           | debt or good.
           | 
           | One way could be to measure different activities. If you
           | spend time solving a lot of bugs (muda) then where are those
           | bugs?
           | 
           | If they're in the same old code, does that code need to be
           | rewritten? If they're in new code, is that code being written
           | in a good way? Can better testing catch these?
           | 
           | "Technical debt" seems as hard to measure as developer
           | productivity. It seems you need something you can actually
           | put on a bar chart to measure the _effects_ of having
           | technical debt.
        
             | nevdka wrote:
             | You need a way to assign technical debt to a process or
             | practice, then you can evaluate the cost of that process
             | over time.
             | 
             | Could be something like... Our team spent 500 hours fixing
             | bugs introduced during overtime, and we spent 250 hours of
             | overtime to introduce those bugs in the first place. So an
             | additional hour of overtime will likely introduce 2 hours
             | of technical debt.
             | 
             | Or maybe... Code written using inheritance hierarchies
             | creates 40 minutes of muda per hour, but using composition
             | only creates 20 minutes. Or... every hour spent on throw-
             | away prototypes reduces muda by 2 hours.
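             | 
             | In code, the first example is roughly the following (a
             | minimal sketch; the figures and category names are the
             | hypothetical ones from this comment, not from the article):
             | 
             |     # Attribute logged hours to the practice that caused
             |     # them, then estimate rework created per hour of it.
             |     hours = {
             |         "overtime": 250,       # overtime hours worked
             |         "overtime_bugs": 500,  # hours fixing bugs traced to it
             |     }
             |     ratio = hours["overtime_bugs"] / hours["overtime"]
             |     print(f"~{ratio:.1f} hours of bug-fixing per overtime hour")  # ~2.0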
        
         | gopher_space wrote:
         | Education as a domain has identified key performance metrics
         | for teacher evaluation and has come up with a method of
         | collection they can point to which always results in useful
         | data.
         | 
         | It's also an expensive and time-consuming process. You're
         | either burning a stack of money because you absolutely need
         | that info, or a slightly smaller stack to cook the books in
         | full view of people who understand statistics way more than you
         | do.
        
           | draebek wrote:
           | > Education as a domain has identified key performance
           | metrics for teacher evaluation and has come up with a method
           | of collection they can point to which always results in
           | useful data.
           | 
           | Can you please elaborate on this? My impression was that
           | judging teachers is actually fairly hard. See, for example,
           | push-back on standardized testing.
           | 
           | The most useful metric I can imagine is where the students
           | are in some number of _years_, but the obvious (to me)
           | problems are
           | 
           | 1. Takes years
           | 2. Lots of confounding factors
        
             | gopher_space wrote:
             | On-the-job observations done by someone with way more
             | experience.
             | 
             | The KPIs that get pushback are indicators of whatever you
             | want them to be. They're just as indicative of poor
             | government planning or shitty parents.
        
               | fardo wrote:
               | > On-the-job observations done by someone with way more
               | experience
               | 
               | This feels like it puts a potential hard cap on quality
               | growth: it discourages mix-ups or experimentation that
               | might improve education but wouldn't please the old guard
               | for one reason or another, and discourages alternative
               | class styles the judge doesn't approve of.
               | 
               | Both of those seem like potentially serious problems in
               | education, given that its structure has, with few
               | exceptions, been effectively stagnant over the last
               | several hundred years. You therefore may be mistaking
               | "evaluating the success at implementing the widely
               | accepted method" for an "evaluation of quality".
        
         | asdfman123 wrote:
         | > In other words, in the absence of RoI measures, the percent
         | of engineering time spent on value-add activities is a pretty
         | good proxy for productivity.
         | 
         | It's a lagging indicator of productivity. If the team is
         | consumed with fixing old bugs, it means that teams weren't
         | _actually_ that productive years back and were instead just
         | exchanging "results" for technical debt.
         | 
         | Instead of creating a perverse incentive to create messes, what
         | if they did something like create a ratio: "60% of work is new
         | features, and 40% is maintenance"? And if maintenance becomes
         | all consuming, could they look back and identify the problems
         | with previous launches?
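         | 
         | A sketch of what tracking that ratio could look like (the
         | figures and the 40% threshold are hypothetical, just to make
         | the idea concrete):
         | 
         |     # Categorized hours for one review period; flag when
         |     # maintenance crowds out new-feature work past a threshold.
         |     hours = {"new_features": 600, "maintenance": 400}
         |     share = hours["maintenance"] / sum(hours.values())
         |     print(f"maintenance share: {share:.0%}")  # -> 40%
         |     if share > 0.40:
         |         print("review recent launches for quality problems")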
        
       | yellow_lead wrote:
       | > So that's my productivity measure: value-add capacity. The
       | percentage of engineering time we spend on adding value for users
       | and customers.
       | 
       | I think this is an important metric, but it doesn't mean anything
       | if that time is wasted. Imagine the team has no muda, they are
       | spending all their time on value-add, so productivity is 100%, right?
       | Well no, you still need to succeed at what you're doing, do it
       | well, and in a reasonable time.
       | 
       | This isn't a measure of productivity, it's a measure of time
       | allocation.
        
         | ec109685 wrote:
         | Agreed, I think.
         | 
         | Does his framework include a measure of output too by looking
         | at ROI? So if you are able to deliver more effectively, that
         | should increase ROI independently of whether you reduce the
         | amount of muda time.
        
         | ozim wrote:
         | If I code up to the specification given by the business, a
         | feature that annoys customers and makes them switch to the
         | competition, is it my problem?
         | 
         | For the engineering department, the amount of "value-add" or
         | how many features they deliver (to the specification, of
         | course; if the requirements are bad, that is also on the
         | business) is the only measure of productivity, because that is
         | something they can have control over. Reasonable time is
         | implied by delivering that amount of value-add.
         | 
         | Don't go overboard with the definition; it should be in the
         | context of engineering, not the whole company. If the business
         | cannot come up with good features, that time is wasted for the
         | company, but engineering is still doing what they are paid for.
        
       | Izkata wrote:
       | > Like any engineering organization, we spend some percent of our
       | time on fixing bugs, performing maintenance, and other things
       | that are necessary but don't add value from a customer or user
       | perspective. The Japanese term for this is muda.
       | 
       | That translates as "pointless" or "useless"... that's like, the
       | opposite of "necessary".
       | 
       | Just sticking with an English word like "foundation" would make
       | sense here - necessary, but not something a customer or user
       | typically sees. Even further, metaphorically, fixing bugs =
       | stabilizing the foundation.
       | 
       | Using foreign words just to sound fancy and ending up with the
       | wrong meaning is just annoying.
        
         | jlawson wrote:
         | Alternatives: foundation, plumbing, substructure, support work,
         | support structure, backend, sustainability, backstage,
         | logistics
        
           | myhf wrote:
           | "necessary but not sufficient"
        
         | kitten_mittens_ wrote:
         | A good chunk of the English lexicon was foreign at one point or
         | another: https://en.wikipedia.org/wiki/Inkhorn_term
        
         | jessetemp wrote:
         | They're saying the things that don't add value are muda
         | (waste), not the foundational stuff. I only know that because
         | my employer uses the same jargon. And since almost no one
         | speaks phonetically-spelled-out Japanese, they always follow it
         | with (waste) in parentheses.
        
         | jdlshore wrote:
         | "Muda" is a technical term with a specific meaning. (Google
         | it.) It originated in the Toyota Production System, then
         | migrated to software development via the Poppendiecks' "Lean
         | Software Development." It doesn't mean "pointless" any more
         | than legacy software means "an inheritance."
        
           | koito17 wrote:
           | If you're aware of Toyota's 七つのムダ (the "seven wastes"),
           | sure. I expect the average person to think of 無駄 (muda) as
           | it is used in daily conversation. I don't think the analogy
           | with "legacy software" is accurate; "legacy" has multiple
           | definitions in modern English, whereas 無駄 has exactly one
           | definition in modern Japanese.
           | 
           | See
           | https://kotobank.jp/word/%E7%84%A1%E9%A7%84-641862#w-641862
           | 
           | One of the example sentences -- 時間を無駄にする, "to waste
           | time" -- is exactly what came to mind when reading the
           | article's explanation of "muda". (Also: the second entry does
           | not apply to modern Japanese; dictionaries reference old
           | literature for example sentences of obsolete definitions.)
        
             | Izkata wrote:
             | > If you're aware of Toyota's 七つのムダ (the "seven
             | wastes"), sure.
             | 
             | Which indeed I've never heard of. But from what I can find
             | [0], the meaning here still matches what I said - these are
             | things that can be safely removed to save time/money
             | without harming the end product. Not "necessary but hidden"
             | things.
             | 
             | [0] https://www-nikken--totalsourcing-
             | jp.translate.goog/business...
        
       | dboreham wrote:
       | The executive summary of this article (for me) is: burst the
       | delusion bubble around your stakeholders. This is admirable, but
       | that delusion existed in the first place for a reason. The reason
       | still exists.
        
       | a_c wrote:
       | I might have skimmed too fast, but productivity seemed not to be
       | defined? IMO productivity is all about making products that get
       | used, and profitability is how much people are willing to pay to
       | use such products.
        
       | neilv wrote:
       | I wonder how well the engineering/product would've gone, in that
       | case, without that productivity-measuring flavor of leadership
       | being asserted.
        
       | w10-1 wrote:
       | Risk is lost in the discussion but very relevant to management.
       | 
       | In theory, in the most productive engineering organization, each
       | person is the expert at what they're doing and work is factored
       | to avoid overlap. This happens somewhat naturally as people silo
       | (to avoid conflict and preserve flexibility). This is actually
       | the perfect system, if your people are perfect: their incentives
       | are aligned, they're working with goodwill, etc.
       | 
       | But that makes every person critical, few people can operate at
       | that level all the time, and fewer still want to work alone doing
       | the same thing ad infinitum. Also, needs change, which re-factors
       | the work and its mapping to workers.
       | 
       | So then you expand the scope to productivity over time with
       | accommodation for change - i.e., capacity.
       | 
       | But by definition, capacity far exceeds output, and is thus
       | unobservable and unmeasurable.
       | 
       | So instead you experiment and measure: switch roles, re-organize
       | work, address new kinds of problems and prototype new kinds of
       | solutions, bounded by reasonably anticipated product needs.
       | 
       | And to inject this uncertainty, you actually have to inject a lot
       | of countervailing positivity, to deflect the fear of measurement
       | or failure, to encourage sharing, etc.
       | 
       | Unfortunately, these experiments and this freedom from
       | consequences are pretty indistinguishable from the bureaucratic
       | games and padding that productivity measures are designed to
       | limit.
        
       | OJFord wrote:
       | With articles like this I always think it'd be really interesting
       | to hear from others on the team. I find it really hard to imagine
       | there's not going to be pressure (either self-inflicted or from
       | above) to 'game' it, or how do you decide what's gaming and
       | what's pragmatic anyway - perhaps better said, I find it hard to
       | imagine people don't _feel_ like it's being gamed, that they're
       | having to add value where they think they should be maintaining,
       | even if OP or whoever deciding that thinks it's the right call.
       | 
       | (And to be clear I can't say and am not saying which is right,
       | from over here with zero information about any such decisions.)
       | 
       | It could be fun to have a platform for honest blogs about work
       | stuff where it's grouped by people verified to work (or have
       | worked) at the same place. (Or, I realise as I write that, a
       | descent into in-fighting and unprofessionalism...)
        
       | mdgrech23 wrote:
       | "but the ceo had a scheduling conflict so he couldn't come to
       | this thing that was really important to him". Does that ever make
       | sense to anyone? It never made sense to me. I think it's about
       | classism. These guys only want to hang out w/ the lower class if
       | they're at the front of the room talking; otherwise they want to
       | come off as mysterious and too busy to have any time. What a load
       | of shit.
        
       | booleandilemma wrote:
       | There's no shortcut to paying attention. That is, paying
       | attention to who's doing the real work. Any metric can be gamed.
        
       | ozim wrote:
       | I would point out that Martin Fowler, Kent Beck and Gergely Orosz
       | have written that you cannot measure the productivity of a single
       | team member.
       | 
       | *You can get a rough sense of a team's output by looking at how
       | many features they deliver per iteration. It's a crude sense, but
       | you can get a sense of whether a team's speeding up, or a rough
       | sense if one team is more productive than another.*
       | 
       | So in that sense "value-add time" will be a valid metric. But it
       | is still not a single number that one can give without context to
       | the CEO; the CEO still has to ask questions like "we usually have
       | X time spent on fixing bugs, aren't you gaming the system to get
       | your bonus?". It is the CEO's job to understand the metric and
       | not to be gamed...
        
       | jlas wrote:
       | Don't overlook sources of anti-productivity that might be easier
       | to measure, like the time it takes to build & test code changes.
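       | 
       | For instance, a rough way to get that number (the build command
       | here is a hypothetical stand-in for whatever your project
       | actually uses):
       | 
       |     # Time a full build-and-test cycle end to end.
       |     import subprocess, time
       | 
       |     start = time.monotonic()
       |     subprocess.run(["make", "test"], check=True)
       |     print(f"build + test took {time.monotonic() - start:.0f}s")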
        
       | danielovichdk wrote:
       | Productivity is a totally irrelevant metric. It takes 2 people, 5
       | years to build a 50-foot sailboat. It takes 10 people, 4 months.
       | 
       | Which is most productive?
       | 
       | That depends on the price and the quality of course.
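       | 
       | Working out the raw person-time (a sketch using only the figures
       | above):
       | 
       |     # The larger crew uses about a third of the labour, but that
       |     # alone says nothing about price or quality.
       |     crew_a = 2 * 5 * 12   # 2 people x 5 years  = 120 person-months
       |     crew_b = 10 * 4       # 10 people x 4 months = 40 person-months
       |     print(crew_a / crew_b)  # -> 3.0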
       | 
       | Never measure for quantity.
       | 
       | Always measure for quality in the environment you are competing
       | in.
       | 
       | And since most professional work is about making money/use value,
       | it would be a good start to measure how much money/use value
       | people are contributing. If people are not actively contributing
       | to this, they are not productive.
        
       ___________________________________________________________________
       (page generated 2024-05-06 23:00 UTC)