[HN Gopher] Measuring an engineering organization
       ___________________________________________________________________
        
       Measuring an engineering organization
        
       Author : theptip
       Score  : 58 points
       Date   : 2023-01-02 17:59 UTC (1 day ago)
        
 (HTM) web link (lethain.com)
 (TXT) w3m dump (lethain.com)
        
       | PragmaticPulp wrote:
       | Measuring gets a bad rap, but proper measuring can be very
       | valuable to engineers as well.
       | 
       | If the company is only trying to measure developer productivity,
       | they're missing the big picture. Developers will only get blamed
       | when the number doesn't match some hidden expectation.
       | 
       | Good organizations measure a lot of things, such as tech support
       | effort (mentioned in this article), roadmap churn, number of
       | unexpected feature demands that derail forecasts, and other
       | metrics that evaluate the company rather than laying blame on a
       | single group.
       | 
       | Measuring roadmap churn and the impact of unexpected feature
       | requests has been very helpful for me at metrics-heavy companies.
       | When management starts asking why things haven't been shipped
       | yet, it's amazing to pull out a quantified list of all of their
       | unplanned requests, roadmap changes, and other management issues
       | that set everyone back.
        
         | e_i_pi_2 wrote:
         | I think a lot of the bad rap comes from how the measurements
         | are used - I personally like to know my trends for how many
         | hours/commits/etc I did in different areas during a time
         | period. I use Wakatime[1] in my editor and love checking in
         | every once in a while. However I would never ask everyone on my
         | team to install it and share the dashboards, because then the
         | measurements may be used the wrong way, e.g. "Alice did way
         | more commits than Bob last week, so that's bad for Bob".
         | 
         | As another example I like using story points/velocity to get a
         | feel for the team, but I don't think you can extrapolate on
         | that data and expect the results to be valid. I think most of
          | the frustration from devs comes from the measurements being used
         | to try to plan out the future when that isn't really possible.
         | IMO you can choose a set of features OR a release date but
         | definitely not both.
         | 
         | I'm no expert but I've read a fair amount on different
         | strategies to try and make timelines for software and my big
         | take-away at this point is "don't make timelines for software.
          | If you _have_ to, then keep it probabilistic and don't promise
         | anything by an exact date". I think we as devs need to kinda
         | educate the rest of the business that it's a tradeoff between
         | features and release dates. I like the "iron triangle" model
         | but you can't just add more people to a team and expect things
         | to be done faster (Mythical Man-Month), so it's really just
          | scope vs. release date.
         | 
         | [1]: https://wakatime.com
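          | 
          | A "probabilistic, no exact date" forecast like the one above
          | can be sketched as a Monte Carlo simulation over historical
          | velocity (the numbers below are hypothetical, purely for
          | illustration):

```python
import random

random.seed(42)  # make this illustration reproducible

# Hypothetical story points completed in past sprints
historical_velocity = [21, 18, 25, 14, 22, 19, 23, 17]
backlog_points = 120      # remaining scope for the release
simulations = 10_000

sprints_needed = []
for _ in range(simulations):
    remaining, sprints = backlog_points, 0
    while remaining > 0:
        # Draw each future sprint's velocity from past observations
        remaining -= random.choice(historical_velocity)
        sprints += 1
    sprints_needed.append(sprints)

# Report confidence levels instead of promising one exact date
sprints_needed.sort()
p50 = sprints_needed[len(sprints_needed) // 2]
p85 = sprints_needed[int(len(sprints_needed) * 0.85)]
print(f"50% confidence: {p50} sprints; 85% confidence: {p85} sprints")
```

          | The point is the output shape: a range with confidence levels,
          | not a single promised date.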
        
         | ironmagma wrote:
         | They help in the same way that a speed limit helps drivers.
         | Sure, it actually makes the roads safer to some extent. Is
         | there some amount of abuse you have to expect in enforcement?
         | Yes. Is there some amount of unjust rule breaking that will
         | happen? Also yes. Does it massively reduce the fun of driving?
         | Also yes. If everyone just knew what they were doing on the
         | road would it really be necessary? Not really. And do you need
         | them for small, high-trust areas? Not really.
        
           | e_i_pi_2 wrote:
            | Somewhat tangential, but we already have the tech to
           | know when license plates show up at a certain place, we know
           | the map and the speed limits, so we could completely automate
           | tickets for speeding. If the limit is 60mph and you show up
            | 10 road-miles away in less than 6 minutes, then you must have
            | sped somewhere along the way, and the owner of the car gets
            | a ticket in the mail.
           | 
           | In the short-term I think most people would hate this, but
            | then I think we need to choose between raising the limit and
           | following it. We all make the laws so we can raise the
           | limits, and this approach completely removes the potential
           | for abuse or playing favorites. As an added benefit it wipes
           | out a lot of what cops spend their time on, so if you don't
           | like cops we could justify having way less, or if you do like
           | them then it frees up their time to focus on more important
           | things.
           | 
           | I also recognize this opens up a huge potential for abuse by
           | the govt having all this data, but we already have it for
           | tolls so we might as well use it for good - having human
           | police officers handle stuff like this is a huge risk for
           | abuse and a waste of time.
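            | 
            | The average-speed check described is just distance over
            | elapsed time; a minimal sketch (hypothetical sightings, not
            | any real plate-recognition API):

```python
from datetime import datetime, timedelta

SPEED_LIMIT_MPH = 60.0

def average_speed_mph(distance_miles: float,
                      t_start: datetime, t_end: datetime) -> float:
    """Average speed between two fixed camera sightings."""
    hours = (t_end - t_start).total_seconds() / 3600.0
    return distance_miles / hours

# Hypothetical example: 10 road-miles covered in 5.5 minutes
t0 = datetime(2023, 1, 2, 12, 0, 0)
t1 = t0 + timedelta(minutes=5, seconds=30)
avg = average_speed_mph(10.0, t0, t1)
if avg > SPEED_LIMIT_MPH:
    print(f"Ticket: averaged {avg:.0f} mph on a 60 mph stretch")
```

            | Because only the average is checked, this can never flag a
            | driver who stayed under the limit the whole way.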
        
             | marcosdumay wrote:
             | > We all make the laws so we can raise the limits
             | 
              | How democratic is your country? Will the limits be raised,
              | or
             | will your government take the extra revenue and keep things
             | unreasonable?
             | 
              | You will find that people have many different answers to
              | those questions.
        
           | pixl97 wrote:
           | > And do you need them for small, high-trust areas?
           | 
           | OSHA would disagree, and even racetracks have rules.
           | 
           | I think a lot of developers on HN just have a problem dealing
            | with the fact that the rules are going to be applied to them.
            | If
           | you become an owner of a company your engineering rules go
           | away, but then you have all the regulatory fun of a business
           | owner trying to make a living and selling things at the same
           | time. Working for a larger organization tends to have
           | benefits like steady pay, but in return they ask for you to
           | justify your expense and it's very possible the people asking
           | for justification have no idea exactly what you do.
        
         | pixl97 wrote:
         | Heh, I like how you were downvoted. I think a number of people
         | here are a little bitter about their own negative experiences
         | in companies and don't have enough self reflection to admit
         | it's possible they could be part of the problem.
        
       | geekjock wrote:
       | "What should we measure to improve developer productivity?" is a
       | decades-old problem for leaders with no clear solution.
       | 
       | There finally seems to be some level of consensus that output
       | metrics like lines of code, # of PRs, and commits, are an
       | ineffective approach.
       | 
       | Lean metrics like cycle time and lead time can be a helpful high-
       | level diagnostic, but they're far from an indicator of
       | effectiveness or productivity.
       | 
       | A new approach being adopted by many organizations is to focus on
       | the actual experiences of developers... the things that slow them
       | down or frustrate them... and turn these into measurements that
       | guide improvement. I'm the founder of getdx.com where we're
       | publishing research on this: http://paper.getdx.com
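        | 
        | The lean metrics mentioned above fall straight out of ticket
        | timestamps; a minimal sketch with hypothetical data:

```python
from datetime import datetime
from statistics import median

# Hypothetical tickets: (requested, work started, shipped)
tickets = [
    (datetime(2023, 1, 2), datetime(2023, 1, 5), datetime(2023, 1, 9)),
    (datetime(2023, 1, 3), datetime(2023, 1, 4), datetime(2023, 1, 12)),
    (datetime(2023, 1, 6), datetime(2023, 1, 10), datetime(2023, 1, 13)),
]

# Lead time: request -> shipped. Cycle time: work started -> shipped.
lead_times = [(done - req).days for req, _, done in tickets]
cycle_times = [(done - start).days for _, start, done in tickets]

print("median lead time (days):", median(lead_times))    # 7
print("median cycle time (days):", median(cycle_times))  # 4
```

        | Easy to compute, which is exactly why they tempt people into
        | treating them as more than a high-level diagnostic.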
        
         | Cardinal7167 wrote:
         | > "What should we measure to improve developer productivity?"
         | 
         | How about nothing? Honest question. I am sick and tired of
         | being treated like cattle. I've never seen a single convincing
         | argument that any metric of my outputs has mapped to some
         | productivity metric in a meaningful way, but I have absolutely
         | seen them weaponized against me and others to punish and fire
         | engineers. All of these metrics have only ever boiled down to a
         | negative reinforcement mechanism, in my experience.
        
           | dalyons wrote:
           | In healthy orgs you are collecting such metrics to help
           | advocate for investing in tooling, automation,
           | infrastructure, process improvements etc. It can give the
           | business case for building/expanding dev tooling and cloud
            | teams. I've found it helpful in past jobs... fully loaded
            | dev salaries are so high that at a moderate-sized org a little
           | bit of rough efficiency math can make for obvious investment
           | cases.
        
           | senguidev wrote:
           | IMO some ~objective feedback of reality is good though.
           | Probably not easy to set up without having it quickly turn
           | into a negative reinforcement loop indeed. But let's not
           | forget that, with the right supporting culture, it can be
           | helpful (pleasant?) to the dev to have some metrics. I admit
           | "how to do that?" remains unclear.
        
           | pixl97 wrote:
           | So how do you deal with bad developers? How do you even
            | measure whether they exist in your organization?
           | 
           | And as much as many of us on HN are developers and like to
           | toot our own horn, some of us suck and don't get better with
           | time. I mean in small organizations this is typically easy to
           | figure out, but in larger organizations it's a problem that
           | can persist for long periods of time.
        
             | lylejantzi3rd wrote:
             | Who cares? No company I've ever heard of has been killed by
             | bad developers. The same can't be said for managers, Vice
             | Presidents or the C-Suite.
        
               | theptip wrote:
               | The big problem with this line of reasoning is that good
               | developers hate working with bad developers. So if you
               | don't do something about bad developers (which might just
               | mean train them, not necessarily fire them), then the
               | good developers will leave.
        
               | lylejantzi3rd wrote:
               | In that case, the bad developers are easy to spot. The
               | good developers point right at them. There's no mystery
               | nor are measurements needed.
        
               | Jensson wrote:
                | That is circular reasoning: if you already know which
                | developers are good enough to ask, then you already know
                | which developers are bad.
               | 
                | And asking good developers to identify bad developers
                | doesn't work either. Good developers hate working with
                | bad developers, true, but like all other humans they hate
                | working with all sorts of people for different reasons,
                | so they will point out a lot of good developers as well.
               | 
                | Instead, what works is having lots of data from a
                | diverse group of good developers. That way the personal
               | noise gets cancelled out and you are mostly left with the
               | "I don't like bad developers" signal. This isn't trivial
               | data to get though, especially since people usually
               | choose to work with people like themselves so a team
               | likely isn't diverse enough to get you this kind of data,
               | and people from other teams aren't familiar enough to
               | give good data either.
        
               | pixl97 wrote:
               | This is a tautology if I've ever heard one.
               | 
               | If a bad manager keeps only bad developers around it will
               | kill a (software) company just as fast as anything else.
               | It turns out that paying customers like their software to
               | work and be free of data corruption.
        
               | kqr wrote:
               | I disagree. Under good management, bad developers might
               | progress more slowly than good ones, but they won't
               | outright destroy the product.
        
               | theptip wrote:
               | Speed == probability of survival for startups. You only
               | have so much runway; better developer productivity means
               | more swings of the bat at finding PMF.
        
               | Cardinal7167 wrote:
               | That isn't what is in contention, though. The mapping of
               | some objective metric to that speed and productivity is
               | what I'm arguing doesn't make sense.
        
               | Jensson wrote:
               | When you lack good developers stuff like this happens:
               | 
                | https://www.cbc.ca/news/business/sony-cyberpunk-2077-playsta...
               | 
               | Edit with more arguments:
               | 
               | You could of course blame managers for that, but delaying
                | a game because your bad developers aren't capable of
               | delivering isn't really an option either. How many years
               | will it take those developers to get things right? What
               | they would need is to delay the game and replace the
               | developers with good developers, or scale back the
               | complexity of the game to a level those developers can
               | handle.
               | 
                | The company in question chose the latter: they said they
                | will use Unreal Engine for future games instead of an
                | in-house engine. So they admitted their developers aren't
                | good enough.
        
             | Cardinal7167 wrote:
             | > So how do you deal with bad developers?
             | 
             | The same way we deal with good developers? With managers,
             | of course. That's the whole point of their role. Every
             | developer productivity metric I've seen speaks to some
             | disparity between engineering and the C-suite. I don't
             | understand where these products originate that seek to do
             | the manager's job for them but worse. I've been in
              | management roles before; the numbers don't and can't always
             | tell you the full story, no matter the size of the team or
             | the quality of the data.
             | 
             | As always, good people are hard to find.
        
           | geekjock wrote:
           | I'm a developer and am right there with you. But if you're a
            | decades-old corporation with 10,000 engineers, you need some
            | set of signals to help guide improvements to tools and
            | processes, right? Those improvements benefit developers too.
        
       | ed_elliott_asc wrote:
       | My only contribution to this is that I strongly believe you
        | should only be measured on a system you built from scratch -
        | being measured on a system you inherited from someone else is
        | unfair.
        
       | splittingTimes wrote:
       | I agree with most of the sentiments in the article and am always
        | keen on learning new meaningful (i.e. not individual but high-
        | level) measures. The notion of "measuring something is better
        | than nothing" is not helpful, I feel. You should not measure what
        | is merely easy to measure, but what is actually important and
        | gives real business insight. This, as the article states, is
        | typically very hard to define and execute.
       | 
       | Also, I would widen the perspective of "CTO needs to provide data
       | to CEO" / engineering effectivness a bit, as this is not the most
       | important thing you need to know. Ultimately, as a company you
       | wanna know if your business is going in the right direction or
       | not. You want to spend more money on things & activities that add
       | to the value of your product offering in the eyes of a customer
       | so that the customer will pay for it and you want to spend less
       | money on things customers will not pay for. But what are the
       | value-adding activities and which are non-value-adding? When will
       | a customer buy your product or service?
       | 
       | For me, this sets a certain order of importance of what things to
       | measure/quantify to answer the following questions:
       | 
       | 1. Deliver customer value. Are we building the right product?
       | 
       | 2. Generate business value. Can we generate revenue?
       | 
       | 3. High value product. Do we keep quality high and build the
       | right next features?
       | 
       | 4. Code quality. Are we building the product right so it is
        | maintainable and extensible?
       | 
       | 5. Team chemistry. Is the team aligned on the goal and healthy in
       | their interactions and spirit?
       | 
       | 6. Process efficiency. Is success repeatable?
       | 
       | ===
       | 
        | (1) Answering that should not be too hard, as you can measure any
       | kind of customer feedback on your products, be it youtube likes,
       | alpha tester feedback, support calls, surveys.
       | 
       | (2) Once you know you have a product customers want and like, you
        | need to know whether the customer-facing part of your
        | organization (marketing, sales, training & education, and
        | support) is connecting with said customer, so that they can sell
        | and actually generate revenue.
       | 
       | This is much harder to get data for. You can measure the number
       | of licenses, lost and new customers, or the trends in the
       | business volume of services, but what does that tell you about
        | the ability of your organization to sell a good product to
        | customers? I am not sure here.
       | 
        | (3) is to reflect on the business success (ROI) of new features
        | that you implement. Do you keep building a high-value product?
        | Are we effective in communicating a new feature's value
        | proposition to our customers? There is so much to measure here.
       | 
       | Feature level: Measure via BI the business impact of new
       | features, how often are they used? Measure the resources needed
       | to deliver a feature. Measure number of support cases / bugs
       | reported per feature. Measure the estimated time vs the taken
       | time for a feature.
       | 
       | Quality of the overall product: Measure the yearly mean number of
       | support cases per week. Measure the yearly mean number of bug
       | reports per week. Measure the yearly mean number of crash reports
        | per week. Measure number of major field incidents per year.
       | 
       | Do we make the customer feel cared about after the sale? Measure
       | lead time to resolve support calls. Measure customer ratings of
       | support calls.
       | 
       | (4) Code quality like compiler/sonar warnings or test coverage
       | are the easy measures, but might not give you insight. I prefer
       | to look more highlevel again at the product quality which ties
       | into (3), like How often do tickets come back from testing to
       | development? Measure number of regressions reported by customers
       | after a release. Measure number of (real) hotfixes needed after a
       | release.
       | 
       | Answering the more fundamental questions of code quality like
       | "how easy is it to add new features" is a bit more difficult. I
       | have not found good measures yet.
       | 
        | For (5) I totally agree with the article: you should never
        | measure how many lines of code were produced, how many tickets
        | were closed, or the like. The question of team happiness is most
        | important for team work. There are good tools for that, like
        | OfficeVibe or Glint.
       | 
       | Measure the employee turnover. Measure the number of
       | uninterrupted hours of work time / total time present at work per
        | week. Measure the mean OfficeVibe score. Many further answers
        | could be drawn from OfficeVibe: Are people at ease, having a
       | good time and enjoying interactions with their peers? Is there no
       | sense that single individuals try to succeed in spite of the
       | efforts of those around them? Was the work a joint product? Was
       | everybody proud of its quality? Do they take enjoyment in their
       | work? Is there trust and mutual esteem among the peers?
       | 
        | For (6) you want to know: Are our processes such that success is
        | repeatable?
       | 
       | On a company level: Measure number of botched releases / roll
       | backs. Measure number of failed audits per year. Measure number
       | of open CAPAs per year.
       | 
        | Per team level: How long are compile/CICD times? How long do
        | code reviews lie around before being picked up? How easy is
        | on-boarding of new employees? Etc.
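        | 
        | Review pickup time, one of the per-team measures above, is easy
        | to compute once you have PR timestamps (hypothetical data, not
        | any particular tool's API):

```python
from datetime import datetime
from statistics import mean

# Hypothetical PRs: (opened, first review activity)
pull_requests = [
    (datetime(2023, 1, 2, 9, 0), datetime(2023, 1, 2, 15, 0)),   # 6 h
    (datetime(2023, 1, 3, 10, 0), datetime(2023, 1, 4, 10, 0)),  # 24 h
    (datetime(2023, 1, 4, 14, 0), datetime(2023, 1, 4, 16, 0)),  # 2 h
]

pickup_hours = [
    (first_review - opened).total_seconds() / 3600.0
    for opened, first_review in pull_requests
]
print(f"mean review pickup time: {mean(pickup_hours):.1f} hours")
```

        | A team-level aggregate like this says something about process
        | friction without singling out any individual reviewer.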
        
       ___________________________________________________________________
       (page generated 2023-01-03 23:00 UTC)