[HN Gopher] Build or don't build a software feature?
       ___________________________________________________________________
        
       Build or don't build a software feature?
        
       Author : knadh
       Score  : 148 points
        Date   : 2022-03-06 11:24 UTC (1 day ago)
        
 (HTM) web link (dont.build)
 (TXT) w3m dump (dont.build)
        
       | mdeck_ wrote:
       | Sounds like a modified/more complex version of the well-known
       | RICE framework.
       | 
       | https://www.intercom.com/blog/rice-simple-prioritization-for...
        
       | Pxtl wrote:
        | Using decimals, instead of going 0-10 so 5 could be the
        | halfway mark (or 1-5 with 3 as halfway), is kind of an
        | example of "don't build that feature".
        
       | lucideer wrote:
        | As someone who often advocates for not building features,
        | after playing with this a little I do think some of the
        | constants* behind its formula skew a little toward the
        | pessimistic/negative outcome.
       | 
        | If a very complex feature is of truly high value to 90% of my
        | users, it seems uncontroversially worthwhile, yet the tool
        | gives me "No, but a close call (48%)".
       | 
       | I'd suggest putting a little more weight on value & user
       | importance and a little less weight on complexity/effort.
       | 
       | Otherwise, GREAT tool. Even just as an aid to get across the idea
       | that some features should not be built, which is often not
       | understood.
       | 
        | *for reference, the weights are currently as follows:
        | 
        |     # users: 10
        |     effort: -15
        |     user importance: 20
        |     biz complexity: -15
        |     value: 20
        |     risk: -20
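        | 
        | As a minimal sketch (my own reconstruction, not necessarily
        | the site's actual code), a weighted sum over the sliders
        | with these constants, normalized to a percentage, behaves
        | the same way:
        | 
        |     // Hypothetical reconstruction of the scoring. Sliders
        |     // run 1-10; negatively weighted answers are inverted
        |     // as (10 - v). Weight order: # users, effort, user
        |     // importance, biz complexity, value, risk.
        |     const WEIGHTS = [10, -15, 20, -15, 20, -20];
        | 
        |     function score(answers: number[]): number {
        |       let total = 0;
        |       WEIGHTS.forEach((w, i) => {
        |         total += w < 0 ? -w * (10 - answers[i]) : w * answers[i];
        |       });
        |       const max = WEIGHTS.reduce((s, w) => s + Math.abs(w) * 10, 0);
        |       return (100 * total) / max; // higher means "build it"
        |     }
        | 
        |     console.log(score([9, 9, 8, 9, 9, 5])); // 56 (percent)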
        
       | pete_nic wrote:
        | Great job on this decision-making tool. One thing that might
        | be missing: the business viability of the feature. All else
        | being equal, you prioritize features that add value to the
        | company that builds them.
        
         | bduffany wrote:
         | I'd guess that was intended to be captured with "how valuable
         | is the feature to target users, truly?" If target users truly
         | find the feature valuable then I guess that means the feature
         | is adding value to the company.
        
           | Jensson wrote:
            | Users don't find ads valuable, but plenty of businesses
            | still want developers to add ads to the product.
        
         | chockchocschoir wrote:
         | To be fair, the tool doesn't seem to be exclusively for
         | "business software", so that would be a parameter you'd control
         | yourself.
        
       | RexM wrote:
       | Seems like it's missing a tick box that asks if it's already been
       | promised to a customer. That would automatically override
       | everything and make it say "Yes, build it."
        
       | smoyer wrote:
        | This is a cool tool, but it should differentiate between
        | "users" and "customers" so that weighting is based on the
        | potential for making paying users happy (or perhaps the word
        | "user" should just be changed). It also appears that these
        | sliders are equally weighted, but I find that these factors
        | are NOT equally weighted; it depends on the particular
        | feature, the lifecycle of the product, and the lifecycle of
        | the company.
        
         | jenscow wrote:
         | I was thinking the same, until I saw: _How important are the
         | target users to the existing product?_
        
       | hardwaresofton wrote:
        | Love the stuff that Zerodha builds (I'm a heavy user of
        | Listmonk [0]). Maybe this is the secret to their shipping
        | such consistently high-impact features!
       | 
       | [0]: https://github.com/knadh/listmonk
        
       | axg11 wrote:
       | Interesting visual tool and mental framework. Unfortunately, I
       | think this would have "failed" in all the decision processes I
       | have been a part of. Why? Each of the questions is valid, but bad
       | decisions usually stem from incorrectly answering one of these
       | questions.
       | 
       | > How valuable is the feature to the target users, truly?
       | 
       | Take this question for example. Accurately answering it is not
       | always possible. A common mistake is to ask your users or take
       | their word for how valuable they perceive a feature to be. That
       | approach is better than nothing, but can often lead teams astray.
        | This isn't because your users are stupid; it's because your
        | users don't have the perspective that you have in terms of
        | (a) what is possible, and (b) the knock-on effects of the
        | feature on other aspects of the software's value proposition.
       | 
       | Note: the above is _not_ true about bugs. If a new feature is
       | actually a bug/issue fix raised by your users, they are usually
       | right.
       | 
       | > What is the time, technical effort, and cost to build the
       | feature?
       | 
       | Estimating technical effort is so difficult that it is an entire
       | field in itself. When working on complex systems, you also have
       | to consider the future complexity introduced when building on top
       | of the new feature (linked to the last question).
        
         | orblivion wrote:
         | > your users don't have the perspective that you have in terms
         | of (a) what is possible, and (b) the knock-on effects of the
         | feature on other aspects of the software value proposition
         | 
         | "Hey Jack, we've been asking for an edit button for years! It's
         | not that difficult."
        
         | Someone wrote:
         | > but bad decisions usually stem from incorrectly answering one
         | of these questions.
         | 
          | Then change your answers. For me, this kind of method,
          | where you assign numbers to aspects of a choice and combine
          | them in some way, is there not to be an oracle, but to
          | direct your thoughts or discussions within a group.
         | 
          | For example, if your gut tells you a feature is definitely
          | worth it, but a tool like this says it's only borderline
          | useful, that shouldn't make you immediately discard the
          | feature, but make you consider:
         | 
         | - whether the list of aspects is complete
         | 
         | - whether you judged the existing ones correctly
         | 
          | - whether your gut was right (e.g. if your gut says it's
          | worth it, but you also think it's hard to implement and
          | only moderately useful, clearly something is wrong or
          | missing there)
         | 
          | When making a group decision, a big advantage is that this
          | moves you from exchanging opinions ("I think we should do
          | A; you think we should do B") to more concrete discussion
          | ("I think it's worth a lot and easy to implement; you
          | think it's worth something but too hard to support for
          | what it's worth. Let's discuss those two separately.").
         | 
          | Items about which there's strong disagreement even after
          | discussion may even trigger postponement: "let's get a
          | better idea about how hard/useful this is first".
         | 
          | The only way to make an informed decision is by
          | thresholding on some numeric scale, but as you say, it is
          | also impossible to assign exact numbers to aspects of a
          | solution.
        
           | axg11 wrote:
           | Good point. It could be interesting to make this into a
           | multiplayer tool. Allow each member of a team to answer the
           | questions and then focus the debate around the questions
           | where there is the most disagreement.
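            | 
            | A minimal sketch of that idea (all names hypothetical):
            | collect each member's answers, then rank the questions
            | by variance so the biggest disagreements get discussed
            | first:
            | 
            |     // Rank questions by how much the team disagrees on
            |     // them (highest variance first).
            |     function rankDisagreements(
            |       questions: string[],
            |       answersByMember: number[][] // one 1-10 score per
            |                                   // question, per member
            |     ): string[] {
            |       return questions
            |         .map((q, i) => {
            |           const scores = answersByMember.map((a) => a[i]);
            |           const mean =
            |             scores.reduce((s, x) => s + x, 0) / scores.length;
            |           const variance =
            |             scores.reduce((s, x) => s + (x - mean) ** 2, 0) /
            |             scores.length;
            |           return { q, variance };
            |         })
            |         .sort((a, b) => b.variance - a.variance)
            |         .map((r) => r.q + ": variance " + r.variance.toFixed(1));
            |     }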
        
       | kevsim wrote:
       | This is cool!
       | 
       | It's kind of similar to what the RICE/ICE frameworks are trying
       | to help achieve [0].
       | 
       | We built some scoring of impact/effort into our tool Kitemaker
       | [1] and allow teams to prioritize their work by these things. We
       | ended up going with really simple scores like S/M/L since it's
       | super hard to know the difference between a 6 and a 7 (and it
       | probably doesn't really matter anyway).
       | 
       | 0: https://medium.com/glidr/how-opportunity-scoring-can-help-
       | pr...
       | 
       | 1: https://kitemaker.co
        
         | blowski wrote:
          | You didn't ask for this feedback, but I'll give it to you:
          | your homepage is way too generic. "Break silos", "Focus on
          | what matters", "Powerful integrations", etc. There are
          | 1000s of tools offering those features - why is Kitemaker
          | different and worth me looking at?
        
       | smoe wrote:
        | Just a nitpick (or maybe an issue for a non-native speaker):
        | the answers are confusing in relation to the title.
       | 
       | Title: Don't build (or build) that feature
       | 
       | Answer: Yes
       | 
        | Given the way I answered the questions (high impact, low
        | effort), I think it should tell me to build; but as I read
        | it, the tool either tells me _not_ to build, or answers an
        | either/or question with yes or no.
        
         | tnzk wrote:
         | "Go" or "Hold on" would be better?
        
           | smoe wrote:
           | I think "build" / "don't build" would be best.
        
       | p0nce wrote:
       | Features are a last resort proposition.
        
       | metanonsense wrote:
        | I like the idea. Some corner cases yield odd results,
        | however, e.g. 1,10,10,1,1.
        
       | laurent123456 wrote:
       | I'm not sure it would be useful to decide if something should be
       | implemented or not (in the sense that you often already know),
       | but it's a great visual tool regardless, and could be used to
       | show users how development decisions are made.
       | 
       | A nice additional feature would be a way to bookmark a set of
       | slider values, so that it can be shared with others.
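        | 
        | As a rough sketch of how that could work (purely
        | illustrative, not the site's code), the six slider values
        | could be serialized into the URL hash:
        | 
        |     // Store the sliders in the hash, e.g. #7,3,8,2,9,4,
        |     // so the current configuration is bookmarkable.
        |     function saveToHash(values: number[]): void {
        |       location.hash = values.join(",");
        |     }
        | 
        |     // Restore them on load; null if the hash is invalid.
        |     function loadFromHash(): number[] | null {
        |       const parts = location.hash.slice(1).split(",").map(Number);
        |       const valid =
        |         parts.length === 6 && parts.every((n) => n >= 1 && n <= 10);
        |       return valid ? parts : null;
        |     }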
        
       | rhynlee wrote:
        | Interesting mental model you've made here, I like it! Stuff
        | like this definitely helps get people into system 2 thinking
        | instead of going with intuition, and in this case that's
        | probably a good thing.
       | 
        | I thought seeing charts of how the answer changes with each
        | slider value over a given range might help, since, as others
        | have mentioned, it's not easy to answer the questions
        | accurately. It could help handle uncertainty, since people
        | would then be able to see the range of answers between their
        | "best case" and "worst case".
        
       | faeyanpiraat wrote:
        | I wouldn't show the result in real time, as it allows
        | fine-tuning the parameters to get to a result I want, and
        | not one I need.
        
       | m3047 wrote:
       | Reminds me of https://weekdone.com/ for some reason. It should be
       | a Jira plugin where the team can vote on each of the questions.
       | =)
        
       | chrismorgan wrote:
       | > _This page requires Javascript to function._
       | 
       | Missed an opportunity to present the "don't build" reasoning! :-)
        
         | orliesaurus wrote:
          | I have yet to meet someone in real life who has disabled
          | JS in their browser.
        
           | agentdrtran wrote:
           | Every single person with JS disabled browses HN
        
         | jbverschoor wrote:
          | People who disable JS don't really need that, so that's
          | time saved presenting the reasoning.
        
       | nada_ss wrote:
        | This is a survey?
        
         | compressedgas wrote:
         | It appears to be a decision calculator.
        
       | troebr wrote:
       | Maybe it should say "build" or "don't build" instead of yes/no.
        
       | motohagiography wrote:
        | This is ideal as a teaching tool. Related to another thread
        | (the Dictator's Handbook thread), I've used salience models
        | to help make feature decisions, which are a lot like story
        | points poker without the overhead, but the effect is the
        | same. The key challenge is getting honest assessments of the
        | questions from people. Most people, when provided a model,
        | will ask "sure, how do I get it to say what I want it to
        | say?", and if it doesn't say that, they won't accept that
        | their desire is thwarted by principle.
       | 
       | Such a useful tool, and I foresee referring to it regularly.
        
       | blowski wrote:
       | This is useful as a way of demonstrating "questions to ask
       | yourself". But how do do you decide whether something is 3/10 or
       | 7/10 for any of these? What might be even better is being able to
       | compare options against each other.
        
       | azhenley wrote:
        | The value seems to range from 5% to 95%; is that to leave
        | room for uncertainty?
        
         | laurent123456 wrote:
         | That's because the sliders only go down to 1, not 0 (if you
         | change the HTML and set the "min" value to 0 you can get to 0%
         | or 100%). I guess it makes sense because none of the answers
         | could possibly be 0 as far as I can see.
         | 
          | Or, perhaps as an optimisation, he could set the total to
          | 0% or 100% immediately if certain answers are set to 0: for
          | example, if no user needs the feature, it should be 0%, or
          | if the time and cost are 0, it should be 100% (although
          | that's absurd), etc.
        
       | tantalor wrote:
       | Besides the first, all the measures here are relative, so I
       | suppose it is up to the user to calibrate.
       | 
        | Take build cost. Suppose a project would take 2 engineers 4
        | weeks to build. A large team may call that a "2", but a small
        | team would call it an "8".
        
       | [deleted]
        
       | moasda wrote:
       | Interesting approach for a decision calculator.
       | 
       | Why is the result always between 5% and 95%?
        
         | jonathan_h wrote:
         | TL;DR: A minimum score of 1 messes with their formula. OP
         | should consider changing it.
         | 
         | Looking at the page's <script>, I think it's because they set
         | the minimum score to 1 rather than 0.
         | 
          | In addition, for three of the questions a high score is a
          | negative rather than a positive (e.g. a high score in
          | "development effort" likely means "a lot of effort"), so
          | under the hood they invert those scores by doing (10 -
          | score).
          | 
          | The problem is then that positive questions range from 1
          | to 10 while negative questions range from 0 to 9, which
          | means you have a minimum score of 3 and a max of 57,
          | rather than 0 and 60.
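          | 
          | A quick back-of-the-envelope check of where 5% and 95%
          | come from, treating the six questions as equally weighted
          | for simplicity:
          | 
          |     // 3 "positive" questions score 1-10; the 3 inverted
          |     // ones effectively score 0-9 (the sliders bottom out
          |     // at 1, and 10 - 1 = 9).
          |     const min = 3 * 1 + 3 * 0;  //  3 out of 60 ->  5%
          |     const max = 3 * 10 + 3 * 9; // 57 out of 60 -> 95%
          |     console.log((100 * min) / 60, (100 * max) / 60); // 5, 95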
         | 
         | For a more flowery answer: A developer never deals in absolutes
         | :)
        
           | [deleted]
        
       | dudul wrote:
       | I like the idea, but unfortunately, with the exception of the
       | first slider, they are all subjective and hard to quantify.
       | 
       | What's a technical effort of 6 vs 4? What's a technical debt of 8
       | vs 6 or 7?
        
       ___________________________________________________________________
       (page generated 2022-03-07 23:01 UTC)