[HN Gopher] The Code Review Pyramid
___________________________________________________________________
The Code Review Pyramid
Author : adrianomartins
Score : 157 points
Date : 2022-03-21 17:36 UTC (1 day ago)
(HTM) web link (www.morling.dev)
(TXT) w3m dump (www.morling.dev)
| Quarrelsome wrote:
| aren't we missing something at the base of the pyramid:
|
| > does it actually work?
|
| I feel like the foundation of any code review has to be a bug
| analysis to ensure that the feature works, doesn't create
| regressions or fail in another supported use-case.
| pc86 wrote:
| Potentially unpopular opinion, but I would much rather get
| well-formatted, performant, readable code that misses the
| business case a bit than get a ball of spaghetti mess that
| nails it. Not only will it be easier to update the code in the
| former, but the business pressure to just deploy the latter and
| "fix it later" can sometimes be too high to resist. I was just
| looking at a "proof of concept" with a 2017 commit date in
| production last week, the danger of that is all too real.
| Quarrelsome wrote:
| No, I mean like the primary function of every PR is to ensure
| the code does what it's trying to do and doesn't (for example)
| fail to compile, break the build, or create an infinite loop or
| an out-of-memory exception.
| b3morales wrote:
| On a dayjob closed-team project, if my coworkers are
| putting up PRs without making sure they meet functional
| requirements, that is a problem that needs to be solved
| several steps _before_ the PR. If it's a junior person who
| doesn't know better, more working side-by-side with them to
| get to the PR. If it's a senior person dealing with a
| shitty spec, more discussion with PMs/designers before we
| start coding. If it's a senior person who's putting up
| broken code because they don't care...that's probably a
| management conversation.
|
| I _have to_ be able to trust that my teammates are doing
| their best to produce working components. Otherwise code
| review is a bandaid covering a hole in the hull of a
| sinking ship.
| digisign wrote:
| Arguably too fundamental to mention, at least in one slide.
| Perhaps covered during the developer-testing period, before
| peer review.
| nescioquid wrote:
| I think the purpose of code reviews is really information
| sharing, rather than attempting to identify and remove defects
| (which might occasionally happen, but is more reliably
| addressed via testing). On the one hand, the team has the
| opportunity to see how a new feature works, on the other, the
| implementer gets feedback on team norms.
| owlbite wrote:
| Completely different to the order I use, which basically amounts
| to "how hard is it to fix this later".
|
| So the most important questions are: (1) The API must be as
| correct as possible, especially if it's exposed outside of the
| immediate team. Changing APIs is a royal pain. (2) Documentation
| of what's going on, especially for non-obvious stuff. If we come
| back to this in 3 years' time and the author has moved on, do we
| have any hope of fixing it? (3) Testing. What degree of confidence
| do we have that this is doing the right thing? Will it protect me
| when I change it in a few years' time?
|
| Any minor bugs or code style issues can be fixed later as/when
| they are found and cause problems.
| greymalik wrote:
| I think you might be misreading the pyramid? The most important
| things are at the bottom. It aligns exactly with what you're
| saying.
| gunnarmorling wrote:
| Exactly that. "how hard is it to fix this later" is what
| drives the ordering in the pyramid.
| mannykannot wrote:
| While I recognize the problem as real and significant, and I
| think a hierarchy of concerns is a valid way to mitigate it, I
| feel there are a few problems in this particular pyramid.
|
| Of all the issues that might be raised in a review, which are the
| most important to fix? I find it difficult to imagine a ranking
| in which incorrect functioning (including security
| vulnerabilities and truly inadequate performance) is not at the
| top (answering the ranking question with "all of them" would just
| be a way to avoid the issue).
|
| In this pyramid, issues of this nature are to be found at all
| levels except the top one, mixed in with more-or-less subjective
| ones, such as "Is a new API generally useful and not overly
| specific?" - pointless flame wars have erupted over issues such
| as this (also, orthogonally, code review is late in the game to
| be asking this one.)
|
| For a review to be successful, its participants need to be able
| to restrain themselves from going down rabbit holes that could
| not lead to necessary rework - or the review needs a strong
| moderator, if the individual participants cannot act with
| adequate self-restraint.
| pcmoney wrote:
| If your team does not share a subjective understanding of what
| makes their product/API useful, none of this matters. Focusing
| on objective technicalities is the least useful aspect of code
| review. If you have someone who is constantly saying "Well,
| that's subjective" and derailing the conversation that way, you
| need to coach them or move them off the team.
| mannykannot wrote:
| Are you saying that if you enter a code review without having
| a shared subjective understanding of what makes your
| product/API useful then the review is not likely to be
| successful, or that code review is the place to form such a
| shared subjective understanding? I can agree to the first
| interpretation, but about the latter I would say that code
| review is quite a bit later than optimal, and that there are
| other, better ways of achieving it.
|
| If your organization is such that you cannot even discuss
| these issues until you have working code, then you have
| other, bigger, problems.
| th3iedkid wrote:
| Another aspect: the pyramid makes code review look like the
| only gate around the actual value outcome. There are a lot of
| other practices, like pair programming, that eliminate most things
| in the pyramid, taking them out of scope for a review process.
| neebz wrote:
| previous discussion:
| https://news.ycombinator.com/item?id=30674159
| sz4kerto wrote:
| Ours is different.
|
| Tests (end-to-end and integration) are at the bottom. If there
| are no tests that prove that the code works, we won't really look
| further because we don't even know if the code implements the
| right thing. Then come interfaces, then implementation, then
| style.
| sublimefire wrote:
| What's more, the tests will highlight the API and the
| expectations (see API semantics in the pyramid). By nature, the
| process of writing tests will usually catch some bugs. I'm not
| arguing for the tests to cover each possible scenario, but they
| must exist. If, on the other hand, it is hard to add the tests
| then it probably means you have bigger problems.
| ilovecaching wrote:
| This doesn't seem correct to me. Each of these categories should
| bear equal weight; I won't spend less time on code style or tests
| to spend more time on API design. My code review checklist is a
| checklist. I spend as much time as necessary on each item in the
| checklist, no more, no less.
|
| I also think that code style, fuzzing, and benchmark tests are
| the most important items on my checklist. They keep the code base
| from becoming a Jenga tower that is brittle to change. They also
| keep us honest as to whether a feature is a perf hit or not.
| Tests are far more important than this graphic portrays.
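| To make the "fuzzing and benchmark tests" idea concrete, here is a
| minimal, purely illustrative sketch in Python: a randomized
| ("fuzz"-style) smoke test plus a micro-benchmark. The function
| under test, normalize_whitespace, is hypothetical.
|
|     import random, string, timeit
|
|     def normalize_whitespace(s: str) -> str:
|         return " ".join(s.split())
|
|     # Fuzz-style check: random inputs must not crash, and the
|     # result must be idempotent.
|     for _ in range(10_000):
|         s = "".join(random.choice(string.printable)
|                     for _ in range(random.randint(0, 80)))
|         out = normalize_whitespace(s)
|         assert normalize_whitespace(out) == out
|
|     # Micro-benchmark: a rough guard against perf regressions.
|     print(timeit.timeit(lambda: normalize_whitespace("a  b\t c " * 100),
|                         number=10_000))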
| kolanos wrote:
| Code style is the first thing I eliminate from a review
| process. I find the most commonly used code formatter and
| linter for $LANG (the more opinionated the better) and plug them
| into pre-commit hooks and CI. Code style is just not worth my time.
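| For what it's worth, a minimal sketch of that kind of gate as a
| git pre-commit hook, written in Python; the tool names (black,
| flake8) are just examples of an opinionated formatter and linter,
| and the same commands can run again in CI.
|
|     #!/usr/bin/env python3
|     # .git/hooks/pre-commit: reject commits that fail formatting
|     # or linting, so humans never have to comment on it in review.
|     import subprocess, sys
|
|     checks = [
|         ["black", "--check", "."],  # formatting (check only)
|         ["flake8", "."],            # linting
|     ]
|
|     for cmd in checks:
|         if subprocess.run(cmd).returncode != 0:
|             print("pre-commit: '%s' failed" % " ".join(cmd))
|             sys.exit(1)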
| simonbarker87 wrote:
| I'm at the point now where I don't want to think about layout
| and formatting of my code, just let a formatter do it all for
| me on save please
| tuyiown wrote:
| But code formatting is not a solved problem, right?
|
| You either have to fiddle until the formatter outputs a proper
| layout, or let it rest as is and never learn how to format
| your code, leaving the code base littered with less-than-ideal
| layout.
| twblalock wrote:
| The proper layout is whatever the formatter outputs.
| That's the point. Everyone uses the formatter and accepts
| whatever it does, and nobody needs to think about it
| anymore.
| kolanos wrote:
| I'll take a consistent less than ideal layout over
| whatever $HUMAN comes up with. But to me an inconsistent
| layout is the less than ideal one. And humans excel at
| inconsistency.
| RussianCow wrote:
| I configure Prettier once on my JS codebases and then
| never touch it again. Seems about as "solved" as is
| reasonably possible.
| ilovecaching wrote:
| You're conflating style with formatting. No formatter will
| tell you if you've picked clear names or need to decompose or
| combine something for better readability.
| kolanos wrote:
| As long as the starting point is what an opinionated
| formatter spits out, then we can talk about the hardest
| thing in software engineering (naming things). But let's be
| real, most "code style" conversations are of the formatting
| variety, and they're the purest form of bikeshedding.
| nhoughto wrote:
| I'll use this, I think; very handy. I'd probably emphasize all the
| text/descriptions more too. There are lots of important nuggets in
| there that aren't obvious at a glance, like automating the top
| portions of the pyramid: humans shouldn't be mis-formatting code,
| or having other humans comment on it, anymore.
| AngeloR wrote:
| I've been thinking about code reviews a lot... and I think a big
| problem with existing code reviews is that for most orgs, code
| reviews were a way to move planning to the end of the development
| process instead of keeping it up-front.
|
| I think the foundation of the pyramid (API and Implementation
| semantics) is often discussed in code reviews when it should have
| almost no place. By the time you get to the code review, these
| should already have been ironed out and shared with the whole team.
| What you should be looking at is simply adherence to the plan,
| and calling out deviations.
|
| I wrote a small rant about this:
| https://xangelo.ca/posts/code-reviews-are-failure/
| gunnarmorling wrote:
| > is often discussed in code reviews when it should have almost
| no place. By the time you get to the code review not only
| should these have been ironed out and shared with the whole
| team.
|
| That's a fair point. It's my background in open source and, I
| suppose, the specific ways we work in the projects I'm involved
| with, which made me add these things at the bottom. Oftentimes,
| contributors come up with PRs for new features, and there was
| no prior discussion about API design etc. (although we highly
| recommend having such a discussion before starting work on a
| large PR).
|
| I.e. oftentimes a PR review will be the first time I see how
| something is implemented, what the API looks like, etc. It may be
| different in more controlled settings, with in-house development
| teams adhering to more formally defined processes.
| VBprogrammer wrote:
| I've noticed a phenomenon where I sometimes have to comment about
| something near the pointy end of the code review pyramid in order
| to get my brain into the correct mindset to really start grokking
| how and why someone has made the changes they have. After I break
| this barrier it often leads to the deeper questions about whether
| it is doing what it's supposed to be doing.
| makecheck wrote:
| An important one often overlooked (due to our tendency to focus
| on just the "diff"): _what else should be changing_ or _could be
| affected_?
|
| Very easy to forget to change code that isn't highlighted in the
| review.
| willcipriano wrote:
| The more experienced I get the more I want to work on teams with
| a higher degree of trust than ones like this. Don't get me wrong,
| I think code review can be valuable, but on high-performing teams
| it works best on an as-needed basis.
|
| The way this has worked successfully for me in the past is that,
| as a new joiner to a team, you submit for code review every time. My
| first couple of reviews tend to have a good bit of feedback as I
| settle in to the style of the team and often a new language or
| framework. After a while though I become a domain expert for the
| projects I am working on, and code reviews end up becoming a
| rubber stamp. At that point it's just something slowing me down.
| I'd request review when I need it, or when I'm making a big change
| that will affect everyone; otherwise there are many cases where
| there is really only one way to skin this cat and it's totally
| perfunctory.
|
| This sort of arrangement also requires a bit of ownership: if you
| see reports floating around about an endpoint you just worked on
| having issues, you have to be trusted to jump on it. I
| feel like trust and ownership are touchy feely people problems so
| tech people invented code reviews and linters to try to make up
| for not cultivating the sort of culture where they aren't
| required on every single little thing.
| the_jeremy wrote:
| I trust the people whose code I review to be competent
| developers. But everyone occasionally makes typos, misreads
| requirements, has blind spots, etc. Even if the creator is the
| expert and the reviewer is there to catch only obvious
| mistakes, that still has value.
| edgyquant wrote:
| On my team, we have one other engineer, no matter your or their
| official position, just do a quick glance over for obvious
| issues. This does not prevent real bugs from getting merged in;
| the test cases are supposedly for that, but even those have some
| slippage.
| Cerium wrote:
| I work on a team that runs following a high trust and domain
| ownership model. When I joined our process was "write some
| code, test it, be confident in the code, commit it to trunk". A
| number of years ago we worked our way through "Clean Code" in a
| reading group and decided to try out code reviews. Each week
| one of us would present our recent work and the team would tear
| it apart line by line, and in the process we developed a better
| understanding of our group philosophy around style and
| practices.
|
| When we switched to Git we decided that we liked code reviews
| and wanted to make them part of merging to develop. Depending
| on the content the reviews are deeper or more superficial. Many
| times even when I review the code of somebody making changes in
| their own subject matter domain I have found bugs that simply
| needed another set of eyes to catch.
|
| I like code reviews, and in the context I use them I find that
| they are a complement to trust, not a substitute.
| quadrifoliate wrote:
| The popularity of this sentiment on HN terrifies me.
|
| Please don't do this. I keep finding rubber-stamp code reviews
| (or none at all) while digging into codebases to see where bugs
| were introduced, and 9 times out of 10, it's someone who didn't
| bother to get proper reviews for their code because "it's just
| something slowing me down". I can assure you that you have bugs
| and usability issues in your code, not just "linter issues".
| Everyone does.
|
| If your code reviews are becoming rubber stamps, that's an
| indication that you need to slow down and teach other team
| members about the system, or hire smarter people who can
| understand your reviews more quickly. Don't respond by getting
| rid of the review process as a knee-jerk response to these
| sorts of organizational constraints.
| ryandvm wrote:
| I disagree.
|
| If you have a practice that isn't effective (e.g. code
| reviews that aren't preventing bugs), doubling-down is almost
| never the correct solution.
|
| Code reviews are like unit tests; they're helpful under the
| right conditions. Mostly on the tricky bits or when a dev is
| working in an unfamiliar area.
|
| It's a common fallacy for folks to believe that if a little
| of something is good, then a lot of it must be better. I've
| been at places with 100% code test coverage or requiring
| EVERY SINGLE PULL REQUEST to be reviewed. It's obnoxious, and
| it doesn't do shit to prevent bugs. Instead it causes folks
| to burn out on the practice and write crap unit tests or
| stamp "LGTM" on every PR they're forced to evaluate. People
| have a limited capacity for cognitive engagement and wasting
| that on piddly crap means they can't spend it on the
| important bits.
| bena wrote:
| I don't think this is doubling down so much as singling
| down.
|
| The problem with unit tests and code reviews is that they
| ultimately aren't sexy. There's no way to do them that
| destroys the integrity of the system. Especially when a
| large percentage of the code written is passable. Yes,
| reviewing and testing code that works and is bug free is a
| bit of a waste of time. But you won't know what code isn't
| working and bug free unless you test and review.
|
| And then there's the fact that most of what we do is not
| that important. If my code is buggy or inefficient or a
| third thing, the worst thing that happens is a few people
| frown while I get it sorted. If you knew that if your code
| had a bug in it, it would definitely kill 5 - 10 people
| every time it is run, you'd treat review and testing with
| the utmost importance because the cost of committing buggy
| code is way too high.
|
| Testing and review becomes exactly as important as your
| product.
| AnimalMuppet wrote:
| If your code reviews aren't catching bugs, the correct
| answer is to _fix the code reviews_, not to stop doing
| them.
|
| I mean, it's possible that code reviews are worthless. But
| enough people have found value in them, across enough
| organizations, enough code styles, and enough decades, that
| it seems really hard to conclude that. It seems more
| probable that the team that finds them useless is doing
| them wrong. (Yeah, I know, No True Scotsman. Still, the
| first response should be "maybe our team is doing them
| wrong, and can we fix that?", rather than "let's stop doing
| them".)
| [deleted]
| boredtofears wrote:
| I don't disagree with you at all, but I'm not sure I've
| been a part of an organization that does it right. Beyond
| the stuff like style, conventions, and obvious
| bugs/problems, it's challenging to provide meaningful
| insight to someone who has many more hours of experience
| in a particular area of a codebase than you do. I
| know I personally hold back from calling out things that
| I find to be subjective -- even if I think I would have
| done it differently -- simply because I'm often unsure
| about the things I don't know.
|
| Plus as a peer reviewer, you're often in the same part of
| the sprint cycle as they are - meaning your mind is
| already juggling a lot of things about your own code.
| It's tough to "come up for air" and then immediately dive
| into someone else's code changes with enough mental
| capacity to provide a truly useful review.
|
| It's easy to slip into the rubber stamping mentality when
| you generally trust the ability of a particular
| contributor.
| jonex wrote:
| "Beyond the stuff like style, conventions, and obvious
| bugs/problems ..." Catching the obvious issues is IMO one
| of the main benefits of never skipping code reviews. Even
| trivial changes often have those (at least when I write
| them) and I've most likely saved many hours of debugging
| by a reviewer catching stuff like that. But this does
| require more than simple rubber stamping. I encourage my
| reviewers to read my code with the assumption that I make
| errors, basically looking for the bug that's fairly
| likely to be there.
|
| The other thing I think is easy to do is simply see if
| you understand the code. If the reviewer struggles to
| understand what's going on, chances are that other people
| will struggle as well. In this case, your lack of
| knowledge is an asset: the original author is unlikely to
| see how it looks to someone not as well versed in
| the problem.
| cillian64 wrote:
| You assume here that people only work on the area of the
| codebase which they are most familiar with. I think a
| sign of a healthy team is when people are contributing to
| other parts of the code and not getting stuck in a rut
| where each area of code only has one
| maintainer/gatekeeper.
|
| Also, for less experienced members, reviewing the changes
| of more experienced people is a great way to learn if you
| properly sit down and understand the change (why it's
| being done, why it's being done this way) rather than
| just skimming through. In addition to catching mistakes,
| this is useful to check that code is actually
| comprehensible to people other than the author and a good
| time to request more/better comments if necessary.
| boredtofears wrote:
| I'm not assuming that at all. When you're at a company
| that has $Xmm lines of code floating around in its
| repositories, it's unrealistic to assume you will be able
| to cross-functionally train all developers on all code.
| Of course you try to aim for that, but you'll inevitably
| find yourself in situations where you don't.
| librish wrote:
| Velocity vs correctness is a trade-off. Stringent review,
| testing, and ownership requirements can easily slow down
| developers by an order of magnitude. Which is often the right
| call, but not always.
| Kranar wrote:
| I feel I work on a high-performing team, and code reviews are a
| joy. When I submit my code for review it's an opportunity to
| share my work with my peers, potentially explain my thought
| process and my work process, and we get to learn from one
| another; when I get code to review it's the opposite: I get
| to see how my coworkers think and tackle problems.
|
| I have never once felt like a code review did anything to erode
| trust and I don't see how a code review could slow down a work
| process. Using git, as soon as I submit code for review I begin
| work on the next task and if I need anything from the branch I
| submitted for review, I just merge that into the new branch. If
| changes are requested I make the changes and then merge again.
|
| Code reviews are among the best ways of ensuring that code
| remains consistent and that everyone is learning from everyone
| else. That's why in my team all pull requests are reviewed by
| two people including one junior developer.
| bmj wrote:
| On my team, the level of scrutiny a merge request receives
| depends on the author and the functionality. MRs from a
| principal or lead engineer tend to get rubber-stamped at a
| higher rate, though, to your point, many of us will explicitly
| call out code that we would like to have reviewed in more
| detail. If a more junior level developer submits a merge
| request, it is highly likely their code will get a close
| review.
|
| I'm not sure I agree with your premise that code reviews are
| the product of a culture with a lack of trust and ownership. I
| think, in fact, submitting to a review process _is_ a sign of
| ownership -- it's saying "hey, the quality of the code we
| produce as a team is important to me, and I know I make
| mistakes."
| larusso wrote:
| I'm called a principal engineer in my organization (I know
| that can mean everything or nothing). I like to use code
| review requests, and sometimes gates, to put a fence in
| front of my ego. This is also a great way of sharing
| knowledge. I've had great sessions where questions about X,
| and the answers given verbally or in text, suddenly put
| stuff into a different perspective. Sometimes I realize that
| my solution might be too complicated, or that a more generic,
| performant (paste fancy adjective to describe your code here)
| one is easy to achieve.
| digisign wrote:
| Yes on trust. However... I'm pretty darn experienced in what I
| do these days, and every day I still write a bit of crappy
| code, usually as I come to an understanding over time of the
| current problem. WTFs come from code written six months ago as
| well as yesterday. If someone can point out an issue early, it
| saves everyone time. So, I welcome it.
| mpalczewski wrote:
| First thing I review is readability.
|
| Code should be readable. Once code is readable it makes reviewing
| the rest much easier. Readable code is more maintainable and
| problems jump out at you.
|
| Stuff other than readability is important, but if you focus on
| readability it makes the rest of the review go very smoothly.
| convolvatron wrote:
| unfortunately, 'readability' is fairly subjective. I was part
| of a group recently that spent a good couple of hours every week
| arguing over whether 'ctxt' or 'context' was more readable.
|
| when confronted, they explained to me as one would a child,
| that they cared deeply about code quality
| mpalczewski wrote:
| > when confronted, they explained to me as one would a child,
| that they cared deeply about code quality
|
| these people sound useless.
|
| Readability as it applies to code is how easy it is to read
| the code and understand it in relation to how complicated the
| problem domain is.
| [deleted]
| [deleted]
| _nhynes wrote:
| That's ridiculous.
|
| Obviously `ctx` is the most readable.
| maze-le wrote:
| It's obvious from the context that "c" is the only variable
| name that should represent a context.
| thanhhaimai wrote:
| +1, I also focus first on readability while reviewing code.
|
| Software engineering is a collaborative process. I don't write
| code for the machine; I write it for my colleagues and my
| future self to read. Code we wrote 6 months ago looks like
| someone else's code. Readability is important unless you're
| convinced that it's prototype/throwaway code.
|
| After readability, I'd look for tests. It's a theme: good
| tests are also meant to be read as documentation. Besides
| ensuring correctness, they're also the example my colleagues or
| future self can refer to for the module's usage.
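| As a small illustration of tests-as-documentation, a single
| pytest-style test can double as a usage example; the slugify
| function here is made up.
|
|     def slugify(title: str) -> str:
|         return "-".join(title.lower().split())
|
|     def test_slugify_shows_typical_usage():
|         # A new reader can learn the expected input/output shape
|         # from this test alone, without opening the implementation.
|         assert slugify("The Code Review Pyramid") == "the-code-review-pyramid"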
| nine_zeros wrote:
| If the code is doing the wrong thing in the first place, no
| amount of nitpicking on readability will save time.
|
| First pass: Functional correctness
|
| Second pass: Better ways to do the same thing (optional)
|
| Third pass: your favorite nitpicks.
| thanhhaimai wrote:
| Your post assumes that readability review is
| nitpicking. It's actually about making the code idiomatic,
| consistent, predictable, quickly understandable, and low
| mental stress to read. The "nitpicking" parts you
| referred to are mostly about ensuring code consistency and
| predictability. But there is more to readability than that:
|
| 1) Does the code use a third-party library where the standard
| lib is sufficient? Is there a way to use native language
| features (e.g. list comprehension) to make the code more
| idiomatic to people familiar with the language? (A tiny
| example follows this comment.)
|
| 2) Do the names make sense in the business context? Can a
| new hire read just the public interface and be able to guess
| the usage?
|
| 3) Do people have to jump around and full-text-search the code
| base to understand what's going on? Is automatic dependency
| injection being overused? How many things do people need to keep
| in their head to be able to follow this code?
|
| It's all readability there, and I'd argue it's not only about
| style or formatting.
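| A tiny example of point 1), in Python and with made-up data: the
| same filtering step as a hand-rolled loop and as a comprehension.
|
|     orders = [{"total": 120.0, "paid": True},
|               {"total": 30.0, "paid": False}]
|
|     # Hand-rolled loop: extra mutable state for the reader to track.
|     paid_totals = []
|     for order in orders:
|         if order["paid"]:
|             paid_totals.append(order["total"])
|
|     # Idiomatic comprehension: same result, reads as one thought.
|     paid_totals = [o["total"] for o in orders if o["paid"]]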
| pc86 wrote:
| This structure (especially the optionality of the second
| pass) is how you end up with perfectly readable code
| iterating over List<T> when a hash map would suffice and be
| orders of magnitude more performant.
|
| Nobody can do it perfectly, but I think trying to keep
| personal nitpicks out of reviews as much as possible -
| ideally by codifying _team_ nitpicks in automated
| formatting/.prettierrc/whatever - lets people focus on things that
| matter: what the code does and how it does it.
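| A quick, made-up illustration of that List<T>-vs-hash-map point,
| sketched in Python with a list of dicts standing in for List<T>:
|
|     users = [{"id": i, "name": "user%d" % i} for i in range(100_000)]
|
|     # Readable, but O(n) per lookup -- painful inside a hot loop.
|     def find_linear(user_id):
|         return next(u for u in users if u["id"] == user_id)
|
|     # One-time index build, then O(1) per lookup.
|     users_by_id = {u["id"]: u for u in users}
|
|     def find_hashed(user_id):
|         return users_by_id[user_id]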
| watwut wrote:
| Then again, that one is super easy to find and fix if it
| turns out to actually be a problem.
| mpalczewski wrote:
| yes, the strawman of nitpicking has been killed.
| sgeisenh wrote:
| I think you're missing the point: it's incredibly hard to
| determine functional correctness of unreadable code.
|
| So the first priority is getting the change to a readable
| state. I'm not talking about nits, I'm talking about
| minimizing the cognitive overhead to truly understand what
| the code is doing.
|
| That being said, it is also easy to identify nits during this
| phase. In codebases that have collections of best practices,
| this often helps to ensure that the change is expressed in
| terms of a common vocabulary. Once this pass is done, the
| reviewer can usually better understand the intent of the
| changes.
| nine_zeros wrote:
| > I think you're missing the point: it's incredibly hard to
| determine functional correctness of unreadable code.
|
| There is a spectrum between style differences and
| absolutely unreadable code.
|
| If it is absolutely unreadable, sure, send it back to be
| readable. But if it is merely styling issues, I'd encourage
| you to understand that you are not saving any time by
| focusing on styling before functionality.
| chimprich wrote:
| > I'd encourage you to understand that you are not saving
| any time by focusing on styling before functionality.
|
| Far more developer time is spent reading code than
| writing it.
|
| If you can speed up the time required to understand a
| piece of code by improving the style, then it's almost
| always worth it. For a professional software engineer,
| just above "absolutely unreadable" is far too low a bar
| to aim for.
| nine_zeros wrote:
| I am still not convinced that readability comes before
| functionality.
|
| Imagine a developer building a PR for 2 weeks, then
| reviews back and forth for 2 more weeks. Now 4 weeks have
| passed and only now the reviewer reviews the
| functionality - only to find that the entire
| implementation is wrong/could be done in a better way.
|
| What a waste of 4 weeks of both the author and the
| reviewer! This could have been short-circuited very early
| in the process.
|
| On my team, we default to early feedback on functionality
| and let the CI enforce what it can. Everything else is
| debatable.
| GaveUp wrote:
| I don't know if I agree that the second pass is optional.
| I've found that pass is the one I've seen have the most
| benefit, particularly with newer developers. It serves two
| purposes when done well.
|
| First, it gives you an idea of how well thought out the
| implementation was (i.e., was this a quick hack to just finish
| an asked-for requirement, or was the best design targeted?). It
| also helps newer developers develop a voice. Often, at the
| start, newer devs will take what a more senior dev says as
| gospel, but by striking up a conversation and, in some ways,
| making them defend their choices it can help build confidence
| and that voice to speak up when something doesn't seem right.
|
| Second, I've found it a good way to introduce people to new
| approaches to accomplishing tasks. Not everyone spends their
| off hours studying patterns and practices, and rarely is there
| time during a work day to do this properly, so code reviews
| are a natural place to bring these things up, as there are
| concrete comparisons and examples to work with. That helps
| spark a dev's interest in looking into the topics further.
| mdtusz wrote:
| As a counter argument, it's much easier for a developer to
| write code that does the right thing when that code is
| readable.
|
| Too often I see code that is _maybe_ correct, but the
| reviewer can't actually be sure of it without manually
| testing it, and the chances that a future reader in 4 months
| will have any idea what it does or why are extremely low.
| Good variable and function names get you 90% of the way there,
| yet for some reason cryptic code is still all too common in
| the world.
|
| To me, reviewing for readability isn't nitpicking - it's
| equivalent to a mechanical engineering design review pointing
| out that design for maintenance or design for assembly has
| been entirely ignored.
| simonbarker87 wrote:
| I like the attempt at making this process smoother; I guess where
| to place the importance of different aspects relative to each
| other is always down to opinion and interpretation.
|
| I wrote this piece a while ago focussing on using the
| Must/Should/Could principle to speed up the review process and
| put a less opinionated framework in place to help keep code
| review moving.
|
| https://careerswitchtocoding.com/blog/moscow-the-best-code-r...
|
| It's not an original thought of mine, but many people hadn't heard
| about it, and off the back of this post I've had a few people get
| in touch to say they've implemented it in their team and it's
| helped improve code review speed.
___________________________________________________________________
(page generated 2022-03-22 23:01 UTC)