[HN Gopher] Changing how I review code
___________________________________________________________________
Changing how I review code
Author : deletionist
Score : 49 points
Date : 2021-06-29 21:35 UTC (1 day ago)
(HTM) web link (itnext.io)
(TXT) w3m dump (itnext.io)
| marceloabsousa wrote:
| I wanted to read the article but it's behind a paywall. Is there
| a way to access it?
| IvyMike wrote:
| I didn't hit a paywall but the article was originally published
| at https://dangoslen.me/blog/changing-how-i-review-code/ ;
| maybe that will work for you.
| Chernobog wrote:
| > Another common critique is that the pull requests encourage
| reviewers to only look at the code. They never even pull down the
| changes they are reviewing!
|
| Our CI automatically deploys a "review app" when someone creates
| a pull request. This simplifies code review, QA and demoing!
| debarshri wrote:
| It sounds cool that you create an on-demand review app. At a
| small scale it works. At a larger scale it is absolutely
| impossible, due to the resource constraints of creating an
| on-demand environment for every review app.
|
| For instance, in a warehouse automation infrastructure with 30
| devs working on 8 features simultaneously, it is next to
| impossible to create review apps with the resources they need.
| Chernobog wrote:
| I'm not going to dismiss your experience, or your knowledge
| about the environment you describe, but we are most definitely
| not doing small-scale development. Our company
| has a large software stack comprised of several backends and
| frontends, maintained by 8+ teams.
|
| I maintain one frontend in this stack, and the review app
| produced will only contain my version of the front-end, and
| the API calls will go to the backends in the development
| environment.
|
| Review apps for backend services will also point to the
| development environment if this particular backend needs to
| communicate with other services.
|
| When a feature touches a frontend and a backend
| simultaneously, we can solve that in various ways. Sometimes
| the backend can be finished first, but if that's not feasible
| we can supply URL overrides for endpoints to the review app
| of the frontend, so it communicates with the review app of
| the backend (and uses the rest of the development environment
| for the other calls).
|
| This works for us and might not be feasible for all
| projects/stacks/companies for various reasons. YMMV I guess
| :)
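|
| A minimal sketch of how such per-service URL overrides might
| look in a TypeScript frontend (the service names, the
| query-parameter scheme and the URLs are assumptions for
| illustration, not the actual setup described above):
|
|     // config.ts - resolve API base URLs for a review app.
|     // Default: every service points at the shared development
|     // environment.
|     const DEV_API_URLS: Record<string, string> = {
|       orders: "https://orders.dev.example.com",
|       users: "https://users.dev.example.com",
|     };
|
|     // An override such as
|     //   ?api.orders=https://orders-pr-123.review.example.com
|     // makes the frontend review app call a backend review app
|     // instead of the shared dev backend.
|     export function resolveApiUrls(
|       search: string
|     ): Record<string, string> {
|       const params = new URLSearchParams(search);
|       const urls = { ...DEV_API_URLS };
|       for (const service of Object.keys(urls)) {
|         const override = params.get(`api.${service}`);
|         if (override) {
|           urls[service] = override;
|         }
|       }
|       return urls;
|     }
|
|     // Usage in the app entry point:
|     //   const apiUrls = resolveApiUrls(window.location.search);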
| debarshri wrote:
| You are looking at the problem from a developer's perspective.
| From an infrastructure perspective, it is a terrible pattern
| if you own the feature-flow or review-app flow. It is really
| wasteful, with no guarantee that once all the features written
| concurrently are merged, your feature will still work as
| expected.
|
| Let me put it another way: wouldn't it have been better to keep
| the development, test, staging and production environments up
| to date all the time and to have meaningful integration tests?
| t-writescode wrote:
| At a former company I worked for, every branch would create a
| new environment with those changes (and some shared
| components that were owned by other teams); and, if multiple
| repositories had the same new branch name, it would use all
| the ones with the same name.
|
| Depending on your system, this is entirely doable.
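|
| A minimal sketch of that branch-matching idea in TypeScript
| (the repo names and the "main" fallback are assumptions for
| illustration):
|
|     // For each repository, deploy the feature branch if it
|     // exists there, otherwise fall back to the default branch.
|     function pickBranch(
|       repoBranches: string[],
|       feature: string
|     ): string {
|       return repoBranches.includes(feature) ? feature : "main";
|     }
|
|     // Example: the feature branch exists in two of three repos.
|     const repos: Record<string, string[]> = {
|       "web-frontend": ["main", "feat/new-checkout"],
|       "orders-service": ["main", "feat/new-checkout"],
|       "users-service": ["main"],
|     };
|
|     for (const [repo, branches] of Object.entries(repos)) {
|       console.log(repo, "->", pickBranch(branches, "feat/new-checkout"));
|     }
|     // web-frontend -> feat/new-checkout
|     // orders-service -> feat/new-checkout
|     // users-service -> main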
| miltondts wrote:
| Code reviews are not very good:
|
| - Most style issues should be caught by automated tools
|
| - Most functional issues should be caught by tests (written by
| another person preferably)
|
| - To bring someone up to speed on your chosen style, pair
| programming is much faster than code reviews
|
| - To have shared knowledge of code (increased bus factor) pair
| programming, or code walkthroughs are much faster
|
| Not sure why we have fallen so much in love with the idea.
| cbsmith wrote:
| It's interesting to consider a code walkthrough as different
| from a code review. How would you describe the differences?
|
| To me code reviews can catch style and functional problems in
| code that get by your automation systems, but the primary
| impact of a code review is on the team, not the code. It
| imparts not just an understanding and ownership of the code,
| but also establishes/negotiates how the team works together.
| the_gipsy wrote:
| > Not sure why we have fallen so much in love with the idea.
|
| Because organisations obsess about risk minimisation.
| uvesten wrote:
| Pair programming is sooo much better than PRs. I really loathe
| reviewing pull requests: even if you do it absolutely correctly
| and don't miss anything, it's extremely wasteful of both the
| coder's and the reviewer's time, even when there are no changes
| that need to be made. There is the context switch for the
| reviewer, and likewise for the coder waiting for feedback. Ugh.
| Just have two people do the work together, and lose the need
| for constant context switches and wasted time.
| the_gipsy wrote:
| PRs are perfect in open source. In companies they're just an
| illusion of minimising risk.
| dave_sid wrote:
| Yeah, I've always felt that PRs are ineffective. It's too late
| a stage in the development process to give feedback.
| marceloabsousa wrote:
| Code reviews and PRs are different things. Still, PRs can be
| effective with the right tooling and methodology. We've been
| doing continuous code reviews with PRs and it's been working
| great for getting early feedback. More details at:
| https://reviewpad.com/blog/continuous-code-reviews-are-
| the-w...
| dave_sid wrote:
| Sure it is
| uvesten wrote:
| If we want to argue semantics, sure, but (at least in my
| experience) a PR is by far the most common way. And from
| the linked article, they seem to say that pair programming
| is the ideal, but that continuous code reviews bring most
| of the same benefits. Sounds good to me. But. I'm still
| convinced that just doing proper pair programming from the
| start saves time for the organization, it's just that the
| upfront investment seems too great. (I'd argue that
| spending 2x programmer time is easily the cheapest way to
| get a feature done, and if you think you are saving
| costs/being more productive doing solo dev + code review
| you're not accurately measuring time spent. (Or that the
| review is very superficial.))
| geordimort wrote:
| I find that there's a lot of bias in pair programming. One
| person takes the role of the lead dev and the other that of
| the 'copilot', without actively reviewing the code. It works
| great if you want to review later asynchronously without
| needing to ask for more context. Still, pair programming
| doesn't replace an asynchronous code review afterwards.
| Ideally every dev should let the code sink in a bit and then
| review it before distributing it. I see both approaches
| working together; I don't get this trend of advocating for
| pair programming as the best way of getting the highest
| quality.
| convolvatron wrote:
| I personally feel quite comfortable working on something
| and pinging a peer if I'm having an issue figuring out a
| good way to do something or I want to vet the approach.
|
| Then they don't need to spend all afternoon watching me
| get tea and pick my nose, and we still get to have the
| interesting discussions.
| yjftsjthsd-h wrote:
| > Another common critique is that the pull requests encourage
| reviewers to only look at the code. They never even pull down the
| changes they are reviewing!
|
| You should run it somehow, but if you've got CI running tests,
| which I think you should have anyways, then does it matter?
| juanca wrote:
| At least with UI code, the tests / code don't really express
| the experience for the user. (But it's getting better with
| time!)
|
| Given that, it's better to run it locally! For each UI file /
| function, try to get it to execute from a user's POV.
| elwell wrote:
| In practice, I end up finding at least one bug nearly every
| time I pull the code down and test it locally.
| kmtrowbr wrote:
| You should attack it. You should pick it apart and try to break
| it in as many ways as possible. Of course you should meter your
| effort depending on how critical the changes are. It amazes me
| the lackadaisical attitude towards code review. "It will
| probably be okay" is the worst attitude to have with computer
| code. It is the most brittle, brutal environment imaginable.
| all2 wrote:
| Can you expand on this?
| kmtrowbr wrote:
| You should pull the PR you're reviewing down into your IDE
| and you should spend like, an hour reviewing it carefully
| and trying to break it. Have a mental checklist: say they
| renamed a function ... is the file named the same as the
| function? Did they remember to rename the file? Did they
| delete parts of the code? Did they remember to delete
| everything related to that? Often the most challenging
| parts are negative: it's harder to remove properly than it
| is to add new things. Say they changed the validation on a
| model: is this going to work well with the production
| database? Do we need to write a migration to catch the
| database up to the new reality of the code? Automated tests
| are just automated warning flags that give you confidence
| that the codebase is basically hanging together. You still
| need to be very careful & thoughtful as you're making
| changes.
|
| I like to actually reset the branch so I can see all the
| raw changes as uncommitted modifications in my IDE; this
| helps me think it through. In git you can do this via
| (assuming the remote is called "origin"):
|
|     git fetch origin feature-branch   # get the latest branch
|     git checkout develop
|     git checkout -b my_initials/feature-branch
|     git merge --squash --no-commit origin/feature-branch
| all2 wrote:
| Just because there are tests doesn't mean the tests are any
| good. I've only started my quest in TDD and I've discovered it
| is rather trivial to write tests that don't really mean
| anything.
|
| I find myself asking "what is this _supposed_ to do?" and
| "does this actually do what it is supposed to do?". I tend to
| drift into a design mindset during test writing (more so than
| in code review) that highlights data flow and the basic bits
| of logic. Even then, I find that I can miss test cases rather
| easily.
| loloquwowndueo wrote:
| A reviewer in my team is fond of changing some of the code in
| the PR and running the tests. The expectation is that if the
| code is changed randomly a test should always fail. This helps
| catch untested code paths.
|
| It's somewhat similar to the practice of automated code
| fuzzing, only manual and scoped to changes in a particular PR.
| travisjungroth wrote:
| That's mutation testing[1]. For most languages there are
| tools that can scope the mutations to a diff.
|
| [1]https://en.wikipedia.org/wiki/Mutation_testing
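|
| A tiny hand-rolled illustration of the idea in TypeScript with
| Jest-style tests (the function, file names and numbers are
| made up); mutation tools automate this flip-and-rerun loop:
|
|     // discount.ts
|     export function discountedPrice(
|       price: number,
|       isMember: boolean
|     ): number {
|       // A mutation tool might flip `isMember` to `!isMember`
|       // or change 0.9 to 1.0; a thorough test suite should
|       // fail for every such mutant.
|       return isMember ? price * 0.9 : price;
|     }
|
|     // discount.test.ts
|     import { discountedPrice } from "./discount";
|
|     test("members get 10% off", () => {
|       expect(discountedPrice(100, true)).toBe(90);
|     });
|
|     test("non-members pay full price", () => {
|       expect(discountedPrice(100, false)).toBe(100);
|     });
|
| If the second test were missing, the mutant that always applies
| the discount would survive, which is exactly the kind of
| untested code path the manual approach above tries to expose.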
| sfvisser wrote:
| Of course it matters. Otherwise, how do you know if your app
| works as intended?
|
| And don't tell me your tests tell you, because I guarantee you
| they don't!
| the_af wrote:
| Isn't that what QA/testers are for? Testing that the
| feature/release works as intended?
| watwut wrote:
| That is what testers are for. Testing is a great thing.
| bern4444 wrote:
| /s/testers/users
|
| Just kidding, and in reality UI tests exist and aren't super
| difficult to set up. The front end has plenty of tools like
| Cypress and it's pretty simple to automate running a bunch of
| tests that screenshot and diff-compare your site.
|
| I never really pulled down UI code to test it out. There has
| to be a certain level of trust between engineers. Sometimes
| I'd ask for or post screenshots of changes, but that was rare.
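|
| A minimal sketch of such a screenshot test in Cypress
| (TypeScript), assuming the cypress-image-snapshot plugin is
| configured; the route and selector names are made up:
|
|     // cypress/e2e/checkout.cy.ts
|     describe("checkout page", () => {
|       it("renders the order summary as before", () => {
|         cy.visit("/checkout");
|         cy.get("[data-test=order-summary]").should("be.visible");
|         // Compares against a stored baseline image and fails
|         // when the rendered page visually diverges (provided
|         // by the cypress-image-snapshot plugin).
|         cy.matchImageSnapshot("checkout-summary");
|       });
|     });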
| shinryuu wrote:
| Still, getting a second set of fresh eyes using the code
| that you wrote is pretty invaluable. This can then be fed
| back as improvements.
| akiselev wrote:
| _> Just kidding, and in reality UI tests exist and aren't
| super difficult to set up. The front end has plenty of tools
| like Cypress and it's pretty simple to automate running a
| bunch of tests that screenshot and diff-compare your site._
|
| In a well oiled machine, that's what the QA department
| does. They have close working relationships with the
| engineering and customer support departments so they
| develop testing processes and systems tailored to the
| "quirks" of engineers and users. They spend extra time on
| the places they know the engineers have blind spots and
| work on automated regression tests based on customer
| support workloads.
|
| Whether they go as low level as unit tests depends on the
| organization but one place I worked at, the domain was
| complicated enough that "I see no red squiggles in my IDE,
| pinky swear" was the threshold to go from engineering to
| QA. It was an ongoing joke that whenever there was a major
| dependency update, 10-25% of the work sent to QA that day
| contained obvious syntax errors and wouldn't compile
| because the IDE everyone used often took an hour or more to
| reindex the project without any clear indication that it
| was in a useless intermediate state. Running the compiler
| would slow it and the indexer to a crawl, so the engineers
| just YOLOd it so they could move on to other work, and the
| QA department set up a static analyzer with an empty rule
| set on all PRs to catch dumb errors caused by immutable
| process/purchasing decisions.
|
| IME these kinds of QA departments are the #1 springboard
| into software engineering roles because they work so
| closely with engineering and are exposed to code in the
| form of automated tests.
| clipradiowallet wrote:
| I had the pleasure of being at a startup with a very talented SRE
| named M. Harrison. His minimum-level review criterion was that
| when you were planning to approve a PR, you should summarize
| everything the PR does in the approval comment. Responses like
| ":+1:" or "LGTM" were not allowed. When I would read a PR
| intending to understand and then summarize, the errors or
| omissions (eventually) started to just jump out at me. I'm not
| sure if I'm articulating it properly...but that expectation of a
| small book report-ish paragraph helped me become much better at
| critically reviewing PRs.
|
| disclaimer: the majority (not all) of the PRs were in a _very
| very_ large Terraform codebase.
| cbsmith wrote:
| Interesting. If you weren't intending to understand and be
| able to summarize before, what were you doing when you
| reviewed code?
| dan-robertson wrote:
| I'd be curious to see a survey of how different companies do code
| review or its equivalent. I'd like something about the
| motivations they had for setting up the systems in certain ways
| and how they changed them over time. At my employer, code review
| was once done periodically as a big Herculean effort at each
| "release" (development stopped for code review) involving
| printing out the code/diffs on physical paper to read it and a
| senior person reading basically all the critical code. Over time
| the system became more automated and digital, and the
| requirements for review were also reduced. But I have no idea how
| that compares to other places apart from the first thing sounding
| very silly.
| jimbobimbo wrote:
| I'm doing _a lot_ of code reviews and I never run the code
| myself. PR owner does that, PR validation suite (essentially
| reduced CI) does that.
|
| "This code works when you run it" is the last thing I care about
| when I'm reading the code - it's a given, we won't allow change
| to merge if it wasn't the case. Code reviews are there to catch
| mistakes in design, copypasta, how the change fits into the
| product overall.
| deletionist wrote:
| https://archive.is/jeDmL
| avmich wrote:
| Can somebody post a link which wouldn't start with "One more
| step Please complete the security check to access"?
| kasiiaa wrote:
| > Originally published at https://dangoslen.me.
|
| https://dangoslen.me/blog/changing-how-i-review-code/
| avmich wrote:
| Thank you.
___________________________________________________________________
(page generated 2021-06-30 23:01 UTC)