
How to ease/improve PR reviewing process #11166

Closed
2 of 13 tasks
domenkozar opened this issue Nov 20, 2015 · 10 comments
Labels
0.kind: enhancement (Add something new)
2.status: stale (https://github.com/NixOS/nixpkgs/blob/master/.github/STALE-BOT.md)
9.needs: documentation
9.needs: reporter feedback (This issue needs the person who filed it to respond)

Comments

@domenkozar
Member

domenkozar commented Nov 20, 2015

We had an open space session at NixCon about how to get more people to participate in the PR review process and how to make it easier.

Here are my quick notes:

  • Hydra spams users; could we instead send one email per evaluation?
  • automatic tagging would be hard, but it's doable for things like a nixos tag for
    everything that touches nixos/
  • improve Travis with a wrapper that hides output, prints dots, and shows the last
    1000 lines on failure (or uploads them to S3)
  • automatic linter for new pull requests
  • tagging issues/PRs helps; how can we give that ability to anonymous users?
  • encourage people to review PRs even without commit access, as it's good
    feedback
  • encourage use of nox-review
  • tag the people who last touched the files (see #11160, "Automatically mention
    potential reviewers on pull-requests using mention-bot")
  • gather statistics on which kinds of PRs most commonly rot
  • feature-freeze time window for NixOS; purge old issues once a NixOS release is no
    longer supported
  • activate Darwin builds on Travis CI
  • monthly online PR-cleaning gathering
  • use the time at sprints to encourage people to spend an hour reviewing their own
    PR and 10 others
domenkozar added the documentation and 9.needs: reporter feedback labels on Nov 20, 2015
@vcunat
Member

vcunat commented Nov 25, 2015

Travis CI for Darwin? Is that even possible? I thought there were licensing problems with such things.

@davidak
Member

davidak commented Nov 25, 2015

@vcunat https://docs.travis-ci.com/user/osx-ci-environment/

Travis CI uses OS X 10.9.5.

@domenkozar
Member Author

Yes, they are using https://macstadium.com/cloud

@Profpatsch
Member

> automatic linter for new pull requests

Oh, is there a nix linter?

@grahamc
Member

grahamc commented Feb 6, 2017

There was another discussion at FOSDEM this year:

http://piratepad.net/1nHg65LMQj

Notes duplicated below, in case that link goes dead:
# Decision process of pull requests

We need a decision process. When is something merged? When not?
Pull requests shouldn't stay floating.

For the pull requests where it isn't clear whether they are good or bad,
we would need some kind of review process. We could look at the approach
of the Boost library review process.
==> Nothing is ever simply "good or bad", just different depending on the observer.
You should always try to think of consequences instead.
The question is: what consequences does merging have?


There is a review period that lasts 3 or 4 weeks. You have time to
provide comments (technical feedback). Then a review manager is
assigned who goes through all the feedback and summarizes it.
They then explain why something can or cannot be merged, or
give an overview of what would have to be fixed to get it merged.

As a contributor, you want to have a reasonable process to get
it merged.

nbp: I see one flaw in this process. I think this is
too strict. There are cases where people are not sure whether
something is a good thing or not. I think we need clear guidelines
on what happens next if a review is inconclusive. Leaving the
decision to only one person is probably not a good idea.


Person C: @eelco gets all the mentions.
Eelco: Around 100,000 :-)

nbp:  How can we lower this burden for him?

Eelco: We have an open issue about maintainer structure. For a model
like the Linux kernel? For packages we already have maintainers.
However, perhaps for things like different platform support, or
very large changes, we do not have a clear responsible person yet.

People have mentioned the zmq process of Pieter Hintjens,  "Just merge it"
or don't merge it. But zmq is a rather well-contained and limited-scope
library.


Maarten:  The main issue I have with the zmq one is the merge by default. I
don't like that.

nbp: Our process is somewhat organic; we do not have a mastermind who
decides where we go. (??)

Eelco: How is this decided within the Boost community?

Person A: It is a community-driven thing. During the review process, everybody
leaves comments on why they're for or against it (technical comments). You
can give general feedback, but you do not have to vote yes/no. But there
is also a voting process, where you specifically say "I am against it, unless
you address these points".



nbp: Okay, what do we do if we disable commits to master and only use pull requests?
Will this break our current workflow? Will we have too many open pull requests?
Can we actually keep up with this process?

Person A: We have enough pull requests already. The problem isn't who pushes
to master; the problem is more that pull requests stay open for a long while without
a clear decision.

We think the issues are orthogonal. 

Anyhow, building pull requests might be a good idea to do again.
Perhaps we should also have a proper CONTRIBUTING guide. A clear path of how you
get your pull request into master.


nbp: Let's say person A contributes some work, the pull request is assigned to me,
and I'm not available for a month; what do we do? What if there is no consensus
on whether it should be merged or not? We have a review process, people can leave
GitHub reviews, and after the time is up (a fixed period), we _have_ to make a decision.


If the first reviewer cannot show up in time, then it is a popular vote. Boost
has a QoS such that they discard reviews that have no technical reasoning (no +1s).

Eelco: I'm more in favor of the Wikipedia model, where we reach a consensus instead of voting.
All the people who hit +1 on GitHub take 3 seconds, but the people who actually maintain it
have to put time into it.
=> The question is different: how much time do we give until consensus is reached, and what do we do if it isn't? So again this is just a question of which process to apply.
 +1: I don't think voting is a good approach for this.

Mention-bot is a bit too spammy, because it mentions everybody who has ever touched a file.
It is not sophisticated enough. It should take into account who the maintainers are.


As a contributor, you might also get stuck on review comments: "I do not have
the knowledge to fix this".


Decision:

We add a maintainer file

If a reviewer (the person responsible) agrees, we merge; if they disagree, we do not merge.
If there is no consensus, we should probably just close it after a certain time period.


Eelco: Perhaps we should have a few core maintainers who decide on lingering issues.
Someone has to be responsible for making the decision, based on all the feedback gathered.



# How to deal with lingering open issues

There are two different schools of thought about open issues:
1. An issue must be actionable. Someone needs to be able to pick it up and close it.
2. The issue tracker is a scratch pad of ideas that we could possibly implement in the future;
   perhaps it's very philosophical and has no consensus. (Issue number 8.)

Should we track every open problem we should think about?
We should perhaps close issues that are not actionable.

"How do we handle security" is not an issue.  But it is still important that
we can discuss it somewhere central.

We should add tags, so that we can filter on actionable issues instead
of closing them. Someone should go over them and tag them.


Some issues raise an interesting design question, but what if nobody has been
doing anything about them for 6 months? We could split those up into more actionable
issues instead.

A good example is the IPFS issue. There are many ways in which IPFS can be used with
NixOS. We should make them more actionable. A "discussion" tag would be great.

Closing has the psychological effect of saying "we do not want to do it", but that
is not the message we want to send. Perhaps we can make a nixos-rfcs issue tracker
instead of keeping these in nixpkgs, so that non-actionable issues can be moved there.

We can take the rust-lang/Haskell model. There is a repository with markdown files
of RFCs. These are considered "reasonable" if they're in master. You can add new RFCs
by opening a pull request. They have a specific format. If people have a cool idea,
they'll have to put some effort into creating such a proposal.

**Decision: Policy discussions should go somewhere else; nixpkgs issues should be actionable only.**


In Rust they have a group of 'smart' people who monitor the RFC process, leave
feedback, and try to kill off bad RFCs.


## Styleguides

We have some stuff, but it is very limited. Perhaps we should also document patterns (and anti-patterns).

Multiple versions of the same package: how do we name the attributes? Now we have the `2_0_3` convention.
We should have some uniformity in this.

In a pull request we have a list of things you should do (which is too late). We should already
put this in the Nix manual: naming conventions, etc.

For review it's also nice to have a list you can link to and check off for a pull request.
Sometimes people want an old version of a package, `foo_1_1_2_4`, but let's call it `foo`
and pass the version as an argument instead (a sketch follows below).
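A minimal sketch of that convention, assuming a hypothetical package `foo` (the URL and default version are invented): one expression takes the version and hash as arguments, so callers choose the version instead of nixpkgs accumulating `foo_1_1_2_4`-style attributes.

```nix
# foo.nix -- hypothetical package, illustrating "pass the version as an argument"
{ stdenv, fetchurl, version ? "2.0.3", sha256 }:

stdenv.mkDerivation {
  pname = "foo";
  inherit version;

  src = fetchurl {
    url = "https://example.org/foo-${version}.tar.gz";
    inherit sha256;
  };
}
```

A user who needs the older release would then call it with something like `callPackage ./foo.nix { version = "1.1.2.4"; sha256 = "…"; }` rather than asking for a new `foo_1_1_2_4` attribute.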


What criteria do Debian and others use for inclusion? If a maintainer doesn't commit for x months,
should we delete the package? If it is not maintained, we could decide to remove it.

Stackage and SUSE have the model of "you are responsible if you submit it, and if it is not maintained,
or has issues with other packages, we remove it".
=> I'd recommend a grace period marked with something like `unmaintained_since = "date"` or `broken = { since = "date"; reason = "..."; }`. A process could be to remove things after 6 months if nobody fixes them. This allows people to take care of their packages and ensures the repository tidies itself over time.
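A hedged sketch of that proposal; the `unmaintainedSince` attribute and the structured `broken` value are hypothetical and do not exist in nixpkgs (where `meta.broken` is a plain boolean):

```nix
# Hypothetical metadata illustrating the grace-period idea above;
# nixpkgs does not actually accept these attributes in this form.
{ stdenv }:

stdenv.mkDerivation {
  pname = "bar";
  version = "0.1";
  src = ./.;

  meta = {
    maintainers = [ ];                 # nobody currently claims the package
    unmaintainedSince = "2017-02-05";  # hypothetical: start of the grace period
    broken = {                         # hypothetical structured form
      since = "2017-02-05";
      reason = "fails to build against the current glibc";
    };
  };
}
```

A periodic cleanup job could then list everything whose `since` date is more than six months old and open a removal PR for it.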

What if someone files a feature request? Meh.
But a maintainer should respond to pull requests. And if there is no response, what then?


## Storing secrets


You can make a module that only contains a secret, and include it.
No, you don't want to do that.

Two solutions:

* Add a permission model to the Nix store and make some paths non-world-readable.
* Add encryption support: we could encrypt certain parts of the configuration
  at evaluation time, like the wpa_supplicant configuration file.
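A minimal sketch of why both options are being discussed, assuming an ordinary NixOS module (the network name and passphrase are invented): anything interpolated into a generated configuration file at evaluation time ends up in the world-readable Nix store.

```nix
# Sketch only: the passphrase below is written verbatim into a store path,
# because files declared via environment.etc are symlinks into /nix/store,
# which any local user can read.
{ ... }:

{
  environment.etc."wpa_supplicant.conf".text = ''
    network={
      ssid="example-network"
      psk="super-secret-passphrase"
    }
  '';
}
```

Splitting the secret into its own imported module does not change this; the resulting store path is still world-readable, hence the ideas of store permissions or evaluation-time encryption.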


## Fixpoints


The security-update branch has a dependency on the `self` argument.

`self` is the set of all dependencies coming out of the fixpoint.

With security updates we have a new distinction: we distinguish
between dynamic and static linking.

If we view nixpkgs as having two fixpoints, one for dynamic and one for static linking,
security updates work by peeling off the dynamic one and patching those packages.
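A toy sketch of the fixpoint idea, not the real nixpkgs implementation (plain strings stand in for derivations, and `fix`, `pkgsFun`, `securityOverlay` are invented names): every package sees the final set through `self`, so overriding one attribute is enough for its dependents to pick up the patched version.

```nix
# Toy model of the nixpkgs fixpoint, not the actual code.
let
  # Tie the knot: feed the final package set back into the function that builds it.
  fix = f: let self = f self; in self;

  pkgsFun = self: {
    openssl = "openssl-1.0.2";
    curl    = "curl linked against ${self.openssl}";
  };

  # "Peeling off" a layer for a security update: override one attribute,
  # and everything referring to self.openssl follows along.
  securityOverlay = self: pkgsFun self // {
    openssl = "openssl-1.0.2-patched";
  };
in
  (fix securityOverlay).curl   # "curl linked against openssl-1.0.2-patched"
```

Evaluating this, for example with `nix-instantiate --eval`, prints the curl string built against the patched openssl.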


The language package sets are giving me trouble. They introduce
their own recursive set of dependencies. This is a problem because
they behave like static linking: if we update one Python package with
a security fix, we have to recompile all of them.



Let's have an RFC to discuss the implications and how we move the fixpoints.


## Let's go back to PRs for a short while. We wanted to discuss them a bit more

nbp: Can somebody else just revert a pull request that was  accepted?

Well, if there is a good reason. Say a maintainer merges his/her
own pull request; is that valid? Have another maintainer review it and agree
with the maintainer.


We have parts of the code that do not have multiple reviewers. Perhaps
we should fix that too. Should maintainers of packages go through
the same review process as external contributors?
/> An alternative approach would be to document "how much review went into a file",
which would also serve as an indicator of how important it is and how much "communication effort" makes sense to maintain a derivation. For instance, download statistics could be used as well to understand importance/impact.
/> How do you quantify the amount of review?

There is some disagreement about whether we really need a pull request for
each small change. Let's add another layer of indirection called
super-maintainers; they can push?
We should distinguish based on impact, not on person.

Did we have problems with this in the past? It happens rarely,
so would a pull request really make a difference for small changes?

How probable is it that a change is going to break something? And
how much administrative overhead should we have?
=> This comes down to the amount of testing.

Let's put it in a different light: I am the maintainer of some code,
somebody else made a change; can I still maintain that code?
If changes are forced through a review process, you would know what
has changed.


Eelco says: If we have proper decision procedures, as discussed before,
a reason for a revert is usually a good one. For example, a regression.
=> Changes always cause some "regression" eventually, due to lack of manpower.
Is Apache more important than a game? Kernels get more bloated and slower over time;
is that a regression? You'll always be able to find counterexamples.

This discussion is abstract. We should do it more often. Also collect examples
of where this went wrong.

Let's introduce the new process, but iteratively improve the way contributions
happen.


## Can we do these kinds of discussions more regularly?
We should!


## How do we help Eelco



## Attendees
* Arian van Putten (@arianvp)
* Nicolas B. Pierron
* Eelco Dolstra
* zimbatm
* Maarten Hoogendoorn (@moretea)
* Louis Taylor (@kragniz)
* Sander van der Burg
* Johannes Bornhold (@johbo)
* Daiderd Jordan (@LnL7)
* Peter Simons



Let's compare to legal models. Germany: you have many laws which tell you what is right or wrong.
USA/UK: decisions are made based on previous cases (precedent).
In both cases you end up "weighing" arguments (consensus).
Sample: mplayer, "Who could have known that multi-core processors would be this common today; sorry for refusing a patch back in ..."

Also, is there any way to "offload" decision processes to maintainers of packages, e.g. which dependencies are required? Creating Haskell-platform-like package sets and writing imports as .nix derivations for them? Then "good/bad" on version upgrades would turn into "which version do *I* want", and more versions could be made to work. It might fit better, and other distros could join.

Can we add all Ruby/Perl/PHP/Python/Haskell/Rust/Scala/Java/Vim/Emacs/... packages into one repository? It still works well, but there are 50,000 Ruby packages, for instance.

Interesting cases:
* LibreOffice vs OpenOffice (for what reason?)
* Historically, people "take time, stay focused, fix a bug", then put up something new.
  Lengthy discussion should only take place if the topic is worth it.
* I needed code for running multiple mysqld instances. The code was not merged because
  "it should have been abstracted" (which was too much work for me to get done).

@mmahut
Member

mmahut commented Aug 11, 2019

Are there any updates to this issue, please?

@davidak
Member

davidak commented Dec 3, 2019

There were attempts to formalize the review process, but no decision was reached.

NixOS/rfcs#53 (comment)
NixOS/rfcs#30

@stale

stale bot commented Jun 1, 2020

Thank you for your contributions.
This has been automatically marked as stale because it has had no activity for 180 days.
If this is still important to you, we ask that you leave a comment below. Your comment can be as simple as "still important to me". This lets people see that at least one person still cares about this. Someone will have to do this at most twice a year if there is no other activity.
Here are suggestions that might help resolve this more quickly:

  1. Search for maintainers and people that previously touched the
     related code and @ mention them in a comment.
  2. Ask on the NixOS Discourse.
  3. Ask on the #nixos channel on irc.freenode.net.

stale bot added the 2.status: stale label on Jun 1, 2020
@Profpatsch
Member

Profpatsch commented Jun 9, 2020

We’re at a pretty good state between ofborg, the stale bot and all the incremental improvements that happened in the meantime, so I guess we can close this.

@nixos-discourse

This issue has been mentioned on NixOS Discourse. There might be relevant details there:

https://discourse.nixos.org/t/how-many-people-are-paid-to-work-on-nix-nixpkgs/8307/88
