D pull request review process -- strawman formal definition, query for tools

Daniel Murphy yebblies at nospamgmail.com
Fri May 10 00:05:31 PDT 2013


"Steven Schveighoffer" <schveiguy at yahoo.com> wrote in message 
news:op.wwugwetleav7ka at stevens-macbook-pro.local...
>
> 1. Make the process defined and transparent so both reviewers and 
> submitters understand what is going on, and what is expected.

Sounds good.

> 2. Make someone own the review.  Without ownership, there will be long 
> delays, and that reflects a sense of disapproval or apathy towards the 
> submitter or his idea, even if that isn't the case.  I like how the 
> "feature" review on the newsgroup has a review manager and he sets a 
> deadline.  That works out REALLY well.
>

My impression is that sometimes modules sit around for quite a while waiting 
for a review manager.  How are reviewers going to be assigned, and how will 
we make sure they don't let it fall through the cracks?

> The recent thread on the review non-process outlined a story from the 
> perspective of the submitter that looked to me very different than what 
> actually happened from the reviewer side (reading the comments after the 
> fact).
>

Having a written process we can refer people to will hopefully help with 
that, even if it is just what we currently have.

>>
>>>    h) One of the reviewers (not sure if it should be primary or 
>>> secondary)
>>> should close any bugzilla bugs fixed by the pull request.  Can this be
>>> automated?
>>
>> This is what we are currently doing (sort of) and I think it would be 
>> better
>> to have the requester be responsible for closing the original bug.
>
> I think we should push for full automation.  I think it can be done with 
> github (and actually, I think it already automatically records the commit 
> in bugzilla).
>
> The problem I see with making the submitter do it is, the submitter may 
> not be active at the time, or may not care.  The pull request is done, and 
> he has his fix, but we need to make sure the bug list is updated properly.
>

I don't think this is a problem with dmd, as most pull requests come from 
the same small group of people. (I think Kenji has done over 1/3)
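
For what it's worth, a rough sketch of what the bugzilla side of that 
automation could look like.  This is only a sketch: it assumes the bugzilla 
instance exposes the stock XML-RPC interface (Bug.update) and that a bot 
account is allowed to close bugs; the URL, credentials and the "Issue NNNN" 
convention are placeholders, not the real setup.

    # Sketch only: pull "Issue NNNN" references out of a merged pull
    # request and mark those bugs RESOLVED/FIXED in bugzilla.
    import re
    import xmlrpc.client

    BUGZILLA_URL = "https://bugzilla.example.org/xmlrpc.cgi"  # placeholder

    def referenced_bugs(pr_text):
        # Match the usual "Fixes Issue 1234" / "Issue 1234" convention.
        return sorted({int(n) for n in re.findall(r"[Ii]ssue\s+#?(\d+)", pr_text)})

    def close_fixed_bugs(pr_text, pr_url, login, password):
        proxy = xmlrpc.client.ServerProxy(BUGZILLA_URL)
        ids = referenced_bugs(pr_text)
        if ids:
            proxy.Bug.update({
                "ids": ids,
                "status": "RESOLVED",
                "resolution": "FIXED",
                "comment": {"body": "Fixed by pull request " + pr_url},
                "Bugzilla_login": login,       # auth style depends on the setup
                "Bugzilla_password": password,
            })
        return ids

The real thing would hang off a github merge hook rather than being run by 
hand, but the bugzilla end is about this much work.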

>> This
>> often requires trying each original test case along with other measures to
>> ensure the bug was actually fixed.
>
> Oh!  I completely forgot!  Another requirement for a pull request to be 
> accepted:
>
>   g) Any fixed bugs MUST be accompanied by a unit test that fails with the 
> current code but passes with the change.
>

Sure, but this does not ensure that the tests have full coverage of the 
problem, or even full coverage of the cases presented in the bug report. 
This probably applies more to dmd than to the other projects, since compiler 
failures can have very complex, interacting causes.

> If this is included, the automated tests should cover that.
>

I would love to have a way to add a test case and have the autotester ensure 
both that it fails with the old version and passes with the new one (or the 
other way around for fail-compilation cases).  One of my pet projects is an 
automated regression source finder where you can give it a test case and it 
attempts to build it with the last N releases.  Anyway, another time.
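
Just to make that concrete, roughly the check I have in mind -- dmd_old and 
dmd_new stand in for whichever two compiler binaries the autotester would 
compare, and the fail-compilation detection is a crude placeholder:

    # Sketch only: a new test should fail with the old compiler and pass
    # with the new one (inverted for fail-compilation tests).
    import subprocess
    import sys

    def compiles(compiler, source):
        # -o- : run the compiler without writing an object file
        # (runnable tests would also need to be executed; compile check only)
        result = subprocess.run([compiler, "-o-", source], capture_output=True)
        return result.returncode == 0

    def validates_fix(source, should_compile=True):
        old_ok = compiles("dmd_old", source)   # placeholder binaries
        new_ok = compiles("dmd_new", source)
        if should_compile:
            return (not old_ok) and new_ok
        return old_ok and (not new_ok)

    if __name__ == "__main__":
        src = sys.argv[1]
        # crude heuristic based on the fail_compilation naming convention
        if validates_fix(src, should_compile="fail" not in src):
            print(src + ": test actually exercises the fix")
        else:
            print(src + ": test behaves the same with both compilers")

The regression finder version would just run the same compiles() call over 
the last N releases instead of two binaries.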

>> Sometimes pull requests are only partial
>> fixes and while the requester (hopefully) knows this, the reviewer will 
>> have
>> to work all this out in addition to looking at the actual changes.
>
> These would be more complex cases.  I think they would be rare (at least 
> from a library side, not sure about dmd).  It's important to note that 
> reviewers are volunteers, and if you are not willing to take the time to 
> do a full review, you should give it to someone else.
>

Right, the really common partial-fix case in dmd is bugs that also apply to 
D1.

> Perhaps the "description" requirement can be detailed to say "if this is 
> only a partial fix, that should be noted."
>
>> Ideally this goes on bugzilla or github issues so we don't end up
>> fragmenting the information even further.
>
> agreed.  I'm really leaning toward github issues since it already 
> integrates with github pull requests (right? never used it before), 
> probably involves less work to automate.
>

Yeah, I think we can use labels to set the states and it has assignment 
built in.
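
Something along these lines should do it, assuming github's v3 issues API 
and a token with access to the repository -- OWNER/REPO and the label names 
are placeholders until we settle on them:

    # Sketch only: flip a pull request's review state via labels and record
    # who owns the review via the built-in assignee field.
    import json
    import urllib.request

    ISSUE = "https://api.github.com/repos/OWNER/REPO/issues/{0}"  # placeholder

    def api(url, payload, token, method="POST"):
        req = urllib.request.Request(
            url, data=json.dumps(payload).encode(), method=method,
            headers={"Authorization": "token " + token,
                     "Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode())

    def set_review_state(number, ready, reviewer, token):
        label = "ready for review" if ready else "not ready"
        # POST .../labels adds to the existing labels; a real tool would also
        # remove the opposite label.
        api(ISSUE.format(number) + "/labels", [label], token)
        api(ISSUE.format(number), {"assignee": reviewer}, token, method="PATCH")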

>> My main problem is when I find I have time and energy to review some pull
>> requests, I never know which ones are ready.  I usually find myself 
>> looking
>> through them from the front or the back, and rarely get very far.
>>
>> A big improvement would be a two-state system:
>> - Ready for review
>> - Not ready for review
>
> I think this is important.  A pull request shouldn't be "Ready" until it's 
> also submitted to the tracking system.  The issue should reflect the 
> current state.  In other words, you should be searching for unassigned 
> issues in the correct states, not pull requests.
>

If we use github issues and the pulls and issues are linked, this should be 
fine.

>> The submitter sets it to ready when they think it's done, and it goes to 
>> Not
>> Ready when any of the following happens:
>> - An official reviewer decides it needs more work
>> - It fails the autotester (ideally except for those intermittent 
>> failures)
>> - The submitter decides it needs more work
>
> I think we are on the same page, my stages don't have to be "real" 
> statuses, but it's important to formalize workflow and statuses so both 
> sides know what is going on and what is expected.
>
> -Steve

I would like real statuses because they are searchable.  The wiki is almost 
good enough here, but having them connected to the actual pull requests, and 
automatically changed when they fail the test suite, would be much, much 
better.  Maybe this applies better to dmd, where the majority are bugfixes 
rather than enhancements, I don't know.  I think we're agreeing on most of 
it.
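
To make the searchability point concrete: with labels, "show me everything 
that is ready for review" becomes a single query against github's search 
API (again, OWNER/REPO and the label name are placeholders):

    # Sketch only: list open pull requests currently marked ready for review.
    import json
    import urllib.parse
    import urllib.request

    def ready_pulls(repo):
        query = 'repo:{0} type:pr state:open label:"ready for review"'.format(repo)
        url = "https://api.github.com/search/issues?q=" + urllib.parse.quote(query)
        with urllib.request.urlopen(url) as resp:
            data = json.loads(resp.read().decode())
        return [(item["number"], item["title"]) for item in data["items"]]

    for number, title in ready_pulls("OWNER/REPO"):
        print("#{0}: {1}".format(number, title))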



