D pull request review process -- strawman formal definition, query for tools

Steven Schveighoffer schveiguy at yahoo.com
Fri May 10 06:29:20 PDT 2013


On Fri, 10 May 2013 03:05:31 -0400, Daniel Murphy  
<yebblies at nospamgmail.com> wrote:

> "Steven Schveighoffer" <schveiguy at yahoo.com> wrote in message

>> 2. Make someone own the review.  Without ownership, there will be long
>> delays, and that reflects a sense of disapproval or apathy towards the
>> submitter or his idea, even if that isn't the case.  I like how the
>> "feature" review on the newsgroup has a review manager and he sets a
>> deadline.  That works out REALLY well.
>>
>
> My impression is that sometimes modules sit around for quite a while
> waiting for a review manager.  How are reviewers going to be assigned,
> and how will it be ensured that they don't let it fall through the cracks?

We can't be perfect :)  This is a volunteer army.  If that is what  
happens, that is what happens.  At least we should make the submitter  
aware of that likelihood.

A weekly check of pull requests that are assigned but stale might be a  
good idea.
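
For example, the weekly check could be a small script against GitHub's
JSON API that lists open, assigned pull requests with no recent activity.
A rough sketch in D (the dlang/dmd repository name and the seven-day
threshold are just illustrative choices; this is not existing tooling):

import core.time : days;
import std.datetime : Clock, SysTime, UTC;
import std.json : parseJSON;
import std.net.curl : HTTP, get;
import std.stdio : writefln;

void main()
{
    auto http = HTTP();
    // GitHub's API rejects requests without a User-Agent header.
    http.addRequestHeader("User-Agent", "stale-pr-check");

    auto pulls = parseJSON(
        get("https://api.github.com/repos/dlang/dmd/pulls?state=open", http));

    auto now = Clock.currTime(UTC());
    foreach (pr; pulls.array)
    {
        if (pr["assignee"].isNull)      // only consider assigned pull requests
            continue;
        auto updated = SysTime.fromISOExtString(pr["updated_at"].str);
        if (now - updated > 7.days)     // "stale" = no activity for a week
            writefln("#%d stale since %s: %s",
                     pr["number"].integer, pr["updated_at"].str, pr["title"].str);
    }
}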

>> Oh!  I completely forgot!  Another requirement for a pull request to be
>> accepted:
>>
>>   g) Any fixed bugs MUST be accompanied by a unit test that fails with
>> the current code but passes with the change.
>>
>
> Sure, but this does not ensure that the tests have full coverage of the
> problem, or even full coverage of the cases presented in the bug report.
> This probably applies to dmd more than the others where failures can have
> very complex interacting causes.

It's up to your discretion as a reviewer what you require.  It's impossible
to formalize this requirement as a documented, objective measure.  I would
say that nine times out of ten the bug report will contain a minimized
example that triggers the issue; that example should simply be included as
a unit test.
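
For Phobos or druntime code, that usually just means pasting the reduced
example into a unittest block next to the fix (dmd itself uses test-suite
files instead).  A purely hypothetical illustration; the function, the bug,
and the inputs are all invented:

// Hypothetical fixed function: before the fix, empty strings were
// incorrectly counted as well.
size_t countNonEmpty(string[] lines)
{
    size_t n;
    foreach (line; lines)
        if (line.length != 0)
            ++n;
    return n;
}

unittest
{
    // Reduced example copied from the (invented) bug report: these asserts
    // fail on the old code and pass with the fix above.
    assert(countNonEmpty(["a", "", "b"]) == 2);
    assert(countNonEmpty([""]) == 0);
}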

>> If this is included, the automated tests should cover that.
>>
>
> I would love to have a way to add a test case, and have the autotester
> ensure both that it fails with the old version, and works with the new  
> one.

I hadn't considered that the auto-tester doesn't check that the test case
fails on the existing code.  I suppose it would be up to the reviewer
whether to go to the effort of running the new test against the current
master branch, or to eyeball it and assume it does fail.  The most
important thing is that a unit test is added to verify the fix actually
works and to catch any regression immediately.
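
For reference, that manual check boils down to running the new test twice:
once against current master expecting a failure, and once against the pull
request's branch expecting a pass.  A rough sketch using std.process; the
branch name and the test command are placeholders, not part of the
autotester:

import std.process : executeShell;
import std.stdio : writeln;

// Check out the given branch and report whether the (placeholder) test
// command passes there.
bool testPasses(string branch)
{
    executeShell("git checkout " ~ branch);
    return executeShell("make -C test quick").status == 0;
}

void main()
{
    bool failsOnMaster = !testPasses("master");
    bool passesOnFix   = testPasses("pull-request-branch");  // placeholder name
    writeln(failsOnMaster && passesOnFix
        ? "new test demonstrates the fix"
        : "new test does not isolate the bug");
}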

>> Agreed.  I'm really leaning toward GitHub issues since it already
>> integrates with GitHub pull requests (right? I've never used it before)
>> and probably involves less work to automate.
>>
>
> Yeah, I think we can use labels to set the states and it has assignment
> built in.

That is perfect.  A couple of people have mentioned that we can do this
without GitHub issues, since pull requests themselves support labels and
assignment, though mutually exclusive labels might be awkward.  Perhaps we
should redefine the stages with that in mind.

>> I think we are on the same page, my stages don't have to be "real"
>> statuses, but it's important to formalize workflow and statuses so both
>> sides know what is going on and what is expected.
>
> I would like real statuses because they are searchable.  The wiki is
> almost good enough here, but having them connected to the actual pull
> requests and automatically changed when they fail the test suite would be
> much much better.  Maybe this applies better to dmd where the majority
> are bugfixes instead of enhancements, I don't know.  I think we're
> agreeing on most of it.

I meant that not ALL the stages have to be real statuses :)  We clearly
want something searchable, so you can see which pull requests you are
assigned to and which ones are ready for review.

I will take a look at GitHub issues and GitHub's pull request features to
see what level of functionality we need.

-Steve

