[phobos] Split std.datetime in two?

spir denis.spir at gmail.com
Fri Feb 11 02:09:42 PST 2011


On 02/11/2011 09:56 AM, Andrei Alexandrescu wrote:
> On Feb 11, 2011, at 12:34 AM, Jonathan M Davis <jmdavisProg at gmx.com> wrote:
>
>> On Thursday, February 10, 2011 15:00:50 spir wrote:
>>> On 02/10/2011 11:02 PM, Andrei Alexandrescu wrote:
>>>> On 2/10/11 3:44 PM, Don Clugston wrote:
>>>>> (1) There has to be a maximum acceptable source file size.
>>>>> Personally I start to feel uncomfortable above 2000 lines, and get an
>>>>> uncontrollable urge to split at 5000 lines. That's just me, but I
>>>>> suggest all modules should be short. And at 35000 lines,
>>>>> std.datetime.length > short.max.
>>>>
>>>> Agreed.
>>>>
>>>>> (2) Actually, it seems that most of size actually comes because every
>>>>> test is written 'by hand'. If they were done as arrays [parameter1,
>>>>> parameter2, result]...
>>>>> with a loop, they'd be a lot shorter. (I crunched down the
>>>>> std.math.exp tests enormously by doing this). Looking at that module,
>>>>> I get the feeling that there's been a lot of cut-and-paste.
>>>>> It is a little disconcerting if D really cannot write unittesting code
>>>>> concisely. If it really needs to be that big, that part of the
>>>>> language needs more work; or we need more helper functions. Or both.
>>>>
>>>> Agreed. People will look at Phobos for inspiration in terms of style and
>>>> idioms. If they see they're looking at more than 2x the size of the code
>>>> for adding tests, probably they'd feel intimidated.
>>>>
>>>> Peeking at std.datetime's unittests, I confirm they are very repetitive -
>>>> essentially unrolled loops. I just set the bar somewhat halfway and saw
>>>> the following. I mean come on!
>>>>
>>>>
>>>> Andrei
>>>>
>>>> assertThrown!DateTimeException(Date.fromISOString(""));
>>>> assertThrown!DateTimeException(Date.fromISOString("990704"));
>>>> assertThrown!DateTimeException(Date.fromISOString("0100704"));
>>>> assertThrown!DateTimeException(Date.fromISOString("2010070"));
>>>> assertThrown!DateTimeException(Date.fromISOString("2010070 "));
>>>> [...]
>>>
>>> I'm interested in this example. I mean, how can this happen? What we would
>>> never do in regular code (if only because we're lazy, and even copy+paste
>>> sucks for more than a few repetitions), we happily do as soon as the
>>> /context/ is somehow different; eg in unittests. Just as if unittest were
>>> not code.
>>> I've read similar patterns in code by very high-level programmers. There
>>> are even test-case generating tools that produce such code.
>>
>> Unit tests need to be simple. If they're more complicated, the risk of getting
>> them wrong goes up. Also, as Steve points out, determining _what_ failed can be
>> a royal pain when you try and put something in a loop. Helper functions help
>> with that, but it's still a pain.
>
> Not taking one femtosecond to believe that. The hard part is to get the
> unittest to fail. Once it fails, it is all trivial. Insert a writeln or use a
> debugger.
>
>>
>> Normal code can afford to be more complex - _especially_ if it's well unit
>> tested. But if you make complicated unit tests, then pretty soon you have a
>> major burden in making sure that your tests are correct rather than your code.
>
> I am now having a major burden finding the code that does work in a sea of chaff.
>
>>
>> In the case above, it's testing 5 things, so it's 5 lines. It's simple and
>> therefore less error prone. Unit tests really should favor simplicity and
>> correctness over reduced line count or increased cleverness.
>
> All code should do that. This is a false choice. Good code must go inside
> unittest and outside unittest.
>
>> The goal of unit
>> testing code is inherently different from normal code. _That_ is why unit
>> testing
>> code is written differently from normal code.
>
> Not buying it. Unittest code is not exempt from simple good coding principles
> such as avoiding copy and paste.

I agree with Jonathan here, on the contrary (for once ;-).
Test code is logically not of the same nature as regular code; it is a sort of 
meta-code instead. If one writes it with the same level of difficulty or 
complexity as regular code, then logically one needs meta-tests, or what? 
Certainly we all have experienced runtime bugs that were super hard to diagnose 
/because/ wrongly coded tests hid the cause. It is thus especially important 
that test code be, as much as possible, obviously correct.
Still, achieving that does not require unrolling loops over dozens of cases.
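
For instance, the fromISOString cases quoted above could collapse into a data 
loop. A minimal sketch (assuming assertThrown's optional message argument, so 
that a failure still names the offending input):

    import std.datetime : Date, DateTimeException;
    import std.exception : assertThrown;

    unittest
    {
        // every invalid input listed once; the loop does the repetition
        auto badISOStrings = ["", "990704", "0100704", "2010070", "2010070 "];
        foreach (s; badISOStrings)
            assertThrown!DateTimeException(Date.fromISOString(s),
                    "fromISOString did not throw for: " ~ s);
    }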

True, the hard job is, or may be in some cases, getting a failure clearly 
exposed by a test case. I not only buy your argument about inserting a writeln: 
my tests systematically tell what they do and show their outcome, initially. I 
just realise this probably also helps in getting the tests themselves correct, 
by giving a chance to visually detect an error. Because assert is not 
convenient for that, this translates in code into every assert being preceded 
by a writeln (or there is first a writeln waiting for an assert to come). 
Stupid, no?

I take the opportunity to reiterate my wish. When check-mode=verbose:
     check(expression, expectation);
would be equivalent to something like:
     outcome = expression;
     if (expectation == outcome || expectation == to!string(outcome))
         writefln("%s --> %s", <expression>, outcome);
     else
         writefln("Test Error:\n\texpression: %s\n\texpectation: %s\n\toutcome: %s",
                  <expression>, expectation, outcome);
Once all is fine, just turn check-mode to silent, in which case output only 
happens on failure.

Unfortunately, I'm unsure how to write that, if it is even possible, because 
of the <expression> part. It seems to require compiler magic? Especially in the 
case of (not unrolled) test suites where <expression> is variable (so that, I 
guess, even string mixins are ruled out). I'll try anyway for cases where 
<expression> is constant.
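
For the constant-<expression> case, here is a minimal sketch of what I mean; 
check and checkVerbose are my own hypothetical names, nothing from Phobos. The 
expression is passed as a compile-time string, so it can be both evaluated 
(via mixin) and printed verbatim:

    import std.conv : to;
    import std.stdio : writefln;

    bool checkVerbose = true;   // check-mode: true = verbose, false = silent

    void check(string expr, E)(E expectation)
    {
        // caveat: only symbols visible where check is defined can appear in
        // expr -- caller-local names would indeed need compiler magic
        auto outcome = mixin(expr);

        bool ok;
        static if (is(typeof(outcome == expectation)))
            ok = (outcome == expectation);
        else
            ok = (to!string(outcome) == to!string(expectation));

        if (!ok)
            writefln("Test Error:\n\texpression: %s\n\texpectation: %s\n\toutcome: %s",
                     expr, expectation, outcome);
        else if (checkVerbose)
            writefln("%s --> %s", expr, outcome);
    }

    unittest
    {
        check!"3 * 4"(12);
        check!"to!string(42)"("42");
    }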

Denis
-- 
_________________
vita es estrany
spir.wikidot.com


