[phobos] Split std.datetime in two?
Andrei Alexandrescu
andrei at erdani.com
Fri Feb 11 13:18:42 PST 2011
On 2/11/11 1:20 PM, Steve Schveighoffer wrote:
> It looks as though Jonathan is willing to roll up the code into loops, so having this debate is academic at this point, but I wanted to respond to some points.
Agreed. However, the horror of leaving this in limbo spurs me into
continuing a debate that I feel is Kafkaesque. (I don't want to get
beheaded through my inaction!)
> ----- Original Message -----
>> From: Andrei Alexandrescu <andrei at erdani.com>
>>
>> On Feb 11, 2011, at 2:39 PM, Steve Schveighoffer <schveiguy at yahoo.com>
>> wrote:
>
>>> Please please, let's *NOT* make this a standard practice. If a test
>>> fails, I don't want to get a debugger out or start printf debugging *to find
>>> the unit test*. I want it to tell me where it failed, and focus on fixing the
>>> problem.
>>
>> You can find the unittest all right. With the coming improvements to
>> assert, you will often see the arguments that caused the trouble.
>
> This will help immensely. Right now, you get a line number. The rule should be: if a unit test fails, it should give you enough information to 1) find the failing assert and 2) understand why the assert failed.
>
>> I don't understand how we both derive vastly different conclusions from the
>> same extensive experience with unittests. To me a difficult unittest to find is a
>> crashing one; never once in my life have I had problems figuring out why an
>> assert fails in a unittest, and worse, I am incapable of imagining such.
>
> foreach (x; someLongArray)
>     assert(foo(x) > 5);
>
> Which x caused the problem? All I get is the line number for the assert.
When the unittest fails I go and edit the code like so:

    assert(foo(x) > 5, text("foo(", x, ") failed"));

Doesn't cost a damn thing, doesn't add to the line count, and when it
fires it adds a wealth of info.
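Spelled out as a self-contained sketch (foo and the array are
hypothetical stand-ins for the example above; text comes from std.conv):

    import std.conv : text;

    int foo(int x) { return x + 5; }  // hypothetical function under test

    unittest
    {
        auto someLongArray = [1, 5, 9, 42];  // hypothetical test data
        foreach (x; someLongArray)
            // On failure the message names the offending element.
            assert(foo(x) > 5,
                   text("foo(", x, ") = ", foo(x), ", expected > 5"));
    }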
> BTW, I had this happen in Tango unit tests that used loops (ironically, when I was rewriting Tango's date/time code); I had to instrument the code with printfs, which sucked IMO.
See above.
>>> I don't sympathize with you; we have tools to do this easily without
>>> much burden.
>>
>> A little while ago you didn't care to start a debugger or use writeln - two
>> simple tools. I cry double standard.
>
> Um... how are you reading the code if not using an editor? Using Ctrl-F and typing in the function name is several orders of magnitude simpler than adding writelns, recompiling, or recompiling in debug mode (I'm counting the part where you have to dissect the makefile to figure out how to put the thing in debug mode in the first place) and debugging. It's like saying the burden of using the claw of a hammer that's already in your hand is the same as going to get a reciprocating saw to remove a nail. It's not even close to a double standard.
How are you debugging code if not using stdio or a debugger?
>>> If you want to find a function to read, use your editor's find
>>> feature. Some editors even let you click on the function and jump to its
>>> definition. This argument is a complete red herring.
>>
>> I find it a valid argument. I, and I suspect many others, just browse code
>> to get a feel for it.
>
> Which you can. I think you can also get a feel for how well tested the library is ("wow, look at all the unit tests...").
That's what I said until I got drowned in 'em. If the usual D application
or library needs such massive unittests, written in such a repetitive
manner, then D has failed.
>>> Good unit tests are independent and do not affect one another. Jonathan is
>>> right; it's simply a different goal for unit tests: you want easily isolated
>>> blocks of code that are simple to understand, so when something goes wrong you
>>> can work through it in your head instead of figuring out how the unit test
>>> works.
>>
>> For me repeated code is worse by all metrics. It has zero advantages,
>> and brings only trouble.
>
> Then I guess we differ. I prefer to have unit tests be simple statements that are easily proven mentally without much context. Complex code is more difficult to reason about, and as spir said, we don't want to have to unit test our unit tests.
A simple loop is simpler than 100 unrolled instances of it.
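As a concrete illustration of the loop-versus-unrolled point (the
function and expected values below are hypothetical):

    import std.conv : text;

    // Hypothetical function under test.
    string greet(string name) { return "hello, " ~ name; }

    unittest
    {
        struct Case { string input, expected; }
        // One loop over a table of cases replaces N copy-pasted asserts,
        // and the message still identifies the failing case.
        auto cases = [
            Case("world", "hello, world"),
            Case("D", "hello, D"),
        ];
        foreach (c; cases)
            assert(greet(c.input) == c.expected,
                   text("greet(", c.input, ") returned ", greet(c.input)));
    }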
> I'd add that the unit tests should test *different* things even if they look repetitive. For example, testing two corner cases might look very repetitive, but each is looking for weaknesses in a different place.
I am certain that the unittests in std.datetime go well beyond what is
needed for 100% code coverage. IMHO they should stop at 100% coverage
(which is probably reached at 20% of their current size).
> Having extra unit tests that do the same exact thing is not productive, I agree there.
And I guarantee you std.datetime saturates coverage at 100% with a
fraction of its current size. I mean, it's obvious - how many paths are
there in, e.g., toISOExtendedString?
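For what it's worth, the 100% figure is measurable: dmd's -cov switch
instruments the build, and running the resulting binary writes a .lst
file with per-line execution counts and an overall coverage percentage.
A hypothetical invocation (test.d supplies an empty main):

    dmd -cov -unittest test.d
    ./test    # writes test.lst with the coverage report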
> Let's look at a different type of repetition -- version statements. The nature of version statements sometimes forces one to repeat whole sections of code. However, understanding the code is much easier than some of the horrific abuses of C preprocessor stuff that I've seen. At some point, factoring out repetitive code becomes more harmful than the alternative. I feel like unit tests are one of those cases.
Again, I feel as you do, except for the magnitudes. Let me give you an
example. Many would agree that sex is good; yet most would agree that
being required to perform it for 12 hours would be terrible. Eating is
good too, but eating seven pounds of even the best food is a horrific
experience. There is a limit to everything, and that limit has been
passed many times over by the unittests in std.datetime.
BTW, I'm familiar with the version argument (aired originally by Walter),
and my stance is as loud and clear as ever: I am proactively trying to
refactor code to avoid needless duplication. See e.g. std.file.read,
which I refactored to use individual version() statements instead of the
wholesale version() blocks the module initially used.
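To illustrate the two styles (a schematic sketch, not the actual
std.file code):

    import std.stdio : writeln;

    // Wholesale style: the whole function is duplicated per platform,
    // so any shared logic must be maintained twice.
    version (Windows)
    {
        string osName() { return "Windows"; }
    }
    else
    {
        string osName() { return "Posix"; }
    }

    // Individual style: one body, with version() only around the
    // genuinely platform-specific part; shared logic lives in one place.
    string describe()
    {
        version (Windows) enum os = "Windows";
        else              enum os = "Posix";
        return "running on " ~ os;
    }

    void main()
    {
        writeln(osName(), " / ", describe());
    }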
Andrei