WAT: opCmp and opEquals woes
Jonathan M Davis via Digitalmars-d
digitalmars-d at puremagic.com
Fri Jul 25 02:46:55 PDT 2014
On Friday, 25 July 2014 at 08:21:26 UTC, Jacob Carlborg wrote:
> By defining opEquals to be opCmp == 0:
>
> 1. We're not breaking code where it wasn't broken previously
> 2. We're fixing broken code, i.e. code where opEquals and
> opCmp == 0 gave different results
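For concreteness, a minimal sketch (hypothetical struct S, purely
illustrative) of the case in question - a type that defines opCmp
but not opEquals:

struct S
{
    int x;
    int y;

    // Ordering looks at x only; no opEquals is defined.
    int opCmp(const S rhs) const
    {
        if (x != rhs.x)
            return x < rhs.x ? -1 : 1;
        return 0;
    }
}

unittest
{
    auto a = S(1, 2);
    auto b = S(1, 3);
    assert(a != b);          // today: default memberwise opEquals
    assert(a.opCmp(b) == 0); // what a == b would become under the rewrite
}

Under the proposed rewrite, the first assertion flips - that's the
"fix" being talked about above. The cost question is about types
where the two already agree.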
Code that worked perfectly fine before is now slower, because
it's using opCmp for opEquals when it wasn't before. Even worse,
if you define opEquals, you're then forced to define toHash,
which is much harder to get right. So, in order to avoid a
performance hit on opEquals just because you defined opCmp, you
now have to define opEquals yourself, which in turn forces you to
define toHash and significantly increases the chances of bugs.
And regardless of the increased risk of bugs, it's extra code
that you shouldn't need to write anyway, because the normal,
default opEquals and toHash worked just fine.
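To put a rough shape on the cost (a sketch, not a benchmark, and
a made-up type): the default memberwise opEquals on a string
member can reject strings of different length immediately, while
routing == through opCmp means a lexicographic walk over the
common prefix every time:

struct Name
{
    string value;

    // Lexicographic ordering. The default opEquals and toHash
    // are already correct for this type.
    int opCmp(const Name rhs) const
    {
        import std.algorithm.comparison : cmp;
        return cmp(value, rhs.value);
    }
}

unittest
{
    auto a = Name("abcdefgh");
    auto b = Name("abcdefg");  // same prefix, different length
    assert(a != b);            // default opEquals: length check is enough
    assert(a.opCmp(b) > 0);    // the rewrite would walk the prefix instead
}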
I honestly have no sympathy for anyone who defined opCmp so that
it disagrees with the default opEquals but didn't define opEquals
to match. Getting that right is simple, and it's trivial to test
for if you're unit testing like you should be. I don't want to
pay in my code
just to make the compiler friendlier to someone who didn't even
bother to do something so simple. And any code in that situation
has always been broken anyway. I'm _definitely_ not interested in
reducing the performance of existing code in order to fix bugs in
the code of folks who couldn't get opEquals or opCmp right.
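And "getting it right" really is small. A sketch (made-up Entry
type) of a struct whose ordering deliberately ignores a field,
with the matching opEquals and toHash and the unittest that pins
the invariant down:

struct Entry
{
    int id;
    string payload;  // deliberately not part of the type's identity

    int opCmp(const Entry rhs) const nothrow @safe
    {
        if (id != rhs.id)
            return id < rhs.id ? -1 : 1;
        return 0;
    }

    // opCmp disagrees with the default memberwise opEquals, so
    // define opEquals to match it...
    bool opEquals(const Entry rhs) const nothrow @safe
    {
        return id == rhs.id;
    }

    // ...and toHash, so that the type still works as an AA key:
    // equal values must hash identically.
    size_t toHash() const nothrow @safe
    {
        return hashOf(id);
    }
}

unittest
{
    auto a = Entry(1, "x");
    auto b = Entry(1, "y");
    assert(a == b && a.opCmp(b) == 0);  // == and opCmp == 0 agree
    assert(a.toHash() == b.toHash());   // equal values, equal hashes
}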
I'd much rather be able to take advantage of the fast, default
opEquals and correct toHash than be forced to define them just
because I defined opCmp and didn't want a performance hit on
opEquals.
- Jonathan M Davis