Why is D not popular enough?

Marco Leise via Digitalmars-d digitalmars-d at puremagic.com
Fri Aug 19 01:31:39 PDT 2016


On Thu, 18 Aug 2016 22:50:27 +0000,
John Smith <gyroheli at gmail.com> wrote:

> Well you could say the same for int. Why isn't "int + int =
> long"? Right now it is following the rule "int + int = int".

I believe that in C, int reflects the native machine word that
the CPU uses to perform arithmetic. Together with the undefined
overflow behavior of signed types (a concession to differing
hardware implementations), I suppose it was the best bet to at
least widen smaller types to int.
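
D inherited these promotion rules from C, so the behavior can
be sketched in D itself (a minimal illustration):

import std.stdio;

void main()
{
    short x = 30_000;
    short y = 30_000;
    // Operands narrower than int are widened to int before the
    // addition, so the sum does not wrap at 16 bits.
    static assert(is(typeof(x + y) == int));
    writeln(x + y); // prints 60000
}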

Even on today's amd64 CPUs, int/uint remain the most efficient
integral types for multiplication and division.

If we hypothetically switched to ubyte + ubyte = ubyte
semantics, code like this would break silently:

ubyte a = 210;
ubyte b = 244;
float f = 1.1 * (a + b); // today a + b is int 454; wrapped, it would be 198
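
As a runnable sketch of the difference (the cast merely
simulates the hypothetical wrapping rule):

import std.stdio;

void main()
{
    ubyte a = 210;
    ubyte b = 244;

    // Current D rule: a + b is promoted to int, so the sum is 454.
    float f = 1.1 * (a + b);
    writeln(f); // ~499.4

    // Hypothetical ubyte + ubyte = ubyte rule, simulated with a cast:
    // 454 wraps to 198 and the result silently changes.
    float g = 1.1 * cast(ubyte)(a + b);
    writeln(g); // ~217.8
}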

Or, alternatively, instead of casting uints to ubytes you would
now be casting ubytes to uints all over the place. What we have
in D is the guarantee that the result will reliably be at least
32-bit.
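
In that hypothetical world, getting the correct wide result
back would require explicit widening at every use, along these
lines:

ubyte a = 210, b = 244;
// Widen one operand by hand so the addition happens at uint width.
uint sum = cast(uint)a + b; // 454 instead of a wrapped 198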

-- 
Marco


