Why D is not popular enough?

John Smith via Digitalmars-d digitalmars-d at puremagic.com
Thu Aug 18 15:50:27 PDT 2016


Well, there are some things I feel could be improved. A lot of 
them are really just minor, but the deal breaker for me is 
mostly the compilers. The GCC and Clang implementations are 
really far behind in terms of frontend version, so they are 
missing a lot of features; a lot of the features I'd want to 
use D for. The download section also claims "good optimization", 
but honestly it isn't. I rewrote a small portion of one of my 
projects to test out D, but looking at the generated assembly, 
a lot of it was questionable. I'd say DMD probably produced the 
best assembly, but then there's the problem that it doesn't 
support MMX instructions. Even on 64-bit it still uses the FPU, 
which I can't really use. The FPU isn't consistent enough for 
simulations that run on separate computers and different OSes 
and need to be kept in sync over a network.

Anyway, on to some more minor things. I really don't like 
__gshared; why couldn't it just be named "gshared" instead? I 
don't like that naming convention in C/C++ either, but here in 
D it feels completely out of place. Nothing else uses a leading 
"__", and from the documentation it looks like it's made to 
stand out because it shouldn't normally be used. But it is used 
a lot, and certain things wouldn't be possible without it. I 
forget the details, but Derelict, the dynamic library loader 
for D, is one such case.

There's no "const T&" equivalent in D. Basically, constant 
values need to be copied, while non-constant variables can be 
passed with "ref". So you need to write two different functions 
that in essence do the same thing. One way around the code 
duplication is using templates so the compiler generates these 
variants for you, but then there's code bloat, because each 
parameter could independently be a copy or a "ref". This leads 
to a lot of extra copies, and depending on the object and its 
size that might not be desirable. It also limits code where you 
could do a one-line operation like 
"someObject.process(otherObject.generateLargeConstantObject());". 
In this case an extra copy is made, and currently none of the 
compilers are able to optimize it out. It seems like it should 
be possible for the compiler to elide the copy, but that's not 
the case today, which goes back to my main argument, I guess. I 
can see the value in not having a "const&", but the current 
implementation is flawed.

http://ideone.com/INGSsZ

The garbage collector shows up in a few of the standard 
libraries as well. I think the only problem I had with that is 
that std.range has severely reduced functionality when using 
static arrays.

I think there was more, but it's been a while since I used D 
and I don't recall. There are significant improvements in D 
over C++ that I do love, and I really want to be able to use 
it. Whenever I run into an issue with C++, I just think about D 
and how easily I could have solved that problem.

On Sunday, 14 August 2016 at 18:45:06 UTC, Walter Bright wrote:
> This rule was retained for D to make it easier to translate 
> code from C/C++ to D. Changing the rule could result in subtle 
> and invisible bugs for such translations and for C/C++ 
> programmers who are so used to the integral promotion rules 
> that they aren't even really aware of reliance upon them.
>
> The reason C/C++ programmers, even experienced ones, are often 
> unaware of this rule is there is another rule, implicit 
> narrowing, so:
>
>     byte = byte + byte;
>
> compiles without complaint. The trouble comes when byte has a 
> value, say, 255, and the sum is 510. The assignment silently 
> chops it to byte size, and 254 is stored in the result.
>
> For D, we decided that silently converting 510 to 254 would not 
> be acceptable. Hence an explicit cast would be required,
>
>     byte = cast(byte)(byte + byte);

Well, you could say the same thing for int. Why isn't "int + 
int = long"? Right now it follows the rule "int + int = int". 
Maybe because the values aren't as small, but I could make the 
same argument: if we add 2147483647 to 2147483647, the value 
stored is -2. By the same reasoning, you probably wouldn't find 
that acceptable either, correct? At some point you are going to 
run out of types that have larger storage. What larger type is 
"cent + cent" going to have? At some point you just have to 
accept that you are working with a finite set of numbers. For D 
the compromise happens at the type int. C/C++ just accepts that 
and maintains consistency rather than flip-flopping at an 
arbitrary type.

