DMD backend quality (Was: Re: DIP 1031--Deprecate Brace-Style Struct Initializers--Community Review Round 1 Discussion)

H. S. Teoh hsteoh at quickfur.ath.cx
Tue Feb 18 21:26:14 UTC 2020


On Tue, Feb 18, 2020 at 08:10:57PM +0000, kinke via Digitalmars-d wrote:
[...]
> Once you know that the DMD backend is a formidable one-man project, it
> should be clear as day that that cannot compete with huge eco-systems
> like LLVM and GCC with millions of man-hours, in terms of architecture
> support, optimizations and flexibility. It's nice for fast unoptimized
> codegen if you are only targeting x86, and probably features better
> debuginfos than LDC at the moment (well, except for Mac apparently),
> but to me, the DMD *backend* is clearly a dead end in the long run.

Yeah, not to mention Walter himself has mostly been working on the front
end these days; as he himself said, he hasn't been keeping the backend
up-to-date.  A single person only has so many hours in a day; no matter
how brilliant a genius Walter is, there's still a limit to what one
person can accomplish.  And if DMD's very limited target-arch support is
already languishing for lack of time to keep it up-to-date, support for
other targets like Android is pretty much never going to happen, whereas
using LDC gives you that support *today*.

Add to that LDC's recent ability to cross-compile from Linux to Windows
without needing to rebuild the entire toolchain, vs. dmd's inability to
cross-compile at all (AFAIK), and the best choice from the user's POV is
beyond obvious.
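To make the cross-compilation point concrete, here is a hedged sketch of
the LDC workflow (flag names as documented by LDC; "hello.d" is a
stand-in source file, and details may have changed since):

```shell
# From a Linux host, ask LDC to target 64-bit Windows/MSVC.
# You also need the target's druntime/Phobos libraries, e.g. taken
# from the prebuilt Windows LDC package -- no toolchain rebuild needed.
ldc2 -mtriple=x86_64-windows-msvc hello.d

# DMD, by contrast, only emits code for the platform it runs on,
# so there is no equivalent dmd invocation.
```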


> DMD itself runs faster by 58% when compiled with LDC for a random
> compilation test case, see
> https://github.com/dlang/installer/pull/425#issuecomment-580868218.

IMNSHO, we should use LDC to compile official DMD releases, as someone
has already mentioned. ;-)


> For number crunching, you can definitely expect much higher speed-ups
> if the auto-vectorizer kicks in. Whole-program optimization via LTO,
> even across C(++) and D, and PGO can improve your runtime further.
[...]
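For reference, the LTO and PGO features kinke mentions map onto LDC
flags roughly like this (flag names as documented by LDC; "app.d" is a
stand-in for your sources):

```shell
# Link-time optimization across all modules (and across C/C++ objects
# also built with LTO):
ldc2 -O3 -flto=thin app.d

# Profile-guided optimization is a two-step build: instrument, run a
# representative workload, merge the profile, then rebuild using it.
ldc2 -O3 -fprofile-instr-generate=app.profraw app.d -of=app-instr
./app-instr                                  # representative workload
ldc-profdata merge app.profraw -output app.profdata
ldc2 -O3 -fprofile-instr-use=app.profdata app.d
```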

Yeah, LDC's optimizer is light-years ahead of DMD's.  I just tested it
again on one of my latest projects, with a custom main() that runs a
CPU-intensive part of the code in a loop 100 times.  With `dmd -O
-inline`, a typical run is about 25-26 seconds.  With `ldmd2 -O3`, a
typical run is about 16-17 seconds.  That's roughly a 35% reduction in
run time (about a 1.5x speedup), on exactly the same code with maximum
optimization flags on both compilers.

The numbers speak for themselves, really.

As for compile times, `dmd -O -inline` typically takes about 6 seconds,
whereas `ldmd2 -O3` typically takes about 8 seconds, i.e. about a third
longer.  Is a 2-second compile-time slowdown worth that kind of runtime
improvement in the resulting executable?  To me, it's an unquestionable
yes.
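Anyone can reproduce this kind of comparison on their own code; a
minimal sketch ("bench.d" stands in for the project's sources, and
ldmd2 is LDC's dmd-compatible driver, so it accepts dmd-style flags):

```shell
# Build the same sources with both compilers, timing the builds:
time dmd -O -inline bench.d -ofbench-dmd
time ldmd2 -O3 bench.d -ofbench-ldc

# Then time the resulting executables on an identical workload:
time ./bench-dmd
time ./bench-ldc
```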


T

-- 
Democracy: The triumph of popularity over principle. -- C.Bond
