<p dir="ltr"><br>
On 30 May 2015 19:05, "via Digitalmars-d" <<a href="mailto:digitalmars-d@puremagic.com">digitalmars-d@puremagic.com</a>> wrote:<br>
><br>
> On Saturday, 30 May 2015 at 14:29:56 UTC, ketmar wrote:<br>
>><br>
>> On Sat, 30 May 2015 12:00:57 +0000, Kyoji Klyden wrote:<br>
>><br>
>>> So personally I vote that speed optimizations on DMD are a waste of time<br>
>>> at the moment.<br>
>><br>
>><br>
>> it's not only a waste of time, it's unrealistic to make the DMD backend's<br>
>> quality comparable to GDC/LDC. it would require a complete rewrite of the<br>
>> backend and many man-years of work. and GDC/LDC will not simply sit frozen<br>
>> all this time.<br>
><br>
><br>
> +1 for LDC as first class!<br>
><br>
> D would become a lot more appealing if it could take advantage of the LLVM tooling already available!<br>
><br>
> Regarding the speed problem - one could always give LDC a nitro switch that simply runs fewer of the expensive passes, reducing codegen quality but improving compile speed. Would that work? I'm assuming the "slowness" in LLVM comes from the optimization passes.<br>
></p>
<p dir="ltr">I'd imagine the situation is similar with GDC. For large compilations the bottleneck is the optimizer; for small compilations it's the linker. The small-compilation case is at least solved by switching to shared libraries. For larger compilations, using only -O1 optimizations should be fine for most programs that aren't trying to beat some sort of benchmark.</p>
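<p dir="ltr">As a rough sketch of what such a "nitro switch" could look like today without compiler changes: a tiny wrapper that picks cheaper flags for iterative builds. The -O0/-O1/-O3 flags are real ldc2 options, and --link-defaultlib-shared (linking druntime/Phobos as shared libraries to cut link time) exists in newer LDC releases; the build-mode names and the ldc_flags helper itself are this sketch's own invention, not an existing tool.</p>

```shell
# ldc_flags MODE - print ldc2 flags for a given build mode (hypothetical helper).
ldc_flags() {
  case "$1" in
    # dev: skip the optimizer entirely and link the default libs as
    # shared objects, trading codegen quality for build speed.
    dev)     echo "-O0 --link-defaultlib-shared" ;;
    # default: only the cheap optimization passes.
    default) echo "-O1" ;;
    # release: full pass pipeline, slowest compile.
    release) echo "-O3" ;;
    *)       echo "unknown mode: $1" >&2; return 1 ;;
  esac
}

# Print the command a build script would run for a fast dev build.
echo "ldc2 $(ldc_flags dev) app.d"
```

<p dir="ltr">A build system would invoke something like <code>ldc2 $(ldc_flags dev) app.d</code> during development and switch to <code>release</code> for shipped binaries.</p>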