Jai compiles 80,000 lines of code in under a second
aliak
something at something.com
Fri Sep 21 13:37:58 UTC 2018
On Friday, 21 September 2018 at 09:21:34 UTC, Petar Kirov
[ZombineDev] wrote:
> On Thursday, 20 September 2018 at 23:13:38 UTC, aliak wrote:
>> Alo!
>>
>
> I have been watching Jonathan Blow's Jai for a while myself.
> There are many interesting ideas there, and many of them are
> what made me like D so much in the first place. It's very
> important to note that the speed claims he has been making are
> all a matter of developer discipline. You can have an infinite
> loop executed at compile-time in both D and Jai. There's
> nothing magical Jai can do about that - the infinite loop is
> not going to finish faster ;) You can optimize the speed of
> compile-time computation just like you can optimize for
> run-time speed.
Haha well, yes of course, can't argue with that :p I guess it
makes more sense to compare the "intuitive" coding path of a
given language. E.g.: if I iterate over a million objects in a
for loop at compile time because I want to process them, there
is no other way to do that. If language X takes an hour and
language Y takes a millisecond, I'm pretty sure language X
can't claim to compile fast, since that seems like a pretty
common scenario and isn't using the language in any way it
wasn't meant to be used.
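To make that concrete, here's a minimal sketch (function and variable names are my own, not from the thread) of the same D function evaluated at run time or at compile time via CTFE; the compiler just interprets it, so compile-time speed is entirely down to what the code does:

```d
// Sum the integers 1..n. Nothing special is needed to make this
// CTFE-able; any function usable at compile time works.
int sumTo(int n)
{
    int total = 0;
    foreach (i; 1 .. n + 1)
        total += i;
    return total;
}

// Assigning to an enum forces evaluation at compile time (CTFE).
// A huge n here slows down *compilation*, not the final binary.
enum compileTimeSum = sumTo(1_000);

void main()
{
    // The same function also runs normally at run time.
    assert(compileTimeSum == sumTo(1_000));
}
```

Swap `1_000` for a few million and the compile itself gets slower, which is exactly the "developer discipline" point: no language can make an expensive compile-time loop free.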
>
> What you're observing with D is that right now many libraries,
> including Phobos, have tried to see how much they can push the
> language (to make for more expressive code or faster run-time),
> and not as much time has been spent on optimizing compile-time.
> If you take a code-base written in a Java-like subset of the
> language, I can guarantee you that DMD is going to be very
> competitive with other languages like C++, Go, Java or C#. And
> that's considering that there are many places that could be
> optimized internally in DMD. But overall most of the time spent
> compiling D programs is: a) crazy template / CTFE
> meta-programming and b) an inefficient build process (no parallel
> compilation for non-separate compilation, no wide-spread use of
> incremental compilation, etc.). AFAIR, there were several
> projects for a caching D compiler, and that could go a long way
> toward improving things.
Ah, I see. OK, so there are quite a few big wins available, it
seems (e.g. parallelization).
>
> On the other hand, there are things that are much better done
> at compile-time rather than run-time, like traditional
> meta-programming. My biggest gripe with D is that currently you
> only have tools for declaration-level meta-programming
> (version, static if, static foreach, mixin template), but
> nothing but plain strings for statement-level
> meta-programming. CTFE is great, but why re-implement the
> compiler in CTFE code, while the actual compiler is sitting
> right there compiling your whole program ;)
Yeah, I've always wondered about this. But I just boiled it down
to me not understanding how compilers work :)
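For anyone following along, here's a minimal sketch of what "plain strings for statement-level meta-programming" means in practice: you build D source as a string and splice it in with `mixin`. The struct and helper names below are hypothetical, just for illustration:

```d
import std.format : format;

// A toy aggregate; the fields are arbitrary.
struct Point { int x; int y; }

// Build the *source text* of a function body at compile time.
// This is the "re-implement the compiler in strings" pattern.
string fieldSum(string[] fields)
{
    string code = "int total = 0;\n";
    foreach (f; fields)
        code ~= format("total += p.%s;\n", f);
    code ~= "return total;";
    return code;
}

int sumFields(Point p)
{
    // The generated statements are compiled right here.
    mixin(fieldSum(["x", "y"]));
}
```

It works, but the generator manipulates raw text rather than any syntax-tree structure, which is exactly the gripe being raised in the quote above.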
>
> P.S.
>
> Jai:
> loadExecutableIcon(myIcon, exeLocation)
>
> D:
> static immutable ubyte[] icon = import("image.png").decodePng;
Si si, but I believe loadExecutableIcon actually calls Windows
APIs to set an icon on an executable, and those would probably be
@system, which means I don't think that could be done at compile
time in D.
>
> (In D you have read-only access to the file-system at
> compile-time using the -J flag.)
>
> [0]: https://github.com/atilaneves/reggae
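For completeness, here's a minimal sketch of that -J usage; the file name version.txt and the build command are my own assumptions, not from the thread:

```d
// Compile with: dmd -J. app.d
// import("...") reads the named file's contents at compile time,
// searching the directories passed via -J. Assumes a file named
// version.txt exists in the current directory.
static immutable string versionInfo = import("version.txt");

void main()
{
    import std.stdio : writeln;
    writeln("built with version: ", versionInfo);
}
```

This only grants read-only access, and only to directories explicitly whitelisted with -J, so it stays a long way from the arbitrary compile-time system calls the Jai example implies.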