D Language Foundation October Monthly Meeting Summary

Richard (Rikki) Andrew Cattermole richard at cattermole.co.nz
Sun Dec 31 12:02:50 UTC 2023


On 01/01/2024 12:12 AM, Mike Parker wrote:
> Next, he said he'd discovered that a one-line file with |import 
> std.file| takes 200ms to compile, and that was nuts. He needed to figure 
> out at some point exactly what the problem was. It was just semantic 
> analysis just from the import. He wasn't even generating the object 
> file. On the same machine, he also tried a C++ compile with just 
> |#include <iostream>| and that took 400ms. He said that twice as fast as 
> C++ was nowhere good enough. Walter agreed.

It's 2024, so let's hunt down what the problem is with ``std.file``.

On my machine, compiling it with ldc2 1.35.0 took ~500ms (frontend 
only).

That is quite a long time.

So let's go hunting!
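For anyone wanting to reproduce the baseline measurement, something along these lines should work. This is a sketch based on my understanding of LDC's flags: ``-o-`` suppresses object-file emission so only the frontend runs, and ``--ftime-trace`` produces a per-module/per-pass timing profile, which is the kind of output you'd use for a hunt like this.

```sh
# Reproduction sketch: time frontend-only compilation of a one-line
# module that does nothing but import std.file.
echo 'import std.file;' > bench.d
time ldc2 -o- bench.d

# LDC can also emit a Chrome-style time trace showing how long each
# module and pass (including sema2) takes.
ldc2 -o- --ftime-trace bench.d
```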

I've found a bunch of cost associated with ``std.uni``, specifically 
reached via ``std.windows.charset``.

That module imports ``std.string``, which pulls in ``std.uni`` and, with 
it, the Unicode tables, which take 117ms to sema2 (no surprises there).

Why does it import ``std.string``? To call ``toStringz``. What did I 
replace it with in my test code? ``return cast(typeof(return))(s ~ "\0").ptr;``.

Bye bye 117ms.
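Spelled out as a standalone helper, the replacement looks roughly like this (the function name is mine, and the real return type in ``std.windows.charset`` may differ, hence the cast in the one-liner above; the point is that it needs no imports at all):

```d
// Hypothetical standalone version of the replacement: build the
// zero-terminated string directly instead of calling
// std.string.toStringz, avoiding the std.string -> std.uni chain.
const(char)* toCStringz(const(char)[] s)
{
    // ~ allocates a new array with the terminator appended;
    // .ptr then points at a valid C string.
    return (s ~ "\0").ptr;
}
```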


Next up is ``std.datetime.timezone``: 40ms of its time comes from 
``std.string``, but alas, that import actually is needed.

Now on to sema2 for ``std.datetime.systime``, which again leads into 
``std.datetime.timezone``; as above, nothing we can do there, for all 
111ms of it.


All in all, I can't find anything left to really prune for this. There 
won't be any easy wins here.

The Unicode tables would need to be completely redone to improve things, 
and even then that may only reduce the ~100ms of sema2 time.


More information about the Digitalmars-d-announce mailing list