core.reflect vs. __traits
Stefan Koch
uplink.coder at googlemail.com
Sat Jul 3 11:59:03 UTC 2021
Good Day D Community,
I just did a little test run of using `core.reflect` on the `enum
TOK` within DMD.
(Which was successful, yay! ;))
Then I compared it against the `__traits` version.
```d
import dmd.tokens;

version (UseCoreReflect)
{
    import core.reflect.reflect;

    static immutable e = cast(immutable EnumDeclaration) nodeFromName("TOK");

    pragma(msg,
        () {
            import std.conv;
            string result;
            result ~= "enum " ~ e.name ~ " {\n";
            foreach (m; e.members)
            {
                IntegerLiteral l = cast(IntegerLiteral) m.value;
                result ~= " " ~ m.name ~ " = " ~ to!string(l.value) ~ ",\n";
            }
            result ~= "}";
            return result;
        } ()
    );
}

version (UseTraits)
{
    enum members = __traits(allMembers, TOK);

    pragma(msg,
        () {
            import std.conv;
            string result;
            result ~= "enum " ~ TOK.stringof ~ " {\n";
            static foreach (m; members)
            {
                result ~= " " ~ m ~ " = " ~ to!string(cast(int) mixin("TOK." ~ m)) ~ ",\n";
            }
            result ~= "}";
            return result;
        } ()
    );
}
```
In terms of code I prefer the `core.reflect` version, because it
does not use a mixin and doesn't force you into a `static foreach`.
It can also trivially be factored into a runtime function, which
doesn't stress CTFE, whereas the `__traits` version cannot easily
be made to do most of its work at runtime.
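For illustration, here is a sketch of that factoring. It reuses only the names that appear in the snippet above (`EnumDeclaration`, `IntegerLiteral`, `nodeFromName`); `core.reflect` is experimental, so the exact API may differ, and `stringifyEnum` is a hypothetical helper name:

```d
import dmd.tokens;
import core.reflect.reflect;

// Hypothetical sketch: the stringification moved into an ordinary
// runtime function over the reflection node.
string stringifyEnum(immutable EnumDeclaration e)
{
    import std.conv : to;

    string result = "enum " ~ e.name ~ " {\n";
    foreach (m; e.members)
    {
        auto l = cast(immutable IntegerLiteral) m.value;
        result ~= " " ~ m.name ~ " = " ~ to!string(l.value) ~ ",\n";
    }
    return result ~ "}";
}

// The AST node is still produced at compile time ...
static immutable e = cast(immutable EnumDeclaration) nodeFromName("TOK");

void main()
{
    import std.stdio : writeln;

    // ... but the string is built at run time, so CTFE only has to
    // evaluate nodeFromName, not the concatenation loop.
    writeln(stringifyEnum(e));
}
```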
I guess you could build a `string[]` of member names and index
into it at runtime, but that's not as trivial as just calling the
function at runtime with an `EnumDeclaration` object generated at
compile time.
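That workaround might look something like the following sketch (assuming the same `import dmd.tokens;` for `TOK`; `stringifyTOK` is a hypothetical name):

```d
import dmd.tokens;
import std.conv : to;

// Compile time: capture member names and values into plain arrays.
enum string[] names = [__traits(allMembers, TOK)];
enum int[] values = () {
    int[] v;
    static foreach (m; __traits(allMembers, TOK))
        v ~= cast(int) __traits(getMember, TOK, m);
    return v;
} ();

// Run time: index the precomputed arrays; no mixin or static
// foreach is needed here, but the setup above still is.
string stringifyTOK()
{
    string result = "enum " ~ TOK.stringof ~ " {\n";
    foreach (i, name; names)
        result ~= " " ~ name ~ " = " ~ to!string(values[i]) ~ ",\n";
    return result ~ "}";
}
```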
Please let me know what you think.
P.S. In terms of performance the `core.reflect` version is faster
by 2% on average, which doesn't matter in the slightest with such
tiny test cases.
More information about the Digitalmars-d mailing list