bigint compile time errors

Kai Nacke via Digitalmars-d-learn digitalmars-d-learn at puremagic.com
Sun Jul 5 13:35:01 PDT 2015


On Friday, 3 July 2015 at 04:08:32 UTC, Paul D Anderson wrote:
> On Friday, 3 July 2015 at 03:57:57 UTC, Anon wrote:
>> On Friday, 3 July 2015 at 02:37:00 UTC, Paul D Anderson wrote:
>
>>> enum BigInt test1 = BigInt(123);
>>> enum BigInt test2 = plusTwo(test1);
>>>
>>> public static BigInt plusTwo(in bigint n)
>>
>> Should be plusTwo(in BigInt n) instead.
>>
>
> Yes, I had aliased BigInt to bigint.
>
> And I checked: it compiles for me too on Windows with -m64. 
> That makes it seem more like a bug than a feature.
>
> I'll open a bug report.
>
> Paul

The point here is that x86 uses an assembler-optimized 
implementation (std.internal.math.biguintx86), while every other 
CPU architecture (including x64) uses a D implementation 
(std.internal.math.biguintnoasm). Because of the inline 
assembler, the x86 version is not CTFE-enabled.
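To make that concrete, here is a minimal sketch of the pattern from the 
thread (using the corrected `in BigInt` parameter type). Under the 
explanation above, this should compile with a 64-bit build (dmd -m64), 
where the CTFE-capable biguintnoasm backend is used, and fail at compile 
time on 32-bit x86, where the inline-assembler biguintx86 backend is used:

```d
import std.bigint : BigInt;

// Runs at compile time when used in an enum initializer (CTFE).
BigInt plusTwo(in BigInt n)
{
    return n + BigInt(2);
}

enum BigInt test1 = BigInt(123);
enum BigInt test2 = plusTwo(test1); // CTFE: ok on -m64, fails on x86

void main()
{
    assert(test2 == BigInt(125));
}
```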

Regards,
Kai
