bigint compile time errors
Kai Nacke via Digitalmars-d-learn
digitalmars-d-learn at puremagic.com
Fri Jul 10 13:55:06 PDT 2015
On Tuesday, 7 July 2015 at 22:19:22 UTC, Paul D Anderson wrote:
> On Sunday, 5 July 2015 at 20:35:03 UTC, Kai Nacke wrote:
>> On Friday, 3 July 2015 at 04:08:32 UTC, Paul D Anderson wrote:
>>> On Friday, 3 July 2015 at 03:57:57 UTC, Anon wrote:
>>>> On Friday, 3 July 2015 at 02:37:00 UTC, Paul D Anderson
>>>> wrote:
>>>
>>>>> [...]
>>>>
>>>> Should be plusTwo(in BigInt n) instead.
>>>>
>>>
>>> Yes, I had aliased BigInt to bigint.
>>>
>>> And I checked and it compiles for me too with Windows -m64.
>>> That makes it seem more like a bug than a feature.
>>>
>>> I'll open a bug report.
>>>
>>> Paul
>>
>> The point here is that x86 uses an assembler-optimized
>> implementation (std.internal.math.biguintx86) and every other
>> CPU architecture (including x64) uses a D version
>> (std.internal.math.biguintnoasm). Because of the inline
>> assembler, the x86 version is not CTFE-enabled.
>>
>> Regards,
>> Kai
>
> Could we add a version or some other flag that would allow the
> use of .biguintnoasm with the x86?
>
> Paul
biguintx86 could import biguintnoasm. Every function would need
to check whether it is running under CTFE (via __ctfe) and, if so,
call the noasm function instead. That should work, but it requires
some effort.
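A minimal sketch of that dispatch pattern, using hypothetical stand-in functions (the real ones would be the corresponding routines in std.internal.math.biguintx86 and std.internal.math.biguintnoasm):

```d
// Stand-in for the pure-D fallback from biguintnoasm.
uint addDigitsNoasm(uint a, uint b)
{
    return a + b;
}

// Stand-in for the asm-optimized x86 routine. __ctfe is true only
// during compile-time function evaluation, where inline assembler
// is not allowed, so we branch to the noasm implementation there.
uint addDigits(uint a, uint b)
{
    if (__ctfe)
        return addDigitsNoasm(a, b);
    // At run time, the inline-assembler path would go here.
    return a + b;
}

// Forces CTFE; succeeds because the noasm path is taken.
enum x = addDigits(2, 3);
static assert(x == 5);
```

Note that the `if (__ctfe)` check is an ordinary run-time branch, not `static if`, so both paths must compile; the compiler simply never evaluates the asm path during CTFE.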
Regards,
Kai