Implementing C23 _Float16 in ImportC
Bruce Carneal
bcarneal at gmail.com
Wed Jan 15 17:28:37 UTC 2025
On Wednesday, 15 January 2025 at 06:42:45 UTC, Sergey wrote:
> On Tuesday, 14 January 2025 at 22:14:01 UTC, Walter Bright wrote:
>> The interesting thing is not the existence of the types. It's
>> how they are implemented. The X86_64 architecture does not
>> support a 16 bit floating point type.
>
> It has had some limited support starting from 2018 with
> AVX512-FP16, and some newer models of x86 support both FP16
> and BF16.
>
> * https://networkbuilders.intel.com/docs/networkbuilders/intel-avx-512-fp16-instruction-set-for-intel-xeon-processor-based-products-technology-guide-1651874188.pdf
>
> * https://stackoverflow.com/questions/49995594/half-precision-floating-point-arithmetic-on-intel-chips
>
> Also, it is important for AI/ML models, which can run on ARM
> CPUs, GPUs, and TPUs.
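
For context on what "limited support" means in practice: when the
target lacks native FP16 arithmetic, a C23 compiler typically keeps
_Float16 as a 16-bit storage format and performs each operation in
float, narrowing the result. A minimal C23 sketch (illustration
only, not ImportC/dmd internals):

/* On x86_64 without AVX512-FP16 this usually lowers to F16C
 * conversions (vcvtph2ps/vcvtps2ph) around a 32-bit add. */
#include <stdio.h>

_Float16 f16_add(_Float16 a, _Float16 b)
{
    return (_Float16)((float)a + (float)b);
}

int main(void)
{
    _Float16 x = 1.5f16;   /* C23 half-precision literal suffix */
    _Float16 y = 0.25f16;
    printf("sizeof(_Float16) = %zu\n", sizeof(_Float16)); /* 2 */
    printf("x + y = %f\n", (double)f16_add(x, y));        /* 1.750000 */
    return 0;
}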
The ARM architecture is also evolving (v8.2-A, v9) to better
support data parallelism, with both FP16 and BF16 as well as
SVE2, but uptake by manufacturers is uneven. Some may be betting
that CPU data parallelism is a dead end and that NEON is more
than enough.
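
On AArch64 with the v8.2-A FP16 extension the arithmetic can stay
in half precision. A hedged sketch using the standard ACLE feature
macro and NEON intrinsics (illustration only, nothing ImportC- or
dcompute-specific):

#include <stddef.h>

#if defined(__ARM_FEATURE_FP16_VECTOR_ARITHMETIC)
#include <arm_neon.h>

/* Element-wise half-precision add, 8 lanes per iteration,
 * using native FP16 vector arithmetic (no widening to float). */
void half_add(const float16_t *a, const float16_t *b,
              float16_t *out, size_t n)
{
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        float16x8_t va = vld1q_f16(a + i);
        float16x8_t vb = vld1q_f16(b + i);
        vst1q_f16(out + i, vaddq_f16(va, vb));
    }
    for (; i < n; ++i)             /* scalar tail */
        out[i] = (float16_t)((float)a[i] + (float)b[i]);
}
#endif

Without the extension, the same loop falls back to the
widen-to-float pattern shown above.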
In the GPU world (dcompute), mini/micro floats are very important
for a variety of workloads, AI/ML chief among them.