std.simd

Manu turkeyman at gmail.com
Sat Mar 17 12:15:08 PDT 2012


On 17 March 2012 20:42, Robert Jacques <sandford at jhu.edu> wrote:

> On Fri, 16 Mar 2012 16:45:05 -0500, Manu <turkeyman at gmail.com> wrote:
>
>> Can you give me an example of a non-simd context where this is the case?
>>
>> Don't say shaders, because that is supported in hardware, and that's my
>> point.
>> Also there's nothing stopping a secondary library adding/emulating the
additional types. They could work seamlessly together. float4 may come
>> from
>> std.simd, float3/float2 may be added by a further lib that simply extends
>> std.simd.
>>
>
> Shaders. :) Actually, float4 isn't supported in hardware if you're on
> NVIDIA. And IIRC ATI/AMD is moving away from hardware support as well. I'm
> not sure what Intel or the embedded GPUs do. On the CPU side SIMD support
> on both ARM and PowerPC is optional. As for examples, pretty much every
> graphics, vision, imaging and robotics library has a small vector library
> attached to it; were you looking for something else?
>

GPU hardware is fundamentally different from CPU vector extensions. The
goal is not to imitate shaders on the CPU; there are already other
possibilities for that anyway.

> Also, clean support for float3 / float2 / etc. has shown up in Intel's
> Larrabee and its Knights derivatives; so, maybe we'll see it in a desktop
> processor someday. To say nothing of the 256-bit and 512-bit SIMD units on
> some machines.
>

Well, when that day comes, we'll add a hardware abstraction for it. Two do
currently exist: one is 3DNow!, but that's so antiquated I see no reason
to support it. The other is the GameCube/Wii/Wii U line of consoles, which
all have 2D vector hardware. I'll gladly add support for that the moment
anyone threatens to use D on a Nintendo system, but there's no point right now.

float3, on the other hand, is not natively supported on any machine, and
it's very inefficient. Use of float3 should be discouraged at all costs.
People should be encouraged to use float4 and pack something useful into W
if they can; if not, they should be aware that they're wasting 25% of their
flops.

I don't recall ever dismissing 256-bit vector units. In fact, I've suggested
on plenty of occasions that support for AVX is mandatory. I'm also familiar
with a few 512-bit vector architectures, but I don't think they warrant a
language-level implementation yet. Let's just work through what we have
and what will be used to start with. I'd be keen to see how it tends to be
used, and to make any fundamental changes before blowing it way out
of proportion.


> My concern is that std.simd is (for good reason) leaking the underlying
> hardware implementation (std.simd seems to be very x86 centric), while
> vectors, in my mind, are a higher level abstraction.
>

It's certainly not SSE-centric. Actually, if you can legitimately criticise
me for anything, it's being biased AGAINST x86-based processors. I'm
critically aware of VMX, NEON, SPU, and many architectures that came before.
Which parts of my current work in progress do you suggest are biased toward
an x86 hardware implementation? From my experience, I'm fairly happy with
how it looks at this point as an efficiency-first, architecture-abstracted
interface.

As I've said, I'm still confident that people will just come along and wrap
it up with what they feel is a simple/user-friendly interface anyway. If I
try to make this higher-level/more-user-friendly, I still won't please
everyone, and I'll sacrifice raw efficiency in the process, which defeats
the purpose.

How do you define vectors in your mind?