GPGPUs
Atash
nope at nope.nope
Fri Aug 16 12:53:42 PDT 2013
On Friday, 16 August 2013 at 12:18:49 UTC, Russel Winder wrote:
> On Fri, 2013-08-16 at 12:41 +0200, Paul Jurczak wrote:
> […]
> Today you have to download the kernel to the attached GPGPU over the
> bus. In the near future the GPGPU will exist in a single memory address
> space shared with all the CPUs. At this point separately downloadable
> kernels become a thing of the past, it becomes a compiler/loader issue
> to get things right.
I'm iffy on the assumption that the future holds unified memory
for heterogeneous devices. Even relatively recent products such
as the Intel Xeon Phi have totally separate memory. I'm not aware
of any general-computation-oriented products that don't have
separate memory.
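For concreteness, the "separate memory" status quo looks roughly like the following CUDA-style sketch (the API calls are the real CUDA runtime API; the kernel itself is just a made-up example). Every explicit copy here is the bus traffic that a shared address space would make unnecessary:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial example kernel: double each element in place.
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = (float)i;

    // Separate device memory: allocate a buffer on the GPU...
    float *dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));

    // ...ship the data across the bus (PCIe, today)...
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // ...run the kernel on the device...
    scale<<<(n + 255) / 256, 256>>>(dev, n);

    // ...and ship the results back.
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[3] = %f\n", host[3]);
    return 0;
}
```

With a genuinely unified address space, the kernel could just take the host pointer and both cudaMemcpy calls would disappear; whether the hardware will actually get there is exactly the point under debate above.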
I'm also of the opinion that as long as people want devices that can
scale in size, there will be modular devices. Because they're modular,
there's some sort of spacing between them and the machine, e.g. PCIe
(and, somewhat importantly, a physical distance between the added
device and the CPU-stuff). Because of that, they're likely to have
their own memory.
Therefore, I'm personally not willing to bank on anything short of
targeting the least common denominator here (non-uniform memory
access), specifically because it looks like a necessity for scaling a
physical collection of heterogeneous devices up in size, which in turn
I *think* is a necessity for people trying to deal with growing data
sets in the real world.
Annnnnnnnnnnndddddd because heterogeneous compute devices aren't
*just* GPUs (e.g. the Intel Xeon Phi), I'd strongly suggest picking a
more general name, like 'accelerators' or 'APU' (except AMD totally
ran away with that acronym in marketing and I sort of hate them for
it) or '<something-I-can't-think-of-because-words-are-hard>'.
That said, I'm no expert, so go ahead and rip 'mah opinions
apart. :-D
More information about the Digitalmars-d mailing list