What's holding ~100% D GUI back?

Ola Fosheim Grøstad ola.fosheim.grostad at gmail.com
Fri Nov 29 15:29:20 UTC 2019


On Friday, 29 November 2019 at 13:27:17 UTC, Gregor Mückl wrote:
>>> GPUs are vector processors, typically 16-wide SIMD. The 
>>> shaders and compute kernels for them are written from a

[…]

> Where is this wrong? Have you looked at CUDA or compute 
> shaders? I'm honestly willing to listen and learn.

Out of curiosity, what is being discussed? The abstract machine, 
the concrete microcode, or the concrete VLSI pipeline (the 
electrical pathways)?

If the latter, then I guess it all depends? But I believe one 
trick to save real estate is to have a wide ALU that is 
partitioned into various word widths, with gates preventing 
carries from crossing lane boundaries. I would also expect a mix 
of strategies (e.g. I would expect 1/x to be implemented in a 
less efficient, but less costly, manner).
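
As a loose software analogy (SWAR, not a claim about the actual 
silicon), here is a minimal D sketch of a wide word treated as 
independent lanes, with the "carry gates" played by masking:

import std.stdio;

// Treat one 64-bit value as four 16-bit lanes. The mask-and-fixup
// keeps a carry in one lane from leaking into the next lane.
ulong addLanes16(ulong a, ulong b)
{
    enum ulong H = 0x8000_8000_8000_8000UL; // top bit of each lane
    ulong sum = (a & ~H) + (b & ~H); // low 15 bits: no cross-lane carry
    return sum ^ ((a ^ b) & H);      // recombine the top bits
}

void main()
{
    ulong a = 0x0001_7FFF_0002_0003;
    ulong b = 0x0001_0001_0002_0003;
    writefln("%016x", addLanes16(a, b)); // prints 0002800000040006
}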

However, my understanding is that VLIW caused too many bubbles in 
the pipeline for compute shaders, and that GPUs moved to a more 
RISC-like architecture where things like branching became less 
costly. These are just generic statements found in various online 
texts, though, so how that is made concrete in terms of VLSI 
design is less obvious. It still seems reasonable that they would 
pick a more granular (flexible) microcode representation.
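
To make the branching cost concrete: on wide SIMD an if/else is 
typically turned into per-lane masks, so divergent lanes make the 
hardware execute both paths. A toy D model (the 8-lane "warp" and 
the predication scheme are illustrative assumptions, not any 
specific GPU's design):

import std.stdio;

void main()
{
    enum width = 8; // toy "warp" width
    int[width] x = [1, -2, 3, -4, 5, -6, 7, -8];
    int[width] result;

    // "if (x < 0)" becomes a lane mask, not a jump.
    bool[width] mask;
    foreach (i; 0 .. width) mask[i] = x[i] < 0;

    // Then-path: all lanes step through it; masked lanes commit.
    foreach (i; 0 .. width) if (mask[i])  result[i] = -x[i];
    // Else-path: also executed, with the mask inverted.
    foreach (i; 0 .. width) if (!mask[i]) result[i] = x[i] * 2;

    // A divergent branch costs roughly the sum of both paths.
    writeln(result); // [2, 2, 6, 4, 10, 6, 14, 8]
}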

> Last weekend, in fact. I'm bootstrapping a Vulkan/RTX raytracer 
> as pet project. I want to update an OpenGL based real time room 
> acoustics rendering method that I published a while ago.

Cool!  :-D Maybe you do some version of overlap-add convolution 
in the frequency domain, or is it done in the time domain?  
Reading up on Laplace transforms right now...
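
For anyone following along, the overlap-add bookkeeping itself is 
simple. A minimal D sketch, using direct per-block convolution 
for clarity (a real convolution engine would do each block with 
FFTs, but the block/overlap structure is the same):

import std.stdio;

// Direct convolution of one block with the impulse response h.
double[] convolve(const double[] block, const double[] h)
{
    auto y = new double[](block.length + h.length - 1);
    y[] = 0.0;
    foreach (i, sample; block)
        foreach (j, tap; h)
            y[i + j] += sample * tap;
    return y;
}

// Overlap-add: chop the input into blocks, convolve each block,
// and sum the tails that spill over into the next block.
double[] overlapAdd(const double[] x, const double[] h,
                    size_t blockSize)
{
    auto y = new double[](x.length + h.length - 1);
    y[] = 0.0;
    for (size_t start = 0; start < x.length; start += blockSize)
    {
        auto end = start + blockSize < x.length
                 ? start + blockSize : x.length;
        auto part = convolve(x[start .. end], h);
        y[start .. start + part.length] += part[]; // add the tail
    }
    return y;
}

void main()
{
    double[] x = [1, 2, 3, 4, 5, 6];
    double[] h = [0.5, 0.25];
    writeln(overlapAdd(x, h, 4)); // same result as convolve(x, h)
}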

I remember when the IRCAM workstation was state of the art: a 
funky NeXTcube with lots of DSPs. Things have come a long way in 
that realm since the 90s, at least on the hardware side.



