What's holding ~100% D GUI back?

Ola Fosheim Grøstad ola.fosheim.grostad at gmail.com
Fri Nov 29 23:55:55 UTC 2019


On Friday, 29 November 2019 at 16:40:01 UTC, Gregor Mückl wrote:
> This presentation is of course a simplification of what is 
> going on in a GPU, but it gets the core idea across. AMD and 
> nVidia do have a lot of documentation that goes into some more 
> detail, but at some point you're going to hit a wall.

I find it a bit interesting that Intel was pushing their Phi 
solution (many Pentium-style cores), but seems not to have 
updated it recently. So I wonder if they will instead push more 
independent GPU cores on-die (on the CPU chip). It would make 
sense for them to build one architecture that can cover many 
market segments.

> The convolutions for aurealization are done in the frequency 
> domain. Room impulse responses are quite long (up to several 
> seconds), so time-domain convolutions are barely feasible 
> offline. The only feasible way is to use the convolution 
> theorem, transform everything into frequency space, multiply it 
> there, and transform things back...
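
Makes sense. For anyone following along, the frequency-domain 
route boils down to something like this rough, unoptimized D 
sketch (using std.numeric's FFT; the names are placeholders, and 
a real engine would use partitioned convolution with overlap-add 
so it can stream):

import std.complex : Complex;
import std.numeric : Fft;

// Convolve a dry signal with a room impulse response via the
// convolution theorem: zero-pad both to a power-of-two length,
// multiply the spectra bin by bin, and transform back.
double[] convolveFreqDomain(const double[] dry, const double[] impulse)
{
    const size_t need = dry.length + impulse.length - 1;
    size_t n = 1;
    while (n < need) n <<= 1;

    auto a = new double[n];
    auto b = new double[n];
    a[] = 0.0;   // zero-pad (D default-initializes doubles to NaN)
    b[] = 0.0;
    a[0 .. dry.length] = dry[];
    b[0 .. impulse.length] = impulse[];

    auto fft = new Fft(n);
    auto specA = fft.fft(a);
    auto specB = fft.fft(b);

    // Pointwise complex multiplication == time-domain convolution.
    auto specY = new Complex!double[n];
    foreach (i; 0 .. n)
        specY[i] = specA[i] * specB[i];

    auto y = fft.inverseFft(specY);   // back to the time domain

    auto result = new double[need];
    foreach (i; 0 .. need)
        result[i] = y[i].re;          // keep the real part only
    return result;
}

With an impulse response that is several seconds long, one 
multiply per bin is obviously much cheaper than sliding a 
multi-second kernel across the signal in the time domain.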

I remember reading a paper from the mid-90s about casting rays 
into a 3D model to estimate an acoustic model for the room. I 
assume they didn't do it in real time.

I guess you could create a psychoacoustic parametric model that 
works in the time domain... It wouldn't be very accurate, but I 
wonder if it could still be effective. It is not like Hollywood 
movies have accurate sound... We have optical illusions for the 
visual system, but there are also auditory illusions for the 
aural system, e.g. Shepard tones that ascend forever. I've heard 
the same has been done with the motion of sound: by morphing the 
phase of a sound across speakers carefully placed an exact 
distance apart, a sound can be made to move to the left forever. 
I find such things kinda neat... :)
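
In case anyone wants to play with it: a bare-bones Shepard-style 
generator is just a handful of sine partials spaced an octave 
apart, gliding upward together under a fixed bell-shaped 
loudness window and wrapping around at the top. A quick D sketch 
(completely untuned; the constants are arbitrary):

import std.math : PI, exp, fmod, sin;

// Shepard glissando: `partials` sines an octave apart drift
// upward together; each fades in near the bottom of the window
// and out near the top, so the ensemble seems to rise forever.
double[] shepardGliss(double seconds, double sampleRate = 48_000.0,
                      int partials = 8, double baseHz = 20.0,
                      double sweepSeconds = 10.0)
{
    auto n = cast(size_t)(seconds * sampleRate);
    auto outBuf = new double[n];
    outBuf[] = 0.0;

    auto phase = new double[partials];   // per-partial phase accumulators
    phase[] = 0.0;

    foreach (i; 0 .. n)
    {
        // Position within the endless glide, wrapping every sweepSeconds.
        const double glide = fmod(i / sampleRate / sweepSeconds, 1.0);
        double sample = 0.0;
        foreach (p; 0 .. partials)
        {
            // Octave position wraps from the top back to the bottom,
            // where the loudness window has already faded it out.
            const double octave = fmod(p + glide, cast(double) partials);
            const double freq = baseHz * 2.0 ^^ octave;
            const double centre = partials / 2.0;
            const double amp = exp(-((octave - centre) ^^ 2) / 4.5);
            phase[p] += 2.0 * PI * freq / sampleRate;
            sample += amp * sin(phase[p]);
        }
        outBuf[i] = sample / partials;
    }
    return outBuf;
}

The moving-sound version would be the same kind of trick applied 
to inter-speaker phase and level instead of pitch.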

Some electroacoustic composers explore this field; I think it is 
called spatialization/diffusion? I viewed one of your videos and 
the phasing reminded me a bit of how these composers work. I 
don't have access to my record collection right now, but there 
are some soundtracks that are surprisingly spatial. Kind of like 
audio versions of non-photorealistic rendering techniques. :-) 
The only one I can remember right now is Utility of Space by 
N. Barrett (unfortunately a short clip):
https://electrocd.com/en/album/2322/Natasha_Barrett/Isostasie

> There's a lot of pitfalls. I'm doing all of the convolution on 
> the CPU because the output buffer is read from main memory by 
> the sound hardware. Audio buffer updates are not in lockstep 
> with screen refreshes, so you can't reliably copy the next 
> audio frame to the GPU, convolve it there and read it back in 
> time because the GPU is on its own schedule.

Right, why can't audio buffers be supported in the same way as 
screen buffers? Anyway, if Intel decides to integrate GPU cores 
and CPU cores more tightly, then... maybe. Unfortunately, Intel 
tends to focus on making existing apps run faster, not on 
enabling the next big thing.

> Perceptually, it seems that you can get away with a fairly low 
> update rate for the reverb in many cases.

If the sound sources are at a distance, then there should be 
some time to work it out? I haven't actually thought very hard 
about that... You could also treat early and late reflections 
separately (like in a classic reverb).
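
Roughly what I had in mind, as a hypothetical D sketch (the 
names, the 50 ms split point and the refresh ratio are made up): 
refresh the early-reflection part of the impulse response on 
every geometry update, and let the long diffuse tail go stale 
for several updates before recomputing it.

// Early reflections are refreshed on every update with new ray
// data; the diffuse tail is only refreshed every
// `tailRefreshEvery` updates, since a slightly stale tail is
// much less noticeable than stale early reflections.
struct SplitReverbIR
{
    double[] earlyIR;            // first ~50 ms, tracks the scene
    double[] tailIR;             // long diffuse tail, allowed to lag
    size_t   updates;
    size_t   tailRefreshEvery = 8;

    void onNewImpulseResponse(const double[] fullIR, size_t sampleRate)
    {
        size_t split = sampleRate * 50 / 1000;   // ~50 ms of samples
        if (split > fullIR.length) split = fullIR.length;

        earlyIR = fullIR[0 .. split].dup;        // always refreshed

        if (updates++ % tailRefreshEvery == 0)
            tailIR = fullIR[split .. $].dup;     // refreshed rarely
    }
}

The early part is short enough to convolve directly per block, 
while only the tail needs the frequency-domain treatment, and 
its spectrum only has to be recomputed when it is actually 
refreshed.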

I wonder, though, if it actually has to be physically correct, 
because it seems to me that Hollywood movies can create more 
intense experiences by breaking the physical rules. But the 
problem is coming up with a good psychoacoustic model, I guess. 
So in a way, going with the physical model is easier... it is 
easier to evaluate, anyway.

> And those pesky graphics programmers want every ounce of GPU 
> performance all to themselves and never share! ;)

Yes, but maybe the current focus on neural networks will make 
hardware vendors focus on reducing latency and thus improve the 
situation for audio as well. That is my prediction, but I could 
be very wrong. Maybe they will just insist on making completely 
separate coprocessors for NNs.


