Performance of tables slower than built-in?
NaN
divide at by.zero
Sat May 25 09:04:31 UTC 2019
On Friday, 24 May 2019 at 17:40:40 UTC, Ola Fosheim Grøstad wrote:
> On Friday, 24 May 2019 at 17:04:33 UTC, Alex wrote:
>> I'm not sure what the real precision of the built-in
>> functions is, but it shouldn't be hard to max out a double
>> using standard methods (even if slow, but irrelevant after
>> the LUT has been created).
>
> LUTs are primarily useful when you use sin(x) as a signal or
> when a crude approximation is good enough.
>
> One advantage of a LUT is that you can store a more complex
> computation than the basic function. Like a filtered square
> wave.
It's a pretty common technique in audio synthesis. What I've done
in the past is store a table of polynomial segments that were
optimised with curve fitting. It's a bit of extra work to
calculate the waveform, but it actually works out faster than
having huge LUTs, since you're typically only producing maybe 100
samples in each interrupt callback, so it gets pretty likely that
your LUT is pushed into slower cache memory between calls to
generate the audio.
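
A minimal sketch of the idea in D (hypothetical names, not the
actual code from that project): one waveform cycle is split into N
equal segments, each approximated by a cubic fitted offline, and
the per-sample work is an index plus a few multiply-adds.

    // Sketch only: coefficients are assumed to come from an
    // offline curve-fitting step.
    struct PolySegment
    {
        double[4] c; // cubic: c[0] + c[1]*t + c[2]*t^2 + c[3]*t^3
    }

    // segments[i] covers phase range [i/N, (i+1)/N); phase in [0, 1).
    double evalWave(const PolySegment[] segments, double phase)
    {
        immutable double scaled = phase * segments.length;
        immutable size_t idx = cast(size_t) scaled;
        immutable double t = scaled - idx; // position within segment
        const c = segments[idx].c;
        // Horner's method: three multiply-adds per sample instead
        // of a lookup into a huge table.
        return ((c[3] * t + c[2]) * t + c[1]) * t + c[0];
    }

The coefficient table stays small compared to a full-resolution
LUT, so it has a better chance of still being in cache when the
next interrupt callback comes around.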