Non-pipeline component programming
Marco Leise
Marco.Leise at gmx.de
Fri Feb 7 23:09:35 PST 2014
On Fri, 07 Feb 2014 09:58:52 +0000,
"Francesco Cattoglio" <francesco.cattoglio at gmail.com> wrote:
> On Friday, 7 February 2014 at 09:47:43 UTC, Zoadian wrote:
> > On Friday, 7 February 2014 at 08:09:10 UTC, Mike Parker wrote:
> > Nitro will store Translation like this, so it is even possible
> > to iterate over parts of components:
> >
> >
> > Entity[]
> > int[] for a
> > int[] for b.x
> > int[] for b.y
> > int[] for b.z
> > int[] for c.x
> > int[] for c.y
> > int[] for c.z
>
> This looks nice and everything, but won't it slow down access
> times quite a lot?
Yeah, it looks like a point is now spread over 3 cache lines.
As long as you access memory strictly forwards or backwards,
x86 can at least stream the data from RAM in the background.
But if you check two objects for collision, you'll need up to
6 spread-out cache lines, where a compact layout would need
one cache line per object (2 in total).
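To make that concrete, here is a rough sketch of the two layouts
(illustrative names only, not Nitro's actual API):

    struct Vec3 { int x, y, z; }

    // Compact (array-of-structs) layout: one point per contiguous
    // 12-byte block, so a pair of objects touches ~2 cache lines.
    Vec3[] posAoS;

    // Split (struct-of-arrays) layout as quoted above: one point's
    // coordinates live in 3 separate arrays, i.e. up to 3 distinct
    // cache lines per object.
    int[] posX, posY, posZ;

    bool collides(size_t i, size_t j, int r)
    {
        // With the split layout this touches up to 6 cache lines
        // (entries i and j in each of the 3 arrays).
        int dx = posX[i] - posX[j];
        int dy = posY[i] - posY[j];
        int dz = posZ[i] - posZ[j];
        return dx*dx + dy*dy + dz*dz < r*r;
    }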
The trouble is that RAM is not only much slower to access
than cache, it also has high latency: data takes a while to
arrive in the CPU, so it is important to know ahead of time
where the next read will come from. The combined effect can
mean a ~100 times slowdown.
That's why x86 got a sophisticated prefetcher that can track
several sequential read streams (e.g. up to 16), either
forwards or backwards through memory, and load those locations
into the CPU caches before they are needed.
If your access pattern looks random to the CPU, it might in
the worst case still try to prefetch, clogging the memory
bandwidth without achieving anything useful :p.
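You can see both behaviours in one toy program (a hypothetical
micro-benchmark sketch, sizes and names made up, numbers not
measured here): summing the same array once in index order and
once in shuffled order does identical work, but only the first
pass is prefetcher-friendly.

    import std.array : array;
    import std.random : randomShuffle;
    import std.range : iota;

    long sum(const(int)[] data, const(size_t)[] order)
    {
        long s = 0;
        foreach (i; order)      // same work, different access pattern
            s += data[i];
        return s;
    }

    void main()
    {
        auto data  = new int[1 << 24];          // ~64 MiB, far bigger than cache
        auto order = iota(data.length).array;   // sequential indices
        auto a = sum(data, order);              // streamed by the prefetcher
        randomShuffle(order);
        auto b = sum(data, order);              // ~random: mostly cache misses
    }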
--
Marco