Is 2X faster large memcpy interesting?

Sean Kelly sean at invisibleduck.org
Thu Mar 26 15:04:25 PDT 2009


== Quote from Andrei Alexandrescu (SeeWebsiteForEmail at erdani.org)'s article
>
> As a rule of thumb, it's generally good to use memcpy (and consequently
> fill-by-copy) if you can — for large data sets, memcpy doesn't make much
> difference, and for smaller data sets, it might be much faster. For
> cheap-to-copy objects, Duff's Device might perform faster than a simple
> for loop. Ultimately, all this is subject to your compiler's and
> machine's whims and quirks.
> There is a very deep, and sad, realization underlying all this. We are
> in 2001, the year of the Spatial Odyssey. We've done electronic
> computing for more than 50 years now, and we strive to design more and
> more complex systems, with unsatisfactory results. Software development
> is messy. Could it be because the fundamental tools and means we use are
> low-level, inefficient, and not standardized? Just step out of the box
> and look at us — after 50 years, we're still not terribly good at
> filling and copying memory.
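
For anyone unfamiliar with the Duff's Device mentioned above: it
interleaves a switch statement with a do/while loop, so the switch
jumps into the middle of an unrolled loop body to handle the
remainder.  A minimal C sketch (the function name duff_copy and the
8x unroll factor are my own choices for illustration, not from
Andrei's article):

#include <stddef.h>

/* Copy `count` ints using Duff's Device.  The switch dispatches on
   count % 8, jumping into the unrolled loop body so that the first
   pass copies the remainder; every later pass copies 8 elements. */
static void duff_copy(int *to, const int *from, size_t count)
{
    if (count == 0)
        return;                  /* avoid underflowing n below */
    size_t n = (count + 7) / 8;  /* number of do/while passes */
    switch (count % 8) {
    case 0: do { *to++ = *from++;
    case 7:      *to++ = *from++;
    case 6:      *to++ = *from++;
    case 5:      *to++ = *from++;
    case 4:      *to++ = *from++;
    case 3:      *to++ = *from++;
    case 2:      *to++ = *from++;
    case 1:      *to++ = *from++;
            } while (--n > 0);
    }
}

Whether this actually beats a plain for loop -- or a library memcpy
tuned with SIMD -- depends entirely on the compiler and machine,
which is exactly the point of the quoted paragraph.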

I don't know how sad this is.  For better or worse, programming is still a
craft, much like blacksmithing.  Code is largely written from scratch for
each project, techniques are jealously guarded (in our case via copyright
law), etc.  This may not be great from the perspective of progress, but
it certainly makes the work more interesting.  But then, I'm a tinker at
heart, so YMMV.


