Cleaned up C++
via Digitalmars-d
digitalmars-d at puremagic.com
Thu Apr 23 07:28:59 PDT 2015
On Wednesday, 22 April 2015 at 22:26:45 UTC, John Colvin wrote:
> On Wednesday, 22 April 2015 at 21:59:48 UTC, Ola Fosheim Grøstad wrote:
>> On Wednesday, 22 April 2015 at 20:36:12 UTC, John Colvin wrote:
>>> Is it even possible to contrive a case where
>>> 1) The default initialisation stores are technically dead and
>>> 2) Modern compilers can't tell they are dead and elide them
>>> and
>>> 3) Doing the initialisation has a significant performance
>>> impact?
>>>
>>> The boring example is "extra code causes instruction cache
>>> misses".
>>
>> Allocation of large arrays.
>
> That doesn't really answer the question without some more
> context.
I think it does.
Compilers cannot tell what is going on because they cannot figure out
nontrivial loop invariants without guidance. You need something
like a theorem prover (Coq?)...
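A made-up sketch of the kind of loop I mean (not from this thread): every
element of `a` ends up written, but seeing that the zero stores are dead
requires the invariant "after iteration i, slots 0 .. 2*(i+1) have been
written", which is more than ordinary dead-store elimination attempts:

    double[] interleave(const double[] re, const double[] im)
    {
        assert(re.length == im.length);
        auto a = new double[](re.length * 2); // zero-filled by the language
        foreach (i; 0 .. re.length)
        {
            a[2 * i]     = re[i]; // the two stores together cover the whole
            a[2 * i + 1] = im[i]; // array, so the zeroing above is dead
        }
        return a;
    }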
Compilers may also not be able to tell that an array won't be touched
by an external function, because of the memory barriers implied when
calling it, so they have to complete the initialization before the
call. Presumably a Rust compiler could do better...
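For the external-call case, something like this (fillFromDevice is a
hypothetical extern(C) routine, my own example): the zeroing of `buf` is
dead in practice because the callee overwrites every element, but the
compiler only sees an opaque call and has to keep the stores:

    extern(C) void fillFromDevice(int* p, size_t n); // hypothetical filler

    void readBlock()
    {
        auto buf = new int[](1 << 20);       // zero-filled on allocation
        fillFromDevice(buf.ptr, buf.length); // opaque: the compiler cannot
                                             // prove the zeros are never read
    }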
> Can you give a specific example where all 3 points are
> satisfied?
Not sure why you would need one; there are plenty of cases where
compilers will fail. E.g. queues between threads (like real-time
threads) where you allocate in one thread and fill in the data in
another thread.
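Roughly this shape (a sketch using std.concurrency, not real-time code):
the block is zeroed when the producer allocates it, the consumer
overwrites every element, and the handoff sits between the two, so
neither side can remove the zeroing:

    import std.concurrency;

    void consumer()
    {
        receive((shared(double)[] block) {
            auto b = cast(double[]) block; // this thread owns it now
            foreach (i, ref x; b)
                x = i;                     // every zero store was wasted
        });
    }

    void producer()
    {
        auto tid = spawn(&consumer);
        auto block = new double[](1 << 20);      // zero-filled on allocation
        send(tid, cast(shared(double)[]) block); // handoff acts as a barrier
    }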
Any preallocation of large data structures, or of frequently
reinitialized data structures, may perform better without the default
initialization.
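E.g. a scratch-buffer helper (my sketch; uninitializedArray lives in
std.array): the `new` version re-zeroes the whole block on every call
even though the caller immediately overwrites it:

    import std.array : uninitializedArray;

    double[] grabScratch(size_t n)
    {
        // return new double[](n);               // zeroed on every call
        return uninitializedArray!(double[])(n); // skips the dead stores,
                                                 // and the safety they buy
    }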
For a systems-level language I think it would be better to verify
that you don't use uninitialized memory, using a theorem prover or a
sanitizer, or to use guards (like NaN). Automatic initialization
is also a source of bugs.
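The NaN guard is what D already does for floating point: a
never-written element poisons the result instead of silently looking
like valid data. A trivial sketch:

    void sketch()
    {
        double[4] acc;              // each element starts as double.nan
        acc[0] = 1.0;
        auto sum = acc[0] + acc[1]; // acc[1] was never written
        assert(sum != sum);         // NaN compares unequal to itself:
                                    // the bug is visible, not silent
    }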