Cleaned up C++

Iain Buclaw via Digitalmars-d digitalmars-d at puremagic.com
Thu Apr 23 00:32:02 PDT 2015


> On Wednesday, 22 April 2015 at 20:29:49 UTC, Walter Bright wrote:
>>
>> On 4/22/2015 12:51 PM, ponce wrote:
>>>
>>> I didn't appreciate how important default initialization was before
>>> having to fix a non-deterministic, release-only, time-dependent bug
>>> in a video encoder some months ago. All because of two uninitialized
>>> member variables (C++ doesn't require member initialization in
>>> constructors). If one of them was _exactly equal to 1_ by virtue of
>>> randomness, it would perform anywhere from 0 to 2 billion motion
>>> estimation steps, which is very slow but not a total halt. A watchdog
>>> mechanism would detect this and reboot, hence the bug was labelled
>>> "a deadlock". It would disappear in debug mode, since variables would
>>> be initialized then.
>>
>>
>> The default initialization comes from bitter personal experience, much
>> like yours!
>>
>>
>>> That gives quite another meaning to "zero cost abstractions": in the
>>> three weeks that investigation took, I could instead have sped up the
>>> program by ~5%, much more than the supposed slowdown from variable
>>> initialization.
>>
>>
>> Most of the implicit initializations become "dead stores" and are removed
>> anyway by the optimizer.

Right. For simple types (scalars, pointers) with a trivial
initialisation, the compiler can track the value up to its first read
and use DCE to remove any initialisations before that point.

(Contrived) Examples:

import core.stdc.stdio;

void dce1()
{
    int dead_int;  // default-initialised but never read: a dead store
    printf("Dead Int\n");
}

void dce2()
{
    int inited_later;  // default init is dead, overwritten before any read
    int dead_int;      // never read at all
    inited_later = 42;
    printf("Initialized Int = %d\n", inited_later);
}

---

gdc -O -fdump-tree-optimized=stderr dce.d

dce1 ()
{
  __builtin_puts (&"Dead Int"[0]);
  return;
}

dce2 ()
{
  __builtin_printf ("Initialized Int = %d\n", 42);
  return;
}

---

As pointed out, there are limitations for types that have a complex
initialiser.
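
A sketch of what I mean (hypothetical, not from the thread): when the
default value isn't all-zero, construction blits it from the type's
init symbol rather than storing scalars, and the dead-store analysis
may not see through the block copy:

import core.stdc.stdio;

struct NonTrivial
{
    // Non-zero .init: construction copies this from a static data symbol.
    int[4] table = [1, 2, 3, 4];
}

void dce3()
{
    NonTrivial dead_struct;  // never read, but the init blit may survive DCE
    printf("Dead Struct\n");
}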

On 22 April 2015 at 22:36, John Colvin via Digitalmars-d
<digitalmars-d at puremagic.com> wrote:
>
> Is it even possible to contrive a case where
> 1) The default initialisation stores are technically dead and

import core.stdc.stdio;

int arrayInit()
{
    return 0xdeadbeef;
}

void main()
{
    // Every element is written, but the array is never read: all dead stores.
    int[ushort.max] dead_array = arrayInit();
    printf("Dead Array\n");
}

> 2) Modern compilers can't tell they are dead and elide them and

Actually - gdc can't DCE the array initialisation, even at -O3 (the
initialisation loop survives in the dump below), but I'd like to do
better than that in future...

gdc -O3 -fdump-tree-optimized=stderr dce.d
D main ()
{
  unsigned int ivtmp.20;
  int dead_array[65535];
  unsigned int _1;
  void * _13;

  <bb 2>:
  # DEBUG D.3569 => &dead_array
  # DEBUG D.3570 => 65535
  ivtmp.20_12 = (unsigned int) &dead_array;
  _1 = (unsigned int) &MEM[(void *)&dead_array + 262128B];

  <bb 3>:
  # ivtmp.20_16 = PHI <ivtmp.20_3(3), ivtmp.20_12(2)>
  _13 = (void *) ivtmp.20_16;
  MEM[base: _13, offset: 0B] = { -559038737, -559038737, -559038737, -559038737 };
  ivtmp.20_3 = ivtmp.20_16 + 16;
  if (_1 == ivtmp.20_3)
    goto <bb 4>;
  else
    goto <bb 3>;

  <bb 4>:
  __builtin_puts (&"Dead Array"[0]);
  dead_array ={v} {CLOBBER};
  return 0;

}

> 3) Doing the initialisation has a significant performance impact?

Short answer, no.

time ./a.out
Dead Array

real    0m0.001s
user    0m0.000s
sys    0m0.001s


Longer answer: static array overflow analysis means you can't declare
an array much larger than 1M anyway, and the compiler quite happily
vectorises the initialisation for you, so there are fewer loop
iterations to go round.  However, if there were a 'lazy + impure'
initialiser, that would be a different story.  :-)
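
To sketch the hypothetical (this is not how D works today; an array
initialiser is evaluated once and the value broadcast): if every
element came from a fresh side-effecting call, no store could be
proven dead, so the whole loop would have to run:

import core.stdc.stdio;
import core.stdc.stdlib : rand;

void main()
{
    int[ushort.max] live_array = void;  // opt out of default init
    // A 'lazy + impure' initialiser would amount to this: each element
    // is produced by a fresh side-effecting call, so no store is dead
    // and the initialisation can't be elided.
    foreach (ref e; live_array)
        e = rand();
    printf("Live Array = %d\n", live_array[0]);
}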

Iain.

