Need a Faster Compressor

Timon Gehr via Digitalmars-d digitalmars-d at puremagic.com
Tue May 24 09:22:56 PDT 2016


On 24.05.2016 01:02, Walter Bright wrote:
> On 5/23/2016 2:17 PM, Timon Gehr wrote:
>> Then don't do that. I.e. re-mangle recursively from scratch for each
>> mangled name and allow sharing parts between unrelated components
>> within that mangled name.
>
> How is that essentially different from running a compression pass?

Speed. The title of this thread is "Need a faster compressor".
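
To make this concrete, here is a rough sketch (Python, all names
hypothetical; this is not dmd's actual mangler) of re-mangling each
symbol from scratch while back-referencing components that have already
been emitted earlier in the same mangled name:

# Sketch only: mangle a type graph, emitting a back-reference ("Q<n>")
# whenever a component was already emitted earlier in this name.
def mangle(t):
    seen, out = {}, []

    def walk(node):
        if id(node) in seen:                    # already emitted in this name
            out.append("Q%d" % seen[id(node)])  # back-reference, no re-expansion
            return
        seen[id(node)] = len(seen)
        name, args = node                       # a type is (name, [argument types])
        out.append("%d%s" % (len(name), name))
        for a in args:
            walk(a)

    walk(t)
    return "".join(out)

leaf = ("int", [])
t = leaf
for _ in range(3):
    t = ("Pair", [t, t])       # Pair!(T, T): both arguments share one node
print(mangle(t))               # "4Pair4Pair4Pair3intQ3Q2Q1"

The output length grows roughly linearly with the instantiation depth,
because repeated components are emitted once and then referenced. The
sketch keys on node identity for brevity; keying on structural equality
would also let unrelated but identical components share.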

> The only real problem with the compressor was the compression speed,
> and the LZ4 test shows this is solvable.
> ...

No. Increasing the recursion depth by a small number of levels is enough
to completely kill the speedup gained by using a faster compression
algorithm.
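
Back-of-the-envelope (toy numbers: assume the uncompressed name roughly
doubles per extra level of recursion and compression time is
proportional to input length):

# Illustrative arithmetic only, not a measurement.
speedup = 10.0                   # hypothetical "new compressor is 10x faster"
for extra_levels in range(6):
    growth = 2 ** extra_levels   # factor by which the input grew
    relative = growth / speedup  # new time relative to the old time at the old depth
    print("%d extra levels -> %.1fx" % (extra_levels, relative))
# 0 extra levels -> 0.1x  (the full 10x win)
# 4 extra levels -> 1.6x  (already slower than before)
# 5 extra levels -> 3.2x

Four or five extra levels hand the entire factor back, because the
compressor still has to read the whole exponentially growing input.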

>
>> (That's similar to how the compiler stores the structure in memory,
>> which also avoids the exponential blowup.)
>
> No, it doesn't.
> ...

Yes, it does. The compiler does not use exponential space to store the AST.
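
A toy model of the difference (not dmd's actual data structures): the
same recursive instantiation needs one extra node per level when
subtrees are shared, while the fully written-out name repeats every
subtree twice per level.

# Shared (DAG) representation vs. fully expanded textual form.
def expanded_len(depth):
    # "Pair!(X,X)" written out in full repeats X twice per level.
    return 3 if depth == 0 else 2 * expanded_len(depth - 1) + len("Pair!(,)")

for depth in (5, 10, 20, 30):
    shared_nodes = depth + 1     # one node per level when subtrees are shared
    print(depth, shared_nodes, expanded_len(depth))
# The shared node count grows linearly; the expanded string grows as ~2^depth.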

>
>> You can't have exponential growth before compression.
>
> The compiler is designed so that the internal names are pretty much
> write-only, uniquely hashed by their address. It's not spending time
> running strlen()'s on them.
> ...

It's _exponential_ growth. We don't even want to spend the time and 
memory required to generate the strings.

>
>> Any scheme that does not prevent running time exponential in template
>> instantiation depth is also inadequate in the long run.
>
> Most algorithms seem to have pathological worst cases, but as long as
> they are rare, it doesn't impede their utility.
>

The reason we are having this discussion is that the worst case isn't
rare enough for that argument to hold. Why compress in the first place
if mangled names didn't grow large in practice?

