why is string not implicitly convertible to const(char*)?

Timon Gehr timon.gehr at gmx.ch
Thu Jul 5 19:26:42 PDT 2012


On 07/06/2012 03:40 AM, Wouter Verhelst wrote:
> Timon Gehr<timon.gehr at gmx.ch>  writes:
>
>> On 07/06/2012 02:57 AM, Wouter Verhelst wrote:
>>> To be fair, there are a _few_ areas in which zero-terminated strings may
>>> possibly outperform bounded strings (appending data in the case
>>> where you know the memory block is large enough, for instance).
>>
>> It is impossible to know that the memory block is large enough unless
>> the length of the string is known. But it isn't.
>
> Sure it is, but not by looking at the string itself.
>
> Say you have a string that contains some data you need, and some other
> data you don't. I.e., you want to throw out parts of the string.
>
> You could allocate a memory block that's as large as the original string
> (so you're sure you've got enough space), and then start memcpy'ing
> stuff into the new memory block from the old string.
>

This incurs the cost of determining the original string's length, which
is higher than the cost of computing the new string's length under the
data & length representation.
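
For concreteness, here is a sketch of the buffer-sizing step under both
representations (the function names are made up):

import core.stdc.stdlib : malloc;
import core.stdc.string : strlen;

char* filterC(const(char)* src)
{
    size_t n = strlen(src); // O(n) scan just to size the buffer
    auto dst = cast(char*)malloc(n + 1);
    // ... memcpy the wanted pieces into dst, then zero-terminate ...
    return dst;
}

char[] filterD(const(char)[] src)
{
    auto dst = new char[src.length]; // length is already known, O(1)
    // ... copy the wanted pieces into dst, then slice off the unused tail ...
    return dst;
}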

> This way you're sure you won't overrun your zero-terminated string, and
> you'll be a slight bit faster than you would be with a bounded string.
>

Are you talking about differences of a few operations that are
completely hidden on a modern out-of-order CPU? I don't think the
zero-terminated string method will even perform fewer operations.
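
To make the operation count concrete, the two inner copy loops look
like this (sketch):

void copyZ(char* dst, const(char)* src)
{
    while ((*dst++ = *src++) != '\0') {} // stop on the terminator
}

void copyL(char[] dst, const(char)[] src)
{
    foreach (i; 0 .. src.length) // stop at the known length
        dst[i] = src[i];
}

Either way it is one load, one store and one conditional branch per
character; the terminator check merely replaces the index check.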

> I'll readily admit I haven't done this all that often, though :-)
>
>>> But they're few and far between, and it would indeed be silly to switch to
>>> zero-terminated strings.
>>
>> There is no string manipulation that is significantly faster with
>> zero-terminated strings.
>
> Correct -- but only because you said "significantly".
>

I meant to say, 'measurably'.
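
A quick way to check (sketch; the test setup is made up):

import core.stdc.string : strlen;
import std.datetime.stopwatch : benchmark;
import std.stdio : writeln;

void main()
{
    auto s = new char[1_000_000];
    s[] = 'a';
    auto z = s ~ '\0'; // zero-terminated copy for strlen

    auto results = benchmark!(
        () => strlen(z.ptr), // O(n): scan for the terminator
        () => s.length       // O(1): read the stored length
    )(100);
    writeln(results);
}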

