size_t + ptrdiff_t

Manu turkeyman at gmail.com
Sun Feb 19 13:04:04 PST 2012


On 19 February 2012 21:21, Timon Gehr <timon.gehr at gmx.ch> wrote:

>
>> It is just as unportable as size_t itself.
>>
>
> Currently, size_t is typeof(array.length). This is portable, and is
> basically the only place size_t commonly occurs in D code.


What about pointer arithmetic? Interaction with C/C++ code? Writing OS-level
code? Hitting the hardware?
And how do you define 'portable' in this context? What makes size_t more
portable than a native int? A data structure containing a size_t is not
'portable' in the direct sense...
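
For a concrete sense of where size_t and ptrdiff_t show up beyond
array.length, here's a minimal sketch in D (function names hypothetical):
C interop and pointer arithmetic both track the pointer width:

    import core.stdc.stdlib : malloc;

    // C interop: malloc takes a C size_t, so the D declaration must match
    // the target's pointer width for the ABI to line up.
    ubyte[] tempBuffer(size_t n)
    {
        auto p = cast(ubyte*) malloc(n);
        return p[0 .. n];   // pointer slicing: the bounds are size_t
    }

    // Pointer arithmetic: subtracting two pointers yields a ptrdiff_t.
    ptrdiff_t offsetBetween(const(ubyte)* base, const(ubyte)* p)
    {
        return p - base;
    }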


>> The reason you need it is to improve portability; otherwise people need to
>> create an arbitrary version mess, which will inevitably be incorrect.
>> Anything from calling-convention code, structure layout/packing, copying
>> memory, basically optimising for 64 bits at all... I can imagine static
>> branches on the width of that type to select different paths.
>>
>
> That is not a very valid use case. In every static branch you'll know
> exactly what the width is.


That's the point: each branch can implement an efficient path for its own
case.
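
A hedged sketch of what such a branch could look like (function name
hypothetical; tail bytes ignored for brevity), with each path moving data
in the target's natural word size:

    void copyWords(void* dst, const(void)* src, size_t bytes)
    {
        static if (size_t.sizeof == 8)
        {
            // 64-bit path: move 8 bytes per iteration
            foreach (i; 0 .. bytes / 8)
                (cast(ulong*) dst)[i] = (cast(const(ulong)*) src)[i];
        }
        else
        {
            // 32-bit path: move 4 bytes per iteration
            foreach (i; 0 .. bytes / 4)
                (cast(uint*) dst)[i] = (cast(const(uint)*) src)[i];
        }
        // (trailing bytes smaller than a word are not handled in this sketch)
    }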


>> Even just basic efficiency: using 32-bit ints on many 64-bit machines
>> requires extra sign-extend opcodes after every single load... a total
>> waste of CPU time.
>>
>
> Using 64-bit ints everywhere to represent 32-bit ints won't make your
> program go faster. Cache lines fill up faster when the data contains large
> amounts of unnecessary padding. Furthermore, the compiler should be able to
> eliminate unneeded sign-extend operations. Anyway, extra sign-extend
> opcodes are not worth caring about if you get up to twice the number of
> conflict cache misses.


I'm talking about the stack, passing args, etc. Data structures should
obviously be as tight as possible.
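
That is, locals, loop counters, and arguments living in registers. A hedged
sketch of the case behind the sign-extend claim (whether the extension
actually appears depends on the compiler and target):

    long sum(const(int)[] data)
    {
        long total = 0;
        // With an `int i` counter, a 64-bit target may need to widen i
        // before every data[i] address computation; with size_t the index
        // is already register-width and no extension is required.
        for (size_t i = 0; i < data.length; ++i)
            total += data[i];
        return total;
    }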


>> Currently, if you're running a 64-bit system with 32-bit pointers, there
>> is absolutely nothing at compile time to tell you you're running a 64-bit
>> system,
>>
>
> Isn't there some version identifier for this? If there is not, such an
> identifier could be introduced trivially and this must be done.


Why introduce a version identifier when a type would be so much more useful,
and also neater? (A type is usable directly, rather than through ugly version
blocks.)
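
To illustrate the contrast: D does define the D_LP64 version identifier when
pointers are 64 bits wide, but that reflects pointer width, not register
width, which is exactly the gap being complained about. A sketch using it
(the nativeInt name is hypothetical):

    version (D_LP64)
        alias nativeInt = long;   // 64-bit pointer target
    else
        alias nativeInt = int;    // 32-bit pointer target

    nativeInt counter;   // a built-in type would give you this directly,
                         // with no version scaffolding anywhere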


>> or to declare a variable of the machine's native
>> type, and you're crazy if you say that is not important information.
>>
>
> What do you do with the machine's native type other than checking its size
> in a static if declaration? If you don't, then the code is unportable, and
> using the proper fixed size types would make it portable. If you do, then
> you could have checked a built-in version instead. What you effectively
> want for optimization is the most efficient type that is at least a certain
> number of bits wide. And even then, it is a moot point, because storing
> such variables in memory will add unnecessary padding to your data
> structures.


If that's all you do with it, then it's already proven its worth. There's a
major added bonus that you could USE it...
I don't like this argument that it's not portable; it's exactly as portable
as size_t already is, and there's no call to remove that.
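
For what it's worth, the quoted "most efficient type that is at least a
certain number of bits wide" is expressible as a template even now, in the
spirit of C99's int_fastN_t. A rough sketch (template name hypothetical,
selection policy deliberately simplistic):

    template FastInt(int bits)
    {
        static if (bits <= 32 && size_t.sizeof == 4)
            alias FastInt = int;    // 32-bit target: plain int is natural
        else
            alias FastInt = long;   // otherwise fall back to 64 bits
    }

    FastInt!32 idx;   // int on a 32-bit target, long on a 64-bit one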


>> What's the point of a 64-bit machine if you treat it exactly like a 32-bit
>> machine in every aspect?
>>
>
> There is none.
>

Then why do so many hardware vendors feel the need to create 64-bit chips
that are used in 32-bit-memspace platforms?
It's useful to have double-width registers. Some algorithms are easier with
wider registers, you can move more data faster, and they extend your range
for intermediate values during calculations. These are still real
advantages, even on a 32-bit memspace platform.
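
One concrete instance of the intermediate-range point: scaling a 32-bit
value by a 32-bit ratio overflows 32 bits mid-calculation, but a
double-width register holds the intermediate in a single multiply. A minimal
sketch (function name hypothetical; no rounding or division-by-zero
handling):

    uint mulDiv(uint value, uint numerator, uint denominator)
    {
        ulong wide = cast(ulong) value * numerator;   // up to 64 significant bits
        return cast(uint)(wide / denominator);        // back down to 32 bits
    }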