Signed word lengths and indexes
Walter Bright
newshound1 at digitalmars.com
Mon Jun 14 15:48:55 PDT 2010
bearophile wrote:
> I have found a Reddit discussion from a few days ago:
> http://www.reddit.com/r/programming/comments/cdwz5/the_perils_of_unsigned_iteration_in_cc/
>
>
> It contains the following, which I quote (I have no idea if it's true), plus
> follow-ups:
>
>> At Google using uints of all kinds for anything other than bitmasks or
>> other inherently bit-y, non computable things is strongly discouraged. This
>> includes things like array sizes, and the warnings for conversion of size_t
>> to int are disabled. I think it's a good call.<
>
> I have expressed similar ideas here:
> http://d.puremagic.com/issues/show_bug.cgi?id=3843
>
> Unless someone explains to me why I am wrong, I will keep thinking that using
> unsigned words to represent lengths and indexes, as D does, is wrong and
> unsafe, and that using signed words (I think C# uses ints for that purpose)
> would be a better design choice for D.
D provides powerful abstractions for iteration; it is becoming less and less
desirable to hand-build loops with for-statements.
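For example, a minimal sketch (with a hypothetical array a) of the classic
pitfall and the range-based alternative:

import std.stdio;

void main()
{
    int[] a = [1, 2, 3];

    // Hand-built reverse loop with an unsigned index: when i reaches 0,
    // --i wraps around to size_t.max, so the test i >= 0 never fails.
    //for (size_t i = a.length - 1; i >= 0; --i) { ... } // infinite loop

    // foreach_reverse sidesteps the index arithmetic entirely.
    foreach_reverse (x; a)
        writeln(x);
}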
As for "unsafe", I think you need to clarify this, as D is not memory unsafe
despite the existence of integer over/under flows.
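To make that concrete, a minimal sketch (it assumes the default runtime bounds
checks are enabled):

void main()
{
    int[] a = [1, 2, 3];
    size_t i = 0;
    --i;           // wraps around to size_t.max: an integer underflow
    auto x = a[i]; // terminates with a RangeError; the bounds check,
                   // not the index type, is what preserves memory safety
}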
> In a language as numerically unsafe as D (silly C-derived conversion rules,
Actually, I think they make a lot of sense, and D's improvement on them, which
uses range propagation to disallow only those conversions that can actually
lose bits, is far more sensible than C#'s overzealous restrictions.
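A minimal sketch of how that range propagation behaves (the masks here are
illustrative assumptions, not from the thread):

void main()
{
    int i = 1000;

    //byte b0 = i;       // rejected: the full int range does not fit in a byte
    byte  b = i & 0x7F;  // accepted: the result's range [0, 127] provably fits
    ubyte u = i & 0xFF;  // accepted: the masked range [0, 255] fits a ubyte
}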
> fixed-size numbers used everywhere by default, no runtime overflow checks)
> the usage of unsigned numbers can be justified only in bit vectors, bitwise
> operations, and a few other similar situations.
>
> If D wants to be "a systems programming language. Its focus is on combining
> the power and high performance of C and C++ with the programmer productivity
> of modern languages like Ruby and Python." then it must recognize that
> numerical safety is one of the non-secondary things that make languages like
> Ruby and Python more productive.
I have a hard time believing that Python and Ruby are more productive primarily
because they do not have an unsigned type.
Python did not add overflow protection until 3.0, so it's very hard to say that
its absence crippled productivity in earlier versions.
http://www.python.org/dev/peps/pep-0237/
Ruby & Python 3.0 dynamically switch to larger integer types when overflow
happens. This is completely impractical in a systems language, and is one reason
why Ruby & Python are execrably slow compared to C-style languages.