type declaration
Derek Parnell
derek at psych.ward
Mon Jan 1 13:53:33 PST 2007
On Mon, 1 Jan 2007 21:27:16 +0000 (UTC), %u wrote:
> Why use type declarations like this (below) instead of human-readable
> language to declare what it really is...
I believe there have been studies indicating that words which mix
alphabetic and numeric characters are harder to read. It appears that a
kind of 'context switch' silently goes on when people see digits in their
text.
> The use of "double", "short" and "long" is CUMBERSOME!
> Especially when switching between Processors Architectures.
You do realize that D defines these data types with fixed sizes? An
'int' is always going to be 32 bits regardless of the CPU architecture.
> This is trying to be a new language... make it NEW!
> Or at least make these key words SYNONYMOUS!
>
> D        Meaning            New Key Word
> byte     signed 8 bits      int8
> short    signed 16 bits     int16
> int      signed 32 bits     int32
> long     signed 64 bits     int64
> cent     signed 128 bits    int128
> float    32 bit floating    float32
> double   64 bit floating    float64
Feel free to add these to your own code first, to try it out.
alias byte   int8;
alias short  int16;
alias int    int32;
alias long   int64;
alias cent   int128;
alias float  float32;
alias double float64;
And of course, to be consistent, you ought to come up with new terms for
the other 'misnamed' datatypes.
alias real    ???;
alias ifloat  ???;
alias idouble ???;
alias ireal   ???;
alias cfloat  ???;
alias cdouble ???;
alias creal   ???;
alias char    ???;
alias wchar   ???;
alias dchar   ???;
--
Derek Parnell