Value Preservation and Polysemy -> context-dependent integer literals

Fawzi Mohamed fmohamed at mac.com
Thu Dec 4 06:22:50 PST 2008


On 2008-12-01 22:30:54 +0100, Walter Bright <newshound1 at digitalmars.com> said:

> Fawzi Mohamed wrote:
>> On 2008-12-01 21:16:58 +0100, Walter Bright <newshound1 at digitalmars.com> said:
>> 
>>> Andrei Alexandrescu wrote:
>>>> I'm very excited about polysemy. It's entirely original to D,
>>> 
>>> I accused Andrei of making up the word 'polysemy', but it turns out it 
>>> is a real word! <g>
>> 
>> Is this the beginning of discriminating overloads also based on the 
>> return values?
> 
> No. I think return type overloading looks good in trivial cases, but as 
> things get more complex it gets inscrutable.

I agree that return type overloading can go very bad, but a little bit 
can be very nice.

Polysemy makes more expressions typecheck, but I am not sure that I want that.
For example, with size_t & co I would almost always want stronger 
typechecking, as if size_t were a typedef, but with the usual rules 
with respect to ptrdiff_t, size_t,... (i.e. no implicit conversion between them).
This is because mixing size_t with int or long is almost always 
suspicious, but you might see the problem only on the other platform 
(32 vs. 64 bit), and not on your own.
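A minimal D sketch of the kind of 32/64-bit surprise I mean (a made-up 
example, not real code):

void main()
{
    int[] arr = new int[10];
    size_t n = arr.length;  // uint on 32 bit, ulong on 64 bit
    int i = n;              // accepted on 32 bit (uint -> int is implicit),
                            // rejected only when compiled for 64 bit
}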

Something that I would find nice, on the other hand, is a kind of 
integer literal that automatically converts to the type that makes the 
most sense.
I saw this in Aldor, which discriminated on return type: there an 
integer like 23 would be seen as fromInteger(23), and the optimal 
overload of fromInteger would be selected depending on the context.
Sometimes you would need a cast, but most of the time things just worked. 
This allowed one to use 1 as the unit matrix, for example.
I don't need that much, but +1/-1,... with something that might be 
long, short, real,... needs more care than it should, and normally 
it is obvious which type one expects.
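For instance, in current D even the +1 case already forces a cast for 
the small types (a minimal example):

void main()
{
    short s = 0;
    // s = s + 1;           // rejected: s + 1 is promoted to int,
                            // which does not convert back to short implicitly
    s = cast(short)(s + 1); // the cast one has to write today
}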

Now such a change should be checked in detail, and one would probably 
also want a simple way to tell the compiler that an integer really is 
a 32-bit int. To be more compatible with C, one could make a different 
choice: give these "adapting" integer literals a special suffix, like 
"a", so that the normal integer literals keep exactly the same 
semantics as in C, and 0a, 1a, 12a would be the new integer type.
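To make that concrete, usage of the hypothetical "a" suffix might look 
like this (invented syntax, shown in comments because it does not 
compile today):

void main()
{
    short s = 0;

    // with the proposed suffix (hypothetical):
    //     s = s + 1a;   // 1a would adapt to short from the context
    //     long n = 1a;  // 1a would adapt to long

    int i = 12;          // plain literals keep their exact C semantics
}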

To choose the type of these "adapting" integers one would proceed as 
follows (a rough sketch of these rules in code is given after the list):
- if the literal a appears in an operation op(a,x), take the type of x 
as the type of a (I would restrict op to + - * / % to keep it simple); 
if x is also adaptive, recurse;
- if the whole expression has been processed and it is an assignment, 
look at the type of the variable;
- if the variable has no type (auto) -> error [one could default to 
long or int, but that can be dangerous];
- if it is part of a function call f(a,...), try the types in the 
following order: long, int [one could try more, but again that can be 
expensive; one could also fail as before, but I think this kind of use 
is widespread enough that it is worth trying to guess, though I am not 
totally convinced about this].

Basically this would be something like polysemy, but *only* for one 
kind of integer literal, and without introducing new types that can be 
used externally.

One could also try to make the normal 0, 1, 2,... behave like that, 
and have a special suffix for the literals that are only 32 bits, but 
then, to minimize surprises, one cannot easily decide "not to guess", 
and the default decision should be int and not long, something that I 
am not sure is the best choice.

Fawzi

Implementation details: these adaptive numbers need to be represented 
at least temporarily within the compiler. Using longs for them can be 
problematic if one also wants to allow conversion to unsigned longs of 
maximum size. The compiler should either use arbitrary precision 
numbers to represent them until the type is decided, or find the exact 
type before the conversion.
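A quick illustration of why a plain long does not suffice, using 
std.bigint as a stand-in for the compiler's internal representation 
(a sketch only; std.bigint is a later Phobos addition):

import std.bigint;

void main()
{
    // a pending adaptive literal, kept exact until its type is decided
    BigInt lit = BigInt("18446744073709551615"); // ulong.max

    assert(lit >  BigInt(long.max));  // does not fit in a long...
    assert(lit == BigInt(ulong.max)); // ...but fits in a ulong exactly
}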



