OT (partially): about promotion of integers
Walter Bright
newshound2 at digitalmars.com
Tue Dec 11 13:57:35 PST 2012
On 12/11/2012 10:45 AM, bearophile wrote:
> Walter Bright:
>
>> Why stop at 64 bits? Why not have only one integral type, of whatever
>> precision is necessary to hold the value? This is quite doable, and has
>> been done.
>
> I think no one has asked for *bignums by default* in this thread.
I know they didn't ask. But they did ask for 64 bits, and the exact same
argument will apply to bignums, as I pointed out.
>> But at a terrible performance cost.
> Nope, this is a significant fallacy of yours. Common Lisp (and OCaml) uses
> tagged integers by default, and they are very far from being "terrible".
> Tagged integers cause no heap allocations if the values aren't large. Also,
> in various situations the Common Lisp compiler is able to infer that an
> integer can't be too large, replacing it with a fixnum. And it's easy to add
> annotations in critical spots to ask the Common Lisp compiler to use a
> fixnum, to squeeze out all the performance.
I don't notice anyone reaching for Lisp or OCaml for high-performance applications.
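
For context, the mechanism bearophile describes can be sketched in a few
lines of D. This is an illustration of fixnum tagging in general, not
Lisp's actual runtime; it assumes a 64-bit target, and the name TaggedInt
is made up for the example:

    // A fixnum lives inline in a machine word; the low bit distinguishes
    // it from a heap pointer to a bignum, so small values never touch
    // the heap.
    struct TaggedInt
    {
        size_t bits;

        static TaggedInt small(long v)
        {
            TaggedInt t;
            t.bits = (cast(size_t) v << 1) | 1; // shift value, set tag bit
            return t;
        }

        bool isSmall() const { return (bits & 1) == 1; }

        long value() const
        {
            assert(isSmall());
            return cast(long) bits >> 1; // arithmetic shift restores the sign
        }

        // Fixnum addition needs no untagging:
        // (2a+1) + (2b+1) - 1 == 2(a+b) + 1.
        // A real runtime would branch to a bignum here on overflow.
        TaggedInt opBinary(string op : "+")(TaggedInt rhs) const
        {
            assert(isSmall() && rhs.isSmall());
            TaggedInt t;
            t.bits = bits + rhs.bits - 1;
            return t;
        }
    }

    void main()
    {
        auto a = TaggedInt.small(21);
        auto b = TaggedInt.small(21);
        assert((a + b).value == 42);
    }
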
> The result is code that's quick in most situations, but more often correct.
> In D you drive with your eyes shut; sometimes it's hard for me to know
> whether an integral overflow has occurred somewhere in a long computation.
>
>
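
To make that complaint concrete: D defines signed integer arithmetic as
two's-complement wraparound (unlike C, where signed overflow is undefined
behavior), so an overflowing computation silently yields a wrong value:

    import std.stdio;

    void main()
    {
        int x = int.max;   // 2147483647
        x += 1;            // wraps to int.min, with no error or warning
        writeln(x);        // prints -2147483648
    }
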
>> And, yes, in D you can create your own "BigInt" datatype which exhibits
>> this behavior.
>
> Currently D's bigints don't have a small-integer optimization.
That's irrelevant to this discussion; it is not a problem with the language.
Anyone who wants to can improve the library implementation, or write their own.
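
For reference, such a library type already exists in Phobos as
std.bigint.BigInt. A minimal usage sketch:

    import std.bigint;
    import std.stdio;

    void main()
    {
        // BigInt grows as needed, so it never wraps -- at the cost of
        // heap storage even for values that would fit in a word, which
        // is the missing small-integer optimization mentioned above.
        BigInt n = BigInt("9223372036854775807"); // long.max
        n += 1;                                   // no wraparound
        writeln(n);                               // 9223372036854775808
    }
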
> I think the compiler doesn't perform the optimizations on BigInts that it
> does on ints, because it doesn't know about BigInt's properties.
I think the general lack of interest in bigints indicates that the built-in
types work well enough for most work.