std.complex

"Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang at gmail.com>
Thu Jan 2 16:59:43 PST 2014


On Thursday, 2 January 2014 at 23:43:47 UTC, Lars T. Kyllingstad 
wrote:
> Not at all. ;)

I am never certain about anything related to FP. It is a very 
pragmatic hack, which is kind of viral (but fun to talk about ;).

> I just think we should keep in mind why FP semantics are 
> defined the way they are.

Yes, unfortunately they are only kind-of-defined. Depending on the 
vendor, 0.0 could represent anything from the minimum denormal 
number down to zero (Intel) or from the maximum denormal number 
down to zero (some other vendors). Then we have all the rounding 
modes. And it gets worse with single precision than with double. 
I think the semantics of IEEE favours double over single, since 
detecting overflow is less important for double (it occurs rarely 
for doubles in practice, so conflating overflow with 1.0/0.0 
matters less for them than for single precision).
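
A quick sketch of that difference in D (plain float/double 
arithmetic, nothing std.complex-specific):

import std.stdio;

void main()
{
    float  f = float.max;      // ~3.4e38, the largest finite float
    double d = f;              // the same value is tiny for a double
    float  fzero = 0.0f;       // runtime zero, so nothing is folded away

    writeln(f * 2.0f);         // inf: single precision overflows right away
    writeln(d * 2.0);          // ~6.8e38: no overflow in double precision
    writeln(1.0f / fzero);     // inf: indistinguishable from the overflow above
}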

> Take 0.0*inf, for example.  As has been mentioned, 0.0 may 
> represent a positive real number arbitrarily close to zero, and 
> inf may represent an arbitrarily large real number. The product 
> of these is ill-defined, and hence represented by a NaN.

Yes, and it is consistent with having 0.0/0.0 evaluate to NaN.
( 0.0*(1.0/0.0) ought to give NaN as well )
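
Those three cases are easy to check in plain D (just IEEE double 
arithmetic, no std.complex involved):

import std.stdio;

void main()
{
    double zero = 0.0;                // runtime value, so nothing is folded away

    writeln(zero * double.infinity);  // nan
    writeln(zero / zero);             // nan
    writeln(zero * (1.0 / zero));     // nan: 1.0/0.0 is inf, and 0.0*inf is nan
}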

> 0.0+1.0i, on the other hand, represents a number which is 
> arbitrarily close to i. Multiplying it with a very large real 
> number gives you a number which has a very large imaginary 
> part, but which is arbitrarily close to the imaginary axis, 
> i.e. 0.0 + inf i. I think this is very consistent with FP 
> semantics, and may be worth making a special case in 
> std.complex.Complex.

I am too tired to figure out if you are staying within the 
max-min interval of potential values that could be represented 
(if you had perfect precision). I think that is the acid test. In 
order to reduce the unaccounted-for errors it is better to have a 
"wider" interval for each step, to cover inaccuracies, and a bit 
dangerous if it gets "narrower" than it should. I find it useful 
to think of floating point numbers as conceptual intervals of 
potential values (that get conceptually wider and wider the more 
you compute) and of the actual FP value as a "random" sample from 
that interval.

For all I know, maybe some other implementations already do what 
you suggest, but my concern was more general than this specific 
issue. I think it would be a good idea to mirror a reference 
implementation that is widely used for scientific computation, 
just to make sure that it is accepted. Imagine a team where the 
old boys cling to Fortran and the young guy wants D: if he can 
show the old boys that D produces the same results for what they 
do, they are more likely to be impressed.

Still, it is in the nature of FP that you should be able to 
configure and control expressions in order to overcome FP-related 
shortcomings, like setting the rounding mode etc. So stuff like 
this ought to be handled the same way if it isn't standard 
practice; not only for this issue, but also for dealing with 
overflow/underflow and other "optional" aspects of FP 
computations.
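
In D, for example, the rounding mode can already be scoped with 
std.math's FloatingPointControl. A sketch (the exact digits depend 
on whether the compiler constant-folds the division):

import std.math : FloatingPointControl;
import std.stdio;

void main(string[] args)
{
    // Runtime values so the division is not constant-folded.
    double one = args.length, three = args.length + 2.0;

    {
        FloatingPointControl fpctrl;
        fpctrl.rounding = FloatingPointControl.roundDown;   // toward -inf
        // fpctrl.enableExceptions(FloatingPointControl.severeExceptions);
        // would additionally trap overflow, division by zero and invalid ops.
        writefln("%.20f", one / three);
    }   // previous rounding mode restored when fpctrl goes out of scope

    writefln("%.20f", one / three);     // default round-to-nearest again
}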

> I agree, but there is also a lot to be said for not repeating 
> old mistakes, if we deem them to be such.

With templates you can probably find a clean way to throw in a 
compile-time switch for exception generation and other things 
that should be configurable.
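
Something along these lines, as a purely hypothetical sketch (none 
of these names exist in std.complex; they only illustrate the 
compile-time switch idea):

import std.math : isNaN;

enum NanPolicy { ieee, throwOnNan }

struct MyComplex(T, NanPolicy policy = NanPolicy.ieee)
{
    T re, im;

    MyComplex opBinary(string op : "*")(T rhs) const
    {
        auto result = MyComplex(re * rhs, im * rhs);
        static if (policy == NanPolicy.throwOnNan)
        {
            if (isNaN(result.re) || isNaN(result.im))
                throw new Exception("complex multiply produced a NaN");
        }
        return result;
    }
}

unittest
{
    auto a = MyComplex!(double, NanPolicy.ieee)(0.0, 1.0);
    auto b = a * double.infinity;           // silently yields NaN + inf*i
    assert(isNaN(b.re) && b.im == double.infinity);

    auto c = MyComplex!(double, NanPolicy.throwOnNan)(0.0, 1.0);
    // c * double.infinity would throw instead of returning a NaN.
}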

