Always false float comparisons
Andrei Alexandrescu via Digitalmars-d
digitalmars-d at puremagic.com
Mon May 16 13:10:40 PDT 2016
On 05/16/2016 02:57 PM, Walter Bright wrote:
> On 5/16/2016 7:32 AM, Andrei Alexandrescu wrote:
>> It is rare to need to actually compute the inverse of a matrix. Most of
>> the time it's of interest to solve a linear equation of the form Ax = b,
>> for which a variety of good methods exist that don't entail computing the
>> actual inverse.
>
> I was solving n equations with n unknowns.
That's the worst way to go about it. I've seen students fail exams over
it. Solving a system of linear equations by computing the inverse is
inferior to just about any other method. See e.g.
http://www.mathworks.com/help/matlab/ref/inv.html
"It is seldom necessary to form the explicit inverse of a matrix. A
frequent misuse of inv arises when solving the system of linear
equations Ax = b. One way to solve the equation is with x = inv(A)*b. A
better way, from the standpoint of both execution time and numerical
accuracy, is to use the matrix backslash operator x = A\b. This produces
the solution using Gaussian elimination, without explicitly forming the
inverse. See mldivide for further information."
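To make the distinction concrete, here's what the elimination route looks
like, a minimal sketch in D (partial pivoting, no singularity or size
checks; an illustration, not a library-grade solver):

import std.algorithm : swap;
import std.math : fabs;

// Solve A*x = b via Gaussian elimination with partial pivoting.
// A is n x n, b has length n; both are overwritten in place.
double[] solve(double[][] a, double[] b)
{
    immutable n = b.length;
    foreach (k; 0 .. n)
    {
        // Partial pivoting: pick the row with the largest |a[i][k]|.
        size_t p = k;
        foreach (i; k + 1 .. n)
            if (fabs(a[i][k]) > fabs(a[p][k])) p = i;
        swap(a[k], a[p]);
        swap(b[k], b[p]);
        // Eliminate below the pivot.
        foreach (i; k + 1 .. n)
        {
            immutable m = a[i][k] / a[k][k];
            foreach (j; k .. n) a[i][j] -= m * a[k][j];
            b[i] -= m * b[k];
        }
    }
    // Back substitution.
    auto x = new double[](n);
    foreach_reverse (k; 0 .. n)
    {
        double s = b[k];
        foreach (j; k + 1 .. n) s -= a[k][j] * x[j];
        x[k] = s / a[k][k];
    }
    return x;
}

The point stands in any language: eliminate and back-substitute; don't form
inv(A) and multiply.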
You have long been advocating that the onus is on the engineer to exercise
a good understanding of what's going on when using domain-specific code
such as UTF, linear algebra, etc. If you applied that standard here, you'd
have to discount this argument.
>> I emphasize the danger of this kind of thinking: 1-2 anecdotes trump a
>> lot of other evidence. This is what happened with input vs. forward C++
>> iterators as the main motivator for a variety of concepts designs.
>
> What I did was implement the algorithm out of my calculus textbook.
> Sure, it's a naive algorithm - but it is highly unlikely that untrained
> FP programmers know intuitively how to deal with precision loss.
As someone else said: a few bits of extra precision ain't gonna help
them. I thought that argument was closed.
> I bring
> up our very own Phobos sum algorithm, which was re-implemented later
> with the Kahan method to reduce precision loss.
Kahan is clear, ingenious, and understandable, and a great part of the
stdlib. I don't see what the point is here. Naive approaches aren't
going to take anyone far, regardless of precision.
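Not the Phobos code itself, just the textbook shape of the algorithm, to
make the point concrete:

// Kahan (compensated) summation: carry the low-order bits that each
// addition would otherwise discard in a separate compensation term.
double kahanSum(const(double)[] xs)
{
    double sum = 0.0, c = 0.0;      // c holds the running error
    foreach (x; xs)
    {
        immutable y = x - c;        // corrected next term
        immutable t = sum + y;      // low bits of y are lost here...
        c = (t - sum) - y;          // ...and recovered here
        sum = t;
    }
    return sum;
}

Fed 1.0 followed by 10^8 copies of 1e-16, the naive left-to-right double
sum returns exactly 1.0, while kahanSum returns about 1 + 1e-8 using plain
doubles throughout. The better algorithm recovers the result without
widening the accumulator.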
>>> 1. Go uses 256 bit soft float for constant folding.
>> Go can afford it because it does no interesting things during
>> compilation. We can't.
>
> The "we can't" is conjecture at the moment.
We can't, and we shouldn't invest time in investigating whether we can. It
would be a waste even if the project succeeded 100% and exceeded everyone's
expectations.
>>> 2. Speed is hardly the only criterion. Quickly getting the wrong answer
>>> (and not just a few bits off, but total loss of precision) is of no
>>> value.
>> Of course. But it turns out the precision argument loses to the speed
>> argument.
>>
>> A. It's been many many years and very few if any people commend D for its
>> superior approach to FP precision.
>>
>> B. In contrast, a bunch of folks complain about anything slow, be it
>> during compilation or at runtime.
>
> D's support for reals does not negatively impact the speed of float or
> double computations.
Then let's not do more of it.
>>> 3. Supporting 80 bit reals does not take away from the speed of
>>> floats/doubles at runtime.
>> Fast compile-time floats are of strategic importance to us. Give me
>> fast FP during compilation, and I'll make it go slow (whilst putting it
>> to amazing work).
>
> I still have a hard time seeing what you plan to do at compile time that
> would involve tens of millions of FP calculations.
Give those to me and you'll be surprised.
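To give a flavor of it, a hypothetical sketch (the function and sizes are
made up): bake a lookup table into the binary via CTFE, using pure
arithmetic so the interpreter can run it. Scale the table and iteration
counts up and compile-time FP throughput becomes the bottleneck.

// exp(x) on [0, 1) tabulated at compile time. Plain FP arithmetic only,
// which CTFE can evaluate.
double[] expTable(size_t n)
{
    auto tab = new double[](n);
    foreach (i; 0 .. n)
    {
        immutable x = cast(double) i / n;
        // Taylor series for exp(x), truncated at 20 terms.
        double term = 1.0, sum = 1.0;
        foreach (k; 1 .. 20)
        {
            term *= x / k;
            sum += term;
        }
        tab[i] = sum;
    }
    return tab;
}

// A module-scope static immutable initializer is evaluated through CTFE:
static immutable table = expTable(4096);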
Andrei