OT: floats

Quirin Schroll qs.il.paperinik at gmail.com
Thu Jun 13 19:50:34 UTC 2024


On Wednesday, 12 June 2024 at 21:05:39 UTC, Dukc wrote:
> Quirin Schroll kirjoitti 12.6.2024 klo 16.25:
>> On Sunday, 9 June 2024 at 17:54:38 UTC, Walter Bright wrote:
>>> On 5/26/2024 1:05 AM, Daniel N wrote:
>>>> I was aware of `-ffast-math` but not that the seemingly 
>>>> harmless enabling of gnu dialect extensions would change 
>>>> excess-precision!
>>>
>>> Supporting various kinds of fp math with a switch is just a 
>>> bad idea, mainly because so few people understand fp math and 
>>> its tradeoffs. It will also bork up code that is carefully 
>>> tuned to the default settings.
>> 
>> Every once in a while, I believe floating-point numbers are 
>> such delicate tools, they should be disabled by default and 
>> require a compiler switch to enable. Something like 
>> `--enable-floating-point-types--yes-i-know-what-i-am-doing-with-them--i-swear-i-read-the-docs`.
>> And yeah, this is tongue in cheek, but I taught applied 
>> numerics and programming for mathematicians for 4 years, and 
>> even I get it wrong sometimes. In my work, we use arbitrary 
>> precision rational numbers because they’re fool-proof and we 
>> don’t need any transcendental functions.
>
> My rule of thumb - if I don't have a better idea - is to treat 
> FP numbers like I'd treat readings from a physical instrument: 
> there's always going to be a slight "random" factor in my 
> calculations. I can't trust `==` operator with FP expression 
> result just like I can't trust I'll get exactly the same result 
> if I measure the length of the same metal rod twice.
>
> Would you agree this being a good basic rule?

That’s so fundamental, it’s not even a good first step. It’s a 
good first half-step. The problem isn’t solved by simple rules 
such as “don’t use `==`,” and it isn’t just knowing that rounding 
errors exist. If you know that, you might say: obviously, that 
means I can’t trust every single digit printed. True, but that’s 
only the beginning. If you implement an algorithm, you have to 
take into account how rounding errors propagate through the 
calculations. The issue is that you can’t do that intuitively. 
You just can’t. You can intuit _some_ obvious problems. Generally 
speaking, if you implement a formula, you must extract from the 
algorithm what exactly you are computing and then work out the 
so-called condition number, which tells you whether errors get 
amplified. While that sounds easy, it can be next to impossible 
for non-linear problems. (IIRC, for linear ones it’s always 
possible; it may just be a lot of work in practice.)
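
To make that concrete, here is a minimal D sketch (my own toy 
numbers, nothing from the thread) of the two classic failure 
modes: cancellation when subtracting nearly equal values, and 
accumulation over many rounded additions:

```d
import std.stdio;

void main()
{
    // Cancellation: subtracting nearly equal numbers amplifies the
    // relative error already present in the operands. Mathematically
    // a - b is 1e-6, but in single precision only about one
    // significant digit of the difference survives.
    float a = 1.000001f;
    float b = 1.000000f;
    writefln("a - b = %.10g", a - b);   // roughly 9.5e-07, not 1e-06

    // Accumulation: 0.1 has no exact binary representation, and every
    // one of the ten million additions rounds again, so the errors
    // pile up instead of averaging out.
    float sum = 0;
    foreach (i; 0 .. 10_000_000)
        sum += 0.1f;
    writefln("sum   = %.7g (exact result would be 1e+06)", sum);
}
```

The subtraction is the textbook ill-conditioned step: the inputs 
carry about seven significant digits, the result keeps roughly 
one.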

Not to mention other quirks: `==` is not an equivalence relation, 
`==`-equal values are not substitutable, and lexicographically 
ordering a bunch of arrays of float values is huge fun.
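
A quick sketch of all three, assuming nothing beyond IEEE-754 
`double` semantics:

```d
import std.stdio;

void main()
{
    // Not an equivalence relation: `==` isn't even reflexive for NaN.
    double nan = double.nan;
    writeln(nan == nan);                  // false

    // `==`-equal values aren't substitutable: +0.0 and -0.0 compare
    // equal but behave differently downstream.
    double pz = 0.0, nz = -0.0;
    writeln(pz == nz);                    // true
    writeln(1.0 / pz, " ", 1.0 / nz);     // inf -inf

    // NaN is unordered: it is neither <, >, nor == anything, which is
    // what makes lexicographically ordering arrays of floats such fun:
    // a single NaN element satisfies none of the element comparisons.
    writeln(nan < 2.0, " ", nan > 2.0, " ", nan == 2.0); // false false false
}
```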

I haven’t touched FPs in years, and I’m not planning to do so in 
any professional form, maybe ever. If my employer needed something 
like FPs from me, I’d suggest using rationals unless those are a 
proven bottleneck.
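
For illustration only, a hypothetical bare-bones rational type on 
top of Phobos’ `std.bigint.BigInt` (not a library type; real code 
would want the remaining operators, hashing, and so on):

```d
import std.bigint : BigInt;
import std.stdio : writeln;

/// Hypothetical, minimal exact rational on top of BigInt:
/// no rounding and no overflow, but also no transcendental functions.
struct Rational
{
    BigInt num, den;

    this(long n, long d)
    {
        num = BigInt(n);
        den = BigInt(d);
        normalize();
    }

    void normalize()
    {
        // Euclid's algorithm, then keep the sign in the numerator.
        BigInt a = num < 0 ? -num : num;
        BigInt b = den < 0 ? -den : den;
        while (b != 0) { auto t = a % b; a = b; b = t; }
        if (a != 0) { num /= a; den /= a; }
        if (den < 0) { num = -num; den = -den; }
    }

    Rational opBinary(string op : "+")(Rational rhs)
    {
        Rational sum;
        sum.num = num * rhs.den + rhs.num * den;
        sum.den = den * rhs.den;
        sum.normalize();
        return sum;
    }

    bool opEquals(Rational rhs)
    {
        return num == rhs.num && den == rhs.den;
    }
}

void main()
{
    // 1/10 + 2/10 is exactly 3/10 here; in binary FP, 0.1 + 0.2 != 0.3.
    writeln(Rational(1, 10) + Rational(2, 10) == Rational(3, 10)); // true
    writeln(0.1 + 0.2 == 0.3);                                     // false
}
```

The trade-off is exactly the one quoted above: results are exact 
and `==` means what you think it means, but there is no sqrt, exp, 
or sin, and the numerators and denominators can grow large.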

