Always false float comparisons

Joseph Rushton Wakeling via Digitalmars-d digitalmars-d at puremagic.com
Sun May 15 06:49:12 PDT 2016


On Saturday, 14 May 2016 at 18:46:50 UTC, Walter Bright wrote:
> On 5/14/2016 3:16 AM, John Colvin wrote:
>> This is all quite discouraging from a scientific
>> programmer's point of view. Precision is important, and
>> more precision is good, but reproducibility and
>> predictability are critical.
>
> I used to design and build digital electronics out of TTL 
> chips. Over time, TTL chips got faster and faster. The rule was 
> to design the circuit around the minimum signal propagation 
> delay, never counting on the maximum. Therefore, putting in 
> faster parts will never break the circuit.
>
> Engineering is full of things like this. It's sound 
> engineering practice. For another example, I've never heard 
> of a circuit requiring a resistor with 20% tolerance that 
> would fail if a 10% tolerance one was put in.

Should scientific software be written so that it doesn't break 
if floating-point precision is increased, and so that greater 
precision can be used when the hardware supports it?  Sure.
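
One way to do that (a minimal sketch; `roughlyEqual` and the 
20-bit tolerance are illustrative choices of mine, not anything 
the language prescribes) is to compare values by the number of 
matching significand bits, via std.math.feqrel, rather than by 
exact equality -- then extra intermediate precision can only 
tighten the result, never flip the test:

    import std.math : feqrel;
    import std.stdio;

    // Compare by shared significand bits instead of exact
    // equality, so the test still passes if intermediate
    // computations gain precision.
    bool roughlyEqual(T)(T a, T b, int minSharedBits = 20)
    {
        return feqrel(a, b) >= minSharedBits;
    }

    void main()
    {
        float x = 0;
        foreach (i; 0 .. 10)
            x += 0.1f;

        // May print true or false, depending on the precision
        // at which the intermediate additions were carried out.
        writeln(x == 1.0f);

        // True either way.
        writeln(roughlyEqual(x, 1.0f));
    }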

However, that's not the same as saying that the choice of 
precision should be in the hands of the hardware, rather than of 
the person building and running the program.  I for one would 
not like to have to spend time working out why my program was 
producing different results just because I had (say) switched 
from a machine whose floats max out at 80 bits to one supporting 
128-bit floats.
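
To make the hazard concrete (an illustrative sketch, not code 
from this thread), here is the kind of comparison the subject 
line refers to -- the same expression evaluates differently 
depending on the precision at which a value is held:

    import std.stdio;

    void main()
    {
        float f = 1.3;

        // 1.3 has no exact binary representation, and its
        // nearest float differs from its nearest double, so
        // this mixed-precision comparison is always false.
        writeln(f == 1.3);              // false
        writeln(f == cast(float) 1.3);  // true
    }

A computed result compared against a literal can flip in the 
same way when the implementation carries intermediates at 64, 
80, or 128 bits, which is exactly the reproducibility problem 
described above.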

