Fixing C's Biggest Mistake

Walter Bright newshound2 at digitalmars.com
Sun Jan 8 05:51:33 UTC 2023


On 1/5/2023 2:25 PM, cc wrote:
> On Sunday, 1 January 2023 at 20:04:13 UTC, Walter Bright wrote:
>> On 12/31/2022 10:33 AM, monkyyy wrote:
>>> When adr is making a video game on stream and defines a vec2 with default 
>>> initialized floats; it's a video game it should be fail-safe and init to 0 
>>> rather than have him take 10 minutes on stage debugging it. Different 
>>> situations can call for different solutions, why is safety within computer 
>>> science universally without context?
>>
>> You're right that a video game need not care about correctness or corruption.
> 
> I don't think that's a very apt take *at all*.  Frankly it's insulting.  You do 
> realize video games are a *business*, right? They absolutely care about 
> correctness and corruption.

Sorry I made it sound that way. Nobody is going to die if the display is a bit 
off. And video game developers asked for D to support half-floats because of 
speed, not accuracy. (Half-float support is in the Sargon library now.)

John Carmack famously used the fast inverse square root algorithm, which is 
faster, but less accurate, than the usual method.

https://en.wikipedia.org/wiki/Fast_inverse_square_root
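As a concrete illustration, here is a minimal D sketch of that trick (the name 
fastInvSqrt is mine, not from the original source): reinterpret the float's bits 
as an integer, subtract from a magic constant, then refine with one 
Newton-Raphson step.

float fastInvSqrt(float number)
{
    float x2 = number * 0.5f;
    float y  = number;
    uint  i  = *cast(uint*) &y;    // view the float's bit pattern as an integer
    i = 0x5f3759df - (i >> 1);     // magic constant gives a rough 1/sqrt estimate
    y = *cast(float*) &i;          // back to a float
    y = y * (1.5f - x2 * y * y);   // one Newton-Raphson step refines the estimate
    return y;                      // fast, but only approximately 1/sqrt(number)
}

The result is accurate only to within roughly 0.2%, which is fine for game 
lighting but would be unacceptable where exact results matter.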

That said, I believe you when you say you care about this. I believe you want 
very much for your programs to be correct. Which is great! I'm certainly not 
going to try to talk you out of that. I posit that NaN initialization is a good 
path to get there; that is the whole reason it exists. I'm not even sure why 
we're debating it!


> On this specific issue, it so happens that 
> developers also tend to find it very useful and common for numeric types to 
> initialize to zero (when they are initialized at all).  Which is why they find 
> it very *surprising* and *confusing* when they suddenly don't.  This should not 
> be interpreted to mean that their industry is lazy and "doesn't care" about the 
> financial viability of releasing sound code.

1.0 is also a popular initial value. The compiler is never going to reliably 
guess it right.

Suppose 1.3 was the intended value. Does initialization to 0.0 make the 
program more or less likely to be correct than if it were initialized with NaN? I 
think we can agree that the program is wrong under both scenarios. But which 
program is more *likely* to produce an error in the output that cannot be ignored?

I propose that NaN initialization would make the error much more obvious, and 
hence fixable before release.
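
To make that concrete, here is a minimal D sketch (the Vec2 struct is just 
illustrative) of how the NaN default surfaces the bug:

import std.math  : isNaN;
import std.stdio : writeln;

struct Vec2
{
    float x, y;   // in D, floats default-initialize to float.nan
}

void main()
{
    Vec2 v;                              // oops: forgot to set x and y
    float len2 = v.x * v.x + v.y * v.y;  // NaN propagates through the arithmetic
    writeln(len2);                       // prints "nan" -- hard to ignore
    assert(isNaN(len2));
}

With 0.0 initialization the same bug prints a plausible-looking 0, which can 
easily slip through testing.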


