Integer division by zero results in floating-point exception!?

Anders F Björklund afb at algonet.se
Mon Apr 2 10:58:19 PDT 2007


David Finlayson wrote:

>> It is probably not a D exception but a Unix signal; if I remember correctly,
>> Unix sends a process the floating-point exception signal (SIGFPE) on division by zero.
> 
> So, is this a bug or a feature? If it is normal, it seems like this
> undermines the exception mechanism built into the language.

Hardware exceptions are considered "part of" the D error mechanisms too;
for instance, dereferencing a null pointer will give you a similar error.


BTW; If I run your program (with the debugger) on Mac OS X, I get:

Program received signal EXC_ARITHMETIC, Arithmetic exception.
0x00002e21 in D main (args={length = 1, ptr = 0x300550}) at div0.d:13
13          writefln("a/b = ", a/b);

Interestingly, this only happens on x86. The PPC happily outputs:
a/b = 0

So I guess it is ultimately up to how the architecture handles it...
But on the Intel architecture you probably want to check for it?

--anders


More information about the Digitalmars-d-learn mailing list