Lints, Condate and bugs
Walter Bright
newshound2 at digitalmars.com
Thu Oct 28 16:32:36 PDT 2010
bearophile wrote:
>> becomes 2 instructions: ADD EAX,3 JC overflow
> Modern CPUs have speculative execution. That JC has a very low probability,
> and the CPU executes several instructions under it.
It still causes the same slowdown. If the CPU can speculatively execute 5
instructions ahead, then you're reducing it to 4. Substantially increasing the
code size has additional costs with cache misses, etc.
> Every time I am comparing a signed with an unsigned I have an overflow risk
> in D.
Not every time, no. In fact, it's rare. I believe you are *way* overstating the
case. If you were right I'd be reading all the time about integer overflow bugs,
not buffer overflow bugs.
> And overflow tests are a subset of more general range tests (see the
> recent thread about bound/range integers).
Languages which have these have failed to gain traction. That might be for other
reasons, but it's not auspicious.
>> It's also possible in D to build a "SafeInt" library type that will check
>> for overflow. These classes exist for C++, but nobody seems to have much
>> interest in them.
>
> People use integer overflow tests in Delphi code,
Delphi has also failed in the marketplace. Again, surely other reasons factor
into its failure, but if you're going to cite other languages it is more
compelling to cite successful ones.
> I have seen them used even in
> code written by other people too. But those tests are uncommon in the C++
> code I've seen. Maybe the cause is that using a SafeInt is a pain and it's
> not handy.
Or perhaps because people really aren't having a problem with integer overflows.
> Then I'd like a compiler switch that works very well to automatically change
> all integral numbers in a program into bigints (and works well with all the
> int, short, ubyte and ulong etc type annotations too, of course).
Such a switch is completely impractical, because such a language would then have
two quite incompatible variants.
> Have you tried to use the current bigints as a replacement for all ints in a
> program? They don't cast automatically to size_t (and there are a few other
> troubles; a while ago I started a thread about this), so every time you use
> them as array indexes you need casts or more. And you can't even print them
> with a writeln. You care for the performance loss coming from replacing an
> "ADD EAX,3" with an "ADD EAX,3 JC overflow" but here you suggest me to
> replace integers with heap-allocated bigints.
Bigint can probably be improved. My experience with Don is that he is very
interested in, and committed to, improving his designs.
BTW, bigints aren't heap allocated if their range fits in a ulong. They are
structs, i.e. value types (that got bashed in another thread as unnecessary, but
here's an example where they are valuable).
Also, *you* care about performance, as you've repeatedly posted benchmarks
complaining about D's performance, including the performance on integer
arithmetic. I don't see that you'd be happy with the marked slowdown your proposal
will produce.
I'm willing to go out on a limb here. I welcome you to take a look at the dmd
source and Phobos source code. Find any places actually vulnerable to a
signed/unsigned error or overflow error (not theoretically vulnerable). For
example, an overflow that would not happen unless the program had run out of
memory long before is not an actual bug. The index into the vtable[] is not
going to overflow. The line number counter is not going to overflow. The number
of parameters is not going to overflow. There are also some places with overflow
checks, like in turning numeric literals into binary.