Time to move std.experimental.checkedint to std.checkedint ?

Walter Bright newshound2 at digitalmars.com
Mon Mar 29 20:00:03 UTC 2021


On 3/29/2021 9:41 AM, Andrei Alexandrescu wrote:
> On 3/27/21 3:42 AM, tsbockman wrote:
>> With good inlining and optimization, even a library solution generally slows 
>> integer math code down by less than a factor of two. (I expect a language 
>> solution could do even better.)
>>
>> This is significant, but nowhere near big enough to move the bottleneck in 
>> most code away from I/O, memory, floating-point, or integer math for which 
>> wrapping is semantically correct (like hashing or encryption). In those cases 
>> where integer math code really is the bottleneck, there are often just a few 
>> hot spots where the automatic checks in some inner loop need to be replaced 
>> with manual checks outside the loop.
> 
> This claim seems speculative. A factor of two for a fundamental class of 
> operations is very large, not just "significant". We're talking about e.g. 1 
> cycle for addition, and it was a big deal when it was introduced back in the 
> early 2000s. Checked code is larger, meaning more pressure on the scarce I-cache 
> in large programs - and that's not going to be visible in microbenchmarks. And 
> "I/O is slow anyway" is exactly what drove the development of C++ 
> catastrophically slow iostreams.

With the LEA instruction, which can do adds and some multiplies in one
operation, this calculation often comes at zero cost, as it uses the address
calculation logic that runs in parallel.

LEA does not set any flags or include any overflow detection logic.

Forgoing that optimization alone would result in significant slowdowns.

Yes, bugs happen because of overflows. The worst consequence is memory
corruption: undersized allocations (from malloc(numElems * sizeElem) where the
product wraps around) and the buffer overflows that follow. But D's buffer
overflow protection features mitigate this.

D's integral promotion rules (bytes and shorts are promoted to ints before doing
arithmetic) get rid of the bulk of likely overflows. (It's ironic that the
integral promotion rules are much maligned and considered a mistake; I don't
share that opinion, and this is one of the reasons why.)

In my experience, there are very few places in real code where overflow is a 
possibility. They usually come in the form of unexpected input, such as overly 
large files, or specially crafted malicious input. I've inserted checks in DMD's 
implementation where overflow is a risk.

Placing the burden of checks everywhere is a poor tradeoff.

It isn't even clear what the behavior on overflow should be. Error? Wraparound?
Saturation? std.experimental.checkedint enables the user to make this decision
on a case-by-case basis. The language properly defaults to the simplest and
fastest choice: wraparound.

BTW, Rust does have optional overflow protection, but it's turned off by default
for release builds. This is pretty good evidence that the performance cost of
such checks is not worth it. Rust also does not do integral promotion, so Rust
code is far more vulnerable to overflows.


More information about the Digitalmars-d mailing list