BitArray implementation issue
safety0ff via Digitalmars-d
digitalmars-d at puremagic.com
Tue Jul 22 14:16:35 PDT 2014
Currently, BitArray's length setter is badly broken.
Its implementation contains this code segment:
if (newdim != olddim) {
    auto b = ptr[0 .. olddim];
    b.length = newdim; // realloc
    ptr = b.ptr;
    if (newdim & (bitsPerSizeT-1)) // Set any pad bits to 0
        ptr[newdim - 1] &= ~(~0 << (newdim & (bitsPerSizeT-1)));
}
There are many bugs in the last two lines:
1) If we're increasing the length, ptr[newdim-1] is necessarily
zero, so we're doing nothing.
2) If we're reducing the length, we're essentially clearing
arbitrary bits whenever the branch is taken, since the shift
amount comes from the word count (newdim), not from the bit
length. Furthermore:
3) On 64-bit, we're always clearing the upper 32 bits of the last
word, because ~0 is a 32-bit int.
4) On 64-bit, we have undefined behaviour from the left shift,
because the shift amount can reach 63 on a 32-bit operand.
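For contrast, here is a minimal sketch of pad-bit clearing that avoids
these pitfalls. The helper name clearPadBits and the free-standing setup
are mine, not Phobos code: the point is that the mask is derived from
the bit count and the shift is performed at size_t width.

```d
import std.stdio;

enum bitsPerSizeT = size_t.sizeof * 8;

// Hypothetical helper, not the Phobos implementation: clear the unused
// pad bits in the last word of `words`, which holds `numbits` valid bits.
// size_t(1) forces the shift to full word width, so on 64-bit targets a
// shift amount up to 63 stays in range and no 32-bit `~0` promotion can
// wipe the upper half of the word.
void clearPadBits(size_t[] words, size_t numbits)
{
    immutable rem = numbits & (bitsPerSizeT - 1);
    if (rem)
        words[$ - 1] &= (size_t(1) << rem) - 1; // keep the low `rem` bits
}

void main()
{
    auto w = [size_t.max];
    clearPadBits(w, 5);
    assert(w[0] == 0b11111); // only the 5 valid bits survive
    writeln("ok");
}
```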
While trying to fix this, the question was raised: should we even
clear any bits?
The underlying array can be shared between BitArrays (assignment
behaves like dynamic array assignment).
This means that clearing the bits might affect other BitArrays
if the extension does not cross a size_t boundary.
So it is difficult to mimic D dynamic array semantics at the bit
level without a reference counting mechanism.
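To make the sharing concrete, a small sketch (assuming the
std.bitmanip.BitArray API; the lengths and bit positions are chosen
purely for illustration):

```d
import std.bitmanip : BitArray;

void main()
{
    BitArray a;
    a.length = 70;     // needs two size_t words on a 64-bit target
    a[65] = true;

    BitArray b = a;    // copies only (ptr, length); the words are shared
    b[0] = true;
    assert(a[0]);      // the write through b is visible through a

    // Shrinking b to 65 bits keeps the same two words, so if the length
    // setter zeroed b's new pad bits it would also clear a[65] in place.
    b.length = 65;
}
```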
Furthermore, a BitArray can function as an arbitrary data
reinterpreter via the function:
void init(void[] v, size_t numbits)
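A sketch of that reinterpretation; note that in later Phobos versions
the quoted init method was replaced by a constructor with the same
parameters, which this example uses:

```d
import std.bitmanip : BitArray;

void main()
{
    // Reinterpret existing words as a sequence of bits, with no copy.
    size_t[] data = [0b101];
    auto ba = BitArray(cast(void[]) data, 3); // the constructor form of
                                              // init(void[], size_t)
    assert(ba[0] && !ba[1] && ba[2]);

    ba[1] = true;              // writes through to the original storage
    assert(data[0] == 0b111);
}
```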
Help is needed for deciding whether setting the length should zero
the new bits (even if it means stomping on other BitArrays).
Currently I've implemented zeroing the bits upon extension in my PR
[1], but it is stalled on this design issue.
[1] https://github.com/D-Programming-Language/phobos/pull/2249