Number of Bits Needed to Represent a Zero-Offset Integer

Jonathan M Davis via Digitalmars-d-learn digitalmars-d-learn at puremagic.com
Mon Jan 19 04:15:34 PST 2015


On Monday, January 19, 2015 12:04:38 via Digitalmars-d-learn wrote:
> As a follow-up to
>
> http://forum.dlang.org/thread/fdfwrdtjcawprvvkoqas@forum.dlang.org#post-qxudiyoygnvvbovhjfgt:40forum.dlang.org
>
> I'm looking for a function that figures out the number of bits
> that are needed to represent a zero-based integer:
>
> value-set => bits
> =================
> 0,1 => 1 (*)
> 0,1,2 => 2
> 0,1,2,3 => 2 (*)
> 0,1,2,3,4 => 3
> 0,1,2,3,4,5 => 3
> 0,1,2,3,4,5,6 => 3
> 0,1,2,3,4,5,6,7 => 3 (*)
> 0,1,2,3,4,5,6,7,8 => 4
> ...
>
> (*) means full bit usage

I had this lying around in one of my projects:

/++
    Integral log₂ (i.e. floor(log₂(n)) for n > 0) rather than real-valued log.
    Note: recurses forever for n == 0; bitsRequired guards that case.
  +/
ulong lg(ulong n)
{
    return n == 1 ? 0 : 1 + lg(n / 2);
}

/++
    Returns the minimum number of bits required to hold the given value.
  +/
size_t bitsRequired(ulong value)
{
    return value == 0 ? 1 : cast(size_t)lg(value) + 1;
}
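For comparison, a non-recursive sketch using `core.bitop.bsr` (bit scan reverse), which returns the index of the highest set bit. The name `bitsRequiredBsr` is just illustrative; `bsr` is undefined for 0, hence the same special case as above:

```d
import core.bitop : bsr;

/// Non-recursive equivalent: bsr(value) is the index of the
/// highest set bit (undefined for 0, so 0 is special-cased).
size_t bitsRequiredBsr(ulong value)
{
    return value == 0 ? 1 : bsr(value) + 1;
}

unittest
{
    assert(bitsRequiredBsr(0) == 1);
    assert(bitsRequiredBsr(1) == 1);
    assert(bitsRequiredBsr(8) == 4);
    assert(bitsRequiredBsr(ulong.max) == 64);
}
```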

unittest
{
    import std.string, std.typetuple;
    ulong one = 1;

    assert(bitsRequired(0) == 1);
    foreach(bits; 0 .. 31)
    {
        assert(bitsRequired(one << bits) == bits + 1,
               format("bits [%s], result [%s]", bits, bitsRequired(one << bits)));
    }

    foreach(T; TypeTuple!(ubyte, ushort, uint, ulong))
    {
        ulong value;
        foreach(bits; 0 .. T.sizeof * 8)
        {
            value <<= 1;
            value |= 1;
        }
        assert(value == T.max);
        assert(bitsRequired(T.min) == 1);
        assert(bitsRequired(value) == T.sizeof * 8,
               format("Type [%s], result [%s]", T.stringof, bitsRequired(value)));
    }
}
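Checked against the table in the question (repeating lg/bitsRequired here only so the snippet compiles standalone):

```d
import std.stdio;

ulong lg(ulong n) { return n == 1 ? 0 : 1 + lg(n / 2); }
size_t bitsRequired(ulong value) { return value == 0 ? 1 : cast(size_t)lg(value) + 1; }

void main()
{
    // The set {0, 1, ..., n} needs bitsRequired(n) bits,
    // matching the table: 1, 2, 2, 3, 3, 3, 3, 4 for n = 1 .. 8.
    foreach(n; 1 .. 9)
        writefln("0..%s => %s bits", n, bitsRequired(n));
}
```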

- Jonathan M Davis



