Why must bitfields sum to a multiple of a byte?

Era Scarecrow rtcvb32 at yahoo.com
Thu Aug 2 02:14:14 PDT 2012


On Thursday, 2 August 2012 at 09:03:54 UTC, monarch_dodra wrote:
> I had an (implementation) question for you: Does the 
> implementation actually require knowing what the size of the 
> padding is?
>
> eg:
> struct A
> {
>     int a;
>     mixin(bitfields!(
>         uint,  "x",    2,
>         int,   "y",    3,
>         ulong,  "",    3 // <- This line right there
>     ));
> }
>
> Is that highlighted line really mandatory?
> I'm fine with having it optional, in case I'd want to have, 
> say, a 59-bit padding, but can't the implementation figure it 
> out on its own?

  The original code requires it, but why? Perhaps so you are 
aware of where all the bits are assigned, even the ones you 
aren't using. It would be horrible if you accidentally used 33 
bits and the storage silently extended to 64 without telling 
you, wouldn't it?
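
  As it stands (if I'm reading std.bitmanip right), nothing is 
filled in for you: the widths have to sum to one of the supported 
storage sizes (8, 16, 32, or 64 bits), and anything else is 
rejected at compile time. For example:

import std.bitmanip;

struct B
{
    // 2 + 3 = 5 bits: not a supported total, so this fails
    // with a static assert instead of silently padding.
    mixin(bitfields!(
        uint, "x", 2,
        int,  "y", 3));
}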

  However, having it fill in the padding automatically and 
ignore the last x bits wouldn't be too hard to do; I've been 
wondering if I should remove the requirement.
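
  Something along these lines would do it, I think. This is only 
a sketch; totalBits, paddedSize and autoBitfields are names I'm 
making up here, not anything in std.bitmanip. It just sums the 
widths and appends an anonymous padding field up to the next 
size bitfields accepts:

import std.bitmanip : bitfields;

// Sum every third argument (the widths) of a
// (type, name, width, ...) argument list.
template totalBits(Args...)
{
    static if (Args.length == 0)
        enum totalBits = 0;
    else
        enum totalBits = Args[2] + totalBits!(Args[3 .. $]);
}

// Round a bit count up to the next size bitfields accepts.
enum paddedSize(uint n) = n <= 8 ? 8 : n <= 16 ? 16 : n <= 32 ? 32 : 64;

// Hypothetical wrapper: appends the anonymous padding field
// so the caller doesn't have to spell it out by hand.
template autoBitfields(Args...)
{
    enum used = totalBits!Args;
    static if (used == paddedSize!used)
        enum autoBitfields = bitfields!Args;
    else
        enum autoBitfields = bitfields!(Args,
            ulong, "", paddedSize!used - used);
}

struct A
{
    int a;
    mixin(autoBitfields!(
        uint, "x", 2,
        int,  "y", 3)); // 3 bits of padding added for us
}

The generated fields of A end up identical to the hand-written 
ulong, "", 3 padding in the example quoted above.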

