Real Int24

IntegratedDimensions IntegratedDimensions at gmail.com
Mon May 21 18:35:59 UTC 2018


On Monday, 21 May 2018 at 15:41:21 UTC, Simen Kjærås wrote:
> On Saturday, 19 May 2018 at 18:44:42 UTC, IntegratedDimensions 
> wrote:
>> On Saturday, 19 May 2018 at 18:19:35 UTC, IntegratedDimensions 
>> wrote:
>>> Is there any way to create an int24 type that behaves just 
>>> like any other built in type without having to reimplement 
>>> everything?
>>
>> In fact, what I'd like to do is create an arbitrary type:
>>
>> struct atype(T)
>> {
>>
>> }
>>
>> where atype(T) is just a "view" in to N_T bits interpreted as 
>> T, an enum.
>>
>> If T is bit, then the N = 1 and the interpretation is 1 bit.
>> If T is byte, then the N = 8 and the interpretation is 7 bits 
>> followed by 1 signed bit.
>> If T is int24, then the N = 24 and the interpretation is 23 
>> bits followed by 1 signed bit.
>>
>> The idea is the storage of atype is exactly N bits. If this is 
>> not possible due to boundary issues then N can always be a 
>> multiple of 8(which is for my use cause is the smallest).
>
> D does not support types that take up less than one byte of 
> space. It's possible to make types that represent less than one 
> byte - bool may be considered such an example - but they still 
> take up at least 1 byte.
>
> If you create a custom range type, you could pack more than one 
> element in each byte, see std.bitmanip.BitArray[0] for an 
> example.
>
>
>> The main thing is that I would like to be able to use atype as 
>> if it were a built in type.
>>
>> If N = 24, 3 bytes, I want to be able to create arrays of 
>> atype!int24[] which work just as if they were arrays of bytes 
>> without any exception or special cases.
>>
>> atype!byte would be equivalent to byte and reduce to the 
>> compiler internals. I'm not looking to create a "view" of an 
>> array. I want a standalone type that can behave as all the 
>> desired types needed, which is most of the numerical types of 
>> D and some of the ones it neglected like 24-bit ints, 48-bit 
>> ints, etc. Ideally, any type could be used and the "most 
>> optimal" code is generated while being able to use the types 
>> using the standard model.
>
> We already have std.numeric.CustomFloat[1]. As the name 
> implies, it only works for floats.
>
> I hacked together something somewhat equivalent for ints:
>
> https://gist.github.com/Biotronic/f6668d8ac95b70302015fee93ae9c8c1
>
> Usage:
>
> // Two's-complement, native-endian, 24-bit int type:
> CustomInt!24 a;
>
> // Unsigned, native-endian, 15-bit:
> CustomInt!(15, Representation.Unsigned) b;
>
> // Offset (-2..5) 3-bit int:
> CustomInt!(3, Representation.OffsetBinary, 2) c;
>
> // You get the idea:
> CustomInt!(64, Representation.SignedMagnitude, 0, 
> Endianness.BigEndian) d;
>
> Not sure this is what you're looking for, but it's at the very 
> least inspired by your post. :)
>
> If what you want is a type that can represent something a 
> packed array of 13-bit ints, the above is not what you're 
> looking for - you're going to need a custom range type.
>
> --
>   Simen
>
> [0]: https://dlang.org/phobos/std_bitmanip#BitArray
> [1]: https://dlang.org/phobos/std_numeric.html#.CustomFloat

Cool. I'll try it as a drop-in replacement, and if it works then 
it works! ;) Thanks.

Just to be clear and to make sure this works the way it seems:

All types whose width is a multiple of a byte are reduced either 
to a built-in representation (byte, ubyte, short, ushort, int, 
uint, long, ulong) directly (i.e., become an alias) or to the 
most efficient structure for that width: unsigned 24-bit = 3 
bytes and is effectively ubyte[3], unsigned 128-bit is 
ubyte[16], etc.?


Non-multiples are rounded up to the next whole byte, so 7 bits 
are represented using one byte, etc.

This seems to be the case from the code.
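Just to check my understanding of the 3-byte idea, here's a 
minimal sketch of what I mean (hypothetical names, not your 
actual code): a signed 24-bit value stored in exactly 3 bytes, 
widened to int with sign extension on read.

```d
import std.stdio;

/// Hypothetical sketch: a signed 24-bit int stored in exactly 3 bytes.
struct Int24Sketch
{
    ubyte[3] data;  // little-endian storage, 3 bytes total

    // Widen to int, sign-extending from bit 23.
    int get() const
    {
        int v = data[0] | (data[1] << 8) | (data[2] << 16);
        // Shift up and back down to sign-extend the top 8 bits.
        return (v << 8) >> 8;
    }

    void set(int v)
    {
        data[0] = cast(ubyte)(v & 0xFF);
        data[1] = cast(ubyte)((v >> 8) & 0xFF);
        data[2] = cast(ubyte)((v >> 16) & 0xFF);
    }
}

void main()
{
    Int24Sketch x;
    x.set(-5);
    writeln(x.get());            // -5
    writeln(Int24Sketch.sizeof); // 3 -- no padding for a ubyte[3] struct
}
```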


Now, what I didn't see was anything for working with 
non-byte-aligned arrays of CustomInt. Would it be possible to 
add? I know you say we should use std.bitmanip, but the code 
could be extended to support it relatively easily by treating an 
array of bits as an array of CustomInts, with the indexer 
computing the appropriate offset from the bit size.

Maybe that will require a CustomIntsArray?

The idea is, say one has 7-bit ASCII represented in a ubyte[]; 
then one could map that to a CustomInt!7[], which would use 
CustomInt!7 (but 7 bits, not 8) as the element representation. 
But, of course, a lone CustomInt!7 would still be 8 bits. 
Basically, it retrieves and stores the correct value by doing 
the standard masking.
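Something like this sketch of the bit-offset indexing 
(hypothetical Packed7 type, just to illustrate the masking -- 
not claiming this matches your code): seven-bit values stored 
back-to-back in a ubyte[], with the indexer computing the bit 
offset from the element size.

```d
import std.stdio;

/// Hypothetical sketch of the packed-array idea: 7-bit values stored
/// back-to-back in a ubyte[], indexer computes the bit offset.
struct Packed7
{
    ubyte[] bits;   // raw storage, 7 bits per element

    // Read element i, masking across a possible byte boundary.
    ubyte opIndex(size_t i) const
    {
        size_t bit = i * 7;
        size_t byteIdx = bit / 8;
        size_t shift = bit % 8;
        // Gather up to 16 bits so a value split across two bytes works.
        uint window = bits[byteIdx];
        if (byteIdx + 1 < bits.length)
            window |= bits[byteIdx + 1] << 8;
        return cast(ubyte)((window >> shift) & 0x7F);
    }

    void opIndexAssign(ubyte v, size_t i)
    {
        size_t bit = i * 7;
        size_t byteIdx = bit / 8;
        size_t shift = bit % 8;
        uint window = bits[byteIdx];
        if (byteIdx + 1 < bits.length)
            window |= bits[byteIdx + 1] << 8;
        // Clear the 7-bit slot, then or in the new value.
        window = (window & ~(0x7Fu << shift)) | ((v & 0x7Fu) << shift);
        bits[byteIdx] = cast(ubyte)(window & 0xFF);
        if (byteIdx + 1 < bits.length)
            bits[byteIdx + 1] = cast(ubyte)((window >> 8) & 0xFF);
    }
}

void main()
{
    auto a = Packed7(new ubyte[7]); // room for 8 seven-bit values
    a[0] = 'H';
    a[1] = 'i';
    writeln(cast(char) a[0], cast(char) a[1]); // Hi
}
```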

BTW, it looks like you could extend your type to deal with 
floats and doubles, which would make it very robust in dealing 
with arbitrary primitive types.

The idea is that any widths matching built-in language types are 
aliased to them directly, and those that don't match are handled 
appropriately.
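Roughly this kind of thing (a sketch with made-up names, using a 
dummy packed struct as the fallback): widths that match a 
built-in type alias to it directly, everything else gets the 
byte-array-backed representation.

```d
/// Hypothetical fallback: width rounded up to whole bytes.
struct PackedInt(int bits)
{
    ubyte[(bits + 7) / 8] data;
}

/// Sketch: widths matching a built-in type alias to it directly,
/// so those cases reduce to compiler internals.
template IntType(int bits)
{
    static if (bits == 8)       alias IntType = byte;
    else static if (bits == 16) alias IntType = short;
    else static if (bits == 32) alias IntType = int;
    else static if (bits == 64) alias IntType = long;
    else                        alias IntType = PackedInt!bits;
}

void main()
{
    static assert(is(IntType!32 == int));  // aliases directly
    static assert(IntType!24.sizeof == 3); // packed fallback
    static assert(IntType!7.sizeof == 1);  // rounded up to 1 byte
}
```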






More information about the Digitalmars-d-learn mailing list