colour lib

Steven Schveighoffer via Digitalmars-d digitalmars-d at puremagic.com
Fri Sep 2 05:36:02 PDT 2016


On 8/31/16 1:58 AM, Manu via Digitalmars-d wrote:
> I have this implementation issue to which I'm having trouble applying
> good judgement; I'd like to survey opinions...
>
> So, RGB colours depend on this 'normalised integer' concept, that is:
>   unsigned: luminance = val / IntType.max
>   signed: luminance = max(val / IntType.max, -1.0)
>
> So I introduce NormalizedInt(T), which does that.
>
> The question is, what should happen when someone does:
>   NormalisedInt!ubyte nub;
>   NormalizedInt!byte nb;

Is it s or z? :)
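
For readers following along, a minimal sketch of the normalised-integer
concept under discussion (hypothetical code, not Manu's actual
implementation):

    import std.traits : isIntegral, isSigned;
    import std.algorithm : max;

    // An integral value of type T reinterpreted as a fraction of T.max.
    struct NormalizedInt(T) if (isIntegral!T)
    {
        T value;

        // Luminance in [0, 1] for unsigned T, clamped to [-1, 1] for signed T.
        double luminance() const
        {
            static if (isSigned!T)
                return max(cast(double)value / T.max, -1.0);
            else
                return cast(double)value / T.max;
        }
    }

    unittest
    {
        assert(NormalizedInt!ubyte(255).luminance == 1.0);
        assert(NormalizedInt!byte(127).luminance == 1.0);
        assert(NormalizedInt!byte(-128).luminance == -1.0); // clamped
    }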

>   auto r = nub + nb;
>
> What is typeof(r)?
>
> There are 3 options that stand out, and I have no idea which one is correct.
> 1. Compile error, mismatching NormalisedInt type arithmetic shouldn't
> work; require explicit user intervention.
> 2. 'Correct' result, ie, lossless; is(typeof(r) ==
> NormalisedInt!short). Promote to a type that doesn't lose precision;
> the conversion costs some efficiency, but results are always correct.
> 3. Do what normal int types do; is(typeof(r) == NormalisedInt!int), ie,
> apply the normal integer arithmetic type promotion rules. The classic
> pain-in-the-arse applies when an implicitly promoted result is stored
> to a lower-precision value. Probably also slow (even slower than
> option #2).
>
> Are there other options?
> I'm tempted by #1, but that will follow right through to the colour
> implementation, which will lead to colour-type casts all over the
> place.
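
To make the three options above concrete (a sketch only; NormalisedInt
and its conversions here are hypothetical):

    NormalisedInt!ubyte nub; // luminance in [0, 1], 8 bits of precision
    NormalisedInt!byte  nb;  // luminance in [-1, 1], 7 bits + sign

    // Option 1: `nub + nb` is a compile error; the user must convert
    // explicitly, e.g.:
    //   auto r = cast(NormalisedInt!short)nub + cast(NormalisedInt!short)nb;

    // Option 2: implicit promotion to a type wide enough to represent
    // both operands without losing precision:
    //   static assert(is(typeof(nub + nb) == NormalisedInt!short));

    // Option 3: mirror D's built-in integer promotion, where byte and
    // ubyte both promote to int before arithmetic:
    //   static assert(is(typeof(nub + nb) == NormalisedInt!int));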

In the case that you are unsure, #1 is the only one that leaves room to 
make a decision later. I think you should start with that and see what 
happens.

What may turn out to happen is that most people only use one type, and 
then casts aren't going to be a problem.
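
A sketch of what option #1 could look like, assuming a hypothetical
NormalisedInt whose opBinary only accepts the same instantiation:

    struct NormalisedInt(T)
    {
        T value;

        // Same-instantiation arithmetic only; mixing NormalisedInt!ubyte
        // with NormalisedInt!byte simply fails to compile, leaving the
        // promotion decision to the user.
        NormalisedInt opBinary(string op : "+")(NormalisedInt rhs) const
        {
            // Wrapping add for brevity; a real type would saturate.
            return NormalisedInt(cast(T)(value + rhs.value));
        }
    }

    unittest
    {
        NormalisedInt!ubyte a, b;
        auto r = a + b; // same types: compiles
        static assert(!__traits(compiles, a + NormalisedInt!byte(0)));
    }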

-Steve

