colour lib

Marco Leise via Digitalmars-d digitalmars-d at puremagic.com
Thu Sep 1 13:09:44 PDT 2016


Am Wed, 31 Aug 2016 15:58:28 +1000
schrieb Manu via Digitalmars-d <digitalmars-d at puremagic.com>:

> I have an implementation issue to which I'm having trouble applying
> good judgement, so I'd like to survey opinions...
> 
> So, RGB colours depend on this 'normalised integer' concept, that is:
>   unsigned: luminance = val / IntType.max
>   signed: luminance = max(val / IntType.max, -1.0)
> 
> So I introduce NormalizedInt(T), which does that.
> 
> The question is, what should happen when someone does:
>   NormalisedInt!ubyte nub;
>   NormalizedInt!byte nb;
>   auto r = nub + nb;
> 
> What is typeof(r)?
> 
> There are 3 options that stand out, and I have no idea which one is correct.
> 1. Compile error, mismatching NormalisedInt type arithmetic shouldn't
> work; require explicit user intervention.
> 2. 'Correct' result, ie, lossless; is(typeof(r) ==
> NormalisedInt!short). Promote to type that doesn't lose precision,
> type conversion loses efficiency, but results always correct.
> 3. Do what normal int types do; is(typeof(r) == NormalisedInt!int) ie,
> apply the normal integer arithmetic type promotion rules. Classic
> pain-in-the-arse applies when implicitly promoted result is stored to
> a lower-precision value. Probably also slow (even slower) than option
> #2.
> 
> Are there other options?
> I'm tempted by #1, but that will follow right through to the colour
> implementation, which will lead to colour type casts all over the
> place.
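The 'normalised integer' mapping quoted above can be sketched in Python (a hypothetical illustration, not part of the proposed D library; the 8-bit case stands in for any IntType):

```python
def normalize_unsigned(val, bits=8):
    # unsigned: luminance = val / IntType.max
    return val / (2 ** bits - 1)

def normalize_signed(val, bits=8):
    # signed: luminance = max(val / IntType.max, -1.0); the signed range is
    # asymmetric, so the two most negative values (-127 and -128 for byte)
    # both clamp to -1.0
    return max(val / (2 ** (bits - 1) - 1), -1.0)

assert normalize_unsigned(255) == 1.0   # ubyte.max -> full luminance
assert normalize_signed(127) == 1.0     # byte.max  -> full luminance
assert normalize_signed(-128) == -1.0   # clamped, same result as -127
```

The clamp is what makes the signed mapping well-behaved: without it, byte.min would normalise to slightly less than -1.0.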

I'd suspect #1 to be the best option, too. However, I don't
know when users will deal with these calculations. Surely
adding sRGB(22,22,22) + sRGB(11,11,11) gives sRGB(28,28,28):
the addition is performed at a higher precision (in linear
light) and the result is then rounded back. Anything requiring
multiple operations on an image should use a higher-precision
linear color space from the start.
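That addition can be checked with a small Python sketch (not D, and independent of Manu's library) using the standard sRGB transfer curve:

```python
def srgb_decode(v):
    # 8-bit sRGB component -> linear light, standard sRGB transfer curve
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_encode(lin):
    # linear light -> 8-bit sRGB component, rounded to nearest
    c = 12.92 * lin if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055
    return round(c * 255.0)

def add_srgb(a, b):
    # add two sRGB components in linear light, then re-encode
    return srgb_encode(srgb_decode(a) + srgb_decode(b))

print(add_srgb(22, 11))  # -> 28
```

Adding the raw 8-bit values instead (22 + 11 = 33) would be wrong precisely because sRGB is nonlinear; the round trip through linear light is what yields 28.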

-- 
Marco
