colour lib

Manu via Digitalmars-d digitalmars-d at puremagic.com
Tue Aug 30 22:58:28 PDT 2016


I have this implementation issue that I'm having trouble applying
good judgement to, so I'd like to survey opinions...

So, RGB colours depend on this 'normalised integer' concept, that is:
  unsigned: luminance = val / IntType.max
  signed: luminance = max(val / IntType.max, -1.0)

So I introduce NormalizedInt(T), which does that.
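
For illustration, here's a rough sketch of the idea in D (hypothetical
names only, not the actual implementation; the real struct carries a
lot more machinery than this):

  struct NormalizedInt(T)
  {
      T value;

      // map the raw integer onto [0, 1] (unsigned) or [-1, 1] (signed)
      double toFloat() const
      {
          import std.traits : isUnsigned;
          import std.algorithm : max;
          static if (isUnsigned!T)
              return cast(double)value / T.max;
          else
              return max(cast(double)value / T.max, -1.0); // T.min/T.max < -1, so clamp
      }
  }

So NormalizedInt!ubyte(255).toFloat() == 1.0, and
NormalizedInt!byte(-128).toFloat() == -1.0.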

The question is, what should happen when someone does:
  NormalizedInt!ubyte nub;
  NormalizedInt!byte nb;
  auto r = nub + nb;

What is typeof(r)?

There are 3 options that stand out, and I have no idea which one is correct.
1. Compile error; arithmetic between mismatched NormalizedInt types
shouldn't work, and requires explicit user intervention.
2. 'Correct' result, ie, lossless; is(typeof(r) ==
NormalizedInt!short). Promote to a type that doesn't lose precision;
the type conversion costs efficiency, but the results are always
correct (see the sketch after this list).
3. Do what normal int types do; is(typeof(r) == NormalizedInt!int),
ie, apply the normal integer arithmetic type promotion rules. The
classic pain-in-the-arse applies when the implicitly promoted result
is stored back to a lower-precision value. Probably also slow (even
slower than option #2).
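
To make #2 concrete, the building block it needs is a re-normalisation
between integer types; a sketch (convertNorm is a hypothetical helper,
and the float round-trip is exactly where the efficiency cost comes
from):

  // hypothetical helper: re-normalise from one integer type to another
  // (sketch only; ignores the signed -> unsigned case)
  NormalizedInt!To convertNorm(To, From)(NormalizedInt!From v)
  {
      import std.algorithm : max, min;
      double f = max(cast(double)v.value / From.max, -1.0);
      f = min(f, 1.0);
      return NormalizedInt!To(cast(To)(f * To.max));
  }

Under #2, opBinary would effectively do something like
convertNorm!short on both operands before adding; under #3, it'd be
convertNorm!int.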

Are there other options?
I'm tempted by #1, but that will follow right through to the colour
implementation, which will lead to colour type casts all over the
place.
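
For example, with a hypothetical convert like the one sketched above,
the call site under #1 ends up looking something like:

  auto r = convertNorm!short(nub) + convertNorm!short(nb);

and the same noise repeats wherever mixed colour types meet.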

