Article: Functional image processing in D

Vladimir Panteleev vladimir at thecybershadow.net
Fri Mar 21 10:41:27 PDT 2014


On Friday, 21 March 2014 at 17:24:02 UTC, Jakob Ovrum wrote:
> What happens if negative values sneak into the code? Sounds 
> dangerous.

Well, working with unsigned values correctly complicates the code 
by a good deal, for one. For example, if you want to draw a 
circle at (x,y) with radius r, then the first column of the 
bounding box is max(x-r,0). If you use unsigned coordinates, you 
have to write x-min(r,x) instead, which is a lot less intuitive. 
Not to mention that it makes sense to draw a circle with 
negative center coordinates (you'll only see the fragment with 
non-negative coordinates).
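A quick sketch of the difference, using made-up values (x, r and the clamp are illustrative, not code from image.d):

```d
import std.algorithm : max, min;

void main()
{
    // Hypothetical values: circle center x = 5, radius r = 10,
    // so the circle sticks out past the left edge of the image.
    int x = 5, r = 10;
    assert(max(x - r, 0) == 0);     // signed: the clamp reads naturally

    uint ux = 5, ur = 10;
    // ux - ur would wrap around to a huge value, so the clamp has
    // to be folded into the subtraction instead:
    assert(ux - min(ur, ux) == 0);  // unsigned: a lot less intuitive
}
```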

There are also tricky bits, like when you need to subtract one 
value from another, divide the result, then add something back. 
With a signed type you get the expected positive number, even 
if the intermediate value being divided was negative. With an 
unsigned type, the subtraction can underflow, and the (unsigned) 
division then interprets the result as a very large positive 
number instead of a small negative one.
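A worked example with made-up numbers (the values are illustrative only; the wrapped constants assume 32-bit int/uint):

```d
void main()
{
    int  a = 2,  b = 5;
    uint ua = 2, ub = 5;

    // Signed: (2 - 5) / 2 + 10  ->  (-3) / 2 + 10  ->  -1 + 10  ->  9
    int signedResult = (a - b) / 2 + 10;
    assert(signedResult == 9);

    // Unsigned: 2u - 5u wraps around to 4294967293, so the division
    // produces a huge positive value instead of -1.
    uint unsignedResult = (ua - ub) / 2 + 10;
    assert(unsignedResult == 2147483656u);
}
```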

>> For example, a problem I've struggled with is avoiding having 
>> two overloads for almost every function in image.d. I've tried 
>> multiple approaches: default arguments (in the form of *new 
>> Image!COLOR), templates, string mixins, UDAs, pointers, but 
>> they all were rather ugly or impractical. Some related 
>> compiler issues are 8074, 12386, 12425 and 12426 - fixing 
>> those might make some of those approaches more feasible.
>
> Referring to the overload sets with the `target` parameter?

Yes.

> (Looking at the source I also noticed some `isInputRange` 
> checks are missing; `ElementType` only checks for a `front` 
> property.)

Thanks.

>> That's what my previous design used. But ultimately, unless 
>> you're dealing with very narrow images, I don't think there 
>> will be a noticeable difference in performance. This design is 
>> more flexible, though (e.g. vjoiner can serve scanlines from 
>> different sources).
>
> Maybe parallelized blitting makes sense, though it would really 
> require a use case where blit speed is a bottleneck to matter 
> in the first place.

I agree; I think in most cases it makes sense to parallelize on a 
higher level.
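For instance, higher-level parallelism could hand whole scanlines to workers with std.parallelism; this is just a sketch over a flat placeholder buffer, not code from the library:

```d
import std.parallelism : parallel;
import std.range : iota;

// Fill each scanline of a flat pixel buffer in parallel.
// The dimensions and the per-row fill are placeholders for
// whatever real per-scanline work (blitting, filtering) applies.
uint[] fillRows(int width, int height)
{
    auto pixels = new uint[](width * height);
    foreach (y; parallel(iota(height)))
        pixels[y * width .. (y + 1) * width] = cast(uint)y;
    return pixels;
}

void main()
{
    auto pixels = fillRows(4, 3);
    assert(pixels[0] == 0 && pixels[5] == 1 && pixels[11] == 2);
}
```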

Searching the web for "parallel memcpy" seems to confirm my 
suspicion that it's not practical, at least not for conventional 
CPUs.


More information about the Digitalmars-d-announce mailing list