I only need about two decimal places of precision, and only for small numbers, so I store scaled integers and convert on read: `cast(double) x / 100.0`, where `x` is a `short`. I don't want to use `float` because it takes too much memory (I'm talking about an array of around 20000 elements). Why is there no 16-bit half (as opposed to double :))?