sqrt(2) must go

Manu turkeyman at gmail.com
Thu Oct 20 13:09:37 PDT 2011


I think you just brushed over my entire concern with respect to libraries,
and very likely the standard library itself.
I've also made what I consider to be reasonable counter-arguments to those
points in earlier posts, so I won't repeat myself.

I think it's fairly safe to say, though, with respect to Don's question, that
using a tie-breaker is extremely controversial. I can't see any way it
could be unanimously considered a good idea.
I stand by the call to ban implicit conversion between float/int. Some
might consider that a minor annoyance, but it also offers many potential
advantages and time savings down the line.

On 20 October 2011 22:21, Jonathan M Davis <jmdavisProg at gmx.com> wrote:

> On Thursday, October 20, 2011 21:52:32 Manu wrote:
> > On 20 October 2011 17:28, Simen Kjaeraas <simen.kjaras at gmail.com> wrote:
> > > On Thu, 20 Oct 2011 15:54:48 +0200, Manu <turkeyman at gmail.com> wrote:
> > >> I could only support 2 if it chooses 'float', the highest performance
> > >> version on all architectures AND actually available on all
> > >> architectures; given this is meant to be a systems programming
> > >> language, and supporting as many architectures as possible?
> > >
> > > D specifically supports double (as a 64-bit float), regardless of the
> > > actual hardware. Also, the D way is to make the correct way simple, the
> > > fast way possible. This is clearly in favor of not using float, which
> > > *would* lead to precision loss.
> >
> > Correct, on all architectures I'm aware of that don't have hardware
> > double support, double is emulated, and that is EXTREMELY slow.
> > I can't imagine any case where causing implicit (hidden) emulation of
> > unsupported hardware should be considered 'correct', and therefore made
> > easy.
>
> Correctness has _nothing_ to do with efficiency. It has to do with the
> result that you get. Losing precision means that your code is less correct.
>
> > The reason I'm so concerned about this is not what I may or may not do
> > in my own code, where I'm likely to be careful, but imagine some cool
> > library that I want to make use of... some programmer has gone and
> > written 'x = sqrt(2)' in this library somewhere; they don't require
> > double precision, but it was implicitly used regardless of their intent.
> > Now I can't use that library in my project.
> > Any library that wasn't written with the intent of use in embedded
> > systems in mind, and that happens to omit all of 2 characters from its
> > float literal, can no longer be used in my project. This makes me sad.
> >
> > I'd also like you to ask yourself realistically, of all the integers
> > you've EVER cast to/from a float, how many have ever been a big/huge
> > number? And if/when that occurred, what did you do with it? Was the
> > precision important? Was it important enough to you to explicitly state
> > the cast?
> > The moment you use it in a mathematical operation you are likely
> > throwing away a bunch of precision anyway, especially for the complex
> > functions like sqrt/log/etc in question.
>
> When dealing with math functions like this, it doesn't really matter
> whether the number being passed in is a large one or not. It matters what
> you want for the return type. And the higher the precision, the more
> correct the result, so there are a lot of people who would want the result
> to be real, rather than float or double. It's when your concern is
> efficiency that you start worrying about whether a float would be better.
> And yes, efficiency matters, but if efficiency matters, then you can
> always write 2.0f instead of 2. Don's suggestion results in the code being
> more correct in the general case and yet still lets you easily make it
> more efficient if you want. That's very much the D way of doing things.
>
> Personally, I'm very leery of making an int literal implicitly convert to
> a double when there's ambiguity (e.g. if the function could also take a
> float), because then the compiler is resolving ambiguity for you rather
> than letting you do it. It risks function hijacking (at least in the sense
> that you don't necessarily end up calling the function that you mean to;
> it's not an issue for sqrt, but it could matter a lot for a function that
> has different behavior for float and double). And that sort of thing is
> very much _not_ the D way.
>
> So, I'm all for integers implicitly converting to double so long as
> there's no ambiguity. But in any case where there's ambiguity or a
> narrowing conversion, a cast should be required.
>
> - Jonathan M Davis
>
