Uh... destructors?
Steven Schveighoffer
schveiguy at yahoo.com
Wed Feb 23 12:56:57 PST 2011
On Wed, 23 Feb 2011 15:28:32 -0500, bearophile <bearophileHUGS at lycos.com> wrote:
> Steven Schveighoffer:
>
>> I see zero value in disallowing comparing pointers in D.
>
> I have not suggested disallowing comparison of all pointers. I have
> suggested disallowing it only for pointers/references allocated inside a
> pure function, i.e. those marked @transparent.
That's not what your example showed. It showed comparing two allocated
pointers *outside* a pure function and expecting them to be equal. I see
that as disallowing all pointer comparisons.
>> What kinds of problems does pointer comparison cause? I know of none.
>
> If you compare pointers created in a pure function, you are breaking the
> most important constraint of pure functions: a pure function is
> deterministic. D pure functions aren't deterministic, because the values
> of the memory pointers they return can differ across different calls. If
> you leave this hole in pure functions, their purity is much less useful:
> you can't perform optimizations, and you can't reason about code the way
> you can with truly pure functions.
>
> Currently you are able to write functions like:
>
> pure bool randomPure() {
>     int[] a1 = new int[1];
>     int[] a2 = new int[2];
>     return a1.ptr > a2.ptr;
> }
This is the first real example you have given that shows a problem! It
uses only constructs that are valid within pure functions (no casts), and
by the current rules it could be considered strong-pure; however, it
violates the rule that a pure function called with the same parameters
must produce the same answer.
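For illustration, here is a rough sketch of how that could bite
(hypothetical optimizer behavior on my part, not something the compiler
is known to do today):

void main()
{
    // A strong-pure function of no arguments may legally be folded
    // into a single call by the optimizer:
    bool b1 = randomPure();
    bool b2 = randomPure(); // may be replaced with b2 = b1
    // With folding, b1 == b2 always holds; without it, the allocator
    // may place a1/a2 differently on each call, so the results can
    // disagree: observable nondeterminism from a "pure" function.
    assert(b1 == b2); // may pass or fail depending on optimization
}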
> Is this function pure? It returns a pseudo-random boolean. You are free
> to define this function as pure, but this form of purity is much weaker
> than what people usually mean by "pure".
I would not define the function as pure. The questions to answer are:
1. Can the compiler detect valid reference comparisons without annotation?
2. Is this going to be a common problem?
At first glance, I'd say the answer to number 2 is no. Most people are
not going to use the randomness of the memory allocator to subvert
purity, and it's unlikely that you would accidentally write code like
that.

As for number 1, I don't know. I don't think the compiler can determine
the origin of allocated memory without annotation.
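To make the difficulty concrete, here is a rough sketch (the helper name
and code are mine, purely illustrative):

pure bool cmpPtrs(const(int)[] a, const(int)[] b)
{
    // From the signature alone there is no way to tell whether a and b
    // refer to memory freshly allocated by the caller or to something
    // that outlives this call.
    return a.ptr > b.ptr;
}

pure bool indirectRandom()
{
    int[] a1 = new int[1];
    int[] a2 = new int[2];
    // The problematic comparison hides behind a call boundary, so a
    // purely local check at the comparison site can't catch it; the
    // compiler would need to track allocation origin across calls.
    return cmpPtrs(a1, a2);
}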
>
> With the type system change I have proposed, it becomes:
>
> pure bool randomPure() {
>     @transparent int[] a1 = new int[1];
>     @transparent int[] a2 = new int[2];
>     return a1.ptr > a2.ptr; // compile-time error: the ptrs are
>                             // @transparent, so they can't be read
> }
I can now see the value in this; I just wonder if it would be worth it.
It seems like too rare a bug to justify a whole new type constructor.
It also has some unpleasant effects. For example, the object equality
operator does this:
bool opEquals(Object o1, Object o2)
{
    if (o1 is o2)
        return true;
    ...
}
So this optimization would be unavailable inside pure functions, no? Or
would it require a dangerous cast?
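To be concrete, under your proposal I'd expect it to look something like
this (hypothetical syntax on my part, since @transparent doesn't exist):

bool opEquals(@transparent Object o1, @transparent Object o2)
{
    // The identity shortcut needs the raw reference values, so it has
    // to strip the qualifier with a cast, which is exactly the kind of
    // "dangerous" escape hatch the annotation is meant to forbid.
    if (cast(Object) o1 is cast(Object) o2)
        return true;
    // ... fall back to member-wise comparison as before
}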
Would it be enough to just require this type of restriction in pure @safe
functions?
I feel that a new type of reference is likely overkill for this issue.
>> Showing an
>> assert that two pointers are not equal is not evidence of an error; it's
>> evidence of incorrect expectations.
>
> One of the main purposes of a type system is indeed to disallow programs
> based on incorrect expectations, to help programmers who may not always
> remember what the incorrect expectations are :-)
My point in the above statement is that when you write:

auto a = new int[1];
auto b = new int[1];
assert(a.ptr == b.ptr);

the failing assert is not evidence of an error :) This is what you showed
previously.
-Steve