1 matches bool, 2 matches long
Andrei Alexandrescu
SeeWebsiteForEmail at erdani.org
Sun Apr 28 06:38:54 PDT 2013
On 4/28/13 4:40 AM, Walter Bright wrote:
> On 4/27/2013 9:38 PM, kenji hara wrote:
>> On the other hand, D looks like it has a *special rule* for the 0 and
>> 1 literals with the boolean type. Even if the underlying rules are
>> sane (the partial ordering rule and VRP), the combination produces
>> weird behavior.
>
> Again, whether it is "weird" or not comes from your perspective. From
> mine, a bool is a 1 bit integer. There is nothing weird about its
> behavior - it behaves just like all the other integer types.
Let me start by saying that I agree with the view that this isn't a
large or important issue; also, as language proponents with a lot of
things to look at and work on, it seems inefficient to develop the
language from one strident argument to another regardless of their true
weight.
That being said, I don't think Walter is framing the problem correctly.
The advantage of his approach is simplicity: bool is to the extent
possible a 1-bit integer (with particularities stemming from its small
size). (I presume it's an unsigned type btw.) That makes a lot of rules
that apply to integers also apply automatically to bool. There remain a
few peculiarities that have been mentioned:
1. The relationship between sizeof(bool), the cardinality of Boolean
values, .min and .max, etc. is unlike that for integers.
2. Conversion rules from other integrals to bool (0 is preserved, any
nonzero value converts to 1) differ from those among non-bool integrals
(truncation etc.).
3. A variety of operators (such as += or *=) are not allowed for bool.
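A short D sketch of these peculiarities (a hedged illustration; the exact set of rejected operators and diagnostics depends on the compiler version):

```d
void main()
{
    // 1. Storage vs. cardinality: one byte of storage for two values.
    static assert(bool.sizeof == 1);
    static assert(bool.min == false && bool.max == true);

    // 2. Conversion to bool maps zero to false and any nonzero value
    //    to true -- unlike the truncation used between other integrals.
    assert(cast(bool) 42 == true);
    assert(cast(byte) 300 == 44);   // truncation, not zero/nonzero

    // 3. Some compound assignments are rejected for bool.
    bool b1 = true, b2 = false;
    static assert(!__traits(compiles, b1 *= b2));
}
```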
These distinctions (probably there are a few subtler ones) and their
consequences erode the simplicity advantage. Any serious argument based
on simplicity should acknowledge that.
The larger issue here goes back to good type system design. At the
highest level, a type system aspires to: (a) allow sensible and
interesting programs to be written easily; and (b) disallow non-sensible
or uninteresting programs from being written. Real type systems
inevitably allow at least a few uninteresting programs to be written,
and fail to allow some interesting programs. The art is in minimizing
the size of these sets.
From that perspective, bool, as a first-class built-in type, fares
rather poorly. It allows a variety of nonsensical programs to pass
typechecking. For example, bool is allowed as the denominator in a
division or remainder operation. There is no meaningful program that
could use such an allowance: the computation is either trivial if the
bool is true, or stuck if it's false.
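To make the point concrete, a minimal D sketch (assuming the behavior described above, where such code passes typechecking):

```d
void main()
{
    int x = 10;
    bool b = true;
    int y = x / b;    // typechecks, yet is never useful:
    assert(y == 10);  // trivial when b is true...
    // ...and a runtime division by zero when b is false.
}
```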
Then there is a gray area, such as multiplying an integer by a bool;
arguably "a * b" is a shortcut for "if (!b) a = 0;" or "b ? a : 0" or "a
* (b ? 1 : 0)" if b is a boolean. One might argue this is occasionally
useful.
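The equivalences in question can be sketched as follows (assuming bool promotes to 0 or 1 in arithmetic, as discussed here):

```d
void main()
{
    int a = 7;
    foreach (b; [false, true])
    {
        // "a * b" agrees with both longhand spellings.
        assert(a * b == (b ? a : 0));
        assert(a * b == a * (b ? 1 : 0));
    }
}
```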
Then there is a firmer area of cooperation between bool and other
numerics, e.g. a[0 .. a.length - b], where b is a bool. I'm seeing these
in code now and then and I occasionally write them. I personally find
code that needs to use a[0 .. a.length - (b ? 1 : 0)] rather pedestrian,
but not unbearably so.
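A hedged sketch of that slice idiom: because bool promotes to an integer in arithmetic, both the terse and the pedestrian spellings typecheck and agree:

```d
void main()
{
    int[] a = [1, 2, 3, 4];
    bool b = true;
    assert(a[0 .. a.length - b] == [1, 2, 3]);
    assert(a[0 .. a.length - (b ? 1 : 0)] == [1, 2, 3]);
}
```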
Tightening the behavior of bool to disallow nonsensical programs is
arguably a good thing to do. Arguing against it would require explaining,
e.g., why operations such as "b1 *= b2" (with b1 and b2 of type bool)
were deemed undesirable but "b1 / b2" was not.
If enough differences accumulate to make bool quite a different type
from a regular integral, then the matter of overloading with long,
conversion from literals 1 and 0 etc. may be reopened. Even then, it
would be a difficult decision.
Finally, I felt compelled to add a larger point. This:
> It's like designing a house with a fixed footprint. You can make the
> kitchen larger and the bathroom smaller, or vice versa, but you can't
> make them both bigger.
This is a terrible mental pattern to put oneself in. Design problems
often seem - or can be framed - as such, and the zero-sum-game pattern
offers a cheap argument for denying further consideration.
We've been stuck in many problems that looked that way, and the first
step is to systematically destroy that pattern from the minds of
everyone involved. We've been quite successful at that a few times:
template constraints, integral conversions and VRP, cascaded comparisons
"a < b < c", ordering comparisons between signed and unsigned integrals,
and more. They all seemed to be zero-sum design problems to which no
approach was better than others; once that was removed and ingenuity was
allowed to have its say, solutions that had escaped scrutiny came to the
table. From the perspective of the zero-sum game, those are nothing
short of miraculous.
Andrei