Introducing Nullable Reference Types in C#. Is there hope for D, too?

Walter Bright newshound2 at digitalmars.com
Sun Nov 19 22:54:38 UTC 2017


On 11/19/2017 11:36 AM, Timon Gehr wrote:
> On 19.11.2017 05:04, Walter Bright wrote:
>> On 11/18/2017 6:25 PM, Timon Gehr wrote:
>>> I.e., baseClass should have type Nullable!ClassDeclaration. This does not in 
>>> any form imply that ClassDeclaration itself needs to have a null value.
>>
>> Converting back and forth between the two types doesn't sound appealing.
>> ...
> 
> I can't see the problem. You go from nullable to non-nullable by checking for 
> null, and the other direction happens implicitly.

Implicit conversions have their problems with overloading, interactions with 
const, template argument deduction, surprising edge cases, probably breaking a 
lot of Phobos, etc. It's best not to layer on more of this stuff. Explicit 
casting is a problem, too.
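
Here's a minimal sketch of what the two directions look like as a library type,
using today's std.typecons.Nullable; the ClassDeclaration stand-in and the
requireBase helper are made up for illustration, not existing code:

    import std.typecons : Nullable, nullable;

    class ClassDeclaration
    {
        string name;
        this(string name) { this.name = name; }
    }

    // Nullable -> non-nullable: check for null, then unwrap.
    ClassDeclaration requireBase(Nullable!ClassDeclaration maybe)
    {
        assert(!maybe.isNull, "expected a base class");
        return maybe.get;
    }

    void main()
    {
        auto maybeBase = nullable(new ClassDeclaration("Object"));
        ClassDeclaration base = requireBase(maybeBase);

        // Non-nullable -> nullable: implicit under the proposed scheme,
        // an explicit wrap with a library type today.
        Nullable!ClassDeclaration back = nullable(base);
        assert(back.get is base);
    }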

There's also an issue of how to derive a class from a base class.


>>>> What should the default initializer for a type do?
>>> There should be none for non-nullable types.
>> I suspect you'd wind up needing to create an "empty" object just to satisfy 
>> that requirement. Such as for arrays of objects, or objects that form a cyclic graph.
> Again, just use a nullable reference if you need null. The C# language change 
> makes the type system strictly more expressive. There is nothing that cannot be 
> done after the change that was possible before, it's just that the language 
> allows one to document and verify intent better.

This implies one must know all the use cases of a type before designing it.
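
A small illustration of the array case, with D as it is today (Node is just a
stand-in type for this sketch): a freshly allocated array of class references
is all nulls, so non-nullable elements would force either an "empty" object or
eager construction of every slot.

    class Node
    {
        Node next;   // a cyclic structure needs a temporarily-null link, too
    }

    void main()
    {
        auto nodes = new Node[4];    // today: four null references
        assert(nodes[0] is null);

        // With non-nullable elements, something must fill each slot before
        // the array can be used at all:
        foreach (ref n; nodes)
            n = new Node;
    }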


>> Yes, my own code has produced seg faults from erroneously assuming a value was 
>> not null. But it wouldn't have been better with non-nullable types, since the 
>> logic error would have been hidden
> 
> It was your own design decision to hide the error.

No, it was a bug. Nobody makes design decisions to insert bugs :-) The issue is 
how easy the bug is to introduce, and how difficult it would be to discover.


>> and may have been much, much harder to recognize and track down.
> No, it would have been better because you would have been used to the more 
> explicit system from the start and you would have just written essentially the 
> same code with a few more compiler checks in those cases where they apply, and 
> perhaps you would have suffered a handful fewer null dereferences.

I'm just not convinced of that.


> The point of types is to classify values into categories such that types in the 
> same category support the same operations. It is not very clean to have a 
> special null value in all those types that does not support any of the 
> operations that references are supposed to support. Decoupling the two concepts 
> into references and optionality gets rid of this issue, cleaning up both concepts.

I do understand that point. But I'm not at all convinced that non-nullable types 
in aggregate result in cleaner, simpler code, for reasons already mentioned.

>> I wish there was a null for int types.
> AFAIU, C# will now have 'int?'.

Implemented as a pointer to int? That is indeed one way to do it, but rather costly.
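
For comparison, Phobos' std.typecons.Nullable!int is a struct carrying the
value plus a flag rather than a pointer, and as far as I know C#'s int? is a
value type along the same lines. A minimal sketch with the Phobos type:

    import std.typecons : Nullable;

    void main()
    {
        Nullable!int maybe;              // value + flag, no heap allocation
        assert(maybe.isNull);

        maybe = 42;
        assert(!maybe.isNull && maybe.get == 42);
    }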


> It can also be pretty annoying.

Yes, it can be annoying. So much better to have a number that looks like it 
might be right but isn't, because 0.0 was used as a default initializer when it 
should have been 1.6. :-)


> It really depends on the use case. Also this is 
> in direct contradiction with your earlier points. NaNs don't usually blow up.

"blow up" - as I've said many times, I find the visceral aversion to seg faults 
puzzling. Why is that worse than belatedly discovering a NaN in your output, 
which you then have to trace back to its source?

My attitude towards programming bugs is to have them halt the program as soon 
as possible, so that:

1. I know an error has occurred, i.e. I don't get corrupt results that I assumed 
were correct, leading to more adverse consequences
2. The detection of the error is as close as possible to where things went wrong

Having floats default initialize to 0.0 is completely antithetical to (1) and 
(2); NaN at least addresses (1).
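
A small sketch of the point, using D's existing NaN default for floating
point (the price/scale names are made up for illustration):

    import std.math : isNaN;

    double scale;                  // a double defaults to double.nan in D

    double price(double base)
    {
        return base * scale;       // bug: `scale` was never set
    }

    void main()
    {
        auto p = price(100.0);
        // The mistake shows up in the output (point 1); a 0.0 default would
        // have produced a plausible-looking but wrong number instead.
        assert(isNaN(p));
    }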

There have been many long threads on this topic in this forum. Yes, I understand 
that it's better for game programs to ignore bugs, because gamers don't care 
about corrupt results; they only care that the program keeps running and doing 
something. For the rest of us, aren't we ready to be done with malware inserted 
via exploitable bugs?

By the way, I was initially opposed to having seg faults produce stack traces, 
saying it was the debugger's job to do that. I've since changed my mind. I very 
much like the convenience of the stack trace dump and rely on it all the time. 
I even insert code to force a seg fault just to get a stack trace. I was wrong 
about its utility.
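
Here's what that trick can look like; whether the crash prints a stack trace
directly or just hands you a core dump for the debugger depends on the
platform and on what signal handling the runtime has registered, so treat
that part as an assumption:

    // Deliberately crash at the point of interest to get a trace.
    void forceTrace()
    {
        int* p = null;
        *p = 0;      // seg fault on purpose
    }

    void main()
    {
        forceTrace();
    }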


> I'm not fighting for explicit nullable in D by the way.

Thanks for clarifying that.

