Asked on Reddit: Which of Rust, D, Go, Nim, and Crystal is the strongest and why?

Idan Arye via Digitalmars-d digitalmars-d at puremagic.com
Thu Jun 11 10:22:52 PDT 2015


On Thursday, 11 June 2015 at 13:21:27 UTC, Dave wrote:
>> Exceptions are not meant to force handling errors at the source.
>
> This attitude is why so many exceptions go unhandled at the 
> upper layers. When you have a top level that calls a function 
> that calls 50 other functions, each of which throws a handful 
> or more different exceptions, it's unreasonable to expect the 
> upper layer coder to account for all of them. In fact, in 
> practice they usually don't until bugs arise. This is provably 
> bad practice.

I'd rather have an exception unhandled at the top level than 
discarded at a middle level. It's much easier to debug when you 
get a proper stack trace. Also, the top level handling can be 
very generic if its purpose is not to solve the problem but to 
log it and to allow the user to continue using the other parts 
of the program as much as possible.

>> If you want to force handling errors at the source they should 
>> be part of the return type.
>
> Again, what errors are worth throwing as exceptions in your 
> paradigm? Which ones are worth returning? This separation is 
> very arbitrary for my taste.

Exceptions are for when something went wrong. Returned errors are 
for when the function can't do what you asked it to do, but that 
doesn't mean that something went wrong.

For example, if you try to write to a file and fail, that's an 
exception, because something went wrong (e.g. not enough disk 
space, or a permissions problem).

But if you have a function that parses a string to a number and 
you call it with a non-numeric string, that doesn't necessarily 
mean that something went wrong. Maybe I don't expect all strings 
to be convertible to numbers, and instead of parsing each string 
twice (once for validation and once for the actual conversion) I 
prefer to just convert and rely on the conversion function to 
tell me if it's not a number.
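A minimal sketch of this pattern in Rust, whose standard 
`str::parse` returns a Result instead of throwing, so a single 
call both converts and reports failure:

```rust
// The caller decides whether a failed parse is actually a
// problem - no separate validation pass is needed.
fn main() {
    let inputs = ["42", "not-a-number"];
    for s in &inputs {
        // str::parse returns Result<i32, ParseIntError>.
        match s.parse::<i32>() {
            Ok(n) => println!("{} is the number {}", s, n),
            Err(_) => println!("{} is not a number; treating it as a string", s),
        }
    }
}
```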

Note that this doesn't mean that every time a function returns an 
error there is no problem - returned errors can indicate 
problems; the point is that it's not up to the callee to decide, 
it's up to the caller. The conversion function doesn't know 
whether I'm parsing a YAML file, where a non-numeric field value 
just means it's a string, or my own custom file format, where a 
non-numeric value in a specific place means the file is 
corrupted.

In the latter case, I can convert the returned error to an 
exception (the returned error's type should have a method that 
returns the underlying result if it's OK and raises an exception 
if there was an error), but it's the caller's decision, not the 
callee's.
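Rust's Result has exactly such a method: `expect` (and `unwrap`) 
returns the underlying value on Ok and escalates to a panic on 
Err. A small sketch of the two callers described above:

```rust
fn main() {
    // Caller A: a non-number is expected, so handle the Err
    // branch locally with a fallback value.
    let v = "abc".parse::<i32>().unwrap_or(0);
    println!("fallback value: {}", v);

    // Caller B: a non-number means the input is corrupt, so
    // escalate the returned error to a panic. .expect() is the
    // "return the value or raise" method described above.
    let n = "17".parse::<i32>().expect("corrupt file: field must be numeric");
    println!("parsed: {}", n);
}
```

Either way, the conversion function itself stays neutral; the 
escalation decision lives entirely at the call site.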

>> Exceptions are not "hard fails".
>
> They can be if they go unaccounted for (depending on the 
> language/environment). Java has the infamous 
> NullPointerException that plagues Java applications. C# has the 
> NullReferenceException.

Even if they go unaccounted for, you still get a nice stack trace 
that helps you debug them. Maybe we have different definitions 
for "hard fail"...

>>>> It doesn't really guarantee the functions not annotated as 
>>>> throwing won't crash
>>>
>>> Combined with other guarantees (such as immutability, thread
>>> local storage, safe memory management, side-effect free code, 
>>> no
>>> recursion, etc), you can make a reasonable guarantee about the
>>> safety of your code.
>>
>> And a superhero cape, combined with an airplane, airplane fuel 
>> and flight school, allow you to fly in the air.
>
> Not really sure how to parse this...Doesn't seem like you have 
> any good argument against what I said. Again I said you can 
> make a *reasonable* guarantee. And I am not alone here. If you 
> look at Rust, it really does illustrate a trend that functional 
> programming has been pushing for a long time. Provable 
> guarantees. Problems are very rarely unique. There are a core 
> set of things that happen frequently that cause problems. And 
> these things are easily recognizable by compilers. You can't 
> prevent everything, but you can prevent a good deal of the 
> obvious stuff. This is just an extension of that mindset. So it 
> is not really that outlandish.
>
>> It is the other restrictions (without getting into a 
>> discussion about each and every restriction in the list) that 
>> make the code safer - nothrow doesn't really contribute IMO.
>
> Without the nothrow, you cannot guarantee it won't cause 
> problems
> with unhandled errors ;) Seems like a nice guarantee to me. I
> would at least like this option, because library writers often
> try to write in an idiomatic way (and I tend to use the most 
> reputable libraries I can find), which gives you some 
> guarantees. The guarantee would be better served by default 
> IMHO though.

Even with nothrow you can't guarantee a function won't cause 
problems with unhandled errors - unless you use a very strict 
definition of handling errors, one that includes discarding them 
or crashing the program. nothrow can only guarantee the function 
won't expose any problems you could use the exceptions mechanism 
to debug or deal with - not very useful, considering how easy it 
is to convert an error that plays nice with the exceptions 
mechanism into an error that horribly crashes the program...

> Having a 'throw' keyword is also useful for IDEs. Anybody that 
> has used Visual Studio and C# will tell you a nice feature is 
> that Visual Studio can tell you what exceptions get thrown when 
> calling a method (very useful). C# does it in a different way, 
> but a 'throw' keyword would actually help scanners figure this 
> out very trivially, as well as programmers just reading the 
> header of a function.

If you insist on forcing developers to handle exceptions close to 
the source, or to explicitly pass them on, I guess it can be 
useful to let them know what it is they are required to handle or 
pass on. Still, I don't think it's a good idea to needlessly 
burden people just so you can provide them with the tools to 
better handle that burden.

>> Scala and Rust seem to maintain both paradigms just fine. It's 
>> actually beneficial to have both - you have to acknowledge 
>> return-type-based exceptions, and you can always bypass them 
>> by turning them to exceptions, which are good for logging and 
>> debugging.
>
> I do not believe Rust has exceptions.

It has panics, which are different from exceptions in that you 
can't catch them (unless it's from another thread), but close in 
their usage to what I have in mind when referring to exceptions - 
they don't allow you to go on with what you were trying to do, 
but they do allow you to debug the problem and/or back down from 
it gracefully.
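The from-another-thread case can be sketched like this: a panic 
unwinds the panicking thread, but the spawning thread observes it 
as an Err from `join()` and can back down gracefully:

```rust
use std::thread;

fn main() {
    // The worker panics; its own execution cannot continue.
    let handle = thread::spawn(|| {
        panic!("something went wrong");
    });

    // The spawning thread sees the panic as a join() error and
    // gets to decide how to react.
    match handle.join() {
        Ok(_) => println!("worker finished normally"),
        Err(_) => println!("worker panicked; logging and moving on"),
    }
}
```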

> I don't mind a language having multiple ways to handle errors.
> Seeing how it's a topic no one ever is on the same page about,
> it's actually a wise design decision. But you don't often see
> library writers mixing them for consistency purposes. It's just
> easier for people to learn your library when you have one error
> handling scheme. It's usually encountered only where two
> libraries written by different vendors have to interact in
> application code.

It's not a matter of preference, just like choosing between int 
and float is not a matter of preference. Each type of error has 
its own purpose, and a library can use them both.

>> If exception handling is enforced, they can only be bypassed 
>> by converting them to errors or crashes, which are much less 
>> nice than exceptions when it comes to debugging, logging and 
>> cleanup.
>
> Exceptions have many benefits. They have many disadvantages 
> too. They are often very slow on the exceptional path, which 
> occurs more frequently than most admit.

Returned errors provide a faster error path, and when you have to 
decide if an error should be returned or thrown, the ones that 
should be thrown are usually the ones where you don't care as 
much about speed.

>> Writing code that acknowledges that this code can fail due to 
>> an exception somewhere else does not count as ignoring it.
>
> You could not handle it all the way up the chain (at the cost of
> adding something to a definition, not much trade-off there).

Assuming you have control over the definition, which is not 
always the case. In Java, for example, when you implement an 
interface you have no control over the signatures of its methods. 
The library that provided that interface doesn't know what the 
implementers are going to do, so it has to either mark the 
methods as all-throwing (which kind of defeats the purpose of the 
`throws` annotation) or pretend its implementers can't throw 
(which we know is not true).
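The same tension shows up with return-type-based errors. A rough 
Rust analogue (hypothetical names): a library trait has to fix 
the failure contract up front, so every implementer is forced 
into Result even when it can never fail:

```rust
// Hypothetical library-side trait: the author must decide *now*
// whether implementers may fail, and with what error type.
#[derive(Debug, PartialEq)]
struct HandlerError(String);

trait Handler {
    // The "all-throwing" choice: every implementer returns
    // Result, even ones that cannot fail.
    fn handle(&self, input: &str) -> Result<String, HandlerError>;
}

struct Upcase;

impl Handler for Upcase {
    fn handle(&self, input: &str) -> Result<String, HandlerError> {
        // This implementation cannot fail, yet the signature,
        // fixed by the trait author, says it can.
        Ok(input.to_uppercase())
    }
}

fn main() {
    println!("{}", Upcase.handle("hello").unwrap());
}
```

Had the trait author instead returned a plain String, fallible 
implementers would be the ones left with no channel to report 
errors, which mirrors the Java `throws` dilemma exactly.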

> That would essentially be ignoring it. From a design 
> perspective you could also have some mechanism (be it a keyword 
> or whatever) that you use to explicitly 'suppress' them for 
> code that needs to be faster, or code where you literally don't 
> care. In application code, you care. It's a good default to 
> force people to handle errors as they occur (which is what I am 
> talking about, defaults). If they wish to not handle them 
> there, it's not at all hard to imagine ways to allow people to 
> 'suppress' them or 'pass them up' when the situation calls for 
> it. Force people to deal with them by default, let them 
> explicitly handle it in another way. Or they can just use 
> return types.

Returned errors are a faster mechanism that forces you to deal 
with the error at the source, or to explicitly transfer it to the 
upper level at the cost of changing the function's signature. 
Exceptions are a slower mechanism that allows you to deal with 
errors far from the source without requiring special function 
signatures in the middle levels (when they do require them, 
that's syntactic salt, not a requirement of the mechanism).

nothrow by default combines the slowness of exceptions with the 
restrictiveness of returned errors. Why would anyone want to do 
that?

