RFC: Change what assert does on error

Timon Gehr timon.gehr at gmx.ch
Wed Jul 9 04:47:21 UTC 2025


On 7/7/25 23:44, Dukc wrote:
> On Sunday, 6 July 2025 at 23:53:54 UTC, Timon Gehr wrote:
>> The insane part is that we are in fact allowed to throw in `@safe 
>> nothrow` functions without any `@trusted` shenanigans. Such code 
>> should not be allowed to break any compiler assumptions.
> 
> Technically, memory safety is supposed to be guaranteed by not letting 
> you _catch_ unrecoverable throwables in `@safe`.

No. `@system` does not mean: This will corrupt memory. It means the 
compiler will not prevent memory corruption. At the catch site you are 
not able to make guarantees about the behavior of an open set of code, 
so making a catch statement a `@system` operation is frankly ridiculous.

Things that are _actually_ supposed to be true:
- Destructors should run when objects are destroyed.
- The language should not assume that there will never be bugs in 
`@safe` code.
- It is easily possible to make a failing process record some 
information related to the crash.
- Graceful shutdown has to be allowed in as many cases as reasonably 
possible.

I.e., it's best to approach a discussion of what is "supposed" to hold 
from the perspective of practical requirements.

Coming up with some ideological restrictions that seem nice on paper and 
defending them in the face of obvious clashes with reality is just not a 
recipe for success.

> When you do catch them, 
> you're supposed to verify that any code you have in the try block 
> (including called functions) doesn't rely on destructors or similar for 
> memory safety.
> ...

This makes no sense. That can be your entire program.

> I understand this is problematic, because in practice pretty much all 
> code often is guarded by a top-level pokemon catcher,

Yes.

> meaning 
> destructor-relying memory safety isn't going to fly anywhere. I guess we 
> should just learn to not do that,

No, that would be terrible. A memory-safety violation is just the smoking 
gun; it is not the only kind of inconsistency that will happen.

I want my code to be correct, not only memory safe.

> or else give up on all `nothrow` optimisations.

Yes.

> I tend to agree with Dennis that a switch is not the way 
> to go as that might cause incompatibilities when different libraries 
> expect different settings.
> ...

`nothrow` "optimizations" (based on hot air and hope) are what is 
problematic. It's not a sound transformation.

It's not like linking together different libraries with different 
settings is any more problematic than the current behavior.

I.e., the issue is not some sort of "compatibility", it is specifically 
the `nothrow` "optimizations".

If a library actually requires them in order to compile due to 
destructor attributes (which I doubt is an important concern in 
practice; e.g., a `@system` destructor paired with a `@safe` constructor 
does not seem like sane design), that's a library I will just not use. 
Even then, supporting it would still be possible using separate 
compilation, or by making the "optimization" setting a `pragma` or 
something similar -- not an attribute that the compiler will sometimes 
implicitly slap on your code to change how it behaves.

> In idea: What if we retained the `nothrow` optimisations,

No. Kill with fire.

> but changed 
> the finally blocks so they are never executed for non-`Exception` 
> `Throwable`s unless there is a catch block for one?

Maybe. Can be confusing though.
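
As I read the proposal, the resulting asymmetry would look something 
like this (a sketch of semantics as I understand them; `mightFail` and 
`cleanup` are hypothetical helpers):

```d
// Hypothetical helpers for illustration only.
void mightFail() { }
void cleanup() { }

void f()
{
    try
        mightFail();
    finally
        cleanup(); // proposal: SKIPPED when a non-`Exception` `Throwable`
                   // unwinds, since no catch clause here handles one
}

void h()
{
    try
        mightFail();
    catch (Error e) // a catch for a non-`Exception` `Throwable` is present...
        throw e;
    finally
        cleanup();  // ...so under the proposal this WOULD run for `Error`s too
}
```

Whether `cleanup` runs would thus depend on the presence of a seemingly 
unrelated catch clause, which is the confusing part.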



More information about the Digitalmars-d mailing list