RFC: Change what assert does on error

Dennis dkorpel at gmail.com
Sun Jul 6 19:17:47 UTC 2025


On Sunday, 6 July 2025 at 15:34:37 UTC, Timon Gehr wrote:
> Also it seems you are just ignoring arguments about rollback 
> that resets state that is external to your process.

I deliberately am, but not in bad faith. I'm just looking for an
answer, from anyone with a custom error handler, to a simple
question: how does the compiler skipping 'cleanup' in nothrow
functions concretely affect your error log?

But the responses are mostly about:

- It's unexpected behavior
- I don't need performance
- Some programs want to restore global system state
- Contract inheritance catches AssertError
- The stack trace generation function isn't configurable
- Not getting a trace on segfaults/null dereference is bad
- Removing Error is a breaking change
- Differences between destructors/scope(exit)/finally are bad
- Separate crash handler programs are over-engineered

Which are all interesting points! But if I address all of that 
the discussion becomes completely unmanageable. However, at this 
point I give up on this question and might as well take the 
plunge. 😝

> It's a breaking change. When I propose these, they are rejected 
> on the grounds of being breaking changes.

I might have been too enthusiastic in my wording :) I'm not
actually proposing breaking everyone's code by removing Error
tomorrow; I was just explaining to Jonathan that it's not how I'd
design it from the ground up. If we get rid of it long term,
there needs to be something at least as good in place.

> A nice thing about stack unwinding is that you can collect data 
> in places where it is in scope. In some assert handler that is 
> devoid of context you can only collect things you have 
> specifically and manually deposited in some global variable 
> prior to the crash.

That's a good point. Personally, I don't mind using global
variables for a crash handler too much, but unwinding is indeed a
nice way to get at stack variables.
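
To make concrete what I mean by the global-variable approach,
here is a minimal sketch using druntime's
`core.exception.assertHandler` hook; the `currentTask` breadcrumb
and `logAssert` handler are hypothetical names for illustration:

```D
import core.exception : assertHandler;
import core.stdc.stdio : fprintf, stderr;

// Hypothetical "breadcrumb" the application deposits before risky work;
// the handler has no stack context, so globals like this are all it sees.
__gshared const(char)* currentTask = "idle";

// Matches druntime's AssertHandler signature: (file, line, msg), nothrow.
void logAssert(string file, size_t line, string msg) nothrow
{
    fprintf(stderr, "assert failed at %.*s:%llu: %.*s (while: %s)\n",
            cast(int) file.length, file.ptr, cast(ulong) line,
            cast(int) msg.length, msg.ptr, currentTask);
}

shared static this()
{
    assertHandler = &logAssert; // called instead of throwing AssertError
}
```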

> It's an inadmissible conflation of different abstraction levels 
> and it is really tiring fallacious reasoning that basically 
> goes: Once one thing went wrong, we are allowed to make 
> everything else go wrong too.
>
> Let's make 2+2=3 within a `catch(Throwable){ ... }` handler 
> too, because why not, nobody whose program has thrown an error 
> is allowed to expect any sort of remaining sanity.

Yes, I wouldn't want the compiler to deliberately make things 
worse than they need to be, but the compiler is allowed to do 
'wrong' things if you break its assumptions. Consider this 
function:

```D
__gshared int x;
int f()
{
     assert(x == 2);
     return x + 2;
}
```

LDC optimizes that to `return 4;`, but what if, through some
thread/debugger magic, I change `x` to 1 right after the assert
check, making 2+2=3? Is LDC insane to constant fold the result
instead of just computing x+2? After all, how many CPU cycles
does that addition cost anyway?
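
To make the constant fold concrete, this is roughly the code the
optimizer ends up with (a hand-written sketch of the optimized
form, not actual LDC output):

```D
int f_optimized()
{
    assert(x == 2);   // establishes x == 2 for the rest of the function
    return 4;         // x + 2 folded, assuming x == 2 still holds
}
```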

Similarly, when I explicitly tell the compiler 'assume nothing
will be thrown from this function' by adding `nothrow`, is it
insane that the code is structured in such a way that finally
blocks will be skipped when the function does, in fact, throw?

I grant you that `nothrow` is inferred in templates/auto 
functions, and there's no formal definition of D's semantics that 
explicitly justifies this, but skipping cleanup doesn't have to 
be insane behavior if you consider nothrow to have that meaning.

> Adding a subtle semantic difference between destructors and 
> other scope guards I think is just self-evidently bad design, 
> on top of breaking people's code.

Agreed, but that's covered: destructors and scope guards are both
lowered to finally blocks, so they're treated the same, and no one
is suggesting changing that. Just look at the `-vcg-ast` output of
this:

```D
void start() nothrow;
void finish() nothrow;

void normal() {
     start();
     finish();
}

struct Finisher { ~this() { finish(); } }
void destructor() {
     Finisher f;
     start();
}

void scopeguard() {
     scope(exit) finish();
     start();
}

void finallyblock() {
     try {
         start();
     } finally { finish(); }
}
```

When you remove `nothrow` from `start`, you'll see finally blocks
in all functions except `normal()`, but with `nothrow` they are
all essentially the same as `normal()`: two consecutive function
calls.
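
For reference, without `nothrow` on `start()` the scope-guard
version lowers to roughly the following shape (a hand-written
approximation, not actual `-vcg-ast` output); with `nothrow`, the
try/finally wrapper disappears and only the two calls remain:

```D
void scopeguard()
{
    try
    {
        start();
    }
    finally
    {
        finish(); // runs on both the normal and the exceptional path
    }
}
```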

> I am talking about actual pain I have experienced, because 
> there are some cases where unwinding will not happen, e.g. null 
> dereferences.

That's really painful, I agree! Stack overflows are my own pet
peeve, which is why I worked on improving the situation by adding
a Linux segfault handler: https://github.com/dlang/dmd/pull/15331
I also have a WIP handler for Windows, but couldn't get it to
work with stack overflows yet. Either way, this has nothing to do
with how the compiler treats `nothrow` or `throw Error()`, but
with the code generation of pointer dereferencing operations, so
I consider that a separate discussion.

> You are talking about pie-in-the-sky overengineered alternative 
> approaches that I do not have any time to implement at the 
> moment.

Because there seems to be little data from within the D
community, I'm trying to learn how real-world UI applications
handle this problem. I'm not asking you to implement them;
ideally, druntime provides all the necessary tools to easily add
appropriate crash handling to your application. My question is
whether always executing destructors, even in the presence of
`nothrow` attributes, is a necessary component for this, because
this whole discussion seems weirdly specific to D.

> We can, make unsafe cleanup elision in `nothrow` a build-time 
> opt-in setting. This is a niche use case.

The frontend makes assumptions based on `nothrow`. For example,
when a constructor calls a `nothrow` function, it assumes the
destructors of already-constructed fields don't need to be called
on an exception path, which affects the AST as well as attribute
inference (for example, a constructor can't be `@safe` if an
Exception might force it to call a `@system` field destructor).
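
As a hedged illustration of that interaction (all type and
function names here are hypothetical, made up to sketch the
idea):

```D
struct Resource
{
    ~this() @system {}          // cleanup needs @system operations
}

void step() @safe;              // may throw an Exception
void quickStep() @safe nothrow; // cannot throw

struct Wrapper
{
    Resource r;

    this(int dummy) @safe
    {
        r = Resource();
        // If the next call can throw, the compiler must destroy the
        // already-constructed `r` on the exception path, and that calls
        // the @system destructor -- so with `step()` here the constructor
        // could not be @safe. With `quickStep()`, no cleanup path is
        // generated and @safe is fine.
        quickStep();
    }
}
```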

But also, I thought the whole point of `nothrow` was better code
generation. If it doesn't do that, it can be removed as far as
I'm concerned.

> it is somehow in the name of efficiency.

It's an interesting question, of course, how much it actually
matters for performance. I tried removing `nothrow` from dmd
itself, and the (-O3) optimized binary grew by 54 KiB, but I
accidentally also removed a "nothrow" string somewhere, causing
some errors, so I haven't benchmarked a time difference yet. It
would be interesting to get some real-world numbers here.

I hope that clarifies some things; tell me if I missed something
important.


