[OT] OT: Null checks.
Timon Gehr
timon.gehr at gmx.ch
Wed May 7 01:12:59 UTC 2025
On 5/7/25 02:40, Walter Bright wrote:
> On 5/6/2025 9:29 AM, Timon Gehr wrote:
>> This only works because the plane keeps its own state independent of
>> the electronics.
>
> I know what I'm talking about on this subject.
I am aware. I am saying that subject is irrelevant to my problem.
> ...
>
> There are several aspects of D that are influenced by my experience as
> an aerospace engineer.
> ...
For better and worse, it seems. Reality check: D is advertised as a
general-purpose language that allows you to be productive.
>
>> At some point you'll just have to accept that most use cases are not
>> like this. Then you will maybe also figure out that it is not about
>> what kind of person you are, but about what kind of external factors
>> are relevant to your work. (Hint: I am not currently writing software
>> for avionics.)
>
> It's the same situation if you write stock trading software. You might
> not die if it goes haywire, but you certainly could go bankrupt.
> ...
Another use case that is not relevant to me now.
> There's also the situation of minimizing the risk of malware injection.
> That could certainly ruin your whole week.
> ...
Yes, right, because being unable to fix unexplained segfaults is such a
great way to avoid malware injection. Ideally you would not run the
software again until the bug is fixed, but that is not practical if you
cannot know what went wrong.
Bonus points: introduce segfaults and invalid-instruction errors on
otherwise mostly benign, immediately detected bugs, such as null
pointer dereferences, so that people get used to seeing segfaults and
are not alarmed once the program starts segfaulting because some
intruder is trying to run exploits.
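To make that concrete, a minimal sketch (mine, not from the thread;
the names are made up): in D, a null dereference compiles without
complaint and, on Linux, kills the process with a bare segfault and no
hint of which dereference failed.

    // A benign, immediately detected bug (null dereference) that
    // nevertheless manifests as an uninformative segfault.
    class Config { int port; }

    void main()
    {
        Config cfg;        // class references default to null in D
        cfg.port = 8080;   // null dereference: SIGSEGV, no stack
                           // trace or source location by default
    }

Compile and run that and the only output is the shell's "Segmentation
fault" message, indistinguishable from actual memory corruption.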
Incentivize people to write overly broad and overcomplicated signal
handlers; that will certainly help with security.
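For illustration, a hypothetical sketch (handler name and messages are
mine) of the kind of catch-all handler this incentivizes: it cannot
tell a harmless null dereference from an exploit probe, and it calls
functions that are not even async-signal-safe.

    import core.stdc.signal : signal, SIGSEGV;
    import core.stdc.stdio : fprintf, stderr;
    import core.stdc.stdlib : _Exit;

    // Hypothetical catch-all handler: every SIGSEGV is treated the
    // same, whatever its cause.
    extern(C) void onSegv(int sig) nothrow @nogc
    {
        // fprintf is not async-signal-safe; exactly the kind of
        // overcomplication such handlers accumulate.
        fprintf(stderr, "caught signal %d, shutting down\n", sig);
        _Exit(1);
    }

    void main()
    {
        signal(SIGSEGV, &onSegv); // swallow all segfaults uniformly
        int* p = null;
        *p = 42;                  // benign bug, same signal as an
                                  // actual memory-safety violation
    }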
>
>> And BTW, it appears an ESA Mars mission failed partly because an
>> acceleration sensor actively refused to operate for an extended
>> period after the acceleration briefly went outside its rated range.
>> It did so by sticking to one end of the rated range, making the
>> probe compute that it was underground.
>>
>> This demonstrates that tools which think they know better than you
>> how to react to an error condition can also be fatal in "critical"
>> applications.
>
> The anecdote only demonstrates that the design had no backup plan for a
> failed sensor.
> ...
AFAIU one issue was that the engineers did not know the sensor would,
by default, indicate failure in this stupid fashion.
Anyway, clearly this is not the only thing that went wrong, but it
certainly contributed to the mission's failure.
> Here's another: the 737MAX MCAS system kept functioning despite
> receiving bad data from the AOA sensor, and moved the flight controls
> far outside of the envelope.
>
> There was another incident long ago where the autopilot decided to turn
> the airplane upside down. That was fun for the crew and passengers.
>
> And another where the stabilizer jammed. The pilot, rather than leaving
> the jammed thing alone and doing an emergency landing, decided he would
> keep trying to unjam it. He eventually succeeded so well the nut broke
> off the end of the jackscrew and the stabilizer then broke free.
>
> Don't keep trying to work broken systems. They get more broken when you
> do that.
>
It seems hypocritical of you to know what went wrong in those
circumstances. How was any information allowed to escape? /s
We are looking at a failure case of the language _right now_.