shared switch

Jonathan M Davis newsgroup.d at jmdavisprog.com
Mon Oct 9 02:07:50 UTC 2023


On Sunday, October 8, 2023 8:02:22 AM MDT Imperatorn via Digitalmars-d wrote:
> On Sunday, 8 October 2023 at 08:26:53 UTC, Jonathan M Davis wrote:
> > On Sunday, October 8, 2023 12:27:20 AM MDT Imperatorn via
> >
> > Digitalmars-d wrote:
> >> [...]
> >
> > Just be aware that when you're using -preview switches, you're
> > typically using features that are still changing as bugs (and
> > sometimes even how the feature works) get ironed out. So, there
> > is a much higher risk of your code breaking when using such
> > switches, and depending on what happens with bugs with and
> > changes to those features, the changes that they force you to
> > make to your code may or may not actually be required in the
> > long run.
> >
> > [...]
>
> Thanks for your input.
>
> What would you personally do if you had to write an application
> in D with the risk of loss of life if you got a runtime error?
> What can be done to minimize the risk basically by using D.

In general, the biggest thing there would be to try to be _very_ thorough
with unit tests (and integration tests and the like). The better your
testing, the more issues you'll find. Arguably, one of D's biggest features
is its built-in unit testing with unittest blocks, making it really easy to
write tests without having to jump through a bunch of extra hoops like you
do in most languages.
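As a minimal sketch (the function name here is hypothetical), a unittest block sits right next to the code it tests and runs when compiled with -unittest:

```d
// Compile and run the tests with: dmd -unittest -main -run example.d
int add(int a, int b) @safe pure nothrow
{
    return a + b;
}

unittest
{
    // These assertions run only in -unittest builds.
    assert(add(2, 3) == 5);
    assert(add(-1, 1) == 0);
}
```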

It's great to find as many bugs as you can via the type system and language
features, but ultimately, it's testing that's going to find most of the
issues, since the language itself can't verify that your logic is correct
for what you're doing.

Similarly, you should probably make liberal use of assertions (and
potentially contracts, though those are mostly just a way to group
assertions to be run when entering or exiting a function) where reasonable
to catch issues - though for checks that you want to remain active in
release builds, where assertions are compiled out, checking the condition
and throwing an exception on failure is the better choice.
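A sketch of the difference, with a hypothetical average function: in/out contracts group assertions at the function boundary, but like assert they are stripped by -release, whereas enforce throws an exception and stays active in release builds:

```d
import std.exception : enforce;
import std.math : isNaN;

double average(const double[] values)
in (values.length > 0, "need at least one value") // stripped by -release
out (r; !r.isNaN)                                 // exit-time assertion
{
    // enforce throws on failure and survives -release builds.
    enforce(values.length > 0, "average of empty input");
    double sum = 0;
    foreach (v; values)
        sum += v;
    return sum / values.length;
}
```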

And of course, the big thing that usually comes up in discussions of
systems where there's a real potential for loss of life is to have
redundancy so that you can afford for some parts to fail. But that's
obviously less of a language concern and more of a general design issue. And
if you're working on such systems, you probably know more about that than I
do.

As far as things like scope and shared go, restricting how much you even
need them will buy you more than any feature designed to make sure that you
use them correctly. In most applications, very little should be shared
across threads, and restricting what is to small portions of the code base
will make it much easier to both find and avoid bugs related to it.
Similarly, if you're typically avoiding taking the address of local
variables or slicing static arrays, scope won't matter much. scope is
supposed to find bugs with regards to escaping references to local data, and
there's nothing to find if you're not doing anything with references to
local data. Sometimes, you do need to do that sort of thing for performance
(or because of how you have to interact with C code), but minimizing how
much you use risky features like that will go a long way in avoiding bugs
related to them. Part of what makes D generally safer than C or C++ is how
it reduces how much you're doing risky things with memory (e.g. by having
the GC worry about freeing stuff).

It may make sense to at least periodically use the preview flags in a build
to see if you might need to update your code, but how much sense that makes
really depends on what your code is doing. If
you're in an environment where you actually need to take the address of
locals all over the place for performance reasons or whatnot, then it could
be worth the pain of just turning on the switch for DIP 1000 and using it
all the time, whereas if you're doing relatively little with taking the
address of locals or slicing static arrays, worrying about DIP 1000 could
just be wasting your time.
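As a sketch of what DIP 1000 is checking (the function name is made up): a scope parameter promises not to escape the reference, which is what makes passing a slice of a stack-allocated static array safe:

```d
// Compiles cleanly with -preview=dip1000; scope says arr won't escape.
@safe int sum(scope const(int)[] arr)
{
    int total = 0;
    foreach (x; arr)
        total += x;
    return total;
    // Storing arr in a global here would be rejected under -preview=dip1000.
}

@safe unittest
{
    int[4] buf = [1, 2, 3, 4];
    assert(sum(buf[]) == 10); // slicing a static array, safely contained
}
```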

As for shared, it may be worth just turning on the switch, because you want
the compiler to basically not let you do anything with shared other than
store a variable that way. Typically, the two ways that shared needs to be
handled are

1. Have an object which is shared which you do no operations on while it's
shared. In the sections where you need to operate on it, you then either use
atomics on it, or you protect that section of code with a mutex and cast
away shared to get a thread-local reference to the data. You can then do
whatever you need to do with that thread-local reference, making sure that
it doesn't escape anywhere (which scope may or may not help with), and then
when you're done, make sure that no thread-local references exist before
releasing the mutex. Because of the casts, the code in question will need to
be @trusted if you want to use it with @safe code, which should help
segregate that section of the code.
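A sketch of that first pattern (the names here are hypothetical): the data lives as shared and nothing operates on it directly; to operate, you lock a mutex, cast away shared to get a thread-local view, and let no reference escape the section:

```d
import core.sync.mutex : Mutex;

shared int counter;
shared Mutex counterLock;

shared static this()
{
    counterLock = new shared Mutex();
}

// @trusted because of the cast; this is only safe because the mutex
// guarantees exclusive access and the thread-local pointer never escapes.
void increment() @trusted
{
    counterLock.lock();
    scope (exit) counterLock.unlock();
    auto local = cast(int*) &counter; // thread-local view, valid under the lock
    ++*local;
}
```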

2. You have a type which is designed to be operated on as shared. It has
shared member functions, and you use it normally as if it weren't shared.
However, internally, it then does what #1 talks about. Any time the shared
object needs to do anything with its data, it either uses atomics, or it
locks a mutex and temporarily casts away shared, ensuring that no
thread-local references escape.
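A sketch of the second pattern (the type is hypothetical): callers use the object normally through its shared member functions, and it handles thread safety internally, here with atomics:

```d
import core.atomic : atomicLoad, atomicOp;

struct SharedCounter
{
    private int value;

    void increment() shared
    {
        // value is typed shared inside a shared method, so atomics apply.
        atomicOp!"+="(value, 1);
    }

    int get() shared
    {
        return atomicLoad(value);
    }
}

// A shared instance can then be used from any thread via its
// shared member functions, with no casts at the call site.
shared SharedCounter hits;
```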

Whichever way you handle it, you basically want the compiler complaining any
time you do anything with shared that isn't guaranteed to be thread-safe,
which functionally means that you want it to complain when you do just about
anything with it other than call a shared member function. So, if the switch
means that the compiler complains more, that's probably a good thing.

However, the exact meaning of the switch does risk changing over time. For
example, there's been some discussion about changing it so that shared
integer types just do the atomics for you automatically for basic
operations, whereas doing anything with types that can't directly use
atomics would be an error other than calling shared member functions. So,
depending on exactly what happens, the switch could get annoying, but in
general, if it's just going to flag more operations on shared as errors, then
that's usually a good thing. And since, in the vast majority of programs,
shared should be in a very small portion of your code base, any issues with
a compiler switch should be of minimal cost. But I'd have to see exactly
what the preview flag for shared was complaining about (or not complaining
about) in a code base to see whether it really made sense to enable it
normally or not.

Honestly, it wouldn't surprise me if the primitives intended to be used with
shared (such as Mutex in core.sync.mutex) were the most annoying parts with
regards to shared simply because they were originally written before shared,
and shared hasn't necessarily been applied to them correctly. That's the
sort of thing that needs to be sorted out along with the exact behavior of
the switch.

In any case, in general, I would say that you should use @safe as much as
reasonably possible (with @trusted used as little as reasonably possible)
and test, test, test. How much the type system will be able to help you
catch stuff will change over time (hopefully exclusively for the better) as
features like scope and shared are better sorted out, but ultimately, you're
going to need to catch anything that gets through the cracks by testing -
and given how much of the code's behavior depends on logic that only the
folks working on it know and understand, logic the compiler can't possibly
check, those cracks will always be large. And that's true of any code base.

- Jonathan M Davis
