DIP 1024--Shared Atomics--Community Review Round 1
Ola Fosheim Grøstad
ola.fosheim.grostad at gmail.com
Mon Oct 14 12:46:15 UTC 2019
On Sunday, 13 October 2019 at 19:08:00 UTC, Manu wrote:
> On Sun, Oct 13, 2019 at 12:55 AM Ola Fosheim Grøstad via
> Digitalmars-d <digitalmars-d at puremagic.com> wrote:
>> That said, if you had a formalization of the threads and what
>> states different threads are in then you could get some
>> performance benefits by eliding shared overhead when it can be
>> proven that no other threads are accessing a shared variable.
>
> What does that even mean? You're just making things up.
(Drop the ad hominems... Especially when you're in an area that
is not your field.)
> Thread's don't have 'states', that's literally the point of
> threads!
All program execution is in a state at any given point in time.
So yes, you transition between states. That is the premise for
building an advanced type system. That's what makes it possible
to implement Rust's borrow checker.
Anything that moves in a discrete fashion moves between states;
that is a basic computer science conceptualization of computing.
It is the foundation that theories of computing are built on.
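As one small-scale illustration of this (a sketch of the general idea, not a specific proposal for D): Rust's scoped threads let the borrow checker prove at compile time that exactly one thread holds mutable access to a variable, so the variable can be mutated with ordinary, non-atomic operations and no lock. The function name `scoped_count` is just an illustrative choice here.

```rust
use std::thread;

// Increment a plain (non-atomic) counter from a spawned thread.
// The borrow checker proves the spawned thread holds the only
// mutable reference while it runs, so no synchronization is needed.
fn scoped_count() -> u64 {
    let mut counter = 0u64;
    thread::scope(|s| {
        s.spawn(|| {
            for _ in 0..1000 {
                counter += 1; // ordinary increment, no atomics
            }
        });
        // Reading `counter` here would not compile: the spawned
        // thread's mutable borrow is still live inside the scope.
    });
    // thread::scope joins all spawned threads before returning,
    // so exclusive access to `counter` is restored here.
    counter
}

fn main() {
    println!("{}", scoped_count()); // prints 1000
}
```

The point is that the "state" being tracked is which thread may access the data at a given point in the program, and the compiler elides synchronization overhead wherever exclusivity is proven.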
> Maybe there's a 10 year research project there, but that idea
> is so
> ridiculous and unlikely to work that nobody would ever try.
Oh. People do it. Sure, it is hard in the most general case, but
there are languages with high-level concurrency constructs whose
type systems make it possible to write provably correct
concurrent programs.
It doesn't make much sense to claim that nobody would ever try to
build something that actually exists... Does it?
> I think what you're talking about is something more like a
> framework;
No. Languages/compilers that check concurrency states at compile
time.
>> But that is not on the table...
>
> Definitely not. Not at the language level at least.
That's right. So those benefits are not available... which was my
point. :-)