Is core.internal.atomic.atomicFetchAdd implementation really lock free?
max haughton
maxhaton at gmail.com
Sun Dec 4 01:51:56 UTC 2022
On Saturday, 3 December 2022 at 21:05:51 UTC, claptrap wrote:
> On Saturday, 3 December 2022 at 20:18:07 UTC, max haughton
> wrote:
>> On Saturday, 3 December 2022 at 13:05:44 UTC, claptrap wrote:
>>> On Saturday, 3 December 2022 at 03:42:01 UTC, max haughton
>>> wrote:
>>>> On Wednesday, 30 November 2022 at 00:35:55 UTC, claptrap
>>>
>>> "Atomically adds mod to the value referenced by val and
>>> returns the value val held previously. This operation is both
>>> lock-free and atomic."
>>>
>>> https://dlang.org/library/core/atomic/atomic_fetch_add.html
>>
>> If it provides the same memory ordering guarantees does it
>> matter (if we ignore performance for a second)? There are
>> situations where you do (for reasons beyond performance)
>> actually need an efficient (no-overhead) atomic operation in
>> lock-free coding, but these are really on the edge of what can
>> be considered guaranteed by any specification.
>
> It matters because the whole point of atomic operations is
> lock-free coding. There's no "oh, you might need atomics for
> lock-free coding"; you literally have to have them. If they fall
> back on a mutex it's not lock free anymore.
>
> Memory ordering is a somewhat orthogonal issue from atomic ops.
Memory ordering is literally why modern atomic operations exist.
That's why there's a lock prefix on the instruction on x86: it
doesn't just say "do this in one go", it says "do this in one go
*and* maintain this memory ordering for the other threads".
You'd never see the mutex or similar; it's just a detail of the
atomics library. Again, I'm not saying it would be good, just
that it wouldn't make any difference to calling code.
More information about the Digitalmars-d mailing list