ARM bare-metal programming in D (cont) - volatile

Iain Buclaw ibuclaw at ubuntu.com
Thu Oct 24 02:43:40 PDT 2013


On 24 October 2013 10:27, John Colvin <john.loughran.colvin at gmail.com> wrote:
> On Thursday, 24 October 2013 at 08:20:43 UTC, Iain Buclaw wrote:
>>
>> On 24 October 2013 08:18, Mike <none at none.com> wrote:
>>>
>>> On Thursday, 24 October 2013 at 06:37:08 UTC, Iain Buclaw wrote:
>>>>
>>>>
>>>> On 24 October 2013 06:37, Walter Bright <newshound2 at digitalmars.com>
>>>> wrote:
>>>>>
>>>>>
>>>>> On 10/23/2013 5:43 PM, Mike wrote:
>>>>>>
>>>>>> I'm interested in ARM bare-metal programming with D, and I'm trying
>>>>>> to wrap my head around how to approach this.  I'm making progress,
>>>>>> but I found something that was surprising to me: deprecation of the
>>>>>> volatile keyword.
>>>>>>
>>>>>> In the bare-metal/hardware/driver world, this keyword is important
>>>>>> to ensure the optimizer doesn't cache reads from memory-mapped IO,
>>>>>> as a hardware peripheral may modify the value without involving the
>>>>>> processor.
>>>>>>
>>>>>> I've read a few discussions on the D forums about the volatile
>>>>>> keyword debate, but no one seemed to address the need for volatile
>>>>>> in memory-mapped IO.  Was this an oversight?
>>>>>>
>>>>>> What's D's answer to this?  If one were to use D to read from
>>>>>> memory-mapped IO, how would one ensure the compiler doesn't cache
>>>>>> the value?
>>>>>
>>>>> volatile was never a reliable method for dealing with memory mapped
>>>>> I/O.
>>>>
>>>> Are you talking about dmd or in general (it's hard to tell)?  In
>>>> gdc, volatile behaves the same as in gcc/g++.  In one respect,
>>>> though, the switch to thread-local as the default storage model made
>>>> volatile on its own pointless.
>>>>
>>>> As a side note, 'shared' is considered a volatile type in gdc, which
>>>> differs from the deprecated keyword, which set volatile at a
>>>> decl/expression level.  There is a difference in semantics, but it
>>>> escapes this author at 6.30 in the morning.  :o)
>>>>
>>>> In any case, using shared would be my recommended route for you to go
>>>> down.
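
[A sketch of that suggestion for a hypothetical memory-mapped register.
The register name and address are made up for illustration, and this
relies on gdc's behaviour described above, where accesses through a
shared lvalue are emitted like volatile accesses:]

```d
void setPin(uint bit)
{
    // Hypothetical GPIO output register address -- not a real part's
    // address map.
    auto reg = cast(shared(uint)*) 0x4001_0C0C;

    // Because the lvalue is shared, gdc performs the load and the store
    // exactly as written; the optimiser will not cache the loaded value
    // or elide the store.
    *reg = *reg | (1u << bit);
}
```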
>>>>
>>>>
>>>>> The correct and guaranteed way to make this work is to write two
>>>>> "peek" and "poke" functions to read/write a particular memory
>>>>> address:
>>>>>
>>>>>     int peek(int* p);
>>>>>     void poke(int* p, int value);
>>>>>
>>>>> Implement them in the obvious way, and compile them separately so the
>>>>> optimizer will not try to inline/optimize them.
>>>>
>>>> +1.  Using an optimiser along with code that talks to hardware can
>>>> result in bizarre behaviour.
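
[A direct transliteration of Walter's peek/poke idea into D.  Put these
in their own translation unit and compile it without optimisation or
inlining, so calls into them stay opaque to the optimiser:]

```d
// peekpoke.d -- build this file separately, e.g. without -O/-inline,
// so the compiler cannot see through the calls.

int peek(int* p)
{
    return *p;          // the load must happen here, across the call
}

void poke(int* p, int value)
{
    *p = value;         // likewise, the store cannot be elided
}
```

[The guarantee comes from separate compilation rather than language
semantics: as long as the optimiser cannot inline these functions, it
must assume each call may have side effects and so performs every
access.]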
>>>
>>> Well, I've done some reading about "shared", but I don't quite grasp
>>> it yet.  I still have some learning to do.  That's my problem, but if
>>> you feel like explaining how it can be used in place of volatile for
>>> hardware register access, that would be awfully nice.
>>
>>
>> 'shared' guarantees that all reads and writes specified in source code
>> happen in the exact order specified with no omissions, as there may be
>> other threads reading/writing to the variable at the same time.
>>
>>
>> Regards
>
>
> Is it actually implemented as such in any D compiler?  That's a lot of
> memory barriers; shared would have to come with a massive SLOW! notice
> on it.  Not saying that's a bad choice necessarily, but I was pretty
> sure this had never been implemented.

If you require memory barriers to access shared data, that is what
'synchronized' and core.atomic are for.  There are *no* implicit locks
when accessing the data.
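
[Where cross-thread ordering or atomicity really is needed, the tools
named above look like this -- a minimal sketch using core.atomic from
druntime:]

```d
import core.atomic;

shared int counter;

void increment()
{
    // Atomic read-modify-write with sequentially consistent ordering.
    atomicOp!"+="(counter, 1);
}

int current()
{
    // Atomic load; pairs with the atomic stores above.
    return atomicLoad(counter);
}
```

[For coarser-grained exclusion, a `synchronized` statement or class
takes a lock explicitly; neither barriers nor locks are implied by
merely declaring data `shared`.]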

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';

