so what exactly is const supposed to mean?
kris
foo at bar.com
Mon Jul 3 18:42:58 PDT 2006
David Medlock wrote:
[snip]
> Not intending to start a long drawn out 'const' discussion again but...
nor I :D
> Multithreading appears to be the hardest form of hardware parallelism to
> implement. Race conditions will not magically disappear with
> const-correctness (delegates alone negate that).
Not all of them, no. But a surprising number of instances do, almost as
if by magic. I'm speaking from experience only; nothing theoretical or
empirically measured.
BTW: I'm one of those who feel the multithreading model is just too hard
for 95%+ of programmers to get right. I'm among that 95%, although I've
written OS schedulers and so on. It's just too easy to make a subtle
mistake, and not being able to mechanically 'prove' the correctness of a
typical/traditional multithreaded design illustrates just how bad the
situation really is.
But that does not diminish the value of immutability ~ it's very useful
for single-threaded designs also.
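To make the "as if by magic" part concrete ~ here's a minimal sketch,
using the D2-style 'immutable' that postdates this thread, of why data
typed as immutable takes a whole class of races off the table: any number
of threads can read it concurrently, with no locking, because no code
path is permitted to write to it.

// A sketch only ~ assumes D2-style immutable, which D did not have
// when this was written.
import core.thread;
import std.stdio;

void main()
{
    immutable int[] table = [1, 2, 3, 4, 5];

    Thread[] readers;
    foreach (_; 0 .. 4)
    {
        auto t = new Thread({
            long sum = 0;
            foreach (x; table)      // concurrent reads; no lock anywhere
                sum += x;
            writeln(sum);
        });
        t.start();
        readers ~= t;
    }
    foreach (t; readers)
        t.join();
}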
> I really believe that true hardware parallelism will come from data
> parallelism:
>
> Data[1000] mydata;
> foo( mydata );
>
> processor 1 operates on mydata[0-499]
> processor 2 operates on mydata[500-999]
>
> Data parallelism is (in the general sense) inherently faster, doesn't
> require locking, and is not prone to things like task-switching, race
> conditions, etc.
No question about that, but you still need rendezvous points or some
other form of synchronization, unless there's no (fan-in) response involved?
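For what it's worth, here's roughly the shape I have in mind, sketched
with the std.parallelism of present-day D (which obviously didn't exist
when this was written ~ the structure is the point, not the library):

import std.parallelism;

struct Data { double value = 0; }

// Purely element-local work: each half can proceed independently.
void foo(Data[] chunk)
{
    foreach (ref d; chunk)
        d.value *= 2;
}

void main()
{
    auto mydata = new Data[1000];

    // processor 1 gets mydata[0 .. 500], processor 2 gets the rest
    auto lo = task!foo(mydata[0 .. 500]);
    auto hi = task!foo(mydata[500 .. $]);
    lo.executeInNewThread();
    hi.executeInNewThread();

    // the rendezvous: nothing downstream (the fan-in) can run until
    // both halves have finished
    lo.yieldForce();
    hi.yieldForce();
}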
The thing is that immutable, as part of a contract, makes the
conversation between callers and callees that much more deterministic.
This is a useful artifact regardless of how many threads or how much
hardware duplication there is - even single threaded. After all, don't
we want the compiler to tell us when we're doing something contrary to
someone else's design? It may even be beneficial somewhere in the above
example.
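Concretely ~ and again borrowing the const syntax D grew later, so take
it as a sketch of the idea ~ the signature *is* the contract, and the
compiler polices it for both sides:

// The signature promises the caller that sum() will not touch the
// buffer it is handed; the compiler rejects any write that would
// break that promise.
int sum(const(int)[] data)
{
    int total = 0;
    foreach (i, x; data)
    {
        total += x;
        // data[i] = 0;   // error: cannot modify const expression
    }
    return total;
}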
But yeah ... no long, drawn-out discussion needed :)
> Apparently NVidia has figured this out already.
> Of course this requires a more relational/vector view of the data than
> is currently mainstream.
>
> Here is a relevant presentation:
> http://www.cs.princeton.edu/~dpw/popl/06/Tim-POPL.ppt
>
> (PDF here:
> http://www.st.cs.uni-sb.de/edu/seminare/2005/advanced-fp/docs/sweeny.pdf).
Thanks; they look interesting. There are a number of excellent
alternatives to multithreading ~ you might be interested in reading up
on occam, the transputer, CSP, JCSP, etc.
>
> Const has some nice properties, but I rate it about a 5 on a scale of 10
> for dire features. Yes, I know libraries benefit from it.
Yes; like everything else, the priority often depends upon what you're
doing.
> I would counter that library authors have serious design issues to
> consider even with const,
That can be true. But as those are resolved, the need for const becomes
more pronounced :D
> and that COW is not a bad tradeoff in the
> meantime.
Sure; as a stopgap measure it is OK. It's the lack of enforceability
that makes it less than suitable as a long-term solution ~ IMO
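To spell out the distinction ~ a small sketch, once more in the const
style D adopted later, with names that are purely for illustration:

// COW by convention: nothing but discipline says we must .dup before
// writing. The signature would happily let us scribble on the caller's
// buffer, and no tool would ever complain.
char[] sanitizeCow(char[] s)
{
    auto copy = s.dup;          // the promised copy-before-write
    foreach (ref c; copy)
        if (c == ' ') c = '_';
    return copy;
}

// The same routine with the contract enforced: writing through s is a
// compile-time error, so the caller can hand over a literal or a shared
// buffer without having to trust a convention.
char[] sanitizeConst(const(char)[] s)
{
    auto copy = s.dup;
    foreach (ref c; copy)
        if (c == ' ') c = '_';
    // s[0] = '_';              // error: cannot modify const expression
    return copy;
}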
-----
But it's not at all clear what your message is, David. Are you
speculating that D is currently too immature for immutability to be
useful? Or that the multithreading model should be re-evaluated? Asking
only because I'm not sure what you're getting at overall.