Hi Sarita (and others),
Thanks for those insightful examples. Like you, I can't think of why
your Example 1 should be allowed if Example 2 is not. Although I could
never put my finger on it, there did seem to me to be some intuitive
difference between the executions that M/P allowed and those it didn't.
But according to my vague intuition, Example 1 should not have been
allowed. Does M/P really allow it? I'm not comfortable enough with the
model--in fact, I still find the M/P specification ambiguous in some
places--to say for sure. The reason I'm not sure is that if you
restrict the non-forbidden executions to those in which T2's read of
Z returns 2, then don't we always get that r2 = 2? (Bill and
Jeremy, can you confirm what your intuition for this example is, and
whether you think that M/P conforms to this intuition?)
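(For readers who don't have Sarita's message at hand: her Examples 1 and
2 are not reproduced here. Purely as an invented illustration of the
general shape of litmus test we keep trading -- the shared variable Z and
the local r2 come from the text above, everything else is mine and is not
Sarita's example -- consider something like the following, where the
question would be whether T2's second read of Z is forced to return 2
once its first read does.)

    // Hypothetical sketch only -- NOT Sarita's Example 1 or 2.
    // Z is shared and not volatile; initially Z == 0.
    public class LitmusSketch {
        static int Z = 0;

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(new Runnable() {
                public void run() {
                    Z = 2;              // T1 writes Z
                }
            });
            Thread t2 = new Thread(new Runnable() {
                public void run() {
                    int r1 = Z;         // suppose this read returns 2
                    int r2 = Z;         // must this read then return 2 as well?
                    System.out.println("r1=" + r1 + " r2=" + r2);
                }
            });
            t1.start(); t2.start();
            t1.join();  t2.join();
        }
    }
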
If M/P does allow Example 1 but not Example 2, then I do feel that there
is some sensible distinction between the executions that M/P allows and
those that it doesn't, though I cannot enunciate this distinction to my
satisfaction. Furthermore, I don't entirely agree that simplicity is a
significant criterion for the "gold standard" model: it is essential that
we have simple conservative approximations for both programmers and
implementors ("conservative" means opposite things for these two sets of
users), and probably a few different such models, with varying degrees of
simplicity and conservatism, especially for the implementors. There is
at least widespread agreement on the most important of these models:
race-free programs get sequentially consistent behavior (see the sketch
after this paragraph). The complexity
of all other models seems sufficiently high to me that there isn't a real
qualitative difference. In any case, proofs will be necessary to reason
about the behavior of programs. (I do think it is important that any
model we propose have a clear formal specification, so that such proofs
are possible. The M/P model needs a bit more work on this front. I also
think "conceptual integrity" is important, but neither SC- nor M/P has,
in my opinion, the clear upper hand on this criterion.)
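To make the "race-free programs get sequentially consistent behavior"
guarantee above concrete, here is a small sketch (my own illustration,
not taken from either proposal): it is the classic Dekker-style test,
written so that every access to the shared fields is protected by one
lock. Because the program is then free of data races, the guarantee says
only sequentially consistent results are possible; in particular the
outcome r1 == 0 && r2 == 0 cannot occur. Delete the synchronized blocks
and the program has data races, and the weaker proposals (and real
compilers and processors) may produce exactly that outcome.

    // My illustration of the race-free guarantee, not from any proposal.
    // Initially x == y == 0.
    public class DrfSc {
        static int x = 0, y = 0;
        static int r1, r2;
        static final Object lock = new Object();

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(new Runnable() {
                public void run() {
                    synchronized (lock) { x = 1; }
                    synchronized (lock) { r1 = y; }
                }
            });
            Thread t2 = new Thread(new Runnable() {
                public void run() {
                    synchronized (lock) { y = 1; }
                    synchronized (lock) { r2 = x; }
                }
            });
            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println("r1=" + r1 + " r2=" + r2);  // never 0/0
        }
    }
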
My main worry is that we might adopt too strong a model. I believe we
will almost certainly want to tweak the model in the future, as we gain
real experience with it, and that it will be much easier to strengthen
the model than to weaken it. Strengthening the model only requires that
JVM implementations be updated, while weakening the model may break
existing Java code.
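As a sketch of the kind of breakage I mean (again my own example, not
anyone's proposal): the following unsynchronized publication idiom is
correct under a sequentially consistent model -- once the reader sees
ready == true, the only value of data it can observe is 42 -- but under a
weaker model, with no synchronization on these fields, the reader may see
ready == true and still read a stale data, or spin forever. Code like
this, written against a strong model, is exactly what a later weakening
would silently break.

    // Correct under SC; broken if the model is later weakened.
    public class UnsafePublish {
        static int data = 0;
        static boolean ready = false;

        public static void main(String[] args) {
            new Thread(new Runnable() {
                public void run() {
                    data = 42;
                    ready = true;       // publish without synchronization
                }
            }).start();

            while (!ready) { }          // unsynchronized busy-wait
            System.out.println(data);   // 42 under SC; possibly 0 (or the
                                        // loop may never terminate) under
                                        // a weaker model
        }
    }
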
I'm not, of course, advocating that we adopt a model we think is too weak
because we are going to have to change it anyway. I merely believe that
the burden of proof should be on those trying to strengthen the model.
I don't think the arguments for M/P over SC- are compelling. Indeed, I
haven't seen a concrete example (something that corresponds to real code)
in which the stronger guarantees of a more restrictive memory model allow
better code to
be written. That is, how will programming with M/P be better/easier than
with SC-? Is there some object/method/function that would be simpler to
implement under M/P than under SC-? In the absence of compelling arguments
of this kind, I think it's best to go with the weaker model.
Victor
-------------------------------
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel