Bill Pugh wrote:
> In this case, it is true that in all sequentially consistent
> executions, r3 will always be zero. But the JMM is not SC.
Agreed. But I was not assuming that the JMM is SC. In fact, there
are Java memory models that are not SC in which r3 will always be zero,
hence this case is ruled out. (Which is why I raised this issue :-).)
> I think we need to say that if a compiler uses some conservative
> mechanism to determine the set of values that could be seen by a read,
> the compiler can use that information. In particular, the mechanism
> has to be conservative with respect to the memory model.
I don't know what this (= the mechanism has to be conservative wrt the
memory model) means. We are trying to define the memory model. I agree
(if that is what you are saying) that all compiler transformations must
be sound wrt the memory model.
But also keep the following in mind: there is non-monotonicity at work
here. We are in a situation in which making a conservative assumption
(r3 can read 0 or 42) sanctions a particular behavior, making a
tighter analysis (r3 can read only 0) prohibits the behavior, and
making an even more conservative assumption (r3 can read 0, 42, or 43)
also prohibits it.
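To make the non-monotonicity concrete, here is a sketch (in ordinary
Java, my own illustration, not anyone's actual compiler output) of the
rewrite of Thread 1 that the {0, 42} assumption sanctions, and that both
the tighter and the looser assumptions forbid:

```java
class Thread1Sketch {
    static int x, y;

    // Thread 1 of Case 18, as written:
    static void original() {
        int r3 = x;        // 1
        if (r3 == 0)       // 2
            x = 42;        // 3
        int r1 = x;        // 4
        y = r1;            // 5
    }

    // If the analysis says r3 can read only 0 or 42, then after
    // lines 1-3, x is always 42, so line 4 may become r1 = 42 and
    // the write y = 42 may be performed early -- which is exactly
    // what lets Thread 2 feed 42 back into x:
    static void transformed() {
        y = 42;            // 5, hoisted
        int r3 = x;        // 1
        if (r3 == 0)       // 2
            x = 42;        // 3
        int r1 = 42;       // 4
    }
    // If instead r3 could also read 43, x need not be 42 after
    // lines 1-3, and the rewrite above is no longer justified.
}
```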
> In this case, the compiler could use a non-standard type inference
> algorithm to determine that the types of x, y, r1, r2 and r3 are {0,
> 42}. For this analysis, we might simply assume that any read can see
> any write of that same variable; this would be conservative with
> respect to the memory model. From this information, we can deduce that
> if r3 != 0, then r3 == 42.
To illustrate my point, let's modify the test case. Here is Test Case 18a:
====================================================
<H2>Causality test case 18a</H2><PRE>Initially, x = y = 0
Thread 1:
1: r3 = x
2: if (r3 == 0)
3: x = 42
4: r1 = x
5: y = r1
Thread 2:
6: r2 = y
7: x = r2
Thread 3:
8: if (false)
9: x = 43
Behavior in question: r1 == r2 == r3 == 42
Decision: Disallowed.
</PRE>
<HR>
=======================================================
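(For anyone who wants to run it, here is 18a as ordinary Java with
java.lang.Thread -- my own rendering, not part of the test-case suite.
Note that whatever interleaving or reordering occurs, the only values
ever written to x or y are 0 and 42, since line 9 is dead code.)

```java
public class Case18a {
    static int x = 0, y = 0;
    static int r1, r2, r3;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            r3 = x;              // 1
            if (r3 == 0)         // 2
                x = 42;          // 3
            r1 = x;              // 4
            y = r1;              // 5
        });
        Thread t2 = new Thread(() -> {
            r2 = y;              // 6
            x = r2;              // 7
        });
        Thread t3 = new Thread(() -> {
            if (false)           // 8
                x = 43;          // 9 (dead: 43 can never be written)
        });
        t1.start(); t2.start(); t3.start();
        t1.join(); t2.join(); t3.join();
        System.out.println("r1=" + r1 + " r2=" + r2 + " r3=" + r3);
    }
}
```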
If I follow through with your reasoning, the "conservative" calculation
(any read can see any write) would say that x can take on the values
0, 42, and 43. (You said let's assume any read can see any write.)
Hence line 4 cannot be replaced with r1 = 42.
So adding "junk" will cause the semantics to disallow a behavior (the
proposed behavior for Case 18).
This can't be right. The semantics should satisfy the principle: code
that cannot possibly affect a computation should not affect it :-).
I believe the issue here is that one simply can't "assume some type
inference algorithm". The actual results it returns might matter. A
better alternative would be to design the semantics so that it is
*independent* of any type inference algorithm.
The central point about Case 18 is that the behavior in question can be
allowed only if one adds the condition (through type inference or
whatever) that r3 takes on the values 0 and 42. It cannot be allowed if
one adds the condition that r3 takes on only the value 0, or only the
values 0, 42, and 43. One cannot allow the semantics to be defined in
terms of the precision of some type inferencer -- without specifying
that type inferencer. And that, of course, would be very unfortunate.
I believe the decision on Case 18 should be reversed.
Best,
Vijay
-------------------------------
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel
This archive was generated by hypermail 2b29 : Thu Oct 13 2005 - 07:00:58 EDT