I think that all versions of the consensus JMM revision allow, under
some circumstances, adjacent synch blocks to be fused into
one. However, without some additional rules that are otherwise
unrelated to JMM issues, some of these manipulations can introduce
liveness failures. Here's an attempt to lay out these rules. I think
that we need only two simple rules.
The basic idea is that you'd like to allow fusion of blocks locking on
the same lock. That is:
synchronized(x) { a(); }
synchronized(x) { b(); }
may be transformed into
synchronized(x) { a(); b(); }
Not much real code looks like this. But under the proposed JMM, some reads
and writes that might appear "between" the a() and b() blocks may be
absorbed by one or the other of them (briefly, you can move
reads/writes whose visibility and ordering properties don't depend on
whether they are in the blocks or not), leading to more frequent
occurrences of this form.
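For example (a sketch using a hypothetical plain, non-volatile field f),
a write sitting between the two blocks might first be absorbed into one
of them, producing the adjacent form above:

  synchronized(x) { a(); }
  f = 1;                    // ordinary field; its ordering doesn't depend on x
  synchronized(x) { b(); }

  // may become

  synchronized(x) { a(); f = 1; }
  synchronized(x) { b(); }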
Fusion applies only to blocks using the same lock. Regardless of
whether a JMM's rules would otherwise enable it, you need a rule
disallowing this for blocks using different locks, since this would
result in holding two locks, acquired in some order, when only one at
a time is expected. Many liveness problems could be artificially
induced. For one example, wait()s won't work right. On the other hand,
it may be possible to eliminate one of the locks entirely via static
analysis, in which case, you might obtain this effect. To cover all
the cases, we can phrase the rule as:
Rule 1: No fusion may result in the possibility that more locks
are simultaneously held at any given time than specified in the
original code.
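For example (with a() and b() as above, and a second lock y), Rule 1
rules out this transformation:

  synchronized(x) { a(); }
  synchronized(y) { b(); }

  // Disallowed: the fused form holds x and y at the same time,
  // which the original code never does.
  synchronized(x) { synchronized(y) { a(); b(); } }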
This may be generalized to apply to volatiles by saying that, for
purposes of these rules, each read/write of a volatile is considered
to be performed within the scope of an otherwise unused lock. This is
conservative, but considering that volatiles might actually need to be
implemented via private locks on some systems, it seems like the right
tactic. This means that r/w of volatiles cannot be moved into synch
blocks or fused with them. (It does mean though that, on systems using
locks for volatiles, the read and write in "++aVolatile" could be
fused so that it is atomic, but people would not be allowed to rely on
this fact.)
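In other words (a sketch of the conceptual model, using a hypothetical
private lock vLock associated with the volatile):

  // For the purposes of these rules,
  aVolatile = 1;
  // is treated as if it were
  synchronized(vLock) { aVolatile = 1; }
  // so, by Rule 1, it cannot be pulled inside some other synchronized(x)
  // block, since that would hold x and vLock at once.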
Fusion is NOT necessarily an "optimization", so it surely shouldn't be
mandated. Fusion can reduce overall performance of some programs by
increasing the duration of critical sections, thus, sometimes, leading
to more lock contention, especially on SMPs. Contended locks are much
more expensive than uncontended locks. So this will tend to be worthwhile
only when one or both of the blocks are short. As a conservative
guide, fusion should always be worthwhile if one of the blocks takes
less time to execute than does the expected lock/unlock overhead.
Further, there is a point at which fusion must be disallowed because
of the potential for permanent liveness failures.
Rule 2: No block may be fused with itself.
As the most common application of this rule, no transformation may
have the effect of "hoisting" a monitorenter above a loop entry point.
For the canonical example, consider the following, assuming initially a
shared field a = 0:

  T1:
    for (;;) {
      synchronized(x) {
        if (a != 0) break;
      }
    }

  T2:
    synchronized(x) { a = 1; }
It's clearly a bad idea to transform T1's code to one in which the
block is fused with itself:
  synchronized(x) {
    for (;;) {
      if (a != 0) break;
    }
  }
... since T1's loop that was previously guaranteed to terminate if T2
ever reached its assignment can now lock out T2 and livelock T1.
Practically no real code looks like this, but lots of programs somehow
contain equivalents. For example, "while (!Thread.interrupted())..."
loops may be of this form, depending on how interrupt is implemented.
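For instance (a sketch with a hypothetical work() method), a polling
loop of this shape must not have its monitorenter hoisted above the
while:

  while (!Thread.interrupted()) {
    synchronized(x) {
      work();
    }
  }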
The underlying notion here is fairness. Java has no formal fairness
properties. However, it is reasonable to impose a rule saying that no
transformation will invalidate any fairness properties that a JVM may
already possess. In turn, this leads to the promise that no
transformation will invalidate the weakest of weak fairness rules: As
time approaches infinity, the probability approaches 1.0 that any
action that is periodically enabled will occur. (See, for example,
Francez and Forman's "Interacting Processes" book for good accounts of
the theory of fairness). Note that an underlying JVM might not even
obey this property. For the sake of JMM rules though, we can say that
transformations are limited to those that cannot completely give up
fairness.
Aside: It would be great if we could impose weak fairness on all JVM
operations, but this battle might not be worth fighting in the JMM
JSR.
Strictly speaking, this reasoning applies only to infinite loops.
When loops have determinate iteration counts, you could arguably relax
the rule. But this seems unwise. Consider code like:
  for (long i = 0; i < 1000000000; ++i) {
    if (!Thread.interrupted())
      slowCalculation();
  }
Fusing in this case would not mesh well with standard guidance about
the use of Thread.interrupt() and related polling-style constructs.
(On the other hand, loop unrollings should be allowed to fuse unrolled
segments.) And I think that some programmers will want or need a
simple guarantee about such things. So the simple version, applying to
any loop, is the best choice.
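To illustrate the unrolling point above (a sketch with a hypothetical
work() method and loop bound n):

  for (int i = 0; i < n; i += 2) {       // unrolled by a factor of 2
    synchronized(x) { work(i); }
    synchronized(x) { work(i + 1); }
  }

  // The two blocks within a single unrolled iteration may be fused;
  // the lock is still released on every iteration, so this is not the
  // kind of self-fusion Rule 2 disallows:

  for (int i = 0; i < n; i += 2) {
    synchronized(x) { work(i); work(i + 1); }
  }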
--
Doug Lea, Computer Science Department, SUNY Oswego, Oswego, NY 13126 USA
dl@cs.oswego.edu  315-341-2688  FAX: 315-341-5424
http://gee.cs.oswego.edu/