I'd like to cast a strong concurring opinion about "Roach Motel"
volatiles. Of the options we've discussed over the years, this seems
to be the best one pragmatically. It gives non-experts, experts, and
compiler writers more of what they want than does any other proposed
rule:
* It is strong enough to allow simple "fixes" of common errors,
for example repairing double-checked locking by declaring the
checked reference volatile; see the first sketch after this list.
(Although, as David Holmes hinted, there are sure to be a few odd
cases broken in odd ways that need more than this fix.)
* It is weak enough that core library classes and the like can
contain code competitive with that in any language for avoiding
known synchronization overhead and contention. Very few of us write
such code, but nearly all users will benefit from it. For example,
as Bill mentioned, it looks like a lightly-synchronized version of
java.util.Hashtable I put together relying on these volatile
semantics (and otherwise plug-compatible with previous versions)
noticeably improves overall SPECjvm figures on SPARCs. (The second
sketch after this list illustrates the general style.) There are
several other heavily used classes that could get similar
performance improvements.
* It is sufficiently expressive that people can build various
other flavors of barriers out of volatile reads and writes (the
third sketch below shows a simple one), but the converse, starting
from stronger rules, of course wouldn't work. While the possible
reorderings can be confusing (as in Rob Strom's example, which also
had me confused for a while), they aren't any different from those
for locks. They just LOOK very different.
* We have yet to see a desirable compiler optimization that is
disabled by this rule. Acquire-release memory consistency
models in general are better understood than others, and seem
not to present optimization problems in other contexts.
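
For concreteness, the first sketch shows the one-word volatile fix
for double-checked lazy initialization. Helper and Foo are just
illustrative names, not from any particular library:

    class Helper {
        // some expensively constructed object; its details don't matter
    }

    class Foo {
        private volatile Helper helper;   // volatile is the whole "fix"

        public Helper getHelper() {
            Helper h = helper;            // unlocked volatile read
            if (h == null) {
                synchronized (this) {
                    h = helper;           // recheck under the lock
                    if (h == null) {
                        h = new Helper();
                        helper = h;       // volatile write: the construction
                                          // above cannot sink below this
                                          // publication (roach motel)
                    }
                }
            }
            return h;                     // a reader that sees non-null
                                          // also sees a fully built Helper
        }
    }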
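
The second sketch shows the general style of lock-free reads I
mean. This is NOT the Hashtable variant itself, just a tiny
copy-on-write map leaning on the same volatile publication rule
(all names illustrative):

    import java.util.HashMap;

    class CopyOnWriteMap {
        private volatile HashMap map = new HashMap();

        Object get(Object key) {
            return map.get(key);              // no lock: one volatile read
        }

        synchronized void put(Object key, Object value) {
            HashMap copy = new HashMap(map);  // copy under the lock
            copy.put(key, value);             // mutate only the private copy
            map = copy;                       // volatile write publishes it;
                                              // the writes above cannot be
                                              // reordered past this point
        }
    }

Readers never block writers and vice versa; the cost is a copy on
each put, which only pays off when reads vastly outnumber writes.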
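
And the third sketch builds a simple hand-off barrier out of a
single volatile flag (again, illustrative names):

    class Handoff {
        private int data;                 // plain, unsynchronized field
        private volatile boolean ready;   // this flag is the barrier

        void produce() {
            data = 42;     // plain write...
            ready = true;  // ...cannot be reordered below this volatile
                           // write (the release side of the roach motel)
        }

        void consume() {
            if (ready) {                  // acquire side: the read of data
                System.out.println(data); // cannot float above the volatile
            }                             // read, so this must print 42
        }
    }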
-Doug