Okay, it doesn't seem like there is any stomach for having two models.
However, your mail suggests that "the programmer must write correctly
synchronized code" is the programmer model. Is this true? That is,
are we really telling programmers that incorrectly synchronized code
is broken, and that all the other work we are doing to define
semantics in those cases is to aid debugging? If so, then I would
strongly advocate the weaker model.
Furthermore, a compiler can detect many potential data races, and if
such code is likely to be buggy, doesn't it make sense to try to catch
those cases? Surely that would be much more help in almost every case,
and it would also allow much better implementations, because we would
then be guaranteed to have data-race-free code.
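
For concreteness, here is a minimal sketch (my own illustration, not
something from the spec drafts) of the kind of access pattern such a
detector would flag, next to its data-race-free counterpart:

    // Two threads calling increment() race on 'count': concurrent accesses,
    // at least one of them a write, with no synchronization ordering them.
    class RacyCounter {
        private int count = 0;
        void increment() { count++; }      // non-atomic read-modify-write
        int get() { return count; }
    }

    // The correctly synchronized version, the only one for which the model
    // really has to promise sensible semantics.
    class SafeCounter {
        private int count = 0;
        synchronized void increment() { count++; }
        synchronized int get() { return count; }
    }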
I had thought that while we encourage programmers to write correctly
synchronized code, there were certain situations in which we wanted
to allow "incorrectly" synchronized code. The classic example of
these is relaxation-based algorithms (like many shortest-paths
algorithms); another has something to do with initialization. I
don't know much about these applications, but others I'm sure do.
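
To make the relaxation case concrete, here is a rough sketch (my own
construction, assuming the usual parallel shortest-paths relaxation;
the class and field names are made up): a stale read or a lost update
to dist[v] only delays convergence, it does not produce a wrong final
answer, as long as reads and writes of individual ints are atomic.

    import java.util.Arrays;

    // Deliberately "incorrectly" synchronized relaxation, in the style of a
    // parallel Bellman-Ford.  Worker threads call relax() concurrently.
    class ShortestPaths {
        final int[] dist;              // shared distance estimates, no locks

        ShortestPaths(int n, int source) {
            dist = new int[n];
            Arrays.fill(dist, Integer.MAX_VALUE);
            dist[source] = 0;
        }

        // Relax edge (u, v) with weight w.  The reads and the write race,
        // but a stale value only means more relaxation steps are needed.
        void relax(int u, int v, int w) {
            int du = dist[u];                      // possibly stale read
            if (du != Integer.MAX_VALUE && du + w < dist[v]) {
                dist[v] = du + w;                  // racy write; a lost update is benign
            }
        }
    }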
Also, I worry about the suggestion of "relaxing the spec over time
if new architectures make the benefits worth the pain". Relaxing the
spec will break existing code. The "pain" now will not be so great
because after all, the original JMM didn't really make sense, so
people who wanted to play it safe programmed conservatively. If we
put out a model saying that all implementations are guaranteed to
have certain properties and that programmers can rely on them, it
will be difficult (to say the least) to later go back and say that
implementations don't, after all, have to guarantee everything we
said they would.
Victor
-------------------------------
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel