Hi Sarita and others,
> Can you elaborate on what you mean by "conceptual integrity?"
This is a fuzzy notion, thus my use of scare quotes. Basically
I mean that there is a simple idea that allows someone to understand
why something is or is not allowed. In the M/P model, Bill and Jeremy
are pushing the idea of "causality", and this works okay (but not great)
for me at a high level (in the latest versions--it was really hard for
me to understand in the earlier versions), though there are still some
parts that I don't get. In particular, according to my intuition,
Example 1 of the recent email should not have been allowed.
The notion of discontinuities provided enough conceptual
integrity in the original SC- proposal, but I find the discontinuities
in the revised version somewhat harder to understand. I haven't had
much time to really consider it though, so perhaps my discomfort will
go away with familiarity.
* * *
As both Bill and Sarita point out, we need to converge quickly on some
proposal, but I am a bit uncomfortable with both their proposals. With
Bill's proposal, I worry a lot about saying something so indefinite. The
point of this exercise--or at least one important goal--is to produce
a specification that programmers can use to reason about their programs.
But we all agree that at least some "safety" properties are necessary to
maintain basic properties of Java (e.g., no "out-of-thin-air" values).
Let's give that kind of a guarantee--but as conservative (weak) a
guarantee as we can justify. If we later decide to strengthen the
model, it won't break any code that was correct under the weaker model.
Bill's suggestion gives programmers a stronger model with an admonition
that their code may break in the future, but no way to determine if it
will (or to write conservatively enough that it won't--except for truly
data-race-free programs).
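To make concrete what I mean by "out-of-thin-air" values, here is the
sort of litmus test I have in mind (my own illustrative sketch, not one
of the numbered examples from the recent emails). Two threads copy two
ordinary shared fields into each other:

    class OutOfThinAir {
        static int x = 0, y = 0;    // ordinary (non-volatile) shared fields
        static int r1, r2;          // the values each thread observes

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread() {
                public void run() { r1 = x; y = r1; }    // copy x into y
            };
            Thread t2 = new Thread() {
                public void run() { r2 = y; x = r2; }    // copy y into x
            };
            t1.start(); t2.start();
            t1.join();  t2.join();
            System.out.println("r1 = " + r1 + ", r2 = " + r2);
        }
    }

No statement in this program ever produces the value 42, so an execution
that ends with r1 == r2 == 42 would have conjured 42 out of thin air:
each read would be justified only by the write the other thread performed
because of its own read. Whatever else the programmer model leaves
unspecified, it should flatly forbid outcomes like that, and that is the
kind of weak but definite guarantee I am asking for.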
Bill seems to prefer to forbid executions that may be bad, and
then allow them in the future if we can show that they aren't. Given
that we all agree to allow many executions that are confusing, I can't
see how we will ever make this distinction--to show that these
executions aren't so bad after all. I prefer to keep the model as weak
as possible because I believe it will be much easier to strengthen the
model in the future than to weaken it (which I think will be almost
impossible).
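To see the flip side, here is the kind of confusing execution I expect we
all agree to allow (again my own sketch, a small variant of the class
above): change each thread so that it writes the constant 1 instead of
the value it just read:

    Thread t1 = new Thread() {
        public void run() { r1 = x; y = 1; }    // read x, then write 1 to y
    };
    Thread t2 = new Thread() {
        public void run() { r2 = y; x = 1; }    // read y, then write 1 to x
    };

The outcome r1 == 1 and r2 == 1 looks as though each thread saw the
other's write before performing its own read, which is certainly
confusing. But the two statements in each thread are independent, so a
compiler or processor may reorder them and produce exactly this result;
as far as I can tell, this is among the executions we have agreed must
remain allowed, unlike the 42 outcome above.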
If I understand it correctly, I like Sarita's suggestion better:
Present two models, each conservative from the perspective of its intended audience.
Nonetheless, I have a few misgivings about it.
- I worry that having two models will be confusing. We can mitigate
the confusion by presenting only the programmer model in most places,
and confining the implementor model to JVM implementation books
and the like.
- It commits us to a vague time frame. That is, what does "in the
near future" mean? I worry that it will never come to pass once the
pressure of the deadline is gone. In any case, what will be the
process for accepting the final model, if we do agree on one? Will
we have to have another JSR?
- If we do present two models, I think the programmer model needs to
include some form of "safety" property in it already--one that at
least forbids "out-of-thin-air" values. We agree that such a property
is necessary, so let's state it as weakly as possible, so that
programmers can write programs that are guaranteed to remain
correct when the final model is presented (see the sketch after
this list).
- For the implementor model, I think we need to pick something stronger
than the M/P model, which already makes some uncertain concessions to
implementors. Otherwise, even if we weaken several aspects of the
model, we may discover that, for formal cleanliness or possibly even
for consistency, we need to strengthen some other aspects of the
model, negating the benefits of having specified the stronger model.
As I've said, I'm not sure this is so bad, especially if we converge
quickly on the final model, but if we adopt this position (that it
isn't so bad), then I think we are better off not committing to a
stronger model at all. Instead, we could simply provide guidelines
for implementation, which we need to do anyway. I guess you can
think of these guidelines as a model (i.e., if you do guarantee these
properties, then your implementation will be strong enough), but they
can be very implementation-specific, as a real semantic model should
not be. Not presenting these guidelines as a real model also mitigates,
I believe, some of the confusion I worried about in the first point.
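Here is the sketch promised above, to make concrete what "guaranteed to
be correct" can mean even before the final model arrives (a minimal
example of my own, not taken from any of the proposals). As far as I
know, every proposal on the table promises sequentially consistent
behavior to data-race-free programs, so code that protects all shared
state with a common lock is already safe:

    class Counter {
        private int count = 0;    // accessed only while holding this Counter's lock

        public synchronized void increment() { count++; }
        public synchronized int  get()       { return count; }
    }

Because every access to count happens while holding the same lock, the
program has no data races, and its behavior is the same under SC-, M/P,
or any model we are likely to adopt; strengthening the model later
cannot break it. Stating the "out-of-thin-air" safety property weakly
but explicitly is what extends some guarantee beyond this purely
data-race-free case.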
My suggestion is thus to use SC- as the conservative programmer model
described above (assuming we reach consensus that the model is formally
sound), along with a boldly stated caveat that the model may be
strengthened in the future, and that, to avoid becoming incorrect, an
implementation should follow some accompanying guidelines. This
suggestion is predicated on the assumption that SC- is weak enough to
allow any executions we think ought to be allowed. Although I have not
heard anyone suggest that it might not be, it would strongly bolster
the case if we could at least show that SC- is at least as weak as
M/P; that is, that any execution allowed by M/P is also allowed by SC-.
Is this known to be true?
Victor
-------------------------------
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel