Please let's not have two! One is quite hard enough and I don't think we
need more. From the programmer's point of view I think the model was well
defined about a year ago and relies on monitors, volatiles, final field
semantics etc. All we need to say from a programmer's point of view is that
incorrectly synchronized programs are broken. The tricky problem is: what
happens when we run broken code? We clearly need to ensure type safety,
and that's where some of the complexity arises. It would also be very
nice to be able to explain 'bugs' without needing to employ a memory model
consultant.

At present we have two alternative models and we just need to pick one.
From the type safety point of view they are equivalent, but I think that
from a 'bug finding' point of view there's a lot to be said for insisting
on causality. In real life programmers stop threads at a breakpoint and
examine the program state. If that state is allowed to contain items that
don't really exist yet, because their 'cause' hasn't yet been born, then
we are in trouble. OK, so that's a gross simplification, but that's what
a lot of the more subtle optimisations will look like. If we start with
something causal then we can consider relaxing the spec over time if new
architectures make the benefits worth the pain.

My vote would be to keep the simplicity of exposition of Sarita's model
and find an 'intuitive' way to add the causality to the read spec. The
result should be the best of both worlds.
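To make the 'cause hasn't yet been born' worry concrete, here is a sketch of the classic out-of-thin-air race (the class and variable names are mine, not from either proposed spec). Every write merely copies a value that some read returned, so under a causal model the only possible outcome is r1 == r2 == 0; a model without a causality requirement could in principle justify a result like r1 == r2 == 42, where each read 'sees' a value whose only cause is the other speculative read.

```java
// Two threads race on plain (non-volatile) fields x and y, both initially 0.
// Thread 1: r1 = x; y = r1;    Thread 2: r2 = y; x = r2;
// Deliberately unsynchronized: this is a broken program by the rule above.
public class OutOfThinAir {
    static int x = 0, y = 0;   // shared, no monitors or volatiles
    static int r1, r2;         // results, published to main via join()

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { r1 = x; y = r1; });
        Thread t2 = new Thread(() -> { r2 = y; x = r2; });
        t1.start(); t2.start();
        t1.join();  t2.join();  // join() gives main a consistent view of r1, r2
        System.out.println("r1=" + r1 + " r2=" + r2);
    }
}
```

With a causality requirement, a debugger stopping these threads can only ever observe values that were actually written somewhere; without it, the printed state could contain a value with no origin in the program text.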
Martin Trotter
-------------------------------
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel