Bill said:
> Under my proposal, this would be a data race.
Well, all multiple-reader parallelism includes races, since readers may
observe partially completed updates. The only safe strategy for using
such algorithms is to ensure that the data are always consistent, so
they can be read even in the midst of updates. The main property they
(typically) need is that readers observe the effects of all updates
that have occurred at least as of the onset of the read operation, for
some reasonable definition of "onset". (So, under the current
JLS-style MM, onset can be defined in terms of committed writes to
main memory.)
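
For concreteness, here is a minimal sketch (mine, not from this
thread; all names are illustrative) of the kind of construction I
mean: a single-writer list whose readers traverse it without locks.
The structure is consistent at every instant, so a read can overlap an
append; what a reader needs is to observe all appends that completed
before its traversal began. (Under the current model that visibility
would still have to be arranged with volatile or synchronization,
which this sketch leaves out.)

  class SingleWriterList {
    static class Node {
      final int value;        // fully initialized before the node is linked
      Node next;              // changes only from null to non-null
      Node(int value) { this.value = value; }
    }

    private final Node head = new Node(0);  // dummy header, always present
    private Node tail = head;               // touched only by the one writer

    // Writer: build the node completely, then link it in a single step,
    // so the list is never observed in a half-updated state.
    void append(int v) {
      Node n = new Node(v);
      tail.next = n;
      tail = n;
    }

    // Reader: traverses with no locks; correct so long as it sees the
    // effects of every append that completed before this call began.
    int sum() {
      int s = 0;
      for (Node p = head.next; p != null; p = p.next)
        s += p.value;
      return s;
    }
  }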
> The fact that the
> thread doing the writing does a sync is only visible to other threads
> doing a sync on the same object.
>
Yes, I see that under your proposed model, there is no way to
guarantee visibility in this and related constructions without
synchronization across threads, as opposed to synchronization with
main memory. I'll try to summarize the issues here in a separate post.
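
Here is a minimal sketch of the distinction (again mine; the names are
illustrative): under the old main-memory reading, the writer's unlock
and the reader's lock both interact with main memory even though they
use different monitors, so the write eventually becomes visible; under
the proposed model, ordering arises only between threads synchronizing
on the same object, so nothing here orders anything.

  class DifferentLocks {
    static int data = 0;
    static final Object writerLock = new Object();
    static final Object readerLock = new Object();

    static void writer() {            // runs in thread 1
      data = 42;
      // Old reading: the unlock below flushes working memory to main memory.
      synchronized (writerLock) { }
    }

    static void reader() {            // runs in thread 2
      // A *different* lock. Old reading: the lock refreshes working memory
      // from main memory, so the read below eventually observes 42.
      // Proposed model: no ordering with the writer at all, so the value
      // read may remain 0 indefinitely.
      synchronized (readerLock) { }
      int r = data;
    }
  }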
Paul said:
> class VolatileArray {
>   private final int[] data;
>   private volatile int macguffin = 0;
>   public VolatileArray(int cap) { data = new int[cap]; }
>   public int get(int i) {
>     return data[i + macguffin];
>   }
>   public void set(int i, int value) {
>     data[i] = value;
>     macguffin = 0;
>   }
> }
>
> would probably have to work. It makes get() a little more expensive,
> but eliminates the need for a read barrier in set(). Even compilers
> which detected that the macguffin variable was unused would need to
> preserve the memory barriers.
I think that variants of the optimizations that Dave Detlefs and
David Bacon described would still foil this attempt -- the optimizer
could still check whether multiple calls to get() refer to the
same element, and if macguffin were read using only atomic read/write,
you still wouldn't have the barrier.
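
To make that concrete, here is roughly the rewrite I have in mind
(illustrative only; the spinning caller, the local cached, and the
assumption that the proposed model gives the volatile access atomic
read/write semantics with no ordering are all mine):

  // Reader spinning on element 0:
  //
  //   while (va.get(0) == 0) { }
  //
  // After inlining get() and observing that macguffin is never assigned
  // anything but 0, every iteration refers to the same element. If the
  // volatile access is merely atomic, with no barrier attached, the data
  // load can be hoisted and the loop need never see the update:

  int m = va.macguffin;        // still an atomic read, but orders nothing
  int cached = va.data[0 + m]; // compiler's view after inlining
  while (cached == 0) { }      // spins on the stale value forever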
-Doug