At 11:34 AM -0800 11/8/99, Joshua Bloch wrote:
>Folks,
>
> I have, unfortunately, not had the time to keep up with this discussion of
>late. In fact, I haven't read the last 50 or so messages in any detail, but
>I'm still going to throw in my ill-informed 2 cents.
>
> It's sheer lunacy to even think seriously about a model that breaks a
>simple "single check" idiom applied to integers. It's wildly
>counterintuitive.
>All it does is to ensure that Joe Programmer (not to be confused with Joe
>Bowbeer) will rarely, if ever, write a correct, non-trivial multithreaded
>program. At the very least, it would demand a massive re-education program.
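For context, by a "single check" idiom applied to integers, I take Josh to
mean something like the lazily cached hash code below -- my own sketch, with
computeHash() standing in for whatever pure computation is being cached, not
code from his message:

    private int cachedHash;   // 0 means "not yet computed"

    public int hashCode() {
        int h = cachedHash;        // single unsynchronized read into a local
        if (h == 0) {
            h = computeHash();     // stand-in for the real computation
            cachedHash = h;        // racy, unsynchronized write
        }
        return h;
    }

The question under debate is whether a proposed memory model could break even
this.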
I understand that lots of brilliant people (plus people like myself) have
written code avoiding synchronization that they thought would work under some
particular memory model, only to be surprised later to find that it doesn't.
The question is, what do we take from this?
1) We need to be even more clever at devising a memory model, so that it will
handle more idioms while still being reasonably efficient to execute.
2) We need to stop trying to write concurrent programs without using
synchronization and/or volatile variables.
Nothing would make me happier than devising a good solution to (1). But I am
increasingly dubious that anyone will be able to come up with a solution to (1)
that will be:
a) Easily explainable to Joe Programmer, so that he can tell which idioms
will work and which won't.
b) Possible for VM implementers to understand and guarantee.
c) Free of surprises 10 months down the road.
On the other hand, if we tell people:
"You must use synchronization/volatile"
everybody will be able to understand. It then becomes a research
challenge to minimize synchronization cost. Currently, on a number
of benchmarks I've looked at, synchronization overhead is about 5-12%.
That figure doesn't include any compiler removal of redundant
synchronization. Even if we have to go back and put in more synchronization,
I'm confident that the cost will remain under 20%. The way processor speeds
are going, you will gain back that much speed in the time it takes you to do
Q&A on a product you are planning to ship.
I suspect that with improvements in research on synchronization minimization,
even with more synchronization in code, the overall overhead will drop to less
than 5% for most code.
For example, could the compiler look for synchronization idioms that could be
safely replaced by a single-check idiom on some processors? Given:
    volatile Helper helper;

    Helper getHelper() {
        if (helper == null)           // volatile read
            helper = new Helper();    // volatile write (the release)
        return helper;                // volatile read
    }
a compiler could recognize that, given certain legal assumptions about
interleaving, helper would only be assigned once. Thus, any later read only
has to match the release done when the Helper is allocated, and on a
uniprocessor it would be legal to make the read be non-volatile.
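In other words, on a uniprocessor the compiled code could behave as if the
field were not volatile at all. As a sketch of the effect only (not a
source-level rewrite anyone should apply by hand):

    Helper helper;   // conceptually non-volatile in the uniprocessor-compiled code

    Helper getHelper() {
        if (helper == null)          // plain read; no barrier needed with one processor
            helper = new Helper();
        return helper;
    }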
I understand that if we choose to tell people they _have_ to synchronize, it
will require an absolutely massive re-education campaign.
I haven't given up on devising a solution along the lines of (1), and I am
continuing to think along those lines. But I think we need to be prepared to
bite the bullet and just tell people that they have to give up their cherished
synchronization-avoiding idioms.
Bill
-------------------------------
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel