I'm changing the subject from ownership types, as I haven't read any of
the papers recently enough to have a really meaningful conversation about
them. But I did have some questions about David's other points that I
would like to pose to the list. Like he said, gasoline on the fire.
> i've been watching for two years on this list as professors and ph.d.
> students in computer science go back and forth trying to understand the
> "memory model". if you need a ph.d. to understand the language
> semantics, then the language has a problem. languages are designed to
> be used by thousands or even millions of people. their meaning should
> be clearly statable. instead, the kinds of discussions i've watched
> going by on this list look like pilpul (*).
I have been rereading the list a bit recently (trying to make sure there
are no loose ends), and this issue is something that comes up a lot. I
have felt that in spite of the rather abstruse debates here about corner
cases in improperly synchronized code, the informal semantics of the
memory model are reasonably clear; once you understand that there are
memory model issues, you can understand the basic rules. The full formal
semantics are nasty, but I would expect almost no one to bother figuring
them out, when the informal rules would suffice.
The other side of this is that I'm getting the feeling that a lot of
people think that there is some major witch doctory in the "informalisms".
My first question is, do people feel that the informal semantics are
genuinely inherently difficult, once some of the problems with memory
models have been explained? Do you need a Ph.D. for this?
This ties in closely to my next question, which is about explaining memory
model issues in the first place...
> our experience in the field is that almost everyone programs java
> assuming that the memory is sequentially consistent. when it's not,
> they are truly surprised. i'm not just talking about the unwashed
> masses here. i'm talking about top-gun programmers, people with
> ph.d.'s, the whole gamut. and even when i explain to them why their
> programs are broken, it takes 3 or 4 times before they get it. it's not
> because they're dumb. it's because the "memory model" is so totally
> counter-intuitive that they just can't believe they really have to worry
> about such crap.
I would never expect anyone who isn't familiar with memory model issues
already, no matter how smart they were, or how wonderful they were at
concurrent coding, to find this stuff immediately intuitive. But I am not
sure I would expect smart people instantly to grasp the importance of
mutual exclusion, either - perhaps faster than memory models, but not
instantly. So, my other question is, how much of this is that programming
your way around the memory model is hard, and how much of this is that a
large number of people (including myself) learned how to write concurrent
programs without being told that this issue exists?
Programmers have a genuinely incorrect notion of how their code is
executed (sequential consistency). If that notion is beaten out of them,
is this still a problem?
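For anyone following along who hasn't seen the canonical example of this surprise: here is a small illustrative sketch (class and field names are mine) of the two-thread reordering test. Under sequential consistency, at least one thread must see the other's write, so (r1, r2) == (0, 0) is impossible; the Java memory model permits it, because the fields are not volatile and there is no synchronization.

```java
// Classic store-load reordering example. Under sequential consistency,
// the outcome r1 == 0 && r2 == 0 cannot happen; the JMM allows it.
public class Reorder {
    static int x = 0, y = 0;   // plain (non-volatile) shared fields
    static int r1, r2;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { x = 1; r1 = y; });
        Thread t2 = new Thread(() -> { y = 1; r2 = x; });
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Under SC only (0,1), (1,0), (1,1) are possible for (r1, r2);
        // on real JVMs and hardware, (0,0) can also be observed.
        System.out.println("r1=" + r1 + " r2=" + r2);
    }
}
```

A single run rarely shows the surprising outcome; it takes many iterations (or a tool that enumerates legal executions) to catch it, which is part of why people don't believe it until they see it.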
Are people surprised (and I'm sure most of us have seen colleagues' jaws
drop when they are told some of this stuff) because the memory model
rules are genuinely counterintuitive, or because people happily write
concurrent programs for years without ever having heard of this? Or is it
both?
If undergraduates are taught semaphores as devices for enabling
communication as well as mutual exclusion, does this problem lessen?
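To make the distinction concrete, here is a minimal sketch (names are mine) of a semaphore used purely for communication: the consumer blocks until the producer signals that data is ready, with no critical section in sight. The acquire/release pair also establishes a happens-before edge, so the consumer is guaranteed to see the producer's write.

```java
import java.util.concurrent.Semaphore;

// A semaphore used for signaling (communication), not mutual exclusion.
public class Signal {
    static final Semaphore ready = new Semaphore(0); // zero permits: acquire blocks
    static int data;

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            data = 42;        // publish the value...
            ready.release();  // ...then signal that it is ready
        });
        producer.start();
        ready.acquire();      // blocks until release(); also a happens-before edge
        System.out.println("data = " + data); // guaranteed to see 42
        producer.join();
    }
}
```

Taught this way, the semaphore is visibly about ordering between threads, not just about keeping them out of each other's critical sections.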
I would be very curious to hear people's thoughts on this.
Jeremy
-------------------------------
JavaMemoryModel mailing list - http://www.cs.umd.edu/~pugh/java/memoryModel