* * * * * * * * * * * * * *
After my review of his work appeared on quant-ph, Henry Stapp posted a reply on his web site. I reproduced that reply, quoting the points to which he had replied and giving my subsequent responses to him. Following correspondence, and with some editing from us both, this has now developed into the multi-stage debate which is presented here. Some discussion of my 1999 paper is included.
The remarks are numbered, with material from my review labelled (0), Stapp's reply as (1), my response to that reply as (2), and so on.
Throughout the debate, we use von Neumann's language of “Process 1” and “Process 2” to refer to ways in which quantum states may change with time. Process 2 refers to the Schroedinger equation or a relativistic generalization of it. In Process 1, there is supposed to be some sequence of projections (P_1, P_2, . . ., P_N) defining the possible outcomes of a measurement. These projections might, for example, be defined by some preferred orthonormal basis. As a result of the measurement, a density matrix S is supposed to change to the mixture P_1 S P_1 + P_2 S P_2 + . . . + P_N S P_N. Our most fundamental disagreement is about whether or not Process 1 is real. Reference is also made to “Process 3”. This is the change by which the mixture is replaced by exactly one of its components — say, P_i S P_i / tr(P_i S P_i).
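As a concrete illustration of these definitions (an editorial sketch added to this write-up, not part of the original exchange), the following toy calculation applies Process 2, Process 1, and Process 3 in turn to a two-level density matrix. The Hamiltonian and the choice of the standard basis projectors as the P_i are arbitrary assumptions made only for the example.

```python
import numpy as np

# A pure-state density matrix S = |psi><psi| for a toy two-level system.
psi = np.array([0.6, 0.8j])
S = np.outer(psi, psi.conj())

# Process 2: unitary (Schroedinger) evolution S -> U S U^dagger.
# U = exp(-i * sigma_x * t), written out explicitly; the Hamiltonian is arbitrary.
t = 0.1
U = np.array([[np.cos(t), -1j * np.sin(t)],
              [-1j * np.sin(t), np.cos(t)]])
S = U @ S @ U.conj().T

# Process 1: S -> P_1 S P_1 + P_2 S P_2 for an assumed preferred basis.
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
S_mixture = sum(Pi @ S @ Pi for Pi in P)   # off-diagonal terms are removed

# Process 3: exactly one component is realized, with probability tr(P_i S P_i);
# the realized component is renormalized.
probs = np.array([np.trace(Pi @ S @ Pi).real for Pi in P])
i = np.random.default_rng(0).choice(len(P), p=probs / probs.sum())
S_outcome = P[i] @ S @ P[i] / probs[i]

print(np.round(S_mixture, 3))
print(np.round(probs, 3), "-> realized outcome", i)
```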
* * * * * * * * * * * * * *
DONALD REVIEW (0)
Stapp proposes that quantum events are physical and occur in inanimate objects as well as in the brains of observers. This means that he encumbers himself with the necessity of providing a characterization of the inanimate occurrences of the events if he is to complete his interpretation.
STAPP REPLY (1)
The primary focus of the orthodox interpretations (Copenhagen and vN) is on human beings. Our approach mainly follows that pragmatic tack. But the issue of an imbedding ontology will also be discussed.
DONALD REVIEW (0)
Stapp and I also disagree at a fundamental level about the randomness of quantum events. In Stapp (1993, 7.6), for example, he writes that, “it is an absurdity to believe that the quantum choices can appear simply randomly ‘out of the blue’, on the basis of absolutely nothing at all.” Presumably because he believes that reality is not absurd, Stapp uses this first claim to argue that consciousness intervenes in quantum events to influence outcomes.
STAPP REPLY (1)
No! The intervention of conscious choice in orthodox QT comes via the dependence of the Process 1 choice upon a human chooser, who (freely) chooses what to measure.
DONALD (2)
I'm not sure what Stapp is objecting to here. In the introduction to the first chapter of his book, he says, “brain processes are causally influenced by subjective conscious experience” and “The theory fixes the place in brain processing where consciousness enters, and explains both the content of the conscious thought and its causal efficacy.”
Causal influence and causal efficacy would be empty without influence on outcomes. In Stapp's later Zeno work, he explains how sequences of free process 1 choices can determine outcomes.
STAPP (3)
I am objecting to the “because he believes that reality is not absurd”. That is not my line of argument, not my reason for saying that consciousness intervenes. That conclusion comes not from the “absurdity” claim, but from the structure of orthodox QM.
DONALD REVIEW (0)
Stapp's . . . even bolder third claim . . . is that the choice of which question will be put to nature, is not controlled by any [known] rules . . . any known laws of [orthodox contemporary] physics.
STAPP REPLY (1)
This is not a bold claim: it is a simple fact. Within orthodox QT the Process 1 choice is not specified by any known law. In that specific sense the observers' choices are “free choices”.
DONALD (2)
So in what sense are observers' choices not free?
My own answer is that observers are free to choose to build or employ one apparatus rather than another, but that, up to the moment of observation, the workings of any apparatus have to be analysed using process 2 (the Schroedinger equation).
STAPP (3)
If the observers are free to choose, that sounds like Bohr, and the pragmatic approach, where observers are introduced and given powers that are not specified by Process 2 alone.
DONALD (4)
It may sound like that, but that is not what “free to choose” means to me. I believe both that free choice is an illusion in the sense that all future possibilities and their probabilities are predetermined by physical laws, and that our choices are real in the sense that we have to make them within the lives we lead. When I go shopping, I don't know what I will buy. I find out by experiencing my making of choices.
The step beyond process 2 comes when that process leads an observer towards the simultaneous experience of more than one outcome. The step is not because an observer has special power; it is forced by the requirement that the experience of an individual observer be definite.
STAPP (5)
I refer back to my original assertion [in STAPP REPLY (1)] that my meaning of “free choice” is very specific. I repeatedly emphasize in my writings that MY words “free choice” refer SPECIFICALLY to the fact that this “choice on the part of the experimenter” is not specified by the currently known orthodox laws of physics. This is a key point: if these choices are definite, as they seem to us to be, then whatever it is that fixes these choices involves considerations that go beyond what is specified by the contemporary theory that is supported by empirical evidence. More theory, beyond empirically supported contemporary orthodox quantum theory, is needed in order for us to be able to assert what it is that fixes these “free choices”. Von Neumann rigorizes the orthodox theory, and brings our brains into the part of nature described by the quantum mathematics, but that rigorization and expansion does not eliminate the need for choices not specified by the Schroedinger-equation-directed evolution, called Process 2. So there is an essential gap in contemporary theory that at least in principle opens the door to the possibility that our conscious choices of how to act, which are key components of contemporary orthodox quantum theory, are actually playing the role that orthodox quantum theory assigns to them. It seems reasonable to try to develop physical theory in the direction in which it seems to be moving, relative to the preceding classical mechanical theory, instead of trying to resuscitate the classical notion that the realities that we know as our conscious choices are causally impotent.
You say “The step (beyond Process 2) is not because an observer has special powers; it is forced by the requirement that the experience of an individual observer be definite.”
It is true that this step beyond Process 2 is required in order to accommodate the empirical fact that we do have definite experiences, which depend upon what we do: upon how we choose to probe the system being probed. That stubborn fact was the basis of the Copenhagen justification for bringing into the quantum dynamics the observer/experimenter and his “free choices”.
But this leaves open the question of whether the impression that our conscious will can influence our behavior is an illusion or not. It is quite CONSISTENT with the known laws to say that our conscious will CAN affect our actions: that this universal impression of the causal power of our thoughts to influence our actions is not an illusion. If this causal efficacy is indeed a rationally allowed possibility, then it is worth trying to spell out how that could work. It is not ascientific to consider our thoughts to be a part of the total reality that DOES something not already done by other parts.
You say that you believe “that free choice is an illusion in the sense that all future possibilities and their probabilities are predetermined by physical laws.” But choice is not about “future possibilities and their probabilities”. It is about picking out actual experiences from among the possibilities. I have carefully avoided any claim about what causes our choices to be what they are, apart from the claim that contemporary orthodox theory does not specify these causes, and in particular does not claim that these choices are specified by the local mechanical deterministic Process 2, the (relativistic generalization by Tomonaga-Schwinger of the) Schroedinger equation. Orthodox theory does not demand that we merely “find out” what we will do, with what we will do controlled by the Schroedinger equation alone. Achieving that is the goal of MW theorists.
STAPP (3)
In my parlance, the distinction between the pragmatic and many-worlds (MW) approaches is that in the MW case the only rule is Process 2, and everything is required to be specified by that Process 2 alone, whereas in the (orthodox) pragmatic approach the human agents are brought in and allowed to make choices not deduced from Process 2. The effects of these choices on the physical universe are specified by Process 1. If you allow the observers to freely choose how the physically described system is to be probed, then in my parlance you are on the pragmatic side of the Pragmatic versus MW divide.
The problem with the MW approach that forced the founders of QM to the pragmatic alternative is the apparent impossibility of deducing Process 1, which is needed to link the QM mathematics to statistical predictions about experiences, from Process 2 alone. The problem is that it seems to be impossible to deduce from Process 2 alone the DISCRETE BASIS specified by Process 1, from the continuous smear of overlapping possibilities produced by Process 2.
DONALD (4)
We agree that quantum events are discrete and that this requires theoretical explanation. In my theory, the discreteness stems ultimately from a fundamental discreteness of observer structures. I don't ask what spectrum of a self-adjoint operator is measured in the production of a cloud chamber photograph. I ask what possibilities our structures make us capable of seeing.
STAPP (5)
We seem to agree that the needed discreteness comes from the mental aspects of reality, and not from Process 2 alone. In von Neumann's theory this element of discreteness is brought in by Process 1, which specifies a “basis” of states coordinated with our structure as soon-to-be experiencing observers. The Process 1 choice is coordinated to our psychological/phenomenal/experiential aspects.
What is the problem with an ontological dynamics based on Process 2 alone?
Consider a situation where a Stern-Gerlach apparatus is set up so that the vertical initial beam is in the z direction, and the device is on a horizontal wheel that can be rotated so that the axis can be any direction “theta” in the x-y plane. And suppose the initial conditions on the observer, and all the uncertainties in his brain, conspire to make the part of his brain that controls his choice about “how he will set the device” uniformly distributed over the 2pi possible values of theta.
There is a continuum of values of theta. But there is not a continuum of different distinct experiences of theta: our experiences are not sufficiently precise to specify an exact value for theta. And if a distinct experience did specify a precise value of theta, then the probability for each exact value of theta would be zero. If one associates possible discrete (distinct) experiences with structures that are slightly smeared out in theta, in order to accommodate the inexact relationship between experience and theta, then how does one satisfy, using Process 2 alone, the condition that the total probability of having an experience is not infinite? In this dynamically symmetrical situation, there is, for any proposed experienced state, a continuum of others generated by tiny displacements in theta.
So how does an ontological theory with only one ontological process, namely Process 2, and purely illusory choices, yield, in this symmetrical situation, a rationally coherent theory of the relationship between the mathematical theory and actually occurring streams of conscious human experience? Is there an infinitude of conscious experiences actually occurring when the experimenter makes his conscious choice?
According to orthodox QM, there is another process, Process 1. It specifies a preferred set of orthonormal basis vectors or in any case a separation of the pertinent Hilbert space into a set of orthogonal subspaces that correspond to different distinct experiences. I do not believe one can ontologically specify such a set of basis states in this symmetrical case solely from Process 2. This symmetry merely highlights the general problem of using a continuous dynamics to produce a discrete basis in finite time. And with no discrete basis the connection to discrete experiences is hard to define.
DONALD (6)
The problem with “orthodox QM” is precisely that the “preferred set of orthonormal basis vectors” invoked in Process 1 is *not* specified.
In my own analysis, I do not attempt to specify a discrete orthonormal basis. There are all too many orthonormal bases. What I argue is that, when one looks at the right level of abstraction and with the right definitions, there are, up to any given level of complexity, only finitely-many finite patterns of elementary events which change and then almost recur.
My analysis adds no new interactions or mental influences to Process 2. I merely try to identify the structures which correspond to observers.
DONALD REVIEW (0)
It is hard to avoid the idea that in physics ultimately everything just does appear “out of the blue”. In classical mechanics, for example, physics does not explain the initial conditions, but merely provides the laws by which those initial conditions develop.
STAPP REPLY (1)
In the mindless, choiceless, early classically conceived universe the initial conditions do indeed come from “out of the blue”. But the real world is a quantum world!
At the primary pragmatic level orthodox QT deals with the problem of causation by using the facts that: (1) Process 1 choices are causally efficacious in the physical world independently of their causal roots, and (2) those causal roots are, in principle, untraceable solely in the physical world. Hence we are both entitled to, and required to, take these choices as primary variables, whose physical effects are specified, but whose origins must, to the extent that they can be traced, involve also non-physical variables.
DONALD (2)
It is a mistake to confuse “pragmatism” and “principle”. As a theory, Process 1 is not only pragmatic but also incomplete, and because of that incompleteness, I believe that I am “entitled” to ask for a more complete theory (whether physical or non-physical). I would expect such a theory to characterize the circumstances in which process 1 choices are made and the structure of the choice projections.
STAPP (3)
I always emphasize “pragmatic” in order to point to the orientation of orthodox QM toward practical use, not ontology. An ontological description is required to be complete, but a pragmatic theory is only required to be useful. William James in his defense of pragmatism emphasizes that pragmatism sees itself as on the path to better understandings — and so ought science. We are not yet at the end of the road on these matters of the scientific understanding of mind. It is a grave mistake for science to pretend that we now have all the answers, and can pontificate authoritatively on every matter, including the basic nature of human beings, and the relationship of mind to the rest of nature. I am actively pursuing the problem of the origins of our thoughts with K. Laskey. See reference in the newer version of the BBS paper on my website.
DONALD (4)
We may never achieve perfect consistency and completeness in our scientific theories, but I believe that one of the best ways of developing our understanding of what quantum mechanics may be telling us about the nature of reality is to pick relentlessly at loose ends.
STAPP REPLY (1)
If we seek to imbed this pragmatic structure in an objective one-mind-per-person ontology then the rules must be specified that fix the particularness of each person's choices.
DONALD (2)
Exactly. And I do not believe that Stapp has specified such rules.
STAPP (3)
Exactly.
STAPP REPLY (1)
The need for these rules would indeed be avoided by going to a many-minds-per-person formulation. But one would then need *other rules* for separating the continuously smeared out world of possibilities into individual streams of consciousness in ways that would yield the vN statistical rules of QT for the correlation among the increments of knowledge along the individual streams of consciousness.
DONALD (2)
I agree. And I *have* tried in my papers to specify these “other rules”.
STAPP REPLY (1)
This is technically a much more difficult task than the one-mind task, because in the latter case one has two more physical processes, Processes 1 and 3, at one's disposal and the *given* mathematical structure of the vN rules ensures the proper correspondence to the data, *independently of the causal roots of the Process 1 choices*.
DONALD (2)
It is not clear which task is more difficult. The quantum Zeno effect is so powerful that, if process 1 choices were genuinely “free”, any desired future result could be conjured up, at any time, by appropriate sequences of “causally efficacious choices”. So rules are needed to determine what choices subjects actually have in particular circumstances. With my attachment to neurophysiology, I believe that these rules ought to produce a theory mapping brain states to sets of orthogonal projections. Until this has been done, and the consequences shown to be compatible with observation, we cannot know how hard, or how implausible, or how complex, it is.
STAPP (3)
The point I am making is that doing these things with three processes ought to be easier than doing it with just one of them, particularly when one of the others can be so powerful, as you stress. With Process 1 at your disposal it is a matter of limiting its power, whereas without it the task could be strictly impossible.
STAPP REPLY (1)
In the many-minds case one has only one real process, Process 2, but must find rules that will allow the events along the somehow-defined individual streams of consciousness to conform to the stringent mathematical demands of the von Neumann rules. This is a difficult technical task with stiff mathematical requirements, whereas the task of specifying the causal roots of the Process 1 choices has no stiff requirements: we can get quite far with only loose ideas about the psychological/physical roots of these choices.
DONALD (2)
Everett got quite far in the many-minds case with only loose ideas. I have tried to go further.
STAPP (3)
In vN theory the loose idea is severely restricted. It is the causal roots of the choices that are the big loose screw.
Do you claim to have a complete MW theory?
DONALD (4)
Yes. But see also the answer to the final question of my FAQ, and the introduction to Donald 1999.
STAPP REPLY (1)
In any case, the vN three-process formulation is what is supported empirically, and is thus the more secure foundation for scientific development.
DONALD (2)
It seems to me that much of the recent work in quantum mechanics, both experimental and theoretical, including work on decoherence, on the quantum Zeno effect, and on quantum computation, can be seen as indicating that there is no direct support for process 1, except as a heuristic first step towards explaining appearances.
STAPP (3)
The orthodox theory works. It seems reasonable to accept it, and see what it says in neuroscience. It might possibly be that the palpable efficacy of conscious effort is merely an appearance, not the real thing, but orthodox theory allows another, quite different, possibility, which is interesting and worth explaining.
What one sees in the recent works that you mention seems to depend on one's point of view. Carefully assessed, they do not show that one can get along without Process 1. A huge amount of effort has been expended in the effort to get the observer back out, and those workers are encouraged by seeing traces of classically describable features here and there. Efforts to neutralize our role in nature will undoubtedly persist for a while longer. But it is certainly worth emphasizing that QM need not neutralize us, and that orthodox QM not only does not neutralize us but incorporates us into the dynamics in a way that, although not in line with nineteenth century ideals, seems to fit nicely the emerging data. And it does not require us to turn our most deeply felt perception about ourselves, a primary life-spanning continually reaffirmed stream of intense data, into a deceptive illusion. James stressed that all observation is fallible, and certainly we are often deceived about details, but the expansion of fallibility about details into this wholesale rejection should not be entered into lightly when it goes against basic empirically supported orthodox physics, and when there is no evidence indicating the need for such a rejection of contemporary orthodox theory.
STAPP REPLY (1)
The many-minds ontology is highly speculative, basically obscure, anti-occam, and not yet developed to the point of being able to rigorously produce the predictions of orthodox vN quantum theory.
DONALD (2)
I have often admitted that aspects of my work are speculative. It is certainly complex, but then I do not know how complex reality actually is. I do not accept that my work is “anti-occam”. I only multiply entities under necessity; in particular, the necessities of avoiding solipsism, of theoretical consistency, and of compatibility with the empirical evidence for process 2, for (special) relativity theory, and for consciousness.
STAPP (3)
I think the empirical evidence is for the tripartite orthodox process. But, of course, evidence is theory-laden, so if you can devise another theory that accounts for all the data that the orthodox theory does, then your theory is equally well empirically supported. But it is probably more in line with Occam to eliminate the many worlds we cannot experience, and which cannot affect us, than to eliminate our palpable power to influence our own bodily behavior.
DONALD (2)
I do believe that my theory has been developed to the point where it reproduces the predictions of orthodox quantum theory (in the circumstances in which the orthodox theory makes confirmable predictions). This is discussed at length in the paper which contains the current definitive statement of my theory: “Progress in a Many-Minds Interpretation of Quantum Theory”, quant-ph/9904001.
STAPP (3)
I shall make time to have a look at it: I base my assessment of MW on the many papers on the subject that I have studied, and on my own efforts in that direction, but I was unaware of your paper.
* * * * * * * * * * * * * *
DONALD (4)
Several weeks after the previous remark, Stapp posted some comments on my paper. I have placed them and my responses to them here within this debate.
STAPP (5)
I have looked over your 1999 paper and have some comments.
I appreciate your “aim of providing a firm technical foundation for Everett's idea” and agree with you that “There should be no doubt that such a foundation is needed.” I agree with you that “the difficulty of working out the details has been widely underestimated”. I agree also that the solution should “try to describe the world in some particularly elegant fashion.”
But your effort to create a satisfactory foundation for a Many-Minds theory seems to be singularly inelegant: it is based on 44 highly technical hypotheses. You mention (p.9) that this long string of intricate assumptions might be regarded as “ugly” and note that the issue of their suitability as the requisite “fundamental laws” is for the reader to decide.
DONALD (6)
On page 9 of my paper, I am discussing the specific constraints introduced in the third section. These constraints seem to me to be the most arbitrary parts of my hypothesis. I continue:
It should be noted, however, that in compensation for these constraints, not only does one gain a possibly complete and consistent interpretation of quantum theory, but also, as we shall see in section 9, one may be able to reduce, perhaps to zero, the vast amount of information which would be conventionally required to specify exact initial conditions. Although initial conditions and physical laws are neatly separated in conventional physics, this does not mean that they should be ignored in the total count of the complexity or fussiness of a theory.
STAPP (5)
I strongly suspect that most readers, if they study your paper, or even just look at it, will draw the conclusion that if you are right that the foundation of many-minds theory depends on such an intricate set of assumptions/laws, then the Many-Minds idea is probably wrong.
If constructing a rationally coherent and logically satisfactory many-minds theory requires, as your detailed and extensive study indicates, such a technically intense set of foundational laws, then I think most readers will conclude that many-minds is not a reasonable alternative to the more natural possibility that our sense experiences are parts of an objective reality that often give approximately valid information about the natural course of macroscopic events within that reality.
Your laudable attempt to place on a rational footing the contrary “many-minds” idea — that each stream of consciousness is a mere individual branch of a tree of diverging and contradictory streams of consciousness, all equally real in the large objective sense — appears, in view of its intricate ad hoc nature, to be more like the death rattle of a collapsing radical idea than the foundation of a viable theory of natural reality. I would be hard pressed to find a stronger argument against the many-minds (or many-worlds) idea than would be provided by a perusal of your arduous and commendable many-paper effort to provide a rational foundation for that idea.
You call the conventional theory “hopelessly inconsistent”. I do not agree that the idea of real collapses is hopelessly inconsistent. There is no inconsistency in von Neumann's model, only incompleteness as regards the mind-brain connection, the origins of our conscious choices, and the extension of that connection to non-humans. Incompleteness is not equivalent to inconsistency. There is no good reason to demand, in this difficult and active area of research, that our current level of knowledge should be complete: mind-brain science has not yet reached its terminus.
DONALD (6)
I agree that incompleteness is not equivalent to inconsistency. In the long term, however, neither is acceptable. Perhaps I should not have used the word “hopelessly”. Perhaps the problem is an excess of hope.
Different people understand different things by the idea of “conventional” quantum theory. They may refer to von Neumann's model, or to some version of Bohr's ideas, or just to commonly-held ideas about quantum theory.
Commonly-held ideas certainly seem to me to be inconsistent. On the one hand, there is the idea that quantum mechanics applies to all physical systems without exception, and on the other, there is some vague idea of a classical regime. On the one hand, there is an idea of Lorentz invariant quantum field theory with no distinguished hypersurfaces, and on the other, there is an idea of “collapse”. On the one hand, there is an idea of quantum states as states of knowledge, and on the other, there is an idea of quantum states as physical states.
As far as Bohr is concerned, even you yourself said, (in the introduction to H.P. Stapp, “The Copenhagen Interpretation”, Amer. J. Phys. 40, 1098-1116 (1972), reprinted as chapter 3 of “Mind, Matter, and Quantum Mechanics”):
The writings of Bohr and Heisenberg have, as a matter of historical fact, not produced a clear and unambiguous picture of the basic logical structure of their position. They have left impressions that vary significantly from reader to reader.
A similar point is made by Mara Beller in chapter 8 of her book “Quantum Dialogue”, (Chicago 1999):
. . . different scholars, with good textual evidence, have arrived at conflicting interpretations of these writings. While Popper (1963b) presented Bohr as a ‘subjectivist’, Feyerabend (1981a) found him an ‘objectivist’. More recently, Murdoch (1987) has concluded that Bohr was a realist, while Faye (1991) has argued with equal competence that Bohr was an antirealist.
In my paper, when I referred (parenthetically) to the “(hopelessly inconsistent) conventional theory”, I had in mind the remarks in the introduction to the paper:
the difficulty of working out the details has been widely underestimated in the stream of preliminary sketches for interpretations which have gained popularity over the years (cf. Bacciagaluppi, Donald, and Vermaas (1995), Dowker and Kent (1996)). It has been my experience in the development of the present theory that apparently minor problems when pursued have usually turned into major problems.
In this context, there is no way that the details of the conventional theory can be worked out, unless we can begin by agreeing on what the theory says.
If we agree that by “the conventional theory” we mean von Neumann's model then we do at least have a preliminary sketch for an interpretation. In this case, the difficulties of working out the details are widely recognised. You, for example, have gone some way towards working out a detailed version, but, as you admit, your theory remains incomplete.
STAPP (5)
The one serious question that you raise, namely the compatibility with relativity theory and Bell's theorem, poses no real problem. I doubt that there is anyone more conversant with that nonlocality issue than I. (Indeed, I do believe that your present formulation is technically flawed at a point pertaining to nonlocality issues — to the necessary instantaneousness of the collapse process, and your “breeding problem” — but that is a minor point that I shall not enter into here, as it would divert attention away from the big problem.)
DONALD (6)
My central focus is on technical issues. If you genuinely believe you have discovered a technical flaw in my work, I would be very grateful if you would send me the details. I guess from what you say that you are referring to page 10 of my paper, where I discuss the idea of “Alice-n'-Bob” and claim that new ambiguities would be introduced were I to base my theory on joint minds rather than individual minds.
I am not sure what you mean by referring to a requirement that collapse should be instantaneous. In my theory, information is represented by sets of sequences of quantum states. The sequences model sequences of “collapses”, but these do not correspond to real events in time. The real events constitute the information, which is finite. The quantum states are Heisenberg states and are not labelled by spacelike hypersurfaces. Nor are such hypersurfaces explicitly defined through the sequences of states.
The uncertainty of a twenty-fifth of a second due to the light travel time to the antipodes, which would arise in a vast joint structure describing the human race, is a measure of the combinatorial complexities which such a structure would introduce. In particular, there would be an enormous number of ways in which such a structure could be formed. These would be equivalent “for all practical purposes”, but would differ by what I refer to in section 8 of the paper as “minor” differences.
STAPP (7)
You speak of the vast ambiguity associated with the 1/25 of a second that it takes light to cover the globe.
The von Neumann formulation requires, for consistency, instantaneous collapse along a spacelike surface. If you are considering problems with the collapse interpretation you must use a consistent formulation of that kind of interpretation, which means instantaneous collapses.
This is quite compatible with relativistic quantum field theory, as shown by the works of Tomonaga and Schwinger. The preferred frame could be the absolute frame defined by the cosmic black-body radiation, or by the Milne cosmology, which posits a point “big bang” and a preferred continuous sequence of instants “now” defined by fixed proper-time distance from the big bang. This cosmology has many attractive cosmological features, and is being aggressively pursued, in a broader framework, by my colleague Geoffrey Chew. This definition of “now” is completely relativistically invariant.
DONALD (8)
I believe that it is to the advantage of my work that it does not require globally-preferred frames or instants and only places local constraints on geometry. This helps to minimize the information required to define the entities fundamental to the theory. It could also be significant if a theory of quantum gravity, or quantum cosmology, involving some sort of quantization of geometry, is ever to be constructed.
STAPP (7)
The von Neumann interpretation is built around the idea of agents who have streams of consciousness that contain informational events, each of which has an image in the brain of that agent.
Of course, you have no collapses, and your “connected set of switches” postulate is your way of inserting separate agents into the dynamics.
But your predictions should be unambiguous and presumably agree with von Neumann's. But in order for von Neumann to get unambiguous predictions he must have an orderly set of definite questions (Process 1). This you do not have.
DONALD (8)
Each individual instance of a mind in my theory corresponds to a pattern of definite information. The events corresponding to this pattern are ordered as long as they are strictly time-ordered. This is effected by the restriction in my hypothesis (the appendix to Donald 1999) to “ascending dockets”. I do restrict attention to the patterns formed by lifetime histories of individual observers. If Alice in Australia and Bob in Britain have had no influence on each other's observations, then there need be no call for their observations to be ordered.
STAPP (7)
So in this context there arises again the crucial question of whether the successful predictions of orthodox quantum theory can be recovered by a mathematical structure that leaves out what is crucial to those predictions in orthodox theory, namely a well defined sequence of definite (Process 1) questions. It does not seem to me that you have yet successfully demonstrated that your rules will actually yield essentially the same predictions as orthodox theory if they leave out the process that was critical to the orthodox theory. In that theory the well definedness of the dynamical process depends upon the instantaneous collapse that you leave out, along with the collapse itself.
DONALD (8)
If your theory, or any other version of what you call the “orthodox” theory, were actually to provide a “well defined sequence of definite (Process 1) questions” then it would make predictions which could be compared with my theory. In fact, however, the orthodox idea is merely that there is some sequence of definite questions defined by some sequence of projection operators. The quantum Zeno paradox tells us that this sequence cannot be arbitrary and work on consistent histories shows that there are all too many plausible sequences.
Starting from this point, I have tried to identify abstract sequences of definite questions by working at a higher level. While sequences of projection operators are still to be found at the heart of my hypothesis, I have tried to construct classes of sequences of projections which are equivalent in that they pose the same pattern of questions. Classes of sequences of quantum states, corresponding to classes of sequences of apparent collapses, are also at the heart of my hypothesis. These sequences, however, do differ from the conventional picture, in that they are not necessarily sequences of pure states. This difference is significant. I believe it to be justified given that I am analysing localized systems in constant contact with their environment.
The empirical successes of conventional quantum theory arise in situations in which it is possible, at least from the outside, for an adequately trained quantum theorist to identify appropriate “measured observables”. There is often considerable flexibility in this identification. This underpins the intuition that, for a complete interpretation of quantum theory, successful empirical prediction is possible, as long as, in such situations, the fundamental entities defined by the interpretation take an appropriate place within this range of flexibility. Arguing that my interpretation is successful in this way is a central theme in many of the core papers on my web site and in particular in the one under discussion. It seems to me that you also rely on a version of this intuition. Here's how I express it in section 6 of my paper:
Elementary quantum theory, quantum statistical mechanics, and decoherence theory produce a picture of the local quantum states of a macroscopic system as being, to a good approximation, decoherent mixtures of correlated states weighted by numbers which reflect the probabilities defined by elementary quantum theory. The central purpose of the hypothesis is to analyse the information in such an approximate decoherent mixture of correlations, and to decompose the mixture with probabilities determined by the weights. Thus, ultimately, the probabilities of elementary quantum mechanics and the probabilities defined by the hypothesis agree because they all reduce to weights in quantum mixtures.
Nevertheless, this intuitive picture is not sufficient. For example, as I have already mentioned, the quantum Zeno paradox tells us that empirical evidence does rule out some sequences of questions. We must therefore proceed with care; in particular, when realistic questions related to the rich complexity of human brain states are considered. My theory is based on the observations of separate individuals. It is therefore necessary for me to argue, as I do in section 8 of my paper, that an individual's own private experiences of individual quantum probabilities do not differ in an unacceptable way from the same individual's experiences of how quantum probabilities appear on average to be experienced by others. You suggest that it is an advantage for individuals to be able to choose their own questions. The implementation and consequences of this idea also need detailed analyses.
STAPP (5)
In addition to all of the technical machinery that you find you must introduce, there is, as you emphasize, the basic huge mystery of the meaning of probability in a many-minds universe where everything happens: what does 'the probability for something to happen' mean in a reality where everything happens? You say (p.14) that:
this mystery can be left unexplained. The goal for physics is to tell us what the world is like. In this paper, I propose that, for us, the world is like a discrete stochastic process. However mysterious probability may ultimately be, we have no problem in understanding what it is like to observe such a process. . . . We all know what it means to “know the odds”.
DONALD (6)
The “mystery” which, on page 14 of my paper, I said could be “left unexplained” is the mystery of the apparent truth of our beliefs that the world we see should be fairly typical. This is a mystery for any genuinely indeterministic theory, whether or not a many-worlds interpretation is put upon it. Trying to explain the mystery by invoking a random number generator, or a die-thrower, merely introduces the new mystery of what makes the output of that random number generator, or die-thrower, actually appear to be random.
The questions I have tried to answer are questions which I believe to be accessible to theoretical analysis; questions of precisely what events are probabilistic and precisely what their probabilities are.
STAPP (5)
So you are reduced to postulating that 'IT SEEMS as though we are observing a discrete stochastic process, even though the basic many-minds assumption is that there really is no such real discrete process'.
DONALD (6)
This 'quotation' is not my words. Nor does it express my opinion.
STAPP (5)
But if, in order to make a sensible theory, you must assert, as a primitive rule, that *it seems to us* that we are observing “a discrete stochastic process”, then the more natural assumption would be that there really is such a process, and that our experiences are both parts of that process and purveyors of information about it.
DONALD (6)
I do *not* say that there is no real discrete process. Indeed, the central hypothesis that I make is that the trajectories of the discrete process, which we appear to experience, are real. They are real as experiences, and they are real in that they obey definite laws. My claim is that, at a fundamental level, each of us is the trajectory he or she experiences.
In a many-minds theory, each individual observer has his own real structure. As a consequence, the distinction between “objective” and “subjective” becomes quite different from the distinction between “real” and “unreal”.
I argue that almost all of the complexity which we experience in “reality” lies in our developing personal structures.
STAPP (5)
The problem of relativistic invariance and Bell's Theorem is easily solved. So why on earth adopt such extremely complicated laws, as you effectively show to be necessary, in order to save what amounts to the claim that nature is tricking us into believing that there is a discrete natural physical process, into which our thoughts enter causally, and about which our thoughts inform us, when there really is no such process at all: it's all an elaborate hoax; an illusion? Your tour-de-force example of what is needed to provide a rational foundation for the many-worlds idea may be the clincher that eliminates that idea from rational contention, at least among serious-minded thinkers.
DONALD (4)
Here we return to the original debate on my review of Stapp's work.
* * * * * * * * * * * * * *
DONALD REVIEW (0)
The lesson of modern neurophysiology seems to me to be that everything we experience is directly reflected in the functioning and structure of our nervous system. Here “everything we experience” includes not only our thoughts and feelings, but also our thoughts about our thoughts. According to this neurophysiological hypothesis, every human thought and action, including choices, decisions, and self-analysis, can be explained in terms of the functioning of the evolved physical brain.
STAPP REPLY (1)
The basic issue is, precisely, the capacity of *that neurophysiological hypothesis* to adequately explain the recent data of neuropsychology. We argue that *that hypothesis* is structurally and empirically inadequate, and that the classical-physics-based prejudice should therefore be abandoned in favor of the quantum approach that brings consciousness into brain dynamics in a manageable, pragmatic, and highly explanatory way. (See the BBS article on my website.)
DONALD REVIEW (0)
The ability of a brain to talk about and apparently to decide its own behaviour is not paradoxical, because it is limited, and because of the parallel and modular structure of neural processing. It is easy to understand why such an ability should have evolved because it allows efficient analysis, planning, and communication.
STAPP REPLY (1)
How can one understand the evolution of consciousness if Process 2 controls everything and proceeds without reference to consciousness? How do the survival benefits of evolved consciousness produce an effect if consciousness makes no difference?
DONALD (2)
These are important questions. I'll come back to them below.
STAPP REPLY (1)
What is the rational basis of the insistence that the causal efficacy of our thoughts is a “user illusion” when neither physical theory nor empirical data support this claim?
DONALD (2)
Debates on the nature of free will and on the correctness of physical theory are far too far from being resolved for either of us to be able to claim a monopoly of rationality or the unambiguous support of either theory or empirical data. Not only do I believe my own position to be rational and, by-and-large, compatible with the empirical data, but it would also seem to me to be rational and, by-and-large, compatible with the empirical data, to suppose that brain processes are deterministic because of some hidden variable theory.
STAPP (3)
But is the “hidden” variable needed to complete the dynamics psychic in character?
DONALD (4)
All I meant here was that I don't think that it is necessarily irrational to hope that some idea analogous to Bohm theory or spontaneous collapse could work. I do think that such a theory would have problems, similar to those of classical physics, with finding a place for consciousness.
DONALD REVIEW (0)
If Stapp is not simply invoking an extra-physical homunculus at this point, however, then at least we can ask whether choices are supposed to be made using conventional neural circuitry — in which case, it is certainly not clear to me where the requisite circuitry is supposed to be — or whether some other type of physical process is supposed to be involved.
STAPP REPLY (1)
Yes, some other type of process, not Process 2, is involved.
One such “other type” of process is Process 1. The state of the brain is a compendium of increments in information/knowledge of various kinds, instantiated in a mathematical structure imbedded in a four-dimensional manifold of points with spacetime labels. This structure specifies also a set of future possible increments in information/knowledge. These increments will come in a sequence of events, each consisting of a Process 1 choice made by an agent, followed by a Process 3 choice made “by nature.”
Process 1, unlike Process 2, is nonlocal, and it is not controlled by any law of contemporary orthodox quantum theory. It can therefore be controlled in part by psychologically described realities. Process 1 enters orthodox theory as an effect of psychologically motivated (and described) choices made by agents about how they will act. The Process 1 action replaces S(t) by PS(t)P + P'S(t)P', with P' = 1 - P.
In order to causally relate Process 1 to effective intentional action, the state PS(t)P should contain no aspect of S(t) that conflicts with a template for action that, if held in place, will tend to make future experiences of the agent conform to his intention. Then trial and error learning should allow effort to become bi-directionally causally linked, in the brain, via Process 1, to activation of the template of action that tends to actualize the intended experience.
Every psychologically felt reality is a change in the state of knowledge/information instantiated in the physically described brain, and the correlation that this entails between the psychologically and physically described data can be investigated. But the model differs from the classical-physics-based model in that, because of the principled insufficiency of knowable physical information, the scientist is justified in taking conscious choices by human agents to have the physical causal efficacy that flows specifically from their influence on Process 1.
DONALD REVIEW (0)
Stapp's suggestion, at least in his earlier papers, seems to be that the brain state becomes a superposition or mixture of different developing possibilities until consciousness is reached, a choice occurs, and the possibilities are reduced to a single outcome. The trouble with this idea is that the state becomes a mixture of different neural firing patterns rather than a neural firing pattern analysing a mixture. The self-analysis of such a mixture would require not just an entirely new type of neurophysiology, but an entirely new type of physics.
STAPP REPLY (1)
Neither Donald nor I assume macroscopic quantum coherence. The state of the brain is assumed to be nearly a mixture of classically describable states. But the state of the brain is described by a density matrix, S(t), and the dynamics of Process 1 is controlled by that density matrix, as a whole. In the simplest model one defines {P} to be the set of P that correspond to possible increments in knowledge, and defines the projection operator P(t) to be the P in {P} that maximizes Trace PS(t)/Trace S(t). This P(t) depends of course on the entire mixture represented by S(t), and any dynamics involving P(t) will likewise involve this entire mixture. This P(t) might be expected to be pertinent because it is the P that is generated with the greatest statistical weight by the brain in its search for an appropriate action to take in the light of the situation faced by the person. A simple model takes the P associated with a Process 1 event at time t to necessarily be P(t). Then only a rule for a “consent” needs to be specified in order to complete the dynamics. Because the nature of the underlying reality is essentially experiential, it would be natural for the “consent” to be related to experientially felt values associated with the experience specified by P(t).
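The following sketch (an editorial illustration, not Stapp's own code or model) shows what the “simplest model” rule would amount to computationally: given a density matrix S(t) and an assumed candidate set {P}, pick the P maximizing Trace PS(t)/Trace S(t). The candidate projections here are placeholders; as the next remark notes, how to define that set is exactly what is in question.

```python
import numpy as np

def select_process1_projection(S, candidates):
    """From an assumed candidate set {P}, return the P that maximizes
    Trace(P S) / Trace(S), following the 'simplest model' described above.
    The candidate set itself is a placeholder assumption."""
    weights = [(np.trace(P @ S) / np.trace(S)).real for P in candidates]
    k = int(np.argmax(weights))
    return candidates[k], weights[k]

# Toy illustration: a three-dimensional 'brain' state and two hypothetical
# projections standing in for possible increments in knowledge.
S = np.diag([0.5, 0.3, 0.2])
candidates = [np.diag([1.0, 1.0, 0.0]),   # hypothetical increment A
              np.diag([0.0, 1.0, 1.0])]   # hypothetical increment B
P_t, weight = select_process1_projection(S, candidates)
print(weight)   # 0.8: increment A carries the greatest statistical weight
```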
DONALD (2)
I don't know how to define “the set of P that correspond to possible increments in knowledge”.
STAPP (3)
I believe that consciousness is associated with functional properties of the brain, and that each possible conscious thought is associated with the activation of a particular functional pattern of brain activity, and that the action S —> PSP picks out the particular functional activity by eliminating competitors.
DONALD (2)
If you take “the P that is generated with the greatest statistical weight” are you then “able to rigorously produce the predictions of orthodox vN quantum theory”? What happens if I'm listening to a Geiger counter with a very small chance of clicking in any given time interval? What happens if, as hard as I can, I “choose” not to hear it click?
STAPP (3)
The key choice here is the choice to which is traced the existence of the classically describable set-up. Process 2, alone, cannot give this. Given that set-up, Process 3 describes which of the discrete bins specified by the set-up will be actualized. Whether someone is checking the bins does not matter.
DONALD (4)
I am not convinced that it is compatible with observation to suppose that process 1 occurs, especially with exactly two bins, on every occasion when there is a potential increment in knowledge. I suspect, for example, that the sheer number of potential “occasions” could lead to a Zeno effect which is not actually observed. In the light of difficulties which arose during the development of my own theory, I am particularly concerned about situations in which specific events of very low a priori probability come to be observed; for example, when a rare radio-active decay is observed at some specific observed time, or when a specific cloud chamber photograph is examined.
STAPP (5)
According to the pragmatic approach, the main relevant Process 1 is the choice made by the agent as to which experiment is to be set up. In the cases you mention the choice is to put in place the particular detection system, with a specified set of discrete bins. This is what is difficult to achieve with Process 2 alone, because discrete bins do not come out of the continuous smear of possible experimental situations generated by Process 2 alone. Process 1 is a process of a very different kind. I believe it must be added as a *supplement* to Process 2: it is not a *consequence* of Process 2.
DONALD (6)
We agree that something which is not a consequence of process 2 has to be added. What I suggest needs to be found, however, is not an additional physical process, but the laws which determine how experiences can be structured. Thus my bins correspond to a finite and well-defined set of possible future observer structures.
STAPP (5)
Once the Process 1 action has occurred, then the various orthogonal bins are specified and a statistical weight is assigned to each bin, with the sum of these weights being unity. The quantum Zeno effect enters here first from the effort of the agent to decide which experiment to set up, and the effort of the agent to act in such a way as to set this experiment up. If the experiment is a good experiment, then the different bins will evolve into macroscopically distinguishable states, each with a statistical weight. An observer who opens his eyes and gazes upon the “pointer” will evolve into a collection of orthogonal states. If the observer then attends to the position of the pointer in order to experience the outcome of the experiment then the probabilities of the various possibilities will be specified by the statistical weights of these possibilities. If the observer is attentive, and makes an effort to remember, and perhaps mechanically record, the outcome, then that initially observed outcome will be held in place by the quantum Zeno effect.
It does not matter in what order the possibilities are examined. For example, if there are three possibilities, outcome 1 with weight 90%, outcome 2 with weight 9% and no experienced outcome with weight 1%, and the order of probing is by weight, then the first possibility will be experienced with probability 90%, the second with probability 9% and the third with probability 1%, according to von Neumann's rules, *sequentially applied*. The order in which the questions are asked does not matter. The only reason that I imposed the special rule for the ordering was to make the process well defined, with just a sequence of Yes or No questions.
It is important that the quantum Zeno effect merely hangs onto what has already been chosen: it does not affect the first choice. But by holding in place what has been randomly chosen, mental effort can influence physical variables. It is this subtle effect that is exploited in, for example, the BBS article on my website.
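As a numerical check of the sequential probing described two paragraphs above (an editorial sketch: the 90%, 9%, and 1% weights are taken from the text, while the simulation itself is not part of the original exchange), sequentially applied Yes/No questions do reproduce the original weights.

```python
import numpy as np

# Sequential Yes/No probing of three possibilities with the weights used in
# the example above (90%, 9%, 1%); the simulation is only an illustration.
rng = np.random.default_rng(1)
weights = [0.90, 0.09, 0.01]

def probe_sequentially(weights):
    remaining = 1.0
    for i, w in enumerate(weights[:-1]):
        if rng.random() < w / remaining:   # "Yes" to the i-th binary question
            return i
        remaining -= w                     # "No": renormalize, ask the next question
    return len(weights) - 1                # last possibility by elimination

trials = 100000
counts = np.bincount([probe_sequentially(weights) for _ in range(trials)], minlength=3)
print(counts / trials)   # approximately [0.90, 0.09, 0.01], matching the weights
```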
DONALD (6)
Suppose P, Q, and R, with P + Q + R = 1, are orthogonal commuting projections corresponding to the three outcomes. Let S be the initial state. The idea that, “The order in which the questions are asked does not matter”, corresponds, I suppose, to the fact that
Q(PSP + (1-P)S(1-P))Q + (1-Q)(PSP + (1-P)S(1-P))(1-Q)
= P(QSQ + (1-Q)S(1-Q))P + (1-P)(QSQ + (1-Q)S(1-Q))(1-P)
= PSP + QSQ + RSR.
If we allow an uninterrupted period of Hamiltonian dynamics (process 2) between the process 1 steps, we can no longer assume such equalities. Using the quantum Zeno effect to maintain the equality over arbitrary periods would stop time locally. It cannot simply be assumed that such a remarkable phenomenon would have no uncalled-for consequences. At an even more fundamental level, I also do not believe that the identification of P, Q, and R is unproblematic.
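A short numerical check of the algebra above (an editorial sketch with a randomly generated state and an arbitrary unitary standing in for a period of Process 2) confirms both halves of the point: the back-to-back binary questions commute, but interposing Hamiltonian evolution breaks the equality.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 3x3 density matrix S and the three orthogonal projections P, Q, R.
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
S = A @ A.conj().T
S = S / np.trace(S).real
P, Q, R = np.diag([1., 0., 0.]), np.diag([0., 1., 0.]), np.diag([0., 0., 1.])
I = np.eye(3)

def process1_binary(S, P):
    """One Process 1 step for the binary question {P, 1-P}: S -> PSP + (1-P)S(1-P)."""
    return P @ S @ P + (I - P) @ S @ (I - P)

# Back-to-back questions: the order does not matter, and the result is PSP + QSQ + RSR.
ask_P_then_Q = process1_binary(process1_binary(S, P), Q)
ask_Q_then_P = process1_binary(process1_binary(S, Q), P)
target = P @ S @ P + Q @ S @ Q + R @ S @ R
print(np.allclose(ask_P_then_Q, ask_Q_then_P), np.allclose(ask_P_then_Q, target))  # True True

# With an arbitrary unitary (standing in for a period of Process 2) interposed,
# the two orderings generically no longer agree.
U, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
evolved_P_first = process1_binary(U @ process1_binary(S, P) @ U.conj().T, Q)
evolved_Q_first = process1_binary(U @ process1_binary(S, Q) @ U.conj().T, P)
print(np.allclose(evolved_P_first, evolved_Q_first))   # False, in general
```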
STAPP (3)
Although everything works out OK for a standard measurement process, I am concerned more with the case in which an agent must choose how to respond to challenging perceptual input.
DONALD (4)
And how is challenging perceptual input to be distinguished from any other sort of perceptual input?
STAPP (5)
I want to distinguish cases where the history of the person makes it likely that he will be faced with a well defined choice between distinct possibilities just on the basis of Process 2 alone, from the cases where Process 2 alone would produce a smear of *overlapping* (nearly classically defined) possibilities. These latter are the troublesome cases.
DONALD (6)
At the basis of my approach is the idea that it is easier to characterize observers and their possibilities than measurements. Indeed, in cases like the observation of a cloud chamber photograph or a radio-active decay, I don't think that it is at all straightforward to see how a fundamental observer-independent set of discrete bins can be specified merely from the experimental situation. Differences between specifications would presumably be exacerbated by Zeno-effect repetition. Using a many-minds approach, and dealing with observers rather than measurements, makes it possible to abstract over classes of equivalent specifications of the structure of individual observers.
DONALD REVIEW (0)
No conventional physical process within the brain will be able to cause a (generalized) “collapse” of the required form.
STAPP REPLY (1)
Process 1 is highly “nonconventional”: it does something no conventional process could ever do! That is why the founders had to make this radical break with tradition! Process 1 picks out, from what is in principle a continuously smeared out cloud of overlapping states of the “device” (in our case a brain), some discrete set of orthogonal subspaces of states that correspond to distinct experiential reactions. No conventional physical process starting from continuously smeared out states of all the relevant elements can do such a thing in a finite time. [See my article, “The basis problem in many-worlds theories” quant-ph/0110148, Can. J. Phys. 80, 1043–1052 (2002)].
So when we consider Process 1 we are in a new ballpark, involving “a new kind of physics”. Mind involves a new kind of process, within ontologically construed von Neumann QT.
DONALD REVIEW (0)
The Zeno effect does provide a dynamics formed using a sequence of carefully chosen projections, corresponding to von Neumann's quantum events, which, with probability one, will drive the total state towards any chosen state. That dynamics, however, is not the well-understood biologically-evolved dynamics produced by the interactions of the ions and atoms and molecules and electric fields of the human brain. It is a purely theoretical dynamics which depends on working towards the desired outcome right from the start of the initial quantum spreading. How and when the choice is supposed to result in the construction and action of the projection operators still has to be explained. The projections need to be defined to the precision of an individual wavefunction. Above atomic scales, no biological mechanism can control or even repeat states at this level of precision.
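For definiteness, here is a toy single-qubit version of that purely theoretical dynamics (the target state, the step count, and the interpolating projections are illustrative assumptions): a sequence of projections, each rotated slightly further towards a chosen target, drags the initial state onto the target with success probability tending to one as the sequence is refined.

    import numpy as np

    def rotated_projector(theta):
        """Projector onto cos(theta)|0> + sin(theta)|1>."""
        v = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
        return np.outer(v, v.conj())

    def dragging_probability(n_steps):
        """Probability that every one of n_steps carefully chosen projections
        succeeds, carrying |0> to the target state |1>."""
        rho, prob = np.diag([1.0, 0.0]).astype(complex), 1.0
        for k in range(1, n_steps + 1):
            P = rotated_projector(k * (np.pi / 2) / n_steps)
            p_yes = np.trace(P @ rho @ P).real
            prob *= p_yes
            rho = P @ rho @ P / p_yes          # keep the successful branch
        return prob

    for n in (1, 10, 100, 1000):
        print(n, dragging_probability(n))      # cos(pi/(2n))**(2n), tending to 1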
STAPP REPLY (1)
Precisely! The essentially different psychophysical Process 1 is needed!
DONALD REVIEW (0)
Using von Neumann's idea of “measurement” as a chain of physical processes, it would seem that a physical apparatus will itself have a dynamics which can be understood in terms of the conventional laws of physics described by the Schroedinger equation, and which can be used to explain the observed effects.
STAPP REPLY (1)
Most discussions of “the measurement problem” completely miss the essential problem by considering only “ideal measurements”. von Neumann studied “ideal measurements”, but merely to SHIFT the problem, not to solve it. The central problem is that, before collapses, the devices, and everything else, will be a smeared out cloud of overlapping possibilities, not the neat little idealizations that occur in idealized “ideal measurements”. The problem is to extract discrete streams of consciousness obeying von Neumann's rule from a completely smeared out continuum at all levels.
DONALD (2)
My analyses have been developed precisely to deal with these problems. Here is what I wrote about the universal quantum state (omega) in my 1997 paper (“On Many-Minds Interpretations of Quantum Theory”, quant-ph/9703008):
omega seems to be a complete mess. However, it does have a great deal of hidden structure, and it is the job of a no collapse interpretation to explain how that hidden structure comes to be seen.
An adequate analysis of the correlations in omega is the first step towards an interpretation of quantum mechanics.
In my review of Stapp's work, I suggest that omega might be a simple state. Symmetrically-weighted combinations of correlations make it possible for omega to be “full of hidden structure”, “simple”, and, in a sense, “completely smeared out”.
DONALD REVIEW (0)
I believe that when we make a choice we are doing mental work involving ordinary physical neural processing; just as when we express a question in words we use the linguistic mechanisms which are available in our brains as a result of the life we have led. In my opinion, despite the interesting ideas of William James and Harold Pashler, neurophysiology is more fundamental than psychology.
STAPP REPLY (1)
Neurophysiology may indeed be “more important” than psychology, but that does not mean that mind can be left out. We argue that important empirical data, involving protocols that specify how the subjects are to direct their conscious choices and that measure behavioural and neural responses to these specifications, require these choices to be included as pertinent variables of the theoretical analysis. The vN theory is then the perfectly suited theoretical framework because it puts these conscious choices in the central place: on the one hand they control a very specific physical brain process that seems able to account nicely for the data, and on the other hand they are controlled by psychological and physical factors, including an understanding of the words of the scientists who are conducting the experiments.
A purely classical neurophysiological account does not really refer to the pertinent psychologically described data, which is understood as arising from some very mysterious kind of illusion. A quantum neurobiological explanation is even more complex, since it must involve an explanation of how the smeared out cloud of possible brain states manages to produce distinct classically understandable streams of consciousness that enjoy the correct statistical properties.
DONALD REVIEW (0)
My basic objection to Stapp's work is that he does not appear to have made any connection between the representation of choices by physical neural processing — or indeed any other cellular process — and their representation as projection operators to be measured.
STAPP REPLY (1)
Due to their discrete character, the Process 1 choices cannot, I think, be explained purely in mechanical/physiological terms: I believe that the choice process involves the non-local psychic aspect of reality in a nonreducible way. Process 1 is not merely an aspect of Process 2, but is fundamentally psycho-physical. I go into this connection between the Process 1 projection operator P and the brain in quite a lot of detail in the BBS article on my website.
The projection operators P are operators that project onto patterns of neural excitations that constitute “templates for action”. The relationship between these physical operators and psychological intentions is “learned” by trial and error, using the fact that a P which is effortfully held in place can have specific physical effects, via the quantum Zeno effect. Thus the Process 1 actions, combined with effort, have causal effects on the brain, and this allows a connection between psychological experience and brain process to be established through learning, even without a detailed theory of the causal psycho-physical origins of the specific choice. A useful theory can be established without resolving this very deep question.
The key element of the pragmatic approach is that it allows the Process 1 choices to be introduced as basic empirically knowable variables, in place of the empirically inaccessible physically described local variables. Because of the “discreteness” of this choice, Process 1 appears not to be reducible to Process 2. The difficult problem of specifying exactly how the psychological realities conspire with brain matter to produce a discrete choice is effectively evaded *at the practical level* by treating the choices as empirically specified input variables, just as in atomic physics. This gives a practically useful theory that covers a lot of data, and puts on the table, without immediately answering, the deep question of the psycho-physical causal roots of the Process 1 choices.
DONALD REVIEW (0)
A characterization of collapse should tell us what possibilities arise — this is “the preferred basis problem”. The characterization should be well-defined and unambiguous. It should be explained how one collapse leads on to the next and the probability of a given collapse should also be well-defined. With such a theory, it will no longer be possible to invoke the quantum Zeno effect as a clever trick by which arbitrary sequences of collapses can be used to attain any desired outcome. Instead, there will be specific circumstances in which the effect, or the appearance of the effect, will arise as a consequence of the specific collapses which actually occur, or which appear to occur.
STAPP REPLY (1)
In our model “there will be specific circumstances in which the (quantum Zeno) effect will arise as a consequence of specific collapses which actually occur”, just as Donald demands.
But he seems to miss the main point, which is that what is controlling the Process 1 involves psychological realities. So the collapse sequence is indeed “well-defined”. But it is controlled by psychological factors. Through trial-and-error learning, each subject builds a repertoire of efforts that produce reliable feedbacks.
DONALD (2)
“Well-defined” means defined in mathematical terms. The point I miss is how psychological realities are supposed to be defined in mathematical terms.
STAPP (3)
The “collapse sequence” is the sequence of physically/mathematically described actions S —> PSP. This description includes the times at which these events occur. But the temporal density of these events is asserted to be influenced by psy-factors.
The Copenhagen way to make the psychological descriptions mathematical is to describe the intended empirical situation in classical-physics terms. Using ideas from classical physics, this classically described “idea” should be relatable to a classically describable state of the EM field in the brain. There is a corresponding quantum coherent state and a projection operator P onto that state. [See “Light as foundation of being”, in Quantum Implications, eds. Hiley & Peat, Routledge & Kegan Paul, 1987.] [Use the radiation gauge: Schwinger, Theory of Quantized Fields II, Phys. Rev. 91, 713-728 (1953), p. 727.] Color and other psy-variables can probably also be represented in a vector space. I see no reason in principle why the quantification of psy should be a problem.
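For what the last step might look like in the simplest case, here is a minimal sketch (a single field mode truncated to a finite Fock basis, with an arbitrary amplitude alpha; this is not Stapp's construction, just an illustration of the mapping he describes): a classically specified field amplitude picks out a coherent state, and hence a projection operator P onto it.

    import numpy as np

    n_max = 40                                 # Fock-space truncation (illustrative)
    alpha = 2.0 + 1.0j                         # the "classically described" amplitude

    # Build the coherent state |alpha> recursively: c_{j} = c_{j-1} * alpha / sqrt(j).
    coh = np.zeros(n_max, dtype=complex)
    coh[0] = 1.0
    for j in range(1, n_max):
        coh[j] = coh[j - 1] * alpha / np.sqrt(j)
    coh *= np.exp(-abs(alpha)**2 / 2)
    coh /= np.linalg.norm(coh)                 # absorb the tiny truncation error

    P = np.outer(coh, coh.conj())              # projection onto the coherent state
    a = np.diag(np.sqrt(np.arange(1, n_max, dtype=float)), k=1)   # annihilation operator

    print(np.allclose(P @ P, P))               # True: P is a projection
    print(coh.conj() @ a @ coh)                # ~ alpha: the classical field amplitude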
DONALD (4)
Which times? Which projection operators? Which coherent states?
The difficulty is not with finding some approximate mathematical descriptions of observed reality; it is with finding a theory characterizing the discrete chunks of it which actually correspond to specific observable possibilities.
DONALD REVIEW (0)
Just as one of the most fundamental questions in the philosophy of mind is whether mental events are merely how neurophysiological events appear, so one of the most fundamental questions in the philosophy of quantum theory is whether von Neumann's indeterministic events are merely how the continuous changes appear in specific circumstances. My starting point is to answer both questions affirmatively, and therefore I have tried to develop a theory characterizing “appearance”.
STAPP REPLY (1)
The idea that there should be, in addition to the causally complete physical reality, also “appearances”, is baffling, unless one postulates some nonphysical observer with nonphysical thoughts, which is not what Donald does. But then what are these extra nonphysical “mere appearances”? Why should they exist at all, if they have no real causal role?
DONALD (2)
In much of my writing, there is a tension between a final fully consistent but highly abstract analysis and a preliminary analysis posed in comprehensible terms. My preliminary analysis may describe, for example, a quantum state for a physical brain in the sort of physical world that we see around us. Ultimately, however, this is to be superseded by an understanding of such apparent worlds as mere appearance; merely a reflection of the abstract information-bearing structures which constitute our minds.
In a complete many-minds interpretation, the background physical “reality” is just a single “completely smeared out” quantum state, which, in the Heisenberg picture, is unchanging. All the real interest lies with the minds; with their structures, with their probabilities, and with their correlations. That is why, increasingly, I describe myself as a philosophical idealist.
STAPP (3)
The idea that the only realities that we really know exist, namely our conscious experiences, are “mere appearances” in “apparent worlds” is exceedingly odd. To achieve this turning of everything on its head, Donald needs to shift everything interesting out of the “physical world” into the world of “appearances”. All the mathematics that we need to use must somehow get shifted into the world of appearances. The rules pertaining to these apparent worlds must somehow come out of Process 2, but yield at the practical level the qualitatively very different Process 1. Donald's world of appearances must *effectively* go over to the very world described in the pragmatic approach.
The orthodox (pragmatic) approach deals *directly* with this useful level of description, without entering into ontological speculations. It accepts our conscious experiences as the foundation of science. The utility of trying to understand these primary scientific realities as “mere appearances” is not readily apparent. In the von Neumann approach they are primary variables that can actually have, within the formalism, the causal efficacy that they appear to have.
DONALD (4)
Orthodoxy can also be speculation.
Pragmatism is appropriate when we limit ourselves to routine tasks. It was pragmatic for a medieval sailor trading within the Mediterranean to assume that the world was flat. It is pragmatic for a policeman to ignore the social inequities which exacerbate criminality. But understanding can be worth having even if it is of no practical use. It is in our understanding of reality that Stapp and I most differ. The difficulties of constructing a complete theory can help us to weed the garden of ideas, while refusing to look for such a theory may be only to postpone speculation.
DONALD (2)
I don't know why minds should exist at all.
STAPP (3)
Is not that a bad thing?
DONALD (4)
I don't pretend to have all the answers.
DONALD (2)
Here's what I wrote in a recent paper (“Finitary and infinitary mathematics, the possibility of possibilities and the definition of probabilities”, quant-ph/0306201) in which I was *really* being speculative:
My interpretation of quantum theory amounts to the definition of a stochastic process on abstract patterns of information. Despite the complexity of this definition, it seems to me far from inconceivable that among all the possible ways in which stochastic processes can be defined on such patterns, the postulated definition is actually as simple as any comparable set of rules which make likely rich and meaningful patterns of information.
In the apparent physical world, brains evolve and consciousness is an epiphenomenon. But from the point of view of a complete many-minds interpretation, consciousness is central, and brains appear to have evolved because the most likely sort of rich and meaningful pattern of information which can arise from simple rules and a simple background quantum state is the sort of pattern which will explain its own existence as the culmination of a long sequence of historical accidents with results selected by the likelihood that the fittest will survive.
Idealists always require something in addition to mind to explain why our world is not a freely-chosen dream world. Bishop Berkeley's explanation involved God. My explanation involves probability. I think that our worlds are typical, according to a well-defined probabilistic structure, of the worlds which can be experienced by minds of our complexity, supervening on stochastically-generated patterns of information obeying certain well-defined rules. Our experience of evidence for our evolution is part of what confirms that we are typical in this sense.
STAPP REPLY (1)
Donald says that, since I do not go along with his (weird?) idea, I owe you all an explanation of what quantum events are. They are, if we go to the ontological stance, the changes in the states of systems that von Neumann's theory is all about. They are abrupt changes in these states. They are real events. Because these states are states of “knowledge”, these events are increments of knowledge. This gives an isomorphism between events in a psychological realm and events in a physical realm: the event is a change in the physically/mathematically instantiated state of knowledge. It has a physical description and a psychological description. It is real, not mere appearance.
DONALD REVIEW (0)
Stapp claims that there is, “An isomorphic connection between the structural forms of conscious thoughts, as described by psychologists, and corresponding actualized structural forms in the neurological patterns of brain activity, as suggested by brain scientists”. On the other hand, on the same page, he claims to have provided for, “A mechanical explanation of the efficacy of conscious thoughts”. I have no idea to what this is supposed to refer.
STAPP REPLY (1)
What this refers to is that a stream of conscious thoughts can instigate a Process 1 event that has a specific effect upon a brain. An effort can increase density of attention, which can activate the quantum Zeno effect, which can hold in place a template for action, which can cause that action to occur.
DONALD (2)
Does this mean that there is a mapping from brain states to sets of orthogonal projections?
How could such a mapping be compatible with the fact that no biological mechanism can control or even repeat states at the level of precision required for the quantum Zeno effect?
DONALD REVIEW (0)
In his book, Stapp stated that his theory “makes consciousness causally effective, yet it is fully compatible with all known laws of physics, including the law of conservation of energy”.
STAPP REPLY (1)
I have discussed this in the (unabridged) BBS article (doc, pdf) in the answer to question 4.
DONALD (2)
In this discussion, Stapp agrees that process 1 events “can affect, for example, the energy of the observed system”. He suggests that “the effect on the average energy could be virtually undetectable”. He does not, however, consider, or explain how to rule out, the possibility that evolution might have produced a creature able to make process 1 or quantum Zeno choices yielding biologically-useful energy changes; even if merely for the regulation of brain temperature.
STAPP (3)
I do very severely limit the effects of will to increasing the density of attention to already actualized ideas.
DONALD REVIEW (0)
Special relativity requires that no change can be communicated at a speed faster than that of light.
STAPP REPLY (1)
Relativity requires only that no “signal” can be communicated faster than light. The von Neumann collapses do not permit “signals” (controllable effects) to be communicated faster than light, and hence do not violate the physical requirements of the theory of relativity.
DONALD REVIEW (0)
The arguments of Tomonaga (1946) and Schwinger (1951) are concerned with generalizing the Schroedinger equation to allow for arbitrary spacelike hypersurfaces, rather than with an analysis of individual quantum events. The precise relevance of these old papers, the technical validity of which has been called into question by Torre and Varadarajan (1998), is not entirely clear to me.
STAPP REPLY (1)
Tomonaga and Schwinger show that the constant-time surfaces upon which the state S(t) is defined, can be replaced by an advancing set of spacelike hypersurfaces. This alleviates a frame dependence that is inimical to the theory of relativity.
DONALD REVIEW (0)
In relativistic quantum field theory, events are associated with spacetime regions rather than with spacetime points. If there are too many events too close together, then there may be no simple way of ordering the corresponding regions.
STAPP REPLY (1)
I assume that very high energy processes cancel in such a way as to allow the state S(t) to be defined, and to make brain dynamics well described by quantum electrodynamics, with the effects of the pertinent P(t) implementable as actions on S(t), as in von Neumann's nonrelativistic theory. I suspect that the P(t) can be expressed in terms of the Coulomb field specified as in Schwinger's “Theory of Quantized Fields II”, Phys. Rev. 91, p. 727 (and also by S. Weinberg). These quantities are expressed in terms of their sources by instantaneous action at a distance, and hence fit nicely into the von Neumann formulas.
Also, the Schwinger-Tomonaga formalism allows the state to be parameterized as S(sigma) rather than S(t), where the spacelike surface sigma advances by little local increments. The non-intuitive spacelike action is easily eliminated by imagining the spacetime future to be open, but the spacetime past (i.e., the part of spacetime behind the advancing, in process time, surface sigma, which creeps forward by tiny local increments) to be existing and evolving in process time. The knowledge available on any portion of the spacetime surface sigma (the advancing NOW) is specified by the actualizations in its backward light cone. But in a Bell-type correlation measurement, when the actualization of an output occurs in a little advance of sigma on one leg, the past evolves so as to implement the requisite correlated change in the other leg. There is no action in a spacelike direction, but correlations are implemented by actions in the V-shaped region with vertex at the creation of the singlet state. This is the natural way to think about the “nonlocal” Bell-type “influences” in the relativistic version of the von Neumann formalism.
DONALD (2)
In Donald 1995, I envisage a situation involving something like 10^{15} events per second, each associated with a spatial region of at least nanometre dimension, all in a brain which light takes more than 10^{-10} seconds to cross. I do not know how many individual process 1 events Stapp envisages per second.
STAPP (3)
I take the events in question to be macroscopic [e.g., perhaps on the millimeter and the millisecond scales in a human brain.]
DONALD (4)
There is a tradeoff here. According to my proposal, events are simple and are explicitly definable in terms of elementary yes-no questions. This means there must be a huge number of them to accommodate the complexity of human consciousness. In Stapp's proposal, the events themselves have enormous complexity. I believe that this would make it much harder to develop a theory to characterize them.
DONALD REVIEW (0)
If instantaneous action at a distance is allowed, then, at any moment, the effect of infinitely many distant events may need to be taken into account. In particular, this might make it surprising that the passage of time on Stapp's universal clock, if it existed, should not have been observed through some sort of frame-dependent effect on system dynamics.
STAPP REPLY (1)
The state S(t) of the brain is defined by tracing over variables localized outside the brain. The predictions of quantum theory associated with brain events, without any knowledge of (i.e., correlation with) events outside the brain are independent of the effects of events occurring spacelike to the brain. The fact that the predictions of QT are valid means that there can be no observable effects of the unknown faraway events.
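The independence being claimed can at least be illustrated in miniature (a two-qubit toy, with one qubit standing in for the brain and one for a faraway system; the state and the measurement basis are arbitrary assumptions): a Process 1 event on the faraway qubit, with its outcome unknown, leaves the brain's reduced state, and hence every local prediction, exactly as it was.

    import numpy as np

    rng = np.random.default_rng(42)

    # Toy "brain x faraway" system: two qubits in a random entangled pure state.
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi /= np.linalg.norm(psi)
    S = np.outer(psi, psi.conj())

    def reduced_brain_state(S):
        """Trace over the faraway qubit, leaving the brain's density matrix."""
        return np.einsum('ijkj->ik', S.reshape(2, 2, 2, 2))

    # An unknown faraway Process 1 event: projections onto an arbitrary basis there.
    theta = 0.7
    v0 = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
    v1 = np.array([-np.sin(theta), np.cos(theta)], dtype=complex)
    I2 = np.eye(2, dtype=complex)
    P0 = np.kron(I2, np.outer(v0, v0.conj()))
    P1 = np.kron(I2, np.outer(v1, v1.conj()))

    S_after = P0 @ S @ P0 + P1 @ S @ P1        # outcome not known at the brain

    print(np.allclose(reduced_brain_state(S), reduced_brain_state(S_after)))   # True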
DONALD (2)
This seems to me to be a circular argument. Just as I am trying to go further than Everett, and therefore I meet new problems, so I think that, even leaving aside his proposals about the causal efficacy of minds, Stapp is trying to go further than Heisenberg. He is sufficiently explicit about the instantaneous occurrence of quantum events all across the universe that he should be in a position to query whether, in the kind of formalism he is building, it is a fact that the (conventional, local, non-relativistic, small system) predictions of QT remain valid. Addressing that query might conceivably allow him to place a bound on the rate at which process 1 can occur within a volume, or even allow him to argue that the universe is finite.
STAPP (3)
I am sufficiently explicit about my model to be able to say that the answers are “No”. These instantaneous events associated with far-away unknown Process 1 and 3 actions have no measurable local effects. (To be sure, I do not go from denumerably infinite to more infinite than that, as regards the number of events.)
DONALD (4)
The situations in which the conventional quantum formalism is well understood, and in which its predictions appear to have been confirmed, involve finite numbers of unknown spacelike-separated process 1 and 3 actions. If infinite numbers of actual external events are to be allowed, then I am not convinced that there will be no convergence problems. I am not convinced of the inevitable validity of a formalism which supposes that a state expressing an infinite amount of ignorance can be used to replace an actual but unknown state which infinitely-often instantaneously collapses globally. In the many-minds formalism, the actual apparent state *is* the state expressing ignorance.
Anyway, Stapp seems to suggest that his proposals go beyond processes 1 and 3, with the statement in Stapp 1995 that,
No attempt is made here to show that the quantum statistical laws will hold for the aspects of the brain's internal dynamics controlled by conscious thought.
STAPP (5)
In all of my recent works I take the von Neumann rules to hold strictly. That is a key requirement: stay strictly within the framework devised by von Neumann.
DONALD (6)
Good. von Neumann is a hero.
Nevertheless, even von Neumann, in chapter six of his great work “Mathematical Foundations of Quantum Mechanics”, falls into the trap, mentioned above by Stapp, of over-generalizing results from the study of “ideal measurements”. The problem arises when von Neumann attempts to argue that the so-called “Heisenberg cut” (the boundary between the quantum and the classical realm, or between observed system and observer) is arbitrary to a very large extent. Insofar as that boundary is not arbitrary, it does matter precisely how a process 1 set of projections is defined, exactly when it is applied, and whether, and how often, it is repeated.
DONALD (2)
In my opinion, satisfaction with “loose ideas” is the worst weakness in work on the interpretation of quantum theory. What is most needed is explicit formalisms, which allow problems to be revealed and addressed.
STAPP (3)
Yes indeed, and useful applications made to challenging real data.
DONALD REVIEW (0)
It is not straightforward to reconcile the apparent empirical evidence for instantaneous non-local changes in correlation information with the apparent evidence for the Lorentz invariance of physical processes.
STAPP REPLY (1)
I regard the demonstration as “straightforward.” Lorentz covariance pertains to measurable relationships, whereas the “nonlocal” effects always involve unperformed experiments.
DONALD REVIEW (0)
It is precisely because of the importance and difficulty of constructing a complete theoretical structure that it is so disappointing that Stapp fails to provide more than sketches of his ideas about the mathematical structure of thought (Stapp 1993 Appendix), about selection of top-level codes (Stapp 1982) and selection processes (Stapp 1995), about the role of the electromagnetic field (Stapp 1999), and about the quantum Zeno effect (Stapp 1999, 2000a, 2001a).
STAPP REPLY (1)
Some key references to my works are missing from Donald's list. Top-level code and selection: Stapp 1990 [Ch 6 of MM&QM, 1993/2003], “A quantum theory of the mind-brain interface”. Quantum Zeno effect and decoherence: my website “Response to Tegmark”, LBNL 46871 (ps, TeX source).
DONALD (2)
I apologize.
These are both cited at other points in my review and I did consider them before reaching my opinion. The second is also available from the quantum physics archive and is my reference Stapp 2000b.
STAPP REPLY (1)
Rome was not built in a day. The core issue is whether conscious effort can make a difference, or whether physiology is the WHOLE STORY. The important thing to do first is to see whether there are scientific reasons, apart from the suggestive structure of orthodox contemporary physics, for believing that, contrary to the classical-physics-based prejudices of most scientists, our conscious choices can affect our physical behavior in ways that go beyond what classical physics would allow. Do the data fit well with the restrictive conditions imposed by QT? More details can be filled in as the data accumulate.
My complete review can be found from here, Stapp's original replies are here and here, and Stapp's home page is here.