* * * * * * * * * * * * * *
* * * * * * * * * * * * * *
Notes on some books which are relevant, or significant, or recommended in the broad context of the many-minds interpretation of quantum theory presented on my home page: http://people.bss.phy.cam.ac.uk/~mjd1014
* * * * * * * * * * * * * *
* * * * * * * * * * * * * *
B.S. DeWitt and N. Graham, The Many-Worlds Interpretation of Quantum Mechanics. (Princeton 1973)
More than half of this book consists of two papers from 1957 by Hugh Everett III. One, “ ‘Relative State’ Formulation of Quantum Mechanics”, is the version of his thesis published at the time. The other, “The Theory of the Universal Wave Function”, which was previously unpublished, is much longer, with more technical details, analysis, and background. These papers are the work of a brilliant and original graduate student reacting to some of the key ideas of his time; in particular making use of von Neumann's “Mathematical Foundations of Quantum Mechanics” and of Shannon and Weaver's “Mathematical Theory of Communication”. The central idea which Everett develops is that of a universal quantum theory with a single all-encompassing wavefunction obeying, at all times, a linear wave equation with no discontinuous changes due to measurement. He introduces the idea of observers as subsystems of such a quantum theory and indicates how the wave equation then leads to the development of correlations between observer states and the states of the subsystems they observe. The assumptions he makes about the form of the observer states allow him to deduce the apparently stochastic nature of individual observations within a globally deterministic framework.
On a technical level, Everett's main theme is the analysis of correlations between quantum subsystems. Given a wavefunction η on one of a pair of subsystems of a composite system with wavefunction ψ, he defines the relative wavefunction ψ^η_rel on the other subsystem. He proves the basis independence of his definition and examines its relation to the Schmidt decomposition which is discussed in von Neumann's book. His work foreshadows developments many years later in quantum information theory. He makes two interesting conjectures. One concerns an information-theoretic form of the uncertainty principle. In 1975, in this paper, it was rediscovered and proved by Białynicki-Birula and Mycielski, generating a substantial literature. Everett's other conjecture is that, for a given wavefunction on a compound space, the Schmidt decomposition maximises the correlation between subsystem bases. In this pdf, I show how this conjecture can be proved as an application of the strong subadditivity of quantum entropy.
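Everett's relative-state construction is simple enough to sketch directly. The following minimal example forms the wavefunction of one subsystem relative to a given state of the other; the amplitudes and basis states chosen are purely illustrative and are not taken from Everett's papers:

```python
import math

# psi[i][j] are the (real, for simplicity) amplitudes of a composite
# wavefunction in a product basis; eta[i] is a normalised state of
# subsystem 1.  The values are illustrative only: a perfectly
# correlated two-level state with unequal branch weights.
psi = [[math.sqrt(0.7), 0.0],
       [0.0, math.sqrt(0.3)]]
eta = [1.0, 0.0]  # subsystem-1 basis state |0>

# Unnormalised relative state on subsystem 2: rel_j = sum_i eta_i psi_ij
rel = [sum(eta[i] * psi[i][j] for i in range(len(eta)))
       for j in range(len(psi[0]))]

# Normalise (assumes the overlap of eta with psi is non-zero)
norm = math.sqrt(sum(a * a for a in rel))
rel = [a / norm for a in rel]
```

For the state chosen here, conditioning on subsystem 1 being in its |0> basis state leaves subsystem 2 in |0> with certainty, reflecting the perfect correlation built into psi; basis independence means the same relative state results whatever product basis is used to expand ψ.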
Everett's claim to have deduced the probabilistic assertions of the usual interpretation of quantum mechanics is too strong. In fact, Everett merely shows why, according to his model, typical observers will tend to make the usual probabilistic assertions. This, however, is a circular claim in that probabilistic assertions are already needed to identify what is to be meant by “typical”. Nevertheless, Everett's arguments are not empty in that he shows consistency between the Hilbert space structure, assumptions about additivity under orthogonality, the Born rule for individual observations, and his identification of typicality.
Everett's model of observers is also both incomplete and far from realistic. It is incomplete in that he assumes that the identification of the observers and of their temporal trajectories is unproblematic. It is far from realistic because he assumes that observers always have well-defined pure states. The core papers on this site constitute my attempts to deal with these issues.
The remainder of the book consists of a collection of related papers. The first of these is a brief 1957 comment on Everett's work by J.A. Wheeler who was Everett's Ph.D. advisor. Wheeler emphasizes, without criticizing, the extent to which observers are unspecified in Everett's theory, in a way which anticipates the idea of quantum theory as just a theory of correlations. Also reprinted is a 1969 paper by L.N. Cooper and D. van Vechten. This presents their independent discovery of the idea that considering quantum theory as a theory of correlations makes it unnecessary to invoke any process of wavefunction collapse. They say, “that we perceive one possibility or another [ . . . ] even though the wave function is a superposition [ . . . ] is due presumably to the nature of that physiological system called mind”. However, beyond drawing attention to the idea of irreversibility, they do not attempt to characterize the nature of “mind”.
In the final paper, N. Graham points out that Everett's measure on branching wavefunctions does not in general give the same weight to each branch. He then attempts to link Everett's measure with the idea of equiprobable states in statistical mechanics. However, despite one or two exceptions, such as thermometers, he is wrong to suggest that a measuring apparatus will normally reach statistical equilibrium before it is read while, at the same time, preserving its reading. Graham fails to distinguish between the unique perfect equilibrium states which are modelled in statistical mechanics, and the much wider class of possible thermal states of a typical macroscopic measuring device. Although such a device would be in thermal contact with its surroundings, it would not have time, in ordinary use, to explore the entirety of its phase space and indeed, it would be useless were it to experience the sort of fluctuations which could carry one measured result to another. (42)
P. Byrne, The Many Worlds of Hugh Everett III. (Oxford 2010)
Hugh Everett III made tremendous contributions not only to quantum mechanics, but also to game theory, to operations research, and to the development of computational algorithms. He was a drunk and a self-centred womaniser and he neglected his children. He used his great talents to work out the most efficient way of killing huge numbers of people with nuclear weapons — although at least he did not attempt to conceal the understanding to which this led him that the consequences of nuclear war would be catastrophic for all sides. In this disturbing biography, Byrne gives an expansive account of a story of brilliance and material success, and of personal failure and evil.
Byrne argues that Wheeler suppressed the original long version of Everett's thesis because he did not want to challenge Bohr's authority. Even this, however, did not prevent the arrogant dismissal of all of Everett's ideas by Bohr and his acolytes. Byrne claims that it is a mistake to divide Everett's view of the reality of the splittings of the universe from DeWitt's. He goes on to claim that it is not correct to attribute the many-minds idea to Everett. Nothing he says, however, alters either the fact that the language used by Everett in his long thesis is different from the language used by DeWitt, or my belief that this difference is of fundamental importance. Indeed, I think that a focus on observers and their correlations, as in Everett's original analysis, is essential if we are to make sense of a reality described by a universal wave equation.
In his autobiography What the Grandchildren Should Know (Little, Brown 2008), Everett's son Mark describes his upbringing as “ridiculous, sometimes tragic, and always unsteady”. He says that his father was so uncommunicative that he thought of him as like a piece of furniture. Mark went on to become the singer/songwriter E of the band Eels. His book is like his songs — honest and very personal — an enjoyable mix of cheery despair, grief, and hope. (44)
J.A. Barrett, The Quantum Mechanics of Minds and Worlds. (Oxford 1999)
A clear, careful, thoughtful, critical survey introducing a wide range of ideas about many-worlds and related interpretations. The book is particularly suited to those with some knowledge of quantum mechanics who prefer words to equations. At first sight, much of Barrett's concern is with theories which are clearly implausible, such as the “bare theory” or “Bell's Everett (?) theory”, but Barrett uses these theories skilfully as simple examples elucidating issues of wider significance. The book includes a detailed reading of Everett's own work and a non-technical analysis of the Bohm interpretation. (9)
B. Rosenblum and F. Kuttner, Quantum Enigma: Physics Encounters Consciousness. (Oxford 2006)
Aimed at an audience with no technical knowledge of physics or mathematics, Rosenblum and Kuttner attempt to explain why the quantum world is so strange, and why so many people have considered that consciousness might be relevant to its mysteries. The account they give is beautifully clear without being over-simplified.
Even at the level at which they are writing, however, I would take issue with them on a few points. In particular, their assumption that collapse is “the same for everyone” blinds them to the possibility of a many-minds interpretation according to which the appearance of common experiences is a consequence of the correlations which are at the heart of any analysis of quantum theory, while each individual observed collapse is just the experience by one individual of one of a number of events possible for that individual.
Rosenblum and Kuttner also express what is in my opinion a naive view of free will. When they state, “though you can't demonstrate your feeling of pain to someone else, you know it exists and it's certainly not meaningless”, the analogy they are trying to draw helps only to indicate that we have a real feeling that we have free will. This is hard to deny. My analysis, however, would start from the idea that our feelings are our experiences of our neural functioning. That functioning is a consequence of the probabilistic laws which make a typical conscious entity of our complexity most likely to see itself as having a past which can be understood as involving something like biological evolution. In those terms, just as our neural functioning, our statements, and our behaviour in response to pain can be seen as a natural outcome of our need to respond to damage by taking evasive action, avoiding further damage, and seeking help; so our functioning, statements, and behaviour when we “make choices” can be seen as a natural outcome of our evolution as social animals in that they allow us to express our desires, talk and think about consequences, and influence others by explaining what we want to do. From this point of view, none of our untutored statements or feelings about free will make it at all plausible that when we feel we are making a choice, we are actually doing anything which is not explicable within the framework of conventional quantum-mechanical stochastic physical laws. There is no evidence that choosing involves bending the laws or cheating the probabilities. There is also no theory to explain how choices could bend the laws or fiddle the probabilities, and, perhaps most significantly, there is no reason to believe that choices would feel any more genuine if they did bend the laws or fiddle the probabilities.
On the other hand, Rosenblum and Kuttner are quite right to emphasize the idea that experimenters appear to be free to choose their experiments. This is a fundamental plot element in the quantum mechanical mystery story. This plot element is used, for example, in showing how astonishingly difficult it is to understand the way that quantum mechanics tells us that the results of Alice's experiments in one place are correlated with the results of Bob's experiments in a different place, given that we can imagine Alice and Bob being so far apart that each can choose to change the experiment they perform without light being able to pass between them before their observations have been completed. In my opinion, the complexity and sensitivity of the thought processes which a human brain can use in coming to a choice are quite sufficient to authenticate this difficulty by ruling out any plausible form of mutual predetermination, without any need for any additional mystery.
In “Critique of ‘Quantum Enigma: Physics encounters Consciousness’ ” arXiv:0705.1996, M. Nauenberg denies that consciousness is required to resolve the problems of quantum mechanics. His arguments consist largely of appeals to authority and foot-stamping. It may be of some historical interest to know, for example, that Einstein held to an ensemble interpretation of the wavefunction, but it gets us no closer to understanding the nature of individual quantum events. Nauenberg's main substantive claim is that such events can be explained in terms of the irreversible amplification involved in any observation. This is the paradigmatic explanation “for all practical purposes”: it lets us work out what we will see without giving us any deeper understanding of the nature of reality. However, the irreversibility in a quantum process is different from the irreversibility of stepping off a cliff-top because the destination of the quantum process is unpredictable. Quantum irreversibility makes the mathematics of quantum probabilities and quantum events look more and more like the mathematics of classical probabilities and classical events. The mathematics of classical probabilities is a mathematics which can be used to describe either an observer's ignorance or a reality consisting of many separate worlds. Only at the level of appearance, however, does a for-all-practical-purposes irreversible process turn a many-worlds reality into a unique unknown future. Kuttner replies to Nauenberg in arXiv:0710.2361. (34)
M. Lockwood, Mind, Brain and the Quantum. (Blackwell 1989)
Lockwood discusses the nature of mind, primarily from a philosophical, or even a psychological, point of view, and he considers the relation between mind and the external world. He takes physicalism to be the idea that there are no mental or psychological facts that cannot be expressed entirely in the language of physics, chemistry, or physiology. He rejects this idea on the grounds that if we are given a complete physical description of another person our understanding of their state of mind can be arrived at only by correlation, direct or indirect, with our own states of mind, as we ourselves introspect them. He then argues against the idea that we are directly aware of external objects, and in favour of the idea that the conscious mind is directly and immediately aware only of states of, and happenings within, itself. Equating mental states and events with states and events in the brain, he claims, in other words, that we are directly and transparently acquainted with our own brain states, rather than with the states of things external to us.
Lockwood turns to quantum mechanics to find the form in which we experience our brain states. He assumes that the immediate contents of awareness always correspond to the shared eigenvalues of some set of compatible observables. He then presents a many-minds reading of Everett and argues that the external world of an individual mind can be described in terms of the wavefunction relative (in the sense of Everett's “relative-state” analysis) to his assumed brain wavefunction.
I agree with much of Lockwood's analysis and with his reading of Everett. Where we differ is mainly in that I believe his analysis of brain states to be entirely inadequate; both in the application of quantum theory and as an analysis of the physical underpinning of mental life. Although I share, at least as a starting point, his assumption of an identity theory according to which mental states and processes, both conscious and unconscious, just are states and processes in the brain, in my view, a universal quantum theory makes it impossible simply to take for granted that we know what is meant by the physical existence of the brain. As a consequence, I believe that the first goal of an identity theory should be a precise specification of the physical structure required to support mental states and processes. On the technical level, I think that wavefunctions are far too fragile and far too dependent on the precise definition of exact system boundaries to be plausibly assigned to systems as large, slow, warm, and wet as neural information carriers. His assumptions about wavefunctions lead Lockwood to implausible speculations about neural quantum computation. Unfortunately, Everett's relative-state analysis also requires the assignment of wavefunctions both to the entire universe (which is at least debatable) and to the subsystems of interest.
I think that Lockwood is right to suppose that our awareness is awareness of our own physical structures. However, if we can manage to find an adequate explanation of the dynamics of those structures, I do not believe that it is necessary also to find an external objective reality matching the apparent content of our awareness. A relative wavefunction, for example, is a mere reflection of a subsystem wavefunction in the mirror of a global wavefunction; it is, in other words, defined entirely by the wavefunctions of the original subsystem and of the complete universe, and provides no independent information. Our observed “external reality” might be thought to explain the future development of our individual physical structures, but it is not clear what part the relative wavefunction could play in such an analysis, if one rejects, as do both Lockwood and I, the idea, hard to make compatible with relativity theory, of the entire universe splitting at each individual observation.
This leaves open the problem of understanding the future development of our individual physical structures. Lockwood entirely neglects this topic. My own approach has been to try to define the probability of future observations given a fixed global state and a history of local information defined by past observations. This led me to generalize the relative entropy function, making it possible to avoid the assumption that any of the quantum states involved have to be wavefunctions.
If our awareness amounts to awareness of ourselves, then we need to have selves rich enough to explain our awareness. Lockwood makes the conventional assumption that all that needs to be discussed is what we seem to happen to be at or around the present moment. He does, however, mention the idea that quantum theory calls the past into question in the same way as it makes the future undetermined. In order to provide a sufficiently rich substrate in this context, I have suggested that, at any moment, a suitable history needs to be part of what we are. Thus I see present awareness as being awareness of the present state of the brain through the medium of its previous states.
Lockwood also discusses the apparent unity of consciousness and attempts to understand our perception of time's apparent flow in the context of the fixed four-dimensional spacetime of relativity theory. In my view, in neither case does he take sufficient account of what would be the ontological consequences of a fully developed identity theory. Such a theory leaves unanswered questions of why or how something with the physical structure of a human brain should come to be self-aware. Nevertheless, if it makes any sense at all, it surely has to be a way of providing answers to questions about the structure of consciousness and about the time-dependence of that structure. Thus a fully developed identity theory answers questions about the individuation and wholeness of individual minds just by pointing to its ontology. Moreover, any such theory is useless, unless, at least in normal circumstances, the content of the awareness of that brain is essentially what that brain says, or shows, that it is. It seems to me that there is no particular mystery, no “hard problem”, about the physical and biological causes of the words and actions by which our brains enable us to talk about and demonstrate unity or disunity in our apparent psychological state. Nor is there a particular mystery about the physical and biological causes of why when a boy sees a hard red wooden cricket ball flying towards his head, his brain shouts “in-coming” even more loudly than it shouts “red”. When he hears a melody, his brain has to be able to process it as more than just a sequence of individual notes, because otherwise he could not talk about it as more than just a sequence of individual notes. Given an identity theory, the perception of movement and of temporal patterns is a biological and physical fact, not much more mysterious or cross-temporal than the working of a speedometer. Brains also exist in time.
At any moment of awareness, they point to themselves as existing at that moment, in a manner that is no more incompatible with relativity theory than the momentary number on the face of a digital watch. If a mind is the self-awareness of a brain, then the moment of its awareness is part of its being. The existence of such a being does not violate the laws of physics; it merely extends its ontology. (36)
This is a superb book. It is not elementary, but the technicalities arise always in the context of carefully explained aims. With an indexed bibliography of about 700 papers, it is a valuable review of much recent physics, both theoretical and experimental. It provides necessary background for serious work on the foundations of quantum theory, making it clear that it is wrong to expect that exactly the same concepts which have been developed for isolated quantum systems will continue to be appropriate for quantum systems in constant interaction with their surroundings. (2)
S. Haroche and J.-M. Raimond, Exploring the Quantum. (Oxford 2006)
From the start of quantum mechanics, people have dreamt up thought experiments to try to get some feeling for the implications of the theory. Increasingly, it has become possible to perform genuine experiments with features of these thought experiments. In this magnificent book, Haroche and Raimond provide detailed descriptions of many recent experiments in which quantum behaviour has been revealed and controlled with astonishing degrees of precision. They discuss experiments involving atoms and photons in cavities, trapped ions, and Bose-Einstein condensates. Their analysis of these experiments is unified by explicit and carefully-explained mathematical models. Haroche and Raimond show how the full intricacies of the quantum mechanical evolutions of isolated systems have been demonstrated in ever more complicated arrangements, while at the same time they explain how interactions with the environment hinder yet further extensions into the realm of quantum computation. (35)
Despite considerable evidence that gauge theories are essential for the description of physical systems at many levels, and despite the beautiful geometrical language in which classical gauge theories can be expressed, a multitude of problems, both technical and conceptual, remain to be solved before we can say that we have a full understanding — in particular when it comes to gauge quantum field theory. Healey discusses some of the conceptual problems of gauge theories with particular emphasis on the question of the reality of gauge potentials and, more generally, on the question of whether there are fundamental properties described by the theories which are path-dependent rather than local. Healey's subject has quite a number of different aspects, several of which are highly technical. Not surprisingly, when it comes to reviewing these technicalities, what he provides is more a sketch than a textbook. Nevertheless, people studying the technical details elsewhere may well find that this book makes interesting supplementary reading.
Healey's treatment of the interpretation of quantum theory seems to me, however, to be fairly superficial as it consists of little more than mentions of attempts to apply to quantum field theory what are already vague and unsatisfactory approaches to the interpretation of non-relativistic quantum mechanics. Unfortunately, when we take quantum field theory rather than quantum mechanics to be our fundamental theory, it becomes significantly harder to understand the reality that we appear to observe. In quantum mechanics, for example, we learn that the properties of individual particles are uncertain and ungraspable. According to interacting relativistic quantum field theories, however, the particles themselves, if we look closely, are ephemeral, uncertain, unlocalizable, and surrounded by inseparable clouds of virtual partners. The universe that quantum field theory pictures for us is a probabilistic soup of fleeting insubstantial ingredients. It is implausible, in my view, that reality as we see it can be assumed, even in the light of decoherence theory, just to fall out of the initial conditions and the mathematical structure of quantum field theory; at least unless we invoke an implicit observer by means of whom we determine at each moment, for example, the scale at which reality is to be examined or the sort of quantum properties chosen to be seen. As an alternative, I have suggested that we are abstract structures of a particular and definite type, and that our reality depends on the existence of laws of nature which specify such abstract structures and their possible developments. (45)
J. Bub, Interpreting the Quantum World. (Cambridge 1997)
Bub's discussion of the interpretation of quantum theory focuses on the hypothesis that certain quantum observables have, or come to have, definite, observer-independent, values. Versions of this hypothesis are advanced in the Bohm interpretation, the modal interpretation, and the Copenhagen interpretation. Bub begins by reviewing theorems which demonstrate limitations in what is possible. For example, Bell's theorem constrains the predetermination of spatially-separated definite values, while the Kochen-Specker theorem constrains the sets of observables which can simultaneously have definite values. To allow for these limitations, Bub restricts his attention to situations in which we are given a pure state e and an observable R of some finite-dimensional quantum system. His central theorem characterizes the maximal lattice of projections which can be assigned values in such a way as to recover the probabilities that the state e gives for the spectrum of R. He discusses the dynamics of the definite values, noting the importance of interactions with the environment.
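For reference, the probabilities which Bub's value assignments are required to recover are the standard Born-rule weights. In the usual notation (this is the textbook formulation rather than Bub's own), if the preferred observable has spectral decomposition

```latex
R = \sum_i r_i P_i , \qquad
\mathrm{prob}(R = r_i \mid e) = \langle e \,|\, P_i \,|\, e \rangle ,
```

then a value assignment over the maximal lattice of projections must reproduce each of these weights for the given pure state e.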
Bub investigates some significant and natural questions in the interpretation of quantum mechanics. However, it seems to me that he overstates the importance of his central theorem. For that theorem to be relevant, it would be necessary for the observable R to be a fundamental observer-independent aspect of reality. This is not impossible, but the obvious choices just lead back to the Bohm interpretation, the modal interpretation, and the Copenhagen interpretation. Bub does give a brief discussion of each, but fails to resolve their various problems. In “Quantum Mechanics as a Principle Theory” quant-ph/9910096, he suggests a version of the Copenhagen interpretation in which environmental decoherence allows R to be generated “from processes internal to quantum mechanics on the basis of the dynamics alone, without requiring any privileged status for observers”. He does not, however, explain how environmental decoherence can lead to a unique R.
Despite his emphasis on “no collapse” interpretations, several of the ideas in Bub's book depend on the physical relevance for general quantum systems of pure quantum states. For example, he proves a decomposition theorem showing uniqueness, when the decompositions exist, of decompositions of pure states of n-component tensor product Hilbert spaces into sums of products of pure states, for n > 2, under quite weak independence conditions on the components of the summands. He suggests that this theorem can be used to identify states of pointers — which are presumably macroscopic objects — but he does not analyse the form or the stability of the unique wavefunctions identified by the theorem. In fact, in many circumstances, they are hopelessly unstable (Donald 2004).
In “Maximal Beable Subalgebras of Quantum-Mechanical Observables” quant-ph/9905042, H. Halvorson and R. Clifton generalize Bub's central theorem to cover arbitrary states and observables in arbitrary C*-algebras. This involves some mathematical sophistication, but their exposition is excellent. They give an example to show that the uniqueness of Bub's maximal lattice can be lost when mixed states are considered. They leave the problem of the identification of the observable R untouched. (20)
M. Beller, Quantum Dialogue. (Chicago 1999)
A detailed analysis of the development of the Copenhagen interpretation of quantum mechanics, from its uncertain beginnings in open dialogue between colleagues and competitors to its hardening into the official, supposedly unchallengeable, establishment dogma. Beller identifies many confusions and inconsistencies in this dogma. Although the book is sometimes hard to follow without reference to the original papers, I found it both fascinating and convincing. (13)
K. Barad, Meeting the Universe Halfway. (Duke 2007)
Barad claims that her “book contributes to the founding of a new ontology, epistemology, and ethics, including a new understanding of the nature of scientific practices”. She invokes Bohr's ideas of complementarity and context-dependence to motivate the suggestion that the primary ontological units should be “phenomena” rather than independent objects with inherent boundaries and properties. “According to Bohr,” she writes, “theoretical concepts (e.g. position and momentum) are not ideational in character but rather specific physical arrangements”. However, it takes more than a torrent of words (“apparatuses are specific material-discursive practices [ . . . ]; apparatuses produce differences that matter [ . . . ]; apparatuses are themselves phenomena [ . . . ]; apparatuses have no intrinsic boundaries but are open-ended practices [ . . . ]”) to explain why there is some point in saying that position (for example) is what one particular apparatus happens to measure.
In my view, science is primarily about the construction, analysis, understanding, and testing of theoretical models which appear to give us some handle on reality. Barad, however, seems so concerned with discussing how this process can never be complete that she almost entirely loses sight of the models and their meaning. It may be amusing to learn that the sulphur from Stern's cheap cigar was the necessary final ingredient making the result of the Stern-Gerlach experiment visible, but the cigar is not fundamental. Even if Stern and Gerlach's experiment had failed, the effect would have been seen in due course by another group using better equipment. What matters are the arguments which make us look for such an effect, how we model its cause if we do find it, and whether our findings and our models can be confirmed and extended.
Barad's attempt to interpret quantum theory fails because she makes no attempt to model the details of the theory. It is all very well to claim that, “a phenomenon is a specific intra-action of an ‘object’ and the ‘measuring agencies’ ”, and that, “the object and the measuring agencies emerge from, rather than precede, the intra-action that produces them”, or even that, “relata do not precede relations; rather relata-within-phenomena emerge through specific intra-actions”. However, the really difficult problem is to explain how “nature” could possibly “work out” what is going to happen next. Scientific explanations amount to a model consisting of an ontology and laws. Newton, for example, told us that a body keeps moving in a straight line with a constant speed unless compelled to change by an external force. Quantum theory tells us that wavefunctions obey the Schrödinger equation or some generalization of it. But, as Schrödinger himself pointed out, the Schrödinger equation is not sufficient to explain our observations. It is not even sufficient to explain our possible observations. Not only does the Schrödinger equation not tell us whether we will see the cat dead or alive, it does not even give us any reason to believe that we will go on seeing. We assume that we will go on seeing because we assume that we will be part of the future, but this is to make the ontological assumption that we will continue to be, which is an assumption which goes beyond the ontological framework implied by the Schrödinger equation.
Barad says that, “ ‘observer’ and ‘observed’ are nothing more than two physical systems intra-acting in the marking of the ‘effect’ by the ‘cause’ ”, and takes for granted that she knows what is meant by a “physical system”. Statements like “phenomena are the ontological inseparability of intra-acting ‘agencies’ ” or “intra-actions effect the rich topology of connective causal relations that are iteratively performed and reconfigured” are just waffle. Physicists cut through such waffle by constructing theoretical models. Something like the consistent histories idea is how people usually attempt to model the sort of approach to the interpretation of quantum theory advocated by Barad. This idea provides a sketch of a variety of sets of possible future structures but it fails, not because, like all indeterministic theories, it fails to specify which individual possibility will occur, but because it fails even to specify which set of possibilities is allowed. Without such a specification, probabilities are meaningless. Barad is totally unspecific about what she thinks is meant by a future possibility. Despite her claim that “no human observers are required”, it appears simply that we are to know the possibilities when we see them, or when they “emerge”. At best, we could assume that we are already given an apparatus, but in that case Barad is not even working at the level of specificity of the conventional for-all-practical-purposes interpretation as she says, for example, that apparatuses “are not merely laboratory setups that embody human concepts and take measurements”.
Although long-winded, repetitious, and badly-edited, the book is largely free of impenetrable jargon. In general, however, it hardly seems necessary to invoke Bohr or quantum mechanics in order to argue that everything is connected and that wisdom requires the consideration of different points of view. Indeed, since “according to Bohr, apparatuses are macroscopic material arrangements through which particular concepts are given definition, to the exclusion of others”, it would be just as justifiable to suppose that the moral teaching of Bohr's quantum theory is that might is right. (37)
C. Norris, Quantum Theory and the Flight from Realism. (Routledge 2000)
Norris defends realism; in particular the idea that there are truths about reality which are independent of our abilities to verify them. He argues that anti-realist trends in philosophy, as well as the excesses of cultural relativism, have been encouraged by the refusal of the proponents of the Copenhagen interpretation to permit realist hypotheses about the quantum domain. I agree with him on these points. However, it seems to me that our best hope of grasping the nature of reality is through the development of complete and consistent physical theories which are compatible with as wide as possible a range of experimental evidence. Norris pays little attention to physical theory. In his discussions of quantum mechanics, he relies largely on secondary sources and popular accounts. He claims repeatedly that physicists have failed to take Bohmian mechanics sufficiently seriously, but his remarks on the nature of the pilot wave and on quantum field theory suggest that he has not completely come to terms with the counter-arguments. (17)
P.R. Holland, The Quantum Theory of Motion. (Cambridge 1993)
A lengthy textbook on Bohmian mechanics. Holland provides an extended treatment of the non-relativistic theory with detailed analyses of many examples; including stationary states, diffraction, and the double-slit and Stern-Gerlach experiments. He devotes a chapter to the Hamilton-Jacobi theory, bringing out powerful analogies between formalisms of classical and quantum mechanics; and two chapters to alternative treatments of non-relativistic spin-half particles. Holland's final chapter looks at a variety of ways of attempting to provide a pilot-wave analysis of relativistic quantum field theory; either by a piloted-particle treatment of the Klein-Gordon or Dirac equations, or by a piloted-field treatment of a free quantum field. These approaches are distinct and each has significant problems. In the context of a piloted-field treatment of free quantum electrodynamics, Holland suggests that “photons” are not fundamental. His analysis is incomplete, but even so, it indicates that it is not possible to provide a uniform realist analysis of all quantum entities which can appear to behave quasi-classically. Holland does not investigate the nature of quarks.
Holland's work on the non-relativistic theory is impressive, but it seems to me that even here there are significant problems which he fails to address. Pilot-wave theory, as he presents it, amounts just to a theory of simple model systems with implicit and unanalysed external observers in a classical macroscopic world. Holland analyses many model wavefunctions but he does not, for example, discuss large thermal systems in which there can be an enormous range of possible global wavefunctions allowing, in individual instances, entirely unclassical behaviour for the piloted particles at a local level.
My own response to pilot-wave theory is developed further here and here. (38)
D. Bohm and B.J. Hiley, The Undivided Universe. (Routledge 1993)
Bohm and Hiley's presentation of Bohmian mechanics takes more for granted than Holland's, and is correspondingly less appropriate as an introductory account. It does, nevertheless, have much to offer. As well as sophisticated analyses of many aspects of Bohmian mechanics, notably with regard to its explanation of topics in measurement theory, Bohm and Hiley discuss possible generalizations, or alternative versions, of their theory to deal with the Pauli equation, the Dirac equation, and boson quantum field theories. They also look at some other interpretations of quantum theory; in particular discussing some of the problems with early versions of the many-worlds interpretation. On the other hand, when they try to extend their own ideas beyond their specific formal versions, their proposals mainly strike me as being too vague to be useful.
The fundamental premise of Bohmian mechanics is the existence of particles guided by a wavefunction. Bohm and Hiley use conventional methods of advanced quantum mechanics to analyse circumstances in which it is appropriate for a given guiding wavefunction at the start of a measurement to be superseded, by the end of the measurement, by another effective guiding wavefunction corresponding to just one of the possible measurement outcomes. Bohm and Hiley make two crucial assumptions here. Their first assumption is that there is an appropriate wavefunction at the start of the measurement. Bohmian mechanics, however, is supposed to be a universal quantum theory without collapse, so this assumption depends on the nature of the unique true fundamental universal wavefunction. Essentially, Bohm and Hiley are assuming that the entire history of the universe can be described as a succession of processes in which this universal wavefunction can be superseded by a succession of quasi-local guiding wavefunctions. In my view, however, cosmology tells us that at all but the scale of the entire universe, the universal wavefunction is a locally thermal state. As such, there are no effective local guiding wavefunctions to begin with. This implies at all times an absence of the simple environment-independent guidance equations that Bohm and Hiley invoke in their models. Moreover, even if we assume the existence of macroscopic objects, similar problems will re-appear in any thermalized system, including any human brain.
Bohm and Hiley's second crucial assumption is that the existence of the particles explains our observations. In order to justify this assumption, they need to show that particles constitute what we observe, or at least what we are made up of. They state (their italics) that “we are directly aware of the particle aspect of the universe through the senses and that the more subtle wave function aspect is inferred by thought about our sensory experience in the domain that is manifest to the senses”. Unfortunately, their failure to justify their first assumption means that they also fail to justify their second assumption. Their argument for the existence of a classical level, in which they say the effects of the wavefunction “can be consistently left out of account”, ignores the extent to which the ubiquity of environmental entanglement at any such level invalidates the model of local wavefunctions which they invoke in making their argument.
It seems to me that without these assumptions, Bohm and Hiley's interpretation fails to have any justification. Given these assumptions, for Bohm and Hiley, observational probabilities are ignorance probabilities in a non-local deterministic theory, so that we can only learn about what has always been fixed by the global initial conditions. On the other hand, in my interpretation, which is a local indeterministic theory, we learn about what is locally possible. Each observation permanently fixes some information into the developing structure of an individual observer. Local correlations inside an initial thermal state can be explored and an observer is most likely to observe the most likely correlations consonant with her nature and her past. (41)
P. Pylkkänen, Mind, Matter and the Implicate Order. (Springer 2007)
Pylkkänen reviews Bohm's metaphysical work, mainly by extensive quotation. His book is full of ambition about understanding “the architecture of matter” and “the architecture of consciousness” but it seems largely devoid of achievement. Indeed, a book almost entirely without equations is not likely to tell us anything very useful about the “fundamental nature of matter and its movement”.
Non-relativistic Bohmian mechanics, as described, for example, by Holland, is a precise mathematical theory. It has never been clear, at least to me, exactly how that theory is supposed to address questions about consciousness but, however it might, it is still a materialist physical theory. This physical theory does have at its heart its own fundamental duality between particle positions and the unique global guiding wavefunction, but this remains a duality within a materialist framework. Suggesting, as do Pylkkänen and Bohm, that the wavefunction guiding particles is somehow analogous to mind “guiding” matter does nothing to explain what gives something like a wavefunction its own local experiences. The ions in a brain are also guided by neural electric fields, and like the global wavefunction, both ions and electric fields have behaviours which can be abstracted to form representations of mental contents. This, however, merely provides three different ways of modelling the “body” aspect of the mind-body problem. My own work involves yet another way of modelling body in terms of patterns of neural events. In my case, however, I have attempted to provide an explicit analysis of the required abstractions. This makes my theory complex, but, in my view, such complexities are necessary if we are to identify the physical structures which underlie the experiences of individual observers, to determine the possible temporal developments of those structures, and to provide an adequate definition of probabilities, be they objective or subjective.
Pylkkänen is entirely vague on the structure and individuality of minds and on their temporal developments. He is entirely vague about the relationship between brains and minds. He has a chapter on the consciousness of time with essentially no analysis of the fundamental, even if preliminary, issue of how neurons respond to and represent changing events. His section on the classical Zeno paradox similarly fails to analyse the dimension of what exists at a moment. I would suggest that there is no paradox in dynamics because a moving arrow exists in a space in which its velocity as well as its position is a variable; and there is no mental paradox, because the brain represents in simultaneous neural excitations both the observed position of the arrow and how it is currently seen to be moving.
Pylkkänen endorses Bohm's rejection of an ultimate fundamental theory. I would agree that we are far from having such a theory. Nevertheless, it does not seem to me that this excuses hand-waving vagueness about the ideas that we do have. There might well be a deterministic level underlying apparent indeterminism. This does not mean, however, that there are not important lessons to be learnt about reality by precise analysis of observed events and apparent probabilities at the only levels currently available to us. (40)
G. Berkeley, A Treatise Concerning the Principles of Human Knowledge. (Pepyat 1710)
Brilliant and bold, Berkeley argues that it is obvious that we experience nothing except our ideas and that it is only through our ideas that we learn about reality. It is therefore, he claims, wrong to believe in the existence of non-mental objects. Indeed, he says, even if there were such objects it would be impossible
“to comprehend in what manner body can act upon spirit, or how it is possible it should imprint any idea in the mind”. He then faces the challenge that
“whatever power I may have over my own thoughts, I find the ideas actually perceived by sense have not a like dependence on my will. When in broad daylight I open my eyes, it is not in my power to choose whether I shall see or no, or to determine what particular objects shall present themselves to my view; and so likewise as to the hearing and other senses; the ideas imprinted on them are not creatures of my will”. He therefore proposes that the perceived ideas are produced by God and he says
“the set rules or established methods wherein the Mind we depend on excites in us the ideas of sense, are called the Laws of Nature; and these we learn by experience, which teaches us that such and such ideas are attended with such and such other ideas, in the ordinary course of things”. Berkeley's theology is fundamental to him. He requires that God is free and so suggests that the laws of nature are only regularities which God chooses in order to give
“us a sort of foresight which enables us to regulate our actions for the benefit of life”. As he says, however,
“It will, I doubt not, be objected that the slow and gradual methods observed in the production of natural things do not seem to have for their cause the immediate hand of an Almighty Agent. Besides, monsters, untimely births, fruits blasted in the blossom, rains falling in desert places, miseries incident to human life, and the like, are so many arguments that the whole frame of nature is not immediately actuated and superintended by a Spirit of infinite wisdom and goodness”. His reply to this is
“the aforesaid methods of nature are absolutely necessary, in order to working by the most simple and general rules, and after a steady and consistent manner; which argues both the wisdom and goodness of God”.
With quantum theory, it can be argued that our observations present to us an apparent world very different from any underlying external physical universe. Indeed, ultimately that “universe” (described by the Everett “universal wavefunction”) may reduce to something as simple and lawfully defined as a vacuum state. This suggests a modified atheistic idealism, in which laws are fundamental and determine our possible observations and their probabilities. (23)
A. Franklin, Are There Really Neutrinos? (Westview 2004)
Franklin wrote this historical account of the development of the experimental evidence for neutrinos and for their properties in order to show science in action as a rational and self-correcting enterprise. Unfortunately, his subject matter is so technical that it seems unlikely that the book will be studied by anyone ignorant, pretentious, biased, conceited, or stupid enough to doubt the fact. Physicists, on the other hand, should enjoy the drama which lies in the details.
Nevertheless, it does not seem to me that the work Franklin reviews should necessarily convince us that there actually “really” are neutrinos, any more than Dr Johnson's famous stone-kicking answer to Bishop Berkeley's idealism (“I refute it thus”) should necessarily convince us that there actually “really” are stones. What is most convincingly demonstrated is merely that we do not freely choose the content of our experiences. Thus Franklin's account seems to me to be entirely compatible with my own view, which is that, rather than being due to real physical objects, the appearances of neutrinos — and of stones — are just consequences of the observer-independent physical laws which determine the probabilities of our observations. (19)
J. Dunham, I.H. Grant, and S. Watson, Idealism: The History of a Philosophy. (Acumen 2011)
Even within philosophy, “idealism” has multiple meanings. One fundamental distinction turns on the derivation of the word. “Idealism” can be related directly to “Idea” in the Platonic sense of some sort of archetype of a concept, property, or kind; or it can be derived from the later use of “idea” to mean a sensation or thought. Dunham, Grant, and Watson are particularly interested in work for which the former sense is more appropriate. Their book is not introductory. They let loose a blizzard of fragmentary summaries of metaphysical ideas from a succession of thinkers but hardly attempt, even within the limits of intellectual history, to provide historical, let alone biographical, context. I would not recommend the book to anyone not already familiar with the philosophers under discussion.
Dunham, Grant, and Watson suggest that idealism with a Platonic interpretation can be compatible with the acceptance of a scientific outlook. In order to achieve this however, it would be necessary to explain, in a scientific way, what Platonic Ideas are and how they are supposed to achieve any suggested (final) causal role. Apart from some rather vague remarks, for example, about attractors in dynamical systems, Dunham, Grant, and Watson provide no evidence that such explanations have been given. The dynamics of a dynamical system, of course, already provides an adequate reductive explanation for the existence of its attractors. Moreover, while it is possible to find mathematical models in which inevitable ends are necessarily reached, these tend to be model-dependent idealizations of physical situations. The ends in these models also have little in the way of archetypal properties in the sense, for example, of “virtue” or “perfection”.
In general, idealist philosophers tend to propose highly speculative and elaborate systems. Their motivations vary. There have been attempts to solve problems which subsequent science seems now to have adequate means to address; such as how matter can interact, or how living beings can be formed. There may alternatively be a desire to demonstrate how it could be possible for some particular entity or concept, such as God, virtue, freedom, or consciousness to be at the heart of reality; particularly in circumstances in which the materialist science of the day has made ascribing such reality seem difficult. The easiest way to rebut one of these proposals is to argue that the fundamental entity is not real in the required way. My own version of idealism is driven by a desire to explain quantum mechanics. This has led me to the conclusion that individual observations and their probabilities are at the heart of reality. The result is certainly a highly speculative and elaborate system, but one which is much more compatible with modern science than most of those discussed by Dunham, Grant, and Watson. The concepts and laws in terms of which my system is expressed are also much more explicitly defined than are most versions of Platonic Ideas which, I believe, amount just to useful abstractions identified by human thought.
Dunham, Grant, and Watson are concerned to argue that idealism is not anti-realism. I would suggest that we should be realists, not about Platonic Ideas, but about the laws which, in my view, determine not only the behaviour of matter, but also the structures of consciousness and the probabilities of possible experiences. It is these laws which science can try to identify but ultimately cannot explain except by reference, for example, to symmetries and to simplicity. This leaves one only to hope that there might be some better way, in particular perhaps a complexity-dependent way, of making sense of Leibniz's idea of ours being that optimal reality in which one finds “the greatest variety in combination with the greatest possible order”. (49)
P. Woit, Not Even Wrong. (Jonathan Cape 2006)
Woit provides a detailed, competent, interesting account of how, following the success of the standard model, string theory came to be the dominant new idea in high energy physics. He then points out that the idea has no experimental support; he argues that it is too incomplete to be a genuine theory; and he criticizes its continuing dominance. I agree with, or find plausible, most of what he has to say. In particular, I feel that in circumstances where experiment cannot provide constant feedback, it is essential that theorists are scrupulously honest, to themselves as well as to others, about their assumptions and about precisely where the boundary lies between what they can actually prove from those assumptions and what they would merely like to believe. (27)
P. Davies, The Goldilocks Enigma. (Allen Lane 2006)
A sufficiently broad survey of the wilder speculations of modern physicists and others to provide everyone with some ideas they will want to reject. Davies considers some fundamental questions about the nature of the universe and of the laws which it seems to obey. The suggested answers vary in the extent to which they are circular, self-contradictory, untestable, or incomplete. What Davies reveals most clearly is how much we do not, and perhaps cannot, know; but I think that he also demonstrates that it can be interesting to explore questions beyond practical or even sensible bounds, to try to imagine how such questions might be answered, and to look at how we can constrain possible answers. (32)
S. Blackburn, Truth: A Guide for the Perplexed. (Penguin 2005)
How can we accept that certainty is unobtainable and at the same time maintain that creationism is ridiculous? Blackburn reviews some of our options. A discussion of arguments and responses rather than a treatise, this is serious popular philosophy of the best kind.
Blackburn says that in the ancient world scepticism led to the suspension of belief, but can we just be pragmatic and build our earthquake detectors without wanting to believe in the geological processes that seem to explain why they work? My aim in trying to understand quantum theory is to try to push beyond pragmatism to ask what quantum theory might be telling us about the nature of reality. This cannot be done without a framework of assumptions; for example about the existence, stability, and simplicity of mind-independent scientific laws. In my proposals, it is these laws which ultimately explain the appearance of geological processes. Scepticism provides the freedom to think in an investigation of this sort, and requiring consistency in details at every level is an essential driving discipline. Creationism seems to me to be ridiculous when it claims to be sceptical about science without considering the details of the scientific response while, at the same time, in its own framework it refuses either to be sceptical or to require consistency. (22)
P. Nelson, Biological Physics: Energy, Information, Life. (Freeman 2004)
A first-rate introduction to the physical analysis of biological systems. Nelson emphasizes the role of entropy at the level of the interactions of individual molecules. Any supposed involvement of quantum theory in cognitive processing should be compatible with this. One of the “molecular machines” which Nelson analyses in some detail is kinesin. He explains how this two-component protein can “walk” along microtubules and so transport supplies from one part of a cell to another. I do not believe that microtubules have any direct involvement in cognitive processing, but, if they did, it would need to be compatible both with their essential role in this transport process, and with the fact, mentioned by Nelson, that microtubule formation is entropically driven. Nelson also discusses the propagation of nervous impulses. (18)
D.J. Chalmers, The Conscious Mind. (Oxford 1996)
Mainly an excellent review of modern philosophy of consciousness, arguing against reductionism and for a form of dualism. Towards the end of the book, Chalmers speculates briefly about consciousness and information. I found these speculations entirely unconvincing because he does not explain what makes a physical system a realization of an information space. His comments on quantum mechanics, while quite sensible, also lack depth.
Chalmers makes a useful distinction between types of problem in this area. All problems involving how brains work at the physical level he describes as “easy”. By this he means that, whatever difficulties they may introduce at the technical level, problems such as, “How could a physical system be the sort of thing that could learn or that could remember?” involve conventional science and introduce no deep metaphysical enigmas. The hard problem, on the other hand, is the question of how and why cognitive functioning is accompanied by conscious experience.
Chalmers’ web site contains extensive bibliographies and directories of online papers on consciousness and related topics. (4)
M. Robinson, Understanding Behaviour and Development in Early Childhood. (Routledge 2011)
A competent and humane review of how young children begin to develop as people. Robinson discusses the growth and maturation of the brain and the senses. She looks at the way in which, in normal circumstances, abilities and challenges emerge in a predictable pattern. She emphasizes the importance of a child's relationships and interactions with carers and shows how, even with very young children, problems here can act to influence subsequent behaviour over the long term. This is an excellent text in human biology. It can be read from an entirely materialist point of view. According to this, human brains have evolved to develop in appropriate social settings by a complex sequence of feedback loops, and the more the setting is abnormal, the more likely it is that the development will go awry.
Robinson tells us what the mental lives of young children look like, but from my point of view, the question raised by work of this kind is “Where do the minds come in?”. I don't see mind as being identified by apparent behaviour; even if behaviour is taken to include what is said or what is being thought (considering thought as physical brain activity amounting to internal speech or pre-speech). Instead, I think of mind as being that behaviour for itself. Unfortunately, this means that a materialist explanation of behaviour is not sufficient to explain why there is something it is like to be a particular physical system with that behaviour.
In fact, my analysis of quantum theory suggests that even the identification of “physical systems” is problematic. This leads me to the proposal that the fundamental entities are particular kinds of realised patterns of information, and hence to a form of idealism. Quantum physics, in my view, then provides a probabilistic foundation for human biology; showing how the most likely patterns of information of human complexity should appear to be the products of evolutionary processes. Nevertheless, there remains the question of how and why such patterns are accompanied by their own conscious experiences. This is the “hard problem” for my kind of idealism. How can what we say about our own feelings be true? Has it been true right from the beginning of our lives?
Robinson shows us how our individual minds develop from simplicity in broadly pre-programmed stages. It begins to be possible in the light of that sort of development to imagine how, as separate individual patterns of information, we could learn to make sense of what we are, but nothing comes of nothing and there remains, I think, an initiation problem; the problem of identifying, without begging the question, the mental and emotional bootstraps by which we pull ourselves into reality. (46)
J.R. Searle, The Rediscovery of the Mind. (MIT 1992)
According to Searle, modern philosophers of mind tend to be so obsessed with avoiding any hint of Cartesian dualism that they deny the all-too-obvious reality of human consciousness. He claims that consciousness is an emergent property of the physical brain, rather as liquidity is an emergent property of collections of water molecules at appropriate temperatures and pressures. Searle, however, makes no attempt to explain what this emergence might involve. For example, he considers five possible senses of “reductionism” which may be applied to consciousness but an application he does not consider is that consciousness itself may reduce to combinations of fundamental elements of information. Nor does Searle indicate how one could begin to understand which physical systems could possess consciousness other than by invoking direct physical similarity to human beings. Thus, it is apparently just obvious to Searle that dogs and cats are conscious and that computers and cars are not. Nevertheless, I think that he is right to argue that consciousness is real, that some things are not conscious, and that, by itself, an abstract definition of computation is far too broad to underpin a characterization of consciousness.
Searle seems to be rather confused about the causal role of consciousness. On the one hand, he claims to be able to imagine a situation “where you have no conscious mental life whatever, but your externally observable behaviour remains the same”, and on the other hand, he claims that “consciousness gives us much greater powers of discrimination than unconscious mechanisms would have”. This confusion between two different meanings of the word “consciousness”, where the first meaning involves the reality of subjectivity and the second the level of awakeness, is not helpful. The hard philosophical problems concern the first meaning rather than the second. With that meaning, I believe that consciousness is a fundamental part of what exists. It has no causal role — in the sense that consciousness adds no new interaction to physics — rather it is the existence for itself of that which is aware.
Searle has significant arguments against attempts to model neural functioning in terms of the running of computer programs. He also points out how much background knowledge is required to make sense of any thought. I think, however, that Searle's most valuable contribution is to argue that “syntax is essentially an observer-relative notion”. In other words, whether something carries a message or performs a computation depends on who is looking and in what way they look. If there is no who — if there are no observers, no minds, no fundamental elements of information — then there are no messages and no computations, there is only stuff obeying fundamental physical laws. (43)
D.C. Dennett, Consciousness Explained. (Little Brown 1991)
A philosopher's attempt to come to terms with modern neuroscience. Provides useful reviews of work both in neuroscience and in philosophy and much excellent analysis. Dennett does not explain consciousness itself, but does explain that what needs to be explained is the consciousness of brains.
Dennett's subsequent book Freedom Evolves (Penguin 2003) makes clearer the boundaries of his explanations. In this book, he argues that deterministic physical theories do not contradict the existence of personal free will and that the existence of personal free will is made no more plausible by indeterministic physical theories. I agree. Nevertheless, at the level of foundational physics, questions of what is in fact determined and what is in fact stochastic are of more concern than questions of how we should understand the everyday sense of words like “free will”, “choice”, and “possibility”. Similarly, at a foundational level, it may be necessary to develop theories which discriminate between entities which are in fact conscious and entities which would merely be observed to behave as if they were conscious. (3)
J. McCrone, Going Inside. (Faber 1999)
A well-written and well-informed journalistic account of recent developments in neuroscience, emphasizing the dynamic, plastic, and distributed nature of neural processing. Good background reading for philosophers of mind, even if, in his last chapter, McCrone manages to miss entirely the point of Chalmers’ “hard problem”. The point is not that, “a successful theory of consciousness would have to be able to tell us exactly why things feel the way they do” — it is that we have no idea why things should feel at all. (5)
D. Lloyd, Radiant Cool. (MIT 2004)
The first part of this book is great fun; a novel about a philosophy graduate student searching for her missing advisor and discussing theories of consciousness with a variety of brilliantly sketched characters. In the second part, Lloyd writes in a more conventional way about some of his own theories. He makes a plausible case that neural networks can model features of the consciousness of time; showing, for example, that recurrent neural networks can learn to anticipate expected events. He is also interested in the development of techniques to present and analyse information from both artificial neural networks and brain scans. For example, he uses these techniques to demonstrate that some brain scans contain information about recently past events. The problem with such techniques is that they leave one wondering whether there is supposed to be a ghost lurking in one's own brain, applying some complicated procedure to discover what to be conscious of. As I discuss in Donald 1997, this problem may be ameliorated in the context of an interpretation of quantum theory in terms of a characterization of the mind of an observer, because such an interpretation provides a reason to suppose that consciousness exists in some definite and fundamental abstract space and achieves self-understanding through being its entire historical development. (30)
R. Rorty, Philosophy and the Mirror of Nature. (Princeton 1980)
A sustained attack on the notion of philosophy as providing a foundation for knowledge. Rorty argues that there are no completely unassailable a priori techniques for arriving at truth, and endorses the idea that truth is not as useful a concept as warranted assertability. His arguments and his discussions of developments in philosophy are interesting, and I have some sympathy with his views. However, I tend to believe that, even if much of what we do is no more than playing within the rules of some particular language game, “rules” themselves are necessarily constrained by the unavoidable laws of elementary logic. Not much more is needed for finitary mathematics, the results of which in consequence seem to me to be essentially undeniable, even if the form in which they are presented is hardly pre-ordained. Science, on the other hand, may well involve such a variety of methods and assumptions, incomplete theoretical constructions and experimental limitations, fashions and personalities, that it can indeed sometimes be more appropriate to think of it as a developing conversation than as any sort of necessary revelation. Nevertheless, unless there is some sort of fixed underlying reality of fact and law; however incomplete our knowledge of it may continue to be and however many different ways there are in which we might come to describe aspects of it; it is hard to understand how, despite radical conceptual revisions, the conversation can go on ever widening its scope while getting more and more precise, coherent, and useful. Rorty's radical pragmatism itself is only useful if it is taken as a form of scepticism. It should be kept as a background argument to remind us of our ultimate ignorance, rather than misused to suggest that we have nothing but ignorance.
Rorty argues that the idea of indubitability as characteristic of mind has been fundamental to the development of philosophy since the seventeenth century. His dismissal, following Wittgenstein, of indubitability as just, “the remark that we have the convention of taking people's word for what they are feeling”, seems to me to be perfunctory. In my view, it is necessary to distinguish what we are as experiencing beings, from our reports about our experiences. The reports are caused, not by our experiences, but by current events among the same evolved physical brain processes which, over our lifetimes, have structured our experiences. Incompatibility between report and experience cannot therefore be ruled out. Our feelings, in my view, arise from the meanings we find ourselves having as the complex structures that we are. The normal apparent compatibility between what we say and what we feel is then an indication that meaning arises from our structure in a consistent and almost inevitable way. (47)
V.S. Ramachandran and S. Blakeslee, Phantoms in the Brain. (Harper Collins 1998)
Ramachandran draws on his years of clinical practice to describe the bizarre ways in which people sometimes behave after various types of brain damage. Using recent progress in brain research and imaging, he attempts to explain how specific effects might follow from specific damage. His work provides a fascinating and terrifying demonstration of the dependence of normal human behaviour on the combined working of many different parts of the brain.
Fundamental to all scientific approaches to the nature of consciousness is, of course, the assumption of some form of correspondence, or identity, between mind and brain. This correspondence is investigated by studying the way that our behaviour depends on our brains and by assuming that there are close links between what we are experiencing and what we say we are experiencing. Ramachandran's descriptions of extremes in the relationship between brain functioning and behaviour bring the meaning of these assumptions into sharp focus.
Although I am an idealist, my conclusions about the consequences for the mind of apparent brain destruction are essentially equivalent to those of a materialist. In my view, “brains”, or more precisely brain histories, are effectively objective abstract structured patterns of information produced, and ended, by objective physical laws and probabilities. Just as in conventional approaches minds can be seen as dependent on, or as realized by, or as experiences of, extended material structures; so in my approach minds can be seen as dependent on, and made real in the form of, and experiences of, geometrical and temporal abstract structures. The difference is that in my approach those structures are the reality subject to law rather than merely part of that reality.
It is difficult to know what it might be like to be someone in some of the situations that Ramachandran describes. Indeed it might be easier to understand what it would be like to be a bat. What the patient is saying or doing may well be inconsistent. This places a particular strain on theories of consciousness which refer only to short-term neural functioning and assume that functioning to be “normal” and “purposeful”. I would not pretend that my picture of consciousness as a developing history is likely to have any clinical relevance; but it seems to me more plausible to imagine that we can guess at patients' current experiences by relating them to their previous experiences, rather than just by looking at what they might currently be capable of. (33)
T.D. Wilson, Strangers to Ourselves: Discovering the Adaptive Unconscious. (Belknap 2002)
“Minds” are the fundamental entities in my interpretation of quantum theory. Minds are what we are. More precisely, I take minds to be the structures from which our experiences arise. This does not mean, however, that we have, or even that we can have, explicit knowledge of every detail of our own structure; rather we have the awareness which results from being that entire structure. From the classical point of view, this is like noting that rather than having explicit knowledge of all the workings of our own brain, we have the awareness which results from having that functioning brain.
Wilson reviews evidence from psychology showing that we have explicit knowledge of only part of the mental processing involved in our behaviour. In example after example, he shows how our actions, our feelings, and our judgements all involve high-level processing inaccessible to consciousness. Much of our character seems to stem from ways we develop of analysing the world, but this can leave us making automatic and sometimes undesirable responses to situations. Wilson also discusses the unreliability of our beliefs about our character, about our motives, and about how we will feel about future events. He even argues that it is possible for us to fail to know our own true feelings, and he suggests that sometimes examining our actual behaviour may be better than introspection as a way of finding out about ourselves.
Wilson's work underlines the need to take care to specify what we mean when we use terms like “mind” and “consciousness”. Care is also needed with terms like “brain” and “structure”. A many-minds interpretation of quantum theory does not resolve all the traditional mind-body problems. In particular, it does not explain how awareness arises from structure. What it does do is to give us a reason to suppose that mental structure is at the heart of reality, and it provides some clues about the form and the temporal nature of that structure. (48)
A. Scott, Stairway to the Mind. (Springer 1995)
Scott proposes that consciousness is emergent. This is surely correct in an epistemological sense. Imagine, in other words, trying to predict the existence of humans and their apparent minds merely given a knowledge of chemistry and of the initial conditions in the universe shortly after the big bang. For any entity of even close to human scale, without any prior or empirical knowledge to the effect, for example, that animals could exist, this would surely be impossible. There is, however, also a much more debatable ontological claim: that human minds are not entirely a consequence of chemistry and the initial conditions of the universe. Occasionally, it seems to me that Scott blurs this distinction; making epistemological arguments but then hinting that he has justified the ontological claim. Adding to the confusion is the separate issue of the extent to which minds are social constructions.
It also seems to me that the idea that consciousness is emergent is of little help in understanding the actual nature of consciousness. Scott claims that action potential propagation along neurons, as described by the Hodgkin-Huxley equations, is an example of an emergent phenomenon. In this case, however, straightforward answers are available to questions about the existence and nature of nervous impulses and about the association between the impulses and the underlying nerve cells. If human minds exist, then they too seem to be associated with highly complex systems. But do minds exist at all? What does it mean to say that they exist? What sort of association between mind and brain is involved? And are we really sure that, in some appropriately simple form, minds do not also exist in association with relatively simple systems?
Scott's book does nevertheless provide an excellent introductory survey, going from the quantum theory of atoms to the social sciences, of the idea of a hierarchy of levels of scientific analysis. He validates the epistemological argument for emergence by describing the enormous gulfs between some of these levels. He also makes interesting comments on a variety of contemporary ideas about consciousness. (39)
D.B. Fogel, Blondie24. (Morgan Kaufman 2002)
Blondie24 is a computer program which plays checkers (draughts) to a fairly high standard. At its heart is a neural net which evaluates board positions. The program is developed by an evolutionary process, in which sets of such programs play rounds of games against each other. At the end of a round, the less successful programs are eliminated. Each of the more successful programs goes forward to the next round, together with a descendant constructed by random variations of its connection weights. Blondie24 is the best program after many rounds. Fogel's book is a clear and readable story of the investigation of a challenging problem. It discusses an interesting computational technique and provides an elementary model of “evolution” of “intelligence”. Particularly noteworthy is how little direct knowledge of the game Fogel and his collaborator put into the original set-up of the system.
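The evolutionary scheme described above can be sketched in a few lines. This is a toy illustration only: the stand-in fitness game, the population size, and the mutation scale are all invented for the sketch, and Blondie24's real evaluator is a substantial feed-forward network playing actual games of checkers.

```python
import random

def play(w1, w2):
    """Stand-in for a game between two evaluators: the program with the
    larger summed weights usually wins, with some noise.  A real system
    would play checkers, using each net to score board positions."""
    return sum(w1) + random.gauss(0, 1) > sum(w2) + random.gauss(0, 1)

def evolve(pop_size=8, n_weights=5, rounds=20, sigma=0.1):
    # Start from a population of random weight vectors ("programs").
    pop = [[random.gauss(0, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(rounds):
        # Round-robin tournament: count each program's wins.
        wins = [sum(play(p, q) for q in pop if q is not p) for p in pop]
        # The less successful half is eliminated...
        ranked = sorted(range(len(pop)), key=lambda i: wins[i], reverse=True)
        survivors = [pop[i] for i in ranked[:pop_size // 2]]
        # ...and each survivor goes forward with a mutated descendant.
        children = [[w + random.gauss(0, sigma) for w in p]
                    for p in survivors]
        pop = survivors + children
    return pop[0]  # top-ranked survivor of the final round
```

The noteworthy point carries over even to this caricature: nothing about "strategy" is programmed in; only the tournament structure and the mutation rule are specified.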
This work also provides an elementary illustration of the problems of functionalism. Although it is easy to imagine that the meaning of the game has been internalized into Blondie24's connection weights, these weights are used merely to compute a function which gives a value to possible board positions. Functionalism suggests that an equivalent meaning would be inherent in any physical process which could make the same computation. However, while unusual replications of human neural functioning might seem just the subject of far-fetched thought experiments, there are clearly all sorts of ways of computing a given finite mapping from board positions to values. My own view is that meaning cannot be inherent in every possible manifestation of a computation, let alone in any mere disposition to make a computation. Nor, if we want to avoid an implicit appeal to an assumed external observer, can meaning be seen as inherent in the context of a computation. Instead, I suggest that meaning only inheres in existence of a precise form and that an investigation of the problems of quantum mechanics may help us to discover that form. (28)
M.R. Bennett and P.M.S. Hacker, Philosophical Foundations of Neuroscience. (Blackwell 2003)
Bennett and Hacker attempt to impose a rigid, uniform, conventional linguistic analysis on all approaches to the investigation of the mind. Their analysis is pedantic and often tedious, but more importantly, it is stifling. It would be impossible to develop my interpretation of quantum theory within the framework put forward in this book. This framework would rule out an idealist position on a priori grounds; although whether, or in what sense, an external world exists is not a matter of logic or grammar but a matter of fact. Although I would not deny that consciousness is a social construct, I would also claim that society is a mental construct; or more precisely, that sophisticated minds of comparatively high probability will see themselves as finding their meaning through an apparent evolved social setting.
Bennett and Hacker do not allow that developing an understanding of difficult ideas could require one to play in several different frameworks. They are uncharitable in their narrowness so that sometimes it seems as if they are deliberately misinterpreting work that they criticize. Of course, to aim for consistency is essential, but a consistent framework is something which needs to be worked on and revised continuously, and which may even need to be completely replaced on occasion. It is not something which ordinary language gives us ready-made.
There is, nevertheless, one critical issue emphasized by this book. It is all too easy in a neuroscientific analysis to fall into the trap of introducing some sort of implicit homunculus. Avoiding this requires constant vigilance over language and thought. Bennett and Hacker's suggestion is that this justifies such radical claims as
“the term ‘representation’ is a weed in the neuroscientific garden, not a tool — and the sooner it is uprooted the better”;
“there is no such thing as storing memories in the brain”;
“there is no information in a hemisphere of the brain (not even in the thin sense in which one might say that there is information in a telephone cable while someone is talking)”.
It seems to me that this is counter-productive. The scientific investigation of neural functioning would hardly be possible without the concepts of “representation”, of “memory storage”, and of “information”. Instead of linguistic analysis, I would suggest that ontological analysis is actually what is required for homuncular exorcism. (21)
Jeffrey M. Schwartz has developed an apparently successful method of treating patients with obsessive-compulsive disorders. In this book, he tries to explain his method by invoking the hypothesis of Henry P. Stapp that mental choices somehow involve the quantum Zeno effect. In Donald 2003, I criticize Stapp's ideas. I therefore disagree with Schwartz's explanation. Nevertheless, his method, which requires patients to challenge their own thoughts, does seem entirely plausible. In my opinion, an adequate explanation is possible in terms of classical neurophysiology, using the modular and parallel nature of neural processing. Schwartz and Begley also review a wealth of interesting evidence about the possibility, throughout life, of quite large-scale changes in neural connectivity. This allows them to justify a strong form of the conclusion that, “it is the life we lead that creates the brain we have”. (16)
J.-P. Sartre, L'être et le néant, (Being and Nothingness). (Gallimard 1943)
Comes as close as anything I have ever read to describing the essential nature of consciousness.
I recommend Dennett's “Consciousness Explained” and McCrone's “Going Inside” on this page, because they provide excellent reviews of modern research on the physical structures underlying mental processes. In my own work, I have also attempted to investigate the physical structures underlying mental processes. I believe that my results are compatible with the research that Dennett and McCrone review. I differ from them, partly because I agree with Chalmers that materialism is false, but more importantly because I believe that quantum theory requires a radical revision in our understanding of the nature of physical structure. Sartre, on the other hand, attempts to analyse consciousness itself. I think that his insights are remarkable. At the heart of his analysis is the idea of the primitive act of consciousness as being a negation; “the for-itself [ . . . ] which is that which it is not and which is not that which it is”. This links the primitives of consciousness to primitives of modality (the philosophical theory of possibility).
Many-minds interpretations might also seem to express a link between consciousness and modality, and when I first read Everett's thesis as an undergraduate, I thought that there had to be a deep connection with Sartre's ideas. I am no longer so sure. It now seems to me that there is no good reason why the actual “physical” existence of real alternative possibilities, at the level, say, of primitive instantaneous differences, should make those differences become real for themselves. Consciousness has to carry its differences internally rather than externally. (1)
D. Lewis, On the Plurality of Worlds. (Blackwell 1986)
In a philosophical analysis of the nature of possibility, Lewis tries to defend the doctrine of modal realism — the idea that to be possible is to be. In my opinion, Lewis fails to provide a satisfactory characterization of a “world” and his attempts to avoid problems with the infinite and with induction are inadequate. I believe that Lewis's greatest failure lies in not distinguishing between physical possibilities and imagined possibilities. Physical possibilities can be expressed through alternative initial conditions, alternative physical laws, or other mathematical structures. Whether these alternatives need to exist is an interesting metaphysical puzzle, but one to which the apparent simplicity of the reality we have so far observed is surely relevant. The many minds of my interpretation are defined within just a single set of initial conditions and physical laws. Even whether all of these minds exist as experienced realities is a metaphysical puzzle related to the problem of solipsism. Imagined possibilities, on the other hand, are constructions of individual minds. These are ultimately based on that fundamental and mysterious aspect of consciousness which is the distinction between what we are as, in a wide sense, neural structures, and what we are conscious of, as the possibilities of the reality which we experience those structures as representing. (15)
C.S. Chihara, The Worlds of Possibility. (Oxford 1998)
Chihara points out the similarities between modal realism and mathematical platonism. He criticizes both for requiring the existence of unknowable entities. His discussions of the ideas of earlier authors are clear and informative. His own contribution is to develop a model-theoretic analysis of modal logic involving a mapping between descriptions provided by natural language sentences and sentences in a particular abstract structure. Chihara does not attempt to eliminate modal notions entirely, so it seems to me that we are left, as we should be, with modality as a primitive aspect of consciousness.
The theory proposed by Chihara may be sufficient to show that modal realism is avoidable, but otherwise it is an elaborate formalism which I suspect would be useful for at best a very limited range of purposes. Indeed, Chihara gives no examples which are not more easily understood directly. In so far as possibilities are a tool of imagination, it seems counter-productive to insist on limiting them by particular formal structures. Imagined possibilities are often vague and incomplete, and pedantry can be foolish. Moreover, sometimes what is of primary importance, even in mathematics, is to discover whether there are contradictions in what we initially imagine to be possible. In mathematics, if any consistent formal structure for such “possibilities” can be constructed then the task is already complete.
Chihara's previous book Constructibility and Mathematical Existence (Oxford 1990), about the philosophy of mathematics, is similar to his later book in many ways. Again there are instructive reviews of the writings of a sample of modern philosophers, and again Chihara provides his own linguistic analysis; in this case an interpretation of mathematics in terms of constructible sentences. (Confusingly, Chihara's “constructibility” is not a version of mathematical “constructivism”.) Once again, Chihara's machinery merely demonstrates a principle rather than providing anything that is likely to be of much use.
In Donald 2003, I make some comments on my own view of the relationships and similarities between apparent mathematical existence and the apparent existence of possibilities. (29)
N. Ferguson, Virtual History. (Picador 1997)
A fascinating collection of essays on historical turning points, demonstrating how study of what might have happened can be a valuable way of understanding what did happen. Ferguson introduces the collection with a long and sensible discussion of determinism and counterfactuals in history. Disappointingly, his afterword, in which he attempts to piece together all the counterfactuals proposed elsewhere in the book, is just the sort of frivolity that his introduction warns against.
I agree with Ferguson's suggestion that the most informative historical counterfactuals attempt to understand what sort of future might have been expected given contemporary opinions. Indeed, in terms of my understanding of physical theory, I would argue that one can reasonably consider the range and relative probabilities of likely futures given contemporary facts. In this framework, there are always many small variations possible in the short term, and small changes at any moment will usually ripple out into larger subsequent changes. In particular, whenever a time-scale of more than a single generation is contemplated, one should note the unpredictability in the genetic make-up and gender of any child. (6)
R. Cowley, What If? (Putnam 1999), More What If? (Putnam 2001)
Like Ferguson's “Virtual History”, these are collections of essays on historical turning points. The essays in these books are shorter, less detailed, and, at least in the first book, mainly concerned with military history. However they do cover a wider time span. They provide convincing arguments that there were many moments at which the path we now see leading to our world could easily have taken a quite different direction. (12)
S. Conway Morris, Life's Solution: Inevitable Humans in a Lonely Universe. (Cambridge 2003)
Sperm whales and elephants have similar social structures. Kiwis are birds, but they live in burrows, are nocturnal, have body feathers like fur, and have whisker-like feathers around their mouths. Conway Morris presents many examples of biological systems, at all levels from the molecular to the social, which have evolved separately and yet found similar solutions to life's problems. He argues that, although the details of history are clearly contingent, the overall structures of life are broadly inevitable, at least if life manages to begin on a suitable planet. (24)
J. Barbour, The End of Time. (Weidenfeld and Nicolson 1999)
Julian Barbour would like to persuade us that time does not exist. Like so much work in physics, his book divides into a classical part and a quantum part. The classical part is a fascinating survey of ideas about absolute space and time and how they can be avoided. Often, the more one becomes familiar with a physical formalism, the more one comes to imagine that that formalism is an accurate reflection of reality. Thus having learnt to solve Newton's equations in terms of a parameter called “time”, one comes to think of that parameter as mirroring some universal clock; or having learnt general relativity, one comes to think of the universe as being a real four-dimensional spacetime manifold. Barbour's work makes one reconsider. The quantum part of his book is less well-developed, with many technical problems, for example about the definition of probabilities, being left open. Nevertheless, it is not without insight, in particular into the nature of global quantum states. What I feel is missing from the book, however, is any exploration of how meaning is supposed to be experienced in a timeless world. In my opinion, it is ultimately minds which bring time into existence, and, without an experienced history, minds would be empty.
Jeremy Butterfield has written a long review of this book (gr-qc/0103055). Barbour's own web site is at http://www.platonia.com. (11)
P. Yourgrau, A World Without Time. (Basic 2005)
This rather superficial popular account of the life and work of Kurt Gödel focuses on his relationship with Einstein, and on Gödel's use of relativity theory to motivate an idealistic account of time. Special relativity certainly suggests that there can be no universal “now” and that time is observer-dependent. General relativity is more complicated. Some important cosmological models do have a natural “cosmic time” parameter — although this does not solve the problems which arise from the independent motions of local observers. Gödel however constructed a spacetime model with closed timelike loops. He argued that even the possibility of such a spacetime meant that time in relativity theory could not be anything more than a geometrical dimension.
Unfortunately, Yourgrau does not describe Gödel's beautiful but sophisticated model in any detail. In fact, there is a much simpler model containing closed timelike loops which would be sufficient for most of the arguments in Yourgrau's book. Using standard co-ordinates on Minkowski space, this is constructed just by identifying points with corresponding spatial co-ordinates on top and bottom of the slab of spacetime from t = 0 to t = 1. Because the correspondence is an isometry, the resulting spacetime is flat and obeys Einstein's field equations for empty space.
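The simple model just described can be written out explicitly (a sketch of the standard construction, not notation taken from Yourgrau's book):

```latex
% Minkowski metric on the slab  0 <= t <= 1 :
\[
  ds^2 \;=\; -\,dt^2 + dx^2 + dy^2 + dz^2 ,
\]
% with the two boundary hypersurfaces identified pointwise:
\[
  (0, x, y, z) \;\sim\; (1, x, y, z).
\]
% Because the identification is an isometry, the quotient spacetime is
% flat and satisfies the vacuum Einstein equations, while any worldline
% of constant $(x, y, z)$ becomes a closed timelike loop.
```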
I agree that an idealistic account of time is necessary. Indeed, according to my proposed interpretation of quantum theory, each of us has our own separate time defined by the stochastic path of which we are conscious. “Spacetime” is merely an ingredient of the laws of nature which define the stochastic process and consequently produce the likely form of the apparent world of our experience. (25)
L.M. Wapner, The Pea and the Sun. (Peters 2005)
The Banach-Tarski paradox is the theorem that a solid three-dimensional ball can be divided into a finite number of subsets which can be translated and rotated to form two solid balls, each identical in shape and volume to the original. Wapner provides an elementary and self-contained discussion and proof of this astonishing theorem. This should be interesting to anyone with a mathematical background who has heard of the result but never seen a thorough analysis of it. The book is only slightly marred by a rather confused sketch of a proof of the Schröder-Bernstein theorem and by some totally insubstantial and implausible speculations about physical applications. The primary conclusion I would draw from this enjoyable book is merely that when a mathematician says that something can be done, it need not mean that any corresponding physical task can be performed. Indeed, I believe that infinitary mathematics is only useful in circumstances where there is an appropriate sequence of finitary approximations. The Banach-Tarski paradox presents us with a situation where there is no such sequence. (26)
V. Tasic, Mathematics and the Roots of Postmodern Thought. (Oxford 2001)
Tasic provides a detailed historical analysis showing that versions of problems about the nature and relationship of “structure” and “meaning”, or of “texts” and “readers”, have long been discussed by philosophers of mathematics including Husserl, Hilbert, Poincaré, Brouwer, and Weyl. Indeed mathematics does provide a significant testing ground for the investigation of such problems, which are fundamental in the philosophy of mind and of knowledge. Any study of mathematics should at least teach that obscurity is all too easy. It is clarity that is impressive, and, unlike so much of postmodernism itself, Tasic's work is notable for the care he takes to explain genuinely difficult and important concepts. His book pulls together and illuminates ideas from a wide range of European philosophies, among which postmodernism may be the least significant. (14)
R. Hersh, What is Mathematics, Really? (Random House 1997)
Argues that mathematics is a social construct, but Hersh's honesty and competence mean that it is haunted by the fact that mathematics also expresses truth. Although much of what Hersh has to say about mathematics and its history is both interesting and accurate, I think that he is wrong to give precedence to the problem of understanding mathematics as a human practice over the philosophically much more interesting problem of understanding the relation between mathematics and truth. Indeed, this book would be an excellent starting point for an attack on the wider stupidities of social constructivism. This is partly because Hersh makes such a good job of presenting his case; showing how mathematical ideas vary over the centuries and demonstrating the fallibility of human truth seeking. But it is also because scepticism about mathematics is even less plausible than scepticism about empirical sciences. Although it may be that we can never be utterly sure of our mathematical knowledge, and although our knowledge does depend on us and our society, truth does not. Mathematical truth lies neither in some unobtainable Platonic realm, nor in formalism or axioms; but rather in the inevitability of logical consequences and in the necessary properties of possible patterns. (8)
T.W. Körner, Fourier Analysis. (Cambridge 1988)
A wonderful survey of some central topics in mathematical analysis with extensive discussions of their ramifications, applications, and history. I would particularly recommend this book for vacation reading for committed mathematics undergraduates or to anyone who, having once enjoyed university mathematics, has since moved on. (7)
G. Gigerenzer, Reckoning With Risk. (Penguin 2002)
All medical tests are subject to errors. Not infrequently, the disease tested for is sufficiently rare, and the false positive rate sufficiently high, that a screened patient with an initially positive test is more likely not to have the disease than to have it. This should be well understood by the physicians involved in the tests, but Gigerenzer shows that often it is not. He also explores situations in which misunderstandings of probability can lead to miscarriages of justice. Probabilities can be represented in different ways. Gigerenzer discusses ways which can and should be used to enhance clarity and avoid mistakes, and also ways which can be, and are, used, for example, to over-emphasize the efficacy of medical treatments.
This book, published as “Calculated Risks” in the USA, is not difficult or technical, but I think it is important. It should be read by doctors and their patients, lawyers and their clients, and by teachers of probability theory and their students. (31)
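The screening point above is an instance of Bayes' theorem, and it is easy to check with a small calculation. The numbers below are illustrative assumptions of mine, not figures from Gigerenzer's book: a disease with a prevalence of 1 in 1000, a test with 99% sensitivity, and a 5% false positive rate.

```python
# Illustrative Bayes' theorem calculation for the screening example.
# All three numbers are hypothetical, chosen only to show the effect.

prevalence = 0.001        # P(disease): 1 person in 1000 has the disease
sensitivity = 0.99        # P(positive | disease)
false_positive = 0.05     # P(positive | no disease)

# P(positive), by the law of total probability
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# P(disease | positive), by Bayes' theorem
ppv = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {ppv:.3f}")
# With these numbers the result is about 0.02: a patient with a
# positive test is still very unlikely to have the disease.
```

Gigerenzer's own recommendation is to recast such figures as natural frequencies, which make the conclusion obvious at a glance: of 1000 people screened, about 1 has the disease, while about 50 healthy people nevertheless test positive.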
G. Vitiello, My Double Unveiled. (John Benjamins 2001)
A non-technical account of an attempt to apply ideas from quantum many-body theory to the understanding of the workings of the brain. In my opinion, Vitiello's proposals are completely implausible. Indeed the way the physics is supposed to apply is left so vague and speculative that the impression is given of a scientist determined to impose advanced concepts from his own field on neurophysiology whether or not they are necessary. Nominally the problems addressed are to do with the nature and formation of memories, but far too little attention is paid to the standard biological theories about how these problems can be solved. A briefer account is given in the paper “Dissipation and memory domains in the quantum model of brain” by E. Alfinito and G. Vitiello, quant-ph/0006065. (10)
* * * * * * * * * * * * * *
* * * * * * * * * * * * * *
Notes on some relevant, or significant, or recommended papers available from the quantum physics e-print archive.
home page: http://people.bss.phy.cam.ac.uk/~mjd1014