Chris King
CC BY-NC-ND 4.0 doi:10.13140/RG.2.2.32891.23846
Part 2 Conscious Cosmos ◊ Update 5-8-2021 ◊ 6-2024
Contents Summary - Contents in Full
Symbiotic Existential Cosmology:
Scientific Overview – Discovery and Philosophy
Biocrisis, Resplendence and Planetary Reflowering
Psychedelics in the Brain and Mind, A Moksha Epiphany, The Devil's Keyboard
Fractal, Panpsychic and Symbiotic Cosmologies, Cosmological Symbiosis
Quantum Reality and the Conscious Brain
The Cosmological Problem of Consciousness in the Quantum Universe
The Physical Viewpoint, The Neuroscience Perspective
The Evolutionary Landscape of Symbiotic Existential Cosmology
Evolutionary Origins of Conscious Experience
Science, Religion and Gene Culture Co-evolution
Animistic, Eastern and Western Traditions and Entheogenic Use
Natty Dread and Planetary Redemption
Yeshua’s Tragic Mission, Revelation and Cosmic Annihilation
Ecocrisis, Sexual Reunion and the Entheogenic Traditions
Communique to the World To save the diversity of life from mass extinction
The Vision Quest to Discover Symbiotic Existential Cosmology
The Great I AM, Cosmic Consciousness, Conscious Life and the Core Model of Physics
The Evolution of Symbiotic Existential Cosmology
Appendix: Primal Foundations of Subjectivity, Varieties of Panpsychic Philosophy
Consciousness is eternal, life is immortal.
Incarnate existence is Paradise on the Cosmic equator
in space-time – the living consummation of all worlds.
But mortally coiled! As transient as the winds of fate!
Symbiotic Existential Cosmology – Contents in Full
The Existential Condition and the Physical Universe
Discovering Life, the Universe and Everything
The Central Enigma: What IS the Conscious Mind?, Glossary
Biocrisis and Resplendence: Planetary Reflowering
The Full Scope: Climate Crisis, Mass Extinction, Population and Nuclear Holocaust
Psychedelics in the Brain and Mind
Therapy and Quantum Change: The Results from Scientific Studies
Biocosmology, Panpsychism and Symbiotic Cosmology
Darwinian Cosmological Panpsychism
Symbiosis and its Cosmological Significance
Quantum Reality and the Conscious Brain
The Cosmological Problem of Consciousness, The Central Thesis, The Primal Axiom
The Physical Viewpoint, Quantum Transactions
The Neuroscience Perspective, Field Theories of Consciousness
Conscious Mind, Resonant Brain
Cartesian Theatres and Virtual Machines
Global Neuronal Workspace, Epiphenomenalism & Free Will
Consciousness and Surviving in the Wild
Consciousness as Integrated Information
Is Consciousness just Free Energy on Markov Landscapes?
Can Teleological Thermodynamics Solve the Hard Problem?, Quasi-particle Materialism
The Crack between Subjective Consciousness and Objective Brain Function
A Cosmological Comparison with Chalmers’ Conscious Mind
Minimalist Physicalism and Scale Free Consciousness
Defence of the real world from the Case Against Reality
Consciousness and the Quantum: Putting it all Back Together
How the Mind and Brain Influence One Another
The Diverse States of Subjective Consciousness
Consciousness as a Quantum Climax
TOEs, Space-time, Timelessness and Conscious Agency
Psychedelics and the Fermi Paradox
The Evolutionary Landscape of Symbiotic Existential Cosmology
Evolutionary Origins of Neuronal Excitability, Neurotransmitters, Brains and Conscious Experience
The Extended Evolutionary Synthesis, Deep and dreaming sleep
The Evolving Human Genotype: Developmental Evolution and Viral Symbiosis
The Evolving Human Phenotype: Sexual and Brain Evolution, the Heritage of Sexual Love and Patriarchal Dominion
Niche Construction, Habitat Destruction and the Anthropocene
Democratic Capitalism, Commerce and Company Law
Science, Religion and Gene-Culture Coevolution, The Spiritual Brain, Religion v Nature, Creationism
The Noosphere, Symbiosis and the Omega Point
Animism, Religion, Sacrament and Cosmology
Is Polyphasic Consciousness Necessary for Global Survival?
The Grim Ecological Reckoning of History
Anthropological Assumptions and Coexistential Realities
Shipibo: Split Creations and World Trees
Meso-American Animism and the Huichol
Pygmy Cultures and Animistic Forest Symbiosis
San Bushmen as Founding Animists
The Key to Our Future Buried in the Past
Entasis and Ecstasis: Complementarity between Shamanistic and Meditative Approaches to Illumination
Eastern Spiritual Cosmologies and Psychotropic Use
Psychedelic Agents in Indigenous American Cultures
Natty Dread and Planetary Redemption
The Women of Galilee and the Daughters of Jerusalem
Descent into Hades and Harrowing Hell
Balaam the Lame: Talmudic Entries
Soma and Sangre: No Redemption without Blood
The False Dawn of the Prophesied Kingdom
Transcending the Bacchae: Revelation and Cosmic Annihilation
Ecocrisis, Sexual Reunion and the Tree of Life
Biocrisis and the Patriarchal Imperative
The Origins and Redemption of Religion in the Weltanschauung
A Millennial World Vigil for the Tree of Life
Redemption of Soma and Sangre in the Sap and the Dew
Maria Sabina’s Holy Table and Gordon Wasson’s Pentecost
Santo Daime and the Union Vegetale
The Society of Friends and Non-sacramental Mystical Experience
The Vision Quest to Discover Symbiotic Existential Cosmology
Scepticism, Belief and Consciousness
Psychedelics – The Edge of Chaos Climax of Consciousness
Discovering Cosmological Symbiosis
The Great I AM, Cosmic Consciousness, Conscious Life and the Core Model of Physics
Evolution of Symbiotic Existential Cosmology
Communique on Preserving the Diversity of Life on Earth for our Survival as a Species
Affirmations: How to Reflower the Diversity of Life for our own Survival
Epilogue
Symbiotic Existential Cosmology is Pandora's Pithos Reopened and Shekhinah's Sparks Returning
The Weltanschauung of Immortality
Paradoxical Asymmetric Complementarity, The Natural Face of Samadhi vs Male Spiritual Purity, Clarifying Cosmic Karma
Empiricism, the Scientific Method, Spirituality and the Subjective Pursuit of Knowledge
Appendix Primal Foundations of Subjectivity, Varieties of Panpsychic Philosophy
The Conscious Brain and the Cosmological Universe [24]
Solving the Central Enigma of Existential Cosmology
Chris King – 21-6-2021
In memory of Maria Sabina and Gordon Wasson
Contents
1 The Cosmological Problem of Consciousness
2 Psychedelic Agents in Indigenous American Cultures
3 Psychedelics in the Brain and Mind
4 Therapy and Quantum Change: Scientific Results
5 Evolutionary Origins of Excitability, Neurotransmitters and Conscious Experience
6 The Evolutionary Landscape of Symbiotic Existential Cosmology
7 Fractal Biocosmology, Darwinian Cosmological Panpsychism and Symbiotic Cosmology
8 Animistic, Eastern and Western Traditions and Entheogenic Use
9 Natty Dread and Planetary Redemption
10 Biocrisis and Resplendence: Planetary Reflowering, A Moksha Epiphany
Abstract:
This article resolves the central enigma of existential cosmology – the nature and role of subjective experience – thus providing a direct solution to the "hard problem of consciousness". This solves, in a single coherent cosmological description, the core existential questions surrounding the role of the biota in the universe, the underlying process supporting subjective consciousness and the meaning and purpose of conscious existence. This process has pivotal importance for preventing humanity from causing a mass extinction of biodiversity, and possibly our own demise, enabling us instead to fulfil our responsibilities as guardians of the unfolding of sentient consciousness on evolutionary and cosmological time scales.
The article overviews cultural traditions and current research into psychedelics [25] and formulates a panpsychic cosmology, in which the mind at large complements the physical universe, resolving the hard problem of consciousness, extended to subjective conscious volition over the universe, and the central enigmas of existential cosmology and eschatology, in a symbiotic cosmological model. The symbiotic cosmology is driven by the fractal non-linearities of the symmetry-broken quantum forces of nature, subsequently turned into a massively parallel quantum computer by biological evolution (Darwin 1859, 1889). Like Darwin's insights, this triple cosmological description is qualitative rather than quantitative, but nevertheless accurate. Proceeding from fractal biocosmology and panpsychic cosmology, through edge-of-chaos dynamical instability, the excitable cell and then the eucaryote symbiosis create a two-stage process, in which the biota capture a coherent encapsulated form of panpsychism, which is selected for because it aids survival. This becomes sentient in eucaryotes due to excitable membrane sensitivity to quantum modes and eucaryote adaptive complexity. Founding single-celled eucaryotes already possessed the genetic ingredients of excitable neurodynamics, including G-protein linked receptors and a diverse array of neurotransmitters, as social signalling molecules ensuring survival of the collective organism. The brain conserves these survival modes, so that it becomes an intimately-coupled society of neurons communicating synaptically via the same neurotransmitters, modulating key survival dynamics of the multicellular organism, and forming the most complex, coherent dynamical structures in the physical universe.
This results in consciousness as we know it, shaped by evolution for the genetic survival of the organism. In our brains, this becomes the existential dilemma of ego in a tribally-evolved human society, evoked in core resting state networks, such as the default mode network, also described in the research as "secondary consciousness", in turn precipitating the biodiversity and climate crises. However, because the key neurotransmitters are simple, modified amino acids, the biosphere will inevitably produce molecules modifying the conscious dynamics, exemplified in the biospheric entheogens, in such a way as to decouple the ego and enable existential return to the "primary consciousness" of the mind at large, placing the entheogens as conscious equivalents of the LHC in physics. Thus a biological symbiosis between Homo sapiens and the entheogenic species enables a cosmological symbiosis between the physical universe and the mind at large, resolving the climate and biodiversity crises long term in both a biological and a psychic symbiosis, ensuring planetary survival.
The Decline of Ground-Breaking Disruptive Scientific Discoveries
The research of Park, Leahey & Funk (2022) confirms that papers and patents are becoming less disruptive over time. I want to draw the attention of readers to the fallacy that the past record of science and technology is a basis to believe pure physicalist science will show how the brain “makes” consciousness in any sense greater than the neural correlate of conscious experience. This needs to be taken seriously and is damning evidence against the assumption that the past progress of mechanistic science will solve the hard problem of conscious volition.
Fig 70b: Decline of disruptive science and technology
The figure shows just how devastating the decline has become and indicates the extreme unlikelihood of mechanistic science solving the biggest problem of all. This belief is a product of severe ignorance of the diffuse complexity of the excitation from the prefrontals through to the motor cortex modified by the basal ganglia and the cerebellum, involving both diffuse network activity and deep cyclic connections, which appear to be both uncomputable and empirically undecidable in the quantum universe.
Fig 70c: The research citation profile of Symbiotic Existential Cosmology as at 15th July 2024
The figure shows the growth of research and the distribution of citation dates. Three years since the mushroom trip that precipitated this work, it has accrued 860 pages, with 1692 source references, 108 added in 2022, 101 in 2023 and 30 in 2024. Of these, 1244 are from 2000 on, 960 from 2010 on and 960 from 2020 on, illustrating the real-time, up-to-date nature of the work, which falls roughly into four categories: (1) cosmological physics, (2) consciousness and neuroscience, (3) evolutionary biology, (4) metaphysics, animism and religious studies. Fittingly, the oldest citation is Charles Darwin (1859), "On the Origin of Species".
The Central Thesis of Symbiotic Existential Cosmology © Chris King 25-10-2023 as part of SEC 1.1.416
The central thesis of Symbiotic Existential Cosmology is that subjective consciousness interacts with the physical brain, as a sub-quantum anticipator of environmental uncertainty, using space-time-entangled patterns of brain wave states, which are in a quantum phase transition between superimposed wave function evolution and measurement-derived wave function collapse, biologically coupled to a complementary self-critically tuned neurodynamic phase transition, in which edge-of-chaos instabilities in conscious brain dynamics, accompanied by wave phase-modulation, implicitly involve both past and future special-relativistic states in their entangled wave functions, thus facilitating efficacy of subjective conscious volition over the physical universe, by anticipating existential threats to organismic survival, a process which is thus preserved by evolution, in a universe that is also in a phase transition between full entanglement and wave collapse – a process which is itself in a state of biospheric evolutionary climax at the edge of chaos.
This resolves two complementary questions essential to a complete existential cosmology:
(a) The neurobiological question of the hard problem of the subjectivity of consciousness, extended to volition, in which subjective conscious physical volition is manifested through space-time-entangled patterns of brain wave states, in a quantum phase transition, coupled to self-organised phase transitions at the edge of chaos, complemented by tuned modulation of wave phase.
(b) The cosmological question of the origin and foundation of conscious existence, manifesting in evolutionary climax, in a quantum universe which is likewise in an ongoing, partially collapsed dynamic phase transition of quantumness, between a fully quantum-entangled cosmic wave function and a measurement-reduced universe, substantially quantum-collapsed, due to a plethora of destructive measurement processes that tend to evoke "classical" outcomes, through the projection operator, as in the cat paradox experiment, in which planetary biospheres and conscious existence co-arise in evolutionary climax.
Empirical experience confirms that, while the existence of the universe is necessary to our biological survival, subjective conscious experience is primary. We access information about the physical world entirely and exclusively through our subjective conscious experiences of it, so subjective consciousness is clearly primary to our knowledge of the physical universe, whose physical quanta, bosons and fermions, we do not experience directly, but only inferentially, through our subjective, biologically derived sensory perceptions, along with our inner, less-confined subjective experiences in dreams and visionary states. Nevertheless, we understand that, as living organisms, the physical universe is necessary to our biological survival and an inferred foundation of our common existence.
Recall that, in a classical deterministic Laplacian universe, every future state is fully determined by the dynamical laws of evolution and the initial conditions, while in the quantum universe, our knowledge of outcomes is restricted to evolution of the Schrödinger equation, punctuated by collapse of the wave function, resulting in discrete events having a probability determined by the wave function amplitude. This leads to so-called non-local hidden-variable theories, postulated below the quantum level, from pilot waves, through quantum transactions, to super-determinism, which seek to explain and determine the specific individual outcomes we see, for example in the Schrödinger cat paradox experiment, where a cat, subjected to a lethal dose of cyanide triggered by a radioactive decay, is found to be alive or dead but not both, e.g. with 50-50 probability. The quantum universe is thus in a state of probability-punctuated causality and is neither fully defined, nor causally closed. We attempt to understand this using two classical ideas, (1) the deterministic nature of billiard ball collisions and (2) the probabilities of a classical poker game, applying the first to the Schrödinger evolution and decoherence and the second to the probability interpretation of wave function collapse, but neither is adequate to deal with the non-locality implied by quantum entanglement. God does not play dice!
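To make the contrast concrete, the following minimal Python sketch (purely illustrative, not part of the cosmological argument) applies the Born rule to the cat example above: the superposed state assigns equal amplitudes to the live and dead outcomes, yet every individual run yields exactly one definite result, with 50-50 statistics only emerging over many repetitions.

```python
# Illustrative sketch: the Born rule applied to the Schrodinger cat example.
# The superposed state gives amplitude 1/sqrt(2) to |alive> and |dead>; each
# run of the experiment yields exactly one definite outcome, never both.
import numpy as np

amplitudes = np.array([1.0, 1.0]) / np.sqrt(2)   # superposition of |alive>, |dead>
born_probabilities = np.abs(amplitudes) ** 2     # |psi|^2 gives outcome probabilities

rng = np.random.default_rng(0)
outcomes = rng.choice(["alive", "dead"], size=10_000, p=born_probabilities)

print({o: int((outcomes == o).sum()) for o in ("alive", "dead")})
# e.g. {'alive': 5023, 'dead': 4977} -- single outcomes per trial, 50-50 overall
```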
The central thesis of Symbiotic Existential Cosmology asserts that subjective consciousness is a key process resolving cosmological indeterminacy, by forming a neurodynamic realisation of these hidden variable interactions governing non-local entanglement, facilitating biological survival by anticipating quantum-derived environmental uncertainty in a manner complementing computational boundary conditions. Space-time anticipation constitutes whole conscious experiences containing a retro-causal echo of the future, both in intuitive conscious anticipation and in rarer more graphic experiences of prescience, precognition and deja vu. This is not just a deductive process. It cannot be analysed, or predicted by causal computation, for the same reason entanglement prevents transmission of classical information under local Einsteinian causality, and the uncertainties are irreducible, because they involve both computationally intractable environmental uncertainties and indeterminacies due to other live conscious volitional agents.
The central thesis states that the conscious brain uses its continuous wave tissue potentials to evoke a unique form of quantum space-time anticipation through global-scale entangled excitations moving in and out of coherence, in a phase transition between full entanglement and wave function collapse that is reflected biodynamically in (a) the phase transition between edge of chaos dynamics at tipping points between chaos and order, and (b) phase coupling between continuous electromagnetic wave potentials in the tissues and individual neuronal action potentials in the neural net connectivity. It is thus not just a quantum super-computer, but a sub-quantum conscious anticipator.
This results in subjective experience manifesting as a globally omnipresent "stream of consciousness", everything, everywhere all at once (Gillett 2023), forming an anticipatory "quantum of the present", utilising memory and the environmental context, but also entangled with the future, in a manner similar to an integral transform, also supported by extensive predictive coding in the brain, acting as a contextual environmental filter, to enable intuitive conscious volitional choices, favouring the survival of the whole organism, by anticipating acute existential threats, crises and opportunities and in turn applying subjective conscious volition in our decision-making and behaviour, to facilitate our biological survival over evolutionary time scales. By contrast, rational processes are tied more to established classical causal factors compatible with cerebral cognition. The subjective interaction can thus move seamlessly from uncertainty-dominant anticipative intuition, to a context-dominant predictively cognitive response. Because brain dynamics do not involve independent, identically-distributed (IID) measurements, as their contexts are continually changing, there is no evidence that they converge to the classical. Nor can it be established that the quantum universe, or biological brain, is causally closed to subjectivity.
This overall picture coincides with our empirical subjective experience of conscious intuitive decision-making, resulting in veridical perception of our volitional action in intentional behaviour. For this purpose, SEC asserts that there are two scientific empirical avenues, (a) verified objective observation and (b) affirmed subjective experience, consistent with the etymology and current meaning of "empirical" as "based on, concerned with, or verifiable by observation, or experience, rather than theory or pure logic" (Oxford Languages). Given the fact that the overwhelming opinion of sane people is that we have conscious empirical intentional ability to affect our fates and this is the basis of criminal and corporate law on intent, this has a very high level of empirical validation, which even scientific physicalists, in all honesty, need to concede is the case. Indeed SEC is the only type of cosmology that respects our empirical experience without invoking some form of illusionism that our subjective experience of active agency is a fallacious deception.
Symbiotic Existential Cosmology asserts that subjective conscious intuition in decision-making and volition can influence the ongoing brain state, in situations where there is quantum-derived environmental uncertainty, as is critical for our survival, and that the brain state accompanying subjective conscious volition over the physical universe, occurs only when the physical brain dynamic is in a self-organised critical state of physical phase transition at the edge of chaos, between a more chaotic and a more ordered regime, which is itself at a quantum uncertain unstable tipping point. This avoids causal conflict between subjective volition and a determined ongoing physical brain state. It is achieved by minimising quantum measurement in favour of a higher degree of anticipatory phase entanglement than the surrounding universe, consistent with minimal collapse to the classical. The efficacy of conscious anticipation in survival in the wild is essential to explain why subjective conscious volition has an evolutionary advantage, over non-conscious computational processing and has thus been conserved by evolution, giving conscious biological brains a critical advantage over AI, using phase transitional entanglement that is not accessible to objective physical interrogation.
While Symbiotic Existential Cosmology is agnostic to specific quantum interpretations, it finds the approach of transactional super-causality helpful as an explanation of quantum anticipation, through a trans-causal future-past handshaking phase transition, from a plasma-like interaction of contingent offer and confirmation waves, to a solid-like set of real interactions between emitters and absorbers, which, like subjective experience, stands outside physical space-time.
Furthermore, this is achieved because the brain also uses phase modulation of its waveforms, evidenced in electro- and magneto-encephalograms, through hand-shaking phase coupling of individual action potentials with the continuous tissue potentials. Because brain waves in the EEG share the features of macroscopic radio waves, their fields share the features of entanglement that we witness in coherent laser light as well. The modulated phase coherence of global brain dynamics, in dynamic feedback with more decentralised, partially decoherent local processing, invokes the "spotlight" of coherent conscious attention, with a sub-conscious periphery, common to theories such as global neuronal workspace theory. Symbiotic Existential Cosmology asserts that these phase modulations are not just analogous to the foundation concept of quantum uncertainty through wave beats, but constitute a unique form of quantum measurement by the brain of its own internal dynamical states, distinct from using the Born probability interpretation by destructive measurement, as in standard experimental physics, such as the cat paradox experiment, while keeping destructive measurement minimal to facilitate quantum anticipation.
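As a purely classical illustration of the "uncertainty through wave beats" analogy invoked here, the following sketch (with arbitrary, hypothetical gamma-band frequencies) superposes two waves of nearby frequency: the resulting beat envelope lasts roughly the reciprocal of their frequency separation, so sharpening the frequency blurs the timing, mirroring the time-frequency trade-off that underlies quantum uncertainty.

```python
# Classical beat illustration: two nearby frequencies produce a beat envelope
# whose duration scales as 1/delta_f, the wave analogue of energy-time uncertainty.
import numpy as np

f1, f2 = 40.0, 42.0                       # Hz: hypothetical gamma-band frequencies
delta_f = abs(f2 - f1)
t = np.linspace(0.0, 2.0, 20_000)         # seconds
signal = np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t)
envelope = 2*np.abs(np.cos(np.pi*delta_f*t))   # beat envelope of the superposition

assert np.all(np.abs(signal) <= envelope + 1e-9)  # signal stays inside the envelope
print(f"frequency separation {delta_f:.1f} Hz -> beat period {1.0/delta_f:.2f} s")
# Halving delta_f doubles the beat period: sharper frequency localisation costs
# temporal localisation.
```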
In turn, this enables the subjectively conscious brain to collapse the probability multiverse of future quantum uncertainties, to unfold the line of emerging world history that we invoke together, for better or worse, through our choices in the uncertain "karmic" quantum environment, as live intentional behavioural autonomous agents, thus inheriting personal conscious responsibility for our actions, particularly in relation to the biosphere and its and our survival as a species.
The Cosmological Axiom of Primal Subjectivity
We put this into precise formulation. Taking into account that the existence of primary subjectivity is an undecidable proposition from the physical point of view, in the sense of Gödel, but is empirically certain from the experiential point of view, we come to the following:
(1) We start on home ground, i.e. with human conscious volition, where we can clearly confirm both aspects of reality – subjectively experiential and objectively physical.
(2) We then affirm as empirical experience, that we have efficacy of subjective conscious volition over the physical universe, manifest in every intentional act we make, as is necessary for our behavioural survival – as evidenced by my consciously typing this passage, and that this is in manifest conflict with pure physicalism asserting the contrary.
(3) We now apply Occam's razor, not just on parsimony, but on the categorical inability of pure materialism, using only physical processes, which can only be empirically observed, to deal with subjective consciousness, which can only be empirically experienced and is private to observation. This leads to intractability of the hard problem of consciousness. Extended to the physicalist blanket denial of conscious physical volition, which we perceive veridically in our conscious perception of our enacted intent, this becomes the extended hard problem. Classical neuroscience accepts consciousness only as an epiphenomenon – an internal model of reality constructed by the brain – but denies volition, as a delusion perpetrated by evolution to evoke the spectre of intentional behaviour.
(4) We then scrutinise the physical aspect and realise we cannot empirically confirm classical causal closure of the universe in brain dynamics because: (a) the dynamics is fractal to the quantum-molecular level, so non-IID processes don't necessarily converge to the classical, and (b) experimental verification is impossible because we would need essentially to trace the neurodynamics of every neuron, or a very good statistical sample, when the relevant dynamics is at the unstable edge of chaos and so is quantum sensitive. Neither can we prove consciousness causes brain states leading to volition, because consciousness can only be experienced and not observed, so it's a genuine undecidable proposition physically.
(5) This sets up the status of "Does subjective conscious volition have efficacy over the universe?" to be an empirically undecidable cosmological proposition from the physical perspective, in the sense of Gödel. From the experiential perspective, however, it is an empirical certainty.
(6) We therefore add a single minimal cosmological axiom, to state the affirmative proposition – “Subjective conscious volition has efficacy over the physical universe”. We also need to bear in mind that a physicalist could make the counter proposition that it doesn’t, and both could in principle be explored, like the continuum hypothesis in mathematics – that there is no infinite cardinality between those of the countable rationals and uncountable reals [1].
(7) Rescaling the primal axiom: We now need to scale this axiom all the way down to the quantum level, because it is a cosmological axiom that means that the universe has some form of primal subjective volition, so we need to investigate its possible forms. The only way we can do this, as we do with one another about human consciousness, where we can’t directly experience one another’s consciousness, is to make deductions from the physical effects of volition – in humans, organisms, amoebo-flagellates, prokaryotes, biogenesis, butterfly effect systems and quanta.
(8) We immediately find that quantum reality has two complementary processes:
(a) Quantum consciousness: The wild wave function which contains both past and future implicit “information” under special relativity, corresponding to the quantum-physical experiential interface of primal subjectivity.
(b) Quantum volition: Collapse of the wave function, which violates causality and in which the normalised wave probability space leaves the quantum total free will over where it is measured – the quantum-physical volitional interface of primal subjectivity.
(9) Primal cosmological subjectivity: This means that subjectivity is a primal complement to the quantum universe and that the cosmos as a whole is an interactive process between physical objectivity and experiential subjectivity, arising at the cosmic origin.
(10) Two potentially valid cosmologies from the physical perspective, but only one from the experiential perspective:
As with any undecidable proposition, from the objective perspective, pure physicalists can, on the one hand, continue to contend that the quantum has no consciousness or free will and that uncertainty is "random", cite the lack of any obvious bias violating the Born interpretation, and develop that approach, thus claiming volition is a self-fulfilling delusion of our internal model of reality. But Symbiotic Existential Cosmology can validly argue that uncertainty could be due to a complex quasi-random process, e.g. a special relativistic transactional collapse process, over which, by default, the quantum, by virtue of its wave function context, does have "conscious" free will, allowing us and the diversity of life to also be subjectively conscious and affect the world around us, unlike the pure materialist model.
(11) Symbiotic Existential Cosmology thus shows that CA, in the form of subjective conscious volition, is undecidable physically, although it is certain experientially and necessary for subjective conscious survival.
An Accolade to Cathy Reason
The first part of the answer to the cosmological axiom CA – that subjective consciousness is a cosmological complement to the physical universe – was due to Cathy Reason. In 2016 she proved that it is impossible to establish certainty of consciousness through a physical process. So CA could be false, or it could be unprovable. In 2019, and 2021, with Kushal Shah, she proved the no-supervenience theorem – that the operation of self-certainty of consciousness is inconsistent with the properties possible in any meaningful definition of a physical system – effectively showing CA is certain experientially. A formal proof is Reason (2023).
1 The Cosmological Problem of Consciousness
The human existential condition consists of a complementary paradox. To survive in the world at large, we have to accept the external reality of the physical universe, but we gain our entire knowledge of the very existence of the physical universe through our conscious experiences, which are entirely subjective and are complemented by other experiences in dreams and visions which also sometimes have the genuine reality value we describe as veridical. The universe is thus in a fundamental sense a description of our consensual subjective experiences of it, experienced from birth to death, entirely and only through the relentless unfolding spectre of subjective consciousness.
Fig 71: (a) Cosmic evolution of the universe (WMAP, King 2020b). Life has existed on Earth for a third of the universe's 13.7 billion year lifetime. (b) Symmetry-breaking of a unified superforce into the four wave-particle forces of nature – colour, weak, electromagnetic and gravity – with the first three forming the standard model and, with the weak-field limit of general relativity (Wilczek 2015), comprising the core model. (c) Quantum uncertainty defined through wave coherence beats. (d) Schrödinger cat experiment. Schrödinger famously said "The total number of minds in the universe is one", preconceiving Huxley's notion of the mind at large used as this monograph's basis for cosmological symbiosis. Quantum theory says the cat is in both live and dead states with probability 1/2, but the observer finds the cat alive or dead, suggesting the conscious observer collapses the superimposed wave function. (e) Feynman diagrams in special relativistic quantum field theories involve both retarded (usual) and advanced (time-backwards) solutions, because the Lorentz energy transformations ensuring the atom bomb works have positive and negative energy solutions. Thus electron scattering (iv) is the same as positron creation-annihilation [26]. Each successive-order Feynman diagram has a contribution reduced by a factor of the fine structure constant. (f) Double-slit interference shows a photon emitted as a particle passes through both slits as a wave before being absorbed on the photographic plate as a particle. The trajectory for an individual particle is quantum uncertain, but the statistical distribution confirms the particles have passed through the slits as waves. (g) Cosmology of conscious mental states (King 2021a). Kitten's Cradle, a love song.
The religious anthropocentric view of the universe was overthrown when Copernicus, in 1543, deduced that the Earth, instead of being at the centre of the cosmos, rotated, along with the other solar-system planets, in orbit around the Sun. Galileo defended heliocentrism based on his astronomical observations of 1609. By 1615, Galileo's writings on heliocentrism had been submitted to the Roman Inquisition, which concluded that heliocentrism was foolish, absurd, and heretical since it contradicted Holy Scripture. He was tried by the Inquisition, found "vehemently suspect of heresy", and forced to recant. He spent the rest of his life under house arrest.
The Copernican revolution in turn resulted in the rise of classical materialism, defined by the laws of motion of Isaac Newton (1642 – 1726), formulated after watching the apple fall under gravity, despite Newton himself being a devout Arian Christian who used scripture to predict the apocalypse. The classically causal Newtonian world view, and Pierre Simon Laplace's (1749 – 1827) view of mathematical determinism – "that if the current state of the world were known with precision, it could be computed for any time in the future or the past" – came to define the universe as a classical mechanism in the ensuing waves of scientific discovery in classical physics, chemistry and molecular biology, climaxing with the decoding of the human genome, validating the much more ancient atomic theory of Democritus (c. 460 – c. 370 BC). The classically causal universe of Newton and Laplace has since been fundamentally compromised by the discovery of quantum uncertainty and its "spooky" features of quantum entanglement.
In counterposition to materialism, George Berkeley (1685 – 1753) is famous for his philosophical position of "immaterialism", which denies the existence of material substance and instead contends that familiar objects like tables and chairs are ideas perceived by our minds and, as a result, cannot exist without being perceived. Berkeley argued against Isaac Newton's doctrine of absolute space, time and motion in a precursor to the views of Mach and Einstein. Interest in Berkeley's work increased after 1945 because he had tackled many of the issues of paramount interest to 20th century philosophy, such as perception and language.
The core reason for the incredible technological success of science is not the assumption of macroscopic causality, but the fact that the quantum particles come in two kinds. The integral spin particles, called bosons, such as photons, can all cohere together, as in a laser, and thus make forces and radiation, but the half-integer spin particles, called fermions, such as protons and electrons, which can only congregate in pairs of complementary spin, are incompressible and thus form matter, inducing a universal fractal complexity via the non-linearity of the electromagnetic and other quantum forces. The fermionic quantum structures are small, discrete and divisible, so the material world can be analysed in great detail. Given the quantum universe and the fact that brain processes are highly uncertain, given changing contexts and unstable tipping points at the edge of chaos, objective science has no evidential basis to claim the brain is causally closed and thus falsely conclude that we have no agency to apply our subjective consciousness to affect the physical world around us. By agency here I mean full subjective conscious volition, not just objective causal functionality (Brizio & Tirassa 2016, Moreno & Mossio 2015), or even autopoiesis (Maturana & Varela 1972).
The nature of conscious experience remains the most challenging enigma in the scientific description of reality, to the extent that we not only do not have a credible theory of how this comes about but we don’t even have an idea of what shape or form such a theory might take. While physical cosmology is an objective quest, leading to theories of grand unification, in which symmetry-breaking of a common super-force led to the four forces of nature in a big-bang origin of the universe, accompanied by an inflationary beginning, the nature of conscious experience is entirely subjective, so the foundations of objective replication do not apply. Yet for every person alive today, subjective conscious experiences constitute the totality of all our experience of reality, and physical reality of the world around us is established through subjective consciousness, as a consensual experience of conscious participants.
Erwin Schrödinger: Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental.
Arthur Eddington: The stuff of the world is mind stuff.
J. B. S. Haldane: We do not find obvious evidence of life or mind in so-called inert matter...; but if the scientific point of view is correct, we shall ultimately find them, at least in rudimentary form, all through the universe.
Julian Huxley: Mind or something of the nature of mind must exist throughout the entire universe. This is, I believe, the truth.
Freeman Dyson: Mind is already inherent in every electron, and the processes of human consciousness differ only in degree and not in kind from the processes of choice between quantum states which we call “chance” when they are made by electrons.
David Bohm: It is implied that, in some sense, a rudimentary consciousness is present even at the level of particle physics.
Max Planck: I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness.
Werner Heisenberg: Is it utterly absurd to seek behind the ordering structures of this world a “consciousness” whose “intentions” were these very structures?
Andrei Linde: Will it not turn out, with the further development of science, that the study of the universe and the study of consciousness will be inseparably linked, and that ultimate progress in the one will be impossible without progress in the other?
The hard problem of consciousness (Chalmers 1995) is the problem of explaining why and how we have phenomenal first-person subjective experiences, sometimes called "qualia", that feel "like something", and more than this, evoke the entire panoply of all our experiences of the world around us. Chalmers comments (201): "Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does." By comparison, we assume there are no such experiences for inanimate things such as a computer, or a sophisticated form of artificial intelligence. Two extensions of the hard problem are the hard problem extended to volition and the hard manifestation problem: how is experience manifested in waking perception, dreams and entheogenic visions?
Fig 71b: The hard problem's explanatory gap – an uncrossable abyss.
Although there have been significant strides in electrodynamic (EEG and MEG), chemodynamic (fMRI) and connectome imaging of active conscious brain states, we still have no idea of how such collective brain states evoke the subjective experience of consciousness to form the internal model of reality we call the conscious mind, or for that matter volitional will. In Jerry Fodor's words: "Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious."
Nevertheless opinions about the hard problem and whether consciousness has any role in either perception or decision-making remain controversial and unresolved. The hard problem is contrasted with easy, functionally definable problems, such as explaining how the brain integrates information, categorises and discriminates environmental stimuli, or focuses attention. Subjective experience does not seem to fit this explanatory model. Reductionist materialists, who are common in the brain sciences, particularly in the light of the purely computational world views induced by artificial intelligence, see consciousness and the hard problem as issues to be eliminated by solving the easy problems. Daniel Dennett (2005) for example argues that, on reflection, consciousness is functionally definable and hence can be corralled into the objective description. Arguments against the reductionist position often cite that there is an explanatory gap (Levine 1983) between the physical and the phenomenal. This is also linked to the conceivability argument, whether one can conceive of a micro-physical “zombie” version of a human that is identical except that it lacks conscious experiences. This, according to most philosophers (Howell & Alter 2009), indicates that physicalism, which holds that consciousness is itself a physical phenomenon with solely physical properties, is false.
David Chalmers (1995), speaking in terms of the hard problem, comments: "The only form of interactionist dualism that has seemed even remotely tenable in the contemporary picture is one that exploits certain properties of quantum mechanics." He then goes on to cite (a) John Eccles' (1986) proposal that consciousness provides the extra information required to deal with quantum uncertainty, thus not interrupting causally deterministic processes, if they occur, in brain processing, and (b) the possible involvement of consciousness in "collapse of the wave function" in quantum measurement. We next discuss both of these loopholes in the causal deterministic description.
Two threads in our cosmological description indicate how the complementary subjective and objective perspectives on reality might be unified. Firstly, the measurement problem in the quantum universe appears to involve interaction with a conscious observer. While the quantum description involves an overlapping superposition of wave functions, the Schrödinger cat paradox, fig 71(d), shows that when we submit a cat in a box to a quantum measurement, leading to a 50% probability of a particle detection smashing a flask of cyanide and killing the cat, the conscious observer who opens the box does not find a superposition of live and dead cats, but one cat, either stone dead or very alive. This leads to the idea that subjective consciousness plays a critical role in collapsing the superimposed wave functions into a single component, as noted by John von Neumann, who stated that collapse could occur at any point between the precipitating quantum event and the conscious observer, and others (Greenstein 1988, Stapp 1995, 2007).
Wigner (1961, Wigner & Margenau 1967) in "Remarks on the Mind-Body Question" used a variant of the cat paradox to argue for conscious involvement. In this version, we have a lab containing a conscious friend who reports the result later, leading to a paradox about when the collapse occurs – i.e when the friend observes it or when Wigner does. But according to Wigner, the friend in the box is in a superposition of states until he reports. Wigner discounted the observer being in a superposition as this would be preceded by being in a state of effective "suspended animation". As Wigner says, he could ask his friend, "What did you feel about the [measurement result] before I asked you?" The question of what result the friend has seen is surely "already decided in his mind", Wigner writes, which implies that the friend–system joint state must already be one of the collapsed options, not a superposition of them. As this paradox does not occur if the friend is a non-conscious mechanistic computer, it suggests consciousness is pivotal. Henry Stapp (2009) in "Mind, Matter and Quantum Mechanics" has an overview of the more standard theories.
While systems as large as molecules of 2000 atoms (Fein et al. 2019), gramicidin A1, a linear antibiotic polypeptide composed of 15 amino acids (Shayeghi et al. 2020), and even a deep-frozen tardigrade (Lee et al. 2021) have been found in a superposition of states resulting in interference fringes, indicating that the human body or brain could be represented as a quantum superposition, it is unclear that subjective experience can. More recent experiments involving two interconnected Wigner's friend laboratories also suggest the quantum description "cannot consistently describe the use of itself" (Frauchiger & Renner 2018). An experimental realisation (Proietti et al. 2019) implies that there is no such thing as objective reality, as quantum mechanics allows two observers to experience different, conflicting realities. These paradoxes underlie the veridical fact that conscious observers make and experience a single course of history, while the physical universe of quantum mechanics is a multiverse of probability worlds, as in Everett's many worlds description, if collapse does not occur. This postulates split observers, each unaware of the existence of the other, but the kind of universe they are then looking at seems inexorably split into multiverses, which we do not experience.
In this context, Barrett (1999) presents a variety of possible solutions involving many worlds and many, or one, minds, which, in the words of Saunders (2001) in review, has resonance with existential cosmology:
Barrett’s tentatively favoured solution [is] the one also developed by Squires (1990). It is a one-world dualistic theory, with the usual double-standard of all the mentalistic approaches: whilst the physics is precisely described in mathematical terms, although it concerns nothing that we ever actually observe, the mental – in the Squires-Barrett case a single collective mentality – is imprecisely described in non-mathematical terms, despite the fact that it contains everything under empirical control.
In quantum entanglement [1], two or more particles can be prepared within the same wave function. For example, in a laser, an existing wave function can capture more and more photons in phase with a standing wave between two mirrors, by stimulated emission from the excited medium. In other experiments pairs of particles can be generated inside a single wave function. Einstein, Podolsky and Rosen (1935) proposed a locally causal limitation on any hidden variable theories describing the situation when two particles are entangled coherently in a single wave function. For example, an excited calcium atom with two outer electrons can emit a blue and a yellow photon with complementary polarisations in a spin-0 to spin-0 transition, as shown in fig 72(8). In this situation, when we sample the polarisation of one photon, the other instantaneously has the complementary polarisation, even when the two detections take place without there being time for any information to pass between the detectors at the speed of light. John Bell (1964) proved that the results predicted by standard quantum mechanics, when the two detectors were set at varying angles, violated the constraints defined by local Einsteinian causality, implying quantum non-locality, decried by Einstein, Podolsky and Rosen (1935) as an incomplete view:
In a complete theory there is an element corresponding to each element of reality. A sufficient condition for the reality of a physical quantity is the possibility of predicting it with certainty, without disturbing the system. In quantum mechanics in the case of two physical quantities described by non-commuting operators, the knowledge of one precludes the knowledge of the other. Then either (1) the description of reality as given by the wave function in quantum mechanics is not complete, or (2) these two quantities cannot have simultaneous reality. Consideration of the problem of making predictions concerning a system on the basis of measurements made on another system that had previously interacted with it leads to the result that if (1) is false then (2) is also false. One is thus led to conclude that the description of reality as given by a wave function is not complete.
The experimental verification was confirmed by Alain Aspect and others (1982) over space-like intervals, using rapidly time-varying analysers (fig 72(8)), receiving a Nobel prize in 2022. There are other more complex forms of entanglement, such as the W and GHZ states (Greenberger, Horne & Zeilinger 1989, Mermin 1990), used in quantum computing (Coecke et al. 2021) – types of entangled state that involve at least three subsystems (particle states, or qubits). Extremely non-classical properties of the GHZ state have been observed.
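The scale of the violation can be illustrated with a short calculation (a textbook CHSH sketch, not the Aspect analysis itself): using the quantum singlet correlation E(a,b) = –cos(a–b) at the standard analyser settings, the CHSH combination reaches 2√2, exceeding the bound of 2 that any local hidden-variable account must satisfy.

```python
# Illustrative CHSH calculation: singlet-state correlations at the standard
# analyser angles exceed the local hidden-variable bound |S| <= 2.
import numpy as np

def E(a, b):
    """Quantum prediction for the spin-singlet correlation at analyser angles a, b."""
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2            # Alice's two analyser settings
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two analyser settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"|S| = {abs(S):.3f}  (local realism requires |S| <= 2; quantum gives 2*sqrt(2) ~ 2.828)")
```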
Quantum entanglement is an area where consciousness may have a critical role. Liu, Chen & Ao (2024) have suggested that the myelin sheaths surrounding neuron axons are a rich source of bi-photons. Albert Einstein dubbed the phenomenon "spooky action at a distance". It shows that the state of either particle remains indeterminate until we measure one of them, when the other's state is instantaneously determined to be complementary. This cannot, however, be used to send logical classical information faster than light, but it indicates that the quantum universe is a highly entangled system in which potentially all particles in existence are involved in correlated behaviour. Einstein proposed that the effect came about because the interactions contained hidden variables, which had already predetermined their states. This doesn't mean that quantum mechanics is incomplete, superficial or wrong, but that a hidden variable theory we do not have direct access to within uncertainty may provide the complete description.
Fig 71c: (1) Entanglement experiment with time varying analysers (Aspect et al. 1982). A calcium atom emits two entangled photons with complementary polarisation each of which travels to one of two detectors oscillating so rapidly there is no time to send information at the speed of light between the two detector pairs. (2) The blue and yellow photon transitions. (3) The quantum correlations blue exceed Bell’s limits of communication between the two at the speed of light. The experiment is referred to as EPR after Einstein, Podolsky and Rosen who first suggested the problem of spooky action at a distance.
Robert Moir (2014) notes that Von Neumann’s proof of the impossibility of dispersion free states depends crucially on a particular assumption. Let A be some quantum mechanical observable and so ⟨ψ|A|ψ⟩ is the expectation value of an ensemble of particles prepared in the state |ψ⟩. Now, for considerations of hidden variables, the individual members of that ensemble have definite values of observables. Let ν(A) denote the value of A taken by some particular particle of the ensemble. Von Neumann’s assumption was that if A, B and C are any observables such that C = A + B, then the value of C assigned to a particle must satisfy ν(C) = ν(A) + ν(B) (1).
It turns out that a theory that satisfies (1) cannot match the predictions of quantum mechanics, which is what drives von Neumann's proof (Golub & Lamoreaux 2024). John Bell objected very strongly to von Neumann's assumption, which he considered silly. It is silly precisely because there is no reason why (1) should hold if A and B do not commute. Operators A, B commute if A.B = B.A, but some operators, like the x, y and z spin components of a spin-½ particle, don't: here Sx.Sy = –Sy.Sx, and consequently we can't know the values of both at the same time. (1) does hold for commuting observables and, indeed, in general for expectation values, but if A and B do not commute then they do not have simultaneous eigenvalues and so they cannot be simultaneously measured. Thus there is no reason why (1) should hold for individual members of an ensemble and so no reason why (1) should be required by a hidden variables theory:
The latter is a quite peculiar property of quantum mechanical states, not to be expected a priori. There is no reason to demand it individually of the hypothetical dispersion-free states, whose function it is to reproduce the measurable properties of quantum mechanics when averaged over (Bell 1966).
David Mermin concurs:
Yet the von Neumann proof, if you actually come to grips with it, falls apart in your hands! There is nothing to it. It’s not just flawed, it’s silly! . . . When you translate [his assumptions] into terms of physical disposition, they’re nonsense. The proof of von Neumann is not merely false but foolish! (Mermin 1993)
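Bell's point can be checked directly with spin-½ operators. The sketch below (my illustration, not drawn from the cited papers) shows that σx and σy anticommute, that each has eigenvalues ±1, and that their sum has eigenvalues ±√2, so no dispersion-free value assignment obeying ν(A + B) = ν(A) + ν(B) can reproduce the actual spectra of non-commuting observables.

```python
# Why von Neumann's additivity assumption (1) fails for non-commuting observables:
# sigma_x and sigma_y each have eigenvalues +/-1, but their sum has eigenvalues
# +/-sqrt(2), which is not a sum of +/-1 values.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

print(np.allclose(sx @ sy, -(sy @ sx)))   # True: Sx.Sy = -Sy.Sx (non-commuting)
print(np.linalg.eigvalsh(sx))             # [-1.  1.]
print(np.linalg.eigvalsh(sy))             # [-1.  1.]
print(np.linalg.eigvalsh(sx + sy))        # [-1.414...  1.414...] != sums of +/-1
```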
Pilot Wave Theory: David Bohm (1952), along with Louis de Broglie, introduced the notion of pilot waves, which identify particles as having real positions, thus not requiring wave function collapse, but which have problems handling the creation of new particles. The theory posits a real position and momentum for a particle, such as a photon, guided by a particular non-local form of pilot wave. It illustrates a form of hidden variable theory which does not require collapse of the wave function, but the predictions hold only for a situation where no new particles are created with new degrees of freedom during the trajectory. Its interpretation is thus inconsistent with the Feynman approach, where the transition probability includes all paths and all possible virtual particles created and annihilated during the transition. To the extent that its predictions coincide with those of quantum mechanics, phenomena from weak quantum measurement (Kocsis et al. 2011) to surreal Bohmian trajectories (Mahler et al. 2016) can also be interpreted correctly by entanglement using standard quantum mechanics. Images of such trajectories can be seen in weak quantum measurement and surreal Bohmian trajectories in fig 57.
However, the pilot wave theory has to stipulate an exact initial condition for the particle, which violates uncertainty. As such, the theory has no experimental consequences, so W. Pauli referred to it as an "uncashable check". But as there are no observable consequences, the Bohm theory is possibly a counterexample to von Neumann's second conclusion that hidden variables in particular would have already led to a failure of the theory. Pilot waves also have a fundamental problem with particle creation, because stipulating a position for the particle doesn't work to specify the later outcome if a high-energy photon then undergoes a creation event and splits into an electron and a positron, where we have a potentially infinite number of possible trajectories of the ensuing particles. Attempts to solve this (Duerr et al. 2004, 2005, Nikolic 2010) involve two problems: (1) the configuration space splits into a potentially infinite number of configurations, and (2) a stochastic process is arbitrarily introduced for creation.
Fig 71d: Cancellation of off-diagonal entangled components in decoherence by damping, modelling extraneous collisions (Zurek 2003).
Other notions of collapse (see King 2020b for details) involve interaction with third-party quanta and the world on classical scales. All forms of quantum entanglement (Aspect et al. 1982), or its broader phase generalisation, quantum discord (Ollivier & Zurek 2002), involve decoherence (Zurek 1991, 2003), because the system has become coupled to other wave-particles. But these just correspond to further entanglements, not collapse. Recoherence (Bouchard et al. 2015) can reverse decoherence, supporting the notion that all non-conscious physical structures can exist in superpositions. Another notion is quantum darwinism (Zurek 2009), in which some states survive because they are especially robust in the face of decoherence. Spontaneous collapse (Ghirardi, Rimini, & Weber 1986) has a similar artificiality to Zurek’s original decoherence model, in that both include an extra factor in the Schrödinger equation forcing collapse.
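The damping of off-diagonal components shown in fig 71d can be illustrated with a toy density matrix. In the sketch below a single-qubit superposition stands in for the entangled components, and the exponential damping time is an assumed illustrative parameter, not Zurek’s model itself.

```python
# Toy illustration of decoherence as damping of off-diagonal density-matrix
# elements (in the spirit of fig 71d); the damping time tau is an assumed
# illustrative parameter, and a single qubit stands in for the entangled system.
import numpy as np

psi = np.array([1.0, 1.0]) / np.sqrt(2)       # equal superposition (|0> + |1>)/sqrt(2)
rho0 = np.outer(psi, psi.conj())              # pure-state density matrix

def decohere(rho, t, tau=1.0):
    """Damp the coherences (off-diagonal terms) by exp(-t/tau); populations unchanged."""
    out = rho.copy()
    damp = np.exp(-t / tau)
    out[0, 1] *= damp
    out[1, 0] *= damp
    return out

for t in (0.0, 1.0, 5.0):
    print(f"t = {t}:\n{np.round(decohere(rho0, t), 3)}")
# As t grows the off-diagonal (interference) terms vanish and rho approaches the
# diagonal mixture diag(0.5, 0.5): the coherences are cancelled, but nothing has
# collapsed -- both outcomes remain present with their original weights.
```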
Spontaneous random collapse GRW models (Ghirardi, Rimini, & Weber 1986) include an extra factor complementing the Schrödinger equation, forcing random collapse over a finite time. Both Penrose’s gravitationally induced collapse and the variants of GRW theories, such as continuous spontaneous localisation (CSL), involving gradual, continuous collapse rather than a sudden jump, have recently been partially eliminated by experiments derived from neutrino research, which have failed to detect the very faint x-ray signals that the local jitter of physical collapse models implies.
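A rough toy version of a GRW-style localisation “hit” can be sketched as follows, assuming illustrative (unphysical) rate and width parameters rather than the actual GRW constants: at Poisson-distributed times the wave function is multiplied by a Gaussian whose centre is sampled from the GRW probability rule, and a two-packet “cat” superposition rapidly settles into one branch.

```python
# Toy GRW-style spontaneous localisation on a 1-D grid.  At Poisson-distributed
# times the wave function is multiplied by a Gaussian "hit" of width sigma_hit,
# the hit centre a being sampled from the GRW rule P(a) ~ ||G_a psi||^2.
# The rate and width used here are illustrative, not the physical GRW constants.
import numpy as np

rng = np.random.default_rng(1)
N, L = 512, 40.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

# Start in a superposition of two well-separated packets (a "cat" state)
psi = np.exp(-(x - 8.0)**2) + np.exp(-(x + 8.0)**2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

sigma_hit, rate, T = 2.0, 0.5, 10.0            # hit width, hit rate, total time
G2 = np.exp(-((x[:, None] - x[None, :])**2) / (2 * sigma_hit**2))  # G_a(x)^2, rows = a

t = 0.0
while True:
    t += rng.exponential(1.0 / rate)           # waiting time to the next hit
    if t > T:
        break
    weights = G2 @ (np.abs(psi)**2) * dx       # ||G_a psi||^2 for each candidate centre a
    a = rng.choice(x, p=weights / weights.sum())
    psi = psi * np.exp(-((x - a)**2) / (4 * sigma_hit**2))   # apply the hit G_a
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

left = np.sum(np.abs(psi[x < 0])**2) * dx
print(f"weight left of origin after hits: {left:.3f}")  # ~0 or ~1: one branch survives
```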
In the approach of stochastic electrodynamics (SED) (de la Peña et al. 2020), an additional stochastic term corresponds to the effects of the collapse process into the classical limit.
David Albert (1992), in "Quantum Mechanics and Experience", cites objections to virtually all descriptions of collapse of the wave function. In terms of von Neumann's original definition, which allowed for collapse to take place any point from the initial event to the conscious observation of it, what he concluded was that there must be two fundamental laws about how the states of quantum-mechanical systems evolve:
Without measurements all physical systems invariably evolve in accordance with the dynamical equations of motion, but when there are measurements going on, the states of the measured systems evolve in accordance with the postulate of collapse. What these laws actually amount to will depend on the precise meaning of the word measurement. And it happens that the word measurement simply doesn't have any absolutely precise meaning in ordinary language; and it happens (moreover) that von Neumann didn't make any attempt to cook up a meaning for it, either.
However, if collapse always occurs at the last possible moment, as in Wigner's (1961) view:
All physical objects almost always evolve in strict accordance with the dynamical equations of motion. But every now and then, in the course of some such dynamical evolutions, the brain of a sentient being may enter a state wherein states connected with various different conscious experiences are superposed; and at such moments, the mind connected with that brain opens its inner eye, and gazes on that brain, and that causes the entire system (brain, measuring instrument, measured system, everything) to collapse, with the usual quantum-mechanical probabilities, onto one or another of those states; and then the eye closes, and everything proceeds again in accordance with the dynamical equations of motion.
We thus end up with either purely physical systems, which evolve in accordance with the dynamical equations of motion, or conscious systems, which do contain sentient observers. These systems evolve in accordance with the more complicated rules described above. ... So in order to know precisely how things physically behave, we need to know precisely what is conscious and what isn't. What this "theory" predicts will hinge on the precise meaning of the word conscious; and that word simply doesn't have any absolutely precise meaning in ordinary language; and Wigner didn't make any attempt to make up a meaning for it; and so all this doesn't end up amounting to a genuine physical theory either.
But he also discounts related theories relating to macroscopic processes:
All physical objects almost always evolve in strict accordance with the dynamical equations of motion. But every now and then, in the course of some such dynamical evolutions (in the course of measurements, for example), it comes to pass that two macroscopically different conditions of a certain system (two different orientations of a pointer, say) get superposed, and at that point, as a matter of fundamental physical law, the state of the entire system collapses, with the usual quantum-mechanical probabilities, onto one or another of those macroscopically different states. But then we again have two sorts of systems, microscopic and macroscopic, and again we don't precisely know what macroscopic is.
He even goes to the trouble of showing that no obvious empirical test can distinguish between such variations, including decoherence e.g. from air molecules, and with the GRW theory, where other problems arise about the nature and consequences of collapse on future evolution.
Tipler (2012, 2014), using quantum operators, shows that, in the many worlds interpretation, quantum non-locality ceases to exist because the first measurement of an entangled pair, e.g. spin up or down, splits the multiverse into two deterministic branches, in each of which the state of the second particle is determined to be complementary, so no nonlocal "spooky action at a distance" needs to, or can, take place.
This also leads to a fully-deterministic multiverse:
Like the electrons, and like the measuring apparatus, we are also split when we read the result of the measurement, and once again our own split follows the initial electron entanglement. Thus quantum nonlocality does not exist. It is only an illusion caused by a refusal to apply quantum mechanics to the macroworld, in particular to ourselves.
Many-Worlds quantum mechanics, like classical mechanics, is completely deterministic. So the observers have only the illusion of being free to choose the direction of spin measurement. However, we know from experience that there are universes of the multiverse in which the spins are measured in the orthogonal directions, and indeed universes in which the pair of directions are at angles θ at many values between 0 and π/2 radians. To obtain the Bell Theorem quantum prediction in this more general case, where there will be a certain fraction with spin in one direction, and the remaining fraction in the other, requires using Everett’s assumption that the square of the modulus of the wave function measures the density of universes in the multiverse.
There is a fundamental problem with Tipler’s explanation. The observer is split into one that observes the cat alive and the other observes it dead. So everything is split. Nelson did and didn’t win the Battle of Copenhagen by turning his blind eye, so Nelson is also both a live and dead Schrödinger cat. The same for every idiosyncratic conscious decision we make, so history never gets made. Free will ceases to exist and quantum measurement does not collapse the wave function. So we have a multiverse of multiverses with no history at all. Hence no future either.
This simply isn’t in any way how the real universe manifests. The cat IS alive or dead. The universe is superficially classical because so many wave functions have collapsed or are about to collapse that the quantum universe is in a dynamical state of creating superpositions and collapsing nearly all of them, as the course of history gets made. This edge of chaos dynamic between collapse and wave superposition allows free will to exist within the cubic centimetre of quantum uncertainty. We are alive. Subjective conscious experience is alive and history is being unfolded as I type.
Nevertheless the implications of the argument are quite profound, in that both a fully quantum multiverse and a classical universe are causally deterministic systems, showing that the capacity of subjectively conscious free-will to throw a spanner in the works comes from the interface we experience between these two deterministic extremes.
Transactional Interpretations
Another key interpretation, which extends the Feynman description of virtual particles to real particle exchanges, is the transactional interpretation (TI) (Cramer 1986, King 1989, Kastner 2012, Cramer & Mead 2020, Cramer 2024), where real quanta are described as a hand-shaking between retarded emitter waves (usual time direction) and advanced (retrocausal) waves from the absorber, called “offer” and “confirmation” waves, whose combined effect occurs with Born probability φ.φ*, directly displaying the product of the offer and confirmation wave amplitudes.
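In symbols, the Born probability referred to here is simply the product of the offer amplitude and the confirmation amplitude (its complex conjugate), as stated above and again in Kastner’s treatment below:

```latex
% Born rule read transactionally: offer amplitude times confirmation amplitude
P \;=\; \varphi\,\varphi^{*} \;=\; |\varphi|^{2},
\qquad \int \varphi(x)\,\varphi^{*}(x)\,dx \;=\; 1 ,
% with \varphi supplied by the retarded offer wave from the emitter and
% \varphi^{*} by the advanced confirmation wave from the absorber.
```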
TI arose from the Wheeler-Feynman (WF) time-symmetric theory of classical electrodynamics (Wheeler and Feynman 1945, 1949, Feynman 1965), which proposed that radiation is a time-symmetric process, in which a charge emits a field in the form of half-retarded, half-advanced solutions to the wave equation, and the response of absorbers combine with that primary field to create a radiative process that transfers energy from an emitter to an absorber.
Feynman's (1965) Nobel Lecture "The Development of the Space-Time View of Quantum Electrodynamics" opened the whole transactional idea of advanced and retarded waves twenty years before Cramer (1983) did. It enshrines the very principle in which QED became acclaimed as the most accurate theory ever, although it encrypts the wave aspect in the Green’s function particle propagators, thus returning Heisenberg probabilities summed over Feynman diagrams.
The same interactive picture applies to collapse of both single particle wave functions, and to entangled particles, in which collapse of the wave function on absorption apparently has to paradoxically result in a sudden collapse of the wave function to zero, even at space-like intervals between detector pairs, a problem that is easily resolved by the advanced and retarded confirmation and offer waves of the transactional interpretation.
As just noted, the process of wave function collapse has generally been considered to violate Lorentz relativistic invariance (Barrett 1999 p44-45):
The standard collapse theory, at least, really is incompatible with the theory of relativity in a perfectly straightforward way: the collapse dynamics is not Lorentz-covariant. When one finds an electron, for example, its wave function instantaneously goes to zero everywhere except where one found it. If this did not happen, then there would be a nonzero probability of finding the electron in two places at the same time in the measurement frame. The problem is that we cannot describe this process of the wave function going to zero almost everywhere simultaneously in a way that is compatible with relativity. In relativity there is a different standard of simultaneity for each inertial frame, but if one chooses a particular inertial frame in order to describe the collapse of the wave function, then one violates the requirement that all physical processes must be described in a frame-independent way.
The transactional interpretation presents a unique view of cosmology, involving an implicit space-time anticipation in which a real exchange, e.g. a photon emitted by a light bulb and absorbed on a photographic plate or elsewhere, or a Bell type entanglement experiment with two detectors, is split into an offer wave from the emitter and retro-causal confirmation waves from future prospective absorbers that, after the transaction is completed, interfere to form each real photon confined between a unique emission and a unique absorption vertex. We also experience these retro-causal effects in weak quantum measurement fig 57(3), and delayed choice experiments, fig 74.
Fig 72: (1) In TI a transaction is established by crossed phase advanced and retarded waves. (2) The superposition of these between the emitter and absorber results in a real quantum exchanged between emitter P and future absorber Q. (3) The origin of the positive energy arrow of time, envisaged as a phase-reflecting boundary at the cosmic origin, allowing a particle to radiate into space if the universe expands forever (Cramer 1983). This problem is made more acute by the difficulty of absorbing all weakly-interacting cosmic neutrinos. Cramer asserts this is the fundamental basis of the arrow of time we associate with thermodynamics and claims it suggests the universe will expand forever as it appears, because a big crunch would induce a retrocausal reflecting boundary paradox. (4) Pair splitting entanglement can be explained by transactional handshaking at the common emitter. (5) A real energy emission in which time has broken symmetry involves multiple transactions between the emitter and many potential absorbers, with collapse modelled as a symmetry breaking, in which the physical weight functions as the probability of that particular process as it ‘competes’ with other possible processes (King 1989, Kastner 2014). (6) The treatment of the quantum field in PTI is explained by assigning a different status to the internal virtual particle transactions (Kastner 2012). (7) Space-time emerging from a transaction (Kastner 2021a). (8) Entanglement experiment with time varying analysers (Aspect et al. 1982). A calcium atom emits two entangled photons with complementary polarisation, each of which travels to one of two detectors oscillating so rapidly there is no time to send information at the speed of light between the two detector pairs. (9) The blue and yellow photon transitions. (10) The quantum correlations (blue) exceed Bell’s limits for communication between the two at the speed of light. The experiment is referred to as EPR after Einstein, Podolsky and Rosen, who first suggested the problem of spooky action at a distance.
To get a full picture of this process, we need to consider the electromagnetic field as a whole, in which many absorbers are also receiving offer waves from many emitters, so we get a network of virtual emitter-absorber pairs (fig 73). There is a fundamental symmetry between creation and annihilation, but there is a sting in the measurement tail. When we do an interference experiment, with real photons in a positive energy universe, we know each photon came from the small region within the light source, but the locations of the many potential absorbers affected by the wave function are spread across the world at large in the emitter’s future. The photon could be absorbed anywhere on the photographic plate, or before it, if it hits dust in the apparatus, or after, if it goes right through the plate and out of the apparatus altogether, into the universe at large. The key problem in wave function collapse is which absorber?
Cramer & Mead (2020) comment that both atoms in a transaction require a small amplitude component in the opposite state (inset) to initiate dipole radiation, with the excited atom having a small ground state amplitude and the absorbing atom a small excitation amplitude due to ambient interactions. They also note that phase coherence between emitter and absorber may be critical and that transactional resonance occurs during the brief period of transient oscillations during photon release, of the order of the inverse of the photon’s frequency (~10⁻¹⁰ s for visible light).
In all these cases, once a potential absorber becomes real, all the other potential absorbers have zero probability of absorption, so the change occurs “instantaneously” across space-time to other prospective absorbers, relative to the successful one, through the change in the emitter’s wave induced retrocausally at the emission outset.
Special relativistic quantum field theory is time-symmetric, admitting both retarded and advanced solutions, so solving wave function collapse is thus most closely realised in the transactional interpretation, where the real wave function is neither the emitter's spreading linear Schrödinger retarded wave, nor any of the prospective absorbers’ linear advanced waves, but the non-linear result, fig 73 (3, 4), of a hand-shaking, in which all these hypothetical offer and confirmation waves resolve into one or more real wave functions, linking “creation” and “annihilation” vertices in a deeper hidden variable wave interaction that Heisenberg creation and annihilation operators in Hilbert linear inner product space cannot and do not capture. It is the nature of this non-linear phase transition which holds the keys to life, the universe and everything and potentially the nature of time itself.
Fig 73: Upper: A transaction modelled by a phase transition from a virtual plasma to a real interactive solid (1) spanning space-time, in which the wave functions have now become like the harmonic phonons of solid state physics. Lower: Modelling wave function collapse in a transaction between an excited and a ground state atom (4), showing equivalent paths (3) and dipole radiation (2). Cramer & Mead (2020).
Moreover, in a positive energy real particle universe, where excited states are rarer than ground states, particularly at the lower energies of photosynthetic planetary systems, it is the multiplicity of absorbers in an excited emitter’s future that determines how the universe is able to come up with a non-deterministic causality-violating unique space-time spanning wave reduction in each case. This means real interactions are anticipatory, i.e. future-sensitive! But this does not result in any cyclical-causality, back-to-the-future time paradox; rather it explains aspects of how type 1 von Neumann wave collapse occurs by selecting a future absorber from a space-time ensemble, through a complex interactive non-linear collapse “super-selection” process.
The entire notion, arising in Bell experiments, that communication between absorbers must be impossibly instantaneous, invoking super-luminal communication, is unnecessary, because the retrocausal confirmation wave perfectly cancels the time elapse of the offer wave. If detector 1 samples first, its confirmation wave goes back to the source photon-splitter, arriving at the same time as the original emission, and the offer wave collapses to a single photon emission to detector 2, which arrives there at exactly the time when 2 samples the complementary polarisation, with this precise information as required. No superluminal interaction between absorbers occurs, even if it looks as though the process was instantaneous and would have to involve infinite velocity, violating Lorentz invariance. This looks “instantaneous” without contradiction, or Lorentz violation, because of the time elapse cancellations, but if we follow it as a process, it is some kind of non-linear phase transition from a “plasma” state of offers and confirmations collapsing into a set of real photons with phonon-like real excitations connecting them, as in fig 73(1).
The only non-paradoxical way quantum entanglement and its collapse can be realised physically, especially in the case of space-like separated detectors, as in fig 72(8) is this:
(A) The closer detector, say No. 1, destructively collapses the entanglement at (1) sending a non-entangled advanced confirmation wave back in time to the source.
(B) The arrival of the advanced wave at the source collapses the wave right at source, so that the retarded wave from the source is no longer entangled although it was prepared as entangled by the experimenter. This IS instantaneous but retrocausally-local.
(C) The retarded offer wave is now no longer actually entangled and is sent at light speed to detector 2 where, if it is detected it immediately has complementary polarisation to 1’s detection.
(D) If detector 1 does not record a photon at the given angle, no coincidence measurement has been made.
(E) The emitted retarded wave will remain entangled unless photon 1 is or has been absorbed by another atom but then no coincidence counts will be made either.
(F) The process is relativistically covariant. In an experimenter frame if relative motion results in detector 2 sampling first, the roles of 1 and 2 become exchanged and the same explanation follows.
(G) Every later detection at (2) either collapses the entangled wave, or the already partially collapsed single particle wave function as in (B). If no detection has occurred at 1, or elsewhere, the retarded source wave is still entangled, and detector 2 may then sample it and collapse the entanglement. If a detection of photon 1 has happened elsewhere or at detector 1 the retarded source wave is no longer entangled, as in B above and then detector 2, if it samples photon 2, collapses this non-entangled single-particle wave function.
So there is no light-speed violating information transfer directly between detectors resulting in paradox, but there is a deeper conundrum about advanced and retarded waves in space-time in the transactional principle. This, as far as I can see, gives the real-time account of how the universe actually deals with entanglement, not the fully collapsed statistical result the experimenter sees on the basis of Hilbert space operators, conveniently concluding that the case is closed. The standard account of the Bell theorem experiment, as in fig 72(8), cannot explain how the universe actually does it, only that the statistical correlation agrees with the sinusoidal angular dependence of quantum reality and violates the Bell inequality. The experimenter is in a privileged position to overview the total data and can conclude this with no understanding of how an entangled wave function they prepared can arrive at detector 2 unentangled and complementary to 1, when photon 1 has already been absorbed.
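For reference, the statistics the experimenter actually sees, independent of which interpretation explains them, can be reproduced by sampling directly from the quantum joint distribution: each detector’s outcomes alone look random, the coincidences follow E(a,b) = cos 2(a−b), and the CHSH combination exceeds the classical bound of 2. The angles and sample size below are illustrative choices, and the sketch reproduces only the statistics, not any mechanism.

```python
# Monte Carlo sketch of photon polarisation entanglement statistics:
# each detector's outcomes alone look like fair coin flips, but the
# coincidences follow E(a,b) = cos 2(a-b) and violate the CHSH bound of 2.
# Angles and sample size are illustrative choices.
import numpy as np

rng = np.random.default_rng(42)

def run(a, b, n=200_000):
    """Sample +/-1 outcomes at polariser angles a, b (radians) for n entangled pairs."""
    A = rng.choice([1, -1], size=n)                 # detector 1: individually random
    same = rng.random(n) < np.cos(a - b) ** 2       # quantum probability of equal outcomes
    B = np.where(same, A, -A)                       # detector 2: correlated with 1
    return A, B

def E(a, b):
    A, B = run(a, b)
    return np.mean(A * B)                           # coincidence correlation

a, a2, b, b2 = 0.0, np.pi/4, np.pi/8, 3*np.pi/8     # standard CHSH angle choices
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"singles mean (should be ~0): {np.mean(run(a, b)[0]):+.3f}")
print(f"E(a,b): expected {np.cos(2*(a-b)):.3f}, measured {E(a, b):.3f}")
print(f"CHSH S = {S:.3f}  (classical bound 2, quantum maximum 2*sqrt(2) ~ 2.83)")
```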
In Symbiotic Existential Cosmology, transactional reality is envisaged as allowing a form of prescience, because the collapse has implicit information about the future state of the universe in which the absorbers exist. This may appear logically paradoxical, but no classical information is transferred, so there is no inconsistency. Modelled across space-time, the collapse appears to happen outside time, or transversally to it, traversing the entire space-time envelope of the absorbers and the special-relativistic offer and confirmation wave functions, but it is actually near instantaneous at the emitter, taking ~10⁻¹⁰ s for a visual-frequency photon, so dual-time is just a core part of the heuristic for understanding the non-linear process. This depends on transactional collapse being a non-random “super-selection” hidden-variable theory, in which wave function interactions manifest as a complex system during collapse in a way that looks deceptively like randomness, because it is a complex quasi-ergodic process influenced by interactive processes in the future universe, including ambient phase relationships between the emitter and potential absorbers. An emission time of ~10⁻¹⁰ s may seem a negligible anticipatory advantage when we are looking at a simple light-driven interference experiment, but in the context of the conscious brain, where there are recursive wave interactions in the brain's coherently interacting electromagnetic fields, complementing axonal-dendritic network connections, with frequencies of 0.5 – 100 Hz, we have anticipatory emission times from ~10⁻² s to as long as 2 seconds. The complex emission-absorption processes may thus correspond to an expanded reality of the quantum present, in which subjective consciousness appears as an eternal now, sentiently anticipating the immediate future implicitly through handshaking with future absorbing brain states, including responses to imminent existential crises.
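The anticipatory time scales quoted here are simply the oscillation periods of the stated frequency band:

```latex
% Oscillation period T = 1/f for the 0.5 - 100 Hz band of brain-field activity
T_{100\,\mathrm{Hz}} \;=\; \frac{1}{100\ \mathrm{Hz}} \;=\; 10^{-2}\ \mathrm{s},
\qquad
T_{0.5\,\mathrm{Hz}} \;=\; \frac{1}{0.5\ \mathrm{Hz}} \;=\; 2\ \mathrm{s}.
```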
My conclusion is that subjective conscious physical volition has to confer an evolutionary advantage, or it would be evolutionarily unstable and ultimately be discarded by natural selection, but this advantage has to involve real time anticipation of existential threats to survival. So I favour the transactional interpretation, in which a real particle, e.g. a photon, is a superposition of a causal “offer wave” from an emitter complemented by potential retrocausal “confirmation waves” from absorbers. This is actually necessary, because the emission wave is a linear Schrödinger wave that spreads, but a real photon is an excitation between an emitter and an absorber, more like a simple harmonic phonon excitation as a time-dependent process, in space with two foci as in fig 73(3, 4).
I remain intrigued by the transactional principle because I am convinced that subjective consciousness is a successful form of quantum anticipation in space-time, complementing classical cogitative prediction, that has enabled single-celled eucaryotes to conquer the biosphere before there were brains, due to their anticipatory membrane excitations sensing the world, which later evolved into neural networks as in hydra and then brains, based on intimately-coupled societies of such cells (neurons and neuroglia) now forming the neural networks neuroscience tries to understand in classical causal terms. The eucaryote endo-symbiosis in this view marks a unique discrete topological transformation of the cell membrane to unfold informational communication in attentive sentient consciousness invoking the second stage of cosmological manifestation that ends up being us wondering what the hell is going on? This is the foundation of our emergence as quantum cosmology and explains why we have the confounding existential dilemma we do have and why it all comes back to biospheric symbiosis being the centre of the cyclone of survival for us as a climax species.
The full picture of a transaction process is a population of real, or potential, emitters in excited states and potential ground state absorbers, with their offer and confirmation wave functions extending throughout space-time, similar to the Feynman representation in QED. As the transaction proceeds, this network undergoes a phase transition from a “virtual plasma” state to a “real solid”, in which the excited emitters are all paired with actual absorbers in the emitters’ future at later points in space-time. This phase transition occurs transversely across space-time – i.e. transcausally – covering both space-like and time-like intervals. It has many properties of a phase transition from plasma to solid, with a difference – the strongest interactions don’t win, except with a probability determined by the relative power of the emitter’s wave amplitudes at the prospective absorption event and vice versa. This guarantees the transaction conforms to the emitter’s probability distribution and the absorber's one as well. If a prospective absorber has already interacted with another emitter, it will not appear in the transaction network, so it ceases to be part of the collective transaction. Once this is the case, all other prospective absorbers of a given emitter scattered throughout space-time, both in the absorber’s past and future, immediately have zero probability of absorption from any of the emitters and no causal conflict.
The transition is laterally across the whole of space-time, not along the arrow of time in either direction. But this doesn’t mean a transaction is just a random process. Rather, it is a kind of super-selection theory, in which the probability of absorption at an absorber conforms to the wave probability but the decision-making process is spread between all the prospective absorbers and their potential emitters, distributed across space-time, not just an emitter-based random, wave power normalised probability. No paradoxical time loop is created.
The fact that, in the cat paradox experiment, we see only a live or dead cat and not a superposition doesn’t mean, however, that conscious observers witness only a classical world view. There are plenty of real phenomena in which we do observe quantum superpositions, including quantum erasure, where entangled particles can be distinguished, collapsing the entanglement, and then re-entangled. A laser consists of excited atoms above the ground state, which can be triggered to coherently emit photons indistinguishably entangled in a superposition of in-phase states, stimulated by a standing wave caught between pairs of reflecting mirrors, so we see the bright laser light and know it is a massive superimposed set of entangled photons. The same applies to coherent brain waves.
In all forms of quantum entanglement experiment, when the state of one of the pair is detected, the informational outcome is manifested at the other detector so that the other particle’s state is definitively complementary, although the detectors can be separated by space-like as well as time-like intervals, and this transmission cannot be used to relay classical information. This again is explained by the transactional interpretation, because the confirmation wave of the first detector of the pair is transmitted retro-causally back to the source event where the splitting occurred and then causally out to the second detector where it now has obligately complementary spin or polarisation when detection occurs.
What the transactional interpretation does provide is a real wave collapse process in which the universe is neither stranded in an Everett probability multiverse, nor in a probabilistically selected fully collapsed “classical” eigenvalue state, but can be anywhere in between, depending on which agents are doing the measuring. Nor is collapse necessarily random and thus meaningless, but is a space-time spanning non-linear phase transition, involving bidirectional hand-shaking between past and future. The absorbers are all in an emitter’s future, so there is a musical chairs dance happening in the future. And those candidates may also be absorbers of other emitters and so on, so one can’t determine the ultimate boundary conditions of this problem. Somehow the “collapse”, which we admit violates retarded causality, results in one future choice. This means that there is no prohibition on this being resolved by the future affecting the outcome, because the actual choice has no relation to classical causality.
The only requirement is that repeated individual observations are asymptotic to the Born probability interpretation, normalised by the wave function power φ.φ*, but this in itself reflects offer and confirmation processes, in which multiple interactions generate responses asymptotic to ergodic stochasticity, while having a basis in an explicit hidden variable theory. Such hidden complexity is trivially manifested in two-particle entanglement, where observation of either particle’s polarisation appears to be “random”, but coincidence counting shows they are tightly correlated, indicating complementary polarisations in any reference frame. The reason for the Born probability asymptote could thus be simply that the non-linear phase transition of the transaction, like the cosmic wave function of the universe, potentially involves everything there is – concealing a predictive or anticipatory hidden variable theory. One should point out that the near universal assumption that the probability interpretation implies pure randomness normalised by the wave power has as much onus on scientific proof as does any hidden variable theory, and is tacitly assumed by Heisenberg’s matrix formulation, in which quantum transitions are explicitly determined by the probabilities.
The transactional interpretation is one in which subjective conscious volition and meaning can become manifest in cosmic evolution, in which the universe is in a state of dynamic ramification and collapse of quantum superpositions. The key point here is that subjective conscious volition needs to have an anticipatory property in its own right, independent of and complementary to computational attention processes, to be retained by natural selection. Even if we do have free will, it would not otherwise have been selected for, all the way from founding eucaryotes to all metazoa, including Homo sapiens. The transactional interpretation, by involving future absorbers in the collapse process, provides just such an anticipatory feature.
It is one thing to have free will and it’s another to use free will for survival on the basis of (conscious) prediction, or anticipation. Our conscious brains are striving to be predictive to the extent that we are subject to flash-lag perceptual illusions where perceptual processes attempt, sometimes incorrectly, to predict the path of rapidly moving objects (Eagleman & Sejnowski 2000), so the question is pivotal. Anticipating future threats and opportunities is key to how we evolved as conscious organisms, and this is pivotal over short to immediate time scales, like the snake’s or tiger’s strike which we survive. Anticipating reality in the present is precisely what subjective consciousness is here to do.
The hardest problem of consciousness is thus that, to be conserved by natural selection, subjective consciousness (a) has to be volitional i.e. affect the world physically to result in natural selection and (b) it has to be predictive or anticipatory as well. Free-will without anticipation is as neutral to evolution as random behaviour, and it would not be selected for. If we are dealing with classical reality, we could claim this is merely a computational requirement, but why then do we have subjective experience at all? Why not just recursive predictive attention processes with no subjectivity?
Here is where the correspondence between sensitive dynamic instability at tipping points and quantum uncertainty comes into the picture. We know biology, and particularly brain function, is a dynamically unstable process, with sensitive instabilities that are fractal down to the quantum level of ion channels, including enzyme molecules whose active sites are enhanced by quantum tunnelling and the quantum parallelism facilitating protein folding and interactive molecular dynamics. These are recursively ongoing as we face endless existential tipping points in real life. We know that brain dynamics operating close to the edge of chaos is liable to quantum-sensitive dynamic crisis during critical decision-making uncertainties that do not have an obvious computational, cognitive, or reasoned disposition. We also know that at these points the very processes of sensitive dependence on existing conditions and stochastic resonance can allow effects at the micro level, approaching quantum sensitivity, to affect the outcome of global brain states.
And those with any rational insight can see that, for both theoretical and experimental reasons, proving classical causal closure of the physical universe in brain dynamics is an unachievable quest. Notwithstanding Libet’s attempt, there is no technological way to experimentally verify that the brain is a causally closed physical process dependent only on external input, and the claim flies in the face of the fractal molecular nature of biological processes, which are in critical transition at the quantum level. Nevertheless we can understand that subjective conscious volition cannot enter into causal conflict with brain processes that have already established an effective computational outcome, as we do when we reach a prevailing reasoned conclusion, so free will is effectively restricted to situations where the environmental circumstances are uncertain, or not effectively computable, as are interactions with multiple living agents, or perceived consciously to be anything but certain. This in turn means that the key role of free will is not applying it to rationally or emotionally foregone conclusions but to environmental and strategic uncertainties, especially involving other conscious agents, whose outcomes become part of quantum uncertainty itself.
The natural conclusion is that subjectively conscious intuitive intentional will has been conserved by evolution because it provides an evolutionary advantage at anticipating root uncertainties in the quantum universe, including environmental and contextual uncertainties which are themselves products of quantum uncertainty amplified by unstable interactive processes in the molecular universe. This seems counter-intuitive, because we tend to associate quantum uncertainty and the vagaries of fate with randomness, but this is no more scientifically established than causal closure of the universe in brain function. All the major events of history that are not foregone conclusions result from conscious intuitive intentional will applied to uncertainty, such as Nelson turning his blind eye to the telescope in the eventually successful Battle of Copenhagen. So the question remains that, when we turn to the role of subjective conscious volition in quantum uncertainty, this comes down not just to opening the box on Schrödinger’s cat, but to anticipating uncertain events more often than random chance would predict in real life.
That is where the transactional approach comes into its own, because, while the future at the time of casting the emission die is an indeterminate set of potential absorbers, the retro-causal information contained in the transaction is implicitly revealing which future absorbers are actually able to absorb the real emitted quantum, and hence information about the real state of the future universe, over and above its probabilities at emission. Therefore the transaction is carrying additional implicit “encoded” information about the actual future state of the universe and what its possibilities are, which can be critical for survival in natural selection. Just as the “transmission” of a detection to the other detector in an entanglement experiment cannot be used to transfer classical information faster than the speed of light, the same will apply to quantum transactions; but this doesn’t mean they are random or have no anticipatory value, just that they cannot be used for causal deduction.
Because the “holistic” nature of conscious awareness is an extension of the global unstable excitatory dynamics of individual eucaryote cells to brain dynamics, a key aspect of subjective consciousness may be that it becomes sensitive to the wave-particle properties of quantum transactions with the natural environment in the process of cellular quantum sentience, involving recursive sensitivity to quantum modes, including photons, phonons and molecular orbital effects constituting cellular vision, audition and olfaction. Expanded into brain processes, this cellular quantum dynamics then becomes integral to the binding of consciousness into a coherent whole.
If we view neurodynamics as a fully quantum process, in the most exotic quantum material in the universe, in which the wave aspects consist of recurrent parallel excitation modes representing the competing possibilities of response to environmental uncertainties, then when there is an open-and-shut case on logical or reasoned tactical grounds, this “classical” mode will win out pretty much in the manner of Edelman’s (1987) neural Darwinism. However, in terms of quantum evolution, in situations where subjective consciousness becomes critical to make an intuitive decision, the brain dynamic approaches an unstable tipping point, in which system uncertainty becomes pivotal (represented in instability of global states, which are in turn sensitive to fractal scales of instability down to the molecular level). Subjective consciousness then intervenes, causing an intuitive decision through a (type 1 von Neumann) process of wave function collapse of the actively superimposed transactional modes recurrently anticipating the immediate future.
From the inside, this feels like and IS an intuitive choice of "free-will" in autonomous subjective conscious volition over the physical universe. From the outside, this looks like collapse of an uncertain brain process to one of its anticipated eigenfunction states, which then becomes apparent in the outcome. There is a very deep mystery in this process, because the physical process looks and remains uncertain and indeterminate, but from inside, in complete contradiction, it looks and feels like the exercise of intuitive intentional will determining future physical outcomes. So in a fundamental way it is like a Schrödinger cat experiment in which the cat survives more often than not, i.e. we survive, except that the Geiger counter is not a single quantum tunnelling event but recursive interacting critical brain transition states. So we end up with the ultimate paradox of consciousness – how can we not only anticipate future outcomes that are quantum uncertain but capitalise on the ones that promote our survival, i.e. throw a live cat more often than chance would dictate!
This is the same dilemma that Symbiotic Existential Cosmology addresses in primal subjectivity because subjective anticipation is revealed to be a cosmological property and it is why subjective consciousness has efficacy of volition over the physical universe. From the physical point of view, causal closure of the brain is an undecidable proposition because neither can we prove causal closure of the quantum universe nor can we physically prove subjective consciousness has physical effect. On the other hand, as Cathy Reason’s theorem intimates, conscious self certainty of our autonomous agency implies we know we changed the universe. Certainty of will, as well as certainty of self. So the subjective perspective is certain and the objective perspective is undecidable. In exactly the same way, the cat paradox outcome is uncertain and can't be hijacked physically, but the autonomous agency resolving the uncertain brain state has confidence of overall efficacy. This is the key to consciousness, free-will and survival in the jungle when cognition stops dead because of all the other conscious agents rustling in the grass and threatening to strike, which are uncomputable because they too are conscious! It’s also the key to Psi, but in a more contingent way because Psi research is trying to pass this ability back into the physical, where it drifts towards the probability interpretation.
Consciousness is retained by evolution because it is a product of a Red Queen intuitive-cognitive [1] race between predators and prey in a similar way to the way sexuality and its asymmetric reproductive investment race between females and males has arisen from a self-perpetuating genetic race between parasites and hosts by creating individual variation, thus avoiding boom and bust epidemics.
Confirming this approach, Kauffman & Radin (2023) cite a variety of sources of evidence to propose a model whereby the world consists of two elements: ontologically real Possibles that do not obey Aristotle’s law of the excluded middle, and ontologically real Actuals that do. Based on this view, which bears resemblance to von Neumann (1955), Stapp (2007) and Rosenblum and Kuttner (2006), measurement that is registered by an observer’s mind converts Possibles into Actuals. This quantum-oriented approach raises the intriguing prospect that some aspects of mind may be quantum, and that mind may play an active role in the physical world.
Following Heisenberg (2007), we propose a non-substance dualism (Robinson, 2023). In this view, the world consists of both ontologically real Possibles (i.e., Res potentia, that do not obey the law of the excluded middle) and ontologically real Actuals (Res extensa, that do). Mind converts these Possibles acausally into Actuals, where by the term acausal we specifically mean without a physical cause. Such a “becoming” is not deductive. The “X is Possible” of Res potentia does not entail the “X is Actual” of Res extensa. Indeed, no deductive mechanism has been found since the foundations of QM, suggesting that such a mechanism may not exist. Heisenberg’s “Res potentia and Res extensa linked by measurement” interpretation explains five mysteries of quantum mechanics: 1) Why measurement of one of N entangled variables instantaneously alters the amplitudes of the remaining N − 1 variables; 2) Spatial non-locality; 3) Which-way information; 4) Null measurements; and 5) Why there are “no facts of the matter” between measurements (Kauffman, 2020; Manousakis, 2006).
They also propose that conscious experience exists as a manifestation of quantum reality operating in the brain in a way which enables conscious intentionality to affect the brain and physical world through our behaviour, thus providing life with a selective advantage for survival consistent with Symbiotic Existential Cosmology.
However, if the mind-matter interactions studies are valid, then a human can not only “try” to alter the outcome of a physical system by intentionally altering the probabilities of the outcomes of measurement, say by bending the Born rule, but their will can actually accomplish their desire. Thus, Mind trying and doing can alter the outcome of “actualization” to behave non-randomly. A responsible free will is not ruled out. … A further answer is a form of panpsychism where interacting quantum variables measure one another. There are grounds to hold this view of QM. It is consistent with the Strong Free Will Theorem that says that electrons “freely decide” to become Up or Down upon measurement (Conway and Kochen, 2009). … Mind, in short, may have had – and still have – an active role in the evolution of the world. That is, we propose that a partially quantum mind-body system allows a mind to have acausal consequences for the brain. In this case, mind is not merely epiphenomenal, and therefore mind can have evolved due to selective advantage (Kauffman & Roli 2022 – who state that qualia are experienced and arise with our collapse of the wave function).
Three years after John Cramer published the transactional interpretation, I wrote a speculative paper, “Dual-time Supercausality” (King 1989; see Vannini 2006), based on John’s description, which says many of the same things that later emerged in Ruth Kastner’s more comprehensive development. Summing up the main conclusions we have:
(1) Symmetric-Time: This mode of action of time involves a mutual space-time relationship between emitter and absorber. Symmetric-time determines which, out of the ensemble of possibilities predicted by the probability interpretation of quantum mechanics is the actual one chosen. Such a description forms a type of hidden-variable theory explaining the selection of unique reduction events from the probability distribution. We will call this bi-directional causality transcausality.
(2) Directed-time: Real quantum interaction is dominated by retarded-time, positive-energy particles. The selection of temporal direction is a consequence of symmetry-breaking, resulting from energy polarization, rather than time being an independent parameter. The causal effects of multi-particle ensembles result from this dominance of retarded radiation, as an aspect of symmetry-breaking.
Dual-time is thus a theory of the interaction of two temporal modes, one time-symmetric which selects unique events from ensembles, and the other time-directed which governs the consistent retarded actions. These are not contradictory. Each on their own form an incomplete description. Temporal causality is the macroscopic approximation of this dual theory under the correspondence principle. The probability interpretation governs the incompleteness of directed-causality to specify unique evolution in terms of initial conditions.
Quantum-consciousness has two complementary attributes, sentience and intent:
(a) Sentience represents the capacity to utilise the information in the advanced absorber waves and is implicitly transcausal in its basis. Because the advanced components of symmetric-time cannot be causally defined in terms of directed-time, sentience is complementary to physically-defined constraints.
(b) Intent represents the capacity to determine a unique outcome from the collection of such absorber waves, and represents the selection of one of many potential histories. Intent addresses the two issues of free-will and the principle of choice in one answer – free-will necessarily involves the capacity to select one out of many contingent histories and the principle of choice manifests the essential nature of free-will at the physical level.
The transactional approach has recently received wider investigation. Ruth Kastner (2021a,b) elucidates the relativistic transactional interpretation, which claims to resolve this through causal sets (Sorkin 2003) invoking a special-relativistic theory encompassing both real particle exchange and collapse:
In formal terms, a causal set C is a finite, partially ordered set whose elements are subject to a binary relation ≺ that can be understood as precedence; the element on the left precedes that on the right. It has the following properties:
(i) transitivity: (∀ x, y, z ∈ C)(x ≺ y ≺ z ⇒ x ≺ z)
(ii) irreflexivity: (∀ x ∈ C) ¬(x ≺ x)
(iii) local finiteness: (∀ x, z ∈ C)(cardinality { y ∈ C | x ≺ y ≺ z } < ∞)
Properties (i) and (ii) assure that the set is acyclic, while (iii) assures that the set is discrete. These properties yield a directed structure that corresponds well to temporal becoming, which Sorkin describes as follows:
In Sorkin’s construct, one can then have a totally ordered subset of connected links (as defined above), constituting a chain. In the transactional process, we naturally get a parent/child relationship with every transaction, which defines a link. Each actualized transaction establishes three things: the emission event E, the absorption event A, and the invariant interval I(E,A) between them, which is defined by the transferred photon. Thus, the interval I(E,A) corresponds to a link. Since it is a photon that is transferred, every actualized transaction establishes a null interval, i.e., ds² = c²t² − r² = 0. The emission event E is the parent of the absorption event A (and A is the child of E).
A major advantage of the causal set approach as proposed by Sorkin and collaborators … is that it provides a fully covariant model of a growing spacetime. It is thus a counterexample to the usual claim (mentioned in the previous section) that a growing spacetime must violate Lorentz covariance. Specifically, Sorkin shows that if the events are added in a Poissonian manner, then no preferred frame emerges, and covariance is preserved (Sorkin 2003, p. 9). In RTI, events are naturally added in a Poissonian manner, because transactions are fundamentally governed by decay rates (Kastner and Cramer, 2018).
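As a rough illustration of this construction (not code from Sorkin or Kastner), the sketch below Poisson-sprinkles events into a patch of 1+1 Minkowski space, defines x ≺ y when y lies in the causal future of x, and checks the three causal-set properties listed above; the density, region and units (c = 1) are illustrative assumptions.

```python
# Minimal causal-set sketch: Poisson-sprinkle events into a patch of 1+1
# Minkowski space (c = 1) and set x < y iff y lies in the causal future of x.
# The sprinkling density and region are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = rng.poisson(60)                        # Poissonian number of sprinkled events
t = rng.uniform(0.0, 1.0, n)               # time coordinates
r = rng.uniform(0.0, 1.0, n)               # space coordinates

dt = t[None, :] - t[:, None]               # dt[i, j] = t_j - t_i
dr = r[None, :] - r[:, None]
precedes = (dt > 0) & (dt**2 - dr**2 >= 0) # future-directed, timelike or null separation

P = precedes.astype(int)
assert not (((P @ P) > 0) & ~precedes).any()   # (i) transitivity
assert not precedes.diagonal().any()           # (ii) irreflexivity
# (iii) local finiteness is automatic for a finite sprinkling: every causal
# interval {y : x < y < z} contains finitely many elements.
interval_sizes = [(precedes[i] & precedes[:, k]).sum()
                  for i in range(n) for k in range(n) if precedes[i, k]]
print(f"{n} events; largest causal interval contains {max(interval_sizes, default=0)} elements")
```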
Kastner comments in private communication in relation to her development of the transactional interpretation:
The main problem with the standard formulation of QM is that consciousness is brought in as a kind of 'band-aid' that does not really work to resolve the Schrodinger's Cat and Wigner's Friend paradoxes. The transactional picture, by way of its natural non-unitarity (collapse under well-quantified circumstances), resolves this problem and allows room for consciousness to play a role as the acausal/volitional influence that corresponds to efficacy (Kastner 2016). My version of TI, however, is ontologically different from Cramer’s and it also is fully relativistic (Kastner 2021a,b). For specifics on why many recent antirealist claims about the world as alleged implications of Wigner's Friend are not sustainable, see Kastner (2021c). In particular, standard decoherence does not yield measurement outcomes, so one really needs real non-unitarity in order to have correspondence with experience. I have also shown that the standard QM formulation, lacking real non-unitarity, is subject to fatal inconsistencies (Kastner 2019, 2021d). These inconsistencies appear to infect Everettian approaches as well.
Kastner (2011) explains the arrow of time as a foundational quantum symmetry-breaking:
Since the direction of positive energy transfer dictates the direction of change (the emitter loses energy and the absorber gains energy), and time is precisely the domain of change (or at least the construct we use to record our experience of change), it is the broken symmetry with respect to energy propagation that establishes the directionality or anisotropy of time. The reason for the ‘arrow of time’ is that the symmetry of physical law must be broken: ‘the actual breaks the symmetry of the potential.’ It is often viewed as a mystery that there are irreversible physical processes and that radiation diverges toward the future. The view presented herein is that, on the contrary, it would be more surprising if physical processes were reversible, because along with that reversibility we would have time-symmetric (isotropic) processes, which would fail to transfer energy, preclude change, and therefore render the whole notion of time meaningless.
Kastner is a possibilist who argues that offer waves (OWs) and confirmation waves (CWs) are possibilities that are "real." She says that they are less real than actual empirically measurable events, but more real than an idea or concept in a person's mind. She suggests the alternate term "potentia," Aristotle's term that she found Heisenberg had cited. For Kastner, the possibilities are physically real as compared to merely conceptually possible ideas that are consistent with physical law. But she says the "possibilities" described by offer and confirmation waves are "sub-empirical" and pre-spatiotemporal (i.e., they have not shown up as actual in spacetime). She calls these "incipient transactions.” She calls for a new metaphysical category to describe "not quite actual...possibilities."
Kastner (2012, 2014b) sets out the basis for extending the possibilist transactional interpretation, or PTI, to the relativistic domain in the relativistic transactional interpretation, or RTI. This modified version proposes that offer and confirmation waves (OW and CW) exist in a sub-empirical, pre-spacetime realm (PST) of possibilities, and that it is actualised transactions which establish empirical spatiotemporal events. PTI proposes a growing universe picture, in which actualised transactions are the processes by which spacetime events are created from a substratum of quantum possibilities. The latter are taken as the entities described by quantum states (and their advanced confirmations); and, at a subtler relativistic level, the virtual quanta.
The basic idea is that offers and confirmations are spontaneously elevated forms of virtual quanta, where the probability of elevation is given by the decay rate for the process in question. In the direct action picture of PTI, an excited atom decays because one of the virtual photon exchanges ongoing between the excited electron and an external absorber (e.g. electron in a ground state atom) is spontaneously transformed into a photon offer wave that generates a confirming response. The probability for this occurrence is the product of the QED coupling constant α and the associated transition probability. In quantum field theory terms, the offer wave corresponds to a ‘free photon’ or excited state of the field, instantiating a Fock space state (Kastner 2014b).
In contrast with standard QFT, where the amplitudes over all interactions are added and then squared under the Born rule, in PTI the absorption of the offer wave generates a confirmation (the ‘response of the absorber’), an advanced field. This field can be consistently reinterpreted as a retarded field from the vantage point of an ‘observer’ composed of positive energy and experiencing events in a forward temporal direction. The product of the offer (represented by the amplitude) and the confirmation (represented by the amplitude’s complex conjugate) corresponds to the Born Rule.
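A minimal numerical sketch (an illustration only, not Kastner's actual RTI formalism) of how the product of an offer amplitude and its conjugate confirmation recovers the Born-rule probabilities:

```python
import numpy as np

# Illustration only: two possible outcomes with offer-wave amplitudes psi_i.
offer = np.array([0.6, 0.8j])            # amplitudes (normalised: 0.36 + 0.64 = 1)
confirmation = offer.conj()              # the absorber's advanced "echo", psi_i*
weights = (offer * confirmation).real    # incipient-transaction weights |psi_i|^2
print(weights, weights.sum())            # -> [0.36 0.64] 1.0
```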
Kastner (2014a, 2021c,d) deconstructs decoherence as well as quantum Darwinism, refuting claims that the emergence of classicality proceeds in an observer-independent manner in a unitary-only dynamics, noting that quantum Darwinism holds that the emergence of classicality is not dependent on any inputs from observers, but that it is the classical experiences of those observers that the decoherence program seeks to explain from first principles:
“in the Everettian picture, everything is always coherently entangled, so pure states must be viewed as a fiction -- but that means that it is also fiction that the putative 'environmental systems' are all randomly phased. In helping themselves to this phase randomness, Everettian decoherentists have effectively assumed what they are trying to prove: macroscopic classicality only ‘emerges’ in this picture because a classical, non-quantum-correlated environment was illegitimately put in by hand from the beginning. Without that unjustified presupposition, there would be no vanishing of the off-diagonal terms”
She extends this to an uncanny observation concerning the Everett view:
"That is, MWI does not explain why Schrodinger’s Cat is to be viewed as ‘alive’ in one world and ‘dead’ in another, as opposed to ‘alive + dead’ in one world and ‘alive – dead’ in the other.”
Kastner (2016a) notes that the symmetry-breaking of the advanced waves provides an alternative explanation to von Neumann’s citing of the consciousness of the observer in quantum measurement:
Von Neumann noted that this Process 1 transformation is acausal, nonunitary, and irreversible, yet he was unable to explain it in physical terms. He himself spoke of this transition as dependent on an observing consciousness. However, one need not view the measurement process as observer-dependent. … The process of collapse precipitated in this way by incipient transactions [competing probability projection operator weightings of the] absorber response(s) can be understood as a form of spontaneous symmetry breaking.
Kastner & Cramer (2018) confirm this picture:
And since not all competing possibilities can be actualized, symmetry must be broken at the spacetime level of actualized events. The latter is the physical correlate of non-unitary quantum state reduction.
However, in Kastner (2016b), she considers observer participation as integral, rejecting two specific critiques of libertarian, agent-causal free will: (i) that it must be anomic or “antiscientific”; and (ii) that it must be causally detached from the choosing agent. She asserts that notwithstanding the Born rule, quantum theory may constitute precisely the sort of theory required for a nomic grounding of libertarian free will.
Kastner cites Freeman Dyson’s comment rejecting epiphenomenalism:
“I think our consciousness is not just a passive epiphenomenon carried along by the chemical events in our brains, but is an active agent forcing the molecular complexes to make choices between one quantum state and another. In other words, mind is already inherent in every electron, and the processes of human consciousness differ only in degree but not in kind from the processes of choice between quantum states which we call ‘chance’ when they are made by electrons.”
Kastner then proposes not just a panpsychic quantum reality but a pan-volitional basis for it:
Considering the elementary constituents of matter as imbued with even the minutest propensity for volition would, at least in principle, allow the possibility of a natural emergence of increasingly efficacious agent volition as the organisms composed by them became more complex, culminating in a human being. And allowing for volitional causal agency to enter, in principle, at the quantum level would resolve a very puzzling aspect of the indeterminacy of the quantum laws–the seeming violation of Curie’s Principle in which an outcome occurs for no reason at all. This suggests that, rather than bearing against free will, the quantum laws could be the ideal nomic setting for agent-causal free will.
Kastner, Kauffman & Epperson (2018) formalise the relationship between potentialities and actualities into a modification of Descartes’ res cogitans (purely mental substance) and res extensa (material substance) to res potentiae and res extensa, comprising the potential and actual aspects of ontological reality. Unlike Cartesian dualism these are not separable or distinct but are manifest in all situations where the potential becomes actual, particularly in the process of quantum measurement in PTI, citing McMullin (1984) on the limits of imagination of the res potentiae:
… imaginability must not be made the test for ontology. The realist claim is that the scientist is discovering the structures of the world; it is not required in addition that these structures be imaginable in the categories of the macroworld.
They justify this by noting that human evolutionary survival has depended on dealing with the actual, so the potential may not be imaginable in our conscious frame of reference. However, one can note that the strong current of animism in human cultural history suggests a strong degree of focus on the potential, and on its capacity to become actual in hidden, unpredictable sources of accident or misfortune. In addition to just such unexpected real-world examples, they note the applicability of this to a multiplicity of quantum phenomena:
Thus, we propose that quantum mechanics evinces a reality that entails both actualities (res extensa) and potentia (res potentia), wherein the latter are as ontologically significant as the former, and not merely an epistemic abstraction as in classical mechanics. On this proposal, quantum mechanics IS about what exists in the world; but what exists comprises both possibles and actuals. Thus, while John Bell’s insistence on “beables” as opposed to just “observables” constituted a laudable return to realism about quantum theory in the face of growing instrumentalism, he too fell into the default actualism assumption; i.e., he assumed that to ‘be’ meant ‘to be actual,’ so that his ‘beables’ were assumed to be actual but unknown hidden variables.
What the EPR experiments reveal is that while there is, indeed, no measurable nonlocal, efficient causal influence between A and B, there is a measurable, nonlocal probability conditionalization between A and B that always takes the form of an asymmetrical internal relation. For example, given the outcome at A, the outcome at B is internally related to that outcome. This is manifest as a probability conditionalization of the potential outcomes at B by the actual outcome at A.
Nonlocal correlations such as those of the EPR entanglement experiment below can thus be understood as a natural, mutually constrained relationship between the kinds of spacetime actualities that can result from a given possibility – which itself is not a spacetime entity. She quotes Anton Zeilinger (2016):
…it appears that on the level of measurements of properties of members of an entangled ensemble, quantum physics is oblivious to space and time.
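The asymmetric probability conditionalization described above can be illustrated with a short numerical sketch (standard quantum predictions for the spin singlet, not drawn from the cited papers): Alice's actual outcome conditions the probabilities of Bob's outcomes, yet Bob's marginal statistics are unchanged whichever setting Alice uses, so no signal passes between them.

```python
import numpy as np

def joint_prob(A, B, a, b):
    # Singlet-state joint probability P(A, B | a, b) for outcomes A, B = +/-1
    # along analyser angles a, b: P = (1 - A*B*cos(a - b)) / 4
    return (1 - A * B * np.cos(a - b)) / 4

a, a_alt, b = 0.0, np.pi / 3, np.pi / 4
outcomes = (+1, -1)

# No signalling: Bob's marginal P(B | b) is 1/2 whichever setting Alice chooses.
for alice_setting in (a, a_alt):
    marginal = {B: sum(joint_prob(A, B, alice_setting, b) for A in outcomes)
                for B in outcomes}
    print(f"Alice setting {alice_setting:.2f} -> Bob marginal {marginal}")

# Conditionalization: given Alice's actual outcome, Bob's probabilities shift.
for A in outcomes:
    p_A = sum(joint_prob(A, B, a, b) for B in outcomes)      # always 1/2
    conditional = {B: joint_prob(A, B, a, b) / p_A for B in outcomes}
    print(f"Given Alice got {A:+d} -> Bob conditional {conditional}")
```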
Kastner (2021b) considers how the spacetime manifold emerges from a quantum substratum through the transactional process (fig 72(6)), in which spacetime events and their connections are established. The usual notion of a background spacetime is replaced by the quantum substratum, comprising quantum systems with non-vanishing rest mass, corresponding to internal periodicities that function as internal clocks defining proper times and, in turn, inertial frames that are not themselves aspects of the spacetime manifold.
Cramer (2022) notes a possible verifiable source of advanced waves:
In the 1940s, young Richard Feynman and his PhD supervisor John Wheeler decided to take the advanced solution seriously and to use it to formulate a new electromagnetism, now called Wheeler-Feynman absorber theory (WF). WF assumes that an oscillating electric charge produces advanced and retarded waves with equal strengths. However, when the retarded wave is subsequently absorbed (in the future), a cancellation occurs that erases all traces of the advanced waves and their time-backward “advanced effects.” WF gives results and predictions identical to those of conventional electromagnetic theory. However, if future retarded-wave absorption is somehow incomplete, WF suggests that this absorption deficiency might produce experimentally observable advanced effects.
Bajlo (2017) made measurements on cold, clear, dry days, observing as the Earth rotated and the antenna axis swept across the galactic center, where wave-absorption variations might occur. In a number of these measurements, he observed strong advanced signals (6.94 to 26.5 standard deviations above noise) that arrived at the downstream antenna a time 2D/c before the main transmitted pulse signal. Variations in the advanced-signal amplitude as the antenna axis swept across the galactic center were also observed: the amplitude fell to as little as 50% of the off-center maximum when pointed directly at the galactic center (where more absorption is expected). These results constitute a credible observation of advanced waves.
Fig 74: Wheeler's (1983) delayed-choice experiment shows that light from a distant quasar, gravitationally lensed around an intervening galaxy, can be determined to have passed one way around it, the other way, or in a superposition of both, depending on whether a which-path detection of one or other particle, or an interference measurement, is made when it reaches Earth. (b, c) An experimental implementation of Wheeler's idea along a satellite-ground interferometer that extends for thousands of kilometres in space (Vedovato et al. 2017), using shutters on an orbiting satellite.
Superdeterminism: There is another interpretation of quantum reality called super-determinism (Hossenfelder & Palmer 2020), which has an intriguing relationship with retro-causality and can still admit free will, despite the seeming contradiction in the title. Bell's theorem assumes that the measurements performed at each detector can be chosen independently of each other and of the hidden variables that determine the measurement outcome: ρ(λ|a,b) = ρ(λ).
In a super-deterministic theory this relation is not fulfilled, ρ(λ|a,b) ≠ ρ(λ), because the hidden variables are correlated with the measurement settings. Since the choice of measurements and the hidden variable are predetermined, the results at one detector can depend on which measurement is done at the other without any need for information to travel faster than the speed of light. The assumption of statistical independence is sometimes referred to as the free choice or free will assumption, since its negation implies that human experimentalists are not free to choose which measurement to perform. But this is incorrect according to Hossenfelder (2021): what the outcome depends on are the actual measurements made, not the experimenter's free will. For every possible pair of measurements a, b there is a predefined trajectory determined both by the particle emission and the measurement at the time absorption takes place. Thus in general the experimenter still has the free will to choose a, b, or even to change the detector set-up, as in the Wheeler delayed-choice experiment in fig 74, and science proceeds as usual, but the outcome depends on the actual measurements made. In principle, super-determinism is untestable, as the correlations can be postulated to exist since the Big Bang, making the loophole impossible to eliminate. However, it has an intimate relationship with the transactional interpretation and its implicit retro-causality, because the transaction includes the absorbing conditions, so the two are actually compatible.
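To make the settings-dependence concrete, here is a deliberately simple toy sketch (an illustration only, not Hossenfelder & Palmer's actual model): the "hidden variable" is a pre-assigned outcome pair whose distribution depends on the settings actually used, violating statistical independence, and this alone is enough to reproduce the singlet correlations and exceed the Bell-CHSH bound of 2 without any superluminal influence.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_trial(a, b):
    # Toy superdeterministic model: lambda is the pre-assigned outcome pair (A, B),
    # drawn from a distribution that depends on the settings actually used,
    # i.e. rho(lambda | a, b) != rho(lambda). Each outcome is locally a fair coin,
    # yet the pair reproduces the singlet correlation E(a, b) = -cos(a - b).
    p_same = (1 - np.cos(a - b)) / 2
    A = rng.choice([-1, 1])
    B = A if rng.random() < p_same else -A
    return A, B

def E(a, b, n=100_000):
    data = np.array([run_trial(a, b) for _ in range(n)])
    return np.mean(data[:, 0] * data[:, 1])

a0, a1 = 0.0, np.pi / 2             # Alice's two settings
b0, b1 = np.pi / 4, 3 * np.pi / 4   # Bob's two settings

# CHSH combination: a local model obeying statistical independence satisfies
# |S| <= 2, but this settings-dependent model reaches the quantum value ~2.83.
S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(f"|S| = {abs(S):.2f}")
```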
In the 1980s, John Stewart Bell discussed superdeterminism in a BBC interview:
There is a way to escape the inference of superluminal speeds and spooky action at a distance. But it involves absolute determinism in the universe, the complete absence of free will. Suppose the world is super-deterministic, with not just inanimate nature running on behind-the-scenes clockwork, but with our behaviour, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined, including the "decision" by the experimenter to carry out one set of measurements rather than another, the difficulty disappears. There is no need for a faster than light signal to tell particle A what measurement has been carried out on particle B, because the universe, including particle A, already "knows" what that measurement, and its outcome, will be.
Although he acknowledged the loophole, he also argued that it was implausible. Even if the measurements performed are chosen by deterministic random number generators, the choices can be assumed to be "effectively free for the purpose at hand," because the machine's choice is altered by a large number of very small effects. It is unlikely for the hidden variable to be sensitive to all of the same small influences that the random number generator was.
Fig 74a: A hypothetical depiction of superdeterminism in which photons from the distant galaxies Sb and Sc are used to control the orientation of the polarization detectors α and β just prior to the arrival of entangled photons Alice and Bob.
Nobel Prize in Physics winner Gerard 't Hooft discussed this loophole with John Bell in the early 1980s:
“I raised the question: Suppose that also Alice's and Bob's decisions have to be seen as not coming out of free will, but being determined by everything in the theory. John said, well, you know, that I have to exclude. If it's possible, then what I said doesn't apply. I said, Alice and Bob are making a decision out of a cause. A cause lies in their past and has to be included in the picture.”
According to the physicist Anton Zeilinger, also a Nobel winner, if superdeterminism is true, some of its implications would bring into question the value of science itself by destroying falsifiability:
[W]e always implicitly assume the freedom of the experimentalist... This fundamental assumption is essential to doing science. If this were not true, then, I suggest, it would make no sense at all to ask nature questions in an experiment, since then nature could determine what our questions are, and that could guide our questions such that we arrive at a false picture of nature.
't Hooft (2016) has referred to his cellular automaton model of quantum mechanics as superdeterministic:
The Cellular Automaton Theory (CAT) assumes that, once a universal Schrödinger equation has been identified that encapsulates all conceivable phenomena in the universe (a Grand Unified Theory, or a Theory for Everything), it will feature an ontological basis that maps the system into a classical automaton. ... The universe is in a single ontological state, never in a superposition of such states, but, whenever we use our templates (that is, when we perform a conventional quantum mechanical calculation), we use superpositions just because they are mathematically convenient. Note that, because the superpositions are linear, our templates obey the same Schrödinger equation as the ontological states do.
Sabine Hossenfelder (2020) points out exactly how superdeterminism can violate statistical independence:
I here want to explain how the strangeness disappears if one is willing to accept that one of the assumptions we have made about quantum mechanics is not realized in nature: Statistical Independence. Loosely speaking, Statistical Independence means that the degrees of freedom of spatially separated systems can be considered uncorrelated, so in a superdeterministic model they are generically correlated, even in absence of a common past cause. The way that Statistical Independence makes its appearance in superdeterminism is that the probability distribution of the hidden variables given the detector settings ρ(λ|θ) is not independent of the detector settings, i.e. ρ(λ|θ) ≠ ρ(λ). What this means is that if an experimenter prepares a state for a measurement, then the outcome of the measurement will depend on the detector settings. The easiest way to think of this is considering that both the detector settings, θ, and the hidden variables, λ, enter the evolution law of the prepared state. As a consequence, θ and λ will generally be correlated at the time of measurement, even if they were uncorrelated at the time of preparation. Superdeterminism, then, means that the measurement settings are part of what determines the outcome of the time-evolution of the prepared state. What does it mean to violate Statistical Independence? It means that fundamentally everything in the universe is connected with everything else, if subtly so. You may be tempted to ask where these connections come from, but the whole point of superdeterminism is that this is just how nature is. It's one of the fundamental assumptions of the theory, or rather, you could say one drops the usual assumption that such connections are absent. The question for scientists to address is not why nature might choose to violate Statistical Independence, but merely whether the hypothesis that it is violated helps us to better describe observations.
However, note that the "toy" superdeterministic hidden-variable theory of Donadi & Hossenfelder (2022) uses “the master equation for one of the most common examples of decoherence – amplitude damping in a two-level system”. But decoherence is a theory in which an additional term is added to model the increasing probability of a quantum being disturbed by another quantum, and it literally uses forced damping to suppress the entangled “off-diagonal” components of the density matrix.
Superdeterminism is distinct from theories which attempt to assert boundary conditions on the cosmic origin combined with time asymmetric laws about the cosmic wave function (Chen 2023), which would result in strong determinism, the notion that despite quantum uncertainty, the entire future of the universe is predetermined.
Copenhagen's Hidden-variable Waterloo: It's one thing to be satisfied with the probability interpretation of quantum theory as being simply our state of knowledge of the system, but the universe has to make a unique decision in each case. By comparison with hidden variable theories, the null assumption of the Copenhagen probabilistic view, as the intrinsic limit of our physical knowledge, has the intractable difficulty that the universe must mount an actual process which, in the absence of a natural hidden-variable dynamic, can, in Einstein's words, "play dice with the universe". This would require extreme fine tuning in every wave-particle reduction: inducing a real absorption by a given atom, transforming the wave-function probability into certainty and allowing a specific outcome in each case, while causing the probability of absorption everywhere else in the universe to become instantaneously zero, without any means of physical support or transmission of the change, so as to arbitrarily ensure convergence to the probability interpretation across space-time, contradicting Lorentz invariance. By contrast, the genuinely interactive processes of transactions are natural underlying processes that could easily induce a complex system process ergodic to the probability interpretation, without having to do global probability balancing across space-like and time-like intervals.
Schreiber (1995) sums up the case for consciousness collapsing the wave function as follows:
“The rules of quantum mechanics are correct but there is only one system which may be treated with quantum mechanics, namely the entire material world. There exist external observers which cannot be treated within quantum mechanics, namely human (and perhaps animal) minds, which perform measurements on the brain causing wave function collapse.”
Henry Stapp’s (2001) comment is very pertinent to the cosmology I am propounding, because it implies the place where collapse occurs lies in the brain making quantum measurements of its own internal states:
“From the point of view of the mathematics of quantum theory it makes no sense to treat a measuring device as intrinsically different from the collection of atomic constituents that make it up. A device is just another part of the physical universe... Moreover, the conscious thoughts of a human observer ought to be causally connected most directly and immediately to what is happening in his brain, not to what is happening out at some measuring device... Our bodies and brains thus become ... parts of the quantum mechanically described physical universe. Treating the entire physical universe in this unified way provides a conceptually simple and logically coherent theoretical foundation... “
Entanglement, Measurement and Phase Transition
A flurry of theoretical and experimental research has uncovered a strange new face of entanglement, one that shows itself not in pairs, but in constellations of particles (Wood 2023). Entanglement naturally spreads through a group of particles, establishing an intricate web of contingencies. But if you measure the particles frequently enough, destroying entanglement in the process, you can stop the web from forming. In 2018, three groups of theorists (Chan et al. 2019, Li et al. 2018, Skinner et al. 2019) showed that these two states — web or no web — are reminiscent of familiar states of matter such as liquid and solid. But instead of marking a transition between different structures of matter, the shift between web and no web indicates a change in the structure of information.
“This is a phase transition in information; it’s where the properties of information — how information is shared between things — undergo a very abrupt change.” – Brian Skinner
Fig 74b: Entanglement phase transition and measurement.
More recently, a separate trio of teams tried to observe that phase transition in action (Choi et al. 2020). They performed a series of meta-experiments to measure how measurements themselves affect the flow of information. In these experiments, they used quantum computers to confirm that a delicate balance between the competing effects of entanglement and measurement can be reached. The transition’s discovery has launched a wave of research into what might be possible when entanglement and measurement collide. Matthew Fisher, a condensed matter physicist at the University of California, Santa Barbara, started studying the interplay of measurement and entanglement because he suspects that both phenomena could play a role in human cognition.
Since a measurement tends to destroy the "quantumness" of a system, it constitutes the link between the quantum and classical world. And in a large system of quantum bits of information, or qubits, the effect of measurements can induce emergence of new phases of quantum information. When the qubits interact with one another, their information becomes shared non-locally in an entangled state, but under measurement, the entanglement is destroyed. The battle between measurement and interactions leads to two distinct phases: one where interactions dominate and entanglement is widespread, and one where measurements dominate, and entanglement is suppressed. Roushan et al. (2023) have now observed the crossover between these two regimes in a system of up to 70 qubits and also saw signatures of a novel form of "quantum teleportation"—in which an unknown quantum state is transferred from one set of qubits to another — that emerges as a result of these measurements. When measurements dominated over interactions (the "disentangling phase"), the strands of the web remained relatively short. The probe qubit was only sensitive to the noise of its nearest qubits. In contrast, when the measurements were weaker and entanglement was more widespread (the "entangling phase") the probe was sensitive to noise throughout the entire system. The crossover between these two sharply contrasting behaviours is a signature of the sought-after measurement-induced phase transition.
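A rough toy simulation (an illustrative sketch only, not the Roushan et al. protocol) shows the competition described above: random entangling gates spread entanglement while random projective measurements prune it, and the surviving half-chain entanglement entropy differs sharply between weak and strong monitoring.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # number of qubits in the toy chain

def random_two_qubit_unitary():
    # Approximately Haar-random 4x4 unitary via QR decomposition
    z = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def apply_two_qubit(state, u, i):
    # Apply a 4x4 unitary to qubits i, i+1 of an n-qubit state vector
    psi = np.moveaxis(state.reshape((2,) * n), (i, i + 1), (0, 1)).reshape(4, -1)
    psi = u @ psi
    psi = np.moveaxis(psi.reshape((2, 2) + (2,) * (n - 2)), (0, 1), (i, i + 1))
    return psi.reshape(-1)

def measure(state, i):
    # Projective Z measurement on qubit i with Born-rule probabilities
    psi = np.moveaxis(state.reshape((2,) * n), i, 0).reshape(2, -1).copy()
    p0 = np.sum(np.abs(psi[0]) ** 2)
    outcome = 0 if rng.random() < p0 else 1
    psi[1 - outcome] = 0.0
    psi /= np.linalg.norm(psi)
    return np.moveaxis(psi.reshape((2,) * n), 0, i).reshape(-1)

def half_chain_entropy(state):
    # Von Neumann entanglement entropy between the two halves of the chain
    s = np.linalg.svd(state.reshape(2 ** (n // 2), -1), compute_uv=False)
    p = s ** 2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

for p_meas in (0.05, 0.5):                      # weak vs strong monitoring
    state = np.zeros(2 ** n, complex)
    state[0] = 1.0
    for layer in range(4 * n):                  # brick-wall circuit of random gates
        for i in range(layer % 2, n - 1, 2):
            state = apply_two_qubit(state, random_two_qubit_unitary(), i)
        for i in range(n):                      # monitor each qubit with prob p_meas
            if rng.random() < p_meas:
                state = measure(state, i)
    print(f"measurement rate {p_meas}: half-chain entropy "
          f"~ {half_chain_entropy(state):.2f} bits (one realisation)")
```

A single realisation only illustrates the tendency; locating the transition itself would require averaging many circuit realisations and system sizes.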
Matsushita & Hofmann (2023) show this also depends on the strength of the back-action on the system of the measurement interaction. Eigenvalues emerge when the quantum interferences between different back-actions correspond to a Fourier transform in the back-action parameter. In the limit of weak interactions, the fluctuations of the system dynamics are negligibly small and the meter shift can be determined from the Hamilton-Jacobi equation, a classical differential equation expressing the relation between a physical property and the dynamics associated with it. When the measurement interaction is stronger, complicated quantum interference effects between different system dynamics are observed. Since the meter state has to be in a superposition of eigenstates |b⟩, the back-action effects are usually interpreted as a randomisation of the phases of the eigenstates |a⟩, resulting in the decoherence associated with a measurement of Â. Fully resolved measurements in this sense require a complete randomisation of the system dynamics. This corresponds to a superposition of all possible system dynamics, where quantum interference effects select only those components of the quantum process that correspond to the eigenvalues of the physical property.
“Our results show that the physical reality of an object cannot be separated from the context of all its interactions with the environment, past, present and future, providing strong evidence against the widespread belief that our world can be reduced to a mere configuration of material building blocks” – Holger Hofmann.
The Greenberger–Horne–Zeilinger state (Greenberger, Horne & Zeilinger 1989, Mermin 1990) is one of several three-particle entanglements that have become pivotal in quantum computing (Hussein et al. 2023). There is no standard measure of multi-partite entanglement because different, not mutually convertible, types of multi-partite entanglement exist. Nonetheless, many measures define the GHZ state to be a maximally entangled state. The GHZ state and the W state represent two non-biseparable classes of 3-qubit states, which cannot be transformed (not even probabilistically) into each other by local quantum operations. This three-particle entanglement problem is reminiscent of classical gravitation, which has a two-body inverse square law that in the three-body problem becomes intractably complex and chaotic, as Henri Poincaré found out. There is no general closed-form solution to the three-body problem, i.e. no general solution that can be expressed in terms of a finite number of standard mathematical operations.
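For concreteness, a small sketch (standard textbook states, written in plain numpy) constructs the GHZ and W states and compares the entanglement each single qubit shares with the rest; the two classes carry genuinely different multi-partite entanglement even though both are entangled.

```python
import numpy as np

def ket(bits):
    # Computational basis state |bits> as a state vector of dimension 2^len(bits)
    v = np.zeros(2 ** len(bits), complex)
    v[int(bits, 2)] = 1.0
    return v

ghz = (ket("000") + ket("111")) / np.sqrt(2)
w = (ket("001") + ket("010") + ket("100")) / np.sqrt(3)

def one_qubit_entropy(state):
    # Entanglement entropy between qubit 0 and the other two (Schmidt decomposition)
    s = np.linalg.svd(state.reshape(2, 4), compute_uv=False)
    p = s[s > 1e-12] ** 2
    return float(-np.sum(p * np.log2(p)))

print("GHZ one-qubit entanglement entropy:", round(one_qubit_entropy(ghz), 3))  # 1.0 bit
print("W   one-qubit entanglement entropy:", round(one_qubit_entropy(w), 3))    # ~0.918 bit
```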
Measuring Quantumness: Entanglement, Magic and Interactivity
Understanding how quantum entanglement can open new capacities for quantum computing (Wood 2023b), and how biological conscious brains can anticipate reality, revolves around the question of what kinds of quantum system can or can’t be simulated by classical computers in polynomial time without suffering an exponential runaway, as illustrated in Shor’s (1994) quantum algorithm for cracking encryption problems.
Fig 74c: Using symmetry-protected magic to study the complexity of symmetry-protected topological (SPT) phases of matter (Ellison et al. 2021).
The ability of subsequently discovered classical computational algorithms to classically simulate quantum computing seemed like a bit of a miracle, but the first successful classical algorithm (Gottesman 1998) couldn’t handle all quantum circuits, just those that stuck to Clifford gates. Jozsa & Linden (2002) then proved that, so long as their algorithm simulated a circuit that didn’t entangle qubits, it could handle larger and larger numbers of qubits without taking an exponentially longer time, showing entanglement itself was a measure of quantumness.
Then entered “quantum magic”. If you added a “T gate,” a seemingly innocuous operation that rotates a qubit in a particular way, their algorithm would choke on it. The T gate seemed to manufacture something intrinsically quantum that can’t be simulated on a classical computer. Bravyi and Gosset in 2016 developed a classical algorithm for simulating so-called low-magic circuits, giving the quantum essence produced by the forbidden T-gate rotation the catchy name: magic. But magic is not just confined to arcane quantum computing configurations. Ellison et al. (2021) identified certain phases of quantum matter that are guaranteed to have magic, just as many phases of matter have particular patterns of entanglement.
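A minimal check (a numpy sketch of the standard definition, not Bravyi and Gosset's algorithm) of what makes the T gate special: Clifford gates such as H and S map Pauli operators to Pauli operators under conjugation, which is what lets the Gottesman-Knill bookkeeping stay classical, while the T gate does not.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = 1j * X @ Z
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])

def is_pauli_up_to_phase(m):
    # True if m equals a Pauli matrix times a unit complex phase
    for p in (I, X, Y, Z):
        idx = np.unravel_index(np.argmax(np.abs(p)), p.shape)
        phase = m[idx] / p[idx]
        if np.isclose(abs(phase), 1) and np.allclose(m, phase * p):
            return True
    return False

for name, g in (("H", H), ("S", S), ("T", T)):
    clifford = all(is_pauli_up_to_phase(g @ p @ g.conj().T) for p in (X, Z))
    print(f"{name} maps Paulis to Paulis (Clifford): {clifford}")
# Expected: H True, S True, T False -- the T gate injects "magic"
```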
Yet another form of quantumness had already been discovered earlier. Much as Gottesman-Knill had focused on circuits without entangling gates, and Bravyi and Gosset could cut through circuits without too many T gates, Valiant’s (2001) algorithm was restricted to circuits that lacked the “swap gate” – an operation that takes two qubits and exchanges their positions. As long as you don’t exchange qubits, you can entangle them and infuse them with as much magic as you like, and you’ll still find yourself on yet another distinct classical island. But as soon as you start shuffling qubits around, you can work wonders beyond the ability of any classical computer.
In 1984, Valiant’s “probably approximately correct” (PAC) model mathematically defined the conditions under which a mechanistic system could be said to “learn” information. Valiant (2013) generalised his PAC learning framework to encompass biological evolution as well. He broadened the concept of an algorithm into an “ecorithm,” which is a learning algorithm that “runs” on any system capable of interacting with its physical environment. Algorithms apply to computational systems, but ecorithms can apply to biological organisms or entire species. The concept draws a computational equivalence between the way that individuals learn and the way that entire ecosystems evolve. In both cases, ecorithms describe adaptive behaviour in a purely mechanistic way:
"An ecorithm is an algorithm, but its performance is evaluated against input it gets from a rather uncontrolled and unpredictable world. And its goal is to perform well in that same complicated world." (Leslie Valiant).
Valiant is a devoted mechanist who sees no difference between the conscious brain and AI, reducing everything to computational mathematics. When asked (Pavlus 2016): “What if the ecorithms governing evolution and learning are unlearnable?” He acknowledges: “It’s a logical possibility, but I don’t think it’s likely at all. I think it’s going to be something pretty tangible and reasonably easy to understand. We can ask the same question about fundamental unsolved problems in mathematics”.
Terhal and DiVincenzo (2001) almost immediately uncovered the source of that power. They showed that Valiant’s swap-gate-free “matchgate” circuits, were simulating a well-known class of physics problems, similar to how computers simulate growing galaxies or nuclear reactions. Matchgate circuits simulate a group of fermions. When swap gates are not used, the simulated fermions are noninteracting, or “free.” Problems involving free electrons are relatively easy for physicists to solve, but when swap gates are used, the simulated fermions interact, crashing together and doing other complicated things. These problems are extremely hard, if not unsolvable. Conceptually, this resource corresponds to “interactivity” — or how much the simulated fermions can sense each other.
In the 1990s, the physical ingredient that made quantum computers powerful seemed obvious. It had to be entanglement, the “spooky” quantum link between distant particles that Erwin Schrödinger himself identified as “the characteristic trait of quantum mechanics.” When Gottesman developed his method of simulating Clifford circuits, he based it on the "operator" quantum mechanics developed by Werner Heisenberg. Restricting one’s view to free fermions involves viewing quantum mechanics through yet another mathematical lens. Each captures certain aspects of quantum states, but at the price of garbling some other quantum property.
If a collection of qubits is largely unentangled, has little magic, or simulates a bunch of nearly free fermions, then researchers know they can reproduce its output on a classical laptop. Any quantum circuit with a low score on one of these three quantum metrics lies in the shallows just off the shores of a classical island in the space of possible phases of matter (Wolchover 2018) and won’t be the next Shor’s algorithm.
Fig75: (1) Quantum Erasure shows it is also possible to 'uncollapse' or erase such losses of entangled correlation by re-interfering the wave functions so we can no longer tell the difference. The superposition choices of the delayed choice experiment fig 74, also do this. Erasure successfully recreates the lost correlations, detecting information about one of the particles and then erasing it again by re-interfering it back into the shared wave function provided we use none of its information. Pairs of identically polarised correlated photons produced by a 'down-converter', bounce off mirrors, converge again at a beam splitter and pass into two detectors. A coincidence counter observes an interference pattern in the rate of simultaneous detections by the two detectors, indicating that each photon has gone both ways at the beam splitter, as a wave. Adding a polarisation shifter to one path destroys the pattern, by making it possible to distinguish the photons' paths. Placing two polarising filters in front of the detectors makes the photons identical again, erasing the distinction, restoring the interference pattern. (2) Delayed choice quantum eraser configuration. An individual photon goes through one (or both) of the two slits. One of the photons - the "signal" photon (red and blue lines) continues to the target detector D0, which is scanned in steps along its x-axis. A plot of "signal" photon counts detected by D0 versus x can be examined to discover whether the cumulative signal forms an interference pattern. The other entangled photon - the "idler" photon (red and blue lines going downwards from the prism), is deflected by prism PS that sends it along divergent paths depending on whether it came from slit A or slit B. Detection of the idler photon by D3 or D4 provides delayed "which-path information" indicating whether the signal photon with which it is entangled had gone through slit A or B. On the other hand, detection of the idler photon by D1 or D2 provides a delayed indication that such information is not available for its entangled signal photon. Insofar as which-path information had earlier potentially been available from the idler photon, the information has been subjected to a "delayed erasure".(3) Delayed choice entanglement swapping, in which Victor is able to decide whether Alice's and Bob's photons are entangled or not after they have already been measured. (Ma et al. 2002). (4) A photon is entangled with a photon that has already died (been sampled) even though they never coexisted at any point in time (Megidish 2012). Quantum mechanics also allows events to happen with no definite causal order (Goswami et al. 2018).
Phenomena including delayed-choice quantum erasure and entanglement swapping (fig 75) demonstrate that the time of a quantum observation can be ambiguous or possibly stand outside space-time, as the transactional picture suggests. The Wigner’s friend experiment of fig 76c likewise shows that quantum path information can also take the form of a quantum measurement ‘observer’. Narasimhan, Chopra & Kafatos (2019) draw particular attention to Kim et al. (2000) in regard to a “universal observer” integrating individual conscious observers and their observations:
While traditional double-slit experiments are usually interpreted as indicating that the collapse of the wave function involves choices by an individual observer in space-time, the extension to quantum eraser experiments brings in some additional subtle aspects relating to the role of observation and what constitutes an observer. Access to, and the interpretation of, information outside space and time may be involved. This directly ties to the question of where the Heisenberg-von Neumann cut is located and what its nature is. … There is a possibility that individual observers making choices in space and time are actually aspects of the universal Observer, a state masked by assumptions about individual human minds that may need further development and re-examination.
Consciousness and Measurement
Summing up the position of physicists in a survey of participants in a foundations of quantum mechanics gathering, Schlosshauer et al. (2013) found that, while only 6% of physicists present believed consciousness plays a distinguished physical role, a majority believed it has a fundamental, although not distinguished role in the application of the formalism. They noted in particular that “It is remarkable that more than 60% of respondents appear to believe that the observer is not a complex quantum system.” Indeed on all counts queried there were wide differences of opinion, including which version of quantum mechanics they supported. Since all of the approaches are currently consistent with the predictions of quantum mechanics, these ambiguous figures are not entirely surprising.
An experiment testing the influence of conscious perception on quantum entanglement (Radin, Bancel & Delorme 2021) explored psychophysical (mind-matter) interactions with quantum entangled photons. Entanglement correlation strength, measured in real time, was presented via a graph or dynamic images displayed on a computer monitor or web browser. Participants were tasked with mentally influencing that metric, with particularly strong results observed in three studies conducted (p < 0.0002). Radin, Michel & Delorme (2016) also reported a 5.72 sigma (p = 1.05 × 10−8) deviation from a null effect in which participants focused their attention toward or away from a feedback signal linked in real time to the double-slit component of an interference pattern, suggesting consciousness affecting wave function collapse. For a review, see Milojevic & Elliot (2023). Radin (2023) has also reported deviations 7.3 sigma beyond chance (p = 1.4 × 10−13) from a network of electronic random number generators located around the world that continuously recorded samples, used to explore a hypothesis that predicts the emergence of anomalous structure in randomness correlated with events that attract widespread human attention, leaving little doubt that on average anomalous deviations in the random data emerged during events that attracted widespread attention. Mossbridge et al. (2014), in a meta-analysis, have also cited an organic unconscious anticipatory response to potential existential crises they term predictive anticipatory activity, which is similar to conscious quantum anticipation, citing anticipative entanglement-swapping experiments such as Ma et al. (2002).
The tendency towards an implicitly classical view of causality is similar to that among neuroscientists, with an added belief in the irreducible nature of randomness, as opposed to any need for hidden variables supporting quantum entanglement, rejecting Einstein’s disclaimer that “God does not play dice with the universe.” Belief in irreducible randomness means that the principal evidence for subjectivity in quanta – the idiosyncratic, unpredictable nature of individual particle trajectories – is washed out in the bathwater of irreducible randomness, converging to the wave amplitude on repetition, consistent with the correspondence principle that the behaviour of systems described by quantum mechanics reproduces classical physics in the limit of large quantum numbers.
Non-IID interactions may preserve quantum reality: In Born's (1920) correspondence principle, systems described by quantum mechanics are believed to reproduce classical physics in the limit of large quantum numbers – if measurements performed on macroscopic systems have limited resolution and cannot resolve individual microscopic particles, then the results behave classically – the coarse-graining principle (Kofler & Brukner 2007). Subsequently, Navascués & Wunderlich (2010) proved that in situations covered by IID (independent and identically distributed) measurements, in which each run of an experiment must be repeated under exactly the same conditions and independently of other runs, we arrive at macroscopic locality. Similarly, temporal quantum correlations reduce to classical correlations and quantum contextuality reduces to macroscopic non-contextuality (Henson & Sainz 2015).
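A simple sketch of the IID intuition (an illustration of the statistics only, not the Kofler-Brukner derivation): each individual outcome is irreducibly random, but averaging N independent, identically prepared measurements concentrates the result around the quantum expectation value with fluctuations of order N^(-1/2), which is why coarse-grained IID ensembles look classical.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.pi / 3                        # qubit state cos(theta/2)|0> + sin(theta/2)|1>
p_up = np.cos(theta / 2) ** 2            # Born probability of outcome +1
expectation = 2 * p_up - 1               # quantum expectation <Z> = cos(theta)

for N in (10, 1_000, 100_000):
    # N independent, identically distributed (IID) single-particle measurements
    outcomes = np.where(rng.random(N) < p_up, 1, -1)
    print(f"N = {N:>6}: sample mean {outcomes.mean():+.3f}   "
          f"(<Z> = {expectation:+.3f}, fluctuation ~ {N ** -0.5:.3f})")
```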
However Gallego & Dakić (2021) have shown that, surprisingly, quantum correlations survive in the macroscopic limit if correlations are not IID distributed at the level of microscopic constituents and that the entire mathematical structure of quantum theory, including the superposition principle is preserved in the limit. This macroscopic quantum behaviour allows them to show that Bell nonlocality is visible in the macroscopic limit.
“The IID assumption is not natural when dealing with a large number of microscopic systems. Small quantum particles interact strongly and quantum correlations and entanglement are distributed everywhere. Given such a scenario, we revised existing calculations and were able to find complete quantum behavior at the macroscopic scale. This is completely against the correspondence principle, and the transition to classicality does not take place” (Borivoje Dakić).
“It is amazing to have quantum rules at the macroscopic scale. We just have to measure fluctuations, deviations from expected values, and we will see quantum phenomena in macroscopic systems. I believe this opens the door to new experiments and applications” (Miguel Gallego).
Their approach is described as follows:
In this respect, one important consequence of the correspondence principle is the concept of macroscopic locality (ML): Coarse-grained quantum correlations become local (in the sense of Bell) in the macroscopic limit. ML has been challenged in different circumstances, both theoretically and experimentally. However, as far as we know, nonlocality fades away under coarse graining when the number of particles N in the system goes to infinity, in a bipartite Bell-type experiment where the parties measure intensities with a resolution of the order of N^(1/2) or, equivalently, O(N^(1/2)) coarse graining. Then, under the premise that particles are entangled only by independent and identically distributed pairs, Navascués & Wunderlich (2010) prove ML for quantum theory.
Fig 76: Macroscopic Bell-Type experiment.
We generalize the concept of ML to any level of coarse graining α ∈ [0, 1], meaning that the intensities are measured with a resolution of the order of N^α. We drop the IID assumption, and we investigate the existence of a boundary between quantum (nonlocal) and classical (local) physics, identified by the minimum level of coarse graining α required to restore locality. To do this, we introduce the concept of macroscopic quantum behavior (MQB), demanding that the Hilbert space structure, such as the superposition principle, is preserved in the thermodynamic limit.
Conclusion: We have introduced a generalized concept of macroscopic locality at any level of coarse graining α ∈ [0, 1]. We have investigated the existence of a critical value that marks the quantum-to-classical transition. We have introduced the concept of MQB at level α of coarse graining, which implies that the Hilbert space structure of quantum mechanics is preserved in the thermodynamic limit. This facilitates the study of macroscopic quantum correlations. By means of a particular MQB at α = 1/2, we show that α_c ≥ 1/2, as opposed to the IID case, for which α_IID ≤ 1/2. An upper bound on α_c is, however, lacking in the general case. The possibility that no such transition exists remains open, and perhaps there exist systems for which ML is violated at α = 1.
This means, for example, that both (a) neural system processing, where the quantum-unstable context is continually evolving as a result of edge-of-chaos processing, so that repeated IID measurements are not made, and (b) biological evolution, where a sequence of unique mutations becomes sequentially fixed by natural and sexual selection, which is also consciously mediated in eukaryote organisms, inherit implicit quantum non-locality in their evolution.
John Eccles (1986) proposed a quantum theory involving psychon quasi-particles mediating uncertainty of synaptic transmission to complementary dendrons – cylindrical bundles of neurons arranged vertically in the six outer layers or laminae of the cortex. Eccles proposed that each of the 40 million dendrons is linked with a mental unit, or "psychon", representing a unitary conscious experience. In willed actions and thought, psychons act on dendrons and, for a moment, increase the probability of the firing of selected neurons through a quantum tunnelling effect in synaptic exocytosis, while in perception the reverse process takes place. This model has been elaborated by a number of researchers (Eccles 1990, 1994, Beck & Eccles 1992, Georgiev 2002, Hari 2008). The difficulty with the theory is that the psychons are then physical quasi-particles with integrative mental properties, so it is a quasi-physical description that doesn’t manifest subjectivity except by its integrative physical properties. In the last chapter of his book The Neurophysiological Basis of Mind (1953), Eccles not only hypothesized the existence of a "self-conscious mind" relatively independent of the cerebral structures, but also supposed that a very weak influence of will on a few neurons of the cerebral cortex could cause remarkable changes in brain activity, leading to the notion of volition being a form of "psychokinesis" (Giroldini 1991), supported also by Wilder Penfield (1960).
The Quantum Measurement Problem May Contradict Objective Reality
In quantum theory, before collapse, the system is said to be in a superposition of two states, and this quantum state is described by the wave function, which evolves in time and space. This evolution is both deterministic and reversible: given an initial wave function, one can predict what it’ll be at some future time, and one can in principle run the evolution backward to recover the prior state. Measuring the wave function, however, causes it to collapse, mathematically speaking, such that the system in our example shows up as either heads or tails. The collapse is irreversible and happens only once, and no one knows what defines the process or boundaries of measurement.
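The contrast drawn here can be made concrete in a few lines (a generic sketch, using an arbitrary made-up Hamiltonian): unitary Schrödinger evolution can be run backwards exactly, whereas a projective measurement discards the superposition and cannot be undone.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)
psi0 = np.array([0.6, 0.8j])                              # initial qubit state

# Reversible: evolve with U = exp(-iHt), then apply U-dagger to recover the start.
H = np.array([[0.3, 0.5 - 0.2j], [0.5 + 0.2j, -0.1]])     # arbitrary Hermitian "Hamiltonian"
U = expm(-1j * H)
psi_t = U @ psi0
print("recovered initial state:", np.allclose(U.conj().T @ psi_t, psi0))   # True

# Irreversible: a projective measurement keeps one branch and discards the rest.
probs = np.abs(psi_t) ** 2
outcome = rng.choice([0, 1], p=probs)
collapsed = np.zeros(2, dtype=complex)
collapsed[outcome] = 1.0
print("measured outcome:", outcome, "- the other amplitude and relative phase are lost")
```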
One model that preserves the absoluteness of the observed event — either heads or tails for all observers — is the GRW theory, where quantum systems exist in a superposition of states until the superposition spontaneously and randomly collapses, independent of an observer. Whatever the outcome — heads or tails in our example — it holds for all observers. But GRW, and the broader class of “spontaneous collapse” theories, run foul of a long-cherished physical principle: the preservation of information. By contrast, the “many worlds” interpretation of quantum mechanics allows for non-absoluteness of observed events, because the wave function branches into multiple contemporaneous realities: in one “world” the system comes up heads, while in another it comes up tails.
Ormrod, Venkatesh and Barrett (2023; see Ananthaswamy 2023) focus on perspectival theories that obey three properties:
(1) Bell nonlocality (B). Alice chooses her type of measurement freely and independently of Bob, and vice versa – of their own free will – an important assumption. Then, when they eventually compare notes, the duo will find that their measurement outcomes are correlated in a manner that implies the states of the two particles are inseparable: knowing the state of one tells you about the state of the other.
(2) The preservation of information (I). Quantum systems that show deterministic and reversible evolution satisfy this condition. If you are wearing a green sweater today, in an information-preserving theory, it should still be possible, in principle, 10 years hence to retrieve the colour of your sweater even if no one saw you wearing it.
(3) Local dynamics (L). If there exists a frame of reference in which two events appear simultaneous, then the regions of space are said to be “space-like separated.” Local dynamics implies that the transformation of a system that takes a set of input states and produces a set of output states in one of these regions cannot causally affect the transformation of a system in the other region any faster than the speed of light, and vice versa. Each subsystem undergoes its own transformation, and so does the entire system as a whole. If the dynamics are local, the transformation of the full system can be decomposed into transformations of its individual parts: the dynamics are said to be separable. In contrast, when two particles share a state that’s Bell nonlocal (that is, when two particles are entangled, per quantum theory), the state is said to be inseparable into the individual states of the two particles. If transformations behaved similarly, in that the global transformation could not be described in terms of the transformations of individual subsystems, then the whole system would be dynamically inseparable.
Fig 76b: A graphical summary of the theorems. Possibilistic Bell Nonlocality is Bell Nonlocality that arises not only at the level of probabilities, but at the level of possibilities.
Their work analyses how perspectival quantum theories are BINSC, and since NSC implies L, BINSC theories are BIL. Such BIL theories are then required to handle a deceptively simple thought experiment. Imagine that Alice and Bob, each in their own lab, make a measurement on one of a pair of particles. Both Alice and Bob make one measurement each, and both do the exact same measurement. For example, they might both measure the spin of their particle in the up-down direction. Viewing Alice and Bob and their labs from the outside are Charlie and Daniela, respectively. In principle, Charlie and Daniela should be able to measure the spin of the same particles, say, in the left-right direction. In an information-preserving theory, this should be possible. Using this scenario, the team proved that the predictions of any BIL theory for the measurement outcomes of the four observers contradict the absoluteness of observed events. This leaves physicists at an unpalatable impasse: either accept the non-absoluteness of observed events or give up one of the assumptions of a BIL theory.
Ormrod says dynamical separability is “kind of an assumption of reductionism – you can explain the big stuff in terms of these little pieces.” Just like a Bell nonlocal state cannot be reduced to some constituent states, it may be that the dynamics of a system are similarly holistic, adding another kind of nonlocality to the universe. Importantly, giving it up doesn’t cause a theory to fall afoul of Einstein’s theories of relativity, much like physicists have argued that Bell nonlocality doesn’t require superluminal or nonlocal causal influences but merely nonseparable states. Ormrod, Venkatesh and Barrett note: “Perhaps the lesson of Bell is that the states of distant particles are inextricably linked, and the lesson of the new ... theorems is that their dynamics are too.” The assumptions used to prove the theorem don’t explicitly include an assumption about freedom of choice because no one is exercising such a choice. But if a theory is Bell nonlocal, it implicitly acknowledges the free will of the experimenters.
Fig 76c: Above: an experimental realisation of the Wigner's friend setup showing there is no such thing as objective reality – quantum mechanics allows two observers to experience different, conflicting realities. Below: the proof-of-principle experiment of Bong et al. (2020) demonstrating mutual inconsistency of 'No-Superdeterminism', 'Locality' and 'Absoluteness of Observed Events'. LHV (local hidden variable, Bell), LF (local friendliness, assuming 1, 2 & 3), NS (no signalling); the quantum results Q include LHV but not LF.
An experimental realisation of non-absoluteness of observation has been devised (Proietti et al. 2019), as shown in fig 76c, using quantum entanglement. The experiment involves two people observing a single photon that can exist in one of two alignments, but until the moment someone actually measures it to determine which, the photon is in a superposition. A scientist analyses the photon and determines its alignment. Another scientist, unaware of the first's measurement, is able to confirm that the photon – and thus the first scientist's measurement – still exists in a quantum superposition of possible outcomes. As a result, each scientist experiences a different reality – both "true" even though they disagree with each other. In a subsequent experiment, Bong et al. (2020) transform the thought experiment into a mathematical theorem that confirms the irreconcilable contradiction at the heart of the Wigner scenario. The team also tests the theorem with an experiment, using photons as proxies for the humans, accompanied by new forms of Bell's inequalities, by building on a scenario with two separated but entangled friends. The researchers prove that if quantum evolution is controllable on the scale of an observer, then one of (1) No-Superdeterminism – the assumption of 'freedom of choice' used in derivations of Bell inequalities, that the experimental settings can be chosen freely, uncorrelated with any relevant variables prior to that choice – (2) Locality, or (3) Absoluteness of Observed Events – that every observed event exists absolutely, not relatively – must be false. Although the violation of Bell-type inequalities in such scenarios is not in general sufficient to demonstrate the contradiction between those three assumptions, new inequalities can be derived, in a theory-independent manner, that are violated by quantum correlations. This is demonstrated in a proof-of-principle experiment where a photon's path is deemed an observer. Wigner’s “friends” were played by photon paths, while photon detectors played the part of the Wigners. This new theorem places strictly stronger constraints on physical reality than Bell's theorem. Eric Cavalcanti (2021), the lead author, argues that a personalist view of quantum states is an expression of Copernicanism rather than solipsism. "If you think that any physical system can be considered an observer, then the experiment has already been done," Cavalcanti said. "But most physicists will think, no, I don't buy that. So what are the next steps? How far can we go? Is a molecule an observer? An amoeba?" (Gefter 2024).
Self-Simulated Universe: Another theory put forward by gravitational theorists (Irwin, Amaral & Chester 2020) also uses retrocausality to try to explain the ultimate questions: Why is there anything here at all? What primal state of existence could have possibly birthed all that matter, energy, and time, all that everything? And how did consciousness arise – is it some fundamental proto-state of the universe itself, or an emergent phenomenon that’s purely neurochemical and material in nature?
Fig 77b: Self-Simulated Universe: Humans are near the point of demarcation, where EC or thinking matter emerges into the choice-sphere of the infinite set of possibilities of thought, EC∞. Beyond the human level, physics allows for larger and more powerful networks that are also conscious. At some stage of the simulation run, a conscious EC system emerges that is capable of acting as the substrate for the primitive spacetime code, its initial conditions, as mathematical thought, and simulation run, as a thought, to self-actualize itself. Linear time would not permit this logic, but non-linear time does.
This approach attempts to answer both questions in a way that weds aspects of Nick Bostrom's Simulation Argument with "timeless emergentism". Termed the "panpsychism self-simulation model", it says the physical universe may be a "strange loop" that self-generates new sub-realities in an almost infinite hierarchy of tiers inlaid with simulated realities of conscious experience. In other words, the universe is creating itself through thought, willing itself into existence on a perpetual loop that efficiently uses all mathematics and fundamental particles at its disposal. The universe, they say, was always here (timeless emergentism) and is like one grand thought that makes mini thoughts, called "code-steps or actions", nested like a Matryoshka doll.
David Chester comments:
“While many scientists presume materialism to be true, we believe that quantum physics may provide hints that our reality could be a mental construct. Recent advances in quantum gravity, like seeing spacetime emergent via a hologram, are also a hint that spacetime isn't fundamental. This is also compatible with ancient Hermetic and Indian philosophy. In a sense, the mental construct of reality creates spacetime to efficiently understand itself by creating a network of subconscious entities that may interact and explore the totality of possibilities.”
They modify the simulation hypothesis to a self-simulation hypothesis, where the physical universe, as a strange loop, is a mental self-simulation that might exist as one of a broad class of possible code-theoretic quantum gravity models of reality obeying the principle of efficient language axiom, and discuss implications of the self-simulation hypothesis such as an informational arrow of time.
The self-simulation hypothesis is built upon the following axioms:
1. Reality, as a strange loop, is a code-based self-simulation in the mind of a panpsychic universal consciousness that emerges from itself via the information of code-based mathematical thought or self-referential symbolism plus emergent non-self-referential thought. Accordingly, reality is made of information called thought.
2. Non-local spacetime and particles are secondary or emergent from this code, which is itself a pre-spacetime thought within a self-emergent mind.
3. The panconsciousness has freewill to choose the code and make syntactical choices. Emergent lower levels of consciousness also make choices through observation that influence the code syntax choices of the panconsciousness.
4. Principle of efficient language (Irwin 2019). The desire or decision of the panconscious reality is to generate as much meaning or information as possible for a minimal number of primitive thoughts, i.e., syntactical choices, which are mathematical operations at the pre-spacetime code level.
Fig 77c: This emphasis on coding is problematic, as it is trying to assert a consciousness-makes-reality loop through an apparently abstract coded representation based on discrete computation-like processes, assuming an "it-from-bit" notion that reality is made from information, not just described by it.
It from bit: Otherwise put, every it — every particle, every field of force, even the space-time continuum itself — derives its function, its meaning, its very existence entirely — even if in some contexts indirectly — from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. It from bit symbolizes the idea that every item of the physical world has at bottom — at a very deep bottom, in most instances — an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe (Wheeler 1990).
Schwartz, Stapp & Beauregard (2005) advance a quantum theory of conscious volition, in which attentive will can influence physical brain states using quantum principles, in particular von Neumann's process 1, or collapse of the wave function, complementing process 2, the causal evolution of the Schrödinger wave function responsible for the ongoing evolution of physical brain states. They cite specific cognitive processes leading to physical changes in ongoing brain function:
There is at least one type of information processing and manipulation that does not readily lend itself to explanations that assume that all final causes are subsumed within brain, or more generally, central nervous system mechanisms. The cases in question are those in which the conscious act of wilfully altering the mode by which experiential information is processed itself changes, in systematic ways, the cerebral mechanisms used. There is a growing recognition of the theoretical importance of applying experimental paradigms that use directed mental effort to produce systematic and predictable changes in brain function. ... Furthermore, an accelerating number of studies in the neuroimaging literature significantly support the thesis that, with appropriate training and effort, people can systematically alter neural circuitry associated with a variety of mental and physical states.
They point out that it is necessary in principle to advance to the quantum level to achieve an adequate theory of the neurophysiology of volitionally directed activity. The reason, essentially, is that classical physics is an approximation to the more accurate quantum theory, and that this classical approximation eliminates the causal efficacy of our conscious efforts that these experiments empirically manifest.
They explain how structural features of ion conductance channels critical to synaptic function entail that the classical approximation to quantum reality fails in principle to cover the dynamics of a human brain, so that quantum dynamics must be used. The principles of quantum theory must then link the quantum physical description of the subject’s brain to their stream of conscious experiences. The conscious choices by human agents thereby become injected non-trivially into the causal interpretation of neuroscience and neuropsychology experiments, through type 1 processes performing quantum measurement operations. This particularly applies to those experimental paradigms in which human subjects are required to perform decision-making or attention-focusing tasks that require conscious effort.
Conscious effort itself can, justifiably within science, be taken to be a primary variable whose complete causal origins may be untraceable in principle, but whose causal efficacy in the physical world can be explained on the basis of the laws of physics. See also Stapp (2009b) for a short summary of his viewpoint.
The mental act of clear-minded introspection and observation, variously known as mindfulness, mindful awareness, bare attention, the impartial spectator, etc., is a well-described psychological phenomenon with a long and distinguished history in the description of human mental states. ... In the conceived approach, the role played by the mind, when one is observing and modulating one’s own emotional states, is an intrinsically active and physically efficacious process in which mental action is affecting brain activity in a way concordant with the laws of physics.
They propose a neurobiological interpretation where calcium channels play a pivotal role in type 1 processes at the synaptic level:
At their narrowest points, calcium ion channels are less than a nanometre in diameter. This extreme smallness of the opening in the calcium ion channels has profound quantum mechanical implications. The narrowness of the channel restricts the lateral spatial dimension. Consequently, the lateral velocity is forced by the quantum uncertainty principle to become large. This causes the quantum cloud of possibilities associated with the calcium ion to fan out over an increasing area as it moves away from the tiny channel to the target region where the ion will be absorbed as a whole, or not absorbed at all, on some small triggering site. ... This spreading of this ion wave packet means that the ion may or may not be absorbed on the small triggering site. Accordingly, the contents of the vesicle may or may not be released. Consequently, the quantum state of the brain has a part in which the neurotransmitter is released and a part in which the neurotransmitter is not released. This quantum splitting occurs at every one of the trillions of nerve terminals. ... In fact, because of uncertainties on timings and locations, what is generated by the physical processes in the brain will be not a single discrete set of non-overlapping physical possibilities but rather a huge smear of classically conceived possibilities. Once the physical state of the brain has evolved into this huge smear of possibilities one must appeal to the quantum rules, and in particular to the effects of process 1, in order to connect the physically described world to the streams of consciousness of the observer/participants.
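A rough numerical sketch of the uncertainty-principle argument above; the channel width, ion mass and transit time used here are illustrative assumptions, not figures from Schwartz, Stapp & Beauregard:

```python
# Illustrative estimate of quantum spreading of a Ca2+ ion confined laterally
# in a ~1 nm channel, per the uncertainty-principle argument sketched above.
# All numbers are assumptions chosen for illustration only.
hbar = 1.055e-34          # J.s
m_ca = 40 * 1.66e-27      # kg, mass of a calcium ion (~40 atomic mass units)
dx = 1e-9                 # m, assumed lateral confinement (channel diameter)

dv = hbar / (2 * m_ca * dx)      # minimum lateral velocity spread (Heisenberg)
t_transit = 1e-6                 # s, assumed diffusive transit to the trigger site
spread = dv * t_transit          # lateral spread of the wave packet on arrival

print(f"velocity spread ~ {dv:.2e} m/s")
print(f"wave-packet spread after {t_transit:.0e} s ~ {spread:.2e} m")
# With these assumptions the spread greatly exceeds the channel diameter,
# illustrating why the authors argue the ion's arrival at a vesicle-release
# trigger site cannot be treated as that of a classical point particle.
```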
However, they note that this focus on the motions of calcium ions in nerve terminals is not meant to suggest that this particular effect is the only place where quantum effects enter into the brain process, or that the quantum process 1 acts locally at these sites. What is needed here is only the existence of some large quantum of effect.
A type 1 process beyond the local deterministic process 2 is required to pick out one experienced course of physical events from the smeared-out mass of possibilities generated by all of the alternative possible combinations of vesicle releases at all of the trillions of nerve terminals. This process brings in a choice that is not determined by any currently known law of nature, yet has a definite effect upon the brain of the chooser.
They single out the quantum Zeno effect, in which rapid multiple measurements can act to freeze a quantum state and delay its evolution, and cite James (1892, 417): "The essential achievement of the will, in short, when it is most ‘voluntary,’ is to attend to a difficult object and hold it fast before the mind. Effort of attention is thus the essential phenomenon of will. ... Consent to the idea’s undivided presence, this is effort’s sole achievement. Everywhere, then, the function of effort is the same: to keep affirming and adopting the thought which, if left to itself, would slip away." This coincides with the studies already cited on wilful control of the emotions, implying evidence of a causal effect.
Much of the work on attention since James is summarized and analysed in Pashler (1998). He emphasizes that the empirical findings of attention studies argue for a distinction between perceptual attentional limitations and more central limitations involved in thought and the planning of action. A striking difference emerges from the experimental analysis: the perceptual processes proceed essentially in parallel, whereas the post-perceptual processes of planning and executing actions form a single queue. This is in line with the distinction between ‘passive’ and ‘active’ processes – a passive stream of essentially isolated process 1 events versus active processes involving effort-induced rapid sequences of process 1 events that can saturate a given capacity.
There is in principle, in the quantum model, an essential dynamic difference between the unconscious processing done by the Schrödinger evolution, which generates by a local process an expanding collection of classically conceivable experiential possibilities and the process associated with the sequence of conscious events that constitute the wilful selection of action. The former are not limited by the queuing effect, because process 2 simply develops all of the possibilities in parallel. Nor is the stream of essentially isolated passive process 1 events thus limited. It is the closely packed active process 1 events that can, in the von Neumann formulation, be limited by the queuing effect.
This quantum model accommodates naturally all of the complex structural features of the empirical data that he describes. Chapter 6 emphasizes a specific finding: strong empirical evidence for what he calls a central processing bottleneck associated with the attentive selection of a motor action. This kind of bottleneck is what the quantum-physics-based theory predicts: the bottleneck is precisely the single linear sequence of mind–brain quantum events that von Neumann quantum theory describes.
Speculative Quantum Brain Dynamics Theories
Nishiyama, Tanaka & Tuszynski (2024) provide a theory and review of the variety of speculative quantum brain dynamics theories that researchers have introduced over the last half century. Some of these have links to OOR below and some to the quantum dissipative theory of Vitiello and coworkers discussed in the neuroscience section.
Pribram (1971, 1991) proposed a holographic brain approach to describe memory and perception. This approach originated with the work of Ricciardi and Umezawa (1967), in which the brain is envisaged as a mixed system of classical neurons and quantum degrees of freedom in QFT, through bosonic corticons and exchange bosons. In QFT of the brain, memory represents the vacua emerging in the breakdown of symmetry. The QFT of the brain was further developed by Stuart et al. (1978, 1979) and applied to nonlocal memory storage, with a memory recall mechanism attributed to Nambu–Goldstone (NG) bosons, whose long-range correlations propagate through the whole region of the brain.

Fröhlich (1968) proposed the existence of Bose–Einstein condensation involving long-range correlations in biological systems. Collective oscillation of a macromolecule may induce coherent oscillation of protein molecules (Nardecchia et al. 2018). Azizi et al. (2023) also suggested that terahertz vibrational modes of phonons in biological systems might correspond to a Fröhlich condensate. Davydov and Kislukha (1976) derived the Davydov soliton – a solitary wave propagating along the alpha-helix structures of protein chains, and also along DNA – which has a quantum Hamiltonian. In the coherent state, it is approximated as a semi-classical Hamiltonian that leads to a nonlinear (soliton) equation. The Fröhlich condensation and the Davydov soliton were found to represent static and dynamical features of a non-linear Schrödinger equation (Tuszynski et al. 1984).

Del Giudice et al. (1993–1999) suggested a QFT-based approach applicable to biological systems, such as Bose–Einstein condensates of dipolar modes in membranes, coherence domains in proteins, water molecule ensembles acting as a laser, and Josephson systems. A coherent domain was estimated to be the size of the inner diameter of a eucaryotic microtubule. Jibu and Yasue (1993–1996) proposed concrete degrees of freedom in the QFT of the brain, or quantum brain dynamics (QBD), namely water rotational dipole fields and photon fields. Memory in this approach is represented as the vacua emerging in the breakdown of rotational symmetry of water dipoles, in which the dipoles are aligned in the same direction, and memory recall processes are represented as excitations of a finite number of incoherent photons. Macroscopic order of aligned dipoles is maintained by long-range correlation of NG bosons, as dipole-wave quanta. They introduced optical super-radiance solutions to address the binding problem in neuroscience (Treisman 1996) – how we integrate information processing diffused across the whole brain. Water is expected to act as a source of coherent light and sound at various wavelengths, representing various molecular conformational states. Furthermore, water is also proposed as the medium of holographic memory storage (Cavaglià, Deriu and Tuszynski 2023) via interference patterns of coherent light waves, in which an integrated version of QBD and holography is proposed (Nishiyama et al. 2021, 2022). A microtubule is expected to be a device for super-radiant emission to achieve holographic memory. Super-radiance of light (and also sound) might be adopted to integrate information processing among holographic memories diffused in the whole brain, which might solve the binding problem.
Orchestrated Objective Reduction
Hameroff and Penrose (2014, Hameroff 2022) have proposed a hybrid quantum brain theory that forms a parallel "clip on" addition to assumed classical neurodynamic processes, asserting that consciousness originates at the quantum level inside neurons, rather than the conventional view that it is a product of connections between neurons, coupling orchestrated objective reduction (OOR) to hypothetical quantum cellular automata in the microtubules of neurons. The theory is regarded as implausible by critics, both physicists and neuroscientists, who consider it to be a poor model of brain physiology on multiple grounds. Orchestration refers to the hypothetical process by which microtubule-associated proteins influence qubit state reduction by gravitationally modifying the spacetime separation of their superimposed states, based on Penrose's objective-collapse theory using a one-graviton difference. Derakhshani et al. (2022) discount gravitational collapse theory experimentally:
We perform a critical analysis of the Orch OR consciousness theory at the crossroad with the newest experimental results coming from the search for spontaneous radiation predicted by the simplest version of gravity-related dynamical collapse models. We conclude that Orch OR theory, when based on the simplest version of gravity-related dynamical collapse [Diósi 2019], is highly implausible in all the cases analyzed.
The tubulin protein dimers of the microtubules have hydrophobic pockets that may contain delocalised π electrons. Hameroff claims that these pockets are close enough for the tubulin π electrons to become quantum entangled. This would leave these quantum computations isolated inside neurons. Hameroff then proposed, although this idea was rejected by Reimers (2009), that coherent Fröhlich condensates in microtubules in one neuron can link with microtubule condensates in other neurons and glial cells via the gap junctions of electrical synapses, claiming these are sufficiently small for quantum tunnelling across, allowing coherence to extend across a large area of the brain. He further postulated that the action of this large-scale quantum activity is the source of 40 Hz gamma waves, building upon the theory that gap junctions are related to the gamma oscillation. Craddock et al. (2017) make claims about anaesthetics based on the exclusive action of halothane-type molecules on microtubules, but this focus lacks consistency with the known receptor-based effects of ketamine and N2O on NMDA receptors (also shared by halothanes) and of propofol on GABA receptors. The evidence for anaesthetic disruption of microtubules reviewed by Kelz & Mashour (2019) applies indiscriminately to all anaesthetics, from halothane to ketamine, widely across the tree of life, from paramecium to humans, and includes both synaptic and ion-channel effects, indicating merely that microtubular integrity is necessary for consciousness, not that microtubules have a key role in consciousness itself beyond their essential architectural and transport roles.
Because of its dependence on Penrose’s idea of gravitational quantum collapse, the theory is confined to objective reduction, at face value crippling the role of free-will in conscious experience. However Hameroff (2012) attempts to skirt this by applying notions of retro-causality, as illustrated in fig 77(2), in which a dual-time approach (King 1989) is used to invoke a quantum of the present, the Conscious NOW. We will see that retrocausality is a process widely cited also in this work. Hameroff justifies such retrocausality from three sources.
Firstly, he cites an open-brain experiment of Libet. Peripheral stimulus, e.g. of the skin of the hand, resulted in an "EP" spike in the somatosensory cortical area for the hand ∼30 ms after skin contact, consistent with the time required for a neuronal signal to travel from hand to spinal cord, thalamus, and brain. The stimulus also caused several hundred milliseconds of ongoing cortical activity following the EP. Subjects reported conscious experience of the stimulus (using Libet's rapidly moving clock) near-immediately, e.g. at the time of the EP at 30 ms, hinting that the experience is referred retro-causally backward in time from the delayed cortical activity.
Secondly, he cites a number of well-controlled studies using electrodermal activity, fMRI and other methods to look for emotional responses, e.g., to viewing images presented at random times on a computer screen. Surprisingly, the changes occurred half a second to two seconds before the images appeared. They termed the effect pre-sentiment because the subjects were not consciously aware of the emotional feelings. Non-conscious emotional sentiment (i.e., feelings) appeared to be referred backward in time. Bem (2012, 2016) reported on studies showing statistically significant backward time effects, most involving non-conscious influence of future emotional effects (e.g., erotic or threatening stimuli) on cognitive choices. Studies by others have reported both replication, and failure to replicate, the controversial results. Thirdly he cites a number of delayed choice experiments widely discussed in this work.
Sahu et al. (2013) found that electronic conductance along microtubules, normally extremely good insulators, becomes exceedingly high, approaching quantum conductance, at certain specific resonance frequencies of applied alternating current (AC) stimulation. These resonances occur in gigahertz, megahertz and kilohertz ranges, and are particularly prominent in the low megahertz range (e.g. 8.9 MHz). Hameroff & Penrose (2014) suggest that EEG rhythms (brain waves) also derive from deeper-level microtubule vibrations. Microtubules also display super-radiance in vitro at room temperature (Celardo et al. 2018), indicating quantum coherence between molecules. Two experiments (Lewton 2022, Tangerman 2022), presented at The Tucson Science of Consciousness conference, merely showed that anaesthetics hastened delayed luminescence and that, under laser excitation, excitation diffused further through microtubules than expected when not under anaesthetics.
Khan et al. (2024) have found that the microtubule stabiliser epothilone B delays isoflurane-induced unconsciousness in rats, but this anaesthetic also shows evidence of GABA modulation and other anaesthetic effects. However, Singh et al. (2021a, b), using dielectric resonance images, have established fast filamentary (EM) communication modes, complementary to the ionic communication mode, between the internal neuronal microtubule architecture and axon potentials, fig 77(5). These could provide a more plausible basis for microtubule involvement in consciousness than Penrose's CAs (1) and complement membrane excitation in the emergence of single-celled eucaryote consciousness.
However, there is no direct evidence for the cellular automata proposed and microtubules are critically involved in neuronal architecture, and are also involved in molecular transport, so functional conflict would result from adding another competing function. Hameroff (2022) cites processes, from the pyramidal neuron, down through microtubules, to π-orbital resonances and gravitational space-time effects, but the linkage to microtubules is unestablished.
Fig 77: (1) An axon terminal releases neurotransmitters through a synapse, which are received by microtubules in a neuron's dendritic spine. (2) From left, a superposition develops over time, e.g. a particle separating from itself, shown as simultaneous curvatures in opposite directions. The magnitude of the separation is related to E, the gravitational self-energy. At a particular time t, E reaches threshold by E = ħ/t, and spontaneous OR occurs: one particular curvature is selected. This OR event is accompanied by a moment of conscious experience (“NOW”), its intensity proportional to E. Each OR event also results in temporal non-locality, referring quantum information backward in classical time (curved arrows). (3,4) Scale-dependent resonances from the pyramidal neuron, through microtubules, to π-orbitals and gravitational effects. (5) Filamentary (EM) communication modes complementary to membrane excitation (Singh et al).
None of these processes have been empirically verified and the complex tunnelling invoked is far from being a plausible neurophysiological process. The model requires that the quantum state of the brain has macroscopic quantum coherence, which needs to be maintained for around a tenth of a second. But, according to calculations made by Max Tegmark (2000), this property ought not to hold for more than about 10⁻¹³ s. Hameroff and co-workers (Hagen et al. 2002) have advanced reasons why this number should actually be of the order of a tenth of a second, but twelve orders of magnitude is a very big difference to explain away, and serious doubts remain about whether the Penrose–Hameroff theory is technically viable.
OOR would force collapse, but it remains unestablished how conscious volition is invoked, because collapse is occurring objectively in terms of Penrose’s notion of space-time blisters. It remains unclear how these hypothetical objective or “platonic” entities, as Penrose puts it, relate to subjective consciousness or volition. Hameroff (2012) in “How quantum brain biology can rescue conscious free will” attempts an explanation, but this simply comes down to objective OOR control, while attempting to exploit the retrocausality in Symbiotic Existential Cosmology:
Orch OR directly addresses conscious causal agency. Each reduction/conscious moment selects particular microtubule states which regulate neuronal firings, and thus control conscious behavior. Regarding consciousness occurring “too late,” quantum state reductions seem to involve temporal non-locality, able to refer quantum information both forward and backward in what we perceive as time, enabling real-time conscious causal action. Quantum brain biology and Orch OR can thus rescue free will.
A pivotal critique of the whole concept of the Hameroff-Penrose model is that it is invoking a separate parallel quantum computation process to the conventional notions of synaptic receptors and membrane excitations. Microtubules have essential roles in cellular architecture and nutrient transport in neurons, so the idea that they are critical for quantum computation generating subjective consciousness is a logical confound to the evolution of cellular and organismic processes supporting known neurobiology, while neglecting the much more widespread and general quantum potentialities in known neuroscientific processes. Nevertheless both microtubules and excitable membranes and neurotransmitter synaptic proteins do go back to single celled eucaryotes, so could be mutually involved.
For these reasons Symbiotic Existential Cosmology remains agnostic about such attempts to invoke unestablished, exotic quantum effects in parallel with assumed classical processes, and instead points to the potentially quantum non-IID nature of brain processes generally, meaning that neurodynamics is a fractal quantum process not required to be adiabatically isolated, as decoherence limits of technological quantum computing suggest.
QBism and the Conscious Consensus Quantum Reality
QBism (von Baeyer 2016) is an acronym for "quantum Bayesianism", a founding idea from which it has since moved on. It is a version of quantum physics founded on the conscious expectations of each physicist and their relationships with other physicists. According to QBism, experimental measurements of quantum phenomena do not quantify some feature of an independently existing natural structure. Instead, they are actions that produce experiences in the person or people doing the measurement.
“When I take an action on the world, something genuinely new comes out.”
This is very similar to the way Symbiotic Existential Cosmology presents consciousness as primary in the sense that we all experience subjective consciousness and infer the real world through the consensus view between conscious observers of our experiences of what we come to call the physical world. So although we know the physical world is necessary for our biological survival – the universe is necessary – we derive our knowledge of it exclusively through our conscious experiences of it.
The focus is on how to gain knowledge in a probabilistic universe... In this probabilistic interpretation, collapse of the quantum wave function has little to do with the object observed/measured. Rather, the crux of the matter is change in the knowledge of the observer based on new information acquired through the process of observing. Christopher Fuchs explains: “When a quantum state collapses, it’s not because anything is happening physically, it’s simply because this little piece of the world called a person has come across some knowledge, and he updates his knowledge… So the quantum state that’s being changed is just the person’s knowledge of the world, it’s not something existent in the world in and of itself.”
QBism is agnostic about whether there is a world that is structured independently of human thinking. It doesn’t assume we are measuring pre-existing structures, but nor does it pretend that quantum formalism is just a tool. Each measurement is a new event that guides us in formulating more accurate rules for what we will experience in future events. These rules are not just subjective, for they are openly discussed, compared and evaluated by other physicists. QBism therefore sees physicists as permanently connected with the world they are investigating. Physics, to them, is an open-ended exploration that proceeds by generating ever new laboratory experiences that lead to ever more successful, but revisable, expectations of what will be encountered in the future.
In QBism the wave function is no longer an aspect of physical reality as such, but a feature of how the observer's expectations will be changed by an act of quantum measurement.
The principal thesis of QBism is simply this: quantum probabilities are numerical measures of personal degrees of belief.
In the conventional version of quantum theory, the immediate cause of the collapse is left entirely unexplained, or "miraculous" although sometimes assumed to be essentially random. QBism solves the problem as follows. In any experiment the calculated wave function furnishes the prior probabilities for empirical observations that may be made later. Once an observation has been made new information becomes available to the agent performing the experiment. With this information the agent updates their probability and their wave function, instantaneously and without magic.
So in the Wigner's friend experiment, the friend reads the counter while Wigner, with his back turned to the apparatus, waits until he knows that the experiment is over. The friend learns that the wave function has collapsed to the up outcome. Wigner, on the other hand, knows that a measurement has taken place but doesn’t know its result. The wave function he assigns is a superposition of two possible outcomes, as before, but he now associates each with a definite reading of the counter and with his friend’s knowledge of that reading — a knowledge that Wigner does not share. For the QBist there is no problem: Wigner and his friend are both right. Each assigns a wave function reflecting the information available to them, and since their respective compilations of information differ, their wave functions differ too. As soon as Wigner looks at the counter himself or hears the result from his friend, he updates his wave function with the new information, and the two will agree once more—on a collapsed wave function.
According to the conventional interpretation of quantum mechanics, in the Schrödinger's cat experiment, the value of a superimposed wave function is a blend of two states, not one or the other. What is the state of the cat after one half-life of the atom, provided you have not opened the box? The fates of the cat and the atom are intimately entangled. An intact atom implies a living cat; a decayed atom implies a dead cat. It seems to follow that since the atom’s wave function is unquestionably in a superposition so is the cat: it is both alive and dead. As soon as you open the box, the paradox evaporates: the cat is either alive or dead. But while the box is still closed — what are we to make of the weird claim that the cat is dead and alive at the same time? According to QBism, the state of an unobserved atom, or a cat, has no value at all. It merely represents an abstract mathematical formula that gives the odds for a future observation: 0 or 1, intact or decayed, dead or alive. Claiming that the cat is dead and alive is as senseless as claiming that the outcome of a coin toss is both heads and tails while the coin is still tumbling through the air. Probability theory summarises the state of the spinning coin by assigning a probability of 1/2 that it will be heads. So QBism refuses to describe the cat’s condition before the box is opened and rescues it from being described as hovering in a limbo of living death.
If the wave-function, as QBism maintains, says nothing about an atom or any other quantum mechanical object except for the odds for future experimental outcomes, the unperformed experiment of looking in the box before it is opened has no result at all, not even a speculative one. The bottom line: according to the QBist interpretation, the entangled wave-function of the atom and the cat does not imply that the cat is alive and dead. Instead, it tells an agent what they can reasonably expect to find when they open the box.
This makes QBism compatible with phenomenologists, for whom experience is always “intentional” – i.e. directed towards something – and these intentionalities can be fulfilled or unfulfilled. Phenomenologists ask questions such as: what kind of experience is laboratory experience? How does laboratory experience – in which physicists are trained to see instruments and measurements in a certain way – differ from, say, emotional or social or physical experiences? And how do lab experiences allow us to formulate rules that anticipate future lab experiences?
Another overlap between QBism and phenomenology concerns the nature of experiments. Experiments are performances. They’re events that we conceive, arrange, produce, set in motion and witness, yet we can’t make them show us anything we wish. That doesn’t mean there is a deeper reality “out there” – just as, with Shakespeare, there is no “deep Hamlet” of which all other Hamlets we produce are imitations. In physics as in drama, the truth is in performance.
However, there is one caveat. We simply don't know whether consciousness itself can be associated only with collapsed probabilities or whether it is in some way also steeped, even as a complement, in the spooky world of entanglement, so reducing the entirety of physics to collapsed probabilities may not convey the entire picture. The degree to which conscious experiences correspond to unstable brain states at the edge of chaos, making phase coherence measurements akin to, or homologous with, quantum measurements, may mean this picture is vastly more complicated than meets the eye.
The observer consensus view of QBism effectively assumes AOE (Absoluteness of Observed Events). Eric Cavalcanti, the lead author of the Wigner no-go experiment, notes: The notion of reality afforded by QBism, I propose, will correspond to the invariant elements of any theory that has pragmatic value to all rational agents—that is, the elements that are invariant upon changes of agent perspectives.
The Born Probability Interpretation and the Notion of Quantum “Randomness”
The Born rule provides a link between the mathematical formalism of quantum theory and experiment, and as such is almost single-handedly responsible for practically all predictions of quantum physics (Landsman 2008). The rule projects a superposed state vector in an inner product space, expressed in a basis of eigenvectors, onto the eigenspace of one of the eigenvalues λᵢ, as a purely algebraic operation.
It states that if an observable corresponding to a self-adjoint operator A with discrete spectrum is measured in a system with normalised wave function |ψ⟩, then:
(1) the measured result will be one of the eigenvalues λᵢ of A, and
(2) the probability of measuring a given eigenvalue λᵢ will equal ⟨ψ|Pᵢ|ψ⟩, where Pᵢ is the projection onto the eigenspace of A corresponding to λᵢ.
Equivalently, the probability can be written as ⟨ψ|Pᵢ|ψ⟩ = ‖Pᵢ|ψ⟩‖².
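A minimal numerical sketch of the rule as just stated; the observable and state below are arbitrary illustrative choices:

```python
import numpy as np

# Born rule sketch: probabilities of eigenvalues of a self-adjoint operator A
# for a normalised state |psi>, p(lambda_i) = <psi| P_i |psi>.
A = np.array([[1.0, 1.0],
              [1.0, -1.0]])           # an arbitrary self-adjoint observable
eigvals, eigvecs = np.linalg.eigh(A)  # real eigenvalues, orthonormal eigenvectors

psi = np.array([0.6, 0.8])            # an arbitrary normalised state vector

for lam, v in zip(eigvals, eigvecs.T):
    P = np.outer(v, v.conj())         # projector onto the eigenspace of lambda
    p = np.real(psi.conj() @ P @ psi) # Born probability <psi|P|psi>
    print(f"eigenvalue {lam:+.3f}: probability {p:.3f}")

# The probabilities sum to 1 because the projectors resolve the identity.
```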
The rule for calculating probabilities was really just an intuitive guess by the German physicist Max Born. So was Schrödinger’s equation itself. Neither was supported by rigorous derivation. It is simply a probability law on the Hilbert space representation (Griffiths 2014) and says nothing about whether quantum uncertainty is purely random or whether there is a hidden variable theory governing it. Broadly speaking the rule is postulated, as derived above, and not proven experimentally, but assumed theoretically in experimental work:
It's not clear what exactly is meant by an experimental verification of the Born rule - the Born rule says how the quantum state relates to the probability of measurement, but "the quantum state" itself is a construct of the quantum theory that is rarely, if ever, experimentally accessible other than running repeated tests and inferring which state it was from the results assuming the Born rule is valid.
This is because we start initially with a Schrödinger wave equation Ĥψ = iħ ∂ψ/∂t, where Ĥ is the Hamiltonian energy operator, but the wave function ψ is experimentally inaccessible to classical observation, so we have to use the Born probability interpretation to get a particle probability we can sample, e.g. in the pattern of photons on the photographic plate in the two-slit interference experiment in fig 71(f).
There are obvious partial demonstrations, but these just lead to averages that statistically approach the probability interpretation, but don’t tell us anything about the underlying process which generates these indeterminacies.
Born's rule has been verified experimentally numerous times. However, only the overall averages have been verified. For example, if the prediction is a 60% probability, then over a large number of trials the average outcome will approach the predicted value of 60%. This has been verified by measuring particle spin at an angle A relative to its previously known spin direction, where the predicted probability is cos²(A/2). These predictions have also been verified with entangled pairs (Bell states), where the same-spin prediction is sin²(A/2). What has not been verified is whether the outcomes are due to independent probability, or are guided by some balancing mechanism.
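A toy sketch of the point being made: sampling outcomes with probability cos²(A/2) reproduces the predicted average over many trials, while saying nothing about what generates each individual outcome (a library pseudo-random generator simply stands in here for whatever that process is):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.deg2rad(60.0)                 # relative angle between spin measurements
p_same = np.cos(A / 2) ** 2          # Born-rule prediction for matching outcomes

trials = rng.random(100_000) < p_same   # stand-in for individual measurements
print(f"predicted: {p_same:.4f}, observed frequency: {trials.mean():.4f}")
# The long-run frequency converges on cos^2(A/2); whether each individual
# outcome is "truly random" or guided by a deeper mechanism is untouched.
```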
Landsman (2008) confirms this picture:
The pragmatic attitude taken by most physicists is that measurements are what experimentalists perform in the laboratory and that probability is given the frequency interpretation (which is neutral with respect to the issue whether the probabilities are fundamental or due to ignorance). Given that firstly the notion of a quantum measurement is quite subtle and hard to define, and that secondly the frequency interpretation is held in rather low regard in the philosophy of probability, it is amazing how successful this attitude has been!
Heisenberg (1958), notes that, in the Copenhagen interpretation, probabilities arise because we look at the quantum world through classical glasses:
One may call these uncertainties [i.e. the Born probabilities] objective, in that they are simply a consequence of the fact that we describe the experiment in terms of classical physics; they do not depend in detail on the observer. One may call them subjective, in that they reflect our incomplete knowledge of the world.
Landsman (2008) clarifies:
In other words, one cannot say that the Born probabilities are either subjective (Bayesian, or due to ignorance) or objective (fundamentally ingrained in nature and independent of the observer). Instead, the situation is more subtle and has no counterpart in classical physics or probability theory: the choice of a particular classical description is subjective, but once it has been made the ensuing probabilities are objective and the particular outcome of an experiment compatible with the chosen classical context is unpredictable. Or so Bohr and Heisenberg say. ... In most interpretations of quantum mechanics, some version of the Born rule is simply postulated.
Roger Penrose (foreword vi in Wuppuluri & Doria 2018) notes:
Current quantum mechanics, in the way that it is used, is not a deterministic scheme, and probabilistic behaviour is taken to be an essential feature of its workings. Some would contend that such indeterminism is here to stay, whereas others argue that there must be underlying ‘hidden variables’ which may someday restore a fully deterministic underlying ontology. ... Personally, I do not insist on taking a stand on this issue, but I do not think it likely that pure randomness can be the answer. I feel that there must be something more subtle underlying it all.
John von Neumann (1951) is highly critical of both physical and algorithmic sources of randomness:
We see then that we could build a physical instrument to feed random digits directly into a high-speed computing machine and could have the control call for these numbers as needed. The real objection to this procedure is the practical need for checking computations. If we suspect that a calculation is wrong, almost any reasonable check involves repeating something done before. At that point the introduction of new random numbers would be intolerable. I think that the direct use of a physical supply of random digits is absolutely inacceptable for this reason and for this reason alone. … Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin. For, as has been pointed out several times, there is no such thing as a random number – there are only methods to produce random numbers, and a strict arithmetic procedure of course is not such a method.
Ruth Kastner (2013) claims that the transactional interpretation is unique in giving a physical explanation for the Born rule. Zurek (2005) has made a derivation from entanglement, and Sebens and Carroll have done so from an Everett perspective, although this is not strictly meaningful, since every branch of the multiverse is explored.
Because wave interference is measured through particle absorption, experiments have been performed (Sinha et al. 2010) to look for higher-order processes that would violate the pairwise interference implied by the Born interpretation: Born's rule predicts that quantum interference, as shown by a double-slit diffraction experiment, occurs only between pairs of paths. Using a three-slit apparatus and sampling all combinations of open slits, one can therefore confirm that interference is purely additive over pairs, so the Born rule applies.
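One way of expressing the three-slit test is through the Sorkin combination of detection probabilities over all slit combinations, which vanishes identically if, as the Born rule implies, interference arises only between pairs of paths. A schematic sketch with arbitrary assumed complex amplitudes:

```python
import numpy as np
from itertools import combinations

# Idealised amplitudes reaching a detector from slits A, B, C (arbitrary values).
amp = {'A': 0.5 * np.exp(1j * 0.3),
       'B': 0.4 * np.exp(1j * 1.1),
       'C': 0.6 * np.exp(1j * 2.0)}

def P(slits):
    """Born-rule detection probability with only the named slits open."""
    return abs(sum(amp[s] for s in slits)) ** 2

# Sorkin combination: zero iff interference is built from pairs of paths only.
kappa = (P('ABC')
         - sum(P(''.join(pair)) for pair in combinations('ABC', 2))
         + sum(P(s) for s in 'ABC'))
print(f"three-path interference term kappa = {kappa:.2e}")  # ~0 under the Born rule
```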
Other experiments and theories attempt to derive the Born interpretation from more basic quantum properties. Masanes, Galley & Müller (2019) show that Born's rule and the post-measurement state-update can be deduced from the other quantum postulates, referred to as unitary quantum mechanics, together with the assumption that ensembles on finite-dimensional Hilbert spaces are characterised by finitely many parameters. Others, such as Cabello (2018), use graph theory. The movement to regenerate the whole of quantum theory from more basic axioms, e.g. of information or probability itself, is called quantum reconstruction, of which QBism is an example (Ball 2017).
Zurek (1991, 2003, 2005) has introduced the notions of decoherence, quantum Darwinism and envariance (environment-assisted invariance) to explain the transition from quantum reality to the classical. Decoherence is the way third-party quanta disrupt the off-diagonal wave amplitudes of entanglement, resulting in projection onto the "observed" classical states through exponential damping, as in fig 71c. Quantum Darwinism enriches this picture by developing the notion that some quantum "pointer" states can be more robust to decoherence by replicating their information into the environment. Envariance describes this process in terms of quantum measurement, in which the environment becomes entangled with the apparatus of ideal von Neumann measurement, again promoting the transition to the classical. While these do not deal with the question of hidden variable theories versus randomness of uncertainty, they have been claimed to derive the Born probabilities (Zurek 2005, Harris et al. 2016) through multiple environmental interactions, illustrated by Laplace playing-card probabilities in fig 77d. However, all the approaches to independent derivation of the Born rule, including envariance, have been criticised as being logically circular (Schlosshauer & Fine 2005, Landsman 2008).
Illustrating the difficulty of the problem, John Wheeler in 1983 proposed that statistical regularities in the physical world might emerge from such a situation, as they sometimes do from unplanned crowd behaviour (Ball 2019):
“Everything is built higgledy-piggledy on the unpredictable outcomes of billions upon billions of elementary quantum phenomena", Wheeler wrote. But there might be no fundamental law governing those phenomena — indeed, he argued, that was the only scenario in which we could hope to find a self-contained physical explanation, because otherwise we’re left with an infinite regression in which any fundamental equation governing behavior needs to be accounted for by some even more fundamental principle. “In contrast to the view that the universe is a machine governed by some magic equation, … the world is a self-synthesizing system,” Wheeler argued. He called this emergence of the lawlike behavior of physics “law without law.”
However, the probability interpretation leads to the incorrect notion that quantum reality is somehow just a random process. Processes like radioactive decay are commonly treated as random by default, simply because they are indeterminate and don't obey a fixed law. Study smarter, for example, states:
Radioactive decay is a random process, meaning it is impossible to predict when an atom will emit radiation. By the random nature of radioactive decay, we mean that for every atom, there are known probabilities that they will emit radiation (and thus decay radioactively) in the next second. Still, the fact that all we have is a probability makes this a random process. We can never determine ahead of time if an atom will decay in the next second or not. This is just like throwing a (fair, cubic) dice every second.
But probability is not randomness: non-random and even deterministic processes can also have probabilities, fig 77d(4b). The quoted passage equates the quantum tunnelling of individual nuclei to a dice throw, which is a chaotic classical process with geometric constraints, so it is equating quantum uncertainty with classical chaotic butterfly-effect systems.
Santha & Vazirani (1986) note:
Unfortunately, the available physical sources of randomness (including zener diodes and geiger counters) are imperfect. Their output bits are not only biased but also correlated.
Fig 77d: (1,2) Exponential decay of erratic radioactivity as the population of radioactive atoms becomes depleted. (3) Zener diode avalanche output. (4) Graph of the chaotic logistic iteration displaying interval-filling ergodicity in the frequency graph (4b) and point plot (4c). One can make a Born interpretation of this as a pseudo-wave function, showing the relative probabilities of finding an iteration point at 0.5 and 0.9 by normalising the function over its integral, yet the process is deterministic. Therefore a probability interpretation does not imply randomness. (5,6) Quasi-random and pseudo-random (or random) 2-D distributions. (7) Sketch derivation of the Born formula using Laplace probabilities of concealed and revealed playing cards (Zurek 2005). This shows how quantum theory leads to probabilities based on the physical state of the system of interest S when it is entangled with 'the environment' E. Such entanglement would occur as a result of decoherence. When S and E are maximally entangled, a swap on S has no effect on its state. This is clear, since its effect can be undone without acting on S – by a 'counterswap' that involves only E.
Geiger counters register quantum tunnelling in individual radioactive nuclei that reflects quantum uncertainty, but the decays occur very slowly over long periods, so the process is not simply random but one of erratic exponential decay over time. Zener diodes at high voltage undergo avalanche breakdown, a solid-state feature that lets through a flood of electrons with a fixed relaxation time, so again the output is not a direct measure of quantum uncertainty but of its compounded effects.
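A small sketch of the distinction being drawn: a fixed per-interval decay probability for each remaining nucleus produces the smooth exponential depletion of fig 77d(1,2), with erratic interval-to-interval counts, irrespective of what generates the individual events (the decay probability and population below are arbitrary assumptions, and a library pseudo-random generator stands in for the individual tunnelling events):

```python
import numpy as np

rng = np.random.default_rng(1)
n_atoms = 100_000
p_decay = 0.01                 # assumed per-time-step decay probability per nucleus

remaining, counts = n_atoms, []
for _ in range(500):
    decays = rng.binomial(remaining, p_decay)  # erratic count in this interval
    counts.append(decays)
    remaining -= decays

# On average the counts fall off as n_atoms * p * (1 - p)**t: smooth exponential
# decay, with fluctuating individual counts as the population depletes.
print(counts[:5], "...", counts[-5:])
```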
Such compounded erratic behaviour is remarkably similar to that of chaotic molecular systems displaying a butterfly effect. Very simple algorithmic chaotic systems, such as the logistic iteration xₙ₊₁ = r xₙ(1 − xₙ), where x₁ is chosen on (0,1) as a seed, were introduced to model rabbit reproduction in a finite pasture (May 1976); the chaotic phase at r = 4 is ergodic and asymptotic to an interval-filling stochastic process. This is shown in fig 77d, where the iteration generates an asymptotic frequency distribution, which has been normalised over its integral to produce a probability function playing the same role as the squared wave function, giving a probability interpretation parallel to the Born rule for an elementary deterministic discrete iteration. This confirms that the Born rule does not in any way imply that the basis of quantum uncertainty lies in randomness.
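A minimal sketch of the construction just described: iterating the logistic map at r = 4 and normalising the histogram of visited points yields a Born-like probability density from a fully deterministic process (seed and bin count are arbitrary choices):

```python
import numpy as np

r, x = 4.0, 0.123456            # arbitrary seed on (0,1)
points = np.empty(200_000)
for i in range(points.size):
    x = r * x * (1.0 - x)        # deterministic logistic iteration
    points[i] = x

density, edges = np.histogram(points, bins=200, range=(0, 1), density=True)
# 'density' plays the role of a normalised |psi|^2: e.g. compare the relative
# probability of landing near 0.5 and near 0.9, as in fig 77d(4b).
i05, i09 = np.searchsorted(edges, [0.5, 0.9]) - 1
print(f"density near 0.5: {density[i05]:.2f}, near 0.9: {density[i09]:.2f}")
# The analytic invariant density is 1/(pi*sqrt(x*(1-x))), i.e. ~0.64 and ~1.06.
```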
The point distribution (4c) shows closer detail, confirming that this deterministic dynamical process displays pseudo-random features akin to (6), modulated by the overall probability distribution (4b), but there is a subtle anomaly in (4c) in the horizontal strip neighbouring y = 0.75. This does not appear in the probability distribution, which is asymptotically smooth for large n. The reason is that the iteration has two fixed-point solutions to x = f(x): 0 and 0.75, both repelling at r = 4. There are 2ⁿ periodic points of period n, so in a classical chaotic iteration the unstable periodic points are dense, but they have measure zero, as a countable subset of [0, 1], so their probability of occurrence is zero; their neighbouring points can however be seen in (4c) as trajectories lingering near the repelling fixed point before diverging exponentially from it in a butterfly effect. None of this can happen in a quantum system, and this detail is not accessible in the quantum situation either, because we have recourse only to the Born rule, so the interference experiment reflects only what we see in (4b) – the macroscopic experimental arrangement, as a statistical particle approximation to the wave power, not the underlying process governing the individual outcomes. In fig 57, we see that the classical chaotic stadium billiard (1) becomes scarred by such repelling periodic orbits in the quantum situation, although open quantum systems like the quantum kicked top (2) become entangled.
Algorithmic processes approximating randomness for experimental purposes are classified as either pseudo-random, or quasi-random.
Pseudo-random numbers more closely simulate randomness, as in the pseudorandom number generator (PRNG), or deterministic random bit generator (DRBG), used in computational random functions. An example pseudo-random number generator is the linear congruential scheme Aₙ₊₁ = (Z·Aₙ + I) mod M, where Aₙ is the previous pseudo-random number generated, Z is a constant multiplier, I is a constant increment, and M is a constant modulus. Xorshift PRNGs, which are examples of linear feedback shift register algorithms, including WELL, generate the next number in their sequence by repeatedly taking the exclusive OR of a number with a bit-shifted version of itself. These are all periodic with very long periods; for example the Mersenne Twister MT19937 is based on the Mersenne prime M19937 = 2¹⁹⁹³⁷ − 1. Quasi-random processes, also called low-discrepancy sequences, approach an even distribution more rapidly, but less faithfully than true randomness, because they lack larger improbable fluctuations. They are generated by a number of algorithms, including Faure, Halton and Sobol, each of which has a short arithmetic computational procedure. None of these quasi- or pseudo-random algorithms are in any way related to quantum uncertainty, nor are they random processes.
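A minimal sketch of the two pseudo-random schemes named above – a linear congruential generator and a 64-bit xorshift step – with standard textbook constants used purely for illustration:

```python
def lcg(seed, Z=1664525, I=1013904223, M=2**32):
    """Linear congruential generator: A_{n+1} = (Z*A_n + I) mod M."""
    a = seed
    while True:
        a = (Z * a + I) % M
        yield a / M                      # scale to [0, 1)

def xorshift64(seed):
    """Xorshift: XOR the state with bit-shifted copies of itself (Marsaglia)."""
    x = seed & 0xFFFFFFFFFFFFFFFF
    while True:
        x ^= (x << 13) & 0xFFFFFFFFFFFFFFFF
        x ^= x >> 7
        x ^= (x << 17) & 0xFFFFFFFFFFFFFFFF
        yield x / 2**64

g, h = lcg(42), xorshift64(88172645463325252)
print([round(next(g), 4) for _ in range(5)])
print([round(next(h), 4) for _ in range(5)])
# Both sequences are fully deterministic and periodic; they merely mimic
# statistical randomness and have no connection to quantum uncertainty.
```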
This leaves us in a position where the assumption of quantum randomness remains unestablished and in which a complex many-body non-local interaction, given the vast number of particles in the universe, could approximate the Born interpretation to the limits of any experimental error.
Summarising the interplay between the notion of “random” probabilistic wave function collapse and hidden variable theories, Symbiotic Existential Cosmology favours the latter on the basis that:
(1) The verification of Bell entanglement was a confirmation of the EPR claim and Einstein’s quote:
God does not play dice with the universe.
(2) The transcausal view of quantum transactions being a complex underlying hidden-variable process, which is also shared by (3) superdeterminism, violating statistical independence and involving additional non-local processes, (4) non-IID processes in biology not converging to the classical, and (5) theories in which quantum measurement may contradict objective reality through process entanglements extending beyond Bell state entanglements.
The intrinsic difficulty that all such theories since Bohm's pilot wave formulation face is that expectations are for a master equation like Schrödinger's, when any system is likely to be a vastly complex non-local many-body problem. This criticism of the assumption of randomness applies equally to molecular dynamics, mutational evolution and neurodynamics.
The Transition from Quantum Universe to Conscious Experience
Focusing on the extreme end of the von Neumann chain of quantum measurement at the conscious observer leads to JA Wheeler’s notion of the self-observing universe – the notion that without conscious observers, the physical universe could not become manifest, fig 108. This raises two fundamental problems: (1) Many aspects of subjective experience such as tri-chromatic colour vision show manifest evidence of subjective consciousness having a biological basis, even if only as a sensory boundary condition upon it. This is consistent with the neuroscientific view that subjective consciousness is a biologically-based internal model of reality constructed by the brain. (2) Conscious life in the universe is a late comer, requiring long epochs to generate the chemical elements, biogenesis and the evolution of complex organisms, so the universe had to be able to manifest before conscious life evolved to observe it.
From the objective point of view, we need to understand why the quantum universe appears as macroscopically classical as it does. This indicates that the universe is in a critical phase transition between wave function evolution under the Schrödinger wave equation and wave function collapse. We know that carefully prepared experiments, from quantum interference, through the Schrödinger cat paradox to Bell entanglement experiments show we can display key quantum properties at will, by generating a suitable measurement environment in the lab. But there are two key reasons why the world appears classical. (1) The matter in the world around us consists of long-lived fermionic complexes whose defining properties involve them interacting predominantly as particles which do not fully show their wave aspects, although they do exist. (2) Many of our fundamental modes of interaction with the world are particulate, so involve Born rule discrete measurements to sense the world around us.
Vision is the classic case: we detect the visual world through incident particulate photons exciting rhodopsin molecules in the rod and cone cells of the retina, converting these into the visual images we see through a tri-chromatic 3D simulated reality of the world around us. Eyes are thus photon detectors using type 1 von Neumann wave collapse, just as in a Geiger counter, or a photon detector in a Bell theorem entanglement experiment. This means we can’t sense the wave properties of photons directly, but can do so only by performing the same multiple wave-particle reductions as in a quantum interference experiment. Thus we can look at the rainbows on the underside of a CD or DVD and know these are the same patterns we see on an interference photographic plate, where a large number of particle impacts indicate a high wave amplitude, say in the green wavelengths. Audition, by contrast, may be a direct transduction of sound waves into electrochemical depolarisation and action potentials, but olfaction, like vision, is based on particulate occupancy of a molecular receptor. This overall picture suggests that type 1 collapse processes are not restricted to a conscious observer opening the box in a Schrödinger cat experiment, but are general to macroscopic measurement apparatus becoming environmentally "entangled" with the experiment, and that brings in the notions of quantum decoherence and its siblings, recoherence, discord and envariance, already discussed.
A key point about the notion of decoherence is that interactions with third-party quanta do not just invoke quantum entanglement, but both the von Neumann type 2 Schrödinger wave and type 1 wave-reduction particle properties of quanta, including creation and annihilation and wave diffraction of molecular systems in the quantum billiards of the cellular cytosol. Central to this is the fact that classical dynamics in the Laplacian universe is deterministic, so that, although we can form statistical models of it, e.g. in classical thermodynamics, no matter how probabilistically entropic these processes are, and however computationally unpredictable due to the butterfly effects of classical chaos, fig 57, they remain classically deterministic throughout, so there is no basis to assert any form of fundamental randomness. But the situation becomes altogether different in the quantum universe, because of the complementary wave and particle properties of each and every quantum, invoking causality-violating type 1 collapse.
Fig 77e: Key particle and wave interactive processes in quantum interactive billiards. Left: Feynman diagrams for exchange of virtual particles in the electromagnetic field include both photons and electron-positron pairs (a, b). Real particle electron scattering (c) becomes electron-positron creation and annihilation (d) if the time direction of the intermediate electron becomes time-reversed. Therefore real particle scattering involves creation and annihilation operators as well. Right: Because all quanta are both particles and waves, the wave aspects of molecules are also involved in interactive diffraction effects amplifying uncertainty.
Firstly, molecular billiards is not just a question of tracing how molecules collide, because processes even as simple as the scattering of an electron by collision with, or emission of, a photon are governed by creation and annihilation operators, which is where the notion of entanglement with the apparatus applies. Fig 77e shows, on the left, the homology between the scattering of a real photon by an electron and the virtual interactions of the basic QED Feynman diagrams (a, b): real electron scattering (c), and the corresponding case (d), in which the intervening electron becomes time-reversed, creating a real electron-positron pair and resulting in a second annihilation producing another gamma-ray photon. We already know that Bohm pilot waves cannot handle creation events, so we can see that they cannot depict molecular billiards either, although these quantum operators do permit quantum entanglement.
Secondly, on the wave theoretic side, molecular billiards is a situation where open system quantum chaos pertains in a way in which uncertainties of kinetic interaction invoke butterfly-effect amplification of quantum indeterminacy flooding the system dynamics, as illustrated on the right and discussed in the paragraph below.
This doesn’t mean that we can wave a fairy wand over reality and claim the quantum universe and its uncertainty is intrinsically random, because we know that in every real situation a unique outcome occurs – the cat is alive or dead in each case, as are the vagaries of fate we superstitiously attribute to karma, and hidden variable theories may show this.
Looking at the wave aspect, one can define the indeterminacy radius of a particle as the distance travelled before the diffraction of the particle by an identical particle is twice the particle diameter, e.g. in H2 gas, as in fig 77e, resulting in complete uncertainty of direction in the elastic collision. In particular, at 300 K the average kinetic energy of molecular hydrogen is 0.04 eV and the speed is 2,200 m/sec. The wavelength is thus 0.18 nm, greater than the Bohr diameter of 0.105 nm. Thus the indeterminacy radius is of the order of magnitude of the atomic diameter, so the gas could just as well be a liquid, as it is in biological tissues, permitting a variety of quantum effects, from active site tunnelling in enzymes, to protein folding by quantum superposition of states. Since the wavelength varies inversely with the square root of the molecular weight at a given temperature, we can make a realistic estimate of the indeterminacy radius of an amino acid in aqueous solution as follows. The molecular weight of glycine is approximately 75, so the wavelength is of the order of 0.18/√75 = 0.021 nm. If we take the length of an amino acid as 0.36 nm and use the formula sin(θ) = ±λ/w to estimate the angle θ of fig 14(b), using the 0.15 nm diameter of a water molecule as w, we find sin(θ) = 0.021/0.15, or θ = 8.04°. Thus over 0.36 nm the uncertainty of position would be about 0.05 nm, approximately the Bohr molecular radius. The uncertainty will thus become complete after a few kinetic encounters.
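For transparency, the following short Python restatement simply reproduces the arithmetic of the estimate above as stated in the text (the 0.18 nm reference wavelength, the 1/√MW scaling, the 0.15 nm water diameter and the 0.36 nm amino-acid length are taken directly from the paragraph, not re-derived from first principles):

```python
import math

# Values as stated in the text above
lambda_ref_nm = 0.18      # reference de Broglie wavelength at 300 K (text value)
mw_glycine    = 75        # approximate molecular weight of glycine
w_water_nm    = 0.15      # diameter of a water molecule used as the aperture w
L_amino_nm    = 0.36      # length of an amino acid

# Wavelength scaling: lambda varies inversely with the square root of molecular weight
lambda_gly_nm = lambda_ref_nm / math.sqrt(mw_glycine)          # ~0.021 nm

# Diffraction angle from sin(theta) = lambda / w
theta = math.asin(lambda_gly_nm / w_water_nm)                   # ~8 degrees
print(f"lambda ≈ {lambda_gly_nm:.3f} nm, theta ≈ {math.degrees(theta):.1f} deg")

# Positional uncertainty accumulated over one amino-acid length
delta_nm = L_amino_nm * math.tan(theta)                         # ~0.05 nm
print(f"positional uncertainty over {L_amino_nm} nm ≈ {delta_nm:.3f} nm")
```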
This brings us finally to the question of what processes in the universe organismic subjective consciousness might provide in terms of quantum measurement. The first point to note is that the notion that the warm wet brain, its dependence on action potentials, and the law of mass action mean that brain processes are in the classical limit is untrue. We approach the classical description only by making repeated independent identically-distributed (IID) quantum measurements. Neither brain dynamics nor biological evolution satisfy these constraints, so all type 1 processes in biology, which endlessly make the context of one measurement the outcome of another under changed circumstances, fail to converge to the classical.
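The contrast can be illustrated with a toy probabilistic model of my own (not drawn from the text): IID sampling at a fixed probability converges to a single classical frequency, whereas a simple self-referential process in which each outcome changes the context of the next, here a Pólya urn, converges to a different limiting frequency on every run, so no single classical value is approached:

```python
import random

def iid_frequency(p=0.3, n=100_000, seed=None):
    """Empirical frequency of IID trials with fixed probability p: converges to p."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n

def polya_frequency(n=100_000, seed=None):
    """Polya urn: each draw adds another ball of the drawn colour, so every outcome
    changes the context of the next draw (non-IID). The limiting fraction is itself
    random, differing from run to run: no single classical frequency is approached."""
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(n):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

print([round(iid_frequency(seed=s), 3) for s in range(5)])    # all close to 0.3
print([round(polya_frequency(seed=s), 3) for s in range(5)])  # scattered across (0, 1)
```

This is only a classical caricature of the non-IID point, but it shows why repeated context-dependent measurements need not settle onto one classical expectation value.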
Symbiotic Existential Cosmology advances a view of the cerebral cortex as an environmental boundary condition filter on root generative basal brain awareness, using both existing sensory information and accrued contextual memory to attempt to deal with existential threats arising from environmental uncertainty, which is ultimately a cumulative product of quantum uncertainty in the interactive environment. The brain is designed as a dynamic wave-theoretic parallel-processing computational organ, using graded electrochemical potentials augmented by discrete action potentials in pyramidal neurons for long-distance communication. In situations where there is no manifest solution to the ongoing existential crisis, the brain dynamic enters a critical state of dynamically poised edge-of-chaos excitation modulated by wave phase coherence, which itself forms a fundamental homologue with the wave sampling foundation of quantum uncertainty in fig 71c. As we understand it, the classical universe is causally closed and provides no avenue for subjective conscious awareness to intervene, but in the critical phase transition state of the dynamically poised brain becoming arbitrarily sensitive to quantum measurement, intuition becomes an instrument for the ‘aha’ moment of quantum collapse, providing an anticipatory response aiding organismic survival, which is why evolution has retained subjective consciousness throughout the metazoa. Hence quantum uncertainty is the cubic centimetre of indeterminacy through which subjective consciousness is able to have efficacy of volition over the physical universe.
Quantum Darwinism meets Subjective Consciousness
We understand that the actual explanation of decoherence is interaction with other quantum systems, resulting in quantum entanglements with third parties and somehow, it's possible for multiple observers to agree about the properties of quantum systems. Zurek (2009) argues that two things must be true. First, quantum systems must have "pointer states" that are especially robust in the face of disruptive decoherence by the environment and classical behaviour - the existence of well-defined, stable, objective properties - is possible only because pointer states of quantum objects exist, which are preserved, or transformed into a nearly identical state. This implies that the environment preserves some states while degrading others. A particle's position is resilient to decoherence, but superpositions decohere into localised pointer states, so that only one can be observed. But there's a second condition that a quantum property must meet to be observed – how substantial a footprint it makes in the environment. The states that are best at creating replicas in the environment - i.e. the "fittest" - are the only ones accessible to measurement. The same stability property that promotes environment-induced super-selection of pointer states also promotes quantum Darwinian fitness, or the capacity to generate replicas. "The environment, through its monitoring efforts, decoheres systems, and the very same process that is responsible for decoherence should inscribe multiple copies of the information in the environment".
Fig 77f: Experimental realisation of quantum Darwinism.
An experimental realisation of this idea has been performed by Unden et al. (2019). The team focused on NV centres, which occur when two adjacent carbon atoms within a diamond lattice are replaced with a nitrogen atom and an empty lattice site. The nitrogen atom has an extra electron that remains unpaired. This behaves as an isolated spin - which can be up, down or in a superposition. The spin state can be probed by illuminating the diamond with laser light and recording the fluorescence given off. Most carbon in the diamond is carbon-12, which has zero spin. However, around 1% of the atoms are carbon-13, which has a nuclear spin. They probed the interaction of an NV spin with, on average, four carbon-13 atoms about 1 nm away. The carbon-13 spins - which serve as the environment - are too weak to interact with one another but nevertheless cause decoherence of the NV spin, in which the carbon-13 spins change to new quantum states. By measuring the spin of just one carbon-13 nucleus, indirectly by passing it back to the NV spin and repeating the experiment many times, they found they could correctly deduce most of the NV spin properties most of the time, but measurements of additional nuclear spins added little to this knowledge. These "give the first laboratory demonstration of quantum Darwinism in action in a natural environment". Other groups have carried out similar measurements (using the polarization of photons) that also show redundancy, demonstrating the proliferation of “classical” information, and also an 'uptick' in information taking place at the quantum level. Pokorny (2019) and colleagues have shown how to make measurements on trapped ions while still preserving some of the system's quantum coherence, which shows that measurement is the result of a dynamical process governed itself by quantum mechanics.
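To make the redundancy idea concrete, here is a toy numerical sketch of my own (an idealised textbook-style model, not the NV-centre experiment): a single "system" qubit in superposition is copied into several environment qubits, after which the system decoheres in its pointer basis and the mutual information between the system and any small environment fragment plateaus at the same classical value, the signature redundancy plateau of quantum Darwinism:

```python
import numpy as np

def reduced_density(psi, keep, n):
    """Reduced density matrix of the qubits listed in `keep`, from pure state psi on n qubits."""
    rest = [q for q in range(n) if q not in keep]
    m = np.transpose(psi.reshape([2] * n), list(keep) + rest)
    m = m.reshape(2 ** len(keep), 2 ** len(rest))
    return m @ m.conj().T

def entropy(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def mutual_info(psi, frag, n):
    """Quantum mutual information I(S:F) between system qubit 0 and environment fragment F."""
    return (entropy(reduced_density(psi, [0], n)) + entropy(reduced_density(psi, frag, n))
            - entropy(reduced_density(psi, [0] + frag, n)))

# System qubit 0 prepared in a|0> + b|1>; after its pointer basis is copied into each of
# n_env environment qubits (a chain of CNOTs), the joint state is a|00...0> + b|11...1>.
a, b = np.sqrt(0.3), np.sqrt(0.7)
n_env = 6
n = 1 + n_env
psi = np.zeros(2 ** n, dtype=complex)
psi[0], psi[-1] = a, b

# Decoherence: the system's reduced state is diagonal in the pointer basis (off-diagonals ~ 0)
print(np.round(reduced_density(psi, [0], n).real, 3))

# Redundancy plateau: any small fragment already carries the classical information H(0.3) bits;
# only the whole environment reveals the extra quantum correlations (I = 2H).
for k in range(1, n_env + 1):
    frag = list(range(1, 1 + k))
    print(k, round(mutual_info(psi, frag, n), 3))
```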
Quantum Darwinism may provide an interactive bridge which can help explain how the conscious brain derives its model of the classical world and uses it to anticipate opportunities and threats to survival. Conscious brain states are characterised by a maximal degree of global coupling, in terms of phase coherence of the EEG across regions, by contrast with decoherent unconscious regional processing. Karl Pribram has drawn attention to the similarities of this form of processing to quantum measurement, where the uncertainty relation is defined by wave beats. Given the sensitive dependence of edge-of-chaos dynamics, self-organised criticality and hand-shaking interaction between ion channel, synaptic and global-scale excitations, the conscious brain forms the most complex interactive system of quantum entanglements in the known universe. The brain states corresponding to the evolving Cartesian theatre of consciousness thus provide the richest set of boundary conditions for quantum Darwinism to shape brain states and in turn be shaped by them. In this way one can see the conscious brain as a two-way interactive process, both shaping the fluctuations of the quantum milieu and being in turn shaped by them, in an interactive resonance with the foundations of the transition from the quantum superimposed world to that of unfolding experienced real-world history, in which both subjective consciousness and intentional will modulate the apparent randomness of wave-function collapse otherwise dependent only on wave amplitude and probability, potentially resolving the question of how apparently random quantum processes can lead to anticipative intentional acts.
Complementing this description of the quantum world at large is the actual physics of how the brain processes information. By contrast with a digital computer, the brain uses both pulse coded action potentials and continuous gradients in an adaptive parallel network. Conscious states tend to be distinguished from subconscious processing by virtue of coherent phase fronts of the brain’s wave excitations. Phase coherence of beats between wave functions fig 71(c), is also the basis of quantum uncertainty.
In addition, the brain uses edge-of-chaos dynamics, involving the butterfly effect – arbitrary sensitivity to small fluctuations in bounding conditions – and the creation of strange attractors to modulate wave processing, so that the dynamics doesn’t become locked into a given ordered state and can thus explore the phase space of possibilities before making a transition to a more ordered state representing the perceived solution. Self-organised criticality is also a feature, as is neuronal threshold tuning. Feedback between the phase of brain waves on the cortex and the discrete action potentials of individual pyramidal cells, in which the phase is used to determine the timing of action potentials, creates a feedback between the continuous and discrete aspects of neuronal excitation. These processes, in combination, may effectively invoke a state where the brain is operating as an edge-of-chaos quantum computer, making internal quantum measurements of its own unstable dynamical evolution as cortical wave excitons, complemented by discrete action potentials at the axonal level.
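A minimal numerical sketch of the butterfly effect invoked here, using the generic fully chaotic logistic map rather than any model of cortical dynamics: two trajectories whose initial conditions differ by one part in 10^12 diverge to completely different values within a few dozen iterations:

```python
def logistic(x, r=4.0):
    """Logistic map x -> r*x*(1-x); r = 4 is in the fully chaotic regime."""
    return r * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-12          # two initial conditions differing by ~1 part in 10^12
for step in range(60):
    x, y = logistic(x), logistic(y)
    if step % 10 == 9:
        print(f"step {step+1:2d}:  x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.2e}")
# By roughly 40-50 steps the separation is of order 1: exponential amplification
# of an initially negligible difference in bounding conditions.
```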
Chaotic sensitivity, combined with related phenomena such as stochastic resonance (Liljenström et al. 2005), means that fractal scale-traversing handshaking (Grosu 2023) can occur between critically poised global brain states, neurons at threshold, ion channels and the quantum scale, in which quantum entanglement of excitons can occur (King 2014). At the same time these processes underpin why there is ample room in physical brain processing for quantum uncertainty to become a significant factor in unstable brain dynamics, fulfilling Eccles’ (1986) notion that this can explain a role for consciousness, without violating any classically causal processes.
This means that brain function is an edge-of-chaos quantum dynamical system which, unlike a digital computer, is far from being a causally deterministic process which would physically lock out any role for conscious decision-making, but leaves open a wide scope for quantum uncertainty, consistent with a role for consciousness in tipping critical states. The key to the brain is thus its quantum physics, not just its chemistry and biology. This forms a descriptive overview of possible processes involved rather than an empirical proof, in the face of the failure of promissory materialistic neuroscience (Popper & Eccles 1984) to demonstrate physical causal closure of the universe in the context of brain function, so Occam’s razor cuts in the direction which avoids conflict with empirical experience of conscious volitional efficacy over the physical universe.
Fig 78: (1) Edge of chaos transitions model of olfaction (Freeman 1991). (2) Stochastic resonance as a hand-shaking process between the ion channel and whole brain states (Liljenström & Svedin 2005). (3) Hippocampal place maps (erdiklab.technion.ac.il). Hippocampal cells have also been shown to activate in response to desired locations in an animal's anticipated future which they have observed but not visited (Olafsdottir et al. 2015). (4) Illustration of micro-electrode recordings of local field potential (LFP) wave phase precession enabling correct spatial and temporal encoding via discrete action potentials in the hippocampus (Qasim et al. 2021). Delta, theta, alpha, beta and gamma bands appear to have distinct functions and dynamics, and metastable and spiral waves have been noted as having neurocognitive function (Roberts et al. 2019, Xu et al. 2023). (5) Living systems are dynamical systems. They show ensembles of eigenbehaviors, which can be seen as unstable dynamical tendencies in the trajectory of the system. Francisco Varela’s neurophenomenology (Varela 1996; Rudrauf et al. 2003) is a valid attempt to bridge the hard and easy problems, through a biophysics of being, by developing a complementary subjective account of processes corresponding to objective brain processing. While these efforts help to elucidate the way brain states correspond to subjective experiences, using an understanding of resonant interlocking dynamical systems, they do not of themselves solve the subjective nature of the hard problem. (6) Joachim Keppler's (2018, 2021; James et al. 2022) view of conscious neural processing uses the framework of stochastic electrodynamics (SED), a branch of physics that affords a look behind the uncertainty of quantum field theory (QFT), to derive an explanation of the neural correlates of consciousness, based on the notion that all conceivable shades of phenomenal awareness are woven into the frequency spectrum of a universal background field, called the zero-point field (ZPF), implying that the fundamental mechanism underlying conscious systems rests upon access to information available in the ZPF. This gives an effective interface description of how dynamical brain states correspond to subjective conscious experiences but, like the other dynamical descriptions, does not solve the hard problem itself of why the zero-point field becomes subjective.
Diverse Theories of Consciousness
Fig 79: Overview of Theories of Consciousness reviewed, with rough comparable positions. Field or field-like theories are in blue/magenta. Explicit AI support magenta/red. Horizontal positions guided by specific author statements.
Section links: GNW, ART, DQF, ZPF, AST, CEMI, FEM, IIT, PEM, ORCH, GRT, CFT
Descriptions of the actively conscious brain revolve around extremely diverse conceptions. The neural network approach conceives of the brain as a network of neurons connected by axonal-dendritic synapses, with action potentials as discrete impulses travelling down the long pyramidal cell axons through which activity is encoded as a firing rate. In this view the notions of “brain waves” as evidenced in the EEG (electroencephalogram) and MEG (magnetoencephalogram) are just the collective averages of these spikes, having no function in themselves, being just an accessory low intensity electromagnetic cloud associated with neuronal activity, which happens to generate a degree of coupled synchronisation through the averaged excitations of the synaptic web. At the opposite extreme are field theories of the conscious brain in which fields have functional importance in themselves and help to explain the “binding” problem of how conscious experiences emerge from global brain dynamics.
Into the mix are also abstract theories of consciousness such as Tononi and Koch’s (2015) IIT or integrated information theory and Graziano’s (2016) AST or attention schema theory, which attempt to formulate an abstract basis for consciousness that might arise in biological brains or synthetic neural networks given the right circumstances.
The mushroom experience that triggered Symbiotic Existential Cosmology caused a reversal of world view from my original point of view, King (1996), looking for the neurodynamic and quantum basis of consciousness in the brain, to realising that no such theory is possible because a pure physicalist theory cannot bridge the hard problem explanatory gap in the quantum universe, due to the inability to demonstrate causal closure.
No matter how fascinating and counter-intuitive the complexities of the quantum, physical and biological universe are, no purely physicalist description of the neurodynamics of consciousness can possibly succeed, because it is scientifically impossible to establish a theoretical proof, or empirical demonstration, of the causal closure of the physical universe in the context of neurodynamics. The bald facts are that, no matter to what degree we use techniques from optogenetics, through ECoG, to direct cell recording, there is no hope, within the indeterminacies of the quantum universe, of making an experimental verification of classical causal closure. Causal closure of the physical universe thus amounts to a formally undecidable cosmological proposition from the physical point of view, which is heralded as a presumptive 'religious' affirmative belief without scientific evidence, particularly in neuroscience.
The hard problem of consciousness is thus cosmological, not biological, or neurodynamic alone. Symbiotic Existential Cosmology corrects this by a minimal extension of quantum cosmology by adding the axiom of primal subjectivity, as we shall see below.
In stark contrast to this, the subjective experiential viewpoint perceives conscious volition over the physical universe as an existential certainty that is necessary for survival. When any two live human agents engage in a frank exchange of experiences and communications, such as my reply to you all now, which evidences my drafting of a consciously considered opinion and intentionally sending it to you in physical form, this can be established beyond reasonable doubt by mutual affirmation of our capacity to consciously and intentionally respond with a physical communication. This is the way living conscious human beings have always viewed the universe throughout history and it is a correct veridical empirical experience and observation of existential reality, consistent with personal responsibility, criminal and civil law on intent, all long-standing cultural traditions and the fact that 100% of our knowledge of the physical world comes through our conscious experience of it. Neuroscientists thus contradict this direct empirical evidence at their peril. Hoffman, Prakash & Prentner (2023) have also produced a decisive critique of all physical theories of consciousness, invoking this point of view.
However there is still a practical prospect of refining our empirical understanding of the part played by neurodynamics in generating subjective conscious experience and volition over the physical universe through current and upcoming techniques in neuroscience. What these can do is demonstrate experimentally the nature of the neurodynamics occurring when conscious experiences are evoked, the so-called "neural correlate of consciousness", forming an interface with conscious experience and our ensuing decision-making actions.
To succeed at this scientific quest, we need to understand how quantum cosmology enters into the formation of biological tissues. The standard model of physics is symmetry-broken between the colour, weak and EM forces and gravity, which ensures that there are around a hundred positively charged atomic nuclei, with orbital electrons having both the periodic quantum properties of the s, p, d & f orbitals and non-linear EM charge interactions, centred on first-row covalent H-CNO modified by P & S and light ionic and transition elements, as shown in fig 51, forming a fractal cooperative bonding cascade from organic molecules like the amino acids and nucleotides, through globular proteins and nucleic acids, to complexes like the ribosome and membrane, to cell organelles, cells and tissues. These constitute an interactive quantum form of matter – the most exotic form of matter in existence, whose negentropic thermodynamics in living systems is vastly more challenging than the quantum properties of solid-state physics and its various excitons and quasi-particles. Although these are now genetically and enzymatically encoded, the underlying fractal dynamics is a fundamental property of cosmological symmetry-breaking and abiogenesis. It introduces a cascade of quantum effects, in protein folding, allosteric active sites with tunnelling, membrane ionic and electron transport, and ultimately neurodynamics. Furthermore, biological processes are non-IID, not constituting independent identically-distributed quantum measurements, so they do not converge to the classical description and remain collectively quantum in nature throughout, spanning all or most aspects of neuronal excitability and metabolism.
This means that current theories of the interface between CNS neurodynamics and subjective conscious volition are all manifestly incomplete, and qualitatively and quantitatively inadequate to model or explain the brain-experience interface. Symbiotic Existential Cosmology has thus made a comprehensive review of these, including GNW (Dehaene et al.), ART (Grossberg), DQF (Freeman & Vitiello), ZPF (Keppler), AST (Graziano), CEMI (McFadden), FEM (Solms & Friston), IIT (Tononi & Koch), PEM (Poznanski et al.), as well as outliers like ORCH (Hameroff & Penrose). The situation facing TOEs of consciousness is, despite experimental progress, in a more parlous state than that of physical TOEs, from supersymmetric, superstring and membrane theories to quantum loop gravity, which as yet show no signs of unification over multiple decades. In both fields this requires a foundational rethink and a paradigm shift. Symbiotic Existential Cosmology provides this to both fields simultaneously.
To understand this biologically, we need to understand that the nature of consciousness as we know it, and all its key physical and biological features, arose in a single topological transition in the eucaryote endosymbiosis, when respiration became sequestered in the mitochondria and the cell membrane became freed for edge-of-chaos excitation and receptor-based social signalling, through the same processes that are key to human neurodynamics today. This in turn led to the action potential via the flagellar escape reaction, and to the graded membrane potentials and neurotransmitter receptor-based synaptic neural networks we see in neuronal excitation. It took a further billion years before these purposive processes, enabling sentience at the cellular level in the quantum processes we now witness in vision, audition, olfaction and feeling sensation, became linked in the colonial neural networks illustrated by hydra and later the more organised brains of arthropods, vertebrates and cephalopods. This means that a purely neural network view of cognition and consciousness is physically inadequate at the foundation. Moreover, the brain derives its complexity not just from our genome, which is vastly too small to generate the brain’s complexity, but from interactive processes of cell migration in the developing brain that form a self-organising system through mutual neuronal recognition by neurotransmitter type and mutual excitation/inhibition.
Of these theories, GNW is the closest to a broad-brush-strokes, empirically researched account. Neural network theories like Grossberg’s ART generate crude necessary but insufficient conditions for consciousness, because they lack almost all the biological principles involved. Pure abstract theories like IIT do likewise. Specialised quantum theories like Hameroff & Penrose’s are untenable both in current biology and fundamentally in evolutionary terms, because they have been contrived as a back-pack of oddball quantum processes, such as quantum microtubular cellular automata, not confluent with evolutionary processes, using increasingly contrived speculation to make up for inadequacies, e.g. in linking cellular processes through condensates. ORCH is also objective reduction, so it cannot address conscious volition.
There is good empirical support for two processes in brain dynamics. (1) Edge-of-chaos transitions from a higher energy more disordered dynamic to a less disordered attractor dynamic, which is also the basis of annealing in neural network models of a potential energy landscape. (2) Phase tuning between action potential timing in individual neurons and continuous local potential gradients, forming an analogue with quantum uncertainty based measurement of wave beats.
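As a simple numerical analogue of the wave-beat picture referred to in (2) (an illustration of the generic time-frequency uncertainty relation, not a model of hippocampal phase tuning): two superposed waves separated in frequency by Δf produce beats whose envelope can only be resolved over a time of order 1/Δf, so sharpening one quantity blurs the other:

```python
import numpy as np

f1, f2 = 40.0, 42.0                        # two gamma-band-like frequencies (Hz)
delta_f = abs(f2 - f1)                     # 2 Hz separation
t = np.linspace(0, 2, 20_000)              # 2 seconds sampled at 10 kHz
signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Beat envelope is |2 cos(pi * delta_f * t)|: successive envelope nulls are 1/delta_f apart
sign = np.signbit(np.cos(np.pi * delta_f * t)).astype(int)
nulls = t[np.where(np.diff(sign) != 0)[0]]
print("interval between envelope nulls:", round(float(np.diff(nulls).mean()), 3), "s")
print("1 / delta_f =", 1 / delta_f, "s")   # resolving delta_f requires ~ this much observation time
```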
These mean that field and field-like theories such as ZPF, DQF and PEM all have a degree of plausibility, complementing bare neural network descriptions. However, all these theories run into the problem of citing preferred physical mechanisms over the complex quantum-system picture manifest in tissue dynamics. ZPF cites the zero-point field, effectively conflating a statistical semi-classical treatment of QED, as the quantum vacuum, with subjective consciousness. It cites neurotransmitter molecular resonances at the synapse and periodic resonances in the brain as providing the link. DQF is well grounded in Freeman dynamics, but cites water molecule structures, which are plausible but accessory and not easy to test. PEM cites quasi-polaritonic waves involving interaction between charges and dipoles, with an emphasis on delocalised orbitals, which are just one of many quantum-level processes prominently involved in respiration and photosynthesis, and makes a claim to "microfeels" as the foundation of a definition of precognitive information below the level of consciousness. It also restricts itself to multiscale thermodynamic holonomic processes, eliminating the quantum level, self-organised criticality and fractality.
Philosopher wins 25 year long bet with Neuroscientist (Lenharo 2023): In 1998, neuroscientist Christof Koch bet philosopher David Chalmers that the mechanism by which the brain’s neurons produce consciousness would be discovered by 2023. Both scientists agreed publicly on 23 June, at the annual meeting of the Association for the Scientific Study of Consciousness that it is an ongoing quest — and declared Chalmers the winner.
What ultimately helped to settle the bet was a study testing two leading hypotheses about the neural basis of consciousness (Cogitate Consortium et al. 2023). Consciousness is everything that a person experiences — what they taste, hear, feel and more. It is what gives meaning and value to our lives, Chalmers says. However, despite a vast effort, researchers still don’t understand how our brains produce it. “It started off as a very big philosophical mystery,” Chalmers adds. “But over the years, it’s gradually been transmuting into, if not a ‘scientific’ mystery, at least one that we can get a partial grip on scientifically.” The study tested two of the leading hypotheses: integrated information theory (IIT) and global neuronal workspace theory (GNWT). IIT proposes that consciousness is a ‘structure’ in the brain formed by a specific type of neuronal connectivity that is active for as long as a certain experience, such as looking at an image, is occurring. This structure is thought to be found in the posterior cortex, at the back of the brain. GNWT, by contrast, suggests that consciousness arises when information is broadcast to areas of the brain through an interconnected network. The transmission, according to the theory, happens at the beginning and end of an experience and involves the prefrontal cortex, at the front of the brain. The results didn’t perfectly match either of the theories. This has since resulted in an open letter treating IIT, in particular, as pseudoscience (Fleming et al. 2023; Lenharo 2023b).
The position of Symbiotic Existential Cosmology is that none of these theories, and particularly those that depend on pure physical materialism, have any prospect of solving the hard problem and particularly the hard problem extended to volition. Symbiotic Existential Cosmology therefore adopts a counter strategy to add an additional axiom to quantum cosmology that associates primal subjectivity and free will with an interface in each quantum, where “consciousness” is manifested in the special relativistic space-time extended wave function and "free will" is manifested in the intrinsic uncertainty of quantum collapse to the particle state. This primal subjectivity exists in germinal forms in unstable quantum-sensitive systems such as butterfly effect systems and becomes intentional consciousness as we know it in the eucaryote transition.
This transforms the description of conscious dynamics into one in which subjectivity is compliant with determined perceptual and cognitive factors, but utilises the brain state as a contextual environmental filter to deal with states of existential uncertainty threatening the survival of the organism. This is similar to AST, but without the utopian artificial intelligence emphasis it shares with others such as ART, IIT and PEM. Key environmental survival questions are both computationally intractable and formally uncomputable, because the tiger that may pounce is also a conscious agent who can adapt their volitional strategy to unravel any computational "solution”. This provides a clean physical cut, in which subjective consciousness remains compliant with the determined boundary conditions realised by the cognitive brain, but has decision-making ability in situations when cellular or brain dynamics becomes unstable and quantum sensitive. No causal conflict thus arises between conscious intent, restricted to uncertainty, and physical causes related to the environmental constraints. It invokes a model of quantum reality where uncertainty is not merely random, but is a function of unfolding environmental uncertainty as a whole. This is the survival advantage through which cellular consciousness, by anticipating existential crises, became fixed in evolution and has been conserved ever since, complementing cerebral cognition in decision-making. This is reflected experientially in how we make intuitive "hunch" overall decisions, and physically in certain super-causal forms of the transactional QM interpretation and super-determinism, both of which can have non-random quasi-ergodic hidden variable interpretations and are compatible with free will.
The final and key point is that Symbiotic Existential Cosmology is biospherically symbiotic. Through this, the entire cosmology sees life and consciousness as the ultimate interactive climactic crisis of living complexity interactively consummating the universe, inherited from cosmological symmetry-breaking, in what I describe as conscious paradise on the cosmic equator in space-time. Without the symbiosis factor, humanity as we stand, will not survive a self-induced Fermi extinction, caused by a mass extinction of biodiversity, so the cosmology is both definitively and informatively accurate and redemptive, in long-term survival of the generations of life over evolutionary time scales.
Susan Pockett (2013) explains the history of these diverging synaptic and field theoretic views:
Köhler (1940) did put forward something he called “field theory”. Köhler only ever referred to electric fields as cortical correlates of percepts. His field theory was a theory of brain function. Lashley’s test was to lay several gold strips across the entire surface of one monkey’s brain, and insert about a dozen gold pins into a rather small area of each hemispheric visual cortex of another monkey. The idea was that these strips or pins should short-circuit the hypothesized figure currents, and thereby (if Köhler’s field theory was correct) disrupt the monkeys’ visual perception. The monkeys performed about as well on this task after insertion of the pins or strips as they had before (although the one with the inserted pins did “occasionally fail to see a small bit of food in the cup”) and Lashley felt justified in concluding from this that “the action of electric currents, as postulated by field theory, is not an important factor in cerebral integration.” Later Roger Sperry did experiments similar to Lashley’s, reporting similarly negative results.
Intriguingly, she notes that Libet, whom we shall meet later, despite declaring that the readiness potential precedes consciousness, also proposed a near-supernatural field theory:
Libet proposed in 1994 that consciousness is a field which is “not ... in any category of known physical fields, such as electromagnetic, gravitational etc” (Libet 1994). In Libet’s words, his proposed Conscious Mental Field “may be viewed as somewhat analogous to known physical fields ... however ... the CMF cannot be observed directly by known physical means.”
Pockett (2014) describes what she calls “process theories”:
The oldest classification system has two major categories, dualist and monist. Dualist theories equate consciousness with abstracta. Monist (aka physicalist) theories equate it with concreta. A more recent classification (Atkinson et al., 2000) divides theories of consciousness into process theories and vehicle theories: it says “Process theories assume that consciousness depends on certain functional or relational properties of representational vehicles, namely, the computations in which those vehicles engage.” The relative number of words devoted to process and vehicle theories in this description hints that at present, process theories massively dominate the theoretical landscape. But how sensible are they really?
She then discusses both Tononi & Koch’s (2015) IIT integrated information theory and Chalmers' (1996) multi-state “information spaces", and lists the following objections:
First, since information is explicitly defined by everyone except process theorists as an objective entity, it is not clear how process theorists can reasonably claim either that information in general, or that any subset or variety of information in particular, is subjective. No entity can logically be both mind-independent and the very essence of mind. Therefore, when process theorists use the word “information” they must be talking about something quite different from what everyone else means by that word. Exactly what they are talking about needs clarification. Second, since information is specifically defined by everybody (including Chalmers) as an abstract entity, any particular physical realization of information does not count as information at all. Third, it is a problem at least for scientists that process theories are untestable. The hypothesis that a particular brain process correlates with consciousness can certainly be tested empirically. But the only potentially testable prediction of theories that claim identity between consciousness and a particular kind of information or information processing is that this kind of information or information processing will be conscious no matter how it is physically instantiated.
These critiques will apply to a broad range of the theories of consciousness we have explored, including many in the figure above that do not limit themselves to the neural correlate of consciousness.
Theories of consciousness have, in the light of our understanding of brain processes gained from neuroscience, become heavily entwined with the objective physics and biology of brain function. Michel & Doerig (2021), in reviewing local and global theories of consciousness summarise current thinking, illustrating this dependence on neuroscience for understanding the enigmatic nature of consciousness.
Localists hold that, given some background conditions, neural activity within sensory modules can give rise to conscious experiences. For instance, according to the local recurrence theory, reentrant activity within the visual system is necessary and sufficient for conscious visual experiences. Globalists defend that consciousness involves the large-scale coordination of a variety of neuro-cognitive modules, or a set of high-level cognitive functions such as the capacity to form higher-order thoughts about one’s perceptual states. Localists tend to believe that consciousness is rich, that it does not require attention, and that phenomenal consciousness overflows cognitive access. Globalists typically hold that consciousness is sparse, requires attention, and is co-extensive with cognitive access.
According to local views, a perceptual feature is consciously experienced when it is appropriately represented in sensory systems, given some background conditions. As localism is a broad family of theories, what “appropriately” means depends on the local theory under consideration. Here, we consider only two of the most popular local theories: the micro-consciousness theory, and the local recurrence theory, focusing on the latter. According to the micro-consciousness theory “processing sites are also perceptual sites”. This theory is extremely local. The simple fact of representing a perceptual feature is sufficient for being conscious of that feature, given some background conditions. One becomes conscious of individual visual features before integrating them into a coherent whole. According to the local recurrence theory, consciousness depends on "recurrent" activity between low- and higher-level sensory areas. Representing a visual feature is necessary, but not sufficient for being conscious of it. The neural vehicle carrying that representation must also be subject to the right kind of recurrent dynamics. For instance, consciously perceiving a face consists in the feedforward activation of face selective neurons, quickly followed by a feedback signal to lower-level neurons encoding shape, color, and other visual features of the face, which in turn modulate their activity as a result.
The authors also stress post-dictive effects as a necessary non-local condition for consciousness which may last a third of a second after an event.
In postdictive effects, conscious perception of a feature depends on features presented at a later time. For instance, in feature fusion two rapidly successive stimuli are perceived as a single entity. When a red disk is followed by a green disk after 20ms, participants report perceiving a single yellow disk, and no red or green disk at all. This is a postdictive effect. Both the red and green disks are required to form the yellow percept. The visual system must store the representation of the first disk until the second disk appears to integrate both representations into the percept that subjects report having. Many other postdictive effects in the range of 10-150ms have been known for decades and are well documented. Postdictive effects are a challenge for local theories of consciousness. Features are locally represented in the brain but the participants report that they do not see those features.
This can have the implication that unconscious brain processes always precede conscious awareness, leading to the conclusion that our conscious awareness is just a post-constructed account of unconscious processes generated by the brain, and that subjective consciousness, along with the experience of volition, has no real basis, leading to a purely physically materialist account of subjective consciousness as merely an internal model of reality constructed by the brain.
Pockett (2014) in supporting her own field theory of consciousness, notes structural features that may exclude certain brain regions from being conscious in their own right:
It is now well accepted that sensory consciousness is not generated during the first, feed-forward pass of neural activity from the thalamus through the primary sensory cortex. Recurrent activity from other cortical areas back to the primary or secondary sensory cortex is necessary. Because the feedforward activity goes through architectonic Lamina 4 of the primary sensory cortex (which is composed largely of stellate cells and thus does not generate synaptic dipoles) while recurrent activity operates through synapses on pyramidal cells (which do generate dipoles), the conscious em patterns resulting from recurrent activity in the ‘early’ sensory cortex have a neutral area in the middle of their radial pattern. The common feature of brain areas that can not generate conscious experience – which are now seen to include motor cortex as well as hippocampus, cerebellum and any sub-cortical area – is that they all lack an architectonic Lamina 4 [layer 4 of the cortex].
By contrast with theories of consciousness based on the brain alone, Symbiotic Existential Cosmology sees subjectivity as being a cosmological complement to the physical universe. It thus seeks to explain subjective conscious experience as a cosmological, rather than just a purely biological phenomenon, in a way which gives validation and real meaning to our experience of subjective conscious volition over the physical universe, expressed in all our behavioural activities and our sense of personal responsibility for our actions and leads towards a state of biospheric symbiosis as climax living diversity across the generations of life as a whole, ensuring our continued survival.
Seth & Bayne (2022) provide a detailed review of theories of consciousness from the perspective of neuroscience. They investigate four key types of TOC as listed below and also provide table 1 below listing a diverse range of TOCs.
(1) Higher-order theories The claim uniting all these is that a mental state is conscious in virtue of being the target of a certain kind of meta-representational state. These are not representations that occur higher or deeper in a processing hierarchy but are those that have as their targets other (implicitly subjective) representations.
(2) Global workspace theories originate from architectures, in which a “blackboard” is a centralized resource through which specialised processors share and receive information. The first was framed at a cognitive level and proposed that conscious mental states are those that are ‘globally available’ to a wide range of cognitive processes, including attention, evaluation, memory and verbal report. Their core claim is that it is wide accessibility of information to such systems that constitutes conscious experience. This has been developed into ‘global neuronal work space theory’.
(3) Integrated information theory advances a mathematical approach to characterizing phenomenology. It starts by proposing axioms about the phenomenological character of conscious experiences (that is, properties that are taken to be self-evidently true and general to consciousness), and from these, it derives claims about the properties that any physical substrate of consciousness must satisfy, proposing that physical systems that instantiate these properties necessarily also instantiate consciousness.
(4) Re-entry and predictive processing theories The first associates conscious perception with top-down (recurrent, re-entrant) signalling. The second group are not primarily ToCs but more general accounts of brain (and body) function that can be used to formulate explanations and predictions regarding properties of consciousness.
They note a version of the measurement problem: that to test a theory of consciousness (ToC), we need to be able to reliably detect both consciousness and its absence. At present, experimenters tend to rely on a subject’s introspective capacities to identify their states of consciousness. However, they claim this approach is problematic, firstly because the reliability of introspection is questionable. This is a debatable claim, which tends to lead to devaluing subjective reports, possibly unfairly, in an emphasis on “objective observations”, which renders subjective consciousness as having an orphan status, defeating the very purpose of ToCs in relation to the hard problem. They also note infants, individuals with brain damage and non-human animals, who might be conscious but are unable to produce introspective reports, claiming there is a pressing need to identify non-introspective ‘markers’ or ‘signatures’ of consciousness, such as the perturbational complexity index (PCI), the optokinetic nystagmus response, or distinctive bifurcations in neural dynamics, as markers of either general or specific kinds of conscious contents. These, however, are purely functional measures rather than measures of what consciousness actually is as experienced phenomena.
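For concreteness, the perturbational complexity index mentioned above is, in its published formulation, based on the Lempel-Ziv compressibility of a binarised spatiotemporal brain response to a cortical perturbation. The sketch below is only a toy proxy of that idea in Python; the threshold, the synthetic "responses" and the shuffle-based normalisation are my own illustrative assumptions, not the published pipeline, and it typically assigns a higher value to a differentiated multi-channel response than to a stereotyped one:

```python
import numpy as np

def lz_phrase_count(bits):
    """Distinct phrases in an incremental (LZ78-style) parse of a binary string;
    a crude stand-in for the Lempel-Ziv compressibility used in PCI-like measures."""
    phrases, cur = set(), ""
    for b in bits:
        cur += b
        if cur not in phrases:
            phrases.add(cur)
            cur = ""
    return len(phrases) + (1 if cur else 0)

def toy_complexity(response, threshold=1.0):
    """Binarise a (channels x time) response at `threshold` (illustrative choice),
    flatten it, and normalise the phrase count by that of a shuffled surrogate."""
    binary = (np.abs(response) > threshold).astype(int).flatten()
    s = "".join(map(str, binary))
    shuffled = "".join(map(str, np.random.permutation(binary)))
    return lz_phrase_count(s) / max(lz_phrase_count(shuffled), 1)

rng = np.random.default_rng(1)
channels, samples = 16, 400
# Toy "integrated and differentiated" response: each channel has distinct structured activity
rich = np.array([np.sin(2 * np.pi * (3 + c) * np.linspace(0, 1, samples)) * 2
                 for c in range(channels)])
rich += 0.3 * rng.standard_normal(rich.shape)
# Toy "stereotyped" response: every channel does the same thing
flat = np.tile(np.sin(2 * np.pi * 3 * np.linspace(0, 1, samples)) * 2, (channels, 1))

print("structured response complexity:", round(toy_complexity(rich), 2))
print("stereotyped response complexity:", round(toy_complexity(flat), 2))
```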
Table 1: The full spread of TOCs, as listed in Seth & Bayne (2022).
Higher-order theory (HOT) | Consciousness depends on meta-representations of lower-order mental states
Self-organizing meta-representational theory | Consciousness is the brain’s (meta-representational) theory about itself
Attended intermediate representation theory | Consciousness depends on the attentional amplification of intermediate-level representations
Global workspace theories (GWTs) | Consciousness depends on ignition and broadcast within a neuronal global workspace where fronto-parietal cortical regions play a central, hub-like role
Integrated information theory (IIT) | Consciousness is identical to the cause–effect structure of a physical substrate that specifies a maximum of irreducible integrated information
Information closure theory | Consciousness depends on non-trivial information closure with respect to an environment at particular coarse-grained scales
Dynamic core theory | Consciousness depends on a functional cluster of neural activity combining high levels of dynamical integration and differentiation
Neural Darwinism | Consciousness depends on re-entrant interactions reflecting a history of value-dependent learning events shaped by selectionist principles
Local recurrency | Consciousness depends on local recurrent or re-entrant cortical processing and promotes learning
Predictive processing | Perception depends on predictive inference of the causes of sensory signals; provides a framework for systematically mapping neural mechanisms to aspects of consciousness
Neuro-representationalism | Consciousness depends on multilevel neurally encoded predictive representations
Active inference | Although views vary, in one version consciousness depends on temporally and counterfactually deep inference about self-generated actions
Beast machine theory | Consciousness is grounded in allostatic control-oriented predictive inference
Neural subjective frame | Consciousness depends on neural maps of the bodily state providing a first-person perspective
Self comes to mind theory | Consciousness depends on interactions between homeostatic routines and multilevel interoceptive maps, with affect and feeling at the core
Attention schema theory | Consciousness depends on a neurally encoded model of the control of attention
Multiple drafts model | Consciousness depends on multiple (potentially inconsistent) representations rather than a single, unified representation that is available to a central system
Sensorimotor theory | Consciousness depends on mastery of the laws governing sensorimotor contingencies
Unlimited associative learning | Consciousness depends on a form of learning which enables an organism to link motivational value with stimuli or actions that are novel, compound and non-reflex inducing
Dendritic integration theory | Consciousness depends on integration of top-down and bottom-up signalling at a cellular level
Electromagnetic field theory | Consciousness is identical to physically integrated, and causally active, information encoded in the brain’s global electromagnetic field
Orchestrated objective reduction | Consciousness depends on quantum computations within microtubules inside neurons
In addressing the ‘hard problem’ they distinguish the easy problems concerned with the functions and behaviours associated with consciousness, from the hard problem, which concerns the experiential dimensions of consciousness, noting that what makes the hard problem hard is the ‘explanatory gap’ — the intuition that there seems to be no prospect of a fully reductive explanation of experience in physical or functional terms.
Integrated information theory and certain versions of higher-order theory address the hard problem directly, while other theories such as global workspace theories focus on the functional and behavioural properties normally associated with consciousness, rather than the hard problem, noting that some predictive processing theorists aim to provide a framework in which various questions about the phenomenal properties of consciousness can be addressed, without attempting to account for the existence of phenomenology — an approach called the ‘real problem’.
They posit that a critical question is whether the hard problem is indeed a genuine challenge that ought to be addressed by a science of consciousness, or whether it ought to be dissolved rather than solved, as the 'solve the easy problems first' strategy implies. The ‘dissolvers’ argue that the appearance of a distinctively hard problem derives from the peculiar features of the ‘phenomenal concepts’ we employ in representing our own conscious states, citing illusionism, in which we do not have phenomenal states but merely represent ourselves as having such states. They speculate that the grip of the hard problem may loosen as our capacity to explain, predict and control both phenomenological and functional properties of consciousness expands, thus effectively siding with the dissolvers.
In conclusion, they note that, at present, ToCs are generally used as ‘narrative structures’ within the science of consciousness. Although these inform the interpretation of neural and behavioural data, they demur that it is still rare for a study to be designed with questions of theory validation in mind. While there is nothing wrong with employing theories in this manner, they claim future progress will depend on experiments that enable ToCs to be tested and disambiguated. This is the kind of ideal we can expect physicalist neuroscientists to veer into, but it runs the risk of ‘sanitising’ consciousness, just as behaviourism did in psychology, to its nemesis.
Two questions are pivotal. One is the physicalist quest to use the easy, functionalist notions of consciousness to explain away the hard problem of consciousness, which typifies Levine’s explanatory gap, Nagel’s “what it is like to be” something conscious, and Chalmers’ question of how we have phenomenal first-person subjective experiences. This is really not about the general questions of consciousness, such as “consciousness of” something, which can be viewed as a form of global attention that can be described functionally, nor more specific notions like self-consciousness, i.e. awareness of a form of functional agency, both of which could apply equally to artificial intelligence.
This becomes clear when we examine the authors’ choice of key theories of consciousness, several of which are not targeted at the hard problem at all, as they point out, knowing that Seth for example favours an ultimate functional explanation which will “dissolve” the hard problem, even if it is a form of identity theory, or dual aspect monism.
Really we need to distinguish consciousness from subjective consciousness – the ability to have subjective experiences and thus subjectivity itself and its cosmological status, rather than the mere functionality of consciousness as a global attentive process. This is why Symbiotic Existential Cosmology deals directly with primal subjectivity as a cosmological complement to the physical universe to capture the notion of subjectivity squarely and independently of consciousness. This leaves full consciousness an emergent property of the eucaryote endo-symbiosis that results in the cellular mechanisms of edge-of-chaos excitable membrane and informational membrane signalling using neurotransmitters, both of which are functionally emergent properties but with non-classical implications in the quantum universe.
We can immediately see this is a critically important step, when we see the above research being cited as a basis to determine whether future AI developments would be considered “conscious”, as Butlin et al. (2023) cite precisely the functional expressions of the same theories of consciousness as above, to provide criteria whereby a purely objective physical process could become “conscious”, in view of its functional properties in recurrent processing, global workspace, higher-order processes, attention schemas, predictive processing and functional agency, none of which address the hard problem, let alone the extended hard problem of subjective volition over the physical universe.
Butlin et al. note: This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness. From these theories we derive ”indicator properties” of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.
Recurrent processing theory:
RPT-1: Input modules using algorithmic recurrence
RPT-2: Input modules generating organised, integrated perceptual representations
Global workspace theory:
GWT-1: Multiple specialised systems capable of operating in parallel (modules)
GWT-2: Limited capacity workspace, entailing a bottleneck in information flow and a selective attention mechanism
GWT-3: Global broadcast: availability of information in the workspace to all modules
GWT-4: State-dependent attention, giving rise to the capacity to use the workspace to query modules in succession to perform complex tasks
Computational higher-order theories:
HOT-1: Generative, top-down or noisy perception modules
HOT-2: Metacognitive monitoring distinguishing reliable perceptual representations from noise
HOT-3: Agency guided by a general belief-formation and action selection system, and a strong disposition to update beliefs in accordance with the outputs of metacognitive monitoring
HOT-4: Sparse and smooth coding generating a “quality space”
Attention schema theory:
AST-1: A predictive model representing and enabling control over the current state of attention
Predictive processing:
PP-1: Input modules using predictive coding
Agency and embodiment:
AE-1: Agency: Learning from feedback and selecting outputs so as to pursue goals, especially where this involves flexible responsiveness to competing goals
AE-2: Embodiment: Modeling output-input contingencies, including some systematic effects, and using this model in perception or control
Table 2: Indicator Properties (Butlin et al. 2023).
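To make concrete how purely functional these indicator properties are, here is a minimal, hypothetical sketch in Python of a global-workspace-style loop exhibiting parallel modules, a limited-capacity bottleneck and global broadcast (roughly GWT-1 to GWT-3). The module names and salience numbers are invented for illustration, and nothing in such a system touches the hard problem of experience.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Module:
    name: str
    process: Callable[[dict], tuple]            # returns (salience, content) for the current world
    inbox: list = field(default_factory=list)   # receives global broadcasts

class Workspace:
    """Limited-capacity workspace: only the most salient content is admitted
    (the bottleneck) and is then broadcast back to every module."""
    def __init__(self, modules, capacity=1):
        self.modules, self.capacity = modules, capacity

    def cycle(self, world):
        bids = [(m.process(world), m.name) for m in self.modules]   # parallel modules
        winners = sorted(bids, reverse=True)[: self.capacity]       # selective bottleneck
        for (_, content), source in winners:                        # global broadcast
            for m in self.modules:
                m.inbox.append((source, content))
        return winners

# Hypothetical modules with arbitrary toy salience values.
vision = Module("vision", lambda w: (0.9 if w.get("flash") else 0.2, "bright flash"))
touch  = Module("touch",  lambda w: (0.7 if w.get("toe") else 0.1, "stubbed toe"))
ws = Workspace([vision, touch])
print(ws.cycle({"toe": True}))   # the toe-stub wins the workspace and is broadcast
```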
Polák & Marvan (2019), in a different kind of “dissolving” approach to the hard problem, advance dual theories, in which scientists study pairs of phenomenal mental states of which one is conscious and the other is not, the presence or absence of consciousness being their sole distinguishing feature, claiming this facilitates unpacking the unitary nature of the hard problem, thus partly decomposing it.
They note that Chalmers (2018, 30) accepts unconscious sensory qualities, saying such a move is: perhaps most promising for deflating the explanatory gap tied to qualities such as redness: if these qualities [...] can occur unconsciously, they pose less of a gap. As before, however, the core of the hard problem is posed not by the qualities themselves but by our experience of these qualities: roughly, the distinctive phenomenal way in which we represent the qualities or are conscious of them.
They cite two examples of the separation of brain processes forming a neural correlate of conscious experience. The first is unilateral visual neglect caused by localised brain damage such as a stroke, where information in the neglected half of the visual field appears to unconsciously influence a person’s choices:
Unilateral visual neglect, the inability to see objects in one half of the visual field, might serve as an illustration. In the most famous neglect example (Marshall and Halligan, 1988), a person cannot consciously discriminate between two depicted houses. The houses are identical except that one of them is on fire in that half of the visual field the person, due to neglect, cannot see. Although the person was constantly claiming that both houses look the same to her, she repeatedly said she would prefer to live in the house not consumed by the flames.
A second example cites Lamme’s (2006, 2015) theory of brain processes which may constitute separate phases in the generation of a conscious experience, which permit clean separation of the brain mechanisms for the creation of phenomenal content from the mechanism that “pushes this content into consciousness”:
The theory revolves around the notion of local recurrent neural activity within the cortex and decomposes the formation of conscious visual content into two phases. The first one is called fast feedforward sweep. It is a gradual activation of different parts of the visual system in the brain. The dual view interprets this process as the formation of the unconscious but phenomenal mental state. A later process, that may or may not occur, is called recurrent activity. It is a neural feedback processing during which higher visual centers send the neural signal back to the lower ones. The time delay between the initiation of the first and the second process might be seen as corresponding to the difference between processing of the phenomenal character (feedforward sweep) and making and maintaining this phenomenal character conscious (recurrent processing).
They note that in several other theories already listed, including Global Neural Workspace theory, thalamo-cortical circuits, and apical amplification within the cortical pyramidal neurons, the phase of phenomenal content creation and the phase of this content becoming conscious are distinguishable. But all these theories are describing purely physical brain processes, being imbued with subjective aspects only by inference. So we need to look carefully at how the authors treat subjectivity itself. Essentially they are making a direct attack on the unitary nature of subjective conscious experience by attempting to separate consciousness from phenomenal experience so subjectivity is being held hostage in the division:
What constantly fuels this worry, we believe, is taking the conscious subjective phenomenal experience to be something monolithic. The peculiar nature of subjective qualities and their being conscious comes as a package and it is difficult to conceive how science might begin explaining it. … The conscious subjective experience is being felt as something unitary, we grant that. But that does not mean that if we look behind the subjective level and try to explain how such unitary experience arises, the explanation itself has to have unitary form. … Awareness in this sense is simply the process, describable in neuroscientific terms, of making the sensory qualities conscious for the subject. We could then keep using the term “consciousness” for the subjectively felt unitary experience, while holding that in reality this seemingly unitary thing is the result of an interaction between the neural processes constituting the phenomenal contents and the neural processes constituting awareness.
This is effectively a form of physicalist illusionism (Frankish 2017), because the claim made is that the subjective experience is falsely represented as integrated when the underlying physical reality is subdivided by the dual interpretation. It is an illustration of how functionalist theories of consciousness can be misused in an attempt to form a bridgehead, decomposing the unitarity of subjective consciousness into interacting divisible physical systems, simply because multiple physical processes are held to be functional, or temporally sequential, components of the associated integrated brain processing state. The trouble with this is that these functional processes can invoke an integrated conscious experience only when they are functionally confluent, so we can’t actually separate the “fast feedforward sweep” from the “recurrent activity” in generating a real conscious experience, and pathological cases like unilateral visual neglect provide no evidence that healthy integrated conscious brain processes can be so decomposed into dual states.
By contrast with theories of consciousness based on the physical brain alone, in Symbiotic Existential Cosmology, subjectivity is itself a primal cosmological complement to the physical universe. It thus explains subjective conscious experience as a cosmological, rather than just a purely biological or neuroscientific, phenomenon. This gives validation and real meaning to our experience of subjective conscious volition over the physical universe, expressed in all our behavioural activities and our sense of personal responsibility for our actions, and leads towards a state of biospheric symbiosis as climax living diversity across the generations of life as a whole, ensuring our continued survival.
Psychotic Fallacies of the Origin of Consciousness
Theories of consciousness that are poles apart from any notion of the subjectivity of conscious experience, or the hard problem of consciousness and the explanatory gap of the physical description, arise from treating consciousness merely as a type of culturally derived cognitive process. Such theories fall into the philosopher's trap of confining the nature of the discourse to rational processes and arguments, which fail to capture the raw depths of subjective experience characteristic of mystical, shamanic and animistic cultures.
In "The Origin of Consciousness in the Bicameral Mind”, Julian Jaynes (1976, 1986) claimed human “ancestors", as late as the Ancient Greeks did not consider emotions and desires as stemming from their own minds but as the consequences of actions of gods external to themselves. The theory posits that the human mind once operated in a bicameral state in which cognitive functions were divided between one part of the brain which appears to be "speaking", and a second part which listens and obeys and that the breakdown of this division gave rise to “consciousness” in humans. He used the term "bicameral" metaphorically to describe a mental state in which the right hemisphere's experiences were transmitted to the left hemisphere through auditory hallucinations. In the assumed bicameral phase, individuals lacked self-awareness and introspection. Instead of conscious thought, they heard external voices or "gods" guiding their actions and decisions. Jaynes claimed this form of consciousness, devoid of meta-consciousness and autobiographical memory, persisted until about 3,000 years ago, when societal changes led to the emergence of our current conscious mode of thought. Auditory hallucinations experienced by those with schizophrenia, including command hallucinations, paralleled the external guidance experienced by bicameral individuals implying mental illness was a bicameral remnant.
To justify his claim, he highlighted instances in ancient texts like the Iliad and the Old Testament where he claimed there was no evidence of introspection or self-awareness, and noted that gods in ancient societies were numerous and anthropomorphic, reflecting the personal nature of the external voices guiding individuals. However the Epic of Gilgamesh, copies of which are many centuries older than even the oldest passages of the Old Testament, describes introspection and other mental processes.
According to Jaynes, language is a necessary but not sufficient condition for consciousness: language existed thousands of years earlier, but consciousness could not have emerged without language. Williams (2010) defends the notion of consciousness as a social–linguistic construct learned in childhood, structured in terms of lexical metaphors and narrative practice. Ned Block's (1981) review criticism is direct – that it is "ridiculous" to suppose that consciousness is a cultural construction.
Jaynes argued that the breakdown of the bicameral mind was marked by societal collapses and environmental challenges. As people lost contact with external voices, practices like divination and oracles emerged as attempts to reconnect with the guidance they once received. However this shows an ethnocentric, rationalist lack of awareness of how earlier animistic cultures perceived the natural world, in which both humans and natural processes like storms, rivers and trees were imbued with spirits. These spirits were interacted with, but were by no means voices which humans had to blindly obey; rather, people were in dynamic interaction with them as sentient beings. There are diverse existing cultures, from the founding San to the highly evolved Maori, who practice animistic beliefs, actually and metaphorically, who were not influenced by political upheavals at the periphery of the founding urban cultures, and who can appreciate their world views in both rational and spiritual terms, while at all times being as fully integrated in their conscious experiences as modern dominant cultures. We know that doctrinal religions have evolved from mystical and animistic roots as a means to hold together larger urban societies, but these are no more rational as beliefs. Neither are polytheists more bicameral in their thinking than monotheists, merely less starkly absolute. Nor do intelligent primates display evidence of a bicameral mind, but rather a fully adapted social intelligence, attuned by social evolution to facilitate their strategic survival as consciously aware intentional agents.
McGilchrist (2009) reviews the scientific research into the complementary roles of the brain's hemispheres, and the cultural evidence, in his book "The Master and His Emissary", proposing that, since the time of Plato, the left hemisphere of the brain (the "emissary" in the title) has increasingly taken over from the right hemisphere (the "master"), to our detriment. McGilchrist felt that Jaynes's hypothesis was "the precise inverse of what happened": rather than a shift away from bicameral mentality, there evolved an increasing separation of the hemispheres towards bicameral mentality. This has far more reality value, given that the dominance of rational discourse over subjective conscious experience has risen to the degree that many people cannot rationally distinguish themselves from computational machines.
Field and Wave Theories of Consciousness v Connectome Networks and Action Potentials
Brain dynamics are a function of a variety of interacting processes. Major pyramidal neuron axon circuits functionally connect distant regions of the cortex to enable integrated processing, forming the axonal connectome of the networked brain, driven by individual pulse-coded action potentials. Complementing this are waves of continuous potential in the cortical brain tissue, indirectly sampled by electrodes on the scalp in the electroencephalogram (EEG) and by the magnetic effects of currents in the MEG. While the network view of brain activity is based on individual action potentials and regards the EEG brain waves as just tissue excitation averages, there is increasing evidence of phase coupling between the two, so that the discrete action potentials and the continuous tissue potentials are in mutual feedback. The complex interaction of these can be seen in Qasim et al. (2021), Cariani & Baker (2022) and Pinotsis et al. (2023), as exemplified in the independent dynamics of the various EEG bands and spiral and metastable wave states (Roberts et al. 2019, Xu et al. 2023). This leads to two views of brain dynamics: the networked view based on the connectome, and field theories centred on continuous tissue gradients and the folded tissue anatomy. A team led by György Buzsáki (Yang et al. 2024) has also found that the selection of experience for memory is facilitated by hippocampal sharp wave ripples reproducing prominent waking experiences during sleep.
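As an operational illustration of what such spike-field phase coupling means, here is a minimal sketch using synthetic data; the band edges, firing rule and all parameters are arbitrary assumptions rather than the methods of the studies cited above. It measures how tightly spike times lock to the phase of a band-filtered field potential.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                        # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
lfp = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.normal(size=t.size)   # 8 Hz field + noise

b, a = butter(2, [6 / (fs / 2), 10 / (fs / 2)], btype="band")
phase = np.angle(hilbert(filtfilt(b, a, lfp)))     # instantaneous phase of the band

# Toy spike train biased towards one phase of the oscillation (illustrative only).
spikes = np.where((phase > 2.5) & (rng.random(t.size) < 0.02))[0]
plv = np.abs(np.mean(np.exp(1j * phase[spikes])))  # phase-locking value, 0 (none) to 1 (perfect)
print(f"spike-field phase locking: {plv:.2f}")
```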
Pang et al. (2023) have compared the influence of two physical features: the geometry of the outer folds of the cerebral cortex, where most higher-level brain activity occurs, and the connectome, the web of nerves that links distinct regions of the cerebral cortex. Excited neurons in the cerebral cortex can communicate their state of excitation to their immediate neighbours on the surface. But each neuron also has a long axon that connects it to a far away region within or beyond the cortex, allowing neurons to send excitatory messages to distant brain cells. In the past two decades, neuroscientists have painstakingly mapped this web of connections — the connectome — in a raft of organisms, including humans. The brain’s neuronal excitation can also come in waves, which can spread across the brain and travel back in periodic oscillations.
They found that the shape of the outer surface was a better predictor of brainwave data than was the connectome, contrary to the paradigm that the connectome has the dominant role in driving brain activity. Predictions from neural field theory, an established framework for modelling large-scale brain activity, suggest that the geometry of the brain may represent a more fundamental constraint on dynamics than complex interregional connectivity.
Fig 79b: Comparison of the influences of connectome network-based processing, volumetric wave modes in the cortex and exponential distance rule (EDR) network connectivity, finding geometric eigenmodes to be predominant.
They calculated the modes of brainwave propagation for the cortical surface and for the connectome. As a model of the connectome, they used information gathered from diffusion magnetic resonance imaging (MRI), which images brain anatomy. They then looked at data from more than 10,000 records of functional MRI, which images brain activity based on blood flow. The analysis showed that brainwave modes in the resting brain, as well as during a variety of activities — such as during the processing of visual stimuli — were better explained by the surface geometry model than by the connectome one, the researchers found.
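The logic of the eigenmode comparison can be sketched crudely, without reproducing the authors' pipeline: eigenmodes of a surface Laplacian form a basis, and an activity map is judged by how well its first few geometric modes reconstruct it. In the sketch below a ring graph stands in for the cortical surface and the activity pattern is synthetic; both are assumptions made purely for illustration.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.linalg import eigh

n = 200
idx = np.arange(n)
adjacency = np.zeros((n, n))
adjacency[idx, (idx + 1) % n] = adjacency[(idx + 1) % n, idx] = 1.0   # ring "surface"
_, modes = eigh(laplacian(adjacency))            # columns = geometric eigenmodes, low to high

activity = np.sin(4 * np.pi * idx / n) + 0.1 * np.random.default_rng(3).normal(size=n)
for k in (2, 10, 50):
    basis = modes[:, :k]
    recon = basis @ (basis.T @ activity)         # reconstruction from the first k modes
    print(f"{k:>3} modes: reconstruction r = {np.corrcoef(activity, recon)[0, 1]:.2f}")
```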
There are a number of field theories of conscious brain dynamics each with their own favoured process.
Benjamin Libet (1994), the controversial discoverer of the readiness potential, notes the extreme contrast between the integral nature of conscious experience and the complex localised nature of network-based neurodynamics, leaning towards a field theory as the only plausible explanation:
One of the most mysterious and seemingly intractable problems in the mind-brain relationship is that of the unitary and integrated nature of conscious experience. We have a brain with an estimated 100 billion neurons, each of which may have thousands of interconnections with other neurons. It is increasingly evident that many functions of cerebral cortex are localized. This is not merely true of the primary sensory areas for each sensory modality, of the motor areas which command movement, and of the speech and language areas, all of which have been known for some time. Many other functions now find other localized representations, including visual interpretations of colour, shape and velocity of images, recognition of human faces, preparation for motor actions, etc. Localized function appears to extend even to the microscopic level within any given area. The cortex appears to be organized into functional and anatomical vertical columns of cells, with discrete interconnections within the column and with other columns near and far, as well as with selective subcortical structures.
In spite of the enormously complex array of localized functions and representations, the conscious experiences related to or elicited by these neuronal features have an integrated and unified nature. Whatever does reach awareness is not experienced as an infinitely detailed array of widely individual events. It may be argued that this amazing discrepancy between particularized neuronal representations and unitary integrated conscious experiences should simply be accepted as part of a general lack of isomorphism between mental and neural events. But that would not exclude the possibility that some unifying process or phenomenon may mediate the profound transformation in question.
The general problem had been recognized by many others, going back at least to Sherrington (1940) and probably earlier. Eccles (in Popper and Eccles, 1977, p. 362) specifically proposed that the experienced unity comes not from a neurophysiological synthesis but from the proposed integrating character of the self-conscious mind. This was proposed in conjunction with a dualist-interactionist view in which a separate non-material mind could detect and integrate the neuronal activities. Some more monistically inclined neuroscientists have also been arriving at related views, i.e. that integration seems to be best accounted for in the mental sphere, even if one views subjective experience as an inner quality of the brain "substrate" (as in "identity theory") or as an emergent property of it. There has been a growing consensus that no single cell or group of cells is likely to be the site of a conscious experience, but rather that conscious experience is an attribute of a more global or distributed function of the brain.
A second apparently intractable problem in the mind-brain relationship involves the reverse direction. There is no doubt that cerebral events or processes can influence, control and presumably "produce" mental events, including conscious ones. The reverse of this, that mental processes can influence or control neuronal ones, has been generally unacceptable to many scientists on (often unexpressed) philosophical grounds. Yet, our own feelings of conscious control of at least some of our behavioural actions and mental operations would seem to provide prima facie evidence for such a reverse interaction, unless one assumes that these feelings are illusory. Eccles (1990; Popper and Eccles, 1977) proposed a dualistic solution, in which separable mental units (called psychons) can affect the probability of presynaptic release of transmitters. Sperry (1952, 1985, 1980) proposed a monistic solution, in which mental activity is an emergent property of cerebral function; although the mental is restrained within a macro-deterministic framework, it can "supervene", though not "intervene", in neuronal activity. However, both views remain philosophical theories, with explanatory power but without experimentally testable formats. As one possible experimentally testable solution to both features of the mind-brain relationship, I would propose that we may view conscious subjective experience as if it were a field, produced by appropriate though multifarious neuronal activities of the brain.
Miller, Brincat & Roy (2024) Noting that cognition relies on the flexible organisation of neural activity, explore how many aspects of this organisation can be described as emergent properties, not reducible to their constituent parts. They discuss how electrical fields in the brain can serve as a medium for propagating activity nearly instantaneously, and how population-level patterns of neural activity can organise computations through subspace coding. They
note several aspects of brain waves, as opposed to network connections, including those in the alpha, beta, gamma theta and delta bands, which class them as critical emergent properties.
Fig 79c: Subspace coding. (a) Example of patterns of neural population activity for two different visual objects. (b) Population activity patterns can be thought of as points in a high-dimensional ‘state-space’, with one dimension for each neuron (three are shown, corresponding to the three labeled neurons in (a), but actual experiments sample hundreds from an underlying population of millions). (c) Activity for different conditions is typically restricted to a low-dimensional subspace (represented by a plane). (d) Information that must be kept separate is often encoded in orthogonal subspaces, so it can be read out independently. (e) When information must be mapped onto a common response, it is often encoded into aligned subspaces, so it can be driven by a single read-out.
Traditional synaptic connectivity is limited by the speed of axonal conduction and synaptic transmission. In contrast, because the brain's electric fields have a direct effect on intracellular potential, they spread at the speed of an electric field in neural tissue, i.e. nearly instantaneously. This seems ideal for rapidly coordinating local neural activity.
Spatial computing proposes that mesoscale patterns of alpha/beta activity carry top-down control signals, which reflect information about the current context and goals. These alpha/beta patterns are inhibitory and spatially constrain the bottom-up gamma power associated with content-related spiking at a microscale level. In essence, alpha/beta patterns act as stencils, allowing content (microscale gamma/spiking) to be expressed in areas where alpha/beta is absent. These stencils represent different cognitive states or task operations. This is in line with observations that power and coupling in gamma versus alpha/beta are respectively associated with bottom-up processing versus top-down control.
A key benefit of such subspace coding is the organization of neural processing. When multiple pieces of information must be held simultaneously in memory or compared, they are often stored in approximately orthogonal subspaces. That is, the spiking patterns reflecting one item are independent of those for the other item, minimizing interference between them. Similarly, as incoming sensory information is encoded into working memory, it is rotated from a ‘sensory’ subspace to an orthogonal ‘memory’ subspace, protecting it from interference from further sensory inputs. Thus, subspace coding allows distinct information to be stored and operated on independently.
This separation of content (gamma/spiking) and control (alpha/beta) into different spatial scales endows generalization and flexibility. It enables the brain to perform top-down operations without ‘knowing’ the specifics of the underlying ensembles carrying content. This allows the brain to instantly generalize control to new content. Contrast this with standard neural network models, where all information — content and control — is encoded at the same level, synaptic connectivity. As a result, a standard network model learning a task with one set of objects needs retraining to perform with a new set of objects. Your brain does not need retraining to instantly generalize.
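The geometric intuition behind orthogonal subspace coding can be sketched with synthetic population data; the dimensions, noise levels and the use of principal components are arbitrary illustrative choices. In a high-dimensional population, two randomly chosen coding subspaces are almost automatically close to orthogonal, which is what permits independent read-out with minimal interference.

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(4)
n_neurons, n_trials = 50, 200
axes_A = rng.normal(size=(n_neurons, 2))     # 2-D coding subspace for item A
axes_B = rng.normal(size=(n_neurons, 2))     # 2-D coding subspace for item B

latents = rng.normal(size=(2, n_trials))
pop_A = axes_A @ latents + 0.1 * rng.normal(size=(n_neurons, n_trials))
pop_B = axes_B @ latents + 0.1 * rng.normal(size=(n_neurons, n_trials))

def top_pcs(X, k=2):
    """Orthonormal basis for the top-k principal components of the population data."""
    U, _, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True), full_matrices=False)
    return U[:, :k]

angles = np.degrees(subspace_angles(top_pcs(pop_A), top_pcs(pop_B)))
print("principal angles between the two item subspaces (deg):", np.round(angles, 1))
```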
Fig 79d: Example neural power spectrum with a strong alpha peak in the canonical frequency range (8–12 Hz, blue-shaded region) and secondary beta peak within an overall 1/f pink noise edge of chaos power spectrum profile.
Donoghue et al. (2020), in developing an analytical technique, seek to separate the assumed periodic nature of the alpha, beta, gamma, theta and delta bands from a common signal of broad-spectrum chaotic activity with a 1/f-like distribution, with exponentially decreasing power across increasing frequencies, equivalent to the negative slope of the power spectrum when measured in log–log space. Treating the aperiodic component as ‘noise’ ignores its physiological correlates, which in turn relate to cognitive and perceptual states, while trait-like differences in aperiodic activity have been shown to be potential biological markers in development and ageing, as well as in disease, such as ADHD or schizophrenia. Their technique seeks both to enable investigation of this aperiodic component and to clarify the periodic basis of interacting signals and their wave phase.
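A simplified stand-in for this kind of decomposition (not the published algorithm) fits the aperiodic 1/f component as a straight line in log–log space and reads the oscillatory peak off the residual; the spectrum below is synthetic and the exponent and peak parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
freqs = np.linspace(1, 40, 400)
aperiodic = 1.0 / freqs ** 1.2                                       # 1/f^chi background
alpha = 0.4 * np.exp(-((freqs - 10) ** 2) / (2 * 1.5 ** 2))          # alpha bump near 10 Hz
spectrum = aperiodic + alpha + 0.01 * rng.random(freqs.size)

slope, intercept = np.polyfit(np.log10(freqs), np.log10(spectrum), 1)  # log-log line fit
aperiodic_fit = 10 ** (intercept + slope * np.log10(freqs))
residual = spectrum - aperiodic_fit                                  # periodic component

print(f"aperiodic exponent ~ {-slope:.2f}")
print(f"residual peak at ~ {freqs[np.argmax(residual)]:.1f} Hz")
```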
Joachim Keppler (2018, 2021, 2024) presents an analysis drawing conscious experiences into the orbit of stochastic electrodynamics (SED), a form of quantum field theory, utilising the conception that the universe is imbued with an all-pervasive electromagnetic background field, the zero-point field (ZPF), which, in its original form, is a homogeneous, isotropic, scale-invariant and maximally disordered ocean of energy with completely uncorrelated field modes and a unique power spectral density. This is basically a stochastic treatment of the uncertainty associated with the quantum vacuum in depictions such as the Feynman approach to quantum electrodynamics (fig 71(e)). The ZPF is thus the multiple manifestations of uncertainty in the quantum vacuum involving virtual photons, electrons and positrons, as well as quarks and gluons, implicit in the muon's anomalous magnetic moment (Borsanyi et al. 2021).
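For reference, a standard result of SED rather than anything specific to Keppler's account: the ZPF's scale-invariant spectral energy density, the only spectrum invariant under Lorentz transformations, corresponds to an average energy of half a quantum per field mode,

$$\rho_{\mathrm{ZPF}}(\omega)\,d\omega \;=\; \frac{\hbar\,\omega^{3}}{2\pi^{2}c^{3}}\,d\omega, \qquad \langle E\rangle_{\text{mode}} = \tfrac{1}{2}\hbar\omega ,$$

which is what makes the field homogeneous, isotropic and maximally disordered in the sense described above.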
In the approach of SED (de la Peña et al. 2020), in which the stochastic aspect corresponds to the effects of the collapse process into the classical limit [28], consciousness is represented by the zero point field (ZPF) (Keppler 2018). This provides a basis to discuss the brain dynamics accompanying conscious states in terms of two hypotheses concerning the zero-point field (ZPF):
“The aforementioned characteristics and unique properties of the ZPF make one realize that this field has the potential to provide the universal basis for consciousness from which conscious systems acquire their phenomenal qualities. On this basis, I posit that all conceivable shades of phenomenal awareness are woven into the fabric of the background field. Accordingly, due to its disordered ground state, the ZPF can be looked upon as a formless sea of consciousness that carries an enormous range of potentially available phenomenal nuances. Proceeding from this postulate, the mechanism underlying quantum systems has all the makings of a truly fundamental mechanism behind conscious systems, leading to the assumption that conscious systems extract their phenomenal qualities from the phenomenal color palette immanent in the ZPF. ”
Fig 80: In Keppler's model, the phase transitions underlying the formation of coherent activity patterns (attractors) are triggered by modulating the concentrations of neurotransmitters. When the concentration of neurotransmitter molecules lies above a critical threshold and selected ZPF modes are in resonance with the characteristic transition frequencies between molecular energy levels, receptor activations ensue that drive the emergence of neuronal avalanches. The set of selected ZPF modes that is involved in the formation and stabilisation of an attractor determines the phenomenal properties of the conscious state.
His description demonstrates the kind of boundary conditions in brain dynamics likely to correspond to subjective states, and thus provides a good insight into the stochastic uncertainties of brain dynamics of conscious states that would correspond to the subjective aspect. It even claims to envelop all possible modes of qualitative subjectivity in the features of the ZPF underlying uncertainty. But it would remain to be established that the ZPF can accommodate all the qualitative variations spanning the senses of sight, sound and smell, which may rather correspond to the external quantum nature of these senses.
The ZPF does not of itself solve the hard problem as such, because, at face value it is a purely physical manifestation of quantum uncertainty with no subjective manifestation, however Keppler claims to make this link clear as well: A detailed comparison between the findings of SED and the insights of Eastern philosophy reveals not only a striking congruence as far as the basic principles behind matter are concerned. It also gives us the important hint that the ZPF is a promising candidate for the carrier of consciousness, suggesting that consciousness is a fundamental property of the universe, that the ZPF is the substrate of consciousness and that our individual consciousness is the result of a dynamic interaction process that causes the realization of ZPF information states. …In that it is ubiquitous and equipped with unique properties, the ZPF has the potential to define a universally standardized substratum for our conscious minds, giving rise to the conjecture that the brain is a complex instrument that filters the varied shades of sensations and emotions selectively out of the all-pervasive field of consciousness, the ZPF (Keppler, 2013).
In personal communication regarding these concerns, Joachim responds as follows:
I understand your reservations about conventional field theories of consciousness. The main problem with these approaches (e.g., McFadden’s approach) is that they cannot draw a dividing line between conscious and unconscious field configurations. This leads to the situation that the formation of certain field configurations in the brain is claimed to be associated with consciousness, while the formation of the same (or similar) field configurations in an electronic device would usually not be brought in relation with consciousness. This is what you call quite rightly a common category error. Now, the crucial point is that the ZPF, being the primordial basis of the electromagnetic interaction, offers a way to avoid this category error. According to the approach I propose, the ZPF (with all its field modes) is the substrate of consciousness, everywhere and unrestrictedly. The main difference between conscious and unconscious systems (processes) is their ability to enter into a resonant coupling with the ZPF, resulting in an amplification of selected ZPF modes. Only a special type of system has this ability (the conditions are described in my article). If a system meets the conditions, one must assume that it also has the ability to generate conscious states.
Keppler, J., and Shani, I. (2020) link this process to a form of cosmopsychism confluent with Symbiotic Existential Cosmology:
The strength of the novel cosmopsychist paradigm presented here lies in the bridging of the explanatory gap the conventional materialist doctrine struggles with. This is achieved by proposing a comprehensible causal mechanism for the formation of phenomenal states that is deeply rooted in the foundations of the universe. More specifically, the sort of cosmopsychism we advocate brings a new perspective into play, according to which the structural, functional, and organizational characteristics of the NCC are indicative of the brain’s interaction with and modulation of a UFC. In this respect, the key insights from SED suggest that this field can be equated with the ZPF and that the modulation mechanism is identical with the fundamental mechanism underlying quantum systems, resulting in our conclusion that a coherently oscillating neural cell assembly acquires its phenomenal properties by tapping into the universal pool of phenomenal nuances predetermined by the ZPF.
Fig 80b (Left): It is postulated that conscious systems must be equipped with a fundamental mechanism by means of which they are able to influence the basic structure of the ubiquitous field of consciousness (UFC). This requires the interaction of a physical system with the UFC in such a way that a transiently stable dynamic equilibrium, a so-called attractor state characterised by long-range coherence, is established in which the involved field modes enter into a phase-locked coupling. (Right) Cortical column coherence.
Keppler (2023) also proposes a model where long-range coherence is developed in the functioning of cortical microcolumns, based on the interaction of a pool of glutamate molecules, with the vacuum fluctuations of the electromagnetic field, involving a phase transition from an ensemble of initially independent molecules toward a coherent state, resulting in the formation of a coherence domain that extends across the full width of a microcolumn.
John Archibald Wheeler put it pointedly, along these lines: the physics of the vacuum is the real physics – the rest are trivial aspects (Menas Kafatos).
However he becomes entrapped in a dual aspect monism view similar to mind-brain identity theories where subjective and objective aspects are just dual reflections of one another. In personal communication in 2024, he says:
The crucial point is that the collective behavior of the brain constituents, orchestrated by the ZPF, leads to a new level of ZPF-DAS that goes beyond the level of the individual DAS of the brain constituents. In other words, the formation and integration of complex conscious states takes place in the ZPF, not in the brain! ... The brain-ZPF interaction is always localized in the brain. More precisely, resonant brain-ZPF (neurotransmitter-ZPF) coupling can only occur under special conditions in the cortical microcolumns of a person's brain [all this is explained at length in the paper]. Accordingly, the modification of the ZPF (i.e., the amplification of specific ZPF modes) resulting from the resonant interaction takes place locally in the brain of this particular person. This is the reason why the excited phenomenal qualities (associated with the amplified ZPF modes) are intimately connected with this person, leading to the generation of a private, subjective conscious experience. This privacy is due to the local character of the interaction mechanism that encapsulates the generated conscious state from the environment. So, we are dealing with a global, ubiquitous, undifferentiated (dual-aspect) ZPF that harbors local, individual, differentiated (dual-aspect) subjects, which can be thought of as islands of concrete conscious experiences in a vast ocean of undifferentiated consciousness. And, yes, in this way the ZPF could potentially act as memory reservoir.
If we accept the ZPF as the quantum vacuum, it "interacts" with absolutely everything, but not in the usual sense we mean interaction and certainly not through a resonance, which is a property of an active positive energy electric field. For example, in quantum electrodynamics (QED) the magnetic moment of an electron is caused by it emitting and absorbing the same photon in an infinite number of ways. This is precisely the electron "interacting" with the quantum vacuum. The whole brain thus consists of molecules interacting with the quantum vacuum. There is nothing special about glutamate. You can set up a resonant system but you can't confine the quantum vacuum by saying it only interacts in cortical columns in a specific way. Claiming it is dual aspect adds another confound to the difficulty and doesn't solve or explain anything and it is epiphenomenalistic because it doesn't explain conscious intentional behaviour. This places a devastating constraint, that subjectivity and physicality have to be stitched together as duals from the bottom up. This simply isn't consistent with nature or experience as it stands. There is nothing about what the ZPF is actually like that suggests it involves a process capable of forming complete subjective experiences. It's simply uncertainty manifest in virtual particle fluctuations.
I see this approach as posing duality so that the physical quantum vacuum can be thought to be subjective as well. In a way SEC treats quantum subjectivity as a manifestation of the quantum vacuum, but it's a more complex one. Uncertainty is all about type 1 processes. Otherwise we just have a Hamiltonian process 2. So the quantum vacuum is only one foundation manifestation of uncertainty. By placing reliance on the field, you seem to be forcing yourself into dual aspect because you think the field itself is displaying consciousness, but there is no process in the quantum vacuum to support it. Also it doesn't establish anything new over physicality because the duality is bijective and has no additional information due to the subjective aspect being powerless to affect the physical aspect.
Here is what I think the QVF (quantum vacuum field) looks like. Symmetry-breaking means the asymmetric forces of nature we experience are at a lower polarised vacuum energy so the QVF emits those virtual particles we see e.g. in QED, but briefly even QED has to be emitting and absorbing virtual quarks etc. from the other forces but within the extremely small discrepancies we see in Feynman's QED calculations. In the inflationary phase at the higher vacuum energy the universe was in, things would look different and fundamental force unification would take place, so the strong forces would look relatively weaker and more commonplace, ultimately leading to the pure quantum fluctuations that set off the inflationary phase. But it's all the same quantum vacuum governed by uncertainty. So we are looking right back into the cosmic origin, but looking through a telescope backwards, so the primal phenomena have vanishingly small incidence over Planck intervals. This is not synonymous with conscious experience!
Karl Pribram (2004) has noted both the similarity of wave coherence interactions as an analogy or manifestation of quantum measurement and the ‘holographic’ nature of wave potential fluctuations, in the dendritic web:
The holonomic brain theory of quantum consciousness was developed by neuroscientist Karl Pribram initially in collaboration with physicist David Bohm. Pribram suggests these processes involve electric oscillations in the brain's fine-fibered dendritic webs, which are different from the more commonly known action potentials involving axons and synapses. These wave oscillations create interference patterns in which memory is encoded naturally, and the wave function may be analyzed by a Fourier transform. Gabor, Pribram and others noted the similarities between these and the storage of information in a hologram, which can also be analyzed with a Fourier transform.
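The Fourier/holographic analogy can be illustrated with a toy associative store, which is not Pribram's model but captures the idea that a memory trace laid down as an interference-like product of two patterns' spectra lets either pattern be recovered by correlation with the other, as in an optical hologram; all signals here are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1024
key = rng.normal(size=n)              # stand-in for one oscillatory pattern
content = rng.normal(size=n)          # stand-in for the stored content

K = np.fft.fft(key)
trace = K * np.fft.fft(content)       # interference-like encoding in the Fourier domain

# Retrieval: correlate the trace with the key (a regularised inverse filter).
recovered = np.fft.ifft(trace * np.conj(K) / (np.abs(K) ** 2 + 1e-9)).real
print(f"similarity of recovered content to original: {np.corrcoef(content, recovered)[0, 1]:.3f}")
```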
Coherent wave excitation of pyramidal action potentials has been cited as the basis of correlated potentiation of the glymphatic flow through the brain parenchyma during NREM and particularly REM sleep (Jiang-Xie et al. 2024).
The dissipative quantum model of brain dynamics (Freeman W & Vitiello 2006, 2007, 2016, Capolupo A, Freeman & Vitiello 2013, Vitiello 2015, Sabbadini & Vitiello 2019) provides another field theoretic description. I include a shortened extract from Freeman & Vitiello (2015), which highlights to me the most outstanding field theoretic description of the neural correlate of consciousness I know of, which also has the support of Freeman’s dynamical attractor dynamics as illustrated in fig 78, and likewise has similar time dual properties to the transactional interpretation discussed above, invoking complementary time directed-roles of emergence and imagination:
Fig 81: Molecular biology is a theme and variations on the polar and non-polar properties of organic molecules residing in an aqueous environment. Nucleotide double helices, protein folding and micelle structures, as well as membranes, are all energetically maintained by their surrounding aqueous structures. Water has one of the highest specific heats of all, because of the large number of internal dynamic quantum states. Myoglobin (Mb), the oxygen-transporting protein in muscle containing a heme active site, illustrates this (Ansari et al. 1984), both in its functionally important movements (fim) and its equilibrium fluctuations, invoking fractal energetics between its high and low energy states of Mb and MbCO. This activity in turn is stabilised both by non-polar side chains maintaining the aqueous structure and polar side chains interacting with the aqueous environment to form water hydration structures. (Top left) The hydration shell of myoglobin (blue surface) with 1911 water molecules (CPK model), the approximate number needed for optimal function (Vajda & Perczel 2014). Lower: Molecules taking part in biochemical processes, from small molecules to proteins, are critical quantum mechanically. Electronic Hamiltonians of biomolecules are tuned exactly to the critical point of the metal-insulator transition separating the Anderson localized insulator phase from the conducting disordered metal phase. Left: The HOMO/LUMO orbitals for myoglobin calculated with the Extended Hückel method. Right: Generalized fractal dimensions Dq of the wave functions (Vattay et al. 2015).
We began by using classical physics to model the dendritic integration of cortical dynamics with differential equations, ranging in complexity from single positive loops in memory through to simulated intentional behavior (Kozma and Freeman 2009). We identified the desired candidate form in a discrete electrochemical wave packet embedded in the electroencephalogram (EEG), often with the form of a vortex like a hurricane, which carried a spatial pattern of amplitude modulation (AM) that qualified as a candidate for thought content.
Measurement of scalp EEG in humans showed that the size and speed of the formation of wave packets were too big to be attributed to the classical neurophysiology of neural networks, so we explored quantum approaches. In order to use dissipative quantum field theory it is necessary to include the impact of brain and body on the environment. Physicists do this conceptually and formally by doubling the variables (Vitiello 1995, 2001, Freeman and Vitiello 2006) that describe dendritic integration in the action-perception cycle. By doing so they create a Double, and then integrate the equations in reverse time, so that every source and sink for the brain-body is matched by a sink or source for the Double, together creating a closed system.
Fig 82: Field theory model of inward projecting electromagnetic fields overlapping in basal brain centres (MacIver 2022).
On convergence to the attractor the neural activity in each sensory cortex condenses from a gas-like regime of sparse, disordered firing of action potentials at random intervals to a liquid-like macroscopic field of collective activity. The microscopic pulses still occur at irregular intervals, but the probability of firing is no longer random. The neural mass oscillates at the group frequencies, to which the pulses conform in a type of time multiplexing. The EEG or ECoG (electrocorticogram) scalar field during the liquid phase revealed a burst of beta or gamma oscillation we denoted as a wave packet. Its AM patterns provided the neural correlates of perception and action. The surface grain inferred that the information capacity of wave packets is very high. The intense electrochemical energy of the fields was provided everywhere by the pre-existing trans-membrane ionic concentration gradients.
The theory cites water molecules and the cytosol as the basis for the quantum field description, a position supported at the molecular level by the polarisation of the cytoplasmic medium and all its constituents between aqueous polar and hydrophobic non-polar energetics as illustrated in fig 81.
Neurons, glia cells and other physiological units are [treated as] classical objects. The quantum degrees of freedom of the model are associated to the dynamics of the electrical dipoles of the molecules of the basic components of the system, i.e. biomolecules and molecules of the water matrix in which they are embedded. The coherence of the long-range correlations is of the kind described by quantum field theory in a large number of physical systems, in the standard model of particle physics as well as in condensed matter physics, ranging from crystals to magnets, from superconductive metals to superfluids. The coherent states characterizing such systems are stable in a wide range of temperatures.
In physiological terms the field consists of heightened ephaptic excitability in an interactive region of neuropil, which creates a dominant focus by which every neuron is sensitized, and to which every neuron contributes its remembrance. In physical terms, the dynamical output of the many-body interaction of the vibrational quanta of the electric dipoles of water molecules and other biomolecules energize the neuropil, the densely compartmentalized tissue of axons, dendrites and glia through which neurons force ionic currents. The boson condensation provides the long-range coherence, which in turn allows and facilitates synaptic communication among neuron populations.
The stages of activation of the quantum field boson condensation correspond closely to stages of the Freeman attractor dynamics investigated empirically in the EEG and ECoG:
We conceive each action-perception cycle as having three stages, each with its neurodynamics and its psychodynamics (Freeman 2015). Each stage has at least one phase transition and may have two or more before the next stage. In the first stage a boson condensation forms a gamma wave packet by a phase transition in each of the primary sensory cortices. Only in stage one a phase transition would occur in a single cortex. In stage two the entorhinal cortex integrates all modalities before making a gestalt.
When the boson condensation carrying its AM pattern invades and recruits the amygdala and hypothalamus, we propose that this correlates with awareness of emotion and value with incipient awareness of content. In the second stage a more extended boson condensation forms a larger wave packet in the beta range that extends through the entire limbic system including the entorhinal cortex, which is central in an AM pattern. We believe it correlates with a flash memory unifying the multiple primary percepts into a gestalt, for which the time and place of the subject forming the gestalt are provided by the hippocampus. A third phase transition forms a boson condensation that sustains a global AM pattern, the manifestations of which in the EEG extend over the whole scalp. We propose that the global AM pattern is accompanied by comprehension of the stimulus meaning, which constitutes an up-to-date status summary as the basis for the next intended action.
The dual time representation of the quantum field and its double invokes the key innovative and anticipatory features of conscious imagination:
Open systems require an environment to provide the sink where their waste energy goes, and a source of free energy which feeds them. From the standpoint of the energy flux balance, brains describe the relevant restructured part of the environment using the time-reversed copy of the system, its complement or Double (Vitiello 2001). Where do the hypotheses come from? The answer is: from imagination. In theory the best sources for hypotheses are not memories as they appear in experience, but images mirrored backward in time. The imaginings are not constrained by thermodynamics. The mirror sinks and sources are imagined, not emergent. From this asymmetry we infer that the mirror copy exists as a dynamical system of nerve energy, by which the Double produces its hypotheses and predictions, which we experience as perception, and which we test by taking action. It is the Double that imagines the world outside, free from the shackles of thermodynamic reality. It is the Double that soars.
Johnjoe McFadden (2020) likewise has a theory of consciousness associated with the electromagnetic wave properties of the brain’s EM field interacting with the matter properties of “unconscious” neuronal processing. In his own words he summarises his theory as follows:
I describe the conscious electromagnetic information (cemi) field theory which has proposed that consciousness is physically integrated, and causally active, information encoded in the brain’s global electromagnetic (EM) field. I here extend the theory to argue that consciousness implements algorithms in space, rather than time, within the brain’s EM field. I describe how the cemi field theory accounts for most observed features of consciousness and describe recent experimental support for the theory. … The cemi field theory differs from some other field theories of consciousness in that it proposes that consciousness — as the brain’s EM field — has outputs as well as inputs. In the theory, the brain’s endogenous EM field influences brain activity in a feedback loop (note that, despite its ‘free’ adjective, the cemi field’s proposed influence is entirely causal, acting on voltage-gated ion channels in neuronal membranes to trigger neural firing).
The lack of correlation between complexity of information integration and conscious thought is also apparent in the common-place observation that tasks that must surely require a massive degree of information integration, such as the locomotory actions needed to run across a rugged terrain, may be performed without awareness but simple sensory inputs, such as stubbing your toe, will over-ride your conscious thoughts. The cemi field theory proposes that the non-conscious neural processing involves temporal (computational) integration whereas operations, such as natural language comprehension, require the simultaneous spatial integration provided by the cemi field. … Dehaene (2014) has recently described four key signatures of consciousness: (i) a sudden ignition of parietal and prefrontal circuits; (ii) a slow P3 wave in EEG; (iii) a late and sudden burst of high-frequency oscillations; and (iv) exchange of bidirectional and synchronized messages over long distances in the cortex. It is notable that the only feature common to each of these signatures—aspects of what Dehaene calls a ‘global ignition’ or ‘avalanche’—is large endogenous EM field perturbations in the brain, entirely consistent with the cemi field theory.
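As a toy illustration of the spatial versus temporal contrast McFadden draws (a hedged sketch, not his model; the coupling weights and counts are hypothetical), the instantaneous field at a point already superposes the contributions of every simultaneously active neuron, whereas a serial read-out of the same population has to integrate over many time steps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: which neurons are firing at this instant, and how
# strongly each one couples (1/r-style) to the field at a recording point.
n_neurons = 200
activity = rng.integers(0, 2, n_neurons)        # 0/1 firing state right now
weights = rng.uniform(0.5, 1.5, n_neurons)      # illustrative coupling strengths

# Spatial integration: one instantaneous field value already sums all sources.
field_now = np.sum(weights * activity)
print(f"field value {field_now:.1f} integrates {activity.sum()} active neurons in a single instant")

# Temporal (computational) integration: a serial channel must visit them in time.
print(f"a one-neuron-per-step read-out of the same population needs {n_neurons} steps")
```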
Jones & Hunt (2023) provide a wide-ranging review of field theories of consciousness culminating in their own favoured theory, combining a panpsychist view concordant with Symbiotic Existential Cosmology although specifically dependent on EM fields as its key interface. They begin with a critical review of neuronal network approaches to conscious brain function:
Neuroscientists usually explain how our different sensory qualia arise in terms of specialized labeled lines with their own detector fibers and processing areas for taste, vision, and other sensory modes. Photoreceptors thus produce color qualia regardless of whether they are stimulated by light, pressure, or other stimuli. This method is supplemented by detailed comparisons of the fibers within each labeled line. For example, the three color fibers overlap in their response to short, medium, and long wavelengths of incoming light. So across-fiber comparisons of their firing rates help disambiguate which wavelengths are actually present. This longstanding view has arisen from various historical roots. But the overall problem is that these operations are so similar in the visual, tactile, and other sensory modes that it is unclear how these methods can differ enough to account for all the stark differences between color and taste qualia, for example. Another issue (which will be addressed more below) concerns the “hard problem” of why this biological information processing is accompanied by any conscious experience of colors, pains, et cetera.
It might be thought that recently proposed neuron-based neuroscientific theories of consciousness would offer more viable accounts of how different qualia arise. But they rarely do. For example, Global Neuronal Workspace Theory GNWT (e.g., Dehaene and Naccache, 2001; Dehaene, 2014) and Higher-Order Theories (e.g., Rosenthal, 2005) focus on access consciousness–the availability of information for acting, speaking, and reasoning. This access involves attention and thought. But these higher cognitive levels do not do justice to qualia, for qualia appear even at the very lowest levels of conscious cognition in pre-attentive iconic images.
They then explore both integrated information theory and quantum approaches such as that of Hameroff and Penrose, illustrating their limitations:
Integrated Information Theory represents qualia information abstractly and geometrically in the form of a system’s “qualia space” (Tononi 2008). This is the space where each axis represents a possible state of the system–a single combination of logic-gate interactions (typically involving synapses). .. IIT’s accounts of qualia spaces are far too complex to specify except in the simplest of cases, and no tests for this method of characterizing qualia has yet been proposed, as far as we are aware.
Hameroff and Penrose have not yet addressed how different qualia arise from different quantum states. This latter issue applies to many quantum theories of consciousness. They generally omit mention of how quantum states yield the primary sensory qualia (redness, sweetness, etc.) we are familiar with. For example, Beshkar (2020) contains an interesting QBIT theory of consciousness that attributes qualia to quantum information encoded in maximally entangled states. Yet this information ultimately gets its actual blueness, painfulness, etc. from higher cortical mechanisms criticized above. Another example is Lewtas (2017). He also attributes our primary qualia to quantum levels. Each fundamental particle has some of these various qualia. Synchronized firing by neurons at different frequencies selects from the qualia and binds them to form images. ... The general problem with these highly philosophical qualia theories is that they are hard to evaluate. Their uniting of qualia to quanta is not spelt out in testable detail.
They then outline the difficulties network based neuroscience has dealing with qualia:
Standard neuroscience has not explained well how the brain’s separate, distributed visual circuits bind together to support a unified image. This is an aspect of the so-called “binding problem” of how the mind’s unity arises ... visual processing uses separate, parallel circuits for color and shape, and it is unclear how these circuits combine to form complete images. Ascending color and shape circuits have few if any synapses for linking their neurons to create colored shapes. Nor do they converge on any central visual area.
(1) The coding/correlation problem: As argued above, the neuronal and computational accounts above have failed to find different information-processing operations among neurons that encode our different qualia.
(2) The qualia-integration problem: Computational accounts also face the problem of explaining how myriad qualia are integrated together to produce overall unified perceptions such as visual images.
(3) The hard problem: In addition to the two empirical problems above, computational accounts face a hard, metaphysical problem. Why are neural events accompanied by any qualia at all?
They then explore how field theories can address these fundamental issues:
EM field approaches to minds have offered new theories of qualia and consciousness, some of which are testable. These electromagnetic approaches seat consciousness primarily in the various complex EM fields generated by neurons, glia and the rest of the brain and body. ... These EM field approaches are proliferating because they draw on considerable experimental evidence and withstand past criticisms from standard neuroscience. For example, they have explained the unity of consciousness in terms of the physical unity (by definition) of EM fields–in contrast to the discrete nature of neurons and their synaptic firing. In the last two decades, they have also offered explanations of how neural EM activity creates different qualia.
Pockett’s (2000) theory of qualia is an important landmark in EM field theories of mind. It is rooted in extensive experimental evidence, makes testable predictions, and is strongly defended against critics. If Kohler, Libet, Eccles, and Popper helped establish the EM field approach to minds, Susan Pockett has arguably done more to develop it than anyone else–except for perhaps Johnjoe McFadden. … Pockett’s basic claim is that “consciousness is identical with certain spatiotemporal patterns in the electromagnetic field” (ibid., pp. vi, 109, 136–7). Her evidence comes mainly from extensive EEG and MEG studies of neural electromagnetic fields. They show correlations between sensory qualia and field patterns. For example, EEG studies by Freeman (1991) show that various odors (e.g., from bananas or sawdust) correlate with specific spatial patterns distributed across mammalian olfactory areas.
McFadden’s theory says that information is conscious at all levels, which seems to entail a form of panpsychism (McFadden, 2002b). The “discrete” consciousness of elementary particles is limited and isolated. But as particles join into a field, they form a unified “field” consciousness. As these fields affect motor neurons, the brain’s consciousness is no longer an epiphenomenon, for its volition can communicate with the world. This level of “access” consciousness serves as a global workspace where specialized processors compete for access to volition’s global, conscious processes. McFadden rejects popular views that minds are just ineffectual epiphenomena of brain activity. Instead, field–nerve interactions are the basis of free will. The conscious field is deterministic, yet it is free in that it affects behavior instead of being epiphenomenal (McFadden, 2002a,b). This treats determinism as compatible with free will construed as self-determination.
They postpone the hard problem and focus on the first two above:
(1) The coding/correlation problem: What different EM-field activities encode or correlate with the various qualia? Both field theories above face difficulties here.
(2) The qualia-integration problem: How do EM fields integrate myriad qualia to form (for example) unified pictorial images? Here field theories seem quite promising in their ability to improve upon standard neuroscience.
They then cite three emergent field theories which have sought to address the outstanding problems faced by the field theories already discussed:
Ward and Guevara (2022) localize qualia in the fields generated by a particular part of the brain. Their intriguing thesis is that our consciousness and its qualia are based primarily on structures in thalamic EM fields which serve to model environmental and bodily information in ways relevant to controlling action. Ward and Guevara argue that the physical substrate of consciousness is limited to strong neural EM fields where synchronously firing neurons reinforce each other’s information in a manner which is also integrated and complex. Finally, local, nonsynchronous fields can be canceled out in favor of a dominant field that synchronously and coherently represents all the information from our senses, memories, emotions, et cetera. For these reasons, Ward and Guevara believe that fields are better candidates than neurons and synaptic firing for the primary substrate of consciousness. … they cite four reasons for ascribing consciousness to the thalamus. (1) We are not conscious of all sensory computations, just their end result, which involves the thalamic dynamic core. (2) Thalamic dysfunctions (but not necessarily cortical dysfunctions) are deeply involved in nonconsciousness conditions such as anesthesia, unresponsive wakefulness syndrome, and anoxia. (3) The thalamus is a prime source and controller of synchronization (in itself and in cortex), which is also associated with consciousness. (4) The thalamus (especially its DM nucleus Ouhaz et al. 2018) is ideally suited for the integrative role associated with consciousness, for cortical feedbacks seem to download cortical computations into thalamus. ... These lines of evidence indicate that while cortex computes qualia, thalamus displays qualia.
Another author who attributes qualia to fundamental EM activity is Bond (2023). This clear, succinct paper explains that quantum coherence involves the entanglement of quanta within energy fields, including the EM fields generated by neurons. Neural matter typically lacks this coherence because the haphazard orientation of quantum spins in the matter creates destructive interference and decoherence. Bond proposes the novel idea that firing neurons generate EM fields that can flow through nearby molecular structures and entangle with their atoms. This coherence produces our perceptions. The different subjective feelings of these perceptions come from different hybrids or mixtures of the fields’ wavelengths as they vibrate or resonate. ... On a larger scale, this coherence ties into the well-known phase- locking of corticothalamic feedback loops. Together, they produce the holism or unity of consciousness. This combination of coherent, phase-locked feedback loops and coherent, entangled wave-particles in EM fields is called by Bond a “coherence field.” It is investigated by his Coherence Field Theory (CFT).
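A hedged numerical illustration of the superposition physics that both Ward and Guevara's dominant synchronous field and Bond's coherence field rely on (the numbers are arbitrary): N phase-locked sources add up to an amplitude of order N, whereas haphazardly phased sources largely cancel, leaving only an amplitude of order √N, so a synchronous population easily dominates the local field:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sources = 1000                    # hypothetical number of concurrently active cells

phases_locked = np.zeros(n_sources)                    # synchronous, phase-locked firing
phases_random = rng.uniform(0, 2 * np.pi, n_sources)   # haphazard phases (decoherent)

# Each source contributes a unit-amplitude oscillation; sum the complex amplitudes.
amp_locked = np.abs(np.sum(np.exp(1j * phases_locked)))   # ~ N
amp_random = np.abs(np.sum(np.exp(1j * phases_random)))   # ~ sqrt(N) on average

print(f"phase-locked amplitude: {amp_locked:.0f} (about N = {n_sources})")
print(f"random-phase amplitude: {amp_random:.1f} (about sqrt(N) = {np.sqrt(n_sources):.1f})")
```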
Finally, as joint authors, they elucidate their favoured theory, General Resonance Theory (GRT), arising from their independent research:
Another approach to the Qualia Problem is Hunt and Schooler’s General Resonance Theory (GRT), which is grounded in a panpsychist framework. GRT assumes that all matter is associated with at least some capacity for phenomenal consciousness (this is called the “panpsychism axiom”), but that consciousness is extremely rudimentary in the vast majority of cases due to a lack of physical complexity mirrored by the lack of mental complexity. The EM fields associated with all baryonic matter (i.e., charged particles) are thought to be the primary seat of consciousness simply because EM fields are the primary force at the scale of life (strong and weak nuclear fields are operative at scales far smaller and gravity is operative mostly at scales far larger). Accordingly, GRT is applicable to all physical structures and as a theory is not limited only to neurobiological or even biological structures (Hunt and Schooler, 2019).
GRT suggests that resonance (similar but not synonymous with synchronization and coherence) of various types is the key mechanism by which the basic constituents of consciousness, when in sufficient proximity, combine into more complex types of consciousness. This is the case because shared resonance allows for phase transitions in the speed and bandwidth of information exchange to occur at various organizational levels, allowing previously disordered systems to self-organize and thus become coherent by freely sharing information and energy.
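The role claimed for shared resonance can be illustrated, without committing to GRT's specifics, by the textbook Kuramoto model of coupled oscillators: below a critical coupling strength the population stays disordered, while above it a collective rhythm abruptly self-organizes. The sketch below is only an analogy for the kind of phase transition GRT invokes; all parameters are arbitrary:

```python
import numpy as np

def kuramoto_order(coupling, n=500, steps=4000, dt=0.01, seed=2):
    """Return the final synchrony r in [0, 1] for globally coupled phase oscillators."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, n)          # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)     # initially disordered phases
    for _ in range(steps):
        mean_field = np.mean(np.exp(1j * theta))
        r, psi = np.abs(mean_field), np.angle(mean_field)
        theta += dt * (omega + coupling * r * np.sin(psi - theta))
    return np.abs(np.mean(np.exp(1j * theta)))

# Weak coupling: the oscillators ignore each other; strong coupling: a shared
# resonance (large order parameter r) emerges from the previously disordered system.
for K in (0.5, 1.0, 2.0, 4.0):
    print(f"coupling K = {K:3.1f} -> order parameter r = {kuramoto_order(K):.2f}")
```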
Qualia, in GRT, are synonymous with consciousness, which is simply subjective experience:
Jones (2017, 2019), a coauthor of the current paper, has developed an EM-field theory of qualia. Like other field theories, it attributes qualia and images to neural EM-field patterns (and probably the EM-charged matter emitting the fields). Yet these are not the coded images of computational field theories that are based on information processing. Instead, in his theory images actually reside in conscious, pictorial form within the EM fields of neural maps. This is a neuroelectrical, pure panpsychist theory of mind (NP). The “pure panpsychism” says that everything (not just EM) is comprised purely of consciousness. NP addresses the hard problem, qualia-integration problem, and qualia coding/ correlation problem in the following ways.
(1) The hard problem: How are qualia metaphysically related to brains and computations? In NP, consciousness and its qualia are the hidden nature of observable matter and energy. We are directly aware of our inner conscious thoughts and feelings. Yet we are just indirectly aware of the observable, external world through reflected light, instruments, sense organs, et cetera.
(2) The qualia coding/correlation problem: How do our various qualia arise? Yet there is now growing evidence that different qualia correlate with different electrically active substances in cellular membranes found in sensory and emotional circuits. These substances are the membranes’ ion-channel proteins and associated G-protein-coupled receptors (GPCRs). For example, the different primary colors correlate with different OPN1 GPCRs ... oxytocin and vasopressin receptor proteins correlate with feelings of love, estrogen and testosterone receptors correlate with lust, the endorphin receptor correlates with euphoria, and the adrenaline receptor correlates with vigilance.
(3) The qualia-integration problem: First, how do various qualia unify together into an overall whole? Second, how specifically do qualia join point by point to form pictorial images? In NP’s field theory, active circuits create a continuous EM field between neurons that pools their separate, atomized consciousness. This creates a unified conscious mind along brain circuits (with the mind itself residing in the field and perhaps in the charged matter creating the field). This unity is strongest around the diffuse ion currents that run along (and even between) neuronal circuits. It is very strong among well-aligned cortical cells that fire together coherently.
In conclusion they state: Consciousness is characterized mainly by its privately experienced qualities (qualia). Standard, computation-based and synapse-based neuroscience have serious difficulties explaining them. ... field theories have improved in key ways upon standard neuroscience in explaining qualia. But this progress is sometimes tentative–it awaits further evidence and development.
Earlier, John Eccles (1986) proposed a brain–mind identity theory involving psychon quasi-particles mediating the uncertainty of synaptic transmission to complementary dendrons, cylindrical bundles of neurons arranged vertically in the six outer layers or laminae of the cortex. Eccles proposed that each of the 40 million dendrons is linked with a mental unit, or "psychon", representing a unitary conscious experience. In willed actions and thought, psychons act on dendrons and, for a moment, increase the probability of the firing of selected neurons through a quantum tunnelling effect in synaptic exocytosis, while in perception the reverse process takes place. This model has been elaborated by a number of researchers (Eccles 1990, 1994, Beck & Eccles 1992, Georgiev 2002, Hari 2008). The difficulty with the theory is that the psychons are then physical quasi-particles with integrative mental properties. So it is a contradictory description that doesn’t manifest subjectivity except by its integrative physical properties.
Summarising the state of play, we have two manifestations of consciousness at the interface with objective physical description, (a) the hard problem of consciousness and (b) the problem of quantum measurement, both of which are in continual debate. Together these provide complementary windows on the abyss in the scientific description and a complete solution of existential cosmology that we shall explore in this article.
Neural Nets versus Biological Brains
Steven Grossberg is recognised for his contribution to ideas using nonlinear systems of differential equations such as laminar computing, where the layered cortical structures of mammalian brains provide selective advantages, and for complementary computing, which concerns the idea that pairs of parallel cortical processing streams compute complementary properties in the brain, each stream having complementary computational strengths and weaknesses, analogous to physical complementarity in the uncertainty principle. Each can possess multiple processing stages realising a hierarchical resolution of “uncertainty”, which here means that computing one set of properties at a given stage prevents computation of a complementary set of properties at that stage.
“Conscious Mind, Resonant Brain” (Grossberg 2021) provides a panoramic model of the brain, from neural networks to network representations of conscious brain states. In so doing, he presents a view based on resonant non-linear systems, which he calls adaptive resonance theory (ART), in which a subset of “resonant” brain states are associated with conscious experiences. While I applaud his use of non-linear dynamics, ART is a structural abstract neural network model and not what I as a mathematical dynamicist conceive of as "resonance", compared with the more realistic GNW, or global neuronal workspace model.
The primary intuition behind the ART model is that object identification and recognition generally occur as a result of the interaction of 'top-down' observer expectations with 'bottom-up' sensory information. The model postulates that 'top-down' expectations take the form of a memory template or prototype that is then compared with the actual features of an object as detected by the senses. This comparison gives rise to a measure of category belongingness. As long as this difference between sensation and expectation does not exceed a set threshold called the 'vigilance parameter', the sensed object will be considered a member of the expected class. The system thus offers a solution to the 'plasticity/stability' problem, i.e. the problem of acquiring new knowledge without disrupting existing knowledge that is also called incremental learning.
The basic ART structure.
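The matching cycle just described can be sketched in a few lines (a minimal, binary ART-1-style toy with illustrative parameters, not Grossberg's full laminar architecture): each learned category holds a top-down template, the best-matching template is compared with the input, and the vigilance parameter decides whether the match resonates or the category is reset and the search continues:

```python
import numpy as np

def art1_classify(inputs, vigilance=0.7, alpha=0.001):
    """Minimal ART-1-style incremental clustering of binary vectors.

    Each category stores a binary template (a simplified stand-in for the
    top-down prototype).  An input is accepted by the best-matching category
    only if the match |I AND w| / |I| exceeds the vigilance parameter;
    otherwise that category is reset and the search continues, creating a new
    category if none passes.
    """
    templates = []     # learned prototypes
    labels = []
    for I in inputs:
        I = np.asarray(I, dtype=bool)
        # Choice function: rank existing categories by |I AND w| / (alpha + |w|)
        order = sorted(range(len(templates)),
                       key=lambda j: -(np.sum(I & templates[j]) /
                                       (alpha + np.sum(templates[j]))))
        chosen = None
        for j in order:
            match = np.sum(I & templates[j]) / max(np.sum(I), 1)
            if match >= vigilance:                 # vigilance (resonance) test
                templates[j] = I & templates[j]    # fast learning: template shrinks to overlap
                chosen = j
                break                              # resonance: stop the search
            # otherwise: reset this category and try the next one
        if chosen is None:                         # no resonance anywhere: new category
            templates.append(I.copy())
            chosen = len(templates) - 1
        labels.append(chosen)
    return labels, templates

patterns = [[1,1,1,0,0,0], [1,1,0,0,0,0], [0,0,0,1,1,1], [0,0,1,1,1,0]]
labels, _ = art1_classify(patterns, vigilance=0.6)
print(labels)   # similar patterns share a category; dissimilar ones trigger new categories
```

Raising the vigilance forces finer, more numerous categories; lowering it yields coarser, more stable ones, which is one face of the plasticity/stability trade-off mentioned above.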
The work shows in detail how and why multiple processing stages are needed before the brain can construct a complete and stable enough representation of the information in the world with which to predict environmental challenges and thus control effective behaviours. Complementary computing and hierarchical resolution of uncertainty overcome these problems until perceptual representations that are sufficiently complete, context-sensitive, and stable can be formed. The brain regions where these representations are completed are different for seeing, hearing, feeling, and knowing.
His proposed answer, in his own words:
My proposed answer is: A resonant state is generated that selectively “lights up” these representations and thereby renders them conscious. These conscious representations can then be used to trigger effective behaviors. Consciousness hereby enables our brains to prevent the noisy and ambiguous information that is computed at earlier processing stages from triggering actions that could lead to disastrous consequences. Conscious states thus provide an extra degree of freedom whereby the brain ensures that its interactions with the environment, whether external or internal, are as effective as possible, given the information at hand.
He addresses the hard problem of consciousness in its varying aspects:
As Chalmers (1995) has noted: “The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. ... Even after we have explained the functional, dynamical, and structural properties of the conscious mind, we can still meaningfully ask the question, Why is it conscious? There seems to be an unbridgeable explanatory gap between the physical world and consciousness. All these factors make the hard problem hard. … Philosophers vary passionately in their views between the claim that no Hard Problem remains once it is explained how the brain generates experience, as in the writings of Daniel Dennett, to the claim that it cannot in principle be solved by the scientific method, as in the writings of David Chalmers. See the above reference for a good summary of these opinions.
Grossberg demonstrates that, over and above information processing, our brains sometimes go into a context-sensitive resonant state that can involve multiple brain regions. He explores experimental evidence that “all conscious states are resonant states” but not vice versa, showing that, since not all brain dynamics are “resonant”, consciousness is not just a “whir of information-processing”:
When does a resonant state embody a conscious experience? “Why is it conscious”? And how do different resonant states support different kinds of conscious qualia? The other side of the coin is equally important: When does a resonant state fail to embody a conscious experience? Advanced brains have evolved in response to various evolutionary challenges in order to adapt to changing environments in real time. ART explains how consciousness enables such brains to better adapt to the world’s changing demands.
Grossberg is realistic about the limits on a scientific explanation of the hard problem:
It is important to ask: How far can any scientific theory go towards solving the Hard Problem? Let us suppose that a theory exists whose neural mechanisms interact to generate dynamical states with properties that mimic the parametric properties of the individual qualia that we consciously experience, notably the spatio-temporal patterning and dynamics of the resonant neural representations that represent these qualia. Suppose that these resonant dynamical states, in addition to mirroring properties of subjective reports of these qualia, predict properties of these experiences that are confirmed by psychological and noninvasive neurobiological experiments on humans, and are consistent with psychological, multiple-electrode neurophysiological data, and other types of neurobiological data that are collected from monkeys who experience the same stimulus conditions.
He then develops a strategy to move beyond the notion of the neural correlate of consciousness (Crick & Koch 1990), claiming these states are actually the physical manifestation of the conscious state:
Given such detailed correspondences with experienced qualia and multiple types of data, it can be argued that these dynamical resonant states are not just “neural correlates of consciousness” that various authors have also discussed, notably David Chalmers and Christof Koch and their colleagues. Rather, they are mechanistic representations of the qualia that embody individual conscious experiences on the psychological level. If such a correspondence between detailed brain representations and detailed properties of conscious qualia occurs for a sufficiently large body of psychological data, then it would provide strong evidence that these brain representations create and support these conscious experiences. A theory of this kind would have provided a linking hypothesis between brain dynamics and the conscious mind. Such a linking hypothesis between brain and mind must be demonstrated before one can claim to have a “theory of consciousness”.
However, he then delineates the claim that this is the most complete scientific account of subjective experience possible, while conceding that it may point to a cosmological problem akin to those in relativity and quantum theory:
If, despite such a linking hypothesis, a philosopher or scientist claims that, unless one can “see red” or “feel fear” in a theory of the Hard Problem, then it does not contribute to solving that problem, then no scientific theory can ever hope to solve the Hard Problem. This is true because science as we know it cannot do more than to provide a mechanistic theoretical description of the dynamical events that occur when individual conscious qualia are experienced. However, as such a principled, albeit incrementally developing, theory of consciousness becomes available, including increasingly detailed psychological, neurobiological, and even biochemical processes in its explanations, it can dramatically shift the focus of discussions about consciousness, just as relativity theory transformed discussions of space and time, and quantum theory of how matter works. As in quantum theory, there are measurement limitations in understanding our brains.
Although he conceives of brain dynamics as being poised just above the level of quantum effects in vision and hearing, Grossberg sees brains as a new frontier of scientific discovery subject to the same principles of complementarity and uncertainty as arise in quantum physics:
Complementarity and uncertainty principles also arise in physics, notably in quantum mechanics. Since brains form part of the physical world, and interact ceaselessly with it to adapt to environmental challenges, it is perhaps not surprising that brains also obey principles of complementarity and uncertainty. Indeed, each brain is a measurement device for recording and analyzing events in the physical world. In fact, the human brain can detect even small numbers of the photons that give rise to percepts of light, and is tuned just above the noise level of phonons that give rise to percepts of sound.
The Uncertainty Principle identified complementary variables, such as the position and momentum of a particle, that could not both be measured with perfect precision. In all of these theories, however, the measurer who was initiating and recording measurements remained outside the measurement process. When we try to understand the brain, this is no longer possible. The brain is the measurement device, and the process of understanding mind and brain is the study of how brains measure the world. The measurement process is hereby brought into physical theory to an unprecedented degree.
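For reference, the complementarity relation being appealed to here is Heisenberg's position–momentum uncertainty:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```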
Fig 83: Brain centres involved in intentional behaviour and subjectively conscious physical volition: (a) The cortex overlaying the basal ganglia, thalamus, amygdala and substantia nigra involved in planned action, motivation and volition. (b) The interactive circuits in the cortex, striatum and thalamus facilitating intentional motor behaviour. (c) The Motivator model clarifies how the basal ganglia and amygdala coordinate their complementary functions in the learning and performance of motivated acts. Brain areas can be divided into four regions that process information about conditioned stimuli (CSs) and unconditioned stimuli (USs): (a) Object Categories represent visual or gustatory inputs, in anterior inferotemporal (ITA) and rhinal (RHIN) cortices; (b) Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH); (c) Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex; and (d) the Reward Expectation Filter in the basal ganglia detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomes of the striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the SNc/VTA (substantia nigra pars compacta/ventral tegmental area). The network model connecting brain regions is consistent with both quantum and classical approaches and in no way eliminates subjective conscious volition from having an autonomous role. All it implies is that conscious volition arises from an evolved basis in these circuit relationships in mammals.
Grossberg sees brains as presenting new issues for science, since as measurement devices they confound the separation between the measured effect and the observer making a quantum measurement:
Since brains are also universal measurement devices, how do they differ from these more classical physical ideas? I believe that it is the brain’s ability to rapidly self-organize, through development and life-long learning, that sets it apart from previous physical theories. The brain thus represents a new frontier in measurement theory for the physical sciences, no less than the biological sciences. It remains to be seen how physical theories will develop to increasingly incorporate concepts about the self-organization of matter, and how these theories will be related to the special case of brain self-organization.
Experimental and theoretical evidence will be summarized in several chapters in support of the hypothesis that principles of complementarity and uncertainty that are realized within processing streams, better explain the brain’s functional organization than concepts about independent modules. Given this conclusion, we need to ask: If the brain and the physical world are both organized according to such principles, then in what way is the brain different from the types of physical theories that are already well-known? Why haven’t good theoretical physicists already “solved” the brain using known physical theories?
The brain’s universal measurement process can be expected to have a comparable impact on future science, once its implications are more broadly understood. Brain dynamics operate, however, above the quantum level, although they do so with remarkable efficiency, responding to just a few photons of light in the dark, and to faint sounds whose amplitude is just above the level of thermal noise in otherwise quiet spaces. Knowing more about how this exquisite tuning arose during evolution could provide important new information about the design of perceptual systems, no less than about how quantum processes interface with processes whose main interactions seem to be macroscopic.
In discussing the hierarchical feedback of the cortex and basal ganglia and the limbic system, Grossberg (2015) fluently cites both consciousness and volition as adaptive features of the brain as a self-organising system:
The basal ganglia control the gating of all phasic movements, including both eye movements and arm movements. Arm movements, unlike eye movements, can be made at variable speeds that are under volitional basal ganglia control. Arm movements realize the Three S’s of Movement Control; namely, Synergy, Synchrony, and Speed. … Many other brain processes can also be gated by the basal ganglia, whether automatically or through conscious volition. Several of these gating processes seem to regulate whether a top-down process subliminally primes or fully activates its target cells. As noted in Section 5.1, the ART Matching Rule enables the brain to dynamically stabilize learned memories using top-down attentional matching.
Such a volitionally-mediated shift enables top-down expectations, even in the absence of supportive bottom-up inputs, to cause conscious experiences of imagery and inner speech, and thereby to enable visual imagery, thinking, and planning activities to occur. Thus, the ability of volitional signals to convert the modulatory top-down priming signals into suprathreshold activations provides a great evolutionary advantage to those who possess it.
Such neurosystem models provide key insights into how processes associated with intentional acts and the reinforcement of sensory experiences through complementary adaptive networks, model the neural correlate of conscious volitional acts and their smooth motor execution in the world at large. As they stand, these are still classical objective models that do not actually invoke conscious volition as experienced, but they do provide deep insight into the brain’s adaptive processes accompanying subjective conscious volition.
My critique, which is clear and simple, is that these designs remove such a high proportion of the key physical principles involved in biological brain function that they can have no hope of modelling subjective consciousness or volition, despite the liberal use of these terms in the network designs, such as the basal ganglia as gateways. Any pure abstract neural net model, however much it adapts to “resonate” with biological systems, is missing major fundamental formative physical principles of how brains actually work.
These include:
(A) The fact that biological neural networks are both biochemical and electrochemical in two ways (1) all electrochemical linkages, apart from gap junctions, work through the mediation of biochemical neurotransmitters and (2) the internal dynamics of individual neurons and glia are biochemical, not electrochemical.
(B) The fact that the electrochemical signals are dynamic and involve sophisticated properties including both (1) unstable dynamics at the edge of chaos and (2) phase coherence tuning between continuous potential gradients and action potentials.
(C) They involve both neurons and neuroglia working in complementary relationship.
(D) They involve developmental processes of cell migration determining the global architecture of the brain including both differentiation by the influence of neurotransmitter type and chaotic excitation in early development.
(E) The fact that the evolution of biological brains as neural networks is built on the excitatory neurotransmitter-driven social signalling and quantum sentience of single-celled eucaryotes – an intimately coupled society of amoebo-flagellate cells communicating by the same neurotransmitters – so that these underlying dynamics are fundamental and essential to biological neural net functionality.
Everything from simple molecules such as ATP acting as the energy currency of the cell, through protein folding, to enzymes, involves quantum effects, such as tunnelling at active sites, and ion channels operate at the same quantum level.
It is only a step from there to recognising that such biological processes are actually fractal, non-IID (not independently and identically distributed) quantum processes that do not converge to the classical, in the light of Gallego & Dakić (2021), because their defining contexts are continually evolving. This provides a causally open view of brain dynamics, in which the extra degree of freedom provided by consciousness, complementing objective physical computation, arises partly through quantum uncertainty itself, with conscious volition becoming subjectively manifest and ensuring survival under uncertain environmental threats.
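To illustrate point (B) above in the simplest terms, the sketch below uses the FitzHugh–Nagumo equations, a standard excitable-membrane caricature offered here only as a hedged illustration of threshold instability, not as a model of the specific dynamics being discussed: with a weak drive the continuous membrane-like variable settles to a graded level, while a stronger drive tips the very same equations into repetitive all-or-nothing spiking:

```python
import numpy as np

def fitzhugh_nagumo(I_ext, T=400.0, dt=0.01, a=0.7, b=0.8, tau=12.5):
    """Integrate the FitzHugh-Nagumo equations for a constant drive I_ext."""
    n = int(T / dt)
    v, w = -1.2, -0.62            # approximate resting state of the undriven system
    vs = np.empty(n)
    for i in range(n):
        dv = v - v**3 / 3.0 - w + I_ext     # fast, continuously graded "membrane" variable
        dw = (v + a - b * w) / tau          # slow recovery variable
        v += dt * dv
        w += dt * dw
        vs[i] = v
    return vs

# Weak drive: the continuous dynamics settle to a graded level with no sustained
# spiking; stronger drive: the same equations fire repetitive all-or-nothing spikes.
for I in (0.1, 0.5):
    trace = fitzhugh_nagumo(I)
    late = trace[len(trace) // 2:]                          # ignore the initial transient
    spikes = int((np.diff((late > 1.0).astype(int)) == 1).sum())
    print(f"I = {I}: late peak v = {late.max():.2f}, sustained spikes = {spikes}")
```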
However, this is not just a rational or mechanistically causal process. We evolved from generation upon generation of organisms surviving existential threats in the wild, which were predominantly solved by lightning fast hunch and intuition, and never by rational thought alone, except recently and all too briefly in our cultural epoch.
The great existential crises have always been about surviving environmental threats which are not only computationally intractable due to exponentiating degrees of freedom, but computationally insoluble because they involve the interaction of live volitional agents, each consciously violating the rules of the game.
Conscious volition evolved to enable subjective living agents to make hunch-like predictions of their own survival in contexts where no algorithmic or deterministic process, including the nascent parallelism of the cortex, limbic system and basal ganglia that Steve Grossberg has drawn attention to, could suffice, other than to define boundary conditions on conscious choices of volitional action. Conscious intentional will, given these constraints, remained the critical factor, complementing computational predictivity generated through non-linear dynamics, best predicting survival of a living organism in the quantum universe, which is why we still possess it.
When we come to the enigma of subjective conscious anticipation and volition under survival threats, these are clearly, at the physiological level, the most ancient and most strongly conserved. Although the brains of vertebrates, arthropods and cephalopods show vast network differences, the underlying processes generating consciousness remain strongly conserved to the extent that baby spiders display clear REM features during sleep despite having no obvious neural net correspondence. While graded membrane excitation is universal to all eucaryotes and shared by human phagocytes and amoeba, including the genes for the poisons used to kill bacteria, the action potential appears to have evolved only in flagellate eucaryotes, as part of the flagellar escape response to existential threat, later exemplified by the group flagellation of our choano-flagellate ancestor colonies.
All brains are thus intimate societies of dynamically-coupled excitable cells (neurons and glia) communicating through these same molecular social signalling pathways that social single celled eucaryotes use. Both strategic intelligence and conscious volition as edge-of-chaos membrane excitation in global feedback thus arose long before brains and network designs emerged.
Just as circuit design models can have predictive value, so does subjective conscious volition of the excitable eucaryote cell have clear survival value in evolution and hence predictive power of survival under existential threat, both in terms of arbitrary sensitivity to external stimuli at the quantum level and neurotransmitter generated social decision-making of the collective organism. Thus the basis of what we conceive of as subjective conscious volition is much more ancient and longer and more strongly conserved than any individual network model of the vertebrate brain and underlies all attempts to form realistic network models.
Since our cultural emergence, Homo sapiens has been locked in a state of competitive survival against its own individuals, via Machiavellian intelligence, but broadly speaking, rationality – dependence on rational thought processes as a basis for adaptation – just brings us closer to the machine learning of robots, rather than conscious volition. Steve’s representation of the mechanical aspects in the basal ganglia in Grossberg (2015) gives a good account of how living neurosystems adaptively evolve to make the mechanical aspect of the neural correlate of conscious volition possible, but it says little about how we actually survive the tiger’s pounce, let alone the ultimate subtleties of human political intrigue, when the computational factors are ambiguous. Likewise decision theory or prospect theory, as noted in Wikipedia, tells us only a relatively obvious asymmetric sigmoidal function describing how risk aversion helps us survive, essentially because being eaten rates more decisively in the cost stakes than any single square meal as a benefit.
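For reference, the asymmetric sigmoidal value function alluded to is, in Kahneman and Tversky's standard parameterisation (the exponents and loss-aversion coefficient are their commonly cited 1992 estimates, not values specific to this argument):

```latex
v(x) =
\begin{cases}
  x^{\alpha} & x \ge 0 \\
  -\lambda\,(-x)^{\beta} & x < 0
\end{cases}
\qquad \alpha \approx \beta \approx 0.88,\quad \lambda \approx 2.25,
```

so that with λ > 1 a loss (being eaten) weighs more heavily than an equal-sized gain (a meal), which is exactly the asymmetry noted above.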
Because proving physical causal closure of the universe in the context of brain dynamics is impossible to practically achieve in the quantum universe, physical materialism is itself not a scientific concept, so all attempts to model and understand conscious volition remain open and will continue to do so. The hard problem of consciousness is not a division between science and philosophy as Steve suggests in his (2021) book, but our very oracle of cosmological existence.
Epiphenomenalism, Conscious Volition and Free Will
Thomas Kuhn (1922–1996) is perhaps the most influential philosopher of science of the twentieth century. His book “The Structure of Scientific Revolutions” (Kuhn 1962) is one of the most cited academic books of all time. A particularly important part of Kuhn’s thesis focuses upon the consensus on exemplary instances of scientific research. These exemplars of good science are what Kuhn refers to when he uses the term ‘paradigm’ in a narrower sense. He cites Aristotle’s analysis of motion, Ptolemy’s computations of planetary positions, Lavoisier’s application of the balance, and Maxwell’s mathematization of the electromagnetic field as paradigms (ibid, 23). According to Kuhn the development of a science is not uniform but has alternating ‘normal’ and ‘revolutionary’ (or ‘extraordinary’) phases in which paradigm shifts occur.
Rejecting a teleological view of science progressing towards the truth, Kuhn favours an evolutionary view of scientific progress (1962/1970a, 170–3). The evolutionary development of an organism might be seen as its response to a challenge set by its environment. But that does not imply that there is some ideal form of the organism that it is evolving towards. Analogously, science improves by allowing its theories to evolve in response to puzzles, and progress is measured by its success in solving those puzzles; it is not measured by its progress towards an ideal true theory. While evolution does not lead towards ideal organisms, it does lead to greater diversity of kinds of organism. This is the basis of a Kuhnian account of specialisation in science, in which the revolutionary new theory that succeeds in replacing another that is subject to crisis may fail to satisfy all the needs of those working with the earlier theory. One response to this might be for the field to develop two theories, with domains restricted relative to the original theory (one might be the old theory or a version of it).
Free will is the notion that we can make real choices which are partially or completely independent of antecedent conditions – "the power of acting without the constraint of necessity or fate; the ability to act at one's own discretion", in the context of the given circumstances. Determinism denies this and maintains that causation is operative in all human affairs. Increasingly, despite the discovery of quantum uncertainty, scientists argue that their discoveries challenge the existence of free will. Studies indicate that informing people about such discoveries can change the degree to which they believe in free will and subtly alter their behaviour, leading to a social erosion of human agency, personal and ethical responsibility.
Philosophical analysis of free will divides into two opposing responses. Incompatibilists claim that free will and determinism cannot coexist. Among incompatibilists, metaphysical libertarians, who number among them Descartes, Bishop Berkeley and Kant, argue that humans have free will, and hence deny the truth of determinism. Libertarianism holds onto a concept of free will that requires the agent to be able to take more than one possible course of action under a given set of circumstances, some arguing that indeterminism helps secure free will, others arguing that free will requires a special causal power, agent-causation. By contrast, compatibilists argue that free and responsible agency requires the capacities involved in self-reflection and practical deliberation; free will is the ability to make choices based on reasons, along with the opportunity to exercise this ability without undue constraints (Nadelhoffer et al. 2014). This can make rational acts or decisions compatible with determinism.
Our concern here is thus not with responsible agency, which may or may not be compatible with determinism, but with affirming the existence of agency not causally determined by physical processes in the brain. Epiphenomenalists accept that subjective consciousness exists, as an internal model of reality constructed by the brain to give a global description of the coherent brain processes involved in perception, attention and cognition, but deny the volitional will over our actions that is central to both reasoned and creative physical actions. This invokes a serious doubt that materialistic neuroscience can be in any way consistent with any form of consciously conceived ethics, because moral or ethical reasoning is reduced to forms of aversive conditioning, consistent with behaviourism and Pavlov’s dogs, subjectively rationalised by the subject as a reason. This places volition as a delusion driven by evolutionary compensation to mask the futility of any subjective belief in organismic agency over the world.
Defending subjective volitional agency thus depends centrally on the innovative ability of the subjective conscious agent to generate actions which lie outside the constraints of determined antecedents, placing a key emphasis on creativity and idiosyncrasy, amid physical uncertainty, rather than cognitive rationality, as reasons are themselves subject to antecedents.
Bob Doyle notes that in the first two-stage model of free-will, William James (1884) proposed that indeterminism is the source for what James calls "alternative possibilities" and "ambiguous futures." The chance generation of such alternative possibilities for action does not in any way limit his choice to one of them. For James chance is not the direct cause of actions. James makes it clear that it is his choice that “grants consent” to one of them. In 1884, James asked some Harvard Divinity School students to consider his choice for walking home after his talk:
What is meant by saying that my choice of which way to walk home after the lecture is ambiguous and matter of chance?...It means that both Divinity Avenue and Oxford Street are called but only one, and that one either one, shall be chosen.
James was thus the first thinker to enunciate clearly a two-stage decision process, with chance in a present time of random alternatives, leading to a choice which grants consent to one possibility and transforms an equivocal ambiguous future into an unalterable and simple past. There is a temporal sequence of undetermined alternative possibilities followed by an adequately determined choice where chance is no longer a factor. James also asked the students to imagine his actions repeated in exactly the same circumstances, a condition which is regarded today as one of the great challenges to libertarian free will. James anticipates much of modern physical theories of multiple universes:
Imagine that I first walk through Divinity Avenue, and then imagine that the powers governing the universe annihilate ten minutes of time with all that it contained, and set me back at the door of this hall just as I was before the choice was made. Imagine then that, everything else being the same, I now make a different choice and traverse Oxford Street. You, as passive spectators, look on and see the two alternative universes,--one of them with me walking through Divinity Avenue in it, the other with the same me walking through Oxford Street. Now, if you are determinists you believe one of these universes to have been from eternity impossible: you believe it to have been impossible because of the intrinsic irrationality or accidentality somewhere involved in it. But looking outwardly at these universes, can you say which is the impossible and accidental one, and which the rational and necessary one? I doubt if the most ironclad determinist among you could have the slightest glimmer of light on this point.
Henri Poincaré speculated on how his mind worked when solving mathematical problems. He had the critical insight that random combinations and possibilities are generated, some of them unconsciously; they are then selected among, perhaps initially also by an unconscious process, but finally by a definite conscious process of validation:
It is certain that the combinations which present themselves to the mind in a kind of sudden illumination after a somewhat prolonged period of unconscious work are generally useful and fruitful combinations… all the combinations are formed as a result of the automatic action of the subliminal ego, but those only which are interesting find their way into the field of consciousness… A few only are harmonious, and consequently at once useful and beautiful, and they will be capable of affecting the geometrician's special sensibility I have been speaking of; which, once aroused, will direct our attention upon them, and will thus give them the opportunity of becoming conscious… In the subliminal ego, on the contrary, there reigns what I would call liberty, if one could give this name to the mere absence of discipline and to disorder born of chance.
Even reductionist Daniel Dennett, who is a compatibilist, has his own version of decision-making:
The model of decision making I am proposing has the following feature: when we are faced with an important decision, a consideration-generator whose output is to some degree undetermined produces a series of considerations, some of which may of course be immediately rejected as irrelevant by the agent (consciously or unconsciously). Those considerations that are selected by the agent as having a more than negligible bearing on the decision then figure in a reasoning process, and if the agent is in the main reasonable, those considerations ultimately serve as predictors and explicators of the agent's final decision.
In his own two-stage model, Arthur Compton championed the idea of human freedom based on quantum uncertainty and invented the notion of amplification of microscopic quantum events to bring chance into the macroscopic world. Years later, he clarified the two-stage nature of his idea in an Atlantic Monthly article in 1955:
A set of known physical conditions is not adequate to specify precisely what a forthcoming event will be. These conditions, insofar as they can be known, define instead a range of possible events from among which some particular event will occur. When one exercises freedom, by his act of choice he is himself adding a factor not supplied by the physical conditions and is thus himself determining what will occur. That he does so is known only to the person himself. From the outside one can see in his act only the working of physical law. It is the inner knowledge that he is in fact doing what he intends to do that tells the actor himself that he is free.
At first Karl Popper dismissed quantum mechanics as being no help with free will, but he later described a two-stage model paralleling Darwinian evolution, with genetic mutations being probabilistic and involving quantum uncertainty.
In 1977 he gave the first Darwin Lecture, "Natural Selection and the Emergence of Mind". In it he said he had changed his mind (a rare admission by a philosopher) about two things. First, he now thought that natural selection was not a "tautology" that made it an unfalsifiable theory. Second, he had come to accept the random variation and selection of ideas as a model of free will: The selection of a kind of behavior out of a randomly offered repertoire may be an act of choice, even an act of free will. I am an indeterminist; and in discussing indeterminism I have often regretfully pointed out that quantum indeterminacy does not seem to help us; for the amplification of something like, say, radioactive disintegration processes would not lead to human action or even animal action, but only to random movements. I have changed my mind on this issue. A choice process may be a selection process, and the selection may be from some repertoire of random events, without being random in its turn. This seems to me to offer a promising solution to one of our most vexing problems, and one by downward causation. This is now the leading two-stage model of free will.
These accounts span diverse thinkers, from James, through Dennett, to Compton who applied quantum uncertainty, so whether you are a materialist or a mentalist you can adapt two-stage volition to your taste. It therefore says nothing by itself about the nature of conscious decision-making, or the hard problem of volition. The key is that (1) something generates a set of possibilities, either randomly or otherwise, and (2) the mind/brain chooses one to enact, computationally, rationally or intuitively. Computationalists can say (1) is random and (2) is computational. Quantum mechanics provides for both: (1) is the indeterminacy of collapse in von Neumann process 1 and (2) is the deterministic dynamics of the Schrödinger equation, aka von Neumann process 2.
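A minimal sketch of this generic two-stage scheme (the function names, candidate routes and scoring rule are purely illustrative, not a reconstruction of James, Dennett, Compton or Popper): stage (1) lets chance propose alternative possibilities, stage (2) applies an adequately determined evaluation that "grants consent" to one of them:

```python
import random

def two_stage_decision(generate, evaluate, n_alternatives=5, seed=None):
    """Generic two-stage decision scheme.

    Stage 1: chance proposes alternative possibilities ("ambiguous futures").
    Stage 2: a determined evaluation grants consent to exactly one of them.
    """
    rng = random.Random(seed)
    alternatives = [generate(rng) for _ in range(n_alternatives)]   # stage 1: undetermined
    return max(alternatives, key=evaluate)                          # stage 2: determined choice

# Illustrative use: candidate routes home, chosen by a fixed (deterministic) preference.
routes = ["Divinity Avenue", "Oxford Street", "Quincy Street", "Kirkland Street"]
choice = two_stage_decision(
    generate=lambda rng: rng.choice(routes),
    evaluate=lambda route: -len(route),   # any fixed criterion will do for the sketch
    seed=42,
)
print("chosen:", choice)
```

Swapping the evaluation function for a stochastic or intuitive one changes stage (2) without touching stage (1), which is why, as noted above, the scheme by itself is neutral between materialist and mentalist readings.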
Symbiotic Existential Cosmology affirms two empirical modes – objective verified empirical observation and subjective affirmed empirical experience, both of which are amenable to the same statistical methods. This ties to the conclusion that subjective conscious volition has efficacy over the physical universe, and to the refutation of pure physicalism, because causal closure of the physical universe is unprovable, while the empirical experience of our subjectively conscious actions towards our own physical survival clearly affirms that we have voluntary conscious volition with physical effect.
Benjamin Libet has become notorious for his readiness-potential experiments, taken to suggest that consciousness has no physical effect, but his statement on free will precisely echoes Symbiotic Existential Cosmology, with exactly the same ethical emphasis:
Given the speculative nature of both determinist and non-determinist theories, why not adopt the view that we do have free will (until some real contradictory evidence may appear, if it ever does). Such a view would at least allow us to proceed in a way that accepts and accommodates our own deep feeling that we do have free will. We would not need to view ourselves as machines that act in a manner completely controlled by the known physical laws.
In Symbiotic Existential Cosmology the transactional interpretation is envisaged as allowing a form of prescience, because the collapse carries implicit information about the future state of the universe in which the absorbers exist. This may appear logically paradoxical, but no classical information is transferred, so there is no inconsistency. In the model the collapse appears to happen outside space-time, but it is actually instantaneous, so dual-time is just a core part of the heuristic for understanding the non-linear process.
It is absolutely necessary for subjective conscious physical volition to be efficacious over mere computation, or it fails to confer an evolutionary advantage and would be eliminated over time by neutral and deleterious mutations in favour of purely computational brains. The fact that this hasn't happened in the 2 bYa since the eucaryote emergence tells us it DOES have an advantage in terms of intuitive anticipation, shared by all animals, which, unlike us, lack rational thought, and by single-celled eucaryotes, which have nothing more than social neurotransmitters and excitable membranes to do the same uncanny trick. Therefore we have to look to physics and the nature of uncertainty to solve this, because environmental uncertainty has its root in quantum uncertainty, just as throwing a die does by setting off a butterfly-effect process.
This evolutionary advantage depends on a transformation of Doyle's (1): transactional collapse becomes a form of non-random hidden-variable theory in which non-local correlations of the universal wave function manifest as a complex system during collapse, in a way that looks deceptively like randomness because it is a complex, chaotic, ergodic process. It then completely transforms part (1) of the two-process model of volition, because the intuitive choices are anticipatory, like integral transforms of the future which we can't put into a logical causality without paradox, but which can coexist before collapse occurs.
There is thus a clear biological requirement for subjective conscious physical volition, and that is to ensure survival of existential threats in the wild. We can imagine a computer attempting the two-process approach, throwing up heuristic options on a weighted probabilistic basis (process 1) and then optimising in a choice process (process 2). We can imagine that this is also, in a sense, what we do when we approach a problem rationally. But that's not what survival in the wild is about. It's about computationally intractable environmental many-body problems that also involve other conscious agents – snakes, tigers and other humans – and so are formally and computationally undecidable. Hence the role of intuition.
The transactional interpretation, as in fig 73, becomes the key to avoiding the mechanistic, pseudo-deterministic random (1) plus computational (2) process of two-process decision-making, and that is why we are able to exist and evolve as conscious, anticipating, sentient beings. You can imagine that an advanced AI package like ChatGPT can get to the water hole, but there is no evidence this is possible if it is covered in smelly food attractants, with unpredictable predators on the prowl. There is not even any good evidence that rational cognition can save our bacon. It all comes down to salience, sensory acuity, paranoia and intuition.
One may think one can depend on randomness alone to provide hypothetical heuristics and avoid getting "stuck in a rut", as a Hopfield network does by thermodynamic annealing (which is also key to why the brain uses edge-of-chaos instability), but such randomness is arbitrary and artificial. A computer uses the time and date to seed a non-random, highly ergodic process to simulate randomness. All molecular billiards arises from a wave-particle process of spreading wave functions involving quantum uncertainty, just as photons do, and the same applies to decoherence models of collapse.
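To make the clock-seeding point concrete, here is a minimal sketch in Python of how a machine manufactures "randomness"; the constants are those of a standard linear congruential generator, offered only as an illustration rather than any particular machine's implementation:

import time

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # A linear congruential generator: a fully deterministic recurrence
    # x -> (a*x + c) mod m that merely looks random; the same seed
    # always reproduces exactly the same "random" sequence.
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(seed=int(time.time()))   # seeded from the clock, as described above
print([round(next(gen), 3) for _ in range(5)])

Such a sequence passes statistical tests for randomness, yet nothing in it is undetermined: change the seed and the whole "random" future changes with it.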
This is the ultimate flaw in relying on Doyle's two-process approach, but escaping it comes at the cost of a speculative leap about what is happening in von Neumann process 1. Quantum transactional collapse can occur instantaneously across space-time, in a manner which may well be rationally contradictory about what time is, but which is perfectly consistent with conscious intuition. If the universe is in a dynamical state between a multiverse and collapse to classicality, and conscious organisms, among other entities, participate in collapse, we have a link between surviving environmental uncertainty and quantum indeterminacy. If this is just randomness, no anticipatory advantage results, but if it is part of a delocalised complex-system hidden-variable theory, it can.
Any attempt to think about it in a causal sequence, or even to reason it out rationally to unravel intuition, would lead to paradox, so rational thought can't capture it, though intuition does reveal it. But it is not something we can prove with high-sigma causality statistics, because to do that we have to invoke an IID process (an independent, identically-distributed set of measurements), which sends the whole process down the drain of the Born probability interpretation to randomness. The biological reality, in ever-changing brain states, is that each step changes the measurement context – a non-IID process – so it amounts to Schrödinger turtles all the way down.
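The statistical point can be illustrated with a deliberately crude toy, treating each "measurement" as a biased coin flip with a made-up probability (an assumption for illustration, not a model of brain states): when the probability is fixed (IID), successive windows of trials agree to within sampling error and high-sigma tests are meaningful; when the underlying probability drifts with the context (non-IID), there is no single frequency to test against.

import random

def window_freqs(prob_fn, windows=10, size=1000):
    # Frequency of successes in successive windows of trials,
    # where trial k succeeds with probability prob_fn(k).
    freqs = []
    for w in range(windows):
        hits = sum(random.random() < prob_fn(w * size + i) for i in range(size))
        freqs.append(hits / size)
    return freqs

print(window_freqs(lambda k: 0.3))                             # IID: windows agree within sampling error
print(window_freqs(lambda k: 0.3 + 0.4 * ((k // 1000) % 2)))   # non-IID: the underlying probability drifts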
I am prepared to make this quantum leap into retro-causal, special-relativistic transactions because it is consistent with quantum mechanics, and because it urgently needs to be stated and explored, more than anything else, since it holds the key to why we are here as conscious sentient beings in this universe, in which life rises to climax conscious complexity.
René Descartes (1596 – 1650) was a philosopher, scientist, and mathematician, who remains a seminal figure in the emergence of modern philosophy and science. Descartes defines "thought" (cogitatio) as "what happens in me such that I am immediately conscious of it, insofar as I am conscious of it". Thinking is thus every activity of a person of which the person is immediately conscious. He gave reasons for thinking that waking thoughts are distinguishable from dreams, and that one's mind cannot have been "hijacked" by an evil demon placing an illusory external world before one's senses.
Humans are a union of mind and body; thus Descartes' dualism embraced the idea that mind and body are distinct but closely joined. While many contemporary readers of Descartes found the distinction between mind and body difficult to grasp, he thought it was entirely straightforward. Descartes employed the concept of modes, which are the ways in which substances exist. In Principles of Philosophy, Descartes explained, "we can clearly perceive a substance apart from the mode which we say differs from it, whereas we cannot, conversely, understand the mode apart from the substance".
According to Descartes, two substances are really distinct when each of them can exist apart from the other. Thus, Descartes reasoned that God is distinct from humans, and the body and mind of a human are also distinct from one another. He argued that the great differences between body (an extended thing) and mind (an un-extended, immaterial thing) make the two ontologically distinct. According to Descartes' indivisibility argument, the mind is utterly indivisible: because "when I consider the mind, or myself in so far as I am merely a thinking thing, I am unable to distinguish any part within myself; I understand myself to be something quite single and complete."
Moreover, in The Meditations, Descartes discusses a piece of wax and exposes the single most characteristic doctrine of Cartesian dualism: that the universe contained two radically different kinds of substances—the mind or soul defined as thinking, and the body defined as matter and unthinking.
The Aristotelian philosophy of Descartes' day held that the universe was inherently purposeful or teleological. Descartes' theory of dualism supports the distinction between traditional Aristotelian science and the new science of Kepler and Galileo, which denied the role of a divine power and "final causes" in its attempts to explain nature. Descartes' dualism provided the philosophical rationale for the latter by expelling the final cause from the physical universe (or res extensa) in favor of the mind (or res cogitans). Therefore, while Cartesian dualism paved the way for modern physics, it also held the door open for religious beliefs about the immortality of the soul.
Descartes' dualism of mind and matter implied a concept of human beings. A human was, according to Descartes, a composite entity of mind and body. Descartes gave priority to the mind and argued that the mind could exist without the body, but the body could not exist without the mind. In The Meditations, Descartes even argues that while the mind is a substance, the body is composed only of "accidents".
But he did argue that mind and body are closely joined: "Nature also teaches me, by the sensations of pain, hunger, thirst and so on, that I am not merely present in my body as a pilot in his ship, but that I am very closely joined and, as it were, intermingled with it, so that I and the body form a unit. If this were not so, I, who am nothing but a thinking thing, would not feel pain when the body was hurt, but would perceive the damage purely by the intellect, just as a sailor perceives by sight if anything in his ship is broken."
Descartes originally claimed that consciousness requires an immaterial soul, which interacts with the body via the pineal gland of the brain. Gert-Jan Lokhorst (2021) describes the details of how Descartes conceived the action of the pineal on a mechanical body, noting its antiquated basis:
The pineal gland played an important role in Descartes’ account because it was involved in sensation, imagination, memory and the causation of bodily movements. Unfortunately, however, some of Descartes’ basic anatomical and physiological assumptions were totally mistaken, not only by our standards, but also in light of what was already known in his time. … The bodies of Descartes’ hypothetical men are nothing but machines: “I suppose the body to be nothing but a statue or machine made of earth, which God forms with the explicit intention of making it as much as possible like us”. The working of these bodies can be explained in purely mechanical terms.
Fig 88: Diagram from Descartes' Treatise of Man (1664), showing the formation of inverted retinal images in the eyes, and the transmission of these images, via the nerves so as to form a single, re-inverted image (an idea) on the surface of the pineal gland.
As a young man, Descartes had a mystical experience in a sauna on the Danube: three dreams, which he interpreted as a message telling him to come up with a theory of everything. On the strength of this he dedicated his life to philosophy, leading to his iconic quote – Cogito ergo sum, "I think therefore I am" – and to Cartesian dualism, immortalised in the homunculus. This means that, in a sense, the Cartesian heritage of dualism is a genuine visionary attempt on Descartes' part to come to terms with his own conscious experience in terms of his cognition, in distinction from the world around him. Once the separation invoked by the term dualism is replaced by complementarity, we arrive at Darwinian panpsychism.
Experior, ergo sum, experimur, ergo sumus.
I experience therefore I am, we experience therefore we are!
Key to Descartes' interpretation, the active processes that correspond to neuronal action potentials are "animal spirits" running in the nerves from the sense organs to the pineal and back out to the muscles:
In Descartes’ description of the role of the pineal gland, the pattern in which the animal spirits flow from the pineal gland was the crucial notion. He explained perception as follows. The nerves are hollow tubes filled with animal spirits. They also contain certain small fibers or threads which stretch from one end to the other. These fibers connect the sense organs with certain small valves in the walls of the ventricles of the brain. When the sensory organs are stimulated, parts of them are set in motion. These parts then begin to pull on the small fibers in the nerves, with the result that the valves with which these fibers are connected are pulled open, some of the animal spirits in the pressurized ventricles of the brain escape, and (because nature abhors a vacuum) a low-pressure image of the sensory stimulus appears on the surface of the pineal gland. It is this image which then “causes sensory perception” of whiteness, tickling, pain, and so on. “It is not [the figures] imprinted on the external sense organs, or on the internal surface of the brain, which should be taken to be ideas—but only those which are traced in the spirits on the surface of the gland H (where the seat of the imagination and the ‘common’ sense is located).”
This account is an attempt to explain in one model both subjective consciousness and volition over the world:
Finally, Descartes presented an account of the origin of bodily movements. He thought that there are two types of bodily movement. First, there are movements which are caused by movements of the pineal gland. The pineal gland may be moved in three ways: (1) by “the force of the soul,” provided that there is a soul in the machine; (2) by the spirits randomly swirling about in the ventricles; and (3) as a result of stimulation of the sense organs. The role of the pineal gland is similar in all three cases: as a result of its movement, it may come close to some of the valves in the walls of the ventricles. The spirits which continuously flow from it may then push these valves open, with the result that some of the animal spirits in the pressurized ventricles can escape through these valves, flow to the muscles by means of the hollow, spirit-filled nerves, open or close certain valves in the muscles which control the tension in those muscles, and thus bring about contraction or relaxation of the muscles.
It also embraces higher functioning including imagination:
Imagination arises in the same way as perception, except that it is not caused by external objects. Continuing the just-quoted passage, Descartes wrote: “And note that I say ‘imagines or perceives by the senses’. For I wish to apply the term ‘idea’ generally to all the impressions which the spirits can receive as they leave gland H. These are to be attributed to the ‘common’ sense when they depend on the presence of objects; but they may also proceed from many other causes (as I shall explain later), and they should then be attributed to the imagination”
Westphal (2016) notes: According to Descartes, matter is essentially spatial, and it has the characteristic properties of linear dimensionality. Things in space have a position, at least, and a height, a depth, and a length, or one or more of these. Mental entities, on the other hand, do not have these characteristics. We cannot say that a mind is a two-by-two-by-two-inch cube or a sphere with a two-inch radius, for example, located in a position in space inside the skull. This is not because it has some other shape in space, but because it is not characterized by space at all.
The whole problem contained in such questions arises simply from a supposition that is false and cannot in any way be proved, namely that, if the soul and the body are two substances whose nature is different, this prevents them from being able to act on each other – René Descartes
Descartes is surely right about this. The “nature” of a baked Alaska pudding, for instance, is very different from that of a human being, since one is a pudding and the other is a human being — but the two can “act on each other” without difficulty, for example when the human being consumes the baked Alaska pudding and the baked Alaska in return gives the human being a stomachache.
In a letter dated May 1643, Princess Elisabeth wrote to Descartes: I beg you to tell me how the human soul can determine the movement of the animal spirits in the body so as to perform voluntary acts — being as it is merely a conscious substance. For t