Symbiotic Existential

Cosmology


Chris King

CC BY-NC-ND 4.0  doi:10.13140/RG.2.2.32891.23846
Part 2 Conscious Cosmos
Update 5-8-2021 9-2023

 

dhushara.com

 

Contents Summary - Contents in Full

 

Dedication

The Core

Symbiotic Existential Cosmology:

            Scientific Overview, Discovery and Philosophy

Biocrisis, Resplendence and Planetary Reflowering

Psychedelics in the Brain and Mind, Therapy and Quantum Change, The Devil's Keyboard

Fractal, Panpsychic and Symbiotic Cosmologies, Cosmological Symbiosis

Quantum Reality and the Conscious Brain

The Cosmological Problem of Consciousness in the Quantum Universe

The Physical Viewpoint, The Neuroscience Perspective

The Evolutionary Landscape of Symbiotic Existential Cosmology

Evolutionary Origins of Conscious Experience

Science, Religion and Gene Culture Co-evolution

Animistic, Eastern and Western Traditions and Entheogenic Use

Natty Dread and Planetary Redemption

Yeshua’s Tragic Mission, Revelation and Cosmic Annihilation

Ecocrisis, Sexual Reunion and the Entheogenic Traditions

Song cycle, Video

Communique to the World To save the diversity of life from mass extinction

The Vision Quest to Discover Symbiotic Existential Cosmology

The Evolution of Symbiotic Existential Cosmology

Resplendence

A Moksha Epiphany

   Epilogue   

            References

  Appendix: Primal Foundations of Subjectivity, Varieties of Panpsychic Philosophy

 

 

Consciousness is eternal, life is immortal.

Incarnate existence is Paradise on the Cosmic equator

in space-time the living consummation of all worlds.

But mortally coiled! As transient as the winds of fate!

 

 

 

Symbiotic Existential Cosmology – Contents in Full

 

Dedication

The Core

A Scientific Overview

Biogenic

Panpsychic

Symbiotic

Discovery and Philosophy

The Existential Condition and the Physical Universe

Turning Copernicus Inside Out

Discovering Life, the Universe and Everything

The Central Enigma: What IS the Conscious Mind?, Glossary

Biocrisis and Resplendence: Planetary Reflowering

The Full Scope: Climate Crisis, Mass Extinction, Population and Nuclear Holocaust

Psychedelics in the Brain and Mind

Therapy and Quantum Change: The Results from Scientific Studies

The Devil's Keyboard

Biocosmology, Panpsychism and Symbiotic Cosmology

Fractal Biocosmology

Darwinian Cosmological Panpsychism

Cosmological Symbiosis

Symbiosis and its Cosmological Significance

Quantum Reality and the Conscious Brain

The Cosmological Problem of Consciousness

The Physical Viewpoint, Quantum Transactions

The Neuroscience Perspective, Field Theories of Consciousness

Conscious Mind, Resonant Brain

Cartesian Theatres and Virtual Machines

Global Neuronal Workspace, Epiphenomenalism & Free Will

Consciousness and Surviving in the Wild

Consciousness as Integrated Information

Is Consciousness just Free Energy on Markov Landscapes?

Can Teleological Thermodynamics Solve the Hard Problem?, Quasi-particle Materialism

Panpsychism and its Critics

The Crack between Subjective Consciousness and Objective Brain Function

A Cosmological Comparison with Chalmers' Conscious Mind

Minimalist Physicalism and Scale Free Consciousness

Defence of the real world from the Case Against Reality

Consciousness and the Quantum: Putting it all Back Together

How the Mind and Brain Influence One Another

The Diverse States of Subjective Consciousness

Consciousness as a Quantum Climax

TOEs, Space-time, Timelessness and Conscious Agency

Psychedelics and the Fermi Paradox

Life After Death

The Evolutionary Landscape of Symbiotic Existential Cosmology

Evolutionary Origins of Neuronal Excitability, Neurotransmitters, Brains and Conscious Experience

The Extended Evolutionary Synthesis, Deep and dreaming sleep

The Evolving Human Genotype: Developmental Evolution and Viral Symbiosis

The Evolving Human Phenotype: Sexual and Brain Evolution, the Heritage of Sexual Love and Patriarchal Dominion

Gene Culture Coevolution

The Emergence of Language

Niche Construction, Habitat Destruction and the Anthropocene

Democratic Capitalism, Commerce and Company Law

Science, Religion and Gene-Culture Coevolution, The Spiritual Brain, Religion v Nature, Creationism

The Noosphere, Symbiosis and the Omega Point

Animism, Religion, Sacrament and Cosmology

Is Polyphasic Consciousness Necessary for Global Survival?

The Grim Ecological Reckoning of History

Anthropological Assumptions and Coexistential Realities

Shipibo: Split Creations and World Trees

Meso-American Animism and the Huichol

The Kami of Japanese Shinto

Maori Maatauranga

Pygmy Cultures and Animistic Forest Symbiosis

San Bushmen as Founding Animists

The Key to Our Future Buried in the Past

Entasis and Ecstasis: Complementarity between Shamanistic and Meditative Approaches to Illumination

Eastern Spiritual Cosmologies and Psychotropic Use

Psychedelic Agents in Indigenous American Cultures

Natty Dread and Planetary Redemption

The Scope of the Crisis

A Cross-Cultural Perspective

Forcing the Kingdom of God

The Messiah of Light and Dark

The Dionysian Heritage

The Women of Galilee and the Daughters of Jerusalem

Whom do Men say that I Am?

Descent into Hades and Harrowing Hell

Balaam the Lame: Talmudic Entries

Soma and Sangre: No Redemption without Blood

The False Dawn of the Prophesied Kingdom

Transcending the Bacchae: Revelation and Cosmic Annihilation

The Human Messianic Tradition

Ecocrisis, Sexual Reunion and the Tree of Life

Biocrisis and the Patriarchal Imperative

The Origins and Redemption of Religion in the Weltanschauung

A Millennial World Vigil for the Tree of Life

Redemption of Soma and Sangre in the Sap and the Dew

Maria Sabina's Holy Table and Gordon Wasson's Pentecost

The Man in the Buckskin Suit

Santo Daime and the Union Vegetale

The Society of Friends and Non-sacramental Mystical Experience

The Vision Quest to Discover Symbiotic Existential Cosmology

The Three Faces of Cosmology

Taking the Planetary Pulse

Planetary Reflowering

Scepticism, Belief and Consciousness

Psychedelics: The Edge of Chaos Climax of Consciousness

Discovering Cosmological Symbiosis

A Visionary Journey

Evolution of Symbiotic Existential Cosmology

Crisis and Resplendence

Communique on Preserving the Diversity of Life on Earth for our Survival as a Species

Affirmations: How to Reflower the Diversity of Life for our own Survival

Entheogenic Conclusion

A Moksha Epiphany

Epilogue

Symbiotic Existential Cosmology is Pandora's Pithos Reopened and Shekhinah's Sparks Returning

The Weltanschauung of Immortality

Paradoxical Asymmetric Complementarity, The Natural Face of Samadhi vs Male Spiritual Purity, Clarifying Cosmic Karma

Empiricism, the Scientific Method, Spirituality and the Subjective Pursuit of Knowledge

The Manifestation Test

References

Appendix Primal Foundations of Subjectivity, Varieties of Panpsychic Philosophy

 

 

The Conscious Brain and the Cosmological Universe [24]

Solving the Central Enigma of Existential Cosmology

Chris King – 21-6-2021

In memory of Maria Sabina and Gordon Wasson

 

Contents

 

1 The Cosmological Problem of Consciousness

2 Psychedelic Agents in Indigenous American Cultures

3 Psychedelics in the Brain and Mind

4 Therapy and Quantum Change: Scientific Results

5 Evolutionary Origins of Excitability, Neurotransmitters and Conscious Experience

6 The Evolutionary Landscape of Symbiotic Existential Cosmology

7 Fractal Biocosmology, Darwinian Cosmological Panpsychism and Symbiotic Cosmology

8 Animistic, Eastern and Western Traditions and Entheogenic Use

9 Natty Dread and Planetary Redemption

10 Biocrisis and Resplendence: Planetary Reflowering,  A Moksha Epiphany

 

Abstract:

 

This article resolves the central enigma of existential cosmology – the nature and role of subjective experience – thus providing a direct solution to the "hard problem of consciousness". This solves, in a single coherent cosmological description, the core existential questions surrounding the role of the biota in the universe, the underlying process supporting subjective consciousness, and the meaning and purpose of conscious existence. This process has pivotal importance for avoiding a human-caused mass extinction of biodiversity, and possibly our own demise, enabling us instead to fulfil our responsibilities as guardians of the unfolding of sentient consciousness on evolutionary and cosmological time scales.

 

The article overviews cultural traditions and current research into psychedelics [25] and formulates a panpsychic cosmology, in which the mind at large complements the physical universe, resolving the hard problem of consciousness, extended to subjective conscious volition over the universe, and the central enigmas of existential cosmology and eschatology, in a symbiotic cosmological model. The symbiotic cosmology is driven by the fractal non-linearities of the symmetry-broken quantum forces of nature, subsequently turned into a massively parallel quantum computer by biological evolution (Darwin 1859, 1889). Like Darwin's insights, this triple cosmological description is qualitative rather than quantitative, but nevertheless accurate. Proceeding from fractal biocosmology and panpsychic cosmology, through edge-of-chaos dynamical instability, the excitable cell and then the eucaryote symbiosis create a two-stage process, in which the biota capture a coherent encapsulated form of panpsychism, which is selected for because it aids survival. This becomes sentient in eucaryotes due to excitable membrane sensitivity to quantum modes and eucaryote adaptive complexity. Founding single-celled eucaryotes already possessed the genetic ingredients of excitable neurodynamics, including G-protein linked receptors and a diverse array of neurotransmitters, as social signalling molecules ensuring survival of the collective organism. The brain conserves these survival modes, so that it becomes an intimately-coupled society of neurons communicating synaptically via the same neurotransmitters, modulating key survival dynamics of the multicellular organism, and forming the most complex, coherent dynamical structures in the physical universe.

 

This results in consciousness as we know it, shaped by evolution for the genetic survival of the organism. In our brains, this becomes the existential dilemma of ego in a tribally-evolved human society, evoked in core resting state networks, such as the default mode network, also described in the research as "secondary consciousness", in turn precipitating the biodiversity and climate crises. However, because the key neurotransmitters are simple, modified amino acids, the biosphere will inevitably produce molecules modifying the conscious dynamics, exemplified in the biospheric entheogens, in such a way as to decouple the ego and enable existential return to the "primary consciousness" of the mind at large, placing the entheogens as conscious equivalents of the LHC in physics. Thus a biological symbiosis between Homo sapiens and the entheogenic species enables a cosmological symbiosis between the physical universe and the mind at large, resolving the climate and biodiversity crises long term in both a biological and a psychic symbiosis, ensuring planetary survival.

 

The Decline of Ground-Breaking Disruptive Scientific Discoveries

 

The research of Park, Leahey & Funk (2022) confirms that papers and patents are becoming less disruptive over time. I want to draw readers' attention to the fallacy that the past record of science and technology is a basis to believe that pure physicalist science will show how the brain "makes" consciousness in any sense greater than the neural correlate of conscious experience. This needs to be taken seriously and is damning evidence against the assumption that the past progress of mechanistic science will solve the hard problem of conscious volition.

 

Fig 70b: Decline of disruptive science and technology

 

The figure shows just how devastating the decline has become, and indicates the extreme unlikelihood of mechanistic science solving the biggest problem of all. The contrary belief is a product of severe ignorance of the diffuse complexity of the excitation passing from the prefrontals through to the motor cortex, modified by the basal ganglia and the cerebellum, involving both diffuse network activity and deep cyclic connections, which appear to be both uncomputable and empirically undecidable in the quantum universe.

 

Research Citation Profile

 

Fig 70c: The research citation profile of Symbiotic Existential Cosmology at 16th Sep 2023

 

In the two years since the mushroom trip that precipitated this work, it has accrued 1385 source references, with a peak of 90 in 2022. Of these, 996 are from 2000 on, 755 from 2010 on and 277 from 2020 on, illustrating the real-time, up-to-date nature of the work, which falls roughly into four categories: (1) cosmological physics, (2) consciousness and neuroscience, (3) evolutionary biology, (4) metaphysics, animism and religious studies. Fittingly, the oldest citation is Charles Darwin (1859) "On the Origin of Species".

 

 


 

The Cosmological Axiom of Primal Subjectivity

 

We put this into precise formulation. Taking into account that the existence of primary subjectivity is an undecidable proposition from the physical point of view, in the sense of Gödel, but is empirically certain from the experiential point of view, we come to the following:

 

(1) We start on home ground, i.e. with human conscious volition, where we can clearly confirm both aspects of reality – subjectively experiential and objectively physical.

(2) We then affirm, as empirical experience, that we have efficacy of subjective conscious volition over the physical universe, manifest in every intentional act we make, as is necessary for our behavioural survival – as evidenced by my consciously typing this passage – and that this is in manifest conflict with pure physicalism asserting the contrary.

(3) We now apply Occam's razor, not just on parsimony, but on the categorical inability of pure materialism, using only physical processes, which can only be empirically observed, to deal with subjective consciousness, which can only be empirically experienced and is private to observation. This leads to intractability of the hard problem of consciousness. Extended to the physicalist blanket denial of conscious physical volition, which we perceive veridically in our conscious perception of our enacted intent, this becomes the extended hard problem. Classical neuroscience accepts consciousness only as an epiphenomenon – an internal model of reality constructed by the brain – but denies volition, as a delusion perpetrated by evolution to evoke the spectre of intentional behaviour.

(4) We then scrutinise the physical aspect and realise we cannot empirically confirm classical causal closure of the universe in brain dynamics because: (a) the dynamics is fractal to the quantum-molecular level, so non-IID processes don't necessarily converge to the classical, and (b) experimental verification is impossible, because we would need essentially to trace the neurodynamics of every neuron, or a very good statistical sample, when the relevant dynamics is at the unstable edge of chaos and so is quantum sensitive. Neither can we prove consciousness causes brain states leading to volition, because consciousness can only be experienced and not observed, so it's a genuinely undecidable proposition physically.

(5) This sets up the status of "Does subjective conscious volition have efficacy over the universe?" as an empirically undecidable cosmological proposition from the physical perspective, in the sense of Gödel. From the experiential perspective, however, it is an empirical certainty.

(6) We therefore add a single minimal cosmological axiom, to state the affirmative proposition – “Subjective conscious volition has efficacy over the physical universe”. We also need to bear in mind that a physicalist could make the counter proposition that it doesn’t, and both could in principle be explored, like the continuum hypothesis in mathematics – that there is no infinite cardinality between those of the countable rationals and uncountable reals [1].

(7) We now need to scale this axiom all the way down to the quantum level, because it is a cosmological axiom that means that the universe has some form of primal subjective volition, so we need to investigate its possible forms. The only way we can do this, as we do with one another about human consciousness, where we can’t directly experience one another’s consciousness, is to make deductions from the physical effects of volition – in humans, organisms, amoebo-flagellates, prokaryotes, biogenesis, butterfly effect systems and quanta.

(8) We immediately find that quantum reality has two complementary processes:

(a) The wild wave function, which contains both past and future implicit "information" under special relativity, corresponding to the quantum-physical experiential interface of primal subjectivity.

(b) Collapse of the wave function, which violates causality, and in which the normalised wave power space leaves the quantum total free will over where it is measured, which is the quantum-physical volitional interface of primal subjectivity.

(9) Two potentially valid cosmologies from the physical perspective, but only one from the experiential perspective:

As with any undecidable proposition, from the objective perspective, pure physicalists can, on the one hand, continue to contend that the quantum has no consciousness or free will, that uncertainty is "random", citing the lack of any obvious bias violating the Born interpretation, and develop that approach, thus claiming volition is a self-fulfilling delusion of our internal model of reality. But Symbiotic Existential Cosmology can validly argue that uncertainty could be due to a complex quasi-random process, e.g. a special-relativistic transactional collapse process, over which the quantum, by virtue of its wave function context, does by default have "conscious" free will, allowing us and the diversity of life to also be subjectively conscious and affect the world around us, unlike the pure materialist model.

 

An Accolade to Cathy Reason

 

The first part of the answer to the continuum hypothesis CH – that there is no infinite cardinal between the rationals and reals – was due to Kurt Gödel. In 1938 Gödel proved that it is impossible to disprove CH using the usual axioms for set theory. So CH could be true, or it could be unprovable.

 

In 1963 Paul Cohen finally showed that it was in fact unprovable.
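
In symbols (a standard rendering, not the author's notation):

\[
\mathrm{CH}:\;\; \neg\exists\,\kappa\,\bigl(\aleph_0 < \kappa < 2^{\aleph_0}\bigr),
\qquad
\mathrm{ZFC}\nvdash \neg\mathrm{CH}\ \text{(Gödel 1938)},
\qquad
\mathrm{ZFC}\nvdash \mathrm{CH}\ \text{(Cohen 1963)}.
\]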

 

The first part of the answer to the cosmological axiom CA – that subjective consciousness is a cosmological complement to the physical universe – was due to Cathy Reason. In 2016 she proved that it is impossible to establish certainty of consciousness through a physical process. So CA could be false, or it could be unprovable. In 2019 and 2021, with Kushal Shah, she proved the no-supervenience theorem – that the operation of self-certainty of consciousness is inconsistent with the properties possible in any meaningful definition of a physical system – effectively showing CA is certain experientially. A formal proof is given in Reason (2023).

 

In 2023 in Symbiotic Existential Cosmology, Chris King showed that CA, in the form of conscious volition, is in fact unprovable physically, although it is certain experientially.

 

1 The Cosmological Problem of Consciousness

 

The human existential condition consists of a complementary paradox. To survive in the world at large, we have to accept the external reality of the physical universe, but we gain our entire knowledge of the very existence of the physical universe through our conscious experiences, which are entirely subjective and are complemented by other experiences in dreams and visions which also sometimes have the genuine reality value we describe as veridical. The universe is thus in a fundamental sense a description of our consensual subjective experiences of it, experienced from birth to death, entirely and only through the relentless unfolding spectre of subjective consciousness.

  


Fig 71: (a) Cosmic evolution of the universe (WMAP King 2020b). Life has existed on Earth for a third of the universe's 13.7-billion-year lifetime. (b) Symmetry-breaking of a unified superforce into the four wave-particle forces of nature, colour, weak, electromagnetic and gravity, with the first three forming the standard model, and with the weak-field limit of general relativity (Wilczek 2015) comprising the core model. (c) Quantum uncertainty defined through wave coherence beats. (d) Schrödinger cat experiment. Schrödinger famously said "The total number of minds in the universe is one", preconceiving Huxley's notion of the mind at large used as this monograph's basis for cosmological symbiosis. Quantum theory says the cat is in both live and dead states with probability 1/2, but the observer finds the cat alive or dead, suggesting the conscious observer collapses the superimposed wave function. (e) Feynman diagrams in special relativistic quantum field theories involve both retarded (usual) and advanced (time backwards) solutions, because the Lorentz energy transformations ensuring the atom bomb works have positive and negative energy solutions. Thus electron scattering (iv) is the same as positron creation-annihilation [26]. Each successive order Feynman diagram has a contribution reduced by a factor of α ≈ 1/137, the fine structure constant. (f) Double slit interference shows a photon emitted as a particle passes through both slits as a wave before being absorbed on the photographic plate as a particle. The trajectory of an individual particle is quantum uncertain, but the statistical distribution confirms the particles have passed through the slits as waves. (g) Cosmology of conscious mental states (King 2021a). Kitten's Cradle, a love song.
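
The statistical point in (f) can be made concrete numerically. The following minimal Python sketch (the slit geometry and wavelength are assumed illustrative values, not taken from any specific experiment) samples individual photon positions from the two-slit intensity pattern: each hit is individually quantum-uncertain, while the accumulated statistics record the wave's passage through both slits.

```python
# A minimal Monte-Carlo sketch of (f): slit geometry and wavelength are
# assumed illustrative values, not from any specific experiment.
import numpy as np

wavelength = 500e-9    # 500 nm light (assumed)
slit_sep   = 50e-6     # slit separation d (assumed)
slit_width = 10e-6     # slit width a (assumed)
L          = 1.0       # slit-to-screen distance in metres (assumed)
screen     = np.linspace(-0.02, 0.02, 2001)  # detector positions (m)

theta = screen / L                               # small-angle approximation
beta  = np.pi * slit_sep * theta / wavelength    # two-slit interference term
alpha = np.pi * slit_width * theta / wavelength  # single-slit envelope term
intensity = np.cos(beta) ** 2 * np.sinc(alpha / np.pi) ** 2  # |psi|^2 on screen

# Each photon lands at a quantum-uncertain position, sampled with the wave's
# Born-rule weight; only the accumulated statistics show the fringes.
rng  = np.random.default_rng(0)
hits = rng.choice(screen, size=100_000, p=intensity / intensity.sum())
counts, _ = np.histogram(hits, bins=100)
print(counts)  # fringe maxima and minima emerge in the ensemble alone
```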

 

The Physical Viewpoint

 

The religious anthropocentric view of the universe was overthrown when Copernicus, in 1543, deduced that the Earth, instead of being at the centre of the cosmos, rotated, along with the other solar system planets, in orbit around the Sun. Galileo defended heliocentrism based on his astronomical observations of 1609. By 1615, Galileo's writings on heliocentrism had been submitted to the Roman Inquisition, which concluded that heliocentrism was foolish, absurd, and heretical, since it contradicted Holy Scripture. He was tried by the Inquisition, found "vehemently suspect of heresy", and forced to recant. He spent the rest of his life under house arrest.

 

The Copernican revolution in turn resulted in the rise of classical materialism, defined by Isaac Newton's (1642 – 1726) laws of motion, conceived after watching the apple fall under gravity, despite Newton himself being a devout Arian Christian who used scripture to predict the apocalypse. The classically causal Newtonian world view, and Pierre Simon Laplace's (1749 – 1827) view of mathematical determinism – that if the current state of the world were known with precision, it could be computed for any time in the future or the past – came to define the universe as a classical mechanism in the ensuing waves of scientific discovery in classical physics, chemistry and molecular biology, climaxing with the decoding of the human genome, validating the much more ancient atomic theory of Democritus (c. 460 – c. 370 BC). The classically causal universe of Newton and Laplace has since been fundamentally compromised by the discovery of quantum uncertainty and the "spooky" features of quantum entanglement.

 

In counterposition to materialism, George Berkeley (1685 – 1753) is famous for his philosophical position of "immaterialism", which denies the existence of material substance and instead contends that familiar objects like tables and chairs are ideas perceived by our minds and, as a result, cannot exist without being perceived. Berkeley argued against Isaac Newton's doctrine of absolute space, time and motion in a precursor to the views of Mach and Einstein. Interest in Berkeley's work increased after 1945 because he had tackled many of the issues of paramount interest to 20th century philosophy, such as perception and language.

 

The core reason for the incredible technological success of science is not the assumption of macroscopic causality, but the fact that the quantum particles come in two kinds. The integral spin particles, called bosons, such as photons, can all cohere together, as in a laser, and thus make forces and radiation, but the half-integer spin particles, called fermions, such as protons and electrons, which can only congregate in pairs of complementary spin, are incompressible and thus form matter, inducing a universal fractal complexity via the non-linearity of the electromagnetic and other quantum forces. The fermionic quantum structures are small, discrete and divisible, so the material world can be analysed in great detail. Given the quantum universe, and the fact that brain processes are highly uncertain amid changing contexts and unstable tipping points at the edge of chaos, objective science has no evidential basis to claim the brain is causally closed and thus falsely conclude that we have no agency to apply our subjective consciousness to affect the physical world around us. By agency here I mean full subjective conscious volition, not just objective causal functionality (Brizio & Tirassa 2016, Moreno & Mossio 2015), or even autopoiesis (Maturana & Varela 1972).
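
The boson/fermion distinction comes down to exchange symmetry, which a toy two-particle wave function makes explicit. The sketch below (an illustrative construction, not drawn from the text's references) shows the antisymmetric fermion amplitude vanishing identically when two particles share a state – the Pauli exclusion behind incompressible matter – while the symmetric boson amplitude reinforces.

```python
# A toy two-particle wave function (illustrative, not from the text's
# references) showing the exchange-symmetry point: antisymmetric fermion
# amplitudes vanish when two particles share a state; symmetric boson
# amplitudes reinforce, as in a laser.
import numpy as np

def pair_amplitude(phi, chi, x1, x2, sign):
    # sign = +1: symmetric (bosons); sign = -1: antisymmetric (fermions)
    return (phi(x1) * chi(x2) + sign * phi(x2) * chi(x1)) / np.sqrt(2)

orbital = lambda mu: (lambda x: np.exp(-(x - mu) ** 2))  # a localised state

same_state = orbital(0.0)
print(pair_amplitude(same_state, same_state, 0.3, -0.2, sign=-1))  # 0.0: Pauli exclusion
print(pair_amplitude(same_state, same_state, 0.3, -0.2, sign=+1))  # enhanced boson amplitude
```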

 

The nature of conscious experience remains the most challenging enigma in the scientific description of reality, to the extent that we not only do not have a credible theory of how this comes about but we don’t even have an idea of what shape or form such a theory might take. While physical cosmology is an objective quest, leading to theories of grand unification, in which symmetry-breaking of a common super-force led to the four forces of nature in a big-bang origin of the universe, accompanied by an inflationary beginning, the nature of conscious experience is entirely subjective, so the foundations of objective replication do not apply. Yet for every person alive today, subjective conscious experiences constitute the totality of all our experience of reality, and physical reality of the world around us is established through subjective consciousness, as a consensual experience of conscious participants.

 

Erwin Schrödinger: Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental.

 

Arthur Eddington: The stuff of the world is mind stuff.

 

J. B. S. Haldane: We do not find obvious evidence of life or mind in so-called inert matter...; but if the scientific point of view is correct, we shall ultimately find them, at least in rudimentary form, all through the universe.

 

Julian Huxley: Mind or something of the nature of mind must exist throughout the entire universe. This is, I believe, the truth.

 

Freeman Dyson: Mind is already inherent in every electron, and the processes of human consciousness differ only in degree and not in kind from the processes of choice between quantum states which we call "chance" when they are made by electrons.

 

David Bohm: It is implied that, in some sense, a rudimentary consciousness is present even at the level of particle physics.

 

Werner Heisenberg: Is it utterly absurd to seek behind the ordering structures of this world a "consciousness" whose "intentions" were these very structures?

 

Andrei Linde: Will it not turn out, with the further development of science, that the study of the universe and the study of consciousness will be inseparably linked, and that ultimate progress in the one will be impossible without progress in the other?

 

The hard problem of consciousness (Chalmers 1995) is the problem of explaining why and how we have phenomenal first-person subjective experiences, sometimes called "qualia", that feel "like something", and more than this, evoke the entire panoply of all our experiences of the world around us. Chalmers comments: "Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does." By comparison, we assume there are no such experiences for inanimate things such as a computer, or a sophisticated form of artificial intelligence. Two extensions of the hard problem are the hard problem extended to volition, and the hard manifestation problem: how is experience manifested in waking perception, dreams and entheogenic visions?

 

Fig 71b: The hard problem's explanatory gap – an uncrossable abyss.

 

Although there have been significant strides in electrodynamic (EEG and MEG), chemodynamic (fMRI) and connectome imaging of active conscious brain states, we still have no idea of how such collective brain states evoke the subjective experience of consciousness to form the internal model of reality we call the conscious mind, or for that matter volitional will. In Jerry Fodor's words: "Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious."

 

Nevertheless opinions about the hard problem and whether consciousness has any role in either perception or decision-making remain controversial and unresolved. The hard problem is contrasted with easy, functionally definable problems, such as explaining how the brain integrates information, categorises and discriminates environmental stimuli, or focuses attention. Subjective experience does not seem to fit this explanatory model. Reductionist materialists, who are common in the brain sciences, particularly in the light of the purely computational world views induced by artificial intelligence, see consciousness and the hard problem as issues to be eliminated by solving the easy problems. Daniel Dennett (2005) for example argues that, on reflection, consciousness is functionally definable and hence can be corralled into the objective description. Arguments against the reductionist position often cite that there is an explanatory gap (Levine 1983) between the physical and the phenomenal. This is also linked to the conceivability argument, whether one can conceive of a micro-physical “zombie” version of a human that is identical except that it lacks conscious experiences. This, according to most philosophers (Howell & Alter 2009), indicates that physicalism, which holds that consciousness is itself a physical phenomenon with solely physical properties, is false.

 

David Chalmers (1995), speaking in terms of the hard problem, comments: "The only form of interactionist dualism that has seemed even remotely tenable in the contemporary picture is one that exploits certain properties of quantum mechanics." He then goes on to cite (a) John Eccles' (1986) suggestion that consciousness provides the extra information required to deal with quantum uncertainty, thus not interrupting causally deterministic processes, if they occur, in brain processing, and (b) the possible involvement of consciousness in "collapse of the wave function" in quantum measurement. We next discuss both of these loopholes in the causal deterministic description.

 

Two threads in our cosmological description indicate how the complementary subjective and objective perspectives on reality might be unified. Firstly, the measurement problem in the quantum universe appears to involve interaction with a conscious observer. While the quantum description involves an overlapping superposition of wave functions, the Schrödinger cat paradox, fig 71(d), shows that when we submit a cat in a box to a quantum measurement, in which a particle detection has a 50% probability of smashing a flask of cyanide, killing the cat, the conscious observer who opens the box does not find a superposition of live and dead cats, but one cat, either stone dead or very alive. This leads to the idea that subjective consciousness plays a critical role in collapsing the superimposed wave functions into a single component, as noted by John von Neumann, who stated that collapse could occur at any point between the precipitating quantum event and the conscious observer, and others (Greenstein 1988, Stapp 1995, 2007).
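
The measurement postulate invoked here has a simple operational form. The following minimal Python sketch (an illustration of the Born rule only; where along the chain the collapse happens is left open, as von Neumann noted) samples one definite outcome from the superposed cat state:

```python
# A minimal sketch of the measurement postulate in the cat experiment:
# unitary evolution keeps the superposition, while "opening the box" samples
# one outcome with Born probabilities and projects the state onto it.
import numpy as np

rng = np.random.default_rng()
state = np.array([1.0, 1.0]) / np.sqrt(2)   # |alive> + |dead>, equal amplitudes

def observe(state):
    probs = np.abs(state) ** 2               # Born rule: p_i = |amplitude_i|^2
    outcome = rng.choice(len(state), p=probs)
    collapsed = np.zeros_like(state)
    collapsed[outcome] = 1.0                 # post-measurement eigenstate
    return ("alive", "dead")[outcome], collapsed

result, state = observe(state)
print(result, state)   # one definite history; never a superposed cat
```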

 

Wigner & Margenau (1967) used a variant of the cat paradox to argue for conscious involvement. In this version, we have a box containing a conscious friend who reports the result later, leading to a paradox about when the collapse occurs – i.e. when the friend observes it or when Wigner does. Wigner discounted the observer being in a superposition themselves, as this would be preceded by being in a state of effective "suspended animation". As this paradox does not occur if the friend is a non-conscious mechanistic computer, it suggests consciousness is pivotal. Henry Stapp (2009) in "Mind, Matter and Quantum Mechanics" has an overview of the more standard theories.

 

While systems as large as molecules of 2000 atoms (Fein et al. 2019), the linear antibiotic polypeptide gramicidin A1, composed of 15 amino acids (Shayeghi et al. 2020), and even a deep-frozen tardigrade (Lee et al. 2021) have been found in a superposition of states resulting in interference fringes, indicating that the human body or brain could be represented as a quantum superposition, it is unclear that subjective experience can. More recent experiments involving two interconnected Wigner's friend laboratories also suggest the quantum description "cannot consistently describe the use of itself" (Frauchiger & Renner 2018). An experimental realisation (Proietti et al. 2019) implies that there is no such thing as objective reality, as quantum mechanics allows two observers to experience different, conflicting realities. These paradoxes underlie the veridical fact that conscious observers make and experience a single course of history, while the physical universe of quantum mechanics is a multiverse of probability worlds, as in Everett's many worlds description, if collapse does not occur. This postulates split observers, each unaware of the existence of the other, but what kind of universe they are then looking at seems inexorably split into multiverses, which we do not experience.

 

In this context Barrett (1999) presents a variety of possible solutions involving many worlds and many or one mind, which, in the words of Saunders (2001) in review, has resonance with existential cosmology:

 

Barrett's tentatively favoured solution [is] the one also developed by Squires (1990). It is a one-world dualistic theory, with the usual double-standard of all the mentalistic approaches: whilst the physics is precisely described in mathematical terms, although it concerns nothing that we ever actually observe, the mental – in the Squires-Barrett case a single collective mentality – is imprecisely described in non-mathematical terms, despite the fact that it contains everything under empirical control.

 

In quantum entanglement, two or more particles can be prepared within the same wave function. For example, in a laser, an existing wave function can capture more and more photons in phase with a standing wave between two mirrors by stimulated emission from the excited medium. In other experiments, pairs of particles can be generated inside a single wave function. For example, an excited calcium atom with two outer electrons can emit a blue and a yellow photon with complementary polarisations in a spin-0 to spin-0 transition, as shown in fig 72(8). In this situation, when we sample the polarisation of one photon, the other instantaneously has the complementary polarisation, even when the two detections take place without there being time for any information to pass between the detectors at the speed of light. John Bell (1964) proved that the results predicted by standard quantum mechanics, when the two detectors were set at varying angles, violated the constraints defined by local Einsteinian causality, implying quantum non-locality, decried by Einstein, Rosen and Podolsky (1935) as incomplete:

 

In a complete theory there is an element corresponding to each element of reality. A sufficient condition for the reality of a physical quantity is the possibility of predicting it with certainty, without disturbing the system. In quantum mechanics in the case of two physical quantities described by non-commuting operators, the knowledge of one precludes the knowledge of the other. Then either (1) the description of reality as given by a wave function in quantum mechanics is not complete, or (2) these two quantities cannot have simultaneous reality. Consideration of the problem of making predictions concerning a system on the basis of measurements made on another system that had previously interacted with it leads to the result that if (1) is false then (2) is also false. One is thus led to conclude that the description of reality as given by a wave function is not complete.

 

The experimental verification was confirmed by Alain Aspect and others (1982) over space-like intervals using rapidly time-varying analysers (fig 72(8)), receiving a Nobel in 2022. There are other more complex forms of entanglement, such as the W and GHZ states (Greenberger, Horne & Zeilinger 1989, Mermin 1990), used in quantum computing (Coecke et al. 2021), which are types of entangled state involving at least three subsystems (particle states, or qubits). Extremely non-classical properties of the GHZ state have been observed.
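
Bell's result can be made concrete in a few lines. The sketch below (standard textbook angles, not a simulation of the Aspect apparatus itself) evaluates the CHSH combination of quantum polarisation correlations, which exceeds the bound of 2 that any local hidden-variable account must satisfy:

```python
# The CHSH form of Bell's theorem for polarisation-entangled photons:
# quantum correlations E(a,b) = cos 2(a-b) at the standard optimal settings
# give |S| = 2*sqrt(2), beyond the local-causal bound |S| <= 2.
import numpy as np

def E(a, b):
    # quantum prediction for the polarisation correlation of the Bell state
    return np.cos(2 * (a - b))

a1, a2 = 0.0, np.pi / 4            # first observer's analyser angles
b1, b2 = np.pi / 8, 3 * np.pi / 8  # second observer's analyser angles
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(S)   # ~2.828 (the Tsirelson bound), violating the classical limit of 2
```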

 

Albert Einstein dubbed the phenomenon "spooky action at a distance" and proposed that the effect actually came about because the particles contained hidden variables, or instructions, which had already predetermined their states. This doesn't mean that quantum mechanics is incomplete, superficial or wrong, but that a hidden variable theory we do not have direct access to within uncertainty may provide the complete description.

 

Fig 71c: Cancellation of off-diagonal entangled components in decoherence by damping, modelling extraneous collisions (Zurek 2003).

 

Other notions of collapse (see King 2020b for details) involve interaction with third-party quanta and the world on classical scales. All forms of quantum entanglement (Aspect et al. 1982), or its broader phase generalisation, quantum discord (Ollivier & Zurek 2002), involve decoherence (Zurek 1991, 2003), because the system has become coupled to other wave-particles. But these just correspond to further entanglements, not collapse. Recoherence (Bouchard et al. 2015) can reverse decoherence, supporting the notion that all non-conscious physical structures can exist in superpositions. Another notion is quantum darwinism (Zurek 2009), in which some states survive because they are especially robust in the face of decoherence. Spontaneous collapse (Ghirardi, Rimini, & Weber 1986) has a similar artificiality to Zurek's original decoherence model, in that both include an extra factor in the Schrödinger equation forcing collapse.
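
Zurek's damping picture in fig 71c can be sketched numerically. The following minimal Python model (the exponential decay rate is an assumed stand-in for the environmental coupling) shows the off-diagonal coherences of a two-state density matrix decaying while the diagonal probabilities persist – further entanglement with the environment, not collapse proper:

```python
# A minimal two-state model in the spirit of Zurek's damping picture of
# fig 71c; the decoherence rate gamma is an assumed stand-in for the
# environmental coupling.
import numpy as np

rho = 0.5 * np.array([[1, 1],
                      [1, 1]], dtype=complex)   # pure superposition |+><+|
gamma = 1.0e6                                   # assumed decoherence rate (1/s)

def decohere(rho, t):
    out = rho.copy()
    out[0, 1] *= np.exp(-gamma * t)   # off-diagonal coherences decay...
    out[1, 0] *= np.exp(-gamma * t)
    return out                        # ...diagonal populations persist

print(decohere(rho, 1e-5))   # coherences suppressed by e^-10: an improper
                             # mixture via entanglement, not a collapse
```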

 


 

Penrose's objective-collapse theory postulates the existence of an objective threshold governing the collapse of quantum states, related to the difference of the spacetime curvature of these states in the universe's fine-scale structure. He suggested that at the Planck scale, curved spacetime is not continuous but discrete, and that each separated quantum superposition has its own piece of spacetime curvature, a blister in spacetime. Penrose suggests that gravity exerts a force on these spacetime blisters, which become unstable above the Planck scale and collapse to just one of the possible states. Atomic-level superpositions would require 10 million years to reach OR threshold, while an isolated 1 kilogram object would reach OR threshold in 10⁻³⁷ s. Objects somewhere between these two scales could collapse on a timescale relevant to neural processing. An essential feature of Penrose's theory is that the choice of states when objective reduction occurs is selected neither randomly nor algorithmically. Rather, states are selected by a "non-computable" influence embedded in the Planck scale of spacetime geometry, which in "The Emperor's New Mind" (Penrose 1989) he associated with conscious human reasoning.
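
The scale dependence quoted above follows from the OR timescale τ ≈ ħ/E_G, where E_G is the gravitational self-energy of the difference between the superposed mass distributions. A rough Python estimate (assuming, as a crude simplification, E_G ≈ Gm²/r for a mass fully displaced by about its own radius) reproduces the orders of magnitude:

```python
# An order-of-magnitude sketch of the OR timescale tau ~ hbar / E_G, with
# E_G crudely approximated (an assumption) as G m^2 / r for a mass m whose
# superposed distributions are displaced by about its own radius r.
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s

def or_time(mass_kg, radius_m):
    E_G = G * mass_kg ** 2 / radius_m   # crude gravitational self-energy
    return hbar / E_G                   # collapse timescale in seconds

print(or_time(1.7e-27, 1e-15))  # a nucleon: ~5e14 s, i.e. ~10 million years
print(or_time(1.0, 0.1))        # 1 kg at 10 cm: ~1.6e-25 s with this crude
                                # estimate; Penrose's 10^-37 s figure uses a
                                # finer-grained mass-distribution difference
```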

 

Spontaneous random collapse models (GRW: Ghirardi, Rimini, & Weber 1986) include an extra factor complementing the Schrödinger equation, forcing random collapse over a finite time. Both Penrose's gravitationally induced collapse and the variants of GRW theories, such as continuous spontaneous localisation (CSL), involving gradual, continuous collapse rather than a sudden jump, have recently been partially eliminated by experiments derived from neutrino research, which have failed to detect the very faint X-ray signals that the local jitter of physical collapse models implies.

 

In the approach of stochastic electrodynamics (SED) (de la Peña et al. 2020), the stochastic aspect corresponds to the effects of the collapse process into the classical limit, but here consciousness can be represented by the zero point field (ZPF) (Keppler 2018). Finally we have pilot waves [27] (Bohm 1952), which identify particles as having real positions, thus not requiring wave function collapse, but which have problems handling the creation of new particles. Images of such trajectories can be seen in weak quantum measurement and surreal Bohmian trajectories in fig 57.

 

David Albert (1992), in "Quantum Mechanics and Experience", cites objections to virtually all descriptions of collapse of the wave function. In terms of von Neumann's original definition, which allowed for collapse to take place at any point from the initial event to the conscious observation of it, what he concluded was that there must be two fundamental laws about how the states of quantum-mechanical systems evolve:

 

Without measurements all physical systems invariably evolve in accordance with the dynamical equations of motion, but when there are measurements going on, the states of the measured systems evolve in accordance with the postulate of collapse. What these laws actually amount to will depend on the precise meaning of the word measurement. And it happens that the word measurement simply doesn't have any absolutely precise meaning in ordinary language; and it happens (moreover) that von Neumann didn't make any attempt to cook up a meaning for it, either.

 

However, if collapse always occurs at the last possible moment, as in Wigner's (1961) view:

 

All physical objects almost always evolve in strict accordance with the dynamical equations of motion. But every now and then, in the course of some such dynamical evolutions, the brain of a sentient being may enter a state wherein states connected with various different conscious experiences are superposed; and at such moments, the mind connected with that brain  opens its inner eye, and gazes on that brain, and that causes the entire system (brain, measuring instrument, measured system, everything) to collapse, with the usual quantum-mechanical probabilities, onto one or another of those states; and then the eye closes, and everything proceeds again in accordance with the dynamical equations of motion.

 

We thus end up with either purely physical systems, which evolve in accordance with the dynamical equations of motion, or conscious systems, which do contain sentient observers. These systems evolve in accordance with the more complicated rules described above. ... So in order to know precisely how things physically behave, we need to know precisely what is conscious and what isn't. What this "theory" predicts will hinge on the precise meaning of the word conscious; and that word simply doesn't have any absolutely precise meaning in ordinary language; and Wigner didn't make any attempt to make up a meaning for it; and so all this doesn't end up amounting to a genuine physical theory either.

 

But he also discounts related theories relating to macroscopic processes:

 

All physical objects almost always evolve in strict accordance with the dynamical equations of motion. But every now and then, in the course of some such dynamical evolutions (in the course of measurements, for example), it comes to pass that two macroscopically different conditions of a certain system (two different orientations of a pointer, say) get superposed, and at that point, as a matter of fundamental physical law, the state of the entire system collapses, with the usual quantum-mechanical probabilities, onto one or another of those macroscopically different states. But then we again have two sorts of systems – microscopic and macroscopic – and again we don't precisely know what macroscopic is.

 

He even goes to the trouble of showing that no obvious empirical test can distinguish between such variations, including decoherence, e.g. from air molecules, and the GRW theory, where other problems arise about the nature and consequences of collapse on future evolution.

 

Tipler (2012, 2014), using quantum operators, shows that, in the many worlds interpretation, quantum non-locality ceases to exist because the first measurement of an entangled pair, e.g. spin up or down, splits the multiverse into two deterministic branches, in each of which the state of the second particle is determined to be complementary, so no nonlocal "spooky action at a distance" needs to, or can, take place.

 

This also leads to a fully-deterministic multiverse:

 

Like the electrons, and like the measuring apparatus, we are also split when we read the result of the measurement, and once again our own split follows the initial electron entanglement. Thus quantum nonlocality does not exist. It is only an illusion caused by a refusal to apply quantum mechanics to the macroworld, in particular to ourselves.

 

Many-Worlds quantum mechanics, like classical mechanics, is completely deterministic. So the observers have only the illusion of being free to choose the direction of spin measurement. However, we know by experience that there are universes of the multiverse in which the spins are measured in the orthogonal directions, and indeed universes in which the pair of directions are at angles θ at many values between 0 and π/2 radians. To obtain the Bell Theorem quantum prediction in this more general case, where there will be a certain fraction with spin in one direction, and the remaining fraction in the other, requires using Everett's assumption that the square of the modulus of the wave function measures the density of universes in the multiverse.

 

There is a fundamental problem with Tipler's explanation. The observer is split into one that observes the cat alive and another that observes it dead. So everything is split. Nelson did and didn't win the battle of Copenhagen by turning his blind eye, so Nelson is also both a live and a dead Schrödinger cat. The same goes for every idiosyncratic conscious decision we make, so history never gets made. Free will ceases to exist and quantum measurement does not collapse the wave function. So we have a multiverse of multiverses with no history at all. Hence no future either.

 

This simply isn’t in any way how the real universe manifests. The cat IS alive or dead. The universe is superficially classical because so many wave functions have collapsed or are about to collapse that the quantum universe is in a dynamical state of creating superpositions and collapsing nearly all of them, as the course of history gets made. This edge of chaos dynamic between collapse and wave superposition allows free will to exist within the cubic centimetre of quantum uncertainty.  We are alive. Subjective conscious experience is alive and history is being unfolded as I type.

 

Nevertheless the implications of the argument are quite profound in that both a fully quantum multiverse and a classical universe are causally deterministic systems, showing that the capacity of subjectively conscious free-will to throw a spanner in the works comes from the interface we experience between these two deterministic extremes.

 

Transactional Interpretations: Another key interpretation which extends the Feynman description to real particle exchanges is the transactional interpretation TI (Cramer 1986, King 1989, Kastner 2012, Cramer & Mead 2020) where real quanta are also described as a hand-shaking between retarded (usual time direction) and advanced (retrocausal) waves from the absorber, called “offer” and “confirmation” waves.  TI arose from the Wheeler-Feynman (WF) time-symmetric theory of classical electrodynamics (Wheeler and Feynman 1945, 1949, Feynman 1965), which proposed that radiation is a time-symmetric process, in which a charge emits a field in the form of half-retarded, half-advanced solutions to the wave equation, and the response of absorbers combines with that primary field to create a radiative process that transfers energy from an emitter to an absorber.

 


Fig 72: (1) In TI a transaction is established by crossed phase advanced and retarded waves. (2) The superposition of these between the emitter and absorber results in a real quantum exchanged between emitter P and future absorber Q. (3) The origin of the positive energy arrow of time envisaged as a phase reflecting boundary at the cosmic origin (Cramer 1983). (4) Pair splitting entanglement can be explained by transactional handshaking at the common emitter. (5) The treatment of the quantum field in PTI is explained by assigning a different status to the internal virtual particle transactions (Kastner 2012). (6) A real energy emission in which time has broken symmetry involves multiple transactions between the emitter and many potential absorbers, with collapse modelled as a symmetry-breaking, in which the physical weight functions as the probability of that particular process as it "competes" with other possible processes (Kastner 2014). (7) Space time emerging from a transaction (Kastner 2021a). (8) Entanglement experiment with time varying analysers (Aspect et al. 1982). A calcium atom emits two entangled photons with complementary polarisation, each of which travels to one of two detectors oscillating so rapidly there is no time to send information at the speed of light between the two detector pairs. (9) The blue and yellow photon transitions. (10) The quantum correlations (blue) exceed Bell's limits of communication between the two at the speed of light. The experiment is referred to as EPR after Einstein, Podolsky and Rosen, who first suggested the problem of spooky action at a distance.

 

The only non-paradoxical way entanglement and its collapse can be realised physically, especially in the case of space-like separated detectors, as in fig 72(8), is this:

 

(A) The closer detector, say No. 1, destructively collapses the entanglement at (1), sending a non-entangled advanced confirmation wave back in time to the source.

(B) The arrival of the advanced wave at the source collapses the wave right at source, so that the retarded wave from the source is no longer entangled although it was prepared as entangled by the experimenter. This IS instantaneous but entirely local.

(C) The retarded offer wave from the Bell experiment is no longer actually entangled and is sent at light speed to detector 2, where, if it is detected, it immediately has complementary polarisation to 1.

(D) If detector 1 does not record a photon at the given angle no confirmation wave is sent back to the source, so no coincidence measurement can be made.

(E) The emitted retarded wave will remain entangled unless photon 1 is or has been absorbed by another atom, but then no coincidence counts will be made either.

(F) The process is relativistically covariant. In an experimenter frame if relative motion results in detector 2 sampling first, the roles of 1 and 2 become exchanged and the same explanation follows.

 

Every detection at (2) either collapses the entangled wave, or the already partially collapsed single particle wave function as in (B): if no detection has happened at 1, or anywhere else, the retarded source wave is still entangled, and detector 2 may sample it and collapse the entanglement. If a detection of photon 1 has happened elsewhere, or at detector 1, the retarded source wave is no longer entangled, as in (B) above, and then detector 2, if it samples photon 2, also collapses this non-entangled single particle wave function.
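
The sequence (A)-(F) can be mimicked with a toy simulation. The sketch below (a hypothetical model for illustration, not Cramer's or Kastner's formalism) lets the first detection collapse the pair at the source, after which photon 2 obeys plain Malus-law statistics, yet the coincidence rate reproduces the Bell-violating quantum correlation ½cos²(θ₁−θ₂):

```python
# A toy event-ordering model of steps (A)-(E) (hypothetical, for illustration;
# not Cramer's or Kastner's formalism): the first confirmed detection collapses
# the pair at the source, so photon 2 leaves with a definite polarisation and
# detector 2 sees plain Malus-law statistics, yet the coincidence rate
# reproduces the Bell-violating quantum correlation 0.5*cos^2(t1 - t2).
import numpy as np

rng = np.random.default_rng(1)

def coincidence_rate(t1, t2, trials=100_000):
    coinc = 0
    for _ in range(trials):
        # (A) the nearer detector samples first: for the entangled state each
        # outcome at analyser angle t1 is equally likely
        if rng.random() >= 0.5:
            continue   # (D) no detection at 1: no confirmation wave, no coincidence
        # (B) the advanced wave collapses the pair AT THE SOURCE, so photon 2
        # is emitted already polarised along t1 (local, but retrocausal)
        # (C) detector 2 then samples an ordinary polarised photon
        coinc += rng.random() < np.cos(t2 - t1) ** 2
    return coinc / trials

for d in (0.0, np.pi / 8, np.pi / 4):
    print(d, coincidence_rate(0.0, d))   # ~0.5, ~0.43, ~0.25
```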

 

So there is no light-speed violating paradox, but there is a deeper paradox about advanced and retarded waves in space-time in the transactional principle. This, as far as I can see, gives the complete true real-time account of how the universe actually deals with entanglement, not the fully collapsed statistical result the experimenter sees, from which they figure the case is already closed.

 

The standard account of the Bell theorem experiment, as in Fig 72(8), cannot explain how the universe actually does it, only that the statistical correlation agrees with the sinusoidal angular dependence of quantum reality and violates the Bell inequality. The experimenter is in a privileged position to overview the total data and can conclude this with no understanding of how an entangled wave function they prepared can arrive at detector 2 unentangled when photon 1 has already been absorbed.

 

Richard Feynman's (1965) Nobel Lecture "The Development of the Space-Time View of Quantum Electrodynamics" opened the whole transactional idea of advanced and retarded waves twenty years before Cramer (1983) did. It enshrines the very principle before QED was completed as the most accurate theory ever.

 

The same applies to single particle wave functions, where collapse of the wave function on absorption paradoxically results in a sudden collapse of the wave function to zero even at space-like intervals from the emission and absorption loci, a paradox resolved by the advanced and retarded confirmation and offer waves. Quantum mechanics also allows events to happen with no definite causal order (Goswami et al. 2018).

 

As just noted, the process of wave function collapse has generally been considered to violate Lorentz relativistic invariance (Barrett 1999, pp 44–45):

 

The standard collapse theory, at least, really is incompatible with the theory of relativity in a perfectly straightforward way: the collapse dynamics is not Lorentz- covariant. When one finds an electron, for example, its wave function instantaneously goes to zero everywhere except where one found it. If this did not happen, then there would be a nonzero probability of finding the electron in two places at the same time in the measurement frame. The problem is that we cannot describe this process of the wave function going to zero almost everywhere simultaneously in a way that is compatible with relativity. In relativity there is a different standard of simultaneity for each inertial frame, but if one chooses a particular inertial frame in order to describe the collapse of the wave function, then one violates the requirement that all physical processes must be described in a frame-independent way.  

 

Ruth Kastner (2021a,b) elucidates the relativistic transactional interpretation, which claims to resolve this through causal sets (Sorkin 2003), invoking a special-relativistic theory encompassing both real particle exchange and collapse:

 

In formal terms, a causal set C is a finite, partially ordered set whose elements are subject to a binary relation that can be understood as precedence; the element on the left precedes that on the right. It has the following properties:

 

(i) transitivity: (∀x, y, z ∈ C)(x ≺ y ≺ z ⇒ x ≺ z)
(ii) irreflexivity: (∀x ∈ C)(x ⊀ x)
(iii) local finiteness: (∀x, z ∈ C)(cardinality {y ∈ C | x ≺ y ≺ z} < ∞)

 

Properties (i) and (ii) assure that the set is acyclic, while (iii) assures that the set is discrete. These properties yield a directed structure that corresponds well to temporal becoming, which Sorkin describes as follows:

 

In Sorkin’s construct, one can then have a totally ordered subset of connected links (as defined above), constituting a chain. In the transactional process, we naturally get a parent/child relationship with every transaction, which defines a link. Each actualized transaction establishes three things: the emission event E, the absorption event A, and the invariant interval I(E,A) between them, which is defined by the transferred photon. Thus, the interval I(E,A) corresponds to a link. Since it is a photon that is transferred, every actualized transaction establishes a null interval, i.e., ds² = c²t² − r² = 0. The emission event E is the parent of the absorption event A (and A is the child of E).

 

A major advantage of the causal set approach as proposed by Sorkin and collaborators … is that it provides a fully covariant model of a growing spacetime. It is thus a counterexample to the usual claim (mentioned in the previous section) that a growing spacetime must violate Lorentz covariance. Specifically, Sorkin shows that if the events are added in a Poissonian manner, then no preferred frame emerges, and covariance is preserved (Sorkin 2003, p. 9).  In RTI, events are naturally added in a Poissonian manner, because transactions are fundamentally governed by decay rates (Kastner and Cramer, 2018).
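To make the sprinkling idea concrete, here is a minimal Python sketch (my own illustration, not Sorkin's or Kastner's code) that scatters events with uniform Poisson density into a region of 1+1 Minkowski space and orders them by light-cone causality. The resulting causal set automatically satisfies properties (i)-(iii) above, and because the sprinkling is Poissonian it singles out no preferred frame:

import math, random

# Poisson-sprinkle events into a 1+1 Minkowski region; causal order:
# x precedes y iff y lies in the future light cone of x (units c = 1).
rng = random.Random(1)

def poisson(mean):
    # Knuth's algorithm for a Poisson-distributed event count
    L, n, p = math.exp(-mean), 0, 1.0
    while p > L:
        p *= rng.random()
        n += 1
    return n - 1

def sprinkle(density, T=1.0, X=1.0):
    return [(rng.uniform(0, T), rng.uniform(0, X))
            for _ in range(poisson(density * T * X))]

def precedes(e1, e2):
    dt, dx = e2[0] - e1[0], abs(e2[1] - e1[1])
    return dt > 0 and dt >= dx   # future-directed timelike or lightlike

events = sprinkle(50.0)
order = {(i, j) for i in range(len(events)) for j in range(len(events))
         if i != j and precedes(events[i], events[j])}

# (i) transitivity and (ii) irreflexivity hold by light-cone geometry;
# (iii) local finiteness holds since any interval {y : x < y < z} is a
# bounded region containing finitely many sprinkled events.
assert all((x, z) in order for (x, y1) in order for (y2, z) in order if y1 == y2)
assert all((i, i) not in order for i in range(len(events)))
print(len(events), "events,", len(order), "causal relations: axioms verified")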

 

Ruth Kastner comments in private communication in relation to her development of the transactional interpretation:

 

The main problem with the standard formulation of QM is that consciousness is brought in as a kind of 'band-aid' that does not really work to resolve the Schrödinger's Cat and Wigner's Friend paradoxes. The transactional picture, by way of its natural non-unitarity (collapse under well-quantified circumstances), resolves this problem and allows room for consciousness to play a role as the acausal/volitional influence that corresponds to efficacy (Kastner 2016). My version of TI, however, is ontologically different from Cramer’s and it also is fully relativistic (Kastner 2021a,b). For specifics on why many recent antirealist claims about the world as alleged implications of Wigner's Friend are not sustainable, see Kastner (2021c). In particular, standard decoherence does not yield measurement outcomes, so one really needs real non-unitarity in order to have correspondence with experience. I have also shown that the standard QM formulation, lacking real non-unitarity, is subject to fatal inconsistencies (Kastner 2019, 2021d). These inconsistencies appear to infect Everettian approaches as well.

 

Kastner (2011) explains the arrow of time as a foundational quantum symmetry-breaking:

 

Since the direction of positive energy transfer dictates the direction of change (the emitter loses energy and the absorber gains energy), and time is precisely the domain of change (or at least the construct we use to record our experience of change), it is the broken symmetry with respect to energy propagation that establishes the directionality or anisotropy of time. The reason for the “arrow of time” is that the symmetry of physical law must be broken: the actual breaks the symmetry of the potential. It is often viewed as a mystery that there are irreversible physical processes and that radiation diverges toward the future. The view presented herein is that, on the contrary, it would be more surprising if physical processes were reversible, because along with that reversibility we would have time-symmetric (isotropic) processes, which would fail to transfer energy, preclude change, and therefore render the whole notion of time meaningless.

 

Kastner is a possibilist who argues that offer waves (OWs) and confirmation waves (CWs) are possibilities that are "real". She says that they are less real than actual empirically measurable events, but more real than an idea or concept in a person's mind. She suggests the alternate term "potentia", Aristotle's term that she found Heisenberg had cited. For Kastner, the possibilities are physically real, as compared to merely conceptually possible ideas that are consistent with physical law. But she says the "possibilities" described by offer and confirmation waves are "sub-empirical" and pre-spatiotemporal (i.e., they have not shown up as actual in spacetime). She calls these "incipient transactions", and calls for a new metaphysical category to describe such "not quite actual" possibilities.

 

Kastner (2012, 2014b) sets out the basis for extending the possibilist transactional interpretation, or PTI, to the relativistic domain in the relativistic transactional interpretation, or RTI. This modified version proposes that offer and confirmation waves (OW and CW) exist in a sub-empirical, pre-spacetime realm (PST) of possibilities, and that it is actualised transactions which establish empirical spatiotemporal events. PTI thus proposes a growing universe picture, in which actualised transactions are the processes by which spacetime events are created from a substratum of quantum possibilities. The latter are taken as the entities described by quantum states (and their advanced confirmations) and, at a subtler relativistic level, the virtual quanta.

 

The basic idea is that offers and confirmations are spontaneously elevated forms of virtual quanta, where the probability of elevation is given by the decay rate for the process in question. In the direct action picture of PTI, an excited atom decays because one of the virtual photon exchanges ongoing between the excited electron and an external absorber (e.g. an electron in a ground state atom) is spontaneously transformed into a photon offer wave that generates a confirming response. The probability for this occurrence is the product of the QED coupling constant α and the associated transition probability. In quantum field theory terms, the offer wave corresponds to a free photon, or excited state of the field, instantiating a Fock space state (Kastner 2014b).

 

In contrast with standard QFT, where the amplitudes over all interactions are added and then squared under the Born rule, according to PTI the absorption of the offer wave generates a confirmation (the response of the absorber), an advanced field. This field can be consistently reinterpreted as a retarded field from the vantage point of an observer composed of positive energy and experiencing events in a forward temporal direction. The product of the offer (represented by the amplitude) and the confirmation (represented by the amplitude's complex conjugate) corresponds to the Born rule.
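In symbols, this is just the standard squared-modulus identity, here read transactionally (with ψᵢ the offer-wave amplitude at absorber i and ψᵢ* the amplitude of the returned confirmation):

P(i) = ψᵢψᵢ* = |ψᵢ|²,  with Σᵢ P(i) = 1 over the competing absorbers

so the Born rule, a postulate in standard quantum mechanics, arises in PTI as the product of the two counter-propagating waves.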

  

Kastner (2014a, 2021c,d) deconstructs decoherence, as well as quantum Darwinism, refuting claims that the emergence of classicality proceeds in an observer-independent manner under a unitary-only dynamics. She notes that quantum Darwinism holds that the emergence of classicality is not dependent on any inputs from observers, and yet it is precisely the classical experiences of those observers that the decoherence program seeks to explain from first principles:

 

“in the Everettian picture, everything is always coherently entangled, so pure states must be viewed as a fiction – but that means that it is also fiction that the putative 'environmental systems' are all randomly phased. In helping themselves to this phase randomness, Everettian decoherentists have effectively assumed what they are trying to prove: macroscopic classicality only ‘emerges’ in this picture because a classical, non-quantum-correlated environment was illegitimately put in by hand from the beginning. Without that unjustified presupposition, there would be no vanishing of the off-diagonal terms”

 

She extends this to an uncanny observation concerning the Everett view:

 

"That is, MWI does not explain why Schrodingers Cat is to be viewed as ‘alive’ in one world and deadin another, as opposed to alive + deadin one world and alive deadin the other.”

 

Kastner (2016a) notes that the symmetry-breaking of the advanced waves provides an alternative explanation to von Neumann’s citing of the consciousness of the observer in quantum measurement:

 

Von Neumann noted that this Process 1 transformation is acausal, nonunitary, and irreversible, yet he was unable to explain it in physical terms. He himself spoke of this transition as dependent on an observing consciousness. However, one need not view the measurement process as observer-dependent. … The process of collapse precipitated in this way by incipient transactions [competing probability projection operator weightings of the] absorber response(s) can be understood as a form of spontaneous symmetry breaking.

 

Kastner & Cramer (2018) confirm this picture:

And since not all competing possibilities can be actualized, symmetry must be broken at the spacetime level of actualized events. The latter is the physical correlate of non-unitary quantum state reduction.

 

However, in Kastner (2016b), Ruth considers observer participation as integral, rejecting two specific critiques of libertarian, agent-causal free will: (i) that it must be anomic or antiscientific; and (ii) that it must be causally detached from the choosing agent. She asserts that notwithstanding the Born rule, quantum theory may constitute precisely the sort of theory required for a nomic grounding of libertarian free will.

 

Kastner cites Freeman Dyson’s comment rejecting epiphenomenalism:

 

I think our consciousness is not just a passive epiphenomenon carried along by the chemical events in our brains, but is an active agent forcing the molecular complexes to make choices between one quantum state and another. In other words, mind is already inherent in every electron, and the processes of human consciousness differ only in degree but not in kind from the processes of choice between quantum states which we call “chance” when they are made by electrons.

 

Kastner then proposes not just a panpsychic quantum reality, but a pan-volitional basis for it:

 

Considering the elementary constituents of matter as imbued with even the minutest propensity for volition would, at least in principle, allow the possibility of a natural emergence of increasingly efficacious agent volition as the organisms composed by them became more complex, culminating in a human being. And allowing for volitional causal agency to enter, in principle, at the quantum level would resolve a very puzzling aspect of the indeterminacy of the quantum laws – the seeming violation of Curie's Principle, in which an outcome occurs for no reason at all. This suggests that, rather than bearing against free will, the quantum laws could be the ideal nomic setting for agent-causal free will.

 

Kastner, Kauffman & Epperson (2018) formalise the relationship between potentialities and actualities as a modification of Descartes' res cogitans (purely mental substance) and res extensa (material substance) into res potentiae and res extensa, comprising the potential and actual aspects of ontological reality. Unlike Cartesian dualism, these are not separable or distinct, but are manifest in all situations where the potential becomes actual, particularly in the process of quantum measurement in PTI, citing McMullin (1984) on the limits of imagination of the res potentiae:

 

… imaginability must not be made the test for ontology. The realist claim is that the scientist is discovering the structures of the world; it is not required in addition that these structures be imaginable in the categories of the macroworld.

 

They justify this by noting that human evolutionary survival has depended on dealing with the actual, so the potential may not be imaginable in our conscious frame of reference. However, one can note that the strong current of animism in human cultural history suggests a marked focus on the potential, and on its capacity to become actual in hidden, unpredictable sources of accident or misfortune. In addition to just such unexpected real-world examples, they note the applicability of this to a multiplicity of quantum phenomena:

 

Thus, we propose that quantum mechanics evinces a reality that entails both actualities (res extensa) and potentia (res potentia), wherein the latter are as ontologically significant as the former, and not merely an epistemic abstraction as in classical mechanics. On this proposal, quantum mechanics IS about what exists in the world; but what exists comprises both possibles and actuals. Thus, while John Bell's insistence on ‘beables’ as opposed to just ‘observables’ constituted a laudable return to realism about quantum theory in the face of growing instrumentalism, he too fell into the default actualism assumption; i.e., he assumed that to ‘be’ meant ‘to be actual’, so that his ‘beables’ were assumed to be actual but unknown hidden variables.

 

What the EPR experiments reveal is that while there is, indeed, no measurable nonlocal, efficient causal influence between A and B, there is a measurable, nonlocal probability conditionalization between A and B that always takes the form of an asymmetrical internal relation. For example, given the outcome at A, the outcome at B is internally related to that outcome. This is manifest as a probability conditionalization of the potential outcomes at B by the actual outcome at A.

 

Nonlocal correlations such as those of the EPR entanglement experiment below can thus be understood as a natural, mutually constrained relationship between the kinds of spacetime actualities that can result from a given possibility – which itself is not a spacetime entity. Kastner quotes Anton Zeilinger (2016):

 

…it appears that on the level of measurements of properties of members of an entangled ensemble, quantum physics is oblivious to space and time.

 

Kastner (2021b) considers how the spacetime manifold emerges from a quantum substratum through the transactional process (fig 72(6)), in which spacetime events and their connections are established. The usual notion of a background spacetime is replaced by the quantum substratum, comprising quantum systems with non-vanishing rest mass, whose corresponding internal periodicities function as internal clocks defining proper times and, in turn, inertial frames that are not themselves aspects of the spacetime manifold.

 

Three years after John Cramer published the transactional interpretation, I wrote a highly speculative paper, "Dual-time Supercausality" (King 1989, Vannini 2006), based on John's description, which anticipated many of the same conclusions that emerge in Ruth Kastner's far more comprehensive development. Summing up its main conclusions, we have:

 

(1) Symmetric-time: This mode of action of time involves a mutual space-time relationship between emitter and absorber. Symmetric-time determines which, out of the ensemble of possibilities predicted by the probability interpretation of quantum mechanics, is the actual one chosen. Such a description forms a type of hidden-variable theory explaining the selection of unique reduction events from the probability distribution. We will call this bi-directional causality transcausality.

(2) Directed-time: Real quantum interaction is dominated by retarded-time, positive-energy particles. The selection of temporal direction is a consequence of symmetry-breaking, resulting from energy polarization, rather than time being an independent parameter. The causal effects of multi-particle ensembles result from this dominance of retarded radiation, as an aspect of symmetry-breaking.

 

Dual-time is thus a theory of the interaction of two temporal modes, one time-symmetric, which selects unique events from ensembles, and the other time-directed, which governs the consistent retarded action of the ensembles. These are not contradictory; each on its own forms an incomplete description. Temporal causality is the macroscopic approximation of this dual theory under the correspondence principle. The probability interpretation governs the incompleteness of directed-causality to specify unique evolution in terms of initial conditions.

 

Quantum-consciousness has two complementary attributes, sentience and intent:

(a) Sentience represents the capacity to utilise the information in the advanced absorber waves and is implicitly transcausal in its basis. Because the advanced components of symmetric-time cannot be causally defined in terms of directed-time, sentience is complementary to physically-defined constraints.

(b) Intent represents the capacity to determine a unique outcome from the collection of such absorber waves, and represents the selection of one of many potential histories. Intent addresses the two issues of free-will and the principle of choice in one answer – free-will necessarily involves the capacity to select one out of many contingent histories and the principle of choice manifests the essential nature of free-will at the physical level.

 

The transactional interpretation presents a unique view of cosmology, involving an implicit space-time anticipation, in which a real exchange (e.g. a photon emitted by a light bulb and absorbed on a photographic plate or elsewhere, or a Bell-type entanglement experiment with two detectors) is split into an offer wave from the emitter and retro-causal confirmation waves from the prospective absorbers which, after the transaction is completed, interfere to form the real photon confined between the emission and absorption vertices. We also experience these retro-causal effects in weak quantum measurement and delayed-choice experiments.

 

To get a full picture of this process, we need to consider the electromagnetic field as a whole, in which these same absorbers are also receiving offer waves from other emitters, so we get a network of virtual emitter-absorber pairs.

 

There is a fundamental symmetry between creation and annihilation, but there is a sting in the measurement tail. When we do an interference experiment with real positive-energy photons, we know each photon came from the small region within the light source, but the locations of the potential absorbers affected by the wave function are spread across the world at large. The photon could be absorbed anywhere on the photographic plate, or before it if it hits dust in the apparatus, or after if it goes right through the plate and out of the apparatus altogether, just as radioactive particles escape the exponential potential barrier of the nucleus. The problem concerning wave function collapse is: which absorber?

 

In all these cases, once a potential absorber becomes real, all the other potential absorbers have zero probability of absorption, so the change occurs instantaneously across space-time relative to the successful absorber. This is the root problem of quantum measurement. Special relativistic quantum field theory is time-symmetric, so wave function collapse is most closely realised in the transactional interpretation, where the real wave function is neither the emitter's spreading linear retarded wave, nor any of the prospective absorbers' linear advanced waves, but the result of a phase transition, in which all these hypothetical offer and confirmation waves resolve into one or more real wave functions linking creation and annihilation vertices. It is the nature of this phase transition and its non-linearity which holds the keys to life, the universe and everything, and potentially to the nature of time itself.

 

The entire notion in Bell experiments that communication between absorbers must be impossibly instantaneous, invoking super-luminal communication, is unnecessary, because the retrocausal confirmation wave perfectly cancels the time elapse of the offer wave. If detector 1 samples first, its confirmation goes back to the source photon-splitter, arriving at the same time as the original emission, and the offer wave collapses to a single photon emission to detector 2, which arrives there at exactly the time when photon 2 should have been sampled with the complementary polarisation, carrying this information as required. No superluminal interaction between absorbers occurs, even though the process looks instantaneous and would seem to involve infinite velocity. It looks instantaneous without contradiction because of the time-elapse cancellations, but if we follow it as a process, it is a kind of non-linear phase transition from a "plasma" state of offers and confirmations collapsing into a set of real photons with phonon-like real excitations connecting them.
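A minimal bookkeeping sketch of this cancellation, taking the emission as t = 0 with detectors 1 and 2 at distances d₁ and d₂ from the source:

offer, source → detector 1 (retarded): arrives at t₁ = +d₁/c
confirmation, detector 1 → source (advanced): arrives back at t₁ − d₁/c = 0, the moment of emission
collapsed offer, source → detector 2 (retarded): arrives at t₂ = +d₂/c

Both detections occur exactly when direct light-speed propagation dictates, so the correlation appears instantaneous across the separation while no signal ever travels faster than c.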

 

In Symbiotic Existential Cosmology this is envisaged as allowing a form of prescience, because the collapse has implicit information about the future state of the universe in which the absorbers exist. This may appear logically paradoxical, but no classical information is transferred, so there is no inconsistency. Modelling the collapse appears to place it outside space-time, but it is actually instantaneous, so dual-time is just a core part of the heuristic for understanding the non-linear process. This depends on transactional collapse being a non-random hidden-variable theory, in which non-local correlations of the universal wave function manifest as a complex system during collapse in a way that looks deceptively like randomness, because it is a complex, chaotic, ergodic process.

 

My perspective is that subjective conscious physical volition has to confer an evolutionary advantage, or it would be evolutionarily unstable and be discarded by neutral evolution, and this advantage has to involve real-time anticipation of existential threats to survival. So I favour the transactional interpretation, in which a real particle, e.g. a photon, is a superposition of a causal "offer wave" from an emitter complemented by potential retrocausal "confirmation waves" from absorbers. This is actually necessary, because the emission wave is a linear Schrödinger wave that spreads, but a real photon is an excitation between an emitter and an absorber, more like a simple harmonic phonon, non-linear in space with two foci, as in fig 73.

 

Fig 73: A transaction modelled by a phase transition from a virtual plasma to a real interactive solid spanning space-time, in which the wave functions have now become like the harmonic phonons of solid state physics.

 

I remain intrigued by the transactional principle because I am convinced that subjective consciousness is a successful form of quantum prediction in space-time that has enabled single-celled eucaryotes to conquer the biosphere before there were brains, which have evolved based on intimately-coupled societies of such cells (neurons and neuroglia) now forming the neural networks neuroscience tries to understand in classical causal terms.

 

The eucaryote endo-symbiosis in this view marks a unique discrete topological transformation of the membrane, unfolding attentive sentient consciousness and invoking the second stage of cosmological becoming that ends up as us, wondering what the hell is going on here. This is the foundation of emergence as quantum cosmology, and it explains why we have the confounding existential dilemma we do have, and why it all comes back to biospheric symbiosis being the centre of the cyclone of survival for us as a climax species.

 

The full picture of a transaction process is a population of real or potential emitters in excited states and potential ground-state absorbers, with their offer and confirmation wave functions extending throughout space-time, as in the Feynman representation. As the transaction proceeds, this network undergoes a phase transition from a "virtual plasma" state to a "real solid", in which the excited emitters are all paired with actual absorbers in the emitters' future at later points in space-time. This phase transition occurs across space-time – i.e. transcausally – covering both space-like and time-like intervals. It has many properties of a phase transition from plasma to solid, with a difference: the strongest interactions don't win, except with a probability determined by the relative power of the emitter's wave amplitude at the prospective absorption event. This guarantees that the transaction conforms to the emitter's probability distribution, and to the absorber's as well. If a prospective absorber has already interacted with another emitter, it will not appear in the transaction network at this space-time point, and so ceases to be part of the collective transaction. Once this is the case, all other prospective absorbers of a given emitter scattered throughout space-time, both in the absorber's past and future, immediately have zero probability of absorption from any of the emitters, and no causal conflict or time loop arises.
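A toy numerical sketch of this musical-chairs pairing (my own illustration with made-up amplitudes, not Cramer's or Kastner's formalism): each emitter claims one still-free absorber with probability proportional to its squared offer-wave amplitude there; once claimed, an absorber drops out of the network, yet each emitter's statistics remain Born-weighted over the absorbers actually available:

import random

rng = random.Random(0)

def transact(amp2):
    # amp2[e][a]: squared offer-wave amplitude of emitter e at absorber a.
    # Each emitter pairs with one free absorber, weighted by wave power;
    # claimed absorbers leave the network (no double booking, no time loop).
    free, pairing = set(range(len(amp2[0]))), {}
    for e in range(len(amp2)):
        choices = [(a, amp2[e][a]) for a in free]
        r, acc = rng.random() * sum(w for _, w in choices), 0.0
        for a, w in choices:
            acc += w
            if r <= acc:
                pairing[e] = a
                free.discard(a)
                break
    return pairing

amp2 = [[0.4, 0.3, 0.2, 0.1],    # emitter 0 couples mostly to absorbers 0, 1
        [0.1, 0.2, 0.3, 0.4]]    # emitter 1 couples mostly to absorbers 2, 3
counts = [[0] * 4 for _ in amp2]
for _ in range(100000):
    for e, a in transact(amp2).items():
        counts[e][a] += 1
print(counts)  # each emitter's tallies track its Born weights over free absorbers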

 

Here is the problem. The transition is laterally across the whole of space-time, not along the arrow of time in either direction, so it cannot exist within space-time and really needs a dual time parameter. This is why my 1989 paper was entitled "Dual-time Supercausality".

 

Now this doesn't mean a transaction is just a random process. Rather, it is a kind of super-selection theory, in which the probability of absorption at an absorber conforms to the wave probability, but the decision-making process is spread between all the prospective absorbers distributed across space-time, not just an emitter-based random wave-power-normalised probability. The process is implicitly retro-causal in the same way weak quantum measurement and Wheeler's delayed-choice experiments are.

 

The fact that in the cat paradox experiment we see only a live or dead cat, and not a superposition, doesn't mean, however, that conscious observers witness only a classical world view. There are plenty of real phenomena in which we do observe quantum superpositions, including quantum erasure and quantum recoherence, where entangled particles can be distinguished, collapsing the entanglement, and then re-entangled. A laser consists of excited atoms above the ground state which can be triggered to emit photons coherently, indistinguishably entangled in a superposition of in-phase states stimulated by a standing wave caught between pairs of reflecting mirrors, so when we see the bright laser light we know it is a massive superimposed set of entangled photons.

 

In all forms of quantum entanglement experiment, when the state of one of the pair is detected, the informational outcome is "transmitted" instantaneously to the other detector, so that the other particle's state is definitively complementary, even though the detectors can be separated by space-like as well as time-like intervals and this transmission cannot be used to relay classical information. This again is explained by the transactional interpretation, because the confirmation wave of the first detector of the pair is transmitted retro-causally back to the source event where the splitting occurred, and then causally out to the second detector, where the particle now has an obligately complementary spin or polarisation when detection occurs.

 

What the transactional interpretation does provide is a real collapse process, in which the universe is neither stranded in an Everett probability multiverse, nor in a fully collapsed classical state, but can be anywhere in between, depending on which agents are doing the measuring in a given theory. Nor is collapse necessarily random and thus meaningless; rather it is a space-time-spanning non-linear phase transition, involving bidirectional hand-shaking between past and future. The absorbers are all in an emitter's future, so there is a musical-chairs dance happening in the future. And those candidates may also be absorbers of other emitters, and so on, so one can't determine the ultimate boundary conditions of the problem. Somehow the "collapse", which we admit violates retarded causality, results in one future choice. This means that there is no prohibition on its being resolved by the future affecting the outcome, because the actual choice has no relation to classical causality.

 

The only requirement is that individual observations are asymptotic to the Born probability interpretation normalised by the wave function power φ·φ*, but this could arise from a variety of complex trans-deterministic quasi-random processes, where multiple entanglements generate effective statistical noise, while having a basis in an explicit hidden variable theory. The reason for the Born asymptote could thus be simply that the non-linear phase transition of the transaction, like the cosmic wave function of the universe, potentially involves everything there is: the ultimate pseudo-random optimisation process concealing a predictive hidden variable theory. One should point out that the near-universal assumption that the probability interpretation implies pure randomness normalised by the wave power carries as much onus of scientific proof as does any hidden variable theory, such as transactional collapse.
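As a minimal illustration that a deterministic, ergodic process can be observationally indistinguishable from Born randomness (an illustration of the idea only, not a proposed physical mechanism), the logistic map at r = 4 is fully deterministic yet chaotic, and pushing its orbit through the conjugacy u = (2/π) arcsin √x yields a uniformly distributed sequence that selects outcomes with exactly the |ψ|² frequencies:

import math

born = [0.5, 0.3, 0.2]                         # target |psi_i|^2 weights
cum = [sum(born[:i + 1]) for i in range(len(born))]

x = 0.1234567                                  # deterministic chaotic seed
counts = [0] * len(born)
for _ in range(200000):
    x = 4.0 * x * (1.0 - x)                    # logistic map step, r = 4
    if not 0.0 < x < 1.0:                      # guard: floating point can
        x = 0.3141592                          # collapse the orbit
    u = (2.0 / math.pi) * math.asin(math.sqrt(x))   # uniform on [0, 1]
    i = next((i for i, c in enumerate(cum) if u <= c), len(born) - 1)
    counts[i] += 1

print([round(c / 200000, 3) for c in counts])  # ~ [0.5, 0.3, 0.2]

An observer with access only to the outcome sequence sees Born statistics; nothing in the frequencies betrays the underlying determinism.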

 

Hidden variable theories assert that there is a process underlying quantum uncertainty, which is by default assumed to be "random", but the onus of scientific proof lies as much with establishing such a source of "pure" randomness in the universe as it does with finding a succinct hidden variable theory, transcending those like the pilot wave theory, in a full explanation. The transcausal aspects of transactional quantum collapse may make such a naked theory impossible to establish, meaning that both the hidden TOE and the assumed randomness become undecidable propositions, which intuition can penetrate empirically but logical proof cannot.

 

The transactional interpretation is also one in which subjective conscious volition and meaning can become manifest in cosmic evolution, in which the universe is in a state of dynamic ramification and collapse of quantum superpositions. The key point here is that subjective conscious volition needs to have an anticipatory property in its own right, independent of brain mechanisms like attention processes, or it will be neutral to natural selection, even if we do have free will, and would not have been selected for, all the way from founding eucaryotes to Homo sapiens. The transactional interpretation, by involving future absorbers in the collapse process, provides just such an anticipatory feature.

 

It is one thing to have free will, and it's another to use free will for survival on the basis of (conscious) prediction, or anticipation. Our conscious brains are striving to be predictive, to the extent that we are subject to flash-lag perceptual illusions, in which perceptual processes attempt, sometimes incorrectly, to predict the path of rapidly moving objects (Eagleman & Sejnowski 2000), so the question is pivotal. Anticipating future threats and opportunities is key to how we evolved as conscious organisms, and this is pivotal over short immediate time scales, like the snake's or tiger's strike which we survive. Anticipating reality in the present is precisely what subjective consciousness is here to do.

 

The hardest problem of consciousness is thus that, to be conserved by natural selection, subjective consciousness (a) has to be volitional, i.e. it must affect the world physically so as to be subject to natural selection, and (b) has to be predictive as well. Free will without predictivity is neutral to evolution, just like random behaviour, and will not be selected for. If we were dealing with classical reality, we could claim this is merely a computational requirement, but why then do we have subjective experience at all? Why not just recursive predictive attention processes with no subjectivity?

 

Here is where the correspondence between sensitive dynamic instability at tipping points and quantum uncertainty comes into the picture. We know biology, and particularly brain function, is a dynamically unstable process, with sensitive instabilities that are fractal down to the quantum level of ion channels, enzyme molecules whose active sites are enhanced by quantum tunnelling, and the quantum parallelism of molecular folding and interactive dynamics. We also know that brain dynamics operating close to the edge of chaos converge to dynamic crisis during critical decision-making uncertainties that do not have an obvious computational, cognitive or reasoned disposition. We also know that at these points the very sensitivity to existing conditions, and other processes such as stochastic resonance, can allow effects at the micro level, approaching quanta, to affect the outcome of global brain states.
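Stochastic resonance itself is easy to exhibit numerically (a generic textbook toy with hypothetical parameters, not a model of any specific neural system): a sub-threshold periodic signal is invisible to a threshold detector in quiet conditions, is carried across the threshold by moderate noise, and is drowned by strong noise, so signal tracking peaks at an intermediate noise level:

import math, random

rng = random.Random(42)
THRESHOLD, N = 1.0, 20000

def detection_correlation(noise_sd):
    # correlation between a sub-threshold sinusoid and the detector's firing
    corr = 0.0
    for n in range(N):
        s = 0.5 * math.sin(2 * math.pi * n / 100)    # peak 0.5 < threshold 1.0
        fired = 1.0 if s + rng.gauss(0, noise_sd) > THRESHOLD else 0.0
        corr += fired * s
    return corr / N

for sd in [0.1, 0.3, 1.0, 3.0, 10.0]:
    print(sd, round(detection_correlation(sd), 4))
# near zero for tiny noise, maximal at intermediate noise, degrading thereafter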

 

And those with any rational insight can see that, for both theoretical and experimental reasons, classical causal closure of the universe in the context of brain dynamics is an unachievable quest. Notwithstanding Libet's attempt, there is no technological way to experimentally verify that the brain is causally closed, and the claim flies in the face of the fractal molecular nature of biological processes at the quantum level.

 

Nevertheless we can understand that subjective conscious volition cannot enter into causal conflict with brain processes which have already established an effective computational outcome, as we do when we reach a prevailing reasoned conclusion, so free will is effectively restricted to situations where the environmental circumstances are uncertain, or not effectively computable, or perceived consciously to be anything but certain.

 

This in turn means that the key role of free will is not applying it to rationally or emotionally foregone conclusions but to environmental and strategic uncertainties, especially involving other conscious agents whose outcomes become part of quantum uncertainty itself.

 

The natural conclusion is that conscious free will has been conserved by evolution because it provides an evolutionary advantage in anticipating root uncertainties in the quantum universe, and only these, including environmental and contextual uncertainties which are themselves products of quantum uncertainty amplified by unstable processes in the molecular universe, such as quantum kinetic billiards. This seems almost repugnantly counter-intuitive, because we tend to associate quantum uncertainty and the vagaries of fate with randomness, but this is no more scientifically established than causal closure of the universe in the context of brain function. All the major events of history that are not foregone conclusions result from conscious free will applied to uncertainty, such as Nelson turning his blind eye to the telescope in the eventually successful Battle of Copenhagen.

 

So the question remains that, when we turn to the role of subjective conscious volition in quantum uncertainty, this comes down not just to opening the box of Schrödinger's cat, but to anticipating uncertain events more often than random chance would predict in real-life situations.

 

That is where the transactional approach comes into its own because, while the future at the time of casting the emission die is an indeterminate set of potential absorbers, the retro-causal information contained in the transaction implicitly reveals which future absorbers are actually able to absorb the real emitted quantum, and hence information about the real state of the future universe, not just its probabilities at emission. The transaction therefore carries additional implicit "encoded" information about the actual future state of the universe and its possibilities, which can be critical for survival in natural selection.

 

Although, like the "transmission" of a detection to the other detector in an entanglement experiment, quantum transactions cannot be used to transfer classical information faster than the speed of light, this doesn't mean they are random or have no anticipatory value, just that they cannot be used for causal deduction.

 

Because the "holistic" nature of conscious awareness is an extension to brain dynamics of the global unstable excitatory dynamics of individual eucaryote cells, a key aspect of subjective consciousness may be that it becomes sensitive to the wave-particle properties of quantum transactions with the natural environment, in a process of cellular quantum sentience involving sensitivity to quantum modes, including the photons, phonons and molecular orbital effects constituting cellular vision, audition and olfaction. Expanded into brain processes, this cellular quantum dynamics then becomes integral to the binding of consciousness into a coherent whole.

 

If we view neurodynamics as a fully quantum process, in the most exotic quantum material in the universe, the wave aspects consist of parallel excitation modes representing the competing possibilities of response to environmental uncertainties. If there is an open-and-shut case on logical or tactical grounds, one mode will win out, pretty much in the manner of Edelman's (1987) neural Darwinism or Dennett's (1991) multiple drafts. In terms of quantum evolution, the non-conscious processes form overlapping wave functions, proceeding according to deterministic Schrödinger solutions (von Neumann type 2 processes), but in situations where subjective consciousness becomes critical to making an intuitive decision, the brain dynamic approaches an unstable tipping point, in which system uncertainty becomes pivotal (represented in instability of global states, which are in turn sensitive to fractal scales of instability down to the molecular level). Subjective consciousness then intervenes, causing an intuitive decision through a (type 1 von Neumann) process of wave function collapse of the superimposed modes.

 

From the inside, this feels like, and IS, a choice of "free will", aka subjective conscious volition over the physical universe. From the outside, it looks like collapse of an uncertain brain process to one of its eigenfunction states, which then becomes apparent. There is a very deep mystery in this process, because the physical process looks and remains uncertain and indeterminate, but from inside, in complete contradiction, it looks and feels like the exercise of intentional will determining future physical outcomes. So in a fundamental way it is like a Schrödinger cat experiment in which the cat survives more often than not, i.e. we survive. Now that is a really confounding issue at the very nub of what conscious existence is about, and why SEC has the cosmological axiom of subjectivity to resolve it, because it is a fundamental cosmological paradox otherwise. So we end up with the ultimate paradox of consciousness: how can we not only predict future outcomes that are quantum uncertain, but capitalise on the ones that promote our survival, i.e. throw a live cat more often than chance would dictate?

 

This is the same dilemma that SEC addresses in primal subjectivity, and it is also in Cathy Reason's theorem: from the physical point of view, causal closure of the brain is an undecidable proposition, because we can't physically prove conscious will has physical effect, but neither can we prove causal closure of the (classical) universe. On the other hand, as Cathy's theorem intimates, conscious self-certainty implies we know we changed the universe: certainty of will as well as certainty of self. So the subjective perspective is certain and the objective perspective is undecidable. In exactly the same way, the cat paradox outcome is uncertain and can't be hijacked physically, but the autonomous intentional will used to tip the uncertain brain state has confidence of overall efficacy. This is the key to consciousness, free will and survival in the jungle, when cognition stops dead because of all the other conscious agents rustling in the grass and threatening to strike, which are uncomputable because they too are conscious! It's also the key to psi, but in a more marginal way, because psi is trying to pass this ability back into the physical, where it drifts towards the probability interpretation. That's why I accept it, but don't abuse the siddhis by declaring them!

 

Consciousness is retained by evolution because it is the product of a Red Queen neurodynamic race between predators and prey, in a similar way to how sexuality has become a self-perpetuating genetic race between parasites and hosts, creating individual variation and thus avoiding boom-and-bust epidemics.

 

Cramer (2022) notes a possible verifiable source of advanced waves:

 

In the 1940s, young Richard Feynman and his PhD supervisor John Wheeler decided to take the advanced solution seriously and to use it to formulate a new electromagnetism, now called Wheeler-Feynman absorber theory (WF).  WF assumes that an oscillating electric charge produces advanced and retarded waves with equal strengths. However, when the retarded wave is subsequently absorbed (in the future), a cancellation occurs that erases all traces of the advanced waves and their time-backward “advanced effects.” WF gives results and predictions identical to those of conventional electromagnetic theory. However, if future retarded-wave absorption is somehow incomplete, WF suggests that this absorption deficiency might produce experimentally observable advanced effects.

 

When Bajlo (2017) made measurements on cold, clear, dry days, observing as the Earth rotated and the antenna axis swept across the galactic center, where wave-absorption variations might occur, in a number of these measurements he observed strong advanced signals (6.94 to 26.5 standard deviations above noise) that arrived at the downstream antenna a time 2D/c before the main transmitted pulse signal. Variations in the advanced-signal amplitude as the antenna axis swept across the galactic center were also observed: the amplitude was reduced by up to 50% of the off-center maximum when pointed directly at the galactic center (where more absorption is expected). These results constitute a credible observation of advanced waves.

  

Fig 74: Wheeler's (1983) delayed choice experiment shows that light from a distant quasar, gravitationally lensed around an intervening galaxy, can be determined to have passed one way or the other around it, or in a superposition of both, depending on whether detection of one or other path, or an interference measurement, is made when it reaches Earth. (b, c) An experimental implementation of Wheeler's idea along a satellite-ground interferometer that extends for thousands of kilometres in space (Vedovato et al. 2017), using shutters on an orbiting satellite.

  

Superdeterminism: There is another interpretation of quantum reality called super-determinism (Hossenfelder & Palmer 2020), which has an intriguing relationship with retro-causality and can still admit free will, despite the seeming contradiction in the title. Bell's theorem assumes that the measurements performed at each detector can be chosen independently of each other and of the hidden variables that determine the measurement outcome: ρ(λ|a,b) = ρ(λ).

  

In a super-deterministic theory this relation is not fulfilled, ρ(λ|a,b) ≠ ρ(λ), because the hidden variables are correlated with the measurement settings. Since the choice of measurements and the hidden variable are predetermined, the results at one detector can depend on which measurement is done at the other without any need for information to travel faster than the speed of light. The assumption of statistical independence is sometimes referred to as the free-choice or free-will assumption, since its negation would seem to imply that human experimentalists are not free to choose which measurement to perform. But this is incorrect: what the outcome depends on are the actual measurements made. For every possible pair of measurements a, b there is a predefined trajectory, determined both by the particle emission and by the measurement in place at the time absorption takes place. Thus in general the experimenter still has the free will to choose a, b, or even to change the detector set-up, as in the Wheeler delayed-choice experiment in fig 74, and science proceeds as usual, but the outcome depends on the actual measurements made. In principle, super-determinism is untestable, as the correlations can be postulated to have existed since the Big Bang, making the loophole impossible to eliminate. However, it has an intimate relationship with the transactional interpretation and its implicit retro-causality, because the transaction includes the absorbing conditions, so the two are actually compatible.
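A toy numerical contrast makes the role of statistical independence vivid (my own illustration with hypothetical outcome functions, not Hossenfelder & Palmer's model): when λ is drawn independently of the settings, deterministic local outcomes stay within the CHSH bound of 2, whereas letting the distribution of λ depend on the settings reproduces the quantum correlation −cos(a−b) and reaches 2√2:

import math, random

rng = random.Random(7)
TRIALS = 200000

def E_independent(a, b):
    # statistical independence: lambda uniform, independent of settings;
    # outcomes are deterministic local functions of (setting, lambda)
    s = 0
    for _ in range(TRIALS):
        lam = rng.uniform(0, 2 * math.pi)
        A = 1 if math.cos(a - lam) >= 0 else -1
        B = -1 if math.cos(b - lam) >= 0 else 1
        s += A * B
    return s / TRIALS

def E_superdet(a, b):
    # statistical independence violated: the predetermined outcome pair
    # (playing the role of lambda) is distributed conditionally on the
    # settings so as to reproduce the quantum correlation -cos(a - b)
    q = (1 - math.cos(a - b)) / 2        # P(A*B = +1)
    s = 0
    for _ in range(TRIALS):
        A = rng.choice((-1, 1))
        B = A if rng.random() < q else -A
        s += A * B
    return s / TRIALS

def chsh(E):
    a, a2, b, b2 = 0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print("CHSH, statistical independence:", round(chsh(E_independent), 3))  # <= 2
print("CHSH, rho(lambda|a,b):         ", round(chsh(E_superdet), 3))     # ~ 2.83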

 

Sabine Hossenfelder (2020) points out exactly how superdeterminism can violate statistical independence:

 

I here want to explain how the strangeness disappears if one is willing to accept that one of the assumptions we have made about quantum mechanics is not realized in nature: Statistical Independence. Loosely speaking, Statistical Independence means that the degrees of freedom of spatially separated systems can be considered uncorrelated, so in a superdeterministic model they are generically correlated, even in absence of a common past cause. The way that Statistical Independence makes its appearance in superdeterminism is that the probability distribution of the hidden variables given the detector settings ρ(λ|θ) is not independent of the detector settings, i.e. ρ(λ|θ) ≠ ρ(λ). What this means is that if an experimenter prepares a state for a measurement, then the outcome of the measurement will depend on the detector settings. The easiest way to think of this is considering that both the detector settings, θ, and the hidden variables, λ, enter the evolution law of the prepared state. As a consequence, θ and λ will generally be correlated at the time of measurement, even if they were uncorrelated at the time of preparation. Superdeterminism, then, means that the measurement settings are part of what determines the outcome of the time-evolution of the prepared state. What does it mean to violate Statistical Independence? It means that fundamentally everything in the universe is connected with everything else, if subtly so. You may be tempted to ask where these connections come from, but the whole point of superdeterminism is that this is just how nature is. It's one of the fundamental assumptions of the theory, or rather, you could say one drops the usual assumption that such connections are absent. The question for scientists to address is not why nature might choose to violate Statistical Independence, but merely whether the hypothesis that it is violated helps us to better describe observations.

 

However, note that the "toy" superdeterministic hidden variable theory (Donadi & Hossenfelder 2022) uses "the master equation for one of the most common examples of decoherence – amplitude damping in a two-level system". But decoherence is a theory in which an additional term is added to model the increasing probability of a quantum getting hit by another quantum, and it literally uses forced damping to suppress the entangled off-diagonal components of the wave function matrix.

 

Schreiber (1995) sums up the case for consciousness collapsing the wave function as follows:

 

“The rules of quantum mechanics are correct but there is only one system which may be treated with quantum mechanics, namely the entire material world. There exist external observers which cannot be treated within quantum mechanics, namely human (and perhaps animal) minds, which perform measurements on the brain causing wave function collapse.”

 

Henry Stapp’s (2001) comment is very pertinent to the cosmology I am propounding, because it implies the place where collapse occurs lies in the brain making quantum measurements of its own internal states:

 

From the point of view of the mathematics of quantum theory it makes no sense to treat a measuring device as intrinsically different from the collection of atomic constituents that make it up. A device is just another part of the physical universe... Moreover, the conscious thoughts of a human observer ought to be causally connected most directly and immediately to what is happening in his brain, not to what is happening out at some measuring device... Our bodies and brains thus become ... parts of the quantum mechanically described physical universe. Treating the entire physical universe in this unified way provides a conceptually simple and logically coherent theoretical foundation...

 

Quantum entanglement is another area where consciousness may have a critical role. Einstein, Podolsky and Rosen (1935) proposed a locally causal limitation on any hidden variable theories describing the situation where two particles are entangled coherently in a single wave function. For example, an excited calcium atom, because of the two electrons in its outer shell, can emit two (yellow and blue) photons of complementary polarisation in a single transition from a zero-spin to a zero-spin outer shell. Bell's (1966) theorem demonstrated a discrepancy between quantum mechanics and locally causal theories, in which information between hidden sub-quantum variables cannot be transferred faster than light. Multiple experiments using Bell's theorem have found that the polarisations, or other quantum states of the particles, such as spin, are correlated in ways that violate local causality and are not limited by the velocity of light (Aspect et al. 1982). This "spooky action at a distance", which Einstein disliked, shows that the state of either particle remains indeterminate until we measure one of them, when the other's state is instantaneously determined to be complementary. This cannot, however, be used to send logical classical information faster than light, or backwards in time, but it indicates that the quantum universe is a highly entangled system in which potentially all particles in existence are involved.
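In the standard CHSH form (a textbook summary, with the usual maximal-violation angles), for polarisation measurements along settings a and b on an entangled pair the quantum prediction is:

E(a,b) = −cos(a − b)
S = E(a,b) − E(a,b′) + E(a′,b) + E(a′,b′), with |S| ≤ 2 for any locally causal hidden variable model
at a = 0, a′ = π/2, b = π/4, b′ = 3π/4:  S = −√2/2 − √2/2 − √2/2 − √2/2 = −2√2, so |S| ≈ 2.83

It is this sinusoidal angular dependence, exceeding the local bound, that the Aspect experiments confirmed.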

 

Entanglement, Measurement and Phase Transition

 

A flurry of theoretical and experimental research has uncovered a strange new face of entanglement that shows itself not in pairs, but in constellations of particles (Wood 2023). Entanglement naturally spreads through a group of particles, establishing an intricate web of contingencies. But if you measure the particles frequently enough, destroying entanglement in the process, you can stop the web from forming. In 2018, three groups of theorists (Chan et al. 2019, Li et al. 2018, Skinner et al. 2019) showed that these two states — web or no web — are reminiscent of familiar states of matter such as liquid and solid. But instead of marking a transition between different structures of matter, the shift between web and no web indicates a change in the structure of information.

 

“This is a phase transition in information. It’s where the properties of information — how information is shared between things — undergo a very abrupt change.” – Brian Skinner

 

 


Fig 74b: Entanglement phase transition and measurement.

 

More recently, a separate trio of teams tried to observe that phase transition in action (Choi et al. 2020). They performed a series of meta-experiments to measure how measurements themselves affect the flow of information. In these experiments, they used quantum computers to confirm that a delicate balance between the competing effects of entanglement and measurement can be reached. The transition’s discovery has launched a wave of research into what might be possible when entanglement and measurement collide. Matthew Fisher, a condensed matter physicist at the University of California, Santa Barbara, started studying the interplay of measurement and entanglement because he suspects that both phenomena could play a role in human cognition.

 

The Greenberger–Horne–Zeilinger state (Greenberger, Horne & Zeilinger 1989, Mermin 1990) is one of several three-particle entanglements that have become pivotal in quantum computing (Hussein et al. 2023). There is no standard measure of multi-partite entanglement, because different, mutually inconvertible types of multi-partite entanglement exist. Nonetheless, many measures define the GHZ state as a maximally entangled state. The GHZ state |GHZ⟩ = (|000⟩ + |111⟩)/√2 and the W state |W⟩ = (|001⟩ + |010⟩ + |100⟩)/√3 represent two non-biseparable classes of 3-qubit states, which cannot be transformed (not even probabilistically) into each other by local quantum operations. This three-particle entanglement problem is reminiscent of classical gravitation, which has a two-body inverse square law that in the three-body problem becomes intractably complex and chaotic, as Henri Poincaré found out: there is no general closed-form solution to the three-body problem, i.e. no general solution that can be expressed in terms of a finite number of standard mathematical operations.
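The inequivalence of the two classes shows up directly in their reduced states (a standard check, sketched here via Wootters' concurrence; the code is my illustration): discarding one qubit of the GHZ state leaves an unentangled two-qubit mixture (concurrence 0), whereas the W state retains pairwise entanglement (concurrence 2/3):

import numpy as np

def ket(bits):
    v = np.zeros(8, dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

GHZ = (ket("000") + ket("111")) / np.sqrt(2)
W = (ket("001") + ket("010") + ket("100")) / np.sqrt(3)

def reduced_AB(psi):
    # rho_AB = Tr_C |psi><psi| for a 3-qubit pure state
    m = psi.reshape(4, 2)
    return m @ m.conj().T

def concurrence(rho):
    # Wootters' formula for two-qubit mixed states
    sy = np.array([[0, -1j], [1j, 0]])
    Y = np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ Y @ rho.conj() @ Y))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

print("GHZ:", round(concurrence(reduced_AB(GHZ)), 3))  # 0.0
print("W:  ", round(concurrence(reduced_AB(W)), 3))    # ~0.667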

 

In an experiment to test the influence of conscious perception on quantum entanglement, Radin, Bancel & Delorme (2021) explored psychophysical (mind-matter) interactions with quantum entangled photons. Entanglement correlation strength measured in real time was presented via a graph or dynamic images displayed on a computer monitor or web browser. Participants were tasked with mentally influencing that metric, with particularly strong results observed in the three studies conducted (p < 0.0002). Radin, Michel & Delorme (2016) also reported a 5.72 sigma (p = 1.05×10⁻⁸) deviation from a null effect when participants focused their attention toward or away from a feedback signal linked in real time to the double-slit component of an interference pattern, suggesting consciousness affecting wave function collapse; for a review, see Milojevic & Elliot (2023). Radin (2023) has also reported deviations 7.3 sigma beyond chance (p = 1.4×10⁻¹³) in data from a network of electronic random number generators located around the world, which continuously recorded samples to explore a hypothesis predicting the emergence of anomalous structure in randomness correlated with events that attract widespread human attention, leaving little doubt that on average anomalous deviations in the random data emerged during such events. Mossbridge et al. (2014), in a meta-analysis, have also cited an organic unconscious anticipatory response to potential existential crises, which they term predictive anticipatory activity, similar to conscious quantum anticipation, citing anticipative entanglement-swapping experiments such as Ma et al. (2002).

 

Fig 75: (1) Quantum erasure shows it is also possible to 'uncollapse', or erase, such losses of entangled correlation by re-interfering the wave functions so we can no longer tell the difference. The superposition choices of the delayed choice experiment, fig 74, also do this. Erasure successfully recreates the lost correlations, detecting information about one of the particles and then erasing it again by re-interfering it back into the shared wave function, provided we use none of its information. Pairs of identically polarised correlated photons produced by a 'down-converter' bounce off mirrors, converge again at a beam splitter and pass into two detectors. A coincidence counter observes an interference pattern in the rate of simultaneous detections by the two detectors, indicating that each photon has gone both ways at the beam splitter, as a wave. Adding a polarisation shifter to one path destroys the pattern, by making it possible to distinguish the photons' paths. Placing two polarising filters in front of the detectors makes the photons identical again, erasing the distinction and restoring the interference pattern. (2) Delayed choice quantum eraser configuration. An individual photon goes through one (or both) of the two slits. One of the photons – the "signal" photon (red and blue lines) – continues to the target detector D0, which is scanned in steps along its x-axis. A plot of "signal" photon counts detected by D0 versus x can be examined to discover whether the cumulative signal forms an interference pattern. The other entangled photon – the "idler" photon (red and blue lines going downwards from the prism) – is deflected by prism PS, which sends it along divergent paths depending on whether it came from slit A or slit B. Detection of the idler photon by D3 or D4 provides delayed "which-path" information indicating whether the signal photon with which it is entangled had gone through slit A or B. On the other hand, detection of the idler photon by D1 or D2 provides a delayed indication that such information is not available for its entangled signal photon. Insofar as which-path information had earlier potentially been available from the idler photon, the information has been subjected to a "delayed erasure". (3) Delayed choice entanglement swapping, in which Victor is able to decide whether Alice's and Bob's photons are entangled or not after they have already been measured (Ma et al. 2002). (4) A photon is entangled with a photon that has already died (been sampled), even though they never coexisted at any point in time (Megidish 2012).

 

Phenomena including delayed choice quantum erasure and entanglement swapping (fig 75) demonstrate that the time of a quantum observation can be ambiguous, or possibly stand outside space-time, as the transactional picture suggests. The Wigner's friend experiment of fig 76c likewise shows that quantum path information can also take the form of a quantum measurement 'observer'. Narasimhan, Chopra & Kafatos (2019) draw particular attention to Kim et al. (2000) in regard to a "universal observer" integrating individual conscious observers and their observations:

 

While traditional double-slit experiments are usually interpreted as indicating that the collapse of the wave function involves choices by an individual observer in space-time, the extension to quantum eraser experiments brings in some additional subtle aspects relating to the role of observation and what constitutes an observer. Access to, and the interpretation of, information outside space and time may be involved. This directly ties to the question of where the Heisenberg-von Neumann cut is located and what its nature is. … There is a possibility that individual observers making choices in space and time are actually aspects of the universal Observer, a state masked by assumptions about individual human minds that may need further development and re-examination.

 

Summing up the position of physicists in a survey of participants in a foundations of quantum mechanics gathering, Schlosshauer et al. (2013) found that, while only 6% of physicists present believed consciousness plays a distinguished physical role, a majority believed it has a fundamental, although not distinguished role in the application of the formalism. They noted in particular that “It is remarkable that more than 60% of respondents appear to believe that the observer is not a complex quantum system.” Indeed on all counts queried there were wide differences of opinion, including which version of quantum mechanics they supported. Since all of the approaches are currently consistent with the predictions of quantum mechanics, these ambiguous figures are not entirely surprising.

 

The tendency towards an implicitly classical view of causality is similar to that among neuroscientists, with an added belief in the irreducible nature of randomness, as opposed to a need for hidden variables supporting quantum entanglement, rejecting Einstein's disclaimer that "God does not play dice with the universe." Belief in irreducible randomness means that the principal evidence for subjectivity in quanta – the idiosyncratic, unpredictable nature of individual particle trajectories – is washed out in the bath water of irreducible randomness, which converges to the wave amplitude on repetition, consistent with the correspondence principle: that the behaviour of systems described by quantum mechanics reproduces classical physics in the limit of large quantum numbers.

 

Non-IID Interactions May Preserve Quantum Reality

In Bohr's (1920) correspondence principle, systems described by quantum mechanics are expected to reproduce classical physics in the limit of large quantum numbers – if measurements performed on macroscopic systems have limited resolution and cannot resolve individual microscopic particles, then the results behave classically – the coarse-graining principle (Kofler & Brukner 2007). Subsequently Navascués & Wunderlich (2010) proved that in situations covered by IID measurements (independent and identically distributed), in which each run of an experiment must be repeated under exactly the same conditions and independently of other runs, we arrive at macroscopic locality. Similarly, temporal quantum correlations reduce to classical correlations and quantum contextuality reduces to macroscopic non-contextuality (Henson & Sainz 2015).
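
The content of the coarse-graining claim can be illustrated with a toy simulation (not the Navascués–Wunderlich proof): N IID pairs with an assumed per-pair correlation produce macroscopic intensities whose √N-scale fluctuations are Gaussian, with the microscopic correlations surviving only as a covariance. Since a manifestly local, classical program generates these statistics, this is the sense in which macroscopic locality emerges:

import numpy as np

rng = np.random.default_rng(0)
N = 100_000          # microscopic pairs per experimental run
c = -0.7             # assumed per-pair outcome correlation E(a, b)

def run():
    a = rng.choice([-1, 1], N)
    # give b correlation c with a: P(b = a) = (1 + c) / 2
    b = np.where(rng.random(N) < (1 + c) / 2, a, -a)
    return a.sum(), b.sum()          # coarse intensities, resolved ~ sqrt(N)

samples = np.array([run() for _ in range(2000)])
print(np.cov(samples.T) / N)         # Gaussian fluctuations, covariance ~ c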

 

However Gallego & Dakić (2021) have shown that, surprisingly, quantum correlations survive in the macroscopic limit if the correlations are not IID-distributed at the level of microscopic constituents, and that the entire mathematical structure of quantum theory, including the superposition principle, is preserved in the limit. This macroscopic quantum behaviour allows them to show that Bell nonlocality remains visible in the macroscopic limit.

 

“The IID assumption is not natural when dealing with a large number of microscopic systems. Small quantum particles interact strongly and quantum correlations and entanglement are distributed everywhere. Given such a scenario, we revised existing calculations and were able to find complete quantum behavior at the macroscopic scale. This is completely against the correspondence principle, and the transition to classicality does not take place” (Borivoje Dakić).

 

“It is amazing to have quantum rules at the macroscopic scale. We just have to measure fluctuations, deviations from expected values, and we will see quantum phenomena in macroscopic systems. I believe this opens the door to new experiments and applications” (Miguel Gallego).

 

Their approach is described as follows:

 

In this respect, one important consequence of the correspondence principle is the concept of macroscopic locality (ML): coarse-grained quantum correlations become local (in the sense of Bell) in the macroscopic limit. ML has been challenged in different circumstances, both theoretically and experimentally. However, as far as we know, nonlocality fades away under coarse graining when the number of particles N in the system goes to infinity, in a bipartite Bell-type experiment where the parties measure intensities with a resolution of the order of N^(1/2) – equivalently, O(N^(1/2)) coarse graining. Then, under the premise that particles are entangled only in independent and identically distributed pairs, Navascués & Wunderlich (2010) prove ML for quantum theory.

 

Fig 76: Macroscopic Bell-Type experiment.

 

We generalize the concept of ML to any level of coarse graining α ∈ [0, 1], meaning that the intensities are measured with a resolution of the order of N^α. We drop the IID assumption, and we investigate the existence of a boundary between quantum (nonlocal) and classical (local) physics, identified by the minimum level of coarse graining α required to restore locality. To do this, we introduce the concept of macroscopic quantum behavior (MQB), demanding that the Hilbert space structure, such as the superposition principle, is preserved in the thermodynamic limit.

 

Conclusion: We have introduced a generalized concept of macroscopic locality at any level of coarse graining α ∈ [0, 1]. We have investigated the existence of a critical value that marks the quantum-to-classical transition. We have introduced the concept of MQB at level α of coarse graining, which implies that the Hilbert space structure of quantum mechanics is preserved in the thermodynamic limit. This facilitates the study of macroscopic quantum correlations. By means of a particular MQB at α = 1/2, we show that α_c ≥ 1/2, as opposed to the IID case, for which α_IID ≤ 1/2. An upper bound on α_c is, however, lacking in the general case. The possibility that no such transition exists remains open, and perhaps there exist systems for which ML is violated at α = 1.

 

This means, for example, that both (a) neural system processing, where the quantum-unstable context is continually evolving as a result of edge-of-chaos dynamics, so that repeated IID measurements are never made, and (b) biological evolution, where a sequence of unique mutations becomes sequentially fixed by natural and sexual selection – which is also consciously mediated in eucaryote organisms – inherit implicit quantum non-locality in their evolution.

 

John Eccles (1986) proposed a quantum theory involving psychon quasi-particles mediating the uncertainty of synaptic transmission to complementary dendrons, cylindrical bundles of neurons arranged vertically in the six outer layers or laminae of the cortex. Eccles proposed that each of the 40 million dendrons is linked with a mental unit, or "psychon", representing a unitary conscious experience. In willed actions and thought, psychons act on dendrons and, for a moment, increase the probability of the firing of selected neurons through a quantum tunnelling effect in synaptic exocytosis, while in perception the reverse process takes place. This model has been elaborated by a number of researchers (Eccles 1990, 1994, Beck & Eccles 1992, Georgiev 2002, Hari 2008). The difficulty with the theory is that the psychons are then physical quasi-particles with integrative mental properties, so it is a quasi-physical description that does not manifest subjectivity except through its integrative physical properties. In the last chapter of his book The Neurophysiological Basis of Mind (1953), Eccles not only hypothesized the existence of a "self-conscious mind" relatively independent of the cerebral structures, but also supposed that a very weak influence of will on a few neurons of the cerebral cortex could cause remarkable changes in brain activity, leading to the notion of volition as a form of "psychokinesis" (Giroldini 1991), supported also by Wilder Penfield (1960).

 

The Quantum Measurement Problem May Contradict Objective Reality

 

In quantum theory, before collapse, the system is said to be in a superposition of two states, and this quantum state is described by the wave function, which evolves in time and space. This evolution is both deterministic and reversible: given an initial wave function, one can predict what it will be at some future time, and one can in principle run the evolution backward to recover the prior state. Measuring the wave function, however, causes it to collapse, mathematically speaking, such that the system in our example shows up as either heads or tails. The collapse is irreversible and one-time-only, and no one knows what defines the process or boundaries of measurement.

 

One model that preserves the absoluteness of the observed event — either heads or tails for all observers—is the GRW theory, where quantum systems exist in a superposition of states until the superposition spontaneously and randomly collapses, independent of an observer. Whatever the outcome—heads or tails in our example—it shall hold for all observers. But GRW, and the broader class of “spontaneous collapse” theories, run foul of a long-cherished physical principle: the preservation of information.  By contrast, the “many worlds” interpretation of quantum mechanics allows for non-absoluteness of observed events, because the wave function branches into multiple contemporaneous realities, in which in one “world,” the system will come up heads, while in another, it’ll be tails.

 

Ormrod, Venkatesh and Barrett (2023, Ananthaswamy 2023) focus on perspectival theories that obey three properties:

 

(1) Bell nonlocality (B). Alice chooses her type of measurement freely and independently of Bob, and vice versa –  of their own free will – an important assumption. Then, when they eventually compare notes, the duo will find that their measurement outcomes are correlated in a manner that implies the states of the two particles are inseparable: knowing the state of one tells you about the state of the other.

(2) The preservation of information (I). Quantum systems that show deterministic and reversible evolution satisfy this condition.  If you are wearing a green sweater today, in an information-preserving theory, it should still be possible, in principle, 10 years hence to retrieve the colour of your sweater even if no one saw you wearing it.

(3) Local dynamics (L). If there exists a frame of reference in which two events appear simultaneous, then the regions of space are said to be “space-like separated.” Local dynamics implies that the transformation of a system that takes a set of input states and produces a set of output states in one of these regions cannot causally affect the transformation of a system in the other region any faster than the speed of light, and vice versa. Each subsystem undergoes its own transformation, and so does the entire system as a whole. If the dynamics are local, the transformation of the full system can be decomposed into transformations of its individual parts: the dynamics are said to be separable.   In contrast, when two particles share a state that’s Bell nonlocal (that is, when two particles are entangled, per quantum theory), the state is said to be inseparable into the individual states of the two particles. If transformations behaved similarly, in that the global transformation could not be described in terms of the transformations of individual subsystems, then the whole system would be dynamically inseparable.

 

Fig 76b: A graphical summary of the theorems. Possibilistic Bell Nonlocality is Bell Nonlocality that arises not only at the level of probabilities, but at the level of possibilities.

 

Their work analyses how perspectival quantum theories are BINSC, and since NSC implies L, BINSC theories are BIL theories. Such BIL theories are then required to handle a deceptively simple thought experiment. Imagine that Alice and Bob, each in their own lab, make a measurement on one of a pair of particles. Both Alice and Bob make one measurement each, and both perform the exact same measurement. For example, they might both measure the spin of their particle in the up-down direction. Viewing Alice and Bob and their labs from the outside are Charlie and Daniela, respectively. In principle, Charlie and Daniela should be able to measure the spin of the same particles, say, in the left-right direction. In an information-preserving theory, this should be possible. Using this scenario, the team proved that the predictions of any BIL theory for the measurement outcomes of the four observers contradict the absoluteness of observed events. This leaves physicists at an unpalatable impasse: either accept the non-absoluteness of observed events or give up one of the assumptions of a BIL theory.

 

Ormrod says dynamical separability is “kind of an assumption of reductionism – you can explain the big stuff in terms of these little pieces.” Just like a Bell nonlocal state cannot be reduced to some constituent states, it may be that the dynamics of a system are similarly holistic, adding another kind of nonlocality to the universe.  Importantly, giving it up doesn’t cause a theory to fall afoul of Einstein’s theories of relativity, much like physicists have argued that Bell nonlocality doesn’t require superluminal or nonlocal causal influences but merely nonseparable states. Ormrod, Venkatesh and Barrett note: “Perhaps the lesson of Bell is that the states of distant particles are inextricably linked, and the lesson of the new ... theorems is that their dynamics are too.” The assumptions used to prove the theorem don’t explicitly include an assumption about freedom of choice because no one is exercising such a choice. But if a theory is Bell nonlocal, it implicitly acknowledges the free will of the experimenters.

 

Fig 76c: Above: An experimental realisation of the Wigner's friend setup, showing there is no such thing as objective reality – quantum mechanics allows two observers to experience different, conflicting realities. Below: the proof-of-principle experiment of Bong et al. (2020), demonstrating the mutual inconsistency of 'No-Superdeterminism', 'Locality' and 'Absoluteness of Observed Events'.

 

An experimental realisation of non-absoluteness of observation has been devised (Proietti et al. 2019), as shown in fig 76c, using quantum entanglement. The experiment involves two people observing a single photon that can exist in one of two alignments, but until the moment someone actually measures it to determine which, the photon is in a superposition. A scientist analyses the photon and determines its alignment. Another scientist, unaware of the first's measurement, is able to confirm that the photon – and thus the first scientist's measurement – still exists in a quantum superposition of possible outcomes. As a result, each scientist experiences a different reality – both "true", even though they disagree with each other. In a subsequent experiment, Bong et al. (2020) transform the thought experiment into a mathematical theorem that confirms the irreconcilable contradiction at the heart of the Wigner scenario. The team also tests the theorem with an experiment, using photons as proxies for the humans, accompanied by new forms of Bell inequalities, by building on a scenario with two separated but entangled friends. The researchers prove that if quantum evolution is controllable on the scale of an observer, then one of (1) No-Superdeterminism – the assumption of 'freedom of choice' used in derivations of Bell inequalities, that the experimental settings can be chosen freely, uncorrelated with any relevant variables prior to that choice – (2) Locality, or (3) Absoluteness of Observed Events – that every observed event exists absolutely, not relatively – must be false. Although the violation of Bell-type inequalities in such scenarios is not in general sufficient to demonstrate the contradiction between those three assumptions, new inequalities can be derived, in a theory-independent manner, that are violated by quantum correlations. This is demonstrated in a proof-of-principle experiment where a photon's path is deemed an observer. This new theorem places strictly stronger constraints on physical reality than Bell's theorem.

 

Self-Simulated Universe

Another theory put forward by gravitational theorists (Irwin, Amaral & Chester 2020) also uses retrocausality to try to explain the ultimate questions: Why is there anything here at all? What primal state of existence could possibly have birthed all that matter, energy and time, all that everything? And how did consciousness arise – is it some fundamental proto-state of the universe itself, or an emergent phenomenon that is purely neurochemical and material in nature?

 

Fig 77b: Self-Simulated Universe: Humans are near the point of demarcation, where EC or thinking matter emerges into the choice-sphere of the infinite set of possibilities of thought, EC. Beyond the human level, physics allows for larger and more powerful networks that are also conscious. At some stage of the simulation run, a conscious EC system emerges that is capable of acting as the substrate for the primitive spacetime code, its initial conditions, as mathematical thought, and simulation run, as a thought, to self-actualize itself. Linear time would not permit this logic, but non-linear time does.

 

This approach attempts to answer both questions in a way that weds aspects of Nick Bostrom's Simulation Argument with timeless emergentism. Termed the panpsychism self-simulation model, it says the physical universe may be a "strange loop" that self-generates new sub-realities in an almost infinite hierarchy of tiers inlaid with simulated realities of conscious experience. In other words, the universe is creating itself through thought, willing itself into existence on a perpetual loop that efficiently uses all mathematics and fundamental particles at its disposal. The universe, they say, was always here (timeless emergentism) and is like one grand thought that makes mini thoughts, called code-steps or actions – again, sort of a Matryoshka doll.

 

David Chester comments:

 

While many scientists presume materialism to be true, we believe that quantum physics may provide hints that our reality could be a mental construct. Recent advances in quantum gravity, like seeing spacetime emergent via a hologram, are also a hint that spacetime isn't fundamental. This can also be compatible with ancient Hermetic and Indian philosophy. In a sense, the mental construct of reality creates spacetime to efficiently understand itself by creating a network of subconscious entities that may interact and explore the totality of possibilities.

 

They modify the simulation hypothesis to a self-simulation hypothesis, where the physical universe, as a strange loop, is a mental self-simulation that might exist as one of a broad class of possible code-theoretic quantum gravity models of reality obeying the principle of efficient language axiom, and discuss implications of the self-simulation hypothesis such as an informational arrow of time.

 

The self-simulation hypothesis is built upon the following axioms:

 

1. Reality, as a strange loop, is a code-based self-simulation in the mind of a panpsychic universal consciousness that emerges from itself via the information of code-based mathematical thought or self-referential symbolism plus emergent non-self-referential thought. Accordingly, reality is made of information called thought.

2. Non-local spacetime and particles are secondary or emergent from this code, which is itself a pre-spacetime thought within a self-emergent mind.

3. The panconsciousness has freewill to choose the code and make syntactical choices. Emergent lower levels of consciousness also make choices through observation that influence the code syntax choices of the panconsciousness.

4. Principle of efficient language (Irwin 2019). The desire or decision of the panconscious reality is to generate as much meaning or information as possible for a minimal number of primitive thoughts, i.e., syntactical choices, which are mathematical operations at the pre-spacetime code level.

 

Fig 77c: This emphasis on coding is problematic, as it is trying to assert a consciousness-makes-reality loop through an apparently abstract coded representation based on discrete computation-like processes, assuming an "it-from-bit" notion that reality is made from information, not just described by it.

 

It from bit: Otherwise put, every it — every particle, every field of force, even the space-time continuum itself — derives its function, its meaning, its very existence entirely — even if in some contexts indirectly — from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. It from bit symbolizes the idea that every item of the physical world has at bottom — at a very deep bottom, in most instances — an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe (Wheeler 1990).

 

 

Schwartz, Stapp & Beauregard (2005) advance a quantum theory of conscious volition, in which attentive will can influence physical brain states using quantum principles, in particular von Neumann's process 1, or collapse of the wave function, complementing process 2, the causal evolution of the Schrödinger wave function responsible for ongoing physical brain states. They cite specific cognitive processes leading to physical changes in the manner of ongoing brain function:

 

There is at least one type of information processing and manipulation that does not readily lend itself to explanations that assume that all final causes are subsumed within brain, or more generally, central nervous system mechanisms. The cases in question are those in which the conscious act of wilfully altering the mode by which experiential information is processed itself changes, in systematic ways, the cerebral mechanisms used. There is a growing recognition of the theoretical importance of applying experimental paradigms that use directed mental effort to produce systematic and predictable changes in brain function. ... Furthermore, an accelerating number of studies in the neuroimaging literature significantly support the thesis that, with appropriate training and effort, people can systematically alter neural circuitry associated with a variety of mental and physical states.

 

They point out that it is necessary in principle to advance to the quantum level to achieve an adequate theory of the neurophysiology of volitionally directed activity. The reason, essentially, is that classic physics is an approximation to the more accurate quantum theory, and that this classic approximation eliminates the causal efficacy of our conscious efforts that these experiments empirically manifest.

 

They explain how structural features of ion conductance channels critical to synaptic function entail that the classical approximation to quantum reality fails in principle to cover the dynamics of a human brain, so that quantum dynamics must be used. The principles of quantum theory must then link the quantum physical description of the subject's brain to their stream of conscious experiences. The conscious choices by human agents thereby become injected non-trivially into the causal interpretation of neuroscience and neuropsychology experiments, through type 1 processes performing quantum measurement operations. This particularly applies to those experimental paradigms in which human subjects are required to perform decision-making or attention-focusing tasks that require conscious effort.

 

Conscious effort itself can, justifiably within science, be taken to be a primary variable whose complete causal origins may be untraceable in principle, but whose causal efficacy in the physical world can be explained on the basis of the laws of physics.

 

The mental act of clear-minded introspection and observation, variously known as mindfulness, mindful awareness, bare attention, the impartial spectator, etc., is a well-described psychological phenomenon with a long and distinguished history in the description of human mental states. ... In the conceived approach, the role played by the mind, when one is observing and modulating one's own emotional states, is an intrinsically active and physically efficacious process in which mental action is affecting brain activity in a way concordant with the laws of physics.

 

They propose a neurobiological interpretation where calcium channels play a pivotal role in type 1 processes at the synaptic level:

 

At their narrowest points, calcium ion channels are less than a nanometre in diameter. This extreme smallness of the opening in the calcium ion channels has profound quantum mechanical implications. The narrowness of the channel restricts the lateral spatial dimension.  Consequently, the lateral velocity is forced by the quantum uncertainty principle to become large. This causes the quantum cloud of possibilities associated with the calcium ion to fan out over an increasing area as it moves away from the tiny channel to the target region where the ion will be absorbed as a whole, or not absorbed at all, on some small triggering site. ... This spreading of this ion wave packet means that the ion may or may not be absorbed on the small triggering site. Accordingly, the contents of the vesicle may or may not be released. Consequently, the quantum state of the brain has a part in which the neurotransmitter is released and a part in which the neurotransmitter is not released. This quantum splitting occurs at every one of the trillions of nerve terminals. ... In fact, because of uncertainties on timings and locations, what is generated by the physical processes in the brain will be not a single discrete set of non-overlapping physical possibilities but rather a huge smear of classically conceived possibilities. Once the physical state of the brain has evolved into this huge smear of possibilities one must appeal to the quantum rules, and in particular to the effects of process 1, in order to connect the physically described world to the streams of consciousness of the observer/participants.
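
The scale of this wave-packet argument can be checked with back-of-envelope arithmetic. The sketch below assumes illustrative values (a ~0.5 nm channel aperture, a ~40 u calcium ion at body temperature, a ~50 nm transit to the triggering site); the numbers are not from the paper:

import math

hbar = 1.0546e-34          # J s
kB   = 1.3807e-23          # J / K
m    = 40 * 1.6605e-27     # kg, Ca ion (~40 u)
dx   = 0.5e-9              # m, assumed lateral confinement in the channel
T    = 310.0               # K, body temperature
L    = 50e-9               # m, assumed channel-to-trigger-site distance

dv = hbar / (2 * m * dx)               # minimum lateral velocity spread
v  = math.sqrt(3 * kB * T / m)         # thermal speed sets the transit time
t  = L / v
print(f"lateral dv ~ {dv:.2f} m/s, transit {t:.1e} s, "
      f"packet spread ~ {dv * t * 1e9:.2f} nm vs aperture {dx * 1e9} nm")

Even with these rough figures the packet spreads to the same order as the aperture itself over a single transit, which is the quantitative core of the quoted argument.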

 

However, they note that this focus on the motions of calcium ions in nerve terminals is not meant to suggest that this particular effect is the only place where quantum effects enter into the brain process, or that the quantum process 1 acts locally at these sites. What is needed here is only the existence of some large quantum effect.

 

A type 1 process beyond the local deterministic process 2 is required to pick out one experienced course of physical events from the smeared-out mass of possibilities generated by all of the alternative possible combinations of vesicle releases at all of the trillions of nerve terminals. This process brings in a choice that is not determined by any currently known law of nature, yet has a definite effect upon the brain of the chooser.

 

They single out the quantum Zeno effect, in which rapid multiple measurements can act to freeze a quantum state and delay its evolution, and cite James (1892, 417): "The essential achievement of the will, in short, when it is most voluntary, is to attend to a difficult object and hold it fast before the mind. Effort of attention is thus the essential phenomenon of will. ... Consent to the idea's undivided presence, this is effort's sole achievement. Everywhere, then, the function of effort is the same: to keep affirming and adopting the thought which, if left to itself, would slip away." This coincides with the studies already cited on wilful control of the emotions, implying evidence of effect.
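
The quantum Zeno effect they single out is simple to exhibit numerically. A minimal sketch, assuming a two-level system whose Rabi rotation would flip it completely over the measured interval, checked projectively N times along the way:

import math

def survival(N, theta=math.pi):
    # After each of N projective checks, the amplitude to remain in the
    # initial state is cos(theta / (2N)); the probabilities multiply.
    return math.cos(theta / (2 * N)) ** (2 * N)

for N in (1, 2, 10, 100, 1000):
    print(N, round(survival(N), 4))
# 1 -> 0.0, 10 -> ~0.78, 1000 -> ~0.998: frequent observation freezes the state

This is the sense in which effortful attention, modelled as a rapid sequence of process 1 events, could in principle hold a state "fast before the mind".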

 

Much of the work on attention since James is summarized and analysed in Pashler (1998). He emphasizes that the empirical findings of attention studies argue for a distinction between perceptual attentional limitations and more central limitations involved in thought and the planning of action. A striking difference that emerges from the experimental analysis – that the perceptual processes proceed essentially in parallel, whereas the post-perceptual processes of planning and executing actions form a single queue – is in line with the distinction between "passive" and "active" processes: a passive stream of essentially isolated process 1 events versus active processes involving effort-induced rapid sequences of process 1 events that can saturate a given capacity.

 

There is in principle, in the quantum model, an essential dynamic difference between the unconscious processing done by the Schrödinger evolution, which generates by a local process an expanding collection of classically conceivable experiential possibilities and the process associated with the sequence of conscious events that constitute the wilful selection of action. The former are not limited by the queuing effect, because process 2 simply develops all of the possibilities in parallel. Nor is the stream of essentially isolated passive process 1 events thus limited. It is the closely packed active process 1 events that can, in the von Neumann formulation, be limited by the queuing effect.  

 

This quantum model naturally accommodates all of the complex structural features of the empirical data that he describes. Chapter 6 emphasizes a specific finding: strong empirical evidence for what he calls a central processing bottleneck associated with the attentive selection of a motor action. This kind of bottleneck is what the quantum-physics-based theory predicts: the bottleneck is precisely the single linear sequence of mind-brain quantum events that von Neumann quantum theory describes.

 

Hameroff and Penrose (2014) have also proposed a controversial theory that consciousness originates at the quantum level inside neurons, rather than the conventional view that it is a product of connections between neurons, coupling orchestrated objective reduction (OOR) to hypothetical quantum cellular automata in the microtubules of neurons. The theory is regarded as implausible by critics, both physicists and neuroscientists who consider it to be a poor model of brain physiology on multiple grounds.

 

Orchestration refers to the hypothetical process by which microtubule-associated proteins influence or orchestrate qubit state reduction by modifying the spacetime separation of their superimposed states. The latter is based on Penrose's objective-collapse theory for interpreting quantum mechanics. Derakhshani et al. (2022) discount gravitational collapse theory experimentally:

 

We perform a critical analysis of the Orch OR consciousness theory at the crossroad with the newest experimental results coming from the search for spontaneous radiation predicted by the simplest version of gravity-related dynamical collapse models. We conclude that Orch OR theory, when based on the simplest version of gravity-related dynamical collapse [Diósi 2019], is highly implausible in all the cases analyzed.

 

The tubulin protein dimers of the microtubules have hydrophobic pockets that may contain delocalised π electrons. Hameroff claims that these are close enough for the tubulin π electrons to become quantum entangled, which would leave these quantum computations isolated inside neurons. Hameroff then proposed, although the idea was rejected by Reimers (2009), that coherent Fröhlich condensates in microtubules in one neuron can link with microtubule condensates in other neurons and glial cells via the gap junctions of electrical synapses, claiming these are sufficiently small for quantum tunnelling across, allowing the coherent state to extend across a large area of the brain. He further postulated that the action of this large-scale quantum activity is the source of 40 Hz gamma waves, building upon the theory that gap junctions are related to the gamma oscillation. Craddock et al. (2017) make claims about anaesthetics based on the exclusive action of halothane-type molecules on microtubules, but these lack consistency with the known receptor-based effects of ketamine and N2O on NMDA receptors (effects also shared by halothanes) and of propofol on GABA receptors. The evidence for anaesthetic disruption reviewed by Kelz & Mashour (2019) applies indiscriminately to all anaesthetics, from halothane to ketamine, widely across the tree of life, from paramecium to humans, including both synaptic and ion-channel effects, indicating merely that microtubular integrity is necessary for consciousness, not that microtubules have a key role in consciousness itself beyond their essential architectural and transport roles.

 

Because of its dependence on Penrose's idea of gravitational quantum collapse, the theory is confined to objective reduction, at face value crippling the role of free will in conscious experience. However, Hameroff (2012) attempts to skirt this by applying notions of retro-causality, as illustrated in fig 77(2), in which a dual-time approach (King 1989) is used to invoke a quantum of the present, the Conscious NOW. We will see that retrocausality is a process widely cited elsewhere in this work. Hameroff justifies such retrocausality from three sources.

 

Firstly, he cites an open-brain experiment of Libet. A peripheral stimulus, e.g. of the skin of the hand, resulted in an "EP" spike in the somatosensory cortical area for the hand 30 ms after skin contact, consistent with the time required for a neuronal signal to travel from hand to spinal cord, thalamus and brain. The stimulus also caused several hundred ms of ongoing cortical activity following the EP. Subjects reported conscious experience of the stimulus (using Libet's rapidly moving clock) near-immediately, e.g. at the time of the EP at 30 ms, hinting at retro-causality from the delayed "readiness potential".

 

Secondly, he cites a number of well-controlled studies using electrodermal activity, fMRI and other methods to look for emotional responses, e.g. to viewing images presented at random times on a computer screen. Surprisingly, the changes occurred half a second to two seconds before the images appeared. The effect was termed "presentiment" because the subjects were not consciously aware of the emotional feelings. Non-conscious emotional sentiment (i.e. feelings) appeared to be referred backward in time. Bem (2012, 2016) reported on studies showing statistically significant backward-time effects, most involving non-conscious influence of future emotional stimuli (e.g. erotic or threatening images) on cognitive choices. Studies by others have reported both replication and failure to replicate these controversial results. Thirdly, he cites a number of delayed-choice experiments widely discussed in this work.

 

Fig 77: (1) An axon terminal releases neurotransmitters through a synapse, which are received by microtubules in a neuron's dendritic spine. (2) From left, a superposition develops over time, e.g. a particle separating from itself, shown as simultaneous curvatures in opposite directions. The magnitude of the separation is related to E, the gravitational self-energy. At a particular time t, E reaches threshold by E = ħ/t, and spontaneous OR occurs: one particular curvature is selected. This OR event is accompanied by a moment of conscious experience (NOW), its intensity proportional to E. Each OR event also results in temporal non-locality, referring quantum information backward in classical time (curved arrows). (3, 4) Scale-dependent resonances from the pyramidal neuron, through microtubules, to π-orbitals and gravitational effects.

 

Sahu et al. (2013) found that electronic conductance along microtubules, normally extremely good insulators, becomes exceedingly high, approaching quantum conductance, at certain specific resonance frequencies of applied alternating current (AC) stimulation. These resonances occur in gigahertz, megahertz and kilohertz ranges, and are particularly prominent in the low megahertz range (e.g. 8.9 MHz). Hameroff & Penrose (2014) suggest that EEG rhythms (brain waves) also derive from deeper-level microtubule vibrations.

 

However, none of these processes has been empirically verified, and the complex tunnelling invoked is far from being a plausible neurophysiological process. The model requires that the quantum state of the brain has macroscopic quantum coherence, which needs to be maintained for around a tenth of a second. But, according to calculations made by Max Tegmark (2000), this property ought not to hold for more than about 10⁻¹³ s. Hameroff and co-workers (Hagen et al. 2002) have advanced reasons why this number should actually be of the order of a tenth of a second, but 12 orders of magnitude is a very big difference to explain away, and serious doubts remain about whether the Penrose–Hameroff theory is technically viable. Two experiments (Lewton 2022, Tangerman 2022) presented at the Tucson Science of Consciousness conference merely showed that anaesthetics hastened delayed luminescence, and that under laser excitation, excitation diffused through microtubules further than expected in the absence of anaesthetics. There is no direct evidence for the cellular automata proposed, and microtubules are critically involved in neuronal architecture as well as molecular transport, so functional conflict would result from adding another competing function. Hameroff (2022) cites processes from the pyramidal neuron, down through microtubules, to π-orbital resonances and gravitational space-time effects, but the linkage to microtubules is weak.

 

OOR would force collapse, but it remains unestablished how conscious volition is invoked, because collapse is occurring objectively in terms of Penrose’s notion of space-time blisters. It remains unclear how these hypothetical objective or “platonic” entities, as Penrose puts it, relate to subjective consciousness or volition. Hameroff (2012) in “How quantum brain biology can rescue conscious free will” attempts an explanation, but this simply comes down to objective OOR control:

 

Orch OR directly addresses conscious causal agency. Each reduction/conscious moment selects particular microtubule states which regulate neuronal firings, and thus control conscious behavior. Regarding consciousness occurring "too late", quantum state reductions seem to involve temporal non-locality, able to refer quantum information both forward and backward in what we perceive as time, enabling real-time conscious causal action. Quantum brain biology and Orch OR can thus rescue free will.

 

For this reason Symbiotic Existential Cosmology remains agnostic about such attempts to invoke unestablished, exotic quantum effects, and instead points to the non-IID nature of brain processes generally, meaning that neurodynamics is a fractal quantum process not required to be adiabatically isolated as decoherence limits of technological quantum computing suggest.

 

QBism and the Conscious Consensus Quantum Reality

 

QBism (von Baeyer 2016) is an acronym for "quantum Bayesianism", a founding idea from which it has since moved on. It is a version of quantum physics founded on the conscious expectations of each physicist and their relationships with other physicists. According to QBism, experimental measurements of quantum phenomena do not quantify some feature of an independently existing natural structure. Instead, they are actions that produce experiences in the person or people doing the measurement.

 

“When I take an action on the world, something genuinely new comes out.”

 

This is very similar to the way Symbiotic Existential Cosmology presents consciousness as primary, in the sense that we all experience subjective consciousness and infer the real world through the consensus between conscious observers of our experiences of what we come to call the physical world. So although we know the physical world is necessary for our biological survival – the universe is necessary – we derive our knowledge of it exclusively through our conscious experiences of it.

 

The focus is on how to gain knowledge in a probabilistic universe... In this probabilistic interpretation, collapse of the quantum wave function has little to do with the object observed/measured. Rather, the crux of the matter is change in the knowledge of the observer based on new information acquired through the process of observing. Christopher Fuchs explains: “When a quantum state collapses, it’s not because anything is happening physically, it’s simply because this little piece of the world called a person has come across some knowledge, and he updates his knowledge… So the quantum state that’s being changed is just the person’s knowledge of the world, it’s not something existent in the world in and of itself.”

 

QBism is agnostic about whether there is a world that is structured independently of human thinking. It doesn’t assume we are measuring pre-existing structures, but nor does it pretend that quantum formalism is just a tool. Each measurement is a new event that guides us in formulating more accurate rules for what we will experience in future events. These rules are not just subjective, for they are openly discussed, compared and evaluated by other physicists. QBism therefore sees physicists as permanently connected with the world they are investigating. Physics, to them, is an open-ended exploration that proceeds by generating ever new laboratory experiences that lead to ever more successful, but revisable, expectations of what will be encountered in the future.

 

In QBism the wave function is no longer an aspect of physical reality as such, but a feature of how the observer's expectations will be changed by an act of quantum measurement.

 

The principal thesis of QBism is simply this: quantum probabilities are numerical measures of personal degrees of belief.

 

In the conventional version of quantum theory, the immediate cause of the collapse is left entirely unexplained, or "miraculous", although sometimes assumed to be essentially random. QBism solves the problem as follows. In any experiment the calculated wave function furnishes the prior probabilities for empirical observations that may be made later. Once an observation has been made, new information becomes available to the agent performing the experiment. With this information the agent updates their probability and their wave function, instantaneously and without magic.

 

So in the Wigner's friend experiment, the friend reads the counter while Wigner, with his back turned to the apparatus, waits until he knows that the experiment is over. The friend learns that the wave function has collapsed to the up outcome. Wigner, on the other hand, knows that a measurement has taken place but doesn’t know its result. The wave function he assigns is a superposition of two possible outcomes, as before, but he now associates each with a definite reading of the counter and with his friend’s knowledge of that reading — a knowledge that Wigner does not share. For the QBist there is no problem: Wigner and his friend are both right. Each assigns a wave function reflecting the information available to them, and since their respective compilations of information differ, their wave functions differ too. As soon as Wigner looks at the counter himself or hears the result from his friend, he updates his wave function with the new information, and the two will agree once more—on a collapsed wave function.
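
The QBist reading of this scenario is plain probability bookkeeping, as the minimal sketch below illustrates (a hypothetical 50/50 preparation; each agent's state vector encodes their expectations, nothing more):

import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
prepared = (up + down) / np.sqrt(2)       # assumed preparation

def expectations(psi):
    # Born probabilities the agent assigns for (up, down)
    return np.abs(psi) ** 2

friend = up                               # friend saw "up" and updated at once
wigner = prepared                         # Wigner has no result yet
print(expectations(friend), expectations(wigner))   # [1. 0.] vs [0.5 0.5]

wigner = friend                           # Wigner learns the result and updates
print(expectations(wigner))               # [1. 0.] -- agreement restored

Both assignments are "right" for their holders; the disagreement is just a difference in information, removed by ordinary updating.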

 

According to the conventional interpretation of quantum mechanics, in the Schrödinger's cat experiment, the value of a superimposed wave function is a blend of two states, not one or the other. What is the state of the cat after one half-life of the atom, provided you have not opened the box? The fates of the cat and the atom are intimately entangled. An intact atom implies a living cat; a decayed atom implies a dead cat. It seems to follow that since the atom’s wave function is unquestionably in a superposition so is the cat: it is both alive and dead. As soon as you open the box, the paradox evaporates: the cat is either alive or dead. But while the box is still closed — what are we to make of the weird claim that the cat is dead and alive at the same time? According to QBism, the state of an unobserved atom, or a cat, has no value at all. It merely represents an abstract mathematical formula that gives the odds for a future observation: 0 or 1, intact or decayed, dead or alive. Claiming that the cat is dead and alive is as senseless as claiming that the outcome of a coin toss is both heads and tails while the coin is still tumbling through the air. Probability theory summarises the state of the spinning coin by assigning a probability of 1/2 that it will be heads. So QBism refuses to describe the cat’s condition before the box is opened and rescues it from being described as hovering in a limbo of living death.

 

If the wave function, as QBism maintains, says nothing about an atom or any other quantum mechanical object except the odds for future experimental outcomes, the unperformed experiment of looking in the box before it is opened has no result at all, not even a speculative one. The bottom line: according to the QBist interpretation, the entangled wave function of the atom and the cat does not imply that the cat is alive and dead. Instead, it tells an agent what they can reasonably expect to find when they open the box.

 

This makes QBism compatible with phenomenologists, for whom experience is always “intentional” – i.e. directed towards something – and these intentionalities can be fulfilled or unfulfilled. Phenomenologists ask questions such as: what kind of experience is laboratory experience? How does laboratory experience – in which physicists are trained to see instruments and measurements in a certain way – differ from, say, emotional or social or physical experiences? And how do lab experiences allow us to formulate rules that anticipate future lab experiences?

 

Another overlap between QBism and phenomenology concerns the nature of experiments. Experiments are performances. They’re events that we conceive, arrange, produce, set in motion and witness, yet we can’t make them show us anything we wish. That doesn’t mean there is a deeper reality “out there” – just as, with Shakespeare, there is no “deep Hamlet” of which all other Hamlets we produce are imitations. In physics as in drama, the truth is in performance.

 

However, there is one caveat. We simply don't know whether consciousness itself can be associated only with collapsed probabilities, or whether it is in some way also steeped, even as a complement, in the spooky world of entanglement; reducing the entirety of physics to collapsed probabilities may therefore not convey the entire picture. Moreover, the degree to which conscious experiences correspond to unstable brain states at the edge of chaos, making phase coherence measurements akin to or homologous with quantum measurements, may mean this picture is vastly more complicated than meets the eye.

 

The Born Probability Interpretation and the Notion of Quantum “Randomness”

 

The Born rule provides a link between the mathematical formalism of quantum theory and experiment, and as such is almost single-handedly responsible for practically all predictions of quantum physics (Landsman 2008). The rule projects a superposed state vector ψ, expanded in a basis of eigenvectors in an inner product space, onto the eigenvector corresponding to one of the eigenvalues λ_i, as a purely algebraic operation.

 

It states that if an observable corresponding to a self-adjoint operator A with discrete spectrum is measured in a system with normalised wave function ψ, then:

 

(1) the measured result will be one of the eigenvalues λ_i of A, and

(2) the probability of measuring a given eigenvalue λ_i will equal ⟨ψ|P_i|ψ⟩, where P_i is the projection onto the eigenspace of A corresponding to λ_i.

Equivalently, the probability can be written as ||P_i ψ||².
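
Numerically the rule is a short projection calculation, as the sketch below shows for an illustrative Hermitian observable (the matrix and state are examples, not from the text):

import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, -1.0]])                 # illustrative self-adjoint observable
psi = np.array([0.8, 0.6 + 0.0j])           # normalised state, |psi| = 1

evals, evecs = np.linalg.eigh(A)            # spectral decomposition of A
for lam, v in zip(evals, evecs.T):
    P = np.outer(v, v.conj())               # projection onto eigenspace of lam
    prob = np.vdot(psi, P @ psi).real       # Born rule: <psi|P_i|psi>
    print(f"eigenvalue {lam:+.3f}: probability {prob:.3f}")
# the probabilities equal ||P_i psi||^2 and sum to 1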

 

The rule for calculating probabilities was really just an intuitive guess by the German physicist Max Born. So was Schrödinger's equation itself. Neither was supported by rigorous derivation. It is simply a probability law on the Hilbert space representation (Griffiths 2014) and says nothing about whether quantum uncertainty is purely random or whether there is a hidden variable theory governing it. Broadly speaking, the rule is postulated, as stated above, not proven experimentally, but assumed theoretically in experimental work:

 

It's not clear what exactly is meant by an experimental verification of the Born rule - the Born rule says how the quantum state relates to the probability of measurement, but "the quantum state" itself is a construct of the quantum theory that is rarely, if ever, experimentally accessible other than running repeated tests and inferring which state it was from the results assuming the Born rule is valid.

 

This is because we start initially with a Schrödinger wave equation iħ ∂ψ/∂t = Ĥψ, where Ĥ is the Hamiltonian energy operator, but the wave function is experimentally inaccessible to classical observation, so we have to use the Born probability interpretation to get a particle probability we can sample, e.g. in the pattern of photons on the photographic plate in the two-slit interference experiment in fig 71(f).

 

There are obvious partial demonstrations, but these just lead to averages that statistically approach the probability interpretation, but don’t tell us anything about the underlying process which generates these indeterminacies.

 

Born's rule has been verified experimentally numerous times. However, only the overall averages have been verified. For example, if the prediction is 60% probability, then over a large number of trials the average outcome will approach the predicted value of 60%. This has been verified by measuring particle spin at angle A relative to the angle of its previously known spin; the prediction is cos²(A/2). These predictions have also been verified with entangled pairs (Bell states), where the corresponding prediction is sin²(A/2). What has not been verified is whether the outcomes are due to independent probability, or are guided by some balancing mechanism.
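
The limitation noted here is visible even in simulation: to model the experiment at all, one must insert the Born probability by hand, after which only the convergence of averages is checkable. A minimal sketch, with an assumed relative angle:

import math, random

def frequency(A, trials=1_000_000):
    # Each trial: the 'aligned' outcome occurs with Born probability cos^2(A/2)
    p = math.cos(A / 2) ** 2
    return sum(random.random() < p for _ in range(trials)) / trials

A = math.radians(60)
print(frequency(A), math.cos(A / 2) ** 2)   # averages agree; single outcomes stay opaque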

 

Landsman (2008) confirms this picture:

 

The pragmatic attitude taken by most physicists is that measurements are what experimentalists perform in the laboratory and that probability is given the frequency interpretation (which is neutral with respect to the issue whether the probabilities are fundamental or due to ignorance). Given that firstly the notion of a quantum measurement is quite subtle and hard to define, and that secondly the frequency interpretation is held in rather low regard in the philosophy of probability, it is amazing how successful this attitude has been!

 

Heisenberg (1958), notes that, in the Copenhagen interpretation, probabilities arise because we look at the quantum world through classical glasses:

 

One may call these uncertainties [i.e. the Born probabilities] objective, in that they are simply a consequence of the fact that we describe the experiment in terms of classical physics; they do not depend in detail on the observer. One may call them subjective, in that they reflect our incomplete knowledge of the world.

 

Landsman (2008) clarifies:

 

In other words, one cannot say that the Born probabilities are either subjective (Bayesian, or due to ignorance) or objective (fundamentally ingrained in nature and independent of the observer). Instead, the situation is more subtle and has no counterpart in classical physics or probability theory: the choice of a particular classical description is subjective, but once it has been made the ensuing probabilities are objective and the particular outcome of an experiment compatible with the chosen classical context is unpredictable. Or so Bohr and Heisenberg say. ... In most interpretations of quantum mechanics, some version of the Born rule is simply postulated.

 

Roger Penrose (foreword vi in Wuppuluri & Doria 2018) notes:

 

Current quantum mechanics, in the way that it is used, is not a deterministic scheme, and probabilistic behaviour is taken to be an essential feature of its workings. Some would contend that such indeterminism is here to stay, whereas others argue that there must be underlying ‘hidden variables’ which may someday restore a fully deterministic underlying ontology. ... Personally, I do not insist on taking a stand on this issue, but I do not think it likely that pure randomness can be the answer. I feel that there must be something more subtle underlying it all.

 

John von Neumann (1951) is highly critical of both physical and algorithmic sources of randomness:

 

We see then that we could build a physical instrument to feed random digits directly into a high-speed computing machine and could have the control call for these numbers as needed. The real objection to this procedure is the practical need for checking computations. If we suspect that a calculation is wrong, almost any reasonable check involves repeating something done before. At that point the introduction of new random numbers would be intolerable. I think that the direct use of a physical supply of random digits is absolutely inacceptable for this reason and for this reason alone. … Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin. For, as has been pointed out several times, there is no such thing as a random number – there are only methods to produce random numbers, and a strict arithmetic procedure of course is not such a method.

 

Ruth Kastner (2013) claims that the transactional interpretation is unique in giving a physical explanation for the Born rule. Zurek (2005) has made a derivation from entanglement, and Sebens and Carroll have done so from an Everett perspective, although the latter is not strictly meaningful, since every branch of the multiverse is explored.

 

Because wave interference is measured through particle absorption, experiments have been conducted (Sinha et al. 2010) to eliminate higher-order processes which might violate the two-signal interference implied by the Born interpretation, because Born's rule predicts that quantum interference, as shown by a double-slit diffraction experiment, occurs only from pairs of paths. Using a three-slit apparatus and sampling all combinations of open slits, one can confirm additive two-wave interference, so Born's rule applies.
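
The pairwise structure can be made concrete: with Born probabilities as squared sums of amplitudes, the three-slit (Sorkin) term that Sinha et al. bounded experimentally vanishes identically. A sketch with illustrative amplitudes:

import cmath

# Illustrative complex amplitudes for slits A, B, C at one detector point
amp = {"A": 0.6 * cmath.exp(0.3j), "B": 0.5 * cmath.exp(1.1j), "C": 0.4 * cmath.exp(2.0j)}

def P(*slits):
    # Born probability with only the named slits open
    return abs(sum(amp[s] for s in slits)) ** 2

# Alternating sum over slit subsets: zero iff interference is purely pairwise
epsilon = (P("A", "B", "C") - P("A", "B") - P("A", "C") - P("B", "C")
           + P("A") + P("B") + P("C"))
print(epsilon)   # ~0, up to floating-point rounding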

 

Other experiments and theories attempt to derive the Born interpretation from more basic quantum properties. Masanes, Galley & Müller (2019) show that Born's rule and the post-measurement state-update can be deduced from the other quantum postulates, referred to as unitary quantum mechanics, together with the assumption that ensembles on finite-dimensional Hilbert spaces are characterised by finitely many parameters. Others, such as Cabello (2018), use graph theory. The movement to regenerate the whole of quantum theory from more basic axioms, e.g. of information or probability itself, is called quantum reconstruction, of which QBism is an example (Ball 2017).

 

Zurek (1991, 2003, 2005) has introduced the notions of decoherence, quantum Darwinism and envariance (environment-assisted invariance) to explain the transition from quantum reality to the classical. Decoherence is the way third-party quanta disrupt the off-diagonal wave amplitudes of entanglement, resulting in projection onto the “observed” classical states through exponential damping, as in fig 71c. Quantum Darwinism enriches this picture by developing the notion that some quantum “pointer” states can be more robust to decoherence by replicating their information into the environment. Envariance describes this process in terms of quantum measurement, in which the environment becomes entangled with the apparatus of ideal von Neumann measurement, again promoting the transition to the classical. While these do not deal with the question of hidden variable theories versus randomness of uncertainty, they have been claimed to derive the Born probabilities (Zurek 2005, Harris et al. 2016) through multiple environmental interactions, illustrated by Laplace playing-card probabilities in fig 77d. However, all the approaches to independent derivation of the Born rule, including envariance, have been criticised as logically circular (Schlosshauer & Fine 2005, Landsman 2008).
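
A minimal sketch of the decoherence picture (illustrative only; the exponential damping form follows fig 71c, but the time constant tau is an arbitrary assumption): environmental entanglement suppresses the off-diagonal coherences of a qubit density matrix, leaving the classical diagonal mixture.

```python
import numpy as np

# Equal superposition (|0> + |1>)/sqrt(2) written as a density matrix
rho0 = np.array([[0.5, 0.5],
                 [0.5, 0.5]], dtype=complex)

tau = 1.0  # assumed decoherence time constant (arbitrary units)

def decohere(rho, t):
    """Damp off-diagonal coherences by exp(-t/tau); diagonals are untouched."""
    out = rho.copy()
    out[0, 1] *= np.exp(-t / tau)
    out[1, 0] *= np.exp(-t / tau)
    return out

for t in (0.0, 1.0, 5.0, 20.0):
    print(t, np.round(decohere(rho0, t), 6))
# As t >> tau the state approaches diag(0.5, 0.5): a classical probability mixture.
```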

 

Illustrating the difficulty of the problem, John Wheeler in 1983 proposed that statistical regularities in the physical world might emerge from such a situation, as they sometimes do from unplanned crowd behaviour (Ball 2019):

 

“Everything is built higgledy-piggledy on the unpredictable outcomes of billions upon billions of elementary quantum phenomena", Wheeler wrote. But there might be no fundamental law governing those phenomena — indeed, he argued, that was the only scenario in which we could hope to find a self-contained physical explanation, because otherwise we’re left with an infinite regression in which any fundamental equation governing behavior needs to be accounted for by some even more fundamental principle. “In contrast to the view that the universe is a machine governed by some magic equation, … the world is a self-synthesizing system,” Wheeler argued. He called this emergence of the lawlike behavior of physics “law without law.”

 

However, the probability interpretation leads to the incorrect notion that quantum reality is somehow just a random process. Processes like radioactive decay are commonly treated as random by default, simply because they are indeterminate and don’t obey a fixed law. The study website StudySmarter, for example, states:

 

Radioactive decay is a random process, meaning it is impossible to predict when an atom will emit radiation. By the random nature of radioactive decay, we mean that for every atom, there are known probabilities that they will emit radiation (and thus decay radioactively) in the next second. Still, the fact that all we have is a probability makes this a random process. We can never determine ahead of time if an atom will decay in the next second or not. This is just like throwing a (fair, cubic) dice every second.

 

In effect, this is equating the quantum tunnelling of individual atoms to a dice throw. But a dice throw is a chaotic classical process with geometric constraints, so the comparison equates quantum uncertainty with classical chaotic butterfly-effect systems, which might themselves also have a quantum sensitivity.
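
The distinction can be made concrete with a toy simulation (a sketch under the textbook assumption of a fixed per-atom decay probability; the population size and probability are illustrative numbers): the aggregate counts fall on an erratic exponential, as in fig 77d(1,2), while saying nothing about whether each individual tunnelling event is “random” or determined by a deeper process.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100_000   # initial population of radioactive atoms (illustrative)
p = 0.01      # assumed per-atom decay probability per time step

counts = []
while N > 0 and len(counts) < 500:
    decays = rng.binomial(N, p)  # decays this step, fluctuating about N*p
    counts.append(decays)
    N -= decays

# The count series decays exponentially on average (~ N0*p*(1-p)**t) with
# erratic fluctuations; note the generator itself is pseudo-random, i.e.
# fully deterministic, yet reproduces the same statistics.
print(counts[:10], counts[100:105])
```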

 

Santha & Vazirani (1986) note:

 

Unfortunately, the available physical sources of randomness (including zener diodes and geiger counters) are imperfect. Their output bits are not only biased but also correlated.

 

Fig 77d: (1,2) Exponential decay of erratic radioactivity as the population of radioactive atoms becomes depleted. (3) Zener diode avalanche output. (4) Graph of the chaotic logistic iteration displaying interval-filling ergodicity in the frequency graph (4b) and point plot (4c). One can make a Born interpretation of this as a pseudo-wave-function, showing the relative probabilities of finding an iteration point at 0.5 and 0.9 by normalising the function over its integral, yet the process is deterministic. Therefore a probability interpretation does not imply randomness. (5,6) Quasi-random and pseudo-random (or random) 2-D distributions. (7) Sketch derivation of the Born formula using Laplace probabilities of concealed and revealed playing cards (Zurek 2005).

 

Geiger counters register quantum tunnelling in individual radioactive nuclei, which reflects quantum uncertainty, but the decay events occur very slowly over long periods, so the process is not uniform randomness but one of erratic exponential decay over time. Zener diodes at high voltage undergo avalanche breakdown, a solid-state feature that lets through a flood of electrons with a fixed relaxation time, so again the device is not directly measuring uncertainty, but its compounded effects.

 

This is remarkably similar to chaotic molecular systems displaying a butterfly effect. Consider a very simple algorithmic chaotic system such as the logistic iteration x_{n+1} = r·x_n(1 − x_n), originally a model of rabbit reproduction in a finite pasture (May 1976), where x_1 is chosen on (0,1) as a seed. When r = 4, the chaotic phase is ergodic and asymptotic to an interval-filling stochastic process. This is shown in fig 77d, where the iteration generates an asymptotic frequency distribution, which has been normalised over its integral to produce a probability function playing the same role as the squared wave function, giving a probability interpretation parallel to the Born rule for an elementary deterministic discrete iteration. This confirms that the Born rule does not in any way imply that the basis of quantum uncertainty lies in randomness.
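
A minimal sketch makes this concrete: iterating x_{n+1} = 4x_n(1 − x_n) and normalising the visit histogram reproduces the known invariant density 1/(π√(x(1 − x))), a “pseudo-wave-function” assigning Born-like probabilities (e.g. at 0.5 and 0.9) to a fully deterministic process. The seed and bin count are arbitrary choices.

```python
import numpy as np

x = 0.2                      # seed chosen on (0,1)
xs = np.empty(1_000_000)
for n in range(xs.size):
    x = 4.0 * x * (1.0 - x)  # logistic iteration in the ergodic regime r = 4
    xs[n] = x

# Normalised visit histogram: the empirical "probability function"
hist, edges = np.histogram(xs, bins=200, range=(0, 1), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])

# Known invariant density of the r = 4 logistic map
density = 1.0 / (np.pi * np.sqrt(centres * (1.0 - centres)))

for target in (0.5, 0.9):
    i = np.argmin(abs(centres - target))
    print(target, round(hist[i], 3), round(density[i], 3))
# ~0.637 at x = 0.5 and ~1.061 at x = 0.9: Born-style relative probabilities
# generated by an entirely deterministic iteration.
```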

 

The point distribution (4c) shows in closer detail that this deterministic dynamical process displays pseudo-random features akin to (6), modulated by the overall probability distribution (4b), but there is a subtle anomaly in (4c) in the horizontal strip neighbouring y = 0.75. This does not appear in the probability distribution, which is asymptotically smooth for large n. The reason is that the iteration has two fixed-point solutions of x = f(x): 0, an attractor, and 0.75, a chaotic repelling point. There are 2^n periodic points of period n (the n-fold iterate is a polynomial of degree 2^n), so in a classical chaotic iteration the unstable periodic points are dense, but as a countable subset of [0, 1] they have measure zero, so their probability of occurrence is zero. Their neighbouring points can nevertheless be seen in (4c) as stationary butterfly-effect trajectories diverging exponentially from the fixed point. None of this can happen in a quantum system, and this detail is not accessible in the quantum situation either, because we have recourse only to the Born rule, so the interference experiment reflects only what we see in (4b), the macroscopic experimental arrangement as a statistical particle approximation to the wave power, not the underlying process governing the individual outcomes. In fig 57, we see that the classical chaotic stadium billiard (1) becomes scarred by such repelling periodic orbits in the quantum situation, although open quantum systems like the quantum kicked top (2) become entangled.
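
The repelling point at 0.75 can be exhibited directly (a small sketch; the perturbation size is arbitrary): since f(0.75) = 0.75 and |f′(0.75)| = |4 − 8(0.75)| = 2, a nearby orbit separates by roughly a factor of 2 per step, the stationary butterfly-effect trajectory visible in (4c).

```python
x = 0.75 + 1e-12            # start a hair away from the repelling fixed point
for n in range(12):
    x = 4.0 * x * (1.0 - x)
    print(n, abs(x - 0.75))  # distance roughly doubles each step (|f'(0.75)| = 2)
```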

 

Algorithmic processes approximating randomness for experimental purposes are classified as either pseudo-random or quasi-random.

 

Pseudo-random numbers more closely simulate randomness, as in the pseudo-random number generator (PRNG), or deterministic random bit generator (DRBG), used in computational random functions. An example is the linear congruential generator A_{n+1} = (Z·A_n + I) mod M, where A_n is the previous pseudo-random number generated, Z is a constant multiplier, I is a constant increment, and M is a constant modulus.
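
A minimal sketch of such a generator, using the widely published Numerical Recipes constants (one conventional choice among many):

```python
def lcg(seed, Z=1664525, I=1013904223, M=2**32):
    """Linear congruential generator: A_{n+1} = (Z*A_n + I) mod M."""
    a = seed
    while True:
        a = (Z * a + I) % M
        yield a / M  # scale to [0, 1)

gen = lcg(seed=42)
print([round(next(gen), 6) for _ in range(5)])
# Fully deterministic: the same seed always reproduces the same "random" stream.
```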

 

Quasi-random processes, also called low-discrepancy sequences, approach an even distribution more rapidly, but less faithfully than true randomness, because they lack the larger improbable fluctuations. They are generated by a number of algorithms, including those of Faure, Halton and Sobol, each of which has a short arithmetic computational procedure.
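
For comparison, a minimal sketch of the Halton construction (the radical inverse of the index in a prime base; bases 2 and 3 give a 2-D low-discrepancy set like fig 77d(5)):

```python
def halton(index, base):
    """Radical inverse of `index` in `base`: the Halton low-discrepancy sequence."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

# First few 2-D quasi-random points: evenly spread, but lacking the large
# improbable clusters and gaps a truly random sample would show.
points = [(halton(i, 2), halton(i, 3)) for i in range(1, 6)]
print(points)
```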

 

This leaves us in a position where the assumption of quantum randomness remains unestablished, and in which a complex many-body non-local interaction, given the vast number of particles in the universe, could approximate the Born interpretation to within the limits of any experimental error.

 

Summarising the interplay between the notion of “random” probabilistic wave function collapse and hidden variable theories, Symbiotic Existential Cosmology  favours the latter on the basis that:

 

(1) The verification of Bell entanglement was a confirmation of the EPR claim and Einstein’s quote:

 

God does not play dice with the universe.

 

(2) The transcausal view of quantum transactions as a complex underlying hidden-variable process, which is also shared by (3) superdeterminism, violating statistical independence and involving additional non-local processes, (4) non-IID processes in biology, which do not converge to the classical, and (5) theories in which quantum measurement may contradict objective reality through process entanglements extending beyond Bell state entanglements.

 

The intrinsic difficulty that all such theories since Bohm’s pilot wave formulation face is that expectations are for a master equation like Schrödinger’s, when any such system is likely to be a vastly complex non-local many-body problem. This criticism of the assumption of randomness applies equally to molecular dynamics, mutational evolution and neurodynamics.

 

The Neuroscience Perspective

 

Complementing this description of the quantum world at large is the actual physics of how the brain processes information. By contrast with a digital computer, the brain uses both pulse-coded action potentials and continuous gradients in an adaptive parallel network. Conscious states tend to be distinguished from subconscious processing by virtue of coherent phase fronts of the brain’s wave excitations. Phase coherence of beats between wave functions, fig 71(c), is also the basis of quantum uncertainty.

 

In addition, the brain uses edge-of-chaos dynamics, involving the butterfly effect (arbitrary sensitivity to small fluctuations in boundary conditions) and the creation of strange attractors to modulate wave processing, so that the dynamics doesn’t become locked into a given ordered state and can thus explore the phase space of possibilities before making a transition to a more ordered state representing the perceived solution. Self-organised criticality is also a feature, as is neuronal threshold tuning. Feedback between the phase of brain waves on the cortex and the discrete action potentials of individual pyramidal cells, in which the phase is used to determine the timing of action potentials, creates a feedback between the continuous and discrete aspects of neuronal excitation. These processes, in combination, may effectively invoke a state where the brain is operating as an edge-of-chaos quantum computer by making internal quantum measurements of its own unstable dynamical evolution, as cortical wave excitons complemented by discrete action potentials at the axonal level.
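
A toy sketch of the last point (purely illustrative, with arbitrary parameters; not a model of real cortex): a leaky integrate-and-fire unit driven by a continuous 8 Hz field fires discrete spikes whose timing locks to the field’s phase, coupling the continuous and discrete levels of excitation.

```python
import numpy as np

dt, tau, thresh = 0.001, 0.02, 1.0   # time step (s), membrane constant, threshold
drive, k, f = 40.0, 25.0, 8.0        # tonic input, field coupling, field freq (Hz)

v, spikes = 0.0, []
for step in range(int(3.0 / dt)):             # simulate 3 seconds
    t = step * dt
    field = np.cos(2 * np.pi * f * t)         # continuous "brain wave"
    v += dt * (-v / tau + drive + k * field)  # leaky integration of input + field
    if v >= thresh:                           # discrete action potential
        spikes.append(t)
        v = 0.0                               # reset after the spike

phases = [(2 * np.pi * f * t) % (2 * np.pi) for t in spikes]
print(len(spikes), np.histogram(phases, bins=8)[0])
# Spike phases cluster near the field's depolarising phase: the wave phase
# sets the timing of the discrete spikes.
```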

 

Chaotic sensitivity, combined with related phenomena such as stochastic resonance (Liljenström et al. 2005), means that fractal scale-traversing handshaking (Grosu 2023) can occur between critically poised global brain states, neurons at threshold, ion channels and the quantum scale, in which quantum entanglement of excitons can occur (King 2014). At the same time, these processes underpin why there is ample room in physical brain processing for quantum uncertainty to become a significant factor in unstable brain dynamics, fulfilling Eccles’ (1986) notion that this can explain a role for consciousness without violating any classically causal processes.

 

This means that brain function is an edge-of-chaos quantum dynamical system which, unlike a digital computer, is far from being a causally deterministic process that would physically lock out any role for conscious decision-making, but instead leaves open wide scope for quantum uncertainty, consistent with a role for consciousness in tipping critical states. The key to the brain is thus its quantum physics, not just its chemistry and biology. This forms a descriptive overview of possible processes involved rather than an empirical proof, in the face of the failure of promissory materialistic neuroscience (Popper & Eccles 1984) to demonstrate physical causal closure of the universe in the context of brain function, so Occam’s razor cuts in the direction which avoids conflict with empirical experience of conscious volitional efficacy over the physical universe.

 

 

Fig 78: (1) Edge-of-chaos transitions model of olfaction (Freeman 1991). (2) Stochastic resonance as a hand-shaking process between the ion channel and whole-brain states (Liljenström & Svedin 2005). (3) Hippocampal place maps (erdiklab.technion.ac.il). Hippocampal cells have also been shown to activate in response to desired locations in an animal’s anticipated future that they have observed but not visited (Olafsdottir et al. 2015). (4) Illustration of micro-electrode recordings of local field potential (LFP) wave phase precession enabling correct spatial and temporal encoding via discrete action potentials in the hippocampus (Qasim et al. 2021). (5) Living systems are dynamical systems. They show ensembles of eigenbehaviors, which can be seen as unstable dynamical tendencies in the trajectory of the system. Francisco Varela’s neurophenomenology (Varela 1996, Rudrauf et al. 2003) is a valid attempt to bridge the hard and easy problems, through a biophysics of being, by developing a complementary subjective account of processes corresponding to objective brain processing. While these efforts help to elucidate the way brain states correspond to subjective experiences, using an understanding of resonant interlocking dynamical systems, they do not of themselves solve the subjective nature of the hard problem. (6) Joachim Keppler’s (2018, 2021; James et al. 2022) view of conscious neural processing uses the framework of stochastic electrodynamics (SED), a branch of physics that affords a look behind the uncertainty of quantum field theory (QFT), to derive an explanation of the neural correlates of consciousness, based on the notion that all conceivable shades of phenomenal awareness are woven into the frequency spectrum of a universal background field, called the zero-point field (ZPF), implying that the fundamental mechanism underlying conscious systems rests upon access to the information available in the ZPF. This gives an effective interface description of how dynamical brain states correspond to subjective conscious experiences but, like the other dynamical descriptions, does not solve the hard problem itself of why the zero-point field becomes subjective.

 

Diverse Theories of Consciousness

 


Fig 79: Overview of Theories of Consciousness reviewed, with rough comparable positions. Field or field-like theories are in blue/magenta; explicit AI support in magenta/red. Horizontal positions guided by specific author statements.

Section links: GNW, ART, DQF, ZPF, AST, CEMI, FEM, IIT, PEM, ORCH, GRT, CFT

 

Descriptions of the actively conscious brain revolve around extremely diverse conceptions. The neural network approach conceives of the brain as a network of neurons connected by axonal-dendritic synapses, with action potentials as discrete impulses travelling down the long pyramidal cell axons through which activity is encoded as a firing rate. In this view the notions of “brain waves” as evidenced in the EEG (electroencephalogram) and MEG (magnetoencephalogram) are just the collective averages of these spikes, having no function in themselves, being just an accessory low intensity electromagnetic cloud associated with neuronal activity, which happens to generate a degree of coupled synchronisation through the averaged excitations of the synaptic web. At the opposite extreme are field theories of the conscious brain in which fields have functional importance in themselves and help to explain the “binding” problem of how conscious experiences emerge from global brain dynamics.

 

Into the mix are also abstract theories of consciousness, such as Tononi and Koch’s (2015) IIT, or integrated information theory, and Graziano’s (2016) AST, or attention schema theory, which attempt to formulate an abstract basis for consciousness that might arise in biological brains or synthetic neural networks given the right circumstances.

 

The mushroom experience that triggered Symbiotic Existential Cosmology caused a reversal of world view from my original point of view (King 1996), looking for the neurodynamic and quantum basis of consciousness in the brain, to realising that no such theory is possible, because a pure physicalist theory cannot bridge the hard-problem explanatory gap in the quantum universe, due to the inability to demonstrate causal closure.

 

No matter how fascinating and counter-intuitive the complexities of the quantum, physical and biological universe are, no purely physicalist description of the neurodynamics of consciousness can possibly succeed, because it is scientifically impossible to establish a theoretical proof, or empirical demonstration, of the causal closure of the physical universe in the context of neurodynamics. The bald facts are that, no matter to what degree we use techniques from optogenetics, through ECoG, to direct cell recording, there is no hope within the indeterminacies of the quantum universe of making an experimental verification of classical causal closure. Causal closure of the physical universe thus amounts to a formally undecidable cosmological proposition from the physical point of view, which is heralded as a presumptive 'religious' affirmative belief without scientific evidence, particularly in neuroscience.

 

The hard problem of consciousness is thus cosmological, not biological, or neurodynamic alone. Symbiotic Existential Cosmology corrects this by a minimal extension of quantum cosmology by adding the axiom of primal subjectivity, as we shall see below.

 

In stark contrast to this, the subjective experiential viewpoint perceives conscious volition over the physical universe as an existential certainty that is necessary for survival. When any two live human agents engage in a frank exchange of experiences and communications (such as my reply to you all now, which evidences my drafting of a consciously considered opinion and intentionally sending it to you in physical form), this can be established beyond reasonable doubt by mutual affirmation of our capacity to consciously and intentionally respond with a physical communication. This is the way living conscious human beings have always viewed the universe throughout history, and it is a correct, veridical, empirical experience and observation of existential reality, consistent with personal responsibility, criminal and civil law on intent, all long-standing cultural traditions, and the fact that 100% of our knowledge of the physical world comes through our conscious experience of it. Neuroscientists thus contradict this direct empirical evidence at their peril.

 

However, there is still a practical prospect of refining our empirical understanding of the part played by neurodynamics in generating subjective conscious experience and volition over the physical universe through current and upcoming techniques in neuroscience. What these can do is demonstrate experimentally the nature of the neurodynamics occurring when conscious experiences are evoked, the so-called "neural correlate of consciousness", forming an interface with conscious experience and our ensuing decision-making actions.

 

To succeed at this scientific quest, we need to understand how quantum cosmology enters into the formation of biological tissues. The standard model of physics is symmetry-broken between the colour, weak and EM forces and gravity, which ensures that there are around a hundred positively charged atomic nuclei, with orbital electrons having both the periodic quantum properties of the s, p, d and f orbitals and non-linear EM charge interactions, centred on first-row covalent H-CNO, modified by P & S and light ionic and transition elements, as shown in fig 51, to form a fractal cooperative bonding cascade from organic molecules like the amino acids and nucleotides, through globular proteins and nucleic acids, to complexes like the ribosome and membrane, to cell organelles, cells and tissues. These constitute an interactive quantum form of matter, the most exotic form of matter in existence, whose negentropic thermodynamics in living systems is vastly more challenging than the quantum properties of solid-state physics and its various excitons and quasi-particles. Although these are now genetically and enzymatically encoded, the underlying fractal dynamics is a fundamental property of cosmological symmetry-breaking and abiogenesis. It introduces a cascade of quantum effects, in protein folding, allosteric active sites with tunnelling, membrane ionic and electron transport, and ultimately neurodynamics. Furthermore, biological processes are non-IID, not constituting independent, identically distributed quantum measurements, so they do not converge to the classical description and remain collectively quantum in nature throughout, spanning all or most aspects of neuronal excitability and metabolism.

 

This means that current theories of the interface between CNS neurodynamics and subjective conscious volition are all manifestly incomplete, and qualitatively and quantitatively inadequate to model or explain the brain-experience interface. Symbiotic Existential Cosmology has thus made a comprehensive review of these, including GNW (Dehaene et al.), ART (Grossberg), DQF (Freeman & Vitiello), ZPF (Keppler), AST (Graziano), CEMI (McFadden), FEM (Solms & Friston), IIT (Tononi & Koch), PEM (Poznanski et al.), as well as outliers like ORCH (Hameroff & Penrose). The situation facing TOEs of consciousness is, despite experimental progress, more parlous than that of physical TOEs, from supersymmetric, superstring and membrane theories to quantum loop gravity, which as yet show no signs of unification over multiple decades. In both fields, this requires a foundational rethink and a paradigm shift. Symbiotic Existential Cosmology provides this to both fields simultaneously.

 

To understand this biologically, we need to understand that the nature of consciousness as we know it, and all its key physical and biological features, arose in a single topological transition in the eucaryote endosymbiosis, when respiration became sequestered in the mitochondria and the cell membrane became freed for edge-of-chaos excitation and receptor-based social signalling, through the same processes that are key to human neurodynamics today. This in turn led to the action potential, via the flagellar escape reaction, and to the graded membrane potentials and neurotransmitter-receptor-based synaptic neural networks we see in neuronal excitation. It took another billion years before these purposive processes enabling sentience at the cellular level, in the quantum processes we now witness in vision, audition, olfaction and feeling sensation, became linked in the colonial neural networks illustrated by hydra, and later the more organised brains of arthropods, vertebrates and cephalopods. This means that a purely neural network view of cognition and consciousness is physically inadequate at the foundation. Moreover, the brain derives its complexity not just from our genome, which is vastly too small to generate the brain’s complexity, but from interactive processes of cell migration in the developing brain, which form a self-organising system through mutual neuronal recognition by neurotransmitter type and mutual excitation/inhibition.

 

Of these theories, GNW is the closest to a broad-brush-strokes, empirically researched account. Neural network theories like Grossberg’s ART generate crude necessary but insufficient conditions for consciousness, because they lack almost all the biological principles involved. Pure abstract theories like IIT do likewise. Specialised quantum theories like Hameroff & Penrose’s are untenable, both in current biology and fundamentally in evolutionary terms, because they have been contrived as a backpack of oddball quantum processes, such as quantum microtubular CAs, not confluent with evolutionary processes, using increasingly contrived speculation to make up for inadequacies, e.g. in linking cellular processes through condensates. ORCH is also objective reduction, so it cannot address conscious volition.

 

There is good empirical support for two processes in brain dynamics: (1) edge-of-chaos transitions from a higher-energy, more disordered dynamic to a less disordered attractor dynamic, which is also the basis of annealing in neural network models of a potential energy landscape; and (2) phase tuning between action potential timing in individual neurons and continuous local potential gradients, forming an analogue of quantum-uncertainty-based measurement of wave beats.

 

These mean that field and field-like theories such as ZPF, DQF and PEM all have a degree of plausibility complementing bare neural network descriptions. However, all these theories run into the problem of citing preferred physical mechanisms over the complex quantum-system picture manifest in tissue dynamics. ZPF cites the zero-point field, effectively conflating a statistical semi-classical treatment of QED with subjective consciousness as the quantum vacuum; it cites neurotransmitter molecular resonances at the synapse and periodic resonances in the brain as providing the link. DQF is well grounded in Freeman dynamics, but cites water-molecule structures, which are plausible but accessory and not easy to test. PEM cites quasi-polaritonic waves involving interaction between charges and dipoles, with an emphasis on delocalised orbitals, which are just one of many quantum-level processes prominently involved in respiration and photosynthesis, and makes a claim for "microfeels" as the foundation of a definition of precognitive information below the level of consciousness. It also restricts itself to multiscale thermodynamic holonomic processes, eliminating the quantum level, self-organised criticality and fractality.

 

Philosopher wins 25 year long bet with Neuroscientist (Lenharo 2023): In 1998, neuroscientist Christof Koch bet philosopher David Chalmers that the mechanism by which the brain’s neurons produce consciousness would be discovered by 2023. Both scientists agreed publicly on 23 June, at the annual meeting of the Association for the Scientific Study of Consciousness, that it is an ongoing quest, and declared Chalmers the winner.

 

What ultimately helped to settle the bet was a study testing two of the leading hypotheses about the neural basis of consciousness: integrated information theory (IIT) and global network workspace theory (GNWT). Consciousness is everything that a person experiences: what they taste, hear, feel and more. It is what gives meaning and value to our lives, Chalmers says. However, despite a vast effort, researchers still don’t understand how our brains produce it. “It started off as a very big philosophical mystery,” Chalmers adds. “But over the years, it’s gradually been transmuting into, if not a ‘scientific’ mystery, at least one that we can get a partial grip on scientifically.” IIT proposes that consciousness is a ‘structure’ in the brain formed by a specific type of neuronal connectivity that is active for as long as a certain experience, such as looking at an image, is occurring. This structure is thought to be found in the posterior cortex, at the back of the brain. GNWT, by contrast, suggests that consciousness arises when information is broadcast to areas of the brain through an interconnected network. The transmission, according to the theory, happens at the beginning and end of an experience and involves the prefrontal cortex, at the front of the brain. The results didn’t perfectly match either of the theories.

 

The position of Symbiotic Existential Cosmology is that none of these theories, and particularly those that depend on pure physical materialism, has any prospect of solving the hard problem, and particularly the hard problem extended to volition. Symbiotic Existential Cosmology therefore adopts a counter-strategy: adding an additional axiom to quantum cosmology that associates primal subjectivity and free will with an interface in each quantum, where “consciousness” is manifested in the special-relativistic space-time-extended wave function and "free will" is manifested in the intrinsic uncertainty of quantum collapse to the particle state. This primal subjectivity exists in germinal forms in unstable quantum-sensitive systems, such as butterfly-effect systems, and becomes intentional consciousness as we know it in the eucaryote transition.

 

This transforms the description of conscious dynamics into one in which subjectivity is compliant with determined perceived and cognitive factors, but utilises the brain state as a contextual environmental filter to deal with states of existential uncertainty threatening the survival of the organism. This is similar to AST, but without the utopian artificial-intelligence emphasis it shares with others such as ART, IIT and PEM. Key environmental survival questions are both computationally intractable and formally uncomputable, because the tiger that may pounce is also a conscious agent who can adapt their volitional strategy to unravel any computational "solution”. This provides a clean physical cut, in which subjective consciousness remains compliant with the determined boundary conditions realised by the cognitive brain, but has decision-making ability in situations when cellular or brain dynamics becomes unstable and quantum sensitive. No causal conflict thus arises between conscious intent, restricted to uncertainty, and physical causes related to the environmental constraints. It invokes a model of quantum reality where uncertainty is not merely random, but is a function of unfolding environmental uncertainty as a whole. This is the survival advantage through which cellular consciousness became fixed in evolution, by anticipating existential crises, and it has been conserved ever since, complementing cerebral cognition in decision-making. This is reflected experientially in how we make intuitive "hunch" overall decisions, and physically in certain super-causal forms of the transactional QM interpretation and super-determinism, both of which can have non-random quasi-ergodic hidden-variable interpretations and are compatible with free will.

 

The final and key point is that Symbiotic Existential Cosmology is biospherically symbiotic. Through this, the entire cosmology sees life and consciousness as the ultimate climactic crisis of living complexity interactively consummating the universe, inherited from cosmological symmetry-breaking, in what I describe as conscious paradise on the cosmic equator in space-time. Without the symbiosis factor, humanity as we stand will not survive a self-induced Fermi extinction caused by a mass extinction of biodiversity, so the cosmology is both definitively and informatively accurate, and redemptive in the long-term survival of the generations of life over evolutionary time scales.

 

Susan Pockett (2013) explains the history of these diverging synaptic and field theoretic views:

 

Köhler (1940) did put forward something he called “field theory”. Köhler only ever referred to electric fields as cortical correlates of percepts. His field theory was a theory of brain function. Lashley’s test was to lay several gold strips across the entire surface of one monkey’s brain, and insert about a dozen gold pins into a rather small area of each hemispheric visual cortex of another monkey. The idea was that these strips or pins should short-circuit the hypothesized figure currents, and thereby (if Köhler’s field theory was correct) disrupt the monkeys’ visual perception. The monkeys performed about as well on this task after insertion of the pins or strips as they had before (although the one with the inserted pins did “occasionally fail to see a small bit of food in the cup”) and Lashley felt justified in concluding from this that “the action of electric currents, as postulated by field theory, is not an important factor in cerebral integration.” Later Roger Sperry did experiments similar to Lashley’s, reporting similarly negative results.

 

Intriguingly, she notes that Libet, whom we shall meet later, despite declaring that the readiness potential preceded consciousness, also proposed a near-supernatural field theory:

 

Libet proposed in 1994 that consciousness is a field which is “not ... in any category of known physical fields, such as electromagnetic, gravitational etc” (Libet 1994). In Libet’s words, his proposed Conscious Mental Field “may be viewed as somewhat analogous to known physical fields ... however ... the CMF cannot be observed directly by known physical means.”

 

Pockett (2014) describes what she calls “process theories”:

 

The oldest classification system has two major categories, dualist and monist. Dualist theories equate consciousness with abstracta. Monist (aka physicalist) theories equate it with concreta. A more recent classification (Atkinson et al., 2000) divides theories of consciousness into process theories and vehicle theories: it says “Process theories assume that consciousness depends on certain functional or relational properties of representational vehicles, namely, the computations in which those vehicles engage.” The relative number of words devoted to process and vehicle theories in this description hints that at present, process theories massively dominate the theoretical landscape. But how sensible are they really?

 

She then discusses both Tononi & Koch’s (2015) IIT integrated information theory and Chalmers' (1996) multi-state “information spaces", and lists the following objections:

 

First, since information is explicitly defined by everyone except process theorists as an objective entity, it is not clear how process theorists can reasonably claim either that information in general, or that any subset or variety of information in particular, is subjective. No entity can logically be both mind-independent and the very essence of mind. Therefore, when process theorists use the word “information” they must be talking about something quite different from what everyone else means by that word. Exactly what they are talking about needs clarification. Second, since information is specifically defined by everybody (including Chalmers) as an abstract entity, any particular physical realization of information does not count as information at all. Third, it is a problem at least for scientists that process theories are untestable. The hypothesis that a particular brain process correlates with consciousness can certainly be tested empirically. But the only potentially testable prediction of theories that claim identity between consciousness and a particular kind of information or information processing is that this kind of information or information processing will be conscious no matter how it is physically instantiated.

 

These critiques will apply to a broad range of the theories of consciousness we have explored, including many in the figure above that do not limit themselves to the neural correlate of consciousness.

 

Theories of consciousness have, in the light of our understanding of brain processes gained from neuroscience, become heavily entwined with the objective physics and biology of brain function. Michel & Doerig (2021), in reviewing local and global theories of consciousness summarise current thinking, illustrating this dependence on neuroscience for understanding the enigmatic nature of consciousness.

 

Localists hold that, given some background conditions, neural activity within sensory modules can give rise to conscious experiences. For instance, according to the local recurrence theory, reentrant activity within the visual system is necessary and sufficient for conscious visual experiences. Globalists defend that consciousness involves the large-scale coordination of a variety of neuro-cognitive modules, or a set of high-level cognitive functions such as the capacity to form higher-order thoughts about one’s perceptual states. Localists tend to believe that consciousness is rich, that it does not require attention, and that phenomenal consciousness overflows cognitive access. Globalists typically hold that consciousness is sparse, requires attention, and is co-extensive with cognitive access.

 

According to local views, a perceptual feature is consciously experienced when it is appropriately represented in sensory systems, given some background conditions. As localism is a broad family of theories, what “appropriately” means depends on the local theory under consideration. Here, we consider only two of the most popular local theories: the micro-consciousness theory, and the local recurrence theory, focusing on the latter. According to the micro-consciousness theory “processing sites are also perceptual sites”. This theory is extremely local. The simple fact of representing a perceptual feature is sufficient for being conscious of that feature, given some background conditions. One becomes conscious of individual visual features before integrating them into a coherent whole. According to the local recurrence theory, consciousness depends on "recurrent" activity between low- and higher-level sensory areas. Representing a visual feature is necessary, but not sufficient for being conscious of it. The neural vehicle carrying that representation must also be subject to the right kind of recurrent dynamics. For instance, consciously perceiving a face consists in the feedforward activation of face selective neurons, quickly followed by a feedback signal to lower-level neurons encoding shape, color, and other visual features of the face, which in turn modulate their activity as a result.

 

The authors also stress post-dictive effects as a necessary non-local condition for consciousness which may last a third of a second after an event.

 

In postdictive effects, conscious perception of a feature depends on features presented at a later time. For instance, in feature fusion two rapidly successive stimuli are perceived as a single entity. When a red disk is followed by a green disk after 20ms, participants report perceiving a single yellow disk, and no red or green disk at all. This is a postdictive effect. Both the red and green disks are required to form the yellow percept. The visual system must store the representation of the first disk until the second disk appears to integrate both representations into the percept that subjects report having. Many other postdictive effects in the range of 10-150ms have been known for decades and are well documented. Postdictive effects are a challenge for local theories of consciousness. Features are locally represented in the brain but the participants report that they do not see those features.

 

This can have the implication that unconscious brain processes always precede conscious awareness, leading to the conclusion that our conscious awareness is just a post-constructed account of unconscious processes generated by the brain, and that subjective consciousness, along with the experience of volition, has no real basis, leading to a purely physically materialist account of subjective consciousness as merely an internal model of reality constructed by the brain.

 

Pockett (2014), in supporting her own field theory of consciousness, notes structural features that may exclude certain brain regions from being conscious in their own right:

 

It is now well accepted that sensory consciousness is not generated during the first, feed-forward pass of neural activity from the thalamus through the primary sensory cortex. Recurrent activity from other cortical areas back to the primary or secondary sensory cortex is necessary. Because the feedforward activity goes through architectonic Lamina 4 of the primary sensory cortex (which is composed largely of stellate cells and thus does not generate synaptic dipoles) while recurrent activity operates through synapses on pyramidal cells (which do generate dipoles), the conscious em patterns resulting from recurrent activity in the ‘early’ sensory cortex have a neutral area in the middle of their radial pattern. The common feature of brain areas that can not generate conscious experience – which are now seen to include motor cortex as well as hippocampus, cerebellum and any sub-cortical area – is that they all lack an architectonic Lamina 4 [layer 4 of the cortex].

 

By contrast with theories of consciousness based on the brain alone, Symbiotic Existential Cosmology sees subjectivity as being a cosmological complement to the physical universe. It thus seeks to explain subjective conscious experience as a cosmological, rather than just a purely biological, phenomenon, in a way which gives validation and real meaning to our experience of subjective conscious volition over the physical universe, expressed in all our behavioural activities and our sense of personal responsibility for our actions, and which leads towards a state of biospheric symbiosis, as climax living diversity across the generations of life as a whole, ensuring our continued survival.

 

Seth & Bayne (2022) provide a detailed review of theories of consciousness (ToCs) from the perspective of neuroscience. They investigate four key types of ToC, as listed below, and also provide Table 1 below, listing a diverse range of ToCs.

 

(1) Higher-order theories. The claim uniting all these is that a mental state is conscious in virtue of being the target of a certain kind of meta-representational state. These are not representations that occur higher or deeper in a processing hierarchy, but those that have as their targets other (implicitly subjective) representations.

(2) Global workspace theories originate from blackboard architectures, in which a “blackboard” is a centralized resource through which specialised processors share and receive information (see the sketch after this list). The first was framed at a cognitive level and proposed that conscious mental states are those that are ‘globally available’ to a wide range of cognitive processes, including attention, evaluation, memory and verbal report. Their core claim is that it is wide accessibility of information to such systems that constitutes conscious experience. This has been developed into ‘global neuronal workspace theory’.

(3) Integrated information theory advances a mathematical approach to characterizing phenomenology. It starts by proposing axioms about the phenomenological character of conscious experiences (that is, properties that are taken to be self-evidently true and general to consciousness), and from these it derives claims about the properties that any physical substrate of consciousness must satisfy, proposing that physical systems that instantiate these properties necessarily also instantiate consciousness.

(4) Re-entry and predictive processing theories. The first associate conscious perception with top-down (recurrent, re-entrant) signalling. The second group are not primarily ToCs but more general accounts of brain (and body) function that can be used to formulate explanations and predictions regarding properties of consciousness.
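
A minimal sketch of the blackboard idea in (2) (an illustrative pseudo-architecture only; the module names and the salience rule are assumptions, not any published model): specialised processors compete for a limited-capacity workspace, and the winning content is broadcast back to every module.

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    inbox: list = field(default_factory=list)   # receives global broadcasts

    def propose(self, stimulus):
        """Offer a candidate content with a salience score (toy rule)."""
        content = f"{self.name}:{stimulus}"
        salience = len(stimulus) if self.name in stimulus else 1
        return salience, content

def workspace_cycle(modules, stimulus):
    # Competition: the limited-capacity workspace admits one winner (bottleneck)
    salience, winner = max(m.propose(stimulus) for m in modules)
    # Ignition and broadcast: the winning content becomes globally available
    for m in modules:
        m.inbox.append(winner)
    return winner

mods = [Module("vision"), Module("audition"), Module("memory")]
print(workspace_cycle(mods, "vision-flash"))  # 'vision:vision-flash' wins and is broadcast
print(mods[1].inbox)                          # audition received the broadcast
```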

 

They note a version of the measurement problem: to test a theory of consciousness, we need to be able to reliably detect both consciousness and its absence. At present, experimenters tend to rely on a subject’s introspective capacities to identify their states of consciousness. However, they claim this approach is problematic. First, they claim the reliability of introspection is questionable. This is a debatable claim, which tends to lead to devaluing subjective reports, possibly unfairly, in an emphasis on “objective observations”, rendering subjective consciousness as having an orphan status and defeating the very purpose of ToCs in relation to the hard problem. They also note infants, individuals with brain damage and non-human animals, who might be conscious but are unable to produce introspective reports, claiming there is a pressing need to identify non-introspective ‘markers’ or ‘signatures’ of consciousness, such as the perturbational complexity index (PCI), the optokinetic nystagmus response, or distinctive bifurcations in neural dynamics, as markers of either general or specific kinds of conscious contents. These, however, are purely functional measures, not measures of what consciousness actually is as experienced phenomena.

 

Table 1: The full spread of TOCs, as listed in Seth & Bayne (2022).

 

Higher-order theory (HOT): Consciousness depends on meta-representations of lower-order mental states

Self-organizing meta-representational theory: Consciousness is the brain’s (meta-representational) theory about itself

Attended intermediate representation theory: Consciousness depends on the attentional amplification of intermediate-level representations

Global workspace theories (GWTs): Consciousness depends on ignition and broadcast within a neuronal global workspace, in which fronto-parietal cortical regions play a central, hub-like role

Integrated information theory (IIT): Consciousness is identical to the cause–effect structure of a physical substrate that specifies a maximum of irreducible integrated information

Information closure theory: Consciousness depends on non-trivial information closure with respect to an environment at particular coarse-grained scales

Dynamic core theory: Consciousness depends on a functional cluster of neural activity combining high levels of dynamical integration and differentiation

Neural Darwinism: Consciousness depends on re-entrant interactions reflecting a history of value-dependent learning events shaped by selectionist principles

Local recurrency: Consciousness depends on local recurrent or re-entrant cortical processing and promotes learning

Predictive processing: Perception depends on predictive inference of the causes of sensory signals; provides a framework for systematically mapping neural mechanisms to aspects of consciousness

Neuro-representationalism: Consciousness depends on multilevel neurally encoded predictive representations

Active inference: Although views vary, in one version consciousness depends on temporally and counterfactually deep inference about self-generated actions

Beast machine theory: Consciousness is grounded in allostatic control-oriented predictive inference

Neural subjective frame: Consciousness depends on neural maps of the bodily state providing a first-person perspective

Self comes to mind theory: Consciousness depends on interactions between homeostatic routines and multilevel interoceptive maps, with affect and feeling at the core

Attention schema theory: Consciousness depends on a neurally encoded model of the control of attention

Multiple drafts model: Consciousness depends on multiple (potentially inconsistent) representations rather than a single, unified representation available to a central system

Sensorimotor theory: Consciousness depends on mastery of the laws governing sensorimotor contingencies

Unlimited associative learning: Consciousness depends on a form of learning which enables an organism to link motivational value with stimuli or actions that are novel, compound and non-reflex-inducing

Dendritic integration theory: Consciousness depends on integration of top-down and bottom-up signalling at a cellular level

Electromagnetic field theory: Consciousness is identical to physically integrated, and causally active, information encoded in the brain’s global electromagnetic field

Orchestrated objective reduction: Consciousness depends on quantum computations within microtubules inside neurons

 

 

In addressing the ‘hard problem’, they distinguish the easy problems, concerned with the functions and behaviours associated with consciousness, from the hard problem, which concerns the experiential dimensions of consciousness, noting that what makes the hard problem hard is the ‘explanatory gap’: the intuition that there seems to be no prospect of a fully reductive explanation of experience in physical or functional terms.

 

Integrated information theory and certain versions of higher-order theory address the hard problem directly, while other theories such as global workspace theories focus on the functional and behavioural properties normally associated with consciousness, rather than the hard problem, noting that some predictive processing theorists aim to provide a framework in which various questions about the phenomenal properties of consciousness can be addressed, without attempting to account for the existence of phenomenology — an approach called the ‘real problem’.

 

They posit that a critical question is whether the hard problem is indeed a genuine challenge that ought to be addressed by a science of consciousness, or whether it ought to be dissolved rather than solved, as the solve-the-easy-problems-first strategy implies. The ‘dissolvers’ argue that the appearance of a distinctively hard problem derives from the peculiar features of the ‘phenomenal concepts’ that we employ in representing our own conscious states, citing illusionism, in which we do not have phenomenal states but merely represent ourselves as having such states, and speculating that the grip of the hard problem may loosen as our capacity to explain, predict and control both phenomenological and functional properties of consciousness expands, thus effectively siding with the dissolvers.

 

In conclusion, they note that at present ToCs are generally used as ‘narrative structures’ within the science of consciousness. Although ToCs inform the interpretation of neural and behavioural data, they demur that it is still rare for a study to be designed with questions of theory validation in mind. While there is nothing wrong with employing theories in this manner, they claim future progress will depend on experiments that enable ToCs to be tested and disambiguated. This is the kind of ideal we can expect physicalist neuroscientists to veer into, but it runs the risk of ‘sanitising’ consciousness, just as behaviourism did in psychology, to its nemesis.

 

Pivotal are two questions. One is the physicalist quest to use the easy functionalist notions of consciousness to explain away the hard problem of consciousness, which typifies Levine’s explanatory gap, Nagel’s “what it is like to be” something conscious, and Chalmers’ notion of “how we have phenomenal first-person subjective experiences”. This is really not about the general questions of consciousness, such as “consciousness of” something, which can be viewed as a form of global attention that can be described functionally, or more specific notions like self-consciousness, i.e. awareness of a form of functional agency, both of which could apply equally to artificial intelligence.

 

This becomes clear when we examine the authors’ choice of key theories of consciousness, several of which are not targeted at the hard problem at all, as they point out, knowing that Seth for example favours an ultimate functional explanation which will “dissolve” the hard problem, even if it is a form of identity theory, or dual aspect monism.

 

Really we need to distinguish consciousness from subjective consciousness: the ability to have subjective experiences, and thus subjectivity itself and its cosmological status, rather than the mere functionality of consciousness as a global attentive process. This is why Symbiotic Existential Cosmology deals directly with primal subjectivity as a cosmological complement to the physical universe, to capture the notion of subjectivity squarely and independently of consciousness. This leaves full consciousness as an emergent property of the eucaryote endosymbiosis that results in the cellular mechanisms of edge-of-chaos excitable membrane and informational membrane signalling using neurotransmitters, both of which are functionally emergent properties but with non-classical implications in the quantum universe.

 

We can immediately see this is a critically important step when we see the above research being cited as a basis to determine whether future AI developments would be considered “conscious”, as Butlin et al. (2023) cite precisely the functional expressions of the same theories of consciousness as above, to provide criteria by which a purely objective physical process could become “conscious”, in view of its functional properties in recurrent processing, global workspace, higher-order processes, attention schemas, predictive processing and functional agency, none of which address the hard problem, let alone the extended hard problem of subjective volition over the physical universe.

 

Butlin et al. note: This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness. From these theories we derive “indicator properties” of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.

 

Recurrent processing theory

RPT-1: Input modules using algorithmic recurrence

RPT-2: Input modules generating organised, integrated perceptual representations

Global workspace theory

GWT-1: Multiple specialised systems capable of operating in parallel (modules)

GWT-2: Limited capacity workspace, entailing a bottleneck in information flow and a selective attention mechanism

GWT-3: Global broadcast: availability of information in the workspace to all modules

GWT-4: State-dependent attention, giving rise to the capacity to use the workspace to query modules in succession to perform complex tasks

Computational higher-order theories

HOT-1: Generative, top-down or noisy perception modules

HOT-2: Metacognitive monitoring distinguishing reliable perceptual representations from noise

HOT-3: Agency guided by a general belief-formation and action selection system, and a strong disposition to update beliefs in accordance with the outputs of metacognitive monitoring

HOT-4: Sparse and smooth coding generating a “quality space”

Attention schema theory

AST-1: A predictive model representing and enabling control over the current state of attention

Predictive processing

PP-1: Input modules using predictive coding

Agency and embodiment

AE-1: Agency: Learning from feedback and selecting outputs so as to pursue goals, especially where this involves flexible responsiveness to competing goals

AE-2: Embodiment: Modeling output-input contingencies, including some systematic effects, and using this model in perception or control

Table 2: Indicator Properties (Butlin et al. 2023).
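To make the flavour of these indicators concrete, the toy sketch below – an illustrative assumption, not Butlin et al.’s formalism – implements the GWT-style indicators in miniature: parallel specialised modules (GWT-1) compete for a limited-capacity workspace (GWT-2) whose winning content is broadcast globally (GWT-3):

```python
# A toy sketch (not Butlin et al.'s formalism) of the global-workspace
# indicators GWT-1..3: parallel modules compete for a limited-capacity
# workspace, and the winning content is broadcast back to all modules.
import numpy as np

rng = np.random.default_rng(1)

class Module:
    def __init__(self, name):
        self.name, self.broadcast = name, None
    def propose(self):
        # each specialised module offers content with a salience score
        return self.name, rng.random()
    def receive(self, content):
        self.broadcast = content        # global availability (GWT-3)

modules = [Module(n) for n in ("vision", "audition", "memory", "motor")]

for step in range(3):
    proposals = [m.propose() for m in modules]          # parallel (GWT-1)
    winner = max(proposals, key=lambda p: p[1])         # bottleneck (GWT-2)
    for m in modules:
        m.receive(winner[0])                            # broadcast (GWT-3)
    print(f"step {step}: workspace holds '{winner[0]}'")
```

The point of the sketch is precisely the one made above: every step is a purely objective computational process, leaving the hard problem untouched.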

Polák & Marvan (2019), in a different kind of “dissolving” approach to the hard problem, advance a dual theory, in which scientists study pairs of phenomenal mental states of which one is and the other is not conscious, the presence or absence of consciousness being their sole distinguishing feature, claiming this facilitates unpacking the unitary nature of the hard problem, thus partly decomposing it.

 

They note that Chalmers (2018, 30) accepts unconscious sensory qualities, saying such a move is: perhaps most promising for deflating the explanatory gap tied to qualities such as redness: if these qualities [...] can occur unconsciously, they pose less of a gap. As before, however, the core of the hard problem is posed not by the qualities themselves but by our experience of these qualities: roughly, the distinctive phenomenal way in which we represent the qualities or are conscious of them.

 

They cite two examples of the separation of brain processes forming a neural correlate of conscious experience. The first is unilateral visual neglect caused by localised brain damage such as a stroke, where information in the neglected hemifield appears to unconsciously influence a person’s choices:

 

Unilateral visual neglect, the inability to see objects in one half of the visual field, might serve as an illustration. In the most famous neglect example (Marshall and Halligan, 1988), a person cannot consciously discriminate between two depicted houses. The houses are identical except that one of them is on fire in that half of the visual field the person, due to neglect, cannot see. Although the person was constantly claiming that both houses look the same to her, she repeatedly said she would prefer to live in the house not consumed by the flames.

 

A second example cites Lamme’s (2006, 2015) theory of brain processes that may constitute separate phases in the generation of a conscious experience, permitting clean separation of the brain mechanism that creates phenomenal content from the mechanism that “pushes this content into consciousness”:

 

The theory revolves around the notion of local recurrent neural activity within the cortex and decomposes the formation of conscious visual content into two phases. The first one is called fast feedforward sweep. It is a gradual activation of different parts of the visual system in the brain. The dual view interprets this process as the formation of the unconscious but phenomenal mental state. A later process, that may or may not occur, is called recurrent activity. It is a neural feedback processing during which higher visual centers send the neural signal back to the lower ones. The time delay between the initiation of the first and the second process might be seen as corresponding to the difference between processing of the phenomenal character (feedforward sweep) and making and maintaining this phenomenal character conscious (recurrent processing).

 

They note that in several other theories already listed, including Global Neural Workspace theory, thalamo-cortical circuits, and apical amplification within cortical pyramidal neurons, the phase of phenomenal content creation and the phase of this content becoming conscious are distinguishable. But all these theories describe purely physical brain processes, imbued with subjective aspects only by inference, so we need to look carefully at how the authors treat subjectivity itself. Essentially they are making a direct attack on the unitary nature of subjective conscious experience by attempting to separate consciousness from phenomenal experience, so that subjectivity is held hostage in the division:

 

What constantly fuels this worry, we believe, is taking the conscious subjective phenomenal experience to be something monolithic. The peculiar nature of subjective qualities and their being conscious comes as a package and it is difficult to conceive how science might begin explaining it.  … The conscious subjective experience is being felt as something unitary, we grant that. But that does not mean that if we look behind the subjective level and try to explain how such unitary experience arises, the explanation itself has to have unitary form. … Awareness in this sense is simply the process, describable in neuroscientific terms, of making the sensory qualities conscious for the subject. We could then keep using the term “consciousness” for the subjectively felt unitary experience, while holding that in reality this seemingly unitary thing is the result of an interaction between the neural processes constituting the phenomenal contents and the neural processes constituting awareness.

 

This is effectively a form of physicalist illusionism (Frankish 2017), because the claim made is that subjective experience is falsely represented as integrated when the underlying physical reality is subdivided by the dual interpretation. It illustrates how functionalist theories of consciousness can be misused in an attempt to form a bridgehead decomposing the unitarity of subjective consciousness into interacting divisible physical systems, simply because multiple physical processes are held to be functional, or temporally sequential, components of the associated integrated brain processing state. The trouble with this is that these functional processes can invoke an integrated conscious experience only when they are functionally confluent, so we cannot actually separate the “fast feedforward sweep” from the “recurrent activity” in generating a real conscious experience, and pathological cases like unilateral visual neglect provide no evidence that healthy integrated conscious brain processes can be so decomposed into dual states.

 

By contrast with theories of consciousness based on the physical brain alone, in Symbiotic Existential Cosmology subjectivity is itself a primal cosmological complement to the physical universe. It thus explains subjective conscious experience as a cosmological, rather than just a purely biological or neuroscientific phenomenon. This gives validation and real meaning to our experience of subjective conscious volition over the physical universe, expressed in all our behavioural activities and in our sense of personal responsibility for our actions, and it leads towards a state of biospheric symbiosis as climax living diversity across the generations of life as a whole, ensuring our continued survival.

  

Psychotic Fallacies of the Origin of Consciousness

 

Theories of consciousness that are poles apart from any notion of the subjectivity of conscious experience, or the hard problem of consciousness and the explanatory gap in the physical description, arise from treating consciousness merely as a type of culturally derived cognitive process. Such theories fall into the philosopher’s trap of confining the nature of the discourse to rational processes and arguments, which fail to capture the raw depths of subjective experience characteristic of mystical, shamanic and animistic cultures.

 

In "The Origin of Consciousness in the Bicameral Mind”, Julian Jaynes (1976, 1986) claimed human “ancestors", as late as the Ancient Greeks did not consider emotions and desires as stemming from their own minds but as the consequences of actions of gods external to themselves. The theory posits that the human mind once operated in a bicameral state in which cognitive functions were divided between one part of the brain which appears to be "speaking", and a second part which listens and obeys and that the breakdown of this division gave rise to “consciousness” in humans. He used the term "bicameral" metaphorically to describe a mental state in which the right hemisphere's experiences were transmitted to the left hemisphere through auditory hallucinations.  In the assumed bicameral phase, individuals lacked self-awareness and introspection. Instead of conscious thought, they heard external voices or "gods" guiding their actions and decisions. Jaynes claimed this form of consciousness, devoid of meta-consciousness and autobiographical memory, persisted until about 3,000 years ago, when societal changes led to the emergence of our current conscious mode of thought. Auditory hallucinations experienced by those with schizophrenia, including command hallucinations, paralleled the external guidance experienced by bicameral individuals implying mental illness was a bicameral remnant.

 

To justify his claim, he highlighted instances in ancient texts like the Iliad and the Old Testament where he claimed there was no evidence of introspection or self-awareness, and noted that gods in ancient societies were numerous and anthropomorphic, reflecting the personal nature of the external voices guiding individuals. However the Epic of Gilgamesh, copies of which are many centuries older than even the oldest passages of the Old Testament, describes introspection and other mental processes.

 

According to Jaynes, language is a necessary but not sufficient condition for consciousness: language existed thousands of years earlier, but consciousness could not have emerged without it. Williams (2010) defends the notion of consciousness as a social–linguistic construct learned in childhood, structured in terms of lexical metaphors and narrative practice. Ned Block's (1981) criticism in review is direct – that it is "ridiculous" to suppose that consciousness is a cultural construction.

 

Jaynes argued that the breakdown of the bicameral mind was marked by societal collapses and environmental challenges. As people lost contact with external voices, practices like divination and oracles emerged as attempts to reconnect with the guidance they once received. However this shows an ethnocentric, rationalist lack of awareness of how earlier animistic cultures perceived the natural world, in which both humans and natural processes like storms, rivers and trees were imbued with spirits that were interacted with – spirits by no means regarded as voices humans had to blindly obey, but ones with which they were in dynamic interaction as sentient beings. Diverse existing cultures, from the founding San to the highly evolved Maori, practice animistic beliefs, actually and metaphorically; they were not influenced by political upheavals at the periphery of founding urban cultures, and they can appreciate their world views in both rational and spiritual terms, while at all times being as fully integrated in their conscious experiences as modern dominant cultures. We know that doctrinal religions have evolved from mystical and animistic roots as a means to hold together larger urban societies, but these are no more rational as beliefs. Neither are polytheists more bicameral in their thinking than monotheists, just less starkly absolute. Neither is it true that intelligent primates display evidence of a bicameral mind; rather they display a fully adapted social intelligence, attuned by social evolution to facilitate their strategic survival as consciously aware intentional agents.

 

McGilchrist (2009) reviews scientific research into the complementary roles of the brain's hemispheres, together with cultural evidence, in his book "The Master and His Emissary", proposing that, since the time of Plato, the left hemisphere of the brain (the "emissary" in the title) has increasingly taken over from the right hemisphere (the "master"), to our detriment. McGilchrist felt that Jaynes's hypothesis was "the precise inverse of what happened": rather than a shift away from bicameral mentality, there evolved a separation of the hemispheres into bicameral mentality. This has far more reality value, given that the dominance of rational discourse over subjective conscious experience has risen to the degree that many people cannot rationally distinguish themselves from computational machines.

 

Field and Wave Theories of Consciousness v Connectome Networks and Action Potentials

 

Brain dynamics are a function of a variety of interacting processes. Major pyramidal neuron axon circuits functionally connect distant regions of the cortex to enable integrated processing, forming the axonal connectome of the networked brain, driven by individual pulse-coded action potentials. Complementing this are waves of continuous potential in the cortical brain tissue, indirectly sampled by electrodes on the scalp in the electroencephalogram (EEG) and via the magnetic effects of currents in the MEG. While the network view of brain activity is based on individual action potentials and regards the EEG brain waves as just tissue excitation averages, there is increasing evidence of phase coupling between the two, so that the discrete action potentials and the continuous tissue potentials are in mutual feedback. The complex interaction of these can be seen in Qasim et al. (2021), Cariani & Baker (2022) and Pinotsis et al. (2023). This leads to two views of brain dynamics: the networked view based on the connectome, and field theories centred on continuous tissue gradients and the folded tissue anatomy.
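To make the notion of spike–field phase coupling concrete, the sketch below is a minimal illustration with simulated data and assumed parameters, not the pipeline of any of the studies cited: the field potential is band-passed, its instantaneous phase is taken via the Hilbert transform, and the clustering of spikes at a preferred phase is summarised by the phase-locking value:

```python
# A minimal sketch of how spike-field phase coupling is typically
# quantified: band-pass the field potential, take its analytic phase via
# the Hilbert transform, and measure how tightly spikes cluster at a
# preferred phase (the phase-locking value). Parameters are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                          # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)         # 10 s of simulated recording

# Simulated "EEG-like" field: a 40 Hz gamma rhythm buried in noise
lfp = np.sin(2 * np.pi * 40 * t) + 0.8 * np.random.randn(t.size)

# Simulated spike train that preferentially fires near the gamma trough
p_fire = 0.02 * (1 + np.cos(2 * np.pi * 40 * t - np.pi))
spikes = np.random.rand(t.size) < p_fire

# Band-pass the field around gamma, then extract instantaneous phase
b, a = butter(4, [35 / (fs / 2), 45 / (fs / 2)], btype="band")
phase = np.angle(hilbert(filtfilt(b, a, lfp)))

# Phase-locking value: length of the mean resultant vector of spike phases
spike_phases = phase[spikes]
plv = np.abs(np.mean(np.exp(1j * spike_phases)))
print(f"{spikes.sum()} spikes, phase-locking value = {plv:.2f}")
```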

 

Pang et al. (2023) have compared the influence of these two physical features: the geometry of the outer folds of the cerebral cortex, where most higher-level brain activity occurs, and the connectome, the web of nerves that links distinct regions of the cerebral cortex. Excited neurons in the cerebral cortex can communicate their state of excitation to their immediate neighbours on the surface. But each neuron also has a long axon that connects it to a faraway region within or beyond the cortex, allowing neurons to send excitatory messages to distant brain cells. In the past two decades, neuroscientists have painstakingly mapped this web of connections – the connectome – in a raft of organisms, including humans. The brain’s neuronal excitation can also come in waves, which can spread across the brain and travel back in periodic oscillations.

 

They found that the shape of the outer surface was a better predictor of brainwave data than was the connectome, contrary to the paradigm that the connectome has the dominant role in driving brain activity.  Predictions from neural field theory, an established framework for modelling large-scale brain activity, suggest that the geometry of the brain may represent a more fundamental constraint on dynamics than complex interregional connectivity.

 

Fig 79b: Comparison of the influences of connectome-network-based processing, volumetric wave modes in the cortex, and exponential distance rule (EDR) network connectivity, finding geometric eigenmodes to be predominant.

 

They calculated the modes of brainwave propagation for the cortical surface and for the connectome. As a model of the connectome, they used information gathered from diffusion magnetic resonance imaging (MRI), which images brain anatomy. They then looked at data from more than 10,000 records of functional MRI, which images brain activity based on blood flow. The analysis showed that brainwave modes in the resting brain, as well as during a variety of activities – such as the processing of visual stimuli – were better explained by the surface geometry model than by the connectome one.
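The gist of the eigenmode analysis can be conveyed in a short sketch. The following is a toy stand-in, not Pang et al.’s pipeline: a flat grid replaces the cortical mesh and the activity map is synthetic, but the logic – compute the lowest Laplacian eigenmodes of the surface and regress the activity map onto them – is the same:

```python
# A minimal sketch (an assumption, not Pang et al.'s code) of the
# geometric eigenmode idea: compute the lowest Laplacian eigenmodes of a
# surface (here a 2-D grid standing in for the cortical mesh) and use
# them as a basis to reconstruct a spatial activity map.
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import eigsh

n = 32                                  # grid side length, illustrative
# 1-D second-difference operator, combined via Kronecker sums into the
# graph Laplacian of an n x n grid (a crude stand-in for a cortical mesh)
d1 = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
L = kron(d1, identity(n)) + kron(identity(n), d1)

# Lowest k eigenmodes ("geometric eigenmodes" of the surface)
k = 20
vals, modes = eigsh(L.tocsc(), k=k, sigma=0, which="LM")

# A synthetic "activity map": a smooth bump plus noise
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
activity = np.exp(-((x - 0.3) ** 2 + y ** 2) / 0.2).ravel()
activity += 0.05 * np.random.randn(activity.size)

# Least-squares fit of the map onto the eigenmode basis
coef, *_ = np.linalg.lstsq(modes, activity, rcond=None)
recon = modes @ coef
r = np.corrcoef(activity, recon)[0, 1]
print(f"reconstruction correlation with {k} modes: {r:.2f}")
```

In the actual study the basis is formed from Laplace–Beltrami eigenmodes of the individual cortical surface, and reconstruction accuracy is compared against connectome-derived and EDR-derived bases.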

 

There are a number of field theories of conscious brain dynamics, each with its own favoured process.

 

 Benjamin Libet (1994), the controversial discoverer of the readiness potential, notes the extreme contrast between the integral nature of conscious experience and the complex localised nature of network-based neurodynamics, leaning towards a field theory as the only plausible explanation:

 

One of the most mysterious and seemingly intractable problems in the mind-brain relationship is that of the unitary and integrated nature of conscious experience. We have a brain with an estimated 100 billion neurons, each of which may have thousands of interconnections with other neurons. It is increasingly evident that many functions of cerebral cortex are localized. This is not merely true of the primary sensory areas for each sensory modality, of the motor areas which command movement, and of the speech and language areas, all of which have been known for some time. Many other functions now find other localized representations, including visual interpretations of colour, shape and velocity of images, recognition of human faces, preparation for motor actions, etc. Localized function appears to extend even to the microscopic level within any given area. The cortex appears to be organized into functional and anatomical vertical columns of cells, with discrete interconnections within the column and with other columns near and far, as well as with selective subcortical structures.

 

In spite of the enormously complex array of localized functions and representations, the conscious experiences related to or elicited by these neuronal features have an integrated and unified nature. Whatever does reach awareness is not experienced as an infinitely detailed array of widely individual events. It may be argued that this amazing discrepancy between particularized neuronal representations and unitary integrated conscious experiences should simply be accepted as part of a general lack of isomorphism between mental and neural events. But that would not exclude the possibility that some unifying process or phenomenon may mediate the profound transformation in question.

 

The general problem had been recognized by many others, going back at least to Sherrington (1940) and probably earlier. Eccles (in Popper and Eccles, 1977, p. 362) specifically proposed that the experienced unity comes not from a neurophysiological synthesis but from the proposed integrating character of the self-conscious mind. This was proposed in conjunction with a dualist-interactionist view in which a separate non-material mind could detect and integrate the neuronal activities. Some more monistically inclined neuroscientists have also been arriving at related views, i.e. that integration seems to be best accountable for in the mental sphere even if one views subjective experience as an inner quality of the brain "substrate" (as in "identity theory") or as an emergent property of it. There has been a growing consensus that no single cell or group of cells is likely to be the site of a conscious experience, but rather that conscious experience is an attribute of a more global or distributed function of the brain.

 

A second apparently intractable problem in the mind-brain relationship involves the reverse direction. There is no doubt that cerebral events or processes can influence, control and presumably "produce" mental events, including conscious ones. The reverse of this, that mental processes can influence or control neuronal ones, has been generally unacceptable to many scientists on (often unexpressed) philosophical grounds. Yet, our own feelings of conscious control of at least some of our behavioural actions and mental operations would seem to provide prima facie evidence for such a reverse interaction, unless one assumes that these feelings are illusory. Eccles (1990; Popper and Eccles, 1977) proposed a dualistic solution, in which separable mental units (called psychons) can affect the probability of presynaptic release of transmitters. Sperry (1952, 1985, 1980) proposed a monistic solution, in which mental activity is an emergent property of cerebral function; although the mental is restrained within a macro-deterministic framework, it can "supervene", though not "intervene", in neuronal activity. However, both views remain philosophical theories, with explanatory power but without experimentally testable formats. As one possible experimentally testable solution to both features of the mind-brain relationship, I would propose that we may view conscious subjective experience as if it were a field, produced by appropriate though multifarious neuronal activities of the brain.

 


 

Joachim Keppler (2018, 2021) presents an analysis drawing conscious experiences into the orbit of stochastic electrodynamics (SED), a form of quantum field theory, utilising the conception that the universe is imbued with an all-pervasive electromagnetic background field, the zero-point field (ZPF), which, in its original form, is a homogeneous, isotropic, scale-invariant and maximally disordered ocean of energy with completely uncorrelated field modes and a unique power spectral density. This is basically a stochastic treatment of the uncertainty associated with the quantum vacuum in depictions such as the Feynman approach to quantum electrodynamics (fig 71(e)). The ZPF is thus the multiple manifestations of uncertainty in the quantum vacuum involving virtual photons, electrons and positrons, as well as quarks and gluons, implicit in the muon's anomalous magnetic moment (Borsanyi et al. 2021).
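For reference, the unique power spectral density referred to here is a standard result in the SED literature: assigning a mean energy of $\hbar\omega/2$ to each field mode gives the Lorentz-invariant spectral energy density

$$\rho_0(\omega)\,d\omega \;=\; \frac{\hbar\,\omega^{3}}{2\pi^{2}c^{3}}\,d\omega ,$$

whose cubic frequency dependence is the only spectrum that looks the same to all inertial observers.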

 

In the approach of SED (de la Peña et al. 2020), in which the stochastic aspect corresponds to the effects of the collapse process into the classical limit, consciousness is represented by the zero-point field (Keppler 2018). This provides a basis to discuss the brain dynamics accompanying conscious states in terms of two hypotheses concerning the ZPF:

 

“The aforementioned characteristics and unique properties of the ZPF make one realize that this field has the potential to provide the universal basis for consciousness from which conscious systems acquire their phenomenal qualities. On this basis, I posit that all conceivable shades of phenomenal awareness are woven into the fabric of the background field. Accordingly, due to its disordered ground state, the ZPF can be looked upon as a formless sea of consciousness that carries an enormous range of potentially available phenomenal nuances. Proceeding from this postulate, the mechanism underlying quantum systems has all the makings of a truly fundamental mechanism behind conscious systems, leading to the assumption that conscious systems extract their phenomenal qualities from the phenomenal color palette immanent in the ZPF. ”

 

Fig 80: In Keppler's model, the phase transitions underlying the formation of coherent activity patterns (attractors) are triggered by modulating the concentrations of neurotransmitters. When the concentration of neurotransmitter molecules lies above a critical threshold and selected ZPF modes are in resonance with the characteristic transition frequencies between molecular energy levels, receptor activations ensue that drive the emergence of neuronal avalanches. The set of selected ZPF modes that is involved in the formation and stabilisation of an attractor determines the phenomenal properties of the conscious state.

 

His description demonstrates the kind of boundary conditions in brain dynamics likely to correspond to subjective states, and thus provides a good insight into the stochastic uncertainties of the brain dynamics of conscious states that would correspond to the subjective aspect. It even claims to envelop all possible modes of qualitative subjectivity in the features of the ZPF underlying uncertainty. But it remains to be established that the ZPF can accommodate all the qualitative variations spanning the senses of sight, sound and smell, which may rather correspond to the external quantum nature of these senses.

 

The ZPF does not of itself solve the hard problem as such because, at face value, it is a purely physical manifestation of quantum uncertainty with no subjective manifestation. However Keppler claims to make this link clear as well:   A detailed comparison between the findings of SED and the insights of Eastern philosophy reveals not only a striking congruence as far as the basic principles behind matter are concerned. It also gives us the important hint that the ZPF is a promising candidate for the carrier of consciousness, suggesting that consciousness is a fundamental property of the universe, that the ZPF is the substrate of consciousness and that our individual consciousness is the result of a dynamic interaction process that causes the realization of ZPF information states. …In that it is ubiquitous and equipped with unique properties, the ZPF has the potential to define a universally standardized substratum for our conscious minds, giving rise to the conjecture that the brain is a complex instrument that filters the varied shades of sensations and emotions selectively out of the all-pervasive field of consciousness, the ZPF (Keppler, 2013).

 

In personal communication regarding these concerns, Joachim responds as follows:

 

I understand your reservations about conventional field theories of consciousness. The main problem with these approaches (e.g., McFadden’s approach) is that they cannot draw a dividing line between conscious and unconscious field configurations. This leads to the situation that the formation of certain field configurations in the brain is claimed to be associated with consciousness, while the formation of the same (or similar) field configurations in an electronic device would usually not be brought in relation with consciousness. This is what you call quite rightly a common category error.  Now, the crucial point is that the ZPF, being the primordial basis of the electromagnetic interaction, offers a way to avoid this category error. According to the approach I propose, the ZPF (with all its field modes) is the substrate of consciousness, everywhere and unrestrictedly. The main difference between conscious and unconscious systems (processes) is their ability to enter into a resonant coupling with the ZPF, resulting in an amplification of selected ZPF modes. Only a special type of system has this ability (the conditions are described in my article). If a system meets the conditions, one must assume that it also has the ability to generate conscious states.

 

Keppler & Shani (2020) link this process to a form of cosmopsychism confluent with Symbiotic Existential Cosmology:

 

The strength of the novel cosmopsychist paradigm presented here lies in the bridging of the explanatory gap the conventional materialist doctrine struggles with. This is achieved by proposing a comprehensible causal mechanism for the formation of phenomenal states that is deeply rooted in the foundations of the universe. More specifically, the sort of cosmopsychism we advocate brings a new perspective into play, according to which the structural, functional, and organizational characteristics of the NCC are indicative of the brain's interaction with and modulation of a UFC. In this respect, the key insights from SED suggest that this field can be equated with the ZPF and that the modulation mechanism is identical with the fundamental mechanism underlying quantum systems, resulting in our conclusion that a coherently oscillating neural cell assembly acquires its phenomenal properties by tapping into the universal pool of phenomenal nuances predetermined by the ZPF.

 


Fig 80b (Left): It is postulated that conscious systems must be equipped with a fundamental mechanism by means of which they are able to influence the basic structure of the ubiquitous field of consciousness (UFC). This requires the interaction of a physical system with the UFC in such a way that a transiently stable dynamic equilibrium, a so-called attractor state characterised by long-range coherence, is established in which the involved field modes enter into a phase-locked coupling. (Right) Cortical column coherence.  

Keppler (2023) also proposes a model where long-range coherence is developed in the functioning of cortical microcolumns, based on the interaction of a pool of glutamate molecules with the vacuum fluctuations of the electromagnetic field, involving a phase transition from an ensemble of initially independent molecules toward a coherent state, resulting in the formation of a coherence domain that extends across the full width of a microcolumn.

 

Without accepting any materialistic notion of quantum fields being identifiable with subjective consciousness, this does provide a basis confluent with the description invoked in this article, which uses the infinite number of ground states in quantum field theory, as opposed to quantum mechanics, to thermodynamically model memory states and the global amplitude- and frequency-modulated binding in the EEG.

 

Karl Pribram (2004) has noted both the similarity of wave coherence interactions as an analogy or manifestation of quantum measurement, and the ‘holographic’ nature of wave potential fluctuations in the dendritic web:

 

The holonomic brain theory of quantum consciousness was developed by neuroscientist Karl Pribram initially in collaboration with physicist David Bohm. Pribram suggests these processes involve electric oscillations in the brain's fine-fibered dendritic webs, which are different from the more commonly known action potentials involving axons and synapses. These wave oscillations create interference patterns in which memory is encoded naturally, and the wave function may be analyzed by a Fourier transform. Gabor, Pribram and others noted the similarities between these and the storage of information in a hologram, which can also be analyzed with a Fourier transform.
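As a concrete illustration of the windowed-Fourier (Gabor) analysis appealed to here, the sketch below – an illustrative toy with assumed parameters, not Pribram’s model – spreads a signal into time-frequency coefficients, the sense in which information is stored in distributed, hologram-like form, and recovers it by the inverse transform:

```python
# A minimal sketch (illustrative, not Pribram's model) of Gabor /
# windowed-Fourier analysis: a signal is spread into time-frequency
# coefficients (the distributed, "hologram-like" encoding) and can be
# recovered from them by the inverse transform.
import numpy as np
from scipy.signal import stft, istft

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
# A toy "dendritic oscillation": a chirp whose frequency rises 10 -> 60 Hz
signal = np.sin(2 * np.pi * (10 * t + 12.5 * t ** 2))

# Gabor-style decomposition: short-time Fourier transform with a tapered
# (Hann) window, giving distributed time-frequency coefficients
f, tt, coeffs = stft(signal, fs=fs, nperseg=256)

# Reconstruction from the distributed coefficients
_, recon = istft(coeffs, fs=fs, nperseg=256)
err = np.max(np.abs(signal - recon[: signal.size]))
print(f"max reconstruction error: {err:.2e}")
```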

 

The dissipative quantum model of brain dynamics (Freeman & Vitiello 2006, 2007, 2016, Capolupo, Freeman & Vitiello 2013, Vitiello 2015, Sabbadini & Vitiello 2019) provides another field theoretic description. I include a shortened extract from Freeman & Vitiello (2015), which highlights to me the most outstanding field theoretic description of the neural correlate of consciousness I know of, which also has the support of Freeman’s dynamical attractor dynamics as illustrated in fig 78, and likewise has similar time-dual properties to the transactional interpretation discussed above, invoking complementary time-directed roles of emergence and imagination:

 

Fig 81: Molecular biology is a theme and variations on the polar and non-polar properties of organic molecules residing in an aqueous environment. Nucleotide double helices, protein folding and micelle structures, as well as membranes, are all energetically maintained by their surrounding aqueous structures. Water has one of the highest specific heats of all, because of the large number of internal dynamic quantum states. Myoglobin (Mb), the oxygen-transporting protein in muscle, containing a heme active site, illustrates this (Ansari et al. 1984), both in its functionally important movements (fim) and in its equilibrium fluctuations, invoking fractal energetics between the high and low energy states of Mb and MbCO. This activity in turn is stabilised both by non-polar side chains maintaining the aqueous structure and by polar side chains interacting with the aqueous environment to form water hydration structures. (Top left) The hydration shell of myoglobin (blue surface) with 1911 water molecules (CPK model), the approximate number needed for optimal function (Vajda & Perczel 2014). Lower: molecules taking part in biochemical processes, from small molecules to proteins, are critical quantum mechanically. Electronic Hamiltonians of biomolecules are tuned exactly to the critical point of the metal-insulator transition separating the Anderson localized insulator phase from the conducting disordered metal phase. Left: the HOMO/LUMO orbitals for myoglobin calculated with the Extended Hückel method. Right: generalized fractal dimensions Dq of the wave functions (Vattay et al. 2015).
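For orientation, the generalized fractal dimensions $D_q$ referred to in the caption are standardly defined from box-counting measures $p_i(\varepsilon)$ of the wavefunction intensity at scale $\varepsilon$:

$$D_q \;=\; \frac{1}{q-1}\,\lim_{\varepsilon\to 0}\frac{\ln\sum_i p_i(\varepsilon)^{q}}{\ln\varepsilon},$$

so that $D_0$ is the box-counting dimension and a non-trivial variation of $D_q$ with $q$ is the signature of the multifractal, critical states described.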

 

We began by using classical physics to model the dendritic integration of cortical dynamics with differential equations, ranging in complexity from single positive loops in memory through to simulated intentional behavior (Kozma and Freeman 2009). We identified the desired candidate form in a discrete electrochemical wave packet embedded in the electroencephalogram (EEG), often with the form of a vortex like a hurricane, which carried a spatial pattern of amplitude modulation (AM) that qualified as a candidate for thought content.

 

Measurement of scalp EEG in humans showed that the size and speed of the formation of wave packets were too big to be attributed to the classical neurophysiology of neural networks, so we explored quantum approaches. In order to use dissipative quantum field theory it is necessary to include the impact of brain and body on the environment. Physicists do this conceptually and formally by doubling the variables (Vitiello 1995, 2001, Freeman and Vitiello 2006) that describe dendritic integration in the action-perception cycle. By doing so they create a Double, and then integrate the equations in reverse time, so that every source and sink for the brain-body is matched by a sink or source for the Double, together creating a closed system.

 

Fig 82: Field theory model of inward projecting electromagnetic fields
overlapping in basal brain centres (MacIver 2022).

 

On convergence to the attractor the neural activity in each sensory cortex condenses from a gas-like regime of sparse, disordered firing of action potentials at random intervals to a liquid-like macroscopic field of collective activity. The microscopic pulses still occur at irregular intervals, but the probability of firing is no longer random. The neural mass oscillates at the group frequencies, to which the pulses conform in a type of time multiplexing.  The EEG or ECoG (electrocorticogram) scalar field during the liquid phase revealed a burst of beta or gamma oscillation we denoted as a wave packet. Its AM patterns provided the neural correlates of perception and action. The surface grain inferred that the information capacity of wave packets is very high. The intense electrochemical energy of the fields was provided everywhere by the pre-existing trans-membrane ionic concentration gradients.
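How such an AM pattern is read out of an electrode array can be sketched briefly. The following is a minimal illustration with simulated data and assumed parameters, not Freeman’s actual analysis: each channel is band-passed in the gamma range, the Hilbert envelope is taken, and the vector of mean envelope amplitudes across the array within a burst window constitutes the spatial AM pattern of the wave packet:

```python
# A minimal sketch (assumed parameters) of reading a Freeman-style AM
# pattern out of an ECoG array: band-pass each channel in the gamma band,
# take the Hilbert envelope, and treat the vector of mean envelope
# amplitudes within a burst as the spatial AM pattern of the wave packet.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
t = np.arange(0, 2, 1 / fs)
n_ch = 16                               # electrodes, illustrative

# Simulated ECoG: a shared 40 Hz carrier whose amplitude differs by channel
pattern = np.random.rand(n_ch)          # the "hidden" spatial AM pattern
carrier = np.sin(2 * np.pi * 40 * t)
ecog = pattern[:, None] * carrier + 0.3 * np.random.randn(n_ch, t.size)

# Gamma band-pass and Hilbert envelope per channel
b, a = butter(4, [30 / (fs / 2), 50 / (fs / 2)], btype="band")
env = np.abs(hilbert(filtfilt(b, a, ecog, axis=1), axis=1))

# Spatial AM pattern: mean envelope over a putative wave-packet window
am = env[:, int(0.5 * fs): int(1.0 * fs)].mean(axis=1)
similarity = np.corrcoef(am, pattern)[0, 1]
print(f"recovered AM pattern correlates with truth at r = {similarity:.2f}")
```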

 

The theory cites water molecules and the cytosol as the basis for the quantum field description, a position supported at the molecular level by the polarisation of the cytoplasmic medium and all its constituents between aqueous polar and hydrophobic non-polar energetics as illustrated in fig 81.

 

Neurons, glia cells and other physiological units are [treated as] classical objects. The quantum degrees of freedom of the model are associated to the dynamics of the electrical dipoles of the molecules of the basic components of the system, i.e. biomolecules and molecules of the water matrix in which they are embedded. The coherence of the long-range correlations is of the kind described by quantum field theory in a large number of physical systems, in the standard model of particle physics as well as in condensed matter physics, ranging from crystals to magnets, from superconductive metals to superfluids. The coherent states characterizing such systems are stable in a wide range of temperatures.

 

In physiological terms the field consists of heightened ephaptic excitability in an interactive region of neuropil, which creates a dominant focus by which every neuron is sensitized, and to which every neuron contributes its remembrance. In physical terms, the dynamical output of the many-body interaction of the vibrational quanta of the electric dipoles of water molecules and other biomolecules energize the neuropil, the densely compartmentalized tissue of axons, dendrites and glia through which neurons force ionic currents. The boson condensation provides the long-range coherence, which in turn allows and facilitates synaptic communication among neuron populations.

 

The stages of activation of the quantum field boson condensation correspond closely to stages of the Freeman attractor dynamics investigated empirically in the EEG and ECoG:

 

We conceive each action-perception cycle as having three stages, each with its neurodynamics and its psychodynamics (Freeman 2015). Each stage has at least one phase transition and may have two or more before the next stage. In the first stage a boson condensation forms a gamma wave packet by a phase transition in each of the primary sensory cortices. Only in stage one a phase transition would occur in a single cortex. In stage two the entorhinal cortex integrates all modalities before making a gestalt.

When the boson condensation carrying its AM pattern invades and recruits the amygdala and hypothalamus, we propose that this correlates with awareness of emotion and value with incipient awareness of content. In the second stage a more extended boson condensation forms a larger wave packet in the beta range that extends through the entire limbic system including the entorhinal cortex, which is central in an AM pattern. We believe it correlates with a flash memory unifying the multiple primary percepts into a gestalt, for which the time and place of the subject forming the gestalt are provided by the hippocampus. A third phase transition forms a boson condensation that sustains a global AM pattern, the manifestations of which in the EEG extend over the whole scalp. We propose that the global AM pattern is accompanied by comprehension of the stimulus meaning, which constitutes an up-to-date status summary as the basis for the next intended action.

 

The dual time representation of the quantum field and its double invokes the key innovative and anticipatory features of conscious imagination:

 

Open systems require an environment to provide the sink where their waste energy goes, and a source of free energy which feeds them. From the standpoint of the energy flux balance, brains describe the relevant restructured part of the environment using the time-reversed copy of the system, its complement or Double (Vitiello 2001).  Where do the hypotheses come from? The answer is: from imagination. In theory the best sources for hypotheses are not memories as they appear in experience, but images mirrored backward in time. The imaginings are not constrained by thermodynamics. The mirror sinks and sources are imagined, not emergent. From this asymmetry we infer that the mirror copy exists as a dynamical system of nerve energy, by which the Double produces its hypotheses and predictions, which we experience as perception, and which we test by taking action. It is the Double that imagines the world outside, free from the shackles of thermodynamic reality. It is the Double that soars.

 

Johnjoe McFadden (2020) likewise has a theory of consciousness associated with the electromagnetic wave properties of the brain’s EM field interacting with the matter properties of “unconscious” neuronal processing. In his own words he summarises his theory as follows:

 

I describe the conscious electromagnetic information (cemi) field theory which has proposed that consciousness is physically integrated, and causally active, information encoded in the brain’s global electromagnetic (EM) field. I here extend the theory to argue that consciousness implements algorithms in space, rather than time, within the brain’s EM field. I describe how the cemi field theory accounts for most observed features of consciousness and describe recent experimental support for the theory.  … The cemi field theory differs from some other field theories of consciousness in that it proposes that consciousness — as the brain’s EM field — has outputs as well as inputs. In the theory, the brain’s endogenous EM field influences brain activity in a feedback loop (note that, despite its ‘free’ adjective, the cemi field’s proposed influence is entirely causal, acting on voltage-gated ion channels in neuronal membranes to trigger neural firing).

 

The lack of correlation between complexity of information integration and conscious thought is also apparent in the common-place observation that tasks that must surely require a massive degree of information integration, such as the locomotory actions needed to run across a rugged terrain, may be performed without awareness but simple sensory inputs, such as stubbing your toe, will over-ride your conscious thoughts. The cemi field theory proposes that the non-conscious neural processing involves temporal (computational) integration whereas operations, such as natural language comprehension, require the simultaneous spatial integration provided by the cemi field. … Dehaene (2014) has recently described four key signatures of consciousness: (i) a sudden ignition of parietal and prefrontal circuits; (ii) a slow P3 wave in EEG; (iii) a late and sudden burst of high-frequency oscillations; and (iv) exchange of bidirectional and synchronized messages over long distances in the cortex. It is notable that the only feature common to each of these signatures—aspects of what Dehaene calls a ‘global ignition’ or ‘avalanche’—is large endogenous EM field perturbations in the brain, entirely consistent with the cemi field theory.

 

Jones & Hunt (2023) provide a wide-ranging review of field theories of consciousness, culminating in their own favoured theory, which combines a panpsychist view concordant with Symbiotic Existential Cosmology, although specifically dependent on EM fields as its key interface. They begin with a critical review of neuronal network approaches to conscious brain function:

 

Neuroscientists usually explain how our different sensory qualia arise in terms of specialized labeled lines with their own detector fibers and processing areas for taste, vision, and other sensory modes. Photoreceptors thus produce color qualia regardless of whether they are stimulated by light, pressure, or other stimuli. This method is supplemented by detailed comparisons of the fibers within each labeled line. For example, the three color fibers overlap in their response to short, medium, and long wavelengths of incoming light. So across-fiber comparisons of their firing rates help disambiguate which wavelengths are actually present. This longstanding view has arisen from various historical roots. But the overall problem is that these operations are so similar in the visual, tactile, and other sensory modes that it is unclear how these methods can differ enough to account for all the stark differences between color and taste qualia, for example. Another issue (which will be addressed more below) concerns the “hard problem” of why this biological information processing is accompanied by any conscious experience of colors, pains, et cetera.

 

It might be thought that recently proposed neuron-based neuroscientific theories of consciousness would offer more viable accounts of how different qualia arise. But they rarely do. For example, Global Neuronal Workspace Theory (GNWT) (e.g., Dehaene and Naccache, 2001; Dehaene, 2014) and Higher-Order Theories (e.g., Rosenthal, 2005) focus on “access consciousness” – the availability of information for acting, speaking, and reasoning. This access involves attention and thought. But these higher cognitive levels do not do justice to qualia, for qualia appear even at the very lowest levels of conscious cognition in pre-attentive iconic images.

 

They then explore both integrated information theory and quantum approaches such as Hameroff Penrose, illustrating their limitations:

 

Integrated Information Theory represents qualia information abstractly and geometrically in the form of a system’s “qualia space” (Tononi 2008). This is the space where each axis represents a possible state of the system – a single combination of logic-gate interactions (typically involving synapses). ... IIT’s accounts of qualia spaces are far too complex to specify except in the simplest of cases, and no tests for this method of characterizing qualia have yet been proposed, as far as we are aware.

 

Hameroff and Penrose have not yet addressed how different qualia arise from different quantum states. This latter issue applies to many quantum theories of consciousness. They generally omit mention of how quantum states yield the primary sensory qualia (redness, sweetness, etc.) we are familiar with. For example, Beshkar (2020) contains an interesting QBIT theory of consciousness that attributes qualia to quantum information encoded in maximally entangled states. Yet this information ultimately gets its actual blueness, painfulness, etc. from higher cortical mechanisms criticized above. Another example is Lewtas (2017). He also attributes our primary qualia to quantum levels. Each fundamental particle has some of these various qualia. Synchronized firing by neurons at different frequencies selects from the qualia and binds them to form images. ... The general problem with these highly philosophical qualia theories is that they are hard to evaluate. Their uniting of qualia to quanta is not spelt out in testable detail.

 

They then outline the difficulties network based neuroscience has dealing with qualia:

 

Standard neuroscience has not explained well how the brain’s separate, distributed visual circuits bind together to support a unified image. This is an aspect of the so-called “binding problem” of how the mind’s unity arises ... visual processing uses separate, parallel circuits for color and shape, and it is unclear how these circuits combine to form complete images. Ascending color and shape circuits have few if any synapses for linking their neurons to create colored shapes. Nor do they converge on any central visual area.

 

(1) The coding/correlation problem: As argued above, the neuronal and computational accounts above have failed to find different information-processing operations among neurons that encode our different qualia.

(2) The qualia-integration problem: Computational accounts also face the problem of explaining how myriad qualia are integrated together to produce overall unified perceptions such as visual images.

(3) The hard problem: In addition to the two empirical problems above, computational accounts face a hard, metaphysical problem. Why are neural events accompanied by any qualia at all?

 

They then explore how field theories can address these fundamental issues:

 

EM field approaches to minds have offered new theories of qualia and consciousness, some of which are testable. These electromagnetic approaches seat consciousness primarily in the various complex EM fields generated by neurons, glia and the rest of the brain and body. ... These EM field approaches are proliferating because they draw on considerable experimental evidence and withstand past criticisms from standard neuroscience. For example, they have explained the unity of consciousness in terms of the physical unity (by definition) of EM fields – in contrast to the discrete nature of neurons and their synaptic firing. In the last two decades, they have also offered explanations of how neural EM activity creates different qualia.

 

Pockett’s (2000) theory of qualia is an important landmark in EM field theories of mind. It is rooted in extensive experimental evidence, makes testable predictions, and is strongly defended against critics. If Kohler, Libet, Eccles, and Popper helped establish the EM field approach to minds, Susan Pockett has arguably done more to develop it than anyone else – except perhaps Johnjoe McFadden.  … Pockett’s basic claim is that consciousness is “identical with certain spatiotemporal patterns in the electromagnetic field” (ibid., pp. vi, 109, 136–7). Her evidence comes mainly from extensive EEG and MEG studies of neural electromagnetic fields. They show correlations between sensory qualia and field patterns. For example, EEG studies by Freeman (1991) show that various odors (e.g., from bananas or sawdust) correlate with specific spatial patterns distributed across mammalian olfactory areas.

 

McFadden’s (2002b) theory says that information is conscious at all levels, which seems to entail a form of panpsychism (McFadden, 2002b). The “discrete” consciousness of elementary particles is limited and isolated. But as particles join into a field, they form a unified “field” consciousness. As these fields affect motor neurons, the brain’s consciousness is no longer an epiphenomenon, for its volition can communicate with the world. This level of “access” consciousness serves as a global workspace where specialized processors compete for access to volition’s global, conscious processes. McFadden rejects popular views that minds are just ineffectual epiphenomena of brain activity. Instead, field–nerve interactions are the basis of free will. The conscious field is deterministic, yet it is free in that it affects behavior instead of being epiphenomenal (McFadden, 2002a,b). This treats determinism as compatible with free will construed as self-determination.

 

They postpone the hard problem and focus on the first two above:

 

(1) The coding/correlation problem: What different EM-field activities encode or correlate with the various qualia? Both field theories above face difficulties here.

(2) The qualia-integration problem: How do EM fields integrate myriad qualia to form (for example) unified pictorial images? Here field theories seem quite promising in their ability to improve upon standard neuroscience.

 

They then cite three emergent field theories which have sought to address the outstanding problems faced by the field theories already discussed:

 

Ward and Guevara (2022) localize qualia in the fields generated by a particular part of the brain. Their intriguing thesis is that our consciousness and its qualia are based primarily on structures in thalamic EM fields which serve to model environmental and bodily information in ways relevant to controlling action. Ward and Guevara argue that the physical substrate of consciousness is limited to strong neural EM fields where synchronously firing neurons reinforce each other’s information in a manner which is also integrated and complex. Finally, local, nonsynchronous fields can be canceled out in favor of a dominant field that synchronously and coherently represents all the information from our senses, memories, emotions, et cetera. For these reasons, Ward and Guevara believe that fields are better candidates than neurons and synaptic firing for the primary substrate of consciousness. … they cite four reasons for ascribing consciousness to the thalamus. (1) We are not conscious of all sensory computations, just their end result, which involves the thalamic dynamic core. (2) Thalamic dysfunctions (but not necessarily cortical dysfunctions) are deeply involved in nonconsciousness conditions such as anesthesia, unresponsive wakefulness syndrome, and anoxia. (3) The thalamus is a prime source and controller of synchronization (in itself and in cortex), which is also associated with consciousness. (4) The thalamus (especially its DM nucleus, Ouhaz et al. 2018) is ideally suited for the integrative role associated with consciousness, for cortical feedbacks seem to download cortical computations into thalamus. ... These lines of evidence indicate that while cortex computes qualia, thalamus displays qualia.

 

Another author who attributes qualia to fundamental EM activity is Bond (2023). This clear, succinct paper explains that quantum coherence involves the entanglement of quanta within energy fields, including the EM fields generated by neurons. Neural matter typically lacks this coherence because the haphazard orientation of quantum spins in the matter creates destructive interference and decoherence. Bond proposes the novel idea that firing neurons generate EM fields that can flow through nearby molecular structures and entangle with their atoms. This coherence produces our perceptions. The different subjective feelings of these perceptions come from different hybrids or mixtures of the fields’ wavelengths as they vibrate or resonate. ... On a larger scale, this coherence ties into the well-known phase-locking of corticothalamic feedback loops. Together, they produce the holism or unity of consciousness. This combination of coherent, phase-locked feedback loops and coherent, entangled wave-particles in EM fields is called by Bond a “coherence field”. It is investigated by his Coherence Field Theory (CFT).

 

Finally, as joint authors, they elucidate their favoured theory, General Resonance Theory (GRT), arising from their independent research:

 

Another approach to the Qualia Problem is Hunt and Schooler’s General Resonance Theory (GRT), which is grounded in a panpsychist framework. GRT assumes that all matter is associated with at least some capacity for phenomenal consciousness (this is called the “panpsychism axiom”), but that consciousness is extremely rudimentary in the vast majority of cases due to a lack of physical complexity mirrored by the lack of mental complexity. The EM fields associated with all baryonic matter (i.e., charged particles) are thought to be the primary seat of consciousness simply because EM fields are the primary force at the scale of life (strong and weak nuclear fields are operative at scales far smaller and gravity is operative mostly at scales far larger). Accordingly, GRT is applicable to all physical structures and as a theory is not limited only to neurobiological or even biological structures (Hunt and Schooler, 2019).

 

GRT suggests that resonance (similar but not synonymous with synchronization and coherence) of various types is the key mechanism by which the basic constituents of consciousness, when in sufficient proximity, combine into more complex types of consciousness. This is the case because shared resonance allows for phase transitions in the speed and bandwidth of information exchange to occur at various organizational levels, allowing previously disordered systems to self-organize and thus become coherent by freely sharing information and energy.
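The claim that shared resonance lets previously disordered systems self-organise into coherence has a canonical toy model in the Kuramoto system of coupled oscillators. The sketch below illustrates that general mechanism only – it is not GRT itself – showing the order parameter r rising towards coherence once coupling exceeds its critical value:

```python
# A minimal sketch of the Kuramoto model - not GRT itself, but the
# canonical toy model of the claim that shared resonance lets initially
# disordered oscillators self-organize into a coherent collective state.
import numpy as np

n, dt, steps = 200, 0.01, 2000
rng = np.random.default_rng(0)
omega = rng.normal(0, 1, n)             # natural frequencies, assumed
theta = rng.uniform(0, 2 * np.pi, n)    # random initial phases
K = 2.5                                 # coupling above the critical value

for _ in range(steps):
    # mean-field coupling: each oscillator is pulled toward the mean phase
    mean_field = np.mean(np.exp(1j * theta))
    r, psi = np.abs(mean_field), np.angle(mean_field)
    theta += dt * (omega + K * r * np.sin(psi - theta))

print(f"order parameter r = {np.abs(np.mean(np.exp(1j * theta))):.2f}")
```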

 

Qualia, in GRT, are synonymous with consciousness, which is simply subjective experience:

 

Jones (2017, 2019), a coauthor of the current paper, has developed an EM-field theory of qualia. Like other field theories, it attributes qualia and images to neural EM-field patterns (and probably the EM-charged matter emitting the fields). Yet these are not the coded images of computational field theories that are based on information processing. Instead, in his theory images actually reside in conscious, pictorial form within the EM fields of neural maps. This is a neuroelectrical, pure panpsychist theory of mind (NP). The “pure panpsychism” says that everything (not just EM) is comprised purely of consciousness. NP addresses the hard problem, qualia-integration problem, and qualia coding/correlation problem in the following ways.

 

(1) The hard problem: How are qualia metaphysically related to brains and computations? In NP, consciousness and its qualia are the hidden nature of observable matter and energy. We are directly aware of our inner conscious thoughts and feelings. Yet we are just indirectly aware of the observable, external world through reflected light, instruments, sense organs, et cetera.

 

(2) The qualia coding/correlation problem: How do our various qualia arise? Yet there is now growing evidence that different qualia correlate with different electrically active substances in cellular membranes found in sensory and emotional circuits. These substances are the membranes' ion-channel proteins and associated G-protein-coupled receptors (GPCRs). For example, the different primary colors correlate with different OPN1 GPCRs ... oxytocin and vasopressin receptor proteins correlate with feelings of love, estrogen and testosterone receptors correlate with lust, the endorphin receptor correlates with euphoria, and the adrenaline receptor correlates with vigilance.

 

(3) The qualia-integration problem: First, how do various qualia unify together into an overall whole? Second, how specifically do qualia join point by point to form pictorial images? In NP's field theory, active circuits create a continuous EM field between neurons that pools their separate, atomized consciousness. This creates a unified conscious mind along brain circuits (with the mind itself residing in the field and perhaps in the charged matter creating the field). This unity is strongest around the diffuse ion currents that run along (and even between) neuronal circuits. It is very strong among well-aligned cortical cells that fire together coherently.

 

In conclusion they state: Consciousness is characterized mainly by its privately experienced qualities (qualia). Standard, computation-based and synapse-based neuroscience have serious difficulties explaining them. ... field theories have improved in key ways upon standard neuroscience in explaining qualia. But this progress is sometimes tentative – it awaits further evidence and development.

 

Earlier, John Eccles (1986) proposed a brain–mind identity theory involving 'psychon' quasi-particles mediating the uncertainty of synaptic transmission to complementary 'dendrons', cylindrical bundles of neurons arranged vertically in the six outer layers or laminae of the cortex. Eccles proposed that each of the 40 million dendrons is linked with a mental unit, or "psychon", representing a unitary conscious experience. In willed actions and thought, psychons act on dendrons and, for a moment, increase the probability of the firing of selected neurons through a quantum tunnelling effect in synaptic exocytosis, while in perception the reverse process takes place. This model has been elaborated by a number of researchers (Eccles 1990, 1994, Beck & Eccles 1992, Georgiev 2002, Hari 2008). The difficulty with the theory is that the psychons are then physical quasi-particles with integrative mental properties, so it is a contradictory description that does not manifest subjectivity except through its integrative physical properties.

 

Summarising the state of play, we have two manifestations of consciousness at the interface with objective physical description: (a) the hard problem of consciousness and (b) the problem of quantum measurement, both of which are in continual debate. Together these provide complementary windows on the abyss in the scientific description, and on a complete solution of existential cosmology, which we shall explore in this article.

   

Neural Nets versus Biological Brains

 

Stephen Grossberg is recognised for his contribution to ideas using nonlinear systems of differential equations such as laminar computing, where the layered cortical structures of mammalian brains provide selective advantages, and for complementary computing, which concerns the idea that pairs of parallel cortical processing streams compute complementary properties in the brain, each stream having complementary computational strengths and weaknesses, analogous to physical complementarity in the uncertainty principle. Each can possess multiple processing stages realising a hierarchical resolution of “uncertainty”, which here means that computing one set of properties at a given stage prevents computation of a complementary set of properties at that stage.

 

“Conscious Mind, Resonant Brain” (Grossberg 2021) provides a panoramic model of the brain, from neural networks to network representations of conscious brain states. In so doing, he presents a view based on resonant non-linear systems, which he calls adaptive resonance theory (ART), in which a subset of “resonant” brain states are associated with conscious experiences. While I applaud his use of non-linear dynamics, ART is a structural, abstract neural network model, not what I, as a mathematical dynamicist, conceive of as "resonance", and is less realistic than the GNW, or global neuronal workspace, model.

 

The primary intuition behind the ART model is that object identification and recognition generally occur as a result of the interaction of 'top-down' observer expectations with 'bottom-up' sensory information. The model postulates that 'top-down' expectations take the form of a memory template or prototype that is then compared with the actual features of an object as detected by the senses. This comparison gives rise to a measure of category belongingness. As long as this difference between sensation and expectation does not exceed a set threshold called the 'vigilance parameter', the sensed object will be considered a member of the expected class. The system thus offers a solution to the 'plasticity/stability' problem, i.e. the problem of acquiring new knowledge without disrupting existing knowledge that is also called incremental learning.

 

The basic ART structure.
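
The matching cycle just described is easy to caricature in code. Below is a toy ART-1-style categoriser (a simplification for illustration, not Grossberg's full system; the function name and parameter values are mine): binary inputs are compared against learned prototypes, a match exceeding the vigilance parameter produces resonance and refines the winning category, and a mismatch reset recruits a new category, so new knowledge is acquired without overwriting old – the plasticity/stability solution in miniature.

```python
import numpy as np

def art1_categorise(inputs, vigilance=0.75):
    # Toy ART-1-style matcher on binary feature vectors.
    prototypes, labels = [], []
    for I in inputs:
        # bottom-up: rank existing categories by overlap with the input
        order = sorted(range(len(prototypes)),
                       key=lambda j: -int(np.logical_and(I, prototypes[j]).sum()))
        for j in order:
            match = np.logical_and(I, prototypes[j]).sum() / max(I.sum(), 1)
            if match >= vigilance:          # expectation confirmed: resonance
                prototypes[j] = np.logical_and(I, prototypes[j])  # fast learning
                labels.append(j)
                break
        else:                               # no category passed vigilance: reset
            prototypes.append(I.copy())     # recruit a new category
            labels.append(len(prototypes) - 1)
    return labels, prototypes

# hypothetical usage: two noisy clusters of binary feature vectors
data = [np.array(v, dtype=bool) for v in
        ([1,1,1,0,0,0], [1,1,0,0,0,0], [0,0,0,1,1,1], [0,0,0,0,1,1])]
print(art1_categorise(data)[0])             # -> [0, 0, 1, 1]
```

Raising the vigilance parameter yields finer-grained categories; lowering it yields coarser ones.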

 

The work shows in detail how and why multiple processing stages are needed before the brain can construct a complete and stable enough representation of the information in the world with which to predict environmental challenges and thus control effective behaviours. Complementary computing and hierarchical resolution of uncertainty overcome these problems until perceptual representations that are sufficiently complete, context-sensitive, and stable can be formed. The brain regions where these representations are completed are different for seeing, hearing, feeling, and knowing.

 

His proposed answer is that a resonant state is generated that selectively 'lights up' these representations and thereby renders them conscious. These conscious representations can then be used to trigger effective behaviours:

 

My proposed answer is: A resonant state is generated that selectively 'lights up' these representations and thereby renders them conscious. These conscious representations can then be used to trigger effective behaviors. Consciousness hereby enables our brains to prevent the noisy and ambiguous information that is computed at earlier processing stages from triggering actions that could lead to disastrous consequences. Conscious states thus provide an extra degree of freedom whereby the brain ensures that its interactions with the environment, whether external or internal, are as effective as possible, given the information at hand.

 

He addresses the hard problem of consciousness in its varying aspects:

 

As Chalmers (1995) has noted: The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. ... Even after we have explained the functional, dynamical, and structural properties of the conscious mind, we can still meaningfully ask the question, Why is it conscious? There seems to be an unbridgeable explanatory gap between the physical world and consciousness. All these factors make the hard problem hard. … Philosophers' views range passionately from the claim that no Hard Problem remains once it is explained how the brain generates experience, as in the writings of Daniel Dennett, to the claim that it cannot in principle be solved by the scientific method, as in the writings of David Chalmers. See the above reference for a good summary of these opinions.

 

Grossberg demonstrates that, over and above information processing, our brains sometimes go into a context-sensitive resonant state that can involve multiple brain regions. He explores experimental evidence that all conscious states are resonant states, but not vice versa, showing that, since not all brain dynamics are resonant, consciousness is not just a whir of information-processing:

 

When does a resonant state embody a conscious experience? Why is it conscious? And how do different resonant states support different kinds of conscious qualia? The other side of the coin is equally important: When does a resonant state fail to embody a conscious experience? Advanced brains have evolved in response to various evolutionary challenges in order to adapt to changing environments in real time. ART explains how consciousness enables such brains to better adapt to the world's changing demands.

 

Grossberg is realistic about the limits on a scientific explanation of the hard problem:

 

It is important to ask: How far can any scientific theory go towards solving the Hard Problem? Let us suppose that a theory exists whose neural mechanisms interact to generate dynamical states with properties that mimic the parametric properties of the individual qualia that we consciously experience, notably the spatio-temporal patterning and dynamics of the resonant neural representations that represent these qualia. Suppose that these resonant dynamical states, in addition to mirroring properties of subjective reports of these qualia, predict properties of these experiences that are confirmed by psychological and noninvasive neurobiological experiments on humans, and are consistent with psychological, multiple-electrode neurophysiological data, and other types of neurobiological data that are collected from monkeys who experience the same stimulus conditions.

 

He then develops a strategy to move beyond the notion of the neural correlate of consciousness (Crick & Koch 1990), claiming these states are actually the physical manifestation of the conscious state:

 

Given such detailed correspondences with experienced qualia and multiple types of data, it can be argued that these dynamical resonant states are not just the 'neural correlates of consciousness' that various authors have also discussed, notably David Chalmers and Christof Koch and their colleagues. Rather, they are mechanistic representations of the qualia that embody individual conscious experiences on the psychological level. If such a correspondence between detailed brain representations and detailed properties of conscious qualia occurs for a sufficiently large body of psychological data, then it would provide strong evidence that these brain representations create and support these conscious experiences. A theory of this kind would have provided a linking hypothesis between brain dynamics and the conscious mind. Such a linking hypothesis between brain and mind must be demonstrated before one can claim to have a theory of consciousness.

 

However he then delineates the claim that this is the most complete scientific account of subjective experience possible, while conceding that it may point to a cosmological problem akin to those in relativity and quantum theory:

 

If, despite such a linking hypothesis, a philosopher or scientist claims that, unless one can 'see red' or 'feel fear' in a theory of the Hard Problem, it does not contribute to solving that problem, then no scientific theory can ever hope to solve the Hard Problem. This is true because science as we know it cannot do more than to provide a mechanistic theoretical description of the dynamical events that occur when individual conscious qualia are experienced. However, as such a principled, albeit incrementally developing, theory of consciousness becomes available, including increasingly detailed psychological, neurobiological, and even biochemical processes in its explanations, it can dramatically shift the focus of discussions about consciousness, just as relativity theory transformed discussions of space and time, and quantum theory of how matter works. As in quantum theory, there are measurement limitations in understanding our brains.

 

Although he conceives of brain dynamics as being poised just above the level of quantum effects in vision and hearing, Grossberg sees brains as a new frontier of scientific discovery subject to the same principles of complementarity and uncertainty as arise in quantum physics:

 


 

Complementarity and uncertainty principles also arise in physics, notably in quantum mechanics. Since brains form part of the physical world, and interact ceaselessly with it to adapt to environmental challenges, it is perhaps not surprising that brains also obey principles of complementarity and uncertainty. Indeed, each brain is a measurement device for recording and analyzing events in the physical world. In fact, the human brain can detect even small numbers of the photons that give rise to percepts of light, and is tuned just above the noise level of phonons that give rise to percepts of sound.

 

The Uncertainty Principle identified complementary variables, such as the position and momentum of a particle, that could not both be measured with perfect precision. In all of these theories, however, the measurer who was initiating and recording measurements remained outside the measurement process. When we try to understand the brain, this is no longer possible. The brain is the measurement device, and the process of understanding mind and brain is the study of how brains measure the world. The measurement process is hereby brought into physical theory to an unprecedented degree.

 

Fig 83: Brain centres involved in intentional behaviour and subjectively conscious physical volition: (a) The cortex overlaying the basal ganglia, thalamus, amygdala and substantia nigra involved in planned action, motivation and volition. (b) The interactive circuits in the cortex, striatum and thalamus facilitating intentional motor behaviour. (c) The Motivator model clarifies how the basal ganglia and amygdala coordinate their complementary functions in the learning and performance of motivated acts. Brain areas can be divided into four regions that process information about conditioned stimuli (CSs) and unconditioned stimuli (USs): (a) Object Categories represent visual or gustatory inputs, in anterior inferotemporal (ITA) and rhinal (RHIN) cortices; (b) Value Categories represent the value of anticipated outcomes on the basis of hunger and satiety inputs, in amygdala (AMYG) and lateral hypothalamus (LH); (c) Object-Value Categories resolve the value of competing perceptual stimuli in medial (MORB) and lateral (ORB) orbitofrontal cortex; and (d) the Reward Expectation Filter in the basal ganglia detects the omission or delivery of rewards using a circuit that spans ventral striatum (VS), ventral pallidum (VP), striosomes of the striatum, the pedunculopontine nucleus (PPTN) and midbrain dopaminergic neurons of the SNc/VTA (substantia nigra pars compacta/ventral tegmental area). The network model connecting brain regions is consistent with both quantum and classical approaches and in no way eliminates subjective conscious volition from having an autonomous role. All it implies is that conscious volition arises from an evolved basis in these circuit relationships in mammals.

 

Grossberg sees brains as presenting new issues for science: as measurement devices, they confound the classical separation between the measured effect and the observer making a quantum measurement:

 

Since brains are also universal measurement devices, how do they differ from these more classical physical ideas? I believe that it is the brain's ability to rapidly self-organize, through development and life-long learning, that sets it apart from previous physical theories. The brain thus represents a new frontier in measurement theory for the physical sciences, no less than the biological sciences. It remains to be seen how physical theories will develop to increasingly incorporate concepts about the self-organization of matter, and how these theories will be related to the special case of brain self-organization.

 

Experimental and theoretical evidence will be summarized in several chapters in support of the hypothesis that principles of complementarity and uncertainty that are realized within processing streams better explain the brain's functional organization than concepts about independent modules. Given this conclusion, we need to ask: If the brain and the physical world are both organized according to such principles, then in what way is the brain different from the types of physical theories that are already well-known? Why haven't good theoretical physicists already 'solved' the brain using known physical theories?

 

The brain's universal measurement process can be expected to have a comparable impact on future science, once its implications are more broadly understood. Brain dynamics operate, however, above the quantum level, although they do so with remarkable efficiency, responding to just a few photons of light in the dark, and to faint sounds whose amplitude is just above the level of thermal noise in otherwise quiet spaces. Knowing more about how this exquisite tuning arose during evolution could provide important new information about the design of perceptual systems, no less than about how quantum processes interface with processes whose main interactions seem to be macroscopic.

 

In discussing the hierarchical feedback of the cortex and basal ganglia and the limbic system, Grossberg (2015) fluently cites both consciousness and volition as adaptive features of the brain as a self-organising system:

 

The basal ganglia control the gating of all phasic movements, including both eye movements and arm movements. Arm movements, unlike eye movements, can be made at variable speeds that are under volitional basal ganglia control. Arm movements realize the Three Ss of Movement Control; namely, Synergy, Synchrony, and Speed. … Many other brain processes can also be gated by the basal ganglia, whether automatically or through conscious volition. Several of these gating processes seem to regulate whether a top-down process subliminally primes or fully activates its target cells. As noted in Section 5.1, the ART Matching Rule enables the brain to dynamically stabilize learned memories using top-down attentional matching.

 

Such a volitionally-mediated shift enables top-down expectations, even in the absence of supportive bottom-up inputs, to cause conscious experiences of imagery and inner speech, and thereby to enable visual imagery, thinking, and planning activities to occur. Thus, the ability of volitional signals to convert the modulatory top-down priming signals into suprathreshold activations provides a great evolutionary advantage to those who possess it.

 

Such neurosystem models provide key insights into how processes associated with intentional acts, and the reinforcement of sensory experiences through complementary adaptive networks, model the neural correlates of conscious volitional acts and their smooth motor execution in the world at large. As they stand, these are still classical objective models that do not actually invoke conscious volition as experienced, but they do provide deep insight into the brain’s adaptive processes accompanying subjective conscious volition.

 

My critique, which is clear and simple, is that these designs remove such a high proportion of the key physical principles involved in biological brain function that they can have no hope of modelling subjective consciousness or volition, despite the liberal use of these terms in the network designs, such as the basal ganglia as gateways. Any pure abstract neural net model, however much it adapts to “resonate" with biological systems, is missing major fundamental formative physical principles of how brains actually work.

 

These include: 

(A) The fact that biological neural networks are both biochemical and electrochemical, in two ways: (1) all electrochemical linkages, apart from gap junctions, work through the mediation of biochemical neurotransmitters, and (2) the internal dynamics of individual neurons and glia are biochemical, not electrochemical. 

(B) The fact that the electrochemical signals are dynamic and involve sophisticated properties including both (1) unstable dynamics at the edge of chaos and (2) phase coherence tuning between continuous potential gradients and action potentials. 

(C) They involve both neurons and neuroglia working in complementary relationship. 

(D) They involve developmental processes of cell migration determining the global architecture of the brain including both differentiation by the influence of neurotransmitter type and chaotic excitation in early development.

(E) The fact that the evolution of biological brains as neural networks is built on the excitatory neurotransmitter-driven social signalling and quantum sentience of single-celled eucaryotes, forming an intimately coupled society of amoebo-flagellate cells communicating by the same neurotransmitters as in single-celled eucaryotes, so these underlying dynamics are fundamental and essential to biological neural net functionality.

 

Everything from simple molecules such as ATP, acting as the energy currency of the cell, through protein folding, to enzymes involves quantum effects, such as tunnelling at active sites, and ion channels operate at the same level.

 

It is only a step from there to recognising that such biological processes are actually fractal, non-IID (not independently and identically distributed) quantum processes that do not converge to the classical limit, in the light of Gallego & Dakić (2021), because their defining contexts are continually evolving. This provides a causally open view of brain dynamics, in which the extra degree of freedom provided by consciousness, complementing objective physical computation, arises partly through quantum uncertainty itself, as conscious volition becomes subjectively manifest, ensuring survival under uncertain environmental threats.

 

However, this is not just a rational or mechanistically causal process. We evolved from generation upon generation of organisms surviving existential threats in the wild, which were predominantly solved by lightning fast hunch and intuition, and never by rational thought alone, except recently and all too briefly in our cultural epoch.

 

The great existential crises have always been about surviving environmental threats which are not only computationally intractable due to exponentiating degrees of freedom, but computationally insoluble because they involve the interaction of live volitional agents, each consciously violating the rules of the game.

 

Conscious volition evolved to enable subjective living agents to make hunch-like predictions of their own survival in contexts where no algorithmic or deterministic process, including the nascent parallelism of the cortex, limbic system and basal ganglia that Steve Grossberg has drawn attention to, could suffice, other than to define boundary conditions on conscious choices of volitional action. Conscious intentional will, given these constraints, remained the critical factor, complementing computational predictivity generated through non-linear dynamics, best predicting survival of a living organism in the quantum universe, which is why we still possess it.

 

When we come to the enigma of subjective conscious anticipation and volition under survival threats, these are clearly, at the physiological level, the most ancient and most strongly conserved. Although the brains of vertebrates, arthropods and cephalopods show vast network differences, the underlying processes generating consciousness remain strongly conserved to the extent that baby spiders display clear REM features during sleep despite having no obvious neural net correspondence. While graded membrane excitation is universal to all eucaryotes and shared by human phagocytes and amoeba, including the genes for the poisons used to kill bacteria, the action potential appears to have evolved only in flagellate eucaryotes, as part of the flagellar escape response to existential threat, later exemplified by the group flagellation of our choano-flagellate ancestor colonies.

 

All brains are thus intimate societies of dynamically-coupled excitable cells (neurons and glia) communicating through these same molecular social signalling pathways that social single celled eucaryotes use. Both strategic intelligence and conscious volition as edge-of-chaos membrane excitation in global feedback thus arose long before brains and network designs emerged.

 

Just as circuit design models can have predictive value, so does subjective conscious volition of the excitable eucaryote cell have clear survival value in evolution and hence predictive power of survival under existential threat, both in terms of arbitrary sensitivity to external stimuli at the quantum level and neurotransmitter generated social decision-making of the collective organism. Thus the basis of what we conceive of as subjective conscious volition is much more ancient and longer and more strongly conserved than any individual network model of the vertebrate brain and underlies all attempts to form realistic network models.

 

Since our cultural emergence, Homo sapiens has been locked in a state of competitive survival against its own individuals, via Machiavellian intelligence, but broadly speaking, rationality – dependence on rational thought processes as a basis for adaptation – just brings us closer to the machine learning of robots, rather than conscious volition. Steve's representation of the mechanical aspects in the basal ganglia in Grossberg (2015) gives a good representation of how living neurosystems adaptively evolve to make the mechanical aspect of the neural correlate of conscious volition possible, but it says little about how we actually survive the tiger's pounce, let alone the ultimate subtleties of human political intrigue, when the computational factors are ambiguous. Likewise decision theory or prospect theory, as noted in Wikipedia, tells us only a relatively obvious asymmetric sigmoidal function describing how risk aversion helps us survive, essentially because being eaten rates more decisively in the cost stakes than any single square meal as a benefit.
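
For reference, the asymmetric sigmoidal function in question is the Kahneman–Tversky value function of prospect theory; a one-line sketch (using their published 1992 parameter estimates) shows the loss limb weighted more than twice as steeply as the gain limb – the "being eaten outweighs a square meal" asymmetry:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    # Kahneman–Tversky value function: concave for gains, convex and steeper
    # for losses (loss-aversion factor lam), yielding the asymmetric sigmoid.
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

print(prospect_value(100.0), prospect_value(-100.0))  # ~57.5 vs ~-129.5
```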

 

Because proving the physical causal closure of the universe in the context of brain dynamics is practically impossible in the quantum universe, physical materialism is itself not a scientific concept, so all attempts to model and understand conscious volition remain open and will continue to do so. The hard problem of consciousness is not a division between science and philosophy, as Steve suggests in his (2021) book, but our very oracle of cosmological existence.

 

Epiphenomenalism, Conscious Volition and Free Will

 

Thomas Kuhn (1922–1996) is perhaps the most influential philosopher of science of the twentieth century. His book “The Structure of Scientific Revolutions” (Kuhn 1962) is one of the most cited academic books of all time. A particularly important part of Kuhn's thesis focuses upon the consensus on exemplary instances of scientific research. These exemplars of good science are what Kuhn refers to when he uses the term 'paradigm' in a narrower sense. He cites Aristotle's analysis of motion, Ptolemy's computations of planetary positions, Lavoisier's application of the balance, and Maxwell's mathematization of the electromagnetic field as paradigms (ibid, 23). According to Kuhn the development of a science is not uniform but has alternating 'normal' and 'revolutionary' (or 'extraordinary') phases in which paradigm shifts occur.

 

Rejecting a teleological view of science progressing towards the truth, Kuhn favours an evolutionary view of scientific progress (1962/1970a, 170–3). The evolutionary development of an organism might be seen as its response to a challenge set by its environment. But that does not imply that there is some ideal form of the organism that it is evolving towards. Analogously, science improves by allowing its theories to evolve in response to puzzles, and progress is measured by its success in solving those puzzles; it is not measured by its progress towards an ideal true theory. While evolution does not lead towards ideal organisms, it does lead to greater diversity of kinds of organism. This is the basis of a Kuhnian account of specialisation in science, in which the revolutionary new theory that succeeds in replacing another that is subject to crisis may fail to satisfy all the needs of those working with the earlier theory. One response to this might be for the field to develop two theories, with domains restricted relative to the original theory (one might be the old theory or a version of it).

 

Free will is the notion that we can make real choices which are partially or completely independent of antecedent conditions – "the power of acting without the constraint of necessity or fate; the ability to act at one's own discretion", in the context of the given circumstances. Determinism denies this and maintains that causation is operative in all human affairs. Increasingly, despite the discovery of quantum uncertainty, scientists argue that their discoveries challenge the existence of free will. Studies indicate that informing people about such discoveries can change the degree to which they believe in free will and subtly alter their behaviour, leading to a social erosion of human agency, personal and ethical responsibility.

 

Philosophical analysis of free will divides into two opposing responses. Incompatibilists claim that free will and determinism cannot coexist. Among incompatibilists, metaphysical libertarians, who number among them Descartes, Bishop Berkeley and Kant, argue that humans have free will, and hence deny the truth of determinism. Libertarianism holds onto a concept of free will that requires the agent to be able to take more than one possible course of action under a given set of circumstances, some arguing that indeterminism helps secure free will, others arguing that free will requires a special causal power, agent-causation. Instead, compatibilists argue that free and responsible agency requires the capacities involved in self-reflection and practical deliberation; free will is the ability to make choices based on reasons, along with the opportunity to exercise this ability without undue constraints (Nadelhoffer et al. 2014). This can make rational acts or decisions compatible with determinism.

 

Our concern here is thus not with responsible agency, which may or may not be compatible with determinism, but with affirming the existence of agency not causally determined by physical processes in the brain. Epiphenomenalists accept that subjective consciousness exists, as an internal model of reality constructed by the brain to give a global description of the coherent brain processes involved in perception, attention and cognition, but deny the volitional will over our actions that is central to both reasoned and creative physical actions. This invokes a serious doubt that materialistic neuroscience can be in any way consistent with any form of consciously conceived ethics, because invoking moral or ethical reasoning is reduced to forms of aversive conditioning, consistent with behaviourism and Pavlov’s dogs, subjectively rationalised by the subject as a reason. This casts volition as a delusion driven by evolutionary compensation to mask the futility of any subjective belief in organismic agency over the world.

 

Defending subjective volitional agency thus depends centrally on the innovative ability of the subjective conscious agent to generate actions which lie outside the constraints of determined antecedents, placing a key emphasis on creativity and idiosyncrasy, amid physical uncertainty, rather than cognitive rationality, as reasons are themselves subject to antecedents.

 

Bob Doyle notes that, in the first two-stage model of free will, William James (1884) proposed that indeterminism is the source for what James calls "alternative possibilities" and "ambiguous futures." The chance generation of such alternative possibilities for action does not in any way limit his choice to one of them. For James, chance is not the direct cause of actions. James makes it clear that it is his choice that grants 'consent' to one of them. In 1884, James asked some Harvard Divinity School students to consider his choice for walking home after his talk:

 

What is meant by saying that my choice of which way to walk home after the lecture is ambiguous and matter of chance?...It means that both Divinity Avenue and Oxford Street are called but only one, and that one either one, shall be chosen.

 

James was thus the first thinker to enunciate clearly a two-stage decision process, with chance in a present time of random alternatives, leading to a choice which grants consent to one possibility and transforms an equivocal, ambiguous future into an unalterable and simple past. There is a temporal sequence of undetermined alternative possibilities followed by an adequately determined choice where chance is no longer a factor. James also asked the students to imagine his actions repeated in exactly the same circumstances, a condition which is regarded today as one of the great challenges to libertarian free will. James anticipates much of the modern physical theories of multiple universes:

 

Imagine that I first walk through Divinity Avenue, and then imagine that the powers governing the universe annihilate ten minutes of time with all that it contained, and set me back at the door of this hall just as I was before the choice was made. Imagine then that, everything else being the same, I now make a different choice and traverse Oxford Street. You, as passive spectators, look on and see the two alternative universes – one of them with me walking through Divinity Avenue in it, the other with the same me walking through Oxford Street. Now, if you are determinists you believe one of these universes to have been from eternity impossible: you believe it to have been impossible because of the intrinsic irrationality or accidentality somewhere involved in it. But looking outwardly at these universes, can you say which is the impossible and accidental one, and which the rational and necessary one? I doubt if the most ironclad determinist among you could have the slightest glimmer of light on this point.

 

Henri Poincaré speculated on how his mind worked when solving mathematical problems. He had the critical insight that random combinations and possibilities are generated, some of them unconsciously, and are then selected among, perhaps initially also by an unconscious process, but finally by a definite conscious process of validation:

 

It is certain that the combinations which present themselves to the mind in a kind of sudden illumination after a somewhat prolonged period of unconscious work are generally useful and fruitful combinations… all the combinations are formed as a result of the automatic action of the subliminal ego, but those only which are interesting find their way into the field of consciousness… A few only are harmonious, and consequently at once useful and beautiful, and they will be capable of affecting the geometrician's special sensibility I have been speaking of; which, once aroused, will direct our attention upon them, and will thus give them the opportunity of becoming conscious… In the subliminal ego, on the contrary, there reigns what I would call liberty, if one could give this name to the mere absence of discipline and to disorder born of chance.

 

Even the reductionist Daniel Dennett, a compatibilist rather than a libertarian, has his own version of two-stage decision-making:

 

The model of decision making I am proposing has the following feature: when we are faced with an important decision, a consideration-generator whose output is to some degree undetermined produces a series of considerations, some of which may of course be immediately rejected as irrelevant by the agent (consciously or unconsciously). Those considerations that are selected by the agent as having a more than negligible bearing on the decision then figure in a reasoning process, and if the agent is in the main reasonable, those considerations ultimately serve as predictors and explicators of the agent's final decision.

 

Arthur Compton championed the idea of human freedom based on quantum uncertainty and invented the notion of amplification of microscopic quantum events to bring chance into the macroscopic world, in a two-stage model. Years later, he clarified the two-stage nature of his idea in an Atlantic Monthly article in 1955:

 

A set of known physical conditions is not adequate to specify precisely what a forthcoming event will be. These conditions, insofar as they can be known, define instead a range of possible events from among which some particular event will occur. When one exercises freedom, by his act of choice he is himself adding a factor not supplied by the physical conditions and is thus himself determining what will occur. That he does so is known only to the person himself. From the outside one can see in his act only the working of physical law. It is the inner knowledge that he is in fact doing what he intends to do that tells the actor himself that he is free.

 

At first Karl Popper dismissed quantum mechanics as being no help with free will, but he later described a two-stage model paralleling Darwinian evolution, with genetic mutations being probabilistic and involving quantum uncertainty.

 

In 1977 he gave the first Darwin Lecture, "Natural Selection and the Emergence of Mind". In it he said he had changed his mind (a rare admission by a philosopher) about two things. First, he now thought that natural selection was not a "tautology" that made it an unfalsifiable theory. Second, he had come to accept the random variation and selection of ideas as a model of free will, in what is now the leading two-stage model of free will: "The selection of a kind of behavior out of a randomly offered repertoire may be an act of choice, even an act of free will. I am an indeterminist; and in discussing indeterminism I have often regretfully pointed out that quantum indeterminacy does not seem to help us; for the amplification of something like, say, radioactive disintegration processes would not lead to human action or even animal action, but only to random movements. I have changed my mind on this issue. A choice process may be a selection process, and the selection may be from some repertoire of random events, without being random in its turn. This seems to me to offer a promising solution to one of our most vexing problems, and one by downward causation."

 

These accounts span diverse thinkers, from James, through Dennett, to Compton, who applied quantum uncertainty, so whether you are a materialist or a mentalist you can adapt two-process volition to your taste. It therefore says nothing about the nature of conscious decision-making, or the hard problem of volition. The key is that (1) something generates a set of possibilities, either randomly or otherwise, and (2) the mind/brain chooses one to enact, computationally, rationally or intuitively. Computationalists can say (1) is random and (2) is computational. Quantum mechanics provides for both: (1) is the indeterminacy of collapse in von Neumann process 1, and (2) is the dynamics of the collapsed particle under the Schrödinger equation, aka von Neumann process 2.
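
The bare two-process schema can be stated in a few lines (a neutral sketch of the schema itself; the generator and evaluator are hypothetical stand-ins supplied by the caller, and the code is agnostic about whether stage (1) is random, pseudo-random or anticipatory):

```python
import random

def two_stage_choice(generate, evaluate, n_alternatives=5):
    # stage (1): an undetermined generator proposes alternative possibilities
    alternatives = [generate() for _ in range(n_alternatives)]
    # stage (2): an adequately determined evaluation "grants consent" to one
    return max(alternatives, key=evaluate)

# toy usage: random candidate headings scored by closeness to a goal bearing
pick = two_stage_choice(lambda: random.uniform(0.0, 360.0),
                        lambda h: -abs(h - 90.0))
print(f"chosen heading: {pick:.1f} degrees")
```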

 

Symbiotic Existential Cosmology affirms two empirical modes – objective verified empirical observation and subjective affirmed empirical experience, both of which are amenable to the same statistical methods. This ties to the conclusion that subjective conscious volition has efficacy over the physical universe, and to the refutation of pure physicalism, because causal closure of the physical universe is unprovable, while empirical experience of our subjectively conscious actions towards our own physical survival clearly affirms that we have voluntary conscious volition with physical effect.

 

Benjamin Libet has become notorious for his readiness-potential experiments, taken to suggest that consciousness has no physical effect, but his statement on free will precisely echoes Symbiotic Existential Cosmology, with exactly the same ethical emphasis:

 

Given the speculative nature of both determinist and non-determinist theories, why not adopt the view that we do have free will (until some real contradictory evidence may appear, if it ever does). Such a view would at least allow us to proceed in a way that accepts and accommodates our own deep feeling that we do have free will. We would not need to view ourselves as machines that act in a manner completely controlled by the known physical laws.

 

In Symbiotic Existential Cosmology the transactional interpretation is envisaged as allowing a form of prescience because the collapse has implicit information about the future state of the universe in which the absorbers exist. This may appear logically paradoxical but no classical information is transferred so there is no inconsistency. Modelling the collapse appears to happen outside space-time, but actually it is instantaneous, so dual-time is just a core part of the heuristic to understand the non-linear process.

 

It is absolutely necessary for subjective conscious physical volition to be efficacious over mere computation, or it fails to confer an evolutionary advantage and would be eliminated over time by neutral and deleterious mutations in favour of purely computational brains. The fact that this hasn't happened in the 2 bYa since the eucaryote emergence tells us it DOES have an advantage in terms of intuitive anticipation, shared by all animals, who, unlike us, lack rational thought, and by single-celled eucaryotes, who have nothing more than social neurotransmitters and excitable membranes to do the same uncanny trick. Therefore we have to look to physics and the nature of uncertainty to solve this, because environmental uncertainty has its root in quantum uncertainty, just as throwing a die does by setting off a butterfly-effect process.

 

This evolutionary advantage depends on a transformation of Doyle's (1), with transactional collapse being a form of non-random hidden-variable theory in which non-local correlations of the universal wave function manifest as a complex system during collapse, in a way that looks deceptively like randomness because it is a complex, chaotic, ergodic process. This completely transforms part (1) of the two-process model of volition, because the intuitive choices are anticipatory, like integral transforms of the future, which we cannot put into a logical causality without paradox, but which can coexist before collapse occurs.

 

There is thus a clear biological requirement for subjective conscious physical volition: to ensure survival of existential threats in the wild. We can imagine a computer attempting the two-process model, throwing up heuristic options on a weighted probabilistic basis in process (1) and then optimising in a choice process (2). We can imagine this is also, in a sense, what we do when we approach a problem rationally. But that is not what survival in the wild is about. It is about computationally intractable environmental many-body problems that also involve other conscious agents – snakes, tigers and other humans – so are formally and computationally undecidable. Hence the role of intuition.

 

The transactional interpretation, as in fig 73, becomes the key to avoiding the mechanistic, pseudo-deterministic random (1) plus computational (2) process of two-process decision-making, and that is why we are able to exist and evolve as conscious, anticipating, sentient beings. You can imagine that an advanced AI package like ChatGPT can get to the water hole, but there is no evidence this is possible if it is covered in smelly food attractants, with unpredictable predators on the prowl. There is not even any good evidence that rational cognition can save our bacon. It all comes down to salience, sensory acuity, paranoia and intuition.

 

One may think one can depend on randomness alone to provide hypothetical heuristics and avoid getting "stuck in a rut", as a Hopfield network does by thermodynamic annealing – which is also key to why the brain uses edge-of-chaos instability – but machine randomness is arbitrary and artificial. A computer uses the time and date to seed a non-random, highly ergodic process to simulate randomness. All molecular billiards arises from a wave-particle process of spreading wave functions involving quantum uncertainty, just as photons do. The same goes for decoherence models of collapse.
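
The point about machine randomness is easily demonstrated: a pseudo-random generator is a deterministic, highly ergodic map, and only the seed – typically the wall clock – varies, so re-seeding reproduces the identical "random" run:

```python
import random, time

seed = int(time.time())                  # the only "random" ingredient
a, b = random.Random(seed), random.Random(seed)
# two generators with the same seed emit identical sequences
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]
print("same seed, same 'random' run")
```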

 

This is the ultimate flaw in relying on the two-process approach of Doyle, but avoiding it comes at the cost of a speculative leap about what is happening in von Neumann process 1. Quantum transactional collapse can occur instantaneously across space-time in a manner which may well be rationally contradictory about what time is, but is perfectly consistent with conscious intuition. If the universe is in a dynamical state between a multiverse and collapse to classicality, and conscious organisms, among other entities, participate in collapse, we have a link between surviving environmental uncertainty and quantum indeterminacy. If this is just randomness, no anticipatory advantage results, but if it is part of a delocalised complex-system hidden-variable theory, it can.

 

Any attempt to think about it in a causal sequence, or even to reason it rationally to unravel intuition, would lead to paradox, so rational thought cannot capture it. Intuition does reveal it, but not in a way we can prove with high-sigma causality statistics, because to do that we would have to invoke an IID process (an independent, identically-distributed set of measurements), which sends the whole process down the drain of the Born probability interpretation to randomness. The biological reality in ever-changing brain states is that each step changes the measurement context, as a non-IID process, so it amounts to Schrödinger turtles all the way down.
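
A toy statistical contrast (my illustration, not a quantum simulation) makes the point: the running frequency of an IID series settles to a fixed probability, as Born-rule statistics require, whereas a series whose generating context drifts at every step has no single probability to converge to:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
iid = rng.random(n) < 0.3                 # fixed measurement context, p = 0.3
# non-IID: the context parameter itself performs a random walk at each step
p_t = np.clip(0.5 + rng.normal(0, 0.01, n).cumsum(), 0.0, 1.0)
drifting = rng.random(n) < p_t

freq = lambda x: x.cumsum() / (np.arange(n) + 1)   # running relative frequency
print(freq(iid)[-1], freq(drifting)[-1])  # ~0.30 vs a run-dependent value
```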

 

I am prepared to make this quantum leap into retro-causal, special-relativistic transactions because it is consistent with quantum mechanics, and it urgently needs to be stated and explored, more than anything else, because it holds the key to why we are here as conscious sentient beings in this universe, in which life rises to climax conscious complexity.

 

Fig 88: Diagram from Descartes' Treatise of Man (1664), showing the formation of inverted retinal images in the eyes, and the transmission of these images, via the nerves so as to form a single, re-inverted image (an idea) on the surface of the pineal gland.

 

As a young man, Descartes had had a mystical experience in a sauna on the Danube: three dreams, which he interpreted as a message telling him to come up with a theory of everything, and on the strength of this he dedicated his life to philosophy, leading to his iconic quote – Cogito ergo sum, "I think therefore I am" – and to Cartesian dualism, immortalised in the homunculus. This means that, in a sense, the Cartesian heritage of dualism is a genuine visionary attempt on Descartes' part to come to terms with his own conscious experience in terms of his cognition, in distinction from the world around him. Once the separation invoked by the term dualism is replaced by complementarity, we arrive at Darwinian panpsychism.

 

Experior, ergo sum, experimur, ergo sumus.

I experience therefore I am, we experience therefore we are!

 

The traditional view of subjective consciousness stemming from Thomas Huxley is that of epiphenomenalism – the view that mental events are caused by physical events in the brain, but have no effects upon any physical events.

 

The way paradigm shifts can occur can be no more starkly illustrated than by the way in which epiphenomenalism, behaviourism and pure materialism, including reductionism, came to dominate the scientific view of reality and the conscious mind.

 

Fig 89: A decapitated frog uses its right foot to try to remove burning acid, but when that foot is cut off it uses its left, although having no brain.

 

Huxley (1874) held this view, comparing mental events to a steam whistle that contributes nothing to the work of a locomotive. William James (1879) rejected this view, characterising epiphenomenalists' mental events as not affecting the brain activity that produces them any more than a shadow reacts upon the steps of the traveller whom it accompanies – thus turning subjective consciousness from active agency to being a mere passenger. Huxley's essay likewise compares consciousness to the sound of the bell of a clock that has no role in keeping the time, and treats volition simply as a symbol in consciousness of the brain-state cause of an action. Non-efficacious mental events are referred to in this essay as 'collateral products' of their physical causes.

 

Klein (2021), in the paragraphs that follow, notes that the story begins with Eduard Pflüger's 1853 experiments showing that some decapitated vertebrates exhibit behaviour it is tempting to call purposive. The results were controversial because purposive behaviour had long been regarded as a mark of consciousness. Those who continued to think it was such a mark had to count a pithed frog – and presumably a chicken running around with its head cut off – as conscious. You can see such ideas echoing today in theories such as Solms and Friston's (2018) brain-stem based model of consciousness.

 

But this view opened the way for epiphenomenalism: just as pithed frogs seem to act with purpose even though their behaviour is not really guided by phenomenal consciousness, so intact human behaviours may seem purposive without really being guided by phenomenal consciousness.

 

Fig 90: Representation of consciousness from the seventeenth century by Robert Fludd, an English Paracelsian physician.

 

Descartes had famously contended that living animals might be like machines in the sense of being non-conscious organisms all of whose behaviours are produced strictly mechanistically. Those in the seventeenth and eighteenth century who adopted a broadly Cartesian approach to animal physiology are often called mechanists, and their approach is typically contrasted with so-called animists. What separated the two groups was the issue of whether and to what extent the mechanical principles of Newton and Boyle could account for the functioning of living organisms.

 

Even for those more inclined towards mechanism, though, animistic tendencies still underlay much physiological thinking throughout the early modern period. For instance, Giovanni Borelli (1608–1679) had developed a mechanistic account of how the heart pumps blood. But even Borelli gave the soul a small but important role in this motion. Borelli contended that the unpleasant accumulation of blood in the heart of the preformed embryo would be perceived by the 'sentient faculty' (facultas sensitiva) of the soul through the nerves, which would then prompt the ventricle to contract. Only after the process was thus initiated would the circulation continue mechanistically, as a kind of physical, acquired habit. But the ultimate cause of this motion was the soul.

 

Now, suppose one accepts purposive behaviour as a mark of consciousness (or sensation, or volition, or all of these). Then one arrives at a surprising result indeed – that the brainless frog, properly prepared, remains a conscious agent. Of course, there is a lot riding on just what is meant by 'consciousness', 'sensation', and 'volition'. Pflüger himself often wrote about the decapitated frog's supposed 'consciousness' (Bewusstsein), but was rather loose and poetic in spelling out what that term was to mean. Still, his general thesis was clear enough: that in addition to the brain, the spinal cord is also an organ that independently produces consciousness. One controversial implication is that consciousness itself may be divisible (and so literally extended; see Huxley 1870, 5–6) – it may exist in various parts of the nervous system, even in a part of the spinal cord that has been divided from the brain (Fearing 1930, 162–3).

 

Lotze's thought was that these behaviours seem purposive only because they are complex. If we allow that the nervous system can acquire complex, reflexive actions through bodily learning, then we can maintain that these behaviours are mechanically determined, and not guided or accompanied by any phenomenal consciousness. The difficulty with this response is that pithed frogs find ways to solve physical challenges they cannot be supposed to have faced before being pithed. For instance, suppose one places a pithed frog on its back, holds one leg straight up, perpendicular to the body, and irritates the leg with acid. The pithed frog will then raise the other leg to the same, odd position so as to be able to wipe away the irritant (Huxley 1870, 3). Huxley also reports that a frog that is pithed above the medulla oblongata (but below the cerebellum) loses the ability to jump, even though the frog with the brain stem and cerebellum both intact is able to perform this action, at least in response to irritation. A frog pithed just below the cerebrum can see, swallow, jump, and swim, though still will typically move only if prompted by an outer stimulus (Huxley 1870, 3–4).

 

Now what does Lewes mean by 'sensation' and 'volition'?

 

Do what we will, we cannot altogether divest Sensibility of its psychological connotations, cannot help interpreting it in terms of Consciousness; so that even when treating of sensitive phenomena observed in molluscs and insects, we always imagine these more or less suffused with Feeling, as this is known in our own conscious states. (Lewes 1877, 188–9)

 

He saw that one must first settle an important issue before it is possible to interpret these experiments. He wrote, “we have no proof, rigorously speaking, that any animal feels; none that any human being feels; we conclude that men feel, from certain external manifestations, which resemble our own, under feeling; and we conclude that animals feel on similar grounds.”

 

Now, inasmuch as the actions of animals furnish us with our sole evidence for the belief in their feeling, and this evidence is universally considered as scientifically valid, it is clear that similar actions in decapitated animals will be equally valid; and when I speak of proof, it is in this sense. Spontaneity and choice are two signs which we all accept as conclusive of sensation and volition. (Lewes 1859 237–8).

  

Does Pflüger’s experiment prove that there is sensation or volition in the pithed frog? We cannot tell, Lewes suggests, until we first settle on some third-person-accessible mark of sensation and volition. And the marks Lewes proposes are spontaneity and choice.

 

For Lewes, every physiological change is in some sense sensory, and every physiological change thereby influences the stream of Consciousness, however slightly.

 

Thomas Huxley (1874) offered the most influential and provocative version of the conscious automaton theory in an address in Belfast. According to this view, consciousness, synonymous with Lewes' 'sensation', accompanies the body without acting on it, just as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery. Conscious states are continually being caused by brain states from moment to moment, on this view, but are themselves causally inert. In other words, although Huxley accepted the existence of 'sensation', he rejected the existence of 'volition' (as Lewes had used that word). This is an early form of epiphenomenalism.

 

Pflüger and Lewes had indeed established the existence of purposive behaviour in pithed frogs, Huxley readily conceded (Huxley 1874, 223). But since it is absurd (according to Huxley) to think the behaviour of brainless frogs is under conscious control, the correct lesson to draw from Pflüger and Lewes' results was that purposive actions are not sufficient to establish volition. In fact, Huxley evidently was unwilling to accept the existence of any behavioural mark of either sensation or volition.

 

It must indeed be admitted, that, if any one think fit to maintain that the spinal cord below the injury is conscious, but that it is cut off from any means of making its consciousness known to the other consciousness in the brain, there is no means of driving him from his position by logic. But assuredly there is no way of proving it, and in the matter of consciousness, if in anything, we may hold by the rule, De non apparentibus et de non existentibus eadem est ratio’ [‘what does not appear and what does not exist have the same evidence’].

(Huxley, 1874, 220)

 

The mechanist’s dilemma is the following ‘paradox’:

 

A: If one accepts any behavioural mark of sensation and volition, then the experimental data will force us to attribute sensation and volition to both decapitated and intact vertebrates alike.

B: If one rejects the existence of a behavioural mark, then one has no grounds for ascribing sensation or volition to either decapitated or intact vertebrates.

 

Huxley's pronouncement piggybacks on the position he took in the mechanist's dilemma. His claim that spinal consciousness cannot be observed amounts to the claim that such a consciousness cannot be observed first-personally. But that is the crux of the mechanist's dilemma.

 

Huxley nevertheless was reverential toward the contribution made by René Descartes to the understanding of the physiology of the brain and body:

 

The first proposition culled from the works of Descartes which I have to lay before you, is one which will sound very familiar. It is the view, which he was the first, so far as I know, to state, not only definitely, but upon sufficient grounds, that the brain is the organ of sensation, of thought, and of emotion – using the word "organ" in this sense, that certain changes which take place in the matter of the brain are the essential antecedents of those states of consciousness which we term sensation, thought and emotion. ... It remained down to the time of Bichat [150 years later] a question of whether the passions were or were not located in the abdominal viscera. In the second place, Descartes lays down the proposition that all movements of animal bodies are effected by a change in form of a certain part of the matter of their bodies, to which he applies the general term of muscle.

 

The process of reasoning by which Descartes arrived at this startling conclusion is well shown in the following passage of the "Réponses": "But as regards the souls of beasts, although this is not the place for considering them, and though, without a general exposition of physics, I can say no more on this subject than I have already said in the fifth part of my Treatise on Method; yet, I will further state, here, that it appears to me to be a very remarkable circumstance that no movement can take place, either in the bodies of beasts, or even in our own, if these bodies have not in themselves all the organs and instruments by means of which the very same movements would be accomplished in a machine. So that, even in us, the spirit, or the soul, does not directly move the limbs, but only determines the course of that very subtle liquid which is called the animal spirits, which, running continually from the heart by the brain into the muscles, is the cause of all the movements of our limbs, and often may cause many different motions, one as easily as the other."

 

Descartes' line of argument is perfectly clear. He starts from reflex action in man, from the unquestionable fact that, in ourselves, co-ordinate, purposive actions may take place, without the intervention of consciousness or volition, or even contrary to the latter. As actions of a certain degree of complexity are brought about by mere mechanism, why may not actions of still greater complexity be the result of a more refined mechanism? What proof is there that brutes are other than a superior race of marionettes, which eat without pleasure, cry without pain, desire nothing, know nothing, and only simulate intelligence as a bee simulates a mathematician? ... Suppose that only the anterior division of the brain – "so much of it as lies in front of the optic lobes" – is removed. If that operation is performed quickly and skilfully, the frog may be kept in a state of full bodily vigour for months, or it may be for years; but it will sit unmoved. It sees nothing: it hears nothing. It will starve sooner than feed itself, although food put into its mouth is swallowed. On irritation, it jumps or walks; if thrown into the water it swims.

 

Klein (2018) notes that the crux of the paradigm shift was the competing research of the opposing groups and the way in which their experimental successes at the time translated into institutional success:

 

But by the time of the Lewes contribution from 1877, the question was no longer whether this one subset of muscular action could be accounted for purely mechanistically. Now, the question had become whether the mechanistic approach to reflex action might be expanded to cover all muscular action. Lewes wrote that the 'Reflex Theory' had become a strategy where one attempted to specify the 'elementary parts involved' in every physiological function without ever appealing to 'Sensation and Volition' (Lewes, Problems of Life and Mind, 354).

 

'That the majority of physiological opinion by the close of the century was in favor of the position of Pflüger's opponents seems certain,' Fearing writes. 'Mechanistic physiology and psychology was firmly seated in the saddle' (Fearing 1930, 185).

 

The concept of a mechanistic reflex arc came to dominate not just physiology, but psychology too. The behaviourist B. F. Skinner, for example, wrote his 1930 doctoral dissertation on how to expand the account of reflex action to cover all behaviour, even the behaviour of healthy organisms. Through the innovations of people like Skinner and, before him, Pavlov, behaviourism would establish itself as the dominant research paradigm.

 

Cannon (1911, 38) gave no real argument for why students should not regard purposive movement as a mark of genuine volition (beyond a quick gesture at Lotze's long-discredited retort to Pflüger). Without citing any actual experiments, Cannon simply reported, as settled scientific fact, that purposiveness does not entail intended action:

 

Purposive movements are not necessarily intended movements. It is probable that reaction directed with apparent purposefulness is in reality an automatic repetition of movements developed for certain effects in the previous experience of the intact animal. (ibid)

 

Schwartz et al. (2005) highlight the key role William James played in establishing the status of volitional will:

 

William James (1890, 138) argued against epiphenomenal consciousness, claiming that 'The particulars of the distribution of consciousness, so far as we know them, point to its being efficacious.' James (136) stated that 'consciousness is at all times primarily a selecting agency'. It is present when choices must be made between different possible courses of action: 'It is to my mind quite inconceivable that consciousness should have nothing to do with a business to which it so faithfully attends'.

 

These liabilities of the notion of epiphenomenal mind and consciousness lead many thinkers to turn to the alternative possibility that a person's mind and stream of consciousness is the very same thing as some activity in their brain: mind and consciousness are 'emergent properties' of brains. A huge philosophical literature has developed arguing for and against this idea.

 

They cite Sperry, who adopted an identity theory approach which he claimed was monist, invoking a top-down, systems-theoretic notion of the mind as an abstraction of certain higher-level brain processes:

 

The core ideas of the arguments in favour of an identity-emergent theory of mind and consciousness are illustrated by Roger Sperry's (1992) example of a wheel. A wheel obviously does something: it is causally efficacious; it carries the cart. It is also an emergent property: there is no mention of 'wheelness' in the formulation of the laws of physics and 'wheelness' did not exist in the early universe; 'wheelness' emerges only under certain special conditions. And the macroscopic wheel exercises 'top-down' control of its tiny parts. ... The reason that mind and consciousness are not analogous to 'wheelness', within the context of classic physics, is that the properties that characterize 'wheelness' are properties that are entailed, within the conceptual framework of classic physics, by properties specified in classic physics, whereas the properties that characterize conscious mental processes, namely the various ways these processes feel, are not entailed, within the conceptual framework of classic physics, by the properties specified by classic physics.

 

They quote James again in their theory of volition, based on the repeated application of attention to the issue at hand:

 

In the chapter on will, in the section entitled 'Volitional effort is effort of attention', James (1892, 417) writes: "Thus we find that we reach the heart of our inquiry into volition when we ask by what process is it that the thought of any given action comes to prevail stably in the mind. ... The essential achievement of the will, in short, when it is most 'voluntary', is to attend to a difficult object and hold it fast before the mind. Effort of attention is thus the essential phenomenon of will. ... Consent to the idea's undivided presence, this is effort's sole achievement. Everywhere, then, the function of effort is the same: to keep affirming and adopting the thought which, if left to itself, would slip away".

 

Enshrining the concept of pure behaviourism, and reductionism more generally, Gilbert Ryle (1949) claimed in "The Concept of Mind" that "mind" is "a philosophical illusion hailing from René Descartes, and sustained by logical errors and 'category mistakes' which have become habitual". Ryle rejected Descartes' theory of the relation between mind and body, on the grounds that it approaches the investigation of mental processes as if they could be isolated from physical processes. According to Ryle, the classical theory of mind, or "Cartesian rationalism", makes a basic category mistake (a new logical fallacy Ryle himself invented), as it attempts to analyze the relation between "mind" and "body" as if they were terms of the same logical category. The rationalist theory that there is a transformation into physical acts of some purely mental faculty of "Will" or "Volition" is therefore a misconception, because it mistakenly assumes that a mental act could be and is distinct from a physical act, or even that a mental world could be and is distinct from the physical world. This theory of the separability of mind and body is described by Ryle as "the dogma of the ghost in the machine". However, Ryle did not regard himself as a philosophical behaviourist, though he wrote that the "general trend of this book will undoubtedly, and harmlessly, be stigmatised as 'behaviourist'".

 

Symbiotic Existential Cosmology classes itself as ICAM (interactively complementary aspect monism), rather than dualism. The Stanford Encyclopaedia of Philosophy definitions for dualism (Robinson 2023) are:

 

Genuine property dualism occurs when, even at the individual level, the ontology of physics is not sufficient to constitute what is there. The irreducible language is not just another way of describing what there is, it requires that there be something more there than was allowed for in the initial ontology. Until the early part of the twentieth century, it was common to think that biological phenomena (‘life’) required property dualism (an irreducible ‘vital force’), but nowadays the special physical sciences other than psychology are generally thought to involve only predicate dualism (that psychological or mentalistic predicates are (a) essential for a full description of the world and (b) are not reducible to physicalistic predicates). In the case of mind, property dualism is defended by those who argue that the qualitative nature of consciousness is not merely another way of categorizing states of the brain or of behaviour, but a genuinely emergent phenomenon.

 

Substance dualism: There are two important concepts deployed in this notion. One is that of substance, the other is the dualism of these substances. A substance is characterized by its properties, but, according to those who believe in substances, it is more than the collection of the properties it possesses, it is the thing which possesses them. So the mind is not just a collection of thoughts, but is that which thinks, an immaterial substance over and above its immaterial states.

 

In Stanford, Tanney (2022) notes that Ryle's category error critique was centrally about the assumed distinctness or separability of mind and body as "substances", in the context of the absurdity of certain sentence constructions:

 

When a sentence is (not true or false but) nonsensical or absurd, though its vocabulary is conventional and its grammatical construction is regular, we say that it is absurd because at least one ingredient expression in it is not of the right type to be coupled or to be coupled in that way with the other ingredient expression or expressions in it. Such sentences, we may say, commit type-trespasses or break type-rules. (1938, 178)

 

The category mistake Ryle identifies in “There is a mind and a body” or “there is a mind or a body” is less obvious. For it takes a fair bit of untangling to show that “mind” and “body” are different logical or grammatical types; a fact which renders the assertion of either the conjunction or the disjunction nonsensical.

 

Robinson (2023) further notes both the veridical affirmation of interactivity in everyday life and the unverifiability of physical causal closure:

 

Interactionism is the view that mind and body – or mental events and physical events – causally influence each other. That this is so is one of our common-sense beliefs, because it appears to be a feature of everyday experience. The physical world influences my experience through my senses, and I often react behaviourally to those experiences. My thinking, too, influences my speech and my actions. There is, therefore, a massive natural prejudice in favour of interactionism. 

 

Causal Closure: Most discussion of interactionism takes place in the context of the assumption that it is incompatible with the world's being 'closed under physics'. This is a very natural assumption, but it is not justified if causal overdetermination of behaviour is possible. There could then be a complete physical cause of behaviour, and a mental one. The problem with closure of physics may be radically altered if physical laws are indeterministic, as quantum theory seems to assert. If physical laws are deterministic, then any interference from outside would lead to a breach of those laws. But if they are indeterministic, might not interference produce a result that has a probability greater than zero, and so be consistent with the laws? This way, one might have interaction yet preserve a kind of nomological closure, in the sense that no laws are infringed. … Some argue that indeterminacy manifests itself only on the subatomic level, being cancelled out by the time one reaches even very tiny macroscopic objects: and human behaviour is a macroscopic phenomenon. Others argue that the structure of the brain is so finely tuned that minute variations could have macroscopic effects, rather in the way that, according to 'chaos theory', the flapping of a butterfly's wings in China might affect the weather in New York. (For discussion of this, see Eccles (1980), (1987), and Popper and Eccles (1977).) Still others argue that quantum indeterminacy manifests itself directly at a high level, when acts of observation collapse the wave function, suggesting that the mind may play a direct role in affecting the state of the world (Hodgson 1988; Stapp 1993).

 

Symbiotic Existential Cosmology does not assert "substance" dualism, since subjective conscious volition is not treated as a "substance" in the manner of objective physical entities, as mind was in Ryle's complaint against Cartesian dualism. SEC invokes a unified Cosmos in which primal subjectivity and the objective universe are complementary, mutually interactive principles, in a universe which is not causally closed and in which volitional will can act without causal conflict, through quantum uncertainty. Life is also subject to causal overdetermination due to teleological influences such as autopoiesis, e.g. in the negentropic nature of life and evolution as self-organising, far-from-equilibrium thermodynamic systems. The subjective aspect is fully compliant with the determined physical boundary conditions of brain states, except in so far as subjective volition interacts with environmental quantum-derived uncertainty through quantum-sensitive unstable brain dynamics, forming a contextual filter theory of brain function on conscious experience, rather than a causally-closed universe determining ongoing brain states. Thus no pure-subjective interactivity is required, as occurs in traditional forms of panpsychism, such as pan-proto-psychism or cosmopsychism.

 

The key counter to Ryle's complaint is this: if, in response to a received e-mail, I say "you have demonstrated that your subjective conscious volition has efficacy over the physical universe" – because the author has, by consciously intending to compose and send their response, produced it in physical form – the statement is not grammatically, semantically, or categorically absurd. It is a direct empirical observation from experience that raises no physical or philosophical inconsistencies, and it fully confirms the empirical experience of subjective conscious agency over the physical world, consistent with the civil and criminal law of conscious intentional responsibility. Ryle's strategy is linguistic. He attacks both the ontological commitment (the view that mind and body are somehow fundamentally different or distinct, but nonetheless interact) and the epistemological commitment (the inability to confirm other people are conscious because subjectivity is private) of what he calls the "official doctrine" (Tanney 2022). The problem is that, by dealing with it in a purely linguistic analysis, we are dealing only with objective semantic and grammatical connotations, so the argument is intrinsically objective. We know that subjectivity is private and objectivity is public. That's just the way it is! We also know that in all our discourses subjective-objective interactivity occurs. A hundred percent of our experience is subjective, and the world around us is inferred from our subjectively conscious experiences of it.

 

The way out is not to deny mind, or consciousness itself, which we are all trying to fathom, or we are back at the hard problem of the objectively unfathomable explanatory gap. The way out is that the above statement, "you have demonstrated that your subjective conscious volition has efficacy over the physical universe", is something involving conscious physical volition that we can mutually agree on, because it is evidenced in our behaviour in consciously responding to one another. Ryle is sitting by himself in his office dreaming up linguistic contradictions, but these evaporate through mutual affirmation of subjective volition. That is the transactional principle manifest. The category error then vanishes in the subjective empirical method. This is why extending the hard problem to volition has been essential, because it is the product of conscious volition in behaviour that is verifiable.

 

In Stanford, Tanney (2022) notes that Cartesianism is at worst "dead" in only one of its ontological aspects: substance dualism may have been repudiated, but property dualism still claims a number of contemporary defenders. Furthermore, although Descartes embraced a form of substance dualism, in the sense that the pineal acted in response to the soul by making small movements that initiated wider responses in the brain, the pineal is still a biological entity, so the category error is misconceived. His description is remarkably similar to instabilities in brain dynamics potentially inducing global changes in brain dynamics. Compounded with the inability of materialism to solve the hard problem, science is thus coming full circle. It is not just a question of sentence construction but of Cosmology.

 

But Ryle's rejection of Cartesian dualism fed a second paradigm shift, in which molecular biology, following Watson and Crick's discovery of the structure of DNA, led to an ever more effective 'laying bare' of all biological processes including the brain, accompanied by the new technologies of multi-electrode EEG, MEG and functional magnetic resonance imaging (fMRI), so that subjective consciousness became effectively ignored in the cascade of purely functionalist results about how human brain dynamics occur.

 

Anil Seth (2018) notes:

 

The relationship between subjective conscious experience and its biophysical basis has always been a defining question for the mind and brain sciences. But, at various times since the beginnings of neuroscience as a discipline, the explicit study of consciousness has been either treated as fringe or excluded altogether. Looking back over the past 50 years, these extremes of attitude are well represented. Roger Sperry (1969, 532), pioneer of split-brain operations and of what can now be called 'consciousness science', lamented in 1969 that 'most behavioral scientists today, and brain researchers in particular, have little use for consciousness'. Presciently, in the same article he highlighted the need for new technologies able to record 'the pattern dynamics of brain activity' in elucidating the neural basis of consciousness. Indeed, modern neuroimaging methods have had a transformative impact on consciousness science, as they have on cognitive neuroscience generally.

 

Informally, consciousness science over the last 50 years can be divided into two epochs. From the mid-1960s until around 1990 the fringe view held sway, though with several notable exceptions. Then, from the late 1980s and early 1990s, came first a trickle and more recently a deluge of research into the brain basis of consciousness, a transition catalysed by, among other things, the activities of certain high-profile scientists (e.g. the Nobel laureates Francis Crick and Gerald Edelman) and by the maturation of modern neuroimaging methods, as anticipated by Sperry.

 

Symbiotic cosmology, based on complementarity, is coherent, unlike a strictly dualist description. This coherence – forming a complete whole without discrete distinction – is manifestly true, in that we can engage either a subjective discourse on our experiences or an objective account of their material circumstances in every situation in waking life, just as the wave and particle aspects of quanta are coherent, complementary manifestations that cannot be separated. We thus find that the human discourse on our existential condition has two complementary modes: one fixed in the objective physical description of the world around us, using logical and causal operations; the other describing our subjective conscious experiences as intelligent, sensual beings, which are, throughout our lives, our sole source of personal knowledge of the physical world, without which we would have no access to the universe at large, let alone to our dreams, memories and reflections (Jung 1963), all of which are conscious in nature, and often ascribed to be veridical rather than imaginary, in the case of dreams and visionary states.

 

In Erwin Schrödinger's words (1944): "The world is a construction of our sensations, perceptions, memories. It is convenient to regard it as existing objectively on its own. But it certainly does not become manifest by its mere existence" … "The reason why our sentient, percipient and thinking ego is met nowhere within our scientific world picture can easily be indicated in seven words: because it is itself that world picture".

 

A central problem faced by detractors of the role of consciousness, in both the contexts of the brain and the quantum universe, is that many of the materialist arguments depend on an incorrectly classical view of causality, or causal closure, in the context of brain dynamics, which is fundamentally inconsistent with quantum reality. In the brain context, this is purported to eliminate an adaptive role for consciousness in human and animal survival, reducing it to a form of epiphenomenalism, in which volitional will would be a self-serving delusion. This follows lines of thinking derived from computational ideas, in which interfering with a computational process would hinder its efficiency.

 

In relation to volitional will, Chalmers & McQueen (2021) note: "There are many aspects to the problem of consciousness, including the core problem of why physical processes should give rise to consciousness at all. One central aspect of the problem is the consciousness-causation problem: It seems obvious that consciousness plays a causal role, but it is surprisingly hard to make sense of what this role is and how it can be played."

 

The problem with the idea of objective brain processing being causally closed is fivefold. Firstly, the key challenges to organismic survival are computationally intractable, open-environment problems, which may be better served by edge-of-chaos dynamics than classical computation. Secondly, many problems of survival are not causally closed at all, because both evolution and organismic behaviour are creative processes in which there are many viable outcomes, not just a single logically defined or optimal one. Thirdly, quantum uncertainty and its deeper manifestations in entanglement are universal, both in the brain and in the environment, so there are copious ways for consciousness to intervene without disrupting causally deterministic processes, and this appears to be its central cosmological role. Fourthly, the notion runs headlong into contradiction with our everyday experience of volition, in which we are consciously aware of our volitional intent and of its effects, both in our purposive decision-making and in acts affecting the world around us. For causal closure to be true, all the purposive decisions upon which we depend for our survival would have to be a perceptual delusion, contradicting the manifest nature of veridical perception generally. Fifthly, the work of Libet through to Schurger et al. demonstrates that causal closure is unproven, and it is likely to remain so, given the edge-of-chaos instability of critical brain processes in decision-making in the quantum universe.

 

The Readiness Potential and its Critics

 

Challenging the decision-making role of consciousness, Libet (1983, 1989) asked volunteers to flex a finger or wrist. When they did, the movements were preceded by a dip in the brain signals being recorded, called the "readiness potential" (RP). He interpreted this RP, which began a few tenths of a second before the volunteers said they had decided to move, as the brain preparing for movement. Libet concluded that unconscious neural processes determine our actions before we are ever aware of making a decision. Since then, others have quoted the experiment as evidence that free will is an illusion.

 

However, Libet (1999), in "Do we have free will?", himself makes the most convincing case possible for subjective consciousness having the capacity for free will:

 

I have taken an experimental approach to this question. Freely voluntary acts are preceded by a specific electrical change in the brain (the 'readiness potential', RP) that begins 550 ms before the act. Human subjects became aware of intention to act 350–400 ms after RP starts, but 200 ms before the motor act. The volitional process is therefore initiated unconsciously. But the conscious function could still control the outcome; it can veto the act. Free will is therefore not excluded. These findings put constraints on views of how free will may operate; it would not initiate a voluntary act but it could control performance of the act. The findings also affect views of guilt and responsibility.

 

But the deeper question still remains: Are freely voluntary acts subject to macro- deterministic laws or can they appear without such constraints, non-determined by natural laws and ‘truly free’? I shall present an experimentalist view about these fundamental philosophical opposites. ... The question of free will goes to the root of our views about human nature and how we relate to the universe and to natural laws. Are we completely defined by the deterministic nature of physical laws? Theologically imposed fateful destiny ironically produces a similar end-effect. In either case, we would be essentially sophisticated automatons, with our conscious feelings and intentions tacked on as epiphenomena with no causal power. Or, do we have some independence in making choices and actions, not completely determined by the known physical laws? The initiation of the freely voluntary act appears to begin in the brain unconsciously, well before the person consciously knows he wants to act! Is there, then, any role for conscious will in the performance of a voluntary act? (see Libet, 1985). To answer this it must be recognized that conscious will (W) does appear about 150 msec. before the muscle is activated, even though it follows onset of the RP.

 

Potentially available to the conscious function is the possibility of stopping or vetoing the final progress of the volitional process, so that no actual muscle action ensues. Conscious-will could thus affect the outcome of the volitional process even though the latter was initiated by unconscious cerebral processes. Conscious-will might block or veto the process, so that no act occurs. The existence of a veto possibility is not in doubt. The subjects in our experiments at times reported that a conscious wish or urge to act appeared but that they suppressed or vetoed that. … My conclusion about free will, one genuinely free in the non-determined sense, is then that its existence is at least as good, if not a better, scientific option than is its denial by determinist theory. Given the speculative nature of both determinist and non-determinist theories, why not adopt the view that we do have free will (until some real contradictory evidence may appear, if it ever does). Such a view would at least allow us to proceed in a way that accepts and accommodates our own deep feeling that we do have free will. We would not need to view ourselves as machines that act in a manner completely controlled by the known physical laws.
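The arithmetic of Libet's timeline is easy to lay out. A minimal sketch in Python (variable names are ours; the figures are those quoted above):

    # Libet's (1999) reported timing, relative to the motor act at t = 0 ms.
    rp_onset = -550            # readiness potential begins 550 ms before the act
    w = rp_onset + 350         # awareness of intent (W) follows RP onset by 350-400 ms
    veto_window = 0 - w        # time remaining for a conscious veto before the act
    print(w, veto_window)      # W at about -200 ms, leaving ~150-200 ms of control

On these figures the unconscious initiation precedes awareness, but awareness still precedes the act by a window wide enough for Libet's conscious veto.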

 

Nevertheless, articulating a theory heavily dependent on the readiness potential, Budson et al. (2022) claim all the brain’s decision-making procedures are unconscious, but followed half a second later by conscious experience that is just a memory-based constructive representation of future outcomes. According to the researchers, this theory is important because it explains that all our decisions and actions are actually made unconsciously, although we fool ourselves into believing that we consciously made them:

 

In a nutshell, our theory is that consciousness developed as a memory system that is used by our unconscious brain to help us flexibly and creatively imagine the future and plan accordingly. What is completely new about this theory is that it suggests we don’t perceive the world, make decisions, or perform actions directly. Instead, we do all these things unconsciously and then—about half a second later—consciously remember doing them. We knew that conscious processes were simply too slow to be actively involved in music, sports, and other activities where split-second reflexes are required. But if consciousness is not involved in such processes, then a better explanation of what consciousness does was needed.

 

But this notion is itself a delusion. The conscious brain has evolved to be able to co-opt very fast subconscious processes to orchestrate, in real time, highly accurate, innovative conscious responses, which the agent is fully aware of exercising in real time. The evidence is that conscious control of subconscious fast processing, e.g. via insular von Economo neurons and the basal ganglia, occurs in parallel in real time. Tennis could not be played if the players' conscious reactions were half a second behind the ball. They could not represent, or accurately respond to, the actual dynamics.
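The tennis arithmetic makes the point concrete. A rough back-of-envelope check (the speed figure is illustrative, not from Budson et al.):

    # Why a half-second conscious lag is incompatible with real-time play.
    serve_speed = 50.0        # m/s, roughly a 180 km/h serve
    court_length = 23.77      # m, baseline to baseline
    flight_time = court_length / serve_speed
    print(flight_time)        # ~0.48 s: the ball arrives within the supposed 0.5 s lag

A return organised only half a second after the fact would be responding to a ball that has already passed.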

 

Likewise, Earl (2014) cites the notion that consciousness is solely information in various forms, associated with a flexible response mechanism (FRM) for decision-making, planning, and generally responding in non-automatic ways. Both of these claims are tautologous, because information is both subjective and objective, and non-conscious responses ARE physically automatic. Earl attempts to discount the validity of our subjective experience of volition by claiming it rests on a false assumption, since it fails to include all the mechanical details of how an act is generated:

 

When I decide to pick up a cup and do so, I may believe that my thought initiates my action, but what I observe is I have the thought of picking up the cup and then reach out and take the cup. I do not experience the information manipulations that must occur to initiate my action, and I have no evidence that my action is consciously initiated. One tends to assume one’s intentional actions are consciously initiated, but as Wegner and Wheatley (1999) reported, we may perceive our actions as consciously caused if the thought occurs before the act and is consistent with the act, and there are no other likely causes.

 

While this does not go so far as to claim the conscious experience of volition is a delusion that evolved to give the epiphenomenal organism confidence in its ability to act, it incorrectly claims that our experience of willed, intentional decision-making behaviour, key to our survival, is a false assumption associating unconnected causes and effects:

 

In any intentional action, one never experiences the complete sequence of events from the starting conditions to completing the action. Bowers (1984, p. 249) wrote that “one can introspectively notice and/or recall antecedents of one’s behavior but the causal connection linking the determining antecedents and the behavior to be explained is simply not directly accessible to introspection. Rather, the causal link between antecedents and their consequences is provided by an inference, however implicit and invisible.” There are gaps in every experience of intentional choice, intentional initiation of responses, intentional control of attention or behavior, and in thinking, speaking, problem solving, creativity, and every other action with which consciousness is associated; and in each of these activities the executive mental process is missing from consciousness. 

 

These arguments do not constitute a valid critique, given the ability of non-conscious processes to complement and prepare the experiential context for a comprehensive conscious decision. To have to experience every mechanical aspect of an intentional action would subject the flow of subjective consciousness to strategic overload and obliterate the efficiency of the FRM model. Conscious experience gives us the effective overview to act decisively in real time.

 

Libet's claim has been undermined by more recent studies. Bredikhin et al. (2023) have discovered confounding faults in Libet's procedure. Instead of letting volunteers decide when to move, Trevena and Miller (2010) asked them to wait for an audio tone before deciding whether to tap a key. If Libet's interpretation were correct, the RP should be greater after the tone when a person chose to tap the key. While there was an RP before volunteers made their decision to move, the signal was the same whether or not they elected to tap. Miller concludes that the RP may merely be a sign that the brain is paying attention and does not indicate that a decision has been made. They also failed to find evidence of subconscious decision-making in a second experiment. This time they asked volunteers to press a key after the tone, but to decide on the spot whether to use their left or right hand. As movement in the right limb is related to brain signals in the left hemisphere and vice versa, they reasoned that if an unconscious process were driving this decision, where it occurs in the brain should depend on which hand is chosen, but they found no such correlation.

 

Schurger and colleagues (2012) have a key explanation. Previous studies have shown that, when we have to make a decision based on sensory input, assemblies of neurons start accumulating evidence in favour of the various possible outcomes. The team reasoned that a decision is triggered when the evidence favouring one particular outcome becomes strong enough to tip the dynamics – i.e. when the neural noise generated by random or chaotic activity accumulates sufficiently that its associated assembly of neurons crosses a threshold tipping point. The team repeated Libet's experiment, but this time if, while waiting to act spontaneously, the volunteers heard a click they had to act immediately. The researchers predicted that the fastest responses to the click would be seen in those in whom the accumulation of neural noise had neared the threshold – something that would show up in their EEG as a readiness potential. In those with slower responses to the click, the readiness potential was indeed absent from the EEG recordings. "We argue that what looks like a pre-conscious decision process may not in fact reflect a decision at all. It only looks that way because of the nature of spontaneous brain activity." Schurger and Uithol (2015) specifically note the evidence of a sensitively dependent butterfly effect (London et al. 2010), whereby nervous systems vary their responses to identical stimuli, as a reason why it could be impossible to set out a deterministic decision-making path from contributory systems to a conscious decision, supporting their stochastic accumulator model. Hans Liljenström (2021), using stochastic modelling, concludes that if decisions have to be made fast, emotional processes and aspects dominate, while rational processes are more time-consuming and may result in a delayed decision.
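The stochastic accumulator can be sketched in a few lines of Python. This is a minimal illustration, not Schurger's fitted model: the drift I, leak k, noise amplitude c and threshold below are placeholder values, and the 'decision' is simply the first threshold crossing of autocorrelated noise:

    import numpy as np

    rng = np.random.default_rng(0)
    dt, k, I, c, threshold = 0.001, 0.6, 0.1, 0.05, 0.15   # illustrative parameters

    def first_crossing(t_max=20.0):
        # Leaky stochastic accumulator: dx = (I - k*x)*dt + c*dW.
        # Returns the time at which the accumulated activity first crosses threshold.
        x, t = 0.0, 0.0
        while t < t_max:
            x += (I - k * x) * dt + c * rng.normal() * np.sqrt(dt)
            t += dt
            if x >= threshold:
                return t
        return t_max

    print([round(first_crossing(), 2) for _ in range(10)])  # waiting times vary trial to trial

Averaging the signal backwards from each crossing produces a slow RP-like buildup, even though nothing in the model 'decides' until the threshold is reached, which is Schurger's central point.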

 

Alexander et al. (2016) establish the lack of linkage of the RP to motor activity:

 

"The results reveal that robust RPs occurred in the absence of movement and that motor-related processes did not significantly modulate the RP. This suggests that the RP measured here is unlikely to reflect preconscious motor planning or preparation of an ensuing movement, and instead may reflect decision-related or anticipatory processes that are non-motoric in nature."

 

More recently, the actual basis for coordinating a decision to act has been found to reside in slowly evolving dopamine modulation. When you reach out to perform an action, seconds before you voluntarily extend your arm, thousands of neurons in the motor regions of your brain erupt in a pattern of electrical activity that travels to the spinal cord and then to the muscles that power the reach. But just prior to this massively synchronised activity, the motor regions in your brain are relatively quiet. For such self-driven movements, a key piece of the "go" signal that tells the neurons precisely when to act has been revealed in the form of a slow ramping up of dopamine in a region deep below the cortex, which closely predicted the moment that mice would begin a movement – seconds into the future (Hamilos et al. 2021).

 

The authors imaged mesostriatal dopamine signals as mice decided when, after a cue, to retrieve water from a spout. Ramps in dopamine activity predicted the timing of licks: fast ramps preceded early retrievals, slow ones late retrievals. Surprisingly, dopaminergic signals ramped up over the seconds between the start-timing cue and the self-timed movement, with variable dynamics that predicted the movement/reward time on single trials. Steeply rising signals preceded early lick-initiation, whereas slowly rising signals preceded later initiation. Higher baseline signals also predicted earlier self-timed movements. Consistent with this view, the dynamics of the slowly evolving endogenous dopaminergic signals quantitatively predicted the moment-by-moment probability of movement initiation on single trials. The authors propose that ramping dopaminergic signals, likely encoding dynamic reward expectation, can modulate the decision of when to move.
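Read as a ramp-to-threshold process, the finding is easy to simulate. A minimal sketch (the threshold, slopes and baselines are invented for illustration, not Hamilos et al.'s measurements):

    import numpy as np

    rng = np.random.default_rng(1)
    threshold = 1.0

    def movement_time(baseline, slope, noise=0.02, dt=0.01, t_max=10.0):
        # A dopaminergic signal ramps from a trial-specific baseline at a
        # trial-specific slope; movement is initiated at threshold crossing.
        x, t = baseline, 0.0
        while x < threshold and t < t_max:
            x += slope * dt + noise * rng.normal() * np.sqrt(dt)
            t += dt
        return round(t, 2)

    print(movement_time(baseline=0.2, slope=0.4))   # slow ramp -> late movement (~2 s)
    print(movement_time(baseline=0.2, slope=0.8))   # fast ramp -> early movement (~1 s)
    print(movement_time(baseline=0.5, slope=0.4))   # higher baseline -> earlier movement

Because the crossing is noisy, the slope and baseline set only the probability of moving at a given moment, matching the probabilistic reading in the passage that follows.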

 

Slowly varying neuromodulatory signals could allow the brain to adapt to its environment. Such flexibility wouldn’t be afforded by a signal that always led to movement at the exact same time. Allison Hamilos notes: “The animal is always uncertain, to some extent, about what the true state of the world is. You don’t want to do things the same way every single time — that could be potentially disadvantageous.”

 

This introduces further complexity into the entire pursuit of Libet's readiness potential, which is clearly not itself the defining event; rather, the defining event is at first call concealed in a slowly varying dopamine modulation, which does not itself determine the timing of the event except on a probabilistic basis. Furthermore, the striatum itself is a gatekeeper in the basal ganglia, coordinating the underlying conscious decision to act, and is not the conscious decision itself.

 

Celia Green and Grant Gillett (1995) have also cited three grounds for the readiness potential to be unreliable:

 

First, there is a dual assumption that an intention is the kind of thing that causes an action and that can be accurately introspected. Second, there is a real problem with the method of timing the mental events concerned given that Libet himself has found the reports of subjects to be unreliable in this regard. Third, there is a suspect assumption that there are such things as timable and locatable mental and brain events accompanying and causing human behaviour.

 

Catherine Reason (2016), drawing on Caplain (1996, 2000) and Luna (2016), presents an intriguing logical proof that computing machines, and by extension physical systems, can never be certain that they possess conscious awareness, undermining the principle of computational equivalence (Wolfram 2002, 2021):

 

An omega function is any phi-type function which can be performed, to within a quantified level of accuracy, by some conscious system. A phi-type function is any mapping which associates the state of some system with the truth value of some proposition. The significance of this is that it can be shown that no purely physical system can perform any phi-type function to within any quantified level of accuracy, if that physical system is required to be capable of humanlike reasoning.

 

The proof is as follows: Let us define a physical process as some process whose existence is not dependent on some observation of that process. Now let X be the set of all physical processes necessary to perform any phi-type function. Since the existence of X is not dependent on any given observation of X, it is impossible to be sure empirically of the existence of X. If it is impossible to be sure of the existence of X, then it is impossible to be sure of the accuracy of X. If it is impossible to be sure of the accuracy of X, then it is impossible to be sure that X correctly performs the phi-type function it is supposed to perform. Since any system capable of humanlike reasoning can deduce this, it follows that no physical system capable of humanlike reasoning can perform any phi-type function without becoming inconsistent.
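Schematically, writing C(p) for 'it is possible to be empirically sure that p' (notation introduced here for clarity, not Reason's own), the chain of implications runs:

    physical(X)  ⇒  ¬C(X exists)  ⇒  ¬C(X is accurate)  ⇒  ¬C(X correctly performs φ)

Since a humanlike reasoner can itself derive this chain, a physical system X capable of humanlike reasoning cannot consistently be certain that it performs the phi-type function φ.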

 

Counterintuitively, this implies that human consciousness is associated with a violation of energy conservation. It also provides another objection to Libet:

 

"Even if the readiness potential can be regarded as a predictor of the subject's decision in a classical system, it cannot necessarily be regarded as such in a quantum system. The reason is that the neurological properties underlying the readiness potential may not actually have determinate values until the subject becomes consciously aware of their decision."

 

In subsequent papers (Reason 2019, Reason & Shah 2021) she expands this argument:

 

I identify a specific operation which is a necessary property of all healthy human conscious individuals — specifically the operation of self-certainty, or the capacity of healthy conscious humans to “know” with certainty that they are conscious. This operation is shown to be inconsistent with the properties possible in any meaningful definition of a physical system.

 

In an earlier paper, using a no-go theorem, it was shown that conscious states cannot be comprised of processes that are physical in nature (Reason, 2019). Combining this result with another unrelated work on causal emergence in physical systems (Hoel, Albantakis and Tononi, 2013), we show in this paper that conscious macrostates are not emergent from physical systems and they also do not supervene on physical microstates.

 

Pivotally, in a forthcoming formalisation of the argument, Reason (2023) cites Descartes' "cogito ergo sum" as a counterexample requiring human consciousness, so the success of her theorem also frees Cartesian duality from Ryle's deathly grip.

 

In a counterpoint to this, Travers et al. (2020) suggest the RP is associated with learning and thus reflects motor planning or temporal expectation, but that neither planning nor expectation informs us about the timing of a decision to act:

 

“Participants learned through trial and error when to make a simple action. As participants grew more certain about when to act, and became less variable and stochastic in the timing of their actions, the readiness potential prior to their actions became larger in amplitude. This is consistent with the proposal that the RP reflects motor planning or temporal expectation. … If the RP reflects freedom from external constraint, its amplitude should be greater early in learning, when participants do not yet know the best time to act. Conversely, if the RP reflects planning, it should be greater later on, when participants have learned, and know in advance, the time of action. We found that RP amplitudes grew with learning, suggesting that this neural activity reflects planning and anticipation for the forthcoming action, rather than freedom from external constraint.”

 

Fifel (2018), reviewing the state of current research, describes the following picture:

 

Results from Emmons et al. (2017) suggest that such ramping activity encodes self-monitored time intervals. This hypothesis is particularly pertinent given that self-monitoring of the passing of time by the experimental subjects is intrinsic to the Libet et al. (1983) experiment. Alternatively, although not mutually exclusive, RP might reflect general anticipation (i.e., the conscious experience that an event will soon occur) (Alexander et al., 2016) or simply background neuronal noise (Schurger et al., 2016). Future studies are needed to test these alternatives. … Consequently, we might conclude that: Neuroscience may in no way interfere with our first-person experience of the will, it can in the end only describe it ... it leaves everything as it is.

 

The difficulty of the hard problem, which remains unresolved 26 years later, is also tied to the likewise unresolved assumption of causal closure of the universe in the context of the brain, at the basis of purely materialistic neuroscience. Until causal closure is empirically confirmed, it remains simply a matter of opinion that has grown into a belief system, academically prejudiced against hypotheses not compliant with the physical materialistic weltanschauung.

 

While some neuroscientists (Johnson 2020) imply the hard problem is not even a scientific question, the neuroscience concept of causal closure (Chalmers 2015), based on classical causality or quantum correspondence to it, not only remains empirically unverified in the light of Libet, Schurger and others, but it is unclear that a convincing empirical demonstration is even possible, given that neuronal feedback processes span all scales from the organism down to the quantum-uncertain realm and the self-organised criticality of brain dynamics. Finally, it is in manifest conflict with the empirical experience of subjective conscious volitional intent universal to sane human beings.

 

As Bernard Baars commented in conversation:

 

I don't think science needs to, or CAN, prove causal closure, because what kind of evidence will prove that? We don't know if physics is "causally closed", and at various times distinguished physicists think they know the answer, but then it breaks down. The Bertrand Russell story broke down, and the Hilbert program in math, and ODEs, and the record is not hopeful on final solutions showing a metaphysically closed system.

 

The status of the neuroscience perspective on causal closure has led to an ongoing debate about the efficacy of human volition and the status of free will (Nahmias 2008, Mele 2014); however, Joshua Shepherd (2017) points out that the neuroscientific threat to free will has not been conclusively established, particularly in the light of Schurger et al. (2015).

 

For this reason, in treating the hard problem and volitional intent, I will place the onus of proof on materialism to demonstrate itself, and in defence of volition I have simply outlined notable features of central nervous processing consistent with an in-principle capacity to operate in a quantum-open state of seamless partial causal closure, involving subjectively conscious efficacy of volitional will physically in decision-making (in the brain) and behaviour (in the world). From this point of view, efficacy of volition is itself a validated empirical experience which is near universal to sane conscious humans, thus negating causal closure by veridical affirmation in the framework of Symbiotic Existential Cosmology, where empirical experience has equally valid cosmological status to empirical observation.

 

Libet's experiment purported to demonstrate an inconsistency, by implying that the brain had already made a decision before the conscious experience of it, but Trevena and Miller, and Schurger's team, have discredited this imputation.

 

Emergence, Weak, Edge-of-chaos and Strong

 

Key to the question of conscious volition is the profound difference between the notions of strong and weak emergence. Turkheimer et al. (2019) spell out the difference between these two:

 

Modern Emergence can be divided into two epistemological types: strong and weak. A system is said to exhibit strong emergence when its behaviour, or the consequence of its behaviour, exceeds the limits of its constituent parts. Thus the resulting behavioural properties of the system are caused by the interaction of the different layers of that system, but they cannot be derived simply by analysing the rules and individual parts that make up the system. Weak emergence, on the other hand, differs in the sense that whilst the emergent behaviour of the system is the product of interactions between its various layers, that behaviour is entirely encapsulated by the confines of the system itself, and as such can be fully explained simply through an analysis of interactions between its elemental units.
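Weak emergence is standardly illustrated by cellular automata such as Conway's Game of Life – an illustration of ours, not Turkheimer et al.'s. Gliders and oscillators are emergent system-level patterns, yet every one of them is fully derivable from the local update rule:

    import numpy as np

    def step(grid):
        # One Game of Life update on a wrap-around grid: each cell's fate depends
        # only on its eight neighbours, the elemental units of the system.
        n = sum(np.roll(np.roll(grid, i, 0), j, 1)
                for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
        return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

    glider = np.zeros((8, 8), dtype=int)
    glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
    g = glider
    for _ in range(4):     # after four steps the glider has moved one cell diagonally
        g = step(g)
    print(g)

The glider 'travels' as a coherent object, but nothing beyond the local rule is needed to explain it: weak emergence in Turkheimer et al.'s sense.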

 

They note that the kind of emergence that surfaced first in the neurosciences was greatly shaped by the earlier work of Roger Sperry (1980), who proposed a view of the brain characterised by a strong top-down organisational component. Sperry was adamant that his model did not imply any form of mind-brain dualism nor a parallel existence of neurobiological and mental processes, but that, after emergence, mental processes would take over and exert control down to the cellular level:

 

It is the idea, in brief, that conscious phenomena as emergent functional properties of brain processing exert an active control role as causal determinants in shaping the flow patterns of cerebral excitation. Once generated from neural events, the higher order mental patterns and programs have their own subjective qualities and progress, operate and interact by their own causal laws and principles which are different from and cannot be reduced to those of neurophysiology.

 

Emergence is not just an assumed property of human brains but applies more generally to living systems, including ideas like autopoiesis, and to the question of whether biological laws are entirely reductionist or form more general fundamental constraints on the behaviour of natural systems.

 

Physical materialism rejects any form of strong emergence that asserts known physical laws can somehow be overridden by mental processes. Weak emergence, for example, allows a reductionistic computational paradigm of brain dynamics to putatively replicate the functional agency of an autonomous system through feedback processes between the environment and the organism, so that purely physicalist descriptions, from artificial intelligence to ideas like Dennett's multiple drafts model of consciousness in the next section, fit within the pure physicalist regime.

 

Symbiotic Existential Cosmology invokes primal subjectivity as a foundational cosmological complement to the physical universe that is ultimately compliant with physical boundary conditions, and so poses no conflicts between subjective panpsychic qualia and empirical physics and neuroscience, but it is not a form of passive mentalism (Carroll 2021), as it is conceived as interacting with physical uncertainty. It also cites the eucaryote endosymbiosis as an emergent topological transition, in which the excitable membrane and neurotransmitter-based social signalling of single-celled species enabled the form of subjective conscious sentience and volition we see in all eucaryotes today.

 

This emergent transition sits right on the boundary between strong and weak emergence, as a form of quantum edge-of-chaos emergence that affirms subjective conscious volition having efficacy over the physical universe, in much the same way Sperry originally cited in 1980, but without claiming to violate established physical laws. This is because it focuses on the indeterminacy of the quantum universe and collapse of the wave function in biological systems as the key avenue through which subjective conscious volition can be physically efficacious yet consistent with the known laws of physics, citing interpretations such as transactional super-causality and super-determinism to provide processes below the classical level that explain both quantum indeterminacy and conscious intentional will in one step, without violating the Born probability interpretation.

 

Hopeful Monster 1: Virtual Machines v Cartesian Theatres

 

Reductionistic descriptions attempting to explain subjective experience objectively frequently display similar pitfalls to creationist descriptions of nature, such as those in Biblical Genesis, which project easy, familiar concepts, such as human manufacture, breath, or verbal command, onto the natural universe.

 

Paul Churchland (1985) makes a definitive play for a reductionistic paradigm based on promissory materialism: that the emerging neuroscience description will eclipse and supplant our subjective "folk psychology" views of conscious experience, in a utopian vision of neuroscience ascendant:

 

Consider now the possibility of learning to describe, conceive, and introspectively apprehend the teeming intricacies of our inner lives within the conceptual framework of a matured neuroscience, a neuroscience that successfully reduces, either smoothly or roughly, our common-sense folk psychology. Suppose we trained our native mechanisms to make a new and more detailed set of discriminations, a set that corresponded not to the primitive psychological taxonomy of ordinary language, but to some more penetrating taxonomy of states drawn from a completed neuroscience. And suppose we trained ourselves to respond to that reconfigured discriminative activity with judgments that were framed, as a matter of course, in the appropriate concepts from neuroscience.

 

If the examples of the symphony conductor (who can hear the Am7 chords), the oenologist (who can see and taste the glycol), and the astronomer (who can see the temperature of a blue giant star) provide a fair parallel, then the enhancement in our introspective vision could approximate a revelation. Dopamine levels in the limbic system, the spiking frequencies in specific neural pathways, resonances in the nth layer of the occipital cortex, inhibitory feedback to the lateral geniculate nucleus, and countless other neuro-physical niceties could be moved into the objective focus of our introspective discrimination, just as Gm7 chords and Adim chords are moved into the objective focus of a trained musician's auditory discrimination. We will of course have to learn the conceptual framework of a matured neuroscience in order to pull this off. And we will have to practice its non-inferential application. But that seems a small price to pay for the quantum leap in self-apprehension.

 

All of this suggests that there is no problem at all in conceiving the eventual reduction of mental states and properties to neurophysiological states and properties. A matured and successful neuroscience need only include, or prove able to define, a taxonomy of kinds with a set of embedding laws that faithfully mimics the taxonomy and causal generalizations of folk psychology. Whether future neuro-scientific theories will prove able to do this is a wholly empirical question, not to be settled a priori. The evidence for a positive answer is substantial and familiar, centering on the growing explanatory success of the several neurosciences.

 

But there is negative evidence as well: I have even urged some of it myself ("Eliminative Materialism and the Propositional Attitudes," op. cit.). My negative arguments there center on the explanatory and predictive poverty of folk psychology, and they question whether it has the categorial integrity to merit the reductive preservation of its familiar ontology. That line suggests substantial revision or outright elimination as the eventual fate of our mentalistic ontology. The qualia-based arguments of Nagel, Jackson, and Robinson, however, take a quite different line. They find no fault with folk psychology. Their concern is with the explanatory and descriptive poverty of any possible neuroscience, and their line suggests that emergence is the correct story for our mentalistic ontology. Let us now examine their arguments.

 

John Searle (1980) devised his famous "Chinese Room" as a counterexample to a machine having consciousness and intentionality. He supposed that artificial intelligence research had succeeded in constructing a program that behaves as if it understands Chinese, just as a large language model like chatGPT now does, and that it performs its task so convincingly that it comfortably passes the Turing test, convincing a human Chinese speaker that the program is itself a live Chinese speaker. Searle then imagines himself using the English version of the algorithm to replicate its performance manually in Chinese, without any "mind", "understanding", or "consciousness" of the actual language responses or their meaning, thus demonstrating that even though intentionality in human beings (and animals) may be an empirical product of causal relations between mental processes and brains, running a computer program is never by itself a sufficient condition of intentionality. The argument is directed against philosophical functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols, and that simulation of a given mental state is sufficient for its presence.

 

P & P Churchland (1981) in response present a bleak landscape of the mind, possessing no more intentionality than a machine:

 

Functionalism - construed broadly as the thesis that the essence of our psychological states resides in the abstract causal roles they play in a complex economy of internal states mediating environmental inputs and behavioral outputs - seems to us to be free from any fatal or essential shortcomings ... The correct strategy is to argue that our own mental states are just as innocent of "intrinsic intentionality" as are the states of any machine simulation. On our view, all ascriptions of meaning or propositional content are relative - in senses to be explained. The notion of "intrinsic intentionality" (Searle 1980) makes no more empirical sense than does the notion of position in absolute space.

 

In his reductionist account in “Consciousness Explained”, Daniel Dennett (1991) cites his “multiple drafts” model of brain processing as a case of evolutionary competition among competing neural assemblies, lacking overall coherence, thus bypassing the need for subjective consciousness. This exposes a serious problem of conceptual inadequacy with reductionism. Dennett is here writing his book using the same metaphors as the very activities he happens to be using – the message is thus the medium. He can do this as a subjectively conscious being only by suppressing the significance of virtually every form of coherent conscious experience around him, subjugating virtually all features of his conscious existence, operating for 100% of his conscious life, in favour of a sequence of verbal constructs having little more explanatory value than a tautology. This is what I call the psychosis of reductionistic materialism, which is shared by many AI researchers and cognitive scientists.

 

Despite describing the mind as a virtual machine, Dennett & Kinsbourne (1995) do concede a conscious mind exists at least as an observer:

 

Wherever there is a conscious mind, there is a point of view. A conscious mind is an observer, who takes in the information that is available at a particular (roughly) continuous sequence of times and places in the universe. ... It is now quite clear that there is no single point in the brain where all information funnels in, and this fact has some far from obvious consequences.

 

But neuroscience has long ceased talking about a single point or single brain locus responsible for consciousness, which is instead associated with coherent “in phase” activity of the brain as a whole. Nevertheless Dennett attempts to mount a lethal attack on any coherent manifestation of subjectivity, asserting there is no single, constitutive "stream of consciousness”:

 

“The alternative, Multiple Drafts model holds that whereas the brain events that discriminate various perceptual contents are distributed in both space and time in the brain, and whereas the temporal properties of these various events are determinate, none of these temporal properties determine subjective order, since there is no single, constitutive "stream of consciousness" but rather a parallel stream of conflicting and continuously revised contents” (Dennett & Kinsbourne 1995).

 

“There is no single, definitive "stream of consciousness," because there is no central Headquarters, no Cartesian Theatre where "it all comes together" for the perusal of a Central Meaner. Instead of such a single stream (however wide), there are multiple channels in which specialist circuits try, in parallel pandemoniums, to do their various things, creating Multiple Drafts as they go. Most of these fragmentary drafts of "narrative" play short-lived roles in the modulation of current activity but some get promoted to further functional roles, in swift succession, by the activity of a virtual machine in the brain. The seriality of this machine (its "von Neumannesque" character) is not a "hard-wired" design feature, but rather the upshot of a succession of coalitions of these specialists.” (Dennett 1991)
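
Dennett’s imagery here is effectively a processing architecture, so it can be made concrete. The following toy Python sketch is a hypothetical illustration rather than Dennett’s own formalism (all names, such as Draft, the specialist labels and the salience weights, are assumptions): parallel specialists issue fragmentary drafts, and seriality emerges only as a succession of transient winning coalitions:

import random

class Draft:
    def __init__(self, source, content, salience):
        self.source = source        # which specialist circuit produced it
        self.content = content      # its fragmentary "narrative"
        self.salience = salience    # competitive weight, continuously revised

def run_pandemonium(stimulus, steps=5):
    specialists = ["edge-detector", "motion", "word-recogniser", "face-area"]
    promoted = []
    for t in range(steps):
        # each specialist drafts an interpretation in parallel; no central locus
        drafts = [Draft(s, f"{s} reading of {stimulus} at t={t}", random.random())
                  for s in specialists]
        # salience is revised as recent winners form short-lived coalitions
        for d in drafts:
            if any(p.source == d.source for p in promoted[-2:]):
                d.salience *= 1.2
        # promotion to a further functional role, in swift succession
        promoted.append(max(drafts, key=lambda d: d.salience))
    return [d.content for d in promoted]

print(run_pandemonium("rustling grass"))

No draft is privileged in advance; the serial narrative is simply the trace of whichever coalition happened to win each round.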

 

However, as we shall discuss in the context of the default mode network and psychedelics, there is a balance between top-down processes of control and integration and just such a flood of competing regional bottom-up excitations, which become more able to enter consciousness because of the lowered barriers under the drug.

 

Yet the ghost Dennett claims to have crushed just keeps coming back to haunt him:

 

“Cartesian materialism is the view that there is a crucial finish line or boundary somewhere in the brain, marking a place where the order of arrival equals the order of "presentation" in experience because what happens there is what you are conscious of. ... Many theorists would insist that they have explicitly rejected such an obviously bad idea. But ... the persuasive imagery of the Cartesian Theater keeps coming back to haunt us—laypeople and scientists alike—even after its ghostly dualism has been denounced and exorcized.”

 

Fig 84: Baars’ (1997) view of the Cartesian theatre of consciousness has genuine explanatory power about the easy problem of the relation between peripheral unconscious processing and integrated coherent states associated with consciousness.

 

Bernard Baars’ (1997) global workspace theory, in the form of the actors in the Cartesian theatre of consciousness, is creatively provocative of the psyche, and concedes a central role for consciousness. His approach suggests that consciousness is associated with the whole brain in integrated coherent activity, and is thus a property of the brain as a whole functioning entity in relation to the global workspace, rather than arising from specific subsystems.

 

Furthermore, the approach rather neatly identifies the distinction between unconscious processing and conscious experience in the spotlight of attention, accepting conscious experience as a central arena, consistent with whether a given dynamic is confined to asynchronous regional activity or is part of a coherent global response. But again this description is an imaginative representation of Descartes’ homunculus in the guise of a Dionysian dramatic production, so it is also a projection onto subjective consciousness, albeit a more engaging one.

 

Lenore and Manuel Blum (2021) have developed a theoretical model of conscious awareness designed in relation to Baars' global workspace theory, which applies as much to a computer as to an organism:

 

Our view is that consciousness is a property of all properly organized computing systems, whether made of flesh and blood or metal and silicon. With this in mind, we give a simple abstract substrate-independent computational model of consciousness. We are not looking to model the brain nor to suggest neural correlates of consciousness, interesting as they are. We are looking to understand consciousness and its related phenomena.

 

Essentially the theory builds on the known feedbacks between peripheral unconscious processing, short term memory and the spotlight of conscious attention, paraphrasing these in purely computational terms, utilising a world model that is continually updated, with notions corresponding to "feelings" and even "dream creation", in which a sleep processor alters the modality of informational chunking.
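
The flavour of such a substrate-independent model can be conveyed in a short sketch. The following Python fragment is a loose, hypothetical simplification, not the Blums’ formal Conscious Turing Machine: unconscious processors submit weighted chunks, the winning chunk is broadcast as the spotlight of attention, and a world model is updated, with the weight loosely standing in for a "feeling":

from dataclasses import dataclass

@dataclass
class Chunk:
    origin: str      # submitting processor
    payload: str     # the information carried
    weight: float    # intensity/valence, loosely standing in for a "feeling"

def conscious_cycle(processors, world_model):
    # competition: every unconscious processor offers a chunk for the workspace
    bids = [p() for p in processors]
    winner = max(bids, key=lambda c: c.weight)
    # broadcast: the winning chunk becomes globally available to all systems
    world_model[winner.origin] = winner.payload
    return winner, world_model

vision = lambda: Chunk("vision", "movement in the grass", 0.9)
memory = lambda: Chunk("memory", "lions hunt here at dusk", 0.7)
audition = lambda: Chunk("audition", "wind noise", 0.3)

winner, model = conscious_cycle([vision, memory, audition], {})
print(winner.payload, model)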

 

While it is possible to conceive of such analogous models, it remains extremely unlikely that any such computational model can capture the true nature of subjective consciousness. By contrast with a Turing machine, which operates discretely and serially on a single mechanistic scale, biological neurosystems operate continuously and discretely on fractal scales from the quantum level, through molecular and subcellular dynamics, up to global brain states, so it remains implausible in the extreme that such computational systems, however complex in structural design, can replicate organismic subjective consciousness. The same considerations apply to artificial neural net designs, which lack the fractal edge-of-chaos dynamics of biological neurosystems.

 

Another discovery pertinent here (Fernandino et al. 2022) is that a careful neuroscientific study has found that lexical semantic information can be reliably decoded from a wide range of heteromodal cortical areas in the frontal, parietal, and temporal cortex, but that in most of these areas there is a striking advantage for experience-based representational structures (i.e., encoding information about sensory-motor, affective, and other features of phenomenal experience), with little evidence for independent taxonomic or distributional organisation. This shows that experience is the foundational basis for conceptual and cognitive thought, giving it a primary universal status over rational or verbal thought.

 

Consciousness and Broad Integrated Processing: The Global Neuronal Workspace (GNW) model

 

Stanislas Dehaene and Jean-Pierre Changeux (2008, 2011) have combined experimental studies and theoretical models, including Baars' global workspace theory, to address the challenge of establishing a causal link between subjective conscious experience and measurable neuronal activity, in the form of the Global Neuronal Workspace (GNW) model, according to which conscious access occurs when incoming information is made globally available to multiple brain systems through a network of neurons with long-range axons densely distributed in prefrontal, parieto-temporal, and cingulate cortices.

 

Converging neuroimaging and neurophysiological data, acquired during minimal experimental contrasts between conscious and nonconscious processing, point to objective neural measures of conscious access: late amplification of relevant sensory activity, long-distance cortico-cortical synchronization at beta and gamma frequencies, and ‘ignition’, i.e. the "lighting up" of a large-scale prefronto-parietal network. By contrast, as shown in fig 86, states of reduced consciousness show large areas of cortical metabolic deactivation.
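
The all-or-none character of ignition can be caricatured numerically. In the following Python sketch, whose parameter values are assumptions for illustration rather than anything fitted to Dehaene and Changeux’s simulations, recurrent long-range amplification lets a strong input cross threshold into self-sustaining global activity, while a weak input decays back to baseline:

import math

def gnw_ignition(stimulus, w_recurrent=6.0, threshold=0.5, steps=40):
    a = 0.0                                  # workspace activity level
    for t in range(steps):
        drive = stimulus if t < 5 else 0.0   # brief sensory pulse, then silence
        # sigmoidal recurrent amplification across the long-range network
        a = 1.0 / (1.0 + math.exp(-(w_recurrent * (a - threshold) + drive)))
    return a

print(gnw_ignition(0.2))   # weak pulse: activity collapses (non-conscious)
print(gnw_ignition(3.0))   # strong pulse: ignition persists (conscious access)

The bistability is the point: the same brief pulse either dies away, as in non-conscious processing, or ignites a persistent, globally broadcast state corresponding to conscious access.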

 

Fig 85: Both fMRI (1) and (2) EEG/MEG data show broad activation across diverse linked cortical regions when non-conscious processing rises to the conscious level. Likewise local feed-forward propagation (3) leads to reverberating cortical connections. These influences are combined in the GNW model (4), in which Baars’ global workspace theatre becomes a more precisely defined, globally resonant network theory attempting to solve several of the easier problems of consciousness.

 

In conclusion, the authors look ahead to the quest of understanding the conscious brain and what it entails:

 

The present review was deliberately limited to conscious access. Several authors argue, however, for additional, higher-order concepts of consciousness. For Damasio and Meyer (2009), core consciousness of incoming sensory information requires integrating it with a sense of self (the specific subjective point of view of the perceiving organism) to form a representation of how the organism is modified by the information; extended consciousness occurs when this representation is additionally related to the memorized past and anticipated future (see also Edelman, 1989). For Rosenthal (2004), a higher-order thought, coding for the very fact that the organism is currently representing a piece of information, is needed for that information to be conscious. Indeed, metacognition, or the ability to reflect upon thoughts and draw judgements upon them is often proposed as a crucial ingredient of consciousness. In humans, as opposed to other animals, consciousness may also involve the construction of a verbal narrative of the reasons for our behavior (Gazzaniga et al., 1977).

 

Fig 86: Top: Conscious brain states are commonly associated with phase-correlated global cortical activity. Conscious brain activity in healthy controls is contrasted with diminished cortical connectivity of excitation in unaware and minimally conscious states (Demertzi et al. 2019). Bottom: Reduced metabolism during loss of consciousness (Dehaene & Changeux 2011).

 

In the future, as argued by Haynes (2009), the mapping of conscious experiences onto neural states will ultimately require not only a neural distinction between seen and not-seen trials, but also a proof that the proposed conscious neural state actually encodes all the details of the participant’s current subjective experience. Criteria for a genuine one-to-one mapping should include verifying that the proposed neural state has the same perceptual stability (for instance over successive eye movements) and suffers from the same occasional illusions as the subject’s own report.

 

However, decoding the more intermingled neural patterns expected from PFC and other associative cortices is clearly a challenge for future research. Another important question concerns the genetic mechanisms that, in the course of biological evolution, have led to the development of the GNW architecture, particularly the relative expansion of PFC, higher associative cortices, and their underlying long-distance white matter tracts in the course of hominization. Finally, now that measures of conscious processing have been identified in human adults, it should become possible to ask how they transpose to lower animal species and to human infants and fetuses.

 

In "A better way to crack the brain”, Mainen, Häusser & Pouget (2016) cite novel emerging technologies such as optogenetics as tools likely to eclipse the overriding emphasis on electrical networking data, but at the same time illustrate the enormity of the challenge of neuroscience attempting to address consciousness as a whole.

 

Some sceptics point to the teething problems of existing brain initiatives as evidence that neuroscience lacks well-defined objectives, unlike high-energy physics, mathematics, astronomy or genetics.

In our view, brain science, especially systems neuroscience (which tries to link the activity of sets of neurons to behaviour) does not want for bold, concrete goals. Yet large-scale initiatives have tended to set objectives that are too vague and not realistic, even on a ten-year timescale.

 

Fig 8: Optogenetic images of pyramidal cells in a rodent cortex.

 

Several advances over the past decade have made it vastly more tractable to solve fundamental problems such as how we recognize objects or make decisions. Researchers can now monitor and manipulate patterns of activity in large neuronal ensembles, thanks to new technologies in molecular engineering, micro-electronics and computing. For example, a combination of advanced optical imaging and optogenetics can now read and write patterns of activity into populations of neurons. It is also possible to relate firing patterns to the biology of the neurons being recorded, including their genetics and connectivity.

 

 


 

However, none of these come even close to stitching together a functional view of brain processing that approaches solving the hard problem, or even establishing causal closure of the universe in the context of brain function, given the extreme difficulty of verifying classical causality in every brain process and the quantum nature of all brain processes at the molecular level. Future prospects for solving the hard problem via the easy ones thus remain unestablished.

  

Hopeful Monster 2: Consciousness and Surviving in the Wild v Attention Schema Theory

 

Real-world survival problems in the open environment don’t necessarily have a causally-closed or even a computationally tractable solution, due to exponential runaway, as in the travelling salesman problem, thus requiring sensitive dependence on the butterfly effect and intuitive choices. Which route should the antelope take to reach the water hole when it comes to the fork in the trail? The shady path where a tiger might lurk, or the savannah where there could be a lion in the long grass? All the agents are conscious sentient beings using innovation and stealth, so computations depending on reasoned memory are unreliable, because the adversaries can also adapt their strategies and tactics to frustrate the calculations. The subtlest sensory hints of crisis amid split-second timing are also pivotal. There is thus no tractable solution. Integrated anticipatory intuition, combined with a historical knowledge of the terrain, appears to be the critical survival advantage of sentient consciousness in the prisoners’ dilemma of survival, just as sexuality is in the Red Queen race (Ridley 1996) between hosts and parasites. This coherent anticipation possessed by subjective consciousness appears to be the evolutionary basis for the emergence and persistence of subjective consciousness as a quantum-derived form of anticipation of adventitious risks to survival, not cognitive processes of verbal discourse.
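
To give a sense of scale for the intractability claim, here is a small Python illustration; the billion-evaluations-per-second rate is an assumed figure. Brute-force enumeration of closed tours grows factorially with the number of waypoints, far outstripping the split-second timing that survival decisions demand:

import math

for n in [5, 10, 15, 20]:
    routes = math.factorial(n - 1) // 2    # distinct closed tours over n sites
    print(f"{n} waypoints: {routes:,} possible tours")

# At an assumed billion tour-evaluations per second, 20 waypoints already
# require about 6e16 evaluations, roughly two years of continuous computing.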

 

Michael Graziano’s attention schema theory, or AST (Graziano 2016, 2017; Webb & Graziano 2015), self-described as a mechanistic account of subjective awareness, which emerged in parallel with my own work (King 2014), gives an account of the evolutionary development of the animal brain, taking account of the adaptive processes essential for survival, to arrive at the kind of brains and conscious awareness we experience:

 

“We propose that the top-down control of attention is improved when the brain has access to a simplified model of attention itself. The brain therefore constructs a schematic model of the process of attention, the ‘attention schema’, in much the same way that it constructs a schematic model of the body, the ‘body schema’. The content of this internal model leads a brain to conclude that it has a subjective experience, a non-physical, subjective awareness, and assigns a high degree of certainty to that extraordinary claim”.
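
The proposal lends itself to a minimal sketch. The following Python fragment is a hypothetical structure, not Graziano’s implementation: the agent both has an attention process and maintains a simplified model of it, and because the schema omits the mechanistic detail, the resulting self-report describes an ethereal "awareness" rather than competing neurons:

def attend(signals):
    # the real, mechanistic process: competitive selection among signals
    focus = max(signals, key=signals.get)
    return focus

def attention_schema(focus):
    # the schema: a cartoon of attention with the mechanism stripped out
    return {"I": "am aware of", "object": focus}   # no neurons, no competition

signals = {"lion-shaped shadow": 0.8, "wind in grass": 0.4, "thirst": 0.6}
report = attention_schema(attend(signals))
print(report)   # the self-description claims awareness, not mechanics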

 

Fig 91: Which route should the antelope take to reach the water hole when it comes to the fork in the trail? The shady path where a tiger might lurk, or the savannah where there could be a lion in the long grass? Real-world survival problems require intuitive multi-option decisions, creativity and often split-second timing, requiring anticipatory consciousness. Thus modelling the existence or otherwise of subjective consciousness based only on causal concepts and verbal reasoning processes gives a false evolutionary and cosmological view. Here is where the difference between a conscious organism and an AI robot attempting to functionally emulate it is laid bare in tooth and claw.

 

However, this presents the idea that subjective consciousness and volitional will are a self-fulfilling evolutionary delusion, so that the author believes AST, as a purely mechanistic principle, could in principle be extended to a machine without the presence of subjective consciousness: “Such a machine would ‘believe’ it is conscious and act like it is conscious, in the same sense that the human machine believes and acts”.

 

However it remains unclear that a digital computer or AI process can achieve this with given architectures. Ricci et al. (2021) note in concluding remarks on one of the most fundamental and elementary tasks, abstract same-different discrimination: “The aforementioned attention and memory network models are stepping stones towards the flexible relational reasoning that so epitomizes biological intelligence. However, current work falls short of the — in our view, correct — standards for biological intelligence set by experimentalists like Delius (1994) or theorists like Fodor (1988).”

 

Yet AST is a type of filter theory similar to Huxley’s ideas about consciousness, so it invokes a principle of neural organisation that is consistent with and complementary to subjective consciousness: “Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence.

 

The overall idea of a purely physical internal model of reality representing its own attention process, thus enabling it to observe itself, is an astute necessary condition for the sort of subjective consciousness we find across the metazoa, but it is in no way sufficient to solve the hard problem, or to address any more than the one easy problem it addresses, recursive attention. However its description of fundamental changes in overall brain architecture, summarised in Graziano (2016), highlights that the actual evolutionary forces shaping the development of the conscious mind lie in the paranoia of survival in the jungle, as noted in fig 91, rather than the verbal contortions of philosophical discourse:

 

“If the wind rustles the grass and you misinterpret it as a lion, no harm done.
But if you fail to detect an actual lion, you’re taken out of the gene pool” (Michael Graziano 2016).

 

However Graziano (2020), in claiming why AST “has to be right”, commits to de-subjectifying consciousness in favour of an AI analysis of recursive attention systems. In relation to the reality of consciousness, which he concedes in his own words: “I have a subjective, conscious experience. It’s real; it’s the feeling that goes along with my brain’s processing of at least some things. I say I have it and I think I have it because, simply, I do have it. Let us accept its existence and stop quibbling about illusions”, he attempts a structural finesse based on recursive attention:

 

“Suppose the brain has a real consciousness. Logically, the reason why we intuit and think and say we have consciousness is not because we actually have it, but must be because of something else; it is because the brain contains information that describes us having it. Moreover, given the limitations on the brain’s ability to model anything in perfect detail, one must accept that the consciousness we intuit and think and say we have is going to be different from the consciousness that we actually have. … I will make the strong claim here that this statement, ‘the consciousness we think we have is different from, simpler than, and more schematic than, the consciousness we actually have’, is necessarily correct. Any rational, scientific approach must accept that conclusion. The bane of consciousness theorizing is the naïve, mistaken conflation of what we actually have with what we think we have. The attention schema theory systematically unpacks the difference between what we actually have and what we think we have. In AST, we really do have a base reality to consciousness: we have attention, the ability to focus on external stimuli and on internal constructs, and by focusing, process information in depth and enable a coordinated reaction. We have an ability to grasp something with the power of our biological processor. Attention is physically real. It’s a real process in the brain, made out of the interactions of billions of neurons. The brain not only uses attention, but also constructs information about attention, a model of attention. The central hypothesis of AST is that, by the time that information about attention reaches the output end of the pathway …, we’re claiming to have a semi-magical essence inside of us: conscious awareness. The brain describes attention as a semi-magical essence because the mechanistic details of attention have been stripped out of the description.”

 

These are simply opinions about a hidden underlying information structure, confusing conscious experience itself with the recursive attention structures that any realistic description has to entail to bring physical brain processing into any kind of concordance with environmental reality. His inability to distinguish organismic consciousness from AI is evidenced in Graziano (2017), where he sets out AST as a basis for biologically realisable artificial intelligence systems.

 

The actual answer to this apparent paradox, which leaves our confidence in our conscious volition in tatters, is that the two processes, neural net attention schemas and subjective consciousness, have both been selected by evolution to ensure survival of the organism from existential threats, and they have done so as complementary processes. Organismic brains evolved from the excitable sentience of single-celled eucaryotes and their social signalling molecules that became our neurotransmitters, a billion years after these same single-celled eucaryotes had to solve just these problems of growth and survival in the open environment. Brains are thus built as an intimately coupled society of eucaryote excitable cells communicating by both electrochemical and biochemical means via neurotransmitters, in such a way that the network process is an evolutionary elaboration of the underlying cellular process, both of which have been conserved by natural selection because both contribute to organismic survival by anticipating existential threats.

 

This is the only possible conclusion, because the presence of attention schemata does not require the manifestation of subjective consciousness to the conscious participant unless that too plays an integral role in survival of the organism. Indeed an artificial neural net with recursive attention schemes would do just that, with no consciousness implied, since consciousness would be superfluous to its energy demands unless it conferred selective advantage.

 

An adjunct notion is the ALARM theory (Newen & Montemayor 2023), according to which we need to distinguish two levels of consciousness, namely basic arousal and general alertness. Basic arousal functions as a specific alarm system, keeping a biological organism alive under sudden intense threats, while general alertness enables flexible learning and behavioural strategies. This two-level theory of consciousness helps to account for recent discoveries of subcortical brain activities with a central role of thalamic processes, and for observations of differences in the behavioural repertoire of non-human animals indicating two types of conscious experiences. The researchers claim this enables them to unify the neural evidence for the relevance of sub-cortical processes on the one hand, and of cortico-cortical loops on the other, and to clarify the evolutionary and actual functional role of conscious experiences.

 

They derive evidence primarily from two animal studies. In Afrasiabi et al. (2021), macaques were anaesthetised and the researchers stimulated the central lateral thalamus. The stimulation acted as a switch to trigger consciousness. However, it only prompted basic arousal, because the macaques could feel pain, see things, and react to them, but were unable, unlike in regular wakefulness, to participate in learning tasks. A second experiment, Nakajima et al. (2019), provides evidence that mice possess general alertness in their daily lives. The animals were trained to respond to a sound differently than to a light signal. They were also capable of interpreting a third signal that indicated whether they should focus on the sound or the light signal. Given that the mice learned this quickly, it is clear that they acquired the learning with focused conscious attention and therefore possess general alertness.
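
The two-level distinction can be summarised in a toy sketch; the structure is assumed for illustration, as Newen and Montemayor give no computational model. Basic arousal acts as a fast alarm to sudden intense threats, while general alertness gates slower, learned routing of attention, as in the mouse experiment:

def alarm_levels(threat_intensity, cue=None, associations=None):
    state = []
    if threat_intensity > 0.8:
        # level 1: fast thalamic alarm, keeping the organism alive right now
        state.append("basic arousal: immediate defensive reaction")
    if associations is not None and cue in associations:
        # level 2: flexible, learned routing of attention (general alertness)
        state.append(f"general alertness: attend to {associations[cue]}")
    return state or ["unconscious processing only"]

# the mouse-study analogue: a third cue tells the animal which signal to heed
print(alarm_levels(0.9))                                    # alarm only
print(alarm_levels(0.2, "cue-A", {"cue-A": "the sound"}))   # learned attention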

 

In "Homo Prospectus" (Seligman et al. 2016), which asserts that the unrivalled human ability to be guided by imagining alternatives stretching into the future – “prospection” – uniquely describes Homo sapiens, addresses the question of how ordinary conscious experience might relate to the prospective processes that by contrast psychology’s 120-year obsession with memory (the past) and perception (the present) and its absence of serious work on such constructs as expectation, anticipation, and will. Peter Railton cites:

 

Intuition: The moment-to-moment guidance of thought and action is typically intuitive rather than deliberative. Intuitions often come unbidden, and we can seldom explain just where they came from or what their basis might be. They seem to come prior to judgment, and although they often inform judgment, they can also stubbornly refuse to line up with our considered opinions.

Affect: According to the prospection hypothesis, our emotional or affective system is constantly active because we are constantly in the business of evaluating alternatives and selecting among them.

Information: A system of prospective guidance is information-intensive, calling